
External Review of the Office of Independent Evaluation

Caribbean Development Bank

April, 2016

Principal Reviewer: Marlène Läubli Loud

Review Panel Members: John Mayne, Bastiaan de Laat


Primary Audiences

The main target audiences of this Review are:

Members of the Oversight Assurance Committee
Members of the Board of Directors
Independent Evaluation Office
Caribbean Development Bank Management and Staff

It is assumed that these audiences are familiar with the OIE and the evaluation function; background information and descriptions of evaluation processes and procedures are therefore kept to a minimum.

Review Panel Members

Marlène Läubli Loud (DPhil) is currently an independent consultant and trainer in public sector evaluation. She has worked with a range of organisations, small and large, including the European Commission, the World Health Organisation, the United Nations Evaluation Group, the UK Employment Department, the UK Health Promotion Agency (since merged into NICE) and the English Nursing Board. She was head of the Research and Evaluation Unit at the Swiss Federal Office of Public Health for nearly twenty years, where she gained much experience in evaluation management, especially in the ways and means of improving the use and utility of evaluation in organisations, an area in which she retains a keen theoretical and practical interest. Prior to this, she was an independent evaluator in the UK, specialising in the evaluation of developmental programmes in health and general education. She was also a research fellow at the Department of Education, University of Surrey, and in the Social Science Faculty, University of Oxford, UK.

John Mayne (PhD) is an independent advisor on public sector performance. He has been working with a number of organizations and jurisdictions, including several agencies of the UN, the Challenge Program on Water and Food, the European Union, the Scottish Government, the United Nations Secretariat, the International Development Research Centre, the Asian Development Bank and several Canadian federal departments, on results management, evaluation and accountability issues. Until 2004, he was with the Office of the Auditor General of Canada, where he led efforts to develop practices for effective managing for results and performance reporting in the Government of Canada, as well as leading the Office's audit work on accountability and governance. Prior to 1995, John was with the Canadian Treasury Board Secretariat and the Office of the Comptroller General. He has authored numerous articles and reports and edited five books in the areas of program evaluation, public administration and performance monitoring. In 1989 and in 1995, he was awarded the Canadian Evaluation Society Award for Contribution to Evaluation in Canada. In 2006, he became a Canadian Evaluation Society Fellow.


Bastiaan de Laat (PhD) is Evaluation Expert and Team Leader at the European Investment Bank (EIB), where over the past two years he has been in charge of major evaluations in important areas such as Climate Action, SME support and Technical Assistance. He has longstanding experience in evaluation as well as in foresight. Founder-director of the French subsidiary of the Technopolis Group (1998-2006), he led many evaluations for, and provided policy advice to, a great variety of local, national and international public bodies. He has trained several hundred European Commission staff and national government officials in evaluation and designed monitoring and evaluation systems for various public organisations. Before joining the EIB, he worked as Evaluator at the Council of Europe Development Bank. He has developed tools and performed programme, policy and regulatory evaluations, both ex ante and ex post, in a variety of fields. He has also made several academic contributions, most recently with articles on evaluation use and on the "Tricky Triangle" of relationships between evaluator, evaluation commissioner and evaluand. In his private capacity, Bastiaan served as Secretary General of the European Evaluation Society and was recently elected Vice-President.

Acknowledgements

The Review exercise would not have been possible without the support and commitment of the OAC, the OIE and the CDB. The exploratory discussions with OIE staff, members of the Board of Directors, and CDB management and staff provided great insight and were a valuable contribution to the Review.

We are indebted to the Head of OIE, Michael Schroll, and his team for their cooperation, insight and readiness to meet our requests for information. We are also grateful to them for their useful comments on the first draft of our report and for their suggested improvements.

We are especially appreciative of OIE's administrative assistant, Denise Padmore, for her help in coordinating the interviews during our 10-day field study in Barbados and in providing us with all the documents we requested.


List of Abbreviations

AMT Advisory Management Team
AP Approach Paper
APEC Audit and Post Evaluation Committee
AR Appraisal Report
BOD Board of Directors
BMCs Borrowing Member Countries
BNTF Basic Needs Trust Fund
CDB Caribbean Development Bank
CSP Country Strategy Paper
DAC Development Assistance Committee
DFI Development Financial Institution
ED Economics Department
EOV Evaluation and Oversight Division
FI Financial Institution
IRL Immediate Response Loan
MDB Multilateral Development Bank
mn million
M&E Monitoring and Evaluation
MfDR Managing for Development Results
OAC Oversight and Assurance Committee
OIE Office of Independent Evaluation
PAS Performance Assessment System
PBG Policy-Based Grant
PBL Policy-Based Loan
PCR Project Completion Report
PCVR Project Completion Validation Report
PPES Project Performance Evaluation System
PPMS Portfolio Performance Management System
SDF Special Development Fund
TA Technical Assistance
WB World Bank


Preface

Evaluation work at the Caribbean Development Bank (CDB) has been ongoing since the early 1990s, although initially it focused mainly on the ex-post evaluation of projects. In 2011, the CDB reviewed its evaluation system to bring it up to date with the good practices of international development organisations. In December of that year, it produced its comprehensive Evaluation Policy, setting out the aim, objectives and guiding principles for the CDB's evaluation system.

The Policy provides for the establishment of the Office of Independent Evaluation (OIE). Its main objective is to provide “CDB’s Board of Directors, President, Advisory Management Team, CDB staff and other stakeholders and partners with timely, credible and evidence-based information on the relevance and performance of CDB’s projects, programs, policies and other development activities.” (Evaluation Policy, 2011, p. 1).

To oversee and assess good practice, the Evaluation Cooperation Group (ECG) for Multilateral Development Banks (MDBs) recommends that the MDBs' evaluation systems and independent evaluation units be the subject of regular review. The aim is to help the institutions adopt recognised evaluation standards and practices so that their policies may benefit from evidence-based assessments.

In mid-2014, a new Head of the OIE was appointed and, following an initial learning period, he called for a peer review of the evaluation system. Even though the OIE had only been in existence since 2012, it was considered timely to take stock of what had been done so far in order to tease out the priorities for the next 3-4 years.

It was originally anticipated that such an assessment could be done by the ECG as part of the OIE's application for ECG membership. This did not prove possible, since the CDB's operation is considered too small for such membership. A review was therefore commissioned from independent evaluation experts who are knowledgeable about, and experienced in, the management of internal evaluation units.

Main Aim of the Review

The Review's main aim is to provide the CDB's Board of Directors with an independent assessment of the OIE and the CDB's evaluation system. The intention is to highlight the factors that help or hinder the OIE's independence and performance in order to identify where improvements could be made. This report will be presented, together with a Management Response, to the CDB's Oversight Assurance Committee and its Board of Directors at its meeting in May 2016. It is anticipated that an action plan will be drawn up on the basis of the Board's decision on how to address the recommendations put forward.

Report Structure

The Review starts with some general background information about the CDB and the setting up of an independent evaluation function. It also sets out the reasons for an external review and why this was requested at this particular point in time. Part One also outlines the Review methodology, which is presented in more detail in Appendix II. Part Two reports the Review's findings and conclusions against a number of criteria. The Panel's conclusions and recommendations for future work are the subject of Part Three.

The Panel is grateful for the complete freedom it was given to form its own opinions and to reach conclusions based on its analysis. The findings, conclusions and recommendations presented in this paper are those of the Peer Review Panel members. The views of the CDB are provided separately in the Management Response that accompanies this Report.


Table of Contents


Executive Summary


Part One: Introduction and Background

In order to understand the development of the Office of Independent Evaluation's (OIE) work, a brief description of the CDB's current reforms is needed. First, there has been a change over the last decade in the nature of the programmes the Bank supports; for example, it has become increasingly engaged in funding policy-based operations and social development issues. Similarly, there have been changes across the development field, which is grappling with complex issues such as gender and climate change. To meet today's challenges and ensure that its work practices reflect the international standards of Multilateral Development Banks (MDBs), the CDB has introduced a number of measures aimed at improving its effectiveness and efficiency. For example, in line with international standards for Managing for Development Results (MfDR), it has introduced a Results Based Management Framework for organising and assessing its performance.

In 2011, the CDB commissioned an external consultancy to undertake an assessment of its evaluation function in order to develop a policy that took account of good practices within the international development community.1 The CDB’s Evaluation Policy (referred to hereafter as the Policy) is a direct response to that review; it reflects the standards and good practices of the Evaluation Cooperation Group (ECG) of the Multilateral Development Banks (MDB) as well as the evaluation principles and standards of many professional associations.

Similarly, the Bank showed its commitment to evaluation as a core function by establishing an independent evaluation unit that reports to the Board of Directors. It is responsible for assessing the Bank's activities and interventions, and especially for drawing out the key lessons and recommendations for improving the Bank's performance. As such, the monitoring tasks formerly under the responsibility of the Evaluation and Oversight Division (EOV) were handed over to the Bank's Operations and Economic Divisions. The OIE then validates the credibility and rigour of the self-evaluations.

In addition to the OIE, the Bank also set up other independent functions: internal audit, risk assessment and management, integrity, compliance and accountability. The mainstreaming of three cross-cutting themes (gender, energy and climate issues) into the CDB's work has also been initiated. At the same time, funds are limited, as the CDB is working within a Board-sanctioned budget policy of zero real growth, in line with the budget policy of other MDBs.

In short, the Bank has taken many important steps towards bringing its management practices into line with those of other MDBs. However, introducing many innovations in parallel requires coordination and a shift in working practices and thinking. There is also a need to engage in different types of evaluation: evaluations that take into account cross-cutting themes and different levels of complexity. As such, whilst this Review is particularly focused on the CDB's Office of Independent Evaluation (OIE), its work and utility depend to a large degree on the development of other management practices and the degree to which evaluation is able to link into their work.

The Review in Brief

A full description of the Review's mandate, approach, process and methods is provided in Appendices I and II. The Review was designed to address the following four key questions, as set out in the appended Terms of Reference and Approach Paper:

1 Osvaldo Feinstein & Patrick G. Grasso, Consultants, May 2011, Consultancy to Review the Independence of the Evaluation and Oversight Division of the Caribbean Development Bank.


1. To what degree is the Office of Independent Evaluation independent at the strategic, functional and operational levels? Which measures help or hinder such independence?

2. To what extent is the OIE achieving its two strategic objectives, namely (1) the timely delivery of good-quality evaluations and PCR Reviews and (2) strengthening capacity building, networking and communication? How useful are the OIE's procedures and products towards this end?

3. How adequate are the financial and human resources of the OIE for carrying out its tasks and achieving its objectives?

4. How effective is the OIE in relating with its internal partners to develop evaluation capacity?

Our assessment of the OIE is largely based on the recommended criteria of the Evaluation Cooperation Group for Multilateral Development Banks: governance and independence, credibility, use and transparency.

The data used for analysing and interpreting the findings relied on exploratory, semi-structured interviews with OIE staff as well as with CDB senior and middle managers and members of its Board of Directors. Whilst much of the interview data was collected during an intensive 10-day on-site visit to the Bank, the majority of the Board members were interviewed via Skype. The interview data was complemented by a review of a range of key documents, including the Bank's Evaluation Policy, various kinds of reports on or about evaluation, the complete set of minutes of meetings between the OIE and the Oversight Assurance Committee2 and the subsequent Chairman's reports to the Board for the study period 2012 to 2015, OIE staff biographies, and a number of other organisations' evaluation principles, good practices and standards. A full list can be found in the Appendices (Appendix V). Not least, the Reviewers have also drawn on their own knowledge and experience of evaluation management to complement data analysis and interpretation.

Scope and Limitations

The Review was asked to concentrate on the 4-year period since the establishment of the OIE, January 2012 to December 2015, but more particularly on the changes introduced since the new Head of the OIE was appointed (June 2014 to December 2015).

It has mainly focussed on the strategic role of the OIE within the CDB as well as its functional and operational roles and responsibilities.

It was planned as a Review and not a fully-fledged evaluation, owing to the limited time and resources available for the exercise and because a "light" review is in keeping with the spirit of the OIE's Terms of Reference. The Review could not undertake any in-depth analysis of documents or consult with country-level stakeholders or other external sources of expertise. Moreover, of the 29 people identified for interview, and despite several reminders (by email or telephone), the Panel was unable to contact or secure the agreement of 5 of the 14 Board members and 1 CDB senior manager. In light of this experience, as well as the time invested in securing the interviews "at a distance", the planned on-line survey to follow up on face-to-face interview data was abandoned.

We regret that in the time available, full justice could not be done to all the material provided to the Panel by the OIE. Nevertheless, the documentary review and interviews focussed on addressing the key questions, and we are therefore confident that the main issues raised in the Terms of Reference have been addressed in this report.

2 The Audit and Post Evaluation Committee, now the Oversight Assurance Committee, is a Board Committee responsible for the oversight of evaluation and other key management functions.


Part Two: What the Review Found

In the first place, the Panel would like to commend the CDB for its efforts in supporting the establishment and independent functioning of the OIE. Similarly, in spite of some of the challenges raised in this Review, the current Head of OIE and his team are to be commended for their efforts in advancing the evaluation function in the direction set by the UNEG Norms and Standards and the ECG guidelines on Good Practices.

In this part of the report, we shall present our findings and conclusions in relation to each of the criteria below used to assess and respond to the four ToR questions:

the Evaluation Policy
governance
independence
the OIE strategy, practices and work programme
usefulness of evaluation and evaluation use
communicating evaluation results
adequacy of resources
the working relationship between self-evaluation and independent evaluation.

The Evaluation Policy

The CDB Board agreed an Evaluation Policy (the Policy) in December 2011. It sets out the guiding principles and provisions for the OIE. It also aims to guarantee the independent functioning of the Office of Independent Evaluation (OIE) by having it report to the Board of Directors through the Oversight Assurance Committee (OAC). However, the President retains oversight of administrative matters such as travel and the procurement of consultants.

Generally speaking, the Policy reflects many of the ECG's recommendations on evaluation independence and good practices. Similarly, the evaluation criteria for judging outcomes are the five developed by the DAC, that is, relevance, effectiveness, efficiency, impact and sustainability. In general, the Policy is intended to maximise the strategic value, timeliness and learning aspects of evaluation.

Yet in reality, the Policy provides a framework for what could be achieved under optimal conditions. It is overambitious in terms of what could be done with the current level of resources. The Policy is considered useful as a reference mainly for the OIE, senior CDB staff and the OAC.

Many important tasks outlined in the evaluation policy, however, have not been done so far by either the OAC or the OIE. For instance, the OAC has yet to produce an annual report on OIE’s performance and the OIE has yet to establish a database of evaluation lessons, recommendations, actions and management responses.

To conclude: The Evaluation Policy is a mission statement of what could be achieved in time with sufficient financial and human resourcing. It reflects the internationally recognized evaluation principles and standards, but is probably somewhat ambitious for the OIE to fully put into practice for a number of years.

Governance Issues

Oversight of the OIE is entrusted to a Committee of the Board of Directors (originally called the Audit and Post-Evaluation Committee, APEC, and since October 2015 the Oversight Assurance Committee, OAC, to reflect its broadened mandate). The OIE reports to the Board through the OAC. There are 5 members, of whom only 2 are located in Barbados.


The OAC meets 5 times per year, the day before Board meetings. It has oversight responsibility for external and internal audit, independent evaluation, risk management and integrity, compliance and accountability in relation to CDB’s work.

The OAC Chairperson prepares a very brief résumé of the previous day's meeting to present to the Board for its approval. The report generally covers progress, shortcomings and risks but takes up only a small part of the Board meeting, so there is generally little discussion; evaluation is only one of many items on the agenda. (We were told that the report to the Board averages approximately 10 minutes.) Some of our interviewees could not recall any discussion about evaluation during Board meetings or remember reference being made to any evaluation report.

Yet keen interest was expressed in having rigorous evaluation, especially in having it draw out important lessons about CDB interventions. Members interviewed perceive the OIE as a credible entity and are satisfied with its methodological approach. The Panel was therefore surprised to find that, despite the OAC's awareness of the data problems in the BMCs (e.g. the lack of rigorous monitoring and statistical data and the consequent effect on the rigour of OIE's evaluations) as well as the delays in the submission of self-evaluations and their validations, the OAC's reaction has been negligible. The Panel could find no evidence to show that the OAC has exerted any pressure on the CDB or on the BMCs, through their representatives on the Board, to redress these problems.

A major problem for the OAC is the volume of paperwork and the length of the individual documents it receives in parallel from the CDB and its independent offices, generally very shortly ahead of its meetings. Both Board and OAC members expressed deep concern about the need for more timely delivery of reports and background papers for their meetings. The OAC members fear they are unable to do justice to their oversight responsibilities. Based on the Panel's review of the minutes and comments from the OIE, the meetings appear rather formalistic, with the OIE presenting highlights from evaluation reports and management's response, but there is little evidence of much discussion or systematic follow-up on the recommendations, agreed actions or lessons drawn. The "follow-up on actions agreed" does not appear to be a systematic item on each OAC meeting's agenda.3 Similarly, no attempt to identify key messages for stakeholders other than the CDB is mentioned in the minutes or reports to the Board.

In response, the OIE has greatly improved the presentation of its technical reports by summarising the main points in its "Brief Reports" (e.g. for the Tax Administration and Tax Reform and the Technical and Vocational Education and Training evaluations). This is commendable and certainly a step in the right direction, although the Panel considers that the briefs should have a sharper focus on the strategic issues (which appear at the end of the brief rather than at the beginning), be condensed, and be made more "reader friendly".

To conclude: The OAC firmly supports having an independent evaluation function that produces rigorous evaluations. It attaches much importance to evaluation's ability to highlight key lessons. However, the OAC is not performing its oversight function with sufficient firmness to bring about any change regarding the challenges evaluation raises or has to deal with. This is not helped by the lack of any systematic report on the "follow-up of actions agreed", which could be particularly useful for tracking changes made as a consequence of an evaluation and management's response.

The OAC could do better justice to its oversight responsibility if it were to receive all background documents systematically at least two weeks before its meetings. Moreover, the volume and length of documents received at any one time are considered overwhelming. The number and/or importance of agenda items competing for attention at any one session is an additional handicap.

3 At the APEC meeting in May 2012, it was agreed that the OIE would prepare a Management Action Record every two years to highlight the follow-up actions taken in response to the recommendations of all evaluation reports, with the first report to be presented to APEC at the March 2013 Board Meeting. There is no record of this ever having been done, or of the APEC/OAC following up on the request.


Independence of the Office of Independent Evaluation (OIE)

Independence is central to the integrity and trustworthiness of evaluation. It is an agreed requirement within the development agencies and in the evaluation community as a whole. In examining the issue of independence and good practice, the Reviewers are guided by the Evaluation Cooperation Group's recommendations on good practices, the CDB's Evaluation Policy, and the 2011 consultancy review of the independence of the CDB's evaluation and oversight division.4 The appraisal is based on a comparison of the ECG's recommendations on independence5 with the current OIE status.

OIE and Independence: Recommendations from the Evaluation Cooperation Group (ECG)

The ECG considers the issue of independence according to four specific areas: organisational (or structural) independence; behavioural (or functional) independence; protection from outside interference (or operational independence); and avoidance of conflicts of interest.

Organisational independence ensures that the evaluation unit and its staff are protected against any influence or control by senior or line management, and that they have unrestricted access to all documents and information sources needed for conducting their evaluations. It also requires that the scope of evaluations selected can cover all relevant aspects of the institution's work.

Behavioural independence generally refers to the evaluation unit's autonomy in setting its work programme, in selecting and conducting its studies, and in producing quality reports that can be delivered without management interference.

Protection from outside interference refers to the extent to which the evaluation function is autonomous in setting its priorities, conducting its studies and processes, and reaching its judgments, and in managing its human and budget resources, all without management interference.

Avoidance of conflicts of interest refers to safeguards against staff conflicts of interest, whether arising from current, immediate, future or prior professional and personal relationships or from financial interests; provision for such safeguards should be made in the institution's human resource policies.

The OIE’s Independence in Practice

A comparison of the ECG recommendations with OIE practice is presented in Appendix IV (Table 1, organisational independence; Table 2, behavioural independence; and Table 3, protection from external influence or interference). This section summarises the Panel's assessment of each of these aspects.6

Organisational / structural independence

The CDB has succeeded in establishing an independent, stand-alone office that has direct dialogue with the OAC, the Board and senior management. However, there appears to be a detachment between the OIE and the CDB that is of concern to the Panel: on the one hand, between the OIE and operations staff; on the other, in the structural arrangements between the OIE and senior management.

4 Osvaldo Feinstein & Patrick G. Grasso, Consultants, May 2011, Consultancy to Review the Independence of the Evaluation and Oversight Division of the Caribbean Development Bank.

5 ECG (2014) Evaluation Good Practice Standards, Template for Assessing the Independence of Evaluation Organizations, Annexe II.1.

6 Based on ECG (2014) Template for Assessing the Independence of Evaluation Organizations, Evaluation Good Practice Standards, Annexe II.1.

(1) In agreeing that the OIE should concentrate on strategic, thematic, in-depth evaluations, responsibility for project monitoring and evaluation was handed over to the operational departments. The division is clear and respected. However, it has its drawbacks. With the OIE no longer systematically involved at the front end of project design, the monitoring data needs are likely to be poorly defined, and weak monitoring data will contribute to weaker evaluations.

In the Reviewers' opinion, it is a common misunderstanding to assume that providing evaluator advice on monitoring and evaluation data will compromise evaluator independence. On the contrary, evaluation input into project design is essential to ensure that the logic, indicators and data needs are addressed so that, at some future point in time, an evaluation of the achievements can be empirically grounded.

This is not to say that the OIE no longer has any influence at the front-end design stage; it has merely shifted the point of focus. The OIE is now systematically providing such input more generally to the corporate planning teams for the tools and systems they are developing to support the MfDR framework. The monitoring data for projects and their implementation should be improved once the Project Performance Evaluation System (PPES) and the Portfolio Performance Management System (PPMS) are updated and operational.

(2) In the second place, the OIE has limited formal access to the Advisory Management Team (AMT) weekly meetings, where the President and senior management gather to exchange up-to-date information on the dynamics of CDB policy and practice. The OIE is not regularly invited in any capacity to these meetings or given a copy of the agenda or minutes; it is only occasionally invited to attend in order to discuss an evaluation report or management feedback. For the OIE, this means that it is unlikely to pick up on the 'when' and 'what' of key decisional issues or to provide input into the discussion based on evaluative information. Its observer status at Loans Committee meetings, and its role as a participant informer at OAC and BoD meetings and discussions, do not necessarily provide the same insight into the dynamics of management actions and/or decisions.

To respond to this situation, the President has agreed to meet regularly with the Head of the OIE in order to keep him up to date with CDB strategic thinking. This is a welcome change.

OIE Independence and Behavioural Issues

The Panel has some concerns about behavioural issues. For example, there is a lack of quality data available, and considerable delays are incurred in processing both the independent evaluation reports and OIE's validations of the CDB's self-evaluations. Delays generally arise in receiving feedback on the independent reports, first from the relevant operations department and then from the AMT, and then in providing the OIE with a management response, which is initially drafted by operations staff before being reviewed by the AMT. (OIE reports cannot be submitted to the OAC without the relevant management response.) This two-layer process for preparing submissions to the Board is inefficient and could potentially threaten evaluation's independence in the future by delaying OIE's timely reporting to the OAC.

OIE validations of the CDB self-evaluations are also submitted to the OAC, but it is in both sides’ interest to clear up any misunderstandings beforehand. Despite attempts to improve the timeframe for completing these validations, delays are more the norm than the exception.


OIE and Protection from External Influence or Interference

The OIE's independence in the design, conduct and content of its evaluations does not appear to be subject to any external interference. But securing funding from sources outside the OIE's administrative budget, i.e. from the Special Development Fund (SDF), is an unduly complex and lengthy process. As such, we consider that the current funding process can affect the OIE's choices with regard to the types of evaluations it can undertake as well as the methodologies of its studies. (See Figures 1 and 2 later in this Review.)

Avoidance of Financial, Personal or Professional Conflicts of Interest

This particular aspect refers to the organisation's Human Resources Policy: there must be provisions in place to protect against staff conflicts of interest, past, present or potential. We requested, via the OIE, evidence from human resources of any such provisions but did not receive an answer. We can only assume that this aspect of independence does indeed form part of normal CDB Human Resource Policies.

To conclude: We are impressed with the measures the CDB has taken to assure the organisational independence of the OIE. Senior and line management alike accept its independent status. But there is some indication of independence having possibly gone too far: the OIE is operating at arm's length from operations and has limited access to senior management's strategic discussions. Regarding behavioural independence, the quality of OIE's reports is affected by the lack of rigorous data and by many types of delays: difficulty in easily accessing documentation, and delays in the exchange of reports between the OIE and the operations area. Both have affected the timely delivery of reports to the Board. Few evaluation reports are publicly available. The OIE's resources are limited; this is a hindrance to the OIE's independence, since it cannot cover the full range of MDB-type evaluations or have complete autonomy over the selection of themes and methodologies. As for protection from outside interference, our concerns are largely to do with the OIE's independence over staffing issues: there are potential loopholes in the current arrangements that could undermine the OIE's autonomy over its staff.

OIE's Strategy, Work Practices and Work Programme

Following the approval of the CDB's Evaluation Policy, the OIE attempted to develop a plan for its implementation, considering such questions as: what are the priorities, and what is the timeframe for achieving which activities? These were partially addressed in the OIE work programme and budget for 2012 to 2014, but the programme proved to be overambitious.

Much of the period 2012 to 2015 was taken up with preparing for the OIE's shift in focus from project-based evaluations to high-level thematic and in-depth strategic studies. From 2014, the OIE has therefore adopted a three-way approach: (1) for self-evaluations, reducing its time input to supporting the process; (2) for independent evaluations, taking stock of the gaps in coverage and expertise, and increasing the involvement of its own staff in conducting evaluations; and (3) networking to share experiences with centres of expertise and to align the OIE with international practices. The OIE plans to conduct 2-4 high-level studies per year from 2016. Outsourcing is still needed when a study is funded by the SDF, when time is limited, and when specific expertise is required.

But its strategy lacks a theory of change and a prioritisation of tasks, which should include more emphasis on evaluation management activities (knowledge management and brokerage in particular) and the time they require. The other time demands mentioned in previous sections, such as delays in completing reports and validation work, have also affected the OIE's plans. The more recent work plans have set the delivery of utility-focused and timely evaluations as a key objective, but they lack clarity on how the OIE proposes to surmount the time and data issues, which are far from new.


The challenges that have to be dealt with to enable the OIE to move up the MDB evaluation pyramid7 are brought out in the remaining sections of this Review, not least given the limited resources available. (See Appendix III for more on OIE's work practices.)

To conclude: The OIE has taken a first step in proposing a strategy for establishing itself as an independent evaluation resource and implementing the Evaluation Policy. The strategy as it stands, however, does not sufficiently take account of the full range of activities and time commitment needed to support evaluation management. Furthermore, there is no road map to indicate how the OIE intends to realise the full range of tasks and responsibilities set out in the Evaluation Policy, or which of these should be prioritised.

The Value / Usefulness of OIE's Independent Evaluations

Evaluation is a powerful tool that can provide useful, evidence-based information to help inform and influence policy and practice. But useful evaluations depend not only on the evaluators' skills but on several other important factors as well: (1) planning evaluations to be relevant to the priorities of the organisation's work and to deliver their results in time to be useful; (2) the degree of consultation and, ultimately, ownership by those who seek evaluative information; (3) the tools used to support the evaluation process per se; and (4) the credibility and quality of the evaluation products.8

1. Planning relevant and timely evaluations

The OIE is now working on a 3-year rolling work plan that sets out the broad areas for enquiry. So far, there are no agreed criteria for selecting the specific topics for independent evaluation, although the priorities tend to reflect those of the CDB's strategic plan. Decision-making is instead rather arbitrary, based on a process of dialogue between the OIE and the CDB, and between the OIE and the Board.

One of the OIE's two objectives for 2015, therefore, was to define a work plan and agree priorities based on a "utilisation-focused" approach; that is, studies are selected and planned to be relevant and useful to the organisation's needs.

The OIE has achieved this objective with respect to its latest studies: the Special Development Fund (SDF) Multicycle 6 & 7 Evaluation, the Haiti Country Strategy evaluation, and the evaluation of the CDB's Policy-Based Operations. Each of these three has been planned to deliver its results in time to provide the CDB Board of Directors with relevant information for negotiating the next round of funding. In spite of some delays due to a myriad of reasons, not least the extra effort needed to secure essential data, the studies will deliver on time.

The processes for agreeing the OIE's work plan and specific evaluations on the one hand, and for securing alternative funding on the other, are shown in Figure 1 below. The Panel was surprised to learn how bureaucratic (in its internal approval steps) and inefficient (in the time it takes) the process seems to be. Our concern is that such a process could pose a threat to assuring the Board of "timely studies".

Figure 1: Selection of Evaluation Topics and Funding Source

[Figure 1 is a flow chart. In outline: evaluation topics are selected in consultation with CDB Operations and the OAC/Board, within the 3-year Work Programme and Budget approved by the Board and the annual OIE report and work plan submitted to the OAC. For a specific study, the OIE drafts a Terms of Reference / Approach Paper, which is internally reviewed, finalised and submitted to the OAC for approval. On the funding track, a study is financed either from the OIE administrative budget or from the SDF; the latter requires a TA Paper (similar in content to the Approach Paper but in a different format) approved by the internal Loans Committee. Board approval is necessary if the budget is above USD 150,000 (Board notification only if USD 150,000 or below). The OIE then selects and contracts consultants, if any.]

7 US Treasury Report to Congress on Evaluation Standards and Practices at the Multilateral Development Banks (September 2014, Annex C).

8 These aspects reflect the principles and good standards of the Evaluation Cooperation Group and the evaluation community more generally.

To summarise, with the latest three independent evaluations, the OIE has achieved its 2015 objective of planning and delivering useful and timely studies. But the process of securing approval and funding, particularly for studies requiring additional resources, is inefficient.

2. Consultation and ownership

“The credibility of evaluations depends to some degree on whether and how the organization’s approach to evaluation fosters partnership and helps build ownership and capacity in developing countries.”

(ECG good practices)

As previously mentioned, the OIE engages with the OAC, CDB senior management and the operational departments in agreeing its 3-year work plan and then in selecting the specific topics and themes. It also discusses the evaluation approach paper (design and implementation plan) with the CDB and the OAC before completing the final version. However, preliminary and final drafts of the report are submitted to CDB line and senior managers only for comment on factual errors; the final versions are handed over to the OAC. A series of discussions is held on the results and their implications, first with the CDB and then with the OAC. Discussions with the OAC are more limited owing to the overburdened agendas of OAC and Board meetings, as previously discussed.

Figure 2 below provides an overview of the evaluation implementation and stakeholder engagement processes.

Figure 2: Evaluation Study Implementation and Feedback Loops

[Figure 2 is a flow chart. It distinguishes three implementation arrangements: (A) fully outsourced to external consultants, with oversight by the OIE; (B) conducted by OIE staff; and (C) conducted jointly by external consultants and the OIE. Starting from the Terms of Reference, the process runs through preparations (a detailed evaluation plan, including tools and timeline, and logistics), production of an Inception Report / Approach Paper, data collection and analysis, a presentation/workshop of interim findings and conclusions to the CDB for immediate feedback and validation, submission of the Draft Final Report to the OIE, review loops between the OIE and the CDB (potentially also the BMCs) with feedback to the evaluation lead, submission of the Final Report to the OIE, transmission of the final OIE-approved report to CDB Senior Management for the Management Response (considered by the AMT), submission of the Final Report and Management Response to the OAC/Board for endorsement, and preparation for disclosure and dissemination.]

Notes to Figure 2

1. The OIE informed the Panel that this is an abbreviated version: there are, for example, additional steps (secondary processes) when evaluations are procured (by tender or single source), and there can be additional review loops and updates to the OAC.

2. The OAC may also decide to return the report to the OIE, the Panel was informed, or to demand specific actions from Management based on the report.

This process is engaging and appears to have secured senior management and OAC interest and buy-in, as witnessed in the latest studies. But there is a downside too: the process takes much time and, in our view, is partly unnecessary. The Panel appreciates that staff from operations as well as the AMT may both want to confer on an appropriate management response, but this should not be needed for reviewing an independent report for factual errors. The two-phase approach seems somewhat inefficient and unnecessary in our opinion.


Contact between the OIE, the CDB and/or the OAC during actual study implementation is most often restricted to the occasional progress report, particularly when studies run behind time. Occasionally, however, the OIE arranges discussions with operations to reflect on emerging findings, but we are not sure how systematic this feedback loop is.

There is no “accompanying group” for individual studies, which would include both internal and possibly external partners. Such “advisory groups” have shown their worth in a number of other contexts for improving buy-in and providing strategic input as well.

More generally speaking, outside of an evaluation study the OIE has limited dealings with operations. The OIE has an advisory role in providing them with help, particularly training, guidelines and tools to support self-evaluations. We are nevertheless concerned about the seeming distance between the two and how this has affected the perceived value of evaluation. (For more on this point, please see the section below on "Self- and Independent Evaluations".)

But the Panel wishes to stress that this is not the case for newly appointed senior management, among whom we found a much more open attitude to evaluation and an appreciation of its potential value. For example, we learned that the OIE was recently invited by a senior manager to share evaluative knowledge and experience with his staff regarding policy-based operations.

Certainly, we can say that, overall, the key stakeholders within the CDB are sufficiently integrated into the evaluation process to foster their buy-in and ownership. However, the Panel considers the process of reviewing draft reports inefficient. Operations staff are less convinced than senior management about the value of evaluation. There are no "advisory groups" in place to accompany individual studies, which could add to ownership as well as to the perceived value of evaluation; such groups could help improve evaluation's image.

3. Tools to support the evaluation process

So far, the OIE has mainly focussed on improving the tools that support operations and its self-evaluations. This has left the OIE with little time to produce checklists or tools to support its own studies. There are plans to develop an OIE Manual to guide its own processes. Such plans should be encouraged, as these documents will form a very important part of training, particularly for newcomers to the OIE team.

In the meantime, the OIE refers to the Performance Assessment System (PAS) Manuals for its independent evaluations; the operations area uses the same manuals for completing its reports on public sector investment, lending and technical assistance, policy-based loans, and country strategy programmes. The manuals are based on the DAC criteria and ECG principles, with much emphasis given to the rating system and to how and what should be rated. However, we find them lengthy, unwieldy and overcomplicated. Moreover, such manuals should be used for reference; they cannot and should not replace first-hand training in how to plan, conduct and manage the evaluation process.

Quality Assessment (QA) and Quality at Entry (QaE)

There was a transition period between 2012 and 2014 to establish the OIE. Work on the PAS, QaE, PCRs and ARPP, which had started earlier, was therefore completed after the OIE came into existence, but this work effectively had no formal 'home' in Operations. The Panel was told that there had been some discussion about creating a Quality Assurance unit within CDB (OPS), but the current status is unclear.

The QaE Guidance Questionnaire was developed before the OIE was established and was completed by the OIE. It was used to assess the documents that came to the OIE for comment at the review stage; the results were then sent to the Portfolio Manager/Project Coordinator, indicating any gaps or issues that needed to be addressed or clarified. QaE Guidance Questionnaires were developed for all the Bank's lending products and CSPs, and to assess the quality of supervision.


After the QaE was launched Bank-wide, several operations officers saw merit in using the QaE Guidance Questionnaire in the field and adopted it as a tool for use during appraisal missions, in order to cross-check and test their data collection and analysis.

The OIE discontinued its use of the QaE in 2014 owing to limited resources and a stronger focus on evaluations. It still sometimes comments on specific appraisals, but very selectively.

Both QaE and QaS (quality at supervision) are also addressed in the PAS Manuals. In addition, the QaE and PAS have been incorporated into Volume 2 of the Operations Manual (OPPM).

The Review Panel assessed the QaE forms. They are relatively standard, adapted to the specificities of the CDB, and they contribute to judging a project's expected quality in a relatively objective way. As such, they are helpful as a benchmark in the ex-post assessment of projects.

The Panel considers that the lack of an established Quality Unit in the CDB (independent of the OIE) is a weakness that should be addressed in the near future.

4. Credibility and Quality of Evaluation Products

As with many other MDBs, evaluation activities include both independent and self-evaluations; the latter take the form of completion reports on operational projects and country strategy programmes and are done by operations staff. The OIE then validates the quality of these reports. The self-evaluations should inform the more strategic studies conducted independently by the OIE. (More on the relationship between the two is provided later in this Review.)

Independent evaluations are processed as follows: the OIE prepares an Approach Paper (AP) for approval by the OAC. If the study is to be outsourced, the AP becomes the basis for Terms of Reference (ToR), which, subject to the size of the budget, may be put out to tender. The contracted evaluator then prepares an Inception Report (IR) after some desk and field research has taken place; this intermediary report is not produced when the OIE itself is conducting the evaluation. Sometimes a Progress Report is submitted, but otherwise the next stage is the delivery of the final report in various drafts. (Assessments are like evaluations but more limited in scope and depth of analysis.)

In theory, an estimated 15 completion reports are due each year; in practice, delays in submitting the reports for validation are commonplace. Much of the OIE's work since 2012 has therefore been taken up with the backlog of CDB self-evaluation validations. With the change of Head in June 2014, the OIE secured the OAC's agreement to reduce the number of validations to a maximum of 6 per year, out of the theoretical annual average of approximately 15. However, the backlog continues to accumulate, as only 2 PCRs were given to the OIE for validation in 2015.

With regard to the independent evaluations, the OIE has produced a range of studies and approach papers since 2012. Our review is based on those listed in Table 4 below, as provided by the OIE, covering the period from May 2012 to December 2015. The list includes 3 evaluations, 4 assessment studies, 14 validations of self-evaluations and 3 Approach Papers for upcoming evaluations.

Table 4: List of studies (N = 24) submitted to the Board for the period January 2012 to December 31, 2015

Board Meeting 251, May 2012:
Ex-Post Evaluation Report on Road Improvement and Maintenance Project, Nevis – St. Kitts and Nevis.
Validation of Project Completion Report on Sites and Services – Grenada.
Assessment of Effectiveness of Implementation of Poverty Reduction Strategy 2004-09.

Board Meeting 253, Oct. 2012:
Assessment of Extent and Effectiveness of Mainstreaming Environment, Climate Change, Disaster Management at CDB.

Board Meeting 254, Dec. 2012:
Assessment of the Implementation Effectiveness of the Gender Equality Policy and Operational Strategy of the Caribbean Development Bank.
Validation of Project Completion Report on Enhancement of Technical and Vocational Education and Training – Belize.
Assessment of the Effectiveness of the Policy-based Lending Instrument.

Board Meeting 256, May 2013:
Validation of Project Completion Report on Expansion of Grantley Adams International Airport – Barbados.
Validation of Project Completion Report on Fifth Water Supply Project – Saint Lucia.

Board Meeting 261, May 2014:
Validation of Project Completion Report on Immediate Response Loan, Tropical Storm Gustav – Jamaica.
Validation of Project Completion Report on Social Investment Fund – Jamaica.
Validation of Project Completion Report on Disaster Mitigation and Restoration – Rockfall and Landslip, Grenada.

Board Meeting 263, Oct. 2014:
Validation of Project Completion Report on Basic Education Project – Antigua and Barbuda.
Approach Paper for SDF 6 & 7 Multicycle Evaluation.

Board Meeting 264, Dec. 2014:
Validation of Project Completion Report on Policy-Based Loan – Anguilla.
Validation of Project Completion Report on Immediate Response Loan, Tropical Storm Arthur – Belize.
Evaluation of Technical Assistance Interventions of the Caribbean Development Bank Related to Tax Administration and Tax Reform in the Borrowing Member Countries 2005-2012.

Board Meeting 265, March 2015:
Approach Paper for the Evaluation of Policy Based Operations.

Board Meeting 266, May 2015:
Validation of Project Completion Report on Upgrading of Ecotourism Sites – Dominica.
Evaluation of the Caribbean Development Bank's Intervention in Technical and Vocational Education and Training (1990-2012).

Board Meeting 267, July 2015:
Validation of Project Completion Report on the Belize Social Investment Fund I Project – Belize.

Board Meeting 268, Oct. 2015:
Approach Paper: Country Strategy and Programme Evaluation, Haiti.

Our review and analysis are based on the UNEG Quality Checklist for Evaluation Reports (http://www.uneval.org/document/detail/607) as well as on ECG guidance (Big Book on Good Practice Standards).

Approach Papers

Three Approach Papers (APs) were available to the Panel for review (see Table 4 above). An AP describes the rationale for the evaluation, the background to the topic evaluated, the evaluation framework (criteria and questions) and the approach. It also describes the team and provides an initial plan. Being the first main deliverable of OIE's evaluation process, APs are the starting point and therefore a major determining element in the roll-out of each evaluation; APs therefore "have to get it right".


The APs examined are clearly written, well-structured and of reasonable length.9 We were surprised to find, however, that they do not make explicit the objectives of the evaluated intervention(s), e.g., through a clear objective tree, or through an explicit theory of change, intervention logic or logframe. Whilst one of the APs contains, in an appendix, a results framework for the evaluation, the results framework for the intervention (PBO) itself is lacking.

Inception reports

Only one Inception Report was available for review (SDF 6&7). It gives an in-depth description of the evaluated programme and provides a clear Theory of Change. It is good practice that this is established after a pilot field mission, which helps to amend the initial AP on the basis of field observations and to sharpen the evaluation questions if needed.

However, it is still considered good practice to have the Theory of Change elaborated in the intervention's initial design documents; this would facilitate OIE evaluations after project completion. The requirement to establish the Theory of Change of any intervention could be included more explicitly in the QaE form, to be developed jointly by the Quality Unit referred to above and the OIE.

Evaluations and Assessments

Three evaluations and four assessment reports completed during our review period were considered for this review. Assessments are similar to evaluations but have a narrower scope; they focus on a limited set of aspects or judgment criteria, mainly effectiveness, i.e. achievement of objectives. Evaluations generally base their judgment on the internationally recognised DAC criteria as well as on aspects of the CDB's and BMCs' management of the intervention.

In general, these reports are of reasonable quality. In the main, they explain the evaluated object10 and provide evaluation objectives. The findings are organised around the evaluation criteria or questions detailed in the scope and objectives section of the report. They are based on evidence derived from data collection and analysis methods as described in the methodology section. The reports tend to dwell on the limitations that the evaluation encountered, but without becoming defensive. In one case (PBL Assessment) the report starts with a summary of the reviews on the topic done by other MDBs. This was a pleasant surprise and indeed a good practice that could well be adopted in future evaluations too.

However, the reports also show several significant weaknesses:

- Reports do not always provide clear (reconstructed) intervention logics or theories of change for the intervention(s) evaluated.11 Evaluation criteria and questions are defined at a fairly general level. They are translated into more precise "research questions" (in an "Evaluation Design Matrix", for each project and each criterion). However, it is unclear how these questions relate to the intervention logic, as this is not made explicit. This may be done in inception reports (of which, as noted above, only one was available for review), but it should also be done in the final evaluation reports.

- The reports do not describe the link from the evaluation questions to the answers, how the evaluation judgments are made and how these ultimately translate into ratings for each criterion and each project. In other words, the explanation provided in the evaluation frameworks is inadequate: the "evaluation design matrix" currently used does not provide sufficient insight into how an intervention's performance is ultimately judged. Links between findings, conclusions and recommendations could be improved by making this more explicit. Reports should include the story of how the evaluand is credibly linked to any observed outcomes and impacts, and should be clear on how causal claims are made.

- With the exception of the PBL Assessment, reports are lengthy and detailed. One reason for this is an over-emphasis on ratings: their detailed discussion, project by project and criterion by criterion, occupies a very prominent position in the main body of the evaluation reports. Although ratings are traditionally an important element in MDB evaluations, too strong an emphasis can be tedious and may distract the reader from the real lessons to be drawn. The detailed discussion of ratings, and their evidence base, would be better placed in an appendix, with a brief summary in the main report. This would give the lessons and recommendations a more prominent position than is now the case, and would make the evaluation reports not only shorter but also more interesting to read; it could thereby add value to evaluation's image within the organisation.

- The reviewers feel that the OIE evaluations tend to over-emphasise objectives-based evaluation13 and the DAC criteria to the exclusion of other evaluation approaches such as Developmental Evaluation (Patton, 2010).14 Evaluation should be case-specific and answer the actual information needs of managers and other decision makers rather than always concentrating on final performance.

- Related to the previous point (and again with the exception of the PBL Assessment), executive summaries (approximately 8 pages) are too long. To increase a report's potential impact, they would need to be reduced to 2 to 3 pages and be more focused; again, this could be done by dwelling less on the individual ratings of projects and more on key findings, lessons and conclusions. More generally, reports could be better adapted to the needs of different audiences. Although not strictly limited to evaluations, the Health Evidence Network reports15 are a model that could be adapted for evaluation reporting purposes; they are specifically geared towards addressing policy and decision-making.

- The "Recommendations to BMCs" are an interesting feature of the reports, although we are unsure to what degree such recommendations can be effectively followed up by the OIE or the Bank; they could certainly be taken up with BMC Board members.

- Reports (e.g. the evaluation report on Technical Assistance) focus heavily on technical problems encountered during the evaluation. Although these are important issues, to improve the report's flow and "readability" this material would be better placed in an appendix. What counts is the story of the intervention, not the story of the evaluation (see, for instance, the "Limitations" section of the TA report).

9 Opportunities remain, of course, to be more concise and to move parts to appendices, e.g., detailed descriptions of the evaluation team or part of the description of the evaluated intervention.
10 Sometimes at great length: for instance, in the SDF 6&7 multicycle evaluation report it is only at page 30 that the reporting of findings begins.
11 Again with the SDF 6&7 evaluation, it is said to be guided by a "Logic Model" which is not explained.
13 The focus of an objectives-oriented evaluation is on specified goals and objectives and determining the extent to which these have been attained by the relevant intervention. See, for example, Worthen, Sanders, & Fitzpatrick (1997) Program Evaluation: Alternative Approaches and Practical Guidelines (2nd Ed.). White Plains, NY: Addison Wesley Longman.
14 Patton, M.Q. (2010) Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use, Guilford Press.
15 See the reports available from the WHO's Health Evidence Network at http://www.euro.who.int/en/data-and-evidence/evidence-informed-policy-making/health-evidence-network-hen


OIE Validations of Project and Country Strategy Programme Completion Reports (referred to collectively as PCRs hereafter)

As with other MDBs, the OIE has the mandate to validate the operational departments' PCRs. The validations tend to repeat the different items reported in the PCRs and then provide extensive comment on each. The PCVRs go into great depth and detail, which makes the documents rich and complete. This is their strength – but also their weakness: the depth and level of detail, as well as the repetition of material from the original PCRs, make PCVRs (overly) lengthy (20-40 pages) and difficult to read. The OIE reported spending approximately 27.2% of its time on validating PCRs in 2015, compared with 44.4% on its core work, i.e. doing or managing the higher-level evaluations. In other words, for every hour spent on core evaluation work, more than half an hour is spent on the validation process (27.2% against 44.4% of total time). Finally, the PCVRs now seem to be, to a great extent, a standalone output of OIE. It is not always clear to us how they are being used as the "building blocks" for the OIE's independent evaluations. Making this clearer in the independent evaluations would help show the link, and therefore the value of the time being spent on the self-evaluation validations.

To conclude, the review finds that the OIE has taken steps to improve the perceived utility of evaluation in several ways: in the first instance, by planning its work to provide relevant and timely evidence geared towards helping the Board with its oversight and decision-making tasks. The topics are selected through dialogue between the OIE and key CDB stakeholders and reflect priorities of the CDB's strategic plan. Secondly, by securing the interest, and consequently the buy-in, of the OAC and CDB senior management through engaging their input throughout the evaluation process. This is evidenced by the reported interest in the latest three studies: the country strategy programme in Haiti, the evaluation of policy-based operations and the SDF 6 & 7 multicycle assessment.

The OIE products are of an acceptable quality and could be better still if some of the shortcomings were addressed. The products themselves, however, do not impair the utility of OIE's work; this is undermined in other ways: (1) by time delays, both in commenting on PCRs (on OIE's side) and in providing feedback on the independent evaluations (on the side of operations and management); and (2) by inefficient processes for agreeing topics and funding sources, and for providing the OIE with management responses to its reports.

Putting Evaluation to Use: transparency, feedback and follow-up

There are several ways in which evaluation can be, and is being, used. As John Mayne has pointed out in his many publications on the issue,16 when we talk of evaluation use we are mainly thinking of instrumental use – use made to directly improve programming and performance. But there is also conceptual use – use which often goes unnoticed or, more precisely, unmeasured. This refers to use made to enhance knowledge about the type of intervention under study in a more general way. There is also reflective use, which refers to using discussions or workshops to encourage and support reflection on the evaluation findings to see how they might contribute to future strategies.

In the case of the CDB, there is some evidence to suggest that "use" is not only instrumental; other types are also developing. For example, the review of draft evaluation reports includes reflective workshops that not only discuss the findings but also seek to draw out the important lessons (reflective use).

Another important use of evaluation, as recommended by the ECG, is that from time to time a synthesis of lessons is drawn from a number of evaluations and made publicly available. Indeed, the Panel was impressed to hear that in the past the evaluation unit had drawn together a synthesis of lessons from evaluations of the power sector (conceptual use). But none has been done since the OIE was set up, although a synthesis is now on the "to do" list for 2016 (OIE's 2016 Work Plan).

16 See, for example, his opening chapter in Läubli Loud, M. and Mayne, J. (2014) Enhancing Evaluation Use: Insights from Internal Evaluation Units, Sage Publications.

As for instrumental use, responsibility for using the knowledge generated through evaluation, and for possibly drawing up an action plan of what should be done, rests with CDB senior management and the relevant CDB department and division. Oversight of the application of recommendations and the uptake of lessons drawn is the responsibility of the OAC.

Evidence on how evaluations have actually contributed to decisions or negotiations is lacking; certainly, the OIE is unaware of the extent to which its evaluations are put to use. On the one hand, we read in the OAC minutes that lessons learned are integrated into the next phase. On the other, we were told that in the past evaluation results were often "too old" to be of use, as the lessons had already been drawn and applied well before the report was completed. Gaps in people's memory of how well the evaluative information from previous studies was used may also account for the scarcity of evidence.

In response, the Panel questioned CDB staff and the OIE about a particular study, the Technical and Vocational Education and Training Assessment. The feedback was somewhat contradictory. On the one hand, the study was criticised as "confirming" news rather than bringing "new news". On the other, we learned that in October 2015 the Board of Directors approved a proposal for the revision of CDB's Education and Training Policy and Strategy; work on this has already begun and an external consultant has been engaged to lead the process.

Although it is one of the OIE's tasks to set up a database on results and lessons learned from evaluations, so far this has not been a priority. There is also currently no systematic tracking of lessons or recommendations arising from the evaluations, or of progress in their uptake. The OAC has, on occasion, raised questions about specific evaluations and asked to be kept informed of progress as an "action information" item on subsequent agendas. However, in our review of the minutes from 2012 to December 2015, such requests do not appear to have been followed up. The OAC's oversight of evaluation use appears to be insufficient.

The OIE's role in supporting CDB's organisational learning is clearly specified in the Evaluation Policy, with many good suggestions for knowledge-sharing activities such as "brown-bag lunches, workshops, pamphlets and short issues papers" (p. 19). So far, however, the OIE's lead role on the knowledge-sharing side has been quite limited. It has provided advisory input in Loan Committee discussions, and organises workshops together with the relevant operational department to discuss the implications of evaluation studies. Ultimately, of course, the uptake of evaluation results and knowledge is in the hands of management. But the evaluation unit has an important role to play as knowledge broker and knowledge manager; both roles have tended to be underplayed in OIE's work plans so far.

Transparency: The Communication Strategy

Recently, with the approval of its new Disclosure Policy, the CDB has started to post its independent evaluation reports on its website (there is nothing on the self-evaluations). The website also presents a good overview of the role and function of the OIE and of evaluation within the CDB. This is a step in the right direction for sharing information. However, in our view, the CDB's communication strategy is the weakest part of the evaluation system to date.

We have already commended the OIE for its efforts to engage the CDB and the OAC in deciding which evaluations to pursue, and on the draft reports, their conclusions and recommendations, as well as the lessons drawn. But reporting and communicating the lessons seem to be targeted entirely at the Board and the CDB.

We feel that active engagement with the more indirect stakeholders – for example, project implementers in the BMCs, NGOs or project beneficiaries – is relatively weak.17 There appears to be little reflection on drawing out significant messages for this broader group of stakeholders, or on how then to transmit them to the "right" people in the "right" way (knowledge brokerage). Moreover, the 2015 budget provides only US$2,000 for communication, none of which is intended for outreach.

17 A broader communication strategy is one of the principles and good standards of the Evaluation Cooperation Group and the evaluation community more generally.

To conclude, evidence on the uptake of evaluation is sparse. It is unfortunate that so far no systematic record-keeping system has been put in place to track lessons learned or the uptake of recommendations (or actions agreed in management responses). Furthermore, although the Evaluation Policy specifies the need for "distilling evaluation findings and lessons learned in appropriate formats for targeted audiences both within and outside the CDB" (p.19), such a targeted communication strategy has yet to be developed.

Strengthening Evaluation Capacities and Networking

From the outset in 2012, the OIE has stressed the importance of developing and strengthening evaluation capacities within the OIE, the CDB and, subject to available resources, in borrowing member countries. Building evaluation capacity in BMCs and the CDB is one of the OIE's mandated tasks and has figured as a priority on the work plan from the beginning (Work Programme and Budget 2012-2014). Developing an internship programme for graduates from the Caribbean region was one idea advanced to help build local evaluation resources. To date, however, capacity-building has focused primarily on OIE and CDB staff. One of the OIE's two objectives for 2015 was therefore to take up the challenge and "strengthen evaluation capacities and networking", to include reaching out to the BMCs.

Developing OIE staff capacities

The change from project-level to strategic and thematic evaluations requires different evaluative skills and competencies. The MDB Evaluation Pyramid presented below in Figure 3 shows the different types of evaluation and the changing resource needs as one ascends the pyramid. Implicit here also is the change in the type of expertise and competencies needed as evaluation aspires to the higher levels.

In line with the OIE's shift from project-based to in-depth thematic and strategic evaluations, the OIE set itself the objective for 2015 of networking and developing working partnerships with regional and international evaluation entities and academic institutions. The rationale was twofold: (1) to secure further support and guidance; and (2) to increase its outreach and coverage through joint work and international exposure. Another, implicit, aim was to benefit from partners' contacts in the BMCs wherever possible so as to improve data collection and quality.

Figure 3: The MDB Evaluation Pyramid18

18 US Treasury Report to Congress on Evaluation Standards and Practices at the Multilateral Development Banks (September 2014, Annex C).


The OIE has therefore linked up with Carleton University in Canada and the University of the West Indies, Barbados campus. The OIE was also approached by the Development Bank of South Africa to exchange experiences about setting up an evaluation entity in a “small” development bank. However, its attempt to become a member of the Evaluation Cooperation Group was not successful for reasons beyond its control.

The OIE is to be commended for addressing the issue of staff competencies and professional development more generally. New developments in evaluation, as well as in the scope of OIE's work, may necessitate new competencies. For this reason, organisations such as the International Development Evaluation Association (IDEAS) have recommended that the competencies of evaluators and evaluation managers be periodically reviewed. Several publications now exist on competency requirements and suggestions for the periodic review of staff competencies.19

It is not within this Review's remit to compare and contrast OIE's competencies with those recommended by international and national agencies. What we can say, however, is that the OIE demonstrates great forethought in taking this issue on board.

Capacity building within CDB

The OIE's objective also includes continuing to develop measures for improving the monitoring and self-evaluation side of CDB's work. OIE's strategy here is to use the windows of opportunity offered by some of the training sessions organised by the CDB as part of its shift towards MfDR, e.g. by Corporate Planning Services and Technical Assistance. For 2016, it is planned for the OIE to present at the annual staff meeting and Learning Forum.

The OIE also organises some ad hoc training with operations, for example to help staff understand new tools (e.g. for drawing out lessons from self-evaluation reports) and, more generally, to help them appreciate how evaluation can add value to the organisation's work. Measures include providing advisory services on demand and providing training alongside the introduction of new or revised tools.

19 E.g. IDEAS (2012) Competencies for Development Evaluation Evaluators, Managers and Commissioners; the Canadian Evaluation Society's Competencies for Canadian Evaluation Practice (2010); and the Swiss Evaluation Society's Evaluation Managers Competencies Framework (2014).

Capacity building in the BMCs

This is an ambitious task that would require additional investment; from the biannual work plans, it would seem that it therefore tends to be put to the bottom of the to-do list. From what we understand, the OIE's strategy is to partner with Carleton University and the University of the West Indies, using their networks in some of the BMCs, to try to develop this aspect. But the resources currently available to the OIE will limit the scope of such work in the BMCs which, in turn, will continue to hinder the production of sound evidence for the OIE's evaluations.

To conclude, we cannot comment on the quality of, or reaction to, such training, but we can commend the OIE for making capacity building one of its priority objectives. From both the Policy and the documents we reviewed, we note that capacity building was always an issue to be tackled, but one which tended to be put to the bottom of the "to do" list. Working with the BMCs is a priority, but it will require focus and additional human and financial resources.

Adequacy of the OIE’s financial and human resources

Human Resources

OIE staffing consists of five: the Head, three professional staff and one administrator. Three of the five were recruited from within the CDB. There is some indication that the Board will expect the OIE to conduct impact evaluations in the near future. The OIE itself is also keen to engage its staff in the independent studies and to focus on high-level evaluations. However, actual staff capacity contrasts with such increasing demand and expectations. The Panel is of the view that, in trying to meet such expectations, the OIE will be overwhelmed and less likely to deliver credible and useful evaluations. Moreover, there are many other designated OIE activities that should be recognised as valuable work: the validations; building CDB and BMC evaluation capacity; providing supervision and advice; knowledge management and brokerage; and managing evaluation contracts. The time needed for all of these may be underestimated in OIE's work plans and budgets, yet all are important for assuring best value from evaluation. The Panel is concerned that the demand for "doing" evaluations, as well as OIE's interest in advancing its skills in high-level evaluations, may crowd out these other essential tasks and the time they require.

Limited financial resources for OIE’s work programme

The OIE is funded from the general administrative budget and represents approximately 2.5% of the total. Whilst this is seemingly a higher proportion than in other MDBs, in real terms it is quite limited: 75% of the OIE budget goes to staff salaries, leaving US$190,000 in 2015 for external consultants and other expenses (taken together, these figures imply a total OIE budget in the region of US$760,000). More details on budgets and OIE practices are provided in Appendix […]

CDB's donors do not appear to specify a budget for monitoring and evaluation activities. This means, on the one hand, that there is no clear external budgetary recognition of operations' self-evaluation work or of OIE's time in the validation process and, on the other, that whilst donors expect to receive reports from independent evaluations, this expectation is not made clear when funds are allocated.

Resources available to the OIE for hiring external consultants have dropped from US$350,000 in the revised 2014 budget to US$120,000 in the 2015 indicative budget. The OIE estimates that, for high-level evaluations, the cost of external consultants is between US$90,000 and US$350,000 (the SDF 6 & 7 evaluation cost US$255,000). In the Panel's experience, this is a sound estimate. With one staff member fewer during 2014-2015, coupled with OIE's focus on dealing with the backlog of self-evaluations amongst other priorities, the Office was unable to execute some of the evaluations during the annual budget period. Hence the budget was reduced for subsequent years, but it has proven insufficient to fund the OIE Work Programme. The OIE has therefore needed to turn to the only alternative source available at present, the SDF fund. But the SDF funding rules apply to specific countries and themes, which obviously restricts the OIE's choice of evaluation subjects. And since the SDF does not allow for OIE recurring costs such as staff travel, the SDF evaluations have to be outsourced. As presented in Figure 1 above, the approval process is inefficient and causes delays. The Panel learned that additional funds, for example for specific studies, could be secured from within the administrative budget during the year, on condition that the request was based on sound arguments.

Whilst the Panel appreciates full well that the Bank is operating within a zero growth framework, the reviewers were surprised to learn that OIE funding is not sufficiently secured in line with its priorities and work plan. The need to seek alternative funding for individual studies does not allow for any flexibility and undermines the OIE’s independent judgment of what needs to be done.

To conclude: the OIE is inadequately resourced to meet the expectations outlined in the CDB's Evaluation Policy. The Panel recognises that the CDB itself faces budgetary restrictions, but the current arrangements for securing extra funding are complicated and inefficient, and they limit the OIE's ability to exercise autonomy in the selection of its evaluation studies. Moreover, OIE budgets significantly underestimate the time needed to manage evaluations and other evaluation activities.

Self- and independent evaluation

In line with international standards, CDB's evaluation system covers both self- and independent evaluation. Self-evaluations cover public sector investment, lending and technical assistance, policy-based loans, and country strategy programmes. Both types are important, and the self-evaluations are at the very heart of the evaluation function: they are said to be the building blocks for the more strategic evaluations that the OIE is now undertaking.

The Evaluation Cooperation Group recommends that self-evaluations be carried out by the relevant operations department and, in turn, reviewed and validated by the organisation's independent evaluation office. The CDB's Evaluation Policy therefore speaks of "validating all self-evaluations" as one of OIE's essential oversight tasks.

Within CDB, the self-evaluations should provide management with performance assessments and thereby serve an accountability function to the CDB and the Board. To support the process, the OIE provides operations with manuals and checklists for guidance. Once a self-evaluation report is to hand, it is passed to the OIE for validation of its technical quality and credibility.20

However, in the CDB's case there are well-documented issues that have affected the quality and timeliness of the self-evaluations and, therefore, the quality of the foundation on which the independent evaluations are built. Paucity of documentation within CDB, paucity of data collected and available in the Borrowing Member Countries (BMCs), and delays both in producing completion reports and, in turn, in having them validated by the OIE – all these issues were raised systematically during interviews and in some of the independent evaluation reports.

Generally speaking, many of the monitoring data problems appear to be due to a lack of management oversight. For example, with the introduction of results-based management, the logframe and the monitoring and data needs are systematically being built into intervention design. However, the BMCs are not delivering the data as contractually agreed at the outset.

20 According to the Evaluation Policy, the OIE should validate all PCRs and CCRs, but owing to the backlog of reports and the delays in completing them (sometimes by years), since October 2015 the OIE has had the OAC's agreement to validate a maximum of 6 per year, selected in consultation with the OAC.


Incentives to support any significant change towards building a results-based culture seem weak, and sanctions are rarely enforced when the supply of data is lacking or lengthy project delays occur. Although we appreciate the complexities of trying to enforce monitoring compliance, the result is that project deadlines have often had to be extended, data gaps are not satisfactorily dealt with and, in turn, there is a void in the quality and quantity of evidence available for the CDB's self-assessment of project performance.

For some time, this lack of oversight has been tolerated. Part of the problem is the very real constraint of time: staff have more pressing priorities, and there is little incentive to complete the self-evaluation reports in a more timely manner. There is also the absence of any focal point within senior management to drive the process and deal with the problems. According to the Evaluation Policy (p.15):

“The President, with the support of the Advisory Management Team, is accountable for encouraging and providing an environment where evaluation adds value to the overall management of CDB’s activities and fosters a culture of critical analysis and learning”.

But in the CDB, a learning culture appears still to be in its infancy, and the leadership role as expressed in the Evaluation Policy is underdeveloped.

No record is kept of how the self-evaluation results are actually used. They do not appear on the CDB website, though we were told that the findings are integrated into subsequent project designs. Hence we are somewhat unclear as to the current utility of these reports. The situation is exacerbated by a rather confused image of evaluation: some operations staff consider OIE's input (through validations or independent evaluations) to be over-critical, regulatory and of little added value; it is seen as a threat rather than an opportunity for learning. Yet, at the same time, evaluation is understood to be an integral part of results-based management.

Some managers appear keen to change the status quo. For example, a revised and simplified template for producing project completion reports is being considered, and mid-term project reviews are expected to be more stringent in looking at monitoring plans and practices and in tying disbursements to performance. In some cases, we also learned of incentives being introduced to encourage project managers to complete their reports in a more timely manner. But much remains to be done and, since the OIE is no longer responsible for monitoring and project evaluations, there is a void that needs to be filled. It is up to line managers to drive this work forward.

To conclude, it is fair to say that, in view of the above "frustrations" between the OIE and operations, the value that evaluation might offer to the operations area is ill recognised, and the development of a learning and evaluation culture within CDB has been somewhat hindered. There is thus little incentive or management focus to drive any change to current practices; in other words, there is a lack of leadership to advance a learning environment in which evaluation can play a major part. Moreover, with the transition to the OIE and its change in focus, there is a void in the operations area that has not been satisfactorily dealt with.


Part Three: General Conclusions and Recommendations

To conclude, with regard to the Evaluation Policy and OIE's independence, our Review finds that over the past few years the CDB has succeeded in establishing an independent evaluation office that is credible and respected. It reports to a Board Committee and is thus organisationally independent from CDB management. Its work is grounded in an Evaluation Policy, agreed by the Board and the CDB, that reflects internationally recognised principles and good practices. The Policy sets out a broad scope of responsibility for the OIE which, however, seems over-ambitious given current resource constraints. The OIE clearly has both an accountability and a learning function; the latter should support the development of an organisational learning culture. (So far, the uptake of recommendations and key lessons has not been systematically monitored or recorded.) In general, on the issues of independence, we can conclude that the OIE meets the criteria for organisational and behavioural independence and is protected to a certain degree from external or contextual influences.

However, as the independent Advisory Committee for Development Impact has said, "independent evaluation needs to have clout … credibility of evaluation hinges on public perceptions as well as on reality."21

We are therefore highlighting a few potential threats, even though there is no evidence to suggest they are in any way real at present; it would be in the OIE's and the CDB's interest to have these clarified sooner rather than later. For instance:

- Any delays incurred in reporting self- and independent evaluation results to the Board could be interpreted as operational interference.

- There is no agreed process for dealing with any conflict of interest between the OIE and management in reporting results; it is simply expected that any disagreements will be reported in the management response.

- Another possible threat is the Head of the OIE's lack of complete autonomy over staff recruitment, termination, continuation and professional development. The Policy is not sufficiently clear about who has the final word in the case of disagreement.

- Finally, on resources, our Review accepts the limited funds available to the CDB and the fact that the OIE's budget is not independent but operates within the Bank's budgetary limitations. Nevertheless, we feel that more flexible arrangements could be devised to allow less restrictive and timelier access to funds.

With regard to governance, our Review has highlighted the difficulty the OAC faces in not receiving the background papers for its meetings in sufficient time to do them justice. Moreover, these documents tend to be very lengthy and not necessarily "reader friendly". The OAC's oversight responsibility is likely to be weakened as a result, and we can already see some indication of this: requests for systematic follow-up on management actions resulting from evaluation findings have not been answered, and there is no standing item for this on the OAC agenda, so such requests can easily be passed over and forgotten. The broadened responsibilities now given to the OAC also mean that many competing entities are trying to secure the OAC's attention. There is now provision for the OAC to call on consultants for help, which we feel may strengthen the OAC in its oversight responsibilities.

Furthermore, in its capacity as members of the Board, the OAC should stress the urgency of developing evaluation and monitoring capacity in the BMCs, since this gap has a direct impact on OIE and CDB evaluations.

With regard to the OIE’s performance, we have to respond to the questions raised in this Review’s Terms of Reference, which basically mean answering two main questions: Is the OIE doing the right thing? And is it doing it in the right way?

21 Picciotto, R. (2008) Evaluation Independence at DFID: An Independent Assessment prepared for the Independent Advisory Committee for Development Impact (IADCI), p. 4.


There is no doubt that the decision to establish a credible, independent evaluation function in the CDB is the "right thing" to do: "effective and useful evaluation and oversight activities can assess development effectiveness, hold the organisation accountable for results, and improve operational performance."22 It is also the policy of the MDBs to have such a function, and the CDB has now aligned itself with international standards and practice.23 The question now, therefore, is the following: is the OIE going about it in the right way?

The OIE has taken the "right" steps to improve the engagement and interest of the OAC and CDB senior management, from selecting the topics for its evaluations through to finalising the conclusions and recommendations in a collaborative spirit. It falls short, however, of taking the messages emerging from the studies to "outsiders", such as those responsible for implementing CDB interventions in the BMCs.

With regard to its oversight role, we feel that the OIE has paid insufficient attention to the actual utilisation of evaluation; it is beyond its responsibility to see that action is taken, but it is certainly within its remit to record how, and how well, the lessons drawn have been taken up and used. With regard to its oversight of the self-evaluations (the validation process), the OIE has attempted to improve dialogue with the operations departments and to demonstrate the dual function of oversight and learning. It is now emphasising the learning aspect by providing tools and guidance on how to draw out lessons and integrate them into future planning. More recently, it has sought ways to provide more formalised training on evaluation by working with the corporate planning services and technical assistance department to develop courses that show how, where and when evaluation plays its part within the MfDR framework.

However, one of the challenges in evaluation management is balancing independence with facilitating buy-in and ownership at the same time. It is a fine line to walk, and it depends to a large degree on the climate between management and the head and staff of the independent evaluation unit in defining the tone of the collaboration. In practical terms, for the CDB this means defining the role of the OIE in relation to the self-evaluations performed by the Projects and Economics Departments. The change from the EOV to the OIE made this role change quite clear: the OIE no longer has responsibility for project monitoring or for planning data needs together with the operational departments. On the other hand, to improve understanding and learning, there needs to be an interface between evaluation and management. At present, the OIE's dual role – its advisory role in relation to operations and its strategic role towards the OAC and senior management – has not been satisfactorily resolved. The operational staff still do not appear to see any urgency in producing their completion reports, or to appreciate what lessons might be drawn from such reflection. The OIE is doing its best to support "learning" whilst, at the same time, keeping at arm's length. The greatest challenge the OIE faces in its new capacity is the slow development of an organisational learning and evaluation culture.

A Learning and Evaluation Culture

Evaluation utility depends on the engagement of evaluation users – those who should benefit from the knowledge generated through the studies. Useful evaluation therefore depends to a large degree on the development of an evaluation and learning culture and how well these are embedded in the organisation. This means that the organisation recognises and appreciates evaluation’s role and the functions it can have, particularly for helping understand what it is achieving and where and how improvements can be made. In short, the added value that evaluation can bring to the organisation is its ability to draw out the important lessons that can help improve the organisation’s performance.

However, whilst CDB senior management shows all the signs of embracing evaluation as an important strategic tool, there still appears to be some apprehension about receiving criticism, however constructive this might be. The OAC has already affirmed its interest in learning what can be "put right the next time around". In considering accountability, the committee is asking for a more strategic approach to learning and sharing knowledge based on evidence. The CDB also shares the development goals of other MDBs, that is, "to end extreme poverty and promote shared prosperity". This means looking for new forms of problem-solving and for ways to create a "development solutions culture". Hence there is an interest in learning from experience and exchanging knowledge about what works. This implies balancing accountability and learning: making sure they are not seen as opposites, but as compatible entities. This greater emphasis on learning requires a reframing of CDB's thinking and a readiness to deal with the constructive criticism that evaluation can offer.

22 CDB (2011) Evaluation Policy (p.2)
23


Recommendations

(The following is a list of possible recommendations, to be discussed and developed within the Review Panel initially and then discussed together with the OIE.)

- OAC's oversight responsibility needs to be strengthened (possibly).

- Review the Evaluation Policy to redress gaps.

- OIE to develop a 5-year strategy providing a step-by-step work programme, budget and timeframe for implementing the Evaluation Policy.

- Stronger leadership from the President to provide a climate conducive to promoting the added value of evaluation to overall management and to fostering a culture of critical analysis and learning.

- Stronger support from the Advisory Management Team for the evaluation function, by emphasising the accountability of CDB managers for performance results and by devising incentive schemes to support the accountability function.

- Set up committees (advisory groups) to accompany specific evaluation studies as a means of reinforcing ownership.

- OIE could contribute to developing a learning culture within the CDB by adopting the role of critical friend in its dealings with the CDB operations departments and the CDB more generally. Valuable insight can be gained from a focused conversation with the client about aspects of the evaluation that went well and those that did not. Some meetings might include all relevant stakeholders in addition to the main client. This feedback can then inform a subsequent internal project team debrief (e.g., identifying internal professional development or process improvement needs), and input from all perspectives can inform planning for future projects. A summary of these discussions can then be circulated so that everyone can use it for their own internal or external quality improvement purposes.

- Rather than asking "what went wrong?", such conversations can address "what surprised us, what we would do differently, what did we expect to happen that didn't, and what did we not expect to happen that did" – a better means of getting at the negative aspects without placing blame.

- OIE to train up and engage "champions" within CDB operations departments to help demonstrate evaluation's utility and provide "on the job" training in self-evaluation to colleagues.

- A quality control group could also be set up, as Picciotto suggested for DFID, to carry out and develop the work of Quality Assessment and, particularly, Quality at Entry monitoring.

- OIE should be given the resources to build M&E capacity in BMCs as a priority, possibly in partnership with other MDBs, development organisations and the countries themselves.

- Possibly criticise the over-emphasis on the five DAC criteria, which have been in use for more than 15 years without major revision. Given their importance and influence in the field, it is pertinent that independent professionals take a critical look at them, especially since many scholars and practitioners consider the quality of evaluations in development aid to have been quite disappointing (Chianca, T. (2008) "The OECD/DAC Criteria for International Development Evaluations: An Assessment and Ideas for Improvement", Journal of MultiDisciplinary Evaluation, 5(9); http://evaluation.wmich.edu/jmde/).

- Regarding the validation of self-evaluations: it is recommended that a leaner format be developed for the PCVRs, repeating less of the original PCR text and focusing on the validity of the PCR against a limited number of key issues to be assessed for each PCR. The PCVRs are also written out in extensive prose, which may not be needed for this type of document; a more tabular form with more succinct statements could lead to a leaner production process without any loss of usefulness. The existing "PCR checklist" would be a good starting point for this.
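As an illustration of what a more tabular PCVR might look like (the field names and helper below are the reviewers' own invention, building on the existing "PCR checklist", not an existing CDB format):

PCVR_ROW_FIELDS = ("criterion", "pcr_rating", "oie_rating", "evidence_note")

def pcvr_row(criterion, pcr_rating, oie_rating, evidence_note):
    """One succinct validation line per key issue, instead of restating the PCR text."""
    return dict(zip(PCVR_ROW_FIELDS, (criterion, pcr_rating, oie_rating, evidence_note)))

# Hypothetical example row:
print(pcvr_row("Effectiveness", "Satisfactory", "Marginally satisfactory",
               "Outcome indicator 2 not evidenced in the PCR annexes"))

A validation would then consist of one such line per key issue, with the narrative commentary reserved for cases of genuine disagreement.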

The link between self-evaluations, validations and independent evaluation is currently unclear, as is the link between self-evaluations and the QaE documents, so one wonders what all the effort on the operations side is for. This is a real issue: much of what is done is interesting and of reasonable quality, but there is a lack of coherence. (This observation is based on the documents alone; no interviews were conducted to obtain a broader picture.)

The EIB evaluation unit was criticised for the same thing in the past. It has since started to include "younger" (sometimes still ongoing) projects in its samples, and it redoes the portfolio analysis just before finalising a report to see whether things have changed; the services can, of course, also indicate in their response whether things have changed over time.

Recommendations for improving the process for study approval and funding

Give recommendations on priorities for OIE work

Funding should come preferably from the administrative budget. Unused monies could then be released in the annual budgetary reviews, but this should have no effect on the budget for subsequent years. SDF funding at a level… It is surprising to find that a Board-approved OIE work programme and budget is inadequate; either the proposed budget per work programme…


The Panel nevertheless encourages the creation of such a quality control unit. Its role cannot be fulfilled by the OIE, since it lies outside the OIE's scope and present capacity, although the OIE could play an advisory/methodological role.

Independence of the Office of Independent Evaluation (OIE)

Independence is absolutely central to the integrity and trustworthiness of evaluation. It is an agreed requirement within the development agencies and in the evaluation community as a whole. In examining the issue of independence and good practice, the reviewers are guided by the Evaluation Cooperation Group's recommendations on good practices, the CDB's Evaluation Policy, and the 2011 consultancy review of independence relating to the CDB's evaluation and oversight division.24 The appraisal is based on a comparison of the ECG's recommendations on independence25 with the current status of the OIE.

OIE and Independence: Recommendations from the Evaluation Cooperation Group (ECG)

The ECG considers the issue of independence in terms of four specific areas: organisational (structural) independence; behavioural (functional) independence; protection from outside interference (operational independence); and conflict-of-interest safeguards.

Organisational independence ensures that the evaluation unit and its staff are protected against any influence or control by senior or line management, and have unrestricted access to all documents and information sources needed to conduct their evaluations. It also ensures that the scope of the evaluations selected can cover all relevant aspects of the institution.

Behavioural independence refers to the evaluation unit's autonomy in setting and conducting its work programme and in producing quality reports that can be delivered without management interference.

Protection from outside interference refers to the extent to which the evaluation function is autonomous in setting its priorities, conducting its studies and processes, reaching its judgements, and managing its human and budgetary resources, all without management interference.

Conflict-of-interest safeguards refer to protection against staff conflicts of interest, whether arising from current, future or prior professional and personal relationships or from financial interests; provision for such safeguards should be made in the institution's human resource policies.
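Schematically, the ECG template can be thought of as a checklist of rated items. The sketch below is the reviewers' own illustration (the names and the roll-up function are ours, not part of the ECG standard), mirroring the structure used in Tables 1 to 3 below:

from dataclasses import dataclass
from typing import Dict, List

RATINGS = ("Complies", "Partially complies", "Does not comply")

@dataclass
class IndependenceItem:
    area: str       # e.g. "Organisational", "Behavioural", "External interference"
    aspect: str     # what the template asks about
    indicator: str  # how compliance is judged
    rating: str     # one of RATINGS
    comment: str = ""

def summarise(items: List[IndependenceItem]) -> Dict[str, Dict[str, int]]:
    """Crude roll-up: count ratings per area."""
    summary: Dict[str, Dict[str, int]] = {}
    for item in items:
        assert item.rating in RATINGS
        summary.setdefault(item.area, {r: 0 for r in RATINGS})
        summary[item.area][item.rating] += 1
    return summary

# One row drawn from Table 1 below:
example = IndependenceItem(
    area="Organisational",
    aspect="Reporting line of the evaluation unit",
    indicator="Direct reporting relationship to the Board or a Board committee",
    rating="Complies",
    comment="OIE reports to the Board of Directors through the OAC",
)
print(summarise([example]))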

The OIE’s Independence in Practice

Organisational / structural independence

On the whole, the Panel acknowledges and commends the efforts made by the CDB to assure the OIE's organisational independence. The CDB's Evaluation Policy provides for the OIE's organisational independence from line management, and the interview data suggest that there is also wide acceptance and acknowledgement of why the OIE should have such independent status. Table 1 below provides our overall assessment of this aspect of the OIE's independence compared with the ECG recommendations.26

24 Osvaldo Feinstein & Patrick G. Grasso (May 2011), Consultancy to Review the Independence of the Evaluation and Oversight Division of the Caribbean Development Bank.
25 ECG (2014), Evaluation Good Practice Standards, Template for Assessing the Independence of Evaluation Organizations, Annex II.1.
26 Based on ECG (2014), Template for Assessing the Independence of Evaluation Organizations, Evaluation Good Practice Standards, Annex II.1.


Table 1: OIE organisational independence compared with ECG recommendations

Aspect: The structure and role of the evaluation unit.
Indicator: Whether the evaluation unit has a mandate statement making clear that its scope of responsibility extends to all operations of the organization, and that its reporting line, staff, budget and functions are organizationally independent from the organization's operational, policy and strategy departments and related decision-making.
Assessment: Partially complies. The Policy is broad enough to cover the full range of MDB-type evaluations; in practice, however, this would not be possible without additional human and budgetary resources.

Aspect: The unit is accountable to, and reports evaluation results to, the head or deputy head of the organization or its governing Board.
Indicator: Whether there is a direct reporting relationship between the unit and (a) Management, and/or (b) the Board, or (c) the relevant Board committee.
Assessment: Complies. The OIE reports to the Board of Directors (BoD) through its Oversight Assurance Committee (OAC).

Aspect: The unit is located organizationally outside the staff or line management function of the program, activity or entity being evaluated.
Indicator: The unit's position in the organization relative to the program, activity or entity being evaluated.
Assessment: Complies. The OIE is located outside, and is therefore independent of, CDB line management.

Aspect: The unit reports regularly to the larger organization's audit committee or other oversight body.
Indicator: Reporting relationship and frequency of reporting to the oversight body.
Assessment: Complies. The OIE reports five times per year to the OAC. Board approval for an additional executive meeting between the Head of the OIE and the OAC at least once per year was given in October 2015.

Aspect: The unit is sufficiently removed from political pressures to be able to report findings without fear of repercussions.
Indicator: Extent to which the evaluation unit and its staff are not accountable to political authorities and are insulated from participation in political activities.
Assessment: Complies.

Aspect: Unit staffers are protected by a personnel system in which compensation, training, tenure and advancement are based on merit.
Indicator: Extent to which a merit system covering compensation, training, tenure and advancement is in place and enforced.
Assessment: Partially complies, through CDB human resource policy. However, the skill needs of OIE staff ought to be regularly reviewed in light of the move towards higher-level evaluations, and the appraisal of skill needs and the hiring of relevant staff should be completely under the authority of the Head of Evaluation. This is not sufficiently clear in the Policy or in the other documents we reviewed.

Aspect: The unit has access to all needed information and information sources.
Indicator: Extent to which the evaluation unit has access to the organization's (a) staff, records and project sites; (b) co-financiers and other partners and clients; and (c) programs, activities or entities it funds or sponsors.
Assessment: Complies. The available evidence suggests that there is no reason to doubt such access. However, systematic and easily accessible documentation is lacking in the CDB; this is one of its weak points, and delays in obtaining the relevant documents can affect the timeliness of evaluation studies.

However, independence should not mean isolation. There appears to be a detachment between the OIE and the CDB that is of concern to the Panel: on the one hand between the OIE and operations staff, and on the other in the structural arrangements between the OIE and senior management.

First, in agreeing that the OIE should concentrate on strategic and thematic in-depth evaluations, responsibility for project monitoring and evaluation was handed over to operations. The division is clear and respected, but it has its drawbacks. With the OIE no longer systematically involved at the front end of project design, monitoring data needs are likely to be poorly defined, and weak monitoring data will contribute to weaker evaluations. (More on this point under the heading "Self- and Independent Evaluations".)

In the reviewers' opinion, it is a common misunderstanding to assume that providing evaluator advice on monitoring and evaluation data will compromise evaluator independence. On the contrary, evaluation input into project design is essential to ensure that the logic, indicators and data needs are addressed so that a future evaluation of the achievements can be empirically grounded.

This is not to say that the OIE no longer has any influence at the front-end design stage; the point of focus has merely shifted. The OIE now systematically provides such input more generally to the corporate planning teams for the tools and systems they are developing to support the MfDR framework. The monitoring data for projects and their implementation should improve once the Project Performance Evaluation System (PPES) and the Portfolio Performance Management System (PPMS) are updated and operational.

Second, the OIE has limited formal access to the Advisory Management Team (AMT) weekly meetings, where the President and senior management gather to exchange up-to-date information on the dynamics of CDB policy and practice. The OIE is not regularly invited to these meetings in any capacity, nor given a copy of the agenda or minutes; it is only occasionally invited to attend in order to discuss an evaluation report or management feedback. This means that the OIE is unlikely to pick up on the "when" and "what" of key decisional issues, or to provide input into the discussion based on evaluative information. Its observer status at Loans Committee meetings, and its role as a participant informer at OAC and BoD meetings and discussions, do not necessarily provide the same insight into the dynamics of management actions and decisions.

To respond to this situation, the President has agreed to meet regularly with the Head of the OIE in order to keep him up to date with CDB strategic thinking. This is a welcome change.

OIE Independence and Behavioural Issues

The Panel has concerns about some behavioural issues. For example, through both the interviews and the documentary review, we learned of considerable delays in processing both the independent evaluation reports and the OIE's validations of the CDB's self-evaluations. Delays generally arise in receiving feedback on the independent reports, first from the relevant operational department and then from the AMT, and subsequently in providing the OIE with a management response, which is initially drafted by operations staff before being reviewed by the AMT. (OIE reports cannot be submitted to the OAC without the relevant management response.) This two-layer process for preparing submissions to the Board is inefficient and could in future pose a threat to evaluation's independence by delaying the OIE's timely reporting to the OAC.

The OIE's validations of the CDB self-evaluations are also submitted to the OAC, and it is in both sides' interest to clear up any misunderstandings beforehand. Despite attempts to improve the timeframe for completing these validations, delays are the norm rather than the exception. Table 2 below summarises our assessment of the behavioural aspects of independence.

Table 2: OIE and Behavioural Independence

Aspect: Ability and willingness to issue strong, high-quality and uncompromising reports.
Indicator: Extent to which the evaluation unit (a) has issued high-quality reports that invite public scrutiny (within appropriate safeguards to protect confidential or proprietary information and to mitigate institutional risk) of the lessons from the organization's programs and activities; (b) proposes standards for performance that are in advance of those in current use by the organization; and (c) critiques the outcomes of the organization's programs, activities and entities.
Assessment: Partially complies. Paucity of data and documentation sometimes hinders the quality of reports. The OIE emphasises the learning function of evaluation and is cautious in its criticism, recognising that management is going through a transitional stage and can still be overly defensive.

Aspect: Ability to report candidly.
Indicator: Extent to which the organization's mandate provides that the evaluation unit transmits its reports to Management/Board after review and comment by relevant corporate units, but without management-imposed restrictions on their scope and comments.
Assessment: Partially complies. Reporting to the Board is sometimes compromised by delays in the review/comment process between the OIE and the CDB. Any delay in producing a management response also impairs the timely submission of a report to the Board, since the two must be submitted together.

Aspect: Transparency in the reporting of evaluation findings.
Indicator: Extent to which the organization's disclosure rules permit the evaluation unit to report significant findings to concerned stakeholders, both internal and external (within appropriate safeguards to protect confidential or proprietary information and to mitigate institutional risk); and who determines the evaluation unit's disclosure policy and procedures (Board, relevant committee, or management).
Assessment: Partially complies. The OIE conforms to the CDB's disclosure policy. However, the dissemination of evaluation findings currently appears to be restricted to website publication and reports to the Board. A more targeted communication strategy including other key stakeholders (e.g., project implementers in the BMCs) should be developed and put in place.

Aspect: Self-selection of items for the work program.
Indicator: Procedures by which work program items are chosen, through systematic or purposive means, by the evaluation organization; consultation on the work program with Management and the Board.
Assessment: Complies. The OIE ensures that its work program is drawn up after consultation with both CDB Management and the Board to seek their input on relevant topics and themes.

Aspect: Protection of the administrative budget, and other budget sources, for the evaluation function.
Indicator: Line item of the administrative budget for evaluation determined in accordance with a clear policy parameter and preserved at an indicated level or proportion; access to additional sources of funding with only formal review of the content of submissions.
Assessment: Partially complies. The administrative budget supporting OIE work is protected. Access to additional sources of funding is possible if well argued and justified, but the approval process is complex and inefficient (see Figure 1 below).

OIE and Protection from External influence or interference

Our overall assessment is provided in Table 3 below. The OIE's independence in the design, conduct and content of its evaluations does not appear to be subject to any external interference. However, securing funding from sources outside the OIE's administrative budget, i.e. from the Social Development Fund, is an unduly complex and lengthy process. As such, we consider that the current funding process can affect the OIE's choice of the types of evaluation it can undertake. (See Figures 1 and 2 below.)

Table 3: OIE and its Independence from External Influence or Interference

Aspect: Proper design and execution of an evaluation.
Indicator: Extent to which the evaluation unit is able to determine the design, scope, timing and conduct of evaluations without Management interference.
Assessment: Complies, though within the limits of the restricted human and financial resources available.

Aspect: Evaluation study funding.
Indicator: Extent to which the evaluation unit is unimpeded by restrictions on funds or other resources that would adversely affect its ability to carry out its responsibilities.
Assessment: Partially complies. The OIE must work within the limits of the agreed administrative budget wherever possible; if additional resources are needed for studies, it must seek alternative funds elsewhere. The budget limitations can have an effect on the type of evaluations undertaken, and therefore on the OIE's independence of choice.

Aspect: Judgements made by the evaluators.
Indicator: Extent to which the evaluator's judgement as to the appropriate content of a report is not subject to overruling or influence by an external authority.
Assessment: Complies. The available evidence suggests that the Board and Management accept the evaluators' independent interpretations and conclusions; management responses are the accepted place to raise any difference of opinion.

Aspect: Evaluation unit head hiring/firing, term of office, performance review and compensation.
Indicator: Mandate or equivalent document specifies procedures for (a) hiring and firing, (b) term of office, (c) performance review, and (d) compensation of the evaluation unit head that ensure independence from operational management.
Assessment: Complies. The Head of the OIE is appointed by the CDB President in agreement with the OAC for a five-year term, renewable once. The Head can be removed from office by the President or the Board, but only with the agreement of both parties. However, the Head reports to the President for all administrative and personnel matters. Even though this arrangement was not recommended in the 2011 Feinstein & Grasso report on independence, the BoD accepted the CDB's reasons for keeping it (e.g., most OAC members are non-resident and cannot oversee day-to-day work).
Indicator (continued): Extent to which the evaluation unit has control over (a) staff hiring, (b) promotion and pay increases, and (c) firing, within a merit system.
Assessment: Partially complies. All OIE staff members are treated in the same way as other CDB staff; the Head has limited control over the hiring, firing or promotion of OIE staff.

Aspect: Continued staff employment.
Indicator: Extent to which the evaluator's continued employment is based only on reasons related to job performance, competency or the need for evaluator services.
Assessment: Partially complies. While the EP is clear about procedures for hiring, firing and promotion, all of which must conform to CDB human resource policy, nothing is said about how any difference of opinion between the CDB and the Head of the OIE would be resolved regarding continued staff employment where the level of technical or interpersonal competencies needed to meet new demands has changed.

Avoidance of Financial, Personal or Professional Conflicts of Interest

This aspect refers to the organisation's human resources policy: provisions must be in place to protect against actual or potential conflicts of interest. The Panel requested, via the OIE, evidence from human resources on any such provisions but did not receive an answer. It must therefore be assumed that this aspect of independence, past or present, does indeed form part of normal CDB human resource policies.

To conclude: the Panel is impressed with the measures the CDB has taken to assure the organisational independence of the OIE. Its independent status is accepted and respected by senior and line management. The OIE's budget, however, is not independent of the overall CDB administrative budget, and this affects its choice of evaluation types and approaches. Some of the behavioural issues affecting independence were also of concern, especially the delays in the exchange of documents between the OIE and the operations departments, which have a direct effect on timely reporting to the OAC. As for protection from outside interference, our concerns largely relate to the OIE's independence over staffing issues; there are potential loopholes in the current arrangements that could undermine the OIE's autonomy over its staff.

OIE's Strategy, Work Practices and Work Programme

The OIE has had to develop a plan to implement the Evaluation Policy. This raises questions such as: what are the priorities, and what is the timeframe for achieving which activities? These were partially addressed in the OIE work programme and budget for 2012 to 2014, but that programme proved over-ambitious. Much of the period 2012 to 2015 was therefore taken up with preparing the OIE's shift in focus from project-based evaluations to high-level thematic and in-depth strategic studies. This has meant adopting a three-way approach: (1) for self-evaluations, reducing the OIE's time input to supporting the process; (2) for independent evaluations, taking stock of the gaps in coverage and expertise; and (3) networking to share experiences with centres of expertise and to align the OIE with international practices. In addition, amongst other duties, it has been supporting the development of MfDR tools and systems, such as the Project Performance Assessment System, by providing advice and input on programme logic and monitoring needs. The OIE plans to conduct two to four high-level studies per year from 2016. It has also chosen to increase the involvement of its professional staff in conducting independent evaluations. Outsourcing is still needed when a study is funded by the SDF, when time is limited, and when specific expertise is required.

The plans, however, appear to place little emphasis on the activities associated with evaluation management (e.g. knowledge management) and the time these require. Other time demands mentioned in the previous sections, such as delays in completing reports and validation work, have also affected the OIE's plans. The more recent work plans set the task of delivering utility-focused and timely evaluations, but they lack clarity on how the OIE proposes to surmount the time and data issues, which are far from new. In short, they lack a theory of change and a timeline. The challenges that must be dealt with to enable the OIE to move up the MDB evaluation pyramid27 are brought out in the remaining sections of this Review, not least given the limited resources available.

To conclude: the OIE has taken a first step in proposing a strategy for establishing itself as an independent evaluation resource. But its strategy lacks a theory of change and a prioritisation of tasks, and it should place more emphasis on evaluation management activities.

The Value / Usefulness of OIE's Independent Evaluations

Evaluation is a powerful tool that can provide useful, evidence-based information to help inform and influence policy and practice. But useful evaluations depend not only on the evaluators' skills; they depend on several other important factors as well: (1) planning evaluations to be relevant to the priorities of the organisation's work, with results delivered in time to be useful; (2) the degree of consultation with, and ultimately ownership by, those who seek evaluative information; (3) the tools used to support the evaluation process per se; and (4) the credibility and quality of the evaluation products.28

1. Planning relevant and timely evaluations

The OIE now works on a three-year rolling work plan that sets out the broad areas for enquiry. So far, there are no agreed criteria for selecting the specific topics for independent evaluation, although the priorities tend to reflect those of the CDB's strategic plan. Decision-making is instead rather ad hoc, based on a process of dialogue between the OIE and the CDB, and between the OIE and the Board.

One of the OIE's two objectives for 2015, therefore, was to define a work plan and agree priorities based on a "utilisation-focused" approach. This means that studies are selected and planned to be relevant and useful to the organisation's needs.

The OIE has achieved this objective with respect to its latest studies: the Social Development Fund (SDF) Multicycle 6 & 7 Evaluation, the Haiti Country Strategy evaluation, and the evaluation of the CDB's Policy Based Operations. Each of the three has been planned to deliver its results in time to provide the CDB Board of Directors with relevant information for negotiating the next round of funding. In spite of some delays due to myriad causes, not least the extra effort needed to secure essential data, the studies are expected to deliver on time.

The processes for agreeing the OIE's work plan and specific evaluations on the one hand, and for securing alternative funding on the other, are shown in Figure 1 below. The Panel was surprised to learn how bureaucratic (in its internal approval steps) and inefficient (in the time it takes) the process seems to be. The concern here is that such a process could pose a threat to assuring the Board of "timely studies".

Figure 1: Selection of Evaluation Topics and Funding Source

27 US Treasury Report to Congress on Evaluation Standards and Practices at the Multilateral Development Banks (September 2014), Annex C.
28 These aspects reflect the principles and good standards of the Evaluation Cooperation Group and the evaluation community more generally.

[Figure 1 is a flowchart. Recoverable elements: consultation with CDB Operations and the OAC/Board on the selection of evaluation topics; the three-year work programme and budget (approved by the Board); the annual OIE report and work plan submitted to the OAC; drafting and internal review of the Approach Paper / Terms of Reference, with the final Approach Paper submitted for OAC approval; and a funding track leading either through the OIE administrative budget (Board approval necessary above USD 150,000; Board notification only at USD 150,000 or below) or through the SDF (preparation of a TA Paper, similar in content to the Approach Paper but in a different format, approved by the internal Loans Committee). In both tracks the OIE then selects and contracts consultants, if any.]
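For clarity, the funding-track rule in Figure 1 can also be expressed as a short decision sketch. This is the reviewers' own illustrative reading of the figure (the constant and function names are invented, and the exact branching in the figure is partly an assumption):

# Illustrative only: names are ours; branching reflects our reading of Figure 1.
BOARD_APPROVAL_THRESHOLD_USD = 150_000

def approval_steps(cost_usd: int, source: str) -> list:
    """Approval steps for an evaluation study, by funding source."""
    steps = ["Final Approach Paper / Terms of Reference"]
    if source == "OIE admin budget":
        if cost_usd > BOARD_APPROVAL_THRESHOLD_USD:
            steps.append("Board approval necessary (Board Paper)")
        else:
            steps.append("Board notification only")
    elif source == "SDF":
        steps.append("Prepare TA Paper (content similar to the Approach Paper)")
        steps.append("Approval by the internal Loans Committee")
    steps.append("OIE selects and contracts consultants (if any)")
    return steps

print(approval_steps(200_000, "OIE admin budget"))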

2. Consultation and ownership

"The credibility of evaluations depends to some degree on whether and how the organization's approach to evaluation fosters partnership and helps build ownership and capacity in developing countries." (ECG good practices)


The OIE engages with the OAC, CDB senior management and operations in agreeing its three-year work plan and then in selecting the specific topics and themes. It also discusses the evaluation approach paper (design and implementation plan) with the CDB and the OAC before completing the final version. However, preliminary and final drafts of the report are submitted to CDB line and senior managers only for comment and identification of factual errors; only final versions are passed to the OAC. A series of discussions is held, first with the CDB and then with the OAC, on the results and their implications. Discussions with the OAC are more limited, owing to the overburdened agenda of OAC and Board meetings, as previously discussed.

In short, the OIE is to be commended for following the recommendations of professional good practices and standards on participative approaches: it has succeeded in introducing a modus operandi that involves the key players in the selection of evaluation topics, the evaluation designs and their results. Figure 2 below provides an overview of the evaluation implementation and stakeholder engagement processes.

Figure 2: Evaluation Study Implementation and Feedback Loops

[Figure 2 is a flowchart. Recoverable elements: three implementation arrangements — (A) fully outsourced to external consultants with oversight by the OIE, (B) conducted by OIE staff, and (C) conducted jointly by external consultants and the OIE. The common flow runs from the Terms of Reference through preparation of an Inception Report / Approach Paper (including the detailed evaluation plan, tools, timeline and logistics), data collection and analysis, a presentation/workshop of interim findings and conclusions for immediate feedback and validation (with a summary and slides for discussion with the CDB), submission of the draft final report to the OIE, review loops between the OIE and the CDB (potentially also the BMC) with feedback to the evaluation lead, and submission of the final report to the OIE. The OIE-approved final report then goes to CDB senior management for a management response and is considered by the AMT; the final report and management response are submitted together to the OAC/Board for endorsement, followed by preparation for disclosure and dissemination.]


Notes to Figure 2

1. The OIE informed the Panel that this is an abbreviated version: there are additional steps (secondary processes) when evaluations are procured (by tender or single source), when there are additional review loops, when updates are given to the OAC, and so on.

2. The Panel was also informed that the OAC may decide to return a report to the OIE, or to demand specific actions from Management based on the report.

This process is engaging and appears to have secured senior management and OAC interest and buy-in, as witnessed in the latest studies. But there is a downside too: the process takes much time and, in our view, is partly unnecessary. The Panel appreciates that staff from operations and the AMT may both want to confer on an appropriate management response, but the same need does not apply to reviewing an independent report for factual errors. The two-phase approach therefore seems inefficient.

Contact between the OIE and the CDB and/or the OAC during actual study implementation is most often restricted to the occasional progress report, particularly when studies run behind time. There is no "accompanying group" for individual studies, which would include internal and possibly external partners. Such advisory groups have shown their worth in a number of contexts, both for improving buy-in and for providing strategic input. The OIE does, however, arrange discussions for reflecting on emerging findings, though we are not sure how systematic this feedback loop is.

More generally, outside of an evaluation study the OIE has limited dealings with operations. The OIE has an advisory role in providing them with help, particularly training, guidelines and tools to support self-evaluations. We are nevertheless concerned about the apparent distance between the two and how this has affected the perceived value of evaluation. (For more on this point, see the section below on "Self- and Independent Evaluations".)

The Panel wishes to stress, however, that this is not the case for newly appointed senior managers, among whom a much more open attitude to evaluation and appreciation of its potential value was evident;


they expressed interest in drawing out important lessons on what works, how, for whom, and under what conditions. In one case the interest was followed up in practice: the OIE was recently invited by a senior manager to share evaluative knowledge and experience with his staff regarding policy-based operations.

Overall, we can certainly say that the key stakeholders within the CDB are adequately integrated into the evaluation process so as to foster their buy-in and ownership. More generally, however, we feel that the utility of independent evaluations can be improved by fostering a supportive climate that wants to learn through calculated trial and error. The constructive criticism that evaluation can offer adds value to understanding the strengths and weaknesses of such strategies. This, however, cannot be achieved overnight.

3. Tools to support the evaluation process

So far, during this transitional phase, the OIE has mainly focused on improving the tools that support the operations areas' self-evaluations. This has left the OIE with little time to produce checklists or tools to support its own studies. There are plans to develop an OIE manual to guide and support the independent evaluation process. Such plans should be encouraged, as these documents will form a very important part of training, particularly for newcomers to the OIE team.

In the meantime, the OIE and operations staff refer to the Performance Assessment System (PAS) manuals for evaluation activities. The manuals are based on the DAC criteria and ECG principles, and much emphasis is given to the rating system and to how and what should be rated. However, we find them lengthy, unwieldy and over-complicated. Moreover, such manuals should be used for reference; they cannot and should not replace first-hand training in how to plan, conduct and manage the evaluation process.

Quality Assessment (QA) and Quality at Entry (QaE)

There was a transition period between 2012 and 2014 to establish the OIE. Work on the PAS, QaE, PCRs and ARPP, which had started earlier, was therefore completed after the OIE came into existence, but this work effectively had no formal "home" in operations. The Panel was told that there had been some discussion about creating a Quality Assurance unit within the CDB (OPS), but its current status is unclear.

The QaE Guidance Questionnaire was developed earlier and was completed by the OIE. It was used to assess the documents that came to the OIE for comment at the review stage; the results were then sent to the Portfolio Manager/Project Coordinator, indicating any gaps or issues that needed to be addressed or clarified. QaE Guidance Questionnaires were developed for all the Bank's lending products and CSPs, and to assess the quality of supervision.

After the QaE was launched Bank-wide, several operations officers saw the merit of using the QaE Guidance Questionnaire in the field and adopted it as a tool during appraisal missions to cross-check and test their data collection and analysis.

The OIE's use of the QaE was discontinued in 2014, owing to limited resources and a stronger focus on evaluations. The OIE still sometimes comments on specific appraisals, but very selectively.

Both QaE and QaS (quality at supervision) are also addressed in the PAS manuals. In addition, the QaE and PAS have been incorporated into Volume 2 of the Operations Manual (OPPM).

The Review Panel assessed the QaE forms. They are relatively standard, adapted to the specificities of the CDB, and they contribute to judging a project's expected quality in a relatively objective way. As such, they are helpful as a benchmark in the ex-post assessment of projects.

The Panel considers that the lack of an established quality unit in the CDB (independent from the OIE) is a weakness that should be addressed in the near future.


4. Credibility and Quality of Evaluation Products

As in many other MDBs, evaluation activities include both independent and self-evaluations; the latter take the form of completion reports on operational projects and country strategy programmes and are prepared by operations staff. The OIE then validates the quality of these reports. The self-evaluations should inform the more strategic studies conducted independently by the OIE. (More on the relationship between the two is provided later in this Review.)

An independent evaluation is processed as follows: the OIE prepares an Approach Paper (AP) for approval by the OAC. If the study is to be outsourced, the AP becomes the basis for the Terms of Reference (ToR), which, depending on the size of the budget, may be put to tender. The contracted evaluator then prepares an Inception Report (IR) after some desk and field research has taken place; this intermediary report is not produced when the OIE itself conducts the evaluation. Sometimes a progress report is submitted, but otherwise the next stage is the delivery of the final report in successive drafts. (Assessments are like evaluations but more limited in scope and depth of analysis.)
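Purely as an illustration of this document flow (the function and its names are the reviewers' own, not a CDB artefact), the sequence of deliverables can be sketched as follows:

# Illustrative only (names ours): deliverables for an independent evaluation,
# outsourced or conducted in-house by the OIE.
def study_deliverables(outsourced: bool, tendered: bool = False) -> list:
    docs = ["Approach Paper (prepared by the OIE, approved by the OAC)"]
    if outsourced:
        docs.append("Terms of Reference based on the AP" +
                    (", put to tender" if tendered else ""))
        docs.append("Inception Report (after initial desk and field research)")
    docs.append("Progress report (sometimes)")
    docs.append("Final report, in successive drafts")
    return docs

print(study_deliverables(outsourced=True, tendered=True))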

Since 2012, the OIE has produced a range of studies and approach papers. This review is based on those listed in Table 4 below, as provided by the OIE, covering the period from May 2012 to December 2015: three evaluations, four assessment studies, fourteen validations of self-evaluations, and three approach papers for upcoming evaluations.

Table 4: List of studies (N = 24) submitted to the Board for the period January 2012 to December 31, 2015

Board Meeting 251, May 2012:
- Ex-Post Evaluation Report on Road Improvement and Maintenance Project, Nevis – St. Kitts and Nevis
- Validation of Project Completion Report on Sites and Services – Grenada
- Assessment of Effectiveness of Implementation of Poverty Reduction Strategy 2004-09

Board Meeting 253, Oct. 2012:
- Assessment of Extent and Effectiveness of Mainstreaming Environment, Climate Change, Disaster Management at CDB

Board Meeting 254, Dec. 2012:
- Assessment of the Implementation Effectiveness of the Gender Equality Policy and Operational Strategy of the Caribbean Development Bank
- Validation of Project Completion Report on Enhancement of Technical and Vocational Education and Training – Belize
- Validation of Project Completion Report on Fourth Road (Northern Coastal Highway Improvement Section 1 of Segment II) Project – Jamaica
- Assessment of the Effectiveness of the Policy-based Lending Instrument

Board Meeting 256, May 2013:
- Validation of Project Completion Report on Expansion of Grantley Adams International Airport – Barbados
- Validation of Project Completion Report on Fifth Water Supply Project – Saint Lucia

Board Meeting 261, May 2014:
- Validation of Project Completion Report on Immediate Response Loan, Tropical Storm Gustav – Jamaica
- Validation of Project Completion Report on Social Investment Fund – Jamaica
- Validation of Project Completion Report on Disaster Mitigation and Restoration – Rockfall and Landslip, Grenada

Board Meeting 263, Oct. 2014:
- Validation of Project Completion Report on Basic Education Project – Antigua and Barbuda
- Approach Paper for SDF 6 & 7 Multicycle Evaluation

Board Meeting 264, Dec. 2014:
- Validation of Project Completion Report on Policy-Based Loan – Anguilla
- Validation of Project Completion Report on Immediate Response Loan, Tropical Storm Arthur – Belize
- Evaluation of Technical Assistance Interventions of the Caribbean Development Bank Related to Tax Administration and Tax Reform in the Borrowing Member Countries 2005-2012

Board Meeting 265, March 2015:
- Approach Paper for the Evaluation of Policy Based Operations

Board Meeting 266, May 2015:
- Validation of Project Completion Report on Upgrading of Ecotourism Sites – Dominica
- The Evaluation of the Caribbean Development Bank's Intervention in Technical and Vocational Education and Training (1990-2012)

Board Meeting 267, July 2015:
- Validation of Project Completion Report on The Belize Social Investment Fund I Project – Belize

Board Meeting 268, Oct. 2015:
- Approach Paper, Country Strategy and Programme Evaluation, Haiti

The review and analysis of these documents is based on the UNEG Quality Checklist for Evaluation Reports (http://www.uneval.org/document/detail/607) as well as on ECG guidance (Big Book on Good Practice Standards).

Approach Papers

Three Approach Papers (APs) were made available to the Panel (see Table 4 above). An AP describes the rationale for the evaluation, the background to the topic evaluated, the evaluation framework (criteria and questions) and the approach. It also describes the team and provides an initial plan. As the first main deliverable of the OIE's evaluation process, the AP is the starting point and therefore a major determining element in the roll-out of each evaluation; APs therefore "have to get it right".

The APs examined are clearly written, well structured and of reasonable length.29 We were surprised to find, however, that they do not make explicit the objectives of the evaluated intervention(s), for example through a clear objective tree or through an explicit theory of change, intervention logic or logframe. While one of the APs contains, in an appendix, a results framework for the evaluation, the results framework for the intervention itself (the PBO) is lacking.

Inception reports

Only one Inception Report was given to the Panel for review (SDF 6 & 7). It gives an in-depth description of the evaluated programme and provides a clear theory of change. It is good practice that this is established after a pilot field mission, which helps to amend the initial AP on the basis of field observations and to sharpen the evaluation questions if needed.

However, it remains good practice to have the theory of change elaborated in an intervention's initial design documents. This would facilitate OIE evaluations after project completion. Establishing the theory of change of any intervention could be included more explicitly in the QaE form, to be developed jointly by the quality unit referred to above and the OIE.
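As a minimal sketch of what such a theory-of-change skeleton might contain (this structure and the road-project example are the reviewers' own hypothetical illustration, not a CDB template):

from dataclasses import dataclass, field
from typing import List

@dataclass
class ResultsChain:
    inputs: List[str]
    activities: List[str]
    outputs: List[str]
    outcomes: List[str]
    impact: str
    assumptions: List[str] = field(default_factory=list)  # assumptions behind the causal links

# Hypothetical example for a road project:
chain = ResultsChain(
    inputs=["CDB loan", "TA grant"],
    activities=["Rehabilitate road segments"],
    outputs=["Kilometres of road upgraded"],
    outcomes=["Reduced travel time and vehicle operating costs"],
    impact="Improved access to markets and services",
    assumptions=["Maintenance budgets are sustained after completion"],
)
print(chain.impact)

Capturing the chain, and the assumptions linking its steps, at design stage is what gives an ex-post evaluation something concrete to test.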

Evaluations and Assessments

Three evaluations and four assessment reports completed during the review period were considered. Assessments are similar to evaluations but have a narrower scope: they focus on a limited set of aspects or judgement criteria, mainly effectiveness, i.e. the achievement of objectives.

29 Opportunities remain of course to be more concise and to move parts to appendices, e.g., detailed descriptions of the evaluation team or part of the description of the evaluated intervention.


Evaluations generally base their judgements on the internationally recognised DAC criteria, as well as on aspects of the CDB's and the BMCs' management of the intervention.

In general, these reports are of reasonable quality. In the main, they explain the evaluated object30 and set out the evaluation objectives. The findings are organised around the evaluation criteria or questions detailed in the scope and objectives section of the report, and they are based on evidence derived from the data collection and analysis methods described in the methodology section. The reports tend to dwell on the limitations the evaluation encountered, but without becoming defensive. In one case (the PBL Assessment) the report starts with a summary of reviews of the topic done by other MDBs; this was a pleasant surprise and indeed a good practice that could well be adopted in future evaluations.

However, the reports also show several significant weaknesses:

- Reports do not always provide clear (reconstructed) intervention logics or theories of change for the intervention(s) evaluated.31 Evaluation criteria and questions are defined at a fairly general level. They are translated into more precise "research questions" (in an "Evaluation Design Matrix", for each project and each criterion), but it is unclear how these questions relate to the intervention logic, as this is not made explicit. This may be done in inception reports (of which, as noted above, only one was available for review), but it should also be done in the final reports.

- The reports do not describe the path from the evaluation questions to the answers: how the evaluation judgements are made, and how these ultimately translate into ratings for each criterion and each project. In other words, the explanation provided in the evaluation frameworks is inadequate; the "evaluation design matrix" currently used does not provide sufficient insight into how an intervention's performance is ultimately judged. The links between findings, conclusions and recommendations could be improved by making this explicit. Reports should include the story of how the evaluand is credibly linked to any observed outcomes and impacts, and should be clear on how causal claims are made.

- With the exception of the PBL Assessment, the reports are lengthy and detailed. One reason for this is an over-emphasis on ratings: their detailed discussion, project by project and criterion by criterion, occupies a very prominent position in the main body of the reports. Although ratings are traditionally an important element in MDB evaluations, too strong an emphasis can be tedious and may distract the reader from the real lessons to be drawn. The detailed discussion of ratings, and their evidence base, would be better placed in an appendix, with a brief summary in the main report. This would give the lessons and recommendations a more prominent position than is now the case, and would make the evaluation reports not only shorter but also more interesting to read, which could help improve evaluation's image within the organisation.

- The reviewers feel that the OIE evaluations tend to over-emphasise objectives-based evaluation33 and the DAC criteria, to the exclusion of other evaluation approaches such as Developmental Evaluation (Patton, 2010).34 Evaluation should be case-specific and answer the actual information needs of managers and other decision-makers, rather than always concentrating on final performance.

- Related to the previous point (and again with the exception of the PBL Assessment), the executive summaries (approximately eight pages) are too long. To increase a report's potential impact, they would need to be reduced to two or three pages and be more focused; again, this could be done by dwelling less on the individual ratings of projects and more on key findings, lessons and conclusions. More generally, reports could be better adapted to the needs of different audiences. Although not strictly limited to evaluations, the Health Evidence Network reports35 are a model that could be adapted for evaluation reporting purposes; they are specifically geared towards addressing policy and decision-making.

- The "Recommendations to BMCs" are an interesting feature of the reports, although we are unsure to what degree such recommendations can effectively be followed up by the OIE or the Bank; they could certainly be taken up with BMC Board members.

- Reports (e.g. the evaluation report on Technical Assistance) dwell at length on the technical problems encountered during the evaluation. Although these are important issues, this material would again be better placed in an appendix to improve the report's flow and readability. What counts is the story of the intervention, not the story of the evaluation (see, for instance, the "Limitations" section of the TA report).

30 Sometimes at great length: in the SDF 6 & 7 multicycle evaluation report, for instance, the findings only begin on page 30.
31 Again, the SDF 6 & 7 evaluation is said to be guided by a "Logic Model", which is not explained.
33 The focus of an objectives-oriented evaluation is on specified goals and objectives and on determining the extent to which these have been attained by the relevant intervention. See, for example, Worthen, Sanders & Fitzpatrick (1997), Program Evaluation: Alternative Approaches and Practical Guidelines (2nd ed.), White Plains, NY: Addison Wesley Longman.
34 Patton, M.Q. (2010), Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use, Guilford Press.
35 See the reports available from the WHO's Health Evidence Network at http://www.euro.who.int/en/data-and-evidence/evidence-informed-policy-making/health-evidence-network-hen

OIE Validations of Project and Country Strategy Programme Completion Reports (referred to collectively as PCRs hereafter)

As noted above, the OIE has the mandate to validate the Projects and Economics departments' PCRs and CSPCRs. In this period of transition, however, much of the OIE's work since 2012 has been devoted to the backlog of CDB self-evaluation validations. In theory, an estimated 15 completion reports fall due each year, but delays in submitting the reports for validation are commonplace. With the change of Head in June 2014, the OIE therefore secured the OAC's agreement to reduce the number of validations to a maximum of six per year. Even so, a backlog continues to accumulate: only two PCRs were given to the OIE for validation in 2015.
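A back-of-envelope check of these figures, assuming the rates stay constant (an illustration only, using the numbers cited above):

due_per_year = 15      # completion reports falling due annually (estimate)
validation_cap = 6     # agreed maximum validations per year
submitted_2015 = 2     # PCRs actually received for validation in 2015

# Even at the agreed cap, the stock of unvalidated reports grows each year:
print(due_per_year - validation_cap)   # 9
# In 2015, when only two PCRs arrived, the gap was wider still:
print(due_per_year - submitted_2015)   # 13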

The validations tend to repeat the different items reported in the PCRs and then provide extensive comment on each. The PCVRs go into great depth and detail, which makes the documents rich and complete. This is their strength, but also their weakness: the depth and level of detail, as well as the repetition of material from the original PCRs, make the PCVRs (overly) lengthy (20-40 pages) and difficult to read. The OIE reported spending approximately 27.2% of its time in 2015 on validating PCRs, compared with 44.4% on its core work of doing or managing the higher-level evaluations; in other words, the time spent on validation amounts to more than half of the time spent on the core evaluation work itself. Finally, the PCVRs now seem, to a great extent, to be a standalone output of the OIE. It is not always clear to us how they are used as "building blocks" for the OIE's independent evaluations; making this clearer in the independent evaluations would help show the link, and therefore the value of the time spent on the self-evaluation validations.
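The arithmetic behind this claim is simple (illustration only, using the reported shares):

validation_share = 0.272       # share of OIE time spent validating PCRs in 2015
core_evaluation_share = 0.444  # share spent doing/managing higher-level evaluations
print(validation_share / core_evaluation_share)  # ~0.61, i.e. more than half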

To conclude: the review finds that the OIE has taken steps to improve the perceived utility of evaluation in several ways. First, by planning its work to provide relevant and timely evidence geared towards helping the Board with its oversight and decision-making tasks; the topics are selected through dialogue between the OIE and key CDB stakeholders and reflect the priorities of the CDB's strategic plan. Secondly, by securing the interest, and consequently the buy-in, of the OAC and CDB senior management through engaging their input throughout the evaluation process. This is evidenced by the reported interest in the latest three studies: the Country Strategy and Programme evaluation for Haiti, the evaluation of policy-based operations, and the SDF 6 & 7 multicycle evaluation.

The OIE products are of an acceptable quality and could be even better if some of the shortcomings were addressed. However, it is not the products themselves that impair the utility of the OIE's work; this is undermined in other ways: (1) by delays in commenting on PCRs (on the OIE side) and in providing feedback on the independent evaluations (on the operations and management side); and (2) by inefficient processes for agreeing topics and funding sources and for providing the OIE with management responses to its reports.

Putting Evaluation to Use: transparency, feedback and follow-up

There are several ways in which evaluation can be, and is being, used. As John Mayne has pointed out in his many publications on the issue,36 when we talk of evaluation use, we are mainly thinking of instrumental use: use made to directly improve programming and performance. But there is also conceptual use, which often goes unnoticed or, more precisely, unmeasured; this refers to use made to enhance knowledge, in a more general way, about the type of intervention under study. There is also reflective use, which refers to using discussions or workshops to encourage and support reflection on the evaluation findings, to see how they might contribute to future strategies.

In the case of the CDB, there is some evidence to suggest that "use" is not only instrumental: other types are also developing. For example, the review process for draft evaluation reports includes reflective workshops that not only discuss the findings but also seek to draw out the important lessons (reflective use).

Another important use, as recommended by the ECG, is that, from time to time, a synthesis of lessons drawn from a number of evaluations is made publicly available. In fact, the Panel was impressed to hear that, in the past, the evaluation unit had done this, drawing on lessons from evaluations in the power sector (conceptual use). Although nothing has happened since, it is now on the "to do" list for 2016 (OIE's 2016 Work Plan).

As for instrumental use, responsibility for using the knowledge generated through evaluation, and for possibly drawing up an action plan of what should be done, rests with CDB senior management and the relevant CDB department and division. Oversight of the application of recommendations and the uptake of the lessons drawn is the responsibility of the OAC.

Evidence on how evaluations have actually contributed to decisions or negotiations is lacking or confusing. Certainly, the OIE is unaware of the extent to which its evaluations are put to use. On the one hand, the OAC minutes sometimes indicate that lessons learned are integrated into the next phase. On the other hand, the reviewers were told that, often in the past, the evaluation results were "too old" to be of use, as the lessons had already been drawn and used well before the report was completed. Gaps in people's memories of how well the evaluative information from previous studies was used may also account for the scarcity of evidence.

In response, the Panel questioned CDB staff and the OIE about a particular study, the Technical and Vocational Education and Training Assessment. The feedback was somewhat contradictory. On the one hand, the study was criticised as "confirming" news rather than bringing "new news". On the other hand, we learned that in October 2015 the Board of Directors approved a proposal for the revision of CDB's Education and Training Policy and Strategy. Work on this has already begun, and an external consultant has been engaged to lead the process.

36 See, for example, his opening chapter in Läubli Loud, M. and Mayne, J. (eds.) (2014) Enhancing Evaluation Use: Insights from Internal Evaluation Units, Sage Publications.


Although it is one of the OIE's tasks to set up a database of results and lessons learned from evaluations, so far this has not been a priority. There is also currently no systematic tracking of the lessons or recommendations arising from the evaluations, or of any progress in their uptake. (The Panel has already referred above to the OAC's lack of oversight of the use of evaluation.)

The OIE's role in supporting the CDB's organisational learning is clearly specified in the Evaluation Policy, with many good suggestions for knowledge-sharing activities such as "brown-bag lunches, workshops, pamphlets and short issues papers" (p. 19). So far, however, the OIE's lead role on the knowledge-sharing side appears to be quite limited. It has provided advisory input in Loan Committee discussions, and organises workshops together with the relevant operations department to discuss the implications of evaluation studies. Ultimately, of course, the uptake of evaluation results and knowledge is in the hands of management. But the evaluation unit has an important role to play as knowledge broker and knowledge manager. Both roles have tended to be underplayed in the OIE's work plan so far.

Transparency: The Communication Strategy

In recent times, and with the approval of its new Disclosure Policy, the CDB has started to post its independent evaluation reports on its website (nothing is posted on the self-evaluations). The website also presents a good overview of the role and function of the OIE and of evaluation within the CDB. This is a step in the right direction for sharing information. However, in our view, the CDB's communication strategy is the weakest part of the evaluation system to date.

The Panel has already commended the OIE for its efforts to engage the CDB and the OAC in evaluation work. But reporting and communicating the lessons seem to be entirely targeted at the Board and the CDB. Moreover, the 2015 budget provides only US$2,000 for communication, none of which is intended for outreach.

The reviewers feel that active engagement with the more indirect stakeholders, for example project implementers in the BMCs, NGOs or project beneficiaries, is relatively weak.37 There appears to be little reflection on drawing out significant messages for this broader group of stakeholders, or on how then to transmit them to the "right" people in the "right" way (knowledge brokerage).

To conclude, evidence on the uptake of evaluation is either confusing or sparse. It is unfortunate that, so far, no systematic record-keeping system has been put in place to track lessons learned or the uptake of recommendations (or of actions agreed in management responses). The OIE plays a weak role in brokering the knowledge generated through evaluations to the benefit of external partners, and in managing such knowledge. Although the Evaluation Policy specifies the need for "distilling evaluation findings and lessons learned in appropriate formats for targeted audiences both within and outside the CDB" (p. 19), such a targeted communication strategy has yet to be developed and budgeted.

Strengthening Evaluation Capacities and Networking

From the outset in 2012, the OIE has stressed the importance of developing and strengthening evaluation capacities within the OIE, the CDB and, subject to available resources, in the borrowing member countries. Building evaluation capacity in the BMCs and the CDB is one of the OIE's mandated tasks. It has been a priority that has figured on the work plan from the beginning (Work Programme and Budget 2012-2014). Developing an internship programme for graduates from the Caribbean region was one idea advanced to help build local evaluation resources. However, to date, capacity building has primarily been focused on OIE and CDB staff. One of the OIE's two objectives for 2015, therefore, was to take up the challenge and "strengthen evaluation capacities and networking", including reaching out to the BMCs.

37 A broader communication strategy is one of the principles and good standards of the Evaluation Cooperation Group and the evaluation community more generally.


Developing OIE staff capacities

The change from project-level to strategic and thematic evaluations requires different evaluative skills and competencies. The MDB Evaluation Pyramid, presented below in Figure 3, shows the different types of evaluation and the changing resource needs as one ascends the pyramid. Implicit here also is the change in the type of expertise and competencies needed as evaluation aspires to the higher levels.

Consequently, for 2015 the OIE set itself the objective of networking and developing working partnerships with regional and international evaluation entities and academic institutions. The rationale was twofold: (1) to secure further support and guidance; and (2) to increase its outreach and coverage through joint work and international exposure. Another implicit aim was to benefit from partners' contacts in the BMCs wherever possible, so as to improve data collection and quality.

Figure 3: The MDB Evaluation Pyramid38

38 US Treasury Report to Congress on Evaluation Standards and Practices at the Multilateral Development Banks (September 2014, Annex C).

The OIE has therefore linked up with Carleton University in Canada and the University of the West Indies, Barbados campus. The OIE was also approached by the Development Bank of Southern Africa to exchange experiences about setting up an evaluation entity in a "small" development bank. However, its attempt to become a member of the Evaluation Cooperation Group was not successful, for reasons beyond its control.

The OIE is to be commended for addressing the issue of staff competencies and professional development more generally. New developments in evaluation, as well as in the scope of the OIE's work, may necessitate new competencies. For this reason, organisations such as the International Development Evaluation Association have recommended that the competencies of evaluators and evaluation managers be periodically reviewed. Several publications now exist on competency requirements and suggestions for the periodic review of staff competencies.39

It is not within this Review's remit to compare and contrast the OIE's competencies with those recommended by international and national agencies. However, what we can say is that the OIE demonstrates great forethought in taking this issue on board.

Capacity building within CDB

The OIE's objective also includes continuing to develop measures for improving the monitoring and self-evaluation side of the CDB's work. Its strategy here is to use the windows of opportunity offered by training sessions organised by the CDB as part of its shift towards MfDR, e.g. by Corporate Planning Services and Technical Assistance. For 2016, it is also planned that the OIE will be present at the annual staff meeting and Learning Forum.

The OIE also organises some ad hoc training with operations, for example to help staff understand new tools, e.g. for drawing out lessons from self-evaluation reports, and, more generally, to help staff appreciate how evaluation can add value to the organisation's work. Measures include providing advisory services on demand, and providing training alongside the introduction of new or revised tools.

Capacity building in the BMCs

This is an ambitious task and would require additional investment beyond the biennial work plans to be effective. A modest attempt was made in 2015: from what we understand, the OIE has joined with Carleton University and the University of the West Indies, using their networks in some of the BMCs, to try to develop this aspect.

To conclude, we cannot comment on the quality of, or reaction to, such training, but we can commend the OIE for making capacity building one of its priority objectives. From both the Policy and the documents we reviewed, we note that capacity building was always seen as an important aspect of the OIE's work, but it has hitherto received little strategic focus. Moreover, the resources currently available to the OIE will limit the scope of such work in the BMCs which, in turn, will continue to hinder the production of sound evidence for the OIE's evaluations.

Adequacy of the OIE’s human and financial resources to support its work

OIE's Human Resources

The OIE has a staff of five: the Head, one senior evaluation officer, two evaluation managers and one administrative assistant. Three of the five were recruited from within the CDB. This limited capacity means that it is not feasible to cover all the types of evaluation activities outlined in the Evaluation Policy. Yet there is some indication from the Board that the OIE should embark on impact evaluations at some future stage. An increasing demand for evaluation, and for impact evaluations in particular, would run the risk of overstretching the OIE's capacity to deliver credible and useful evaluations. Moreover, there are many other designated OIE activities that should be recognised as valuable work: the validations, building CDB and BMC evaluation capacity, providing supervision, advice, knowledge management and brokerage, as well as managing evaluation contracts. The time needed to deal with all of these may be underestimated in the OIE's budgets; all are important for assuring best value from evaluation. The Panel is concerned that a demand for "doing" evaluations, as well as the OIE's interest in advancing its skills in high-level evaluations, may undermine the importance and time needs of other essential tasks.

39 E.g. IDEAS (2012) Competencies for Development Evaluation Evaluators, Managers and Commissioners; the Canadian Evaluation Society's Competencies for Canadian Evaluation Practice (2010); and the Swiss Evaluation Society's Evaluation Managers Competencies Framework (2014).

Limited and unpredictable resources for independent evaluations

The OIE is funded from the general administrative budget, and its budget represents approximately 2.5% of the total. Whilst this is seemingly a higher proportion than in other MDBs, in real terms it is quite limited: 75% of the OIE budget is for staff salaries, leaving US$190,000 in 2015 for external consultants and other expenses (more on financial aspects and OIE practices is provided in Appendix III).

CDB's donors do not appear to specify a budget for monitoring and evaluation activities. This means, on the one hand, that there is no clear external budgetary recognition of operations' self-evaluation work or of the OIE's time in the validation process and, on the other, that whilst donors expect to receive reports from independent evaluations, this expectation is not backed by a corresponding allocation when funds are assigned.

The resources available to the OIE for hiring external consultants have dropped from US$350,000 in the revised 2014 budget to US$120,000 in the 2015 indicative budget. The OIE estimates that, for high-level evaluations, the cost of external consultants is between US$90,000 and US$350,000 (the SDF 6 and 7 evaluation cost US$255,000). In the Panel's experience, this is a sound estimate. With one staff member fewer during 2014-2015, coupled with the OIE's focus on dealing with the backlog of self-evaluations amongst other priorities, the OIE was unable to execute some of the evaluations during the annual budget period. Hence, the budget was reduced for subsequent years but has proven insufficient to fund the OIE Work Programme. The OIE has therefore needed to turn to the only alternative source available at present, the SDF fund. But the SDF funding rules apply to specific countries and themes, which obviously restricts the OIE's choice of evaluation subjects and themes. Since the SDF does not allow for recurring OIE costs such as staff travel, the SDF evaluations have to be outsourced. As presented in Figure 1 above, the approval process is inefficient and causes delays. The Panel learned that additional funds, for example for specific studies, could be secured from within the administrative budget during the year, on condition that the request was based on sound arguments.

Whilst the Panel appreciates full well that the Bank is operating within a zero-growth framework, the reviewers were surprised to learn that OIE funding is not sufficiently secured in line with its priorities and work plan. The need to seek alternative funding for individual studies does not allow for any flexibility and undermines the OIE's independent judgement of what needs to be done.

To conclude: the OIE is inadequately resourced to meet the expectations outlined in the CDB's Evaluation Policy. The Panel recognises that the CDB itself has budgetary restrictions, but the current arrangements for securing extra funding are complicated and inefficient and limit the OIE's ability to exercise autonomy in the selection of its evaluation studies. Moreover, OIE budgets significantly underestimate the time needed for managing evaluations and other evaluation activities.

Self- and Independent Evaluation

Self-evaluations cover public sector investment, lending and technical assistance, policy-based loans, and country strategy programmes. Both types of evaluation are important, as they are at the very heart of the evaluation function; the self-evaluations are said to be the building blocks for the more strategic evaluations that the OIE is now undertaking.

The Evaluation Cooperation Group recommends that the self-evaluations be carried out by the relevant operations department and, in turn, reviewed and validated by the organisation's independent evaluation office. The CDB's Evaluation Policy therefore refers to "validating all self-evaluations" as one of the OIE's essential oversight tasks.


Within the CDB, the self-evaluations should provide management with performance assessments and thereby serve an accountability function towards the CDB and the Board. To support the process, the OIE provides operations with manuals and checklists for guidance. Once a self-evaluation report is to hand, it is passed to the OIE for validation of its technical quality and credibility.40

However, in the CDB's case, there are well-documented issues that have affected the quality and timeliness of the self-evaluations and, therefore, the quality of the foundation on which the independent evaluations are built. Paucity of documentation within the CDB, paucity of data collected and available in the Borrowing Member Countries (BMCs), delays in producing completion reports and, in turn, in having them validated by the OIE: all such issues were systematically raised during interviews and in some of the independent evaluation reports. There appears to be little incentive to complete self-evaluations in a more timely manner.

Generally speaking, many of the monitoring data problems appear to be due to a lack of management oversight. For example, with the introduction of results-based management, the logical framework and the monitoring and data needs are systematically built into intervention design. However, the BMCs are not delivering the data as contractually agreed at the outset. Incentives to support any significant change towards building a results-based culture seem weak, and sanctions are rarely enforced when data are lacking or lengthy project delays occur. Although we appreciate the complexities of trying to enforce monitoring compliance, this means that project deadlines have often had to be extended, data gaps are not satisfactorily dealt with and, in turn, there has been a void in the quality and quantity of evidence available for the CDB's self-assessment of project performance. For some time, this lack of oversight has been tolerated. Part of the problem is the low priority accorded by operations to completing the self-evaluation reports, coupled with the absence of any focal point within senior management to drive the process and deal with the problems.

No record is kept of how the self-evaluation results are actually used. They do not appear on the CDB website, but we were told that the findings are integrated into subsequent project designs. Hence we are somewhat unclear as to the utility of these reports at present. The situation is exacerbated by a rather confused image of evaluation: some operations staff consider the OIE's input (through validations or independent evaluations) to be sometimes over-critical, regulatory and of little added value; a threat rather than an opportunity for learning. Yet, at the same time, evaluation is recognised as an integral part of results-based management.

According to the Evaluation Policy (p. 15), "The President, with the support of the Advisory Management Team, is accountable for encouraging and providing an environment where evaluation adds value to the overall management of CDB's activities and fosters a culture of critical analysis and learning". But in the CDB, a learning culture appears to be still in its infancy. The leadership role as expressed in the Evaluation Policy is underdeveloped.

Some managers, however, seem to have started making positive changes. For example, a revised and simplified template for producing project completion reports is being considered, and mid-term project reviews are expected to be more stringent in looking at monitoring plans and practices and in tying disbursements to performance. In some cases, we also learned of incentives being introduced to encourage project managers to complete their reports in a more timely manner. But much remains to be done and, since the OIE is no longer responsible for monitoring and project evaluations, there is a void that needs to be filled. It is up to line managers to drive this work forward.

To conclude, it is fair to say that, in view of a number of "frustrations" between the OIE and operations, largely to do with delays in exchanging comments on the various reports as well as the paucity or absence of monitoring data, the added value that evaluation might offer to the operations area is ill recognised. Moreover, the link between the self-evaluations as the building blocks for the independent evaluations is not apparent. Thus there is little incentive or management focus to drive any change to current practices. In other words, there is a lack of leadership to advance a learning environment in which evaluation can play a major part.

40 According to the Evaluation Policy, the OIE should validate all PCRs and CSPCRs. However, owing to the backlog of reports and the delays in completing them (sometimes years late), the OIE has, since October 2015, secured the OAC's agreement to validate a maximum of 6 per year, selected in consultation with the OAC.


Part Three: General Conclusions and Recommendations

To conclude, with regard to the Evaluation Policy and the OIE's independence, our Review finds that over the past few years the CDB has succeeded in establishing an independent evaluation office that is credible and respected. It reports to a Board committee and is thus organisationally independent from CDB management. Its work is grounded in an Evaluation Policy, agreed by the Board and the CDB, that reflects internationally recognised principles and good practices. The Policy sets out a broad scope of responsibility for the OIE which, however, seems over-ambitious given current resource constraints. The OIE clearly has both an accountability and a learning function; the latter should support the development of an organisational learning culture. (So far, the uptake of recommendations and key lessons has not been systematically monitored or recorded.) In general, on the issues of independence, we can conclude that the OIE meets the criteria for organisational and behavioural independence and is protected to a certain degree from external or contextual influences.

However, as the Independent Advisory Committee for Development Impact has said, "independent evaluation needs to have clout … credibility of evaluation hinges on public perceptions as well as on reality."41

We therefore highlight a few potential threats, even though there is no evidence to suggest they are in any way real at present. It would be in the OIE's and the CDB's interest to have these clarified sooner rather than later. For instance:

- Any delays incurred in reporting self- and independent evaluation results to the Board could be interpreted as operational interference.

- Similarly, there is no agreed process for dealing with any conflict of interest between the OIE and management in reporting results; it is simply expected that any disagreements will be reported in the management response.

- Another possible threat is the Head of the OIE's lack of complete autonomy over staff matters: recruitment, termination, continuation and professional development. The Policy is not sufficiently clear about who has the final word in the case of disagreement.

- And finally, on resources, our Review accepts the limited funds available to the CDB and the fact that the OIE's budget is not independent but operates within the Bank's budgetary limitations. Nevertheless, we feel that more flexible arrangements could be devised that would allow less restrictive and timelier access to funds.

With regard to governance, our Review has highlighted the difficulties the OAC faces in not receiving the background papers for its meetings in sufficient time to do them justice. Moreover, these documents tend to be very lengthy and not necessarily "reader friendly". The OAC's oversight responsibility is likely to be weakened as a result, and we can already see some indication of this. For instance, requests for systematic follow-up on management actions resulting from evaluation findings have not been answered. Nor is there a standing item for this on the OAC agenda, so such requests can easily be passed over and forgotten. The broadened responsibilities now given to the OAC also mean that there are many competing entities trying to secure the OAC's attention. There is now provision for the OAC to call on consultants for help, which we feel may strengthen the OAC in its oversight responsibilities.

Furthermore, in their capacity as members of the Board, the OAC members should stress the urgency of developing evaluation and monitoring capacity in the BMCs, since this gap is having a direct impact on OIE and CDB evaluations.

With regard to the OIE's performance, we have to respond to the questions raised in this Review's Terms of Reference, which basically means answering two main questions: is the OIE doing the right thing? And is it doing it in the right way?

41 Picciotto, R. (2008) Evaluation Independence at DFID: An Independent Assessment prepared for the Independent Advisory Committee for Development Impact (IADCI), p. 4.


There is no doubt that the decision to establish a credible, independent evaluation function in the CDB is the "right thing" to do; "effective and useful evaluation and oversight activities can assess development effectiveness, hold the organisation accountable for results, and improve operational performance".42 It is also a policy of the MDBs to have such a function, and the CDB has now aligned itself with international standards and practice.43 The question now, therefore, is the following: is the OIE going about it in the right way?

The OIE has taken the "right" steps to improve the engagement and interest of the OAC and CDB senior management, from selecting the topics for its evaluations through to finalising the conclusions and recommendations in a collaborative spirit. It falls short, however, of taking the messages emerging from the studies to "outsiders", such as those responsible for implementing CDB interventions in the BMCs.

In its oversight role, we feel that the OIE has paid insufficient attention to the actual utilisation of evaluation; it is beyond its responsibility to see that action is taken, but it is certainly within its remit to record how, and how well, the lessons drawn have been taken up and used. With regard to its oversight of the self-evaluations (the validation process), the OIE has attempted to improve dialogue with the operations departments and to demonstrate the dual function of oversight and learning. It is now emphasising the learning aspect by providing tools and guidance on how to draw out lessons and integrate them into future planning. More recently, it has sought ways to provide more formalised training on evaluation by working with the corporate planning services and technical assistance departments to develop courses that show how, where and when evaluation plays its part within the MfDR framework.

However, one of the challenges in evaluation management is balancing independence with facilitating buy-in and ownership at the same time. It is a fine line to walk, and it depends to a large degree on the climate between management and the head and staff of the independent evaluation unit in defining the tone of the collaboration. In practical terms, for the CDB this means defining the role of the OIE in relation to the self-evaluations performed by the Projects and Economics Departments. The change from the EOV to the OIE made this role change quite clear: the OIE no longer has responsibility for project monitoring or for planning data needs together with the operational departments. On the other hand, to improve understanding and learning, there needs to be an interface between evaluation and management. At present, the OIE's dual role, that is, its advisory role in relation to operations and its strategic role towards the OAC and senior management, has not been satisfactorily resolved. Operational staff still do not appear to see any urgency in producing their completion reports, or to appreciate what lessons might be drawn from such reflection. The OIE is doing its best to support "learning" whilst, at the same time, keeping an arm's length. The greatest challenge the OIE faces in its new capacity is the slow development of an organisational learning and evaluation culture.

A Learning and Evaluation Culture

Evaluation utility depends on the engagement of evaluation users: those who should benefit from the knowledge generated through the studies. Useful evaluation therefore depends to a large degree on the development of an evaluation and learning culture and on how well these are embedded in the organisation. This means that the organisation recognises and appreciates evaluation's role and the functions it can have, particularly in helping it understand what it is achieving and where and how improvements can be made. In short, the added value that evaluation can bring to an organisation is its ability to draw out the important lessons that can help improve the organisation's performance.

However, whilst CDB senior management shows all the signs of embracing evaluation as an important strategic tool, there still appears to be some apprehension about receiving criticism, however constructive this might be. The OAC has already affirmed its interest in learning what can be "put right the next time around". In considering accountability, the Committee is asking for a more strategic approach to learning and sharing knowledge based on evidence. The CDB also shares the development goals of other MDBs, that is, "to end extreme poverty and promote shared prosperity". This means looking for new forms of problem-solving and for ways to create a "development solutions culture". Hence there is an interest in learning from experience and exchanging knowledge about what works. This implies balancing accountability and learning: making sure they are not seen as opposites, but as compatible entities. This greater emphasis on learning requires a reframing of CDB's thinking and of how it deals with the constructive criticism that evaluation can offer.

42 CDB (2011) Evaluation Policy (p. 2)
43

Box: A weak evaluation culture (the example of UNRWA)

27. While some stakeholders seem keen on evaluation, the overall evaluation culture in UNRWA is weak. There are several aspects to it.

28. First, many of the interviewees stressed that UNRWA has a weak learning culture. This stems from a number of factors. One reason given is related to the cultural virtue of oral communication, which makes conveying documented experiences challenging. Another reason is language: the majority of UNRWA's national staff are not fluent in English, and evaluation reports are mostly in English. Furthermore, criticism, even if constructive, is, according to some interviewees, mainly perceived as a threat and not as an opportunity. Finally, learning is also affected by a very basic constraint: lack of time.

29. Second, there is a weak knowledge management system for systematically collecting and sharing experience and lessons learned in UNRWA. UNRWA communities of practice do not exist; several interviewees mentioned the use of knowledge networks outside UNRWA, i.e. communities of practice managed by other agencies. Accessing evaluation reports is also not easy: the UNRWA website does not provide access to evaluation reports and, while the Agency's intranet has a site for evaluation reports, it is not a complete repository and the Evaluation Division does not know exactly how many decentralized evaluations are being produced. In addition, there are only a few evaluation plans at the level of field offices or departments.

30. Third, the Panel found that decentralized evaluations are, at least partly, perceived as donor-driven accountability instruments rather than as learning tools. In that sense, evaluations are managed as bureaucratic requirements, thereby weakening the learning dimension.

31. Finally, the sensitive political context in which UNRWA operates may also discourage a strong evaluation culture, as evaluative evidence can sometimes be overridden by political considerations.14 The Panel was repeatedly told that, given the political context, any change is a challenge.

14 An example mentioned to the Panel was the evaluation of the Qalqilya Hospital (2013), which concluded that the Hospital should be closed. However, for political …


Recommendations

(Here is a list of some possible recommendations, to be discussed and developed within the Review Panel initially and then proposed for discussion together with the OIE.)

- OAC's oversight responsibility needs to be strengthened.
- (Possibly) Review the Evaluation Policy to redress gaps.
- OIE to develop a 5-year strategy providing a step-by-step work programme, budget and timeframe for implementing the Evaluation Policy.
- Stronger leadership from the President to provide a climate conducive to promoting the added value of evaluation to overall management and to fostering a culture of critical analysis and learning.
- Stronger support from the Advisory Management Team for the evaluation function, by emphasising the accountability of CDB managers for performance results and by devising incentive schemes to support the accountability function.

- Set up committees (advisory groups) to accompany specific evaluation studies as a means of reinforcing ownership.

- OIE could contribute to developing a learning culture within the CDB by adopting the role of critical friend in its dealings with the CDB operations departments and the CDB more generally. Valuable insight can be gained by having a focused conversation with the client about aspects of the evaluation that went well and those that did not go so well. Some meetings might include all relevant stakeholders in addition to the main client. This feedback can then inform a subsequent internal project team debrief (e.g. identifying internal professional development or process improvement needs). Input from all perspectives can inform planning for future projects. A summary of these discussions can then be circulated so that everyone can use it for their own internal or external quality improvement purposes.

- Rather than asking "what went wrong?", the conversation can turn to "what surprised us, what we would do differently, what we expected to happen that didn't, and what we did not expect to happen that did". This is a better means of getting at the negative aspects without placing blame.

- OIE to train up and engage "champions" within CDB operations departments to help demonstrate evaluation utility and provide "on the job" training in self-evaluation to colleagues.
- A quality control group could also be set up, as Picciotto suggested for DFID, to carry out and develop the work of quality assessment and, particularly, Quality at Entry monitoring.

- OIE should be given the resources to build M&E capacity in the BMCs as a priority, possibly in partnership with other MDBs, development organisations and the countries themselves.

- Possibly criticise the over-emphasis on the five DAC criteria, which have been in use for more than 15 years without undergoing any major revision. Given their importance and level of influence in the field, it is pertinent that independent professionals take a critical look at them, especially since many scholars and practitioners consider that the quality of evaluations in development aid has been quite disappointing (Chianca, T. (2008) "The OECD/DAC Criteria for International Development Evaluations: An Assessment and Ideas for Improvement", Journal of MultiDisciplinary Evaluation, 5(9), March 2008, http://evaluation.wmich.edu/jmde/).

- Regarding the validation of self-evaluations: it is recommended that a leaner format be developed for the PCVRs, repeating less of the original PCR text and focusing on the validity of the PCR against a limited number of key issues to be assessed for each PCR. The PCVRs are also written in a heavily narrative style, which may not be needed for this type of document; a more tabular format with more succinct statements could lead to a leaner process for producing the PCVRs, without any loss of usefulness. The "PCR checklist" would be a good starting point for this.

- The link between the self-evaluations, the validations and the independent evaluations is not clear at present, nor is that between the self-evaluations and the QaE documents, so one wonders what all the effort on the operations side is for. This is a real issue: a lot of interesting and reasonably good work seems to be done, but there is a lack of coherence. (This observation is based on the documents alone, without interviews to give a broader picture.)

- This is something the EIB evaluation unit was criticised for in the past too. It has since started to include "younger" (sometimes still ongoing) projects in its samples, and it redoes the portfolio analysis just before finalising the report to see whether things have changed; the services can, of course, indicate in their response whether things have indeed changed over time.

Recommendations for improving the process for study approval and funding:

- Give recommendations on priorities for the OIE's work.

- Funding should come preferably from the administrative budget. Unused monies could then be released in the annual budgetary reviews, but this should have no effect on the budget for subsequent years. SDF funding at a level … It is surprising to find that a Board-approved OIE work programme and budget is inadequate; either the proposed budget per work programme …


The Panel, however, encourages the creation of such a quality control unit, whose role cannot be fulfilled by the OIE, as it lies outside the OIE's scope and present capacity, even though the OIE could have an advisory/methodological role.


APPENDICES

Appendix I – The External Review Mandate: Terms of Reference and Approach Paper

Appendix II – Review Approach, Data Collection and Analysis, and Limitations

Appendix III – Overview of OIE Evaluation Practice

Appendix IV – OIE Practice compared with ECG Good Practice Standards on Independence

Appendix V – List of Persons Interviewed

Appendix VI – List of Documents Reviewed

Appendix VII – List of Topics used to guide interviews with members of the CDB Board of Directors

Appendix VIII – List of Topics used to guide interviews with CDB staff


Appendix III – Overview of OIE Evaluation Practice (prepared by the OIE in response to the Reviewers' request)

Caribbean Development Bank, Office of Independent Evaluation (OIE)

Percentage of projects subject to project (self-)evaluation:
100% - Project Completion Reports (PCRs).

Percentage of projects subject to validation by OIE:
Approximately 40-50%. About 15 projects exit the portfolio annually. The Evaluation Policy calls for all PCRs to be validated; however, OIE resources are insufficient. The validation process was reviewed in 2014; the OAC (a Board committee) now selects a sample of 6-8 PCRs for validation each year.

Percentage/number of projects subject to in-depth review by OIE:
None, unless specifically requested by the OAC. Due to limited resources, the focus of the OIE evaluation work programme is on PCR validations and high-level evaluations, including country strategy and programme evaluations (CSPEs).

Number of high-level evaluations conducted by OIE (e.g. sector, thematic, geographic):
1-2 per year since 2011. The plan is 2-4 per year from 2016; this would include CSPEs (the first, on Haiti, planned for Q1 2016).

Number of project impact evaluations conducted by OIE:
None. The OIE includes "impact questions" in high-level evaluations.

Number of project impact evaluations conducted by Bank staff or other non-OIE staff:
The OIE is not aware of any impact evaluation conducted by the Bank. However, the OIE provides technical support to the Basic Needs Trust Fund (BNTF) in its design of an M&E framework that entails impact evaluations.

Budget:
USD 0.78 mn in 2015; USD 0.82 mn in 2016. This is equivalent to about 2.5% of the total CDB Administrative Budget. 75% of the budget is for staff salaries (4 professionals, 1 support staff), leaving around USD 190,000 (in 2015) for other expenses, including consultants, e.g. for external evaluations. Additional funding is accessed via the Special Development Fund (SDF); this varies according to the type and scope of the evaluation, e.g. the ongoing SDF 6/7 Evaluation is SDF-funded at USD 255,000. The budget is determined by the Board and is not separate from the administrative budget. SDF funding for evaluations is considered separately and is subject to the Bank's internal approval process. SDF funding cannot be used to cover OIE expenses such as staff time or travel. Country eligibility for SDF funding is also a consideration. The OIE expressed concerns about this funding track in respect of predictability, independence and eligibility limitations.

Reporting line:
The Head of the OIE reports to the Board, with an administrative link to the President.

Terms of appointment for Head:
5-year term, renewable once. Appointed by the President with the agreement of the Board.

Right of return for Head:
Not eligible for other staff positions.

Consultants as proportion of OIE budget:
2015: 19% (USD 145,000), plus SDF funding. SDF-funded evaluations are outsourced.

Last external evaluation (or peer review) of OIE:
No external evaluation, though a review of the function was done in 2011, leading to the Evaluation Policy. The present OIE External Review was completed in April 2016.

Departments or special programmes supporting impact evaluation:
None.


Appendix V – List of Persons Interviewed

Name; function relative to OIE; type of interview

Mrs. Colleen Wainwright; Member, CDB Board of Directors (UK); face-to-face
Mrs. Cherianne Clarke; Alternate Member, CDB Board of Directors (UK); face-to-face
Mrs. Jean McCardle; Member, CDB Board of Directors (Canada); face-to-face
Dr. Louis Woodroofe; Member, CDB Board of Directors (Barbados)
Mr. A. de Brigard; former Member, CDB Board of Directors; Skype interview
Mr. H. Illi; former Member, CDB Board of Directors; telephone interview
Mrs. Claudia Reyes Nieto; Member, CDB Board of Directors; telephone interview
Mr. Bu Yu; Alternate Director, CDB Board of Directors; face-to-face
Mr. Michael Schroll (Barbados); Head, OIE; series of interviews via Skype and face-to-face
Mr. Mark Clayton; OIE Senior Evaluation Officer; focus group
Mrs. Egene Baccus Latchman; OIE Evaluation Officer; focus group
Mr. Everton Clinton; OIE Evaluation Officer; focus group
Mrs. Valerie Pilgrim; OIE Evaluation Officer; focus group
Dr. Justin Ram; CDB Director, Economics Department; face-to-face
Mr. Ian Durant; CDB Deputy Director, Economics Department; face-to-face
Dr. Wm Warren Smith; CDB President; joint interview, face-to-face
Mrs. Yvette Lemonias-Seale; CDB Vice President, Corporate Services & Bank Secretariat; joint interview, face-to-face
Mr. Denis Bergevin; CDB Deputy Director, Internal Audit; face-to-face
Mr. Edward Greene; CDB Division Chief, Technical Cooperation Division; face-to-face
Mrs. Monica La Bennett; CDB Deputy Director, Corporate Planning; face-to-face
Mrs. Patricia McKenzie; CDB Vice President, Operations; face-to-face
Ms. Deidre Clarendon; CDB Division Chief, Social Sector Division; face-to-face
Mrs. Cheryl Dixon; CDB Coordinator, Environmental Sustainability Unit; focus group
Mrs. Denise Noel-Debique; CDB Gender Equality Advisor; focus group
Mrs. Tessa Williams-Robertson; CDB Head, Renewable Energy; focus group
Mrs. Klao Bell-Lewis; CDB Head, Corporate Communications; face-to-face
Mr. Daniel Best; CDB Director, Projects Department; face-to-face
Mr. Carlyle Assue; CDB Director, Finance Department; face-to-face

70


Appendix VI - Interview Guide: Members of CDB Board of Directors

Below is a list of themes that I should like to raise with you, based on your experience and knowledge of the CDB's independent evaluation function (Office of Independent Evaluation).

In each case, I should be grateful if you could illustrate your responses with examples, or help this Review by, wherever possible, sending me (or telling me where I can find) any documents that could support your responses.

This guide is being sent to you in advance to help prepare our meeting. However, our interview will be conducted more in the style of a conversation. The following sub-questions will be used to GUIDE the interview. Please feel encouraged to raise any additional issues that you feel we should take into account.

On the governance and independence of CDB's evaluation function

What mechanisms are in place to support its independence?

How satisfactory are the current arrangements in your opinion?

How is the balance between independence and the need for interaction with line management dealt with by the system? For example, what mechanisms exist to ensure that the OIE is kept up to date with decisions, policy/programme changes, and other contextual changes that could have an effect on OIE evaluation studies and evaluation planning?

On the OIE's Evaluation Policy

The CDB's Evaluation Policy was established in 2011. To what degree do you feel it is adequate? Still relevant?

What suggestions do you have for any improvements?

In your opinion, how adequate is the current quality assurance system for overseeing the evaluation function?

On the quality and credibility of evaluation studies

To what degree do you believe the reports are fair and impartial?

Do you consider them to be of good quality? Are they credible?

Are you adequately consulted/involved on evaluations of interest to you?

On the relevance and usefulness of evaluations

How well does the OIE engage with you / your committee during the preparation, implementation and reporting of an evaluation study to assure that it will be useful to the CDB?

How are the priorities set for the independent evaluations? What criteria are used? Are you satisfied with the current procedure?


When OIE evaluation studies are outsourced to external consultants, what criteria are used to make this decision?

How are the priorities for the OIE's 3-year rolling work plan agreed? In your opinion, is the current plan adequate in terms of coverage and diversity?

In your opinion, do the evaluations address important and pressing programs and issues?

To what extent do you feel that the OIE's evaluations integrate cross-cutting themes such as gender, energy efficiency/renewable energy and climate change? What improvements might be made, and how?

On the dissemination and uptake of evaluation findings and recommendations

To what extent do you feel that evaluation findings are communicated to the CDB and its stakeholders in a manner that is a) useful, b) constructive and c) timely?

Are evaluation recommendations useful? Realistic?

What mechanisms are in place to assure that evaluation results are taken into account in decision making and planning? What improvements do you feel could be made?

How have you used the findings from any evaluations? Examples?

To what degree do you feel that evaluation contributes to institutional learning? And what about to institutional accountability? Any examples?

What mechanisms are in place to ensure that knowledge from evaluation is accessible to CDB staff and other relevant stakeholders? Are the current arrangements satisfactory?

How satisfied are you with current arrangements? What expectations do you have for the future?

On resources

How is the OIE resourced financially, and is this satisfactory?

What about OIE staff: are all the important areas of expertise represented in the team?

On this Review of the Office of Independent Evaluation

What are your expectations? What are you particularly hoping to learn from it?

Thank you very much for your cooperation and input.


Appendix VII: Interview Pro-Forma – CDB Staff Members

This presents a list of the topics raised during interviews. It was used to guide the open-ended discussion; the sequence and exact wording of the questions may not have followed this order or been asked in exactly this way.

Changeover to an Independent Evaluation Office? Expectations? Advantages and disadvantages?

Satisfaction with working relations between operations and the OIE from your perspective?

Process of dealing with the PCRs and CCRs? Advantages and limitations?

Quality and credibility of the validation process?

How are the self-evaluation reports used?

Credibility and Quality of OIE’s evaluation reports

Communication of self-evaluations and OIE independent evaluations? To whom, and in what way? Possible improvements?

Work Practices

The OIE has had to develop a plan to implement the Evaluation Policy, addressing the questions of what the priorities are and on what timeframe particular activities should be achieved. These questions were partially addressed in the OIE work programme and budget for 2012 to 2014, but that programme proved over-ambitious.

But the plans appear to place little emphasis on the activities associated with evaluation management (e.g. knowledge management) and the time they require. Other time demands mentioned in the previous sections, such as delays in completing reports and validation work, have also affected OIE's plans. The more recent work plans set the task of delivering utility-focused and timely evaluations, but they lack clarity on how the OIE proposes to surmount the time and data issues, which are far from new; in short, they lack a theory of change and a timeline. The challenges that have to be dealt with to enable the OIE to move up the MDB evaluation


pyramid44 are brought out in the remaining sections of this Review, not least given the limited resources available.

Draft evaluation reports are discussed with CDB staff before the final version is completed; however, drafts are submitted only to CDB line and senior managers, and only final versions are given over to the OAC. A series of discussions is held first with the CDB and then with the OAC, following professional good practice and standards on participative approaches. There is, however, no "accompanying group" for individual studies, which would include both internal and possibly external partners; such advisory groups have shown their worth in a number of contexts for improving buy-in and providing strategic input.

As with many other MDBs, evaluation activities include both independent and self-evaluations; the latter are the completion reports on operational projects and country strategy programmes and are done by the operations staff. The OIE then validates the quality of such reports. The self-evaluations should inform the more strategic studies conducted independently by the OIE. (More on the relationship between these two is provided later in this Review.)

An evaluation is processed as follows: the OIE prepares an Approach Paper (AP) for approval by the OAC. If the study is to be outsourced, the AP becomes the basis for the Terms of Reference (ToR), which, subject to the size of the budget, may be put out to tender. The contracted evaluator then prepares an Inception Report (IR) after some desk and field research has taken place; this intermediary report is not produced when the OIE itself conducts the evaluation. Sometimes a Progress Report is submitted, but otherwise the next stage is the delivery of the final report in successive drafts. (Assessments are like evaluations but more limited in scope and depth of analysis.)

Table 4: List of studies (N = 24) submitted to the Board for the period January 2012 to 31 December 2015

It is still considered good practice to have goals and objectives elaborated in the initial design documents, as in objectives-oriented evaluation45, in contrast with newer approaches such as Developmental Evaluation (Patton, 2010)46.

However, in this period of transition, much of the OIE's work since 2012 has been taken up with the backlog of CDB self-evaluation validations. In theory, an estimated 15 completion reports are due each year, but delays in submitting the reports for validation are commonplace. With the change of Head in June 2014, the OIE therefore secured the OAC's agreement to reduce the number of validations to a maximum of six per year. Even so, the backlog continues to accumulate: only two PCRs were given to the OIE for validation in 2015.

44 US Treasury Report to Congress on Evaluation Standards and Practices at the Multilateral Development Banks (September 2014, Annex C).
45 The focus of an objectives-oriented evaluation is on specified goals and objectives and determining the extent to which these have been attained by the relevant intervention. See, for example, Worthen, Sanders, & Fitzpatrick (1997). Program Evaluation: Alternative Approaches and Practical Guidelines (2nd ed.). White Plains, NY: Addison Wesley Longman.
46 Patton, M.Q. (2010) Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use. New York: Guilford Press.



In the review of draft evaluation reports, the process includes reflective workshops that discuss not only the findings but also seek to draw out the important lessons. (The Panel has already referred above to the lack of oversight in the use of evaluation.)

Moreover, the 2015 budget provides only US$2,000 for communication, none of which is intended for outreach.

Figure 3: The MDB Evaluation Pyramid47

47 US Treasury Report to Congress on Evaluation Standards and Practices at the Multilateral Development Banks (September 2014, Annex C).


A modest attempt at such capacity development was made in 2015, but the resources currently available to the OIE will limit the scope of such work in the BMCs, which in turn will continue to hinder the production of sound evidence for the OIE's evaluations.

OIE's Human Resources

The OIE has five staff members; three of the five were recruited from within the CDB. There is an expectation from the Board that the OIE should embark on higher-level evaluations, and on impact evaluations in particular. Moreover, there are many other designated OIE activities that should be recognised as valuable work: the validations; building CDB and BMC evaluation capacity; providing supervision and advice; knowledge management and brokerage; and managing evaluation contracts. The time needed to deal with all of these may be underestimated in OIE's budgets, yet all are important for assuring best value from evaluation. The Panel is concerned that the demand for "doing" evaluations, together with OIE's interest in advancing its skills in high-level evaluations, may undermine the importance, and crowd out the time needs, of other essential tasks.

Limited and unpredictable resources for independent evaluations

The OIE is funded from the general administrative budget and represents approximately 2.5% of the total. Whilst this is seemingly a higher proportion than in other MDBs, in real terms it is quite limited: 75% of the OIE budget goes on staff salaries, leaving US$190,000 in 2015 for external consultants and other expenses.
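These figures are internally consistent. As a rough cross-check (the Panel's own arithmetic, using only figures quoted in this Review rather than CDB budget documents), the 2015 consultant share of 19% (US$145,000, see Appendix) implies a total OIE budget of roughly US$760,000:

\[ \text{Total}_{2015} \approx \frac{145{,}000}{0.19} \approx 763{,}000; \qquad (1 - 0.75) \times 763{,}000 \approx 191{,}000 \approx \text{US\$}190{,}000. \]

The 25% not absorbed by salaries matches the US$190,000 quoted above.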

CDB's donors do not appear to specify a budget for monitoring and evaluation activities. This means that, on the one hand, there is no clear external budgetary recognition of the operations' self-evaluation work or of OIE's time in the validation process; and, on the other, that whilst donors expect to receive reports from independent evaluations, this expectation is not reflected when funds are allocated.

The resources available to the OIE for hiring external consultants have dropped from US$350,000 in the revised 2014 budget to US$120,000 in the 2015 indicative budget. The OIE estimates that, for high-level evaluations, the cost of external consultants is between US$90,000 and US$350,000 per study (the SDF 6/7 evaluation cost US$255,000); in the Panel's experience, this is a sound estimate. With one staff member fewer during 2014-2015, coupled with OIE's focus on dealing with the backlog of self-evaluations amongst other priorities, the Office was unable to execute some of the evaluations planned for the annual budget period. The budget was therefore reduced for subsequent years, but it has proven insufficient to fund the OIE Work Programme. The OIE has consequently needed to turn to the only alternative source available at present, the SDF. But the SDF funding rules apply to specific countries and themes, which obviously restricts the OIE's choice of evaluation subjects and themes. Since the SDF does not allow for OIE recurring costs such as staff travel, SDF-funded evaluations have to be outsourced. As presented in Figure 1 above, the approval process is inefficient and causes delays. The Panel learned that additional funds, for example for specific studies, could be secured from within the administrative budget during the year, on condition that the request was based on sound arguments.
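The scale of the shortfall is easily illustrated. Using only the figures above (an indicative calculation by the reviewers, not a costing from CDB documents), a programme of 2-4 high-level studies per year at US$90,000-350,000 each implies an annual consultancy requirement of:

\[ 2 \times 90{,}000 = 180{,}000 \ \text{(minimum)}; \qquad 4 \times 350{,}000 = 1{,}400{,}000 \ \text{(maximum)}. \]

Even the cheapest feasible programme, two studies at the minimum unit cost, exceeds the indicative 2015 consultant budget of US$120,000 by US$60,000.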

Whilst the Panel fully appreciates that the Bank is operating within a zero-growth framework, the reviewers were surprised to learn that OIE funding is not sufficiently secured in line with its priorities and work plan. The need to seek alternative funding for individual studies


does not allow for any flexibility and undermines the OIE’s independent judgment of what needs to be done.

To conclude: the OIE is inadequately resourced to meet the expectations outlined in the CDB's Evaluation Policy. The Panel recognises that the CDB itself has budgetary restrictions, but the current arrangements for securing extra funding are complicated and inefficient, and they limit the OIE's ability to exercise autonomy in the selection of its evaluation studies. Moreover, OIE budgets significantly underestimate the time needed to manage evaluations and other evaluation activities.

Self-evaluations cover public sector investment, lending and technical assistance, policy-based loans, and country strategy programmes. There appears to be little incentive to complete self-evaluations in a timelier manner.


Criticism tends to be perceived as a threat rather than an opportunity for learning. According to the Evaluation Policy (p. 15), "The President, with the support of the Advisory Management Team, is accountable for encouraging and providing an environment where evaluation adds value to the overall management of CDB's activities and fosters a culture of critical analysis and learning". In the CDB, however, a learning culture appears to be still in its infancy, and the leadership role expressed in the Evaluation Policy is underdeveloped. A number of difficulties persist, largely to do with delays in exchanging comments on the various reports and with the paucity or absence of monitoring data, and the added value that evaluation might offer to the operations area is ill recognised. Moreover, the link between self-evaluations as the building blocks for independent evaluation is not apparent. There is thus little incentive or management focus to drive any change to current practices; in other words, there is a lack of leadership to advance a learning environment in which evaluation can play a major part.


Part Three: General Conclusions and Recommendations

To conclude, with regard to the Evaluation Policy and OIE's independence, our Review finds that over the past few years the CDB has succeeded in establishing an independent evaluation office that is credible and respected. It reports to a Board Committee and is thus organisationally independent from CDB management. Its work is grounded in an Evaluation Policy, agreed by the Board and the CDB, that reflects internationally recognised principles and good practices. The Policy sets out a broad scope of responsibility for the OIE which, however, seems over-ambitious given current resource constraints. The OIE clearly has both an accountability and a learning function; the latter should support the development of an organisational learning culture. (So far, the uptake of recommendations and key lessons has not been systematically recorded.) In general, on the issue of independence, we conclude that the OIE meets the criteria for organisational and behavioural independence and is protected to a certain degree from external or contextual influences.

However, as the independent Advisory Committee for Development Impact has said, "independent evaluation needs to have clout … credibility of evaluation hinges on public perceptions as well as on reality."48

We therefore highlight a few potential threats, even though there is no evidence to suggest that they are in any way real at present. It would be in the OIE's and the CDB's interest to have these clarified sooner rather than later. For instance:

Any delays incurred in reporting self- and independent evaluation results to the Board could be interpreted as operational interference.

Similarly, there is no agreed process to deal with any conflicts of interest between the OIE and management in reporting results, as it is expected that any disagreements will be reported in the management response.

Another possible threat is the lack of complete autonomy that the Head of the OIE has over staff: recruitment, termination, continuation, and professional development. The Policy is not sufficiently clear about who has the final word in the case of disagreement.

And finally, on resources, our Review accepts the limited funds available to the CDB and the fact that the OIE's budget is not independent but operates within the Bank's budgetary limitations. Nevertheless, we feel that more flexible arrangements could be devised that would allow less restrictive and timelier access to funds.

With regard to governance, our Review has highlighted the difficulties the OAC faces in not receiving the background papers for its meetings in sufficient time to do them justice. Moreover, these documents tend to be very lengthy and not necessarily "reader friendly". The OAC's oversight responsibility is likely to be weakened as a result, and we can already see some indication of this: requests for systematic follow-up on management actions resulting from evaluation findings have not been answered, and there is no standing item for this on the OAC agenda, so such requests can easily be passed over and forgotten. The broadened responsibilities now given to the OAC also mean that many competing entities are trying to secure the OAC's attention. There is now provision for the OAC to call on consultants for help, which we feel may strengthen the OAC in its oversight responsibilities.

Furthermore, in their capacity as members of the Board, the OAC's members should stress the urgency of developing evaluation and monitoring capacity in the BMCs, since this gap has a direct impact on OIE and CDB evaluations.

With regard to the OIE’s performance, we have to respond to the questions raised in this Review’s Terms of Reference, which basically mean answering two main questions: Is the OIE doing the right thing? And is it doing it in the right way?

48 Picciotto, R. (2008) Evaluation Independence at DFID: An Independent Assessment prepared for the Independent Advisory Committee for Development Impact (IADCI), p. 4.


There is no doubt that the decision to establish a credible, independent evaluation function in the CDB is the "right thing" to do: "effective and useful evaluation and oversight activities can assess development effectiveness, hold the organisation accountable for results, and improve operational performance."49 It is also a policy of the MDBs to have such a function, and the CDB has now aligned itself with international standards and practice.50 The question now, therefore, is the following: is the OIE going about it in the right way?

The OIE has taken the "right" steps to improve the engagement and interest of the OAC and CDB senior management, from selecting the topics for its evaluations through to finalising the conclusions and recommendations in a collaborative spirit. It falls short of taking the messages emerging from the studies to "outsiders", such as those responsible for implementing CDB interventions in the BMCs.

In its oversight role, we feel that the OIE has paid insufficient attention to the actual utilisation of evaluation: it is beyond its responsibility to see that action is taken, but it is certainly within its remit to record how, and how well, the lessons drawn have been taken up and used. With regard to its oversight of the self-evaluations (the validation process), the OIE has attempted to improve dialogue with the operations departments and to demonstrate the dual function of oversight and learning. It is now emphasising the learning aspect by providing tools and guidance on how to draw out lessons and integrate them into future planning. More recently, it has sought ways to provide more formalised training on evaluation by working with the corporate planning services and the technical assistance department to develop courses that show how, where and when evaluation plays its part within the MfDR framework.

However, one of the challenges in evaluation management is balancing independence with facilitating buy-in and ownership at the same time. It is a fine line to walk, and it depends to a large degree on the climate between management and the head and staff of the independent evaluation unit in defining the tone of the collaboration. In practical terms, for the CDB this means defining the role of the OIE in relation to the self-evaluations performed by the Projects and Economics Departments. The change from the EOV to the OIE made this role change quite clear: the OIE no longer has responsibility for project monitoring and for planning data needs together with the operational departments. On the other hand, to improve understanding and learning, there needs to be an interface between evaluation and management. At present, OIE's dual role, that is, its advisory role in relation to operations and its strategic role towards the OAC and senior management, has not been satisfactorily resolved. The operational staff still do not appear to see any urgency in producing their completion reports or to appreciate what lessons might be drawn from such reflection. The OIE is doing its best to support "learning" whilst at the same time keeping at arm's length. The greatest challenge the OIE faces in its new capacity is the slow development of an organisational learning and evaluation culture.

A Learning and Evaluation Culture

Evaluation utility depends on the engagement of evaluation users – those who should benefit from the knowledge generated through the studies. Useful evaluation therefore depends to a large degree on the development of an evaluation and learning culture and how well these are embedded in the organisation. This means that the organisation recognises and appreciates evaluation’s role and the functions it can have, particularly for helping understand what it is achieving and where and how improvements can be made. In short, the added value that evaluation can bring to the organisation is its ability to draw out the important lessons that can help improve the organisation’s performance.

However, whilst CDB senior management shows all the signs of embracing evaluation as an important strategic tool, there still appears to be some apprehension about receiving criticism

49 CDB (2011) Evaluation Policy (p. 2).
50


however constructive this might be. The OAC has already affirmed its interest in learning what can be "put right the next time around". In considering accountability, the committee is asking for a more strategic approach to learning and sharing knowledge based on evidence. The CDB also shares the development goals of other MDBs, that is, "to end extreme poverty and promote shared prosperity". This means looking for new forms of problem-solving and for ways to create a "development solutions culture". Hence there is an interest in learning from experience and exchanging knowledge about what works. This implies balancing accountability and learning, making sure they are not seen as opposites but as compatible entities. This greater emphasis on learning requires a reframing of CDB's thinking and of how it deals with the constructive criticism that evaluation can offer.

The following excerpt, describing the evaluation culture in UNRWA, illustrates how such factors play out elsewhere:

Weak evaluation culture

27. While some stakeholders seem keen on evaluation, the overall evaluation culture in UNRWA is weak. There are several aspects to it.

28. First, many of the interviewees stressed that UNRWA has a weak learning culture, which stems from a number of factors. One reason given relates to the cultural virtue of oral communication, which makes conveying documented experiences challenging. Another reason is language: a majority of UNRWA's national staff are not fluent in English (evaluation reports are mostly in English). Furthermore, criticism, even if constructive, is, according to some interviewees, mainly perceived as a threat and not as an opportunity. Finally, learning is also affected by a very basic constraint: lack of time.

29. Second, there is a weak knowledge management system to systematically collect and share experience and lessons learned in UNRWA. UNRWA communities of practice do not exist; several interviewees mentioned the use of knowledge networks outside of UNRWA, i.e. communities of practice managed by other agencies. Also, accessing evaluation reports is not easy. The UNRWA website on the Internet does not provide access to evaluation reports. While the Agency's Intranet has a site for evaluation reports, it is not a complete repository, and the Evaluation Division does not know exactly how many decentralized evaluations are being produced. In addition, there are only a few evaluation plans at the level of field offices or departments.

30. Third, the Panel found that decentralized evaluations are, at least partly, perceived as donor-driven accountability instruments rather than as learning tools. In that sense, evaluations are managed as bureaucratic requirements, thereby weakening the learning dimension.

31. Finally, the sensitive political context in which UNRWA operates may also discourage a strong evaluation culture as evaluative evidence can sometimes be overridden by political considerations.14 The Panel was repeatedly told that given the political context, any change is a challenge.

14 An example mentioned to the Panel was the evaluation of the Qalqilya Hospital (2013), which concluded that the Hospital should be closed. However, for political


Part Three: Recommendations

(Here is a list of some possible recommendations, to be discussed and developed within the Review Panel initially and then proposed for discussion together with the OIE.)

- OAC's oversight responsibility needs to be strengthened.
- (Possibly) review the Evaluation Policy to redress gaps.
- OIE to develop a 5-year strategy providing a step-by-step work programme, budget and timeframe for implementing the Evaluation Policy.
- Stronger leadership from the President to provide a climate conducive to promoting the added value of evaluation to overall management and to fostering a culture of critical analysis and learning.
- Stronger support from the Advisory Management Team for the evaluation function, by emphasising CDB managers' accountability for performance results and by devising incentive schemes to support the accountability function.

- Set up committees to accompany specific evaluation studies as a means of reinforcing ownership (advisory groups).

- OIE could contribute to developing a learning culture within the CDB by adopting the role of critical friend in its dealings with the CDB operations departments and the CDB more generally. Valuable insight can be gained by having a focused conversation with the client about aspects of the evaluation that went well and those that did not go so well. Some meetings might include all relevant stakeholders in addition to the main client. This feedback can then inform a subsequent internal project-team debrief (e.g. identifying internal professional development or process improvement needs), and input from all perspectives can inform planning for future projects. A summary of these discussions can then be circulated so that everyone can use it for their own internal or external quality improvement purposes.

- Rather than asking "what went wrong?", the conversation can address "what surprised us, what we would do differently, what did we expect to happen that didn't, and what did we not expect to happen that did". This is a better means of getting at the negative aspects without placing blame.

- OIE to train up and engage "champions" within CDB operations departments to help demonstrate evaluation utility and provide "on the job" training in self-evaluation to colleagues.
- A quality control group could also be set up, as Picciotto suggested for DFID, to carry out and develop the work of Quality Assessment and, particularly, Quality at Entry monitoring.

- OIE should be given the resources to build M&E capacity in BMCs as a priority, possibly in partnership with other MDBs, development organisations and the countries themselves.

- Possibly criticise the over-emphasis on the five DAC criteria, which have been in use for more than 15 years without going through any major revisions. Given their importance and level of influence in the field, it is pertinent that independent professionals take a critical look at them, especially since many scholars and practitioners consider that the quality of evaluations in development aid has been quite disappointing (Chianca, T. (2008) "The OECD/DAC Criteria for International Development Evaluations: An Assessment and Ideas for Improvement", Journal of MultiDisciplinary Evaluation, 5(9), March 2008, ISSN 1556-8180; http://evaluation.wmich.edu/jmde/).

- Regarding the validation of self-evaluations: it is recommended that a leaner format be developed for the PCVRs, repeating less of the content of the original PCR text and focusing


on the validity of the PCR against a limited number of key issues to be assessed for each PCR. The PCVRs are also written out at considerable length, which may not be needed for this type of document; a more tabular format with more succinct statements could lead to a leaner process for producing the PCVRs without any loss of usefulness. The "PCR checklist" would be a good starting point for this.

- The link between self-evaluations, validations and independent evaluation is not clear at present, nor is the link between self-evaluations and QaE documents, so one wonders what all the effort on the operations side is for. This is a real issue: they seem to do a lot of interesting and reasonable things, but there is a lack of coherence. (This impression is based on the documents only, not on interviews that would give a broader picture.)

- This is something the EIB evaluation unit was criticised for in the past too. Since then, it has started to include "younger" projects (sometimes still ongoing) in its samples, and it redoes the portfolio analysis right before finalisation of the report to see whether things have changed; the services can, in their response, indicate whether things have indeed changed over time.

Recommendations for improving the process for study approval and funding

Give recommendations on priorities for OIE work

Funding should come preferably from the administrative budget. Unused monies could then be released in the annual budgetary reviews, but this should have no effect on the budget for subsequent years. The Panel is surprised to find that a Board-approved OIE work programme and budget proves inadequate; either the proposed budget per work programme


The Panel, however, encourages creating such a quality control unit, whose role cannot be fulfilled by the OIE as it lies outside the OIE's scope and present capacity, even though the OIE could play an advisory/methodological role.

Independence of the Office of Independent Evaluation (OIE)

Independence is absolutely central to the integrity and trustworthiness of evaluation. It is an agreed requirement within the development agencies and in the evaluation community as a whole. In examining the issue of independence and good practice, the reviewers are guided by the Evaluation Cooperation Group's recommendations on good practices, the CDB's Evaluation Policy, and the 2011 consultancy review of independence relative to the CDB's evaluation and oversight division.51 The appraisal is based on a comparison of the ECG's recommendations on independence52 with the current OIE status.

OIE and Independence: Recommendations from the Evaluation Cooperation Group (ECG)

The ECG considers the issue of independence according to three specific areas: organisational (structural) independence; behavioural (functional) independence; and protection from outside interference (operational independence). To these it adds safeguards against conflicts of interest.

Organizational independence ensures that the evaluation unit and its staff are protected against any influence or control by senior or line management, and that they have unrestricted access to all documents and information sources needed for conducting their evaluations; also, that the scope of the evaluations selected can cover all relevant aspects of the institution's work.

Behavioural independence generally refers to the evaluation unit's autonomy in selecting and conducting its work programme and in producing quality reports that can be delivered without management interference.

Protection from outside interference refers to the extent to which the evaluation function is autonomous in setting its priorities, conducting its studies and processes, reaching its judgments, and managing its human and budget resources, all without management interference.

Conflict-of-interest safeguards refer to protection against staff conflicts of interest, whether arising from current, immediate, future or prior professional and personal relationships or from financial interests, for which there should be provision in the institution's human resource policies.

The OIE’s Independence in Practice

Organisational / structural independence

On the whole, the Panel acknowledges and commends the efforts being made by the CDB to assure OIE's organisational independence. The CDB's Evaluation Policy provides for the OIE's organisational independence from line management, and the interview data suggest that there is also wide acceptance and acknowledgement of why the OIE should have such independent status. Table 1 below provides our overall assessment of this aspect of OIE's independence when compared with the ECG recommendations.53

51 Osvaldo Feinstein & Patrick G. Grasso, Consultants (May 2011) Consultancy to Review the Independence of the Evaluation and Oversight Division of the Caribbean Development Bank.
52 ECG (2014) Evaluation Good Practice Standards, Template for Assessing the Independence of Evaluation Organizations, Annexe II.1.
53 Based on ECG (2014) Template for Assessing the Independence of Evaluation Organizations, Evaluation Good Practice Standards, Annexe II.1.


Table 1: OIE organisational independence compared with ECG recommendations

Aspect: The structure and role of the evaluation unit
Indicator: Whether the evaluation unit has a mandate statement that makes clear that its scope of responsibility extends to all operations of the organization, and that its reporting line, staff, budget and functions are organizationally independent from the organization's operational, policy and strategy departments and related decision-making.
Assessment: Partially complies. The Policy is broad enough to cover the full range of MDB types of evaluation; in practice, however, this would not be possible without additional human and budget resources.

Aspect: The unit is accountable to, and reports evaluation results to, the head or deputy head of the organization or its governing Board
Indicator: Whether there is a direct reporting relationship between the unit and (a) the Management, and/or (b) the Board, or (c) the relevant Board Committee of the institution.
Assessment: Complies. OIE reports to the Board of Directors (BoD) through its Oversight Assurance Committee (OAC).

Aspect: The unit is located organizationally outside the staff or line management function of the program, activity or entity being evaluated
Indicator: The unit's position in the organization relative to the program, activity or entity being evaluated.
Assessment: Complies. The OIE is located outside, and is therefore independent of, CDB line management.

Aspect: The unit reports regularly to the larger organization's audit committee or other oversight body
Indicator: Reporting relationship and frequency of reporting to the oversight body.
Assessment: Complies. The OIE reports five times per year to the OAC. Board approval for an additional executive meeting between the Head of the OIE and the OAC at least once per year was given in October 2015.

Aspect: The unit is sufficiently removed from political pressures to be able to report findings without fear of repercussions
Indicator: Extent to which the evaluation unit and its staff are not accountable to political authorities and are insulated from participation in political activities.
Assessment: Complies.

Aspect: Unit staffers are protected by a personnel system in which compensation, training, tenure and advancement are based on merit
Indicator: Extent to which a merit system covering compensation, training, tenure and advancement is in place and enforced.
Assessment: Partially complies, through CDB human resource policy. However, the skill needs of OIE staff ought to be regularly reviewed in light of its move towards higher-level evaluations. Appraisal of skill needs and hiring of relevant staff should be completely under the authority of the Head of Evaluation; this is not sufficiently clear in the Policy or other documents we reviewed.


Aspect: Unit has access to all needed information and information sources
Indicator: Extent to which the evaluation unit has access to the organization's (a) staff, records and project sites; (b) co-financiers, other partners and clients; and (c) the programs, activities or entities it funds or sponsors.
Assessment: Complies. The available evidence suggests that there is no reason to doubt such access. However, systematic and easily accessible documentation is lacking in the CDB; it is one of its weak points, and delays in getting hold of the relevant documents can have consequences for the timeliness of evaluation studies.

However, independence should not mean isolation. There appears to be a detachment between the OIE and the CDB that is of concern to the Panel: (1) between the OIE and operations staff, and (2) in the structural arrangements between the OIE and senior management.

(1) In agreeing that the OIE should concentrate on strategic and thematic, in-depth evaluations, responsibility for project monitoring and evaluation was given over to operations. The division is clear and respected. However, it has its drawbacks: with the OIE no longer systematically involved at the front end of project design, monitoring data needs are likely to be poorly defined, and weak monitoring data will contribute to weaker evaluations. (More on this point under the heading "self and independent evaluations".)

In the reviewers' opinion, it is a common misunderstanding to assume that providing evaluator advice on monitoring and evaluation data will compromise evaluator independence. On the contrary, evaluation input into project design is essential to assure that the logic, indicators and data needs are addressed, so that at some future point in time an evaluation of the achievements can be empirically grounded.

This is not to say that the OIE no longer has any influence at the front-end design stage; it has merely shifted the point of focus. The OIE is now systematically providing such input more generally to the corporate planning teams for the tools and systems they are developing to support the MfDR framework. The monitoring data for projects and their implementation should be improved once the Project Performance Evaluation System (PPES) and the Portfolio Performance Management System (PPMS) are updated and operational.

(2) In the second place, the OIE has limited formal access to the Advisory Management Team (AMT) weekly meetings, where the President and senior management gather to exchange up-to-date information on the dynamics of CDB policy and practice. The OIE is not regularly invited in any capacity to these meetings, nor given a copy of the agenda or minutes; it is only occasionally invited to attend in order to discuss an evaluation report or management feedback. For the OIE, this means that it is unlikely to pick up on the 'when' and 'what' of key decisional issues or to provide input into the discussion based on evaluative information. Its observer status at Loans Committee meetings, and its participation as an informer at OAC and BoD meetings and discussions, do not necessarily provide the same insight into the dynamics of management actions and decisions.

To respond to this situation, the President has agreed to meet regularly with the Head of the OIE in order to keep him up to date with CDB strategic thinking. This is a welcome change.

OIE Independence and Behavioural Issues

The Panel has concerns about some behavioural issues. For example, through both the interviews and the documentary review, we learned of considerable delays in processing both the


independent evaluation reports and OIE's validations of the CDB's self-evaluations. Delays are generally due to the time taken to receive feedback on the independent reports, first from the relevant operational department and then from the AMT, and then to provide the OIE with a management response, which is initially drafted by operations staff before being reviewed by the AMT. (OIE reports cannot be submitted to the OAC without the relevant management response.) This two-layer process for preparing submissions to the Board is inefficient and could in future threaten evaluation's independence by delaying OIE's timely reporting to the OAC.

OIE validations of the CDB self-evaluations are also submitted to the OAC, but it is in both sides’ interest to clear up any misunderstandings beforehand. Despite attempts to improve the timeframe for completing these validations, delays are more the norm than the exception. Table 2 below summarises our assessment of the behavioural aspects of independence.

Table 2: OIE and Behavioural Independence

Aspect: Ability and willingness to issue strong, high-quality and uncompromising reports
Indicator: Extent to which the evaluation unit (a) has issued high-quality reports that invite public scrutiny (within appropriate safeguards to protect confidential or proprietary information and to mitigate institutional risk) of the lessons from the organization's programs and activities; (b) proposes standards for performance that are in advance of those in current use by the organization; and (c) critiques the outcomes of the organization's programs, activities and entities.
Assessment: Partially complies. The paucity of data and documentation sometimes hinders the quality of reports. The OIE emphasizes the learning part of evaluation and is cautious in its criticism, recognising that management is going through a transitory stage and can still be overly defensive.

Aspect: Ability to report candidly
Indicator: Extent to which the organization's mandate provides that the evaluation unit transmits its reports to the Management/Board after review and comment by relevant corporate units, but without management-imposed restrictions on their scope and comments.
Assessment: Partially complies. Reporting to the Board is sometimes compromised by delays in the review/comment process between the OIE and the CDB. Any delay in the production of a Management Response also impairs the timely submission of a report to the Board, since the two have to be submitted together.

Aspect: Transparency in the reporting of evaluation findings
Indicator: Extent to which the organization's disclosure rules permit the evaluation unit to report significant findings to concerned stakeholders, both internal and external (within appropriate safeguards to protect confidential or proprietary information and to mitigate institutional risk); and who determines the evaluation unit's disclosure policy and procedures (Board, relevant committee, or management).
Assessment: Partially complies. The OIE conforms to the CDB's disclosure policy. However, the dissemination of evaluation findings currently appears to be restricted to website publication and reports to the Board. A more targeted communication strategy including other key stakeholders, e.g. project implementers in the BMCs, should be developed and put in place.

Aspect: Self-selection of items for work program
Indicator: Procedures for selection of work program items are chosen, through systematic or purposive means, by the evaluation organization; consultation on the work program with Management and Board.
Assessment: Complies. The OIE ensures that its work program is drawn up after consultation with both CDB Management and the Board, to seek their input on relevant topics and themes.

Aspect: Protection of administrative budget, and other budget sources, for the evaluation function
Indicator: Line item of the administrative budget for evaluation determined in accordance with a clear policy parameter and preserved at an indicated level or proportion; access to additional sources of funding with only formal review of the content of submissions.
Assessment: Partially complies. The administrative budget for supporting OIE work is protected, and access to additional sources of funding is possible if well argued and justified. But the approval process is complex and inefficient (see Figure 1 below).

OIE and Protection from External Influence or Interference

Our overall assessment is provided in Table 3 below. The OIE's independence in the design, conduct and content of its evaluations does not appear to be subject to any external interference. But securing funding from sources outside the OIE's administrative budget, i.e. from the Special Development Fund, is an unduly complex and long process. As such, we consider that the current funding process can affect the OIE's choice with regard to the type of evaluations it can undertake. (See Figures 1 and 2 below.)

Table 3: OIE and its Independence from External Influence or Interference

Aspect: Proper design and execution of an evaluation
Indicator: Extent to which the evaluation unit is able to determine the design, scope, timing and conduct of evaluations without Management interference.
Assessment: Complies, though within the limits of the restricted human and financial resources available.

Aspect: Evaluation study funding
Indicator: Extent to which the evaluation unit is unimpeded by restrictions on funds or other resources that would adversely affect its ability to carry out its responsibilities.
Assessment: Partially complies. The OIE must work within the limits of the agreed administrative budget wherever possible; if additional resources are needed for studies, it must seek alternative funds elsewhere. The budget limitations can have an effect on the type of evaluations undertaken, and therefore on the OIE's independence in terms of choice.

Aspect: Judgments made by the evaluators
Indicator: Extent to which the evaluator's judgment as to the appropriate content of a report is not subject to overruling or influence by an external authority.
Assessment: Complies. The available evidence suggests that the Board and Management accept the evaluators' independent interpretation and conclusions; management responses are agreed to be the accepted place to raise any difference of opinion.

Aspect: Evaluation unit head hiring/firing, term of office, performance review and compensation
Indicator: Mandate or equivalent document specifies procedures for the (a) hiring and firing, (b) term of office, (c) performance review, and (d) compensation of the evaluation unit head that ensure independence from operational management.
Assessment: Complies. The Head of OIE is appointed by the CDB President in agreement with the OAC for a 5-year period, renewable once. The Head can be removed from office by the President or the Board, but only with the agreement of both parties. However, the Head reports to the President for all administrative and personnel matters. Even though this was not recommended in the Osvaldo Feinstein & Patrick G. Grasso report on independence in 2011, the BoD accepted CDB's reasons for keeping this arrangement (e.g. most OAC members are non-resident and cannot oversee day-to-day work).

Indicator: Extent to which the evaluation unit has control over (a) staff hiring, (b) promotion and pay increases, and (c) firing, within a merit system.
Assessment: Partially complies. All OIE staff members are treated in the same way as other CDB staff. The Head has limited control over the hiring, firing or promotion of OIE staff.

Aspect: Continued staff employment
Indicator: Extent to which the evaluator's continued employment is based only on reasons related to job performance, competency or the need for evaluator services.
Assessment: Partially complies. Whilst the EP is clear about procedures for hiring, firing and promotion, all of which must conform with CDB human resource policy, nothing is said about how any difference of opinion between the CDB and the Head of the OIE would be resolved with regard to continued staff employment, should the level of technical or interpersonal competencies needed to meet new demands change.

Avoidance of Financial, Personal or Professional Conflicts of Interest

This particular aspect refers to the organisation’s Human Resources Policy; there must be provisions in place to protect against actual or potential conflicts of interest. The Panel requested, via the OIE, evidence from Human Resources on any such provisions, but did not receive an answer. It must be assumed that this aspect of independence, past or present, does indeed form part of normal CDB Human Resource Policies.

To conclude: The Panel is impressed with the measures the CDB has taken to assure the organisational independence of the OIE. Its independent status is accepted and respected by senior and line management. The OIE’s budget, however, is not independent of the overall CDB administrative budget, and this affects its choice of evaluation types and approaches. Some behavioural issues affecting independence were also of concern, especially the delays in the exchange of documents between the OIE and operations departments, which have a direct effect on timely reporting to the OAC. As for protection from outside interference, our concerns largely relate to the OIE’s independence over staffing issues; there are potential loopholes in current arrangements that could undermine the OIE’s autonomy over its staff.

OIE’s Strategy, Work Practices and Work Programme

The OIE has had to develop a plan to implement the Evaluation Policy. This raises such questions as: what are the priorities, and what is the timeframe for achieving which activities? These were partially addressed in the OIE work programme and budget for 2012 to 2014, but that programme proved to be over-ambitious. Much of the period 2012 to 2015 has therefore been taken up with preparing the OIE’s shift in focus from project-based evaluations to high-level thematic and in-depth strategic studies. This has meant adopting a three-way approach: (1) for self-evaluations, reducing its time input to support the process; (2) for independent evaluations, taking stock of the gaps in coverage and expertise; and (3) networking to share experiences with centres of expertise and align the OIE with international practices. In addition, amongst other duties, it has been supporting the development of MfDR tools and systems, such as the Project Performance Assessment System, by providing advice and input on programme logic and monitoring needs. The OIE plans to conduct two to four high-level studies per year from 2016. The OIE has also chosen to increase the involvement of its professional staff in conducting independent evaluations. Outsourcing is still needed: when the study is funded by the SDF, when time is limited, and when specific expertise is needed.


But the plans appear to place little emphasis on the activities associated with evaluation management (e.g. knowledge management) and the time they require. Other time demands mentioned in the previous sections, such as delays in completing reports, validation work, etc., have also affected the OIE’s plans. The more recent work plans have set the task of delivering utility-focused and timely evaluations, but they lack clarity on how the OIE proposes to surmount the time and data issues, which are far from new. In short, they lack a theory of change and a timeline. The challenges that have to be dealt with to enable the OIE to move up the MDB evaluation pyramid54 are brought out in the remaining sections of this Review, not least given the limited resources available.

To conclude: The OIE has made a first step in proposing a strategy for establishing itself as an independent evaluation resource. But its strategy lacks a theory of change and a prioritisation of tasks, which should include more emphasis on evaluation management activities.

The Value / Usefulness of OIE’s Independent Evaluations

Evaluation is a powerful tool that can provide useful, evidence-based information to help inform and influence policy and practice. But useful evaluations depend not only on the evaluators’ skills but on several other important factors as well: (1) planning evaluations to be relevant to the priorities of the organisation’s work and for their results to be delivered in time to be useful; (2) the degree of consultation and, ultimately, ownership by those who seek evaluative information; (3) the tools used to support the evaluation process per se; and (4) the credibility and quality of the evaluation products55.

1. Planning relevant and timely evaluations

The OIE is now working on a 3-year rolling work plan that sets out the broad areas for enquiry. So far, there are no agreed criteria for selecting the specific topics for independent evaluation, although the priorities tend to reflect those of the CDB’s strategic plan. Decision-making is instead rather ad hoc, based on a process of dialogue between the OIE and the CDB, and between the OIE and the Board.

One of the OIE’s two objectives for 2015, therefore, was to define a work plan and agree priorities based on an approach that is “utilisation-focused”. This means that studies are selected and planned to be relevant and useful to the organisation’s needs.

The OIE has achieved this objective with respect to its latest studies, which concern the Social Development Fund (SDF) Multicycle 6&7 Evaluation, the Haiti Country Strategy evaluation and the evaluation of the CDB’s Policy Based Operations. Each of the three has been planned to deliver its results in time to provide the CDB Board of Directors with relevant information for negotiating the next round of funding. In spite of some delays for a myriad of reasons, not least the extra effort needed to secure essential data, the studies are expected to deliver on time.

The processes for agreeing the OIE’s work plan and specific evaluations on the one hand, and for securing alternative funding on the other, are shown in Figure 1 below. The Panel was surprised to learn how bureaucratic the internal approval process is, and how inefficient in view of the time it takes. The concern here is that such a process could pose a threat to assuring the Board of “timely studies”.

54 US Treasury Report to Congress on Evaluation Standards and Practices at the Multilateral Development Banks (September 2014, Annex C).
55 These aspects reflect the principles and good standards of the Evaluation Coordination Group and the Evaluation Community more generally.


Figure 1: Selection of Evaluation Topics and Funding Source

[Flowchart not reproduced. In outline, Figure 1 shows two parallel tracks. Study design and budgeting track: consultation with CDB Operations and the OAC/Board on the selection of the evaluation topic; the OIE drafts the Terms of Reference / Approach Paper; internal review of the Approach Paper; a detailed ToR, or the final Approach Paper if sufficiently detailed; finalisation of the Approach Paper and submission to the OAC/Board; OAC approval (recorded in the OAC minutes); then OIE selection and contracting of consultants (if any). Funding track: studies funded from the OIE administrative budget sit within the 3-year Work Programme and Budget approved by the Board and the annual OIE report and work plan submitted to the OAC, with Board approval necessary if the study budget is above USD 150,000 (Board notification only if USD 150,000 or below); studies funded by the SDF instead require a TA Paper (content similar to the Approach Paper but in a different format) and approval by the Internal Loans Committee before the OIE selects and contracts consultants.]

2. Consultation and ownership

“The credibility of evaluations depends to some degree on whether and how the organization’s approach to evaluation fosters partnership and helps build ownership and capacity in developing countries.” (ECG good practices)

The OIE engages with the OAC, CDB senior management and operations to agree its 3-year work plan and then to select the specific topics and themes. It also discusses the evaluation approach paper (design and implementation plan) with the CDB and the OAC before completing the final version. However, preliminary and final drafts of the report are submitted only to CDB line and senior managers, for comments and the correction of factual errors; only final versions are given over to the OAC. A series of discussions is held, first with the CDB and then with the OAC, on the results and their implications. Discussions with the OAC are more limited due to the overburdened agenda of OAC and Board meetings, as previously discussed.

In short, the OIE is to be commended for following the recommendations of professional good practices and standards on participative approaches; it has succeeded in introducing a modus operandi that involves the key players in the selection of evaluation topics, the evaluation designs and their results. Figure 2 below provides an overview of the evaluation implementation and stakeholder engagement processes.

Figure 2: Evaluation Study Implementation and Feedback Loops

[Flowchart not reproduced. In outline, Figure 2 distinguishes three implementation arrangements: (A) fully outsourced to external consultants, with oversight by the OIE; (B) conducted by OIE staff; and (C) conducted jointly by external consultants and the OIE. From the Terms of Reference, the flow runs through preparations (a detailed evaluation plan, including tools, timeline and logistics), production of an Inception Report / Approach Paper, data collection and analysis, a presentation/workshop of interim findings and conclusions for immediate feedback and validation, a summary and presentation for workshop discussion with the CDB, and submission of the draft final report to the OIE, with review loops between the OIE and the CDB (potentially also the BMCs). Once the OIE approves the final report, it goes to CDB Senior Management for a Management Response; the final report and Management Response are considered by the CDB AMT and then submitted to the OAC/Board for endorsement, with feedback to the evaluation lead and preparation for disclosure and dissemination.]

Notes to Figure 2:

1. The OIE informed the Panel that this is an abbreviated version: there are, for example, additional steps (secondary processes) when evaluations are procured (tendering or single source), and additional review loops and updates to the OAC.

2. The Panel was informed that the OAC may also decide to return the report to the OIE, or to demand specific actions from Management based on the report.

This process is engaging and appears to have secured senior management and OAC interest and buy-in, as witnessed in the latest studies. But there is a downside too: the process takes much time and, in our view, is partly unnecessary. The Panel appreciates that staff from operations as well as the AMT may want to confer on an appropriate management response, but this should not be the case for reviewing an independent report for factual errors. The two-phase review seems inefficient.

Contact between the OIE, the CDB and/or the OAC during the actual study implementation is most often restricted to the occasional progress report, particularly when studies run behind time. There is no “accompanying group” for individual studies, which would include internal and possibly external partners. Such “advisory groups” have shown their worth in a number of contexts for improving buy-in and providing strategic input as well. The OIE does, however, arrange discussions for reflecting on emerging findings, but we are not sure how systematic this feedback loop is.


More generally, outside of an evaluation study, the OIE has limited dealings with operations. The OIE has an advisory role, particularly in providing training, guidelines and tools to support self-evaluations. We are nevertheless concerned about the seeming distance between the two, and how this has affected the perceived value of evaluation. (For more on this point, see the section below on “Self- and Independent Evaluations”.)

The Panel wishes to stress, however, that this is not the case for newly appointed senior managers. Among them, a much more open attitude to evaluation and appreciation of its potential value was evident; they expressed interest in drawing out important lessons on what works, how, for whom, and under what conditions. In one case, interest was followed up in practice: the OIE was recently invited by a senior manager to share evaluative knowledge and experience with his staff regarding policy based operations.

Certainly, we can say that, overall, the key stakeholders within the CDB are sufficiently integrated into the evaluation process to foster their buy-in and ownership. But more generally, we feel that the utility of independent evaluations can be improved by fostering a supportive climate that wants to learn through calculated trial and error. The constructive criticism that evaluation can offer can add value to understanding the strengths and weaknesses of such strategies. This, however, cannot be done overnight; it takes time.

3. Tools to support the evaluation process

So far, during this transitional phase, the OIE has mainly focussed on improving the tools to support the operations areas’ self-evaluations. This has left the OIE with little time to produce the checklists or tools to support its own studies. There are plans to develop an OIE Manual to guide and support the independent evaluation process. Such plans should be encouraged, as these documents will form a very important part of training, particularly for newcomers to the OIE team.

In the meantime, the OIE and operations staff refer to the Performance Assessment System (PAS) Manuals for evaluation activities. The manuals are based on DAC criteria and ECG principles. Much emphasis is given to the rating system and to how and what should be rated. However, we find the manuals lengthy, unwieldy and overcomplicated. Moreover, such manuals should be used for reference; they cannot and should not replace first-hand training in how to plan, conduct and manage the evaluation process.

Quality Assessment (QA) and Quality at Entry (QaE)

There was a transition period between 2012 and 2014 to establish the OIE. Work on the PAS, QaE, PCRs and ARPP, which had started earlier, was therefore completed after the OIE came into existence, but it effectively had no formal ‘home’ in operations. The Panel was told that there had been some discussions about creating a Quality Assurance unit within CDB (OPS), but the current status is unclear.

The QaE Guidance Questionnaire was developed earlier and completed by the OIE. It was used to assess the documents that came across to the OIE for comments at the review stage. The results were then sent to the Portfolio Manager/Project Coordinator, indicating any gaps or issues that needed to be addressed or clarified. QaE Guidance Questionnaires were developed for all the Bank’s lending products and CSPs, and to assess the quality of supervision.

After the QaE was launched bank-wide, several operations officers saw the merit of using the QaE Guidance Questionnaire in the field and adopted it as a tool during appraisal missions to cross-check and test their data collection and analysis.

OIE’s use of the QaE was discontinued in 2014 due to limited resources and a stronger focus on evaluations. It still sometimes comments on specific appraisals, but very selectively.


Both QaE and QaS (quality at supervision) are also addressed in the PAS Manuals. In addition, the QaE and PAS have been incorporated in Volume 2 of the Operations Manual (OPPM).

The Review Panel assessed the QaE forms. They are relatively standard, adapted to the specificities of the CDB. They contribute to judging a project’s expected quality in a relatively objective way. As such, they are helpful, as a benchmark, in the ex-post assessment of projects.

The Panel considers that the lack of an established Quality Unit in the CDB (and independent from OIE) is a weakness that should be addressed in the near future.

4. Credibility and Quality of Evaluation Products

As with many other MDBs, evaluation activities include both independent and self-evaluations; the latter are completion reports on operational projects and country strategy programmes and are done by operations staff. The OIE then validates the quality of such reports. The self-evaluations should inform the more strategic studies conducted independently by the OIE. (More on the relationship between the two is provided later in this Review.)

An independent evaluation is processed as follows: the OIE prepares an Approach Paper (AP) for approval by the OAC. If the study is to be outsourced, the AP becomes the basis for a Terms of Reference (ToR), which, subject to the size of the budget, may be put to tender. The contracted evaluator then prepares an Inception Report (IR) after some desk and field research has taken place; this intermediary report is not produced if the OIE itself is conducting the evaluation. Sometimes a Progress Report is submitted, but otherwise the next stage is the delivery of the final report in various drafts. (Assessments are like evaluations but more limited in scope and depth of analysis.)

Since 2012, the OIE has produced a range of studies and approach papers. This review is based on those listed in Table 4 below, as provided by the OIE, covering the period from May 2012 to December 2015: 3 evaluations, 4 assessment studies, 14 validations of self-evaluations, and 3 Approach Papers for upcoming evaluations.

Table 4: List of studies (N = 24) submitted to the Board for the period January 2012 to December 31, 2015

Board Meeting 251, May 2012:
- Ex-Post Evaluation Report on Road Improvement and Maintenance Project, Nevis – St. Kitts and Nevis
- Validation of Project Completion Report on Sites and Services – Grenada
- Assessment of Effectiveness of Implementation of Poverty Reduction Strategy 2004-09

Board Meeting 253, Oct. 2012:
- Assessment of Extent and Effectiveness of Mainstreaming Environment, Climate Change, Disaster Management at CDB

Board Meeting 254, Dec. 2012:
- Assessment of the Implementation Effectiveness of the Gender Equality Policy and Operational Strategy of the Caribbean Development Bank
- Validation of Project Completion Report on Enhancement of Technical and Vocational Education and Training – Belize
- Validation of Project Completion Report on Fourth Road (Northern Coastal Highway Improvement Section 1 of Segment II) Project – Jamaica
- Assessment of the Effectiveness of the Policy-based Lending Instrument

Board Meeting 256, May 2013:
- Validation of Project Completion Report on Expansion of Grantley Adams International Airport – Barbados
- Validation of Project Completion Report on Fifth Water Supply Project – Saint Lucia

Board Meeting 261, May 2014:
- Validation of Project Completion Report on Immediate Response Loan, Tropical Storm Gustav – Jamaica
- Validation of Project Completion Report on Social Investment Fund – Jamaica
- Validation of Project Completion Report on Disaster Mitigation and Restoration – Rockfall and Landslip, Grenada

Board Meeting 263, Oct. 2014:
- Validation of Project Completion Report on Basic Education Project – Antigua and Barbuda
- Approach Paper for SDF 6 & 7 Multicycle Evaluation

Board Meeting 264, Dec. 2014:
- Validation of Project Completion Report on Policy-Based Loan – Anguilla
- Validation of Project Completion Report on Immediate Response Loan – Tropical Storm Arthur – Belize
- Evaluation of Technical Assistance Interventions of the Caribbean Development Bank Related to Tax Administration and Tax Reform in the Borrowing Member Countries 2005-2012

Board Meeting 265, March 2015:
- Approach Paper for the Evaluation of Policy Based Operations

Board Meeting 266, May 2015:
- Validation of Project Completion Report on Upgrading of Ecotourism Sites – Dominica
- The Evaluation of the Caribbean Development Bank’s Intervention in Technical and Vocational Education and Training (1990-2012)

Board Meeting 267, July 2015:
- Validation of Project Completion Report on The Belize Social Investment Fund I Project – Belize

Board Meeting 268, Oct. 2015:
- Approach Paper for the Country Strategy and Programme Evaluation, Haiti

The review and analysis of these documents are based on the UNEG Quality Checklist for Evaluation Reports (http://www.uneval.org/document/detail/607) as well as on ECG guidance (the “Big Book” on Good Practice Standards).

Approach Papers

Three Approach Papers (APs) were made available to the Panel (see Table 4 above). An AP describes the rationale for the evaluation, the background to the topic evaluated, the evaluation framework (criteria and questions) and the approach. It also describes the team and provides an initial plan. Being the first main deliverable of the OIE’s evaluation process, APs are the starting point and therefore a major determining element in the roll-out of each evaluation. APs therefore “have to get it right”.

The APs examined are clearly written, well-structured and of reasonable length.56 We were surprised to find, however, that they do not make explicit the objectives of the evaluated intervention(s), e.g. through a clear objective tree, or through an explicit theory of change, intervention logic or logframe. Whilst one of the APs contains, in an appendix, a results framework for the evaluation, the results framework for the intervention (PBO) itself is lacking.

56 Opportunities remain, of course, to be more concise and to move parts to appendices, e.g. detailed descriptions of the evaluation team or part of the description of the evaluated intervention.

Inception reports

Only one Inception Report was given to the Panel for review (SDF 6&7). It gives an in-depth description of the evaluated programme and provides a clear Theory of Change. It is good practice that this is established after a pilot field mission, which helps to amend the initial AP on the basis of field observations and to sharpen the evaluation questions if needed.

However, it is still considered good practice to have the Theory of Change elaborated in the intervention’s initial design documents. This would facilitate OIE evaluations after project completion. Establishing the Theory of Change of any intervention should be included more explicitly in the QaE form, to be developed jointly by the Quality unit referred to above and the OIE.

Evaluations and Assessments

Three evaluations and four assessment reports completed during the review period were considered. Assessments are similar to evaluations but have a narrower scope; they focus on a limited set of aspects or judgment criteria, mainly effectiveness, i.e. achievement of objectives. Evaluations generally base their judgment on the internationally recognised DAC criteria as well as aspects of the CDB and BMC’s management of the intervention.

In general, these reports are of reasonable quality. In the main, they explain the evaluated object57 and provide evaluation objectives. The findings are organised around the evaluation criteria or questions detailed in the scope and objectives section of the report. They are based on evidence derived from data collection and analysis methods as described in the methodology section. The reports tend to dwell on the limitations that the evaluation encountered, but without becoming defensive. In one case (PBL Assessment) the report starts with a summary of the reviews on the topic done by other MDBs. This was a pleasant surprise and indeed a good practice that could well be adopted in future evaluations too.

However, the reports also show several significant weaknesses:

- Reports do not always provide clear (reconstructed) intervention logics or theories of change for the intervention(s) evaluated.58 Evaluation criteria and questions are defined at a fairly general level. They are translated into more precise “research questions” (in an “Evaluation Design Matrix”, for each project and each criterion). However, it is unclear how these questions relate to the intervention logic, as this is not made explicit. This may be done in inception reports (of which, as noted above, only one was available for review), but it should also be done in the final reports.

- The reports do not describe the link from the evaluation questions to the answers, how the evaluation judgments are made, and how these ultimately translate into ratings for each criterion and each project. In other words, the explanation provided in the evaluation frameworks is inadequate. The “evaluation design matrix” currently used does not provide sufficient insight into how an intervention’s performance is ultimately judged. Links between findings, conclusions and recommendations could be improved by making this more explicit. In other words, reports should include the story of how the evaluand is credibly linked to any observed outcomes and impacts, and should be clear on how causal claims are made.

- With the exception of the PBL Assessment, reports are lengthy and detailed. One reason for this is an over-emphasis on ratings. Their detailed discussion, project by project, criterion by criterion, occupies a very prominent position in the main body of the evaluation reports. Although ratings are traditionally an important element in evaluations of MDBs, too strong an emphasis can be tedious and may distract the reader from the real lessons to be drawn. The detailed discussion of ratings, and their evidence base, would be better placed in an appendix, with a brief summary in the main report. This would give the lessons and recommendations a more prominent position than is now the case. It would also make the evaluation reports not only shorter but also more interesting to read, which could help add value to evaluation’s image within the organisation.

- The reviewers feel that the OIE evaluations tend to over-emphasise objectives-based evaluation60 and the DAC criteria, to the exclusion of other evaluation approaches such as Developmental Evaluation (Patton, 2010)61; evaluation should be case-specific and answer the actual information needs of managers and other decision makers, rather than always concentrating on final performance.

- Related to the previous point (and again with the exception of the PBL Assessment), executive summaries (approximately 8 pages) are too long. To increase an evaluation report’s potential impact, they would need to be reduced to 2 to 3 pages and be more focused; again, this could be done by dwelling less on the individual ratings of projects and more on key findings, lessons and conclusions. More generally, reports could be better adapted to the needs of different audiences. Although not strictly limited to evaluations, the Health Evidence Network reports62 are a model that could be adapted for evaluation reporting purposes; they are specifically geared towards addressing policy and decision-making.

- The “Recommendations to BMCs” are an interesting feature of the reports, although we are unsure to what degree such recommendations can be effectively followed up by the OIE or the Bank; they could certainly be taken up with BMC Board members.

- Reports (e.g. the evaluation report on Technical Assistance) focus heavily on technical problems encountered during the evaluation. Although these are important issues, to improve the report’s flow and readability this material would again be better placed in an appendix (see, for instance, the “Limitations” section in the TA report). What counts is the story of the intervention, not the story of the evaluation.

57 Sometimes at great length: for instance, in the SDF 6&7 multicycle evaluation report it is only at page 30 that the findings begin.
58 Again, the SDF 6&7 evaluation is said to be guided by a “Logic Model”, which is not explained.
60 The focus of an objectives-oriented evaluation is on specified goals and objectives and on determining the extent to which these have been attained by the relevant intervention. See, for example, Worthen, Sanders, & Fitzpatrick (1997). Program Evaluation: Alternative Approaches and Practical Guidelines (2nd ed.). White Plains, NY: Addison Wesley Longman.
61 Patton, M.Q. (2010). Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use. Guilford Press.
62 See the reports available at the WHO’s Health Evidence Network at http://www.euro.who.int/en/data-and-evidence/evidence-informed-policy-making/health-evidence-network-hen

OIE Validations of Project and Country Strategy Programme Completion Reports (referred to collectively as PCRs hereafter)

As noted above, the OIE has the mandate to validate the Project and Economic departments’ PCRs and CSPCRs. However, in this period of transition, much of the OIE’s work since 2012 has been dealing with the backlog of CDB self-evaluation validations. In theory, an estimated 15 completion reports are due each year, but delays in submitting the reports for validation are commonplace. With the change of Head in June 2014, the OIE therefore secured the OAC’s agreement to reduce the number of validations to a maximum of 6 per year. Even so, a backlog continues to accumulate, as only 2 PCRs were given to the OIE for validation in 2015.

The validations tend to repeat the different items reported in the PCRs and then provide extensive comment on each. The PCVRs go into great depth and detail, which makes the documents rich and complete. This is their strength – but also their weakness. The depth and level of detail, as well as the repetitions from the original PCRs, make PCVRs (overly) lengthy (20-40 pages) and difficult to read. The OIE reported spending approximately 27.2% of its time in 2015 on validating PCRs, compared with 44.4% on its core work, i.e. doing or managing the higher-level evaluations. In other words, the time spent on validations amounts to more than half of the time spent on core evaluation work. Finally, the PCVRs now seem to be, to a great extent, a standalone output of the OIE. It is not always clear to us how they are being used as the “building blocks” for the OIE’s independent evaluations. Making this clearer in the independent evaluations would help show the link, and therefore the value of the time being spent on the self-evaluation validations.
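For clarity, the shares implied by these reported percentages can be set out explicitly (our own arithmetic, using only the two figures quoted above):

\[
\frac{27.2}{44.4} \approx 0.61, \qquad \frac{27.2}{27.2 + 44.4} \approx 0.38
\]

That is, validation work takes roughly 61% as much time as core evaluation work, and absorbs about 38% of the combined time spent on the two.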

To conclude, the Review finds that the OIE has taken steps to improve the perceived utility of evaluation in several ways. First, by planning its work to provide relevant and timely evidence geared towards helping the Board with its oversight and decision-making tasks; the topics are selected through dialogue between the OIE and key CDB stakeholders and reflect the priorities of the CDB’s strategic plan. Second, by securing the interest, and consequently the buy-in, of the OAC and CDB senior management through engaging their input throughout the evaluation process. This is evidenced by the reported interest in the latest three studies: the country strategy programme evaluation in Haiti, the evaluation of policy-based operations, and the SDF 6 & 7 multicycle evaluation.

The OIE’s products are of acceptable quality and could be better still if the shortcomings noted above were addressed. The products themselves, however, are not what impairs the utility of the OIE’s work; that is undermined in two main ways: (1) by the delays in commenting on PCRs (on the OIE’s side) and in providing feedback on the independent evaluations (on the side of operations and management); and (2) by the inefficient processes for agreeing topics and funding sources, and for providing the OIE with management responses to its reports.

Putting Evaluation to Use: transparency, feedback and follow-up

There are several ways that evaluation can be, and is being, used. As John Mayne has pointed out in his many publications on the issue,63 when we talk of evaluation use we are mainly thinking of instrumental use: use made to directly improve programming and performance. But there is also conceptual use, which often goes unnoticed or, more precisely, unmeasured; this refers to use made to enhance knowledge about the type of intervention under study in a more general way. And there is reflective use: using discussions or workshops to encourage and support reflection on the evaluation findings, to see how they might contribute to future strategies.

63 See, for example, his opening chapter in Läubli Loud, M. and Mayne, J. (2014). Enhancing Evaluation Use: Insights from Internal Evaluation Units. Sage Publications.

In the case of the CDB, there is some evidence to suggest that use is not only instrumental; other types are also developing. For example, the review of draft evaluation reports includes reflective workshops that not only discuss the findings but also seek to draw out the important lessons (reflective use).

Another important use, as recommended by the ECG, is to draw together, from time to time, a synthesis of lessons from a number of evaluations and make it publicly available. In fact, the Panel was impressed to hear that in the past the evaluation unit had done this, drawing on lessons from evaluations of the power sector (conceptual use). Although nothing has happened since, it is now on the “to do list” for 2016 (OIE’s 2016 Work Plan).

As for instrumental use, responsibility for using the knowledge generated through evaluation, and for possibly drawing up an action plan of what should be done, rests with CDB senior management and the relevant CDB department and division. Oversight of the application of recommendations and the uptake of lessons is the responsibility of the OAC.

Evidence on how evaluations have actually contributed to decisions or negotiations is lacking or confusing. Certainly, the OIE is unaware of the extent to which its evaluations are put to use. On the one hand, the OAC minutes sometimes indicate that lessons learned are integrated into the next phase. On the other hand, the reviewers were told that in the past the evaluation results were often “too old” to be of use, as the lessons had already been drawn and used well before the report was completed. Similarly, gaps in people’s memory of how well the evaluative information from previous studies was used may also account for the scarcity of evidence.

In response, the Panel questioned CDB staff and the OIE about a particular study, the Technical and Vocational Education and Training evaluation. The feedback was somewhat contradictory. On the one hand, the study was criticised as “confirming” news rather than bringing “new news”. On the other, we learned that in October 2015 the Board of Directors approved a proposal for the revision of CDB’s Education and Training Policy and Strategy; work on this has already begun, and an external consultant has been engaged to lead the process.

Although it is one of the OIE’s tasks to set up a database of results and lessons learned from evaluations, so far this has not been a priority. There is also currently no systematic tracking of the lessons or recommendations arising from the evaluations, or of any progress in their uptake. (The Panel has already referred above to the OAC’s lack of oversight of the use of evaluation.)

The OIE’s role in supporting CDB’s organisational learning is clearly specified in the Evaluation Policy, with many good suggestions for knowledge-sharing activities such as “brown-bag lunches, workshops, pamphlets and short issues papers” (p. 19). So far, however, the OIE’s lead role on the knowledge-sharing side appears to be quite limited. It has provided advisory input in Loan Committee discussions, and organises workshops together with the relevant operations department to discuss the implications of evaluation studies. Ultimately, of course, the uptake of evaluation results and knowledge is in the hands of management. But the evaluation unit has an important role to play as knowledge broker and knowledge manager; both roles have tended to be underplayed in the OIE’s work plans so far.

Transparency: The Communication Strategy

In recent times, with the approval of its new Disclosure Policy, the CDB has started to post its independent evaluation reports on its website. (There is nothing on the self-evaluations.) The website also presents a good overview of the role and function of the OIE and of evaluation within the CDB. This is a step in the right direction for sharing information. However, in our view, the CDB’s communication strategy is the weakest part of the evaluation system to date.

The Panel has already commended the OIE for its efforts to engage the CDB and the OAC in evaluation work. But reporting and communicating the lessons seem to be targeted entirely at the Board and the CDB. Moreover, the 2015 budget provides only US$2,000 for communication, none of which is intended for outreach.

The reviewers feel that active engagement with the more indirect stakeholders, for example project implementers in the BMCs, NGOs or project beneficiaries, is relatively weak64. There appears to be little reflection on drawing out significant messages for this broader group of stakeholders, or on how to transmit them to the “right” people in the “right” way (knowledge brokerage).

To conclude, evidence on the uptake of evaluation is either confusing or sparse. It is unfortunate that so far no systematic record-keeping system has been put in place to track lessons learned or the uptake of recommendations (or actions agreed in management responses). The OIE plays a weak role in brokering the knowledge generated through evaluations to the benefit of external partners, and in managing such knowledge. Although the Evaluation Policy specifies the need for “distilling evaluation findings and lessons learned in appropriate formats for targeted audiences both within and outside the CDB” (p. 19), such a targeted communication strategy has yet to be developed and budgeted.

64 A broader communication strategy is one of the principles and good standards of the Evaluation Coordination Group and the Evaluation Community more generally.


Strengthening Evaluation Capacities and Networking

From the outset in 2012, the OIE has stressed the importance of developing and strengthening evaluation capacities within the OIE, the CDB and, subject to available resources, the borrowing member countries. Building evaluation capacity in the BMCs and the CDB is one of the OIE’s mandated tasks, and it has figured on the work plan from the beginning (Work Programme and Budget 2012-2014). The idea of developing an internship programme for graduates from the Caribbean region was one idea advanced to help build local evaluation resources. To date, however, capacity-building has primarily been focused on OIE and CDB staff. One of the OIE’s two objectives for 2015 was therefore to take up the challenge and “strengthen evaluation capacities and networking”, including reaching out to the BMCs.

Developing OIE staff capacities

The change from project level to strategic and thematic evaluations does require different evaluative skills and competencies. The MDB Evaluation Pyramid presented below in Figure 3 shows the different types of evaluation and changing resource needs as one ascends the pyramid. Implicit here also is the change in the type of expertise and competencies needed as evaluation aspires to the higher levels.

Consequently, for 2015 the OIE set itself the objective of networking and developing working partnerships with regional and international evaluation entities and academic institutions. The rationale was twofold: (1) to secure further support and guidance, and (2) to increase its outreach and coverage through joint work and international exposure. Another implicit aim was to benefit from partners’ contacts in the BMCs wherever possible, so as to improve data collection and quality.

Figure 3: The MDB Evaluation Pyramid65

65 US Treasury Report to Congress on Evaluation Standards and Practices at the Multilateral Development Banks (September 2014, Annex C).


The OIE has therefore linked up with Carleton University in Canada and the University of the West Indies, Barbados campus. The OIE was also approached by the Development Bank of South Africa to exchange experiences about setting up an evaluation entity in a “small” development bank. However, its attempt to become a member of the Evaluation Cooperation Group was not successful for reasons beyond its control.

The OIE is to be commended for addressing the issue of staff competencies and professional development more generally. New developments in evaluation, as well as in the scope of the OIE’s work, may necessitate new competencies. For this reason, organisations such as the International Development Evaluation Association have recommended that the competencies of evaluators and evaluation managers be periodically reviewed. Several publications now exist on competency requirements and suggestions for the periodic review of staff competencies.66

66 E.g. IDEAS (2012), Competencies for Development Evaluation Evaluators, Managers and Commissioners; the Canadian Evaluation Society’s Competencies for Canadian Evaluation Practice (2010); and the Swiss Evaluation Society’s Evaluation Managers Competencies Framework (2014).

It is not within this Review’s remit to compare and contrast the OIE’s competencies with those recommended by international and national agencies. What we can say, however, is that the OIE demonstrates great forethought in taking this issue on board.

Capacity building within CDB

The OIE’s objective also includes continuing to develop measures for improving the monitoring and self-evaluation side of CDB’s work. The OIE’s strategy here is to use the windows of opportunity offered by some of the training sessions being organised by the CDB as part of its shift towards MfDR, e.g. by Corporate Planning Services and Technical Assistance. For 2016, it is also planned to have the OIE present at the annual staff meeting and Learning Forum.

The OIE also organises some ad hoc training with operations, for example to help understand new tools e.g. for drawing out lessons from self-evaluation reports and, more generally, in helping staff appreciate how evaluation can add value to the organisation’s work. Measures include providing advisory services on demand, and providing training alongside the introduction of new or revised tools.

Capacity building in the BMCs

This is an ambitious task and, to be effective, would require additional investment beyond the biennial work plans. A modest attempt was made in 2015: from what we understand, the OIE has joined with Carleton University and the University of the West Indies, using their networks in some of the BMCs, to try to develop this aspect.

To conclude, we cannot comment on the quality of, or reaction to, such training, but we commend the OIE for making capacity building one of its priority objectives. From both the Policy and the documents we reviewed, we note that capacity building was always seen as an important aspect of the OIE’s work, but it has hitherto received little strategic focus. The resources currently available to the OIE will limit the scope of such work in the BMCs, which in turn will continue to hinder the production of sound evidence for the OIE’s evaluations.

Adequacy of the OIE’s human and financial resources to support its work

OIE’s Human Resources

The OIE has a staff of five: the Head, one senior evaluation officer, two evaluation managers and one administrative assistant. Three of the five were recruited from within the CDB. This limited capacity means that it is not feasible to cover all the types of evaluation activities outlined in the Evaluation Policy. Yet there is some indication from the Board that the OIE should embark on impact evaluations at some future stage. An increasing demand for evaluation, and for impact evaluations in particular, would run the risk of overstretching the OIE’s capacity to deliver credible and useful evaluations. Moreover, there are many other designated OIE activities that should be recognised as valuable work: the validations; building CDB and BMC evaluation capacity; providing supervision and advice; knowledge management and brokerage; and managing evaluation contracts. The time needed for all of these may be underestimated in the OIE’s budgets, yet all are important for assuring best value from evaluation. The Panel is concerned that the demand for “doing” evaluations, together with the OIE’s interest in advancing its skills in high-level evaluations, may crowd out these other essential tasks.

Limited and unpredictable resources for independent evaluations

The OIE is funded from the general administrative budget and represents approximately 2.5% of the total. Whilst this is seemingly a higher proportion than in other MDBs, in real terms it is quite limited: 75% of the OIE budget goes to staff salaries, leaving US$190,000 in 2015 for external consultants and other expenses.
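To put these proportions in perspective, a back-of-envelope reconstruction of the implied budget magnitudes can be made (our own arithmetic, derived solely from the percentages and the US$190,000 figure above; the actual budget lines were not provided to the Panel):

\[
\text{OIE budget} \approx \frac{190{,}000}{1 - 0.75} = \text{US\$}760{,}000, \qquad \text{CDB administrative budget} \approx \frac{760{,}000}{0.025} \approx \text{US\$}30 \text{ million}
\]

On this reading, the non-salary envelope of roughly US$190,000 is the entire margin within which all consultancy and other expenses must fit.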

CDB’s donors do not appear to specify a budget for monitoring and evaluation activities. This means, on the one hand, that there is no clear external budgetary recognition of operations’ self-evaluation work or of the OIE’s time in the validation process; and, on the other, that whilst donors expect to receive reports from independent evaluations, this expectation is not backed by a corresponding earmarking when funds are allocated.

The resources available to the OIE for hiring external consultants dropped from US$350,000 in the revised 2014 budget to US$120,000 in the 2015 indicative budget. The OIE estimates that, for high-level evaluations, the cost of external consultants is between US$90,000 and US$350,000 (the SDF 6&7 evaluation cost US$255,000). In the Panel’s experience, this is a sound estimate. With one staff member fewer during 2014-2015, coupled with the OIE’s focus on dealing with the backlog of self-evaluations amongst other priorities, it was unable to execute some of the evaluations during the annual budget period. The budget was therefore reduced for the subsequent years, but has proven insufficient to fund the OIE Work Programme. The OIE has consequently needed to turn to the only alternative source available at present, the SDF. But the SDF funding rules apply to specific countries and themes, which obviously restricts the OIE’s choice of evaluation subjects and themes. Since the SDF does not cover OIE recurring costs such as staff travel, SDF-funded evaluations have to be outsourced. As shown in Figure 1 above, the approval process is inefficient and causes delays. The Panel learned that additional funds, for example for specific studies, could be secured from within the administrative budget during the year, on condition that the request was based on sound arguments.
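Setting the 2015 indicative figure against the OIE’s own cost estimates makes the shortfall concrete (again our own comparison, using only the amounts quoted above):

\[
\text{US\$}120{,}000 < \text{US\$}255{,}000 \;(\text{SDF 6\&7}) < \text{US\$}350{,}000 \;(\text{upper estimate})
\]

In other words, the entire 2015 consultancy budget would cover at most one study at the very bottom of the estimated US$90,000-350,000 range, and less than half the cost of an evaluation on the scale of SDF 6&7.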

Whilst the Panel appreciates full well that the Bank is operating within a zero growth framework, the reviewers were surprised to learn that OIE funding is not sufficiently secured in line with its priorities and work plan. The need to seek alternative funding for individual studies does not allow for any flexibility and undermines the OIE’s independent judgment of what needs to be done.

To conclude: the OIE is inadequately resourced to meet the expectations outlined in the CDB’s Evaluation Policy. The Panel recognises that the CDB itself faces budgetary restrictions. But the current arrangements for securing extra funding are complicated and inefficient, and they limit the OIE’s ability to exercise autonomy in the selection of its evaluation studies. Moreover, the OIE’s budgets significantly underestimate the time needed for managing evaluations and for its other evaluation activities.


Self- and independent evaluation

Self-evaluations cover public sector investment, lending and technical assistance, policy based loans, and country strategy programmes. Both types of evaluation are important, as they are at the very heart of the evaluation function; the self-evaluations are said to be the building blocks for the more strategic evaluations that the OIE is now undertaking.

The Evaluation Coordination Group recommends that self-evaluations be carried out by the relevant operations department and, in turn, reviewed and validated by the organisation’s independent evaluation office. The CDB’s Evaluation Policy therefore speaks of “validating all self-evaluations” as one of the OIE’s essential oversight tasks.

Within CDB, the self-evaluations should provide management with performance assessments and thereby serve an accountability function to the CDB and Board. To support the process, the OIE provides operations with manuals and checklists for guidance. Once a self-evaluation report is to hand, it is given over to the OIE for the validation of its technical quality and credibility.67

In the CDB’s case, however, there are well-documented issues that have affected the quality and timeliness of the self-evaluations, and therefore the quality of the foundation on which the independent evaluations are built. Paucity of documentation within the CDB, paucity of data collected and available in the Borrowing Member Countries (BMCs), delays in producing completion reports and, in turn, in having them validated by the OIE: all these issues were systematically raised during interviews and in some of the independent evaluation reports. There appears to be little incentive to complete self-evaluations in a more timely manner.

Generally speaking, many of the monitoring data problems appear to be due to a lack of management oversight. For example, with the introduction of results-based management, the logic frame and the monitoring and data needs are systematically being built into intervention design. However, the BMCs are not delivering the data as contractually agreed at the outset. Incentives to support any significant change towards building a results-based culture seem to be weak, and sanctions are rarely enforced when the supply of data is lacking or lengthy delays to projects occur. Although we appreciate the complexities of trying to enforce monitoring compliance, the consequence is that project deadlines have often had to be extended, data gaps are not being satisfactorily dealt with and, in turn, there has been a void in the quality and quantity of available evidence for the CDB’s self-assessment of project performance. For some time, this lack of oversight has been tolerated. Part of the problem is the low priority accorded by operations to completing the self-evaluation reports, coupled with the absence of any focal point within senior management to drive the process and deal with the problems.

No record is kept of how the self-evaluation results are actually used. They do not appear on the CDB website, although we were told that the findings are integrated into subsequent project designs. We are therefore somewhat unclear as to the present utility of these reports. The situation is exacerbated by a rather confused image of evaluation: some operations staff consider OIE’s input (through validations or independent evaluations) to be at times over-critical, regulatory and of little added value – a threat rather than an opportunity for learning. Yet at the same time, evaluation is recognised as an integral part of results-based management.

According to the Evaluation Policy (p. 15), “The President, with the support of the Advisory Management Team, is accountable for encouraging and providing an environment where evaluation adds value to the overall management of CDB’s activities and fosters a culture of critical analysis and learning”. In the CDB, however, a learning culture appears to be still in its infancy, and the leadership role expressed in the Evaluation Policy is underdeveloped.

Some managers, however, appear to be starting to change the status quo. For example, a revised and simplified template for producing project completion reports is being considered, and mid-term project reviews are expected to be more stringent in examining monitoring plans and practices and in tying disbursements to performance. In some cases we also learned of incentives being introduced to encourage project managers to complete their reports in a more timely manner. But much remains to be done and, since the OIE is no longer responsible for monitoring and project evaluations, there is a void that needs to be filled. It is up to line managers to drive this work forward.

67 According to the Evaluation Policy, the OIE should validate all PCRs and CCRs. Due to the backlog of reports and the delays in completing them (sometimes by years), since October 2015 the OIE has secured OAC agreement to validate a maximum of six per year, selected in consultation with the OAC.

To conclude, it is fair to say that, in view of a number of “frustrations” between the OIE and operations – largely to do with delays in exchanging comments on the various reports and with the paucity or absence of monitoring data – the added value that evaluation might offer to the operations area is poorly recognised. Moreover, the link between the self-evaluations as building blocks for the independent evaluations is not apparent. There is thus little incentive or management focus to drive any change to current practices. In other words, there is a lack of leadership to advance a learning environment in which evaluation can play a major part.


Part Three: General Conclusions and Recommendations

To conclude, with regard to the Evaluation Policy and the OIE’s independence, our Review finds that over the past few years the CDB has succeeded in establishing an independent evaluation office that is credible and respected. It reports to a Board committee and is thus organisationally independent from CDB management. Its work is grounded in an Evaluation Policy, agreed by the Board and the CDB, that reflects internationally recognised principles and good practices. The Policy sets out a broad scope of responsibility for the OIE which, however, seems over-ambitious given current resource constraints. The OIE clearly has both an accountability and a learning function; the latter should support the development of an organisational learning culture. (So far, the uptake of recommendations and key lessons has not been systematically monitored or recorded.) In general, on the issue of independence, we conclude that the OIE meets the criteria for organisational and behavioural independence and is protected to a certain degree from external or contextual influences.

However, as the Independent Advisory Committee for Development Impact has said, “independent evaluation needs to have clout … credibility of evaluation hinges on public perceptions as well as on reality.”68

We therefore highlight a few potential threats, even though there is no evidence to suggest that any of them is real at present. It would nevertheless be in the OIE’s and the CDB’s interest to have these clarified sooner rather than later:

- Any delays incurred in reporting self- and independent evaluation results to the Board could be interpreted as operational interference.

- There is no agreed process for dealing with any conflict of interest between the OIE and management in reporting results; it is simply expected that any disagreements will be reported in the management response.

- The Head of the OIE lacks complete autonomy over staff recruitment, termination, continuation and professional development. The Policy is not sufficiently clear about who has the final word in the case of disagreement.

Finally, on resources, our Review accepts the limited funds available to the CDB and the fact that the OIE’s budget is not independent but operates within the Bank’s budgetary limitations. Nevertheless, we feel that more flexible arrangements could be devised to allow less restrictive and timelier access to funds.

With regard to governance, our Review has highlighted the difficulties the OAC faces in not receiving the background papers for its meetings in sufficient time to do them justice. Moreover, these documents tend to be very lengthy and not necessarily “reader friendly”. The OAC’s oversight responsibility is likely to be weakened as a result, and we can already see some indication of this. For instance, requests for systematic follow-up on management actions resulting from evaluation findings have not been answered. Nor is there a standing item for this on the OAC agenda, so such requests can easily be passed over and forgotten. The broadened responsibilities now given to the OAC also mean that many competing entities are trying to secure the OAC’s attention. There is now provision for the OAC to call on consultants for help, which may strengthen the OAC in its oversight responsibilities.

Furthermore, in their capacity as members of the Board, the OAC’s members should stress the urgency of developing evaluation and monitoring capacity in the BMCs, since this gap has a direct impact on OIE and CDB evaluations.

With regard to the OIE’s performance, we must respond to the questions raised in this Review’s Terms of Reference, which essentially come down to two main questions: Is the OIE doing the right thing? And is it doing it in the right way?

68 Picciotto, R. (2008). Evaluation Independence at DFID: An Independent Assessment prepared for the Independent Advisory Committee for Development Impact (IACDI), p. 4.


There is no doubt that the decision to establish a credible, independent evaluation function in the CDB was the “right thing” to do; “effective and useful evaluation and oversight activities can assess development effectiveness, hold the organisation accountable for results, and improve operational performance.”69 It is also the policy of the MDBs to have such a function, and the CDB has now aligned itself with international standards and practice.70 The question now is therefore the following: is the OIE going about it in the right way?

The OIE has taken the “right” steps to improve the engagement and interest of the OAC and CDB senior management, from selecting the topics for its evaluations through to finalising the conclusions and recommendations in a collaborative spirit. It falls short, however, of taking the messages emerging from the studies to “outsiders”, such as those responsible for implementing CDB interventions in the BMCs.

In its oversight role, we feel that the OIE has paid insufficient attention to the actual utilisation of evaluation; it is beyond its responsibility to see that action is taken, but it is certainly within its remit to record how, and how well, the lessons drawn have been taken up and used. With regard to its oversight of the self-evaluations (the validation process), the OIE has attempted to improve dialogue with the operations departments and to demonstrate the dual function of oversight and learning. It is now emphasising the learning aspect by providing tools and guidance on how to draw out lessons and integrate them into future planning. More recently, it has sought to provide more formalised training on evaluation by working with the corporate planning services and the technical assistance department to develop courses that show how, where and when evaluation plays its part within the MfDR framework.

However, one of the challenges in evaluation management is balancing independence with facilitating buy-in and ownership. It is a fine line to walk, and the tone of the collaboration depends to a large degree on the climate between management and the head and staff of the independent evaluation unit. In practical terms, for the CDB this means defining the role of the OIE in relation to the self-evaluations performed by the Projects and Economics Departments. The change from the EOV to the OIE made this role change quite clear: the OIE no longer has responsibility for project monitoring or for planning data needs together with the operational departments. On the other hand, to improve understanding and learning, there needs to be an interface between evaluation and management. At present, the OIE’s dual role – its advisory role in relation to operations and its strategic role towards the OAC and senior management – has not been satisfactorily resolved. The operational staff still do not appear to see any urgency in producing their completion reports, or to appreciate what lessons might be drawn from such reflection. The OIE is doing its best to support “learning” whilst at the same time keeping at arm’s length. The greatest challenge the OIE faces in its new capacity is the slow development of an organisational learning and evaluation culture.

A Learning and Evaluation Culture

Evaluation utility depends on the engagement of evaluation users – those who should benefit from the knowledge generated through the studies. Useful evaluation therefore depends to a large degree on the development of an evaluation and learning culture and on how well these are embedded in the organisation. This means that the organisation recognises and appreciates evaluation’s role and the functions it can serve, particularly in helping to understand what the organisation is achieving and where and how improvements can be made. In short, the added value that evaluation can bring is its ability to draw out the important lessons that can help improve the organisation’s performance.

However, whilst CDB senior management shows all the signs of embracing evaluation as an important strategic tool, there still appears to be some apprehension about receiving criticism, however constructive it might be.

69 CDB (2011) Evaluation Policy (p. 2).
70


The OAC has already affirmed its interest in learning what can be “put right the next time around”. In considering accountability, the committee is asking for a more strategic approach to learning and sharing knowledge based on evidence. The CDB also shares the development goals of the other MDBs, that is, “to end extreme poverty and promote shared prosperity”. This means looking for new forms of problem-solving and for ways to create a “development solutions culture”. Hence there is an interest in learning from experience and in exchanging knowledge about what works. This implies balancing accountability and learning – making sure they are seen not as opposites but as compatible. This greater emphasis on learning requires a reframing of the CDB’s thinking and a willingness to deal with the constructive criticism that evaluation can offer.

The following excerpt, from an external review of UNRWA’s evaluation function, illustrates what a weak evaluation culture can look like:

Weak evaluation culture

27. While some stakeholders seem keen on evaluation, the overall evaluation culture in UNRWA is weak. There are several aspects to it.

28. First, many of the interviewees stressed that UNRWA has a weak learning culture, which stems from a number of factors. One reason given relates to the cultural virtue of oral communication, which makes conveying documented experiences challenging. Another reason is language: a majority of UNRWA’s national staff are not fluent in English, and evaluation reports are mostly in English. Furthermore, criticism – even if constructive – is, according to some interviewees, mainly perceived as a threat and not as an opportunity. Finally, learning is also affected by a very basic constraint: lack of time.

29. Second, there is a weak knowledge management system for systematically collecting and sharing experience and lessons learned in UNRWA. UNRWA communities of practice do not exist; several interviewees mentioned using knowledge networks outside UNRWA, i.e. communities of practice managed by other agencies. Accessing evaluation reports is also not easy. The UNRWA website does not provide access to evaluation reports and, while the Agency’s Intranet has a site for evaluation reports, it is not a complete repository; the Evaluation Division does not know exactly how many decentralised evaluations are being produced. In addition, there are only a few evaluation plans at the level of field offices or departments.

30. Third, the Panel found that decentralised evaluations are – at least partly – perceived as donor-driven accountability instruments rather than as learning tools. In that sense, evaluations are managed as bureaucratic requirements, thereby weakening the learning dimension.

31. Finally, the sensitive political context in which UNRWA operates may also discourage a strong evaluation culture, as evaluative evidence can sometimes be overridden by political considerations.14 The Panel was repeatedly told that, given the political context, any change is a challenge.

14 An example mentioned to the Panel was the evaluation of the Qalqilya Hospital (2013), which concluded that the Hospital should be closed. However, for political …


Recommendations

(A list of possible recommendations, to be discussed and developed within the Review Panel initially and then proposed for discussion together with the OIE.)

- The OAC’s oversight responsibility needs to be strengthened (possibly).

- Review the Evaluation Policy to redress gaps.

- The OIE to develop a five-year strategy providing a step-by-step work programme, budget and timeframe for implementing the Evaluation Policy.

- Stronger leadership from the President to provide a climate conducive to promoting the added value of evaluation to overall management and to fostering a culture of critical analysis and learning.

- Stronger support from the Advisory Management Team for the evaluation function, by emphasising the accountability of CDB managers for performance results and by devising incentive schemes to support the accountability function.

- Set up committees (advisory groups) to accompany specific evaluation studies as a means of reinforcing ownership.

- The OIE could contribute to developing a learning culture within the CDB by adopting the role of “critical friend” in its dealings with the CDB operations departments and the CDB more generally. Valuable insight can be gained from a focused conversation with the client about aspects of the evaluation that went well and those that did not go so well. Some meetings might include all relevant stakeholders in addition to the main client. This feedback can then inform a subsequent internal project-team debrief (e.g. identifying internal professional development or process improvement needs), and input from all perspectives can inform planning for future projects. A summary of these discussions can then be circulated so that everyone can use it for their own internal or external quality improvement purposes.

- Rather than asking “what went wrong?”, the conversation can turn to “what surprised us, what we would do differently, what we expected to happen that didn’t, and what we did not expect to happen that did”. This is a better means of getting at the negative aspects without placing blame.

- The OIE to train up and engage “champions” within CDB operations departments to help demonstrate evaluation utility and provide on-the-job training in self-evaluation to colleagues.

- A quality control group could also be set up, as Picciotto suggested for DFID, to carry out and develop the work of quality assessment and, particularly, Quality at Entry monitoring.

- The OIE should be given the resources to build M&E capacity in the BMCs as a priority, possibly in partnership with other MDBs, development organisations and the countries themselves.

- Possibly criticise the over-emphasis on the five DAC criteria, which have been in use for more than 15 years without any major revision. Given their importance and level of influence in the field, it is pertinent that independent professionals take a critical look at them, especially since many scholars and practitioners consider that the quality of evaluations in development aid has been quite disappointing. (See Chianca, T. (2008). “The OECD/DAC Criteria for International Development Evaluations: An Assessment and Ideas for Improvement”, Journal of MultiDisciplinary Evaluation, 5(9), March 2008, p. 41. http://evaluation.wmich.edu/jmde/)

- Regarding the validation of self-evaluations: it is recommended that a leaner format be developed for the PCVRs, repeating less of the original PCR text and focusing on the validity of the PCR against a limited number of key issues to be assessed for each PCR. The PCVRs are also written out at considerable length, which may not be needed for this type of document; a more tabular format with more succinct statements could lead to a leaner process for producing the PCVRs without any loss of usefulness. The “PCR checklist” would be a good starting point for this.

- The link between self-evaluations, validations and independent evaluation is not clear at present, nor is the link between self-evaluations and the Quality at Entry documents – so one wonders what all the effort on the operations side is for. This is a real issue: operations seem to do a good deal of interesting and reasonable work, but there is a lack of coherence. (This observation is based on the documents alone; no interviews were done to get a broader picture.)

- This is something the EIB evaluation unit was criticised for in the past too. Since then, it has started to include “younger” (sometimes still ongoing) projects in its samples, and it redoes the portfolio analysis just before finalising the report to see whether things have changed; the services can, of course, indicate in their response whether things have indeed changed over time.

Recommendations for improving the process for study approval and funding

- Give recommendations on priorities for the OIE’s work.

- Funding should come preferably from the administrative budget. Unused monies could then be released in the annual budgetary reviews, but this should have no effect on the budget for subsequent years. SDF funding at a leve… It is surprising to find that a Board-approved OIE work programme and budget is inadequate; either the proposed budget per work programme …


The Panel, however, encourages the creation of such a quality control unit; its role cannot be fulfilled by the OIE, as it lies outside the OIE’s scope and present capacity – even though the OIE could have an advisory/methodological role.


APPENDICES

Appendix I – The External Review Mandate: Terms of Reference and Approach Paper

Appendix II – Review Approach, Data Collection and Analysis, and Limitations

Appendix III – Overview of OIE Evaluation Practice

Appendix IV – List of Persons Interviewed

Appendix V – List of Documents Reviewed

Appendix VI – List of Topics used to guide interviews with members of the CDB Board of Directors

Appendix VII – List of Topics used to guide interviews with CDB staff


Appendix III – Overview of OIE Evaluation Practice (prepared by the OIE in response to the Reviewers’ request)

Caribbean Development Bank, Office of Independent Evaluation - OIE

Category: Percentage of projects subject to project (self-)evaluation
Response: 100% – Project Completion Reports (PCRs).

Category: Percentage of projects subject to validation by OIE
Response: Approximately 40–50%. About 15 projects exit the portfolio annually. The Evaluation Policy calls for all PCRs to be validated, but OIE resources are insufficient. The validation process was reviewed in 2014; the OAC (a Board committee) now selects a sample of 6–8 PCRs for validation each year.

Category: Percentage/number of projects subject to in-depth review by OIE
Response: None, unless specifically requested by the OAC. Due to limited resources, the OIE evaluation work programme focuses on PCR validations and high-level evaluations, including country strategy and programme evaluations (CSPEs).

Category: Number of high-level evaluations conducted by OIE (e.g. sector, thematic, geographic)
Response: 1–2 per year since 2011. The plan is 2–4 per year from 2016, including CSPEs (the first, on Haiti, planned for Q1 2016).

Category: Number of project impact evaluations conducted by OIE
Response: None. The OIE includes “impact questions” in high-level evaluations.

Category: Number of project impact evaluations conducted by Bank staff or other non-OIE staff
Response: The OIE is not aware of any impact evaluation conducted by the Bank. However, the OIE provides technical support to the Basic Needs Trust Fund (BNTF) in its design of an M&E framework that entails impact evaluations.

Category: Budget
Response: USD 0.78 mn in 2015; USD 0.82 mn in 2016 – equivalent to about 2.5% of the total CDB Administrative Budget. 75% of the budget is for staff salaries (4 professionals, 1 support staff), leaving around USD 190,000 (in 2015) for other expenses, including consultants, e.g. for external evaluations. Additional funding is accessed via the Special Development Fund (SDF); this varies according to the type and scope of the evaluation (e.g. the ongoing SDF 6/7 evaluation is SDF-funded at USD 255,000). The budget is determined by the Board and is not separate from the administrative budget. SDF funding for evaluations is considered separately and is subject to the Bank’s internal approval process; it cannot be used to cover OIE expenses such as staff time or travel. Country eligibility for SDF funding is also a consideration. The OIE expressed concerns about this funding track with respect to predictability, independence and eligibility limitations.

Category: Reporting line
Response: The Head of OIE reports to the Board, with an administrative link to the President.

Category: Terms of appointment for Head
Response: Five-year term, renewable once. Appointed by the President with the agreement of the Board.

Category: Right of return for Head
Response: Not eligible for other staff positions.

Category: Consultants as proportion of OIE budget
Response: 2015: 19% (USD 145,000), plus SDF funding; SDF-funded evaluations are outsourced.

Category: Last external evaluation (or peer review) of OIE
Response: No external evaluation, though a review of the function was done in 2011, leading to the Evaluation Policy. The present OIE External Review was completed in April 2016.

Category: Departments or special programmes supporting impact evaluation
Response: None.


Appendix IV – List of Persons Interviewed

Name – Function relative to OIE – Type of interview

Mrs. Colleen Wainwright – Member, CDB Board of Directors (UK) – Face to face
Mrs. Cherianne Clarke – Alternate Member, CDB Board of Directors (UK) – Face to face
Mrs. Jean McCardle – Member, CDB Board of Directors (Canada) – Face to face
Dr. Louis Woodroofe – Member, CDB Board of Directors (Barbados)
Mr. A. de Brigard – Former Member, CDB Board of Directors – Skype interview
Mr. H. Illi – Former Member, CDB Board of Directors – Telephone interview
Mrs. Claudia Reyes Nieto – Member, CDB Board of Directors – Telephone interview
Mr. Bu Yu – Alternate Director, CDB Board of Directors – Face to face
Mr. Michael Schroll (Barbados) – Head, OIE – Series of interviews via Skype and face to face
Mr. Mark Clayton – OIE Senior Evaluation Officer – Focus group
Mrs. Egene Baccus Latchman – OIE Evaluation Officer – Focus group
Mr. Everton Clinton – OIE Evaluation Officer – Focus group
Mrs. Valerie Pilgrim – OIE Evaluation Officer – Focus group
Dr. Justin Ram – CDB Director, Economics Department – Face to face
Mr. Ian Durant – CDB Deputy Director, Economics Department – Face to face
Dr. Wm Warren Smith – CDB President – Joint interview, face to face
Mrs. Yvette Lemonias-Seale – CDB Vice President, Corporate Services & Bank Secretariat – Joint interview, face to face
Mr. Denis Bergevin – CDB Deputy Director, Internal Audit – Face to face
Mr. Edward Greene – CDB Division Chief, Technical Cooperation Division – Face to face
Mrs. Monica La Bennett – CDB Deputy Director, Corporate Planning – Face to face
Mrs. Patricia McKenzie – CDB Vice President, Operations – Face to face
Ms. Deidre Clarendon – CDB Division Chief, Social Sector Division – Face to face
Mrs. Cheryl Dixon – CDB Coordinator, Environmental Sustainability Unit – Focus group
Mrs. Denise Noel-Debique – CDB Gender Equality Advisor – Focus group
Mrs. Tessa Williams-Robertson – CDB Head, Renewable Energy – Focus group
Mrs. Klao Bell-Lewis – CDB Head, Corporate Communications – Face to face
Mr. Daniel Best – CDB Director, Projects Department – Face to face
Mr. Carlyle Assue – CDB Director, Finance Department – Face to face


Appendix VI – Interview Guide: Members of the CDB Board of Directors

Below is a list of themes that I should like to raise with you, based on your experience and knowledge of the CDB’s independent evaluation function (Office of Independent Evaluation).

In each case, I should be grateful if you could illustrate your responses with examples, or help this Review by, wherever possible, sending me (or telling me where I can find) any documents that could support your responses.

This guide is being sent to you in advance to help prepare our meeting. However, our interview will be conducted more in the style of a conversation. The following sub-questions will be used to GUIDE the interview. Please feel encouraged to raise any additional issues that you feel we should take into account.

On the governance and independence of CDB’s evaluation function

What mechanisms are in place to support its independence?

How satisfactory are the current arrangements in your opinion?

How is the balance between independence and the need for interaction with line management dealt with by the system? For example, what mechanisms exist to ensure that the OIE is kept up to date with decisions, policy/programme changes and other contextual changes that could have an effect on OIE evaluation studies and planning?

On the OIE’s Evaluation Policy

The CDB’s Evaluation Policy was established in 2011. To what degree do you feel it is adequate? Still relevant?

What suggestions do you have for any improvements?

In your opinion, how adequate is the current quality assurance system for overseeing the evaluation function?

On the quality and credibility of evaluation studies

To what degree do you believe the reports are fair and impartial?

Do you consider them to be of good quality? Are they credible?

Are you adequately consulted/involved on evaluations of interest to you?

On the relevance and usefulness of evaluations

How well does the OIE engage with you/your committee during the preparation, implementation and reporting of an evaluation study to ensure that it will be useful to the CDB?

How are the priorities set for the independent evaluations? What criteria are used? Are you satisfied with the current procedure?


When OIE evaluation studies are outsourced to external consultants, what criteria are used to make this decision?

How are the priorities for the OIE’s 3-year rolling work plan agreed? In your opinion, is the current plan adequate in terms of coverage and diversity?

In your opinion, do the evaluations address important and pressing programs and issues?

To what extent do you feel that the OIE’s evaluations integrate cross-cutting themes such as gender, energy efficiency/renewable energy and climate change? What improvements might be made, and how?

On the dissemination and uptake of evaluation findings and recommendations

To what extent do you feel that evaluation findings are communicated to the CDB and its stakeholders in a
a) useful,
b) constructive and
c) timely manner?

Are evaluation recommendations useful? Realistic?

What mechanisms are in place to ensure that evaluation results are taken into account in decision-making and planning? What improvements do you feel could be made?

How have you used the findings from any evaluations? Examples?

To what degree do you feel that evaluation contributes to institutional learning? And to institutional accountability? Any examples?

What mechanisms are in place to ensure that knowledge from evaluation is accessible to CDB staff and other relevant stakeholders? Are the current arrangements satisfactory?

How satisfied are you with current arrangements? What expectations do you have for the future?

On resources

How is the OIE resourced financially, and is this satisfactory?

What about the OIE staff – are all the important areas of expertise represented in the team?

On this Review of the Office of Independent Evaluation

What are your expectations? What are you particularly hoping to learn from it?

Thank you very much for your cooperation and input


Appendix VII – Interview Pro-Forma: CDB Staff Members

This presents a list of the topics raised during interviews. It was used to guide the open-ended discussion; the sequence and exact wording of the questions may not necessarily have followed this order or been asked in exactly this way.

Changeover to an Independent Evaluation Office? Expectations? Advantages and disadvantages?

Satisfaction with working relations between operations and the OIE from your perspective?

Process of dealing with the PCRs and CCRs? Advantages and limitations?

Quality and credibility of the validation process?

How are the self-evaluation reports used?

Credibility and Quality of OIE’s evaluation reports

Communication of self and OIE independent evaluations? To whom, in what way? Possible improvements?

Independence of the Office of Independent Evaluation (OIE)

Independence is absolutely central to the integrity and trustworthiness of evaluation. It is an agreed requirement within the development agencies and in the evaluation community as a whole. In examining the issue of independence and good practice, the reviewers are guided by the Evaluation Cooperation Group’s recommendations on good practices, the CDB’s Evaluation Policy and the 2011 consultancy review of independence relative to the CDB’s Evaluation and Oversight Division.71 The appraisal is based on a comparison of the ECG’s recommendations on independence72 with the current OIE status.

OIE and Independence: Recommendations from the Evaluation Cooperation Group (ECG)

The ECG considers the issue of independence according to four specific areas: organisational (structural) independence; behavioural (functional) independence; protection from outside interference (operational independence); and conflict of interest safeguards.

71 Osvaldo Feinstein & Patrick G. Grasso, Consultants (May 2011). Consultancy to Review the Independence of the Evaluation and Oversight Division of the Caribbean Development Bank.
72 ECG (2014). Evaluation Good Practice Standards, Template for Assessing the Independence of Evaluation Organizations, Annexe II.1.


Organisational independence ensures that the evaluation unit and its staff are protected against any influence or control by senior or line management, have unrestricted access to all documents and information sources needed for conducting their evaluations, and can select evaluations whose scope covers all relevant aspects of their institution.

Behavioural independence refers generally to the evaluation unit’s autonomy in setting its work programme, in selecting and conducting its evaluations, and in producing quality reports that can be delivered without management interference.

Protection from outside interference refers to the extent to which the evaluation function is autonomous in setting its priorities, conducting its studies and processes, reaching its judgements, and managing its human and budget resources, all without management interference.

Conflict of interest safeguards refer to protection against staff conflicts of interest – whether current, immediate, future, or arising from prior professional and personal relationships or financial interests – for which there should be provision in the institution’s human resource policies.

The OIE’s Independence in Practice

Organisational/structural independence

On the whole, the Panel acknowledges and commends the efforts made by the CDB to assure the OIE’s organisational independence. The CDB’s Evaluation Policy provides for the OIE’s organisational independence from line management, and the interview data suggest that there is also wide acceptance and acknowledgement of why the OIE should have such independent status. Table 1 below provides our overall assessment of this aspect of the OIE’s independence when compared with the ECG recommendations.73

Table 1: OIE organisational independence compared with ECG recommendations
(Columns: Aspect | Indicator | CDB Evaluation Policy (EP) and Practice)

Aspect: The structure and role of the evaluation unit
Indicator: Whether the evaluation unit has a mandate statement that makes clear that its scope of responsibility extends to all operations of the organization, and that its reporting line, staff, budget and functions are organizationally independent from the organization’s operational, policy and strategy departments and related decision-making
EP and Practice: Partially complies – The Policy is broad enough to cover the full range of MDB types of evaluation. In practice, however, this would not be possible without additional human and budget resources.

Aspect: The unit is accountable to, and reports evaluation results to, the head or deputy head of the organization or its governing Board
Indicator: Whether there is a direct reporting relationship between the unit and (a) the Management, and/or (b) the Board, or (c) the relevant Board Committee of the institution
EP and Practice: Complies – The OIE reports to the Board of Directors (BoD) through its Oversight Assurance Committee (OAC).

Aspect: The unit is located organizationally outside the staff or line management function of the program, activity or entity being evaluated
Indicator: The unit’s position in the organization relative to the program, activity or entity being evaluated
EP and Practice: Complies – The OIE is located outside, and is therefore independent of, CDB line management.

Aspect: The unit reports regularly to the larger organization’s audit committee or other oversight body
Indicator: Reporting relationship and frequency of reporting to the oversight body
EP and Practice: Complies – The OIE reports five times per year to the OAC. Board approval for an additional executive meeting between the Head of the OIE and the OAC at least once per year was given in October 2015.

Aspect: The unit is sufficiently removed from political pressures to be able to report findings without fear of repercussions
Indicator: Extent to which the evaluation unit and its staff are not accountable to political authorities, and are insulated from participation in political activities
EP and Practice: Complies.

Aspect: Unit staffers are protected by a personnel system in which compensation, training, tenure and advancement are based on merit
Indicator: Extent to which a merit system covering compensation, training, tenure and advancement is in place and enforced
EP and Practice: Partially complies – OIE staff are covered by CDB human resource policy. However, the skill needs of OIE staff ought to be regularly reviewed in light of the move towards higher-level evaluations. Appraisal of skill needs and hiring of relevant staff should be completely under the authority of the Head of Evaluation; this is not sufficiently clear in the Policy or the other documents we reviewed.

Aspect: The unit has access to all needed information and information sources
Indicator: Extent to which the evaluation unit has access to the organization’s (a) staff, records and project sites; (b) co-financiers, other partners and clients; and (c) programs, activities or entities it funds or sponsors
EP and Practice: Complies – The available evidence suggests that there is no reason to doubt such access. However, systematic and easily accessible documentation is lacking in the CDB – one of its weak points – and delays in obtaining the relevant documents can affect the timeliness of evaluation studies.

73 Based on ECG (2014) Template for Assessing the Independence of Evaluation Organizations, Evaluation Good Practice Standards, Annexe II.1.

However, independence should not mean isolation. There appears to be a detachment between the OIE and the CDB that is of concern to the Panel: on the one hand, between the OIE and operations staff; on the other, in the structural arrangements between the OIE and senior management.

First, in agreeing that the OIE should concentrate on strategic, thematic, in-depth evaluations, responsibility for project monitoring and evaluation was handed over to operations. The division is clear and respected. However, it has its drawbacks: with the OIE no longer systematically involved at the front end of project design, monitoring data needs are likely to be poorly defined, and weak monitoring data will contribute to weaker evaluations. (More on this point under the heading “Self- and independent evaluation”.)

In the reviewers’ opinion, it is a common misunderstanding to assume that providing evaluator advice on monitoring and evaluation data will compromise evaluator independence. On the contrary, evaluation input into project design is essential to ensure that the logic, indicators and data needs are addressed so that, at some future point in time, an evaluation of the achievements can be empirically grounded.


This is not to say that the OIE no longer has any influence at the front-end design stage; the point of focus has merely shifted. The OIE now systematically provides such input more generally to the corporate planning teams for the tools and systems they are developing to support the MfDR framework. The monitoring data for projects and their implementation should improve once the Project Performance Evaluation System (PPES) and the Portfolio Performance Management System (PPMS) are updated and operational.

Second, the OIE has limited formal access to the weekly Advisory Management Team (AMT) meetings, where the President and senior management exchange up-to-date information on the dynamics of CDB policy and practice. The OIE is not regularly invited to these meetings in any capacity, nor given a copy of the agenda or minutes; it is only occasionally invited to attend to discuss an evaluation report or management feedback. This means that the OIE is unlikely to pick up on the “when” and “what” of key decisional issues or to provide input into the discussion based on evaluative information. Its observer status at Loans Committee meetings, and its participation as an informer at OAC and BoD meetings and discussions, do not necessarily provide the same insight into the dynamics of management actions and decisions.

In response to this situation, the President has agreed to meet regularly with the Head of the OIE in order to keep him up to date with CDB strategic thinking. This is a welcome change.

OIE Independence and Behavioural Issues

The Panel has concerns about some behavioural issues. For example, through both the interviews and the documentary review, we learned of considerable delays in processing both the independent evaluation reports and OIE’s validations of the CDB’s self-evaluations. Delays generally arise in receiving feedback on the independent reports – first from the relevant operational department, then from the AMT – and then in providing the OIE with a management response, which is initially drafted by operations staff before being reviewed by the AMT. (OIE reports cannot be submitted to the OAC without the relevant management response.) This two-layer process for preparing submissions to the Board is inefficient and could in future threaten evaluation’s independence by delaying the OIE’s timely reporting to the OAC.

OIE validations of the CDB self-evaluations are also submitted to the OAC, and it is in both sides’ interest to clear up any misunderstandings beforehand. Despite attempts to improve the timeframe for completing these validations, delays are more the norm than the exception. Table 2 below summarises our assessment of the behavioural aspects of independence.

Table 2: OIE and Behavioural Independence
(Columns: Aspect | Indicator | CDB Evaluation Policy (EP) and Practice)

Aspect: Ability and willingness to issue strong, high-quality and uncompromising reports
Indicator: Extent to which the evaluation unit (a) has issued high-quality reports that invite public scrutiny (within appropriate safeguards to protect confidential or proprietary information and to mitigate institutional risk) of the lessons from the organization’s programs and activities; (b) proposes standards for performance that are in advance of those in current use by the organization; and (c) critiques the outcomes of the organization’s programs, activities and entities
EP and Practice: Partially complies – The paucity of data and documentation sometimes hinders the quality of reports. The OIE emphasises the learning part of evaluation and is cautious in its criticism, recognising that management is going through a transitory stage and can still be overly defensive.

Aspect: Ability to report candidly
Indicator: Extent to which the organization’s mandate provides that the evaluation unit transmits its reports to the Management/Board after review and comment by relevant corporate units but without management-imposed restrictions on their scope and comments
EP and Practice: Partially complies – Reporting to the Board is sometimes compromised by delays in the review/comment process between the OIE and the CDB. Any delay in producing a management response also impairs the timely submission of a report to the Board, since the two have to be submitted together.

Aspect: Transparency in the reporting of evaluation findings
Indicator: Extent to which the organization’s disclosure rules permit the evaluation unit to report significant findings to concerned stakeholders, both internal and external (within appropriate safeguards to protect confidential or proprietary information and to mitigate institutional risk); and who determines the evaluation unit’s disclosure policy and procedures (Board, relevant committee or management)
EP and Practice: Partially complies – The OIE conforms to the CDB’s disclosure policy. However, the dissemination of evaluation findings currently appears to be restricted to website publication and reports to the Board. A more targeted communication strategy including other key stakeholders, e.g. project implementers in the BMCs, should be developed and put in place.

Aspect: Self-selection of items for work program
Indicator: Procedures by which work program items are chosen, through systematic or purposive means, by the evaluation organization; consultation on the work program with Management and Board
EP and Practice: Complies – The OIE ensures that its work program is drawn up after consultation with both CDB Management and the Board to seek their input on relevant topics and themes.

Aspect: Protection of administrative budget, and other budget sources, for the evaluation function
Indicator: Line item of the administrative budget for evaluation determined in accordance with a clear policy parameter and preserved at an indicated level or proportion; access to additional sources of funding with only formal review of the content of submissions
EP and Practice: Partially complies – The administrative budget supporting OIE work is protected. Access to additional sources of funding is possible if well argued and justified, but the approval process is complex and inefficient (see Figure 1 below).

OIE and Protection from External Influence or Interference

Our overall assessment is provided in Table 3 below. The OIE’s independence in the design, conduct and content of its evaluations does not appear to be subject to any external interference. However, securing funding from sources outside the OIE’s administrative budget, i.e. from the Special Development Fund, is an unduly complex and lengthy process. We therefore consider that the current funding process can affect the OIE’s choice of the types of evaluation it can undertake. (See Figures 1 and 2 below.)

Table 3: OIE and its Independence from External influence or interference

Aspects Indicators CDB Evaluation Policy (EP) and Practice

Proper design and execution of an evaluation

Extent to which the evaluation unit is able to determine the design, scope, timing and conduct of evaluations without Management

Complies – however within limits of restricted human and financial resources available

123

John Mayne, 19/03/16,
Maybe coming later, but do we say anything about the size of the budget? Always a tricky subject, but does it allow them do even a few decent evaluations?MLL under resources section
Bastiaan de Laat, 19/03/16,
We could make a suggestion to disconnect the two as does the AsDB, who published the report with a placeholder for the mgt response which “comes when it comes”. At the EIB we have a two-step approach (first reading w/o mgt response second reading w/ mgt response) and there’s normally one or two weeks needed to prepare the mgt response and that deadline is generally respected.MLL Can be put in the recommendations section.
Page 124: f01.justanswer.comf01.justanswer.com/nFcTOGCI/External+Review+of+the+OIE+v.20.3.1… · Web viewf01.justanswer.com

Aspects Indicators CDB Evaluation Policy (EP) and Practice

interference

Evaluation study funding

Extent to which the evaluation unit is unimpeded by restrictions on funds or other resources that would adversely affect its ability to carry out its responsibilities

Partially Complies - OIE must work within the limits of the agreed administrative budget wherever possible. If additional resources are needed for studies it must seek alternative funds elsewhere. The budget limitations can have an affect on the type of evaluations undertaken and therefore its independence in terms of choice.

Judgments made by the evaluators

Extent to which the evaluator’s judgment as to the appropriate content of a report is not subject to overruling or influence by an external authority

Complies – the evidence available suggests that the Board and Management accept the evaluators’ independent interpretation and conclusions; Management Responses are agreed to be the accepted place to raise any difference of opinion.

Evaluation unit head hiring/firing, term of office, performance review and compensation

Mandate or equivalent document specifies procedures for the a) hiring and firing, b) term of office, c) performance review, and d) compensation of the evaluation unit head that ensure independence from operational management

Complies – the Head of OIE is appointed by the CDB President in agreement with the OAC for a 5-year period, renewable once. The Head can be removed from office by the President or the Board, but only with the agreement of both parties.

However, the Head reports to the President for all administrative and personnel matters. Even though this was not recommended in the 2011 Osvaldo Feinstein and Patrick G. Grasso report on independence, the BoD accepted CDB’s reasons for keeping this arrangement (e.g. most OAC members are non-resident and cannot oversee day-to-day work).

Extent to which the evaluation unit has control over: a) staff hiring, b) promotion and pay increases, and c) firing, within a merit system

Partially complies - All OIE staff members are treated in the same way as other CDB staff. The Head has limited control over the hiring, firing or promotion of OIE staff.

Continued staff employment

Extent to which the evaluator’s continued employment is based only on reasons related to job performance, competency or the need for evaluator services

Partially complies - Whilst the EP is clear about procedures for hiring, firing and promotion, all of which must conform with CDB human resource policy, it says nothing about how any difference of opinion between the CDB and the Head of the OIE over continued staff employment would be resolved, for example where the technical or interpersonal competencies needed to meet new demands have changed.

Avoidance of Financial, Personal or Professional conflicts of interest

This particular aspect refers to the organisation’s Human Resources Policy; there must be provisions in place to protect against actual or potential conflicts of interest. The Panel requested, via the OIE, evidence from Human Resources on any such provisions but did not receive an answer. It must be assumed that this aspect of independence, past or present, does indeed form part of normal CDB Human Resource Policies.

To conclude: The Panel is impressed with the measures CDB has taken to assure the organisational independence of the OIE. Its independent status is accepted and respected by


senior and line management. The OIE’s budget, however, is not independent from the overall CDB administrative budget; this affects its choice of evaluation types and approaches. Some of the behavioural issues affecting independence were also of concern, especially the delays in the exchange of documents between the OIE and operations departments, which have a direct effect on timely reporting to the OAC. As for protection from outside interference, our concerns are largely to do with the OIE’s independence over staffing issues; there are potential loopholes in current arrangements that could undermine the OIE’s autonomy over its staff.

OIE’s Strategy, Work Practices and Work Programme

The OIE has had to develop a plan to implement the Evaluation Policy. This raises such questions as: what are the priorities, and what is the timeframe for achieving which activities? These were partially addressed in the OIE work programme and budget 2012 to 2014, but it proved to be over-ambitious. Much of the period 2012 to 2015 has therefore been taken up with preparing the OIE’s shift in focus from project-based evaluations to high-level thematic and in-depth strategic studies. This has meant adopting a three-way approach: (1) for self-evaluations, reducing its time input to support the process; (2) for independent evaluations, taking stock of the gaps in coverage and expertise; and (3) networking to share experiences with centres of expertise and align the OIE with international practices. In addition, amongst other duties, it has been supporting the development of MfDR tools and systems, such as the Project Performance Assessment System, by providing advice and input on programme logic and monitoring needs. The OIE plans to conduct 2-4 high-level studies per year from 2016. The OIE has also chosen to increase the involvement of its professional staff in conducting independent evaluations. Outsourcing is still needed: when the study is funded by the SDF, when time is limited, and when specific expertise is needed.

But the plans appear to place little emphasis on the activities associated with evaluation management (e.g. knowledge management) and the time they require. Other time demands mentioned in the previous sections, such as delays in completing reports, validation work, etc., have also affected the OIE’s plans. The more recent work plans have set the task of delivering utility-focused and timely evaluations, but they lack clarity on how the OIE proposes to surmount the time and data issues, which are far from new. In short, they lack a theory of change and a timeline. The challenges that have to be dealt with to enable the OIE to move up the MDB evaluation pyramid74 are brought out in the remaining sections of this Review, not least given the limited resources available.

To conclude: The OIE has made a first step in proposing a strategy for establishing itself as an independent evaluation resource. But its strategy lacks a theory of change and a prioritisation of tasks, which should include more emphasis on evaluation management activities.

The Value / Usefulness of OIE’s Independent Evaluations

Evaluation is a powerful tool that can provide useful, evidence-based information to help inform and influence policy and practice. But useful evaluations depend not only on the evaluators’ skills but on several other important factors as well: (1) planning evaluations to be relevant to the priorities of the organisation’s work, with results delivered in time to be useful; (2) the degree of consultation with, and ultimately ownership by, those who seek evaluative information; (3) the tools used to support the evaluation process per se; and (4) the credibility and quality of the evaluation products75.

74 US Treasury Report to Congress on Evaluation Standards and Practices at the Multilateral Development Banks (September 2014, Annex C).

75 These aspects reflect the principles and good standards of the Evaluation Coordination Group and the Evaluation Community more generally.


1. Planning relevant and timely evaluations

The OIE is now working on a 3-year rolling work plan that sets out the broad areas for enquiry. So far, there are no agreed criteria for selecting the specific topics for independent evaluation, although the priorities tend to reflect those of the CDB’s strategic plan. Nevertheless, decision-making is rather arbitrary, based on a process of dialogue between the OIE and the CDB, and between the OIE and the Board.

One of the OIE’s two objectives for 2015, therefore, was to define a work plan and agree priorities based on an approach that is “utilisation-focused”. This means that studies are selected and planned to be relevant and useful to the organisation’s needs.

The OIE has achieved this objective with respect to its latest studies: the Social Development Fund (SDF) Multicycle 6 & 7 Evaluation, the Haiti Country Strategy evaluation and the evaluation of the CDB’s Policy Based Operations. Each of the three has been planned to deliver its results in time to provide the CDB Board of Directors with relevant information for negotiating the next round of funding. In spite of some delays for a myriad of reasons, not least the extra effort needed to secure essential data, the studies are expected to deliver on time.

The processes for agreeing the OIE’s work plan and specific evaluations on the one hand, and for securing alternative funding on the other, are shown in Figure 1 below. The Panel was surprised to learn how bureaucratic (in its internal approvals) and inefficient (in view of the time it takes) the process seems to be. The concern here is that such a process could pose a threat to assuring the Board of “timely studies”.

Figure 1: Selection of Evaluation Topics and Funding Source

[Flowchart, reconstructed in outline from the original figure: the Board-approved 3-year Work Programme and Budget, and the annual OIE report and work plan submitted to the OAC, feed into consultation with CDB Operations and the OAC/Board on the selection of the evaluation topic; the OIE then drafts a Terms of Reference / Approach Paper, which undergoes internal review alongside the specific evaluation study design and budgeting; the finalised Approach Paper is submitted to the OAC/Board for approval (recorded in the OAC minutes and a Board Paper); Board approval is necessary if the budget is above USD 150,000, with Board notification only if it is USD 150,000 or below.]


2. Consultation and ownership

“The credibility of evaluations depends to some degree on whether and how the organization’s approach to evaluation fosters partnership and helps build ownership and capacity in developing countries.” (ECG good practices)

The OIE engages with the OAC, CDB senior management and operations to agree its 3-year work plan and then to select the specific topics and themes. It also discusses the evaluation approach paper (design and implementation plan) with the CDB and the OAC before completing the final version. However, preliminary and final drafts of the report are submitted to CDB line and senior managers only for comment and the correction of factual errors. Only final versions are given over to the OAC. A series of discussions is held, first with the CDB and then with the OAC, on the results and their implications. Discussions with the OAC are more limited due to the overburdened agenda of OAC and Board meetings, as previously discussed.

In short, the OIE is to be commended for following the recommendations of professional good practices and standards on participative approaches; it has succeeded in introducing a modus operandi that involves the key players in the selection of evaluation topics, the evaluation designs and the discussion of results. Figure 2 below provides an overview of the evaluation implementation and stakeholder engagement processes.

Figure 2: Evaluation Study Implementation and Feedback Loops

[Flowchart, reconstructed in outline from the original figure: the detailed ToR (or the Final Approach Paper, if sufficiently detailed) is approved by the Board; the funding track is either the OIE administrative budget or the SDF, the latter requiring a TA Paper (content similar to the Approach Paper but in a different format) approved by the internal Loans Committee; the OIE selects and contracts consultants (if any) under one of three arrangements: (A) fully outsourced to external consultants with oversight by the OIE, (B) conducted by OIE staff, or (C) jointly by external consultants and the OIE; preparations cover the detailed evaluation plan (incl. tools, timeline, etc.) and logistics, followed by production of an Inception Report / Approach Paper, data collection and analysis, and a presentation/workshop of interim findings and conclusions (with a summary and slides) for immediate feedback and validation with the CDB; the Draft Final Report goes through review loops between the OIE and the CDB (potentially also the BMC), with feedback to the evaluation lead; the OIE-approved Final Report goes to CDB Senior Management for the Management Response; the Final Report and Management Response are considered by the CDB AMT and then submitted to the OAC/Board for endorsement, followed by preparation for disclosure and dissemination.]


Notes to Figure 2

– The OIE informed the Panel that this is an abbreviated version: there are, for example, additional steps (secondary processes) when evaluations are procured (tendering or single source), additional review loops, updates to the OAC, etc.

– The OAC may also decide to return the report to the OIE, the Panel was informed, or to demand from Management specific actions based on the report.

This process is engaging and appears to have secured senior management and OAC interest and buy-in, as witnessed in the latest studies. But there is a downside too: the process takes much time and, in our view, is partly unnecessary. The Panel appreciates that staff from operations as well as the AMT may want to confer on an appropriate management response, but this should not be the case for reviewing an independent report for factual errors. The two-phase review seems somewhat inefficient and, in our opinion, unnecessary.

Contact between the OIE, the CDB and/or the OAC during the actual study implementation is most often restricted to the occasional progress report, particularly when studies run behind time. There is no “accompanying group” for individual studies, which would include both internal and possibly external partners. Such “advisory groups” have shown their worth in a number of contexts, both for improving buy-in and for providing strategic input. The OIE does, however, arrange discussions for reflecting on emerging findings, but we are not sure how systematic this feedback loop is.

More generally speaking, outside of an evaluation study the OIE has limited dealings with operations. The OIE has an advisory role in providing them with help, particularly training, guidelines and tools to support self-evaluations. We are nevertheless concerned about the seeming distance between the two and how this has affected the perceived value of evaluation. (For more on this point, see the section below on “Self- and independent evaluation”.)

But the Panel also wishes to stress that this is not the case for newly appointed senior managers. A much more open attitude to evaluation, and an appreciation of its potential value, was evident; they expressed interest in drawing out important lessons on what works, how, for whom, and under what conditions. In one case, interest was followed up in practice: the OIE was recently invited by a senior manager to share evaluative knowledge and experience with his staff regarding policy based operations.

Certainly, we can say that overall, the key stakeholders within the CDB are adequately integrated into the evaluation process so as to foster their buy-in and ownership. But more generally, we feel that the utility of independent evaluations can be improved by fostering a supportive climate that wants to learn through calculated trial and error. The constructive criticism that evaluation can offer can add value to understanding the strengths and weaknesses of such strategies. This, however, cannot be done overnight.

3. Tools to support the evaluation process

So far, during this transitional phase, the OIE has mainly focussed on improving the tools to support the operations areas’ self-evaluations. This has left the OIE with little time to produce checklists or tools to support its own studies. There are plans to develop an OIE Manual to guide and support the independent evaluation process. Such plans should be encouraged, as these documents will form a very important part of training, particularly for newcomers to the OIE team.

In the meantime, OIE and operations staff refer to the Performance Assessment System (PAS) Manuals for evaluation activities. The manuals are based on DAC criteria and ECG principles. Much emphasis is given to the rating system and to how and what should be rated.


However, we find them lengthy, unwieldy and overcomplicated. Moreover, such manuals should be used for reference; they cannot and should not replace first-hand training in how to plan, conduct and manage the evaluation process.

Quality Assessment (QA) and Quality at Entry (QaE)

There was a transition period between 2012 and 2014 to establish the OIE. Work on the PAS, QaE, PCRs and ARPP, which had started earlier, was therefore completed after the OIE came into existence, but it effectively had no formal ‘home’ in operations. The Panel was told that there had been some discussions about creating a Quality Assurance unit within CDB (OPS), but the current status is unclear.

The QaE Guidance Questionnaire was developed earlier and completed by the OIE. It was used to assess the documents that came to the OIE for comment at the review stage. The results were then sent to the Portfolio Manager/Project Coordinator, indicating any gaps or issues that needed to be addressed or clarified. QaE Guidance Questionnaires were developed for all the Bank’s lending products and CSPs, and to assess the quality of supervision.

After the QaE was launched bank-wide, several operations officers saw merit in using the QaE Guidance Questionnaire in the field and adopted it as a tool for use during appraisal missions, to cross-check and test their data collection and analysis.

OIE’s use of the QaE was discontinued in 2014 due to limited resources and a stronger focus on evaluations. It still sometimes comments on specific appraisals, but very selectively.

Both QaE and QaS (quality at supervision) are also addressed in the PAS Manuals. In addition the QaE and PAS have been incorporated in Volume 2 of the Operations Manual OPPM.

The Review Panel assessed the QaE forms. They are relatively standard, adapted to the specificities of the CDB. They contribute to judging a project’s expected quality in a relatively objective way. As such, they are helpful, as a benchmark, in the ex-post assessment of projects.

The Panel considers that the lack of an established Quality Unit in the CDB (and independent from OIE) is a weakness that should be addressed in the near future.

4. Credibility and Quality of Evaluation Products

As with many other MDBs, evaluation activities include both independent and self-evaluations; the latter are completion reports on operational projects and country strategy programmes, and are done by operations staff. The OIE then validates the quality of such reports. The self-evaluations should inform the more strategic studies conducted independently by the OIE. (More on the relationship between the two is provided later in this Review.)

An independent evaluation is processed as follows: the OIE prepares an Approach Paper (AP) for approval by the OAC. If the study is to be outsourced, the AP becomes the basis for a Terms of Reference (ToR), which, subject to the size of the budget, may be put to tender. The contracted evaluator then prepares an Inception Report (IR) after some desk and field research has taken place. This intermediary report is not produced if the OIE itself is conducting the evaluation. Sometimes a Progress Report is submitted, but otherwise the next stage is the delivery of the final report in various drafts. (Assessments are like evaluations but more limited in scope and depth of analysis.)

Since 2012, the OIE has produced a range of studies and approach papers. This review is based on those listed below, as provided by the OIE, covering the period from May 2012 to December 2015. It includes 3 evaluations, 4 assessment studies, 14 validations of self-evaluations and 3 Approach Papers for upcoming evaluations. These are listed in Table 4 below.


Table 4: List of studies (N = 24) submitted to the Board for the period January 2012 to December 31, 2015

Board Meeting 251 (May 2012):
- Ex-Post Evaluation Report on Road Improvement and Maintenance Project, Nevis – St. Kitts and Nevis
- Validation of Project Completion Report on Sites and Services – Grenada
- Assessment of Effectiveness of Implementation of Poverty Reduction Strategy 2004-09

Board Meeting 253 (Oct. 2012):
- Assessment of Extent and Effectiveness of Mainstreaming Environment, Climate Change, Disaster Management at CDB

Board Meeting 254 (Dec. 2012):
- Assessment of the Implementation Effectiveness of the Gender Equality Policy and Operational Strategy of the Caribbean Development Bank
- Validation of Project Completion Report on Enhancement of Technical and Vocational Education and Training – Belize
- Validation of Project Completion Report on Fourth Road (Northern Coastal Highway Improvement Section 1 of Segment II) Project – Jamaica
- Assessment of the Effectiveness of the Policy-based Lending Instrument

Board Meeting 256 (May 2013):
- Validation of Project Completion Report on Expansion of Grantley Adams International Airport – Barbados
- Validation of Project Completion Report on Fifth Water Supply Project – Saint Lucia

Board Meeting 261 (May 2014):
- Validation of Project Completion Report on Immediate Response Loan, Tropical Storm Gustav, Jamaica
- Validation of Project Completion Report on Social Investment Fund, Jamaica
- Validation of Project Completion Report on Disaster Mitigation and Restoration – Rockfall and Landslip, Grenada

Board Meeting 263 (Oct. 2014):
- Validation of Project Completion Report on Basic Education Project – Antigua and Barbuda
- Approach Paper for SDF 6 & 7 Multicycle Evaluation

Board Meeting 264 (Dec. 2014):
- Validation of Project Completion Report on Policy-Based Loan – Anguilla
- Validation of Project Completion Report on Immediate Response Loan – Tropical Storm Arthur – Belize
- Evaluation of Technical Assistance Interventions of the Caribbean Development Bank Related to Tax Administration and Tax Reform in the Borrowing Member Countries 2005-2012

Board Meeting 265 (March 2015):
- Approach Paper for the Evaluation of Policy Based Operations

Board Meeting 266 (May 2015):
- Validation of Project Completion Report on Upgrading of Ecotourism Sites – Dominica
- The Evaluation of the Caribbean Development Bank’s Intervention in Technical and Vocational Education and Training (1990-2012)

Board Meeting 267 (July 2015):
- Validation of Project Completion Report on The Belize Social Investment Fund I Project – Belize

Board Meeting 268 (Oct. 2015):
- Approach Paper Country Strategy and Programme Evaluation, Haiti

The review and analysis of these documents is based on the UNEG Quality Checklist for Evaluation Reports (http://www.uneval.org/document/detail/607) as well as on ECG guidance (the Big Book on Good Practice Standards).


Approach Papers

Three Approach Papers (APs) were made available to the Panel (see Table 4 above). An AP describes the rationale for the evaluation, the background to the topic evaluated, the evaluation framework (criteria and questions) and the approach. It also describes the team and provides an initial plan. Being the first main deliverable of the OIE’s evaluation process, APs are the starting point and therefore a major determining element in the roll-out of each evaluation. APs therefore “have to get it right”.

The APs examined are clearly written, well-structured and of reasonable length.76 We were surprised to find, however, that they do not make explicit the objectives of the evaluated intervention(s), e.g., through a clear objective tree, or through an explicit theory of change, intervention logic or logframe. Whilst one of the APs contains, in an appendix, a results framework for the evaluation, the results framework for the intervention (PBO) itself is lacking.

Inception reports

Only one Inception Report was given to the Panel for review (SDF 6&7). This gives an in-depth description of the evaluated programme and provides a clear Theory of Change. It is good practice that this is established after a pilot field mission, which helps to amend the initial AP on the basis of field observations and sharpen the evaluation questions if needed.

However, it is still considered good practice to have the Theory of Change elaborated in the initial design documents. This would facilitate OIE evaluations after project completion. Establishing the Theory of Change of any intervention could be included more explicitly in the QaE form, to be developed jointly by the Quality unit referred to above and the OIE.

Evaluations and Assessments

Three evaluations and four assessment reports completed during the review period were considered. Assessments are similar to evaluations but have a narrower scope; they focus on a limited set of aspects or judgment criteria, mainly effectiveness, i.e. achievement of objectives. Evaluations generally base their judgment on the internationally recognised DAC criteria as well as on aspects of the CDB’s and the BMCs’ management of the intervention.

In general, these reports are of reasonable quality. In the main, they explain the evaluated object77 and set out the evaluation objectives. The findings are organised around the evaluation criteria or questions detailed in the scope and objectives section of the report. They are based on evidence derived from the data collection and analysis methods described in the methodology section. The reports tend to dwell on the limitations that the evaluation encountered, but without becoming defensive. In one case (the PBL Assessment) the report starts with a summary of reviews on the topic done by other MDBs. This was a pleasant surprise and indeed a good practice that could well be adopted in future evaluations too.

However, the reports also show several significant weaknesses:

- Reports do not always provide clear (reconstructed) intervention logics or theories of change for the intervention(s) evaluated.78 Evaluation criteria and questions are defined at a fairly general level. They are translated into more precise “research questions” (in an “Evaluation Design Matrix”, for each project and each criterion). However, it is unclear how these questions relate to the intervention logic (as this is not made explicit). This may be

76 Opportunities remain, of course, to be more concise and to move parts to appendices, e.g., detailed descriptions of the evaluation team or part of the description of the evaluated intervention.

77 Sometimes at great length: for instance, in the SDF 6&7 multicycle evaluation report it is only at page 30 that the findings begin…

78 Again with the SDF 6&7 evaluation, it is said to be guided by a “Logic Model” which is not explained.

132

marlene laeubli loud, 19/03/16,
Bastiaan, is there sufficient on data collection and analysis methods? Is it more than interviews and documents?
DE LAAT Bastiaan, 19/03/16,
As you can see my issue is solved after having consulted the inception report. It is quite good quality and well thought true. If we take this as representative than I’m fine with it and also better understand the basis for evaluation reports. But I’m not sure if inception reports are systematically done in this manner – Marlène do you know? Otherwise we can bring this up in the discussion later.MLL to Bastiaan – let’s talk about what you mean here.
Page 133: f01.justanswer.comf01.justanswer.com/nFcTOGCI/External+Review+of+the+OIE+v.20.3.1… · Web viewf01.justanswer.com

done in inception reports (of which, as noted above, only one was available for review), but should also be done in the final reports.

- The reports do not describe the link from the evaluation questions to the answers, how the evaluation judgments are made, and how these ultimately transform into ratings for each criterion and each project. In other words, the explanation provided in the evaluation frameworks is inadequate: the “evaluation design matrix” currently used does not provide sufficient insight into how an intervention’s performance is ultimately judged. Links between findings, conclusions and recommendations could be improved by making this more explicit. Reports should include the story of how the evaluand is credibly linked to any observed outcomes and impacts, and should be clear on how causal claims are made.

- With the exception of the PBL Assessment, reports are lengthy and detailed. One reason for this is an over-emphasis on ratings. Their detailed discussion, project by project, criterion by criterion, occupies a very prominent position in the evaluation reports’ main body of text. Although ratings are traditionally an important element in evaluations of MDBs, too strong an emphasis can be tedious and may distract the reader from the real lessons to be drawn. The detailed discussion of ratings, and their evidence base, would be better placed in an Appendix, with a brief summary in the main report. This would help give the lessons and recommendations a more prominent position than is now the case. This would also help make the evaluation reports not only shorter but also more interesting to read; this could help add value to evaluation’s image within the organisation.

- The reviewers feel that the OIE evaluations tend to over-emphasise objective-based evaluation80 and the DAC criteria, to the exclusion of other evaluation approaches such as Developmental Evaluation (Patton, 201081); evaluation should be case-specific and answer the actual information needs of managers and other decision makers rather than always concentrating on final performance.

- Related to the previous point (and again with the exception of the PBL Assessment), executive summaries (approximately 8 pages) are too long. To increase the evaluation reports’ potential impact, they would need to be reduced to 2 to 3 pages and be more focused; again, this could be done by dwelling less on the individual ratings of projects and more on key findings, lessons and conclusions. More generally, reports could be better adapted to the needs of different audiences. Although not strictly limited to evaluations, the Health Evidence Network reports82 are a model that could be adapted for evaluation reporting purposes; they are specifically geared towards addressing policy and decision-making.

- The “Recommendations to BMCs” are an interesting feature of the reports, although we are unsure to what degree such recommendations can be effectively followed up by the OIE or the Bank; they could certainly be taken up with BMC Board members.

80 The focus of an objectives-oriented evaluation is on specified goals and objectives and determining the extent to which these have been attained by the relevant intervention. See, for example, Worthen, Sanders, & Fitzpatrick (1997). Program Evaluation: Alternative Approaches and Practical Guidelines (2nd Ed.). White Plains, NY: Addison Wesley Longman.

81 Patton, M.Q. (2010) Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use, Guilford Press.

82 See the reports available at the WHO’s Health Evidence Network at http://www.euro.who.int/en/data-and-evidence/evidence-informed-policy-making/health-evidence-network-hen


- Reports (e.g. the evaluation report on Technical Assistance) focus heavily on technical problems encountered during the evaluation. Although these are important issues, again to improve the report’s flow and “readability” this section would be better placed in an Appendix. What counts is the story of the intervention, not the story of the evaluation (see the “Limitations” section in the TA report, for instance).

OIE Validations of Project and Country Strategy Programme Completion Reports (referred to globally as PCRs hereafter)

As said above, the OIE has the mandate to validate the Project and Economic departments’ PCRs and CSPCRs. However, in this period of transition, much of the OIE’s work since 2012 has been taken up with the backlog of CDB self-evaluation validations. In theory, an estimated 15 completion reports are due each year; in practice, delays in submitting reports for validation are commonplace. Following the change of Head in June 2014, the OIE therefore secured the OAC’s agreement to reduce the number of validations to a maximum of 6 per year. A backlog nevertheless continues to accumulate, as only 2 PCRs were given to the OIE for validation in 2015.

The validations tend to repeat the different items reported in the PCRs and then provide extensive comment on each. The PCVRs go into great depth and detail, which makes the documents rich and complete. This is their strength, but also their weakness: the depth and level of detail, as well as the repetition from the original PCRs, make PCVRs (overly) lengthy (20-40 pages) and difficult to read. The OIE reported spending approximately 27.2% of its time on validating PCRs in 2015, compared with 44.4% on its core work, i.e. doing or managing the higher-level evaluations. That is, validation consumed more than half as much time as the core evaluation work itself. Finally, the PCVRs now seem to be, to a great extent, a standalone output of the OIE. It is not always clear to us how they are being used as the “building blocks” for the OIE’s independent evaluations. Making this clearer in the independent evaluations would help show the link, and therefore the value, of the time being spent on the self-evaluation validations.
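To make the relative weight explicit, the two reported shares can be combined in a simple calculation (this is our own arithmetic, based solely on the two figures reported by the OIE):

$$\frac{27.2}{44.4} \approx 0.61 \qquad \text{and} \qquad \frac{27.2}{27.2 + 44.4} \approx 0.38$$

In other words, for every hour spent on core evaluation work, roughly 37 minutes went to validation; validation absorbed about 38% of the combined evaluation-related time.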

To conclude, the review finds that the OIE has taken steps to improve the perceived utility of evaluation in several ways. In the first instance, by planning its work to provide relevant and timely evidence geared towards helping the Board with its oversight and decision-making tasks. The topics are selected through dialogue between the OIE and key CDB stakeholders and reflect the priorities of the CDB’s strategic plan. Secondly, by securing the interest, and consequently the buy-in, of the OAC and CDB senior management through engaging their input throughout the evaluation process. This is evidenced by the reported interest in the latest three studies: the Haiti country strategy programme evaluation, the evaluation of policy-based operations and the SDF 6 & 7 multicycle evaluation.

The OIE’s products are of acceptable quality and could be even better if some of the shortcomings identified above were addressed. However, it is not the products themselves that impair the utility of the OIE’s work; this is undermined in other ways: (1) by the time delays in commenting on PCRs (OIE) and in providing feedback on the independent evaluations (operations and management); and (2) by the inefficient processes for agreeing topics and funding sources, as well as for providing the OIE with management responses to its reports.

Putting Evaluation to Use: transparency, feedback and follow-up

There are several ways in which evaluation can be, and is being, used. As John Mayne has pointed out in his many publications on the issue,83 when we talk of evaluation use, we are mainly thinking about its instrumental use: use made to directly improve programming and performance. But there is also conceptual use, which often goes unnoticed or, more precisely, unmeasured. This refers to the kind of use made to enhance knowledge about the type of intervention under study in a more general way. There is also reflective use, which refers to using discussions or

83 See for example, his opening chapter to Enhancing Evaluation use: Insights from internal Evaluation Units, Läubli Loud, M. and Mayne, J. 2014, Sage Publications


workshops to encourage and support reflection on the evaluation findings to see how they might contribute to future strategies.

In the case of the CDB there is some evidence to suggest that “use” is not only instrumental; other types are also developing. For example, the review of draft evaluation reports includes reflective workshops that discuss not only the findings but also seek to draw out the important lessons (reflective use).

Another important use, as recommended by the ECG, is that from time to time a synthesis of lessons is drawn from a number of evaluations and made publicly available. In fact, the Panel was impressed to hear that in the past the evaluation unit had done this, drawing on lessons from evaluations of the power sector (conceptual use). Although nothing has happened since, it is now on the “to do list” for 2016 (OIE’s 2016 Work Plan).

As for instrumental use, responsibility for using the knowledge generated through evaluation, and for possibly drawing up an action plan of what should be done, lies with CDB senior management and the relevant CDB department and division. Oversight of the application of recommendations and the uptake of lessons drawn is the responsibility of the OAC.

Evidence on how evaluations have actually contributed to decisions or negotiations is lacking or confusing. Certainly, the OIE is unaware of the extent to which its evaluations are put to use. On the one hand, the OAC minutes sometimes indicate that lessons learned are integrated into the next phase. On the other hand, the reviewers were told that often in the past the evaluation results were “too old” to be of use, as the lessons had already been drawn and used well before the report was completed. Similarly, gaps in people’s memory of how well the evaluative information from previous studies was used may also account for the scarcity of evidence.

In response, the Panel questioned CDB staff and the OIE about a particular study, the Technical and Vocational Education and Training Assessment. The feedback was somewhat contradictory. On the one hand, the study was criticised as “confirming” news rather than bringing “new news”. On the other, we learned that in October 2015 the Board of Directors approved a proposal for the revision of CDB’s Education and Training Policy and Strategy. Work on this has already begun and an external consultant has been engaged to lead the process.

Although it is one of the OIE’s tasks to set up a database on results and lessons learned from evaluations, so far this has not been a priority. There is also currently no systematic tracking of lessons or recommendations arising from the evaluations, or of any progress in their uptake. (The Panel has already referred above to the OAC’s lack of oversight of the use of evaluation.)

The OIE’s role in supporting CDB’s organisational learning is clearly specified in the Evaluation Policy, with many good suggestions for knowledge sharing activities such as “brown-bag lunches, workshops, pamphlets and short issues papers” (p. 19). So far, however, the OIE’s lead role on the knowledge sharing side appears to be quite limited. It has provided advisory input in Loan Committee discussions, and organises workshops together with the relevant operations department to discuss the implications of evaluation studies. Ultimately, of course, the uptake of evaluation results and knowledge is in the hands of management. But the evaluation unit has an important role to play as knowledge broker and knowledge manager. Both roles have tended to be underplayed in the OIE’s work plan so far.

Transparency: The Communication Strategy

In recent times, and with the approval of its new Disclosure Policy, the CDB has started to post its independent evaluation reports on its website (there is nothing on the self-evaluations). The website also presents a good overview of the role and function of the OIE and of evaluation within the CDB. This is a step in the right direction for sharing information. However, in our view, the CDB’s communication strategy is the weakest part of the evaluation system to date.


The Panel has already commended the OIE on its efforts to engage the CDB and the OAC in evaluation work. But reporting and communicating the lessons seem to be targeted entirely at the Board and the CDB. Moreover, the 2015 budget provides only US$2,000 for communication, none of which is intended for outreach.

The reviewers feel that active engagement with the more indirect stakeholders, for example project implementers in the BMCs, NGOs or project beneficiaries, is relatively weak84. There appears to be little reflection on drawing out significant messages for the broader group of stakeholders, or on how to transmit them to the “right” people in the “right” way (knowledge brokerage).

To conclude, evidence on the uptake of evaluation is either confusing or sparse. It is unfortunate that no systematic record-keeping system has so far been put in place to track lessons learned or the uptake of recommendations (or actions agreed in management responses). The OIE plays a weak role in brokering the knowledge generated through evaluations to the benefit of external partners, and in managing such knowledge. Although the Evaluation Policy specifies the need for “distilling evaluation findings and lessons learned in appropriate formats for targeted audiences both within and outside the CDB” (p.19), such a targeted communication strategy has yet to be developed and budgeted.

Strengthening Evaluation Capacities and Networking

From the outset in 2012, the OIE has stressed the importance of developing and strengthening evaluation capacities within the OIE, the CDB and, subject to available resources, in the borrowing member countries. Building evaluation capacity in the BMCs and the CDB is one of the OIE’s mandated tasks and has figured on the work plan from the beginning (Work Programme and Budget 2012-2014). Developing an internship programme for graduates from the Caribbean region was one idea advanced to help build local evaluation resources. To date, however, capacity-building has primarily been focused on OIE and CDB staff. One of the OIE’s two objectives for 2015, therefore, was to take up the challenge and “strengthen evaluation capacities and networking”, including reaching out to the BMCs.

Developing OIE staff capacities

The change from project level to strategic and thematic evaluations does require different evaluative skills and competencies. The MDB Evaluation Pyramid presented below in Figure 3 shows the different types of evaluation and changing resource needs as one ascends the pyramid. Implicit here also is the change in the type of expertise and competencies needed as evaluation aspires to the higher levels.

Consequently, for 2015 the OIE set itself the objective of networking and developing working partnerships with regional and international evaluation entities and academic institutions. The rationale was twofold: (1) to secure further support and guidance, and (2) to increase its outreach and coverage through joint work and international exposure. Another implicit aim was to benefit from partners’ contacts in the BMCs wherever possible, so as to improve data collection and quality.

Figure 3: The MDB Evaluation Pyramid85

84 A broader communication strategy is one of the principles and good standards of the Evaluation Coordination Group and the Evaluation Community more generally.

85 US Treasury Report to Congress on Evaluation Standards and Practices at the Multilateral Development Banks (September 2014, Annex C).


The OIE has therefore linked up with Carleton University in Canada and the University of the West Indies, Barbados campus. The OIE was also approached by the Development Bank of South Africa to exchange experiences about setting up an evaluation entity in a “small” development bank. However, its attempt to become a member of the Evaluation Cooperation Group was not successful for reasons beyond its control.

The OIE is to be commended for addressing the issue of staff competencies and professional development more generally. New developments in evaluation, as well as changes in the scope of the OIE’s work, may necessitate new competencies. For this reason, organisations such as the International Development Evaluation Association have recommended that the competencies of evaluators and evaluation managers be periodically reviewed. Several publications now exist on competency requirements and on suggestions for the periodic review of staff competencies.86

It is not within the Panel’s remit to compare and contrast the OIE’s competencies with those recommended by international and national agencies. What we can say, however, is that the OIE demonstrates great forethought in taking this on board.

Capacity building within CDB

The OIE’s objective also includes continuing to develop measures for improving the monitoring and self-evaluation side of CDB’s work. The OIE’s strategy here is to use the windows of opportunity offered by some of the training sessions being organised by the CDB as part of its shift towards MfDR, e.g. by Corporate Planning Services and Technical Assistance. For 2016, it is also planned to have the OIE present at the annual staff meeting and Learning Forum.

The OIE also organises some ad hoc training with operations, for example to help them understand new tools (e.g. for drawing out lessons from self-evaluation reports) and, more generally, in

86 E.g. IDEAS, (2012) Competencies for Development Evaluation Evaluators, Managers and Commissioners, the Canadian Evaluation Society’s Competencies for Canadian Evaluation Practice (2010) and the Swiss Evaluation Society’s Evaluation Managers Competencies Framework (2014)


helping staff appreciate how evaluation can add value to the organisation’s work. Measures include providing advisory services on demand, and providing training alongside the introduction of new or revised tools.

Capacity building in the BMCs

This is an ambitious task and would require additional investment beyond the bi-annual work plans to be effective. A modest attempt was made in 2015: from what we understand, the OIE has joined with Carleton University and the University of the West Indies, using their networks in some of the BMCs, to try to develop this aspect.

To conclude, we cannot comment on the quality of, or reaction to, such training, but we can commend the OIE for making capacity building one of its priority objectives. From both the Policy and the documents we reviewed, we note that capacity building was always seen as an important aspect of the OIE’s work, but it has hitherto received little strategic focus. The resources currently available to the OIE will, however, limit the scope of such work in the BMCs, which in turn will continue to hinder the production of sound evidence for the OIE’s evaluations.

Adequacy of the OIE’s human and financial resources to support its work

OIE’s Human Resources

The OIE has a staff of 5: the Head, one senior evaluation officer, two evaluation managers and one administrative assistant. Three of the five were recruited from within the CDB. This limited capacity means that it is not feasible to cover all the types of evaluation activities outlined in the Evaluation Policy. Yet there is some indication from the Board that the OIE should embark on impact evaluations at some future stage. An increasing demand for evaluation, and for impact evaluations in particular, would run the risk of overstretching the OIE’s capacity to deliver credible and useful evaluations. Moreover, there are many other designated OIE activities that should be recognised as valuable work: the validations, building CDB and BMC evaluation capacity, providing supervision and advice, knowledge management and brokerage, and managing evaluation contracts. The time needed to deal with all of these may be underestimated in the OIE’s budgets; all are important for assuring best value from evaluation. The Panel is concerned that the demand for “doing” evaluations, as well as the OIE’s interest in advancing its skills in high-level evaluations, may undermine the importance, and the time needs, of other essential tasks.

Limited and unpredictable resources for independent evaluations

The OIE is funded from the general administrative budget, and its allocation represents approximately 2.5% of the total. Whilst this is seemingly a higher proportion than in other MDBs, in real terms it is quite limited: 75% of the OIE budget goes to staff salaries, leaving US$190,000 in 2015 for external consultants and other expenses.
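As a rough, back-of-envelope illustration (this is our own calculation, on the assumption that the US$190,000 corresponds to the non-salary 25% of the OIE budget), the implied 2015 figures would be:

$$\text{OIE budget} \approx \frac{\text{US\$}190{,}000}{0.25} = \text{US\$}760{,}000, \qquad \text{CDB admin.\ budget} \approx \frac{\text{US\$}760{,}000}{0.025} \approx \text{US\$}30.4\ \text{million}$$

If these assumptions hold, the funds actually available for commissioning evaluation work are very small in absolute terms, whatever the headline percentage may suggest.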

CDB’s donors do not appear to specify a budget for monitoring and evaluation activities. This means, on the one hand, that there is no clear external budgetary recognition of the operations’ self-evaluation work or of the OIE’s time in the validation process; and, on the other, that whilst donors expect to receive reports from independent evaluations, this expectation is not backed up when funds are allocated.

The resources available to the OIE for hiring external consultants have dropped from US$350,000 in the revised 2014 budget to US$120,000 in the 2015 indicative budget. The OIE estimates that, for high-level evaluations, the cost of external consultants is between US$90,000 and US$350,000 (the SDF 6 & 7 evaluation cost US$255,000). In the Panel’s experience, this is a sound estimate. With one staff member fewer during 2014-2015, coupled with the OIE’s focus on dealing with the


backlog of self-evaluations amongst other priorities, it was unable to execute some of the evaluations during the annual budget period. Hence the budget was reduced for subsequent years, but it has proven insufficient to fund the OIE Work Programme. The OIE has therefore needed to turn to the only alternative source available at present, the SDF. But the SDF funding rules apply to specific countries and themes, which obviously restricts the OIE’s choice of evaluation subjects and themes. Since the SDF does not allow for OIE recurring costs such as staff travel, SDF-funded evaluations have to be outsourced. As shown in Figure 1 above, the approval process is inefficient and causes delays. The Panel learned that additional funds, for example for specific studies, could be secured from within the administrative budget during the year, on condition that the request was based on sound arguments.

Whilst the Panel fully appreciates that the Bank is operating within a zero-growth framework, the reviewers were surprised to learn that OIE funding is not sufficiently secured in line with its priorities and work plan. The need to seek alternative funding for individual studies does not allow for any flexibility and undermines the OIE’s independent judgment of what needs to be done.

To conclude: the OIE is inadequately resourced to meet the expectations set out in the CDB’s Evaluation Policy. The Panel recognises that the CDB itself faces budgetary restrictions, but the current arrangements for securing extra funding are complicated and inefficient, and they limit the OIE’s ability to exercise autonomy in the selection of its evaluation studies. Moreover, OIE budgets significantly underestimate the time needed for managing evaluations and for other evaluation activities.

Self- and independent evaluation

Self-evaluations cover public sector investment, lending and technical assistance, policy based loans, and country strategy programmes. Both types of evaluation are important as they are at the very heart of the evaluation function; the self-evaluations are said to be the building blocks for the more strategic evaluations that the OIE is now undertaking.

The Evaluation Coordination Group recommends that the self-evaluations be carried out by the relevant operations department and in turn, reviewed and validated by the organisation’s independent evaluation office. The CDB’s Evaluation Policy therefore talks of “validating all self-evaluations” as being one of OIE’s essential oversight tasks.

Within the CDB, the self-evaluations should provide management with performance assessments and thereby serve an accountability function to the CDB and the Board. To support the process, the OIE provides operations with manuals and checklists for guidance. Once a self-evaluation report is to hand, it is passed to the OIE for validation of its technical quality and credibility.87

However, in the CDB’s case, there are well-documented issues that have affected the quality and timeliness of the self-evaluations and, in turn, the quality of the foundation on which the independent evaluations are built. Paucity of documentation within the CDB, paucity of data collected and available in the Borrowing Member Countries (BMCs), and delays in producing completion reports and in having them validated by the OIE were all raised systematically during interviews and in some of the independent evaluation reports. There appears to be little incentive to complete self-evaluations in a more timely manner.

Generally speaking, many of the monitoring data problems appear to be due to a lack of management oversight. For example, with the introduction of results-based management, the logical framework and the associated monitoring and data needs are now systematically built into intervention design. However, the BMCs are not delivering the data as contractually agreed at the outset.

87 According to the Evaluation Policy, the OIE should validate all PCRs and CCRs. However, because of the backlog of reports and the delays in completing them (sometimes by years), since October 2015 the OIE has had OAC agreement to validate a maximum of six per year, selected in consultation with the OAC.


Incentives to support any significant shift towards a results-based culture seem weak, and sanctions are rarely enforced when data are not supplied or projects suffer lengthy delays. Although we appreciate the complexities of trying to enforce monitoring compliance, the consequence is that project deadlines have often had to be extended, data gaps are not satisfactorily dealt with and, in turn, there has been a void in the quality and quantity of evidence available for the CDB’s self-assessment of project performance. For some time, this lack of oversight has been tolerated. Part of the problem is the low priority that operations accord to completing the self-evaluation reports, coupled with the absence of any focal point within senior management to drive the process and deal with the problems.

No record is kept of how the self-evaluation results are actually used. They do not appear on the CDB website, although we were told that the findings are integrated into subsequent project designs. We are therefore somewhat unclear about the current utility of these reports. The situation is exacerbated by a rather confused image of evaluation: some operations staff consider the OIE’s input (through validations or independent evaluations) to be at times over-critical, regulatory and of little added value; for them it is a threat rather than an opportunity for learning. Yet at the same time, evaluation is recognised as an integral part of results-based management.

According to the Evaluation Policy (p. 15), “The President, with the support of the Advisory Management Team, is accountable for encouraging and providing an environment where evaluation adds value to the overall management of CDB’s activities and fosters a culture of critical analysis and learning”. But in the CDB, a learning culture still appears to be in its infancy, and the leadership role expressed in the Evaluation Policy remains underdeveloped.

Some managers, however, seem to be starting to change the status quo. For example, a revised and simplified template for producing project completion reports is being considered, and mid-term project reviews are expected to be more stringent in examining monitoring plans and practices and in tying disbursements to performance. In some cases we also learned of incentives being introduced to encourage project managers to complete their reports more promptly. But much remains to be done and, since the OIE is no longer responsible for monitoring and project evaluations, there is a void that needs to be filled. It is up to line managers to drive this work forward.

To conclude, in view of a number of “frustrations” between the OIE and operations, largely to do with delays in exchanging comments on the various reports and with the paucity or absence of monitoring data, the added value that evaluation might offer to the operations area is poorly recognised. Moreover, the link between self-evaluations as the building blocks for independent evaluation is not apparent. There is thus little incentive or management focus to drive any change to current practices; in other words, there is a lack of leadership to advance a learning environment in which evaluation can play a major part.


Part Three: General Conclusions and Recommendations

With regard to the Evaluation Policy and the OIE’s independence, our Review finds that over the past few years the CDB has succeeded in establishing an independent evaluation office that is credible and respected. It reports to a Board Committee and is thus organisationally independent from CDB management. Its work is grounded in an Evaluation Policy, agreed by the Board and the CDB, that reflects internationally recognised principles and good practices. The Policy sets out a broad scope of responsibility for the OIE which, however, seems over-ambitious given current resource constraints. The OIE clearly has both an accountability and a learning function; the latter should support the development of an organisational learning culture. (So far, the uptake of recommendations and key lessons has not been systematically monitored or recorded.) In general, on the issue of independence, we conclude that the OIE meets the criteria for organisational and behavioural independence and is protected to a certain degree from external or contextual influences.

However, as the Independent Advisory Committee for Development Impact has said, “independent evaluation needs to have clout … credibility of evaluation hinges on public perceptions as well as on reality.”88

We therefore highlight a few potential threats, even though there is no evidence to suggest that any of them is real at present; it would be in the OIE’s and the CDB’s interest to have them clarified sooner rather than later. For instance:

Any delays incurred in reporting self- and independent evaluation results to the Board could be interpreted as operational interference.

Similarly, there is no agreed process for dealing with conflicts of interest between the OIE and management in reporting results; it is simply expected that any disagreements will be reported in the management response.

Another possible threat is the Head of the OIE’s lack of complete autonomy over staff: recruitment, termination, continuation and professional development. The Policy is not sufficiently clear about who has the final word in the case of disagreement.

And finally, on resources, our Review accepts that the funds available to the CDB are limited and that the OIE’s budget is not independent but operates within the Bank’s budgetary limitations. Nevertheless, we feel that more flexible arrangements could be devised to allow less restrictive and timelier access to funds.

With regard to governance, our Review has highlighted the difficulty the OAC faces in not receiving the background papers for its meetings in sufficient time to do them justice. Moreover, these documents tend to be very lengthy and not necessarily “reader friendly”. The OAC’s oversight responsibility is likely to be weakened as a result, and we can already see some indication of this. For instance, requests for systematic follow-up on management actions resulting from evaluation findings have not been answered; nor is there a standing item for this on the OAC agenda, so such requests can easily be passed over and forgotten. The broadened responsibilities now given to the OAC also mean that many competing entities are trying to secure its attention. There is now provision for the OAC to call on consultants for help, which we feel may strengthen the OAC in its oversight responsibilities.

Furthermore, as a committee of the Board, the OAC should stress the urgency of developing evaluation and monitoring capacity in the BMCs, since this gap is having a direct impact on OIE and CDB evaluations.

With regard to the OIE’s performance, we must respond to the questions raised in this Review’s Terms of Reference, which essentially means answering two main questions: is the OIE doing the right thing, and is it doing it in the right way?

88 Picciotto, R. (2008). Evaluation Independence at DFID: An Independent Assessment prepared for the Independent Advisory Committee on Development Impact (IACDI), p. 4.


There is no doubt that the decision to establish a credible, independent evaluation function in the CDB is the “right thing” to do; “effective and useful evaluation and oversight activities can assess development effectiveness, hold the organisation accountable for results, and improve operational performance.”89 It is also the policy of the MDBs to have such a function, and the CDB has now aligned itself with international standards and practice.90 The question, therefore, is whether the OIE is going about it in the right way.

The OIE has taken the “right” steps to improve the engagement and interest of the OAC and CDB senior management, from selecting the topics for its evaluations through to finalising the conclusions and recommendations in a collaborative spirit. It falls short, however, of taking the messages emerging from the studies to “outsiders”, such as those responsible for implementing CDB interventions in the BMCs.

In its oversight role, we feel that the OIE has paid insufficient attention to the actual utilisation of evaluation; it is beyond its responsibility to see that action is taken, but it is certainly within its remit to record how, and how well, the lessons drawn have been taken up and used. With regard to its oversight of the self-evaluations (the validation process), the OIE has attempted to improve dialogue with the operations departments and to demonstrate the dual function of oversight and learning. It is now emphasising the learning aspect by providing tools and guidance on how to draw out lessons and integrate them into future planning. More recently, it has sought to provide more formalised training in evaluation by working with the corporate planning services and the technical assistance department to develop courses that show how, where and when evaluation plays its part within the MfDR framework.

However, one of the challenges in evaluation management is balancing independence with facilitating buy-in and ownership. It is a fine line to walk, and the tone of the collaboration depends to a large degree on the climate between management and the head and staff of the independent evaluation unit. In practical terms, for the CDB this means defining the role of the OIE in relation to the self-evaluations performed by the Projects and Economics Departments. The change from the EOV to the OIE made the role change quite clear: the OIE is no longer responsible for project monitoring or for planning data needs together with the operational departments. On the other hand, to improve understanding and learning, there needs to be an interface between evaluation and management. At present, the OIE’s dual role (its advisory role in relation to operations and its strategic role towards the OAC and senior management) has not been satisfactorily resolved. Operational staff still do not appear to see any urgency in producing their completion reports, nor to appreciate what lessons might be drawn from such reflection. The OIE is doing its best to support “learning” whilst at the same time keeping at arm’s length. The greatest challenge the OIE faces in its new capacity is the slow development of an organisational learning and evaluation culture.

A Learning and Evaluation Culture

Evaluation utility depends on the engagement of evaluation users – those who should benefit from the knowledge generated through the studies. Useful evaluation therefore depends to a large degree on the development of an evaluation and learning culture and on how well these are embedded in the organisation. This means that the organisation recognises and appreciates evaluation’s role and functions, particularly in helping it understand what it is achieving and where and how improvements can be made. In short, the added value that evaluation can bring to an organisation is its ability to draw out the important lessons that can help improve the organisation’s performance.

However, whilst CDB senior management shows all the signs of embracing evaluation as an important strategic tool, there still appears to be some apprehension about receiving criticism, however constructive it might be. The OAC has already affirmed its interest in learning what can be “put right the next time around”. In considering accountability, the committee is asking for a more strategic approach to learning and sharing knowledge based on evidence. The CDB also shares the development goals of the other MDBs, that is, “to end extreme poverty and promote shared prosperity”. This means looking for new forms of problem-solving and for ways to create a “development solutions culture”. Hence there is an interest in learning from experience and exchanging knowledge about what works. This implies balancing accountability and learning: making sure they are seen not as opposites but as compatible. This greater emphasis on learning requires a reframing of the CDB’s thinking and a readiness to deal with the constructive criticism that evaluation can offer.

89 CDB (2011) Evaluation Policy, p. 2.
90

By way of comparison, the following excerpt concerning UNRWA illustrates how a weak evaluation culture can manifest itself:

Weak evaluation culture

27. While some stakeholders seem keen on evaluation, the overall evaluation culture in UNRWA is weak. There are several aspects to it.

28. First, many of the interviewees stressed that UNRWA has a weak learning culture, which stems from a number of factors. One reason given relates to the cultural virtue of oral communication, which makes conveying documented experiences challenging. Another reason is language: a majority of UNRWA’s national staff are not fluent in English, and evaluation reports are mostly in English. Furthermore, according to some interviewees, criticism, even if constructive, is mainly perceived as a threat and not as an opportunity. Finally, learning is also affected by a very basic constraint: lack of time.

29. Second, UNRWA has a weak knowledge management system for systematically collecting and sharing experience and lessons learned. UNRWA communities of practice do not exist; several interviewees mentioned using knowledge networks outside UNRWA, i.e. communities of practice managed by other agencies. Accessing evaluation reports is also not easy: the UNRWA website does not provide access to evaluation reports and, while the Agency’s intranet has a site for them, it is not a complete repository and the Evaluation Division does not know exactly how many decentralized evaluations are being produced. In addition, there are only a few evaluation plans at the level of field offices or departments.

30. Third, the Panel found that decentralized evaluations are, at least partly, perceived as donor-driven accountability instruments rather than as learning tools. In that sense, evaluations are managed as bureaucratic requirements, thereby weakening the learning dimension.

31. Finally, the sensitive political context in which UNRWA operates may also discourage a strong evaluation culture, as evaluative evidence can sometimes be overridden by political considerations.14 The Panel was repeatedly told that, given the political context, any change is a challenge.

14 An example mentioned to the Panel was the evaluation of the Qalqilya Hospital (2013), which concluded that the Hospital should be closed. However, for political…


Recommendations

(A list of possible recommendations, to be discussed and developed within the Review Panel initially and then proposed for discussion with the OIE.)

OAC’s oversight responsibility needs to be strengthened.

Possibly review the Evaluation Policy to redress gaps.

OIE to develop a five-year strategy providing a step-by-step work programme, budget and timeframe for implementing the Evaluation Policy.

Stronger leadership from the President to provide a climate conducive to promoting the added value of evaluation to overall management and to fostering a culture of critical analysis and learning.

Stronger support from the Advisory Management Team for the evaluation function, by emphasising CDB managers’ accountability for performance results and by devising incentive schemes to support this accountability function.

Set up committees (advisory groups) to accompany specific evaluation studies as a means of reinforcing ownership.

OIE could contribute to developing a learning culture within the CDB by adopting the role of critical friend in its dealings with the CDB operations departments and the CDB more generally. Valuable insight can be gained from a focused conversation with the client about aspects of the evaluation that went well and those that did not go so well. Some meetings might include all relevant stakeholders in addition to the main client. This feedback can then inform a subsequent internal project team debrief (e.g. identifying internal professional development or process improvement needs), and input from all perspectives can inform planning for future projects. A summary of these discussions can then be circulated so that everyone can use it for their own internal or external quality improvement purposes.

Rather than asking “what went wrong?”, the conversation can turn to “what surprised us, what we would do differently, what did we expect to happen that didn’t, and what did we not expect to happen that did”. This is a better means of getting at the negative aspects without placing blame.

OIE to train up and engage “champions” within CDB operations departments to help demonstrate evaluation utility and provide “on the job” training in self-evaluation to colleagues. A quality control group could also be set up, as Picciotto suggested for DFID, to carry out and develop the work of quality assessment and, particularly, Quality at Entry monitoring.

OIE should be given the resources to build M&E capacity in the BMCs as a priority, possibly in partnership with other MDBs, development organisations and the countries themselves.

Possibly criticise the over-emphasis on the five DAC criteria, which have been in use for more than 15 years without any major revision. Given their importance and influence in the field, it is pertinent that independent professionals take a critical look at them, especially since many scholars and practitioners consider the quality of evaluations in development aid to have been quite disappointing (Chianca, T. (2008) “The OECD/DAC Criteria for International Development Evaluations: An Assessment and Ideas for Improvement”, Journal of MultiDisciplinary Evaluation, 5(9), March 2008, p. 41; http://evaluation.wmich.edu/jmde/).

Regarding the validation of self-evaluations: it is recommended that a leaner format be developed for the PCVRs, repeating less of the original PCR text and focusing on the validity of the PCR against a limited number of key issues to be assessed for each PCR. The PCVRs are also drafted as full prose documents, which may not be necessary for this type of document; a more tabular format with more succinct statements could lead to a leaner production process without losing usefulness. The “PCR checklist” would be a good starting point for this.

The link between self-evaluations, validations and independent evaluation is not clear at present, nor is the link between self-evaluations and Quality at Entry (QaE) documents, so one wonders what all the effort on the operations side is for. This is a real issue: operations seem to do a lot of interesting and reasonably good work, but there is a lack of coherence. (This impression is based on the documents alone, not on interviews that would give a broader picture.)

The EIB evaluation unit was criticised for this in the past too. Since then, it has started to include “younger” projects (sometimes still ongoing) in its samples, and it redoes the portfolio analysis just before finalising the report to see whether things have changed; the services can, of course, indicate in their response whether things have indeed changed over time.

Recommendations for improving the process for study approval and funding

Give recommendations on priorities for OIE work.

Funding should come preferably from the administrative budget. Unused monies could then be released in the annual budgetary reviews, but this should have no effect on the budget for subsequent years. SDF funding at a level… The Panel is surprised to find that a Board-approved OIE work programme and budget is inadequate; either the proposed budget per work programme…


The Panel, however, encourages the creation of such a quality control group (as suggested above), whose role cannot be fulfilled by the OIE since it lies outside the OIE’s scope and present capacity, even though the OIE could play an advisory/methodological role.


APPENDICES

Appendix I – The External Review Mandate: Terms of Reference and Approach Paper
Appendix II – Review Approach, Data Collection and Analysis, and Limitations
Appendix III – Overview of OIE Evaluation Practice
Appendix IV – List of Persons Interviewed
Appendix V – List of Documents Reviewed
Appendix VI – List of Topics used to guide interviews with members of the CDB Board of Directors
Appendix VII – List of Topics used to guide interviews with CDB staff


Appendix III – Overview of OIE Evaluation Practice (prepared by the OIE in response to Reviewer’s request)

Caribbean Development Bank, Office of Independent Evaluation - OIE

Category | Response

Percentage of projects subject to project (self-)evaluation | 100% – Project Completion Reports (PCRs).

Percentage of projects subject to validation by OIE | Approximately 40-50%. About 15 projects exit the portfolio annually. The Evaluation Policy calls for all PCRs to be validated, but OIE resources are insufficient. The validation process was reviewed in 2014; the OAC (a Board committee) now selects a sample of 6-8 PCRs for validation each year.

Percentage/number of projects subject to in-depth review by OIE | None, unless specifically requested by the OAC. Due to limited resources, the OIE evaluation work programme focuses on PCR validations and high-level evaluations, including country strategy and programme evaluations (CSPEs).

Number of high-level evaluations conducted by OIE (e.g. sector, thematic, geographic) | 1-2 per year since 2011. The plan is 2-4 per year from 2016, including CSPEs (the first, for Haiti, is planned for Q1 2016).

Number of project impact evaluations conducted by OIE | None. The OIE includes “impact questions” in high-level evaluations.

Number of project impact evaluations conducted by Bank staff or other non-OIE staff | The OIE is not aware of any impact evaluation conducted by the Bank. However, the OIE provides technical support to the Basic Needs Trust Fund (BNTF) in its design of an M&E framework that entails impact evaluations.

Budget | In USD mn: 0.78 in 2015; 0.82 in 2016. This is equivalent to about 2.5% of the total CDB Administrative Budget. 75% of the budget is for staff salaries (4 professionals, 1 support staff), leaving around USD 190,000 (in 2015) for other expenses, including consultants, e.g. for external evaluations. Additional funding is accessed via the Special Development Fund (SDF); this varies according to the type and scope of the evaluation, e.g. the ongoing SDF 6/7 evaluation is SDF-funded at USD 255,000. The budget is determined by the Board and is not separate from the administrative budget. SDF funding for evaluations is considered separately and is subject to the Bank’s internal approval process. SDF funding cannot be used to cover OIE expenses such as staff time or travel, and country eligibility for SDF funding is also a consideration. The OIE expressed concerns about this funding track in respect of predictability, independence and eligibility limitations.

Reporting line | Head of OIE reports to the Board, with an administrative link to the President.

Terms of appointment for Head | 5-year term, renewable once. Appointed by the President with the agreement of the Board.

Right of return for Head | Not eligible for other staff positions.

Consultants as proportion of OIE budget | 2015: 19% (USD 145,000), plus SDF funding. SDF-funded evaluations are outsourced.

Last external evaluation (or peer review) of OIE | No external evaluation, though a review of the function was done in 2011, leading to the Evaluation Policy. OIE External Review completed in April 2016.

Departments or special programmes supporting impact evaluation | None.


Appendix IV – List of Persons Interviewed

Name | Function relative to OIE | Type of interview

Mrs. Colleen Wainwright | Member, CDB Board of Directors (UK) | Face to face
Mrs. Cherianne Clarke | Alternate Member, CDB Board of Directors (UK) | Face to face
Mrs. Jean McCardle | Member, CDB Board of Directors (Canada) | Face to face
Dr. Louis Woodroofe | Member, CDB Board of Directors (Barbados) |
Mr. A. de Brigard | Former Member, CDB Board of Directors | Skype interview
Mr. H. Illi | Former Member, CDB Board of Directors | Telephone interview
Mrs. Claudia Reyes Nieto | Member, CDB Board of Directors | Telephone interview
Mr. Bu Yu | Alternate Director, CDB Board of Directors | Face to face
Mr. Michael Schroll (Barbados) | Head, OIE | Series of interviews via Skype and face to face
Mr. Mark Clayton | OIE Senior Evaluation Officer | Focus group
Mrs. Egene Baccus Latchman | OIE Evaluation Officer | Focus group
Mr. Everton Clinton | OIE Evaluation Officer | Focus group
Mrs. Valerie Pilgrim | OIE Evaluation Officer | Focus group
Dr. Justin Ram | CDB Director, Economics Department | Face to face
Mr. Ian Durant | CDB Deputy Director, Economics Department | Face to face
Dr. Wm Warren Smith | CDB President | Joint interview, face to face
Mrs. Yvette Lemonias-Seale | CDB Vice President, Corporate Services & Bank Secretariat | Joint interview, face to face
Mr. Denis Bergevin | CDB Deputy Director, Internal Audit | Face to face
Mr. Edward Greene | CDB Division Chief, Technical Cooperation Division | Face to face
Mrs. Monica La Bennett | CDB Deputy Director, Corporate Planning | Face to face
Mrs. Patricia McKenzie | CDB Vice President, Operations | Face to face
Ms. Deidre Clarendon | CDB Division Chief, Social Sector Division | Face to face
Mrs. Cheryl Dixon | CDB Co-ordinator, Environmental Sustainability Unit | Focus group
Mrs. Denise Noel-Debique | CDB Gender Equality Advisor | Focus group
Mrs. Tessa Williams-Robertson | CDB Head, Renewable Energy | Focus group
Mrs. Klao Bell-Lewis | CDB Head, Corporate Communications | Face to face
Mr. Daniel Best | CDB Director, Projects Department | Face to face
Mr. Carlyle Assue | CDB Director, Finance Department | Face to face


Appendix VI – Interview Guide: Members of CDB Board of Directors

Below is a list of themes that I should like to raise with you, based on your experience and knowledge of the CDB’s independent evaluation function (Office of Independent Evaluation).

In each case, I should be grateful if you could illustrate your responses with examples, or help this Review by, wherever possible, sending me (or telling me where I can find) any documents that could support your responses.

This guide is being sent to you in advance to help prepare our meeting. However, the interview will be conducted more in the style of a conversation, and the following sub-questions will be used to GUIDE it. Please feel encouraged to raise any additional issues that you feel we should take into account.

On the governance and independence of CDB’s evaluation function

What mechanisms are in place to support its independence?

How satisfactory are the current arrangements in your opinion?

How is the balance between independence and the need for interaction with line management dealt with by the system? For example, what mechanisms exist to ensure that the OIE is kept up to date with decisions, policy/programme changes and other contextual changes that could have an effect on OIE evaluation studies and evaluation planning?

On the OIE’s Evaluation Policy

The CDB’s Evaluation Policy was established in 2011. To what degree do you feel it is adequate? Still relevant?

What suggestions do you have for any improvements?

In your opinion, how adequate is the current quality assurance system for overseeing the evaluation function?

On the quality and credibility of evaluation studies

To what degree do you believe the reports are fair and impartial?

Do you consider them to be of good quality? Are they credible?

Are you adequately consulted/involved on evaluations of interest to you?

On the relevance and usefulness of evaluations

How well does the OIE engage with you/your committee during the preparation, implementation and reporting of an evaluation study to ensure that it will be useful to the CDB?

How are the priorities set for the independent evaluations? What criteria are used? Are you satisfied with the current procedure?


When OIE evaluation studies are outsourced to external consultants, what criteria are used to make this decision?

How are the priorities for the OIE’s 3-year rolling work plan agreed? In your opinion, is the current plan adequate in terms of coverage and diversity?

In your opinion, do the evaluations address important and pressing programs and issues?

To what extent do you feel that the OIE’s evaluations integrate cross-cutting themes such as gender, energy efficiency/renewable energy and climate change? What improvements might be made, and how?

On the dissemination and uptake of evaluation findings and recommendations

To what extent do you feel that evaluation findings are communicated to the CDB and its stakeholders in a (a) useful, (b) constructive and (c) timely manner?

Are evaluation recommendations useful? Realistic?

What mechanisms are in place to assure that evaluation results are taken into account in decision making and planning? What improvements do you feel could be made?

How have you used the findings from any evaluations? Examples?

To what degree do you feel that evaluation contributes to institutional learning? And what about to institutional accountability? Any examples?

What mechanisms are in place to ensure that knowledge from evaluation is accessible to CDB staff and other relevant stakeholders? Are the current arrangements satisfactory?

How satisfied are you with current arrangements? What expectations do you have for the future?

On resources

How is the OIE resourced financially, and is this satisfactory?

What about the OIE staff: are all the important areas of expertise represented in the team?

On this Review of the Office of Independent Evaluation

What are your expectations? What are you particularly hoping to learn from it?

Thank you very much for your cooperation and input


Appendix VII – Interview Pro-Forma: CDB Staff Members

This presents a list of the topics raised during interviews. It was used to guide an open-ended discussion, which means that the sequence and exact wording of the questions may not have followed this order or been asked in exactly this way.

Changeover to an Independent Evaluation Office? Expectations? Advantages and disadvantages?

Satisfaction with working relations between operations and the OIE from your perspective?

Process of dealing with the PCRs and CCRs? Advantages and limitations?

Quality and credibility of the validation process?

How are the self-evaluation reports used?

Credibility and quality of OIE’s evaluation reports?

Communication of self-evaluations and OIE independent evaluations? To whom, and in what way? Possible improvements?

External Review of the Office of Independent Evaluation

Caribbean Development Bank

April, 2016

155

Page 156: f01.justanswer.comf01.justanswer.com/nFcTOGCI/External+Review+of+the+OIE+v.20.3.1… · Web viewf01.justanswer.com

Principal ReviewerMarlène Läubli Loud,

Review Panel MembersJohn Mayne

Bastiaan de Laat

156

Page 157: f01.justanswer.comf01.justanswer.com/nFcTOGCI/External+Review+of+the+OIE+v.20.3.1… · Web viewf01.justanswer.com

Primary audiencesMembers of the Oversight Assurance Committee

Members of the Board of Directors

Independent Evaluation Office

Caribbean Development Bank Management and Staff

Review Panel Members

Marlène Läubli Loud (DPhil) is currently an independent consultant and trainer in public sector evaluation. She has worked with a range of organizations, small and big including the European Commission, the World Health Organisation, the United Nations Evaluation Group, the UK Employment Department, UK Health Promotion Agency (now merged and become NICE), and the English Nursing Board. She was head of the Research and Evaluation Unit at the Swiss Federal Office of Public Health for nearly twenty years where she gained much experience in evaluation management, and especially in the ways and means for improving the use and utility of evaluation in organisations. She continues to have a keen theoretical and practical interest in this area. Prior to this, she was an independent evaluator in the UK, specializing in the evaluation of developmental programmes in health and general education. She was also a research fellow at the Department of Education, University of Surrey and in the Social Science Faculty, University of Oxford, UK.

John Mayne (PhD) is an independent advisor on public sector performance. He has been working with a number of organizations and jurisdictions, including several agencies of the UN, the Challenge Program on Water and Food, the European Union, the Scottish Government, the United Nations Secretariat, the International Development Research Centre, the Asian Development Bank and several Canadian federal departments on results management, evaluation and accountability issues. Until 2004, he was at the Office of the Auditor General where he led efforts at developing practices for effective managing for results and performance reporting in the government of Canada, as well as leading the Office’s audit efforts in accountability and governance. Prior to 1995, John was with the Canadian Treasury Board Secretariat and Office of the Comptroller General. He has authored numerous articles and reports, and edited five books in the areas of program evaluation, public administration and performance monitoring. In 1989 and in 1995, he was awarded the Canadian Evaluation Society Award for Contribution to Evaluation in Canada. In 2006, he became a Canadian Evaluation Society Fellow.

Bastiaan de Laat (PhD) is Evaluation Expert and Team

157

Page 158: f01.justanswer.comf01.justanswer.com/nFcTOGCI/External+Review+of+the+OIE+v.20.3.1… · Web viewf01.justanswer.com

Leader at the European Investment Bank (EIB) where over the past two of years he has been in charge of major evaluations in important areas such as Climate Action, SME support and Technical Assistance. He has a longstanding experience in evaluation as well as in foresight. Founder-director of the French subsidiary of the Technopolis Group (1998-2006) he led many evaluations for and provided policy advice to a great variety of local, national and international public bodies. He trained several hundreds of European Commission staff and national government officials in evaluation and designed monitoring and evaluation systems for various public organisations. Before joining the EIB he worked as Evaluator at the Council of Europe Development Bank. He has developed tools and performed programme, policy and regulatory evaluations, both ex ante and ex post, in a variety of fields. He has also made several academic contributions, most recently with articles on evaluation use and on the "Tricky Triangle", on the relationships between evaluator, evaluation commissioner and evaluand. In his private capacity, Bastiaan served as Secretary General of the European Evaluation Society and was recently elected Vice-President.

Acknowledgements The Review exercise could not have been possible without the support and commitment of the OAC, the OIE and the CDB. The exploratory discussions with the OIE staff, members of the Board of Directors as well as with the CDB management and staff provided great insight and were a valuable contribution to the Review.

We are indebted to the Head of OIE, Michael Schroll, and his team for their cooperation, insight, and readiness to provide us with any information requests. We are also grateful to them for their useful comments regarding the first draft of our report, and for their suggested improvements.

We are especially appreciative of OIE’s administrative assist, Denise xxxx, for her help in coordinating the interviews during our 10-day field study in Barbados and in providing us with all documents we requested.

158

Page 159: f01.justanswer.comf01.justanswer.com/nFcTOGCI/External+Review+of+the+OIE+v.20.3.1… · Web viewf01.justanswer.com

AcronymsLIST OF ABBREVIATIONS

AMT Advisory Management TeamAPECAPAR

Audit and Post Evaluation CommitteeApproach PaperAppraisal Report

BOD Board of DirectorsBMCs Borrowing Member CountriesBNTF Basic Needs Trust FundCDB Caribbean Development BankCSPDAC

Country Strategy PaperDevelopment Assistance Committee

DFI Development Financial InstitutionED Economics DepartmentFI Financial InstitutionIRLMDB

Immediate Response LoanMultilateral Development Bank

mn millionM&E Monitoring and EvaluationMfDR Managing for Development ResultsOAC Oversight and Assurance CommitteeOIE Office of Independent EvaluationPAS Performance Assessment SystemPBG Policy-Based GrantPBL Policy-Based Loan PCR Project Completion Report PCVR Project Completion Validation ReportPPES Project Performance Evaluation SystemPPMS Portfolio Performance Management SystemSDF Special Development FundTA Technical Assistance WB World Bank

159

Bastiaan de Laat, 19/03/16,
The normal symbol is just “m”MLL but will leave it like this as this is CDB practice
Page 160: f01.justanswer.comf01.justanswer.com/nFcTOGCI/External+Review+of+the+OIE+v.20.3.1… · Web viewf01.justanswer.com

Preface Evaluation work at the Caribbean Development Bank (CDB) has been ongoing since the early 1990s, although initially it was mainly focused on the ex-post evaluation of projects. However, in 2011, the CDB reviewed its evaluation system to bring it up to date with the good practices of international development organisations. In December that year, it produced its comprehensive Evaluation Policy (December 2011) setting out the aim and objectives and guiding principles for CDB’s evaluation system.

The Policy provides for the establishment of the Office of Independent Evaluation (OIE). Its main objective is to provide “CDB’s Board of Directors, President, Advisory Management Team, CDB staff and other stakeholders and partners with timely, credible and evidence-based information on the relevance and performance of CDB’s projects, programs, policies and other development activities.” (Evaluation Policy, 2011, p. 1).

To oversee and assess good practice, the Evaluation Cooperation Group (ECG) for Multilateral Development Banks (MDBs) recommends that the MDBs’ evaluation system and independent evaluation units be the subject of a review on a regular basis. The aim here is to help the institutions adopt recognised evaluation standards and practices so that its policies may benefit from evidence-based assessments.

In mid-2014, a new Head of the OIE was appointed and, following an initial learning period, he called for a peer review of the evaluation system. Even though the OIE had only been in existence since 2012, it was considered timely to take stock of what had been done so far in order to tease out the priorities for the next 3-4 years.

It was originally anticipated that such an assessment could be done by the ECG as part of the OIE’s application for ECG membership. This did not prove possible, since the CDB’s operation is considered too small for such membership. A review was therefore commissioned to independent experts in evaluation who are knowledgeable and experienced in the management of internal evaluation units.

Main AimThe Review’s main aim is to provide the CDB’s Board of Directors with an independent assessment of the OIE and CDB’s evaluation system. The intention is to highlight the factors that help or hinder the OIE’s independence and performance in order to identify where improvements could be made. This report will be presented together with a Management Response to the CDB’s Oversight Assurance Committee and its Board of Directors at its meeting in May 2016. It is anticipated that an action plan will be drawn up on the basis of the Board’s decision on how to address the recommendations put forward.

Report StructureThe Review starts with some general background information about the CDB and the setting up of an independent evaluation function. It also sets out the reasons for an external review and why this was requested at this particular point in time. Part One also outlines the Review, which, is presented in more detail in Appendix II. Part Two provides the Review’s findings and conclusions according to each of the criteria presented. The Panels general conclusions and recommendations are the subject of Part Three.

The Panel is grateful for the complete freedom it was given to form its own opinions and to reach conclusions based on its analysis. The findings, conclusions and recommendations

presented in this paper are those of the Peer Review Panel members. The views of the CDB, are provided separately in the Management Response that accompanies this Report.

160

DE LAAT Bastiaan, 19/03/16,
Michael, Was this the formal reason?
Page 161: f01.justanswer.comf01.justanswer.com/nFcTOGCI/External+Review+of+the+OIE+v.20.3.1… · Web viewf01.justanswer.com

Executive Summary

161

Page 162: f01.justanswer.comf01.justanswer.com/nFcTOGCI/External+Review+of+the+OIE+v.20.3.1… · Web viewf01.justanswer.com

Part One: Introduction and BackgroundIn order to understand the development of the Office of Independent Evaluation’s (OIE) work, a brief description of the CDB’s current reforms is needed. First, there has been a change over the last decade from the nature of the programmes the bank supports; for example it has become increasingly engaged in funding policy-based operations and social development issues. Similarly, there have been changes in the whole of the development field, which is grappling to deal with complex issues such as gender or climate change. To meet today’s challenges and ensure that its work practices reflect the international standards of Multilateral Development Banks (MDBs), the CDB has introduced a number of measures aimed at improving its effectiveness and efficiency. For example, in line with international standards for Management for Development Results (MfDR), it has introduced a Results Based Management Framework for organising and assessing its performance.

In 2011, the CDB commissioned an external consultancy to undertake an assessment of its evaluation function in order to develop a policy that took account of good practices within the international development community.91 The CDB’s Evaluation Policy (referred to hereafter as the Policy) is a direct response to that review; it reflects the standards and good practices of the Evaluation Cooperation Group (ECG) of the Multilateral Development Banks (MDB) as well as the evaluation principles and standards of many professional associations. Similarly, the Bank showed its commitment to having evaluation as a core function by establishing an independent evaluation unit that reports to the Board of Directors.It is responsible for assessing the Bank’s activities and interventions, but especially for drawing out the key lessons and recommendations for improving the Bank’s performance.

As such, the monitoring tasks formally under the responsibility of the Evaluation and Oversight Division (EOV) were handed over to the Bank’s Operations and Economic Divisions. The OIE, in its advisory capacity, is expected to provide the sareaoperations area with the necessary support manuals, tools and guidance; the OIE then validates the credibility and rigour of the self-evaluations.

In addition to the OIE, the Bank also set up other independent functions: internal audit, risk assessment and management, integrity, compliance and accountability. The mainstreaming of three cross-cutting themes (gender, energy and climate issues) into CDB’s work has also been initiated. At the same time, there are limited funds available as the CDB is working within a Board-sanctioned policy based on the principle of a zero real growth, which is in line with the budget policy of other MDBs.

In short, the bank has taken many important steps towards updating CDB’s management practices in line with other MDBs. However, the introduction of many innovations in parallel requires coordination and a shift in working practices and thinking. There is also the need to engage in different types of evaluation; evaluations that take into account cross-cutting themes and different levels of complexity. As such, whilst this review is particularly focused on the CDB’s Office of Independent Evaluation (OIE), its work and utility depend to a large degree on the development of other management practices and the degree to which evaluation is able to link to their work.

A full description of the Review’s mandate, approach, process and methods are provided in Appendices I and II. It was designed to address the following four key questions as set out in the appended Terms of Reference and Approach Paper:

91 Osvaldo Feinstein & Patrick G. Grasso, Consultants, May 2011 Consultancy to Review the Independence of the Evaluation and Oversight Division of the Caribbean Development Bank

162

marlene laeubli loud, 19/03/16,
To Michael, is this generally correct? From interviews I understood that the bank has moved its funding more towards these fields as well as infrastructural support – whereas originally most of its engagement in the BMCs was for infrastructural support (and poverty reduction etc, but to a lesser degree than now).
Page 163: f01.justanswer.comf01.justanswer.com/nFcTOGCI/External+Review+of+the+OIE+v.20.3.1… · Web viewf01.justanswer.com

To what degree is the Office of Independent Evaluation independent at the strategic, functional and operational levels? Which measures help or hinder such independence?

To what extent is the OIE achieving its 2 strategic objectives? (which are (1) the timely delivery of good quality evaluations and PCR Reviews and (2) strengthening capacity building, networking and communication) How useful are the OIE’s procedures and products towards this end?

How adequate are the financial and human resources of the OIE for carrying out its tasks and achieving its objectives?

How effective is OIE in relating with its internal partners to develop evaluation capacity?

This Review of the OIE is based on the recommended criteria of the Evaluation Cooperation Group for Multilateral Development Banks; governance and independence, credibility, use and transparency.

The data used for analysing and interpreting the findings relied on exploratory, semi-structured interviews with OIE staff as well as with CDB senior and middle managers and members of its Board of Directors. Whilst much of the interview data was collected during an intensive 10-day on-site visit to the Bank, the majority of the Board members were interviewed via Skype. The interview data were complemented by a review of a range of key documents, including the Bank's Evaluation Policy; various kinds of reports on or about evaluation; the complete set of minutes of meetings between the OIE and the Oversight Assurance Committee92 and the subsequent chairman's reports to the Board for the study period 2012 to 2015; OIE staff biographies; and a number of other organisations' evaluation principles, good practices and standards. A full list can be found in the Appendices (Appendix V). Not least, the Reviewers have also drawn on their own knowledge and experience of evaluation management to complement the data analysis and interpretation.

Scope and Limitations

The Panel was asked to concentrate on the 4-year period since the establishment of the OIE, January 2012 to December 2015, but more particularly on the changes introduced since the new Head of the OIE was appointed (June 2014 to December 2015).

It has mainly focussed on the strategic role of the OIE within the CDB as well as its functional and operational roles and responsibilities.

It was planned as a Review and not a fully-fledged evaluation, owing to the limited time and resources available for the exercise, and because a "light" review is in keeping with the spirit of the OIE's Terms of Reference. The Review could not undertake any in-depth analysis of documents or consult with country-level stakeholders or other external sources of expertise. Moreover, of the 29 people identified for interview, and despite several reminders by email or telephone, the Panel was unable to contact or secure the agreement of 5 of the 14 Board members and 1 CDB senior manager. In light of this experience, as well as the time invested in securing the interviews "at a distance", the planned online survey to follow up on the face-to-face interview data was abandoned.

We regret that in the time available, full justice could not be done to all the material provided to the Panel by the OIE. Nevertheless, the documentary review and interviews focussed on addressing the key questions, and we are therefore confident that the main issues raised in the Terms of Reference have been addressed in this report.

92 The Audit and Post Evaluation Committee, now the Oversight Assurance Committee, is a Board Committee responsible for the oversight of evaluation and other key management functions.


Part Two: What the Review Found

In the first place, the Panel should like to commend the CDB for its efforts to establish an independent evaluation function. Similarly, in spite of some of the challenges raised in this Review, the current Head of the OIE and his team are to be commended for their efforts in advancing evaluation in the direction of the UNEG Norms and Standards and the ECG guidelines on good practices. The Panel presents its findings and analysis in a spirit of constructive criticism, highlighting the strengths of the current situation as well as several challenges that need to be addressed in order to bring out the full value of evaluation to the CDB.

In this part of the report, we present our findings and conclusions relative to the following criteria used to assess and respond to the four TOR questions:

- the Evaluation Policy
- governance
- independence
- the OIE strategy, practices and work programme
- usefulness of evaluation, evaluation use
- communicating evaluation results (transparency)
- adequacy of resources
- and finally, the working relationship between self and independent evaluation

The Evaluation Policy

The CDB Board agreed an Evaluation Policy (the Policy) in December 2011. It sets out the guiding principles and provisions for the OIE. It also aims to guarantee the independent functioning of the Office of Independent Evaluation (OIE) by having it report to the Board of Directors through the Oversight Assurance Committee (OAC). However, the President retains oversight of administrative matters for the management of day-to-day activities, such as travel approval.

Generally speaking, the Policy reflects many of the ECG's recommendations on evaluation independence and good practices. Similarly, the evaluation criteria for judging outcomes are the five developed by the DAC: relevance, effectiveness, efficiency, impact and sustainability. In general, the Policy is intended to maximize the strategic value, timeliness and learning aspect of evaluation.

Yet in reality, the Policy provides a framework for what could be achieved under optimal conditions. It is over-ambitious in terms of what can be done with the current level of resources. For example, undertaking the validation of all Project and Country Completion Reports, as well as engaging in the full range of evaluation types undertaken within the MDBs, has proven simply not feasible at this stage. (More on this later in the report.)

Many important tasks outlined in the Evaluation Policy have yet to be undertaken by either the OAC or the OIE. For instance, the OAC has yet to produce an annual report on OIE's performance, and the OIE has yet to establish a database of evaluation lessons, recommendations, actions and management responses.

To conclude: The Evaluation Policy is a mission statement of what could be achieved in time with sufficient financial and human resources. It reflects internationally recognized evaluation principles and standards, but is probably somewhat too ambitious for the OIE to put fully into practice for a number of years.


Governance Issues

Oversight of the OIE is entrusted to a committee of the Board of Directors, originally called the Audit and Post-Evaluation Committee (APEC) and now the Oversight Assurance Committee (OAC). The OIE reports to the Board through the OAC. The OAC has five members, of whom only two are located in Barbados.

The OAC meets five times per year, the day before Board meetings. It has oversight responsibility for external and internal audit, independent evaluation, risk management and integrity, compliance and accountability in relation to CDB's work.

The OAC Chairperson prepares a very brief résumé of the previous day's meeting to present to the Board for approval. The report generally covers progress, shortcomings and risks, but it occupies only a small part of the Board meeting, so there is generally little discussion; evaluation is only one of many items on the agenda. (We were told that the report to the Board averages approximately 10 minutes.) Some of our interviewees could not recall any discussion about evaluation during Board meetings, or remember reference being made to any evaluation report.

Given the breadth of its oversight responsibilities, there is now provision for the OAC to hire in consultants to provide it with technical expertise as the need arises; resources would have to be provided out of the CDB's administrative budget. Another novelty is the provision to meet with the Head of the OIE in an executive session at least once per year. So far, our understanding is that neither of these opportunities has been taken up by the OAC.

The major problem for the OAC is the volume of paperwork and the length of individual documents received in parallel from the CDB and its independent offices, generally very shortly ahead of its meetings. Both Board and OAC members expressed their deep concern about the need for more timely delivery of reports and background papers; the OAC members fear they are unable to do justice to their oversight responsibilities. Based on the Panel's review of the minutes and on comments from the OIE, the meetings appear to be rather formalistic, with little in-depth discussion or systematic follow-up on the recommendations, agreed actions or the lessons drawn; the "follow-up on actions agreed" does not appear as a systematic item on each OAC meeting's agenda.93 Similarly, no attempt to identify key messages for stakeholders other than the CDB is mentioned in the minutes or reports to the Board.

In response, the OIE has greatly improved the presentation of technical reports by summarising the main points in its "Brief Reports" (e.g. for the Tax Administration and Tax Reform and the Technical and Vocational Education and Training evaluations). This is commendable and certainly a step in the right direction, although the Panel considers that the briefs should have a sharper focus on the strategic issues (which come at the end of the brief rather than the beginning), be condensed, and be made more "reader friendly".

The Panel was also surprised to find that, despite expressions of support for rigorous evaluation and its importance to the CDB, the OAC does not appear to be taking any firm position with regard to the paucity of available data. The OAC has been made aware of the data problems in the BMCs (e.g. the lack of rigorous monitoring and statistical data and the consequent effect on the rigour of OIE's evaluations), as well as of the delays in the submission of self-evaluations and their validations; yet there appears to be no OAC attempt to deal with such problems, e.g. by exerting pressure on the CDB or on the BMCs through their representatives on the Board.

To conclude: The OAC has an expressed interest in advancing the role of evaluation as a strategic tool for CDB management. However, it is not performing its oversight function with sufficient firmness to bring about any change to the problems raised through evaluations, especially with regard to data issues and reporting delays. More generally, there is a lack of any systematic report on the "follow-up of actions agreed", which could be particularly useful for tracking changes made as a consequence of an evaluation and management's response.

93 At the APEC meeting in May 2012, it was agreed that the OIE would prepare a Management Action Record every two years to highlight the follow-up actions taken on the recommendations of all evaluation reports, with the first report to be presented to APEC at the March 2013 Board Meeting. There is no record of this having been done, or of the APEC/OAC following up on this request.


The OAC could do better justice to its oversight responsibility if it were to receive all background documents systematically at least two weeks before its meetings. Moreover, the volume and length of documents received at any one time are considered overwhelming, and the number and/or importance of agenda items competing for attention at any one session is an additional handicap.


Work Practices

The OIE has had to develop a plan to implement the Evaluation Policy. This raises such questions as: what are the priorities, and what is the timeframe for achieving which activities? These were partially addressed in the OIE work programme and budget 2012 to 2014, but it proved to be over-ambitious. The OIE has also chosen to increase the involvement of its professional staff in conducting independent evaluations. Outsourcing is still needed when the study is funded by the SDF, when time is limited and when specific expertise is needed.

But the plans appear to place little emphasis on the activities associated with evaluation management (e.g. knowledge management) and the time these require. Other time demands mentioned in the previous sections, such as delays in completing reports and validation work, have also affected OIE's plans. The more recent work plans have set the task of delivering utility-focused and timely evaluations, but they lack clarity on how the OIE proposes to surmount the time and data issues, which are far from new; in short, they lack a theory of change and a timeline. The challenges that have to be dealt with to enable the OIE to move up the MDB evaluation pyramid94 are brought out in the remaining sections of this Review, not least given the limited resources available.

Its strategy also lacks a prioritisation of tasks, which should include more emphasis on evaluation management activities.

94 US Treasury Report to Congress on Evaluation Standards and Practices at the Multilateral Development Banks (September 2014, Annex C).


Draft evaluation reports are prepared before the final version is completed; however, drafts are only submitted to CDB line and senior managers, and only final versions are given over to the OAC. A series of discussions is held first with the CDB and then with the OAC, following the recommendations of professional good practices and standards on participative approaches. There is, however, no "accompanying group" for individual studies, which would include both internal and possibly external partners; such advisory groups have shown their worth in a number of contexts for improving buy-in and for providing strategic input. During this transitional phase, the OIE has also produced a Manual to guide and support the independent evaluation process. As with many other MDBs, evaluation activities include both independent and self-evaluations; the latter are the completion reports on operational projects and country strategy programmes and are done by the operations staff. The OIE then validates the quality of such reports. The self-evaluations should inform the more strategic studies conducted independently by the OIE. (More on the relationship between these two is provided later in this Review.)

An independent evaluation is processed as follows: the OIE prepares an Approach Paper (AP) for approval by the OAC. If the study is to be outsourced, the AP becomes the basis for the Terms of Reference (ToR) which, subject to the size of the budget, may be put out to tender. The contracted evaluator then prepares an Inception Report (IR) after some desk and field research has taken place; this intermediary report is not produced if the OIE itself is conducting the evaluation. Sometimes a Progress Report is submitted, but otherwise the next stage is the delivery of the final report in various drafts. (Assessments are like evaluations but more limited in scope and depth of analysis.)

Table 4: List of studies (N = 24) submitted to the Board for the period January 2012 to 31 December 2015

It is still considered good practice to have the objectives elaborated in the initial design documents (objectives-oriented evaluation95), although alternative approaches such as Developmental Evaluation (Patton, 2010)96 also exist.

However, in this period of transition, much of the OIE's work since 2012 has been dealing with the backlog of the CDB self-evaluation validations. In theory, there are an estimated 15 completion reports due each year; in practice, delays in submitting the reports for validation are commonplace. Therefore, with the change of Head in June 2014, the OIE secured the OAC's agreement to reduce the number of validations to a maximum of 6 per year. Yet a backlog continues to accumulate, as only 2 PCRs were given to the OIE for validation in 2015.

In the review of draft evaluation reports, the process includes reflective workshops that discuss not only the findings but also seek to draw out the important lessons.

95 The focus of an objectives-oriented evaluation is on specified goals and objectives and on determining the extent to which these have been attained by the relevant intervention. See, for example, Worthen, Sanders & Fitzpatrick (1997), Program Evaluation: Alternative Approaches and Practical Guidelines (2nd ed.), White Plains, NY: Addison Wesley Longman.

96 Patton, M.Q. (2010) Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use, Guilford Press.


(The Panel has already referred above to the OAC's lack of oversight in the use of evaluation.) Moreover, the 2015 budget provides only US$2,000 for communication, none of which is intended for outreach.

Figure 3: The MDB Evaluation Pyramid97

97 US Treasury Report to Congress on Evaluation Standards and Practices at the Multilateral Development Banks (September 2014, Annex C).


A modest attempt has been made in 2015, but the resources currently available to the OIE will limit the scope of such work in the BMCs, which, in turn, will continue to hinder the production of sound evidence for the OIE's evaluations.

Adequacy of human and financial resources to support OIE's work

OIE's Human Resources

The OIE has a professional staff of five, three of whom were recruited from within the CDB. There is an expectation from the Board that the OIE should embark on higher-level evaluations, and on impact evaluations in particular. Moreover, there are many other designated OIE activities that should be recognised as valuable work: the validations, building CDB and BMC evaluation capacity, providing supervision, advice, knowledge management and brokerage, as well as managing evaluation contracts. The time needed for all of these may be underestimated in OIE's budgets; all are important for assuring best value from evaluation. The Panel is concerned that the demand for "doing" evaluations, as well as OIE's interest in advancing its skills in high-level evaluations, may undermine the importance and time needs of these other essential tasks.

Limited and unpredictable resources for independent evaluations

The OIE is funded from the general administrative budget and represents approximately 2.5% of the total. Whilst this is seemingly a higher proportion than in other MDBs, in real terms it is quite limited: 75% of the OIE budget is for staff salaries, leaving US$190,000 in 2015 for external consultants and other expenses.

CDB's donors do not appear to specify a budget for monitoring and evaluation activities. This means, on the one hand, that there is no clear external budgetary recognition of the operations' self-evaluation work or of OIE's time in the validation process; and on the other, that whilst donors expect to receive reports from independent evaluations, this expectation is not made explicit when funds are allocated.

Resources available to the OIE for hiring external consultants have dropped from US$350,000 in the revised 2014 budget to US$120,000 in the 2015 indicative budget. The OIE estimates that for high-level evaluations, the cost of external consultants is between US$90,000 and US$350,000 (the SDF 6 and 7 evaluation cost US$255,000). According to the Panel's experience, this is a sound estimate. With one staff member fewer during 2014-2015, coupled with OIE's focus on dealing with the backlog of self-evaluations amongst other priorities, it was unable to execute some of the evaluations during the annual budget period. Hence, the budget was reduced for the subsequent years, but it has proven insufficient to fund the OIE Work Programme. The OIE has therefore needed to turn to the only alternative source available at present, the SDF fund. But the SDF funding rules apply to specific countries and themes, which obviously restricts the OIE's choice of evaluation subjects and themes. Since the SDF does not allow for OIE recurring costs such as staff travel, the SDF evaluations have to be outsourced. As presented in Figure 1 above, the approval process is inefficient and causes delays. The Panel learned that additional funds, for example for specific studies, could be secured from within the administrative budget during the year, on condition that the request was based on sound arguments.

Whilst the Panel appreciates full well that the Bank is operating within a zero growth framework, the reviewers were surprised to learn that OIE funding is not sufficiently secured in line with its priorities and work plan. The need to seek alternative funding for individual studies does not allow for any flexibility and undermines the OIE's independent judgment of what needs to be done.

To conclude: the OIE is inadequately resourced to meet the expectations outlined in the CDB's Evaluation Policy. The Panel recognises that the CDB itself has budgetary restrictions, but the current arrangements for securing extra funding are complicated and inefficient, and they limit the OIE's ability to exercise autonomy in the selection of its evaluation studies. Moreover, OIE budgets significantly underestimate the time needed for managing evaluations and other evaluation activities.

Self and Independent Evaluation

Self-evaluations cover public sector investment, lending and technical assistance, policy-based loans, and country strategy programmes. There appears to be little incentive to complete self-evaluations in a timelier manner; such reporting is perceived as a threat rather than an opportunity for learning.

According to the Evaluation Policy (p. 15), "The President, with the support of the Advisory Management Team, is accountable for encouraging and providing an environment where evaluation adds value to the overall management of CDB's activities and fosters a culture of critical analysis and learning". In the CDB, however, a learning culture appears to be still in its infancy, and the leadership role as expressed in the Evaluation Policy is underdeveloped. The Panel identified a number of problems, largely to do with delays in exchanging comments on the various reports as well as the paucity or lack of monitoring data. The added value that evaluation might offer to the operations area is ill recognised. Moreover, the link between self-evaluation as the building block for independent evaluation is not apparent. Thus there is little incentive or management focus to drive any change to current practices. In other words, there is a lack of leadership to advance a learning environment in which evaluation can play a major part.


Part Three: General Conclusions and Recommendations

To conclude, with regard to the Evaluation Policy and OIE's independence, our Review finds that over the past few years the CDB has succeeded in establishing an independent evaluation office that is credible and respected. It reports to a Board Committee and is thus organisationally independent from CDB management. Its work is grounded in an Evaluation Policy, agreed by the Board and the CDB, that reflects internationally recognised principles and good practices. The Policy sets out a broad scope of responsibility for the OIE which, however, seems over-ambitious given current resource constraints. The OIE clearly has both an accountability and a learning function; the latter should support the development of an organisational learning culture. (So far, the uptake of recommendations and key lessons has not been systematically monitored or recorded.) In general, on the issue of independence, we conclude that the OIE meets the criteria for organisational and behavioural independence and is protected to a certain degree from external or contextual influences.

However, as the independent Advisory Committee for Development Impact has said, "independent evaluation needs to have clout … credibility of evaluation hinges on public perceptions as well as on reality."98

We therefore highlight a few potential threats, even though there is no evidence to suggest they are in any way real at present; it would be in the OIE's and the CDB's interest to have these clarified sooner rather than later. For instance:

- Any delays incurred in reporting self and independent evaluation results to the Board could be interpreted as operational interference.

- There is no agreed process to deal with any conflict of interest between the OIE and management in reporting results, as it is expected that any disagreements will be reported in the management response.

- Another possible threat is the lack of complete autonomy that the Head of the OIE has over staff recruitment, termination, continuation and professional development. The Policy is not sufficiently clear about who has the final word in the case of disagreement.

And finally, on resources, our Review accepts the limited funds available to the CDB and the fact that the OIE's budget is not independent but operates within the Bank's budgetary limitations. Nevertheless, we feel that more flexible arrangements could be devised that would allow for less restrictive and timelier access to funds.

With regard to governance, our Review has highlighted the difficulties the OAC faces in not receiving the background papers for its meetings in sufficient time to do them justice. Moreover, these documents tend to be very lengthy and not necessarily "reader friendly". The OAC's oversight responsibility is thereby likely to be weakened, and we can already see some indication of this. For instance, requests for systematic follow-up on management actions resulting from evaluation findings have not been answered; neither is there a systematic item for this on the OAC agenda, so such requests can easily be passed over and forgotten. The broadened responsibilities now given to the OAC also mean that there are many competing entities trying to secure the OAC's attention. There is now provision for the OAC to call on consultants for help, which may strengthen the OAC in its oversight responsibilities.

Furthermore, in their capacity as members of the Board, the OAC members should stress the urgency of developing evaluation and monitoring capacity in the BMCs, since this gap is having a direct impact on OIE and CDB evaluations.

With regard to the OIE's performance, we have to respond to the questions raised in this Review's Terms of Reference, which basically means answering two main questions: Is the OIE doing the right thing? And is it doing it in the right way?

98 Picciotto, R. (2008) Evaluation Independence at DFID: An Independent Assessment prepared for the Independent Advisory Committee for Development Impact (IADCI), p. 4.


There is no doubt that the decision to establish a credible, independent evaluation function in the CDB is the "right thing" to do; effective and useful evaluation and oversight activities "can assess development effectiveness, hold the organisation accountable for results, and improve operational performance."99 It is also a policy of the MDBs to have such a function, and the CDB has now aligned itself with international standards and practice.100 The question now, therefore, is the following: is the OIE going about it in the right way?

The OIE has taken the "right" steps to improve the engagement and interest of the OAC and CDB senior management, from selecting the topics for its evaluations through to finalising the conclusions and recommendations in a collaborative spirit. It falls short, however, of taking the messages emerging from the studies to "outsiders", such as those responsible for implementing CDB interventions in the BMCs.

In its oversight role, we feel that the OIE has paid insufficient attention to the actual utilisation of evaluation; it is beyond its responsibility to see that action is taken, but it is certainly within its remit to record how, and how well, the lessons drawn have been taken up and used. With regard to its oversight of the self-evaluations (the validation process), the OIE has attempted to improve dialogue with the operations departments and demonstrate the dual function of oversight and learning. It is now emphasising the learning aspect by providing tools and guidance on how to draw out lessons and integrate them into future planning. More recently, it has sought ways to provide more formalised training on evaluation by working with the corporate planning services and technical assistance department to develop courses that show how, where and when evaluation plays its part within the MfDR framework.

However, one of the challenges in evaluation management is balancing independence with facilitating buy-in and ownership at the same time. It is a fine line to walk, and it depends to a large degree on the climate between management and the head and staff of the independent evaluation unit in defining the tone of the collaboration. In practical terms, for the CDB this means defining the role of the OIE in relation to the self-evaluations performed by the Projects and Economics Departments. The change from the EOV to the OIE made this role change quite clear; the OIE no longer has responsibility for project monitoring or for planning data needs together with the operational departments. On the other hand, to improve understanding and learning, there needs to be an interface between evaluation and management. At present, OIE's dual role, that is, its advisory role in relation to operations and its strategic role towards the OAC and senior management, has not been satisfactorily resolved. The operational staff still do not appear to see any urgency in producing their completion reports, or to appreciate what lessons might be drawn from such reflection. The OIE is doing its best to support "learning" whilst at the same time keeping itself at arm's length. The greatest challenge the OIE faces in its new capacity is the slow development of an organisational learning and evaluation culture.

A Learning and Evaluation Culture

Evaluation utility depends on the engagement of evaluation users – those who should benefit from the knowledge generated through the studies. Useful evaluation therefore depends to a large degree on the development of an evaluation and learning culture, and on how well these are embedded in the organisation. This means that the organisation recognises and appreciates evaluation's role and the functions it can have, particularly in helping it understand what it is achieving and where and how improvements can be made. In short, the added value that evaluation can bring to the organisation is its ability to draw out the important lessons that can help improve the organisation's performance.

However, whilst CDB senior management shows all the signs of embracing evaluation as an important strategic tool, there still appears to be some apprehension about receiving criticism, however constructive this might be.

99 CDB (2011) Evaluation Policy, p. 2.
100


The OAC has already affirmed its interest in learning what can be "put right the next time around". In considering accountability, the committee is asking for a more strategic approach to learning and sharing knowledge based on evidence. The CDB also shares the development goals of other MDBs, that is, "to end extreme poverty and promote shared prosperity". This means looking for new forms of problem-solving and for ways to create a "development solutions culture". Hence there is an interest in learning from experience and exchanging knowledge about what works. This implies balancing accountability and learning; making sure they are not seen as opposites, but as compatible entities. This greater emphasis on learning requires a reframing of CDB's thinking and of its way of dealing with the constructive criticism that evaluation can offer.

By way of comparison, a review of the evaluation function at UNRWA found similar challenges:

Weak evaluation culture

27. While some stakeholders seem keen on evaluation, the overall evaluation culture in UNRWA is weak. There are several aspects to it.

28. First, many of the interviewees stressed that UNRWA has a weak learning culture. This stems from a number of factors. One reason given is the cultural value placed on oral communication, which makes conveying documented experiences challenging. Another reason is language: a majority of UNRWA's national staff is not fluent in English, and evaluation reports are mostly in English. Furthermore, criticism – even if constructive – is, according to some interviewees, mainly perceived as a threat and not as an opportunity. Finally, learning is also affected by a very basic constraint – lack of time.

29. Second, there is a weak knowledge management system for systematically collecting and sharing experience and lessons learned in UNRWA. UNRWA communities of practice do not exist; several interviewees mentioned the use of knowledge networks outside UNRWA, i.e. communities of practice managed by other agencies. Accessing evaluation reports is also not easy: the UNRWA website does not provide access to evaluation reports and, while the Agency's intranet has a site for evaluation reports, it is not a complete repository, and the Evaluation Division does not know exactly how many decentralized evaluations are being produced. In addition, there are only a few evaluation plans at the level of field offices or departments.

30. Third, the Panel found that decentralized evaluations are – at least partly – perceived as donor-driven accountability instruments rather than as learning tools. In that sense, evaluations are managed as bureaucratic requirements, thereby weakening the learning dimension.

31. Finally, the sensitive political context in which UNRWA operates may also discourage a strong evaluation culture, as evaluative evidence can sometimes be overridden by political considerations.14 The Panel was repeatedly told that, given the political context, any change is a challenge.

14 An example mentioned to the Panel was the evaluation of the Qalqilya Hospital (2013), which concluded that the Hospital should be closed. However, for political …


Recommendations

(Here is a list of some possible recommendations – to be discussed and developed within the Review Panel initially, and then proposed for discussion together with the OIE.)

- OAC's oversight responsibility needs to be strengthened.

- (Possibly) Review the Evaluation Policy to redress gaps.

- OIE to develop a five-year strategy providing a step-by-step work programme, budget and timeframe for implementing the Evaluation Policy.

- Stronger leadership from the President to provide a climate conducive to promoting the added value of evaluation to overall management, and to fostering a culture of critical analysis and learning.

- Stronger support from the Advisory Management Team for the evaluation function, by emphasising the accountability of CDB managers for performance results and by devising incentive schemes to support the accountability function.

- Set up committees to accompany individual evaluation studies as a means of reinforcing ownership (advisory groups).

- OIE could contribute to developing a learning culture within the CDB by adopting the role of critical friend in its dealings with the CDB operations departments and the CDB more generally. Valuable insight can be gained from a focused conversation with the client about aspects of the evaluation that went well and those that did not. Some meetings might include all relevant stakeholders in addition to the main client. This feedback can then inform a subsequent internal project team debrief (e.g. identifying internal professional development or process improvement needs), and input from all perspectives can inform planning for future projects. A summary of these discussions can then be circulated so that everyone can use it for their own internal or external quality improvement purposes.

Rather than asking "what went wrong?", the conversation can address "what surprised us, what we'd do differently, what did we expect to happen that didn't, and what did we not expect to happen that did". This is a better means of getting at the negative aspects without placing blame.

- OIE to train up and engage "champions" within CDB operations departments to help demonstrate evaluation utility and provide "on the job" training in self-evaluation to colleagues.

- A quality control group could also be set up, as Picciotto suggested for DFID, to carry out and develop the work of Quality Assessment and, particularly, Quality at Entry monitoring.

- OIE should be given the resources to build M&E capacity in the BMCs as a priority, possibly in partnership with other MDBs, development organisations and the countries themselves.

- Possibly criticise the over-emphasis on using the five DAC criteria, which have been in use for more than 15 years without going through any major revisions. Given their importance and level of influence in the field, it is pertinent that independent professionals take a critical look at them, especially since many scholars and practitioners consider that the quality of evaluations in development aid has been quite disappointing. (Chianca, T. (2008) "The OECD/DAC Criteria for International Development Evaluations: An Assessment and Ideas for Improvement", Journal of MultiDisciplinary Evaluation, 5(9), March 2008, http://evaluation.wmich.edu/jmde/)

- Regarding the validation of self-evaluations: it is recommended that a leaner format be developed for the PCVRs, repeating less of the original PCR text and focusing on the validity of the PCR against a limited number of key issues to be assessed for each PCR. The PCVRs are also written out in extensive prose, which may not be needed for this type of document; a more tabular format with more succinct statements could lead to a leaner production process without any loss of usefulness. The "PCR checklist" would be a good starting point for this.

- The link between self-evaluations, validations and independent evaluation is not clear at present, nor is the link between self-evaluations and the QaE documents – so one wonders what all the effort on the operations side is for. This is a real issue: operations seem to do a lot of interesting and reasonable work, but there is a lack of coherence. (This observation is based on the documents alone, without interviews to provide a broader picture.)

- This is something the EIB evaluation unit was criticised for in the past too. Since then, it has started to include "younger" projects (sometimes still ongoing) in its samples, and it redoes the portfolio analysis just before finalising a report to see whether things have changed; the services can, in their response, indicate whether things have indeed changed over time.

Recommendations for improving the process for study approval and funding

Give recommendations on priorities for OIE work

- Funding should come preferably from the administrative budget. Unused monies could then be released in the annual budgetary reviews, but this should have no effect on the budget for subsequent years. SDF funding at a level … The Panel is surprised to find that a Board-approved OIE work programme and budget is inadequate; either the proposed budget per work programme …


The Panel, however, encourages the creation of such a quality control unit, whose role cannot be fulfilled by the OIE as it lies outside the OIE's scope and present capacity – even though the OIE could have an advisory/methodological role.

Independence of the Office of Independent Evaluation (OIE)

Independence is absolutely central to the integrity and trustworthiness of evaluation. It is an agreed requirement within the development agencies and in the evaluation community as a whole. In examining the issue of independence and good practice, the reviewers are guided by the Evaluation Cooperation Group's recommendations on good practices, the CDB's Evaluation Policy, and the 2011 consultancy review of independence relative to the CDB's evaluation and oversight division.101 The appraisal is based on a comparison of the ECG's recommendations on independence102 with the current OIE status.

OIE and Independence: Recommendations from the Evaluation Cooperation Group (ECG)

The ECG considers the issue of independence according to four specific areas: organisational (structural) independence; behavioural (functional) independence; protection from outside interference (operational independence); and safeguards against conflicts of interest.

Organizational independence ensures that the evaluation unit and its staff are protected against any influence or control by senior or line management, and have unrestricted access to all documents and information sources needed for conducting their evaluations; it also ensures that the scope of evaluations selected can cover all relevant aspects of the institution.

Behavioural independence generally refers to the evaluation unit's autonomy in setting its work programme, in selecting and conducting its studies, and in producing quality reports that can be delivered without management interference.

Protection from outside interference refers to the extent to which the evaluation function is autonomous in setting its priorities, conducting its studies and processes, and reaching its judgments, and in managing its human and budget resources without management interference.

Conflict of interest safeguards refer to protection against staff conflicts of interest (current, immediate, future or prior professional and personal relationships and considerations, or financial interests), for which there should be provision in the institution's human resource policies.

The OIE’s Independence in Practice

Organisational / Structural Independence

On the whole, the Panel acknowledges and commends the efforts being made by the CDB to assure OIE's organisational independence. The CDB's Evaluation Policy provides for the OIE's organisational independence from line management, and the interview data suggest that there is also wide acceptance and acknowledgement of why the OIE should have such independent status. Table 1 below provides our overall assessment of this aspect of OIE's independence when compared with the ECG recommendations.103

101 Osvaldo Feinstein & Patrick G. Grasso, Consultants (May 2011), Consultancy to Review the Independence of the Evaluation and Oversight Division of the Caribbean Development Bank.

102 ECG (2014), Evaluation Good Practice Standards, Template for Assessing the Independence of Evaluation Organizations, Annex II.1.

103 Based on ECG (2014), Evaluation Good Practice Standards, Template for Assessing the Independence of Evaluation Organizations, Annex II.1.


Table 1: OIE organisational independence compared with ECG recommendations

Aspect: The structure and role of the evaluation unit
Indicator: Whether the evaluation unit has a mandate statement that makes clear its scope of responsibility extends to all operations of the organization, and that its reporting line, staff, budget and functions are organizationally independent from the organization's operational, policy and strategy departments and related decision-making.
Assessment: Partially complies – The Policy is broad enough to cover the full range of MDB types of evaluation. In practice, however, this would not be possible without additional human and budget resources.

Aspect: The unit is accountable to, and reports evaluation results to, the head or deputy head of the organization or its governing Board
Indicator: Whether there is a direct reporting relationship between the unit and (a) the Management, and/or (b) the Board, or (c) the relevant Board Committee of the institution.
Assessment: Complies – The OIE reports to the Board of Directors (BoD) through its Oversight Assurance Committee (OAC).

Aspect: The unit is located organizationally outside the staff or line management function of the program, activity or entity being evaluated
Indicator: The unit's position in the organization relative to the program, activity or entity being evaluated.
Assessment: Complies – The OIE is located outside, and is therefore independent of, CDB line management.

Aspect: The unit reports regularly to the larger organization's audit committee or other oversight body
Indicator: Reporting relationship and frequency of reporting to the oversight body.
Assessment: Complies – The OIE reports five times per year to the OAC. Board approval for an additional executive meeting between the Head of the OIE and the OAC at least once per year was given in October 2015.

Aspect: The unit is sufficiently removed from political pressures to be able to report findings without fear of repercussions
Indicator: Extent to which the evaluation unit and its staff are not accountable to political authorities, and are insulated from participation in political activities.
Assessment: Complies.

Aspect: Unit staffers are protected by a personnel system in which compensation, training, tenure and advancement are based on merit
Indicator: Extent to which a merit system covering compensation, training, tenure and advancement is in place and enforced.
Assessment: Partially complies – OIE staff are covered by CDB human resource policy. However, the skill needs of OIE staff ought to be regularly reviewed in light of its move towards higher-level evaluations. Appraisal of skill needs and hiring of relevant staff should be completely under the authority of the Head of Evaluation; this is not sufficiently clear in the Policy or other documents we reviewed.

Aspect: Unit has access to all needed information and information sources
Indicator: Extent to which the evaluation unit has access to the organization's (a) staff, records, and project sites; (b) co-financiers and other partners, clients; and (c) programs, activities, or entities it funds or sponsors.
Assessment: Complies – The available evidence suggests that there is no reason to doubt such access. However, systematic and easily accessible documentation is lacking in the CDB; it is one of its weak points. Delays in getting hold of the relevant documents can have consequences for the timeliness of evaluation studies.

However, independence should not mean isolation. There appears to be a detachment between the OIE and the CDB that is of concern to the Panel: on the one hand, between the OIE and operations staff; on the other, in the structural arrangements between the OIE and senior management.

10) In agreeing that the OIE should concentrate on strategic and thematic in-depth evaluations, responsibility for project monitoring and evaluation was handed over to operations. The division is clear and respected. However, it has its drawbacks: with the OIE no longer systematically involved at the front end of project design, monitoring data needs are likely to be poorly defined, and weak monitoring data will contribute to weaker evaluations. (More on this point under the heading self and independent evaluations.)

In the reviewers' opinion, it is a common misunderstanding to assume that providing evaluator advice on monitoring and evaluation data will compromise evaluator independence. On the contrary, evaluation input into project design is essential to ensure that the logic, indicators and data needs are addressed so that, at some future point in time, an evaluation of the achievements can be empirically grounded.

This is not to say that the OIE no longer has any influence at the front-end design stage; it has merely shifted its point of focus. The OIE now systematically provides such input more generally to the corporate planning teams for the tools and systems they are developing to support the MfDR framework. The monitoring data for projects and their implementation should improve once the Project Performance Evaluation System (PPES) and the Portfolio Performance Management System (PPMS) are updated and operational.

11) In the second place, the OIE has limited formal access to the Advisory Management Team (AMT) weekly meetings, where the President and senior management gather to exchange up-to-date information on the dynamics of CDB policy and practice. The OIE is not regularly invited in any capacity to these meetings, nor given a copy of the agenda or minutes; it is only occasionally invited to attend in order to discuss an evaluation report or management feedback. For the OIE, this means that it is unlikely to pick up on the 'when' and 'what' of key decisional issues, or to provide input into the discussion based on evaluative information. Its observer status at Loans Committee meetings, or its role as a participant informer at OAC and BoD meetings and discussions, does not necessarily provide the same insight into the dynamics of management actions and/or decisions.

To respond to this situation, the President has agreed to meet regularly with the Head of the OIE in order to keep him up to date with CDB strategic thinking. This is a welcome change.

OIE Independence and Behavioural Issues

The Panel has concerns about some behavioural issues. For example, through both the interviews and the documentary review, we learned of considerable delays in processing both the independent evaluation reports and OIE's validations of the CDB's self-evaluations. Delays generally arise in receiving feedback on the independent reports, first from the relevant operational department and then from the AMT, and subsequently in providing the OIE with a management response that is initially drafted by operations staff before being reviewed by the AMT. (OIE reports cannot be submitted to the OAC without the relevant management response.) This two-layer process for preparing submissions to the Board is inefficient and could become a threat to evaluation's independence in the future by delaying OIE's timely reporting to the OAC.

OIE validations of the CDB self-evaluations are also submitted to the OAC, but it is in both sides’ interest to clear up any misunderstandings beforehand. Despite attempts to improve the timeframe for completing these validations, delays are more the norm than the exception. Table 2 below summarises our assessment of the behavioural aspects of independence.

Table 2: OIE and Behavioural Independence

Aspect: Ability and willingness to issue strong, high quality, and uncompromising reports
Indicators: Extent to which the evaluation unit: a) has issued high quality reports that invite public scrutiny (within appropriate safeguards to protect confidential or proprietary information and to mitigate institutional risk) of the lessons from the organization's programs and activities; b) proposes standards for performance that are in advance of those in current use by the organization; and c) critiques the outcomes of the organization's programs, activities and entities.
CDB Evaluation Policy (EP) and Practice: Partially complies – the paucity of data and documentation sometimes hinders the quality of reports. The OIE emphasizes the learning part of evaluation and is cautious in its criticism, recognising that management is going through a transitional stage and can still be overly defensive.

Aspect: Ability to report candidly
Indicators: Extent to which the organization's mandate provides that the evaluation unit transmits its reports to the Management/Board after review and comment by relevant corporate units but without management-imposed restrictions on their scope and comments.
CDB Evaluation Policy (EP) and Practice: Partially complies – reporting to the Board is sometimes compromised by delays in the review/comment process between the OIE and the CDB. Any delay in the production of a Management Response also impairs timely submission of a report to the Board, since the two have to be submitted together.

Aspect: Transparency in the reporting of evaluation findings
Indicators: Extent to which the organization's disclosure rules permit the evaluation unit to report significant findings to concerned stakeholders, both internal and external (within appropriate safeguards to protect confidential or proprietary information and to mitigate institutional risk). Who determines the evaluation unit's disclosure policy and procedures: Board, relevant committee, or management.
CDB Evaluation Policy (EP) and Practice: Partially complies – the OIE conforms to the CDB's disclosure policy. However, the dissemination of evaluation findings currently appears to be restricted to website publication and reports to the Board. A more targeted communication strategy including other key stakeholders, e.g. project implementers in the BMCs, should be developed and put in place.

Aspect: Self-selection of items for work program
Indicators: Procedures for selection of work program items are chosen, through systematic or purposive means, by the evaluation organization; consultation on work program with Management and Board.
CDB Evaluation Policy (EP) and Practice: Complies – the OIE ensures that its work program is drawn up after consultation with both CDB Management and the Board to seek their input on relevant topics and themes.

Aspect: Protection of administrative budget, and other budget sources, for the evaluation function
Indicators: Line item of administrative budget for evaluation determined in accordance with a clear policy parameter, and preserved at an indicated level or proportion; access to additional sources of funding with only formal review of content of submissions.
CDB Evaluation Policy (EP) and Practice: Partially complies – the administrative budget for supporting OIE work is protected. Access to additional sources of funding is possible if well argued and justified, but the approval process is complex and inefficient. (See Figure 1 below.)

OIE and Protection from External Influence or Interference

Our overall assessment is provided in Table 3 below. The OIE's independence in the design, conduct and content of its evaluations does not appear to be subject to any external interference. But securing funding from sources outside the OIE's administrative budget, i.e. from the Social Development Fund, is an unduly complex and lengthy process. As such, we consider that the current funding process can affect the OIE's choice with regard to the type of evaluations it can undertake. (See Figures 1 and 2 below.)

Table 3: OIE and its Independence from External Influence or Interference

Aspect: Proper design and execution of an evaluation
Indicators: Extent to which the evaluation unit is able to determine the design, scope, timing and conduct of evaluations without Management interference.
CDB Evaluation Policy (EP) and Practice: Complies – however, within the limits of the restricted human and financial resources available.

Aspect: Evaluation study funding
Indicators: Extent to which the evaluation unit is unimpeded by restrictions on funds or other resources that would adversely affect its ability to carry out its responsibilities.
CDB Evaluation Policy (EP) and Practice: Partially complies – the OIE must work within the limits of the agreed administrative budget wherever possible. If additional resources are needed for studies, it must seek alternative funds elsewhere. The budget limitations can have an effect on the type of evaluations undertaken and therefore on its independence in terms of choice.

Aspect: Judgments made by the evaluators
Indicators: Extent to which the evaluator's judgment as to the appropriate content of a report is not subject to overruling or influence by an external authority.
CDB Evaluation Policy (EP) and Practice: Complies – the evidence available suggests that the Board and Management accept the evaluators' independent interpretation and conclusions. Management responses are the accepted place to raise any difference of opinion.

Aspect: Evaluation unit head hiring/firing, term of office, performance review and compensation
Indicators: Mandate or equivalent document specifies procedures for the a) hiring, firing, b) term of office, c) performance review, and d) compensation of the evaluation unit head that ensure independence from operational management.
CDB Evaluation Policy (EP) and Practice: Complies – the Head of OIE is appointed by the CDB President in agreement with the OAC for a 5-year period, renewable once. The Head can be removed from office by the President or the Board, but only with the agreement of both parties. However, the Head reports to the President for all administrative and personnel matters. Even though this was not recommended in the Osvaldo Feinstein & Patrick G. Grasso report on independence in 2011, the BoD accepted the CDB's reasons for keeping this arrangement (e.g. most OAC members are non-resident and cannot oversee day-to-day work).

Indicators: Extent to which the evaluation unit has control over: a) staff hiring, b) promotion, pay increases, and c) firing, within a merit system.
CDB Evaluation Policy (EP) and Practice: Partially complies – all OIE staff members are treated in the same way as other CDB staff. The Head has limited control over the hiring, firing or promotion of OIE staff.

Aspect: Continued staff employment
Indicators: Extent to which the evaluator's continued employment is based only on reasons related to job performance, competency or the need for evaluator services.
CDB Evaluation Policy (EP) and Practice: Partially complies – whilst the EP is clear about procedures for hiring, firing and promotion, all of which must conform with CDB human resource policy, it says nothing about how a difference of opinion between the CDB and the Head of the OIE over continued staff employment would be resolved, for example where the level of technical or interpersonal competencies needed to meet new demands has changed.

Aspect: Avoidance of financial, personal or professional conflicts of interest
This aspect refers to the organisation's Human Resources Policy; there must be provisions in place to protect against actual or potential conflicts of interest. The Panel requested, via the OIE, evidence from human resources on any such provisions but did not receive an answer. It must be assumed that this aspect of independence, past or present, does indeed form part of normal CDB Human Resource Policies.

To conclude: The Panel is impressed with the measures the CDB has taken to assure the organisational independence of the OIE. Its independent status is accepted and respected by senior and line management. The OIE's budget is not independent of the overall CDB administrative budget, and this affects its choice of evaluation types and approaches. Some of the behavioural issues affecting independence are also of concern, especially the delays in the exchange of documents between the OIE and operations departments, which have a direct effect on timely reporting to the OAC. As for protection from outside interference, our concerns are largely to do with the OIE's independence over staffing issues; there are potential loopholes in the current arrangements that could undermine the OIE's autonomy over its staff.

OIE's Strategy, Work Practices and Work Programme

The OIE has had to develop a plan to implement the Evaluation Policy. This raises questions such as: what are the priorities, and what is the timeframe for achieving which activities? These were partially addressed in the OIE work programme and budget 2012 to 2014, but it proved to be over-ambitious. Much of the period 2012 to 2015 has therefore been taken up with preparing OIE's shift in focus from project-based evaluations to high-level thematic and in-depth strategic studies. This has meant adopting a three-way approach: (1) for self-evaluations, reducing its time input to supporting the process; (2) for independent evaluations, taking stock of the gaps in coverage and expertise; and (3) networking to share experiences with centres of expertise and to align the OIE with international practices. In addition, amongst other duties, it has been supporting the development of MfDR tools and systems, such as the Project Performance Assessment System, by providing advice and input on programme logic and monitoring needs. The OIE plans to conduct 2-4 high-level studies per year from 2016. The OIE has also chosen to increase the involvement of its professional staff in conducting independent evaluations. Outsourcing is still needed: when the study is funded by the SDF, when time is limited, and when specific expertise is needed.

But the plans appear to place little emphasis on the activities associated with evaluation management (e.g. knowledge management) and the time they require. Other time demands mentioned in the previous sections, such as delays in completing reports, validation work, etc., have also affected OIE's plans. The more recent work plans have set the task of delivering utility-focused and timely evaluations, but they lack clarity on how the OIE proposes to surmount the time and data issues, which are far from new. In short, the strategy lacks a theory of change and a timeline. The challenges that have to be dealt with to enable the OIE to move up the MDB evaluation pyramid[104] are brought out in the remaining sections of this Review, not least given the limited resources available.

To conclude: The OIE has made a first step in proposing a strategy for establishing itself as an independent evaluation resource. But its strategy lacks a theory of change and a prioritisation of tasks, which should include more emphasis on evaluation management activities.

The Value / Usefulness of OIE's Independent Evaluations

Evaluation is a powerful tool that can provide useful, evidence-based information to help inform and influence policy and practice. But useful evaluations depend not only on the evaluators' skills but on several other important factors as well: (1) planning evaluations to be relevant to the priorities of the organisation's work and timed so that their results are delivered when they can be useful; (2) the degree of consultation with, and ultimately ownership by, those who seek evaluative information; (3) the tools used to support the evaluation process per se; and (4) the credibility and quality of the evaluation products.[105]

1. Planning relevant and timely evaluations

The OIE is now working on a 3-year rolling work plan that sets out the broad areas for enquiry. So far, there are no agreed criteria for selecting the specific topics for independent evaluation, although the priorities tend to reflect those of the CDB's strategic plan. Instead, decision-making is rather arbitrary, based on a process of dialogue between the OIE and the CDB, and between the OIE and the Board.

One of the OIE's two objectives for 2015, therefore, was to define a work plan and agree priorities based on an approach that is "utilisation-focused". This means that studies are selected and planned to be relevant and useful to the organisation's needs.

The OIE has achieved this objective with respect to its latest studies, which concern the Social Development Fund (SDF) Multicycle 6&7 Evaluation, the Haiti Country Strategy evaluation and the evaluation of the CDB's Policy Based Operations. Each of the three has been planned to deliver its results in time to provide the CDB Board of Directors with relevant information for negotiating the next round of funding. In spite of some delays due to a myriad of reasons, not least the extra effort needed to secure essential data, the studies are expected to deliver on time.

The processes for agreeing OIE's work plan and specific evaluations on the one hand, and for securing alternative funding on the other, are shown in Figure 1 below. The Panel was surprised to learn how bureaucratic (in its internal approvals) and inefficient (in view of the time it takes) the process is. The concern here is that such a process could pose a threat to assuring the Board of "timely studies."

Figure 1: Selection of Evaluation Topics and Funding Source

[104] US Treasury Report to Congress on Evaluation Standards and Practices at the Multilateral Development Banks (September 2014, Annex C).
[105] These aspects reflect the principles and good standards of the Evaluation Coordination Group and the Evaluation Community more generally.

[Figure 1 is a flow diagram. It shows the selection track: consultation with CDB Operations and the OAC/Board on the selection of the evaluation topic; the 3-year Work Programme and Budget (approved by the Board); the annual OIE report and work plan submitted to the OAC; specific evaluation study design and budgeting; an OIE draft Terms of Reference / Approach Paper; internal review of the Approach Paper; and finalisation of the Approach Paper (or a detailed ToR, if sufficiently detailed) for submission to the OAC/Board, with OAC approval recorded in the OAC minutes. It then shows the funding track: if funded from the OIE administrative budget, Board approval is necessary above USD 150,000 (Board notification only at USD 150,000 or below); if funded by the SDF, a TA Paper is prepared (content similar to the Approach Paper but in a different format) and approved by the internal Loans Committee. In either case the OIE then selects and contracts consultants, if any.]

2. Consultation and ownership

"The credibility of evaluations depends to some degree on whether and how the organization's approach to evaluation fosters partnership and helps build ownership and capacity in developing countries." (ECG good practices)

The OIE engages with the OAC, CDB senior management and operations in agreeing its 3-year work plan and then in selecting the specific topics and themes. It also discusses the evaluation approach paper (design and implementation plan) with the CDB and the OAC before completing the final version. However, preliminary and final drafts of the report are submitted only to CDB line and senior managers, for comment and the correction of factual errors. Only final versions are passed to the OAC. A series of discussions is held first with the CDB and then with the OAC on the results and their implications. Discussions with the OAC are more limited due to the overburdened agenda of OAC and Board meetings, as previously discussed.

In short, the OIE is to be commended for following professional good practices and standards on participative approaches; it has succeeded in introducing a modus operandi that involves the key players in the selection of evaluation topics, the evaluation designs and their results. Figure 2 below provides an overview of the evaluation implementation and stakeholder engagement processes.

Figure 2: Evaluation Study Implementation and Feedback Loops

[Figure 2 is a flow diagram showing three implementation arrangements: Arrangement A, fully outsourced to external consultants with oversight by the OIE; Arrangement B, conducted by OIE staff; and Arrangement C, conducted jointly by external consultants and the OIE. Starting from the Terms of Reference, each arrangement involves preparations (a detailed evaluation plan, including tools, timeline, etc., and logistics) and the production of an Inception Report / Approach Paper, followed by data collection and analysis. Interim findings and conclusions are presented in a workshop with the CDB for immediate feedback and validation. The draft final report is submitted to the OIE and passes through review loops between the OIE and the CDB (potentially also the BMCs), with feedback to the evaluation lead, before the final OIE-approved report goes to CDB Senior Management for a Management Response. The final report and Management Response are considered by the AMT and then submitted together to the OAC/Board for endorsement, after which the report is prepared for disclosure and dissemination.]

Notes to Figure 2

9. The OIE informed the Panel that this is an abbreviated version: there are, for example, additional steps (secondary processes) when evaluations are procured (tendering or single source), additional review loops, updates to the OAC, etc.

10. The OAC may also decide to return the report to the OIE, the Panel was informed, or demand specific actions from Management based on the report.

This process is engaging and appears to have secured senior management and OAC interest and buy-in, as witnessed in the latest studies. But there is a downside too: the process takes much time and, in our view, is partly unnecessary. The Panel appreciates that staff from operations as well as the AMT may both want to confer on an appropriate management response, but this should not be the case for reviewing an independent report for factual errors. This two-phase approach is inefficient.

Contact between the OIE, the CDB and/or the OAC during study implementation is most often restricted to the occasional progress report, particularly when studies run behind time. There is no "accompanying group" for individual studies, which would include internal and possibly external partners. Such "advisory groups" have shown their worth in a number of contexts, improving buy-in and providing strategic input as well. The OIE does, however, arrange discussions for reflecting on emerging findings, but we are not sure how systematic this feedback loop is.

More generally, outside of an evaluation study the OIE has limited dealings with operations. The OIE has an advisory role, particularly in providing training, guidelines and tools to support self-evaluations. We are nevertheless concerned about the seeming distance between the two and how this has affected the perceived value of evaluation. (For more on this point, see the section below on "Self- and Independent Evaluations".)

But the Panel also wishes to stress that this is not the case for newly appointed senior managers. A much more open attitude to evaluation and an appreciation of its potential value was evident:


they expressed interest in drawing out important lessons on what works, how, for whom, and under what conditions. In one case, interest was followed up in practice: the OIE was recently invited by a senior manager to share evaluative knowledge and experience with his staff regarding policy-based operations.

Certainly, we can say that overall the key stakeholders within the CDB are adequately integrated into the evaluation process so as to foster their buy-in and ownership. But more generally, we feel that the utility of independent evaluations can be improved by fostering a supportive climate that wants to learn through calculated trial and error. The constructive criticism that evaluation can offer adds value to understanding the strengths and weaknesses of such strategies. This, however, cannot be done overnight.

3. Tools to support the evaluation process

So far, during this transitional phase, the OIE has mainly focussed on improving the tools that support the operations areas' self-evaluations. This has left the OIE with little time to produce checklists or tools to support its own studies. There are plans to develop an OIE Manual to guide and support the independent evaluation process. Such plans should be encouraged, as these documents will form a very important part of training, particularly for newcomers to the OIE team.

In the meantime, the OIE and operations staff refer to the Performance Assessment System (PAS) Manuals for evaluation activities. The manuals are based on the DAC criteria and ECG principles. Much emphasis is given to the rating system and to how and what should be rated. However, we find them lengthy, unwieldy and overcomplicated. Moreover, such manuals should be used for reference; they cannot and should not replace first-hand training in how to plan, conduct and manage the evaluation process.

Quality Assessment (QA) and Quality at Entry (QaE)

There was a transition period between 2012 and 2014 to establish the OIE. Work on the PAS, QaE, PCRs and ARPP, which had started earlier, was therefore completed after the OIE came into existence, but it effectively had no formal 'home' in operations. The Panel was told that there had been some discussions about creating a Quality Assurance unit within CDB operations (OPS), but the current status is unclear.

The QaE Guidance Questionnaire was developed earlier and completed by the OIE. It was used to assess the documents that came to the OIE for comment at the review stage. The results were then sent to the Portfolio Manager/Project Coordinator, indicating any gaps or issues that needed to be addressed or clarified. QaE Guidance Questionnaires were developed for all the Bank's lending products and CSPs, and to assess the quality of supervision.

After the QaE was launched bank-wide, several operations officers saw the merit of using the QaE Guidance Questionnaire in the field and adopted it as a tool during appraisal missions to cross-check and test their data collection and analysis.

OIE’s use of the QaE was discontinued in 2014 due to limited resources and a stronger focus on evaluations. It still sometimes comments on specific appraisals, but very selectively.

Both QaE and QaS (quality at supervision) are also addressed in the PAS Manuals. In addition, the QaE and PAS have been incorporated into Volume 2 of the Operations Manual (OPPM).

The Review Panel assessed the QaE forms. They are relatively standard, adapted to the specificities of the CDB. They contribute to judging a project's expected quality in a relatively objective way. As such, they are helpful as a benchmark in the ex-post assessment of projects.

The Panel considers that the lack of an established Quality Unit in the CDB (one independent of the OIE) is a weakness that should be addressed in the near future.


4. Credibility and Quality of Evaluation Products

As with many other MDBs, evaluation activities include both independent and self-evaluations; the latter result from completion reports on operational projects and country strategy programmes and are done by operations staff. The OIE then validates the quality of such reports. The self-evaluations should inform the more strategic studies conducted independently by the OIE. (More on the relationship between the two is provided later in this Review.)

An independent evaluation is processed as follows: the OIE prepares an Approach Paper (AP) for approval by the OAC. If the study is to be outsourced, the AP becomes the basis for a Terms of Reference (ToR), which, subject to the size of the budget, may be put to tender. The contracted evaluator then prepares an Inception Report (IR) after some desk and field research has taken place. This intermediate report is not produced if the OIE itself is conducting the evaluation. Sometimes a Progress Report is submitted, but otherwise the next stage is the delivery of the final report in various drafts. (Assessments are like evaluations but more limited in scope and depth of analysis.)

Since 2012, the OIE has produced a range of studies and approach papers. This review is based on those listed below, as provided by the OIE, covering the period from May 2012 to December 2015: 3 evaluations, 4 assessment studies, 14 validations of self-evaluations and 3 Approach Papers for upcoming evaluations. These are listed in Table 4.

Table 4: List of studies (N = 24) submitted to the Board for the period January 2012 to December 31, 2015

Board Meeting 251 (May 2012):
- Ex-Post Evaluation Report on Road Improvement and Maintenance Project, Nevis – St. Kitts and Nevis
- Validation of Project Completion Report on Sites and Services – Grenada
- Assessment of Effectiveness of Implementation of Poverty Reduction Strategy 2004-09

Board Meeting 253 (Oct. 2012):
- Assessment of Extent and Effectiveness of Mainstreaming Environment, Climate Change, Disaster Management at CDB

Board Meeting 254 (Dec. 2012):
- Assessment of the Implementation Effectiveness of the Gender Equality Policy and Operational Strategy of the Caribbean Development Bank
- Validation of Project Completion Report on Enhancement of Technical and Vocational Education and Training – Belize
- Validation of Project Completion Report on Fourth Road (Northern Coastal Highway Improvement Section 1 of Segment II) Project – Jamaica
- Assessment of the Effectiveness of the Policy-based Lending Instrument

Board Meeting 256 (May 2013):
- Validation of Project Completion Report on Expansion of Grantley Adams International Airport – Barbados
- Validation of Project Completion Report on Fifth Water Supply Project – Saint Lucia

Board Meeting 261 (May 2014):
- Validation of Project Completion Report on Immediate Response Loan, Tropical Storm Gustav – Jamaica
- Validation of Project Completion Report on Social Investment Fund – Jamaica
- Validation of Project Completion Report on Disaster Mitigation and Restoration – Rockfall and Landslip, Grenada

Board Meeting 263 (Oct. 2014):
- Validation of Project Completion Report on Basic Education Project – Antigua and Barbuda
- Approach Paper for SDF 6 & 7 Multicycle Evaluation

Board Meeting 264 (Dec. 2014):
- Validation of Project Completion Report on Policy-Based Loan – Anguilla
- Validation of Project Completion Report on Immediate Response Loan, Tropical Storm Arthur – Belize
- Evaluation of Technical Assistance Interventions of the Caribbean Development Bank Related to Tax Administration and Tax Reform in the Borrowing Member Countries 2005-2012

Board Meeting 265 (March 2015):
- Approach Paper for the Evaluation of Policy Based Operations

Board Meeting 266 (May 2015):
- Validation of Project Completion Report on Upgrading of Ecotourism Sites – Dominica
- The Evaluation of the Caribbean Development Bank's Intervention in Technical and Vocational Education and Training (1990-2012)

Board Meeting 267 (July 2015):
- Validation of Project Completion Report on The Belize Social Investment Fund I Project – Belize

Board Meeting 268 (Oct. 2015):
- Approach Paper, Country Strategy and Programme Evaluation, Haiti

The review and analysis of these documents is based on the UNEG Quality Checklist for Evaluation Reports (http://www.uneval.org/document/detail/607) as well as on ECG guidance (Big Book on Good Practice Standards).

Approach Papers

Three Approach Papers (APs) were made available to the Panel (see Table 4 above). An AP describes the rationale for the evaluation, the background to the topic evaluated, the evaluation framework (criteria and questions) and the approach. It also describes the team and provides an initial plan. As the first main deliverable of OIE's evaluation process, APs are the starting point and therefore a major determining element in the roll-out of each evaluation. APs therefore "have to get it right".

The APs examined are clearly written, well-structured and of reasonable length.[106] We were surprised to find, however, that they do not make explicit the objectives of the evaluated intervention(s), e.g. through a clear objective tree, or through an explicit theory of change, intervention logic or logframe. Whilst one of the APs contains, in an appendix, a results framework for the evaluation, the results framework for the intervention (PBO) itself is lacking.

Inception reports

Only one Inception Report was given to the Panel for review (SDF 6&7). It gives an in-depth description of the evaluated programme and provides a clear Theory of Change. It is good practice that this is established after a pilot field mission, which helps to amend the initial AP on the basis of field observations and to sharpen the evaluation questions if needed.

However, it is still considered good practice to have the Theory of Change elaborated in the initial design documents. This would facilitate OIE evaluations after project completion. Establishing the Theory of Change of any intervention could be required more explicitly in the QaE form, to be developed jointly by the Quality unit referred to above and the OIE.

Evaluations and Assessments

Three evaluations and four assessment reports completed during the review period were considered. Assessments are similar to evaluations but have a narrower scope; they focus on a limited set of aspects or judgment criteria, mainly effectiveness, i.e. achievement of objectives.

[106] Opportunities remain, of course, to be more concise and to move parts to appendices, e.g. detailed descriptions of the evaluation team or part of the description of the evaluated intervention.


Evaluations generally base their judgment on the internationally recognised DAC criteria as well as on aspects of the CDB's and the BMCs' management of the intervention.

In general, these reports are of reasonable quality. In the main, they explain the evaluated object[107] and state the evaluation objectives. The findings are organised around the evaluation criteria or questions detailed in the scope and objectives section of the report. They are based on evidence derived from the data collection and analysis methods described in the methodology section. The reports tend to dwell on the limitations that the evaluation encountered, but without becoming defensive. In one case (the PBL Assessment) the report starts with a summary of the reviews on the topic done by other MDBs. This was a pleasant surprise and indeed a good practice that could well be adopted in future evaluations.

However, the reports also show several significant weaknesses:

- Reports do not always provide clear (reconstructed) intervention logics or theories of change for the intervention(s) evaluated.[108] Evaluation criteria and questions are defined at a fairly general level. They are translated into more precise "research questions" (in an "Evaluation Design Matrix", for each project and each criterion). However, it is unclear how these questions relate to the intervention logic (as this is not made explicit). This may be done in inception reports (of which, as noted above, only one was available for review), but should be done in the final reports as well.

- The reports do not describe the link from the evaluation questions to the answers, how the evaluation judgments are made, and how these ultimately transform into ratings for each criterion and each project. In other words, the explanation provided in the evaluation frameworks is inadequate. The "evaluation design matrix" currently used does not provide sufficient insight into how an intervention's performance is ultimately judged. Links between findings, conclusions and recommendations could be improved by making this more explicit. Reports should include the story of how the evaluand is credibly linked to any observed outcomes and impacts, and should be clear on how causal claims are made.

- With the exception of the PBL Assessment, the reports are lengthy and detailed. One reason for this is an over-emphasis on ratings. Their detailed discussion, project by project and criterion by criterion, occupies a very prominent position in the main body of the evaluation reports. Although ratings are traditionally an important element in MDB evaluations, too strong an emphasis can be tedious and may distract the reader from the real lessons to be drawn. The detailed discussion of ratings, and their evidence base, would be better placed in an appendix, with a brief summary in the main report. This would give the lessons and recommendations a more prominent position than is now the case. It would also make the evaluation reports not only shorter but also more interesting to read, which could help add value to evaluation's image within the organisation.

- The reviewers feel that the OIE evaluations tend to over-emphasise objectives-based evaluation[110] and the DAC criteria to the exclusion of other evaluation approaches such as Developmental Evaluation (Patton, 2010[111]); evaluation should be case-specific and answer the actual information needs of managers and other decision makers rather than always concentrating on final performance.

- Related to the previous point (and again with the exception of the PBL Assessment), the executive summaries (approximately 8 pages) are too long. To increase an evaluation report's potential impact, they would need to be reduced to 2 to 3 pages and be more focused; again, this could be done by dwelling less on the individual ratings of projects and more on key findings, lessons and conclusions. More generally, reports could be better adapted to the needs of the different audiences. Although not strictly limited to evaluations, the Health Evidence Network reports[112] are a model that could be adapted for evaluation reporting purposes; they are specifically geared towards addressing policy and decision-making.

- The "Recommendations to BMCs" are an interesting feature of the reports, although we are unsure to what degree such recommendations can be effectively followed up by the OIE or the Bank; they could certainly be taken up with BMC Board members.

- Reports (e.g. the evaluation report on Technical Assistance) focus heavily on technical problems encountered during the evaluation. Although these are important issues, to improve the report's flow and "readability" this section too would be better placed in an appendix. What counts is the story of the intervention, not the story of the evaluation (see, for instance, the "Limitations" section in the TA report).

[107] Sometimes at great length: for instance, in the SDF 6&7 multicycle evaluation report the findings only begin at page 30.
[108] Again with the SDF 6&7 evaluation, it is said to be guided by a "Logic Model" which is not explained.
[110] The focus of an objectives-oriented evaluation is on specified goals and objectives and determining the extent to which these have been attained by the relevant intervention. See, for example, Worthen, Sanders, & Fitzpatrick (1997). Program Evaluation: Alternative Approaches and Practical Guidelines (2nd Ed.). White Plains, NY: Addison Wesley Longman.

OIE Validations of Project and Country Strategy Programme Completion Reports (referred to collectively as PCRs hereafter)

As noted above, the OIE has the mandate to validate the Project and Economic departments' PCRs and CSPCRs. However, in this period of transition, much of the OIE's work since 2012 has been devoted to the backlog of CDB self-evaluation validations. In theory, an estimated 15 completion reports fall due each year, but delays in submitting the reports for validation are commonplace. With the change of Head in June 2014, the OIE therefore secured the OAC's agreement to reduce the number of validations to a maximum of 6 per year. Nevertheless, a backlog continues to accumulate; only 2 PCRs were given to the OIE for validation in 2015.
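To make the arithmetic behind the backlog explicit (a rough illustration only, assuming the estimated figures above hold and that all reports due were actually submitted):

\[
15\ \text{PCRs due per year} - 6\ \text{validations per year} \approx 9\ \text{PCRs added to the backlog each year}
\]

In practice the 2015 shortfall came from the other direction, since only 2 PCRs were submitted for validation; but on either reading the validation pipeline cannot keep pace with the notional annual output of completion reports.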

The validations tend to repeat the different items reported in the PCRs and then provide extensive comment on each. The PCVRs go into great depth and detail, which makes the documents rich and complete. This is their strength – but also their weakness. The depth and level of detail, as well as the repetition from the original PCRs, make the PCVRs (overly) lengthy (20-40 pages) and difficult to read. The OIE reported spending approximately 27.2% of its time on validating PCRs in 2015, compared with 44.4% on its core work, i.e. doing or managing the higher-level evaluations; validation thus absorbs well over half as much time as the core evaluation work itself. Finally, the PCVRs now seem to be, to a great extent, a standalone output of the OIE. It is not always clear to us how they are being used as the "building blocks" for the OIE's independent evaluations. Making this clearer in the independent evaluations would help show the link, and therefore the value of the time being spent on the self-evaluation validations.
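Expressed as a simple ratio, using the OIE's own 2015 time figures quoted above:

\[
\frac{27.2\%\ \text{(validations)}}{44.4\%\ \text{(core evaluation work)}} \approx 0.61,
\qquad
27.2\% + 44.4\% = 71.6\%\ \text{of reported time}
\]

That is, for every hour spent doing or managing the higher-level evaluations, roughly 37 minutes went into validating PCRs; and the two activities together account for about 72% of the OIE's reported time, leaving under 30% for all its other mandated tasks.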

To conclude, the review finds that the OIE has taken steps to improve the perceived utility of evaluation in several ways. In the first instance, by planning its work to provide relevant and timely evidence geared towards helping the Board with its oversight and decision-making tasks; the topics are selected through dialogue between the OIE and key CDB stakeholders and reflect the priorities of the CDB's strategic plan. Secondly, by securing the interest, and consequently the buy-in, of the OAC and CDB senior management through engaging their input throughout the evaluation process. This is evidenced by the reported interest in the latest three studies: the country strategy programme in Haiti, the evaluation of policy-based operations and the SDF 6&7 multicycle assessment.

[111] Patton, M.Q. (2010) Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use. Guilford Press.
[112] See the reports available from the WHO's Health Evidence Network at http://www.euro.who.int/en/data-and-evidence/evidence-informed-policy-making/health-evidence-network-hen

The OIE's products are of acceptable quality and could be better still if some of the shortcomings were addressed. The products themselves, however, do not impair the utility of OIE's work; this is undermined in other ways: (1) by the delays in commenting on PCRs (OIE) and in providing feedback on the independent evaluations (operations and management); and (2) by the inefficient processes for agreeing topics and funding sources, as well as for providing the OIE with management responses to its reports.

Putting Evaluation to Use: transparency, feedback and follow-up

There are several ways in which evaluation can be, and is being, used. As John Mayne has pointed out in his many publications on the issue,[113] when we talk of evaluation use we are mainly thinking of instrumental use: use made directly to improve programming and performance. But there is also conceptual use, which often goes unnoticed or, more precisely, unmeasured; this refers to use that enhances knowledge about the type of intervention under study in a more general way. And there is reflective use: using discussions or workshops to encourage and support reflection on the evaluation findings to see how they might contribute to future strategies.

In the case of the CDB there is some evidence to suggest that "use" is not only instrumental; other types are also developing. For example, the review of draft evaluation reports includes reflective workshops that not only discuss the findings but also seek to draw out the important lessons (reflective use).

Another important use, as recommended by the ECG, is to draw a synthesis of lessons from a number of evaluations from time to time and make it publicly available. In fact, the Panel was impressed to hear that in the past the evaluation unit had done this, drawing on lessons from evaluations of the power sector (conceptual use). Although nothing has happened since, it is now on the "to do" list for 2016 (OIE's 2016 Work Plan).

As for instrumental use, responsibility for using the knowledge generated through evaluation, and for possibly drawing up an action plan of what should be done, rests with CDB senior management and the relevant CDB department and division. Oversight of the application of recommendations and the uptake of lessons is the responsibility of the OAC.

Evidence on how evaluations have actually contributed to decisions or negotiations is lacking or confusing. Certainly, the OIE is unaware of the extent to which its evaluations are put to use. On the one hand, the OAC minutes sometimes indicate that lessons learned are integrated into the next phase. On the other hand, the reviewers were told that in the past the evaluation results were often "too old" to be of use, as the lessons had already been drawn and used well before the report was completed. Gaps in people's memories of how evaluative information from previous studies was used may also account for the scarcity of evidence.

In response, the Panel questioned CDB staff and the OIE about a particular study, the Technical and Vocational Education and Training Assessment. The feedback was somewhat contradictory. On the one hand, the study was criticised as "confirming" news rather than bringing "new news". On the other, we learned that in October 2015 the Board of Directors approved a proposal for the revision of CDB's Education and Training Policy and Strategy. Work on this has already begun and an external consultant has been engaged to lead the process.

[113] See, for example, the opening chapter of Läubli Loud, M. and Mayne, J. (2014) Enhancing Evaluation Use: Insights from Internal Evaluation Units. Sage Publications.


Although it is one of the OIE's tasks to set up a database of results and lessons learned from evaluations, so far this has not been a priority. There is also currently no systematic tracking of the lessons or recommendations arising from evaluations, or of progress in their uptake. (The Panel has already referred above to the OAC's lack of oversight of the use of evaluation.)

The OIE's role in supporting CDB's organisational learning is clearly specified in the Evaluation Policy, with many good suggestions for knowledge-sharing activities such as "brown-bag lunches, workshops, pamphlets and short issues papers" (p. 19). So far, however, the OIE's lead role on the knowledge-sharing side appears to be quite limited. It has provided advisory input to Loans Committee discussions, and organises workshops together with the relevant operations department to discuss the implications of evaluation studies. Ultimately, of course, the uptake of evaluation results and knowledge is in the hands of management. But the evaluation unit has an important role to play as knowledge broker and knowledge manager. Both roles have tended to be underplayed in OIE's work plans so far.

Transparency: The Communication Strategy

Recently, with the approval of its new Disclosure Policy, the CDB has started to post its independent evaluation reports on its website. (There is nothing on the self-evaluations.) The website also presents a good overview of the role and function of the OIE and of evaluation within the CDB. This is a step in the right direction for sharing information. However, in our view, the CDB's communication strategy is the weakest part of the evaluation system to date.

The Panel has already commended the OIE on its efforts to engage the CDB and the OAC in evaluation work. But reporting and communicating the lessons seem to be targeted entirely at the Board and the CDB. Moreover, the 2015 budget provides only US$2,000 for communication, none of which is intended for outreach.

The reviewers feel that active engagement with the more indirect stakeholders, for example project implementers in the BMCs, NGOs or project beneficiaries, is relatively weak.[114] There appears to be little reflection on drawing out significant messages for the broader group of stakeholders, or on how to transmit them to the "right" people in the "right" way (knowledge brokerage).

To conclude, evidence on the uptake of evaluation is either confusing or sparse. It is unfortunate that so far no systematic record-keeping system has been put in place to track lessons learned or the uptake of recommendations (or of actions agreed in management responses). The OIE plays a weak role in brokering the knowledge generated through evaluations to the benefit of external partners, and in managing such knowledge. Although the Evaluation Policy specifies the need for "distilling evaluation findings and lessons learned in appropriate formats for targeted audiences both within and outside the CDB" (p.19), such a targeted communication strategy has yet to be developed and budgeted.

Strengthening Evaluation Capacities and Networking

From the outset in 2012, the OIE has stressed the importance of developing and strengthening evaluation capacities within the OIE, the CDB and, subject to available resources, the borrowing member countries. Building evaluation capacity in the BMCs and the CDB is one of the OIE's mandated tasks, and it has figured in the work plan from the beginning (Work Programme and Budget 2012-2014). Developing an internship programme for graduates from the Caribbean region was one idea advanced to help build local evaluation resources. To date, however, capacity building has focused primarily on OIE and CDB staff. One of the OIE's two objectives for 2015, therefore, was to take up the challenge and "strengthen evaluation capacities and networking", including reaching out to the BMCs.

[114] A broader communication strategy is one of the principles and good standards of the Evaluation Coordination Group and the Evaluation Community more generally.


Developing OIE staff capacities

The change from project-level to strategic and thematic evaluations requires different evaluative skills and competencies. The MDB Evaluation Pyramid in Figure 3 below shows the different types of evaluation and the changing resource needs as one ascends the pyramid. Implicit here too is the change in the type of expertise and competencies needed as evaluation aspires to the higher levels.

Consequently, for 2015 the OIE set itself the objective of networking and developing working partnerships with regional and international evaluation entities and academic institutions. The rationale was twofold: (1) to secure further support and guidance, and (2) to increase its outreach and coverage through joint work and international exposure. Another, implicit, aim was to benefit from partners' contacts in the BMCs wherever possible, so as to improve data collection and quality.

Figure 3: The MDB Evaluation Pyramid[115]

The OIE has therefore linked up with Carleton University in Canada and the University of the West Indies, Barbados campus. The OIE was also approached by the Development Bank of Southern Africa to exchange experiences about setting up an evaluation entity in a "small" development bank. However, its attempt to become a member of the Evaluation Cooperation Group was not successful, for reasons beyond its control.

The OIE is to be commended for addressing the issue of staff competencies and professional development more generally. New developments in evaluation, as well as in the scope of OIE's work, may necessitate new competencies. For this reason, organisations such as the International Development Evaluation Association (IDEAS) have recommended that the competencies of evaluators and evaluation managers be periodically reviewed. Several publications now exist on competency requirements and suggestions for the periodic review of staff competencies.[116]

[115] US Treasury Report to Congress on Evaluation Standards and Practices at the Multilateral Development Banks (September 2014, Annex C).

It is not within this Review's remit to compare and contrast OIE's competencies with those recommended by international and national agencies. What we can say, however, is that the OIE demonstrates great forethought in taking this on board.

Capacity building within CDB

The OIE's objective also includes continuing to develop measures for improving the monitoring and self-evaluation side of CDB's work. OIE's strategy here is to use the windows of opportunity offered by some of the training sessions being organised by the CDB as part of its shift towards MfDR, e.g. by Corporate Planning Services and Technical Assistance. For 2016 it is also planned to have the OIE present at the annual staff meeting and Learning Forum.

The OIE also organises ad hoc training with operations, for example to help staff understand new tools (e.g. for drawing out lessons from self-evaluation reports) and, more generally, to help staff appreciate how evaluation can add value to the organisation's work. Measures include providing advisory services on demand and providing training alongside the introduction of new or revised tools.

Capacity building in the BMCs

This is an ambitious task and would require additional investment beyond the bi-annual work plans to be effective. A modest attempt was made in 2015: from what we understand, the OIE has joined with Carleton University and the University of the West Indies, using their networks in some of the BMCs, to try to develop this aspect.

To conclude, we cannot comment on the quality of, or reaction to, such training, but we can commend the OIE for making capacity building one of its priority objectives. From both the Policy and the documents we reviewed, we note that capacity building was always seen as an important aspect of the OIE's work, but it has hitherto received little strategic focus. The resources currently available to the OIE will, however, limit the scope of such work in the BMCs, which in turn will continue to hinder the production of sound evidence for the OIE's evaluations.

Adequacy of the OIE’s human and financial resources to support its work

OIE's Human Resources

The OIE has a staff of five: the Head, one senior evaluation officer and two evaluation managers, plus one administrative assistant. Three of the five were recruited from within the CDB. This limited capacity means that it is not feasible to cover all the types of evaluation activities outlined in the Evaluation Policy. Yet there is some indication from the Board that the OIE should embark on impact evaluations at some future stage. An increasing demand for evaluation, and for impact evaluations in particular, would run the risk of overstretching the OIE's capacity to deliver credible and useful evaluations. Moreover, there are many other designated OIE activities that should be recognised as valuable work: the validations, building CDB and BMC evaluation capacity, providing supervision, advice, knowledge management and brokerage, as well as managing evaluation contracts. The time needed for all of these may be underestimated in the OIE's budgets; all are important for assuring best value from evaluation. The Panel is concerned that a demand for "doing" evaluations, as well as the OIE's interest in advancing its skills in high-level evaluations, may undermine the importance and time needs of these other essential tasks.

116 E.g. IDEAS (2012) Competencies for Development Evaluation Evaluators, Managers and Commissioners; the Canadian Evaluation Society's Competencies for Canadian Evaluation Practice (2010); and the Swiss Evaluation Society's Evaluation Managers Competencies Framework (2014).

Limited and unpredictable resources for independent evaluations

The OIE is funded from the general administrative budget and represents approximately 2.5% of the total. Whilst this is seemingly a higher proportion than in other MDBs, in real terms it is quite limited. 75% of the OIE budget is for staff salaries, leaving US$190,000 in 2015 for external consultants and other expenses.
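As a rough check, these proportions are mutually consistent (figures rounded; the total OIE budget of approximately US$0.78 mn for 2015 is taken from Appendix III):

$$0.78\ \text{mn} \times (1 - 0.75) \approx 0.195\ \text{mn} \approx \text{US\$}190{,}000$$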

CDB's donors do not appear to specify a budget for monitoring and evaluation activities. This means, on the one hand, that there is no clear external budgetary recognition of operations' self-evaluation work or of the OIE's time in the validation process and, on the other, that whilst donors expect to receive reports from independent evaluations, this expectation is not backed by earmarked funds when allocations are made.

Resources available to the OIE for hiring external consultants have dropped from US$350,000 in the revised 2014 budget to US$120,000 in the 2015 indicative budget. The OIE estimates that, for high-level evaluations, the cost of external consultants is between US$90,000 and US$350,000 (the SDF 6/7 evaluation cost US$255,000). In the Panel's experience, this is a sound estimate. With one staff member fewer during 2014-2015, coupled with the OIE's focus on dealing with the backlog of self-evaluations amongst other priorities, the Office was unable to execute some of the evaluations during the annual budget period. The budget was therefore reduced for the subsequent years, but it has proven insufficient to fund the OIE Work Programme. The OIE has consequently needed to turn to the only alternative source available at present, the SDF. But the SDF funding rules apply to specific countries and themes, which obviously restricts the OIE's choice of evaluation subjects and themes. Since the SDF does not allow for OIE recurring costs such as staff travel, the SDF evaluations have to be outsourced. As presented in Figure 1 above, the approval process is inefficient and causes delays. The Panel learned that additional funds, for example for specific studies, could be secured from within the administrative budget during the year, on condition that the request was based on sound arguments.
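A rough calculation using the figures cited above illustrates the squeeze (actual costs will of course vary from one evaluation to another):

$$\frac{\text{US\$}120{,}000}{\text{US\$}90{,}000\ \text{to}\ \text{US\$}350{,}000\ \text{per evaluation}} \approx 0.3\ \text{to}\ 1.3\ \text{high-level evaluations per year}$$

In other words, the 2015 indicative consultant budget would cover at most one high-level evaluation at the lower end of the cost range, and none at the upper end.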

Whilst the Panel fully appreciates that the Bank is operating within a zero-growth framework, the reviewers were surprised to learn that OIE funding is not sufficiently secured in line with its priorities and work plan. The need to seek alternative funding for individual studies does not allow for any flexibility and undermines the OIE's independent judgment of what needs to be done.

To conclude: the OIE is inadequately resourced to meet the expectations outlined in the CDB's Evaluation Policy. The Panel recognises that the CDB itself faces budgetary restrictions. Even so, current arrangements to secure extra funding are complicated and inefficient, and they limit the OIE's ability to exercise autonomy in the selection of its evaluation studies. Moreover, OIE budgets significantly underestimate the time needed for managing evaluations and other evaluation activities.

Self- and independent evaluation

Self-evaluations cover public sector investment, lending and technical assistance, policy-based loans, and country strategy programmes. Both types of evaluation are important, as they are at the very heart of the evaluation function; they are said to be the building blocks for the more strategic evaluations that the OIE is now undertaking.

The Evaluation Cooperation Group recommends that self-evaluations be carried out by the relevant operations department and, in turn, reviewed and validated by the organisation's independent evaluation office. The CDB's Evaluation Policy therefore speaks of "validating all self-evaluations" as one of the OIE's essential oversight tasks.


Within CDB, the self-evaluations should provide management with performance assessments and thereby serve an accountability function to the CDB and the Board. To support the process, the OIE provides operations with manuals and checklists for guidance. Once a self-evaluation report is to hand, it is passed to the OIE for validation of its technical quality and credibility.117

However, in the CDB's case there are well-documented issues that have affected the quality and timeliness of the self-evaluations and, therefore, the quality of the foundation on which the independent evaluations are built. Paucity of documentation within CDB, paucity of data collected and available in the Borrowing Member Countries (BMCs), delays in producing completion reports and, in turn, in having them validated by the OIE: all such issues were systematically raised during interviews and in some of the independent evaluation reports. There appears to be little incentive to complete self-evaluations in a timelier manner.

Generally speaking, many of the monitoring data problems appear to be due to a lack of management oversight. For example, with the introduction of results-based management, the logframe and the monitoring and data needs are systematically being built into intervention design. However, the BMCs are not delivering the data as contractually agreed at the outset. Incentives to support any significant change towards building a results-based culture seem to be weak, and sanctions are rarely enforced when the supply of data is lacking or lengthy delays to projects occur. Although we appreciate the complexities of trying to enforce monitoring compliance, the consequence is that project deadlines have often had to be extended and data gaps are not satisfactorily dealt with; in turn, there has been a void in the quality and quantity of available evidence for the CDB's self-assessment of project performance. For some time, this lack of oversight has been tolerated. Part of the problem is the low priority accorded by operations to completing the self-evaluation reports, coupled with the absence of any focal point within senior management to drive the process and deal with the problems.

No record is kept of how the self-evaluation results are actually used. They do not appear on the CDB website, but we were told that the findings are integrated into subsequent project designs. Hence we are somewhat unclear as to the present utility of these reports. The situation is exacerbated by a rather confused image of evaluation: some operations staff consider the OIE's input (through validations or independent evaluations) to be at times over-critical, regulatory and of little added value, a threat rather than an opportunity for learning. Yet at the same time, evaluation is recognised as an integral part of results-based management.

According to the Evaluation Policy (p. 15), "The President, with the support of the Advisory Management Team, is accountable for encouraging and providing an environment where evaluation adds value to the overall management of CDB's activities and fosters a culture of critical analysis and learning". But in the CDB a learning culture appears still to be in its infancy. The leadership role as expressed in the Evaluation Policy is underdeveloped.

Some managers, however, seem to be starting to change the status quo. For example, a revised and simplified template for producing project completion reports is being considered, and mid-term project reviews are expected to be more stringent in looking at monitoring plans and practices and in tying disbursements to performance. In some cases we also learned of incentives being introduced to encourage project managers to complete their reports in a timelier manner. But much remains to be done and, since the OIE is no longer responsible for monitoring and project evaluations, there is a void that needs to be filled. It is up to line managers to drive this work forward.

To conclude, it is fair to say that, in view of a number of "frustrations" between the OIE and operations, largely to do with delays in exchanging comments on the various reports as well as the paucity and/or lack of monitoring data, the added value that evaluation might offer to the operations area is ill-recognised. Moreover, the link between the self-evaluations, as the building blocks for the independent evaluations, is not apparent. Thus there is little incentive or management focus to drive any change to current practices. In other words, there is a lack of leadership to advance a learning environment in which evaluation can play a major part.

117 According to the Evaluation Policy, the OIE should validate all PCRs and CCRs. However, due to the backlog of reports and the delays in completing them (sometimes years later), since October 2015 the OIE has secured OAC agreement to validate a maximum of 6 per year, selected in consultation with the OAC.




Part Three: General Conclusions and Recommendations

To conclude, with regard to the Evaluation Policy and the OIE's independence, our Review finds that over the past few years the CDB has succeeded in establishing an independent evaluation office that is credible and respected. It reports to a Board Committee and is thus organisationally independent from CDB management. Its work is grounded in an Evaluation Policy, agreed by the Board and the CDB, that reflects internationally recognised principles and good practices. The Policy sets out a broad scope of responsibility for the OIE which, however, seems over-ambitious given current resource constraints. The OIE clearly has both an accountability and a learning function; the latter should support the development of an organisational learning culture. (So far, the uptake of recommendations and key lessons has not been systematically monitored or recorded.) In general, on the issues of independence, we can conclude that the OIE meets the criteria for organisational and behavioural independence and is protected to a certain degree from external or contextual influences.

However, as the Independent Advisory Committee on Development Impact has said, "independent evaluation needs to have clout … credibility of evaluation hinges on public perceptions as well as on reality."118

We are therefore highlighting a few potential threats, even though there is no evidence to suggest they are in any way real at present. It would be in the OIE's and CDB's interest to have these clarified sooner rather than later. For instance:

Any delays incurred in reporting self- and independent evaluation results to the Board could be interpreted as operational interference.

Similarly, there is no agreed process for dealing with a conflict of interest between the OIE and management in reporting results; at present it is simply expected that any disagreements will be reported in the management response.

Another possible threat is the Head of the OIE's lack of complete autonomy over staff matters: recruitment, termination, continuation and professional development. The Policy is not sufficiently clear about who has the final word in the case of disagreement.

And finally, on resources: our Review accepts the limited funds available to the CDB and the fact that the OIE's budget is not independent but operates within the Bank's budgetary limitations. Nevertheless, we feel that more flexible arrangements could be devised that would allow less restrictive and timelier access to funds.

With regard to governance, our Review has highlighted the difficulties the OAC faces in not receiving the background papers for its meetings in sufficient time to do them justice. Moreover, these documents tend to be very lengthy and not necessarily "reader friendly". The OAC's oversight responsibility is likely to be weakened, and we can already see some indication of this. For instance, requests for systematic follow-up on management actions resulting from evaluation findings have not been answered. Nor is there a standing item for this on the OAC agenda, so such requests can easily be passed over and forgotten. The broadened responsibilities now given to the OAC also mean that there are many competing entities trying to secure the OAC's attention. There is now provision for the OAC to call on consultants for help, which we feel may strengthen the OAC in its oversight responsibilities.

Furthermore, in their capacity as members of the Board, the OAC's members should stress the urgency of developing evaluation and monitoring capacity in the BMCs, since this gap is having a direct impact on OIE and CDB evaluations.

With regard to the OIE's performance, we have to respond to the questions raised in this Review's Terms of Reference, which basically means answering two main questions: Is the OIE doing the right thing? And is it doing it in the right way?

118 Picciotto, R. (2008) Evaluation Independence at DFID: An Independent Assessment prepared for the Independent Advisory Committee on Development Impact (IACDI), p. 4.


There is no doubt that the decision to establish a credible, independent evaluation function in the CDB is the "right thing" to do: "effective and useful evaluation and oversight activities can assess development effectiveness, hold the organisation accountable for results, and improve operational performance."119 It is also a policy of the MDBs to have such a function, and the CDB has now aligned itself with international standards and practice.120 The question now, therefore, is whether the OIE is going about it in the right way.

The OIE has taken the "right" steps to improve the engagement and interest of the OAC and CDB senior management, from selecting the topics for its evaluations through to finalising the conclusions and recommendations in a collaborative spirit. It falls short, however, of taking the messages emerging from the studies to "outsiders", such as those responsible for implementing CDB interventions in the BMCs.

In its oversight role, we feel that the OIE has paid insufficient attention to the actual utilisation of evaluation; it is beyond its responsibility to see that action is taken, but it is certainly within its remit to record how, and how well, the lessons drawn have been taken up and used. With regard to its oversight of the self-evaluations (the validation process), the OIE has attempted to improve dialogue with the operations departments and to demonstrate the dual function of oversight and learning. It is now emphasising the learning aspect by providing tools and guidance on how to draw out lessons and integrate them into future planning. More recently, it has sought ways to provide more formalised training on evaluation by working with the corporate planning services and technical assistance department to develop courses that show how, where and when evaluation plays its part within the MfDR framework.

However, one of the challenges in evaluation management is balancing independence with facilitating buy-in and ownership at the same time. It is a fine line to walk, and much depends on the climate between management and the head and staff of the independent evaluation unit in defining the tone of the collaboration. In practical terms, for the CDB this means defining the role of the OIE in relation to the self-evaluations performed by the Projects and Economics Departments. The change from the EOV to the OIE made this role change quite clear: the OIE no longer has responsibility for project monitoring or for planning data needs together with the operational departments. On the other hand, to improve understanding and learning, there needs to be an interface between evaluation and management. At present, the tension in the OIE's dual role, that is, its advisory role in relation to operations and its strategic role towards the OAC and senior management, has not been satisfactorily resolved. Operational staff still do not appear to see any urgency in producing their completion reports or to appreciate what lessons might be drawn from such reflection. The OIE is doing its best to support "learning" whilst at the same time keeping an arm's length. The greatest challenge the OIE faces in its new capacity is the slow development of an organisational learning and evaluation culture.

A Learning and Evaluation Culture

Evaluation utility depends on the engagement of evaluation users: those who should benefit from the knowledge generated through the studies. Useful evaluation therefore depends to a large degree on the development of an evaluation and learning culture and on how well these are embedded in the organisation. This means that the organisation recognises and appreciates evaluation's role and the functions it can have, particularly in helping it understand what it is achieving and where and how improvements can be made. In short, the added value that evaluation can bring to the organisation is its ability to draw out the important lessons that can help improve the organisation's performance.

However, whilst CDB senior management shows all the signs of embracing evaluation as an important strategic tool, there still appears to be some apprehension about receiving criticism, however constructive it might be. The OAC has already affirmed its interest in learning what can be "put right the next time around". In considering accountability, the committee is asking for a more strategic approach to learning and sharing knowledge based on evidence. The CDB also shares the development goals of other MDBs, that is, "to end extreme poverty and promote shared prosperity". This means looking for new forms of problem-solving and for ways to create a "development solutions culture". Hence there is an interest in learning from experience and exchanging knowledge about what works. This implies balancing accountability and learning: making sure they are not seen as opposites, but as compatible entities. This greater emphasis on learning requires a reframing of CDB's thinking and of how it deals with the constructive criticism that evaluation can offer.

119 CDB (2011) Evaluation Policy (p. 2)
120

By way of comparison, an external review of the evaluation function at UNRWA described the features of a weak evaluation culture as follows:

Weak evaluation culture

27. While some stakeholders seem keen on evaluation, the overall evaluation culture in UNRWA is weak. There are several aspects to it.

28. First, many of the interviewees stressed that UNRWA has a weak learning culture. The weak learning culture stems from a number of factors. One reason given is related to the cultural virtue of oral communication. This makes conveying documented experiences challenging. Another reason is language: a majority of UNRWA's national staff are not fluent in English (evaluation reports are mostly in English). Furthermore, criticism, even if constructive, is, according to some interviewees, mainly perceived as a threat and not as an opportunity. Finally, learning is also affected by a very basic constraint: lack of time.

29. Second, there is a weak knowledge management system to systematically collect and share experience and lessons learned in UNRWA. UNRWA communities of practice do not exist. Several interviewees mentioned the use of knowledge networks outside of UNRWA, i.e. communities of practice managed by other agencies. Also, accessing evaluation reports is not easy. The UNRWA website on the Internet does not provide access to evaluation reports. While the Agency's Intranet has a site for evaluation reports, it is not a complete repository, and the Evaluation Division does not know exactly how many decentralized evaluations are being produced. In addition, there are only a few evaluation plans at the level of field offices or departments.

30. Third, the Panel found that decentralized evaluations are, at least partly, perceived as donor-driven accountability instruments rather than as learning tools. In that sense, evaluations are managed as bureaucratic requirements, thereby weakening the learning dimension.

31. Finally, the sensitive political context in which UNRWA operates may also discourage a strong evaluation culture, as evaluative evidence can sometimes be overridden by political considerations.14 The Panel was repeatedly told that, given the political context, any change is a challenge.

14 An example mentioned to the Panel was the evaluation of the Qalqilya Hospital (2013), which concluded that the Hospital should be closed. However, for political …


Recommendations

(Here is a list of some possible recommendations, to be discussed and developed within the Review Panel initially and then discussed together with the OIE.)

- The OAC's oversight responsibility needs to be strengthened (possibly).
- Review the Evaluation Policy to redress gaps.
- The OIE to develop a 5-year strategy providing a step-by-step work programme, budget and timeframe for implementing the Evaluation Policy.
- Stronger leadership from the President, to provide a climate conducive to promoting the added value of evaluation to overall management and to fostering a culture of critical analysis and learning.
- Stronger support from the Advisory Management Team for the evaluation function, by emphasising the accountability of CDB managers for performance results and by devising incentive schemes to support the accountability function.

- Set up committees (advisory groups) to accompany specific evaluation studies as a means of reinforcing ownership.

- The OIE could contribute to developing a learning culture within the CDB by adopting the role of critical friend in its dealings with the CDB operations departments and the CDB more generally. Valuable insight can be gained by having a focused conversation with the client about aspects of the evaluation that went well and those that did not go so well. Some meetings might include all relevant stakeholders in addition to the main client. This feedback can then inform a subsequent internal project-team debrief (e.g. identifying internal professional development or process improvement needs). Input from all perspectives can inform planning for future projects. A summary of these discussions can then be circulated so that everyone can use it for their own internal or external quality improvement purposes.

- Rather than asking "what went wrong?", such conversations can address "what surprised us, what we would do differently, what did we expect to happen that didn't, and what did we not expect to happen that did". This is a better means of getting at the negative aspects without placing blame.

- The OIE to train up and engage "champions" within CDB operations departments to help demonstrate evaluation utility and provide "on the job" training in self-evaluation to colleagues.
- A quality control group could also be set up, as Picciotto suggested for DFID, to carry out and develop the work of quality assessment and, particularly, Quality at Entry monitoring.

- The OIE should be given the resources to build M&E capacity in BMCs as a priority, possibly in partnership with other MDBs, development organisations and the countries themselves.

- Possibly criticise the over-emphasis on using the five DAC criteria, which have been in use for more than 15 years without any major revision. Given their importance and level of influence in the field, it is pertinent that independent professionals take a critical look at them, especially since many scholars and practitioners consider that the quality of evaluations in development aid has been quite disappointing (Chianca, T. (2008) "The OECD/DAC Criteria for International Development Evaluations: An Assessment and Ideas for Improvement", Journal of MultiDisciplinary Evaluation, Vol. 5, No. 9, http://evaluation.wmich.edu/jmde/).

- Regarding the validation of self-evaluations: it is recommended that a leaner format be developed for the PCVRs, repeating less of the content of the original PCR text and focusing on the validity of the PCR against a limited number of key issues to be assessed for each PCR. The PCVRs are also drafted at considerable length, which may not be needed for this type of document; a more tabular form with more succinct statements could lead to a leaner process by which the PCVRs are produced, without losing usefulness. The "PCR checklist" would be a good starting point for this.

- The link between self-evaluations, validations and independent evaluations is not clear at present, nor is the link between self-evaluations and Quality at Entry (QaE) documents, so one wonders what all the effort on the operations side is for. This is a real issue: operations seem to do a lot of interesting and quite sound work, but there is a lack of coherence. (Note that this observation is based on the documents only, not on interviews that could give a broader picture.)

- The EIB evaluation unit was criticised for this in the past too. Since then, it has started to include "younger" (sometimes still ongoing) projects in its samples, and it redoes the portfolio analysis just before finalising the report to see whether things have changed; the services can, of course, also indicate in their response whether things have indeed changed over time.

- Recommendations for improving the process for study approval and funding:

- Give recommendations on priorities for OIE work.

- Funding should preferably come from the administrative budget. Unused monies could then be released in the annual budgetary reviews, but this should have no effect on the budget for subsequent years. The Panel is surprised to find that a Board-approved OIE work programme and budget proves inadequate in practice.


- The Panel nevertheless encourages creating such a quality control unit, the role of which cannot be fulfilled by the OIE, as it lies outside the OIE's scope and present capacity, even though the OIE could have an advisory/methodological role.


APPENDICES

Appendix I - The External Review Mandate: Terms of Reference and Approach Paper

Appendix II - Review Approach, Data Collection and Analysis, and Limitations

Appendix III - Overview of OIE Evaluation Practice

Appendix IV - List of Persons Interviewed

Appendix V - List of Documents Reviewed

Appendix VI - List of Topics used to guide interviews with members of CDB Board of Directors

Appendix VII - List of Topics used to guide interviews with CDB staff


Appendix III - Overview of OIE Evaluation Practice (prepared by the OIE in response to the Reviewers' request)

Caribbean Development Bank, Office of Independent Evaluation (OIE)

Percentage of projects subject to project (self-)evaluation: 100% - Project Completion Reports (PCRs).

Percentage of projects subject to validation by OIE: Approximately 40-50%. About 15 projects exit the portfolio annually. The Evaluation Policy calls for all PCRs to be validated; however, OIE resources are insufficient. The validation process was reviewed in 2014; the OAC (a Board committee) now selects a sample of 6-8 PCRs for validation each year.
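As a rough consistency check on the stated range (using the approximate figures in this table):

$$6/15 = 40\%, \qquad 8/15 \approx 53\%$$

i.e. roughly 40-50% of the estimated 15 completion reports falling due each year.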

Percentage/number of projects subject to in-depth review by OIE: None, unless specifically requested by the OAC. Due to limited resources, the focus of the OIE evaluation work programme is on PCR validations and high-level evaluations, including country strategy and programme evaluations (CSPEs).

Number of high-level evaluations conducted by OIE (e.g. sector, thematic, geographic): 1-2 per year since 2011. The plan is 2-4 per year from 2016; this would include CSPEs (the first, on Haiti, planned for Q1 2016).

Number of project impact evaluations conducted by OIE: None. The OIE includes "impact questions" in high-level evaluations.

Number of project impact evaluations conducted by Bank staff or other non-OIE staff: The OIE is not aware of any impact evaluation conducted by the Bank. However, the OIE provides technical support to the Basic Needs Trust Fund (BNTF) in its design of an M&E framework that entails impact evaluations.

Budget: In USD mn: 0.78 in 2015; 0.82 in 2016. This is equivalent to about 2.5% of the total CDB Administrative Budget. 75% of the budget is for staff salaries (4 professionals, 1 support staff), leaving around USD 190,000 (in 2015) for other expenses, including consultants, e.g. for external evaluations. Additional funding is accessed via the Special Development Fund (SDF); this varies according to the type and scope of the evaluation, e.g. the ongoing SDF 6/7 evaluation is SDF-funded at USD 255,000. The budget is determined by the Board and is not separate from the administrative budget. SDF funding for evaluations is considered separately and is subject to the Bank's internal approval process. SDF funding cannot be used to cover OIE expenses such as staff time or travel. Country eligibility for SDF funding is also a consideration. The OIE expressed concerns about this funding track in respect of predictability, independence and eligibility limitations.

Reporting line: The Head of OIE reports to the Board, with an administrative link to the President.

Terms of appointment for Head: 5-year term, renewable once. Appointed by the President with the agreement of the Board.

Right of return for Head: Not eligible for other staff positions.

Consultants as proportion of OIE budget: 2015: 19% (USD 145,000), plus SDF funding. SDF-funded evaluations are outsourced.

Last external evaluation (or peer review) of OIE: No external evaluation, though a review of the function was done in 2011, leading to the Evaluation Policy. The present OIE External Review was completed in April 2016.

Departments or special programmes supporting impact evaluation: None.


Appendix IV – List of Persons Interviewed

Name / Function relative to OIE / Type of interview

Mrs. Colleen Wainwright - Member, CDB Board of Directors (UK) - Face to face
Mrs. Cherianne Clarke - Alternate Member, CDB Board of Directors (UK) - Face to face
Mrs. Jean McCardle - Member, CDB Board of Directors (Canada) - Face to face
Dr. Louis Woodroofe - Member, CDB Board of Directors (Barbados)
Mr. A. de Brigard - Former Member, CDB Board of Directors - Skype interview
Mr. H. Illi - Former Member, CDB Board of Directors - Telephone interview
Mrs. Claudia Reyes Nieto - Member, CDB Board of Directors - Telephone interview
Mr. Bu Yu - Alternate Director, CDB Board of Directors - Face to face
Mr. Michael Schroll (Barbados) - Head, OIE - Series of interviews via Skype and face to face
Mr. Mark Clayton - OIE Senior Evaluation Officer - Focus group
Mrs. Egene Baccus Latchman - OIE Evaluation Officer - Focus group
Mr. Everton Clinton - OIE Evaluation Officer - Focus group
Mrs. Valerie Pilgrim - OIE Evaluation Officer - Focus group
Dr. Justin Ram - CDB Director, Economics Department - Face to face
Mr. Ian Durant - CDB Deputy Director, Economics Department - Face to face
Dr. Wm Warren Smith - CDB President - Joint interview, face to face
Mrs. Yvette Lemonias-Seale - CDB Vice President, Corporate Services & Bank Secretariat - Joint interview, face to face
Mr. Denis Bergevin - CDB Deputy Director, Internal Audit - Face to face
Mr. Edward Greene - CDB Division Chief, Technical Cooperation Division - Face to face
Mrs. Monica La Bennett - CDB Deputy Director, Corporate Planning - Face to face
Mrs. Patricia McKenzie - CDB Vice President, Operations - Face to face
Ms. Deidre Clarendon - CDB Division Chief, Social Sector Division - Face to face
Mrs. Cheryl Dixon - CDB Coordinator, Environmental Sustainability Unit - Focus group
Mrs. Denise Noel-Debique - CDB Gender Equality Advisor - Focus group
Mrs. Tessa Williams-Robertson - CDB Head, Renewable Energy - Focus group
Mrs. Klao Bell-Lewis - CDB Head, Corporate Communications - Face to face
Mr. Daniel Best - CDB Director, Projects Department - Face to face
Mr. Carlyle Assue - CDB Director, Finance Department - Face to face


Appendix VI - Interview Guide: Members of CDB Board of Directors

Below is a list of themes that I should like to raise with you, based on your experience and knowledge of the CDB's independent evaluation function (Office of Independent Evaluation).

In each case, I should be grateful if you could illustrate your responses with examples or help this Review by, wherever possible, sending me (or telling me where I can find) any documents that could support your responses.

This guide is being sent to you in advance to help prepare our meeting. However, our interview will be conducted more in the style of a conversation. The following sub-questions will be used to GUIDE the interview. Please feel encouraged to raise any additional issues that you feel we should take into account.

On the governance and independence of CDB's evaluation function

What mechanisms are in place to support its independence?

How satisfactory are the current arrangements in your opinion?

How is the balance between independence and the need for interaction with line management dealt with by the system? For example, what mechanisms exist to ensure that the OIE is kept up to date with decisions, policy/programme changes, and other contextual changes that could have an effect on OIE evaluation studies and evaluation planning?

On the OIE's Evaluation Policy

The CDB's Evaluation Policy was established in 2011. To what degree do you feel it is adequate? Still relevant?

What suggestions do you have for any improvements?

In your opinion, how adequate is the current quality assurance system for overseeing the evaluation function?

On the quality and credibility of evaluation studies

To what degree do you believe the reports are fair and impartial?

Do you consider them to be of good quality? Are they credible?

Are you adequately consulted/involved on evaluations of interest to you?

On the relevance and usefulness of evaluations

How well does the OIE engage with you / your committee during the preparation, implementation and reporting of an evaluation study to assure that it will be useful to the CDB?

How are the priorities set for the independent evaluations? What criteria are used? Are you satisfied with the current procedure?


When OIE evaluation studies are outsourced to external consultants, what criteria are used to make this decision?

How are the priorities for the OIE's 3-year rolling work plan agreed? In your opinion, is the current plan adequate in terms of coverage and diversity?

In your opinion, do the evaluations address important and pressing programs and issues?

To what extent do you feel that the OIE's evaluations integrate cross-cutting themes such as gender, energy efficiency/renewable energy and climate change? What improvements might be made, and how?

On the dissemination and uptake of evaluation findings and recommendations

To what extent do you feel that evaluation findings are communicated to the CDB and its stakeholders in a

a) useful,
b) constructive and
c) timely manner?

Are evaluation recommendations useful? Realistic?

What mechanisms are in place to assure that evaluation results are taken into account in decision making and planning? What improvements do you feel could be made?

How have you used the findings from any evaluations? Examples?

To what degree do you feel that evaluation contributes to institutional learning? And what about to institutional accountability? Any examples?

What mechanisms are in place to ensure that knowledge from evaluation is accessible to CDB staff and other relevant stakeholders? Are the current arrangements satisfactory?

How satisfied are you with current arrangements? What expectations do you have for the future?

On resources

How is the OIE resourced financially, and is this satisfactory?

What about the OIE staff, are all the important areas of expertise represented in the team?

On this Review of the Office of Independent Evaluation

What are your expectations? What are you particularly hoping to learn from it?

Thank you very much for your cooperation and input


Appendix VII - Interview Pro-Forma: CDB Staff Members

This presents a list of the topics raised during interviews. It was used to guide an open-ended discussion; this means that the questions may not necessarily have followed this sequence or been asked in exactly this wording.

Changeover to an Independent Evaluation Office? Expectations? Advantages and disadvantages?

Satisfaction with working relations between operations and the OIE from your perspective?

Process of dealing with the PCRs and CCRs? Advantages and limitations?

Quality and credibility of the validation process?

How are the self-evaluation reports used?

Credibility and Quality of OIE’s evaluation reports

Communication of self and OIE independent evaluations? To whom, in what way? Possible improvements?

Actual or potential conflicts of interest?

Work Practices

The OIE has had to develop a plan to implement the Evaluation Policy. This raises such questions as: what are the priorities, and what is the timeframe for achieving which activities? These were partially addressed in the OIE Work Programme and Budget 2012 to 2014, but it proved to be over-ambitious. The OIE has also chosen to increase the involvement of its professional staff in conducting independent evaluations. Outsourcing is still needed when the study is funded by the SDF, when time is limited and when specific expertise is required.

But the plans appear to place little emphasis on the activities associated with evaluation management (e.g. knowledge management) and the time these require. Other time demands mentioned in the previous sections, such as delays in completing reports, validation work etc., have also affected the OIE's plans. The more recent work plans have set the task of delivering utility-focused and timely evaluations, but they lack clarity on how the OIE proposes to surmount the time and data issues, which are far from new. In short, they lack a theory of change and a timeline. The challenges that have to be dealt with to enable the OIE to move up the MDB evaluation pyramid121 are brought out in the remaining sections of this Review, not least given the limited resources available.

There is no "accompanying group" for individual studies, which would include both internal and possibly external partners. Such "advisory groups" have shown their worth in a number of contexts for improving buy-in and for providing strategic input as well.

As with many other MDBs, evaluation activities include both independent and self-evaluations; the latter are the results of completion reports on operational projects and country strategy programmes and are done by the operations staff. The OIE then validates the quality of such reports. The self-evaluations should inform the more strategic studies conducted independently by the OIE. (More on the relationship between these two is provided later in this Review.)

An independent evaluation is processed as follows: the OIE prepares an Approach Paper (AP) for approval by the OAC. If the study is to be outsourced, the AP becomes the basis for a Terms of Reference (ToR), which, subject to the size of the budget, may be put to tender. The contracted evaluator then prepares an Inception Report (IR) after some desk and field research has taken place. This intermediary report is not produced if the OIE itself is conducting the evaluation. Sometimes a Progress Report is submitted, but otherwise the next stage is the delivery of the final report in various drafts. (Assessments are like evaluations but more limited in scope and depth of analysis.)

Table 4: List of studies (N = 24) submitted to the Board for the period January 2012 to 31 December 2015 [table not reproduced here]


It is still considered good practice to have the intervention's objectives elaborated in the initial design documents,122 notwithstanding newer approaches such as Developmental Evaluation (Patton, 2010123).

However, in this period of transition, much of the OIE's work since 2012 has been devoted to dealing with the backlog of CDB self-evaluation validations. In theory, an estimated 15 completion reports fall due each year; in practice, delays in submitting the reports for validation are commonplace. Therefore, with the change of Head in June 2014, the OIE secured the OAC's agreement to reduce the number of validations to a maximum of 6 per year. Even so, a backlog continues to accumulate, as only 2 PCRs were given to the OIE for validation in 2015.

121 US Treasury Report to Congress on Evaluation Standards and Practices at the Multilateral Development Banks (September 2014, Annex C).
122 The focus of an objectives-oriented evaluation is on specified goals and objectives and on determining the extent to which these have been attained by the relevant intervention. See, for example, Worthen, Sanders, & Fitzpatrick (1997) Program Evaluation: Alternative Approaches and Practical Guidelines (2nd ed.). White Plains, NY: Addison Wesley Longman.
123 Patton, M.Q. (2010) Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use. Guilford Press.



In the review of draft evaluation reports, the process includes reflective workshops that discuss not only the findings but also seek to draw out the important lessons. (The Panel has already referred above to the lack of oversight in the use of evaluation.)

Moreover, the 2015 budget provides only US$2,000 for communication, none of which is intended for outreach.


ys to be effective. A modest attempt has been made in 2015; OIE hased But the resources currently available to the OIE will limit the scope of such work in the BMCs, which in turn, will continue to hinder the production of sound evidence for the OIE’s evaluations.man and financial resources to support its work

OIE’s Human Resources;

5eThree of the five were recruited from within the CDB. edfrom the Board that OIE

should embark on ee and for impact evaluations in particular,OIE’s ee Moreover, there

are many other designated OIE activities that should be recognised as valuable work; the

validations, building CDB and BMC evaluation capacity, providing supervision, advice,

knowledge management and brokerage as well as managing evaluation contracts, The

time needs of dealing with all of these may be underestimated in OIE’s budgets; all are

important for assuring best value from evaluation. The Panel is concerned that a demand

for “doing” evaluations as well as OIE’s interest in advancing its skills in high-level

evaluations may undermine the importance and time needs of other essential

tasks.Limited and unpredictable resources for independent evaluations

The OIE is funded from the general administrative budget and represents approx 2.5% of the total. Whilst this is seemingly a higher proportion than other MDBs, in real terms it is quite limited. 75% of OIE budget is for staff salaries leaving US$190,000 in 2015 for external consultants and other expenses.

CDB’s donors do not appear to specify a budget for monitoring and evaluation activities. This means that on the one hand, there is no clear external budgetary recognition of the operations’ self-evaluation work or of OIE’s time in the validation process, and on the other, that whilst donors expect to receive reports from independent evaluations, the expectation is not backed by making this clear when allocating funds.

Resources available to the OIE for hiring external consultants have dropped from US$350,000 in the revised 2014 budget to US$120,000 in the 2015 indicative budget. The OIE estimates that for high-level evaluations, the cost of external consultants is between US$90,000 and US$350,000 (the SDF 6&7 evaluation cost US$255,000). In the Panel's experience, this is a sound estimate; the 2015 allocation would therefore barely cover a single high-level evaluation at the lower end of this range. With one staff member fewer during 2014-2015, coupled with the OIE's focus on dealing with the backlog of self-evaluations amongst other priorities, it was unable to execute some of the evaluations during the annual budget period. Hence, the budget was reduced for subsequent years, but has proven insufficient to fund the OIE Work Programme. The OIE has therefore needed to turn to the only alternative source available at present, the SDF. But the SDF funding rules apply to specific countries and themes, which obviously restricts the OIE's choice of evaluation subjects and themes. Since the SDF does not allow for OIE recurring costs such as staff travel, SDF-funded evaluations have to be outsourced. As presented in Figure 1 above, the approval process is inefficient and causes delays. The Panel learned that additional funds, for example for specific studies, could be secured from within the administrative budget during the year on condition that the request was based on sound arguments.

Whilst the Panel fully appreciates that the Bank is operating within a zero-growth framework, the reviewers were surprised to learn that OIE funding is not sufficiently secured in line with its priorities and work plan. The need to seek alternative funding for individual studies


does not allow for any flexibility and undermines the OIE’s independent judgment of what needs to be done.

To conclude: the OIE is inadequately resourced to meet the expectations outlined in the CDB's Evaluation Policy. The Panel recognises that the CDB itself faces budgetary restrictions, but current arrangements for securing extra funding are complicated and inefficient, and they limit the OIE's ability to exercise autonomy in the selection of its evaluation studies. Moreover, OIE budgets significantly underestimate the time needed for managing evaluations and other evaluation activities.

Self-evaluations cover public sector investment, lending and technical assistance, policy-based loans, and country strategy programmes. There appears to be little incentive to complete self-evaluations in a timelier manner.


Evaluation still appears to be perceived as a threat rather than an opportunity for learning.

According to the Evaluation Policy (p. 15), "The President, with the support of the Advisory Management Team, is accountable for encouraging and providing an environment where evaluation adds value to the overall management of CDB's activities and fosters a culture of critical analysis and learning". But in the CDB a learning culture appears to be still in its infancy, and the leadership role as expressed in the Evaluation Policy is underdeveloped. The Panel noted a number of problems, largely to do with delays in exchanging comments on the various reports, as well as the paucity or lack of monitoring data. The added value that evaluation might offer to the operations area is ill recognised. Moreover, the link between self-evaluations as the building blocks for independent evaluation is not apparent. Thus there is little incentive or management focus to drive any change to current practices. In other words, there is a lack of leadership to advance a learning environment in which evaluation can play a major part.


Part Three: General Conclusions and Recommendations

To conclude, with regard to the Evaluation Policy and OIE's independence, our Review finds that over the past few years the CDB has succeeded in establishing an independent evaluation office that is credible and respected. It reports to a Board Committee and is thus organisationally independent from CDB management. Its work is grounded in an Evaluation Policy agreed by the Board and the CDB that reflects internationally recognised principles and good practices. The Policy sets out a broad scope of responsibility for the OIE which, however, seems over-ambitious given current resource constraints. The OIE clearly has both an accountability and a learning function; the latter should support the development of an organisational learning culture. (So far, the uptake of recommendations and key lessons has not been systematically monitored or recorded.) In general, on the issue of independence, we can conclude that the OIE meets the criteria for organisational and behavioural independence and is protected to a certain degree from external or contextual influences.

However, as the independent Advisory Committee for Development Impact has said, "independent evaluation needs to have clout … credibility of evaluation hinges on public perceptions as well as on reality."125

We therefore highlight a few potential threats, even though there is no evidence to suggest they are in any way real at present; it would be in the OIE's and the CDB's interest to have them clarified sooner rather than later. For instance:

any delays incurred in reporting self- and independent evaluation results to the Board could be interpreted as operational interference.

Similarly, there is no agreed process for dealing with conflicts between the OIE and management over reported results; it is simply expected that any disagreements will be recorded in the management response.

Another possible threat is the Head of the OIE's lack of complete autonomy over staff matters: recruitment, termination, continuation, and professional development. The Policy is not sufficiently clear about who has the final word in the case of disagreement.

And finally, on resources, our Review accepts the limited funds available to the CDB and the fact that the OIE's budget is not independent but operates within the Bank's budgetary limitations. Nevertheless, we feel that more flexible arrangements could be devised that would allow less restrictive and timelier access to funds.

With regard to governance, our Review has highlighted the difficulties the OAC faces in not receiving the background papers for its meetings in sufficient time to do them justice. Moreover, these documents tend to be very lengthy and not necessarily "reader friendly". The OAC's oversight responsibility is likely to be weakened, and we can already see some indication of this. For instance, requests for systematic follow-up on management actions resulting from evaluation findings have not been answered. Nor is there a standing item for this on the OAC agenda, so such requests can easily be passed over and forgotten. The broadened responsibilities now given to the OAC also mean that there are many competing entities trying to secure the OAC's attention. There is now provision for the OAC to call on consultants for help, which may strengthen the OAC in its oversight responsibilities.

Furthermore, in their capacity as members of the Board, OAC members should stress the urgency of developing evaluation and monitoring capacity in the BMCs, since this gap has a direct impact on OIE and CDB evaluations.

With regard to the OIE’s performance, we have to respond to the questions raised in this Review’s Terms of Reference, which basically mean answering two main questions: Is the OIE doing the right thing? And is it doing it in the right way?

125 Picciotto, R. (2008) Evaluation Independence at DFID: An Independent Assessment prepared for the Independent Advisory Committee for Development Impact (IACDI), p. 4.


There is no doubt that the decision to establish a credible, independent evaluation function in the CDB was the "right thing" to do; "effective and useful evaluation and oversight activities can assess development effectiveness, hold the organisation accountable for results, and improve operational performance".126 It is also a policy of the MDBs to have such a function, and the CDB has now aligned itself with international standards and practice.127 The question now, therefore, is: is the OIE going about it in the right way?

The OIE has taken the “right” steps to improve the engagement and interest of the OAC and CDB senior management from selecting the topics for its evaluations through to finalising the conclusions and recommendations in a collaborative spirit. It falls short of taking the messages emerging from the studies to “outsiders” such as those responsible for implementing CDB interventions in the BMCs.

In its oversight role, we feel that the OIE has paid insufficient attention to the actual utilisation of evaluation; it is beyond its responsibility to see that action is taken, but it is certainly within its remit to record how, and how well, the lessons drawn have been taken up and used. With regard to its oversight of the self-evaluations (the validation process), the OIE has attempted to improve dialogue with the operations departments and to demonstrate the dual function of oversight and learning. It is now emphasising the learning aspect by providing tools and guidance on how to draw out lessons and integrate them into future planning. More recently, it has sought ways to provide more formalised training on evaluation by working with the corporate planning services and technical assistance department to develop courses that show how, where and when evaluation plays its part within the MfDR framework.

However, one of the challenges in evaluation management is balancing independence with facilitating buy-in and ownership at the same time. It is a fine line to walk, and depends to a large degree on the climate between management and the head and staff of the independent evaluation unit in defining the tone of the collaboration. In practical terms, for the CDB this means defining the role of the OIE in relation to the self-evaluations performed by the Projects and Economics Departments. The change from the EOV to the OIE made this role change quite clear: the OIE no longer has responsibility, together with the operational departments, for project monitoring and planning data needs. On the other hand, to improve understanding and learning, there needs to be an interface between evaluation and management. At present, the OIE's dual role (its advisory role in relation to operations and its strategic role towards the OAC and senior management) has not been satisfactorily resolved. Operational staff still do not appear to see any urgency in producing their completion reports, or to appreciate what lessons might be drawn from such reflection. The OIE is doing its best to support "learning" whilst at the same time keeping an arm's length. The greatest challenge the OIE faces in its new capacity is the slow development of an organisational learning and evaluation culture.

A Learning and Evaluation Culture

Evaluation utility depends on the engagement of evaluation users – those who should benefit from the knowledge generated through the studies. Useful evaluation therefore depends to a large degree on the development of an evaluation and learning culture and on how well these are embedded in the organisation. This means that the organisation recognises and appreciates evaluation's role and the functions it can have, particularly in helping it understand what it is achieving and where and how improvements can be made. In short, the added value that evaluation can bring to the organisation is its ability to draw out the important lessons that can help improve the organisation's performance.

However, whilst CDB senior management shows all the signs of embracing evaluation as an important strategic tool, there still appears to be some apprehension about receiving criticism, however constructive this might be.

126 CDB (2011) Evaluation Policy (p. 2).
127


The OAC has already affirmed its interest in learning what can be "put right the next time around". In considering accountability, the committee is asking for a more strategic approach to learning and sharing knowledge based on evidence. The CDB also shares the development goals of other MDBs, that is, "to end extreme poverty and promote shared prosperity". This means looking for new forms of problem-solving and for ways to create a "development solutions culture". Hence there is an interest in learning from experience and in exchanging knowledge about what works. This implies balancing accountability and learning: making sure they are not seen as opposites, but as compatible entities. This greater emphasis on learning requires a reframing of CDB's thinking and of how it deals with the constructive criticism that evaluation can offer.

A weak evaluation culture is not unique to the CDB. The following extract, from a review of the evaluation function of UNRWA, illustrates how such weakness can manifest itself:

Weak evaluation culture

27. While some stakeholders seem keen on evaluation, the overall evaluation culture in UNRWA is weak. There are several aspects to it.

28. First, many of the interviewees stressed that UNRWA has a weak learning culture, which stems from a number of factors. One reason given relates to the cultural virtue of oral communication, which makes conveying documented experiences challenging. Another reason is language: a majority of UNRWA's national staff is not fluent in English, and evaluation reports are mostly in English. Furthermore, criticism, even if constructive, is, according to some interviewees, mainly perceived as a threat and not as an opportunity. Finally, learning is also affected by a very basic constraint: lack of time.

29. Second, there is a weak knowledge management system for systematically collecting and sharing experience and lessons learned in UNRWA. Communities of practice do not exist within UNRWA; several interviewees mentioned the use of knowledge networks outside of UNRWA, i.e. communities of practice managed by other agencies. Also, accessing evaluation reports is not easy. The UNRWA website does not provide access to evaluation reports and, while the Agency's Intranet has a site for evaluation reports, it is not a complete repository; the Evaluation Division does not know exactly how many decentralized evaluations are being produced. In addition, there are only a few evaluation plans at the level of field offices or departments.

30. Third, the Panel found that decentralized evaluations are, at least partly, perceived as donor-driven accountability instruments rather than as learning tools. In that sense, evaluations are managed as bureaucratic requirements, thereby weakening the learning dimension.

31. Finally, the sensitive political context in which UNRWA operates may also discourage a strong evaluation culture as evaluative evidence can sometimes be overridden by political considerations.14 The Panel was repeatedly told that given the political context, any change is a challenge.

14 An example mentioned to the Panel was the evaluation of the Qalqilya Hospital (2013), which concluded that the Hospital should be closed. However, for political reasons the recommendation was not taken up.


Part Three: Recommendations

(Here is a list of some possible recommendations, to be discussed and developed within the Review Panel initially and then proposed for discussion together with the OIE.)

OAC’s oversight responsibility needs to be strengthened (possibly) Review Evaluation Policy to redress gaps OIE to develop 5 year strategy providing step-by-step work programme, budget and

timeframe for implementing Evaluation Policy, Stronger leadership from President to provide conducive climate for promoting added

value of evaluation to overall management and fostering a culture of critical analysis and learning. Stronger support from Advisory Management Team for evaluation function by emphasising the accountability function of CDB managers for performance results and for devising incentive schemes to support accountability function.

Set up committees (advisory groups) to accompany study-specific evaluations as a means of reinforcing ownership

The OIE could contribute to developing a learning culture within the CDB by adopting the role of critical friend in its dealings with the CDB operations departments and the CDB more generally. Valuable insight can be gained from a focused conversation with the client about aspects of the evaluation that went well and those that did not go so well. Some meetings might include all relevant stakeholders in addition to the main client. This feedback can then inform a subsequent internal project team debrief (e.g. identifying internal professional development or process improvement needs). Input from all perspectives can inform planning for future projects. A summary of these discussions can then be circulated so that everyone can use it for their own internal or external quality improvement purposes.

Rather than asking "what went wrong?", the conversation can turn to "what surprised us, what we would do differently, what we expected to happen that did not, and what we did not expect to happen that did". This is a better means of getting at the negative aspects without placing blame.

The OIE could train up and engage "champions" within CDB operations departments to help demonstrate evaluation utility and provide "on the job" training in self-evaluation to colleagues. A quality control group could also be set up, as Picciotto suggested for DFID, to carry out and develop the work of Quality Assessment and, particularly, Quality at Entry monitoring.

The OIE should be given the resources to build M&E capacity in BMCs as a priority, possibly in partnership with other MDBs, development organisations and the countries themselves.

Possibly criticise the over-emphasis on using the five DAC criteria, which have been in use for more than 15 years without going through any major revisions. Given their importance and level of influence in the field, it is pertinent that independent professionals take a critical look at them, especially since many scholars and practitioners consider that the quality of evaluations in development aid has been quite disappointing (Chianca, T. (2008) "The OECD/DAC Criteria for International Development Evaluations: An Assessment and Ideas for Improvement", Journal of MultiDisciplinary Evaluation, 5(9), March 2008, ISSN 1556-8180, http://evaluation.wmich.edu/jmde/).

Regarding the validation of self-evaluations: it is recommended that a leaner format be developed for the PCVRs, repeating less of the content of the original PCR and focusing


on the validity of the PCR against a limited number of key issues to be assessed for each PCR. The PCVRs are also very fully written out, which may not be necessary for this type of document; a more tabular format with more succinct statements could lead to a leaner process for producing the PCVRs, without any loss of usefulness. The "PCR checklist" would be a good starting point for this.

The link between self-evaluations, validations and independent evaluation is not clear at present, nor is the link between self-evaluations and QaE documents; one wonders what all the effort on the operations side is for. This is a real issue: operations seem to do a lot of interesting and worthwhile things, but there is a lack of coherence. (This observation is based on the documents alone; no interviews were conducted to obtain a broader picture.)

The EIB evaluation unit was criticised for this in the past too. Since then, it has started to include "younger" projects in its samples (sometimes still ongoing). It also redoes the portfolio analysis just before the finalisation of the report to see whether things have changed; and the services can, of course, indicate in their response whether things have indeed changed over time.

Recommendations for improving the process for study approval and funding

Give recommendations on priorities for OIE work

Funding should preferably come from the administrative budget. Unused monies could then be released in the annual budgetary reviews, but this should have no effect on the budget for subsequent years. The Panel is surprised to find that a Board-approved OIE work programme and budget is inadequate.


The Panel nevertheless encourages the creation of such a quality control unit. Its role cannot be fulfilled by the OIE, as it lies outside the OIE's scope and present capacity, even though the OIE could have an advisory or methodological role.

Independence of the Office of Independent Evaluation (OIE)

Independence is absolutely central to the integrity and trustworthiness of evaluation. It is an agreed requirement within the development agencies and in the evaluation community as a whole. In examining the issue of independence and good practice, the reviewers are guided by the Evaluation Cooperation Group's recommendations on good practices, the CDB's Evaluation Policy, and the 2011 consultancy review of independence relative to the CDB's evaluation and oversight division.128 The appraisal is based on a comparison of the ECG's recommendations on independence129 with the current OIE status.

OIE and Independence: Recommendations from the Evaluation Cooperation Group (ECG)

The ECG’s considers the issue of independence according to three specific areas: organisational, or structural independence, behavioural, or functional independence and protection from outside interference, or operational independence.

Organisational independence ensures that the evaluation unit and its staff are protected against any influence or control by senior or line management, and have unrestricted access to all documents and information sources needed for conducting their evaluations; also that the scope of the evaluations selected can cover all relevant aspects of the institution.

Behavioural independence generally refers to the evaluation unit's autonomy in setting its work programme, in selecting and conducting its evaluations, and in producing quality reports that can be delivered without management interference.

Protection from outside interference refers to the extent to which the evaluation function is autonomous in setting its priorities, conducting its studies and processes, reaching its judgments, and managing its human and budget resources, without management interference.

Conflict of interest safeguards refer to protection against staff conflicts of interest, whether arising from current, future or prior professional and personal relationships and considerations, or from financial interests; provision for these should be made in the institution's human resource policies.

The OIE’s Independence in Practice

Organisational / structural independence

On the whole, the Panel acknowledges and commends the efforts being made by the CDB to assure the OIE's organisational independence. The CDB's Evaluation Policy provides for the OIE's organisational independence from line management, and the interview data suggest that there is also wide acceptance and acknowledgement of why the OIE should have such independent status. Table 1 below provides our overall assessment of this aspect of the OIE's independence when compared with the ECG recommendations.130

128 Osvaldo Feinstein & Patrick G. Grasso, Consultants (May 2011) Consultancy to Review the Independence of the Evaluation and Oversight Division of the Caribbean Development Bank.
129 ECG (2014) Evaluation Good Practice Standards, Template for Assessing the Independence of Evaluation Organizations, Annexe II.1.
130 Based on ECG (2014) Template for Assessing the Independence of Evaluation Organizations, Evaluation Good Practice Standards, Annexe II.1.


Table 1: OIE organisational independence compared with ECG recommendations

Aspect: The structure and role of the evaluation unit
Indicator: Whether the evaluation unit has a mandate statement that makes clear its scope of responsibility extends to all operations of the organization, and that its reporting line, staff, budget and functions are organizationally independent from the organization's operational, policy and strategy departments and related decision-making.
Assessment: Partially complies – the Policy is broad enough to cover the full range of MDB types of evaluation. However, in practice this would not be possible without additional human and budget resources.

Aspect: The unit is accountable to, and reports evaluation results to, the head or deputy head of the organization or its governing Board
Indicator: Whether there is a direct reporting relationship between the unit and (a) the Management, and/or (b) the Board, or (c) the relevant Board Committee of the institution.
Assessment: Complies – the OIE reports to the Board of Directors (BoD) through its Oversight Assurance Committee (OAC).

Aspect: The unit is located organizationally outside the staff or line management function of the program, activity or entity being evaluated
Indicator: The unit's position in the organization relative to the program, activity or entity being evaluated.
Assessment: Complies – the OIE is located outside, and is therefore independent of, CDB line management.

Aspect: The unit reports regularly to the larger organization's audit committee or other oversight body
Indicator: Reporting relationship and frequency of reporting to the oversight body.
Assessment: Complies – the OIE reports five times per year to the OAC. Board approval for an additional executive meeting between the Head of the OIE and the OAC at least once per year was given in October 2015.

Aspect: The unit is sufficiently removed from political pressures to be able to report findings without fear of repercussions
Indicator: Extent to which the evaluation unit and its staff are not accountable to political authorities, and are insulated from participation in political activities.
Assessment: Complies.

Aspect: Unit staffers are protected by a personnel system in which compensation, training, tenure and advancement are based on merit
Indicator: Extent to which a merit system covering compensation, training, tenure and advancement is in place and enforced.
Assessment: Partially complies – these matters are covered by CDB human resource policy. However, the skill needs of OIE staff ought to be regularly reviewed in light of its move towards higher-level evaluations. The appraisal of skill needs and the hiring of relevant staff should be completely under the authority of the Head of Evaluation; this is not sufficiently clear in the Policy or other documents we reviewed.


Aspect: The unit has access to all needed information and information sources
Indicator: Extent to which the evaluation unit has access to the organization's (a) staff, records, and project sites; (b) co-financiers and other partners and clients; and (c) programs, activities, or entities it funds or sponsors.
Assessment: Complies – the available evidence suggests that there is no reason to doubt such access. But systematic and easily accessible documentation is lacking in the CDB; it is one of its weak points. Delays in getting hold of the relevant documents can have consequences for the timeliness of evaluation studies.

However, independence should not mean isolation: there appears to be a detachment between the OIE and the CDB that is of concern to the Panel; on the one hand, between the OIE and operations staff, and on the other, in terms of the structural arrangements between the OIE and senior management.

12) In agreeing that the OIE should concentrate on strategic, thematic and in-depth evaluations, responsibility for project monitoring and evaluation was given over to operations. The division is clear and respected. However, it has its drawbacks. With the OIE no longer systematically involved at the front end of project design, the monitoring data needs are likely to be poorly defined, and weak monitoring data will contribute to weaker evaluations. (More on this point under the heading "Self- and Independent Evaluations".)

In the reviewers' opinion, it is a common misunderstanding to assume that providing evaluator advice on monitoring and evaluation data will compromise evaluator independence. On the contrary, evaluation input into project design is essential to ensure that the logic, indicators and data needs are addressed so that, at some future point in time, an evaluation of the achievements can be empirically grounded.

This is not to say that the OIE no longer has any influence at the front-end design stage; it has merely shifted the point of focus. The OIE is now systematically providing such input more generally to the corporate planning teams for the tools and systems they are developing to support the MfDR framework. The monitoring data for projects and their implementation should be improved once the Project Performance Evaluation System (PPES) and the Portfolio Performance Management System (PPMS) are updated and operational.

13) In the second place, the OIE has limited formal access to the Advisory Management Team (AMT) weekly meetings, where the President and senior management gather to exchange up-to-date information on the dynamics of CDB policy and practice. The OIE is not regularly invited in any capacity to these meetings, or given a copy of the agenda or minutes; it is occasionally invited to attend in order to discuss an evaluation report or management feedback. For the OIE, this means that it is unlikely to pick up on the 'when' and 'what' of key decisional issues, or to provide input into the discussion based on evaluative information. Its observer status at Loans Committee meetings, and its role as a participant-informant at OAC and BoD meetings and discussions, do not necessarily provide the same insight into the dynamics of management actions and decisions.

To respond to this situation, the President has agreed to meet regularly with the Head of the OIE in order to keep him up to date with CDB strategic thinking. This is a welcome change.

OIE Independence and Behavioural Issues

The Panel has concerns about some behavioural issues. For example, through both the interviews and the documentary review, we learned of considerable delays in processing both the


independent evaluation reports and the OIE's validations of the CDB's self-evaluations. Delays generally arise in receiving feedback on the independent reports, first from the relevant operational department and then from the AMT, and then in providing the OIE with a management response that is initially drafted by operations staff before being reviewed by the AMT. (OIE reports cannot be submitted to the OAC without the relevant management response.) This two-layer process for preparing submissions to the Board is inefficient and could potentially become a threat to evaluation's independence by delaying the OIE's timely reporting to the OAC.

OIE validations of the CDB self-evaluations are also submitted to the OAC, but it is in both sides’ interest to clear up any misunderstandings beforehand. Despite attempts to improve the timeframe for completing these validations, delays are more the norm than the exception. Table 2 below summarises our assessment of the behavioural aspects of independence.

Table 2: OIE and Behavioural Independence

Aspect: Ability and willingness to issue strong, high quality, and uncompromising reports
Indicator: Extent to which the evaluation unit (a) has issued high quality reports that invite public scrutiny (within appropriate safeguards to protect confidential or proprietary information and to mitigate institutional risk) of the lessons from the organization's programs and activities; (b) proposes standards for performance that are in advance of those in current use by the organization; and (c) critiques the outcomes of the organization's programs, activities and entities.
Assessment: Partially complies – the paucity of data and documentation sometimes hinders the quality of reports. The OIE emphasises the learning part of evaluation and is cautious in its criticism, recognising that management is going through a transitory stage and can still be overly defensive.

Aspect: Ability to report candidly
Indicator: Extent to which the organization's mandate provides that the evaluation unit transmits its reports to the Management/Board after review and comment by relevant corporate units, but without management-imposed restrictions on their scope and comments.
Assessment: Partially complies – reporting to the Board is sometimes compromised by delays in the review/comment process between the OIE and the CDB. Any delay in the production of a Management Response also impairs the timely submission of a report to the Board, since the two have to be submitted together.

Aspect: Transparency in the reporting of evaluation findings
Indicator: Extent to which the organization's disclosure rules permit the evaluation unit to report significant findings to concerned stakeholders, both internal and external (within appropriate safeguards to protect confidential or proprietary information and to mitigate institutional risk); and who determines the evaluation unit's disclosure policy and procedures (Board, relevant committee, or management).
Assessment: Partially complies – the OIE conforms to the CDB's disclosure policy. However, the dissemination of evaluation findings appears currently restricted to website publication and reports to the Board. A more targeted communication strategy, including other key stakeholders such as project implementers in the BMCs, should be developed and put in place.

Aspect: Self-selection of items for work program
Indicator: Procedures for selection of work program items are chosen, through systematic or purposive means, by the evaluation organization; consultation on the work program with Management and Board.
Assessment: Complies – the OIE ensures that its work program is drawn up after consultation with both CDB Management and the Board to seek their input on relevant topics and themes.

Aspect: Protection of administrative budget, and other budget sources, for the evaluation function
Indicator: Line item of administrative budget for evaluation determined in accordance with a clear policy parameter, and preserved at an indicated level or proportion; access to additional sources of funding with only formal review of the content of submissions.
Assessment: Partially complies – the administrative budget supporting OIE work is protected. Access to additional sources of funding is possible if well argued and justified, but the approval process is complex and inefficient (see Figure 1 below).

OIE and Protection from External Influence or Interference

Our overall assessment is provided in Table 3 below. The OIE's independence in the design, conduct and content of its evaluations does not appear to be subject to any external interference. But securing funding from sources outside the OIE's administrative budget, i.e. from the Social Development Fund, is an unduly complex and long process. As such, we consider that the current funding process can affect the OIE's choice with regard to the type of evaluations it can undertake. (See Figures 1 and 2 below.)

Table 3: OIE and its Independence from External Influence or Interference

Aspect: Proper design and execution of an evaluation
Indicator: Extent to which the evaluation unit is able to determine the design, scope, timing and conduct of evaluations without Management interference.
Assessment: Complies – however, within the limits of the restricted human and financial resources available.

Aspect: Evaluation study funding
Indicator: Extent to which the evaluation unit is unimpeded by restrictions on funds or other resources that would adversely affect its ability to carry out its responsibilities.
Assessment: Partially complies – the OIE must work within the limits of the agreed administrative budget wherever possible. If additional resources are needed for studies, it must seek alternative funds elsewhere. The budget limitations can have an effect on the type of evaluations undertaken and therefore on its independence in terms of choice.

Aspect: Judgments made by the evaluators
Indicator: Extent to which the evaluator's judgment as to the appropriate content of a report is not subject to overruling or influence by an external authority.
Assessment: Complies – the evidence available suggests that the Board and Management accept the evaluators' independent interpretation and conclusions. Management responses are agreed to be the accepted place to raise any difference of opinion.

Aspect: Evaluation unit head hiring/firing, term of office, performance review and compensation
Indicator: Mandate or equivalent document specifies procedures for the (a) hiring, firing, (b) term of office, (c) performance review, and (d) compensation of the evaluation unit head that ensure independence from operational management.
Assessment: Complies – the Head of the OIE is appointed by the CDB President in agreement with the OAC for a 5-year period, renewable once. The Head can be removed from office by the President or the Board, but only with the agreement of both parties. However, the Head reports to the President for all administrative and personnel matters. Even though this was not recommended in the Osvaldo Feinstein & Patrick G. Grasso report on independence in 2011, the BoD accepted the CDB's reasons for keeping this arrangement (e.g. most OAC members are non-resident and cannot oversee day-to-day work).

Aspect: Control over evaluation unit staff
Indicator: Extent to which the evaluation unit has control over (a) staff hiring, (b) promotion and pay increases, and (c) firing, within a merit system.
Assessment: Partially complies – all OIE staff members are treated in the same way as other CDB staff. The Head has limited control over the hiring, firing or promotion of OIE staff.

Aspect: Continued staff employment
Indicator: Extent to which the evaluator's continued employment is based only on reasons related to job performance, competency or the need for evaluator services.
Assessment: Partially complies – whilst the EP is clear about procedures for hiring, firing and promotion, all of which must conform with CDB human resource policy, nothing is said about how to resolve a difference of opinion between the CDB and the Head of the OIE over continued staff employment where the technical or interpersonal competencies needed to meet new demands have changed.

Avoidance of Financial, Personal or Professional Conflicts of Interest

This particular aspect refers to the organisation's Human Resources Policy; there must be provisions in place to protect against actual or potential conflicts of interest. The Panel requested, via the OIE, evidence from human resources on any such provisions, but did not receive an answer. It must be assumed that this aspect of independence, past or present, does indeed form part of normal CDB Human Resource Policies.

To conclude: the Panel is impressed with the measures the CDB has taken to assure the organisational independence of the OIE. Its independent status is accepted and respected by senior and line management. The OIE's budget is not independent of the overall CDB administrative budget, and this affects its choice of evaluation types and approaches. Some of the behavioural issues affecting independence are also of concern, especially the delays in the exchange of documents between the OIE and the operations departments, which have a direct effect on timely reporting to the OAC. As for protection from outside interference, our concerns are largely to do with the OIE's independence over staffing issues; there are potential loopholes in current arrangements that could undermine the OIE's autonomy over its staff.

OIE’s Strategy, Work Practices and Work ProgrammeThe OIE has had to develop a plan to implement the Evaluation Policy. This raises such questions as what are the priorities and what is the timeframe for achieving which activities? These were partially addressed in the OIE work programme and budget 2012 to 2014, but it proved to be over ambitious. Much of the period 2012 to 2015 has therefore been taken up with preparing OIE’s shift in focus from project-based evaluations to the high-level thematic and in-depth strategic studies. This has meant adopting a three-way approach; (1) for self-evaluations, reducing its time input to support the process and (2) for independent evaluations, taking stock of the gaps in coverage and expertise, and (3) networking to share experiences with centres of expertise and align OIE with international practices. In addition, amongst other duties, it has been supporting the development of MfDR tools and systems such as the Project Performance Assessment System by providing advice and input on programme logic and monitoring needs. The OIE plans to conduct 2-4 high-level studies per year from 2016. The OIE has also chosen to increase the involvement of its professional staff in conducting independent evaluations. Outsourcing is still needed; when the study is funded by the SDF, when time is limited and when specific expertise is needed.


But the plans appear to place little emphasis on the activities associated with evaluation management (e.g. knowledge management) and the time they require. Other time demands mentioned in the previous sections, such as delays in completing reports, validation work, etc., have also affected the OIE's plans. The more recent work plans set the task of delivering utility-focused and timely evaluations, but they lack clarity on how the OIE proposes to surmount the time and data issues, which are far from new. In short, they lack a theory of change and a timeline. The challenges that have to be dealt with to enable the OIE to move up the MDB evaluation pyramid131 are brought out in the remaining sections of this Review, not least given the limited resources available.

To conclude: the OIE has made a first step in proposing a strategy for establishing itself as an independent evaluation resource. But its strategy lacks a theory of change and a prioritisation of tasks, which should include more emphasis on evaluation management activities.

The Value / Usefulness of OIE's Independent Evaluations

Evaluation is a powerful tool that can provide useful, evidence-based information to help inform and influence policy and practice. But useful evaluations depend not only on the evaluators' skills, but on several other important factors as well: (1) planning evaluations to be relevant to the priorities of the organisation's work, and delivering their results in time to be useful; (2) the degree of consultation with, and ultimately ownership by, those who seek evaluative information; (3) the tools used to support the evaluation process per se; and (4) the credibility and quality of the evaluation products.132

1. Planning relevant and timely evaluations

The OIE is now working on a 3-year rolling work plan that sets out the broad areas for enquiry. So far, there are no agreed criteria for selecting the specific topics for independent evaluation, although the priorities tend to reflect those of the CDB's strategic plan. Decision-making is instead rather arbitrary, based on a process of dialogue between the OIE and the CDB, and between the OIE and the Board.

One of the OIE’s two objectives for 2015 therefore, was to define a work plan and agree priorities based on an approach that is “utilisation-focused”. This means that the studies are selected and planned to be relevant and useful to the organisation’s needs.

The OIE has achieved this objective with respect to its latest studies, which concern the Social Development Fund (SDF) Multicycle 6&7 Evaluation, the Haiti Country Strategy evaluation, and the evaluation of the CDB's Policy Based Operations. Each of these three has been planned to deliver its results in time to provide the CDB Board of Directors with relevant information for negotiating the next round of funding. In spite of some delays due to a myriad of reasons, not least the extra effort needed to secure essential data, the studies are expected to deliver on time.

The processes for agreeing the OIE's work plan and specific evaluations on the one hand, and for securing alternative funding on the other, are shown in Figure 1 below. The Panel was surprised to learn how bureaucratic (in terms of the internal approval process) and inefficient (in view of the time it takes) the process seems to be. The concern here is that such a process could pose a threat to assuring the Board of "timely studies".

131 US Treasury Report to Congress on Evaluation Standards and Practices at the Multilateral Development Banks (September 2014, Annex C).
132 These aspects reflect the principles and good standards of the Evaluation Cooperation Group and the evaluation community more generally.


Figure 1: Selection of Evaluation Topics and Funding Source

[Flowchart, summarised: the evaluation topic is selected in consultation with CDB Operations and the OAC/Board; the OIE drafts Terms of Reference / an Approach Paper, which undergoes internal review before the detailed ToR or final Approach Paper is finalised and submitted to the OAC/Board for approval. On the funding track, studies financed from the Board-approved 3-year Work Programme and Budget (OIE administrative budget) require Board approval if above USD 150,000, and Board notification only if USD 150,000 or below; studies financed from the SDF require a TA Paper (similar in content to the Approach Paper but in a different format) and approval by the internal Loans Committee. In both cases the OIE then selects and contracts consultants, if any. The annual OIE report and work plan are submitted to the OAC.]


2. Consultation and ownership

"The credibility of evaluations depends to some degree on whether and how the organization's approach to evaluation fosters partnership and helps build ownership and capacity in developing countries." (ECG good practices)

The OIE engages with the OAC, CDB senior management and operations in agreeing its 3-year work plan and then in selecting the specific topics and themes. It also discusses the evaluation approach paper (design and implementation plan) with the CDB and the OAC before completing the final version. However, preliminary and final drafts of the report are submitted only to CDB line and senior managers, for comment and the correction of factual errors; only final versions are given to the OAC. A series of discussions is held, first with the CDB and then with the OAC, on the results and their implications. Discussions with the OAC are more limited due to the overburdened agenda of OAC and Board meetings, as previously discussed.

In short, the OIE is to be commended for following the recommendations of professional good practice and standards on participative approaches; it has succeeded in introducing a modus operandi that involves the key players in the selection of evaluation topics, the evaluation designs, and their results. Figure 2 below provides an overview of the evaluation implementation and stakeholder engagement processes.

Figure 2: Evaluation Study Implementation and Feedback Loops

[Flowchart, summarised: evaluations are carried out under one of three arrangements – (A) fully outsourced to external consultants with oversight by the OIE, (B) conducted by OIE staff, or (C) conducted jointly by external consultants and the OIE. Starting from the Terms of Reference, an Inception Report / Approach Paper is prepared, covering the detailed evaluation plan (tools, timeline, etc.) and logistics. Data collection and analysis follow, with a presentation/workshop on interim findings and conclusions for immediate feedback and validation, and a summary and presentation for workshop discussion with the CDB. The draft final report is submitted to the OIE, with review loops between the OIE and the CDB (potentially also the BMC). Once the OIE approves the final report, it goes to CDB senior management for the Management Response; the final report and Management Response are considered by the AMT and then submitted together to the OAC/Board for endorsement, after which the report is prepared for disclosure and dissemination.]


Notes to Figure 2

11. The OIE informed the Panel that this is an abbreviated version: there are additional steps (secondary processes) when evaluations are procured (tendering or single source), when there are additional review loops, updates to the OAC, etc.

12. The OAC may also decide to return the report to the OIE, the Panel was informed, or to demand specific actions from Management based on the report.

This process is engaging and appears to have secured senior management and OAC interest and buy-in, as witnessed in the latest studies. But there is a downside too: the process takes much time and, in our view, is partly unnecessary. The Panel appreciates that staff from operations as well as the AMT may both want to confer on an appropriate management response, but this should not be necessary for reviewing an independent report for factual errors. The two-phase approach seems somewhat inefficient and unnecessary, in our opinion.

Contact between the OIE, the CDB and/or the OAC during the actual study implementation is most often restricted to the occasional progress report, particularly when studies run behind time. There is no "accompanying group" for individual studies, which would include both internal and possibly external partners. Such "advisory groups" have shown their worth in a number of contexts for improving buy-in and providing strategic input as well. The OIE does, however, arrange discussions for reflecting on emerging findings, but we are not sure how systematic this feedback loop is.


232

Page 233: f01.justanswer.comf01.justanswer.com/nFcTOGCI/External+Review+of+the+OIE+v.20.3.1… · Web viewf01.justanswer.com

More generally, outside of an evaluation study the OIE has limited dealings with operations. The OIE has an advisory role, providing help, particularly training, guidelines and tools to support self-evaluations. We are nevertheless concerned about the apparent distance between the two and how this has affected the perceived value of evaluation. (For more on this point, see the section below on "Self- and Independent Evaluations".)

But the Panel also wishes to stress that this is not the case for newly appointed senior managers. A much more open attitude to evaluation, and appreciation of its potential value, was evident; they expressed interest in drawing out important lessons on what works, how, for whom and under what conditions. In one case, interest was followed up in practice: the OIE was recently invited by a senior manager to share evaluative knowledge and experience with his staff regarding policy-based operations.

Certainly, we can say that overall the key stakeholders within the CDB are sufficiently integrated into the evaluation process to foster their buy-in and ownership. More generally, though, we feel that the utility of independent evaluations can be improved by fostering a supportive climate that wants to learn through calculated trial and error. The constructive criticism that evaluation can offer adds value to understanding the strengths and weaknesses of such strategies. This, however, cannot be done overnight.

3. Tools to support the evaluation process

So far, during this transitional phase, the OIE has mainly focussed on improving the tools that support the operations areas' self-evaluations. This has left the OIE with little time to produce the checklists or tools to support its own studies. There are plans to develop an OIE Manual to guide and support the independent evaluation process. Such plans should be encouraged, as these documents will form a very important part of training, particularly for newcomers to the OIE team.

In the meantime, the OIE and operations staff refer to the Performance Assessment System (PAS) Manuals for evaluation activities. The manuals are based on DAC criteria and ECG principles. Much emphasis is given to the rating system and to how and what should be rated. However, we find them lengthy, unwieldy and overcomplicated. Moreover, such manuals should be used for reference; they cannot and should not replace first-hand training in how to plan, conduct and manage the evaluation process.

Quality Assessment (QA) and Quality at Entry (QaE)

There was a transition period between 2012 and 2014 to establish the OIE. Work on the PAS, QaE, PCRs and ARPP, which had started earlier, was therefore completed after the OIE came into existence, but this work effectively had no formal 'home' in operations. The Panel was told that there had been some discussions about creating a Quality Assurance unit within CDB (OPS), but the current status is unclear.

The QaE Guidance Questionnaire was developed earlier and completed by the OIE. It was used to assess the documents that came to the OIE for comment at the Review Stage. The results were then sent to the Portfolio Manager/Project Coordinator, indicating any gaps or issues that needed to be addressed or clarified. QaE Guidance Questionnaires were developed for all the Bank's lending products and CSPs, and to assess the quality of supervision.

After the QaE was launched bank-wide, several operations officers saw the merit of the QaE Guidance Questionnaire in the field and adopted it as a tool for use during appraisal missions to cross-check and test their data collection and analysis.

OIE’s use of the QaE was discontinued in 2014 due to limited resources and a stronger focus on evaluations. It still sometimes comments on specific appraisals, but very selectively.


Both QaE and QaS (Quality at Supervision) are also addressed in the PAS Manuals. In addition, the QaE and PAS have been incorporated into Volume 2 of the Operations Manual (OPPM).

The Review Panel assessed the QaE forms. They are relatively standard, adapted to the specificities of the CDB, and contribute to judging a project's expected quality in a relatively objective way. As such, they are helpful, as a benchmark, in the ex-post assessment of projects.

The Panel considers that the lack of an established Quality Unit in the CDB (and independent from OIE) is a weakness that should be addressed in the near future.

4. Credibility and Quality of Evaluation Products

As with many other MDBs, evaluation activities include both independent and self-evaluations; the latter take the form of completion reports on operational projects and country strategy programmes and are prepared by operations staff. The OIE then validates the quality of such reports. The self-evaluations should inform the more strategic studies conducted independently by the OIE. (More on the relationship between the two is provided later in this Review.)

An independent evaluation is processed as follows: the OIE prepares an Approach Paper (AP) for approval by the OAC. If the study is to be outsourced, the AP becomes the basis for the Terms of Reference (ToR), which, subject to the size of the budget, may be put out to tender. The contracted evaluator then prepares an Inception Report (IR) after some desk and field research has taken place; this intermediate report is not produced if the OIE itself is conducting the evaluation. Sometimes a Progress Report is submitted, but otherwise the next stage is the delivery of the final report in successive drafts. (Assessments are like evaluations but more limited in scope and depth of analysis.)

Since 2012, the OIE has produced a range of studies and approach papers. This review is based on those listed below, as provided by the OIE, covering the period from May 2012 to December 2015: 3 evaluations, 4 assessment studies, 14 validations of self-evaluations and 3 Approach Papers for upcoming evaluations. These are listed in Table 4.

Table 4: List of studies (N = 24) submitted to the Board for the period January 2012 to December 31, 2015

Board Meeting | Date | Type / Topic
251 | May 2012 | Ex-Post Evaluation Report on Road Improvement and Maintenance Project, Nevis – St. Kitts and Nevis
251 | May 2012 | Validation of Project Completion Report on Sites and Services – Grenada
251 | May 2012 | Assessment of Effectiveness of Implementation of Poverty Reduction Strategy 2004-09
253 | Oct. 2012 | Assessment of Extent and Effectiveness of Mainstreaming Environment, Climate Change, Disaster Management at CDB
254 | Dec. 2012 | Assessment of the Implementation Effectiveness of the Gender Equality Policy and Operational Strategy of the Caribbean Development Bank
254 | Dec. 2012 | Validation of Project Completion Report on Enhancement of Technical and Vocational Education and Training – Belize
254 | Dec. 2012 | Validation of Project Completion Report on Fourth Road (Northern Coastal Highway Improvement Section 1 of Segment II) Project – Jamaica
254 | Dec. 2012 | Assessment of the Effectiveness of the Policy-based Lending Instrument
256 | May 2013 | Validation of Project Completion Report on Expansion of Grantley Adams International Airport – Barbados


256 | May 2013 | Validation of Project Completion Report on Fifth Water Supply Project – Saint Lucia
261 | May 2014 | Validation of Project Completion Report on Immediate Response Loan, Tropical Storm Gustav – Jamaica
261 | May 2014 | Validation of Project Completion Report on Social Investment Fund – Jamaica
261 | May 2014 | Validation of Project Completion Report on Disaster Mitigation and Restoration – Rockfall and Landslip, Grenada
263 | Oct. 2014 | Validation of Project Completion Report on Basic Education Project – Antigua and Barbuda
263 | Oct. 2014 | Approach Paper for SDF 6 & 7 Multicycle Evaluation
264 | Dec. 2014 | Validation of Project Completion Report on Policy-Based Loan – Anguilla
264 | Dec. 2014 | Validation of Project Completion Report on Immediate Response Loan, Tropical Storm Arthur – Belize
264 | Dec. 2014 | Evaluation of Technical Assistance Interventions of the Caribbean Development Bank Related to Tax Administration and Tax Reform in the Borrowing Member Countries 2005-2012
265 | March 2015 | Approach Paper for the Evaluation of Policy Based Operations
266 | May 2015 | Validation of Project Completion Report on Upgrading of Ecotourism Sites – Dominica
266 | May 2015 | The Evaluation of the Caribbean Development Bank's Intervention in Technical and Vocational Education and Training (1990-2012)
267 | July 2015 | Validation of Project Completion Report on The Belize Social Investment Fund I Project – Belize
268 | Oct. 2015 | Approach Paper, Country Strategy and Programme Evaluation – Haiti

The review and analysis of these documents are based on the UNEG Quality Checklist for Evaluation Reports (http://www.uneval.org/document/detail/607) as well as on ECG guidance (the "Big Book" on Good Practice Standards).

Approach Papers

Three Approach Papers (APs) were made available to the Panel (see Table 4 above). An AP describes the rationale for the evaluation, the background to the topic evaluated, the evaluation framework (criteria and questions) and the approach. It also describes the team and provides an initial plan. As the first main deliverable of OIE's evaluation process, APs are the starting point and therefore a major determining element in the roll-out of each evaluation. APs therefore "have to get it right".

The APs examined are clearly written, well-structured and of reasonable length.133 We were surprised to find, however, that they do not make explicit the objectives of the evaluated intervention(s), e.g., through a clear objective tree, or through an explicit theory of change, intervention logic or logframe. Whilst one of the APs contains, in an appendix, a results framework for the evaluation, the results framework for the intervention (PBO) itself is lacking.

Inception reports

Only one Inception Report was given to the Panel for review (SDF 6 & 7). This gives an in-depth description of the evaluated programme and provides a clear Theory of Change. It is good practice that this is established after a pilot field mission, which helps to amend the initial AP on the basis of field observations and to sharpen the evaluation questions if needed.

133 Opportunities remain of course to be more concise and to move parts to appendices, e.g., detailed descriptions of the evaluation team or part of the description of the evaluated intervention.


However, it is still considered good practice to have the Theory of Change elaborated in the initial design documents. This would facilitate OIE evaluations after project completion. Establishing the Theory of Change of any intervention could be included more explicitly in the QaE form, to be developed jointly by the Quality unit referred to above and the OIE.

Evaluations and Assessments

Three evaluations and four assessment reports completed during the review period were considered. Assessments are similar to evaluations but have a narrower scope; they focus on a limited set of aspects or judgment criteria, mainly effectiveness, i.e. achievement of objectives. Evaluations generally base their judgment on the internationally recognised DAC criteria as well as aspects of the CDB and BMC’s management of the intervention.

In general, these reports are of reasonable quality. In the main, they explain the evaluated object134 and provide evaluation objectives. The findings are organised around the evaluation criteria or questions detailed in the scope and objectives section of the report. They are based on evidence derived from data collection and analysis methods as described in the methodology section. The reports tend to dwell on the limitations that the evaluation encountered, but without becoming defensive. In one case (PBL Assessment) the report starts with a summary of the reviews on the topic done by other MDBs. This was a pleasant surprise and indeed a good practice that could well be adopted in future evaluations too.

However, the reports also show several significant weaknesses:

- Reports do not always provide clear (reconstructed) intervention logics or theories of change for the intervention(s) evaluated.135 Evaluation criteria and questions are defined at a fairly general level. They are translated into more precise "research questions" (in an "Evaluation Design Matrix", for each project and each criterion). However, it is unclear how these questions relate to the intervention logic, as this is not made explicit. This may be done in inception reports (of which, as noted above, only one was available for review), but it should also be done in the final reports.

- The reports do not describe the link from the evaluation questions to the answers: how the evaluation judgments are made and how these ultimately transform into ratings for each criterion and each project. In other words, the explanation provided in the evaluation frameworks is inadequate; the "evaluation design matrix" currently used does not provide sufficient insight into how an intervention's performance is ultimately judged. Links between findings, conclusions and recommendations could be improved by making this more explicit. Reports should include the story of how the evaluand is credibly linked to any observed outcomes and impacts, and should be clear on how causal claims are made.

- With the exception of the PBL Assessment, reports are lengthy and detailed. One reason for this is an over-emphasis on ratings: their detailed discussion, project by project and criterion by criterion, occupies a very prominent position in the main body of the evaluation reports. Although ratings are traditionally an important element in MDB evaluations, too strong an emphasis can be tedious and may distract the reader from the real lessons to be drawn. The detailed discussion of ratings, and their evidence base, would be better placed in an Appendix, with a brief summary in the main report. This would give the lessons and recommendations a more prominent position than is now the case, and would make the evaluation reports not only shorter but also more interesting to read; this could help add value to evaluation's image within the organisation.

134 Sometimes at great length: for instance, in the SDF 6 & 7 multicycle evaluation report it is only at page 30 that the findings begin.
135 Again with the SDF 6 & 7 evaluation, it is said to be guided by a "Logic Model" which is not explained.



- The reviewers feel that the OIE evaluations tend to over-emphasise objectives-based evaluation137 and the DAC criteria to the exclusion of other evaluation approaches, such as Developmental Evaluation (Patton, 2010138); evaluation should be case-specific and answer the actual information needs of managers and other decision makers rather than always concentrating on final performance.

- Related to the previous point (and again with the exception of the PBL Assessment), executive summaries (approximately 8 pages) are too long. To increase the potential impact of the evaluation reports, they would need to be reduced to 2 to 3 pages and be more focused; again, this could be done by dwelling less on the individual ratings of projects and more on key findings, lessons and conclusions. More generally, reports could be better adapted to the needs of different audiences. Although not strictly limited to evaluations, the Health Evidence Network reports139 are a model that could be adapted for evaluation reporting purposes; they are specifically geared towards addressing policy and decision-making.

- The "Recommendations to BMCs" are an interesting feature of the reports, although we are unsure to what degree such recommendations can be effectively followed up by the OIE or the Bank; they could certainly be taken up with BMC Board members.

- Reports (e.g. the evaluation report on Technical Assistance) focus heavily on technical problems encountered during the evaluation. Although these are important issues, to improve the report's flow and readability this material would be better placed in an Appendix. What counts is the story of the intervention, not the story of the evaluation (see, for instance, the "Limitations" section in the TA report).

OIE Validations of Project and Country Strategy Programme Completion Reports (referred to collectively as PCRs hereafter)

As noted above, the OIE has the mandate to validate the Projects and Economics Departments' PCRs and CSPCRs. However, in this period of transition, much of the OIE's work since 2012 has been dealing with the backlog of the CDB self-evaluation validations. In theory, an estimated 15 completion reports are due each year, but delays in submitting the reports for validation are commonplace. With the change of Head in June 2014, the OIE therefore secured the OAC's agreement to reduce the number of validations to a maximum of 6 per year. Nevertheless, a backlog continues to accumulate, as only 2 PCRs were given to the OIE for validation in 2015.

The validations tend to repeat the different items reported in the PCRs and then provide extensive comment on each. The PCVRs go into great depth and detail, which makes the documents rich and complete. This is their strength – but also their weakness. The depth and level of detail, as well as the repetition from the original PCRs, make PCVRs (overly) lengthy (20-40 pages) and difficult to read. The OIE reported spending approximately 27.2% of its time on validating PCRs in 2015, compared with 44.4% on its core work, i.e. doing or managing the higher-level evaluations. In other words, validation consumed more than half as much time as the core evaluation work itself. Finally, the PCVRs now seem to be, to a great extent, a standalone output of the OIE. It is not always clear to us how they are being used as the "building blocks" for the OIE's independent evaluations. Making this clearer in the independent evaluations would help show the link, and therefore the value of the time spent on the self-evaluation validations.
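A quick arithmetic check of that statement, using only the two shares the OIE itself reported, may be helpful:

$$\frac{27.2\%}{44.4\%} \approx 0.61 > 0.5$$

That is, the time spent validating PCRs amounted to roughly 61% of the time spent on the core evaluation work.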

137 The focus of an objectives-oriented evaluation is on specified goals and objectives and on determining the extent to which these have been attained by the relevant intervention. See, for example, Worthen, Sanders, & Fitzpatrick (1997). Program Evaluation: Alternative Approaches and Practical Guidelines (2nd ed.). White Plains, NY: Addison Wesley Longman.
138 Patton, M.Q. (2010). Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use. Guilford Press.
139 See the reports available at the WHO's Health Evidence Network at http://www.euro.who.int/en/data-and-evidence/evidence-informed-policy-making/health-evidence-network-hen



To conclude, the review finds that the OIE has taken steps to improve the perceived utility of evaluation in several ways: first, by planning its work to provide relevant and timely evidence geared towards helping the Board with its oversight and decision-making tasks (topics are selected through dialogue between the OIE and key CDB stakeholders and reflect priorities of the CDB's strategic plan); and second, by securing the interest, and consequently the buy-in, of the OAC and CDB senior management by engaging their input throughout the evaluation process. This is evidenced by the reported interest in the latest three studies: the country strategy programme in Haiti, the evaluation of policy-based operations, and the SDF 6 & 7 multicycle evaluation.

The OIE's products are of acceptable quality and could be better still if some of the shortcomings noted above were addressed. It is not the products themselves, however, that impair the utility of the OIE's work; utility is undermined in two ways: (1) by the delays in commenting on PCRs (on the OIE side) and in providing feedback on the independent evaluations (on the operations and management side); and (2) by the inefficient processes for agreeing topics and funding sources and for providing the OIE with management responses to its reports.

Putting Evaluation to Use: transparency, feedback and follow-up

There are several ways that evaluation can be, and is being, used. As John Mayne has pointed out in his many publications on the issue,140 when we talk of evaluation use we are mainly thinking of instrumental use – use made to directly improve programming and performance. But there is also conceptual use – use which often goes unnoticed or, more precisely, unmeasured; this refers to use made to enhance knowledge about the type of intervention under study in a more general way. There is also reflective use – using discussions or workshops to encourage and support reflection on the evaluation findings to see how they might contribute to future strategies.

In the case of the CDB, there is some evidence to suggest that use is not only instrumental; other types are also developing. For example, the review of draft evaluation reports includes reflective workshops that not only discuss the findings but also seek to draw out the important lessons (reflective use).

Another important use, as recommended by the ECG, is to draw a synthesis of lessons from a number of evaluations from time to time and make it publicly available. Indeed, the Panel was impressed to hear that in the past the evaluation unit had done this, drawing on lessons from evaluations of the power sector (conceptual use). Although nothing has happened since, it is now on the "to do" list for 2016 (OIE's 2016 Work Plan).

As for instrumental use, responsibility for using the knowledge generated through evaluation, and for possibly drawing up an action plan of what should be done, rests with CDB senior management and the relevant CDB department and division. Oversight of the application of recommendations and the uptake of lessons drawn is the responsibility of the OAC.

Evidence on how evaluations have actually contributed to decisions or negotiations is lacking or confusing. Certainly, the OIE is unaware of the extent to which its evaluations are put to use. On the one hand, the OAC minutes sometimes indicate that lessons learned are integrated into the next phase. On the other hand, the reviewers were told that in the past the evaluation results were often "too old" to be of use, as the lessons had already been drawn and used well before the report was completed. Similarly, gaps in people's memory of how well the evaluative information from previous studies was used may also account for the scarcity of evidence.

140 See, for example, his opening chapter in Läubli Loud, M. and Mayne, J. (eds.) (2014). Enhancing Evaluation Use: Insights from Internal Evaluation Units. Sage Publications.



In response, the Panel questioned CDB staff and the OIE about a particular study, the Technical and Vocational Education and Training Assessment. The feedback was somewhat contradictory. On the one hand, the study was criticised as "confirming" news rather than bringing "new news". On the other hand, we learned that in October 2015 the Board of Directors approved a proposal for the revision of CDB's Education and Training Policy and Strategy; work on this has already begun and an external consultant has been engaged to lead the process.

Although it is one of the OIE's tasks to set up a database of results and lessons learned from evaluations, so far this has not been a priority. There is also currently no systematic tracking of lessons or recommendations arising from the evaluations, or of any progress in their uptake. (The Panel has already referred above to the OAC's lack of oversight of the use of evaluation.)

The OIE's role in supporting CDB's organisational learning is clearly specified in the Evaluation Policy, with many good suggestions for knowledge-sharing activities such as "brown-bag lunches, workshops, pamphlets and short issues papers" (p. 19). So far, however, the OIE's lead role on the knowledge-sharing side appears to be quite limited. It has provided advisory input in Loan Committee discussions, and organises workshops together with the relevant operations department to discuss the implications of evaluation studies. Ultimately, of course, the uptake of evaluation results and knowledge is in the hands of management, but the evaluation unit has an important role to play as knowledge broker and knowledge manager. Both roles have tended to be underplayed in the OIE's work plan so far.

Transparency: The Communication Strategy

Recently, with the approval of its new Disclosure Policy, the CDB has started to post its independent evaluation reports on its website (there is nothing on the self-evaluations). The website also presents a good overview of the role and function of the OIE and of evaluation within the CDB. This is a step in the right direction for sharing information. However, in our view, the CDB's communication strategy is the weakest part of the evaluation system to date.

The Panel has already commended the OIE for its efforts to engage the CDB and the OAC in evaluation work. But reporting and communicating the lessons seem to be targeted entirely at the Board and the CDB. Moreover, the 2015 budget provides only US$2,000 for communication – none of which is intended for outreach.

The reviewers feel that active engagement with the more indirect stakeholders – for example, project implementers in the BMCs, NGOs or project beneficiaries – is relatively weak.141 There appears to be little reflection on drawing out significant messages for this broader group of stakeholders, or on how to transmit them to the "right" people in the "right" way (knowledge brokerage).

To conclude, evidence on the uptake of evaluation is either confusing or sparse. It is unfortunate that no systematic record-keeping system has so far been put in place to track lessons learned or the uptake of recommendations (or actions agreed in management responses). The OIE plays a weak role in brokering the knowledge generated through evaluations to the benefit of external partners, and in managing such knowledge. Although the Evaluation Policy specifies the need for "distilling evaluation findings and lessons learned in appropriate formats for targeted audiences both within and outside the CDB" (p. 19), such a targeted communication strategy has yet to be developed and budgeted.

141 A broader communication strategy is one of the principles and good standards of the Evaluation Cooperation Group and the evaluation community more generally.


Strengthening Evaluation Capacities and Networking

From the outset in 2012, the OIE has stressed the importance of developing and strengthening evaluation capacities within the OIE, the CDB and, subject to available resources, the borrowing member countries. Building evaluation capacity in the BMCs and the CDB is one of the OIE's mandated tasks and has figured on the work plan from the beginning (Work Programme and Budget 2012-2014). Developing an internship programme for graduates from the Caribbean region was one idea advanced to help build local evaluation resources. To date, however, capacity building has primarily been focused on OIE and CDB staff. One of the OIE's two objectives for 2015 was therefore to take up the challenge and "strengthen evaluation capacities and networking", including reaching out to the BMCs.

Developing OIE staff capacities

The change from project level to strategic and thematic evaluations does require different evaluative skills and competencies. The MDB Evaluation Pyramid presented below in Figure 3 shows the different types of evaluation and changing resource needs as one ascends the pyramid. Implicit here also is the change in the type of expertise and competencies needed as evaluation aspires to the higher levels.

Consequently, for 2015 the OIE set itself the objective of networking and developing working partnerships with regional and international evaluation entities and academic institutions. The rationale was twofold: (1) to secure further support and guidance, and (2) to increase its outreach and coverage through joint work and international exposure. Another implicit aim was to benefit from partners' contacts in the BMCs wherever possible, so as to improve data collection and quality.

Figure 3: The MDB Evaluation Pyramid142

142 US Treasury Report to Congress on Evaluation Standards and Practices at the Multilateral Development Banks (September 2014, Annex C).


The OIE has therefore linked up with Carleton University in Canada and the University of the West Indies, Barbados campus. The OIE was also approached by the Development Bank of South Africa to exchange experiences about setting up an evaluation entity in a “small” development bank. However, its attempt to become a member of the Evaluation Cooperation Group was not successful for reasons beyond its control.

The OIE is to be commended for addressing the issue of staff competencies and professional development more generally. New developments in evaluation, as well as in the scope of the OIE's work, may necessitate new competencies. For this reason, organisations such as the International Development Evaluation Association have recommended that the competencies of evaluators and evaluation managers be periodically reviewed. Several publications now exist on competency requirements and on the periodic review of staff competencies.143

It is not within this Review's remit to compare and contrast the OIE's competencies with those recommended by international and national agencies. What we can say, however, is that the OIE demonstrates considerable forethought in taking this on board.

Capacity building within CDB

The OIE's objective also includes continuing to develop measures for improving the monitoring and self-evaluation side of CDB's work. The OIE's strategy here is to use the windows of opportunity offered by training sessions organised by the CDB as part of its shift towards MfDR, e.g. by Corporate Planning Services and Technical Assistance. For 2016, it is also planned that the OIE will be present at the annual staff meeting and Learning Forum.

The OIE also organises ad hoc training with operations, for example to help staff understand new tools (e.g. for drawing out lessons from self-evaluation reports) and, more generally, to help them appreciate how evaluation can add value to the organisation's work. Measures include providing advisory services on demand and providing training alongside the introduction of new or revised tools.

Capacity building in the BMCs

This is an ambitious task and would require additional investment in the biennial work plans to be effective. A modest attempt was made in 2015: from what we understand, the OIE has joined with Carleton University and the University of the West Indies, using their networks in some of the BMCs, to try to develop this aspect.

To conclude, we cannot comment on the quality of, or reaction to, such training, but we can commend the OIE for making capacity building one of its priority objectives. From both the Policy and the documents we reviewed, we note that capacity building was always seen as an important aspect of the OIE's work but has hitherto received little strategic focus. The resources currently available to the OIE will limit the scope of such work in the BMCs, which in turn will continue to hinder the production of sound evidence for the OIE's evaluations.

Adequacy of the OIE’s human and financial resources to support its work

OIE's Human Resources

The OIE has a staff of five: the Head, one senior evaluation officer, two evaluation managers and one administrative assistant. Three of the five were recruited from within the CDB. This limited capacity means that it is not feasible to cover all the types of evaluation activities outlined in the Evaluation Policy. Yet there is some indication from the Board that the OIE should embark on impact evaluations at some future stage. An increasing demand for evaluations, and for impact evaluations in particular, would run the risk of overstretching the OIE's capacity to deliver credible and useful evaluations. Moreover, there are many other designated OIE activities that should be recognised as valuable work: the validations; building CDB and BMC evaluation capacity; providing supervision and advice; knowledge management and brokerage; and managing evaluation contracts. The time needed to deal with all of these may be underestimated in the OIE's budgets, yet all are important for assuring best value from evaluation. The Panel is concerned that a demand for "doing" evaluations, together with the OIE's interest in advancing its skills in high-level evaluations, may undermine the importance, and underestimate the time needs, of other essential tasks.

143 E.g. IDEAS (2012). Competencies for Development Evaluation Evaluators, Managers and Commissioners; the Canadian Evaluation Society's Competencies for Canadian Evaluation Practice (2010); and the Swiss Evaluation Society's Evaluation Managers Competencies Framework (2014).



Limited and unpredictable resources for independent evaluations

The OIE is funded from the general administrative budget and represents approximately 2.5% of the total. Whilst this is seemingly a higher proportion than in other MDBs, in real terms it is quite limited: 75% of the OIE budget goes to staff salaries, leaving US$190,000 in 2015 for external consultants and other expenses.
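For orientation, these two figures together imply an approximate total – a back-of-envelope reading that assumes the 75% salary share and the US$190,000 remainder refer to the same 2015 budget:

$$\text{Total OIE budget} \approx \frac{190{,}000}{1 - 0.75} = \text{US\$}760{,}000; \qquad \text{salaries} \approx 0.75 \times 760{,}000 = \text{US\$}570{,}000.$$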

CDB's donors do not appear to specify a budget for monitoring and evaluation activities. This means, on the one hand, that there is no clear external budgetary recognition of the operations' self-evaluation work or of the OIE's time in the validation process; and on the other, that whilst donors expect to receive reports from independent evaluations, this expectation is not backed by corresponding allocations when funds are assigned.

Resources available to the OIE for hiring external consultants have dropped from US$350,000 in the revised 2014 budget to US$120,000 in the 2015 indicative budget. The OIE estimates that, for high-level evaluations, the cost of external consultants is between US$90,000 and US$350,000 (the SDF 6 & 7 evaluation cost US$255,000); in the Panel's experience, this is a sound estimate. With one staff member fewer during 2014-2015, coupled with the OIE's focus on dealing with the backlog of self-evaluations amongst other priorities, the OIE was unable to execute some of the evaluations during the annual budget period. Hence the budget was reduced for subsequent years, but it has proven insufficient to fund the OIE Work Programme. The OIE has therefore needed to turn to the only alternative source available at present, the SDF fund. But the SDF funding rules apply to specific countries and themes, which obviously restricts the OIE's choice of evaluation subjects and themes. Since the SDF does not allow for OIE recurring costs such as staff travel, the SDF evaluations have to be outsourced. As presented in Figure 1 above, the approval process is inefficient and causes delays. The Panel learned that additional funds, for example for specific studies, could be secured from within the administrative budget during the year, on condition that the request was based on sound arguments.

Whilst the Panel fully appreciates that the Bank is operating within a zero-growth framework, the reviewers were surprised to learn that OIE funding is not securely aligned with its priorities and work plan. The need to seek alternative funding for individual studies does not allow for any flexibility and undermines the OIE's independent judgment of what needs to be done.

To conclude, the OIE is inadequately resourced to meet the expectations outlined in the CDB's Evaluation Policy. The Panel recognises that the CDB itself faces budgetary restrictions, but current arrangements to secure extra funding are complicated and inefficient, and they limit the OIE's ability to exercise autonomy in the selection of its evaluation studies. Moreover, OIE budgets significantly underestimate the time needed to manage evaluations and other evaluation activities.


Self- and independent evaluation

Self-evaluations cover public sector investment, lending and technical assistance, policy-based loans, and country strategy programmes. Both types of evaluation are important, as they are at the very heart of the evaluation function; the self-evaluations are said to be the building blocks for the more strategic evaluations that the OIE is now undertaking.

The Evaluation Cooperation Group recommends that self-evaluations be carried out by the relevant operations department and, in turn, reviewed and validated by the organisation's independent evaluation office. The CDB's Evaluation Policy therefore describes "validating all self-evaluations" as one of the OIE's essential oversight tasks.

Within the CDB, the self-evaluations should provide management with performance assessments and thereby serve an accountability function to the CDB and the Board. To support the process, the OIE provides operations with manuals and checklists for guidance. Once a self-evaluation report is to hand, it is passed to the OIE for validation of its technical quality and credibility.144

In the CDB's case, however, there are well-documented issues that have affected the quality and timeliness of the self-evaluations and, therefore, the quality of the foundation on which to build the independent evaluations. Paucity of documentation within the CDB; paucity of data collected and available in the Borrowing Member Countries (BMCs); delays in producing completion reports and, in turn, in having them validated by the OIE – all such issues were systematically raised during interviews and in some of the independent evaluation reports. There appears to be little incentive to complete self-evaluations in a timelier manner.

Generally speaking, many of the monitoring data problems appear to be due to a lack of management oversight. For example, with the introduction of results-based management, the logical framework and the monitoring and data needs are systematically built into intervention design. However, the BMCs are not delivering the data as contractually agreed at the outset. Incentives to support any significant change towards building a results-based culture seem weak, and sanctions are rarely enforced when the supply of data is lacking or lengthy delays to projects occur. Although we appreciate the complexities of trying to enforce monitoring compliance, the consequence is that project deadlines have often had to be extended, data gaps are not satisfactorily dealt with and, in turn, there has been a void in the quality and quantity of available evidence for the CDB's self-assessment of project performance. For some time, this lack of oversight has been tolerated. Part of the problem is the low priority that operations accords to completing the self-evaluation reports, coupled with the absence of any focal point within senior management to drive the process and deal with the problems.

No record is kept of how the self-evaluation results are actually used. They do not appear on the CDB website, though we were told that the findings are integrated into subsequent project designs. Hence we are somewhat unclear about the present utility of these reports. The situation is exacerbated by a rather confused image of evaluation: some operations staff consider the OIE's input (through validations or independent evaluations) to be at times over-critical, regulatory and of little added value – a threat rather than an opportunity for learning. Yet at the same time, evaluation is recognised as an integral part of results-based management.

According to the Evaluation Policy (p. 15), "The President, with the support of the Advisory Management Team, is accountable for encouraging and providing an environment where evaluation adds value to the overall management of CDB's activities and fosters a culture of critical analysis and learning". In the CDB, however, a learning culture appears still to be in its infancy, and the leadership role expressed in the Evaluation Policy is underdeveloped.

Some managers, however, seem to be starting to change the status quo. For example, a revised and simplified template for producing project completion reports is being considered, and mid-term project reviews are expected to be more stringent in looking at monitoring plans and practices and in tying disbursements to performance. In some cases, we also learned of incentives being introduced to encourage project managers to complete their reports in a timelier manner. But much remains to be done and, since the OIE is no longer responsible for monitoring and project evaluations, there is a void that needs to be filled. It is up to line managers to drive this work forward.

144 According to the Evaluation Policy, the OIE should validate all PCRs and CCRs but, due to the backlog of reports and the delays in completing them (sometimes years late), since October 2015 the OIE has secured OAC agreement to validate a maximum of 6 per year, selected in consultation with the OAC.



To conclude, it is fair to say that, in view of a number of "frustrations" between the OIE and operations – largely to do with delays in exchanging comments on the various reports and with the paucity or absence of monitoring data – the added value that evaluation might offer to the operations area is ill recognised. Moreover, the role of the self-evaluations as building blocks for the independent evaluations is not apparent. Thus there is little incentive or management focus to drive any change to current practices. In other words, there is a lack of leadership to advance a learning environment in which evaluation can play a major part.


Part Three: General Conclusions and Recommendations

To conclude, with regard to the Evaluation Policy and the OIE's independence, our Review finds that over the past few years the CDB has succeeded in establishing an independent evaluation office that is credible and respected. It reports to a Board committee and is thus organisationally independent from CDB management. Its work is grounded in an Evaluation Policy, agreed by the Board and the CDB, that reflects internationally recognised principles and good practices. The Policy sets out a broad scope of responsibility for the OIE which, however, seems over-ambitious given current resource constraints. The OIE clearly has both an accountability and a learning function; the latter should support the development of an organisational learning culture. (So far, the uptake of recommendations and key lessons has not been systematically monitored or recorded.) In general, on the issues of independence, we can conclude that the OIE meets the criteria for organisational and behavioural independence and is protected to a certain degree from external or contextual influences.

However, as the Independent Advisory Committee for Development Impact has said, "independent evaluation needs to have clout … credibility of evaluation hinges on public perceptions as well as on reality".145

We are therefore highlighting a few potential threats, even though there is no evidence to suggest they are in any way real at present. It would be in the OIE's and the CDB's interest to have these clarified sooner rather than later:

- Any delays incurred in reporting self- and independent evaluation results to the Board could be interpreted as operational interference.

- There is no agreed process for dealing with any conflict of interest between the OIE and management in reporting results; it is simply expected that any disagreements will be reported in the management response.

- Another possible threat is the Head of the OIE's lack of complete autonomy over staffing matters: recruitment, termination, continuation and professional development. The Policy is not sufficiently clear about who has the final word in case of disagreement.

- Finally, on resources, our Review accepts the limited funds available to the CDB and the fact that the OIE's budget is not independent but operates within the Bank's budgetary limitations. Nevertheless, we feel that more flexible arrangements could be devised that would allow less restrictive and timelier access to funds.

With regard to governance, our Review has highlighted the difficulties the OAC faces in not receiving the background papers for its meetings in sufficient time to do them justice. Moreover, these documents tend to be very lengthy and not necessarily reader-friendly. The OAC's oversight responsibility is likely to be weakened as a result, and we can already see some indication of this: requests for systematic follow-up on management actions resulting from evaluation findings have not been answered, and there is no standing item for this on the OAC agenda, so such requests can easily be passed over and forgotten. The broadened responsibilities now given to the OAC also mean that many competing entities are trying to secure the OAC's attention. There is now provision for the OAC to call on consultants for help, which may strengthen the OAC in its oversight responsibilities.

Furthermore, in their capacity as members of the Board, the OAC's members should stress the urgency of developing evaluation and monitoring capacity in the BMCs, since this gap is having a direct impact on OIE and CDB evaluations.

With regard to the OIE’s performance, we have to respond to the questions raised in this Review’s Terms of Reference, which basically mean answering two main questions: Is the OIE doing the right thing? And is it doing it in the right way?

145 Picciotto, R. (2008) Evaluation Independence at DFID; An independent Assessment prepared for the Independent Advisory Committee for Development Impact (IADCI) (p. 4).


There is no doubt that the decision to establish a credible, independent evaluation function in the CDB is the "right thing" to do; "effective and useful evaluation and oversight activities can assess development effectiveness, hold the organisation accountable for results, and improve operational performance".146 It is also a policy of the MDBs to have such a function, and the CDB has now aligned itself with international standards and practice. The question now, therefore, is the following: is the OIE going about it in the right way?

The OIE has taken the "right" steps to improve the engagement and interest of the OAC and CDB senior management, from selecting the topics for its evaluations through to finalising the conclusions and recommendations in a collaborative spirit. It falls short, however, of taking the messages emerging from the studies to "outsiders", such as those responsible for implementing CDB interventions in the BMCs.

With regard to its oversight role, we feel that the OIE has paid insufficient attention to the actual utilisation of evaluation; it is beyond its responsibility to see that action is taken, but it is certainly within its remit to record how, and how well, the lessons drawn have been taken up and used. With regard to its oversight of the self-evaluations (the validation process), the OIE has attempted to improve dialogue with the operations departments and to demonstrate the dual function of oversight and learning. It is now emphasising the learning aspect by providing tools and guidance on how to draw out lessons and integrate them into future planning. More recently, it has sought to provide more formalised training on evaluation by working with the Corporate Planning Services and Technical Assistance departments to develop courses that show how, where and when evaluation plays its part within the MfDR framework.

However, one of the challenges in evaluation management is balancing independence with facilitating buy-in and ownership. It is a fine line to walk, and the tone of the collaboration depends to a large degree on the climate between management and the head and staff of the independent evaluation unit. In practical terms, for the CDB this means defining the role of the OIE in relation to the self-evaluations performed by the Projects and Economics Departments. The change from the EOV to the OIE made this role change quite clear: the OIE no longer has responsibility for project monitoring or for planning data needs together with the operational departments. On the other hand, to improve understanding and learning, there needs to be an interface between evaluation and management. At present, the OIE's dual role – its advisory role in relation to operations and its strategic role towards the OAC and senior management – has not been satisfactorily resolved. Operational staff still do not appear to see any urgency in producing their completion reports, or to appreciate what lessons might be drawn from such reflection. The OIE is doing its best to support "learning" whilst at the same time keeping at arm's length. The greatest challenge the OIE faces in its new capacity is the slow development of an organisational learning and evaluation culture.

A Learning and Evaluation Culture

Evaluation utility depends on the engagement of evaluation users – those who should benefit from the knowledge generated through the studies. Useful evaluation therefore depends to a large degree on the development of an evaluation and learning culture, and on how well these are embedded in the organisation. This means that the organisation recognises and appreciates evaluation's role and the functions it can have, particularly in helping to understand what it is achieving and where and how improvements can be made. In short, the added value that evaluation can bring to the organisation is its ability to draw out the important lessons that can help improve the organisation's performance.

However, whilst CDB senior management shows all the signs of embracing evaluation as an important strategic tool, there still appears to be some apprehension about receiving criticism, however constructive it might be. The OAC has already affirmed its interest in learning what can be "put right the next time around". In considering accountability, the committee is asking for a more strategic approach to learning and sharing knowledge based on evidence. The CDB also shares the development goals of other MDBs, that is, "to end extreme poverty and promote shared prosperity". This means looking for new forms of problem-solving and for ways to create a "development solutions culture". Hence there is an interest in learning from experience and exchanging knowledge about what works. This implies balancing accountability and learning: making sure they are not seen as opposites, but as compatible entities. This greater emphasis on learning requires a reframing of CDB's thinking and of its dealing with the constructive criticism that evaluation can offer.

146 CDB (2011) Evaluation Policy (p. 2).


however constructive this might be. The OAC has already affirmed its interest in learning what can be” put right the next time around.” In considering accountability, the committee is asking for a more strategic approach to learning and sharing knowledge based on evidence. The CDB also shares the development goals of other MDBs, that is « to end extreme poverty and promote shared prosperity. » This means looking for new forms of problem-solving and for ways to create a “development solutions culture.” Hence there is an interest in learning from experience and exchanging knowledge about what works. This implies balancing accountability and learning; making sure they are not seen as opposites, but as compatible entities. This greater emphasis on learning requires a reframing of CDB’s thinking and dealing with the constructive criticism that evaluation can offer.

(Comparative extract: findings on evaluation culture from an external review of UNRWA's evaluation function.)

Weak evaluation culture

27. While some stakeholders seem keen on evaluation, the overall evaluation culture in UNRWA is weak. There are several aspects to this.

28. First, many of the interviewees stressed that UNRWA has a weak learning culture, which stems from a number of factors. One reason given relates to the cultural virtue of oral communication, which makes conveying documented experiences challenging. Another reason is language: a majority of UNRWA's national staff are not fluent in English, and evaluation reports are mostly in English. Furthermore, criticism – even if constructive – is, according to some interviewees, mainly perceived as a threat rather than as an opportunity. Finally, learning is also affected by a very basic constraint: lack of time.

29. Second, there is a weak knowledge management system for systematically collecting and sharing experience and lessons learned in UNRWA. UNRWA communities of practice do not exist; several interviewees mentioned using knowledge networks outside of UNRWA, i.e. communities of practice managed by other agencies. Accessing evaluation reports is also not easy: the UNRWA website on the Internet does not provide access to evaluation reports, and while the Agency's Intranet has a site for evaluation reports, it is not a complete repository, and the Evaluation Division does not know exactly how many decentralized evaluations are being produced. In addition, there are only a few evaluation plans at the level of field offices or departments.

30. Third, the Panel found that decentralized evaluations are – at least partly – perceived as donor-driven accountability instruments rather than as learning tools. In that sense, evaluations are managed as bureaucratic requirements, thereby weakening the learning dimension.

31. Finally, the sensitive political context in which UNRWA operates may also discourage a strong evaluation culture, as evaluative evidence can sometimes be overridden by political considerations.14 The Panel was repeatedly told that, given the political context, any change is a challenge.

14 An example mentioned to the Panel was the evaluation of the Qalqilya Hospital (2013), which concluded that the Hospital should be closed. However, for political […]

marlene laeubli loud, 19/03/16,
Have to find the quote from the CDB’s strategy paper

Part Three: Recommendations

(Here is a list of some possible ones – to be discussed and developed within the Review Panel initially, and then proposed for discussion together with the OIE.)

- OAC's oversight responsibility needs to be strengthened (possibly).
- Review the Evaluation Policy to redress gaps.
- OIE to develop a 5-year strategy providing a step-by-step work programme, budget and timeframe for implementing the Evaluation Policy.
- Stronger leadership from the President, to provide a climate conducive to promoting the added value of evaluation to overall management and to fostering a culture of critical analysis and learning.
- Stronger support from the Advisory Management Team for the evaluation function, by emphasising the accountability of CDB managers for performance results and by devising incentive schemes to support this accountability function.

- Set up committees (advisory groups) to accompany specific evaluation studies as a means of reinforcing ownership.

- OIE could contribute to developing a learning culture within the CDB by adopting the role of critical friend in its dealings with the CDB operations departments and the CDB more generally. Valuable insight can be gained from a focused conversation with the client about aspects of an evaluation that went well and those that did not go so well; some such meetings might include all relevant stakeholders in addition to the main client. This feedback can then inform a subsequent internal project team debrief (e.g. identifying internal professional development or process improvement needs), and input from all perspectives can inform planning for future projects. A summary of these discussions can then be circulated so that everyone can use it for their own internal or external quality improvement purposes.

- Rather than asking "what went wrong?", the conversation can turn to "what surprised us, what we would do differently, what did we expect to happen that didn't, and what did we not expect to happen that did". This is a better means of getting at the negative aspects without placing blame.

- OIE to train up and engage "champions" within CDB operations departments to help demonstrate evaluation utility and provide "on the job" training in self-evaluation to colleagues.
- A quality control group could also be set up, as Picciotto suggested for DfID, to carry out and develop the work of Quality Assessment and particularly Quality at Entry monitoring.

- OIE should be given the resources to build M&E capacity in the BMCs as a priority, possibly in partnership with other MDBs, development organisations and the countries themselves.

- Possibly criticise the over-emphasis on the five DAC criteria, which have been in use for more than 15 years without undergoing any major revision. Given their importance and level of influence in the field, it is pertinent that independent professionals take a critical look at them, especially since many scholars and practitioners consider that the quality of evaluations in development aid has been quite disappointing (Thomaz Chianca, "The OECD/DAC Criteria for International Development Evaluations: An Assessment and Ideas for Improvement", Journal of MultiDisciplinary Evaluation, 5(9), March 2008, p. 41; http://evaluation.wmich.edu/jmde/).

- Regarding the validation of self-evaluations: it is recommended that a leaner format be developed for the PCVRs, repeating less of the original PCR text and focusing on the validity of the PCR against a limited number of key issues to be assessed for each PCR. The PCVRs are also written up at considerable length, which may not be needed for this type of document; a more tabular format with more succinct statements could lead to a leaner production process without losing usefulness. The "PCR checklist" would be a good starting point for this.

John Mayne, 19/03/16,
I should have asked earlier, but are these (a) actually done by operations staff or consultants, (b) more than just project completion reports?
BdL: I understood they were done by operations, so in-house.

DE LAAT Bastiaan, 19/03/16,
A huge undertaking! And our report may not be the right place to do this (and we will make many enemies).

DE LAAT Bastiaan, 19/03/16,
I don't think it is a priority given the scarce resources and the small team.

John Mayne, 19/03/16,
But this is a huge task. Where are donors in supporting this? Certainly need partnerships. Indeed, donors seem to get off scot-free in all of this!

John Mayne, 19/03/16,
Might want more here. Get OIE somewhat responsible for building an evaluation culture with the strong backing of senior management. Hold different sorts of learning events, etc. The champions bit would be part of this. I've written a lot about this.

John Mayne, 19/03/16,
Yes, this is part of conducting evaluations. Should have advisory groups made up of operations, others and outsiders. We should discuss this good practice earlier.

John Mayne, 19/03/16,
Meaning what?

DE LAAT Bastiaan, 19/03/16,
Shouldn't we link those more closely to our findings? Maybe we could write them "together", i.e. "we found A, B and C, therefore we recommend Recommendations 1, 2, 3 and 4…". I think it should be clearer how each recommendation will help the CDB and OIE to improve on the aspects our Panel was supposed to look at. We could also formulate it as "in order to improve XXX, we recommend YYY". To be discussed.

Note: the link between self-evaluations, validations and independent evaluation is not clear at present, nor is the link between self-evaluations and the Quality at Entry documents – so one wonders a little what all the effort on their side is for. This is a real issue: they seem to do a lot of interesting and reasonably good work, but there is a lack of coherence. (That said, this is based only on the documents; no interviews were done to get a broader picture.)

This is something the EIB evaluation unit was criticised for in the past too. Since then, the EIB has started to include "younger" projects (sometimes still ongoing) in its samples. It also redoes the portfolio analysis right before finalisation of the report to see whether things have changed, and the services can of course indicate in their response if things have indeed changed over time.

Recommendations for improving the process for study approval and funding

(Give recommendations on priorities for OIE work.)

Funding should come preferably from the administrative budget. Unused monies could then be released in the annual budgetary reviews, but this should have no effect on the budget for subsequent years. SDF funding at a level […] it is surprising to find that a Board-approved OIE work programme and budget is inadequate; either the proposed budget per work programme […]


The Panel nevertheless encourages the creation of such a quality control unit (see the quality control group suggested above). Its role cannot be fulfilled by the OIE, as it lies outside the OIE's scope and present capacity, even though the OIE could play an advisory/methodological role.


APPENDICES

Appendix I – The External Review Mandate: Terms of Reference and Approach Paper

Appendix II – Review Approach, Data Collection and Analysis, and Limitations

Appendix III – Overview of OIE Evaluation Practice

Appendix IV – List of Persons Interviewed

Appendix V – List of Documents Reviewed

Appendix VI – List of Topics used to guide interviews with members of the CDB Board of Directors

Appendix VII – List of Topics used to guide interviews with CDB staff

Appendix III – Overview of OIE Evaluation Practice (prepared by the OIE in response to the Reviewer's request)

Caribbean Development Bank, Office of Independent Evaluation (OIE)

Category – Response

Percentage of projects subject to project (self-)evaluation – 100%: Project Completion Reports (PCR).

Percentage of projects subject to validation by OIE – Approximately 40–50%. About 15 projects exit the portfolio annually. The Evaluation Policy calls for all PCRs to be validated; however, OIE resources are insufficient. The validation process was reviewed in 2014; the OAC (a Board committee) now selects a sample of 6–8 PCRs for validation each year.

Percentage/number of projects subject to in-depth review by OIE – None, unless specifically requested by the OAC. Due to limited resources, the focus of the OIE evaluation work programme is on PCR validations and high-level evaluations, including country strategy and programme evaluations (CSPE).

Number of high-level evaluations conducted by OIE (e.g. sector, thematic, geographic) – 1–2 per year since 2011. The plan is 2–4 per year from 2016, including CSPEs (the first, for Haiti, planned for Q1 2016).

Number of project impact evaluations conducted by OIE – None. OIE includes "impact questions" in high-level evaluations.

Number of project impact evaluations conducted by Bank staff or other non-OIE staff – OIE is not aware of any impact evaluation conducted by the Bank. However, OIE provides technical support to the Basic Needs Trust Fund (BNTF) in its design of an M&E framework that entails impact evaluations.

Budget – In USD mn: 0.78 in 2015; 0.82 in 2016. This is equivalent to about 2.5% of the total CDB Administrative Budget. 75% of the budget is for staff salaries (4 professionals, 1 support staff), leaving around USD 190,000 (in 2015) for other expenses, including consultants, e.g. for external evaluations. Additional funding is accessed via the Special Development Fund (SDF); this varies according to the type and scope of the evaluation, e.g. the ongoing SDF 6/7 Evaluation is SDF-funded at USD 255,000. The budget is determined by the Board and is not separate from the administrative budget. SDF funding for evaluations is considered separately and is subject to the Bank's internal approval process. SDF funding cannot be used to cover OIE expenses such as staff time or travel, and country eligibility for SDF funding is also a consideration. OIE expressed concerns about this funding track in respect of predictability, independence and eligibility limitations.

Reporting line – The Head of OIE reports to the Board, with an administrative link to the President.

Terms of appointment for Head – 5-year term, renewable once. Appointed by the President with the agreement of the Board.

Right of return for Head – Not eligible for other staff positions.

Consultants as proportion of OIE budget – 2015: 19% (USD 145,000), plus SDF funding. SDF-funded evaluations are outsourced.

Last external evaluation (or peer review) of OIE – No external evaluation, though a review of the function was done in 2011, leading to the Evaluation Policy. The present OIE External Review was completed in April 2016.

Departments or special programmes supporting impact evaluation – None.


Appendix IV – List of Persons Interviewed

Name – Function relative to OIE – Type of interview

Mrs. Colleen Wainwright – Member, CDB Board of Directors (UK) – Face to face
Mrs. Cherianne Clarke – Alternate Member, CDB Board of Directors (UK) – Face to face
Mrs. Jean McCardle – Member, CDB Board of Directors (Canada) – Face to face
Dr. Louis Woodroofe – Member, CDB Board of Directors (Barbados)
Mr. A. de Brigard – Former Member, CDB Board of Directors – Skype interview
Mr. H. Illi – Former Member, CDB Board of Directors – Telephone interview
Mrs. Claudia Reyes Nieto – Member, CDB Board of Directors – Telephone interview
Mr. Bu Yu – Alternate Director, CDB Board of Directors – Face to face
Mr. Michael Schroll (Barbados) – Head, OIE – Series of interviews via Skype and face to face
Mr. Mark Clayton – OIE Senior Evaluation Officer – Focus group
Mrs. Egene Baccus Latchman – OIE Evaluation Officer – Focus group
Mr. Everton Clinton – OIE Evaluation Officer – Focus group
Mrs. Valerie Pilgrim – OIE Evaluation Officer – Focus group
Dr. Justin Ram – CDB Director, Economics Department – Face to face
Mr. Ian Durant – CDB Deputy Director, Economics Department – Face to face
Dr. Wm Warren Smith – CDB President – Joint interview, face to face
Mrs. Yvette Lemonias-Seale – CDB Vice President, Corporate Services & Bank Secretariat – Joint interview, face to face
Mr. Denis Bergevin – CDB Deputy Director, Internal Audit – Face to face
Mr. Edward Greene – CDB Division Chief, Technical Cooperation Division – Face to face
Mrs. Monica La Bennett – CDB Deputy Director, Corporate Planning – Face to face
Mrs. Patricia McKenzie – CDB Vice President, Operations – Face to face
Ms. Deidre Clarendon – CDB Division Chief, Social Sector Division – Face to face
Mrs. Cheryl Dixon – CDB Coordinator, Environmental Sustainability Unit – Focus group
Mrs. Denise Noel-Debique – CDB Gender Equality Advisor – Focus group
Mrs. Tessa Williams-Robertson – CDB Head, Renewable Energy – Focus group
Mrs. Klao Bell-Lewis – CDB Head, Corporate Communications – Face to face
Mr. Daniel Best – CDB Director, Projects Department – Face to face
Mr. Carlyle Assue – CDB Director, Finance Department – Face to face


Appendix VI – Interview Guide: Members of the CDB Board of Directors

Below is a list of themes that I should like to raise with you, based on your experience and knowledge of the CDB's independent evaluation function (Office of Independent Evaluation).

In each case, I should be grateful if you could illustrate your responses with examples, or help this Review by, wherever possible, sending me (or telling me where I can find) any documents that could support your responses.

This guide is being sent to you in advance to help prepare our meeting. However, our interview will be conducted more in the style of a conversation. The following sub-questions will be used to GUIDE the interview. Please feel encouraged to raise any additional issues that you feel we should take into account.

On the governance and independence of CDB's evaluation function
What mechanisms are in place to support its independence?
How satisfactory are the current arrangements, in your opinion?
How is the balance between independence and the need for interaction with line management dealt with by the system? For example, what mechanisms exist to ensure that the OIE is kept up to date with decisions, policy/programme changes and other contextual changes that could have an effect on OIE evaluation studies and evaluation planning?

On the OIE's Evaluation Policy
The CDB's Evaluation Policy was established in 2011. To what degree do you feel it is adequate? Still relevant?
What suggestions do you have for any improvements?
In your opinion, how adequate is the current quality assurance system for overseeing the evaluation function?

On the quality and credibility of evaluation studies
To what degree do you believe the reports are fair and impartial?
Do you consider them to be of good quality? Are they credible?
Are you adequately consulted/involved on evaluations of interest to you?

On the relevance and usefulness of evaluations
How well does the OIE engage with you / your committee during the preparation, implementation and reporting of an evaluation study to assure that it will be useful to the CDB?
How are the priorities set for the independent evaluations? What criteria are used? Are you satisfied with the current procedure?

When OIE evaluation studies are outsourced to external consultants, what criteria are used to make this decision?
How are the priorities for the OIE's 3-year rolling work plan agreed? In your opinion, is the current plan adequate in terms of coverage and diversity?
In your opinion, do the evaluations address important and pressing programmes and issues?
To what extent do you feel that the OIE's evaluations integrate cross-cutting themes such as gender, energy efficiency/renewable energy and climate change? What improvements might be made, and how?

On the dissemination and uptake of evaluation findings and recommendations
To what extent do you feel that evaluation findings are communicated to the CDB and its stakeholders in a a) useful, b) constructive and c) timely manner?
Are evaluation recommendations useful? Realistic?
What mechanisms are in place to assure that evaluation results are taken into account in decision making and planning? What improvements do you feel could be made?
How have you used the findings from any evaluations? Examples?
To what degree do you feel that evaluation contributes to institutional learning? And to institutional accountability? Any examples?
What mechanisms are in place to ensure that knowledge from evaluation is accessible to CDB staff and other relevant stakeholders? Are the current arrangements satisfactory?
How satisfied are you with the current arrangements? What expectations do you have for the future?

On resources
How is the OIE resourced financially, and is this satisfactory?
What about the OIE staff: are all the important areas of expertise represented in the team?

On this Review of the Office of Independent Evaluation
What are your expectations? What are you particularly hoping to learn from it?

Thank you very much for your cooperation and input


Appendix VII – Interview Pro-Forma: CDB Staff Members

This presents a list of the topics raised during interviews. It was used to guide the open-ended discussion; the sequence and exact wording of the questions may therefore not necessarily have followed this order, or been asked in exactly this way.

Changeover to an Independent Evaluation Office: expectations? Advantages and disadvantages?

Satisfaction with working relations between operations and the OIE, from your perspective?

Process of dealing with the PCRs and CCRs? Advantages and limitations?

Quality and credibility of the validation process?

How are the self-evaluation reports used?

Credibility and quality of OIE's evaluation reports?

Communication of self-evaluations and OIE independent evaluations: to whom, and in what way? Possible improvements?

Independence of the Office of Independent Evaluation (OIE)

Independence is absolutely central to the integrity and trustworthiness of evaluation. It is an agreed requirement within the development agencies and in the evaluation community as a whole. In examining the issue of independence and good practice, the reviewers are guided by the Evaluation Cooperation Group's (ECG) recommendations on good practices, by the CDB's Evaluation Policy, and by the 2011 consultancy review of independence relative to the CDB's evaluation and oversight division.148 The appraisal is based on a comparison of the ECG's recommendations on independence149 with the current OIE status.

OIE and Independence: Recommendations from the Evaluation Cooperation Group (ECG)

The ECG considers the issue of independence according to four specific areas: organisational (structural) independence; behavioural (functional) independence; protection from outside interference (operational independence); and safeguards against conflicts of interest.

148 Osvaldo Feinstein & Patrick G. Grasso, Consultants, May 2011, Consultancy to Review the Independence of the Evaluation and Oversight Division of the Caribbean Development Bank.
149 ECG (2014) Evaluation Good Practice Standards, Template for Assessing the Independence of Evaluation Organizations, Annex II.1.


John Mayne, 19/03/16,
This section is way too long, giving "Independence" much too much import. And in the end, it is not an issue of concern!
MLL: Independence and evaluation products are the two largest parts. Independence was one of the main reasons for setting up the OIE, and the theme was important to the CDB: the review needed to say how it compares now with international standards. Hence the lengthy discussion.

Organizational independence ensures that the evaluation unit and its staff are protected against any influence or control by senior or line management, and have unrestricted access to all documents and information sources needed for conducting their evaluations. It also ensures that the scope of the evaluations selected can cover all relevant aspects of the institution.

Behavioural independence refers generally to the evaluation unit's autonomy in setting its work programme, conducting its studies, and producing quality reports that can be delivered without management interference.

Protection from outside interference refers to the extent to which the evaluation function is autonomous in setting its priorities, conducting its studies and processes, reaching its judgments, and managing its human and budget resources, free from management interference.

Conflict of interest safeguards refer to protection against staff conflicts of interest – whether arising from current, immediate, future or prior professional and personal relationships and considerations, or from financial interests – for which there should be provision in the institution's human resource policies.

The OIE’s Independence in Practice

Organisational / structural independence

On the whole, the Panel acknowledges and commends the efforts made by the CDB to assure the OIE's organisational independence. The CDB's Evaluation Policy provides for the OIE's organisational independence from line management, and the interview data suggest that there is also wide acceptance and acknowledgement of why the OIE should have such independent status. Table 1 below provides our overall assessment of this aspect of OIE's independence when compared with the ECG recommendations.150

Table 1: OIE organisational independence compared with ECG recommendations

Aspect: The structure and role of the evaluation unit.
Indicator: Whether the evaluation unit has a mandate statement making clear that its scope of responsibility extends to all operations of the organization, and that its reporting line, staff, budget and functions are organizationally independent from the organization's operational, policy and strategy departments and related decision-making.
Assessment: Partially complies – The Policy is broad enough to cover the full range of MDB types of evaluation. In practice, however, this would not be possible without additional human and budget resources.

Aspect: The unit is accountable to, and reports evaluation results to, the head or deputy head of the organization or its governing Board.
Indicator: Whether there is a direct reporting relationship between the unit and a) the Management, and/or b) the Board, or c) the relevant Board Committee of the institution.
Assessment: Complies – OIE reports to the Board of Directors (BoD) through its Oversight Assurance Committee (OAC).

Aspect: The unit is located organizationally outside the staff or line management function of the program, activity or entity being evaluated.
Indicator: The unit's position in the organization relative to the program, activity or entity being evaluated.
Assessment: Complies – The OIE is located outside, and is therefore independent of, CDB line management.

Aspect: The unit reports regularly to the larger organization's audit committee or other oversight body.
Indicator: Reporting relationship and frequency of reporting to the oversight body.
Assessment: Complies – The OIE reports five times per year to the OAC. Board approval for an additional executive meeting between the Head of the OIE and the OAC at least once per year was given in October 2015.

Aspect: The unit is sufficiently removed from political pressures to be able to report findings without fear of repercussions.
Indicator: Extent to which the evaluation unit and its staff are not accountable to political authorities, and are insulated from participation in political activities.
Assessment: Complies.

Aspect: Unit staffers are protected by a personnel system in which compensation, training, tenure and advancement are based on merit.
Indicator: Extent to which a merit system covering compensation, training, tenure and advancement is in place and enforced.
Assessment: Partially complies – OIE staff are covered by CDB human resource policy. However, the skill needs of OIE staff ought to be regularly reviewed in light of its move towards higher-level evaluations. Appraisal of skill needs and hiring of relevant staff should be completely under the authority of the Head of Evaluation; this is not sufficiently clear in the Policy or in the other documents we reviewed.

Aspect: The unit has access to all needed information and information sources.
Indicator: Extent to which the evaluation unit has access to the organization's a) staff, records and project sites; b) co-financiers, other partners and clients; and c) the programs, activities or entities it funds or sponsors.
Assessment: Complies – The available evidence suggests that there is no reason to doubt such access. But systematic and easily accessible documentation is lacking in the CDB; it is one of its weak points. Delays in getting hold of the relevant documents can have consequences for the timeliness of evaluation studies.

150 Based on ECG (2014) Template for Assessing the Independence of Evaluation Organizations, Evaluation Good Practice Standards, Annex II.1.

John Mayne, 2016-03-19,
Don't need the first column.

However, independence should not mean isolation. There appears to be a detachment between the OIE and the CDB that is of concern to the Panel: on the one hand, between the OIE and operations staff; on the other, in the structural arrangements between the OIE and senior management.

In the first place, in agreeing that the OIE should concentrate on strategic, thematic and in-depth evaluations, responsibility for project monitoring and evaluation was handed over to operations. The division is clear and respected. However, it has its drawbacks. With the OIE no longer systematically involved at the front end of project design, the monitoring data needs are likely to be poorly defined, and weak monitoring data will contribute to weaker evaluations. (More on this point under the heading "Self- and Independent Evaluations".)

In the reviewers' opinion, it is a common misunderstanding to assume that providing evaluator advice on monitoring and evaluation data will compromise evaluator independence. On the contrary, evaluation input into project design is essential to ensure that the logic, indicators and data needs are addressed, so that at some future point in time an evaluation of the achievements can be empirically grounded.


Bastiaan de Laat, 19/03/16,
I would also change the formulation avoiding the negation, e.g. "The available evidence suggests that...".
ML: Done

John Mayne, 19/03/16,
But I would expect you had interview findings on this. Have any issues been mentioned to you?
MLL: See changes

This is not to say that the OIE no longer has any influence at the front-end design stage; the point of focus has merely shifted. The OIE now systematically provides such input more generally to the corporate planning teams for the tools and systems they are developing to support the MfDR framework. The monitoring data for projects and their implementation should improve once the Project Performance Evaluation System (PPES) and the Portfolio Performance Management System (PPMS) are updated and operational.

In the second place, the OIE has limited formal access to the Advisory Management Team (AMT) weekly meetings, where the President and senior management gather to exchange up-to-date information on the dynamics of CDB policy and practice. The OIE is not regularly invited to these meetings in any capacity, nor given a copy of the agenda or minutes; it is only occasionally invited to attend in order to discuss an evaluation report or management feedback. This means that the OIE is unlikely to pick up on the "when" and "what" of key decisional issues, or to provide input into the discussion based on evaluative information. Its observer status at Loans Committee meetings, or its role as a participant informer at OAC and BoD meetings and discussions, does not necessarily provide the same insight into the dynamics of management actions and decisions.

To respond to this situation, the President has agreed to meet regularly with the Head of the OIE in order to keep him up to date with CDB strategic thinking. This is a welcome change.

OIE Independence and Behavioural Issues

The Panel has concerns about some behavioural issues. For example, through both the interviews and the documentary review, we learned of considerable delays in processing both the independent evaluation reports and OIE's validations of the CDB's self-evaluations. Delays generally arise in receiving feedback on the independent reports, first from the relevant operational department and then from the AMT, and subsequently in providing the OIE with a management response that is initially drafted by operations staff before being reviewed by the AMT. (OIE reports cannot be submitted to the OAC without the relevant management response.) This two-layer process for preparing submissions to the Board is inefficient and could in future become a threat to evaluation's independence by delaying OIE's timely reporting to the OAC.

OIE validations of the CDB self-evaluations are also submitted to the OAC, and it is in both sides' interest to clear up any misunderstandings beforehand. Despite attempts to improve the timeframe for completing these validations, delays are the norm rather than the exception. Table 2 below summarises our assessment of the behavioural aspects of independence.

Table 2: OIE and Behavioural Independence

Aspect: Ability and willingness to issue strong, high-quality and uncompromising reports.
Indicator: Extent to which the evaluation unit a) has issued high-quality reports that invite public scrutiny (within appropriate safeguards to protect confidential or proprietary information and to mitigate institutional risk) of the lessons from the organization's programs and activities; b) proposes standards for performance that are in advance of those in current use by the organization; and c) critiques the outcomes of the organization's programs, activities and entities.
Assessment: Partially complies – a paucity of data and documentation sometimes hinders the quality of reports. The OIE emphasises the learning part of evaluation and is cautious in its criticism, recognising that management is going through a transitory stage and can still be overly defensive.

Aspect: Ability to report candidly.
Indicator: Extent to which the organization's mandate provides that the evaluation unit transmits its reports to the Management/Board after review and comment by relevant corporate units, but without management-imposed restrictions on their scope and comments.
Assessment: Partially complies – reporting to the Board is sometimes compromised by delays in the review/comment process between the OIE and the CDB. Any delay in producing a Management Response also impairs the timely submission of a report to the Board, since the two have to be submitted together.

Aspect: Transparency in the reporting of evaluation findings.
Indicator: Extent to which the organization's disclosure rules permit the evaluation unit to report significant findings to concerned stakeholders, both internal and external (within appropriate safeguards to protect confidential or proprietary information and to mitigate institutional risk); and who determines the evaluation unit's disclosure policy and procedures: Board, relevant committee, or management.
Assessment: Partially complies – The OIE conforms to the CDB's disclosure policy. However, the dissemination of evaluation findings currently appears to be restricted to website publication and reports to the Board. A more targeted communication strategy, including other key stakeholders such as project implementers in the BMCs, should be developed and put in place.

Aspect: Self-selection of items for the work program.
Indicator: Procedures by which work program items are chosen, through systematic or purposive means, by the evaluation organization; consultation on the work program with Management and Board.
Assessment: Complies – The OIE draws up its work programme after consultation with both CDB Management and the Board, to seek their input on relevant topics and themes.

Aspect: Protection of the administrative budget, and other budget sources, for the evaluation function.
Indicator: Line item of the administrative budget for evaluation determined in accordance with a clear policy parameter and preserved at an indicated level or proportion; access to additional sources of funding with only formal review of the content of submissions.
Assessment: Partially complies – The administrative budget supporting OIE work is protected. Access to additional sources of funding is possible if well argued and justified, but the approval process is complex and inefficient (see Figure 1 below).

OIE and Protection from External Influence or Interference

Our overall assessment is provided in Table 3 below. The OIE's independence in the design, conduct and content of its evaluations does not appear to be subject to any external interference. But securing funding from sources outside the OIE's administrative budget, i.e. from the Special Development Fund, is an unduly complex and long process. As such, we consider that the current funding process can affect the OIE's choice with regard to the type of evaluations it can undertake. (See Figures 1 and 2 below.)

Table 3: OIE and its Independence from External influence or interference

Aspect: Proper design and execution of an evaluation.
Indicator: Extent to which the evaluation unit is able to determine the design, scope, timing and conduct of evaluations without Management interference.
Assessment: Complies – though within the limits of the restricted human and financial resources available.

Aspect: Evaluation study funding.
Indicator: Extent to which the evaluation unit is unimpeded by restrictions on funds or other resources that would adversely affect its ability to carry out its responsibilities.
Assessment: Partially complies – OIE must work within the limits of the agreed administrative budget wherever possible; if additional resources are needed for studies, it must seek alternative funds elsewhere. The budget limitations can have an effect on the type of evaluations undertaken, and therefore on its independence in terms of choice.

Aspect: Judgments made by the evaluators.
Indicator: Extent to which the evaluator's judgment as to the appropriate content of a report is not subject to overruling or influence by an external authority.
Assessment: Complies – the evidence available suggests that the Board and Management accept the evaluators' independent interpretation and conclusions. Management responses are agreed to be the accepted place to raise any difference of opinion.

Aspect: Evaluation unit head hiring/firing, term of office, performance review and compensation.
Indicator: The mandate or equivalent document specifies procedures for the a) hiring and firing, b) term of office, c) performance review, and d) compensation of the evaluation unit head that ensure independence from operational management.
Assessment: Complies – the Head of OIE is appointed by the CDB President in agreement with the OAC for a 5-year period, renewable once. The Head can be removed from office by the President or the Board, but only with the agreement of both parties. However, the Head reports to the President for all administrative and personnel matters. Even though this was not recommended in the Osvaldo Feinstein & Patrick G. Grasso report on independence in 2011, the BoD accepted CDB's reasons for keeping this arrangement (e.g. most OAC members are non-resident and cannot oversee day-to-day work).

Aspect: Control over unit staffing.
Indicator: Extent to which the evaluation unit has control over a) staff hiring, b) promotion and pay increases, and c) firing, within a merit system.
Assessment: Partially complies – All OIE staff members are treated in the same way as other CDB staff. The Head has limited control over the hiring, firing or promotion of OIE staff.

Aspect: Continued staff employment.
Indicator: Extent to which the evaluator's continued employment is based only on reasons related to job performance, competency or the need for evaluator services.
Assessment: Partially complies – Whilst the EP is clear about procedures for hiring, firing and promotion, all of which must conform with CDB human resource policy, nothing is said about how any difference of opinion between the CDB and the Head of the OIE regarding continued staff employment would be handled, should the level of technical or interpersonal competencies needed to meet new demands change.

John Mayne, 19/03/16,
Maybe coming later, but do we say anything about the size of the budget? Always a tricky subject, but does it allow them to do even a few decent evaluations?
MLL: under resources section

Bastiaan de Laat, 19/03/16,
We could make a suggestion to disconnect the two, as does the AsDB, which publishes the report with a placeholder for the management response, which "comes when it comes". At the EIB we have a two-step approach (first reading without management response, second reading with management response); there is normally one or two weeks needed to prepare the management response, and that deadline is generally respected.
MLL: Can be put in the recommendations section.

Avoidance of Financial, Personal or Professional Conflicts of Interest

This particular aspect refers to the organisation's Human Resources Policy; there must be provisions in place to protect against actual or potential conflicts of interest. The Panel requested, via the OIE, evidence from Human Resources on any such provisions, but did not receive an answer. It must be assumed that this aspect of independence, past or present, does indeed form part of normal CDB Human Resource Policies.

To conclude: The Panel is impressed with the measures the CDB has taken to assure the organisational independence of the OIE. Its independent status is accepted and respected by senior and line management. The OIE's budget, however, is not independent from the overall CDB administrative budget, and this affects its choice of evaluation types and approaches. Some of the behavioural issues affecting independence were also of concern, especially the delays in the exchange of documents between the OIE and the operations departments, which have a direct effect on timely reporting to the OAC. As for protection from outside interference, our concerns largely relate to the OIE's independence over staffing issues; there are potential loopholes in the current arrangements that could undermine the OIE's autonomy over its staff.

Bastiaan de Laat, 19/03/16,
Why is this relevant?
MLL: Because Michael recently wanted to extend a retiring staff member for only 1 year, because he didn't have the skills to adjust to the more strategic evaluation needs. Management overturned his decision and extended the contract for a further 3 years.

Bastiaan de Laat, 19/03/16,
What is the evidence for this? And what does it mean to "respect"?
MLL: See changes

OIE's Strategy, Work Practices and Work Programme

The OIE has had to develop a plan to implement the Evaluation Policy. This raises such questions as: what are the priorities, and what is the timeframe for achieving which activities? These were partially addressed in the OIE work programme and budget for 2012 to 2014, but that programme proved to be over-ambitious. Much of the period 2012 to 2015 has therefore been taken up with preparing OIE's shift in focus from project-based evaluations to high-level thematic and in-depth strategic studies. This has meant adopting a three-way approach: (1) for self-evaluations, reducing its time input to supporting the process; (2) for independent evaluations, taking stock of the gaps in coverage and expertise; and (3) networking to share experiences with centres of expertise and to align OIE with international practices. In addition, amongst other duties, it has been supporting the development of MfDR tools and systems, such as the Project Performance Assessment System, by providing advice and input on programme logic and monitoring needs. The OIE plans to conduct 2–4 high-level studies per year from 2016. The OIE has also chosen to increase the involvement of its professional staff in conducting independent evaluations. Outsourcing is still needed when the study is funded by the SDF, when time is limited, and when specific expertise is needed.

But the plans appear to place little emphasis on the activities associated with evaluation management (e.g. knowledge management) and the time these require. Other time demands mentioned in the previous sections, such as delays in completing reports and validation work, have also affected OIE's plans. The more recent work plans have set the task of delivering utility-focused and timely evaluations, but they lack clarity on how the OIE proposes to surmount the time and data issues, which are far from new. In short, the strategy lacks a theory of change and a timeline. The challenges that must be dealt with to enable the OIE to move up the MDB evaluation pyramid,151 not least given the limited resources available, are brought out in the remaining sections of this Review.

To conclude: The OIE has made a first step in proposing a strategy for establishing itself as an independent evaluation resource. But its strategy lacks a theory of change and a prioritisation of tasks, which should include more emphasis on evaluation management activities.

The Value / Usefulness of OIE's Independent Evaluations

Evaluation is a powerful tool that can provide useful, evidence-based information to help inform and influence policy and practice. But useful evaluations depend not only on the evaluators' skills but on several other important factors as well: (1) planning evaluations to be relevant to the priorities of the organisation's work, with results delivered in time to be useful; (2) the degree of consultation with, and ultimately ownership by, those who seek evaluative information; (3) the tools used to support the evaluation process per se; and (4) the credibility and quality of the evaluation products.152

151 US Treasury Report to Congress on Evaluation Standards and Practices at the Multilateral Development Banks (September 2014), Annex C.
152 These aspects reflect the principles and good standards of the Evaluation Cooperation Group and the evaluation community more generally.


1. Planning relevant and timely evaluations

The OIE is now working to a 3-year rolling work plan that sets out the broad areas for enquiry. So far, there are no agreed criteria for selecting the specific topics for independent evaluation, although the priorities tend to reflect those of the CDB's strategic plan. Decision-making is instead rather ad hoc, based on a process of dialogue between the OIE and the CDB, and between the OIE and the Board.

One of the OIE's two objectives for 2015, therefore, was to define a work plan and agree priorities based on an approach that is "utilisation-focused". This means that studies are selected and planned to be relevant and useful to the organisation's needs.

The OIE has achieved this objective with respect to its latest studies: the Special Development Fund (SDF) Multicycle 6&7 Evaluation, the Haiti Country Strategy Evaluation, and the evaluation of the CDB's Policy-Based Operations. Each of these three has been planned to deliver its results in time to provide the CDB Board of Directors with relevant information for negotiating the next round of funding. In spite of some delays due to a myriad of reasons, not least the extra effort needed to secure essential data, the studies are expected to deliver on time.

The processes for agreeing OIE's work plan and specific evaluations on the one hand, and for securing alternative funding on the other, are shown in Figure 1 below. The Panel was surprised to learn how bureaucratic (in terms of the internal approval process) and inefficient (in view of the time it takes) the process seems to be. The concern is that such a process could pose a threat to assuring the Board of "timely studies."

Figure 1: Selection of Evaluation Topics and Funding Source

[Flowchart – recoverable elements: within the Board-approved 3-year work programme and budget, the evaluation topic is selected in consultation with CDB Operations and the OAC/Board; the OIE drafts the Terms of Reference / Approach Paper (specific evaluation study design and budgeting); the Approach Paper is reviewed internally, finalised and submitted to the OAC/Board; the OAC approves (recorded in OAC minutes); Board approval is necessary if the budget is above USD 150,000, with Board notification only if USD 150,000 or below; the annual OIE report and work plan are submitted to the OAC.]


John Mayne, 19/03/16,
I hope we have some suggestions!
MLL: Check the recommendations section to make sure I did this, please!

2. Consultation and ownership

"The credibility of evaluations depends to some degree on whether and how the organization's approach to evaluation fosters partnership and helps build ownership and capacity in developing countries." (ECG good practices)

The OIE engages with the OAC, CDB senior management and operations in agreeing its 3-year work plan, and then in selecting the specific topics and themes. It also discusses the evaluation approach paper (design and implementation plan) with the CDB and the OAC before completing the final version. However, preliminary and final drafts of the report are submitted only to CDB line and senior managers, for comments and the identification of factual errors; only final versions are passed to the OAC. A series of discussions is held, first with the CDB and then with the OAC, on the results and their implications. Discussions with the OAC are more limited due to the overburdened agenda of OAC and Board meetings, as previously discussed.

In short, the OIE is to be commended for following professional good practice recommendations and standards on participative approaches; it has succeeded in introducing a modus operandi that involves the key players in the selection of evaluation topics, in the evaluation designs, and in the discussion of results. Figure 2 below provides an overview of the evaluation implementation and stakeholder engagement processes.

Figure 2: Evaluation Study Implementation and Feedback Loops

[Flowchart – recoverable elements: starting from the detailed ToR (or the final Approach Paper, if sufficiently detailed), the funding track runs either through the OIE administrative budget or through the SDF; for the SDF track, a TA Paper (similar in content to the Approach Paper but in a different format) is prepared and approved by the internal Loans Committee. The OIE selects and contracts consultants, if any, and the study is implemented under one of three arrangements: (A) fully outsourced to external consultants, with oversight by OIE; (B) conducted by OIE staff; or (C) jointly by external consultants and OIE. The steps then comprise: preparations (detailed evaluation plan, including tools, timeline and logistics); production of the Inception Report / Approach Paper; data collection and analysis; a presentation/workshop of interim findings and conclusions for immediate feedback and validation (summary and slides for discussion with the CDB); submission of the draft final report to the OIE, with review loops between the OIE and the CDB (potentially also the BMCs) and feedback to the evaluation lead; submission of the final report to the OIE for approval; transmission of the final OIE-approved report to CDB senior management for the Management Response, which is considered by the AMT; submission of the final report and Management Response together to the OAC/Board for endorsement; and preparation for disclosure and dissemination.]

Bastiaan de Laat, 19/03/16,
On which basis?
MLL: professional standards on participatory approaches for increasing ownership and buy-in


Notes to Figure 2

13. The OIE informed the Panel that this is an abbreviated version: there are, for example, additional steps (secondary processes) when evaluations are procured (by tender or single source), when there are additional review loops, when updates are given to the OAC, etc.

14. The Panel was also informed that the OAC may decide to return the report to the OIE, or to demand specific actions from Management based on the report.

This process is engaging and appears to have secured senior management and OAC interest and buy-in, as witnessed in the latest studies. But there is a downside too: the process takes considerable time and, in our view, is partly unnecessary. The Panel appreciates that staff from operations as well as the AMT may both want to confer on an appropriate management response, but this should not apply to reviewing an independent report for factual errors. The two-phase approach seems inefficient and, in our opinion, unnecessary.

Contact between the OIE, the CDB and/or the OAC during the actual study implementation is most often restricted to the occasional progress report, particularly when studies run behind schedule. There is no "accompanying group" for individual studies, which would include both internal and possibly external partners. Such "advisory groups" have shown their worth in a number of contexts, improving buy-in and providing strategic input as well. The OIE does, however, arrange discussions for reflecting on emerging findings, but we are not sure how systematic this feedback loop is.

More generally, outside of an evaluation study the OIE has limited dealings with operations. The OIE has an advisory role in providing them with help, particularly training, guidelines and tools to support self-evaluations. We are nevertheless concerned about the seeming distance between the two and how this has affected the perceived value of evaluation. (For more on this point, see the section on "Self- and Independent Evaluation" below.)

The Panel wishes to stress, however, that this is not the case for newly appointed senior managers. A much more open attitude to evaluation and appreciation of its potential value was evident; they expressed interest in drawing out important lessons on what works, how, for whom, and under what conditions. In one case, interest was followed up in practice: the OIE was recently invited by a senior manager to share evaluative knowledge and experience with his staff regarding policy-based operations.

Overall, we can certainly say that the key stakeholders within the CDB are adequately integrated into the evaluation process so as to foster their buy-in and ownership. More generally, though, we feel that the utility of independent evaluations can be improved by fostering a supportive climate that wants to learn through calculated trial and error. The constructive criticism that evaluation can offer can add value to understanding the strengths and weaknesses of such strategies. This, however, cannot be achieved overnight.

3. Tools to support the evaluation process

So far, during this transitional phase, the OIE has mainly focussed on improving the tools that support the operations areas' self-evaluations. This has left the OIE with little time to produce the checklists or tools to support its own studies. There are plans to develop an OIE Manual to guide and support the independent evaluation process. Such plans should be encouraged, as these documents will form a very important part of training, particularly for newcomers to the OIE team.

In the meantime, the OIE and operations staff refer to the Performance Assessment System (PAS) Manuals for evaluation activities. The manuals are based on DAC criteria and ECG principles. Much emphasis is given to the rating system and to how and what should be rated.


However, we find them lengthy, unwieldy and overcomplicated. Moreover, such manuals should be used for reference; they cannot and should not replace first-hand training in how to plan, conduct and manage the evaluation process.

Quality Assessment (QA) and Quality at Entry (QaE)

There was a transition period between 2012 and 2014 to establish the OIE. Work on the PAS, QaE, PCRs and ARPP, which had started earlier, was therefore completed after the OIE came into existence, but it effectively had no formal "home" in operations. The Panel was told that there had been some discussions about creating a Quality Assurance unit within CDB (OPS), but the current status is unclear.

The QaE Guidance Questionnaire was developed earlier and completed by the OIE. It was used to assess the documents that came across to the OIE for comments at the Review Stage. The results were then sent to the Portfolio Manager/Project Coordinator, indicating any gaps or issues that needed to be addressed or clarified. QaE Guidance Questionnaires were developed for all the Bank's lending products and CSPs, and to assess the quality of supervision.

After the QaE was launched bank-wide, several operations officers saw the merit in using the QaE Guidance Questionnaire in the field and adopted it as a tool for use during appraisal missions to cross-check and test their data collection and analysis.

OIE’s use of the QaE was discontinued in 2014 due to limited resources and a stronger focus on evaluations. It still sometimes comments on specific appraisals, but very selectively.

Both QaE and QaS (quality at supervision) are also addressed in the PAS Manuals. In addition, the QaE and PAS have been incorporated into Volume 2 of the Operations Manual (OPPM).

The Review Panel assessed the QaE forms. They are relatively standard, adapted to the specificities of the CDB, and contribute to judging a project's expected quality in a relatively objective way. As such, they are helpful, as a benchmark, in the ex-post assessment of projects.

The Panel considers that the lack of an established Quality Unit in the CDB (independent of the OIE) is a weakness that should be addressed in the near future.

4. Credibility and Quality of Evaluation Products

As with many other MDBs, evaluation activities include both independent and self-evaluations; the latter take the form of completion reports on operational projects and country strategy programmes, prepared by operations staff. The OIE then validates the quality of such reports. The self-evaluations should inform the more strategic studies conducted independently by the OIE. (More on the relationship between these two is provided later in this Review.)

An independent evaluation is processed as follows: the OIE prepares an Approach Paper (AP) for approval by the OAC. If the study is to be outsourced, the AP becomes the basis for a Terms of Reference (ToR) which, subject to the size of the budget, may be put to tender. The contracted evaluator then prepares an Inception Report (IR) after some desk and field research has taken place; this intermediary report is not produced if the OIE itself is conducting the evaluation. Sometimes a Progress Report is submitted, but otherwise the next stage is the delivery of the final report in various drafts. (Assessments are like evaluations but more limited in scope and depth of analysis.)

Since 2012, the OIE has produced a range of studies and approach papers. This review is based on those listed in Table 4 below, as provided by the OIE, covering the period from May 2012 to December 2015: 3 evaluations, 4 assessment studies, 14 validations of self-evaluations and 3 Approach Papers for upcoming evaluations.


Table 4: List of studies (N = 24) submitted to the Board for the period January 2012 to December 31, 2015

Board Meeting 251 (May 2012):
- Ex-Post Evaluation Report on Road Improvement and Maintenance Project, Nevis – St. Kitts and Nevis.
- Validation of Project Completion Report on Sites and Services – Grenada.
- Assessment of Effectiveness of Implementation of Poverty Reduction Strategy 2004-09.

Board Meeting 253 (Oct. 2012):
- Assessment of Extent and Effectiveness of Mainstreaming Environment, Climate Change, Disaster Management at CDB.

Board Meeting 254 (Dec. 2012):
- Assessment of the Implementation Effectiveness of the Gender Equality Policy and Operational Strategy of the Caribbean Development Bank.
- Validation of Project Completion Report on Enhancement of Technical and Vocational Education and Training – Belize.
- Validation of Project Completion Report on Fourth Road (Northern Coastal Highway Improvement Section 1 of Segment II) Project – Jamaica.
- Assessment of the Effectiveness of the Policy-based Lending Instrument.

Board Meeting 256 (May 2013):
- Validation of Project Completion Report on Expansion of Grantley Adams International Airport – Barbados.
- Validation of Project Completion Report on Fifth Water Supply Project – Saint Lucia.

Board Meeting 261 (May 2014):
- Validation of Project Completion Report on Immediate Response Loan, Tropical Storm Gustav – Jamaica.
- Validation of Project Completion Report on Social Investment Fund – Jamaica.
- Validation of Project Completion Report on Disaster Mitigation and Restoration – Rockfall and Landslip – Grenada.

Board Meeting 263 (Oct. 2014):
- Validation of Project Completion Report on Basic Education Project – Antigua and Barbuda.
- Approach Paper for SDF 6 & 7 Multicycle Evaluation.

Board Meeting 264 (Dec. 2014):
- Validation of Project Completion Report on Policy-Based Loan – Anguilla.
- Validation of Project Completion Report on Immediate Response Loan, Tropical Storm Arthur – Belize.
- Evaluation of Technical Assistance Interventions of the Caribbean Development Bank Related to Tax Administration and Tax Reform in the Borrowing Member Countries 2005-2012.

Board Meeting 265 (March 2015):
- Approach Paper for the Evaluation of Policy Based Operations.

Board Meeting 266 (May 2015):
- Validation of Project Completion Report on Upgrading of Ecotourism Sites – Dominica.
- The Evaluation of the Caribbean Development Bank's Intervention in Technical and Vocational Education and Training (1990-2012).

Board Meeting 267 (July 2015):
- Validation of Project Completion Report on The Belize Social Investment Fund I Project – Belize.

Board Meeting 268 (Oct. 2015):
- Approach Paper, Country Strategy and Programme Evaluation, Haiti.

The review and analysis of these documents are based on the UNEG Quality Checklist for Evaluation Reports (http://www.uneval.org/document/detail/607) as well as on ECG guidance (the Big Book on Good Practice Standards).


Approach Papers

Three Approach Papers (APs) were made available to the Panel (see Table 4 above). An AP describes the rationale for the evaluation, the background to the topic evaluated, the evaluation framework (criteria and questions) and the approach. It also describes the team and provides initial planning. Being the first main deliverable of OIE's evaluation process, APs are the starting point and therefore a major determining element in the roll-out of each evaluation. APs therefore "have to get it right".

The APs examined are clearly written, well-structured and of reasonable length.153 We were surprised to find, however, that they do not make explicit the objectives of the evaluated intervention(s), e.g. through a clear objective tree, or through an explicit theory of change, intervention logic or logframe. Whilst one of the APs contains, in an appendix, a results framework for the evaluation, the results framework for the intervention (PBO) itself is lacking.

Inception reports

Only one Inception Report was given to the Panel for review (SDF 6&7). It gives an in-depth description of the evaluated programme and provides a clear Theory of Change. It is good practice that this is established after a pilot field mission, which helps to amend the initial AP on the basis of field observations and to sharpen the evaluation questions if needed.

However, it is still considered good practice to have the Theory of Change elaborated in an intervention's initial design documents; this would facilitate OIE evaluations after project completion. Establishing the Theory of Change of any intervention could be included more explicitly in the QaE form, to be developed jointly by the Quality unit referred to above and the OIE.

Evaluations and Assessments

Three evaluations and four assessment reports completed during the review period were considered. Assessments are similar to evaluations but have a narrower scope; they focus on a limited set of aspects or judgment criteria, mainly effectiveness, i.e. the achievement of objectives. Evaluations generally base their judgment on the internationally recognised DAC criteria as well as on aspects of the CDB's and BMCs' management of the intervention.

In general, these reports are of reasonable quality. In the main, they explain the evaluated object154 and provide evaluation objectives. The findings are organised around the evaluation criteria or questions detailed in the scope and objectives section of the report. They are based on evidence derived from the data collection and analysis methods described in the methodology section. The reports tend to dwell on the limitations that the evaluation encountered, but without becoming defensive. In one case (the PBL Assessment), the report starts with a summary of the reviews on the topic done by other MDBs. This was a pleasant surprise and indeed a good practice that could well be adopted in future evaluations too.

However, the reports also show several significant weaknesses:

- Reports do not always provide clear (reconstructed) intervention logics or theories of change for the intervention(s) evaluated.155 Evaluation criteria and questions are defined at a fairly general level. They are translated into more precise "research questions" (in an "Evaluation Design Matrix", for each project and each criterion). However, it is unclear how these questions relate to the intervention logic, as this is not made explicit. This may be done in inception reports (of which, as noted above, only one was available for review), but should also be done in the final reports.

153 Opportunities remain, of course, to be more concise and to move parts to appendices, e.g. detailed descriptions of the evaluation team or part of the description of the evaluated intervention.
154 Sometimes at great length: in the SDF 6&7 multicycle evaluation report, for instance, the findings only begin on page 30.
155 Again, the SDF 6&7 evaluation is said to be guided by a "Logic Model" which is not explained.

- The reports do not describe the link from the evaluation questions to the answers: how the evaluation judgments are made and how these ultimately translate into ratings for each criterion and each project. In other words, the explanation provided in the evaluation frameworks is inadequate. The "evaluation design matrix" currently used does not provide sufficient insight into how an intervention's performance is ultimately judged. Links between findings, conclusions and recommendations could be improved by making this more explicit. In short, reports should include the story of how the evaluand is credibly linked to any observed outcomes and impacts, and should be clear on how causal claims are made.

- With the exception of the PBL Assessment, reports are lengthy and detailed. One reason for this is an over-emphasis on ratings. Their detailed discussion, project by project and criterion by criterion, occupies a very prominent position in the main body of the evaluation reports. Although ratings are traditionally an important element in MDB evaluations, too strong an emphasis can be tedious and may distract the reader from the real lessons to be drawn. The detailed discussion of ratings, and their evidence base, would be better placed in an appendix, with a brief summary in the main report. This would give the lessons and recommendations a more prominent position than is now the case, and would make the evaluation reports not only shorter but also more interesting to read; this could help add value to evaluation's image within the organisation.

- The reviewers feel that the OIE evaluations tend to over-emphasise objectives-based evaluation157 and the DAC criteria to the exclusion of other evaluation approaches such as Developmental Evaluation (Patton, 2010158); evaluation should be case-specific and answer the actual information needs of managers and other decision makers rather than always concentrating on final performance.

- Related to the previous point (and again with the exception of the PBL Assessment), executive summaries (approximately 8 pages) are too long. To increase an evaluation report's potential impact, they would need to be reduced to 2 to 3 pages and be more focused; again, this could be done by dwelling less on the individual ratings of projects and more on key findings, lessons and conclusions. More generally, reports could be better adapted to the needs of their different audiences. Although not strictly limited to evaluations, the Health Evidence Network reports159 are a model that could be adapted for evaluation reporting purposes; they are specifically geared towards addressing policy and decision-making.

- The "Recommendations to BMCs" are an interesting feature of the reports, although we are unsure to what degree such recommendations can be effectively followed up by the OIE or the Bank; they could certainly be taken up with BMC Board members.

157 The focus of an objectives-oriented evaluation is on specified goals and objectives and on determining the extent to which these have been attained by the relevant intervention. See, for example, Worthen, Sanders and Fitzpatrick (1997) Program Evaluation: Alternative Approaches and Practical Guidelines (2nd ed.). White Plains, NY: Addison Wesley Longman.
158 Patton, M.Q. (2010) Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use. Guilford Press.
159 See the reports available from the WHO's Health Evidence Network at http://www.euro.who.int/en/data-and-evidence/evidence-informed-policy-making/health-evidence-network-hen

- Reports (e.g. the evaluation report on Technical Assistance) focus heavily on technical problems encountered during the evaluation. Although these are important issues, to improve the report's flow and "readability" this material would again be better placed in an appendix. What counts is the story of the intervention, not the story of the evaluation (see, for instance, the "Limitations" section in the TA report).

OIE Validations of Project and Country Strategy Programme Completion Reports (referred to collectively as PCRs hereafter)

As noted above, the OIE has the mandate to validate the Projects and Economics Departments' PCRs and CSPCRs. However, in this period of transition, much of the OIE's work since 2012 has been devoted to the backlog of CDB self-evaluation validations. In theory, an estimated 15 completion reports are due each year, but delays in submitting the reports for validation are commonplace. With the change of Head in June 2014, the OIE therefore secured the OAC's agreement to reduce the number of validations to a maximum of 6 per year. Even so, a backlog continues to accumulate, as only 2 PCRs were given to the OIE for validation in 2015.

The validations tend to repeat the different items reported in the PCRs and then provide extensive comment on each. The PCVRs go into great depth and detail, which makes the documents rich and complete. This is their strength, but also their weakness: the depth and level of detail, as well as the repetition from the original PCRs, make PCVRs (overly) lengthy (20-40 pages) and difficult to read. The OIE reported spending approximately 27.2% of its time on validating PCRs in 2015, compared with 44.4% on its core work, i.e. doing or managing the higher-level evaluations. In other words, for every hour spent on higher-level evaluations, more than half an hour goes to the validation process. Finally, the PCVRs now seem to be, to a great extent, a standalone output of the OIE. It is not always clear to us how they are being used as the "building blocks" for the OIE's independent evaluations. Making this clearer in the independent evaluations would help show the link, and therefore the value of the time being spent on the self-evaluation validations.
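As a rough check on these proportions (taking the reported percentages at face value; the underlying staff-time records were not made available to the Panel), the ratio of validation time to core evaluation time is

\[
\frac{27.2\%}{44.4\%} \approx 0.61,
\]

i.e. roughly 37 minutes of validation work for every hour of higher-level evaluation work.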

To conclude, the review finds that the OIE has taken steps to improve the perceived utility of evaluation in several ways. First, by planning its work to provide relevant and timely evidence geared towards helping the Board with its oversight and decision-making tasks; the topics are selected through dialogue between the OIE and key CDB stakeholders and reflect the priorities of the CDB's strategic plan. Secondly, by securing the interest, and consequently the buy-in, of the OAC and CDB senior management through engaging their input throughout the evaluation process. This is evidenced by the reported interest in the latest three studies: the country strategy programme evaluation in Haiti, the evaluation of policy-based operations and the SDF 6&7 multicycle evaluation.

The OIE products are of acceptable quality and could be even better if some of the shortcomings noted above were addressed. The products themselves, however, do not impair the utility of the OIE's work; this is undermined in other ways: (1) by the time delays in commenting on PCRs (on the OIE side) and in providing feedback on the independent evaluations (on the operations and management side); and (2) by the inefficient processes for agreeing topics and funding sources, and for providing the OIE with management responses to its reports.

Putting Evaluation to Use: transparency, feedback and follow-up

There are several ways in which evaluation can be, and is being, used. As John Mayne has pointed out in his many publications on the issue,160 when we talk of evaluation use we are mainly thinking of instrumental use: use made to directly improve programming and performance. But there is also conceptual use, which often goes unnoticed or, more precisely, unmeasured; this refers to use that enhances knowledge about the type of intervention under study in a more general way. And there is reflective use: using discussions or workshops to encourage and support reflection on the evaluation findings to see how they might contribute to future strategies.

160 See, for example, his opening chapter in Läubli Loud, M. and Mayne, J. (2014) Enhancing Evaluation Use: Insights from Internal Evaluation Units. Sage Publications.

In the case of the CDB, there is some evidence to suggest that "use" is not only instrumental; other types are also developing. For example, the review of draft evaluation reports includes reflective workshops that discuss not only the findings but also seek to draw out the important lessons (reflective use).

Another important use, as recommended by the ECG, is to draw a synthesis of lessons from a number of evaluations from time to time and make it publicly available. Indeed, the Panel was impressed to hear that in the past the evaluation unit had done this, drawing on lessons from evaluations in the power sector (conceptual use). Although nothing has happened since, it is now on the "to do list" for 2016 (OIE's 2016 Work Plan).

As for instrumental use, responsibility for using the knowledge generated through evaluation, and for possibly drawing up an action plan, rests with CDB senior management and the relevant CDB department and division. Oversight of the application of recommendations and the uptake of lessons is the responsibility of the OAC.

Evidence on how evaluations have actually contributed to decisions or negotiations is lacking or confusing. Certainly, the OIE is unaware of the extent to which its evaluations are put to use. On the one hand, the OAC minutes sometimes indicate that lessons learned are integrated into the next phase. On the other hand, the reviewers were told that in the past the evaluation results were often "too old" to be of use, as the lessons had already been drawn and applied well before the report was completed. Gaps in people's memories of how evaluative information from previous studies was used may also account for the scarcity of evidence.

In response, the Panel questioned CDB staff and the OIE about a particular study, the Technical and Vocational Education and Training Assessment. The feedback was somewhat contradictory. On the one hand, the study was criticised as "confirming" news rather than bringing "new news". On the other, we learned that in October 2015 the Board of Directors approved a proposal for the revision of CDB's Education and Training Policy and Strategy; work on this has already begun and an external consultant has been engaged to lead the process.

Although it is one of the OIE's tasks to set up a database of results and lessons learned from evaluations, so far this has not been a priority. There is also currently no systematic tracking of lessons or recommendations arising from the evaluations, or of any progress in their uptake. (The Panel has already referred above to the OAC's lack of oversight of the use of evaluation.)

The OIE's role in supporting CDB's organisational learning is clearly specified in the Evaluation Policy, with many good suggestions for knowledge-sharing activities such as "brown-bag lunches, workshops, pamphlets and short issues papers" (p. 19). So far, however, the OIE's lead role on the knowledge-sharing side appears to be quite limited. It has provided advisory input in Loans Committee discussions, and organises workshops together with the relevant operations department to discuss the implications of evaluation studies. Ultimately, of course, the uptake of evaluation results and knowledge is in the hands of management. But the evaluation unit has an important role to play as knowledge broker and knowledge manager; both roles have tended to be underplayed in the OIE's work plan so far.

Transparency: The Communication Strategy

Recently, with the approval of its new Disclosure Policy, the CDB has started to post its independent evaluation reports on its website (there is nothing on the self-evaluations). The website also presents a good overview of the role and function of the OIE and of evaluation within the CDB. This is a step in the right direction for sharing information. However, in our view the CDB's communication strategy is the weakest part of the evaluation system to date.


The Panel has already commended the OIE for its efforts to engage the CDB and the OAC in evaluation work. But reporting and communicating the lessons seem to be targeted entirely at the Board and the CDB. Moreover, the 2015 budget provides only US$2,000 for communication, none of which is intended for outreach.

The reviewers feel that active engagement with the more indirect stakeholders, for example project implementers in the BMCs, NGOs or project beneficiaries, is relatively weak.161 There appears to be little reflection on drawing out significant messages for this broader group of stakeholders, or on how to transmit them to the "right" people in the "right" way (knowledge brokerage).

To conclude, evidence on the uptake of evaluation is either confusing or sparse. It is unfortunate that no systematic record-keeping system has so far been put in place to track lessons learned or the uptake of recommendations (or actions agreed in management responses). The OIE plays a weak role in brokering the knowledge generated through evaluations to the benefit of external partners, and in managing such knowledge. Although the Evaluation Policy specifies the need for "distilling evaluation findings and lessons learned in appropriate formats for targeted audiences both within and outside the CDB" (p. 19), such a targeted communication strategy has yet to be developed and budgeted for.

Strengthening Evaluation Capacities and Networking

From the outset in 2012, the OIE has stressed the importance of developing and strengthening evaluation capacities within the OIE, the CDB and, subject to available resources, the borrowing member countries. Building evaluation capacity in the BMCs and the CDB is one of the OIE's mandated tasks and has figured as a priority in the work plan from the beginning (Work Programme and Budget 2012-2014). Developing an internship programme for graduates from the Caribbean region was one idea advanced to help build local evaluation resources. To date, however, capacity building has primarily been focused on OIE and CDB staff. One of the OIE's two objectives for 2015 was therefore to take up the challenge and "strengthen evaluation capacities and networking", to include reaching out to the BMCs.

Developing OIE staff capacities

The change from project-level to strategic and thematic evaluations does require different evaluative skills and competencies. The MDB Evaluation Pyramid presented in Figure 3 below shows the different types of evaluation and the changing resource needs as one ascends the pyramid. Implicit here also is the change in the type of expertise and competencies needed as evaluation aspires to the higher levels.

Consequently, for 2015 the OIE set itself the objective of networking and developing working partnerships with regional and international evaluation entities and academic institutions. The rationale was twofold: (1) to secure further support and guidance; and (2) to increase its outreach and coverage through joint work and international exposure. Another, implicit, aim was to benefit from partners' contacts in the BMCs wherever possible so as to improve data collection and quality.

Figure 3: The MDB Evaluation Pyramid162

161 A broader communication strategy is one of the principles and good standards of the Evaluation Cooperation Group and of the evaluation community more generally.
162 US Treasury Report to Congress on Evaluation Standards and Practices at the Multilateral Development Banks (September 2014, Annex C).


The OIE has therefore linked up with Carleton University in Canada and the University of the West Indies, Barbados campus. The OIE was also approached by the Development Bank of South Africa to exchange experiences about setting up an evaluation entity in a “small” development bank. However, its attempt to become a member of the Evaluation Cooperation Group was not successful for reasons beyond its control.

The OIE is to be commended for addressing the issue of staff competencies and professional development more generally. New developments in evaluation, as well as in the scope of the OIE's work, may necessitate new competencies. For this reason, organisations such as the International Development Evaluation Association have recommended that the competencies of evaluators and evaluation managers be periodically reviewed. Several publications now exist on competency requirements and on suggestions for the periodic review of staff competencies.163

It is not within this Review's remit to compare and contrast the OIE's competencies with those recommended by international and national agencies. What we can say, however, is that the OIE demonstrates great forethought in taking this on board.

Capacity building within CDB

The OIE's objective also includes continuing to develop measures for improving the monitoring and self-evaluation side of CDB's work. The OIE's strategy here is to use the windows of opportunity offered by some of the training sessions being organised by the CDB as part of its shift towards MfDR, e.g. by Corporate Planning Services and Technical Assistance. For 2016, it is also planned to have the OIE present at the annual staff meeting and Learning Forum.

The OIE also organises some ad hoc training with operations, for example to help staff understand new tools (e.g. for drawing out lessons from self-evaluation reports) and, more generally, to help staff appreciate how evaluation can add value to the organisation's work. Measures include providing advisory services on demand, and providing training alongside the introduction of new or revised tools.

163 E.g. IDEAS (2012) Competencies for Development Evaluation Evaluators, Managers and Commissioners; the Canadian Evaluation Society's Competencies for Canadian Evaluation Practice (2010); and the Swiss Evaluation Society's Evaluation Managers Competencies Framework (2014).

Capacity building in the BMCs

This is an ambitious task and, to be effective, would require investment beyond the bi-annual work plans. A modest attempt was made in 2015: from what we understand, the OIE has joined with Carleton University and the University of the West Indies, using their networks in some of the BMCs, to try to develop this aspect.

To conclude, we cannot comment on the quality of, or reactions to, such training, but we can commend the OIE for making capacity building one of its priority objectives. From both the Policy and the documents we reviewed, we note that capacity building was always seen as an important aspect of the OIE's work, but it has hitherto received little strategic focus. The resources currently available to the OIE will, however, limit the scope of such work in the BMCs, which in turn will continue to hinder the production of sound evidence for the OIE's evaluations.

Adequacy of the OIE’s human and financial resources to support its work

OIE's Human Resources

The OIE has a staff of five: the Head, one senior evaluation officer and two evaluation managers, plus one administrative assistant. Three of the five were recruited from within the CDB. This limited capacity means that it is not feasible to cover all the types of evaluation activity outlined in the Evaluation Policy. Yet there is some indication from the Board that the OIE should embark on impact evaluations at some future stage. An increasing demand for evaluations, and for impact evaluations in particular, would run the risk of overstretching the OIE's capacity to deliver credible and useful evaluations. Moreover, there are many other designated OIE activities that should be recognised as valuable work: the validations, building CDB and BMC evaluation capacity, providing supervision and advice, knowledge management and brokerage, and managing evaluation contracts. The time needed for all of these may be underestimated in the OIE's budgets; all are important for assuring best value from evaluation. The Panel is concerned that the demand for "doing" evaluations, together with the OIE's interest in advancing its skills in high-level evaluations, may undermine the importance, and the time needs, of these other essential tasks.

Limited and unpredictable resources for independent evaluations

The OIE is funded from the general administrative budget and represents approximately 2.5% of the total. Whilst this is seemingly a higher proportion than in other MDBs, in real terms it is quite limited: 75% of the OIE budget goes to staff salaries, leaving US$190,000 in 2015 for external consultants and other expenses.
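As a rough, illustrative back-calculation (assuming the US$190,000 for consultants and other expenses corresponds to the remaining 25% of the OIE budget; neither total is stated explicitly in the documents reviewed):

\[
\text{OIE budget} \approx \frac{190{,}000}{0.25} = \text{US\$}760{,}000; \qquad \text{administrative budget} \approx \frac{760{,}000}{0.025} \approx \text{US\$}30\ \text{million}.
\]

These orders of magnitude underline how little room the budget leaves for outsourced, high-level evaluations.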

CDB's donors do not appear to specify a budget for monitoring and evaluation activities. This means, on the one hand, that there is no clear external budgetary recognition of operations' self-evaluation work or of the OIE's time in the validation process and, on the other, that whilst donors expect to receive reports from independent evaluations, this expectation is not backed up when funds are allocated.

Resources available to the OIE for hiring external consultants have dropped from US$350,000 in the revised 2014 budget to US$120,000 in the 2015 indicative budget. The OIE estimates that for high-level evaluations, the cost of external consultants is between US$90,000 and US$350,000 (the SDF 6&7 evaluation cost US$255,000). In the Panel's experience, this is a sound estimate. With one less staff member during 2014-2015, coupled with the OIE's focus on dealing with the backlog of self-evaluations amongst other priorities, it was unable to execute some of the evaluations during the annual budget period. Hence, the budget was reduced for the subsequent years but has proven insufficient to fund the OIE Work Programme. The OIE has therefore needed to turn to the only alternative source available at present, the SDF fund. But the SDF funding rules apply to specific countries and themes, which obviously restricts the OIE's choice of evaluation subjects and themes. Since the SDF does not allow for OIE recurring costs such as staff travel, the SDF evaluations have to be outsourced. As presented in Figure 1 above, the approval process is inefficient and causes delays. The Panel learned that additional funds, for example for specific studies, could be secured from within the administrative budget during the year, on condition that the request was based on sound arguments.

Whilst the Panel fully appreciates that the Bank is operating within a zero-growth framework, the reviewers were surprised to learn that OIE funding is not sufficiently secured in line with its priorities and work plan. The need to seek alternative funding for individual studies does not allow for any flexibility and undermines the OIE's independent judgment of what needs to be done.

To conclude, the OIE is inadequately resourced to meet the expectations outlined in the CDB's Evaluation Policy. The Panel recognises that the CDB itself faces budgetary restrictions, but the current arrangements for securing extra funding are complicated and inefficient, and they limit the OIE's ability to exercise autonomy in the selection of its evaluation studies. Moreover, OIE budgets significantly underestimate the time needed for managing evaluations and for its other evaluation activities.

Self- and independent evaluation

Self-evaluations cover public sector investment, lending and technical assistance, policy-based loans, and country strategy programmes. Both types of evaluation are important, as they are at the very heart of the evaluation function; the self-evaluations are said to be the building blocks for the more strategic evaluations that the OIE is now undertaking.

The Evaluation Cooperation Group recommends that the self-evaluations be carried out by the relevant operations department and, in turn, reviewed and validated by the organisation's independent evaluation office. The CDB's Evaluation Policy therefore describes "validating all self-evaluations" as one of the OIE's essential oversight tasks.

Within the CDB, the self-evaluations should provide management with performance assessments and thereby serve an accountability function to the CDB and the Board. To support the process, the OIE provides operations with manuals and checklists for guidance. Once a self-evaluation report is to hand, it is passed to the OIE for validation of its technical quality and credibility.164

In the CDB's case, however, there are well-documented issues that have affected the quality and timeliness of the self-evaluations and, in consequence, the quality of the foundation on which the independent evaluations are built. Paucity of documentation within the CDB, paucity of data collected and available in the Borrowing Member Countries (BMCs), and delays in producing completion reports and, in turn, in having them validated by the OIE: all such issues were systematically raised during interviews and in some of the independent evaluation reports. There appears to be little incentive to complete self-evaluations in a timelier manner.

Generally speaking, many of the monitoring data problems appear to be due to a lack of management oversight. For example, with the introduction of results-based management, the logframe and the monitoring and data needs are now systematically built into intervention design. However, the BMCs are not delivering the data as contractually agreed at the outset.

164 According to the Evaluation Policy, the OIE should validate all PCRs and CCRs. However, due to the backlog of reports and the delays in completing them (sometimes by years), since October 2015 the OIE has secured OAC agreement to validate a maximum of 6 per year, selected in consultation with the OAC.


Incentives to support any significant change towards building a results-based culture seem to be weak, and sanctions seem rarely to be enforced when the supply of data is lacking or lengthy project delays occur. Although we appreciate the complexities of trying to enforce monitoring compliance, the result is that project deadlines have often had to be extended, data gaps are not being satisfactorily dealt with and, in turn, there has been a void in the quality and quantity of evidence available for the CDB's self-assessment of project performance. For some time, this lack of oversight has been tolerated. Part of the problem is the low priority accorded by operations to completing the self-evaluation reports, coupled with the absence of any focal point within senior management to drive the process and deal with the problems.

No record is kept of how the self-evaluation results are actually used. They do not appear on the CDB website, though we were told that the findings are integrated into subsequent project designs. Hence we are somewhat unclear as to the current utility of these reports. The situation is exacerbated by a rather confused image of evaluation: some operations staff consider the OIE's input (through validations or independent evaluations) to be at times over-critical, regulatory and of little added value; a threat rather than an opportunity for learning. Yet at the same time, evaluation is recognised as an integral part of results-based management.

According to the Evaluation Policy (p. 15), "The President, with the support of the Advisory Management Team, is accountable for encouraging and providing an environment where evaluation adds value to the overall management of CDB's activities and fosters a culture of critical analysis and learning". But in the CDB, a learning culture appears still to be in its infancy; the leadership role as expressed in the Evaluation Policy is underdeveloped.

Some managers, however, seem to be starting to change the status quo. For example, a revised and simplified template for producing project completion reports is being considered, and mid-term project reviews are expected to be more stringent in examining monitoring plans and practices and in tying disbursements to performance. In some cases we also learned of incentives being introduced to encourage project managers to complete their reports in a timelier manner. But much remains to be done and, since the OIE is no longer responsible for monitoring and project evaluations, there is a void that needs to be filled. It is up to line managers to drive this work forward.

To conclude, it is fair to say that, in view of a number of "frustrations" between the OIE and operations, largely to do with delays in exchanging comments on the various reports as well as the paucity or absence of monitoring data, the added value that evaluation might offer to the operations area is ill recognised. Moreover, the link between the self-evaluations as the building blocks for the independent evaluations is not apparent. There is thus little incentive or management focus to drive any change to current practices. In other words, there is a lack of leadership to advance a learning environment in which evaluation can play a major part.


Part Three: General Conclusions and Recommendations

To conclude, with regard to the Evaluation Policy and the OIE's independence, our Review finds that over the past few years the CDB has succeeded in establishing an independent evaluation office that is credible and respected. It reports to a Board committee and is thus organisationally independent from CDB management. Its work is grounded in an Evaluation Policy, agreed by the Board and the CDB, that reflects internationally recognised principles and good practices. The Policy sets out a broad scope of responsibility for the OIE which, however, seems over-ambitious given current resource constraints. The OIE clearly has both an accountability and a learning function; the latter should support the development of an organisational learning culture. (So far, the uptake of recommendations and key lessons has not been systematically monitored or recorded.) In general, on the issues of independence, we can conclude that the OIE meets the criteria for organisational and behavioural independence and is protected to a certain degree from external or contextual influences.

However, as the Independent Advisory Committee for Development Impact has said, "independent evaluation needs to have clout… credibility of evaluation hinges on public perceptions as well as on reality."165

We are therefore highlighting a few potential threats, even though there is no evidence to suggest they are in any way real at present. It would be in the OIE's and the CDB's interest to have these clarified sooner rather than later. For instance:

- Any delays incurred in reporting self- and independent evaluation results to the Board could be interpreted as operational interference.

- Similarly, there is no agreed process for dealing with any conflict of interest between the OIE and management in reporting results; it is simply expected that any disagreements will be reported in the management response.

- Another possible threat is the Head of the OIE's lack of complete autonomy over staff recruitment, termination, continuation and professional development. The Policy is not sufficiently clear about who has the final word in the case of disagreement.

- Finally, on resources, our Review accepts the limited funds available to the CDB and the fact that the OIE's budget is not independent but operates within the Bank's budgetary limitations. Nevertheless, we feel that more flexible arrangements could be devised to allow less restrictive and timelier access to funds.

With regard to governance, our Review has highlighted the difficulties the OAC faces in not receiving the background papers for its meetings in sufficient time to do them justice. Moreover, these documents tend to be very lengthy and not necessarily "reader-friendly". The OAC's oversight responsibility is likely to be weakened, and we can already see some indication of this. For instance, requests for systematic follow-up on management actions resulting from evaluation findings have not been answered; nor is there a standing item for this on the OAC agenda, so such requests can easily be passed over and forgotten. The broadened responsibilities now given to the OAC also mean that many competing entities are trying to secure the OAC's attention. There is now provision for the OAC to call on consultants for help, which we feel may strengthen the OAC in its oversight responsibilities.

Furthermore, in their capacity as members of the Board, the OAC's members should stress the urgency of developing evaluation and monitoring capacity in the BMCs, since this gap is having a direct impact on OIE and CDB evaluations.

With regard to the OIE's performance, we have to respond to the questions raised in this Review's Terms of Reference, which basically means answering two main questions: Is the OIE doing the right thing? And is it doing it in the right way?

165 Picciotto, R. (2008) Evaluation Independence at DFID: An Independent Assessment prepared for the Independent Advisory Committee for Development Impact (IADCI), p. 4.


There is no doubt that the decision to establish a credible, independent evaluation function in the CDB is the "right thing" to do; "effective and useful evaluation and oversight activities can assess development effectiveness, hold the organisation accountable for results, and improve operational performance."166 It is also a policy of the MDBs to have such a function, and the CDB has now aligned itself with international standards and practice.167 The question now, therefore, is the following: is the OIE going about it in the right way?

The OIE has taken the "right" steps to improve the engagement and interest of the OAC and CDB senior management, from selecting the topics for its evaluations through to finalising the conclusions and recommendations in a collaborative spirit. It falls short, however, of taking the messages emerging from the studies to "outsiders", such as those responsible for implementing CDB interventions in the BMCs.

In its oversight role, we feel that the OIE has paid insufficient attention to the actual utilisation of evaluation; it is beyond its responsibility to see that action is taken, but it is certainly within its remit to record how, and how well, the lessons drawn have been taken up and used. With regard to its oversight of the self-evaluations (the validation process), the OIE has attempted to improve dialogue with the operations departments and to demonstrate the dual function of oversight and learning. It is now emphasising the learning aspect by providing tools and guidance on how to draw out lessons and integrate them into future planning. More recently, it has sought ways to provide more formalised training on evaluation by working with the corporate planning services and technical assistance departments to develop courses that show how, where and when evaluation plays its part within the MfDR framework.

However, one of the challenges in evaluation management is balancing independence with facilitating buy-in and ownership at the same time. It is a fine line to walk, and it depends to a large degree on the climate between management and the head and staff of the independent evaluation unit in setting the tone of the collaboration. In practical terms, for the CDB this means defining the role of the OIE in relation to the self-evaluations performed by the Projects and Economics Departments. The change from the EOV to the OIE made this role change quite clear: the OIE no longer has responsibility for project monitoring or for planning data needs together with the operational departments. On the other hand, to improve understanding and learning there needs to be an interface between evaluation and management. At present, the OIE's dual role, that is, its advisory role in relation to operations and its strategic role towards the OAC and senior management, has not been satisfactorily resolved. Operational staff still do not appear to see any urgency in producing their completion reports, or to appreciate what lessons might be drawn from such reflection. The OIE is doing its best to support "learning" whilst at the same time keeping at arm's length. The greatest challenge the OIE faces in its new capacity is the slow development of an organisational learning and evaluation culture.

A Learning and Evaluation Culture

Evaluation utility depends on the engagement of evaluation users – those who should benefit from the knowledge generated through the studies. Useful evaluation therefore depends to a large degree on the development of an evaluation and learning culture and on how well these are embedded in the organisation. This means that the organisation recognises and appreciates evaluation's role and the functions it can have, particularly in helping it understand what it is achieving and where and how improvements can be made. In short, the added value that evaluation can bring to the organisation lies in its ability to draw out the important lessons that can help improve the organisation's performance.

However, whilst CDB senior management shows all the signs of embracing evaluation as an important strategic tool, there still appears to be some apprehension about receiving criticism, however constructive this might be. The OAC has already affirmed its interest in learning what can be "put right the next time around." In considering accountability, the committee is asking for a more strategic approach to learning and sharing knowledge based on evidence. The CDB also shares the development goals of the other MDBs, that is, "to end extreme poverty and promote shared prosperity." This means looking for new forms of problem-solving and for ways to create a "development solutions culture." Hence there is an interest in learning from experience and exchanging knowledge about what works. This implies balancing accountability and learning, making sure they are not seen as opposites but as compatible entities. This greater emphasis on learning requires a reframing of CDB's thinking and of how it deals with the constructive criticism that evaluation can offer.

166 CDB (2011) Evaluation Policy (p. 2)

For comparison, the peer review of the evaluation function of UNRWA described the features of a weak evaluation culture as follows:

Weak evaluation culture

27. While some stakeholders seem keen on evaluation, the overall evaluation culture in UNRWA is weak. There are several aspects to it.

28. First, many of the interviewees stressed that UNRWA has a weak learning culture, which stems from a number of factors. One reason given relates to the cultural virtue of oral communication, which makes conveying documented experiences challenging. Another reason is language: a majority of UNRWA's national staff are not fluent in English, and evaluation reports are mostly in English. Furthermore, criticism, even if constructive, is, according to some interviewees, mainly perceived as a threat and not as an opportunity. Finally, learning is also affected by a very basic constraint: lack of time.

29. Second, there is a weak knowledge management system for systematically collecting and sharing experience and lessons learned in UNRWA. UNRWA communities of practice do not exist; several interviewees mentioned the use of knowledge networks outside UNRWA, i.e. communities of practice managed by other agencies. Accessing evaluation reports is also not easy. The UNRWA website does not provide access to evaluation reports; while the Agency's intranet has a site for evaluation reports, it is not a complete repository, and the Evaluation Division does not know exactly how many decentralised evaluations are being produced. In addition, there are only a few evaluation plans at the level of field offices or departments.

30. Third, the Panel found that decentralised evaluations are, at least partly, perceived as donor-driven accountability instruments rather than as learning tools. In that sense, evaluations are managed as bureaucratic requirements, thereby weakening the learning dimension.

31. Finally, the sensitive political context in which UNRWA operates may also discourage a strong evaluation culture, as evaluative evidence can sometimes be overridden by political considerations.14 The Panel was repeatedly told that, given the political context, any change is a challenge.

14 An example mentioned to the Panel was the evaluation of the Qalqilya Hospital (2013), which concluded that the Hospital should be closed. However, for political …


Part Three: Recommendations

(Here is a list of some possible recommendations, to be discussed and developed within the Review Panel initially and then proposed for discussion together with the OIE.)

- OAC's oversight responsibility needs to be strengthened.
- (Possibly) Review the Evaluation Policy to redress gaps.
- OIE to develop a 5-year strategy providing a step-by-step work programme, budget and timeframe for implementing the Evaluation Policy.
- Stronger leadership from the President to provide a climate conducive to promoting the added value of evaluation to overall management and fostering a culture of critical analysis and learning.
- Stronger support from the Advisory Management Team for the evaluation function, by emphasising the accountability of CDB managers for performance results and by devising incentive schemes to support the accountability function.
- Set up committees (advisory groups) to accompany individual evaluation studies as a means of reinforcing ownership.

- OIE could contribute to developing a learning culture within the CDB by adopting the role of "critical friend" in its dealings with the CDB operations departments and the CDB more generally. Valuable insight can be gained from a focused conversation with the client about aspects of the evaluation that went well and those that did not go so well; some meetings might include all relevant stakeholders in addition to the main client. This feedback can then inform a subsequent internal project team debrief (e.g. identifying internal professional development or process improvement needs), and input from all perspectives can inform planning for future projects. A summary of these discussions can then be circulated so that everyone can use it for their own internal or external quality improvement purposes.

- Rather than asking "What went wrong?", the conversation can be about "what surprised us, what we'd do differently, what did we expect to happen that didn't, and what did we not expect to happen that did". This is a better means of getting at the negative aspects without placing blame.

- OIE to train up and engage "champions" within CDB operations departments to help demonstrate evaluation utility and provide "on the job" training in self-evaluation to colleagues.
- A quality control group could also be set up, as Picciotto suggested for DFID, to carry out and develop the work of quality assessment, and particularly Quality at Entry monitoring.

- OIE should be given the resources to build M&E capacity in the BMCs as a priority, possibly in partnership with other MDBs, development organisations and the countries themselves.

- Possibly criticise the over-emphasis on the five DAC criteria, which have been in use for more than 15 years without undergoing any major revision. Given their importance and level of influence in the field, it is pertinent that independent professionals take a critical look at them, especially since many scholars and practitioners consider that the quality of evaluations in development aid has been quite disappointing (Thomaz Chianca, "The OECD/DAC Criteria for International Development Evaluations: An Assessment and Ideas for Improvement", Journal of MultiDisciplinary Evaluation, 5(9), March 2008, http://evaluation.wmich.edu/jmde/).

- Regarding the validation of self-evaluations: it is recommended that a leaner format be developed for the PCVRs, repeating less of the original PCR text and focusing on the validity of the PCR against a limited number of key issues to be assessed for each PCR. The PCVRs are also very heavily written out in narrative form, which may not be needed for this type of document; a more tabular format with more succinct statements could lead to a leaner process for producing the PCVRs, without any loss of usefulness. The "PCR checklist" would be a good starting point for this.

- The link between self-evaluations, validations and independent evaluation is not clear at present, nor is the link between the self-evaluations and the QaE documents, so one wonders what all the effort on the operations side is for. This is a real issue: the operations departments seem to do a lot of interesting and quite reasonable work, but there is a lack of coherence. (This impression is based on the documents alone; no interviews were done to get a broader picture.)

- The EIB evaluation unit was criticised for something similar in the past. It has since started to include "younger" projects (sometimes still ongoing) in its samples, and it redoes the portfolio analysis right before finalisation of the report to see whether things have changed; the services can, of course, indicate in their response if things have indeed changed over time.

Recommendations for improving the process for study approval and funding

(Give recommendations on priorities for OIE work.)

Funding should preferably come from the administrative budget. Unused monies could then be released in the annual budgetary reviews, but this should have no effect on the budget for subsequent years. SDF funding at a level … The Panel is surprised to find that a Board-approved OIE work programme and budget is inadequate; either the proposed budget per work programme …


The Panel, however, encourages the creation of such a quality control unit; its role cannot be fulfilled by the OIE, as it lies outside the OIE's scope and present capacity, even though the OIE could have an advisory/methodological role.


APPENDICES

Appendix I – The External Review Mandate: Terms of Reference and Approach Paper
Appendix II – Review Approach, Data Collection and Analysis, and Limitations
Appendix III – Overview of OIE Evaluation Practice
Appendix IV – List of Persons Interviewed
Appendix V – List of Documents Reviewed
Appendix VI – List of Topics used to guide interviews with members of the CDB Board of Directors
Appendix VII – List of Topics used to guide interviews with CDB staff


Appendix III – Overview of OIE Evaluation Practice (prepared by the OIE in response to the Reviewers' request)

Caribbean Development Bank, Office of Independent Evaluation (OIE)

Percentage of projects subject to project (self-)evaluation: 100% – Project Completion Reports (PCRs).

Percentage of projects subject to validation by OIE: Approximately 40–50%. About 15 projects exit the portfolio annually. The Evaluation Policy calls for all PCRs to be validated; however, OIE resources are insufficient. The validation process was reviewed in 2014, and the OAC (a Board committee) now selects a sample of 6–8 PCRs for validation each year.

Percentage/number of projects subject to in-depth review by OIE: None, unless specifically requested by the OAC. Due to limited resources, the focus of the OIE evaluation work programme is on PCR validations and high-level evaluations, including country strategy and programme evaluations (CSPEs).

Number of high-level evaluations conducted by OIE (e.g. sector, thematic, geographic): 1–2 per year since 2011. The plan is 2–4 per year from 2016; this would include CSPEs (the first, on Haiti, planned for Q1 2016).

Number of project impact evaluations conducted by OIE: None. OIE includes "impact questions" in high-level evaluations.

Number of project impact evaluations conducted by Bank staff or other non-OIE staff: OIE is not aware of any impact evaluation conducted by the Bank. However, OIE provides technical support to the Basic Needs Trust Fund (BNTF) in its design of an M&E framework that entails impact evaluations.

Budget: USD 0.78 mn in 2015; USD 0.82 mn in 2016. This is equivalent to about 2.5% of the total CDB Administrative Budget. 75% of the budget is for staff salaries (4 professional staff, 1 support staff), leaving around USD 190,000 (in 2015) for other expenses, including consultants, e.g. for external evaluations. Additional funding is accessed via the Special Development Fund (SDF); this varies according to the type and scope of the evaluation, e.g. the ongoing SDF 6/7 evaluation is SDF-funded at USD 255,000. The budget is determined by the Board and is not separate from the administrative budget. SDF funding for evaluations is considered separately and is subject to the Bank's internal approval process; it cannot be used to cover OIE expenses such as staff time or travel. Country eligibility for SDF funding is also a consideration. OIE expressed concerns about this funding track with respect to predictability, independence and eligibility limitations.

Reporting line: The Head of OIE reports to the Board, with an administrative link to the President.

Terms of appointment for Head: 5-year term, renewable once. Appointed by the President with the agreement of the Board.

Right of return for Head: Not eligible for other staff positions.

Consultants as proportion of OIE budget: 19% in 2015 (USD 145,000), plus SDF funding; SDF-funded evaluations are outsourced.

Last external evaluation (or peer review) of OIE: No external evaluation, though a review of the function was done in 2011, leading to the Evaluation Policy. This OIE External Review was completed in April 2016.

Departments or special programmes supporting impact evaluation: None.


Appendix IV – List of Persons Interviewed

Name – Function relative to OIE – Type of interview

Mrs. Colleen Wainwright – Member, CDB Board of Directors (UK) – Face to face
Mrs. Cherianne Clarke – Alternate Member, CDB Board of Directors (UK) – Face to face
Mrs. Jean McCardle – Member, CDB Board of Directors (Canada) – Face to face
Dr. Louis Woodroofe – Member, CDB Board of Directors (Barbados)
Mr. A. de Brigard – Former Member, CDB Board of Directors – Skype interview
Mr. H. Illi – Former Member, CDB Board of Directors – Telephone interview
Mrs. Claudia Reyes Nieto – Member, CDB Board of Directors – Telephone interview
Mr. Bu Yu – Alternate Director, CDB Board of Directors – Face to face
Mr. Michael Schroll – Head, OIE (Barbados) – Series of interviews via Skype and face to face
Mr. Mark Clayton – OIE Senior Evaluation Officer – Focus group
Mrs. Egene Baccus Latchman – OIE Evaluation Officer – Focus group
Mr. Everton Clinton – OIE Evaluation Officer – Focus group
Mrs. Valerie Pilgrim – OIE Evaluation Officer – Focus group
Dr. Justin Ram – CDB Director, Economics Department – Face to face
Mr. Ian Durant – CDB Deputy Director, Economics Department – Face to face
Dr. Wm. Warren Smith – CDB President – Joint interview, face to face
Mrs. Yvette Lemonias-Seale – CDB Vice President, Corporate Services & Bank Secretariat – Joint interview, face to face
Mr. Denis Bergevin – CDB Deputy Director, Internal Audit – Face to face
Mr. Edward Greene – CDB Division Chief, Technical Cooperation Division – Face to face
Mrs. Monica La Bennett – CDB Deputy Director, Corporate Planning – Face to face
Mrs. Patricia McKenzie – CDB Vice President, Operations – Face to face
Ms. Deidre Clarendon – CDB Division Chief, Social Sector Division – Face to face
Mrs. Cheryl Dixon – CDB Coordinator, Environmental Sustainability Unit – Focus group
Mrs. Denise Noel-Debique – CDB Gender Equality Advisor – Focus group
Mrs. Tessa Williams-Robertson – CDB Head, Renewable Energy – Focus group
Mrs. Klao Bell-Lewis – CDB Head, Corporate Communications – Face to face
Mr. Daniel Best – CDB Director, Projects Department – Face to face
Mr. Carlyle Assue – CDB Director, Finance Department – Face to face


Appendix VI – Interview Guide: Members of CDB Board of Directors

Below is a list of themes that I should like to raise with you, based on your experience and knowledge of the CDB's independent evaluation function (Office of Independent Evaluation). In each case, I should be grateful if you could illustrate your responses with examples, or help this Review by, wherever possible, sending me (or telling me where I can find) any documents that could support your responses.

This guide is being sent to you in advance to help prepare our meeting. However, our interview will be conducted more in the style of a conversation; the following sub-questions will be used to GUIDE the interview. Please feel encouraged to raise any additional issues that you feel we should take into account.

On the governance and independence of CDB's evaluation function
What mechanisms are in place to support its independence?
How satisfactory are the current arrangements in your opinion?
How is the balance between independence and the need for interaction with line management dealt with by the system? For example, what mechanisms exist to ensure that the OIE is kept up to date with decisions, policy/programme changes, other contextual changes etc. that could have an effect on OIE evaluation studies and evaluation planning?

On the OIE's Evaluation Policy
The CDB's Evaluation Policy was established in 2011. To what degree do you feel it is adequate? Still relevant?
What suggestions do you have for any improvements?
In your opinion, how adequate is the current quality assurance system for overseeing the evaluation function?

On the quality and credibility of evaluation studies
To what degree do you believe the reports are fair and impartial?
Do you consider them to be of good quality? Are they credible?
Are you adequately consulted/involved on evaluations of interest to you?

On the relevance and usefulness of evaluations
How well does the OIE engage with you / your committee during the preparation, implementation and reporting of an evaluation study to assure that it will be useful to the CDB?
How are the priorities set for the independent evaluations? What criteria are used? Are you satisfied with the current procedure?


When OIE evaluation studies are outsourced to external consultants, what criteria are used to make this decision?
How are the priorities for the OIE's 3-year rolling work plan agreed? In your opinion, is the current plan adequate in terms of coverage and diversity?
In your opinion, do the evaluations address important and pressing programmes and issues?
To what extent do you feel that the OIE's evaluations integrate the cross-cutting themes such as gender, energy efficiency/renewable energy and climate change? What improvements might be made, and how?

On the dissemination and uptake of evaluation findings and recommendations
To what extent do you feel that evaluation findings are communicated to the CDB and its stakeholders in a (a) useful, (b) constructive and (c) timely manner?
Are evaluation recommendations useful? Realistic?
What mechanisms are in place to assure that evaluation results are taken into account in decision-making and planning? What improvements do you feel could be made?
How have you used the findings from any evaluations? Examples?
To what degree do you feel that evaluation contributes to institutional learning? And to institutional accountability? Any examples?
What mechanisms are in place to ensure that knowledge from evaluation is accessible to CDB staff and other relevant stakeholders? Are the current arrangements satisfactory?
How satisfied are you with current arrangements? What expectations do you have for the future?

On resources
How is the OIE resourced financially, and is this satisfactory?
What about the OIE staff: are all the important areas of expertise represented in the team?

On this Review of the Office of Independent Evaluation
What are your expectations? What are you particularly hoping to learn from it?

Thank you very much for your cooperation and input


Appendix VII – Interview Pro-Forma: CDB Staff Members

This presents a list of the topics raised during interviews. It was used to guide the open-ended discussion; this means that the sequence and exact wording of the questions may not necessarily have been followed in this order or asked in exactly this way.

Changeover to an Independent Evaluation Office: expectations? Advantages and disadvantages?

Satisfaction with working relations between operations and the OIE from your perspective?

Process of dealing with the PCRs and CCRs? Advantages and limitations?

Quality and credibility of the validation process?

How are the self-evaluation reports used?

Credibility and quality of OIE's evaluation reports?

Communication of self-evaluations and OIE independent evaluations? To whom, and in what way? Possible improvements?


Bastiaan de Laat (PhD) is Evaluation Expert and Team

297

Page 298: f01.justanswer.comf01.justanswer.com/nFcTOGCI/External+Review+of+the+OIE+v.20.3.1… · Web viewf01.justanswer.com

Leader at the European Investment Bank (EIB) where over the past two of years he has been in charge of major evaluations in important areas such as Climate Action, SME support and Technical Assistance. He has a longstanding experience in evaluation as well as in foresight. Founder-director of the French subsidiary of the Technopolis Group (1998-2006) he led many evaluations for and provided policy advice to a great variety of local, national and international public bodies. He trained several hundreds of European Commission staff and national government officials in evaluation and designed monitoring and evaluation systems for various public organisations. Before joining the EIB he worked as Evaluator at the Council of Europe Development Bank. He has developed tools and performed programme, policy and regulatory evaluations, both ex ante and ex post, in a variety of fields. He has also made several academic contributions, most recently with articles on evaluation use and on the "Tricky Triangle", on the relationships between evaluator, evaluation commissioner and evaluand. In his private capacity, Bastiaan served as Secretary General of the European Evaluation Society and was recently elected Vice-President.

Acknowledgements The Review exercise could not have been possible without the support and commitment of the OAC, the OIE and the CDB. The exploratory discussions with the OIE staff, members of the Board of Directors as well as with the CDB management and staff provided great insight and were a valuable contribution to the Review.

We are indebted to the Head of OIE, Michael Schroll, and his team for their cooperation, insight, and readiness to provide us with any information requests. We are also grateful to them for their useful comments regarding the first draft of our report, and for their suggested improvements.

We are especially appreciative of OIE’s administrative assist, Denise xxxx, for her help in coordinating the interviews during our 10-day field study in Barbados and in providing us with all documents we requested.

298

Page 299: f01.justanswer.comf01.justanswer.com/nFcTOGCI/External+Review+of+the+OIE+v.20.3.1… · Web viewf01.justanswer.com

AcronymsLIST OF ABBREVIATIONS

AMT Advisory Management TeamAPECAPAR

Audit and Post Evaluation CommitteeApproach PaperAppraisal Report

BOD Board of DirectorsBMCs Borrowing Member CountriesBNTF Basic Needs Trust FundCDB Caribbean Development BankCSPDAC

Country Strategy PaperDevelopment Assistance Committee

DFI Development Financial InstitutionED Economics DepartmentFI Financial InstitutionIRLMDB

Immediate Response LoanMultilateral Development Bank

mn millionM&E Monitoring and EvaluationMfDR Managing for Development ResultsOAC Oversight and Assurance CommitteeOIE Office of Independent EvaluationPAS Performance Assessment SystemPBG Policy-Based GrantPBL Policy-Based Loan PCR Project Completion Report PCVR Project Completion Validation ReportPPES Project Performance Evaluation SystemPPMS Portfolio Performance Management SystemSDF Special Development FundTA Technical Assistance WB World Bank

299

Bastiaan de Laat, 18/03/16,
The normal symbol is just “m”MLL but will leave it like this as this is CDB practice
Page 300: f01.justanswer.comf01.justanswer.com/nFcTOGCI/External+Review+of+the+OIE+v.20.3.1… · Web viewf01.justanswer.com

Preface Evaluation work at the Caribbean Development Bank (CDB) has been ongoing since the early 1990s, although initially it was mainly focused on the ex-post evaluation of projects. However, in 2011, the CDB reviewed its evaluation system to bring it up to date with the good practices of international development organisations. In December that year, it produced its comprehensive Evaluation Policy (December 2011) setting out the aim and objectives and guiding principles for CDB’s evaluation system.

The Policy provides for the establishment of the Office of Independent Evaluation (OIE). Its main objective is to provide “CDB’s Board of Directors, President, Advisory Management Team, CDB staff and other stakeholders and partners with timely, credible and evidence-based information on the relevance and performance of CDB’s projects, programs, policies and other development activities.” (Evaluation Policy, 2011, p. 1).

To oversee and assess good practice, the Evaluation Cooperation Group (ECG) for Multilateral Development Banks (MDBs) recommends that the MDBs’ evaluation system and independent evaluation units be the subject of a review on a regular basis. The aim here is to help the institutions adopt recognised evaluation standards and practices so that its policies may benefit from evidence-based assessments.

In mid-2014, a new Head of the OIE was appointed and, following an initial learning period, he called for a peer review of the evaluation system. Even though the OIE had only been in existence since 2012, it was considered timely to take stock of what had been done so far in order to tease out the priorities for the next 3-4 years.

It was originally anticipated that such an assessment could be done by the ECG as part of the OIE’s application for ECG membership. This did not prove possible, since the CDB’s operation is considered too small for such membership. A review was therefore commissioned to independent experts in evaluation who are knowledgeable and experienced in the management of internal evaluation units.

Main AimThe Review’s main aim is to provide the CDB’s Board of Directors with an independent assessment of the OIE and CDB’s evaluation system. The intention is to highlight the factors that help or hinder the OIE’s independence and performance in order to identify where improvements could be made. This report will be presented together with a Management Response to the CDB’s Oversight Assurance Committee and its Board of Directors at its meeting in May 2016. It is anticipated that an action plan will be drawn up on the basis of the Board’s decision on how to address the recommendations put forward.

Report StructureThe Review starts with some general background information about the CDB and the setting up of an independent evaluation function. It also sets out the reasons for an external review and why this was requested at this particular point in time. Part One also outlines the Review, which, is presented in more detail in Appendix II. Part Two provides the Review’s findings and conclusions according to each of the criteria presented. The Panels general conclusions and recommendations are the subject of Part Three.

The Panel is grateful for the complete freedom it was given to form its own opinions and to reach conclusions based on its analysis. The findings, conclusions and recommendations

presented in this paper are those of the Peer Review Panel members. The views of the CDB, are provided separately in the Management Response that accompanies this Report.

300

DE LAAT Bastiaan, 18/03/16,
Michael, Was this the formal reason?
Page 301: f01.justanswer.comf01.justanswer.com/nFcTOGCI/External+Review+of+the+OIE+v.20.3.1… · Web viewf01.justanswer.com

Executive Summary

301

Page 302: f01.justanswer.comf01.justanswer.com/nFcTOGCI/External+Review+of+the+OIE+v.20.3.1… · Web viewf01.justanswer.com

Part One: Introduction and BackgroundIn order to understand the development of the Office of Independent Evaluation’s (OIE) work, a brief description of the CDB’s current reforms is needed. First, there has been a change over the last decade from the nature of the programmes the bank supports; for example it has become increasingly engaged in funding policy-based operations and social development issues. Similarly, there have been changes in the whole of the development field, which is grappling to deal with complex issues such as gender or climate change. To meet today’s challenges and ensure that its work practices reflect the international standards of Multilateral Development Banks (MDBs), the CDB has introduced a number of measures aimed at improving its effectiveness and efficiency. For example, in line with international standards for Management for Development Results (MfDR), it has introduced a Results Based Management Framework for organising and assessing its performance.

In 2011, the CDB commissioned an external consultancy to undertake an assessment of its evaluation function in order to develop a policy that took account of good practices within the international development community.168 The CDB’s Evaluation Policy (referred to hereafter as the Policy) is a direct response to that review; it reflects the standards and good practices of the Evaluation Cooperation Group (ECG) of the Multilateral Development Banks (MDB) as well as the evaluation principles and standards of many professional associations. Similarly, the Bank showed its commitment to having evaluation as a core function by establishing an independent evaluation unit that reports to the Board of Directors.It is responsible for assessing the Bank’s activities and interventions, but especially for drawing out the key lessons and recommendations for improving the Bank’s performance.

As such, the monitoring tasks formally under the responsibility of the Evaluation and Oversight Division (EOV) were handed over to the Bank’s Operations and Economic Divisions. The OIE, in its advisory capacity, is expected to provide the sareaoperations area with the necessary support manuals, tools and guidance; the OIE then validates the credibility and rigour of the self-evaluations.

In addition to the OIE, the Bank also set up other independent functions: internal audit, risk assessment and management, integrity, compliance and accountability. The mainstreaming of three cross-cutting themes (gender, energy and climate issues) into CDB’s work has also been initiated. At the same time, there are limited funds available as the CDB is working within a Board-sanctioned policy based on the principle of a zero real growth, which is in line with the budget policy of other MDBs.

In short, the bank has taken many important steps towards updating CDB’s management practices in line with other MDBs. However, the introduction of many innovations in parallel requires coordination and a shift in working practices and thinking. There is also the need to engage in different types of evaluation; evaluations that take into account cross-cutting themes and different levels of complexity. As such, whilst this review is particularly focused on the CDB’s Office of Independent Evaluation (OIE), its work and utility depend to a large degree on the development of other management practices and the degree to which evaluation is able to link to their work.

A full description of the Review’s mandate, approach, process and methods are provided in Appendices I and II. It was designed to address the following four key questions as set out in the appended Terms of Reference and Approach Paper:

168 Osvaldo Feinstein & Patrick G. Grasso, Consultants, May 2011 Consultancy to Review the Independence of the Evaluation and Oversight Division of the Caribbean Development Bank

302

marlene laeubli loud, 18/03/16,
To Michael, is this generally correct? From interviews I understood that the bank has moved its funding more towards these fields as well as infrastructural support – whereas originally most of its engagement in the BMCs was for infrastructural support (and poverty reduction etc, but to a lesser degree than now).
Page 303: f01.justanswer.comf01.justanswer.com/nFcTOGCI/External+Review+of+the+OIE+v.20.3.1… · Web viewf01.justanswer.com

To what degree is the Office of Independent Evaluation independent at the strategic, functional and operational levels? Which measures help or hinder such independence?

To what extent is the OIE achieving its 2 strategic objectives? (which are (1) the timely delivery of good quality evaluations and PCR Reviews and (2) strengthening capacity building, networking and communication) How useful are the OIE’s procedures and products towards this end?

How adequate are the financial and human resources of the OIE for carrying out its tasks and achieving its objectives?

How effective is OIE in relating with its internal partners to develop evaluation capacity?

This Review of the OIE is based on the recommended criteria of the Evaluation Cooperation Group for Multilateral Development Banks; governance and independence, credibility, use and transparency.

The data used for analysing and interpreting the findings relied on exploratory, semi structured interviews with OIE staff as well as with CDB senior and middle managers and members of its Board of Directors. Whilst much of the interview data was collected during a 10-day intensive, on-site visit to the Bank, the majority of the Board members were interviewed through Skype. The interview data was complemented by a review of a range of key documents including the Bank’s Evaluation Policy, various kinds of reports on, or about evaluation, the complete set of minutes of meetings between the OIE and the Oversight Assurance Commission169 and the subsequent chairman’s report to the Board for the study period 2012 to 2015, OIE staff biographies as well as a number of other organisations’ evaluation principles, good practices and standards. A full list can be found in the Appendices (Appendix V). Not least, the Reviewers have also drawn on their own knowledge and experience of evaluation management to complement data analysis and interpretation.

Scope and Limitations

The Panel was asked to concentrate on the 4-year period since the establishment of the OIE, January 2012 to December 2015, but more particularly on the changes introduced since the new Head of the OIE was appointed (June 2014 to December 2015).

It has mainly focussed on the strategic role of the OIE within the CDB as well as its functional and operational roles and responsibilities.

It was planned as a Review and not a fully-fledged evaluation; this was due to the limited time and resources available for the exercise as well as the fact that a ”light” review is in keeping with the spirit of the OIE’s Terms of Reference. The Review could not undertake any in-depth analysis of documents or consult with country level stakeholders or other external sources of expertise. Moreover, of the 29 people identified for interview, despite several reminders (by email or telephone) , the Panel were unable to either contact or secure the agreement of 5 of the 14 Board members, and 1 CDB senior manager. In light of this experience, as well as the time invested in securing the interviews “at a distance”, the planned on-line survey to follow-up on face-to-face interview data was abandoned.

We regret that in the time available, full justice could not be done to all the material provided to the Panel by the OIE. Nevertheless, the documentary review and interviews focussed on addressing the key questions, and we are therefore confident that the main issues raised in the Terms of Reference have been addressed in this report.169 The Audit and Post Evaluation Committee, now the Oversight Assurance Committee, is a Board Committee responsible for the oversight of evaluation and other key management functions.

303

Page 304: f01.justanswer.comf01.justanswer.com/nFcTOGCI/External+Review+of+the+OIE+v.20.3.1… · Web viewf01.justanswer.com

304

Page 305: f01.justanswer.comf01.justanswer.com/nFcTOGCI/External+Review+of+the+OIE+v.20.3.1… · Web viewf01.justanswer.com

Part Two: What the Review Found In the first place, the Panel should like to commend the CDB for its efforts to establish an independent evaluation function.. Similarly, in spite of some of the challenges raised in this Review, the current Head of OIE and his team are to be commended for their efforts in advancing evaluation in the right direction of the UNEG Norms and Standards and ECG guidelines on Good Practices. The Panel presents its findings and analysis in a spirit of constructive criticism, highlighting the strengths of the current situation as well as several challenges that need to be addressed in order to bring out the full value of evaluation to the CDB.

In this part of the report, we present our findings and conclusions relative to the following criteria used to assess and respond to the four TOR questions:

the evaluation policy governance independence the OIE strategy, practices and work programme usefulness of evaluation, evaluation use communicating evaluation results (transparency) adequacy of resources, and finally the working relationship between self and independent evaluation

The Evaluation PolicyThe CDB Board agreed an Evaluation Policy (the Policy) in December 2011. It sets out the guiding principles and provisions for the OIE. It also aims at guaranteeing the independent functioning of the Office of Independent Evaluation (OIE) by having it report to the Board of Directors through the Oversight Assurance Committee, OAC. However, the President retains oversight on administrative matters for management of day-to-day activities such as travel approval..

Generally speaking, the Policy reflects many of the ECG’s recommendations on evaluation independence and good practices. Similarly, the evaluation criteria for judging outcomes are the five developed by the DAC, that is relevance, effectiveness, efficiency, impact and sustainability. In general, the Policy is intended to maximize the strategic value, timeliness and the learning aspect of evaluation.

Yet in reality, the Policy provides a framework for what could be achieved under optimal conditions. It is overambitious in terms of what could be done with the current level of resources. For example undertaking the validation of all Project and Country Completion Reports as well as engaging in the full range of evaluation types undertaken within the MDBs have proven to be simply not feasible at this stage. (More on this later in the report.)

Many important tasks outlined in the evaluation policy have not been done so far by either the OAC or the OIE. For instance, the OAC has yet to produce an annual report on OIE’s performance and the OIE has yet to establish a database of evaluation lessons, recommendations, actions and management responses.

To conclude: The Evaluation Policy is a mission statement of what could be achieved in time with sufficient financial and human resourcing. It reflects the internationally recognized evaluation principles and standards, but is probably somewhat ambitious for the OIE to fully put into practice for a number of years

305

Bastiaan de Laat, 03/18/16,
Should the Panel give an advise on priorities? MLL Not at this stage but in the recommendations
marlene laeubli loud, 03/18/16,
Given the delays and lack of relevant data in the BMCs, the OIE got OAC approval to validate up to a maximum of 6 self evaluations per year, and cannot proceed with any impact evaluation until the data situation has improved. This is discussed later in the report.
marlene laeubli loud, 18/03/16,
John, I have modified my original text to make a more cautious congratulations!
Page 306: f01.justanswer.comf01.justanswer.com/nFcTOGCI/External+Review+of+the+OIE+v.20.3.1… · Web viewf01.justanswer.com

Governance IssuesOversight of the OIE is entrusted to a Committee of the Board of Directors (originally called the Audit and Post-Evaluation Committee, APEC, and now the Oversight Assurance Committee (OAC).The OIE reports to the Board through the OAC. There are 5 members, of which only 2 are located in Barbados.

The OAC meets 5 times per year, the day before Board meetings. It has oversight responsibility for external and internal audit, independent evaluation, risk management and integrity, compliance and accountability in relation to CDB’s work.

The OAC Chairperson prepares a very brief resume of the day before’s meeting to present to the Board for approval. The report generally covers progress, shortcomings and risks but is only a small part of the Board meeting so that generally there is little discussion; evaluation is only one of many items on the agenda. (We were told that the report to the Board averages approximately 10 minutes). Some of our interviewees could not recall any discussion about evaluation during Board meetings or remember reference being made to any evaluation report.

Given the breadth of its oversight responsibilities, there is now provision for the OAC to hire in consultants to provide it with technical expertise as needs be. Resources would have to be provided out of the CDB’s administrative budget. Another novelty is the provision to meet with the head of the OIE in an executive session at least once per year. So far, our understanding is that neither of these opportunities has yet been taken up by the OAC.

The major problem for the OAC is the volume of paperwork and length of individual documents received in parallel from the CDB and its independent offices, generally very shortly ahead of its meetings. Both Board and OAC members expressed their deep concern about the need for the more timely delivery of reports and background papers.The OAC members fear they are unable to do justice to their oversight responsibilities. Hence, based on the Panel’s review of the minutes and comments from the OIE, the meetings appear to be more formalisticbut with little in-depth discussion or systematic follow up on the recommendations, agreed actions or the lessons drawn, The “follow up on actions agreed” does not appear to be a systematic item on each OAC meeting’s agenda..170 Similarly any attempt to identify key messages for various stakeholders other than the CDB is not mentioned in the minutes or reports to the Board.

In response, the OIE has greatly improved the presentation of technical reports by summarising the main points in its “Brief Reports” (e.g. the Tax Administration and Tax Reform and Technical and Vocational Education and Training evaluation). This is commendable and certainly a step in the right direction although the Panel considers that they should have a sharper focus on the strategic issues (which are the end of the brief rather than the beginning), be condensed and be made more “reader friendly”.

The Panel was also surprised to find that despite expressions of support for rigorous evaluation and its importance to the CDB, the OAC do not appear to be taking any firm position with regard to the paucity of available data. OAC has been made aware of the data problems in the BMCs (e.g. lack of rigorous monitoring and statistical data and the consequent effect on the rigour of OIE’s evaluations) as well as the delays in the submission of self-evaluations and their validations, yet there appears to be no OAC attempt to deal with such problems e.g. exerting any pressure on the CDB or on the BMCs through their representatives on the Board.

To conclude: The OAC has an expressed interest in advancing the role of evaluation as a strategic tool for CDB management. However, it is not performing its oversight function with sufficient firmness to bring about any change to the problems raised through evaluations, especially with regard to data issues and reporting delays. More generally there is a lack of any systematic

170 At the APEC meeting in May 2012, it was agreed that the OIE would prepare a Management Action Record to highlight the follow up actions taken to the recommendations of all evaluation reports, every two years, with the first report presented to APEC at the March 2013 Board Meeting. There is no record of this having ever been done or of the APEC / OAC’s following up on such request.

306

John Mayne, 18/03/16,
What do you know about the discussions OAC has on OIE efforts, evaluations, etc. Anything of substance? What % of time on evaluation? In other words, are they doing their job? Lateness of documents can be an issue, or an excuse. Why haven’t they given people hell for late delivery? Or are they just enjoying a nice trip to the Barbados each few months?MLL: See changes I have made
marlene laeubli loud, 18/03/16,
It is now, not no
Page 307: f01.justanswer.comf01.justanswer.com/nFcTOGCI/External+Review+of+the+OIE+v.20.3.1… · Web viewf01.justanswer.com

report on “follow up of actions agreed” which could be particularly useful for tracking changes as a consequence of an evaluation and management’s response. The OIE could do better justice to its oversight responsibly if it were to receive all background documents systematically at least two weeks before its meetings. Moreover, the volume and length of documents received at any one time is considered to be overwhelming. The number and/or importance of agenda items competing for attention at any one session is an additional handicap.


Drafts of independent evaluation reports are circulated for comment before the final version is completed. However, drafts are only submitted to CDB line and senior managers; only final versions are given over to the OAC. A series of discussions is held, first with the CDB and then with the OAC, following the recommendations of professional good practices and standards on participative approaches. There is, however, no "accompanying group" for individual studies, which would include both internal and possibly external partners. Such advisory groups have shown their worth in a number of contexts for improving buy-in and for providing strategic input as well. Fostering a supportive climate that wants to learn through calculated trial and error can be improved; the constructive criticism that evaluation can offer can add value to understanding the strengths and weaknesses of such strategies. During this transitional phase, a Manual has also been developed to guide and support the independent evaluation process for OIE and operations staff.

As with many other MDBs, evaluation activities include both independent and self-evaluations; the latter are the results of completion reports on operational projects and country strategy programmes and are done by the operations staff. The OIE then validates the quality of such reports. The self-evaluations should inform the more strategic studies conducted independently by the OIE. (More on the relationship between these two is provided later in this Review.)

An independent evaluation is processed as follows: the OIE prepares an Approach Paper (AP) for approval by the OAC. If the study is to be outsourced, the AP becomes the basis for a Terms of Reference (ToR) which, subject to the size of the budget, may be put out to tender. The contracted evaluator then prepares an Inception Report (IR) after some desk and field research has taken place; this intermediary report is not produced when the OIE itself is conducting the evaluation. Sometimes a Progress Report is submitted, but otherwise the next stage is the delivery of the final report in various drafts. (Assessments are like evaluations but more limited in scope and depth of analysis.)

Table 4: List of studies (N = 24) submitted to the Board for the period January 2012 to December 31, 2015

It is still considered good practice to have the objectives elaborated in the initial design documents,172 even for approaches such as Developmental Evaluation (Patton, 2010).173

However, in this period of transition, much of the OIE's work since 2012 has been taken up with the backlog of the CDB self-evaluation validations. In theory, an estimated 15 completion reports are due each year; in practice, delays in submitting the reports for validation are commonplace. Therefore, with the change of Head in June 2014, the OIE secured the OAC's agreement to reduce the number of validations to a maximum of 6 per year. However, a backlog continues to accumulate, as only 2 PCRs were given to the OIE for validation in 2015.

In the review of draft evaluation reports, the process includes reflective workshops that discuss not only the findings but also seek to draw out the important lessons.

172 The focus of an objectives-oriented evaluation is on specified goals and objectives and on determining the extent to which these have been attained by the relevant intervention. See, for example, Worthen, Sanders, & Fitzpatrick (1997) Program Evaluation: Alternative Approaches and Practical Guidelines (2nd Ed). White Plains, NY: Addison Wesley Longman.
173 Patton, M.Q. (2010) Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use. Guildford Press.


Although nothing has happened since, the Panel has already referred above to the OAC's lack of oversight in the use of evaluation. Moreover, the 2015 budget provides only US$2,000 for communication, none of which is intended for outreach.

Figure 3: The MDB Evaluation Pyramid174

174 US Treasury Report to Congress on Evaluation Standards and Practices at the Multilateral Development Banks (September 2014, Annex C).


A modest attempt was made in 2015, but the resources currently available to the OIE will limit the scope of such work in the BMCs which, in turn, will continue to hinder the production of sound evidence for the OIE's evaluations.

Human and financial resources to support OIE's work

OIE's Human Resources

The OIE has five professional staff; three of the five were recruited from within the CDB. There is an expectation from the Board that the OIE should embark on higher-level evaluations, and on impact evaluations in particular. Moreover, there are many other designated OIE activities that should be recognised as valuable work: the validations, building CDB and BMC evaluation capacity, providing supervision and advice, knowledge management and brokerage, as well as managing evaluation contracts. The time needed to deal with all of these may be underestimated in OIE's budgets; all are important for assuring best value from evaluation. The Panel is concerned that the demand for "doing" evaluations, as well as OIE's interest in advancing its skills in high-level evaluations, may undermine the importance, and the time needs, of other essential tasks.

Limited and unpredictable resources for independent evaluations

The OIE is funded from the general administrative budget and represents approximately 2.5% of the total. Whilst this is seemingly a higher proportion than in other MDBs, in real terms it is quite limited: 75% of the OIE budget goes to staff salaries, leaving US$190,000 in 2015 for external consultants and other expenses.

CDB's donors do not appear to specify a budget for monitoring and evaluation activities. This means, on the one hand, that there is no clear external budgetary recognition of the operations' self-evaluation work or of OIE's time in the validation process and, on the other, that whilst donors expect to receive reports from independent evaluations, this expectation is not backed by clear provision when allocating funds.

Resources available to the OIE for hiring external consultants have dropped from US$350,000 in the revised 2014 budget to US$120,000 in the 2015 indicative budget. The OIE estimates that, for high-level evaluations, the cost of external consultants is between US$90,000 and US$350,000. (The SDF Multicycle 6&7 evaluation cost US$255,000.) According to the Panel's experience, this is a sound estimate. With one staff member fewer during 2014-2015, coupled with OIE's focus on dealing with the backlog of self-evaluations amongst other priorities, the OIE was unable to execute some of the evaluations during the annual budget period. Hence, the budget was reduced for the subsequent years, but it has proven insufficient to fund the OIE Work Programme. The OIE has therefore needed to turn to the only alternative source available at present, the SDF. But the SDF funding rules apply to specific countries and themes, which obviously restricts the OIE's choice of evaluation subjects and themes. Since the SDF does not allow for OIE recurring costs, such as staff travel, the SDF evaluations have to be outsourced. As presented in Figure 1 above, the approval process is inefficient and causes delays. The Panel learned that additional funds, for example for specific studies, could be secured from within the administrative budget during the year on condition that the request was based on sound arguments.

Whilst the Panel appreciates full well that the Bank is operating within a zero-growth framework, the reviewers were surprised to learn that OIE funding is not sufficiently secured in line with its priorities and work plan. The need to seek alternative funding for individual studies does not allow for any flexibility and undermines the OIE's independent judgment of what needs to be done.

To conclude: the OIE is inadequately resourced to meet the expectations outlined in the CDB's Evaluation Policy. The Panel recognises that the CDB itself has budgetary restrictions, but the current arrangements for securing extra funding are complicated, inefficient and limit the OIE's ability to exercise autonomy in the selection of its evaluation studies. Moreover, OIE budgets significantly underestimate the time needs of managing evaluations and other evaluation activities.

Self-evaluations cover public sector investment, lending and technical assistance, policy-based loans, and country strategy programmes. There appears to be little incentive to complete self-evaluations in a timelier manner; for many, evaluation is perceived as a threat rather than an opportunity for learning.

According to the Evaluation Policy (p.15), "The President, with the support of the Advisory Management Team, is accountable for encouraging and providing an environment where evaluation adds value to the overall management of CDB's activities and fosters a culture of critical analysis and learning". But in the CDB a learning culture appears to be still in its infancy, and the leadership role as expressed in the Evaluation Policy is underdeveloped. The Panel identified a number of problems, largely to do with delays in exchanging comments on the various reports as well as the paucity and/or lack of monitoring data. The added value that evaluation might offer to the operations area is ill recognised. Moreover, the link between self-evaluations as the building blocks for independent evaluation is not apparent. Thus there is little incentive or management focus to drive any change to current practices. In other words, there is a lack of leadership to advance a learning environment in which evaluation can play a major part.


Part Three: General Conclusions and Recommendations

To conclude, with regard to the Evaluation Policy and OIE's independence, our Review finds that over the past few years the CDB has succeeded in establishing an independent evaluation office that is credible and respected. It reports to a Board Committee and is thus organisationally independent of CDB management. Its work is grounded in an Evaluation Policy, agreed by the Board and the CDB, that reflects internationally recognised principles and good practices. The Policy sets out a broad scope of responsibility for the OIE which, however, seems over-ambitious given current resource constraints. The OIE clearly has both an accountability and a learning function; the latter should support the development of an organisational learning culture. (So far, the uptake of recommendations and key lessons has not been systematically monitored or recorded.) In general, on the issue of independence, we can conclude that the OIE meets the criteria for organisational and behavioural independence and is protected to a certain degree from external or contextual influences.

However, as the independent Advisory Committee for Development Impact has said, "independent evaluation needs to have clout … credibility of evaluation hinges on public perceptions as well as on reality."175

We are therefore highlighting a few potential threats even though there is no evidence to suggest they are in any way real at present. But it would be in the OIE and CDB’s interest to have these clarified sooner rather than later. For instance,

any delays incurred in reporting self- and independent evaluation results to the Board could be interpreted as operational interference.

Similarly, there is no agreed process to deal with any conflict of interest between the OIE and management in reporting results, as it is expected that any disagreements will be reported in the management response.

Another possible threat is the lack of complete autonomy that the Head of the OIE has over staff recruitment, termination, continuation, and professional development. The Policy is not sufficiently clear about who has the final word in the case of disagreement.

And finally, on resources, our Review accepts the limited funds available to the CDB and the fact that the OIE's budget is not independent but operates within the Bank's budgetary limitations. Nevertheless, we feel that more flexible arrangements could be devised to allow less restrictive and timelier access to funds.

With regard to governance, our Review has highlighted the difficulties the OAC faces in not receiving the background papers for its meetings in sufficient time to be able to do them justice. Moreover, these documents tend to be very lengthy and not necessarily "reader friendly". The OAC's oversight responsibility is likely to be weakened as a result, and we can already see some indication of this: requests for systematic follow-up on management actions resulting from evaluation findings have not been answered, and there is no systematic item for this on the OAC agenda, so such requests can easily be passed over and forgotten. The broadened responsibilities now given to the OAC also mean that there are many competing entities trying to secure the OAC's attention. There is now provision for the OAC to call on consultants for help, which we feel may strengthen the OAC in its oversight responsibilities.

Furthermore, in their capacity as members of the Board, OAC members should stress the urgency of developing evaluation and monitoring capacity in the BMCs, since this gap is having a direct impact on OIE and CDB evaluations.

With regard to the OIE’s performance, we have to respond to the questions raised in this Review’s Terms of Reference, which basically mean answering two main questions: Is the OIE doing the right thing? And is it doing it in the right way?

175 Picciotto, R. (2008) Evaluation Independence at DFID: An Independent Assessment prepared for the Independent Advisory Committee for Development Impact (IACDI), p. 4.


There is no doubt that the decision to establish a credible, independent evaluation function in the CDB is the "right thing" to do; "effective and useful evaluation and oversight activities can assess development effectiveness, hold the organisation accountable for results, and improve operational performance."176 It is also a policy of the MDBs to have such a function, and the CDB has now aligned itself with international standards and practice.177 The question now, therefore, is: is the OIE going about it in the right way?

The OIE has taken the "right" steps to improve the engagement and interest of the OAC and CDB senior management, from selecting the topics for its evaluations through to finalising the conclusions and recommendations in a collaborative spirit. It falls short, however, of taking the messages emerging from the studies to "outsiders", such as those responsible for implementing CDB interventions in the BMCs.

In its oversight role, we feel that the OIE has paid insufficient attention to the actual utilisation of evaluation; it is beyond its responsibility to see that action is taken, but it is certainly within its remit to record how, and how well, the lessons drawn have been taken up and used. With regard to its oversight of the self-evaluations (the validation process), the OIE has attempted to improve dialogue with the operations departments and to demonstrate the dual function of oversight and learning. It is now emphasising the learning aspect by providing tools and guidance on how to draw out lessons and integrate them into future planning. More recently, it has sought ways to provide more formalised training in evaluation by working with the corporate planning services and the technical assistance department to develop courses that show how, where and when evaluation plays its part within the MfDR framework.

However, one of the challenges in evaluation management is balancing independence with facilitating buy-in and ownership at the same time. It is a fine line to walk, and it depends to a large degree on the climate between management and the head and staff of the independent evaluation unit in defining the tone of the collaboration. In practical terms, for the CDB this means defining the role of the OIE in relation to the self-evaluations performed by the Projects and Economics Departments. The change from the EOV to the OIE made this role change quite clear; the OIE no longer has responsibility for planning project monitoring and data needs together with the operational departments. On the other hand, to improve understanding and learning, there needs to be an interface between evaluation and management. At present, the OIE's dual role, that is, its advisory role in relation to operations and its strategic role towards the OAC and senior management, has not been satisfactorily resolved. The operational staff still do not appear to see any urgency in producing their completion reports or to appreciate what lessons might be drawn from such reflection. The OIE is doing its best to support "learning" whilst at the same time keeping at arm's length. The greatest challenge the OIE faces in its new capacity is the slow development of an organisational learning and evaluation culture.

A Learning and Evaluation Culture

Evaluation utility depends on the engagement of evaluation users, those who should benefit from the knowledge generated through the studies. Useful evaluation therefore depends to a large degree on the development of an evaluation and learning culture and on how well these are embedded in the organisation. This means that the organisation recognises and appreciates evaluation's role and the functions it can have, particularly in helping it understand what it is achieving and where and how improvements can be made. In short, the added value that evaluation can bring to the organisation is its ability to draw out the important lessons that can help improve the organisation's performance.

However, whilst CDB senior management shows all the signs of embracing evaluation as an important strategic tool, there still appears to be some apprehension about receiving criticism, however constructive this might be. The OAC has already affirmed its interest in learning what can be "put right the next time around." In considering accountability, the committee is asking for a more strategic approach to learning and sharing knowledge based on evidence. The CDB also shares the development goals of other MDBs, that is, "to end extreme poverty and promote shared prosperity." This means looking for new forms of problem-solving and for ways to create a "development solutions culture." Hence there is an interest in learning from experience and in exchanging knowledge about what works. This implies balancing accountability and learning, making sure they are not seen as opposites but as compatible entities. This greater emphasis on learning requires a reframing of CDB's thinking and of its dealing with the constructive criticism that evaluation can offer.

176 CDB (2011) Evaluation Policy (p.2)
177

By way of comparison, the following excerpt from a review of the evaluation function at UNRWA illustrates how a weak evaluation culture manifests itself:

Weak evaluation culture

27. While some stakeholders seem keen on evaluation, the overall evaluation culture in UNRWA is weak. There are several aspects to it.

28. First, many of the interviewees stressed that UNRWA has a weak learning culture, which stems from a number of factors. One reason given relates to the cultural virtue of oral communication, which makes conveying documented experiences challenging. Another reason is language: a majority of UNRWA's national staff are not fluent in English (evaluation reports are mostly in English). Furthermore, criticism, even if constructive, is mainly perceived, according to some interviewees, as a threat and not as an opportunity. Finally, learning is also affected by a very basic constraint: lack of time.

29. Second, there is a weak knowledge management system for systematically collecting and sharing experience and lessons learned in UNRWA. Communities of practice do not exist within UNRWA; several interviewees mentioned using knowledge networks outside of UNRWA, i.e. communities of practice managed by other agencies. Also, accessing evaluation reports is not easy. The UNRWA website on the Internet does not provide access to evaluation reports and, while the Agency's Intranet has a site for evaluation reports, it is not a complete repository; the Evaluation Division does not know exactly how many decentralized evaluations are being produced. In addition, there are only a few evaluation plans at the level of field offices or departments.

30. Third, the Panel found that decentralized evaluations are, at least partly, perceived as donor-driven accountability instruments rather than as learning tools. In that sense, evaluations are managed as bureaucratic requirements, thereby weakening the learning dimension.

31. Finally, the sensitive political context in which UNRWA operates may also discourage a strong evaluation culture, as evaluative evidence can sometimes be overridden by political considerations.14 The Panel was repeatedly told that, given the political context, any change is a challenge.

14 An example mentioned to the Panel was the evaluation of the Qalqilya Hospital (2013), which concluded that the Hospital should be closed. However, for political …


Recommendations

(Here is a list of some possible recommendations, to be discussed and developed within the Review Panel initially and then proposed for discussion together with the OIE.)

OAC's oversight responsibility needs to be strengthened

(possibly) Review the Evaluation Policy to redress gaps

OIE to develop a 5-year strategy providing a step-by-step work programme, budget and timeframe for implementing the Evaluation Policy

Stronger leadership from the President to provide a conducive climate for promoting the added value of evaluation to overall management and fostering a culture of critical analysis and learning

Stronger support from the Advisory Management Team for the evaluation function, by emphasising the accountability of CDB managers for performance results and by devising incentive schemes to support the accountability function

Set up committees (advisory groups) to accompany specific evaluation studies as a means of reinforcing ownership

OIE could contribute to developing a learning culture within the CDB by adopting the role of critical friend in its dealings with the CDB operations departments and the CDB more generally. Valuable insight can be gained by having a focused conversation with the client about aspects of the evaluation that went well and those that did not go so well. Some meetings might include all relevant stakeholders in addition to the main client. This feedback can then inform a subsequent internal project team debrief (e.g., identifying internal professional development or process improvement needs). Input from all perspectives can inform planning for future projects. A summary of these discussions can then be circulated so that everyone can use it for their own internal or external quality improvement purposes.

Rather than asking "what went wrong?", the conversation can turn on "what surprised us, what we'd do differently, what did we expect to happen that didn't, and what did we not expect to happen that did". This is a better means of getting at the negative aspects without placing blame.

OIE to train up and engage "champions" within CDB operations departments to help demonstrate evaluation utility and provide "on the job" training in self-evaluation to colleagues

A quality control group could also be set up, as Picciotto suggested for DFID, to carry out and develop the work of Quality Assessment and, particularly, Quality at Entry monitoring

OIE should be given the resources to build M&E capacity in BMCs as a priority, possibly in partnership with other MDBs, development organisations and the countries themselves

Possibly criticise the over-emphasis on using the five DAC criteria, which have been in use now for more than 15 years without going through any major revisions. Given their importance and level of influence in the field, it is pertinent that independent professionals take a critical look at them, especially since many scholars and practitioners consider that the quality of evaluations in development aid has been quite disappointing. (Chianca, T. (2008) "The OECD/DAC Criteria for International Development Evaluations: An Assessment and Ideas for Improvement", Journal of MultiDisciplinary Evaluation, 5(9), March 2008, http://evaluation.wmich.edu/jmde/)

Regarding the validation of self-evaluations: it is recommended that a leaner format be developed for the PCVRs, repeating less of the content of the original PCR text and focusing on the validity of the PCR against a limited number of key issues to be assessed for each PCR. The PCVRs are also written out at considerable length, which may not be needed for this type of document; a more tabular format with more succinct statements could lead to a leaner process for producing the PCVRs, without any loss of usefulness. The "PCR checklist" would be a good starting point for this.

The link between self-evaluations, validations and independent evaluation is not clear at present, nor is the link between self-evaluations and QaE documents; one wonders what all the effort on the operations side is for. This is a real issue: much of the work is interesting and of reasonable quality, but there is a lack of coherence. (This observation is based on the documents only, not on interviews that would give a broader picture.)

The EIB evaluation unit was criticised for something similar in the past. It has since started to include "younger" projects (sometimes still ongoing) in its samples, and it redoes the portfolio analysis just before finalising the report to see whether things have changed; the services can, of course, indicate in their response whether things have indeed changed over time.

Recommendations for improving the process for study approval and funding

Give recommendations on priorities for OIE work

Funding should come preferably from the administrative budget. Unused monies could then be released in the annual budgetary reviews, but this should have no effect on the budget for subsequent years. SDF funding at a level … The Panel is surprised to find that a Board-approved OIE work programme and budget is inadequate; either the proposed budget per work programme …


The Panel, however, encourages the creation of such a quality control unit, whose role cannot be fulfilled by the OIE as it lies outside the OIE's scope and present capacity, even though the OIE could have an advisory/methodological role.

Independence of the Office of Independent Evaluation (OIE)

Independence is absolutely central to the integrity and trustworthiness of evaluation. It is an agreed requirement within the development agencies and in the evaluation community as a whole. In examining the issue of independence and good practice, the reviewers are guided by the Evaluation Cooperation Group's recommendations on good practices, the CDB's Evaluation Policy, and the 2011 consultancy review of independence relative to the CDB's evaluation and oversight division.178 The appraisal is based on a comparison of the ECG's recommendations on independence179 with the current OIE status.

OIE and Independence: Recommendations from the Evaluation Cooperation Group (ECG)

The ECG considers the issue of independence according to four specific areas: organisational, or structural, independence; behavioural, or functional, independence; protection from outside interference, or operational independence; and conflict of interest safeguards.

Organizational independence ensures that the evaluation unit and its staff are protected against any influence or control by senior or line management, and that they have unrestricted access to all documents and information sources needed for conducting their evaluations. It also ensures that the scope of the evaluations selected can cover all relevant aspects of the institution.

Behavioural independence generally refers to the evaluation unit's autonomy in setting its work programme, in selecting and conducting its studies, and in producing quality reports that can be delivered without management interference.

Protection from outside interference refers to the extent to which the evaluation function is autonomous in setting its priorities, in conducting its studies and processes, in reaching its judgments, and in managing its human and budget resources, all without management interference.

Conflict of interest safeguards refer to protection against staff conflicts of interest, whether arising from current, immediate, future or prior professional and personal relationships and considerations, or from financial interests; there should be provision for these in the institution's human resource policies.

The OIE’s Independence in Practice

Organisational / structural independence

On the whole, the Panel acknowledges and commends the efforts being made by the CDB to assure OIE's organisational independence. The CDB's Evaluation Policy provides for the OIE's organisational independence from line management, and the interview data suggest that there is also wide acceptance and acknowledgement of why the OIE should have such independent status. Table 1 below provides our overall assessment of this aspect of OIE's independence when compared with the ECG recommendations.180

178 Osvaldo Feinstein & Patrick G. Grasso, Consultants (May 2011) Consultancy to Review the Independence of the Evaluation and Oversight Division of the Caribbean Development Bank.
179 ECG (2014) Evaluation Good Practice Standards, Template for Assessing the Independence of Evaluation Organizations, Annexe II.1.
180 Based on ECG (2014) Template for Assessing the Independence of Evaluation Organizations, Evaluation Good Practice Standards, Annexe II.1.


Table 1: OIE organisational independence compared with ECG recommendations

Aspect: The structure and role of the evaluation unit
Indicator: Whether the evaluation unit has a mandate statement that makes clear that its scope of responsibility extends to all operations of the organization, and that its reporting line, staff, budget and functions are organizationally independent from the organization's operational, policy, and strategy departments and related decision-making
Assessment: Partially complies – The Policy is broad enough to cover the full range of MDB types of evaluation. In practice, however, this would not be possible without additional human and budget resources.

Aspect: The unit is accountable to, and reports evaluation results to, the head or deputy head of the organization or its governing Board
Indicator: Whether there is a direct reporting relationship between the unit and a) the Management, and/or b) the Board, or c) the relevant Board Committee of the institution
Assessment: Complies – The OIE reports to the Board of Directors (BoD) through its Oversight Assurance Committee (OAC).

Aspect: The unit is located organizationally outside the staff or line management function of the program, activity or entity being evaluated
Indicator: The unit's position in the organization relative to the program, activity or entity being evaluated
Assessment: Complies – The OIE is located outside, and is therefore independent of, CDB line management.

Aspect: The unit reports regularly to the larger organization's audit committee or other oversight body
Indicator: Reporting relationship and frequency of reporting to the oversight body
Assessment: Complies – The OIE reports five times per year to the OAC. Board approval for an additional executive meeting between the Head of the OIE and the OAC at least once per year was given in October 2015.

Aspect: The unit is sufficiently removed from political pressures to be able to report findings without fear of repercussions
Indicator: Extent to which the evaluation unit and its staff are not accountable to political authorities, and are insulated from participation in political activities
Assessment: Complies

Aspect: Unit staffers are protected by a personnel system in which compensation, training, tenure and advancement are based on merit
Indicator: Extent to which a merit system covering compensation, training, tenure and advancement is in place and enforced
Assessment: Partially complies – OIE staff are covered by CDB human resource policy. However, the skill needs of OIE staff ought to be regularly reviewed in light of its move towards higher-level evaluations. Appraisal of skill needs and hiring of relevant staff should be completely under the authority of the Head of Evaluation; this is not sufficiently clear in the Policy or the other documents we reviewed.

Aspect: Unit has access to all needed information and information sources
Indicator: Extent to which the evaluation unit has access to the organization's a) staff, records, and project sites; b) co-financiers and other partners, clients; and c) programs, activities, or entities it funds or sponsors
Assessment: Complies – The available evidence suggests that there is no reason to doubt such access. But systematic and easily accessible documentation is lacking in the CDB; it is one of its weak points. Delays in getting hold of the relevant documents can have consequences for the timeliness of evaluation studies.

However, independence should not mean isolation. There appears to be a detachment between the OIE and the CDB that is of concern to the Panel: on the one hand, between the OIE and operations staff, and, on the other, in terms of the structural arrangements between the OIE and senior management.

17) In agreeing that the OIE should concentrate on strategic and thematic, in-depth evaluations, responsibility for project monitoring and evaluation was given over to operations. The division is clear and respected. However, it has its drawbacks: with the OIE no longer systematically involved at the front end of project design, the monitoring data needs are likely to be poorly defined, and weak monitoring data will contribute to weaker evaluations. (More on this point under the heading self and independent evaluations.)

In the reviewers' opinion, it is a common misunderstanding to assume that providing evaluator advice on monitoring and evaluation data will compromise evaluator independence. On the contrary, evaluation input into project design is essential to ensure that the logic, indicators and data needs are addressed so that, at some future point in time, an evaluation of the achievements can be empirically grounded.

This is not to say that the OIE no longer has any influence at the front-end design stage; it has merely shifted the point of focus. The OIE is now systematically providing such input more generally to the corporate planning teams for the tools and systems they are developing to support the MfDR framework. The monitoring data for projects and their implementation should be improved once the Project Performance Evaluation System (PPES) and the Portfolio Performance Management System (PPMS) are updated and operational.

18) In the second place, the OIE has limited formal access to the Advisory Management Team (AMT) weekly meetings, where the President and senior management gather to exchange up-to-date information on the dynamics of CDB policy and practice. The OIE is not regularly invited in any capacity to these meetings, nor is it given a copy of the agenda or minutes; it is occasionally invited to attend in order to discuss an evaluation report or management feedback. For the OIE, this means that it is unlikely to pick up on the 'when' and 'what' of key decisional issues or to provide input into the discussion based on evaluative information. Its observer status at Loans Committee meetings, or its role as a participant informer at the OAC and BoD meetings and discussions, does not necessarily provide the same insight into the dynamics of management actions and/or decisions.

To respond to this situation, the President has agreed to meet regularly with the Head of the OIE in order to keep him up to date with CDB strategic thinking. This is a welcome change.

OIE Independence and Behavioural Issues

The Panel has concerns about some behavioural issues. For example, through both the interviews and the documentary review, we learned of considerable delays in processing both the independent evaluation reports and OIE's validations of the CDB's self-evaluations. Delays are generally due to the process of receiving feedback on the independent reports, first from the relevant operational department and then from the AMT, and then of providing the OIE with a management response that is initially drafted by operations staff before being reviewed by the AMT. (OIE reports cannot be submitted to the OAC without the relevant management response.) This two-layer process for preparing submissions to the Board is inefficient and could potentially be a threat to evaluation's independence in the future by delaying OIE's timely reporting to the OAC.

OIE validations of the CDB self-evaluations are also submitted to the OAC, but it is in both sides' interest to clear up any misunderstandings beforehand. Despite attempts to improve the timeframe for completing these validations, delays are more the norm than the exception. Table 2 below summarises our assessment of the behavioural aspects of independence.

Table 2: OIE and Behavioural Independence

Aspect: Ability and willingness to issue strong, high quality, and uncompromising reports
Indicator: Extent to which the evaluation unit a) has issued high quality reports that invite public scrutiny (within appropriate safeguards to protect confidential or proprietary information and to mitigate institutional risk) of the lessons from the organization's programs and activities; b) proposes standards for performance that are in advance of those in current use by the organization; and c) critiques the outcomes of the organization's programs, activities and entities
Assessment: Partially complies – The paucity of data and documentation sometimes hinders the quality of reports. The OIE emphasises the learning part of evaluation and is cautious in its criticism, recognising that management is going through a transitory stage and can still be overly defensive.

Aspect: Ability to report candidly
Indicator: Extent to which the organization's mandate provides that the evaluation unit transmits its reports to the Management/Board after review and comment by relevant corporate units, but without management-imposed restrictions on their scope and comments
Assessment: Partially complies – Reporting to the Board is sometimes compromised by delays in the review/comment process between the OIE and the CDB. Any delay in the production of a Management Response will also mean that submitting a report to the Board in a timely manner is impaired, since the two have to be submitted together.

Aspect: Transparency in the reporting of evaluation findings
Indicator: Extent to which the organization's disclosure rules permit the evaluation unit to report significant findings to concerned stakeholders, both internal and external (within appropriate safeguards to protect confidential or proprietary information and to mitigate institutional risk); who determines the evaluation unit's disclosure policy and procedures: Board, relevant committee, or management
Assessment: Partially complies – The OIE conforms to the CDB's disclosure policy. However, the dissemination of evaluation findings appears currently to be restricted to website publication and reports to the Board. A more targeted communication strategy that includes other key stakeholders, e.g. project implementers in the BMCs, should be developed and put in place.

Aspect: Self-selection of items for work program
Indicator: Procedures by which work program items are chosen, through systematic or purposive means, by the evaluation organization; consultation on the work program with Management and Board
Assessment: Complies – The OIE ensures that its work program is drawn up after consultation with both CDB Management and the Board to seek their input on relevant topics and themes.

Aspect: Protection of administrative budget, and other budget sources, for the evaluation function
Indicator: Line item of administrative budget for evaluation determined in accordance with a clear policy parameter, and preserved at an indicated level or proportion; access to additional sources of funding with only formal review of content of submissions
Assessment: Partially complies – The administrative budget for supporting OIE work is protected. Access to additional sources of funding is possible if well argued and justified, but the approval process is complex and inefficient. (See Figure 1 below)

OIE and Protection from External Influence or Interference

Our overall assessment is provided in Table 3 below. The OIE's independence in the design, conduct and content of its evaluations does not appear to be subject to any external interference. But securing funding from any source outside the OIE's administrative budget, i.e. from the Special Development Fund (SDF), is an unduly complex and long process. As such, we consider that the current funding process can affect the OIE's choice with regard to the type of evaluations it can undertake. (See Figures 1 and 2 below)

Table 3: OIE and its Independence from External Influence or Interference

Aspect: Proper design and execution of an evaluation
Indicator: Extent to which the evaluation unit is able to determine the design, scope, timing and conduct of evaluations without Management interference
Assessment: Complies – though within the limits of the restricted human and financial resources available

Aspect: Evaluation study funding
Indicator: Extent to which the evaluation unit is unimpeded by restrictions on funds or other resources that would adversely affect its ability to carry out its responsibilities
Assessment: Partially complies – The OIE must work within the limits of the agreed administrative budget wherever possible; if additional resources are needed for studies, it must seek alternative funds elsewhere. The budget limitations can have an effect on the type of evaluations undertaken, and therefore on its independence in terms of choice.

Aspect: Judgments made by the evaluators
Indicator: Extent to which the evaluator's judgment as to the appropriate content of a report is not subject to overruling or influence by an external authority
Assessment: Complies – The evidence available suggests that the Board and Management accept the evaluators' independent interpretation and conclusions; management responses are agreed to be the accepted place to raise any difference of opinion.

Aspect: Evaluation unit head hiring/firing, term of office, performance review and compensation
Indicator: Mandate or equivalent document specifies procedures for the a) hiring, firing, b) term of office, c) performance review, and d) compensation of the evaluation unit head that ensure independence from operational management
Assessment: Complies – The Head of OIE is appointed by the CDB President in agreement with the OAC for a 5-year period, renewable once. The Head can be removed from office by the President or the Board, but only with the agreement of both parties. However, the Head reports to the President for all administrative and personnel matters. Even though this was not recommended in the Osvaldo Feinstein & Patrick G. Grasso report on independence in 2011, the BoD accepted the CDB's reasons for keeping this arrangement (e.g. most OAC members are non-resident and cannot oversee day-to-day work).

Indicator: Extent to which the evaluation unit has control over a) staff hiring, b) promotion, pay increases, and c) firing, within a merit system
Assessment: Partially complies – All OIE staff members are treated in the same way as other CDB staff. The Head has limited control over the hiring, firing or promotion of OIE staff.

Aspect: Continued staff employment
Indicator: Extent to which the evaluator's continued employment is based only on reasons related to job performance, competency or the need for evaluator services
Assessment: Partially complies – Whilst the EP is clear about procedures for hiring, firing and promotion, all of which must conform with CDB human resource policy, nothing is said about how any difference of opinion between the CDB and the Head of the OIE would be resolved with regard to continued staff employment, should the level of technical or interpersonal competencies needed to meet new demands change.

Avoidance of Financial, Personal or Professional Conflicts of Interest

This particular aspect refers to the organisation's Human Resources Policy; there must be provisions in place to protect against actual or potential conflicts of interest. The Panel requested, via the OIE, evidence from human resources on any such provisions but did not receive an answer. It must be assumed that this aspect of independence, past or present, does indeed form part of normal CDB Human Resource Policies.

To conclude: The Panel is impressed with the measures the CDB has taken to assure the organisational independence of the OIE. Its independent status is accepted and respected by senior and line management. The OIE's budget is not independent of the overall CDB administrative budget, and this affects its choice of evaluation types or approaches. Some of the behavioural issues affecting independence are also of concern, especially the delays in the exchange of documents between the OIE and the operations departments, which have a direct effect on timely reporting to the OAC. As for protection from outside interference, our concerns largely relate to the OIE's independence over staffing issues; there are potential loopholes in the current arrangements that could undermine OIE's autonomy over its staff.

OIE's Strategy, Work Practices and Work Programme

The OIE has had to develop a plan to implement the Evaluation Policy. This raises such questions as: what are the priorities, and what is the timeframe for achieving which activities? These were partially addressed in the OIE work programme and budget 2012 to 2014, but that programme proved to be over-ambitious. Much of the period 2012 to 2015 has therefore been taken up with preparing OIE's shift in focus from project-based evaluations to high-level thematic and in-depth strategic studies. This has meant adopting a three-way approach: (1) for self-evaluations, reducing its time input to supporting the process; (2) for independent evaluations, taking stock of the gaps in coverage and expertise; and (3) networking to share experiences with centres of expertise and to align the OIE with international practices. In addition, amongst other duties, it has been supporting the development of MfDR tools and systems, such as the Project Performance Assessment System, by providing advice and input on programme logic and monitoring needs. The OIE plans to conduct 2-4 high-level studies per year from 2016. The OIE has also chosen to increase the involvement of its professional staff in conducting independent evaluations. Outsourcing is still needed when the study is funded by the SDF, when time is limited and when specific expertise is required.

But plans appear to place little emphasis on the activities associated with evaluation management (e.g. knowledge management) and the relevant time needed. Other time demands mentioned in the previous sections, such as delays in completing reports, validation work etc, have also affected OIE’s plans. The more recent work plans have set the task of devliering utility-focused and timely evaluations. But it lacks clarity on how the OIE proposes to surmount the


time and data issues, which are far from new. In short, the plans lack a theory of change and a timeline. The challenges that must be dealt with to enable the OIE to move up the MDB evaluation pyramid181, not least given the limited resources available, are brought out in the remaining sections of this Review.

To conclude: The OIE has taken a first step in proposing a strategy for establishing itself as an independent evaluation resource. But its strategy lacks a theory of change and a prioritisation of tasks, which should include more emphasis on evaluation management activities.

The Value / Usefulness of OIE’s Independent Evaluations

Evaluation is a powerful tool that can provide useful, evidence-based information to help inform and influence policy and practice. But useful evaluations depend not only on the evaluators’ skills, but on several other important factors as well: (1) planning evaluations to be relevant to the priorities of the organisation’s work and for their results to be delivered in time to be useful; (2) the degree of consultation and, ultimately, ownership by those who seek evaluative information; (3) the tools used to support the evaluation process per se; and (4) the credibility and quality of the evaluation products182.

1. Planning relevant and timely evaluations

The OIE is now working on a 3-year rolling work plan that sets out the broad areas for enquiry. So far, there are no agreed criteria for selecting the specific topics for independent evaluation, although the priorities tend to reflect those of the CDB’s strategic plan. Nevertheless, decision-making is rather arbitrary, based on a process of dialogue between the OIE and the CDB, and between the OIE and the Board.

One of the OIE’s two objectives for 2015, therefore, was to define a work plan and agree priorities based on a “utilisation-focused” approach. This means that the studies are selected and planned to be relevant and useful to the organisation’s needs.

The OIE has achieved this objective with respect to its latest studies: the Special Development Fund (SDF) Multicycle 6&7 Evaluation, the Haiti Country Strategy evaluation and the evaluation of the CDB’s Policy Based Operations. Each of these three has been planned to deliver its results in time to provide the CDB Board of Directors with relevant information for negotiating the next round of funding. In spite of some delays due to a myriad of reasons, not least the extra effort needed to secure essential data, the studies are expected to deliver on time.

The processes for agreeing OIE’s work plan and specific evaluations, on the one hand, and for securing alternative funding, on the other, are shown in Figure 1 below. The Panel was surprised to learn how bureaucratic (in its internal approval steps) and inefficient (in view of the time it takes) the process seems to be. The concern here is that such a process could pose a threat to assuring the Board of “timely studies.”

Figure 1: Selection of Evaluation Topics and Funding Source

In outline: the Board approves a 3-year Work Programme and Budget, and an annual OIE report and work plan are submitted to the OAC; evaluation topics are then selected in consultation with CDB Operations and the OAC/Board. For the design and budgeting of a specific evaluation study, the OIE drafts a Terms of Reference / Approach Paper, which undergoes internal review and is finalised as a detailed ToR (or as the Final Approach Paper, if sufficiently detailed) for submission to the OAC/Board; OAC approval is recorded in the OAC minutes. On the funding track, Board approval (via a Board Paper) is necessary if the study is above USD 150,000; Board notification only is required at USD 150,000 or below. Funding comes either from the OIE administrative budget or from the SDF; in the latter case, a TA Paper (similar in content to the Approach Paper but in a different format) must be prepared and approved by the Internal Loans Committee. In either case, the OIE then selects and contracts the consultants (if any).

181 US Treasury Report to Congress on Evaluation Standards and Practices at the Multilateral Development Banks (September 2014, Annex C).
182 These aspects reflect the principles and good standards of the Evaluation Coordination Group and the Evaluation Community more generally.


2. Consultation and ownership

“The credibility of evaluations depends to some degree on whether and how the organization’s approach to evaluation fosters partnership and helps build ownership and capacity in developing countries.” (ECG good practices)



The OIE engages with the OAC, CDB senior management and operations in agreeing its 3-year work plan and then in selecting the specific topics and themes. It also discusses the evaluation approach paper (design and implementation plan) with the CDB and OAC before completing the final version. However, preliminary and final drafts of the report are submitted only to CDB line and senior managers, for comments and the identification of factual errors. Only final versions are given over to the OAC. A series of discussions is held first with the CDB and then with the OAC on the results and their implications. Discussions with the OAC are more limited due to the overburdened agenda of OAC and Board meetings, as previously discussed.

In short, the OIE is to be commended for following the recommendations of professional good practices and standards on participative approaches; it has succeeded in introducing a modus operandi that involves the key players in the selection of evaluation topics, in the evaluation designs and in the discussion of results. Figure 2 below provides an overview of the evaluation implementation and stakeholder engagement processes.

Figure 2: Evaluation Study Implementation and Feedback Loops

In outline: the figure distinguishes three arrangements for conducting a study: Arrangement A, fully outsourced to external consultants with oversight by the OIE; Arrangement B, conducted by OIE staff; and Arrangement C, conducted jointly by external consultants and the OIE. Starting from the Terms of Reference, the evaluation lead prepares an Inception Report / Approach Paper covering the preparations: a detailed evaluation plan (including tools, timeline, etc.) and logistics. Data collection and analysis follow, with a presentation/workshop on interim findings and conclusions for immediate feedback and validation, and a summary and presentation for workshop discussion with the CDB. The Draft Final Report is submitted to the OIE and goes through review loops between the OIE and the CDB (potentially also the BMC), with feedback to the evaluation lead, before the Final Report is submitted to the OIE. The final OIE-approved report then goes to CDB Senior Management for a Management Response; the Final Report and Management Response are considered by the CDB AMT, submitted to the OAC/Board for endorsement, and prepared for disclosure and dissemination.



Notes to Figure 2

15. The OIE informed the Panel that this is an abbreviated version: there are, for example, additional steps (secondary processes) when evaluations are procured (by tendering or single source), additional review loops, updates to the OAC, etc.

16. The Panel was also informed that the OAC may decide to return the report to the OIE, or to demand specific actions from Management based on the report.

This process is engaging and appears to have secured senior management and OAC interest and buy-in, as witnessed in the latest studies. But there is a downside too: the process takes much time and, in our view, is partly unnecessary. The Panel appreciates that staff from operations as well as the AMT may each want to confer on an appropriate management response, but this should not be the case for reviewing an independent report for factual errors. The two-phase approach seems somewhat inefficient and unnecessary in our opinion.

Contact between the OIE, the CDB and/or the OAC during the actual study implementation is most often restricted to the occasional progress report, particularly when studies run behind time. There is no “accompanying group” for individual studies, which would include both internal and possibly external partners. Such “advisory groups” have shown their worth in a number of contexts, both for improving buy-in and for providing strategic input. The OIE does, however, arrange discussions for reflecting on emerging findings, but we are not sure how systematic this feedback loop is.

More generally speaking, outside of an evaluation study, the OIE has limited dealings with operations. The OIE has an advisory role, particularly in providing training, guidelines and tools to support self-evaluations. We are nevertheless concerned about the seeming distance between the two and how this has affected the perceived value of evaluation. (For more on this point, see the section on “Self- and Independent Evaluations” below.)

But the Panel also wishes to stress that this is not the case for newly appointed senior managers. A much more open attitude to evaluation and appreciation of its potential value was evident;


they expressed interest in drawing out important lessons on what works, how, for whom, and under what conditions. In one case, interest was followed up in practice: the OIE was recently invited by a senior manager to share evaluative knowledge and experience with his staff regarding policy-based operations.

Certainly, we can say that overall, the key stakeholders within the CDB are adequately integrated into the evaluation process so as to foster their buy-in and ownership. But more generally, we feel that the utility of independent evaluations can be improved by fostering a supportive climate that wants to learn through calculated trial and error. The constructive criticism that evaluation can offer can add value to understanding the strengths and weaknesses of such strategies. This, however, cannot be done overnight.

3. Tools to support the evaluation process

So far, during this transitional phase, the OIE has mainly focussed on improving the tools that support the operations areas’ self-evaluations. This has left the OIE with little time to produce the checklists or tools to support its own studies. There are plans to develop an OIE Manual to guide and support the independent evaluation process. Such plans should be encouraged, as these documents will form a very important part of training, particularly for newcomers to the OIE team.

In the meantime, the OIE and operations staff refer to the Performance Assessment System (PAS) Manuals for evaluation activities. The manuals are based on DAC criteria and ECG principles. Much emphasis is given to the rating system and to how and what should be rated. However, we find the manuals lengthy, unwieldy and overcomplicated. Moreover, such manuals should be used for reference; they cannot and should not replace first-hand training in how to plan, conduct and manage the evaluation process.

Quality Assessment (QA) and Quality at Entry (QaE)

There was a transition period between 2012 and 2014 to establish the OIE. Work on the PAS, QaE, PCRs and ARPP, which had started earlier, was therefore completed after the OIE came into existence, but this work effectively had no formal ‘home’ in operations. The Panel was told that there had been some discussions about creating a Quality Assurance unit within CDB (OPS), but the current status is unclear.

The QaE Guidance Questionnaire was developed earlier and completed by the OIE. It was used to assess the documents that came across to the OIE for comments at the Review Stage. The results were then sent to the Portfolio Manager/Project Coordinator, indicating any gaps or issues that needed to be addressed or clarified. QaE Guidance Questionnaires were developed for all the Bank’s lending products and CSPs, and to assess the quality of supervision.

After the QaE was launched bank-wide, several operations officers saw merit in using the QaE Guidance Questionnaire in the field and adopted it for use during appraisal missions to cross-check and test their data collection and analysis.

OIE’s use of the QaE was discontinued in 2014 due to limited resources and a stronger focus on evaluations. The OIE still sometimes comments on specific appraisals, but very selectively.

Both QaE and QaS (quality at supervision) are also addressed in the PAS Manuals. In addition, the QaE and PAS have been incorporated into Volume 2 of the Operations Manual (OPPM).

The Review Panel assessed the QaE forms. They are relatively standard, adapted to the specificities of the CDB, and contribute to judging a project’s expected quality in a relatively objective way. As such, they are helpful as a benchmark in the ex-post assessment of projects.

The Panel considers that the lack of an established Quality Unit in the CDB (and independent from OIE) is a weakness that should be addressed in the near future.


4. Credibility and Quality of Evaluation Products

As with many other MDBs, evaluation activities include both independent and self-evaluations; the latter take the form of completion reports on operational projects and country strategy programmes and are prepared by operations staff. The OIE then validates the quality of such reports. The self-evaluations should inform the more strategic studies conducted independently by the OIE. (More on the relationship between the two is provided later in this Review.)

An independent evaluation is processed as follows: the OIE prepares an Approach Paper (AP) for approval by the OAC. If the study is to be outsourced, the AP becomes the basis for a Terms of Reference (ToR), which, subject to the size of the budget, may be put to tender. The contracted evaluator then prepares an Inception Report (IR) after some desk and field research has taken place. This intermediary report is not produced if the OIE itself is conducting the evaluation. Sometimes a Progress Report is submitted, but otherwise the next stage is the delivery of the final report in various drafts. (Assessments are like evaluations but more limited in scope and depth of analysis.)

Since 2012, the OIE has produced a range of studies and approach papers. This review is based on those listed in Table 4 below, as provided by the OIE, covering the period from May 2012 to December 2015: 3 evaluations, 4 assessment studies, 14 validations of self-evaluations and 3 Approach Papers for upcoming evaluations. The type of each product is indicated by its title.

Table 4: List of studies (N = 24) submitted to the Board for the period January 2012 to December 31, 2015

Board Meeting 251 (May 2012):
- Ex-Post Evaluation Report on Road Improvement and Maintenance Project, Nevis – St. Kitts and Nevis
- Validation of Project Completion Report on Sites and Services – Grenada
- Assessment of Effectiveness of Implementation of Poverty Reduction Strategy 2004-09

Board Meeting 253 (Oct. 2012):
- Assessment of Extent and Effectiveness of Mainstreaming Environment, Climate Change, Disaster Management at CDB

Board Meeting 254 (Dec. 2012):
- Assessment of the Implementation Effectiveness of the Gender Equality Policy and Operational Strategy of the Caribbean Development Bank
- Validation of Project Completion Report on Enhancement of Technical and Vocational Education and Training – Belize
- Validation of Project Completion Report on Fourth Road (Northern Coastal Highway Improvement Section 1 of Segment II) Project – Jamaica
- Assessment of the Effectiveness of the Policy-based Lending Instrument

Board Meeting 256 (May 2013):
- Validation of Project Completion Report on Expansion of Grantley Adams International Airport – Barbados
- Validation of Project Completion Report on Fifth Water Supply Project – Saint Lucia

Board Meeting 261 (May 2014):
- Validation of Project Completion Report on Immediate Response Loan, Tropical Storm Gustav – Jamaica
- Validation of Project Completion Report on Social Investment Fund – Jamaica
- Validation of Project Completion Report on Disaster Mitigation and Restoration – Rockfall and Landslip, Grenada

Board Meeting 263 (Oct. 2014):
- Validation of Project Completion Report on Basic Education Project – Antigua and Barbuda
- Approach Paper for SDF 6 & 7 Multicycle Evaluation

Board Meeting 264 (Dec. 2014):
- Validation of Project Completion Report on Policy-Based Loan – Anguilla
- Validation of Project Completion Report on Immediate Response Loan, Tropical Storm Arthur – Belize
- Evaluation of Technical Assistance Interventions of the Caribbean Development Bank Related to Tax Administration and Tax Reform in the Borrowing Member Countries 2005-2012

Board Meeting 265 (March 2015):
- Approach Paper for the Evaluation of Policy Based Operations

Board Meeting 266 (May 2015):
- Validation of Project Completion Report on Upgrading of Ecotourism Sites – Dominica
- The Evaluation of the Caribbean Development Bank’s Intervention in Technical and Vocational Education and Training (1990-2012)

Board Meeting 267 (July 2015):
- Validation of Project Completion Report on The Belize Social Investment Fund I Project – Belize

Board Meeting 268 (Oct. 2015):
- Approach Paper, Country Strategy and Programme Evaluation, Haiti

The review and analysis of these documents are based on the UNEG Quality Checklist for Evaluation Reports (http://www.uneval.org/document/detail/607) as well as on ECG guidance (the Big Book on Good Practice Standards).

Approach Papers

Three Approach Papers (APs) were made available to the Panel (see Table 4 above). An AP describes the rationale for the evaluation, the background to the topic evaluated, the evaluation framework (criteria and questions) and the approach. It also describes the team and provides an initial plan. Being the first main deliverable of OIE’s evaluation process, APs are the starting point and therefore a major determining element in the roll-out of each evaluation. APs therefore “have to get it right”.

The APs examined are clearly written, well-structured and of reasonable length.183 We were surprised to find, however, that they do not make explicit the objectives of the evaluated intervention(s), e.g., through a clear objective tree, or through an explicit theory of change, intervention logic or logframe. Whilst one of the APs contains, in an appendix, a results framework for the evaluation, the results framework for the intervention (PBO) itself is lacking.

Inception reports

Only one Inception Report was given to the Panel for review (SDF 6&7). This gives an in-depth description of the evaluated programme and provides a clear Theory of Change. It is good practice that this is established after a pilot field mission, which helps to amend the initial AP on the basis of field observations and sharpen the evaluation questions if needed.

However, it is still considered good practice to have the Theory of Change elaborated in the initial design documents. This would facilitate OIE evaluations after project completion. Establishing the Theory of Change of any intervention could be included more explicitly in the QaE form, to be developed jointly by the Quality unit referred to above and the OIE.

Evaluations and Assessments

Three evaluations and four assessment reports completed during the review period were considered. Assessments are similar to evaluations but have a narrower scope; they focus on a limited set of aspects or judgment criteria, mainly effectiveness, i.e. achievement of objectives.

183 Opportunities remain of course to be more concise and to move parts to appendices, e.g., detailed descriptions of the evaluation team or part of the description of the evaluated intervention.


Evaluations generally base their judgment on the internationally recognised DAC criteria, as well as on aspects of the CDB’s and BMCs’ management of the intervention.

In general, these reports are of reasonable quality. In the main, they explain the evaluated object184 and provide evaluation objectives. The findings are organised around the evaluation criteria or questions detailed in the scope and objectives section of the report. They are based on evidence derived from data collection and analysis methods as described in the methodology section. The reports tend to dwell on the limitations that the evaluation encountered, but without becoming defensive. In one case (PBL Assessment) the report starts with a summary of the reviews on the topic done by other MDBs. This was a pleasant surprise and indeed a good practice that could well be adopted in future evaluations too.

However, the reports also show several significant weaknesses:

- Reports do not always provide clear (reconstructed) intervention logics or theories of change for the intervention(s) evaluated.185 Evaluation criteria and questions are defined at a fairly general level. They are translated into more precise “research questions” (in an “Evaluation Design Matrix”, for each project and each criterion). However, it is unclear how these questions relate to the intervention logic (as this is not made explicit). This may be done in inception reports (of which, as noted above, only one was available for review), but should also be done in the final reports.

- The reports do not describe the link from the evaluation questions to the answers, how the evaluation judgments are made, and how these ultimately translate into ratings for each criterion and each project. In other words, the explanation provided in the evaluation frameworks is inadequate. The “evaluation design matrix” currently used does not provide sufficient insight into how an intervention’s performance is ultimately judged. Links between findings, conclusions and recommendations could be improved by making this more explicit. In particular, reports should include the story of how the evaluand is credibly linked to any observed outcomes and impacts, and should be clear on how causal claims are made.

- With the exception of the PBL Assessment, reports are lengthy and detailed. One reason for this is an over-emphasis on ratings. Their detailed discussion, project by project, criterion by criterion, occupies a very prominent position in the evaluation reports’ main body of text. Although ratings are traditionally an important element in evaluations of MDBs, too strong an emphasis can be tedious and may distract the reader from the real lessons to be drawn. The detailed discussion of ratings, and their evidence base, would be better placed in an Appendix, with a brief summary in the main report. This would give the lessons and recommendations a more prominent position than is now the case, and would make the evaluation reports not only shorter but also more interesting to read; this could add value to evaluation’s image within the organisation.

- The reviewers feel that the OIE evaluations tend to over-emphasise objective-based evaluation187 and the DAC criteria to the exclusion of other evaluation


approaches, such as Developmental Evaluation (Patton, 2010188); evaluation should be case-specific and answer the actual information needs of managers and other decision makers rather than always concentrating on final performance.

- Related to the previous point (and again with the exception of the PBL Assessment), executive summaries (approximately 8 pages) are too long. To increase an evaluation report’s potential impact, they would need to be reduced to 2 to 3 pages and be more focused; again, this could be done by dwelling less on the individual ratings of projects and more on key findings, lessons and conclusions. More generally, reports could be better adapted to the needs of the different audiences. Although not strictly limited to evaluations, the Health Evidence Network reports189 are a model that could be adapted for evaluation reporting purposes; they are specifically geared towards addressing policy and decision-making.

- The “Recommendations to BMCs” are an interesting feature of the reports, although we are unsure to what degree such recommendations can be effectively followed up by the OIE or the Bank; they could certainly be taken up with BMC Board members.

- Reports (e.g. the evaluation report on Technical Assistance) focus heavily on technical problems that were encountered during the evaluation. Although these are important issues, to improve the report’s flow and “readability” this section would again be better placed in the Appendix. What counts is the story of the intervention, not the story of the evaluation (see the “Limitations” section in the TA report, for instance).

184 Sometimes at great length: for instance, in the SDF 6&7 multicycle evaluation report it is only at page 30 that the report on findings begins.
185 Again with the SDF 6&7 evaluation, it is said to be guided by a “Logic Model” which is not explained.
187 The focus of an objectives-oriented evaluation is on specified goals and objectives and determining the extent to which these have been attained by the relevant intervention. See, for example, Worthen, Sanders, & Fitzpatrick (1997) Program Evaluation: Alternative Approaches and Practical Guidelines (2nd Ed). White Plains, NY: Addison Wesley Longman.

OIE Validations of Project and Country Strategy Programme Completion Reports (referred to collectively as PCRs hereafter)

As said above, the OIE has the mandate to validate the Project and Economic departments’ PCRs and CSPCRs. However, in this period of transition, much of the OIE’s work since 2012 has been dealing with the backlog of the CDB self-evaluation validations. In theory, an estimated 15 completion reports are due each year. However, delays in submitting the reports for validation are commonplace. Therefore, with the change of Head in June 2014, the OIE secured the OAC’s agreement to reduce the number of validations to a maximum of 6 per year. Nevertheless, the backlog continues to accumulate, as only 2 PCRs were given to the OIE for validation in 2015.

The validations tend to repeat the different items reported in the PCRs and then provide extensive comment on each. The PCVRs go into great depth and detail, which makes the documents rich and complete. This is their strength, but also their weakness. The depth and level of detail, as well as the repetitions from the original PCRs, make PCVRs (overly) lengthy (20-40 pages) and difficult to read. The OIE reported spending approximately 27.2% of its time in 2015 on validating PCRs, compared with 44.4% on its core work, i.e. doing or managing the higher-level evaluations; in other words, validation absorbed more than half as much time as the core evaluation work itself. Finally, the PCVRs now seem to be, to a great extent, a standalone output of the OIE. It is not always clear to us how they are being used as the “building blocks” for the OIE’s independent evaluations. Making this link clearer in the independent evaluations would help show the value of the time being spent on the self-evaluation validations.
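To make that comparison concrete (this is our own arithmetic on the two reported percentages, not a calculation supplied by the OIE):

\[
\frac{27.2\%}{44.4\%} \approx 0.61 > \frac{1}{2}
\]

That is, for every hour spent doing or managing the higher-level evaluations in 2015, roughly 37 minutes went into PCR validation.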

To conclude, the review finds that the OIE has taken steps to improve the perceived utility of evaluation in several ways. In the first instance, by planning its work to provide relevant and timely evidence geared towards helping the Board with its oversight and decision making tasks. The topics are selected through dialogue between the OIE and key CDB stakeholders and reflect


priorities of the CDB’s strategic plan. Secondly, by securing the interest, and consequently the buy-in, of the OAC and CDB senior management through engaging their input throughout the evaluation process. This is evidenced by the reported interest in the latest three studies: the country strategy programme evaluation in Haiti, the evaluation of policy-based operations and the SDF 6&7 multicycle evaluation.

188 Patton, M.Q. (2010) Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use, Guilford Press.
189 See the reports available at the WHO’s Health Evidence Network at http://www.euro.who.int/en/data-and-evidence/evidence-informed-policy-making/health-evidence-network-hen

The OIE’s products are of acceptable quality and could be even better if some of the shortcomings were addressed. However, it is not the products themselves that impair the utility of OIE’s work; this is undermined in other ways: (1) by the time delays in commenting on PCRs (OIE) and in providing feedback on the independent evaluations (operations and management); and (2) by the inefficient processes for agreeing topics and funding sources, as well as for providing the OIE with management responses to its reports.

Putting Evaluation to Use: transparency, feedback and follow-up

There are several ways in which evaluation can be, and is being, used. As John Mayne has pointed out in his many publications on the issue,190 when we talk of evaluation use we are mainly thinking of instrumental use: use made directly to improve programming and performance. But there is also conceptual use, which often goes unnoticed or, more precisely, unmeasured; this refers to use made to enhance knowledge about the type of intervention under study in a more general way. And there is reflective use: using discussions or workshops to encourage and support reflection on the evaluation findings to see how they might contribute to future strategies.

In the case of the CDB, there is some evidence to suggest that “use” is not only instrumental; other types are also developing. For example, the review of draft evaluation reports includes reflective workshops that not only discuss the findings but also seek to draw out the important lessons (reflective use).

Another important use, as recommended by the ECG, is that from time to time a synthesis of lessons is drawn from a number of evaluations and made publicly available. In fact, the Panel was impressed to hear that in the past the evaluation unit had done this, drawing on lessons from evaluations of the power sector (conceptual use). Although nothing has happened since, it is now on the “to do list” for 2016 (OIE’s 2016 Work Plan).

As for instrumental use, responsibility for using the knowledge generated through evaluation, and for possibly drawing up an action plan of what should be done, rests with CDB senior management and the relevant CDB department and division. Oversight of the application of recommendations and the uptake of lessons drawn is the responsibility of the OAC.

Evidence on how evaluations have actually contributed to decisions or negotiations is lacking or confusing. Certainly, the OIE is unaware of the extent to which its evaluations are put to use. On the one hand, the OAC minutes sometimes indicate that lessons learned are integrated into the next phase. On the other hand, the reviewers were told that often, in the past, the evaluation results were “too old” to be of use, as the lessons had already been drawn and used well before the report was completed. Similarly, gaps in people’s memory of how well the evaluative information from previous studies was used may also account for the scarcity of evidence.

In response, the Panel questioned CDB staff and the OIE about a particular study, the Technical and Vocational Education and Training Assessment. The feedback was somewhat contradictory. On the one hand, the study was criticised as “confirming” news rather than bringing “new news”. On the other, we learned that in October 2015 the Board of Directors approved a proposal for the revision of CDB’s Education and Training Policy and Strategy. Work on this has already begun and an external consultant has been engaged to lead the process.

190 See, for example, his opening chapter in Läubli Loud, M. and Mayne, J. (2014) Enhancing Evaluation Use: Insights from Internal Evaluation Units, Sage Publications.


Although it is one of the OIE’s tasks to set up a database on results and lessons learned from evaluations, so far this has not been a priority. There is also currently no systematic tracking of lessons or recommendations arising from the evaluations, or on any progress in their uptake. (The Panel has already referred above to OAC’s lack of oversight in the use of evaluation.)

The OIE’s role in supporting CDB’s organisational learning is clearly specified in the Evaluation Policy, with many good suggestions for knowledge-sharing activities such as “brown-bag lunches, workshops, pamphlets and short issues papers” (p. 19). So far, however, the OIE’s lead role on the knowledge-sharing side appears to be quite limited. It has provided advisory input in Loan Committee discussions, and organises workshops together with the relevant operations department to discuss the implications of evaluation studies. Ultimately, of course, the uptake of evaluation results and knowledge is in the hands of management. But the evaluation unit has an important role to play as knowledge broker and knowledge manager. Both roles have tended to be underplayed in OIE’s work plans so far.

Transparency: The Communication Strategy

In recent times, and with the approval of its new Disclosure Policy, the CDB has started to post its independent evaluation reports on its website. (There is nothing on the self-evaluations.) The website also presents a good overview of the role and function of the OIE and of evaluation within the CDB. This is a step in the right direction for sharing information. However, in our view, the CDB’s communication strategy is the weakest part of the evaluation system to date.

The Panel has already commended the OIE on its efforts to engage the CDB and the OAC in evaluation work. But reporting and communicating the lessons seem to be entirely targeted at the Board and the CDB. Moreover, the 2015 budget provides only US$2,000 for communication, none of which is intended for outreach.

The reviewers feel that active engagement with the more indirect stakeholders, for example project implementers in the BMCs, NGOs or project beneficiaries, is relatively weak191. There appears to be little reflection on drawing out significant messages for the broader group of stakeholders, or on how to transmit them to the “right” people in the “right” way (knowledge brokerage).

To conclude, evidence on the uptake of evaluation is either confusing or sparse. It is unfortunate that no systematic record-keeping system has yet been put in place to track lessons learned or the uptake of recommendations (or actions agreed in management responses). The OIE plays a weak role in brokering the knowledge generated through evaluations to the benefit of external partners, and in managing such knowledge. Although the Evaluation Policy specifies the need for “distilling evaluation findings and lessons learned in appropriate formats for targeted audiences both within and outside the CDB” (p.19), such a targeted communication strategy has yet to be developed and budgeted.

Strengthening Evaluation Capacities and Networking

From the outset in 2012, the OIE has stressed the importance of developing and strengthening evaluation capacities within the OIE, the CDB and, subject to available resources, in borrowing member countries. Building evaluation capacity in BMCs and the CDB is one of the OIE’s mandated tasks, and it has figured as a priority in the work plan from the beginning (Work Programme and Budget 2012-2014). Developing an internship programme for graduates from the Caribbean region was one idea advanced to help build local evaluation resources. However, capacity-building has to date primarily been focused on OIE and CDB staff. One of the OIE’s two objectives for 2015, therefore, was to take up the challenge and “strengthen evaluation capacities and networking”, to include reaching out to the BMCs.

191 A broader communication strategy is one of the principles and good standards of the Evaluation Coordination Group and the Evaluation Community more generally.


Developing OIE staff capacities

The change from project-level to strategic and thematic evaluations does require different evaluative skills and competencies. The MDB Evaluation Pyramid presented below in Figure 3 shows the different types of evaluation and the changing resource needs as one ascends the pyramid. Implicit here also is the change in the type of expertise and competencies needed as evaluation aspires to the higher levels.

Consequently, for 2015 the OIE set itself the objective of networking and developing working partnerships with regional and international evaluation entities and academic institutions. The rationale was twofold: (1) to secure further support and guidance, and (2) to increase its outreach and coverage through joint work and international exposure. Another implicit aim was to benefit from partners’ contacts in the BMCs wherever possible, so as to improve data collection and quality.

Figure 3: The MDB Evaluation Pyramid192

The OIE has therefore linked up with Carleton University in Canada and the University of the West Indies, Barbados campus. The OIE was also approached by the Development Bank of South Africa to exchange experiences about setting up an evaluation entity in a “small” development bank. However, its attempt to become a member of the Evaluation Cooperation Group was not successful for reasons beyond its control.

The OIE is to be commended for addressing the issue of staff competencies and professional development more generally. New developments in evaluation, as well as new developments in the scope of OIE’s work, may necessitate new competencies. For this reason, organisations such as the International Development Evaluation Association have recommended that the


competencies of evaluators and evaluation managers should be periodically reviewed. Several publications now exist on competency requirements and suggestions for the periodic review of staff competencies.193

192 US Treasury Report to Congress on Evaluation Standards and Practices at the Multilateral Development Banks (September 2014, Annex C).

It is not within this Review’s remit to compare and contrast OIE’s competencies with those recommended by international and national agencies. What we can say, however, is that the OIE demonstrates great forethought in taking this on board.

Capacity building within CDB

The OIE’s objective also consists of continuing to develop measures for improving the monitoring and self-evaluation side of CDB’s work. OIE’s strategy here is to use the windows of opportunity offered by some of the training sessions organised by CDB as part of its shift towards MfDR, e.g. by Corporate Planning Services and Technical Assistance. For 2016, it is also planned to have the OIE present at the annual staff meeting and Learning Forum.

The OIE also organises some ad hoc training with operations, for example to help staff understand new tools (e.g. for drawing out lessons from self-evaluation reports) and, more generally, to help staff appreciate how evaluation can add value to the organisation’s work. Measures include providing advisory services on demand, and providing training alongside the introduction of new or revised tools.

Capacity building in the BMCs

This is an ambitious task and, to be effective, would require additional investment beyond the bi-annual work plans. A modest attempt was made in 2015: from what we understand, the OIE has joined with Carleton University and the University of the West Indies, using their networks in some of the BMCs, to try to develop this aspect.

To conclude, we cannot comment on the quality or reaction to such training, but can commend the OIE for making capacity building one of its priority objectives. From both the Policy and the documents we reviewed, we note that capacity building was always seen to be an important aspect of OIE’s work, but hitherto has received little strategic focus. But the resources currently available to the OIE will limit the scope of such work in the BMCs, which in turn, will continue to hinder the production of sound evidence for the OIE’s evaluations.

Adequacy of the OIE’s human and financial resources to support its work

OIE’s Human Resources

The OIE has a staff of five: the Head, one senior evaluation officer, two evaluation managers and one administrative assistant. Three of the five were recruited from within the CDB. This limited capacity means that it is not feasible to cover all the types of evaluation activities outlined in the Evaluation Policy. Yet there is some indication from the Board that the OIE should embark on impact evaluations at some future stage. An increasing demand for evaluations, and for impact evaluations in particular, would run the risk of overstretching the OIE’s capacity to deliver credible and useful evaluations. Moreover, there are many other designated OIE activities that should be recognised as valuable work: the validations, building CDB and BMC evaluation capacity, providing supervision and advice, knowledge management and brokerage, and managing evaluation contracts. The time needed for all of these may be underestimated in OIE’s budgets; all are important for assuring best value from evaluation. The Panel is concerned that a demand for “doing” evaluations, as well as OIE’s interest in advancing its skills in high-level evaluations, may undermine the importance and time needs of other


essential tasks.

193 E.g. IDEAS (2012) Competencies for Development Evaluation Evaluators, Managers and Commissioners; the Canadian Evaluation Society’s Competencies for Canadian Evaluation Practice (2010); and the Swiss Evaluation Society’s Evaluation Managers Competencies Framework (2014).

Limited and unpredictable resources for independent evaluations

The OIE is funded from the general administrative budget and represents approximately 2.5% of the total. Whilst this is seemingly a higher proportion than in other MDBs, in real terms it is quite limited: 75% of the OIE budget goes to staff salaries, leaving US$190,000 in 2015 for external consultants and other expenses.
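For orientation, the implied totals can be read off these two figures (our own back-of-the-envelope arithmetic, assuming the US$190,000 represents the full 25% non-salary share and that the 2.5% ratio is exact):

\[
\text{OIE budget (2015)} \approx \frac{190{,}000}{1-0.75} = \text{US\$}760{,}000, \qquad \text{CDB administrative budget} \approx \frac{760{,}000}{0.025} \approx \text{US\$}30\text{M}
\]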

CDB’s donors do not appear to specify a budget for monitoring and evaluation activities. This means, on the one hand, that there is no clear external budgetary recognition of operations’ self-evaluation work or of OIE’s time in the validation process; and, on the other, that whilst donors expect to receive reports from independent evaluations, the expectation is not backed by a clear allocation of funds.

The resources available to the OIE for hiring external consultants have dropped from US$350,000 in the revised 2014 budget to US$120,000 in the 2015 indicative budget. The OIE estimates that, for high-level evaluations, the cost of external consultants is between US$90,000 and US$350,000 (the SDF 6&7 evaluation cost US$255,000). In the Panel’s experience, this is a sound estimate. With one staff member fewer during 2014-2015, coupled with OIE’s focus on dealing with the backlog of self-evaluations amongst other priorities, the OIE was unable to execute some of the evaluations during the annual budget period. Hence, the budget was reduced for the subsequent years but has proven insufficient to fund the OIE Work Programme. The OIE has therefore needed to turn to the only alternative source available at present, the SDF. But the SDF funding rules apply to specific countries and themes, which obviously restrict the OIE’s choice of evaluation subjects and themes. Since the SDF does not allow for OIE recurring costs such as staff travel, SDF-funded evaluations have to be outsourced. As presented in Figure 1 above, the approval process is inefficient and causes delays. The Panel learned that additional funds, for example for specific studies, could be secured from within the administrative budget during the year, on condition that the request was based on sound arguments.

Whilst the Panel fully appreciates that the Bank is operating within a zero-growth framework, the reviewers were surprised to learn that OIE funding is not sufficiently secured in line with its priorities and work plan. The need to seek alternative funding for individual studies does not allow for any flexibility and undermines the OIE’s independent judgment of what needs to be done.

To conclude: the OIE is inadequately resourced to meet the expectations outlined in the CDB’s Evaluation Policy. However, the Panel recognises that CDB itself has budgetary restrictions. But current arrangements to secure extra funding are complicated, inefficient and limit the OIE’s ability to exercise autonomy in the selection of its evaluation studies. Moreover, OIE budgets significantly underestimate the time needs of managing evaluations and other evaluation activities.

Self- and independent evaluation

Self-evaluations cover public sector investment, lending and technical assistance, policy-based loans, and country strategy programmes. Both types of evaluation are important as they are at the very heart of the evaluation function; the self-evaluations are said to be the building blocks for the more strategic evaluations that the OIE is now undertaking.

The Evaluation Coordination Group recommends that the self-evaluations be carried out by the relevant operations department and in turn, reviewed and validated by the organisation’s independent evaluation office. The CDB’s Evaluation Policy therefore talks of “validating all self-evaluations” as being one of OIE’s essential oversight tasks.


Within CDB, the self-evaluations should provide management with performance assessments and thereby serve an accountability function to the CDB and Board. To support the process, the OIE provides operations with manuals and checklists for guidance. Once a self-evaluation report is to hand, it is given over to the OIE for the validation of its technical quality and credibility.194

However, in the CDB’s case, there are well-documented issues that have affected the quality and timeliness of the self-evaluations and, therefore, the quality of the foundation on which to build the independent evaluations. The paucity of documentation within CDB, the paucity of data collected and available in the Borrowing Member Countries (BMCs), and the delays in producing completion reports and, in turn, in having them validated by the OIE: all such issues were systematically raised during interviews and in some of the independent evaluation reports. There appears to be little incentive to complete self-evaluations in a timelier manner.

Generally speaking, many of the monitoring data problems appear to be due to a lack of management oversight. For example, with the introduction of results-based management, the logic frame and the monitoring and data needs are systematically being built into intervention design. However, the BMCs are not delivering the data as contractually agreed at the outset. Incentives to support any significant change towards building a results-based culture seem to be weak, and sanctions are rarely enforced when the supply of data is lacking or lengthy delays to the projects occur. Although we can appreciate the complexities of trying to enforce monitoring compliance, the result is that project deadlines have often had to be extended, data gaps are not being satisfactorily dealt with and, in turn, there has been a void in the quality and quantity of available evidence for the CDB’s self-assessment of project performance. For some time, this lack of oversight has been tolerated. Part of the problem is the low priority accorded by operations to completing the self-evaluation reports, coupled with the absence of any focal point within senior management to drive the process and deal with the problems.

No record is kept of how the self-evaluation results are actually used. They do not appear on the CDB website, but we were told that the findings are integrated into subsequent project designs. Hence we are somewhat unclear as to the current utility of these reports. The situation is exacerbated by a rather confused image of evaluation: some operations staff consider OIE’s input (through validations or independent evaluations) to be sometimes over-critical, regulatory and adding little value; it is seen as a threat rather than an opportunity for learning. Yet at the same time, evaluation is recognised as an integral part of results-based management.

According to the Evaluation Policy (p.15), “The President, with the support of the Advisory Management Team, is accountable for encouraging and providing an environment where evaluation adds value to the overall management of CDB’s activities and fosters a culture of critical analysis and learning”. But in the CDB, a learning culture still appears to be in its infancy. The leadership role as expressed in the Evaluation Policy is underdeveloped.

Some managers, however, seem to be starting to change the status quo. For example, a revised and simplified template for producing project completion reports is being considered, and mid-term project reviews are expected to be more stringent in looking at monitoring plans and practices and in tying disbursements to performance. In some cases, we also learned of incentives being introduced to encourage project managers to complete their reports in a timelier manner. But much remains to be done and, since the OIE is no longer responsible for monitoring and project evaluations, there is a void that needs to be filled. It is up to line managers to drive this work forward.

To conclude, it is fair to say that, in view of a number of “frustrations” between the OIE and operations, largely to do with delays in exchanging comments on the various reports as well as the paucity or outright lack of monitoring data, the added value that evaluation might offer to the operations area is ill recognised. Moreover, the link between self-evaluation as the


building blocks for the independent evaluations is not apparent. Thus there is little incentive or management focus to drive any change to current practices. In other words, there is a lack of leadership to advance a learning environment in which evaluation can play a major part.

194 According to the Evaluation Policy, the OIE should validate all PCRs and CCRs but, due to the backlog of reports and the delays in completing them (sometimes years late), the OIE has, since October 2015, secured OAC agreement to validate a maximum of 6 per year, selected in consultation with the OAC.


Part Three: General Conclusions and Recommendations

To conclude, with regard to the Evaluation Policy and OIE’s independence, our Review finds that over the past few years the CDB has succeeded in establishing an independent evaluation office that is credible and respected. It reports to a Board Committee and is thus organisationally independent from CDB management. Its work is grounded in an Evaluation Policy, agreed by the Board and the CDB, that reflects internationally recognised principles and good practices. The Policy sets out a broad scope of responsibility for the OIE which, however, seems over-ambitious given current resource constraints. The OIE clearly has both an accountability and a learning function; the latter should support the development of an organisational learning culture. (So far, the uptake of recommendations and key lessons has not been systematically monitored or recorded.) In general, on the issues of independence, we can conclude that the OIE meets the criteria for organisational and behavioural independence and is protected to a certain degree from external or contextual influences.

However, as the independent Advisory Committee for Development Impact has said, “independent evaluation needs to have clout … credibility of evaluation hinges on public perceptions as well as on reality.”195

We are therefore highlighting a few potential threats even though there is no evidence to suggest they are in any way real at present. But it would be in the OIE and CDB’s interest to have these clarified sooner rather than later. For instance,

Any delays incurred in reporting self-evaluation and independent evaluation results to the Board could be interpreted as operational interference.

Similarly, there is no agreed process for dealing with conflicts of interest between the OIE and management in reporting results; it is simply expected that any disagreements will be reported in the management response.

Another possible threat is the lack of complete autonomy that the Head of the OIE has over staff: recruitment, termination, continuation, and professional development. The Policy is not sufficiently clear about who has the final word in the case of disagreement.

And finally, on resources, our Review accepts the limited funds available to the CDB and the fact that the OIE’s budget is not independent but operates within the Bank’s budgetary limitations. Nevertheless, we feel that more flexible arrangements could be devised that would allow less restrictive and timelier access to funds.

With regard to governance, our Review has highlighted the difficulties the OAC faces in not receiving the background papers for its meetings in sufficient time to be able to do them justice. Moreover, these documents tend to be very lengthy and not necessarily “reader friendly”. The OAC’s oversight responsibility is likely to be weakened, and we can already see some indication of this. For instance, requests for systematic follow-up on management actions resulting from evaluation findings have not been answered. Nor is there a standing item for this on the OAC agenda, so such requests can easily be passed over and forgotten. The broadened responsibilities now given to the OAC also mean that there are many competing entities trying to secure the OAC’s attention. There is now provision for the OAC to call on consultants for help, which we feel may strengthen the OAC in its oversight responsibilities.

Furthermore, in their capacity as members of the Board, the OAC members should stress the urgency of developing evaluation and monitoring capacity in the BMCs, since this gap is having a direct impact on OIE and CDB evaluations.

With regard to the OIE’s performance, we have to respond to the questions raised in this Review’s Terms of Reference, which basically means answering two main questions: Is the OIE doing the right thing? And is it doing it in the right way?

195 Picciotto, R. (2008) Evaluation Independence at DFID; An independent Assessment prepared for the Independent Advisory Committee for Development Impact (IADCI) (p. 4).


There is no doubt that the decision to establish a credible, independent evaluation function in the CDB is the “right thing” to do; “effective and useful evaluation and oversight activities can assess development effectiveness, hold the organisation accountable for results, and improve operational performance.”196 It is also the policy of the MDBs to have such a function, and the CDB has now aligned itself with international standards and practice.197 The question now, therefore, is the following: is the OIE going about it in the right way?

The OIE has taken the “right” steps to improve the engagement and interest of the OAC and CDB senior management, from selecting the topics for its evaluations through to finalising the conclusions and recommendations in a collaborative spirit. It falls short, however, of taking the messages emerging from the studies to “outsiders”, such as those responsible for implementing CDB interventions in the BMCs.

In its oversight role, we feel that the OIE has paid insufficient attention to the actual utilisation of evaluation; it is beyond its responsibility to see that action is taken, but it is certainly within its remit to record how, and how well, the lessons drawn have been taken up and used. With regard to its oversight of the self-evaluations (the validation process), the OIE has attempted to improve dialogue with the operations departments and to demonstrate the dual function of oversight and learning. It is now emphasising the learning aspect by providing tools and guidance on how to draw out lessons and integrate them into future planning. More recently, it has sought ways to provide more formalised training on evaluation by working with the corporate planning services and the technical assistance department to develop courses that show how, where and when evaluation plays its part within the MfDR framework.

However, one of the challenges in evaluation management is balancing independence with facilitating buy-in and ownership at the same time. It is a fine line to walk, and much depends on the climate between management and the head and staff of the independent evaluation unit in defining the tone of the collaboration. In practical terms, for the CDB this means defining the role of the OIE in relation to the self-evaluations performed by the Projects and Economics Departments. The change from the EOV to the OIE made this role change quite clear: the OIE no longer has responsibility for project monitoring or for planning data needs together with the operational departments. On the other hand, to improve understanding and learning, there needs to be an interface between evaluation and management. At present, the OIE’s dual role – that is, its advisory role in relation to operations and its strategic role towards the OAC and senior management – has not been satisfactorily resolved. The operational staff still do not appear to see any urgency in producing their completion reports, or to appreciate what lessons might be drawn from such reflection. The OIE is doing its best to support “learning” whilst at the same time keeping at arm’s length. The greatest challenge the OIE faces in its new capacity is the slow development of an organisational learning and evaluation culture.

A Learning and Evaluation Culture

Evaluation utility depends on the engagement of evaluation users – those who should benefit from the knowledge generated through the studies. Useful evaluation therefore depends to a large degree on the development of an evaluation and learning culture and on how well these are embedded in the organisation. This means that the organisation recognises and appreciates evaluation’s role and the functions it can have, particularly in helping it understand what it is achieving and where and how improvements can be made. In short, the added value that evaluation can bring to the organisation is its ability to draw out the important lessons that can help improve the organisation’s performance.

However, whilst CDB senior management shows all the signs of embracing evaluation as an important strategic tool, there still appears to be some apprehension about receiving criticism, however constructive it might be. The OAC has already affirmed its interest in learning what can be “put right the next time around”. In considering accountability, the committee is asking for a more strategic approach to learning and sharing knowledge based on evidence. The CDB also shares the development goals of other MDBs, that is, “to end extreme poverty and promote shared prosperity”. This means looking for new forms of problem-solving and for ways to create a “development solutions culture”. Hence there is an interest in learning from experience and exchanging knowledge about what works. This implies balancing accountability and learning: making sure they are not seen as opposites, but as compatible entities. This greater emphasis on learning requires a reframing of CDB’s thinking and of the way it deals with the constructive criticism that evaluation can offer.

196 CDB (2011) Evaluation Policy (p. 2).
197

(The following excerpt, apparently drawn from a review of UNRWA’s evaluation function, is included here for comparison.)

Weak evaluation culture

27. While some stakeholders seem keen on evaluation, the overall evaluation culture in UNRWA is weak. There are several aspects to it.

28. First, many of the interviewees stressed that UNRWA has a weak learning culture, stemming from a number of factors. One reason given relates to the cultural virtue of oral communication, which makes conveying documented experiences challenging. Another reason is language: a majority of UNRWA’s national staff is not fluent in English, and evaluation reports are mostly in English. Furthermore, criticism – even if constructive – is, according to some interviewees, mainly perceived as a threat and not as an opportunity. Finally, learning is also affected by a very basic constraint – lack of time.

29. Second, there is a weak knowledge management system for systematically collecting and sharing experience and lessons learned in UNRWA. UNRWA communities of practice do not exist; several interviewees mentioned the use of knowledge networks outside of UNRWA, i.e. communities of practice managed by other agencies. Also, accessing evaluation reports is not easy. The UNRWA website does not provide access to evaluation reports and, while the Agency’s Intranet has a site for evaluation reports, it is not a complete repository; the Evaluation Division does not know exactly how many decentralized evaluations are being produced. In addition, there are only a few evaluation plans at the level of field offices or departments.

30. Third, the Panel found that decentralized evaluations are – at least partly – perceived as donor-driven accountability instruments rather than as learning tools. In that sense, evaluations are managed as bureaucratic requirements, thereby weakening the learning dimension.

31. Finally, the sensitive political context in which UNRWA operates may also discourage a strong evaluation culture as evaluative evidence can sometimes be overridden by political considerations.14 The Panel was repeatedly told that given the political context, any change is a challenge.

14 An example mentioned to the Panel was the evaluation of the Qalqilya Hospital (2013), which concluded that the Hospital should be closed. However, for political reasons …


Recommendations

(Here is a list of some possible recommendations – to be discussed and developed within the Review Panel initially and then proposed for discussion together with the OIE.)

- OAC’s oversight responsibility needs to be strengthened.
- (Possibly) review the Evaluation Policy to redress gaps.
- OIE to develop a 5-year strategy providing a step-by-step work programme, budget and timeframe for implementing the Evaluation Policy.
- Stronger leadership from the President to provide a conducive climate for promoting the added value of evaluation to overall management and fostering a culture of critical analysis and learning.
- Stronger support from the Advisory Management Team for the evaluation function, by emphasising the accountability of CDB managers for performance results and by devising incentive schemes to support the accountability function.

- Set up committees (advisory groups) to accompany study-specific evaluations, as a means of reinforcing ownership.

- The OIE could contribute to developing a learning culture within the CDB by adopting the role of critical friend in its dealings with the CDB operations departments and the CDB more generally. Valuable insight can be gained from a focused conversation with the client about aspects of the evaluation that went well and those that did not go so well; some meetings might include all relevant stakeholders in addition to the main client. This feedback can then inform a subsequent internal project team debrief (e.g. identifying internal professional development or process improvement needs), and input from all perspectives can inform planning for future projects. A summary of these discussions can then be circulated so that everyone can use it for their own internal or external quality improvement purposes.

- Rather than asking “what went wrong?”, the conversation can turn on “what surprised us, what we’d do differently, what did we expect to happen that didn’t, and what did we not expect to happen that did”. This is a better means of getting at the negative aspects without placing blame.

- OIE to train up and engage “champions” within CDB operations departments to help demonstrate evaluation utility and provide “on the job” training in self-evaluation to colleagues.
- A quality control group could also be set up, as Picciotto suggested for DFID, to carry out and develop the work of Quality Assessment and particularly Quality at Entry monitoring.

- OIE should be given the resources to build M&E capacity in the BMCs as a priority, possibly in partnership with other MDBs, development organisations and the countries themselves.

- Possibly criticise the over-emphasis on using the five DAC criteria, which have been in use for more than 15 years without going through any major revisions. Given their importance and level of influence in the field, it is pertinent that independent professionals take a critical look at them, especially since many scholars and practitioners consider that the quality of evaluations in development aid has been quite disappointing. (Chianca, T. (2008) “The OECD/DAC Criteria for International Development Evaluations: An Assessment and Ideas for Improvement”, Journal of MultiDisciplinary Evaluation, 5(9), March 2008, http://evaluation.wmich.edu/jmde/.)

- Regarding the validation of self-evaluations: it is recommended that a leaner format be developed for the PCVRs, repeating less of the content of the original PCR text and focusing on the validity of the PCR against a limited number of key issues to be assessed for each PCR. The PCVRs are also written out in full prose, which may not be needed for this type of document; a more tabular format with more succinct statements could lead to a leaner process for producing the PCVRs, without any loss of usefulness. The “PCR checklist” would be a good starting point for this.

- The link between self-evaluations, validations and independent evaluation is not clear at present, nor is the link between self-evaluations and Quality at Entry (QaE) documents – so one wonders what all the effort on the operations side is for. This is a real issue: operations seem to do a lot of interesting and worthwhile things, but there is a lack of coherence. (This observation is based on the documents only; no interviews were done to get a broader picture.)

- This is something the EIB evaluation unit was criticised for in the past too. Since then, it has started to include “younger” projects (sometimes still ongoing) in its samples, and it redoes the portfolio analysis right before finalisation of the report to see whether things have changed; the services can, of course, indicate in their response if things have indeed changed over time.

Recommendations for improving the process for study approval and funding

Give recommendations on priorities for OIE work

- Funding should preferably come from the administrative budget. Unused monies could then be released in the annual budgetary reviews, but this should have no effect on the budget for subsequent years. The Panel is surprised to find that a Board-approved OIE work programme and budget is inadequate; either the proposed budget per work programme …


The Panel, however, encourages the creation of such a quality control unit, whose role cannot be fulfilled by the OIE, as it lies outside the scope and present capacity of the OIE – though the OIE could have an advisory/methodological role.


APPENDICES

Appendix I - The External Review Mandate – Terms of Reference and Approach Paper

Appendix II - Review Approach, Data Collection and Analysis, and Limitations

Appendix III – Overview of OIE Evaluation Practice

Appendix IV - List of Persons Interviewed

Appendix V - List of Documents Reviewed

Appendix VI - List of Topics used to guide interviews with members of CDB Board of Directors

Appendix VII - List of Topics used to guide interviews with CDB staff


Appendix III – Overview of OIE Evaluation Practice (prepared by the OIE in response to Reviewer’s request)

Caribbean Development Bank, Office of Independent Evaluation - OIE

Category: Response

Percentage of projects subject to project (self-)evaluation: 100% – Project Completion Reports (PCR).

Percentage of projects subject to validation by OIE: Approximately 40–50%. About 15 projects exit the portfolio annually. The Evaluation Policy calls for all PCRs to be validated, but OIE resources are insufficient. The validation process was reviewed in 2014; the OAC (a Board committee) now selects a sample of 6–8 PCRs for validation each year.

Percentage/number of projects subject to in-depth review by OIE: None, unless specifically requested by the OAC. Due to limited resources, the focus of the OIE evaluation work programme is on PCR validations and high-level evaluations, including country strategy and programme evaluations (CSPE).

Number of high-level evaluations conducted by OIE (e.g. sector, thematic, geographic): 1–2 per year since 2011. The plan is 2–4 per year from 2016; this would include CSPEs (the first, on Haiti, planned for Q1 2016).

Number of project impact evaluations conducted by OIE: None. OIE includes “impact questions” in high-level evaluations.

Number of project impact evaluations conducted by Bank staff or other non-OIE staff: OIE is not aware of any impact evaluation conducted by the Bank. However, OIE provides technical support to the Basic Needs Trust Fund (BNTF) in its design of an M&E framework that entails impact evaluations.

Budget: In USD mn, 0.78 in 2015 and 0.82 in 2016 – equivalent to about 2.5% of the total CDB Administrative Budget. 75% of the budget is for staff salaries (4 professionals, 1 support staff), leaving around USD 190,000 (in 2015) for other expenses, including consultants, e.g. for external evaluations. Additional funding is accessed via the Special Development Fund (SDF); this varies according to the type and scope of the evaluation, e.g. the ongoing SDF 6/7 evaluation is SDF-funded at USD 255,000. The budget is determined by the Board and is not separate from the administrative budget. SDF funding for evaluations is considered separately and is subject to the Bank’s internal approval process; it cannot be used to cover OIE expenses such as staff time or travel, and country eligibility for SDF funding is also a consideration. OIE expressed concerns about this funding track with respect to predictability, independence and eligibility limitations.

Reporting line: The Head of OIE reports to the Board, with an administrative link to the President.

Terms of appointment for Head: 5-year term, renewable once. Appointed by the President with the agreement of the Board.

Right of return for Head: Not eligible for other staff positions.

Consultants as proportion of OIE budget: 19% in 2015 (USD 145,000), plus SDF funding. SDF-funded evaluations are outsourced.

Last external evaluation (or peer review) of OIE: No external evaluation, though a review of the function was done in 2011, leading to the Evaluation Policy. The OIE External Review was completed in April 2016.

Departments or special programmes supporting impact evaluation: None.


Appendix IV – List of Persons Interviewed

Name – Function relative to OIE – Type of interview

Mrs. Colleen Wainwright – Member, CDB Board of Directors (UK) – Face to face
Mrs. Cherianne Clarke – Alternate Member, CDB Board of Directors (UK) – Face to face
Mrs. Jean McCardle – Member, CDB Board of Directors (Canada) – Face to face
Dr. Louis Woodroofe – Member, CDB Board of Directors (Barbados)
Mr. A. de Brigard – Former Member, CDB Board of Directors – Skype interview
Mr. H. Illi – Former Member, CDB Board of Directors – Telephone interview
Mrs. Claudia Reyes Nieto – Member, CDB Board of Directors – Telephone interview
Mr. Bu Yu – Alternate Director, CDB Board of Directors – Face to face
Mr. Michael Schroll (Barbados) – Head, OIE – Series of interviews via Skype and face to face
Mr. Mark Clayton – OIE Senior Evaluation Officer – Focus group
Mrs. Egene Baccus Latchman – OIE Evaluation Officer – Focus group
Mr. Everton Clinton – OIE Evaluation Officer – Focus group
Mrs. Valerie Pilgrim – OIE Evaluation Officer – Focus group
Dr. Justin Ram – CDB Director, Economics Department – Face to face
Mr. Ian Durant – CDB Deputy Director, Economics Department – Face to face
Dr. Wm Warren Smith – CDB President – Joint interview, face to face
Mrs. Yvette Lemonias-Seale – CDB Vice President, Corporate Services & Bank Secretariat – Joint interview, face to face
Mr. Denis Bergevin – CDB Deputy Director, Internal Audit – Face to face
Mr. Edward Greene – CDB Division Chief, Technical Cooperation Division – Face to face
Mrs. Monica La Bennett – CDB Deputy Director, Corporate Planning – Face to face
Mrs. Patricia McKenzie – CDB Vice President, Operations – Face to face
Ms. Deidre Clarendon – CDB Division Chief, Social Sector Division – Face to face
Mrs. Cheryl Dixon – CDB Co-ordinator, Environmental Sustainability Unit – Focus group
Mrs. Denise Noel-Debique – CDB Gender Equality Advisor – Focus group
Mrs. Tessa Williams-Robertson – CDB Head, Renewable Energy – Focus group
Mrs. Klao Bell-Lewis – CDB Head, Corporate Communications – Face to face
Mr. Daniel Best – CDB Director, Projects Department – Face to face
Mr. Carlyle Assue – CDB Director, Finance Department – Face to face

Appendix VI - Interview Guide: Members of CDB Board of Directors

Below is a list of themes that I should like to raise with you, based on your experience and knowledge of the CDB’s independent evaluation function (Office of Independent Evaluation).

In each case, I should be grateful if you could illustrate your responses with examples, or help this Review by, wherever possible, sending me (or telling me where I can find) any documents that could support your responses.

This guide is being sent to you in advance to help prepare our meeting. However, our interview will be conducted more in the style of a conversation. The following sub-questions will be used to GUIDE the interview. Please feel encouraged to raise any additional issues that you feel we should take into account.

On the governance and independence of CDB’s evaluation function

What mechanisms are there in place to support its independence?

How satisfactory are the current arrangements in your opinion?

How is the balance between independence and the need for interaction with line management dealt with by the system? For example, what mechanisms exist to ensure that the OIE is kept up to date with decisions, policy/programme changes, and other contextual changes that could have an effect on OIE evaluation studies and evaluation planning?

On the OIE’s Evaluation Policy

The CDB’s Evaluation Policy was established in 2011. To what degree do you feel it is adequate? Still relevant?

What suggestions do you have for any improvements?

In your opinion, how adequate is the current quality assurance system for overseeing the evaluation function?

On the quality and credibility of evaluation studies

To what degree do you believe the reports are fair and impartial?

Do you consider them to be of good quality? Are they credible?

Are you adequately consulted/involved on evaluations of interest to you?

On the relevance and usefulness of evaluations

How well does the OIE engage with you / your committee during the preparation, implementation and reporting of an evaluation study to assure that it will be useful to the CDB?

How are the priorities set for the independent evaluations? What criteria are used? Are you satisfied with the current procedure?


When OIE evaluation studies are outsourced to external consultants, what criteria are used to make this decision?

How are the priorities for the OIE’s 3-year rolling work plan agreed? In your opinion, is the current plan adequate in terms of coverage and diversity?

In your opinion, do the evaluations address important and pressing programs and issues?

To what extent do you feel that the OIE’s evaluations integrate cross-cutting themes such as gender, energy efficiency/renewable energy and climate change? What improvements might be made, and how?

On the dissemination and uptake of evaluation findings and recommendations

To what extent do you feel that evaluation findings are communicated to the CDB and its stakeholders in a
a) useful,
b) constructive and
c) timely manner?

Are evaluation recommendations useful? Realistic?

What mechanisms are in place to assure that evaluation results are taken into account in decision making and planning? What improvements do you feel could be made?

How have you used the findings from any evaluations? Examples?

To what degree do you feel that evaluation contributes to institutional learning? And what about to institutional accountability? Any examples?

What mechanisms are in place to ensure that knowledge from evaluation is accessible to CDB staff and other relevant stakeholders? Are the current arrangements satisfactory?

How satisfied are you with current arrangements? What expectations do you have for the future?

On resources

How is the OIE resourced financially, and is this satisfactory?

What about the OIE staff, are all the important areas of expertise represented in the team?

On this Review of the Office of Independent Evaluation

What are your expectations? What are you particularly hoping to learn from it?

Thank you very much for your cooperation and input


Appendix VII: Interview Pro-Forma – CDB Staff Members

This presents a list of the topics raised during interviews. It was used to guide the open-ended discussion – meaning that the questions may not necessarily have followed this sequence or been asked in exactly this wording.

Changeover to an Independent Evaluation Office? Expectations? Advantages and disadvantages?

Satisfaction with working relations between operations and the OIE from your perspective?

Process of dealing with the PCRs and CCRs? Advantages and limitations?

Quality and credibility of the validation process?

How are the self-evaluation reports used?

Credibility and Quality of OIE’s evaluation reports

Communication of self and OIE independent evaluations? To whom, in what way? Possible improvements?

Actual or potential conflicts of interest?

Work Practices

The OIE has had to develop a plan to implement the Evaluation Policy. This raises questions such as: what are the priorities, and what is the timeframe for achieving which activities? These were partially addressed in the OIE work programme and budget for 2012 to 2014, but that programme proved to be over-ambitious. The OIE has also chosen to increase the involvement of its professional staff in conducting independent evaluations. Outsourcing is still needed when the study is funded by the SDF, when time is limited, and when specific expertise is needed.

But plans appear to place little emphasis on the activities associated with evaluation management (e.g. knowledge management) and the time these need. Other time demands mentioned in the previous sections, such as delays in completing reports, validation work etc., have also affected the OIE’s plans. The more recent work plans have set the task of delivering utility-focused and timely evaluations, but they lack clarity on how the OIE proposes to surmount the time and data issues, which are far from new. In short, they lack a theory of change and a timeline. The challenges that have to be dealt with to enable the OIE to move up the MDB evaluation pyramid198 are brought out in the remaining sections of this Review, not least given the limited resources available.

But its strategy lacks a theory of change and a prioritisation of tasks, which should include more emphasis on evaluation management activities. Draft reports are submitted only to CDB line and senior managers; only final versions are given over to the OAC. A series of discussions is held, first with the CDB and then with the OAC, following professional good practices and standards on participative approaches. There is no “accompanying group” for individual studies, which would include both internal and possibly external partners; such “advisory groups” have shown their worth in a number of contexts for improving buy-in and providing strategic input as well. Interest in evaluation was evident among newly appointed directors and, in one case, was followed up in practice. There is scope for improvement in fostering a supportive climate that wants to learn through calculated trial and error; the constructive criticism that evaluation can offer can add value to understanding the strengths and weaknesses of such strategies. As with many other MDBs, evaluation activities include both independent and self-evaluations; the latter are the results of completion reports on operational projects and country strategy programmes and are done by the operations staff. The OIE then validates the quality of such reports. The self-evaluations should inform the more strategic studies conducted independently by the OIE. (More on the relationship between these two is provided later in this Review.)

An independent evaluation is processed as follows: the OIE prepares an Approach Paper (AP) for approval by the OAC. If the study is to be outsourced, the AP becomes the basis for a Terms of Reference (ToR) which, subject to the size of the budget, may be put to tender. The contracted evaluator then prepares an Inception Report (IR) after some desk and field research has taken place; this intermediary report is not produced if the OIE itself is conducting the evaluation. Sometimes a Progress Report is submitted, but otherwise the next stage is the delivery of the final report in various drafts. (Assessments are like evaluations but more limited in scope and depth of analysis.)

Table 4: List of studies (N = 24) submitted to the Board for the period January 2012 to 31 December 2015


It is still considered good practice to have the evaluation objectives elaborated in the initial design documents,199 although alternative approaches exist, such as Developmental Evaluation (Patton, 2010200).

However, in this period of transition, much of the OIE’s work since 2012 has been dealing with the backlog of the CDB self-evaluation validations. In theory, an estimated 15 completion reports are due each year; in practice, delays in submitting the reports for validation are commonplace. Therefore, with the change of Head in June 2014, the OIE secured the OAC’s agreement to reduce the number of validations to a maximum of 6 per year. However, the backlog continues to accumulate, as only 2 PCRs were given to the OIE for validation in 2015.

198 US Treasury Report to Congress on Evaluation Standards and Practices at the Multilateral Development Banks (September 2014, Annex C).
199 The focus of an objectives-oriented evaluation is on specified goals and objectives and determining the extent to which these have been attained by the relevant intervention. See for example Worthen, Sanders, & Fitzpatrick (1997) Program Evaluation: Alternative Approaches and Practical Guidelines (2nd Ed). White Plains, NY: Addison Wesley Longman.
200 Patton, M.Q. (2010) Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use, Guilford Press.

In the review of draft evaluation reports, the process includes reflective workshops that discuss not only the findings but also seek to draw out the important lessons. (The Panel has already referred above to the lack of oversight in the use of evaluation.)

Moreover, the 2015 budget provides only US$2,000 for communication, none of which is intended for outreach.

Figure 3: The MDB Evaluation Pyramid201

201 US Treasury Report to Congress on Evaluation Standards and Practices at the Multilateral Development Banks (September 2014, Annex C).


A modest attempt was made in 2015. But the resources currently available to the OIE will limit the scope of such work in the BMCs, which in turn will continue to hinder the production of sound evidence for the OIE’s evaluations.

Human and financial resources to support its work

OIE’s Human Resources

The OIE has a staff of five, three of whom were recruited from within the CDB. There is an expectation from the Board that the OIE should embark on more high-level evaluations, and on impact evaluations in particular. Moreover, there are many other designated OIE activities that should be recognised as valuable work: the validations, building CDB and BMC evaluation capacity, providing supervision and advice, knowledge management and brokerage, as well as managing evaluation contracts. The time needed to deal with all of these may be underestimated in the OIE’s budgets; all are important for assuring best value from evaluation. The Panel is concerned that the demand for “doing” evaluations, as well as the OIE’s interest in advancing its skills in high-level evaluations, may undermine the importance and time needs of these other essential tasks.

Limited and unpredictable resources for independent evaluations

The OIE is funded from the general administrative budget and represents approximately 2.5% of the total. Whilst this is seemingly a higher proportion than in other MDBs, in real terms it is quite limited: 75% of the OIE budget goes to staff salaries, leaving US$190,000 in 2015 for external consultants and other expenses.

CDB’s donors do not appear to specify a budget for monitoring and evaluation activities. This means that on the one hand, there is no clear external budgetary recognition of the operations’ self-evaluation work or of OIE’s time in the validation process, and on the other, that whilst donors expect to receive reports from independent evaluations, the expectation is not backed by making this clear when allocating funds.

The resources available to the OIE for hiring external consultants have dropped from US$350,000 in the revised 2014 budget to US$120,000 in the 2015 indicative budget. The OIE estimates that, for high-level evaluations, the cost of external consultants is between US$90,000 and US$350,000 (the SDF 6/7 evaluation cost US$255,000). In the Panel’s experience, this is a sound estimate. With one less staff member during 2014–2015, coupled with the OIE’s focus on dealing with the backlog of self-evaluations amongst other priorities, the Office was unable to execute some of the evaluations during the annual budget period. Hence the budget was reduced for subsequent years, but has proven insufficient to fund the OIE Work Programme. The OIE has therefore needed to turn to the only alternative source available at present, the SDF. But the SDF funding rules apply to specific countries and themes, which obviously restricts the OIE’s choice of evaluation subjects and themes. Since the SDF does not allow for OIE recurring costs such as staff travel, SDF-funded evaluations have to be outsourced. As presented in Figure 1 above, the approval process is inefficient and causes delays. The Panel learned that additional funds, for example for specific studies, could be secured from within the administrative budget during the year, on condition that the request was based on sound arguments.

Whilst the Panel appreciates full well that the Bank is operating within a zero-growth framework, the reviewers were surprised to learn that OIE funding is not sufficiently secured in line with its priorities and work plan. The need to seek alternative funding for individual studies does not allow for any flexibility and undermines the OIE’s independent judgment of what needs to be done.

To conclude: the OIE is inadequately resourced to meet the expectations outlined in the CDB’s Evaluation Policy. The Panel recognises that the CDB itself has budgetary restrictions, but current arrangements to secure extra funding are complicated and inefficient, and they limit the OIE’s ability to exercise autonomy in the selection of its evaluation studies. Moreover, OIE budgets significantly underestimate the time needed for managing evaluations and other evaluation activities.

Self-evaluations cover public sector investment, lending and technical assistance, policy-based loans, and country strategy programmes. There appears to be little incentive to complete self-evaluations in a timelier manner.

.

; it is a threat rather than an opportunity for learning. Yis recognized as

According to the Evaluation Policy (p.15) “The President, with the support of the Advisory Management Team, is accountable for encouraging and providing an environment where evaluation adds value to the overall management of CDB’s activities and fosters a culture of critical analysis and learning”. But, in the CDB a learning culture appears to be still in its infancy. The leadership role as expressed in the Evaluation Policy is underdeveloped.a number of , which are largely to do with delays in exchanging comments on the various reports as well as the paucity and/or lack of monitoring dataadded value that evaluation might offer to the operations area is ill recognized Moreover, the link between self-evaluation as the building blocks for the independent evaluation is not apparent. Thus there is little incentive or management focus to drive any change to current practices. In other words, there is a lack of leadership to advanced a learning environment in which evaluation can play a major part.

357

Page 358: f01.justanswer.comf01.justanswer.com/nFcTOGCI/External+Review+of+the+OIE+v.20.3.1… · Web viewf01.justanswer.com

Part Three: General Conclusions and RecommendationsTo conclude, with regard to the Evaluation Policy and OIE’s independence, our Review finds that over the past few years, the CDB has succeeded in establishing an independent evaluation office that is credible and respected. It reports to a Board Committee and is thus organisationally independent from CDB management. Its work is grounded on an Evaluation Policy agreed by the Board and the CDB that reflects internationally recognised principles and good practices. The Policy sets out a broad scope of responsibility for the OIE which, however, seems over-ambitious given current resource constraints. The OIE clearly has both an accountability and a learning function; the latter should support the development of an organisational learning culture. (So far any monitoring the uptake of recommendations and key lessons has not been systematically recorded.) In general, on the issues of independence, we can conclude that the OIE meets the criteria for organisational and behavioural independence and is protected to a certain degree from external or contextual influences.

However, as the independent Advisory Committee for Development Impact has said, “independent evaluation needs to have clout……credibility of evaluation hinges on public perceptions as well as on reality.”202

We are therefore highlighting a few potential threats even though there is no evidence to suggest they are in any way real at present. But it would be in the OIE and CDB’s interest to have these clarified sooner rather than later. For instance,

any delays incurred in reporting self and independent evaluation results to the Board could be interpreted as operational interference.

Similarly, there is no agreed process to deal with any conflict of interests between the OIE and management in reporting results as it is expected that any disagreements will be reported in the management response.

Another possible threat is the lack of complete autonomy that the Head of the OIE has over staff; recruitment, termination, continuation, and professional development. The Policy is not sufficient clear about who has the final word in the case of disagreement.

And finally, on resources, our Review accepts the limited funds available to the CDB and the fact that the OIE’s budget is not independent but operates within the Bank’s budgetary limitations. Nevertheless, we feel that some more flexible arrangements could be devised that would allow for a less restrictive and timelier access to funds.

With regard to governance, our Review has highlighted the difficulties the OAC faces in not receiving the background papers for its meetings in sufficient time to be able to do them justice. Moreover these documents tend to be very lengthy and not necessarily “reader friendly”. The OAC’s oversight responsibility is likely to be weakened and we can already see some indication of this. For instance, requests for systematic follow-up on management actions resulting from evaluation findings have not been answered. Neither is there a systematic item for this on the OAC agenda so that such requests can easily be passed over and forgotten. The broadened responsibilities now given to the OAC also mean that there are many competing entities trying to secure the OAC’s attention. There is now provision for the OAC to call on consultants for help, which we feel may help strengthen the OAC in its oversight responsibilities.

Furthermore, in its capacity as members of the Board, the OAC should stress the urgency of developing evaluation and monitoring capacity in the BMCs since this gap is having a direct impact on OIE and CDB evaluations.

With regard to the OIE’s performance, we have to respond to the questions raised in this Review’s Terms of Reference, which basically mean answering two main questions: Is the OIE doing the right thing? And is it doing it in the right way?

202 Picciotto, R. (2008) Evaluation Independence at DFID; An independent Assessment prepared for the Independent Advisory Committee for Development Impact (IADCI) (p. 4).

358

John Mayne, 19/03/16,
No much in what follows on the conduct of evaluations.
John Mayne, 19/03/16,
Are we prematurely mixing in recommendations?
John Mayne, 19/03/16,
These all seem OK.
John Mayne, 19/03/16,
But the director in some sense would have to abide by the general HR policy. Couldn’t create his own HR regime. I think this needs more nuance.
DE LAAT Bastiaan, 19/03/16,
Mmm, why do we see these threats then
DE LAAT Bastiaan, 19/03/16,
But you say it is credible?
DE LAAT Bastiaan, 19/03/16,
I would agree that this is another topic – in fact not dealt with above.
John Mayne, 19/03/16,
Shouldn’t this and other conclusions be made more prominent? Bullet for or bolded?
DE LAAT Bastiaan, 19/03/16,
Was this pour mémoire? Comes in strangely here
John Mayne, 19/03/16,
Remove???
DE LAAT Bastiaan, 19/03/16,
This I still do not see really; What is this based on?
DE LAAT Bastiaan, 2016-03-19,
Should we stick to the letter of our ToR rather?I have not commented yet this part as I feel that the following text is not yet clearly “filtered out” and mixes things. Maybe we could start from three-four main conclusions responding to our ToR and from that on formulate recommendations with a clear link to our findings. They seem to be a bit independent now.
Page 359: f01.justanswer.comf01.justanswer.com/nFcTOGCI/External+Review+of+the+OIE+v.20.3.1… · Web viewf01.justanswer.com

There is no doubt that the decision to establish a credible, independent evaluation function in the CDB is the “right thing” to do; effective and useful evaluation and oversight activities can assess development effectiveness, hold the organisation accountable for results, and improve operational performance.”203 It is also a policy of the MDBs to have such a function and the CDB has now aligned itself with international standards and practice. 204 The question now therefore is the following; is the OIE going about it in the right way?

The OIE has taken the “right” steps to improve the engagement and interest of the OAC and CDB senior management from selecting the topics for its evaluations through to finalising the conclusions and recommendations in a collaborative spirit. It falls short of taking the messages emerging from the studies to “outsiders” such as those responsible for implementing CDB interventions in the BMCs.

In its oversight role, we feel that the OIE has paid insufficient attention to the actual utilisation of evaluation; it is beyond its responsibility to see that action is taken, but it is certainly within its remit to record how, and how well the lessons drawn have been taken up and used. With regard to its oversight of the self-evaluations (the validation process), the OIE has attempted to improve dialogue with the operations departments and, demonstrate the dual function of oversight and learning. It is now emphasising the learning aspect by providing tools and guidance on how to draw out lessons and integrate them into future planning. More recently it has sought ways to provide more formalised training on evaluation by working with the corporate planning services and technical assistance department to develop courses that show how, where and when evaluation plays its part within the MfDR framework.

However, one of the challenges in evaluation management is balancing its independence with facilitating buy-in and ownership at the same time. It is a fine line to walk and depends to a large degree on the climate between management and the head and staff of the independent evaluation unit in defining the tone of the collaboration. In practical terms, for the CDB this means defining the role of the OIE in relation to the self-evaluations performed by the Projects and Economics Departments. The change from the EOV to the OIE made this role change quite clear; the OIE no longer has responsibility for project monitoring and planning data needs together with the operational departments. On the other hand, to improve understanding and learning, there needs to be an interface between evaluation and management. At present, OIE’s dual role, that is advisory role in relation to operations and its strategic role towards the OAC and senior management, has not been satisfactorily resolved. The operational staff still do not appear to see any urgency in producing their completion reports or appreciate what lessons might be drawn from such reflection. The OIE is doing its best to support “learning” whilst at the same time, keeping an arm’s length. The greatest challenge the OIE faces in its new capacity is the slow development of an organisational learning and evaluation culture.

A Learning and Evaluation Culture

Evaluation utility depends on the engagement of evaluation users – those who should benefit from the knowledge generated through the studies. Useful evaluation therefore depends to a large degree on the development of an evaluation and learning culture and how well these are embedded in the organisation. This means that the organisation recognises and appreciates evaluation’s role and the functions it can have, particularly for helping understand what it is achieving and where and how improvements can be made. In short, the added value that evaluation can bring to the organisation is its ability to draw out the important lessons that can help improve the organisation’s performance.

However, whilst CDB senior management shows all the signs of embracing evaluation as an important strategic tool, there still appears to be some apprehension about receiving criticism, however constructive this might be.

203 CDB (2011) Evaluation Policy (p.2)
204


The OAC has already affirmed its interest in learning what can be "put right the next time around". In considering accountability, the Committee is asking for a more strategic approach to learning and to sharing knowledge based on evidence. The CDB also shares the development goals of the other MDBs, that is, "to end extreme poverty and promote shared prosperity". This means looking for new forms of problem-solving and for ways to create a "development solutions culture". Hence there is an interest in learning from experience and in exchanging knowledge about what works. This implies balancing accountability and learning: making sure they are seen not as opposites, but as compatible entities. This greater emphasis on learning requires a reframing of CDB's thinking and of how it deals with the constructive criticism that evaluation can offer.

Weak evaluation culture

The following excerpt, from a review of UNRWA's evaluation function, illustrates how such cultural weaknesses can manifest themselves in a comparable organisation:

27. While some stakeholders seem keen on evaluation, the overall evaluation culture in UNRWA is weak. There are several aspects to this.

28. First, many of the interviewees stressed that UNRWA has a weak learning culture, which stems from a number of factors. One reason given relates to the cultural value placed on oral communication, which makes conveying documented experiences challenging. Another is language: a majority of UNRWA's national staff are not fluent in English, yet evaluation reports are mostly in English. Furthermore, criticism – even if constructive – is, according to some interviewees, mainly perceived as a threat and not as an opportunity. Finally, learning is also affected by a very basic constraint – lack of time.

29. Second, there is a weak knowledge management system for systematically collecting and sharing experience and lessons learned in UNRWA. UNRWA communities of practice do not exist; several interviewees mentioned using knowledge networks outside UNRWA, i.e. communities of practice managed by other agencies. Accessing evaluation reports is also not easy. The UNRWA website does not provide access to evaluation reports, and while the Agency's Intranet has a site for evaluation reports, it is not a complete repository; the Evaluation Division does not know exactly how many decentralized evaluations are being produced. In addition, there are only a few evaluation plans at the level of field offices or departments.

30. Third, the Panel found that decentralized evaluations are perceived, at least partly, as donor-driven accountability instruments rather than as learning tools. In that sense, evaluations are managed as bureaucratic requirements, thereby weakening the learning dimension.

31. Finally, the sensitive political context in which UNRWA operates may also discourage a strong evaluation culture, as evaluative evidence can sometimes be overridden by political considerations.14 The Panel was repeatedly told that, given the political context, any change is a challenge.

14 An example mentioned to the Panel was the evaluation of the Qalqilya Hospital (2013), which concluded that the Hospital should be closed. However, for political …


Part Three: Recommendations

(The following is a preliminary list of possible recommendations, to be discussed and developed within the Review Panel initially and then discussed together with the OIE.)

- OAC's oversight responsibility needs to be strengthened.
- (Possibly) review the Evaluation Policy to redress gaps.
- OIE to develop a five-year strategy providing a step-by-step work programme, budget and timeframe for implementing the Evaluation Policy.
- Stronger leadership from the President, to provide a climate conducive to promoting the added value of evaluation to overall management and to fostering a culture of critical analysis and learning.
- Stronger support from the Advisory Management Team for the evaluation function, by emphasising CDB managers' accountability for performance results and by devising incentive schemes to support that accountability function.

- Set up committees (advisory groups) to accompany specific evaluation studies as a means of reinforcing ownership.

- The OIE could contribute to developing a learning culture within the CDB by adopting the role of critical friend in its dealings with the CDB operations departments and with the CDB more generally. Valuable insight can be gained from a focused conversation with the client about aspects of the evaluation that went well and those that did not. Some meetings might include all relevant stakeholders in addition to the main client. This feedback can then inform a subsequent internal project-team debrief (e.g. identifying internal professional development or process improvement needs), and input from all perspectives can inform planning for future projects. A summary of these discussions can then be circulated so that everyone can use it for their own internal or external quality improvement purposes.

- Rather than asking "what went wrong?", the conversation can address "what surprised us, what we would do differently, what did we expect to happen that didn't, and what did we not expect to happen that did". This is a better means of getting at the negative aspects without placing blame.

- OIE to train up and engage "champions" within CDB operations departments, to help demonstrate evaluation utility and provide "on-the-job" training in self-evaluation to colleagues.
- A quality control group could also be set up, as Picciotto suggested for DfID, to carry out and develop the work of Quality Assessment and, particularly, Quality at Entry monitoring.

- OIE should be given the resources to build M&E capacity in the BMCs as a priority, possibly in partnership with other MDBs, development organisations and the countries themselves.

- Possibly criticise the over-emphasis on using the five DAC criteria, which have been in use for more than 15 years without any major revision. Given their importance and level of influence in the field, it is pertinent for independent professionals to take a critical look at them, especially since many scholars and practitioners consider that the quality of evaluations in development aid has been quite disappointing (Thomaz Chianca, "The OECD/DAC Criteria for International Development Evaluations: An Assessment and Ideas for Improvement", Journal of MultiDisciplinary Evaluation, Vol. 5, No. 9, March 2008, http://evaluation.wmich.edu/jmde/).

- Regarding the validation of self-evaluations: it is recommended that a leaner format be developed for the PCVRs, repeating less of the original PCR text and focusing on the validity of the PCR against a limited number of key issues to be assessed for each PCR. The PCVRs are also written out in extensive prose, which may not be needed for this type of document; a more tabular format, with more succinct statements, could lead to a leaner production process without any loss of usefulness. The "PCR checklist" would be a good starting point for this.

- The link between self-evaluations, validations and independent evaluation – and between self-evaluations and the QaE documents – is not clear at present, so one wonders what all the effort on the operations side is for. This is a real issue: operations seem to do a lot of interesting and creditable work, but there is a lack of coherence. (This observation is based on documentary review only, without interviews to give a broader picture.)

- The EIB evaluation unit was criticised for the same thing in the past. It has since started to include "younger" (sometimes still ongoing) projects in its samples, and it redoes the portfolio analysis just before finalising the report to see whether things have changed; the services can, of course, also indicate in their response whether things have changed over time.

Recommendations for improving the process for study approval and funding

- Give recommendations on priorities for OIE work.

- Funding should come preferably from the administrative budget. Unused monies could then be released in the annual budgetary reviews, but this should have no effect on the budget for subsequent years. SDF funding at a level … It is surprising that a Board-approved OIE work programme and budget should prove inadequate; either the proposed budget per work programme …


The Panel nevertheless encourages the creation of such a quality control unit. Its role cannot be fulfilled by the OIE, as it lies outside the OIE's scope and present capacity, even though the OIE could play an advisory/methodological role.

Independence of the Office of Independent Evaluation (OIE)

Independence is absolutely central to the integrity and trustworthiness of evaluation. It is an agreed requirement within the development agencies and in the evaluation community as a whole. In examining the issue of independence and good practice, the reviewers are guided by the Evaluation Cooperation Group's recommendations on good practices, by the CDB's Evaluation Policy and by the 2011 consultancy review of independence relative to the CDB's evaluation and oversight division.205 The appraisal is based on a comparison of the ECG's recommendations on independence206 with the current OIE status.

OIE and Independence: Recommendations from the Evaluation Cooperation Group (ECG)

The ECG considers the issue of independence according to four specific areas: organisational (structural) independence; behavioural (functional) independence; protection from outside interference (operational independence); and conflict of interest safeguards.

Organizational independence ensures that the evaluation unit and its staff are protected against any influence or control by senior or line management, have unrestricted access to all documents and information sources needed for conducting their evaluations, and that the scope of the evaluations selected can cover all relevant aspects of their institution.

Behavioural independence generally refers to the evaluation unit's autonomy in setting and conducting its work programme and in producing quality reports that can be delivered without management interference.

Protection from outside interference refers to the extent to which the evaluation function is autonomous in setting its priorities, conducting its studies and processes, reaching its judgments, and managing its human and budget resources without management interference.

Conflict of interest safeguards refer to protection against staff conflicts of interest – be they current, immediate, future or prior professional and personal relationships and considerations, or financial interests – for which there should be provision in the institution's human resource policies.

The OIE’s Independence in Practice

Organisational / structural independence

On the whole, the Panel acknowledges and commends the efforts made by the CDB to assure OIE's organisational independence. The CDB's Evaluation Policy provides for the OIE's organisational independence from line management, and the interview data suggest that there is also wide acceptance and acknowledgement of why the OIE should have such independent status. Table 1 below provides our overall assessment of this aspect of OIE's independence compared with the ECG recommendations.207

205 Osvaldo Feinstein & Patrick G. Grasso, Consultants (May 2011) Consultancy to Review the Independence of the Evaluation and Oversight Division of the Caribbean Development Bank.
206 ECG (2014) Evaluation Good Practice Standards, Template for Assessing the Independence of Evaluation Organizations, Annexe II.1.
207 Based on ECG (2014) Template for Assessing the Independence of Evaluation Organizations, Evaluation Good Practice Standards, Annexe II.1.


Table 1: OIE organisational independence compared with ECG recommendations
(Columns: Aspect | Indicator | CDB Evaluation Policy (EP) and Practice)

Aspect: The structure and role of the evaluation unit
Indicator: Whether the evaluation unit has a mandate statement making clear that its scope of responsibility extends to all operations of the organization, and that its reporting line, staff, budget and functions are organizationally independent from the organization's operational, policy and strategy departments and related decision-making
EP and Practice: Partially complies – the Policy is broad enough to cover the full range of MDB types of evaluation. In practice, however, this would not be possible without additional human and budget resources.

Aspect: The unit is accountable to, and reports evaluation results to, the head or deputy head of the organization or its governing Board
Indicator: Whether there is a direct reporting relationship between the unit and a) the Management, and/or b) the Board, or c) the relevant Board Committee of the institution
EP and Practice: Complies – OIE reports to the Board of Directors (BoD) through its Oversight Assurance Committee (OAC).

Aspect: The unit is located organizationally outside the staff or line management function of the program, activity or entity being evaluated
Indicator: The unit's position in the organization relative to the program, activity or entity being evaluated
EP and Practice: Complies – the OIE is located outside, and is therefore independent of, CDB line management.

Aspect: The unit reports regularly to the larger organization's audit committee or other oversight body
Indicator: Reporting relationship and frequency of reporting to the oversight body
EP and Practice: Complies – the OIE reports five times per year to the OAC. Board approval for an additional executive meeting between the Head of the OIE and the OAC at least once per year was given in October 2015.

Aspect: The unit is sufficiently removed from political pressures to be able to report findings without fear of repercussions
Indicator: Extent to which the evaluation unit and its staff are not accountable to political authorities, and are insulated from participation in political activities
EP and Practice: Complies.

Aspect: Unit staffers are protected by a personnel system in which compensation, training, tenure and advancement are based on merit
Indicator: Extent to which a merit system covering compensation, training, tenure and advancement is in place and enforced
EP and Practice: Partially complies – staff are covered by CDB human resource policy. However, the skill needs of OIE staff ought to be regularly reviewed in light of its move towards higher-level evaluations. Appraisal of skill needs and hiring of relevant staff should be completely under the authority of the Head of Evaluation; this is not sufficiently clear in the Policy or other documents we reviewed.


Aspect: Unit has access to all needed information and information sources
Indicator: Extent to which the evaluation unit has access to the organization's a) staff, records and project sites; b) co-financiers, other partners and clients; and c) programs, activities or entities it funds or sponsors
EP and Practice: Complies – the available evidence suggests that there is no reason to doubt such access. However, systematic and easily accessible documentation is lacking in the CDB; it is one of its weak points, and delays in getting hold of the relevant documents can affect the timeliness of evaluation studies.

However, independence should not mean isolation. There appears to be a detachment between the OIE and the CDB that is of concern to the Panel: on the one hand, between the OIE and operations staff; on the other, in the structural arrangements between the OIE and senior management.

19) In agreeing for the OIE to concentrate on strategic and thematic in-depth evaluations, responsibility for project monitoring and evaluation was handed over to operations. The division is clear and respected. However, it has its drawbacks. With the OIE no longer systematically involved at the front end of project design, monitoring data needs are likely to be poorly defined, and weak monitoring data will contribute to weaker evaluations. (More on this point under the heading "Self- and Independent Evaluations".)

In the reviewers' opinion, it is a common misunderstanding to assume that providing evaluator advice on monitoring and evaluation data will compromise evaluator independence. On the contrary, evaluation input into project design is essential to ensure that the logic, indicators and data needs are addressed so that, at some future point in time, an evaluation of the achievements can be empirically grounded.

This is not to say that the OIE no longer has any influence at the front-end design stage; the point of focus has merely shifted. The OIE now systematically provides such input more generally to the corporate planning teams for the tools and systems they are developing to support the MfDR framework. The monitoring data for projects and their implementation should improve once the Project Performance Evaluation System (PPES) and the Portfolio Performance Management System (PPMS) are updated and operational.

20) In the second place, the OIE has limited formal access to the Advisory Management Team (AMT) weekly meetings, where the President and senior management gather to exchange up-to-date information on the dynamics of CDB policy and practice. The OIE is not regularly invited to these meetings in any capacity, nor given a copy of the agenda or minutes; it is occasionally invited to attend in order to discuss an evaluation report or management feedback. This means that the OIE is unlikely to pick up on the "when" and "what" of key decisional issues, or to provide input into the discussion based on evaluative information. Its observer status at Loans Committee meetings, or its role as participant informer at OAC and BoD meetings and discussions, does not necessarily provide the same insight into the dynamics of management actions and decisions.

In response to this situation, the President has agreed to meet regularly with the Head of the OIE in order to keep him up to date with CDB strategic thinking. This is a welcome change.

OIE Independence and Behavioural Issues

The Panel has concerns about some behavioural issues. For example, through both the interviews and the documentary review, we learned of considerable delays in processing the independent evaluation reports as well as OIE's validations of the CDB's self-evaluations. Delays generally arise in receiving feedback on the independent reports, first from the relevant operational department and then from the AMT, and then in providing the OIE with a management response, which is initially drafted by operations staff before being reviewed by the AMT. (OIE reports cannot be submitted to the OAC without the relevant management response.) This two-layer process for preparing submissions to the Board is inefficient and could become a threat to evaluation's independence by delaying OIE's timely reporting to the OAC.

OIE validations of the CDB self-evaluations are also submitted to the OAC, and it is in both sides' interest to clear up any misunderstandings beforehand. Despite attempts to improve the timeframe for completing these validations, delays are more the norm than the exception. Table 2 below summarises our assessment of the behavioural aspects of independence.

Table 2: OIE and Behavioural Independence
(Columns: Aspect | Indicator | CDB Evaluation Policy (EP) and Practice)

Aspect: Ability and willingness to issue strong, high-quality and uncompromising reports
Indicator: Extent to which the evaluation unit: a) has issued high-quality reports that invite public scrutiny (within appropriate safeguards to protect confidential or proprietary information and to mitigate institutional risk) of the lessons from the organization's programs and activities; b) proposes standards for performance that are in advance of those in current use by the organization; and c) critiques the outcomes of the organization's programs, activities and entities
EP and Practice: Partially complies – a paucity of data and documentation sometimes hinders the quality of reports. The OIE emphasises the learning part of evaluation and is cautious in its criticism, recognising that management is going through a transitory stage and can still be overly defensive.

Aspect: Ability to report candidly
Indicator: Extent to which the organization's mandate provides that the evaluation unit transmits its reports to the Management/Board after review and comment by relevant corporate units, but without management-imposed restrictions on their scope and comments
EP and Practice: Partially complies – reporting to the Board is sometimes compromised by delays in the review/comment process between the OIE and the CDB. Any delay in the production of a Management Response also impairs the timely submission of a report to the Board, since the two have to be submitted together.

Aspect: Transparency in the reporting of evaluation findings
Indicator: Extent to which the organization's disclosure rules permit the evaluation unit to report significant findings to concerned stakeholders, both internal and external (within appropriate safeguards to protect confidential or proprietary information and to mitigate institutional risk); who determines the evaluation unit's disclosure policy and procedures (Board, relevant committee, or management)
EP and Practice: Partially complies – the OIE conforms to the CDB's disclosure policy. However, the dissemination of evaluation findings currently appears to be restricted to website publication and reports to the Board. A more targeted communication strategy, including other key stakeholders (e.g. project implementers in the BMCs), should be developed and put in place.

Aspect: Self-selection of items for work program
Indicator: Procedures for selection of work program items are chosen, through systematic or purposive means, by the evaluation organization; consultation on work program with Management and Board
EP and Practice: Complies – the OIE ensures that its work program is drawn up after consultation with both CDB Management and Board, to seek their input on relevant topics and themes.

Aspect: Protection of administrative budget, and other budget sources, for evaluation function
Indicator: Line item of administrative budget for evaluation determined in accordance with a clear policy parameter, and preserved at an indicated level or proportion; access to additional sources of funding with only formal review of content of submissions
EP and Practice: Partially complies – the administrative budget supporting OIE work is protected. Access to additional sources of funding is possible if well argued and justified, but the approval process is complex and inefficient. (See Figure 1 below.)

OIE and Protection from External influence or interference

Our overall assessment is provided in Table 3 below. The OIE's independence in the design, conduct and content of its evaluations does not appear to be subject to any external interference. However, securing funding from sources outside the OIE's administrative budget, i.e. from the Special Development Fund, is an unduly complex and long process. As such, we consider that the current funding process can affect the OIE's choice of the type of evaluations it can undertake. (See Figures 1 and 2 below.)

Table 3: OIE and its Independence from External influence or interference
(Columns: Aspect | Indicator | CDB Evaluation Policy (EP) and Practice)

Aspect: Proper design and execution of an evaluation
Indicator: Extent to which the evaluation unit is able to determine the design, scope, timing and conduct of evaluations without Management interference
EP and Practice: Complies – though within the limits of the restricted human and financial resources available.

Aspect: Evaluation study funding
Indicator: Extent to which the evaluation unit is unimpeded by restrictions on funds or other resources that would adversely affect its ability to carry out its responsibilities
EP and Practice: Partially complies – the OIE must work within the limits of the agreed administrative budget wherever possible; if additional resources are needed for studies, it must seek alternative funds elsewhere. The budget limitations can have an effect on the type of evaluations undertaken and therefore on its independence in terms of choice.

Aspect: Judgments made by the evaluators
Indicator: Extent to which the evaluator's judgment as to the appropriate content of a report is not subject to overruling or influence by an external authority
EP and Practice: Complies – the evidence available suggests that the Board and Management accept the evaluators' independent interpretation and conclusions; Management responses are agreed to be the accepted place to raise any difference of opinion.

Aspect: Evaluation unit head hiring/firing, term of office, performance review and compensation
Indicator: Mandate or equivalent document specifies procedures for the a) hiring and firing, b) term of office, c) performance review, and d) compensation of the evaluation unit head that ensure independence from operational management
EP and Practice: Complies – the Head of OIE is appointed by the CDB President, in agreement with the OAC, for a five-year period renewable once. The Head can be removed from office by the President or the Board, but only with the agreement of both parties. However, the Head reports to the President for all administrative and personnel matters. Even though this was not recommended in the Osvaldo Feinstein & Patrick G. Grasso report on independence in 2011, the BoD accepted CDB's reasons for keeping this arrangement (e.g. most OAC members are non-resident and cannot oversee day-to-day work).

Aspect: Evaluation unit control over staffing
Indicator: Extent to which the evaluation unit has control over a) staff hiring, b) promotion and pay increases, and c) firing, within a merit system
EP and Practice: Partially complies – all OIE staff members are treated in the same way as other CDB staff. The Head has limited control over the hiring, firing or promotion of OIE staff.

Aspect: Continued staff employment
Indicator: Extent to which the evaluator's continued employment is based only on reasons related to job performance, competency or the need for evaluator services
EP and Practice: Partially complies – whilst the EP is clear about procedures for hiring, firing and promotion, all of which must conform with CDB human resource policy, nothing is said about how a difference of opinion between the CDB and the Head of the OIE would be resolved with regard to continued staff employment, should the level of technical or interpersonal competencies needed to meet new demands change.

Avoidance of Financial, Personal or Professional conflicts of interest

This particular aspect refers to the organisation's Human Resources Policy; there must be provisions in place to protect against actual or potential conflicts of interest. The Panel requested, via the OIE, evidence from Human Resources of any such provisions but did not receive an answer. It must be assumed that this aspect of independence, past or present, does indeed form part of normal CDB Human Resource Policies.

To conclude: the Panel is impressed with the measures the CDB has taken to assure the organisational independence of the OIE. Its independent status is accepted and respected by senior and line management. The OIE's budget, however, is not independent of the overall CDB administrative budget, and this affects its choice of evaluation types and approaches. Some of the behavioural issues affecting independence are also of concern, especially the delays in the exchange of documents between the OIE and operations departments, which have a direct effect on timely reporting to the OAC. As for protection from outside interference, our concerns largely relate to OIE's independence over staffing issues; there are potential loopholes in the current arrangements that could undermine OIE's autonomy over its staff.

OIE's Strategy, Work Practices and Work Programme

The OIE has had to develop a plan to implement the Evaluation Policy. This raises such questions as: what are the priorities, and what is the timeframe for achieving which activities? These were partially addressed in the OIE work programme and budget 2012 to 2014, but that proved to be over-ambitious. Much of the period 2012 to 2015 was therefore taken up with preparing OIE's shift in focus from project-based evaluations to high-level thematic and in-depth strategic studies. This has meant adopting a three-way approach: (1) for self-evaluations, reducing its time input to supporting the process; (2) for independent evaluations, taking stock of the gaps in coverage and expertise; and (3) networking to share experiences with centres of expertise and to align the OIE with international practices. In addition, amongst other duties, it has been supporting the development of MfDR tools and systems, such as the Project Performance Assessment System, by providing advice and input on programme logic and monitoring needs. The OIE plans to conduct two to four high-level studies per year from 2016. It has also chosen to increase the involvement of its professional staff in conducting independent evaluations. Outsourcing is still needed when a study is funded by the SDF, when time is limited and when specific expertise is required.


However, the plans appear to place little emphasis on the activities associated with evaluation management (e.g. knowledge management) and the time they require. Other time demands mentioned in the previous sections, such as delays in completing reports and validation work, have also affected OIE's plans. The more recent work plans set the task of delivering utility-focused and timely evaluations, but they lack clarity on how the OIE proposes to surmount the time and data issues, which are far from new. In short, they lack a theory of change and a timeline. The challenges that have to be dealt with to enable the OIE to move up the MDB evaluation pyramid,208 not least given the limited resources available, are brought out in the remaining sections of this Review.

To conclude: the OIE has made a first step in proposing a strategy for establishing itself as an independent evaluation resource. But its strategy lacks a theory of change and a prioritisation of tasks, which should include more emphasis on evaluation management activities.

The Value / Usefulness of OIE's Independent Evaluations

Evaluation is a powerful tool that can provide useful, evidence-based information to help inform and influence policy and practice. But useful evaluations depend not only on the evaluators' skills; they also depend on several other important factors: (1) planning evaluations to be relevant to the priorities of the organisation's work and to deliver their results in time to be useful; (2) the degree of consultation with, and ultimately ownership by, those who seek evaluative information; (3) the tools used to support the evaluation process per se; and (4) the credibility and quality of the evaluation products.209

1. Planning relevant and timely evaluations

The OIE is now working on a three-year rolling work plan that sets out the broad areas for enquiry. So far, there are no agreed criteria for selecting the specific topics for independent evaluation, although the priorities tend to reflect those of the CDB's strategic plan. Decision-making is therefore rather arbitrary, based on a process of dialogue between the OIE and the CDB, and between the OIE and the Board.

One of the OIE's two objectives for 2015, therefore, was to define a work plan and agree priorities based on a "utilisation-focused" approach. This means that studies are selected and planned to be relevant and useful to the organisation's needs.

The OIE has achieved this objective with respect to its latest studies: the Special Development Fund (SDF) Multicycle 6&7 Evaluation, the Haiti Country Strategy evaluation and the evaluation of the CDB's Policy Based Operations. Each of the three has been planned to deliver its results in time to provide the CDB Board of Directors with relevant information for negotiating the next round of funding. In spite of some delays due to a myriad of reasons, not least the extra effort needed to secure essential data, the studies are expected to deliver on time.

The processes for agreeing OIE's work plan and specific evaluations on the one hand, and for securing alternative funding on the other, are shown in Figure 1 below. The Panel was surprised to learn how bureaucratic (in its internal approval process) and inefficient (in view of the time it takes) the process is. The concern here is that such a process could pose a threat to assuring the Board of "timely studies".

208 US Treasury Report to Congress on Evaluation Standards and Practices at the Multilateral Development Banks (September 2014, Annex C).
209 These aspects reflect the principles and good standards of the Evaluation Cooperation Group and the evaluation community more generally.


Figure 1: Selection of Evaluation Topics and Funding Source

[Flowchart. Topic-selection track: consultation with CDB Operations and the OAC/Board on the evaluation topic; OIE drafts the Terms of Reference / Approach Paper; internal review of the Approach Paper; detailed ToR, or final Approach Paper if sufficiently detailed; Approach Paper finalised and submitted to the OAC/Board for approval (recorded in OAC minutes); annual OIE report and work plan submitted to the OAC; OIE selection and contracting of consultants (if any). Funding track: the three-year Work Programme and Budget is approved by the Board; studies are funded from the OIE administrative budget or from the SDF; for SDF funding, a TA Paper is prepared (content similar to the Approach Paper but in a different format) and approved by the internal Loans Committee; Board approval is necessary if above USD 150,000, Board notification only if USD 150,000 or below.]


2. Consultation and ownership

"The credibility of evaluations depends to some degree on whether and how the organization's approach to evaluation fosters partnership and helps build ownership and capacity in developing countries." (ECG good practices)

The OIE engages with the OAC, CDB senior management and operations in agreeing its three-year work plan and then in selecting the specific topics and themes. It also discusses the evaluation approach paper (the design and implementation plan) with the CDB and the OAC before completing the final version. However, preliminary and final drafts of the report are submitted only to CDB line and senior managers, for comment and correction of factual errors; only final versions are given to the OAC. A series of discussions is held on the results and their implications, first with the CDB and then with the OAC. Discussions with the OAC are more limited, due to the overburdened agenda of OAC and Board meetings, as previously discussed.

In short, the OIE is to be commended for following professional good practice and standards on participative approaches; it has succeeded in introducing a modus operandi that involves the key players in the selection of evaluation topics, the evaluation designs and their results. Figure 2 below provides an overview of the evaluation implementation and stakeholder engagement processes.

Figure 2: Evaluation Study Implementation and Feedback Loops

[Flowchart. Three delivery arrangements are shown: (A) fully outsourced to external consultants, with oversight by the OIE; (B) conducted by OIE staff; (C) conducted jointly by external consultants and the OIE. The common sequence runs: Terms of Reference; preparations (detailed evaluation plan, including tools, timeline and logistics); production of an Inception Report / Approach Paper; data collection and analysis; presentation/workshop of interim findings and conclusions for immediate feedback and validation, with a summary and slides for discussion with the CDB; submission of the Draft Final Report to the OIE; review loops between the OIE and the CDB (potentially also the BMC); submission of the Final Report to the OIE and OIE approval; the final OIE-approved report goes to CDB Senior Management for a Management Response; the AMT considers the Final Report and Management Response; both are submitted together to the OAC/Board for endorsement; feedback to the evaluation lead; preparation for disclosure and dissemination.]


Notes to Figure 2

17. The OIE informed the Panel that this is an abbreviated version: there are, for example, additional steps (secondary processes) when evaluations are procured (by tendering or single source), additional review loops, updates to the OAC, etc.

18. The Panel was informed that the OAC may also decide to return the report to the OIE, or to demand specific actions from Management based on the report.

This process is engaging and appears to have secured senior management and OAC interest and buy-in, as witnessed in the latest studies. But there is a downside too: the process takes much time and is, in our view, partly unnecessary. The Panel appreciates that staff from operations as well as the AMT may want to confer on an appropriate management response, but this should not be the case for reviewing an independent report for factual errors. The two-phase approach seems inefficient.

Contact between the OIE, the CDB and/or the OAC during study implementation is most often restricted to the occasional progress report, particularly when studies run behind time. There is no "accompanying group" for individual studies, which would include internal and possibly external partners. Such advisory groups have shown their worth in a number of contexts, improving buy-in and providing strategic input as well. The OIE does, however, arrange discussions for reflecting on emerging findings, though we are not sure how systematic this feedback loop is.



More generally, outside of an evaluation study the OIE has limited dealings with operations. It has an advisory role, particularly in providing training, guidelines and tools to support self-evaluations. We are nevertheless concerned about the seeming distance between the two and how this has affected the perceived value of evaluation. (For more on this point, see the section below on "Self- and Independent Evaluations".)

The Panel wishes to stress, however, that this is not the case for newly appointed senior managers. A much more open attitude to evaluation, and appreciation of its potential value, was evident; they expressed interest in drawing out important lessons on what works, how, for whom and under what conditions. In one case, interest was followed up in practice: the OIE was recently invited by a senior manager to share evaluative knowledge and experience with his staff regarding policy based operations.

Overall, we can certainly say that the key stakeholders within the CDB are sufficiently integrated into the evaluation process to foster their buy-in and ownership. More generally, though, we feel that the utility of independent evaluations can be improved by fostering a supportive climate that wants to learn through calculated trial and error; the constructive criticism that evaluation offers can add value to understanding the strengths and weaknesses of such strategies. This, however, cannot be achieved overnight.

3. Tools to support the evaluation process

So far, during this transitional phase, the OIE has mainly focussed on improving the tools that support the operations areas' self-evaluations. This has left the OIE with little time to produce checklists or tools to support its own studies. There are plans to develop an OIE Manual to guide and support the independent evaluation process. Such plans should be encouraged, as these documents will form a very important part of training, particularly for newcomers to the OIE team.

In the meantime, the OIE and operations staff refer to the Performance Assessment System (PAS) Manuals for evaluation activities. The manuals are based on the DAC criteria and ECG principles. Much emphasis is given to the rating system and to how and what should be rated. However, we find the manuals lengthy, unwieldy and overcomplicated. Moreover, such manuals should be used for reference; they cannot and should not replace first-hand training in how to plan, conduct and manage the evaluation process.

Quality Assessment (QA) and Quality at Entry (QaE)

There was a transition period between 2012 and 2014 to establish the OIE. Work on the PAS, QaE, PCRs and ARPP, which had started earlier, was therefore completed after the OIE came into existence, but it effectively had no formal "home" in operations. The Panel was told that there had been some discussions about creating a Quality Assurance unit within CDB (OPS), but the current status is unclear.

The QaE Guidance Questionnaire was developed earlier and completed by the OIE. It was used to assess the documents that came to the OIE for comment at the review stage; the results were then sent to the Portfolio Manager/Project Coordinator, indicating any gaps or issues that needed to be addressed or clarified. QaE Guidance Questionnaires were developed for all the Bank's lending products and CSPs, and to assess the quality of supervision.

After the QaE was launched bank-wide, several operations officers saw the merit of using the QaE Guidance Questionnaire in the field and adopted it as a tool during appraisal missions, to cross-check and test their data collection and analysis.

OIE's use of the QaE was discontinued in 2014, owing to limited resources and a stronger focus on evaluations. It still comments on specific appraisals, but very selectively.


Both QaE and QaS (Quality at Supervision) are also addressed in the PAS Manuals. In addition, the QaE and PAS have been incorporated into Volume 2 of the Operations Manual (OPPM).

The Review Panel assessed the QaE forms. They are relatively standard, adapted to the specificities of the CDB, and they contribute to judging a project's expected quality in a relatively objective way. As such, they are helpful as a benchmark in the ex-post assessment of projects.

The Panel considers the lack of an established Quality Unit in the CDB (independent of the OIE) a weakness that should be addressed in the near future.

4. Credibility and Quality of Evaluation Products

As in many other MDBs, evaluation activities include both independent and self-evaluations; the latter are the completion reports on operational projects and country strategy programmes, done by operations staff. The OIE then validates the quality of these reports. The self-evaluations should inform the more strategic studies conducted independently by the OIE. (More on the relationship between the two is provided later in this Review.)

An independent evaluation is processed as follows: the OIE prepares an Approach Paper (AP) for approval by the OAC. If the study is to be outsourced, the AP becomes the basis for the Terms of Reference (ToR), which, subject to the size of the budget, may be put out to tender. The contracted evaluator then prepares an Inception Report (IR) after some desk and field research has taken place; this intermediary report is not produced when the OIE itself conducts the evaluation. Sometimes a Progress Report is submitted, but otherwise the next stage is the delivery of the final report in successive drafts. (Assessments are like evaluations, but more limited in scope and depth of analysis.)

Since 2012, the OIE has produced a range of studies and approach papers. This review is based on those listed in Table 4 below, as provided by the OIE, covering the period from May 2012 to December 2015: three evaluations, four assessment studies, 14 validations of self-evaluations and three Approach Papers for upcoming evaluations.

Table 4: List of studies (N = 24) submitted to the Board for the period January 2012 to 31 December 2015

Board Meeting 251 (May 2012):
- Ex-Post Evaluation Report on Road Improvement and Maintenance Project, Nevis – St. Kitts and Nevis
- Validation of Project Completion Report on Sites and Services – Grenada
- Assessment of Effectiveness of Implementation of Poverty Reduction Strategy 2004-09

Board Meeting 253 (Oct. 2012):
- Assessment of Extent and Effectiveness of Mainstreaming Environment, Climate Change, Disaster Management at CDB

Board Meeting 254 (Dec. 2012):
- Assessment of the Implementation Effectiveness of the Gender Equality Policy and Operational Strategy of the Caribbean Development Bank
- Validation of Project Completion Report on Enhancement of Technical and Vocational Education and Training – Belize
- Validation of Project Completion Report on Fourth Road (Northern Coastal Highway Improvement Section 1 of Segment II) Project – Jamaica
- Assessment of the Effectiveness of the Policy-based Lending Instrument

Board Meeting 256 (May 2013):
- Validation of Project Completion Report on Expansion of Grantley Adams International Airport – Barbados
- Validation of Project Completion Report on Fifth Water Supply Project – Saint Lucia

Board Meeting 261 (May 2014):
- Validation of Project Completion Report on Immediate Response Loan, Tropical Storm Gustav – Jamaica
- Validation of Project Completion Report on Social Investment Fund – Jamaica
- Validation of Project Completion Report on Disaster Mitigation and Restoration – Rockfall and Landslip – Grenada

Board Meeting 263 (Oct. 2014):
- Validation of Project Completion Report on Basic Education Project – Antigua and Barbuda
- Approach Paper for SDF 6 & 7 Multicycle Evaluation

Board Meeting 264 (Dec. 2014):
- Validation of Project Completion Report on Policy-Based Loan – Anguilla
- Validation of Project Completion Report on Immediate Response Loan, Tropical Storm Arthur – Belize
- Evaluation of Technical Assistance Interventions of the Caribbean Development Bank Related to Tax Administration and Tax Reform in the Borrowing Member Countries 2005-2012

Board Meeting 265 (March 2015):
- Approach Paper for the Evaluation of Policy Based Operations

Board Meeting 266 (May 2015):
- Validation of Project Completion Report on Upgrading of Ecotourism Sites – Dominica
- The Evaluation of the Caribbean Development Bank's Intervention in Technical and Vocational Education and Training (1990-2012)

Board Meeting 267 (July 2015):
- Validation of Project Completion Report on the Belize Social Investment Fund I Project – Belize

Board Meeting 268 (Oct. 2015):
- Approach Paper for the Country Strategy and Programme Evaluation, Haiti

The review and analysis of these documents is based on the UNEG Quality Checklist for Evaluation Reports (http://www.uneval.org/document/detail/607) as well as on ECG guidance (Big Book on Good Practice Standards).

Approach Papers

Three Approach Papers (APs) were made available to the Panel (see Table 4 above). An AP describes the rationale for the evaluation, the background to the topic evaluated, the evaluation framework (criteria and questions) and the approach. It also describes the team and provides an initial plan. As the first main deliverable of OIE's evaluation process, APs are the starting point and therefore a major determining element in the roll-out of each evaluation; they "have to get it right".

The APs examined are clearly written, well structured and of reasonable length.210 We were surprised to find, however, that they do not make explicit the objectives of the evaluated intervention(s), e.g. through a clear objective tree, an explicit theory of change, an intervention logic or a logframe. Whilst one of the APs contains, in an appendix, a results framework for the evaluation, the results framework for the intervention itself (the PBO) is lacking.

Inception reports

Only one Inception Report was given to the Panel for review (SDF 6&7). It gives an in-depth description of the evaluated programme and provides a clear Theory of Change. It is good practice that this is established after a pilot field mission, which helps to amend the initial AP on the basis of field observations and to sharpen the evaluation questions if needed.

210 Opportunities remain, of course, to be more concise and to move parts to appendices, e.g. detailed descriptions of the evaluation team or part of the description of the evaluated intervention.

However, it is still considered good practice to have the Theory of Change elaborated in the initial design documents; this would facilitate OIE evaluations after project completion. Establishing the Theory of Change of any intervention could be included more explicitly in the QaE form, to be developed between the Quality unit referred to above and the OIE.

Evaluations and Assessments

Three evaluations and four assessment reports completed during the review period were considered. Assessments are similar to evaluations but have a narrower scope; they focus on a limited set of aspects or judgment criteria, mainly effectiveness, i.e. the achievement of objectives. Evaluations generally base their judgment on the internationally recognised DAC criteria, as well as on aspects of the CDB's and the BMC's management of the intervention.

In general, these reports are of reasonable quality. In the main, they explain the evaluated object211 and state the evaluation objectives. The findings are organised around the evaluation criteria or questions detailed in the scope and objectives section of the report, and are based on evidence derived from the data collection and analysis methods described in the methodology section. The reports openly discuss the limitations that the evaluation encountered, without becoming defensive. In one case (the PBL Assessment) the report starts with a summary of reviews of the topic done by other MDBs; this was a pleasant surprise and indeed a good practice that could well be adopted in future evaluations too.

However, the reports also show several significant weaknesses:

- Reports do not always provide clear (reconstructed) intervention logics or theories of change for the intervention(s) evaluated.212 Evaluation criteria and questions are defined at a fairly general level. They are translated into more precise "research questions" (in an "Evaluation Design Matrix", for each project and each criterion). However, it is unclear how these questions relate to the intervention logic, as this is not made explicit. This may be done in inception reports (of which, as noted above, only one was available for review), but it should also be done in the final reports.

- The reports do not describe the link from the evaluation questions to the answers, how the evaluation judgments are made, and how these ultimately translate into ratings for each criterion and each project. In other words, the explanation provided in the evaluation frameworks is inadequate: the “evaluation design matrix” currently used does not provide sufficient insight into how an intervention’s performance is ultimately judged. Links between findings, conclusions and recommendations could be improved by making this more explicit. Reports should include the story of how the evaluand is credibly linked to any observed outcomes and impacts, and should be clear on how causal claims are made.

- With the exception of the PBL Assessment, reports are lengthy and detailed. One reason for this is an over-emphasis on ratings: their detailed discussion, project by project and criterion by criterion, occupies a very prominent position in the main body of the evaluation reports. Although ratings are traditionally an important element in MDB evaluations, too strong an emphasis can be tedious and may distract the reader from the real lessons to be drawn. The detailed discussion of ratings, and their evidence base, would be better placed in an appendix, with a brief summary in the main report. This would give the lessons and recommendations a more prominent position than is now the case. It would also make the evaluation reports not only shorter but also more interesting to read, which could help add value to evaluation’s image within the organisation.

211 Sometimes at great length: in the SDF 6&7 multicycle evaluation report, for instance, the findings only begin on page 30.
212 Again with the SDF 6&7 evaluation, it is said to be guided by a “Logic Model” which is not explained.

- The reviewers feel that the OIE evaluations tend to over-emphasise objectives-based evaluation214 and the DAC criteria, to the exclusion of other evaluation approaches such as Developmental Evaluation (Patton, 2010215); evaluation should be case-specific and answer the actual information needs of managers and other decision makers rather than always concentrating on final performance.

- Related to the previous point (and again with the exception of the PBL Assessment), the executive summaries (approximately 8 pages) are too long. To increase an evaluation report’s potential impact, they would need to be reduced to 2-3 pages and be more focused; again, this could be done by dwelling less on the individual ratings of projects and more on key findings, lessons and conclusions. More generally, reports could be better adapted to the needs of their different audiences. Although not strictly limited to evaluations, the Health Evidence Network reports216 are a model that could be adapted for evaluation reporting purposes; they are specifically geared towards addressing policy- and decision-making.

- The “Recommendations to BMCs” are an interesting feature of the reports. Although we are unsure to what degree such recommendations can be effectively followed up by the OIE or the Bank, they could certainly be taken up with BMC Board members.

- Reports (e.g. the evaluation report on Technical Assistance) focus heavily on technical problems encountered during the evaluation (see, for instance, the “Limitations” section of the TA report). Although these are important issues, to improve the reports’ flow and readability this material would be better placed in an appendix. What counts is the story of the intervention, not the story of the evaluation.

OIE Validations of Project and Country Strategy Programme Completion Reports (referred to collectively as PCRs hereafter)

As noted above, the OIE has the mandate to validate the Projects and Economics Departments’ PCRs and CSPCRs. However, in this period of transition, much of the OIE’s work since 2012 has been devoted to the backlog of CDB self-evaluation validations. In theory, an estimated 15 completion reports are due each year, but delays in submitting the reports for validation are commonplace. With the change of Head in June 2014, the OIE therefore secured the OAC’s agreement to reduce the number of validations to a maximum of 6 per year. Even so, the backlog continues to accumulate, as only 2 PCRs were given to the OIE for validation in 2015.
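A rough illustration of the backlog arithmetic may help here; the figures simply restate the estimates above and assume that all reports due are actually submitted:

\[ 15 \ \text{(reports due per year)} \; - \; 6 \ \text{(validation ceiling per year)} \; = \; 9 \ \text{unvalidated reports added per year.} \]

In 2015 the binding constraint was in fact upstream – only 2 of the estimated 15 reports reached the OIE – so the backlog is currently accumulating on the operations side rather than in the validation queue.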

The validations tend to repeat the different items reported in the PCRs and then provide extensive comment on each. The PCVRs go into great depth and detail, which makes the documents rich and complete. This is their strength – but also their weakness. The depth and level of detail, as well as the repetition from the original PCRs, make the PCVRs (overly) lengthy (20-40 pages) and difficult to read. The OIE reported spending approximately 27.2% of its time on validating PCRs in 2015, compared with 44.4% on its core work, i.e. doing or managing the higher-level evaluations; in other words, time spent on validations amounts to more than half of the time spent on core evaluation work. Finally, the PCVRs now seem to be, to a great extent, a standalone output of the OIE. It is not always clear to us how they are being used as the “building blocks” for the OIE’s independent evaluations. Making this clearer in the independent evaluations would help show the link, and therefore the value of the time being spent on the self-evaluation validations.

214 The focus of an objectives-oriented evaluation is on specified goals and objectives and on determining the extent to which these have been attained by the relevant intervention. See, for example, Worthen, Sanders and Fitzpatrick (1997) Program Evaluation: Alternative Approaches and Practical Guidelines (2nd ed.), White Plains, NY: Addison Wesley Longman.
215 Patton, M.Q. (2010) Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use, Guilford Press.
216 See the reports available from the WHO’s Health Evidence Network at http://www.euro.who.int/en/data-and-evidence/evidence-informed-policy-making/health-evidence-network-hen
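As a quick check of the time-allocation figures reported above:

\[ \frac{27.2\%}{44.4\%} \approx 0.61 > 0.5, \]

that is, for every hour spent on core evaluation work, roughly 37 minutes went into validations – indeed more than half as much time again.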

To conclude, the review finds that the OIE has taken steps to improve the perceived utility of evaluation in several ways. In the first instance, by planning its work to provide relevant and timely evidence geared towards helping the Board with its oversight and decision-making tasks: the topics are selected through dialogue between the OIE and key CDB stakeholders and reflect priorities of the CDB’s strategic plan. Secondly, by securing the interest, and consequently the buy-in, of the OAC and CDB senior management through engaging their input throughout the evaluation process. This is evidenced by the reported interest in the latest three studies: the country strategy programme in Haiti, the evaluation of policy-based operations and the SDF 6&7 multicycle assessment.

The OIE’s products are of acceptable quality and could be even better if some of the shortcomings noted above were addressed. The products themselves, however, do not impair the utility of OIE’s work; this is undermined in other ways: (1) by the time delays in commenting on PCRs (OIE) and in providing feedback on the independent evaluations (operations and management); and (2) by the inefficient processes for agreeing topics and funding sources, and for providing the OIE with management responses to its reports.

Putting Evaluation to Use: transparency, feedback and follow-up

There are several ways that evaluation can be, and is being, used. As John Mayne has pointed out in his many publications on the issue,217 when we talk of evaluation use we are mainly thinking of instrumental use – use made to directly improve programming and performance. But there is also conceptual use – use which often goes unnoticed or, more precisely, unmeasured; this refers to use made to enhance knowledge about the type of intervention under study in a more general way. And there is reflective use – using discussions or workshops to encourage and support reflection on the evaluation findings to see how they might contribute to future strategies.

In the case of the CDB there is some evidence to suggest that use is not only instrumental but that other types are also developing. For example, the review process for draft evaluation reports includes reflective workshops that discuss not only the findings but also seek to draw out the important lessons (reflective use).

Another important use, as recommended by the ECG, is to draw a synthesis of lessons from a number of evaluations from time to time and make it publicly available (conceptual use). The Panel was impressed to hear that, in the past, the evaluation unit had done this, drawing on lessons from evaluations of the power sector. Although nothing has happened since, it is now on the “to do” list for 2016 (OIE’s 2016 Work Plan).

As for instrumental use, responsibility for using the knowledge generated through evaluation, and possibly for drawing up an action plan of what should be done, lies with CDB senior management and the relevant CDB department and division. Oversight of the application of recommendations and the uptake of lessons drawn is the responsibility of the OAC.

Evidence on how evaluations have actually contributed to decisions or negotiations is lacking or confusing. Certainly, the OIE is unaware of the extent to which its evaluations are put to use. On the one hand, the OAC minutes sometimes indicate that lessons learned are integrated into the next phase. On the other hand, the reviewers were told that, often in the past, the evaluation results were “too old” to be of use, as the lessons had already been drawn and used well before the report was completed. Similarly, gaps in people’s memories of how well the evaluative information from previous studies was used may also account for the scarcity of evidence.

217 See, for example, his opening chapter in Läubli Loud, M. and Mayne, J. (2014) Enhancing Evaluation Use: Insights from Internal Evaluation Units, Sage Publications.

In response, the Panel questioned CDB staff and the OIE about a particular study, the Technical and Vocational Education and Training Assessment. The feedback was somewhat contradictory. On the one hand, the study was criticised as “confirming” news rather than bringing “new news”. On the other, we learned that in October 2015 the Board of Directors approved a proposal for the revision of CDB’s Education and Training Policy and Strategy; work on this has already begun and an external consultant has been engaged to lead the process.

Although it is one of the OIE’s tasks to set up a database of results and lessons learned from evaluations, so far this has not been a priority. There is also currently no systematic tracking of the lessons or recommendations arising from the evaluations, or of any progress in their uptake. (The Panel has already referred above to the OAC’s lack of oversight of the use of evaluation.)
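To make the idea concrete, below is a minimal sketch of what one record in such a recommendation-tracking database might capture. The field names, status values and helper function are illustrative assumptions on our part, not an existing OIE or CDB structure:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

# Illustrative record for tracking one evaluation recommendation.
# All field names and status values are hypothetical.
@dataclass
class RecommendationRecord:
    evaluation: str                    # e.g. "SDF 6&7 multicycle evaluation"
    recommendation: str                # the recommendation text
    addressee: str                     # e.g. "CDB operations", "BMC"
    management_response: Optional[str] = None
    agreed_action: Optional[str] = None
    status: str = "open"               # open / in progress / implemented / rejected
    due: Optional[date] = None
    last_reviewed: Optional[date] = None
    notes: list[str] = field(default_factory=list)

def overdue(records: list[RecommendationRecord], today: date) -> list[RecommendationRecord]:
    """Return open recommendations that are past their due date."""
    return [r for r in records if r.status == "open" and r.due and r.due < today]
```

A standing OAC agenda item could then be serviced by a simple query such as `overdue(...)`, rather than relying on memory or ad hoc requests.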

The OIE’s role in supporting CDB’s organisational learning is clearly specified in the Evaluation Policy, with many good suggestions for knowledge-sharing activities such as “brown-bag lunches, workshops, pamphlets and short issues papers” (p. 19). So far, however, the OIE’s lead role on the knowledge-sharing side has been quite limited. It has provided advisory input in Loan Committee discussions, and organises workshops together with the relevant operations department to discuss the implications of evaluation studies. Ultimately, of course, the uptake of evaluation results and knowledge is in the hands of management. But the evaluation unit has an important role to play as knowledge broker and knowledge manager; both roles have tended to be underplayed in OIE’s work plans so far.

Transparency: The Communication Strategy

Recently, with the approval of its new Disclosure Policy, the CDB has started to post its independent evaluation reports on its website (there is nothing on the self-evaluations). The website also presents a good overview of the role and function of the OIE and of evaluation within the CDB. This is a step in the right direction for sharing information. In our view, however, the CDB’s communication strategy is the weakest part of the evaluation system to date.

The Panel has already commended the OIE for its efforts to engage the CDB and the OAC in evaluation work. But reporting and communicating the lessons seem to be entirely targeted at the Board and the CDB. Moreover, the 2015 budget provides only US$2,000 for communication – none of which is intended for outreach.

The reviewers find that active engagement with the more indirect stakeholders – for example, project implementers in the BMCs, NGOs or project beneficiaries – is relatively weak.218 There appears to be little reflection on drawing out the significant messages for this broader group of stakeholders, or on how to transmit them to the “right” people in the “right” way (knowledge brokerage).

To conclude, evidence on the uptake of evaluation is either confusing or sparse. It is unfortunate that no systematic record-keeping system has yet been put in place to track lessons learned or the uptake of recommendations (or the actions agreed in management responses). The OIE plays a weak role in brokering the knowledge generated through evaluations to the benefit of external partners, and in managing such knowledge. Although the Evaluation Policy specifies the need for “distilling evaluation findings and lessons learned in appropriate formats for targeted audiences both within and outside the CDB” (p. 19), such a targeted communication strategy has yet to be developed and budgeted.

218 A broader communication strategy is one of the principles and good practice standards of the Evaluation Cooperation Group and the evaluation community more generally.


Strengthening Evaluation Capacities and Networking

From the outset in 2012, the OIE has stressed the importance of developing and strengthening evaluation capacities within the OIE, the CDB and, subject to available resources, in the borrowing member countries. Building evaluation capacity in the BMCs and the CDB is one of the OIE’s mandated tasks, and it has figured as a priority on the work plan from the beginning (Work Programme and Budget 2012-2014). Developing an internship programme for graduates from the Caribbean region was one idea advanced to help build local evaluation resources. To date, however, capacity building has primarily been focused on OIE and CDB staff. One of the OIE’s two objectives for 2015 was therefore to take up the challenge and “strengthen evaluation capacities and networking”, to include reaching out to the BMCs.

Developing OIE staff capacities

The change from project-level to strategic and thematic evaluations requires different evaluative skills and competencies. The MDB Evaluation Pyramid presented in Figure 3 below shows the different types of evaluation and the changing resource needs as one ascends the pyramid. Implicit here, too, is the change in the type of expertise and competencies needed as evaluation aspires to the higher levels.

Consequently, for 2015 the OIE set itself the objective of networking and developing working partnerships with regional and international evaluation entities and academic institutions. The rationale was twofold: (1) to secure further support and guidance; and (2) to increase its outreach and coverage through joint work and international exposure. Another, implicit aim was to benefit from partners’ contacts in the BMCs wherever possible, so as to improve data collection and quality.

Figure 3: The MDB Evaluation Pyramid219

219 US Treasury Report to Congress on Evaluation Standards and Practices at the Multilateral Development Banks (September 2014, Annex C).


The OIE has therefore linked up with Carleton University in Canada and the University of the West Indies (Barbados campus). The OIE was also approached by the Development Bank of Southern Africa to exchange experiences about setting up an evaluation entity in a “small” development bank. However, its attempt to become a member of the Evaluation Cooperation Group was unsuccessful, for reasons beyond its control.

The OIE is to be commended for addressing the issue of staff competencies and professional development more generally. New developments in evaluation, as well as in the scope of OIE’s work, may necessitate new competencies. For this reason, organisations such as the International Development Evaluation Association have recommended that the competencies of evaluators and evaluation managers be periodically reviewed. Several publications now exist on competency requirements and on suggestions for the periodic review of staff competencies.220

It is not within this Review’s remit to compare and contrast OIE’s competencies with those recommended by international and national agencies. What we can say, however, is that the OIE demonstrates great forethought in taking this on board.

Capacity building within CDB

The OIE’s objective also includes continuing to develop measures for improving the monitoring and self-evaluation side of CDB’s work. OIE’s strategy here is to use the windows of opportunity offered by some of the training sessions being organised by the CDB as part of its shift towards MfDR, e.g. by Corporate Planning Services and Technical Assistance. For 2016 it is also planned for the OIE to be present at the annual staff meeting and Learning Forum.

The OIE also organises some ad hoc training with operations, for example to help staff understand new tools (e.g. for drawing out lessons from self-evaluation reports) and, more generally, to help staff appreciate how evaluation can add value to the organisation’s work. Measures include providing advisory services on demand and providing training alongside the introduction of new or revised tools.

Capacity building in the BMCs

This is an ambitious task and, to be effective, would require additional investment beyond the biennial work plans. A modest attempt was made in 2015: from what we understand, the OIE has joined with Carleton University and the University of the West Indies, using their networks in some of the BMCs, to try to develop this aspect.

To conclude, we cannot comment on the quality of, or the reaction to, such training, but we can commend the OIE for making capacity building one of its priority objectives. From both the Policy and the documents we reviewed, we note that capacity building was always seen as an important aspect of OIE’s work, but hitherto it has received little strategic focus. The resources currently available to the OIE will, however, limit the scope of such work in the BMCs, which in turn will continue to hinder the production of sound evidence for the OIE’s evaluations.

Adequacy of the OIE’s human and financial resources to support its work

OIE’s Human Resources

The OIE has a staff of five: the Head, one senior evaluation officer, two evaluation managers and one administrative assistant. Three of the five were recruited from within the CDB. This limited capacity means that it is not feasible to cover all the types of evaluation activities outlined in the Evaluation Policy. Yet there is some indication from the Board that the OIE should embark on impact evaluations at some future stage. An increasing demand for evaluation, and for impact evaluations in particular, would run the risk of overstretching the OIE’s capacity to deliver credible and useful evaluations. Moreover, there are many other designated OIE activities that should be recognised as valuable work: the validations, building CDB and BMC evaluation capacity, providing supervision and advice, knowledge management and brokerage, and managing evaluation contracts. The time needed for all of these may be underestimated in OIE’s budgets; all are important for assuring best value from evaluation. The Panel is concerned that a demand for “doing” evaluations, as well as OIE’s interest in advancing its skills in high-level evaluations, may undermine the importance of, and squeeze the time available for, these other essential tasks.

220 E.g. IDEAS (2012) Competencies for Development Evaluation Evaluators, Managers and Commissioners; the Canadian Evaluation Society’s Competencies for Canadian Evaluation Practice (2010); and the Swiss Evaluation Society’s Evaluation Managers Competencies Framework (2014).

Limited and unpredictable resources for independent evaluations

The OIE is funded from the general administrative budget and represents approximately 2.5% of the total. Whilst this is seemingly a higher proportion than in other MDBs, in real terms it is quite limited: 75% of the OIE budget goes on staff salaries, leaving US$190,000 in 2015 for external consultants and other expenses.
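A back-of-the-envelope check of these figures, assuming the US$190,000 represents the full non-salary 25% of the budget:

\[ \text{total OIE budget (2015)} \approx \frac{\text{US\$}190{,}000}{0.25} \approx \text{US\$}760{,}000, \qquad \text{salaries} \approx 0.75 \times 760{,}000 \approx \text{US\$}570{,}000. \]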

CDB’s donors do not appear to specify a budget for monitoring and evaluation activities. This means, on the one hand, that there is no clear external budgetary recognition of operations’ self-evaluation work or of OIE’s time in the validation process and, on the other, that whilst donors expect to receive reports from independent evaluations, this expectation is not backed up when funds are allocated.

The resources available to the OIE for hiring external consultants have dropped from US$350,000 in the revised 2014 budget to US$120,000 in the 2015 indicative budget. The OIE estimates that, for high-level evaluations, the cost of external consultants is between US$90,000 and US$350,000 (the SDF 6&7 evaluation cost US$255,000). In the Panel’s experience, this is a sound estimate. With one staff member fewer during 2014-2015, coupled with OIE’s focus on dealing with the backlog of self-evaluations amongst other priorities, the Office was unable to execute some of the evaluations during the annual budget period. The budget was therefore reduced for subsequent years, but has proven insufficient to fund the OIE Work Programme. The OIE has consequently needed to turn to the only alternative source available at present, the SDF fund. But the SDF funding rules apply to specific countries and themes, which obviously restricts the OIE’s choice of evaluation subjects. And since the SDF does not allow for OIE recurring costs such as staff travel, the SDF-funded evaluations have to be outsourced. As presented in Figure 1 above, the approval process is inefficient and causes delays. The Panel learned that additional funds, for example for specific studies, could be secured from within the administrative budget during the year, on condition that the request was based on sound arguments.
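Setting the 2015 indicative consultant budget against the OIE’s own cost estimates makes the shortfall explicit (figures as reported above):

\[ \text{US\$}120{,}000 \;<\; \text{US\$}255{,}000 \;(\text{SDF 6\&7 actual cost}) \;<\; \text{US\$}350{,}000 \;(\text{upper estimate}), \]

so the annual consultant budget would not cover even one high-level evaluation at the mid-point of the OIE’s own cost range.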

Whilst the Panel fully appreciates that the Bank is operating within a zero-growth framework, the reviewers were surprised to learn that OIE funding is not secured in line with its priorities and work plan. The need to seek alternative funding for individual studies does not allow for any flexibility and undermines the OIE’s independent judgment of what needs to be done.

To conclude: the OIE is inadequately resourced to meet the expectations outlined in the CDB’s Evaluation Policy. The Panel recognises that the CDB itself faces budgetary restrictions, but the current arrangements for securing extra funding are complicated and inefficient, and they limit the OIE’s ability to exercise autonomy in the selection of its evaluation studies. Moreover, OIE budgets significantly underestimate the time needed for managing evaluations and for its other evaluation activities.


Self- and independent evaluation

Self-evaluations cover public sector investment, lending and technical assistance, policy-based loans, and country strategy programmes. Both types of evaluation are important, as they are at the very heart of the evaluation function; the self-evaluations are said to be the building blocks for the more strategic evaluations that the OIE is now undertaking.

The Evaluation Cooperation Group recommends that self-evaluations be carried out by the relevant operations department and, in turn, reviewed and validated by the organisation’s independent evaluation office. The CDB’s Evaluation Policy therefore describes “validating all self-evaluations” as one of OIE’s essential oversight tasks.

Within the CDB, the self-evaluations should provide management with performance assessments and thereby serve an accountability function towards the CDB and the Board. To support the process, the OIE provides operations with manuals and checklists for guidance. Once a self-evaluation report is to hand, it is passed to the OIE for validation of its technical quality and credibility.221

In the CDB’s case, however, there are well-documented issues that have affected the quality and timeliness of the self-evaluations and, consequently, the quality of the foundation on which the independent evaluations are built. Paucity of documentation within the CDB, paucity of data collected and available in the Borrowing Member Countries (BMCs), delays in producing completion reports and, in turn, in having them validated by the OIE – all such issues were systematically raised during interviews and in some of the independent evaluation reports. There appears to be little incentive to complete self-evaluations in a timelier manner.

Generally speaking, many of the monitoring data problems appear to be due to a lack of management oversight. For example, with the introduction of results-based management, the logframe and the monitoring and data needs are now systematically built into intervention design. However, the BMCs are not delivering the data as contractually agreed at the outset. Incentives to support any significant change towards building a results-based culture seem to be weak, and sanctions are rarely enforced when the supply of data is lacking or lengthy delays to projects occur. Whilst we can appreciate the complexities of trying to enforce monitoring compliance, the result is that project deadlines have often had to be extended, data gaps are not satisfactorily dealt with and, in turn, there has been a void in the quality and quantity of evidence available for the CDB’s self-assessment of project performance. For some time, this lack of oversight has been tolerated. Part of the problem is the low priority accorded by operations to completing the self-evaluation reports, coupled with the absence of any focal point within senior management to drive the process and deal with the problems.

No record is kept of how the self-evaluation results are actually used. They do not appear on the CDB website, though we were told that the findings are integrated into subsequent project designs. We are therefore somewhat unclear as to the utility of these reports at present. The situation is exacerbated by a rather confused image of evaluation: some operations staff consider OIE’s input (through validations or independent evaluations) to be at times over-critical, regulatory and of little added value – a threat rather than an opportunity for learning. Yet, at the same time, evaluation is recognised as an integral part of results-based management.

According to the Evaluation Policy (p. 15), “The President, with the support of the Advisory Management Team, is accountable for encouraging and providing an environment where evaluation adds value to the overall management of CDB’s activities and fosters a culture of critical analysis and learning”. In the CDB, however, a learning culture appears to be still in its infancy, and the leadership role as expressed in the Evaluation Policy is underdeveloped.

Some managers, however, seem to be starting to change the status quo. For example, a revised and simplified template for producing project completion reports is being considered, and mid-term project reviews are expected to be more stringent in looking at monitoring plans and practices and in tying disbursements to performance. In some cases we also learned of incentives being introduced to encourage project managers to complete their reports in a timelier manner. But much remains to be done and, since the OIE is no longer responsible for monitoring and project evaluations, there is a void that needs to be filled. It is up to line managers to drive this work forward.

221 According to the Evaluation Policy, the OIE should validate all PCRs and CCRs. However, due to the backlog of reports and the delays in completing them (sometimes years later), since October 2015 the OIE has secured the OAC’s agreement to validate a maximum of 6 per year, selected in consultation with the OAC.

To conclude, it is fair to say that, in view of a number of “frustrations” between the OIE and operations – largely to do with delays in exchanging comments on the various reports, as well as the paucity and/or lack of monitoring data – the added value that evaluation might offer to the operations area is ill recognised. Moreover, the link between the self-evaluations as building blocks for the independent evaluations is not apparent. There is thus little incentive or management focus to drive any change to current practices; in other words, there is a lack of leadership to advance a learning environment in which evaluation can play a major part.


Part Three: General Conclusions and Recommendations

To conclude, with regard to the Evaluation Policy and OIE’s independence, our Review finds that over the past few years the CDB has succeeded in establishing an independent evaluation office that is credible and respected. It reports to a Board Committee and is thus organisationally independent from CDB management. Its work is grounded in an Evaluation Policy agreed by the Board and the CDB that reflects internationally recognised principles and good practices. The Policy sets out a broad scope of responsibility for the OIE which, however, seems over-ambitious given current resource constraints. The OIE clearly has both an accountability and a learning function; the latter should support the development of an organisational learning culture. (So far, the uptake of recommendations and key lessons has not been systematically monitored or recorded.) In general, on the issue of independence, we can conclude that the OIE meets the criteria for organisational and behavioural independence and is protected to a certain degree from external or contextual influences.

However, as the independent Advisory Committee for Development Impact has said, “independent evaluation needs to have clout … credibility of evaluation hinges on public perceptions as well as on reality.”222

We are therefore highlighting a few potential threats, even though there is no evidence to suggest they are in any way real at present. It would be in the OIE’s and the CDB’s interest to have these clarified sooner rather than later. For instance:

- Any delays incurred in reporting self- and independent evaluation results to the Board could be interpreted as operational interference.

- Similarly, there is no agreed process to deal with any conflict of interest between the OIE and management in reporting results; it is simply expected that any disagreements will be reported in the management response.

- Another possible threat is that the Head of the OIE lacks complete autonomy over staff matters: recruitment, termination, continuation and professional development. The Policy is not sufficiently clear about who has the final word in the case of disagreement.

- Finally, on resources, our Review accepts that the funds available to the CDB are limited and that the OIE’s budget is not independent but operates within the Bank’s budgetary limitations. Nevertheless, we feel that more flexible arrangements could be devised to allow less restrictive and timelier access to funds.

With regard to governance, our Review has highlighted the difficulty the OAC faces in not receiving the background papers for its meetings in sufficient time to do them justice. Moreover, these documents tend to be very lengthy and not necessarily reader-friendly. The OAC’s oversight responsibility is likely to be weakened as a result, and we can already see some indication of this. For instance, requests for systematic follow-up on management actions resulting from evaluation findings have not been answered. Nor is there a standing item for this on the OAC agenda, so such requests can easily be passed over and forgotten. The broadened responsibilities now given to the OAC also mean that many competing entities are trying to secure the OAC’s attention. There is now provision for the OAC to call on consultants for help, which we feel may strengthen the OAC in its oversight responsibilities.

Furthermore, in their capacity as members of the Board, the OAC’s members should stress the urgency of developing evaluation and monitoring capacity in the BMCs, since this gap has a direct impact on OIE and CDB evaluations.

With regard to the OIE’s performance, we have to respond to the questions raised in this Review’s Terms of Reference, which essentially come down to two main questions: Is the OIE doing the right thing? And is it doing it in the right way?

222 Picciotto, R. (2008) Evaluation Independence at DFID: An Independent Assessment prepared for the Independent Advisory Committee for Development Impact (IADCI), p. 4.


There is no doubt that the decision to establish a credible, independent evaluation function in the CDB is the “right thing” to do: “effective and useful evaluation and oversight activities can assess development effectiveness, hold the organisation accountable for results, and improve operational performance”.223 It is also the policy of the MDBs to have such a function, and the CDB has now aligned itself with international standards and practice.224 The question now, therefore, is the following: is the OIE going about it in the right way?

The OIE has taken the “right” steps to improve the engagement and interest of the OAC and CDB senior management, from selecting the topics for its evaluations through to finalising the conclusions and recommendations in a collaborative spirit. However, it falls short of taking the messages emerging from the studies to “outsiders”, such as those responsible for implementing CDB interventions in the BMCs.

In its oversight role, we feel that the OIE has paid insufficient attention to the actual utilisation of evaluation: it is beyond its responsibility to see that action is taken, but it is certainly within its remit to record how, and how well, the lessons drawn have been taken up and used. With regard to its oversight of the self-evaluations (the validation process), the OIE has attempted to improve dialogue with the operations departments and to demonstrate the dual function of oversight and learning. It is now emphasising the learning aspect by providing tools and guidance on how to draw out lessons and integrate them into future planning. More recently, it has sought ways to provide more formalised training on evaluation by working with the corporate planning services and technical assistance departments to develop courses that show how, where and when evaluation plays its part within the MfDR framework.

However, one of the challenges in evaluation management is balancing independence with facilitating buy-in and ownership. It is a fine line to walk, and it depends to a large degree on the climate between management and the head and staff of the independent evaluation unit in setting the tone of the collaboration. In practical terms, for the CDB this means defining the role of the OIE in relation to the self-evaluations performed by the Projects and Economics Departments. The change from the EOV to the OIE made this role change quite clear: the OIE is no longer responsible for planning project monitoring and data needs together with the operational departments. On the other hand, to improve understanding and learning, there needs to be an interface between evaluation and management. At present, the OIE’s dual role – its advisory role in relation to operations and its strategic role towards the OAC and senior management – has not been satisfactorily resolved. Operational staff still do not appear to see any urgency in producing their completion reports, nor to appreciate what lessons might be drawn from such reflection. The OIE is doing its best to support “learning” whilst at the same time keeping an arm’s length. The greatest challenge the OIE faces in its new capacity is the slow development of an organisational learning and evaluation culture.

A Learning and Evaluation Culture

Evaluation utility depends on the engagement of evaluation users – those who should benefit from the knowledge generated through the studies. Useful evaluation therefore depends to a large degree on the development of an evaluation and learning culture and on how well these are embedded in the organisation. This means that the organisation recognises and appreciates evaluation’s role and the functions it can have, particularly in helping to understand what it is achieving and where and how improvements can be made. In short, the added value that evaluation can bring to an organisation is its ability to draw out the important lessons that can help improve the organisation’s performance.

However, whilst CDB senior management shows all the signs of embracing evaluation as an important strategic tool, there still appears to be some apprehension about receiving criticism, however constructive this might be. The OAC has already affirmed its interest in learning what can be “put right the next time around”. In considering accountability, the committee is asking for a more strategic approach to learning and sharing knowledge based on evidence. The CDB also shares the development goals of other MDBs, that is, “to end extreme poverty and promote shared prosperity”. This means looking for new forms of problem-solving and for ways to create a “development solutions culture”. Hence there is an interest in learning from experience and in exchanging knowledge about what works. This implies balancing accountability and learning: making sure they are not seen as opposites but as compatible. This greater emphasis on learning requires a reframing of CDB’s thinking and a willingness to deal with the constructive criticism that evaluation can offer.

223 CDB (2011) Evaluation Policy (p. 2)
224

The following excerpt, from an external review of the evaluation function at UNRWA, illustrates what a weak evaluation culture can look like in practice:

Weak evaluation culture

27. While some stakeholders seem keen on evaluation, the overall evaluation culture in UNRWA is weak. There are several aspects to it.

28. First, many of the interviewees stressed that UNRWA has a weak learning culture. The weak learning culture stems from a number of factors. One reason given is related to the cultural virtue of oral communication. This makes conveying documented experiences challenging. Another reason is language. A majority of UNRWA’s national staff is not fluent in English (evaluation reports are mostly in English). Furthermore, criticism – even if constructive - is – according to some interviewees - mainly perceived as a threat and not as an opportunity. Finally, learning is also affected by a very basic constraint – lack of time.

29. Second, there is a weak knowledge management system to systematically collect and share experience and lessons learned in UNRWA. UNRWA communities of practices do not exist. Several interviewees mentioned the use of knowledge networks outside of UNRWA, i.e. communities of practices managed by other agencies. Also, accessing evaluation reports is not easy. The UNRWA website on the Internet does not provide access to evaluation reports. While the Agency’s Intranet has a site for evaluation reports, it is not a complete depository and the Evaluation Division does not exactly know how many decentralized evaluations are being produced. In addition, there are only few evaluation plans at the level of field offices or departments.

30. Third, the Panel found that decentralized evaluations are - at least partly - perceived as donor-driven accountability instruments rather than as learning tools. In that sense, evaluations are managed as bureaucratic requirements thereby weakening the learning dimension.

31. Finally, the sensitive political context in which UNRWA operates may also discourage a strong evaluation culture as evaluative evidence can sometimes be overridden by political considerations.14 The Panel was repeatedly told that given the political context, any change is a challenge.

14 An example mentioned to the Panel was the evaluation of the Qalqilya Hospital (2013), which concluded that the Hospital should be closed. However, for political reasons this recommendation was not implemented.


Recommendations

(Here is a list of some possible recommendations – to be discussed and developed within the Review Panel initially, and then proposed for discussion together with the OIE.)

- OAC’s oversight responsibility needs to be strengthened.

- (Possibly) Review the Evaluation Policy to redress gaps.

- OIE to develop a 5-year strategy providing a step-by-step work programme, budget and timeframe for implementing the Evaluation Policy.

- Stronger leadership from the President to provide a climate conducive to promoting the added value of evaluation to overall management and to fostering a culture of critical analysis and learning.

- Stronger support from the Advisory Management Team for the evaluation function, by emphasising the accountability of CDB managers for performance results and by devising incentive schemes to support the accountability function.

- Set up committees (advisory groups) to accompany specific evaluation studies as a means of reinforcing ownership.

- OIE could contribute to developing a learning culture within the CDB by adopting the role of critical friend in its dealings with the CDB operations departments and the CDB more generally. Valuable insight can be gained from a focused conversation with the client about aspects of the evaluation that went well and those that did not. Some meetings might include all relevant stakeholders in addition to the main client. This feedback can then inform a subsequent internal project team debrief (e.g., identifying internal professional development or process improvement needs), and input from all perspectives can inform planning for future projects. A summary of these discussions can then be circulated so that everyone can use them for their own internal or external quality improvement purposes.

- Rather than asking “what went wrong?”, such conversations can address “what surprised us, what we would do differently, what did we expect to happen that didn’t, and what did we not expect to happen that did” – a better means of getting at the negative aspects without placing blame.

- OIE to train up and engage “champions” within CDB operations departments to help demonstrate evaluation utility and provide “on the job” training in self-evaluation to colleagues.

- A quality control group could also be set up, as Picciotto suggested for DFID, to carry out and develop the work of Quality Assessment and, particularly, Quality at Entry monitoring.

- OIE should be given the resources to build M&E capacity in the BMCs as a priority, possibly in partnership with other MDBs, development organisations and the countries themselves.

- Possibly criticise the over-emphasis on the five DAC criteria, which have been in use for more than 15 years without any major revision. Given their importance and level of influence in the field, it is pertinent that independent professionals take a critical look at them, especially since many scholars and practitioners consider that the quality of evaluations in development aid has been quite disappointing. See Chianca, T. (2008) “The OECD/DAC Criteria for International Development Evaluations: An Assessment and Ideas for Improvement”, Journal of MultiDisciplinary Evaluation, 5(9), http://evaluation.wmich.edu/jmde/.

- Regarding the validation of self-evaluations: it is recommended that a leaner format be developed for the PCVRs, repeating less of the original PCR text and focusing on the validity of the PCR against a limited number of key issues to be assessed for each PCR. The PCVRs are also written out in extensive prose, which may not be needed for this type of document; a more tabular format with more succinct statements could lead to a leaner production process without any loss of usefulness. The “PCR checklist” would be a good starting point for this.

- The link between the self-evaluations, the validations, the QaE documents and the independent evaluations is not clear at present, so one wonders what all the effort on the operations side is for. This is a real issue: much interesting and reasonably good work is being done, but it lacks coherence.

- To counter the criticism that evaluation results arrive “too old” to be useful – something the EIB evaluation unit was also criticised for in the past – the OIE could include “younger” (sometimes still ongoing) projects in its evaluation samples, and redo the portfolio analysis right before finalising a report to see whether things have changed; the services can then, in their response, indicate whether matters have indeed evolved.

Recommendations for improving the process for study approval and funding

- Give recommendations on priorities for OIE work.

- Funding should come preferably from the administrative budget. Unused monies could then be released in the annual budgetary reviews, but this should have no effect on the budget for subsequent years. SDF funding at a level … It is surprising to find that a Board-approved OIE work programme and budget is inadequate; either the proposed budget per work programme …

389

Page 390: f01.justanswer.comf01.justanswer.com/nFcTOGCI/External+Review+of+the+OIE+v.20.3.1… · Web viewf01.justanswer.com

mendations

(Here is a list of some possible ones - to be discussed and developed within Review Panel initially and then proposed to discuss together with the OIE)

OAC’s oversight responsibility needs to be strengthened (possibly) Review Evaluation Policy to redress gaps OIE to develop 5 year strategy providing step-by-step work programme, budget and

timeframe for implementing Evaluation Policy, Stronger leadership from President to provide conducive climate for promoting added

value of evaluation to overall management and fostering a culture of critical analysis and learning. Stronger support from Advisory Management Team for evaluation function by emphasising the accountability function of CDB managers for performance results and for devising incentive schemes to support accountability function.

Set up committees to accompany study specific evaluations as a means of reinforcing ownership (advisory groups)

OIE could contribute to developing a learning culture within the CDB by adopting the role of critical friend in its dealings with the CDB operations departments and the CDB more generally. Valuable insight can be gained by having a focused conversation with your client about aspects of the evaluation that went well and those that did not go so well. Some meetings might include all relevant stakeholders in addition to the main client. This feedback can then inform a subsequent internal project team debrief (e.g., identifying internal professional development or process improvement needs). Input from all perspectives can advise planning for future projects. We then circulate a summary of these discussions so that everyone can use them for their own internal or external quality improvement purposes.

Rather than asking ‘what went wrong?,’ can talk about “what surprised us, what we’d do differently, what did we expect to happen that didn’t, and what did we not expect to happen that did”. Better means of getting at the negative aspects without placing blame.

OIE to train up and engage “champions” within CDB operations departments to help demonstrate evaluation utility and provide “on the job” training in self-evaluation to colleagues Could also set up quality control group like Picciotto suggested for DfID to carry out and develop the work of Quality Assessment and particularly, Quality at Entry monitoring.

OIE should be given the resources to build M&E capacity in BMCs as a priority, possibly in partnership with other MDBs, development organisations and the countries themselves

Possibly criticise over emphasis on using the five DAC criteria, which have been in use now for more than 15 years without going through any major revisions. Given their Importance and level of influence in the field, it is pertinent that independent professionals take a critical look at them, especially since many scholars and practitioners consider that the quality of evaluations in development aid has been quite disappointing (http://evaluation.wmich.edu/jmde/ Evaluation in International Development Journal of MultiDisciplinary Evaluation, Volume 5, Number 9 ISSN 1556-8180 March 2008 41 The OECD/DAC Criteria for International Development Evaluations: An Assessment and Ideas for Improvement Thomaz Chianca Independent)

Regarding the Validation of self-evaluations: It is recommended that a leaner format be developed for the PCVRs, repeating less the content of the original PCR text and focusing

[Comment – John Mayne, 19/03/16: I should have asked earlier, but are these (a) actually done by operations staff or consultants, (b) more than just project completion reports?]

[Comment – John Mayne, 19/03/16: But this is a huge task. Where are donors in supporting this? Certainly need partnerships. Indeed, donors seem to get off scot-free in all of this!]

[Comment – John Mayne, 19/03/16: Might want more here. Get OIE somewhat responsible for building an evaluation culture with the strong backing of senior management. Hold different sorts of learning events, etc. The champions bit would be part of this. I've written a lot about this.]

[Comment – John Mayne, 19/03/16: Yes, this is part of conducting evaluations. Should have advisory groups made up of operations, others and outsiders. We should discuss this good practice earlier.]

[Comment – John Mayne, 19/03/16: Meaning what?]
The PCVRs are also written out in lengthy narrative form, which may not be needed for this type of document – a more tabular format with more succinct statements could lead to a leaner process by which the PCVRs are produced, without losing usefulness. The "PCR checklist" would be a good starting point for this.

The Panel, however, encourages the creation of such a quality control unit; its role cannot be fulfilled by the OIE, as it lies outside the OIE's scope and present capacity – even though the OIE could play an advisory/methodological role.

APPENDICES

Appendix I – The External Review Mandate: Terms of Reference and Approach Paper

Appendix II – Review Approach, Data Collection and Analysis, and Limitations

Appendix III – Overview of OIE Evaluation Practice

Appendix IV – List of Persons Interviewed

Appendix V – List of Documents Reviewed

Appendix VI – List of Topics used to guide interviews with members of CDB Board of Directors

Appendix VII – List of Topics used to guide interviews with CDB staff

Appendix III – Overview of OIE Evaluation Practice (prepared by the OIE in response to Reviewer’s request)

Caribbean Development Bank, Office of Independent Evaluation - OIE

Percentage of projects subject to project (self-)evaluation: 100% – Project Completion Reports (PCRs).

Percentage of projects subject to validation by OIE: Approximately 40-50%. About 15 projects exit the portfolio annually. The Evaluation Policy calls for all PCRs to be validated; however, OIE resources are insufficient. The validation process was reviewed in 2014; the OAC (a Board committee) now selects a sample of 6-8 PCRs for validation each year.

Percentage/number of projects subject to in-depth review by OIE: None, unless specifically requested by the OAC. Due to limited resources, the OIE evaluation work programme focuses on PCR validations and high-level evaluations, including country strategy and programme evaluations (CSPEs).

Number of high-level evaluations conducted by OIE (e.g. sector, thematic, geographic): 1-2 per year since 2011. The plan is 2-4 per year from 2016, including CSPEs (the first, for Haiti, planned for Q1 2016).

Number of project impact evaluations conducted by OIE: None. OIE includes "impact questions" in high-level evaluations.

Number of project impact evaluations conducted by Bank staff or other non-OIE staff: OIE is not aware of any impact evaluation conducted by the Bank. However, OIE provides technical support to the Basic Needs Trust Fund (BNTF) in its design of an M&E framework that entails impact evaluations.

Budget: USD 0.78 mn in 2015; USD 0.82 mn in 2016 – equivalent to about 2.5% of the total CDB Administrative Budget. 75% of the budget goes to staff salaries (4 professionals, 1 support staff), leaving around USD 190,000 (in 2015) for other expenses, including consultants, e.g. for external evaluations. (See the arithmetic check after this table.) Additional funding is accessed via the Special Development Fund (SDF); the amount varies with the type and scope of the evaluation, e.g. the ongoing SDF 6/7 Evaluation is SDF-funded at USD 255,000. The budget is determined by the Board and is not separate from the administrative budget. SDF funding for evaluations is considered separately and is subject to the Bank's internal approval process; it cannot be used to cover OIE expenses such as staff time or travel, and country eligibility for SDF funding is also a consideration. OIE expressed concerns about this funding track in respect of predictability, independence and eligibility limitations.

Reporting line: The Head of OIE reports to the Board, with an administrative link to the President.

Terms of appointment for Head: 5-year term, renewable once. Appointed by the President with the agreement of the Board.

Right of Return for Head: Not eligible for other staff positions.

Consultants as proportion of OIE budget: 19% in 2015 (USD 145,000), plus SDF funding; SDF-funded evaluations are outsourced.

Last external evaluation (or peer review) of OIE: No external evaluation, though a review of the function was done in 2011, leading to the Evaluation Policy. This OIE External Review was completed in April 2016.

Departments or special programmes supporting impact evaluation: None.
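As a purely illustrative cross-check of the figures above (the inputs are taken from the table; the script itself is ours, not an official CDB calculation), the salary share and the validation coverage can be verified in a few lines of Python:

    # Illustrative arithmetic check of the OIE budget and coverage figures above.
    budget_2015_usd = 780_000          # OIE administrative budget, 2015 (USD 0.78 mn)
    salary_share = 0.75                # reported share going to staff salaries

    non_salary = budget_2015_usd * (1 - salary_share)
    print(f"Non-salary remainder: USD {non_salary:,.0f}")
    # -> USD 195,000, consistent with the "around USD 190,000" cited above.

    # Validation coverage: about 15 projects exit the portfolio annually,
    # and the OAC selects 6-8 PCRs for validation.
    for sample in (6, 8):
        print(f"{sample} of 15 PCRs validated -> {sample / 15:.0%}")
    # -> 40% and 53%, matching the "approximately 40-50%" reported.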

Appendix IV – List of Persons Interviewed

Name – Function relative to OIE – Type of interview

Mrs. Colleen Wainwright – Member, CDB Board of Directors (UK) – Face to face
Mrs. Cherianne Clarke – Alternate Member, CDB Board of Directors (UK) – Face to face
Mrs. Jean McCardle – Member, CDB Board of Directors (Canada) – Face to face
Dr. Louis Woodroofe – Member, CDB Board of Directors (Barbados)
Mr. A. de Brigard – Former Member, CDB Board of Directors – Skype interview
Mr. H. Illi – Former Member, CDB Board of Directors – Telephone interview
Mrs. Claudia Reyes Nieto – Member, CDB Board of Directors – Telephone interview
Mr. Bu Yu – Alternate Director, CDB Board of Directors – Face to face
Mr. Michael Schroll (Barbados) – Head, OIE – Series of interviews via Skype and face to face
Mr. Mark Clayton – OIE Senior Evaluation Officer – Focus group
Mrs. Egene Baccus Latchman – OIE Evaluation Officer – Focus group
Mr. Everton Clinton – OIE Evaluation Officer – Focus group
Mrs. Valerie Pilgrim – OIE Evaluation Officer – Focus group
Dr. Justin Ram – CDB Director, Economics Department – Face to face
Mr. Ian Durant – CDB Deputy Director, Economics Department – Face to face
Dr. Wm Warren Smith – CDB President – Joint interview, face to face
Mrs. Yvette Lemonias-Seale – CDB Vice President, Corporate Services & Bank Secretariat – Joint interview, face to face
Mr. Denis Bergevin – CDB Deputy Director, Internal Audit – Face to face
Mr. Edward Greene – CDB Division Chief, Technical Cooperation Division – Face to face
Mrs. Monica La Bennett – CDB Deputy Director, Corporate Planning – Face to face
Mrs. Patricia McKenzie – CDB Vice President, Operations – Face to face
Ms. Deidre Clarendon – CDB Division Chief, Social Sector Division – Face to face
Mrs. Cheryl Dixon – CDB Coordinator, Environmental Sustainability Unit – Focus group
Mrs. Denise Noel-Debique – CDB Gender Equality Advisor – Focus group
Mrs. Tessa Williams-Robertson – CDB Head, Renewable Energy – Focus group
Mrs. Klao Bell-Lewis – CDB Head, Corporate Communications – Face to face
Mr. Daniel Best – CDB Director, Projects Department – Face to face
Mr. Carlyle Assue – CDB Director, Finance Department – Face to face


Appendix VI - Interview Guide: Members of CDB Board of Directors

Below is a list of themes that I should like to raise with you, based on your experience and knowledge of the CDB's independent evaluation function (Office of Independent Evaluation).

In each case, I should be grateful if you could illustrate your responses with examples, or help this Review by, wherever possible, sending me (or telling me where I can find) any documents that could support your responses.

This guide is being sent to you in advance to help prepare our meeting. However, our interview will be conducted more in the style of a conversation. The following sub-questions will be used to GUIDE the interview. Please feel encouraged to raise any additional issues that you feel we should take into account.

On the governance and independence of CDB's evaluation function
What mechanisms are in place to support its independence?

How satisfactory are the current arrangements in your opinion?

How is the balance between independence and the need for interaction with line management dealt with by the system? For example, what mechanisms exist to ensure that the OIE is kept up to date with decisions, policy/programme changes and other contextual changes that could have an effect on OIE evaluation studies and evaluation planning?

On the OIE's Evaluation Policy
The CDB's Evaluation Policy was established in 2011. To what degree do you feel it is adequate? Still relevant?

What suggestions do you have for any improvements?

In your opinion, how adequate is the current quality assurance system for overseeing the evaluation function?

On the quality and credibility of evaluation studies
To what degree do you believe the reports are fair and impartial?

Do you consider them to be of good quality? Are they credible?

Are you adequately consulted/involved on evaluations of interest to you?

On the relevance and usefulness of evaluations
How well does the OIE engage with you / your committee during the preparation, implementation and reporting of an evaluation study to assure that it will be useful to the CDB?

How are the priorities set for the independent evaluations? What criteria are used? Are you satisfied with the current procedure?

397

Page 398: f01.justanswer.comf01.justanswer.com/nFcTOGCI/External+Review+of+the+OIE+v.20.3.1… · Web viewf01.justanswer.com

When OIE evaluation studies are outsourced to external consultants, what criteria are used to make this decision?

How are the priorities for the OIE's 3-year rolling work plan agreed? In your opinion, is the current plan adequate in terms of coverage and diversity?

In your opinion, do the evaluations address important and pressing programs and issues?

To what extent do you feel that the OIE's evaluations integrate cross-cutting themes such as gender, energy efficiency/renewable energy and climate change? What improvements might be made, and how?

On the dissemination and uptake of evaluation findings and recommendations
To what extent do you feel that evaluation findings are communicated to the CDB and its stakeholders in a

a) useful,
b) constructive and
c) timely manner?

Are evaluation recommendations useful? Realistic?

What mechanisms are in place to assure that evaluation results are taken into account in decision making and planning? What improvements do you feel could be made?

How have you used the findings from any evaluations? Examples?

To what degree do you feel that evaluation contributes to institutional learning? And what about institutional accountability? Any examples?

What mechanisms are in place to ensure that knowledge from evaluation is accessible to CDB staff and other relevant stakeholders? Are the current arrangements satisfactory?

How satisfied are you with current arrangements? What expectations do you have for the future?

On resources
How is the OIE resourced financially, and is this satisfactory?

What about the OIE staff – are all the important areas of expertise represented in the team?

On this Review of the Office of Independent Evaluation
What are your expectations? What are you particularly hoping to learn from it?

Thank you very much for your cooperation and input

Appendix VII – Interview Pro-Forma: CDB Staff Members

This presents a list of the topics raised during interviews. It was used to guide the open-ended discussion – this means that the sequence and exact wording of the questions may not necessarily have followed this order or been asked in exactly this way.

Changeover to an Independent Evaluation Office? Expectations? Advantages and disadvantages?

Satisfaction with working relations between operations and the OIE from your perspective?

Process of dealing with the PCRs and CCRs? Advantages and limitations?

Quality and credibility of the validation process?

How are the self-evaluation reports used?

Credibility and Quality of OIE’s evaluation reports

Communication of self and OIE independent evaluations? To whom, in what way? Possible improvements?

Independence of the Office of Independent Evaluation (OIE)

Independence is absolutely central to the integrity and trustworthiness of evaluation. It is an agreed requirement within the development agencies and in the evaluation community as a whole. In examining the issue of independence and good practice, the reviewers are guided by the Evaluation Cooperation Group's recommendations on good practices, the CDB's Evaluation Policy and the 2011 consultancy review of independence relative to the CDB's evaluation and oversight division.[225] The appraisal is based on a comparison of the ECG's recommendations on independence[226] with the current OIE status.

OIE and Independence: Recommendations from the Evaluation Cooperation Group (ECG)

The ECG considers the issue of independence in terms of four specific areas: organisational (structural) independence; behavioural (functional) independence; protection from outside interference (operational independence); and conflict-of-interest safeguards.

[225] Osvaldo Feinstein & Patrick G. Grasso, Consultants, May 2011. Consultancy to Review the Independence of the Evaluation and Oversight Division of the Caribbean Development Bank.
[226] ECG (2014) Evaluation Good Practice Standards, Template for Assessing the Independence of Evaluation Organizations, Annexe II.1.

[Comment – John Mayne, 19/03/16: This section is way too long, giving "Independence" much too much import. And in the end, it is not an issue of concern! MLL: Independence and evaluation products are the two largest parts. Independence was one of the main reasons for setting up the OIE, and the theme was important to the CDB for the review to say how it compares now with international standards. Hence the lengthy discussion.]
Organisational independence ensures that the evaluation unit and its staff are protected against any influence or control by senior or line management, and have unrestricted access to all documents and information sources needed for conducting their evaluations; also, that the scope of evaluations selected can cover all relevant aspects of the institution.

Behavioural independence refers to the evaluation unit's autonomy in setting and conducting its work programme, and in producing quality reports that can be delivered without management interference.

Protection from outside interference refers to the extent to which the evaluation function is autonomous in setting its priorities, conducting its studies and processes, reaching its judgments, and managing its human and budget resources without management interference.

Conflict-of-interest safeguards refer to protection against staff conflicts of interest – whether arising from current, immediate, future or prior professional and personal relationships, or from financial interests – for which there should be provision in the institution's human resource policies.

The OIE's Independence in Practice

Organisational / structural independence

On the whole, the Panel acknowledges and commends the efforts being made by the CDB to assure the OIE's organisational independence. The CDB's Evaluation Policy provides for the OIE's organisational independence from line management, and the interview data suggest that there is also wide acceptance and acknowledgement of why the OIE should have such independent status. Table 1 below provides our overall assessment of this aspect of OIE's independence when compared with the ECG recommendations.[227]

Table 1: OIE organisational independence compared with ECG recommendations

Aspect: The structure and role of the evaluation unit.
Indicator: Whether the evaluation unit has a mandate statement making clear that its scope of responsibility extends to all operations of the organization, and that its reporting line, staff, budget and functions are organizationally independent from the organization's operational, policy, and strategy departments and related decision-making.
Assessment: Partially complies. The Policy is broad enough to cover the full range of MDB-type evaluations; however, in practice this would not be possible without additional human and budget resources.

Aspect: The unit is accountable to, and reports evaluation results to, the head or deputy head of the organization or its governing Board.
Indicator: Whether there is a direct reporting relationship between the unit and (a) the Management, and/or (b) the Board, or (c) the relevant Board Committee of the institution.
Assessment: Complies. OIE reports to the Board of Directors (BoD) through its Oversight Assurance Committee (OAC).

Aspect: The unit is located organizationally outside the staff or line management function of the program, activity or entity being evaluated.
Indicator: The unit's position in the organization relative to the program, activity or entity being evaluated.
Assessment: Complies. The OIE is located outside, and is therefore independent of, CDB line management.

[227] Based on ECG (2014) Template for Assessing the Independence of Evaluation Organizations, Evaluation Good Practice Standards, Annexe II.1.

[Comment – John Mayne, 19/03/16: Don't need the first column.]

Aspect: The unit reports regularly to the larger organization's audit committee or other oversight body.
Indicator: Reporting relationship and frequency of reporting to the oversight body.
Assessment: Complies. The OIE reports five times per year to the OAC. Board approval for an additional executive meeting between the Head of the OIE and the OAC at least once per year was given in October 2015.

Aspect: The unit is sufficiently removed from political pressures to be able to report findings without fear of repercussions.
Indicator: Extent to which the evaluation unit and its staff are not accountable to political authorities, and are insulated from participation in political activities.
Assessment: Complies.

Aspect: Unit staffers are protected by a personnel system in which compensation, training, tenure and advancement are based on merit.
Indicator: Extent to which a merit system covering compensation, training, tenure and advancement is in place and enforced.
Assessment: Partially complies – with CDB human resource policy. However, the skill needs of OIE staff ought to be regularly reviewed in light of its move towards higher-level evaluations. Appraisal of skill needs and the hiring of relevant staff should be completely under the authority of the Head of Evaluation; this is not sufficiently clear in the Policy or in the other documents we reviewed.

Aspect: The unit has access to all needed information and information sources.
Indicator: Extent to which the evaluation unit has access to the organization's (a) staff, records, and project sites; (b) co-financiers and other partners and clients; and (c) programs, activities, or entities it funds or sponsors.
Assessment: Complies. The available evidence suggests that there is no reason to doubt such access. But systematic and easily accessible documentation is lacking in the CDB; it is one of its weak points. Delays in getting hold of the relevant documents can have consequences for the timeliness of evaluation studies.

However, independence should not mean isolation. There appears to be a detachment between the OIE and the CDB that is of concern to the Panel: on the one hand, between the OIE and operations staff, and on the other, in terms of the structural arrangements between the OIE and senior management.

21) In agreeing that the OIE should concentrate on strategic and thematic, in-depth evaluations, responsibility for project monitoring and evaluation was handed over to operations. The division is clear and respected. However, it has its drawbacks. With the OIE no longer systematically involved at the front end of project design, the monitoring data needs are likely to be poorly defined. Weak monitoring data will contribute to weaker evaluations. (More on this point under the heading "Self- and Independent Evaluations".)

In the reviewers' opinion, it is a common misunderstanding to assume that providing evaluator advice on monitoring and evaluation data will compromise evaluator independence. On the contrary, evaluation input into project design is essential to assure that the logic, indicators and data needs are addressed, so that at some future point in time an evaluation of the achievements can be empirically grounded.

[Comment – Bastiaan de Laat, 19/03/16: I would also change the formulation avoiding the negation, e.g. "The available evidence suggests that...". MLL: Done.]

[Comment – John Mayne, 19/03/16: But I would expect you had interview findings on this. Have any issues been mentioned to you? MLL: See changes.]
This is not to say that the OIE no longer has any influence at the front-end design stage; it has merely shifted the point of focus. The OIE is now systematically providing such input more generally to the corporate planning teams for the tools and systems they are developing to support the MfDR framework. The monitoring data for projects and their implementation should be improved once the Project Performance Evaluation System (PPES) and the Portfolio Performance Management System (PPMS) are updated and operational.

22) In the second place, the OIE has limited formal access to the Advisory Management Team (AMT) weekly meetings, where the President and senior management gather to exchange up-to-date information on the dynamics of CDB policy and practice. The OIE is not regularly invited in any capacity to these meetings or given a copy of the agenda or minutes; it is occasionally invited to attend in order to discuss an evaluation report or management feedback. For the OIE, this means that it is unlikely to pick up on the 'when' and 'what' of key decisional issues or to provide input into the discussion based on evaluative information. Its observer status at Loans Committee meetings, or its role as participant informer at OAC and BoD meetings and discussions, does not necessarily provide the same insight into the dynamics of management actions and decisions.

To respond to this situation, the President has agreed to meet regularly with the Head of the OIE in order to keep him up to date with CDB strategic thinking. This is a welcome change.

OIE Independence and Behavioural Issues

The Panel has concerns about some behavioural issues. For example, through both the interviews and the documentary review, we learned of considerable delays in processing both the independent evaluation reports and OIE's validations of the CDB's self-evaluations. Delays generally arise in receiving feedback on the independent reports, first from the relevant operational department and then from the AMT, and subsequently in providing the OIE with a management response, which is initially drafted by operations staff before being reviewed by the AMT. (OIE reports cannot be submitted to the OAC without the relevant management response.) This two-layer process for preparing submissions to the Board is inefficient and could potentially become a threat to evaluation's independence in the future, by delaying OIE's timely reporting to the OAC.

OIE validations of the CDB self-evaluations are also submitted to the OAC, but it is in both sides’ interest to clear up any misunderstandings beforehand. Despite attempts to improve the timeframe for completing these validations, delays are more the norm than the exception. Table 2 below summarises our assessment of the behavioural aspects of independence.

Table 2: OIE and Behavioural Independence

Aspect: Ability and willingness to issue strong, high-quality, and uncompromising reports.
Indicator: Extent to which the evaluation unit (a) has issued high-quality reports that invite public scrutiny (within appropriate safeguards to protect confidential or proprietary information and to mitigate institutional risk) of the lessons from the organization's programs and activities; (b) proposes standards for performance that are in advance of those in current use by the organization; and (c) critiques the outcomes of the organization's programs, activities and entities.
Assessment: Partially complies. Paucity of data and documentation sometimes hinders the quality of reports. The OIE emphasizes the learning side of evaluation, and is cautious in its criticism, recognising that management is going through a transitory stage and can still be overly defensive.


Aspect: Ability to report candidly.
Indicator: Extent to which the organization's mandate provides that the evaluation unit transmits its reports to the Management/Board after review and comment by relevant corporate units, but without management-imposed restrictions on their scope and comments.
Assessment: Partially complies. Reporting to the Board is sometimes compromised by delays in the review/comment process between the OIE and the CDB. Any delay in the production of a Management Response will also impair the timely submission of a report to the Board, since the two have to be submitted together.

Aspect: Transparency in the reporting of evaluation findings.
Indicator: Extent to which the organization's disclosure rules permit the evaluation unit to report significant findings to concerned stakeholders, both internal and external (within appropriate safeguards to protect confidential or proprietary information and to mitigate institutional risk); and who determines the evaluation unit's disclosure policy and procedures – Board, relevant committee, or management.
Assessment: Partially complies. The OIE conforms to the CDB's disclosure policy. However, the dissemination of evaluation findings currently appears to be restricted to website publication and reports to the Board. A more targeted communication strategy that includes other key stakeholders, e.g. project implementers in the BMCs, should be developed and put in place.

Aspect: Self-selection of items for the work program.
Indicator: Procedures by which work program items are chosen, through systematic or purposive means, by the evaluation organization; consultation on the work program with Management and Board.
Assessment: Complies. The OIE ensures that its work program is drawn up after consultation with both CDB Management and the Board, to seek their input on relevant topics and themes.

Aspect: Protection of the administrative budget, and other budget sources, for the evaluation function.
Indicator: Line item of the administrative budget for evaluation determined in accordance with a clear policy parameter, and preserved at an indicated level or proportion; access to additional sources of funding with only formal review of the content of submissions.
Assessment: Partially complies. The administrative budget supporting OIE work is protected. Access to additional sources of funding is possible if well argued and justified, but the approval process is complex and inefficient. (See Figure 1 below.)

OIE and Protection from External Influence or Interference

Our overall assessment is provided in Table 3 below. The OIE's independence in the design, conduct and content of its evaluations does not appear to be subject to any external interference. However, securing funding from sources outside the OIE's administrative budget, i.e. from the Special Development Fund, is an unduly complex and long process. As such, we consider that the current funding process can affect the OIE's choice with regard to the type of evaluations it can undertake. (See Figures 1 and 2 below.)

Table 3: OIE and its Independence from External Influence or Interference

Aspect: Proper design and execution of an evaluation.
Indicator: Extent to which the evaluation unit is able to determine the design, scope, timing and conduct of evaluations without Management interference.
Assessment: Complies – however, within the limits of the restricted human and financial resources available.

[Comment – John Mayne, 19/03/16: Maybe coming later, but do we say anything about the size of the budget? Always a tricky subject, but does it allow them to do even a few decent evaluations? MLL: Under the resources section.]

[Comment – Bastiaan de Laat, 19/03/16: We could make a suggestion to disconnect the two, as does the AsDB, which publishes the report with a placeholder for the management response, which "comes when it comes". At the EIB we have a two-step approach (first reading without management response, second reading with management response); there is normally one or two weeks needed to prepare the management response, and that deadline is generally respected. MLL: Can be put in the recommendations section.]

Aspect: Evaluation study funding.
Indicator: Extent to which the evaluation unit is unimpeded by restrictions on funds or other resources that would adversely affect its ability to carry out its responsibilities.
Assessment: Partially complies. OIE must work within the limits of the agreed administrative budget wherever possible; if additional resources are needed for studies, it must seek alternative funds elsewhere. The budget limitations can have an effect on the type of evaluations undertaken, and therefore on OIE's independence in terms of choice.

Aspect: Judgments made by the evaluators.
Indicator: Extent to which the evaluator's judgment as to the appropriate content of a report is not subject to overruling or influence by an external authority.
Assessment: Complies. The evidence available suggests that the Board and Management accept the evaluators' independent interpretations and conclusions. Management responses are agreed to be the accepted place to raise any difference of opinion.

Aspect: Evaluation unit head hiring/firing, term of office, performance review and compensation.
Indicator: Mandate or equivalent document specifies procedures for the (a) hiring and firing, (b) term of office, (c) performance review, and (d) compensation of the evaluation unit head that ensure independence from operational management.
Assessment: Complies. The Head of OIE is appointed by the CDB President, in agreement with the OAC, for a 5-year period, renewable once. The Head can be removed from office by the President or the Board, but only with the agreement of both parties. However, the Head reports to the President for all administrative and personnel matters. Even though this was not recommended in the Osvaldo Feinstein & Patrick G. Grasso report on independence in 2011, the BoD accepted the CDB's reasons for keeping this arrangement (e.g. most OAC members are non-resident and cannot oversee day-to-day work).

Aspect: Control over unit staffing.
Indicator: Extent to which the evaluation unit has control over (a) staff hiring, (b) promotion and pay increases, and (c) firing, within a merit system.
Assessment: Partially complies. All OIE staff members are treated in the same way as other CDB staff. The Head has limited control over the hiring, firing or promotion of OIE staff.

Aspect: Continued staff employment.
Indicator: Extent to which the evaluator's continued employment is based only on reasons related to job performance, competency or the need for evaluator services.
Assessment: Partially complies. Whilst the EP is clear about procedures for hiring, firing and promotion, all of which must conform with CDB human resource policy, nothing is said about how any difference of opinion between the CDB and the Head of the OIE regarding continued staff employment would be resolved, should the level of technical or interpersonal competencies needed to meet new demands change.

Avoidance of Financial, Personal or Professional Conflicts of Interest

This particular aspect refers to the organisation's Human Resources Policy; there must be provisions in place to protect against actual or potential conflicts of interest. The Panel requested, via the OIE, evidence from human resources on any such provisions, but did not receive an answer. It must be assumed that this aspect of independence, past or present, does indeed form part of normal CDB Human Resource Policies.

To conclude: the Panel is impressed with the measures the CDB has taken to assure the organisational independence of the OIE. Its independent status is accepted and respected by senior and line management.
[Comment – Bastiaan de Laat, 19/03/16: Why is this relevant? MLL: Because Michael recently wanted to extend a retiring staff member for only one year, because he did not have the skills to adjust to the more strategic evaluation needs. Management overturned his decision and extended the contract for a further three years.]

[Comment – Bastiaan de Laat, 19/03/16: What is the evidence for this? And what does it mean to "respect"? MLL: See changes.]
The OIE's budget is not independent of the overall CDB administrative budget; this affects its choice of evaluation types and approaches. Some of the behavioural issues affecting independence are also of concern, especially the delays in the exchange of documents between the OIE and the operations departments, which have a direct effect on timely reporting to the OAC. As for protection from outside interference, our concerns largely relate to OIE's independence over staffing issues; there are potential loopholes in the current arrangements that could undermine OIE's autonomy over its staff.

OIE's Strategy, Work Practices and Work Programme

The OIE has had to develop a plan to implement the Evaluation Policy. This raises such questions as: what are the priorities, and what is the timeframe for achieving which activities? These were partially addressed in the OIE work programme and budget for 2012 to 2014, but that programme proved to be over-ambitious. Much of the period 2012 to 2015 was therefore taken up with preparing OIE's shift in focus from project-based evaluations to high-level thematic and in-depth strategic studies. This has meant adopting a three-way approach: (1) for self-evaluations, reducing its time input to supporting the process; (2) for independent evaluations, taking stock of the gaps in coverage and expertise; and (3) networking to share experiences with centres of expertise and to align OIE with international practices. In addition, amongst other duties, it has been supporting the development of MfDR tools and systems, such as the Project Performance Assessment System, by providing advice and input on programme logic and monitoring needs. The OIE plans to conduct 2-4 high-level studies per year from 2016. The OIE has also chosen to increase the involvement of its professional staff in conducting independent evaluations. Outsourcing is still needed: when the study is funded by the SDF, when time is limited, and when specific expertise is needed.

But the plans appear to place little emphasis on the activities associated with evaluation management (e.g. knowledge management) and the time these require. Other time demands mentioned in the previous sections, such as delays in completing reports and validation work, have also affected OIE's plans. The more recent work plans have set the task of delivering utility-focused and timely evaluations, but they lack clarity on how the OIE proposes to surmount the time and data issues, which are far from new. In short, they lack a theory of change and a timeline. The challenges that have to be dealt with to enable the OIE to move up the MDB evaluation pyramid[228] are brought out in the remaining sections of this Review, not least given the limited resources available.

To conclude: the OIE has made a first step in proposing a strategy for establishing itself as an independent evaluation resource. But its strategy lacks a theory of change and a prioritisation of tasks, which should include more emphasis on evaluation management activities.

The Value / Usefulness of OIE's Independent Evaluations

Evaluation is a powerful tool that can provide useful, evidence-based information to help inform and influence policy and practice. But useful evaluations depend not only on the evaluators' skills, but on several other important factors as well: (1) planning evaluations to be relevant to the priorities of the organisation's work, and delivering their results in time to be useful; (2) the degree of consultation with, and ultimately ownership by, those who seek evaluative information; (3) the tools used to support the evaluation process per se; and (4) the credibility and quality of the evaluation products.[229]

[228] US Treasury Report to Congress on Evaluation Standards and Practices at the Multilateral Development Banks (September 2014, Annex C).
[229] These aspects reflect the principles and good standards of the Evaluation Cooperation Group and the evaluation community more generally.

1. Planning relevant and timely evaluations

The OIE is now working on a 3-year rolling work plan that sets out the broad areas for enquiry. So far, there are no agreed criteria for selecting the specific topics for independent evaluation, although the priorities tend to reflect those of the CDB's strategic plan. Decision-making is thus rather ad hoc, based on a process of dialogue between the OIE and the CDB, and between the OIE and the Board.

One of the OIE's two objectives for 2015, therefore, was to define a work plan and agree priorities based on an approach that is "utilisation-focused". This means that the studies are selected and planned to be relevant and useful to the organisation's needs.

The OIE has achieved this objective with respect to its latest studies, which concern the Special Development Fund (SDF) 6&7 Multicycle Evaluation, the Haiti Country Strategy evaluation and the evaluation of the CDB's Policy Based Operations. Each of these three has been planned to deliver its results in time to provide the CDB Board of Directors with relevant information for negotiating the next round of funding. In spite of some delays, due to a myriad of reasons – not least the extra effort needed to secure essential data – the studies are expected to deliver on time.

The processes for agreeing OIE's work plan and specific evaluations, on the one hand, and for securing alternative funding, on the other, are shown in Figure 1 below. The Panel was surprised to learn how bureaucratic (in terms of the internal approval process) and inefficient (in view of the time it takes) the process seems to be. The concern here is that such a process could pose a threat to assuring the Board of "timely studies".

Figure 1: Selection of Evaluation Topics and Funding Source

[Flowchart; the recoverable elements are: the 3-year Work Programme and Budget (approved by the Board) and the annual OIE report and work plan submitted to the OAC; consultation with CDB Operations and the OAC/Board on the selection of the evaluation topic; OIE drafting of the Terms of Reference / Approach Paper; internal review of the Approach Paper; specific evaluation study design and budgeting; finalisation of the Approach Paper and submission to the OAC/Board; OAC approval (recorded in the OAC minutes); Board approval necessary if the budget is above USD 150,000, Board notification only if USD 150,000 or below.]
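To make the USD 150,000 decision rule in Figure 1 concrete, the following is a minimal sketch, in Python, of the approval-routing logic as the Panel understands it; the function name and structure are ours, purely for illustration, and form no part of any CDB procedure:

    def board_action(study_budget_usd: float) -> str:
        """Illustrative routing rule from Figure 1: studies above USD 150,000
        require Board approval; at or below that threshold the Board is
        notified only."""
        threshold = 150_000
        if study_budget_usd > threshold:
            return "Board approval required"
        return "Board notification only"

    # For example, the SDF-funded SDF 6/7 evaluation (USD 255,000) would fall
    # on the approval side of the threshold:
    print(board_action(255_000))  # Board approval required
    print(board_action(120_000))  # Board notification only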

[Comment – John Mayne, 19/03/16: I hope we have some suggestions! MLL: Check the recommendations to make sure I did this, please!]
2. Consultation and ownership

"The credibility of evaluations depends to some degree on whether and how the organization's approach to evaluation fosters partnership and helps build ownership and capacity in developing countries." (ECG good practices)

The OIE engages with the OAC, CDB senior management and operations in agreeing its 3-year work plan, and then in selecting the specific topics and themes. It also discusses the evaluation approach paper (design and implementation plan) with the CDB and the OAC before completing the final version. However, preliminary and final drafts of the report are submitted to CDB line and senior managers only for comment and the correction of factual errors. Only final versions are passed to the OAC. A series of discussions is held first with the CDB, and then with the OAC, on the results and their implications. Discussions with the OAC are more limited, due to the overburdened agenda of OAC and Board meetings, as previously discussed.

In short, the OIE is to be commended for following the recommendations of professional good practices and standards on participative approaches; it has succeeded in introducing a modus operandi that involves the key players in the selection of evaluation topics, the evaluation designs and their results. Figure 2 below provides an overview of the evaluation implementation and stakeholder engagement processes.

Figure 2: Evaluation Study Implementation and Feedback Loops

[Flowchart, first part; the recoverable elements are: the detailed Terms of Reference (or the Final Approach Paper, if sufficiently detailed); the funding track – either the OIE administrative budget, or the SDF, the latter requiring a TA Paper (content similar to the Approach Paper but in a different format) approved by the internal Loans Committee; Board approval of the Final Approach Paper/ToR; OIE selection and contracting of consultants (if any); and the three implementation arrangements – Arrangement A: fully outsourced to external consultants, with oversight by OIE; Arrangement B: conducted by OIE staff; Arrangement C: conducted jointly by external consultants and OIE.]

[Comment – Bastiaan de Laat, 19/03/16: On which basis? MLL: Professional standards on participatory approaches for increasing ownership and buy-in.]
[Flowchart, continued (notes 19 and 20 below refer to this figure); the recoverable elements are: preparations – a detailed evaluation plan (including tools, timeline, etc.) and logistics; production of an Inception Report / Approach Paper; data collection and analysis; a presentation/workshop (summary and slides) of interim findings and conclusions, discussed with the CDB for immediate feedback and validation; submission of the Draft Final Report to OIE, with review loops between OIE and the CDB (potentially also the BMC) and feedback to the evaluation lead; submission of the Final Report to OIE for approval; the final OIE-approved report sent to CDB Senior Management for a Management Response, considered by the CDB AMT; the Final Report and Management Response submitted together to the OAC/Board for endorsement; and preparation for disclosure and dissemination.]

Notes to Figure 2

19. The OIE informed the Panel that this is an abbreviated version: there are, for example, additional steps (secondary processes) when evaluations are procured (tendering or single source), when there are additional review loops, updates to the OAC, etc.

20. The OAC may also decide to return the report to OIE, the Panel was informed, or to demand specific actions from Management based on the report.

This process is engaging, and appears to have secured senior management and OAC interest and buy-in, as witnessed in the latest studies. But there is a downside too: the process takes much time and, in our view, is partly unnecessary. The Panel appreciates that staff from operations as well as the AMT may both want to confer on an appropriate management response, but this should not be the case for reviewing an independent report for factual errors. The two-phase approach seems somewhat inefficient in our opinion.

Contact between the OIE, the CDB and/or the OAC during the actual study implementation is most often restricted to the occasional progress report, particularly when studies run behind schedule. There is no "accompanying group" for individual studies, which would include both internal and possibly external partners. Such advisory groups have shown their worth in a number of contexts, both for improving buy-in and for providing strategic input. The OIE does, however, arrange discussions for reflecting on emerging findings, but we are not sure how systematic this feedback loop is.

More generally speaking, outside of an evaluation study, the OIE has limited dealings with operations. The OIE has an advisory role in providing them with help, particularly training, guidelines and tools to support self-evaluations. We are nevertheless concerned about the seeming distance between the two and how this has affected the perceived value of evaluation. (For more on this point, please see the section below on "Self- and Independent Evaluations".)

But the Panel also wishes to stress that this is not the case for newly appointed senior managers. A much more open attitude to evaluation, and an appreciation of its potential value, was evident; they expressed interest in drawing out important lessons on what works, how, for whom, and under what conditions. In one case, interest was followed up in practice: the OIE was recently invited by a senior manager to share evaluative knowledge and experience with his staff regarding policy based operations.

Certainly, we can say that, overall, the key stakeholders within the CDB are adequately integrated into the evaluation process so as to foster their buy-in and ownership. But more generally, we feel that the utility of independent evaluations can be improved by fostering a supportive climate that wants to learn through calculated trial and error. The constructive criticism that evaluation offers can add value to understanding the strengths and weaknesses of such strategies. This, however, cannot be done overnight; it takes a long time.

3. Tools to support the evaluation process

So far, during this transitional phase, the OIE has mainly focused on improving the tools that support the operations areas' self-evaluations. This has left the OIE with little time to produce the checklists or tools to support its own studies. There are plans to develop an OIE Manual to guide and support the independent evaluation process. Such plans should be encouraged, as these documents will form a very important part of training, particularly for newcomers to the OIE team.

In the meantime, the OIE and operations staff refer to the Performance Assessment System (PAS) Manuals for evaluation activities. The manuals are based on the DAC criteria and ECG principles. Much emphasis is given to the rating system and to how and what should be rated.

409

Page 410: f01.justanswer.comf01.justanswer.com/nFcTOGCI/External+Review+of+the+OIE+v.20.3.1… · Web viewf01.justanswer.com

However, we find them lengthy, unwieldy and overcomplicated. Moreover, such manuals should be used for reference; they cannot and should not replace first-hand training in how to plan, conduct and manage the evaluation process.

Quality Assessment (QA) and Quality at Entry (QaE)

There was a transition period between 2012 and 2014 to establish the OIE. Work on the PAS, QaE, PCRs and ARPP, which had started earlier, was therefore completed after the OIE came into existence, but this work effectively had no formal 'home' in operations. The Panel was told that there had been some discussions about creating a Quality Assurance unit within CDB operations, but the current status is unclear.

The QaE Guidance Questionnaire was developed earlier and completed by the OIE. It was used to assess the documents that came to the OIE for comment at the review stage. The results were then sent to the Portfolio Manager/Project Coordinator, indicating any gaps or issues that needed to be addressed or clarified. QaE Guidance Questionnaires were developed for all the Bank's lending products and CSPs, and to assess the quality of supervision.

After the QaE was launched Bank-wide, several operations officers saw the merit of using the QaE Guidance Questionnaire in the field, and adopted it as a tool for use during appraisal missions in order to cross-check and test their data collection and analysis.

OIE's use of the QaE was discontinued in 2014, due to limited resources and a stronger focus on evaluations. It still sometimes comments on specific appraisals, but very selectively.

Both QaE and QaS (quality at supervision) are also addressed in the PAS Manuals. In addition, the QaE and PAS have been incorporated in Volume 2 of the Operations Manual (OPPM).

The Review Panel assessed the QaE forms. They are relatively standard, adapted to the specificities of the CDB. They contribute to judging a project's expected quality in a relatively objective way. As such, they are helpful, as a benchmark, in the ex-post assessment of projects.

The Panel considers that the lack of an established Quality Unit in the CDB (independent of the OIE) is a weakness that should be addressed in the near future.

4. Credibility and Quality of Evaluation Products

As with many other MDBs, evaluation activities include both independent and self-evaluations; the latter take the form of completion reports on operational projects and country strategy programmes, and are prepared by operations staff. The OIE then validates the quality of such reports. The self-evaluations should inform the more strategic studies conducted independently by the OIE. (More on the relationship between the two is provided later in this Review.)

An independent evaluation is processed as follows: the OIE prepares an Approach Paper (AP) for approval by the OAC. If the study is to be outsourced, the AP becomes the basis for a Terms of Reference (ToR), which, subject to the size of the budget, may be put to tender. The contracted evaluator then prepares an Inception Report (IR) after some desk and field research has taken place; this intermediary report is not produced if the OIE itself is conducting the evaluation. Sometimes a Progress Report is submitted, but otherwise the next stage is the delivery of the final report in various drafts. (Assessments are like evaluations, but more limited in scope and depth of analysis.)

Since 2012, the OIE has produced a range of studies and approach papers. This review is based on those listed below, as provided by the OIE, covering the period from May 2012 to December 2015: 3 evaluations, 4 assessment studies, 14 validations of self-evaluations and 3 Approach Papers for upcoming evaluations. These are listed in Table 4 below.

410

[Comment – Bastiaan de Laat, 19/03/16: To be added – one inception report.]

[Comment – John Mayne, 19/03/16: Somewhere here there needs to be a discussion of advisory groups.]
Page 411: f01.justanswer.comf01.justanswer.com/nFcTOGCI/External+Review+of+the+OIE+v.20.3.1… · Web viewf01.justanswer.com

Table 4: List of studies (N = 24) submitted to the Board for the period January 2012 to 31 December 2015

Board Meeting 251 (May 2012):
- Ex-Post Evaluation Report on Road Improvement and Maintenance Project, Nevis – St. Kitts and Nevis
- Validation of Project Completion Report on Sites and Services – Grenada
- Assessment of Effectiveness of Implementation of Poverty Reduction Strategy 2004-09

Board Meeting 253 (Oct. 2012):
- Assessment of Extent and Effectiveness of Mainstreaming Environment, Climate Change, Disaster Management at CDB

Board Meeting 254 (Dec. 2012):
- Assessment of the Implementation Effectiveness of the Gender Equality Policy and Operational Strategy of the Caribbean Development Bank
- Validation of Project Completion Report on Enhancement of Technical and Vocational Education and Training – Belize
- Validation of Project Completion Report on Fourth Road (Northern Coastal Highway Improvement Section 1 of Segment II) Project – Jamaica
- Assessment of the Effectiveness of the Policy-based Lending Instrument

Board Meeting 256 (May 2013):
- Validation of Project Completion Report on Expansion of Grantley Adams International Airport – Barbados
- Validation of Project Completion Report on Fifth Water Supply Project – Saint Lucia

Board Meeting 261 (May 2014):
- Validation of Project Completion Report on Immediate Response Loan, Tropical Storm Gustav – Jamaica
- Validation of Project Completion Report on Social Investment Fund – Jamaica
- Validation of Project Completion Report on Disaster Mitigation and Restoration – Rockfall and Landslip, Grenada

Board Meeting 263 (Oct. 2014):
- Validation of Project Completion Report on Basic Education Project – Antigua and Barbuda
- Approach Paper for SDF 6 & 7 Multicycle Evaluation

Board Meeting 264 (Dec. 2014):
- Validation of Project Completion Report on Policy-Based Loan – Anguilla
- Validation of Project Completion Report on Immediate Response Loan, Tropical Storm Arthur – Belize
- Evaluation of Technical Assistance Interventions of the Caribbean Development Bank Related to Tax Administration and Tax Reform in the Borrowing Member Countries 2005-2012

Board Meeting 265 (March 2015):
- Approach Paper for the Evaluation of Policy Based Operations

Board Meeting 266 (May 2015):
- Validation of Project Completion Report on Upgrading of Ecotourism Sites – Dominica
- The Evaluation of the Caribbean Development Bank's Intervention in Technical and Vocational Education and Training (1990-2012)

Board Meeting 267 (July 2015):
- Validation of Project Completion Report on The Belize Social Investment Fund I Project – Belize

Board Meeting 268 (Oct. 2015):
- Approach Paper, Country Strategy and Programme Evaluation – Haiti

The review and analysis of these documents are based on the UNEG Quality Checklist for Evaluation Reports (http://www.uneval.org/document/detail/607), as well as on ECG guidance (the "Big Book" on Good Practice Standards).
[Comment – B de Laat, 19/03/16: Marlène – maybe make one column per product and put the titles against the timeline; that would give a clearer overview. MLL: There is not much sequence in particular products to show the link.]
Approach Papers

Three Approach Papers (APs) were made available to the Panel (see Table 4 above). An AP describes the rationale for the evaluation, the background to the topic evaluated, the evaluation framework (criteria and questions) and the approach. It also describes the team and provides an initial plan. Being the first main deliverable of OIE's evaluation process, APs are the starting point and therefore a major determinant of the roll-out of each evaluation. APs therefore "have to get it right".

The APs examined are clearly written, well structured and of reasonable length.[230] We were surprised to find, however, that they do not make explicit the objectives of the evaluated intervention(s), e.g. through a clear objective tree, or through an explicit theory of change, intervention logic or logframe. Whilst one of the APs contains, in an appendix, a results framework for the evaluation, the results framework for the intervention (PBO) itself is lacking.

Inception reports

Only one Inception Report was given to the Panel for review (SDF 6&7). It gives an in-depth description of the evaluated programme and provides a clear theory of change. It is good practice that this is established after a pilot field mission, which helps to amend the initial AP on the basis of field observations and to sharpen the evaluation questions if needed.

However, it is still considered good practice to have the theory of change elaborated in the initial design documents; this would facilitate OIE evaluations after project completion. Establishing the theory of change of any intervention could be included more explicitly in the QaE form, to be developed jointly by the Quality unit referred to above and the OIE.
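As a purely hypothetical illustration of what "making the theory of change explicit" can mean in practice, a results chain can be captured as structured data at design time; the class and field names below are ours, not drawn from any CDB or OIE document:

    from dataclasses import dataclass, field

    @dataclass
    class ResultsChain:
        """Hypothetical, minimal representation of an intervention's theory of
        change; the class and field names are illustrative, not CDB/OIE
        terminology."""
        inputs: list[str] = field(default_factory=list)
        activities: list[str] = field(default_factory=list)
        outputs: list[str] = field(default_factory=list)
        outcomes: list[str] = field(default_factory=list)
        impacts: list[str] = field(default_factory=list)
        assumptions: list[str] = field(default_factory=list)  # causal assumptions an evaluation should test

    chain = ResultsChain(
        inputs=["loan financing", "technical assistance"],
        activities=["road rehabilitation works"],
        outputs=["km of road upgraded"],
        outcomes=["reduced travel time", "lower vehicle operating costs"],
        impacts=["improved market access for rural communities"],
        assumptions=["maintenance budget is sustained after project completion"],
    )
    print(chain.assumptions)  # the ex-post evaluation can then test these explicitly

Recording the chain and its assumptions at the design stage, in whatever form, gives a later evaluation an empirical benchmark rather than a logic it must reconstruct after the fact.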

Evaluations and Assessments

Three evaluations and four assessment reports completed during the review period were considered. Assessments are similar to evaluations but have a narrower scope: they focus on a limited set of aspects or judgment criteria, mainly effectiveness, i.e. the achievement of objectives. Evaluations generally base their judgment on the internationally recognised DAC criteria as well as on aspects of the CDB's and BMCs' management of the intervention.

In general, these reports are of reasonable quality. In the main, they explain the evaluated object231 and state the evaluation objectives. The findings are organised around the evaluation criteria or questions detailed in the scope and objectives section of the report, and are based on evidence derived from the data collection and analysis methods described in the methodology section. The reports tend to dwell on the limitations the evaluation encountered, though without becoming defensive. In one case (the PBL Assessment) the report starts with a summary of reviews on the topic done by other MDBs. This was a pleasant surprise and indeed a good practice that could well be adopted in future evaluations.

However, the reports also show several significant weaknesses:

- Reports do not always provide clear (reconstructed) intervention logics or theories of change for the intervention(s) evaluated.232 Evaluation criteria and questions are defined at a fairly general level. They are translated into more precise "research questions" (in an "Evaluation Design Matrix", for each project and each criterion). However, it is unclear how these questions relate to the intervention logic, as this is not made explicit. This may be done in inception reports (of which, as noted above, only one was available for review), but it should also be done in the final reports.

230 Opportunities remain, of course, to be more concise and to move parts to appendices, e.g. detailed descriptions of the evaluation team or part of the description of the evaluated intervention.
231 Sometimes at great length: in the SDF 6 & 7 multicycle evaluation report, for instance, the findings only begin on page 30.
232 Again, the SDF 6 & 7 evaluation is said to be guided by a "Logic Model" which is not explained.



- The reports do not describe the link from the evaluation questions to the answers: how the evaluation judgments are made and how these ultimately translate into ratings for each criterion and each project. In other words, the explanation provided in the evaluation frameworks is inadequate. The "evaluation design matrix" currently used does not provide sufficient insight into how an intervention's performance is ultimately judged (see the illustrative sketch after this list). Links between findings, conclusions and recommendations could be improved by making this more explicit. Reports should also tell the story of how the evaluand is credibly linked to any observed outcomes and impacts, and should be clear about how causal claims are made.

- With the exception of the PBL Assessment, the reports are lengthy and detailed. One reason for this is an over-emphasis on ratings: their detailed discussion, project by project and criterion by criterion, occupies a very prominent position in the main body of the reports. Although ratings are traditionally an important element in MDB evaluations, too strong an emphasis can be tedious and may distract the reader from the real lessons to be drawn. The detailed discussion of ratings, and their evidence base, would be better placed in an appendix, with a brief summary in the main report. This would give the lessons and recommendations a more prominent position than they now have, and would make the reports not only shorter but also more interesting to read, which could add value to evaluation's image within the organisation.

- The reviewers feel that the OIE evaluations tend to over-emphasise objectives-based evaluation234 and the DAC criteria to the exclusion of other evaluation approaches such as Developmental Evaluation (Patton, 2010).235 Evaluation should be case-specific and answer the actual information needs of managers and other decision makers rather than always concentrating on final performance.

- Related to the previous point (and again with the exception of the PBL Assessment), the executive summaries (approximately 8 pages) are too long. To increase an evaluation report's potential impact, they would need to be reduced to 2-3 pages and be more focused; again, this could be achieved by dwelling less on the individual ratings of projects and more on key findings, lessons and conclusions. More generally, reports could be better adapted to the needs of different audiences. Although not strictly limited to evaluations, the Health Evidence Network reports236 are a model that could be adapted for evaluation reporting purposes; they are specifically geared towards addressing policy and decision making.

- The "Recommendations to BMCs" are an interesting feature of the reports, although we are unsure to what degree such recommendations can be effectively followed up by OIE or the Bank; they could certainly be taken up with BMC Board members.

234 The focus of an objectives-oriented evaluation is on specified goals and objectives and on determining the extent to which these have been attained by the relevant intervention. See, for example, Worthen, Sanders and Fitzpatrick (1997) Program Evaluation: Alternative Approaches and Practical Guidelines (2nd ed.). White Plains, NY: Addison Wesley Longman.
235 Patton, M.Q. (2010) Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use. Guilford Press.
236 See the reports available from the WHO's Health Evidence Network at http://www.euro.who.int/en/data-and-evidence/evidence-informed-policy-making/health-evidence-network-hen


- Reports (e.g. the evaluation report on Technical Assistance) focus heavily on technical problems encountered during the evaluation. Although these are important issues, to improve the reports' flow and readability this material would be better placed in an appendix (see, for instance, the "Limitations" section of the TA report). What counts is the story of the intervention, not the story of the evaluation.
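To make concrete what the Panel means by making the rating logic explicit, the sketch below shows one transparent way in which sub-question scores could be rolled up into criterion and project ratings. It is entirely illustrative: the criteria, the 1-4 scale and the equal weighting are our assumptions, not OIE's actual method.

```python
# Illustrative only: one transparent way to roll sub-question scores up into
# criterion ratings and an overall project rating. The DAC-style criteria,
# the 1-4 scale and the equal weights are assumptions, not OIE's actual method.

def criterion_rating(scores: list[int]) -> float:
    """Average of the sub-question scores for one criterion (1 = poor, 4 = excellent)."""
    return sum(scores) / len(scores)

def overall_rating(ratings: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted mean across criteria; stating the weights explicitly is the point."""
    return sum(ratings[c] * weights[c] for c in ratings)

ratings = {
    "relevance":      criterion_rating([4, 3]),
    "effectiveness":  criterion_rating([3, 2, 3]),
    "efficiency":     criterion_rating([2, 3]),
    "sustainability": criterion_rating([2, 2]),
}
weights = {c: 0.25 for c in ratings}  # equal weights, declared rather than implied

print(ratings)
print(round(overall_rating(ratings, weights), 2))  # -> 2.67
```

A table of this kind in each report, however simple, would let readers trace every rating back to its evidence base.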

OIE Validations of Project and Country Strategy Programme Completion Reports (referred to collectively as PCRs hereafter)

As noted above, the OIE has the mandate to validate the Projects and Economics Departments' PCRs and CSPCRs. In this period of transition, however, much of the OIE's work since 2012 has been taken up with the backlog of CDB self-evaluation validations. In theory, an estimated 15 completion reports fall due each year, but delays in submitting the reports for validation are commonplace. Following the change of Head in June 2014, the OIE therefore secured the OAC's agreement to reduce the number of validations to a maximum of 6 per year. Even so, the backlog continues to accumulate: only 2 PCRs were given to the OIE for validation in 2015.
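The structural nature of the backlog can be illustrated with a back-of-the-envelope calculation using the throughput figures quoted above; the zero starting backlog and the constant inflow are simplifying assumptions on our part.

```python
# Illustrative only: how the validation backlog compounds when completion
# reports fall due faster than the agreed validation ceiling clears them.
# Figures from the text: ~15 reports due per year, validations capped at 6;
# the zero starting backlog and constant inflow are simplifying assumptions.
DUE_PER_YEAR = 15   # estimated completion reports falling due annually
CAP = 6             # maximum validations per year agreed with the OAC

backlog = 0
for year in range(2015, 2021):
    backlog += DUE_PER_YEAR        # new reports falling due
    backlog -= min(backlog, CAP)   # the most the OIE can clear
    print(year, backlog)           # the gap widens by ~9 reports a year
```

Even at full throughput the gap widens by roughly nine reports a year; in practice the constraint in 2015 was on the supply side, with only 2 PCRs actually submitted.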

The validations tend to repeat the different items reported in the PCRs and then provide extensive comment on each. The PCVRs go into great depth and detail, which makes the documents rich and complete. This is their strength, but also their weakness: the depth and level of detail, as well as the repetition of the original PCRs, make PCVRs (overly) lengthy (20-40 pages) and difficult to read. The OIE reported spending approximately 27.2% of its time in 2015 on validating PCRs, compared with 44.4% on its core work of doing or managing the higher-level evaluations; that is, time spent on validation amounts to more than half the time spent on core evaluation work (27.2/44.4 ≈ 61%). Finally, the PCVRs now seem to be, to a great extent, a standalone output of OIE; it is not always clear to us how they are used as the "building blocks" for the OIE's independent evaluations. Making this link clearer in the independent evaluations would help demonstrate the value of the time spent on the self-evaluation validations.

To conclude, the review finds that the OIE has taken steps to improve the perceived utility of evaluation in several ways: first, by planning its work to provide relevant and timely evidence geared towards helping the Board with its oversight and decision-making tasks (topics are selected through dialogue between the OIE and key CDB stakeholders and reflect the priorities of the CDB's strategic plan); and second, by securing the interest, and consequently the buy-in, of the OAC and CDB senior management through engaging their input throughout the evaluation process. This is evidenced by the reported interest in the latest three studies: the Country Strategy and Programme Evaluation for Haiti, the evaluation of policy-based operations, and the SDF 6 & 7 multicycle assessment.

The OIE products are of acceptable quality and could be better still if some of the shortcomings were addressed. However, it is not the products themselves that impair the utility of OIE's work; utility is undermined in two ways: (1) by delays in commenting on PCRs (on OIE's side) and in providing feedback on the independent evaluations (on the side of operations and management); and (2) by inefficient processes for agreeing topics and funding sources and for providing OIE with management responses to its reports.

Putting Evaluation to Use: transparency, feedback and follow-up

There are several ways that evaluation can be, and is being, used. As John Mayne has pointed out in his many publications on the issue,237 when we talk of evaluation use we are mainly thinking of instrumental use: use made to directly improve programming and performance. But there is also conceptual use, which often goes unnoticed or, more precisely, unmeasured; this refers to use that enhances knowledge about the type of intervention under study in a more general way. There is also reflective use: using discussions or workshops to encourage and support reflection on the evaluation findings to see how they might contribute to future strategies.

237 See, for example, the opening chapter of Läubli Loud, M. and Mayne, J. (eds.) (2014) Enhancing Evaluation Use: Insights from Internal Evaluation Units. Sage Publications.



In the case of the CDB there is some evidence to suggest that use is not only instrumental; other types are also developing. For example, the review of draft evaluation reports includes reflective workshops that not only discuss the findings but also seek to draw out the important lessons (reflective use).

Another important use, as recommended by the ECG, is to draw a synthesis of lessons from a number of evaluations from time to time and make it publicly available. The Panel was impressed to hear that in the past the evaluation unit had done this, drawing on lessons from evaluations of the power sector (conceptual use). Although nothing has happened since, it is now on the "to do" list for 2016 (OIE's 2016 Work Plan).

As for instrumental use, responsibility for using the knowledge generated through evaluation, and for possibly drawing up an action plan, rests with CDB senior management and the relevant CDB department and division. Oversight of the application of recommendations and the uptake of lessons drawn is the responsibility of the OAC.

Evidence on how evaluations have actually contributed to decisions or negotiations is lacking or confusing. Certainly the OIE is unaware of the extent to which its evaluations are put to use. On the one hand, the OAC minutes sometimes indicate that lessons learned are integrated into the next phase. On the other hand, the reviewers were told that in the past the evaluation results were often "too old" to be of use, as the lessons had already been drawn and used well before the report was completed. Gaps in people's memories of how evaluative information from previous studies was used may also account for the scarcity of evidence.

In response, the Panel questioned CDB staff and the OIE about a particular study, the Technical and Vocational Education and Training Assessment. The feedback was somewhat contradictory. On the one hand, the study was criticised as "confirming" news rather than bringing "new news". On the other hand, we learned that in October 2015 the Board of Directors approved a proposal for the revision of CDB's Education and Training Policy and Strategy; work on this has already begun and an external consultant has been engaged to lead the process.

Although it is one of the OIE's tasks to set up a database of results and lessons learned from evaluations, so far this has not been a priority. There is also currently no systematic tracking of lessons or recommendations arising from the evaluations, or of any progress in their uptake. (The Panel has already referred above to the OAC's lack of oversight of the use of evaluation.)

The OIE's role in supporting CDB's organisational learning is clearly specified in the Evaluation Policy, with many good suggestions for knowledge-sharing activities such as "brown-bag lunches, workshops, pamphlets and short issues papers" (p. 19). So far, however, the OIE's lead role on the knowledge-sharing side appears to be quite limited. It has provided advisory input in Loan Committee discussions, and organises workshops together with the relevant operations department to discuss the implications of evaluation studies. Ultimately, of course, the uptake of evaluation results and knowledge is in the hands of management. But the evaluation unit has an important role to play as knowledge broker and knowledge manager; both roles have tended to be underplayed in OIE's work plan so far.

Transparency: The Communication Strategy

Recently, with the approval of its new Disclosure Policy, the CDB has started to post its independent evaluation reports on its website (nothing is posted on the self-evaluations). The website also presents a good overview of the role and function of the OIE and of evaluation within the CDB. This is a step in the right direction for sharing information. However, in our view the CDB's communication strategy is the weakest part of the evaluation system to date.


The Panel has already commended the OIE for its efforts to engage the CDB and the OAC in evaluation work. But reporting and communicating the lessons seem to be targeted entirely at the Board and the CDB. Moreover, the 2015 budget provides only US$2,000 for communication, none of which is intended for outreach.

The reviewers feel that active engagement with the more indirect stakeholders, for example project implementers in the BMCs, NGOs or project beneficiaries, is relatively weak.238 There appears to be little reflection on drawing out significant messages for this broader group of stakeholders, or on how to transmit them to the "right" people in the "right" way (knowledge brokerage).

To conclude, evidence on the uptake of evaluation is either confusing or sparse. It is unfortunate that no systematic record-keeping system has so far been put in place to track lessons learned or the uptake of recommendations (or the actions agreed in management responses). The OIE plays a weak role in brokering the knowledge generated through evaluations to the benefit of external partners, and in managing such knowledge. Although the Evaluation Policy specifies the need for "distilling evaluation findings and lessons learned in appropriate formats for targeted audiences both within and outside the CDB" (p. 19), such a targeted communication strategy has yet to be developed and budgeted.
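A systematic record-keeping system need not be elaborate to be useful. The sketch below indicates the kind of fields a minimal recommendation-tracking record might capture; the field names and status values are our illustrative assumptions, not an existing OIE or CDB format.

```python
# Minimal sketch of a recommendation-tracking record; the field names and
# status values are illustrative assumptions, not an existing OIE format.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class RecommendationRecord:
    evaluation: str                     # e.g. "TVET Evaluation (1990-2012)"
    recommendation: str                 # the recommendation as issued
    addressee: str                      # e.g. "operations", "BMC", "senior management"
    management_response: str = ""       # agreed / partially agreed / rejected, with rationale
    agreed_action: str = ""             # the action plan item, if any
    status: str = "open"                # open / in progress / implemented / dropped
    next_review: Optional[date] = None  # when the OAC should next check progress

def overdue(records: list[RecommendationRecord], today: date) -> list[RecommendationRecord]:
    """Items whose follow-up date has passed without closure: a ready-made OAC agenda item."""
    return [r for r in records
            if r.status in ("open", "in progress")
            and r.next_review is not None
            and r.next_review < today]
```

Even a spreadsheet with these columns, reviewed as a standing OAC agenda item, would address the tracking gap noted above.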

Strengthening Evaluation Capacities and Networking

From the outset in 2012, the OIE has stressed the importance of developing and strengthening evaluation capacities within the OIE, the CDB and, subject to available resources, the borrowing member countries. Building evaluation capacity in the BMCs and the CDB is one of the OIE's mandated tasks, and has figured on the work plan from the beginning (Work Programme and Budget 2012-2014). The idea of developing an internship programme for graduates from the Caribbean region was one idea advanced to help build local evaluation resources. To date, however, capacity building has focused primarily on OIE and CDB staff. One of the OIE's two objectives for 2015 was therefore to take up the challenge and "strengthen evaluation capacities and networking", including reaching out to the BMCs.

Developing OIE staff capacities

The change from project-level to strategic and thematic evaluations requires different evaluative skills and competencies. The MDB Evaluation Pyramid presented in Figure 3 below shows the different types of evaluation and the changing resource needs as one ascends the pyramid. Implicit here, too, is the change in the type of expertise and competencies needed as evaluation aspires to the higher levels.

Consequently, for 2015 the OIE set itself the objective of networking and developing working partnerships with regional and international evaluation entities and academic institutions. The rationale was twofold: (1) to secure further support and guidance; and (2) to increase its outreach and coverage through joint work and international exposure. Another, implicit aim was to benefit from partners' contacts in the BMCs wherever possible so as to improve data collection and quality.

Figure 3: The MDB Evaluation Pyramid239

238 A broader communication strategy is one of the principles and good standards of the Evaluation Cooperation Group and the evaluation community more generally.
239 US Treasury Report to Congress on Evaluation Standards and Practices at the Multilateral Development Banks (September 2014, Annex C).


The OIE has therefore linked up with Carleton University in Canada and the University of the West Indies (Barbados campus). The OIE was also approached by the Development Bank of Southern Africa to exchange experiences about setting up an evaluation entity in a "small" development bank. However, its attempt to become a member of the Evaluation Cooperation Group was not successful, for reasons beyond its control.

The OIE is to be commended for addressing the issue of staff competencies and professional development more generally. New developments in evaluation, as well as in the scope of OIE's work, may necessitate new competencies. For this reason, organisations such as the International Development Evaluation Association have recommended that the competencies of evaluators and evaluation managers be periodically reviewed. Several publications now exist on competency requirements and on approaches to the periodic review of staff competencies.240

It is not within this Review's remit to compare and contrast OIE's competencies with those recommended by international and national agencies. What we can say, however, is that the OIE demonstrates great forethought in taking this on board.

Capacity building within CDB

The OIE's objective also includes continuing to develop measures for improving the monitoring and self-evaluation side of CDB's work. OIE's strategy here is to use the windows of opportunity offered by training sessions organised by CDB as part of its shift towards MfDR, e.g. by Corporate Planning Services and Technical Assistance. For 2016 it is also planned to have the OIE present at the annual staff meeting and Learning Forum.

The OIE also organises some ad hoc training with operations, for example to help staff understand new tools (e.g. for drawing out lessons from self-evaluation reports) and, more generally, to help staff appreciate how evaluation can add value to the organisation's work. Measures include providing advisory services on demand and providing training alongside the introduction of new or revised tools.

240 E.g. IDEAS (2012) Competencies for Development Evaluation Evaluators, Managers and Commissioners; the Canadian Evaluation Society's Competencies for Canadian Evaluation Practice (2010); and the Swiss Evaluation Society's Evaluation Managers Competencies Framework (2014).



Capacity building in the BMCs

This is an ambitious task that would require additional investment, beyond the biennial work plans, to be effective. A modest attempt was made in 2015: from what we understand, the OIE has joined with Carleton University and the University of the West Indies, using their networks in some of the BMCs, to try to develop this aspect.

To conclude, we cannot comment on the quality of, or reaction to, such training, but we can commend the OIE for making capacity building one of its priority objectives. From both the Policy and the documents we reviewed, we note that capacity building was always seen as an important aspect of OIE's work but has hitherto received little strategic focus. The resources currently available to the OIE will limit the scope of such work in the BMCs, which in turn will continue to hinder the production of sound evidence for the OIE's evaluations.

Adequacy of the OIE’s human and financial resources to support its work

OIE's Human Resources

The OIE has a staff of five: the Head, one senior evaluation officer, two evaluation managers and one administrative assistant. Three of the five were recruited from within the CDB. This limited capacity means that it is not feasible to cover all the types of evaluation activity outlined in the Evaluation Policy. Yet there is some indication from the Board that the OIE should embark on impact evaluations at some future stage. An increasing demand for evaluation, and for impact evaluations in particular, would risk overstretching the OIE's capacity to deliver credible and useful evaluations. Moreover, there are many other designated OIE activities that should be recognised as valuable work: the validations; building CDB and BMC evaluation capacity; providing supervision and advice; knowledge management and brokerage; and managing evaluation contracts. The time needed for all of these may be underestimated in OIE's budgets, yet all are important for assuring best value from evaluation. The Panel is concerned that the demand for "doing" evaluations, together with OIE's interest in advancing its skills in high-level evaluations, may crowd out these other essential tasks.

Limited and unpredictable resources for independent evaluations

The OIE is funded from the general administrative budget and represents approximately 2.5% of the total. Whilst this is seemingly a higher proportion than in other MDBs, in real terms it is quite limited: 75% of the OIE budget goes to staff salaries, leaving US$190,000 in 2015 for external consultants and other expenses.

CDB's donors do not appear to specify a budget for monitoring and evaluation activities. This means, on the one hand, that there is no clear external budgetary recognition of operations' self-evaluation work or of OIE's time in the validation process and, on the other, that whilst donors expect to receive reports from independent evaluations, this expectation is not backed up when funds are allocated.

The resources available to the OIE for hiring external consultants have dropped from US$350,000 in the revised 2014 budget to US$120,000 in the 2015 indicative budget. The OIE estimates that for high-level evaluations the cost of external consultants is between US$90,000 and US$350,000 (the SDF 6 & 7 evaluation cost US$255,000); in the Panel's experience, this is a sound estimate. With one less staff member during 2014-2015, coupled with OIE's focus on dealing with the


backlog of self-evaluations amongst other priorities, the OIE was unable to execute some of the evaluations during the annual budget period. The budget was therefore reduced for subsequent years, but has proven insufficient to fund the OIE Work Programme. The OIE has consequently needed to turn to the only alternative source available at present, the SDF fund. But the SDF funding rules apply to specific countries and themes, which obviously restricts the OIE's choice of evaluation subjects. And since the SDF does not allow for OIE recurring costs such as staff travel, the SDF-funded evaluations have to be outsourced. As presented in Figure 1 above, the approval process is inefficient and causes delays. The Panel learned that additional funds, for example for specific studies, could be secured from within the administrative budget during the year, on condition that the request was based on sound arguments.
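The arithmetic behind this funding squeeze is stark. The following calculation is ours, but it uses only the figures quoted above:

```python
# Illustrative arithmetic using only the figures quoted in the text.
consultant_budget_2015 = 120_000        # US$, 2015 indicative budget for consultants
cost_low, cost_high = 90_000, 350_000   # US$, estimated consultant cost per high-level evaluation

print(consultant_budget_2015 // cost_low)   # 1: at best one low-end evaluation a year
print(consultant_budget_2015 // cost_high)  # 0: a high-end evaluation cannot be funded at all
```

At the quoted cost of the SDF 6 & 7 evaluation (US$255,000), the 2015 consultant budget would cover less than half of a single such study.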

Whilst the Panel fully appreciates that the Bank is operating within a zero-growth framework, the reviewers were surprised to learn that OIE funding is not sufficiently secured in line with its priorities and work plan. The need to seek alternative funding for individual studies does not allow for any flexibility and undermines the OIE's independent judgment of what needs to be done.

To conclude: the OIE is inadequately resourced to meet the expectations outlined in the CDB's Evaluation Policy. The Panel recognises that the CDB itself faces budgetary restrictions, but the current arrangements for securing extra funding are complicated and inefficient, and limit the OIE's ability to exercise autonomy in selecting its evaluation studies. Moreover, OIE budgets significantly underestimate the time needed for managing evaluations and other evaluation activities.

Self- and independent evaluation

Self-evaluations cover public sector investment, lending and technical assistance, policy-based loans, and country strategy programmes. Both types of evaluation are important, as they are at the very heart of the evaluation function; the self-evaluations are said to be the building blocks for the more strategic evaluations that the OIE is now undertaking.

The Evaluation Cooperation Group recommends that self-evaluations be carried out by the relevant operations department and, in turn, reviewed and validated by the organisation's independent evaluation office. The CDB's Evaluation Policy therefore describes "validating all self-evaluations" as one of OIE's essential oversight tasks.

Within the CDB, the self-evaluations should provide management with performance assessments and thereby serve an accountability function towards the CDB and the Board. To support the process, the OIE provides operations with manuals and checklists for guidance. Once a self-evaluation report is to hand, it is passed to the OIE for validation of its technical quality and credibility.241

In the CDB's case, however, there are well-documented issues that have affected the quality and timeliness of the self-evaluations and, therefore, the quality of the foundation on which to build the independent evaluations. Paucity of documentation within the CDB, paucity of data collected and available in the Borrowing Member Countries (BMCs), and delays in producing completion reports and, in turn, in having them validated by the OIE: all of these issues were systematically raised during interviews and in some of the independent evaluation reports. There appears to be little incentive to complete self-evaluations in a timelier manner.

Generally speaking, many of the monitoring data problems appear to be due to a lack of management oversight. For example, with the introduction of results-based management, the logframe and the monitoring and data needs are now systematically built into intervention design; however, the BMCs are not delivering the data as contractually agreed at the outset.

241 According to the Evaluation Policy, the OIE should validate all PCRs and CCRs. However, because of the backlog of reports and the delays in completing them (sometimes years later), since October 2015 the OIE has secured OAC agreement to validate a maximum of 6 per year, selected in consultation with the OAC.


Incentives to support any significant change towards building a results-based culture seem to be weak, and sanctions are rarely enforced when the supply of data is lacking or projects suffer lengthy delays. Although we appreciate the complexities of trying to enforce monitoring compliance, the result is that project deadlines have often had to be extended, data gaps are not satisfactorily dealt with and, in turn, there is a void in the quality and quantity of evidence available for the CDB's self-assessment of project performance. For some time this lack of oversight has been tolerated. Part of the problem is the low priority accorded by operations to completing the self-evaluation reports, coupled with the absence of any focal point within senior management to drive the process and deal with the problems.

No record is kept of how the self-evaluation results are actually used. They do not appear on the CDB website, although we were told that the findings are integrated into subsequent project designs. We are therefore somewhat unclear as to the present utility of these reports. The situation is exacerbated by a rather confused image of evaluation: some operations staff consider OIE's input (through validations or independent evaluations) to be at times over-critical, regulatory and of little added value, a threat rather than an opportunity for learning. Yet at the same time, evaluation is recognised as an integral part of results-based management.

According to the Evaluation Policy (p. 15), "The President, with the support of the Advisory Management Team, is accountable for encouraging and providing an environment where evaluation adds value to the overall management of CDB's activities and fosters a culture of critical analysis and learning". In the CDB, however, a learning culture appears to be still in its infancy, and the leadership role expressed in the Evaluation Policy is underdeveloped.

Some managers, however, seem to be starting to change the status quo. For example, a revised and simplified template for producing project completion reports is being considered, and mid-term project reviews are expected to be more stringent in looking at monitoring plans and practices and in tying disbursements to performance. In some cases we also learned of incentives being introduced to encourage project managers to complete their reports in a timelier manner. But much remains to be done and, since the OIE is no longer responsible for monitoring and project evaluations, there is a void that needs to be filled. It is up to line managers to drive this work forward.

To conclude, it is fair to say that, in view of a number of "frustrations" between the OIE and operations, largely to do with delays in exchanging comments on the various reports and with the paucity or absence of monitoring data, the added value that evaluation might offer to the operations area is poorly recognised. Moreover, the link between the self-evaluations and the independent evaluations for which they are the building blocks is not apparent. There is thus little incentive or management focus to drive any change to current practices. In other words, there is a lack of leadership to advance a learning environment in which evaluation can play a major part.


Part Three: General Conclusions and Recommendations

To conclude, with regard to the Evaluation Policy and OIE's independence, our Review finds that over the past few years the CDB has succeeded in establishing an independent evaluation office that is credible and respected. It reports to a Board committee and is thus organisationally independent from CDB management. Its work is grounded in an Evaluation Policy, agreed by the Board and the CDB, that reflects internationally recognised principles and good practices. The Policy sets out a broad scope of responsibility for the OIE which, however, seems over-ambitious given current resource constraints. The OIE clearly has both an accountability and a learning function, the latter supporting the development of an organisational learning culture. (So far, however, the uptake of recommendations and key lessons has not been systematically monitored or recorded.) In general, on the issues of independence, we conclude that the OIE meets the criteria for organisational and behavioural independence and is protected to a certain degree from external or contextual influences.

However, as the Independent Advisory Committee for Development Impact has said, "independent evaluation needs to have clout… credibility of evaluation hinges on public perceptions as well as on reality".242

We therefore highlight a few potential threats, even though there is no evidence to suggest that they are in any way real at present; it would be in the OIE's and CDB's interest to have these clarified sooner rather than later:

- Any delays incurred in reporting self- and independent evaluation results to the Board could be interpreted as operational interference.

- There is no agreed process for dealing with any conflict of interest between the OIE and management in reporting results; it is simply expected that any disagreements will be reported in the management response.

- Another possible threat is the Head of OIE's lack of complete autonomy over staff matters: recruitment, termination, continuation and professional development. The Policy is not sufficiently clear about who has the final word in the case of disagreement.

- Finally, on resources, our Review accepts the limited funds available to the CDB and the fact that the OIE's budget is not independent but operates within the Bank's budgetary limitations. Nevertheless, we feel that more flexible arrangements could be devised that would allow less restrictive and timelier access to funds.

With regard to governance, our Review has highlighted the difficulties the OAC faces in not receiving the background papers for its meetings in sufficient time to do them justice; moreover, these documents tend to be very lengthy and not necessarily reader-friendly. The OAC's oversight responsibility is likely to be weakened as a result, and we can already see some indication of this. For instance, requests for systematic follow-up on management actions resulting from evaluation findings have not been answered, nor is there a standing item for this on the OAC agenda, so that such requests can easily be passed over and forgotten. The broadened responsibilities now given to the OAC also mean that many competing entities are trying to secure the OAC's attention. There is now provision for the OAC to call on consultants for help, which may strengthen the OAC in its oversight responsibilities.

Furthermore, in their capacity as members of the Board, the OAC's members should stress the urgency of developing evaluation and monitoring capacity in the BMCs, since this gap is having a direct impact on OIE and CDB evaluations.

With regard to the OIE's performance, we have to respond to the questions raised in this Review's Terms of Reference, which essentially means answering two main questions: is the OIE doing the right thing? And is it doing it in the right way?

242 Picciotto, R. (2008) Evaluation Independence at DFID: An Independent Assessment prepared for the Independent Advisory Committee for Development Impact (IACDI), p. 4.


There is no doubt that the decision to establish a credible, independent evaluation function in the CDB was the "right thing" to do: "effective and useful evaluation and oversight activities can assess development effectiveness, hold the organisation accountable for results, and improve operational performance".243 It is also a policy of the MDBs to have such a function, and the CDB has now aligned itself with international standards and practice.244 The question now, therefore, is whether the OIE is going about it in the right way.

The OIE has taken the "right" steps to improve the engagement and interest of the OAC and CDB senior management, from selecting the topics for its evaluations through to finalising the conclusions and recommendations in a collaborative spirit. It falls short, however, of taking the messages emerging from the studies to "outsiders", such as those responsible for implementing CDB interventions in the BMCs.

In its oversight role, we feel that the OIE has paid insufficient attention to the actual utilisation of evaluation: it is beyond its responsibility to see that action is taken, but it is certainly within its remit to record how, and how well, the lessons drawn have been taken up and used. With regard to its oversight of the self-evaluations (the validation process), the OIE has attempted to improve dialogue with the operations departments and to demonstrate the dual function of oversight and learning. It is now emphasising the learning aspect by providing tools and guidance on how to draw out lessons and integrate them into future planning. More recently, it has sought to provide more formalised training on evaluation by working with Corporate Planning Services and the Technical Assistance department to develop courses showing how, where and when evaluation plays its part within the MfDR framework.

However, one of the challenges in evaluation management is balancing independence with facilitating buy-in and ownership. It is a fine line to walk, and the tone of the collaboration depends to a large degree on the climate between management and the head and staff of the independent evaluation unit. In practical terms, for the CDB this means defining the role of the OIE in relation to the self-evaluations performed by the Projects and Economics Departments. The change from the EOV to the OIE made this role change quite clear: the OIE no longer has responsibility for project monitoring or for planning data needs together with the operational departments. On the other hand, to improve understanding and learning, there needs to be an interface between evaluation and management. At present, OIE's dual role, that is, its advisory role in relation to operations and its strategic role towards the OAC and senior management, has not been satisfactorily resolved. Operational staff still do not appear to see any urgency in producing their completion reports, or to appreciate what lessons might be drawn from such reflection. The OIE is doing its best to support "learning" whilst at the same time keeping an arm's length. The greatest challenge the OIE faces in its new capacity is the slow development of an organisational learning and evaluation culture.

A Learning and Evaluation Culture

Evaluation utility depends on the engagement of evaluation users: those who should benefit from the knowledge generated through the studies. Useful evaluation therefore depends to a large degree on the development of an evaluation and learning culture and on how well these are embedded in the organisation. This means that the organisation recognises and appreciates evaluation's role and functions, particularly in helping it understand what it is achieving and where and how improvements can be made. In short, the added value that evaluation can bring to the organisation is its ability to draw out the important lessons that can help improve the organisation's performance.

However, whilst CDB senior management shows all the signs of embracing evaluation as an important strategic tool, there still appears to be some apprehension about receiving criticism, however constructive it might be.

243 CDB (2011) Evaluation Policy (p. 2).
244


The OAC has already affirmed its interest in learning what can be "put right the next time around". In considering accountability, the committee is asking for a more strategic approach to learning and to sharing knowledge based on evidence. The CDB also shares the development goals of the other MDBs, that is, "to end extreme poverty and promote shared prosperity". This means looking for new forms of problem solving and for ways to create a "development solutions culture". Hence there is an interest in learning from experience and exchanging knowledge about what works. This implies balancing accountability and learning: making sure they are seen not as opposites but as compatible. This greater emphasis on learning requires a reframing of CDB's thinking and of how it deals with the constructive criticism that evaluation can offer.

The following excerpt, taken from an external review of another organisation's evaluation function (UNRWA), illustrates what a weak evaluation culture can look like in practice:

Weak evaluation culture

27. While some stakeholders seem keen on evaluation, the overall evaluation culture in UNRWA is weak. There are several aspects to it.

28. First, many of the interviewees stressed that UNRWA has a weak learning culture, which stems from a number of factors. One reason given relates to the cultural virtue of oral communication, which makes conveying documented experiences challenging. Another reason is language: a majority of UNRWA's national staff are not fluent in English, and evaluation reports are mostly in English. Furthermore, criticism, even if constructive, is, according to some interviewees, mainly perceived as a threat and not as an opportunity. Finally, learning is also affected by a very basic constraint: lack of time.

29. Second, there is a weak knowledge management system for systematically collecting and sharing experience and lessons learned in UNRWA. UNRWA communities of practice do not exist; several interviewees mentioned using knowledge networks outside UNRWA, i.e. communities of practice managed by other agencies. Accessing evaluation reports is also not easy: the UNRWA website does not provide access to evaluation reports and, while the Agency's Intranet has a site for evaluation reports, it is not a complete repository; the Evaluation Division does not know exactly how many decentralized evaluations are being produced. In addition, there are only a few evaluation plans at the level of field offices or departments.

30. Third, the Panel found that decentralized evaluations are, at least partly, perceived as donor-driven accountability instruments rather than as learning tools. In that sense, evaluations are managed as bureaucratic requirements, thereby weakening the learning dimension.

31. Finally, the sensitive political context in which UNRWA operates may also discourage a strong evaluation culture, as evaluative evidence can sometimes be overridden by political considerations.14 The Panel was repeatedly told that, given the political context, any change is a challenge.

14 An example mentioned to the Panel was the evaluation of the Qalqilya Hospital (2013), which concluded that the Hospital should be closed. However, for political


Recommendations

(The following is a list of possible recommendations, to be discussed and developed within the Review Panel initially and then proposed for discussion together with the OIE.)

- OAC's oversight responsibility needs to be strengthened.

- (Possibly) Review the Evaluation Policy to redress gaps.

- OIE to develop a 5-year strategy providing a step-by-step work programme, budget and timeframe for implementing the Evaluation Policy.

- Stronger leadership from the President to provide a climate conducive to promoting the added value of evaluation to overall management and to fostering a culture of critical analysis and learning.

- Stronger support from the Advisory Management Team for the evaluation function, by emphasising CDB managers' accountability for performance results and by devising incentive schemes to support the accountability function.

- Set up committees (advisory groups) to accompany specific evaluation studies as a means of reinforcing ownership.

- OIE could contribute to developing a learning culture within the CDB by adopting the role of "critical friend" in its dealings with the CDB operations departments and the CDB more generally. Valuable insight can be gained from a focused conversation with the client about aspects of the evaluation that went well and those that did not. Some meetings might include all relevant stakeholders in addition to the main client. This feedback can then inform a subsequent internal project team debrief (e.g. identifying internal professional development or process improvement needs), and input from all perspectives can inform planning for future projects. A summary of these discussions can then be circulated so that everyone can use it for their own internal or external quality improvement purposes.

- Rather than asking "What went wrong?", the conversation can focus on "what surprised us, what we would do differently, what did we expect to happen that didn't, and what did we not expect to happen that did". This is a better means of getting at the negative aspects without placing blame.

- OIE to train up and engage "champions" within CDB operations departments to help demonstrate evaluation utility and provide "on the job" training in self-evaluation to colleagues.

- A quality control group could also be set up, as Picciotto suggested for DFID, to carry out and develop the work of quality assessment and, in particular, Quality at Entry monitoring.

- OIE should be given the resources to build M&E capacity in the BMCs as a priority, possibly in partnership with other MDBs, development organisations and the countries themselves.

- Possibly criticise the over-emphasis on the five DAC criteria, which have been in use for more than 15 years without any major revision. Given their importance and influence in the field, it is pertinent that independent professionals take a critical look at them, especially since many scholars and practitioners consider that the quality of evaluations in development aid has been quite disappointing (Chianca, T. (2008) "The OECD/DAC Criteria for International Development Evaluations: An Assessment and Ideas for Improvement", Journal of MultiDisciplinary Evaluation, 5(9), March 2008, http://evaluation.wmich.edu/jmde/).

- Regarding the validation of self-evaluations: it is recommended that a leaner format be developed for the PCVRs, repeating less of the original PCR text and focusing


on the validity of the PCR against a limited number of key issues to be assessed for each PCR. The PCVRs are also written out in extensive narrative prose, which may not be needed for this type of document; a more tabular format with more succinct statements could lead to a leaner production process without any loss of usefulness. The "PCR checklist" would be a good starting point for this.

- The link between the self-evaluations, the validations and the independent evaluations, and between the self-evaluations and the QaE documents, is not clear at present, so one wonders what all the effort on the operations side is for. This is a real issue: much interesting and reasonable work is being done, but there is a lack of coherence. (This observation is based on document review only, not on interviews.)

- This is something the EIB evaluation unit was criticised for in the past too. Since then, it has started to include "younger" (sometimes still ongoing) projects in its samples, and it redoes the portfolio analysis right before finalising the report to see whether things have changed; the services can, of course, indicate in their response if things have indeed changed over time.

- Recommendations for improving the process for study approval and funding.

- Give recommendations on priorities for OIE work.

- Funding should come preferably from the administrative budget. Unused monies could then be released in the annual budgetary reviews, but this should have no effect on the budget for subsequent years. As for SDF funding, it is surprising that a Board-approved OIE work programme and budget should prove inadequate; either the proposed budget per work programme


The Panel however encourages creating such a Quality control unit the role of which cannot be fulfilled by OIE, as it lies outside the scope and present capacity of OIE – even though OIE could have an advisory/methodological role.

427

Page 428: f01.justanswer.comf01.justanswer.com/nFcTOGCI/External+Review+of+the+OIE+v.20.3.1… · Web viewf01.justanswer.com

APPENDICES

Appendix I - The External Review Mandate – Terms of Reference and Approach Paper

Appendix II - Review Approach, Data Collection and Analysis, and Limitations

Appendix III – Overview of OIE Evaluation Practice

Appendix IV - List of Persons Interviewed

Appendix V - List of Documents Reviewed

Appendix VI - List of Topics used to guide interviews with members of CDB Board of Directors

Appendix VII - List of Topics used to guide interviews with CDB staff


Appendix III – Overview of OIE Evaluation Practice (prepared by the OIE in response to Reviewer’s request)

Caribbean Development Bank, Office of Independent Evaluation - OIE

Percentage of projects subject to project (self-)evaluation:
100% – Project Completion Reports (PCRs).

Percentage of projects subject to validation by OIE:
Approximately 40-50%. About 15 projects exit the portfolio annually. The Evaluation Policy calls for all PCRs to be validated; however, OIE resources are insufficient. The validation process was reviewed in 2014; the OAC (a Board committee) now selects a sample of 6-8 PCRs for validation each year.

Percentage/number of projects subject to in-depth review by OIE:
None, unless specifically requested by the OAC. Due to limited resources, the focus of the OIE evaluation work programme is on PCR validations and high-level evaluations, including country strategy and programme evaluations (CSPEs).

Number of high-level evaluations conducted by OIE (e.g. sector, thematic, geographic):
1-2 per year since 2011. The plan is 2-4 per year from 2016; this would include CSPEs (the first, on Haiti, planned for Q1 2016).

Number of project impact evaluations conducted by OIE:
None. The OIE includes “impact questions” in high-level evaluations.

Number of project impact evaluations conducted by Bank staff or other non-OIE staff:
The OIE is not aware of any impact evaluation conducted by the Bank. However, the OIE provides technical support to the Basic Needs Trust Fund (BNTF) in its design of an M&E framework that entails impact evaluations.

Budget:
In USD mn: 0.78 in 2015; 0.82 in 2016. This is equivalent to about 2.5% of the total CDB Administrative Budget. 75% of the budget is for staff salaries (4 professionals, 1 support staff), leaving around USD 190,000 (in 2015) for other expenses, including consultants, e.g. for external evaluations. Additional funding is accessed via the Special Development Fund (SDF); this varies according to the type and scope of the evaluation, e.g. the ongoing SDF 6/7 Evaluation is SDF-funded at USD 255,000. The budget is determined by the Board and is not separate from the administrative budget. SDF funding for evaluations is considered separately and is subject to the Bank's internal approval process; it cannot be used to cover OIE expenses such as staff time or travel, and country eligibility for SDF funding is also a consideration. The OIE expressed concerns about this funding track in respect of predictability, independence and eligibility limitations.

Reporting line for Head:
The Head of OIE reports to the Board, with an administrative link to the President.

Terms of appointment for Head:
5-year term, renewable once. Appointed by the President with the agreement of the Board.

Right of return for Head:
Not eligible for other staff positions.

Consultants as proportion of OIE budget:
2015: 19% (USD 145,000), plus SDF funding. SDF-funded evaluations are outsourced.

Last external evaluation (or peer review) of OIE:
No external evaluation, though a review of the function was done in 2011, leading to the Evaluation Policy. This OIE External Review was completed in April 2016.

Departments or special programmes supporting impact evaluation:
None.

Appendix IV – List of Persons Interviewed

Name – Function relative to OIE – Type of interview

Mrs. Colleen Wainwright – Member, CDB Board of Directors (UK) – Face to face
Mrs. Cherianne Clarke – Alternate Member, CDB Board of Directors (UK) – Face to face
Mrs. Jean McCardle – Member, CDB Board of Directors (Canada) – Face to face
Dr. Louis Woodroofe – Member, CDB Board of Directors (Barbados)
Mr. A. de Brigard – Former Member, CDB Board of Directors – Skype interview
Mr. H. Illi – Former Member, CDB Board of Directors – Telephone interview
Mrs. Claudia Reyes Nieto – Member, CDB Board of Directors – Telephone interview
Mr. Bu Yu – Alternate Director, CDB Board of Directors – Face to face
Mr. Michael Schroll (Barbados) – Head, OIE – Series of interviews via Skype and face to face
Mr. Mark Clayton – OIE Senior Evaluation Officer – Focus group
Mrs. Egene Baccus Latchman – OIE Evaluation Officer – Focus group
Mr. Everton Clinton – OIE Evaluation Officer – Focus group
Mrs. Valerie Pilgrim – OIE Evaluation Officer – Focus group
Dr. Justin Ram – CDB Director, Economics Department – Face to face
Mr. Ian Durant – CDB Deputy Director, Economics Department – Face to face
Dr. Wm Warren Smith – CDB President – Joint interview, face to face
Mrs. Yvette Lemonias-Seale – CDB Vice President, Corporate Services & Bank Secretariat – Joint interview, face to face
Mr. Denis Bergevin – CDB Deputy Director, Internal Audit – Face to face
Mr. Edward Greene – CDB Division Chief, Technical Cooperation Division – Face to face
Mrs. Monica La Bennett – CDB Deputy Director, Corporate Planning – Face to face
Mrs. Patricia McKenzie – CDB Vice President, Operations – Face to face
Ms. Deidre Clarendon – CDB Division Chief, Social Sector Division – Face to face
Mrs. Cheryl Dixon – CDB Coordinator, Environmental Sustainability Unit – Focus group
Mrs. Denise Noel-Debique – CDB Gender Equality Advisor – Focus group
Mrs. Tessa Williams-Robertson – CDB Head, Renewable Energy – Focus group
Mrs. Klao Bell-Lewis – CDB Head, Corporate Communications – Face to face
Mr. Daniel Best – CDB Director, Projects Department – Face to face
Mr. Carlyle Assue – CDB Director, Finance Department – Face to face

Appendix VI - Interview Guide: Members of CDB Board of Directors

Below is a list of themes that I should like to raise with you, based on your experience and knowledge of the CDB’s independent evaluation function (Office of Independent Evaluation).

In each case, I should be grateful if you could illustrate your responses with examples, or help this Review by, wherever possible, sending me (or telling me where I can find) any documents that could support your responses.

This guide is being sent to you in advance to help prepare our meeting. However, our interview will be conducted more in the style of a conversation. The following sub-questions will be used to GUIDE the interview. Please feel encouraged to raise any additional issues that you feel we should take into account.

On the governance and independence of CDB’s evaluation function

What mechanisms are there in place to support its independence?

How satisfactory are the current arrangements in your opinion?

How is the balance between independence and the need for interaction with line management dealt with by the system? For example, what mechanisms exist to ensure that the OIE is kept up to date with decisions, policy / programme changes, and other contextual changes etc. that could have an effect on OIE evaluation studies / evaluation planning?

On the OIE’s Evaluation Policy

The CDB’s Evaluation Policy was established in 2011. To what degree do you feel it is adequate? Still relevant?

What suggestions do you have for any improvements?

In your opinion, how adequate is the current quality assurance system for overseeing the evaluation function?

On the quality and credibility of evaluation studies

To what degree do you believe the reports are fair and impartial?

Do you consider them to be of good quality? Are they credible?

Are you adequately consulted/involved on evaluations of interest to you?

On the relevance and usefulness of evaluations

How well does the OIE engage with you / your committee during the preparation, implementation and reporting of an evaluation study to assure that it will be useful to the CDB?

How are the priorities set for the independent evaluations? What criteria are used? Are you satisfied with the current procedure?


When OIE evaluation studies are outsourced to external consultants, what criteria are used to make this decision?

How are the priorities for the OIE’s 3-year rolling work plan agreed? In your opinion, is the current plan adequate in terms of coverage and diversity?

In your opinion, do the evaluations address important and pressing programs and issues?

To what extent do you feel that the OIE’s evaluations integrate cross-cutting themes such as gender, energy efficiency/renewable energy, and climate change? What improvements might be made and how?

On the dissemination and uptake of evaluation findings and recommendations

To what extent do you feel that evaluation findings are communicated to the CDB and its stakeholders in a a) useful, b) constructive and c) timely manner?

Are evaluation recommendations useful? Realistic?

What mechanisms are in place to assure that evaluation results are taken into account in decision making and planning? What improvements do you feel could be made?

How have you used the findings from any evaluations? Examples?

To what degree do you feel that evaluation contributes to institutional learning? And what about to institutional accountability? Any examples?

What mechanisms are in place to ensure that knowledge from evaluation is accessible to CDB staff and other relevant stakeholders? Are the current arrangements satisfactory?

How satisfied are you with current arrangements? What expectations do you have for the future?

On resources

How is the OIE resourced financially, and is this satisfactory?

What about the OIE staff, are all the important areas of expertise represented in the team?

On this Review of the Office of Independent Evaluation

What are your expectations? What are you particularly hoping to learn from it?

Thank you very much for your cooperation and input


Appendix VII - Interview Pro-Forma: CDB Staff Members

This presents a list of the topics raised during interviews. It was used to guide the open-ended discussion; the sequence and exact wording of the questions may not have followed this order or been asked in exactly this way.

Changeover to an Independent Evaluation Office? Expectations? Advantages and disadvantages?

Satisfaction with working relations between operations and the OIE from your perspective?

Process of dealing with the PCRs and CCRs? Advantages and limitations?

Quality and credibility of the validation process?

How are the self-evaluation reports used?

Credibility and Quality of OIE’s evaluation reports

Communication of self and OIE independent evaluations? To whom, in what way? Possible improvements?

In response, the OIE has greatly improved the presentation of technical reports by summarising the main points in its “Brief Reports” (e.g. for the Tax Administration and Tax Reform and the Technical and Vocational Education and Training evaluations). This is commendable and certainly a step in the right direction, although the Panel considers that the briefs should have a sharper focus on the strategic issues (which come at the end of the brief rather than the beginning), be condensed, and be made more “reader friendly”.

The Panel was also surprised to find that, despite expressions of support for rigorous evaluation and its importance to the CDB, the OAC does not appear to be taking any firm position with regard to the paucity of available data. The OAC has been made aware of the data problems in the BMCs (e.g. the lack of rigorous monitoring and statistical data and the consequent effect on the rigour of OIE’s evaluations) as well as the delays in the submission of self-evaluations and their validations, yet there appears to be no OAC attempt to deal with such problems, e.g. by exerting pressure on the CDB or on the BMCs through their representatives on the Board.

To conclude: the OAC has an expressed interest in advancing the role of evaluation as a strategic tool for CDB management. However, it is not performing its oversight function with sufficient firmness to bring about any change to the problems raised through evaluations, especially with regard to data issues and reporting delays. More generally, there is a lack of any systematic


report on the “follow-up of actions agreed”, which could be particularly useful for tracking changes made as a consequence of an evaluation and the management response. The OAC could do better justice to its oversight responsibility if it were to receive all background documents systematically at least two weeks before its meetings. Moreover, the volume and length of documents received at any one time is considered overwhelming, and the number and/or importance of agenda items competing for attention at any one session is an additional handicap.


There also appears to be a detachment between the OIE and senior management that is of concern to the Panel. (More on this point under the heading self and independent evaluations.) The OIE is not regularly invited in any capacity to management meetings, nor given a copy of the agenda or minutes; it is only occasionally invited to attend in order to discuss an evaluation report or management feedback. Occasional attendance or observer status at such meetings does not necessarily provide the same insight into the dynamics of management actions and decisions.

Extent to which the evaluation unit has control over a) staff hiring, b) promotion and pay increases, and c) firing, within a merit system:
Partially complies – All OIE staff members are treated in the same way as other CDB staff. The Head has limited control over the hiring, firing or promotion of OIE staff.

Work Practices

The OIE has had to develop a plan to implement the Evaluation Policy. This raises such questions as: what are the priorities, and what is the timeframe for achieving which activities? These were partially addressed in the OIE work programme and budget for 2012 to 2014, but that programme proved to be over-ambitious. The OIE has also chosen to increase the involvement of its professional staff in conducting independent evaluations. Outsourcing is still needed when the study is funded by the SDF, when time is limited, and when specific expertise is required.

But plans appear to place little emphasis on the activities associated with evaluation management (e.g. knowledge management) and the time they require. Other time demands mentioned in the previous sections, such as delays in completing reports and validation work, have also affected OIE’s plans. The more recent work plans have set the task of delivering utility-focused and timely evaluations, but they lack clarity on how the OIE proposes to surmount the time and data issues, which are far from new. In short, they lack a theory of change and a timeline. The challenges that have to be dealt with to enable the OIE to move up the MDB evaluation pyramid245 are brought out in the remaining sections of this Review, not least given the limited resources available.

The OIE’s strategy also lacks a prioritisation of tasks, which should include more emphasis on evaluation management activities. Draft evaluation reports are discussed before the final version is completed.

245 US Treasury Report to Congress on Evaluation Standards and Practices at the Multilateral Development Banks (September 2014, Annex C).


However, drafts are only submitted to CDB line and senior managers; only final versions are given over to the OAC. A series of discussions is held, first with the CDB and then with the OAC, on the findings and recommendations, following professional good practices and standards on participative approaches. There is, however, no “accompanying group” for individual studies, which would include both internal and possibly external partners; such “advisory groups” have shown their worth in a number of contexts for improving buy-in and for providing strategic input. Among some newly appointed directors an interest in evaluation was evident, and in one case that interest was followed up in practice. The constructive criticism that evaluation can offer can add value by fostering a supportive climate that wants to learn through calculated trial and error, and by improving understanding of the strengths and weaknesses of such strategies. During this transitional phase, a Manual has been developed to guide and support the independent evaluation process. As with many other MDBs, evaluation activities include both independent and self-evaluations; the latter are the results of completion reports on operational projects and country strategy programmes and are done by the operations staff. The OIE then validates the quality of such reports. The self-evaluations should inform the more strategic studies conducted independently by the OIE. (More on the relationship between these two is provided later in this Review.)

An evaluation is processed as follows: the OIE prepares an Approach Paper (AP) for approval by the OAC. If the study is to be outsourced, the AP becomes the basis for Terms of Reference (ToR), which, subject to the size of the budget, may be put to tender. The contracted evaluator then prepares an Inception Report (IR) after some desk and field research has taken place; this intermediary report is not produced if the OIE itself is conducting the evaluation. Sometimes a Progress Report is submitted, but otherwise the next stage is the delivery of the final report in various drafts. (Assessments are like evaluations but more limited in scope and depth of analysis.)

Table 4: List of studies (N = 24) submitted to the Board for the period January 2012 to December 31, 2015

- It is still considered good practice to have objectives elaborated in the initial design documents,246 even for more emergent approaches such as Developmental Evaluation (Patton, 2010247).

However, in this period of transition, much of the OIE’s work since 2012 has been dealing with the backlog of CDB self-evaluation validations. In theory, an estimated 15 completion reports are due each year, but delays in submitting the reports for validation are commonplace. Therefore, with the change of Head in June 2014, the OIE secured the OAC’s agreement to reduce the number of validations to a maximum of 6 per year. However, a backlog continues to accumulate, as only 2 PCRs were given to the OIE for validation in 2015.

In the review of draft evaluation reports, the process includes reflective workshops that discuss not only the findings but also seek to draw out the important lessons.

246 The focus of an objectives-oriented evaluation is on specified goals and objectives and determining the extent to which these have been attained by the relevant intervention. See, for example, Worthen, Sanders & Fitzpatrick (1997) Program Evaluation: Alternative Approaches and Practical Guidelines (2nd ed.). White Plains, NY: Addison Wesley Longman. 247 Patton, M.Q. (2010) Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use, Guilford Press.


(The Panel has already referred above to the OAC’s lack of oversight in the use of evaluation.) Moreover, the 2015 budget provides only US$2,000 for communication, none of which is intended for outreach.

Figure 3: The MDB Evaluation Pyramid248

248 US Treasury Report to Congress on Evaluation Standards and Practices at the Multilateral Development Banks (September 2014, Annex C).


A modest attempt was made in 2015, but the resources currently available to the OIE will limit the scope of such work in the BMCs, which, in turn, will continue to hinder the production of sound evidence for the OIE’s evaluations.

Human and financial resources to support OIE’s work

OIE’s Human Resources

The OIE has five staff; three of the five were recruited from within the CDB. There is an expectation from the Board that the OIE should embark on more high-level evaluations, and impact evaluations in particular would require further expertise. Moreover, there are many other designated OIE activities that should be recognised as valuable work: the validations, building CDB and BMC evaluation capacity, and providing supervision, advice, knowledge management and brokerage, as well as managing evaluation contracts. The time needed for all of these may be underestimated in OIE’s budgets, yet all are important for assuring best value from evaluation. The Panel is concerned that the demand for “doing” evaluations, as well as OIE’s interest in advancing its skills in high-level evaluations, may crowd out the importance and time needs of these other essential tasks.

Limited and unpredictable resources for independent evaluations

The OIE is funded from the general administrative budget and represents approximately 2.5% of the total. Whilst this is seemingly a higher proportion than in other MDBs, in real terms it is quite limited: 75% of the OIE budget goes to staff salaries, leaving US$190,000 in 2015 for external consultants and other expenses.

CDB’s donors do not appear to specify a budget for monitoring and evaluation activities. This means, on the one hand, that there is no clear external budgetary recognition of the operations’ self-evaluation work or of OIE’s time in the validation process; on the other, that whilst donors expect to receive reports from independent evaluations, this expectation is not made explicit when funds are allocated.

The resources available to the OIE for hiring external consultants have dropped from US$350,000 in the revised 2014 budget to US$120,000 in the 2015 indicative budget. The OIE estimates that, for high-level evaluations, the cost of external consultants is between US$90,000 and US$350,000; the SDF 6/7 evaluation cost US$255,000. In the Panel’s experience, this is a sound estimate. With one staff member fewer during 2014-2015, coupled with OIE’s focus on dealing with the backlog of self-evaluations amongst other priorities, the OIE was unable to execute some of the evaluations during the annual budget period. The budget was therefore reduced for subsequent years, but it has proven to be insufficient to fund the OIE Work Programme. The OIE has therefore needed to turn to the only alternative source available at present, the SDF. But the SDF funding rules apply to specific countries and themes, which obviously restricts the OIE’s choice of evaluation subjects. Since the SDF does not allow for OIE recurring costs such as staff travel, SDF evaluations have to be outsourced. As presented in Figure 1 above, the approval process is inefficient and causes delays. The Panel learned that additional funds, for example for specific studies, could be secured from within the administrative budget during the year, on condition that the request was based on sound arguments.
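The scale of the shortfall can be illustrated with a rough, back-of-envelope check (our own arithmetic, using the figures above and assuming the planned minimum of two high-level evaluations per year at the lowest consultant cost estimate):

\[ 2 \times \mathrm{US\$}\,90{,}000 = \mathrm{US\$}\,180{,}000 > \mathrm{US\$}\,120{,}000 \ \text{(2015 indicative consultant budget)} \]

Even the cheapest scenario exceeds the available consultant budget before any validation support or other contracted work is counted.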

Whilst the Panel fully appreciates that the Bank is operating within a zero-growth framework, the reviewers were surprised to learn that OIE funding is not sufficiently secured in line with its priorities and work plan. The need to seek alternative funding for individual studies


does not allow for any flexibility and undermines the OIE’s independent judgment of what needs to be done.

To conclude: the OIE is inadequately resourced to meet the expectations outlined in the CDB’s Evaluation Policy. The Panel recognises that the CDB itself faces budgetary restrictions, but current arrangements for securing extra funding are complicated and inefficient, and they limit the OIE’s ability to exercise autonomy in the selection of its evaluation studies. Moreover, OIE budgets significantly underestimate the time needed for managing evaluations and other evaluation activities.

Self-evaluations cover public sector investment, lending and technical assistance, policy-based loans, and country strategy programmes. There appears to be little incentive to complete self-evaluations in a timely manner.

Evaluation appears to be perceived as a threat rather than an opportunity for learning.

According to the Evaluation Policy (p. 15), “The President, with the support of the Advisory Management Team, is accountable for encouraging and providing an environment where evaluation adds value to the overall management of CDB’s activities and fosters a culture of critical analysis and learning”. But in the CDB a learning culture appears to be still in its infancy, and the leadership role as expressed in the Evaluation Policy is underdeveloped. There are a number of problems, largely to do with delays in exchanging comments on the various reports and with the paucity or lack of monitoring data. The added value that evaluation might offer to the operations area is ill recognised. Moreover, the link between self-evaluations as the building blocks for independent evaluation is not apparent. Thus there is little incentive or management focus to drive any change to current practices. In other words, there is a lack of leadership to advance a learning environment in which evaluation can play a major part.


Part Three: General Conclusions and Recommendations

To conclude, with regard to the Evaluation Policy and OIE’s independence, our Review finds that over the past few years the CDB has succeeded in establishing an independent evaluation office that is credible and respected. It reports to a Board Committee and is thus organisationally independent from CDB management. Its work is grounded in an Evaluation Policy, agreed by the Board and the CDB, that reflects internationally recognised principles and good practices. The Policy sets out a broad scope of responsibility for the OIE which, however, seems over-ambitious given current resource constraints. The OIE clearly has both an accountability and a learning function; the latter should support the development of an organisational learning culture. (So far, the uptake of recommendations and key lessons has not been systematically monitored or recorded.) In general, on the issue of independence, we can conclude that the OIE meets the criteria for organisational and behavioural independence and is protected to a certain degree from external or contextual influences.

However, as the Independent Advisory Committee for Development Impact has said, “independent evaluation needs to have clout … credibility of evaluation hinges on public perceptions as well as on reality.”249

We therefore highlight a few potential threats, even though there is no evidence to suggest they are in any way real at present; it would be in the OIE’s and the CDB’s interest to have these clarified sooner rather than later. For instance:

any delays incurred in reporting self and independent evaluation results to the Board could be interpreted as operational interference.

Similarly, there is no agreed process for dealing with conflicts between the OIE and management in reporting results; it is simply expected that any disagreements will be reported in the management response.

Another possible threat is the lack of complete autonomy that the Head of the OIE has over staff recruitment, termination, continuation and professional development. The Policy is not sufficiently clear about who has the final word in the case of disagreement.

And finally, on resources, our Review accepts the limited funds available to the CDB and the fact that the OIE’s budget is not independent but operates within the Bank’s budgetary limitations. Nevertheless, we feel that more flexible arrangements could be devised that would allow less restrictive and timelier access to funds.

With regard to governance, our Review has highlighted the difficulties the OAC faces in not receiving the background papers for its meetings in sufficient time to be able to do them justice. Moreover these documents tend to be very lengthy and not necessarily “reader friendly”. The OAC’s oversight responsibility is likely to be weakened and we can already see some indication of this. For instance, requests for systematic follow-up on management actions resulting from evaluation findings have not been answered. Neither is there a systematic item for this on the OAC agenda so that such requests can easily be passed over and forgotten. The broadened responsibilities now given to the OAC also mean that there are many competing entities trying to secure the OAC’s attention. There is now provision for the OAC to call on consultants for help, which we feel may help strengthen the OAC in its oversight responsibilities.

Furthermore, in their capacity as members of the Board, the OAC members should stress the urgency of developing evaluation and monitoring capacity in the BMCs, since this gap is having a direct impact on OIE and CDB evaluations.

With regard to the OIE’s performance, we have to respond to the questions raised in this Review’s Terms of Reference, which essentially means answering two main questions: Is the OIE doing the right thing? And is it doing it in the right way?

249 Picciotto, R. (2008) Evaluation Independence at DFID; An independent Assessment prepared for the Independent Advisory Committee for Development Impact (IADCI) (p. 4).


There is no doubt that the decision to establish a credible, independent evaluation function in the CDB is the “right thing” to do; effective and useful evaluation and oversight activities can “assess development effectiveness, hold the organisation accountable for results, and improve operational performance.”250 It is also a policy of the MDBs to have such a function, and the CDB has now aligned itself with international standards and practice.251 The question now is therefore the following: is the OIE going about it in the right way?

The OIE has taken the “right” steps to improve the engagement and interest of the OAC and CDB senior management from selecting the topics for its evaluations through to finalising the conclusions and recommendations in a collaborative spirit. It falls short of taking the messages emerging from the studies to “outsiders” such as those responsible for implementing CDB interventions in the BMCs.

In its oversight role, we feel that the OIE has paid insufficient attention to the actual utilisation of evaluation; it is beyond its responsibility to see that action is taken, but it is certainly within its remit to record how, and how well, the lessons drawn have been taken up and used. With regard to its oversight of the self-evaluations (the validation process), the OIE has attempted to improve dialogue with the operations departments and to demonstrate the dual function of oversight and learning. It is now emphasising the learning aspect by providing tools and guidance on how to draw out lessons and integrate them into future planning. More recently, it has sought ways to provide more formalised training on evaluation by working with the corporate planning services and the technical assistance department to develop courses that show how, where and when evaluation plays its part within the MfDR framework.

However, one of the challenges in evaluation management is balancing independence with facilitating buy-in and ownership at the same time. It is a fine line to walk, and it depends to a large degree on the climate between management and the head and staff of the independent evaluation unit in defining the tone of the collaboration. In practical terms, for the CDB this means defining the role of the OIE in relation to the self-evaluations performed by the Projects and Economics Departments. The change from the EOV to the OIE made this role change quite clear: the OIE no longer has responsibility for project monitoring or for planning data needs together with the operational departments. On the other hand, to improve understanding and learning, there needs to be an interface between evaluation and management. At present, the OIE’s dual role, that is, its advisory role in relation to operations and its strategic role towards the OAC and senior management, has not been satisfactorily resolved. The operational staff still do not appear to see any urgency in producing their completion reports or to appreciate what lessons might be drawn from such reflection. The OIE is doing its best to support “learning” whilst at the same time keeping at arm’s length. The greatest challenge the OIE faces in its new capacity is the slow development of an organisational learning and evaluation culture.

A Learning and Evaluation Culture

Evaluation utility depends on the engagement of evaluation users – those who should benefit from the knowledge generated through the studies. Useful evaluation therefore depends to a large degree on the development of an evaluation and learning culture and how well these are embedded in the organisation. This means that the organisation recognises and appreciates evaluation’s role and the functions it can have, particularly for helping understand what it is achieving and where and how improvements can be made. In short, the added value that evaluation can bring to the organisation is its ability to draw out the important lessons that can help improve the organisation’s performance.

However, whilst CDB senior management shows all the signs of embracing evaluation as an important strategic tool, there still appears to be some apprehension about receiving criticism

250 CDB (2011) Evaluation Policy (p. 2).
251


however constructive this might be. The OAC has already affirmed its interest in learning what can be “put right the next time around.” In considering accountability, the committee is asking for a more strategic approach to learning and sharing knowledge based on evidence. The CDB also shares the development goals of other MDBs, that is, “to end extreme poverty and promote shared prosperity.” This means looking for new forms of problem-solving and for ways to create a “development solutions culture.” Hence there is an interest in learning from experience and exchanging knowledge about what works. This implies balancing accountability and learning, making sure they are not seen as opposites but as compatible entities. This greater emphasis on learning requires a reframing of CDB’s thinking and of how it deals with the constructive criticism that evaluation can offer.

Weak evaluation culture

27. While some stakeholders seem keen on evaluation, the overall evaluation culture in UNRWA is weak. There are several aspects to this.

28. First, many of the interviewees stressed that UNRWA has a weak learning culture, which stems from a number of factors. One reason given relates to the cultural virtue of oral communication, which makes conveying documented experiences challenging. Another reason is language: a majority of UNRWA’s national staff is not fluent in English, and evaluation reports are mostly in English. Furthermore, criticism, even if constructive, is, according to some interviewees, mainly perceived as a threat and not as an opportunity. Finally, learning is also affected by a very basic constraint: lack of time.

29. Second, there is a weak knowledge management system for systematically collecting and sharing experience and lessons learned in UNRWA. Communities of practice do not exist within UNRWA; several interviewees mentioned the use of knowledge networks outside of UNRWA, i.e. communities of practice managed by other agencies. Accessing evaluation reports is also not easy. The UNRWA website does not provide access to evaluation reports, and while the Agency’s Intranet has a site for evaluation reports, it is not a complete depository; the Evaluation Division does not know exactly how many decentralized evaluations are being produced. In addition, there are only a few evaluation plans at the level of field offices or departments.

30. Third, the Panel found that decentralized evaluations are, at least partly, perceived as donor-driven accountability instruments rather than as learning tools. In that sense, evaluations are managed as bureaucratic requirements, thereby weakening the learning dimension.

31. Finally, the sensitive political context in which UNRWA operates may also discourage a strong evaluation culture as evaluative evidence can sometimes be overridden by political considerations.14 The Panel was repeatedly told that given the political context, any change is a challenge.

14 An example mentioned to the Panel was the evaluation of the Qalqilya Hospital (2013), which concluded that the Hospital should be closed. However, for political


Recommendations

(Here is a list of some possible recommendations, to be discussed and developed within the Review Panel initially and then discussed together with the OIE.)

- The OAC’s oversight responsibility needs to be strengthened (possibly).

- Review the Evaluation Policy to redress gaps.

- The OIE to develop a 5-year strategy providing a step-by-step work programme, budget and timeframe for implementing the Evaluation Policy.

- Stronger leadership from the President, to provide a climate conducive to promoting the added value of evaluation to overall management and to fostering a culture of critical analysis and learning.

- Stronger support from the Advisory Management Team for the evaluation function, by emphasising the accountability of CDB managers for performance results and by devising incentive schemes to support the accountability function.

- Set up committees (advisory groups) to accompany specific evaluation studies as a means of reinforcing ownership.

- The OIE could contribute to developing a learning culture within the CDB by adopting the role of critical friend in its dealings with the CDB operations departments and the CDB more generally. Valuable insight can be gained from a focused conversation with the client about aspects of the evaluation that went well and those that did not go so well; some meetings might include all relevant stakeholders in addition to the main client. This feedback can then inform a subsequent internal project team debrief (e.g. identifying internal professional development or process improvement needs), and input from all perspectives can inform planning for future projects. A summary of these discussions can then be circulated so that everyone can use it for their own internal or external quality improvement purposes. Rather than asking “what went wrong?”, the conversation can be about “what surprised us, what we’d do differently, what did we expect to happen that didn’t, and what did we not expect to happen that did”: a better means of getting at the negative aspects without placing blame.

- The OIE to train up and engage “champions” within CDB operations departments to help demonstrate evaluation utility and to provide “on the job” training in self-evaluation to colleagues. A quality control group could also be set up, as Picciotto suggested for DfID, to carry out and develop the work of Quality Assessment and, particularly, Quality at Entry monitoring.

- The OIE should be given the resources to build M&E capacity in BMCs as a priority, possibly in partnership with other MDBs, development organisations and the countries themselves.

- Possibly criticise the over-emphasis on the five DAC criteria, which have been in use for more than 15 years without any major revision. Given their importance and level of influence in the field, it is pertinent that independent professionals take a critical look at them, especially since many scholars and practitioners consider that the quality of evaluations in development aid has been quite disappointing (Chianca, T. (2008) “The OECD/DAC Criteria for International Development Evaluations: An Assessment and Ideas for Improvement”, Journal of MultiDisciplinary Evaluation, 5(9), March 2008, p. 41, ISSN 1556-8180; http://evaluation.wmich.edu/jmde/).

- Regarding the validation of self-evaluations: it is recommended that a leaner format be developed for the PCVRs, repeating less of the content of the original PCR and focusing


on the validity of the PCR against a limited number of key issues to be assessed for each PCR. The PCVRs are also written out at considerable length, which may not be needed for this type of document; a more tabular format with more succinct statements could lead to a leaner process for producing the PCVRs without any loss of usefulness. The “PCR checklist” would be a good starting point for this.

The link between self-evaluations, validations and independent evaluation is not clear, nor is the link between self-evaluations and the Quality at Entry (QaE) documents, so one wonders what all the effort on the operations side is for. This is a real issue: much interesting and reasonably good work is being done, but there is a lack of coherence. (This observation is based on the documents alone, not on interviews that would give a broader picture.)

This is something the EIB evaluation unit was also criticised for in the past. Since then, it has started to include “younger” projects (sometimes still ongoing) in its samples, and it redoes the portfolio analysis right before finalisation of the report to see whether things have changed; the services can of course also indicate in their response whether things have changed over time.

Recommendations for improving the process for study approval and funding

Give recommendations on priorities for OIE work.

Funding should preferably come from the administrative budget; unused monies could then be released in the annual budgetary reviews, but this should have no effect on the budget for subsequent years. The Panel is surprised to find that a Board-approved OIE work programme and budget is inadequate; either the proposed budget per work programme


The Panel, however, encourages the creation of such a quality control unit, whose role cannot be fulfilled by the OIE since it lies outside the OIE’s scope and present capacity, even though the OIE could play an advisory/methodological role.

Independence of the Office of Independent Evaluation (OIE)

Independence is absolutely central to the integrity and trustworthiness of evaluation. It is an agreed requirement within the development agencies and in the evaluation community as a whole. In examining the issue of independence and good practice, the reviewers are guided by the Evaluation Cooperation Group’s recommendations on good practices, the CDB’s Evaluation Policy, and the 2011 consultancy review of independence relative to the CDB’s evaluation and oversight division.252 The appraisal is based on a comparison of the ECG’s recommendations on independence253 with the current OIE status.

OIE and Independence: Recommendations from the Evaluation Cooperation Group (ECG)

The ECG considers the issue of independence according to four specific areas: organisational (structural) independence; behavioural (functional) independence; protection from outside interference (operational independence); and conflict of interest safeguards.

Organizational independence ensures that the evaluation unit and its staff are protected against any influence or control by senior or line management, have unrestricted access to all documents and information sources needed for conducting their evaluations, and that the scope of evaluations selected can cover all relevant aspects of their institution.

Behavioural independence generally refers to the evaluation unit’s autonomy in setting its work programme, in selecting and conducting its evaluations, and in producing quality reports that can be delivered without management interference.

Protection from outside interference refers to the extent to which the evaluation function is autonomous in setting its priorities, in conducting its studies and processes, in reaching its judgments, and in managing its human and budget resources, all without management interference.

Conflict of interest safeguards refer to protection against staff conflicts of interest, whether arising from current, immediate, future or prior professional and personal relationships or from financial interests, for which there should be provision in the institution’s human resource policies.

The OIE’s Independence in Practice

Organisational / structural independence

On the whole, the Panel acknowledges and commends the efforts being made by the CDB to assure OIE’s organisational independence. The CDB’s Evaluation Policy provides for the OIE’s organisational independence from line management, and the interview data suggest that there is also wide acceptance and acknowledgement of why the OIE should have such independent status. Table 1 below provides our overall assessment of this aspect of OIE’s independence when compared with the ECG recommendations.254

252 Osvaldo Feinstein & Patrick G. Grasso, Consultants (May 2011) Consultancy to Review the Independence of the Evaluation and Oversight Division of the Caribbean Development Bank. 253 ECG (2014) Evaluation Good Practice Standards, Template for Assessing the Independence of Evaluation Organizations, Annexe II.1. 254 Based on ECG (2014) Template for Assessing the Independence of Evaluation Organizations, Evaluation Good Practice Standards, Annexe II.1.


Table 1: OIE organisational independence compared with ECG recommendations

Aspect: The structure and role of the evaluation unit.
Indicator: Whether the evaluation unit has a mandate statement making clear that its scope of responsibility extends to all operations of the organization, and that its reporting line, staff, budget and functions are organizationally independent from the organization’s operational, policy and strategy departments and related decision-making.
Assessment: Partially complies – The Policy is broad enough to cover the full range of MDB types of evaluation. In practice, however, this would not be possible without additional human and budget resources.

Aspect: The unit is accountable to, and reports evaluation results to, the head or deputy head of the organization or its governing Board.
Indicator: Whether there is a direct reporting relationship between the unit and a) the Management, and/or b) the Board, or c) the relevant Board Committee of the institution.
Assessment: Complies – The OIE reports to the Board of Directors (BoD) through its Oversight Assurance Committee (OAC).

Aspect: The unit is located organizationally outside the staff or line management function of the program, activity or entity being evaluated.
Indicator: The unit’s position in the organization relative to the program, activity or entity being evaluated.
Assessment: Complies – The OIE is located outside, and is therefore independent of, CDB line management.

Aspect: The unit reports regularly to the larger organization’s audit committee or other oversight body.
Indicator: Reporting relationship and frequency of reporting to the oversight body.
Assessment: Complies – The OIE reports five times per year to the OAC. Board approval for an additional executive meeting between the Head of the OIE and the OAC at least once per year was given in October 2015.

Aspect: The unit is sufficiently removed from political pressures to be able to report findings without fear of repercussions.
Indicator: Extent to which the evaluation unit and its staff are not accountable to political authorities, and are insulated from participation in political activities.
Assessment: Complies.

Aspect: Unit staffers are protected by a personnel system in which compensation, training, tenure and advancement are based on merit.
Indicator: Extent to which a merit system covering compensation, training, tenure and advancement is in place and enforced.
Assessment: Partially complies – OIE staff are covered by CDB human resource policy. However, the skill needs of OIE staff ought to be regularly reviewed in light of the move towards higher-level evaluations. Appraisal of skill needs and hiring of relevant staff should be completely under the authority of the Head of Evaluation; this is not sufficiently clear in the Policy or the other documents we reviewed.

Aspect: The unit has access to all needed information and information sources.
Indicator: Extent to which the evaluation unit has access to the organization’s a) staff, records and project sites; b) co-financiers and other partners and clients; and c) programs, activities or entities it funds or sponsors.
Assessment: Complies – The available evidence suggests that there is no reason to doubt such access. But systematic and easily accessible documentation is lacking in the CDB; it is one of its weak points. Delays in getting hold of relevant documents can have consequences for the timeliness of evaluation studies.

However, independence should not mean isolation. There appears to be a detachment between the OIE and the CDB that is of concern to the Panel: first, between the OIE and operations staff, and second, in the structural arrangements between the OIE and senior management.

In agreeing that the OIE should concentrate on strategic, thematic and in-depth evaluations, responsibility for project monitoring and evaluation was handed over to operations. The division is clear and respected. However, it has its drawbacks. With the OIE no longer systematically involved at the front end of project design, the monitoring data needs are likely to be poorly defined. Weak monitoring data will contribute to weaker evaluations. (More on this point under the heading “Self- and Independent Evaluations”.)

In the reviewers’ opinion, it is a common misunderstanding to assume that providing evaluator advice on monitoring and evaluation data will compromise evaluator independence. On the contrary, evaluation input into project design is essential to assure that the logic, indicators and data needs are addressed, so that at some future point in time an evaluation of the achievements can be empirically grounded.

This is not to say that the OIE no longer has any influence at the front-end design stage; it has merely shifted the point of focus. The OIE is now systematically providing such input more generally to the corporate planning teams for the tools and systems they are developing to support the MfDR framework. The monitoring data for projects and their implementation should be improved once the Project Performance Evaluation System (PPES) and the Portfolio Performance Management System (PPMS) are updated and operational.

Second, the OIE has limited formal access to the Advisory Management Team (AMT) weekly meetings where the President and senior management gather to exchange up-to-date information on the dynamics of CDB policy and practice. The OIE is not regularly invited in any capacity to these meetings, nor given a copy of the agenda or minutes; it is occasionally invited to attend in order to discuss an evaluation report or management feedback. For the OIE, this means that it is unlikely to pick up on the ‘when’ and ‘what’ of key decisional issues or to provide input into the discussion based on evaluative information. Its observer status at Loans Committee meetings, or its participation as an informant at OAC and BoD meetings and discussions, does not necessarily provide the same insight into the dynamics of management actions and decisions.

To respond to this situation, the President has agreed to meet regularly with the Head of the OIE in order to keep him up to date with CDB strategic thinking. This is a welcome change.

OIE Independence and Behavioural Issues

The Panel has concerns about some behavioural issues. For example, through both the interviews and the documentary review, we learned of considerable delays in processing both the independent evaluation reports and the OIE’s validations of the CDB’s self-evaluations. Delays generally arise in two stages: first, in receiving feedback on the independent reports from the relevant operational department and then from the AMT; and second, in providing the OIE with a management response, which is initially drafted by operations staff before being reviewed by the AMT. (OIE reports cannot be submitted to the OAC without the relevant management response.) This two-layer process for preparing submissions to the Board is inefficient and could, by delaying the OIE’s timely reporting to the OAC, become a threat to evaluation’s independence in the future.

The OIE’s validations of the CDB self-evaluations are also submitted to the OAC, and it is in both sides’ interest to clear up any misunderstandings beforehand. Despite attempts to improve the timeframe for completing these validations, delays are more the norm than the exception. Table 2 below summarises our assessment of the behavioural aspects of independence.

Table 2: OIE and Behavioural Independence

Aspect: Ability and willingness to issue strong, high quality, and uncompromising reports.
Indicator: Extent to which the evaluation unit (a) has issued high quality reports that invite public scrutiny (within appropriate safeguards to protect confidential or proprietary information and to mitigate institutional risk) of the lessons from the organization’s programs and activities; (b) proposes standards for performance that are in advance of those in current use by the organization; and (c) critiques the outcomes of the organization’s programs, activities and entities.
Assessment: Partially complies. Paucity of data and documentation sometimes hinders the quality of reports. The OIE emphasises the learning part of evaluation and is cautious in its criticism, recognising that management is going through a transitory stage and can still be overly defensive.

Aspect: Ability to report candidly.
Indicator: Extent to which the organization’s mandate provides that the evaluation unit transmits its reports to the Management/Board after review and comment by relevant corporate units but without management-imposed restrictions on their scope and comments.
Assessment: Partially complies. Reporting to the Board is sometimes compromised by delays in the review/comment process between the OIE and the CDB. Any delay in producing a Management Response also impairs the timely submission of a report to the Board, since the two have to be submitted together.

Aspect: Transparency in the reporting of evaluation findings.
Indicator: Extent to which the organization’s disclosure rules permit the evaluation unit to report significant findings to concerned stakeholders, both internal and external (within appropriate safeguards to protect confidential or proprietary information and to mitigate institutional risk). Who determines the evaluation unit’s disclosure policy and procedures: Board, relevant committee, or management.
Assessment: Partially complies. The OIE conforms to the CDB’s disclosure policy. However, the dissemination of evaluation findings currently appears to be restricted to website publication and reports to the Board. A more targeted communication strategy, including other key stakeholders (e.g. project implementers in the BMCs), should be developed and put in place.

Aspect: Self-selection of items for work program.
Indicator: Procedures for selection of work program items are chosen, through systematic or purposive means, by the evaluation organization; consultation on work program with Management and Board.
Assessment: Complies. The OIE draws up its work program after consultation with both CDB Management and the Board, seeking their input on relevant topics and themes.

Aspect: Protection of the administrative budget, and other budget sources, for the evaluation function.
Indicator: Line item of administrative budget for evaluation determined in accordance with a clear policy parameter and preserved at an indicated level or proportion; access to additional sources of funding with only formal review of the content of submissions.
Assessment: Partially complies. The administrative budget supporting OIE work is protected. Access to additional sources of funding is possible if well argued and justified, but the approval process is complex and inefficient. (See Figure 1 below.)

OIE and Protection from External Influence or Interference

Our overall assessment is provided in Table 3 below. The OIE’s independence in the design, conduct and content of its evaluations does not appear to be subject to any external interference. But securing funding from sources outside the OIE’s administrative budget, i.e. from the Social Development Fund, is an unduly complex and long process. As such, we consider that the current funding process can affect the OIE’s choice with regard to the type of evaluations it can undertake. (See Figures 1 and 2 below.)

Table 3: OIE and its Independence from External Influence or Interference

Aspect: Proper design and execution of an evaluation.
Indicator: Extent to which the evaluation unit is able to determine the design, scope, timing and conduct of evaluations without Management interference.
Assessment: Complies, though within the limits of the restricted human and financial resources available.

Aspect: Evaluation study funding.
Indicator: Extent to which the evaluation unit is unimpeded by restrictions on funds or other resources that would adversely affect its ability to carry out its responsibilities.
Assessment: Partially complies. The OIE must work within the limits of the agreed administrative budget wherever possible; if additional resources are needed for studies, it must seek alternative funds elsewhere. The budget limitations can have an effect on the type of evaluations undertaken and therefore on the OIE’s independence of choice.

Aspect: Judgments made by the evaluators.
Indicator: Extent to which the evaluator’s judgment as to the appropriate content of a report is not subject to overruling or influence by an external authority.
Assessment: Complies. The evidence available suggests that the Board and Management accept the evaluators’ independent interpretation and conclusions; management responses are agreed to be the accepted place to raise any difference of opinion.

Aspect: Evaluation unit head hiring/firing, term of office, performance review and compensation.
Indicator: Mandate or equivalent document specifies procedures for the (a) hiring and firing, (b) term of office, (c) performance review, and (d) compensation of the evaluation unit head that ensure independence from operational management.
Assessment: Complies. The Head of the OIE is appointed by the CDB President, in agreement with the OAC, for a five-year period renewable once. The Head can be removed from office by the President or the Board, but only with the agreement of both parties. However, the Head reports to the President for all administrative and personnel matters. Even though this arrangement was not what the 2011 Osvaldo Feinstein & Patrick G. Grasso report on independence recommended, the BoD accepted the CDB’s reasons for keeping it (e.g. most OAC members are non-resident and cannot oversee day-to-day work).
Indicator: Extent to which the evaluation unit has control over (a) staff hiring, (b) promotion and pay increases, and (c) firing, within a merit system.
Assessment: Partially complies. All OIE staff members are treated in the same way as other CDB staff. The Head has limited control over the hiring, firing or promotion of OIE staff.

Aspect: Continued staff employment.
Indicator: Extent to which the evaluator’s continued employment is based only on reasons related to job performance, competency or the need for evaluator services.
Assessment: Partially complies. Whilst the EP is clear about the procedures for hiring, firing and promotion, all of which must conform with CDB human resource policy, nothing is said about how any difference of opinion between the CDB and the Head of the OIE over continued staff employment would be resolved, should the level of technical or interpersonal competencies needed to meet new demands change.

Avoidance of Financial, Personal or Professional conflicts of interest

This particular aspect refers to the organisation’s Human Resources Policy: there must be provisions in place to protect against actual or potential conflicts of interest. The Panel requested, via the OIE, evidence from human resources of any such provisions, but did not receive an answer. It must be assumed that this aspect of independence, past or present, does indeed form part of normal CDB Human Resource Policies.

To conclude: The Panel is impressed with the measures the CDB has taken to assure the organisational independence of the OIE. Its independent status is accepted and respected by senior and line management. The OIE’s budget, however, is not independent of the overall CDB administrative budget, and this affects its choice of evaluation types and approaches. Some of the behavioural issues affecting independence were also of concern, especially the delays in the exchange of documents between the OIE and operations departments, which have a direct effect on timely reporting to the OAC. As for protection from outside interference, our concerns are largely to do with the OIE’s independence over staffing issues; there are potential loopholes in the current arrangements that could undermine the OIE’s autonomy over its staff.

OIE’s Strategy, Work Practices and Work Programme

The OIE has had to develop a plan to implement the Evaluation Policy. This raises such questions as: what are the priorities, and what is the timeframe for achieving which activities? These were partially addressed in the OIE work programme and budget 2012 to 2014, but that programme proved to be over-ambitious. Much of the period 2012 to 2015 has therefore been taken up with preparing the OIE’s shift in focus from project-based evaluations to high-level thematic and in-depth strategic studies. This has meant adopting a three-way approach: (1) for self-evaluations, reducing its time input to support the process; (2) for independent evaluations, taking stock of the gaps in coverage and expertise; and (3) networking to share experiences with centres of expertise and align the OIE with international practices. In addition, amongst other duties, it has been supporting the development of MfDR tools and systems, such as the Project Performance Assessment System, by providing advice and input on programme logic and monitoring needs. The OIE plans to conduct two to four high-level studies per year from 2016. The OIE has also chosen to increase the involvement of its professional staff in conducting independent evaluations. Outsourcing is still needed: when the study is funded by the SDF, when time is limited, and when specific expertise is required.

But the plans appear to place little emphasis on the activities associated with evaluation management (e.g. knowledge management) and the time they require. Other time demands mentioned in the previous sections, such as delays in completing reports, validation work and so on, have also affected the OIE’s plans. The more recent work plans have set the task of delivering utility-focused and timely evaluations, but they lack clarity on how the OIE proposes to surmount the


time and data issues, which are far from new. In short, the plan lacks a theory of change and a timeline. The challenges that have to be dealt with to enable the OIE to move up the MDB evaluation pyramid255 are brought out in the remaining sections of this Review, not least given the limited resources available.

To conclude: The OIE has made a first step in proposing a strategy for establishing itself as an independent evaluation resource. But its strategy lacks a theory of change and a prioritisation of tasks, which should include more emphasis on evaluation management activities.

The Value / Usefulness of OIE’s Independent Evaluations

Evaluation is a powerful tool that can provide useful, evidence-based information to help inform and influence policy and practice. But useful evaluations depend not only on the evaluators’ skills but on several other important factors as well: (1) planning evaluations to be relevant to the priorities of the organisation’s work and delivering their results in time to be useful; (2) the degree of consultation with, and ultimately ownership by, those who seek evaluative information; (3) the tools used to support the evaluation process itself; and (4) the credibility and quality of the evaluation products.256

1. Planning relevant and timely evaluations

The OIE is now working on a three-year rolling work plan that sets out the broad areas for enquiry. So far, there are no agreed criteria for selecting the specific topics for independent evaluation, although the priorities tend to reflect those of the CDB’s strategic plan. Decision-making is instead rather ad hoc, based on a process of dialogue between the OIE and the CDB, and between the OIE and the Board.

One of the OIE’s two objectives for 2015, therefore, was to define a work plan and agree priorities based on an approach that is “utilisation-focused”. This means that the studies are selected and planned to be relevant and useful to the organisation’s needs.

The OIE has achieved this objective with respect to its latest studies: the Social Development Fund (SDF) Multicycle 6&7 Evaluation, the Haiti Country Strategy evaluation, and the evaluation of the CDB’s Policy Based Operations. Each of these three has been planned to deliver its results in time to provide the CDB Board of Directors with relevant information for negotiating the next round of funding. In spite of some delays due to a myriad of reasons, not least the extra effort needed to secure essential data, the studies are expected to deliver on time.

The processes for agreeing the OIE’s work plan and specific evaluations on the one hand, and for securing alternative funding on the other, are shown in Figure 1 below. The Panel was surprised to learn how bureaucratic (in its internal approval steps) and inefficient (in the time it takes) the process seems to be. The concern here is that such a process could pose a threat to assuring the Board of “timely studies”.

Figure 1: Selection of Evaluation Topics and Funding Source

255 US Treasury Report to Congress on Evaluation Standards and Practices at the Multilateral Development Banks (September 2014), Annex C.
256 These aspects reflect the principles and good standards of the Evaluation Cooperation Group and the evaluation community more generally.

[Figure 1 is a flowchart. Work-programme track: consultation with CDB Operations and the OAC/Board on the selection of the evaluation topic; the three-year Work Programme and Budget (approved by the Board); the annual OIE report and work plan submitted to the OAC; the OIE’s draft Terms of Reference / Approach Paper, internally reviewed and finalised, then submitted to the OAC for approval. Funding track: either the OIE administrative budget, or SDF funding via a TA Paper (similar in content to the Approach Paper but in a different format) approved by the internal Loans Committee; Board approval is required above USD 150,000, with Board notification only at USD 150,000 or below. The OIE then selects and contracts consultants, if any.]


2. Consultation and ownership

“The credibility of evaluations depends to some degree on whether and how the organization’s approach to evaluation fosters partnership and helps build ownership and capacity in developing countries.” (ECG good practices)


The OIE engages with the OAC, CDB senior management and operations in agreeing its three-year work plan and then in selecting the specific topics and themes. It also discusses the evaluation approach paper (design and implementation plan) with the CDB and the OAC before completing the final version. However, preliminary and final drafts of the report are submitted only to CDB line and senior managers, for comment and the correction of factual errors; only final versions are passed to the OAC. A series of discussions is held first with the CDB and then with the OAC on the results and their implications. Discussions with the OAC are more limited due to the overburdened agenda of OAC and Board meetings, as previously discussed.

In short, the OIE is to be commended for following the recommendations of professional good practices and standards on participative approaches; it has succeeded in introducing a modus operandi that involves the key players in the selection of evaluation topics, in the evaluation designs, and in discussing the results. Figure 2 below provides an overview of the evaluation implementation and stakeholder engagement processes.

Figure 2: Evaluation Study Implementation and Feedback Loops

[Figure 2 is a flowchart. Evaluations are carried out under one of three arrangements: (A) fully outsourced to external consultants, with oversight by the OIE; (B) conducted by OIE staff; or (C) jointly by external consultants and the OIE. The common flow runs from the Terms of Reference to an Inception Report / Approach Paper; preparations (a detailed evaluation plan, including tools, timeline and logistics); data collection and analysis; a presentation/workshop of interim findings and conclusions for immediate feedback and validation, with a summary and slides for discussion with the CDB; submission of the draft final report to the OIE; review loops between the OIE and the CDB (potentially also the BMC), with feedback to the evaluation lead; and submission of the final report to the OIE. The OIE-approved final report then goes to CDB Senior Management for a Management Response, which is considered by the AMT; the final report and Management Response are submitted together to the OAC/Board for endorsement, and the report is then prepared for disclosure and dissemination.]



Notes to Figure 2

21. The OIE informed the Panel that this is an abbreviated version: there are additional steps (secondary processes) when evaluations are procured (by tendering or single source), when there are additional review loops, when updates are given to the OAC, and so on.

22. The Panel was also informed that the OAC may decide to return the report to the OIE, or to demand specific actions from Management based on the report.

This process is engaging and appears to have secured senior management and OAC interest and buy-in, as witnessed in the latest studies. But there is a downside too: the process takes much time and, in our view, is partly unnecessary. The Panel appreciates that staff from operations as well as the AMT may both want to confer on an appropriate management response, but this should not be the case for reviewing an independent report for factual errors. The two-phase approach seems inefficient.

Contact between the OIE, the CDB and/or the OAC during the actual study implementation is most often restricted to the occasional progress report, particularly when studies run behind time. There is no “accompanying group” for individual studies, which would include both internal and possibly external partners. Such advisory groups have shown their worth in a number of contexts for improving buy-in and for providing strategic input. The OIE does, however, arrange discussions for reflecting on emerging findings, but we are not sure how systematic this feedback loop is.

More generally, outside of an evaluation study the OIE has limited dealings with operations. The OIE has an advisory role in providing them with help, particularly training, guidelines and tools to support self-evaluations. We are nevertheless concerned about the seeming distance between the two and how this has affected the perceived value of evaluation. (For more on this point, see the section below on “Self- and Independent Evaluations”.)

But the Panel also wishes to stress that this is not the case for newly appointed senior managers. A much more open attitude to evaluation and appreciation of its potential value was evident;


they expressed interest in drawing out important lessons on what works, how, for whom, and under what conditions. In one case, this interest was followed up in practice: the OIE was recently invited by a senior manager to share evaluative knowledge and experience with his staff regarding policy-based operations.

Certainly, we can say that, overall, the key stakeholders within the CDB are adequately integrated into the evaluation process so as to foster their buy-in and ownership. But more generally, we feel that the utility of independent evaluations can be improved by fostering a supportive climate that wants to learn through calculated trial and error. The constructive criticism that evaluation can offer can add value to understanding the strengths and weaknesses of such strategies. This, however, cannot be achieved overnight.

3. Tools to support the evaluation process

So far, during this transitional phase, the OIE has mainly focussed on improving the tools to support the operations areas’ self-evaluations. This has left the OIE with little time to produce the checklists or tools to support its own studies. There are plans to develop an OIE Manual to guide and support the independent evaluation process. Such plans should be encouraged, as these documents will form a very important part of training, particularly for newcomers to the OIE team.

In the meantime, the OIE and operations staff refer to the Performance Assessment System (PAS) Manuals for evaluation activities. The manuals are based on DAC criteria and ECG principles. Much emphasis is given to the rating system and to how and what should be rated. However, we find the manuals lengthy, unwieldy and overcomplicated. Moreover, such manuals should be used for reference; they cannot and should not replace first-hand training in how to plan, conduct and manage the evaluation process.

Quality Assessment (QA) and Quality at Entry (QaE)

There was a transition period between 2012 and 2014 to establish the OIE. Work on the PAS, QaE, PCRs and ARPP, which had started earlier, was therefore completed after the OIE came into existence, but it effectively had no formal ‘home’ in operations. The Panel was told that there had been some discussion about creating a Quality Assurance unit within CDB (OPS), but the current status is unclear.

The QaE Guidance Questionnaire was developed earlier and completed by the OIE. It was used to assess the documents that came across to the OIE for comment at the review stage. The results were then sent to the Portfolio Manager/Project Coordinator, indicating any gaps or issues that needed to be addressed or clarified. QaE Guidance Questionnaires were developed for all the Bank’s lending products and CSPs, and to assess the quality of supervision.

After the QaE was launched bank-wide, several operations officers saw merit in using the QaE Guidance Questionnaire in the field and adopted it as a tool during appraisal missions to cross-check and test their data collection and analysis.

The OIE’s use of the QaE was discontinued in 2014 due to limited resources and a stronger focus on evaluations. The OIE still sometimes comments on specific appraisals, but very selectively.

Both the QaE and QaS (quality at supervision) are also addressed in the PAS Manuals. In addition, the QaE and PAS have been incorporated into Volume 2 of the Operations Manual (OPPM).

The Review Panel assessed the QaE forms. They are relatively standard, adapted to the specificities of the CDB. They contribute to judging a project’s expected quality in a relatively objective way. As such, they are helpful as a benchmark in the ex-post assessment of projects.

The Panel considers that the lack of an established Quality Unit in the CDB (independent of the OIE) is a weakness that should be addressed in the near future.


4. Credibility and Quality of Evaluation Products

As with many other MDBs, evaluation activities include both independent and self-evaluations; the latter take the form of completion reports on operational projects and country strategy programmes and are done by operations staff. The OIE then validates the quality of such reports. The self-evaluations should inform the more strategic studies conducted independently by the OIE. (More on the relationship between the two is provided later in this Review.)

An independent evaluation is processed as follows: the OIE prepares an Approach Paper (AP) for approval by the OAC. If the study is to be outsourced, the AP becomes the basis for a Terms of Reference (ToR), which, subject to the size of the budget, may be put to tender. The contracted evaluator then prepares an Inception Report (IR) after some desk and field research has taken place. This intermediary report is not produced when the OIE itself conducts the evaluation. Sometimes a progress report is submitted, but otherwise the next stage is the delivery of the final report in various drafts. (Assessments are like evaluations but more limited in scope and depth of analysis.)

Since 2012, the OIE has produced a range of studies and approach papers. This review is based on those listed below, as provided by the OIE, covering the period from May 2012 to December 2015: 3 evaluations, 4 assessment studies, 14 validations of self-evaluations, and 3 Approach Papers for upcoming evaluations. These are listed in Table 4.

Table 4: List of studies (N = 24) submitted to the Board for the period January 2012 to December 31, 2015

Board Meeting 251 (May 2012):
- Ex-Post Evaluation Report on Road Improvement and Maintenance Project, Nevis – St. Kitts and Nevis
- Validation of Project Completion Report on Sites and Services – Grenada
- Assessment of Effectiveness of Implementation of Poverty Reduction Strategy 2004-09

Board Meeting 253 (Oct. 2012):
- Assessment of Extent and Effectiveness of Mainstreaming Environment, Climate Change, Disaster Management at CDB

Board Meeting 254 (Dec. 2012):
- Assessment of the Implementation Effectiveness of the Gender Equality Policy and Operational Strategy of the Caribbean Development Bank
- Validation of Project Completion Report on Enhancement of Technical and Vocational Education and Training – Belize
- Validation of Project Completion Report on Fourth Road (Northern Coastal Highway Improvement Section 1 of Segment II) Project – Jamaica
- Assessment of the Effectiveness of the Policy-based Lending Instrument

Board Meeting 256 (May 2013):
- Validation of Project Completion Report on Expansion of Grantley Adams International Airport – Barbados
- Validation of Project Completion Report on Fifth Water Supply Project – Saint Lucia

Board Meeting 261 (May 2014):
- Validation of Project Completion Report on Immediate Response Loan, Tropical Storm Gustav – Jamaica
- Validation of Project Completion Report on Social Investment Fund – Jamaica
- Validation of Project Completion Report on Disaster Mitigation and Restoration – Rockfall and Landslip, Grenada

Board Meeting 263 (Oct. 2014):
- Validation of Project Completion Report on Basic Education Project – Antigua and Barbuda
- Approach Paper for SDF 6 & 7 Multicycle Evaluation

Board Meeting 264 (Dec. 2014):
- Validation of Project Completion Report on Policy-Based Loan – Anguilla
- Validation of Project Completion Report on Immediate Response Loan, Tropical Storm Arthur – Belize
- Evaluation of Technical Assistance Interventions of the Caribbean Development Bank Related to Tax Administration and Tax Reform in the Borrowing Member Countries 2005-2012

Board Meeting 265 (March 2015):
- Approach Paper for the Evaluation of Policy Based Operations

Board Meeting 266 (May 2015):
- Validation of Project Completion Report on Upgrading of Ecotourism Sites – Dominica
- The Evaluation of the Caribbean Development Bank’s Intervention in Technical and Vocational Education and Training (1990-2012)

Board Meeting 267 (July 2015):
- Validation of Project Completion Report on The Belize Social Investment Fund I Project – Belize

Board Meeting 268 (Oct. 2015):
- Approach Paper for the Country Strategy and Programme Evaluation, Haiti

The review and analysis of these documents are based on the UNEG Quality Checklist for Evaluation Reports (http://www.uneval.org/document/detail/607) as well as on ECG guidance (the Big Book on Good Practice Standards).

Approach Papers

Three Approach Papers (APs) were made available to the Panel (see Table 4 above). An AP describes the rationale for the evaluation, the background to the topic evaluated, the evaluation framework (criteria and questions) and the approach. It also describes the team and provides an initial plan. Being the first main deliverable of the OIE’s evaluation process, APs are the starting point and therefore a major determining element in the roll-out of each evaluation. APs therefore “have to get it right”.

The APs examined are clearly written, well-structured and of reasonable length.257 We were surprised to find, however, that they do not make explicit the objectives of the evaluated intervention(s), e.g., through a clear objective tree, or through an explicit theory of change, intervention logic or logframe. Whilst one of the APs contains, in an appendix, a results framework for the evaluation, the results framework for the intervention (PBO) itself is lacking.

Inception reports

Only one Inception Report was given to the Panel for review (SDF 6&7). This gives an in-depth description of the evaluated programme and provides a clear Theory of Change. It is good practice that this is established after a pilot field mission, which helps to amend the initial AP on the basis of field observations and sharpen the evaluation questions if needed.

However, it is still considered good practice to have the Theory of Change elaborated in the initial design documents. This would facilitate OIE evaluations after project completion. Establishing the Theory of Change of any intervention could be included more explicitly in the QaE form, to be developed jointly by the Quality unit referred to above and the OIE.

Evaluations and Assessments

Three evaluations and four assessment reports completed during the review period were considered. Assessments are similar to evaluations but have a narrower scope; they focus on a limited set of aspects or judgment criteria, mainly effectiveness, i.e. achievement of objectives.

257 Opportunities remain of course to be more concise and to move parts to appendices, e.g., detailed descriptions of the evaluation team or part of the description of the evaluated intervention.


Evaluations generally base their judgment on the internationally recognised DAC criteria as well as on aspects of the CDB’s and the BMCs’ management of the intervention.

In general, these reports are of reasonable quality. In the main, they explain the evaluated object258 and provide evaluation objectives. The findings are organised around the evaluation criteria or questions detailed in the scope and objectives section of the report. They are based on evidence derived from data collection and analysis methods as described in the methodology section. The reports tend to dwell on the limitations that the evaluation encountered, but without becoming defensive. In one case (PBL Assessment) the report starts with a summary of the reviews on the topic done by other MDBs. This was a pleasant surprise and indeed a good practice that could well be adopted in future evaluations too.

However, the reports also show several significant weaknesses:

- Reports do not always provide clear (reconstructed) intervention logics or theories of change for the intervention(s) evaluated.259 Evaluation criteria and questions are defined at a fairly general level. They are translated into more precise “research questions” (in an “Evaluation Design Matrix”, for each project and each criterion). However, it is unclear how these questions relate to the intervention logic (as this is not made explicit). This may be done in inception reports (of which, as noted above, only one was available for review), but it should be done in the final reports as well.

- The reports do not describe the link from the evaluation questions to the answers, how the evaluation judgments are made, and how these ultimately translate into ratings for each criterion and each project. In other words, the explanation provided in the evaluation frameworks is inadequate. The “evaluation design matrix” currently used does not provide sufficient insight into how an intervention’s performance is ultimately judged. Links between findings, conclusions and recommendations could be improved by making this more explicit. In other words, reports should include the story of how the evaluand is credibly linked to any observed outcomes and impacts, and should be clear on how causal claims are made.

- With the exception of the PBL Assessment, reports are lengthy and detailed. One reason for this is an over-emphasis on ratings. Their detailed discussion, project by project, criterion by criterion, occupies a very prominent position in the evaluation reports’ main body of text. Although ratings are traditionally an important element in evaluations of MDBs, too strong an emphasis can be tedious and may distract the reader from the real lessons to be drawn. The detailed discussion of ratings, and their evidence base, would be better placed in an Appendix, with a brief summary in the main report. This would help give the lessons and recommendations a more prominent position than is now the case. This would also help make the evaluation reports not only shorter but also more interesting to read; this could help add value to evaluation’s image within the organisation.

- The reviewers feel that the OIE evaluations tend to over-emphasise objectives-based evaluation261 and the DAC criteria to the exclusion of other evaluation approaches such as Developmental Evaluation (Patton, 2010)262; evaluation should be case-specific and answer the actual information needs of managers and other decision makers rather than always concentrating on final performance.

258 Sometimes at great length: in the SDF 6&7 multicycle evaluation report, for instance, the findings begin only at page 30.
259 Again with the SDF 6&7 evaluation: it is said to be guided by a “Logic Model” which is not explained.
261 The focus of an objectives-oriented evaluation is on specified goals and objectives and on determining the extent to which these have been attained by the relevant intervention. See, for example, Worthen, Sanders, & Fitzpatrick (1997). Program Evaluation: Alternative Approaches and Practical Guidelines (2nd ed.). White Plains, NY: Addison Wesley Longman.



- Related to the previous point (and again with the exception of the PBL Assessment), executive summaries (approximately 8 pages) are too long. To increase an evaluation report’s potential impact, they would need to be reduced to 2 to 3 pages and be more focused; again, this could be done by dwelling less on the individual ratings of projects and more on key findings, lessons and conclusions. More generally, reports could be better adapted to the needs of different audiences. Although not strictly limited to evaluations, the Health Evidence Network reports263 are a model that could be adapted for evaluation reporting purposes; they are specifically geared towards addressing policy and decision-making.

- The “Recommendations to BMCs” are an interesting feature of the reports, although we are unsure to what degree such recommendations can be effectively followed up by the OIE or the Bank; they could certainly be taken up with BMC Board members.

- Reports (e.g. the evaluation report on Technical Assistance) focus heavily on technical problems encountered during the evaluation. Although these are important issues, to improve the report’s flow and readability this material would be better placed in an Appendix. What counts is the story of the intervention, not the story of the evaluation (see, for instance, the “Limitations” section in the TA report).

OIE Validations of Project and Country Strategy Programme Completion Reports (referred to collectively as PCRs hereafter)

As noted above, the OIE has the mandate to validate the Project and Economic departments’ PCRs and CSPCRs. However, in this period of transition, much of the OIE’s work since 2012 has been devoted to dealing with the backlog of CDB self-evaluation validations. In theory, an estimated 15 completion reports are due each year; in practice, delays in submitting the reports for validation are commonplace. With the change of Head in June 2014, the OIE therefore secured the OAC’s agreement to reduce the number of validations to a maximum of 6 per year. Even so, the backlog continues to accumulate, as only 2 PCRs were given to the OIE for validation in 2015.

The validations tend to repeat the different items reported in the PCRs and then provide extensive comment on each. The PCVRs go into great depth and detail, which makes the documents rich and complete. This is their strength, but also their weakness. The depth and level of detail, as well as the repetition of material from the original PCRs, make PCVRs (overly) lengthy (20-40 pages) and difficult to read. The OIE reported spending approximately 27.2% of its time on validating PCRs in 2015, compared with 44.4% on its core work, i.e. doing or managing the higher-level evaluations; in other words, for every hour spent on higher-level evaluations, more than half an hour went into the validation process. Finally, the PCVRs now seem to be, to a great extent, a standalone output of the OIE. It is not always clear to us how they are being used as the “building blocks” for the OIE’s independent evaluations. Making this clearer in the independent evaluations would help show the link, and therefore the value, of the time being spent on the self-evaluation validations.

To conclude, the review finds that the OIE has taken steps to improve the perceived utility of evaluation in several ways. In the first instance, by planning its work to provide relevant and timely evidence geared towards helping the Board with its oversight and decision-making tasks; the topics are selected through dialogue between the OIE and key CDB stakeholders and reflect the

262 Patton, M.Q. (2010). Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use. Guilford Press.
263 See the reports available from the WHO’s Health Evidence Network at http://www.euro.who.int/en/data-and-evidence/evidence-informed-policy-making/health-evidence-network-hen


priorities of the CDB’s strategic plan. Secondly, by securing the interest, and consequently the buy-in, of the OAC and CDB senior management through engaging their input throughout the evaluation process. This is evidenced by the reported interest in the latest three studies: the Country Strategy Programme evaluation in Haiti, the evaluation of policy-based operations, and the SDF 6&7 multicycle assessment.

The OIE’s products are of acceptable quality and could be even better if some of the shortcomings noted above were addressed. The products themselves, however, do not impair the utility of the OIE’s work; this is undermined in other ways: (1) by the delays in commenting on PCRs (on the OIE’s side) and in providing feedback on the independent evaluations (on the side of operations and management); and (2) by the inefficient processes for agreeing topics and funding sources, and for providing the OIE with management responses to its reports.

Putting Evaluation to Use: transparency, feedback and follow-up

There are several ways in which evaluation can be, and is being, used. As John Mayne has pointed out in his many publications on the issue,264 when we talk of evaluation use we are mainly thinking of its instrumental use: use made directly to improve programming and performance. But there is also conceptual use, which often goes unnoticed or, more precisely, unmeasured; this refers to use made to enhance knowledge about the type of intervention under study in a more general way. There is also reflective use: using discussions or workshops to encourage and support reflection on the evaluation findings to see how they might contribute to future strategies.

In the case of the CDB, there is some evidence to suggest that use is not only instrumental, but that other types are also developing. For example, the review of draft evaluation reports includes reflective workshops that not only discuss the findings but also seek to draw out the important lessons (reflective use).

Another important use, as recommended by the ECG, is to draw together, from time to time, a synthesis of lessons from a number of evaluations and make it publicly available. In fact, the Panel was impressed to hear that in the past the evaluation unit had done this, drawing on lessons from evaluations of the power sector (conceptual use). Although nothing has happened since, it is now on the “to do” list for 2016 (OIE’s 2016 Work Plan).

As for instrumental use, responsibility for using the knowledge generated through evaluation, and possibly for drawing up an action plan of what should be done, lies with CDB senior management and the relevant CDB department and division. Oversight of the application of recommendations and the take-up of lessons drawn is the responsibility of the OAC.

Evidence on how evaluations have actually contributed to decisions or negotiations is lacking or confusing. Certainly, the OIE is unaware of the extent to which its evaluations are put to use. On the one hand, the OAC minutes sometimes indicate that lessons learned are integrated into the next phase. On the other hand, the reviewers were told that, often in the past, the evaluation results were “too old” to be of use, as the lessons had already been drawn and used well before the report was completed. Gaps in people’s memories of how evaluative information from previous studies was used may also account for the scarcity of evidence.

In response, the Panel questioned CDB staff and the OIE about a particular study, the Technical and Vocational Education and Training Assessment. The feedback was somewhat contradictory. On the one hand, the study was criticised as “confirming” news rather than bringing “new news”. On the other, we learned that in October 2015 the Board of Directors approved a proposal for the revision of CDB’s Education and Training Policy and Strategy. Work on this has already begun, and an external consultant has been engaged to lead the process.

264 See, for example, his opening chapter in Läubli Loud, M. and Mayne, J. (2014). Enhancing Evaluation Use: Insights from Internal Evaluation Units. Sage Publications.


Although it is one of the OIE’s tasks to set up a database of results and lessons learned from evaluations, so far this has not been a priority. There is also currently no systematic tracking of the lessons or recommendations arising from the evaluations, or of any progress in their uptake. (The Panel has already referred above to the OAC’s lack of oversight of the use of evaluation.)

The OIE’s role in supporting CDB’s organisational learning is clearly specified in the Evaluation Policy, with many good suggestions for knowledge-sharing activities such as “brown-bag lunches, workshops, pamphlets and short issues papers” (p. 19). So far, however, the OIE’s lead role on the knowledge-sharing side appears to be quite limited. It has provided advisory input in Loans Committee discussions, and organises workshops together with the relevant operations department for discussing the implications of evaluation studies. Ultimately, of course, the uptake of evaluation results and knowledge is in the hands of management. But the evaluation unit has an important role to play as knowledge broker and knowledge manager; both roles have tended to be underplayed in the OIE’s work plan so far.

Transparency: The Communication Strategy

Recently, with the approval of its new Disclosure Policy, the CDB has started to post its independent evaluation reports on its website. (There is nothing on the self-evaluations.) The website also presents a good overview of the role and function of the OIE and of evaluation within the CDB. This is a step in the right direction for sharing information. In our view, however, the CDB’s communication strategy is the weakest part of the evaluation system to date.

The Panel has already commended the OIE on its efforts to engage the CDB and the OAC in evaluation work. But reporting and communicating the lessons seem to be targeted entirely at the Board and the CDB. Moreover, the 2015 budget provides only US$2,000 for communication, none of which is intended for outreach.

The Reviewers find engagement with the more indirect stakeholders – for example, project implementers in the BMCs, NGOs or project beneficiaries – to be relatively weak265. There appears to be little reflection on drawing out significant messages for this broader group of stakeholders, or on how then to transmit them to the “right” people in the “right” way (knowledge brokerage).

To conclude, evidence on the uptake of evaluation is either confusing or sparse. It is unfortunate that no systematic record-keeping has so far been put in place to track lessons learned or the uptake of recommendations (or the actions agreed in management responses). The OIE plays a weak role in brokering the knowledge generated through evaluations to the benefit of external partners, and in managing such knowledge. Although the Evaluation Policy specifies the need for “distilling evaluation findings and lessons learned in appropriate formats for targeted audiences both within and outside the CDB” (p. 19), such a targeted communication strategy has yet to be developed and budgeted.

Strengthening Evaluation Capacities and Networking

From the outset in 2012, the OIE has stressed the importance of developing and strengthening evaluation capacities within the OIE, the CDB and, subject to available resources, in borrowing member countries. Building evaluation capacity in the BMCs and the CDB is one of the OIE’s mandated tasks, and it has figured on the work plan from the beginning (Work Programme and Budget 2012-2014). Developing an internship programme for graduates from the Caribbean region was one idea advanced to help build local evaluation resources. However, capacity-building to date has primarily been focused on OIE and CDB staff. One of the OIE’s two objectives for 2015 was therefore to take up the challenge and “strengthen evaluation capacities and networking”, to include reaching out to the BMCs.

265 A broader communication strategy is one of the principles and good standards of the Evaluation Cooperation Group and the evaluation community more generally.


Developing OIE staff capacities

The change from project-level to strategic and thematic evaluations requires different evaluative skills and competencies. The MDB Evaluation Pyramid, presented in Figure 3 below, shows the different types of evaluation and the changing resource needs as one ascends the pyramid. Implicit here, too, is the change in the type of expertise and competencies needed as evaluation aspires to the higher levels.

Consequently, for 2015 the OIE set itself the objective of networking and developing working partnerships with regional and international evaluation entities and academic institutions. The rationale was twofold: (1) to secure further support and guidance, and (2) to increase its outreach and coverage through joint work and international exposure. Another implicit aim was to benefit from partners’ contacts in the BMCs wherever possible, so as to improve data collection and quality.

Figure 3: The MDB Evaluation Pyramid266

The OIE has therefore linked up with Carleton University in Canada and the University of the West Indies, Barbados campus. The OIE was also approached by the Development Bank of South Africa to exchange experiences about setting up an evaluation entity in a “small” development bank. However, its attempt to become a member of the Evaluation Cooperation Group was not successful for reasons beyond its control.

The OIE is to be commended for addressing the issue of staff competencies and professional development more generally. New developments in evaluation, as well as in the scope of the OIE’s work, may necessitate new competencies. For this reason, organisations such as the International Development Evaluation Association (IDEAS) have recommended that the competencies of evaluators and evaluation managers be periodically reviewed. Several publications now exist on competency requirements and suggestions for the periodic review of staff competencies.267

266 US Treasury Report to Congress on Evaluation Standards and Practices at the Multilateral Development Banks (September 2014, Annex C).

It is not within this Review’s remit to compare and contrast the OIE’s competencies with those recommended by international and national agencies. What we can say, however, is that the OIE demonstrates great forethought in taking this issue on board.

Capacity building within CDB

The OIE’s objective also includes continuing to develop measures for improving the monitoring and self-evaluation side of the CDB’s work. The OIE’s strategy here is to use the windows of opportunity offered by some of the training sessions being organised by the CDB as part of its shift towards MfDR, e.g. by Corporate Planning Services and Technical Assistance. For 2016, it is also planned to have the OIE present at the annual staff meeting and Learning Forum.

The OIE also organises some ad hoc training with operations, for example to help staff understand new tools (e.g. for drawing out lessons from self-evaluation reports) and, more generally, to help staff appreciate how evaluation can add value to the organisation’s work. Measures include providing advisory services on demand, and providing training alongside the introduction of new or revised tools.

Capacity building in the BMCs

This is an ambitious task and would require investment additional to the bi-annual work plans to be effective. A modest attempt was made in 2015; from what we understand, the OIE has joined with Carleton University and the University of the West Indies, using their networks in some of the BMCs, to try to develop this aspect.

To conclude, we cannot comment on the quality of, or reaction to, such training, but we can commend the OIE for making capacity building one of its priority objectives. From both the Policy and the documents we reviewed, we note that capacity building was always seen as an important aspect of the OIE’s work, but it has hitherto received little strategic focus. The resources currently available to the OIE will limit the scope of such work in the BMCs which, in turn, will continue to hinder the production of sound evidence for the OIE’s evaluations.

Adequacy of the OIE’s human and financial resources to support its work

OIE’s Human Resources

The OIE has a staff of five: the Head, one senior evaluation officer and two evaluation managers, plus one administrative assistant. Three of the five were recruited from within the CDB. This limited capacity means that it is not feasible to cover all the types of evaluation activity outlined in the Evaluation Policy. Yet there is some indication from the Board that the OIE should embark on impact evaluations at some future stage. An increasing demand for evaluation, and for impact evaluations in particular, would run the risk of overstretching the OIE’s capacity to deliver credible and useful evaluations. Moreover, there are many other designated OIE activities that should be recognised as valuable work: the validations, building CDB and BMC evaluation capacity, providing supervision, advice, knowledge management and brokerage, as well as managing evaluation contracts. The time needed for all of these may be underestimated in the OIE’s budgets; all are important for assuring best value from evaluation. The Panel is concerned that a demand for “doing” evaluations, as well as the OIE’s interest in advancing its skills in high-level evaluations, may undermine the importance and time needs of other essential tasks.

267 E.g. IDEAS (2012) Competencies for Development Evaluation Evaluators, Managers and Commissioners; the Canadian Evaluation Society’s Competencies for Canadian Evaluation Practice (2010); and the Swiss Evaluation Society’s Evaluation Managers Competencies Framework (2014).

Limited and unpredictable resources for independent evaluations

The OIE is funded from the general administrative budget and represents approximately 2.5% of the total. Whilst this is seemingly a higher proportion than in other MDBs, in real terms it is quite limited: 75% of the OIE budget is for staff salaries, leaving US$190,000 in 2015 for external consultants and other expenses.
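As a rough consistency check – a sketch using the 2015 figures reported in Appendix III, which put the total OIE budget at about USD 0.78 mn – the salary share and the remainder reconcile as follows:

\[
0.75 \times \mathrm{US\$}780{,}000 \approx \mathrm{US\$}585{,}000 \ \text{(salaries)}, \qquad \mathrm{US\$}780{,}000 - \mathrm{US\$}585{,}000 \approx \mathrm{US\$}195{,}000,
\]

which is consistent with the roughly US$190,000 reported as available for consultants and other expenses.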

CDB’s donors do not appear to specify a budget for monitoring and evaluation activities. This means, on the one hand, that there is no clear external budgetary recognition of the operations’ self-evaluation work or of the OIE’s time in the validation process and, on the other, that whilst donors expect to receive reports from independent evaluations, the expectation is not backed by an explicit budget line when funds are allocated.

Resources available to the OIE for hiring external consultants have dropped from US$350,000 in the revised 2014 budget to US$120,000 in the 2015 indicative budget. The OIE estimates that, for high-level evaluations, the cost of external consultants is between US$90,000 and US$350,000 (the SDF 6/7 evaluation cost US$255,000). In the Panel’s experience, this is a sound estimate. With one staff member fewer during 2014-2015, coupled with the OIE’s focus on dealing with the backlog of self-evaluations amongst other priorities, the OIE was unable to execute some of the evaluations during the annual budget period. The budget was therefore reduced for subsequent years, but has proven insufficient to fund the OIE Work Programme. The OIE has therefore needed to turn to the only alternative source available at present, the SDF fund. But the SDF funding rules apply to specific countries and themes, which obviously restricts the OIE’s choice of evaluation subjects and themes. Since the SDF does not allow for OIE recurring costs such as staff travel, the SDF evaluations have to be outsourced. As presented in Figure 1 above, the approval process is inefficient and causes delays. The Panel learned that additional funds, for example for specific studies, could be secured from within the administrative budget during the year, on condition that the request was based on sound arguments.
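To illustrate the squeeze – a back-of-envelope sketch using the figures above – the 2015 consultant allocation can be set against the OIE’s own per-evaluation cost estimate:

\[
\frac{\mathrm{US\$}120{,}000}{\mathrm{US\$}90{,}000} \approx 1.3, \qquad \frac{\mathrm{US\$}120{,}000}{\mathrm{US\$}350{,}000} \approx 0.34,
\]

i.e. the administrative budget alone would fund at most one high-level evaluation a year at the low end of the cost range, and only about a third of one at the high end – hence the recourse to SDF funding.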

Whilst the Panel fully appreciates that the Bank is operating within a zero-growth framework, the reviewers were surprised to learn that OIE funding is not securely aligned with its priorities and work plan. The need to seek alternative funding for individual studies does not allow for any flexibility, and it undermines the OIE’s independent judgment of what needs to be done.

To conclude: the OIE is inadequately resourced to meet the expectations outlined in the CDB’s Evaluation Policy. The Panel recognises that the CDB itself faces budgetary restrictions, but the current arrangements for securing extra funding are complicated and inefficient, and they limit the OIE’s ability to exercise autonomy in the selection of its evaluation studies. Moreover, OIE budgets significantly underestimate the time needed for managing evaluations and other evaluation activities.

Self- and Independent Evaluation

Self-evaluations cover public sector investment, lending and technical assistance, policy-based loans, and country strategy programmes. Both types of evaluation are important, as they are at the very heart of the evaluation function; the self-evaluations are said to be the building blocks for the more strategic evaluations that the OIE is now undertaking.

The Evaluation Cooperation Group recommends that self-evaluations be carried out by the relevant operations department and, in turn, reviewed and validated by the organisation’s independent evaluation office. The CDB’s Evaluation Policy therefore identifies “validating all self-evaluations” as one of the OIE’s essential oversight tasks.


Within the CDB, the self-evaluations should provide management with performance assessments and thereby serve an accountability function to the CDB and the Board. To support the process, the OIE provides operations with manuals and checklists for guidance. Once a self-evaluation report is to hand, it is passed to the OIE for validation of its technical quality and credibility.268

In the CDB’s case, however, there are well-documented issues that have affected the quality and timeliness of the self-evaluations and, therefore, the quality of the foundation on which the independent evaluations are built. Paucity of documentation within the CDB, paucity of data collected and available in the Borrowing Member Countries (BMCs), delays in producing completion reports and, in turn, in having them validated by the OIE – all these issues were raised systematically during interviews and in some of the independent evaluation reports. There appears to be little incentive to complete self-evaluations in a more timely manner.

Generally speaking, many of the monitoring data problems appear to be due to a lack of management oversight. For example, with the introduction of results-based management, the logical framework and monitoring and data needs are now systematically built into intervention design. However, the BMCs are not delivering the data as contractually agreed at the outset. Incentives to support any significant shift towards a results-based culture seem to be weak, and sanctions are rarely enforced when the supply of data is lacking or lengthy delays to projects occur. Although we appreciate the complexities of trying to enforce monitoring compliance, the consequence is that project deadlines have often had to be extended, data gaps are not being satisfactorily dealt with and, in turn, there is a void in the quality and quantity of evidence available for the CDB’s self-assessment of project performance. For some time, this lack of oversight has been tolerated. Part of the problem is the low priority accorded by operations to completing the self-evaluation reports, coupled with the absence of any focal point within senior management to drive the process and deal with the problems.

No record is kept of how the self-evaluation results are actually used. They do not appear on the CDB website, although we were told that the findings are integrated into subsequent project designs. We are therefore somewhat unclear as to the present utility of these reports. The situation is exacerbated by a rather confused image of evaluation: some operations staff consider the OIE’s input (through validations or independent evaluations) to be at times over-critical, regulatory and of little added value – a threat rather than an opportunity for learning. Yet at the same time, evaluation is recognised as an integral part of results-based management.

According to the Evaluation Policy (p. 15), “The President, with the support of the Advisory Management Team, is accountable for encouraging and providing an environment where evaluation adds value to the overall management of CDB’s activities and fosters a culture of critical analysis and learning”. But in the CDB, a learning culture appears still to be in its infancy. The leadership role as expressed in the Evaluation Policy is underdeveloped.

Some managers, however, seem to be starting to change the status quo. For example, a revised and simplified template for producing project completion reports is being considered, and mid-term project reviews are expected to be more stringent in examining monitoring plans and practices and in tying disbursements to performance. In some cases we also learned of incentives being introduced to encourage project managers to complete their reports in a more timely manner. But much remains to be done and, since the OIE is no longer responsible for monitoring and project evaluations, there is a void that needs to be filled. It is up to line managers to drive this work forward.

To conclude, it is fair to say that, in view of a number of “frustrations” between the OIE and operations – largely to do with delays in exchanging comments on the various reports as well as the paucity and/or lack of monitoring data – the added value that evaluation might offer to the operations area is ill recognised. Moreover, the link between the self-evaluations, as building blocks, and the independent evaluations is not apparent. Thus there is little incentive or management focus to drive any change to current practices. In other words, there is a lack of leadership to advance a learning environment in which evaluation can play a major part.

268 According to the Evaluation Policy, the OIE should validate all PCRs and CCRs but, due to the backlog of reports and the delay in completing them (sometimes years later), since October 2015 the OIE has secured OAC agreement to validate a maximum of six per year, selected in consultation with the OAC.


Part Three: General Conclusions and Recommendations

To conclude, with regard to the Evaluation Policy and the OIE’s independence, our Review finds that over the past few years the CDB has succeeded in establishing an independent evaluation office that is credible and respected. It reports to a Board Committee and is thus organisationally independent from CDB management. Its work is grounded in an Evaluation Policy, agreed by the Board and the CDB, that reflects internationally recognised principles and good practices. The Policy sets out a broad scope of responsibility for the OIE which, however, seems over-ambitious given current resource constraints. The OIE clearly has both an accountability and a learning function; the latter should support the development of an organisational learning culture. (So far, the uptake of recommendations and key lessons has not been systematically monitored or recorded.) In general, on the issue of independence, we can conclude that the OIE meets the criteria for organisational and behavioural independence and is protected to a certain degree from external or contextual influences.

However, as the Independent Advisory Committee for Development Impact has said, “independent evaluation needs to have clout … credibility of evaluation hinges on public perceptions as well as on reality.”269

We therefore highlight a few potential threats, even though there is no evidence to suggest that they are in any way real at present. It would be in the OIE’s and the CDB’s interest to have these clarified sooner rather than later. For instance:

Any delays incurred in reporting self- and independent evaluation results to the Board could be interpreted as operational interference.

Similarly, there is no agreed process for dealing with conflicts between the OIE and management over the reporting of results; it is simply expected that any disagreements will be recorded in the management response.

Another possible threat is the Head of the OIE’s lack of complete autonomy over staff recruitment, termination, continuation and professional development. The Policy is not sufficiently clear about who has the final word in the case of disagreement.

And finally, on resources: our Review accepts the limited funds available to the CDB and the fact that the OIE’s budget is not independent but operates within the Bank’s budgetary limitations. Nevertheless, we feel that more flexible arrangements could be devised that would allow less restrictive and timelier access to funds.

With regard to governance, our Review has highlighted the difficulties the OAC faces in not receiving the background papers for its meetings in sufficient time to do them justice. Moreover, these documents tend to be very lengthy and not necessarily “reader friendly”. The OAC’s oversight responsibility is likely to be weakened as a result, and we can already see some indication of this. For instance, requests for systematic follow-up on management actions resulting from evaluation findings have not been answered. Nor is there a standing item for this on the OAC agenda, so such requests can easily be passed over and forgotten. The broadened responsibilities now given to the OAC also mean that many competing entities are trying to secure its attention. There is now provision for the OAC to call on consultants for help, which we feel may strengthen the OAC in its oversight responsibilities.

Furthermore, in its capacity as a committee of the Board, the OAC should stress the urgency of developing evaluation and monitoring capacity in the BMCs, since this gap has a direct impact on OIE and CDB evaluations.

With regard to the OIE’s performance, we respond to the questions raised in this Review’s Terms of Reference, which essentially amount to two main questions: Is the OIE doing the right thing? And is it doing it in the right way?

269 Picciotto, R. (2008) Evaluation Independence at DFID: An Independent Assessment prepared for the Independent Advisory Committee for Development Impact (IACDI), p. 4.


There is no doubt that the decision to establish a credible, independent evaluation function in the CDB is the “right thing” to do: “effective and useful evaluation and oversight activities can assess development effectiveness, hold the organisation accountable for results, and improve operational performance.”270 It is also a policy of the MDBs to have such a function, and the CDB has now aligned itself with international standards and practice.271 The question now is therefore the following: is the OIE going about it in the right way?

The OIE has taken the “right” steps to improve the engagement and interest of the OAC and CDB senior management, from selecting the topics for its evaluations through to finalising the conclusions and recommendations in a collaborative spirit. It falls short, however, of taking the messages emerging from the studies to “outsiders”, such as those responsible for implementing CDB interventions in the BMCs.

In its oversight role, we feel that the OIE has paid insufficient attention to the actual utilisation of evaluation; it is beyond its responsibility to see that action is taken, but it is certainly within its remit to record how, and how well, the lessons drawn have been taken up and used. With regard to its oversight of the self-evaluations (the validation process), the OIE has attempted to improve dialogue with the operations departments and to demonstrate the dual function of oversight and learning. It is now emphasising the learning aspect by providing tools and guidance on how to draw out lessons and integrate them into future planning. More recently, it has sought to provide more formalised training on evaluation by working with the corporate planning services and technical assistance department to develop courses that show how, where and when evaluation plays its part within the MfDR framework.

However, one of the challenges in evaluation management is balancing independence with facilitating buy-in and ownership. It is a fine line to walk, and depends to a large degree on the climate between management and the head and staff of the independent evaluation unit in setting the tone of the collaboration. In practical terms, for the CDB this means defining the role of the OIE in relation to the self-evaluations performed by the Projects and Economics Departments. The change from the EOV to the OIE made this role change quite clear: the OIE no longer has responsibility for project monitoring, or for planning data needs together with the operational departments. On the other hand, to improve understanding and learning, there needs to be an interface between evaluation and management. At present, the OIE’s dual role – its advisory role in relation to operations and its strategic role towards the OAC and senior management – has not been satisfactorily resolved. Operational staff still do not appear to see any urgency in producing their completion reports, or to appreciate what lessons might be drawn from such reflection. The OIE is doing its best to support “learning” whilst, at the same time, keeping at arm’s length. The greatest challenge the OIE faces in its new capacity is the slow development of an organisational learning and evaluation culture.

A Learning and Evaluation Culture

Evaluation utility depends on the engagement of evaluation users – those who should benefit from the knowledge generated through the studies. Useful evaluation therefore depends to a large degree on the development of an evaluation and learning culture, and on how well these are embedded in the organisation. This means that the organisation recognises and appreciates evaluation’s role and the functions it can have, particularly in helping it understand what it is achieving and where and how improvements can be made. In short, the added value that evaluation can bring to the organisation lies in its ability to draw out the important lessons that can help improve the organisation’s performance.

However, whilst CDB senior management shows all the signs of embracing evaluation as an important strategic tool, there still appears to be some apprehension about receiving criticism, however constructive this might be. The OAC has already affirmed its interest in learning what can be “put right the next time around”. In considering accountability, the committee is asking for a more strategic approach to learning and sharing knowledge based on evidence. The CDB also shares the development goals of other MDBs, that is, “to end extreme poverty and promote shared prosperity”. This means looking for new forms of problem-solving and for ways to create a “development solutions culture”. Hence there is an interest in learning from experience and exchanging knowledge about what works. This implies balancing accountability and learning, making sure they are seen not as opposites but as compatible. This greater emphasis on learning requires a reframing of the CDB’s thinking and a willingness to deal with the constructive criticism that evaluation can offer.

270 CDB (2011) Evaluation Policy (p. 2).
271

Weak evaluation culture

27. While some stakeholders seem keen on evaluation, the overall evaluation culture in UNRWA is weak. There are several aspects to it.

28. First, many of the interviewees stressed that UNRWA has a weak learning culture, which stems from a number of factors. One reason given relates to the cultural virtue of oral communication, which makes conveying documented experiences challenging. Another reason is language: a majority of UNRWA’s national staff are not fluent in English (evaluation reports are mostly in English). Furthermore, criticism – even if constructive – is, according to some interviewees, mainly perceived as a threat and not as an opportunity. Finally, learning is also affected by a very basic constraint – lack of time.

29. Second, there is a weak knowledge management system for systematically collecting and sharing experience and lessons learned in UNRWA. UNRWA communities of practice do not exist; several interviewees mentioned using knowledge networks outside UNRWA, i.e. communities of practice managed by other agencies. Accessing evaluation reports is also not easy: the UNRWA website does not provide access to evaluation reports and, while the Agency’s intranet has a site for evaluation reports, it is not a complete repository, and the Evaluation Division does not know exactly how many decentralised evaluations are being produced. In addition, there are only a few evaluation plans at the level of field offices or departments.

30. Third, the Panel found that decentralised evaluations are – at least partly – perceived as donor-driven accountability instruments rather than as learning tools. In that sense, evaluations are managed as bureaucratic requirements, thereby weakening the learning dimension.

31. Finally, the sensitive political context in which UNRWA operates may also discourage a strong evaluation culture, as evaluative evidence can sometimes be overridden by political considerations.14 The Panel was repeatedly told that, given the political context, any change is a challenge.

14 An example mentioned to the Panel was the evaluation of the Qalqilya Hospital (2013), which concluded that the Hospital should be closed. However, for political …


Recommendations

(Here is a list of some possible recommendations – to be discussed and developed within the Review Panel initially, and then proposed for discussion together with the OIE.)

OAC’s oversight responsibility needs to be strengthened (possibly).

Review the Evaluation Policy to redress gaps.

OIE to develop a 5-year strategy providing a step-by-step work programme, budget and timeframe for implementing the Evaluation Policy.

Stronger leadership from the President, to provide a climate conducive to promoting the added value of evaluation to the overall management of the CDB and to fostering a culture of critical analysis and learning.

Stronger support from the Advisory Management Team for the evaluation function, by emphasising the accountability of CDB managers for performance results and by devising incentive schemes to support the accountability function.

Set up committees (advisory groups) to accompany specific evaluation studies as a means of reinforcing ownership.

The OIE could contribute to developing a learning culture within the CDB by adopting the role of critical friend in its dealings with the CDB operations departments and the CDB more generally. Valuable insight can be gained by having a focused conversation with the client about aspects of an evaluation that went well and those that did not go so well. Some meetings might include all relevant stakeholders in addition to the main client. This feedback can then inform a subsequent internal project team debrief (e.g. identifying internal professional development or process improvement needs), and input from all perspectives can inform planning for future projects. A summary of these discussions can then be circulated so that everyone can use it for their own internal or external quality improvement purposes.

Rather than asking “what went wrong?”, one can ask “what surprised us, what would we do differently, what did we expect to happen that didn’t, and what did we not expect to happen that did?” This is a better means of getting at the negative aspects without placing blame.

OIE to train up and engage “champions” within CDB operations departments to help demonstrate evaluation utility and provide “on the job” training in self-evaluation to colleagues.

A quality control group could also be set up, as Picciotto suggested for DFID, to carry out and develop the work of Quality Assessment and, particularly, Quality at Entry monitoring.

The OIE should be given the resources to build M&E capacity in the BMCs as a priority, possibly in partnership with other MDBs, development organisations and the countries themselves.

Possibly criticise the over-emphasis on using the five DAC criteria, which have now been in use for more than 15 years without any major revision. Given their importance and level of influence in the field, it is pertinent that independent professionals take a critical look at them, especially since many scholars and practitioners consider that the quality of evaluations in development aid has been quite disappointing (Chianca, T. (2008) “The OECD/DAC Criteria for International Development Evaluations: An Assessment and Ideas for Improvement”, Journal of MultiDisciplinary Evaluation, 5(9), March 2008; http://evaluation.wmich.edu/jmde/).

Regarding the validation of self-evaluations: it is recommended that a leaner format be developed for the PCVRs, repeating less of the content of the original PCR text and focusing on the validity of the PCR against a limited number of key issues to be assessed for each PCR. The PCVRs are also written out at considerable length, which may not be needed for this type of document – a more tabular format with more succinct statements could lead to a leaner process for producing the PCVRs, without any loss of usefulness. The “PCR checklist” would be a good starting point for this.

The link between the self-evaluations, the validations and the independent evaluations is not clear at present, nor is the link between self-evaluations and QaE documents – so one wonders what all the effort on the operations side is for. This is a real issue: a lot of interesting and reasonably good work is being done, but there is a lack of coherence. (This observation is based on the documents alone, without interviews to give a broader picture.)

The EIB evaluation unit was criticised for this in the past too. Since then, it has started to include “younger” (sometimes still ongoing) projects in its samples, and it redoes the portfolio analysis just before finalising a report to see whether things have changed; the services can, of course, indicate in their response whether things have indeed changed over time.

Recommendations for improving the process for study approval and funding

Give recommendations on priorities for OIE work

Funding preferably from the administrative budget. Unused monies could then be released in the annual budgetary reviews, but this should have no effect on the budget for subsequent years. SDF funding at a leve… It is surprising to find that a Board-approved OIE work programme and budget is inadequate; either the proposed budget per work programme …


The Panel, however, encourages the creation of such a quality control unit, the role of which cannot be fulfilled by the OIE as it lies outside the OIE’s scope and present capacity – even though the OIE could have an advisory/methodological role.


APPENDICES

Appendix I – The External Review Mandate: Terms of Reference and Approach Paper

Appendix II – Review Approach, Data Collection and Analysis, and Limitations

Appendix III – Overview of OIE Evaluation Practice

Appendix IV – List of Persons Interviewed

Appendix V – List of Documents Reviewed

Appendix VI – List of Topics used to guide interviews with members of CDB Board of Directors

Appendix VII – List of Topics used to guide interviews with CDB staff


Appendix III – Overview of OIE Evaluation Practice (prepared by the OIE in response to the Reviewers’ request)

Caribbean Development Bank, Office of Independent Evaluation – OIE

Percentage of projects subject to project (self-)evaluation: 100% – Project Completion Reports (PCR).

Percentage of projects subject to validation by OIE: approximately 40-50%. About 15 projects exit the portfolio annually. The Evaluation Policy calls for all PCRs to be validated; however, OIE resources are insufficient. The validation process was reviewed in 2014; the OAC (a Board committee) now selects a sample of 6-8 PCRs for validation each year. (See the arithmetic check following this table.)

Percentage/number of projects subject to in-depth review by OIE: none, unless specifically requested by the OAC. Due to limited resources, the focus of the OIE evaluation work programme is on PCR validations and high-level evaluations, including country strategy and programme evaluations (CSPE).

Number of high-level evaluations conducted by OIE (e.g. sector, thematic, geographic): 1-2 per year since 2011. The plan is 2-4 per year from 2016; this would include CSPEs (the first, for Haiti, is planned for Q1 2016).

Number of project impact evaluations conducted by OIE: none. The OIE includes “impact questions” in high-level evaluations.

Number of project impact evaluations conducted by Bank staff or other non-OIE staff: the OIE is not aware of any impact evaluation conducted by the Bank. However, the OIE provides technical support to the Basic Needs Trust Fund (BNTF) in its design of an M&E framework that entails impact evaluations.

Budget: USD 0.78 mn in 2015; USD 0.82 mn in 2016 – equivalent to about 2.5% of the total CDB Administrative Budget. 75% of the budget is for staff salaries (4 professionals, 1 support staff), leaving around USD 190,000 (in 2015) for other expenses, including consultants, e.g. for external evaluations. Additional funding is accessed via the Special Development Fund (SDF); the amount varies according to the type and scope of the evaluation, e.g. the ongoing SDF 6/7 evaluation is SDF-funded at USD 255,000. The budget is determined by the Board and is not separate from the administrative budget. SDF funding for evaluations is considered separately and is subject to the Bank’s internal approval process. SDF funding cannot be used to cover OIE expenses such as staff time or travel, and country eligibility for SDF funding is also a consideration. The OIE expressed concerns about this funding track in respect of predictability, independence and eligibility limitations.

Reporting: the Head of OIE reports to the Board, with an administrative link to the President.

Terms of appointment for Head: 5-year term, renewable once. Appointed by the President with the agreement of the Board.

Right of return for Head: not eligible for other staff positions.

Consultants as proportion of OIE budget: 19% in 2015 (USD 145,000), plus SDF funding. SDF-funded evaluations are outsourced.

Last external evaluation (or peer review) of OIE: no external evaluation, though a review of the function was done in 2011, leading to the Evaluation Policy. This OIE External Review was completed in April 2016.

Departments or special programmes supporting impact evaluation: none.
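The validation coverage reported above can be reconciled with the underlying numbers – a simple check, assuming roughly 15 project exits per year and a sample of 6-8 PCRs:

\[
\frac{6}{15} = 40\%, \qquad \frac{8}{15} \approx 53\%,
\]

which is broadly consistent with the “approximately 40-50%” coverage stated in the table.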


Appendix IV – List of Persons Interviewed

Mrs. Colleen Wainwright – Member, CDB Board of Directors (UK) – face to face

Mrs. Cherianne Clarke – Alternate Member, CDB Board of Directors (UK) – face to face

Mrs. Jean McCardle – Member, CDB Board of Directors (Canada) – face to face

Dr. Louis Woodroofe – Member, CDB Board of Directors (Barbados)

Mr. A. de Brigard – Former Member, CDB Board of Directors – Skype interview

Mr. H. Illi – Former Member, CDB Board of Directors – telephone interview

Mrs. Claudia Reyes Nieto – Member, CDB Board of Directors – telephone interview

Mr. Bu Yu – Alternate Director, CDB Board of Directors – face to face

Mr. Michael Schroll (Barbados) – Head, OIE – series of interviews via Skype and face to face

Mr. Mark Clayton – OIE Senior Evaluation Officer – focus group

Mrs. Egene Baccus Latchman – OIE Evaluation Officer – focus group

Mr. Everton Clinton – OIE Evaluation Officer – focus group

Mrs. Valerie Pilgrim – OIE Evaluation Officer – focus group

Dr. Justin Ram – CDB Director, Economics Department – face to face

Mr. Ian Durant – CDB Deputy Director, Economics Department – face to face

Dr. Wm Warren Smith – CDB President – joint interview, face to face

Mrs. Yvette Lemonias-Seale – CDB Vice President, Corporate Services & Bank Secretariat – joint interview, face to face

Mr. Denis Bergevin – CDB Deputy Director, Internal Audit – face to face

Mr. Edward Greene – CDB Division Chief, Technical Cooperation Division – face to face

Mrs. Monica La Bennett – CDB Deputy Director, Corporate Planning – face to face

Mrs. Patricia McKenzie – CDB Vice President, Operations – face to face

Ms. Deidre Clarendon – CDB Division Chief, Social Sector Division – face to face

Mrs. Cheryl Dixon – CDB Co-ordinator, Environmental Sustainability Unit – focus group

Mrs. Denise Noel-Debique – CDB Gender Equality Advisor – focus group

Mrs. Tessa Williams-Robertson – CDB Head, Renewable Energy – focus group

Mrs. Klao Bell-Lewis – CDB Head, Corporate Communications – face to face

Mr. Daniel Best – CDB Director, Projects Department – face to face

Mr. Carlyle Assue – CDB Director, Finance Department – face to face


Appendix VI – Interview Guide: Members of CDB Board of Directors

Below is a list of themes that I should like to raise with you, based on your experience and knowledge of the CDB’s independent evaluation function (Office of Independent Evaluation).

In each case, I should be grateful if you could illustrate your responses with examples or help this Review by, wherever possible, sending me (or telling me where I can find) any documents that could support your responses.

This guide is being sent to you in advance to help prepare our meeting. However, our interview will be conducted more in the style of a conversation. The following sub-questions will be used to GUIDE the interview. Please feel encouraged to raise any additional issues that you feel we should take into account.

On the governance and independence of CDB’s evaluation function

What mechanisms are in place to support its independence?

How satisfactory are the current arrangements in your opinion?

How is the balance between independence and the need for interaction with line management dealt with by the system? For example, what mechanisms exist to ensure that the OIE is kept up to date with decisions, policy/programme changes, and other contextual changes that could have an effect on OIE evaluation studies/evaluation planning?

On the OIE’s Evaluation Policy

The CDB’s Evaluation Policy was established in 2011. To what degree do you feel it is adequate? Still relevant?

What suggestions do you have for any improvements?

In your opinion, how adequate is the current quality assurance system for overseeing the evaluation function?

On the quality and credibility of evaluation studies

To what degree do you believe the reports are fair and impartial?

Do you consider them to be of good quality? Are they credible?

Are you adequately consulted/involved on evaluations of interest to you?

On the relevance and usefulness of evaluations

How well does the OIE engage with you / your committee during the preparation, implementation and reporting of an evaluation study to assure that it will be useful to the CDB?

How are the priorities set for the independent evaluations? What criteria are used? Are you satisfied with the current procedure?


When OIE evaluation studies are outsourced to external consultants, what criteria are used to make this decision?

How are the priorities for the OIE’s 3-year rolling work plan agreed? In your opinion, is the current plan adequate in terms of coverage and diversity?

In your opinion, do the evaluations address important and pressing programmes and issues?

To what extent do you feel that the OIE’s evaluations integrate cross-cutting themes such as gender, energy efficiency/renewable energy and climate change? What improvements might be made, and how?

On the dissemination and uptake of evaluation findings and recommendations

To what extent do you feel that evaluation findings are communicated to the CDB and its stakeholders in a (a) useful, (b) constructive and (c) timely manner?

Are evaluation recommendations useful? Realistic?

What mechanisms are in place to assure that evaluation results are taken into account in decision making and planning? What improvements do you feel could be made?

How have you used the findings from any evaluations? Examples?

To what degree do you feel that evaluation contributes to institutional learning? And what about to institutional accountability? Any examples?

What mechanisms are in place to ensure that knowledge from evaluation is accessible to CDB staff and other relevant stakeholders? Are the current arrangements satisfactory?

How satisfied are you with current arrangements? What expectations do you have for the future?

On resources

How is the OIE resourced financially and is this satisfactory?

What about the OIE staff, are all the important areas of expertise represented in the team?

On this Review of the Office of Independent Evaluation

What are your expectations? What are you particularly hoping to learn from it?

Thank you very much for your cooperation and input


Appendix VII – Interview Pro-Forma: CDB Staff Members

This presents a list of the topics raised during interviews. It was used to guide the open-ended discussion – this means that the sequence and exact wording of the questions may not necessarily have followed this order, or been asked in exactly this way.

Changeover to an Independent Evaluation Office? Expectations? Advantages and disadvantages?

Satisfaction with working relations between operations and the OIE from your perspective?

Process of dealing with the PCRs and CCRs? Advantages and limitations?

Quality and credibility of the validation process?

How are the self-evaluation reports used?

Credibility and Quality of OIE’s evaluation reports

Communication of self and OIE independent evaluations? To whom, in what way? Possible improvements?
