

MONITORING & EVALUATION GBEPM 740 / MBAH 760 / MPA 720

COURSE MODULE

BY

Dr. Ian Nzali Banda (BEng, MEng, PhD, FEIZ, REng)

[email protected]


Section 1: Definitions

1. Project

A project can be defined as a collective task or undertaking that is carefully planned to achieve a particular aim or objective within a set time frame, e.g. a Malaria Eradication Project, a Project to Improve Township Roads, or a Primary School Computer Awareness Project. Projects typically require large amounts of resources, namely personnel, finance and equipment.

A project can also be defined as a UNIQUE SET of CO-ORDINATED ACTIVITIES, with a DEFINITE starting and finishing point, undertaken by an individual or organisation to MEET SPECIFIC OBJECTIVES within defined schedule, cost and performance parameters.

KEY CHARACTERISTICS OF PROJECTS

• utilises a set of well-defined resources dedicated to achieving specific results
• has a defined life span, with a clear time frame (start and end)
• follows a clear strategy on how to use the available resources to produce results
• is designed and implemented principally to address developmental needs or problems

2. Programme

A system of projects or services intended to meet a public need; or

The coordinated management of a portfolio of projects that change organisations to achieve benefits that are of strategic importance, e.g. the ROADSIP Programme.

The Programme comprised several different projects which were intended to improve the quality of roads (a public need!).


3. Monitoring

The routine tracking of the key elements of project performance, usually inputs and outputs, through record-keeping, regular reporting and surveillance systems, as well as site observation and consultants management; or

The ONGOING PROCESS by which STAKEHOLDERS OBTAIN REGULAR FEEDBACK on the progress being made toward achieving their GOALS and OBJECTIVES.

Key aspects are:

✓ Inputs and Outputs
✓ Record Keeping
✓ Regular Reporting
✓ Surveillance (Investigation, Scrutiny, Observation…) Systems
✓ Site (or Field) Inspection or Observation
✓ Consultants Management Processes

Key Characteristics:

✓ Monitoring represents an on-going activity to track project progress against planned tasks.
✓ It aims at providing regular oversight of the implementation of an activity in terms of input delivery, work schedules, targeted outputs, etc.

4. Evaluation

The occasional (special!) assessment of the change in targeted results that can be attributed to the project intervention. Evaluation attempts to link a particular output or outcome directly to an intervention after a period of time has passed.

WHAT IS THE LINKAGE?? IS THERE ANY LINKAGE??


5. The Significance of Monitoring and Evaluation

The Power of Measuring Results

• If you do not measure results, you cannot tell success from failure.

• If you cannot see success, you cannot reward it.

• If you cannot reward success, you are probably rewarding failure.

• If you cannot see success, you cannot learn from it.

• If you cannot recognize failure, you cannot correct it.

• If you can demonstrate results, you can win public support.

Monitoring and evaluation are two different management tools that are closely related, interactive and mutually supportive.

Rationale for Monitoring

i) Provides project management staff and other stakeholders with information on whether progress is being made towards achieving project objectives. In this regard, monitoring represents a continuous assessment of project implementation in relation to project plans, resources, infrastructure, and use of services by project beneficiaries.
ii) Provides regular feedback to enhance the ongoing learning experience (LESSONS LEARNT!) and to improve the planning process and the effectiveness of interventions.
iii) Increases project accountability with donors and other stakeholders.
iv) Enables managers and staff to identify and reinforce initial positive project results, strengths and successes.
v) Alerts managers to actual and potential project weaknesses, problems and shortcomings before it is too late, so that timely adjustments and corrective actions can be made to improve the program/project design, work plan and implementation strategies.
vi) Checks whether the project continues to be relevant to the target group and/or geographical area, and whether project assumptions are still valid.
vii) Checks adherence to specifications or the original project plan.

Monitoring actions must be undertaken throughout the lifetime of the project, as socio-economic or environmental conditions can change drastically in the target area.

The Essential Requirements of Monitoring

i) adequate planning
ii) baseline data
iii) indicators of performance and results
iv) practical implementation mechanisms
v) physical monitoring actions such as:
   a. field visits
   b. stakeholder meetings
   c. documentation of project activities
   d. regular reporting, etc.
vi) project monitoring is normally carried out by project management, staff and other stakeholders

Rationale for Evaluation

i) Project evaluation represents a systematic and objective assessment of ongoing or completed projects in terms of their design, implementation and results. It assesses:
ii) Project performance
iii) Project relevance
iv) Effectiveness
v) Efficiency (expected and unexpected)
vi) Impacts
vii) Sustainability
viii) Periodic evaluations review implementation progress
ix) Predicts the project's likely effects and highlights necessary adjustments in project design
x) Mid-term evaluations
xi) Terminal evaluations (or final evaluations) are carried out at the end of a project to provide an overall assessment of project performance and effects/impact
xii) To assess the extent to which the project has succeeded in meeting its objectives and potential sustainability
xiii) Provides signs of project strengths and weaknesses, and therefore enables future improvement regarding planning, delivery of services and decision-making
xiv) Assists to determine in a systematic and objective manner the relevance, effectiveness and efficiency of activities (expected and unexpected) in light of specified objectives
xv) Can verify whether the programme is proceeding as originally planned

Project Process Diagram

INPUTS → PROCESSES or ACTIVITIES → OUTPUTS


A Glance at Monitoring and Evaluation

Item                | Monitoring                                    | Evaluation
Frequency           | Periodic, Regular                             | Occasional (Intervallic, Infrequent, Intermittent, Discontinuous…)
Main Action         | Keeping Track, Oversight                      | Improving Effectiveness, Impact, Future Programming
Focus               | Inputs/Outputs, Process Outcomes, Work Plans  | Effectiveness, Relevance, Impact, Cost Effectiveness
Information Sources | Routine Systems, Field Observations, Progress Reports, Rapid Assessments | Same, plus Surveys/Studies
Undertaken By       | Project Managers, Community Workers, Community Beneficiaries, Supervisors, Funders | Program Managers, Supervisors, Funders, External Evaluators, Community (i.e. Beneficiaries)

There is a school of thought that asserts that GOOD MONITORING can substitute for PROJECT EVALUATIONS:

❖ True for SMALL SCALE or SHORT TERM PROJECTS
❖ But when a final judgment regarding project results, impact, sustainability, and future development is needed, an evaluation must be conducted


The table below outlines the complementary roles of monitoring and evaluation (Kusek and Rist, 2004):

Monitoring                                                                   | Evaluation
• Clarifies program or policy objectives                                     | • Analyses why intended results were or were not achieved
• Links activities and their resources to objectives                         | • Assesses specific causal contributions of activities to results
• Translates objectives into performance indicators and sets targets         | • Examines implementation processes
• Routinely collects data on these indicators, compares actual results with targets | • Explores unintended results
• Reports progress to managers and alerts them to problems                   | • Provides lessons, highlights significant accomplishment or program potential, and offers recommendations for improvement
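The monitoring-side step above of routinely collecting data on indicators and comparing actual results with targets can be sketched as a small tracking routine. This is an illustrative sketch only; the indicator names, figures and the 90% tolerance threshold are hypothetical, not from the module:

```python
# Minimal sketch of routine indicator tracking: compare actuals to targets
# and alert managers to indicators that are falling short. All indicator
# names and numbers below are hypothetical examples.

def track(targets: dict, actuals: dict, tolerance: float = 0.9) -> dict:
    """Return each indicator's status: 'on track' if the actual value
    reaches at least `tolerance` (here 90%) of its target, else 'alert'."""
    status = {}
    for indicator, target in targets.items():
        actual = actuals.get(indicator, 0.0)
        status[indicator] = "on track" if actual >= tolerance * target else "alert"
    return status

targets = {"households with safe water (%)": 60, "km of road rehabilitated": 120}
actuals = {"households with safe water (%)": 58, "km of road rehabilitated": 75}

print(track(targets, actuals))
```

The point of the sketch is the reporting loop itself: monitoring does not judge why an indicator lags (that is evaluation's job); it only flags the deviation early enough for corrective action.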


Section 2: Monitoring

Monitoring: Overview

Monitoring is defined as the regular observation and recording of activities taking place in a project or programme. It is a process of routinely gathering information on all aspects of the project.

• To monitor is to check on how project activities are progressing. It is observation: systematic and purposeful observation (through site visits etc.).

• Monitoring also involves giving feedback about the progress of the project to the donors, implementers and beneficiaries of the project.

• Reporting enables the gathered information to be used in making decisions for improving project performance.

Monitoring: Purpose and Objectives

• Analysing the situation in the community and its project;

• Determining whether the inputs in the project are well utilized;

• Identifying problems facing the community or project and finding solutions;

• Ensuring all activities are carried out properly by the right people and in time;

• Carrying lessons from one project experience over to another; and

• Determining whether the way the project was planned is the most appropriate way of solving the problem at hand

• To provide constant feedback on the extent to which the projects are achieving their goals.

• To identify potential problems at an early stage and propose possible solutions
• To monitor the accessibility of the project to all sectors of the target population
• To monitor the efficiency with which the different components of the project are being implemented and suggest improvements
• To evaluate the extent to which the project is able to achieve its general objectives

• To provide guidelines for the planning of future projects
• To influence sector assistance strategy. Relevant analysis from project and policy evaluation can highlight the outcomes of previous interventions, as well as the strengths and weaknesses of their implementation.

• To improve project design. Use of project design tools such as the log frame (logical framework) results in systematic selection of indicators for monitoring project performance. The process of selecting indicators for monitoring is a test of the soundness of project objectives and can lead to improvements in project design.

Types of Monitoring

1. Implementation monitoring (inputs, activities, outputs)
2. Outcome monitoring; and
3. Impact monitoring

NOTE: Outcome and Impact Monitoring are categorized as Results Monitoring


Introduction to Results-Based Monitoring and Evaluation

Results-based monitoring is defined as a continuous process of collecting and analyzing information to compare how well a project, program or policy is performing against expected results.

1. Results-based monitoring involves the regular collection of information on how effectively government (or any organization) is performing
2. Results-based monitoring demonstrates whether a project, program, or policy is achieving its stated goals
3. Results-based monitoring and evaluation measures how well governments, programmes and/or institutions are performing
4. Results-based monitoring and evaluation is a management tool!
5. Results-based monitoring and evaluation emphasizes assessing how outcomes are being achieved over time

Results-based monitoring and evaluation (M&E) is a powerful public management tool that can be used to help policymakers and decision-makers track progress and demonstrate the impact of a given project, program, or policy. Results-based M&E differs from traditional implementation-focused M&E in that it moves beyond an emphasis on inputs and outputs to a greater focus on outcomes and impacts (Kusek and Rist, 2004).

The Old Approach: Traditional monitoring (key focus is on implementation monitoring)

1. This involves tracking inputs ($$, resources, strategies), activities (what actually took place) and outputs (the products or services produced)
2. This approach focuses on monitoring how well a project, program or policy is being implemented
3. Often used to assess compliance with work plans and budget

Examples of Key Stakeholders That Care About Performance

✓ Government officials/Parliament
✓ Program managers and staff
✓ Civil society (citizens, NGOs, media, private sector etc.)
✓ Donors

Examples of Activities that Require Results-Based Monitoring

✓ Setting goals and objectives
✓ Reporting to Parliament and other stakeholders
✓ Managing projects, programs and policies
✓ Reporting to donors
✓ Allocating resources

Rationale for Undertaking a Results-Based M&E Exercise

✓ Provides crucial information about public sector performance
✓ Provides a view over time on the status of a project, program, or policy
✓ Promotes credibility and public confidence by reporting on the results of programs
✓ Helps formulate and justify budget requests


✓ Identifies potentially promising programs or practices
✓ Focuses attention on achieving outcomes important to the organization and its stakeholders
✓ Provides timely, frequent information to staff
✓ Helps establish key goals and objectives
✓ Permits managers to identify and take action to correct weaknesses
✓ Supports a development agenda that is shifting towards greater accountability for aid lending

THE THREE KEY PROJECT RESULTS

Project results can be divided into three categories:

1. Outputs
These are results which are achieved IMMEDIATELY after implementing an activity. For example, in the construction of a new substation, the output is the completed substation and accessory infrastructure; for a trainer-of-trainers workshop, the output will be x no. of trained facilitators ready for deployment.

2. Outcomes
These are regarded or termed as "mid-term" results, i.e. they are not immediately seen at the end of the project activity; only after some time is some change noticed due to the project activity. Hence, for the trainer-of-trainers workshop, the outcome can be well-trained practitioners in the field, trained by the facilitators, who in turn are impacting their various work environments.

3. Impact
This is usually a long-term result and may not be achievable during the project life cycle (it is also referred to as medium- to long-term developmental change). For example, the practitioners' impact and influence may be noticed well after the training is completed, perhaps several years later.


The Readiness Assessment

What is a Readiness Assessment?
An analytical framework to assess a project's (or even a country's) ability to monitor and evaluate its development goals.

Why conduct a Readiness Assessment (or what is the RATIONALE for a Readiness Assessment)?

✓ To understand what incentives (or lack thereof) exist to effectively monitor and evaluate development goals. {An incentive is something that incites, or has the tendency to incite, determination, action or greater effort, such as a reward offered for increased productivity}
✓ To understand the roles and responsibilities of those organizations and individuals involved in monitoring activities


✓ To identify issues related to the capacity (or lack of it) to monitor on-going programs

Incentives Help Drive the Need for a Results System!

Examine whether incentives exist in any of these areas to begin designing and building an M&E system:

✓ Political arena (TRIGGER may be citizen demand)
✓ Institutional arena (TRIGGER may be an existing legislative or legal framework)
✓ Economic Development arena (TRIGGER may be a donor requirement)


Who are the champion(s) that can help drive a results system, and what factors may be motivating them?

✓ Government (social reforms)
✓ Parliament (effective expenditures)
✓ Civil society, opposition etc. (holding government accountable)
✓ Donors (accountability)

Assess the roles and responsibilities and existing structures to monitor and evaluate development goals:

✓ What is the role of central and line ministries?
✓ What is the role of Parliament?
✓ What is the role of the Auditor General?
✓ What is the role of civil society?
✓ What is the role of statistical groups/agencies?

Who in the country produces data?

✓ Central and Line Ministries (MOFNP, MOH, MOE, etc.)

✓ Regulatory Agencies e.g. NWASCO, ERB, CA, ZEMA etc.

✓ Specialized units/offices (National Audit Office)

✓ National Statistics Office

✓ Local Government

✓ NGOs

✓ Donor Agencies

Where in the Government is the data used?

✓ Preparing the budget

✓ Resource allocation

✓ Program policy making

✓ Parliament/legislation & accountability

✓ Planning

✓ Fiscal management

✓ Evaluation and oversight

Capacity for Monitoring (and Evaluation)

Assess current capacity to monitor and evaluate, i.e.:

✓ Technical skills

✓ Managerial skills

✓ Existing data systems and their quality

✓ Technology available


✓ Fiscal resources available

✓ Institutional experience

Barriers

Do any of these immediate barriers now exist to getting started in building an M&E system?

✓ Lack of fiscal resources

✓ Lack of political will

✓ Lack of champion(s)

✓ Lack of expertise & knowledge

✓ Lack of strategy

✓ Lack of prior experience

Key Elements for Success

Assess the Country's Capacity Against the Following:

✓ Does a clear mandate exist for M&E (in law, from civil society, etc.)?
✓ Is there the presence of strong leadership at the most senior level of the government?
✓ Are resource and policy decisions linked to the budget?
✓ How reliable is the information that may be used for policy and management decision making?
✓ How involved is civil society as a partner with, or voice to, government?
✓ Are there pockets of innovation that can serve as beginning practices or pilot programs?


Why an Emphasis on Outcomes?

✓ Makes explicit the intended objectives of the project or action ("Know where you are going before you get moving!")
✓ Outcomes are what produce benefits
✓ They tell you when you have been successful or not {If you do not know where you are going, then how will you get there?}

Issues to Consider in Choosing Outcomes to Monitor and Evaluate

✓ Are there stated national/sectoral goals?
✓ Have political promises been made that specify improved performance of the government?
✓ Do citizen polling data indicate specific concerns?
✓ Is authorizing legislation present?
✓ Other? (e.g. the Millennium Development Goals)
✓ Is aid lending linked with specific goals?


Note: When Choosing Outcomes, Remember: Do Not Go It Alone!

Develop a participative approach that includes the views and ideas of key stakeholder groups (e.g. Government, Civil Society, Donors etc.). These stakeholders help to BUILD CONSENSUS or UNANIMITY for the monitoring process!

"The new realities of governance, globalization, aid lending, and citizen expectations DEMAND or require an approach that is consultative, cooperative and committed to consensus building."

PARTICIPATION IN PROJECT MONITORING

The GENERAL PRINCIPLE on the role of stakeholders is that all stakeholders have a stake in knowing how well things are going.

Advantages of Participation:

➢ Common Understanding of Problems and Identification of Solutions: Participative monitoring helps stakeholders to get a shared understanding of the problems facing the community or project (their causes, magnitude, effects and implications). This facilitates the identification of solutions, which are more likely to be appropriate because they are derived from the current situation.

➢ Benefits the Target Groups and Enhances Accountability: Participation in monitoring ensures that the people for whom the project was intended are the ones benefiting from it. It increases awareness of people's rights, which elicits their participation in guarding against project resource misappropriation. Guarding against resource misappropriation makes project implementation less expensive.

➢ Making Appropriate Decisions: Monitoring provides information necessary for making management decisions. When many people participate in monitoring, they have participated in providing management information and have contributed to decision making. The resulting decisions are more likely to be acceptable and relevant to the majority of the population. This makes human and resource mobilization for project implementation easier.

➢ Performance Improvement: During monitoring, if a performance deviation is discovered, solutions can be devised. Finding appropriate decisions that can be implemented requires the participation of those people who will put the solution into practice. Therefore participation in monitoring can help improve project performance.

➢ Design of Projects: The information generated during project monitoring helps in re-designing projects in that locality to make them more acceptable. The lessons learned can also be used in the design of similar projects elsewhere.

➢ Collection of Information: If many people participate in monitoring, they are more likely to come up with more accurate information, because information that is omitted by one party can be collected by another. Each stakeholder puts varying emphasis on different aspects of the project, using different methods. Moreover, one party knowing that the information they are collecting will be verified forestalls deliberate wrong reporting.

Challenges of Participation in Monitoring:

➢ High Initial Costs: Participation in monitoring requires many resources (e.g. time, transport and performance-related allowances). It is a demanding process that can over-stretch volunteer spirit at community level and financial resources at district and national levels. Therefore it must be simple and focused solely on the vital elements.

➢ Quantity and Variety of Information: Monitoring requires the collection, documentation and sharing of a wide range of information. This requires many skills that may be lacking in the communities, and therefore necessitates much time and many resources for capacity building. It also risks wrong reporting.

➢ Inaccuracy of Information: Some stakeholders, from the community to the national level, may intentionally provide wrong information to depict better performance and outputs, or because of community or project differences. Counteracting wrong or incorrect reporting requires sensitization and consensus building that is difficult to attain.


Summary Statement on Challenges versus Participation in Monitoring

"The advantages of participation in monitoring evidently outweigh the challenges. It is therefore necessary to encourage and support participatory monitoring as we devise means to counteract the challenges."

Developing Outcome Statements

"Eradicate NEGATIVITY in the STATEMENTS"

❖ Outcome statements are derived from identified problems or issues
❖ Outcome statements should capture only one objective; this makes the task achievable and manageable, and enables measuring of the various indicators

✓ ONE OUTCOME STATEMENT SHOULD CAPTURE ONLY ONE OBJECTIVE
✓ SEVERAL INDICATORS CAN EMANATE FROM ONE OUTCOME STATEMENT


Why should we lay emphasis on outcomes first and foremost?

✓ Makes explicit the intended objectives of government action ("Know where you are going before you get moving")
✓ Outcomes are the results governments hope to achieve; they are usually REPORTED on and not DIRECTLY MEASURED
✓ Outcomes must be translated into a set of KEY INDICATORS
✓ Clear setting of outcomes is key to a results-based M&E system


Selecting Key Performance Indicators to Monitor Outcomes

Outcome indicators are not the same as outcomes!

✓ Each outcome needs to be translated into one or more indicators
✓ An outcome indicator identifies a specific numerical measurement that tracks progress (or not) toward achieving an outcome
✓ An OUTCOME INDICATOR answers the question "HOW WILL WE KNOW SUCCESS WHEN WE SEE IT?"
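The rule that each outcome must be translated into one or more numerical indicators can be illustrated with a minimal sketch. The outcome is taken from the brain teaser later in this module; the indicators and their units are hypothetical suggestions, not prescribed answers:

```python
# Sketch: one outcome statement mapped to several measurable indicators.
# The indicator names and units are hypothetical illustrations.

outcome = "Improved delivery of health care to citizens living in rural areas"

indicators = [
    {"name": "rural clinics per 10,000 residents", "unit": "clinics"},
    {"name": "average travel time to nearest clinic", "unit": "minutes"},
    {"name": "children under 5 fully immunized", "unit": "%"},
]

# Each indicator is a specific numerical measurement; together they answer
# "how will we know success when we see it?" for the single outcome above.
for ind in indicators:
    print(f"{outcome} -> {ind['name']} ({ind['unit']})")
```

Note how the structure enforces the two principles above: one outcome statement, several indicators emanating from it.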

Key Criteria for Outcome Indicator Selection (CREAM!)

A good performance indicator must be:

Clear (precise and unambiguous)
Relevant (appropriate to the subject at hand)
Economic (available at reasonable cost)
Adequate (must provide a sufficient basis to assess performance)
Monitorable (must be amenable to independent validation)


Note and Remember! When Selecting Programme, Project or Policy Indicators:

➢ Select several indicators for any one outcome
➢ Make sure the interests of multiple stakeholders are considered
➢ Know that over time, it is OK (and expected) to add new indicators and drop old ones
➢ Have at least three points of measurement before you consider changing your indicator


Types of Indicators

There are three main types of indicators, namely Pre-Designed, Designed and Proxy (or Indirect).

Designed: Designed by, and data collected by, the monitoring team.

Pre-Designed: Data collected by national, regional and/or statutory institutions. Pre-designed indicators are established independent of the context of any individual country or organization. A number of development agencies have created indicators to track development goals, including:

• Millennium Development Goals (MDGs)
• UNDP – Sustainable Human Development
• World Bank – Rural Development Handbook
• IMF – Macroeconomic indicators

The Pros and Cons of Using Pre-Designed Indicators

Pros:
• Can be aggregated across similar types of projects/programs/policies
• Reduce the costs of building multiple unique measurement systems
• Create greater harmonization of donor requirements

Cons:
• Often do not address country-specific goals
• Often viewed as imposed, coming from the top down
• Do not promote key stakeholder participation and ownership
• Multiple competing indicators

Proxy (or Indirect): Designed by the monitoring team, but data collected by others. Only use indirect measures (proxies) when data for direct indicators are not available or not feasible to collect at regular intervals, e.g. the number of new tin roofs or televisions as a proxy measure of increased household income.


BRAIN TEASER!

Develop indicators for the following:

Outcome #1: Improved delivery of health care to citizens living in rural areas
Outcome #2: Improved quality of agriculture export products
Outcome #3: Safe urban communities

SOME BASIC PRINCIPLES FOR INDICATOR DEVELOPMENT

i) You will need to develop your own indicators to meet your own needs
ii) Developing good indicators often takes more than one try!
iii) Arriving at the final indicators you will use will take time!


Establishing Baseline Data on Indicators

A performance baseline is information (quantitative or qualitative) that provides data at the beginning of, or just prior to, the monitoring period. The baseline is used to:

• Learn about recent levels and patterns of performance on the indicator; and
• Gauge subsequent policy, program, or project performance
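Gauging subsequent performance against the baseline, as described above, reduces to a simple percent-change calculation. A minimal sketch; the enrollment figures used are hypothetical:

```python
# Sketch: gauge performance on an indicator against its baseline value.
# Baseline and follow-up figures below are hypothetical.

def change_from_baseline(baseline: float, current: float) -> float:
    """Percent change of the current measurement relative to the baseline."""
    return (current - baseline) / baseline * 100.0

# e.g. a primary school enrollment rate: baseline 65%, measured later at 78%
print(round(change_from_baseline(65.0, 78.0), 1))  # 20.0
```

Without the baseline measurement there is nothing to divide by, which is why establishing baseline data must precede target setting and monitoring.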

The Approach to Collecting Baseline Data

First and foremost, identify data sources for the developed indicators!


Sources are who or what provide data – not the method of collecting data.

For example, what types of data sources can you think of for performance indicators in Highway Transportation Safety?

Data sources may be PRIMARY or SECONDARY!

PRIMARY data are collected directly by your organization, for example through surveys, direct observation, and interviews.

SECONDARY data have been collected by someone else, initially for a purpose other than yours. Examples include survey data collected by another agency, a Demographic Health Survey, or data from a financial market. Secondary data can often save you money in acquiring the data you need, but be careful!

Data sources can include:

Written records (paper and electronic)
Individuals involved with the program
General public
Trained observers
Mechanical measurements and tests


Designing the Data Collection Methods

1. Decide how to obtain the data you need from each source
2. Prepare data collection instruments
3. Develop procedures for the use of the data collection instruments


Planning for Improvement – Selecting Results Targets

Definition of Target
Targets are the quantifiable levels of the indicators that a country or organization wants to achieve at a given point in time, e.g. "there will be 60% access to safe and sustainable water supply for all residents in informal settlements".

General Approach to Target Selection:
Identifying the expected or desired level of project, program or policy results requires selecting performance targets.

The general relationship can be summarized as:

Target = Baseline Indicator Level + Desired Level of Improvement
(i.e. the desired level of performance to be reached within a specific time)

This assumes a finite and expected level of Inputs, Activities and Outputs.


Examples of Targets Related to Development

1. Goal: Economic Well-Being
Outcome target: Reduce by 20% the proportion of people living in extreme poverty by 2008, against the baseline

2. Goal: Social Development
Outcome target: Improve by 30% the primary education enrollment rate in Zambia by 2008, against the baseline
Outcome target: Reduce by 20% the incidence of hepatitis in infants by 2006, against the baseline

3. Goal: Environmental Sustainability
Outcome target: Implement a national strategy for sustainable forest management by 2005
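Outcome targets of the form "reduce/improve by X% against the baseline" translate directly into numeric target levels. A minimal sketch; the baseline values below are hypothetical, not taken from the module:

```python
# Sketch: turn "change by X% against the baseline" targets into numeric
# target levels. The baseline values are hypothetical examples.

def target_level(baseline: float, percent_change: float) -> float:
    """Target value implied by a percent change relative to the baseline.
    Use a negative percent_change for 'reduce by' targets."""
    return baseline * (1 + percent_change / 100.0)

# "Reduce by 20% the proportion living in extreme poverty", 40% baseline
print(target_level(40.0, -20.0))  # 32.0
# "Improve by 30% the enrollment rate", 60% baseline
print(target_level(60.0, 30.0))   # 78.0
```

This is why the factors listed below insist on a clear understanding of the baseline starting point: the same "20% reduction" target means a very different absolute level depending on where the baseline sits.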

Factors to Consider When Selecting Indicator Targets

• A clear understanding of the baseline starting point (e.g. average of the last 3 years, last year, average trend, etc.)
• Funding and level of personnel resources expected throughout the target period
• Amount of outside resources expected to supplement the program's resources
• Political concerns
• Institutional capacity
• Only one target is desirable for each indicator
• If the indicator is new (not previously used), be careful about setting firm targets (use a range)
• Most targets are set yearly, but some could be set quarterly; others are set for longer periods (not more than 5 years)
• It takes time to observe the effects of improvements; therefore, be realistic when setting targets
• A target does not have to be one single numerical value; it can be a range
• Consider previous performance
• Take your baseline seriously
• Targets should be feasible, given all the resource (input) considerations


Common target-setting pitfalls (practices to avoid):

• Setting targets so modest (easy) that they will surely be met
• Moving the target (as needed) to fit performance
• Picking only targets that are not politically sensitive


Monitoring for Results

• A results-based monitoring system tracks both implementation (inputs, activities, outputs) and results (outcomes and goals)
• Implementation monitoring is supported through the use of management tools – budgets, staffing plans and activity planning
• Implementation monitoring tracks the means and strategies used by the organization
• Means and strategies are found in annual and multi-year workplans
• Do not forget: the results framework is not the same as a work plan
• Do not forget: budget to outputs, manage to outcomes

Developing a Results Plan

• Once a set of outcomes is identified, it is time to develop a plan to assess how the organization will begin to achieve these outcomes
• In the traditional approach to developing a plan, the first thing a manager usually did was to identify activities and assign responsibilities
• The shortcoming of this approach is that completing all the activities is not the same as reaching the outcome goal


Performance Monitoring System Framework


The Monitoring System Strategy Should Include a Data Collection and Analysis Plan

The plan should cover:
• Units of analysis
• Sampling procedures
• Data collection instruments to be used
• Frequency of data collection
• Expected methods of data analysis
• Who collects the data
• For whom the data are being collected

Key Criteria for Collecting Quality Performance Data

RELIABILITY: The extent to which the data collection approach is stable and consistent across time and space

VALIDITY: The extent to which the data clearly and directly measure the performance we intend to measure

TIMELINESS:
• Frequency (how often are data collected?)
• Currency (how recently have data been collected?)
• Relevance (data need to be available on a frequent enough basis to support management decisions)

ALWAYS REMEMBER TO Pretest Your Data Collection Instruments and Procedures!!!

1. You will never really know how good your data collection approach is until you test it
2. Pretesting is how you learn to improve your instruments or procedures before data collection is fully under way
3. Skipping pretesting will probably result in mistakes, and those mistakes could cost your organization a great deal of wasted time and money, and perhaps its valued reputation with the public


A Typical M&E Framework Template

Specific Objective (Sub-problem or Issue) | Outcome Statement | Indicator | Baseline Value | Target | Method of Collection | By Whom | Frequency
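For illustration only, one row of the template above can be held as a simple record whose keys mirror the template columns; every value below is hypothetical:

```python
# One hypothetical row of the M&E framework template, keyed by its columns.
framework_row = {
    "specific_objective": "Improve access to safe water in informal settlements",
    "outcome_statement": "Residents of informal settlements have sustainable access to safe water",
    "indicator": "% of residents with access to safe and sustainable water supply",
    "baseline_value": 35.0,   # starting level of the indicator
    "target": 60.0,           # desired level at the end of the period
    "method_of_collection": "Household survey",
    "by_whom": "M&E officer",
    "frequency": "Annually",
}

# The template's columns become the record's keys:
print(list(framework_row))
```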


Section 3 – Evaluation

Evaluation

An assessment of a planned, ongoing or completed intervention to determine its relevance, efficiency, effectiveness, impact and sustainability;

or

A systematic and objective assessment of an implemented project.

Purpose of an Evaluation Exercise

• The key intent of evaluation is to incorporate lessons learned into the decision-making process
• Evaluation assesses whether the project is:
◦ solving the problem;
◦ to what extent; and
◦ why?
• The starting point of an evaluation is the problem context


Aims of an Evaluation

An evaluation aims to determine whether:

◦ the project is still relevant
◦ the overall design of the project is logical and still appropriate
◦ the project is achieving its immediate objectives and aims and, if not, what the reasons are
◦ the project is efficient in its use of resources and is using an appropriate level of technology
◦ the necessary measures are being taken for handing over the project to the principal proponents, e.g. Government, Donors

And to what extent the project has succeeded in creating awareness, commitment and action at the local level.

Generally, an evaluation should provide feedback information!!!

The purpose and type of evaluation are normally:
◦ Improve management of an ongoing project = Mid-Term Evaluation
◦ Propose corrective actions for implementation = Ex-ante & Mid-Term Evaluation
◦ Identify follow-up projects = Mid-Term & Final Evaluation
◦ Analyse accounting = Ex-ante, Mid-Term, Final & Ex-Post Evaluation
◦ Improve the picture of future projects & developments = Mid-Term, Final & Ex-Post Evaluation

Note: An evaluation is not a fault-finding exercise!!!!

Evaluation vs. Audit

Evaluation = Accountability + Learning (the principal focus of an evaluation)
Audit = Accountability (the principal focus of an audit)

Learning plus Accountability = Evaluation
Evaluation minus Learning = Audit
Evaluation minus Accountability = Research

Research --> Learning
Audit --> Accountability
Evaluation --> Learning & Accountability


Uses of an Evaluation

• To make resource decisions
• To re-think the causes of a problem
• To identify issues around an emerging problem, e.g. children dropping out of school
• To decide among the best alternatives
• To support public sector reform and innovation
• To help build consensus among stakeholders on how to respond to a problem

When to Evaluate

• Just after implementation
• At the mid-term stage of the programme or project
• After implementation (ex-post impact evaluation)

Timing | Purpose | Duration
Early in the project cycle (1-2 years) | Check the early strategy of an ambitious outcome | Short term
Middle of the project cycle (2-3 years) | Prompt mid-course adjustments in output production | Mid term
End of the project cycle (4-5 years) | Lessons learnt for next project formulation | Long term

When is it time to make use of an Evaluation?

i) Resource and budget allocations are being made across projects, programs or policies
ii) A decision is being made whether or not to expand a pilot
iii) There is a long period with no evidence of improvement in the problem situation
iv) Similar projects, programs or policies are reporting divergent outcomes
v) There are conflicting political pressures on decision-making in ministries or parliament
vi) There is public outcry over a governance issue
vii) Issues around an emerging problem need to be identified, e.g. children dropping out of school

Qualifications (Attributes) of the Evaluation Team

• Technical knowledge and experience
• Knowledge of the national (location) situation and context
• Results-based management expertise
• Capacity-building expertise
• Familiarity with the policy-making process (policy dialogue issues)


Methodologies for Evaluation

➢ Performance Logic Chain
• Asks questions about the basic causal logic of the project, program or policy (cause-and-effect assumptions)
• Asks about the rationale for the sequence of activities of the project, program or policy
• Asks about the plausibility of achieving intended effects, based on research and prior experience

➢ Pre-Implementation Assessment
A preliminary evaluation of a project, program or policy's implementation strategy to assure that three standards are met:
• Objectives are well defined
• Implementation plans are plausible
• Intended uses of resources are well defined and appropriate to the achievement of objectives

➢ Case Study
A case study is a method for learning about a complex situation and is based on a comprehensive understanding of that situation.

➢ Meta Evaluation
• Pulls together known studies on a topic to gain greater confidence in findings and generalizability
• Addresses whether there are credible, supportable evaluation findings on a topic
• Compares different studies with disparate findings about a topic against a common set of criteria

➢ Impact Evaluation
Provides information on how and why intended (and unintended) project, program or policy outcomes and impacts were achieved (or not)

➢ Process (Implementation) Evaluation
• Provides detailed information on whether the program is operating as it ought (are we doing things right?)
• Provides detailed information on program functioning to those interested in replicating or scaling up a pilot
• Provides continuous feedback loops to assist managers


Outcome Monitoring Versus Outcome Evaluation

Aspect | Outcome Monitoring | Outcome Evaluation
Objective | To track changes from baseline conditions to desired outcomes | To validate what results were achieved, and how and why they were or were not achieved
Focus | Focuses on the outputs and their contribution to outcomes | Compares planned with actual outcome achievement; focuses on how and why outputs and strategies contributed to the achievement of outcomes; focuses on questions of relevance, effectiveness, sustainability and change
Methodology | Tracks and assesses performance (progress towards outcomes) through analysis and comparison of indicators over time | Evaluates achievement of outcomes by comparing indicators before and after the intervention; relies on monitoring data and on information from external sources
Conduct | Continuous and systematic, by the programme/project manager and key partners | Time-bound, periodic and in-depth, by external evaluators and partners
Use | Alerts managers to problems in performance, provides options for corrective actions and helps demonstrate accountability | Provides managers with strategy and policy options, provides a basis for learning and demonstrates accountability


Data Analysis

Examine changes over time!!!

◦Compare present to past data to look for trends and other changes

◦The more data points you have, the more certain you are of your trends
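As a sketch of what "examining changes over time" can mean in practice, the least-squares slope of a series of equally spaced observations summarises the direction and pace of change; the indicator values below are hypothetical:

```python
def linear_trend(values):
    """Least-squares slope of equally spaced observations (change per period)."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

# Hypothetical annual indicator values; the more data points you have,
# the more certain you are of the trend.
yearly = [35.0, 39.0, 44.0, 48.0, 52.0]
print(round(linear_trend(yearly), 2))  # 4.3 -> average change per year
```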

Reporting the Data Results (Findings!!!)

• Present the data in a clear and understandable form
• Avoid being a statistics division – present only the most important data
• Use an appendix or a separate report to convey detailed data
• Use visual presentations (charts, graphs, maps) to highlight key points
• Avoid "data dumps"
• Ensure that stakeholders are:
◦ aware of the findings
◦ able to understand the findings
◦ able to use the data in decision-making with ease

Modes of Data Presentation

Formal progress reports – periodic reports (standard format)
◦ Review all activities on the work program for the period
◦ Should compare planned versus actual progress

Problem diagnostic reports
◦ Limited to findings on a specific problem and prepared as a diagnostic study
◦ Should be short
◦ Highlight the cause of the problem, its consequences and the corrective action

Monitoring briefs (2-3 pages maximum)
◦ Convey key findings or bring a specific problem to management's attention
◦ Can be a memo with a longer report attached

Note: Any written communication longer than a few pages should begin with an executive summary that lists the key points raised in the report, their relevance to management, and any calls for action.


When Reporting Your Findings, Use Explanatory Notes

Suggestions:

◦ Combine qualitative information with quantitative information
◦ When comparisons show unexpected trends or values, provide explanations, if known
◦ Report internal explanatory notes, e.g. the loss of program personnel or other resources
◦ Report external explanatory notes, e.g. an unexpected natural disaster or political changes
◦ Summarize important findings