
PLANNING, MANAGING, MONITORING AND EVALUATING EXTENSION PROGRAM

JEDDAHLYN SANCHEZ MANAN UNDONG

CED 240-WY

A final report submitted in partial fulfillment of the requirements in CED 240-WY under Prof. Myra E. David, 1st semester AY 2015-2016, University of the Philippines Los Baños, College, Laguna.


TABLE OF CONTENTS

Title Page
Table of Contents
Preface
Part I: Planning and Managing Extension Program
Chapter 1: Planning an Extension Program
  1.1: Definition of Extension Program Planning
  1.2: Assumptions on Extension Program Planning
  1.3: Rationale of Extension Program Planning
  1.4: Principles of Extension Program Planning
  1.5: Approaches in Extension Program Planning
  1.6: Stages and Models in Extension Program Planning
Chapter 2: Managing an Extension Program
  2.1: Definition of Management
  2.2: Philosophy of Management
  2.3: Management Functions
Chapter 3: Case Analysis in Planning Extension Program
  3.1: Use of Logic Model to Plan Extension in Outreach Program Development
  3.2: Logic Model as a Framework for Evaluation and Planning in the Health Industry
Part II: Monitoring and Evaluation of Extension Program
Chapter 4: Definition of Monitoring and Evaluation
Chapter 5: Evolution of Monitoring and Evaluation as a Discipline
Chapter 6: Monitoring and Evaluation of Extension Program
References


PREFACE

In developing and delivering an extension program, the most important elements are the different phases of implementing it. Extension planners throughout the world, however, face difficult challenges in program development as they try to be responsive to the needs of rural communities (Cristovao et al., 2007). A way to overcome these challenges is to analyze the phases, processes and functions of implementing an extension program. To understand these, some basic concepts need to be discussed first: "What is an extension program?" and "Are there distinctive features involved in an extension program?"

Several works have defined the extension program, its characteristics and its elements. The table below lists definitions of the concept according to different authors, organizations and extension practitioners.

Table 1. Definition of Extension Program Through the Years

Kelsey and Hearne (1949): It is a statement of situation, objectives, problems and solutions.

US Department of Agriculture (1956): It is arrived at cooperatively by the local people and the extension staff and includes a statement of (a) the situation; (b) the problems that are part of the local situation; (c) the objectives and goals of the local people in relation to these problems; and (d) the recommendations or solutions to reach these objectives on a long-time or short-time basis.

Leagans (1961): It is a set of clearly defined, consciously conceived objectives or ends, derived from an adequate analysis of the situation, which are to be achieved through extension teaching activity.

Lawrence (1962): It is the sum total of all the activities and undertakings of a county extension service, which includes: (a) the program planning process; (b) a written program statement; (c) a plan of work; (d) program execution; (e) results; and (f) evaluation.

Forest and Baker (1994): It is a set of purposeful, planned and interrelated experiences to reach our educational objectives and to solve problems.

Israel, Harder and Brodeur (2011): It is a comprehensive set of activities that are intended to bring about a sequence of outcomes among targeted clients.

From the above list of definitions, we can define an extension program as a systematic written statement of the situation, objectives, problems, target audience and solutions, guided by extension activities.


Aside from the various definitions, there are a number of important characteristics by which an extension program can be recognized. These are the elements that characterize extension programs. Among other things, Diehl and Gonzalez (2014) enumerated the following: (1) a focus on the needs of the target audience; (2) the intent to affect participant learning and behavior outcomes; (3) multiple activities that are comprehensive in nature; and (4) the presence of a formal evaluation (Israel et al., 2011).

On the other hand, Oakley and Garforth (1988) mentioned that an extension program is a written statement which contains the following four elements: (1) the objectives which the agent expects to be achieved in the area within a specified period of time; (2) the means of achieving these objectives; (3) the resources that are needed to fulfill the program; and (4) the work plan, or schedule of extension activities, that will lead to the fulfillment of the program.

Having defined the extension program, the concepts of planning, managing, monitoring and evaluating it need to be explicated. Thus, this paper will: (1) discuss the importance of planning, managing, monitoring and evaluating extension programs; (2) enumerate the stages in extension program planning; (3) identify the management functions of an extension program; (4) discuss the evolution of monitoring and evaluation of extension programs; (5) distinguish the types of evaluation and the methods of gathering data for monitoring and evaluating an extension program; (6) explain the plan of activities involved in the conduct of monitoring and evaluation of an extension program; and (7) apply the stages in planning, managing, monitoring and evaluating extension programs to the readers' respective extension organizations/offices/projects.


PART I: PLANNING AND MANAGING EXTENSION PROGRAM

Written by:

JEDDAHLYN SANCHEZ


CHAPTER 1 Planning an Extension Program

The concept of extension program planning is important to consider in the entire program. It is regarded as an integral and important dimension of a systematized approach to solving issues in the community within a certain period of time. What effective planning achieves is "the greatest degree of performance of actions, motions or operations" (Boyle, 1965).

Definition of Extension Program Planning

Planning is a "dynamic act of reflecting about, thinking about, and choosing among various options regarding the destination (goals and objectives) and the route to journey (educational experience) we should follow to reach those destinations" (Forest and Baker, 1994). It offers people the opportunity to "participate" and "contribute" in the process. It is therefore understandable that "planning extension programs has become an increasingly accepted practice among national authorities" (Maalouf, in Rivera, 1987, p. 116, as cited by Cristovao et al., 1997).

On the other hand, extension program planning is viewed as a process through which representatives of the people are intensively involved with extension personnel and other professional people in four activities (Boyle, 1965): (1) studying facts and trends; (2) identifying problems and opportunities based on these facts and trends; (3) making decisions about the problems and opportunities that should be given priority; and (4) establishing objectives or recommendations for the future economic and social development of a community through educational programs.

Other literature defined extension program planning as the process whereby the people in the country, through their leaders, plan their extension program, the end result being a written program statement (Lawrence, 1962). One author interpreted it as a continuous process whereby farm people, with the guidance and leadership of extension personnel, attempt to determine, analyze and solve local problems (Musgraw, 1962). In addition, Olson (1962) described it as an organized and purposeful process, initiated and guided by the agent, to involve a particular group of people in studying their interests, needs and problems; deciding upon and planning educational and other actions to change their situation in desired ways; and making commitments regarding the roles and responsibilities of the participants.

The definitions presented above imply that extension program planning is a social process involving decisions that determine the target group's needs, problems, opportunities, resources and priorities, with the involvement of extension educationists and community representatives.

Assumptions on Extension Program Planning

According to Boyle (1965), the concept of extension planning is based on a number of assumptions. Below is a list of some of them:


1. Planned change is a necessary prerequisite to effective social progress for people and communities.
2. The most desirable change is predetermined and democratically achieved.
3. Extension education programs, if properly planned and implemented, can make a significant contribution to planned change.
4. It is possible to select, organize and administer a program that will contribute to the social and economic progress of people.
5. People and communities need the guidance, leadership and help of extension educators to solve their problems in a planned and systematic way.

Based on the assumptions presented above, extension planning centers on planned change and the progress of people, by people. It follows a logical sequence of actions in solving problems on behalf of the people. Thus, extension planning is, above all, a guided activity to bring change to a particular group of people.

Rationale of Extension Program Planning

Kelsey and Hearne (1949) mentioned the following rationale for a planned extension program. Sound extension program planning:

1. Is based on analysis of the facts in the situation;
2. Selects problems based on needs;
3. Determines objectives and solutions which offer satisfaction;
4. Reflects permanence with flexibility;
5. Incorporates balance with emphasis;
6. Envisages a definite plan of work;
7. Is a continuous process;
8. Is a teaching process;
9. Is a coordinating process;
10. Involves local people and their institutions; and
11. Provides for evaluation of results.

It is observed that program planning is an integral part of the development process and ensures better and more efficient utilization of resources, accountability and human development. Other rationales of program planning are the following:

1. Progress requires a design: A design is established through the education of extension workers, who make a plan of action through progressive and careful planning.

2. Planning gives direction: Planning is one of the most important tasks in extension that has five factors to consider: purpose, needs, learning environment, sources of information and requirements.

3. Effective learning requires a plan: Successful extension learning requires an objective to achieve. This must be developed by the extension professionals together with the group of people concerned.


4. Planning precedes action: The result of action depends on how the questions involved in the planning process are answered. These questions are the following: (a) What information do farm men and women need most? (b) What kind of information shall be extended? (c) What information shall be extended first? (d) How much time shall be devoted to this line of work? (e) How much effort shall be devoted to this line of work?

Principles of Extension Program Planning

Sandhu (1965) identified a set of principles that may be applied in developing countries. The discussion is divided into two parts: (1) as a program and (2) as a process.

1. As a Program

1.1 Extension program planning is based on analysis of the facts in a situation.

o It is important to take into account factors such as land, crops, economic trends, social structure, economic status, tradition and culture, which should be considered as facts. This principle holds that, if need be, the surer way to effect cultural change is by the slow but certain process of education.

1.2 Extension program planning selects problems based on people's needs.

o Extension programs must meet the felt needs of the people (Brunner, 1945). It is very important to identify problems highlighting the needs of the target group. Extension workers must adapt the subject matter and teaching strategy to the learning level, needs and interests of the target group.

1.3 Extension program planning determines definite objectives and solutions which offer satisfaction.

o The objectives and proposed solutions in the work plan must remain flexible to accommodate changes as the program evolves from planning to implementation.

1.4 Extension program planning has permanence with flexibility.

o A program should be prepared with long-term goals in mind but must be flexible enough to meet the changing needs and interests of the people.

1.5 Extension program planning has balance with emphasis.

o In planning, a comprehensive understanding of the target group's needs and interests matters. Efficient distribution of time and effort to address particular needs and problems must be brought to the extension professionals' attention.

2. As a Process

2.1 Extension program planning has a definite plan of work.

o Good organization and careful planning for action are the key to outlining a procedure that can be executed efficiently across the entire program. The plan of work is discussed further under the stages involved in extension program planning.

2.2 Extension program planning is an educational process.

o Participation in surveys, interviews and other methodologies is part of the educational process in planning. Extension planners must treat the following activities as a learning process: gaining knowledge, finding facts, recognizing problems, stating problems and proposing possible solutions.

2.3 Extension program planning is a coordinating process.

o It is part of the planning process for extension professionals to cooperate and coordinate with interested leaders, groups and other agencies to work together for an integrated program.

2.4 Extension program planning is a continuous process.

o Extension in a changing society must adjust and plan for the future by keeping the target group's choices in view, remaining flexible toward new problems, working with people in seeking practical solutions, and keeping abreast of technological and social change (Sutton, 1961).

2.5 Extension program planning involves local people and their institutions.

o Aside from the partners and agencies involved in the process, it is essential to involve local people and let them participate in the program planning process.

2.6 Extension program planning provides for evaluation of results.

o Extension program planning and evaluation go together (Matthews, 1962). Evaluation is needed to determine whether the program objectives set during the planning process have been achieved.

Approaches in Planning Extension Program

After discussing the principles, assumptions and rationale behind planning an extension program, the most important remaining aspect is the set of approaches involved. All organizations working for agricultural development have their own procedures for planning. When considering the planning of extension programs, three different approaches can be distinguished: planning from above, planning from below, and a combination of both.

1. Planning from Above Approach

This approach is also known as top-down, blueprint, or centralized planning. It is a conventional way of developing a program in which the agent is simply expected to implement plans made at the national level. Government institutions holding the mandate for an extension program strictly follow the policy of the national government. This approach is therefore best suited to the transfer of technology. One good example is the dissemination of artificial insemination for dairy cattle to improve breeding practices: the dissemination policy came from the National Dairy Authority (NDA) of the Department of Agriculture (national level) down to the local level and the different dairy cooperatives. According to Dusseldorp and Zijderveld (1991), this approach has clearly defined and generally accepted objectives; detailed and precise knowledge of the process to be implemented in order to reach the objectives; the political will to use available power and resources; and a predetermined timetable and resources. Nevertheless, this approach has advantages and disadvantages. Its advantages are that it facilitates management, monitoring and evaluation because of defined activities and a well-identified chain of responsibilities and duties. Its disadvantages are that it does not take into account the socio-cultural environment; it is agency-centered, with programs based on institutional policies and philosophy; and it is rigid and assumes a high level of stability. Thus, it does not acknowledge the changing needs of the people.

2. Planning from Below Approach

This approach is also known as decentralized, bottom-up, and participatory planning. Here, farmers together with extension agents make plans for developing local agriculture on the basis of local needs and potential, and then request specific assistance from national and regional authorities. This approach is commonly used by non-government organizations (NGOs) and private agencies. According to Bergdall (1993), Dusseldorp and Zijderveld (1991) and Korten (1991), the guiding principle behind it is the participatory approach, in which the ultimate goal of the program is to increase the power of the local actors. This is possible when local actors plan and implement their own improvements; development is treated as a long-term process and personnel act as partners/facilitators rather than experts. In addition, participation of local actors is stressed, with more time spent on needs identification and project preparation with the active involvement of the intended beneficiaries. Like the first approach, this approach has advantages and disadvantages too. Its advantages are that it is open and process-centered, embraces error as a learning factor, and leads to programs and projects with an emergent nature. Its disadvantages are that activities start without predefined objectives, which can confuse the participants and recipients of the extension intervention, and that success depends on the local actors, making progress very difficult if they do not participate well. Lastly, the overall practice of this approach is in contrast with the conventional one, which may complicate relationships with funding agencies.

3. Combination of the Two Approaches

Successful extension programs should combine both planning approaches, bottom-up and top-down. National policies and programs provide a framework that guides the agent in planning local programs and establishing priorities that the local level follows.


Stages in Extension Program Planning

Several works offer processes, cycles and models of the planning process. According to FAO (1997), whatever procedures for program planning are laid down by the extension organization, five distinct stages can be identified.

1. Analyze the present situation. This stage uses situational analysis: farming problems and their causes must be understood, along with the natural, human and other resources in the area. The purpose of this stage is to determine needs by learning the differences among farmers. It involves three activities: collecting facts, analyzing facts, and identifying problems and potential.

Facts vary with the situation being dealt with and may cover, for example, social structure, culture, farming systems, education, size of farms, channels of communication, transport facilities, crops and livestock. Facts may be obtained from surveys and interviews with the target group of people or with the supporting agencies and organizations. Analyzing these facts requires explanation: you have to understand the difference between a fact, an opinion and a wild guess. After collecting and analyzing the facts, you may then identify the main problems. Below are the criteria for setting priorities among the farmers' needs and problems.

External criteria about the farmers' situation:
o Actual needs, based on urgency, size of the gap, number of affected individuals and sequence of actions;
o Readiness of the farmers for the proposed solutions; and
o Market strength, which includes the relationship of the extension organization's mission and the local groups to the identified needs.

Internal criteria about the extension organizational structure:
o Qualifications and interest of the staff;
o Research and knowledge base; and
o Likelihood of internal adjustments.

Furthermore, Forest and Baker (2004) have written a checklist for situational analysis, summarized below:
o Describes the current condition;
o Identifies needs/problems/opportunities or emerging issues;
o Includes supporting data and documentation of needs;
o Includes indicators of the severity/scope of needs;
o Includes benchmarked data;
o Establishes clear justification for the program;
o Describes the primary audience, its number and geographic location;
o Indicates a gap between "what is" and "what could be"; and
o Indicates needed research.
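As an illustrative aside not found in the original report, the prioritization criteria above can be sketched as a simple weighted scoring exercise; the problems, criteria, weights and scores below are all hypothetical:

```python
# Weighted scoring of identified problems against prioritization criteria.
# All names, weights and scores are invented for illustration only.

criteria_weights = {
    "urgency": 0.3,               # external: actual need
    "number_affected": 0.3,       # external: actual need
    "farmer_readiness": 0.2,      # external: readiness for proposed solutions
    "staff_qualifications": 0.2,  # internal: organizational capacity
}

# Each candidate problem is scored 1-5 on every criterion.
problems = {
    "low crop yields": {"urgency": 5, "number_affected": 5,
                        "farmer_readiness": 4, "staff_qualifications": 3},
    "poor market access": {"urgency": 3, "number_affected": 4,
                           "farmer_readiness": 3, "staff_qualifications": 4},
}

def priority_score(scores):
    """Weighted sum of a problem's criterion scores."""
    return sum(criteria_weights[c] * s for c, s in scores.items())

# Rank problems from highest to lowest priority.
ranked = sorted(problems, key=lambda p: priority_score(problems[p]), reverse=True)
```

Under these invented weights, "low crop yields" scores 4.4 against 3.5 for "poor market access" and would be addressed first; in practice the weights themselves should come out of discussion with the community.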


2. Set objectives for the extension program. The key questions in this stage are how local problems will be solved and how local potential will be developed. Solutions require clear, realistic objectives. Three steps are involved in this stage: finding solutions, selecting solutions and stating objectives.

Sources of ideas for solutions to develop an area's potential include the agent's own technical knowledge; farmers and agents from other areas who have tackled similar problems successfully; applied research which tests new ideas under farm conditions; national priorities and directives; and projects which make funds available for particular activities. The proposed solutions should be acceptable to farmers in the area; technically sound and tested by research and experience elsewhere; consistent with national policy and with the local activities of other agencies; feasible within the time and with the resources available to farmers and the extension service; and within the scope of the agent's ability and job description. There are three components of stating objectives: the intended participants of the program, the content of the program, and the desired change after the teaching-learning process is completed. Commonly, objectives should address three domains of behavioral change: cognitive, affective and psychomotor. Below is the checklist for developing objectives by Forest and Baker (1988):

o Relate directly to the need/problem/opportunity;
o Fit the broader priorities and goals of extension and the farmers;
o Identify what the specific client will accomplish or improve;
o Reflect realistic expectations for farmers given the time and resources available;
o Describe the expected result in measurable terms;
o Provide direction for the type, design and sequence of learning experiences needed; and
o Are understandable and can be communicated clearly.

One way of developing objectives is through objective tree analysis (Cristovao et al., 1997). It consists of program or project objectives linked hierarchically, forming a tree graph. Objectives at a lower level are supposed to contribute to the attainment of an objective at a higher level, as shown in Figure 1.

Figure 1. Example of a partial objective tree for a rural development project (Inspired by Delp, Thesen, Motiwalla, Seshadri, 1977).
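As a hypothetical sketch (the objectives here are invented, not taken from Figure 1), an objective tree can be represented as nested data in which each lower-level objective contributes to the one above it:

```python
# A hypothetical objective tree: each node holds an objective and the
# lower-level objectives that contribute to its attainment.
objective_tree = {
    "objective": "Improve rural household income",
    "sub_objectives": [
        {"objective": "Increase crop yields", "sub_objectives": [
            {"objective": "Adopt improved seed varieties", "sub_objectives": []},
            {"objective": "Train farmers in soil management", "sub_objectives": []},
        ]},
        {"objective": "Improve market access", "sub_objectives": [
            {"objective": "Organize farmer cooperatives", "sub_objectives": []},
        ]},
    ],
}

def render(node, level=0):
    """Return one indented line per objective, top of the tree first."""
    lines = ["  " * level + "- " + node["objective"]]
    for child in node["sub_objectives"]:
        lines.extend(render(child, level + 1))
    return lines

print("\n".join(render(objective_tree)))
```

Walking the tree from the top reproduces the hierarchy of the diagram: each indented line is a means to the objective above it.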


3. Develop the program. When the objectives are completed, the extension agent compiles the overall annual work plan. It need not specify activities day by day, but it must indicate the start and end date of each activity as well as the resources needed in the extension program. An example of a work plan is shown in Figure 2.

Figure 2. Example of Plan of Work


4. Implement the program by putting the work plan into effect. An extension program should be flexible enough to allow the agent to respond to circumstances. This stage can be further divided into two sub-phases: the plan of work and the execution of the plan of work. The plan of work, prepared by the extension personnel, must specify the activity, required resources, place of action, persons participating, time to implement and expected results. The execution of the plan of work can be assessed through people changing their attitudes and learning improved skills.

5. Evaluate the program. The purpose of this stage is to know the influence of extension activities on the group of people. A detailed discussion of evaluation appears in the latter part of this paper.

In summary, the different stages of extension program planning are interrelated and do not proceed smoothly from one stage to another. For example, an objective may be set during the situation analysis stage but altered in later stages, once facts have been collected and the problems or situation fully understood. Other models and cycles in extension program planning are summarized in Table 2.

Table 2. Different Models in Extension Program Planning

Ralph Tyler's (1949) Program Development Model
Category: Classical
Concept: Analyzed the educational programs of an institution; focuses on the desired results of curriculum and instruction in a formal educational setting.
Necessary steps: (1) What educational purposes should be attained? (2) What educational experiences can be provided to meet these purposes? (3) How can these educational experiences be effectively organized? (4) How can it be determined whether these purposes have been attained?

Boone et al. (2002) Conceptual Programming Model
Category: Most comprehensive and complex
Concept: Built from a systems approach to organizational development; the program planner is the change agent and decision maker through program facilitation, implementation and evaluation; no decision making is given to stakeholders; the diversity of stakeholders is not addressed.
Necessary steps: (1) Understanding the organization and its renewal process; (2) linking the organization to the public; (3) designing the planned program; (4) implementing the planned program, with evaluation and accountability.

Boyle (1981) Lifelong Learning Perspective
Category: Micro simplified
Concept: Distinguishes developmental, institutional and informational programs with varying goals, sources of objectives, uses of knowledge, involvement of the learner, roles of the programmer and standards of effectiveness; includes the involvement of stakeholders in the program.
Necessary steps: (1) Establish a philosophical basis for programming; (2) analyze the problems, needs or concerns of people; (3) involve potential clients; (4) determine intellectual and social development level; (5) select sources to investigate and analyze in determining program objectives; (6) recognize organizational and individual constraints; (7) establish criteria for determining program priorities; (8) decide on the degree of flexibility; (9) legitimize and obtain support of formal and informal power institutions; (10) select and organize learning experiences; (11) identify instructional design; (12) utilize effective promotional priorities; (13) obtain resources necessary to support the program; (14) determine the effectiveness, results and impact; and (15) communicate program value to appropriate decision makers.

Caffarella and Ratcliff Daffron (2013) Interactive Model
Category: Micro simplified
Concept: Program planning for adult education, reflecting the dynamism of the changing educational context; focuses on the extension professional as an instructor and does not take complex situations into account.
Necessary steps: (1) Building a solid base of support; (2) identifying and prioritizing ideas and needs; (3) developing program goals and objectives; (4) designing instruction; (5) devising transfer-of-learning plans; (6) formulating evaluation plans; (7) selecting formats, schedules and staffing for programs; (8) preparing and managing budgets; (9) organizing marketing campaigns; and (10) coordinating details.

Cervero and Wilson (2006) People-Centered Model
Category: People-centered
Concept: Based on responsible planning theory; focused on politics, ethical obligations, power, interests, communication and language as important to the success of programs; the planner is primarily concerned with the management and politics of outcomes through the power relations of program stakeholders; places little emphasis on program evaluation and reporting.

Klein and Morse (2009) Business Plans
Category: Micro simplified
Concept: Reaches out to audiences as a community of interest around a topic for learning rather than a geographic community of learners; creates an analysis of comparative advantage; improves collaboration between educators and specialists.
Necessary steps: (1) Executive summary; (2) list of program team members; (3) educational goals; (4) target audience; (5) market research on needs; (6) promotional plans; (7) logic model and research base; (8) public and private value; (9) implementation; (10) evaluation; and (11) financials.

The University of Wisconsin-Extension (2003) Logic Model
Concept: Often used by extension professionals as a tool to describe their program to stakeholders and to develop more detailed program and evaluation plans; does not take complex content into account.
Necessary steps: Describes the program's situation, inputs, outputs, outcomes, assumptions, external factors and evaluation to visually show how the program is supposed to work.
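To make the logic model's components concrete, the sketch below fills them in for a hypothetical program (echoing the dairy example used earlier); the content is illustrative, not prescribed by the model:

```python
# Hypothetical logic model using the component names of the University of
# Wisconsin-Extension model: situation, inputs, outputs, outcomes,
# assumptions and external factors.
logic_model = {
    "situation": "Low dairy productivity among smallholder farmers",
    "inputs": ["extension staff", "training budget", "AI technicians"],
    "outputs": ["training sessions delivered", "farmers reached"],
    "outcomes": [
        "farmers learn improved breeding practices",  # short term
        "farmers adopt artificial insemination",      # medium term
        "higher milk yields and household income",    # long term
    ],
    "assumptions": ["farmers are willing to participate"],
    "external_factors": ["milk prices", "weather"],
}

# Show how the program is supposed to work, component by component.
for component, value in logic_model.items():
    text = value if isinstance(value, str) else "; ".join(value)
    print(f"{component}: {text}")
```

Laying a program out this way makes the causal chain explicit: inputs are expected to produce outputs, which lead to the short-, medium- and long-term outcomes, subject to the stated assumptions and external factors.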

Other models are best described using figures and illustrations, as shown in Figures 3 and 4. These two models are similar in terms of the stages and steps suggested by FAO.

Figure 3. Sandhu (1965) Extension Program Planning Model

Figure 4. Detailed six-stage PAM planning cycle. Source: Chamala (1995a).


CHAPTER 2
Managing an Extension Program

Managing an extension program draws on concepts borrowed from general management. Management is defined as “the process by which people, technology, job, tasks and other resources are combined and coordinated so as to effectively achieve organizational objectives” (Waldron et al., 1997). An extension manager, in turn, is “the person who is vested with formal authority over an organization or one of its sub-units” (Waldron et al., 1997). An extension program manager is the person vested with formal authority to access information, devise strategies, make decisions and implement actions (Mintzberg, 1988).

Philosophy of Management

Management functions have guiding principles and approaches focusing on the following (Waldron et al., 1997):

1. Developing and clarifying mission, policies and objectives of the agency or organization.

2. Establishing formal and informal organizational structures as a means of delegating authority and sharing responsibilities.

3. Setting priorities and reviewing and revising objectives in terms of changing demands.

4. Maintaining effective communications within the working group, with other groups, and with the larger community.

5. Selecting, motivating, training and appraising staff.

6. Securing funds and managing budgets, and evaluating accomplishments.

7. Being accountable to staff, the larger enterprise, and to the community at large (Waldron, 1994b).

Management Functions

Management functions can be categorized using the acronym POSDCORB (Bonoma & Slevin, 1978, from Gulick & Urwick, 1959, cited by Waldron et al., 1997). Table 3 summarizes the management functions and their descriptions.

Table 3. Management Functions in Extension Programs

Planning: Outlining philosophy, policy, objectives and resultant things to be accomplished, and the techniques for accomplishing them.

Organizing: Establishing structures and systems through arranged, defined and coordinated activities in line with the program objectives.

Staffing: Fulfilling the personnel function, which includes selecting and training staff and maintaining favorable work conditions.

Directing: Making and embodying decisions in instructions and serving as the leader of the enterprise.

Coordinating: Interrelating the various parts of the work.

Reporting: Keeping informed those to whom an extension manager is responsible, including the staff and the public.

Budgeting: Making financial plans by maintaining accounting, managing control of revenue and keeping costs in line with the program objectives.

1. Planning has different types depending on the organization's size and type. According to Waldron et al. (1997), there are four major types of planning exercises: strategic, tactical, contingency and managerial. Strategic planning usually occurs at the top management level and involves determining organizational goals and the activities to achieve them. Tactical planning, on the other hand, is concerned with implementing the strategic plans and involves middle and lower management. Contingency planning deals with preparation for future problems and issues (Marshall, 1992 as cited by Waldron et al., 1997). Lastly, managerial planning is micro-level planning that helps in combining resources to fulfil the overall objectives of the extension organization.

As discussed in planning an extension program, needs assessment is the first step

in the process. Afterwards, establishing the extension organization's philosophies, including its mission, vision and goals, is essential. Planning also generally involves policy formulation; policies formulated by top management play a complex role in guiding implementation. Thus, an extension manager must be endowed with good decision-making skills for weighing alternatives and consequences. However, decision-making in extension is often a group process. Waldron et al. (1997) mentioned several steps in decision making, as follows: (1) Identifying and defining the problem; (2) Developing various alternatives; (3) Evaluating alternatives; (4) Selecting an alternative; (5) Implementing the alternative; and (6) Evaluating both the actual decision and the decision-making process.

2. Organizing is a process based on five organizing principles (Marshall, 1992): unity

of command, span of control, delegation of authority, homogeneous assignment, and flexibility. This process provides direction in achieving the plans. Various aspects such as time organization, degree of centralization and role specification must be looked into. In addition, there are five steps involved in organizing extension programs: (1) Determining the tasks to be accomplished; (2) Subdividing major tasks into individual activities; (3) Assigning specific activities to individuals; (4) Providing the necessary resources; and (5) Designing the organizational relationships needed. Effective extension managers facilitate planning and managing by having an organizational structure in which authority is delegated and spans of control are defined.

3. Staffing consists of human resource planning and the recruitment process. An extension manager must know how to find the right people for an extension program. Thus, an extension manager must know the processes of staff selection as well as staff orientation.


4. Directing is a management function that focuses on leading. It is more a matter of leadership capacity than of pushing things along. A good extension manager is a leader who knows how to direct people to perform to the best of their ability for a successful extension program.

5. Coordinating involves interrelated responsibilities among the people involved in an

extension program. It must be effective, with open, two-way channels of communication. In extension, formal systems of coordination happen in the community through a multi-way flow of information.

6. Reporting includes the preparation of annual reports to summarize the program achievements. This is essential to extension managers, as they can review during the reporting process whether they have met the program objectives.

7. Budgeting reiterates specific plans on accounting and revenue and expense controls.

Most organizations have their own budget system based on historical data, a management-by-objectives system, or a program review and evaluation technique. However, the key elements of any budget system consist of (1) determining what line items are necessary in terms of objectives; (2) in line with policies, determining the financial amounts for each line; (3) determining overhead, surplus, and/or profit margins; (4) determining anticipated revenue from fees, grants, gifts, contracts, etc.; (5) drafting a budget with specific amounts and justifications; and (6) discussing and making adjustments to produce a working budget (Waldron et al., 1997).
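As an illustration, the key budget elements just described can be expressed as a short calculation. The sketch below is only a minimal example; the line items, overhead rate and revenue figures are hypothetical placeholders, not part of any actual extension budget.

```python
# Sketch of the budget elements described above: line items, overhead,
# anticipated revenue, and a draft working budget.
# All names and figures are hypothetical examples.

def draft_budget(line_items, overhead_rate, revenues):
    """Return total cost, overhead-inclusive, and net position of a draft budget."""
    subtotal = sum(line_items.values())      # elements (1)-(2): line items and amounts
    overhead = subtotal * overhead_rate      # element (3): overhead margin
    total_cost = subtotal + overhead
    total_revenue = sum(revenues.values())   # element (4): fees, grants, gifts, etc.
    return {
        "total_cost": round(total_cost, 2),
        "total_revenue": total_revenue,
        # elements (5)-(6): the surplus/deficit is the basis for adjustment
        "surplus": round(total_revenue - total_cost, 2),
    }

budget = draft_budget(
    line_items={"staff": 50000, "materials": 8000, "travel": 4000},
    overhead_rate=0.10,
    revenues={"grants": 60000, "fees": 9000},
)
print(budget)  # {'total_cost': 68200.0, 'total_revenue': 69000, 'surplus': 800.0}
```

A negative surplus in such a sketch would signal the discussion-and-adjustment step before a working budget is produced.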

Extension organizations' main goal is to promote change. In managing extension programs, extension managers must master efficient management skills and functions to adapt to the changing needs of the people. Management of an extension program can be defined as “the rational assessment of a situation and the systematic selection of goals and purposes; the systematic development of strategies to achieve these goals; the marshalling of the required resources; the rational design, organization, direction, and control of the activities required to attain the selected procedures” (McNeil & Clemmer, 1988, as cited by Waldron et al., 1997). An extension manager's responsibilities must center on building and maintaining relationships, getting and giving information, influencing people and making decisions.


CHAPTER 3

Case Analysis in Extension Program

1st Case: Study Title

Using the Logic Model to Plan Extension and Outreach Program Development and Scholarship by Marilyn Corbin, Nancy Ellen Kiernan, Margaret A. Koble, Jack Watson and Daney Jackson, published in the Journal of Higher Education Outreach and Engagement, Volume 10, Number 1, p. 61 (2004).

Background of the Study

The Penn State extension organization already had the multidimensional UniSCOPE 2000 model, which conceptualizes the teaching, research and service functions of the university as a continuum in outreach and scholarship programs. The authors presented the benefits of a new model for statewide program planning and implementation: the logic model. They assumed that the logic model defines assumptions about cause-and-effect relationships and provides a rationale linking the resources, activities and impacts in a program.

A need existed for a rational unifying process to help program teams led by campus-based faculty and field-based extension educators develop statewide plans of work. The college wanted to demonstrate how educational activities tie to desired impacts and how measured impacts could reasonably be attributed to activities conducted by educators. The authors utilized the logic model in three cases in the college to:

• Strengthen the integration, application and education function of outreach scholarships

• Help program teams of campus-based faculty and field-based educators develop five year and annual statewide program plans in Penn State’s College of Agricultural Sciences.

• Embrace complex stages of impact that can occur through community groups

Case Study 1: Breast Cancer Screening Recruitment Program

Activities: Extension provided an educational program for community organizations and business representatives that serve underinsured women.

Results (1st immediate stage of extension impact): The community groups will teach, counsel and inspire women in the target audience to take preventive steps against breast cancer.

Results (2nd stage of extension impact): The target women will schedule screenings, keep appointments, etc.

Implications: The logic model is also appropriate for other types of programs where the impacts on an individual or group take time to occur but require a series of recognizable steps to achieve the impact.


Case Study 2: Agricultural Entrepreneurship Program

Activities: Extension conducted educational activities for agricultural producers looking at alternative agricultural opportunities that could result in increased income over time.

Results (1st immediate stage of extension impact): The producers will discuss business ideas with partners and similar entrepreneurs, conduct analyses and initiate the development of a business or marketing plan.

Results (2nd stage of extension impact): Completing the business and marketing plans, evaluating competition and securing business experts.

Implications: The logic model links all the steps, ensuring a logical connection between sequential steps, and enables extension to evaluate the intervening impacts as well as the final impact.

Case Study 3: Agronomic Production (across interdisciplinary areas)

Problem: Extension educators came from 4 departments in the college and 6 regions across the state, which made it difficult to organize and plan effectively for the year ahead.

Action: The team created 6 different logic models: integrated pest management, forage crops, grain crops, nutrient management, soil management and organic production.

Immediate Impact: No. of producers who increased subject knowledge.

Intermediate Impact: No. of producers who developed nutrient management plans, measured using interview and observation methods.

Implications: The logic model is a useful tool for organizing, planning and prioritizing activities to accomplish goals effectively. It reflects the organizational change that has occurred.

Conclusion

In summary, using the logic model process encourages dialogue and reflective communication during the program development and implementation process. It provides a rational unifying process to help the program teams develop statewide plans of work by identifying overall goals, articulating measurable impacts (short term and long term), developing activities to accomplish impacts and forming strategies to evaluate impacts.


2nd Case: Study Title

A logic model framework for evaluation and planning in a primary care practice-based research network (PBRN) by Holly Hayes, MSPH, Michael L. Parchman, MD, MPH and Ray Howard, MBA, published in J Am Board Fam Med. 2011;24(5):576–582. doi:10.3122/jabfm.2011.05.110043.

Background of the Study

Practice-Based Research Networks (PBRNs) have an unprecedented opportunity to become effective laboratories for addressing high-priority research questions to improve patient outcomes. Despite significant growth in the number of PBRNs over the past 15 years, little is known about effective and useful methods of evaluating PBRNs. One method with significant potential for PBRN evaluation and planning is a logic model.

A logic model is an efficient tool that requires few resources other than personnel time. It can also provide much-needed detail about how resources and activities connect with the desired results, which helps with project management, resource allocation and strategic planning. However, there are no publications demonstrating how a logic model framework can be used for evaluation and program planning in a primary care PBRN. The objective of the study is to describe the development of a logic model and how the framework has been used in a primary care PBRN, the South Texas Ambulatory Research Network (STARNet).

Development & Application of Logic Model to PBRN Activities

The development of the logic model was considered only an initial phase in the process of evaluating, planning and developing the network. It is followed by the collection of outcome data and, lastly, the assessment of PBRN progress. The steps in developing the logic model are: Step One: Agree on the mission and target audience; Step Two: Identify and describe assumptions, inputs and activities; and Step Three: Identify outputs, outcomes, and outcome indicators.

Conclusion

The logic model is an effective planning and evaluation tool and a useful project management resource that greatly increases the probability that PBRN goals will be reached consistent with the network's mission. The logic model framework not only helped facilitate the network evaluation process but, equally important, engaged the leadership and members in a meaningful way.
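As an illustration of this case, the three development steps (mission and target audience; assumptions, inputs and activities; outputs, outcomes and outcome indicators) amount to filling in a simple structure. The sketch below is hypothetical and does not reproduce STARNet's actual content.

```python
# Minimal sketch of a logic model assembled in the three steps described above.
# All entries are hypothetical illustrations.
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    mission: str                                       # Step One: mission
    target_audience: str                               # Step One: target audience
    assumptions: list = field(default_factory=list)    # Step Two
    inputs: list = field(default_factory=list)         # Step Two
    activities: list = field(default_factory=list)     # Step Two
    outputs: list = field(default_factory=list)        # Step Three
    outcomes: list = field(default_factory=list)       # Step Three
    outcome_indicators: list = field(default_factory=list)  # Step Three

model = LogicModel(
    mission="Improve primary care through practice-based research",
    target_audience="Network member practices",
    assumptions=["Member practices will participate in studies"],
    inputs=["Staff time", "Practice facilitators"],
    activities=["Annual convocation", "Pilot studies"],
    outputs=["Studies completed", "Members attending meetings"],
    outcomes=["Improved care processes"],
    outcome_indicators=["No. of practices adopting changes"],
)
print(len(model.activities))  # 2
```

Filling in such a structure step by step mirrors how the framework forces a team to connect resources and activities to outcomes before collecting outcome data.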


PART II: MONITORING AND EVALUATION OF

EXTENSION PROGRAM

Written by:

MANAN UNDONG


CHAPTER 4

Definition of Monitoring and Evaluation

The Global Consultation on Agricultural Extension observed that monitoring and evaluation are important yet frequently neglected functions in most organizations (FAO, 1990, p. 27). In the worldwide survey of national extension systems, it was found that only about one half of all national extension systems have some type of monitoring and evaluation (M & E) capacity. The consultation noted that in many cases the M & E units are weak and are limited to ad hoc studies. Frequently, these M & E units are abandoned when project funding terminates. In many organizations, monitoring and evaluation have a negative image because these units may concentrate on problems, exposing weaknesses and failures. Instead, monitoring and evaluation should be used in a positive manner to improve extension's performance and increase its efficiency. Therefore, attitudes about and uses of M & E must be changed if this capacity is to be used to advantage in strengthening extension's performance and impact.

The consultation recommended that national extension systems should be strongly

encouraged to establish and use monitoring procedures and evaluation studies both to improve extension performance and to communicate the results of extension programmes to policy makers and clientele being served. Extension systems need to carefully consider monitoring, management information needs, sources of information, and a management information system. The word "monitor" is derived from the Latin word meaning to warn, and "evaluate" stems from the word value (Horton, Peterson, & Ballantyne, 1993, p. 5). Monitoring is an integral and important part of a management information system. Managers require information to keep track of an extension programme and to guide its course of action.

Monitoring is a continuous process of collecting and analyzing information to compare how well a project, programme or policy is being implemented against expected results. Monitoring is supervising activities in progress to ensure they are on-course and on-schedule in meeting the objectives and performance targets (http://www.businessdictionary.com/definition/monitoring.html).

Evaluation on the other hand is the systematic collection of information about

activities, characteristics, and outcomes of projects to make judgments about the project, improve effectiveness, and/or inform decisions about future programming (adapted from Patton, 1987).


CHAPTER 5

Evolution of Monitoring and Evaluation as a Discipline

Program evaluation evolved primarily in the USA and became a semi-professional discipline around 1960. The theory and practice of program evaluation that emerged at this time had roots in earlier work by Tyler (1967) in education, Lewin (1948) in social psychology and Lazarsfeld in sociology (Lazarsfeld and Rosenberg 1955). Program evaluation also has roots in the rapid economic growth in the US after World War II, in the interventionist role that the US federal government took in social policy during the 1960s and in the increasing interest in evaluation as an academic pursuit (Shadish et al. 1995).

At the end of the 19th century Joseph Rice conducted the first generally recognized

formal education evaluation in the USA. The results of the evaluation led educators to revise their approach to the teaching of spelling. At the same stage the ‘accreditation’ evaluation movement began, even though it did not gain ground until the 1930s.

1900-1945 USA

During the early part of the 20th century a focus upon systemisation, standardization and efficiency was seen in the field of education program evaluation. A large number of surveys were carried out in this period, which focused upon school/teacher efficiency. These surveys were externally evaluated. After World War I, school districts used these tests to make inferences about program effectiveness. Unlike later school curriculum evaluations, these surveys focused more upon local needs of the school district. In areas other than education the same focus upon systemisation was seen (Freeman 1977).

In the period between 1930 and 1945, the work of Ralph Tyler came to have an

enormous effect upon the field of evaluation. Tyler coined the term ‘educational evaluation’, which meant assessing the extent to which valued objectives had been achieved in an institutional program. Evaluation was conceptualised by Tyler as a comparison of intended outcomes with actual outcomes. It differed from the previously popular model of the ‘comparative experiment’ in that it did not involve expensive and disruptive comparisons between experimental and control groups. Since Tyler’s approach calls for the measurement of behaviourally defined objectives, it concentrates on learning outcomes instead of organisational and teaching inputs (Madaus et al. 1991). The current resurgence in outcomes-oriented evaluation is clearly not a new concept for program evaluation!

In 1932 Tyler set up an eight-year study in which he evaluated 30 schools on the basis of objectives that were identified by the teachers themselves. This has been cited as the

first comprehensive study of program evaluation (Gredler 1996). The concept of program appraisal was a new perspective introduced by Tyler. Program evaluation can be described as semi-professional, as it shares certain attributes with other professions and differs from purely academic specialties. Program evaluation is not fully professionalised like medicine or law; it has no licensure laws (Shadish et al. 1995).


1946-70 US

This period was characterised by rural poverty and despair in the inner cities in the US (Madaus et al. 1991). There was an expansion in educational services and social programs. The practice of standardised testing had expanded markedly. During this period evaluations were primarily the purview of local agencies. Federal and state agencies had yet to become involved.

In 1957 the USSR launched the first space capsule (Sputnik) and the American public questioned their educational system; were American schools good enough to produce scientists and scholars to close the perceived technology gap? This led to a phase of centralised curriculum development and created the need for national curriculum evaluation.

‘Great Society’ programs in the US

Around 1960 the US government invested large sums of money in programs in education, income maintenance, housing, and health. These programs are collectively termed ‘Great Society’ programs. Expenditure on these programs was massive and constituted an 1800% increase in US public spending between 1950 and 1979 (Bell 1983). This vast expenditure raised issues which pushed evaluation forward as a necessary component of social programs: firstly, accountability for the distribution of funds; secondly, concern that program funds were being spent in ways that caused undesirable results. However, at the time of this increase in demand for evaluation, there was a lack of personnel trained in this field.

The rest of the world

While large-scale programs with an evaluation component became commonplace in the US, this trend also occurred in Europe and Australia. By the late 1960s, in the United States and internationally, evaluation research had become a growth industry (Wall Street Journal, cited by Rossi et al. 1979).

1970 to early 1980s global professionalization of program evaluation

Prior to 1970, limited published literature about program evaluation existed; many evaluations were carried out by untrained personnel, others by research methodologists who tried unsuccessfully to fit their methods to program evaluation (Guba and Lincoln 1981). Around 1973, program evaluation began to emerge as a semi-professional discipline. In the USA a number of journals, including Evaluation Review and New Directions in Program Evaluation, went to press. In the USA in the 1970s, two professional evaluation societies were formed: the ‘Evaluation Research Society’ and the ‘Evaluation Network’. In 1984 these two societies merged to become the ‘American Evaluation Association’.

By 1984 evaluation had become more international and the ‘Australasian Evaluation

Society’ had formed along with the ‘Canadian Evaluation Society’, and by 1995 new professional evaluation societies had formed representing Central America, Europe and Great Britain. In 1981 a joint committee appointed by 12 professional organisations issued a comprehensive set of standards for judging evaluations of educational programs and materials. Around this time evaluators realised that the techniques of evaluation must serve the needs of the clients, address the central value issues, deal with situational realities, meet the requirements of probity and satisfy the needs for veracity (Madaus et al. 1991).

Explosion of theories and techniques

Responding to the need for more sophisticated evaluation models, by the 1970s many new books and papers were published (Cronbach 1963, Scriven 1967, Stake 1967).

Together these publications resulted in a number of new evaluation models described by the authors as responsive to the needs of specific types of evaluation. This body of information revealed sharp differences in philosophical and methodological preferences, but it also underscored a fact about which there was much agreement: evaluation is a multidimensional technical and political enterprise that requires both new conceptualisation and new insights into when and how existing methods from other fields might be used appropriately (Worthen et al. 1997).

Despite all the new promising techniques, Madaus et al. (1991) point out that there is

still a need to educate evaluators in the availability of new techniques, to try out and report the results of using these new techniques, and to develop additional methods. The emphasis must be on making the methodology fit the needs of the society, its institutions and its citizens, rather than the reverse (Kaplan 1994).

1980-1990s era of accountability

In the US a new direction for evaluation emerged in the 1980s with the presidency of Ronald Reagan, who led a backlash against government programming. This is partly explained by the perceived failure of the earlier ‘Great Society’ programs; social indicators revealed that many multi-million dollar projects had resulted in little social change. In the 1990s the call for greater accountability became central to programs at every level: national and local, NGOs and the private sector (Brizius and Campbell 1991). Indeed, in the late 1980s to 1990s there was growing despair about the fact that ‘nothing seemed to work’. In 1988 Brandle’s message in an address to professional evaluators was: “No demonstrable relationship exists between program funding levels and impact, that is, between inputs and outputs; more money spent does not mean higher quality or greater results.” Brandle went on to say that “until systematic accountability is built into government, no management improvements will do the job.”


Table 4. Differences between monitoring and evaluation

Definition: Monitoring is the ongoing analysis of project progress towards achieving planned results, with the purpose of improving management decision making. Evaluation is the assessment of the efficiency, impact, relevance and sustainability of the project’s actions.

Who: Monitoring is an internal management responsibility. Evaluation usually incorporates external agencies.

When: Monitoring is ongoing. Evaluation occurs usually at completion, but also at mid-term, ex-post and ongoing.

Why: Monitoring checks progress, takes remedial action, and updates plans. Evaluation learns broad lessons applicable to other programs and projects, and provides accountability.

Table 5. Summary of differences between monitoring and evaluation


Figure 5. Monitoring and Evaluation in the Program and Project Cycle


CHAPTER 6
Monitoring and Evaluation of Extension Program

Myths about Evaluation  

1. Evaluate only when mandated. Many funded programmes require evaluation as a form of accountability. However, it is a myth that evaluation should occur only if it is mandated.

2. Evaluation is an add-on. It is a myth that evaluation is an add-on activity or at most a

pretest with a posttest. It is most meaningful when it is integrated with decision making at every stage of programme planning and operation.

3. Evaluation is an activity for experts. It is a myth that evaluation should be undertaken

only by technical experts. Yes, complex methods can be used; however, systematic evaluation can be undertaken by inexperienced managers, and specialists and educators themselves can be helped to critique their own work.

4. Outside evaluators are best. It also is a myth that evaluation should be done only by

external, outside, objective evaluators. Yes, external evaluators are often useful in challenging insiders to address what they have overlooked because of their nearsightedness.

5. There is one best evaluation approach. Still another myth is that there is one best way to

conduct an extension evaluation. Some approaches are probably better than others for addressing particular types of questions or concerns. However, the many types of evaluation approaches have their own strengths and limitations.

6. Quantitative data are best. A mixed-methods approach combining qualitative and

quantitative methods can lead to better understanding and appreciation of phenomena under evaluation and provide triangulation, convergence, and corroboration of results from different methods.

Recognizing the politics of evaluation

Evaluations are never value neutral. Political implications are always present because stakeholders have interests to defend. In this sense, all evaluations are political. Therefore, it is very important to know the values, expectations, and interests of the stakeholders at the outset. Probably the most crucial political question that evaluation can raise is, "Who benefits most from extension programmes?" This question is political because the answer often reflects class and economic privilege, ethnic or racial dominance, and gender interests and relationships between staff and various sectors of the population of farmers.

Purpose of Monitoring and Evaluation

1. Provide constant feedback on the extent to which the projects are achieving their goals.

2. Identify potential problems at an early stage and propose possible solutions.

3. Monitor the accessibility of the project to all sectors of the target population.


4. Monitor the efficiency with which the different components of the project are being implemented and suggest improvements.

5. Evaluate the extent to which the project is able to achieve its general objectives.

6. Provide guidelines for the planning of future projects (Bamberger 4).

7. Influence sector assistance strategy. Relevant analysis from project and policy evaluation can highlight the outcomes of previous interventions, and the strengths and weaknesses of their implementation.

8. Improve project design. Use of project design tools such as the logframe (logical framework) results in systematic selection of indicators for monitoring project performance. The process of selecting indicators for monitoring is a test of the soundness of project objectives and can lead to improvements in project design.

9. Incorporate views of stakeholders. Awareness is growing that participation by project beneficiaries in design and implementation brings greater “ownership” of project objectives and encourages the sustainability of project benefits. Ownership brings accountability.  

10. Show need for mid-course corrections. A reliable flow of information during implementation enables managers to keep track of progress and adjust operations to take account of experience (OED).
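The mid-course tracking described in the purposes above can be illustrated with a small sketch: actual indicator values are compared against targets, and any indicator lagging badly is flagged for corrective action. The indicator names and numbers below are hypothetical examples, not drawn from any real project.

```python
# Sketch of monitoring for mid-course corrections: compare actual
# indicator values against planned targets and flag laggards.
# Indicator names and figures are hypothetical examples.

def flag_corrections(targets, actuals, threshold=0.8):
    """Return indicators whose progress is below the given fraction of target."""
    flags = []
    for name, target in targets.items():
        progress = actuals.get(name, 0) / target   # missing data counts as zero progress
        if progress < threshold:
            flags.append((name, round(progress, 2)))
    return flags

targets = {"farmers_trained": 200, "demo_plots": 10, "plans_developed": 50}
actuals = {"farmers_trained": 190, "demo_plots": 4, "plans_developed": 45}

print(flag_corrections(targets, actuals))  # [('demo_plots', 0.4)]
```

In this toy run, only the demonstration-plot indicator falls below 80% of target, so a manager would concentrate remedial action there while the other activities stay on course.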

Types of M & E Studies

A needs assessment: Focuses on identifying needs of the target audience, developing a rationale for a program, identifying needed inputs, determining program content, and setting program goals. It answers the questions:

What do we need and why? What does our audience expect from us? What resources do we need for program implementation?

A baseline study: Establishes a benchmark from which to judge future program or

project impact. It answers the questions:

What is the current status of the program? What is the current level of knowledge, skills, attitudes and beliefs of our audience? What are our priority areas of intervention? What are our existing resources?

A formative, process, or developmental evaluation: Provides information for program improvement, modification, and management. It answers the questions:

What are we supposed to be doing? What are we doing? How can we improve?

A summative, impact, or judgmental evaluation: Focuses on determining overall

success, effectiveness, and accountability of the program. It helps make major decisions about a program’s continuation, expansion, reduction, and/or termination. It answers the questions:


What were the outcomes? Who participated and how? What were the costs?

A follow-up study: Examines long-term effects of a program. It answers the questions:

What were the impacts of our program? What was most useful to participants? What are the long-term effects?

Table 6. The Complementary Roles of Monitoring and Evaluation

Monitoring: routine collection of information. Evaluation: analyzing information.

Monitoring: tracking project implementation progress. Evaluation: ex-post assessment of effectiveness and impact.

Monitoring: measuring efficiency. Evaluation: confirming project expectations.

Monitoring asks, “Is the project doing things right?” Evaluation asks, “Is the project doing the right things?”

Source: Alex and Byerlee (2000) Methods of Program Evaluation  

1. Self-evaluation. This involves an organization or project holding up a mirror to itself and assessing how it is doing, as a way of learning and improving practice. It takes a very self-reflective and honest organization to do this effectively, but it can be an important learning experience.

2. Participatory evaluation. This is a form of internal evaluation. The intention is to involve as many people with a direct stake in the work as possible. This may mean project staff and beneficiaries working together on the evaluation. If an outsider is called in, it is to act as a facilitator of the process, not an evaluator.

3. Rapid Participatory Appraisal. Originally used in rural areas, the same methodology can, in fact, be applied in most communities. This is a qualitative way of doing evaluations. It is semi-structured and carried out by an interdisciplinary team over a short time. It involves the use of secondary data review, direct observation, semi-structured interviews, key informants, group interviews, games, diagrams, maps and calendars.


4. External evaluation. This is an evaluation done by a carefully chosen outsider or team of outsiders.

Methods of Gathering Data for Monitoring and Evaluation

1. Interviews. These can be structured, semi-structured or unstructured (see Glossary of Terms). They involve asking specific questions aimed at getting information that will enable indicators to be measured. Questions can be open-ended or closed (yes/no answers). Interviews can be a source of qualitative and quantitative information.

2. Key informant interviews. These are interviews carried out with specialists in a topic, or with someone who may be able to shed a particular light on the process.

3. Questionnaires. These are written questions used to get written responses which, when analyzed, will enable indicators to be measured.

4. Focus Group Discussion. In a focus group, a group of about six to 12 people is interviewed together by a skilled interviewer/facilitator with a carefully structured interview schedule. Questions are usually focused around a specific topic or issue.

5. Community Meetings. This involves a gathering of a fairly large group of beneficiaries to whom questions, problems, and situations are put for input to help in measuring indicators.

6. Field Workers' Reports. Structured report forms that ensure that indicator-related questions are asked, answers recorded, and observations recorded on every visit.

7. Ranking. This involves getting people to say what they think is most useful, most important, least useful, and so on.

8. Visual/Audio Stimuli. These include pictures, movies, tapes, stories, role plays, and photographs used to illustrate problems, issues, past events, or even future events.

9. Rating Scales. People are usually asked to say whether they agree strongly, agree, don't know, disagree, or disagree strongly with a statement. You can use pictures and symbols in this technique if people cannot read and write.

10. Critical Event/Incident Analysis. This method is a way of focusing interviews with individuals or groups on particular events or incidents. The purpose of doing this is to get a very full picture of what actually happened.

11. Participant Observation. This involves direct observation of events, processes, relationships and behaviours. "Participant" here implies that the observer gets involved in activities rather than maintaining a distance.

12. Self Drawing. This involves getting participants to draw pictures, usually of how they feel or think about something.
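Several of these methods yield raw responses that must be tallied before they can feed an indicator. As an illustrative sketch only (the scale labels and responses below are hypothetical, not drawn from any actual survey), rating-scale answers of the kind described in method 9 could be summarized like this:

```python
from collections import Counter

# Five-point rating scale as in method 9; labels are illustrative.
SCALE = ["agree strongly", "agree", "don't know", "disagree", "disagree strongly"]

def summarize_ratings(responses):
    """Tally rating-scale responses and report each category as a share (%)."""
    counts = Counter(responses)
    total = sum(counts.values())
    return {label: round(100 * counts[label] / total, 1) for label in SCALE}

# Hypothetical responses from ten farmers to one statement.
responses = ["agree", "agree strongly", "agree", "don't know",
             "agree", "disagree", "agree strongly", "agree",
             "don't know", "agree"]
print(summarize_ratings(responses))  # category shares sum to 100
```

The same tallying pattern applies to ranking exercises (method 7), with rank positions in place of the scale labels.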


The Monitoring Indicators

Indicators, as the term suggests, are variables that help to measure changes in a given situation (ACC, 1984, p. 37). They are tools for monitoring and evaluating the effects of an activity. There are different types of indicators, for example, development indicators, socioeconomic indicators, agricultural development indicators, and extension indicators.

Table 7. The Locus of Evaluation and Criteria

External evaluator, external criteria
Internal evaluator, external criteria
External evaluator, internal criteria
Internal evaluator, internal criteria

Table 8. Example of Extension Performance Indicators

Extension Effectiveness Indicators:

1. Awareness: Number of farmers aware of Village Extension Workers (%)
2. Visits: Number of visits by Village Extension Workers to farmers (twice a month, once a month, or no visit)
3. Field Meetings: Number of meetings of Village Extension Workers with farmers in their fields (%)
4. Regularity: Number of meetings of the Village Extension Worker with farmers on the fixed day (%)
5. Field Days: Number of field days organized by the Village Extension Worker in (a) the preceding month, (b) the quarter, and (c) the year (average)
6. Demonstrations: Number of (a) method demonstrations, (b) result demonstrations, and (c) method-cum-result demonstrations organized by the Village Extension Worker in (i) the preceding month, (ii) the quarter, and (iii) the year
7. Supervision: Number of supervisory visits from Agricultural Extension Officers to the Village Extension Worker in the field per month (average)
8. Research-Extension Linkage: Number of research-extension linkage workshops organized per month (average)
9. Farmer Training: Number of farmers trained in a Farmers' Training Center (institutionalized training course) per year (average)
10. Extension Effectiveness: Number of meetings of the Village Extension Worker with farmers on the fixed day (%)
11. Performance Index: Actual number of farmers reached out of the target number that should be reached (Casley and Lury, 1972)
12. Penetration Index: Number of farmers adopting the recommended practice out of the actual number reached (%)
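The last two indicators above, the performance and penetration indices, are simple ratios, so they are straightforward to compute once monitoring data are in hand. A minimal sketch (the farmer counts are hypothetical, chosen only to illustrate the arithmetic):

```python
def performance_index(farmers_reached, farmers_targeted):
    """Performance index (Casley and Lury, 1972): actual number of
    farmers reached out of the target number that should be reached."""
    return farmers_reached / farmers_targeted

def penetration_index(farmers_adopting, farmers_reached):
    """Penetration index: share of farmers reached who adopted the
    recommended practice, expressed as a percentage."""
    return 100 * farmers_adopting / farmers_reached

# Hypothetical monitoring data for one Village Extension Worker.
reached, targeted, adopting = 240, 300, 60
print(f"Performance index: {performance_index(reached, targeted):.2f}")   # 0.80
print(f"Penetration index: {penetration_index(adopting, reached):.1f}%")  # 25.0%
```

Tracking both together is informative: a high performance index with a low penetration index suggests the worker is meeting contact targets but that the recommendations themselves are not being taken up.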

Guide Questions in Conducting M & E

1. Relevance: Do the objectives and goals match the problems or needs that are being addressed?

2. Efficiency: Is the project delivered in a timely and cost-effective manner?

3. Effectiveness: To what extent does the intervention achieve its objectives? What are the supportive factors and obstacles encountered during implementation?

4. Impact: What happened as a result of the project? This may include intended and unintended positive and negative effects.

5. Sustainability: Are there lasting benefits after the intervention is completed?

Types of Performance Indicators

1. Input Indicators. Input indicators measure the quantity (and sometimes the quality) of resources provided for project activities. Examples:

a. Funding: counterpart funds, Bank loan funds, cofinancing, grants
b. Guarantees
c. Human resources: number of person-years for members of the implementation unit, consultants, and technical advisers
d. Training
e. Equipment, materials, and supplies, or recurrent costs of these items (for example, textbooks, syringes, vaccines, classroom facilities)

2. Output Indicators. Output indicators measure the quantity of the goods or services created or provided through the use of inputs. Examples:

a. Clients vaccinated (by a health project)
b. Farmers visited (an extension project)
c. Miles of roads built (a highway project)
d. Electricity generation and transmission facilities installed (a rural electrification project)
e. Pollution control measures installed, or incentives or regulations enforced (a pollution control or air or water quality improvement project)

3. Outcome and Impact Indicators. Outcome and impact indicators measure the quantity and quality of the results achieved through the provision of project goods and services. Examples:

a. Reduced incidence of disease (through vaccinations)
b. Improved farming practices (through extension visits)
c. Increased vehicle use or traffic counts (through road construction or improvement)
d. Increased rural supply and consumption of electricity (through expansion of the electricity network)
e. Reduced mortality or lower health costs (through improved family health practices, improved nutrition, or cleaner air and water)

4. Relevance Indicators. Relevance indicators measure trends in the wider policy problems that project impacts are expected to influence. Examples:

a. Improved national health as measured by health indicators (through improved health care and health system performance)
b. Increased farm profits and reduced food costs (through improved farming practices)
c. Reduced transportation costs and expanded economic development (through road construction or improvement)
d. Improved economic growth and enhanced consumer well-being (through expanded electrification, pollution controls, and other new technology)

5. Risk Indicators. Risk indicators measure the status of the exogenous factors identified as critical through the risk and sensitivity analysis (risk and enabling factors) performed as part of a project’s economic analysis.

6. Efficiency Indicators. Efficiency indicators usually represent the ratio of inputs needed per unit of output produced. Examples: physical inputs, dollars, or labor required per unit of output.

7. Effectiveness Indicators. Effectiveness indicators usually represent the ratio of outputs (or the resources used to produce the outputs) per unit of project outcome or impact, or the degree to which outputs affect outcomes and impacts. Examples:

a. Number of vaccinations administered (or their cost) per unit decline in morbidity rate (illness prevented) or per unit decline in mortality rate
b. Number of farmers visited per measured change in farm practices (number of farmers adopting new practices), or number of farmers adopting new practices per unit increase in agricultural productivity
c. Miles of road built per unit increase in vehicle usage, or new road usage per unit decrease in traffic congestion

8. Sustainability Indicators. Sustainability indicators represent the persistence of project benefits over time, particularly after project funding ends. Examples:

a. Disease incidence trends after external funding for a vaccination project ends
b. Persistence of changed farming practices after extension visits are completed
c. Maintenance and use of roads after highway construction ends
d. Persistence of institutions (programs, organizations, relationships, and so on) created to deliver project benefits

Criteria in Selecting Indicators

1. Relevance. The indicators selected must be relevant to the basic sectoral development objectives of the project and, if possible, to overall country objectives.

2. Selectivity. The indicators chosen for monitoring purposes should be few and meaningful.

3. Practicality of indicators, borrower ownership, and data collection. If performance indicators are to meaningfully reflect a project's objectives, they should be selected jointly by the borrower and the Bank during participatory project preparation, and the data they measure should be useful to both project and country.

4. Intermediate and leading indicators. In the absence of more definite impact indicators, early pointers of development impact may be used during project implementation to indicate progress toward achieving project objectives.

5. Quantitative and qualitative indicators. To the extent possible, performance indicators should allow for quantitative measurement of development impact. For some project objectives (for instance, capacity building) it may be necessary to develop qualitative indicators to measure success, which should still allow credible and dispassionate monitoring.

Measuring Performance

1. Direct Measures. Direct measures correspond precisely to results at any performance level: for instance, quantities of goods delivered or counts of clients served.

2. Indirect Measures. They are often used when direct measures are too difficult, inconvenient, or costly to obtain. Indirect measures are based on a known relationship between the performance variable and the measure chosen to express it. Examples:

a. Using lower farmgate prices as an indirect indicator of increased agricultural productivity
b. Using reduced numbers of consumer complaints as an indirect indicator of improved customer processing

3. Early pointers: intermediate and leading indicators. Intermediate indicators measure intermediate results or intervening steps toward project objectives. They usually measure changes associated with the ultimate impact sought but for which information can be obtained earlier. Examples:

a. Fertilizer purchase could be used as a preliminary indicator of changed farming practices
b. Increased nutritional knowledge as an indicator of improved eating practices

4. Quantitative and qualitative measures. Indicators of impacts, outcomes, outputs, and inputs are easily quantified, that is, measured by defined numerical values. These are typically the basis for calculations of economic rate of return or net present value during appraisal.

5. Measurement scope. Measurement scope refers to the use of sample populations. Performance indicators sometimes measure results directly for an entire target population through administrative records, observations, or census surveys.

6. Special studies. Special studies are formative evaluations of the fundamentals of problems and their origins, and in that way differ from monitoring indicators, which are part of an early warning system. Example:

a. Project managers might need to learn more about the causal links among project outputs, outcomes, and impacts, especially when indicators reveal that the broader purposes of a project are not being achieved even though its planned outputs are being delivered.

Action Plan and Chain of Events

1. Planning and design of the study
2. Desk research
3. Selection of methods
4. Data collection and analysis
5. Report writing
6. Report presentation
7. Follow-up action

Outline of Evaluation Report

1. Executive Summary
2. Background of the Program
3. Description of the Evaluation
4. Results/Findings
5. Discussion
6. Costs and Benefits
7. Conclusions
8. Recommendations
9. Appendices


Evaluating Kenya’s Agricultural Extension Project (Case Study)

Introduction

The first National Extension Project (NEP I) in Kenya introduced the Training and Visit (T&V) system of management for agricultural extension services in 1983. The project had the dual objectives of developing institutions and delivering extension services to farmers, with the goal of raising agricultural productivity. NEP II followed in 1991, and aimed to consolidate the gains made under NEP I by increasing direct contact with farmers, improving the relevance of extension information and technologies, upgrading skills of staff and farmers, and enhancing institutional development.

The performance of the Kenyan extension system has been controversial and is part of the larger debate on the cost-effectiveness of the T&V approach to extension. Despite the intensity of the debate and the large volume of investments made, very few rigorous attempts have been made to measure the impact of T&V extension. In the Kenyan case, the debate has been particularly strong because of very high estimated returns to T&V reported in an earlier study and the lack of convincingly visible results, including the poor performance of Kenyan agriculture in recent years.

The World Bank's Operations Evaluation Department (OED) undertook an evaluation of the Kenyan extension system in 1997-99. Using a results-based management framework, the evaluation examines the impact of project services on farm productivity and efficiency. It also develops measures of program outputs (for example, frequency and quality of contact) and outcomes (that is, farmer awareness and adoption of new techniques).

Evaluation design

The evaluation strategy illustrates best-practice techniques in using a broad array of evaluation methods and exploiting existing data. It drew on both quantitative and qualitative methods so that rigorous empirical findings on program impact could be complemented with beneficiary assessments and staff interviews that highlight practical issues in the implementation process. The study also applied the contingent valuation method to elicit farmers' willingness to pay for extension services. The quantitative assessment was complicated by the fact that the T&V system was introduced on a national scale, preventing a "with program" and "without program" (control group) comparison. The evaluation methodology therefore sought to exploit the available preproject household agricultural production data for limited before-and-after comparisons using panel data methods. For this, existing household data were complemented by a fresh survey to form a panel. Beneficiary assessments designed for this study could not be conducted, but the evaluation drew on the relevant findings of two recent beneficiary assessments in Kenya.

Data collection and analysis techniques

The evaluation approach drew on several existing qualitative and quantitative data sources. The quantitative evaluation is based largely on a 1998 household survey conducted by OED. This survey generated panel data by revisiting as many households as could be located from a 1990 household survey conducted by the World Bank's Africa Technical Department, which in turn drew from a subsample of the 1982 Rural Household Budget Survey.

The study evaluated institutional development by drawing on a survey of extension staff, several recent reviews of the extension service conducted or commissioned by the Ministry of Agriculture, and individual and focus group discussions with extension staff.

The study also drew on two recent beneficiary assessments: a 1997 study by ActionAid Kenya, which elicited the views of users and potential users of Kenya's extension services, and a 1994 Participatory Poverty Assessment, carried out jointly by the government of Kenya, the African Medical and Research Foundation, the British Overseas Development Administration, UNICEF, and the World Bank, which inquired about public services, including extension. Quality and quantity of services delivered were assessed using a combination of the findings of participatory (beneficiary) assessments and staff surveys, and through measures of outreach and the nature and frequency of contact between extension agents and farmers drawn from the 1998 OED survey. Contingent valuation methods were used to directly elicit farmers' willingness to pay for extension services.

The survey data were also used to assess program outcomes, measured in terms of farmer awareness and adoption of extension recommendations. The program's results, its actual effects on agricultural production in Kenya, were evaluated by relating the supply of extension services to changes in productivity and efficiency at the farm level. Drawing on the household panel data, these impacts were estimated using Data Envelopment Analysis, a nonparametric technique, to measure changes in farmer efficiency and productivity over time, along with econometric analysis measuring the impact of the supply of extension services on farm production.

Results

The institutional development of NEP I and NEP II has been limited. After 15 years, the effectiveness of extension services has improved little. Although there has been healthy rethinking of extension approaches recently, overall the extension program has lacked a strategic vision for future development. Management of the system continues to be weak, and information systems are virtually nonexistent. The quality and quantity of service provision are poor. Beneficiaries and extension service staff alike report that visits are infrequent and ineffective. Although there continues to be unmet demand for technically useful services, the focus of the public extension service has remained on simple and basic agronomic messages. Yet the approach taken, a high intensity of contact with a limited number of farmers, is suited to delivering more technical information. The result has been a costly and inefficient service delivery system. The analysis showed that extension activities had little influence on awareness and adoption of recommendations, indicating limited potential for impact. In terms of actual impact on agricultural production and efficiency, the data indicated a small positive impact of extension services on the ability of farmers to get the most production from available resources (technical efficiency). However, no effect was found on allocative efficiency (the use of resources given market prices) or overall economic efficiency (the combination of technical and allocative efficiency). Further, no significant impact of the supply of extension services on productivity at the farm level could be established. The data did show, however, that the impact was relatively greater in the previously less productive areas, where the knowledge gap is likely to have been the greatest. These findings were consistent with the contingent valuation findings: a vast majority of farmers, among both current recipients and nonrecipients, were willing to pay for advice, indicating an unmet demand. However, the perceived value of the service, in terms of the amount offered, was well below what the government was spending on delivering it.

Policy implications

The Kenya Extension Service evaluation stands out in terms of the array of practical policy conclusions that could be derived from its results, many of which are relevant to the design of future agricultural extension projects. First, the evaluation revealed the need to enhance targeting of extension services, focusing on areas and groups in which the difference between the average and best practice is the greatest and hence the impact is likely to be greatest. Furthermore, advice needs to be carefully tailored to meet farmer demands, taking into account variations in local technological and economic conditions. Achieving a high level of service tailoring requires regular and timely flows of appropriate and reliable information, and a monitoring and evaluation system that provides regular feedback from beneficiaries on service content.

To raise program efficiency, a leaner and less intense presence of extension agents with wider coverage is likely to be required. There are not enough technical innovations to warrant a high frequency of visits, and there is unmet demand from those currently not receiving services. The program's blanket approach to service delivery, relying predominantly on a single methodology (farm visits) to deliver standard simple messages, also limits program efficiency. Radio programs are now popular, younger farmers are more educated, and alternative providers (non-governmental organizations) are beginning to emerge in rural Kenya. A flexible, pluralistic approach to service delivery, particularly one that uses lower-cost means of communication, is likely to enhance the cost-effectiveness of the program.

Finally, the main findings pointed to the need for institutional reform. The central focus of the institution should be the farmer. Decentralization of program design, including participatory mechanisms that give voice to the farmer (such as cost sharing and farmer organizations), should become an integral part of the delivery mechanism. Financial sustainability is critical. The size and intensity of the service should be based on existing technological and knowledge gaps and the pace of flow of new technology.

Cost recovery, even if only partial, offers several advantages: it provides appropriate incentives, addresses issues of accountability and quality control, makes the service more demand-driven and responsive, and provides some budgetary respite. Such decentralized institutional arrangements remain unexplored in Kenya and in many extension programs in Africa and around the world.


Evaluation costs and administration

Costs. The total cost of the evaluation was approximately $350,000, which covered household survey data collection and processing ($65,000, though this is probably an underestimate of actual costs); the extension staff survey, data, and consultant report ($12,500); and other data collection costs ($12,500). Approximately $160,000 for World Bank staff time and travel costs for data processing, analysis, and report writing should be added to reflect fully the study's cost.

Administration. To maintain objectivity and dissociate survey work from both the government extension service and the World Bank, the household survey was implemented by the Tegemeo Institute of Egerton University, an independent research institute in Kenya. The analysis was carried out by World Bank staff.

Lessons learned

The combination of theory-based evaluation and a results-based framework can provide a sound basis for evaluating the impact of project interventions, especially where many factors are likely to affect intended outcomes. The design of this evaluation provided for the measurement of key indicators at critical stages of the project cycle, linking project inputs to the expected results to gather sufficient evidence of impact.

An empirical evaluation demands constant and intense supervision. An evaluation can be significantly simplified with a well-functioning and high-quality monitoring and evaluation system, especially with good baseline data. Adequate resources for these activities are rarely made available. This evaluation also benefited tremendously from having access to some, albeit limited, data for the preproject stage, and also independent sources of data for comparative purposes.

Cross-validation of conclusions using different analytical approaches and data sources is important to gather a credible body of evidence. Imperfect data and implementation problems place limits on the degree of confidence with which individual methods can provide answers to key evaluative questions. Qualitative and quantitative assessments strongly complement each other. The experience from this evaluation indicates that even in the absence of participatory beneficiary assessments, appropriately designed questions can be included in a survey to collect qualitative as well as quantitative information. Such information can provide useful insights to complement quantitative assessments.

If properly applied, contingent valuation can be a useful tool, especially in evaluating the value of an existing public service. The results of the application in this evaluation are encouraging, and the responses appear to be rational and reasonable.


REFERENCES

Part I

AEXT392: Lecture 02: Extension Program Planning and Evaluation. Retrieved from agridr.in/tnauEAgri/eagri50/AEXT392/lec02.html

Corbin, M., N. Kiernan, M. Koble, J. Watson, and D. Jackson. 2004. Using the Logic Model to Plan Extension and Outreach Program Development and Scholarship. Journal of Higher Education Outreach and Engagement, Volume 10, Number 1, p. 61. Retrieved from openjournals.libs.uga.edu/index.php/jheoe/article/download/1225/753

Cristovao, A., T. Kochen and J. Portela. 1997. Chapter 7: Developing and delivering extension programmes. In Swanson, B.E., R.P. Bentz and A.J. Sofranko (eds.), Improving Agricultural Extension: A Reference Manual. Rome: FAO.

Diehl, D.C. and S.G. Gonzalez. 2012. Planning or Refining an Extension Program. Retrieved from http://edis.ifas.ufl.edu/fy1229

Forest, L. and S. Mulcahy. 1976. First Things First: A Handbook of Priority Setting in Extension. Retrieved from www.ext.edu/ces/pdanded/learning/pdf/ftf.pdf

Franz, N., B. Garst and R. Ganon. 2015. The Cooperative Extension Program Development Model: Adapting to a Changing Context. Journal of Human Sciences and Extension, Volume 3, Number 2. Retrieved from lib.dr.iastate.edu/cgi/viewcontent.cgi?article=1009&context=edu_pubs

Guidelines for Developing a Plan of Work. 2010. Retrieved from www.ext.colostate.edu/staffies/program/pow-guidelines.pdf

Hayes, H., M. Parchman and R. Howard. 2011. A logic model framework for evaluation and planning in a primary care practice-based research network (PBRN). Journal of the American Board of Family Medicine 24(5): 576-582. doi:10.3122/jabfm.2011.05.110043

Oakley, P. and C. Garforth. 1985. Guide to Extension Training: 7. The planning and evaluation of extension programmes. Retrieved from http://www.fao.org/docrep/t0060e/T0060E09.htm

Waldron, M.W., J. Vasanthakumar and S. Arulraj. 1997. Chapter 13: Improving the organization and management of extension. In Swanson, B.E., R.P. Bentz and A.J. Sofranko (eds.), Improving Agricultural Extension: A Reference Manual. Rome: FAO.


Part II

ACC (Administrative Committee on Coordination), United Nations. 1984. Guiding principles for the design and use of monitoring and evaluation in rural development projects and programmes. Rome: International Fund for Agricultural Development (IFAD).

Alex, G., and D. Byerlee. 2000. Monitoring and Evaluation for AKIS Projects: Framework and Options. Agricultural Knowledge and Information Systems (AKIS) Good Practice Note. Washington, DC: World Bank.

Bamberger, M., and E. Hewitt. 1986. Monitoring and Evaluating Urban Development Programs: A Handbook for Program Managers and Researchers. World Bank Technical Paper No. 53. Washington, DC: World Bank.

Casley, D. J., and K. Kumar. 1987. Project monitoring and evaluation in agriculture. Baltimore: Johns Hopkins University Press.

Cernea, M. M., and B. J. Tepping. 1977. A system for monitoring and evaluating agricultural extension projects. World Bank Staff Working Paper No. 272. Washington, DC: World Bank.

Eisner, E. 1983. Educational connoisseurship and criticism: Their form and function in educational evaluation. In G. F. Madaus, M. Scriven, and D. Stufflebeam (eds.), Evaluation models. Boston: Kluwer-Nijhoff.

European Commission. 2004. Project Cycle Management Guidelines.

FAO. 1988. Socio-economic indicators for monitoring and evaluating agrarian reform and rural development. WCARRD: Ten years of follow-up (1979-1989). Rome: FAO.

FAO. 1990. Report of the global consultation on agricultural extension. Rome: FAO.

Fowler, A. 1997. Striking a Balance: A Guide to Enhancing the Effectiveness of Non-Governmental Organisations in International Development. London: Earthscan.

Gautam, M. 1999. "World Bank Agricultural Extension Projects in Kenya: An Impact Evaluation." Report No. 19523. World Bank, Operations Evaluation Department, Washington, DC.

Greene, J. C., V. J. Caracelli, and W. F. Graham. 1989. Toward a conceptual framework for mixed-method evaluation designs. Educational Evaluation and Policy Analysis 11: 255-274.

Horton, D., W. Peterson, and P. Ballantyne. 1993. M&E principles and concepts (pp. 5-19). In D. Horton et al. (eds.), Monitoring and evaluating agricultural research: A sourcebook. Oxon, UK: CAB International.

Misra, D. C. 1994. Agricultural extension effectiveness in India (mimeo). University of Oxford, Queen Elizabeth House, International Development Centre.

Morris, L. L., C. T. Fitz-Gibbon, and M. E. Freeman. 1987. How to communicate evaluation findings. Newbury Park, CA: Sage Publications.

Murphy, J., and T. Marchant. 1988. Monitoring and evaluation in extension agencies. Oxford: Queen Elizabeth House, International Development Centre.

Patton, M. Q. 1991. Beyond evaluation myths. Adult Learning (October): 9-10, 28.

Scriven, M. 1972. Pros and cons about goal-free evaluation. Evaluation Comment 3: 1-5.

Slade, R. H., and G. Feder. 1985. Training and visit extension: A manual of instruction.

World Bank. 1998. Public Expenditure Management Handbook. Washington, DC: World Bank.

http://siteresources.worldbank.org/INTISPMA/Resources/Training-Events-and-Materials/Day1_Session2_MGoldstein.ppt
http://www.businessdictionary.com/definition/monitoring.html
http://www.ifrc.org/
www.worldbank.org/smallgrantsprogram