

The Assessment Book: Applied Strategic Thinking and Performance Improvement Through Self-Assessments

Roger Kaufman, CPT, PhD
Ingrid Guerra-López, PhD

with Ryan Watkins, PhD

and Doug Leigh, PhD

HRD Press, Inc. • Amherst • Massachusetts


Copyright © 2008, Roger Kaufman and Ingrid Guerra-López

Published by: HRD Press, Inc., 22 Amherst Road, Amherst, MA 01002
800-822-2801 (U.S. and Canada) • 413-253-3488 • 413-253-3490 (fax)
www.hrdpress.com

All rights reserved. Printed in the United States of America. No part of this material may be reproduced or utilized in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage and retrieval system without written permission from the author.

ISBN 978-1-59996-128-6

Production services by Jean Miller
Editorial services by Sally Farnham
Cover design by Eileen Klockars

Table of Contents

Introduction
Chapter 1: The Basics of Mega Thinking and Planning—The Rationale for the Seven Self-Assessment Instruments
    Introduction
    The Societal Value-Added Perspective and Frame of Mind
        Guide One: The Organizational Elements Model (OEM)
        Guide Two: Six Critical Success Factors
        Guide Three: A Six-Step Problem-Solving Model
    New Realities for Organizational Success
        Mega Planning Framework
    What does all of this have to do with this book?
    Related References
Chapter 2: Assessment Instruments: What are they and how can they be useful?
    Questionnaires
        Considerations for Design and Development
        What data should be collected?
        Questionnaire Structure
        Length
    Suggested Data Analysis Approach for These Instruments
        Displaying Response Data
        Interpretation and Required Action
    Related References
Chapter 3: Strategic Thinking and Planning—How Your Organization Goes About It (and Finding out What it Might Want to Change)
    How to Complete This Survey
        Strategic Thinking and Planning Survey
    Once Data Is Collected and Analyzed
        The Meaning of Gaps in Results
        When There are No Gaps in Results
    Using the Results of This Survey
    Related References
Chapter 4: Needs Assessment, Determination, and Your Organization’s Success
    How to Complete This Assessment
        Needs Assessment
    What the Results Mean
    When There Are No Gaps
    The Meaning of Gaps
        How the Organization Perceives Needs Assessment
        What the Organization Includes When Doing Needs Assessments
        How the Organization Goes About Needs Assessment
        The Basis for Needs Assessment Criteria
        Using Needs Assessment Data
    Items and Patterns
    Related References
Chapter 5: Culture, Our Organization, and Our Shared Future
    How to Complete This Survey
        Culture and My Organization Survey
    What the Results Mean
        Associates and Work
        Management Style
        Measuring Success
        The Organization
        Customer Relationships
        Direction and the Future
    Using the Results of This Survey
    Related References
Chapter 6: Evaluation, You, and Your Organization
    How to Complete This Survey
        Evaluation, You, and Your Organization Survey
    What the Results Mean
        How the Organization Perceives Evaluation
        How the Organization Goes About Evaluation
        The Basis for Evaluation Criteria
        Using Evaluation Data
    Using the Results of This Survey
    Related References
Chapter 7: Competencies for Performance Improvement Professionals
    Introduction
    Framework
    Instrument Validation
    How to Complete This Inventory
        Performance Improvement Competency Inventory
    Analysis and Interpretation
    Some Thoughts on Interpretation
    Related References
Chapter 8: Performance Motivation
    Introduction
    Expectancies of Success and the Value of Accomplishment
    Performance Motivation
        Input Performance Motivation
        Process Performance Motivation
        Individual Performance Motivation (Micro Level)
        Team Performance Motivation (Micro Level)
        Organizational Performance Motivation (Macro Level)
        External-Client Performance Motivation (Mega Level)
        Societal Performance Motivation (Mega Level)
    Description of the Performance Motivation Inventory—Revised (PMI-A)
        Purpose of the PMI-A
    Directions for Completing the PMI-A
        Performance Motivation Inventory
    Scoring the PMI-A
    Interpreting Results of the PMI-A
    Related References
Chapter 9: Organizational Readiness for E-learning Success
    Introduction
    E-learning
        Foundations
    How to Use the Self-Assessment
        Organizational E-learning Readiness Self-Assessment
    Scoring and Analysis of Results
    Interpreting Results for E-learning Dimensions
    Related References
Concluding Thoughts and Suggestions
Glossary
About the Authors


Introduction

Determining what should be accomplished before selecting how to accomplish it is essential for improving performance among individuals, within teams, and across organizations, as well as for making valued contributions to external partners and society. This book includes a set of professional self-assessment guides for achieving those goals, as well as a manual for how to successfully use them for strategic decision making within your organization. From e-learning, motivation, and competency development to valuable performance processes such as strategic planning and evaluation, the self-assessments included in this book provide the necessary questions, logical frameworks, and systematic guidance for making practical decisions about both what should be accomplished and how those objectives can best be achieved.

What This Is About and Who It Is For: A Proactive Focus

The Assessment Book: Applied Strategic Thinking and Performance Improvement Through Self-Assessments is research based and intended to measurably improve the contributions of those professionals who are asked to make decisions that will improve performance. It is for individuals, teams, and organizations who wish to assess whether they are pursuing the most useful approaches for improving performance before applying valuable how-to-do-it technologies, methods, or guidance. It is specifically targeted to let you and your organization ask and answer the “right questions” relative to the vital areas of individual and organizational performance improvement that lead to systemic success. It is for practitioners and managers alike.

What Is Unique About This Approach?

Several aspects of the approach applied in each self-assessment make these unique and worthy of your attention. First, each instrument has gone through utility and validation reviews and applications that provided valuable feedback for subsequent revisions and improvement. Not all are scientifically validated instruments, but all are pragmatic ways and means to calibrate where your organization is currently and help you decide where it should be.

A second distinctive aspect is that these instruments address the issue of “what” before “how.” Most professional surveys, books, guidelines, and support help available today are in the form of how-to-do-its. This how-to approach has popular appeal, but research tells us that starting with implementation can often lead to consequences other than desirable results. Interventions are, after all, only part of the performance improvement story. In this context lies the reality, provided many years ago by Peter Drucker, that “it is more important to do what is right rather than doing it right.” This set of self-assessment instruments goes to the heart of Drucker’s insight.

We offer seven self-assessment instruments—validated by professionals and organizations, including IBM and Hewlett Packard—that provide solid guidance on “what to accomplish” before deciding “how to do it.” This does not discount the application of how-to-do-it guides, but it does encourage individuals and organizations to first ensure that they are headed where they want to end up before selecting how to get there. Thus this series of self-assessment instruments does not conflict with existing how-to-do-it guidance but rather provides a set of complementary assessments that up until now had been largely ignored.


Defining and delivering useful and measurable performance improvement for all organizations and their associates are vital steps toward success. Usually missing, however, are cost-effective ways for organizations to find out where they are in terms of results and consequences—necessary information for deciding where they should be in terms of required skills, knowledge, attitudes, and abilities for defining and delivering success and then proving it. This leads to the final unique aspect of these self-assessment instruments: each of the self-assessments relates to establishing a value chain that aligns external clients and our shared society (Mega) with organizational (Macro) and individual (Micro) contributions, and these with appropriate processes and activities, and then with resources. This alignment of the results to be accomplished at three levels with the processes and resources required to achieve them is the hallmark of an effective self-assessment approach to performance improvement.

The seven self-assessment instruments provided in this book offer guides for you and your organization to define what results and consequences you want to deliver so that you may sensibly define the approaches, tools, and methods you should use to deliver success.

They consist of:

• Strategic Thinking and Planning—developed by Roger Kaufman
• Needs Assessment and Your Organization—developed by Roger Kaufman
• Corporate Culture and Your Organization—developed by Roger Kaufman
• Evaluation and Your Organization—developed by Roger Kaufman
• Performance Improvement Competencies—developed by Ingrid Guerra-López
• Performance Motivation to Change—developed by Doug Leigh
• Organizational Readiness for E-learning—developed by Ryan Watkins

Each of these instruments uses a unique dual response (i.e., “What Is” and “What Should Be”) format with performance-related questions. This format easily provides you with useful data on gaps between current practice and best practice that may be measurably and conveniently noted. Table I.1 identifies how each currently available self-assessment instrument relates to the 10 ISPI principles.
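Before turning to Table I.1, here is a minimal sketch of how a single dual-response item yields a gap score; the item wording, rating scale, and scores are hypothetical and only illustrate the format, not any actual instrument content:

```python
# Minimal sketch of the dual-response format used by these instruments:
# each item is rated twice on the same scale, once for "What Is"
# (current practice) and once for "What Should Be" (desired practice).
# The item wording, scale, and ratings below are hypothetical.

item = {
    "statement": "Planning starts with results for external clients and society.",
    "what_is": 2,          # respondent's rating of current practice
    "what_should_be": 5,   # respondent's rating of desired practice
}

gap = item["what_should_be"] - item["what_is"]
print(f"Gap for this item: {gap}")  # a positive gap marks room for improvement
```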


Table I.1. The Relationship of each available self-assessment instrument and the International Society for Performance Improvement’s Standards (ISPI, 2002). While all integrate, an “X” identifies coverage and linking to more than one standard, and “XX” indicates major focus.

The ten ISPI standards (columns 1 through 10) are: (1) Focus on Results, (2) Take a System Approach, (3) Add Value, (4) Establish Partnerships, (5) Needs Assessment, (6) Performance Analysis, (7) Design to Specification, (8) Selection, Design, & Development, (9) Implementation, and (10) Evaluation & Continual Improvement.

Strategic Thinking and Planning: XX XX X X X X X X XX
Needs Assessment: XX XX X XX X X X XX
Corporate Culture: XX XX XX X X XX XX XX
Evaluation: X X X XX XX
Performance Improvement Competencies: XX XX XX X XX XX XX XX XX XX
Performance Motivation: X X XX
Readiness for E-learning: X X XX XX XX X


Chapter 1
The Basics of Mega Thinking and Planning1—The Rationale for the Seven Self-Assessment Instruments

Introduction

Mega planning places a primary focus on adding value for all stakeholders. It is realistic, practical, and ethical. Defining and then achieving sustained organizational success are possible when they rely on some basic elements:

1. A societal value-added “frame of mind” or paradigm: your perspective about your organization, people, and our world. It focuses on an agreed-upon goal of adding value to all stakeholders.

2. A shared determination and agreement on where to head and why: all people who can and might be impacted by the shared objectives must agree on purposes and results criteria, and pragmatic and basic tools.

This chapter provides the basic concepts for thinking and planning Mega in order to define and deliver value to internal and external partners. The concepts and tools discussed here form the basis of the seven self-assessment instruments in this book, with each assessment defining a particular part of the whole of individual and organizational performance improvement.

The Societal Value-Added Perspective and Frame of Mind

The required frame of mind for Mega thinking and planning—your guiding paradigm—is simple, straightforward, and sensible. It puts a primary concern on adding measurable value for external clients and society using your own job and organization as the vehicle. From this shared societal value-added frame2, everything you and your organization use, do, produce, and deliver is linked to achieve shared and agreed-upon positive societal results. This societal frame of reference, or paradigm, is termed the Mega level of thinking and planning.3 If you are not adding value to our shared society, what assurance do you have that you are not subtracting value? Starting with results at the Mega (societal) level as the central focus, strategic thinking provides the foundation for valued strategic planning.

1 Based in part on Kaufman, R. (2005). Defining and delivering measurable value: Mega thinking and planning primer. Performance Improvement Quarterly, 18(3), 8–16.
2 The process for defining and using Mega relies on the democratic process of all persons who could be impacted by the definition of Mega coming to agreement.
3 In some writings, “social value” formal considerations are limited to adding value to the associates working within the organization and thus might be missing the external social value suggested in Mega thinking and planning (Kaufman, R., 2005).


A central question that each and every organization should ask and answer is:

If your organization is the solution, what’s the problem?

This fundamental proposition is central to thinking and planning strategically. Using a Mega focus represents a shift from the usual focus only on yourself, individual performance improvement, and your organization to one that makes certain you also add value to external clients and society. What follows are three guides, or templates, that will help you define and achieve organizational success. We will begin with an overview of each, and then proceed later in the chapter with more detailed descriptions.

Guide One: The Organizational Elements Model (OEM)4

Table 1.1 defines and links/aligns what any organization uses, does, produces, and delivers with external client and societal value added. For each element, there is an associated level of planning. Note that strategic planning (and thinking) starts with Mega, while tactical planning starts with Macro and operational planning starts with Micro.

Table 1.1. The five levels of results, the levels of planning, and a brief description

Level of Focus | Brief Description
Mega | Results and their consequences for external clients and society (shared vision)
Macro | The results an organization can or does deliver outside of itself
Micro | The building block results that are produced within the organization
Process | The ways, means, activities, procedures, and methods used internally
Input | The human, physical, and financial resources an organization can or does use

These elements are also useful for defining the basic questions every organization must ask and answer, as provided later in this chapter.

Guide Two: Six Critical Success Factors

The Six Critical Success Factors provide a vital framework for this approach to strategic thinking and for Mega planning. Unlike conventional “success factors,” these are factors for successfully accomplishing desired results, not just for the things that an organization must get done to meet its mission. These are for Mega planning, regardless of the organization.

4 Based on Kaufman, 2006a, and Kaufman, Oakley-Browne, Watkins, and Leigh, 2003.


Let’s look at each of the Six Critical Success Factors.

Critical Success Factor 1: Use new and wider boundaries for thinking, planning, doing, and evaluating/continuous improvement. Move out of today’s zones. There is evidence just about everywhere we look that success tomorrow is not a linear projection (or a straight-line function) of success yesterday and today. For instance, a successful car manufacturer that squanders its dominant client base by shoving unacceptable vehicles into the market is likely to go out of business, just as is an airline that focuses on shareholder value and ignores customer value or safety. An increasing number of credible authors (Alvin Toffler and Peter Drucker) tell us that the past is, at best, prologue and not a harbinger of what the future will be. In fact, old paradigms can be so deceptive that Tom Peters suggests that “organizational forgetting” must become conventional organizational culture for success now and in the future.5

Times change, and anyone who doesn’t also change appropriately is risking failure. It is vital to use new and wider boundaries for thinking, planning, doing, and delivering. Doing so will require that you get out of current comfort zones to change the very foundations of the decision making that has made you successful in the past. But not doing so will likely deliver failure.6

Critical Success Factor 2: Differentiate between ends and means. Focus on “what” (Mega/Outcomes, Macro/Outputs, Micro/Products) before “how.” People are “doing-types.” We want to swing right into action, and in so doing, we usually jump right into solutions—means—before we know the results—ends—we must deliver. Writing and using measurable performance objectives is something upon which almost all performance improvement authors agree. Objectives correctly focus on ends and not methods, means, or resources.7 Ends—what you accomplish—sensibly should be identified and defined before you select means—how you will accomplish results—to get from where you are to your desired destinations. If we don’t select our solutions, methods, resources, and interventions on the basis of what results we are to achieve, what do we have in mind to make the selections of means, resources, or activities? Focusing on means, processes, and activities is usually a comfortable starting place for conventional performance improvement initiatives. Starting with means for any organization and performance improvement initiative would be as if you were provided process tools and techniques without a clear map that identified a definite destination (along with a statement of why you want to get to the destination in the first place). This is obviously risky.

5 Peters, 1997.
6 Again, in Peters, 1997, he states that it is easier to kill an organization than it is to change it.
7 Bob Mager set the original standard for measurable objectives. Later, Tom Gilbert made the important distinction between behavior and performance (between actions and consequences). Recently, some “Constructivists” have had objections to writing objectives because they claim doing so can cut down on creativity and impose the planner’s values on the clients. This view, we believe, is not useful. For a detailed discussion on the topic of Constructivism, please see the analysis of philosophy professor David Gruender (Gruender, C. D. [1996]. Constructivism and learning: A philosophical appraisal. Educational Technology, 36[3], 21–29).


An additional risk of starting a performance improvement journey with means and processes is that there would be no way of knowing whether your trip is taking you toward a useful destination, nor would there be criteria for telling you if you were making progress. It is vital that successful planning focus first on results—useful performance in measurable terms—for setting its purposes, measuring progress, and providing continuous improvement toward the important results, and for determining what to keep, what to fix, and what to abandon. It is vital to focus on useful ends before deciding “how” to get things done. It also sets the stage for other related Critical Success Factors, such as CSF 3 (Use and align all three levels of results) through application of the Organizational Elements Model (OEM) and CSF 4 (Prepare objectives that have indicators of how you will know when you have arrived). Both the OEM and performance objectives rely on a results focus because they define what every organization uses, does, produces, and delivers, and the consequences of that for external clients and society.

Critical Success Factor 3: Use and align all three levels of planning and results. As we noted in the previous Critical Success Factor, it is vital to prepare all objectives so that they focus only on ends and never on means or resources. There are three levels of results, shown in Table 1.2, that are important to target and link:

Table 1.2. The levels of planning and results that should be linked during planning, doing, and evaluation/continuous improvement, and the three types of planning

Primary Client and Beneficiary | Name for the Level of Planning | Name for the Level of Result8 | Type of Planning
External clients and society | Mega | Outcomes | Strategic
The organization itself | Macro | Outputs | Tactical
Internal clients: individuals and small groups | Micro | Products | Operational

8 The distinction between the three levels of results in terms of who is the primary client and beneficiary is very important. Suffice it to say when one calls every result an “Outcome,” it tends to blur the differences among the three types of results.


There are three levels of planning and results, based on who is to be the primary client and beneficiary of what gets planned, designed, and delivered. Associated with the three levels of planning are three levels of results (Outcomes, Outputs, Products).9 Strategic planning targets society and external clients, tactical planning targets the organization itself, and operational planning targets individuals and small groups. Use all three to ensure that the results you accomplish lead to positive societal consequences.

Critical Success Factor 4: Prepare objectives—including those for the Ideal Vision and Mission Objectives—that have indicators of how you will know when you have arrived (mission statement plus success criteria). It is vital to state in precise, measurable, and rigorous terms where you are headed and how to tell when you have arrived (i.e., what results you want to achieve and how you will measure their accomplishment).10 Statements of objectives must be in performance terms so that one can plan how best to get there, how to measure progress toward the end, and how to note progress toward it.11 Objectives at all levels of planning, activity, and results are absolutely vital. And everything—from leadership and management to data entry and strategic direction setting—is measurable. Don’t kid yourself into thinking you can dismiss important results as being “intangible” or “non-measurable.” If you can name it, then you can measure it. It is only sensible and rational therefore to make a commitment to measurable purposes and destinations. Organizations throughout the world are increasingly focusing on Mega-level results.12

A simple mnemonic device for developing performance objectives is denoted by the acronym PQRS (Leigh, 2004). First, performance requirements should specify the performer or performers who are expected to achieve the desired result. Next, relevant qualifying criteria should be laid out, typically indicating the time frame over which a result should be accomplished. Lastly, the results to be accomplished should be stated, along with the standards against which the value of a performance will be judged.13

Critical Success Factor 5: Define need as a gap between current and desired results (not as insufficient levels of resources, means, or methods). Conventional English-language usage would have us employ the common word need as a verb (or in a verb sense) to identify means, methods, activities, and actions and/or resources we desire or intend to use.14

9 It is interesting and curious that in the popular literature, all results tend to be called “Outcomes.” This failure to distinguish among three levels of results blurs the importance of identifying and linking all three levels in planning, doing, and evaluating/continuous improvement.
10 An important contribution of strategic planning at the Mega level is that objectives can be linked to justifiable purpose. Not only should one have objectives that state “where you are headed and how you will know when you have arrived,” they should also be justified on the basis of “why you want to get to where you are headed.” While it is true that objectives only deal with measurable destinations, useful strategic planning adds to the reasons why objectives should be attained.
11 Note that this Critical Success Factor (CSF) also relates to CSF 2.
12 Kaufman, Watkins, Triner, and Stith, 1998, Summer.
13 Another compatible approach to setting objectives is provided in Kaufman, Oakley-Browne, Watkins, and Leigh (2003), where they suggest expanding the attributes of objectives with the acronym SMARTER.


As a consequence, terms such as need to, need for, needing, and needed are common and conventional, and yet they run counter to useful planning. These terms obligate you to a method or means (e.g., training, more computers, bigger budgets) before deciding what results are to be accomplished.

As hard as it is to change our own behavior (and most of us who want others to change seem to resist it the most ourselves!), it is central to useful planning to distinguish between ends and means (as noted in Critical Success Factor 2). To do reasonable and justifiable planning, we have to (1) focus on ends and not means, and thus (2) use need as a noun. Need, for the sake of useful and successful planning, is only used as a noun (i.e., as a gap between current and desired results).

If you use need as a noun, you will be able not only to justify useful objectives, but also to justify what you do and deliver on the basis of costs-consequences analysis. You will be able to justify everything you (or your organization) uses, does, produces, and delivers. As a result, it is the only sensible way to demonstrate value added.

Critical Success Factor 6: Use an Ideal Vision as the underlying basis for all planning and doing (don’t be limited to your own organization). Critical Success Factor 6 represents another area that requires some change from the conventional ways of doing planning. An Ideal Vision is never prepared for an organization, but rather identifies the kind of world we want to help create for tomorrow’s child. From this societal-linked Ideal Vision, each organization can identify what part or parts of the Ideal Vision it commits to deliver and move ever closer toward. If we base all planning and doing on an Ideal Vision of the kind of society we want for future generations, we can achieve “strategic alignment” for what we use, do, produce, deliver, and the external payoffs for our Outputs.

Guide Three: A Six-Step Problem-Solving Model

Figure 1.1 provides a function model that includes (1.0) identifying problems based on needs, (2.0) determining detailed solution requirements and identifying (but not yet selecting) solution alternatives, (3.0) selecting solutions from among alternatives, (4.0) implementation, (5.0) evaluation, and (6.0) continuous improvement (at each and every step). Each time you want to identify problems and opportunities and systematically get from current results and consequences to desired ones, the six-step process provides a guide for decision making.

14 Because most dictionaries provide common usage and not necessarily correct usage, they note that need is used as a noun as well as a verb. This dual conventional usage doesn’t mean that it is useful. Much of this book depends on a shift in paradigms about need. The shift is to use it only as a noun—never as a verb or in a verb sense.


Figure 1.1. The six-step problem-solving process: A process for identifying and resolving problems (and identifying opportunities)

[Figure: five functions in sequence (1.0 Needs Assessed, 2.0 Needs Analyzed, 3.0 Means Selected, 4.0 Implemented, 5.0 Evaluated), with a 6.0 Revise as Required feedback loop applied at every step.]

Sources: Kaufman, 1992, 1998, 2000, 2006. Further development of Mega thinking and planning may be found in:

Brethower, D. (2006). Performance analysis: knowing what to do and how. Amherst, MA: HRD Press, Inc.

Gerson, R. F. (2006). Achieving high performance: a research-based practical approach. Amherst, MA: HRD Press, Inc.

Guerra-López, I. (2007). Evaluating impact: evaluation and continual improvement for performance improvement practitioners. Amherst, MA: HRD Press, Inc.

Kaufman, R. (2006a). 30 seconds that can change your life: a decision-making guide for those who refuse to be mediocre. Amherst, MA: HRD Press, Inc.

Kaufman, R. (2006b). Change, choices, and consequences: a guide to Mega thinking and planning. Amherst, MA: HRD Press, Inc.

Kaufman, R., Guerra, I., & Platt, W. A. (2006). Practical evaluation for educators: finding what works and what doesn’t. Thousand Oaks, CA: Corwin Press/Sage.

Kaufman, R., Oakley-Browne, H., Watkins, R., & Leigh, D. (2003). Strategic planning for success: aligning people, performance, and payoffs. San Francisco, CA: Jossey-Bass/Pfeiffer.

Watkins, R. (2006). Performance by design: the systematic selection, design, and development of performance technologies that produce useful results. Amherst, MA: HRD Press, Inc.

New Realities for Organizational Success

The successful application of Mega planning builds on the fact that yesterday’s methods and results are often not appropriate for tomorrow. Just as the business models of the industrial age apply less and less in the new knowledge-based economy, most of the methods that made you successful in the past will have to be updated in order to ensure future success. Most planning experts agree that the past is only prologue, and tomorrow must be crafted through new patterns of perspectives, tools, and results. The tools and concepts for meeting the new realities of society, organizations, and people are linked to each of the Six Critical Success Factors.


The details and how-to’s for each of the three guides are also provided in the referenced sources at the end of this chapter. The three basic “guides” or templates should be considered as forming an integrated set of tools—like a fabric—rather than each one standing on its own.15

Mega Planning Framework

A Mega planning framework has three phases: scoping, planning, and implementation/continuous improvement. From this framework, specific tools and methods are provided to do Mega planning. It is not complex, really. If you simply use the three guides, you will be able to put it all together. When doing Mega planning, you and your associates will ask and answer the questions shown in Table 1.3. The answers to these questions provide boundaries that help define the scope of your strategic planning and organizational decision making.

Table 1.3. The basic questions every organization must ask and answer

Each question is answered No or Yes twice: once as a Self-Assessment and once for your Organizational Partners.

1. Do you commit to deliver organizational results that add value for all external clients and society? (Mega)
2. Do you commit to deliver organizational results that have the measurable quality required by your external clients? (Macro)
3. Do you commit to produce internal results—including your job and direct responsibilities—that have the measurable quality required by your internal partners? (Micro)
4. Do you commit to having efficient internal processes?
5. Do you commit to acquire and use quality—appropriate human capital, information capital, and physical resources? (Inputs)
6. Do you commit to evaluate/determine:
   6.1 How well you deliver products, activities, methods, and procedures that have positive value and worth? (Process Performance)
   6.2 Whether the results defined by your objectives in measurable terms are achieved? (Evaluation/Continuous Improvement)

15 Of course, each one is valuable. But used together, they are even more powerful.


A “yes” answer to all questions will lead you toward Mega planning and allow you to prove that you have added value—something that is becoming increasingly important in our global economy and society. These questions relate to Guide One, the Organizational Elements Model. It defines each organizational element in terms of its label and the question each addresses. If you use and do all of these, you are better able to align everything you use, do, produce, and deliver to adding measurable value to yourself, to your organization, and to external clients and society.

Mega planning is proactive by its very nature. It requires that you begin all planning and decision making with a societal perspective. This allows you to work with others to define and achieve success. Many approaches to organizational improvement wait for problems to happen and then scramble to respond. There is a temptation to react to problems and never take the time to plan so that surprises are fewer and success is defined—before problems spring up—and then systematically achieved. Mega thinking and planning is about defining a shared success, achieving it, and being able to prove it.

Mega thinking and planning is a focus not on one’s organization alone but on society now and in the future. It is about adding measurable value to all stakeholders. Mega thinking and planning has been offered for many years, perhaps first formally with Kaufman’s Educational System Planning (1972), further developed in Kaufman and English (1979), and continuing through 2006. In one form or another, using a societal frame for planning and doing has shown up in the works of respected thinkers, including Senge (1990) and more recently Prahalad (2005). The migration continues from individual performance as the preferred unit of analysis for performance improvement to one that includes a first consideration of society and external stakeholders. It is, after all, responsible, responsive, and ethical to add value to all.

What does all of this have to do with this book?

Each of the seven self-assessments can help you define what basic dimensions of performance improvement you wish to address. Each builds on the concepts in this overview, including a focus on ends rather than means (each assessment has performance statements for your responses), and each puts into practice (and “models”) the concepts and discipline of strategic thinking and planning that begin with a desire to accomplish valuable results that benefit our shared society.

Related References

Barker, J. A. (2001). The new business of paradigms (classic ed.). St. Paul, MN: Star Thrower Distribution. Videocassette.

Brethower, D. (2006). Performance analysis: knowing what to do and how. Amherst, MA: HRD Press, Inc.

Brethower, D. M. (2005, Feb.). Yes we can: a rejoinder to Don Winiecki’s rejoinder about saving the world with HPT. Performance Improvement, 44(2), 19–24.

Carleton, R. (in preparation). Implementation and management of solutions. Amherst, MA: HRD Press.

Clark, R. E., & Estes, F. (2002). Turning research into results: a guide to selecting the right performance solutions. Atlanta, GA: CEP Press.


Gerson, R. F. (2006). Achieving high performance: a research-based practical approach. Amherst, MA: HRD Press, Inc.

Guerra, I. (2006). Human performance technology: standards and ethics. In J. Pershing (Ed.), Handbook of human performance technology.

Guerra, I. (2005). Outcome-based vocational rehabilitation: measuring valuable results. Performance Improvement Quarterly, 18(3), 65–75.

Guerra-López, I. (2007). Evaluating impact: evaluation and continual improvement for performance improvement practitioners. Amherst, MA: HRD Press, Inc.

Guerra, I., Bernardez, M., Jones, M., & Zidan, S. (2005). Government workers adding societal value: The Ohio Workforce Development Program. Performance Improvement Quarterly, 18(3), 76–99.

Guerra, I., & Rodriguez, G. (2005). Social responsibility and educational planning. Performance Improvement Quarterly, 18(3), 56–64.

International Society for Performance Improvement. (2002). ISPI's Performance Technology Standards. Retrieved April 2, 2007, from http://www.ispi.org/hpt_institute/Standards.pdf

Kaufman, R. (2006a). Change, choices, and consequences: A guide to Mega thinking and planning. Amherst, MA: HRD Press, Inc.

Kaufman, R. (2006b). 30 seconds that can change your life: a decision-making guide for those who refuse to be mediocre. Amherst, MA: HRD Press, Inc.

Kaufman, R. (2005, May–June). Choosing success: the rationale for thinking, planning, and doing Mega. Educational Technology, 45(2), 59–61.

Kaufman, R. (2004, October). Mega as the basis for useful planning and thinking. Performance Improvement, 43(9), 35–39.

Kaufman, R. (2002, May–June). What trainers and performance improvement specialists can learn from tragedy: lessons from September 11, 2001. Educational Technology, 42(3), 63–64.

Kaufman, R. (2000). Mega planning: practical tools for organizational success. Thousand Oaks, CA: Sage Publications.

Kaufman, R. (1998). Strategic thinking: a guide to identifying and solving problems (revised). Arlington, VA & Washington, DC: Jointly published by the American Society for Training and Development and the International Society for Performance Improvement. Also, published in Spanish: El Pensamiento Estrategico: Una Guia Para Identificar y Resolver los Problemas, Madrid, Editorial Centros de Estudios Ramon Areces, S. A.

Kaufman, R. A. (1972). Educational system planning. Englewood Cliffs, NJ: Prentice-Hall. (Also Planificacion de systemas educativos [translation of Educational system planning]. Mexico City: Editorial Trillas, S.A., 1973).

Kaufman, R., & English, F. W. (1979). Needs assessment: concept and application. Englewood Cliffs, NJ: Educational Technology Publications.

Kaufman, R., & Forbes, R. (2002). Does your organization contribute to society? 2002 Team and Organization Development Sourcebook. New York: McGraw-Hill, 213–224.


Kaufman, R., Guerra, I., & Platt, W. (2006). Practical evaluation for educators: finding what works and what doesn’t. Thousand Oaks, CA: Corwin Press.

Kaufman, R., & Lick, D. (2000–2001, Winter). Change creation and change management: partners in human performance improvement. Performance in Practice: 8–9.

Kaufman, R., Oakley-Browne, H., Watkins, R., & Leigh, D. (2003). Strategic planning for success: aligning people, performance, and payoffs. San Francisco, CA: Jossey-Bass/Pfeiffer.

Kaufman, R., Stith, M., Triner, D., & Watkins, R. (1998). The changing corporate mind: organizations, visions, mission, purposes, and indicators on the move toward societal payoffs. Performance Improvement Quarterly, 11(3), 32–44.

Kaufman, R., & Unger, Z. (2003, August). Evaluation plus: beyond conventional evaluation. Performance Improvement, 42(7), 5–8.

Kaufman, R., Watkins, R., & Leigh, D. (2001). Useful educational results: defining, prioritizing, and accomplishing. Lancaster, PA: Proactive Publishers.

Lagace, M. (2005, January). How to put meaning back into leading. Working Knowledge. Cambridge, MA: Harvard Business School.

Leigh, D. (2004). Conducting needs assessments: A step-by-step approach. In A. R. Roberts & K. R. Yeager (Eds.) Evidence-based practice manual: research and outcome measures in health and human services. New York: Oxford University Press.

Peters, T. (1997). The circle of innovation: you can’t shrink your way to greatness. New York: Alfred A. Knopf.

Peters, T. J., & Waterman, R. H., Jr. (1982). In search of excellence: lessons from America's best-run companies. New York: Harper & Row.

Prahalad, C. K. (2005). The fortune at the bottom of the pyramid: eradicating poverty through profits. Upper Saddle River, NJ: Wharton School Publishing/Pearson Education, Inc.

Senge, P. M. (1990). The fifth discipline: the art & practice of the learning organization. New York: Doubleday-Currency.

Watkins, R. (2007). Performance by design: the systematic selection, design, and development of performance technologies that produce useful results. Amherst, MA: HRD Press, Inc.


Chapter 2
Assessment Instruments: What are they and how can they be useful?

In a nutshell, a useful assessment instrument allows us to answer important questions. An assessment instrument could be well implemented, but if it is poorly designed, it amounts to nothing useful and may even be harmful, as it can render unreliable or inaccurate information that could later be cited as “research data.” The utility of data is a function of, among other things, the data collection tools (Guerra-López, 2007). The data collection tools therefore must be the right fit for attaining the required data.

In the case of a questionnaire, which is the type of assessment instrument we illustrate in this book, the type of data desired must be opinions, perceptions, or attitudes about a particular subject. The purpose of each instrument is to learn people’s perceptions about each item within the instrument. It is important to note that the findings of a questionnaire reflect reality according to each individual, which is not independently verifiable. For that reason, you should triangulate people’s perceptions with other forms of data, such as actual performance that can be measured through observations and work products. Whatever data you are after, all methods you select should be focused on answering the “right” question so that useful decisions can be made.

Target population characteristics, such as culture, language, education, past experiences, and gender, are also essential to consider. Whether written questionnaires, group techniques, interviews, or tests are used, one must understand the impact of these characteristics when deriving questions and methods to collect data from individuals. The words in a question can mean many different things to different people based on a myriad of factors. In some instances, those developing the data collection instruments can unconsciously over-rely on their own experiences and sense of “what is.” Such is the case with questions that include colloquialisms that, although well known to one group of people, are completely unfamiliar to others. The results from these questions are often misleading, as the interpretations of these questions can potentially be as numerous as the respondents. Similarly, one approach can be appropriate in a given culture, and perhaps not in others. For instance, in some cultures, it is considered rude to publicly disagree with the position of others. In such cases, it may be difficult to use a standard group technique to elicit honest responses from a group.

Other important factors to consider when selecting data collection instruments are the relative costs, time, and expertise required to develop and/or obtain them. Once a range of suitable alternatives has been identified based on the type of data required and their source, the ultimate selection should be based on the relative feasibility of each alternative. While a face-to-face interview might be the best choice in terms of the data the evaluator is after on a given project, the sheer number of those to be interviewed might put the time and money required beyond the scope of the project.


Questionnaires16

Considerations for Design and Development

One of the most popular data collection tools is the questionnaire. As a general guideline for increasing the usefulness of questionnaires, questions posed must be geared toward informed opinions such as those based on the target group’s personal experience, knowledge, background, and vantage point for observation. Questionnaires should avoid items that lead the respondent to speculate about the information being requested, nor should a questionnaire be used to confirm or shape a pre-existing bias. For instance, you would not want to ask in a questionnaire “If you were to buy this car, how would your spouse feel about your decision?” The respondent could only speculate in their answer to this question. Equally, you would not want to ask a leading question such as “Do all of the wonderful safety features included with this car make you feel safer?”

Perhaps no questionnaire can be regarded as perfect or ideal for soliciting all the information required, and in fact, most have inherent advantages as well as flaws (Rea and Parker, 1997). However, there are factors, including professional experience and judgment, that may help secure those advantages and reduce the effects of the inherent flaws of questionnaires. In developing the self-assessments included in this book, the authors have strived to overcome many of these challenges.

Another advantage of using questionnaires, such as those provided in this book, is that they can be completed by respondents at their own convenience and at their own pace. Though a deadline for completion should be given to respondents, they still have sufficient time to carefully reflect, elaborate, and if appropriate, verify their responses. Of course, the drawback here is that mail-out or online questionnaires can require significantly more time to administer than other methods. The sooner you get a response, the more likely it will be complete.

Perhaps one of the most important advantages is that of providing the possibility of anonymity.17 Questionnaires can be administered in a way such that responses are not traced back to individual respondents. Explicitly communicating this to potential respondents tends to increase the chances for their cooperation on at least two levels: (1) completing the survey to begin with and (2) being more forthcoming and honest in their responses. However, even though guaranteed anonymity may increase the response rate, the overall response rate for questionnaires is usually still lower than for other methods.

When responses are low, follow-ups, oversampling, respondent replacements, and non-respondent studies can contribute toward a more representative, random sample, which is critical for generalization of findings. Still, there will usually be some bias in the sample due to self-selection; some people, for their own reasons, might not respond to a questionnaire. But a representative sample is a must.

16 Based on Guerra-López, 2007.
17 Again, there are different opinions on anonymity. Some think it vital; others suggest that people should not hide their observations, thinking, and suggestions. You pick which option based on the environment in which you are using a questionnaire.


There are a number of characteristics across which respondents and non-respondents may differ, and these differences can affect the findings. You want to know where people agree and where they do not. This is another important issue to acknowledge when interpreting and presenting data collected through questionnaires.

What data should be collected?

So exactly what data is collected with questionnaires? How does one determine what questions to ask? The fundamental source of information for the items that will be included is the set of results, indicators, and related questions you want answered. The self-assessments in this book are based on the authors' experience in defining and delivering organizational success, and they provide a baseline for each area. They have been developed to provide useful information for most organizations. You may be tempted to customize some of the questions or add new questions that address concerns unique to your organization. The guides described in Chapter 1 can be valuable aids for tailoring the instruments to your organization and its culture. Just remember to focus on ends (rather than means) and always maintain societal results as your primary guide for making decisions. The instruments provided are based on key issues to consider in the design, development, and/or selection of useful questionnaires. The important variables considered in the assessment items in this book may be reviewed in Guerra-López (2007).18

Questionnaire Structure

Questionnaire respondents are not only sensitive to the language used in each question, but also to the order in which the questions are asked. Keep in mind that each question can become the context for the next. Thus, poorly structured questionnaires may not only confuse respondents and cause them to provide inaccurate responses, but may also lead them to abandon the questionnaire altogether. A well-structured questionnaire should begin with straightforward yet interesting questions that motivate the respondent to continue. As with any relationship, it takes time for an individual to feel comfortable sharing sensitive information; therefore, sensitive items should be saved for later in the questionnaire. Questions that focus on the same specific issue should be presented together to maximize the respondent's reflection and recall. One way for both the questionnaire designer and the respondent to get a clear picture is to cluster specific items around different categories.

18 For a more advanced analysis of developing and testing items within a questionnaire, see DeVellis, R. F. (2003). Scale development: theory and applications. Thousand Oaks, CA: Sage Publications.


Length

Simplicity is key. Nobody wants to complete a long and complicated questionnaire. The questionnaire should include exactly what is required—nothing more, nothing less. Only relevant indicators should form the basis of a questionnaire. While there may be plenty of interesting information that could be collected through the questionnaire, if it is not central to the indicators being investigated, it will only be a distraction—both for the evaluators and the respondents. In considering length, the questionnaire designer should think not only about the actual length of the questionnaire, but also about the length of time the respondent will invest in completing it. As a general rule, the entire questionnaire should take no more than 30 minutes to complete, and ideally about half that long.

Suggested Data Analysis Approach for These Instruments

This section describes the suggested analysis approach for the instruments presented in this book, or for any instrument that uses the same dual ("What Is"/"What Should Be") measurement scale. Patterns are of particular importance. Review the responses and note any patterns that emerge. Gaps between "What Is" and "What Should Be" for each section and item should be estimated, with gaps over 1½ points meriting special attention. You might also consider prioritizing some of these gaps based on magnitude, importance, urgency, and/or other prioritization criteria that might be particularly relevant to you and your organization. Below are four analysis criteria that are particularly worth exploring, but you may come up with others that are meaningful for you and your organization.

Analysis One: Discrepancy. For each question on a self-assessment, a gap analysis should be performed by subtracting the value assigned to the "What Is" (WI) column from the value assigned to the "What Should Be" (WSB) column (see Figure 2.1). The results of this analysis will identify discrepancies between the current and desired performance for each variable of the assessment. The size of the gap can provide valuable information in determining the perceived acuteness of the need or the extent to which opportunities can be capitalized upon. The results of this analysis are, however, necessary rather than sufficient for quality decision making. Alone they only provide isolated values (data points) that have to be put into context through their relationships with other analyses described below.
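If you tally the responses in a spreadsheet or a short script, this gap calculation is easy to automate. The following Python sketch is only an illustration: the item labels and ratings are hypothetical stand-ins, not output from any actual administration of these instruments; the 1½-point attention threshold is the one suggested above.

```python
# Minimal sketch of Analysis One (Discrepancy): Gap = "What Should Be" - "What Is".
# The items and ratings below are hypothetical examples, not real survey data.

responses = {
    "Item 1": {"what_is": 3, "what_should_be": 5},
    "Item 2": {"what_is": 4, "what_should_be": 4},
    "Item 3": {"what_is": 2, "what_should_be": 5},
}

ATTENTION_THRESHOLD = 1.5  # gaps over 1.5 points merit special attention

for item, scores in responses.items():
    gap = scores["what_should_be"] - scores["what_is"]
    flag = "<- review" if gap > ATTENTION_THRESHOLD else ""
    print(f"{item}: WI={scores['what_is']}, WSB={scores['what_should_be']}, Gap={gap:+d} {flag}")
```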


Figure 2.1. Two-response-column format from an online self-assessment (sample provided by E-valuate-IT; visit www.e-valuate-it.com/instruments/RKA for details of online instrument services)


Analysis Two: Direction. For each question, the positive or negative value of the gap should be identified to differentiate needs (when WSB is greater than WI) from opportunities (when WI is greater than WSB).

• Positive discrepancies between WSB and WI (for example, WSB = 5, WI = 3, Gap = +2) identify a need.

• Negative discrepancies between WSB and WI (for example, WSB = 3, WI = 4, Gap = -1) identify an opportunity.

The distinction between needs and opportunities provides a context for discrepancy data, which by itself only illustrates the size of the gap between "What Should Be" and "What Is." Based on the direction of the discrepancy, decision makers can consider which gaps illustrate needs that have the potential to be addressed through organizational efforts, and which identify opportunities that the organization may want to leverage (or maintain) in order to ensure future success.
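The direction rule can be expressed as a one-line classification. Here is a minimal Python sketch, assuming the gaps have already been computed as WSB minus WI; the example gap values are hypothetical.

```python
# Minimal sketch of Analysis Two (Direction): positive gaps are needs,
# negative gaps are opportunities. Gap values here are hypothetical.

def classify_gap(gap: int) -> str:
    """Label a WSB - WI gap as a need, an opportunity, or no gap."""
    if gap > 0:
        return "need"          # What Should Be exceeds What Is
    if gap < 0:
        return "opportunity"   # What Is exceeds What Should Be
    return "no gap"

example_gaps = {"Item 1": 2, "Item 2": -1, "Item 3": 0}
for item, gap in example_gaps.items():
    print(f"{item}: gap {gap:+d} -> {classify_gap(gap)}")
```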

Analysis Three: Position. The position analysis illustrates the relative importance or priority of discrepancies from the perspective of the respondents. While many gaps between "What Should Be" and "What Is" may have equivalent discrepancies and be in the same direction, the position of the discrepancy on the Likert scale of the instrument can demonstrate the relative priority of the discrepancy in relation to other gaps.



For example, two needs may both show a discrepancy of +3, but one reflects a gap between WSB = 5 and WI = 2 while the other reflects WSB = 4 and WI = 1. Interpreted in relation to one another, these discrepancies indicate a perceived prioritization of the first need over the second. This information can be valuable in selecting which discrepancies are addressed when resources are limited. Together, the three analyses (discrepancy, direction, and position) offer valuable data for identifying, prioritizing, and selecting performance improvement efforts across the complete system being assessed.
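The position comparison in the example above can also be scripted: among gaps of equal size and direction, the one sitting higher on the response scale is ranked first. The Python sketch below uses the two hypothetical needs from the example; it is an illustration, not part of the instruments.

```python
# Minimal sketch of Analysis Three (Position): among needs with equal gaps,
# the one positioned higher on the response scale is read as the higher priority.
# The two needs below mirror the worked example in the text (both gaps = +3).

needs = [
    {"item": "Need A", "what_is": 2, "what_should_be": 5},
    {"item": "Need B", "what_is": 1, "what_should_be": 4},
]

# Sort first by gap size, then by the position of "What Should Be" on the scale.
ranked = sorted(
    needs,
    key=lambda n: (n["what_should_be"] - n["what_is"], n["what_should_be"]),
    reverse=True,
)

for rank, n in enumerate(ranked, start=1):
    gap = n["what_should_be"] - n["what_is"]
    print(f"{rank}. {n['item']}: WI={n['what_is']}, WSB={n['what_should_be']}, Gap=+{gap}")
```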

Analysis Four: Demographic Differences (optional). Organizations may want to view the results of the self-assessment by demographic differences (e.g., division, location, position type, years of experience). The results can be broken out by demographic variables if items asking for the desired categories are added to the instrument. If your organization has collected data regarding the demographics of those completing the self-assessment, the analyses for discrepancy, direction, and position should be completed for each demographic group on a section, subsection, and/or item basis, depending on the level of information required for decision making.
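If demographic items are included, a data-frame library makes this breakdown straightforward. The sketch below assumes, purely for illustration, that responses sit in a pandas DataFrame with hypothetical columns named division, item, what_is, and what_should_be; it computes the mean gap per item within each group.

```python
# Minimal sketch of Analysis Four (Demographic Differences), assuming responses
# have been loaded into a pandas DataFrame with hypothetical column names.
import pandas as pd

df = pd.DataFrame(
    {
        "division": ["Sales", "Sales", "Operations", "Operations"],
        "item": ["Item 1", "Item 2", "Item 1", "Item 2"],
        "what_is": [3, 4, 2, 3],
        "what_should_be": [5, 4, 5, 5],
    }
)

df["gap"] = df["what_should_be"] - df["what_is"]

# Mean gap per item within each demographic group.
by_group = df.groupby(["division", "item"])["gap"].mean()
print(by_group)
```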

Displaying Response Data

A visual representation such as a matrix and/or a bar chart is suggested for illustrating the results. If you implement an instrument for a group of people, you may also want to report the percentage of people who selected each response option, giving you insight into where the majority of the respondents' perceptions lie. Additionally, you can plot the gaps between the median scores for "What Is" and "What Should Be." While using means, or mathematical averages, is not strictly a proper manipulation of scores on this kind of scale, they can be presented, along with their gaps, as a point of comparison for those who are used to interpreting data in this fashion. A graphic format for reporting that was jointly derived by Roger Kaufman & Associates and E-valuate-IT19 is presented in Figure 2.2. By displaying your results in this way, you and your associates can quickly scan and see both gaps and patterns.

19 This assessment instrument is available online at www.e-valuate-it.com/instruments/RKA for groups and may also be customized for your particular organization. An associated analysis service is also available.
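As one possible way to produce such a display yourself, the Python sketch below computes the percentage of respondents choosing each option and plots median gaps as a bar chart. The response data and item names are hypothetical, and this is not the E-valuate-IT reporting format shown in Figure 2.2.

```python
# Minimal sketch of one way to display response data: the share of respondents
# choosing each option and a bar chart of median WSB - WI gaps per item.
# The response matrix below is hypothetical.
import statistics
import matplotlib.pyplot as plt

# what_is[item] and what_should_be[item] hold one rating per respondent (1-5 scale).
what_is = {"Item 1": [3, 2, 4, 3], "Item 2": [4, 4, 5, 3]}
what_should_be = {"Item 1": [5, 4, 5, 5], "Item 2": [4, 5, 5, 4]}

for item, ratings in what_is.items():
    shares = {option: 100 * ratings.count(option) / len(ratings) for option in range(1, 6)}
    print(item, "What Is % by option:", shares)

items = list(what_is)
median_gaps = [
    statistics.median(what_should_be[i]) - statistics.median(what_is[i]) for i in items
]

plt.bar(items, median_gaps)
plt.ylabel("Median gap (What Should Be - What Is)")
plt.title("Gap magnitude by question")
plt.show()
```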


Figure 2.2: Graphic format for reporting gaps and patterns

[Figure 2.2 shows two panels for a respondent group, "Gap magnitude by question" and "Gap range by question." Gaps are plotted as Mean [WHAT SHOULD BE] – Mean [WHAT IS], using the response options 1 – Rarely, if ever; 2 – Not usually; 3 – Sometimes; 4 – Frequently; 5 – Consistently.]


Interpretation and Required Action

With the data analyzed and patterns apparent, you will want to interpret what all this means. Depending on who is involved, the data could mean various things to various people, and not all interpretations are equally viable. Be sure to consider all viable interpretations, verifying them with additional relevant and valid information.

You may also want to collect information on why these gaps exist (via a causal or SWOT analysis) so that the solutions and courses of action you consider have a high probability of closing those gaps and, in turn, yielding the consequences you want and require.

Related References

Guerra-López, I. (2007). Evaluating impact: evaluation and continual improvement for performance improvement practitioners. Amherst, MA: HRD Press, Inc.

Rea, L. M., & Parker, R. A. (1997). Designing and conducting survey research: A comprehensive guide (2nd ed.). San Francisco, CA: Jossey-Bass Publishers.


Chapter 3
Strategic Thinking and Planning—How Your Organization Goes About It (and Finding out What it Might Want to Change)

Roger Kaufman, PhD

Planning is an alternative to relying on good luck. Strategic planning (and strategic thinking—the way we think when we want to create our own future) is a proactive approach. It is creating the kind of world we want for our children and grandchildren.

When using the following assessment instrument, we ask you to rate, on two dimensions of "What Is" and "What Should Be," how your organization views and goes about strategic planning. The following questions identify the most important variables so that you and your organization can calibrate whether you are going to do really useful strategic thinking and planning. What you target in your planning and how precisely you develop your planning criteria will make an important difference in your success.

This is a very specific assessment instrument, and the terms used are chosen carefully, so please do not skim through them. If some terms seem strange to you, please check the Glossary at the back of this book.

This instrument is designed to provide you with information on whether or not you and your organization are really doing strategic planning. Most organizations, in spite of the label, do not do strategic planning but rather do tactical planning (considering and choosing methods, means, programs, and projects) or operational planning (making sure that what is planned is kept on target). All three levels of planning are important. The most effective planning starts with strategic, or Mega—the societal contribution we make using our organization as the vehicle. Thus, this instrument allows you to determine the extent to which you are being strategic. Please pay close attention to the words in this assessment instrument.

How to Complete This Survey

There are two dimensions to this survey that allow you to compare the current status of our organization with a preferred status: (1) What Is and (2) What Should Be. In the first column (left side), indicate how you see our organization currently operating. In the second column (right side), indicate how you feel our organization should be operating. For each item, think of the phrase "In our organization, from my experience…" as you consider the current and preferred states of the organization. If there are items about which you are uncertain, give your best response.


Strategic Thinking and Planning Survey (What Is / What Should Be)

Response scale for both columns: 1 – Rarely, if ever; 2 – Not usually; 3 – Sometimes; 4 – Frequently; 5 – Consistently

1. Planning has a focus on creating the future.

2. We define strategic planning as starting with an initial focus on measurable societal value added.

3. We define tactical planning as having a focus on measurable organizational value added.

4. We define operational planning as having a focus on measurable individual and small group value added.

5. We start strategic planning and thinking at the societal value-added level.

6. In our strategic planning, we carefully distinguish among strategic, tactical, and operational planning.

7. We align—link and relate—strategic, tactical, and operational planning.

8. All people in our organization understand the differences and relationships among strategic, tactical, and operational planning.

9. Planning involves, either directly or indirectly, all those people and parties that will be impacted by the results of the strategic plan.

10. Planning always focuses on results.

11. Planning always focuses on the consequences of achieving (or not achieving) results.

12. Planning is proactive.

13. Revisions to the plan are made any time it is required.

14. We use an Ideal Vision—the kind of world we want to help create for tomorrow’s child—as the basis for planning.

15. People who are involved and could be impacted by the plans participate in the planning.


16. We use a formal needs assessment—collecting and prioritizing gaps in results—for making plans.

17. We collect needs at all three levels of planning: strategic, tactical, and operational.

18. When we do strategic planning, we formally consider and collect data for the following Ideal Vision purposes:

a) There will be no losses of life or elimination or reduction of levels of well-being, survival, self-sufficiency, or quality of life, from any source.

b) Eliminate terrorism, war, and/or riot.

c) Eliminate unintended human-caused changes to the environment including permanent destruction of the environment and/or rendering it nonrenewable.

d) Eliminate murder, rape, or crimes of violence, robbery, or destruction of property.

e) Eliminate disabling substance abuse.

f) Eliminate disabling disease.

g) Eliminate starvation and/or malnutrition.

h) Eliminate destructive behavior (including child, partner, spouse, self, elder, others).

i) Eliminate accidents, including transportation, home, and business/workplace.

j) Eliminate discrimination based on irrelevant variables including color, race, age, creed, gender, religion, wealth, national origin, or location.

19. The elements of the Ideal Vision (a through j above) are treated as interrelated, not just each one independently.

20. Planning is done before taking action.


21. Plans are used when making decisions.

22. Plans are revised based on the results that come from implementation of the plan.

23. Plans develop rigorous measurable criteria.

24. Implementation of the plans actually uses the criteria developed by the plan.

25. The needs—gaps between current and desired results—are used to determine success of implementation: implementers compare actual results with intentions based on the needs.

26. Strategic planning is taken seriously by all associates.

27. When crises occur, the strategic plan is used for making decisions.

Note: Online services for this survey are available at www.e-valuate-it.com/instruments/RKA/. Relevant demographic information should also be collected.

Once Data Is Collected and Analyzed

The Meaning of Gaps in Results

This assessment instrument is designed to let you know whether you are doing strategic planning or actually doing tactical or operational planning and only calling it "strategic." Item 1 tells you if you are future oriented (or, perhaps, just planning to make the here-and-now more efficient). Items 2, 3, 4, 5, and 6 probe whether there is an understanding of the differences among strategic, tactical, and operational planning. Discriminating among these is really important, even though some might see such detail as annoying or not useful. Also important here are Items 7 and 8, which will let you know the extent to which you not only understand and distinguish among strategic, tactical, and operational planning but also link and align the plans and criteria for all three.

Items 9 and 15 identify whether all the partners who can and might be affected by what gets planned and delivered are involved. If they are not, you risk resistance from not achieving what Peter Drucker called "transfer of ownership," which indicates that the plan is "our plan" and not "their plan." If people don't "own" a plan, then its implementation and success will likely be seriously limited.

Items 10 and 11 will tell you if you are planning for results or perhaps just for methods, programs, and/or resources. Item 24 will tell you about using the plan's criteria to define and deliver implementation. Items 23 and 25 reveal if rigorous measurable criteria and the needs (not "wants") assessment are used for evaluation and continual improvement. Item 12 tells you if the plan is proactive. Items 13 and 22 tell you if the results of implementing planning are used to revise the plan: continual improvement.

Items 16 and 17 relate to the validity of the data for strategic planning. If there are gaps here, needs are probably not being defined as gaps in results (rather than gaps in resources or methods), and your planning database is likely suspect.

Item 18 and parts a–j will let you know if you are doing real strategic planning—if you are basing your planning on Mega (measurable societal value added) or on a more usual set of existing organizational statements of purpose (which almost always focus on the organization and not on the external value added). Items 14 and 19 will let you know if the elements of Mega and the Ideal Vision are really being integrated or if there is "splintering" going on. See the elements of the Ideal Vision (a–j) as a fabric, not individual strands. And note that your organization might not even intend to deal with all the elements.

Item 20 lets you know if planning really comes before swinging into action. Items 21, 26, and 27 relate to the plan really being used (or just perhaps serving as window dressing or compliance).

When There are No Gaps in Results

If you get few gaps between "What Is" and "What Should Be" and both are at the 1 or 2 level, this might signify that the organization is not yet concerned about strategic thinking and planning and might not even note the important differences between conventional wisdom about the topic and what the emerging concern for societal value added (some call it Corporate Social Responsibility) seriously advises. The responses can educate you about the corporate culture that exists and the requirement for some changes there concerning creating the future.

Using the Results of This Survey

Gaps in results for each item within each section provide a great deal of information: patterns in each section and between sections, and choices concerning what is going well and what should be changed. The demographics collected at the end of the survey can help you determine where help might be given—who might want to change—and where things are on target for organizational success. The data are now available for objective assessment of your organizational culture, the major variables in it, and what to change and what to continue.


There are several critical items for which a high "What Should Be" rating should be obtained if your commitment is to sustain success. These are the cornerstone variables, and the others are part of the success tapestry. They include Items 1, 2, 5, 7, 11, 14, 15, 16, 17, 19, 20, and 27. When there are no high "What Should Be" ratings for these items, this can signal that your organization is not really doing strategic planning but might be doing tactical or operational planning and just using the label "strategic." This assessment instrument can guide you to create a successful strategic thinking and planning organization.

Related References

Barker, J. A. (2001a). The new business of paradigms (classic ed.). St. Paul, MN: Star Thrower Distribution. Videocassette.

Barker, J. A. (2001b). The new business of paradigms (21st century ed.). St. Paul, MN: Star Thrower Distribution. Videocassette.

Bernardez, M. (2005). Achieving business success by developing clients and community: lessons from leading companies, emerging economies and a nine year case study. Performance Improvement Quarterly, 18(3), 37–55.

Clark, R. E., & Estes, F. (2002). Turning research into results: a guide to selecting the right performance solutions. Atlanta, GA: CEP Press.

Davis, I. (2005, May 26). The biggest contract. The Economist (London) 375(8428), 87.

Drucker, P. F. (1993). Post-capitalist society. New York: Harper Business.

Guerra, I. (2003). Key competencies required of performance improvement professionals. Performance Improvement Quarterly, 16(1).

Guerra, I., Bernardez, M., Jones, M., & Zidan, S. (2005). Government workers adding value: The Ohio Workforce Development Program. Performance Improvement Quarterly, 18(3), 76–99.

Guerra-López, I. (2007). Evaluating impact: evaluation and continual improvement for performance improvement practitioners. Amherst, MA: HRD Press, Inc.

Kaufman, R. (2006a). 30 seconds that can change your life: a decision-making guide for those who refuse to be mediocre. Amherst, MA: HRD Press, Inc.

Kaufman, R. (2006b). Change, choices, and consequences: a guide to Mega thinking and planning. Amherst, MA: HRD Press, Inc.

Kaufman, R. (2005). Defining and delivering measurable value: Mega thinking and planning primer. Performance Improvement Quarterly, 18(3), 8–16.

Kaufman, R. (2000). Mega planning: practical tools for organizational success. Thousand Oaks, CA: Sage Publications. Also Planificación Mega: Herramientas practicas para el exito organizacional. (2004). Traducción de Sonia Agut. Universitat Jaume I, Castelló de la Plana, Espana.

Kaufman, R. (1998). Strategic thinking: a guide to identifying and solving problems (revised). Arlington, VA & Washington, DC: Jointly published by the American Society for Training and Development and the International Society for Performance Improvement. Also, published in Spanish: El Pensamiento Estrategico: Una Guia Para Identificar y Resolver los Problemas, Madrid, Editorial Centros de Estudios Ramon Areces, S. A.


Kaufman, R., & Lick, D. (Winter, 2000–2001). Change creation and change management: partners in human performance improvement. Performance in Practice, 8–9.

Kaufman, R., Oakley-Browne, H., Watkins, R., & Leigh, D. (2003). Practical strategic planning: aligning people, performance, and payoffs. San Francisco, CA: Jossey-Bass/ Pfeiffer.

Kaufman, R., Watkins, R., & Leigh, D. (2001). Useful educational results: defining, prioritizing, and accomplishing. Lancaster, PA: Proactive Publishers.

Kaufman, R., Watkins, R., Sims, L., Crispo, N., & Sprague, D. (1997). Costs-consequences analysis. Performance Improvement Quarterly, 10(3), 7–21.

Mager, R. F. (1997). Preparing instructional objectives: a critical tool in the development of effective instruction (3rd ed.). Atlanta, GA: Center for Effective Performance.

Rummler, G. A. (2004). Serious performance consulting: according to Rummler. Silver Spring, MD: International Society for Performance Improvement and the American Society for Training and Development.

Watkins, R., Kaufman, R., & Leigh, D. (2000, April). A performance accomplishment code of professional conduct. Performance Improvement, 35(4), 17–22.

Watkins, R., Leigh, D., & Kaufman, R. (1998, September). Needs assessment: a digest, review, and comparison of needs assessment literature. Performance Improvement, 37(7), 40–53.


Chapter 4
Needs Assessment, Needs Determination, and Your Organization's Success

Roger Kaufman, PhD

This assessment instrument asks questions about needs assessment and using that performance data for finding the direction for your organization. This determination of, and agreement on, where the organization is headed is central to any initiative for performance improvement and delivering useful results. Each organization will be different. These assessment questions and the pattern they provide are designed to help you decide what you might want to change and what you might want to continue relative to where you are headed and why you want to get there.

Needs assessment data are used for finding out what results you should seek and what payoffs you can expect. Needs assessment, at its core, simply identifies where you are in terms of results and consequences and where you should be. Poor or incomplete needs assessments can lead to poor results and contributions.

This assessment instrument asks a lot of you. It uses words and concepts in very specific ways—specific ways that are vital for collecting appropriate data for decision making and determining where your organization should head, why it should get there, and what data are required for selecting the most effective and efficient ways to get from where you are to success. Patience is invited for this instrument. If you skim-read the items, many of the vital, yet subtle, distinctions might be missed. And the terms are used for a reason, not just to look different; this approach is different.

A note on words: Some of the words used in this assessment instrument may seem new to you or may be used differently than you typically use them. Review the list below that describes how these words will be used:

• By society we mean the world in which we all live.

• Management includes supervisors, leaders, or bosses to whom you report in the chain of command. Associates are people with whom you work.

• Resources are dollars, people, equipment, and tools.

• Formally refers to doing something with a shared rigorous definition of the ways and means in which we do something, such as collect data, rather than informally, which can be casual or unstructured.


• Actual performance is when somebody actually does and produces something as contrasted with how we recall processes or procedures that might be employed.

• Clients are those that some result is delivered to. They may be internal or external to your organization.

• Internal clients are those within your organization (also referred to as internal partners).

• External clients are those outside of your organization, such as your immediate clients and the clients of your clients (also referred to as external partners).

• Stakeholders are those people internal and external to the organization who have some personal interest in what gets done and delivered.

As a reminder, a Glossary of terms is provided at the end of this book.

How to Complete This Assessment

There are two dimensions to this assessment instrument that allow you to compare the current status of our organization with a preferred status: (1) What Is and (2) What Should Be. In the first column (left side), indicate how you see our organization currently operating. In the second column (right side), indicate how you feel our organization should be operating. For each item, think of the phrase "In our organization, from my experience…" as you consider the current and preferred states of the organization. If there are items about which you are uncertain, give your best response.


Needs Assessment (What Is / What Should Be)

Response scale for both columns: 1 – Rarely, if ever; 2 – Not usually; 3 – Sometimes; 4 – Frequently; 5 – Consistently

How the Organization Perceives Needs Assessment

1. We formally plan.

2. We do needs assessment.

3. Needs assessments are valued in our organization.

4. We use the data from needs assessment to decide what to do.

5. Our needs assessment looks at the gaps between obtained results and predetermined objectives.

6. Needs assessments are results focused.

7. Needs assessments are activities focused.

8. We do “training needs assessments.”

9. The organization skips formal needs assessments because of time constraints.

10. The organization skips formal needs assessments because of lack of needs assessment capabilities.

11. The organization skips formal needs assessments because of not knowing how to interpret needs assessment data.

12. Management is focused on results accomplished (rather than processes and activities engaged in) when it requests a needs assessment.

13. The organization’s culture is focused on results.

14. Needs assessment is seen as comparing current results against those results that should be accomplished.

15. Needs assessments are done for strategic planning.


16. Needs assessment data are used for project plans.

17. Needs assessments are shared with our internal stakeholders.

18. Needs assessment results are shared with our external stakeholders.

What the Organization Includes When Conducting Needs Assessments

19. Needs assessments are resources focused.

20. Needs assessments focus on both results and processes (including resources).

21. Needs assessments include a focus on the results to be accomplished by individuals.

22. Needs assessments include a focus on the results to be accomplished by small groups.

23. Needs assessments include a focus on the results to be accomplished by individual departments.

24. Needs assessments include a focus on the results to be accomplished by the organization.

25. Needs assessments include a focus on all results to be accomplished by the organization for external clients and society.

26. Needs assessments include a focus on all of the levels of results—individual performance, small group performance, individual departmental performance, the organization itself—to be accomplished within the organization.


How the Organization Goes About Needs Assessment

27. Needs assessments collect actual (hard20) performance data about individuals’ jobs and tasks.

28. Needs assessments collect actual performance data about units within the organization.

29. Needs assessments collect actual performance data about the organization itself.

30. Needs assessments collect actual performance data about the impact on external clients.

31. Needs assessments collect actual performance data about the impact on society (our neighbors both near and far).

32. Needs assessments collect perceptions (soft21) of performance-related data about individuals’ jobs and tasks.

33. Needs assessments collect perceptions of performance-related data about units within the organization.

34. Needs assessments collect perceptions of performance-related data about the organization itself.

35. Needs assessments collect perceptions of performance-related data about the impact on external clients.

36. Needs assessments collect perceptions of performance-related data about the impact on society or the community.

20 "Hard" data are results that are independently verifiable (such as error rate, production rate, employment, being a self-sufficient citizen).

21 "Soft" data are results that are personal and not independently verifiable (such as perceptions, opinions, feelings).


37. Needs assessments collect both actual and perceptions data for assessing gaps between current and desired results.

The Basis for Needs Assessment Criteria

38. Needs assessments are done to provide useful information for defining future directions for the organization.

39. Plans are made on the basis of the desired consequences for external clients (those people to whom we deliver things or services).

40. Plans are made on the basis of the desired consequences of results for society and community (those people who are our close and distant neighbors).

41. Plans are made on the basis of desired individual performance.

42. Plans are made on the basis of desired resources.

43. Plans are made on the basis of desired activities, programs, projects.

44. Plans are made on the basis of desired departmental or section results.

45. Plans are made on the basis of desired organizational results.

46. Plans are made on the basis of results desired for external clients (those to whom we deliver things or services).

47. Plans are made on the basis of results desired for society or the community.

48. Plans are derived from the data obtained from a needs assessment.

49. Data from a needs assessment are used to link resources to activities, programs, projects.


50. Data from a needs assessment are used to link resources to results that add value for external clients.22

51. The organization chooses means and resources (e.g., training, restructuring, layoffs) without first identifying results to be achieved.

52. The organization defines and uses needs assessments for identifying gaps in results for impact on external clients.

53. The organization defines and uses needs assessments for identifying gaps in results for impact on our society (e.g., health, safety, well-being, survival).

54. The organization uses needs assessments for identifying gaps in results for the organization itself (such as based on a business plan).

55. The organization uses needs assessments for identifying gaps in results for individual operations or tasks.

56. Needs are formally prioritized on the basis of the costs to close gaps in results as compared to the costs of ignoring them.

57. The organization uses data from a needs assessment to set objectives.

Using Needs Assessment Data

58. Needs assessment data are used to determine what gaps in results should be addressed.

59. Needs assessment data are used to prioritize the needs—gaps in results.

60. Needs assessment data are used to select the best ways and means to meet the needs—gaps in results.

22 Sometimes our external clients have clients themselves. Include these links in your response.


61. The organization uses needs assessment data as the basis for evaluation.

62. Needs assessment data are not used after they are collected.

63. Needs assessments are seen by associates in the organization as providing important information.

Note: Online services for this survey are available at www.e-valuate-it.com/instruments/RKA/. Relevant demographic information should also be collected.

What the Results Mean

Patterns are important. Review the responses (by you and/or others if that is your choice) and note any patterns that emerge. Gaps between "What Is" and "What Should Be" of over 1½ points deserve special attention.

When There Are No Gaps

If you get few gaps between "What Is" and "What Should Be" and both are at the 1 or 2 level, this might signify that the organization is not yet concerned about needs assessments to harvest the data concerning performance gaps between "What Is" and "What Should Be" and might not even note the important differences between conventional wisdom about the topic and what the emerging concern for defining needs, and not just wants (favored ways of doing things), advises. No gaps might also show up for those areas that address Mega, societal value added, although such a focus is attracting increasing support (see Chapter 1 of this book). The responses can let you know about the beliefs, values, attitudes, and approach to needs assessment that exist and the requirement for some changes there concerning creating the future.

The Meaning of Gaps

Now to look at the responses, the patterns of responses, and why many of the words and terms were unique. When there is a gap between "What Is" and "What Should Be" of 1½ points or more, you should take a close look—such gaps usually signal important needs.


How the Organization Perceives Needs Assessment

Items 1, 2, 3, and 4 address your organization's valuing and use of needs assessments. Items 5 and 6 let you know if the needs assessments being done or contemplated really examine gaps in results. This focus is vital to defining useful needs. Also pertaining to this are Items 12, 13, and 14. Items 15, 16, 17, and 18 let you know the scope of the use of needs assessment data.

Items 7 and 8 signal a possible problem. If people use a needs assessment focusing on activities, they are missing the concept of need as a noun—as a gap in results. The performance data on training needs assessments and activities-focused needs assessments suggest that, although this is a popular approach, decisions based on it will be wrong 80 to 90 percent of the time. This is not good relative to defining and delivering organizational success.

Items 9, 10, and 11 will show you why useful needs assessments are not being used.

What the Organization Includes When Doing Needs Assessments

Items 19 and 20 give you some guidance on what is and is not included in a current or anticipated needs assessment. If there is a positive response to Item 19, then you will likely miss the power of a results-referenced needs assessment. Item 20 will let you know if needs assessments look at both results and processes.

Now things get a bit trickier—tricky, but important. Items 21–25 examine the Organizational Elements—those things that any organization uses, does, produces, and delivers, and the impact all of that has for external clients and our shared society. Look at these one at a time: Item 21 lets you know if individual performance is examined. Item 22 lets you know if small group performance is examined. Item 23 lets you know if individual departmental performance is examined. Item 24 lets you know if organizational performance is examined. Item 25 lets you know if value added for external clients and society is examined.

Of course, the most effective (and safest, because it covers all important aspects of needs assessment and defining success) needs assessment approach will get a positive "What Should Be" for Item 26. Anything less means the approach being used is partial. Item 26 lets you know if all levels of results (individuals, small groups, departments, the organization itself, and external clients and society) are indeed linked.

How the Organization Goes About Needs Assessment

The next items probe the nature of the data being collected. Both “hard” and “soft” data—both actual performance and perceptions—are central to a useful needs assessment. First to the “hard” performance-based data.


Item 27 examines if hard performance data are collected for individuals' jobs and tasks, Item 28 examines that same question for units within the organization, Item 29 for the organization itself, and Item 30 for impact on external clients. Item 31 seeks to determine if hard data are collected for impact on our shared society. The best news you can get is high "What Should Be" ratings for Items 27–31.

Now, attention is turned to "soft" or perception data. As was done in Items 27–31, the same examinations are probed for soft data. Item 32 examines soft data for individual jobs and tasks, Item 33 for individual units within the organization, Item 34 for the organization itself, Item 35 for external clients, and Item 36 for our shared society and community. As before, the most valid needs assessments will collect both hard and soft data at all organizational levels, and that is examined in Item 37.

The Basis for Needs Assessment Criteria

Item 38 probes if needs assessment data are used for defining the organization's future. (If not, the organization might simply be looking to improve what is now going on and risks continuing to do what is already being done, which will not suffice.) Item 39 probes the consequences of the needs assessment data for external clients, and Item 40 probes those for society and our shared communities.

Item 41 seeks to determine if needs assessment data are used for decisions relative to individual performance. Item 44 seeks to determine if plans are related to departmental or section results, Item 45 seeks to determine if plans are related to desired organizational results, Item 46 is concerned with external clients, and Item 47 is concerned with society and community.

Items 42 and 43 revisit the tendency to use a so-called needs assessment as a "cover" for jumping into premature solutions by seeking to find out if the organization's plans are resource driven and/or activity driven. If you get a high response on one or both of these items, you should be alerted to the potential that your organization is solutions-driven and not results-driven.

Item 48 looks at planning and whether it is based on needs assessment data, and Item 49 seeks to find out if needs assessment data are used for linked decisions about resources, activities, programs, and projects. Item 50 continues the inquiry concerning the extent to which needs assessment data are used for linking organizational efforts and contributions to external clients and society.

Item 51 provides a "red flag" if needs assessment data are not used for such vital decisions as training, restructuring, layoffs, or the like. A high response on "What Is" signals trouble. The alternative to this is found in Item 52.


Continuing the inquiry concerning needs assessments, needs assessment data, and their application by the organization, Item 53 sees if they are used for Mega level concerns (health, safety, and well-being), and Item 54 examines if data are used at the Macro level (for example, with a business plan). Item 55 examines if data are used for individual operations and performance tasks.

Costs and consequences—what you give and what you get—are examined in Item 56. The sensible and rational approach to using needs assessment data is to prioritize on the basis of the costs of meeting each need as compared to the costs of ignoring it.

Item 57 seeks to find out if needs identified in a needs assessment are used to set objectives: the "What Should Be" results provide the basis for measurable objectives.

Using Needs Assessment Data

Items 58–61 examine the extent to which data from a needs assessment are not only used but used appropriately. Item 62 is another "red flag" item. It lets you know if needs assessments are done but the data not used. Item 63 probes to see if needs assessments are seen by associates as being useful and as providing important information.

Items and Patterns

Look at the responses you get (and, if you choose, those of others in your organization) to determine what you might change concerning the nature of your needs assessments and how any data get used. Needs assessments, when done completely and correctly, will be invaluable for you as you define where to head, justify why you want to get there, and provide the criteria for strategic, tactical, and operational planning as well as sensitive and sensible evaluation and continual improvement.

There are some items for which you should want high "What Should Be" scores. These are the cornerstone items, and the other items are part of the tapestry of success: Items 4, 5, 12, 14, 26, 31, 36, 37, 38, 47, 48, 53, 57, 59, and 60. There are some "red flag" items for which you will be better served by having (and maintaining) low "What Should Be" ratings. These include Items 7, 8, 9, 10, 11, 19, 42, and 51.

These data may be used to identify what in your organization should productively change relative to needs assessment. Remember, needs assessments provide the basic data and justification for you to determine where you are headed, why you want to get there, and how to tell when you have arrived.
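If you score the instrument with a short script, the cornerstone and "red flag" checks can be automated. The Python sketch below is a hypothetical illustration: the ratings are invented, and the cutoff of 4 ("Frequently") used to call a rating "high" or "low" is an assumption for the example, not part of the instrument.

```python
# Minimal sketch of checking cornerstone and "red flag" items for the needs
# assessment instrument. The "What Should Be" ratings below are hypothetical,
# and the cutoff of 4 is an assumed threshold for "high."

CORNERSTONE_ITEMS = {4, 5, 12, 14, 26, 31, 36, 37, 38, 47, 48, 53, 57, 59, 60}
RED_FLAG_ITEMS = {7, 8, 9, 10, 11, 19, 42, 51}
HIGH = 4  # assumed cutoff: 4 ("Frequently") or 5 ("Consistently") counts as high

what_should_be = {4: 5, 5: 4, 7: 4, 12: 2, 19: 1, 26: 5, 42: 3, 51: 2}  # item: rating

for item, rating in sorted(what_should_be.items()):
    if item in CORNERSTONE_ITEMS and rating < HIGH:
        print(f"Item {item}: cornerstone item with a low 'What Should Be' ({rating}) - review")
    if item in RED_FLAG_ITEMS and rating >= HIGH:
        print(f"Item {item}: red-flag item with a high 'What Should Be' ({rating}) - review")
```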


Related References

Barker, J. A. (2001a). The new business of paradigms (classic ed.). St. Paul, MN: Star Thrower Distribution. Videocassette.

Barker, J. A. (2001b). The new business of paradigms (21st century ed.). St. Paul, MN: Star Thrower Distribution. Videocassette.

Bernardez, M. (2005). Achieving business success by developing clients and community: lessons from leading companies, emerging economies and a nine year case study. Performance Improvement Quarterly, 18(3), 37–55.

Clark, R. E., & Estes, F. (2002). Turning research into results: A guide to selecting the right performance solutions. Atlanta, GA: CEP Press.

Davis, I. (2005, May 26). The biggest contract. The Economist (London): 375(8428), 87.

Drucker, P. F. (1993). Post-capitalist society. New York: Harper Business.

Guerra, I. (2003). Key competencies required of performance improvement professionals. Performance Improvement Quarterly, 16(1).

Guerra, I., Bernardez, M., Jones, M., & Zidan, S. (2005). Government workers adding value: The Ohio Workforce Development Program. Performance Improvement Quarterly, 18(3), 76–99.

Kaufman, R. (2005). Defining and delivering measurable value: a Mega thinking and planning primer. Performance Improvement Quarterly, 18(3), 8–16.

Kaufman, R. (2000). Mega planning: practical tools for organizational success. Thousand Oaks, CA: Sage Publications. Also Planificación Mega: Herramientas practicas para el exito organizacional (2004). Traducción de Sonia Agut. Universitat Jaume I, Castelló de la Plana, Espana.

Kaufman, R. (1998). Strategic thinking: A guide to identifying and solving problems (revised). Arlington, VA & Washington, DC: Jointly published by the American Society for Training and Development and the International Society for Performance Improvement. Also, published in Spanish: El Pensamiento Estrategico: Una Guia Para Identificar y Resolver los Problemas, Madrid, Editorial Centros de Estudios Ramon Areces, S. A.

Kaufman, R., & Lick, D. (2000–2001, Winter). Change, creation, and change management: partners in human performance improvement. Performance in Practice, 8–9.

Kaufman, R., Oakley-Browne, H., Watkins, R., & Leigh, D. (2003). Practical strategic planning: aligning people, performance, and payoffs. San Francisco, CA: Jossey-Bass/Pfeiffer.

Kaufman, R., Watkins, R., & Leigh, D. (2001). Useful educational results: Defining, prioritizing, and accomplishing. Lancaster, PA: Proactive Publishers.

Kaufman, R., Watkins, R., Sims, L., Crispo, N., & Sprague, D. (1997). Costs–consequences analysis. Performance Improvement Quarterly, 10(3), 7–21.

Mager, R. F. (1997). Preparing instructional objectives: A critical tool in the development of effective instruction (3rd ed.). Atlanta, GA: Center for Effective Performance.


Rummler, G. A. (2004). Serious performance consulting: according to Rummler. Silver Spring, MD: International Society for Performance Improvement and the American Society for Training and Development.

Watkins, R., Leigh, D., & Kaufman, R. (1998, September). Needs assessment: a digest, review, and comparison of needs assessment literature. Performance Improvement, 37(7), 40–53.

Watkins, R., Kaufman, R., & Leigh, D. (2000, April). A performance accomplishment code of professional conduct. Performance Improvement, 35(4), 17–22.


Chapter 5
Culture, Our Organization, and Our Shared Future

Roger Kaufman, PhD

This self-assessment survey contains statements about organizational culture-related considerations—how and why things get done around our place of work. Each organization is different, and these items and the response patterns they provide can help your organization decide what you might want to change and what you might want to continue. Organizational success depends on ensuring that everyone in the organization is heading to the same destination and that people can work both together and independently to get from here to there.

The items in this assessment instrument are based on the basic concepts provided in Chapter 1. The statements for the instrument are performance-based—results-based—in keeping with the basic concepts of the value and usefulness of strategic thinking and planning and "what it takes" to be successful.

We ask you to respond about both the current status and the perceived required status of the following organizational culture variables:

• Associates and work, including trust, ethics, decision making, cooperation, innovation, and valuing

• Management style, including information sharing, input, direction giving/direction following, amount of supervision, openness

• Measuring success, including nature of evaluations, the kind of data collected, what data gets used, criteria, clarity, dealing with success and failure, compliance, focus on kinds of results to achieve

• The organization, including work environment, workspace, drivers for organizational performance, policies, ways of working

• Customer relations, including customer feedback, planning with and for customers, objectives sharing, modes of data collection, vision and purposes

There are no right or wrong answers, just variables for you and your associates to consider in terms of what in the culture is productive and what might be changed.


A note on words: Some of the words used in this survey may seem new or may be used differently than you typically use them. Review the list below that describes how these words will be used in this survey:

• Customers are those to whom you deliver a thing or service.

• Managers are bosses, supervisors, or leaders to whom you report in the chain of command.

• Associates are people, or employees, with whom you work.

• Actual performance is when somebody actually does and produces something as contrasted with how we recall processes or procedures that might be employed.

• Rewards are the incentives, perks, financial gains, better assignments, and recognition that are given on the basis of what is done and delivered.

• Resources are dollars, people, equipment, and tools.

• Society refers to the world in which we all live—our near and distant neighbors.

As a reminder, a Glossary of terms is provided at the end of this book.

How to Complete This Survey

There are two dimensions to this survey that allow you to compare the current status of our organization with a preferred status: (1) What Is and (2) What Should Be. In the first column (left side), indicate how you see our organization currently operating. In the second column (right side), indicate how you feel our organization should be operating. For each item, think of the phrase "In our organization, from my experience…" as you consider the current and preferred states of the organization. If there are items about which you are uncertain, give your best response.


Culture and My Organization Survey (What Is / What Should Be)

Response scale for both columns: 1 – Rarely, if ever; 2 – Not usually; 3 – Sometimes; 4 – Frequently; 5 – Consistently

Associates and Work

1. I have the right information to do my job.

2. Our people respond well to change.

3. Decisions are based on solid evidence.

4. Our associates make independent decisions.

5. Our associates feel important.

6. There are enough people to do the work.

7. We have the right people to do the work.

8. Our associates trust each other to do what has to be accomplished.

9. Our associates adhere to ethical standards.

10. Our associates are competitive with one another.

11. Our associates are friendly.

12. We work cooperatively.

13. Details are important.

14. We have the resources to do our work properly.

15. We select solutions and resources by first identifying objectives.

16. Our organization encourages innovation.

17. I am valued at work.

18. I am helped to improve and advance.

19. Decisions are timely for what we have to do and deliver.


Management Style

20. Information is shared quickly.

21. Our associates can easily provide input to managers about goals, directions, or methods.

22. Our associates do what their manager says.

23. Our associates are expected to do what they are told.

24. Our managers do what is right.

25. Our managers have courage.

26. Our managers appreciate new ideas.

27. Our managers trust associates to do what has to be accomplished.

28. Everyone is closely supervised.

29. Our managers act on input from associates when making decisions.

30. Change is supported.

Measuring Success

31. Our organization conducts formal evaluations.

32. We track our progress.

33. The organization collects actual (hard) performance data.

34. The organization collects perception (soft) performance data.

35. Evaluation results are used for deciding what to stop, what to continue, and what to modify.

36. Evaluation criteria for individual performance are clear.

37. Evaluation criteria for individual performance are uniformly applied to people throughout all levels of the organization.


38. We evaluate activities to see what works and what does not work during the activities.

39. There are appropriate consequences for failures.

40. Rewards for performance accomplishments are clear.

41. Consequences for not achieving results are clear.

42. Performance is more highly prized than compliance.

43. Rewards result from value added to society.

44. We link organizational results to external consequences.

45. I have time to do evaluation.

The Organization

46. We have a good working environment.

47. Our workspaces are comfortable.

48. Our mission is clear.

49. Our mission is measurable.

50. We are cost driven.

51. We are profit driven.

52. We are time driven.

53. Career path opportunities are clear.

54. I understand our organizational structure.

55. I understand why we do things the way we do.

56. Policies and procedures are clearly communicated.

57. Policies and procedures make sense to me.


58. We work in a trusting environment.

59. Our methods for working together are understood.

Customer Relationships

60. We have a dialog with customers about what they want.

61. We have a dialog with customers about what we think they should have.

62. The customers are only given what they want.

63. We share evaluation results with our customers.

64. Our customers know what we will deliver.

65. Our customers know why we are delivering what we deliver.

66. Our decisions are based on what it takes to help our customers best serve their customers.

67. Feedback is collected from customers on the quality of our service/product.

68. Feedback is collected on the impact we have on our customers’ customers.

69. Feedback is collected on the impact we have on society.

70. We deliver what adds value to society.

Direction and the Future

71. Our organization uses the feedback it collects.

72. Our associates agree on where we are headed.

73. Where we are headed is clear to our customers.

74. I have time to do planning.


75. Our organization prepares measurable objectives (ones that state where we are headed and how to tell when we have arrived).

76. Our organization exhibits a results-focused culture—everyone is focused on results.

77. Information from a needs assessment is used to plan.

78. Needs assessments identify gaps in results, not gaps in resources or methods.

79. We plan appropriately for the future.

80. We plan on the basis of adding value to society—for tomorrow’s child.

81. Our vision is based on the value we will add to external customers.

82. Our vision is based on the value we will add to society (beyond external customers).

Note: Online services for this survey are available at www.e-valuate-it.com/instruments/RKA/. Relevant demographic information should also be collected.

What the Results Mean

Take a look at the patterns. Patterns are revealed when you (and possibly others) complete the assessment. The gaps between “What Is” and “What Should Be” allow you to identify items of potential high priority. As with other assessment instruments of this type, gaps between “What Is” and “What Should Be” of 1½ or more on the scale should attract your attention. The patterns of questions and how they relate to your organization, or the organization you would like to create, are yours to choose.
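Should you want to tabulate responses rather than eyeball them, the following is a minimal sketch (in Python) of the gap calculation just described. It is an illustration only, not part of the instrument: the function name, the dictionary layout, and the sample ratings are assumptions; only the 1½-point threshold comes from the paragraph above.

    # Illustrative sketch: compute "What Should Be" minus "What Is" for each item
    # and flag items whose gap is 1.5 or more on the 5-point scale.
    def flag_priority_gaps(responses, threshold=1.5):
        """responses maps item number -> (what_is, what_should_be), each rated 1-5."""
        gaps = {item: should_be - what_is
                for item, (what_is, should_be) in responses.items()}
        return {item: gap for item, gap in gaps.items() if gap >= threshold}

    # Example with made-up ratings for three items from Associates and Work.
    sample = {5: (2, 5), 8: (4, 4), 10: (3, 5)}
    print(flag_priority_gaps(sample))  # -> {5: 3, 10: 2}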

Associates and Work

Let’s take a look at the responses in the first section, Associates and Work. For all items except Item 11, gaps in results show exactly where changes might be considered. For example, if associates don’t see themselves as important (Item 5), then this might be a symptom of people or supervisors discounting associates, withholding useful rewards, or failing to appropriately recognize contributions. Discrepancies here might be further indicated by looking at Item 10, for if there are gaps here (people perhaps overly competing with one another), then there is a clue for what is causing the gap in recognition. Scan for other patterns in terms of gaps. Are there gaps between “What Is” and “What Should Be” for being specific about objectives (Item 15)? Are there gaps in information, as well as human and physical resources (Items 1, 6, 7, 14, 18, 19)? Are there trust problems (Items 8 and 12)?

Management Style

These questions and the gaps they provide give clues to how your associates and those in charge interact on their way to organizational success. Again, look for patterns. What are the characteristics of managers? Courageous (Item 25) or timid? Are associates asked merely to comply (Items 22, 23, 28), or are they encouraged to contribute to the purposes and efforts (Items 21, 26, 27, 29, 30)? What about the ethics of managers (Item 24)? Look for patterns in the data. Do some groups of questions have a different pattern than others? Do some groups of respondents describe a differing perspective than other groups?

Measuring Success

Here is where you can find out a lot about the organizational culture in terms of how it defines and measures success. Does the organization do formal evaluations (Item 31) and collect appropriate data (Items 33, 34)? How are evaluation results used (Items 32, 35, 38)? What is prized by the organization (Items 42, 43, 44)? Is there enough time for evaluation (Item 45)? It is a truism that what gets rewarded gets done. The responses between “What Is” and “What Should Be” here can tell you much about how success is defined and measured, and how rewards and incentives are used. Again, look for the patterns.

The Organization

The gaps displayed here also relate to what was found in Associates and Work, so there are opportunities to look at the reliability of responses from two different sections. How good is the environment for work (Items 46, 47)? Are the purposes of the organization clear (Item 48)? What drives the organization—what gets valued (Items 50, 51, 52)? How about why associates in the organization do things and do them the way they do them (Items 53, 54, 55, 56, 57)? How about how associates work together (Items 58, 59)? The gaps here should be considered along with the gaps in other areas revealed by this assessment.

Customer Relationships

How does the organization really view and interact with the customer? Do associates interact with customers about what to provide (Items 60, 61)? Are we responsive to customer requirements or just what they want (Items 62, 64, 65, 66)? How much do we tell the customer (Items 63, 67, 68)? And how do we use our performance data (Items 69 and 70)?


As before, the patterns of the gaps between “What Is” and “What Should Be” are useful and up to you to determine.

Direction and the Future

Is the organization looking to define a better and more successful future (Items 75, 76, 79, 80)? Are employees and customers included in defining a better future (Items 72, 73)? Does the organization collect and use performance data in order to define a preferred future (Items 71, 77, 78)? Especially important, given the emerging agreement and emphasis on organizations adding societal value (sometimes called Corporate Social Responsibility), are Items 81 and 82. If the “What Should Be” responses for these items are low, you might want to revisit your organization’s vision, related mission, and associated policy: not adding value to our shared society is a prescription for future corporate failure. The future it seeks to create is up to the organization to determine. The patterns obtained here can certainly help define the future and reveal whether associates are open to making that happen.

Using the Results of This Survey

Determine gaps in results for each question in each section, patterns in each section and between sections, and choices concerning what is going well and what should be changed. The demographics collected at the end of the survey can help you determine where help might be given—who might want to change—and where things are on target for organizational success. The data are now available for objective assessment of your organizational culture, the major variables in it, and what to change and what to continue.

It is your choice which gaps you want to close and which ones are not important. It is suggested that “What Should Be” for the following items will be especially important for you as you move from your current results to ones that will deliver continual success. There are many more items that are also important, but they make up the tapestry of success, while the following are the cornerstones of success: 9, 24, 31, 35, 36, 38, 42, 43, 48, 58, 61, 66, 69, 70, 71, 75, 78, 79, and 82. Trouble spots that should serve as red flags include: 10, 28, 50, 51, and 62.

It is your organization. Use the data from this assessment to change what should be changed and keep what is working well. Ask yourself and your associates, “What kind of organization do I want to work with?” and “What kind of organization would I want to work with if I were the customer?”

Related References

Barker, J. A. (2001a). The new business of paradigms (classic ed.). St. Paul, MN: Star Thrower Distribution. Videocassette.

Barker, J. A. (2001b). The new business of paradigms (21st century ed.). St. Paul, MN: Star Thrower Distribution. Videocassette.


Bernardez, M. (2005). Achieving business success by developing clients and community: lessons from leading companies, emerging economies and a nine year case study. Performance Improvement Quarterly, 18(3), 37–55.

Clark, R. E., & Estes, F. (2002). Turning research into results: a guide to selecting the right performance solutions. Atlanta, GA: CEP Press.

Davis, I. (2005, May 26). The biggest contract. The Economist (London) 375(8428), 87.

Drucker, P. F. (1993). Post-capitalist society. New York: Harper Business.

Guerra, I. (2003). Key competencies required of performance improvement professionals. Performance Improvement Quarterly, 16(1).

Guerra, I., Bernardez, M., Jones, M., & Zidan, S. (2005). Government workers adding value: The Ohio Workforce Development Program. Performance Improvement Quarterly, 18(3) 76–99.

Kaufman, R. (2006a). 30 seconds that can change your life: a decision-making guide for those who refuse to be mediocre. Amherst, MA: HRD Press, Inc.

Kaufman, R. (2006b). Change, choices, and consequences: a guide to Mega thinking and planning. Amherst, MA: HRD Press, Inc.

Kaufman, R. (2005). Defining and delivering measurable value: Mega thinking and planning primer. Performance Improvement Quarterly, 18(3), 8–16.

Kaufman, R. (2000). Mega planning: practical tools for organizational success. Thousand Oaks, CA: Sage Publications. Also Planificación Mega: Herramientas practicas paral el exito organizacional (2004). Traducción de Sonia Agut. Universitat Jaume I, Castelló de la Plana, Espana.

Kaufman, R. (1998). Strategic thinking: a guide to identifying and solving problems (revised). Arlington, VA & Washington, DC: Jointly published by the American Society for Training and Development and the International Society for Performance Improvement. Also published in Spanish: El Pensamiento Estrategico: Una Guia Para Identificar y Resolver los Problemas. Madrid: Editorial Centros de Estudios Ramon Areces, S. A.

Kaufman, R., & Lick, D. (2000–2001, Winter). Change creation and change management: partners in human performance improvement. Performance in Practice, 8–9.

Kaufman, R., Oakley-Browne, H., Watkins, R., & Leigh, D. (2003). Practical strategic planning: aligning people, performance, and payoffs. San Francisco, CA: Jossey-Bass/ Pfeiffer.

Kaufman, R., Watkins, R., & Leigh, D. (2001). Useful educational results: defining, prioritizing, and accomplishing. Lancaster, PA: Proactive Publishers.

Kaufman, R., Watkins, R., Sims, L., Crispo, N., & Sprague, D. (1997). Costs-consequences analysis. Performance Improvement Quarterly, 10(3), 7–21.

Mager, R. F. (1997). Preparing instructional objectives: a critical tool in the development of effective instruction (3rd ed.). Atlanta, GA: Center for Effective Performance.

Rummler, G. A. (2004). Serious performance consulting: according to Rummler. Silver Spring, MD: International Society for Performance Improvement and the American Society for Training and Development.


Watkins, R., Kaufman, R., & Leigh, D. (2000, April). A performance accomplishment code of professional conduct. Performance Improvement, 35(4), 17–22.

Watkins, R., Leigh, D., & Kaufman, R. (1998, September). Needs assessment: a digest, review, and comparison of needs assessment literature. Performance Improvement, 37(7), 40–53.


Chapter 6 Evaluation, You, and Your Organization

Roger Kaufman, PhD

This material introduces a survey that asks about evaluation and evaluation-related considerations—how things get reviewed at your organization. Each organization will be different. These items and the pattern they provide are designed to help you decide what you might want to change and what you might want to continue relative to comparing your results with your intentions. Evaluation, at its core, simply compares one’s results with one’s intentions. Poor evaluation can lead to poor results and contributions. So let’s see what is going on within your organization.

A note on words: Some of the words used in this survey might seem new to you or may be used differently than you typically use them. Review the list below that describes how these words will be used in this survey:

• By society we mean the world in which we all live.

• Supervisors are bosses, leaders, or management to whom you report in the organization’s chain of command.

• Rewards (or incentives) are the perks, financial gains, better assignments, and recognition that are given on the basis of what is done and delivered.

• Resources are dollars, people, equipment, and tools.

• Formally refers to doing something with a shared, rigorous definition of the ways and means by which we do it, such as collecting data, rather than informally, which can be casual or unstructured.

• Actual performance is what somebody actually does and produces, as contrasted with recollections of the processes or procedures that might be employed.

• Clients are those to whom you deliver a product or service. They may be internal or external to your organization.

• Internal clients are those within your organization (also referred to as internal partners).

• External clients are those outside of your organization, such as your immediate clients and your clients’ clients (also referred to as external partners).

• Staff are people who work within your organization.


How to Complete This Survey

There are two dimensions to this survey that allow you to compare the current status of our organization with a preferred status: (1) What Is and (2) What Should Be. In the first column (left side), indicate how you see our organization currently operating. In the second column (right side), indicate how you feel our organization should be operating. For each item, think of the phrase “In our organization, from my experience…” as you consider the current and preferred states of the organization. If there are items about which you are uncertain, give your best response.

Evaluation, You, and Your Organization Survey

What Is (left column) and What Should Be (right column), each rated: 1 – Rarely, if ever; 2 – Not usually; 3 – Sometimes; 4 – Frequently; 5 – Consistently

How the Organization Perceives Evaluation

1. Evaluations compare results with purposes (predetermined objectives).

2. Evaluations are results focused.

3. Evaluations are process focused.

4. Evaluations focus on both results and processes.

5. We evaluate.

6. Evaluations include a focus on the results accomplished by individuals.

7. Evaluations include a focus on the results accomplished by small groups.

8. Evaluations include a focus on the results accomplished by individual departments.

9. Evaluations include a focus on the results accomplished by the organization itself.

10. Evaluations include a focus on the results accomplished for the benefit of external clients.

11. Evaluations include a focus on the results accomplished for the benefit of society or the community.


12. The organization skips formal evaluation because of time constraints.

13. The organization skips formal evaluation because of lack of evaluation expertise.

14. The organization skips formal evaluation because of not knowing how to interpret evaluation data.

15. Management and supervisors are focused on results accomplished (rather than processes and activities engaged in) when they request an evaluation.

16. Staff is focused on results accomplished when conducting an evaluation.

17. The organization’s culture is focused on results.

18. Evaluation provides information that is used for deciding what to stop, to continue, and to modify.

19. Evaluations are included in project plans.

20. Evaluation results are shared with all planning partners.

How the Organization Goes About Evaluation

21. Evaluations both collect and use actual (hard [23]) performance data about individuals’ jobs and tasks.

22. Evaluations both collect and use actual (hard) performance data about units within the organization.

23. Evaluations both collect and use actual (hard) performance data about the organization itself.

[23] “Hard” data are results that are independently verifiable (such as error rate, production rate, employment, being a self-sufficient citizen). We use it to mean the same as “actual performance” data.


24. Evaluations both collect and use actual (hard) performance data about the impact on external clients (i.e., results for clients and clients’ clients).

25. Evaluations both collect and use actual (hard) performance data about the impact on society or the community.

26. Evaluations both collect and use perceptions (soft [24]) of performance-related data about individuals’ jobs and tasks.

27. Evaluations both collect and use perceptions of performance-related data about units within the organization.

28. Evaluations both collect and use perceptions of performance-related data about the organization itself.

29. Evaluations both collect and use perceptions of performance-related data about the impact on external clients (i.e., results for clients and clients’ clients).

30. Evaluations both collect and use perceptions of performance-related data about the impact on society or the community.

31. Evaluations both collect and use both actual data and perceptions for assessing gaps between current and desired results.

32. Evaluations involve internal partners (staff) in setting objectives.

33. Evaluations involve external partners (clients and society) in setting objectives.

34. The organization formally evaluates the results external to the organization (i.e., results for clients and clients’ clients).

[24] “Soft” data are results that are personal and not independently verifiable (such as perceptions, opinions, feelings). They are the perceptions that are held by people about some performance results.


35. The organization informally—no structure or specific criteria—evaluates results external to the organization (i.e., results for clients and clients’ clients).

36. The organization informally evaluates results accomplished for society or the community.

37. The organization informally evaluates results of the organization.

38. The organization informally evaluates results of work units.

39. The organization informally evaluates results of individuals.

40. Evaluation provides measurable objectives that state both what result is to be accomplished and how the accomplishment will be measured.

The Basis for Evaluation Criteria

41. We don’t plan.

42. Plans are made on the basis of desired results.

43. Plans are made on the basis of the desired consequences of results for external clients (those people we deliver things or services to).

44. Plans are made on the basis of the desired consequences of results for society and community (those people who are our close and distant neighbors).

45. Plans are made on the basis of desired individual performance.

46. Plans are made on the basis of desired resources.

47. Plans are made on the basis of desired activities, programs, and projects.


48. Plans are made on the basis of desired departmental or section results.

49. Plans are made on the basis of desired organizational results.

50. Plans serve to link resources to activities, programs, and projects.

51. Plans serve to link resources to results that add value for clients and clients’ clients.

52. The organization chooses means and resources (e.g., training, restructuring, layoffs) without first identifying results to be achieved.

53. The organization uses evaluation data for identifying gaps in results for impact on external clients and society.

54. The organization uses needs assessments for identifying gaps in results for impact on the organization itself (such as a business plan).

55. The organization uses needs assessments for identifying gaps in results for impact on individual operations or tasks.

56. Needs are formally prioritized on the basis of the costs to close gaps in results as compared to the costs of ignoring them.

57. The criteria used for evaluating people and programs are only known to the supervisors.

58. The criteria for evaluating people and programs are not rigorously defined.

59. The criteria for evaluating people and programs are consistently applied to people at all levels.

60. The criteria for evaluating people and programs are used fairly.

61. The criteria for evaluating people and programs are used to differentially reward “friends” of those in power.


Using Evaluation Data

62. The organization uses evaluation data for learning and improving.

63. The organization uses evaluation data for blaming or punishing.

64. Evaluations compare results accomplished with objectives established at the beginning of a project.

65. Evaluations compare accomplishments to the rewards of those accomplishments for the organization itself.

66. Evaluations compare accomplishments to the rewards of those accomplishments for individual projects or activities.

67. Evaluations compare accomplishments to the rewards of those accomplishments based only on political consequences.

68. Evaluations determine (while doing a program, project, or activity) what is working and what is not based on hard data.

69. Evaluations determine (while doing a program, project, or activity) what is working and what is not based on soft data.

Note: Online services for this survey are available at www.e-valuate-it.com/instruments/RKA/. Relevant demographic information should also be collected.

What the Results Mean

How the Organization Perceives Evaluation

Items 1 through 5 let you know about the nature of your evaluations. Item 3 provides a “red flag” because evaluation of process (or results) without relating to gaps in results does not provide useful information—the only sensible way to select a process, or means, is on the basis of what results you want to achieve. Item 4 allows you to see if your evaluations, while looking at results, might be “sneaking” means and resources into the picture.


Items 6 through 11 examine the target of evaluations. Item 6 examines evaluations relative to individuals, Item 7 to small groups, Item 8 to individual departments, Item 9 to the organization itself, Item 10 to external clients, and Item 11 to society and the community. By breaking these into individual questions, you may pinpoint where evaluations are or are not being targeted. Of course, inclusion of all will yield the best evaluation results.

Items 12 through 14 focus on possible trouble spots. Item 12 asks whether evaluations might be skipped for lack of time, Item 13 for lack of expertise, and Item 14 for lack of knowing how to use the evaluation data. These are usual traps that interfere with good and useful evaluation.

Item 15 is about management and supervisors’ orientation toward results and putting results and methods/resources into useful perspective. Item 16 probes staff focus, and Item 17 is about the organization’s results culture. Item 18 examines how evaluation data are used, and Item 19 looks specifically at inclusion in project plans. Item 20 lets you know about the sharing of evaluation results—openness.

How the Organization Goes About Evaluation

This series of assessment items will let you know about how and if evaluation data are being applied.

Item 21 looks to see if hard (performance) data are used concerning individuals’ jobs and tasks. The same question is applied concerning hard (performance) data to units within the organization in Item 22, and to the organization itself in Item 23. Item 24 lets you know about both the collection and use of performance data for external clients, while Item 25 probes the Mega level of society and community. All levels should be included in hard data collection and use.

Now the items shift to soft (perception) data. Item 26 examines the collection and use of soft data related to individuals, Item 27 to organizational units, Item 28 to the organization itself, Item 29 to external clients, and Item 30 to society and the community. All levels should be included in soft data collection and use.

Item 31 will tell you if the integrity of evaluation (comparing results with intentions) is maintained by examining gaps in results. Item 32 examines if internal staff are involved in setting objectives. Item 33 examines if external partners are included in setting objectives. Item 34 examines if evaluations are formal and focus on external clients.

Item 35 shifts to examine the extent to which your evaluations are informal, and probes whether informal evaluations are made for external clients. Item 36 seeks information about informal evaluation regarding results accomplished for society or the community, while Item 37 seeks information about informal evaluation of the organization itself, Item 38 of work units, and Item 39 of individuals. Informal evaluations are best replaced by formal and rigorous evaluations.

Item 40 is key. It probes to find out if evaluation provides measurable criteria and provides data for what is to be accomplished and how to calibrate that accomplishment.


The Basis for Evaluation Criteria

Item 41 can be telling: Does the organization even plan? Item 42 evaluates whether plans are made on the basis of desired results. Item 43 examines if external clients’ desired consequences are used for planning, and Item 44 asks the same question for society and the community. Item 45 probes whether plans are based on desired individual performance, Item 46 on desired resources, and Item 47 on desired programs, projects, and activities. Item 48 examines if plans are made on the basis of desired section results, and Item 49 on desired organizational results. Item 50 examines if plans are linked to activities, programs, and projects. Item 51 examines if plans are made on the basis of linking resources to results that add value for clients and clients’ clients. Your evaluations will be more useful if all of these are included.

Item 52 examines whether the organization takes action without identifying the results to be achieved. Item 53 probes to see if the organization uses evaluation data for external and societal impact. Item 54 starts a shift to see if needs assessment (determining and prioritizing gaps in results) data are used for impact on the organization itself, and Item 55 focuses on individual operations and tasks. Item 56 determines if needs are prioritized on the basis of the costs to meet the needs as compared to the costs to ignore them.

Items 57 through 61 are a series on criteria for evaluation. Item 57 examines if criteria for evaluation of people and programs are only known to supervisors, which is not a good approach. Item 58 probes if evaluation criteria for people and programs are not rigorous, and Item 59 examines the consistency of the application of evaluation criteria. Item 60 examines the fairness of the criteria, and Item 61 is concerned with differential (and inappropriate) rewards.

Using Evaluation Data

How are evaluation data used? Item 62 seeks to determine if evaluation data are used for continual improvement, and Item 63 for blaming and punishing. Item 64 seeks to find out if evaluation data are used to compare results with intentions. Item 65 looks specifically at the organization itself, and Item 66 looks at individual projects or activities. Item 67 slips into the arena of company politics and whether rewards are made on the basis of political considerations alone. Item 68 speaks to continual improvement using evaluation data based on hard (performance) data, and Item 69 asks the same for soft (perception) data. Of course, both hard and soft data should be used.


Using the Results of This Survey

Gaps in results for each item within each section provide a great deal of information: patterns in each section and between sections, and choices concerning what is going well and what should be changed. The demographics collected at the end of the survey can help you determine where help might be given, who might want to change, and where things are on target for organizational success. The data are now available for objective assessment of your approach to evaluation, the major variables in it, and what to change and what to continue.

Following are the items that are the cornerstones for defining, doing, and benefiting from useful evaluations, realizing that the others are also important but are part of the fabric. These should receive a rating of 4 or 5 on the “What Should Be” scale: Items 1, 2, 11 (particularly important if evaluation is to look at the value added to our shared society and communities), 15, 18, 24, 25 (particularly important), 30, 34 (particularly important), 40, 44 (particularly important), 53, 56, 59 (particularly important), 60 (particularly important), 62, 64, 68, and 69.

There are some items for which high “What Should Be” scores of 4 or 5 should serve as red flags: 3, 12, 13, 14, 35, 36, 37, 38, 39, 41, 46, 47, 52, 57, 58, 61, 63, and 67.

From the responses to this survey, you can identify what is working and what should be changed. Evaluation is critical to organizational success. Do it well.
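For readers who keep their responses in a spreadsheet or script, here is a minimal sketch (an illustration only, not part of the survey) of how the cornerstone and red-flag screens above might be applied to one respondent. The item lists come from the preceding paragraphs; the function name and data layout are assumptions.

    # Illustrative sketch: screen one respondent's "What Should Be" ratings (1-5)
    # against the cornerstone and red-flag item lists described above.
    CORNERSTONE_ITEMS = {1, 2, 11, 15, 18, 24, 25, 30, 34, 40, 44, 53, 56, 59, 60,
                         62, 64, 68, 69}
    RED_FLAG_ITEMS = {3, 12, 13, 14, 35, 36, 37, 38, 39, 41, 46, 47, 52, 57, 58,
                      61, 63, 67}

    def screen_should_be(should_be):
        """should_be maps item number -> "What Should Be" rating (1-5)."""
        weak_cornerstones = [i for i in sorted(CORNERSTONE_ITEMS) if should_be.get(i, 0) < 4]
        raised_red_flags = [i for i in sorted(RED_FLAG_ITEMS) if should_be.get(i, 0) >= 4]
        return weak_cornerstones, raised_red_flags

    # Example with made-up ratings for a few items; unanswered items count as weak.
    weak, flags = screen_should_be({1: 5, 2: 3, 3: 4, 41: 5})
    print(weak)   # cornerstone items rated below 4 (or unanswered)
    print(flags)  # red-flag items rated 4 or 5 -> [3, 41]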

Related References

Barker, J. A. (2001). The new business of paradigms (21st century ed.). St. Paul, MN: Star Thrower Distribution. Videocassette.

Bernardez, M. (2005). Achieving business success by developing clients and community: lessons from leading companies, emerging economies and a nine year case study. Performance Improvement Quarterly, 18(3), 37–55.

Clark, R. E., & Estes, F. (2002). Turning research into results: a guide to selecting the right performance solutions. Atlanta, GA: CEP Press.

Davis, I. (2005, May 26). The biggest contract. The Economist (London) 375(8428), 87.

Drucker, P. F. (1993). Post-capitalist society. New York: Harper Business.

Greenwald, H. (1973). Decision therapy. New York: Peter Wyden, Inc.

Guerra, I. (2003). Key competencies required of performance improvement professionals. Performance Improvement Quarterly, 16(1).

Guerra, I., Bernardez, M., Jones, M., & Zidan, S. (2005). Government workers adding value: The Ohio Workforce Development Program. Performance Improvement Quarterly, 18(3), 76–99.

Guerra-López, I. (2007). Evaluating impact: evaluation and continual improvement for performance improvement practitioners. Amherst, MA: HRD Press, Inc.


Kaufman, R. (2006a). 30 seconds that can change your life: a decision-making guide for those who refuse to be mediocre. Amherst, MA: HRD Press, Inc.

Kaufman, R. (2006b). Change, choices, and consequences: a guide to Mega thinking and planning. Amherst, MA: HRD Press, Inc.

Kaufman, R. (2005). Defining and delivering measurable value: Mega thinking and planning primer. Performance Improvement Quarterly, 18(3), 8–16.

Kaufman, R. (2000). Mega planning: practical tools for organizational success. Thousand Oaks, CA: Sage Publications. Also Planificación Mega: Herramientas practicas paral el exito organizacional (2004). Traducción de Sonia Agut. Universitat Jaume I, Castelló de la Plana, Espana.

Kaufman, R. (1998). Strategic thinking: a guide to identifying and solving problems (revised). Arlington, VA & Washington, DC: Jointly published by the American Society for Training and Development and the International Society for Performance Improvement. Also, published in Spanish: El Pensamiento Estrategico: Una Guia Para Identificar y Resolver los Problemas, Madrid, Editorial Centros de Estudios Ramon Areces, S. A.

Kaufman, R., Guerra, I., & Platt, W. A. (2006). Practical evaluation for educators: finding what works and what doesn’t. Thousand Oaks, CA: Corwin Press/Sage.

Kaufman, R., & Lick, D. (Winter, 2000–2001). Change creation and change management: partners in human performance improvement. Performance in Practice, 8–9.

Kaufman, R., Oakley-Browne, H., Watkins, R., & Leigh, D. (2003). Practical strategic planning: aligning people, performance, and payoffs. San Francisco, CA: Jossey-Bass/ Pfeiffer.

Kaufman, R., Watkins, R., & Leigh, D. (2001). Useful educational results: defining, prioritizing, and accomplishing. Lancaster, PA: Proactive Publishers.

Kaufman, R., Watkins, R., Sims, L., Crispo, N., & Sprague, D. (1997). Costs-consequences analysis. Performance Improvement Quarterly, 10(3), 7–21.

Kirkpatrick, D. L. (1994). Evaluating training programs: the four levels. San Francisco, CA: Berret-Koehler.

Mager, R. F. (1997). Preparing instructional objectives: a critical tool in the development of effective instruction (3rd ed.). Atlanta, GA: Center for Effective Performance.

Rummler, G. A. (2004). Serious performance consulting: according to Rummler. Silver Spring, MD: International Society for Performance Improvement and the American Society for Training and Development.

Sample, J. (1997, July 25). Training programs: how to avoid legal liability. Fair Employment Practices Guidelines (Supplement #436), 1–21.

Scriven, M. (1973). Goal free evaluation in school evaluation: the politics and process. E. R. House (Ed.). Berkeley, CA: McCutchan.

Scriven, M. (1967). The methodology of evaluation. In R. Tyler, R. M. Gagne, and M. Scriven, Perspectives of Curriculum Evaluation (AERA Monograph Series on Curriculum Evaluation). Chicago, IL: Rand McNally & Co.


Stufflebeam, D. L., Foley, W. J., Gephart, W. R., Hammon, R. L., Merriman, H. O., & Provus, M. M. (1971). Educational evaluation and decision making. Itasca, IL: Peacock.

Van Tiem, D. M., Moseley, J. L., & Dessinger, J. C. (2000). Fundamentals of performance technology: a guide to improving people, process, and performance. Silver Spring, MD: International Society for Performance Improvement.


Chapter 7 Competencies for Performance Improvement Professionals

Ingrid Guerra-López, PhD

Introduction

Today’s performance improvement practitioners represent the entire spectrum of business, industry, and the public sector, with their functions being as diverse as the organizations they come from (Dean, 1995). Along with eclecticism and expansion comes the threat of mediocrity (Gayeski, 1995). Given the speed of change in today’s society and the variety of knowledge and skill sets brought in by more specialized subsets (e.g., instructional designers, training specialists, human resource developers, organizational developers), there has been a growing disparity among practitioners’ behavior, even if they share the title of performance improvement professional (Hutchinson, 1990).

Some time ago, the International Board of Standards for Training, Performance and Instruction (IBSTPI) released the third edition of Instructional Design Competencies: The Standards (Richey, Fields, and Foxon, 2000), intended to provide instructional designers with a foundation for the establishment of professional standards. Although Sanders and Ruggles (2000) conclude, “By most accounts, HPI is an outgrowth of instructional systems design and programmed instruction,” these instructional design competencies do not cover the entire spectrum of competencies required for performance improvement professionals (Kaufman and Clark, 1999). Many of the leading figures in the performance improvement field agree that skills/knowledge is one of about four possible causes (e.g., selection, motivation, environment, and ability) of performance problems (Harless, 1970; Rummler and Brache, 1995; Rothwell and Kazanas, 1992).

According to some of the literature, the field of performance improvement must be analyzed to determine what behaviors and performances are required in order for practitioners to add demonstrable value to the field and society as a whole (Dean, 1997; Kaufman, 2000; Kaufman, 2006; Westgaard, 1988). If they are to deliver what they promise—improved performance—a logical place to start is to set performance standards for themselves. Stolovitch, Keeps, and Rodrigue (1995) agree, “Performance standards can serve as a means to officially recognize the achievements a professional has made in his or her area of practice” (683). Thus, the Performance Improvement Competency Inventory, presented here, was rigorously designed and validated with the purpose of informing and improving the practice of performance improvement professionals.

Framework

Although there are many performance improvement models in practice today, most of them can be traced back to the ADDIE model (Analysis, Design, Development, Implementation, and Evaluation). Thus, studies have used the ADDIE model as the basis for the development of proposed performance improvement competency models (see Harless, 1995; Stolovitch, Keeps, and Rodrigue, 1995). For the development of this instrument, however, the ADDIE model was modified to include assessment as a first and distinct phase in the performance improvement process (Guerra, 2001).

Generally, some performance improvement authors have either used the terms analysis and assessment interchangeably, or have affirmed that the term analysis is understood to include assessment (Rossett, 1987, 1999; Rothwell, 1996; Stolovitch, Keeps, and Rodrigue, 1995). However, others have made a clear distinction between the two (Kaufman, 1992, 2000; Kaufman, Rojas, and Mayer, 1993). Based on Kaufman’s definition (2000), needs assessment is the process that identifies gaps in results. Needs analysis, on the other hand, breaks these identified needs or problems into their components and seeks to identify root causes. Thus, needs analysis, when applied to performance improvement as the first step, assumes that what is being analyzed are in fact “needs.” Simply stated, while assessment identifies the “what,” analysis identifies the “why.” Ignoring the distinction between these two separate, but related, steps introduces the possibility of mistaking a symptom for the problem itself. Consequently, another “A” (assessment) was added to the conventional ADDIE model, resulting in the A²DDIE model (Guerra, 2003).

Finally, another element included in this model was that of social responsibility. Westgaard (1988), Kaufman (1992, 2000, 2006), Dean (1993), Kaufman and Clark (1999), Farrington and Clark (2000), and others have challenged professionals to rethink many traditional practices within the performance improvement field and consider the ethical responsibilities associated with its application. In the broader context of consulting, even mainstream consulting firms such as McKinsey (Davis, 2005) caution against the perils of not looking at and aligning organizational and societal goals. Professionals are increasingly being expected to consider the environment within which their clients exist and account for the impact organizational performance may have on this environment. Thus, the A²DDIE model also includes the societal impact of organizational performance (Guerra, 2003).

Instrument Validation

Content validity was ensured by an expert panel consisting of four leading figures in the performance improvement field. All had published extensively in the performance improvement area, and three of the four experts were past presidents of the International Society for Performance Improvement (ISPI). Each expert panelist was sent a content validation packet, which included a brief overview of the framework and operational definitions of the A²DDIE model and each of its phases. They were asked to systematically review the list of competencies and determine whether it was representative of those required of performance improvement professionals. Using a matching task approach (Rovinelli and Hambleton 1977, cited in Fitzpatrick, 1981), they were specifically asked to categorize each of the competencies into the corresponding domain (assessment, analysis, design, development, implementation, and evaluation). Secondly, using a 5-point scale (1 = slightly relevant, 2 = somewhat relevant, 3 = relevant, 4 = very relevant, and 5 = extremely relevant), they were asked to rate how relevant each competency was to the corresponding domain they indicated.
Lastly, they were asked general questions regarding the adequacy of the list of competencies. Items classified into their intended category with a rating of 3 or higher by at least three of the four judges were included in the questionnaire.
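A minimal sketch of that inclusion rule follows, as an illustration only; the function name, argument names, and example judgments are assumptions rather than a reconstruction of the authors’ actual validation records.

    # Illustrative sketch of the inclusion rule above: keep a competency only if at
    # least three of the four judges both placed it in its intended domain and rated
    # its relevance 3 or higher.
    def include_item(intended_domain, judgments, min_judges=3, min_rating=3):
        """judgments is a list of (domain, relevance_rating) pairs, one per judge."""
        agreeing = sum(1 for domain, rating in judgments
                       if domain == intended_domain and rating >= min_rating)
        return agreeing >= min_judges

    # Example: three of four judges place the item in "assessment" with rating >= 3.
    print(include_item("assessment", [("assessment", 5), ("assessment", 4),
                                      ("analysis", 2), ("assessment", 3)]))  # True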


How to Complete This Inventory

Please indicate how often you (a) currently apply and (b) should apply each of the competencies listed below in your job as a performance improvement professional. Please provide two responses for each question:

Performance Improvement Competency Inventory [25]

What Is (left column): indicate with what frequency you are currently applying this competency. What Should Be (right column): indicate with what frequency you think you should be applying this competency. Each is rated: Never; Rarely; Sometimes; Usually; Always.

1. Interview stakeholders (e.g., managers, job incumbents, community members) to define, in measurable terms, possible performance-related needs.

2. Avert “premature solutions” offered by stakeholders by focusing first on performance needs (i.e., gaps in results).

3. Obtain approval to conduct needs assessment.

4. Assess impact (i.e., value added) of organizational performance on society.

5. Assess performance gaps at the organizational level.

6. Assess team performance gaps.

7. Assess individual performance gaps.

8. Prepare valid and reliable analysis instruments (e.g., surveys, interviews, etc.).

9. Conduct job analysis.

10. Conduct task analysis.

11. Analyze performer characteristics.

12. Analyze characteristics of performance environment.

13. Conduct content analysis of relevant organizational records and documents.

14. Apply appropriate data analysis techniques.

[25] Based on Guerra, I. (2003). Key competencies required of performance improvement professionals. Performance Improvement Quarterly, 16(1).


15. Develop frameworks as models for improving performance.

16. Identify obstacles to required performance.

17. Determine the type of intervention required from all possible alternatives.

18. Determine the resources (e.g., time, money, people) required for the intervention(s).

19. Estimate cost and consequences associated with closing each performance gap.

20. Estimate cost and consequences associated with ignoring each performance gap.

21. Prioritize performance gaps on the basis of the estimated cost and consequences associated with closing them versus the cost and consequences of ignoring them.

22. Clearly and succinctly explain performance-related needs data to the client.

23. Explain interactions between performance-related needs at all organizational levels.

24. Obtain agreement on what results will be delivered by a performance improvement intervention based on performance-based needs.

25. Obtain agreement on what each partner (e.g., performance improvement professionals, organizational partners) is to deliver and when.

26. Review performance analysis report prior to designing an intervention.

27. Involve all stakeholders in the design of interventions.

28. Apply systematic research-based design principles.

29. Identify and prioritize intervention requirements.


30. Sequence required performance intervention results and activities.

31. Specify performance improvement tactics appropriate for intervention.

32. Anticipate barriers to successful implementation.

33. Derive an implementation plan based on intervention requirements.

34. Derive an implementation plan based on organizational dynamics.

35. Derive an evaluation/continuous improvement plan based on pre-specified performance objectives.

36. Review design specifications before developing an intervention.

37. Determine if an already existing intervention would meet performance requirements.

38. Purchase performance improvement intervention if already available.

39. Adapt or supplement purchased performance improvement interventions if required.

40. If required performance intervention does not already exist, produce the intervention according to design specifications.

41. Monitor performance intervention development activities as required.

42. Continuously evaluate the development of performance improvement interventions.

43. Review implementation plan.

44. Revise implementation plan as required.

45. Communicate to those affected by the intervention the associated benefits and risks.


46. Grant necessary authority with responsibility when assigning roles for the implementation process.

47. Assist in implementation of performance interventions as required.

48. Monitor implementation activities.

49. Review evaluation plan prior to evaluation activities.

50. Revise evaluation plan as required.

51. Evaluate organizational impact (i.e., value added) on society.

52. Evaluate attainment of prespecified organizational-level performance objectives.

53. Evaluate attainment of prespecified team performance objectives.

54. Evaluate attainment of prespecified individual performance objectives.

55. Develop recommendations concerning what must be improved to maintain required performance.

56. Develop recommendations concerning what must be maintained to improve performance.

57. Develop recommendations concerning what must be abandoned to improve performance.

58. Present recommendations regarding applying evaluation results in a revision process for continuous improvement.

Note: Online services for this survey are available at www.e-valuate-it.com/instruments/RKA/. Relevant demographic information should also be collected.

Analysis and Interpretation

The Performance Improvement Competency Inventory is designed to allow performance technologists to gauge the gaps between their ideal and current practices. As with the previous instruments, this one is based on a dual-response column format that allows the detection of gaps for each item.


The gaps should be analyzed in light of the magnitude of their discrepancy, the relative position on the Likert scale continuum, and the direction in which the gap is located (i.e., is it a negative or a positive gap). If you apply this questionnaire not just to yourself but to a group of performance improvement professionals, an analysis of the demographic characteristics will also be very meaningful.

For example, work responsibilities may be one of the demographic characteristics you collect data for. Because the work responsibilities of some professionals may be almost exclusively centered on a particular phase (e.g., a job specialty such as that of designer), the different phases/dimensions of the instrument should reveal an accurate reflection of gaps in the relevant task competencies. For instance, if you are a designer of interventions whose work in a project begins after assessment and analysis have been conducted by another team, then you may want to pay particular attention to the gaps in the design phase. Likewise, your practice may require you to be a generalist and be involved throughout the entire process. In this case, you would benefit from examining the full inventory of competencies.

As you interpret the gaps, consider the following:

• “What Should Be” responses indicate the perceived importance of that competency to the respondent. A high score indicates they think it is very important; conversely, a low score indicates they do not think it is very important.

• “What Is” responses indicate respondent perception about their current behavior. No matter how honest respondents feel they are being in answering items, those interpreting the results should keep in mind that this is reality according to them. Were a third party to observe their actual behavior, the conclusions might be different.

• Large positive gaps indicate that they don’t feel they carry out these tasks as often as they think they should.

• Large negative gaps indicate that they carry out these tasks more often than they think they should.

Of course, the relative position of responses will also add to the complexity of the potential interpretations.
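One way to make these interpretations systematic is to compute the signed gap for each item and average it by phase, using the phase-to-item ranges given in the interpretation table that follows. The sketch below is an illustration only; the dictionary layout, function name, and use of a simple average are assumptions, not part of the inventory.

    # Illustrative sketch: compute signed gaps ("What Should Be" minus "What Is")
    # for each competency and average them by A2DDIE phase, using the item ranges
    # from the interpretation table below.
    PHASES = {
        "Assessment": range(1, 8),        # Items 1-7
        "Analysis": range(8, 26),         # Items 8-25
        "Design": range(26, 36),          # Items 26-35
        "Development": range(36, 43),     # Items 36-42
        "Implementation": range(43, 49),  # Items 43-48
        "Evaluation": range(49, 59),      # Items 49-58
    }

    def phase_gap_summary(responses):
        """responses maps item number -> (what_is, what_should_be) frequency ratings."""
        summary = {}
        for phase, items in PHASES.items():
            gaps = [responses[i][1] - responses[i][0] for i in items if i in responses]
            # A positive average suggests these competencies are applied less often
            # than the respondent believes they should be.
            summary[phase] = sum(gaps) / len(gaps) if gaps else None
        return summary

    # Example with made-up ratings (1 = Never ... 5 = Always) for a few items.
    print(phase_gap_summary({1: (2, 5), 4: (1, 5), 9: (3, 3), 27: (4, 2)}))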


Some Thoughts on Interpretation

Assessment (Items 1–7): Large positive gaps in this phase represent a lack of focus on identifying or verifying performance problems and opportunities. Confirming performance problems and opportunities may be important even for designers and developers who want to ensure their solutions will meet the client’s requirements. A particularly low score on Item 4 for both “What Should Be” and “What Is” (and therefore a small or nonexistent gap) means that the respondent sees no connection between the value of organizational contributions and societal needs and requirements. If the score is high for “What Should Be” and low for “What Is” (i.e., a high positive gap), it could mean that though they think it is important, they do not feel empowered to address those needs (whether because of lack of authority, resources, etc.).

Analysis (Items 8–25): Large positive gaps here represent a lack of focus on identifying the factors causing the gaps. Because these factors are the basis for selecting the right performance solutions, large gaps here are also dangerous. Again, confirmation of performance problems and their causes informs good design, development, implementation, and evaluation.

Design (Items 26–35): The items included in the design phase are meant to ensure a sound design. Large gaps in any of these items can jeopardize the utility and success of the design.

Development (Items 36–42): These items are meant to guide the most efficient and effective development process possible. Large gaps in any of these may signal redundancies, unnecessary expenses, and extra steps in the development of a solution.

Implementation (Items 43–48): The implementation items provide a road map for how to ensure that the developed solution will actually be adopted by the end users and any other stakeholders. If assessment is about change creation, implementation is, in large part, about change management. Large positive gaps in this section may signal a solution that will fail to render desired results, no matter how appropriate and well designed.


Evaluation (Items 49–58): Finally, evaluation tasks are critical in ensuring that everything done up to this point was worthwhile. If there are large gaps in this phase, there may not be relevant, reliable, and valid data to prove that the efforts and results (whether positive or negative) were in fact worth the expense.

Related References

Davis, I. (2005). The McKinsey Quarterly, 3.

Dean, P. (1997). Social science and the practice of performance improvement. Performance Improvement, 10(3), 3–6.

Dean, P. (1995). Examining the practice of human performance technology. Performance Improvement Quarterly, 8(2), 17–39.

Dean, P. (1993). A selected review of the underpinnings of ethics for human performance technology professionals—Part one: key ethical theories and research. Performance Improvement Quarterly, 6(4), 6–32.

Farrington, J., & Clark, R. E. (2000). Snake oil, science, and performance products. Performance Improvement, 39(10), 5–10.

Fitzpatrick, A. (1981). The validation of criterion-referenced tests. Amherst, MA: University of Massachusetts.

Gayeski, D. (1995). Preface to the special issue. Performance Improvement Quarterly, 18(2), 6–16.

Guerra, I. (2003). Key competencies required of performance improvement professionals. Performance Improvement Quarterly, 16(1).

Guerra, I. (2001). Performance improvement based on results: Is our field adding value? Performance Improvement, 40(1), 6–10.

Guerra-López, I. (2007). Evaluating impact: evaluation and continual improvement for performance improvement practitioners. Amherst, MA: HRD Press, Inc.

Harless, J. (1995). Performance technology skills in business: implications for preparation. Performance Improvement Quarterly, 8(4), 75–88.

Harless, J. (1970). An ounce of analysis is worth a pound of objectives. Newnan, GA: Harless Performance Guild.

Hutchinson, C. (1990). What’s a nice P.T. like you doing? Performance & Instruction, 29(9), 1–5.

Kaufman, R. (2006). Change, choices, and consequences: a guide to Mega thinking and planning. Amherst, MA: HRD Press, Inc.

Kaufman, R. (2000). Mega planning. Thousand Oaks, CA: Sage Publications.


Kaufman, R. (1992). Strategic planning plus: an organizational guide (revised). Newbury Park, CA: Sage.

Kaufman, R., & Clark, R. (1999). Re-establishing performance improvement as a legitimate area of inquiry, activity, and contribution: rules of the road. Performance Improvement, 38(9), 13–18.

Kaufman, R., Rojas, A., & Mayer, H. (1993). Needs assessment: a user’s guide. Englewood Cliffs, NJ: Educational Technology.

Richey, R., Fields, D., & Foxon, M. (2000). Instructional design competencies: the standards. Iowa City, IA: International Board of Standards for Training, Performance, and Instruction.

Rossett, A. (1999). Analysis for human performance technology. In H. D. Stolovitch and E. J. Keeps (Eds.) Handbook for human performance technology (2nd ed.). San Francisco, CA: Jossey-Bass Pfeiffer.

Rossett, A. (1987). Training needs assessment. Englewood Cliffs, NJ: Educational Technology.

Rothwell, W. (1996). ASTD models for human performance improvement: roles, competencies, and outputs. Alexandria, VA: The American Society for Training and Development.

Rothwell, W., & Kazanas, H. (1992). Mastering the instructional design process: a systematic approach. San Francisco, CA: Jossey-Bass Publishers.

Rummler, G. A. (2004). Serious performance consulting: according to Rummler. Silver Spring, MD: International Society for Performance Improvement and the American Society for Training and Development.

Rummler, G. A., & Brache, A. P. (1995). Improving performance: How to manage the white space on the organization chart (2nd ed.). San Francisco, CA: Jossey-Bass Publishers.

Sanders, E., & Ruggles, J. (2000). HPI Soup. Training & Development, 54(6).

Stolovitch, H., & Keeps, E. (1999). What is human performance technology? In H. D. Stolovitch and E. J. Keeps (Eds.) Handbook for human performance technology (2nd ed.). San Francisco, CA: Jossey-Bass Pfeiffer.

Stolovitch, H., Keeps, E., & Rodrigue, D. (1995). Skill sets for the human performance technologists. Performance Improvement Quarterly, 8(2), 40–67.

Westgaard, O. (1988). A credo for performance technologists. Western Springs, IL: International Board of Standards for Training, Performance and Instruction.


Chapter 8 Performance Motivation

Doug Leigh, PhD

Introduction

Goals are thought to influence behavior (Zaleski, 1987) and can be considered thoughts related to results that are required in the future. Kaufman (1998, 2006) defines goal statements as general aims, purposes, or intents in nominal or ordinal scales of measurement which, unlike performance objectives, do not specify evaluation criteria nor the means by which the goal will be achieved.

Expectancies of Success and the Value of Accomplishment

Understanding choice and the factors influencing the decisions that individuals make is the aim of human motivation theory (Weiner, 1992). Motivation can be defined as those factors that lead individuals to act in explicit, goal-directed ways (Settoon, 1998). Contrary to older approaches to human motivation (such as psychoanalytic, ethnological, sociobiological, drive, and gestalt theories), contemporary investigation into human behavior has tended to focus on models that are concerned with rational, cognitive choice between alternative courses of action (Weiner, 1992).

Expectancy-value theory is often viewed as the most promising means by which to predict individuals’ future goal-directed behavior (Kuhl, 1982). Also known as “subjective expected utility,” expectancy-value theory simply provides a means for predicting behavior by considering the judgments individuals make regarding a goal’s value, as well as their anticipated likelihood of successfully achieving that goal. Three precursors of expectancy-value theory have been responsible for this explanation of motivation: Lewin’s (Lewin, Dembo, Festinger, and Sears, 1944) “resultant valence theory,” Atkinson’s (1964) “achievement motivation,” and Rotter’s (1954) “social learning theory.” All three theories share the assumption that the actions individuals take regarding goals depend on the assumed likelihood that their action will lead to the goal as well as the value individuals ascribe to the payoffs of accomplishing that goal (Weiner, 1992).

Whereas individuals’ commitment to goals of their own creation is presumed to be at least partially grounded in attaining desirable consequences or avoiding undesirable ones (Heider, 1958), externally imposed goals should first be evaluated both in terms of the likelihood of successful accomplishment and in terms of subjective value. Supporting this notion, Hollenbeck and Klein (1987) point out that goal acceptance is a “necessary prerequisite to goal setting’s salutary effects on performance” (2).

Performance Motivation

The decision to “accept” goals whose immediate payoff may manifest not just for the individual, but instead benefit her or his team, organization, or community, is particularly critical in today’s workplace. The importance of defining, linking, and taking steps to accomplish results at various levels—individually and in teams, within the organization itself, and for external clients and the community—is essential to the continued success of any organization (Nicholls, 1987; Senge, 1990).


Kaufman (1998, 2006) suggests that if organizations are not useful to society, they will invariably fail. Further, as House and Shamir (1993) point out, the “articulation of an ideological goal as a vision for a better future, for which followers have a moral claim, is the sine qua non of all charismatic, visionary theories [of leadership]” (97). Kaufman’s (1998, 2000, 2006) “Organizational Elements Model” (OEM) provides a useful framework for stratifying results according to the differing clients and beneficiaries of organizational action. This model distinguishes what an organization uses (Inputs) and does (Processes) from the results it yields to three distinct (but often related) stakeholders—individual employees and the teams they work within (Micro level), the organization as a whole (Macro level), and/or external clients and society (Mega level). Results at the Micro level are called “Products,” at the Macro level “Outputs,” and at the Mega level “Outcomes.”

In 1984 Naylor, Pritchard, and Ilgen coined the term performance motivation to refer to the “multiple processes by which individuals allocate personal resources, in the form of time and effort, in response to anticipated [results] or consequences” (159). In this vein, the instrument presented in this chapter operationalizes performance motivation as the perceived likelihood and utility of Input, Process, Micro (individual and team), Macro (organizational), and Mega (external client and societal) goal statements. In keeping with this framework, the following goal-directed commitments are measured by the instrument presented in this chapter.

Input Performance Motivation

At the Input level of the OEM, goal-directed behavior seeks to ensure the availability and quality of organizational resources (Kaufman, 1998, 2000, 2006). Inputs are those resources that an organization has at its disposal in its Processes in order to deliver useful ends. Input performance motivation, then, is the degree of effort in which individuals engage in order to ensure the availability and quality of the human, capital, and physical resources (Kaufman, Watkins, and Leigh, 2001).

Process Performance Motivation

Goal-directed behavior at the Process level of the OEM concerns the acceptability and efficiency of the methods, activities, interventions, programs, projects, and initiatives that an organization can or does use (Kaufman, 1998, 2000, 2006). These processes are the ways and means by which individuals within organizations employ Inputs in order to deliver useful results. Process performance motivation, then, is the degree of effort in which individuals engage to ensure the acceptability and efficiency of the organizational methods, activities, interventions, programs, and initiatives (Kaufman, Watkins, and Leigh, 2001).

Individual Performance Motivation (Micro Level)

At the Micro level of results, the goal-directed behavior of individuals is intended to deliver positive results for themselves or their teams (Kaufman, 1998, 2000, 2006). Individual performance motivation concerns goals whose consequences are of benefit to the individual performer her-/himself (rather than the teams with which the individual may be involved). Such results include personal effectiveness, the acquisition of competencies required for a task, positive performance evaluations, and promotion. Individual performance motivation is, then, the subjective appraisals that individuals make regarding goals whose accomplishment is intended to benefit primarily oneself.


Team Performance Motivation (Micro Level)

As opposed to individual performance motivation, team performance motivation refers to goal-directed behavior that individuals undertake with the intention of delivering positive results for the teams with which they are involved (Kaufman, 1998, 2000, 2006). These teams may be departments, work groups, subdivisions, work centers, or other groups of which the individual is a part, but do not commonly include programs, agencies, commands, associations, businesses, or larger organizational entities. Results that are of benefit to teams include such measures as team effectiveness, value added to other teams within the organization, team products that meet internal quality standards, and useful team products delivered to the organization or other internal teams. Team performance motivation is, then, the subjective appraisals individuals make regarding goals whose accomplishment is intended to benefit primarily the team.

Organizational Performance Motivation (Macro Level)

The focus of organizational performance motivation is the consequences derived from the accomplishment of goals intended to be of benefit primarily to the organization as a whole (Kaufman, 1998, 2000, 2006). To this end, intact programs, agencies, commands, associations, businesses, and institutions can be considered examples of “organizations.” Results at this level may include organizational effectiveness, accomplishment of the organization’s mission objective, increased market share, organizational outputs that meet the quality standards of external recipients, and accomplishment of the goals of planned partnerships with other organizations. Organizational performance motivation is, then, the subjective appraisals individuals make regarding goals whose accomplishment is intended to benefit primarily the organization itself.

External-Client Performance Motivation (Mega Level)

The Mega level of results concerns the consequences that an organization delivers outside of itself to external clients and society (Kaufman, 1998, 2000, 2006). External-client performance motivation refers to goals whose accomplishments are of benefit to an organization’s direct external clients (rather than society at large). Results of benefit to external clients include customer value-added, profit over time, and cost-effective client outcomes. External-client performance motivation is, then, the subjective appraisals individuals make regarding goals whose accomplishment is intended to benefit primarily external clients.

Societal Performance Motivation (Mega Level)

While external-client performance motivation primarily concerns value added to an organization’s customers, societal performance motivation refers to goal-directed behavior intended to be of benefit primarily to the community and world as a whole, now and in the future (Kaufman, 1998, 2000, 2006). Societal performance motivation transcends the individual, his or her team, organization, and direct external clients, instead considering stakeholders who may or may not be immediate recipients of an organization’s outputs. Results at this level include consumption being less than production, self-sufficiency, self-reliance, and not being under the care, custody, or control of another person, agency, or substance. Societal performance motivation is, then, the subjective appraisals individuals make regarding goals whose accomplishment is intended to benefit primarily society.


Description of the Performance Motivation Inventory—Revised (PMI-A)

The PMI-A is a 28-item questionnaire designed to provide an indication of individuals’ intentions to pursue goals aimed at enhancing individual, organizational, and external client and societal performance (based on the Organizational Elements Model of Kaufman [1998, 2000, 2006]). The PMI-A solicits two appraisals for each item to gauge (a) the value of the results coming from goal accomplishment and (b) the anticipated probability that goal accomplishment will lead to such consequences.

Expectancy—the anticipated likelihood that goal accomplishment will lead to valued consequences—is measured on a 5-point response scale from 0 (“None at all”) to 4 (“A great deal”). Valence—subjective appraisals of the value of consequences coming from goal accomplishment—is measured on a 5-point response scale using the same anchors as for expectancy. According to expectancy-value theory (Weiner, 1992), individuals are more likely to take steps to accomplish goals with a positive multiplicative product of these two scores (a derived score of up to 16) and to take no action on goals that yield a product of zero.
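Stated formally (a minimal formalization using symbols of our own choosing, which do not appear in the instrument itself), the derived performance motivation score for each goal statement is the product of its expectancy and valence appraisals:

\[
M_i \;=\; E_i \times V_i, \qquad E_i, V_i \in \{0, 1, 2, 3, 4\}, \qquad 0 \le M_i \le 16 .
\]

Because each level of the Organizational Elements Model is represented by four goal statements, the total for a level is the sum of its four item products and therefore ranges from 0 to 64, as computed in the scoring table later in this chapter.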

Purpose of the PMI-A

Rather than measuring commitment to a single goal, expectancy-value serves as a predictor of behavior that—when implemented across various competing goals—provides an indicator of one’s likelihood of adopting and enacting steps to achieve each goal (Weiner, 1992). Items within the PMI-A are stratified according to the various levels of the OEM and provide an indication of individuals’ likelihood of pursuing or abandoning goals related to these levels. By implementing the PMI-A on an ongoing basis, organizations can identify and bolster the motivators necessary to accomplish goals at each level of the OEM.

Directions for Completing the PMI-A

While not all items must be answered to complete the PMI-A, the fewer items that are answered, the less accurate the results will be. It is recommended that respondents think about the questions and weigh them before answering. Some individuals instead answer with their first gut reaction rather than deliberating on the items. While the approach taken is ultimately up to the respondent, the more deliberate approach is recommended.

For each of the 28 items in the questionnaire, please provide two responses for each of the goals listed. To the left, rate how important you believe each of the goals to be. To the right, rate how influential you believe your efforts are to the accomplishment of each of the goals listed.


Performance Motivation Inventory

Importance of the Goal (left-hand column): To the left, rate how important you believe each of the goals below to be, on a scale from “None at all” to “A great deal.”

Impact of Your Effort (right-hand column): To the right, rate how influential you believe your efforts are to the accomplishment of each of the goals below, on the same scale from “None at all” to “A great deal.”

1. Ensuring that the resources and materials necessary to get jobs done right are available

2. Meeting quality standards for all jobs and tasks

3. Demonstrating your effectiveness to supervisors

4. Contributing to the team’s quality of life at work

5. Your organization completing projects at or beyond agreed-upon criteria for success

6. Successful joint ventures with your organization’s partners

7. Ensuring that your organization’s outputs contribute to a better tomorrow

8. A climate of mutual respect between all employees

9. Having active participation in delegated tasks

10. Receiving positive performance evaluations from your supervisor

11. Completing team deliverables that meet or exceed quality standards

12. Accomplishing your organization’s mission

13. Mutually beneficial interactions with your organization’s collaborators

14. Establishing a society in which every person earns at least as much as it costs to live

15. Knowing that all employees can be counted on to do what they say they’ll do

16. Maintaining compliance with work expectations

17. Accomplishing your plans for professional development

18. Improving team effectiveness


19. Progress toward accomplishment of your organization’s long-term plans

20. Successful long-term alliances with external clients

21. Improving the quality of life within society through your work

22. Maintaining a work environment that is free from discrimination

23. Ensuring adequate progress on delegated tasks

24. Completing products and deliverables that meet or exceed the quality standards of your supervisor

25. Increasing the usefulness of teams’ deliverables

26. Measurably contributing to your organization’s success

27. Adding value through cooperation with vendors and suppliers

28. Contributing to self-sufficiency and self-reliance within your organization’s outside community

Note: Online services for this survey are available at www.e-valuate-it.com/instruments/RKA/. Relevant demographic information should also be collected.

Scoring the PMI-A

Insert the values for “Importance of the goal” and “Impact of your effort” from the responses to each item into the table below. Then, within each line, multiply the “Importance” and “Impact” scores to determine the “Performance motivation” for each item. Lastly, to calculate overall performance motivation to accomplish goals related to each level of the Organizational Elements Model, simply add the “Performance motivation” scores for the four items grouped under that level (a short scripted example follows the table).


Scoring worksheet: for each item, Performance motivation = “Importance” score × “Impact” score.

Items 1, 8, 15, and 22: sum of the four products = Total Input Performance Motivation
Items 2, 9, 16, and 23: sum of the four products = Total Process Performance Motivation
Items 3, 10, 17, and 24: sum of the four products = Total Individual Performance Motivation
Items 4, 11, 18, and 25: sum of the four products = Total Team Performance Motivation
Items 5, 12, 19, and 26: sum of the four products = Total Organizational Performance Motivation
Items 6, 13, 20, and 27: sum of the four products = Total External Client Performance Motivation
Items 7, 14, 21, and 28: sum of the four products = Total Societal Performance Motivation
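The arithmetic above is simple enough to script. The following minimal Python sketch is illustrative only: the dictionary-based input format and the function and variable names are assumptions, not part of the PMI-A materials. It computes the per-item products and per-level totals, and flags level totals below 36 in line with the interpretation guidance in the next section.

```python
# Minimal sketch: scoring the PMI-A from 0-4 "Importance" and "Impact" ratings.
# The input format and all names here are illustrative assumptions.

LEVELS = {
    "Input": [1, 8, 15, 22],
    "Process": [2, 9, 16, 23],
    "Individual": [3, 10, 17, 24],
    "Team": [4, 11, 18, 25],
    "Organizational": [5, 12, 19, 26],
    "External Client": [6, 13, 20, 27],
    "Societal": [7, 14, 21, 28],
}

def score_pmi(importance, impact):
    """importance and impact map item number (1-28) to a 0-4 rating."""
    per_item = {i: importance[i] * impact[i] for i in range(1, 29)}   # up to 16 each
    totals = {level: sum(per_item[i] for i in items)                  # up to 64 each
              for level, items in LEVELS.items()}
    return per_item, totals

# Example with uniform ratings of 3 ("Importance") and 2 ("Impact") on every item:
importance = {i: 3 for i in range(1, 29)}
impact = {i: 2 for i in range(1, 29)}
per_item, totals = score_pmi(importance, impact)
for level, total in totals.items():
    # Totals below 36 are prime candidates for abandonment per the interpretation guidance.
    flag = "candidate for abandonment" if total < 36 else "likely to be pursued"
    print(f"{level}: {total} ({flag})")
```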


Interpreting Results of the PMI-A

Individual items with a high multiplicative score (up to 16) of “Importance” and “Impact” are more likely to be pursued, while those with a product closer to zero are more liable to be abandoned. Preferably, both “Importance” and “Impact” scores will be at least 3 for each item. Likewise, total performance motivation scores within each level of the Organizational Elements Model that approach 64 may be expected to be pursued, while those that are close to zero are more likely to be abandoned. Total performance motivation scores of less than 36 mark goals that are prime candidates for abandonment; if such goals are to be retained, actions should be taken to bolster individuals’ continued effort toward their accomplishment.

Related References

Atkinson, J. W. (1964). An introduction to motivation. Princeton, NJ: Van Nostrand.

Heider, F. (1958). The psychology of interpersonal relations. Hillsdale, NJ: Lawrence Erlbaum Associates.

House, R. J., & Shamir, B. (1993). Toward the integration of transformational, charismatic, and visionary theories. In Chemmers, M., & Ayman, R. (Eds.), Leadership theory and research: perspectives and directions. San Diego, CA: Academic Press, pp. 81–108.

Hollenbeck, J. R., & Klein, H. J. (1987). Goal commitment and the goal-setting process: problems, prospects, and proposals for future research. Journal of Applied Psychology, 72, 212–220.

Kaufman, R. (2006). Change, choices, and consequences: a guide to Mega thinking and planning. Amherst, MA: HRD Press, Inc.

Kaufman, R. (2000). Mega planning: defining and achieving success. Newbury Park, CA: Sage.

Kaufman, R. (1998). Strategic thinking: a guide to identifying and solving problems (revised). Arlington, VA & Washington, DC: Jointly published by the American Society for Training and Development and the International Society for Performance Improvement. Also, published in Spanish: El Pensamiento Estrategico: Una Guia Para Identificar y Resolver los Problemas, Madrid, Editorial Centros de Estudios Ramon Areces, S. A.

Kaufman, R., Watkins, R., & Leigh, D. (2001). Useful educational results: Defining, prioritizing, and accomplishing. Lancaster, PA: Proactive Publishers.

Kuhl, J. (1982). The expectancy-value approach within the theory of social motivation: Elaborations, extensions, critique. In N. T. Feather (Ed.), Expectations and actions: expectancy-value models in psychology. Hillsdale, NJ: Erlbaum.

Lewin, K., Dembo, T., Festinger, L., & Sears, P. (1944). Level of aspiration. In Hunt, J. M. (Ed.). Personality and the behavior disorders (333–338). Oxford: Ronald Press.

Leigh, D. (2000). Causal-utility decision analysis (CUDA): quantifying SWOTs. In Biech, E. (Ed.), The 2000 annual, volume 2, consulting. San Francisco, CA: Jossey-Bass/Pfeiffer, 251–265.

Levenson, H. (1972). Distinctions within the concept of internal-external control: development of new scale. Proceedings, 80th Annual Convention, APA.


Naylor, J. C., Pritchard, R. D., & Ilgen, D. R. (1984). A theory of behavior in organizations. New York: Academic Press.

Nicholls, J. (1987). Leadership in organisations: Meta, macro, and micro. European Management Journal, 6(1), 16–25.

Rotter, J. B. (1954). Social learning and clinical psychology. New York: Prentice-Hall.

Senge, P. M. (1990). The fifth discipline: the art and practice of the learning organization. New York: Doubleday-Currency.

Settoon, R. (1998). Management of organizations: management 351 (online). Available: http://sluweb.selu.edu/Academics/Faculty/rsettoon [1999, April 5].

Weiner, B. (1992). Human motivation: metaphors, theories, and research. Newbury Park, CA: Sage.

Zaleski, Z. (1987). Behavioral effects of self-set goals for different time ranges. International Journal of Psychology, 22, 17–38.


Chapter 9 Organizational Readiness for E-learning Success

Ryan Watkins, PhD

Introduction

In its many forms, e-learning has become an integral part of the business and public sector models that shape the training and professional development services in many of today’s organizations. As a result, there is a growing recognition that implementing an effective e-learning program requires both a model for strategic alignment of e-learning initiatives and a systemic framework for linking the various dimensions of e-learning together to ensure success. Whether e-learning in your organization includes just a limited number of vendor-purchased online courses or the year-round management of international professional development events delivered via satellite, assessing your organization’s readiness for successful e-learning is an essential step in shaping programs that lead to valuable accomplishments. Thus, find out what that readiness is before rushing into implementation.

The Organizational E-learning Readiness Self-Assessment is a first-of-its-kind tool for assisting organizational leaders and managers with the essential questions for guiding the multidimensional implementation and management of e-learning systems. The self-assessment, whether completed by individual leaders or multiple project partners, offers useful guidance in asking and answering the questions essential to e-learning success. Even if your e-learning programs have been running for years, it is always a good time to reassess the role e-learning can play in the achievement of your organization’s goals and objectives.

E-learning

In organizations around the globe, e-learning takes on many forms and functions. Although e-learning is often closely associated with Internet-based training, the learning opportunities that are predominantly mediated by electronic media (Internet, CD, DVD, satellite, digital cable, etc.) are broad and numerous. Likewise, the learning experiences offered through e-learning are as diverse as brown-bag lunches facilitated by desktop video and complete certification programs delivered on DVD. For that reason, it is essential that any decisions made with regard to the successful implementation or management of e-learning be based on systemic or holistic business models.

Foundations

The Organizational Elements Model (Kaufman and English, 1979; Kaufman, 1998, 2000, 2006b; Watkins, 2007; Kaufman, Oakley-Browne, Watkins, and Leigh, 2003) provides an ideal foundational framework for assessing the readiness of an organization to begin, or to later evaluate, its e-learning initiatives. The model provides five interlinked and essential elements for which alignment is necessary if e-learning programs (and most any other programs) are going to successfully accomplish useful results. The first three elements of the model are performance-focused and ensure the alignment of results across the internal and external partners of the organization, while the remaining two elements provide for the identification and linkage of appropriate processes and resources for the accomplishment of useful results. In planning for the successful implementation of e-learning initiatives, it is consequently critical that all five elements be assessed and reviewed. The Organizational Elements Model (OEM) is found in Chapter 1, Table 1.1 of this book.

The strategic objectives at the Mega, Macro, and Micro levels of planning provide the concrete definitions of success that are necessary for any e-learning initiative. For example, a strategic objective of an organization may be to reduce the number of consumers who permanently injure themselves using the organization’s products to less than 1 in every 50,000 in the next 5 years, with the goal of reducing it to zero in the next 15 years. This Mega level objective can then be used to guide both organizational decision making with regard to product design and the evaluation of successful design changes based on this variable. In addition, e-learning initiatives within the organization may also contribute to the successful achievement of this objective by including additional information on customer safety in online new-employee orientations and/or identifying e-learning opportunities in product engineering that focus on design safety measures. The success of these programs can then in part be measured by the successful achievements of the organization.

Using the model, decisions regarding e-learning can be aligned with both the long-term and short-term objectives of the organization, thereby ensuring that all that is used, done, produced, and delivered is adequately aligned with the successful accomplishments of clients, clients’ clients, and others in the community. The model thereby provides a systemic or holistic perspective on how e-learning can be an integral part of any successful organization when aligned with valuable accomplishments.

E-learning initiatives are complex systems with many variables that are critical to their success. As a consequence, e-learning initiatives can be viewed from eight distinct, yet closely related, dimensions that represent the following concise characteristics (based on Kahn, 2005; Watkins, 2006):

• Organizational: focuses on the alignment of strategic directions across the organization, with clients, as well as the society in which all partners exist together (for example, consists of strategic issues related to the accomplishments of organizational partners, internal clients, e-learning administration, and learner support services)

• Pedagogical: refers to issues related to goals/objectives, design approach, instructional strategies and tactics, e-learning activities, formative, summative, and goal-free evaluation, and media selection

• Technological: comprises infrastructure planning, hardware, and software issues

• Interface design: focuses on all aspects of how the learner interacts with the learning technology, instructor, and peers in the learning experience (for instance, incorporates Web page and site design, videoconference format, content design, navigation, and usability testing)


• Evaluation: relates to issues concerning assessment of learners and evaluation of instruction and learning environment; finding what works and what doesn’t and revising as required

• Management: focuses on successful maintenance of learning environments and issues of information distribution

• Resource support: examines issues related to online support and resources for learners, instructors, developers, administrators, and others

• Ethical: evaluates issues of plagiarism, social and cultural diversity, geographical diversity, learner diversity, information accessibility, etiquette, adding measurable value to our shared society, and legal issues

These dimensions provide for a holistic view of e-learning within any organization. The alliance of these eight dimensions is therefore critical to the success of e-learning when assisting the organization in making valuable contributions to clients and others. Both the Organizational Elements Model and the eight-dimensional framework for e-learning are integrated into the questions that compose the Organizational E-learning Readiness Self-Assessment (Watkins, 2006).

How to Use the Self-Assessment

The Organizational E-learning Readiness Self-Assessment was developed to guide the leaders and managers of most organizations through the diverse, yet closely related and essential, dimensions that make for a holistic business framework around e-learning. When completed by a single organizational leader, the self-assessment can be a valuable tool for identifying both areas of potential weakness in the organization’s strategies and tactics regarding e-learning and potential opportunities for building on organizational strengths. For the individual, the self-assessment may also assist in building a systemic plan for the successful implementation (or management) of an e-learning initiative at any point in its development.

The self-assessment may also be completed by multiple organizational leaders and managers in order to gain useful, diverse perspectives on the related issues. From the training department to the office for information technologies, perspectives on the many dimensions of successful e-learning can vary greatly. Whether results of the self-assessment are later aggregated or perspectives are analyzed for distinct implications, the self-assessment’s diverse dimensions can be of value to most any organization.

No matter who completes the Organizational E-learning Readiness Self-Assessment, it is important that both the “What Is” (WI) and “What Should Be” (WSB) response columns be completed. The dual-matrix self-assessment design is an essential ingredient for making guided decisions, since having data regarding both “What Is” and “What Should Be” allows for four distinct and valuable methods for analyzing the results of the self-assessment.

First, you can identify Gaps, or differences between “What Is” and “What Should Be” (Gap = WSB – WI). Using the Likert-type scale, these Gaps can illustrate perceived differences between current individual, organizational, and societal performance (WI) and the desired or required performance that is necessary for individual, organizational, and societal success (WSB).


Second, Gaps can then be classified either as Needs, when the WSB is determined to be greater than the WI, or as Opportunities, when the WI is greater than the WSB. Positive and negative Gaps can then be used to inform organizational strategic direction setting as well as daily decision making.

Third, you can identify the relative and perceived prioritization of Gaps (i.e., Needs or Opportunities) by examining the location of the Gaps along the Likert-type scale. For example, while the Gaps for two distinct questions on the self-assessment may have similar values of 2 points along the Likert-type scale, their perceived importance may vary greatly depending on whether the Gaps are between values of WSB = 3 and WI = 1, or the values of WSB = 5 and WI = 3.

Lastly, by collecting data for both dimensions of the dual-matrix self-assessment, you can assess the distinct values of either “What Is” and/or “What Should Be.” To illustrate the value of these analyses, it is important to consider the perspectives of those completing the self-assessment. If, for example, it is determined that the information technology specialists who will be responsible for maintaining the integrity and security of the e-learning platforms do not view sharing real-time feedback with learners at the same level of importance (i.e., WSB) as instructors do, then there is potential for miscommunication and differing program objectives when later decisions regarding the use of peer-to-peer file sharing technologies are made.

The Organizational E-learning Readiness Self-Assessment doesn’t attempt to reduce all e-learning decision making to a single variable, nor to a single variable within any dimension of the foundational frameworks. As a system, e-learning in any organization is complex, evolving, and multivariate. As a result, when completed, the self-assessment does not provide a single value (e.g., average = 4) on which all organizational decisions regarding e-learning can or should be based. Instead, for each group of questions within the dimensions of the self-assessment, leaders are encouraged to review the responses of the self-assessment participants using the four analysis strategies described above in order to determine how the data can best be used for results-focused decision making that can lead to valuable accomplishments.
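The first three analysis strategies are easy to automate. The following minimal Python sketch is illustrative only: the 1–5 numeric coding of the Never–Always scale and all names are assumptions, not part of the instrument. It classifies a single dual-response item as a Need or an Opportunity and records its position on the scale.

```python
# Minimal sketch: gap analysis for one dual-response self-assessment item.
# Assumes ratings are coded Never=1 ... Always=5; all names are illustrative.

def analyze_item(wsb: int, wi: int) -> dict:
    """Classify a dual-response item as a Need, an Opportunity, or no gap."""
    gap = wsb - wi
    if gap > 0:
        kind = "Need"          # "What Should Be" exceeds "What Is"
    elif gap < 0:
        kind = "Opportunity"   # "What Is" exceeds "What Should Be"
    else:
        kind = "No gap"
    return {
        "gap": gap,
        "classification": kind,
        # Position matters: a 3-to-1 gap and a 5-to-3 gap are both 2 points,
        # but they may carry very different priorities.
        "position": (wsb, wi),
    }

# Example: two items with the same 2-point gap at different positions on the scale.
print(analyze_item(wsb=3, wi=1))
print(analyze_item(wsb=5, wi=3))
```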


Organizational E-learning Readiness Self-Assessment

Each item below is rated twice. In the “What Is” column (left), indicate with what frequency you are currently applying this competency; in the “What Should Be” column (right), indicate with what frequency you think you should be applying it. Both columns use the scale Never, Rarely, Sometimes, Usually, Always.

Organizational

The organization…

1. Is committed to the long-term successful accomplishment of its clients.

2. Is committed to the long-term successful accomplishments of its clients’ clients.

3. Is committed to the long-term successful accomplishments of the communities in which it exists.

4. Is committed to providing useful results/products to clients.

5. Is committed to the contributions made through the professional development of its associates.

6. Is committed to the use of technology in the professional development of its associates.

7. Is committed to e-learning for the next 12–18 months.

8. Is committed to e-learning for the next 3–5 years.

When planning future initiatives…

9. E-learning is integrated as part of the long-term strategic plans.

10. Objectives from the most recent strategic plan for the organization are used to guide decisions regarding potential initiatives.

11. Managers make the strategic decisions and inform the necessary personnel.

12. Representative committees make recommendations that drive decision making.


Formal organizational planning for e-learning will (or does) focus on…

13. The current financial cost of e-learning.

14. Estimates of financial return-on-investment (what it costs to meet needs as compared to ignoring needs) over the next year.

15. Estimates of financial return-on-investment (what it costs to meet needs as compared to ignoring needs) over the next 2–3 years.

16. Estimates of financial return-on-investment (what it costs to meet needs as compared to ignoring needs) over the next 5–10 years.

17. The value added by the initiative to those internal to the organization (i.e., other departments or sections).

18. The value added by the initiative to clients of the organization (i.e., direct customers).

19. The value added by the initiative to the organization’s clients’ clients (i.e., the end consumer or customer).

20. The value added by the initiative to the community and society.

21. Political maneuvering within the organization for individual advancement.

22. Responding to government regulations and policies.

The requirement for e-learning…

23. Has not yet been determined in terms of results to be accomplished.

24. Is based on a formal needs assessment that focuses on identifying gaps in performance and results.

25. Is based on a formal needs assessment that focuses on what associates want with regard to training.


26. Is based on an informal needs assessment.

27. Was determined without data being collected.

28. Is in reaction to increasing budget constraints.

29. Is determined by managers and agreed upon by others.

30. Can be overridden if data supports that e-learning is not the right solution for organizational performance issues.

Pedagogical

Training content will be…

31. Based solely on subject-matter expert input.

32. Based on formal job/task analysis.

33. Based solely on previous materials used in similar courses.

34. Developed by an instructional designer.

35. Linked to the accomplishment of results by learners after the training.

36. Aligned with the necessary results defined in the needs assessment.

E-learning courses…

37. Will be self-paced and without the active participation of an instructor.

38. Will be instructor-led and include the active participation of an instructor.

39. Will be aligned with other e-learning courses to ensure synergy.

40. Will be aligned with other non–e-learning courses to ensure synergy.

41. Will have measurable performance-focused (i.e., results-based) goals and objectives.

42. Will be designed with learner characteristics taken into consideration.


43. Will be designed with consideration of the post-training performance environment.

44. Will be designed with a systematic process that develops assessments prior to content.

45. Will be designed to utilize a variety of grounded instructional strategies.

46. Will be designed to ensure that appropriate instructional tactics and methods will be selected based on instructional and performance objectives.

47. Will be designed to actively involve learners with other learners.

48. Will be developed to provide access to learners with disabilities.

49. Will be formatively evaluated before being offered to learners.

50. Will offer learners flexibility in how instruction is provided (e.g., video, Internet, paper).

51. Will offer learners flexibility in the sequence of instructional events.

Technological

The technology to be used in e-learning…

52. Is already implemented by the organization.

53. Will have to be purchased (or upgraded) in the next 12–18 months.

54. Supports a variety of media technologies (e.g., video, audio, synchronous, asynchronous).

55. Will focus primarily on synchronous media (e.g., video conferencing, real-time chat).

56. Will focus primarily on asynchronous or non–real-time media (e.g., discussion boards, e-mail).


57. Will be outsourced to organizations that provide a variety of e-learning services.

58. Will be maintained by internal technology support personnel.

Interface Design

The e-learning interface will…

59. Require unique user logins and passwords.

60. Provide learners with visual information on their progress.

61. Offer learners the opportunity to create long-term learning plans.

62. Be aligned with the learner’s training history records (or transcripts).

63. Be reviewed for accessibility by individuals with physical disabilities.

64. Be evaluated through a user testing process.

Management

The training team…

65. Has designed other successful e-learning courses.

66. Has adequate experience in the development of e-learning material.

67. Has adequate experience in the delivery of e-learning courses.

68. Has adequate experience in the management of e-learning student information.

69. Has adequate experience in assessing e-learning student success.


E-learning instructors…

70. Will have access to a variety of options for communicating with learners.

71. Will receive training on using the e-learning technology.

72. Will receive training on interacting with learners online.

73. Will receive training on designing effective instructional activities.

74. Will have time to provide individualized feedback to learners throughout the e-learning course.

The associates who will be taking e-learning courses will…

75. Have prior experience in taking e-learning courses.

76. Have the study skills necessary for success in this new learning environment.

77. Have the technology skills necessary for successfully participating in e-learning courses.

78. Have access to all the technology necessary for participating and being successful in e-learning courses.

79. Be able to benefit from the flexibility in time and location that e-learning offers.

80. Be given the opportunity to evaluate their readiness for taking e-learning courses.

81. Be given the opportunity to learn about what is required for success in the e-learning environment prior to taking an e-learning course.

82. Be given the opportunity to practice effective online communication skills prior to taking an e-learning course.


83. Be given the opportunity to practice building online communities prior to taking an e-learning course.

84. Have the time management skills necessary for success in e-learning.

85. Have the work-release time necessary for success in e-learning.

86. Have the support of the supervisors in completing e-learning courses.

Resource Support

The associates who will be taking e-learning courses will…

87. Have access to specialized technology support personnel.

88. Have access to content support staff.

89. Be able to get support services in multiple languages.

E-learning instructors will…

90. Have access to specialized technology support personnel.

91. Have access to the e-learning course developers.

The training team will…

92. Have access to specialized technology support personnel.

93. Have access to development technologies (e.g., digital media converters) necessary to create useful learning experiences.

94. Have access to subject-matter experts to inform, review, and evaluate e-learning course content.


Ethical

The organization will…

95. Develop and communicate comprehensive plagiarism and/or code of conduct policies regarding e-learning.

96. Enforce plagiarism violations when documented.

97. Develop and communicate comprehensive plagiarism and/or code-of-conduct policies regarding the use of e-learning technologies.

The associates who will be taking e-learning courses will…

98. Acknowledge their review of the organization’s plagiarism and/or code-of-conduct policies.

99. Be informed of the consequences of violating the plagiarism and/or code-of-conduct policies.

100. Have access to information on the expected etiquette within e-learning courses.

101. Apply expected etiquette within e-learning courses.

E-learning instructors will…

102. Inform learners of the organization’s plagiarism and/or code-of-conduct policies.

103. Inform learners of their etiquette expectations.

104. Communicate the sanctions for violations and their consequences.

Evaluation and Continual Improvement

E-learning courses…

105. Will be formatively evaluated before being made accessible for learner use.

106. Will be formatively evaluated at least one time after learners have taken the course.


107. Will receive feedback from learners regarding the e-learning media.

108. Will receive feedback from learners regarding the course content.

109. Will receive feedback from learners regarding the instructor.

110. Results of learner transfer of skills to the workplace will be evaluated.

111. Organizational results linked to the skills taught in e-learning courses will be evaluated.

112. Client results linked to the skills taught in e-learning courses will be evaluated.

113. Evaluation data will be used for continual improvement.

The e-learning initiative will be evaluated…

114. For its ability to provide associates with necessary skills, knowledge, attitudes, and abilities for success.

115. For its contribution to the required results of the organization.

116. For its contribution to the required results of clients.

117. For its contributions to our shared communities and society.

118. For its preparation of trainers for teaching e-learning courses.

119. For its preparation of associates for learning in the e-learning environment.

120. For its effective use of available technologies.

121. For its efficient use of resources.

Note: Online services for this survey are available at www.e-valuate-it.com/instruments/RKA/. Relevant demographic information should also be collected.


Scoring and Analysis of Results

Scoring of the Organizational E-learning Readiness Self-Assessment can be done either with the assigned values of individual instruments or with values aggregated across multiple instruments to provide average group values. The suggested analysis for this instrument follows that of the others previously mentioned. Each of the four types of analyses—discrepancy, direction, position, and demographics—should be applied to gain maximum understanding of the meaning of each item and its associated dimension.

Interpreting Results for E-learning Dimensions

In interpreting the results of a self-assessment, it is important to remember that the data collected represent the perceptions of those who completed the survey. Individuals providing input on the self-assessment may have differing access to performance data, motivation, misperceptions of results, strategic objectives, values, and other characteristics that affect their assessment of “What Should Be” and “What Is.” As a result, it is most useful to corroborate the perceptions driving the self-assessment’s results with verifiable performance data that can illustrate the accuracy (or inaccuracy) of the perceptual data from the survey.

Using the data from the four analyses of the Organizational Readiness for E-learning Success Self-Assessment, results for each of the eight e-learning dimensions should be interpreted in order to identify the strategic objectives of potential performance improvement activities. Within each section (and subsection) of the self-assessment, items should be reviewed for discrepancy, direction, position, and demographics.

Note: The following items from the Organizational Readiness for E-learning Success Self-Assessment are recommended for reverse scoring (i.e., for these items it is often beneficial to have a “What Should Be” score lower than a “What Is” score). You will want to review each of these items separately in order to interpret the results of your assessment for your organization. It should be noted, however, that for some organizations, reversed scoring of the items may not be appropriate given their strategic objectives at the Mega, Macro, and Micro levels.

Items: 11, 13, 21, 22, 23, 25, 27, 28, 29, 31, and 33

The Organizational E-learning Readiness Self-Assessment includes a mix of items whose gap direction will likely vary when the analysis is complete. While many items will have an easily identified “preferred” or “better” direction that can be used in interpreting the results, there are multiple items included in the self-assessment for which the positive or negative direction of the data will have to be interpreted within the context of the organization (for example, the use of internal or external technical support staff). With regard to these items, the purpose of the self-assessment is to ensure that adequate attention has been paid to these considerations and that decision makers realize the importance of the topic to the successful implementation of an e-learning initiative.
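For readers scripting this analysis, the following minimal Python sketch shows one way to transpose the reverse-scored items listed in the note above before aggregating section scores, as discussed in the paragraphs that follow. It is illustrative only: the 1–5 numeric coding of the Never–Always scale, the input format, and all names are assumptions rather than part of the instrument.

```python
# Minimal sketch: reverse scoring and aggregation for the readiness self-assessment.
# Assumes responses are coded Never=1 ... Always=5; all names here are illustrative.

REVERSE_SCORED = {11, 13, 21, 22, 23, 25, 27, 28, 29, 31, 33}

def transpose(value: int) -> int:
    """Flip a 1-5 rating so that reverse-scored items align with the others."""
    return 6 - value

def section_average_gap(responses: dict[int, tuple[int, int]]) -> float:
    """responses maps item number to (what_should_be, what_is) ratings for one section."""
    gaps = []
    for item, (wsb, wi) in responses.items():
        if item in REVERSE_SCORED:          # transpose before aggregating
            wsb, wi = transpose(wsb), transpose(wi)
        gaps.append(wsb - wi)
    return sum(gaps) / len(gaps)

# Example: three items from one subsection, one of them reverse scored (item 13).
print(section_average_gap({13: (2, 4), 14: (5, 3), 15: (4, 3)}))
```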


It is often helpful to aggregate scores on individual items to determine the average discrepancy, direction, position, and demographics for sections and subsections of the self-assessment as well. When aggregating scores, you will want to transpose the values for the reverse-scored items before including those with the other items. While aggregated scores for sections and subsections can be useful, they should not, however, be used as a substitute for reviewing the analysis of each item in the self-assessment. In addition to using the results of the self-assessment to guide decision making during the devel-opment or reengineering of an e-learning program, the self-assessment can also be a valuable tool for evaluating the results or improving the performance of any current e-learning programs. When using the self-assessment as part of an evaluation as opposed to an assessment (see Watkins and Kaufman, 2002), you will want to provide specific directions regarding which e-learning programs should be the focus of the self-assessment. Accordingly, you will also want to define how the interpretation of results from each section, subsection, and item will be utilized to evaluate results and improve future performance. Similar analyses for discrepancy, direction, position, and demographics will be of value when using the instrument as an evaluation tool. The purpose of the self-assessment is to support useful decision making within organizations. As a result, the use of the self-assessment, the analyses performed on the resulting data, and how those results are interpreted within the organization are all processes that should be considered within the context of the decisions being made. The extent to which additional analyses of the results are performed and the time spent interpreting the results from individual items within the instrument should be guided by how the findings can inform the decision-making process. For some organizations, distinct sections or subsections of the instrument may be higher priorities than others, and as a result, additional analyses based on demographic data may be useful in sup-porting decision makers. While all eight dimensions of the e-learning framework are important to the long-term success of the initiative, each organization should determine an appropriate balance among those to support their decision making. No single section should go un-used or un-analyzed, but the time and effort spent aggregating and interpreting results should be focused on the requirements of the organization and its decision makers. Related References Kahn, B. (2005). Managing e-learning strategies: design, delivery, implementation and

Related References

Khan, B. (2005). Managing e-learning strategies: design, delivery, implementation and evaluation. Hershey, PA: Information Science Publishing.

Kaufman, R. (2006a). 30 seconds that can change your life: a decision-making guide for those who refuse to be mediocre. Amherst, MA: HRD Press, Inc.

Kaufman, R. (2006b). Change, choices, and consequences: a guide to Mega thinking and planning. Amherst, MA: HRD Press, Inc.

Kaufman, R. (2005). Defining and delivering measurable value: Mega thinking and planning primer. Performance Improvement Quarterly, 18(3), 8–16.

Kaufman, R. (2000). Mega planning: practical tools for organizational success. Thousand Oaks, CA: Sage Publications. Also Planificación Mega: Herramientas prácticas para el éxito organizacional (2004). Traducción de Sonia Agut. Universitat Jaume I, Castelló de la Plana, España.

Kaufman, R. (1998). Strategic thinking: a guide to identifying and solving problems (revised). Arlington, VA & Washington, DC: Jointly published by the American Society for Training and Development and the International Society for Performance Improvement. Also, published in Spanish: El Pensamiento Estrategico: Una Guia Para Identificar y Resolver los Problemas, Madrid, Editorial Centros de Estudios Ramon Areces, S. A.

Kaufman, R., & English, F. W. (1979). Needs assessment: concept and application. Englewood Cliffs, NJ: Educational Technology Publications.

Kaufman, R., Oakley-Browne, H., Watkins, R., & Leigh, D. (2003). Strategic planning for success: aligning people, performance, and payoffs. San Francisco, CA: Jossey-Bass.

Kaufman, R., Watkins, R., & Guerra, I. (2001). The future of distance education: defining and sustaining useful results. Educational Technology, 41(3), 19–26.

Kaufman, R., Watkins, R., & Leigh, D. (2001). Useful educational results: defining, prioritizing and achieving. Lancaster, PA: Proactive Publishing.

Watkins, R. (2007). Performance by design: the systematic selection, design, and development of performance technologies that produce results. Amherst, MA: HRD Press, Inc.

Watkins, R. (2006). Ends and means: is your organization ready for e-learning? Distance Learning Magazine, 3(4).

Watkins, R. (2005). 75 e-learning activities: making online courses more interactive. San Francisco, CA: Jossey-Bass/Pfeiffer.

Watkins, R. (2003). Determining if distance education is the right choice: applied strategic thinking in education. Computers in the Schools, 20(2), 103–120. Also in Corry, M., & Tu, C. (Eds.). Distance education: what works well. Binghamton, NY: Haworth Press.

Watkins, R. (2000). How distance education is changing workforce development. Quarterly Review of Distance Education, 1(3), 241–246.

Watkins, R., & Corry, M. (2004). E-learning companion: a student's guide to online success. New York: Houghton Mifflin.

Watkins, R., & Corry, M. (2002). Virtual universities: challenging the conventions of education. In Haddad, W., & Draxler A. (Eds.). Technologies for education: potentials, parameters and prospects. Paris: UNESCO.

Watkins, R., & Kaufman, R. (2002). Is your distance education going to accomplish useful results? 2002 Training and Performance Sourcebook. New York: McGraw-Hill, 89–95.

Watkins, R., Leigh, D., & Triner, D. (2004). Assessing readiness for e-learning. Performance Improvement Quarterly, 17(4), 66–79.

Watkins, R., & Schlosser, C. (2003). Conceptualizing educational research in distance education. Quarterly Review of Distance Education, 4(3), 331–341.

Concluding Thoughts and Suggestions

The assessment instruments presented in this book may not all be appropriate for your organization. If you are going to use any of them, make sure that the instrument fits your purposes, and that part of your purpose is to use the data to create desirable change. For example, if you suspect that the strategic planning process is significantly neglected in your organization, then that may be the best instrument to use. Once you have collected, analyzed, and interpreted the data, and they in fact confirm your initial suspicions (i.e., strategic planning is really not being done), be sure that your recommendations get implemented. Of course, clearly communicating the potential costs and consequences both of closing the identified gaps in results and of ignoring them plays a significant role in the decision-making process. Be sure that all involved understand what actions must be taken, why, and how.

Also worth noting is that each of the assessment instruments presented here is meant to serve as merely one data collection tool among various options. If appropriate to your circumstances, you are encouraged to explore other data points (besides the perceptions of those in your sample of respondents), other data sources, and other data collection methodologies. These tools enable you to better understand people's perceptions about each of the items.

It is also important to consider that people's perceptions change over time. While your survey findings may hold today, you might find different responses using the same instrument at another point in time. In fact, that is one of the most productive ways of using these tools. Consider the first implementation of any given instrument as a baseline. If the results of the assessment instruments lead to specific actions for gap closure, be sure to allow an appropriate amount of time to pass so that the impact of those actions can be seen. Then apply the instrument again to track how perceptions have shifted. It is tempting to assume that any changes in perceptions you observe can automatically be attributed to your actions; however, it is always possible that other factors contributed to such a shift, either individually or collectively. Beware of jumping to conclusions without additional evidence to support your claims.
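
As a minimal sketch of tracking such shifts, the example below compares mean item scores from a baseline administration with those from a follow-up administration. The item names and values are hypothetical, and any shift it reports still needs corroborating evidence before being attributed to your actions.

# Minimal sketch: compare baseline and follow-up mean scores item by item
# (hypothetical item names and means; the shift is descriptive, not causal evidence).
baseline = {"strategic_alignment": 2.8, "data_use": 3.1}
follow_up = {"strategic_alignment": 3.6, "data_use": 3.0}

for item, before in baseline.items():
    after = follow_up[item]
    print(f"{item}: {before:.1f} -> {after:.1f} (shift {after - before:+.1f})")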

Glossary

The increasing responsibility of professionals for the results, consequences, and payoffs of their activities has led us into a new era of professionalism. For the performance professional, this era requires a renewed focus on the scientific basis for decision making, the system approach to performance improvement and technology, as well as a consistency in language that leaves no confusion regarding the value added for individuals, organizations, and society. This glossary provides a model for defining and achieving success in the future through terms that focus on the results and payoffs for internal and external clients instead of the processes, activities, and interventions we commonly apply.

A Requirement to Deliver Value-Added for External Clients and Society

We are increasingly responsible for the results, consequences, and payoffs of our actions and inactions. We no longer have the luxury of leaving the big questions and issues to leaders, supervisors, and executives. The new era we face is defining and achieving useful results for all stakeholders, including both internal and external partners. And we must prove the value we add in terms of empirical data about what we deliver, what it accomplished, and what value it added for all stakeholders (not just the value it added to our team, our department, or our organization, but to the entire system of internal and external partners). We can no longer get away with "feel good" discussions of how we increased the efficiency or effectiveness of processes that may or may not add value to all of our clients, our clients' clients, and society.

Our Language—Terms We Use

Most of our performance improvement approaches and methods, including the language27 we use in describing our profession, commonly leave questions concerning value added unanswered. We tend to talk about means (e.g., HRD, HPT, online training programs, EPSS, CD-ROMs) and not ends (e.g., reduction in poverty, client value, legitimate profits, product quality). Our language seems almost to encourage confusions that "allow" for a lack of precision and consequences. The performance professional of the future has to know both how to improve performance and how to justify why an individual or organization should improve performance. For in addition to justifying what we use, do, accomplish, and deliver, the new reality is that we must all now prove that there are useful results for both the client and society.

From a societal perspective, value-added includes the survival, health, and well-being of all partners. Planning for and achieving results at the societal level—value-added for tomorrow's child—is termed "Mega Planning" or "Strategic Thinking" (Kaufman, 1992, 1998, 2000). It is this system or super-system (society) that best begins our planning and serves as the basis for our evaluation and continuous improvement. But to be successful in planning for and demonstrating value added, we must use words with rigor and precision. Language that is crisp, to the point, and focused on results (including societal payoffs) is essential for professional success. And then we must match our promises with deeds and payoffs that measurably add value.

___________________________
27 Danny Langdon (1999) speaks to the language of work and the importance of the terms and concepts we use and understand.

System, systems, systematic, and systemic: related but not the same

To set the framework, let's define these basic terms, relate them, and then use them to put other vocabulary in context.

system approach: Begins with the sum total of parts working independently and together to achieve a useful set of results at the societal level, adding value for all internal and external partners. We best think of it as the large whole.

systems approach: Begins with the parts of a system—sub-systems—that make up the larger "system."

It should be noted here that the "system" is made up of smaller elements, or subsystems, which can be pictured as bubbles embedded in the larger system. If we start at this smaller level, we will start with a part and not the whole. So, when someone says they are using a "systems approach," they are really focusing on one or more subsystems; unfortunately, they are focusing on the parts and not the whole. When planning and doing at this level, they can only assume that the payoffs and consequences will add up to something useful for society and external clients, and this is usually a very big assumption.

systematic approach: An approach that does things in an orderly, predictable, and controlled manner. It is a reproducible process. Doing things, however, in a systematic manner does not ensure the achievement of useful results.

systemic approach: An approach that affects everything in the system. The definition of the system is usually left up to the practitioner and may or may not include external clients and society. It does not necessarily mean that when something is systemic it is also useful.

Interestingly, these terms are often used interchangeably, yet they are not the same. Notice that when the words are used interchangeably, or when one starts at the systems level rather than the system level, we risk failing to add value to external clients and society.

Semantic quibbling? We suggest just the opposite. If we talk about a "systems" approach and don't realize that we are focusing on splinters and not on the whole, we usually degrade what we use, do, produce, and deliver in terms of adding value inside and outside of the organization. When we take a "systems" approach, we risk losing a primary focus on societal survival, self-sufficiency, and quality of life. We risk staying narrow.

A primary focus on survival, health, and well-being—the Mega level—is really important

Kaufman (2000, 2006a) and Kaufman, Oakley-Browne, Watkins, & Leigh (2003) urge that we must focus on societal payoffs—on a "system" approach—for both survival and ethical reasons by asking:

Which organizations that you personally do business with do you expect to really put client health, safety, and well-being at the top of the list of what they must deliver?

It is the rare individual who does not care whether or not the organizations that affect their lives have a primary focus and accountability for survival, health, welfare, and societal payoffs. Most people, regardless of culture, want safety, health, and well-being to be the top priority of everyone they deal with. What we do and deliver must be the same as what we demand of others. So, if we want Mega—value added for society—to be at the top of the list for others (e.g., airlines, government, software manufacturers), why don't we do unto others as we would have them do unto us? At best we give "lip service" to customer pleasure, profits, or satisfaction… and then go on to work on splinters of the whole. We work on training courses for individual jobs and tasks, and then we hope that the sum total of all of the training and trained people adds up to organizational success. We too often don't formally include external client survival and well-being in our performance plans, programs, and delivery. We rarely start our plans or programs with an "outside-the-organization" Outcome28 clearly and rigorously stated before selecting the organizational results and resources (Outputs, Products, Processes, and Inputs).

The words we use might get in the way of a societal value-added focus. To keep our performance and value-added focus, we should adjust our perspective when reviewing the literature and as we listen to speakers at meetings. Far too often we read and hear key terms used with altering (or case-specific) definitions. Many words sound familiar, and they are often so comfortable and so clearly identify us as professionals that we neglect to question the meaning or appropriateness of their use within the context. And when we apply the words and concepts inconsistently, we find that their varying definitions can abridge success. What we communicate to ourselves and others—the words and phrases—is important since it operationally defines our profession and communicates our objectives and processes to others. They are symbols and signs with meaning. When our words lead us away, by implication or convention, from designing and delivering useful results for both internal and external clients, then we must consider changing our perspectives and our definitions.

_________________________
28 As we will note later, this word is used in a fuzzy way by most people for any kind of result.

If we don't agree on definitions and communicate with common and useful understandings, then we will likely get a "leveling" of the concepts—and thus our resulting efforts and contributions—to the lowest common denominator. Let's look at some frequently used words, define each, and see how a shift in focus to a more rigorous basis for our terms and definitions will help us add value to internal and external clients.

The following definitions come from our review of the literature and other writings. Many of the references and related readings from a wide variety of sources are included at the end of the glossary. Italics provide some rationale for a possible perspective shift from conventional and comfortable to societal value added. In addition, each definition identifies whether the word or phrase relates most to a system approach, systems approach, systematic approach, or systemic approach (or a combination). The level of approach (system, systems, etc.) provides the unit of analysis for the words and terms as they are defined in this glossary. Alternative definitions should also be analyzed based on the unit of analysis. If we are going to apply system thinking (decision making that focuses on value added at the individual, organizational, and societal levels), then definitions from that perspective should be applied in our literature, presentations, workshops, and products. Here are the terms, definitions, and comments:

ADDIE model: A contraction of the conventional instructional systems steps of Analysis, Design, Development, Implementation, and Evaluation. It ignores (or simply assumes) a front-end determination, through assessment, of what to analyze, and it also assumes that the evaluation data will be used for continuous improvement.

A2DDIE model: Model proposed by Ingrid Guerra-López (2007) that adds Assessment to the ADDIE Model.

change creation: The definition and justification, proactively, of new and justified as well as justifiable destinations. If this is done before change management, acceptance is more likely. This is a proactive orientation for change and differs from the more usual change management in that it identifies in advance where individuals and organizations are headed rather than waiting for change to occur and be managed.

change management: Ensuring that whatever change is selected will be accepted and implemented successfully by people in the organization. Change management is reactive in that it waits until change requirements are either defined or imposed and then moves to have the change accepted and used.

comfort zones: The psychological areas, in business or in life, where one feels secure and safe (regardless of the reality of that feeling). Change is usually painful for most people. When faced with change, many people will find reasons (usually not rational) for why not to make any modifications. This gives rise to Tom Peters's (1997) observation that "it is easier to kill an organization than it is to change it."

costs-consequences analysis: The process of estimating a return-on-investment analysis before an intervention is implemented. It asks two basic questions simultaneously: what do you expect to give, and what do you expect to get back in terms of results? Most formulations do not compute costs and consequences for society and the external client (Mega) return on investment. Thus, even the calculations for standard approaches steer away from the vital consideration of self-sufficiency, health, and well-being (Kaufman & Keller, 1994; Kaufman, Keller, & Watkins, 1995; Kaufman, 1998, 2000).

criteria: Precise and rigorous specifications that allow one to prove what has been or has to be accomplished. Many processes in place today do not use rigorous indicators for expected performance. If criteria are "loose" or unclear, there is no realistic basis for evaluation and continuous improvement. Loose criteria often meet the comfort test, but they don't allow for the humanistic approach of caring enough about others to define, with stakeholders, where you are headed and how to tell when you have or have not arrived.

deep change: Change that extends from Mega—societal value added—downward into the organization to define and shape Macro, Micro, Processes, and Inputs. It is termed deep change to note that it is not superficial or just cosmetic, or even a splintered quick fix. Most planning models do not include Mega results in the change process, and thus miss the opportunity to find out what impact their contributions and results have on external clients and society. The other approaches might be termed superficial change or limited change in that they only focus on an organization or a small part of an organization.

desired results: Ends (or results) identified through needs assessments that are derived from soft data relating to “perceived needs.” Desired indicates these are perceptual and personal in nature.

ends: Results, achievements, consequences, payoffs, and/or impacts. The more precise the results, the more likely that reasonable methods and means can be considered, implemented, and evaluated. Without rigor for results statements, confusion can take the place of successful performance.

evaluation: Compares current status (what is) with intended status (what was intended) and is most commonly done only after an intervention is implemented. Unfortunately, evaluation is often used for blaming rather than for fixing or improving. When blame follows evaluation, people tend to avoid the means and criteria for evaluation or leave them so loose that any result can be explained away.

external needs assessment: Determining and prioritizing gaps, then selecting problems to be resolved at the Mega level. This level of needs assessment is most often missing from conventional approaches. Without the data from it, one cannot be assured that there will be strategic alignment from internal results to external value added.

hard data: Performance data that are objective and independently verifiable. This type of data is critical. It should be used along with "soft" or perception data.

Ideal Vision: The measurable definition of the kind of world we, together with others, commit to help deliver for tomorrow’s child. An Ideal Vision defines the Mega level of planning. It allows an organization and all of its partners to define where they are headed and how to tell when they are getting there or getting closer. It provides the rationality and reasons for an organizational mission objective.

Inputs: The ingredients, raw materials, and physical and human resources that an organization can use in its processes in order to deliver useful ends. These ingredients and resources are often the only considerations made during planning without determining the value they add internally and externally to the organization.

internal needs assessment: Determining and prioritizing gaps, then selecting problems to be resolved at the Micro and Macro levels. Most needs assessment processes are of this variety (Watkins, Leigh, Platt, & Kaufman, 1998).

learning: The demonstrated acquisition of a skill, knowledge, attitude, and/or ability.

learning organization: An organization that sets measurable performance standards and constantly compares its results and their consequences with what is required. Learning organizations use performance data, related to an Ideal Vision and the primary mission objective, to decide what to change and what to continue—it learns from its performance and contributions. Learning organizations may obtain the highest level of success by strategic thinking: focusing everything that is used, done, produced, and delivered on Mega results—societal value added. Many conventional definitions do not link the “learning” to societal value added. If there is no external societal linking, then it could well guide one away from the new requirements.

Macro level of planning: Planning focused on the organization itself as the primary client and beneficiary of what is planned and delivered. This is the conventional starting and stopping place for existing planning approaches.

means: Processes, activities, resources, methods, or techniques used to deliver a result. Means are only useful to the extent that they deliver useful results at all three levels of planned results: Mega, Macro, and Micro.

Mega level of planning: Planning focused on external clients, including customers/citizens and the community and society that the organization serves. This is the usual missing planning level in most formulations. It is the only one that will focus on societal value added: survival, self-sufficiency, and quality of life of all partners. It is suggested that this type of planning is imperative for getting and proving useful results. It is this level that Rummler refers to as primary processes and Brethower calls the receiving system.

Mega thinking: Thinking about every situation, problem, or opportunity in terms of what you use, do, produce, and deliver as having to add value to external clients and society. Same as strategic thinking.

methods-means analysis: Identifies possible tactics and tools for meeting the needs identified in a system analysis. The methods-means analysis identifies the possible ways and means to meet the needs and achieve the detailed objectives that are identified in a Mega plan, but does not select them. Interestingly, this is a comfortable place where some operational planning starts. Thus, it either assumes or ignores the requirement to measurably add value within and outside the organization.

Micro level planning: Planning focused on individuals or small groups (such as desired and required competencies of associates or supplier competencies). Planning for building-block results. This also is a comfortable place where some operational planning starts. Starting here usually assumes or ignores the requirement to measurably add value to the entire organization as well as to outside the organization.

mission analysis: Analysis step that identifies: (1) what results and consequences are to be achieved; (2) what criteria (in interval and/or ratio scale terms) will be used to determine success; and (3) what building-block results, and in what order of completion (functions), are required to move from current results to the desired state of affairs. Most mission objectives have not been formally linked to Mega results and consequences, and thus strategic alignment with "where the clients are" is usually missing (Kaufman, Stith, Triner, & Watkins, 1998).

mission objective: An exact performance-based statement of an organization’s overall intended results that it can and should deliver to external clients and society. A mission objective is measurable on an interval or ratio scale, so it states not only “where we are headed” but also adds “how we will know when we have arrived.” A mission objective is best linked to Mega levels of planning and the Ideal Vision to ensure societal value added.

mission statement: An organization's Macro-level "general purpose." A mission statement is only measurable on a nominal or ordinal scale of measurement; it states only "where we are headed" and leaves out the rigorous criteria for determining how one measures successful accomplishment.

need: The gap between current results and desired or required results. This is where a lot of planning goes “off the rails.” By defining any gap as a need, one fails to distinguish between means and ends and thus confuses what and how. If need is defined as a gap in results, then there is a triple bonus: (1) it states the objectives (What Should Be), (2) it contains the evaluation and continuous improvement criteria (What Should Be), and (3) it provides the basis for justifying any proposal by using both ends of a need—What Is and What Should Be in terms of results. Proof can be given for the costs to meet the need as well as the costs to ignore the need.

needs analysis: Taking the determined gaps between adjacent organizational elements and finding the causes of the inability to deliver required results. A needs analysis also identifies possible ways and means to close the gaps in results—needs—but does not select them. Unfortunately, needs analysis is often used interchangeably with needs assessment. They are not the same. How does one "analyze" something (such as a need) before knowing what should be analyzed? First assess the needs, then analyze them.

needs assessment: A formal process that identifies and documents gaps between current and desired and/or required results, arranges them in order of priority on the basis of the cost to meet each need as compared to the cost of ignoring it, and selects the problems to be resolved. By starting with a needs assessment, justifiable performance data and the gaps between What Is and What Should Be provide the realistic and rational basis both for what to change and for what to continue.
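
To illustrate the prioritization step only, here is a minimal sketch that orders hypothetical needs by comparing the estimated cost of meeting each need with the estimated cost of ignoring it; the needs, figures, and field names are invented for illustration and are not drawn from any instrument in this book.

# Minimal sketch: prioritize needs (gaps in results) by how far the cost of
# ignoring each need exceeds the cost of meeting it (all values hypothetical).
needs = [
    {"need": "customer retention gap", "cost_to_meet": 40_000, "cost_to_ignore": 250_000},
    {"need": "order-entry error rate", "cost_to_meet": 15_000, "cost_to_ignore": 30_000},
    {"need": "reporting delays",       "cost_to_meet": 60_000, "cost_to_ignore": 20_000},
]

for n in sorted(needs, key=lambda n: n["cost_to_ignore"] - n["cost_to_meet"], reverse=True):
    net = n["cost_to_ignore"] - n["cost_to_meet"]
    print(f'{n["need"]}: cost to ignore exceeds cost to meet by {net:,}')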

objectives: Precise statements of purpose, or destination: where we are headed and how we will be able to tell when we have arrived. The four parts of an objective are (1) what result is to be demonstrated, (2) who or what will demonstrate the result, (3) where the result will be observed, and (4) what interval or ratio scale criteria will be used. Loose or process-oriented objectives will confuse everyone (cf. Mager, 1997). A Mega level result is best stated as an objective.

outcomes: Results and payoffs at the external client and societal level. Outcomes are results that add value to society, community, and external clients of the organization. These are results at the Mega level of planning.

outputs: The results and payoffs that an organization can or does deliver outside of itself to external clients and society. These are results at the Macro level of planning where the primary client and beneficiary is the organization itself. It does not formally link to outcomes and societal well-being unless it is derived from outcomes and the Ideal (Mega) Vision.

paradigm: The framework and ground rules individuals use to filter reality and understand the world around them (Barker, 1992). It is vital that people have common paradigms that guide them. That is one of the functions of the Mega level of planning and outcomes so that everyone is headed to a common destination and may uniquely contribute to that journey.

performance: A result or consequence of any intervention or activity, including individual, team, or organization—an end.

performance accomplishment system (PAS): Any of a variety of interventions (such as "instructional systems design and development," quality management/continuous improvement, benchmarking, reengineering, and the like) that are results oriented and are intended to get positive results. These are usually focused at the Micro/Products level. This is my preferred alternative to the rather sterile term performance technology, which often steers people toward hardware and premature solutions (Kaufman, 1999, 2000).

processes: The means, processes, activities, procedures, interventions, programs, and initiatives an organization can or does use in order to deliver useful ends. While most planners start here, it is dangerous not to derive the Processes and Inputs from what an organization must deliver and the payoffs for external clients.

products: The building-block results and payoffs of individuals and small groups that form the basis of what an organization produces and delivers, inside as well as outside of itself, and the payoffs for external clients and society. Products are results at the Micro level of planning.

quasi-need: A gap in a method, resource, or process. Many so-called “needs assessments” are really quasi-needs assessments since they tend to pay immediate attention to means (such as training) before defining and justifying the ends and consequences (Watkins, Leigh, Platt, & Kaufman, 1998).

required results: Ends identified through needs assessment, which are derived from hard data relating to objective performance measures.

Results: Ends, Products, Outputs, Outcomes; accomplishments and consequences. Conventional usage usually misses the Outputs and Outcomes.

soft data: Personal perceptions of results. Soft data is not independently verifiable. While people’s perceptions are reality for them, they are not to be relied on without relating to “hard”—independently verifiable—data as well.

strategic alignment: The linking of Mega, Macro, and Micro level planning and results with each other and with Processes and Inputs. By formally deriving what the organization uses, does, produces, and delivers from Mega/external payoffs, strategic alignment is complete.

strategic thinking: Approaching any problem, program, project, activity, or effort by noting that everything that is used, done, produced, and delivered must add value for external clients and society. Strategic thinking starts with Mega.

system analysis: Identifies and justifies what should be accomplished based on an Ideal/Mega Vision and is results focused. It is a series of analytic steps that include Mission analysis, Function analysis, and (if selected), Task analysis. It also identifies possible methods and means (methods-means analysis) but does not select the methods-means. This starts with rolling-down (from outside to inside the organization) linkages to Mega.

systems analysis: Identifies the most effective and efficient ways and means to achieve required results. Solutions and tactics focused. This is an internal—inside the organization—process.

tactical planning: Finding out what is available to get from What Is to What Should Be at the organizational/Macro level. Tactics are best identified after the overall mission has been selected based on its linkages and contributions to external client and societal (Ideal Vision) results and consequences.

wants: Preferred methods and means assumed to be capable of meeting needs.

What Is: Current operational results and consequences. These could be for an individual, an organization, and/or for society.

What Should Be: Desired or required operational results and consequences. These could be for an individual, an organization, and/or society.

wishes: Desires concerning means and ends. It is important not to confuse wishes with needs.

Making Sense of Definitions and Their Contribution to a Mega Perspective

What can we surmise from a close consideration of the above definitions and of the possible perspective (unit of analysis) differences between conventional use and what is suggested here? Here are some observations:

1. System approach ≠ systems approach ≠ systematic approach ≠ systemic approach.
2. Mega level planning ≠ Macro level planning ≠ Micro level planning.
3. System analysis ≠ systems analysis.
4. Means ≠ ends.
5. Outcome ≠ Output ≠ Product ≠ Process ≠ Input.
6. There are three levels of planning: Mega, Macro, and Micro, and three related types of results: Outcomes, Outputs, and Products.
7. Need is a gap in results, not a gap in Process or Input.
8. Needs assessment ≠ needs analysis (nor front-end analysis or problem analysis).
9. Strategic planning ≠ tactical planning ≠ operational planning.
10. Change creation ≠ change management.

References

Barker, J. A. (1992). Future edge: discovering the new paradigms of success. New York: William Morrow & Co., Inc.

Brethower, D. (2006). Performance analysis: knowing what to do and how. Amherst, MA: HRD Press, Inc.

Guerra-López, I. (2007). Evaluating impact: evaluation and continual improvement for performance improvement practitioners. Amherst, MA: HRD Press, Inc.

Kaufman, R. (2006a). Change, choices, and consequences: a guide to Mega thinking and planning. Amherst, MA: HRD Press, Inc.

Kaufman, R. (2006b). 30 seconds that can change your life: a decision-making guide for those who refuse to be mediocre. Amherst, MA: HRD Press, Inc.

Kaufman, R. (2000). Mega planning. Thousand Oaks, CA: Sage Publications.

Kaufman, R. (1998). Strategic thinking: a guide to identifying and solving problems (revised). Arlington, VA and Washington, DC: Jointly published by the American Society for Training and Development and the International Society for Performance Improvement.

Kaufman, R. (1992). Strategic planning plus: an organizational guide (revised). Newbury Park, CA: Sage.

Kaufman, R., & Keller, J. (Winter, 1994). Levels of evaluation: beyond Kirkpatrick. Human Resources Quarterly, 5(4), 371–380.

Kaufman, R., Keller, J., & Watkins, R. (1995). What works and what doesn’t: evaluation beyond Kirkpatrick. Performance and Instruction, 35(2), 8–12.

Kaufman, R., Oakley-Browne, H., Watkins, R., & Leigh, D. (2003). Practical strategic planning: aligning people, performance, and payoffs. San Francisco, CA: Jossey-Bass/ Pfeiffer.

Kaufman, R., Stith, M., Triner, D., & Watkins, R. (1998). The changing corporate mind: organizations, visions, mission, purposes, and indicators on the move toward societal payoffs. Performance Improvement Quarterly, 11(3), 32–44.

Langdon, D., Whiteside, K., & McKenna, M. (1999). Intervention resource guide: 50 performance improvement tools. San Francisco, CA: Jossey-Bass.

Mager, R. F. (1997). Preparing instructional objectives: a critical tool in the development of effective instruction (3rd ed.). Atlanta, GA: Center for Effective Performance.

Peters, T. (1997). The circle of innovation: you can’t shrink your way to greatness. New York: Knopf.

Popcorn, F. (1991). The Popcorn report. New York: Doubleday.

Watkins, R., Leigh, D., Platt, W., & Kaufman, R. (1998). Needs assessment: A digest, review, and comparison of needs assessment literature. Performance Improvement, 37(7), 40–53.

Watkins, R., Leigh, D., Foshay, R., & Kaufman, R. (1998). Kirkpatrick plus: evaluation and continuous improvement with a community focus. Educational Technology Research and Development Journal, 46(4).

Watkins, R. (2007). Performance by design: the systematic selection, design, and development of performance technologies that produce useful results. Amherst, MA: HRD Press, Inc.

Related Readings

Banathy, B. H. (1992). A systems view of education: concepts and principles for effective practice. Englewood Cliffs, NJ: Educational Technology Publications.

Beals, R. L. (December, 1968). Resistance and adaptation to technological change: Some anthropological views. Human Factors.

Bertalanffy, L. Von (1968). General systems theory. New York: George Braziller.

Block, P. (1993). Stewardship. San Francisco, CA: Berrett-Koehler Publishers.

Branson, R. K. (August, 1998). Teaching-centered schooling has reached its upper limit: It doesn’t get any better than this. Current Directions in Psychological Science, 7(4), 126–135.

Churchman, C. W. (1969, 1975). The systems approach (1st and 2nd eds.). New York: Dell Publishing Company.

Clark, R. E., & Estes, F. (March–April, 1999). The development of authentic educational technologies. Educational Technology, 38(5), 5–11.

Conner, D. R. (1998). Building nimble organizations. New York: John Wiley & Sons.

Deming, W. E. (1972). Code of professional conduct. International Statistical Review, 40(2), 215–219.

Deming, W. E. (1986). Out of the crisis. Cambridge, MA: MIT, Center for Advanced Engineering Technology.

Deming, W. E. (May 10, 1990). A system of profound knowledge. Washington, DC: Personal memo.

Drucker, P. F. (1973). Management: tasks, responsibilities, practices. New York: Harper & Row.

Drucker, P. F. (1993). Post-capitalist society. New York: Harper Business.

Drucker, P. F. (November, 1994). The age of social transformation. The Atlantic Monthly, 53–80.

Drucker, P. F. (October 5, 1998). Management’s new paradigm. Forbes, 152–168.

Forbes, R. (August, 1998). The two bottom lines: let’s start to measure. The Quality Magazine, 7(4), 17–21.

Gilbert, T. F. (1978). Human competence: Engineering worthy performance. New York: McGraw-Hill.

Greenwald, H. (1973). Decision therapy. New York: Peter Wyden, Inc.

Gruender, C. D. (May–June, 1996). Constructivism and learning: a philosophical appraisal. Educational Technology, 36(3), 21–29.

Harless, J. (1998). The Eden conspiracy: educating for accomplished citizenship. Wheaton, IL: Guild V Publications.

Kaufman, R. (Winter, 2001). We now know we have to expand our horizons for performance improvement. ISPI News & Notes. Also a modified version “What Performance Improvement Specialists Can Learn from Tragedy: Lesson Learned from September 11, 2001” is published on the ISPI Web site (http://www.ispi.wego.net/).

Kaufman, R. (October, 1997). A new reality for organizational success: two bottom lines. Performance Improvement, 36(8).

Kaufman, R. (May–June, 1997). Avoiding the “dumbing down” of human performance improvement. Performance Improvement.

Kaufman, R. (February, 1994). Auditing your needs assessment. Training & Development.

Kaufman, R. (1992). Strategic planning plus: an organizational guide (revised). Newbury Park, CA: Sage.

Kaufman, R. (1972). Educational system planning. Englewood Cliffs, NJ: Prentice-Hall.

Kaufman, R., & Forbes, R. (2002). Does your organization contribute to society? 2002 Team and Organization Development Sourcebook, 213–224. New York: McGraw-Hill.

Kaufman, R., & Guerra, I. (March, 2002). A perspective adjustment to add value to external clients. Human Resource Development Quarterly, 13(1), 109–115.

Kaufman, R., & Lick, D. (Winter, 2000–2001). Change creation and change management: partners in human performance improvement. Performance in Practice, 8–9.

Kaufman, R., Stith, M., & Kaufman, J. D. (February, 1992). Extending performance technology to improve strategic market planning. Performance & Instruction Journal.

Kaufman, R., & Swart, W. (May–June, 1995). Beyond conventional benchmarking: integrating ideal visions, strategic planning, re-engineering, and quality management. Educational Technology, 11–14.

Kaufman, R., & Watkins, R. (Spring, 1996). Costs-consequences analysis. HRD Quarterly, 7, 87–100.

Kaufman, R., & Watkins, R. (1999). Using an ideal vision to guide Florida’s revision of the State Comprehensive Plan: a sensible approach to add value for citizens. In DeHaven-Smith, L. (ed.). Florida’s future: a guide to revising Florida’s State Comprehensive Plan. Tallahassee, FL: Florida Institute of Government.

Kaufman, R., Watkins, R., & Guerra, I. (Winter, 2002). Getting valid and useful educational results and payoffs: we are what we say, do, and deliver. International Journal of Educational Reform, 11(1), 77–92.

Kaufman, R., Watkins, R., & Leigh, D. (2001). Useful educational results: defining, prioritizing, accomplishing. Proactive Press, Lancaster, PA.

Kaufman, R., Watkins, R., & Sims, L. (1997). Costs-consequences analysis: a case study. Performance Improvement Quarterly, 10(3), 7–21.

Kuhn, T. (1970). The structure of scientific revolutions (2nd ed.). Chicago, IL: University of Chicago Press.

LaFeur, D., & Brethower, D. (1998). The transformation: business strategies for the 21st century. Grand Rapids, MI: IMPACTGROUPworks.

Langdon, D. (ed.). (1999). Intervention resource guide: 50 performance improvement tools. San Francisco, CA: Jossey-Bass.

Lick, D., & Kaufman, R. (Winter, 2000–2001). Change creation: the rest of the planning story. Planning for Higher Education, 29(2), 24–36.

Muir, M., Watkins, R., Kaufman, R., & Leigh, D. (April, 1998). Costs-consequences analysis: a primer. Performance Improvement, 37(4), 8–17, 48.

Peters, T. (1997). The circle of innovation: you can’t shrink your way to greatness. New York: Alfred A. Knopf.

Rummler, G. A., & Brache, A. P. (1990). Improving performance: how to manage the white space on the organization chart. San Francisco, CA: Jossey-Bass Publishers.

Senge, P. M. (1990). The fifth discipline: the art and practice of the learning organization. New York: Doubleday-Currency.

Triner, D., Greenberry, A., & Watkins, R. (November–December, 1996). Training needs assessment: a contradiction in terms? Educational Technology, 36(6), 51–55.

Watkins, R., Triner, D., & Kaufman, R. (July, 1996). The death and resurrection of strategic planning: a review of Mintzberg’s The Rise and Fall of Strategic Planning. International Journal of Educational Reform.

Watkins, R., & Kaufman, R. (November, 1996). An update on relating needs assessment and needs analysis. Performance Improvement, 35(10), 10–13.

Watkins, R. (2007). Performance by design: the systematic selection, design, and development of performance technologies that produce useful results. Amherst, MA: HRD Press, Inc.

About the Authors

Roger Kaufman, CPT, PhD, is professor emeritus of educational psychology and learning systems at Florida State University, where he served as director of the Office for Needs Assessment and Planning as well as associate director of the Learning Systems Institute, and where he received the Professorial Excellence Award. He is also Distinguished Research Professor at the Sonora Institute of Technology, Sonora, Mexico. In addition, Dr. Kaufman has served as Research Professor of Engineering Management at Old Dominion University and at the New Jersey Institute of Technology, and is associated with the faculty of industrial engineering at the University of Central Florida. Previously he has been a professor at several universities, including Alliant International University (formerly the U.S. International University) and Chapman University, and has also taught courses in strategic planning, needs assessment, and evaluation at the University of Southern California and Pepperdine University. He was the 1983 Haydn Williams Fellow at the Curtin University of Technology in Perth, Australia. Dr. Kaufman also serves as vice chair of the Senior Research Advisory Committee for Florida TaxWatch and is a member of the Business Advisory Council for Excelsior College.

He is a Fellow of the American Psychological Association, a Fellow of the American Academy of School Psychology, and a Diplomate of the American Board of Professional Psychology. He has been awarded the highest honor of the International Society for Performance Improvement (an organization for which he also served as president), being named "Member for Life," and has been awarded the Thomas F. Gilbert Professional Achievement Award by that same organization. He has recently been awarded ASTD's Distinguished Contribution to Workplace Learning & Performance award—only the tenth individual to be so honored—and received the U.S. Coast Guard/Department of Homeland Security medal for Meritorious Public Service. These recognitions have come from his internationally recognized contributions in strategic and tactical planning, needs assessment, and evaluation. Having worked for national defense contracting companies (Boeing, Douglas, and Martin [now Lockheed-Martin]) as a scientist and manager as well as in academia, he balances extensive scholarly expertise with a practical understanding of requirements in both the public and private sectors.

A Certified Performance Technologist (CPT), Kaufman earned his PhD in communications from New York University, with additional graduate work in industrial engineering, psychology, and education at the University of California at Berkeley and Johns Hopkins University (where he earned his MA). His undergraduate work in psychology, statistics, sociology, and industrial engineering was at Purdue and George Washington (where he earned his BA) universities. Prior to entering higher education, he was assistant to the vice president for engineering as well as assistant to the vice president for research at Douglas Aircraft Company. Before that, he was director of training system analysis at US Industries, head of training systems for the New York office of Bolt, Beranek & Newman, head of human factors engineering at Martin Baltimore, and earlier a human factors specialist at Boeing. He has served two terms on the U.S. Secretary of the Navy's Advisory Board on Education and Training.

Dr. Kaufman has published 38 books, including Change, Choices, and Consequences; 30 Seconds That Can Change Your Life; Mega Planning; and Strategic Thinking—Revised, and co-authored Useful Educational Results: Defining, Prioritizing, and Accomplishing; Practical Strategic Planning: Aligning People, Performance, and Payoffs; and Practical Evaluation for Educators: Finding What Works and What Doesn't, plus 246 articles on strategic planning, performance improvement, distance learning, quality management and continuous improvement, needs assessment, management, and evaluation.

Ingrid Guerra-López, PhD, is an assistant professor at Wayne State University, Associate Research Professor at the Sonora Institute of Technology in Mexico, and a senior associate of Roger Kaufman and Associates. Dr. Guerra-López publishes, teaches, consults (nationally and internationally across all sectors), and conducts research in the areas of organizational effectiveness, performance evaluation, needs assessment and analysis, and strategic alignment. She is co-author of Practical Evaluation for Educators: Finding What Works and What Doesn't with Roger Kaufman and Bill Platt and has also published chapters in the 2006 Handbook of Human Performance Technology, various editions of the Training and Performance Sourcebooks, and the Organizational Development Sourcebooks. Additionally, she has published articles in journals such as Performance Improvement, Performance Improvement Quarterly, Human Resource Development Quarterly, Educational Technology, Quarterly Review of Distance Education, International Public Management Review, and the International Journal for Educational Reform, among others. She obtained her doctorate and master's degrees from Florida State University.

Ryan Watkins, PhD, is an associate professor of educational technology at George Washington University in Washington, D.C. He received his doctoral degree from Florida State University in instructional systems design and has additional formal training in performance improvement, Web design, change management, and program evaluation. Dr. Watkins designs and teaches courses in instructional design, distance education, needs assessment, system analysis and design, research methods, and technology management for online and classroom delivery. He has been a visiting scientist with the National Science Foundation, a professor of instructional technology and distance education at Nova Southeastern University, and a member of the research faculty in the Learning Systems Institute's Center for Needs Assessment and Planning at Florida State University. He has written several books, including 75 E-Learning Activities: Making Online Courses More Interactive and E-Learning Companion: A Student's Guide to Online Success, and co-authored Strategic Planning for Success: Aligning People, Performance, and Payoffs and Useful Educational Results. He has written more than 60 articles and book chapters on the topics of strategic planning, distance education, needs assessment, return-on-investment analysis, and evaluation. He served as vice president of the Inter-American Distance Education Consortium and is an active member of the International Society for Performance Improvement and the United States Distance Learning Association. Dr. Watkins offers a variety of workshops and consulting services on topics such as instructional design, performance improvement, interactive e-learning, and preparing learners for online success.

Doug Leigh, PhD, is an associate professor with Pepperdine University's Graduate School of Education and Psychology. He is co-author of Strategic Planning for Success: Aligning People, Performance, and Payoffs and Useful Educational Results: Defining, Prioritizing, and Accomplishing. Dr. Leigh is an associate director of Roger Kaufman & Associates, two-time chair of the American Evaluation Association's Needs Assessment Topic Interest Group, and past editor-in-chief of Performance Improvement journal. He currently serves as chair of the International Society for Performance Improvement's Research Committee. Leigh's current research, publication, and consulting interests concern cause analysis, organizational trust, leadership visions, and dispute resolution.