
The Pennsylvania State University

The Graduate School

Department of Learning and Performance Systems

A QUALITATIVE CASE STUDY IDENTIFYING METRICS FOR ITIL®

REQUEST FULFILLMENT PROCESS TO CREATE EXECUTIVE

DASHBOARDS: PERSPECTIVES OF AN INFORMATION TECHNOLOGY

SERVICE PROVIDER GROUP

A Dissertation in

Workforce Education and Development

by

Sohel M. Imroz

2016 Sohel M. Imroz

Submitted in Partial Fulfillment

of the Requirements

for the Degree of

Doctor of Philosophy

December 2016


The dissertation of Sohel M. Imroz was reviewed and approved* by the following:

William J. Rothwell

Professor of Education

Dissertation Advisor

Chair of Committee

Judith A. Kolb

Associate Professor of Education

Wesley E. Donahue

Associate Professor of Education

Edgar P. Yoder

Professor of Agricultural Extension Education

Susan M. Land

Director of Graduate Studies for Learning and Performance Systems

*Signatures are on file in the Graduate School


ABSTRACT

IT organizations in today's world must transform from viewing themselves as "overheads" and running as "cost centers" into "aligned business partners" (Overby, 2004, p. 50) that meet the operational, tactical, and strategic needs and goals of the organization. Doug F. Busch, the Chief Information Officer (CIO) of Intel, once said, "If we behave as a cost center, we won't get the most benefit from IT, and we certainly won't earn credibility" (Overby, 2004, p. 50). An increasing number of organizations have started to shift their focus on IT, seeking now to "run like a business" or "act like a business." IT leaders consider the transformation of IT not as a choice, but as an obligation and a matter of survival. This transformation has compelled IT leaders to measure and evaluate the quality and effectiveness of the services they provide and support. Without metrics for the IT processes supporting those services, the quality and effectiveness of the services cannot be measured or managed.

Although organizations spend millions of dollars every year on IT infrastructure, system implementation, and support and maintenance, many do not establish clear and well-understood performance measures for these IT initiatives. Metrics in IT have traditionally been measured in functionality-oriented silos such as the help desk, but IT departments have shifted toward process- and service-oriented metrics to determine success. To address this shift and to be able to measure the performance and effectiveness of processes and services, a new and improved approach for identifying and implementing metrics is needed. This study examined the request fulfillment process of an IT service provider group, identified that group's perceptions of the most important metrics of the process, and subsequently created executive dashboards for displaying those metrics.

The two primary research questions were: (1) What do the group members perceive as being the most important metrics of the request fulfillment process? (2) How can executive dashboards be created with the metrics perceived as most important by the group members? To answer these questions, this research utilized components of the qualitative research approach, the descriptive research strategy, and the case study research tradition (strategy of inquiry). Study results indicated that the following 12 metrics were perceived by the group as most important: total number of tickets created and closed per month, number of Priority-1 tickets created and closed per month, number of tickets by issue type, number of tickets by priority, number of tickets by issue status, number of tickets by department/area, number of tickets per assignee, number of tickets per reviewer, number of tickets per assignee and issue type, number of tickets per assignee and priority, number of tickets per assignee and issue status, and number of tickets per department/area and issue type. Three dashboard pages (Trend Analysis, Monthly Operational Summary, and Monthly Workload Distribution Summary) containing bar charts, pie charts, and tables were created using the iDashboards self-service software application to present these metrics.

A review of recent IT-related scholarly works shows a paucity of research on metrics, measurements, and evaluation of IT processes, especially on how to identify and develop metrics. This study should be meaningful to a growing number of IT practitioners because it addressed these topics, on which very little previous empirical work has been conducted.

TABLE OF CONTENTS

LIST OF FIGURES
LIST OF TABLES
ACKNOWLEDGMENTS

Chapter 1  Introduction
    Context of the Study
    The Problem
    Purpose of the Study
    Significance of the Study
    Limitations
    Assumptions
    Definition of Key Terms
    Conceptual Framework

Chapter 2  Review of Literature
    Motives, Justifications, and Benefits
    Challenges, Barriers, and Risks
    Implementation Strategies
    Critical Success Factors (CSFs)
    Metrics, Measurements, and Evaluation
    Chapter Summary

Chapter 3  Method
    Research Design
    Qualitative Research Approach
    Descriptive Research Strategy
    Case Study Research Tradition
    Single-Case Design
    Instrument Development
    Pilot Study
    Participant Selection Criteria
    Institutional Review Board (IRB)
    Data Collection
    Using Multiple Sources of Evidence
    Creating a Case Study Database
    Maintaining a Chain of Evidence
    Exercising Care When Using Electronic Sources of Evidence
    Data Analysis
    Member Validation and Check
    Data Triangulation
    Grounded Theory Approach
    Strategies for Addressing the Quality of the Research
    Trustworthiness
    Construct Validity
    Reliability
    Chapter Summary

Chapter 4  Research Findings
    Background
    Participant Profile
    Development of Three Themes
    Key Observations in Focusing the Research Study
    Theme 1: Trend Analysis
    Total Number of Tickets Created and Closed Per Month
    Number of Priority-1 Tickets Created and Closed Per Month
    Theme 2: Monthly Operational Summary
    Number of Tickets by Issue Type
    Number of Tickets by Priority
    Number of Tickets by Issue Status
    Number of Tickets by Department/Area
    Theme 3: Monthly Workload Distribution Summary
    Number of Tickets per Assignee
    Number of Tickets per Reviewer
    Number of Tickets per Department/Area and Issue Type
    Chapter Summary

Chapter 5  Study Summary, Conclusions, and Recommendations
    Study Summary
    Single Site, Single-Case Study Approach
    Data Collection
    Data Analysis
    Strategies for Judging Trustworthiness
    Conclusions
    Themes of the Study
    Metrics from Service Providers' Perspective—Demonstrating Value
    Metrics from the Business Perspective
    Business Value Metrics
    Applicability of This Study to the Field of Organization Change and Development
    Recommendations

References
Appendix A: IRB Approval Letter
Appendix B: HRP-591 Protocol for Human Subject Research
Appendix C: Recruitment Letter
Appendix D: Informed Consent Form
Appendix E: Interview Guide
Appendix F: Properties of a JIRA Record
Appendix G: Case Study Protocol
Appendix H: Field Journal Template and Example
Appendix I: Documents, Physical Artifacts, and Data Analyzed
Appendix J: The Codebook: Codes, Descriptions, and Examples
Appendix K: Codes, Sources, Categories, and Themes

LIST OF FIGURES

Figure 1.1  Metric Development Plan
Figure 3.1  Research Roadmap
Figure 3.2  Data Triangulation Framework
Figure 3.3  Project Plan and Timeline
Figure 4.1  Triangulation Matrix of Data Sources
Figure 4.2  Trend Analysis Dashboard Page
Figure 4.3  Monthly Operational Summary Dashboard Page
Figure 4.4  Monthly Workload Distribution Summary Dashboard Page

LIST OF TABLES

Table 2.1  Journals, Conferences, and Databases Used in this Study
Table 2.2  KPIs Identified to Measure Each IT Service
Table 2.3  Frequency of Recording and Reporting Metrics
Table 2.4  Environmental Factors Influencing Selection of ITSM Performance Metrics
Table 2.5  Processes, Critical Success Factors (CSFs), and Key Performance Indicators (KPIs)
Table 3.1  Link Among Research Questions, Interview Questions, and Guiding Literature
Table 4.1  Prioritization Matrix of a Service Request

ACKNOWLEDGMENTS

This dissertation could not have been finished without the help and support of many professors, research staff, graduate students and colleagues, and my family. It is my great pleasure to acknowledge all those people who have given me guidance, help, and encouragement.

First of all, I would like to express my deepest sense of Gratitude (with a capital, bold, and underlined G) to my advisor Dr. William Rothwell at Penn State, who offered his continuous advice and encouragement throughout the course of this dissertation. I thank him for the systematic guidance and great effort he put into opening my eyes to the concepts of scholarship and research.

I would also like to express very sincere gratitude to my doctoral committee members at Penn State: Dr. Judith Kolb, Dr. Wesley Donahue, and Dr. Edgar Yoder. They helped me develop a new lens through which to examine my work, understand the world, and contribute to society.

I am thankful to Dr. David Passmore for his support and encouragement whenever I was in need. He was the professor of my very first (Scholarly Inquiry) and last (Research in Workforce Education) classes at Penn State. He encouraged me to achieve my highest potential and produce novel research that is usable by others.

I am indebted to the staff members of the Workforce Education and Development program at Penn State for the support they provided during my stay there. I especially give sincere thanks to Carol Fantaskey and Laurie Heininger for helping me with various administrative requests whenever I needed assistance. I am also thankful to Lee Carpenter, the Project Development Associate & Writer/Editor at Penn State, for editing my dissertation and putting up with the demanding project planning.

It has been my pleasure to work with other graduate students and colleagues in the Department of Learning and Performance Systems at Penn State. We talked together, exchanged ideas, and helped each other. Those experiences made my years at Penn State the most memorable period of my life.

I take this opportunity to express profound gratitude from the core of my heart to my beloved parents and to my son, Ashaaz. Finally, I dedicate the completion of this dissertation and the doctoral degree to my wife, Bijoly, for her sacrifice, support, and encouragement. Bijoly, in the vastness of space and the immensity of time, it is my joy and honor to share a planet, a home, and an epoch with you. I love you.

Chapter 1

Introduction

For more than eight years, I have been working as an Information Technology Service Management (ITSM) professional in various U.S. organizations. ITSM refers to "the management of IT services through the use and coordination of people, workflows, and information technology" (Hoerbst, Hackl, Blomer, & Ammenwerth, 2011, p. 2). ITIL, formerly known as the Information Technology Infrastructure Library, is based on a set of best ITSM practices and provides a framework for managing end-to-end IT services (Kumbakara, 2008). ITIL 2011 is the latest version of the ITIL framework and comprises five lifecycle stages and 26 processes. Request Fulfillment is one of those processes; it "deals with service requests, defining the roles and activities needed to deliver services" (Mendes & Mira da Silva, 2011, p. 113).

In recent years, an increasing number of organizations have been shifting their focus on IT to "run like a business" or "act like a business" (Steinberg, 2013, p. 1). IT managers and senior executives of today's organizations (especially CEOs, CFOs, and CIOs) consider the evolution of IT from being run as a cost center to an aligned business partner not as a choice, but as an obligation and a matter of survival (Heller, 2013). This evolution has compelled IT managers and senior executives to measure and evaluate the quality and effectiveness of the services they support (Steinberg, 2013). Without metrics for the ITIL processes supporting a service, the quality and effectiveness of the service cannot be measured or managed. The metrics of ITIL processes must be calculated correctly, and they should be easily accessible and presented in real time (Steinberg, 2013). A dashboard is a user interface that organizes and presents metrics and information in a way that is easy to read and easy to access, enabling managers and decision makers to quickly "spot deficiencies without wading through lots of reporting detail" (Steinberg, 2013, p. 25).

Despite the obvious necessity and value of having metrics to measure and manage various ITSM processes, surprisingly few IT professionals have specific ideas regarding metrics: what should be measured, how to identify what really matters, or how an ITSM metrics program should be implemented and run (Steinberg, 2013). According to Steinberg (2013), fewer than 5% of business organizations measure their IT processes and services "well", about 50% of organizations have "some" measures, and more than 25% of organizations have "absolutely no measures at all" (p. 2). Identifying the most important metrics, the ones that really matter, and presenting them in real time in an easily understandable manner are the first steps for today's IT managers and senior executives in solving a business problem, justifying ITSM initiatives to stakeholders, and making effective management decisions (Steinberg, 2013).

Context of the Study

The subject of this study was an information technology service provider group ("the Group") responsible for identifying, collecting, evaluating, analyzing, reporting, maintaining, and warehousing data for over 15,000 customers to support the planning and decision-making activities within a higher education organization in the U.S. The Group reported to the Executive Director and had 13 members: a director, an associate director, a manager, a senior data analyst, a data analyst, four consultants, a business analyst, two graduate assistants, and an administrative support assistant. I was a staff member of the Group, serving as an IT consultant with responsibility for defining, developing, and implementing the request fulfillment process ("the Process") of the ITIL framework. I was also responsible for executing all activities and steps of the Process and for understanding how the Process contributes to the overall delivery of service and creation of value for the business. Those responsibilities required defining policies and standards to be employed throughout the Process, and creating and updating service request records to document that all activities were carried out correctly.

A service request is a generic description for many types of issues or demands (related to reports, data, information, or knowledge) that are placed upon the Group by its users and customers. All service request records were stored in JIRA, a commercial issue tracking and project management application developed by Atlassian (Jira, 2015). To increase customer satisfaction by better managing all service requests and completing them on time, the Group identified implementing the request fulfillment process as a high-priority objective at the beginning of 2014, and the Process was fully implemented by the middle of 2014.
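For readers unfamiliar with JIRA, the sketch below illustrates in Python how service request records like the Group's could be pulled programmatically from a JIRA server through its REST search endpoint, which accepts a JQL query and returns issues one page at a time. The server URL, project key, credentials, and field list here are hypothetical, not the Group's actual configuration; this is a minimal sketch, not the tooling used in the study.

```python
import requests

JIRA_URL = "https://jira.example.edu"  # hypothetical server address
# Hypothetical project key; JQL selects service requests created in one month.
JQL = 'project = REQ AND created >= "2014-06-01" AND created < "2014-07-01"'

def fetch_service_requests(jql, page_size=100):
    """Page through JIRA's REST search endpoint, yielding one issue dict at a time."""
    start_at = 0
    while True:
        resp = requests.get(
            f"{JIRA_URL}/rest/api/2/search",
            params={
                "jql": jql,
                "startAt": start_at,
                "maxResults": page_size,
                "fields": "summary,issuetype,priority,status,assignee,created",
            },
            auth=("analyst", "secret"),  # placeholder credentials
            timeout=30,
        )
        resp.raise_for_status()
        payload = resp.json()
        if not payload["issues"]:
            break
        yield from payload["issues"]
        start_at += len(payload["issues"])
        if start_at >= payload["total"]:  # stop once every match has been seen
            break

for issue in fetch_service_requests(JQL):
    fields = issue["fields"]
    print(issue["key"], fields["status"]["name"], fields["created"])
```

The paging loop matters because a single call to the search endpoint returns only a limited page of issues, so any metric computed over a month of tickets must iterate until the reported total is reached.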

As the members of the Group became more familiar and comfortable with the newly implemented request fulfillment process, the need to measure the efficiency, effectiveness, performance, and quality of the Process became paramount. The Executive Director firmly believed in the motto, "What gets measured gets done; what cannot be measured cannot be managed." To measure the effectiveness of the Process and manage it better, the group members needed to identify important metrics that appropriately captured the performance of the Process and the quality of service. It was also necessary to explore how the group members understood the needs and benefits of these metrics. Finally, the metrics had to be calculated correctly, presented in dashboards, and accessible in real time by the group members, managers, and executives. I have a long-term interest in ITIL, am a certified IT Service Management professional, and was directly involved in implementing the request fulfillment process. Based on that background, I found a unique opportunity with the Group to identify significant metrics of the request fulfillment process and to create executive dashboards presenting those metrics.

The Problem

Although organizations spend millions of dollars every year on various IT infrastructures (e.g., hardware, software, network and storage), system implementation, and support and maintenance, many do not establish clear and well-understood performance measures for these IT initiatives (Budd & Malcolm, 2001; Steinberg, 2013). Too often, loose measurements (e.g., customer satisfaction survey scores, percentage of IT projects completed on time and within budget) are considered sufficient even though they offer little help in understanding the success or failure of an IT initiative (Budd & Malcolm, 2001). Metrics in IT have traditionally been measured in functionality-oriented silos such as the help desk, but IT departments have shifted toward process- and service-oriented metrics to determine success (van Bon, 2008). According to van Bon (2008), "Metrics for IT service management need to measure process and service effectiveness in addition to the functions and technologies that provide them" (p. 574). To address this shift and to be able to measure the performance and effectiveness of processes and services, a new and improved approach for identifying and implementing metrics is needed. This study sought to contribute to this effort by examining how a service provider group identified metrics for an ITIL request fulfillment process and created executive dashboards to present those metrics.

Metrics have long been a topic of interest, concern, and debate (Mann, 2011). Many organizations struggle with metrics for multiple reasons (Mann, 2011):

- They are not entirely sure what should be measured and why;
- They often measure what is easy to measure rather than what should be measured;
- They frequently have too many metrics as opposed to a select few that matter;
- There is no structure or context between metrics; and
- Metrics are commonly viewed as an output rather than as an input into business conversations about services or improvement activity.

For any process, implementing and adopting a new metric can also be challenging because it sometimes causes culture change, frustration, and even resistance (Spafford, 2009). In addition, it is far too common for departments and organizations to set a metric only to later discover that the process and tools in place cannot generate the supporting data (Spafford, 2009). Last, to better manage a process and measure its performance, all metrics pertaining to the process must be clearly understood, well-defined, and achievable so that success can be benchmarked (Spafford, 2009).

However, coming up with a list of clearly understood and well-defined metrics, understanding what data are needed and how to collect them, and securing senior management support to buy the necessary tools to present the metrics are all difficult, and these difficulties can hinder understanding, identifying, designing, and developing metrics (Mann, 2011). A review of recent ITSM- and ITIL-related scholarly works pointed to a paucity of research on metrics, measurements, and evaluation of ITIL processes, especially on how to identify and develop metrics for ITIL processes (Iden & Eikebrokk, 2013). This study used a qualitative case study method to address these topics, on which little previous empirical work has been conducted.

Purpose of the Study

This study examined the request fulfillment process of an information technology service provider group to identify the perceived most important metrics of the process and to subsequently create executive dashboards to display those metrics. My two primary research questions were:

Research Question 1: What do the group members perceive as being the most important metrics of the ITIL request fulfillment process?

This research question was guided by these sub-questions: How do the group members understand the motives or driving forces behind the process implementation (implementation motives)? What do the group members say about the activities that received constant and careful attention from management during implementation (critical success factors, or CSFs)? How do the group members describe the benefits perceived and challenges faced during implementation (benefits and challenges)? What do the group members say about measuring and evaluating over time to understand how well the process is working (metrics, measurement, and evaluation)?

Research Question 2: How can executive dashboards be created with the metrics perceived as most important by the group members?

This research question was guided by these sub-questions: What is the process of pulling the data from the source(s) of the metrics? How should the metrics be presented in a dashboard? How can these dashboards be made accessible in real time to all group members, managers, stakeholders, and other decision makers?
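The study answers these sub-questions in Chapter 4; purely as an illustration of the data-pull step, the Python sketch below tallies tickets created and closed per calendar month, the first of the 12 metrics the group identified, from a CSV export of ticket records. The file name and the column names ("created", "resolved") are assumptions made for this sketch, not the Group's actual JIRA export format.

```python
import csv
from collections import Counter

def monthly_created_closed(csv_path):
    """Count tickets created and closed per calendar month from a ticket export.

    Assumes columns named 'created' and 'resolved' holding ISO-8601 dates,
    where 'resolved' may be empty for still-open tickets; real exports vary.
    """
    created, closed = Counter(), Counter()
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            created[row["created"][:7]] += 1       # "YYYY-MM" bucket
            if row.get("resolved"):
                closed[row["resolved"][:7]] += 1
    return created, closed

created, closed = monthly_created_closed("service_requests.csv")
for month in sorted(set(created) | set(closed)):
    print(f"{month}  created: {created[month]:4d}  closed: {closed[month]:4d}")
```

Counts in this per-month form map directly onto the bar charts a trend-analysis dashboard page would display.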

Significance of the Study

A study identifying the most important metrics of the request fulfillment process in a real-life service provider group is important for several reasons. First, the study offers IT managers and senior executives indicators from which they can make more informed, accurate, and timely operational and strategic decisions. Second, the study provides visibility into how effectively and efficiently a service provider group operates and delivers a set of IT services. Third, the study provides a basis for identifying and prioritizing continual service improvement (CSI) initiatives within the service provider group. Last, the study helps a service provider avoid the negative consequences of poorly managed ITSM activities, such as slow operational processes, slow response and turnaround times, and dissatisfied customers.

Limitations

This study has limitations. Findings from this study cannot be generalized to a larger population because a qualitative research method was used that enabled the examination of perspectives from only one service provider group. The researcher utilized a criterion-based purposive or purposeful sampling method (Patton, 1997) to identify eight group members as potential study participants. All data collected during face-to-face interviews were self-reported and could be biased by the interviewees (Welch, 2014), either knowingly or unknowingly.

Assumptions

According to Patton (2002), "Qualitative interviewing begins with the assumption that the perspective of others is meaningful, knowable, and able to be made explicit" (p. 341). In this study, the researcher assumed that all in-person interview participants were willing and capable of providing meaningful descriptions of their experiences (Patton, 2002), and were knowledgeable and truthful about their understanding of the request fulfillment process. It was also assumed that the documents and reports reviewed while conducting this study were prepared in good faith to represent the facts based on valid and correct production data.

Definition of Key Terms

To clarify the readers' interpretation of this study, key terms were defined:

Dashboards: "A graphical representation of overall IT service performance and availability. Dashboard images may be updated in real time, and can also be included in management reports and web pages" (Best Management Practice, 2007, pp. 348–349).

Incident: "An unplanned interruption to an IT service or reduction in the quality of an IT service. Failure of a configuration Item that has not yet affected service is also an Incident. For example, failure of one disk from a mirror set" (Best Management Practice, 2007, p. 353).

Information Technology Infrastructure Library (ITIL): "A set of Best Practice guidance for IT Service Management. ITIL is owned by the OGC and consists of a series of publications giving guidance on the provision of Quality IT Services, and on the Processes and facilities needed to support them" (Best Management Practice, 2007, p. 355).

IT Service: "A service provided to one or more customers by an IT service provider. An IT service is based on the use of Information Technology and supports the customer's business processes. An IT service is made up from a combination of people, processes and technology and should be defined in a Service Level Agreement" (Best Management Practice, 2007, p. 354).

IT Service Management (ITSM): "The implementation and management of Quality IT Services that meet the needs of the Business. IT Service Management is performed by IT Service providers through an appropriate mix of people, Process and Information Technology" (Best Management Practice, 2007, p. 355).

Metrics: "Metrics are a means of telling a complete story for the purpose of improving something" (Klubeck, 2011, p. xiii).

Office of Government Commerce (OGC): "OGC owns the ITIL brand (copyright and trademark). OGC is a UK Government department that supports the delivery of the government's procurement agenda through its work in collaborative procurement and in raising levels of procurement skills and capability within departments. It also provides support for complex public sector projects" (Best Management Practice, 2007, pp. 357–358).

Request Fulfillment: "The Process responsible for managing the lifecycle of all service requests" (Best Management Practice, 2007, p. 363).

Service: "A means of delivering value to Customers by facilitating outcomes customers want to achieve without the ownership of specific Costs and Risks" (Best Management Practice, 2007, p. 364).

Service Request: "A request from a user for information or advice, or for a standard change or for access to an IT Service" (Best Management Practice, 2007, p. 367).

Conceptual Framework

The conceptual framework used to guide this research is the Metric Development Plan (MDP) (Klubeck, 2011), as shown in Figure 1.1. This MDP was selected because it captures all components of a metric (e.g., data, measures, information, pictures, root questions) and provides guidance for making metrics well-defined, useful, and manageable (Klubeck, 2011). The components of this development plan are briefly described in this section.

[Figure 1.1 depicts the MDP as a flowchart: Purpose Statement or Root Question → Design an Abstract View → Identify Information, Measures, and Data → Collect Measures and Data → Analyze the Data → Visual Depiction → "Is It Right?" decision → Use the Metric, with "No" branches looping back to earlier steps.]

Figure 1.1. Metric Development Plan (Klubeck, 2011, p. 72). Used with permission conveyed through Copyright Clearance Center, Inc.

The purpose statement or root question depends on understanding the real driving need and could come from goals, improvement opportunities, or problems one would like to solve (Klubeck, 2011). A well-defined purpose statement or root question is the foundation of the metric; it provides a clear guide to the metric and allows identification of other necessary metrics. Once a purpose statement or root question is defined, it is useful to design an abstract view or picture of the metric and resist the temptation to jump into the data too soon. According to Klubeck (2011), having a picture of the metric "provides focus, direction, and helps us avoid chasing data" (p. 44).

The next part of the MDP involves identifying the metric's information, measures, and data; clearly articulating how they will and will not be used; and identifying those who will use the metric. Honest, open, and clear answers to these questions can help reduce the "fear, uncertainty, and doubt" (Klubeck, 2011, p. 62) people have toward the way the data may be used. Clearly identifying the customers of a metric is important because the customers may have requirements for how the metric is presented and for its validity (Klubeck, 2011). The next part of the MDP deals with collecting measures and data and scheduling the metric. Since most metrics are time-based or event-driven (Klubeck, 2011), having a reporting schedule allows timely delivery of the metric to its customers.

After data collection for the metric is complete, the data must be analyzed properly. Incorrect analysis of the data will contribute to incorrect conclusions and therefore wrong answers to the root question. To avoid negative consequences of incorrect and inappropriate data analysis, the MDP "must include all metric data rules, edits, formulas, and algorithms; each should be clearly spelled out for future reference" (Klubeck, 2011, p. 71). Once the data are correctly analyzed, the next part involves presenting the metric to the customer in a manner that can be easily understood. Normally, metrics are presented via different types of graphs, charts, or tables. If customers do not find the presentation of a metric useful, they are most likely not going to use the metric. This part of the MDP therefore recommends experimenting with different ways to present the metric so customers can voice their preferences. The final part of the plan is reached when customers approve the metric and the manner of its presentation, and use the metric. A minimal sketch of this loop in code follows.
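To make the MDP's iterative flow concrete, here is a minimal Python sketch of the plan as a loop: identify the information, measures, and data behind the root question; collect and analyze them; present a visual depiction; and return through the earlier stages until the customers confirm the metric is right. Klubeck (2011) describes the plan conceptually, not as code, so every name in this sketch is illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class Metric:
    root_question: str             # the purpose statement the metric must answer
    abstract_view: str             # picture of the metric, sketched before any data work
    data: list = field(default_factory=list)
    analysis: dict = field(default_factory=dict)

def develop_metric(metric, identify, collect, analyze, visualize, is_right):
    """Walk the MDP stages, looping until the customers accept the metric."""
    sources = identify(metric.root_question)    # information, measures, and data
    while True:
        metric.data = collect(sources)          # gathered on the reporting schedule
        metric.analysis = analyze(metric.data)  # documented rules, edits, formulas
        picture = visualize(metric.analysis)    # graph, chart, or table
        if is_right(picture):                   # the "Is it right?" decision gate
            return picture                      # customers approve and use the metric
        # a "no" at the gate sends the work back through the earlier stages
```

Modeling the "Is it right?" gate as the loop condition mirrors the flowchart's "No" branches: the metric is not done when the data are analyzed, but when its customers accept how it answers the root question.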

Chapter 2

Review of Literature

This chapter provides a review of existing research and literature related to IT Service Management (ITSM) and ITIL. The literature review encompasses three primary goals: first, to summarize, critically analyze, and evaluate previous research available on ITSM and ITIL; second, to identify themes and research contributions to ITSM and ITIL that have emerged from past research; and third, to identify knowledge gaps that can be addressed by the present study. The literature search process used in this study, the criteria for inclusion and exclusion of relevant literature, and the types of data extracted from each study are briefly described next.

ITIL was developed in 1989 by the Central Computer and Telecommunications Agency (CCTA) in the United Kingdom to help improve IT Service Management in the UK central government (Anderson, 2009). The literature review for the present study started with the ITIL Core Publications, comprised of five books on the ITIL lifecycle: Service Strategy, Service Design, Service Transition, Service Operation, and Continual Service Improvement. These five books are the bedrock of the ITIL framework. They provide guidance on the planning, delivery, and management of IT services to support business needs and maximize IT resources and capabilities to drive business value (ITILNews, 2016). The ITIL core publications are based on best practices, principles, and international standards related to IT Service Management, drawn from the public and private sectors worldwide (Office of Government Commerce, 2011). As of 2015, the most recent core publications pertain to the current official version of ITIL in use, ITIL 2011. Among the core publications, only the Service Operation book falls within the scope of this literature review, because the request fulfillment process is part of Service Operation. Two earlier versions of ITIL were also considered relevant for this review: ITIL V2 (released in 2001) and ITIL V3 (released in 2007). It was appropriate to consider these two earlier versions of ITIL because they are still widely practiced in many companies worldwide (Gacenga, Cater-Steel, & Toleman, 2010).

After reviewing the ITIL core publications, the literature review then targeted other books published on various ITSM and ITIL topics and peer-reviewed literature on the implementation of ITSM and ITIL published between January 1, 2000 and December 31, 2015. The keywords used to select relevant literature were "IT Service Management", "Information Technology Infrastructure Library", and their respective abbreviations, "ITSM" and "ITIL." The main sources of literature included academic and professional journals, international Information Systems conferences, and online research directories and databases. Seventeen journals, proceedings from eight conferences, and six online databases were used in this literature review. The selected journals, conferences, and databases are listed in Table 2.1. Only articles published in English were considered. Non-research articles and purely descriptive white papers (non-peer reviewed) were not considered.

Table 2.1

Journals, Conferences, and Databases Used in this Study

Journals:
- BMC Medical Informatics and Decision Making
- Business & Information Systems Engineering
- Campus-Wide Information Systems
- Electronic Journal Information Systems Evaluation
- Information & Management
- Information Management and Computer Security
- Information Systems E-Business Management
- Information Systems Management
- Information Technology and Management Science
- International Journal of Future Computer and Communication
- International Journal of Information and Electronics Engineering
- Journal of Computer Information Systems
- Journal of Global Information Technology Management
- Journal of Service Science and Management
- Journal of Software Engineering Research and Development
- Technology and Investment
- Wirtschaftsinformatik

Conferences:
- Americas Conference on Information Systems (AMCIS)
- European Conference on Information Systems (ECIS)
- Hawaii International Conference on System Sciences (HICSS)
- International Conference on Information Resources Management (CONF-IRM)
- International Conference of Information Systems (ICIS)
- International Workshop on Database and Expert Systems Applications
- Pacific Asia Conference on Information Systems (PACIS)
- Sprouts

Databases:
- ACM Digital Library
- Business Source Premier
- Emerald Insight
- Web of Science
- Google Scholar
- IEEE Xplore

Types of data extracted from each peer-reviewed article included the journal or conference name, the full reference in APA style, the authors' names and affiliated institutions, the abstract, research question and method, theoretical framework, main topic area, research findings, and a summary of the study. Guided by the sub-questions for Research Question 1, the literature review included articles on the following subtopics: motives, justifications, and benefits of implementing ITSM and ITIL; challenges, barriers, and risks associated with ITSM and ITIL; implementation strategies; critical success factors (CSFs); and metrics, measurements, and evaluation. The following sections provide detail on the literature review findings for the present study.

Motives, Justifications, and Benefits

This section relates to the first research question of the present study and the sub-questions on the motives, justifications, and benefits of ITIL process implementation. The Office of Government Commerce (2011) mentioned three significant values of the request fulfillment process that could be motivating factors for organizations implementing ITIL. First, the process could give business staff the ability to manage various standard services quickly and effectively, increase their productivity, and improve the quality of business services and products. Second, the process could reduce the cost of providing business services by reducing the bureaucracy involved in requesting new services or receiving existing ones. Third, the process could "increase the level of control over requested services through a centralized fulfillment function" (p. 87).

The study by Hoerbst, Hackl, Blomer, and Ammenwerth (2011) involved interviews with the heads of information technology and IT managers at 75 hospitals in Austria, Germany, Slovakia, South Tyrol (Italy), and Switzerland to learn about their motives and expectations for implementing ITSM and ITIL in the healthcare context, using descriptive statistics and qualitative content analysis. The implementation motives in that study included improving IT services, increasing productivity, and reducing IT costs. The study claimed that "the idea of IT services and IT service management is still not widely recognized in hospitals in the countries and regions of the study" and that "research on IT service management and ITIL in health care is rare" (Hoerbst et al., 2011, p. 2).

The case study of Pollard and Cater-Steel (2009) explored successful ITIL implementations in two U.S. and two Australian companies (one public and one private company from each country) to understand justifications for implementation, implementation strategies, critical success factors, benefits, and challenges. The researchers defined an ITIL implementation as "successful" when the companies reported "achieving a more predictable infrastructure from improved rigour during system changes, improved clarity in roles and responsibilities, reduction in system and service outages, improved coordination between functional teams, seamless end-to-end service, more documented and consistent ITSM processes across the organisation, consistent logging of incidents, enhanced productivity, reduced costs, and improved customer satisfaction" (Pollard & Cater-Steel, 2009, p. 168). There were three research questions in Pollard and Cater-Steel (2009). The first question was whether public and private organizations in the U.S. and Australia have different motives or justifications for implementing ITIL. The second and third research questions pertained to ITIL implementation strategies and critical success factors; these are addressed later in this chapter.

The study by Pollard and Cater-Steel (2009) used a purposive sample to select the organizations and collected data through interviews, company websites, and publicly available corporate documentation. Data were analyzed by transcribing the interviews, identifying patterns and themes, summarizing main characteristics, and selecting quotations that supported the patterns and themes. In answering the first research question, the researchers found that justifications for ITIL implementation "varied somewhat across organizations" (p. 168), and they reported the following noteworthy justifications: increasing operational efficiency and consistency, improving communication between functional teams in IT, better tracking of incidents caused by failed changes, and avoiding providing services that often fail.

Disterer (2012) administered a survey among ISO 20000 certified companies in Germany, Austria, Switzerland, and Liechtenstein to study "why do companies seek to conform to ISO 20000 and what benefits do they experience?" (p. 1). ISO 20000 is an international standard for IT Service Management that offers "a normative concept for aligning the performance provided by IT services, and enabling companies to certify their compliance with this standard" (p. 2). Seventy-eight companies were invited to participate in the survey; 53 valid responses were obtained, for a response rate of 68%. Disterer (2012) found the following organizational motivations for seeking ISO 20000 certification: customer orientation and satisfaction; competitive advantage; trust and reputation; efficiency, uniformity, transparency, reliability, and continuous improvement of IT procedures and services; process orientation; clarity of tasks, roles, and responsibilities; staff awareness; and reduction of errors, incidents, and deviations.

Espindola, Luciano, and Audy (2009) studied IT professionals' perceptions of the relationship between the adoption of various IT governance models or quality instruments and their impact on several IT problems in Brazilian organizations, focusing on software quality improvement and IT Service Management. ITIL was one of the quality instruments used in the study. Espindola et al. (2009) examined the relationship between the adoption of ITIL and two IT problems: lack of a formal decision-making process and lack of clear alignment between IT actions and the strategic goals of organizations. Participants were selected using a convenience sampling method and comprised attendees of a software and IT services quality event in 2006. The researchers distributed the survey questionnaire to 260 attendees. One hundred eighty-six usable responses were returned, for a response rate of 72%. Results from the study showed a negative relationship between ITIL adoption and the two IT problems stated above: adoption of ITIL reduced both the lack of alignment between IT actions and the strategic goals of the organization and the lack of a formal decision-making process. Therefore, Espindola et al. (2009) stated that improved alignment between IT actions and organizational goals and a superior decision-making process could be motivating factors for adopting ITIL.

Wan and Chan (2008) identified the benefits of implementing IT Service Management at the Hong Kong Science Park campus for managing campus-wide IT operations. They evaluated the effects of ITSM tool adoption by analyzing three years of operational data. Data were collected in two stages: between January 2004 and January 2005 (pre-deployment of tools) and between February 2005 and December 2006 (post-deployment of tools). The study found two key motives and driving factors for implementing ITSM workflow processes: to avoid chaos in operations management and to achieve a systematic process. The study also identified two major benefits that resulted from implementing IT Service Management: improvement of the service target and identification of the most frequently requesting users and most requested services (Wan & Chan, 2008). Data from Wan and Chan (2008) showed that the service target improved by 13.4% on average and 25% at maximum. Improvement of the service target "created value to the operation thereby increased the efficiency of processes and workflows" (p. 38). Identifying the most frequently requesting users (Campus Facility Management) and the most requested services (Server Farm, 28%, and OA & Desktop Applications, 26%) on the campus allowed IT managers to adjust support plans for these users and services.

An empirical quantitative study by Marrone and Kolbe (2011) identified perceived benefits of ITIL implementation using an online questionnaire sent to 5,000 individuals in the USA and UK, with a response rate of 8%. The participants were on the Hornbill mailing lists and members of the IT Service Management Forum (itSMF). The purpose of the study was to understand how the benefits of ITIL implementation changed as companies increased their level of adherence to the ITIL framework. The study had three research questions: (1) What effect does the total number of implemented processes have on the maturity of the ITIL implementation? (2) How are challenges perceived at different levels of maturity of the ITIL implementation? (3) How does the number of benefits develop as the maturity of the ITIL implementation increases? Marrone and Kolbe (2011) found positive correlations between the level of maturity and both the number of implemented ITIL processes and the number of benefits, and a negative correlation between the level of maturity and the number of challenges faced. According to Marrone and Kolbe (2011), benefits of ITIL implementation included improvements in service quality, customer satisfaction, business-IT alignment, call fix rate, and return on investment; reductions in IT downtime and outages; and increased adherence to best practices and standardized processes.

In another closely related but separate study, Marrone and Kolbe (2011a) investigated the benefits of ITSM from both operational and strategic perspectives using empirical data gathered from a large-scale international survey. Five hundred three IT executives from 441 organizations participated in the study. The study's goal was to gain an understanding of perceptions of business-IT alignment at different levels of maturity of ITIL implementation, and of the relationship between the total number of realized benefits and the maturity level of ITIL implementation. The maturity levels used in the study were drawn from the Strategy Alignment Maturity Model proposed by Luftman (2001). Marrone and Kolbe (2011a) proposed four hypotheses. First, as the maturity level of ITIL increases, business-IT alignment also increases. Second, as the maturity level of ITIL increases, the quantity of realized benefits also increases. Third, as the maturity level of ITIL increases, so does the usage of metrics to measure the realized benefits. And fourth, as the maturity level of ITIL increases, the acknowledgment by the business of the benefits of ITIL also increases. All four hypotheses were confirmed. The study found that ITIL not only provided various benefits at the operational level but also contributed to the strategic positioning of the business, such as increased business performance, competitive advantage, and profitability.

Gacenga, Cater-Steel, Tan, and Toleman (2011) used survey results and several case studies to ascertain the benefits of ITSM improvement initiatives and "the factors that influence the selection of performance metrics" (p. 6). The study had four research questions. RQ1: What types of benefits are reported from IT Service Management (ITSM) improvement initiatives by organizations? RQ2: Which specific performance metrics can measure ITSM benefits? RQ3: How can specific ITSM performance metrics be derived? RQ4: What internal and external environment factors influence the organization's selection of performance metrics for ITSM? (p. 3). Results from a survey of 263 itSMF Australia members conducted in 2009 were used to answer RQ1 and RQ2, and six in-depth case studies of private and public organizations conducted in 2010 were used to answer RQ3 and RQ4.

Data analysis steps in Gacenga et al. (2011) included content analysis (to determine the environmental factors) and cross-case analysis (to aggregate the metrics). Since RQ1 pertained to the benefits of ITIL and ITSM, the findings for RQ1 are reported in this section, while the findings for RQ2, RQ3, and RQ4 are discussed in a later section, since they pertain to metrics and measurements. According to Gacenga et al. (2011), ITIL processes provided efficient ways of recording, prioritizing, and tracking incidents through a central location for call handling and a single point of contact, standardization of workflows, and reporting on service level agreements (SLAs). Gacenga et al. (2011) found that ITIL processes and ITSM implementation improved customer satisfaction, availability of services and applications, scheduling of changes, and the first-call resolution rate. They also noticed that ITIL processes and ITSM implementation resulted in less rework and fewer incidents due to failed changes.

Wagner (2006) studied the benefits of implementing the Incident Management process based on ITIL best practices in the IT department of a European bank. Eight IT representatives (four IT managers and four IT specialists) and ten business representatives (five business managers and five business specialists) participated in the case study, in which data were collected primarily through interviews. Five indicators were selected for measuring the benefits of the Incident Management process: proactiveness of IT, understanding of business requirements by IT staff, responsiveness of IT, satisfaction with IT services, and interaction between business and IT. Wagner (2006) found that within three months of implementing the new process, the IT department experienced improvement in three of these indicators, while one (responsiveness of IT) slightly declined. Regarding the fifth indicator, Wagner (2006) found that "the interaction between IT and business are more frequent and intense than before" (p. 7), which can be seen as an improvement.

Other researchers have reported similar motivating factors. Lapao (2011) found improvement of IT management, efficiency at the point of care, and productivity, along with reduction of work disruption costs, to be the primary motivating factors behind IT Service Management initiatives in a hospital environment in Portugal. The study by Cater-Steel, Tan, and Toleman (2006) stated several motivating factors for adopting multiple process improvement frameworks, including ITIL: compliance with legal requirements, risk management, cost reduction, and customer satisfaction. A process improvement project focusing on Incident Management within a leading Australian university identified cost reduction and efficient IT services as the "main strategic intents" or motives behind the project (Arora & Bandara, 2006, p. 1277). The case study by Hochstein et al. (2005) reported several benefits and costs of implementing ITIL and service-oriented IT management in organizations. Benefits included increased client and service orientation and improved quality of IT services; better efficiency due to process standardization, optimization, and automation; and improved transparency and comparability through process documentation and monitoring.

Beyond the academic literature, the benefits of adopting the ITIL framework are well documented in professional and industry literature. Kumbakara (2007) of the NCR Corporation, Canada, stated that both internal IT organizations and managed service providers (MSPs) could benefit from an IT Service Management framework like ITIL. Implementing ITIL best practices helped companies increase IT predictability, efficiency, and service quality; reduce support costs; comply with regulatory requirements; and manage end-to-end IT services (Kumbakara, 2007, p. 343). According to data published by the IT Service Management Forum (2004), savings from ITIL implementation reported by organizations included over 70% reduction in service downtime, over 1000% increase in ROI, £100 million in annual savings, and 50% reduction in new product development cycles (Kumbakara, 2007, p. 345).

In summary, this section provided past research findings related to the motives, justifications, and benefits of adopting ITIL and implementing ITIL processes. These findings guided interview question Q1 of the present study, as information was sought to answer the first research question and its associated sub-questions.

Page 37: A QUALITATIVE CASE STUDY IDENTIFYING METRICS FOR ITIL

25

Challenges, Barriers, and Risks

This section relates to the first research question of the present study and sub-

questions related to challenges, barriers, and risks of ITIL process implementation. While

understanding the motives and justifications for implementing ITSM and ITIL is

important to running and managing IT like a business, many organizations in the U.S.

still have not implemented ITSM or ITIL. Therefore, it is also important to understand

the challenges, barriers, and risks of ITSM and ITIL implementation. The Office of

Government Commerce (2011) mentioned several challenges in implementing a

successful request fulfillment process: clearly defining and documenting the requests

handled within the request fulfillment process, establishing self-help front-end

capabilities allowing users to easily interact with the process, establishing service-level

targets for each type of request, and agreeing on the costs of fulfilling requests.

Commonly encountered risks while implementing the request fulfillment process

included poorly defined scope, poorly designed user interfaces, inability to deal with

large number of service requests, and inadequate monitoring capability in gathering

accurate metric numbers (Office of Government Commerce, 2011, pp. 96-97).

The study of Winniford, Conger, and Erickson-Harris (2009) used a “computer-

aided telephone interview (CATI) script” (p. 155) to ask 364 IT system managers in the

U.S. if their companies managed or had plans to manage IT from a service perspective.

The study found that only 45% of companies were using IT Service Management, while

15% were in the planning stages. Major barriers reported by Winniford et al. (2009) for

organizations not practicing IT Service Management and implementing an ITIL

Page 38: A QUALITATIVE CASE STUDY IDENTIFYING METRICS FOR ITIL

26

framework included needs for more information about the framework, difficulty and high

costs of implementation, lack of support from senior management and lack of

collaboration and participation from business units, inadequate resources and capabilities,

tendency to avoid accountability, and outsourcing of IT.

The case study by Lapao (2011) involved an investigation of organizational

challenges and barriers in implementing and assessing IT governance at a hospital in

Portugal using both COBIT and ITIL frameworks. Data were gathered from “internal

document and reports, surveys, interviews, discussions, participant observation, group

work, and performance measurement” (p. 38). Lapao (2011) concluded that IT Service

Management was being carried out in that hospital “inefficiently” (p. 37), and the four

most important challenges were identified as the reasons: inadequate readiness for

change, knowledge and skill gaps, poorly defined roles and responsibilities, and lack of

stakeholder analysis.

The previously mentioned case study by Porter and Cater-Steel (2009) also

offered information on several challenges raised by the four participant organizations

during ITIL implementations. Staff members of one organization found it challenging

when they had to “wear two hats and do two roles” (p. 173). Another organization found

it difficult to engage the right people in changing the corporate culture. The third

organization experienced difficulty in gaining support from technical staff because they

resisted adhering to new policies and processes. The same organization also noted

difficulties in measuring benefits and return on investments (Porter & Cater-Steel, 2009).

Organizational change involving adoption of multiple process improvement

frameworks including ITIL can lead to significant burden on employees, causing


increased levels of stress, loss of morale and productivity, and resistance to change

(Cater-Steel, Tan, & Toleman, 2006), which can also hinder implementation of ITSM and

ITIL best practices. Challenges in implementing ITIL found in Marrone and Kolbe

(2011) included lack of executive support and understanding of ITIL objectives;

inadequate knowledge, skills, and funding; and organizational and cultural resistance to

change. According to Hochstein et al. (2005), ITIL implementation projects were subject

to incurring additional costs related to project planning, marketing, and coordination;

system development and tool customization; contracting and training of personnel;

quality control and consultation; process execution; performance measurement; and cost

of running additional infrastructures. Cater-Steel and McBride (2007) identified the biggest challenges to overcome as persuading individuals to conform their behavior and practices to ITIL best practices, resistance to accepting new processes and procedures, and a lack of understanding of why the changes introduced by ITIL are necessary.

In summary, this section provided information on past research findings related to

the challenges in, barriers to, and risks of adopting ITIL and implementing ITIL

processes. These findings guided interview question Q2 of the present study as

information was sought to answer the first research question and associated sub-

questions.


Implementation Strategies

This section relates to the first research question of the present study and sub-

questions related to strategies for ITIL process implementation. Just like the motives and

challenges, it is also necessary to understand the implementation strategies for the ITIL framework and processes in order to evaluate their success and identify appropriate metrics

and measurements. This section presents major ITIL implementation strategies identified

by researchers. For example, in the study by Porter and Cater-Steel (2009) mentioned in

the previous section, the researchers identified two distinct ITIL implementation

strategies: “big bang” and phased. The big bang strategy is applicable when

implementing the “whole” new system, can be ambitious, creates major changes in

existing business processes, and causes resistance to change (Porter & Cater-Steel, 2009).

Therefore, a big bang implementation strategy is “more appropriate for small companies

faced with shorter implementation times or initial setups in new firms” (p. 166). In the

phased implementation strategy, the new system is implemented module by module over

an extended period according to customer needs, company direction, and budget (Porter

& Cater-Steel, 2009). Regardless of which strategy is selected, ITIL implementation is a

complex undertaking that requires significant investments, extensive changes, and careful

planning and management (Porter & Cater-Steel, 2009).

A case study by Dorogovs and Romanovs (2008) focused on implementing ITIL

processes in government and state institutions in Latvia. According to this case study,

when the Information Centre of Ministry of Interior in Latvia implemented ITIL, the first

step was to introduce a Help Desk to take over the Incident, Problem, and Configuration


Management processes for internal and external users. The implementation strategy

involved a long-term plan of implementing the whole ITIL framework and short-term plans for

implementing different ITIL modules for different business units within the Information

Centre. Deming’s cycle of quality management (Plan-Do-Check-Act model) was used to

receive feedback and inputs for improvements and have them implemented. Pertaining to

implementation of ITIL, Dorogovs and Romanovs (2008) also highlighted challenges

commonly experienced in government institutions when undertaking large-scale IT

projects like ITIL—a lengthy decision-making process due to a strong hierarchical

structure, limited budget, resistance from and unwillingness of employees, lack of

support from senior management, and insufficient communication resources (Dorogovs

& Romanovs, 2008).

The implementation strategy of Coelho and da Cunha (2009) included six

processes for service support and five processes for service delivery. This was an

example of a phased implementation strategy where three high-priority service support

processes (Configuration, Incident, and Problem Management) and two high-priority

service delivery processes (Service Level and Financial Management) were implemented

during Phase 1. The priorities of the processes were determined based on a maturity

assessment of each process by using the OGC self-assessment questionnaires. According

to Coelho and da Cunha (2009), “a higher priority was assigned to processes with the

lowest maturity level” (p. 6). For each process, the implementation strategy also

identified the owner (Accountable), the task executants (Responsible), the experts

(Consulted), and individuals affected by the process improvement (Impacted).


Mohammed (2008) referred to a Customer Resolution and Information Services

Project (CRISP) in a case study that examined implementation of IT Service

Management in a higher education institution in the UK. The researcher stated that the

original scope of CRISP included “replacement of the legacy system with a new IT

Service Management software solution, business process redesign, the adoption of ITSM

best practices such as British Standards (BS15000), and the ITIL framework” (p. 6). The

ITIL framework included six modules: Incident, Problem, Change, Release,

Configuration, and Knowledge Management. The initial time plan to complete CRISP

was set to be 12 months, but the main components of CRISP were not implemented even

after 18 months. The implementation strategy was subsequently revised to adopt a phased

approach in which CRISP became a long-term implementation project. The scope of the

first phase was revised to include only the ITSM software solution and the Incident

Management process with a minor redesign. The new scope of CRISP was achieved

within 12 months after the implementation strategy was revised (Mohammed, 2008).

Cater-Steel and McBride (2007) described the implementation strategy followed

by a large UK-based bank in adopting ITIL. The case study used cognitive mapping to

“focus on the concepts which drive the manager’s view of ITIL implementation” (p.

1202) and actor network theory to explain and structure “the activities of managers in

their practice of service improvement” (p. 1202). Data were gathered from interviews

with the Process Architect. Data analysis was performed after the interviews were

recorded, transcribed, and reviewed and confirmed by the participant. The process of

ITIL implementation involved “transformation of service processes to align with ITIL”

(p. 1207). The bank used a phased implementation strategy as several distinct ITIL


processes were implemented in this order: Incident Management, Problem Management,

Change Management, Configuration Management, and help desk functions.

In summary, this section provided past research findings related to the strategies

of adopting ITIL and implementing ITIL processes. These findings guided interview

question Q3 of the present study as information was sought to answer the first research

question and associated sub-questions.

Critical Success Factors (CSFs)

This section relates to the first research question of the present study and sub-

questions related to the critical success factors (CSFs) of implementing ITIL processes.

CSFs are necessary for an organization, project, or process to achieve its mission.

According to Boynton and Zmud (1984), "Critical success factors are those few things that must go well to ensure success for a manager or an organization, and, therefore, they represent those managerial or enterprise areas that must be given special and continual attention to bring about high performance. CSFs include issues vital to an organization's current operating activities and to its future success" (pp. 17-27). The Office of Government Commerce (2011) listed three critical success factors for the request fulfillment process: requests must be fulfilled in an efficient and timely manner, in line with the agreed service level targets for each type of request; only authorized service requests should be fulfilled; and user satisfaction must be maintained.

In a multi-site case study that focused on the effects of service-oriented IT

management on European companies, Hochstein, Tamm, and Brenner (2005) examined


ITIL transformation projects in six companies: T-Mobile, DaimlerChrysler, KfW

Bankengruppe, BASF IT Services, 3M Germany, and City of Cologne. This study

identified benefits, costs, and critical success factors of ITIL transformation projects. The

case studies were conducted between July 2003 and July 2004, and followed the

PROMET BECS method advanced by Senger and Osterle (2002), which is especially

suitable for studying transformation projects. Data were collected by conducting

structured interviews with the project leaders, followed by having extended telephone

conversations with them, and finally, by reviewing company-specific project related

documents. Hochstein et al. (2005) found these factors to be critical to the successful

implementation of service-oriented IT management: senior management support,

continuous service improvement efforts, training and personnel development programs,

and integration of new processes with existing operational activities.

The study of Porter and Cater-Steel (2009) identified the critical success factors

(third research question) associated with improving ITSM practices and implementing the

ITIL framework in the four organizations mentioned earlier. They found that senior

management support, training and staff awareness, virtual project team, careful software

selection, use of consultants, interdepartmental communication and collaboration, process

priority, ITIL-friendly culture, and customer-focused metrics were the “critical success

factors that need to be carefully monitored and managed throughout all phases of

implementation” (p. 170).

Coelho and da Cunha (2009) examined implementation of ITIL at the Grefusa

Group, a large European organization in the snack food industry. The implementation

project roadmap included six processes focusing on service support (Configuration


Management, Incident Management, Problem Management, Change Management,

Release Management, and Service Desk), and five process focusing on service delivery

(Service Level Management, Financial Management, Availability Management, Capacity

Management, and Continuity Management). The case study highlighted the ITIL implementation strategy, critical success factors, and key performance indicators for all the processes in the roadmap. For Incident Management, CSFs found in Coelho and da

Cunha (2009) were resolving incidents quickly and maintaining service quality. For

Service Level Management, CSFs included managing quantity and quality of IT services

and delivering services at affordable costs. It is noteworthy that Coelho and da Cunha

(2009) did not provide the complete list of CSFs and KPIs for the other processes in the

paper and acknowledged that the complete list was delivered to the Grefusa Group.

Implementation strategy and KPIs described in the study will be elaborated later in this

chapter in appropriate sections.

In the study by Iden and Langeland (2010), 15 experts from the Norwegian Armed

Forces were asked this research question: “What are the most important factors for the

successful adoption of ITIL?” (p. 109). Participants completed an email questionnaire to

list the factors, with an explanation for each factor being selected. By using a Delphi

process, Iden and Langeland (2010) initially compiled a list of 65 factors, and then

selected 12 of the most frequently cited ones: managers’ ownership of ITIL introduction,

senior management support of ITIL, involvement of key personnel in designing and

improving processes, knowledge and understanding of process orientation, starting with

the ITIL processes having the greatest opportunities for success, open and direct

communication with personnel and customers about ITIL, competence in ITIL processes,


having a modular ITSM system for all processes, demonstration of positive project

results, training of ITIL processes, having a standard system for measuring and reporting

on service levels, and being cognizant of the cultural impact of ITIL introduction.

Other researchers reported similar findings in their studies. Somers and Nelson (2001); Hochstein, Tamm, and Brenner (2005); and Tan, Cater-Steel, Toleman, and Seaniger (2007) also considered top management support and training and staff awareness to be CSFs. Hochstein et al. (2005) further considered the virtual project team to be a CSF. Somers and Nelson (2001) agreed with careful software selection, use of consultants, and interdepartmental communication and collaboration as CSFs. Tan et al. (2007) concurred with Porter and Cater-Steel (2007) and Somers and Nelson (2001)

when they stated that the careful selection of an ITSM toolset was critical in ITIL

implementations. According to Porter and Cater-Steel (2009), these toolsets “facilitate the

end-to-end and life cycle view of ITSM by integrating the recording of incidents with the

configuration management database, change management, and asset management” (p.

172). Critical success factors found in Cater-Steel and McBride (2007) included senior management support and buy-in; a communication strategy to disseminate messages and updates to employees throughout project implementation about the goals, objectives, roadmaps, and milestones achieved; ITIL foundation training for all senior managers and executives; and reinforcement of messages and updates by hosting regular brown bag lunch sessions and presentations during departmental meetings, posting signs and symbols at various locations within the physical environment, and displaying real-time service metrics, balanced scorecards, and outage figures where they are easily visible to all employees.


In summary, this section provided past research findings related to the critical

success factors of adopting ITIL and implementing ITIL processes. These findings

guided interview question Q4 of the present study in gathering information needed to

answer the first research question and associated sub-questions.

Metrics, Measurements, and Evaluation

This section relates to the first research question of the present study and sub-

questions related to metrics, measurements, and evaluation of ITIL processes. Metrics are

“standards of measurement by which efficiency, performance, progress, or quality of a

plan, process, or product can be assessed" (Business Dictionary, 2016). According to the Office of Government Commerce (2007), the suggested

metrics needed to judge the effectiveness and efficiency of the ITIL request fulfillment

process were: total number of service requests, number of service requests at each stage

of the request fulfillment process, size of the current backlog of outstanding service requests, mean elapsed time for handling each type of service request, number and percentage of service requests completed within service level agreement targets, average cost per

service request type, and level of customer satisfaction with handling of service requests.

In addition to some of the metrics mentioned above, Steinberg (2013) suggested these

operational metrics for the request fulfillment process: number of service requests

fulfilled without service desk escalation, number of service requests fulfilled without

human intervention, number of service requests fulfilled with proper authorization,

number of service requests that caused incidents, total available labor hours to work on


service requests, total labor hours spent fulfilling service requests, request fulfillment

tooling support level, and request fulfillment process maturity.
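
To ground these lists, the following minimal Python sketch (not from the study) shows how a few of the suggested measures, such as total requests, current backlog, mean elapsed time per request type, and percentage completed within the service level target, might be computed from service request records; the record fields shown are hypothetical, not the study's actual data schema.

from collections import defaultdict
from datetime import datetime
from statistics import mean

# Hypothetical service request records; field names are illustrative only.
requests = [
    {"type": "password reset", "opened": datetime(2016, 2, 1, 9, 0),
     "closed": datetime(2016, 2, 1, 9, 30), "sla_hours": 4},
    {"type": "software install", "opened": datetime(2016, 2, 1, 10, 0),
     "closed": None, "sla_hours": 24},  # still open, so part of the backlog
]

total = len(requests)                                      # total service requests
backlog = sum(1 for r in requests if r["closed"] is None)  # outstanding requests

elapsed_by_type = defaultdict(list)
within_sla = 0
for r in requests:
    if r["closed"] is None:
        continue
    hours = (r["closed"] - r["opened"]).total_seconds() / 3600
    elapsed_by_type[r["type"]].append(hours)
    if hours <= r["sla_hours"]:
        within_sla += 1

closed = total - backlog
print("Total service requests:", total)
print("Current backlog:", backlog)
for req_type, hours in elapsed_by_type.items():
    print(f"Mean elapsed time for {req_type}: {mean(hours):.1f} hours")
if closed:
    print(f"Completed within SLA target: {within_sla / closed:.0%}")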

An exploratory case study by Talla and Valverde (2013) focused on

“implementing ITIL guidelines at an operational level for service desk, incidents,

problems, and change management” (p. 334) for “improving the quality and reducing the

cost of operations” (p. 335) at an IT services company in Liverpool, UK. In Talla and

Valverde (2013), key performance indicators (KPIs) were established to measure quality

improvement and operation costs for various IT services such as Service Desk, Incident,

Problem, and Change Management. Data were gathered using questionnaires, document review, archival records, and observation techniques. Participants were selected via

“convenience sampling by invitation” (p. 335). Participants completed an evaluation

questionnaire before and after implementing each IT service. Descriptive statistics were

used to analyze the data collected from the questionnaires to find general trends. Table

2.2 summarizes the KPIs found in the case study that were used to measure each IT

service:


Table 2.2

KPIs Identified to Measure Each IT Service

Service Desk
1. Time to log the incident.
2. Time to acknowledge the user, categorize and prioritize the incident, start the resolving action, and complete the action.
3. Number of medium- and high-priority incidents.

Incident Management
1. Number of incidents in open and closed state.
2. Number of incidents created and solved within the month.

Problem Management
1. Number of incidents and problems.
2. Average number of incidents related to a problem.

Change Management
1. Number of failed and emergency changes implemented.
2. Number of occurrences of the process being circumvented, and percentages of these numbers.
3. Number of occurrences when the critical level is reached and escalation is performed, and percentages of these numbers.

Note. Adapted from Talla and Valverde (2013, pp. 337-338)

McNaughton, Ray, and Lewis (2010) designed a holistic evaluation framework

for ITSM implementation and improvement efforts especially focusing on ITIL. They

reviewed nine existing evaluation frameworks (p. 221) that provided “improvement of

service quality, IT functional assessment, and evaluation of IT benefits” (p. 220), and

designed the new framework that is more suitable for ITIL by combining common

elements from all and expanding some elements of existing frameworks. McNaughton et

al. (2010) used a design research approach to create the framework and a contextual

inquiry of industry experts method to assess it. The new framework could be used “to

evaluate the change, perform benefit realization, conduct performance assessment, and

direct future improvement” (p. 222).


The main components of the framework proposed by McNaughton et al. (2010)

had four perspectives on evaluation (management, technology, IT users, and IT

employees), two levels of abstraction (corporate, process levels), and metrics for each

ITIL process. According to McNaughton et al. (2010), “Metrics are specific indicators

that are used to show progress or achievement; they are based on simple descriptive

statistics, such as percentages or means, etc.” (p. 223). Their proposed evaluation

framework used three types of process metrics: effectiveness, capability, and efficiency.

The researchers advocated for applying their framework before implementing an ITIL

process to create a baseline, during implementation and towards the end to assess

progress, and also an extended period after the implementation to find continual

improvement opportunities (McNaughton et al., 2010).

In answering the second research question in Gacenga et al. (2011) (RQ2: Which

specific performance metrics can be used to measure ITSM benefits?), the study found

these ITSM performance metrics: number of incidents, problems, and service requests

created and resolved; trend analysis; number of known errors; number of failed changes

(incidents arising from changes); mean time to restore services; first call resolution

percentage; timeliness of resolution (adherence to target resolution timeframes); number

of misclassified incidents and instances of process avoidance by work unit; service

level compliance; mean time between failures; overall availability of services; number of

service requests by service, by location, by user; and number of incidents by service, by

priority, and by location. Gacenga et al. (2011) reported these findings (see Table 2.3) in

response to the third research question (RQ3: How can specific ITSM performance

metrics be derived?).


Table 2.3

Frequency of Recording and Reporting Metrics

Process Name              Frequency of Recording   Frequency of Reporting       Level
Incident Management       Daily, Monthly           Monthly, Quarterly, Ad Hoc   Strategic, Operational, Tactical
Problem Management        Monthly                  Monthly, Quarterly           Strategic, Tactical
Change Management         Daily, Weekly            Monthly, Quarterly           Strategic, Tactical
Service Desk              Monthly                  Monthly                      Operational
Configuration Management  Daily                    Quarterly                    Strategic
Release Management        Daily, Monthly           Monthly, Quarterly           Strategic, Tactical
Service Catalog           Daily                    Monthly                      Operational
Request Fulfillment       Monthly                  Monthly                      -
Availability Management   Monthly                  Monthly                      -
Service Level Management  Monthly                  Monthly                      -

Note. Adapted from Gacenga et al. (2011)

Regarding the fourth research question in Gacenga et al. (2011) (RQ4: What

internal and external environmental factors influence the organization’s selection of

performance metrics for ITSM?), the factors are presented in Table 2.4.


Table 2.4

Environmental Factors Influencing Selection of ITSM Performance Metrics

Internal
1. Corporate strategy and goals
2. Organization size and culture
3. Governance framework; ITIL process implementation
4. HR performance monitoring
5. Top management support for performance metrics
6. CIO influence; senior management philosophy and needs
7. Information Systems (IS) goals: visibility, chargeback, standardization, improvement
8. IS function size (e.g., headcount and budget), structure (centralized, decentralized, or matrix), and maturity
9. ITSM manager perspective; ITSM staff management
10. ITSM and ICT (Information and Communication Technology) tools
11. Internal customers
12. Knowledge management

External
1. Industry sector; legislation; competitive environment
2. Culture external to the organization
3. Climate (including natural and manmade disasters)
4. External customers
5. ITSM resources (e.g., ITIL books, training, standards, consultants)

Note. Adapted from Gacenga et al. (2011, p. 11)

In measuring the performance of service-oriented IT management, Gacenga,

Cater-Steel, Toleman, and Tan (2011) analyzed the three most implemented ITIL

processes—Change, Incident, and Problem Management—based on a survey of IT

service managers. The study answered two research questions: (1) Which specific

performance metrics can measure ITSM benefits? (2) What are the challenges of

measuring and reporting ITSM benefits? An online questionnaire with 25 questions was

sent to 2,085 members of itSMF Australia. Two hundred and fifteen completed responses

were received for a response rate of less than 11%. Gacenga et al. (2011) found these


Change Management metrics: number of successful changes implemented, number of

reduced emergency changes, and number of incidents caused by change. Incident

Management metrics included customer satisfaction numbers, percentage of calls

resolved at first contact, and number of times recurrence of similar incidents was avoided.

The following performance metrics were identified by Gacenga et al. (2011) for

the Problem Management process: number of times and amount of service penalties for

service level agreement (SLA) breaches avoided, number of repeat incidents, and

incident trend by classification. The study also highlighted various challenges in

measuring and reporting benefits. The top three challenges in measuring benefits

included configuring and reporting from ITSM tools, aligning the value of ITSM with

business requirements, and defining tangible benefits. The top three challenges in

reporting benefits were lack of agreement on common metrics across divisions,

inadequate understanding of what needs to be reported on and who to distribute the

reports, and quantifying intangible benefits (Gacenga et al., 2011).

Barash, Bartolini, and Wu (2007) presented a framework for assessing and

improving the performance of an Incident Management process within a leading IT

provider for the airline industry. The case study was based on one year's worth of Incident Management data, comprising over 600,000 incidents handled by 700 support groups. Citing "the amount of time it takes to resolve an

incident” (p. 11) as the ultimate Incident Management metric, Barash et al. (2007)

defined the performance metrics for this process according to two categories: metrics

assessing effectiveness in routing incidents between support groups and metrics assessing

efficiency within support groups. Seven metrics were identified to assess effectiveness in


routing incidents between support groups: number of reassignments per incident, number

of assignment cycles—number of incidents seen twice or more, number of cross-level

reassignments, number of updates between reassignments, number of updates before

incident bounced back, time to closure after reassignments, and number of incidents with

a large processing time before being passed on. Six metrics were considered to assess

efficiency within support groups: fan-in and fan-out of the support group, bottlenecks,

time spent in support group, number of incidents received vs. number of incidents

resolved, number of incidents treated, and number of operators that looked at ticket in

support group. Fan-in is the number of support groups from which the current support

group receives incidents, and fan-out is the number of support groups that the current

support group forwards incidents on to (Barash et al., 2007, p. 14). Bottlenecks were

identified when incidents remained assigned to a support group for much longer than the

average time.
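
As a concrete illustration of the routing metrics, the minimal Python sketch below computes fan-in, fan-out, and the number of reassignments per incident from a hypothetical log of reassignment events; the group names and log format are assumptions for illustration, not the structure of Barash et al.'s (2007) database.

from collections import defaultdict

# Hypothetical reassignment log: (incident_id, from_group, to_group).
reassignments = [
    ("INC-1", "service_desk", "network"),
    ("INC-1", "network", "server"),
    ("INC-2", "service_desk", "server"),
]

fan_in = defaultdict(set)        # groups each group receives incidents from
fan_out = defaultdict(set)       # groups each group forwards incidents to
per_incident = defaultdict(int)  # number of reassignments per incident

for incident, src, dst in reassignments:
    fan_out[src].add(dst)
    fan_in[dst].add(src)
    per_incident[incident] += 1

for group in sorted(set(fan_in) | set(fan_out)):
    print(f"{group}: fan-in={len(fan_in[group])}, fan-out={len(fan_out[group])}")
print("Reassignments per incident:", dict(per_incident))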

The study by Coelho and da Cunha (2009) also identified the KPIs for each CSF; the CSFs were mentioned earlier in this chapter. Table 2.5 displays the KPIs associated with each CSF. As previously noted, Coelho and da Cunha (2009) did not provide the complete list of CSFs and KPIs for the other processes.


Table 2.5

Processes, Critical Success Factors (CSF) and Key Performance Indicators (KPI)

Incident Management
CSF: Quickly resolve incidents
  1. Average time to respond to a call for assistance from first-line personnel.
  2. Number of incidents resolved by first-line personnel.
  3. Number of incidents resolved within the service level agreement.
CSF: Maintain IT service quality
  1. Service time unavailability caused by incidents.
  2. Number of incidents solved before users notice.
  3. Number of incidents reopened.

Service Level Management
CSF: Manage quantity and quality of IT services required
  1. SLA targets missed.
  2. SLA targets threatened.
  3. Customer perception of SLA achievements via survey responses.
CSF: Deliver previously agreed services at affordable costs
  1. Number of SLAs.
  2. Number of SLAs agreed against operational services being run.
  3. Service delivery costs.

Note. Adapted from Coelho and da Cunha (2009, p. 7)

This section provided information on past research findings related to the metrics

and measurements of various ITIL processes. These findings guided interview question

Q5 of the present study to gather information needed to answer the first research question

and associated sub-questions. Interview questions Q7 through Q12, pertaining to the

second research question and associated sub-questions, were also guided by the literature

presented in this section.


Chapter Summary

This chapter offered a report on relevant ITIL and ITSM research and literature

divided into five major sections: motives, justifications, and benefits; challenges, barriers,

and risks; implementation strategies; critical success factors; and metrics, measurements,

and evaluation. The literature review was primarily based on the ITIL core books and

articles and papers published in various journals and conference proceedings. Three

popular versions of ITIL were considered relevant in this study: V2, V3, and 2011. The

literature review revealed that existing research on ITIL and ITSM is dominated by a few

research areas: benefits, challenges, critical success factors, and strategies of

implementing ITIL and ITSM. Although ITIL is heavily process-oriented, the review

also showed only limited research on measuring and evaluating the performance of ITIL

processes. Case studies examining the details of implementing and measuring ITIL

processes in a real-life scenario can be a fertile ground for much-needed research on IT

Service Management.


Chapter 3

Method

This chapter focuses on the method used to analyze the information technology

service provider group’s perspectives on identifying metrics for the request fulfillment

process to create executive dashboards. Two research questions were answered during

this study. Research Question 1: What do the group members perceive as being the most

important metrics of the ITIL request fulfillment process? Research Question 2: How to

create executive dashboards with the metrics perceived as most important by the group

members?

A research roadmap can be a useful tool in presenting the layout for how a study

will be carried out. According to Kashyap (2014), a research roadmap “presents the

various stages of the research process, visually showing the path from commencement to

completion of the research study; increasing the likelihood that the study could be

repeated by an independent researcher, with other subjects and at other points in time” (p.

46). Figure 3.1 shows the research roadmap for this study. A detailed project plan with

timeline is presented at the end of this chapter (see Figure 3.3). Subsequent sections of

this chapter offer information on the research design, data collection, data analysis, and

strategies to address quality in this study.


Figure 3.1. Research Roadmap

[Figure: a flowchart of the study's stages. Research Preparation (title and research topic; generating and exploring the issue; research purpose and questions; literature review) leads to Research Design (qualitative method; case study tradition; IRB approval; participant selection; instrument development; pilot study), then Data Collection (individual and group interviews; archival records; physical artifacts), Data Analysis (participant, peer, and subject matter expert verification; data triangulation; coding and building categories and themes), Quality of Research (trustworthiness; construct validity; reliability), and finally Results and Findings (study summary; conclusions; recommendations).]


Research Design

This research utilized components of the qualitative research approach,

descriptive research strategy, and case study research tradition (strategy of inquiry). This

section describes the rationale behind selecting these research design components.

Qualitative Research Approach

A qualitative research approach is often used to develop an understanding of a

topic, concept or phenomenon that is highly contextual, less clearly defined, and needs to

be explored in depth (Briner, 1997; Patton, 2002). According to Denzin and Lincoln

(1994), “Qualitative researchers study things in their natural settings, attempting to make

sense of, or interpret, phenomena in terms of the meanings people bring to them” (p. 2).

In addition, Creswell (1998) suggested several significant reasons for conducting

research using the qualitative approach. Qualitative research questions ask "how" or "what" about a topic, instead of asking "why" and examining comparisons, relationships, or establishing cause and effect. A second reason for

conducting qualitative research is to explore a topic for which theories must be developed

to explain a behavior, or to obtain a detailed view of the topic from subjects’

perspectives. The present study satisfied all these preconditions for using a qualitative

research approach—it sought to develop an in-depth understanding of a particular topic

or phenomenon (a business process) from the perspectives of a group of subjects (the

users of this process), it took place in a natural setting (in a business environment), and


asked “how” and “what” research questions. For these reasons, using a qualitative

research approach was justified in carrying out the present study.

Descriptive Research Strategy

After the research approach was formulated for this study, the researcher

evaluated four research strategies presented by Marshall and Rossman (2006)—

descriptive, emancipatory, explanatory, and exploratory—to determine the most

appropriate research strategy for this study. A descriptive strategy is used when

describing behaviors, thoughts, or feelings of a particular group of individuals (Leary,

2008) in a real-world context (Yin, 2014). An emancipatory strategy is applicable when

creating opportunities for the subjects of a social inquiry and empowering them to engage

in social action (Marshall & Rossman, 2006). An explanatory strategy is used to explain

patterns or identify plausible relationships for a topic or phenomenon (Marshall &

Rossman, 2006), or to explain how or why a phenomenon occurs or does not occur (Yin,

2014). Finally, an exploratory strategy can be used to identify research questions or

formulate hypotheses that may be tested in a subsequent research study (Yin, 2014).

Based on the purpose and applicability of each strategy, the researcher determined that a

descriptive research strategy was most appropriate for the present study because “it

examines a situation as it is” (Leedy & Ormrod, 2001, p. 191) in a real-life context. Once

a research strategy was selected, the next step was selecting a research design or

“tradition of inquiry” (Creswell, 1998, p. 47).


Case Study Research Tradition

The tradition of inquiry selected for this study was a case study, as this type of inquiry is

used when a researcher seeks to explore the depth of a program, event, or phenomenon

from the perspective of one or more individuals (Creswell, 2009; Stake, 1995).

According to Yin (2003), selecting a case study tradition is appropriate when (a) “how”

or “why” type of research questions are being asked about the topic or phenomenon; (b)

the focus of the study is a contemporary event or phenomenon within its real-life context

as opposed to a historical one; and (c) the researcher has little or no control over the event

or phenomenon. Yin (2003) also suggested that the case study is appropriate when the

researcher has expectations that there will be significant and meaningful revelations of

phenomena.

The present study explored the request fulfillment process (a business process)

from the perspectives of the members of a small group in a contemporary business

setting. The researcher was a member of this group in a non-managerial role, so he had

little or no control over the process or the group. The researcher also expected that the

study would provide significant and meaningful information on how to identify metrics

for this process and create dashboards to present the metrics. Based on the rationale

discussed above, selection of the case study as the tradition of inquiry for the present

study was justifiable. Once the case study was selected as the tradition of inquiry, the

researcher decided that a single-case design (Yin, 2014) was the preferred case study

design for this study. The following section describes the rationale for selecting a single-

case design.


Single-Case Design

The case study design for this research was a single-case, because the study

explored the understanding of a business process from the perspectives of the members of

a small service provider group; sources of data were multiple interviews with

participants, archival data, and official documents created and used by group members.

Yin (2014, p. 51) recommended five circumstances under which a single-case design is appropriate: critical, unusual, common, revelatory, or

longitudinal. Two of these five were applicable to this case study: it was common and it

was revelatory. The case study was common, because it involved the request fulfillment

process—one of the ITSM processes widely used in organizations utilizing the ITIL

framework (Steinberg, 2013). The case study was also revelatory, because the researcher

had access to subject(s) and situation(s) that had not been accessible to empirical study in

the past (Yin, 2014).

Based on the circumstances just described, a single-case design was worthwhile in

this study because it allowed the researcher to “capture the circumstances and conditions

of a situation” (Yin, 2014, p. 52) that is widely present in the ITSM industry, and the

descriptive information from the study was revelatory (Yin, 2014). Now that the single-

case design was justified, the next step was to determine the unit of analysis—the “case”

itself (Yin, 2014, p. 31). The unit of analysis in this study was "a group": the service

provider group that uses the request fulfillment process to satisfy various customer needs.

This case study comprised a "whole" study, in which facts were gathered from various

sources and conclusions were drawn on those facts. Several steps took place to carry the


research study forward: the research instrument was created and validated, criteria were

established to select participants, and exemption approval was obtained from the

Institutional Review Board (IRB). These steps are discussed next.

Instrument Development

The instrument used for this case study was an open-ended structured interview

guide with follow-up questions and probes to gather additional information. The research

questions and sub-questions for this study (see RQ 1 and RQ 2 on pp. 6-7) provided the

basis for the interview guide for interviews with participants. The interview guide was

developed in the manner of a focused interview, “in which a respondent is interviewed

for a short period of time - an hour, for example” (Yin, 2003, p. 90). Although the

interviews were focused, they remained open-ended and assumed a conversational

manner which was likely to generate additional pertinent questions (Yin, 2003).

Following the recommendations of Seidman (2012), a three-round interview

protocol with participants was carried out for a complete and robust examination. All of

the interviews were conducted face-to-face. Participants were interviewed individually

during first- and second-round interviews. The guide for the first-round interview

questions is provided in Appendix E. The second-round interviews were conducted to

clarify anything or request additional information to better understand the answers from

the first-round interviews. The third- and final-round interview was a group interview

with all participants. Table 3.1 provides the links among research questions, interview

questions, and relevant literature.


Table 3.1

Link among research questions, interview questions, and guiding literature

Research Question 1: What do the group members perceive as being the most important metrics of the ITIL request fulfillment process? (Interview questions Q1 through Q6)

Guiding literature:
- Arora & Bandara (2006); Cater-Steel et al. (2006); Disterer (2012); Espindona et al. (2009); Gacenga et al. (2011); Hochstein et al. (2005); Hoerbst et al. (2011); Kumbakara (2007); Lapao (2011); Marrone & Kolbe (2011, 2011a); Office of Government Commerce (2011); Pollard & Cater-Steel (2009); Wagner (2006); Wan & Chan (2008)
- Cater-Steel & McBride (2007); Coelho & da Cunha (2009); Hochstein et al. (2005); Iden & Langeland (2010); Office of Government Commerce (2011); Pollard & Cater-Steel (2009); Porter & Cater-Steel (2007); Somers & Nelson (2001); Tan et al. (2007)
- Cater-Steel & McBride (2007); Cater-Steel et al. (2006); Hochstein et al. (2005); Lapao (2011); Marrone & Kolbe (2011); Office of Government Commerce (2011); Porter & Cater-Steel (2009); Winniford et al. (2009)
- Cater-Steel & McBride (2007); Coelho & da Cunha (2009); Dorogovs & Romanovs (2008); Mohammed (2008); Porter & Cater-Steel (2009)
- Barash et al. (2007); Coelho & da Cunha (2009); Gacenga et al. (2011); McNaughton et al. (2010); Office of Government Commerce (2011); Steinberg (2013); Talla & Valverde (2013)
- Coelho & da Cunha (2009); Conger et al. (2008); Gacenga et al. (2011); Mohammed (2008)

Research Question 2: How to create executive dashboards with the metrics perceived as most important by the group members? (Interview questions Q7 through Q12)

Guiding literature:
- Barash et al. (2007); Coelho & da Cunha (2009); Gacenga et al. (2007); Klubeck (2011); Steinberg (2013)


Pilot Study

A pilot study was conducted as a trial run in preparation for the complete study

and prior to administering the actual interviews. The goal of the pilot study was not to

collect any data, but to identify unclear or ambiguous questions in the interview guide

(see Appendix E), and to observe participants' non-verbal behavior for any embarrassment or discomfort experienced regarding the content or wording of the

instructions or interview questions. Changes were made to the instructions and interview

questions to improve the instrument. Two group members (a manager and a data analyst)

participated in the pilot study conducted in December 2015. They expressed interest in

the pilot study and were selected based on convenience sampling.

The value of the feedback and results obtained from the pilot study cannot be overstated. Participants in the pilot study did not find anything

worrisome in the instrument that could create potential roadblocks that might derail the

research project. Pilot study participants also did not offer any indication that the research

protocol could not be followed by the participants in the actual study. Based on the

feedback received during the pilot study, the time limit for each interview session was

increased from 60 minutes to 90 minutes and the interview guide was revised by

eliminating one question, rephrasing three questions, and adding four new questions.

These revisions were made to allow the participants to better understand the interview

questions and answer accordingly. The newly added interview questions were especially

helpful in understanding and answering the second research question.


Participant Selection Criteria

To select participants for the present study, the researcher determined that

purposeful sampling would be the most direct and meaningful approach to maximize

discovery (Groenewald, 2004; Kelly, 2010). The participants selected for this study had

been members of the group for at least one year as full-time employees and used the

request fulfillment process on a daily basis. Based on these criteria, eight of 13 group

members were selected: a director, an associate director, a manager, a senior data analyst,

a data analyst, two consultants, and a business analyst. The demographics of these

participants were as follows: five females, three males, six Caucasian, and two Asian

employees. The ages of the participants ranged from late 20s to 65 years.

Institutional Review Board (IRB)

The research study met or exceeded all of the requirements of The Pennsylvania

State University’s IRB process. Appendix A contains the final IRB approval letter. The

IRB approval process required completion of an application that included the protocol for

human subject research (see Appendix B) and review of all documents in support of the

research process. The documents reviewed were a recruitment letter (see Appendix C), an

informed consent form (see Appendix D), and an interview guide (see Appendix E).


Data Collection

This study followed a qualitative research tradition in which data (information)

came from interviews with the participants, their stories, testimonials, and physical

artifacts created and used by them (Yin, 2014). Collection of data in this research study

relied on four principles recommended by Yin (2014): using multiple sources of

evidence, creating a case study database, maintaining a chain of evidence, and exercising

care when using electronic sources of evidence. The next section describes the strategies

used in this study to address these principles.

Using Multiple Sources of Evidence

A case study analysis requires data collected from multiple sources. Six sources

of data are appropriate in a case study analysis—interviews, documents, archival records,

direct observation, participant-observation, and physical artifacts (Yin, 2014, p. 102). The

present study used interviews (individual and group), archival records, and physical

artifacts as sources of data collection. How these sources were used in this study is

explained next.

Interviews

The goal of this study was to interview eight group members: a number large enough to complete the interviews within the available time frame, given the time restrictions for each interview, yet small enough to dive deep into each research question (Creswell, 2009; Krathwohl &


Smith, 2005). Prior to the interview, each group member was invited by email to

participate in this research study. A copy of the invitation email can be found in

Appendix C. Participation in these interviews was voluntary, but it was expected that all

eight group members would participate. These interviews were face-to-face and took

place between February 10 and March 8, 2016. The interviews were open-ended and

conversational, and lasted between 60 and 90 minutes each. A standardized open-ended

interview guide (see Appendix E) was the starting point in eliciting participants’

responses and acquiring information on their experiences. Participants’ identities were

kept confidential and they were asked not to discuss their participation in this study with

any other member outside the group until the interviews were completed. This approach

was used to prevent risks such as being influenced by outside bias or skewing of results.

The interviews were audio-recorded and transcribed. The interview transcripts

included both the actual words spoken by the participants and any observations of non-

verbal communications that could enable building on the meaning of the statements

(Hycner, 1985). Complete written interview transcripts were provided to the participants

so they could clarify and verify all data. Participants returned the transcripts to the

researcher with additional thoughts and clarifications on the ideas raised during

interviews. All participants were re-informed of their anonymity and any personal

identifying data were double-checked to ensure their removal.

Besides the face-to-face interviews with each participant, one group interview

session was also conducted. All eight participants were invited to participate in the group

interview. The purpose of the group interview was to discuss any theme not previously

identified and to resolve any conflicting information or data inaccuracy.


Archival Records

According to Yin (2014), archival records contain evidence of past activities—

examples include public use files, service records, organizational records, maps and

charts, or survey data. For this study, JIRA was the archival records source which stored

all service requests placed upon the group by its users and customers since July 2014

when the request fulfillment process officially went into effect. As of October 18, 2015,

over 4,600 service requests had been placed, and the number was expected to grow daily

due to continuous business operations and services provided by the group. JIRA stored

detailed information on every single service request in the form of database records.

Appendix F contains the name and description of the properties in a JIRA record.
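
For illustration only (the study describes its archival source in terms of JIRA's stored records, not a specific extraction script), service request records in a JIRA server of this kind could be retrieved programmatically through JIRA's REST search endpoint. In the Python sketch below, the base URL, project key, JQL filter, and credentials are hypothetical placeholders.

import requests  # third-party HTTP library

BASE = "https://jira.example.edu"  # hypothetical JIRA server URL
# Hypothetical JQL filter for service requests created since July 2014.
JQL = "project = SRV AND created >= 2014-07-01 ORDER BY created ASC"

def fetch_issues(jql, batch=100):
    """Page through JIRA search results and yield one issue dict at a time."""
    start = 0
    while True:
        resp = requests.get(
            f"{BASE}/rest/api/2/search",
            params={"jql": jql, "startAt": start, "maxResults": batch},
            auth=("analyst", "secret"),  # placeholder credentials
            timeout=30,
        )
        resp.raise_for_status()
        page = resp.json()
        for issue in page["issues"]:
            yield issue
        start += batch
        if start >= page["total"]:
            break

for issue in fetch_issues(JQL):
    fields = issue["fields"]
    print(issue["key"], fields["status"]["name"], fields["created"])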

Physical Artifacts

A physical artifact is “a technological device, a tool or instrument, a work of art,

or some other physical evidence” that can be collected and evaluated as a part of case

study (Yin, 2014, p. 117). For this study, several documents and physical artifacts were

collected and evaluated (see Appendix I). For example:

- Reports and files produced by the group in completing a service request. Examples are Enrollment reports, Demographic reports, Application reports, etc. These reports were stored electronically in a network drive accessible only by the staff members of the group.

- Various training materials created, regularly maintained, and used by the group members. These training materials were stored electronically in a network drive accessible only by the staff members of the group.

- The Onboarding documentation and checklists followed when a new employee came on board. These documents were stored electronically in a network drive accessible only by the staff members of the group.

- The request fulfillment process document that outlined the process steps and provided a detailed description of each step. This document was also stored electronically in a network drive but was accessible by staff members of other departments too.

- Meeting notes, which were stored electronically in Box folders and were accessible by staff members of the group.

The collection of data from different sources was reviewed and analyzed together,

so the study findings could be based on the convergence of information from different

sources. The use of multiple sources for data collection also facilitated data triangulation

as described later in this chapter (Yin, 2014, pp. 120-121).

Creating a Case Study Database

A case study database was used for organizing and warehousing case study data

and analyses in a single space (Davis, 2010) from which data could be easily retrieved

(Yin, 2014). Following the recommendations of Yin (2014), such a database was created

and used in this study to include all field notes, transcribed interviews and narratives,


documents and artifacts, etc. The case study database gave the readers an opportunity to

examine the raw data that provided a basis for the conclusions for the study (Davis,

2010). In addition, the case study database enabled a review of connections between data

and claims made in the database and in the final report (Davis, 2010). The database for

this study can be found at: http://sites.psu.edu/sxi5021/.

Maintaining a Chain of Evidence

A detailed log of research activities was kept to document the chain of evidence

(audit trail) during data collection. The research activity log recorded the date, time, and

location of all interviews, and also indicated the circumstances under which official

documents and physical artifacts were collected. Each interview transcription,

organizational document, physical artifact, and field note was assigned a uniquely

identifiable ID. The case study report itself had clearly cited sources used to arrive at

specific findings by referring to specific documents, interviews, or physical artifacts

according to their ID (Yin, 2014). These specific sources contained the actual evidence

highlighted in yellow. The case study protocol, as discussed later in the Reliability

section, was used to establish links between the evidentiary sources and case study

questions (and sub-questions).


Exercising Care when Using Electronic Sources of Evidence

Data from electronic sources were expected to be used in this study in a very

limited capacity. All interviews were conducted face-to-face, and the group interview

session was also in person. Dialogue with participants and interpersonal interactions took place in one physical location, where the researcher and the other group members worked in close vicinity. No data were collected using social media sites like

Facebook, Twitter, or Youtube. No video or photograph was used. Electronic data were

used when documents or physical artifacts were stored electronically (in Box, shared

network drive, etc.). There was a minimal risk of using data from electronic sources in

this study.

Data Analysis

In this qualitative case study research, data came from different sources, such as

interview transcripts, archival records, and physical artifacts. These raw data comprised

large quantities of texts, notes, reports, and audio files. This vast amount of data must be

systematically dissected, rearranged, organized, and interpreted for the researcher to

present and research findings and answer the research questions (Evers & van Staa,

2010). Data analysis for this study was carried out using member validation and check,

data triangulation, and a grounded theory approach.


Member Validation and Check

Member validation and check takes place post-interview when “a researcher

submits materials relevant to an investigation for checking by the people who were the

source of those materials” (Bryman, 2010, p. 634). To prepare for member validation of

this study, the researcher listened to the audio recordings of the face-to-face interviews

and had them transcribed. Interview transcripts were sent to respective participants to

give them an opportunity to review, edit, clarify, and confirm their interview statements,

and to acknowledge that the meanings interpreted by the researcher agreed with what was

intended by the participants (Kashyap, 2014). Patton (2002) described the post-interview

debriefing period when member validation occurs as “the beginning of the data analysis

process” (p. 384); this step was carried out before the triangulation of data analysis step

commenced.

Data Triangulation

Data triangulation provides multiple measures of the same phenomenon and is

used when collecting information from multiple sources to corroborate the same case

study finding or evidence (Yin, 2014, p. 121). If a research finding is derived from a

piece of evidence that comes from several sources of information, the finding is likely to

be more convincing and accurate (Yin, 2014). Figure 3.2 presents the researcher’s

framework for the triangulation from various sources used in this study. Each type of

source of data is expected to yield different evidence that in turn provides different

understandings and insights regarding the phenomena under study.


Figure 3.2. Data Triangulation Framework. Adapted from Yin (2014, p. 121). [The figure shows interview data (individual and group), archival records, and physical artifacts converging on the study findings.]

Grounded Theory Approach

With regard to data analysis in this study, the grounded theory approach was used because its focus is on obtaining an abstract analytical schema of a phenomenon (event, action, or process) related to a particular situation and on generating or discovering a theory (Creswell, 1998). In addition, the value of the grounded theory approach lies in its ability

not only to generate the theory but also to ground that theory in data (Strauss & Corbin,

1998). According to Denzin and Lincoln (1994), the grounded theory research method

comprises systematic inductive guidelines for collecting and analyzing data to build

theoretical frameworks that explain the collected data.

The primary method of analysis in this study was a continuous coding process.

Analysis began with open coding, when the raw data (interview transcripts, field notes,

etc.) were examined word by word, line by line, and incident by incident to define actions

or events within data (Charmaz, 2006). Open coding analysis likely leads to "refining and

specifying any borrowed extant concepts" (Creswell, 2007, p. 160; Strauss & Corbin,


1998). The analysis of axial coding was performed next when “the dissected data is

reassembled as the researcher develops and relates categories” (Benaquisto, 2008, p.

806). Then, categories were further defined by selective coding, "an integrative process of

selecting the core category, systematically relating it to other categories, validating those

relationships by searching for confirming and disconfirming examples, and filling in

categories that needed further refinement and development" (Creswell, 2007, p. 160;

Strauss & Corbin, 1998). Codes and categories were sorted, compared, and contrasted

until saturation was reached—when all data were accounted for in the core categories,

and no new codes or categories could be produced (Creswell, 2013). According to Guest,

Bunce, and Johnson (2006), saturation of basic themes can be achieved in as few as six interviews; by interviewing eight participants, the present study exceeded that threshold. For this study, coding was performed on the interviews, archival records,

and physical artifacts. Verification of coding was also performed. These steps are

elaborated next.

Coding of the Interviews, Archival Records, and Physical Artifacts

Following the practices of Kashyap (2014, pp. 66-67), the researcher used the following steps in coding the interviews and official documents with open, axial, and selective coding techniques (a schematic sketch follows the list):

• Listened to audio-taped interviews and had the respective participants review the interview transcriptions to make any corrections or clarifications required. For official documents, the creators, authors, and users of those documents were asked to clarify or confirm the accuracy of each document.

• Removed any specific names or personal identifiers from the transcripts and official documents.

• Stored interview transcripts in Microsoft (MS) Word documents. Started at the beginning of each interview transcript and highlighted participant ideas, thoughts, or important statements using the New Comment command of the software. The same command was used to highlight official documents stored in MS Word or PDF formats.

• Categorized the comments into open codes. These open codes comprised direct quotes and summarized quotes from the participants and noted whether what was being described was a facilitating factor, an inhibiting factor, or a general comment.

• Grouped open codes into larger and more general axial codes within the same questions, and then across different questions by the same participant.

• Drew axial codes across participants; the researcher looked for common themes and meanings, recurring and repeated phrases, threads of discussion, and similar topics in the transcripts (Saldana, 2008; Sandelowski, 2000; Waltz, Strickland, & Lenz, 2005).
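Conceptually, the open-to-axial grouping and the saturation check described above amount to simple data transformations. The sketch below is purely illustrative: the participant names are the study's pseudonyms, but the open codes, the axial mapping, and the category labels are hypothetical stand-ins, not entries from the study's actual codebook (see Appendix J for that).

    from collections import defaultdict

    # Hypothetical open codes per participant (illustrative stand-ins only).
    open_codes = {
        "Jennifer": ["ticket volume drives staffing", "monthly status reporting"],
        "Jackie":   ["trend data aids decisions", "monthly status reporting"],
        "Carl":     ["trend data supports proactivity", "ticket volume drives staffing"],
    }

    # Axial coding: relate open codes to broader, more general categories.
    axial_map = {
        "ticket volume drives staffing":   "Operational Summary",
        "monthly status reporting":        "Operational Summary",
        "trend data aids decisions":       "Trend Analysis",
        "trend data supports proactivity": "Trend Analysis",
    }

    categories = defaultdict(set)
    seen, fresh_per_source = set(), []
    for participant, codes in open_codes.items():
        fresh = [c for c in codes if c not in seen]  # codes not encountered before
        seen.update(fresh)
        fresh_per_source.append(len(fresh))
        for code in codes:
            categories[axial_map[code]].add(code)

    # Saturation is approached as the number of fresh codes per source falls to zero.
    print(fresh_per_source)  # [2, 1, 1] for this toy example
    for category, codes in sorted(categories.items()):
        print(category, "->", sorted(codes))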


Verification of Coding

Verification of coding in this study was performed following the

recommendations of Patton (1990). Patton (1990) suggested that “it is helpful to have

more than one person code the data. Each person codes the data into a classification

scheme separately and then the results of the coding are compared and discussed.

Important insights can emerge from the different ways in which two people look at the

same set of data, a form of analytical triangulation” (p. 383). Patton (1990) also noted

“the [category] set should be reproducible by another competent judge...the second

observer ought to be able to verify that a) the categories make sense in view of the data

which are available, and b) the data have been appropriately arranged in the category

system” (as cited in Guba, 1978, pp. 56-57). For this research study, a second coder

reviewed the interviews and coded the data using the open and axial coding process.

Once the researcher and the second coder completed the coding of the data, the

themes and findings were discussed with a subject matter expert (SME) in IT Service

Management. The manager of an IT department at Penn State was invited to act as the

SME in this study. The SME was also a doctoral candidate and had studied qualitative research design and analysis methods in graduate-level courses. The IT Service Management SME helped verify that the open and axial codes converged into coherent themes.

The researcher and SME independently reviewed the codes, the meaning and

context behind the coding, and verified the codes for convergence of themes in relation to

the existing literature, research, and practices in the ITSM field. The themes agreed upon by both the researcher and the SME were considered valid and are discussed in chapter 4. The remaining themes, not agreed to by the researcher and SME, were discussed further and, where necessary, deemed unsuitable for the research questions in this study. Instead, these areas of disagreement shed light on the overall ITSM field and were considered best suited for future research opportunities.

Strategies for Addressing the Quality of the Research

The quality of this research study was addressed based on three criteria:

trustworthiness (Krefting, 1991), construct validity (Yin, 2014), and reliability (Yin,

2014). The next section describes the strategies adopted and implemented by the

researcher throughout the study.

Trustworthiness

Krefting (1991) proposed four strategies for addressing the trustworthiness of

qualitative research: prolonged engagement of the researcher, referencing multiple data

sources, participant checking, and peer examination. The researcher was a consultant to the subject of this study (the information technology service provider group) and implemented the request fulfillment process in the group. The researcher also worked with the group to identify and develop the metrics for this process, which required

him to be thoroughly engaged in this study. Data were collected from multiple sources

(e.g., interviews, focus group, documentation, physical artifacts, etc.) to determine the


consistency of a finding. Participants reviewed the transcripts of their interviews to verify

the contents and made corrections when necessary. Peer examination took place once the

transcripts of the interviews were created and sent to an external thematic reader.

Construct Validity

According to Yin (2014), construct validity is “the accuracy with which a case

study’s measures reflect the concepts being studied” (p. 238). Construct validity of a

study can be achieved by using multiple sources of evidence, establishing a chain of evidence, and having key informants review the draft case study report (Yin, 2014). This study used evidence from multiple sources, such as in-person interviews, process documentation, and various physical reports and artifacts. A chain of evidence was

established by “showing how findings come from the data that were collected and in turn

from the guidelines in the case study protocol and from the original research questions”

(Yin, 2014, pp. 237-238). Finally, the draft case study report was reviewed by at least two

participants who showed the most interest in the study and were most engaged during the

interviews.

Reliability

While generalizability of findings is not the purpose or goal of qualitative case study research, maintaining a protocol and demonstrating that the research is repeatable relate to the reliability of the research (Yin, 2014). Reliability is "the


consistency and repeatability of the procedures used in a case study” (Yin, 2014, p. 240).

Reliability is the extent to which a data collection procedure and data analysis produce

the same answer for multiple participants in the research process (Kirk & Miller, 1986).

Maintaining reliability in a case study is essential because it helps minimize the biases

and errors in a study (Yin, 2014). Yin (2014) strongly recommended that a case study

protocol and a case study database be developed when conducting a case study research.

Both actions were performed in this research to enhance reliability. A detailed case study

protocol was established describing the data collection process at length. In addition, a

detailed interview guide was also established to maintain consistency and a replicable

process with each participant (Kashyap, 2014). The case study protocol for this research

is included in Appendix G. Details regarding the case study database were discussed

previously in the Data Collection section.

A detailed project plan for carrying out this study is presented in Figure 3.3.


Figure 3.3. Project Plan and Timeline

Chapter Summary

This chapter has four major sections: research design, data collection, data

analysis, and strategies to assess the quality of the research. The research design section

elaborated on selecting the approach, strategy, and tradition of the research. This section

also discussed instrument development, participant selection criteria, and the IRB

approval process. The data collection section focused on the use of multiple sources of

evidence, creating a case study database, maintaining a chain of evidence, and exercising

care when using electronic sources of evidence. Member validation and check, data triangulation, and

the grounded theory approach were discussed in the data analysis section. Last, the

strategies for assessing the quality of the research were presented.


Chapter 4

Research Findings

Chapter 4 presents the research findings. The chapter opens with background on the subject of the study (i.e., the organization and the service provider group) and a profile of the participants. The next sections contain a brief review of the study, including

purpose, research questions and findings, and methodology used to carry out the study.

The majority of this chapter is dedicated to a report of the findings developed from the

data collection and analysis phase. A summary of the findings concludes the chapter.

Background

As mentioned earlier in chapter 1, the subject of this study was an IT service

provider group (“the Group”) within a higher education organization (“the Organization”)

in the U.S. The organization provides many online education programs (undergraduate,

graduate, professional, etc.) to adult students worldwide. The group is responsible for

identifying, collecting, evaluating, analyzing, reporting, maintaining, and warehousing

data for over 15,000 customers in support of the planning and decision-making activities

of the organization. The group frequently interacted with many stakeholders in

completing day-to-day requests, but had process and procedural flaws in a number of

essential areas of its IT Service Management (ITSM). Prior to 2014, the group’s

stakeholders lacked confidence in the group’s ability to deliver accurate data,


information, and reports in a timely fashion. The group also lacked consistent and effective prioritization of many incidents and service requests, which resulted in wasted effort: effort was expended on less important issues while more important issues went unaddressed.

The main reason for the group’s lack of consistency, ineffective prioritization, and

ultimately, customer dissatisfaction due to poor quality of services was that the group was

throwing incidents, problems, and service requests all into the same service desk melting

pot, with little care given to separating and prioritizing these distinct areas. All issues

were considered service requests, and all group members were choosing what to work on

based on their own determination of importance. There was also no clear structure for

resolutions or communications with the stakeholders about the latest updates on their

requests (Physical Artifact: Stakeholder Map). This resulted in the chaotic, unorganized,

and uncontrolled management of service requests and the ad hoc distribution of tasks that

often went unmonitored. In July 2014, the group adopted a fulfillment process for all

service requests (separate from the Incident and Problem Management processes) based

on ITIL best practices. The request fulfillment process ensured requests were fairly

distributed among staff members and dealt with in a timely, organized fashion. The

request fulfillment process also introduced better reporting of service requests’ status and

clear visibility of the type and frequency of requests.


Participant Profile

There were eight participants in this study who had been working full-time in the

group for more than one year and had been using the request fulfillment process on a

daily basis. The participants included a director (Jennifer), an associate director (Jackie),

a manager (Carl), a senior data analyst (Tiffany), a data analyst (Joanna), two consultants

(Jasmine and Ryan), and a business analyst (Samuel). All participants’ names in this

study are fictitious. The next section briefly highlights their professional experience and

roles within the group.

Jennifer

Jennifer had been with the organization for almost 27 years and served as the

director of the group since February 2010. As the director, she had overall responsibility for maintaining user and customer satisfaction through efficient and professional handling of all service requests. Her responsibilities included providing a

channel through which users may request and receive standard services through a

predefined authorization and qualification process, providing information to users and

customers about the availability of services and the procedure for obtaining them,

sourcing and delivering the components of requested standard services (e.g., licenses and

software media), and assisting with general information, complaints, or comments.


Jackie

Jackie had been with the group since March 2013 as the associate director. In this

role, she was responsible for strategic planning and implementation of IT initiatives,

creating and writing IT-related policies for the department, collaborating with peers

across the organization to discuss common issues, ensuring resource allocation was

handled appropriately, actively participating in the IT leadership council, and serving on

various organization-wide IT committees. Previously, she had worked for more than fifteen years in various educational institutions, where she managed groups responsible for student computer labs, campus systems, and the help desk; researched emerging trends and brought new technology to campus; developed strategic goals for the IT department; provided information and expert advice to faculty and staff; and trained users on and implemented many software applications.

Carl

Carl had been with the group since December 2011. Although he started as a

consultant, he was promoted to manager of the group in June 2014. As a manager, he led

and supervised the development and design of dashboards utilizing Oracle Business

Intelligence Enterprise Edition (OBIEE). He maintained project schedules and managed

day-to-day dashboard development activities. He also managed dashboard security

access, developed training format, and delivered user training sessions on dashboard use.

His responsibilities also included leading process improvement within the group. He

actively engaged with other business units and stakeholders to guide process


enhancements for dashboards, data analysis, and system alignment. He served as a

subject matter expert and a mentor for the group, and frequently provided advanced-level

data analysis to support the organization’s strategic objectives.

Tiffany

Tiffany was a senior data analyst who had been a member of the group since

October 2014. In this role, she gathered and analyzed data from various databases and

sources, performed statistical and mathematical programming, developed reports and

visual representations, and provided consultation and technical assistance to all

stakeholders in determining the appropriate statistical methodology to meet their needs

and objectives. She also used various statistical and mathematical software packages to

summarize and interpret statistical results, and designed, implemented, and coordinated

process improvement activities. Prior to joining this position, Tiffany worked in a cancer

center for almost two years where she applied mathematical modeling, forecasting,

statistical analysis, and data visualization to understand historical and current dynamics

and trends within the organization. She also interpreted, analyzed, and presented results

to stakeholders with different backgrounds, from clinicians to senior leadership, and

provided insights on strategic and clinical decision making.


Joanna

Joanna had more than nine years’ experience in business intelligence analysis in

various industries including higher education and software. She joined the group in July

2014 as a data analyst. In this role, she developed, built, maintained, and automated

complex metric reports involving extraction, compilation and presentation of data from

internal and external databases using database-querying tools. She also developed

complex Microsoft Excel spreadsheets on historical data by performing database joins,

grouping, and filters. Joanna was an expert in manipulating query results using formulas

and presenting results visually using pivots and charts. She also assisted the group with

dashboard and report generation.

Jasmine

Jasmine had been working as a staff member in this group since August 2012 as a

senior consultant. She coordinated, gathered, and analyzed data from various databases to

help meet and inform reporting needs for the organization. She also developed,

maintained, and updated existing reports and databases, determined root causes of

unusual occurrences, and applied common practices and procedures to resolve issues.

Finally, she utilized data to coordinate and track progress on key organizational

performance indicators. Prior to joining this group, Jasmine worked in various U.S.

hospitals for more than six years as a research analyst and a decision support specialist.


Ryan

Ryan had been with the group since May 2014 as a consultant. His responsibilities

included retrieving and analyzing data from various data sources to help meet and inform

reporting needs for the group. He also utilized data to coordinate and track progress on

key organizational performance indicators and assisted stakeholders in making decisions

related to strategic planning. He created dashboards and enhanced data visualization to

help stakeholders interpret and understand the data. He also identified and recommended

process changes to ensure data integrity for the group. Prior to joining the group, Ryan

had more than 20 years’ experience as an industrial engineer, a plant manager, and a

senior production planner in various industries in the U.S.

Samuel

Samuel was an ITIL-certified professional with more than 12 years’ experience in

project management, managing service desk, incident, problem, change, and

configuration management database. He specialized in root cause analysis, gathering and

analyzing business requirements, preparing system design specifications, modeling and

improving processes, and leading testing efforts. He had been with the group since

December 2013 as a business analyst. In this role, Samuel reviewed, analyzed, and

created detailed documentations of business requirements and user needs, including

workflow, program functions, and steps required to develop the knowledge management

database and various IT Service Management modules. He also gathered and analyzed


functional requirements, prepared system design specifications, conducted risk analysis,

and led testing efforts for the group.

Development of Three Themes

The purpose of this study was to examine the request fulfillment process for an

information technology service provider group, to identify which metrics of the process were perceived as most important, and to subsequently create executive dashboards for

displaying those metrics. The study centered on the experiences of the group members as

they related to the following two primary research questions:

Research Question 1: What do the group members perceive as being the most

important metrics of the ITIL request fulfillment process?

Research Question 2: How can executive dashboards be created with the metrics perceived as most important by the group members?

A multiple-perspectives case study methodology was used to explore the

experiences of the participants as they related to the research questions. In-depth, face-to-

face interviews with eight participants were conducted (eight individual interviews and

one group interview). Observations were noted in a field journal according to the

recommendations set forth by Spradley (1980) (see Appendix H). Documents and

physical artifacts consulted for data analysis purposes are listed in Appendix I. All

interviews were reviewed by the researcher through active listening, and then transcribed

and coded using NVivo. Open coding, which generated individual codes subsequently

grouped into categories, was followed by axial coding and constant comparison to


identify themes. Based on the interviews, field notes, and secondary sources (reports,

onboarding documents, process documents, user guides, and archival records between

July 1, 2014 and January 31, 2016), this process initially resulted in 104 codes

subsequently grouped into 17 categories. Appendix J contains the codebook created for

this study that includes these codes, description of each code, and examples. After the

focus group session, the researcher continued the process of data analysis via the constant

comparison method of reading and rereading and ended up identifying 83 codes and nine

metric categories under three overarching themes (dashboard pages). A data triangulation

matrix of these 83 codes’ data sources is presented in Figure 4.1 below:

Figure 4.1. Triangulation Matrix of Data Sources. [The figure shows how the 83 codes were distributed across the three data sources: interview data (individual and group), archival records, and physical artifacts.]

Three more categories were identified by the second coder. Finally, three themes and 12

metric categories were identified for dashboards. Appendix K shows final codes, sources

of each code, categories, and themes which emerged from this study. The following

sections elaborate on the key observations that focused the research study and the final

themes and categories selected after data analysis.


Key Observations in Focusing the Research Study

One of the challenges the researcher experienced in this research study was to

keep the participants, data collection, and data analysis focused on the scope of the study.

This may have been due to differences in understanding of ITIL and IT Service

Management-related terms and concepts among the participants. Every field has its own terminology, and although any business process can be described in plain English, the meaning of a field's terms and jargon can differ completely from their common definitions. The researcher quickly observed the veracity of this in the present study. Since the majority of the participants

(seven out of eight participants) in this study were not certified in ITIL, terms such as

“request”, “incident”, “problem”, “service”, etc. often caused a great deal of confusion

and misunderstanding among many staff members. Therefore, the researcher collected

the data through whatever means the participants could provide, sought frequent

clarification from the participants when ITIL-based terms were used, and then reduced

the data to chunks and themes relevant to this particular research study (Miles &

Huberman, 1994).

The second observation made by the researcher was the high level of cohesiveness and understanding among staff members in the group. The leaders of the

team (director, associate director, and manager) seemed to give autonomy and ownership

to the rest of the staff members, gave others’ ideas a chance, and shared credit where

credit was due. Staff members showed empathy to one another—when one staff member

came to another with a question or for help, they would turn off their cell phone, take


their eyes off their computers, and put away paperwork to give the other person their full

attention. Staff members also practiced active listening—they focused on what the other

staff members were saying, rather than focusing on their own needs or what they wanted

to say next. They asked follow-up questions to make it clear that they were engaged in

the conversation with the other party. The high level of agreement among staff members

was particularly evident during the focus group session when one member’s suggestion

or objection would rarely be challenged or opposed by the rest of the team. It was also

very clear that the team leaders’ suggestions or recommendations would almost always

be accepted by the whole team with much enthusiasm.

While there was convergence of themes, not all interview participants provided

data that related to all three major themes. Each participant met the selection criteria

established for the purposes of this study, and they brought their own unique experiences

of customer support and service management to the interviews. In this research study,

while some themes were mentioned more often than others during the interviews, that did

not undermine the importance of the themes mentioned less frequently. This was because

each theme and metric identified during the interviews was discussed during the focus

group session and the final results were agreed upon “as a team”. Miles and Huberman

(1984) cautioned researchers on the concept of “counting” in qualitative research by

noting, “the hallmark of qualitative research is that it goes beyond how much there is of

something to tell us about its essential qualities” (p. 215). The researcher went beyond

counting how often a particular item was mentioned in an interview, and instead looked

for information, codes, and themes that described the essential qualities of understanding


motivations, challenges, and benefits perceived by the group members in implementing

the request fulfillment process.

What follows is a detailed reporting of themes and the metrics identified for the

dashboard. In the following description of the themes, the researcher reproduces the participants' original verbal emphases, which were organic, borne of heartfelt discussion.

Theme 1: Trend Analysis

As discussed previously in chapter 1, implementing the request fulfillment

process was a “high priority” objective for the group in 2014 (Physical Artifact: Goals

and Values), and the group continued to view request fulfillment as “one of the most

critical and visible processes” for its users (Physical Artifact: Request Fulfillment Process

Document). The group considered the request fulfillment process as an excellent

opportunity to “promote a positive view of the group” among the user population (Field

Notes, April 16, 2016). Three primary reasons became evident for this perception of trend

analysis as important:

• To identify areas where the process is performing well so that the group can duplicate success.

• To identify areas where the process is underperforming.

• To provide evidence of making decisions based on data rather than a "hunch" or a "best guess" (Field Notes, April 22, 2016).


The group used the request fulfillment process exclusively to deal with service requests

rather than incidents, problems, or change requests for three major purposes: first, to

provide users a channel to request and receive various data and reporting services;

second, to provide users information about what kind of services are available and how to

obtain them; and third, to help users with general questions, comments, or information

(Physical Artifact: Request Fulfillment Process Document).

One group member found the request fulfillment process valuable because it

“provides quick and effective resolution of the service requests and improves

productivity of the users” (Field Notes, April 15, 2016). Another member believed the

process “reduced the bureaucracy involved in requesting and receiving services”, thus

“reduced the cost” of providing these services (Field Notes, April 15, 2016). Above all,

centralizing the request fulfillment process for all data and reporting related requests also

“increased the level of control over these types of services”, concurred by both the

manager and the director of the group (Field Notes, April 22, 2016).

The participants, and especially the team leaders (director, associate director, and

manager), felt that trend information should be “most prominently displayed” on the

dashboard (Field Notes, April 22, 2016). Jennifer said that “trend analysis is important

for our group because it is the practice of collecting past and present information to

identify a pattern or trend and predict future events or scenarios.” A similar

sentiment was echoed by Jackie and Carl. Jackie expressed her support for capturing and

displaying trend information when she stated that “trend analysis is useful because it can

be used to compare past data with present data, predict future events based on the

trend found, and make informed decisions.” Having worked for more than fifteen years


in various educational institutes and managed multiple help desk teams, Jackie correlated

trend analysis with many value drivers such as “decreased service downtime”, “increased

customer satisfaction”, “confidence in offered services”, and “increased help desk

performance” (Field Notes, April 22, 2016).

Before being promoted to manager of the group in 2014, Carl was a member of

the group in a non-supervisory role for three years when the concept of ITIL or IT

Service Management was foreign to the team. Carl considered himself "intimately familiar" with the period when there was no formal reporting of past or present performance for staff

members. He could recall some unpleasant yet frequent complaints from many users,

such as “service desk activity failing to support business activities”, “customers not

satisfied by the services offered”, “incidents not resolved in a timely manner”, and

“increasing customer downtime” (Field Notes, April 18, 2016). Being familiar with the

risk drivers stemming from a lack of adequate reporting and trend numbers, and after being put in charge of restoring the team's reputation as a high-quality service provider, Carl attended

various conferences and seminars on ITIL and IT Service Management over the last

twelve months and gained significant knowledge about reporting and trend analysis

(Field Notes, April 18, 2016). According to Carl, “trend analysis helps the group

understand how the request fulfillment process has performed in the past, and predict

where the current operations and practices of the process may take us next.” During the

interview with the researcher, he felt very strongly about capturing and displaying trend

numbers on the dashboards, and once mentioned trend analysis as a “cornerstone” of

being proactive in managing service requests and incidents.


Although trend analysis can be performed on any data that the group captured

over time, the members of the group identified four metrics for which trend analysis was

deemed useful: total number of tickets created per month, total number of tickets closed

per month, number of Priority-1 tickets created per month, and number of Priority-1

tickets closed per month. The leaders of the group decided that these four metrics should

be displayed in two separate graphs—the first graph showing total number of tickets

created and closed per month in a stacked-column bar chart format; and the second graph

showing number of Priority-1 tickets created and closed per month—also in a stacked-

column bar chart format (Field Notes, April 28, 2016).

Total Number of Tickets Created and Closed Per Month

As the name suggests, tickets created and closed per month is simply the total

number of monthly tickets created (when a request is received) and closed (when a

request is completed) by the group. Tickets were the “primary unit of work” for the daily

operations of the group (Field Notes, April 24, 2016). As such, Jennifer emphasized that

“ticket volume drives the headcount of members needed by the group, thus, is

considered to be the number one determinant in making staffing decisions". It was no surprise that the group members unanimously wanted to keep track of how many tickets were

being created every month and how many were being closed on a monthly basis (Field

Notes, April 29, 2016). Although tickets created toward the end of a month might not be completed within the same month, it was standard practice among staff members to ensure that most tickets were completed and closed within the same month and by the due date


unless the due date of the ticket fell in a later month (Field Notes, April 29, 2016). As the

director of the group, Jennifer was accountable for managing the daily operations of the

service provider group and providing monthly status updates to the department’s

Executive Director. “I definitely want to see on the dashboard how many tickets we

created and closed in a given month. I want to see these numbers starting from when we

started to use the request fulfillment process”, declared Jennifer during a team meeting

(Field Notes, April 29, 2016).

Both Jackie and Carl expressed similar sentiment. Recalling her experiences in

one of her previous positions in a different organization, Jackie mentioned, “In the

company where I worked before coming here, the first numbers we reported on a

monthly basis were the number of tickets we created and closed in a given month.

Here too, [pointing at her current workplace], I think these are the most basic but

important metrics that should be reported on the dashboard” (Field Notes, April 22,

2016). Carl, who was responsible for reporting routine and assigned tasks, also felt that

the number of tickets created and closed on a month-by-month basis was valuable to a

number of stakeholders—service owners, project managers, executive leaders, and

customers. Carl also expressed his preference for seeing these numbers in a single chart

on the dashboard (Field Notes, April 29, 2016). He went on to say:

We want to make sure all the tickets are completed and closed within the due

date of the ticket. We understand that it is not realistic to close all the tickets

in the same month they were created, but vast majority of the service requests

that we receive could be completed and closed within few days. So it makes

sense for us to see on the dashboard the total number of tickets created and


closed on a month-by-month basis. If we see a vast discrepancy or disparity in

a given month, then we can look further into it and find out what’s going on.

Jennifer, responding to a question on how she would like to see these metrics on

the dashboard, thought for a while (Field Notes, April 27, 2016) and stated that, “a

stacked-column bar chart with red bars showing the number of tickets created (per

month) and green bars showing the number of tickets closed (per month) seems

appropriate and I think it would look nice.” Both Jackie and Carl mentioned that a

stacked-column bar chart would be appropriate for displaying these numbers on the

dashboard, but they did not have any “color” requirement (Field Notes, April 28, 2016).
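The group's production dashboards were built in OBIEE, so the following is only a rough sketch of the agreed chart, rendered with matplotlib and invented monthly counts; the bars are drawn side by side rather than stacked purely for legibility, and the red/green color choice follows Jennifer's stated preference.

    import matplotlib.pyplot as plt

    # Invented placeholder counts; the real numbers would come from JIRA.
    months = ["Jul 14", "Aug 14", "Sep 14", "Oct 14"]
    created = [42, 51, 47, 56]
    closed = [40, 49, 48, 53]

    x = range(len(months))
    width = 0.4
    fig, ax = plt.subplots()
    ax.bar([i - width / 2 for i in x], created, width, color="red", label="Tickets created")
    ax.bar([i + width / 2 for i in x], closed, width, color="green", label="Tickets closed")
    ax.set_xticks(list(x))
    ax.set_xticklabels(months)
    ax.set_ylabel("Number of tickets")
    ax.set_title("Tickets Created and Closed per Month")
    ax.legend()
    plt.show()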

In response to how the numbers for these metrics could be obtained to create the

dashboard, Carl answered, “Our JIRA system has all the tickets since July 2014 when we

first started to use the request fulfillment process, so these numbers can be easily

retrieved by running some simple queries in JIRA.”
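Carl did not specify the queries themselves, but date-bounded counts of this kind are straightforward to express in JIRA's query language (JQL). A minimal sketch that builds one pair of JQL strings per month follows; the project key "SRV" is hypothetical, and the "closed" query assumes the group's workflow records a resolution date when a ticket is closed.

    from datetime import date

    def month_bounds(year, month):
        """Return the first day of the month and of the next month as JQL dates."""
        start = date(year, month, 1)
        end = date(year + 1, 1, 1) if month == 12 else date(year, month + 1, 1)
        return start.isoformat(), end.isoformat()

    def monthly_jql(project, year, month):
        start, end = month_bounds(year, month)
        return {
            # Tickets created during the month.
            "created": f'project = {project} AND created >= "{start}" AND created < "{end}"',
            # Tickets closed during the month (assumes a resolution date is recorded).
            "closed": f'project = {project} AND resolved >= "{start}" AND resolved < "{end}"',
        }

    print(monthly_jql("SRV", 2014, 7))  # "SRV" is a hypothetical project key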

To keep track of all the tickets created, it was necessary for the group to comply

with two critical activities (Physical Artifact: JIRA Requirements Document): identify all

the sources through which service requests could be submitted by users, and manually

create a ticket in JIRA for each valid service request (unless a ticket is automatically

created in JIRA when users submit a request).

To address the first critical activity, Jackie stated the following about the potential

sources for a service request:

The most preferred source of all service requests is a web portal or an online

form (E-Form) that is completed by the requester of the service. The online

form contains all the required information for the request to be assigned and


processed. When the requester submits the form, it is automatically forwarded

to JIRA to create a ticket. The ticket number is used to uniquely identify the

service request.

Although the online form was the most preferred source (Field Notes, April 13, 2016), not all users were aware of the form, and some were simply "too lazy" to fill it out in its entirety. Instead, many users preferred simply sending an email to the group stating their

request. So the group decided to have email as the “second-most preferred” source of

service requests (Physical Artifacts: Training Materials; Vendor Document – The 7 Steps

Improvement Process). Carl stated:

The second-most preferred service request source is email—when

requesters send email to the group’s support team (email address withheld)

with the request. The email is then forwarded to JIRA by the group’s

administrative assistant to manually create a service request ticket.

The least-desirable sources of service requests included users sending email to an

individual member of the group, calling a group member by phone, or simply walking up

to a staff member and verbally stating their request (Field Notes, April 20, 2016).

Jennifer discouraged users from using these methods when submitting a service request:

Users are discouraged from sending service requests through these means

(personal emails, phone calls, or walk ups) to avoid possible delays due to

the risk of the request being lost, overlooked, or incorrectly assigned. When

users send personal emails, call or physically come to a group member with

their request, the group member would still create a service request in JIRA,


but would also strongly encourage the requester to use the online form for

future requests.

To address the second critical activity—having a JIRA record for all service

requests, Jennifer required all valid service requests, no matter their source (through

online form, email, phone call, or walk up), to be entered into JIRA as a service request

ticket (Field Notes, April 18, 2016; Physical Artifact: Onboarding Document). Jennifer

explained how this important activity was achieved by the group:

If a service request is submitted using the online form, the request directly

goes to JIRA and a ticket is automatically created. A JIRA ticket is also

known as a JIRA record. When a ticket is received in JIRA, the requester

also gets an automated message from JIRA saying "Your request has been received. You will be notified when the request is assigned." The newly

created JIRA ticket is placed in the “New Arrival” queue.

When a service request came through any means other than the online form, the JIRA

ticket for that request was created manually (Field Notes, April 23, 2016). Jackie

explained:

If a service request is emailed to the group (group email address withheld),

currently there is no “automated” response sent to the requester when the

request is received. These requests are manually forwarded to JIRA by the

administrative assistant of the group. In this case, the administrative

assistant manually creates a new JIRA ticket, places it in the “New Arrival”

queue, and notifies the requesters saying that a ticket has been created with


their request (also lets them know the ticket number for future

communication).
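Taken together, Jennifer's and Jackie's descriptions reduce to a small set of intake rules. The sketch below models those rules; it is a schematic of the described workflow rather than the group's actual automation, and the notify helper is a hypothetical stand-in for the real email notification.

    def notify(requester, message):
        # Stand-in for the real email notification.
        print(f"To {requester}: {message}")

    AUTO_ACK = ("Your request has been received. "
                "You will be notified when the request is assigned.")

    def log_service_request(source, requester, summary):
        """Route a valid service request into the 'New Arrival' queue (sketch)."""
        ticket = {"queue": "New Arrival", "source": source, "summary": summary}
        if source == "online_form":
            # The form posts straight to JIRA; the acknowledgment is automated.
            notify(requester, AUTO_ACK)
        elif source == "group_email":
            # The administrative assistant enters the ticket manually and
            # replies with the new ticket number.
            notify(requester, "A ticket has been created for your request.")
        else:
            # Personal email, phone call, or walk-up: discouraged, but a ticket
            # is still created and the online form is encouraged for next time.
            notify(requester, "Ticket created; please use the online form next time.")
        return ticket

    log_service_request("online_form", "user@example.org", "Enrollment report refresh")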

Number of Priority-1 Tickets Created and Closed Per Month

Prioritization of service requests was an important step in the request fulfillment

process (Physical Artifact: Request Fulfillment Process Document). The request

fulfillment process document prepared by the group members and approved by the

Executive Director of the department specified in boldface letters:

The priority of a service request is an assigned value representing relative

importance or sequence in which the service request should be addressed.

The impact of a service request (on the business/organization) and the

urgency of a service request (to the business/organization) will determine

the priority of the request.

The impact of a service request was determined by asking several key questions

(Physical Artifact: Request Fulfillment Process Document): Who is making the request?

What is the purpose of the request? How many people are impacted, and who are they? What is the possible financial impact? What is the impact on the reputation of the group or the organization? Is there a regulatory or legislative requirement? The impact of a ticket was categorized as High, Medium, or Low. To determine the urgency of a service

request, the group carefully considered how soon the request must be completed to avoid

a negative impact (Field Notes, April 25, 2016). The urgency of a ticket was also categorized as High, Medium, or Low. The priority codes were Priority-1 or P-1


(Critical), Priority-2 or P-2 (High), Priority-3 or P-3 (Medium), Priority-4 or P-4 (Low),

and Priority-5 or P-5 (Planning) (Physical Artifact: JIRA Workflows). The following

table shows a matrix of how service requests were prioritized:

Table 4.1

Prioritization Matrix of a Service Request

Urgency    Impact: High            Impact: Medium          Impact: Low
High       Priority-1 (Critical)   Priority-2 (High)       Priority-3 (Medium)
Medium     Priority-2 (High)       Priority-3 (Medium)     Priority-4 (Low)
Low        Priority-3 (Medium)     Priority-4 (Low)        Priority-5 (Planning)
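Table 4.1 is effectively a lookup from an (impact, urgency) pair to a priority code. A minimal sketch of that mapping follows, with the nine cells transcribed directly from the table:

    PRIORITY_MATRIX = {
        ("High", "High"): "Priority-1 (Critical)",
        ("Medium", "High"): "Priority-2 (High)",
        ("Low", "High"): "Priority-3 (Medium)",
        ("High", "Medium"): "Priority-2 (High)",
        ("Medium", "Medium"): "Priority-3 (Medium)",
        ("Low", "Medium"): "Priority-4 (Low)",
        ("High", "Low"): "Priority-3 (Medium)",
        ("Medium", "Low"): "Priority-4 (Low)",
        ("Low", "Low"): "Priority-5 (Planning)",
    }

    def prioritize(impact, urgency):
        """Assign a priority code from Table 4.1 given a ticket's impact and urgency."""
        return PRIORITY_MATRIX[(impact, urgency)]

    assert prioritize("High", "High") == "Priority-1 (Critical)"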

Although group members often mentioned tracking the number of P-1, P-2, P-3,

P-4, and P-5 tickets created every month and how many were closed on a monthly basis

(Field Notes, April 27, 2016), the group ultimately decided that only P-1 numbers should

be displayed on the dashboard; the group would create ad hoc reports (outside the

dashboard) to show the numbers for P-2 through P-5 tickets (Field Notes, April 30,

2016). This was the result of a lengthy discussion during the focus group session when

Jennifer asserted the following decision (with the approval of the rest of the group):

There’s no denying that I want to know how many P-1, P-2, P-3, P-4, and P-5

tickets were created every month and how many of them were closed. But I

also don’t want to clutter the dashboard page with too many graphs.

Therefore, let’s show the metrics for only the P-1 tickets on the dashboard,

and create reports or write queries in JIRA to see the numbers for P-2, P-3, P-

4, and P-5 tickets.


Jackie showed her support by nodding emphatically (Field Notes, April 30, 2016) and

echoing Jennifer’s sentiment:

You are absolutely right, Jennifer. P-1 tickets are “critical” and must be

completed in less than four hours. These tickets also impact the senior leaders

of the department and organization (names withheld)—for example, directors

of other groups, executive directors, and vice presidents.

Carl, as a regular attendee in the requirements gathering sessions with various

stakeholders and dashboard users, was sensitive to their needs (Field Notes, April 28,

2016), and pointed out that dashboard users do not want to see too much information on

the dashboard, just that which is important (Physical Artifact: Stakeholder map). Carl was

keen to jump into the conversation:

Let’s not forget that the executive dashboard users (i.e. the directors,

executive directors, and vice presidents) would mainly be looking at the trend

analysis dashboard page and associated graphs, so it is important to show

them the “really important” metrics instead of showing them everything. We

can always create reports outside the dashboard pages to show them other

numbers, should they ask for those.

In response to how the group wanted to see these numbers displayed on the

dashboard, Jennifer proposed consistency with "how the other graph (the total number of tickets created and closed per month graph) would look like. So does a

stacked-column bar chart with red bars showing the number of P-1 tickets created (per

month) and green bars showing the number of P-1 tickets closed (per month) seem

reasonable to everybody in the group?” The entire team voiced their agreement


enthusiastically (Field Notes, April 30, 2016). Figure 4.2 shows the trend analysis

dashboard page.

Figure 4.2. Trend Analysis Dashboard Page

Theme 2: Monthly Operational Summary

All participants in this research study expressed a need to see various summary

numbers for monthly operations on the dashboard (Field Notes, April 30, 2016). Making

operational data pervasive to all group members, stakeholders, and customers of the

group was viewed as a “powerful tool for mapping performance against corporate goals

and key performance indicators that have been agreed to by management and

communicated to all group members” (Field Notes, April 23, 2016). The group operated

like a service desk designed to serve as a single point of contact for all data and reporting

requests and was responsible for handling recurring and ad hoc requests from the users

(Physical Artifact: Goals and Values). Showing monthly operational summary numbers


on the dashboard was imperative for two major reasons: first, to turn raw data gathered

from the request fulfillment process into information on which “informed management

decisions” could be based; and second, to strengthen the case for “obtaining additional

resources and funding” for the group by communicating the “value of the support”

provided by the group, and by demonstrating that the group was "high-potential" and capable of providing services in terms of quality and timeliness (Field Notes, April 24, 2016).

As the director of the group, Jennifer was responsible for giving the final “green

signal” to all team decisions (Field Notes, April 22, 2016). Naturally, she was interested

in operational numbers “to assist her in the decision making process” (Field Notes, April

22, 2016). Jennifer believed that “by using the operational summary numbers of the

request fulfillment process our team will be able to make informed, rational decisions

about the data, reports, and other services offered to our users.” She mentioned “raw

request count”, “mean time to resolution”, “time to response”, and “time to escalate” as

examples of operational summary numbers that she prepared in her previous position that

were often “required” by senior management (Field Notes, April 18, 2016).

Jackie expressed similar sentiments as she referred to operational numbers as an

indicator of the group’s “service level” and “performance” (Field Notes, April 26, 2016;

Physical Artifact: What is a KPI). Based on her past experience and knowledge of

preparing operational summary numbers (weekly, monthly, quarterly, even yearly), Jackie boldly stated that "monthly operational numbers

can be used to show to the senior leaders of our department (name withheld) the level of

service given by, and performance of, our group” (Physical Artifact: RACI Matrix).


Carl felt that operational numbers could be used when sending the request for

“additional funding and resources” to the Executive Director (Field Notes, April 29,

2016). As the entire organization had recently faced a budget cut, obtaining additional

funding for new positions proved to be a “real challenge” and had to be based on

“operational numbers that show more money or more people are needed to get the job

done correctly and on time” (Field Notes, April 19, 2016). Carl provided some valuable

insights when he said:

Communicating the value and support provided by our team helps us being

considered high-performing and strengthens the case for additional resources

and funding for our team. Therefore, we must align our team’s objectives

with the department’s and organization’s business objectives. By knowing

the business objectives, our team can easily set support goals that are aligned.

This in turn will lead to identifying and producing metrics that can validate

the alignment.

An example of how the group's goals could be aligned with, and facilitate meeting, the business goals was demonstrated during the focus group

session with the participants (Physical Artifact: Goals and Values, Vendor Document –

The V Model Perspective). One of the business goals of the group was to "increase the number of request fulfillment process users by 20% by the end of the calendar year." To support this business goal, three complementary goals were also identified (Physical

Artifact: Goals and Values):

• Eliminate reports delivered to users that contain incorrect data.

• Minimize the time it takes to deliver reports with correct data to users.

• Meet service-level goals for all data and report requests.

Jasmine understood the operational summary numbers as predictors of "customer satisfaction", "team productivity", and "quality" for the end users (Field Notes, April 30,

2016). During the personal interview session, she was delighted to explain how

operational numbers can influence overall customer satisfaction:

By supporting the customers, those who support the end users, we are

indirectly influencing the overall customer satisfaction. End users have a

better perception of our department and organization when they get better

service that is reliable and consistent. This is possible when the members of

our team are productive and willing to provide highest quality services to

the customers.

The following categories were identified as the monthly operational summary

metrics to be displayed on the dashboard: number of tickets by issue type, number of

tickets by priority, number of tickets by issue status, and number of tickets by

department/area (Field Notes, April 30, 2016).
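All four summaries can be derived from the same monthly ticket extract by counting on a different field. A minimal sketch follows, with invented records standing in for a JIRA export (the field names are assumptions, not the group's actual JIRA schema):

    from collections import Counter

    # Invented records standing in for one month's JIRA extract.
    tickets = [
        {"issue_type": "Service - Ad Hoc", "priority": "P-3", "status": "Closed", "department": "Admissions"},
        {"issue_type": "Access", "priority": "P-1", "status": "Closed", "department": "Finance"},
        {"issue_type": "Service BI", "priority": "P-2", "status": "Open", "department": "Admissions"},
    ]

    # One count per dashboard category: issue type, priority, status, department.
    for field in ("issue_type", "priority", "status", "department"):
        print(field, dict(Counter(t[field] for t in tickets)))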

Number of Tickets by Issue Type

Issue Type was used as a basic category to organize all the service requests the

group received from the users (Physical Artifact: JIRA Workflows). There were ten

different issue types for all service requests: Access, Feedback, Incident, Informative,

Rejected, Service – Ad Hoc, Service – Recurring, Service BI (Business Intelligence),

Inquiry, and Disruption (Physical Artifacts: JIRA Requirements Document, JIRA


Database). The issue type for a service request was determined by the administrative

assistant, manager, or associate director when accepting or rejecting a service request

before prioritizing it (Physical Artifact: JIRA Requirements Document).

Several staff members believed that it would be valuable to have the dashboard

display the number of tickets by issue type on a monthly basis (Field Notes, April 1,

2016). Ryan once had categorized a request using an incorrect issue type, thereby

delaying its resolution and causing incorrect reporting (Field Notes, April 1, 2016). Ryan

expressed his dissatisfaction with incorrect issue types as he cautioned, “choosing the

wrong issue type for categorization will have repercussions throughout the lifecycle of

a service request in the request fulfillment process—from inefficiencies in assigning

requests to inability to accurately report on the types of requests we are receiving. So it is

important that we can easily see these numbers on the dashboard for every month since

July 2014.” Joanna felt that issue type could be used to “organize the requests very

easily” (Field Notes, May 1, 2016). She reinforced the necessity of having this metric

because it is “high level”, “easy to gather”, and “maps nicely” to the person completing

the request. She gladly elaborated by saying (Field Notes, May 1, 2016):

I find the most common way to organize our support tickets is by the issue

type, and they are so easy to gather. In my experience, in most cases,

organizing service requests by issue type maps nicely to the people who work

on completing those requests. So I think that team leaders and senior

executives would consider issue type a high-level metric and be very

interested to know these numbers.

Tiffany believed that there are many ways to categorize service requests, and service

provider groups should adopt what works best for them (Field Notes, May 1, 2016).

Since determining the issue type for a service request is one of the first steps in the

request fulfillment process (Field Notes, May 1, 2016), Tiffany was optimistic about

displaying this metric on the dashboard. She offered her insight:

Many organizations choose to categorize their service requests based on the

department that handles the requests, by the product, or by the customer

they serve. But, in the request fulfillment process that our group follows, one

of the very first steps after logging a service request is to determine its issue

type. Therefore, we should definitely display this metric on the dashboard.

In response to how the group wanted to see these numbers displayed on the

dashboard, there was general agreement during the focus group session among group

members that for each month, a pie chart showing the numbers for each issue type in

separate slices would be simple yet sufficient (Field Notes, April 30, 2016).
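As an illustration of that display choice, a monthly pie chart of issue-type counts can be produced with a few lines of matplotlib. This is a sketch with hypothetical counts, not the group's actual dashboard code.

    import matplotlib.pyplot as plt

    # Hypothetical monthly counts of tickets by issue type.
    counts = {"Access": 42, "Incident": 17, "Service - Ad Hoc": 25, "Inquiry": 9}

    # One slice per issue type, labeled with its share of the month's total.
    plt.pie(list(counts.values()), labels=list(counts.keys()), autopct="%1.0f%%")
    plt.title("Tickets by Issue Type (April)")
    plt.show()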

Number of tickets by priority

As mentioned in an earlier section, prioritization of service requests was a critical

step in the request fulfillment process (Field Notes, May 1, 2016). The priority assigned

to a service request was based on its impact (the measure of how business critical a

service request is) and urgency (how soon the request must be completed) (Physical

Artifact: Request Fulfillment Process Document). A service request was considered to

have a high impact if it affected the senior executives of the organization or two other

teams within the department (names withheld) that supported business intelligence and

budget-related functions (Field Notes, May 5, 2016; Physical Artifact: What is a KPI).

Requests affecting only the directors and managers were regarded as having a medium

impact, while requests affecting only the staff members had a low impact. Requests with

high urgency were treated as being highly time-sensitive and were taken care of as soon

as possible. Low urgency requests were not considered to be time-sensitive as the cost of

not taking care of these requests only marginally increased over time (Field Notes, May

5, 2016). Regardless of the perceived impact and urgency of any service request, the

group strived to complete each request before its due date (Field Notes, May 5, 2016).
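The impact and urgency rules described above lend themselves to a small lookup table. The sketch below assumes a common ITIL-style mapping from (impact, urgency) to the P-1 through P-5 labels; the group's exact matrix was not published, so the assignments shown are illustrative only.

    # Hypothetical priority matrix: (impact, urgency) -> priority label.
    PRIORITY = {
        ("high", "high"): "P-1",    # e.g., affects senior executives, time-sensitive
        ("high", "low"): "P-2",
        ("medium", "high"): "P-2",  # e.g., affects directors and managers
        ("medium", "low"): "P-3",
        ("low", "high"): "P-4",     # e.g., affects staff members only
        ("low", "low"): "P-5",
    }

    def prioritize(impact: str, urgency: str) -> str:
        """Assign a priority from a request's impact and urgency."""
        return PRIORITY[(impact, urgency)]

    print(prioritize("high", "high"))  # P-1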

Several group members who participated in this study considered prioritization of

service requests as a means to “be fair to all users” and “avoid guesstimates about which

request should be completed first” (Field Notes, May 1, 2016). So they naturally

expressed a need to display the monthly number of tickets by priority on the dashboard. For

example, Tiffany strongly felt that “the number of tickets by priority, on a monthly

basis, should be displayed on the dashboard because prioritization allows us to be fair to

all users and to objectively score each service request according to established rules

rather than doing guesstimates or favoring certain users or groups.” Jennifer believed that

it was important to prioritize from a service level agreement and operational point of

view, and offered her opinion:

On the dashboards, on a monthly basis, I would like to see the number of

tickets broken down by priorities to keep an eye on any sharp change

(increase or decrease) in the numbers of P-1 and P-2 tickets in particular.

These tickets affect our executives, directors, and managers; so we must be

able to keep track of these requests and complete them on time—in less than

four hours for P-1 requests and in less than eight hours for P-2 requests.

According to Jackie, knowing the number of priority tickets “not only promotes fairness”

in dealing with users, but also allows the group leaders to be “fair with members”, too—

staff members who do the actual work to complete requests (Field Notes, May 3, 2016).

Jackie said:

We want a fair distribution of all the requests among our staff members.

Typically, P-1 and P-2 requests have zero room for errors, so they must be

thoroughly prepared and reviewed before being delivered. Requests with

priorities P-4 or P-5, although they must be correct when delivered, have

lower impact (i.e. they are not as critical as P-1 or P-2 requests). We don’t

want certain group members handling only the P-1 or P-5 tickets. We want to

make sure each group member has a fair and balanced distribution of P-1

through P-5 requests to complete.

To illustrate further the importance of showing the number of tickets by priority

on the dashboard, Joanna revealed her frustration with the complexities of prioritization

when she first started working with the group. She mentioned, “By having well-

established rules on setting the priorities, training time for new staff members is reduced,

because they don’t have to understand the complexities behind prioritization.” Jasmine

found prioritization of service requests as “empowering” and was very thankful for this

practice (of prioritization) when she stated, “Prioritization empowers the group

members when management support is needed. When we are confronted by irate users,

now we can justify if we must ask them to wait or require additional information from

them to complete a request.” Samuel felt that being able to prioritize a request based on

well-defined and widely accepted rules was an example of “ITSM best practice”, and it

made his job “less stressful by eliminating the decision dilemma”. “Now I don’t ask my

manager—what do I do next?” said Samuel gleefully.

In response to how the group wanted to see these numbers displayed on the

dashboard, there was general agreement during the focus group session among group

members that for each month, a pie chart showing the numbers of tickets for each priority

(with corresponding percentage) in separate slices would be appropriate (Field Notes,

May 15, 2016).

Number of tickets by issue status

The issue status (also known as ticket status or request status) was one of the most

important ticket fields for the group in carrying out the request fulfillment process—it

helped them define their core support process. It was a part of the default ticket form and

helped the group manage the lifecycle of a request from the time it was recorded in their

JIRA system (Physical Artifact: JIRA Database), to the point where it was closed by one

of the group members. By default, JIRA had four statuses for every service request ticket:

Open, Pending, Resolved, and Closed (Physical Artifact: JIRA Database). Additionally,

the group created six new statuses based on their business needs and issue type:

Assigned, In Progress, Needs Reviewed, Reviewed, Ready for Production, and

Production. These ten issue statuses made up the entire lifecycle of a service request,

although not every request went through each of these statuses (Physical Artifact: JIRA

Requirements Document). Note that a request ticket could only be in one of these statuses

at any given point while going through the support process (Field Notes, May 10, 2016).
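One way to represent such a lifecycle in code is a transition map that also enforces the single-status rule. The permitted transitions below are illustrative; the group's actual JIRA workflow defined its own.

    # The ten issue statuses: four JIRA defaults (Open, Pending, Resolved, Closed)
    # plus the six created by the group. Illustrative transitions only.
    TRANSITIONS = {
        "Open": {"Assigned", "Pending", "Closed"},
        "Assigned": {"In Progress"},
        "In Progress": {"Needs Reviewed", "Pending"},
        "Needs Reviewed": {"Reviewed", "In Progress"},
        "Reviewed": {"Ready for Production", "Resolved"},
        "Ready for Production": {"Production"},
        "Production": {"Resolved"},
        "Pending": {"In Progress"},
        "Resolved": {"Closed"},
        "Closed": set(),
    }

    def move(current: str, new: str) -> str:
        """Advance a ticket to a new status; a ticket holds exactly one status."""
        if new not in TRANSITIONS[current]:
            raise ValueError(f"cannot move from {current!r} to {new!r}")
        return new

    status = move("Open", "Assigned")  # a request need not pass through every status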

To illustrate the significance of knowing the status of a service request, Robert

felt that he needed to “thoroughly understand” the meaning of all issue statuses, and

stressed the importance of being able to “provide up-to-date information” to the

customers whenever asked (Field Notes, May 5, 2016):

Customers occasionally ask for explanations about how their requests and

support issues are progressing, how well they are being handled, as well as

what various ticket status codes indicate. We must be able to provide most

up-to-date information to the customers, or the customer satisfaction rating

could take a hit. We strive to maintain a near-perfect customer satisfaction

rating, so it is important that we know the number of tickets in each issue

status.

Jasmine, as a strong proponent of the steady progress of a service request through its

lifecycle, provided additional insight into the best practice of submitting one ticket with one

issue for efficient handling of service requests (Field Notes, May 7, 2016). She explained

with great enthusiasm and energy:

Whenever possible, we prefer that customers submit one ticket per issue and

keep one issue per ticket. This allows us to easily tell what specific matter is

being handled in a ticket, to correctly close issues once they are completed,

and to move tickets to other group members or departments without stalling

the progress of other unrelated matters in the same ticket. The issue status

field keeps us in line. For example, if I am waiting for a ticket to be reviewed,

it will be marked as Needs Reviewed with the date and time stamped. If

several days go by with the issue status being unchanged, then I will know

that the ticket has not been reviewed yet, so it may need to be escalated. So it

is very important for me to know the number of tickets by issue status on a

monthly basis.

Carl believed that although issue statuses consisted of only one or two words, they

conveyed “vast information” about a ticket (Field Notes, May 5, 2016). Being one of the

primary contributors to the request fulfillment process, he was both interested and kind

enough to provide a detailed explanation of how each issue status could help the group and

its customers while carrying out the request fulfillment process (Physical Artifact:

Request Fulfillment Process Document).

The issue status field, although only one or two words, conveys vast

information to the group members and the users. For example, a ticket in the

Open status means that ticket is in the active queue…it will be looked at soon

if it is not already being reviewed…users may receive staff response if they

have question or if we have addressed the issue, etc. An In Progress status

could mean that the ticket is being worked on by a staff member and an

update will be provided as soon as it is completed. A Closed status may

indicate that the ticket is completed...we received confirmation from the user

that the issue is resolved…so no further action will be taken at this time, etc.

The issue status is a very useful field and we refer to this field all the time.

In response to how the group wanted to see these numbers displayed on the

dashboard, there was general agreement during the focus group session among group

members that for each month, a pie chart showing the numbers of tickets for each issue

status (with corresponding percentage) in separate slices would be appropriate.

Number of tickets by department/area

This was another category on which there was overwhelming agreement among

group members about including the monthly number of tickets by department or area on

the dashboard. Since July 2014, the group had supported over 660 users from eight

departments by completing service requests on various data and reporting needs (Physical

Artifact: Reports, Files, and Spreadsheets). Examples include Marketing, Finance,

Operations, etc. (full department names are withheld). These departments were part of the

corporate structure of this organization (name withheld) that contributed to the

organization’s overall mission and goals. For example, one of the departments hosted

more than 6,000 visitors last year and provided educational programming for all ages.

Another department provided customer account and billing information, received

payments, provided collection of delinquent accounts, and delivered refunds and financial

aid residuals after disbursement (Physical Artifact: Reports, Files, and Spreadsheets). To

illustrate the importance of knowing the number of tickets from each department,

Jennifer emphasized the customer base and significance of her team’s work by saying

(Field Notes, April 23, 2016):

We provide services to all these departments. They use the data and reports

provided by us and make decisions based on them. The decisions they make

ultimately serve the goals and objectives of the entire organization. If the data

or report based on which they make decisions are “bad” or incorrect, it looks

bad on me and our team. I don’t want that to happen, for sure.

Both Tiffany and Courtney echoed similar feelings and opinions (Field Notes, April 30,

2016). Tiffany provided a spirited explanation:

We need to know which departments are our biggest clients...I mean…which

departments place the highest number of requests with us. We want to know the

nature of their requests, and also want to make sure we have enough

resources to handle their requests.

Courtney noted the characteristics of different customers (Physical Artifact: Stakeholder

Map) and stated, “Not all departments send the same number of requests to us. Some are

way more active and demanding than others. Some departments regularly send multiple

P-1 requests, while some send mostly P-3 or P-4 requests. If we can see these numbers on

the dashboard, then we can reach out to the departments to find more ways to serve

them.”

During one of the meetings with the Executive Director and all department

managers (Field Notes, May 4, 2016), Carl was shocked and surprised when he learned,

“some of the departments are creating reports by themselves after hiring external

consultants and spending tons of money; but our team could provide them with the data

and similar reports for free. They just didn’t know that we could provide them with

similar services.” During a time of economic volatility, the Executive Director was “not

too happy” spending extra money that could otherwise be saved by soliciting services

from Carl’s group (Field Notes, May 4, 2016). “Now that the other departments know

what we can do for them, I am expecting more requests from them starting next quarter”,

said a thrilled Carl about the possibilities of increased resources and funding for his

group.

In response to how the group wanted to see these numbers displayed on the

dashboard, Carl suggested that for each month, a pie chart showing the numbers of tickets

for each department/area in separate slices would be appropriate. Since only eight

departments were currently being served, Carl thought it would be more appealing to use a

pie chart rather than a bar chart. The rest of the group expressed sincere agreement during

the focus group session. Figure 4.3 shows an example of the monthly operational

summary dashboard page.

Figure 4.3. Monthly Operational Summary Dashboard Page

Theme 3: Monthly Workload Distribution Summary

One of Jennifer’s goals, when she was promoted to director, was to transform the

service provider group into a high-performing team in which employees were asked to

contribute at their highest potential and learn a lot along the way while having fun and

enjoying the workload in a collaborative team environment (Field Notes, April 22, 2016).

Jennifer firmly believed that employees’ job satisfaction levels in a constantly changing,

highly demanding, and stressful IT support team like hers were likely to increase when

employees found that the workload was being assigned, distributed, and delegated in a

“fair”, “balanced”, and “transparent” manner. Group members felt strongly that service

requests assigned to them were fairly balanced; everyone was fully aware of who was

working on what and on how many service requests (Field Notes, May 1, 2016).

Fair and balanced distribution of the workload drew much positive

feedback from group members (Field Notes, April 28, 2016). For example, Courtney was

appreciative when she said, “Since we know the workload will be assigned in a fair and

balanced manner, the staff members have solid and deep trust in each other and in the

team’s purpose. We feel free to express feelings and ideas.” As one of the newest team

members, Tiffany expressed her amazement at the teamwork and looked very cheerful

when she mentioned, “Everybody in our team believes that we are working toward the

same goals. We are clear on how to work together and how to accomplish tasks.” Ryan

was also very pleased at how well the staff members got along with each other. He

delightfully stated, “Everyone understands both team and individual performance goals

and knows what is expected. We actively diffuse tension and friction in a relaxed and

informal atmosphere.” But no team is immune from conflict or differences in opinion.

Joanna offered an additional insight on conflict resolution practices within the team by

saying “The team engages in extensive discussion during team meetings, and everyone

gets a chance to contribute and showcase their weekly accomplishments—no matter how

small or insignificant they might sound. Disagreement is expressed with courtesy.

Criticism is constructive and is oriented toward problem solving and removing

obstacles.” Being one of the strongest proponents of fair distribution of workload (Field

Notes, April 22, 2016), Carl encouraged staff members to “seek cooperation” from others

and still “be accountable” for completing a request. He explained:

Fair and balanced distribution of workload is very important to the leaders of

the team because we expect each staff member to carry his or her own weight,

and respect the team processes and other members. It does not mean that one

cannot ask for help or support from other more experienced staff members

in completing a task or a request; rather, we actually encourage staff members

to seek assistance when needed. The accountability for completing a service

request, however, ultimately rests on the person assigned with the service

request.

Since fair and balanced workload distribution was viewed as a critical factor in

the team’s becoming high-performing and successful, staff members wanted to see three

metrics on the dashboard on a monthly basis for the executive leaders of the department

and the organization (Field Notes, May 2, 2016). These three metrics were: number of

tickets per assignee, number of tickets per reviewer, and number of tickets per

department/area and issue type.

Number of tickets per assignee

The most basic metric for workload distribution was the number of tickets

assigned to each staff member on a monthly basis. Service requests were assigned

by the group’s director or associate director (with the manager as their backup) to the

appropriate staff member (the assignee). When a request was assigned, the JIRA ticket

was updated with the “Assignee” field showing the name of the assignee (Physical

Artifact: JIRA Requirements Document). The assignee often acted as a “third filter” by

ensuring the completeness of the request. If the request was incomplete, the assignee

contacted the requester for necessary information (Field Notes, May 5, 2016).

Several factors were considered when assigning a service request: type of data

knowledge required by the assignee, access to required data, skillset (e.g. communication,

analytical, software experience, etc.) required to complete the request, the capacity and

availability of the assignee, and assignee’s previous experience in completing similar

requests (Field Notes, April 25, 2016). Mindful of fair distribution of workload, Jennifer

cautioned, “We don’t want to see most of the service requests being assigned to one or

two people, making them overworked while others sit idle because there is nothing to

do. We always have more work than we can do in a given day, and it is a good thing. If

we don’t have more work than we can do, then we might have no work at all.”

Jackie also echoed a similar sentiment and stressed, “We want everyone to be busy

at work and contribute. Knowing how many tickets each person has been assigned will let us

distribute the workload in a fair and balanced way. We must be able to distribute the

workload fairly.” Ryan usually handled the programming aspect of reports (Field Notes,

May 2, 2016), and was considered to be the “go-to guy” for writing queries, code, or

scripts. He received requests for assistance from other staff members on a daily basis, so

he was very interested in allotting sufficient time to helping others while also getting his

own work done. He was very clear in expressing his requirements, “I want to see how

many tickets were assigned to me on a monthly basis. I know this number is important to

Jennifer, Jackie, and Carl, but this is important to me too. As a matter of fact, I think all

the staff members should be able to see the number of tickets assigned to each of us. That

way, we can be proactive in seeking and offering help if needed. I think this helps the

whole team. It definitely helps me.” Both Jasmine and Joanna expressed agreement with

Ryan during the focus group meeting (Field Notes, May 30, 2016). It was noteworthy

that not all staff members were as outspoken as Ryan, Carl, or Joanna. Tiffany considered

herself to be “quite shy” and a “little introverted” compared to Ryan or Joanna, but still

appreciated it when others offered to assist her. She looked really happy and radiant (Field

Notes, May 10, 2016) when she said:

Sometimes I can be quite shy and not ask for help from others. So it really

makes my day when Ryan or Joanna or someone else from the team comes by

to my desk and asks if they can be of any assistance. I also stop by their cube

for a quick chat whenever they are free. One day, I was trying to fix a query

for more than two hours by myself but could not. Then I sat down with Ryan

for less than ten minutes and he readily pointed out the error.

During the focus group session, some group members, especially Jennifer,

commented (although half-heartedly at first) that while knowing the number of tickets per

assignee was useful, this metric by itself might be too simplistic, and a few additional metrics

at a more “granular level” might be more appropriate. When the group sought

clarification and asked Jennifer to further explain her thought, she stated:

Well, for example, I’m sure we don’t want one or two people doing all the heavy

lifting—complete all the P-1 or P-2 requests—while others do all the low-

priority requests. So I think there should be a fair balance of tickets for each

assignee in terms of priority too. I think it would be fair for each assignee to

contribute some high-priority requests while enjoying some “light” work

doing some low-priority requests as applicable.

The rest of the group seemed to understand what Jennifer was trying to say and echoed

their support of the idea. They agreed as a group that another metric should be added to

the dashboard—number of tickets per assignee broken down by priority (Field Notes,

May 15, 2016). Ryan was excited about the newly proposed metric and asked, “Well,

what about the number of tickets per assignee broken down by issue type and status

too? Do you want all the ad hoc or recurring tickets assigned to one person? Do you

want all the open tickets to go to another person? So it also makes sense to me to see the

number of tickets per assignee broken down by issue type and status.” Joanna and

Samuel, almost simultaneously, exclaimed, “I was just thinking of that too!” Jennifer

nodded enthusiastically (Field Notes, May 15, 2016), showing her support for Ryan’s

idea, and soon the whole team agreed that these two metrics (number of tickets per

assignee broken down by issue type and number of tickets per assignee broken down by

issue status) should also be included on the dashboard.

In response to how the group wanted to see these numbers displayed on the

dashboard, Ryan suggested that for each month, a horizontal-bar chart showing the

number of tickets for each assignee in a separate bar would be appropriate. Since there

were 13 members in the team, Tiffany thought it would be more appealing to use a

horizontal-bar chart rather than a vertical one (Field Notes, May 15, 2016). The rest of

the group expressed agreement with this approach during the focus group session.
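A horizontal-bar version of this display takes only a few lines of matplotlib; the names and counts below are hypothetical.

    import matplotlib.pyplot as plt

    # Hypothetical monthly ticket counts per assignee (3 of the 13 members shown).
    counts = {"Ryan": 18, "Joanna": 15, "Tiffany": 12}

    # Horizontal bars keep 13 assignee names readable, which motivated the choice.
    plt.barh(list(counts.keys()), list(counts.values()))
    plt.xlabel("Number of tickets")
    plt.title("Tickets per Assignee (April)")
    plt.show()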

About seeing the number of tickets per assignee broken down by priority, issue

type, and issue status, Carl said, “How about we show these numbers in plain and simple

two-dimensional tables? No graph or chart…just simple separate tables showing the

assignees’ names in rows and the number of tickets broken down by priority in one

table, by issue type in a second table, and by issue status in a third table.” There was a

little confusion among some of the group members at the beginning (Field Notes, May

15, 2016), and not everyone seemed to be on board with Carl’s idea at first. When he

showed an example of a two-dimensional table at Jasmine’s request, several group

members readily expressed their approval of Carl’s idea, saying “that sounds simple yet

effective”, “I clearly understand your idea now”, “that is so easy to understand for me”

etc. So it was settled during the focus group session that these three additional metrics

(number of tickets per assignee broken down by priority, issue type, and issue status)

should also be displayed on the dashboard as three separate two-dimensional tables (Field

Notes, May 15, 2016).
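The three tables Carl described are cross-tabulations of assignee against priority, issue type, and issue status. A minimal pandas sketch of the first table, using hypothetical data:

    import pandas as pd

    # Hypothetical monthly ticket export: one row per ticket.
    df = pd.DataFrame({
        "assignee": ["Ryan", "Joanna", "Ryan", "Tiffany", "Joanna"],
        "priority": ["P-1", "P-2", "P-3", "P-1", "P-1"],
    })

    # Assignees in rows, priorities in columns, ticket counts in the cells.
    print(pd.crosstab(df["assignee"], df["priority"]))

The same crosstab call, with issue type or issue status substituted for priority, yields the second and third tables.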

Number of tickets per reviewer

To improve quality assurance, service requests that had new, recurring, ad hoc, or

high-profile deliverables were to be reviewed and signed off by peer analysts or

reviewers. Staff members could access the Needs Reviewed filter in JIRA to identify all

tickets that were ready to be reviewed and the name of the reviewer for each ticket (Field

Notes, May 7, 2016). To do the review, a reviewer opened the appropriate ticket in JIRA,

then used the Comments tab to read existing comments or to add new comments that

might be helpful in completing the request. If the reviewer was not satisfied with the

deliverable(s), he/she added a new comment explaining what needed to be corrected

before approving the deliverable(s). The reviewer then placed the request into the “In

Progress” status to notify the assignee. If the reviewer was satisfied with the

deliverable(s), he/she marked the JIRA ticket status as “Reviewed”. Deliverable(s) could

be sent to the requester only after it had been reviewed. The reviewer used JIRA

comments to document what was done to review the report (Field Notes, May 8, 2016).
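The review step described above can be summarized as a single decision over a ticket's status. The function below is an illustrative sketch; in practice these transitions and comments lived in JIRA.

    def review(ticket: dict, satisfied: bool, comment: str) -> dict:
        """Record a peer review: approve the deliverable or send it back."""
        ticket.setdefault("comments", []).append(comment)
        # Approved work moves to Reviewed; rejected work returns to the
        # assignee by being placed back In Progress.
        ticket["status"] = "Reviewed" if satisfied else "In Progress"
        return ticket

    ticket = {"id": "REQ-101", "status": "Needs Reviewed"}
    review(ticket, satisfied=False, comment="Totals in column C do not match the source.")
    print(ticket["status"])  # In Progress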

The number of tickets per reviewer was considered to be “key”, “highly useful”,

and “significant”, and several participants felt that these numbers should be displayed on

the dashboard (Field Notes, May 7, 2016). Jennifer recalled, “This team had a bad

reputation a few years back (before I

became the director) for delivering reports and spreadsheets with incorrect numbers and

silly typos. Sometimes the deliverables did not look professional and lacked quality

control. So I made it mandatory that all new, ad hoc, recurring, and high profile

deliverables must be reviewed by someone other than the person who created the

deliverable.” Carl had benefited once when his error was corrected by another reviewer

who caught it before the report was delivered to the end user. Carl was visibly relieved,

recalling:

Having a second pair of eyes checking for errors in data or typos really

increases the possibility of catching them. There were several times over the last

year or so when I created a report and had Samuel review it first. Surely

enough, he caught errors—sometimes a simple typo, sometimes a serious data

error—and we corrected them before sending the correct report to the end

user. Samuel is our expert reviewer as he does the penultimate review before

Jackie makes the final go/no go decision (should the report be delivered or

not).

Joanna was also very mindful of her deliverables being “error free” (Field Notes, May 3,

2016), and she had long been considered to be the “model” or “premier example” of

providing “quality” deliverables—reports submitted to the end user on time and without

any error (Field Notes, May 8, 2016). A very cautious Joanna talked about her

meticulousness in including the correct data in her deliverables when she stated, “I create

about half a dozen recurring reports every week, and I have each one of them reviewed

(usually by Samuel) before returning it to the user. I never take it for granted that my

reports will be error-free, although I know the process and data inside out. Having

someone double check my work increases my comfort level in my deliverables.”

Jennifer also delivered a few monthly reports to the board of directors. “I always have

Samuel double check the numbers, graphs, written texts…pretty much everything before

these reports are sent to the board of directors”, remembered Jennifer warmly. Samuel

recalled with a sense of accomplishment:

There has been only one instance since July 2014 when both Jennifer and I

overlooked a simple spelling mistake. But overall, we have always been able

to deliver correct reports to our users. That is important. The end users know

they can trust our data.

In response to how the group wanted to see these numbers displayed on the

dashboard, Samuel suggested that for each month, a horizontal-bar chart showing the

number of tickets for each reviewer on a separate bar would be appropriate. Similar to the

previous metric, both Samuel and Tiffany thought it would be more appealing to use a

horizontal-bar chart rather than a vertical one. The rest of the group expressed agreement

with this approach during the focus group session (Field Notes, May 15, 2016).

Number of tickets per department/area and issue type

The ultimate purpose of the request fulfillment process adopted by the group was

to fulfill the request tickets correctly and on time. The requesters were employees and

members of various other groups within the organization who required data and reports to

do their job. These users came from eight different departments or areas within the

organization (Field Notes, May 7, 2016). Before closing each request, the group

consistently verified the following (Physical Artifact: JIRA Requirements Document):

The request had been properly fulfilled, the requester was satisfied with the result,

and the requester agreed that the request could be closed.

The initial categorization of the JIRA ticket was correct. If found incorrect, the

assignee corrected the categorization before closing the ticket.

The JIRA ticket had to contain full historic updates at a sufficient level of detail

to ensure that the request record was being fully documented.

The assignee used a standard closure wording in JIRA to formally close the JIRA

request record.

When a request was closed by the assignee, the JIRA ticket status was updated as

“Closed”. As the final step in request closure, a customer satisfaction survey was

conducted for every request being closed (Field Notes, May 7, 2016). The survey was

conducted by providing the requester with a call-back number or sending the user an

email survey. On a monthly basis, senior management asked the director of the group to

provide the number of requests received from each department and also wanted to know

which departments were using the group’s services the most. Jennifer said she

needed to respond to senior executives’ questions such as:

Which departments were the top five “clients” of our group?

To how many departments was our group providing support?

What types of requests were most frequently submitted by these departments? Were they

recurring requests or ad hoc requests?

Jennifer explained that senior executives basically wanted to find out (a) why the

departments that used the group’s services the most continued to seek those services (so the

group could serve them better) and (b) what kinds of services the group could provide to those

departments that did not use its current offerings (so the group could increase the number of requests

received and fulfilled). During her personal interview, Jennifer said that she

would like to use the dashboard to show the number of requests received from each

department on a monthly basis broken down by issue type (Field Notes, April 18, 2016).

Rather than sending this information to the senior management every month, Jennifer

felt, “it makes sense to make this information available for them (senior management)

every month so that they can see how valuable my team’s contribution is to the

organization and how we can continue to work like a true high-performing team.”

In response to how Jennifer wanted to see these numbers displayed on the

dashboard, she suggested that for each month, a two-dimensional table showing the

departments’ names in rows and each issue type in separate columns would be desirable

(Field Notes, May 15, 2016). This metric was originally mentioned only by Jennifer, but

when she expressed the need for it during the focus group session, the rest of the

group readily agreed that it must be included on the dashboard

(Field Notes, May 15, 2016). Figure 4.4 shows an example of the monthly workload

distribution summary dashboard page.

Figure 4.4. Monthly Workload Distribution Summary Dashboard Page

Chapter Summary

This chapter summarized the research findings for this study. In answering the

first research question, 12 different metrics were identified: Total number of tickets

created and closed per month, Number of Priority-1 tickets created and closed per month,

Number of tickets by issue type, Number of tickets by priority, Number of tickets by

issue status, Number of tickets by department/area, Number of tickets per assignee,

Number of tickets per reviewer, Number of tickets per assignee and issue type, Number

of tickets per assignee and priority, Number of tickets per assignee and issue status, and

Number of tickets per department/area and issue type. In answering the second research

question, three dashboard pages were created using bar charts (both vertical- and

horizontal-bars), pie charts, and two-dimensional tables.

Chapter 5

Study Summary, Conclusions, and Recommendations

Study Summary

The purpose of this study was to examine the request fulfillment process for an

information technology service provider group to identify their perceptions of the most

important metrics in the process, and to subsequently create executive dashboards for

displaying those metrics. A single-site case study method was used at a higher education

organization in the U.S. The research questions were:

1. What do the group members perceive as being the most important metrics of

the ITIL request fulfillment process?

2. How to create executive dashboards with the metrics perceived as most

important by the group members?

This study drew on the existing literature and research related to the following five areas

to establish the foundation for its purpose: Motives, Justifications, and

Benefits; Challenges, Barriers, and Risks; Implementation Strategies; Critical Success

Factors (CSFs); and Metrics, Measurements, and Evaluation.

Single Site, Single-Case Study Approach

This research study is presented as a single site, single-case design, with the

single context of an IT service provider group with multiple interview participants,

physical artifacts, and archival sources serving as multiple points of data. While this

single site, single-case study approach limits the generalizability of the findings, it makes

for a more in-depth study of the site. Within the context provided by Yin (2003), this

study meets the rationale for the appropriate use of the single-case study design through the

unique nature of the site and the revelatory nature of the role the researcher brings to the

study.

Data Collection

A case study protocol and case study research map were developed for this study.

A profile of the service provider group, request fulfillment process, and participating

group members was developed by the researcher. An original interview instrument was

developed with the assistance of two individuals serving as key informants. Multiple

sources of data were used in this research. This facilitated the triangulation of data in the

data analysis phase. Data collection involved assembling the following sources:

Individual interviews with eight group members using the request fulfillment

process on a daily basis;

A focus group session with all the research participants;

Reports and files produced by the group in completing a service request, such as

Enrollment reports, Demographic reports, Application reports, etc.;

Various training materials created, regularly maintained, and used by the group

members;

The onboarding documentation and checklists followed when a new employee

came on board;

The request fulfillment process document that outlined the process steps and

provided a detailed description of each step.

Interview participants had been members of the group for at least one year as full-time

employees and used the request fulfillment process on a daily basis. Based on data

gathered from the group member interviews, the researcher sought other data sources

referenced during the interviews.

Data Analysis

In this study, a total of eight individual interviews and one focus group session

were conducted, though the researcher began to see saturation after five successful,

information-rich and well-rounded interviews with thick description (Creswell, 1998, p.

54; Patton, 1990). Collected data were coded using open and axial coding techniques

to organize the data in preparation for

analysis (Saldana, 2008; Strauss & Corbin, 1990). For the purposes of this research study,

a second coder was enlisted to analyze the collected data. A subject matter expert in IT

Service Management was included in the verification of the coding and the convergence of

themes. Analysis of data resulted in the development of three themes: trend analysis,

monthly operational summary, and monthly workload distribution summary.

Strategies for Judging Trustworthiness

In this research study, strategies were used to meet standards offered by Yin

(2003) on construct validity and reliability for judging the trustworthiness of the

information collected. To meet the measures of construct validity, multiple sources of

data including interviews, physical artifacts, and archival records were gathered during

the data collection phase. The collection of these multiple sources of data is also

described as data triangulation (Yin, 2003, p. 98). A detailed log of research activity was

kept to document the chain of evidence. In the data collection phase, interview

participants were provided with summaries of their interviews based upon the meaning

understood by the researcher, with an opportunity to review, clarify, and correct

information they had provided. Known as member-checking, this strategy further ensured

the credibility and rigor of the study (Creswell, 1998; Lincoln & Guba, 1985).

To meet the measures of reliability, a detailed case study protocol was established

for the data collection process. In addition, a detailed interview guide was established to

maintain consistency and a replicable process with each interviewee. A detailed database,

beyond the summarized case study reports, included all field notes, transcribed

interviews, observations, archival records, and physical artifacts. The database made it

possible for another researcher to retrace the study and independently examine the data

collected and analyzed. With these strategies in place, this

study met Yin’s (2003) test of reliability for judging the quality of the research design.

Conclusions

In many ways, the IT service provider group in this study acted like a service desk

that provided a Single Point of Contact (SPOC) in meeting the data and reporting needs

of both users and IT staff in the organization. The motivation for this study began with a

desire to examine the belief that metrics are a vital component in IT service

providers’ understanding of their performance and setting of targets. Metrics can report on

a wide variety of performance measures to demonstrate that service providers are

delivering value and quality to the customers they serve and to their organization. Metrics

also allow decision makers and senior leaders to make informed and reasoned decisions

based on verifiable quantitative data rather than relying on “gut feelings”.

Managing IT today means more than just managing—it is about providing ‘value’

substantiated by metrics from the perspectives of both the service provider and the

business. The following discussion is based on: (1) research findings and insights

obtained from the interviews and focus group session conducted while carrying out this

research project and (2) the researcher’s personal opinion. The next section contains a

discussion of three themes that arose from the study, the themes’ meaning for staff

members of the group and the users they supported, and why they cared about these

themes and metrics. Discussion in the later sections is based on the researcher’s

personal opinions. The second section looks at various metrics mentioned by the research

participants from the service provider’s perspective—the ability to demonstrate and share

value with the customers and other stakeholders. The third section focuses on metrics and

research findings from a business perspective—to understand the metrics and

measurements in which the business may be interested in order to capture its benefits. The fourth

section looks at business value metrics. While traditional metrics focus on demonstrating

“how well” a service provider group is operating, business value metrics, according to the

study participants, highlight areas in which future improvements can be made to deliver

positive business outcomes. The fifth and final section looks at the applicability of this

study to the field of organization change and development—what it means to

organization development (OD), how it contributes to OD, and how practitioners of OD

can use the results of this study.

Themes of the Study

Theme 1: Trend Analysis

While it is useful to have a “snapshot” of data for a specific point in time, being

able to answer questions like “how did we do this week (or month, quarter, year, etc.)

compared to last week (or month, quarter, year, etc.)?” is also critical. Participants

believed that by using trend analysis, IT Service Management practitioners can see

multiple data points over a period of time and identify areas in which a process is

performing well, detect where it is underperforming, be able to make data- and evidence-

based decisions, compare past and present numbers, and discover patterns or trends in

data points to predict future directions for current operations and practices. If a service

provider group sees a clear trend in increased or reduced volumes of tickets or calls over

the last few months, the group must understand whether the trend is “positive”,

“negative”, “expected”, or “unexpected” and the root cause behind the trend. For

example, a reduction in ticket or call volume may be due to a decrease in the number of

recurring errors after implementing a formal review process by the service provider

group, or to customers’ apparent perception of the service provider group as “not

delivering value”. Trend analysis helps ITIL practitioners perform many key service

management activities such as identifying changes required to prevent a negative trend,

operating according to business plans, meeting targets, recognizing improvements

required, or uncovering any underlying structural problem (Office of Government

Commerce, 2011a, p. 58). Trend analysis also helps ITIL practitioners identify

irregularities in results, find ways to improve, and detect any side-effects of a process,

component, system, or service (Office of Government Commerce, 2011a).
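The period-over-period comparisons the participants described reduce to a simple calculation over monthly counts. A minimal sketch with hypothetical volumes:

    # Hypothetical monthly ticket volumes.
    volumes = {"Jan": 120, "Feb": 135, "Mar": 110, "Apr": 98}

    # Month-over-month percent change; a sustained negative run is the kind of
    # trend whose root cause the group would need to investigate.
    months = list(volumes)
    for prev, curr in zip(months, months[1:]):
        change = (volumes[curr] - volumes[prev]) / volumes[prev] * 100
        print(f"{prev} -> {curr}: {change:+.1f}%")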

The value of trend analysis has also been recognized in other studies. For

example, Gacenga et al. (2011) identified trend numbers of service requests, incidents, and

problems, based on their classifications, as performance metrics of the respective processes

that indicate value and measure the benefits of IT Service Management. It is noteworthy

that ITIL best practice recommends managing service requests in the request fulfillment

process, managing incidents with known root cause in the Incident Management process,

and managing incidents with unknown root cause (i.e. problems) in the Problem

Management process. A positive trend indicates improvement in the availability of

services, systems, or applications; reduction of failure to meet service level agreements;

reduction of mean time to repair (MTTR); and reduction of major incidents and urgent or

emergency changes (Office of Government Commerce, 2011a, p. 90).

Theme 2: Monthly Operational Summary

As mentioned earlier, the IT service provider group in this study operated like a

service desk and was the single point of contact for all incoming requests related to data

and reporting needs for a certain business unit in the organization. The group provided an

easy-to-use interface between customers and IT to help customers use various IT services

and request new services. Although the study participants cited many operational metrics,

only the number of service requests submitted, grouped into categories such as issue type,

priority, issue status, and department/area, was considered feasible at present. Other types

of operational metrics such as number of escalations for service requests (to second- or

third-level support), number of tickets not resolved within the service level agreement,

average time for resolving service requests, and percentage of requests that came via the

self-service web portal were deemed important by study participants, but the group

decided not to capture the data for these metrics because it did not have the necessary

resources or capabilities. Resources were considered tangible assets of the organization,

and included IT infrastructure, people, money, or anything else that might help to deliver

an IT service. Capabilities were intangible assets such as skills and abilities of the group,

staff members, and processes to carry out an IT service activity.

The importance of operational metrics is widely supported and acknowledged by

many researchers. Steinberg (2013, p. 21) considered operational metrics such as number

of requests fulfilled without escalation, number of requests fulfilled without human

intervention, etc. as the “starting point” for an ITSM metrics model. Operational metrics

suggested by Talla and Valverde (2013) included average time to log requests; time to

acknowledge the user, categorize and prioritize the requests, start the resolving action,

and complete the action; and number of medium- and high-priority requests. Barash et al.

(2007) considered the amount of time it takes to resolve a request as the single most

important operational metric. Other operational metrics such as average time to respond

to a call for assistance from first-line personnel, number of incidents resolved by first-line

personnel, and number of incidents resolved within service level agreement were

proposed by Coelho and da Cunha (2009). So it is clear that the need to capture and

report operational metrics cannot be overemphasized.

Theme 3: Monthly Workload Distribution Summary

The third theme found in this study pertained to managing the workload of staff

members, setting their performance goals and evaluating performance, and ensuring

the quality of services provided. In this study, the request fulfillment process was used to

distribute workload among staff members in a fair and balanced manner to enhance their

productivity and provide optimal performance for customers and end users. In today’s

challenging economic times, the subject organization for this study, like many other

organizations, was trying to “do more with less and do everything faster, better, and

smarter” (Warrick, 2005, p. 164) to remain competitive and survive. The group was often

overwhelmed while trying to meet too many deadlines within a short amount of time. The

group was struggling to manage an increased workload for staff members, who were at

high risk of becoming more susceptible to stress and burnout. This could severely

diminish their productivity and cause negative business impacts. To improve employee

morale and job satisfaction, distributing the workload among staff members in a fair,

balanced, and equitable manner was essential. The request fulfillment process facilitated

workload distribution not only according to the workload itself (in terms of total number

of tickets), but also according to the importance of each task (in terms of the priority of each ticket).

The group used several metrics obtained through the execution of the request

fulfillment process to evaluate the performance of each staff member on an ongoing basis

(Physical Artifact: What is a KPI). For example, the number of tickets per assignee

metric allowed team leaders to identify the most productive staff members. The number

of tickets per reviewer metric revealed those staff members who were most actively

ensuring quality control of deliverables. The number of tickets per assignee and priority

metric identified the staff members completing the highest number of Priority-1 or

Priority-2 requests. Being able to identify these staff members based on the metrics under

this theme allowed team leaders to evaluate staff members’ performance fairly and

increased overall acceptance of the annual performance appraisal process.

For each staff member, the annual performance appraisal process primarily

consisted of several performance factors (e.g., knowledge of work, decision-making /

problem-solving, customer responsiveness, and dependability). The knowledge of work

performance factor measured a staff member’s skill level, knowledge, and understanding

as these were needed to resolve service requests. The decision-making / problem-solving

performance factor measured a staff member’s effectiveness in understanding the service

request and making timely and practical decisions. The customer responsiveness

performance factor measured the responsiveness of a staff member in dealing with other

internal staff, service requesters, stakeholders, customers, and vendors. By analyzing

various metrics under the monthly workload distribution summary theme, team leaders

were able to rate the staff members on each performance factor as Outstanding, Exceeds

Expectations, Meets Expectations, Below Expectations, or Unsatisfactory.

Many of the “best practices” in the ITIL request fulfillment process adopted by

the group in this study improved quality of services (Physical Artifact: What is a KPI).

For example, the group began to manage service requests differently from

incidents and problems. As the group grew and offered more services, the number of

service requests also increased over time. Conversely, the fundamental goal of the

Incident and Problem Management processes was to reduce the number of incidents or

problems. Industry experts and ITIL professionals have long recognized the value of a

separate dedicated process for managing service requests. A much-desired request

fulfillment process was formally adopted in 2007 by the Office of Government

Commerce (OGC) when ITIL version 3 was introduced (Office of Government

Commerce, 2007).

In this study, the request fulfillment process was driven by the standard services

defined in the group’s service catalog. The service offerings defined in the catalog

governed the overall request fulfillment process for the group. The group created a web-

based self-service portal so that its customers could search and track all service requests.

This portal was easy to use and significantly reduced the number of calls to the group.

The portal was the preferred interface for creating all service requests, and helped the

group overcome the single biggest challenge in request fulfillment—users being confused

by access to multiple channels for locating services and submitting requests (Office of

Government Commerce, 2007). By having a mandatory “peer review” step in the request

fulfillment workflows for all Priority-1 and Priority-2 deliverables and ad hoc requests,

the group dramatically reduced the number of failed changes and reworks caused by

errors in data and reports.

Metrics from Service Providers’ Perspective—Demonstrating Value

This research study revealed that there is limited but fast-growing acceptance and

adoption of metrics in the IT Service Management industry, and more business and IT

leaders are depending on verifiable quantitative information to make strategic and

operational decisions. Although many service desks and IT service providers have

adopted a number of metrics, there is no definitive set of “most important” metrics due to a

wide range of opinions. The participants in this study mentioned a number of metrics that

they felt were important, but it was not reasonable to include them all on

the dashboards. Ultimately, only a limited number of metrics were selected to be “most

important” and worthy of presenting on the dashboards, and the team leaders (director,

associate director, and manager) had the “final say”. The wide range of opinions offered by participants can be explained simply: every service provider group is

different, operates under a different corporate culture, provides different types of

services, and seeks a wide range of data, information, and evidence to make important

decisions on resource allocation, service delivery, customer satisfaction, etc.

All participants in this study had at least five years of IT experience working full-

time in various organizations and industries, and they shared insight related to metrics

and measurements not only in light of their current positions, but also based on their past


experiences. They mentioned almost two dozen performance metrics that they had

measured in the past. The two most frequently used performance metrics stated by

participants included number of incidents and service requests and percentage of

incidents and service requests resolved within due dates (also known as the service level

agreement or SLA). It is noteworthy that several of these metrics were also deemed important by the whole team during the focus group session; some were considered appropriate for future development, while a few were regarded as totally irrelevant or not applicable given the group’s current business practices.

Number of incidents and service requests was expected to be universal—without

it, service provider groups would not be able to deploy resources appropriately and

ensure they are operating at full capacity. Simply stated, this metric signifies the “work” a

service provider group does on a daily, weekly, or monthly basis. By doing a trend

analysis over a certain period of time, this metric reveals whether a service provider is

becoming busier and whether more or fewer resources are needed to operate at an

optimal level in responding to customer requests in a timely manner. The percentage of

incidents and service requests resolved within due dates or service level agreement

metric measures how well service providers are adhering to the “contracts” that exist

between them and their stakeholders. However, many organizations do not have well-

established contracts or service level agreements between service providers and the

stakeholders they support. When there is a lack of well-defined SLAs, due dates—often

determined arbitrarily—can be taken into consideration to determine service providers’

level of adherence. For many service providers, SLA and level of customer satisfaction

are the key performance indicators (Wood, 2013).
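
As an illustration of how these two core metrics could be computed, the short sketch below walks through a handful of hypothetical ticket records; the field names (resolved_on, due_on) are assumptions for the example, not fields from the group’s actual tracking tool.

    from datetime import date

    # Hypothetical ticket records; the field names are illustrative only.
    tickets = [
        {"id": 1, "resolved_on": date(2016, 5, 2), "due_on": date(2016, 5, 3)},
        {"id": 2, "resolved_on": date(2016, 5, 9), "due_on": date(2016, 5, 6)},
        {"id": 3, "resolved_on": date(2016, 5, 4), "due_on": date(2016, 5, 4)},
    ]

    # Metric 1: total number of incidents and service requests.
    total = len(tickets)

    # Metric 2: percentage resolved on or before the due date (SLA adherence).
    on_time = sum(1 for t in tickets if t["resolved_on"] <= t["due_on"])
    sla_adherence = 100.0 * on_time / total if total else 0.0

    print(f"Tickets: {total}; SLA adherence: {sla_adherence:.1f}%")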


The study also revealed that cost-based metrics such as cost per incident or

service request and total cost of ownership are the least implemented metrics for

demonstrating value, because many service providers find it very difficult to truly

understand their costs—such as direct, indirect, fixed, variable, and operating costs.

Although cost accounting is widely practiced in the manufacturing industry, it has proven

useful in the IT Service Management industry as well. One of the noteworthy

observations of this study was that the participants understood they could learn where

resources are being wasted and which resources are most profitable by adopting the

principles of cost accounting.

However, the participants also realized that implementing such practices would

require a great deal of resources, observation, and time. An assignee was often taking

care of multiple service requests simultaneously for several different customers. It would

require constant observation and recording of time to determine exactly how much time

the assignee was spending on a single service request. In addition, some participants

believed that cost accounting principles may not always fit nicely in the IT Service

Management industry. This is because there are often very few direct costs in the service

industry, and other types of costs are also very difficult (if not impossible) to calculate

because they involve people, support infrastructures, and overheads. Therefore, cost per

incident or service request can be difficult to measure as it takes into account the

“operation” of the service providers. The same reasoning is also applicable in calculating

the total cost of ownership, which refers to the total cost of running a service desk or a

service provider. However, it was widely agreed by the participants that being able to

know these metrics could help determine whether a service provider has enough funding


available to accommodate increased resources or whether the group has achieved savings by

operating efficiently or by automation. Metrics related to cost of operation and

comparison of cost and budget numbers were reported by one of the participants who had

more than fifteen years of experience in several financial organizations. Knowing

variances between cost and budget is vital for effective IT management. By identifying

the variances, the business can take corrective action by allocating sufficient budget or

controlling excessive costs.
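
A minimal sketch of both calculations appears below, using invented monthly figures; as the participants noted, the hard part in practice is isolating the direct and indirect cost components, not the arithmetic itself.

    # Invented monthly figures; real cost allocation is the difficult part.
    direct_costs = 12_000.00    # e.g., staff time attributable to tickets
    indirect_costs = 4_500.00   # e.g., share of tools, infrastructure, overhead
    tickets_closed = 410
    monthly_budget = 15_000.00

    # Cost per incident or service request.
    cost_per_ticket = (direct_costs + indirect_costs) / tickets_closed

    # Variance between actual cost and budget (positive means over budget).
    variance = (direct_costs + indirect_costs) - monthly_budget

    print(f"Cost per ticket: ${cost_per_ticket:.2f}")
    print(f"Budget variance: ${variance:,.2f}")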

To demonstrate value, not only must important and relevant metrics be identified, but the parties responsible for producing, measuring, and reporting those metrics must also be established. For service providers, the managers are usually

accountable for these tasks because they generally have the overall control, visibility, and

responsibility for the operation of the group. However, participants in this research study

shared that they often depend on input from other members within the team, and

sometimes on members from other functional areas to obtain and verify accurate numbers

before publishing a metric. It was clear from the interviews that despite occasional

overlap, each group member had different areas of responsibilities, and it was too time-

consuming for one person to capture and analyze all data before measuring and

presenting a metric. The service provider group in this study found the “RACI matrix”

very helpful (Field Notes, May 5, 2016). RACI is a responsibility assignment matrix that

examines a process step, task, activity, or effort to determine who is Responsible (the

person who is assigned to do the work), Accountable (the person who makes the final

decision and has the ultimate ownership), Consulted (those who must be consulted


before completing the task), or Informed (those who must be informed when the task is

complete) (Physical Artifact: RACI Matrix).
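
One way to picture a RACI matrix is as a mapping from each task to the roles holding each letter. The sketch below is a hypothetical example, not the group’s actual matrix; the tasks and role names are invented.

    # A RACI matrix as a simple mapping; tasks and roles are invented.
    raci = {
        "Collect ticket data": {
            "R": "Analyst", "A": "Manager",
            "C": ["Team Lead"], "I": ["Director"],
        },
        "Verify metric numbers": {
            "R": "Team Lead", "A": "Manager",
            "C": ["Analyst"], "I": ["Director"],
        },
        "Publish dashboard": {
            "R": "Manager", "A": "Director",
            "C": ["Team Lead"], "I": ["Stakeholders"],
        },
    }

    # Show who does the work and who owns the outcome for each task.
    for task, roles in raci.items():
        print(f"{task}: Responsible={roles['R']}, Accountable={roles['A']}")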

Metrics must be published or reported on a regular basis by the service providers.

Depending on the business needs and capabilities of the provider, metrics can be reported

in real time, daily, weekly, monthly, quarterly, semi-annually, annually, or on an ad hoc

basis. The study participants understood that there are different audiences for different

metrics due to the variation, range, and sensitivity of information they contain.

Therefore, not all metrics should be published or reported to all stakeholders (Physical

Artifact: Stakeholder Map). Weekly metrics were primarily reviewed by staff members to

highlight the week’s performance against targets, while monthly metrics usually started at

a higher level and could be drilled down into greater detail (Physical Artifact: Stakeholder

Map). The participants in this study believed that creating different “views” of the

metrics containing appropriate information for different stakeholders was a powerful and

effective way to demonstrate the value and usefulness of metrics.
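
The idea of stakeholder-specific “views” can be sketched as simple filtering by audience, as below; the metric names and audience tags are assumptions for illustration, not the group’s actual stakeholder map.

    # Each metric is tagged with the audiences permitted to see it (illustrative).
    metrics = [
        {"name": "Weekly SLA adherence", "audiences": {"staff", "manager"}},
        {"name": "Monthly cost per ticket", "audiences": {"manager", "executive"}},
        {"name": "Customer satisfaction", "audiences": {"staff", "manager", "executive"}},
    ]

    def view_for(audience):
        """Return only the metrics appropriate for one stakeholder group."""
        return [m["name"] for m in metrics if audience in m["audiences"]]

    print(view_for("executive"))
    # -> ['Monthly cost per ticket', 'Customer satisfaction']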

Metrics must be presented to the right parties and stakeholders to demonstrate

their value. Typically, these stakeholders are the members of the service provider group,

senior executives (Executive Director, Vice President, and CIO), internal and external

customers, and vendors (Physical Artifact: Stakeholder Map). Several participants in this

study mentioned the importance of the level of stakeholder interest in the metrics

presented to them. Different stakeholders may display varying degrees of interest, but

service providers should take this as an opportunity to conduct additional needs analysis

with the stakeholders who exhibit a low level of interest, and understand what the

stakeholders truly need and how they want the information to be presented. Fully


automated real-time dashboards may sound “cool” but may not be necessary for some

stakeholders (Field Notes, May 3, 2016).

In addition to the above-mentioned ways of demonstrating value, service providers should know whether customers, end users, and stakeholders are making decisions based on the metrics provided to them, or whether the metrics are at least improving the decision-making process. Even the most accurate, real-time dashboards have no value if they are not used to make or facilitate critical operational or strategic business decisions.

Metrics from the Business Perspective

In the previous section, research findings were discussed from the perspective of a

service provider’s ability to demonstrate value to the business. However, the business

may be interested in metrics that differ from those the service providers measure and

present. In this section, research findings are discussed from the business perspective—

metrics that can benefit the business by increasing profitability and growth, reducing

costs, or making better decisions. The discussion below addresses the metrics businesses usually request from service providers, the metrics deemed most important by the business, the types of information most sought by the business, and the growing necessity and desire for cost-based metrics.

As stated in a previous section, the participants in this research study had at least

five years of industry experience in IT Service Management in various organizations in

the U.S. According to those participants’ experiences, businesses most frequently ask


their service providers for metrics concerning service performance and customer

satisfaction. This observation signifies that businesses are primarily interested in the

performance of the service providers, and customer satisfaction numbers are considered a de facto measure of that performance. It was obvious that businesses care about what

customers think of the services they receive from the service providers. The research

participants observed a relationship between an increased number of tickets in a given

month and lower customer satisfaction numbers for that month. Although there is no

empirical evidence for this observation, hearing similar observations from multiple

participants was certainly thought-provoking.

Identifying one metric that is most important to the business is not easy, and often

not possible. However, customer satisfaction, number of incidents and requests resolved

within the service level agreement, availability of services, and cost of operations were

frequently mentioned by the participants in this study. This suggests that the most

important metrics perceived by the business are fairly common among various service

provider groups. Other metrics deemed “most important” by the business included cost-

based metrics (e.g., cost of ownership, cost of operations, cost per incident or service

request), average time to resolve incidents and service requests, average time to respond,

first time resolution rate, number of backlog tickets, etc. While it is common for the

business to have different expectations of what should be measured compared to what

service providers actually measure, metrics such as customer satisfaction and meeting the

service level agreement are perceived as being critical by both the business and the

service providers as key measures of their performance (Wood, 2013).


Business Value Metrics

While the findings from this particular research study pertain to demonstrating

“how well” the service provider group is currently operating by identifying performance

metrics (e.g., trend analysis, operational numbers, workload distribution numbers), the

emphasis placed by participants on several business value metrics cannot be ignored. As

the name suggests, business value metrics are measures considered beneficial to the

business. These metrics provide a clear idea of performance and value to the business.

One example of business value metrics mentioned by participants was the number of lost

IT service and business hours, also known as downtime. The cost of data center

downtime has increased significantly in recent years. According to a study of U.S. data

centers, the average cost of an unplanned data center outage is more than $7,900 per

minute (Sverdlik, 2013). Knowing the number of lost IT service hours indicates how long

IT services were unavailable to the business and customers and promotes discussion

surrounding why hours were lost, how much the downtime cost the business, and

what actions can be taken to proactively prevent downtimes in the future. The number of

lost business hours signifies IT’s importance to the organization and should be carefully

reviewed to ascertain exactly how much revenue was lost due to services being down

(Wood, 2013). The participants in this research study understood the importance of these

business value metrics and overwhelmingly wanted to measure and report these metrics

in the near future when additional resources and capabilities are put in place.
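
Taking the Sverdlik (2013) average at face value, the arithmetic behind a downtime cost estimate is straightforward; the 95-minute outage below is a hypothetical example, not an incident from this study.

    # Average cost of unplanned data center downtime (Sverdlik, 2013).
    COST_PER_MINUTE = 7_900  # U.S. dollars

    # Hypothetical outage of 1 hour and 35 minutes.
    outage_minutes = 95
    estimated_cost = outage_minutes * COST_PER_MINUTE

    print(f"Estimated cost of a {outage_minutes}-minute outage: ${estimated_cost:,}")
    # -> Estimated cost of a 95-minute outage: $750,500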


Applicability of This Study to the Field of Organization Change and Development

This research study focused on implementing one of the 26 ITIL-based IT Service

Management processes within a small IT service provider group environment. There are

three major reasons this study is applicable to the field of organization change and

development. First, the study suggests that implementing ITIL is an “organizational

change” that implies “movement toward a goal, an idealized state, or a vision of what

should be and movement away from present conditions, beliefs, or attitudes” (Rothwell &

Sullivan, 2005, p. 22). This change needs to be planned, led, and managed carefully to be

successful. The participants in the study advised that implementing ITIL is an initiative

that should be supported by top managers and managed as a formal project.

Implementing ITIL is a change effort that takes a long time and a great deal of money,

requires education and training, changes culture or “the way of doing things”, must

overcome resistance to change, takes active involvement and participation of all

stakeholders, and improves business results by increasing effectiveness and efficiency of

IT processes. While organization development (OD) has been defined over the years “by

just about every author who has written about it” (Rothwell & Sullivan, 2005, p. 18),

three key points can be derived from all the definitions: “OD is long-range in

perspective”, “OD should be supported by top managers”, and “OD effects change,

although not exclusively, through education” (Rothwell & Sullivan, 2005, p. 19). Since

implementation of ITIL shares so many similar characteristics with the key elements of

OD, it is justified to consider implementation of ITIL as an example of organizational

change.


Second, implementation of ITIL can be considered as a developmental,

transitional, or transformational change effort (Anderson & Anderson, 2001) depending

on how extensive the implementation is going to be. Developmental change is the

simplest type of change that improves what is currently being done rather than creating

something new. Examples of developmental change include improving existing skills,

processes, methods, performance standards or conditions, etc. Transitional change

replaces “what is” with something new. This requires designing and implementing a

“new state.” Transitional changes can be managed and effectively supported with

traditional change management tools. If ITIL implementations do not require a significant

shift in culture or behavior to be effective or do not radically impact people’s work,

managing the implementation as a transitional change may be appropriate.

Transformational changes are complex and involve change of culture, behavior, and

mindset. To manage ITIL implementations as a transformational change, it may be

appropriate to have an overarching change strategy and let the change processes emerge as the implementation progresses, instead of executing a pre-determined, time-bound, and linear project plan. Many ITIL implementation projects fail because they

require a change or shift of culture, behavior, and mindset that does not occur. This

common pitfall can be avoided by using a planned and systematic organizational change

management approach (not to be confused with the ITIL Change Management process)

that aligns strategy, people, and processes to improve organizational effectiveness.

Third, there are several ways OD practitioners can use the results of this research

study. For example, by using the trend analysis metrics, OD practitioners can assess the

emerging and changing environment in which the organizational change effort is taking


place. According to Eisen, Cherbeneau, and Worley (2005, p. 193), “Emerging trends and

forces are changing the context in which organizations function and the requirements of

their leaders for assistance from consultants.” Using trend analysis, practitioners can

identify, investigate, and report on all queries in which the service level agreements were

breached. Analysis can also be performed on the feedback received from customers to

evaluate trends related to the levels of satisfaction with the services provided. Service

providers’ performance can be benchmarked against the industry standards. These results

can be taken into account for continuous service improvement.
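
A minimal sketch of such a trend analysis, assuming hypothetical monthly ticket counts, is shown below: a positive least-squares slope suggests a growing workload, a negative one the reverse.

    from statistics import linear_regression  # requires Python 3.10+

    # Hypothetical monthly ticket counts for one service provider group.
    months = [1, 2, 3, 4, 5, 6]
    ticket_counts = [310, 325, 360, 355, 390, 410]

    # Fit a least-squares line; the slope is the trend in tickets per month.
    slope, intercept = linear_regression(months, ticket_counts)
    print(f"Workload trend: {slope:+.1f} tickets per month")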

After identifying the significant trends and having them prioritized if necessary,

practitioners can then assess the implications of these trends and understand the critical success

factors, benefits, risks, and challenges presented by these trends impacting the group

members, group, department, and even the entire organization. Many of the critical

success factors, benefits, risks, and challenges found in the literature review and

mentioned by participants in this research study can be applicable to other types of

organizational change and development interventions, such as individual-based

(coaching, counseling), group-based (team building, conflict management), techno-

structural (outsourcing, business process reengineering), strategic (cultural change,

organizational transformation), or human resource management programs (performance

appraisal, employee development, employee wellness, etc.) (Kongalla, 2013).

Another way OD practitioners can use the results of this research study is by

developing an OD competency—use of technology and software applications (Eisen et

al., 2005) to create dashboards and disseminate performance measures and metrics that

can be helpful in implementing and evaluating OD intervention programs. Computer-


based software applications have the capabilities to pull data from multiple sources,

correlate the data, allow OD practitioners to easily create dashboards, and display the

data using customizable graphs, charts, and infographics. This research study also shows

that dashboards can be built for user-interaction with multiple drilldowns, real-time

updating, and dynamic customizations to discover data relationships and obtain

additional business insights. Most dashboard software currently on the market is designed for mobile devices so that the most accurate and up-to-date information can be accessed from anywhere, by anyone (with proper authorization and permission), in a timely manner.
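
As a hedged illustration of this capability, the sketch below uses matplotlib (one of many charting libraries) to lay out a simple two-panel dashboard page; the data are invented and no drill-down interactivity is attempted.

    import matplotlib.pyplot as plt

    # Invented data for a two-panel dashboard page.
    months = ["Jan", "Feb", "Mar", "Apr"]
    tickets = [310, 325, 360, 355]
    sla_pct = [92, 90, 94, 95]

    fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(6, 6))

    ax1.bar(months, tickets, color="steelblue")
    ax1.set_title("Incidents and Service Requests")

    ax2.plot(months, sla_pct, marker="o", color="seagreen")
    ax2.set_title("SLA Adherence (%)")
    ax2.set_ylim(80, 100)

    fig.tight_layout()
    fig.savefig("dashboard.png")  # or plt.show() for interactive viewing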

Recommendations

The results from this study are important to persons interested in identifying

metrics and creating dashboards. Moreover, the results have important implications for

practitioners and researchers. Academic scholars and industry practitioners of IT Service Management and ITIL should report on various IT services, their quality, and their value to the organization. This study puts forward the necessity of assessing the quality of IT services from four different perspectives (customer, operational, capability, and financial service metrics) along with their critical success factors. Studies on customer service metrics can help IT service providers measure customers’ perception

of the services and answer research questions on customer satisfaction (how well the

service meets customer expectations), customer loyalty (how well and often customers

return to use the service), and market share (how much of the market is being served by


the service). Research on operational service metrics will allow IT service providers to

measure the efficiencies and effectiveness of operational activities and resources utilized

to deliver services. Studies on operational service metrics can be carried out to determine

service availability (percent of time the service is available), service accuracy (percent of

time the service produces accurate results), service performance (response time of the

service meeting targets), supplier performance (how well the suppliers deliver services),

support turnover (turnover rate of support staff), service compliance (how well the

service complies with rules and regulations), and service flexibility (how quickly the

service responds to changing business requirements).
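
For instance, service availability, the first operational metric listed above, reduces to a simple ratio of uptime to total time in the period; the figures below are hypothetical.

    # Hypothetical 30-day month: total minutes versus minutes of downtime.
    minutes_in_month = 30 * 24 * 60   # 43,200 minutes
    downtime_minutes = 86

    availability = 100.0 * (minutes_in_month - downtime_minutes) / minutes_in_month
    print(f"Service availability: {availability:.2f}%")  # -> 99.80%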

Future research projects should also study capability and financial service metrics.

Understanding capability service metrics can support IT service providers in measuring,

managing, and improving the capabilities, skills, capacities, and performance of the

services being offered. These studies may include topics such as service capacity (ability

to meet current and future customer demands), service recoverability (ability to recover

service from a disruption), support skills (capability and skills of staff members

supporting the service), technology performance (technology capability to support the

service), and supportability (capability to support the service). Finally, future studies

focusing on financial service metrics can help IT service providers identify the cost and

revenue performance of services provided. These studies can answer research questions

pertaining to cost performance (how cost-competitive the service is compared to other

similar services in the marketplace), revenue performance (how much the service is

aligned to business revenue goals), and budget performance (how well the service

adheres to planned costs and expenditures).


In addition, the research study recommends that IT Service Management

practitioners unable to implement a robust metrics program in their organization or

department due to various challenges and resource constraints should still try to establish

a minimal metrics program (Steinberg, 2013) when relevant data to calculate metrics are

difficult to obtain. Typical challenges faced by this study’s participants included disparate

data collection and reporting tools, difficulty in aggregating and summarizing data, lack

of a “golden” source of data—the source with unequivocal data, and a lack of automation

in collecting and reporting the data. Rather than abandoning the metrics program, this

case study illustrates that establishing a minimal metrics program based on indicators,

random inspection results, analogous measures, and audit results can be valuable.

Indicators “are based on some observable operational event for which an

operating quality assumption is derived” (Steinberg, 2013, p. 136). Random inspection

results “represent observable events for which an assumption is made that they apply to

all similar events” (p. 137). Analogous measures are “observable metrics from which

assumptions will be derived that other events have occurred” (p. 138). Lastly, audit

results consist of measures from “periodic audit activities that are conducted for specific

operational events” (p. 140). Using a minimal metrics program based on these techniques

is especially helpful for IT service providers where programmed metrics are very difficult

to obtain without undertaking a long-term major project. This study makes similar

observations to Steinberg (2013) that IT service providers can establish a minimal metrics

program based on constant management communications, clear understanding among all

stakeholders, and agreement on the techniques employed.


The principles of organization development can be applicable and relevant in

implementing a metrics program. The participants in this research study stated that a

metrics program is usually implemented within a department (department-wide effort),

and requires senior management’s commitment, support, and involvement to be

implemented successfully. They also mentioned that the primary goal of a metrics

program is to improve organizational effectiveness. To implement such a program, a

planned and systematic approach is required that aligns strategy, people, and processes.

These are the very essence of organization development principles and characteristics. To

the best knowledge of the researcher, no research study is currently available that

demonstrates how to apply the principles of organization development in implementing

an ITIL metrics program.

Research and case studies with real-life examples of how to initiate, design, build,

test, and implement ITIL metrics programs of different sizes and complexities can be

highly valuable and useful to IT Service Management practitioners. Examples of research

topics for the program initiation step can include program scope, charter, and work plan.

For the program design step, roles and responsibilities, skills and training requirements,

stakeholder analysis, and tools and processes to collect, report, and review data needed to

create the metrics should be studied. Examples of training materials, test scripts, and test

results can be valuable in building and testing a metrics program. Future research should

also study successful metrics program implementations and identify lessons learned while

validating program goals, finding areas of continuous service improvement (Physical

Artifact: Vendor Document – The 7 Steps Improvement Process), managing diverse

stakeholders, and rolling out the program.


In this study, the researcher examined the request fulfillment process of an

information technology service provider group to gain their perceptions of the most

important metrics of the process, and subsequently created executive dashboards to

display those metrics. Analysis of data led to the identification of a variety of enablers

and constraints that converged across three themes: trend analysis, monthly operational

summary, and monthly workload distribution summary. Each of these themes (dashboard

pages) and their associated categories (metrics) are important areas of future research that

may offer results needed to understand the real drivers behind the perceived need for a

metric. In addition, research may be expanded to include other IT Service Management

processes in different organizations and service providers with varying user populations.

IT Service Management programs should be studied in organizations with varying

budgetary requirements and in various implementation stages of the program life-cycle.

In this study, only full-time employees of a small IT service provider group engaged in

in-person interviews and a focus group session. Another area of future research could include interviewing part-time employees (those who use IT Service Management processes on a regular basis as a job requirement) and consultants (those who typically have many years of experience in diverse industries). Their input may

provide additional insights into IT Service Management processes, resulting in additional

metrics.

This study used a qualitative case study approach to answer the research

questions. Future research studies should examine whether other research methods—such

as narrative inquiry, ethnography, or phenomenological study—are more appropriate and


suitable for this purpose. These research methods can address various topics such as

participants’ experiences of ITIL adoption, customer care, or process improvement;

understanding different perspectives of participant groups, such as service providers,

customers and end users, stakeholders, and senior managers; and how experiences,

attitudes, and life circumstances affect the needs and behaviors of different participant

groups. Finally, this research study was limited to identifying important metrics

perceived by only the group members and did not account for stakeholders’ perspectives,

which may be another area to consider in future research.


References

Anderson, B. (2009). ITIL background and history. Retrieved from:

http://www.itservicemanagement-itil.com/uncategorized/itil-background-history/

Anderson, L.A., & Anderson, D. (2001). The change leader’s roadmap: How to navigate

your organization’s transformation. San Francisco: Pfeiffer.

Arora, A., & Bandara, W. (2006, July). IT service desk process improvement – a

narrative style case study. Paper presented at the 10th Pacific Asia Conference on

Information Systems (PACIS 2006). Retrieved from:

http://aisel.aisnet.org/pacis2006/78

Barash, G., Bartolini, C., & Wu, L. (2007, May). Measuring and improving the

performance of an IT support organization in managing service incidents. In 2nd

IEEE/IFIP International Workshop on Business-Driven IT Management (pp. 11-

18). IEEE. doi:10.1109/BDIM.2007.375007

Best Management Practice. (2007). ITIL service strategy. London: TSO.

Bowen, G. A. (2003). Social funds as a strategy for poverty reduction in Jamaica: An

exploratory study. Dissertation Abstracts International. University Microfilms

AAT 3130417, Doctoral dissertation, Florida International University, A 65/04,

1557.

Boynton, A. C., & Zmud, R. W. (1984). An assessment of critical success factors. Sloan

Management Review, 25(4), 17-27.


Briner, R. (1997). Improving stress assessment: Toward an evidence-based approach to

organizational stress interventions. Journal of Psychosomatic Research, 43(1),

61-71.

Brooks, P. (2012). Best practice: Metrics for service management designing for ITIL.

Retrieved from:

http://www.vanharen.net/Samplefiles/9789087536480_SMPL.pdf

Bryman, A. (2010). Member validation and check. Retrieved from:

http://srmo.sagepub.com/view/the-sage-encyclopedia-of-social-science-research-

methods/n548.xml

Budd, M. & Malcolm, C. (2001). An effective metrics program can ensure IT

performance success. Healthcare Financial Management, 55(11), 84–88.

Business Dictionary. (2016). What are metrics? Definition and meaning. Retrieved from:

http://www.businessdictionary.com/definition/metrics.html

Cater-Steel, A., & McBride, N. (2007, June). IT service management improvement -

actor network perspective. In Proceedings of the 15th European Conference on

Information Systems (ECIS 2007) (pp. 1202-1213). University of St. Gallen.

Cater-Steel, A., Tan, W., & Toleman, M. (2006, November). Challenge of adopting

multiple process improvement frameworks. Paper presented at the European

Conference on Information Systems (ECIS) - 2006. Retrieved from:

http://aisel.aisnet.org/ecis2006/177

Charmaz, K. (2006). Constructing grounded theory: A practical guide through

qualitative analysis. Thousand Oaks, CA: Sage Publications.


Coelho, A. M., & da Cunha, P. R. (2009, August). IT service management diagnosis at

Grefusa Group and ITIL implementation proposal. Paper presented at the

Americas Conference on Information Systems (AMCIS) - 2009. Retrieved from:

http://aisel.aisnet.org/amcis2009/519.

Conger, S., Winniford, M., & Erickson-Harris, L. (2008, August). Service management in

operations. Paper presented at the AMCIS 2008 Proceedings. Retrieved from:

http://aisel.aisnet.org/amcis2008/362

Creswell, J. (1998). Qualitative inquiry and research design: Choosing among five

approaches. Thousand Oaks, CA: Sage.

Creswell, J. (2007). Qualitative inquiry and research design: Choosing among five approaches. Thousand Oaks, CA: Sage.

Creswell, J. W. (2009). Research design: Qualitative, quantitative, and mixed methods

approaches. Thousand Oaks, CA: Sage.

Creswell, J. W. (2013). Qualitative inquiry & research design. Choosing among five

approaches (3rd ed.). Thousand Oaks, CA: Sage Publications.

Davis, R. J. (2010). Case study database. Retrieved from:

http://srmo.sagepub.com/view/encyc-of-case-study-research/n30.xml

Denzin, N. K., & Lincoln, Y. S. (Eds.). (1994). Handbook of qualitative research.

Thousand Oaks, CA: Sage Publications, Inc.

Diirr, T., & Santos, G. (2014). Improvement of IT service processes: A study of critical

success factors. Journal of Software Engineering Research and

Development, 2(1), 1-21. doi:10.1186/2195-1721-2-4


Disterer, G. (2012, June). Why firms seek ISO 20000 certification: A study of ISO 20000

adoption. Paper presented at the European Conference on Information Systems

(ECIS) - 2012. Retrieved from: http://aisel.aisnet.org/ecis2012/31

Dorogovs, P., & Romanovs, A. (2008). The optimization of use of IT infrastructure and

the implementation of ITIL processes in state institutions. Rigas Tehniskas

Universitates Zinatniskie Raksti, 36, 125. Retrieved from:

http://search.proquest.com/docview/914318383?accountid=13158

Duffy, K. P., & Denison, B. B. (2008). Using ITIL to improve IT services. Paper

presented at the AMCIS 2008 Proceedings. Retrieved from:

http://aisel.aisnet.org/amcis2008/3

Eisen, S., Cherbeneau, J., & Worley, C. G. (2005). A future-responsive perspective for

competent practice in OD. In W. Rothwell & R. Sullivan (Eds.), Practicing

Organization Development: A Guide for Consultants (pp. 188-208). San

Francisco: Pfeiffer.

Espindola, R. S., Luciano, E. M., & Audy, J. L. N. (2009, January). An overview of the

adoption of IT governance models and software process quality instruments at

Brazil - preliminary results of a survey. Paper presented at the 42nd Hawaii

International Conference on System Sciences. Retrieved from:

http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=4755737

Evers, J. C. & van Staa, A. L. (2010). Qualitative analysis in case study. Retrieved from:

http://srmo.sagepub.com/view/encyc-of-case-study-research/n277.xml

Fraenkel J. R. & Wallen, N. E. (2003). How to design and evaluate research in education

(5th ed.). New York: McGraw Hill.


Gacenga, F., Cater-Steel, A., Tan, W., & Toleman, M. (2011, December). IT service

management: Towards a contingency theory of performance measurement. Paper

presented at the 32nd International Conference on Information System, Shanghai,

1-18. Retrieved from:

http://aisel.aisnet.org/cgi/viewcontent.cgi?article=1189&context=icis2011

Gacenga, F., Cater-Steel, A., & Toleman, M. (2010). An international analysis of IT

service management benefits and performance measurement. Journal of Global

Information Technology Management, 13(4), 28-63.

Gacenga, F., Cater-Steel, A., Toleman, M., & Tan, W. G. (2011). Measuring the

performance of service orientated IT management. Sprouts: Working Papers on

Information Environments, Systems and Organizations, 11(162). Retrieved from:

http://sprouts.aisnet.org/11-162

Golafshani, N. (2003). Understanding reliability and validity in qualitative research. The

Qualitative Report, 8(4), 597–607.

Groenewald, T. (2004). A phenomenological research design illustrated. International

Journal of Qualitative Methods, 3(1). Article 4. Retrieved from:

http://www.ualberta.ca/~iiqm/backissues/3_1/pdf/groenewald.pdf

Guba, E. (1978). Toward a methodology of naturalistic inquiry in educational evaluation.

CSE Monograph Series in Evaluation 8. Los Angeles: University of California,

Los Angeles, Center for the Study of Evaluation.

Guest, G., Bunce, A., & Johnson, L. (2006). How many interviews are enough? An

experiment with data saturation and variability. Field Methods, 18(1), 59-82.

doi:10.1177/1525822X05279903


Haustein, J. R. (2012). Successful metrics. Retrieved from:

https://confluence.cornell.edu/display/metrics/Successful+Metrics

Heller, M. (2013). Running IT like a business: Interview with Rob Webb, former CIO of

Hilton Worldwide and CEO of the TBM Council. Retrieved from:

http://www.cio.com/article/2370229/cio-role/running-it-like-a-business/cio-

role/cio-role/running-it-like-a-business.html

Hochstein, A., Tamm, G., & Brenner, W. (2005, May). Service-oriented IT management:

Benefit, cost and success factors. Paper presented at the European Conference on

Information Systems (ECIS) 2005 Proceedings. Retrieved from:

http://aisel.aisnet.org/ecis2005/98/

Hoerbst, A., Hackl, W. O., Blomer, R., & Ammenwerth, E. (2011). The status of IT

service management in health care – ITIL® in selected European countries. BMC

Medical Informatics and Decision Making, 11(76), 2–12.

Hsieh, H.-F. & Shannon, S.E. (2005). Three approaches to qualitative content analysis.

Qualitative Health Research, 15(9), 1277-1288.

Hycner, R.H. (1985). Some guidelines for the phenomenological analysis of interview

data. Human Studies, 8, 279-303.

Iden, J., & Eikebrokk, T. R. (2011). Understanding the ITIL implementation project:

Conceptualization and measurements. Paper presented at the 22nd International

Workshop on Database and Expert Systems Applications, Toulouse. 21-25.

doi:10.1109/DEXA.2011.87

Iden, J, & Eikebrokk, T. R. (2013). Implementing IT service management: A systematic

literature review. International Journal of Information Management, 33, 512-523.


Iden, J., & Eikebrokk, T. R. (2015). The impact of senior management involvement,

organisational commitment and group efficacy on ITIL implementation

benefits. Information Systems and e-Business Management, 13(3), 527-552.

doi:10.1007/s10257-014-0253-4

Iden, J., & Langeland, L. (2010). Setting the stage for a successful ITIL adoption: A

Delphi study of IT experts in the Norwegian Armed Forces. Information Systems

Management, 27(2), 103-112. doi:10.1080/10580531003708378

ITILNews. (2016). ITIL v3: What are the ITIL core books or publications. Retrieved

from: http://www.itilnews.com/index.php?pagename=ITIL_v3__What_are_the_ITIL_Core_Books_or_Publications

Jira. (2015). In Wikipedia. Retrieved on October 14, 2015, from:

https://en.wikipedia.org/wiki/JIRA

Kashanchi, R., & Toland, J. (2006). Can ITIL contribute to IT/business alignment? An

initial investigation. Wirtschaftsinformatik, 48(5), 340-348. doi:10.1007/s11576-

006-0079-x

Kashyap, S. G. (2014). A case study to examine institutional factors facilitating and

inhibiting faculty preparation for teaching in an online MBA program

(Unpublished doctoral dissertation). The Pennsylvania State University, PA.

Kelly, M. C. (2010). A case study in citizen leadership during crisis: The experiences of

the women of the storm (Unpublished doctoral dissertation). The Pennsylvania

State University, PA.

Kirk, J., & Miller, M. L. (1986). Reliability and validity in qualitative research. Beverly

Hills, CA: Sage.


Klubeck, M. (2011). Metrics: How to improve key business results. New York: Apress.

Kongalla, R. (2013). OD interventions. Retrieved from:

http://www.slideshare.net/artistramakrishna/05-orgnl-dev-interventions

Krefting, L. (1991). Rigor in qualitative research: The assessment of trustworthiness. The

American Journal of Occupational Therapy, 45(3), 214-222.

Krathwohl, D. R. & Smith, N. L. (2005). How to prepare a dissertation proposal:

Suggestions for students in education and the social and behavioral sciences.

Syracuse, NY: Syracuse University Press.

Kumbakara, N. (2008). Managed IT services: The role of IT standards. Information

Management & Computer Security, 16(4), 336-359.

doi:10.1108/09685220810908778

Lapão, L. V. (2011). Organizational challenges and barriers to implementing IT

governance in a hospital. Electronic Journal of Information Systems

Evaluation, 14(1), 37-45.

Leary, M. R. (2008). Introduction to behavioral research methods (5th ed.). Boston:

Pearson Education.

Leedy, P. D., & Ormrod, J. E. (2001). Practical research planning and design (7th ed.).

Upper Saddle River, NJ: Prentice-Hall.

Luftman, J. N. (2001) Assessing business-IT alignment maturity. In R. Papp (Ed.),

Strategic Information Technology: Opportunities for Competitive Advantage (pp.

105-134). Hershey, PA: Idea Group Publishing.


Mack, N., Woodsong, C., MacQueen, K., Guest, G., & Namey, E. (2005). Qualitative

research methods: A data collector’s field guide. Research Triangle Park, NC:

Family Health International.

Majanoja, A., Tervala, E., Linko, L., & Leppänen, V. (2014). The challenge of global

selective outsourcing environment: Implementing customer-centric IT service

operations and ITIL processes. Journal of Service Science and Management, 7(6),

396-410. doi:10.4236/jssm.2014.76037

Mann, S. (2011). IT service management metrics: Advice and 10 top tips. Retrieved from:

http://blogs.forrester.com/stephen_mann/11-10-14-

it_service_management_metrics_advice_and_10_top_tips

Marrone, M., & Kolbe, L. M. (2011). Impact of IT service management frameworks on

the IT organization. Business & Information Systems Engineering, 3(1), 5-18.

doi:10.1007/s12599-010-0141-5

Marrone, M., & Kolbe, L. M. (2011a). Uncovering ITIL claims: IT executives’

perception on benefits and business-IT alignment. Information Systems and e-

Business Management, 9(3), 363-380. doi:10.1007/s10257-010-0131-7

Marshall, C., & Rossman, G. B. (2006). Designing qualitative research (4th ed.).

Thousand Oaks, CA: SAGE.

Mays, N. & Pope, C. (2000). Qualitative research in health care: Assessing quality in

qualitative research. The BMJ, 320, 50-52.

McNaughton, B., Ray, P., & Lewis, L. (2010). Designing an evaluation framework for IT

service management. Information & Management, 47(4), 219-225.

doi:10.1016/j.im.2010.02.003


Mendes, M. & Mira da Silva, M. (2011). Implementing a request fulfillment process. In

M. Snene, J. Ralyte, & J-H. Morin (Eds.), Exploring Services Science (pp. 113-

126). Berlin: Springer Berlin.

Miles, M. & Huberman, M. (1994). Qualitative data analysis: An expanded sourcebook

(2nd ed.). Thousand Oaks, CA: Sage.

Al Mourad, M. B., & Johari, R. (2014). Resolution of challenges that are facing

organizations before ITIL implementation. International Journal of Future

Computer and Communication, 3(3), 210-215. doi:10.7763/IJFCC.2014.V3.298

Mohammed, T. A. (2008, December). The art of existence and the regimes of IS-enabled

customer service rationalization: A study of IT service management in the UK

higher education. Paper presented at the International Conference on Information

Systems (ICIS) - 2008. Retrieved from: http://aisel.aisnet.org/icis2008/195.

Office of Government Commerce. (2007). Service operation. London: TSO.

Office of Government Commerce. (2011). Service operation. London: TSO.

Office of Government Commerce. (2011a). ITIL continual service improvement. London:

TSO.

Orr, R. L. (2008). Faculty perceptions of institutional efforts at addressing barriers to

faculty's success in delivering online learning (Unpublished doctoral dissertation).

Western Carolina University, NC.

Overby, S. (2004, May). How to run I.T. like a business. CIO, 48-56.

Patton, M. Q. (2002). Qualitative evaluation and research methods (3rd ed.). Thousand

Oaks, CA: SAGE Publications, Inc.


Pollard, C. & Cater-Steel, A. (2009). Justifications, strategies, and critical success factors

in successful ITIL implementations in U.S. and Australian companies: An

exploratory study. Information Systems Management, 26(2), 164-175.

doi:10.1080/10580530902797540

Rothwell, W. J. & Sullivan, R. L. (2005). Organization development. In W. Rothwell &

R. Sullivan (Eds.), Practicing Organization Development: A Guide for

Consultants (pp. 9-38). San Francisco: Pfeiffer.

Saldana, J. (2008). The coding manual for qualitative researchers. Thousand Oaks, CA:

Sage.

Sandelowski, M. (2000). Whatever happened to qualitative description? Research in

Nursing and Health, 23, 334-340.

Seidman, I. (2012). Interviewing as qualitative research: A guide for researchers in

education and the social sciences. New York, NY: Teachers College Press.

Senger, E., & Osterle, H. (2002). PROMET BECS – A project method for business engineering case studies. St. Gallen: Institute of Information Management at the University of St. Gallen.

Shahsavarani, N., & Ji, S. (2011, June). Research in information technology service

management (ITSM): Theoretical foundation and research topic perspectives.

Paper presented at the International Conference on Information Resources

Management (CONF-IRM 2011). Retrieved from:

http://aisel.aisnet.org/confirm2011/30


Somers, T. M., & Nelson, K. (2001, January). The impact of critical success factors

across the stages of enterprise resource planning implementations. Paper

presented at the 34th Hawaii International Conference on System Sciences. IEEE

Press.

Spafford, G. (2009). What are metrics? Retrieved from:

http://www.processdox.com/Documents/3852_metrics_revised_12-07-09.pdf

Spradley, J. P. (1980). Participant observation. New York: Holt, Rinehart & Winston.

Stake, R. (1995). The art of case study research. Thousand Oaks, CA: Sage.

Stake, R. (2006). Multiple case study analysis. New York, NY: Guildford Press.

Steinberg, R. (2013). Measuring ITSM: Measuring, reporting, and modeling the IT

service management metrics that matter most to IT senior executives.

Bloomington, IN: Trafford Publishing.

Stopper, A. L. M. (2013). Success factors in building online executive development programs in three universities: A collective case study (Unpublished doctoral dissertation). The Pennsylvania State University, PA.

Strauss, A., & Corbin, J. (1998). Basics of qualitative research: Techniques and procedures for developing grounded theory. Thousand Oaks, CA: Sage.

Sverdlik, Y. (2013). One minute of data center downtime costs US $7,900 on average.

Retrieved from: http://www.datacenterdynamics.com/content-tracks/power-

cooling/one-minute-of-data-center-downtime-costs-us7900-on-

average/83956.fullarticle


Talla, M., & Valverde, R. (2013). An implementation of ITIL guidelines for IT support

process in a service organization. International Journal of Information and

Electronics Engineering, 3(3), 334-340. doi:10.7763/IJIEE.2013.V3.329

Tan, W., Cater-Steel, A., & Toleman, M. (2009). Implementing IT service management:

A case study focusing on critical success factors. The Journal of Computer

Information Systems, 50(2), 1-12.

Tan, W., Cater-Steel, A., Toleman, M., & Seaniger, R. (2007, December). Implementing

centralised IT service management: Drawing lessons from the public Sector.

Paper presented at the Australasian Conference on Information Systems,

Toowoomba. Retrieved from: http://www.researchgate.net/publication/228653786

Tayfour, M. A. (2008, December). The art of existence and the regimes of IS-enabled

customer service rationalization: A study of IT service management in the UK

higher education. Paper presented at the ICIS 2008 Proceedings. Retrieved from:

http://aisel.aisnet.org/icis2008/195

Tiku, L. K., Mukherjee, I., & Tellis, C. (2015). Help desk reboot. TD: Talent

Development, 69(7), 48-52.

Trochim, W. M. K. (2005). Research methods. Cincinnati, OH: Atomic Dog Publishing.

UCISA (2015). ITIL: A guide to request fulfillment. Retrieved from:

https://www.ucisa.ac.uk/~/media/Files/members/activities/ITIL/service_operation/request

_fulfilment/ITIL_a%20guide%20to%20request%20fulfilment%20pdf.ashx

van Bon, J. (2008). IT service management global best practices. Netherlands: Van

Haren Publishing.


Vivant, B. (1999). Information technology metrics. The Journal of Bank Cost &

Management Accounting, 12(3), 11-38.

Wagner, H. (2006, January). Managing the impact of IT on firm success: The link

between the resource-based view and the IT infrastructure library. Paper

presented at the 39th Hawaii International Conference on System Sciences.

doi:10.1109/HICSS.2006.265

Waltz, C., Strickland, O., & Lenz, E. (2005). Measurement in nursing and health

research (3rd ed.). New York, NY: Springer.

Wan, S. H. C., & Chan, Y. (2008). Improving service management in campus IT

operations. Campus-Wide Information Systems, 25(1), 30-49.

doi:10.1108/10650740810849070

Warrick, D. D. (2005). Organization development from the view of the experts: Summary

results. In W. Rothwell & R. Sullivan (Eds.), Practicing Organization

Development: A Guide for Consultants (pp. 164-187). San Francisco: Pfeiffer.

Welch, S. (2014). A case study of institutional efforts to address barriers to the successful

delivery of online learning: A business faculty perspective (Unpublished doctoral

dissertation). The Pennsylvania State University, PA.

Winniford, M., Conger, S., & Erickson-Harris, L. (2009). Confusion in the ranks: IT

service management practice and terminology. Information Systems Management,

26(2), 153-163. doi:10.1080/10580530902797532

Wood, D. (2013). Demonstrating service desk value through more meaningful metrics.

Retrieved from: http://www.servicedesk360.com/wp-

content/uploads/2013/05/Meaningful_Metrics_online.pdf


Yin, R. (1989a). Case study research: Design and method (1st ed.). Thousand Oaks, CA:

Sage.

Yin, R. (1989b). Interorganizational partnerships in local job creation and job training

efforts. Washington, DC: COSMOS Corp.

Yin, R. (1993). Applications of case study research. Beverly Hills, CA: Sage Publishing.

Yin, R. (1994). Case study research: Design and method (2nd ed.). Beverly Hills, CA:

Sage Publishing.

Yin, R. (2003). Case study research: Design and method (3rd ed.). Thousand Oaks, CA:

Sage.

Yin, R. (2012). Applications of case study research (3rd ed.). Thousand Oaks, CA: Sage.

Yin, R. (2014). Case study research: Design and method (5th ed.). Thousand Oaks, CA:

Sage.


Appendix A

IRB Approval letter


Appendix B

HRP-591: Protocol for Human Subject Research

Has been provided in a separate file.


Appendix C

Recruitment Letter

Dear <<Participant’s Full Name>>:

My name is Sohel M. Imroz, and I am a Ph.D. candidate in the department of Learning

and Performance Systems at the Pennsylvania State University, University Park, PA. I

am conducting research identifying metrics for the ITIL request fulfillment process and creating executive dashboards for your group. The purpose of this letter is to request your

participation in this research.

The results of my inquiry will be of interest to your group and other professionals

interested in ITIL and IT Service Management. To participate in this research, you must

meet these three criteria:

1. Be employed full-time.

2. Be a staff member of your group for at least one year.

3. Use the request fulfillment process on a daily basis.

Please be assured that information provided by the participants will be held in the strictest of

confidence. All data will be analyzed and reported as group data only, and in accordance

with Penn State University policy for research initiatives.

Please let me know if you would be willing to participate in this study. If you do, then I

will contact you to make an appointment to discuss the ITIL request fulfillment process

implemented in your group. If you have any questions or need additional information

about this study, you may contact Sohel M. Imroz using the phone number and email

address provided below. A copy of the questions for the meeting will be attached for your

perusal before the meeting.

I really appreciate your assistance and look forward to your participation in this research.

Thank you in advance for your help!

Best regards,

Sohel M. Imroz, PhD Candidate

Department of Learning and Performance Systems

The Pennsylvania State University.

Address: 10 Vairo Blvd. Apt. 24-B, State College, PA 16803

Email: [email protected]

Phone: 814-360-9285


Appendix D

Informed Consent Form

Title of the Project: A qualitative case study identifying metrics for ITIL®

request fulfillment process to create executive dashboards:

Perspectives of an information technology service provider

group

Principal Investigator: Sohel M. Imroz

Ph.D. Candidate, Penn State University

10 Vairo Blvd. Apt. 24-B, State College, PA 16803

Email: [email protected]; Phone: 814-360-9285

Advisor: Dr. William J. Rothwell

Professor of Education, Penn State University

310B Keller Building, University Park, PA 16802

Email: [email protected]; Phone: 814-863-2581

1. Purpose of the Study: This study examines the request fulfillment process for an

information technology service provider group to discover the most important metrics of the

process, and to create executive dashboards to display those metrics. The study offers IT

managers and senior executives indicators from which they can make accurate and timely

operational and strategic decisions. The study also provides visibility into how effectively and

efficiently a service provider group operates and delivers a set of IT services.

2. Procedures to be followed: In an interview setting with only the principal investigator, you

will be asked a series of questions about the ITIL implementation and the request fulfillment

process in your group. The interview will be tape-recorded (for transcription purposes only)

with your permission. Your real name will not be used in the report but will be replaced with

a pseudonym.

3. Duration/Time: The interview will be no longer than 90 minutes.

4. Statement of Confidentiality: Your participation in this research is confidential. In the event

of a publication or presentation resulting from the research, no personally identifiable

information will be shared. In order to help offset the likelihood that it would be possible to

deductively determine who provided specific responses; alpha/numeric codes will be used

instead of position titles to help protect the identities of participants. Your responses will

remain confidential. That is, names or other identifiable information will not be linked to

your responses. Data will be reported in summary form only, or if individual quotes are used,

a masked name, pseudonym, or number code will replace your given name. Information

gathered from the interview will be stored and secured in the principal investigator’s secure

home office and be accessible by only the principal investigator.


5. Right to Ask Questions: Please contact Sohel M. Imroz at (814) 360-9285 with questions or concerns about this study.

6. Voluntary Participation: Your decision to be in this research is voluntary. You may stop at any time, and you do not have to answer any questions you do not want to answer. Refusal to take part in or withdrawal from this study will involve no penalty or loss of benefits you would otherwise receive.

7. Permission to Use Tape Recording Device: Please indicate below your willingness to have this interview tape-recorded. All audio recordings will be stored in a locked cabinet and destroyed after transcription unless you give the researcher permission to archive recordings for use in future reports and publications. Only approved researchers will have access to these tapes. You may decline to have this interview tape-recorded at any time before or during the interview. After the interview, you have the right to ask that the tape recording of your interview not be used in this research study.

________ I permit this interview to be tape-recorded.

________ I do not permit this interview to be tape-recorded.

If you agree to take part in this research study and to the information outlined above, please sign your name and indicate the date below. You will be given a copy of this form for your records.

----------------------------------------------------------- ------------
(Participant's Signature) (Date)

----------------------------------------------------------- ------------
(Person Obtaining Consent) (Date)


Appendix E

Interview Guide

Opening Script: Thank you for agreeing to participate in my study on identifying metrics for the ITIL request fulfillment process. My name is Sohel M. Imroz, and you can reach me at [email protected] or 814-360-9285 at any time after this discussion if you have questions or would like to make any changes to your responses. Again, this research is for a doctoral dissertation titled "A qualitative case study identifying metrics for ITIL request fulfillment process to create executive dashboards: Perspectives of an information technology service provider group." I am going to ask you a number of questions and request that you respond honestly. The interview should take between 60 and 90 minutes.

Interviewee Contact Information: Name, Title, Email Address, Phone Number

Interview Information: Date, Time, Venue

Interview Questions (answers and notes recorded for each):

Q1. What are the main reasons behind implementing the request fulfillment process in your group?

Q2. What challenges and risks did your group face while implementing the request fulfillment process?

Q3. Please describe the way the ITIL project was implemented in your group.

Q4. What activities received constant and careful attention from management, group members, and external consultants during the request fulfillment process implementation?

Q5. What service-request-related information does your manager usually want to know from you?

Q6. How do you familiarize new team members with the request fulfillment process after onboarding? What methods (mentoring, training, etc.) do you use?

Q7. What is the source of the service requests placed with your group? What kind of information do they contain?

Q8. Who will use the dashboard? What key questions would they like answered on the dashboard?

Q9. What metrics would you like to see on the dashboard? How would you like the metrics visually presented on the dashboard?

Q10. What tool or software will be used to create the dashboard?

Q11. Do you want the dashboard to be updated manually or automatically? How frequently should the dashboard be updated?

Q12. Do you have the necessary resources and capabilities in your group to maintain the dashboard?

Field Notes:

Follow-Up Notes:

Closing Script: That concludes my questions. Do you have any additional comments you would like to add or any questions for me? Thank you so much for your time and cooperation.


Appendix F

Properties of a JIRA Record

Issue Type: The type of ticket. There are 11 issue types: Access, BI, Disruption, Feedback, Incident, Informative, Inquiry, Rejected, Service – Ad Hoc, Service – Recurring, and Sub-task.

Key: The unique identification number of a ticket.

Assignee: The person to whom a ticket is assigned.

Priority: The priority of the ticket. There are five priorities: 1 – Very High, 2 – High, 3 – Medium, 4 – Low, and 5 – Very Low.

Status: The current status of a ticket. There are 12 statuses: Assigned, Closed, Evaluating, Fulfilled, In Development, In Progress, Needs Reviewed, On Hold, Open, Production, Ready for Production, and Reviewed.

Date Created: The date and time when the ticket was created.

Due Date: The date by which the ticket must be completed.

Summary: A brief description of what the ticket is about.

Department/Area: The department or area creating the ticket. There are eight departments/areas; names are not displayed to protect anonymity.

Analytic Service: The name of the analytic service the ticket belongs to. There are 19 analytic services; names are not displayed to protect anonymity.

Date Closed: The date and time when the ticket was closed.

Date Resolved: The date and time when the ticket was resolved.
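To illustrate how counts over these properties could be pulled from the ticket repository, the following minimal sketch (not an instrument used in the study) assumes the group's JIRA instance exposes the standard JIRA REST API search endpoint; the base URL and credentials below are hypothetical placeholders.

```python
# Minimal sketch: count tickets opened and closed in a given month via the
# standard JIRA REST API search endpoint. The base URL and credentials are
# hypothetical placeholders, not values from the study.
import requests

JIRA_BASE = "https://jira.example.org"   # hypothetical instance
AUTH = ("api_user", "api_token")         # hypothetical credentials

def count_tickets(jql: str) -> int:
    """Return the number of issues matching a JQL query."""
    resp = requests.get(
        f"{JIRA_BASE}/rest/api/2/search",
        params={"jql": jql, "maxResults": 0},  # maxResults=0 returns totals only
        auth=AUTH,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["total"]

# Tickets opened and closed in July 2014, the month the group adopted
# the request fulfillment process.
opened = count_tickets('created >= "2014-07-01" AND created < "2014-08-01"')
closed = count_tickets('status = Closed AND resolved >= "2014-07-01" '
                       'AND resolved < "2014-08-01"')
print(f"July 2014: {opened} opened, {closed} closed")
```

Run once per month, queries of this shape would yield the month-by-month counts that the dashboard metrics in the following appendices rely on.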


Appendix G

Case Study Protocol

The following case study protocol is adapted from Yin (2003, p. 68).

A. Overview of the case study project:
1. The purpose of this study is to understand the request fulfillment process for an information technology service provider group, to discover the most important metrics of the process, and to create executive dashboards to display those metrics.
2. The two primary research questions are:
a. What do the group members perceive as being the most important metrics of the ITIL request fulfillment process?
b. How can executive dashboards be created with the metrics perceived as most important by the group members?

B. Field procedures:
1. Make contact (e-mail, phone call, or Skype) with the "Director" interview subject of the group.
a. Share the invitation letter, the purpose of the study, the research questions, and the credentials of the researcher.
b. If the subject agrees to participate and has appropriate approval from the institution:
i. Secure names of other possible subjects in the group, following the process described above.
ii. Schedule interview details (date, time, and method of communication).
iii. Share the interview guide questions in advance of the interview.
iv. Conduct the interview based on the interview guide, taking an audio recording of each interview.
v. Answer any questions from the interview subject, and provide the researcher's contact information should further questions arise later.
vi. Share that a summary of the interview will be provided to the participant for his or her review, clarification, and corrections.
vii. Request permission to contact the participant in the future should further questions arise or there be a need for clarification or additional information.
viii. Thank the participant for the interview.
2. Review other data sources, including archival records and physical artifacts of the request fulfillment process. This may include information related to the following topics:
a. Meeting notes
b. Process documents, checklists, and workflows
c. Service request repository database (JIRA)
d. Reports and files produced
e. Training materials
f. Onboarding documents
3. Conduct coding and analysis:
a. Discuss similar themes across cases.
b. Discuss summarization of meaning.
c. Discuss themes as they tie back to the study's conceptual framework.
d. Discuss areas for future research opportunities.


Appendix H

Field Journal Template and Example

Note. Adapted from Spradley (1980).

Space: The physical space or places
Actor: The people involved
Activity: A set of related acts people do
Object: The physical things that are present
Act: Single actions that people do
Event: A set of related activities that people carry out
Time: The sequencing that takes place over time
Goal: The things people are trying to accomplish
Feeling: The emotions felt and expressed

Example:

4/18/2016, 11:00 am EST

Met Jennifer and Carl at the XX office today. I was scheduled to be there at 10:30 am, but traffic was so bad that I did not get there until 11:00 am. Neither Jennifer nor Carl seemed bothered by my tardiness (Jennifer: "Don't worry about it. I myself was late today."). I walked into the building and took the elevator up to the fourth floor. I was a little unsure about where to go from there, so I just walked into the first open door and said, "I'm looking for the XX office." A woman showed me into a large office (long and slightly irregular in shape, with windows on one wall, a desk, a table, and many chairs; also two computers set up on a counter that runs along the wall across from the windows). Two women were looking at a computer screen that was on the counter. When I walked in, I greeted Jennifer and Carl, and they greeted me back. Both of them shook my hand, though Jennifer was the first to do so and did so with slightly more self-assurance than Carl. Carl asked me to hang my coat on one of the coat racks and gestured to the many chairs that were around the office. I placed my computer in front of me on one side of the conference table, and placed the tape recorder on the other side, a couple of feet away, in front of Jennifer and Carl. I had communicated with them prior to the meeting about using a tape recorder, and they did not object. They sat together closely side by side, but not too closely.



Appendix I

Documents, Physical Artifacts, and Data Analyzed

Table columns: Documents or Physical Artifacts; Data Analyzed.

Note. Adapted from Bowen (2003).


Appendix J

The Codebook: Codes, Descriptions, and Examples

Each entry below lists a code, a description of how the code was applied, and example excerpts.

Code: Tickets opened per month
Description: Participant declares her decision to see the number of tickets created per month. Participant refers to her past experience reporting the number of tickets created per month and recognizes its importance. Participant expresses a preference for obtaining numbers for all five priority types but recognizes that not all can be displayed on the dashboard. Participant expresses his preference for how to display these numbers on dashboards.
Examples:
"I definitely want to see on the dashboard how many tickets we created and closed in a given month. I want to see these numbers starting from when we started to use the request fulfillment process."
"In the company where I worked before coming here, the first numbers we reported on a monthly basis were the number of tickets we created and closed in a given month. Here too [pointing at her current workplace], I think these are the most basic but important metrics that should be reported on the dashboard."
"We want to make sure all the tickets are completed and closed within the due date of the ticket. We understand that it is not realistic to close all the tickets in the same month they were created, but vast majority of the service requests that we receive could be completed and closed within few days. So it makes sense for us to see on the dashboard the total number of tickets created and closed on a month-by-month basis. If we see a vast discrepancy or disparity in a given month, then we can look further into it and find out what's going on."
"It would be nice to see these numbers for all P-1, P-2, P-3, P-4, and P-5 tickets."
"A stacked-column bar chart with red bars showing the number of tickets created (per month) and green bars showing the number of tickets closed (per month) seems appropriate and I think it would look nice."

Code: Tickets closed per month
Description: Participant declares her decision to see the number of tickets closed per month. Participant refers to her past experience reporting the number of tickets closed per month and recognizes its importance. Participant makes direct reference to resolving tickets within due dates and indirect reference to trend analysis. Participant expresses his preference for how to display these numbers on dashboards.
Examples: Same as above.

Codes: Priority-1 tickets opened per month; Priority-1 tickets closed per month; Priority-2 tickets opened per month; Priority-2 tickets closed per month; Priority-3 tickets opened per month; Priority-3 tickets closed per month; Priority-4 tickets opened per month; Priority-4 tickets closed per month
Description (for each): Participant makes direct reference to creating or closing tickets of the given priority.
Examples: Same as above.


Codes: Priority-5 tickets opened per month; Priority-5 tickets closed per month
Description (for each): Participant makes direct reference to creating or closing Priority-5 tickets.
Examples: Same as above.

Code: Issue type
Description: Participant expresses concern about categorizing tickets under the wrong issue type. Participant refers to the benefits of issue type. Participant explains that determining the issue type is the first step in the request fulfillment process. Secondary sources provide a list of all issue types.
Examples:
"Choosing the wrong issue type for categorization will have repercussions throughout the lifecycle of a service request in the request fulfillment process—from inefficiencies in assigning requests to inability to accurately report on the types of requests we are receiving. So it is important that we can easily see these numbers on the dashboard for every month since July 2014."
"I find the most common way to organize our support tickets is by the issue type, and they are so easy to gather. In my experience, in most cases, organizing service requests by issue type maps nicely to the people who work on completing those requests. So I think that team leaders and senior executives would consider issue type a high-level metric and be very interested to know these numbers."
"Many organizations choose to categorize their service requests based on the department that strictly handles the requests, by the product, or by the customer they serve. But, in the request fulfillment process that our group follows, one of the very first steps after logging a service request is to determine its issue type. Therefore, we should definitely display this metric on the dashboard."
"Examples of Issue Type are: Access, Disruption, Feedback, Incident, Informative, Inquiry, Sub-task, Rejected, Ad Hoc, Business Intelligence, and Recurring."


Codes: Access; Disruption; Feedback; Incident; Informative; Inquiry; Sub-task; Rejected; Ad Hoc; Business Intelligence; Recurring
Description (for each): Secondary source provides specific examples of this issue type.
Examples: Same as above.

Code: Priority
Description: Secondary sources provide a list of all priority levels.
Examples:
"The priority of a service request is an assigned value representing relative importance or sequence in which the service request should be addressed."
"There's no denying that I want to know how many P-1, P-2, P-3, P-4, and P-5 tickets were created every month and how many of them were closed. But I also don't want to clutter the dashboard page with too many graphs. Therefore, let's show the metrics for only the P-1 tickets on the dashboard, and create reports or write queries in JIRA to see the numbers for P-2, P-3, P-4, and P-5 tickets."
"P-1 tickets are 'critical' and must be completed in less than four hours. These tickets also impact the senior leaders of the department and organization (names withheld)—for example, directors of other groups, executive directors, and vice presidents."
"Let's be consistent with how the other graph (the total number of tickets created and closed per month graph) would look. So does a stacked-column bar chart with red bars showing the number of P-1 tickets created (per month) and green bars showing the number of P-1 tickets closed (per month) seem reasonable to everybody in the group?"
"The number of tickets by priority, on a monthly basis, should be displayed on the dashboard because prioritization allows us to be fair to all users and to objectively score each service request according to established rules rather than doing guesstimates or favoring certain users or groups."
"On the dashboard, on a monthly basis, I would like to see the number of tickets broken down by priorities to keep an eye on any sharp change (increase or decrease) in the numbers of P-1 and P-2 tickets in particular. These tickets affect our executives, directors, and managers; so we must be able to keep track of these requests and complete them on time—in less than four hours for P-1 requests and in less than eight hours for P-2 requests."
"We want a fair distribution of all the requests among our staff members. Typically, P-1 and P-2 requests have zero room for errors, so they must be thoroughly prepared and reviewed before being delivered. Requests with priorities P-4 or P-5, although they must be correct when delivered, have lower impact (i.e., they are not as critical as P-1 or P-2 requests). We don't want certain group members handling only the P-1 or P-5 tickets. We want to make sure each group member has a fair and balanced distribution of P-1 through P-5 requests to complete."
"By having well-established rules on setting the priorities, training time for new staff members is reduced, because they don't have to understand the complexities behind prioritization."
"Prioritization empowers the group members when management support is needed. When we are confronted by irate users, now we can justify if we must ask them to wait or require additional information from them to complete a request."
"ITSM best practice"
"It made his job less stressful by eliminating the decision dilemma."
"Now I don't ask my manager—what do I do next?"
"Examples of Priority levels are: Priority – 1, Priority – 2, Priority – 3, Priority – 4, and Priority – 5."

Codes: Priority – 1; Priority – 2; Priority – 3; Priority – 4; Priority – 5
Description (for each): Secondary source provides specific examples of this priority level.
Examples: Same as above.

Code: Issue Status
Description: Secondary sources provide a list of all issue statuses.
Examples:
"Customers occasionally ask for explanations about how their requests and support issues are progressing, how well they are being handled, as well as what various ticket status codes indicate. We must be able to provide most up-to-date information to the customers, or the customer satisfaction rating could take a hit. We strive to maintain a near-perfect customer satisfaction rating, so it is important that we know the number of tickets in each issue status."
"Whenever possible, we prefer that customers submit one ticket per issue and keep one issue per ticket. This allows us to easily tell what specific matter is being handled in a ticket, to correctly close issues once they are completed, and to move tickets to other group members or departments without stalling the progress of other unrelated matters in the same ticket. The issue status field keeps us in line. For example, if I am waiting for a ticket to be reviewed, it will be marked as Needs Reviewed with the date and time stamped. If several days go by with the issue status being unchanged, then I will know that the ticket has not been reviewed yet, so it may need to be escalated. So it is very important for me to know the number of tickets by issue status on a monthly basis."
"The issue status field, although being one or two words only, conveys vast information to the group members and the users. For example, a ticket in the Open status means that ticket is in the active queue…it will be looked at soon if it is not already being reviewed…users may receive staff response if they have a question or if we have addressed the issue, etc. An In Progress status could mean that the ticket is being worked on by a staff member and an update will be provided as soon as it is completed. A Closed status may indicate that the ticket is completed...we received confirmation from the user that the issue is resolved…so no further action will be taken at this time, etc. The issue status is a very useful field and we refer to this field all the time."
"Examples of Issue Status are: Assigned, Closed, Fulfilled, In Development, In Progress, Needs Reviewed, On Hold, Open, Production, and Reviewed."

Codes: Assigned; Closed; Fulfilled; In Development; In Progress; Needs Reviewed; On Hold; Open; Production; Reviewed
Description (for each): Secondary source provides specific examples of this issue status.
Examples: Same as above.

Code: Department/Area
Description: Secondary sources provide a list of all department/area names.
Examples:
"Which departments were the top five clients of our group?"
"To how many departments was our group providing support?"
"What type of requests were most frequently asked by these departments? Are they recurring requests? Or are they ad hoc requests?"
"It makes sense we make this information available for them (senior management) every month so that they can see how valuable my team's contribution is to the organization and how we can continue to work like a true high-performing team."
"We provide services to all these departments. They use the data and reports provided by us and make decisions based on them. The decisions they make ultimately serve the goals and objectives of the entire organization. If the data or report based on which they make decisions are 'bad' or incorrect, it looks bad on me and our team. I don't want that to happen, for sure."
"We need to know which departments are our biggest clients...I mean…which departments place the most number of requests to us. We want to know the natures of their requests, and also want to make sure we have enough resources to handle their requests."
"Not all departments send same amount of requests to us. Some are way more active and demanding than others. Some departments regularly send multiple P-1 requests, while some send mostly P-3 or P-4 requests. If we can see these numbers on the dashboard, then we can reach out to the departments to find more ways to serve them."
"Some of the departments are creating reports by themselves after hiring external consultants and spending tons of money; but our team could provide them with the data and similar reports for free. They just didn't know that we could provide them with similar services."
"Now that the other departments know what we can do for them, I am expecting more requests from them starting next quarter."
"Examples of Department/Area are: A, B, C, D, E, F, G, and H (names are withheld)."

Codes: Department A; Department B; Department C; Department D; Department E; Department F; Department G; Department H
Description (for each): Secondary source provides specific examples of this department/area.
Examples: Same as above.

Code: Analytic Service
Description: Secondary sources provide a list of all analytic services.
Examples:
"Another way of grouping service requests."
"Examples of analytic services are: Access, Administrative, Business Intelligence, Codes, Dashboard, Disruption-Application, Disruption-Data, Disruption-Server, Evaluations, Faculty List, Feedback, Incident-Dashboard, Incident-Evaluation, Incident-Report, Informative – Analytic Service, Inquiry, Other, Project, Report, Research, SRTE, Student List, and Testing."

Codes: Access – Analytic Service; Administrative; Business Intelligence – Analytic Service; Codes; Dashboard; Disruption – Application; Disruption – Data; Disruption – Server; Evaluations; Faculty List; Feedback – Analytic Service; Incident – Dashboard; Incident – Evaluation; Incident – Report; Informative – Analytic Service; Inquiry – Analytic Service; Other; Project; Report; Research; SRTE; Student List; Testing
Description (for each): Secondary source provides specific examples of this analytic service.
Examples: Same as above.

Code: Tickets per Assignee
Description: Participant declares her decision to see the number of tickets per assignee on a monthly basis. Participant shows confidence that team leaders want to know these numbers. Participant proposes various ways to break down these numbers for additional insight. Participant expresses his preference for how to display these numbers on dashboards.
Examples:
"I want to see how many tickets were assigned to me on a monthly basis. I know this number is important to Jennifer, Jackie, and Carl, but this is important to me too. As a matter of fact, I think all the staff members should be able to see the number of tickets assigned to each of us. That way, we can be proactive in seeking and offering help if needed. I think this helps the whole team. It definitely helps me."
"Well, what about the number of tickets per assignee broken down by issue type and status too? Do you want all the ad hoc or recurring tickets assigned to one person? Do you want all the open tickets go to another person? So it also makes sense to me seeing the number of tickets per assignee broken down by issue type and status."
"How about we show these numbers in plain and simple two-dimensional tables?…no graph or chart…just simple separate tables showing the assignees' names in rows and the number of tickets broken down by priority in one table, by issue type in a second table, and by issue status in a third table."

Codes: Tickets per Assignee and Issue Type; Tickets per Assignee and Priority; Tickets per Assignee and Issue Status; Tickets per Assignee and Analytic Service
Description (for each): Participant proposes reporting the number of tickets per assignee broken down by this property.
Examples: Same as above.

Code: Reviewer
Description: Participant recalls horror stories from the past when deliverables were sent to the users without being reviewed. Participant provides specific examples of when the review step caught errors before a report was delivered to the user. Participant proposes various ways to break down these numbers for additional insight. Participant expresses his preference for how to display these numbers on dashboards.
Examples:
"This team had a bad reputation few years back (before I became the director) for delivering reports and spreadsheets with incorrect numbers and silly typos. Sometimes the deliverables did not look professional and lacked quality control. So I made it mandatory that all new, ad hoc, recurring, and high profile deliverables must be reviewed by someone other than the person who created the deliverable."
"Having a second pair of eyes checking for errors in data or typos really increases the possibility of catching them. There are several times over the last year or so when I created a report but had Samuel review them first. Surely enough, he caught errors—sometimes simple typo, sometimes serious data error—and we corrected them before sending the correct report to the end user. Samuel is our expert reviewer as he does the penultimate review before Jackie makes the final go/no go decision (should the report be delivered or not)."
"I create about half a dozen recurring reports every week, and I have each one of them reviewed (usually by Samuel) before returning to the user. I never take it for granted that my reports will be error-free, although I know the process and data inside out. Having someone double check my work increases my comfort level in my deliverables."
"There has been only one instance since July 2014 when both Jennifer and I overlooked a simple spelling mistake. But overall, we have always been able to deliver correct reports to our users. That is important. The end users know they can trust our data."
"How about we see which reviewer is reviewing how many tickets? How about we break this number by Issue Type, Priority, Issue Status, and Analytic Service?"

Codes: Tickets per Reviewer; Tickets per Reviewer and Issue Type; Tickets per Reviewer and Priority; Tickets per Reviewer and Issue Status; Tickets per Reviewer and Analytic Service
Description (for each): Participant makes direct reference to the number of tickets per reviewer, broken down by this property.
Examples: Same as above.

Code: Tickets per Department/Area and Issue Type
Description: Participant proposes various ways to break down these numbers for additional insight.
Examples: "How about we see which department/area is submitting how many tickets? How about we break this number by Issue Type, Priority, Issue Status, and Analytic Service?"

Codes: Tickets per Department/Area and Priority; Tickets per Department/Area and Issue Status; Tickets per Department/Area and Analytic Service
Description (for each): Participant makes direct reference to the number of tickets per department, broken down by this property.
Examples: Same as above.

Code: Request fulfillment process
Description: Participant explains the request fulfillment process and stresses its benefits.
Examples:
"Provides quick and effective resolution of the service requests and improves productivity of the users."
"Reduced the bureaucracy involved in requesting and receiving services."
"Increased the level of control over these types of services."

Code: Trend analysis
Description: Participant makes direct reference to why trend analysis is important. Participant provides examples from his previous job. Participant explains what is really important to different stakeholders.
Examples:
"Trend analysis is important for our group because it is the practice of collecting past and present information to identify a pattern or trend to predict future events or scenarios."
"Trend analysis is useful because it can be used to compare past data with present data, predict future events based on the trend found, and make informed decisions."
"Trend analysis helps the group understand how the request fulfillment process has performed in the past, and predict where the current operations and practices of the process may take us next."
"Let's not forget that the executive dashboard users (i.e., the directors, executive directors, and vice presidents) would mainly be looking at the trend analysis dashboard page and associated graphs, so it is important to show them the 'really important' metrics instead of showing them everything. We can always create reports outside the dashboard pages to show them other numbers, should they ask for those."
"Decreased service downtime"
"Increased customer satisfaction"
"Confidence in offered services"
"Increased help desk performance"
"Cornerstone"

Code: Customer attitude before ITIL
Description: Participant recalls customers' attitudes toward her team before implementing ITIL.
Examples:
"Service desk activity failing to support business activities"
"Customers not satisfied by the services offered"
"Incidents not resolved in a timely manner and increasing customer downtime"

Code: Tickets or Service Requests
Description: Participant makes direct reference to tickets or service requests.
Examples:
"Primary unit of work"
"Ticket volume drives the headcount of members needed by the group, thus, is considered to be the number one determinant in making staffing decisions."

Code: JIRA
Description: Participant shows the software application used to keep track of all service requests.
Examples: "Our JIRA system has all the tickets since July 2014 when we first started to use the request fulfillment process, so these numbers can be easily retrieved by running some simple queries in JIRA."

Code: Sources of service requests
Description: Participant provides examples of how a service request ticket can be created. Participant explains the sequence of events that take place after creating a ticket. Participant states her preference for how tickets should be created.
Examples:
"The most preferred source of all service requests is a web portal or an online form (E-Form) that is completed by the requester of the service. The online form contains all the required information for the request to be assigned and processed. When the requester submits the form, it is automatically forwarded to JIRA to create a ticket. The ticket number is used to uniquely identify the service request."
"The second-most preferred service request source is email—when requesters send email to the group's support team (email address withheld) with the request. The email is then forwarded to JIRA by the group's administrative assistant to manually create a service request ticket."
"Users are discouraged from sending service requests through these means (personal emails, phone calls, or walk-ups) to avoid possible delays due to the risk of the request being lost, overlooked, or incorrectly assigned. When users send personal emails, call, or physically come to a group member with their request, the group member would still create a service request in JIRA, but would also strongly encourage the requester to use the online form for future requests."

Code: Processing service requests
Description: Participant draws flow chart diagrams to show the request fulfillment process flow.
Examples:
"If a service request is submitted using the online form, the request directly goes to JIRA and a ticket is automatically created. A JIRA ticket is also known as a JIRA record. When a ticket is received in JIRA, the requester also gets an automated message from JIRA saying 'Your request has been received. You will be notified when the request is assigned.' The newly created JIRA ticket is placed in the New Arrival queue."
"If a service request is emailed to the group (group email address withheld), currently there is no automated response sent to the requester when the request is received. These requests are manually forwarded to JIRA by the administrative assistant of the group. In this case, the administrative assistant manually creates a new JIRA ticket, places it in the New Arrival queue, and notifies the requesters saying that a ticket has been created with their request (also lets them know the ticket number for future communication)."

Code: Impact
Description: Participant makes direct reference to the impact of service requests.
Examples:
"Measure of the extent of the Incident and of the potential damage caused by the Incident before it can be resolved."
"Who is making the request? What is the purpose of the request? How many people are impacted and who are they? What is the possible financial impact? What is the impact to the reputation of the group or the organization? Is there a regulatory or legislative requirement?"

Code: Urgency
Description: Participant makes direct reference to the urgency of service requests.
Examples: "How quickly a resolution of the Incident is required."

Code: Operational data
Description: Participant talks about various benefits of reporting operational data on a regular basis. Participant makes general references to what kind of operational data should be reported.
Examples:
"Powerful tool for mapping performance against corporate goals and key performance indicators that have been agreed by management and communicated to all group members."
"To assist her in the decision making process."
"By using the operational summary numbers of the request fulfillment process our team will be able to make informed, rational decisions about the data, reports, and other services offered to our users."
"Monthly operational numbers can be used to show to the senior leaders of our department (name withheld) the level of service given by, and performance of, our group."
"Operational numbers that show more money or more people are needed to get the job done correctly and on time."
"By supporting the customers who support the end users, we are indirectly influencing the overall customer satisfaction. End users have a better perception of our department and organization when they get better service that is reliable and consistent. This is possible when the members of our team are productive and willing to provide highest quality services to the customers."

Code: Goals, objectives, and values
Description: Participant makes general and specific references to the goals and objectives of the group and the department. Participant also specifically mentions the core values of the organization. Participant explains how the group's goals complement organizational values. Participant refers to various departmental and vendor documents on goals and values.
Examples:
"Communicating the value and support provided by our team helps us being considered high-performing and strengthens the case for additional resources and funding for our team. Therefore, we must align our team's objectives with the department's and organization's business objectives. By knowing the business objectives, our team can easily set support goals that are aligned. This in turn will lead to identifying and producing metrics that can validate the alignment."
"Everyone understands both team and individual performance goals and knows what is expected. We actively diffuse tension and friction in a relaxed and informal atmosphere."
"Everybody in our team believes that we are working toward the same goals. We are clear on how to work together and how to accomplish tasks."

Code: Workload
Description: Participant makes general reference to the work assigned to them by the team leaders. Participant makes direct reference to the necessity of distributing work in a fair and balanced manner.
Examples:
"Since we know the workload will be assigned in a fair and balanced manner, the staff members have solid and deep trust in each other and in the team's purpose. We feel free to express feelings and ideas."
"Fair and balanced distribution of workload is very important to the leaders of the team because we expect each staff member to carry his or her own weight, and respect the team processes and other members. It does not mean that one cannot ask for help or support from other more experienced staff members in completing a task or a request; rather, we actually encourage staff members to seek assistance when needed. The accountability for completing a service request, however, ultimately rests on the person assigned with the service request."
"We don't want to see most of the service requests being assigned to one or two people making them overworked and others sitting idle because there is nothing to do. We always have more work than we can do in a given day, and it is a good thing. If we don't have more work than we can do, then we might have no work at all."
"We want everyone to be busy at work and contribute. Finding who has been assigned with how many tickets will let us distribute the workload in a fair and balanced way. We must be able to distribute the workload fairly."
"Well, for example, I'm sure we don't want one or two people do all the heavy lifting—complete all the P-1 or P-2 requests—while others do all the low-priority requests. So I think there should be a fair balance of tickets for each assignee in terms of priority too. I think it would be fair for each assignee to contribute with some high-priority requests while enjoying some 'light' work doing some low-priority requests as applicable."

Code: Conflict resolution
Description: Participants make general comments on how conflicts within the group are resolved.
Examples: "The team engages in extensive discussion during team meetings, and everyone gets a chance to contribute and showcase their weekly accomplishments no matter how small or insignificant they might sound. Disagreement is expressed with courtesy. Criticism is constructive and is oriented toward problem solving and removing obstacles."

Code: Collaboration
Description: Participant makes general comments on teamwork and how the group members collaborate.
Examples: "Sometimes I can be quite shy and not ask for help from others. So it really makes my day when Ryan or Joanna or someone else from the team comes by to my desk and asks if they can be of any assistance. I also stop by their cube for a quick chat whenever they are free. One day, I was trying to fix a query for more than two hours by myself but could not. Then I sat down with Ryan for less than ten minutes and he readily pointed out the error."
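Several participants in the codebook above describe one visualization concretely: a column chart with red bars for tickets created and green bars for tickets closed, per month. The following is a minimal matplotlib sketch of that chart; the monthly counts are illustrative placeholders, not data from the study.

```python
# Minimal sketch of the monthly created-vs-closed bar chart the participants
# describe (red = created, green = closed). The counts below are illustrative
# placeholders, not data from the study.
import matplotlib.pyplot as plt
import numpy as np

months = ["Jul 2014", "Aug 2014", "Sep 2014", "Oct 2014"]
created = [42, 51, 38, 47]   # hypothetical ticket counts
closed = [40, 48, 41, 44]

x = np.arange(len(months))
width = 0.4

fig, ax = plt.subplots()
ax.bar(x - width / 2, created, width, color="red", label="Created")
ax.bar(x + width / 2, closed, width, color="green", label="Closed")
ax.set_xticks(x)
ax.set_xticklabels(months)
ax.set_ylabel("Number of tickets")
ax.set_title("Tickets created and closed per month")
ax.legend()
plt.tight_layout()
plt.show()
```

The same chart could be repeated per priority level (e.g., P-1 only), matching the participants' preference to keep the dashboard uncluttered while exposing the most critical tickets.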
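The "Sources of service requests" and "Processing service requests" entries describe an online E-Form whose submission automatically creates a JIRA ticket, places it in the New Arrival queue, and notifies the requester of the ticket number. A minimal sketch of that handoff follows, assuming the standard JIRA REST API issue-creation endpoint; the base URL, credentials, and project key are hypothetical placeholders.

```python
# Minimal sketch of the E-Form -> JIRA handoff the participants describe:
# a submitted form becomes a JIRA ticket and the requester is told the
# ticket key. Base URL, credentials, and project key are hypothetical.
import requests

JIRA_BASE = "https://jira.example.org"
AUTH = ("api_user", "api_token")

def create_service_request(summary: str, description: str) -> str:
    """Create a JIRA ticket from a submitted E-Form and return its key."""
    payload = {
        "fields": {
            "project": {"key": "SR"},                   # hypothetical project
            "issuetype": {"name": "Service - Ad Hoc"},  # one of the 11 issue types
            "summary": summary,
            "description": description,
        }
    }
    resp = requests.post(f"{JIRA_BASE}/rest/api/2/issue",
                         json=payload, auth=AUTH, timeout=30)
    resp.raise_for_status()
    return resp.json()["key"]  # the unique ticket number sent to the requester

key = create_service_request("Monthly enrollment report",
                             "Requested via the online E-Form.")
print(f"Your request has been received. Ticket {key} has been created.")
```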


Appendix K

Codes, Sources, Categories, and Themes

Note: I = Interview; SS = Secondary Source (documents and physical artifacts); FN = Field Notes. Categories marked ** were selected for creating dashboards.

Theme: Trend analysis

Categories: Total number of tickets opened and closed per month**; Total number of Priority-1 tickets opened and closed per month**; Total number of Priority-2 tickets opened and closed per month; Total number of Priority-3 tickets opened and closed per month; Total number of Priority-4 tickets opened and closed per month; Total number of Priority-5 tickets opened and closed per month.

Codes (sources): Tickets opened per month (I, SS, FN); Tickets closed per month (I, SS, FN); Priority-1 tickets opened per month (I, SS, FN); Priority-1 tickets closed per month (I, SS, FN); Priority-2 tickets opened per month (I, SS); Priority-2 tickets closed per month (I, SS); Priority-3 tickets opened per month (I, SS); Priority-3 tickets closed per month (I, SS); Priority-4 tickets opened per month (I, SS); Priority-4 tickets closed per month (I, SS); Priority-5 tickets opened per month (I, FN); Priority-5 tickets closed per month (I, FN).

Theme: Monthly Operational Summary

Categories: Issue Type**; Priority**; Issue Status**; Department/Area**; Analytic Service.

Issue Type codes (sources): Access (I, FN); Disruption (I, FN); Feedback (SS, FN); Incident (I, SS, FN); Informative (SS, FN); Inquiry (I, FN); Sub-task (SS, FN); Rejected (I, SS); Ad Hoc (I, SS, FN); Business Intelligence (I, SS); Recurring (I, SS, FN).

Priority codes (sources): Priority-1 (I, SS, FN); Priority-2 (I, SS, FN); Priority-3 (I, SS, FN); Priority-4 (I, SS, FN); Priority-5 (I, SS, FN).

Issue Status codes (sources): Assigned (I, SS, FN); Closed (I, SS, FN); Fulfilled (I, SS); In Development (SS, FN); In Progress (I, SS, FN); Needs Reviewed (I, SS, FN); On Hold (SS); Open (I, SS, FN); Production (SS); Reviewed (I, SS, FN).

Department/Area codes (sources): Department A (I, FN); Department B (SS, FN); Department C (SS); Department D (SS); Department E (I, FN); Department F (SS); Department G (SS); Department H (SS).

Analytic Service codes (sources): Access (SS, FN); Administrative (I); Business Intelligence (I); Codes (I, SS); Dashboard (I, SS, FN); Disruption-Application (SS, FN); Disruption-Data (SS, FN); Disruption-Server (SS, FN); Evaluations (I); Faculty List (I, SS); Feedback (I); Incident-Dashboard (I); Incident-Evaluation (I, FN); Incident-Report (SS, FN); Informative (SS, FN); Inquiry (I); Other (SS, FN); Project (SS); Report (SS); Research (SS); SRTE (SS); Student List (FN); Testing (FN).

Theme: Monthly Workload Distribution Summary

Categories: Assignee**; Assignee vs Issue Type**; Assignee vs Priority**; Assignee vs Issue Status**; Reviewer**; Department/Area vs Issue Type**.

Codes (sources): Tickets per Assignee (I, SS, FN); Tickets per Reviewer (I, SS, FN); Tickets per Assignee and Issue Type (I, SS); Tickets per Assignee and Priority (I, FN); Tickets per Assignee and Issue Status (I, SS); Tickets per Assignee and Analytic Service (I); Tickets per Reviewer and Issue Type (I); Tickets per Reviewer and Priority (SS); Tickets per Reviewer and Issue Status (SS); Tickets per Reviewer and Analytic Service (SS); Tickets per Department/Area and Issue Type (FN); Tickets per Department/Area and Priority (FN); Tickets per Department/Area and Issue Status (FN); Tickets per Department/Area and Analytic Service (FN).


VITA

Sohel M. Imroz

Education

Ph.D. Candidate, Pennsylvania State University, University Park, PA (Dec. 2016)
Department of Learning and Performance Systems

M.S., University of Nebraska at Omaha, Omaha, NE (Aug. 2008)
Management Information Systems

M.B.A., Drake University, Des Moines, IA (Dec. 2001)
Business Administration

B.S., University of Science and Arts of Oklahoma, Chickasha, OK (Aug. 1997)
Computer Science

Teaching Experience

Graduate Teaching Assistant (8/2012 – 5/2013), Department of Learning and Performance Systems, Pennsylvania State University

WF ED 560 – Historical and Philosophical Foundations of Workforce Education (required course, 3 credits). This course investigates the historical, philosophical, and professional foundations of workforce education.

WF ED 597 – Ethics in Workforce Education (required course, 3 credits). The primary focus of this course is on values manifested by individuals and their impact on administrative problem-solving processes in traditional and workforce development settings.

Selected Refereed Publications

Boswell, R. A., & Imroz, S. M. (2013). The AACC leadership competencies: Pennsylvania's views and experiences. Community College Journal of Research and Practice, 37(11), 892-900.

Rothwell, W. J., & Imroz, S. M. (2012). Improving customer care experience: A case study of a large private hospital in Dhaka, Bangladesh. In D. D. Warrick & J. Mueller (Eds.), Lessons in leading change: Learning from real world cases. Oxford, UK: RossiSmith Academic Publishing.

Alzahmi, R., & Imroz, S. M. (2012). A look at factors influencing the UAE education and development system. International Handbook of Academic Research and Teaching, 22, 321-334.

Imroz, S. M. (2012). A conceptual framework for social networking policy in the workplace. In M. Dirani, J. Khalil, J. Wang, J. Gedro, & P. Dorshy (Eds.), Academy of Human Resource Development conference proceedings (pp. 2059-2091). Denver, CO.

Imroz, S. M. (2012). Improving team building in a customer care department: A case study. The 2013 Pfeiffer annual: Consulting (pp. 189-194). San Francisco, CA: Pfeiffer & Co., Inc.

Imroz, S. M. (2012). What is Six Sigma? In W. J. Rothwell (Executive Ed.), J. Lindholm, K. Yarrish, & A. Zabellero (Vol. Eds.), The encyclopedia of human resource management: HR forms and job aids (pp. 295-300). San Francisco, CA: Pfeiffer & Co., Inc.

Imroz, S. M. (2011). Are standardized tests a good measure of student learning? International Handbook of Academic Research and Teaching, 17, 28-33.