TIPA Assessor for ITIL Annexure

Sample Material - Not for Reprint

Upload: itpreneurs

Post on 13-Mar-2016


Page 1: TIPA Assessor for ITIL Annexure


This product includes TIPA, which is used by permission of the Centre de Recherche Public Henri Tudor. All rights reserved.

© Centre de Recherche Public Henri Tudor 2011

The information contained in this classroom material is subject to change without notice.

This material contains proprietary information that is protected by copyright.

No part of this material may be photocopied, reproduced, or translated to another language without the prior consent of ITpreneurs Nederland B.V.

© Copyright 2012 by ITpreneurs Nederland B.V. All rights reserved.

The language used in this course is US English. Our sources of reference for grammar, syntax, and mechanics are The Chicago Manual of Style, The American Heritage Dictionary, and the Microsoft Manual of Style for Technical Publications.

ITIL® is a registered trade mark of the Cabinet Office.

The Swirl logo™ is a trade mark of the Cabinet Office.

All content in italics and quotes © Crown Copyright 2011. Reproduced under licence from Cabinet Office.

All infographics © Crown Copyright 2011. Reproduced under licence from Cabinet Office, unless otherwise noted.

ITIL® Qualification Scheme Copyright © 2011 APM Group. All rights reserved.

Glossaries/Acronyms © Crown Copyright of Cabinet Office.


Contents

PART A: ASSIGNMENT READINGS 1

MODULE 1: TIPA FOR PROCESS ASSESSMENT OF IT SERVICE MANAGEMENT 2

1.4.7: Experiencing the TIPA Rating Scale 2

Assignment 3: Experiencing the TIPA Rating Scale 2

MODULE 4: ASSESSMENT PHASE 6

4.2.4: TIPA Process Model 6

Assignment 4: TIPA Process Model 6

4.3.3: Interview Rating Level 1 23

Assignment 5: Interview Rating Exercise at LARIPS – Part 1 23

4.4.4: The Other Process Attributes 29

Assignment 6: Teach-Back The Other Process Attributes 29

4.4.5: Interview Rating Levels 2 to 5 39

Assignment 7: Interview Rating Exercise at LARIPS – Part 2 39

MODULE 5: ANALYSIS PHASE 45

5.2.2: Improvement Recommendations for the Process 45

Assignment 10: SWOTs & Recommendations in Tool T12 45

Assignment 11: LARIPS – SWOTs and Recommendations 47

5.2.4: Improvement Recommendations Across Processes 51

Assignment 12: Overall SWOTs and Recommendations in Tool T16 51

MODULE 6: RESULTS PRESENTATION AND CLOSURE PHASES 53

6.1.2: Assessment Report 53

Assignment 13: Assessment Report template (T17) 53

Assignment 14: LARIPS Assessment Report 85

6.1.3: Results Presentation 92

Assignment 15: Assessment Results Presentation template (T18) 92

Assignment 16: Presenting the Assessment Results at LARIPS 105

7.4: PAM Extract 111

Assignment 17: TIPA PAM for Change Management process 111

PART B: ANSWERS 119

MODULE 1: TIPA FOR PROCESS ASSESSMENT OF IT SERVICE MANAGEMENT 120

1.3.1: What Is Process Maturity? 120

Assignment 1: Match the Process Maturity Level name to the definition 120

1.3.2: What Are Maturity Levels? 121

Assignment 2: Match the Process Maturity Name to the Process Maturity Level 121


1.4.7: Experiencing the TIPA Rating Scale 122

Assignment 3: Experiencing the TIPA Rating Scale 122

MODULE 4: ASSESSMENT PHASE 127

4.2.4: TIPA Process Model 127

Assignment 4: TIPA Process Model 127

4.3.3: Interview Rating Level 1 128

Assignment 5: Interview Rating Exercise at LARIPS – Part 1 128

4.4.5: Interview Rating Levels 2 to 5 131

Assignment 7: Interview Rating Exercise at LARIPS – Part 2 131

4.5: Process Rating 136

Assignment 8: Process Rating at LARIPS 136

4.6: Maturity Level Determination 140

Assignment 9: Maturity Level Determination 140

MODULE 5: ANALYSIS PHASE 141

5.2.2: Improvement Recommendations for the Process 141

Assignment 11: LARIPS – SWOT and Recommendations 141

MODULE 6: RESULTS PRESENTATION AND CLOSURE PHASES 144

6.1.2: Assessment Report 144

Assignment 14: LARIPS – Assessment Report 144

MODULE 7: SIMULATION 154

7.4: PAM Extract 154

Assignment 17: TIPA PAM for Change Management process 154

7.5: Instructions

Assignment 19: Simulation Instructions – Rate and Analyze the Process 162

MODULE 8: MOCK EXAM 166


PART A: ASSIGNMENT READINGS


1.4.7: EXPERIENCING THE TIPA RATING SCALE

ASSIGNMENT 3: Service Level Management

Rating of an Interview

Introduction

Assessor

Good morning. I have been requested by your organization, IT Global Services Inc., to perform a process assessment of IT Service Management processes. The purpose of this assessment is to identify the current practices, look for gaps, and determine areas of improvement.

So, we are here today to talk with you about Service Level Management (SLM).

Perhaps you could introduce yourself to help me understand your background.

Interviewee I have worked for IT Global Services Inc. for 6 years. I am now the Service Level Manager.

SLM.1 Determine, Document and Agree on Service Level Requirements

Assessor The first activity I would like to cover is related to Service Level Requirements (SLRs). How are service level targets defined and gathered? Who does that? Your boss? And is it based upon a document describing service levels by service?

Interviewee

Well, we have a set of requirements that are quite common and requested by customers, such as availability of applications, security (in the sense of confidentiality), authorized volume of transactions, and these types of things. Usually, it is the account manager who collects requirements from his customer. Then, he comes back to my boss or me.

Assessor Do you have any specific requirements for a particular service or a new service to be developed?

Interviewee Of course! Service Level Requirements for each service are described in a document. The basis is the same for all of them, but the specifics of each service require adapting and completing this basic set of requirements.

Assessor Do you consult other IT Service Management (ITSM) processes, such as Incident Management, Capacity Management, or Availability Management, to define realistic service level targets that can be effectively achieved?

Interviewee Yes. When reviewing or defining new Service Level Requirements, we check whether the changed or new service level targets requested by the customer are acceptable within the existing timescales and targets coming from Incident, Capacity, Security, and Availability Management.

Assessor What happens in the case of unacceptable targets, for example, in incident resolution targets? Do you accept them anyway because the customer asks for them?

Interviewee To be honest, yes, we accept them. But we explain to the customer that we cannot guarantee this level of service, or that it will only become effective after internal organizational changes, without any detail about when. Because we never include penalties in SLAs, it is not so important.

SLM.2 Negotiate, Document and Agree upon SLAs for Operational Services

Assessor The next activity I'd like to look at is how SLAs are established. If I understood correctly, the account managers prepare a draft that is then reviewed and agreed with the customers?

Interviewee

SLAs are defined and negotiated by the account managers. I believe the understanding of how to manage the SLA definition and negotiation phases differs from one account manager to another. But SLAs are established as drafts with information related to the Service Level Requirements discussed, and then agreed between us and each customer.


Assessor And do you have SLA templates or drafts? Can you show one of them?

Interviewee There is no common format for the agreement, so each account manager describes service levels differently. Usually, each account manager works with his own “template,” which he reuses each time.

Assessor What type of information can an SLA contain? Can you show me one?

Interviewee I do not have an SLA with me now, but I can send you one tomorrow.

Assessor Who takes part in negotiations related to SLAs? And who validates them?

Interviewee

Well, as I said earlier, SLAs are defined and negotiated by account managers. This is performed differently, depending on the individual account manager. But agreements are sometimes unclear, and services are not really specified and described correctly. Even though my boss and I try to validate them internally, it is sometimes difficult to obtain the SLAs from the account managers. They do not systematically consult us and do not discuss the feasibility of service levels.

Assessor So, how do you know if you are able to meet the required level of services?

Interviewee We do not know all the time… There is really a lack of communication and alignment between the account managers and the service delivery and support teams.

Assessor Do you know how many SLAs there are? And for how many customers?

Interviewee We currently have an SLA with most of our customers. I would say that about 90% of our customers have one.

SLM.3 Monitor Service Performance against SLAs

Assessor How are service levels measured? Do you have any dashboards or tools?

Interviewee Well, the tools are a bit limited. We are working with Excel sheets, fed with some availability information from the mainframe system, network, and servers. We have a sort of dashboard. And we also have some information from the Incident Management tool.

Assessor So, can you ensure that all indicators mentioned in the SLA can be followed?

Interviewee Unfortunately, we cannot. The monitoring tools that we use are not sufficient to do that. We do what we can with the available tools, but some of the indicators cannot be monitored.

Assessor When an SLA is defined, does it specify the way it is going to be measured?

Interviewee

I don’t think so. Account managers have limited visibility into how we manage and monitor the service levels of our customers. This can lead us to a situation where services do not meet customer expectations and service requirements. In addition, service delivery often starts before the SLA (and the indicators specified in it) is formally established and agreed by all parties.

Assessor And are you sure the measures collected are good indicators for the SLAs they aim to track?

Interviewee

No, we are not sure. As I said, service delivery often starts before the SLA itself. And we have neither a metrics list nor an appropriate service catalog with well-defined service level indicators. Most of the time, the measures collected in the field become our “usual” measures, and from one contract to the other, account managers tend to “sell” the same type of service, without really “adapting” or changing the way of measuring and monitoring the service.

Assessor What is the frequency of the measurements?

Interviewee Some measures are daily. Most of them are weekly.

Assessor But all these measures are IT-component-oriented and not service-oriented, aren’t they? Is there a mechanism to derive service measures from these IT component measures?


Interviewee Not at all! It is difficult enough to collect these measures, and time-consuming too, without trying to do that. I believe we need more tools and more integration of the existing data flows for that.

SLM.4 Provide Service Reporting to Customers

Assessor Do you regularly report to customers? How? What is the frequency?

Interviewee Yes, every week, we e-mail several service level reports to each customer. These reports usually contain service component metrics and the last service level breaches.

Assessor Do you have a dedicated service report template?

Interviewee No, we do not have one.

Assessor What type of communication is implemented in case of service level breaches?

Interviewee In case of a service level breach, we communicate to the customer every action performed until the breach is resolved. At the end of a breach, we provide the customer with an exception report.

Assessor In general, are your customers satisfied with the level of reporting that you provide them?

Interviewee Yes, they are! Moreover, if what I have explained before is not enough, the account manager can discuss and agree with the customer on another mode of reporting.

SLM.5 Conduct Service Reviews and Discuss Improvements

Assessor Do the account managers regularly discuss the service level achievements of the past period with the customers?

Interviewee

Most of the time, it is done. The account managers meet their customers regularly, every one to three months depending on the customer, to ensure that they are satisfied with the service levels provided. However, these service review meetings can sometimes be cancelled if there is no particular topic to be discussed.

Assessor What topics are usually discussed during these meetings?

Interviewee

It depends on the number of breaches that occurred during the past period. If no breach occurred, the meeting can be quite short, since there is no tricky point to discuss; in that case it is rather an open discussion with the customer to maintain a good relationship. On the other hand, if one or more breaches had to be dealt with, the main objective of the meeting is to explain the reasons for those breaches, show that we did all we could to close the gaps quickly, and identify the actions to be taken to prevent them from recurring.

Assessor And do you try to anticipate any future event that can impact your ability to deliver the agreed service levels?

Interviewee

Yes, it is a topic that is systematically discussed with the customers. Some of the customers are more proactive than others and instigate the discussion about future events that could impact the service levels. Otherwise, the account manager has to initiate this discussion in order to be prepared for these predicted future events.

Assessor Whatever the situation, do you discuss the potential service level improvements with your customers?

Interviewee Not exactly… Unless these improvements are requested by our customer, we try to maintain the status quo regarding the service levels. It is easier to manage… I am sure you understand that.


SLM.6 Review SLAs, OLAs, and UCs

Assessor In addition to those operational reviews on the service level achievements, are there any periodic and formal reviews of SLAs?

Interviewee Account managers do not regularly meet their customers to review the SLAs.

Assessor And when do they review SLAs with their respective customers?

Interviewee If everything is going well, never! It is only in case of a major problem, or if we have a specific request from our customer asking for such a review.

Assessor And, when they happen, are you involved in these reviews and how?

Interviewee Well, I am involved most of the time. Obviously, our manager asks us to pay particular attention to what happens with the very important SLAs.

Assessor And how are your boss and you (if you are not involved) informed about these reviews? Are there any SLA review reports?

Interviewee No, there is no SLA review report. Depending on the account manager in charge, there may be an e-mail communicating the information about the SLA changes… or not.

Assessor And what about Operational Level Agreements (OLAs)? Do you have such agreements to ensure that your internal teams' objectives are aligned with the SLA targets?

Interviewee As I said previously, we consult some other ITSM processes to define our SLRs and SLAs, but we do not have any OLAs to formalize our internal discussions.

Assessor And the same question about your suppliers: do you agree on service level targets in your supplier contracts?

Interviewee Yes, we do! Our supplier contracts contain information that supports the SLAs with our customers, such as the service level targets, the metrics used, and what to do in case of a service level breach.

Assessor How do you ensure that your supplier contracts are kept aligned with your SLAs when they are reviewed with the customers?

Interviewee

Unlike the SLAs, the supplier contracts are renegotiated at least once a year. It is only at that time that we can align our supplier contracts with the new or changed SLAs. But, again, these situations are special cases since we have no systematic SLA reviews and the new SLAs are quite similar to the existing ones.


4.2.4: TIPA PROCESS MODEL

ASSIGNMENT 4

TIPA - ITIL 2011 PAM Process Assessment Model

Extract

1 Description of processes

1.1 SD Service Design

1.1.1 Service Level Management

Process ID: SLM
Process Name: Service Level Management

Process Purpose

The purpose of the Service Level Management process is to ensure that all current and planned IT services are delivered to agreed achievable targets. [ITIL 2011 - Service Design: p106]

NOTE 1: This is accomplished through a constant cycle of negotiating, agreeing, monitoring, reporting on and reviewing IT service targets and achievements, and through instigation of actions to correct or improve the level of service delivered.

Process Expected Results

As a result of successful implementation of the Service Level Management process:

1. IT and the customers have a common, clear and unambiguous understanding of the levels of service to be delivered;

2. IT services are delivered at levels agreed with the customers;

3. A close relationship with business and customers is maintained (in conjunction with Business Relationship Management);

4. Specific and measurable targets are developed for all IT services;

5. Customer satisfaction regarding service levels is monitored and improved;

6. The levels of service delivered are subject to cost-effective continual improvement.

Base Practices

SLM.BP1: Determine, document and agree on Service Level Requirements (SLRs)

Determine, document and agree on service level requirements for services being developed, changed or procured. The SLRs should be an integral part of the overall service design criteria, which also include the functional or “utility” specifications. [ITIL 2011 - Service Design: p112] [Expected Result 1]

NOTE 2: Representatives of other processes need to be consulted to determine which targets can be realistically achieved.

SLM.BP2: Negotiate, document and agree upon Service Level Agreements (SLAs) for operational services

Draft, negotiate, and then agree on SLAs, detailing the service level targets to be achieved and specifying the responsibilities of both the IT service provider and the customer. [ITIL 2011 - Service Design: p113] [Expected Result 1, 2, 3]


SLM.BP3: Monitor service performance against SLAs

Monitor and measure service performance achievements of all operational services against targets within SLAs. [ITIL 2011 - Service Design: p114] [Expected Result 1, 4]

NOTE 3: Monitoring capabilities are established, reviewed and upgraded to assist with the service performance measurement.

SLM.BP4: Provide service reporting to customers

Produce and communicate service reports to customers, based on agreed mechanisms, report formats, and intervals (service achievement reports, operational reports and, where possible, exception reports should be produced whenever an SLA has been broken). [ITIL 2011 - Service Design: p116] [Expected Result 1, 2, 3]

NOTE 4: Service Reports should be reviewed with the customer on a regular basis.

SLM.BP5: Conduct service reviews and discuss improvements

Hold review meetings on a regular basis with customers to review the service achievement in the past period, to anticipate any issues for the coming period and to identify potential improvements. [ITIL 2011 - Service Design: p116] [Expected Result 1, 2, 3, 6]

SLM.BP6: Review SLAs, OLAs, and UCs

Review SLAs and service scope periodically (at least annually) to ensure that they are still aligned to business needs and strategy. Ensure that service levels defined in Operational Level Agreements (OLAs) and Underpinning Contracts (UCs) are kept aligned. [ITIL 2011 - Service Design: p118] [Expected Result 1, 3, 6]

NOTE 5: SLRs and SLAs should be reviewed jointly.

NOTE 6: SLM should assist Supplier Management with the review of all supplier agreements and UCs to ensure that targets are aligned with SLA targets.

SLM.BP7: Handle service level complaints and compliments

Log and handle service level complaints originating from users and customers, from the time they are made to the time they have been dealt with. Log and communicate compliments to the relevant parties. [ITIL 2011 - Service Design: p120] [Expected Result 2, 3, 5, 6]

NOTE 7: This work represents a significant contribution to the overall customer satisfaction work being done in the Business Relationship Management process.

SLM.BP8: Monitor and take into account customer satisfaction

Monitor customer perception of service levels provided and take it into account in the SLA, OLA, and UC reviews and the Service Improvement Plan (SIP). [ITIL 2011 - Service Design: p117] [Expected Result 1, 3, 5, 6]

SLM.BP9: Instigate service level improvements

Analyze and use the results of service and SLA reviews, complaints, compliments and customer perception on service levels to instigate service improvements for implementation through a SIP (via the Seven-Step Improvement process). [ITIL 2011 - Service Design: p117] [Expected Result 2, 4, 5, 6]


Input Work Products

ID Name Expected results and related BPs

02_21 Patterns of Business Activity (PBA) catalogue [Expected Result 1, 2, 3] [SLM.BP1, 2]

02_22 User Profile (UP) catalogue [Expected Result 1, 2, 3] [SLM.BP1, 2]

08_08 Service options [Expected Result 1, 2, 3] [SLM.BP2, 6]

01_03 Service portfolio [Expected Result 1, 3, 6] [SLM.BP1, 2, 6, 9]

08_02 Service Level Requirements (SLR) [Expected Result 1, 3, 4, 5, 6] [SLM.BP2, 6, 9]

08_01 Service Level Agreement (SLA) [Expected Result 1, 2, 3, 4, 5, 6] [SLM.BP3, 4, 5, 6, 7, 9]

08_03 Operational Level Agreement (OLA) [Expected Result 1, 2, 4, 5, 6] [SLM.BP2, 5, 6, 9]

08_04 Underpinning Contract (UC)/supplier agreement [Expected Result 1, 2, 4, 5, 6] [SLM.BP2, 5, 6, 9]

06_26 Service supplier performance reports [Expected Result 1, 2, 4, 6] [SLM.BP3, 4, 5, 6, 9]

07_01 Customer complaints and compliments [Expected Result 1, 3, 5, 6] [SLM.BP1, 2, 5, 6, 8, 9]

06_05 Customer satisfaction survey [Expected Result 1, 3, 5, 6] [SLM.BP1, 2, 4, 5, 6, 8, 9]

Output Work Products

ID Name Expected results and related BPs

08_02 Service Level Requirements (SLR) [Expected Result 1] [SLM.BP1]

08_01 Service Level Agreement (SLA) [Expected Result 1, 3, 6] [SLM.BP2, 6]

08_03 Operational Level Agreement (OLA) [Expected Result 1, 4, 6] [SLM.BP6, 9]

08_04 Underpinning Contract (UC)/supplier agreement [Expected Result 1, 4, 6] [SLM.BP6, 9]

06_10 Service report [Expected Result 1, 2, 3] [SLM.BP3, 4]

06_31 Service review meeting minutes [Expected Result 1, 2, 3, 6] [SLM.BP5]

06_11 SLA review meeting minutes [Expected Result 1, 3, 6] [SLM.BP6]

07_01 Customer complaints and compliments [Expected Result 3, 5] [SLM.BP7]

06_05 Customer satisfaction survey [Expected Result 1, 3, 5] [SLM.BP8]

03_01 Service Improvement Plan (SIP) [Expected Result 3, 5, 6] [SLM.BP9]
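The work-product tables above map each input and output to the expected results and base practices it supports, which is exactly how an assessor decides what evidence to request. As a minimal sketch of that mapping as a data structure (the class and field names are illustrative, not part of the TIPA PAM):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class WorkProduct:
    """One row of a PAM work-product table."""
    wp_id: str                         # e.g. "08_01"
    name: str
    expected_results: tuple[int, ...]  # expected-result numbers it supports
    base_practices: tuple[int, ...]    # SLM.BPn indices that use or produce it

# Two rows transcribed from the SLM output table above
SLM_OUTPUTS = [
    WorkProduct("08_01", "Service Level Agreement (SLA)", (1, 3, 6), (2, 6)),
    WorkProduct("03_01", "Service Improvement Plan (SIP)", (3, 5, 6), (9,)),
]

def evidence_for_bp(products, bp):
    """List work products an assessor can request as evidence for a base practice."""
    return [wp.name for wp in products if bp in wp.base_practices]
```

For example, `evidence_for_bp(SLM_OUTPUTS, 6)` returns the SLA, since SLM.BP6 (review SLAs, OLAs, and UCs) produces an updated SLA as an output work product.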

1.2 ST Service Transition

1.2.1 Change Management

Process ID: CHGM
Process Name: Change Management

Process Purpose

The purpose of the Change Management process is to control the lifecycle of all changes, enabling beneficial changes to be made with minimum disruption to IT services. [ITIL 2011 - Service Transition: p61]

Sample

Mate

rial -

Not for

Rep

rint

Page 13: TIPA Assessor for ITIL Annexure

Copyright © 2012, ITpreneurs Nederland B.V. All rights reserved. 9

Annexure Book: Part A

Process Expected Results

As a result of successful implementation of the Change Management process:

1. All change requests are addressed;

2. Risk level, business value, and urgency of change requests are understood and taken into account during the change lifecycle;

3. For each change, change documentation is recorded and maintained all along its lifecycle;

4. Changes are successfully implemented after formal approval by the appropriate change authority;

5. Actions are taken to make sure that all modifications to configuration items resulting from changes are recorded in the CMS.

Base Practices

CHGM.BP1: Review and filter the change requests

Review the Requests for Change (RFCs), filtering out those that seem to be impractical, duplicates of previous RFCs, or incomplete. [ITIL 2011 - Service Transition: p73] [Expected Result 1, 2, 3]

NOTE 1: Any suggestion that Change Management considers to be strategic should be immediately referred to Service Portfolio Management. This means that Service Portfolio Management and Change Management should define thresholds for what constitutes a strategic issue.

CHGM.BP2: Document and maintain change details in a change record

Record the changes with appropriate information and keep change documentation up to date as changes progress throughout their lifecycle. [ITIL 2011 - Service Transition: p69] [Expected Result 1, 3]

NOTE 2: A change record is created/updated on the basis of one or more RFC(s).

CHGM.BP3: Assess impact and resources for each change

Understand the potential impact of (successful and unsuccessful) changes on the infrastructure, services and business, and assess the resources required to implement the changes. [ITIL 2011 - Service Transition: p73] [Expected Result 2, 4]

NOTE 3: The use of the seven Rs of Change Management may be helpful to make this impact analysis: Raised, Reason, Return, Risks, Resources, Responsible, and Relationship.

NOTE 4: If needed, submit a request for evaluation to trigger the Change Evaluation process. If that evaluation is not needed, then the change will be evaluated by the appropriate change authority.

CHGM.BP4: Authorize the change build and test

Authorize changes (through their RFCs) by the appropriate change authority, and communicate the decision (go/no go) to stakeholders. [ITIL 2011 - Service Transition: p78] [Expected Result 2, 3, 4]

NOTE 5: The appropriate change authority can be the CAB, the eCAB, or the board of directors, depending on the type, size, risk and potential business impact of the change.

CHGM.BP5: Schedule authorized changes with the business

Include the authorized changes in the schedule of changes, taking account of the business constraints. [ITIL 2011 - Service Transition: p77] [Expected Result 2, 4]

CHGM.BP6: Coordinate the change build and test

Coordinate the design, build, and tests of the changes in collaboration with technical teams external to Change Management. [ITIL 2011 - Service Transition: p79] [Expected Result 3, 4]

NOTE 6: Change Management is responsible for the coordination of changes, not for the implementation itself.


CHGM.BP7: Authorize the change deployment

Evaluate the design, build, and testing of the change to ensure that risks have been managed (i.e. remediation) and that predicted and tested performance match the business requirements. [ITIL 2011 - Service Transition: p79] [Expected Result 2, 4]

CHGM.BP8: Coordinate the change deployment

Coordinate the resources and capabilities required to deploy the change as scheduled. [ITIL 2011 - Service Transition: p79] [Expected Result 1, 3, 4]

CHGM.BP9: Review the change implementation

Review the results of the change to ensure that the change has had the desired benefits and met its objectives with minimized side effects. [ITIL 2011 - Service Transition: p79] [Expected Result 2, 3, 4]

NOTE 7: A Post Implementation Review (PIR) should be performed to confirm the effectiveness of the solution prior to closure.

CHGM.BP10: Close the changes

Verify that the entire documentation related to the change is available in the change record, update the Configuration Management System (CMS), and then close the RFCs and related change records, regardless of the change outcome (implemented or abandoned). [ITIL 2011 - Service Transition: p79] [Expected Result 3, 5]
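Read in sequence, CHGM.BP1 through BP10 describe a lifecycle that every change record moves through, from filtering an RFC to closing the record. A change-management tool could enforce this as a small state machine; the sketch below is an illustration distilled from the base practices (the state names and the strictly linear ordering are my simplification, not defined by the PAM):

```python
# Simplified linear lifecycle distilled from CHGM.BP1-BP10 (illustrative).
LIFECYCLE = [
    "filtered",          # BP1: RFCs reviewed and filtered
    "recorded",          # BP2: change record created/updated
    "assessed",          # BP3: impact and resources assessed
    "authorized_build",  # BP4: build and test authorized
    "scheduled",         # BP5: placed on the change schedule
    "built_and_tested",  # BP6: build and test coordinated
    "authorized_deploy", # BP7: deployment authorized
    "deployed",          # BP8: deployment coordinated
    "reviewed",          # BP9: post-implementation review done
    "closed",            # BP10: records completed, CMS updated
]

def advance(state: str) -> str:
    """Move a change record to the next lifecycle state; 'closed' is terminal."""
    if state == "closed":
        raise ValueError("change already closed")
    return LIFECYCLE[LIFECYCLE.index(state) + 1]
```

In practice a change may also be abandoned at several points (BP10 applies whether the change was implemented or abandoned), so a real tool would add an "abandoned" terminal state reachable from most intermediate states.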

Input Work Products

ID Name Related expected results and BPs

03_27 Policy and strategies for change and release [Expected Result 1, 2, 3, 4] [CHGM.BP1, 2, 3, 5]

07_02 Request for Change (RFC) [Expected Result 1, 3] [CHGM.BP1]

02_26 Change proposal [Expected Result 2, 4] [CHGM.BP3, 4]

03_10 Transition plan [Expected Result 2, 3, 4] [CHGM.BP5, 6, 8]

03_11 Release and deployment plans [Expected Result 2, 3, 4] [CHGM.BP5, 6, 8]

03_12 Test plan [Expected Result 2, 3, 4] [CHGM.BP5, 6, 8]

03_14 Remediation plan [Expected Result 2, 3, 4] [CHGM.BP5, 8]

03_15 Change Schedule (CS) [Expected Result 1, 3, 4, 5] [CHGM.BP5, 6, 8]

03_16 Projected Service Outage (PSO) [Expected Result 1, 3, 4, 5] [CHGM.BP5, 6, 8]

02_27 Change models [Expected Result 2, 4] [CHGM.BP3]

06_32 Change evaluation report [Expected Result 2, 4, 5] [CHGM.BP4, 9]

05_05 Configuration Management System (CMS) [Expected Result 2, 4, 5] [CHGM.BP3, 10]

02_09 Configuration baseline [Expected Result 2, 4, 5] [CHGM.BP3, 10]

06_22 Change test report [Expected Result 2, 4] [CHGM.BP4, 7, 9]

Sample

Mate

rial -

Not for

Rep

rint

Page 15: TIPA Assessor for ITIL Annexure

Copyright © 2012, ITpreneurs Nederland B.V. All rights reserved. 11

Annexure Book: Part A

Output Work Product

ID Name Related expected results and BPs

05_11 Change documents and records [Expected Result 1, 2, 3, 5] [CHGM.BP1, 3, 4, 7, 10]

03_15 Change Schedule (CS) [Expected Result 1, 2, 4] [CHGM.BP5]

03_16 Projected Service Outage (PSO) [Expected Result 1, 2, 4] [CHGM.BP5]

03_14 Remediation plan [Expected Result 1, 3] [CHGM.BP1]

06_03 Post implementation report [Expected Result 2, 3, 4] [CHGM.BP9]

1.3 SO Service Operation

1.3.1 Incident Management

Process ID: INCM
Process Name: Incident Management

Process Purpose

The purpose of the Incident Management process is to restore normal service operation as quickly as possible within agreed levels of service quality, and minimize the adverse impact on business operations. [ITIL 2011 - Service Operation: p73]

NOTE 1: “Normal service operation” is defined as an operational state where services and CIs are performing within their agreed service and operational levels.

Process Expected Results

As a result of successful implementation of the Incident Management process:

1. Incidents and their resolution are documented;

2. Incident management priorities are aligned with those of the business;

3. Incidents are resolved within agreed service levels to restore the normal service operation;

4. Actions are taken to minimize impact on the business;

5. Incidents are tracked through each stage of their lifecycle;

6. Interested parties are kept informed of incidents’ progress and corresponding service level targets.

Base Practices

INCM.BP1: Detect and log the incidents

Record relevant information on the incident, regardless of how it is logged, or reopen an existing incident (in accordance with reopening rules). [ITIL 2011 - Service Operation: p76; 83] [Expected Result 1, 5]

INCM.BP2: Categorize the incidents

Categorize the incident based on a set of commonly understood incident categories. [ITIL 2011 - Service Operation: p76] [Expected Result 1, 3]

INCM.BP3: Prioritize the incidents

Prioritize the incident according to commonly understood prioritizing criteria (such as business impact and urgency). [ITIL 2011 - Service Operation: p79] [Expected Result 1, 2, 3, 4]
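BP3’s impact-and-urgency criteria are often operationalized as a priority matrix. The sketch below is a hypothetical illustration; the category labels and priority codes are assumptions for the example, not part of the TIPA process model.

```python
# Hypothetical impact/urgency priority matrix for INCM.BP3.
# Labels and priority codes (1 = most urgent) are illustrative assumptions.
PRIORITY_MATRIX = {
    ("high", "high"): 1, ("high", "medium"): 2, ("high", "low"): 3,
    ("medium", "high"): 2, ("medium", "medium"): 3, ("medium", "low"): 4,
    ("low", "high"): 3, ("low", "medium"): 4, ("low", "low"): 5,
}

def prioritize(impact: str, urgency: str) -> int:
    """Return a priority code for an incident from its impact and urgency."""
    return PRIORITY_MATRIX[(impact, urgency)]
```

Such a matrix makes the prioritizing criteria "commonly understood" because every combination of business impact and urgency maps to exactly one agreed priority.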

INCM.BP4: Provide initial diagnosis and support for the incidents

Assess the incident details to find a solution to restore service operation, via a degraded mode or a temporary resolution if needed. [ITIL 2011 - Service Operation: p80] [Expected Result 1, 3, 4, 5]


NOTE 2: Diagnosis scripts and existing information on incidents, problems, known errors and changes should be used to provide the initial support to a new incident.

NOTE 3: If BP4 finds a solution that restores service operation, then INCM.BP5, 6, and 7 are not applicable.

INCM.BP5: Escalate the incidents to specialized support teams or to higher levels of authority if needed

Route the incident to the appropriate level of support, i.e. second line, or third line if needed. Escalate the incident to a higher hierarchical level if the SLA is broken or the potential impact on the business is high. [ITIL 2011 - Service Operation: p80] [Expected Result 2, 3, 4, 5, 6]

INCM.BP6: Investigate and diagnose the incidents

Have the appropriate support line(s) analyze and investigate the incident if first-line support has failed to restore the service. [ITIL 2011 - Service Operation: p82] [Expected Result 3, 5]

NOTE 4: The investigation should include actions such as:

- establishing exactly what has gone wrong or what is being sought by the user,

- identifying any event that could have triggered the incident,

- understanding the chronological order of events,

- confirming the full impact of the incident, including the number and range of users affected,

- searching incident/problem records and/or known error databases (KEDBs) or manufacturers’/suppliers’ error logs or knowledge database.

INCM.BP7: Select the best potential resolution

Identify and test potential resolutions to ensure that the one chosen will fully restore the service while preventing side effects and adverse impact on the business. [ITIL 2011 - Service Operation: p82] [Expected Result 1, 3, 4]

INCM.BP8: Keep incident records up-to-date

Track the incident statuses until closure and update incident records to document resolution progress. [ITIL 2011 - Service Operation: p75; 82] [Expected Result 1, 5]

INCM.BP9: Implement the incident resolution

Implement the incident resolution that enables business activities to resume. [ITIL 2011 - Service Operation: p82] [Expected Result 2, 3, 4]

INCM.BP10: Communicate on incident resolution progress

Communicate the incident resolution progress (particularly the timescales for all incident-handling stages) and any service level breaches to all impacted parties. [ITIL 2011 - Service Operation: p73; 74] [Expected Result 4, 6]

INCM.BP11: Close the incidents

Check that the incident is fully resolved and that the users are satisfied. Then close the incident if agreed. [ITIL 2011 - Service Operation: p82] [Expected Result 1, 5, 6]
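Taken together, BP1, BP8 and BP11 imply that each incident moves through a controlled lifecycle: closure happens only after resolution, and reopening is governed by explicit rules. A minimal state-machine sketch, assuming illustrative status names (actual statuses are tool- and organization-specific):

```python
from enum import Enum

class Status(Enum):
    LOGGED = "logged"            # BP1: incident detected and logged
    IN_PROGRESS = "in progress"  # BP4-BP7: diagnosis and resolution
    RESOLVED = "resolved"        # BP9: resolution implemented
    CLOSED = "closed"            # BP11: closed after user confirmation

# Allowed transitions; reopening a closed incident (per BP1's reopening
# rules) returns it to IN_PROGRESS. These transitions are assumptions.
TRANSITIONS = {
    Status.LOGGED: {Status.IN_PROGRESS},
    Status.IN_PROGRESS: {Status.RESOLVED},
    Status.RESOLVED: {Status.CLOSED, Status.IN_PROGRESS},
    Status.CLOSED: {Status.IN_PROGRESS},  # reopen
}

def transition(current: Status, new: Status) -> Status:
    """Move an incident to a new status, rejecting illegal jumps (BP8)."""
    if new not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current.value} -> {new.value}")
    return new
```

Enforcing transitions this way is one way a tool can guarantee that incidents are "tracked through each stage of their lifecycle" (Expected Result 5).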


Input Work Product

ID Name Expected results and related BPs

05_06 Event record [Expected Result 1, 3, 5] [INCM.BP1, 6]

05_05 Configuration Management System (CMS) [Expected Result 1, 3, 4, 5] [INCM.BP6, 7]

08_01 Service Level Agreement (SLA) [Expected Result 1, 2, 3, 5, 6] [INCM.BP3, 5, 10]

05_03 Known Error Database (KEDB) [Expected Result 1, 3, 4, 5] [INCM.BP4, 6]

05_04 Incident knowledge base [Expected Result 3, 4, 5] [INCM.BP6]

05_02 Problem knowledge base [Expected Result 3, 4, 5] [INCM.BP6]

01_02 Incident management tool [Expected Result 1, 3, 5, 6] [INCM.BP1, 2, 10]

02_06 Incident Model [Expected Result 1, 2, 3, 5] [INCM.BP2, 3, 6]

02_07 Incident categories [Expected Result 1, 3] [INCM.BP2]

Output Work Product

ID Name Expected results and related BPs

05_07 Incident record [Expected Result 1, 2, 3, 4, 5, 6] [INCM.BP1, 2, 3, 4, 9, 10, 11]

05_04 Incident knowledge base [Expected Result 1, 3, 4, 5] [INCM.BP9, 11]

01_02 Incident management tool [Expected Result 1, 5] [INCM.BP1, 8]

02_07 Incident categories [Expected Result 1, 3] [INCM.BP2]

06_05 Customer satisfaction survey [Expected Result 6] [INCM.BP10]

07_02 Request for Change (RFC) [Expected Result 3, 5] [INCM.BP9]


II Process maturity indicators for levels 1 to 5

II.2 FOREWORD

This section presents the process maturity indicators related to the process attributes associated with maturity levels 1 to 5 defined in part 2 of ISO/IEC 15504. Process maturity indicators are the means of achieving the capabilities addressed by the considered process attributes. Evidence of process maturity indicators supports the judgment of the degree of achievement of the process attribute.

II.3 LEVEL 1: PERFORMED PROCESS

PA 1.1: Process Performance attribute

The Process Performance attribute is a measure of the extent to which the process purpose is achieved.

As a result of full achievement of this attribute:

a) The process achieves its defined expected results.

Generic Practices for PA 1.1

GP 1.1.1 Achieve the process expected results [PA 1.1 : a]
• Perform the intent of the base practices.
• Produce work products that evidence the process expected results.

NOTE: The assessment of a performed process is based on process Base Practices and Work Products, which are defined in the previous section of this document.

Generic Work Products for PA 1.1…

II.4 LEVEL 2: MANAGED PROCESS

The performed process is now implemented in a managed fashion (planned, monitored and adjusted) and its work products are appropriately established, controlled and maintained.

The following attributes of the process demonstrate the achievement of this level:

PA 2.1: Performance Management attribute

The Performance Management attribute is a measure of the extent to which the performance of the process is managed.

As a result of full achievement of this attribute:

a) Objectives for the performance of the process are identified.

b) Performance of the process is planned and monitored.

c) Performance of the process is adjusted to meet plans.

d) Responsibilities and authorities for performing the process are defined, assigned and communicated.

e) Resources and information necessary for performing the process are identified, made available, allocated and used.

f) Interfaces between the involved parties are managed to ensure both effective communication and clear assignment of responsibility.


Generic Practices for PA 2.1

GP 2.1.1 Identify the objectives for the performance of the process [PA 2.1 : a]

NOTE: Performance objectives may include: (1) quality of the artifacts produced, (2) process cycle time or frequency, (3) resource usage and (4) boundaries of the process.

• Performance objectives are identified based on process requirements and customer requirements.
• The scope of the process performance is defined.
• Assumptions and constraints are considered when identifying the performance objectives.

GP 2.1.2 Plan and monitor the performance of the process to fulfill the identified objectives [PA 2.1 : b]
• Plan(s) for the performance of the process are developed. The process performance cycle is defined.
• Key milestones for the performance of the process are established.
• Estimates for process performance attributes are determined and maintained.
• Process activities and tasks are defined.
• Schedule is defined and aligned with the approach to performing the process.
• Process work product reviews are planned.
• The process is performed according to the plan(s).
• Process performance is monitored to ensure planned results are achieved.

GP 2.1.3 Adjust the performance of the process [PA 2.1 : c]
• Process performance issues are identified.
• Appropriate actions are taken when planned results and objectives are not achieved.
• The plan(s) are adjusted, as necessary.
• Rescheduling is performed as necessary.

GP 2.1.4 Define responsibilities and authorities for performing the process [PA 2.1 : d]
• Responsibilities, commitments and authorities to perform the process are defined, assigned and communicated.
• Responsibilities and authorities to verify process work products are defined and assigned.
• The needs for process performance experience, knowledge and skills are defined.

GP 2.1.5 Identify and make available resources to perform the process according to plan [PA 2.1 : e]
• The human and infrastructure resources necessary for performing the process are identified, made available, allocated and used.
• The information necessary to perform the process is identified and made available.

GP 2.1.6 Manage the interfaces between involved parties [PA 2.1 : f]
• The individuals and groups involved in the process performance are determined.
• Responsibilities of the involved parties are assigned.
• Interfaces between the involved parties are managed.
• Communication is assured between the involved parties.
• Communication between the involved parties is effective.

Generic Work Products for PA 2.1…

PA 2.2: Work Product Management attribute

The Work Product Management attribute is a measure of the extent to which the work products produced by the process are appropriately managed.

NOTE 1: Requirements for documentation and control of work products may include requirements for the identification of changes and revision status, approval and re-approval of work products, and the creation of relevant versions of applicable work products available at points of use.

NOTE 2: The work products referred to in this clause are those that result from the achievement of the process expected results.


As a result of full achievement of this attribute:

a) Requirements for the work products of the process are defined.

b) Requirements for documentation and control of the work products are defined.

c) Work products are appropriately identified, documented, and controlled.

d) Work products are reviewed in accordance with planned arrangements and adjusted as necessary to meet requirements.

Generic Practices for PA 2.2

GP 2.2.1 Define the requirements for the work products [PA 2.2 : a]
• The requirements for the work products to be produced are defined. Requirements may include defining contents and structure.
• Quality criteria of the work products are identified.
• Appropriate review and approval criteria for the work products are defined.

GP 2.2.2 Define the requirements for documentation and control of the work products [PA 2.2 : b]
• Requirements for the documentation and control of the work products are defined. Such requirements may include requirements for (1) distribution, (2) identification of work products and their components, (3) traceability.
• Dependencies between work products are identified and understood.
• Requirements for the approval of work products to be controlled are defined.

GP 2.2.3 Identify, document and control the work products [PA 2.2 : c]
• The work products to be controlled are identified.
• Change control is established for work products.
• The work products are documented and controlled in accordance with requirements.
• Versions of work products are assigned to product configurations as applicable.
• The work products are made available through appropriate access mechanisms.
• The revision status of the work products may readily be ascertained.

GP 2.2.4 Review and adjust work products to meet the defined requirements [PA 2.2 : d]
• Work products are reviewed against the defined requirements in accordance with planned arrangements.
• Issues arising from work product reviews are resolved.

Generic Work Products for PA 2.2…

II.5 LEVEL 3: ESTABLISHED PROCESS

The managed process is now implemented using a standard process definition capable of achieving its process expected results.

The following attributes of the process demonstrate the achievement of this level:

PA 3.1: Process Definition attribute

The Process Definition attribute is a measure of the extent to which a standard process is defined and maintained to support the deployment of the process.

As a result of full achievement of this attribute:

a) A standard process, including appropriate tailoring guidelines, is defined that describes the fundamental elements that must be incorporated into a deployed process.

b) The sequence and interaction of the standard process with other processes are determined.


c) Required competencies and roles for performing a process are identified as part of the standard process.

d) Required infrastructure and work environment for performing a process are identified as part of the standard process.

e) Suitable methods for monitoring the effectiveness and suitability of the process are determined.

Generic Practices for PA 3.1

GP 3.1.1 Define the standard process that will support the deployment of the process [PA 3.1 : a]
• A standard process is developed that includes the fundamental process elements.
• The standard process identifies the deployment needs and deployment context.
• Guidance and/or procedures are provided to support implementation of the process as needed.
• Appropriate tailoring guideline(s) are available as needed.

GP 3.1.2 Determine the sequence and interaction between processes so that they work as an integrated system of processes [PA 3.1 : b]
• The standard process’ sequence and interaction with other processes are determined.
• Deployment of the standard process maintains integrity of processes.

GP 3.1.3 Identify the roles and competencies for performing the standard process [PA 3.1 : c]
• Process performance roles are identified.
• Competencies for performing the process are identified.

GP 3.1.4 Identify the required infrastructure and work environment for performing the standard process [PA 3.1 : d]
• Process infrastructure components are identified (facilities, tools, networks, methods, etc.).
• Work environment requirements are identified.

GP 3.1.5 Determine suitable methods to monitor the effectiveness and suitability of the standard process [PA 3.1 : e]
• Methods for monitoring the effectiveness and suitability of the process are determined.
• Appropriate criteria and data needed to monitor the effectiveness and suitability of the process are defined.
• The need to establish the characteristics of the process is considered.
• The need to conduct internal audit and management review is established.
• Process changes are implemented to maintain the standard process.

GP 3.1.6 [ITIL 2011: Service Reporting] Define and agree upon the content of service management reports [PA 3.1 : e]
• Define and agree with the stakeholders the layout, the contents and frequency of the service management reports.

Generic Work Products for PA 3.1…

PA 3.2: Process Deployment attribute

The Process Deployment attribute is a measure of the extent to which the standard process is effectively deployed to achieve its process expected results.

NOTE 1: Competency results from a combination of knowledge, skills and personal attributes that are gained through education, training and experience.

As a result of full achievement of this attribute:

a) A standard process is deployed based upon an appropriately selected and/or tailored standard process definition.

b) Required roles, responsibilities and authorities for performing the standard process are assigned and communicated.

c) Personnel performing the standard process are competent on the basis of appropriate education, training, and experience.

d) Required resources and information necessary for performing the standard process are made available, allocated and used.


e) Required infrastructure and work environment for performing the standard process are made available, managed and maintained.

f) Appropriate data are collected and analyzed as a basis for understanding the behavior of, and to demonstrate the suitability and effectiveness of the process, and to evaluate where continuous improvement of the process can be made.

Generic Practices for PA 3.2

GP 3.2.1 Deploy a standard process that satisfies the context-specific requirements from the standard process definition [PA 3.2 : a]
• The standard process is appropriately selected and/or tailored from the standard process definition.
• Conformance of deployed process with standard process requirements is verified.

GP 3.2.2 Assign and communicate roles, responsibilities and authorities for performing the standard process [PA 3.2 : b]
• The roles for performing the standard process are assigned and communicated.
• The responsibilities and authorities for performing the standard process are assigned and communicated.

GP 3.2.3 Ensure necessary competencies for performing the standard process [PA 3.2 : c]
• Appropriate competencies for assigned personnel are identified.
• Suitable training is available for those deploying the standard process.

GP 3.2.4 Provide resources and information to support the performance of the standard process [PA 3.2 : d]
• Required human resources are made available, allocated and used.
• Required information to perform the process is made available, allocated and used.

GP 3.2.5 Provide adequate process infrastructure to support the performance of the standard process [PA 3.2 : e]
• Required infrastructure and work environment is available.
• Organizational support to effectively manage and maintain the infrastructure and work environment is available.
• Infrastructure and work environment is used and maintained.

GP 3.2.6 Collect and analyze data about the performance of the process to demonstrate its suitability and effectiveness [PA 3.2 : f]
• Data required to understand the behavior, suitability and effectiveness of the standard process are identified.
• Data are collected and analyzed to understand the behavior, suitability and effectiveness of the standard process.
• Results of the analysis are used to identify where continual improvement of the standard process can be made.

GP 3.2.7 [ITIL 2011: Service Reporting] Produce and publish service management reports [PA 3.2 : f]
• Collate data.
• Translate data (into meaningful business views).
• Produce service management reports according to service reporting policies and rules.
• Communicate service management reports to stakeholders.


Generic Work Products for PA 3.2…

II.6 LEVEL 4: PREDICTABLE PROCESS

The established process now operates within defined limits to achieve its process expected results.

The following attributes of the process demonstrate the achievement of this level:

PA 4.1: Process Measurement attribute

The Process Measurement attribute is a measure of the extent to which measurement results are used to ensure that performance of the process supports the achievement of relevant process performance objectives in support of defined business goals.

NOTE 1: Information needs may typically reflect management, technical, project, process or product needs.

NOTE 2: Measures may be either process measures or product measures or both.

As a result of full achievement of this attribute:

a) Process information needs in support of relevant business goals are established.

b) Process measurement objectives are derived from identified process information needs.

c) Quantitative objectives for process performance in support of relevant business goals are established.

d) Measures and frequency of measurement are identified and defined in line with process measurement objectives and quantitative objectives for process performance.

e) Results of measurement are collected, analyzed and reported in order to monitor the extent to which the quantitative objectives for process performance are met.

f) Measurement results are used to characterize process performance.

Generic Practices for PA 4.1

GP 4.1.1 Identify process information needs, in relation with business goals [PA 4.1 : a]
• Business goals relevant to establishing quantitative process measurement objectives for the process are identified.
• Process stakeholders are identified and their information needs are defined.
• Information needs support the relevant business goals.

GP 4.1.2 Derive process measurement objectives from process information needs [PA 4.1 : b]
• Process measurement objectives to satisfy standard process information needs are defined.

GP 4.1.3 Establish quantitative objectives for the performance of the standard process, according to the alignment of the process with the business goals [PA 4.1 : c]
• Process performance objectives are defined to explicitly reflect the business goals.
• Process performance objectives are verified with organizational management and process owner(s) to be realistic and useful.

GP 4.1.4 Identify product and process measures that support the achievement of the quantitative objectives for process performance [PA 4.1 : d]
• Detailed measures are defined to support monitoring, analysis and verification needs of process and product goals.
• Measures to satisfy process measurement and performance objectives are defined.
• Frequency of data collection is defined.
• Algorithms and methods to create derived measurement results from base measures are defined, as appropriate.
• Verification mechanism for base and derived measures is defined.
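GP 4.1.4's "algorithms and methods to create derived measurement results from base measures" can be illustrated with a small example: deriving a mean time to restore service from per-incident timestamps. The measure name and record fields below are assumptions for illustration only, not prescribed by TIPA or ISO/IEC 15504.

```python
from datetime import datetime

# Hypothetical base measures: per-incident open/close timestamps.
incidents = [
    {"opened": datetime(2012, 1, 1, 9, 0), "closed": datetime(2012, 1, 1, 13, 0)},
    {"opened": datetime(2012, 1, 2, 9, 0), "closed": datetime(2012, 1, 2, 11, 0)},
]

def mean_time_to_restore_hours(records) -> float:
    """Derive a mean time to restore (hours) from base open/close timestamps."""
    durations = [(r["closed"] - r["opened"]).total_seconds() / 3600 for r in records]
    return sum(durations) / len(durations)
```

The derivation algorithm (average of per-incident durations) and the collection frequency would both be defined up front, per GP 4.1.4, so that the derived result is reproducible and verifiable.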


GP 4.1.5 Collect product and process measurement results through performing the standard process [PA 4.1 : e]
• Data collection mechanism is created for all identified measures.
• Required data are collected in an effective and reliable manner.
• Measurement results are created from the collected data within defined frequency.
• Analysis of measurement results is performed within defined frequency.
• Measurement results are reported to those responsible for monitoring the extent to which quantitative objectives are met.

GP 4.1.6 Use the results of the defined measurement to monitor and verify the achievement of the process performance objectives [PA 4.1 : f]
• Statistical or similar techniques are used to quantitatively understand process performance and capability within defined control limits.
• Trends of process behavior are identified.

Generic Work Products for PA 4.1…

PA 4.2: Process Control attribute

The Process Control attribute is a measure of the extent to which the process is quantitatively managed to produce a process that is stable, capable, and predictable within defined limits.

As a result of full achievement of this attribute:

a) Suitable analysis and control techniques, where applicable, are determined and applied.

b) Control limits of variation are established for normal process performance.

c) Measurement data are analyzed for special causes of variation.

d) Corrective actions are taken to address special causes of variation.

e) Control limits are re-established (as necessary) following corrective action.

Generic Practices for PA 4.2

GP 4.2.1 Determine analysis and control techniques, appropriate to control the process performance [PA 4.2 : a]
• Process control analysis techniques are defined.
• Selected techniques are validated against process control objectives.

GP 4.2.2 Define parameters suitable to control the process performance [PA 4.2 : b]
• Standard process definition is modified to include selection of parameters for process control.
• Control limits for selected base and derived measurement results are defined.

GP 4.2.3 Analyze process and product measurement results to identify variations in process performance [PA 4.2 : c]
• Measures are used to analyze process performance.
• All situations are recorded when defined control limits are exceeded.
• Each out-of-control case is analyzed to identify potential cause(s) of variation.
• Assignable causes of variation in performance are determined.
• Results are provided to those responsible for taking action.

GP 4.2.4 Identify and implement corrective actions to address assignable causes [PA 4.2 : d]
• Corrective actions are determined to address each assignable cause.
• Corrective actions are implemented to address assignable causes of variation.
• Corrective action results are monitored.
• Corrective actions are evaluated to determine their effectiveness.

GP 4.2.5 Re-establish control limits following corrective action [PA 4.2 : e]
• Process control limits are re-calculated (as necessary) to reflect process changes and corrective actions.
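GPs 4.2.2 through 4.2.5 can be illustrated with a simple control-limit calculation: limits are set around normal process performance, observations outside them are flagged for analysis of assignable causes, and the limits are recalculated after corrective action. The sketch below assumes mean ± k standard deviations as the control technique; this is one common choice, not the technique TIPA mandates.

```python
import statistics

def control_limits(samples, k=3.0):
    """Return (lower, upper) control limits as mean +/- k standard deviations (GP 4.2.2)."""
    mean = statistics.fmean(samples)
    sd = statistics.pstdev(samples)
    return mean - k * sd, mean + k * sd

def out_of_control(samples, k=3.0):
    """Flag observations outside the control limits for cause analysis (GP 4.2.3)."""
    lo, hi = control_limits(samples, k)
    return [x for x in samples if not (lo <= x <= hi)]
```

Re-running `control_limits` on data gathered after a corrective action is the recalculation step that GP 4.2.5 calls for.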


Generic Work Products for PA 4.2…

II.7 LEVEL 5: OPTIMIZING PROCESS

The predictable process is continuously improved to meet relevant current and projected business goals.

The following attributes of the process demonstrate the achievement of this level:

PA 5.1: Process Innovation attribute

The Process Innovation attribute is a measure of the extent to which changes to the process are identified from analysis of common causes of variation in performance, and from investigations of innovative approaches to the definition and deployment of the process.

As a result of full achievement of this attribute:

a) Process improvement objectives for the process are defined that support the relevant business goals.

b) Appropriate data are analyzed to identify common causes of variations in process performance.

c) Appropriate data are analyzed to identify opportunities for best practice and innovation.

d) Improvement opportunities derived from new technologies and process concepts are identified.

e) An implementation strategy is established to achieve the process improvement objectives.

Generic Practices for PA 5.1

GP 5.1.1 Define the process improvement objectives for the process that support the relevant business goals [PA 5.1 : a]
• Directions to process innovation are set.
• New business visions and goals are analyzed to give guidance for new process objectives and potential areas of process change.
• Quantitative and qualitative process improvement objectives are defined and documented.

GP 5.1.2 Analyze measurement data of the process to identify real and potential variations in the process performance [PA 5.1 : b]
• Measurement data are analyzed and made available.
• Causes of variation in process performance are identified and classified.
• Common causes of variation are analyzed to get quantitative understanding of their impact.

GP 5.1.3 Identify improvement opportunities of the process based on innovation and best practices [PA 5.1 : c]
• Industry best practices are identified and evaluated.
• Feedback on opportunities for improvement is actively sought.
• Improvement opportunities are identified.

GP 5.1.4 Derive improvement opportunities of the process from new technologies and process concepts [PA 5.1 : d]
• Impact of new technologies on process performance is identified and evaluated.
• Impact of new process concepts is identified and evaluated.
• Improvement opportunities are identified.
• Emergent risks are considered in identifying improvement opportunities.

GP 5.1.5 Define an implementation strategy based on long-term improvement vision and objectives [PA 5.1 : e]
• Commitment to improvement is demonstrated by organizational management and process owner(s).
• Proposed process changes are evaluated and piloted to determine their benefits and expected impact on defined business objectives.
• Changes are classified and prioritized based on their impact on defined improvement objectives.
• Measures that validate the results of process changes are defined to determine expected effectiveness of the process change.
• Implementation of the approved change(s) is planned as an integrated program or project.
• Implementation plan and impact on business goals are discussed and reviewed by organizational management.


Generic Work Products for PA 5.1…

PA 5.2: Process Opti mizati on att ribute

The Process Opti mizati on att ribute is a measure of the extent to which changes to the defi niti on, management and performance of the process result in eff ecti ve impact that achieves the relevant process improvement objecti ves.

As a result of full achievement of this att ribute:

a) The impact of all proposed changes is assessed against the objecti ves of the defi ned standard process.

b)The implementati on of all agreed changes is managed to ensure that any disrupti on to the process performance is understood and acted upon.

c) The eff ecti veness of process change on the basis of actual performance is evaluated against the defi ned product requirements and process objecti ves to determine whether results are due to common or special causes.

Generic Practi ces for PA 5.2

GP 5.2.1 Assess the impact of each proposed change against the objecti ves of the defi ned standard process [PA 5.2 : a]• Objec ve priori es for process improvement are established.• Specifi ed changes are assessed against product quality and process performance requirements and goals.• Impact of changes to other defi ned standard processes is considered.

GP 5.2.2 Manage the implementation of agreed changes to selected areas of the defined standard process according to the implementation strategy [PA 5.2: b]

• A mechanism is established for incorporating accepted changes into the defined standard process(es) effectively and completely.

• The factors that impact the effectiveness and full deployment of the process change are identified and managed, such as:
  Economic factors (productivity, profit, growth, efficiency, quality, competition, resources, and capacity);
  Human factors (job satisfaction, motivation, morale, conflict/cohesion, goal consensus, participation, training, span of control);
  Management factors (skills, commitment, leadership, knowledge, ability, organizational culture, and risks);
  Technology factors (sophistication of system, technical expertise, development methodology, need for new technologies).

• Training is provided to users of the process.

• Process changes are effectively communicated to all affected parties.

• Records of the change implementation are maintained.

GP 5.2.3 Evaluate the effectiveness of process change on the basis of actual performance against process performance and capability objectives and business goals [PA 5.2: c]

• Performance and capability of the changed process are measured and compared with historical data.

• A mechanism is available for documenting and reporting analysis results to management and owners of the standard process.

• Measures are analyzed to determine whether results are due to common or special causes.

• Other feedback is recorded, such as opportunities for further improvement of the standard process.

Generic Work Products for PA 5.2…


4.3.3: INTERVIEW RATING LEVEL 1

ASSIGNMENT 5

Assessment at LARIPS
Incident Management

Rating of the Interview with Frank
Part 1: Base Practices and Maturity Level 1

Introduction

Alice Good morning Frank, I am Alice and this is my colleague Bob. We have been requested to perform a process assessment at the IT department of LARIPS. The purpose of this assessment is to look at the way IT Service Management processes are executed at LARIPS, to compare them to the standard processes of ITIL, and to use this comparison to assess their current maturity. This assessment will enable us to understand what the current practices are, to look for gaps, and to determine areas of improvement.

Emma, our Lead Assessor, has been working with your IT Quality Manager, Arthur, and your head of IT Operations, Kate. They have established the scope of this assessment. Emma, Arthur, and Kate have spent some time looking at what the drivers are for the business, and they have selected a number of processes that seem strategic for the bank’s future. They have selected Incident Management as one of the key processes, particularly because LARIPS has bought Lux United Bank. The management of incidents has a direct impact on the quality of services provided to your customers. So, we are here today to talk with you about Incident Management.

What we would like to do with you is to discuss your organizational practices, gather data, and investigate what the current process maturity is.

I’ll be running through a standard process description. If you have any questions, please ask me and we’ll try to answer them. It is important to stress that we are assessing processes, not people. We will interview you to understand how the process is implemented at the bank. We will interview four other people about this process and consolidate all answers to assess this process. Your personal answers will not be included in the report as such but will be merged at process level.

Perhaps you could just introduce yourself to help us understand your background.

Frank As you know, my name is Frank. I have been working for the Lux United Bank for 10 years in the IT Service Management field. I started working in the IT Support team. I was promoted to Incident Manager when Lux United Bank was merged with LARIPS Bank 3 years ago. I have followed and passed the ITIL Foundation certification exam. I intend to attend the ITIL Expert training course. Now I am managing the service support part of the IT Operations division.

Alice Thank you for the introduction. First, we want to discover if this process achieves its results. To do so, we are going to cover the typical activities of Incident Management, and you will explain if and how you perform these activities.

Process Performance Attribute

INCM.BP1 Detect and Log the Incidents

Alice (BP 1) The first activity I would like to cover is how incidents are raised. So, how are incidents detected and logged? Do you have some incidents reported in an automated way?


Frank Well, we use a ticketing tool called Incident Request to manage our incidents. But we have not yet interfaced our existing tools for monitoring the IT infrastructure with our ticketing tool. So, all event-related incidents are logged manually. Support team members as well as users can record an incident in the ticketing tool. The incidents detected by users can also be reported by phone and then recorded by a support team member.

Alice And can you tell us what details are collected on the incidents?

Frank For that, the Incident Request tool imposes a clear structure. For each incident, we have to record the name of the support team member who is recording the incident, the name of the user(s) impacted, the IT equipment involved, and the description of symptoms. We also assign a priority and a category to each incident, as already mentioned. And some information is automatically provided, such as the ticket ID and the date. That is the main information recorded for any incident ticket.
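The record Frank describes can be sketched as a simple data structure. This is only an illustrative sketch for the exercise: the field names below are assumptions, not the actual schema of the Incident Request tool.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

# Hypothetical sketch of the data Frank says is recorded per incident.
# Field names are assumptions, not the real Incident Request schema.
@dataclass
class IncidentTicket:
    ticket_id: int              # provided automatically by the tool
    recorded_by: str            # support team member recording the incident
    impacted_users: List[str]   # user(s) impacted
    equipment: str              # IT equipment involved
    symptoms: str               # description of symptoms
    category: str               # mandatory, chosen from a predefined list
    priority: str               # assigned at creation, reviewed by support
    created_at: datetime = field(default_factory=datetime.now)  # set by the tool

ticket = IncidentTicket(
    ticket_id=1042,
    recorded_by="support-03",
    impacted_users=["user-17"],
    equipment="workstation-204",
    symptoms="Cannot access the mail server",
    category="mail",
    priority="medium",
)
```

The review step Frank mentions for user-created tickets would then validate fields such as `category` and `priority` before the ticket moves on.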

Alice And are you sure that everyone is able to document all this information in the ticketing tool?

Frank Indeed, it is sometimes an issue when a user creates a new ticket. They put something in all fields because filling them is mandatory to validate the creation of a new ticket. So a support team member has to review this information before going further. I feel my team is able to document this information correctly in the tool.

Alice Just to be sure, is all the information on incidents recorded in the ticketing tool, or are there other tools or working habits?

Frank Data directly related to incidents is in our ticketing tool, but we have some external databases to support incident investigation and resolution.

Alice OK, we will talk about this a bit later.

INCM.BP2 Categorize the Incidents

Alice (BP 2) When you record an incident in Incident Request, how do you categorize that incident?

Frank There are several categories available in the tool, which has been customized to keep only the incident categories that fit our context. Concretely, each ticket has a mandatory field with this list of predefined categories, and the ticket initiator has to select one. Again, if a user creates the ticket, a support team member systematically reviews the assigned category. We have delivered specific training on incident categorization and the way to use the predefined categories to ensure that this is done correctly.

Alice And how are these categories used during incident handling?

Frank First, the category helps identify the team to which the incident will be assigned, according to the technical domains (network, hardware, mail, printers, and so on). Then, within a team, it helps identify the right person who will manage the incident until its closure, according to her competencies. I have not received any negative feedback, so I suppose it works correctly.

Alice And what about the usual "other" category?

Frank We have designed the set of categories in such a way that we do not need an "other" category. That means some tuning may be necessary if the IT environment changes, but that is rarely the case in the banking industry.


INCM.BP3 Prioritize the Incidents

Alice (BP 3) I have the same question for prioritization: How is prioritization done in Incident Request?

Frank It is not as easy as categorization. Whichever way the new ticket is created (directly by the user or by the support team after a phone call), the priority always needs to be negotiated with the user. Even though the meaning of each priority level has been communicated to the support team, to ensure a common understanding, the users are always right... The evaluation of urgency and impact is done, but informally and with the user. So, it is never done objectively, and the resulting priority does not make sense. We are aware that we need to discuss and clarify this point with our customers.

Alice So, if I understand properly, the assigned priority is very often "high" (or something like that), but does not reflect the reality?

Frank Exactly! For a user, his incidents are always the most urgent ones. But at a higher-level view of IT services, it is rarely the case.

Alice Do you mean that the real high-priority incidents are mixed up with the false high-priority incidents?

Frank No, no, no... For obvious reasons, that is not acceptable. So, we have an internal convention: in case of a real high priority, we write "HIGH PRIORITY" on the first line of the incident description field. For example, it can be an incident from a Very Important Person (VIP) or an incident impacting a large number of users. In these cases, support team members know that they have to stop their current work and try to resolve those incidents as soon as possible. Each team member can resume their own work afterward and try to resolve the other incidents.
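The informal urgency/impact evaluation Frank describes can be contrasted with the objective approach ITIL suggests, where priority is derived from an urgency/impact matrix. The mapping below is a common illustrative convention, not the rules actually in use at LARIPS.

```python
# Illustrative urgency/impact priority matrix; the specific mapping is an
# assumption (a common ITIL-style convention), not LARIPS's actual rules.
PRIORITY_MATRIX = {
    ("high", "high"): "critical",
    ("high", "medium"): "high",
    ("medium", "high"): "high",
    ("high", "low"): "medium",
    ("medium", "medium"): "medium",
    ("low", "high"): "medium",
    ("medium", "low"): "low",
    ("low", "medium"): "low",
    ("low", "low"): "low",
}

def assign_priority(urgency: str, impact: str) -> str:
    """Derive an objective priority from assessed urgency and impact."""
    return PRIORITY_MATRIX[(urgency, impact)]

# A VIP incident, or one impacting many users, scores high on both axes:
print(assign_priority("high", "high"))  # critical
```

With such a matrix, the "HIGH PRIORITY" free-text convention becomes unnecessary, because priority is computed rather than negotiated.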

INCM.BP4 Provide Initial Diagnosis and Support for the Incidents

Alice And how are incidents managed in general just after they are recorded?

Frank The support team member who has recorded the incident has to check the existing knowledge databases to verify whether the same type of incident has already been resolved in the past. But because those databases are disconnected from our ticketing tool, I am pretty sure that it is not done systematically. However, it should be done before proceeding with the incident. In many cases, I think my support team does not provide initial support and forwards the incident directly to specialized support teams.

INCM.BP5 Escalate the Incidents to Specialized Support Teams or to Higher Levels of Authority if Needed

Alice (BP 5) OK. Talking about escalation, what are the general escalation rules? What happens if the person in charge of an incident is not able to find a solution?

Frank Well, in that case, he has to discuss the incident with the other support team members to get help from them. Most of the time, after asking two or three people, the person in charge is able to get the information required to resolve the incident. You should know that many incidents are quite similar to previous ones and, consequently, at least one member of the support team knows how to resolve them.

Alice What about the other cases, when it is a completely new incident, never faced in the past?

Frank It depends... Sometimes the person in charge can do it alone and sometimes he needs help from his colleagues. Most of the time, the support team has enough experience to deal with new incidents.

Alice I understand. Do you have any time limits to resolve incidents?


Frank As I said previously, we have a short list of customers with SLAs. For them, a resolution time for incidents has been defined in the SLAs. For the others, it is more informal; that is, when the person in charge thinks he will not be able to resolve the incident within a reasonable time limit, he asks his colleagues for help.

Alice And you, as a manager, are you involved in one way or another? How?

Frank Yes, in some cases... For customers with SLAs, I am informed when the defined time limit is about to be exceeded. Depending on the importance of the incident, I can decide to communicate with the customer to keep her informed of the situation. But in most cases, it is not necessary, and the incident will be discussed during the next monthly meeting with the customer. For customers without SLAs, I am less involved in incident management. If I am informed, it means that there is a big issue, and it is necessary that I take action to defuse the situation.

INCM.BP6 Investigate and Diagnose the Incidents

Alice (BP 6) Good! Thank you. And what happens if the support team member does not find anything in these knowledge bases? How do you conduct an in-depth investigation of the incident?

Frank Well, in fact, everyone has their own practices for doing that, according to their competencies and experience within the bank. For example, some of the support team members have developed their own databases. The technical tasks to be performed depend on the types of incidents. Globally, they have to identify the weak or blocking point, diagnose the reason for the incident, and finally find a solution to restore the service.

INCM.BP7 Select the Best Potential Resolution

Alice (BP 7) Right... Do you know how the potential resolutions are tested and how the best one is selected?

Frank No, not really. I do not handle incidents myself, so I don't know what happens in the field in such detail. You should ask people from my team.

INCM.BP8 Keep Incident Records Up-to-date

Alice (BP 8) OK, no problem! We will be able to get confirmation on this topic during the next interviews on this process. Now, I would like to talk about the updating of the incident records. The first point is about the incident statuses. How are they used and when are they updated?

Frank Well, it is quite simple. The person in charge has to update the status at each stage of the incident lifecycle. We have only five statuses, that is, "Recorded," "In Progress," "Escalated," "Resolved," and "Closed." As Incident Manager, I use them to monitor the progress of incidents.
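The five statuses Frank lists imply a lifecycle that a ticketing tool can enforce. The allowed transitions below are assumptions inferred from the interview (for example, reopening when the user declines a resolution), not the actual configuration of the Incident Request tool.

```python
# Sketch of the five-status incident lifecycle Frank describes.
# The allowed transitions are assumptions inferred from the interview.
ALLOWED_TRANSITIONS = {
    "Recorded": {"In Progress"},
    "In Progress": {"Escalated", "Resolved"},
    "Escalated": {"In Progress", "Resolved"},
    "Resolved": {"Closed", "In Progress"},  # back to In Progress if declined
    "Closed": set(),                        # terminal state
}

def advance(current: str, new: str) -> str:
    """Move an incident to a new status, rejecting invalid transitions."""
    if new not in ALLOWED_TRANSITIONS[current]:
        raise ValueError(f"cannot move from {current!r} to {new!r}")
    return new
```

The Incident Manager's monitoring then reduces to counting tickets per status, which is exactly how Frank says he tracks progress.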

Alice We have already talked about the recording of incidents. Are there other moments when ticket documentation is updated?

Frank We also have a dedicated white space for the analysis and diagnosis of incidents. But again, in a general way, the level of detail for the incident documentation can be very different from one person to the other. The objective is to be able to reuse this information for the resolution of future incidents, but I am not sure it is really the case. I think it is more a reminder for next time but only usable by the same person, not by another.

Alice Any other moment?

Frank Hmm… In fact, when we need to escalate an incident to a third party, we have to check a box and explain the reason in a free-text field. It is the same if the initial priority is changed during the incident lifecycle; we have to explain the reason in the incident description (there is no dedicated text field for this explanation). Obviously, when any relevant information on an incident is collected, the person in charge should add it to the incident description.


INCM.BP9 Implement the Incident Resolution

Alice (BP 9) Now, about the incident resolution itself, can you explain how it is done?

Frank Again, it depends on the type of incident. Some incidents, the person in charge can resolve remotely, through the network. In these cases, the support team member systematically phones the requester to check that the solution works and that he is able to resume his work. In other cases, it is not possible, for example, to conduct some tests on the user's workstation remotely. The person in charge has to go on site to solve the incident. There are also cases when we need to replace a piece of hardware. We have a contract with an external provider, and the person in charge will contact the provider so that the provider goes on site and changes the faulty hardware.

Alice Do you know when the incident resolution is documented in the ticket?

Frank It is done after the resolution is implemented and the person in charge has verified that the incident resolution works effectively. But I would say that the level of detail is very different from one person to the other, and I guess it depends on the time available to close the ticket before moving to the next one.

INCM.BP10 Communicate on Incident Resolution Progress

Alice (BP 10) The topic I am going to talk about now is communication with the user during the incident lifecycle. We have already talked about the fact that you can communicate with customers in the worst cases and have a monthly meeting with them, but this is at a management level. What about communication with the requester during the resolution of one of their incidents?

Frank The first communication happens when a new incident is recorded in the Incident Request tool. If the user records the incident, the support team systematically has to call the user back to complete the incident description and negotiate the incident priority. That is why, in the future, we will prevent the recording of incidents by users and ask them to systematically call the service desk...

After that, we have some automated e-mails sent to the requester when the person in charge has been identified and when this person effectively starts to work on the incident. The rest has already been mentioned: communication by phone after the resolution to check the effectiveness of the solution and, finally, communication by e-mail to propose incident closure to the requester.

Obviously, the person in charge is free to call back the user at any time during the incident lifecycle, when required (to collect missing information, make tests, and so on).

INCM.BP11 Close the Incidents

Alice (BP 11) And about incident closure, how is it done? And when?

Frank In fact, we have several statuses in the Incident Request tool. The last status set by the support team is "Resolved." Then, an e-mail is automatically sent to the requester to propose that she close the ticket. The user has the choice between Accept Solution, to close the ticket appropriately, and Decline Solution, if there is still something that does not work correctly. As I said earlier, most of the time the person in charge phones the requester before switching the incident to Resolved. Consequently, incident closure by the user in the tool is used more to keep track of this acceptance than as a way to communicate with the user.

Alice Very good! But does this mechanism work correctly?


Frank It depends on the user… Some of them clearly understand that it is important for IT to have their feedback. By the way, I forgot to mention that in addition to accepting or declining the solution, the user can express his satisfaction level through a multiple-choice list, from "very poor" to "excellent," plus a dedicated white space, free for comments. It is one of our indicators to estimate the satisfaction level of our users. However, some users do not care about IT at all! In that case, we apply the rule "no news is good news": once closure is proposed, even if we do not receive any feedback from the requester, the incident will be automatically closed after 1 month.

Alice And what happens if the solution is declined by the user?

Frank With all the precautions previously described, it is very rare… But if it happens, the person in charge will call the user for a more detailed explanation and find a solution that satisfies him.

Alice To conclude, would you say that the current management of incidents is effective and makes it possible to implement a solution as soon as possible so that business operations can resume?

Frank Yes, in most cases. Even if some activities are not systematically performed, the expertise of the people involved enables the bank to manage its incidents effectively. But I would say that the supporting tools could be improved.

Alice Frank, thanks a lot for your explanations. We have now covered all process activities. We'll take a break now and continue later with questions about process efficiency.


4.4.4: THE OTHER PROCESS ATTRIBUTES

ASSIGNMENT 6

Teach-Back: The Other Process Attributes

Performance Management Attribute (PA 2.1)

Attribute ID and Name: PA 2.1 Performance Management Attribute

Definition: The Performance Management attribute is a measure of the extent to which the performance of the process is managed.

Meaning: This attribute checks if the process is supervised and controlled.

GP 2.1.1 Identify Process Performance Objectives.

Meaning: The concrete performance objectives to be achieved by the process are defined.

Questions:
• Are there deadlines to conduct the process activities?
• Who defines the objectives of the process?
• Are concrete goals set for the process activities?

Example: The objective of the Incident Management process is clearly defined: solve major incidents within 2 days and minor ones within a week. The normal number of unresolved incidents is defined.

GP 2.1.2 Plan and Monitor Process Performance.

Meaning: The performance of the activities of the process is planned and monitored.

Questions:
• Is there a clear organization of activities?
• Is there a schedule of actions implemented to operate the process?
• Are these process activities monitored and subject to reporting?

Example: The Incident Manager organizes the Service Desk and decides who is part of the first and who is part of the second line of support. He can also assign Incident Management activities to individuals (incident categorization, prioritization, and resolution). In case of 24/7 operation, the shifts are planned and team transitions are organized.

GP 2.1.3 Adjust the Performance of the Process to Meet Plans.

Meaning: Someone checks that the process is conducted as planned, and does whatever is necessary in case of deviation from objectives.

Questions:
• Does someone monitor the process activities?
• What is done if process activities are not performed according to plan?
• Are there corrective actions when process results do not meet objectives (more resources, new planning)?

Example: The Incident Manager notices if the number of unassigned and unresolved incidents is increasing and reacts accordingly. For example, he can assign more resources to manage incidents, and inform customers of delays in the resolution of some incidents.

GP 2.1.4 Define Responsibilities and Authorities for Performing the Process.

Meaning: Roles to perform the process are clearly assigned and communicated.


Questions:
• Are the roles and responsibilities defined within your team?
• Do people know they are assigned to a role?
• Do other people know who is in charge of doing what?

Example: John and Bob know they are responsible for analyzing new incidents, finding a solution, and implementing this solution. In case of an issue, they know they can escalate and contact the Incident Manager.

GP 2.1.5 Identify and Make Available Resources and the Information Necessary for Performing the Process.

Meaning: Resources such as HR, information, and others are available when needed.

Questions:
• Are the existing HR resources numerous enough and available?
• Are other resources available when needed?

Example: John and Bob are assigned full time to incident solving. Jim can help when needed. They are supported by some individuals or shared knowledge bases (at team level) to analyze and resolve incidents. The Incident Management tool is available.

GP 2.1.6 Manage Interfaces Between the Involved Parties.

Meaning: Someone coordinates all people involved in the management of incidents.

Questions:
• Are all parties involved in the process identified?
• Do they know they are involved in the process performance and to what extent?
• How is communication assured between involved parties?

Example: John knows when to escalate an incident to Mike if he is not able to solve it. The support team knows when and what to communicate to the users. The support team knows when and how to communicate with an external supplier.
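The objectives in GP 2.1.1's example (solve major incidents within 2 days, minor ones within a week) can be expressed as a simple conformance check. The thresholds below come from that example; the function shape and names are illustrative assumptions.

```python
# Hypothetical check against GP 2.1.1's example objectives:
# major incidents within 2 days, minor ones within a week.
TARGETS_DAYS = {"major": 2, "minor": 7}

def meets_objective(severity: str, resolution_days: float) -> bool:
    """Return True if the incident was resolved within its target time."""
    return resolution_days <= TARGETS_DAYS[severity]

print(meets_objective("major", 1.5))  # True
print(meets_objective("minor", 8.0))  # False
```

Counting how many incidents fail this check is one way an Incident Manager could monitor the "normal number of unresolved incidents" the example mentions.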


Process Deployment Attribute (PA 3.2)

Attribute ID and Name: PA 3.2 Process Deployment Attribute

Definition: The Process Deployment attribute is a measure of the extent to which the standard process is effectively deployed.

Meaning: This attribute checks if the standard process is actually deployed.

GP 3.2.1 Deploy a Defined Process.

Meaning: The standard process defined is actually deployed throughout the organization.

Questions:
• Is the standard process actually used?
• Does the process that is implemented correspond to the one that is described formally?
• How is process deployment organized?
• Do all involved parties have access to the process description?

Example: The Incident Management process formally described is actually in place: the process objectives are actually pursued, the standard activities are performed, and so on. The description of the Incident Management process is easily accessible, and regular training sessions are organized for support teams and all involved parties.

GP 3.2.2 Assign and Communicate Required Roles, Responsibilities, and Authorities for Performing the Defined Process.

Meaning: Roles and responsibilities are assigned and communicated to all involved parties.

Questions:
• Are all the roles identified for this process assigned to someone?
• Are all the responsibilities identified and communicated?

Example: An Incident Manager is assigned. People in charge of incident solving are also assigned.

GP 3.2.3 Have Competent Personnel for Performing the Defined Process.

Meaning: The required competences to perform the process are available and allocated.

Questions:
• Do you think people working on this process have the right level of knowledge or training?
• Are training sessions organized to improve the competence level?

Example: Employees assigned to Incident Management have the right competences. They were either selected for their competences or have developed them through training and coaching.

GP 3.2.4 Provide the Required Resources and Information Necessary for Performing the Defined Process.

Meaning: Resources and information are actually available and sufficient.

Questions:
• In reality, do you have enough financial and human resources?
• Do you have trouble finding the correct information to perform the process?

Example: There are actually enough people to perform the process. They are supported by a centralized knowledge base that is easily accessible and has suitable research facilities.

GP 3.2.5 Provide the Required Infrastructure and Work Environment for Performing the Defined Process.

Meaning: The required infrastructure, or tools, and the work environment are actually in place.

Questions:
• Do you have the correct tools?
• Does the work environment correspond to what has been defined and to what is needed?

Example: The ticketing system is in place and running. People are trained to use it, and actually use it. The support team can monitor all infrastructure alerts on a large screen to be as reactive as possible. Some technical facilities are in place to enable the support team to remotely resolve incidents.


GP 3.2.6 Collect and Analyze Appropriate Data on the Process.

Meaning: Data is collected and analyzed to check if the standard process is appropriate and to improve it.

Questions:
• Do you collect feedback from the people performing process activities?
• Do you analyze trends from the data provided by the monitoring system?
• Do you use these data to review whether the standard process is efficient and to improve it?

Example: The Incident Manager collects statistics about the recording, analysis, and resolution times (average, best, and worst) of the incidents that occurred during the month. These statistics are discussed to define a better standard process.

GP 3.2.7 Produce and Publish Service Management Reports.

Meaning: The service management reports that have been defined are actually produced and published.

Questions:
• Do you actually produce the reports that are defined for the process?
• Are these reports published and communicated to the right recipients?

Example: The report on incidents is published every month and sent to the expected recipients. Its content is compliant with what has been defined.


Process Measurement Attribute (PA 4.1)

Attribute ID and Name: PA 4.1 Process Measurement Attribute

Definition: The Process Measurement attribute is a measure of the extent to which measurement results are used to ensure that the process performance supports the achievement of the relevant process performance objectives in support of defined business goals.

Meaning: This attribute checks if process measures are used to predict and improve process performance.

GP 4.1.1 Establish process information needs, in relation with business goals.

Meaning: The information on the process needed by involved parties is identified.

Questions:
• Who are the involved parties, and what information do they need to track the performance of the process? Are these needs identified?
• What type of reporting is required?

Example: The information needed to decide to escalate an incident to problem management is established. Incident resolution targets defined internally or in Service Level Agreements (SLAs) and the customers' need for reporting are clearly specified.

GP 4.1.2 Derive process measurement objectives.

Meaning: Measurement objectives are derived from the collected information needs.

Questions:
• What do you have to measure to provide the required information?
• What needs to be measured for this process?
• What kind of reporting is necessary?

Example: The Incident Manager breaks down the global time for incident resolution into several time slices: recording time, first-level analysis time, second-level analysis (escalation) time, and implementation time. He can also define some other measures, such as the frequency of occurrence of each type of incident or other statistics that are required to satisfy SLA reporting requirements.

GP 4.1.3 Establish quantitative objectives for the performance of the process.

Meaning: Realistic process performance objectives are defined.

Questions:
• Have you defined quantitative objectives for the performance of the process?

Example: The Incident Manager assigns to each time slice (previously defined) a time limit in such a way that the total time is equal to the SLA targets.

GP 4.1.4 Identify and define measures and frequency of the measurement.

Meaning: Measures to satisfy process measurement and performance objectives are defined.

Questions:
• Do you have measures defined, including the frequency of data collection?
• Have you defined computation rules to derive measurements from raw data?

Example: The Incident Management system is customized to record when each event occurs (creation time, start of first-level analysis, start of escalation, and so on). The system produces a report with the measures of each time slice: 2 hours between the creation time and the beginning of analysis… In addition, each week, the Incident Manager measures the number of incidents, the average time to repair them, and the number of service level breaches.
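The time slices in this example can be computed directly from the event timestamps the system records. The event names and timestamps below are assumptions for illustration; only the 2-hour slice between creation and the start of analysis comes from the example above.

```python
from datetime import datetime

# Hypothetical computation of GP 4.1.4's time slices from recorded event
# timestamps; event names are assumptions, not the real system's fields.
def time_slices(events: dict) -> dict:
    """Return the duration, in hours, of each stage of the incident lifecycle."""
    def hours(start: str, end: str) -> float:
        return (events[end] - events[start]).total_seconds() / 3600

    return {
        "recording": hours("created", "analysis_started"),
        "first_level_analysis": hours("analysis_started", "escalated"),
        "second_level_analysis": hours("escalated", "resolution_started"),
        "implementation": hours("resolution_started", "resolved"),
    }

events = {
    "created": datetime(2012, 5, 2, 9, 0),
    "analysis_started": datetime(2012, 5, 2, 11, 0),  # 2 hours, as in the example
    "escalated": datetime(2012, 5, 2, 14, 0),
    "resolution_started": datetime(2012, 5, 2, 16, 0),
    "resolved": datetime(2012, 5, 2, 17, 30),
}
```

Summing the slices gives the global resolution time, which GP 4.1.3 would compare against the SLA target.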

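The time-slice measurement that GP 4.1.4 describes can be sketched as follows. The event names and timestamps are hypothetical; a real Incident Management tool would supply its own schema.

```python
from datetime import datetime

# Hypothetical event log for a single incident; field names and timestamps
# are illustrative assumptions, not the actual tool's schema.
events = {
    "created": datetime(2012, 3, 1, 9, 0),
    "first_level_started": datetime(2012, 3, 1, 11, 0),
    "escalated": datetime(2012, 3, 1, 12, 30),
    "implemented": datetime(2012, 3, 1, 16, 0),
}

# Each time slice is the gap between two consecutive recorded events.
ordered = ["created", "first_level_started", "escalated", "implemented"]
slices = {
    f"{start} -> {end}": (events[end] - events[start]).total_seconds() / 3600
    for start, end in zip(ordered, ordered[1:])
}

for name, hours in slices.items():
    print(f"{name}: {hours:.1f} h")
```

With these invented timestamps, the first slice comes out at 2 hours, matching the "2 hours between the creation time and the beginning of analysis" figure in the example.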


Copyright © 2012, ITpreneurs Nederland B.V. All rights reserved.


GP 4.1.5 Collect, analyze, and report the results of measurement.

Meaning The required data is collected in an effective and reliable manner. Measurements are computed, analyzed, and communicated.

Questions Is the data actually collected? Are indicators computed? Does someone analyze these results? Are the results communicated to someone?

Example Data is actually extracted from the Incident Management system every week by the Incident Manager. It is compiled into a monthly report, analyzed by the Incident Manager, and reported to top management or the customer.
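The weekly collection and analysis in GP 4.1.5 might look like this in miniature. The incident IDs, resolution times, and 8-hour SLA limit are illustrative assumptions for the sketch.

```python
# Illustrative weekly extract of (incident ID, hours to resolve) pairs;
# the IDs, durations, and the 8-hour SLA limit are invented for this sketch.
SLA_LIMIT_HOURS = 8.0
weekly_extract = [("INC-001", 3.5), ("INC-002", 9.0), ("INC-003", 6.0)]

# Compute the three indicators named in the example: incident count,
# average time to repair, and number of service level breaches.
count = len(weekly_extract)
average = sum(hours for _, hours in weekly_extract) / count
breaches = [iid for iid, hours in weekly_extract if hours > SLA_LIMIT_HOURS]

print(f"Incidents: {count}, mean time to repair: {average:.1f} h, "
      f"SLA breaches: {len(breaches)} ({', '.join(breaches)})")
```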

GP 4.1.6 Use measurement results to characterize process performance.

Meaning Process performance is evaluated from the computed indicators.

Questions Are measurement results used to evaluate the performance of the process? Is there a summary of the evolution of the process measures somewhere?

Example The monthly reports are regularly discussed with the customers to identify and analyze trends, and to prepare the review of the SLA targets for the next period.




Annexure Book: Part A

Process Control Attribute (PA 4.2)

Attribute ID and Name PA 4.2 Process Control Attribute

Definition The Process Control attribute is a measure of the extent to which the process is quantitatively managed to produce a process that is stable, capable, and predictable within defined limits.

Meaning This attribute checks if measurement results are used to improve process performance.

GP 4.2.1 Determine and apply analysis and control techniques.

Meaning Analysis techniques are defined and applied.

Questions Are there standard statistical analyses of indicators? Do they enable you to identify root causes of process performance variations?

Example Surveys are conducted to measure user/customer satisfaction with the handling of incidents by the IT service provider. Because all activities related to Incident Management are supported by a single tool, the organization identifies the relevant information from this tool for analysis. In addition, as the organization has many different incident categories, it decides to analyze this data by incident category to identify potential differences.

GP 4.2.2 Establish control limits of variation.

Meaning Control limits are defined for indicators (lowest and highest values accepted).

Questions Are some performance limits defined for the process? Where do those limits come from? How have they been defined?

Example The analysis of data collected through the Incident Management tool enables the organization to define acceptable limits of variation for each activity in the handling of incidents. For example, the implementation of the incident solution cannot last more than 1 day.

GP 4.2.3 Analyze measurement data.

Meaning Measures are analyzed to identify causes of variation from control limits.

Questions Is measurement data analyzed? Does it enable you to identify when control limits are exceeded? Do you analyze further to identify the cause of variation?

Example Regular analysis of the previously defined limits of variation enables the organization to understand that the time limit for hardware replacement (the time limit for implementing incident resolution for incidents in the Hardware category) is always exceeded because one external supplier is not able to meet its commitments.
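Checking measurements against a control limit, as in GP 4.2.2 and GP 4.2.3, can be sketched as below. The 24-hour upper limit echoes the 1-day figure in the GP 4.2.2 example; the incident IDs and durations are invented for illustration.

```python
# Control limit from the GP 4.2.2 example: implementing an incident solution
# must not take more than 1 day (24 h). The incident data is illustrative.
UPPER_CONTROL_LIMIT_HOURS = 24.0
implementation_hours = {"INC-101": 20.0, "INC-102": 48.0, "INC-103": 22.5}

# Flag incidents whose implementation time exceeded the control limit,
# so the cause of variation (e.g., a slow supplier) can be investigated.
exceeded = {iid: h for iid, h in implementation_hours.items()
            if h > UPPER_CONTROL_LIMIT_HOURS}

for iid, hours in exceeded.items():
    print(f"{iid}: {hours} h exceeds the {UPPER_CONTROL_LIMIT_HOURS} h limit")
```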

GP 4.2.4 Take corrective actions.

Meaning Corrective actions are implemented to tackle the causes of variation. Their effectiveness is monitored.

Questions How are causes of variation treated? Do you implement corrective actions? Do you check that they are effective?

Example The organization can negotiate with its external supplier to find a solution, or can look for another supplier who will be able to satisfy its needs.
