Table of Contents

Introduction

1. What Is an "Action Plan Re-Appraisal" and Why Should I Care?

2. A-DAR-able Moments

3. How Do the Generic Practices Help Us Make Agile Better?

4. Tailoring Guidelines – What Are We Missing?

5. We’ve Got Binders Full of Policies – Is That Sufficient for GP2.1?

6. What Is Training Capability?

7. “Alternative,” or Just Innovative?

8. How Do We Develop Good PPQA Habits?

9. How Can We REALLY Know How Things Are Going?

About the Authors

Introduction

They’re back!

In “Just the FAQs,” two leading software engineering professionals (who are also competitors), Jeff

Dalton and Pat O’Toole, are back at it, penning a series of short stories from different perspectives

about process improvement, CMMI, and Agile development that address common misconceptions,

myths, and fundamental misunderstandings that persist in the software and engineering professions.

Their mission? To provide answers to the most frequently asked questions about the CMMI, SCAMPI,

engineering strategy and software process improvement.

Initially written and distributed individually as part of a series on Ask the CMMI Appraiser, the “Just the

FAQs” collection is now being presented in eBook format in a second edition.

Don’t settle for rumors, misinformation and agenda-driven half-truths. Get the facts! “Just the FAQs!”

“Just the FAQs” is written/edited by Pat O’Toole and Jeff Dalton. Please contact the authors at

[email protected] and [email protected] about your questions.

© Copyright 2016: Process Assessment, Consulting & Training, LLC and Broadsword Solutions Corporation

CMMI, SEI and Capability Maturity Model Integration are registered trademarks of Carnegie Mellon University

1. What Is an "Action Plan Re-Appraisal" and Why Should I Care? - Pat O’Toole

I heard that the CMMI Institute just released this thing called an “Action Plan Reappraisal.” What is it

and why should I care?

PAT: Imagine that your organization just completed a Maturity Level 4 (ML4) SCAMPI A appraisal, but

because you failed a goal in Organizational Process Performance (OPP) at ML4 and had a “Not Rated”

goal in Risk Management (RSKM) at ML3, the SCAMPI A appraisal resulted in an ML2 rating. As Maxwell

Smart might say, “You missed it by that much!”

Let’s assume that the ML4 rating was important to your team as a formal recognition of accomplishment

and/or because a strategically important client requires it. In either case, you would want to remediate

the goal‐impacting weaknesses, institutionalize the associated behavior changes, and be re‐appraised as

soon as possible. Prior to the Action Plan Reappraisal being released, your organization would have to

undergo another soup‐to‐nuts ML4 SCAMPI A appraisal – a very expensive and disruptive undertaking.

Now, however, the Appraisal Sponsor can elect to conduct an Action Plan Reappraisal (APR) – essentially

a “delta appraisal.” An action plan is generated and executed by the organization to remediate the

weaknesses and institutionalize the behavior changes, after which the appraisal team re‐evaluates the

failed or not rated goals. If any of these goals are now rated “Satisfied,” the Maturity Level is

regenerated. In the happiest of scenarios, both the OPP and RSKM goals would now be rated “Satisfied,”

and ML4 will have been achieved!

Any questions?

Is there a time limit in which the Action Plan Reappraisal (APR) must be conducted?

The APR must conclude within four months of the SCAMPI A Final Findings presentation.

Why four months?

Four months was selected as an appropriate timeframe for remediating minor goal‐threatening

weaknesses; the APR is not intended for use when remediating major or systemic weaknesses.

Is there a limit regarding how many APRs may be conducted?

Only one APR can be conducted for a given SCAMPI A appraisal.

What happens if our current Maturity Level expires during this four‐month period?

An appraisal result that exceeds its three‐year validity period is no longer published on PARS. If the

current SCAMPI A was completed and the associated APR is concluded within the four‐month time limit,

the Appraisal Sponsor can authorize the current results to be published in PARS. Publication in PARS

would start after the APR has been submitted to the CMMI Institute via the SCAMPI Appraisal System

(SAS) and the CMMI Institute has concluded its Quality Audit.

How is the three‐year validity period established for a SCAMPI A that had an APR?

Such an appraisal would expire three years after the delivery of the SCAMPI A Final Findings presentation.
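
To make the timeline concrete, here is a minimal sketch in Python that computes the two dates described above for a hypothetical Final Findings date, assuming simple calendar-month arithmetic (the SCAMPI MDD v1.3b remains the authoritative word on how these windows are measured):

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Minimal calendar-month arithmetic; clamps the day to the end of the target month."""
    month_index = d.month - 1 + months
    year, month = d.year + month_index // 12, month_index % 12 + 1
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    days_in_month = [31, 29 if leap else 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31][month - 1]
    return date(year, month, min(d.day, days_in_month))

final_findings = date(2016, 3, 15)               # hypothetical Final Findings date
apr_deadline = add_months(final_findings, 4)     # the APR must conclude by this date
expiration = add_months(final_findings, 36)      # three-year validity period

print("APR must conclude by:    ", apr_deadline)
print("Appraisal results expire:", expiration)
```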

Is it now required to conduct an APR if the target Maturity Level/Capability Profile was not achieved

in the SCAMPI A appraisal? If not required, who makes the decision to conduct an APR or not?

No, the APR is not required; it is an option available to the Appraisal Sponsor. The Lead Appraiser

provides a good‐faith recommendation on whether an APR might be successful if pursued, but the final

decision rests with the Appraisal Sponsor. If proceeding with an APR, this decision must be

communicated to the CMMI Institute prior to submitting the SCAMPI A appraisal via SAS.

Typically, a goal fails due to one or more practices being characterized as “partially implemented” or

“not implemented.” Does the APR evaluate just these practices? Goals? Process areas? Maturity levels?

The APR appraises all of the “Unsatisfied” and “Not Rated” goals in the APR appraisal scope in the same

manner as in a SCAMPI A: For each practice that supports the goal, documents are reviewed and

interviews conducted to determine if that practice is fully, largely, partially, or not implemented on that

project. These project‐level characterizations are then used to characterize the practice overall for the

organization, or what’s referred to as the “OU practice characterization.” These OU practice

characterizations are then used to rate the goal as “Satisfied” or “Unsatisfied.”
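
If it helps to see that roll-up as data, here is a deliberately simplified Python sketch. The aggregation rule used here (a goal is “Satisfied” only when every supporting practice is fully or largely implemented at the OU level, and the OU characterization is the weakest project-level one) is an illustrative assumption, not the MDD’s actual characterization and rating rules:

```python
CHARACTERIZATIONS = ["FI", "LI", "PI", "NI"]   # fully / largely / partially / not implemented

def ou_characterization(project_chars):
    # Toy aggregation: take the weakest (most conservative) project-level characterization.
    return max(project_chars, key=CHARACTERIZATIONS.index)

def rate_goal(practice_ou_chars):
    # Toy rule: "Satisfied" only if every supporting practice is FI or LI at the OU level.
    ok = all(c in ("FI", "LI") for c in practice_ou_chars.values())
    return "Satisfied" if ok else "Unsatisfied"

# Hypothetical evidence for two practices supporting one goal, across three sampled projects
evidence = {
    "OPP SP1.1": ["FI", "LI", "FI"],
    "OPP SP1.3": ["LI", "PI", "FI"],
}
ou_chars = {practice: ou_characterization(chars) for practice, chars in evidence.items()}
print(ou_chars)              # {'OPP SP1.1': 'LI', 'OPP SP1.3': 'PI'}
print(rate_goal(ou_chars))   # Unsatisfied -- the PI practice keeps the goal from being satisfied
```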

Note that it may be determined that additional goals need to be put in scope for the APR to properly

evaluate the remediation. For example, if a Project Planning goal was rated “Unsatisfied” in the SCAMPI

A, it might be appropriate to evaluate one or more goals from the Project Monitoring and Control

Process area as well. This decision is made by the Lead Appraiser.

Do we have to evaluate the remediated goals on the same projects, or can we do so on others?

The expectation is that the same projects will be evaluated. However, there may be cases where this is

not feasible – for example, the project terminated prior to the remediation being applied, or the project

is beyond the point in the lifecycle where the remediated practices are executed. In such cases,

replacement projects with the same sampling characteristics may be evaluated instead.

What happens if some of the goals are still rated “Unsatisfied” or “Not Rated” at the end of the APR?

In the example above, if the previously failed OPP goal was now rated “Satisfied” but the previously

unrated RSKM goal was still “Not Rated,” the Maturity Level would be regenerated but the outcome

would be the same – ML2. On the other hand, if the RSKM goal was now rated “Satisfied” and the OPP

goal was still rated “Unsatisfied,” the regenerated Maturity Level rating would be ML3.

Must the organization have all failed and unrated goals re‐evaluated as part of the APR?

No, the Appraisal Sponsor can elect to conduct the APR on a subset of these. In the example above, the

sponsor may decide to include only the unrated RSKM goal in the APR scope and, if the goal is

successfully remediated, institutionalized, and rerated as “Satisfied,” ML3 will have been achieved.

Can the organization include new process areas in the APR? That is, in the example above, could they

expand the model scope to include some or all of the ML5 process areas?

No, the model scope of the APR cannot be greater than that of the original SCAMPI A appraisal.

Do we have to use the same Lead Appraiser for the APR, and do all of the original appraisal team

members have to be involved?

Yes, but . . . . The original Lead Appraiser must be involved in all evaluation‐related APR activities, and

with the exception of documentation review, these activities must be conducted onsite. In addition, for

each failed or unrated goal in the APR model scope, to achieve continuity at least one member of the

SCAMPI A mini‐team that evaluated the detailed evidence associated with that goal must also be

involved in all of these onsite appraisal activities. In other words, there will be at least two members of

the original SCAMPI A appraisal team, and maybe more, involved throughout. The other SCAMPI A

appraisal team members must also be involved in OU practice characterizations and

goal/capability/Maturity Level ratings, but they may participate remotely (e.g., via teleconference).

So in our example, if separate mini‐teams were responsible for evaluating the evidence and

characterizing the practices for OPP and RSKM, then the Lead Appraiser and at least one member of

each of these mini‐teams must participate in all evaluation‐related APR activities. All of the original

appraisal team members must be involved in generating the OU practice characterizations and rating

the goals.

Special provisions must be made with the CMMI Institute if any of the original appraisal team members,

including the Lead Appraiser, cannot participate in the APR.

Is there a limit on the number of failed goals that may be evaluated in an APR?

No, however, the Lead Appraiser must make a recommendation to the Appraisal Sponsor regarding the

feasibility of achieving the objective. This may include reminding the sponsor that, for a goal to be rated

“Satisfied,” the weaknesses have to be rectified AND the behavior changes have to be institutionalized.

Will the appraisal posting on PARS indicate that an APR was conducted?

No, the SCAMPI A and APR are considered a single appraisal event. The posting on PARS only indicates

the final outcome of the appraisal; there is no indication that an APR was, or was not, conducted.

What do YOU perceive to be “the good, the bad, and the ugly” regarding Action Plan Reappraisals?

[Author’s note: Prior to this question, all answers were based on the changes introduced in the SCAMPI

MDD v1.3b to accommodate the APR. In response to this question, Pat is providing his own opinion,

which may or may not align with that of the CMMI Institute or his fellow Lead Appraisers!]

The Good:

Some organizations insist on being “rock solid” going into a SCAMPI A appraisal, as the fear and cost of

failure are very high – in both financial and political terms. These organizations may conduct a series of

Class C and Class B “dress rehearsals” until they have achieved 99.9% confidence that they will “pass”

the SCAMPI A. The Action Plan Reappraisal option will allow such organizations to lower the overall cost

by embracing more rating risk. The organization may forego one or more “dry run” appraisals if they

know they can address any last potential goal-threatening weaknesses over the ensuing four months.

Currently, some Lead Appraisers may struggle with the ethical decision to “flunk” an organization based

on one or two marginal practices. They are torn between staying true to the model and method, and

forcing the organization to incur the significant cost and disruption of another full SCAMPI A appraisal.

The APR option makes the right decision much more palatable – for all “relevant stakeholders!”

The Bad and the Ugly:

Some perceive that the introduction of the APR “tarnishes the brand.” That is, the SCAMPI A appraisal

method is a rigorous evaluation of current organizational outputs and behaviors to determine if a

predetermined set of CMMI goals is being satisfied. Appraisals bring focus to organizations that might

otherwise be easily distracted. And since the SCAMPI A is typically perceived as the “gold standard” of

appraisal methods, anything less, like the APR, is, well, less.

Some organizations may decide to accept too much risk. That is, rather than simply foregoing their

sixteenth pre‐SCAMPI A dry run appraisal, they may throw prudence and caution to the wind and do NO

pre-appraisal preparation, trusting that they can address any and all issues during their four-month

“grace period.” Damn the torpedoes, full steam ahead!

Finally, some CMMI‐adopting organizations may fear that an unscrupulous Lead Appraiser may see the

Action Plan Reappraisal as a means of extracting even more revenue from them. “Oh gee, you failed a

goal, I guess you’ll just have to pay me for another four or five days of service to get your level!”

Concluding Note

The SCAMPI Method Definition Document v1.3b, Section 4 is the Action Plan Reappraisal’s “official rule

book.” If you would like a copy of this section, a copy of the full MDD, or if you have any additional

questions or “Good, Bad, and Ugly” insights, please feel free to contact Pat at [email protected].

Over time, I suspect that what “Dear Abby” said about birth control may also apply to the APR – “It’s

better to have and not need than to need and not have.”

2. A-DAR-able Moments - Jeff Dalton

This DAR thing just seems silly to me. Aren’t we supposed to be experts? Why do I need a bunch of

overhead when I’m smart enough to make the call on my own?

JEFF: Make the call on your own? That’s so a-DAR-able! And it reminds me of a little story . . . .

Back in the early 90s, I was leading a software project to develop a retail point-of-sale system for a

major department store (yeah, we were pretty “RAD” back in the day), and the job of selecting a code

library that provided basic retail functionality fell to my team.

We quickly assembled a list of available suppliers that some members of the team had heard of, and

asked them to provide us more information about their code libraries. Even though we didn’t conduct a

“formal” selection process, we thought we were asking the right questions about platform, functionality,

cost, and viability.

After lining all of them up side-by-side, we had three solid, but similar, choices that ran in a text-based

Linux environment, and one that ran on the then new-fangled Windows platform with a touch screen.

Wait. Touch screen? Graphics? Ooooh. Our inner-nerds were salivating!

While we went about the process of discussing our options, the buzz around the office about “touch

screen” was palpable. Words like “sexy,” “innovative,” and “groundbreaking” could be heard at all levels

of the company. Text-based systems were “old-school,” “yesterday’s news,” and “boring.” The CEO even

weighed-in and said the new graphical interface would be the “soul” of the new system.

The CIO, who reviewed the data and found the touch screen system to be lacking in functionality while

higher in cost, added a column to our matrix, which he labeled “pizzazz.”

Oy vey.

I think you know the rest of the story. I don’t need to tell you about how the momentum to choose that

touch screen system was unstoppable, or that the code library of basic retail functions didn’t even

WORK, or that the entire project was a disaster that resulted in substantial cost and schedule overruns.

No, I don’t need to tell you THAT story….

But I do need to tell you about the “3Ds.”

The 3Ds is a tool I use to remind myself about that project – and to help make sure that it never happens

again. It stands for Deliberate, Durable, and Defendable. If you have taken one of my CMMI classes, you

have heard me talk about it during the section on Decision Analysis and Resolution (sometimes I call it

“Dalton’s 3Ds,” but that would make it FOUR Ds, and that’s way too much overhead!).

Here they are:

Deliberate: Applying a deliberate step-by-step approach that leverages proven criteria, involves a

limited set of the right stakeholders, and follows a useful, fact-based series of steps would have stopped

the momentum of the touch-screen cold in its tracks. This approach is sometimes called “a process.”

Durable: We need our decisions to stand the test of time, and to last beyond the next great thing, or

perhaps just the next new manager. Making a decision durable not only requires that a deliberate

process be followed, but also that consideration is given to scalability, change (including

reorganization, acquisitions, or unforeseen changes in business climate) and evolving preferences in

technology or culture. “Durable” doesn’t necessarily mean “solid,” “robust,” or “large.” More often than

not, it means “flexible” and “adaptable.”

Defendable: Every decision we make has political consequences, as it reflects positively or negatively on

key stakeholders. A successful choice can catapult a manager’s career into the boardroom, or relegate

him just as easily to the mailroom. Who we involve, how we communicate, and how we present the

data to these stakeholders have an impact on whether the decision will be accepted, supported and implemented

for the long term. And it will also help control (or at least identify) those who would hide in the weeds

waiting to pounce the minute they smell an opportunity to do so.

Start with the identification of a small number of key moments in time where important decisions need

to be made – the kind of decisions that must be both durable and defendable, and then follow a

deliberate path from selection, to analysis, and then on to resolution.
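
For readers who like to see the “deliberate” part as something executable, here is a minimal sketch of a criteria-based decision matrix of the kind the team above could have used; the criteria, weights, and scores are entirely hypothetical:

```python
CRITERIA = {            # weight reflects relative importance (higher = more important)
    "functionality": 5,
    "cost": 4,
    "platform_fit": 3,
    "vendor_viability": 3,
    # note: "pizzazz" is conspicuously absent
}

candidates = {          # 1-5 score per criterion, per candidate code library (made-up numbers)
    "Text-based library A": {"functionality": 5, "cost": 4, "platform_fit": 4, "vendor_viability": 4},
    "Touch-screen library": {"functionality": 2, "cost": 2, "platform_fit": 3, "vendor_viability": 3},
}

def weighted_score(scores):
    # Sum of (criterion weight * candidate score) over all criteria
    return sum(CRITERIA[name] * score for name, score in scores.items())

for name, scores in sorted(candidates.items(), key=lambda kv: weighted_score(kv[1]), reverse=True):
    print(f"{name}: {weighted_score(scores)}")
```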

These could be some of the most important decisions of your career – you might even call them

“a-DAR-able moments.”

3. How Do the Generic Practices Help Us Make Agile Better? - Jeff Dalton

I’ve read some articles about how the CMMI can help make Agile better, but I don’t really see how the

generic practices fit in, or how we should comply with them. Should I just ignore them?

JEFF: Not only do the generic practices “fit,” they should have been named the Most Important

Practices in Agile! **Note to self: Send that suggestion in to the CMMI NextGen team!

While the CMMI can help any organization improve and identify their “state of maturity,” the Generic

Practices represent solid value to Agile teams – they are what make it real.

It has been said that Agile teams don’t like process. It’s too heavy, top-down and “command and

control” for their style of software development. But “ceremonies,” like sprint planning, daily standups,

retrospectives, and sprint demos are simply group behaviors, and behaviors are nothing more than

processes – so OF COURSE they like process! Furthermore, if Agile ceremonies are processes, and the

Generic Practices are intended to enable them, it follows that these are the precise tools we need to

instill a higher level of capability to Agile methods. So bring them on!

Here are twelve ideas that you can take back to your office right now to improve the state of your Agile

implementation:

Idea #1: Work with your management team to establish a clear set of common values that include:

transparency, collaboration, failing fast, iterative and incremental, and a strong bias towards spending

more effort writing software than writing documentation.

This, of course, is an Agile implementation of Generic Practice 2.1 “Establish an Organizational Policy”

that can leverage practices in Organizational Process Focus and Organizational Process Definition to

make it real with supporting processes and tools. So, instead of saying “Policies? We’ve got binders full

of ‘em!” focus on the values instead.

Idea #2: Establish a precise model for the different levels of planning that your team is going to

perform. Release Planning, Sprint Planning, and planning for the tasks associated with each User Story

are a few examples. Those plans include the who, what, where, and how of each level. Develop a clear

requirements architecture for customer needs, epics, and user stories, including a “definition of ready”

for each one, as well as the estimation methods that are going to be used for each. Establish an

agreement with your team on how each aspect of the software engineering process is going to work:

How is code going to be written? How are code reviews going to be conducted? How is testing going to

work within each Sprint?

I call these the “CMMI Questions.” Instead of slavishly complying with the practices, turn them into

questions to be answered by the team.

In this case, the questions are about Generic Practice GP2.2 “Plan the Process,” supported by Project

Planning, Project Monitoring and Control, Integrated Project Management, Risk Management, Technical

Solution, and almost every other Process Area in the CMMI!
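
As one small, concrete example of answering a “CMMI Question” from Idea #2, here is a hedged Python sketch that encodes a team’s agreed “definition of ready” as an explicit checklist; the story fields and checks are assumptions, to be replaced with whatever your team actually agrees to:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class UserStory:
    title: str
    acceptance_criteria: list = field(default_factory=list)
    estimate_points: Optional[int] = None      # e.g., from planning poker
    dependencies_resolved: bool = False

# The team's agreed "definition of ready" -- illustrative items only
DEFINITION_OF_READY = [
    ("has acceptance criteria", lambda s: len(s.acceptance_criteria) > 0),
    ("is estimated", lambda s: s.estimate_points is not None),
    ("dependencies resolved", lambda s: s.dependencies_resolved),
]

def is_ready(story):
    """Return (ready?, list of unmet checklist items)."""
    unmet = [name for name, check in DEFINITION_OF_READY if not check(story)]
    return (len(unmet) == 0, unmet)

story = UserStory("As a cashier, I can void a line item",
                  acceptance_criteria=["voiding a line item updates the order total"])
print(is_ready(story))   # (False, ['is estimated', 'dependencies resolved'])
```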

Idea #3: Procure all of the resources to support the items identified in Idea #2. These might include: co-located

workspace, planning poker decks, pair programming desks, software tools such as SharePoint,

Jira or Team Foundation Server, and the funding for the various tools, resources, facilities and other

components required to execute the ceremonies and events.

This idea is an Agile manifestation of Generic Practice 2.3 Provide Resources, along with its related

Process Areas Supplier Agreement Management, Project Planning, Integrated Project Management,

Technical Solution, and others.

Idea #4: Some are under the impression that because Agile teams are “self-organizing,” that means that

they are less disciplined, but nothing could be further from the truth. A crisp understanding of each person’s

role, responsibility, and authority is essential to any successful Agile team, even more than in a

traditional software development environment. Product Owners must be free to make product

decisions, Scrum Masters need to be able to freely coach teams during each ceremony, and team

members need to step-up as they self-subscribe to each story or task. Personal responsibility is

everything for an Agile team, and without a clear definition of roles, teams cannot be successful.

You guessed it, the CMMI called this one too with Generic Practice 2.4 Assign Responsibility, along with

the related practices in Project Planning and Integrated Project Management.

Idea #5: Suck it up and train people. Train them to be Product Owners and Scrum Masters. Most

importantly, train teams to be self-disciplined, empowered Agile citizens that trust the process and live

the Agile values from Idea #1 every day.

This is Generic Practice GP2.5 Train People, with the related Process Area Organizational Training. If you

do nothing else, do this. Do. It. Now.

Idea #6: Agile teams like to use “information radiators.” No problem! If you make a decision to use

sticky notes and a scrum board, use a camera to capture that information and store it in a repository so

that other parts of your organization can benefit from that information. If not, use a tool like Jira or TFS

to record information while you create (and share) burn-down and burn-up charts, epics, stories, and

tasks. The same goes for the information that is generated from Idea #2. Record your process

definitions in a common repository so that you, and other teams, can benefit from the assets you have

developed.

For more information see Generic Practice 2.6 Control Work Products, and the related Process Areas

Configuration Management, Project Planning, and Organizational Process Definition.
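
As an illustration of Idea #6, here is a small sketch that stores the data behind an information radiator in a plain, shareable structure and derives a burn-down series from it; the field names and numbers are made up, and a Jira or TFS export would serve the same purpose:

```python
import json

sprint = {
    "sprint": "2016-S07",
    "committed_points": 34,
    "remaining_by_day": [34, 31, 28, 28, 22, 18, 15, 9, 4, 0],  # one entry per working day
}

def burn_down(remaining, committed):
    days = len(remaining) - 1
    # ideal line: straight-line burn from the committed points down to zero
    ideal = [round(committed - committed * d / days, 1) for d in range(days + 1)]
    return list(zip(remaining, ideal))

print(json.dumps(sprint))      # ready to drop into a shared repository for other teams to use
for actual, ideal in burn_down(sprint["remaining_by_day"], sprint["committed_points"]):
    print(f"actual {actual:>4}  ideal {ideal:>5}")
```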

Idea #7: Most of what’s been written about Agile teams focuses on the “nuclear” scrum team, but every

project has external stakeholders who need to have input and participate in the project. For

instance, you might have a group of people who focus on continuous build/continuous integration, or

attendees at a Sprint demo who need to be there to provide useful and meaningful input.

Generic Practice GP2.7 Identify and Involve Relevant Stakeholders, along with the related Process Areas

Project Planning, Integrated Project Management, Validation, and Verification can provide some of the

guidance you’ll need.

Idea #8: How is your Agile team performing? Are they meeting Sprint commitments? Are customers

satisfied with the prioritization of user stories? Are they inserting stories mid-sprint? Do the user stories

meet the definition of done? Inquiring minds want to know!

This is my favorite practice in the CMMI model: Generic Practice GP2.8 Monitor and Control the

Process, along with its related Process Area Measurement and Analysis. Without this, how do we even

know that we’re doing the right things? Special note: If Maturity Level 4 or 5 are in your future, pay

close attention to this one!
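
To make Idea #8 a little more concrete, here is a hedged sketch of two example measures, commitment reliability and mid-sprint insertions. These are illustrative definitions, not a standard measurement set; Measurement and Analysis would have you derive your own measures from your information needs:

```python
sprints = [
    # committed and completed story points, plus stories added after sprint planning (made-up data)
    {"name": "S05", "committed": 30, "completed": 27, "inserted_mid_sprint": 1},
    {"name": "S06", "committed": 32, "completed": 32, "inserted_mid_sprint": 0},
    {"name": "S07", "committed": 34, "completed": 25, "inserted_mid_sprint": 4},
]

for s in sprints:
    reliability = s["completed"] / s["committed"]
    print(f'{s["name"]}: commitment reliability {reliability:.0%}, '
          f'{s["inserted_mid_sprint"]} stories inserted mid-sprint')

average = sum(s["completed"] / s["committed"] for s in sprints) / len(sprints)
print(f"average commitment reliability: {average:.0%}")
```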

Idea #9: Since Agile requires personal discipline it makes sense to me that we occasionally evaluate the

behaviors of Agile teams to ensure that healthy self-discipline exists. Given the value that Agile teams

place on a “high-trust” environment, they are usually loath to accept the idea of project audits, but

there are many ways to objectively evaluate team performance. With a focus on mentoring, coaching,

and improving the discipline of Agile teams, look to the values from Idea #1 to ensure you are getting at

what’s important.

The CMMI nails this with Generic Practice GP2.9 Objectively Evaluate Adherence, and the related

Process Area Process and Product Quality Assurance.

Idea #10: While Agile has grown exponentially across project teams, it hasn’t been quite as successful at

persuading management to join the revolution. Sharing successes (and failures) of Agile teams more

proactively with upper management will bring more agility to the entire organization. The key to Agile

success is the implementation of the Agile values described in Idea #1 throughout the company – and if

management isn’t involved, it isn’t going to happen. Ever.

The authors of the CMMI had the foresight to include Generic Practice GP2.10 Review Status with

Higher Level Management, a practice that can be leveraged for this purpose, and is relevant and critical

for the success of Agile teams.

Idea #11: One size never fits all, and this goes for Agile teams as well. I have always said that the

authors of the CMMI intended for it to be Agile, and this is demonstrated in Generic Practice GP3.1

Establish a Defined Process and the related practices in Integrated Project Management that scream BE

AGILE and do what’s right for your project! So create guidelines that will allow Agile teams to deviate,

when it makes sense. Creating those guidelines is tough, so look to Organizational Process Definition for

guidance. It’s well worth the effort.

Idea #12: Agile teams are well versed in the use of the Retrospective, but since Agile ceremonies focus

primarily on the nuclear team those lessons don’t usually get shared with the larger population. The

organization that can expand the concept of Retrospectives beyond the Agile team, so that lessons can

be systemically collected, indexed, and used by all, is the organization that wins!

The CMMI’s Generic Practice GP3.2 Collect Process Related Experiences provides guidance for

systemically sharing lessons, along with its related Process Areas Integrated Project Management and

Organizational Process Definition.

So there you have it. Twelve ideas (and twelve Generic Practices) that can improve Agile performance

right away. The rest is up to you!

4. Tailoring Guidelines – What Are We Missing? - Pat O’Toole

We’re struggling with “tailoring” and “tailoring guidelines.” We think we understand the concepts, but

we can’t seem to find the value. What are we missing?

PAT: I actively participate in many of the CMMI‐related Yahoo and LinkedIn forums. I’ve noticed that in

many of the exchanges, participants tend to talk past, rather than to, one another, as illustrated in the

following fictitious exchange:

A: Our lead appraiser said that we don’t have to perform some of the CMMI practices and we can still

get our maturity level.

B: Well THAT’s not right – unless, of course, you are performing alternative practices instead.

A: Nope, he said we are not expected to have alternative practices in place either.

B: Your lead appraiser is dead wrong and should have his certification revoked!

This seemingly meaningful conversation goes back and forth for some time until…

A: Yeah, well, we are going for ML2 and he says that none of the ML3, ML4, or ML5 practices have to be

performed – in fact, he won’t even be looking at them during our appraisal.

B: Oh, uh, well, never mind…

The point is that context is important!

Keeping that in mind, consider that “tailoring” can be performed at various levels of process

decomposition, and you really have to understand the context in which tailoring is being discussed in

order to have a meaningful dialogue. As I see it, there are four layers of tailoring:

First, there’s MEGA‐“T” tailoring – essentially, selecting a life cycle. Whether you are consciously aware

of it or not, once you’ve chosen a given development life cycle, a whole lot of rituals associated with all

of the other life cycles approved for use have just been tailored out. One could think of this as “tailoring

with an ax.”

Next, there is CAPITAL‐“T,” or phase‐level tailoring. In very small projects, for example, the tailoring

guidelines may allow the Requirements and Design phases to be combined into a single “Desirements”

phase. Similarly, verification and validation activities might also be conducted in an integrated manner.

One could think of this as “tailoring with a knife.”

Then there is small‐“t” tailoring – selecting among

various methods within a given phase ‐ combining High Level Design and Low Level Design specs, for

example, or selecting either formal inspection, desk‐check review, or buddy review for verifying a given

document. This is probably what most people think about when they hear the word, “tailoring.” One

might think of this as “tailoring with scissors.”

Finally, there is micro‐“t” tailoring – adjustments that are made much closer to the bottom of the food

chain. For example, your formal inspection process may indicate that the work product to be inspected

must be pre‐published at least 3.17 days in advance of the inspection meeting, thereby giving the

participants adequate time to prepare. However, a tailoring guideline may indicate that the item can be

pre‐published with less lead time provided each participant explicitly agrees that the truncated review

period still provides them adequate preparation time. One might think of this as “tailoring with a

needle.”
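
As a concrete rendering of that micro-“t” guideline, here is a tiny sketch that allows a shorter lead time only with unanimous, explicit participant agreement; the names and data shape are hypothetical:

```python
REQUIRED_LEAD_TIME_DAYS = 3.17   # the standard pre-publication lead time from the inspection process

def lead_time_ok(lead_time_days, participant_agreements):
    """participant_agreements maps each participant's name to an explicit yes/no agreement."""
    if lead_time_days >= REQUIRED_LEAD_TIME_DAYS:
        return True
    # tailored case: allowed only with unanimous, explicit agreement from the participants
    return len(participant_agreements) > 0 and all(participant_agreements.values())

print(lead_time_ok(4.0, {}))                             # True  (no tailoring needed)
print(lead_time_ok(2.0, {"Ana": True, "Raj": True}))     # True  (tailored, everyone agreed)
print(lead_time_ok(2.0, {"Ana": True, "Raj": False}))    # False (tailoring not agreed)
```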

Until you establish the proper context for a given tailoring discussion, people may be doing a lot of

talking, but very little communicating.

. . . . .

OK, now that I’ve (successfully?) argued that context is important to conduct a meaningful conversation

about tailoring, let me set the context for what I see as one of the primary stumbling blocks with respect

to deriving maximum value from the “tailoring guidelines” implemented by many/most organizations –

the source of which is found in the last bit of the CMMI Glossary’s definition: “Tailoring guidelines

describe what can and cannot be modified and identify components that are candidates for

modification.”

Having conducted a ton of appraisals targeting ML3 or higher, I’ve noticed a couple of recurring themes

when it comes to tailoring guidelines – both of which appear to align much too closely to this portion of

the Glossary definition.

One of the recurring themes I see is sets of tailoring guidelines that indicate which process elements are

mandatory, which are strongly encouraged, and which can be tailored out. These are clearly delineated

by CMMI process area to ensure full credit is given to all GP3.1 practices when conducting an appraisal.

This aligns much too closely with that bit in the Glossary that indicates “tailoring guidelines describe

what can and cannot be modified.”

The other recurring theme I see is the use of an Excel spreadsheet that lists every conceivable arrow in the

organization’s process quiver. It includes every policy, life cycle description, process document,

methodology, workflow, procedure, work instruction, guideline, template, form, checklist, metric,

bathroom stall number, etc. that can be selected. The project manager is expected to meticulously mark

those elements that she intends to use. This theme aligns much too closely with that bit in the

Glossary that indicates, “Tailoring guidelines… identify components that are candidates for

modification.” Well, it’s more “inclusion” than “modification,” but the point still stands – explicit

decisions are made regarding the project’s means of conducting its business – its “defined process.”

(Note: Organizations having trouble conjuring up uses of formal decision making are likely to use DAR to

make the tailoring decisions on each project – thereby bagging an appraisal “twofer!”)

Having a list of mandatory, suggested, and optional process elements provides some modest value, as

does using the 1000+ row Excel checklist to make explicit project decisions regarding process element

inclusion. But, like you, I believe there is more value to be had from the concept of tailoring. Let me try

to make the point by relying on my favorite analogy – marathon running! Most long-distance

runners are concerned about the “Five H’s” – hills, heat, humidity, height (altitude), and head winds.

Two of the five, hills and height, are attributes of the course and can be considered well in advance. The

other three, heat, humidity, and head winds, are race day attributes – and, weather forecasts

notwithstanding, you really won’t know what you’re up against until the day of the race.

Magazines like “Runner’s World” have lots of articles dedicated to such topics:

Heat and How to Handle It

Don’t Sweat Running in Humidity

Running Against the Wind

These articles, written by very experienced marathoners, share secrets, hints, and tips about

adjustments you may want to consider due to race-day conditions. The authors’ objective is to arm you

with information based on their tried‐and‐true experience, but the decision to follow their advice or not

is entirely up to you. After all, what works for them might not work for you.

So why not apply this same value‐added concept to process tailoring? In addition to following the

standard script for establishing bland and boring tailoring guidelines (the one you know will pass the

appraisal), why not write your own “articles” filled with tailoring guidance? Organizations that adopt

such a “tips and hints” approach typically find it much more valuable than those that only follow the

more mechanistic approaches above. Your project retrospectives, lessons-learned sessions, or, my

favorite, project post mortems (yet another project has died; let’s examine the corpse and see what we

can find) will evolve from generating a repeatable list of what did and didn’t go well on the project, to

lively discussions that also explore the question, “If we knew then what we know now, what might we

have done differently?”

Like the “Runner’s World” articles, tailoring guidance is intended to provide experiential insight for

future projects’ consideration. Codifying the secrets, hints, and tips of how to tailor project activities to

overcome the inevitable hills, heat, humidity, height, and head winds will arm the next project team

with advice that should make it easier for them to go the distance, and hopefully that’s the value that

you were looking for.

5. We’ve Got Binders Full of Policies – Is That Sufficient for GP2.1? - Jeff Dalton

“We’ve created eighteen pages of policies (one for each Process Area), and placed them in a binder in

our CIO's office library for all of our engineers to refer to. There is a “master policy” that says:

“Everyone must follow these policies.” Is this sufficient for GP2.1, and are we Maturity Level Two?”

JEFF: Policies? We’ve got BINDERS full of ‘em [1]. And when you wipe the dust off of them they even

shine up all nice and pretty-like.

As to whether this is “sufficient,” I assume you are asking “is this evidence sufficient to demonstrate we

are performing this practice?”

I have no idea, but I’m skeptical [2].

Sometimes when people are adopting a process model like CMMI, they focus on the wrong things.

Instead of considering the business reason that the practice is present to begin with, they attempt to

reverse engineer it starting with the words and artifacts themselves. This seems clever at first. After all,

the practice says, “Establish an Organizational Policy.” How hard could that be? We have the binder;

end of story.

As it turns out, getting value out of policies is harder than it sounds.

The reason we have policies is to communicate what we expect people to do. But we can’t do that

without understanding the “who, how, and what” of the expectation. [3]

It helps me to better understand the business value of GP2.1 (and others) by first establishing a set of

model-agnostic User Stories that are designed for each end user, and then flipping that into a series of

questions that need to be answered.

So, “Establish an Organizational Policy” within VER for software engineers might become:

As a Software Engineer

I need to perform code reviews

So that I can capture defects earlier and build great software products

Some people may be satisfied with that, but I think it raises more questions than it answers. To really

understand what is expected, we need to go deeper. Hence the question(s):

• “How am I supposed to conduct code reviews?”

• “What is the company trying to accomplish?”

• “How much time/money should I spend on this?”

• “What is the code review experience expected to be?”

• “How do we know we are meeting expectations?”

It might be a simple matter to develop a code review process and tell everyone they have to follow it

(well, simple enough to just tell them) but unless that process is tied to the true expectations of our

organization how do we know we’re doing the right thing? Like artifacts, the behaviors behind our

actions matter, and the presence of actions alone does not mean we are meeting expectations (or that

we are “sufficient”). We have to be aligned with something.

When I conduct appraisals I like to explore the three primary components of a policy: Vision, Values,

and Expectations. These three components are a trio and cannot add value without alignment.

A Vision is a statement of who we want to be, and exists at the company, organization, program,

product, and team level. It is the baseline from which all behaviors are derived. An example of a

Product Vision might be:

• “We will deliver the highest quality data compression product on the market that will sell at a

premium price.”

OK, that’s a great start. I know what our process needs to support, but I need a little more.

Values describe how we will behave:

• We will trust our teams

• We will strive for transparency

• We will collaborate closely with our customers

• We will treat our suppliers like partners

Or, maybe we want something a little more “traditional” [4]:

• We will maintain strict “command-and-control” at all times

• We will manage tasks down to the hour and expect daily status reports

• We will limit communication with our customers

• We will maintain an adversarial relationship with our suppliers to keep costs as low as possible

I’m already feeling some misalignment between the second example and our Vision Statement. I call

that an “organizational type-mismatch,” and too much of this will lead to eventual failure.

The third component of a policy is expectations. Expectations help us understand what we will do to

achieve the vision while still maintaining our values. Returning to our code review example, some

expectations might be:

• We will hold iterative and incremental code reviews every two weeks

• We will invite our customers, managers, and other stakeholders

• We will record defects we find so everyone knows to avoid them in the future

• We will not focus on blame, but how to improve the process so a particular defect doesn’t happen

again

In this example, we have direct traceability from our expectations, to our values, to our vision. This

gives software engineers a crisp and clearly defined roadmap for designing a code review process that

meets the needs of our company, organization, product, and team.
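
One way to picture that traceability is as data. The sketch below, built from the code review example above, checks that every expectation points back to at least one stated value; the structure itself is just one possible representation, not a prescribed format:

```python
policy = {
    "vision": "Deliver the highest quality data compression product on the market",
    "values": ["trust our teams", "transparency", "collaborate with customers", "partner with suppliers"],
    "expectations": [
        {"statement": "Hold iterative, incremental code reviews every two weeks", "supports": ["trust our teams"]},
        {"statement": "Invite customers, managers, and other stakeholders", "supports": ["transparency", "collaborate with customers"]},
        {"statement": "Record defects so everyone can avoid them in the future", "supports": ["transparency"]},
        {"statement": "Focus on improving the process, not on blame", "supports": ["trust our teams"]},
    ],
}

# Flag any expectation that does not trace to at least one stated value
untraced = [e["statement"] for e in policy["expectations"]
            if not any(v in policy["values"] for v in e["supports"])]

if untraced:
    print("Organizational type-mismatch! Untraced expectations:", untraced)
else:
    print("Every expectation traces to at least one stated value.")
```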

I’ll leave the media to you, but we know a dusty binder organized by Process Area and signed by the line

manager in 1972 isn’t the answer. Instead, consider multiple media platforms including WIKIs, online

and classroom training, all-hands meetings, newsletters, and performance reviews, and use your binder

(or an undiscovered folder in SharePoint) as a backup.

To summarize: Who are you, how will you behave, and what will you do to succeed?

End notes:

1. Thanks for the quote, Governor Romney!

2. I was going to say “It depends,” but that was never very funny.

3. I know this is shocking to most of you, but companies regularly don’t know what they are doing, or

why they are doing it.

4. When I wrote this sentence, it seemed like this company would be an awful place to

work…unfortunately it’s what ninety percent of companies in the world are like.

6. What Is Training Capability? - Jeff Dalton

Hey, Jeff, what does the CMMI mean by “Training Capability?” We did a PowerPoint lunch session and

we think that’s good enough to be Level Three.

JEFF: Good enough? While I often say that you should have “just enough, not too much” process, this

seems more like “not enough, way too little” to me. The Organizational Training practice you're asking

about (SP1.4) packs a lot more punch than its diminutive size might indicate.

Let me give you an example. When I conduct a CMMI or Agile gap analysis, I like to schedule it so I can

see the organization in action. Artifacts and affirmations are useful indicators for determining whether a

behavior may have occurred, but nothing informs us more than seeing it for ourselves. I always try to

attend standups, sprint demos, and sprint planning sessions. I like to tour team rooms to see the

information radiators. And I like to attend any training sessions that are scheduled.

Last month I attended such a session, and left with the impression that they didn’t quite get the

Organizational Training practice “SP1.4 Establish a Training Capability.”

The group of eager students filed in and took their seats at an oval conference room table, most toting

some kind of brown-bag lunch. While we waited in uncomfortable silence, I scanned the room for

indicators of a training infrastructure.

After a few more minutes, just short of the ten-minute rule we invented in college, the company’s top

engineer, a kindly bearded fellow who had been with the team for many years, took his place at the

front of the room. In his right hand was a Scrum reference book and in the other a folder of process

documents. He gazed across the gaggle of students and said, “So, I’m here…I guess. What do you want

to know?” The “training” went downhill from there, and devolved into a spirited debate between agile

puristas and pragmatists about whether they were practicing Scrum or Scrum-but. (“We’re using Scrum,

but …”) There was no agenda, no learning objectives, and no learning outcomes – in fact, no learning at

all!

At the end of “class” I asked the instructor about it. He turned OT SP1.4 back at me by saying “SP1.4 says

our training capability only needs to address organizational training needs. And this class meets our

needs, so we’re good.”

Hmmmm. I understand it a little differently.

When I think of OT SP1.4, I envision a “training infrastructure” that includes:

- Facilities

- Materials

- Systems

- Qualified instructors, if the class is instructor-led.

So, what does it take to create a training capability?

Facilities

The right facility can make or break a training class. Planning and clear requirements will help make the

training successful and enjoyable. As part of our class kit we supply a checklist and seating chart that

describes the physical requirements for putting on a successful class. These include:

- Tables of four set up in pods with comfortable chairs

- At least four flip charts or some large white boards with dry-erase pens and erasers

- 20 x 20 space behind the tables for hands-on exercises

- Walls suitable for sticky notes and blue tape

- High quality projector with HDMI or VGA interface that is compatible with our laptops

- Projector mounted on the ceiling or an AV cart

- Screen or large white wall suitable for projection

I’ve improved my checklist over the years as I teach more classes and encounter new roadblocks. For

instance, last month I was greeted with a projector that would ONLY work with a Windows computer,

and not with my Macbook. This was due to the network interface they used to attach wirelessly to the

projector. Since my host may not have even known about this, I now include contact with the IT support

team in my class preparation.

When I arrive onsite to teach a class, I use the same checklist to QA the facility. Just

like any other process or tool, using the checklist doesn’t mean the room is always appointed as I expect

(no, the CMMI doesn’t make your customers do what you want), but it works more often than not. At

the very least it triggers someone to let us know in advance if they’re NOT going to be able to meet the

requirements. Once we have the facility covered, we’re ready to produce materials….
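
For the curious, that arrival-day QA is easy to express as a tiny script. Here is a minimal sketch built from the checklist items above; the pass/fail answers are hypothetical, and the point is simply that gaps become visible early enough to escalate to the host or IT support:

```python
FACILITY_CHECKLIST = [
    "Tables of four set up in pods with comfortable chairs",
    "At least four flip charts or large white boards with dry-erase pens",
    "20 x 20 space behind the tables for hands-on exercises",
    "Walls suitable for sticky notes and blue tape",
    "High quality projector with HDMI or VGA interface compatible with our laptops",
    "Projector mounted on the ceiling or an AV cart",
    "Screen or large white wall suitable for projection",
]

def qa_facility(answers):
    """Return the checklist items that are missing or unconfirmed."""
    return [item for item in FACILITY_CHECKLIST if not answers.get(item, False)]

answers = {item: True for item in FACILITY_CHECKLIST}
answers["Projector mounted on the ceiling or an AV cart"] = False   # hypothetical gap
for gap in qa_facility(answers):
    print("Needs attention:", gap)
```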

Materials

Well designed and professionally produced training materials will enhance learning and ground the class

in a comfortable framework that allows both instructor and attendee to focus on the learning objectives

and desired outcomes. Materials should mirror both, and need to be carefully thought out to ensure

attendees depart with the necessary knowledge to be effective in their roles. A thoughtful combination

of “follow-along,” hands-on, job-aids, and reference materials will help ensure that attendees who have

different learning styles will complete the class with the information they need to be successful in their

role.

Our “Introduction to CMMI” checklist includes the production of these four types of materials, some of

which are given to each attendee, and others shared by the class as a whole. They include:

- Follow-along: Bound PowerPoint slides and exercise book

- Job aids: Agile guideline handouts, checklists, quick reference materials, practice exams

- Hands on: Game sheets, planning poker decks, sticky notes, Legos, tennis balls, balloons, and dice

- Reference: CMMI-DEV text book, Scrum textbook

Since I often travel to teach each class, textbooks are drop-shipped to the facility, and everything else is

produced at the local Fedex Office for delivery. So that leads us to Systems…

Systems

A training capability also includes the use of systems to describe the course, house the materials,

facilitate production, register attendees, track completion, and gather feedback. This system could

include the use of technology, but could also be manual depending on size and complexity.

For our CMMI and Scrum classes we leverage the following systems:

- A WIKI catalogue to house the master descriptions of all of our courses

- Eventbrite for each course description, registration and payment

- Fedex Office cloud to house all materials and facilitate production

- Amazon Prime for ordering, purchasing, and shipping textbooks

- Excel template (CMMI classes) and corporate portal database (all classes) for tracking completion

- Paper surveys and Survey Monkey (all classes) for capturing and analyzing feedback

We have most of it in place now, so if we just had some instructors….

Qualified Instructors

Now, let’s get back to our bearded friend. He was a nice enough fellow, and he certainly had the

technical knowledge to understand the material, but a teacher? Not so much.

Organizations I work with sometimes want to put their best engineers or project managers in the

instructor’s chair, but that is often not the best choice. Having the prerequisite knowledge and

certifications, if required, is only the first gate for instructor selection. A qualified instructor should also

be able to:

- successfully convey information from multiple perspectives

- guide a class towards completing the learning objectives and desired outcomes

- deal with disruptions professionally and effectively

- provide relevant context, stories, and examples

- think quickly and improvise as needed

- use humor to make relevant points (and counter hecklers!)

- be an entertaining, interesting, and credible presenter

Being a great instructor is challenging work, and it’s not for everybody. Instructors need to know more

about the subject than anyone else in the room, be prepared for any question, no matter how nuanced,

as well as possess the aforementioned personal characteristics. Some of this can be taught, but many of

the best teachers naturally possess these skills.

So, looking back at my gap analysis, our friends in the brown-bag lunch session weren’t getting what

they deserved, but they were getting what the company paid and prepared for. Smart organizations

develop a training infrastructure to help ensure that their team members know what behaviors are

expected, and commit to training as if they are developing a valuable corporate asset.

Just like anything else with the CMMI, it’s easier to do all of this if you focus on improving performance,

rather than achieving a CMMI rating. The extra-special double bonus for your effort is increased

performance without working harder.

Our bearded instructor was asking the right question when he asked, “what do you want to know?” He

was just asking it a few months too late.

7. “Alternative,” or Just Innovative? - Jeff Dalton

What are “alternative practices” and are they really “alternative?”

JEFF: This week I completed my eighty-eighth presentation of “Introduction to CMMI-DEV” and,

needless to say, my evolution from slide-reader to evangelist took place long ago.

Me (using my best evangelical, big tent voice): “Goals are WHAAAAAT?”

Them (in a postulate tone): “R-E-Q-U-I-R-E-D!”

Me: “Practices are WHAAAAAT?”

Them: “E-X-P-E-C-T-E-D! “

Me: “Everything else?”

Them: “G-O-O-D I-N-F-O-R-M-A-T-I-O-N!” Whoo hoooo!

I followed that up with my usual sermon about what “expected” really means - it’s what the CMMI

generally expects to see in order to achieve a given goal within the context of the business model. Since

we conduct appraisals through the examination of practices and the “evidence” they collectively

produce, it’s usually a hot topic during my classes that are often filled with potential Appraisal Team

Members.

In the back of the room, a young woman (who I could have sworn had just been nodding off) raised her

hand and asked, “what if we’re just smarter than you are?”

Rut roh! Cue the music!

She was making a great point. I’ve been lucky enough to work with people who are almost universally

smarter than I am, and with so many approaches to systems and software engineering executed across

so many cultures, I am always learning about innovative ways to accomplish a task. Lead Appraisers haven’t seen everything, and the very fact that the practices are “expected” is recognition of that. Practices that truly substitute for the expected ones are, of course, known as “Alternative Practices.”

The CMMI glossary defines an alternative practice as something that is a “substitute” for one or more

Specific or Generic Practices.

This leads us to Rule #1: “Sometimes there is another way.”

In general, the CMMI does a reasonably good job of anticipating the practices a project will employ to

achieve a particular Goal, and they are described in general enough terms that they apply to most

situations. In fact, some of the practices are SO general and high-level that it leads to confusion in the market about what project teams are supposed to do!

That leads to Rule #2: “There are not many other ways.”


Every few years this topic is hotly debated on one of the CMMI message boards that has not yet been

shut down due to the religious extremism exhibited by its members, and they always end up in the same

place:

Rule #3: “Just ‘cause it’s interesting doesn’t mean you’ve discovered an alternative.”

My good friend and “Just the FAQs” writing partner, Pat O’Toole, dug into this topic a few years back in

one of his useful ATLAS studies. He asked the community to provide examples of “alternative practices”

that they had run across in their travels. His data suggests that they are few and far between – only five of the forty-four candidates were agreed by the community at large to be true alternatives to the CMMI’s Specific and Generic Practices. And some of those are, IMHO, still open for debate.

Pat calls those thirty-nine remaining practices (and I presume many others he has run across)

“alternative implementations,” and I use the term “innovative implementations.” Teams do often come

up with innovative ideas for behaviors and processes, but that, in itself, does not mean that an

alternative has been discovered (nor does it mean that the authors of the CMMI thought of that

innovative implementation when they designed the model!).

In searching for examples, people sometimes get hung up on the sequence and grouping of practices,

and they find themselves searching for that direct one-to-one alternative to a single practice. Save

yourself some time because . . .

This leads to Rule #4: “Alternatives rarely map one-to-one with the CMMI practices.”

There are often many-to-many, or at least many-to-one, relationships between an alternative and a single practice or a group of practices, creating an even greater opportunity for confusion between “alternative” and “innovative.”

In my experience, the most commonly referenced “alternatives” are related to one “agile” practice or another. Perhaps this is because they are gaining in popularity (or because they just aspire to be different…), but team members often refer to agile estimation techniques – planning poker or Fibonacci sequencing, for example – as an alternative to the CMMI’s estimation practices in Project Planning. I mean – who wants to do THOSE THINGS? But even though these are both VERY innovative approaches to estimation, they are simply collaborative techniques based on sizing that fit quite nicely into the “expected” category.

Another common suggestion is that the use of “sprints” (Scrum) or “iterations” (XP) is an alternative to the CMMI’s expectation that a “lifecycle” be defined and employed (PP, OPD, IPM, et al.). Ditto for Kanban for software. Again, while these are innovations in software product management, they are still a kind of “lifecycle” – albeit conceptually very different from what some of us experienced while we were coming up in the industry.

Here are some others I often hear:

Pair programming? Nope – peer reviews.

Retrospectives? Uh uh – Collect Process Related Experiences.


Backlog Grooming? Ah ha! That must be one! It’s REALLY alternative! Nope. It’s an innovative

combination of Requirements Management, Validation, and Project Planning. Good stuff for sure – just

not an alternative practice.

While Alternative Practices are sometimes treated as secret black magic that only the most seasoned

Lead Appraisers can interpret, my experience has been quite the opposite . . . .

This leads to Rule #5 – the final rule: “Most alternative practices are mundane and administrative.”

“What? After all this you tell me that the mythical “Alternative Practice” is BORING?”

I’m afraid that’s mostly true. Maybe not all of them (since I’m not smart enough to know them all), but

the most common ones seem to be pretty unimpressive.

- Your customer directs you to use a particular supplier.

- Your customer tells you what the architecture and technology will be.

- The government “helps” you by giving you the schedule and budget they want you to use.

- Another company performs independent V&V.

- On a ten-year maintenance project you use a “push and pop” ticketing system, since a WBS with life cycles is not that useful.

On the other hand, if your customer changes the requirements the night before launch, it is NEITHER

alternative nor innovative!

Good luck – and focus on innovation, not alternatives!


8. How Do We Develop Good PPQA Habits? - Pat O’Toole

Hey, Pat, We’ve just started implementing the CMMI and are really struggling with PPQA; any tips to

point us in the right direction?

PAT: In some ways, implementing PPQA is like learning to play golf. If you don’t go to a pro early on,

you’ll develop a lot of bad habits that will stay with you for a long, long time. So let me see if I can’t give

you those requested tips before the bad habits take hold . . . .

Let’s start by addressing one of the more common mistakes organizations make when they first embark

on their CMMI journey. Having been given an unreasonable time frame in which to achieve maturity

level 2 (ML2), the newly formed EPG spawns a series of working groups – typically one for each of the

seven ML2 process areas*. For their assigned process area, each working group is charged with

establishing the process infrastructure that supports organizational achievement of the coveted ML2

rating.

When the relatively clueless PPQA working group meets for the first time, they look up the PPQA

process area to figure out what it is and how they might proceed. They collectively read through the

one‐sentence PPQA Purpose Statement and figure, “OK, I kinda sorta get that . . . .”

But when they start reading the Introductory Notes, they don’t get past the first bullet:

The Process and Product Quality Assurance process area involves the following activities:

• Objectively evaluating performed processes and work products against applicable process

descriptions, standards, and procedures

They think to themselves, “WHAT process descriptions, standards, and procedures?? We don’t have any of those bad boys yet!”

In this regard, some contend that PPQA should really be staged at maturity level 2.5. That is, until the

other six working groups have established the process infrastructure for planning and monitoring

projects, for measuring project activities, and for managing suppliers, requirements, and configuration

items, there really isn’t a whole lot for the PPQA working group to do!

OK, so the PPQA working group hibernates until the other working groups have generated much of their

process stuff. Now what?

Well, according to the CMMI, PPQA objectively evaluates two types of things – processes and work

products. To that end, most PPQA groups generate a series of checklists, typically one checklist for each

CMMI process area (a less CMMI‐oriented approach is provided below). Such checklists are intended to

do triple duty – not only do they cover GP2.9 in their respective process areas, but collectively they are

also intended to cover both process compliance (PPQA SP1.1) and work product compliance (PPQA

SP1.2).

Left to their own devices, most PPQA groups wind up focusing much more intently on work product

compliance than process compliance. That’s understandable as it is much easier to wrap your head

around work products that you can touch and feel (and sometimes smell) than it is process stuff which is


a tad more amorphous. After all, gravitational pull is based on mass (work products) rather than energy

(processes).

But since I am trying to protect you from “your own devices,” let me suggest a way to establish a better

balance:

1. For each documented process, include “Appendix A” which is the PPQA checklist for that process. To

populate this process‐based checklist, ask yourself, “What are the most important steps in the process,

and how would we know that they were executed properly?”

2. For each template, include “Appendix A” which is the PPQA checklist for the resulting work product. To populate this work product‐based checklist, ask yourself, “What are the most important elements in the work product generated based on this template?”

Building your checklists in this manner will provide a proper balance between process compliance and

work product compliance, AND it will focus PPQA reviews on what you have deemed to be your most

important process steps and work product elements. Remember that PPQA ensures that the work is

being performed according to your process, NOT according to the CMMI – that will be handled by

appraisals found in Organizational Process Focus (OPF).
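If it helps to picture the “Appendix A” idea, here is a minimal sketch in Python of how a process-based checklist and a work-product-based checklist might be represented. The class name, field names, and sample items are purely hypothetical illustrations – nothing here is prescribed by the CMMI or by this eBook.

from dataclasses import dataclass, field

@dataclass
class PPQAChecklist:
    subject: str   # the documented process, or the work product built from a template
    kind: str      # "process" or "work product"
    items: list = field(default_factory=list)  # the most important steps or elements to verify

# Appendix A of a documented process: "What are the most important steps,
# and how would we know that they were executed properly?"
estimation_process = PPQAChecklist(
    subject="Estimation process",
    kind="process",
    items=["Size was estimated before effort and cost",
           "Historical data was consulted",
           "Estimates were reviewed with the project team"],
)

# Appendix A of a template: "What are the most important elements in the
# work product generated based on this template?"
project_plan = PPQAChecklist(
    subject="Project plan",
    kind="work product",
    items=["Milestones are dated and owned",
           "Stakeholder commitments are documented"],
)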

Before I leave the subject of PPQA checklists, let me provide one more tip . . . .

When conducting a SCAMPI A appraisal, each specific and generic practice in scope will ultimately be characterized as “Fully Implemented,” “Largely Implemented,” “Partially Implemented,” or “Not Implemented” (FI/LI/PI/NI). This four‐point scale enables the appraisal team to focus organizational attention on areas that are a bit weak (LI), very weak (PI), or abundantly absent (NI).

Unfortunately, when generating PPQA checklists, most organizations write questions of the “Yes/No”

variety. Such an approach typically leads to project folks doing the absolute minimum amount of work

necessary to rationalize a “Yes” in the review. Using a more robust scale, such as that employed by the

SCAMPI A, allows the PPQA group to treat each review more as a consulting opportunity than a

compliance check. I encourage you to give strong consideration to adopting a scale such “Fully

Compliant,” “Largely Compliant,” “Partially Compliant,” and “Not Compliant.” Rather than debating

“Yes” vs. “No,” the more robust scale leads to discussions that start with, “I gave you ‘Largely Compliant’

rather than ‘Fully Compliant’ because . . . .”

In addition, using this approach enables you to generate a numeric score for your PPQA reviews by

averaging the characterization of each checklist item using a scale such as: FC = 100; LC = 80; PC = 30;

and NC = 0. Some organizations use an approach like this to drive their sampling selection. If you score

90 or above, your project won’t be reviewed on this process for the next six months; between 75 and

90, we’ll see you in three months; less than 75, we’ll be reviewing 100% of the implementations until

you score high enough to earn a reprieve. You’ll also be placed on the list for earlier intervention (e.g.,

coaching) on your next project to make sure you “get it.”
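To make the arithmetic concrete, here is a minimal sketch in Python of the scoring and sampling scheme just described. The FC/LC/PC/NC weights and the 90/75 thresholds come straight from the paragraphs above; the function names and the sample checklist are hypothetical.

# Numeric values for each compliance characterization (from the scale above)
SCORE = {"FC": 100, "LC": 80, "PC": 30, "NC": 0}

def review_score(characterizations):
    """Average the numeric value of every item on a completed PPQA checklist."""
    return sum(SCORE[c] for c in characterizations) / len(characterizations)

def sampling_decision(score):
    """Map a review score to the sampling cadence suggested above."""
    if score >= 90:
        return "no review of this process for six months"
    if score >= 75:
        return "re-review in three months"
    return "review 100% of implementations (and schedule earlier coaching)"

# A hypothetical five-item checklist from a single PPQA review
items = ["FC", "FC", "LC", "PC", "FC"]
score = review_score(items)                   # (100 + 100 + 80 + 30 + 100) / 5 = 82.0
print(score, "->", sampling_decision(score))  # 82.0 -> re-review in three months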

All right, the other working groups are churning out their final deliverables and the PPQA working group

is partnering with them to establish Appendix A for each element. Life is good!

However, just because the working groups “build it” does not mean that “they will come!” It’s going to

take a while for (1) the rough edges to be sanded off; and (2) the value of performing the work in this


more process‐disciplined way to be recognized. The premature introduction of PPQA reviews is likely to

be counterproductive as “objectively verifying compliance” to processes that have not yet earned the

trust and respect of those being encouraged to use them changes “encouraged” to “forced” – and

NOBODY likes to be forced to do something.

Here’s how to introduce PPQA services for your new or significantly changed process/work product

elements in a manner that is much less likely to encounter significant resistance:

1. Work with the EPG to find somebody willing to pilot the new or modified process/work product and

convince them to agree to subject themselves to a “free” PPQA review.

2. When the process has been executed or the work product completed, generate a hard copy of the

corresponding PPQA review checklist (Appendix A).

3. Sitting with the person who agreed to the pilot, fill out the PPQA checklist together, discussing the

“proper” disposition of each item (FC/LC/PC/NC).

a. Note that the results of this review should not be shared with anyone else.

b. Through this exercise, you are simply “validating” the checklist items and developing the norms for

“scoring” the level of compliance for each item.

4. Based on the pilot, refine the checklist as necessary (add, change, delete items).

5. Recruit a willing participant to conduct a second “free” pilot.

6. At the end of the second process execution/work product population, generate multiple copies of the

PPQA checklist.

7. Recruit the person that conducted the first pilot, plus a number of others who will also be using the

new/modified process or template in the future.

8. Give them each a copy of the checklist and have them INDEPENDENTLY conduct the review.

9. After everyone is done, compare each item’s compliance scores and discuss the inevitable

differences.

a. Hopefully, you and the person who conducted the first pilot are reasonably well aligned.

b. Through this exercise, you are trying to:

i. Ensure that the checklist items are focused on those process/work product elements that are the

most relevant

ii. Establish norms for the proper scoring of each checklist item

iii. Demystify the way PPQA will be applied to this new/modified thingy.

c. Once again, these results should not get communicated to anyone else.

d. Engage the group by eliciting suggestions for tweaking the PPQA checklist.


i. Incorporate as many changes as you can, even if they are inconsequential. It is part of your

marketing strategy – changing the view from “your” PPQA checklists to “our” PPQA checklists.

10. OK, now you’re ready to “go live” with the new process element and the associated PPQA checklist.

a. Think about establishing a rule that the first time a person is reviewed for a specific process or template, the results are not reported (i.e., it’s a freebie).

b. You should try to establish the role of PPQA as that of “process coach” rather than that of “process

proctologist” (by, among other things, using words like “PPQA review” rather than “PPQA audit”).

OK, we’re into the lightning round – quick tips with little explanation . . . .

1. Some organizations perform their PPQA reviews after a life cycle phase has ended or, worse yet, after

the project has completed. I would STRONGLY encourage you to conduct “in progress” reviews as most

people find that information from a health check provides more timely feedback than that from an

autopsy!

2. “Objectivity” is the key to successful PPQA, not necessarily “independence.” You may find that some

PPQA reviews provide more value if they are led by subject matter experts rather than a quality

professional. Or it may be better to use a round‐robin approach: Project A provides PPQA reviews for

Project B; Project B reviews Project C; and Project C reviews Project A. Just remember to equip the PPQA

reviewers with the skills and knowledge to do a professional job.

3. Some organizations rely on peer reviews for the “work product compliance” bits of PPQA. This can be

effective, especially if enabled by: (1) the Appendix A approach to PPQA checklists; and (2) the explicit

designation of the PPQA‐like role to a specific person. I know that “Quality is everybody’s business!” but

PPQA is much more likely to be done properly if it is entrusted to a single qualified person.

4. Don’t have the EPG members also serve as the PPQA staff. PPQA is the “protector of the status quo,”

while EPG members are forever looking for ways to evolve the status quo. Having the same people do

both is likely to result in head explosions.

5. Don’t forget about PPQA GP2.9, typically referred to as “PPQA of PPQA.” Consider using the round‐

robin approach suggested in #2 above to achieve objectivity. When PPQA non‐compliance issues are

found, be sure to model “good reviewee behavior.” Don’t whine and pout; simply address the issues –

and then tell everyone you purposely introduced those non‐compliances just so you could model good

reviewee behavior!

6. Suggest that some management processes also enjoy PPQA services. After all, if it’s good for the

project teams, then it should be good for management as well! (And it might just reduce the number of

“golf course commitments” made by management).

7. Establish indicators of PPQA value. For example, count the number of incoming calls to PPQA where

their value‐added services are being requested, and subtract the number of outgoing calls made by

PPQA where they are informing the unsuspecting of an upcoming review. If this metric is positive, then

congratulations – your value is being recognized! If it’s negative, then change how you’re operating to

provide more value to those that you are reviewing (or bribe people to call in to fulfill O’Toole’s Law on

Measurement: “What gets measured gets manipulated!”)
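That last indicator boils down to a single subtraction. Here is a minimal sketch in Python, assuming you track the two counts somewhere; the function name and the sample numbers are hypothetical.

def ppqa_value_indicator(incoming_requests, outgoing_notifications):
    """Incoming requests for PPQA's value-added services minus outgoing review
    notifications. Positive means your value is being recognized; negative means
    it's time to change how you operate."""
    return incoming_requests - outgoing_notifications

# Example month: 12 teams asked PPQA for help; PPQA announced 9 upcoming reviews.
print(ppqa_value_indicator(12, 9))  # 3 -> value is being recognized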


Good luck – and if you develop any of your own PPQA‐related tips, be sure to pass them along!

* Do you really want the process infrastructure for Project Planning and Project Monitoring and Control

to be developed by different working groups? While we’re at it, shouldn’t Measurement and Analysis be

bundled with those two as well? Come to think of it, why focus on the individual process areas at all?

Why not form working groups to solve real problems – why invent new ones?


9. How Can We REALLY Know How Things Are Going? - Jeff Dalton

Hey, Jeff – We want to assess the capabilities of our product development teams. I’m reluctant to

schedule a SCAMPI appraisal because, I understand, they are no fun at all. How can we really know

how things are going? ~ Bradley S.

JEFF: Wait a second. I love conducting appraisals. No – I really do!

I know what you’re saying, Bradley. Appraisals are lengthy, tedious, soul-sucking events that drive the joy and passion out of the otherwise happy-go-lucky appraisal team members who have been “volun-told” to spend weeks locked in a room … with me! Seriously, what could be more fun?

But what I love most about appraisals is how eloquently and predictably developers, project managers,

line managers, CIOs, and CEOs tell me how awesome they are. Planning, designing, traceability (OF

COURSE!), peer reviews, collecting lessons learned (OBVIOUSLY!), and more are all described in

excruciating and colorful detail. And they’re always so great! Heck – they’ve “self-assessed” at ML5, and

if I only understood them I would agree. It’s a hoot!

After listening to all of the colorful descriptions of their awesome behavior, I train my gaze to the other

side of the room, and in the furthest corner, in the darkest reaches of that corner, in the most hidden

part of that corner, a tester is waving her hand across her neck while mouthing the words “NOT SO

MUCH.” It’s always the best part! My personal record for the length of time it takes to see this is thirty-

seven seconds, but who’s measuring?

See, if you ever REALLY want to know what is going on in your organization, ask a tester. As consumers

of distorted vision, weak requirements, poor design, and sloppy code, they are asked to behave like one of Harry Potter’s Dementors, the fantasy non-beings that take in chaos and misery, leaving the developers and project managers happy and burden-free. Tough job!

While the most common reaction to busy testers is MORE testers and tools, building great technology

products is about more than just testing code. In fact, it’s hardly about code at all. Yet most books,

articles, and conference speeches about improving software quality focus on testing tools and

automation. While these are all good and necessary discussions, we have thus far fallen short of

reaching the goal line: consistently building high quality products that delight our customers.

Isn’t it about time we add something new to the discussion?

As a Lead Appraiser and Agile coach, I am often asked to assess the capabilities of product development

teams, and I can usually ascertain strengths and weaknesses within fifteen minutes – if I start with the test

team.

Great software is an ecosystem: it is born as a glimmer in someone’s mind, matures into needs, grows into requirements, transforms into designs, and manifests itself as code, all the while being validated and verified, until it finally emerges to delight and satisfy our customers.

Instead of starting the discussion with expensive tools (who doesn’t like a new toy?), take your first step

toward building better products by using the tool between your ears to answer the following questions

that every tester asks:


What is the Product Vision? Unclear product vision manifests itself as chaos during testing because

developers and analysts take it upon themselves to interpret – or even create – product vision. Product

vision exists at multiple levels - company, product, team, and individual - and should be clearly defined

and written down at all of them. You’ll need it later when you’re pressured to cut corners after everyone

forgets why they’re building the product! Guidance and tips for developing a comprehensive product

vision live in Requirements Development SP1.1 and SP1.2, Technical Solution SP2.1, Measurement and

Analysis SP1.1, and elsewhere. Put them to work to create an integrated view of your organizational and

product goals, objectives, and outcomes so that everyone clearly understands them.

Are the Requirements any good? Most industry studies peg misunderstanding of requirements as the

primary reason we experience product defects and unhappy customers. Many testers know intuitively

that this is a serious problem that costs our industry billions of dollars – and they know it because they

are beaten up day-in and day-out by the downstream effects of weak requirements. The CMMI’s REQM

SP1.1 provides excellent guidance for identifying the most insidious defect – lack of clarity – and correcting this process defect will result in a dramatic improvement in customer satisfaction and

product quality. Adopting Test Driven Development, a staple in the agile community, is a solid

implementation of practices in Requirements Development Specific Goal 3, and this can be your best

tool to avoid the frazzled hairstyles that most testers are sporting after the “night-before-launch” test

party.

Is the Code any good? The CMMI gives scant attention to code quality, but some guidance is available in Technical Solution SP3.1, and a thorough examination should be included in any software development

appraisal effort. Clean code doesn’t happen on its own, but is the result of well-established behaviors,

processes, and coding standards: naming conventions, formatting standards, variable passing

conventions, complexity guidelines, and of course, code reviews. I learned this lesson many years ago

when I received a curt note from the head of the testing team titled “thank you for letting me do your

unit testing for you.” If a tester tells you they are capturing defects that fall into the aforementioned list, it’s a strong indicator that clean code is not at the top of anyone’s list.

Are we always improving? I tell my classes that “those who are always improving, win.” There is ample

guidance in the CMMI about continual improvement – in fact, the ENTIRE CMMI model is about this subject – but that doesn’t stop numerous Maturity Level Three companies from ignoring the advice in

Generic Practice 3.2, or the dynamic duo of Integrated Project Management and Organizational Process

Focus. Testers will immediately tell you whether they are seeing the same types of defects over and

over (and over) again. Listen to them.

Appraisals can be tedious and time consuming, and everyone is looking for ways to optimize and

accelerate the process (while reducing cost). So, with that in mind, I have a humble suggestion – start by

asking a tester. You’ll be glad you did, and they might enjoy being in the spotlight for once!


About the Authors

Meet the CMMI Appraiser, Jeff Dalton

Jeff Dalton is President of Broadsword Solutions Corporation. As the pioneer of using Agile methods to

implement CMMI-based solutions that improve software development and organizational processes,

Jeff is a Certified Lead Appraiser and author of “AgileCMMI,” Broadsword’s leading methodology for

incremental and iterative process improvement. He is a recipient of the prestigious Software

Engineering Institute’s SEI member award as an outstanding representative, for his work uniting the Agile and CMMI Communities through his popular blog “Ask the CMMI Appraiser.” He holds degrees in Music

and Computer Science and builds experimental airplanes in his spare time. You can reach Jeff at

[email protected].

You can read Jeff’s blog at Ask the CMMI Appraiser.

For more eBooks on CMMI, visit Jeff’s author page on Amazon.

About Broadsword Solutions Corporation

Broadsword is an SEI Partner, CMMI Institute Partner, and performance innovation firm that is the world leader in using Agile and Lean methods to drive high-performance engineering through its AgileCMMI methodology and collaborative consulting and coaching solutions. Working with great clients

like Rockwell Collins, NASA, Boeing, Chrysler, Compuware and L3 Communications, Broadsword’s

methods and success are proven throughout North America and the world.

Broadsword is based in southeastern Michigan. They can be reached at www.broadswordsolutions.com,

+1 248-341-3367 or [email protected].


Meet Pat O’Toole

As the Principal Consultant with Process Assessment, Consulting, and Training (PACT), Pat O’Toole works

with all levels of management, Engineering Process Groups (EPGs), and Process Action Teams in

establishing, evaluating, and sustaining their process improvement initiatives. Pat is one of the most

active CMMI Institute-certified SCAMPI high maturity lead appraisers, and has led some of the largest

and most complex maturity level 5 appraisals ever conducted. Pat has published over 70 articles on

process improvement – including his popular “Do’s and Don’ts” series, the ATLAS (“Ask The Lead

Appraiser”) studies, and now, partnering with Jeff Dalton, “Just the FAQs.”

He is a CMMI Institute Partner for the “Intro to CMMI” course, having taught the course more than 75

times. Pat is an Independent Consultant at the CMMI Institute, where he teaches the “Intro to CMMI-DEV,” “Intro to CMMI-SVC,” and “Intermediate Concepts” courses, and also serves as an observer for

“Intro to CMMI” candidate instructors. Pat is currently serving as a member of the 9-member Partner

Advisory Board, and on 3 of the 8 CMMI Next Gen working groups.

You can reach Pat at: [email protected] or at the PACT website at http://pactcmmi.com/.

CMMI, SEI and Capability Maturity Model Integration are registered trademarks of Carnegie Mellon University