
Page 1: TechTarget SoftwareRequirements Managementviewer.media.bitpipe.com/.../IBMsSoftwareQualityE... · Qualityassurance(QA)andtesting'sroleinrequirements By Robin F. Goldsmith, JD The

Software Requirements Management

Many software project defects originate in the requirements gathering process, and project failures can often be attributed directly to poor requirements management. This E-Guide examines how to improve requirements management and, specifically, how to improve software quality through proper requirements management. Learn the tester's role in requirements and some of the hidden obstacles when integrating QA and testing teams into the requirements process; specific requirements-related metrics which improve the software quality process; and how to avoid costly changes to requirements during the development process.

Sponsored By:

TechTarget Application Development Media

E-Guide


Table of Contents

Sponsored by: Page 2 of 14


Quality assurance (QA) and testing's role in requirements

Which requirements have the greatest effect on quality in software development?

How to avoid requirements creep

Resources from IBM


Quality assurance (QA) and testing's role in requirements

By Robin F. Goldsmith, JD

The typical late involvement of quality assurance (QA)/testing in software projects certainly limits their effectiveness. By the time many QA pros and testers get involved, all the development defects are already in the code, which makes detecting them much harder and fixing them much more costly. Moreover, late involvement limits how many of those errors are actually found, due to both lack of time to test and QA/testing's lack of knowledge about what to test.

Thus, virtually every QA/testing authority encourages the widely accepted conventional wisdom that QA/testing should get involved early in the life cycle, participating in the requirements process and especially in requirements reviews. The reasoning goes that by getting themselves admitted during requirements definition, QA/testing can point out untestable requirements and learn about the requirements early enough to enable developing more thorough tests by the time the code arrives. Such well-intentioned and seemingly obvious advice unfortunately has a number of hidden pitfalls that in fact can further reduce QA/testing's effectiveness.

Participating in defining requirements

In many organizations, QA/testing receives inadequate information upon which to base tests. Inadequacy comes from several directions. The documentation of requirements and/or design frequently is skimpy at best and often is incomplete, unclear and even wrong. Furthermore, QA/testing often receives the documentation so late that there's no time to create and run sufficient suitable tests.

QA/testing absolutely has a need to learn earlier and more fully what the requirements and design are. However, involving QA/testing in requirements definition and/or review raises several critical issues.

Quite simply, defining requirements is not QA/testing's job. Many organizations have some role(s) responsible for defining requirements, such as business analysts. For some, the business unit with the need is charged with defining their requirements for projects; in other organizations, the project manager or team members may end up defining requirements as part of their other project activities. QA/testing may be one of the responsibilities of these other roles, but I've not heard of any competent organization that makes such other tasks part of the QA/testing role.

Consequently, if QA/testing staff members are assigned to participate in defining requirements, they're probably going to be in addition to (rather than instead of) those regularly performing the task. Not many places are likely to double their definition costs, especially not for something which offers no apparent benefits and may even be detrimental to the definition itself.

While some folks in QA/testing may have some knowledge of the business, one cannot assume they will, and many of those in QA/testing may lack relevant business knowledge. Moreover, it's hard enough to find business analysts with good requirements definition skills, and they're supposed to be specialists in defining requirements. There's no reason to expect that QA/testing people, for whom requirements definition is not a typical job function, would have had any occasion to develop adequate requirements definition skills. Piling on the costs by including QA/testing people in requirements definition would be unlikely to help and could just get in the way.

Participation in reviewing requirements

On the other hand, it's much more logical to include QA/testing specialists in requirements reviews. After all, reviews are a form of QA/testing. In fact, some organizations distinguish QA from testing by saying QA performs static testing, primarily reviewing documents, whereas testing (or quality control, QC) executes dynamic tests of products.

Organizations with such a distinction frequently make QA responsible for reviewing requirements, designs and other documents. It's not these organizations, but rather all the others, in which QA/testing is clamoring for admission to requirements reviews.

In organizations where requirements reviews are run by someone other than QA, such as the business units/users or management, there may be resistance to allowing QA/testing to join reviews. An obvious reason would be that limited review budgets may not allow for the added costs of QA/testing staff's time attending reviews.

Of course, budgeted expenses could be shifted from later project activities that presumably would require less effort due to QA/testing's participation in reviews. Nonetheless, such seemingly logical budget shifts often are not made, especially when the future savings go to a different part of the organization from that charged for reviews.

However, the bigger but often less apparent obstacle is a surprisingly (to QA/testing) common perception that adding QA/testing to reviews not only may provide no significant positive value but could actually have a negative impact on review efficiency and effectiveness. In such cases, the already stressed rest of the organization is unlikely to go out of their way just to help QA/testing meet its needs. Such rejection often is couched in terms of limited budget, but it may be based on not really wanting QA/testing involved.

The "testability trap"

Why would people feel that QA/testing actually impedes reviews? I call it the "testability trap." In the QA/testing industry, widely held conventional wisdom is that lack of testability is the main issue with requirements. Generally, lack of clarity makes a requirement untestable. An unclear/untestable requirement is likely to be implemented incorrectly, and regardless, without being able to test it, QA/testing has no way to detect whether the requirement was implemented correctly.
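To make the distinction concrete, here is a small, hypothetical Python sketch (not from the article) of what "testable" means in practice. A vague requirement such as "searches must respond quickly" cannot pass or fail; quantified as, say, "90% of searches complete within 2 seconds," it can be checked mechanically:

```python
def meets_response_requirement(samples_ms, limit_ms=2000, percentile=0.9):
    """Testable form of 'searches must respond quickly':
    at least `percentile` of the measured response times
    (in milliseconds) must fall within `limit_ms`."""
    if not samples_ms:
        return False  # no evidence means the requirement is unverified
    within = sum(1 for t in samples_ms if t <= limit_ms)
    return within / len(samples_ms) >= percentile

# Hypothetical measurements: 9 of 10 samples within 2000 ms -> passes at 90%.
measurements = [850, 1200, 1900, 2100, 950, 1400, 1750, 900, 1100, 1300]
print(meets_response_requirement(measurements))  # True
```

The numbers and the 90th-percentile framing are illustrative; the point is only that a quantified requirement yields a yes/no answer where the vague one does not.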

Consequently, it's common for comments of QA/testing folks who have been let into requirements reviews to focus almost entirely on the various places in the requirements they feel lack testability. The less they know about the business domain, the more they are stuck speaking only about lack of testability.

While testability is indeed important, frequently it mainly matters to QA/testing, and their repeated review comments about lack of testability can seem like so much annoying noise to the rest of the review participants. In such instances, the presence of QA/testing can be perceived as simply getting in the way of the review, tying up participants with trivial nitpicking. At best, QA/testing may be ignored; sometimes they even get "disinvited" from participating in further requirements reviews.

Be prepared to contribute

The key to not wearing out one's welcome is contributing productively to the review in ways that all participants recognize as valuable. That takes familiarity with the subject area content and with more effective review techniques.

QA/testing people are not only unlikely to have requirements definition skills, they also often have little familiarity with the business domain subject area that is the topic of the requirements. The difficulty can be especially acute for QA/testing groups charged with performing requirements reviews. Since they'll probably be dealing with a wide variety of business areas, chances are lower that they'll know much about any of the many areas.

Requirements are all about content. To contribute effectively to reviews, it's incumbent upon QA/testing to learn about the relevant business content before reviewing related requirements. Because few organizations recognize the need for such preparation, time and budget are seldom provided for it. Therefore, the burden will be on QA/testing to make the time, quite possibly on their own time, in order to enable them to comprehend content sufficiently to contribute productively to the reviews.

Review technique effectiveness

Understanding content is necessary but not sufficient. Most reviews are far weaker than recognized, largely because the reviewers don't really know what to do, how to do it, or how to tell whether they've done it well. Group reviews generally enhance findings because multiple heads are better than one, but they still find far fewer issues than they could, or than participants presume they've found.

With the best of intentions, typical reviewers look at the requirements and spot, in a somewhat haphazard manner, whatever issues happen to occur to them. Even though they may be very familiar with the subject area, it's easy for them to miss even glaring errors. In fact, their very familiarity sometimes causes them to miss issues by taking things for granted: their minds may fill in gaps unconsciously or fail to recognize something that wouldn't be understood adequately by someone with less subject expertise.

Moreover, it's hard for someone to view objectively what they're accustomed to and often have been trained in and rewarded for. QA/testing emphasizes the importance of independent reviewers/testers because people are unlikely to find their own mistakes. Yet, surely the most common reviewers of requirements are the key business stakeholders who were the primary sources of the requirements.

In addition, it's common for typical reviewers to provide insufficient feedback to the requirements writers. For example, often the comments are not much more than, "These requirements need work. Do them over. Do them better." The author probably did as well as they could, and such nonspecific feedback doesn't give the author enough information about what, why or how to do differently.


Formal requirements reviews

Many authorities on review techniques advise formalizing the reviews to increase their effectiveness. Formal reviews are performed by a group and typically follow specific procedural guidelines, such as making sure reviewers are selected based on their ability to participate productively and are prepared so they can spend their one- to two-hour review time finding problems rather than trying to figure out what the document under review is about.

Formal reviews usually have assigned roles, including a moderator who is like the project manager for the review, a recorder/scribe to assure review findings are captured and communicated, and a reader/leader other than the author who physically guides reviewers through the material. The leader often is a trained facilitator charged with assuring all reviewers participate actively. Many formal reviews keep key measurements, such as length of preparation time, review rate and number of issues found. Detailed issues are reported back to the author to correct, and a summary report is issued to management.
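The measurements mentioned here are simple to compute once captured. A minimal sketch, with names and data of my own invention rather than anything prescribed by the article:

```python
def review_metrics(pages, prep_minutes, session_minutes, issues):
    """Summarize one formal review session: preparation time,
    review rate (pages/hour), and issues found."""
    hours = session_minutes / 60
    return {
        "prep_minutes": prep_minutes,
        "review_rate_pages_per_hour": pages / hours,
        "issues_found": len(issues),
        "issues_per_page": len(issues) / pages,
    }

# Hypothetical session: 12 pages reviewed over 2 hours after 90 minutes of prep.
summary = review_metrics(
    pages=12,
    prep_minutes=90,
    session_minutes=120,
    issues=[
        "REQ-4 uses an ambiguous term",
        "REQ-7 missing error case",
        "REQ-9 conflicts with REQ-2",
    ],
)
print(summary["review_rate_pages_per_hour"])  # 6.0
print(summary["issues_per_page"])             # 0.25
```

Tracking these per review makes unusually fast (likely superficial) reviews and unusually low defect yields visible to management.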

Some formal reviews have the reviewers independently review the materials and then report back their findings in the group review session. Often each reviewer reviews a separate portion of the material or looks at the material from a specific perspective different from each of the other reviewers' views. Other formal reviews work together as a group through the materials, frequently walking through the materials' logic flow, which typically covers less material but may be more effective at detecting problems.

Proponents of some prominent review methodologies essentially rely solely on such procedures to enable knowledgeable reviewers to detect defects. However, I've found that it's also, and probably more, important to provide content guidance on what to look for, not just on how to look at the material under review and on assuring reviewers are engaged.

For example, in my experience, typical reviews tend to miss a lot more than recognized for the reasons above and because they use only a few review perspectives, such as checking for clarity/testability, correctness and completeness. Often, such approaches find only format errors, and then sometimes only the most blatant, while missing content issues.

In contrast, I help my clients and seminar participants learn to use more than 21 techniques to review requirements and more than 15 ways to review designs. Each different angle reveals issues the other methods may miss. The more perspectives that are used, the more defects the review detects. Moreover, many of these are more powerful special methods that also can spot wrong and overlooked requirements and design content errors that typical weaker review approaches fail to identify.
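As a toy illustration of the multiple-perspectives idea, each "perspective" below is a separate pass over the same requirements, and each finds issues the others ignore. The specific checks are invented for demonstration and are far cruder than the techniques the author describes:

```python
import re

def check_vague_terms(req):
    """Perspective 1: flag classic weasel words."""
    vague = ["quickly", "user-friendly", "appropriate", "etc."]
    return [w for w in vague if w in req.lower()]

def check_unquantified_comparatives(req):
    """Perspective 2: flag comparatives with no number attached."""
    return re.findall(r"\b(faster|better|more|less)\b(?!\s+than\s+\d)", req.lower())

def check_escape_clauses(req):
    """Perspective 3: flag phrases that let the requirement be skipped."""
    return re.findall(r"\b(if possible|as needed|where applicable)\b", req.lower())

PERSPECTIVES = [check_vague_terms, check_unquantified_comparatives, check_escape_clauses]

def review(requirements):
    """Apply every perspective to every requirement; collect all findings."""
    findings = []
    for i, req in enumerate(requirements, 1):
        for perspective in PERSPECTIVES:
            for issue in perspective(req):
                findings.append((i, perspective.__name__, issue))
    return findings

reqs = [
    "The system shall respond quickly to search queries.",
    "Reports shall be generated faster, if possible.",
]
for finding in review(reqs):
    print(finding)
```

Each checker alone misses two of the three problems; together they catch all three, which is the "more perspectives, more defects" point in miniature.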

When QA/testing truly contributes to reviews by revealing the more important requirements content errors, and not just lack-of-testability format issues, business stakeholders can appreciate it. When the business stakeholders recognize QA/testing's review involvement as valuable to them, they're more likely not only to allow participation, but to advocate it.


About the author: Robin F. Goldsmith, JD, has been president of consultancy Go Pro Management Inc. since 1982. He works directly with and trains business and systems professionals in requirements analysis, quality and testing, software acquisition, project management and leadership, metrics, process improvement and ROI. Robin is the author of the Proactive Testing and REAL ROI methodologies and also the recent book Discovering REAL Business Requirements for Software Project Success.


Which requirements have the greatest effect on quality in software development?

By Robin F. Goldsmith, JD

What are the metrics in the requirements phase of the software development life cycle which are responsible for improving software reliability?

Let me suggest there are at least two possible ways of interpreting this question, and I'll briefly respond to each:

What are some possible metrics requirements that would lead to continued improved reliability in the finished software?

A product/system/service/software is reliable when it repeatedly performs in the same manner. That reliability needs to be coupled with the performance being suitable. To assure and improve continued suitable performance, measures need to be taken regularly of the performance and the factors which reasonably influence the performance.

Suitable performance includes things typically thought of as "performance," such as the range of required load capacities handled and sufficient response and/or throughput speeds under each of the given loads (typically a set of ranges or quantified definitions of high, medium and low) and operational environments. For instance, a webpage of given size and complexity might have to download fully to a user's PC in less than two seconds when using a high-speed connection and less than ten seconds when using a 56K dial-up connection. Such measures usually are defined in terms of averages over multiple usages and/or durations.
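That download-time requirement can be expressed directly as a check over averaged measurements. The two-second and ten-second thresholds come from the example above; the connection-type names and sample data are hypothetical:

```python
# Per-connection average load-time limits, in seconds (from the example above).
THRESHOLDS_SECONDS = {"high_speed": 2.0, "dial_up_56k": 10.0}

def average_load_ok(samples_seconds, connection):
    """True if the mean of the sampled page-load times meets the
    threshold for the given connection type."""
    mean = sum(samples_seconds) / len(samples_seconds)
    return mean <= THRESHOLDS_SECONDS[connection]

print(average_load_ok([1.4, 1.9, 2.3, 1.6], "high_speed"))  # mean 1.8 s -> True
print(average_load_ok([9.0, 11.5, 12.0], "dial_up_56k"))    # mean ~10.8 s -> False
```

A real measurement program would also track percentiles and sample counts, but even this minimal form turns the requirement into something a monitoring job can evaluate repeatedly.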

Suitable performance also includes providing required functionality, including suitable usage under identified usage situations, at specified levels of accuracy and adequacy. For example, that downloaded web page must display on the user's browser in a readable manner, both with regard to visibility and placement. Visibility in turn could be defined with respect to brightness, contrast, and color-blindness metrics. Text and images also should be interpreted in a consistent and correct manner. Functionality also means that the content is correct.

Defects are failures to meet the specified requirements. The software is deemed to provide suitable functionality when it has no more than the number of acceptable defects of each severity categorization for given degrees of usage of that product. For instance, software running a life-critical medical device probably cannot tolerate any high-severity defects that cause the product not to function at all, and perhaps no more than ten low-severity defects such as cosmetic issues with packaging -- recognizing that a seemingly cosmetic packaging defect could be high-severity if it causes the device to be misused.
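The acceptance rule described here, no more than the allowed number of defects at each severity, can be sketched as a simple threshold check. The zero-high and ten-low limits mirror the medical-device example; the medium limit is an invented placeholder:

```python
# Allowed open-defect counts per severity (medium is a placeholder value).
ACCEPTABLE_DEFECTS = {"high": 0, "medium": 3, "low": 10}

def release_acceptable(defect_counts):
    """True only if, for every severity level, the open-defect
    count is within its acceptable limit."""
    return all(
        defect_counts.get(severity, 0) <= limit
        for severity, limit in ACCEPTABLE_DEFECTS.items()
    )

print(release_acceptable({"high": 0, "medium": 2, "low": 7}))  # True
print(release_acceptable({"high": 1, "low": 4}))               # False: any high-severity defect fails
```

The useful property is that the requirement itself (the thresholds) is data, so changing the acceptance criteria does not change the checking logic.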

Realize also that battery failure would be considered a high-severity defect if it occurs within, say, five years of device installation, but perhaps acceptable functionality thereafter, provided the device has suitable means of alerting about and replacing a dying battery. Metrics for the alerting and replacing also could be defined.


What are some requirements-related metrics which would improve the software process' ability to produce more reliable software?

Many of the relevant requirements-related metrics actually are collected later in the software process and include the results measures as described above regarding product reliability.

In addition, it's important to measure the activities involved in defining the requirements and the indicators of requirements quality available during the development process prior to the product results themselves.

Requirements definition measures include the effort and duration of requirements discovery activities per skill type and level, including users/stakeholders. Such activities include differentiating data gathering techniques, analysis methods, drafting, review, and revision. If tools are employed, they need to be reflected too.

Requirements quality should be measured at each point where requirements issues can be detected: especially requirements reviews, but also when creating and reviewing designs and code, and at each of the levels and types of testing.

One of the big challenges with measures of effort and duration is that they don't reflect skill and the application of appropriate methodology, which undoubtedly are the far greater determinants of requirements adequacy, which in turn is manifested in resulting product reliability.

For instance, the conventional wisdom is that lack of user involvement reduces requirements quality, which is true, but too often leads to the overly simplistic conclusion that merely increasing user involvement is sufficient to produce suitable requirements. The fallacy of this seemingly logical conclusion is that just spending more time doing the same ineffective things the same ineffective ways won't improve your requirements appreciably.

For 15 years, the Standish Group's CHAOS report repeatedly has attributed poor project outcomes mainly to inadequate user involvement. Everyone knows user involvement is important. What they don't know, because conventional analyses such as the Standish Group's don't know it either, is that conventional requirements definition is destined to inadequacy regardless of how much time users are involved.

So long as requirements definers focus mainly on requirements of the product/system/software they expect to create, they will continue to encounter creep and product inadequacy, including unreliability. Such issues diminish dramatically when one learns to effectively first discover the REAL business requirements, the deliverable whats that provide value when satisfied by the product/system/software hows.



How to avoid requirements creep

By Robin F. Goldsmith, JD

Most projects experience requirements creep -- changes to requirements which had supposedly been settled upon. Creep is a major cause of budget and schedule overruns, which in turn cause stress, mistakes and damaged reputations. So how can you avoid requirements creep and its side effects?

Creep continues to plague projects despite decades of attention directed toward defining requirements. Whether couched in terms of "systems analysis," "structured analysis," "object-oriented analysis," "business analysis," or variations on "requirements," these contributions to requirements definition have made marginal gains without appreciably curtailing requirements creep.

I've found that creep persists because these conventional approaches keep missing the critical component I call "REAL" business requirements. Let's take a familiar example.

The ATM example

A surprisingly large number of speakers and authors on requirements use automated teller machine (ATM) examples. Rather than making up my own ATM example, I've simply listed some of the typical requirements stated by these other authorities. An ATM must:

• Require the customer to insert their card.

• Read the encrypted card number and ID data from the magnetic strip.

• Require the customer to enter a PIN (personal identification number).

• Match the entered PIN to a calculated PIN or PIN on file.

• Accept envelopes containing deposits or payments.

• Dispense cash in multiples of $10.

• Display selected account balances.

• Transfer indicated amounts between the customer's accounts.

• Issue printed receipts.

• Return the customer's card.

Most people I ask agree that those are the requirements.

If those are the requirements, then what are these?


• Provide secure, confidential access to banking services at a time and location convenient to the customer.

• Reliably, quickly and easily confirm the identity and account of the customer.

• Enable the customer to perform ordinary bank transactions involving accounts quickly, accurately and safely.

• Provide the customer with documented evidence of transactions at the time of the transactions.

I would contend that the latter are the REAL requirements -- business deliverable whats that provide value when they are delivered or satisfied. Business requirements are the requirements of the user, customer or stakeholder; therefore I treat these terms as synonymous with "business." Business requirements are conceptual and exist within the business environment, so they must be discovered. There are many possible ways to satisfy the REAL business requirements.

In contrast, the former list actually consists of requirements of a product or system. Meeting the product requirements provides value if and only if it actually meets the REAL business requirements of the user, customer or stakeholder.
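One way to act on this distinction is to trace each product requirement back to a business requirement. The sketch below (all IDs and wording are illustrative, not a prescribed method) flags product requirements with no business "what" behind them and business requirements that no product "how" yet covers:

```python
# Business requirements: deliverable "whats" (IDs and text invented for illustration).
business_reqs = {
    "BR1": "Reliably, quickly and easily confirm the identity and account of the customer",
    "BR2": "Provide documented evidence of transactions at the time of the transactions",
}

# Product requirements: "hows", each (ideally) traced to a business requirement.
product_reqs = {
    "PR1": ("Require the customer to enter a PIN", "BR1"),
    "PR2": ("Issue printed receipts", "BR2"),
    "PR3": ("Dispense cash in multiples of $10", None),  # no traced business requirement
}

# Product "hows" with no business "what" behind them: candidates for challenge.
untraced = [pid for pid, (_, br) in product_reqs.items() if br not in business_reqs]

# Business "whats" no product requirement covers: gaps likely to surface as creep.
covered = {br for _, br in product_reqs.values()}
uncovered = [br for br in business_reqs if br not in covered]

print("Untraced product requirements:", untraced)   # ['PR3']
print("Uncovered business requirements:", uncovered)  # []
```

Whether kept in a spreadsheet or a requirements tool, this kind of bidirectional trace makes the "product requirement satisfying no business requirement" problem visible before it shows up as creep.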

Let me add two notes to anticipate frequently asked questions. First, I use "REAL" in all capitals to emphasize and distinguish in a way that survives reformatting, which destroys italics, bold and underlining, in the requirements we end up with (as opposed to those we start with, which are generally modified until they're more accurate and complete -- including those additions and changes identified through the process called creep).

Second, I recognize that use case advocates often consider use cases to be the user's requirements and vice versa. In fact, use cases are merely a format which can be used to document user or business requirements, but use cases usually document the expected usage of a product, system or software.

Two creep scenarios

Indeed, an ATM could be created which would satisfy all the items on the former list and not satisfy the latter, in which case the ATM would not provide value; additional requirements would be defined to overcome the discrepancy, and people would call it creep. Invariably, this process is a form of trial and error, trying some additional product features until value is provided by meeting the (typically unarticulated) REAL business requirements. For many projects, this type of creep is embedded within typical iterative development and thus may not actually be characterized as creep.

Consider a somewhat different scenario. Let's say the customer has signed off on an ATM system we developed that indeed meets those former requirements. Then the customer requests some other capability, for example, that we enable the ATM to identify customers biometrically with fingerprints or retinal scans. That's classic creep, isn't it? People would say the requirements have changed appreciably after we thought they'd been established. Undoubtedly it will take a lot more time and money to implement this type of change than if we'd known about it in the first place.


Both of these forms of creep are commonly attributed to constantly changing business conditions -- and, frankly, both the changing and the creep are often considered inevitable. This premise that it's impossible to ascertain the requirements is the basis for some supposed "best practices" for development methodologies, which actually make a virtue of not discovering the requirements and turn change and its resulting creep into a self-fulfilling prophecy.

Let me suggest instead that the REAL business requirements haven't changed nearly so much as our awareness of them. Had we in fact explicitly identified the REAL business requirements, instead of focusing only on the product design, we could have been attentive to all the ways, including biometrics, which would enable us to "reliably, quickly and easily confirm the identity and account of the customer."

The cause of requirements creep

In most requirements books and in common usage, product requirements are referred to as the requirements. Often they are just called "requirements," but they may be characterized as "product," "system," "software" or "functional" (and its associated "nonfunctional") "requirements," and sometimes they are called "business requirements" even though they in fact describe a product.

A widely held conventional model does use the term "business requirements," which are considered to be only a different level of detail or abstraction. That is, the conventional model considers business requirements to be high-level and vague, mainly objectives, which decompose into detailed product requirements. Thus, if you have a requirement which is detailed, it's considered a product requirement, and if it's high-level and vague, it's considered a business requirement.

Conventional wisdom is that creep is caused by (product) requirements which are not sufficiently clear or testable. Lack of testability is usually due to lack of clarity. In fact, while clarity and testability are important, much of creep occurs because the product requirements, regardless of how clear and testable they are, don't satisfy the REAL business requirements, which haven't been defined adequately.

All the detail in the world on the product requirements (how to deliver) cannot make up for not knowing the REAL business requirements (what is needed). The key to avoiding requirements creep is to realize that REAL business requirements are not high-level and vague, but must be defined in detail. They do not decompose into product requirements; they are qualitatively different. How is a response to what, and when the whats aren't adequately defined, hows can only be based on presumptions. In contrast, detailed REAL business requirements map to product designs, which don't creep nearly so much.

Making the change

I think it's safe to say that all conventional requirements methodologies acknowledge the need to know the user's, customer's or stakeholder's requirements -- and think they are addressing them. However, requirements creep continues because of the flawed conventional model that erroneously presumes that the user's objectives decompose into detailed requirements of a product we can create.

Software Requirements Management

How to avoid requirement creep


Thus, the most common requirements definition question ("What do you want?") actually invites the stakeholder to describe/design a product which presumably will meet their needs. Focusing on the product hows prevents adequately discovering the business deliverable whats the product must satisfy to provide value. It also frequently interferes with customer cooperation and creates adversarial situations, which further impede finding out the whats. This leads to creep, which ironically often results in finally finding out the REAL business requirements. By recognizing what's really happening, we can apply three key methods to avoid much of the highly inefficient back and forth of creep:

1. Adopting this more appropriate model enables us to go directly to discovering more of the REAL business requirements, rather than being backed into them through creep.

2. Learning about the business and how to better pay attention to the business people helps us understand needs from their perspective.

3. Applying the powerful "problem pyramid" tool (which will be discussed in a later article) will help guide more direct and accurate identification of the problem, value measures, causes and business requirements that provide the value by addressing the problem.

We can usually avoid requirements creep by mapping the detailed REAL business requirements to a product feature that will satisfy those requirements and thereby provide value.
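The mapping described above can be made concrete as a simple traceability check. The sketch below is illustrative only (the requirement IDs, feature names and the trace function are hypothetical, not from the article): it flags business requirements no feature satisfies (creep waiting to surface) and features anchored to no known business requirement (presumed hows).

```python
# Minimal traceability sketch: map detailed REAL business requirements
# ("whats") to the product features ("hows") claimed to satisfy them.
# All data here is invented for illustration.

business_requirements = {
    "BR-1": "Reliably, quickly and easily confirm the identity of the customer",
    "BR-2": "Reliably, quickly and easily confirm the account of the customer",
    "BR-3": "Record each confirmation for later audit",
}

# Each product feature lists the business requirements it claims to satisfy.
product_features = {
    "Card-swipe login": ["BR-1"],
    "Biometric check": ["BR-1"],
    "Account lookup": ["BR-2"],
    "Marketing splash screen": [],
}

def trace(requirements, features):
    """Return (requirements no feature satisfies, features tied to no requirement)."""
    covered = {br for claimed in features.values() for br in claimed}
    unsatisfied = sorted(set(requirements) - covered)
    unanchored = sorted(name for name, claimed in features.items()
                        if not any(br in requirements for br in claimed))
    return unsatisfied, unanchored

unsatisfied, unanchored = trace(business_requirements, product_features)
print("Requirements with no feature (likely to surface as creep):", unsatisfied)
print("Features anchored to no requirement (presumed hows):", unanchored)
```

The point of the sketch is the direction of the mapping: the whats come first and are detailed, and each how must justify itself against them, rather than the whats being reverse-engineered from a product design after creep sets in.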

About the author: Robin F. Goldsmith, JD, has been president of consultancy Go Pro Management Inc. since 1982. He works directly with and trains business and systems professionals in requirements analysis, quality and testing, software acquisition, project management and leadership, metrics, process improvement and ROI. Robin is the author of the Proactive Testing and REAL ROI methodologies and also the recent book Discovering REAL Business Requirements for Software Project Success.



Resources from IBM

The Rational Approach to Automation

Boost Agility - 10 Best Practices for Requirements Driven Software Delivery (WC)

Change Management Unleashed: Introducing a new solution for basic bug and issue tracking (WC)

About IBM

At IBM, we strive to lead in the creation, development and manufacture of the industry's most advanced information technologies, including computer systems, software, networking systems, storage devices and microelectronics. We translate these advanced technologies into value for our customers through our professional solutions and services businesses worldwide.