Test Execution Processes

By Rex Black and Torsten Baumann


In the last issue, I dealt with a collaborative test process, getting test releases into the test lab. In this article, let's look instead at an internal test process, managing the execution of tests against a test release. One can think of the test manager's role during the test execution process as one of managing a search for the unexpected. While all managers are expected to manage unexpected events within their areas of responsibility, the test manager is the only manager in the software development organization whose area of responsibility is searching for the unexpected. This search takes the form of running a set of tests. How does a test team take these tests, run them against an installed system, and capture information about the system under test, the test cases, and the test execution process itself?

One can consider this topic broadly - all the processes involved in getting through a test cycle - but that's like the loose thread on a sweater. If I try to discuss test execution processes with that scope, we'll soon find they're attached to everything else that happens in the whole development organization during that time period. Instead, let's narrow the scope to include only those internal processes that the test team performs to run tests and report results. Let's further assume that we've already gone through the release management process, so a build is present in the test lab, ready for testers to start pounding on it.

Narrowing the scope does not affect the criticality of the remaining pieces. A competent test manager must be able to manage this process crisply, in the face of unexpected events and findings, and add value consistently for the organization. The spotlight is on the test team during test execution, and failing to execute this internal process deftly will lead to fuzzy, incomplete, or inaccurate test status reports to fellow managers, your superiors, and even senior executives.

Definitions

As usual, let's first agree on vocabulary to avoid confusion. Again, I'm not espousing these definitions as necessarily "right"; they are merely the terms in which I'm comfortable thinking and how I will express myself in this article.

Test case. A collection of one or more test steps designed to exercise a small number of related test conditions.

Test step. A short action, such as entering a page of data, that produces a desired test condition.

Test condition. Some interesting situation in which we want to place the system under test for purposes of looking for invalid behavior, response, and/or output.

Test suite. A collection of related test cases.

Test cohort. All the test suites that apply to the current test phase.

Test cycle. A selection of test suites - often a subset of the entire test cohort - run against a particular test release.

Test pass. A complete run through the test cohort - every test case in every test suite - either in one test cycle or spanning two or more test cycles.
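To make this vocabulary concrete, the nesting of these terms can be sketched as a small data model. This is a hypothetical illustration; the class names, field names, and sample data are mine, not the article's.

```python
from dataclasses import dataclass

@dataclass
class TestStep:
    action: str          # a short action, e.g., "enter a page of data"
    condition: str       # the test condition the step produces

@dataclass
class TestCase:
    case_id: str         # tag for cross-reference, e.g., in bug reports
    steps: list          # one or more TestStep objects

@dataclass
class TestSuite:
    name: str
    cases: list          # related TestCase objects

# The cohort is every suite that applies to the current phase;
# a cycle is the subset of the cohort run against one test release.
cohort = [
    TestSuite("Functionality",
              [TestCase("1.001", [TestStep("create a new file",
                                           "new file exists on server")])]),
    TestSuite("Localization", []),
]
cycle_one = [suite for suite in cohort if suite.name != "Localization"]
```

A test pass, in these terms, is simply one or more cycles that together touch every case in every suite of the cohort.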

The Test Execution Process

The following outlines what I consider a good test execution process.

1. Based on an overall quality risk management strategy, select a subset of test suites from the test cohort for this test cycle.

2. Assign the test cases in each test suite to testers for execution.

3. Execute tests, report bugs, and capture test status continuously.

September 2000 Journal of Software Testing Professionals http://www.softdim.com 20



4. Resolve blocking issues as they arise.

5. Report status, adjust assignments, and reconsider plans and priorities daily.

6. Manage the test cycle end game, eliminating unrealizable tests in reverse-priority (lowest first, highest last) order.

7. Report test cycle findings and status.
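Steps 1, 2, and 6 of the process above can be sketched in code. The suite names, risk priorities, and round-robin assignment policy below are illustrative assumptions of mine, not prescriptions from the article.

```python
from itertools import cycle as round_robin

# Cohort: suite name -> (risk priority, test case IDs); 1 = highest risk.
cohort = {
    "Functionality":  (1, ["1.001", "1.002", "1.003"]),
    "Error Handling": (2, ["2.001", "2.002"]),
    "Performance":    (3, ["3.001"]),
    "Localization":   (4, ["4.001", "4.002"]),
}

def plan_cycle(cohort, testers, max_cases):
    # Step 1: order suites by risk, highest risk first.
    suites = sorted(cohort.items(), key=lambda item: item[1][0])
    ordered = [(name, case) for name, (_, cases) in suites for case in cases]
    # Step 6 (end game): eliminate unrealizable tests in reverse-priority
    # order, i.e., truncate from the low-priority end of the list.
    ordered = ordered[:max_cases]
    # Step 2: assign the surviving cases to testers round-robin.
    testers_iter = round_robin(testers)
    return [(suite, case, next(testers_iter)) for suite, case in ordered]

plan = plan_cycle(cohort, ["L.T.", "Emma"], max_cases=6)
# The two lowest-priority (Localization) cases fall out of this cycle.
```

Steps 3 through 5 and 7 are the day-to-day execution, adjustment, and reporting activity that the rest of the article walks through.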

In the next section, I'll work through a hypothetical case study illustrating this process, using a simple test-tracking tool. Then, with that example in mind, I'll propose some quality indicators for the test execution process and outline some of the key management challenges to keeping a good process running.

A Hypothetical Case Study

Let's assume that you are testing release 1.1 of a Web-based word processing program called Speedy Writer.1 You are in the System Test phase, and the first build was just installed. You intend to run four test suites during this phase - your System Test cohort - functionality, performance and stress, error handling and recovery, and localization. Each test suite has a handful of test cases in it. In real life, of course, you would have more test suites and more (and smaller) test cases than in this example, but the example has the virtue of being readable as a figure on a printed page.

Figure 2 shows a simple worksheet for tracking test case execution. In this worksheet, you list all your test suites and, within each suite, the constituent test cases. You tag each test case with an ID for cross-reference; e.g., in a bug report. The "State" column allows you to track the test cases as they move through the sequence of states shown in Figure 1. As a tester runs each test case, she'll need to track: the IDs for the bug reports she filed (if any); the date on which she completed the test case; how long it took her in person-hours; any comments about this testing activity; and her initials in case you need to ask her any questions.
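A worksheet row of the kind just described can be sketched as a simple record. The state names below are my assumption of a plausible lifecycle in the spirit of Figure 1, which is not reproduced here; the row fields follow the list in the paragraph above.

```python
# Assumed test case states (Figure 1 itself is not available here).
STATES = ("Queued", "In Progress", "Blocked", "Skipped", "Passed", "Failed")

def make_row(case_id, suite, tester, planned_date):
    """One row of the test case tracking worksheet."""
    return {"ID": case_id, "Suite": suite, "State": "Queued",
            "Bug IDs": [], "Completed": None, "Effort (hrs)": 0.0,
            "Comments": "", "Tester": tester, "Planned": planned_date}

def record_result(row, state, bug_ids=(), effort=0.0, date=None, comments=""):
    """Capture a tester's results for one executed test case."""
    assert state in STATES, f"unknown state: {state}"
    row["Bug IDs"].extend(bug_ids)
    row.update({"State": state, "Completed": date,
                "Effort (hrs)": effort, "Comments": comments})
    return row

row = make_row("1.001", "Functionality", "LTW", "1/8")
record_result(row, "Failed", bug_ids=["701"], effort=4.0, date="1/8",
              comments="New files corrupted; workaround: save on creation")
```

A spreadsheet serves the same purpose; the point is that every executed test case leaves behind the same small, queryable set of facts.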

As indicated by the "Planned Date" of execution, you have selected the subset of tests for this cycle and ordered them based on the following quality risk management strategy:

1. File functionality and file robustness are the most critical to the customer.

2. Table functionality tends to unearth serious bugs that take a long time to fix.

3. Editing operations are the second-most critical area of functionality.

4. Data loss during server crashes is a major tech support headache in the current release.

5. Font features and printing are the third-most critical area of functionality.

6. Performance is important - and the target servers are represented in the customer base in the order shown - but this risk was partially covered by early performance testing during integration, which showed satisfactory results.

7. Versions with foreign-language support can ship after the English version is released, so your team will run these tests in the next test cycle.

Figure 1: Test Case States and Lifecycle

Figure 2: Initial Test Case Tracking Worksheet

1 Word processors, real and imagined, make good case studies for testing because they are simple programs, well understood by most testers, in which correct and incorrect behavior are clear. Using banking or missile guidance software wouldn't be so easy! For case studies in testing based on a real word processor, see Brian Marick's excellent Web resource, www.testingcraft.com.

You have also assigned each test case to a responsible test engineer. You plan about four person-hours per test case, based on experience with Speedy Writer release 1.0, and plan to run this cycle during the week of January 8 through January 12. The performance and stress tests have to run for 24 hours, though being automated they only involve about four hours of tester effort from Emma Moorhouse, your test automation engineer. L.T. Wong is to run the manual tests Monday morning through Wednesday afternoon before turning the test environment over to Emma, who will need it exclusively for her tests.

Figure 3 shows a summary worksheet that presents the test results on a suite-by-suite basis. This worksheet allows you to summarize the number of test cases in each state. (The source spreadsheet for these illustrations is available from me via e-mail or on the CD-ROM of my book, Managing the Testing Process.)
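The suite-by-suite summary is just a count of test cases per state within each suite. A minimal sketch, using made-up rows rather than the case study's actual worksheet:

```python
from collections import Counter

def suite_summary(rows):
    """Count test cases per state, grouped by suite."""
    summary = {}
    for row in rows:
        summary.setdefault(row["Suite"], Counter())[row["State"]] += 1
    return summary

# Illustrative worksheet rows; only the two grouping fields matter here.
rows = [
    {"Suite": "Functionality", "State": "Failed"},
    {"Suite": "Functionality", "State": "Queued"},
    {"Suite": "Error Handling", "State": "In Progress"},
]
summary = suite_summary(rows)
```

Because the summary is derived from the tracking worksheet, it never drifts out of sync with the detail rows, which is exactly the property you want in a nightly status report.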

L.T. starts her work with the file functionality test, and discovers that the file-creation functionality is seriously broken, corrupting all new files. She finds that the only workaround is to save the file immediately upon creation, before starting to edit it. In addition to running the planned test, L.T. spends time researching and writing the bug report. Before moving on to the next test case, she notes the status information associated with running this test case in the test case tracking worksheet.

L.T. now proceeds to the error handling and recovery test for file corruption. However, before she can get too far into it, a system administrator inadvertently shuts down the Solaris server L.T. is using. After some initial confusion, she gets the server rebooted, but close to an hour is spent resolving this blocking issue. She spends another couple hours testing, then an hour reading and responding to various e-mails, before calling a ten-hour day to a close. You review her results and update your worksheets, which, at the close of day one, appear as shown in Figure 4 and Figure 5. You send these reports and a summary bug report to your manager as a status update. Revisiting your plans and priorities, you decide to continue on the current path, as important bugs are coming to light. Performance testing is still tentatively scheduled for Wednesday.

The next day, L.T. continues with her error handling and recovery tests. The testing is very productive; she finds six separate problems and files six reports. Also, the simulated server crash the day before, while inadvertent, exposed a serious problem she otherwise would have missed. She spends thirty minutes updating the server crash test case to reflect the new condition.

Though the testing effort is turning up important findings, the schedule is in trouble. Adding yourself as a test engineer for an afternoon, you run some of the functionality tests. However, a schedule slip is inevitable, so you must adapt your plan. First, you ask Emma to start the performance tests Thursday and work on Saturday to wind them down. Next, reconsidering the priorities, you reschedule the printing test for Saturday and decide to run it yourself. At the end of day two, the test tracking worksheet and test suite summary look as shown in Figure 6 and Figure 7.

Figure 3: Initial Test Suite Summary

Figure 4: Test Tracking Worksheet after Day One (1/8) of Cycle One of System Test

On Wednesday, the testing goes according to plan, but with lots of new bugs. On Thursday morning Emma starts her server tests, but on Friday morning she finds all the tests hung overnight. She neglected to read the new-file bug report (701), so she didn't adjust the test cases in advance. Emma now has to rework the test scripts and restart them. Since the test environment has to be turned over to the release management team on Sunday afternoon for a new build, you drop the printing and the Linux performance tests - the lowest priority in your opinion - for this cycle. During her testing, Emma also finds bugs that affect performance. On Sunday morning, you wrap up the testing, debriefing Emma and producing the final test status reports shown in Figure 8 and Figure 9. You also select and assign the test cases for cycle two, deciding to add an extra two hours for each test case to take into account the bugginess of the system under test. Printing and Linux performance testing come first, followed by the localization testing, since you skipped these in cycle one. Around 1:00 PM the release management engineer enters the lab, CDs and tape in hand, and you and Emma leave to let him prepare the test environment for the next cycle of testing.

Test Execution Process Quality Indicators

Using the case study, let me run through my key quality indicators for a good test execution process in the following subsections.

Finds the scary stuff first

When thinking through your test execution process, you need to make an educated guess about where the nastiest bugs live and test those areas first. The "nasty bug spectrum" is determined by a quality risk analysis, whether formal or informal, in which you rank potential failures in risk-priority order. This prioritization of test cases must happen both within test cycles and across the entire test phase. In our example, testing started with the global risks of file handling, editing, and so forth, leaving for later the isolated or customer-specific types of quality risks, like printing and localization.

Supports crisp communication of findings

Testing produces information that the project management team can use to make smart decisions about quality. To do that, a good test execution process should provide for capture of these findings in clear reports and for circulation of these findings to the appropriate parties. In our example, the test manager produced a nightly report of these findings and sent it to her manager, the test team, and others.2 However, the process did break down when Emma neglected to read L.T.'s bug reports. Using peer reviews of bug reports and test status reports can reduce these kinds of miscommunications, but the test manager needs to make sure that testers spend time on these tasks.

Figure 5: Test Suite Summary after Day One (1/8) of Cycle One of System Test

Figure 6: Test Tracking Worksheet after Day Two (1/9) of Cycle One of System Test

Has measurable progress attributes

Two key progress metrics apply to managing test execution: the rate of bug detection and the rate of test case evaluation. For bug detection, you'd like to have a model that predicted how many bugs you'd find in a test cycle. As the test cycle proceeded, you could compare the number of bugs you'd found with the expected number. However, most of us have to rely on our intuition and charts like the one shown in Figure 10 to tell us if we're on track.

Test case execution rates are easier. In project management terms, a manager can measure progress against any plan using milestones achieved versus milestones planned to date, effort expended versus effort planned to date, and a combination of the two viewpoints; i.e., have we achieved the milestones we'd expect given the effort we expended? Our example spreadsheet measured progress on the test suite summary by bugs found and test cases completed. We could have added a project management summary that analyzed test case completion rates in these three ways.
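Those three viewpoints reduce to two ratios and their quotient. The numbers below are illustrative, not taken from the case study.

```python
def progress(completed, planned_to_date, effort_spent, effort_planned):
    """Milestones vs. plan, effort vs. plan, and the combined view."""
    milestone_ratio = completed / planned_to_date    # milestones achieved
    effort_ratio = effort_spent / effort_planned     # effort expended
    # Combined view: > 1.0 means milestones achieved exceed what the effort
    # expended would predict; < 1.0 means effort is outrunning results.
    efficiency = milestone_ratio / effort_ratio
    return milestone_ratio, effort_ratio, efficiency

m, e, eff = progress(completed=6, planned_to_date=10,
                     effort_spent=30.0, effort_planned=40.0)
# Behind plan (60% of cases done), and each hour is yielding less
# completion than planned (efficiency 0.8).
```

Either ratio alone can mislead; the combined view is what tells you whether you are behind because the work is harder than planned or because less effort went in than planned.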

Prevents accidental overlapping of testing

Two issues arise in this matter: clear assignment of responsibility and prevention of test case dependencies. Clear assignment refers to each tester knowing exactly what tests she should run. In our example, we had an assignment column in the test case tracking worksheet that listed the current owner. That changed for some test cases as the cycle proceeded; the test manager must ensure that new copies of the test tracking worksheet showing the new assignments go out to the testers. This may sound like a trivial matter, but it's easy for confusion to creep in here, with two serious consequences. The mismanagement of two precious resources (tester time and schedule time) entrusted to the test manager is bad enough. The fact that the test manager will require yet more overtime from his team to rescue the schedule due to his error constitutes a failure of leadership that detracts from the manager's ability to get the best work from his people.

How about test case dependencies? Test case design is somewhat off-topic for this article, but in passing allow me to mention that such dependencies are usually avoidable and always undesirable. Ideally, we can pick any test case and assign it to any tester on any given day in the test cycle based strictly on priority, availability of the test hardware, and the skills of the tester. Having to consider who is running other test cases on what dates makes this process far too complicated. One caveat in preventing such dependencies is to make sure that test cases don't set up data required by subsequent test cases.

Figure 7: Test Suite Summary after Day Two (1/9) of Cycle One of System Test

Figure 8: System Test Cycle One Final Test Tracking Worksheet (1/14)

2 Managing outward and upward - reporting of test results to individual and peer managers and to supervisory managers, respectively - is a separate collaborative process. For more information, see my article, "Effective Test Status Reporting", Software Testing and Quality Engineering, Volume 2, Issue 2.

Adapts to evolving circumstances

Part of testing is encountering unexpected behaviors and researching those anomalies to create a detailed, actionable bug report. Since the behaviors are unexpected, and since we can't predict how many bugs we'll find in any given test to any level of certainty, the test process generates its own fluid state of affairs. In addition, external events such as delayed builds, adjustments in build content, and shifts in management priorities bring change to bear. You can't prevent these changes, so the test process must accommodate them. In our case study, we adapted to changes from within - a high level of bugginess in Speedy Writer and a miscommunication on bug findings - as well as changes from without - a system administrator's negligent shutdown of a test server. We adjusted our test case assignments, added extra resources, changed the planned execution dates, and dropped lower-priority test cases.

Captures data for continuous improvement of the test cases and the test process

You could say that, while the testware tests the system under test through the test process, the system under test also tests the testware and the test process. This means that the astute test manager has available to her information about the quality of the testware and the test process that she can use to improve her test operation. In our case study, L.T. captured an improvement to one test case based on a serendipitous bug discovery, and the test manager adjusted the test execution times based on the previous test cycle.

Handling Challenges

Regardless of how good your test execution process is, the successful test manager will still need to anticipate and handle various challenges that arise during most test execution efforts. Some of these arise from the collaborative nature of the testing process, but some are internal challenges. Let's look at a few of these in the following subsections.

Balance progress and certitude

Researching bugs and chasing down other unusual or promising (for bug discovery) behaviors involves unpredictable amounts of effort, so the actual number of person-hours required to complete a set of tests can deviate considerably from the plan. As a manager, you must help your team balance the need to make forward progress through the planned test cases against the need to know the meaning of your test findings with some certainty. Not all bugs deserve the same level of research. If an automated test case hangs up once but then runs fine on the next try, and you're pretty sure the test tool is to blame, perhaps it's better to skip that test so as not to delay other tests for hours. On the other hand, a bug that sporadically corrupts a shared database may warrant postponing much of the test cycle until the failure has been isolated and the problem is reproducible.




Keep test reports accurate, consistent, and timely

Test findings during test execution evolve constantly. If you spend two hours preparing slides for a two-hour project status meeting, you know that, by the end of the meeting, your report is stale. Do your managers worry about the exact count of test cases, the number of passes and fails, and the like? If so, you must work extra hard to make sure all your reports are consistent, but this means that your results are even more out-of-date by the time people get them. It helps to involve your managers in deciding how to strike the right balance. I find that once I explain the time required to achieve particular levels of accuracy and consistency in particular reporting formats, and the trade-offs involved in my being disengaged from the testing process proper while working on such reports, I can get the guidance I need to make the right decision.

Ensure that testers interpret the test case results correctly

In the previous section, I mentioned that test case execution involves looking for mismatches between expected and observed results. However, does this necessarily mean that we're assessing the quality of the system under test? Our powers of observation and judgment are sometimes flawed, our expectations of correct behavior are sometimes wrong, and inscrutable software sometimes conceals evidence of correct and incorrect behavior. For any of these three reasons, we may report a failure when the program behaves correctly or, worse yet, vice versa (see Figure 11).

To deal with tester observation and judgment problems, I have had senior testers review other testers' results, as well as assigning the same test cases to different testers during subsequent cycles. Completely resolving the problem of recognizing correct and incorrect behavior - also known as an oracle problem - depends on having unambiguous requirements and specifications and a foolproof way of predicting the right response to any stimulus. Lacking these items - the typical case - the wise test manager errs on the side of reporting bugs, referring development objections to such gray-area reports to a cross-functional team of business people, salespeople, marketers, customers, and other non-development staff. Finally, the inscrutable software issue boils down to writing testable software. Only senior management can resolve this issue, by ensuring test involvement during program design and development.

Write good bug reports

Bug reports are the tangible product of test execution, a key communication channel to developers, other peer organizations, and your managers, and one of the pillars supporting your test team's value to the company. This makes the subprocess of writing good bug reports important enough that I will cover it in a forthcoming article, but, if you can't wait until then, see my web site for a high-level description (www.rexblackconsulting.com/publications).

Accept the right level of test case ambiguity

A test case is ambiguous when two different test runs could yield different results against the same software, or the same results against software that behaves differently (in a noticeable and pertinent way). I assert that all test cases are ambiguous. To revisit our case study for a moment, when L.T. prints the test case on Monday morning, that case could read simply, "Spend four hours testing the file operations available in Speedy Writer." Alternatively, it could consist of dozens of detailed steps, starting with something like, "Launch Speedy Writer by pointing at the screen icon with the mouse pointer and double-clicking on the icon. Verify that Speedy Writer loads." Or it could have some level of ambiguity in between.

No manual test case written in English or any other human language can be completely without nuance. No automated test tool I'm aware of can practically verify the value of every bit in every register, memory location, and disk sector during test execution. So, test case ambiguity is a spectrum, not a binary state or a test design shortcoming to be eliminated. Along this spectrum, there is an appropriate level of ambiguity. Detailed test cases take a long time to write, require exact knowledge of how the system should behave, and restrict tester judgment, but vaguer test cases can lead to interpretation mistakes. Automated test cases that check more data catch more bugs, but they also turn up false errors and present maintainability problems. Given this spectrum from purely exploratory testing to tightly scripted test cases, the right answer in terms of test case ambiguity depends on issues like tester experience, the amount of time available to write test cases, the extent to which detailed requirements or specifications exist, and so forth.

Staying organized in crisis

When I explain this process and these tools to my clients and students, a few respond, "Gee, it seems like a lot of overhead. Do these techniques work?" Yes. If you apply them with discipline and regularity, you will remain organized and on top of your test execution effort no matter how chaotic the overall development process you're operating in.

I have developed and applied the test processes and test tracking tools discussed in this article over dozens of projects. Other test managers have also applied these techniques, or ones like them, to their test projects. These aren't silver bullets, but they do help the test manager choreograph a complex and ever-changing effort during what's typically a high-stress period in a project's lifecycle. In the next section, a colleague and fellow test manager, Torsten Baumann, comments on his experiences with using these - and his experiences when the test manager could not or did not manage the search for the unexpected.

Figure 11: The Outcome of Valid and Invalid Test Result Interpretation

Stories from the Front: Test Execution Process Case Studies by Torsten Baumann

The following case studies are based on experiences with several organizations, both using and not using the Test Execution Process defined above:

Not assigning test cases makes it difficult to track where your test organization is in terms of schedule and how long the testing effort will take. It also makes it difficult to effectively handle and understand the test coverage. For instance, overlapping tests may occur, since Betty Tester may be executing test cases in the same area as Mike Testerson, giving the organization full coverage in one area with a possibility of little or none in another. Experience becomes a factor as well. You may find that assigning certain test cases to certain individuals produces the best results in terms of reliability and speed of execution.
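
The assignment problem above can be made concrete with a small sketch. This is not a tool from the article, just an illustration in Python of how even a minimal assignment matrix exposes coverage gaps and duplicated effort (the area, case, and tester names are invented):

```python
# A minimal test-assignment matrix: each test case is mapped to a
# functional area and an owner, so unassigned areas stand out.
test_cases = [
    {"id": "TC-01", "area": "Report Generator", "owner": "Betty"},
    {"id": "TC-02", "area": "Report Generator", "owner": "Mike"},
    {"id": "TC-03", "area": "Checkout",         "owner": "Betty"},
    # Module "X" has no test cases assigned at all.
]

all_areas = {"Report Generator", "Checkout", "Module X"}
covered = {tc["area"] for tc in test_cases}

# Areas nobody is testing, and areas two testers are covering at once.
print("Uncovered areas:", sorted(all_areas - covered))
print("Areas with overlapping owners:", sorted(
    a for a in covered
    if len({tc["owner"] for tc in test_cases if tc["area"] == a}) > 1
))
```

Even this toy version would have flagged Module "X" as uncovered before the release, not after.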

A retail Web site corporation released a version of the Web site with the addition of a view to increase the performance of report generation. They tested the generation of their report and ran through a test plan that they had created the day before, dividing up the test cases in an informal meeting. They then tested away without testing Module "X" (everyone forgot that this module existed), thus allowing a damaging release that would basically disable the Web site, as a key piece of functionality was overlooked. It was later determined that this view was used not only in the report generator but also in Module "X". A test matrix that assigned key areas such as Module "X" would have ensured coverage of the product.3

The impact to the organization was that the production system was down for three hours on their second-busiest day of the week, impacting revenue by $28,000. Customer satisfaction was adversely affected, too, with the rate of calls to the call center increasing threefold due to site unreliability. Future business loss cannot be quantified.

Not selecting a subset of tests will result in testing lasting unacceptably long. The Project Manager says to the Test Lead, "When will you be finished testing?" and the Test Lead responds, "When all tests have been executed." There is no way in this day and age to test "everything," and full regression testing of every release can be an impractical goal in the absence of complete test automation.

With the test suite matrix implementation, one has the ability to select for execution, from an extensive library of all test cases, those test cases most appropriate to the changes in the release. One may also wish to run variations of the test suite during different test cycles to increase coverage on one area during one cycle and another area during the next, as was shown in the earlier case study with localization testing. This may allow for finding the scary stuff first, since one wants to know about bugs in priority-one test cases before those lurking behind lower-priority test cases.
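
As a sketch of that selection step, assuming each suite in the matrix carries a priority and a set of functional areas it touches (field and suite names here are invented for illustration, not taken from the article's spreadsheets), one cycle's subset might be chosen like this:

```python
# Pick the test suites for this cycle: everything that touches a
# changed area, plus all priority-1 suites so the scary stuff runs first.
suites = [
    {"name": "Install",      "priority": 1, "areas": {"installer"}},
    {"name": "Localization", "priority": 2, "areas": {"ui"}},
    {"name": "Reports",      "priority": 1, "areas": {"reports", "views"}},
    {"name": "Performance",  "priority": 3, "areas": {"views"}},
]

changed_areas = {"views"}  # what this release actually touched

cycle = [s["name"] for s in suites
         if s["priority"] == 1 or s["areas"] & changed_areas]
print(cycle)  # Install and Reports (priority 1), plus Performance (touches views)
```

The point is not the code but the policy it encodes: the cycle is derived from risk priority and the release's changes, so anyone can see why a suite was or wasn't run.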

Poor use of this risk management capability occurred at one corporation. The Test Manager believed that testing everything for each release was crucial. This was evident in the project plan submitted to Management, which included a test phase of 213 days of test execution for a small project, based on the size of the software and the amount of human resources available at the time. Tests were also scheduled that the test team could not possibly execute due to environment restrictions. No test suites existed for them to reference the changes with the actual test bed. This made project tracking and test status difficult to assess. The Test Manager should have analyzed the changes to the software and identified key test cases that should be executed to provide the best coverage in the shortest amount of time based on the data contained in this spreadsheet. Selecting a subset would have provided a reasonable test period, and the test cases marked in the matrix for execution could be viewed at any time by anyone for status.

A 213-day test period would have imposed significant opportunity costs on the organization. Major competitors had entered the marketplace by that time, and such a long test period on such simple changes would have resulted in these competitors grabbing market share before this organization had a chance. Had they implemented quality risk assessment as well as test suite tracking, they would have had the ability to quickly select a subset of tests that would have given the needed coverage in a shorter time period. The Test Manager would have kept her job (the manager was dismissed over the unreasonable proposal), and the company would have had the data they needed to make effective business decisions when test time was a factor.

As mentioned earlier, a good test process must support resolving test-blocking issues promptly. This is a special concern when people are working towards other priorities. The test team may be prevented from executing test cases by blocking issues such as hardware environment problems, system configuration problems, "must-fix" bugs that prevent big chunks of test cases from running, and so on. Testers often lose motivation when struggling to execute test cases in an unstable environment for unpredictable results. The overall effect places at risk your organization's ability to meet schedules, stick to budgets, and achieve plans.

Everyone involved in a particular telephony software project had different priorities. The system was divided into several subsystems, each with a different group of developers but one common test organization. The test team had blocking issues that needed to be resolved so they could move forward on this particular release. However, the business was already looking past this release and towards the next one, as was the development team. This left the test organization with little support to resolve blocking issues, and thus the release schedule was compromised due to mismatched (and mismanaged) prioritization of resources.

3 I (Rex) mention as an aside that analyzing test coverage during the test design phase can also help alleviate such oversights.

Test status was difficult to obtain in this organization as well, because their tracking mechanism was a bug tracking database, and each tester was assigned a different (allegedly fixed) bug report to re-test based on how many they had left on their plate.

The execution of test suites was difficult to track as well, since the test plans consisted solely of test case execution steps listed one after another in no particular logical order. When the Test Lead was asked, "Are you tracking to schedule?", there was no data to support the answer that was supplied: "I think things are ok, I haven't heard anything, no news is good news."

This Test Lead had no idea where the team was in terms of test schedule. If they had had a matrix and process such as the one defined above, these answers would have been readily available. In the event, test status was only learned towards the end of the test cycle. As the testers became nervous that the release was nearing, they decided to start testing certain areas profusely and, sure enough, uncovered various important must-fix bugs that should have been uncovered earlier in the test cycle. If the team had had a process to report test status and assignment issues, they could have executed test cases in a more prioritized and structured manner.

The cost impact to this organization was that they delayed the release by three weeks. Worse yet, this was four weeks after Marketing had already informed customers that the latest release would be available. Furthermore, Sales had already pre-sold copies of the software that was now unavailable. Customer calls into the call center went up when the release was not available on the day it was supposed to be. The corporation offered those that had pre-ordered a discount as well. The dollar value of this missed delivery was not negligible and could have been avoided with the proper test process. Test status reporting provides measurable progress throughout the project, which might have allowed course correction prior to the train wreck.

Implementing Changes

So, you have read this article and have decided to get your test execution process under control. Great! I hope that what you have read has given you some ideas on where to start. In some cases, though, it can appear a daunting prospect to move from where you are to the more-organized situation described in the case study. Every challenge is unique, but perhaps the following roadmap will help.

1. Assess where you are and what portions of the test execution process are not under control, including collaborative processes not addressed in this article. Maybe some of the sources of chaos are external?

2. Put in place some way of tracking bugs if you don't have one. A simple home-brewed database or even a spreadsheet is better than e-mail.
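
As point 2 suggests, even a few lines of home-brewed tracking beat e-mail. A minimal sketch (the field names and example bugs are invented; a plain spreadsheet with the same columns would serve equally well):

```python
import csv
import io

# The simplest possible bug log: one row per bug, written as CSV so it
# can live in a spreadsheet and be sorted or filtered by anyone.
bugs = [
    {"id": 1, "summary": "Crash on empty report", "severity": "must-fix", "status": "open"},
    {"id": 2, "summary": "Typo on login page",    "severity": "minor",    "status": "closed"},
]

buf = io.StringIO()  # stands in for a real file on a shared drive
writer = csv.DictWriter(buf, fieldnames=["id", "summary", "severity", "status"])
writer.writeheader()
writer.writerows(bugs)

# The one query a daily status meeting always needs.
open_must_fix = [b["id"] for b in bugs
                 if b["status"] == "open" and b["severity"] == "must-fix"]
print("Open must-fix bugs:", open_must_fix)
```

The structure matters more than the technology: once every bug has an id, severity, and status in one place, "how many must-fix bugs are still open?" becomes a one-line query instead of an inbox search.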

3. Ensure that you have some idea of what you intend to test and how long it will take to perform those tests, broken down into bite-sized pieces. Two- to four-hour test tasks are sizes that I find very manageable. As James mentioned, this need not take the form of tightly scripted test cases where you count the number of minutes required on average to execute each detailed, unambiguous step. Simply bounding a test charter (e.g., test file manipulation capabilities) with a planned duration (e.g., spend two hours testing file manipulation capabilities) can suffice.

4. Put in place some way of tracking test cases, either using the suite/case tracking spreadsheets I showed above or some other mechanism, both in terms of duration and schedule and in terms of findings. Again, you can start with something very simple, and you may find that this simple technique suffices.
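
The tracking in point 4 can start as small as a list of cases with planned hours and an outcome. The roll-up below is a hedged sketch, not the article's spreadsheet (the case ids, hours, and outcome labels are invented), but it produces the two numbers a daily status review needs:

```python
from collections import Counter

# Roll up test-case tracking into the numbers a status report needs:
# how far through the cases we are, and how each executed case came out.
cases = [
    {"id": "TC-01", "planned_h": 2, "spent_h": 2, "outcome": "pass"},
    {"id": "TC-02", "planned_h": 4, "spent_h": 3, "outcome": "fail"},
    {"id": "TC-03", "planned_h": 2, "spent_h": 0, "outcome": "blocked"},
    {"id": "TC-04", "planned_h": 2, "spent_h": 0, "outcome": "not run"},
]

outcomes = Counter(c["outcome"] for c in cases)
done = sum(1 for c in cases if c["outcome"] in ("pass", "fail"))
pct_complete = 100 * done // len(cases)

print(f"{pct_complete}% of cases executed, outcomes: {dict(outcomes)}")
```

With this in hand, "Are you tracking to schedule?" gets a number for an answer rather than "no news is good news."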

5. Set aside the time to review the test execution status daily with your team. (You're not "managing" unless you manage your team.) As an agenda for that discussion, use whatever tools you've put in place to track and manage bugs and test cases.

6. Figure out who needs to receive daily communication of test status (if anyone) and make sure that you understand their informational needs. Sometimes a simple verbal report once daily will suffice. Other managers will want written reports circulated via e-mail or brought in the form of slides to a status meeting.

All these steps can be important way stations on the road to getting your test execution processes under control. As mentioned earlier, though, processes associated with the reporting, tracking, and management of bugs are surely some of the most visible test processes to the development organization. In an upcoming article, let's look at the processes associated with bug reporting to round out this discussion of test execution.

Acknowledgements

Thanks to Torsten Baumann for providing "war stories" on his experiences with test execution using processes similar to mine.

Author Biography

Rex Black is the President and Principal Consultant of Rex Black Consulting Services, Inc., an international software and hardware testing and quality assurance consultancy. He and his consulting associates help their clients with implementation, knowledge transfer, and staffing for testing and quality assurance projects. His book, Managing the Testing Process, was published in June 1999 by Microsoft Press.

Torsten Baumann is currently the QA Manager at grocerygateway.com in Toronto, Canada. Mr. Baumann attended Concordia University's BCOMM program and graduated from John Abbot College's Programmer/Analyst program. He has spent six years in Quality Assurance, including positions with Speedware and iMG.
