Principles of Software Testing
Part I: Satzinger, Jackson, and Burd
Testing
Testing is a process of identifying defects. Develop test cases and test data.
A test case is a formal description of:
• A starting state
• One or more events to which the software must respond
• The expected response or ending state
Test data is a set of starting states and events used to test a module, a group of modules, or an entire system.
Testing discipline activities
Test types and detected defects
Unit Testing
The process of testing individual methods, classes, or components before they are integrated with other software
Two methods for isolated testing of units:
Driver
• Simulates the behavior of a method that sends a message to the method being tested
Stub
• Simulates the behavior of a method that has not yet been written
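A driver and a stub can be sketched in a few lines of Python. This is a hypothetical example (the function names and the 10% tax rate are invented for illustration): the method under test depends on a tax-lookup collaborator that has not been written yet, so a stub supplies a canned answer, and a driver plays the role of the caller.

```python
def tax_rate_stub(region):
    """Stub: simulates the behavior of a method that has not yet been written."""
    return 0.10  # canned response, regardless of region

def compute_invoice_total(subtotal, region, tax_lookup):
    """The method under test; it sends a message to its collaborator via tax_lookup."""
    return round(subtotal * (1 + tax_lookup(region)), 2)

def driver():
    """Driver: simulates a caller sending a message to the method being tested."""
    result = compute_invoice_total(100.0, "EU", tax_lookup=tax_rate_stub)
    assert result == 110.0, f"expected 110.0, got {result}"
    print("unit test passed:", result)

driver()
```

Because the stub is passed in as a parameter, the real tax lookup can replace it later without changing the method under test.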
Integration Testing
Evaluates the behavior of a group of methods or classes.
Identifies interface incompatibilities, unexpected parameter values or state interactions, and run-time exceptions.
System test
An integration test of the behavior of an entire system or independent subsystem.
Build and smoke test
A system test performed daily or several times a week.
Usability Testing
Determines whether a method, class, subsystem, or system meets user requirements
Performance test
Determines whether a system or subsystem can meet time-based performance criteria:
• Response time specifies the desired or maximum allowable time limit for software responses to queries and updates
• Throughput specifies the desired or minimum number of queries and transactions that must be processed per minute or hour
User Acceptance Testing
Determines whether the system fulfills user requirements
Involves the end users
Acceptance testing is a very formal activity in most development projects
Who Tests Software?
Programmers
• Unit testing; testing buddies can test other programmers' code
Users
• Usability and acceptance testing; volunteers are frequently used to test beta versions
Quality assurance personnel
• All testing types except unit and acceptance; develop test plans and identify needed changes
Part II
Principles of Software Testing for Testers
Module 0: About This Course
Course Objectives
After completing this course, you will be a more knowledgeable software tester. You will be able to better:
• Understand and describe the basic concepts of functional (black box) software testing.
• Identify a number of test styles and techniques and assess their usefulness in your context.
• Understand the basic application of techniques used to identify useful ideas for tests.
• Help determine the mission and communicate the status of your testing with the rest of your project team.
• Characterize a good bug report, peer-review the reports of your colleagues, and improve your own report writing.
• Understand where key testing concepts apply within the context of the Rational Unified Process.
Course Outline
0 – About This Course
1 – Software Engineering Practices
2 – Core Concepts of Software Testing
3 – The RUP Testing Discipline
4 – Define Evaluation Mission
5 – Test and Evaluate
6 – Analyze Test Failure
7 – Achieve Acceptable Mission
8 – The RUP Workflow As Context
Principles of Software Testing for Testers
Module 1: Software Engineering Practices (Some things Testers should know about them)
Objectives
Identify some common software development problems.
Identify six software engineering practices for addressing common software development problems.
Discuss how a software engineering process provides supporting context for software engineering practices.
Symptoms of Software Development Problems
• User or business needs not met
• Requirements churn
• Modules don't integrate
• Hard to maintain
• Late discovery of flaws
• Poor quality or poor user experience
• Poor performance under load
• No coordinated team effort
• Build-and-release issues
Trace Symptoms to Root Causes
Symptoms: needs not met, requirements churn, modules don't fit, hard to maintain, late discovery, poor quality, poor performance, colliding developers, build-and-release issues.

Root causes: incorrect requirements, ambiguous communications, brittle architectures, overwhelming complexity, undetected inconsistencies, insufficient testing, subjective assessment, waterfall development, uncontrolled change, insufficient automation.

Software engineering practices that address these root causes: Develop Iteratively, Manage Requirements, Use Component Architectures, Model Visually (UML), Continuously Verify Quality, Manage Change.

For example, the symptom of poor quality traces to the root causes of undetected inconsistencies, insufficient testing, and subjective assessment; the practice Continuously Verify Quality addresses all three.
Software Engineering Practices Reinforce Each Other
The practices reinforce each other:
• Validates architectural decisions early on
• Addresses complexity of design/implementation incrementally
• Measures quality early and often
• Evolves baselines incrementally
• Ensures users are involved as requirements evolve

The six practices: Develop Iteratively, Manage Requirements, Use Component Architectures, Model Visually (UML), Continuously Verify Quality, Manage Change.
Principles of Software Testing for Testers
Module 2: Core Concepts of Software Testing
Objectives
Introduce foundation topics of functional testing
Provide stakeholder-centric visions of quality and defect
Explain test ideas
Introduce test matrices
Module 2 Content Outline
Definitions
• Defining functional testing
• Definitions of quality
• A pragmatic definition of defect
• Dimensions of quality
Test ideas
• Test idea catalogs
• Test matrices
Functional Testing
In this course, we adopt a common, broad, current meaning of functional testing. It is:
• Black box
• Interested in any externally visible or measurable attribute of the software other than performance
In functional testing, we think of the program as a collection of functions. We test it in terms of its inputs and outputs.
How Some Experts Have Defined Quality
Fitness for use (Dr. Joseph M. Juran)
The totality of features and characteristics of a product that bear on its ability to satisfy a given need (American Society for Quality)
Conformance with requirements (Philip Crosby)
The total composite product and service characteristics of marketing, engineering, manufacturing, and maintenance through which the product and service in use will meet the expectations of the customer (Armand V. Feigenbaum)
Note the absence of "conforms to specifications."
Quality As Satisfiers and Dissatisfiers
Joseph Juran distinguishes between customer satisfiers and dissatisfiers as key dimensions of quality:
Satisfiers
• the right features
• adequate instruction
Dissatisfiers
• unreliable
• hard to use
• too slow
• incompatible with the customer's equipment
A Working Definition of Quality
Quality is value to some person.
---- Gerald M. Weinberg
Change Requests and Quality
A “defect” – in the eyes of a project stakeholder– can include anything about the program that causes the program to have lower value.
It’s appropriate to report any aspect of the software that, in your opinion (or in the opinion of a stakeholder whose interests you advocate) causes the program to have lower value.
Dimensions of Quality: FURPS
Functionality: e.g., test the accurate workings of each usage scenario
Usability: e.g., test the application from the perspective of convenience to the end user
Reliability: e.g., test that the application behaves consistently and predictably
Performance: e.g., test online response under average and peak loading
Supportability: e.g., test the ability to maintain and support the application under production use
A Broader Definition of Dimensions of Quality
Accessibility, capability, compatibility, concurrency, conformance to standards, efficiency, installability and uninstallability, localizability, maintainability, performance, portability, reliability, scalability, security, supportability, testability, usability.

Collectively, these are often called Qualities of Service, Nonfunctional Requirements, Attributes, or simply the "-ilities."
Test Ideas
A test idea is a brief statement that identifies a test that might be useful.
A test idea differs from a test case, in that the test idea contains no specification of the test workings, only the essence of the idea behind the test.
Test ideas are generators for test cases: potential test cases are derived from a test ideas list.
A key question for the tester or test analyst is which of these ideas are worth trying.
Exercise 2.3: Brainstorm Test Ideas (1/2)
We're about to brainstorm, so let's review the ground rules for brainstorming:
• The goal is to get lots of ideas. You brainstorm together to discover categories of possible tests: good ideas that you can refine later.
• There are more great ideas out there than you think.
• Don't criticize others' contributions. Jokes are OK, and are often valuable.
• Work later, alone or in a much smaller group, to eliminate redundancy, cut bad ideas, and refine and optimize the specific tests.
Often, these meetings have a facilitator (who runs the meeting) and a recorder (who writes good ideas onto flipcharts). These two keep their opinions to themselves.
Exercise 2.3: Brainstorm Test Ideas (2/2)
A field can accept integer values between 20 and 50.
What tests should you try?
A Test Ideas List for Integer-Input Tests
Common answers to the exercise would include:
Test        Why it's interesting         Expected result
20          Smallest valid value         Accepts it
19          Smallest - 1                 Reject, error msg
0           0 is always interesting      Reject, error msg
Blank       Empty field; what's it do?   Reject? Ignore?
49          Valid value                  Accepts it
50          Largest valid value          Accepts it
51          Largest + 1                  Reject, error msg
-1          Negative number              Reject, error msg
4294967296  2^32; integer overflow?      Reject, error msg
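The table above can be turned directly into executable checks. Here is a minimal sketch in Python: `accepts` is a hypothetical validator standing in for the field under test (the real field would be exercised through the UI or an API), and the table rows become (input, expected) pairs.

```python
def accepts(raw):
    """Hypothetical validator for the exercise: accepts integers 20..50 inclusive."""
    try:
        value = int(raw)
    except (TypeError, ValueError):
        return False  # blank, non-digits, wrong data type -> reject
    return 20 <= value <= 50

# Test ideas from the table, expressed as (input, expected-to-accept) pairs.
cases = [
    ("20", True), ("19", False), ("0", False), ("", False),
    ("49", True), ("50", True), ("51", False), ("-1", False),
    ("4294967296", False),  # 2^32
    ("20.5", False),        # wrong data type: decimal into integer
]
for raw, expected in cases:
    assert accepts(raw) == expected, f"{raw!r}: expected accept={expected}"
print("all boundary cases pass")
```

Note how little code is needed once the test ideas exist; the hard intellectual work is the idea list itself.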
Discussion 2.4: Where Do Test Ideas Come From?
Where would you derive test ideas lists?
• Models
• Specifications
• Customer complaints
• Brainstorming sessions among colleagues
A Catalog of Test Ideas for Integer-Input tests
• Nothing
• Valid value
• At LB of value
• At UB of value
• At LB of value - 1
• At UB of value + 1
• Outside LB of value
• Outside UB of value
• 0
• Negative
• At LB number of digits or chars
• At UB number of digits or chars
• Empty field (clear the default value)
• Outside UB number of digits or chars
• Non-digits
• Wrong data type (e.g., decimal into integer)
• Expressions
• Space
• Non-printing char (e.g., Ctrl+char)
• DOS filename reserved chars (e.g., "\ * . :")
• Upper ASCII (128-254)
• Upper case chars
• Lower case chars
• Modifiers (e.g., Ctrl, Alt, Shift-Ctrl, etc.)
• Function key (F2, F3, F4, etc.)
The Test-Ideas Catalog
A test-ideas catalog is a list of related test ideas that are usable under many circumstances. For example, the test ideas for numeric input fields can be catalogued together and used for any numeric input field.
In many situations, these catalogs are sufficient test documentation. That is, an experienced tester can often proceed with testing directly from these without creating documented test cases.
Apply a Test Ideas Catalog Using a Test Matrix
(The matrix has one column per field name and one row per catalog test idea; mark each cell as the idea is applied to that field.)
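A test matrix is just a grid, so it can be sketched as a nested dictionary. The field and idea names below are hypothetical, chosen only to illustrate the structure: rows come from the catalog, columns are the input fields of the application under test, and cells record the outcome as testing proceeds.

```python
# Rows: test ideas from the catalog. Columns: input fields (hypothetical names).
ideas = ["nothing", "at LB", "at UB", "LB - 1", "UB + 1", "non-digits"]
fields = ["quantity", "zip_code", "discount_pct"]

# Build an empty matrix; "" means the idea has not yet been applied to the field.
matrix = {idea: {field: "" for field in fields} for idea in ideas}

# Mark results as testing proceeds: "P" pass, "F" fail (bug report filed).
matrix["at LB"]["quantity"] = "P"
matrix["UB + 1"]["zip_code"] = "F"

# Print the matrix as a simple grid.
print("idea".ljust(12), *(f.ljust(14) for f in fields))
for idea, row in matrix.items():
    print(idea.ljust(12), *(row[f].ljust(14) for f in fields))
```

Empty cells make untested combinations visible at a glance, which is the main point of the matrix.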
Review: Core Concepts of Software Testing
What is Quality? Who are the Stakeholders?
What is a Defect?
What are Dimensions of Quality?
What are Test Ideas? Where are Test Ideas useful? Give some examples of Test Ideas.
Explain how a catalog of Test Ideas could be applied to a Test Matrix.
Principles of Software Testing for Testers
Module 4: Define Evaluation Mission
So? Purpose of Testing?
The typical testing group has two key priorities:
• Find the bugs (preferably in priority order).
• Assess the condition of the whole product (as a user will see it).
Sometimes, these conflict. The mission of assessment is the underlying reason for testing, from management's viewpoint. But if you aren't hammering hard on the program, you can miss key risks.
Missions of Test Groups Can Vary
• Find defects
• Maximize bug count
• Block premature product releases
• Help managers make ship / no-ship decisions
• Assess quality
• Minimize technical support costs
• Conform to regulations
• Minimize safety-related lawsuit risk
• Assess conformance to specification
• Find safe scenarios for use of the product (find ways to get it to work, in spite of the bugs)
• Verify correctness of the product
• Assure quality
A Different Take on Mission: Public vs. Private Bugs
A programmer’s public bug rate includes all bugs left in the code at check-in.
A programmer’s private bug rate includes all the bugs that are produced, including the ones fixed before check-in.
Estimates of private bug rates have ranged from 15 to 150 bugs per 100 statements.
What does this tell us about our task?
Defining the Test Approach
The test approach (or “testing strategy”) specifies the techniques that will be used to accomplish the test mission.
The test approach also specifies how the techniques will be used.
A good test approach is:
• Diversified
• Risk-focused
• Product-specific
• Practical
• Defensible
Heuristics for Evaluating Testing Approach
James Bach collected a series of heuristics for evaluating your test approach. For example, he says:
Testing should be optimized to find important problems fast, rather than attempting to find all problems with equal urgency.
Please note that these are heuristics: they won't always be the best choice for your context. But in different contexts, you'll find different ones very useful.
What Test Documentation Should You Use?
Test planning standards and templates
• Examples
• Some benefits and costs of using IEEE 829-based templates
• When are these appropriate?
Thinking about your requirements for test documentation
• Requirements considerations
• Questions to elicit information about test documentation requirements for your project
Write a Purpose Statement for Test Documentation
Try to describe your core documentation requirements in one sentence that doesn’t have more than three components.
Examples:
• The test documentation set will primarily support our efforts to find bugs in this version, to delegate work, and to track status.
• The test documentation set will support ongoing product and test maintenance over at least 10 years, will provide training material for new group members, and will create archives suitable for regulatory or litigation use.
Review: Define Evaluation Mission
What is a Test Mission? What is your Test Mission?
What makes a good Test Approach (Test Strategy)?
What is a Test Documentation Mission? What is your Test Documentation Goal?
Principles of Software Testing for Testers
Module 5: Test & Evaluate
Test and Evaluate – Part One: Test
In this module, we drill into Test and Evaluate.
This addresses the "How?" question: How will you test those things?
This module focuses on the activity Implement Test.
Earlier, we covered Test-Idea Lists, which are an input here.
In the next module, we'll cover Analyze Test Failures, the second half of Test and Evaluate.
Review: Defining the Test Approach
In Module 4, we covered the test approach. A good test approach is:
• Diversified
• Risk-focused
• Product-specific
• Practical
• Defensible
The techniques you apply should follow your test approach.
Discussion Exercise 5.1: Test Techniques
There are as many as 200 published testing techniques. Many of the ideas are overlapping, but there are common themes.
Similar-sounding terms often mean different things, e.g.:
• User testing
• Usability testing
• User interface testing
What are the differences among these techniques?
Dimensions of Test Techniques
Think of the testing you do in terms of five dimensions:
• Testers: who does the testing.
• Coverage: what gets tested.
• Potential problems: why you're testing (what risk you're testing for).
• Activities: how you test.
• Evaluation: how to tell whether the test passed or failed.
Test techniques often focus on one or two of these, leaving the rest to the skill and imagination of the tester.
Test Techniques—Dominant Test Approaches
Of the 200+ published functional testing techniques, there are ten basic themes. They capture the techniques in actual practice. In this course, we call them:
• Function testing
• Equivalence analysis
• Specification-based testing
• Risk-based testing
• Stress testing
• Regression testing
• Exploratory testing
• User testing
• Scenario testing
• Stochastic or random testing
“So Which Technique Is the Best?”
(Figure: techniques A through H plotted against the five dimensions: testers, coverage, potential problems, activities, evaluation.)
Each technique has strengths and weaknesses.
Think in terms of complements.
There is no "one true way."
Mixing techniques can improve coverage.
Apply Techniques According to the Lifecycle
The test approach changes over the project (Inception, Elaboration, Construction, Transition). Some techniques work well in early phases; others in later ones. Align the techniques to iteration objectives.

Early iterations                             Late iterations
A limited set of focused tests               Many varied tests
A few components of software under test      Large system under test
Simple test environment                      Complex test environment
Focus on architectural & requirement risks   Focus on deployment risks
Module 5 Agenda
Overview of the workflow: Test and Evaluate
Defining test techniques
Individual techniques:
• Function testing
• Equivalence analysis
• Specification-based testing
• Risk-based testing
• Stress testing
• Regression testing
• Exploratory testing
• User testing
• Scenario testing
• Stochastic or random testing
Using techniques together
At a Glance: Function Testing

Tag line: Black box unit testing
Objective: Test each function thoroughly, one at a time.
Testers: Any
Coverage: Each function and user-visible variable
Potential problems: A function does not work in isolation
Activities: Whatever works
Evaluation: Whatever works
Complexity: Simple
Harshness: Varies
SUT readiness: Any stage
Strengths & Weaknesses: Function Testing
Representative cases
• Spreadsheet: test each item in isolation.
• Database: test each report in isolation.
Strengths
• Thorough analysis of each item tested
• Easy to do as each function is implemented
Blind spots
• Misses interactions
• Misses exploration of the benefits offered by the program
At a Glance: Equivalence Analysis (1/2)
Tag line: Partitioning, boundary analysis, domain testing
Objective: There are too many test cases to run. Use a stratified sampling strategy to select a few test cases from a huge population.
Testers: Any
Coverage: All data fields, and simple combinations of data fields. Data fields include input, output, and (to the extent they can be made visible to the tester) internal and configuration variables.
Potential problems: Data, configuration, error handling
At a Glance: Equivalence Analysis (2/2)
Activities: Divide the set of possible values of a field into subsets, and pick values to represent each subset. Typical values will be at boundaries. More generally, the goal is to find a "best representative" for each subset, and to run tests with these representatives. Advanced approach: combine tests of several "best representatives"; there are several approaches to choosing an optimal small set of combinations.
Evaluation: Determined by the data
Complexity: Simple
Harshness: Designed to discover harsh single-variable tests and harsh combinations of a few variables
SUT readiness: Any stage
Strengths & Weaknesses: Equivalence Analysis
Representative cases
• Equivalence analysis of a simple numeric field.
• Printer compatibility testing (a multidimensional variable that doesn't map to a simple numeric field, but stratified sampling is essential).
Strengths
• Finds the highest-probability errors with a relatively small set of tests.
• Intuitively clear approach; generalizes well.
Blind spots
• Errors that are not at boundaries or in obvious special cases.
• The actual sets of possible values are often unknowable.
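The core of equivalence analysis can be shown mechanically. This sketch partitions the domain of a numeric field valid in [20, 50] (the same hypothetical field used earlier in the course) into three subsets and picks one "best representative" per subset, favoring the boundary nearest the valid range.

```python
# Equivalence analysis of a numeric field valid in [20, 50] (hypothetical field).
LB, UB = 20, 50

# Partition the input domain into equivalence classes (sampled finitely here).
partitions = {
    "below range": range(-10, LB),      # invalid class
    "valid range": range(LB, UB + 1),   # valid class
    "above range": range(UB + 1, 100),  # invalid class
}

def representative(name, subset):
    """Pick the boundary value nearest the valid range as the best representative."""
    return max(subset) if name == "below range" else min(subset)

reps = {name: representative(name, s) for name, s in partitions.items()}
print(reps)  # {'below range': 19, 'valid range': 20, 'above range': 51}
```

One test per class replaces dozens of near-duplicate tests, which is exactly the stratified-sampling idea named in the objective above.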
Optional Exercise 5.2: GUI Equivalence Analysis
Pick an app that you know and some dialogs (e.g., MS Word and its Print, Page Setup, and Font Format dialogs).
Select a dialog. Identify each field, and for each field:
• What is the type of the field (integer, real, string, ...)?
• List the range of entries that are "valid" for the field.
• Partition the field and identify boundary conditions.
• List the entries that are almost too extreme and too extreme for the field.
• List a few test cases for the field and explain why the values you chose are the most powerful representatives of their sets (for showing a bug).
• Identify any constraints imposed on this field by other fields.
At a Glance: Specification-Based Testing

Tag line: Verify every claim
Objective: Check conformance with every statement in every spec, requirements document, etc.
Testers: Any
Coverage: Documented requirements, features, etc.
Potential problems: Mismatch of implementation to spec
Activities: Write and execute tests based on the specs. Review and manage docs and traceability.
Evaluation: Does behavior match the spec?
Complexity: Depends on the spec
Harshness: Depends on the spec
SUT readiness: As soon as modules are available
Strengths & Weaknesses: Spec-Based Testing
Representative cases
• Traceability matrix: tracks the test cases associated with each specification item.
• User documentation testing.
Strengths
• Critical defense against warranty claims, fraud charges, and loss of credibility with customers.
• Effective for managing the scope and expectations of regulatory-driven testing.
• Reduces support costs and customer complaints by ensuring that no false or misleading representations are made to customers.
Blind spots
• Any issues not in the specs, or treated badly in the specs/documentation.
Traceability Tool for Specification-Based Testing
Stmt 1 Stmt 2 Stmt 3 Stmt 4 Stmt 5
Test 1 X X X
Test 2 X X
Test 3 X X X
Test 4 X X
Test 5 X X
Test 6 X X
The Traceability Matrix
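A traceability matrix like the one above is easy to represent and query in code. This sketch (with hypothetical test and statement names) maps each test case to the spec statements it covers, then inverts the mapping to answer the two questions the matrix exists for: which tests to re-run when a statement changes, and which statements have no coverage at all.

```python
# Traceability matrix: test case -> spec statements it covers (hypothetical names).
coverage = {
    "Test 1": {"Stmt 1", "Stmt 2", "Stmt 3"},
    "Test 2": {"Stmt 1", "Stmt 4"},
    "Test 3": {"Stmt 2", "Stmt 3", "Stmt 5"},
}

# Invert the matrix: spec statement -> the tests that exercise it.
stmt_to_tests = {}
for test, stmts in coverage.items():
    for stmt in stmts:
        stmt_to_tests.setdefault(stmt, set()).add(test)

# Which tests must re-run if Stmt 2 changes?
print(sorted(stmt_to_tests["Stmt 2"]))  # ['Test 1', 'Test 3']

# Which statements have no test at all?
all_stmts = {f"Stmt {i}" for i in range(1, 6)}
print(sorted(all_stmts - set(stmt_to_tests)))  # [] (every statement is covered)
```

In practice the same structure is usually kept in a spreadsheet or test-management tool, but the queries are the same.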
Optional Exercise 5.5: What “Specs” Can You Use?
Challenge: getting information in the absence of a spec. What substitutes are available?
Example: the user manual. Think of this as a commercial warranty for what your product does.
What other "specs" can you or should you be using to test?
Exercise 5.5—Specification-Based Testing
Here are some ideas for sources that you can consult when specifications are incomplete or incorrect:
• Software change memos that come with new builds of the program
• User manual draft (and the previous version's manual)
• Product literature
• Published style guides and UI standards
Definitions—Risk-Based Testing
Three key meanings:
1. Find errors (a risk-based approach to the technical tasks of testing)
2. Manage the process of finding errors (risk-based test management)
3. Manage the testing project and the risk posed by (and to) testing in its relationship to the overall project (risk-based project management)
We'll look primarily at risk-based testing (#1), proceeding later to risk-based test management. The project management risks are very important, but out of scope for this class.
At a Glance: Risk-Based Testing

Tag line: Find big bugs first
Objective: Define, prioritize, and refine tests in terms of the relative risk of issues we could test for
Testers: Any
Coverage: By identified risk
Potential problems: Identifiable risks
Activities: Use qualities of service, risk heuristics, and bug patterns to identify risks
Evaluation: Varies
Complexity: Any
Harshness: Harsh
SUT readiness: Any stage
Strengths & Weaknesses: Risk-Based Testing
Representative cases
• Equivalence class analysis, reformulated.
• Test in order of frequency of use.
• Stress tests, error-handling tests, security tests.
• Sample from a predicted-bugs list.
Strengths
• Optimal prioritization (if we get the risk list right)
• High-power tests
Blind spots
• Risks not identified, or that are surprisingly more likely.
• Some "risk-driven" testers seem to operate subjectively.
  - How will I know what coverage I've reached?
  - Do I know that I haven't missed something critical?
Optional Exercise 5.6: Risk-Based Testing
You are testing Amazon.com (or pick another familiar application).
First brainstorm: what are the functional areas of the app?
Then evaluate risks:
• What are some of the ways that each of these could fail?
• How likely do you think they are to fail? Why?
• How serious would each of the failure types be?
At a Glance: Stress Testing
Tag line: Overwhelm the product
Objective: Learn what failure at extremes tells us about changes needed in the program's handling of normal cases
Testers: Specialists
Coverage: Limited
Potential problems: Error-handling weaknesses
Activities: Specialized
Evaluation: Varies
Complexity: Varies
Harshness: Extreme
SUT readiness: Late stage
Strengths & Weaknesses: Stress Testing
Representative cases
• Buffer overflow bugs
• High volumes of data, device connections, long transaction chains
• Low memory conditions, device failures, viruses, other crises
• Extreme load
Strengths
• Exposes weaknesses that will arise in the field.
• Exposes security risks.
Blind spots
• Weaknesses that are not made more visible by stress.
At a Glance: Regression Testing

Tag line: Automated testing after changes
Objective: Detect unforeseen consequences of change
Testers: Varies
Coverage: Varies
Potential problems: Side effects of changes; unsuccessful bug fixes
Activities: Create automated test suites and run them against every (major) build
Evaluation: Varies
Complexity: Varies
Harshness: Varies
SUT readiness: For unit tests, early; for GUI tests, late
Strengths & Weaknesses: Regression Testing

Representative cases
• Bug regression, old-fix regression, general functional regression
• Automated GUI regression test suites
Strengths
• Cheap to execute
• Configuration testing
• Regulator friendly
Blind spots
• "Immunization curve"
• Anything not covered in the regression suite
• Cost of maintaining the regression suite
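A minimal automated regression suite can be sketched in a few lines. The unit under test and the bug numbers here are hypothetical; the point is the structure: each past bug and each core behavior becomes a recorded case that is re-run against every build, so an unsuccessful fix or a side effect of change fails loudly.

```python
def apply_discount(price, pct):
    """Hypothetical unit under test: price after a pct% discount, rounded to cents."""
    return round(price * (100 - pct) / 100, 2)

# Regression cases: ((args), expected result). Each row documents why it exists.
regression_cases = [
    ((19.99, 0), 19.99),   # bug regression: hypothetical defect with zero discount
    ((100.0, 25), 75.0),   # general functional regression: core behavior
    ((10.0, 100), 0.0),    # old-fix regression: full discount edge case
]

for args, expected in regression_cases:
    actual = apply_discount(*args)
    assert actual == expected, f"regression in {args}: got {actual}, want {expected}"
print("regression suite passed")
```

In a real project the same cases would live in a test runner (unittest, pytest, a GUI automation tool) and run in the build pipeline, but the anatomy is identical: frozen inputs, frozen expected outputs, run after every change.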
At a Glance: Exploratory Testing
Tag line: Simultaneous learning, planning, and testing
Objective: Simultaneously learn about the product and about the test strategies to reveal the product and its defects
Testers: Explorers
Coverage: Hard to assess
Potential problems: Everything unforeseen by planned testing techniques
Activities: Learn, plan, and test at the same time
Evaluation: Varies
Complexity: Varies
Harshness: Varies
SUT readiness: Medium to late; use cases must work
Strengths & Weaknesses: Exploratory Testing

Representative cases
• Skilled exploratory testing of the full product
• Rapid testing and emergency testing (including thrown-over-the-wall test-it-today)
• Troubleshooting / follow-up testing of defects
Strengths
• Customer-focused, risk-focused
• Responsive to changing circumstances
• Finds bugs that are otherwise missed
Blind spots
• The less we know, the more we risk missing.
• Limited by each tester's weaknesses (can mitigate this with careful management)
• This is skilled work; juniors aren't very good at it.
At a Glance: User Testing
Tag line: Strive for realism; let's try real humans (for a change)
Objective: Identify failures in the overall human/machine/software system.
Testers: Users
Coverage: Very hard to measure
Potential problems: Items that will be missed by anyone other than an actual user
Activities: Directed by the user
Evaluation: User's assessment, with guidance
Complexity: Varies
Harshness: Limited
SUT readiness: Late; has to be fully operable
Strengths & Weaknesses: User Testing

Representative cases
• Beta testing
• In-house lab using a stratified sample of the target market
• Usability testing
Strengths
• Exposes design issues
• Finds areas with high error rates
• Can be monitored with flight recorders
• In-house tests can focus on controversial areas
Blind spots
• Coverage not assured
• Weak test cases
• Beta-test technical results are mixed
• Must distinguish marketing betas from technical betas
At a Glance: Scenario Testing
Tag line: Instantiation of a use case; do something useful, interesting, and complex
Objective: Challenging cases that reflect real use
Testers: Any
Coverage: Whatever the stories touch
Potential problems: Complex interactions that happen in real use by experienced users
Activities: Interview stakeholders and write screenplays, then implement tests
Evaluation: Any
Complexity: High
Harshness: Varies
SUT readiness: Late; requires stable, integrated functionality
Strengths & Weaknesses: Scenario Testing
Representative cases
• Use cases, or sequences involving combinations of use cases
• Appraise the product against business rules, customer data, or competitors' output
• Hans Buwalda's "soap opera testing"
Strengths
• Complex, realistic events; can handle (help with) situations that are too complex to model
• Exposes failures that occur (develop) over time
Blind spots
• Single-function failures can make this test inefficient.
• Must think carefully to achieve good coverage.
At a Glance: Stochastic or Random Testing (1/2)
Tag line: Monkey testing; high-volume testing with new cases all the time
Objective: Have the computer create, execute, and evaluate huge numbers of tests. The individual tests are not all that powerful, nor all that compelling. The power of the approach lies in the large number of tests. These broaden the sample, and they may test the program over a long period of time, giving us insight into longer-term issues.
At a Glance: Stochastic or Random Testing (2/2)
Testers: Machines
Coverage: Broad but shallow; problems with stateful apps
Potential problems: Crashes and exceptions
Activities: Focus on test generation
Evaluation: Generic, state-based
Complexity: Complex to generate, but individual tests are simple
Harshness: Weak individual tests, but huge numbers of them
SUT readiness: Any
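The generic, machine-driven evaluation described above can be sketched in a few lines. Here `parse_quantity` is a hypothetical unit under test; the monkey generates thousands of random inputs and applies the weakest but cheapest oracle there is: graceful rejection is fine, but any unexpected exception counts as a failure.

```python
import random

def parse_quantity(raw):
    """Hypothetical unit under test; should reject bad input gracefully."""
    value = int(raw)  # raises ValueError on non-numeric input
    if not 20 <= value <= 50:
        raise ValueError("out of range")
    return value

random.seed(1)  # fixed seed so a failing run can be reproduced
failures = 0
for _ in range(10_000):
    # Generate a random string of digits, minus signs, letters, and spaces.
    raw = "".join(random.choice("0123456789-x ")
                  for _ in range(random.randint(0, 6)))
    try:
        parse_quantity(raw)
    except ValueError:
        pass                  # graceful rejection: this test passes
    except Exception:         # crash / unexpected exception: this test fails
        failures += 1
print("unexpected failures:", failures)
```

Note the two hallmarks of the technique: the individual tests are trivially weak, and the fixed random seed matters, because without it a failure found overnight cannot be replayed for debugging.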
Combining Techniques (Revisited)
A test approach should be diversified. Applying opposite techniques can improve coverage. Often one technique can extend another.
(Figure: techniques A through H plotted against the five dimensions: testers, coverage, potential problems, activities, evaluation.)
Applying Opposite Techniques to Boost Coverage

Contrast these two techniques:

Regression
• Inputs: old test cases and analyses leading to new test cases
• Outputs: archival test cases, preferably well documented, and bug reports
• Better for: reuse across multi-version products

Exploration
• Inputs: models or other analyses that yield new tests
• Outputs: scribbles and bug reports
• Better for: finding new bugs; scouting new areas, risks, or ideas
Applying Complementary Techniques Together

Regression testing alone suffers fatigue: the bugs get fixed, and new runs add little information. This is a symptom of weak coverage. Combine automation with suitable variance, e.g., risk-based equivalence analysis. Coverage of the combination can beat the sum of the parts.
(Diagram: overlapping regions of equivalence, risk-based, and regression testing.)
How To Adopt New Techniques
1. Answer these questions:
• What techniques do you use in your test approach now? What is its greatest shortcoming?
• What one technique could you add to make the greatest improvement, consistent with a good test approach:
  - Risk-focused?
  - Product-specific?
  - Practical?
  - Defensible?
2. Apply that additional technique until proficient.
3. Iterate.