
Page 1: Michael Felderer - Leveraging Defect Taxonomies for Testing - EuroSTAR 2012

Michael Felderer, QE LaB Business Services

Leveraging Defect Taxonomies for Testing

Results from a Case Study

www.eurostarconferences.com

@esconfs #esconfs

Page 2

In practice, defect taxonomies are used only for the a-posteriori classification of failures, but they can also be applied to …

• control the design of test cases

• provide a precise statement about release quality

• allocate testing resources

Continuous integration of defect taxonomies into the test process

Motivation

Page 3

Defect Taxonomy Supported Testing in a Nutshell

Before DTST: Test Planning and Control → Test Analysis and Design → Test Implementation and Execution → Test Evaluation and Reporting

DTST: the same four phases (Test Planning and Control, Test Analysis and Design, Test Implementation and Execution, Test Evaluation and Reporting), now supported by the defect taxonomy in each phase

Page 4

Integration of defect taxonomies into an ISTQB-like standard test process

Features and test process extensions

• Prioritization of requirements, defect categories and failures

• Effective product-specific defect taxonomy

• Linkage of requirements to defect categories and failures

• Test design based on defect categories

• Assignment and analysis of failures after a test cycle

Decision support for application of defect taxonomy supported testing (DTST)

Defect Taxonomy Supported Testing

Page 5

Defect Taxonomy Supported Testing

(1) Test Planning

• Step 1: Analysis and Prioritization of Requirements

(2) Test Analysis and Design

• Step 2: Creation of a Product-Specific Defect Taxonomy

• Step 3: Linkage of Requirements and Defect Categories

• Step 4: Definition of a Test Strategy with Test Patterns

(3) Test Execution

(4) Test Evaluation and Reporting

• Step 5: Analysis and Categorization of Failures after a Test Cycle

Page 6

Prioritized requirements are assigned to use cases defined by business processes and user interfaces

Step 1: Analysis and Prioritization of Requirements

Use cases (e.g. UC Identification, UC Search Client) each list their assigned requirements:

| REQ# | Description | Priority of REQ |
|------|-------------|-----------------|
| …    | …           | …               |
| 39   | ANF_0053 Logging and Auditing | normal |
| 40   | ANF_0057 LOGIN via OID        | high   |

Page 7

The top-level categories of Beizer are mapped to product-specific defect categories, which are then further refined to concrete low-level defect categories with an assigned identifier and severity

Step 2: Creation of a Product-Specific Defect Taxonomy

| Defect Category of Beizer | Product-Specific Category | DC | Description of DC | Severity |
|---|---|---|---|---|
| 1xxx Requirements | Unsuitability of the system taking the organizational processes and procedures into account | R1 | Client not identified correctly | critical |
| 11xx Requirements incorrect | | R2 | Goals and measures of case manager are not processed correctly | normal |
| 16xx Requirement changes | | R3 | Update and termination of case incorrect | normal |
| 12xx Logic, 13xx Completeness | Incorrect handling of the syntactic or semantic constraints of the GUI | R4 | GUI layout; syntactic specifications of input fields; error messages | major |
| 4xxx Data | | D1 | Incorrect access / update of client information, states etc. | normal |
| 42xx Data access and handling | | D2 | Erroneous save of critical data | critical |
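As a sketch, a product-specific defect taxonomy like the one above can be held in a small data structure that tooling can query, e.g. to pull out the critical categories. The class and field names are illustrative assumptions, not the authors' tooling.

```python
from dataclasses import dataclass

@dataclass
class DefectCategory:
    dc_id: str          # low-level identifier, e.g. "R1"
    beizer_code: str    # Beizer category, e.g. "1xxx Requirements"
    description: str
    severity: str       # "critical", "major", "normal", "minor"

# Entries mirror the taxonomy table on this slide.
TAXONOMY = [
    DefectCategory("R1", "1xxx Requirements", "Client not identified correctly", "critical"),
    DefectCategory("R2", "11xx Requirements incorrect",
                   "Goals and measures of case manager are not processed correctly", "normal"),
    DefectCategory("R3", "16xx Requirement changes",
                   "Update and termination of case incorrect", "normal"),
    DefectCategory("R4", "12xx Logic, 13xx Completeness",
                   "Incorrect handling of the syntactic or semantic constraints of the GUI", "major"),
    DefectCategory("D1", "4xxx Data", "Incorrect access / update of client information", "normal"),
    DefectCategory("D2", "42xx Data access and handling", "Erroneous save of critical data", "critical"),
]

def critical_categories(taxonomy):
    """Return the ids of all defect categories with severity 'critical'."""
    return [dc.dc_id for dc in taxonomy if dc.severity == "critical"]

print(critical_categories(TAXONOMY))  # ['R1', 'D2']
```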

Page 8

Experience-based assignment of requirements to defect categories

Peer review of assignment important

Requirements should be assigned to only one defect category

Step 3: Linkage of Requirements and Defect Categories

(The slide repeats the requirements table from Step 1 and the product-specific defect taxonomy from Step 2 side by side to illustrate the linkage.)
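A minimal sketch of how the one-category rule could be enforced mechanically during the linkage step. The concrete assignments in the example (ANF_0053 → D1, ANF_0057 → R1) are hypothetical; the slides do not state them.

```python
def link_requirements(pairs):
    """Build a requirement -> defect-category map, rejecting double links."""
    linkage = {}
    for req, dc in pairs:
        if req in linkage and linkage[req] != dc:
            # Each requirement may be assigned to only one defect category.
            raise ValueError(f"{req} linked to both {linkage[req]} and {dc}")
        linkage[req] = dc
    return linkage

# Hypothetical linkage of the two requirements shown in Step 1.
links = link_requirements([
    ("ANF_0053", "D1"),   # Logging and Auditing (normal priority)
    ("ANF_0057", "R1"),   # LOGIN via OID (high priority)
])
print(links["ANF_0057"])  # R1
```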

Page 9

A test pattern consists of a test design technique with three test strengths and has assigned defect categories

Step 4: Definition of a Test Strategy with Test Patterns

| Group | Id | Test Design Technique | Defect Categories | Test Strength 1 (low) | Test Strength 2 (normal) | Test Strength 3 (high) |
|---|---|---|---|---|---|---|
| S: Sequence-oriented | S1 | Use case-based testing; process cycle tests | R1, R2, R3, F1, F2, F3 | Main paths | Branch coverage | Loop coverage |
| S: Sequence-oriented | S3 | State transition testing | I1, I2, F7, F8, F9 | State coverage | State transition coverage | Path coverage |
| D: Data-oriented | D1 | CRUD (Create, Read, Update and Delete) | D1, D2 | Data cycle tests | Data cycle tests | |
| D: Data-oriented | D3 | EP: Equivalence partitioning | F3, F5, F6 | EP valid | EP valid+invalid | EP valid+invalid |
| D: Data-oriented | D4 | BVA: Boundary value analysis | F3, F5, F6 | BVA valid | BVA valid+invalid | BVA values at boundaries |
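The pattern table can be read as a lookup from pattern id and test strength to the coverage criterion to apply. This sketch covers the fully legible rows; the dictionary structure and function name are assumptions, not the authors' tooling.

```python
# Each entry: pattern id -> (technique, [criterion for strength 1, 2, 3]).
TEST_PATTERNS = {
    "S1": ("Use case-based testing; process cycle tests",
           ["Main paths", "Branch coverage", "Loop coverage"]),
    "S3": ("State transition testing",
           ["State coverage", "State transition coverage", "Path coverage"]),
    "D3": ("EP: Equivalence partitioning",
           ["EP valid", "EP valid+invalid", "EP valid+invalid"]),
    "D4": ("BVA: Boundary value analysis",
           ["BVA valid", "BVA valid+invalid", "BVA values at boundaries"]),
}

def coverage_for(pattern_id, strength):
    """Return (technique, coverage criterion) for a strength of 1, 2 or 3."""
    technique, criteria = TEST_PATTERNS[pattern_id]
    return technique, criteria[strength - 1]

print(coverage_for("S1", 3))  # ('Use case-based testing; process cycle tests', 'Loop coverage')
```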

Page 10

Test Design and Execution

| REQ | Text | PR |
|---|---|---|
| 1 | Workflow | High |
| 2 | Data | Normal |

(The slide repeats the product-specific defect taxonomy from Step 2 and the test pattern table from Step 4.)

Test Design: the test strength is derived from the priority of the requirement (PR) and the severities of the linked defect categories and failures (SDC, SF):

| PR | SDC, SF | Test strength |
|---|---|---|
| high | blocker, critical, major | 3 |
| normal | blocker, critical, major | 3 |
| normal | major, normal, minor | 2 |
| low | minor, trivial | 1 |

Test Execution

| ID | Description | Result | Comments | Severity |
|---|---|---|---|---|
| 1 | see test spec. | pass | no | |
| 2 | see test spec. | pass | no | |
| 3 | see test spec. | fail | no | critical |
| 4 | see test spec. | pass | no | |
| 5 | see test spec. | fail | no | minor |

Page 11

Defects are exported from Bugzilla

The severity assigned by testers is compared to the severity of the defect category and the priority of the requirement

Weights are adapted if needed

Precise statement about release quality is possible

• Additional information valuable for release planning

Step 5: Analysis and Categorization of Failures after a Test Cycle
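A sketch of the comparison this step performs: failures exported from Bugzilla carry the tester-assigned severity, and mismatches against the severity of the linked defect category are flagged for review. The record layout and field names are assumptions, not the actual Bugzilla export format.

```python
# Severity per defect category (from the Step 2 taxonomy).
dc_severity = {"R1": "critical", "D1": "normal", "D2": "critical"}

# Failures as exported after a test cycle (illustrative records).
failures = [
    {"id": 3, "dc": "R1", "tester_severity": "critical"},
    {"id": 5, "dc": "D1", "tester_severity": "minor"},
]

def severity_mismatches(failures, dc_severity):
    """Return ids of failures whose tester severity differs from the
    severity of their linked defect category."""
    return [f["id"] for f in failures
            if f["tester_severity"] != dc_severity[f["dc"]]]

print(severity_mismatches(failures, dc_severity))  # [5]
```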

Page 12

Public health insurance domain

Case Study Overview

| Characteristics | Project A | Project B |
|---|---|---|
| Area | Application for case managers | Administration of clients of the public health insurance institution |
| Staff | About 7 | Up to 10 |
| Duration | 9 months development, now under maintenance | 9 months development, now under maintenance |
| Number of iterations | 4 | 3 |
| Size (Reqs + Use Cases) | 41 + 14 | 28 + 20 |
| Ratio of system testing | 27% of overall project effort | 28% of overall project effort |
| Test Process | DTST (based on ISTQB) | ISTQB |

Page 13

Smaller number of test cases with higher effectiveness

More goal-oriented test cases

Case Study Results

| Metrics | Project A | Project B |
|---|---|---|
| Number of Requirements (NOR) | 41 | 28 |
| Number of Use Cases (NUC) | 14 | 20 |
| SIZE (NOR + NUC) | 55 | 48 |
| Number of Tests (NOT) | 148 | 170 |
| Number of Failures (NOF) | 169 | 114 |
| NOT / SIZE | 2.69 | 3.54 |
| NOF / NOT | 1.14 | 0.67 |
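The two derived metrics in the bottom rows follow directly from the counts above; `metrics` is an illustrative helper name, with the input numbers taken from the slide.

```python
def metrics(nor, nuc, not_, nof):
    """Return (tests per size unit, failures per test), rounded to 2 digits."""
    size = nor + nuc
    return round(not_ / size, 2), round(nof / not_, 2)

print(metrics(41, 14, 148, 169))  # Project A: (2.69, 1.14)
print(metrics(28, 20, 170, 114))  # Project B: (3.54, 0.67)
```

Project A needed fewer tests relative to its size yet found more failures per test, which is the basis for the "smaller number of test cases with higher effectiveness" claim.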

Page 14

Estimation is structured according to the phases and steps of the test process

Comparison of cost (measured in time) of DTST and ISTQB

Break-even is reached when the cost of DTST falls below the cost of ISTQB

Adaptation and simulation with parameters

• Analysis and prioritization of a requirement

• Linkage of a requirement to defect category

• Design and implementation of a test case

• Execution time per test case

• Maintenance time per test case

Cost Comparison and Decision of Application

Page 15

Cost Comparison in Case Study

Per-phase costs (measured in time) and difference (ISTQB − DTST):

| Phase | ISTQB | DTST | Diff of Costs |
|---|---|---|---|
| Test Planning | 50.00 | 100.00 | -50.00 |
| Test Analysis and Design | 200.00 | 166.00 | 34.00 |
| Test Execution, Evaluation, Maintenance (per test cycle) | 110.00 | 92.50 | 17.50 |

Cumulative costs after each phase and test cycle:

| Phase | ISTQB | DTST | Diff of Costs |
|---|---|---|---|
| TP | 50.00 | 100.00 | -50.00 |
| TAD | 250.00 | 266.00 | -16.00 |
| TC1 | 360.00 | 358.50 | 1.50 |
| TC2 | 470.00 | 451.00 | 19.00 |
| TC3 | 580.00 | 543.50 | 36.50 |
| TC4 | 690.00 | 636.00 | 54.00 |
| TC5 | 800.00 | 728.50 | 71.50 |
| TC6 | 910.00 | 821.00 | 89.00 |
| TC7 | 1020.00 | 913.50 | 106.50 |
| TC8 | 1130.00 | 1006.00 | 124.00 |
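The cumulative figures follow from the three per-phase costs; a quick recomputation (illustrative helper names) also locates the break-even point, the first phase where DTST's cumulative cost drops below ISTQB's.

```python
def cumulative_costs(tp, tad, per_cycle, cycles=8):
    """Cumulative cost after test planning (TP), test analysis and
    design (TAD), and each test cycle TC1..TCn."""
    costs = [tp, tp + tad]
    costs += [tp + tad + n * per_cycle for n in range(1, cycles + 1)]
    return costs

labels = ["TP", "TAD"] + [f"TC{n}" for n in range(1, 9)]
istqb = cumulative_costs(50.0, 200.0, 110.0)
dtst = cumulative_costs(100.0, 166.0, 92.5)

break_even = next(l for l, a, b in zip(labels, istqb, dtst) if b < a)
print(break_even)            # TC1
print(istqb[-1] - dtst[-1])  # 124.0 saved after TC8
```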

Page 16

Cost Comparison Scenarios

[Line chart: cumulative cost (Ph) of DTST vs. ISTQB over TP, TAD, TC1–TC8, with an average test execution time of 0.5h; y-axis 0 to 1200.]

[Line chart: the same comparison with an average test execution time of 0.1h; y-axis 0 to 600.]

Page 17

Integration of defect taxonomies into the standard ISTQB test process

Decision support for application

Benefits

• Goal-oriented test cases

• Smaller number of more effective test cases

• Support for a precise release quality statement

• Clear cost estimation procedure available

Summary

Page 18

Dr. Michael Felderer

Technikerstr. 21a

A-6020 Innsbruck

Austria

Tel. +43 680 1238038

[email protected]

www.qe-lab.com

Questions or Suggestions?

Dipl.-Ing. Armin Beer

Helenenstr. 114

A-2500 Baden

Austria

Tel. +43 676 65055670

[email protected]

www.arminbeer.at