Testability: Factors and Strategy

DESCRIPTION

Keynote, Google Test Automation Conference, Hyderabad, India. October 28, 2010.

TRANSCRIPT

Page 1: Testability: Factors and Strategy

TESTABILITY: FACTORS AND STRATEGY

Robert V. Binder
[email protected]

Google Test Automation Conference
Hyderabad, October 28, 2010

© 2010, Robert V. Binder

Page 2: Testability: Factors and Strategy

Overview

• Why testability matters

• White box testability

• Black box testability

• Test automation

• Strategy

• Q&A


Page 3: Testability: Factors and Strategy

WHY TESTABILITY MATTERS


Page 4: Testability: Factors and Strategy

Testability Economics

• Assumptions
  • Sooner is better
  • Escapes are bad
  • Fewer tests means more escapes
  • Fixed budget – how to optimize?
• Efficiency: average tests per unit of effort
• Effectiveness: average probability of killing a bug per unit of effort
• Higher testability: more and better tests at the same cost
• Lower testability: fewer, weaker tests at the same cost

Testability defines the limits on producing complex systems with an acceptable risk of costly or dangerous defects.

Page 5: Testability: Factors and Strategy

What Makes an SUT Testable?

• Controllability
  • What do we have to do to run a test case?
  • How hard (expensive) is it to do this?
  • Does the SUT make it impractical to run some kinds of tests?
  • Given a testing goal, do we know enough to produce an adequate test suite?
  • How much tooling can we afford?
• Observability
  • What do we have to do to determine test pass/fail?
  • How hard (expensive) is it to do this?
  • Does the SUT make it impractical to fully determine test results?
  • Do we know enough to decide pass/fail/DNF?

Page 6: Testability: Factors and Strategy

Testability Factors

[Figure: a fishbone diagram of the factors feeding overall TESTABILITY. Major branches: Representation (traceability; requirements specification that is objective, feasible, quantifiable per IEEE 830; design documentation per IEEE 1016; completeness, currency, and correspondence among SRS/SDD, SDD/SUT, and SUT/test suite), Implementation (separation of concerns; user interface; control strategy; architectural and collaboration packaging; configuration management; structure, encapsulation, polymorphism, inheritance, complexity, Demeter compliance; fault sensitivity, input distribution, performance tweaks; external interface and API, determinism, exception handling, memory management, control flow, time sensitivity, concurrency, multiprogramming, non-deterministic control, shared resources; standard mechanisms, consistent usage, restart/recovery), Built-in Test (reporter; set/reset; reusable; state-based; domain sampling; trustworthy; no side effects; preconditions, postconditions, class invariants, assertions; driver generation from specification; granularity; intermediate results; safety), Test Suite (documented oracle; configuration management and control; tool-based, specification-based, automated, efficient; test plan, test design, test cases, test procedures, test result schema, test execution log, test incident report, test summary; reusable, feasible, traceable, verified; developer and customer review; closed-loop assessment), Test Tools (initialize; script edit; specification-based test generator; test suite management; static code analyzer; configuration management; instrumentor; incident tracker; runtime trace; reporting; comparator; debug; test bed; test case development; execute/replay script; input capture; code-based test generator; test data generator), and Test Process (developer definition; customer-oriented adequacy assessment; class and test process capability; test readiness assessment; constancy of purpose, effectiveness, empowerment, funding, accountability; integrated test strategy with vertical and horizontal integration; application and cluster test; requirements-design-code-test-maintain coverage; defined and repeatable process with closed-loop feedback; experience, training, staff capability; inspections, prototyping, reviews; V&V integration; motivation). Interoperability also appears.]

Page 7: Testability: Factors and Strategy

Testability Factors

Six major factors: Representation, Implementation, Built-in Test, Test Suite, Test Tools, and Test Process.

Page 8: Testability: Factors and Strategy

Examples

• GUIs
  • Controllability: impossible without abstract widget set/get, and brittle with only one; latency; dynamic widgets, specialized widgets.
  • Observability: cursor for structured content; noise and non-determinism in non-text output; proprietary lock-outs.
• OS exceptions
  • Controllability: hundreds of OS exceptions to catch, hard to trigger.
  • Observability: silent failures.
• Objective-C
  • Controllability: test drivers can't anticipate objects defined on the fly.
  • Observability: instrumenting objects on the fly.
• Mocks with DBMS
  • Controllability: too rich to mock.
  • Observability: DBMS has better logging.
• Multi-tier CORBA
  • Controllability: getting all DOs (distributed objects) into the desired state.
  • Observability: tracing message propagation.
• Cellular base station
  • Controllability: RF loading/propagation for a varying base station population.
  • Observability: proprietary lock-outs; never "off-line".

Page 9: Testability: Factors and Strategy

WHITE BOX TESTABILITY


Page 10: Testability: Factors and Strategy

White Box Testability

• Diminished by: complexity, non-deterministic dependencies (NDDs)
• Improved by: PCOs, built-in test, state helpers, good structure

Page 11: Testability: Factors and Strategy

The Anatomy of a Successful Test

• To reveal a bug, a test must

• Reach the buggy code

• Trigger the bug

• Propagate the incorrect result

• Incorrect result must be observed

• The incorrect result must be recognized as such

int scale(int j) {
    j = j - 1;      // BUG: should be j = j + 1
    j = j / 30000;
    return j;
}

Out of the 65,536 possible values for a 16-bit j, only six produce an incorrect result: -30001, -30000, -1, 0, 29999, and 30000.
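One way to check that sensitivity claim is to compare the buggy and corrected functions across the whole 16-bit domain. A minimal sketch, not from the deck; note that the six-value count assumes integer division that rounds toward negative infinity, so the helper below floors explicitly rather than using C++'s truncating /:

#include <cstdint>
#include <iostream>

// Integer division rounding toward negative infinity.
long floordiv(long a, long b) {
    long q = a / b;
    if ((a % b != 0) && ((a < 0) != (b < 0))) --q;
    return q;
}

long scale_buggy(long j) { return floordiv(j - 1, 30000); }  // as shipped
long scale_fixed(long j) { return floordiv(j + 1, 30000); }  // as intended

int main() {
    int revealing = 0;
    for (long j = INT16_MIN; j <= INT16_MAX; ++j) {
        if (scale_buggy(j) != scale_fixed(j)) {
            std::cout << j << " reveals the bug\n";
            ++revealing;
        }
    }
    // Prints the six values above, then the count.
    std::cout << revealing << " of 65536 inputs reveal the bug\n";
}

A suite that never hits one of those six inputs passes with the bug in place, which is the point: low fault sensitivity demands many more tests.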

Page 12: Testability: Factors and Strategy

What Diminishes Testability?

• Non-deterministic dependencies
  • Race conditions
  • Message latency
  • Threading
  • CRUD shared and unprotected data

Page 13: Testability: Factors and Strategy

What Diminishes Testability?

• Complexity
  • Essential
  • Accidental

Page 14: Testability: Factors and Strategy
Page 15: Testability: Factors and Strategy

What Improves Testability?

• Points of Control and Observation (PCOs)

• State test helpers

• Built-in Test

• Well-structured code


Page 16: Testability: Factors and Strategy

PCOs

• Point of Control or Observation
  • Abstraction of any kind of interface
  • See TTCN-3 for a complete discussion
• What does a tester/test harness have to do to activate SUT components and aspects?
  • Components are easier
  • Aspects are more interesting, but typically not directly controllable
• What does a tester/test harness have to do to inspect SUT state or traces to evaluate a test?
  • Traces are easier, but often noisy or insufficient
  • Embedded state observers are effective, but expensive or polluting
  • Aspects are often critical, but typically not directly observable
• Design for testability: determine requirements for aspect-oriented PCOs (see the sketch below)
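To make the PCO idea concrete, here is a minimal sketch of a test port abstraction, loosely in the spirit of TTCN-3 ports; all names are hypothetical. The harness drives the SUT only through this interface, so an in-process stub, a GUI adapter, or a network adapter can be swapped without rewriting tests.

#include <optional>
#include <string>

// Point of Control and Observation: an abstract test port.
struct Pco {
    virtual ~Pco() = default;
    virtual void send(const std::string& stimulus) = 0;   // control
    virtual std::optional<std::string> receive() = 0;     // observation
};

// One possible adapter: an in-process stub that records traffic,
// standing in for a real GUI, socket, or API binding.
class InProcessPco : public Pco {
public:
    void send(const std::string& stimulus) override { last_ = stimulus; }
    std::optional<std::string> receive() override { return last_; }
private:
    std::optional<std::string> last_;
};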

Page 17: Testability: Factors and Strategy

State Test Helpers

• Set state

• Get current state

• Logical State Invariant Functions (LSIFs)

• Reset

• All actions controllable

• All events observable

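A minimal sketch of these helpers on a hypothetical TwoPlayerGame-like class; the helper names are illustrative, not from the deck:

#include <cassert>

enum class GameState { GameStarted, Player1Served, Player1Won };

class TwoPlayerGame {
public:
    void p1_Start() { state_ = GameState::Player1Served; }  // production API

    // State test helpers (compiled into test builds only):
    void setStateForTest(GameState s, int p1Score) {         // set state
        state_ = s;
        p1Score_ = p1Score;
    }
    GameState getStateForTest() const { return state_; }     // get current state
    bool isPlayer1Won() const {                               // LSIF
        return state_ == GameState::Player1Won;
    }
    void resetForTest() {                                     // reset
        state_ = GameState::GameStarted;
        p1Score_ = 0;
    }

private:
    GameState state_ = GameState::GameStarted;
    int p1Score_ = 0;
};

int main() {
    TwoPlayerGame g;
    g.setStateForTest(GameState::Player1Won, 21);  // reach a deep state directly
    assert(g.isPlayer1Won());                      // observe it via the LSIF
    g.resetForTest();                              // cheap teardown between tests
}

Setting state directly avoids replaying a long event sequence before every test, which is what drives the test-size difference two slides ahead.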

Page 18: Testability: Factors and Strategy

Implementation and Test Models

[Figure: UML class diagram and statecharts for TwoPlayerGame and its subclass ThreePlayerGame. TwoPlayerGame declares TwoPlayerGame( ), p1_Start( ), p1_WinsVolley( ), p1_AddPoint( ), p1_IsWinner( ), p1_IsServer( ), p1_Points( ), the p2_ equivalents, and ~( ); ThreePlayerGame adds the p3_ equivalents. The statechart runs from alpha through Game Started to Player 1/2/3 Served and Player 1/2/3 Won, ending at omega. Representative transitions: p1_WinsVolley( ) [this.p1_Score( ) < 20] / this.p1AddPoint( ) simulateVolley( ) continues play; p1_WinsVolley( ) [this.p1_Score( ) == 20] / this.p1AddPoint( ) enters Player 1 Won, where p1_IsWinner( ) / return TRUE.]

Page 19: Testability: Factors and Strategy

Test Plan and Test Size

• K events
• N states
• With LSIFs: K × N tests
• Without LSIFs: K × N³ tests

[Figure: transition tree for ThreePlayerGame used to size the suite — 17 numbered events (ThreePlayerGame( ), p1_Start( ), p2_Start( ), p3_Start( ), the p1/p2/p3_WinsVolley( ) events with their [score < 20] and [score == 20] guards, p1/p2/p3_IsWinner( ), and ~( )) expanded over the states Game Started, Player 1/2/3 Served, Player 1/2/3 Won, alpha, and omega.]
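To make that concrete: the three-player game model above has K = 17 events and N = 9 states (counting alpha and omega), so LSIFs cut the suite from on the order of 17 × 9³ = 12,393 tests down to 17 × 9 = 153.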

Page 20: Testability: Factors and Strategy

Built-in Test

• Asserts

• Logging

• Design by Contract

• No Code Left Behind pattern

• Percolation pattern


Page 21: Testability: Factors and Strategy

No Code Left Behind

• Forces
  • How to have extensive BIT without code bloat?
  • How to have consistent, controllable:
    • Logging
    • Frequency of evaluation
    • Options for handling detection
  • Define once, reuse globally?
• Solution
  • Dev adds an Invariant( ) function to each class
  • Invariant( ) calls InvariantCheck( ), which evaluates the necessary state
  • InvariantCheck( ) is a global function with global settings
  • Selective run-time evaluation
  • const and inline idiom leaves no object code in production (see the sketch below)
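A minimal sketch of the idiom, with all names hypothetical: one global checking function defines logging and evaluation policy, each class adds a one-line Invariant( ), and a compile-time const flag lets the optimizer strip every check from production builds.

#include <cassert>
#include <iostream>

namespace bit {
    // Global settings, defined once and reused by every class.
    const bool kEvaluateInvariants = true;   // set false in production builds
    const bool kLogViolations = true;

    inline void InvariantCheck(bool holds, const char* className) {
        if (!kEvaluateInvariants) return;    // dead code when the const is false
        if (!holds && kLogViolations)
            std::cerr << "Invariant violated in " << className << "\n";
        assert(holds);
    }
}

class Account {
public:
    void Deposit(long cents) {
        balance_ += cents;
        Invariant();                         // dev adds one call per mutator
    }
private:
    long balance_ = 0;
    inline void Invariant() const {          // const flag + inline body:
        bit::InvariantCheck(balance_ >= 0,   // no object code left behind
                            "Account");      // in production
    }
};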

Page 22: Testability: Factors and Strategy

Percolation Pattern

• Enforces Liskov Substitutability
• Implement with No Code Left Behind

[Class diagram: Base declares + foo( ) and + bar( ) with protected # invariant( ), # fooPre( ), # fooPost( ), # barPre( ), # barPost( ). Derived1 adds + fum( ) with # fumPre( )/# fumPost( ); Derived2 adds + fee( ) with # feePre( )/# feePost( ). Each subclass overrides the inherited pre/post/invariant hooks, so contract checks percolate through the hierarchy.]
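A minimal sketch of percolation, with hypothetical names: the public operation is non-virtual and runs the percolated precondition, the overridable implementation, then the percolated postcondition and invariant, so every subclass is checked against the base contract. Per Liskov, a derived precondition may only weaken the base's (OR), and a derived postcondition or invariant may only strengthen it (AND).

#include <cassert>

class Base {
public:
    virtual ~Base() {}
    void foo() {                                  // public, non-virtual wrapper
        assert(fooPre());                         // percolated precondition
        fooImpl();                                // the overridable work
        assert(fooPost() && invariant());         // percolated post + invariant
    }
protected:
    virtual bool invariant() const { return true; }
    virtual bool fooPre()  const { return true; } // Base's own contract
    virtual bool fooPost() const { return true; }
    virtual void fooImpl() {}
};

class Derived1 : public Base {
protected:
    bool fooPre()  const override {               // may only weaken: OR
        return Base::fooPre() || localPre();
    }
    bool fooPost() const override {               // may only strengthen: AND
        return Base::fooPost() && localPost();
    }
    bool invariant() const override {
        return Base::invariant() && localInvariant();
    }
    void fooImpl() override { /* subclass behavior */ }
private:
    bool localPre()  const { return true; }
    bool localPost() const { return true; }
    bool localInvariant() const { return true; }
};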

Page 23: Testability: Factors and Strategy

Well-structured Code

• Many well-established principles
• Significant for testability:
  • No cyclic dependencies
  • Don't let build dependencies leak across structural and functional boundaries (levelization)
  • Partition classes and packages to minimize interface complexity
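As an illustration of levelization, with hypothetical classes: a cycle such as Scheduler → Job → Scheduler means neither class can be built or unit-tested alone; introducing an interface at a lower level breaks the cycle.

#include <vector>

struct Notifiable {                       // level 1: depends on nothing
    virtual ~Notifiable() = default;
    virtual void notifyDone() = 0;
};

class Job {                               // level 2: depends only on level 1
public:
    explicit Job(Notifiable& owner) : owner_(owner) {}
    void run() { owner_.notifyDone(); }   // no longer names Scheduler at all
private:
    Notifiable& owner_;
};

class Scheduler : public Notifiable {     // level 3: depends on levels 1 and 2
public:
    void add(const Job& j) { jobs_.push_back(j); }
    void notifyDone() override { ++completed_; }
private:
    std::vector<Job> jobs_;
    int completed_ = 0;
};

Job can now be tested against a trivial Notifiable stub, with no Scheduler in the build.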

Page 24: Testability: Factors and Strategy

BLACK BOX TESTABILITY


Page 25: Testability: Factors and Strategy

Black Box Testability

• Diminished by: size, nodes, variants, weather
• Improved by: test model, oracle, automation

Page 26: Testability: Factors and Strategy

System Size

• How big is the essential system?
  • Feature points
  • Use cases
  • Singly invocable menu items
  • Command/sub-commands
• Computational strategy
  • Transaction processing
  • Simulation
  • Video games
• Storage
  • Number of tables/views

Page 27: Testability: Factors and Strategy

Network Extent

• How many independent nodes?
  • Client/server
  • Tiers
  • Peer to peer
• Minimum spanning tree
  • Must have at least one of each node type online

[Figure: network diagram from MS-TPSO, Transaction Processing System Overview.]

Page 28: Testability: Factors and Strategy

Variants

• How many configuration options?
• How many platforms supported?
• How many localization variants?
• How many "editions"?
• Combinational coverage
  • Try each option with every other at least once (pair-wise)
  • Pair-wise worst case: the product of the two largest option sizes
  • May be reduced with good tools
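For instance, with 20 options of five values each, exhaustive configuration testing would need 5^20 ≈ 9.5 × 10^13 runs, while pair-wise coverage needs at least 5 × 5 = 25 tests (every pair of values for the two largest options must appear); good tools typically stay within a small multiple of that bound.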

Page 29: Testability: Factors and Strategy

Weather – Environmental

• The SUT has elements that cannot practically be established in the lab
  • Cellular base station
  • Expensive server farm
  • Competitor/aggressor capabilities
  • Internet – you can never step in the same river twice
• A dedicated test environment is not feasible
  • Must use the production/field system for test
  • Cannot stress the production/field system

Page 30: Testability: Factors and Strategy

More is Less

• Other things being equal, a larger system is less testable
  • Same budget spread thinner
• Reducing system scope improves testability
• Try to partition into independent subsystems: the test suite then grows additively rather than multiplicatively (e.g., two independent subsystems with 100 behaviors each need on the order of 100 + 100 tests; coupled, up to 100 × 100)
• For example:
  • 10,000 feature points
  • 6 network node types
  • 20 options, five variables each
  • Must test when financial markets are open
• How big is that?

Page 31: Testability: Factors and Strategy

[Image: galaxy M66, about 95,000 light years across. 1 pixel ≈ 400 light years; the solar system ≈ 0.0025 light years.]

Page 32: Testability: Factors and Strategy

Understanding Improves Testability

• Primary source of system knowledge
  • Documented, validated?
  • Intuited, guessed?
  • IEEE 830
• Test model
  • Different strokes
  • Test-ready or hints?
• Oracle
  • Computable or judgment?
  • Indeterminate?

Page 33: Testability: Factors and Strategy

TEST AUTOMATION


Page 34: Testability: Factors and Strategy

Automation Improves Testability

• Bigger systems need many more tests
• Intellectual leverage
  • Huge coverage space
  • Repeatable
  • Scale up functional suites for load test
• What kind of automation?
  • Harness
    • Developer harness
    • System harness
  • Capture-replay
  • Model-based
    • Generation
    • Evaluation

Page 35: Testability: Factors and Strategy

Test Automation Envelope

[Chart: reliability in nines (1 Nine up to 5 Nines) plotted against productivity in tests per hour (1 to 10,000, log scale), showing the reach of four approaches: Manual, Scripting, Bespoke Model-Based, and the Model-Based Vision.]

Page 36: Testability: Factors and Strategy

STRATEGY


Page 37: Testability: Factors and Strategy

How to Improve Testability?

BBT = Model + Oracle + Harness − Size − Nodes − Variants − Weather
      (maximize the first three, minimize the rest)

WBT = BIT + State Helpers + PCOs + Structure − Complexity − NDDs
      (maximize the first four, minimize the rest)

Page 38: Testability: Factors and Strategy

Who Owns Testability Drivers?

[Table mapping each driver to its owners: Architects, Devs, and/or Test.]

• BBT drivers: Model, Oracle, Harness, PCOs, Size, Variants, Nodes, Weather
• WBT drivers: BIT, State Helpers, Good Structure, Complexity, NDDs

Testers typically don't own or control the work that drives testability.

Page 39: Testability: Factors and Strategy

Strategy

[2 × 2 matrix: Black Box Testability (High/Low) against White Box Testability (Low/High). The four quadrant strategies: Emphasize Functional Testing, Manage Expectations, Incentivize Repeaters, and Emphasize Code Coverage.]

Page 40: Testability: Factors and Strategy

Q & A


Page 41: Testability: Factors and Strategy

Media Credits

• Robert V. Binder. Design for Testability in Object-Oriented Systems. Communications of the ACM, September 1994.
• Diomidis D. Spinellis. Internet Explorer call tree.
• Unknown. Dancing hamsters.
• Jackson Pollock. Autumn Rhythm.
• Brian Ferneyhough. Plötzlichkeit for Large Orchestra.
• Robert V. Binder. Testing Object-Oriented Systems: Models, Patterns, and Tools.
• Microsoft. MS-TPSO, Transaction Processing System Overview.
• NASA. Hubble Space Telescope, M66 galaxy.
• Gustav Holst. The Planets: Uranus. Peabody Concert Orchestra, Hajime Teri Murai, Music Director.
• mVerify Corporation. Test Automation Envelope.