
Testing, cont.

Based on material by Michael Ernst, University of Washington

Announcements

HW4 is due March 31st. Start early! Several tasks:

1) Design the ADT following the ADT methodology: mutable vs. immutable; operations (creators, mutators, etc.) and their specs

2) Write a black-box test suite based on those specs (before coding: the test-first principle)

3) Then code

4) Then augment the test suite based on the actual implementation (white-box implementation tests)

Grades in LMS: HW 0-2, Exam 1, Quiz 1-3


Outline

Testing
Strategies for choosing tests
Black box testing
White box testing

Spring 15, CSCI 2600, A Milanova

Why Is Testing So Hard?

// requires: 1 <= x,y,z <= 10000

// returns: computes some f(x,y,z)

int proc(int x, int y, int z)

Exhaustive testing would require one trillion runs (10,000^3 = 10^12 input combinations), and this is a trivially small problem! The key problem: choosing a test suite that is

Small enough to finish quickly
Large enough to validate the program


Testing Strategies

Test case: specifies
Inputs + pre-test state of the software
Expected result (outputs and post-test state)

Black box testing: we ignore the code of the program and look at the specification (roughly: given some input, was the produced output correct according to the spec?)
Choose inputs without looking at the code

White box (clear box, glass box) testing: we use knowledge of the code of the program (roughly: we write tests to "cover" internal paths)
Choose inputs with knowledge of the implementation


Black Box Testing Heuristics

Paths in specification (a form of equivalence partitioning)

Equivalence partitioning

Boundary value analysis


Other Boundary Cases

Arithmetic: smallest/largest values; zero

Objects: null; circular list; same object passed to multiple arguments (aliasing)

Boundary Value Analysis: Arithmetic Overflow

// returns: |x|

public int abs(int x)

What are some values worth trying?

Equivalence classes are x < 0 and x >= 0
x = -1, x = 1, x = 0 (boundary condition)

How about x = Integer.MIN_VALUE?

// this is -2147483648 = -2^31

// System.out.println(Math.abs(x) < 0) prints true!
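The overflow is easy to demonstrate. A minimal sketch (the class name is ours, used only for this demo):

```java
public class AbsBoundary {
    public static void main(String[] args) {
        // Ordinary boundary values behave as expected
        System.out.println(Math.abs(-1));  // 1
        System.out.println(Math.abs(0));   // 0
        // Integer.MIN_VALUE = -2^31 has no positive counterpart in 32-bit
        // two's complement, so Math.abs overflows and returns it unchanged
        System.out.println(Math.abs(Integer.MIN_VALUE) < 0);  // true
    }
}
```

Equivalence partitioning alone would not suggest this input; it is the boundary of the representation, not of the spec, that exposes the bug.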


Boundary Value Analysis: Aliasing

// modifies: src, dst

// effects: removes all elements of src and appends them in reverse order to the end of dst

void appendList(List<Integer> src, List<Integer> dst) {
  while (src.size() > 0) {
    Integer elt = src.remove(src.size()-1);
    dst.add(elt);
  }
}

What happens if we run appendList(list, list)? Aliasing.
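With src == dst, each iteration removes the last element and immediately adds it back, so the size never shrinks: the loop runs forever. A sketch of this, with a step counter added so the demo terminates (the counter is our addition, not part of the slide's code):

```java
import java.util.ArrayList;
import java.util.List;

public class AliasingDemo {
    // As in the slide: removes all elements of src and appends them
    // in reverse order to the end of dst
    static void appendList(List<Integer> src, List<Integer> dst) {
        int steps = 0;                       // guard added only for the demo
        while (src.size() > 0) {
            if (++steps > 1000) {            // aliasing keeps size constant
                System.out.println("non-termination detected");
                return;
            }
            Integer elt = src.remove(src.size() - 1);
            dst.add(elt);
        }
    }

    public static void main(String[] args) {
        List<Integer> list = new ArrayList<>(List.of(1, 2, 3));
        appendList(list, list);  // src == dst: prints "non-termination detected"
    }
}
```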


Summary So Far

Testing is hard. We cannot run all inputs
Key problem: choose test suites that are
Small enough to finish in reasonable time
Large enough to validate the program (reveal bugs or build confidence in the absence of bugs)
All we have is heuristics!

We saw black box testing heuristics: run paths in spec, partition into equivalence classes, run with inputs at boundaries of these classes

There are also white box testing heuristics


White Box Testing

Ensure the test suite covers (covers means executes) all of the program
Measure the quality of a test suite with % coverage
Assumption: high coverage implies few remaining errors in the program

Focus: features not described in the specification
Control-flow details
Performance optimizations
Alternate algorithms (paths) for different cases


White Box Complements Black Box

boolean[] primeTable = new boolean[CACHE_SIZE];

// returns: true if x is prime,
//          false otherwise
boolean isPrime(int x) {
  if (x > CACHE_SIZE) {
    for (int i = 2; i < x/2; i++)
      if (x % i == 0) return false;
    return true;
  } else
    return primeTable[x];
}


White Box Testing: Control-flow-based Testing

Control-flow-based white box testing:
Extract a control flow graph (CFG)
The test suite must cover (execute) certain elements of this control flow graph

Idea: define a coverage target and ensure the test suite covers the target
The coverage target approximates "all of the program"
Targets: CFG nodes, branch edges, paths, etc.


Aside: Control Flow Graph (CFG)

An assignment x = y+z becomes a single node in the CFG.

An if-then-else, if (b) S1 else S2, becomes a predicate node (b) with True and False branch edges leading to the CFGs for S1 and S2, which merge at an end-if node.

(figure: CFG fragments for an assignment and an if-then-else)

Aside: Control Flow Graph (CFG)

A loop, while (b) S, becomes a predicate node (b) whose True edge enters the CFG for S, with a back edge from S to (b); the False edge exits the loop.

(figure: CFG fragment for a while loop)

Aside: Control Flow Graph (CFG)

Draw the CFG for this code:


1 s = 0;
2 x = 0;
3 while (x < y) {
4   x = x + 3;
5   y = y + 2;
6   if (x + y < 10)
7     s = s + x + y;
    else
8     s = s + x - y;
  }
9 res = s;

Statement Coverage

Traditional target: statement coverage. Write a test suite that covers all statements, or in other words, all nodes in the CFG

Intuition: code that has never been executed during testing may contain errors. Often this is the "low-probability" code


Example

Suppose that we run two test cases

Test case #1: follows path 1-2-exit (i.e., we never take the loop)

Test case #2: 1-2-3-4-5-7-8-2-3-4-5-7-8-2-exit (loop twice, and both times take the true branch)

Problems?

(figure: the CFG for the code above, nodes 1-8 with True/False branch edges)

Branch Coverage

Target: write test cases that cover all branch edges at predicate nodes
The True and False branch edges of each if-then-else
The two branch edges corresponding to the condition of a loop
All alternatives in a switch statement

In modern languages, branch coverage implies statement coverage! I.e., a test suite that achieves 100% branch coverage achieves 100% statement coverage


Example

We need to cover the red branch edges

Test case #1: follows path 1-2-exit

Test case #2: 1-2-3-4-5-7-8-2-3-4-5-7-8-2-exit

What is the % branch coverage?

(figure: the same CFG, with the uncovered branch edges highlighted in red)

Branch Coverage

Statement coverage does not imply branch coverage. I.e., a suite that achieves 100% statement coverage does not necessarily achieve 100% branch coverage. Can you think of an example?

Motivation for branch coverage: experience shows that many errors occur in "decision making" (i.e., branching). Plus, it implies statement coverage


Example

static int min(int a, int b) {
  int r = a;
  if (a <= b)
    r = a;
  return r;
}

Let's test with min(1,2)
What is the statement coverage? What is the branch coverage?
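A runnable version of this buggy min (the class name is ours) makes the answer concrete: min(1,2) alone executes every statement, so statement coverage is 100%, but the false branch of the if is never taken, so branch coverage is 50%. A test that does take the false branch exposes the missing else r = b:

```java
public class MinDemo {
    // Buggy min from the slide: the false branch never assigns b to r
    static int min(int a, int b) {
        int r = a;
        if (a <= b)
            r = a;
        return r;
    }

    public static void main(String[] args) {
        System.out.println(min(1, 2));  // 1: covers every statement, looks fine
        System.out.println(min(2, 1));  // 2, not 1: the false branch exposes the bug
    }
}
```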


Code Coverage in Eclipse


Other White Box Heuristics

White box equivalence partitioning and boundary value analysis

Loop testing
Skip the loop
Run the loop once
Run the loop twice
Run the loop with a typical value
Run the loop with the max number of iterations
Run with boundary values near the loop exit condition

Branch testing
Run with values at the boundaries of the branch condition
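The loop-testing heuristic can be made concrete with a hypothetical method (sumFirst and the class name are ours, not from the slides):

```java
public class LoopTesting {
    // Hypothetical method under test: sums the first n elements of a
    static int sumFirst(int[] a, int n) {
        int s = 0;
        for (int i = 0; i < n; i++) s += a[i];
        return s;
    }

    public static void main(String[] args) {
        int[] a = {1, 2, 3, 4, 5};
        System.out.println(sumFirst(a, 0));        // skip the loop: 0
        System.out.println(sumFirst(a, 1));        // run it once: 1
        System.out.println(sumFirst(a, 2));        // run it twice: 3
        System.out.println(sumFirst(a, 3));        // typical value: 6
        System.out.println(sumFirst(a, a.length)); // max iterations: 15
    }
}
```

A value near the exit boundary, such as n = a.length + 1, would also be worth trying; here it would throw ArrayIndexOutOfBoundsException.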


Which White Box Heuristic Reveals the Error?

boolean[] primeTable = new boolean[CACHE_SIZE];

// returns: true if x is prime,
//          false otherwise
boolean isPrime(int x) {
  if (x > CACHE_SIZE) {
    for (int i = 2; i < x/2; i++)
      if (x % i == 0) return false;
    return true;
  } else
    return primeTable[x];
}
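Black-box equivalence classes suggest nothing special about x == CACHE_SIZE, but white-box boundary value analysis on the branch condition x > CACHE_SIZE does: the else branch then reads primeTable[CACHE_SIZE], one past the end of the table. A sketch (the concrete CACHE_SIZE value and the class name are our assumptions for the demo):

```java
public class PrimeCacheDemo {
    static final int CACHE_SIZE = 10;
    static boolean[] primeTable = new boolean[CACHE_SIZE];  // assume precomputed

    static boolean isPrime(int x) {
        if (x > CACHE_SIZE) {
            for (int i = 2; i < x / 2; i++)
                if (x % i == 0) return false;
            return true;
        } else
            return primeTable[x];  // off-by-one: x == CACHE_SIZE overflows the table
    }

    public static void main(String[] args) {
        try {
            isPrime(CACHE_SIZE);  // boundary value taken from the implementation
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("boundary bug at x == CACHE_SIZE");
        }
    }
}
```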


White Box Testing: Dataflow-based Testing

A definition (def) of x is an occurrence of x on the left-hand side
E.g., x = y+z, x = x+1, x = foo(y)

A use of x is an occurrence of x on the right-hand side
E.g., z = x+y, x = x+y, x>y, z = foo(x)

A def-use pair of x is a pair of nodes, k and n, in the CFG, s.t. k is a def of x, n is a use of x, and there is a path from k to n free of definitions of x


(figure: a def node k: x = ..., a use node n: ... = x ..., connected by a path free of other defs x = ...)
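As a small illustration (this snippet is ours, not from the slides), each def-use pair links an assignment to a later read with no intervening reassignment:

```java
public class DefUse {
    static int compute() {
        int x = 1;        // def of x
        int y = 2;        // def of y
        if (y > 0) {      // use of y
            x = x + y;    // uses of x and y, followed by a new def of x
        }
        return x * 2;     // use of x: with the def "x = x + y" this forms a def-use pair
    }

    public static void main(String[] args) {
        System.out.println(compute());  // 6
    }
}
```

Note that the first def of x also pairs with the use in x = x + y, and, along the path where the if is not taken, with the use in return x * 2.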

White Box Testing: Dataflow-based Testing

Dataflow-based testing targets: write tests that cover paths between def-use pairs

Intuition:
If the code computed a wrong value at a def of x, the more uses of this def of x we "cover", the higher the probability we'll expose the error
If the code has an erroneous use of x, the more def-use pairs we "cover", the higher the probability we'll expose the erroneous use


A Buggy gcd

// requires: a, b > 0
static int gcd(int a, int b) {
  int x = a;
  int y = b;
  while (x != y) {
    if (x > y) {
      x = x - 2*y;
    } else {
      y = y - x;
    }
  }
  return x;
}

Let's test with gcd(15,6) and gcd(4,8).

What's the statement coverage? Branch?
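These two tests together achieve 100% statement and branch coverage, yet both happen to pass: subtracting 2*y (instead of y) still preserves gcd(x, y), so whenever the loop terminates the answer is right. The bug shows up only as non-termination. A sketch with a step counter added so the demo cannot hang (the counter and class name are our additions):

```java
public class GcdDemo {
    // Buggy gcd from the slide (x = x - 2*y should be x = x - y);
    // returns null if the loop fails to terminate within the step budget
    static Integer gcd(int a, int b) {
        int x = a, y = b, steps = 0;
        while (x != y) {
            if (++steps > 1000) return null;   // diverged
            if (x > y) x = x - 2 * y;
            else       y = y - x;
        }
        return x;
    }

    public static void main(String[] args) {
        System.out.println(gcd(15, 6));  // 3 (correct; covers both branches)
        System.out.println(gcd(4, 8));   // 4 (correct)
        System.out.println(gcd(9, 6));   // null: x = 9 - 12 = -3, then y grows forever
    }
}
```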

CFG for Buggy GCD

(figure: the CFG for gcd — node 1: x=a; y=b; predicate node 2: x!=y with True/False edges; predicate node 3: x>y; node 4: x=x-2*y; node 5: y=y-x; "merge" node 6 with a back edge to node 2; node 7: res=x)

Def-use pairs for x:
(node 1, node 2), (1,3), (1,4), (1,5), (1,7)
(4,2), (4,3), (4,4), (4,5), (4,7)

Def-use coverage targets: cover paths connecting def-use pairs

Def-use Coverage Targets

The All-defs coverage target: for every def of x, cover at least one path (free of definitions of x) to at least one use of x

The All-uses coverage target: for every def-use pair of x, cover at least one path (free of definitions of x) from the def of x to the use of x

The All-du-paths coverage target: for every def-use pair of x, cover every path (free of definitions of x) from the def of x to the use of x


Def-use Coverage Targets

Order def-use targets by strength (implication)

Coverage target A is stronger than (i.e., implies) coverage target B if a test suite that achieves 100% coverage under A also achieves 100% coverage under B

All-du-paths => All-uses => All-defs


White Box Testing: Dataflow-based Testing

Def-use coverage forces more targeted tests
Higher probability of finding errors
Research has shown it leads to higher-quality test suites

Nevertheless, def-use coverage is rarely used in practice. Why?
Difficult to establish the ground truth, i.e., what 100% is
Aliasing: x.f = A and B = y.f can be a def-use pair


White Box Testing: Covering "All of the Program"

Statement coverage
Branch coverage
Def-use coverage (a kind of path coverage)
Path coverage

Just a heuristic
In the gcd example, 100% All-uses coverage (one of the strongest targets) can still miss the bug
High coverage increases confidence in the test suite


In Practice

1. Write a test suite based on the spec (using paths-in-spec, equivalence partitioning, boundary values). Write the test suite before the code!

2. Write code

3. Run the test suite, fix bugs, measure coverage (typically branch)

4. If coverage is inadequate, write more tests. Go to step 3

Good "coverage" of paths-in-spec and boundary values typically (but not always!) yields good program coverage


Specification Tests vs. Implementation Tests

Specification tests are black-box tests
Based entirely on the specification
If some behavior is undefined in the spec, then we cannot write a test case for that behavior

Implementation tests are white-box tests
Based on the code
Cover control flow and different algorithmic paths
The code may define behavior that is undefined in the spec; implementation tests must test this behavior


Regression Testing

Regression testing is the process of re-running the test suite (e.g., after a bug fix)

Whenever you find a bug:
Add a test case with the input that elicited the bug
Verify that the test suite fails
Fix the bug
Verify that the test suite succeeds

This ensures that we populate the test suite with good tests


Rules of Testing

First rule of testing: do it early and do it often
Best to catch bugs soon, before they hide
Automate the process
Regression testing will save time

Second rule of testing: be systematic
Writing tests is a good way to understand the spec
Specs can be buggy too!
When you find a bug, write a test first, then fix


Testing Summary

Testing is extremely important. Two rules:
Test early and test often
Be systematic

A large test suite does not mean a quality test suite
It can miss relevant test cases
Use black box and white box heuristics

Write tests based on the
Specification (black-box heuristics)
Implementation (white-box heuristics)

Testing cannot prove the absence of bugs
But it can increase confidence in the program
