
1 CS 501 Spring 2007

CS 501: Software Engineering

Lecture 21

Reliability 3


2 CS 501 Spring 2007

Administration

Next week

Tuesday, April 17: no class

Thursday, April 19: Quiz 4

Weekly progress reports

Remember to send your progress reports to your TA


3 CS 501 Spring 2007

Validation and Verification

Validation: Are we building the right product?

Verification: Are we building the product right?

In practice, it is sometimes difficult to distinguish between the two.

That's not a bug. That's a feature!


4 CS 501 Spring 2007

The Testing Process

Unit, System and Acceptance Testing are major parts of a software project

• Testing requires time on the schedule

• It may require substantial investment in test data, equipment, and test software.

• Good testing requires good people!

• Documentation, including management and client reports, is an important part of testing.

What is the definition of "done"?


5 CS 501 Spring 2007

The Heisenbug


6 CS 501 Spring 2007

Test Design

Testing can never prove that a system is correct. It can only show (a) that a system is correct in a special case, or (b) that it has a fault.

• The objective of testing is to find faults.

• Testing is never comprehensive.

• Testing is expensive.


7 CS 501 Spring 2007

Testing Strategies

• Bottom-up testing. Each unit is tested with its own test environment.

• Top-down testing. Large components are tested with dummy stubs (a stub sketch follows this list).

user interfaces, work-flow, client and management demonstrations

• Stress testing. Tests the system at and beyond its limits.

real-time systems, transaction processing
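
A minimal sketch of top-down testing with a dummy stub, in Python. The ReportGenerator and DatabaseStub names are hypothetical, not from the lecture; the point is that the high-level component can be tested before the real database layer exists.

```python
# Sketch of top-down testing with a dummy stub (hypothetical names).
# The high-level ReportGenerator is tested before the real database
# component is built, by substituting a stub that returns canned data.

class DatabaseStub:
    """Stands in for the real database component, which does not exist yet."""
    def fetch_orders(self, customer_id):
        # Fixed, predictable data so the caller can be tested.
        return [{"id": 1, "total": 20.0}, {"id": 2, "total": 15.5}]

class ReportGenerator:
    def __init__(self, db):
        self.db = db

    def total_spent(self, customer_id):
        return sum(order["total"] for order in self.db.fetch_orders(customer_id))

if __name__ == "__main__":
    report = ReportGenerator(DatabaseStub())
    assert report.total_spent(customer_id=42) == 35.5
    print("top-down test with stub passed")
```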


8 CS 501 Spring 2007

Methods of Testing

Closed box testing

Testing is carried out by people who do not know the internals of what they are testing.

Example. IBM educational demonstration that was not foolproof

Open box testing

Testing is carried out by people who know the internals of what they are testing.

Example. Tick marks on the graphing package


9 CS 501 Spring 2007

Stages of Testing

Testing is most effective if divided into stages

Unit testing: unit test

System testing: integration test, function test, performance test, installation test

Acceptance testing


10 CS 501 Spring 2007

Testing: Unit Testing

• Tests on small sections of a system, e.g., a single class

• Emphasis is on accuracy of actual code against specification

• Test data is chosen by developer(s) based on their understanding of specification and knowledge of the unit

• Can be at various levels of granularity

• Open box or closed box: by the developer(s) of the unit or by special testers

If unit testing is not thorough, system testing becomes almost impossible. If you are working on a project that is behind schedule, do not rush the unit testing.
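
A minimal unit test sketch using Python's standard unittest module. The parse_price function and its specification are hypothetical; the test data follows the advice above: a typical value, a boundary, and invalid input.

```python
# Hypothetical unit under test: convert a price string to cents.
# Specification (assumed): negative prices are invalid.
import unittest

def parse_price(text):
    value = round(float(text) * 100)
    if value < 0:
        raise ValueError("price must not be negative")
    return value

class ParsePriceTest(unittest.TestCase):
    def test_typical_value(self):
        self.assertEqual(parse_price("12.50"), 1250)

    def test_zero_boundary(self):
        self.assertEqual(parse_price("0"), 0)

    def test_negative_is_rejected(self):
        with self.assertRaises(ValueError):
            parse_price("-1.00")

if __name__ == "__main__":
    unittest.main()
```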


11 CS 501 Spring 2007

Testing: System and Sub-System Testing

• Tests on components or complete system, combining units that have already been thoroughly tested

• Emphasis on integration and interfaces

• Trial data that is typical of the actual data, and/or stresses the boundaries of the system, e.g., failures, restart

• Carried out systematically, adding components until the entire system is assembled

• Open or closed box: by development team or by special testers

System testing is finished fastest if each component is completely debugged before assembling the next
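
A minimal sub-system test sketch, assuming two hypothetical units (Tokenizer and WordCounter) that have already passed their own unit tests. The emphasis is on the interface between them, as described above.

```python
# Integration test of two already-tested units (hypothetical names).
import unittest

class Tokenizer:
    def tokenize(self, text):
        return text.lower().split()

class WordCounter:
    def __init__(self, tokenizer):
        self.tokenizer = tokenizer

    def count(self, text):
        counts = {}
        for token in self.tokenizer.tokenize(text):
            counts[token] = counts.get(token, 0) + 1
        return counts

class CounterTokenizerIntegrationTest(unittest.TestCase):
    def test_counter_uses_tokenizer_output(self):
        # Exercises the interface: WordCounter must handle the lower-cased
        # tokens that Tokenizer actually produces.
        counter = WordCounter(Tokenizer())
        self.assertEqual(counter.count("The the cat"), {"the": 2, "cat": 1})

if __name__ == "__main__":
    unittest.main()
```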


12 CS 501 Spring 2007

Testing: Acceptance Testing

• Closed box: by the client

• The entire system is tested as a whole

• The emphasis is on whether the system meets the requirements

• Uses real data in realistic situations, with actual users, administrators, and operators

The acceptance test must be successfully completed before the new system can go live or replace a legacy system.

Completion of the acceptance test may be a contractual requirement before the system is paid for.


13 CS 501 Spring 2007

Variants of Acceptance Testing

Alpha Testing: Clients operate the system in a realistic but non-production environment

Beta Testing: Clients operate the system in a carefully monitored production environment

Parallel Testing: Clients operate new system alongside old production system with same data and compare results
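
A minimal sketch of parallel testing under stated assumptions: legacy_process and new_process are hypothetical stand-ins for the old and new systems, and the same transactions are run through both so the results can be compared.

```python
# Parallel testing sketch: feed identical transactions to the old and
# new systems and report any results that disagree (hypothetical names).

def legacy_process(txn):
    return txn["amount"] * 2      # placeholder for the old production system

def new_process(txn):
    return txn["amount"] * 2      # placeholder for the new system

def parallel_test(transactions):
    mismatches = []
    for txn in transactions:
        old_result = legacy_process(txn)
        new_result = new_process(txn)
        if old_result != new_result:
            mismatches.append((txn["id"], old_result, new_result))
    return mismatches

if __name__ == "__main__":
    sample = [{"id": i, "amount": i * 10} for i in range(1, 6)]
    print("mismatches:", parallel_test(sample))   # an empty list means agreement
```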


14 CS 501 Spring 2007

Test Cases

Test cases are specific tests that are chosen because they are likely to find faults.

Test cases are chosen to balance expense against chance of finding serious faults.

• Cases chosen by the development team are effective in testing known vulnerable areas.

• Cases chosen by experienced outsiders and clients will be effective in finding gaps left by the developers.

• Cases chosen by inexperienced users will find other faults.


15 CS 501 Spring 2007

Test Case Selection: Coverage of Inputs

Objective is to test all classes of input

• Classes of data -- major categories of transaction and data inputs.

Cornell example: (undergraduate, graduate, transfer, ...) by (college, school, program, ...) by (standing) by (...)

• Ranges of data -- typical values, extremes

• Invalid data

• Reversals, reloads, restarts after failure
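
One way such input classes might be enumerated is sketched below; the categories and field values are illustrative, not taken from any real registrar system.

```python
# Enumerate test cases that cover the major classes of input, plus
# ranges (typical, extreme, invalid) for one field. Values are illustrative.
from itertools import product

student_types = ["undergraduate", "graduate", "transfer"]
colleges = ["engineering", "arts", "agriculture"]
standings = ["good", "probation"]

# One test case per combination of the major input categories.
class_cases = list(product(student_types, colleges, standings))

# Ranges of data for a credit-hours field: minimum, typical, maximum,
# beyond the maximum, and invalid values.
credit_hour_cases = [0, 1, 12, 24, 25, -1, "twelve"]

print(len(class_cases), "category combinations")
print("credit-hour cases:", credit_hour_cases)
```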


16 CS 501 Spring 2007

Test Case Selection: Program

Objective is to test all functions of each computer program

• Paths through the computer programs

Program flow graph: check that every path is executed at least once

• Dynamic program analyzers

Count number of times each path is executed

Highlight or color source code

Cannot be used with time-critical software
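
A crude sketch of the counting that a dynamic program analyzer performs, using Python's sys.settrace hook; real analyzers do this far more efficiently and then highlight or annotate the source code.

```python
# Count how many times each line of a traced function executes.
import sys
from collections import Counter

line_counts = Counter()

def count_lines(frame, event, arg):
    if event == "line":
        line_counts[(frame.f_code.co_name, frame.f_lineno)] += 1
    return count_lines

def classify(n):
    if n < 0:
        return "negative"
    return "non-negative"

sys.settrace(count_lines)
for value in (-2, -1, 0, 1, 2):
    classify(value)
sys.settrace(None)

for (func, lineno), count in sorted(line_counts.items()):
    print(f"{func} line {lineno}: executed {count} times")
```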


17 CS 501 Spring 2007

Test Strategies: Program

(a) Statement analysis

(b) Branch testing

If every statement and every branch is tested, is the program correct?
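
As the Test Design slide suggests, the answer is no. A small illustrative sketch: the single test below executes every statement and both outcomes of the loop, yet a fault remains.

```python
# Full statement and branch coverage, but the program is still incorrect.

def average(values):
    total = 0
    for v in values:              # both "enter loop" and "leave loop" exercised
        total += v
    return total / len(values)    # fault: raises ZeroDivisionError for []

# This single test executes every statement and every branch...
assert average([1, 2, 3]) == 2

# ...but average([]) still fails, and no test detects it.
```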


18 CS 501 Spring 2007

Statistical Testing

• Determine the operational profile of the software

• Select or generate a profile of test data

• Apply test data to system, record failure patterns

• Compute statistical values of metrics under test conditions
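
A minimal sketch of the procedure above. The operational profile, transaction types, and the process() stand-in are all hypothetical.

```python
# Statistical testing sketch: generate inputs according to an assumed
# operational profile, apply them, and record the observed failure rate.
import random

operational_profile = {      # estimated fraction of real traffic per transaction type
    "lookup": 0.70,
    "update": 0.25,
    "report": 0.05,
}

def process(txn_type):
    # Placeholder for the system under test; pretend "report" sometimes fails.
    return not (txn_type == "report" and random.random() < 0.1)

def statistical_test(n_transactions, seed=0):
    random.seed(seed)
    types = list(operational_profile)
    weights = list(operational_profile.values())
    failures = 0
    for _ in range(n_transactions):
        txn_type = random.choices(types, weights=weights)[0]
        if not process(txn_type):
            failures += 1
    return failures / n_transactions

print("observed failure rate:", statistical_test(10000))
```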


19 CS 501 Spring 2007

Statistical Testing

Advantages:

• Can test with very large numbers of transactions

• Can test with extreme cases (high loads, restarts, disruptions)

• Can repeat after system modifications

Disadvantages:

• Uncertainty in operational profile (unlikely inputs)

• Expensive

• Can never prove high reliability


20 CS 501 Spring 2007

Regression Testing

Regression Testing is one of the key techniques of Software Engineering

When software is modified, regression testing provides confidence that the modifications behave as intended and do not adversely affect the behavior of unmodified code.

• Basic technique is to repeat entire testing process after every change, however small.


21 CS 501 Spring 2007

Regression Testing: Program Testing

1. Collect a suite of test cases, each with its expected behavior.

2. Create scripts to run all test cases and compare with expected behavior. (Scripts may be automatic or have human interaction.)

3. When a change is made to the system, however small (e.g., a bug is fixed), add a new test case that illustrates the change (e.g., a test case that revealed the bug).

4. Before releasing the changed code, rerun the entire test suite.
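
A minimal sketch of such a regression script; system_under_test and the cases are hypothetical stand-ins.

```python
# Regression test script following the steps above.

def system_under_test(x):
    return x * x

# Step 1: the suite of test cases, each with its expected behavior.
# Step 3: when a bug is fixed, the case that revealed it is appended here.
test_suite = [
    (0, 0),
    (3, 9),
    (-4, 16),    # illustrative case added after a (hypothetical) sign-handling bug fix
]

def run_regression_suite():
    failures = []
    for test_input, expected in test_suite:
        actual = system_under_test(test_input)
        if actual != expected:
            failures.append((test_input, expected, actual))
    return failures

if __name__ == "__main__":
    # Step 4: rerun the entire suite before releasing the changed code.
    failures = run_regression_suite()
    print("all tests passed" if not failures else f"failures: {failures}")
```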


22 CS 501 Spring 2007

Documentation of Testing

Testing should be documented for thoroughness, visibility, and maintenance

(a) Test plan

(b) Test specification and evaluation

(c) Test suite and description

(d) Test analysis report


23 CS 501 Spring 2007

A Note on User Interface Testing

User interfaces need two categories of testing.

• During the design phase, user interface testing is carried out with trial users to ensure that the design is usable. Design testing is also used to develop graphical elements and to validate the requirements.

• During the implementation phase, the user interface goes through the standard steps of unit and system testing to check the reliability of the implementation.

Acceptance testing is then carried out with users on the complete system.


24 CS 501 Spring 2007

A CS 501 Project: Methodology

The next few slides are from a CS 501 presentation (second milestone).

How we’re user testing:

- One-on-one, 30-45 minute user tests with users at different staff levels

- Specific tasks to complete

- No prior demonstration or training

- Pre-planned questions designed to stimulate feedback

- Emphasis on testing the system, not the stakeholder!

- Standardized tasks / questions among all testers


25 CS 501 Spring 2007

A CS 501 Project: Methodology

How we’re user testing:

Types of questions we asked:

- Which labels, keywords were confusing?

- What was the hardest task?

- What did you like that should not be changed?

- If you were us, what would you change?

- How does this system compare to your paper-based system?

- How useful do you find the new report layout? (admin)

- Do you have any other comments or questions about the system? (open ended)


26 CS 501 Spring 2007

A CS 501 Project: Results

What we’ve found: Issue #1, Search Form Confusion!


27 CS 501 Spring 2007

A CS 501 Project: Results

What we’ve found: Issue #2, Inconspicuous Edit/Confirmations!


28 CS 501 Spring 2007

A CS 501 Project: Results

What we’ve found: Issue #3, Confirmation Terms


29 CS 501 Spring 2007

A CS 501 Project: Results

What we’ve found: Issue #4, Entry Semantics


30 CS 501 Spring 2007

Results, Addressing

What we’ve found: #5, Search Results Disambiguation & Semantics


31 CS 501 Spring 2007

Fixing Bugs

Isolate the bug: intermittent --> repeatable; complex example --> simple example

Understand the bug: root cause, dependencies, structural interactions

Fix the bug: design changes, documentation changes, code changes
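
The "complex example --> simple example" step can be partly automated. Below is a crude input-minimization sketch, with a hypothetical fails() predicate standing in for "the system misbehaves on this input".

```python
# Shrink a complex failing input to a simpler one that still fails.

def fails(data):
    # Illustrative fault: the system breaks whenever 13 appears in the input.
    return 13 in data

def shrink(data):
    """Repeatedly drop elements while the smaller input still fails."""
    changed = True
    while changed:
        changed = False
        for i in range(len(data)):
            candidate = data[:i] + data[i + 1:]
            if fails(candidate):
                data = candidate
                changed = True
                break
    return data

complex_case = [4, 8, 13, 21, 34, 55]
print("minimal failing example:", shrink(complex_case))   # -> [13]
```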


32 CS 501 Spring 2007

Moving the Bugs Around

Fixing bugs is an error-prone process!

• When you fix a bug, fix its environment

• Bug fixes need static and dynamic testing

• Repeat all tests that have the slightest relevance (regression testing)

Bugs have a habit of returning!

• When a bug is fixed, add the failure case to the test suite for the future.