Weekly lecture notes are posted at http://guinness.cs.stevens-tech.edu/~lbernste/. Click on Courses in the left-hand navigation, then on the CS567 course name.


Page 2:

Orthogonal Arrays (continued)

Techniques of modifying orthogonal arrays:

Dummy level technique: Assigns a factor with m levels to a column that has n levels, where n > m. This technique can be applied to more than one factor in a given case, and orthogonality will still be preserved.

Example: A case study has two 2-level factors (A and B) and two 3-level factors (C and D). We can assign the four factors to the columns of the orthogonal array L9 by taking dummy levels A3 = A'1 (or A3 = A'2) and B3 = B'1 (or B3 = B'2). Note that orthogonality is preserved even when the dummy level technique is applied to two or more factors.

Page 3:

L9 Orthogonal Array Table

Test Case    Column: 1   2   3   4
    1                1   1   1   1
    2                1   2   2   2
    3                1   3   3   3
    4                2   1   2   3
    5                2   2   3   1
    6                2   3   1   2
    7                3   1   3   2
    8                3   2   1   3
    9                3   3   2   1

Factor assignment: A -> column 1, B -> column 2, C -> column 3, D -> column 4

Layout with Dummy Level Technique

Test Case    Column: 1    2    3    4
    1                A1   B1   C1   D1
    2                A1   B2   C2   D2
    3                A1   B'1  C3   D3
    4                A2   B1   C2   D3
    5                A2   B2   C3   D1
    6                A2   B'1  C1   D2
    7                A'1  B1   C3   D2
    8                A'1  B2   C1   D3
    9                A'1  B'1  C2   D1

Factor assignment: A -> column 1, B -> column 2, C -> column 3, D -> column 4
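As a concrete illustration, here is a minimal Python sketch (my own, not part of the lecture notes) that produces the dummy level layout above from the standard L9 array; the factor names and the dummy assignments A3 = A'1 and B3 = B'1 are taken from the example on Page 2.

```python
# Standard L9(3^4) orthogonal array: 9 test cases x 4 columns.
L9 = [
    [1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
    [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
    [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1],
]

# Per-factor label maps. Level 3 of the 2-level factors A and B is a
# dummy level that repeats level 1 (written A'1 / B'1 above).
labels = {
    "A": {1: "A1", 2: "A2", 3: "A'1"},
    "B": {1: "B1", 2: "B2", 3: "B'1"},
    "C": {1: "C1", 2: "C2", 3: "C3"},
    "D": {1: "D1", 2: "D2", 3: "D3"},
}
factors = ["A", "B", "C", "D"]  # assigned to columns 1-4 in order

for case, row in enumerate(L9, start=1):
    print(case, *[labels[f][lvl] for f, lvl in zip(factors, row)])
```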

Page 4:

Compound factor method:

Allows more factors to be studied with an orthogonal array than the array has columns.

It can be used to assign two 2-level factors to a single 3-level column:

Example:

Let A and B be two 2-level factors. There are four combinations of the levels of these factors: A1B1, A1B2, A2B1 and A2B2. We can pick the three most important combinations and call them the three levels of the compound factor AB.

Suppose we choose the three levels as follows: (AB)1 = A1B1, (AB)2 = A1B2, and (AB)3 = A2B1. Factor AB can then be assigned to a 3-level column, and the effects of A and B can be studied along with the effects of the other factors in the experiment.

Page 5:

To compute the effect of A and B, we can proceed as follows:

The difference between the responses at (AB)1 and (AB)2 tells us the effect of changing from B1 to B2. Similarly, the difference between the responses at (AB)1 and (AB)3 tells us the effect of changing from A1 to A2.
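To make this concrete, the sketch below (not from the notes) estimates both effects from hypothetical mean responses observed at each compound level; the numbers are invented for illustration.

```python
# Assumed mean responses at the three levels of the compound factor AB.
mean_response = {"(AB)1": 12.0, "(AB)2": 15.0, "(AB)3": 9.5}  # invented data

# (AB)1 = A1B1 and (AB)2 = A1B2 differ only in B, so their difference
# estimates the effect of changing B1 -> B2.
effect_B = mean_response["(AB)2"] - mean_response["(AB)1"]

# (AB)1 = A1B1 and (AB)3 = A2B1 differ only in A, so their difference
# estimates the effect of changing A1 -> A2.
effect_A = mean_response["(AB)3"] - mean_response["(AB)1"]

print(f"effect of B1 -> B2: {effect_B:+.1f}")
print(f"effect of A1 -> A2: {effect_A:+.1f}")
```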

In the compound factor method, there is a partial loss of orthogonality. The two compounded factors are not orthogonal to each other. But each of them is orthogonal to every other factor in the case study.

Testers should always consider the possibility of making small modifications to the requirements in order to save total test effort.

Page 6:

Example for Compound Factor Method:

Suppose we have two 2-level factors (A and E) and three 3-level factors (B, C and D). We can form a compound factor AE with three levels: (AE)1 = A1E1, (AE)2 = A1E2, and (AE)3 = A2E1. This gives us four 3-level factors, which can be assigned to the L9 orthogonal array table.

See next page!

Page 7:

L9 Orthogonal Array Table (identical to the table on Page 3)

Layout with Compound Factor Technique

Test Case    Column: 1      2    3    4
    1                A1E1   B1   C1   D1
    2                A1E1   B2   C2   D2
    3                A1E1   B3   C3   D3
    4                A1E2   B1   C2   D3
    5                A1E2   B2   C3   D1
    6                A1E2   B3   C1   D2
    7                A2E1   B1   C3   D2
    8                A2E1   B2   C1   D3
    9                A2E1   B3   C2   D1

Factor assignment: AE -> column 1, B -> column 2, C -> column 3, D -> column 4
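The layout above can be generated mechanically, just like the dummy level layout on Page 3. A short sketch (mine, not from the notes), assuming the compound level definitions from Page 6:

```python
# Standard L9 array; column 1 carries the compound factor AE.
L9 = [
    [1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
    [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
    [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1],
]

# (AE)1 = A1E1, (AE)2 = A1E2, (AE)3 = A2E1, as defined on Page 6.
compound = {1: "A1E1", 2: "A1E2", 3: "A2E1"}

for case, (ae, b, c, d) in enumerate(L9, start=1):
    print(case, compound[ae], f"B{b}", f"C{c}", f"D{d}")
```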

Page 8:

Strategy for Constructing an Orthogonal Array

Beginner Strategy: A beginner should stick to the direct use of one of the standard orthogonal arrays. Because it gets difficult to keep track of the data from a larger number of experiments, a beginner is advised not to exceed 18 experiments, which limits the possible choices of orthogonal arrays to L4, L8, L9, L12, L16, L'16 and L18.

A beginner should make all factors 2-level or all factors 3-level (preferably 3-level) and should not attempt to estimate any interactions. This may require slightly modifying the case study requirements.

L18 is the most commonly used array because it can be used to study up to seven 3-level factors and one 2-level factor, which is the situation in many case studies.

Page 9:

Beginner Strategy for selecting an OA

All 2-level Factors:

# of 2-level factors    Recommended OA
       2 – 3                 L4
       4 – 7                 L8
       8 – 11                L12
      12 – 15                L16

All 3-level Factors:

# of 3-level factors    Recommended OA
       2 – 4                 L9
       5 – 7                 L18*

* When L18 is used, one 2-level factor can be used in addition to seven 3-level factors.
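These two lookup tables are plain range checks, so they are easy to encode. A minimal sketch (not part of the notes):

```python
def beginner_oa(n_factors: int, levels: int) -> str:
    """Recommend a standard orthogonal array when every factor has the
    same number of levels (2 or 3), following the two tables above."""
    if levels == 2 and 2 <= n_factors <= 15:
        for upper, oa in [(3, "L4"), (7, "L8"), (11, "L12"), (15, "L16")]:
            if n_factors <= upper:
                return oa
    if levels == 3 and 2 <= n_factors <= 7:
        # L18 also leaves room for one extra 2-level factor (see footnote).
        return "L9" if n_factors <= 4 else "L18"
    raise ValueError("combination is outside the beginner tables")

print(beginner_oa(5, 2))  # -> L8
print(beginner_oa(6, 3))  # -> L18
```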

Page 10:

Intermediate Strategy:

Testers with modest experience should use the dummy level and compound factor techniques in conjunction with the standard orthogonal arrays. The factors should preferably have 2 or 3 levels, and the estimation of interactions should be avoided.

Page 11:

                        # of 3-level factors
# of 2-level factors    0    1    2    3    4    5    6    7
         0              -    -    L9   L9   L9   L18  L18  L18
         1              -    -    L9   L9   L18  L18  L18  L18
         2              L4   L8   L9   L9   L18  L18  L18  -
         3              L4   L8   L9   L16  L18  L18  L18  -
         4              L8   L8   L9   L16  L18  L18  -    -
         5              L8   L16  L16  L16  L18  L18  -    -
         6              L8   L16  L16  L16  L18  -    -    -
         7              L8   L16  L16  L18  L18  -    -    -
         8              L12  L16  L16  L18  -    -    -    -
         9              L12  L16  L16  L18  -    -    -    -
        10              L12  L16  -    -    -    -    -    -
        11              L12  L16  -    -    -    -    -    -
        12              L16  L16  -    -    -    -    -    -
        13              L16  -    -    -    -    -    -    -
        14              L16  -    -    -    -    -    -    -
        15              L16  -    -    -    -    -    -    -

("-" marks combinations with no recommended array.)

Intermediate Strategy for selecting an OA

Page 12:

Some suggested rules:

1. When the L9 array is suggested and the total number of factors is less than or equal to 4, use the dummy level technique to assign each 2-level factor to a 3-level column.

2. When the L9 array is suggested and the total number of factors exceeds 4, use the compound factor technique to create 3-level factors from 2-level factors until the total number becomes 4 (rules 1 and 2 are sketched in code after this list).

3. When the L18 array is suggested and the number of 2-level factors exceeds 1, use the dummy level and compound factor techniques in a manner similar to the rules above.
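Rules 1 and 2 reduce to a single threshold test on the factor count. A hedged sketch (my own wording of the rules, not code from the notes):

```python
def l9_modification(n2: int, n3: int) -> str:
    """Pick the modification technique when L9 is the suggested array,
    given n2 2-level factors and n3 3-level factors (rules 1 and 2)."""
    if n2 + n3 <= 4:
        # Rule 1: dummy-level each 2-level factor into a 3-level column.
        return "dummy level technique"
    # Rule 2: compound pairs of 2-level factors until 4 factors remain.
    return "compound factor technique"

print(l9_modification(1, 3))  # rule 1 applies -> dummy level technique
print(l9_modification(2, 3))  # rule 2 applies -> compound factor technique
```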

A vast majority of case studies can be handled by the beginner and intermediate strategies.

Page 13:

Robust Testing

It is a method for generating efficient multi-factor test plans that offer:

• Thorough coverage

• Minimum number of test cases

• Ease of debugging

Page 14:

Benefits of Orthogonal Arrays

• Balanced (equal) coverage of all factor levels and all pairwise combinations (verified in the sketch below)

• Test cases are uniformly distributed throughout the test domain

• Number of test cases is comparable to the one-factor-at-a-time method

• Can detect all single mode faults

• Can provide guidance for specific double mode faults

• Cannot provide proof that the software has no faults
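The first bullet can be checked mechanically. The sketch below (my own, not from the notes) verifies that the L9 array used earlier covers every pairwise combination of levels exactly once:

```python
from collections import Counter
from itertools import combinations

L9 = [
    [1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
    [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
    [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1],
]

for i, j in combinations(range(4), 2):            # every pair of columns
    counts = Counter((row[i], row[j]) for row in L9)
    assert len(counts) == 9                       # all 3x3 level pairs appear
    assert set(counts.values()) == {1}            # each appears exactly once
print("every pairwise combination is covered exactly once")
```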

Page 15:

Key Principle

[Figure: test cases for three factors A, B, and C, plotted for an orthogonal-array-based plan and for a one-factor-at-a-time-based plan]

Orthogonal arrays distribute test cases evenly throughout the test domain

Page 16:

Alternative Testing Approach to Certify the Reliability of Software

This is a procedure that helps take the gamble out of releasing software products, for both the suppliers and the receivers of the software. The goal is to certify the reliability of the software before its release to users.

The ingredients of this procedure are a life cycle of executable product increments, representative statistical testing, and a certified estimate of the mean time to failure (MTTF) of the product at the time of its release.

Page 17:

The traditional life cycle of software development uses several defect removal stages: requirements, design, implementation, and testing. But it is inconclusive in establishing product reliability.

No matter how many errors are removed during this process, no one knows how many remain.

Product users are in general more interested in knowing how reliable the software will be in operation: in particular, how long it runs before it fails and what the operational impacts are when it fails.

Page 18:

Software certification life cycle

Rather than considering product design, implementation, and testing as sequential elements of the product life cycle, product development is treated as a sequence of executable product increments.

i.e., instead of

Requirements -> Design -> Implementation -> Testing

A life cycle organized by the incremental development of the product is as follows:

Requirements -> Design/Implementation: incr1 -> incr2 -> ... -> Product
                Testing:               incr1 -> incr2 -> ... -> Product

Increments accumulate over the life cycle into the final product

From "Certifying the Reliability of Software," Currit, Dyer and Mills, 1986

Page 19:

What are the executable increments? The functions available to users in the final product are divided into deeply nested increments that can be developed sequentially.

At the time of a software release, the functions in the early increments will be better tested and more mature than the functions in the later increments.

Each increment should be released to a tester/test group whose testing results are used to confirm or modify the development practices that are used for later increments.

The reliability of the increments is certified by the tester/test group in terms of a standard projection of their MTTF. The projections for the early increments can be used to project product reliability and to trigger corrective action as required. The MTTF projections of later increments of the life cycle verify whether corrective action had the right effect and the development process was carried out under good control.

Page 20:

How does this software certification life cycle work?

The tester/test group must have access to the same specifications as the developers so that realistic and user-oriented tests can be developed.

A statistical approach is used for testing, which is more natural from the user's standpoint than from the developer's. In software, the basis for a statistical model is the usage of the software by its users. Test cases that reflect statistical samples of user operations provide observations between failures of the software that are relatable to the operating environment.

Assuming the quality of the software is good, statistical testing will be effective in uncovering unanticipated deficiencies in the software. It can also be used to certify the reliability of the software with well-defined statistical confidence.
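As one way to picture "statistical samples of user operations", the fragment below draws test cases from an assumed operational profile; the operation names and probabilities are invented for illustration and would come from measured usage in practice.

```python
import random

# Assumed operational profile: probability that a user invokes each
# operation (invented numbers; real profiles come from usage data).
usage_profile = {"query": 0.60, "update": 0.25, "report": 0.10, "admin": 0.05}

random.seed(42)  # reproducible sample
operations = list(usage_profile)
weights = [usage_profile[op] for op in operations]

# 20 test cases sampled in proportion to how users exercise the software.
test_cases = random.choices(operations, weights=weights, k=20)
print(test_cases)
```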

Page 21:

Software failure characteristics

In 1980 an analysis was done to study the software failure history of nine large IBM software products (Adams, 1980)

Mean time to problem occurrence, in K usage-months
(entries are the percentage of each product's errors in the class)

Product    60    19     6   1.9    .6   .19   .06  .019
   1     34.2  28.8  17.8  10.3   5.0   2.1   1.2   0.7
   2     34.2  28.0  18.2   9.7   4.5   3.2   1.5   0.7
   3     33.7  28.5  18.0   8.7   6.5   2.8   1.4   0.4
   4     34.2  28.5  18.7  11.9   4.4   2.0   0.3   0.1
   5     34.2  28.5  18.4   9.4   4.4   2.9   1.4   0.7
   6     32.0  28.2  20.1  11.5   5.0   2.1   0.8   0.3
   7     34.0  28.5  18.5   9.9   4.5   2.7   1.4   0.6
   8     31.9  27.1  18.4  11.1   6.5   2.7   1.4   1.1
   9     31.2  27.6  20.4  12.8   5.6   1.9   0.5   0.0

There is a remarkable consistency in the failure rates, although the products are quite different from each other. Two striking features of the data are the wide range in failure rates (measured in usage months) and the high percentage of very low rate errors: one-third of the errors have an MTTF of 60,000 usage-months, about 5000 years!

Page 22:

The table on the previous page also gives a new insight into the power of statistical testing for improving MTTF when compared to selective testing or inspection.

Finding errors at random is a very different matter from finding execution failures at random. One-third of the errors (column 1 values) found at random hardly affect MTTF; the next quarter of the errors (column 2 values) do little more. However, the two highest rate classes, which account for only 2 percent of the errors, cause a thousand times more failures per error than the two lowest rate classes, which account for some 60 percent of the errors.

That is, statistical testing, which tends to find errors in the same order as their seriousness, will uncover failures by a factor of 2000/60, some 30 to 1, over randomly finding errors without regard to their seriousness, e.g. by structural testing.
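This arithmetic can be reproduced from the table, under the assumption that an error's failure rate is proportional to 1/MTTF. A small sketch of the computation (my own, not from the paper), using Product 1's row:

```python
# MTTF classes (K usage-months) and Product 1's percentage of errors per class.
mttf_kmonths = [60, 19, 6, 1.9, 0.6, 0.19, 0.06, 0.019]
pct_errors   = [34.2, 28.8, 17.8, 10.3, 5.0, 2.1, 1.2, 0.7]

# Each class contributes (percent of errors) / MTTF to the overall
# failure rate, since failure rate per error is proportional to 1/MTTF.
contrib = [p / m for p, m in zip(pct_errors, mttf_kmonths)]
total = sum(contrib)
for m, c in zip(mttf_kmonths, contrib):
    print(f"MTTF {m:>6} K-months: {100 * c / total:5.1f}% of failures")
# The two highest-rate classes (~2% of the errors) account for roughly
# two-thirds of all failures in this row.
```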

Page 23:

The basis for a statistical model is in the nature of the usage of the software by its users. Any particular user will make use of the software from time to time with different initial conditions and different inputs. The only detectable failures in the software are either from it aborting or from producing faulty output. For fixed initial conditions and fixed input, the software will behave exactly the same for all other users whenever they use it.

We are interested in failure-free execution intervals, rather than in trying to estimate the errors remaining in a software design. The objective is to measure operational reliability, which is why the usage perspective is taken.

Page 24:

Reliability Prediction

The approach to MTTF prediction is to record the execution time for each statistical test case run against the software, sum the times between successive test case failures, and input these interfail times into the statistical model.

MTTF predictions are made on a per-increment basis, and the software product reliability is computed as the weighted sum of the increment MTTFs.

MTTF_m = MTTF_0 × R^m, where R accounts for the average fractional improvement to the MTTF from each change, so after m changes the MTTF has grown by a factor of R^m.
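A short sketch of this growth formula (the interfail times and R value are invented, used only to seed and exercise the model):

```python
def predicted_mttf(mttf0: float, r: float, m: int) -> float:
    """MTTF after m corrective changes, given the initial MTTF and the
    average fractional improvement R per change (R > 1)."""
    return mttf0 * r**m

# Invented interfail times from statistical test runs, in hours.
interfail_hours = [2.0, 3.5, 1.0, 4.5]
mttf0 = sum(interfail_hours) / len(interfail_hours)  # crude initial estimate

for m in range(4):  # MTTF projected after 0..3 changes
    print(m, round(predicted_mttf(mttf0, r=1.2, m=m), 2))
```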

Page 25:

Testing Requirements

Why?

• Maintenance is 60-90% of system cost

• 2/3 of finished system errors are requirements and design errors

• Fixing a requirements error will cost 10X+ during programming and 75X+ after installation

Page 26:

Requirements:

• Must be delivered to provide value

• Must be observable and measurable

• Must be realistic and attainable

• Must be testable

• Must be reviewable

• Must be clear and structurally complete

• Should be stated as Itemized Deliverables

Page 27:

Prototypes

Whether it is a paper mock-up or a software simulation, a prototype allows you to present options to the customer and get feedback that allows the requirements to be defined more accurately.

Two approaches to the use of prototypes:

1. A throwaway prototype is constructed solely to define requirements; it is not delivered to the customer as a product.

2. An evolutionary prototype is used on the front end of the process to elicit and analyze requirements, but is also iteratively refined into a product that can be delivered to the customer.

Page 28:

Prototypes can be constructed during the requirements and design phases of the development cycle. The prototype is used during requirements analysis to clarify and test requirements.

The test team can use prototypes developed during requirements analysis to get a head start on testing. Preliminary tests can be developed and run against the prototype to validate key requirements. These tests can be refined later in the process to form a core set of tests for the system and acceptance test phases.

Page 29:

Evolutionary prototyping is a method of developing a system in evolving stages, with each stage being driven by customer feedback and test results. It is particularly useful if you cannot determine solid requirements at the beginning of a project but are able to work with the customer to iteratively define the needs.

Both static testing and dynamic testing need to be involved during each iteration of the development.

Page 30:

Testing in an evolutionary prototyping life cycle

[Figure: evolutionary prototyping life cycle. Development track: Prototype Definition -> Prototype Design -> Code & Unit Test -> Integration Test -> Fix & Evaluate Prototype. Parallel test track: Test Planning -> Test Design -> Test Implementation & Debug -> Test Execution. Both tracks feed into System Testing -> Acceptance Testing -> Operations & Maintenance.]

Once the prototype is defined, the development team designs, codes, and tests the prototype while the test team works in parallel on test planning, test development, and test execution. This requires close communication between the development and test teams to ensure that each prototype is sufficiently tested before demonstration to the customer. When new functionality is added with each iteration, the test team needs to perform regression testing to verify that old functionality is not broken.

From Rapid Testing by Culbertson, Brown and Cobb

Page 31:

Homework (week 3, 02/03/05)

1. Read chapters 6-8

2. Start to work on project #1, due 2/17/05