
Unit-5: Automated Comparison

VERIFICATION

• Verification and Validation are independent procedures that are used together for checking that a product, service, or system meets requirements and specifications and that it fulfills its intended purpose.

• These are critical components of a quality management system such as ISO 9000.

• The words "verification" and "validation" are sometimes preceded by "Independent" (as in IV&V), indicating that the verification and validation are to be performed by a disinterested third party.

Comparison

• Comparison testing means comparing your software against a better product, typically a competitor's product.

• Here we basically compare the performance of the software. For example, if you have to do comparison testing of a PDF converter (a desktop-based application), you would compare your software with your competitor's on the basis of the criteria below (see the timing sketch after the list):

1. Speed of converting a PDF file into Word.

2. Quality of the converted file.
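As a rough illustration of the speed criterion, the sketch below times two PDF-to-Word converters on the same input file. The command names (our_pdf2word, their_pdf2word) and file names are hypothetical placeholders, not actual tools.

```python
import subprocess
import time

# Hypothetical command-line converters used only for illustration; substitute
# the actual tools being compared.
CONVERTERS = {
    "our_tool":   ["our_pdf2word", "sample.pdf", "out_ours.docx"],
    "competitor": ["their_pdf2word", "sample.pdf", "out_theirs.docx"],
}

def time_conversion(command):
    """Run one converter and return the elapsed conversion time in seconds."""
    start = time.perf_counter()
    subprocess.run(command, check=True)
    return time.perf_counter() - start

if __name__ == "__main__":
    for name, command in CONVERTERS.items():
        print(f"{name}: converted in {time_conversion(command):.2f}s")
```

The quality of the converted file would then be judged separately, for example by comparing each converted output against a reference document.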

Automation: Test Comparators

• A test comparator helps to automate the comparison between the actual and the expected result produced by the software.

• There are two ways in which the actual results of a test can be compared to the expected results for the test:

Dynamic comparison

• Here the comparison is done dynamically, i.e. while the test is executing.

• This type of comparison is good for comparing the wording of an error message that pops up on a screen with the correct wording for that error message.

• Dynamic comparison is useful when an actual result does not match the expected result in the middle of a test – the tool can be programmed to take some recovery action at this point or go to a different set of tests.
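A minimal sketch of dynamic comparison, assuming the test script receives the text of the pop-up error message while the test is still running; the expected wording and the recovery step are illustrative only.

```python
EXPECTED_ERROR = "File could not be opened: access denied"

def check_error_message(actual_message, test_log):
    """Dynamic comparison: check the message while the test is executing,
    so the script can recover or branch immediately on a mismatch."""
    if actual_message == EXPECTED_ERROR:
        test_log.append("PASS: error message matches the expected wording")
        return True
    test_log.append(f"FAIL: expected {EXPECTED_ERROR!r}, got {actual_message!r}")
    # Recovery action, e.g. dismiss the dialog and continue with a different
    # set of tests instead of aborting the whole run.
    return False

log = []
check_error_message("File could not be opened: access denied", log)
print(log[-1])  # PASS: error message matches the expected wording
```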

Post-execution comparison

• Here the comparison is performed after the test has finished executing and the software under test is no longer running.

• Operating systems normally have file comparison tools available which can be used for post-execution comparison, and often a comparison tool will be developed in-house for comparing a particular type of file or test result.

• Post-execution comparison is best for comparing a large volume of data, for example comparing the contents of an entire file with the expected contents of that file, or comparing a large set of records from a database with the expected content of those records. For example, comparing the result of a batch run (e.g. overnight processing of the day’s online transactions) is probably impossible to do without tool support.
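A minimal sketch of post-execution comparison using Python's standard difflib module; the file names are placeholders for the actual and baseline outputs of a test run.

```python
import difflib

def compare_output_files(actual_path, expected_path, report_path):
    """Post-execution comparison: diff the actual output file against the
    expected (baseline) file after the software under test has finished."""
    with open(actual_path) as fa, open(expected_path) as fe:
        actual_lines = fa.readlines()
        expected_lines = fe.readlines()

    diff = list(difflib.unified_diff(expected_lines, actual_lines,
                                     fromfile=expected_path,
                                     tofile=actual_path))
    with open(report_path, "w") as report:
        report.writelines(diff)          # an empty report means the files match
    return len(diff) == 0

# Example use after an overnight batch run (paths are placeholders):
# passed = compare_output_files("batch_run.out", "batch_run.baseline", "diff.txt")
```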

Contd.

• Whether a comparison is dynamic or post-execution, the test comparator needs to know what the correct result is. This may be stored in the test case itself or it may be computed using a test oracle.

• Features or characteristics of test comparators include:

• Dynamic comparison of transient events that occur during test execution;

• Post-execution comparison of stored data, e.g. in files or databases;

• Masking or filtering of subsets of actual and expected results (see the masking sketch below).
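The masking/filtering feature can be sketched as follows: fields that legitimately change from run to run (here an assumed timestamp format and run id, chosen only for illustration) are replaced with fixed tokens on both sides before the comparison is made.

```python
import re

# Assumed volatile fields, for illustration only; real masks depend on the
# format of the output being compared.
MASKS = [
    (re.compile(r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}"), "<TIMESTAMP>"),
    (re.compile(r"run-id=\d+"), "run-id=<ID>"),
]

def mask(line):
    """Replace volatile fields with fixed tokens."""
    for pattern, replacement in MASKS:
        line = pattern.sub(replacement, line)
    return line

def compare_with_masking(actual_lines, expected_lines):
    """Apply the masks to both sides, then compare line by line."""
    return [mask(a) for a in actual_lines] == [mask(e) for e in expected_lines]
```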

Test Sensitivity

• "Test sensitivity" refers to how likely it is that a test will be affected by inconsistencies and/or failures in the application under test.
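A small illustration of the idea, assuming a screen captured as a dictionary of fields (the field names and values are invented): the wider the comparison, the more sensitive the test is to unrelated changes in the application.

```python
EXPECTED_SCREEN = {"title": "Order Summary", "total": "1,250.00", "footer": "v2.3"}

def sensitive_check(actual_screen):
    """High sensitivity: compares every field on the screen, so any change
    anywhere (even an unrelated footer update) makes this check fail."""
    return actual_screen == EXPECTED_SCREEN

def robust_check(actual_screen):
    """Lower sensitivity: compares only the field this test is about, so
    unrelated changes elsewhere do not cause a failure here."""
    return actual_screen.get("total") == EXPECTED_SCREEN["total"]
```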