TRANSCRIPT
LECTURE 19 23/11/15
Software Quality and Testing
Objectives
◦ To discuss when testing takes place in the life cycle
◦ Test-driven development advocates early testing!
◦ To cover the strategies and tools associated with object-oriented testing
◦ Analysis and Design Testing
◦ Class Tests
◦ Integration Tests
◦ Validation Tests
◦ System Tests
◦ To discuss test plans and execution for projects
Object-Oriented Testing
◦ When should testing begin?
◦ Analysis and Design: Testing begins by evaluating the OOA and OOD models
◦ How do we test OOA models (requirements and use cases)?
◦ How do we test OOD models (class and sequence diagrams)?
◦ Structured walk-throughs, prototypes
◦ Formal reviews of correctness, completeness and consistency
◦ Programming: How does OO make testing different from procedural programming?
◦ The concept of a ‘unit’ broadens due to class encapsulation
◦ Integration focuses on classes and their context of use: a use-case scenario, or execution across a thread
◦ Validation may still use conventional black-box methods
Test Driven Programming
◦ eXtreme Programming (XP) advocates writing tests for units before writing actual code for units
◦ Why might this practice be a good idea?
◦ Constrains code to design: How so? Design -> Test -> Code … in small iterations
◦ Promotes validation and reliability: Why? Always rerun all tests (easier with automated testing) before integrating new code in a release
◦ Increases confidence to change code: Why? Changes shouldn’t break old code if you can test old code
◦ Creed of XP: “embrace change”
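The test-first cycle described above can be sketched with Python's `unittest`; the `Stack` class and its tests are hypothetical, invented for illustration.

```python
import unittest

# Step 1 (red): write the tests before the code exists.
class StackTest(unittest.TestCase):
    def test_push_then_pop_returns_last_item(self):
        s = Stack()
        s.push(42)
        self.assertEqual(s.pop(), 42)

    def test_pop_on_empty_stack_raises(self):
        with self.assertRaises(IndexError):
            Stack().pop()

# Step 2 (green): write just enough code to make the tests pass.
class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()  # raises IndexError when empty
```

Because the whole suite is automated, it can be rerun before every integration of new code, which is what gives the confidence to change old code.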
Two Issues in Systems Quality
◦ Verification (or the narrow view of quality assurance)
◦ “Am I building the product right?”
◦ (Is it good? Does it work properly?)
◦ Validation (or usability and user satisfaction)
◦ “Am I building the right product?”
◦ (Is this what users wanted, and in this way? Are users happy with it?)
Bug Curve
Building Test Cases
◦ All methods of your system must be checked by at least one test
◦ Construct some test input cases, then describe what the output should look like
◦ Compare the outcomes with the expected output
A Test Case is a Set of ‘What-if Questions’
◦ The general format is: if it receives certain input, it produces certain output
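That input/output format can be written down directly as data. A minimal sketch, where the `discount` function and its 10%-off rule are hypothetical:

```python
def discount(total):
    """Hypothetical function under test: 10% off orders of 100 or more."""
    return total * 0.9 if total >= 100 else total

test_cases = [
    # (what if the input is ..., then the output should be ...)
    (50, 50),       # below threshold: no discount
    (100, 90.0),    # at threshold: discount applies
    (200, 180.0),
]

# Compare actual outcomes with the expected output.
for given, expected in test_cases:
    actual = discount(given)
    assert actual == expected, f"discount({given}) = {actual}, expected {expected}"
```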
Guidelines for Developing Test Cases
◦ Describe which feature or service (external or internal) your test attempts to cover
◦ If the test case is based on a use case, it is a good idea to refer to the use case name
◦ Remember that the use cases are the source of test cases
◦ Specify what is being tested and which particular feature (methods)
◦ Specify what you are going to do to test the feature and what you expect to happen
◦ Test normal use of the function
◦ Test abnormal but reasonable use of the function
◦ Test abnormal and unreasonable use of the function
◦ Test the boundary conditions
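The four kinds of use listed above can be illustrated with one hypothetical function; the `withdraw` signature and its error behaviour are assumptions made for this sketch:

```python
def withdraw(balance, amount):
    """Hypothetical function under test: withdraw amount from balance."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

# Normal use of the function
assert withdraw(100, 30) == 70

# Abnormal but reasonable use: overdrawing should fail cleanly
try:
    withdraw(100, 150)
    assert False, "expected insufficient-funds error"
except ValueError:
    pass

# Abnormal and unreasonable use: a negative amount
try:
    withdraw(100, -5)
    assert False, "expected rejection of negative amount"
except ValueError:
    pass

# Boundary condition: withdraw exactly the full balance
assert withdraw(100, 100) == 0
```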
Guidelines for Developing Test Cases
◦ Test objects' interactions and the messages sent among them
◦ If you have developed sequence diagrams, they can assist you in this process
◦ When the revisions have been made, document the cases so they become the starting basis for the follow-up test
◦ Attempting to reach agreement on answers generally will raise other what-if questions
◦ Add these to the list and answer them; repeat the process until the list stabilises and no more questions need to be added
Guidelines for Developing Test Plan
◦ Requirements might dictate the format of the test plan
◦ The test plan should contain a schedule and a list of required resources
◦ Document every type of testing that you plan to complete
◦ A configuration control system provides a way of tracking the changes to the code
◦ A process must be in place to routinely bring the test plan in sync with the product and/or product specification
Testing OO Code
Class Test
◦ The smallest testable unit is the encapsulated class
◦ Test each operation as part of a class hierarchy, because its class hierarchy defines its context of use
◦ Approach:
◦ Test each method (and constructor) within a class
◦ Test the state behaviour (attributes) of the class between methods
◦ How is class testing different from conventional testing?
◦ Conventional testing focuses on input-process-output, whereas class testing focuses on each method, then designs sequences of methods to exercise the states of a class
◦ But white-box testing can still be applied
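Both halves of the approach above — each method on its own, then a sequence of methods that exercises state — can be sketched with a hypothetical class invented for illustration:

```python
class BankAccount:
    """Hypothetical class under test."""
    def __init__(self):
        self.balance = 0
        self.open = True

    def deposit(self, amount):
        if not self.open:
            raise RuntimeError("account closed")
        self.balance += amount

    def close(self):
        self.open = False

# Test each method, then a sequence of methods that exercises state.
acct = BankAccount()
acct.deposit(50)
assert acct.balance == 50          # single method result

acct.deposit(25)
assert acct.balance == 75          # state carried between calls

acct.close()
try:
    acct.deposit(10)               # state (closed) changes behaviour
    assert False, "deposit on a closed account should fail"
except RuntimeError:
    pass
```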
Integration Test
◦ OO does not have a hierarchical control structure so conventional top-down and bottom-up integration tests have little meaning
◦ Integration applies three different incremental strategies:
◦ Thread-based testing: integrates classes required to respond to one input or event
◦ Use-based testing: integrates classes required by one use case
◦ Cluster testing: integrates classes required to demonstrate one collaboration
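A use-based integration test might look like the following sketch; the `Inventory` and `OrderService` classes and their one "place order" use case are hypothetical:

```python
# Hypothetical classes that collaborate in one "place order" use case.
class Inventory:
    def __init__(self, stock):
        self.stock = stock

    def reserve(self, item):
        if self.stock.get(item, 0) <= 0:
            return False
        self.stock[item] -= 1
        return True

class OrderService:
    def __init__(self, inventory):
        self.inventory = inventory

    def place_order(self, item):
        return "confirmed" if self.inventory.reserve(item) else "rejected"

# Use-based integration: exercise the classes together, as the use case
# does, rather than each class in isolation.
service = OrderService(Inventory({"widget": 1}))
assert service.place_order("widget") == "confirmed"
assert service.place_order("widget") == "rejected"  # stock exhausted
```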
Validation Test
◦ Are we building the right product?
◦ Validation succeeds when the software functions in a manner that can be reasonably expected by the customer
◦ Focus on user-visible actions and user-recognizable outputs
◦ Details of class connections disappear at this level
◦ Apply:
◦ Use-case scenarios from the software requirements spec
◦ Black-box testing to create a deficiency list
◦ Acceptance tests through alpha (at the developer’s site) and beta (at the customer’s site) testing with actual customers
◦ How will you validate your term product?
System Test
◦ Software may be part of a larger system. This often leads to “finger pointing” by other system dev teams
◦ Finger-pointing defence:
1. Design error-handling paths that test external information
2. Conduct a series of tests that simulate bad data
3. Record the results of tests to use as evidence
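The three defensive steps above can be sketched together; the `parse_reading` function and its JSON input format are assumptions made for this example:

```python
import json

def parse_reading(raw):
    """Hypothetical error-handling path that validates external
    information before the rest of the system sees it (step 1)."""
    try:
        data = json.loads(raw)
        value = float(data["value"])
    except (ValueError, KeyError, TypeError) as e:
        return None, f"rejected external input: {e.__class__.__name__}"
    return value, "ok"

# Step 2: simulate bad data from the external system.
# Step 3: record the results to use as evidence.
log = []
for raw in ['{"value": 3.5}', 'not json', '{"other": 1}', '{"value": "x"}']:
    value, status = parse_reading(raw)
    log.append((raw, status))

assert log[0][1] == "ok"
assert all(status.startswith("rejected") for _, status in log[1:])
```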
◦ Types of System Testing:
◦ Recovery testing: how well and quickly does the system recover from faults
◦ Security testing: verify that protection mechanisms built into the system will protect from unauthorized access (hackers, disgruntled employees, fraudsters)
◦ Stress testing: place abnormal load on the system
◦ Performance testing: investigate the run-time performance within the context of an integrated system
Other Testing Strategies • Black Box Testing
• White Box Testing
• Top-down Testing
• Bottom-up Testing
Black Box
◦ In black-box testing, the test item is treated as a “black box” whose internal logic is unknown
◦ All that is known is what goes in and what comes out: the input and output
◦ Black-box testing works very nicely for testing objects in an O-O environment
◦ Once you have created fully tested and debugged classes of objects, you will put them into a library for use or reuse
White box
◦ White-box testing assumes that the specific internal logic is important, and must be tested to guarantee the system’s proper functioning
◦ One form of white box testing is called path testing
◦ It makes certain that each path in a program is executed at least once during testing
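Path testing can be illustrated with a small hypothetical function containing two decision points; inputs are chosen so that every branch combination runs at least once:

```python
def classify(n):
    """Hypothetical function with two decision points, giving
    multiple paths through the code."""
    if n < 0:
        sign = "negative"
    else:
        sign = "non-negative"
    if n % 2 == 0:
        parity = "even"
    else:
        parity = "odd"
    return f"{sign} {parity}"

# Path testing: exercise each branch combination at least once.
assert classify(-2) == "negative even"
assert classify(-1) == "negative odd"
assert classify(4) == "non-negative even"
assert classify(3) == "non-negative odd"
```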
Top-Down
◦ It assumes that the main logic of the application needs more testing than the supporting logic
◦ It is often appropriate for user interfaces and event-driven systems
Bottom-Up
◦ It assumes that individual programs and modules are fully developed as stand-alone processes
◦ These modules are tested individually, then combined for integration testing
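A minimal bottom-up sketch, with hypothetical modules invented for illustration:

```python
# Hypothetical low-level modules, developed as stand-alone pieces.
def tokenize(text):
    return text.split()

def count(tokens):
    return len(tokens)

# Bottom-up: test each module individually first...
assert tokenize("a b c") == ["a", "b", "c"]
assert count(["a", "b"]) == 2

# ...then combine them for integration testing.
def word_count(text):
    return count(tokenize(text))

assert word_count("to be or not to be") == 6
```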