
STLC - Software Testing Life Cycle

There is a systematic cycle to software testing, although it varies from organization to organization.

Software Testing Life Cycle:

The software testing life cycle, or STLC, refers to a comprehensive set of testing-related activities, specifying the details of every activity along with the best time to perform it. There cannot be a single standardized testing process across organizations; however, every organization in the software development business defines and follows some form of testing life cycle.

By and large, STLC comprises the following seven sequential phases:

1) Planning of Tests

2) Analysis of Tests

3) Designing of Tests

4) Creation & Verification of Tests

5) Execution of Testing Cycles

6) Performance Testing & Documentation

7) Actions after Implementation

Every company follows its own software testing life cycle to suit its own requirements, culture & available resources. The software testing life cycle can't be viewed in isolation; rather, it interacts with every phase of the Software Development Life Cycle (SDLC). The prime focus of the software testing life cycle is managing & controlling all software testing activities. Testing may be manual or automated using a tool.

1) Planning of Tests:

In this phase a senior person, such as the project manager, plans & identifies all the areas where testing effort needs to be applied, while operating within constraints such as resources & budget. Unless judicious planning is done at the beginning, the result can be catastrophic: a poor-quality product that dissatisfies the ultimate customer. Planning is not limited to the initial phase; rather, it is a continuous exercise extending until the end.

During the planning stage, the team of senior-level persons produces an outline of a high-level test plan. The high-level test plan comprehensively describes the following:

Scope of Testing: Defining the areas to be tested and identifying the features to be covered during testing.

Identification of Approaches for Testing: Identifying the approaches, including the types of testing to be used.

Defining Risks: Identifying the different types of risks involved with the decided plan.

Identification of Resources: Identifying resources such as people, materials & machines which need to be deployed during testing.

Time Schedule: Scheduling the decided testing so as to deliver the end product as per the commitment made to the customer.

Involvement of software testers begins in the planning phase of the software development life cycle. During the design phase, testers work with developers in determining what aspects of a design are testable and with what parameters those tests will work.

2) Analysis of Tests:

Based upon the High Level Test Plan Document, further details covering the following are worked out:

Identification of Types of Testing to be performed during various stages of Software Development Life Cycle.

Identification of extent to which automation needs to be done.

Identification of the time at which automation is to be carried out.

Identification of documentation required for automated testing

A software project can't be successful unless there is frequent interaction among the various teams involved in coding & testing, with the active involvement of the project managers, business analysts or even the customer. Any deficiencies in the decided test plans surface during such cross-functional team meetings. This provides an opportunity to rethink & refine the strategies decided for testing.

Based upon the customer requirements a detailed matrix for functional validation is prepared to cover the following areas:

Ensure that each & every business requirement is covered by one test case or another.

Identification of the test cases best suited to the automated testing

Identification of the areas to be covered by performance testing and stress testing.
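A functional validation matrix of this kind can be sketched as a simple requirement-to-test-case mapping. The requirement and test-case IDs below are invented for illustration; a real matrix would come from the project's requirements documentation:

```python
# Hypothetical traceability data: each business requirement mapped to the
# test cases that cover it. An empty list flags a coverage gap.
traceability = {
    "REQ-001 user login":    ["TC-01", "TC-02"],
    "REQ-002 search":        ["TC-03"],
    "REQ-003 export report": [],   # not yet covered by any test case
}

# Check that every business requirement is covered by at least one test case.
uncovered = [req for req, cases in traceability.items() if not cases]
print("Uncovered requirements:", uncovered)
```

Running such a check after every revision of the matrix makes coverage gaps visible before test design starts.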


Carry out detailed review of documentation covering areas like Customer Requirements, Product Features & Specifications and Functional Design etc.

3) Designing of Tests:

This phase involves the following:

Further polishing of various test cases & test plans.

Revision & finalization of the Matrix for Functional Validation.

Finalization of risk assessment methodologies.

In case automation is to be adopted, identification of the test cases suitable for automation.

Creation of scripts for Test cases decided for automation.

Preparation of test data.

Establishing Unit testing Standards including defining acceptance criteria

Revision & finalization of testing environment.

4) Creation & Verification of Tests:

This phase involves the following:

Finalization of test plans and test cases.

Completion of script creation for test cases decided for automation.

Completion of test plans for Performance testing & Stress testing.

Providing technical support to the code developers in their effort directed towards unit testing.

Bug logging in bug repository & preparation of detailed bug report.

Performing Integration testing followed by reporting of defects detected if any.

5) Execution of Testing Cycles:

This phase involves the following:

Completion of test cycles by executing all the test cases until a predefined stage is reached, or until no more errors are detected.

This is an iterative process involving execution of Test Cases, Detection of Bugs, Bug Reporting, Modification of test cases if felt necessary, Fixing of bugs by the developers & finally repeating the testing cycles.
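The iterative cycle described above can be sketched in a few lines. The test-case results and the "fix" step are stand-ins rather than a real test harness; the point is the exit criterion: repeat until a cycle detects no new defects, or a predefined limit is reached.

```python
def run_cycles(test_results, max_cycles=10):
    """Repeat test execution until a cycle finds no defects, or the
    predefined maximum number of cycles is reached."""
    for cycle in range(1, max_cycles + 1):
        defects = [tc for tc, passed in test_results.items() if not passed]
        if not defects:
            return cycle            # exit criterion: no more errors detected
        for tc in defects:          # stand-in for developers fixing the bugs
            test_results[tc] = True
    return max_cycles

# Two failing cases are "fixed" in cycle 1; cycle 2 finds nothing new.
cycles_needed = run_cycles({"TC-01": True, "TC-02": False, "TC-03": False})
```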


6) Performance Testing & Documentation:

This phase involves the following:

Execution of test cases pertaining to performance testing & stress testing.

Revision & finalization of test documentation.

Performing Acceptance testing, load testing followed by recovery testing

Verification of the software application by simulating conditions of actual usage.

7) Actions after Implementation:

This phase involves the following:

Evaluation of the entire process of testing.

Documentation of TGR (Things Gone Right) & TGW (Things Gone Wrong) reports.

Identification of approaches to be followed in the event of occurrence of similar defects & problems in the future.

Creation of comprehensive plans with a view to refine the process of Testing.

Identification & fixing of newly cropped-up errors on a continuous basis.

Winding up of the test environment & restoration of all test equipment to the original base line conditions.

Life Cycle of Software Testing (STLC)

Phase: Planning of Tests
Activities: Creation of a high-level test plan.
Outcome: Refined test plans & specifications.

Phase: Analysis of Tests
Activities: Creation of a fully descriptive test plan; creation of the Matrix for Functional Validation; creation of test cases.
Outcome: Refined test plans, test cases & Matrix for Functional Validation.

Phase: Designing of Tests
Activities: Revision of test cases; selection of test cases fit for automation.
Outcome: Refined test cases, input data sets & documents for assessment of risk.

Phase: Creation & Verification of Tests
Activities: Creation of scripts for the test cases selected for automation.
Outcome: Detailed procedures for testing, testing scripts, test reports & bug reports.

Phase: Execution of Testing Cycles
Activities: Completion of cycles of testing.
Outcome: Detailed test reports & bug reports.

Phase: Performance Testing & Documentation
Activities: Execution of test cases related to performance testing & stress testing; detailed documentation.
Outcome: Test reports; documentation on the various metrics used during testing.

Phase: Actions after Implementation
Activities: Evaluation of all processes of testing.
Outcome: Detailed plans for improving the process of testing.

------------------------------------------------------------------------------------------------------------------------------------------

The software development life cycle (SDLC) and the software testing life cycle (STLC) run in parallel.

SDLC (Software Development Life Cycle) vs STLC (Software Testing Life Cycle)

SDLC is a systematic approach to developing software; STLC is the process of testing software in a well-planned and systematic way. Each SDLC phase below is paired with the testing activities that run alongside it.

Requirements gathering: Requirements analysis is done in this phase; the software requirements are reviewed by the test team.

Design: Test planning, test analysis and test design are done in this phase. The test team reviews the design documents and prepares the test plan.

Coding or development: Test construction and verification are done in this phase; testers write test cases and finalize the test plan.

Testing: Test execution and bug reporting are done; manual testing and automation testing are performed, and the defects found are reported. Re-testing and regression testing are also done in this phase.

Deployment: Final testing and implementation are done in this phase, and the final test report is prepared.

Maintenance: Maintenance testing is done in this phase.

What is STLC (Software Testing LifeCycle)?


The process of testing software in a well-planned and systematic way is known as the software testing life cycle (STLC).

Different organisations have different phases in their STLC; however, a generic software testing life cycle (STLC) for the waterfall development model consists of the following phases.

1. Requirements Analysis

2. Test Planning

3. Test Analysis

4. Test Design

5. Test Construction and Verification

6. Test Execution and Bug Reporting

7. Final Testing and Implementation

8. Post Implementation

1. Requirements Analysis

In this phase testers analyze the customer requirements and work with developers during the design phase to see which requirements are testable and how they are going to test those requirements.

It is very important to start testing activities from the requirements phase itself, because the cost of fixing a defect is much lower when it is found in the requirements phase than in later phases.

2. Test Planning

In this phase all the planning about testing is done: what needs to be tested, how the testing will be done, the test strategy to be followed, the test environment, the test methodologies to be used, hardware and software availability, resources, risks, etc. A high-level test plan document, which includes all the planning inputs mentioned above, is created and circulated to the stakeholders.

Usually the IEEE 829 test plan template is used for test planning.

3. Test Analysis

After the test planning phase is over, the test analysis phase starts. In this phase we need to dig deeper into the project and figure out what testing needs to be carried out in each SDLC phase.

Automation activities are also decided in this phase: whether automation needs to be done for the software product, how the automation will be done, how much time it will take to automate, and which features need to be automated.


Non-functional testing areas (stress and performance testing) are also analyzed and defined in this phase.

4. Test Design

In this phase, various black-box and white-box test design techniques are used to design the test cases for testing, and testers start writing test cases by following those design techniques. If automation testing is to be done, then the automation scripts also need to be written in this phase.
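As a small illustration of a black-box design technique, the sketch below derives test cases by boundary value analysis for a hypothetical eligibility check (the 18-65 range and the function itself are invented for the example):

```python
def is_eligible(age):
    """Hypothetical rule under test: ages 18-65 inclusive are eligible."""
    return 18 <= age <= 65

# Boundary value analysis: test just below, on, and just above each
# boundary rather than picking input values at random.
boundary_cases = {17: False, 18: True, 19: True, 64: True, 65: True, 66: False}

for age, expected in boundary_cases.items():
    assert is_eligible(age) == expected, f"boundary case failed at age {age}"
```

Defects tend to cluster at boundaries, which is why the technique concentrates test cases there.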

5. Test Construction and Verification

In this phase testers prepare more test cases, keeping in mind positive and negative scenarios, end-user scenarios, etc. All the test cases and automation scripts need to be completed in this phase and reviewed by the stakeholders. The test plan document should also be finalized and verified by reviewers.

6. Test Execution and Bug Reporting

Once unit testing is done by the developers and the test team gets the test build, the test cases are executed and defects are reported in a bug tracking tool. After test execution is complete and all the defects are reported, test execution reports are created and circulated to the project stakeholders.

After the developers fix the bugs raised by testers, they give another build with the fixes to the testers. Testers then do re-testing and regression testing to ensure that each defect has been fixed and has not affected any other areas of the software.

Testing is an iterative process, i.e. if a defect is found and fixed, testing needs to be repeated after every defect fix.

After the testers confirm that the defects have been fixed and no more critical defects remain in the software, the build is given for final testing.

7. Final Testing and Implementation

In this phase the final testing of the software is done; non-functional testing such as stress, load and performance testing is performed. The software is also verified in a production-like environment. Final test execution reports and documents are prepared in this phase.

8. Post Implementation

In this phase the test environment is cleaned up and restored to its default state, process review meetings are held, and lessons learnt are documented. A document is prepared to help cope with similar problems in future releases.


What is Validation

Validation represents dynamic testing techniques.

Validation ensures that the software operates as planned in the requirements phase by executing it, running predefined test cases and comparing the output against the expected results.

Validation answers the question: "Did we build the software fit for purpose, and does it provide a solution to the problem?"

Validation is concerned with evaluating the software, component or system to determine whether it meets end-user requirements.

Some important validation techniques are as follows:

1. Unit Testing

2. Integration Testing

3. System Testing

4. User Acceptance Testing

1. Unit Testing

A unit is the smallest testable part of the software system. Unit testing is done to verify that the lowest independent entities in any software are working fine. The smallest testable part is isolated from the remainder of the code and tested to determine whether it works correctly.
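A minimal unit test might look like the sketch below, using Python's standard unittest module. The apply_discount function is an invented example of a "smallest testable part"; the test exercises it in isolation from the rest of the system.

```python
import unittest

def apply_discount(price, percent):
    """Unit under test: return the price after a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_normal_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(200.0, 150)

if __name__ == "__main__":
    unittest.main(exit=False)
```

Note that the test covers both a normal case and an invalid input, without touching any other part of the system.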

Why is unit testing important

Suppose you have two units and, to save time, you do not want to test them individually but only as an integrated system.

Once the system is integrated and you find an error in it, it becomes difficult to determine in which unit the error occurred; so unit testing is mandatory before integrating the units.

When a developer is coding the software, it may happen that dependent modules are not yet complete. In such cases developers use STUBS and DRIVERS to simulate the called (stub) and calling (driver) units. Unit testing then requires stubs and drivers: a stub simulates the called unit and a driver simulates the calling unit.

Let's explain STUBS and DRIVERS in detail.

What is STUB?


Assume you have 3 modules: Module A, Module B and Module C. Module A is ready and we need to test it, but Module A calls functions from Modules B and C, which are not ready. So the developer writes a dummy module which simulates B and C and returns values to Module A. This dummy module is known as a stub.

What is DRIVER?

Now suppose Modules B and C are ready, but Module A, which calls functions from Modules B and C, is not ready. So the developer writes a dummy piece of code that stands in for Module A and calls Modules B and C. This dummy piece of code is known as a driver.
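The stub and driver ideas above can be sketched as follows. The modules, the pricing logic and the 25% surcharge are all invented for illustration; only the roles matter: the stub stands in for a called unit that is not ready, and the driver stands in for a calling unit that is not ready.

```python
def module_b_stub(order_id):
    """STUB: simulates the called unit (Module B) with a canned value."""
    return 40.0                       # fixed price instead of real logic

def module_a(order_id, get_price=module_b_stub):
    """Module A, the unit under test; it calls Module B for a price."""
    return get_price(order_id) * 1.25   # add a 25% surcharge

def driver_for_module_b(real_module_b):
    """DRIVER: dummy caller that exercises Module B once B is ready."""
    return real_module_b("ORD-1")

total = module_a("ORD-1")   # Module A tested against the stub
```

Once the real Module B exists, it can be passed in place of the stub, and the driver becomes unnecessary once the real Module A exists.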

2. Integration Testing

In integration testing the individually tested units are grouped as one, and the interfaces between them are tested. Integration testing identifies the problems that occur when the individual units are combined, i.e. it detects problems in the interface between two units. Integration testing is done after unit testing.

There are mainly three approaches to do integration testing.

1. Top-down Approach

The top-down approach tests the integration from top to bottom, following the architectural structure.

Example: integration can start with the GUI, with missing components substituted by stubs as the integration goes on.

2. Bottom-up approach

In the bottom-up approach, testing takes place from the bottom of the control flow upwards; the higher-level components are substituted with drivers.

3. Big bang approach

In the big-bang approach, most or all of the developed modules are coupled together to form a complete system, which is then used for integration testing.
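Whatever the approach, the heart of integration testing is checking the interface between units that each pass their own unit tests. A tiny sketch (both functions are invented for the example):

```python
def parse_amount(text):
    """Unit 1: parse a decimal string into integer cents."""
    return int(round(float(text) * 100))

def format_amount(cents):
    """Unit 2: format integer cents back into a decimal string."""
    return f"{cents / 100:.2f}"

# Integration check: the output of unit 1 must be a valid input to unit 2,
# and a round trip through the interface must preserve the value.
assert format_amount(parse_amount("12.50")) == "12.50"
```

Each unit may pass its own tests while the pair still fails together (for example, if one side used cents and the other used whole currency units); that mismatch is exactly what integration testing is meant to catch.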

3. System Testing

Testing the behavior of the whole software/system as defined in the software requirements specification (SRS) is known as system testing. Its main focus is to verify that the customer requirements are fulfilled.


System testing is done after integration testing is complete. System testing should test functional and non functional requirements of the software.

The following types of testing should be considered during the system testing cycle. The test types followed in system testing differ from organization to organization; however, this list covers some of the main testing types that need to be covered in system testing.

1. Sanity Testing

2. Functional Testing

3. Usability Testing

4. Stress Testing

5. Load Testing

6. Performance Testing

7. Regression Testing

8. Maintenance Testing

9. Security Testing

10. Reliability Testing

11. Accessibility Testing

12. GUI Testing

Sanity Testing

1. When there are some minor issues with the software and a new build is obtained after fixing the issues, then instead of doing complete regression testing a sanity test is performed on that build. You can say that sanity testing is a subset of regression testing.

2. Sanity testing is done after thorough regression testing is over; it is done to make sure that any defect fixes or changes after regression testing do not break the core functionality of the product. It is done towards the end of the product release phase.

3. Sanity testing follows a narrow and deep approach, with detailed testing of some limited features.

4. Sanity testing is like doing some specialized testing which is used to find problems in particular functionality.

5. Sanity testing is done with the intent to verify that end-user requirements are met.

6. Sanity tests are mostly non-scripted.

Functional Testing

1. Functional testing is also known as component testing.

2. It tests the functioning of the system or software, i.e. what the software does. The functions of the software are described in the functional specification document or requirements specification document.

3. Functional testing considers the specified behavior of the software.

Usability Testing

Usability means the software's capability to be learned and understood easily, and how attractive it looks to the end user.

Usability Testing is a black box testing technique.

Usability Testing tests the following features of the software.

1. How easy it is to use the software.

2. How easy it is to learn the software.

3. How convenient the software is to the end user.

Stress Testing

Stress testing tests the software with a focus on checking that the software does not crash if the hardware resources (like memory, CPU or disk space) are not sufficient.

Stress testing puts the hardware resources under extensive levels of stress in order to verify that the software remains stable under such conditions.

In stress testing we load the software with a large number of concurrent users/processes which cannot be handled by the system's hardware resources.

Stress Testing is a type of performance testing and it is a non-functional testing.

Load Testing

Load testing tests the software or component with increasing load: the number of concurrent users or transactions is increased, and the behavior of the system is examined to check what load can be handled by the software.

The main objective of load testing is to determine the response time of the software for critical transactions and make sure that they are within the specified limit.

It is a type of performance testing.

Load Testing is non-functional testing.
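A real load test would drive the deployed system with a tool such as LoadRunner or JMeter, but the mechanics can be sketched in plain Python: increase the number of concurrent callers and measure how long each batch takes. The handle_request function is a stand-in for the system under test.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(_):
    time.sleep(0.01)                 # simulated server work per request
    return "ok"

def measure(concurrent_users):
    """Fire `concurrent_users` simultaneous requests; return the results
    and the total elapsed time for the batch."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        results = list(pool.map(handle_request, range(concurrent_users)))
    return results, time.perf_counter() - start

for users in (1, 10, 50):            # increasing load
    results, elapsed = measure(users)
    assert all(r == "ok" for r in results)
    print(f"{users:>3} concurrent users -> {elapsed:.3f}s")
```

Comparing the elapsed times against the specified response-time limit is what turns the measurement into a pass/fail load test.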


Performance Testing

Performance Testing is done to determine the software characteristics like response time, throughput or MIPS (Millions of instructions per second) at which the system/software operates.

Performance testing is done by generating activity on the system/software using the available performance test tools. The tools are used to create different user profiles and inject different kinds of activity on the server, replicating end-user environments.

The purpose of performance testing is to ensure that the software meets the specified performance criteria, and to figure out which part of the software is causing its performance to degrade.

Performance Testing Tools should have the following characteristics:

1. It should generate load on the system under test.

2. It should measure the server response time.

3. It should measure the throughput.

Performance Testing Tools

1. IBM Rational Performance Tester

It's a performance testing tool from IBM; it supports load testing against applications such as HTTP, SAP, Siebel, etc. It is supported on Windows and Linux.

2. LoadRunner

LoadRunner is HP's (formerly Mercury's) load/stress testing tool for web and other applications. It supports a wide variety of application environments, platforms and databases, and includes a large suite of network/app/server monitors to enable performance measurement of each tier/server/component and tracing of bottlenecks.

3. Apache JMeter

Apache JMeter is a Java desktop application from the Apache Software Foundation, designed to load-test functional behavior and measure performance. Originally designed for testing web applications, it has since expanded to other test functions; it may be used to test performance of both static and dynamic resources (files, servlets, Perl scripts, Java objects, databases and queries, FTP servers and more). It can simulate a heavy load on a server, network or object to test its strength or to analyze overall performance under different load types, and it can produce a graphical analysis of performance or test server/script/object behavior under heavy concurrent load.

4. DBUnit


DBUnit is an open-source JUnit extension (also usable with Ant) targeted at database-driven projects that, among other things, puts a database into a known state between test runs. This avoids problems that can occur when one test case corrupts the database and causes subsequent tests to fail or exacerbates the damage. It can export and import database data to and from XML datasets, can work with very large datasets when used in streaming mode, and can help verify that database data matches expected sets of values.

Regression Testing

Regression testing is done to find defects that arise due to code changes made to existing code, such as functional enhancements or configuration changes.

The main intent behind regression testing is to ensure that any code changes made for software enhancements or configuration changes have not introduced any new defects into the software.

Anytime the changes are made to the existing working code, a suite of test cases is executed to ensure that the new changes have not introduced any bugs in the software.

It is necessary to have a regression test suite and execute that suite after every new version of software is available.

The regression test suite is an ideal candidate for automation, because it needs to be executed after every new version.

Maintenance Testing

Maintenance Testing is done on the already deployed software. The deployed software needs to be enhanced, changed or migrated to other hardware. The Testing done during this enhancement, change and migration cycle is known as maintenance testing.

Once the software is deployed in an operational environment, it needs maintenance from time to time in order to avoid system breakdowns; most banking software systems need to be operational 24x7x365. So it is very necessary to do maintenance testing of software applications.

In maintenance testing, the tester should consider two parts:

1. Any changes made to the software should be tested thoroughly.

2. The changes made to the software should not affect the existing functionality, so regression testing is also done.

Why is Maintenance Testing required

1. Users may need some new features in the existing software, which requires modifications to the existing software, and these modifications need to be tested.


2. The end user might want to migrate the software to another, newer hardware platform, or change the environment (such as the OS version or database version), which requires testing the whole application on the new platform and environment.

Security Testing

Security Testing tests the ability of the system/software to prevent unauthorized access to the resources and data.

As per Wikipedia, security testing needs to cover six basic security concepts: confidentiality, integrity, authentication, authorization, availability and non-repudiation.

Confidentiality

A security measure which protects against the disclosure of information to parties other than the intended recipient (which is by no means the only way of ensuring the security of the information).

Integrity

A measure intended to allow the receiver to determine that the information it receives is correct.

Integrity schemes often use some of the same underlying technologies as confidentiality schemes, but they usually involve adding additional information to a communication to form the basis of an algorithmic check, rather than encoding all of the communication.

Authentication

The process of establishing the identity of the user.

Authentication can take many forms including but not limited to: passwords, biometrics, radio frequency identification, etc.

Authorization

The process of determining that a requester is allowed to receive a service or perform an operation.

Access control is an example of authorization.
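The difference between authentication and authorization can be sketched as below. The user table, roles and plain SHA-256 password hashing are illustrative only; a real system would use salted, slow password hashing such as bcrypt.

```python
import hashlib

# Illustrative stores: hashed password per user, allowed actions per user.
USERS = {"alice": hashlib.sha256(b"s3cret").hexdigest()}
ROLES = {"alice": {"read", "write"}}

def authenticate(user, password):
    """Authentication: establish *who* the user is."""
    stored = USERS.get(user)
    return stored == hashlib.sha256(password.encode()).hexdigest()

def authorize(user, action):
    """Authorization (access control): decide *what* the user may do."""
    return action in ROLES.get(user, set())

allowed = authenticate("alice", "s3cret") and authorize("alice", "write")
```

Security test cases target both halves: wrong passwords must fail authentication, and authenticated users must still be denied actions outside their role.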

Availability

Assuring information and communications services will be ready for use when expected.

Information must be kept available to authorized persons when they need it.


Non-repudiation

A measure intended to prevent the later denial that an action happened or that a communication took place.

In communication terms this often involves the interchange of authentication information combined with some form of provable time stamp.

4. User Acceptance Testing

Acceptance testing is performed after system testing is done and all or most of the major defects have been fixed. The goal of acceptance testing is to establish confidence that the delivered software/system meets the end user's/customer's requirements and is fit for use. Acceptance testing is done by the user/customer and some of the project stakeholders.

Acceptance testing is done in a production-like environment.

For commercial off-the-shelf (COTS) software that is meant for the mass market, testing needs to be done by potential users. There are two types of acceptance testing for COTS software:

Alpha Testing

Alpha testing is mostly applicable to software developed for the mass market, i.e. commercial off-the-shelf (COTS) software, where feedback is needed from potential users. Alpha testing is conducted at the developer's site; potential users and members of the developer's organisation are invited to use the system and report defects.

Beta Testing

Beta testing is also known as field testing. It is done by potential or existing users/customers at an external site, without the developers' involvement. This test is done to determine that the software satisfies the end users'/customers' needs, and to acquire feedback from the market.

What is Verification

Verification represents static testing techniques.

Verification ensures that the software documents comply with the organisation's standards; it is a static analysis technique.

Verification answers the question: "Is the software built according to the specifications?"

Important verification techniques are as follows:

1. Feasibility reviews

2. Requirements reviews

3. Technical reviews

4. Walk through

5. Inspections

6. Formal reviews

7. Informal reviews

8. Peer reviews

9. Static Code Analysis

Verification vs Validation

Verification:

1. Verification represents static testing techniques.

2. Verification ensures that the software documents comply with the organisation's standards; it is a static analysis technique.

3. Verification answers the question: "Is the software built according to the specifications?"

Validation:

1. Validation represents dynamic testing techniques.

2. Validation ensures that the software operates as planned in the requirements phase by executing it, running predefined test cases and comparing the output against the expected results.

3. Validation answers the question: "Did we build the software fit for purpose, and does it provide a solution to the problem?"

What is Quality Control

1. Quality Control is concerned with the software product being developed. It measures and controls the quality of the software as it is being developed.

2. The quality control system provides routine checks to ensure that the software is being developed correctly, without errors.

3. The quality control system identifies and addresses product errors/defects.

4. Quality Control ensures that the final product is error-free and satisfactory.

5. Quality Control (QC) can also be referred to as the testing activity.

What is Quality Assurance

1. Quality Assurance (QA) is a process-driven approach; it is a process to monitor and improve existing quality processes.

2. It is a process of verifying whether the software product or services meet or exceed customer expectations.

3. It ensures that the product or services are developed or implemented to agreed standards.

4. Quality assurance ensures that the processes designed for product development and services are effective enough to meet the objectives.

5. It prevents software defects/errors.

Quality Assurance vs Quality Control

1. Quality Assurance is a process-driven approach: it monitors and improves existing quality processes. Quality Control is concerned with the product: it measures and controls the quality of the software as it is being developed.

2. Quality Assurance ensures that the processes designed for product development and services are effective enough to meet the objectives. Quality Control ensures that the final product is error free and satisfactory.

3. Quality Assurance focuses on defect prevention. Quality Control finds product defects.

What is a Bug or Defect

While executing test cases, a tester may observe that the actual test results do not match the expected results. This variation between expected and actual results is known as a defect. Different organisations use different names for this variation; defects are also commonly known as bugs, problems, incidents or issues.

Not every incident that occurs during testing is a defect or bug. An incident is any situation in which the software system exhibits questionable behavior, but we call the incident a defect or bug only if the root cause is a problem in the tested component.

Incidents can also be caused by other factors, such as a tester's mistake in the test setup, an environment error, or invalid expected results.

We log these incidents to keep a record of what was observed during testing, so that we can find a solution to correct it.

Defect Life Cycle

A defect report moves through a series of clearly identified states, as described below.


1. A defect is in the Open state when the tester finds a variation in the test results during testing; a peer tester reviews the defect report and the defect is opened.

2. The project team then decides whether to fix the defect in the current release or to postpone it to a future release. If the defect is to be fixed, a developer is assigned and the defect moves to the Assigned state.

3. If the defect is to be fixed in a later release, it is moved to the Deferred state.

4. Once the defect is assigned, the developer fixes it and moves it to the Fixed state; the defect tracking tool then e-mails the tester who reported the defect to verify the fix.

5. The tester verifies the fix and closes the defect; the defect moves to the Closed state.

6. If the fix does not solve the issue reported by the tester, the tester re-opens the defect and it moves to the Re-opened state. It is then approved for re-repair and assigned to a developer again.

7. If the project team defers the defect, it moves to the Deferred state until the team decides when to fix it. It is re-opened in a later development cycle, moves to the Re-opened state, and is then assigned to a developer to fix it.
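The states and transitions above can be sketched as a small state machine. This is only an illustration of the life cycle described in the text; the dictionary-based API is hypothetical, not part of any real defect tracking tool.

```python
# Allowed defect-state transitions, taken from the life cycle above.
ALLOWED_TRANSITIONS = {
    "Open": {"Assigned", "Deferred"},
    "Assigned": {"Fixed"},
    "Fixed": {"Closed", "Re-opened"},
    "Re-opened": {"Assigned"},
    "Deferred": {"Re-opened"},
    "Closed": set(),
}

def move(current_state, new_state):
    """Return the new state if the transition is allowed, else raise."""
    if new_state not in ALLOWED_TRANSITIONS[current_state]:
        raise ValueError(f"illegal transition {current_state} -> {new_state}")
    return new_state

# Happy path: Open -> Assigned -> Fixed -> Closed
state = "Open"
for nxt in ("Assigned", "Fixed", "Closed"):
    state = move(state, nxt)
print(state)  # Closed
```

Encoding the transitions explicitly makes it easy to reject illegal moves, for example closing a defect that was never fixed.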


Smoke Testing

Smoke testing is done to verify that the software is stable enough for further testing. It consists of a collection of written tests that are performed on the software before it is accepted for further testing. Smoke testing "touches" all areas of the application without going too deep; the tester looks for answers to basic questions like "Does the application window open?" and "Can the tester launch the software?".

The purpose is to determine whether the application is stable enough for more detailed testing to be performed. The test cases can be executed manually or with an automated tool.

A subset of planned test cases is chosen that covers the main functionality of the software without bothering with finer component details. A daily build and smoke test is among industry best practices. Smoke testing is done by testers before accepting a build for further testing.

In software engineering, a smoke test generally consists of a collection of tests that can be applied to a newly created or repaired computer program. Sometimes the tests are performed by the automated system that builds the final software. In this sense, a smoke test is the process of validating code changes before they are checked into the official production source code collection or the main branch.
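A smoke-test suite of the shallow kind described above might look like this. The application object here is a stand-in invented for the sketch; a real suite would drive the actual build.

```python
class FakeApp:
    """Stand-in for the application under test (hypothetical)."""
    def launch(self):
        return True
    def open_main_window(self):
        return "main-window"
    def shutdown(self):
        return True

def run_smoke_tests(app):
    """Run shallow checks that touch each area; pass only if all succeed."""
    checks = [
        ("application launches", lambda: app.launch() is True),
        ("main window opens", lambda: app.open_main_window() == "main-window"),
        ("application shuts down", lambda: app.shutdown() is True),
    ]
    return all(check() for _, check in checks)

print(run_smoke_tests(FakeApp()))  # True
```

If any basic check fails, the build is rejected before any deeper testing effort is spent on it.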

Confirmation Testing or Re-testing

Confirmation testing is also known as re-testing.

Confirmation testing is done to make sure that the test cases which failed in the last execution pass after the defects behind those failures have been fixed.

For Example:

1. Suppose you were testing some software application and you found defects in some component.

2. You log a defect in bug tracking tool.

3. Developer will fix that defect and provide you with the official testable build.

4. You need to re-run the failed test cases to make sure that the previous failures are gone.

This is known as confirmation testing or re-testing.


Non Functional Testing

Non-functional testing tests characteristics of the software such as how fast it responds or how long it takes to perform an operation.

Some examples of Non-Functional Testing are:

1. Performance Testing
2. Load Testing

3. Stress Testing

4. Usability Testing

5. Reliability Testing

Non-functional testing focuses on the software's performance, i.e. how well it works.

Integration Testing

In integration testing the individual tested units are grouped as one and the interface between them is tested. Integration testing identifies the problems that occur when the individual units are combined i.e it detects the problem in interface of the two units. Integration testing is done after unit testing.

There are mainly three approaches to integration testing:

1. Top-down approach: Integration is tested from top to bottom, following the architectural structure. For example, integration can start with the GUI, missing components are substituted by stubs, and integration proceeds downward.

2. Bottom-up approach: Testing starts from the bottom of the control flow; higher-level components are substituted with drivers.

3. Big-bang approach: Most or all of the developed modules are coupled together to form a complete system, which is then used for integration testing.
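The role of a stub in the top-down approach can be sketched as follows. The component names are illustrative only: the high-level unit is real, while an unfinished lower-level pricing component is replaced by a stub that returns canned data.

```python
def price_service_stub(item_id):
    """Stub standing in for an unfinished lower-level pricing component."""
    return {"A1": 10.0, "B2": 25.0}.get(item_id, 0.0)

def order_total(item_ids, price_lookup):
    """High-level unit under test; integrates with the pricing component."""
    return sum(price_lookup(i) for i in item_ids)

# Top-down integration test: exercise the real top-level unit
# against the stubbed lower-level component.
print(order_total(["A1", "B2"], price_service_stub))  # 35.0
```

In the bottom-up approach the situation is reversed: the lower-level unit is real and a driver plays the role of the missing higher-level caller.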


Globalization Testing

In the current global marketplace, it is very important to make software products that are sensitive to the different locations and cultural expectations of users around the world. Most non-English-speaking customers have an operating system in their native language, and they expect that computer programs will not fail on their computers; they also want the software to be available in their native language. Software companies that ensure their products are easily acceptable in different regions and cultures will gain more market share than companies that do not focus on globalization.

Globalization is the term used to describe the process of producing software that can run independently of its geographical and cultural environment. Localization is the term used to describe the process of customizing the globalized software for a specific environment. For simplicity, the term "globalization" will be used to describe both concepts, for in the broadest sense of the term, software is not truly globalized unless it is localized as well.

There are many aspects that must be considered when producing globalized software. Some of them are as follows:

1. Sensitivity to the English vocabulary
2. Date and time formatting

3. Currency handling

4. Paper sizes for printing

5. Address and telephone number formatting
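Two of the aspects listed above, date formatting and currency handling, can be illustrated with the standard library alone. The US and European conventions shown are common examples; real globalized code would draw such formats from locale data rather than hard-code them.

```python
from datetime import date

d = date(2011, 3, 24)
us_style = d.strftime("%m/%d/%Y")   # month-first convention (US)
eu_style = d.strftime("%d/%m/%Y")   # day-first convention (much of Europe)
print(us_style, eu_style)  # 03/24/2011 24/03/2011

def format_currency(amount, symbol, decimal_sep, thousands_sep):
    """Naive currency formatter; real code would use locale data."""
    whole, frac = f"{amount:,.2f}".split(".")
    whole = whole.replace(",", thousands_sep)
    return f"{symbol}{whole}{decimal_sep}{frac}"

print(format_currency(1234.5, "$", ".", ","))   # $1,234.50
print(format_currency(1234.5, "€", ",", "."))   # €1.234,50
```

The same numeric amount and the same date render quite differently per region, which is exactly what globalization testing must verify.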

Static Testing

Static testing is the form of software testing in which you do not execute the code being examined. This technique could be called a non-execution technique. It primarily consists of syntax checking of the code, or manually reviewing the code, requirements documents, design documents etc. to find errors.

From the black box testing point of view, static testing involves reviewing requirements and specifications. This is done with an eye toward completeness or appropriateness for the task at hand. This is the verification portion of Verification and Validation.

The fundamental objective of static testing technique is to improve the quality of the software products by finding errors in early stages of software development life cycle.

Following are the main Static Testing techniques used:

1. Informal reviews
2. Walkthroughs
3. Technical reviews
4. Inspections
5. Static code analysis

Dynamic Testing

Dynamic testing tests the software by executing it. Dynamic testing is also known as dynamic analysis; this technique is used to test the dynamic behavior of the code. In dynamic testing the software must be compiled and executed, and variable quantities such as memory usage, CPU usage, response time and the overall performance of the software are analysed.

Dynamic testing involves working with the software, input values are given and output values are checked with the expected output. Dynamic testing is the Validation part of Verification and Validation. Some of the Dynamic Testing Techniques are given below:

1. Unit Testing
2. Integration Testing

3. System Testing

4. Acceptance Testing

Beta Testing

1. Beta testing is done after alpha testing.

2. Testing done by potential or existing users, customers and end users at an external site, without developer involvement, is known as beta testing.

3. It is operational testing, i.e. it tests whether the software satisfies the business or operational needs of customers and end users.

4. Beta testing is done for external acceptance testing of COTS (Commercial Off-the-Shelf) software.

5. It is done to acquire feedback from the mass market, for example the beta testing of Gmail.

Exploratory Testing

As the name suggests, exploratory testing is about exploring the software and finding out more about it.

In exploratory testing the tester focuses on how the software actually works. Testers do minimal planning and maximum execution, through which they gain an in-depth idea of the software's functionality; once the tester starts gaining insight into the software, he can decide what to test next.


As per Cem Kaner exploratory testing is "a style of software testing that emphasizes the personal freedom and responsibility of the individual tester to continually optimize the quality of his/her work by treating test-related learning, test design, test execution, and test result interpretation as mutually supportive activities that run in parallel throughout the project."

Exploratory Testing is mostly performed by skilled testers.

Exploratory testing is mostly used when the requirements are incomplete and the time to release the software is short.

Install Testing

Install Testing is done to ensure that the software and its components get installed successfully and function properly post installation.

While doing Installation testing, test engineer should keep in mind the following points:

1. The product installer should check for the prerequisites needed by the software.

2. The product installer should give the user the default install location of the software.

3. The installer should allow the user to change the install location.

4. Installation over the network should be supported.

5. Try installing the software without administrative privileges.

6. Installation should be successful on all supported platforms.

7. The installer should give the option to repair or uninstall.

8. Uninstallation should complete successfully, all installed files should be cleaned up from the install location, and registry entries should be removed.

9. Silent installation should be successful.

10. Native installation should be successful.


Silent Installation

A silent installation does not send messages to the console; silent installation testing verifies that messages and errors are stored properly in log files. Response files are used for data input during silent installation.

Native Installation

Native installation installs the software application using the OS installation utilities. Native installation testing verifies that the installation of native packages (i.e. rpm files, Solaris pkg files, AIX installp files) on Linux and Unix platforms is successful.

Interactive Installation

Interactive installation is the GUI installation of a software application: the user sees an installation screen and provides the installation parameters.

Interoperability Testing

Interoperability means the capability of the software to interact with other systems, software or software components.

Interoperability testing means testing the software to check whether it can inter-operate with other software components, applications or systems.

As per the IEEE Glossary, interoperability is: "The ability of two or more systems or components to exchange information and to use the information that has been exchanged."

Error guessing

This is a test design technique in which the experience of a tester is used to find the components of the software where defects might be present.

It is mostly done by experienced testers who can use their past experience to find defects in software.

Error guessing has no rules for testing; it relies only on the tester's previous skills.

In error guessing, testers think of situations in which the software might fail. For example:

1. Division by zero.

2. Pressing the submit button on a form without filling in any entries.

3. Entering wrong data in the fields and checking the software's behavior.


Alpha Testing

Alpha testing is done to ensure confidence in the product, or for internal acceptance testing. Alpha testing is done at the developer's site by an independent test team, potential end users and stakeholders.

Alpha Testing is mostly done for COTS(Commercial Off the Shelf) software to ensure internal acceptance before moving the software for beta testing.

WhiteBox Testing

1. White box testing tests the structure of the software or a software component. It checks what is going on inside the software.

2. Also known as clear box testing, glass box testing or structural testing.

3. Requires knowledge of the internal code structure and good programming skills.

4. It tests paths within a unit and also the flow between units during integration.

WhiteBox Test Design Techniques

Typically Whitebox Test Design Techniques include:

1. Line Coverage or Statement Coverage
2. Decision Coverage

3. Condition Coverage

4. Multiple Condition Decision Coverage

5. Multiple Condition Coverage

1) Statement Coverage or Line Coverage

Statement coverage is also known as line coverage. The formula to calculate statement coverage is:

Statement Coverage=(Number of statements exercised/Total number of statements)*100

Studies in the software industry have shown that black-box testing may actually achieve only 60% to 75% statement coverage, leaving around 25% to 40% of the statements untested.

To illustrate the principles of code coverage, let us take some pseudo-code that is not specific to any programming language. The code lines are numbered only to illustrate the statement coverage example.

1. READ X
2. READ Y
3. IF X > Y
4.     PRINT "X is greater than Y"
5. ENDIF

Let us see how we can achieve 100% statement coverage for this pseudo-code: one test set in which variable X is greater than variable Y is enough.

TEST SET 1: X=10, Y=5

A statement may occupy a single line or be spread over several lines, and a single line can contain more than one statement. Some code coverage tools group statements that are always executed together into a block and treat them as one statement.
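Applying the statement-coverage formula above to the pseudo-code: with 5 statements in total, TEST SET 1 (X=10, Y=5) exercises all 5, giving 100%. The helper below simply encodes the formula from the text; the exercised-statement count is worked out by hand for this tiny example.

```python
def statement_coverage(exercised, total):
    """Statement Coverage = (statements exercised / total statements) * 100"""
    return exercised / total * 100

# TEST SET 1 (X=10, Y=5) runs READ X, READ Y, IF, PRINT and ENDIF:
X, Y = 10, 5
exercised = 5 if X > Y else 4   # the PRINT line runs only when X > Y
print(statement_coverage(exercised, 5))  # 100.0
```

With X=2, Y=10 the PRINT line would be skipped and coverage would drop to 80%, which is why a test set with X greater than Y is needed here.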

2) Decision Coverage or Branch Coverage

Decision Coverage is also known as Branch Coverage.

Whenever there are two or more possible exits from a statement, as with an IF statement, a DO-WHILE loop or a CASE statement, it is known as a decision; for an IF or loop condition there are two outcomes, TRUE or FALSE.

With the loop control statement like DO-WHILE or IF statement the outcome is either TRUE or FALSE and decision coverage ensures that each outcome(i.e TRUE and FALSE) of control statement has been executed at least once.

Alternatively you can say that control statement IF has been evaluated both to TRUE and FALSE.

The formula to calculate decision coverage is:

Decision Coverage=(Number of decision outcomes executed/Total number of decision outcomes)*100%

Research in industry has shown that even when thorough functional testing has been done, it achieves only 40% to 60% decision coverage.

Decision coverage is stronger than statement coverage, and it requires more test cases to achieve 100% decision coverage.

Let us take one example to explain decision coverage:

1. READ X
2. READ Y
3. IF X > Y
4.     PRINT "X is greater than Y"
5. ENDIF

To get 100% statement coverage only one test case is sufficient for this pseudo-code.

TEST CASE 1: X=10 Y=5

However this test case won't give you 100% decision coverage as the FALSE condition of the IF statement is not exercised.

In order to achieve 100% decision coverage we need to exercise the FALSE condition of the IF statement which will be covered when X is less than Y.

So the final TEST SET for 100% decision coverage will be:

TEST CASE 1: X=10, Y=5

TEST CASE 2: X=2, Y=10

Note: 100% decision coverage guarantees 100% statement coverage but 100% statement coverage does not guarantee 100% decision coverage.
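The two test cases above can be checked mechanically. The function below mirrors the pseudo-code's IF statement and records which decision outcomes have been exercised; the percentage is the decision-coverage formula from the text.

```python
def branch(x, y):
    """Mirror of the pseudo-code; returns which decision outcome ran."""
    if x > y:
        return "TRUE"   # PRINT "X is greater than Y"
    return "FALSE"

outcomes = {branch(10, 5)}        # TEST CASE 1 only
print(len(outcomes) / 2 * 100)    # 50.0: one of two outcomes exercised

outcomes.add(branch(2, 10))       # add TEST CASE 2
print(len(outcomes) / 2 * 100)    # 100.0: both outcomes exercised
```

This makes the note concrete: TEST CASE 1 alone executes every statement yet reaches only 50% decision coverage.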

3) Condition Coverage or Predicate Coverage

Condition coverage is also known as predicate coverage.

Condition coverage applies to Boolean expressions; it ensures that every Boolean sub-expression has been evaluated to both TRUE and FALSE.

Let us take an example to explain Condition Coverage

IF ("X && Y")

To achieve condition coverage for this pseudo-code, the following tests are sufficient.

TEST 1: X=TRUE, Y=FALSE

TEST 2: X=FALSE, Y=TRUE

Note: 100% condition coverage does not guarantee 100% decision coverage.


4) Multiple Condition Decision Coverage

Multiple Condition Decision Coverage (MCDC) is also known as Modified Condition/Decision Coverage.

In MCDC each condition must be evaluated at least once and be shown to affect the decision outcome independently.

Example for MCDC

if {(X or Y) and Z} then

To satisfy condition coverage, each Boolean condition X, Y and Z in the above statement should be evaluated to TRUE and FALSE at least once.

The TEST CASES for condition coverage will be:

TEST CASE 1: X=TRUE, Y=TRUE, Z=TRUE
TEST CASE 2: X=FALSE, Y=FALSE, Z=FALSE

To satisfy the decision coverage we need to ensure that the IF statement evaluates to TRUE and FALSE at least once. So the test set will be:

TEST CASE 1: X=TRUE, Y=TRUE, Z=TRUE
TEST CASE 2: X=FALSE, Y=FALSE, Z=FALSE

However, for MCDC the above test cases are not sufficient, because in MCDC each Boolean variable must be evaluated to TRUE and FALSE at least once and must also be shown to affect the decision outcome independently.

So to ensure MCDC we need four further test cases.

TEST CASE 3: X=FALSE, Y=FALSE, Z=TRUE
TEST CASE 4: X=FALSE, Y=TRUE, Z=TRUE
TEST CASE 5: X=FALSE, Y=TRUE, Z=FALSE
TEST CASE 6: X=TRUE, Y=FALSE, Z=TRUE

In test case 3 the decision outcome is FALSE.
In test case 4 the decision outcome is TRUE.
In test case 5 the decision outcome is FALSE.
In test case 6 the decision outcome is TRUE.

So in the above test cases you can see that the change in the value of Boolean variables made a change in decision outcomes.
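The MCDC property of these test cases can be verified directly: for each variable there is a pair of test cases that differ only in that variable and flip the decision outcome. This sketch just re-checks the test cases from the text against the decision (X or Y) and Z.

```python
def decision(x, y, z):
    """The decision from the example: (X or Y) and Z."""
    return (x or y) and z

cases = {
    3: (False, False, True),   # outcome FALSE
    4: (False, True,  True),   # outcome TRUE
    5: (False, True,  False),  # outcome FALSE
    6: (True,  False, True),   # outcome TRUE
}

# Pairs differing in exactly one variable, flipping the outcome:
print(decision(*cases[3]) != decision(*cases[4]))  # Y affects it independently
print(decision(*cases[4]) != decision(*cases[5]))  # Z affects it independently
print(decision(*cases[3]) != decision(*cases[6]))  # X affects it independently
```

Each `True` printed confirms that toggling a single variable changed the decision outcome, which is exactly what MCDC demands.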


5) Multiple Condition Coverage

Multiple Condition Coverage is also known as Condition Combination Coverage.

In Multiple Condition Coverage for each decision all the combinations of conditions should be evaluated.

Let's take an example:

if (A || B) then
    print C

Here we have 2 Boolean expressions A and B, so the test set for Multiple Condition Coverage will be:

TEST CASE 1: A=TRUE, B=TRUE
TEST CASE 2: A=TRUE, B=FALSE
TEST CASE 3: A=FALSE, B=TRUE
TEST CASE 4: A=FALSE, B=FALSE

As you can see, there are 4 test cases for 2 conditions; similarly, there would be 8 test cases for 3 conditions.

So you can say that for n conditions there will be 2^n test cases.
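The 2^n combinations can be generated mechanically rather than written by hand; here is a sketch for the two conditions A and B from the example above.

```python
from itertools import product

conditions = ["A", "B"]
# Every TRUE/FALSE combination of the conditions: 2^n test cases.
test_cases = list(product([True, False], repeat=len(conditions)))
for case in test_cases:
    print(dict(zip(conditions, case)))

print(len(test_cases))  # 4, i.e. 2^2; three conditions would give 8
```

The exponential growth is the practical drawback of multiple condition coverage: ten conditions in a single decision would already require 1024 test cases.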

Black Box Testing: Types and techniques of BBT

I covered white box testing in a previous article. Here I will concentrate on black box testing (BBT): its advantages, disadvantages, and how it is performed, i.e. the black box testing techniques.

Black box testing treats the system as a "black box", so it does not explicitly use knowledge of the internal structure or code. In other words, the test engineer need not know the internal workings of the "black box" or application.

Main focus in black box testing is on functionality of the system as a whole. The term ‘behavioral testing’ is also used for black box testing and white box testing is also sometimes called ‘structural testing’. Behavioral test design is slightly different from black-box test design because the use of internal knowledge isn’t strictly forbidden, but it’s still discouraged.

Each testing method has its own advantages and disadvantages. There are some bugs that cannot be found using only black box or only white box testing. The majority of applications are tested by the black box testing method. We need to cover the majority of test cases so that most of the bugs get discovered by black box testing.


Black box testing occurs throughout the software development and testing life cycle, i.e. in the unit, integration, system, acceptance and regression testing stages.

Tools used for black box testing:
Black box testing tools are mainly record-and-playback tools. These tools are used for regression testing, to check whether a new build has introduced bugs into previously working application functionality. Record-and-playback tools record test cases in the form of scripts such as TSL, VBScript, JavaScript or Perl.

Advantages of Black Box Testing
- Tester can be non-technical.
- Used to verify contradictions between the actual system and the specifications.
- Test cases can be designed as soon as the functional specifications are complete.

Disadvantages of Black Box Testing
- The test inputs need to be drawn from a large sample space.
- It is difficult to identify all possible inputs in limited testing time, so writing test cases is slow and difficult.
- There is a chance of leaving some paths unidentified during testing.

Methods of Black box Testing:

Graph-Based Testing Methods:
Every application is built up of some objects. All such objects are identified and a graph is prepared. From this object graph each object relationship is identified, and test cases are written accordingly to discover the errors.

Error Guessing:
This is purely based on the previous experience and judgment of the tester. Error guessing is the art of guessing where errors may be hidden. There are no specific tools for this technique; the tester writes test cases that cover the application paths most likely to fail.

Boundary Value Analysis:
Many systems have a tendency to fail at boundaries, so testing the boundary values of an application is important. Boundary Value Analysis (BVA) is a functional testing technique in which the extreme boundary values are chosen. Boundary values include maximum, minimum, just inside/outside boundaries, typical values, and error values.

BVA:
- Extends equivalence partitioning.
- Tests both sides of each boundary.
- Looks at output boundaries for test cases too.
- Tests min, min-1, max, max+1, and typical values.

BVA techniques:

1. Number of variables: for n variables, BVA yields 4n + 1 test cases.
2. Kinds of ranges: generalizing ranges depends on the nature or type of the variables.

Advantages of Boundary Value Analysis
1. Robustness testing: boundary value analysis plus values that go beyond the limits.
2. Tests Min-1, Min, Min+1, Nominal, Max-1, Max, Max+1.
3. Forces attention to exception handling.

Limitations of Boundary Value Analysis
Boundary value testing is efficient only for variables with fixed boundary values.
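For a single integer range, the classic boundary values named above can be generated mechanically. This sketch includes the out-of-range values min-1 and max+1, i.e. the robustness-testing variant; the age range is an invented example.

```python
def bva_values(min_val, max_val):
    """Boundary values for [min_val, max_val]: min-1, min, min+1,
    a nominal mid-range value, max-1, max, max+1."""
    nominal = (min_val + max_val) // 2
    return sorted({min_val - 1, min_val, min_val + 1,
                   nominal, max_val - 1, max_val, max_val + 1})

# Example: a field that accepts ages 18 to 60
print(bva_values(18, 60))  # [17, 18, 19, 39, 59, 60, 61]
```

The out-of-range values 17 and 61 should be rejected by the application; the rest should be accepted. Using a set removes duplicates when the range is very narrow.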

Equivalence Partitioning:
Equivalence partitioning is a black box testing method that divides the input domain of a program into classes of data from which test cases can be derived.

How is this partitioning performed while testing:
1. If an input condition specifies a range, one valid and two invalid classes are defined.
2. If an input condition requires a specific value, one valid and two invalid equivalence classes are defined.
3. If an input condition specifies a member of a set, one valid and one invalid equivalence class are defined.
4. If an input condition is Boolean, one valid and one invalid class are defined.
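Rule 1 above can be sketched directly: an input condition that specifies a range yields one valid class (inside the range) and two invalid classes (below and above it). The range 10 to 50 is an invented example.

```python
def partition(value, low, high):
    """Classify a value against a range input condition (rule 1 above)."""
    if value < low:
        return "invalid-low"
    if value > high:
        return "invalid-high"
    return "valid"

# One representative test value per equivalence class is enough:
for v in (5, 30, 99):
    print(v, partition(v, low=10, high=50))
# 5 invalid-low / 30 valid / 99 invalid-high
```

Any two values in the same class are assumed to be treated alike by the program, so three test values cover what exhaustive testing of the whole range would.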

Comparison Testing:
In this method, different independent versions of the same software are compared to each other for testing.

Reference - http://www.softrel.org/stgb.html