8/12/2019 STQA Lab Writeup2013
KJSCE/IT/BE/SEMVII/STQA/2013-14
K.J.SOMAIYA COLLEGE OF ENGINEERING
VIDYAVIHAR, MUMBAI 400 077
Department of Information Technology
Subject: Software Testing And Quality Assurance
Term: ODD (2013) Class / SEM: VII B.E. (IT)
List of Experiments
1. Study of tools and techniques used in various phases of SDLC.
   Outcomes achieved:
   1. An ability to apply knowledge of mathematics, science, and engineering. (a)
   2. A recognition of the need for, and an ability to engage in, life-long learning. (i)
   3. An ability to use the techniques, skills, and modern engineering tools necessary for engineering practice. (k)
2. Use of IEEE-829 format for developing a test plan for an educational institute application designed for an online admission system.
   Outcomes achieved:
   1. An ability to design a system, component, or process to meet desired needs within realistic constraints. (c)
   2. An ability to adopt open source standards. (l)
   3. An understanding of best practices, standards and their applications. (m)
3. Writing a Unit Test Plan using a standard template for testing a client-server program using UDP as a transport protocol.
   Outcomes achieved:
   1. An ability to design a system, component, or process to meet desired needs within realistic constraints. (c)
   2. An ability to adopt open source standards. (l)
   3. An understanding of best practices, standards and their applications. (m)
4. White Box Testing using control flow: designing test cases using a CFG (Control Flow Graph).
   Outcomes achieved:
   1. An ability to design a system, component, or process to meet desired needs within realistic constraints. (c)
   2. An ability to identify and formulate engineering problems. (e)
5. White Box Testing using data flow: designing test cases using a DFG (Data Flow Graph).
   Outcomes achieved:
   1. An ability to design a system, component, or process to meet desired needs within realistic constraints. (c)
   2. An ability to identify and formulate engineering problems. (e)
6. Black Box Testing: study of HP QTP (QuickTest Professional) V10.0 automation of Functional & Regression Testing.
   Outcomes achieved:
   1. An ability to use the techniques, skills, and modern engineering tools necessary for engineering practice. (k)
   2. An understanding of best practices, standards and their applications. (m)
7. Automated Performance Testing: study of HP LoadRunner.
   Outcomes achieved:
   1. An ability to use the techniques, skills, and modern engineering tools necessary for engineering practice. (k)
   2. An understanding of best practices, standards and their applications. (m)
8. Write the acceptance criteria for the current software project (BE Project) you are working on.
   Outcomes achieved:
   1. An ability to apply knowledge of mathematics, science, and engineering. (a)
   2. An ability to identify and formulate engineering problems. (e)
   3. An understanding of best practices, standards and their applications. (m)
9. Study of Software Quality Standard: ISO 9000:2000 Fundamentals and Requirements.
   Outcomes achieved:
   1. An ability to apply knowledge of mathematics, science, and engineering. (a)
   2. A recognition of the need for, and an ability to engage in, life-long learning. (i)
   3. An understanding of best practices, standards and their applications. (m)
10. Exploring WinRunner.
   Outcomes achieved:
   1. An ability to use the techniques, skills, and modern engineering tools necessary for engineering practice. (k)
   2. An ability to adopt open source standards. (l)
   3. An understanding of best practices, standards and their applications. (m)
Text Books:
1. Software Testing and Quality Assurance: Theory and Practice, Sagar Naik (University of Waterloo) and Piyu Tripathy, Wiley, 2008.
References:
1. Effective Methods for Software Testing, William Perry, Wiley.
Subject In-charge
Experiment / assignment / tutorial No._______
Title: Revision of testing tools and techniques
used in software development life cycle.
Batch: Roll No.: Experiment / assignment / tutorial No.:
Title: Revision of testing tools and techniques used in software development lifecycle.
__________________________________________________________________________
Objective:
After completing this experiment you will be able to:
1. Understand basic tools and techniques commonly used in testing software in various phases.
__________________________________________________________________________
Resources needed: Internet, LibreOffice
__________________________________________________________________________
Theory
There are basically two types of software testing tools:
1. Manual tools: These are used in the early phases of the Software Development Life Cycle (SDLC). Manual testing requires a tester to play the role of an end user and use most of the features of the application to ensure correct behavior. To ensure completeness of testing, the tester often follows a written test plan.
2. Automatic tools: These are used in the later phases of the SDLC. Test automation is the technique of testing software using a test program rather than people. A test program is written that executes the software and identifies its defects. These test programs may be written from scratch, written using a general test automation framework, or purchased from a third party vendor. Test automation can be used to automate time consuming tasks.
The SDLC has six phases:
1. Requirement Gathering phase
2. Design phase
3. Coding phase
4. Testing phase
5. Deployment phase
6. Maintenance phase
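The test-automation idea described above can be sketched in a few lines. The sketch below is illustrative only: the add function and its expected behavior are assumptions made for this example (they are not part of this writeup), and Python's standard unittest module stands in for a commercial automation tool.

```python
import unittest

def add(a, b):
    # Hypothetical unit under test (an assumption for this sketch).
    return a + b

class TestAdd(unittest.TestCase):
    # The "test program" replaces a human tester for these checks.
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    # exit=False so the test run does not terminate the interpreter.
    unittest.main(exit=False)
```

Once written, such a test program can be re-run after every change, which is what makes automation suitable for the time consuming, repetitive tasks mentioned above.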
Following are the testing tools and techniques used in each phase of the SDLC:
1. Tools and Techniques used in Requirement Gathering phase:
1. Checklist: A checklist is a list of probing questions prepared by the tester for
reviewing a predetermined function.
2. Confirmation/ Examination: This verifies the correctness of many aspects of the
system by contacting third parties such as users. This also involves examining a
document to verify that it exists.
3. Desk checking: This mechanism is a review performed by the originator of the
requirements, design or program so as to check the work performed by other
individuals.
4. Error Guessing: This is a mechanism where the experience or judgment of expert
people is used to guess what the most probable errors will be, and then testing only
for those errors to ensure that the system can handle those test conditions.
5. Fact Finding: This is a mechanism where information needed to conduct a test or
to provide assurance is obtained through an investigative process.
6. Flow Chart: It is a graphical representation of the program flow in order to
evaluate the completeness of the requirements, design or program specification.
7. Inspection: This is the mechanism where deliverables produced in each phase
of the system development life cycle are reviewed step by step.
8. Modeling: This is the mechanism of simulating the functioning of the
application system and its environment to test whether the design specifications
can achieve the system objectives. The actual system is then built based on the
results of the simulation.
9. Peer review: This is the process where programmers review the programs of
another programmer. (This happens before execution, so the review is of the
source code document.) Normally the following things are checked:
1. compliance to standards (company)
2. compliance to producer's standards
3. compliance to guidelines
4. use of good practices
5. efficiency
6. effectiveness
7. economy
10. Risk Matrix: This is the mechanism where risks in the application system are
identified and the adequacy of the controls in each part of the software is tested. The
objective is to reduce those risks to a level acceptable to the user.
11. Scoring: This is the process used to determine the degree of testing for high
risk systems as well as low risk systems. It helps to decide the amount of testing
required for a particular application: if the score is high, then more testing is needed.
12. Walkthrough: This is a process where a programmer explains his
program to the test team (without actual execution; just a document). The
programmer may use a simulation of the execution of the application system. The
objective of a walkthrough is to provide a basis for the test team to identify the
defects.
2. Tools and Techniques used in Design phase of SDLC:
1. Cause-Effect analysis: This is a graphical tool which shows the effect of
every event taking place in the system. It helps the tester to categorize every
event by the effect it has produced, and also helps in reducing the number of test
conditions required for multiple test events which produce the same effect.
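As an illustrative sketch (the causes and effects below, for a login screen, are hypothetical assumptions), a cause-effect table can be represented directly in code, which makes it easy to see when several cause combinations collapse to the same effect and can therefore share test conditions:

```python
# Hypothetical causes for a login screen: (valid_user, valid_password).
# Each combination of causes is mapped to the effect it produces.
cause_effect = {
    (True,  True):  "grant access",
    (True,  False): "show error",
    (False, True):  "show error",
    (False, False): "show error",
}

# Group cause combinations by effect: combinations sharing an effect
# can often be covered by fewer test conditions.
by_effect = {}
for causes, effect in cause_effect.items():
    by_effect.setdefault(effect, []).append(causes)

print(by_effect)  # three cause combinations collapse to the single "show error" effect
```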
2. Checklist: A set of questions designed to review the design of the application
system.
3. Confirmation/ Examination: This is to examine the design document for the
application system.
4. Correctness proof: This is a mechanism which involves developing
a set of statements/hypotheses which define the correctness of processing. These
hypotheses are then tested to determine whether the application system
performs processing in accordance with these correctness statements.
5. Design based functional testing: This is a tool which maps the design-based
functions to the requirements. This tool identifies these functions for testing
purposes.
6. Design reviews: This is a mechanism used during the software development
process, in accordance with the software development methodology. The basic
objective of a design review is to ensure compliance to the design methodology.
7. Desk checking: This is a mechanism where the designer of the software reviews
the work done by other people in the team.
8. Error guessing: Here the experienced designer helps the testing team to
guess the probable errors so that test cases can be designed accordingly.
9. Executable specifications: These are system specifications
which are written in a specific language and compiled into a testable program. The
compiled specification will have less detail and precision than the final version
of the program, but it is sufficient to evaluate the proper functioning of the
system.
10. Fact Finding: This is the process of investigating the facts about design
documents.
11. Flow chart: This is a graphical representation of the program flow. It helps to
evaluate the completeness of the high level design.
12. Inspection: This is a step by step review of the deliverables produced in the
design phase so as to identify the defects.
13. Modeling: This is the method of simulating the functioning of the
application system and its environment to confirm if the design specifications
will achieve the system objectives.
14. Peer review: Here experienced and senior designers review the work done by
others. Basically this review is for checking the compliance to standards, procedures
and guidelines and the use of good practices used in design.
15. Risk Matrix: Here the high level design is checked to identify risks and the
controls implemented to mitigate those risks. This is helpful in reaching the level of
risk acceptable to the user.
16. Scoring: This mechanism is used to decide the amount of testing required to test
the high level design. It helps to identify areas where more testing is required.
17. Test data: These are system transactions which are specifically created for the
purpose of testing the design of the application system.
18. Walkthrough: This is a process where the designer of the system explains the
details of the design to the testing team so that they can create proper test cases.
The objective of the walkthrough is to give the test team a basis for questioning
the design and identifying the defects.
3. Tools and Techniques used in Coding phase of SDLC:
1. Boundary Value Analysis: This is a method for dividing the input domain of an
application program into segments so that testing can occur at and around the
boundaries of those segments, where errors are most likely to occur. This concept
comes from the top down system design approach.
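As a sketch, assume a hypothetical validation function that accepts marks in the range 0 to 100 (the function and range are assumptions for this example). Boundary value analysis picks test values at and just outside the boundaries of that range:

```python
def is_valid_marks(marks):
    # Hypothetical unit under test: accepts marks in the range 0..100.
    return 0 <= marks <= 100

# Boundary value analysis: test at the boundaries and just outside them,
# where off-by-one defects are most likely to hide.
boundary_cases = {
    -1: False,   # just below the lower boundary
     0: True,    # lower boundary
     1: True,    # just above the lower boundary
    99: True,    # just below the upper boundary
   100: True,    # upper boundary
   101: False,   # just above the upper boundary
}

for value, expected in boundary_cases.items():
    assert is_valid_marks(value) == expected, value
```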
2. Cause Effect Graphing: This is a graphical tool which shows the effect of every
event taking place in the system. In the coding phase this helps the testing team to
see the effect produced by every piece of code written by the developer. It
helps the testing team to further categorize every event by the effect it has produced,
and also helps in reducing the number of test conditions required for multiple
events which produce the same effect.
3. Checklist: In the coding phase checklist is a list of probing
questions prepared by the testing team with respect to coding strategies so
that they can design good test cases.
4. Compiler based analysis: This tool utilizes the diagnostics produced by a compiler
to identify program defects during the compilation of the program. This helps the
testing team to design their test plan accordingly.
5. Complexity based metric testing: This mechanism uses statistics and
mathematics to develop relationships that can be used to identify the complexity
of a computer program. It also helps in identifying the completeness of testing
when evaluating complex logic.
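One widely used complexity metric of this kind is McCabe's cyclomatic complexity, V(G) = E - N + 2P (edges minus nodes plus twice the number of connected components of the control flow graph). A minimal sketch, using a small hypothetical graph assumed for this example:

```python
def cyclomatic_complexity(edges, nodes, components=1):
    # McCabe's formula: V(G) = E - N + 2P
    return edges - nodes + 2 * components

# Hypothetical control flow graph of a function containing one if/else:
#   entry -> decision -> then-branch -> exit
#                    \-> else-branch -> exit
nodes = 5  # entry, decision, then-branch, else-branch, exit
edges = 5  # entry->decision, decision->then, decision->else, then->exit, else->exit
print(cyclomatic_complexity(edges, nodes))  # 2: one decision point, two independent paths
```

A higher V(G) indicates more independent paths and hence more test cases needed to cover the logic.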
6. Control flow analysis: This is a graphical tool used to analyze the
branch logic within the program to identify logic problems, so that the testing team
can design appropriate test cases.
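A sketch of designing test cases from branch logic, for a hypothetical grading function (the function and its thresholds are assumptions for this example): each test case below forces a different branch of the control flow graph.

```python
def classify(grade):
    # Hypothetical unit under test with two decision points.
    if grade < 0 or grade > 100:
        return "invalid"
    if grade >= 40:
        return "pass"
    return "fail"

# One test case per branch identified from the control flow graph:
branch_tests = [
    (-5, "invalid"),   # first decision, true branch
    (75, "pass"),      # second decision, true branch
    (10, "fail"),      # second decision, false branch
]
for grade, expected in branch_tests:
    assert classify(grade) == expected
```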
7. Confirmation/ Examination: This is a process to confirm that proper design
document exists and to examine that document as per the standard.
8. Coverage based metric testing: This is a tool which uses a
mathematical relationship to show what percentage of the application
system has been covered by the test process. The resulting metric is used for
finding out the effectiveness of the test process.
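A minimal sketch of such a coverage metric, using hand-instrumented branch markers in a hypothetical function (real coverage tools record this automatically; the function and branch names are assumptions for this example):

```python
hit_branches = set()
ALL_BRANCHES = {"negative", "zero", "positive"}

def sign(n):
    # Hypothetical unit under test, instrumented to record executed branches.
    if n < 0:
        hit_branches.add("negative")
        return -1
    if n == 0:
        hit_branches.add("zero")
        return 0
    hit_branches.add("positive")
    return 1

# Run an (incomplete) test set, then compute the coverage metric.
sign(-3)
sign(7)
coverage = 100 * len(hit_branches) / len(ALL_BRANCHES)
print(f"branch coverage: {coverage:.0f}%")  # 67%: the 'zero' branch was never tested
```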
9. Data flow analysis: This is a tool used in the coding phase to ensure that the data
used in the program has been properly defined, and that the data which is
defined is appropriately used.
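A tiny sketch of automated data flow analysis using Python's standard ast module (the code under analysis, including the unused discount variable, is a hypothetical example): it flags variables that are defined but never used.

```python
import ast

source = """
def compute_total(prices):
    total = 0
    discount = 0.1   # defined but never used: a data flow anomaly
    for p in prices:
        total = total + p
    return total
"""

tree = ast.parse(source)
assigned, used = set(), set()
for node in ast.walk(tree):
    if isinstance(node, ast.Name):
        if isinstance(node.ctx, ast.Store):
            assigned.add(node.id)    # variable is defined (written)
        elif isinstance(node.ctx, ast.Load):
            used.add(node.id)        # variable is used (read)

anomalies = assigned - used
print(anomalies)  # {'discount'}
```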
10. Desk checking: This is the review mechanism performed by the
originator of the system so as to check the work performed by other individuals.
11. Error guessing: This is a mechanism where judgment and experience of
some senior people is taken into account to guess the probable errors. This
helps the testing team to write the test cases which will handle these errors.
12.Fact Finding: It is a mechanism where information needed to conduct a test
or to provide assurance is obtained through an investigative process.
13. Flow Chart: It is a graphical representation of the program flow in order to
evaluate the completeness of the requirements, design or program specification.
14. Inspection: This is the mechanism where deliverables produced in each phase
of the system development life cycle are reviewed step by step.
4. Tools and Techniques used in Testing phase of SDLC:
1. Acceptance test criteria: This tool is used by the testing team to develop the
system standards and functionality which must be achieved before the user will
accept the system in the production environment.
Acceptance testing is also known as Black Box Testing, Functional Testing, End
User Testing, Confidence Testing, Validation Testing, or UAT (User Acceptance
Testing).
1. Boundary value analysis
2. Checklist
3. Complexity based metric testing
4. Confirmation/ Examination
5. Correctness proof
6. Coverage based metric testing
7. Data dictionary
8. Design based functional testing
9. Disaster testing
10. Error guessing
11. Exhaustive testing: This is the mechanism where every possible path and
condition is evaluated and tested. It is the only test method which guarantees
proper functioning of the application program, but it is practical only for very
small input domains.
12. Fact finding
13. Inspections
14. Instrumentation
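The exhaustive-testing idea above can be made concrete for a tiny input domain. The sketch below assumes a hypothetical one-byte increment function (not from this writeup); because the domain has only 256 values, every input can be tested:

```python
def increment_byte(b):
    # Hypothetical unit under test: increments a byte, wrapping at 255.
    return (b + 1) % 256

# Exhaustive testing: the input domain has only 256 values,
# so every possible input is evaluated and checked.
for b in range(256):
    result = increment_byte(b)
    assert 0 <= result <= 255
    assert result == (0 if b == 255 else b + 1)

print("all 256 inputs verified")
```

For realistic programs the input domain is astronomically larger, which is why exhaustive testing is almost never feasible in practice.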
5. Tools and Techniques used in Deployment/Installation phase of
SDLC:
1. Acceptance Test Criteria: This is a process used by the testing team to develop the
system standards and functionality which must be achieved before the user will
accept the system in the production environment. Acceptance testing is also known
as Black Box Testing, Functional Testing, End User Testing, Confidence Testing,
Validation Testing, or UAT (User Acceptance Testing).
2. Checklist: This is a list of questions prepared by the testing team to get the insight
into the deployment phase.
3. Confirmation/ Examination: To confirm and examine the documents
related to the deployment phase.
4. Error Guessing: People expert in deploying the application are contacted, and
from the discussion the probable errors taking place in the deployment phase are
listed. Accordingly, the testing team will prepare the test plan to address those errors.
5. Fact Finding: It is a mechanism where information needed to conduct a test or
to provide assurance is obtained through an investigative process (interviews,
surveys).
6. Inspection: This is the mechanism where deliverables produced in each phase
of the system development life cycle are reviewed step by step.
7. Instrumentation: This makes use of a computer monitor or a counter to know the
frequency of a particular error which occurs again and again.
8. Parallel operation: This is a process where the old version of the software and the
new version of the software are run in parallel at the same time so that differences
between the two versions can be found and testing can be planned accordingly.
9. Peer review: This is a process where peers are requested to review the various
aspects of the deployment phase. Normally the peer review process checks for
compliance to various standards, procedures, guidelines, best practices etc.
10. System logs: This is a mechanism where information is collected during the
operation of a computer system for analysis purposes. This helps to determine
how well the system has performed. The logs are produced by operating software
such as DBMS systems, operating systems and job accounting systems, and they are
used for testing purposes. The installation logs created during the installation process
are extremely useful for fixing problems occurring during the installation phase.
11. Utility programs: These are general purpose software packages which can be
used in the testing of an application system. The most valuable are those which
analyze data files.
6. Tools and Techniques used in Maintenance phase of SDLC:
1. Checklist: List of questions prepared to understand the maintenance
phase of the system.
2. Code comparison: This tool is used by the tester to identify the difference
between two versions of the same program. This can be used either for the object
code or source code.
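A sketch of code comparison using Python's standard difflib module, applied to two hypothetical versions of the same small function (the source lines are assumptions for this example):

```python
import difflib

old_version = [
    "def total(prices):",
    "    return sum(prices)",
]
new_version = [
    "def total(prices, tax=0.0):",
    "    return sum(prices) * (1 + tax)",
]

# unified_diff reports only the lines that differ between the two versions,
# which is exactly what a tester needs when retesting a modified program.
diff = list(difflib.unified_diff(old_version, new_version, lineterm=""))
for line in diff:
    print(line)
```

The lines prefixed with "-" come from the old version and those with "+" from the new one, so the tester can focus retesting on what actually changed.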
3. Confirmation/ Examination: To confirm and examine the relevant document.
4. Desk checking: Owners of modules/processes keep a check on the work done by
individuals so as to ensure quality.
5. Disaster testing: This is to check the preparedness of the user for an
unanticipated disaster. The testing team prepares a special Disaster Recovery Plan
to address this issue.
6. Error guessing: The testing team prepares a list of probable errors by discussing
with experts in the maintenance phase. From those errors the test plan for the
maintenance phase is prepared so that the system will be able to handle those errors.
7. Fact finding: This is the process of investigation to find out facts about some
testing condition. This is done mainly by referring to the documents.
8. Inspections: This is a review process to check the deliverables produced by each
phase of SDLC.
9. Instrumentation: This is using computers or counters to know the frequency
of a particular error so that the test plan can be prepared accordingly.
10. Integrated test facility: The test data is given as input to the production
version of an application, so the live application is tested in parallel with test
data as well as live production data. This helps to compare the results obtained
and to modify the test cases if required.
11. Peer review: Peers are requested to review the maintenance process from the
point of view of best practices.
12. SCARF (System Control Audit Review File):
This is a mechanism where the software is operated over a period of time and
data/information is gathered during the operation to perform analysis. E.g., all data
entry errors are gathered over a period of time and analysis is done on whether the
quality of input is improving over time or not.
13. Test data: Actual system transactions which are created for the purpose of
testing the application.
14. Test data generator: These are software systems which can be used to
automatically generate test data for testing purposes. These generators use
parameters describing the data element values in order to generate large amounts
of test transactions.
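A sketch of a small test data generator (the transaction fields, value ranges, and statuses are hypothetical assumptions for this example): given parameters describing each data element, it produces any number of test transactions.

```python
import random

def generate_transactions(count, seed=42):
    # Parameters describing the data element values (hypothetical fields).
    rng = random.Random(seed)          # fixed seed: reproducible test data
    statuses = ["pending", "paid", "refunded"]
    return [
        {
            "txn_id": i,
            "amount": round(rng.uniform(1.0, 999.99), 2),
            "status": rng.choice(statuses),
        }
        for i in range(count)
    ]

data = generate_transactions(1000)
print(len(data))  # 1000 generated test transactions
```

Seeding the generator makes the data reproducible, so a failing test can be re-run on exactly the same transactions.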
15. Tracing: A representation of the path followed by computer programs as
they process data or the paths followed in the database to locate one or more
pieces of data used to produce a logical record for processing.
16. Utility programs: A general purpose software package which can be
used in the testing of an application system. The most valuable utilities are
those which analyze data files.
Procedure / Approach /Algorithm / Activity Diagram:
Study various tools and techniques used in software testing throughout software
development life cycle.
1. Tools and Techniques used in Requirement Gathering phase.
2. Tools and Techniques used in Design phase.
3. Tools and Techniques used in Coding phase.
4. Tools and Techniques used in Testing phase.
5. Tools and Techniques used in Deployment phase.
6. Tools and Techniques used in Maintenance phase.
_________________________________________________________________________
Experiment / assignment / tutorial No._______
Title: Use of IEEE-829 format for developing
test plan.
Batch: Roll No.: Experiment / assignment / tutorial No.:
Title: To develop a test plan for an educational institute application for an online
admission system.
_________________________________________________________________________
Objective:
After completing this experiment you will be able to:
1. Develop a test plan for any software project as per the standard IEEE format.
_________________________________________________________________________
Resources needed: Internet, LibreOffice
Theory
A test plan is a document detailing a systematic approach to testing a system such as
a machine or software. The plan typically contains a detailed understanding of what
the eventual workflow will be.
A test plan documents the strategy that will be used to verify and ensure that a
product or system meets its design specifications and other requirements. A test
plan is usually prepared by, or with significant input from, Test Engineers.
Depending on the product and the responsibility of the organization to which the
test plan applies, a test plan may include one or more of the following:
Design Verification or Compliance test - to be performed during the development or
approval stages of the product, typically on a small sample of units.
Manufacturing or Production test - to be performed during preparation or assembly
of the product in an ongoing manner for purposes of performance verification and
quality control.
Acceptance or Commissioning test - to be performed at the time of delivery or
installation of the product.
Service and Repair test - to be performed as required over the service life of the
product.
Regression test - to be performed on an existing operational product, to verify that
existing functionality didn't get broken when other aspects of the environment are
changed (e.g., upgrading the platform on which an existing application runs).
A complex system may have a high level test plan to address the overall
requirements and supporting test plans to address the design details of subsystems
and components.
Test plan document formats can be as varied as the products and organizations to
which they apply. There are three major elements that should be described in the
test plan: Test Coverage, Test Methods, and Test Responsibilities. These are also
used in a formal test strategy.
Test coverage
Test coverage in the test plan states what requirements will be verified during what
stages of the product life. Test Coverage is derived from design specifications and
other requirements, such as safety standards or regulatory codes, where each
requirement or specification of the design ideally will have one or more
corresponding means of verification. Test coverage for different product life stages
may overlap, but will not necessarily be exactly the same for all stages. For
example, some requirements may be verified during Design Verification test, but
not repeated during Acceptance test. Test coverage also feeds back into the design
process, since the product may have to be designed to allow test access.
Test methods
Test methods in the test plan state how test coverage will be implemented. Test
methods may be determined by standards, regulatory agencies, or contractual
agreement, or may have to be created new. Test methods also specify test
equipment to be used in the performance of the tests and establish pass/fail criteria.
Test methods used to verify hardware design requirements can range from very
simple steps, such as visual inspection, to elaborate test procedures that are
documented separately.
Test responsibilities
Test responsibilities include which organizations will perform the test methods at
each stage of the product life. This allows test organizations to plan, acquire or
develop test equipment and other resources necessary to implement the test methods
for which they are responsible. Test responsibilities also include what data will be
collected, and how that data will be stored and reported (often referred to as
"deliverables"). One outcome of a successful test plan should be a record or report
of the verification of all design specifications and requirements as agreed upon by
all parties.
_________________________________________________________________________
Procedure / Approach /Algorithm / Activity Diagram:
IEEE 829 format template for test plan
1. Test Plan Identifier
Some type of unique company generated number to identify this test plan, its level
and the level of software that it is related to. Preferably the test plan level will be the
same as the related software level. The number may also identify whether the test
plan is a Master plan, a Level plan, an integration plan, or whichever plan level it
represents. This is to assist in coordinating software and testware versions within
configuration management. Keep in mind that test plans are like other software
documentation: they are dynamic in nature and must be kept up to date.
Therefore, they will have revision numbers. You may want to include author and
contact information, including the revision history, as part of either the
identifier section or as part of the introduction.
2. References
List all documents that support this test plan. Refer to the actual
version/release number of the document as stored in the configuration
management system. Do not duplicate the text from other documents as this
will reduce the viability of this document and increase the maintenance effort.
Documents that can be referenced include:
Project Plan
Requirements specifications
High Level design document
Detail design document
Development and Test process standards
Methodology guidelines and examples
Corporate standards and guidelines
3. Introduction
State the purpose of the Plan, possibly identifying the level of the plan (master etc.).
This is essentially the executive summary part of the plan.
You may want to include any references to other plans, documents or items that
contain information relevant to this project/process. If preferable, you can create
a references section to contain all reference documents.
Identify the Scope of the plan in relation to the Software Project plan that it relates
to. Other items may include resource and budget constraints, the scope of the testing
effort, how testing relates to other evaluation activities (Analysis
& Reviews), and possibly the process to be used for change control and
communication and coordination of key activities. As this is the Executive
Summary, keep the information brief and to the point.
4. Test Items (Functions)
These are things you intend to test within the scope of this test plan.
Essentially, something you will test; a list of what is to be tested. This can be
developed from the software application inventories as well as other sources of
documentation and information. This can be controlled and defined by your local
Configuration Management(CM) process if you have one. This information
includes version numbers, configuration requirements where needed, (especially
if multiple versions of the product are supported). It may also include key
delivery schedule issues for critical elements. Remember, what you are testing is
what you intend to deliver to the Client. This section can be oriented to the level of
the test plan. For higher levels it may be by application or functional area; for
lower levels it may be by program, unit, module or build.
5. Software Risk Issues
Identify what software is to be tested and what the critical areas are, such as:
A. Delivery of a third party product
B. New version of interfacing software
C. Ability to use and understand a new package/tool, etc.
D. Extremely complex functions
E. Modifications to components with a past history of failure
F. Poorly documented modules or change requests
There are some inherent software risks, such as complexity; these need to be
identified:
A. Safety
B. Multiple interfaces
C. Impacts on Client
D. Government regulations and rules
Another key area of risk is a misunderstanding of the original requirements. This
can occur at the management, user and developer levels. Be aware of vague or
unclear requirements and requirements that cannot be tested.
The past history of defects (bugs) discovered during Unit testing will help
identify potential areas within the software that are risky. If the unit testing
discovered a large number of defects or a tendency towards defects in a
particular area of the software, this is an indication of potential future
problems. It is the nature of defects to cluster and clump together. If it was
defect ridden earlier, it will most likely continue to be defect prone. One good
approach to define where the risks are is to have several brainstorming sessions.
Start with ideas such as: what worries me about this project/application?
6. Features to be tested
This is a listing of what is to be tested from the USERS viewpoint of what
the system does. This is not a technical description of the software, but a USERS
view of the functions.
Set the level of risk for each feature. Use a simple rating scale such as (H,
M, L): High, Medium and Low. These types of levels are understandable to a
User. You should be prepared to discuss why a particular level was chosen. It should
be noted that Section 4 and Section 6 are very similar; the only true difference is
the point of view. Section 4 is a technical type of description, including version
numbers and other technical information, and Section 6 is from the Users
viewpoint. Users do not understand technical software terminology; they understand
functions and processes as they relate to their jobs.
7. Features not to be tested
This is a listing of what is NOT to be tested, from both the user's viewpoint of
what the system does and a configuration management/version control view. This
is not a technical description of the software but a user's view of the functions.
Identify WHY the feature is not to be tested; there can be any number of
reasons:
Not to be included in this release of the software.
Low risk; has been used before and is considered stable.
Will be released but not tested or documented as a functional part of this
version of the software.
Sections 6 and 7 are directly related to Sections 5 and 17. What will and will not
be tested is directly affected by the levels of acceptable risk within the project,
and what does not get tested affects the level of risk of the project.
8. Approach (Strategy)
This is your overall test strategy for this test plan; it should be appropriate to the
level of the plan (master, acceptance, etc.) and should be in agreement with all
higher and lower levels of plans. Overall rules and processes should be identified.
Are any special tools to be used, and what are they?
Will the tools require special training?
What metrics will be collected?
At which level is each metric to be collected?
How is configuration management to be handled?
How many different configurations will be tested?
Hardware
Software
Combinations of HW, SW and other vendor packages
What levels of regression testing will be done, and how much at each test level?
Will regression testing be based on the severity of defects detected?
How will elements in the requirements and design that do not make sense
or are untestable be processed?
If this is a master test plan, the overall project testing approach and coverage
requirements must also be identified.
Specify if there are special requirements for the testing:
Only the full component will be tested.
A specified segment or grouping of features/components must be tested
together.
Other information that may be useful in setting the approach includes:
MTBF, Mean Time Between Failures - if this is a valid measurement for the test
involved and if the data is available.
SRE, Software Reliability Engineering - if this methodology is in use and if
the information is available.
How will meetings and other organizational processes be handled?
9. Item Pass/Fail Criteria:
What are the Completion criteria for this plan? This is a critical aspect of any test
plan and should be appropriate to the level of the plan.
At the Unit test level this could be items such as:
All test cases completed.
A specified percentage of cases completed with a percentage containing some
number of minor defects.
Code coverage tool indicates all code covered.
At the master test plan level, this could be items such as: all lower-level plans
completed, or a specified number of plans completed without errors. The criterion
could apply at the individual test case level, at the unit plan level, or as general
functional requirements for higher-level plans.
What is the number and severity of defects located? Is it possible to compare
this to the total number of defects? This may be impossible, as some defects
are never detected. A defect is something that may cause a failure and may be
acceptable to leave in the application. A failure is the result of a defect as seen
by the user.
10. Suspension Criteria and Resumption Requirement:
Know when to pause in a series of tests.
If the number or type of defects reaches a point where follow-on testing has no
value, it makes no sense to continue the test; you are just wasting resources.
Specify what constitutes stoppage for a test or series of tests and what is the
acceptable level of defects that will allow the testing to proceed past the defects.
Testing after a truly fatal error will generate conditions that may be identified as
defects but are in fact ghost errors caused by the earlier defects that were ignored.
11. Test Deliverables:
What is to be delivered as part of this plan?
Test plan document.
Test cases.
Test design specifications.
Tools and their outputs.
Simulators.
Static and dynamic generators.
Error logs and execution logs.
Problem reports and corrective actions.
One thing that is not a test deliverable is the software itself; that is listed under test
items and is delivered by development.
12. Remaining test tasks:
If this is a multi-phase process or if the application is to be released in increments
there may be parts of the application that this plan does not address. These areas
need to be identified to avoid any confusion should defects be reported back on
those future functions. This will also allow the users and testers to avoid incomplete
functions and prevent waste of resources chasing non-defects.
If the project is being developed as a multi-party process, this plan may only
cover a portion of the total functions/features. This status needs to be
identified so that those other areas have plans developed for them and to
avoid wasting resources tracking defects that do not relate to this plan.
When a third party is developing the software, this section may contain descriptions
of those test tasks belonging to both the internal groups and the external groups.
13. Environmental Needs:
Are there any special requirements for this test plan, such as:
Special hardware such as simulators, static generators etc.
How will test data be provided? Are there special collection
requirements or specific ranges of data that must be provided?
How much testing will be done on each component of a multi-part feature?
Special power requirements.
Specific versions of other supporting software.
Restricted use of the system during testing.
14. Staffing and Training Needs:
Training on the application/system.
Training for any test tools to be used.
Section 4 and Section 15 also affect this section. What is to be tested and
who is responsible for the testing and training.
15. Responsibilities:
Who is in charge?
This issue includes all areas of the plan. Here are some examples:
Setting risks.
Selecting features to be tested and not tested.
Setting overall strategy for this level of plan.
Ensuring all required elements are in place for testing.
Providing for resolution of scheduling conflicts, especially if testing is done
on the production system.
Who provides the required training?
Who makes the critical go/no go decisions for items not covered in the test
plans?
16. Schedule:
The schedule should be based on realistic and validated estimates. If the estimates for
the development of the application are inaccurate, the entire project plan will slip,
and the testing is part of the overall project plan.
As we all know, the first area of a project plan to get cut when crunch time comes
at the end of a project is the testing. It usually comes down to the decision,
"Let's put something out even if it does not really work all that well."
And, as we all know, this is usually the worst possible decision.
How slippage in the schedule is to be handled should also be addressed here.
If the users know in advance that a slippage in development will cause a slippage
in the test and in the overall delivery of the system, they may be a little more
tolerant, knowing it is in their interest to get a better-tested application.
By spelling out the effects here, you have a chance to discuss them in advance of
their actual occurrence. You may even get the users to agree to a few defects in
advance, if the schedule slips.
At this point, all relevant milestones should be identified with their relationship to
the development process identified. This will also help in identifying and tracking
potential slippage in the schedule caused by the test process.
It is always best to tie all test dates directly to their related development activity
dates. This prevents the test team from being perceived as the cause of a delay. For
example, if system testing is to begin after delivery of the final build, then system
testing begins the day after delivery. If the delivery is late, system testing starts
from the day of delivery, not on a specific date. This is called dependent or relative
dating.
17. Planning Risks and contingencies.
What are the overall risks to the project with an emphasis on the testing process?
Lack of personnel resources when testing is to begin.
Lack of availability of required hardware, software, data or tools.
Late delivery of the software, hardware or tools.
Delays in training on the application and/or tools.
Changes to the original requirements or designs.
Specify what will be done for various events, for example:
Requirements definition will be complete by January 1, 20XX, and, if
the requirements change after that date, the following actions will be
taken.
The test schedule and development schedule will move out an
appropriate number of days. This rarely occurs, as most projects tend
to have fixed delivery dates.
The number of tests performed will be reduced.
The number of acceptable defects will be increased.
These two items could lower the overall quality of the delivered
product.
Resources will be added to the test team.
The test team will work overtime.
This could affect team morale.
The scope of the plan may be changed.
There may be some optimization of resources. This should be
avoided, if possible, for obvious reasons.
You could just QUIT. A rather extreme option to say the least.
Management is usually reluctant to accept scenarios such as
the one above even though they have seen it happen in the past.
The important thing to remember is that, if you do nothing at all, the usual result is
that testing is cut back or omitted completely, neither of which should be an
acceptable option.
18. Approval:
Who can approve the process as complete and allow the project to proceed to the
next level (depending on the level of the plan)?
At the master test plan level, this may be all involved parties.
When determining the approval process, keep in mind who the audience
is. The audience for a unit test level plan is different from that of an integration,
system or master level plan.
The levels and type of knowledge at the various levels will be different as
well. Programmers are very technical but may not have a clear
understanding of the overall business process driving the project. Users may
have varying levels of business acumen and few technical skills. Always be
wary of users who claim high levels of technical skill and programmers who claim
to fully understand the business process. These types of individuals can cause more
harm than good if they do not have the skills they believe they possess.
Results: (Program printout with output / Document printout as per the format)
Questions:
1. What is a test case? What are the objectives of testing?
2. Explain the difference between failure, error and fault.
_________________________________________________________________________
Outcomes:
1. An ability to design a system, component, or process to meet desired needs within
realistic constraints. (c)
2. An ability to adopt open source standards. (l)
3. An understanding of best practices, standards and their applications. (m)
Conclusion: (Conclusion to be based on the objectives and outcomes achieved)
Grade: AA / AB / BB / BC / CC / CD /DD
Signature of faculty in-charge with date
_______________________________________________________________
References:
Books/ Journals/ Websites:
1. http://ieeexplore.ieee.org
2. http://en.wikipedia.org/wiki/Test_plan
3. http://gerrardconsulting.com/tkb/guidelines/ieee829/main.html
Batch: Roll No.: Experiment / assignment / tutorial No.:
Title: To develop a unit test plan for a client server program using UDP as the transport
protocol.
_________________________________________________________________________
Objective:
After completing this experiment you will be able to:
1. Develop a unit test plan for any software project.
_________________________________________________________________________
Resources needed: Internet, Libre Office
Theory
Unit Testing
Unit testing is a method by which individual units of source code, sets of one or more
computer program modules together with associated control data, usage procedures, and
operating procedures, are tested to determine if they are fit for use. Intuitively, one can
view a unit as the smallest testable part of an application. In procedural programming a
unit could be an entire module but is more commonly an individual function or procedure.
In object-oriented programming a unit is often an entire interface, such as a class, but
could be an individual method. Unit tests are created by programmers or, occasionally, by
white box testers during the development process.
Ideally, each test case is independent from the others: substitutes like method stubs, mock
objects, fakes and test harnesses can be used to assist testing a module in isolation. Unit
tests are typically written and run by software developers to ensure that code meets its
design and behaves as intended. Their implementation can vary from being very manual
(pencil and paper) to being formalized as part of build automation.
Unit Test Plan
This document describes the test plan, in other words how the tests will be carried out.
It will typically include the list of things to be tested, roles and responsibilities,
prerequisites to begin testing, the test environment, assumptions, what to do after a test
is successfully carried out, what to do if a test fails, a glossary, and so on.
Procedure / Approach /Algorithm / Activity Diagram:
For a given program use the following template to develop the unit test plan.
Unit Test Plan
Module ID: _________ Program ID: ___________
1. Module Overview
Briefly define the purpose of this module. This may require only a single phrase, e.g.,
"calculates overtime pay amount" or "calculates equipment depreciation".
1.1 Inputs to Module
[Provide a brief description of the inputs to the module under test.]
1.2 Outputs from Module
[Provide a brief description of the outputs from the module under test.]
1.3 Logic Flow Diagram
[Provide logic flow diagram if additional clarity is required.]
2. Test Data
(Provide a listing of test cases to be exercised to verify processing logic.)
2.1 Positive Test Cases
[Representative data samples should provide a spectrum of valid field and processing
values, including "syntactic" permutations that relate to any data or record format issues.
Each test case should be numbered and should indicate the nature of the test to be
performed and the expected proper outcome.]
2.2 Negative Test Cases
[The invalid data selection contains all of the negative test conditions associated with the
module. These include numeric values outside thresholds, invalid characters, invalid or
missing header/trailer records, and invalid data structures (missing required elements,
unknown elements, etc.).]
3. Interface Modules
Identify the modules that interface with this module, indicating the nature of the interface:
outputs data to, receives input data from, internal program interface, external program
interface, etc. Identify sequencing required for subsequent string tests or sub-component
integration tests.
4. Test Tools
[Identify any tools employed to conduct unit testing. Specify any stubs or utility programs
developed or used to invoke tests. Identify names and locations of these aids for future
regression testing. If data is supplied from the unit test of a coupled module, specify the
module relationship.]
5. Archive Plan
Specify how and where data is archived for use in subsequent unit tests. Define any
procedures required to obtain access to data or tools used in the testing effort. The
unit test plans are normally archived with the corresponding module specifications.
6. Updates
Define how updates to the plan will be identified. Updates may be required due to
enhancements, requirements changes, etc.
____________________________________________________________________
Results: (Program printout with output / Document printout as per the format)
____________________________________________________________________
Questions:
1. List down the tools useful in unit testing and debugging the code.
________________________________________________________________________
Outcomes:
1. An ability to design a system, component, or process to meet desired needs within
realistic constraints. (c)
2. An ability to adopt open source standards. (l)
3. An understanding of best practices, standards and their applications. (m)
_________________________________________________________________________
Conclusion: (Conclusion to be based on the objectives and outcomes achieved)
Grade: AA / AB / BB / BC / CC / CD /DD
Signature of faculty in-charge with date
_________________________________________________________________________
References:
Books/ Journals/ Websites:
1. www.uml.org.cn/test/utp_template.doc
2. http://www.exforsys.com/tutorials/testing/unit-testing.html
3. www.softwaretestinghelp.com
Experiment / assignment / tutorial No._______
Title: White Box Testing using control flow.
Batch: Roll No.: Experiment / assignment / tutorial No.:
Title: White Box Testing using control flow
_________________________________________________________________________
Objective:
After completing this experiment you will be able to:
1. Design a control flow graph.
2. Understand path selection criteria.
3. Generate test input data.
_______________________________________________________________________
Resources needed: Libre Office
_________________________________________________________________________
Theory
Control Flow: Successive execution of program statements is viewed as a flow of control.
Conditional statements alter the default sequential control flow in a program unit.
Control Flow Testing: The main idea in control flow testing is to appropriately select a few
paths in a program unit and observe whether or not the selected paths produce the expected
outcome. By executing a few paths in a program unit, the programmer tries to assess the
behavior of the entire program unit.
Control flow testing is a kind of structural testing, which is performed by
programmers to test the code written by them. Test cases for control flow testing are
derived from source code, such as a program unit, rather than from the entire program.
Procedure / Approach /Algorithm / Activity Diagram:
Outline of control flow testing:
The overall idea of generating test input data for performing control flow testing is
as follows.
Inputs to the test generation process:
- Source code of the program unit
- Set of path selection criteria: statement, branch
Generation of a control flow graph: A CFG is a graphical representation of a program
unit. The idea behind drawing a CFG is to be able to visualize all the paths in the
program unit.
Selection of paths:
- Paths are selected from the CFG to satisfy the path selection criteria.
Generation of test input data
- Two kinds of paths:
Executable path: There exists an input so that the path is executed;
such a path is called a feasible (executable) path.
Infeasible path: If there is no input to execute the path, then
the path is called an infeasible path.
- Solve the path conditions to produce test input for each path.
Feasibility test of the path
The idea behind checking the feasibility of a selected path is to meet the path selection
criteria: if some chosen paths are found to be infeasible, then other paths are
selected to meet the criteria.
Control Flow Graph: It is a graphical representation of the program unit. Three symbols are
used to construct a CFG.
Rectangle: It represents a sequential computation. We label each computation and decision
box with a unique integer.
Decision box: The two branches of a decision box are labeled with T and F to represent the
true and false evaluations, respectively, of the condition within the box.
Merge point: We do not label a merge node, because one can easily identify the paths in a
CFG even without explicitly considering the merge nodes.
_________________________________________________________________________
Results: (Program printout with output / Document printout as per the format)
_________________________________________________________________________
Questions:
You are given the binary search routine in C shown in fig(b). The input array V is assumed
to be sorted in ascending order, n is the array size, and you want to find the index of an
element X in the array. If X is not found in the array, the routine is supposed to return -1.

int binsearch (int X, int V[], int n) {
    int low, high, mid ;
    low = 0 ;
    high = n - 1 ;
    while (low <= high) {
        mid = (low + high) / 2 ;
        if (X < V[mid])
            high = mid - 1 ;
        else if (X > V[mid])
            low = mid + 1 ;
        else
            return mid ;
    }
    return -1 ;
}
1. Draw a CFG for binsearch().
2. From the CFG, identify a set of entry-exit paths to satisfy the complete statement
coverage criterion.
3. Identify additional paths, if necessary, to satisfy the complete branch coverage criterion.
4. For each path identified above, derive their path predicate expressions.
5. Solve the path predicate expressions to generate test input and compute the
corresponding expected outcomes.
Outcomes:
1.An ability to design a system, component, or process to meet desired needs within
realistic constraints. (c)
2. An ability to identify and formulate engineering problems. (e)
Conclusion: (Conclusion to be based on the objectives and outcomes achieved)
Grade: AA / AB / BB / BC / CC / CD /DD
Signature of faculty in-charge with date
References:
Books/ Journals/ Websites:
1. Book: Software Testing and Quality Assurance by Kshirasagar Naik and Priyadarshi
Tripathy.
2. http://en.wikipedia.org/wiki/Control_flow_graph
3. http://suif.stanford.edu/~courses/cs243/joeq/adv_ex3.html
Experiment / assignment / tutorial No._______
Title: White Box Testing using dataflow.
Batch: Roll No.: Experiment / assignment / tutorial No.:
Title: White Box Testing using data flow.
_________________________________________________________________________
Objective:
After completing this experiment you will be able to:
1. Design a data flow graph.
2. Understand path selection criteria.
3. Generate test input data.
_________________________________________________________________________
Resources needed: Libre Office
Theory
A program unit accepts inputs, performs computations, assigns new values to variables,
and returns results. One can visualize the flow of data values from one statement to
another. A data value produced in one statement is expected to be used later.
Example:
Obtain a file pointer; use it later.
If the later use is never verified, we do not know if the earlier assignment is acceptable.
Motivations of data flow testing:
- Verify that the memory location for a variable is accessed in a desirable way.
- Verify the correctness of data values defined (i.e. generated): observe that all the
uses of the value produce the desired results.
Data flow testing can be performed at two conceptual levels:
Static data flow testing
- Identify potential defects, commonly known as data flow anomalies.
- Analyze source code without execution.
Dynamic data flow testing
- Involves actual program execution.
- Bears similarity with control flow testing: identify paths and execute them.
- Paths are identified based on data flow testing criteria.
_________________________________________________________________________
Procedure / Approach /Algorithm / Activity Diagram:
Data flow testing is outlined as follows:
- Draw a data flow graph from the program.
- Select one or more data flow testing criteria.
- Identify paths in the data flow graph satisfying the selection criteria.
- Derive path predicate expressions from the selected paths.
- Solve the path predicate expressions to derive test inputs.
Data Flow Graph:
It is drawn with the objective of identifying data definitions and their uses.
A data flow graph is a directed graph constructed as follows.
- A sequence of definitions and c-uses is associated with each node of the graph.
- A set of p-uses is associated with each edge of the graph.
- The entry node has a definition of each parameter and each nonlocal variable
used in the program.
- The exit node has an undefinition of each local variable.
An occurrence of a data variable is classified as follows:
Definition: A variable gets a new value.
Undefinition or kill: This occurs if the value and the location become unbound.
Use: This occurs when the value is fetched from the memory location of the variable.
There are two forms of uses of a variable:
Computation use (c-use): the value is used in a computation or output statement.
Predicate use (p-use): the value is used in a condition (predicate) that affects
control flow.
Data Flow Terms:
Global c-use: A c-use of a variable x in node i is said to be a global c-use if x has been
defined before in a node other than node i.
Definition clear path:
A path (i - n1 - ... - nm - j), m >= 0, is called a definition clear path (def-clear path) with
respect to variable x from node i to node j, and from node i to edge (nm, j), if x has been
neither defined nor undefined in nodes n1 - ... - nm.
Global definition:
A node i has a global definition of variable x if node i has a definition of x and there is a
def-clear path w.r.t. x from node i to some node containing a global c-use, or some edge
containing a p-use, of variable x.
Simple path:
A simple path is a path in which all nodes, except possibly the first and the last, are
distinct.
Loop-free path:
A loop-free path is a path in which all nodes are distinct.
Complete path:
A complete path is a path from the entry node to the exit node.
Du-path:
A path (n1 - n2 - ... - nj - nk) is a du-path w.r.t. variable x if node n1 has a global
definition of x and either node nk has a global c-use of x and (n1 - n2 - ... - nj - nk) is a
def-clear simple path w.r.t. x, or edge (nj, nk) has a p-use of x and (n1 - n2 - ... - nj - nk)
is a def-clear, loop-free path w.r.t. x.
Data Flow Testing Criteria
All-defs:
For each variable x and each node i, such that x has a global definition in node i, select a
complete path which includes a def-clear path from node i to some node j having a global
c-use of x, or some edge (j, k) having a p-use of x.
All-c-uses:
For each variable x and each node i, such that x has a global definition in node i, select
complete paths which include def-clear paths from node i to all nodes j such that there is
a global c-use of x in j.
All-p-uses:
For each variable x and each node i, such that x has a global
definition in node i, select complete paths which include def-clear paths
from node i to all edges (j, k) such that there is a p-use of x on (j, k).
All-p-uses/some-c-uses:
This criterion is identical to the all-p-uses criterion except when a variable x has no p-use.
If x has no p-use, then this criterion reduces to the some-c-uses criterion.
Some-c-uses: For each variable x and each node i, such that x has a
global definition in node i, select complete paths which include def-
clear paths from node i to some nodes j such that there is a global c-
use of x in j.
All-c-uses/some-p-uses:
This criterion is identical to the all-c-uses criterion except when a variable x has no c-use.
If x has no global c-use, then this criterion reduces to the some-p-uses criterion.
Some-p-uses: For each variable x and each node i, such that x has a
global definition in node i, select complete paths which include def-clear
paths from node i to some edges (j, k) such that there is a p-use
of x on (j, k).
All-uses:
This criterion produces a set of paths due to the all-p-uses criterion and the all-c-uses
criterion.
All-du-paths:
For each variable x and for each node i, such that x has a global definition in node i,
select complete paths which include all du-paths from node i
- to all nodes j such that there is a global c-use of x in j, and
- to all edges (j, k) such that there is a p-use of x on (j, k).
Feasible Paths and Test Selection Criteria:
Executable (feasible) path
- A complete path is executable if there exists an assignment of values
to input variables and global variables such that all the path predicates
evaluate to true.
Infeasible path
- We call a path infeasible if no such assignment of values to input
variables and global variables exists.
- For a criterion to be useful, it must select a set of executable (feasible)
paths.
_________________________________________________________________________
Results: (Program printout with output / Document printout as per the format)
Questions:
1. Draw a data flow graph for the binsearch() function given below:

int binsearch (int X, int V[], int n) {
    int low, high, mid ;
    low = 0 ;
    high = n - 1 ;
    while (low <= high) {
        mid = (low + high) / 2 ;
        if (X < V[mid])
            high = mid - 1 ;
        else if (X > V[mid])
            low = mid + 1 ;
        else
            return mid ;
    }
    return -1 ;
}
Q2. Assuming that the input array V[] has at least one element in it, find an infeasible path
in the data flow graph for the binsearch() function.
Q3. By referring to the data flow graph obtained in Q1, find a set of complete paths
satisfying the all-defs selection criterion with respect to variable mid.
Q4. By referring to the data flow graph obtained in Q1, find a set of complete paths
satisfying the all-defs selection criterion with respect to variable high.
Q5. Solve the path predicate expressions to generate test input and compute the
corresponding expected outcomes.
__________________________________________________________________________
Outcomes:
1.An ability to design a system, component, or process to meet desired needs within
realistic constraints. (c)
2. An ability to identify and formulate engineering problems. (e)
_________________________________________________________________________
Conclusion: (Conclusion to be based on the objectives and outcomes achieved)
Grade: AA / AB / BB / BC / CC / CD /DD
Signature of faculty in-charge with date
______________________________________________________________________
References:
Books/ Journals/ Websites:
1. Book: Software Testing and Quality Assurance by Kshirasagar Naik and Priyadarshi
Tripathy.
2. http://en.wikipedia.org/wiki/Data_flow_diagram
Experiment / assignment / tutorial No._______
Title: Black Box testing : Study of QTP
(Quick test professional)
Batch: Roll No.: Experiment / assignment / tutorial No.:
Title: Black Box Testing: Study of QTP (QuickTest Professional): A tool for automated
functional testing and regression testing.
____________________________________________________________________
Objective:
After completing this experiment you will be able to:
1. Understand the concept of black box testing.
2. Understand the various features of QTP and know the facilities provided for
automation.
_________________________________________________________________________
Resources needed: Libre Office, Internet
Theory: HP QuickTest Professional (QTP)
Software for automated functional testing and regression testing.
Introduction: HP QuickTest Professional is automated testing software designed for testing various software applications and environments. It performs functional and regression testing through a user interface such as a native GUI or web interface. It works by identifying the objects in the application user interface or a web page and performing desired operations (such as mouse clicks or keyboard events); it can also capture object properties like name or handler ID. HP QuickTest Professional uses the VBScript scripting language to specify the test procedure and to manipulate the objects and controls of the application under test. To perform more sophisticated actions, users can edit the underlying VBScript.
QTP has an Active Screen feature which provides snapshots of the application under test as it appeared when testing was performed.
The Data Table object in QTP helps in parameterizing the test. In each new test, the Data Table contains one global tab plus an additional tab for every action. The Data Table is a Microsoft Excel-like sheet which represents the data applicable to your test.
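The row-per-iteration idea behind the Data Table can be sketched outside QTP. The following Python fragment is an illustration only, not QTP code; the data rows and the `login` function are invented stand-ins for the application under test:

```python
# Data-driven testing: one test procedure, many data rows.
# Each dictionary plays the role of a Data Table row (hypothetical data).
data_table = [
    {"username": "alice", "password": "secret1", "should_pass": True},
    {"username": "bob",   "password": "",        "should_pass": False},
]

def login(username, password):
    """Stand-in for the application under test: rejects empty credentials."""
    return bool(username) and bool(password)

def run_data_driven_test(rows):
    """Run the login check once per row; record whether actual == expected."""
    results = []
    for row in rows:
        actual = login(row["username"], row["password"])
        results.append(actual == row["should_pass"])
    return results

print(run_data_driven_test(data_table))  # [True, True]: every row matched
```

The test logic is written once; adding coverage is a matter of adding rows, which is exactly the benefit the Data Table provides in QTP.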
Although HP QuickTest Professional is usually used for "UI-based" test case automation, it can also automate some "non-UI" test cases such as file system operations and database testing. The following are some of the important features of QTP 10.0:
1. Exception handling:
HP QuickTest Professional manages exception handling using recovery scenarios; the goal is to continue running the test even when an unexpected event or error interrupts it.
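Recovery-scenario behaviour can be illustrated with a small sketch (Python here for readability, not QTP's actual API; the step and handler names are hypothetical): a failing step triggers a recovery handler, and the remaining steps still run.

```python
# Recovery-scenario sketch: when a step raises an error, run a recovery
# handler and continue with the remaining steps instead of aborting.
def run_with_recovery(steps, recover):
    """Execute (name, callable) steps in order; on exception, recover and move on."""
    executed, recovered = [], 0
    for name, step in steps:
        try:
            step()
            executed.append(name)
        except Exception:
            recover()          # e.g. dismiss an unexpected pop-up
            recovered += 1
    return executed, recovered

def flaky():
    raise RuntimeError("unexpected pop-up")

log = []
steps = [("open", lambda: log.append("open")),
         ("flaky", flaky),
         ("close", lambda: log.append("close"))]
executed, recovered = run_with_recovery(steps, recover=lambda: log.append("recover"))
print(executed, recovered)  # the step after the failure still executed
```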
6. User interface
HP QuickTest Professional provides two views of a test script, each of which can be used to modify it: Keyword View and Expert View. These views enable HP QuickTest Professional to act as an Integrated Development Environment (IDE) for the test, and it includes many standard IDE features, such as breakpoints to pause a test at predetermined places.
7. Keyword view
Keyword View lets users create and view the steps of a test in a modular, table format. Each row in the table represents a step that can be modified. The Keyword View can also contain any of the following columns: Item, Operation, Value, Assignment, Comment, and Documentation. For every step in the Keyword View, HP QuickTest Professional displays a corresponding line of script based on the row and column values. Users can add, delete, or modify steps at any point in the test. In the Keyword View, users can also view properties for items such as checkpoints, output values, and actions, use conditional and loop statements, and insert breakpoints to assist in debugging a test.
8. Expert view
In Expert View, HP QuickTest Professional lets users display and edit a test's source code using VBScript. The already recorded script of the test is displayed here, and users can edit it if they need to. Designed for more advanced users, Expert View allows editing of all test actions except the root Global action, and changes are synchronized with the Keyword View.
9. Languages
HP QuickTest Professional uses VBScript as its scripting language. VBScript supports classes but not polymorphism and inheritance. Compared with Visual Basic for Applications (VBA), VBScript lacks the ability to use some Visual Basic keywords, does not come with an integrated debugger, lacks an event handler, and does not have a forms editor. HP has added a debugger, but the functionality is more limited when compared with testing tools that integrate a full-featured IDE, such as those provided with VBA, Java, or VB.NET.
10. Synchronization:
Synchronization is an important mechanism to compensate for inconsistencies in the performance of applications which respond to inputs slowly during testing. The default object synchronization timeout in QTP is 20 seconds; if the application responds more slowly, this interval is not enough and the test run fails unexpectedly. Otherwise QTP halts and waits until the object and its properties reach the expected state. We can add timeout statements such as a Wait statement or a conditional statement: the Wait function is used for a hard-coded timeout, while conditional statements are used with a synchronization point. Situations in which synchronization may be needed include:
1. Retrieving information from the database.
2. The time taken for a window to pop up.
3. The time taken for a progress bar to reach 100%.
4. The time taken for a status message to appear.
A synchronization point can be inserted using a dialogue box where we specify the time in milliseconds after which QTP will continue to the next step. The default is 10 seconds (10000 milliseconds).
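The synchronization-point idea, poll until the object reaches the expected state or a millisecond timeout expires, can be sketched as follows. This is a Python illustration of the mechanism, not QTP's actual Wait/sync API:

```python
import time

# Synchronization-point sketch: poll a condition until it holds or a
# timeout (expressed in milliseconds, as in QTP's dialogue box) expires.
def wait_for(condition, timeout_ms=10000, poll_ms=50):
    """Return True as soon as condition() is truthy, False on timeout."""
    deadline = time.monotonic() + timeout_ms / 1000.0
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(poll_ms / 1000.0)
    return bool(condition())  # one last check at the deadline

# Simulate an application object that becomes "ready" after a short delay.
ready_at = time.monotonic() + 0.1
assert wait_for(lambda: time.monotonic() >= ready_at, timeout_ms=2000)
assert not wait_for(lambda: False, timeout_ms=100)
```

Polling with a deadline is why a slow application makes the run pause rather than fail outright, up to the configured timeout.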
11. Facilities for creating actions:
In QTP every test is recorded and can be replayed whenever required. This recorded test can be divided into logical sections called actions, so when a new test is created, some of the actions from earlier tests can be reused. This helps to design more modular and efficient tests. Users can insert new actions at record time or after recording. An action has its own script including all the steps recorded, and an action can be reusable or non-reusable.
12. Object Repository: The objects associated with each test and each action are stored in a database called the object repository. There are two modes of object repository:
1. Default mode. 2. Shared mode.
13. Check Points: In QTP, checkpoints allow us to compare the current behaviour of the application with its behaviour in an earlier version. Standard checkpoints are used for checking different properties of application objects. Bitmap checkpoints are used for checking images. Text checkpoints are used for checking specific text. Database checkpoints are used for checking the contents of the database used in the application.
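At its core, a checkpoint is a comparison of expected object properties against the values observed at run time. A minimal sketch of that comparison (Python, with invented property names and data, not QTP's checkpoint API):

```python
# Checkpoint sketch: compare captured expected properties of an object
# against the values observed in the current run (hypothetical data).
def standard_checkpoint(expected, actual):
    """Return a list of (property, expected, observed) mismatches; [] = pass."""
    return [(prop, want, actual.get(prop))
            for prop, want in expected.items()
            if actual.get(prop) != want]

expected = {"text": "Login", "enabled": True}
assert standard_checkpoint(expected, {"text": "Login", "enabled": True}) == []
mismatches = standard_checkpoint(expected, {"text": "Sign in", "enabled": True})
print(mismatches)  # the failing property with expected vs observed value
```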
14. User-defined functions: When we have large segments of code which we need to use several times in one test or in several different tests, it is useful to create user-defined functions. This makes testing easier. These functions can be defined in individual tests, or we can create an external VBScript library file containing them.
Procedure / Approach /Algorithm / Activity Diagram:
Study the following concepts of QTP:
The features and applications of QTP
Data-driven testing
The recording modes available in QTP
Synchronization
________________________________________________________________________
Results: (Program printout with output / Document printout as per the format)
________________________________________________________________________
Questions:
1. What is the need for automating functional and regression testing?
2. QTP uses VBScript. What are the advantages and disadvantages of this?
3. How is testing performed in QTP?
4. What is synchronization? During testing when is it necessary to use
synchronization?
5. What are checkpoints? When are they needed?
6. What is data driven testing? (Parameterization)
7. What is action in QTP?
8. What is object repository in QTP?
_______________________________________________________________________
Outcomes:
1. An ability to use the techniques, skills, and modern engineering tools necessary for
engineering practice.(k)
2. An understanding of best practices, standards and their applications. (m)
_________________________________________________________________________
Conclusion: (Conclusion to be based on the objectives and outcomes achieved)
Grade: AA / AB / BB / BC / CC / CD /DD
Signature of faculty in-charge with date
_________________________________________________________________________
References:
Books/ Journals/ Websites:
1. Test Automation Tools, a book prescribed for the Diploma in Software Testing, part of the official curriculum of SEED InfoTech, Pune.
2.http://askqtp.blogspot.com/
Experiment / assignment / tutorial No._______
Title: Automated Performance Testing: Study
of HP LoadRunner
Endurance testing
Endurance testing is usually done to determine whether the system can sustain the continuous expected load. During endurance tests, memory utilization is monitored to detect potential leaks. Also important, but often overlooked, is performance degradation: ensuring that the throughput and/or response times after some long period of sustained activity are as good as or better than at the beginning of the test. Endurance testing essentially involves applying a significant load to a system for an extended, significant period of time. The goal is to discover how the system behaves under sustained use.
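The degradation check described above, that response times at the end of a sustained run should be as good as at the start, can be sketched numerically. This is an illustration with invented sample data and an assumed 20% tolerance, not a prescribed threshold:

```python
# Endurance-check sketch: given response times sampled over a long run,
# flag degradation if the final window is noticeably slower than the first.
def degraded(samples, window=3, tolerance=1.2):
    """True if the mean of the last window exceeds tolerance x the first window."""
    first = sum(samples[:window]) / window
    last = sum(samples[-window:]) / window
    return last > tolerance * first

steady = [100, 102, 98, 101, 99, 100]      # milliseconds, stays flat
leaking = [100, 102, 98, 150, 180, 210]    # slows down over the run
print(degraded(steady), degraded(leaking))  # False True
```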
Spike testing
Spike testing is done by suddenly increasing the number of users, or the load generated by them, by a very large amount and observing the behaviour of the system. The goal is to determine whether performance will suffer, the system will fail, or it will be able to handle dramatic changes in load.
Configuration testing
Rather than testing for performance from the perspective of load, tests are created to determine the effects of configuration changes to the system's components on the system's performance and behaviour. A common example would be experimenting with different methods of load-balancing.
Isolation testing
Isolation testing is not unique to performance testing but involves repeating a test execution that resulted in a system problem. It is often used to isolate and confirm the fault domain.
Setting performance goals
Performance testing can serve different purposes:
It can demonstrate that the system meets performance criteria.
It can compare two systems to find which performs better.
It can measure which parts of the system or workload cause the system to perform badly.
Many performance tests are undertaken without due consideration to the setting of realistic performance goals. The first question from a business perspective should always be "why are we performance testing?". These considerations are part of the business case for the testing. Performance goals will differ depending on the system's technology and purpose; however, they should always include some of the following:
Concurrency/throughput
If a system identifies end-users by some form of log-in procedure then a concurrency goal
is highly desirable. By definition this is the largest number of concurrent system users that
the system is expected to support at any given moment. The work-flow of a scripted
transaction may impact true concurrency especially if the iterative part contains the log-in
and log-out activity.
If the system has no concept of end-users, then the performance goal is likely to be based on a maximum throughput or transaction rate. A common example would be casual browsing of a web site such as Wikipedia.
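A throughput goal of this kind reduces to simple arithmetic over a completed run. A sketch (Python, with invented transaction counts and goal):

```python
# Throughput-goal sketch: with no end-user concept, the goal is a
# transaction rate; compute it from a completed test run.
def throughput_per_sec(transactions, duration_s):
    """Completed transactions divided by wall-clock duration in seconds."""
    return transactions / duration_s

def meets_goal(transactions, duration_s, goal_tps):
    """True if the observed rate reaches the stated goal."""
    return throughput_per_sec(transactions, duration_s) >= goal_tps

print(throughput_per_sec(18000, 600))          # 30.0 transactions per second
print(meets_goal(18000, 600, goal_tps=25.0))   # True: 30 tps >= 25 tps goal
```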
Server response time
This refers to the time taken for one system node to respond to the request of another. A simple example would be an HTTP 'GET' request from a browser client to a web server. In terms of response time, this is what all load testing tools actually measure. It may be relevant to set server response time goals between all nodes of the system.
Render response time
Render response time is difficult for load testing tools to deal with, as they generally have no concept of what happens within a node apart from recognizing a period of time where there is no activity 'on the wire'. To measure render response time, it is generally necessary to include functional test scripts as part of the performance test scenario, which is a feature not offered by many load testing tools.
Performance specifications
It is critical to detail performance specifications (requirements) and document them in any performance test plan. Ideally, this is done during the requirements development phase of any system development project, prior to any design effort.
However, performance testing is frequently not performed against a specification; i.e., no one will have expressed what the maximum acceptable response time for a given population of users should be. Performance testing is frequently used as part of the process of performance profile tuning. The idea is to identify the weakest link: there is inevitably a part of the system which, if it is made to respond faster, will result in the overall system running faster. It is sometimes a difficult task to identify which part of the system represents this critical path, and some test tools include (or can have add-ons that provide) instrumentation that runs on the server (agents) and reports transaction times, database access times, network overhead, and other server monitors, which can be analyzed together with the raw performance statistics. Without such instrumentation one might have to have someone crouched over Windows Task Manager at the server to see how much CPU load the performance tests are generating (assuming a Windows system is under test).
Performance testing can be performed across the web, and even done in different parts of the country, since it is known that the response times of the internet itself vary regionally. It can also be done in-house, although routers would then need to be configured to introduce the lag that would typically occur on public networks. Loads should be introduced to the system from realistic points. For example, if 50% of a system's user base will be accessing the system via a 56K modem connection and the other half over a T1, then the load injectors (computers that simulate real users) should either inject load over the same mix of connections (ideal) or simulate the network latency of such connections, following the same user profile.
It is always helpful to have a statement of the likely peak number of users that might be expected to use the system at peak times. If there can also be a statement of what constitutes the maximum allowable 95th-percentile response time, then an injector configuration could be used to test whether the proposed system meets that specification.
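The 95th-percentile check can be computed with the nearest-rank method over the sampled response times. A sketch (Python, with invented sample data in milliseconds):

```python
import math

# 95th-percentile sketch: the response time that 95% of sampled requests
# meet or beat, using the nearest-rank method.
def percentile_nearest_rank(samples, p):
    """Nearest-rank percentile: value at position ceil(p*n/100) in sorted order."""
    ordered = sorted(samples)
    rank = math.ceil(p * len(ordered) / 100)
    return ordered[rank - 1]

# Hypothetical response times from 20 sampled requests.
times = [120, 130, 125, 140, 500, 135, 128, 122, 131, 127,
         126, 133, 129, 124, 138, 136, 123, 132, 137, 450]
p95 = percentile_nearest_rank(times, 95)
print(p95)  # 450: 19 of the 20 samples are at or below this value
```

Comparing `p95` against the stated maximum allowable value is then a single inequality check, which is what the injector configuration would be verifying.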
Pre-requisites for Performance Testing
A stable build of the system which resembles the production environment as closely as possible.
The performance testing environment should not be shared with the User Acceptance Testing (UAT) or development environment. If a UAT, integration, or other test is running in the same environment, the results obtained from performance testing may not be reliable. As a best practice, it is always advisable to have a separate performance testing environment resembling the production environment as much as possible.
Test conditions
In performance testing, it is often crucial (and often difficult to arrange) for the test conditions to be similar to the expected actual use. This is, however, not entirely possible in practice. The reason is that the workloads of production systems have a random nature, and while the test workloads do their best to mimic what may happen in the production environment, it is impossible to exactly replicate this workload variability, except in the simplest systems.
Loosely-coupled architectural implementations (e.g. SOA) have created additional complexities for performance testing. Enterprise services or assets that share a common infrastructure or platform require coordinated performance testing, with all consumers creating production-like transaction volumes and load on the shared infrastructures or platforms, to truly replicate production-like states. Due to the complexity and the financial and time requirements around this activity, some organizations now employ tools that can monitor and create production-like conditions (also referred to as "noise") in their performance testing environments (PTE) to understand capacity and resource requirements and to verify/validate quality attributes.
Timing
It is critical to the cost performance of a new system that performance test efforts begin at the inception of the development project and extend through to deployment. The later a performance defect is detected, the higher the cost of remediation. This is true of functional testing, but even more so of performance testing, due to the end-to-end nature of its scope. It is always crucial for the performance test team to be involved as early as possible, since key prerequisites, such as performance test environment acquisition and preparation, often form a lengthy and time-consuming process.
Tools
In the diagnostic case, software engineers use tools such as profilers to measure which parts of a device or software contribute most to the poor performance, or to establish throughput levels (and thresholds) for maintaining acceptable response time.
Technology
Performance testing technology employs one or more PCs or Unix servers to act as injectors, each emulating the presence of a number of users and each running an automated sequence of interactions (recorded as a script, or as a series of scripts to emulate different types of user interaction) with the host whose performance is being tested. Usually, a separate PC acts as a test conductor, coordinating and gathering metrics from each of the injectors and collating performance data for reporting purposes. The
usual sequence is to ramp up