Test analysis & design good practices @ TDT Iasi, 17 Oct 2013 (transcript)
Test Analysis & Design – good practices
Raluca Gagea
17 October 2013
We all know the importance of doing the right things from the very beginning and of asking the right questions while the impact on the product under development is still minor. From here to test analysis and test case design is just a small step, but one that needs special attention, experience, intuition, creativity, and many good practices.
I won't "teach" you how to do it best; I'll share some tips that helped me during testing, and we'll try to cover:
- some vocabulary we usually don't use, or use incorrectly
- specific activities and their benefits
- ways to measure progress and report the benefits
- test case writing/design styles, with their advantages and disadvantages
- important properties of test cases
- a few mistakes we all make :)
- the importance of tools and of the testing structures we choose
There is an A for everything
Theoretical side of things
- Definitions, vocabulary, ISTQB
- Good practices
- Things to remember
- Lessons to learn
Practical (realistic) side of things
- Activities and their benefits
- Examples
Fundamental Test Process
[Diagram] Planning; Analysis; Design; Implementation; Execution; Evaluating Exit Criteria and Reporting; Closure – with Control running across all phases.
Test Analysis & Design
Test Analysis & Design is the activity where general testing objectives are transformed into tangible test conditions and test designs.
Test Analysis: the process of looking at something that can be used to derive test information.
Test Design: the process of identifying the associated high-level test cases for a test item.
Test Analysis & Design Vocab
[Diagram] From the test basis (use cases, functional specifications, non-functional specifications, business scenarios, emails, technical documents) and the test objectives, we identify test objects 1..n. Each test object is broken down into test items 1..n, and each test item into test conditions 1..n. Test case design techniques turn the test conditions into test cases 1..n and then into effective test cases 1..n, which, together with their input & output test data, form a test suite.
Test Analysis & Design Vocab
Subscription Form with fields User Name, Age, City, Postal Code and a Submit button.
Input constraints:
- User Name must be between 6 and 12 characters long, must start with a letter, and may contain only letters and digits.
- Age must be a number greater than or equal to 18 and less than 65.
- City must be one of Ottawa, Toronto, Montreal or Halifax.
Here the form is the test object, its fields are the test items, and the input constraints yield the test conditions.
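The Age constraint above can be turned directly into boundary-value test conditions. Below is a minimal Python sketch; the `validate_age` validator is a hypothetical stand-in for the form's real validation logic:

```python
# Sketch: deriving boundary-value tests from the constraint 18 <= age < 65.
# validate_age is a hypothetical validator, not the real form code.

def validate_age(age):
    """Return True if age satisfies the constraint 18 <= age < 65."""
    return isinstance(age, int) and 18 <= age < 65

# Boundary values derived from the constraint: just below, on, and just
# inside each boundary.
boundary_cases = {
    17: False,  # below the lower boundary -> rejected
    18: True,   # on the lower boundary -> accepted
    64: True,   # highest valid value -> accepted
    65: False,  # on the upper boundary -> rejected
}

for age, expected in boundary_cases.items():
    assert validate_age(age) == expected, f"age={age}"
print("all boundary cases pass")
```

The same technique applies to the User Name length (5, 6, 12, 13 characters) and to the City membership check.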
Test Analysis & Design – Why?
Test Basis
- Review the test basis
- Examine the specifications
- Evaluate testability
Select only relevant documents; identify gaps and ambiguities in the specifications, because we are trying to pin down precisely what happens at each point in the system.
This prevents defects from appearing in the code.
By execution time, all the requirements have been translated into testable items.
Test Analysis & Design – Why?
Test Conditions
- Analyze the test items
- Identify test conditions
This gives us a high-level list of what we are interested in testing, and we can start identifying the type of generic test data we might need.
Test Analysis & Design – Why?
Test Cases
- Design the tests
- Use test design techniques
The high-risk areas will be covered by tests before the actual execution phase starts.
Test Analysis & Design – Why?
Test Data
- Identify test data
At execution time, the test cases will be run with data as close as possible to production data.
Test Analysis & Design – Why?
Test Environment
- Design the environment set-up
- Identify any infrastructure and tools
- Check availability
At execution time, everything we need to carry out our work is in place.
Test Analysis & Design – Why?
Test Basis & Test Cases
- Create traceability
At every moment, we can calculate the requirements testing coverage.
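The traceability idea can be sketched as a simple mapping from requirements to the test cases that cover them. The requirement and test case IDs below are hypothetical:

```python
# Sketch: a requirement-to-test-case traceability matrix.
# All IDs are hypothetical examples.
traceability = {
    "REQ-001": ["TC01.01", "TC01.02"],
    "REQ-002": ["TC02.01"],
    "REQ-003": [],          # gap: requirement with no test case yet
}

def uncovered_requirements(matrix):
    """Return the requirements that no test case traces back to."""
    return [req for req, tcs in matrix.items() if not tcs]

print(uncovered_requirements(traceability))  # -> ['REQ-003']
```

Keeping such a mapping up to date is what makes the coverage metrics at the end of this talk computable at any moment.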
Requirements
A requirement is a singular documented physical or functional need that a particular product or service must satisfy or perform.
Requirement types: functional, non-functional, architectural, design, structural and constraint requirements.
Test Basis
Test Oracle
≠ Emails, ≠ Skype
When can we really have them all in place?
Lessons I’ve learnt – First Requirements
[Diagram] A piece of requirements comes in. Are they testable? If YES, add them to the test basis; if NO, they do not enter the test basis yet.
Some documents come in. Are they addressing requirements, or are they additional info for me? If requirements, add them to the test basis if relevant; if info, add them to the KT pack if relevant.
Perform static analysis.
Lessons I’ve learnt – First Test Cases “Design”
Analyze the test basis and the test oracle. Identify the critical functionalities.
Which kinds of test suites will be needed (e.g. smoke, regression, per functionality, per component)?
How can we prioritize them?
What is the impact in terms of effort when changing the priority (review test cases, execute them, track execution status, revert to the initial priority)?
Identify the needed testing types.
Do we need separate test suites/cases for each testing type (manual vs automated, functional vs non-functional, etc.)?
Do we need different design & writing styles for the test cases?
Identify automation needs.
Create separate test suites for automated test cases with different purposes.
Choose an appropriate design and writing style for these test cases.
Identify ways to keep traceability.
Lessons I’ve learnt – Testing Structure
[Diagram] The testing structure is shaped by: the delivery model, the test levels, the number of testing cycles, time, the estimated level of change, and risk.
Testing Structures – some examples
Traditional Waterfall: release cycles are typically several weeks to several months long, and usually have multiple phases of testing (Functional, System Test, Performance, User Acceptance Test, etc.) during any given release cycle.
Execution cycles are planned and scheduled based on these phases, and metrics are tracked for each phase as well.
Organize by Test Phase
Organize by Functionality or System and then by Phase
Testing Structures – some examples
Agile: products are released in shorter, more frequent release cycles, each consisting of multiple Sprints in which one or more User Stories are targeted for development completion.
Organize by Sprints and then by User Stories or functionality -> when you have a large number of releases with shorter Sprint cycles and many overlapping release cycles.
Testing Structures – some examples
Agile: products are released in shorter, more frequent release cycles, each consisting of multiple Sprints.
Organize by moving Sprints at project level as releases -> when you have larger release cycles with many Sprints.
Testing Structures – some examples
Testing a system of systems: large, complex platforms may contain multiple systems or sub-systems, each with its own development and QA tracks, and there may be a need to track testing progress on a per-system basis, followed by system-wide testing.
Organize releases as projects, followed by system-level testing. This is useful when different teams are tracking progress for different systems.
Testing Structures – some examples
Testing a system of systems: large, complex platforms with multiple systems or sub-systems, each with its own development and QA tracks.
Organize releases under the same project, followed by the systems. This is useful when the platform has a larger number of frequent releases and a single PM over all of them.
Test Cases Design – some good practices
Why do we write test cases?
Test cases are more than some sentences used to test various flows. They are our way to prove the level of confidence in what we deliver, by measuring the requirements coverage and its status at every point in the development process: whether the requirements are covered by enough test cases, and whether the test execution status at a given point is the desired or planned one.
Test Cases Design – some good practices
Write test cases before the implementation of the requirements. Write test cases for all the requirements.
Test cases should map precisely to the requirements and not be an enhancement of the requirement.
Test Cases Design – some good practices
Use the same naming convention for all the test cases in a project.
Create unique names for your test cases (use "TC" + identifier + title).
Use the "Action – Target – Scenario" method to formulate the title.
Test Cases Design – some good practices
Test Case Title
- Action: a verb that describes what you are doing (create, delete, ensure, edit, open, populate, login)
- Target: the focus of your test (screen, object entity, program)
- Scenario: the rest of what your test is about, and how you distinguish multiple test cases with the same Action and Target
Test Cases Design – some good practices
Action – Target – Scenario examples:
- Create – Task – title is not supplied
- Create – Task – title is the maximum allowable length
Test Cases Design – some good practices
Write a detailed description of every execution step.
Define one single action per execution step.
Write clear and precise expected results.
Test Cases Design – some good practices
The expected result states: "Verify if an error message is displayed." Issue: when executing the test case, what if the error message says "Please provide postcode" while it should say "Your postcode is invalid"? Solution: the expected result states: "Verify that the error message about an invalid postcode is displayed."
Test Cases Design – some good practices
Each Test Case checks only one testing idea, but two or more expected results are totally acceptable if there is a need to perform several verifications for that testing idea.
Testing idea: “Payment can be performed by MasterCard credit card." Expected results:
1) In DB, cc_transaction table, in MasterCard column, value 1 is registered.
2) Credit card balance is reduced by the amount equal to the amount of the payment.
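One testing idea with two expected results can be written as a single test with two assertions. Below is a minimal Python sketch; the payment gateway and the stand-in for the `cc_transaction` table are hypothetical stubs, not a real payment API:

```python
# Sketch: one testing idea ("payment can be performed by MasterCard")
# verified through two expected results. FakeGateway is a hypothetical stub.

class FakeGateway:
    def __init__(self, balance):
        self.balance = balance
        self.cc_transactions = []   # stands in for the cc_transaction table

    def pay(self, card_type, amount):
        self.balance -= amount
        self.cc_transactions.append({"card_type": card_type, "mastercard": 1})

def test_payment_by_mastercard():
    gateway = FakeGateway(balance=100)
    gateway.pay("MasterCard", 40)
    # Expected result 1: the transaction row records MasterCard (value 1).
    assert gateway.cc_transactions[-1]["mastercard"] == 1
    # Expected result 2: the balance is reduced by the payment amount.
    assert gateway.balance == 60

test_payment_by_mastercard()
print("payment test passed")
```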
Test Cases Design – some good practices
Expected results should meet the test case purpose. Additional steps should be specified separately.
Test Cases Design – some good practices
TC01.01 – Verify Customer User is able to create keyword
Execution steps -> expected results:
1. Login as Customer User. -> Successful login.
2. Navigate to Create Keyword page. -> Create Keyword page is displayed.
3. Complete all fields with valid data. Submit data. -> Info message is displayed that the keyword is successfully created.
4. Navigate to Keywords List page. -> Keywords list page is displayed.
5. Verify that the created keyword is present in the keywords list. -> Newly created keyword is displayed in the keywords list.
Wrong – the Login and Navigate steps are not required, as the purpose of the test is to verify that the user is able to successfully create keywords. Login and page display should be verified in separate test cases.
TC01.01 – Verify Customer User is able to create keyword
Login as Customer User. Navigate to Create Keyword page.
Execution steps -> expected results:
1. Complete all fields with valid data. Submit data. -> Info message is displayed that the keyword is successfully created.
2. Navigate to Keywords List page. Verify that the created keyword is present in the keywords list. -> Newly created keyword is displayed in the keywords list.
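The corrected test case maps naturally onto automated test structure: login and navigation become setup (preconditions), and the assertions check only the test's purpose. A minimal Python sketch, with a hypothetical application stub standing in for the real system:

```python
# Sketch: login/navigation as setup, assertions only on the test's purpose.
# FakeApp and its methods are hypothetical stand-ins for the real application.

class FakeApp:
    def __init__(self):
        self.keywords = []
    def login(self, user):
        self.user = user            # precondition, not verified here
    def open_create_keyword_page(self):
        pass                        # precondition, not verified here
    def create_keyword(self, name):
        self.keywords.append(name)
        return "Keyword successfully created"

def setup_app():
    """Preconditions: a logged-in Customer User on the Create Keyword page."""
    app = FakeApp()
    app.login("customer_user")
    app.open_create_keyword_page()
    return app

def test_customer_user_can_create_keyword():
    app = setup_app()
    # Step 1: complete fields with valid data and submit.
    message = app.create_keyword("discount")
    assert "successfully created" in message
    # Step 2: the new keyword appears in the keywords list.
    assert "discount" in app.keywords

test_customer_user_can_create_keyword()
print("keyword test passed")
```

In a real framework, `setup_app` would typically be a shared fixture so the preconditions are not re-verified by every test.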
Test Cases Design – important attributes
Test cases should be:
- Effective: have a high probability of detecting errors.
- Non-redundant: practical and with low redundancy; any feature under test should not be repeated in different test cases, and two test cases should not find the same defect.
- Clear: a clear flow of events and a correspondence between execution steps and expected results; unambiguously defined execution steps and expected results.
- Detailed: contain the detailed steps needed to test a particular function; no missing execution steps, no unnecessary execution steps.
- Accurate: no drawbacks like spelling mistakes; use the system's exact functionality/GUI names.
- Written in short and simple language: short rather than lengthy, in simple language, so that any person is able to understand the scope of each test case.
- Evolvable: well structured and maintainable, neither too simple nor too complex; separate test cases for positive and negative scenarios; limit to 15 execution steps.
- Complete: cover all the features/functionalities that have to be tested.
- Traceable: each test case can be traced back to a requirement/use case.
- Repeatable: the result of the test case should always be the same, no matter how many times it has been executed before.
- Self-cleaning: return the test environment to a clean state.
Test Cases Cascading vs Independent design
Cascading style: test cases are built on each other. Simpler and smaller; the output of one test case becomes the input of the next. Arranging test cases in the right order saves time during test execution, but if one test case fails, the subsequent tests may be invalid or blocked.
Independent style: each test case is self-contained and does not rely on any other test case. Any number of test cases can be executed in any order, but they are larger and more complex, and harder to design, create and maintain.
Test Cases High-Level vs Low-Level writing style
High-level style: test cases define what to test in general terms, without specific values for input data and expected results. Less time to write and greater flexibility in execution; more appropriate when tests are executed by testers with a vast knowledge of the application.
Low-level style: test cases have specific values defined for both inputs and expected results. Repetitive, but they can be executed even by a tester who is just learning the application; it is easier to determine pass or fail criteria, and easier to automate.
Test Cases Design – some good practices
Test cases must evolve during the entire software development lifecycle.
Test Cases Design – some good practices
As requirements change, testers must adjust the test cases accordingly.
Test cases must be modified to accommodate the additional information obtained from other phases.
Each test case modified upon a change request should have in its description a record of what drove the change (email, meeting minutes, use case ID).
As defects are found and corrected, test cases must be updated to reflect the changes and additions to the system.
When a new scenario is encountered, it must be evaluated, assigned a priority and added to the set of test cases.
Due to changes in requirements, design or implementation, test cases often become obsolete and out-of-date. Given the pressure to complete the testing, testers continue their tasks without ever revisiting the test cases. The problem is that if the test cases become outdated, the initial work of creating them is wasted, and additional manual tests executed without a test case in place cannot be repeated.
Test Analysis & Design – Metrics & Measurements
- Percentage of requirements or quality (product) risks covered by test conditions
- Percentage of test conditions covered by test cases
- Number of defects found during test analysis and design
What cannot be measured cannot be managed.
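The two percentage metrics are straightforward to compute once the traceability links exist. A minimal Python sketch, with all IDs hypothetical:

```python
# Sketch: computing the two coverage metrics from traceability links.
# Requirement, condition and test case IDs are hypothetical examples.
requirements = {"REQ-001", "REQ-002", "REQ-003"}
conditions = {"COND-1": "REQ-001", "COND-2": "REQ-002"}  # condition -> requirement
test_cases = {"TC01": "COND-1"}                          # test case -> condition

def pct(part, whole):
    """Percentage of `whole` accounted for by `part`."""
    return 100.0 * len(part) / len(whole)

covered_reqs = set(conditions.values())    # requirements with >= 1 condition
covered_conds = set(test_cases.values())   # conditions with >= 1 test case
print(f"requirements covered by conditions: {pct(covered_reqs, requirements):.0f}%")
print(f"conditions covered by test cases: {pct(covered_conds, conditions):.0f}%")
```

With the sample data this reports 67% requirements coverage and 50% condition coverage, immediately exposing REQ-003 and COND-2 as gaps.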
Thanks for attending this session!
Questions? Debates? Thoughts?