Management Issues in Test Automation



DESCRIPTION

Many organizations never achieve the significant benefits that are promised from automated test execution. Surprisingly often, this is due not to technical factors but to management issues, especially at system testing level. Dot Graham describes the most important management concerns the test manager must address for test automation success, and helps you understand and choose the best approaches for your organization—no matter which automation tools you use or your current state of automation. Dot explains how automation affects staffing, who should be responsible for which automation tasks, how managers can best support automation efforts leading to success, why return on investment can be dangerous, and what you can realistically expect. Dot also reviews a few key technical issues that can make or break the automation effort. Come away with an example set of automation objectives and measures, and a draft test automation strategy that you can use to plan or improve your own automation.

TRANSCRIPT

Page 1: Management Issues in Test Automation

MC AM Tutorial
4/7/2014 8:30 AM

"Management Issues in Test Automation"

Presented by:
Dorothy Graham, Independent Consultant

Brought to you by:
340 Corporate Way, Suite 300, Orange Park, FL 32073
888-268-8770 · 904-278-0524 · [email protected] · www.sqe.com

Page 2: Management Issues in Test Automation

Dorothy Graham, Independent Test Consultant

In software testing for forty years, Dorothy Graham is coauthor of four books—Software Inspection, Software Test Automation, Foundations of Software Testing, and Experiences of Test Automation—and is currently working with Seretta Gamba on a new book on test automation patterns. A popular and entertaining speaker at conferences and seminars worldwide, Dot has attended STAR conferences since the first one in 1992. She was a founding member of the ISEB Software Testing Board and a member of the working party that developed the ISTQB Foundation Syllabus. Dot was awarded the European Excellence Award in Software Testing in 1999 and the first ISTQB Excellence Award in 2012. Learn more about Dot at DorothyGraham.co.uk.

Page 3: Management Issues in Test Automation

Management Issues in Test Automation: Contents

Session 0: Introduction to the tutorial
– Tutorial objectives
– What we cover (and don't cover) today

Session 1: Planning and Managing Test Automation
– Responsibilities
– Pilot project
– Test automation objectives (and exercise)
– Two measures: coverage and EMTE
– Return on Investment (ROI)

Session 2: Technical Issues for Managers
– Testware architecture
– Scripting, keywords and Domain-Specific Test Language (DSTL)
– Automating more than execution

Session 3: Final Advice, Strategy and Conclusion
– Final advice
– Strategy exercise
– Conclusion

Appendix (useful stuff)
– That's no reason to automate (Better Software article)
– Man and Machine, Jonathan Kohl (Better Software)
– Technical vs non-technical skills in test automation

Page 4: Management Issues in Test Automation


Management Issues in Test Automation

Prepared and presented by

Dorothy Graham

© Dorothy Graham 2014

www.DorothyGraham.co.uk email: [email protected]

Twitter: @DorothyGraham

Tutorial description

• Many organizations never achieve the significant benefits that are promised from automated test execution. Surprisingly often, this is due not to technical factors but to management issues.
• Dot Graham describes the most important management concerns the test manager must address for test automation success, and helps you understand and choose the best approaches for your organization—no matter which automation tools you use or your current state of automation.
• Dot explains how automation affects staffing, who should be responsible for which automation tasks, how managers can best support automation efforts leading to success, what return on investment means in automated testing, why ROI can be dangerous, and what you can realistically expect.
• Dot also reviews the key technical issues that can make or break the automation effort.
• Come away with an example set of automation objectives and measures, and a draft test automation strategy that you can use to plan or improve your own automation.

Page 5: Management Issues in Test Automation


Objectives of this tutorial

• help you achieve better success in automation
  – independent of any particular tool
• mainly management but a few technical issues
  – responsibilities, pilot project
  – objectives for automation
  – Return on Investment (ROI)
  – critical technical issues for managers
  – what works in practice (case studies)
• help you plan an effective automation strategy

Tutorial contents

1) Planning & Managing Test Automation
2) Technical Issues for Managers
3) Final advice and Conclusion

Page 6: Management Issues in Test Automation


Shameless commercial plug

www.DorothyGraham.co.uk · [email protected] · testautomationpatterns.org

[Book covers: Part 1: How to do automation – still relevant today, though we plan to update it at some point. New book!]

What is today about? (and not about)

• test execution automation (not other tools)
• I will NOT cover:
  – demos of tools (no time, which one, expo)
  – comparative tool info / selecting a tool*
• at the end of the day
  – understand management issues
  – be aware of critical technical issues
  – have your own automation objectives
  – plan your own automation strategy

* I will email you STA Ch 10 on request – [email protected]

Page 7: Management Issues in Test Automation


About you

• your Summary and Strategy document
  – where are you now with your automation?
  – what are your most pressing automation problems?
  – why are you here today?
• your objectives for this tutorial

Page 8: Management Issues in Test Automation


Planning & Managing Test Automation

Management Issues in Test Automation

1 Managing 2 Technical 3 Conclusion

Contents

• Responsibilities
• Pilot project
• Test automation objectives
• Two measures for automation
• Return on Investment (ROI)

Page 9: Management Issues in Test Automation


What is an automated test?

• a test!
  – designed by a tester for a purpose
• test is executed
  – implemented / constructed to run automatically using a tool
  – could be run manually also
• who decides what tests to run?
• who decides how a test is run?

Existing perceptions of automation skills

• test automation is technical in some ways
  – using the test execution tool directly (script writing)
  – designing the testware architecture (framework / regime)
  – debugging automation problems
• this work requires technical skill
  – most people now realise this (but many still don't)
• do testers need to be the automators?
  – common perception now: testers need to be able to write code

See article: "Technical vs non-technical skills in test automation"

Page 10: Management Issues in Test Automation


Responsibilities

Testers:
• test the software
  – design tests
  – select tests for automation (requires planning / negotiation)
• execute automated tests
  – should not need detailed technical expertise
• analyse failed automated tests
  – report bugs found by tests
  – problems with the tests may need help from the automation team

Automators:
• automate tests (requested by testers)
• support automated testing
  – allow testers to execute tests
  – help testers debug failed tests
  – provide additional tools (home-grown)
• predict
  – maintenance effort for software changes
  – cost of automating new tests
• improve the automation
  – more benefits, less cost

Pattern: AUTOMATION ROLES

Test manager's dilemma

• who should undertake automation work?
  – not all testers can automate (well)
  – not all testers want to automate
  – not all automators want to test!
• conflict of responsibilities (if you are both tester and automator)
  – should I automate tests or run tests manually?
• get additional resources as automators?
  – contractors? borrow a developer? tool vendor?

Page 11: Management Issues in Test Automation


Agile automation: Lisa Crispin

– starting point: buggy code, new functionality needed, whole team regression tests manually
– testable architecture (open source tools)
  • want unit tests automated (TDD), start with new code
  • start with GUI smoke tests - regression
  • business logic in middle level with FitNesse
– 100% regression tests automated in one year
  • selected set of smoke tests for coverage of stories
– every 6 months, engineering sprint on the automation
– other key success factors
  • management support & communication
  • whole team approach, celebration & refactoring

Automation and agile

• can't do agile without automation
  – in agile teams, developer-tester works well
• apply agile principles to automation
  – automation sprints, refactor when needed
• support manual and automated tests
• fitting automation into agile development
  – ideal: automation is part of "done" for each sprint
  – alternative: automation in the following sprint
    • may be better for system level tests

See www.satisfice.com/articles/agileauto-paper.pdf (James Bach)

Page 12: Management Issues in Test Automation


Automation in agile/iterative development

[Diagram: across successive releases (A, B, C, … F), testers manually test the current release while the automators automate the best tests from earlier releases; the growing automated regression suite is then run by the testers against each new release.]


Page 13: Management Issues in Test Automation


A tale of two projects: Ane Clausen

– Project 1: 5 people part-time, within test group
  • no objectives, no standards, no experience, unstable
  • after 6 months was closed down
– Project 2: 3 people full time, 3-month pilot
  • worked on two (easy) insurance products, end to end
  • 1st month: learn and plan, 2nd & 3rd months: implement
  • started with simple, stable, positive tests, easy to do
  • close cooperation with business, developers, delivery
  • weekly delivery of automated Business Process Tests
– after 6 months, automated all insurance products

Pilot project

• reasons
  – you're unique
  – many variables / unknowns at start
• benefits
  – find the best way for you (best practice)
  – solve problems once
  – establish confidence (based on experience)
  – set realistic targets
• objectives
  – demonstrate tool value
  – gain experience / skills in the use of the tool
  – identify changes to existing test process
  – set internal standards and conventions
  – refine assessment of costs and achievable benefits

Pattern: DO A PILOT

Page 14: Management Issues in Test Automation


What to explore in the pilot

• build / implement automated tests (architecture)
  – different ways to build stable tests (e.g. 10 – 20)
• maintenance
  – different versions of the application
  – reduce maintenance for most likely changes
• failure analysis
  – support for identifying bugs
  – coping with common bugs affecting many automated tests

Also: naming conventions, reporting results, measurement

After the pilot…

• having processes & standards is only the start
  – 30% on new process, 70% on deployment
    • marketing, training, coaching
    • feedback, focus groups, sharing what's been done
• the (psychological) Change Equation
  – change only happens if (x + y + z) > w
    x = dissatisfaction with the current state
    y = shared vision of the future
    z = knowledge of the steps to take to get from here to there
    w = psychological / emotional cost of change for this person

Source: Erik van Veenendaal, Successful Test Process Improvement

Page 15: Management Issues in Test Automation



An automation effort

• is a project (getting started or major changes)
  – with goals, responsibilities, and monitoring
  – but not just a project – ongoing effort is needed
• not just one effort – continuing
  – when acquiring a tool – pilot project
  – when anticipated benefits have not materialized
  – different projects at different times
    • with different objectives
• objectives are important for automation efforts
  – where are we going? are we getting there?

Page 16: Management Issues in Test Automation


Efficiency and effectiveness

[2×2 chart: effectiveness (low to high) against efficiency (manual = slow testing, automated = fast testing)]
• low effectiveness + manual = poor slow testing: worst
• low effectiveness + automated = poor fast testing: not good, but common
• high effectiveness + manual = good slow testing: better
• high effectiveness + automated = good fast testing: greatest benefit

Good objectives for automation?

– run regression tests evenings and weekends
– increase test coverage
– run tests tedious and error-prone if run manually
– gain confidence in the system
– reduce the number of defects found by users

Page 17: Management Issues in Test Automation
Page 18: Management Issues in Test Automation

Test Automation Objectives Exercise

The following are possible test automation objectives. Evaluate each one - is it a good one? If not, why not? Which are already in place in your own organisation?

Possible test automation objectives (for each, note: good automation objective? if not, why not? already in place?):
• Run tests every night on all PCs — NO. Or MAYBE, if we know the tests being run are worthwhile
• Automate all of our manual tests
• Make it easy for business users to write and run automated tests
• Ensure repeatability of regression tests
• Ensure that we meet our next release deadline (we're under time pressure)
• Find lots of bugs by running automated tests
• Find defects in less time
• Free testers from repeated (boring) test execution to spend more time doing exploratory testing
• Reduce the number of testers
• Reduce elapsed time for testing by x%
• Run more tests
• Run tests more often

Page 19: Management Issues in Test Automation
Page 20: Management Issues in Test Automation


Reduce test execution time

[Bar charts: relative effort for edit tests (maintenance), set-up, execute, analyse failures, and clear-up, shown for manual testing, the same tests automated, and more mature automation. Automation shrinks the execute portion; the other activities remain.]

Automate x% of the manual tests?

[Diagram: the manual tests and the automated tests overlap only partly. Some manual tests are not worth automating, or are exploratory; some tests are not automated yet; and some automated tests (and verification) are not possible to do manually, so "% of manual tests automated" misses part of the automation's value.]

Page 21: Management Issues in Test Automation


What finds most bugs? What is usually automated?

[Chart: likelihood of finding bugs is lowest for regression tests and highest for exploratory testing – yet regression tests are what is most often automated.]

automation objective – find lots of bugs? No! – not for regression test automation

Automation success = find lots of bugs?

• tests find bugs, not automation
• automation is a mechanism for running tests
• the bug-finding ability of a test is not affected by the manner in which it is executed
• this can be a dangerous objective
  – especially for regression automation!

[Chart: bugs found by automated tests vs manual scripted, exploratory, and fix verification testing – Experiences of Test Automation, Ch 27, p 503, Ed Allen & Brian Newman]

Page 22: Management Issues in Test Automation


When is "find more bugs" a good objective for automation?

• objective is "fewer regression bugs missed"
• when the first run of a given test is automated
  – MBT, exploratory test automation, automated test design
  – keyword-driven (e.g. users populate spreadsheet)
• find bugs in parts we wouldn't have tested?
  – indirect! (direct result of running more tests)

Good objectives for test automation

• realistic and achievable
• short and long term
• regularly re-visited and revised
• measurable
• should be different objectives for testing and for automation
• automation should support testing activities

Pattern: SET CLEAR GOALS

Page 23: Management Issues in Test Automation



Have you heard:

"We need to increase our coverage"
"Make sure you cover 100%!"
"What coverage are we getting?"
"If we automate, we will get better coverage"
"We need as much coverage as possible"

Page 24: Management Issues in Test Automation


What is coverage?

[Diagram: a set of tests over the software – this part of the software has been covered by these tests; the rest has not been covered by these tests.]

Tested everything?

[Diagram: more tests cover more of the system, but never all of it.]

100%? – of what? modules, statements, branches, states, data? menu options, user stories, error messages?

Page 25: Management Issues in Test Automation


What is coverage?

• coverage is a relationship
  – between a set of tests
  – and some part of the software
• an objective measure of some aspect of thoroughness
• 100% coverage is not 100% tested
  – other levels of coverage
  – only needs one test to "tick the box"
  – tests can fail, poor quality tests – still get coverage
  – coverage only of what is there, not what's missing
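To make the relationship concrete, here is a minimal sketch (Python, with hypothetical item names) of coverage as a tests-versus-items relationship; the item type could equally be branches, menu options, or user stories:

    # Coverage as a relationship: which of the chosen items in the
    # software were exercised by this set of tests? (Illustrative only;
    # item names are hypothetical.)

    def coverage_percent(items_in_software, items_exercised):
        """Percentage of the chosen items exercised by the tests."""
        if not items_in_software:
            return 0.0
        covered = items_in_software & items_exercised
        return 100.0 * len(covered) / len(items_in_software)

    branches = {"B1", "B2", "B3", "B4"}   # what is there (not what's missing!)
    exercised = {"B1", "B3"}              # what these tests touched
    print(f"{coverage_percent(branches, exercised):.0f}% branch coverage")  # 50%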

Coverage is NOT "we've run all of the tests" [that we have thought of]

[Diagram: running all of the planned tests says nothing about how much of the software they touch.]

This is test completion! Don't call it "coverage"!

Page 26: Management Issues in Test Automation


Coverage

• next time you hear "coverage", ask: "of what?"
  – and why
• coverage is a relationship
  – between tests and what-is-tested
• completing the planned tests is NOT coverage!
  – ban the phrase "test coverage"

EMTE – what is it?

• Equivalent Manual Test Effort
  – given a set of automated tests,
  – how much effort would it take IF those tests were run manually
• note
  – you would not actually run these tests manually
  – EMTE = what you could have tested manually, given what you did test automatically
  – used to show test automation benefit

Page 27: Management Issues in Test Automation


EMTE – how does it work?

[Diagram: in a given period of manual testing there is only time to run the tests 1.5 times. Automating the manual testing one-for-one doesn't make sense – the automated tests can be run much more often.]

EMTE – how does it work? (2)

[Diagram: the automated tests are run many times; the manual effort all those runs would have taken is the EMTE.]

Page 28: Management Issues in Test Automation


EMTE example

• example
  – automated tests take 2 hours
  – if those same tests were run manually, 4 days
• frequency
  – automated tests run every day for 2 weeks (including once at the weekend), 11 times
• calculation
  – EMTE = 11 runs × 4 days = 44 days of equivalent manual test effort
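Spelling the arithmetic out as a sketch (Python, numbers from the example above):

    # EMTE worked example: tests that would take 4 days of manual effort
    # are run automatically 11 times in two weeks.
    manual_effort_days = 4      # effort IF those tests were run manually
    runs = 11                   # automated runs in the period
    machine_hours_per_run = 2   # actual automated execution time

    emte_days = runs * manual_effort_days
    print(f"EMTE = {runs} x {manual_effort_days} days = {emte_days} days")
    print(f"Actual machine time: {runs * machine_hours_per_run} hours")
    # EMTE = 11 x 4 days = 44 days of equivalent manual test effort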


Page 29: Management Issues in Test Automation


Is this Return on Investment (ROI)?

• tests are run more often
• tests take less time to run
• it takes less human effort to run tests
• we can test (cover) more of the system
• we can run the equivalent of days / weeks of manual testing in a few minutes / hours
• faster time to market

Examples of ROI achieved

• Michael Snyman, South African bank (Ch 29.13)
  – US$4m testing project, automation $850K
  – savings $8m, ROI 900%
• Henri van de Scheur, database testing (Ch 2)
  – results: 2400 times more efficient
• Stefan Mohacsi, Armin Beer: European Space Agency (Ch 9)
  – MBT, break even after four test cycles

from: Experiences of Test Automation book
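As a sketch of the standard calculation (Python; the exact percentage depends on what is counted as cost and benefit, which is presumably why the book's figure is a rounded 900%):

    # Standard ROI formula: (benefit - cost) / cost, as a percentage.
    def roi_percent(benefit, cost):
        return 100.0 * (benefit - cost) / cost

    savings = 8_000_000     # benefit attributed to the automation
    investment = 850_000    # cost of the automation effort
    print(f"ROI = {roi_percent(savings, investment):.0f}%")  # ~841% on these rounded figures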

Page 30: Management Issues in Test Automation


How important is ROI?

• ROI can be dangerous
  – easiest way to measure: tester time
  – may give impression that tools replace people
• "automation is an enabler for success, not a cost reduction tool"
  – Yoram Mizrachi, "Planning a mobile test automation strategy that works", ATI magazine, July 2012
• many achieve lasting success without measuring ROI (depends on your context)
  – need to be aware of benefits (and publicize them)

An example comparative benefits chart

[Bar chart, manual vs automated: execution speed 14× faster, tests run 5× more often, 4× more data variety, 12× less tester effort.]

ROI spreadsheet – email me for a copy

Page 31: Management Issues in Test Automation


Why measure automation?

• to justify and confirm starting automation
  – business case for purchase/investment decision; to confirm ROI has been achieved, e.g. after pilot
  – both compare manual vs automated testing
• to monitor on-going automation "health"
  – for increased efficiency, continuous improvement
  – build time, maintenance time, failure analysis time, refactoring time
• on-going costs – what are the benefits?
  – monitor your automation objectives

Sample 'starter kit' of metrics for test automation (and testing)

• possible examples:
  – some measure of benefit, e.g. EMTE or coverage
  – average time to automate a set of tests
  – maintenance effort (average per test)
  – failure analysis effort (different types of change)
  – also measure testing (e.g. DDP)
• change if not giving you useful information
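As a sketch of how lightweight such a starter kit can be (Python; the fields and figures are illustrative, not prescribed by the tutorial):

    # A minimal record of automation 'health' metrics per test cycle.
    from dataclasses import dataclass

    @dataclass
    class CycleMetrics:
        emte_days: float          # benefit: equivalent manual test effort
        build_hours: float        # time to automate new tests this cycle
        maintenance_hours: float  # upkeep for software changes
        analysis_hours: float     # failure analysis effort

        def cost_hours(self):
            return self.build_hours + self.maintenance_hours + self.analysis_hours

    cycle = CycleMetrics(emte_days=44, build_hours=30,
                         maintenance_hours=12, analysis_hours=8)
    print(f"benefit: {cycle.emte_days} EMTE days "
          f"for {cycle.cost_hours()} hours of effort")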

Page 32: Management Issues in Test Automation


Summary: key points

• Assign responsibility for automation (and testing)
• Use a pilot project to explore the best ways of doing things
• Know your automation objectives
• Measure what's important to you
• Show ROI if needed

Good objectives for automation?

– run regression tests evenings and weekends → only if they are worthwhile tests!
– increase test coverage → can be a good one, but depends what is meant by "test" coverage
– run tests tedious and error-prone if run manually → good objective
– gain confidence in the system → an objective for testing, but automated regression tests help achieve it
– reduce the number of defects found by users → good objective for testing, maybe not a good objective for automation!

Page 33: Management Issues in Test Automation
Page 34: Management Issues in Test Automation

Test Automation Objectives: Solution Ideas

• Run tests every night on all PCs → NO, or MAYBE, if we know the tests being run are worthwhile.
• Automate all of our manual tests → NO. Automate only those tests that are worth automating. Plus automate more than manual tests.
• Make it easy for business users to write and run automated tests → YES. With the right testware architecture, non-technical testers can do this (with support).
• Ensure repeatability of regression tests → PROBABLY. Tools will run the same tests in the same way every time. But this is not what users do!
• Ensure that we meet our next release deadline (we're under time pressure) → NO. Automation may help to run some tests that are required before release, but good automation takes time and effort to implement.
• Find lots of bugs by running automated tests → NO. Automation just runs tests. It is the tests that find bugs, whether they are run manually or are automated.
• Find defects in less time → Not really. Some types of defects (regression bugs) will be found more quickly by automated tests, but it may actually take longer to analyse the failures found.
• Free testers from repeated (boring) test execution to spend more time doing exploratory testing → YES. This is a good objective for test execution automation.
• Reduce the number of testers → NO. You will need more staff to implement the automation, not fewer. It can make existing staff more productive by spending more time on test design.
• Reduce elapsed time for testing by x% → NO. Elapsed time depends on many factors, and not much on whether tests are automated (see further explanation in the slides).
• Run more tests → MAYBE. It is better to increase the value of tests, not the number. But long term, this can be a benefit, if a useful number of useful tests are automated.
• Run tests more often → YES, this is what the test execution tools do best (as long as they are useful tests).

Page 35: Management Issues in Test Automation
Page 36: Management Issues in Test Automation


Test Automation Objectives: Selection and Measurement

On this page, record the test objectives that would be most appropriate for your organisation (and why), and how you will measure them (what to measure and how to measure it). If you currently have automation objectives in place in your organisation that are not good ones, make sure that they are removed and replaced by the better ones below!

Proposed test automation objective (with justification) | What to measure and how to measure it

Add any comments or thoughts here or on the back of this page.

Page 37: Management Issues in Test Automation
Page 38: Management Issues in Test Automation


Technical Issues for Managers

Management Issues in Test Automation

1 Managing 2 Technical 3 Conclusion

Contents

• Testware architecture
• Scripting, keywords and DSTL
• Automating more than execution

Page 39: Management Issues in Test Automation


Testware architecture

[Diagram: testers write tests (in a DSTL) using high-level keywords; the test automator(s) maintain the structured scripts and structured testware beneath them; the test execution tool runs the scripts. Abstraction at the keyword level makes automated tests easier to write (widely used); abstraction at the script level makes the testware easier to maintain and makes it possible to change tools (long life).]

Easy way out: use the tool's architecture

• tool will have its own way of organising tests
  – where to put things (for the convenience of the tool!)
  – will "lock you in" to that tool – good for vendors!
• a better way (gives independence from tools) – see the sketch below
  – organise your tests to suit you
  – as part of pre-processing, copy files to where the tool needs (expects) to find them
  – as part of post-processing, copy back to where you want things to live
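A minimal sketch of that pre-/post-processing idea in Python; the paths and tool layout are hypothetical:

    # Keep testware organised to suit us; copy it to/from wherever the
    # tool expects it, as part of pre- and post-processing.
    import shutil
    from pathlib import Path

    OUR_TESTWARE = Path("testware/suite_a")   # our structure, under our control
    TOOL_WORKDIR = Path("tool_workspace")     # where the tool looks (hypothetical)

    def pre_process():
        """Copy scripts and data to where the tool needs to find them."""
        for part in ("scripts", "data"):
            shutil.copytree(OUR_TESTWARE / part, TOOL_WORKDIR / part,
                            dirs_exist_ok=True)

    def post_process():
        """Copy results back to where we want them to live."""
        shutil.copytree(TOOL_WORKDIR / "results", OUR_TESTWARE / "results",
                        dirs_exist_ok=True)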

Page 40: Management Issues in Test Automation


Tool-specific script ratio

[Diagram: two testware stacks between the testers and the test execution tool. When most of the testware is tool-specific scripts, maintenance and/or tool-dependence is high; a better ratio keeps most of the testware non-tool-specific, with only a thin tool-specific layer.]

Key issues

• scale
  – many scripts, data files, results files, etc.
• shared scripts and data
  – reuse and sharing, not multiple copies
• multiple versions
  – different software versions need different test versions; old tests may still be required
• multiple environments / platforms
• test results are different to the test materials
• standard approach improves productivity

Pattern: TESTWARE ARCHITECTURE

Page 41: Management Issues in Test Automation



Levels of scripting

• capture replay: high maintenance costs
• structured scripts use programming constructs
  – modular, calling structure, loops, IF statements
  – few scripts are then affected by changes
• data-driven: control scripts process spreadsheets / databases
  – easy to add new similar tests
• keyword-driven / DSTL / framework
  – one control script processes actions and data
  – including verification actions

Patterns: DATA-DRIVEN TESTING, KEYWORD-DRIVEN TESTING, DOMAIN-DRIVEN TESTING

Page 42: Management Issues in Test Automation


Data-driven example

Data file: TestCase1
  FILE: countries   ADD: Sweden, USA   MOVE: 4,1   ADD: Norway   DELETE: 2, 7

Data file: TestCase2
  FILE: Europe   ADD: France, Germany   MOVE: 1,3; 2,2; 5,3   DELETE: 1

Control script:
  For each TESTCASE
      OpenDataFile(TESTCASEn)
      For each RECORD
          ReadDataFile(RECORD)
          Case (Column(RECORD))
              FILE:   OpenFile(INPUTFILE)
              ADD:    AddItem(ITEM)
              MOVE:   MoveItem(FROM, TO)
              DELETE: DeleteItem(ITEM)
              …
      Next RECORD
  Next TESTCASE

Data vs keyword-driven files

keyword approach:
  ScribbleOpen Europe
  AddToList France Italy
  MoveItem 1 to 3
  MoveItem 2 to 2
  DeleteItem 1
  MoveItem 5 to 2
  SaveAs Test2

data-driven approach:
  FILE: Europe   ADD: France, Italy   MOVE: 1,3; 2,2; 5,2   DELETE: 1   SAVE: Test2

Which is easier to read/understand? What happens when the test becomes large and complex? The keyword file looks more like a test.
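A minimal sketch of a keyword-driven control script in Python, using the slide's keywords; the application driver (app) and its methods are hypothetical:

    # One interpreter loop runs any test written in the keyword file,
    # so testers write tests (in the DSTL), not code.
    ACTIONS = {
        "ScribbleOpen": lambda app, args: app.open_file(args[0]),
        "AddToList":    lambda app, args: [app.add_item(x) for x in args],
        "MoveItem":     lambda app, args: app.move_item(int(args[0]), int(args[2])),  # "1 to 3"
        "DeleteItem":   lambda app, args: app.delete_item(int(args[0])),
        "SaveAs":       lambda app, args: app.save_as(args[0]),
    }

    def run_keyword_test(app, path):
        """Read a keyword file and dispatch each line to the application driver."""
        with open(path) as f:
            for line in f:
                if not line.split():
                    continue              # skip blank lines
                keyword, *args = line.split()
                ACTIONS[keyword](app, args)

Adding a new keyword means adding one entry to the table; the tests themselves stay readable.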

Page 43: Management Issues in Test Automation


Tool-independent framework

[Diagram: test procedures / definitions are written in a tool-independent scripting language and run through a framework. Only the script libraries are tool-dependent, so the same tests can be run by the test tool, by another test tool, or even manually, against the software under test.]


Page 44: Management Issues in Test Automation


Automated tests / automated testing

Automated tests:
• select / identify test cases to run
• set up test environment: create test environment, load test data
• repeat for each test case: set up test pre-requisites, execute, compare results, log results, analyse test failures, report defect(s), clear up after test case
• clear up test environment: delete unwanted data, save important data
• summarise results

Automated testing:
• select / identify test cases to run
• set up test environment: create test environment, load test data
• repeat for each test case: set up test pre-requisites, execute, compare results, log results, clear up after test case
• clear up test environment: delete unwanted data, save important data
• summarise results
• analyse test failures, report defects

[The original slide marks each step as an automated or a manual process: far more of the process is automated in "automated testing", leaving only failure analysis and defect reporting to people.]

Comparison of results

• more reliance on the correctness of your expected results ("golden version")
• masking / filtering (e.g. date test is run, different order, etc.) – see the sketch below
• may take significant effort
  – dynamic comparison vs post-execution
  – sensitive vs robust tests (what to compare)
  – false fail (eats time), false pass (misses bugs)
  – make your automated tests red until proven green
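A sketch of the masking/filtering idea in Python; the patterns are illustrative and would be chosen per application:

    # Mask fields that are expected to differ (run date, timestamps, ids)
    # before comparing actual output to the "golden version", to avoid
    # false fails without hiding real differences.
    import re

    MASKS = [
        (re.compile(r"\d{4}-\d{2}-\d{2}"), "<DATE>"),
        (re.compile(r"\d{2}:\d{2}:\d{2}"), "<TIME>"),
        (re.compile(r"session id: \w+"), "session id: <ID>"),
    ]

    def normalise(text):
        for pattern, replacement in MASKS:
            text = pattern.sub(replacement, text)
        return text

    def outputs_match(actual, expected):
        """Robust comparison: mask what legitimately varies, compare the rest."""
        return normalise(actual) == normalise(expected)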

Page 45: Management Issues in Test Automation


Outside the box: Jonathan Kohl

• task automation (throw-away scripts)
  – entering data sets to 2 browsers (verify by watching)
  – install builds, copy test data
• support manual exploratory testing
• testing under the GUI to the database ("side door")
• don't believe everything you see
  – 1000s of automated tests pass too quickly
  – monitoring tools to see what was happening
  – "if there's no error message, it must be ok"
    • defects didn't make it to the test harness
    • overloaded system ignored data that was wrong
  – "zombie tests" (see also Julian Harty, Christophe Mecke)

Automation +

[Mind map: beyond traditional test automation (execution and comparison), automation can include:
• DSTL and structured testware architecture
• looser oracles; exploratory test automation (ETA), monkeys
• pre- and post-processing
• metrics, e.g. EMTE
• utilities, e.g. data load
• disposable scripts]

Page 46: Management Issues in Test Automation


Summary: key points

• Structure your automation testware to suit you
• Use the highest level of scripting that you need
  – e.g. keyword / Domain-Specific Test Language
• Automate more than execution

Page 47: Management Issues in Test Automation
Page 48: Management Issues in Test Automation


Final Advice and Conclusion

Management Issues in Test Automation

1 Managing 2 Technical 3 Conclusion

Contents

• Final advice
• Your strategy
• Conclusion

Page 49: Management Issues in Test Automation


Dealing with high-level management

• management support
  – building good automation takes time and effort
  – set realistic expectations
  – for long-term success, you need this!
• benefits and ROI
  – make benefits visible (charts on the walls)
  – metrics for automation
    • to justify it, compare to manual test costs over iterations
• on-going continuous improvement
  – build cost, maintenance cost, failure analysis cost
  – coverage of system tested

Dealing with developers

• critical aspect for successful automation
  – automation is development
• may need help from developers
  – automation needs development standards to work
  – testability is critical for automatability
  – why should they work to new standards if there is "nothing in it for them"?
• seek ways to cooperate and help each other
  – run tests for them
    • in different environments
    • rapid feedback from smoke tests
  – help them design better tests?

Page 50: Management Issues in Test Automation


Standards and technical factors

• standards for the testware architecture
  – where to put things
  – what to name things
  – how to do things
  – but allow exceptions if needed
• new technology can be great
  – but only if the context is appropriate for it (e.g. Model-Based Testing)
• use automation "outside the box"

On-going automation

• you are never finished
  – don't "stand still": schedule regular review and re-factoring of the automation
  – change tools, hardware when needed
  – re-structure if your current approach is causing problems
• regular "pruning" of tests
  – don't have "tenured" test suites
    • check for overlap, removed features
    • each test should earn its place

Page 51: Management Issues in Test Automation


More information

• Test Automation Patterns wiki
  – preview on testautomationpatterns.org
  – email me for an invitation to join the full wiki
• Automated Testing Institute (conference and magazine)
  – www.automatedtestinginstitute.com
• SQE (Software Quality Engineering, sqe.com)
  – www.stickyminds.com
  – Linda Hayes automation course
• Randy Rice: presentation on free and cheap tools, and automation course
  – www.riceconsulting.com (search on "free tools")
• LinkedIn has test automation groups


Page 52: Management Issues in Test Automation


What next?

• we have looked at a number of ideas about test automation today
• what is your situation?
  – what are the most important things for you now?
  – where do you want to go?
  – how will you get there?
• make a start on your test automation strategy now
  – adapt it to your own situation tomorrow

Strategy exercise

• your automation strategy / action plan
  – review your objectives for today (p1)
  – review your "take-aways" so far (p2)
  – identify the top 3 changes you want to make to your automation (top of p3)
  – note your plans now on p3

Page 53: Management Issues in Test Automation


Please fill in the evaluation form

• against this tutorial's description:
  – management concerns for automation success
  – staffing, how to support it, what to expect, ROI?
  – key technical issues to be aware of
  – your objectives and strategy
• I appreciate:
  – improvement suggestions (content, timing etc)
  – high marks ;-)
  – if you give a lower mark, please explain why
• and put your name on the form – thanks

Summary: key points

• Management issues: staffing, pilot, objectives, Return on Investment (ROI)
• Technical issues: testware architecture, scripting, more than execution
• Final advice
• Your Objectives and Strategy

Page 54: Management Issues in Test Automation


Any more questions? Please email me! [email protected]

Thank you for coming today. I hope this was / will be useful for you. All the best in your automation!

Page 55: Management Issues in Test Automation
Page 56: Management Issues in Test Automation


Page 57: Management Issues in Test Automation


“Why automate?” This seems such an easy question to answer; yet many people don’t achieve the success they hoped for. If you are aiming in the wrong direction, you will not hit your target!

This article explains why some testing objectives don't work for automation, even though they may be very sensible goals for testing in general. We take a look at what makes a good test automation objective; then we examine six commonly held—but misguided—objectives for test execution automation, explaining the good ideas behind them, where they fail, and how these objectives can be modified for successful test automation.

Good Objectives for Test Automation

A good objective for test automation should have a number of characteristics. First of all, it should be measurable so that you can tell whether or not you have achieved it.

Objectives for test automation should support testing activities but should not be the same as the objectives for testing. Testing and automation are different and distinct activities.

Objectives should be realistic and achievable; otherwise, you will set yourself up for failure. It is better to have smaller-scale goals that can be met than far-reaching goals that seem impossible. Of course, many small steps can take you a long way!

Automation objectives should be both short and long term. The short-term goals should focus on what can be achieved in the next month or quarter. The long-term goals focus on where you want to be in a year or two.

Objectives should be regularly revised in the light of experience.

Misguided Objectives for Test Automation

Objective 1: Find more bugs

Good ideas behind this objective:
• Testing should find bugs, so automated testing should find them quicker.
• Since tests are run quicker, we can run more tests and find even more bugs.
• We can test more of the system, so we should also find bugs in the parts we weren't able to test manually.

Basing the success of automation on finding bugs—especially the automation of regression tests—is not a good thing to do for several reasons. First, it is the quality of the tests that determines whether or not bugs are found, and this has very little, if anything, to do with automation. Second, if tests are first run manually, any bugs will be found then, and they may be fixed by the time the automated tests are run. Finally, it sets an expectation that the main purpose of test automation is to find bugs, but this is not the case: A repeated test is much less likely to find a new bug than a new test. If the software is really good, automation may be seen as a waste of time and resources.

Regression testing looks for unexpected, detrimental side effects in unchanged software. This typically involves running a lot of tests, many of which will not find any defects. This is ideal ground for test automation as it can significantly reduce the burden of this repetitive work, freeing the testers to focus on running manual tests where more defects are likely to be. It is the testing that finds bugs—not the automation. It is the testers who may be able to find more bugs, if the automation frees them from mundane repetitive work.

The number of bugs found is a misleading measure for automation in any case. A better measure would be the percentage of regression bugs found (compared to a currently known total). This is known as the defect detection percentage (DDP). See the StickyNotes for more information.
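As a sketch (Python, with illustrative figures), DDP for regression bugs would be computed like this:

    # Defect Detection Percentage (DDP): bugs found by this testing as a
    # percentage of all bugs currently known (found here plus found later).
    def ddp_percent(found_by_tests, found_later):
        total_known = found_by_tests + found_later
        return 100.0 * found_by_tests / total_known if total_known else 0.0

    # e.g. the regression tests caught 45 of the 50 regression bugs known so far
    print(f"DDP = {ddp_percent(45, 5):.0f}%")  # 90%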

Sometimes this objective is phrased in a slightly different way: "Improve the quality of the software." But identifying bugs does nothing to improve software—it is the fixing of bugs that improves the software, and this is a development task.

If finding more bugs is something that you want to do, make it an objective for measuring the value of testing, not for measuring the value of automation.

Better automation objective: Help testers find more regression bugs (so fewer regression failures occur in operation). This could be measured by increased DDP for regression bugs, together with a rating from the testers about how well the automation has supported their objectives.

Objective 2: Run regression tests overnight and on weekends

Good ideas behind this objective:
• We have unused resources (evenings and weekends).
• We could run automated tests "while we sleep."

At first glance, this seems an excellent objective for test execution automation, and it does have some good points.

Once you have a good set of automated regression tests, it is a good idea to run the tests unattended overnight and on weekends, but resource use is not the most important thing.

What about the value of the tests that are being run? If the regression tests that would be run “off peak” are really valuable tests, giving confidence that the main areas of the system are still working correctly, then this is useful. But the focus needs to be on supporting good testing.

It is too easy to meet this stated objective by just running any test, whether it is worth running or not. For example, if you ran the same one test over and over again every night and every weekend, you would have achieved the goal as stated, but it is a total waste of time and electricity. In fact, we have heard of someone who did just this! (We think he left the company soon after.)

Of course, automated tests can be run much more often, and you may want some evidence of the increased test execution. One way to measure this is using equivalent manual test effort (EMTE). For all automated tests, estimate how long it would have taken to run those tests manually (even though you have no intention of doing so). Then each time the test is run automatically, add that EMTE to your running total.

Better automation objective: Run the most important or most useful tests, employing under-used computer resources when possible. This could be partially measured by the increased use of resources and by EMTE, but should also include a measure of the value of the tests run, for example, the top 25 percent of the current priority list of most important tests (priority determined by the testers for each test cycle).

Objective 3: Reduce testing staff

Good ideas behind this objective:
• We are spending money on the tool, so we should be able to save elsewhere.
• We want to reduce costs overall, and staff costs are high.

This is an objective that seems to be quite popular with managers. Some managers may go even further and think that the tool will do the testing for them, so they don’t need the testers—this is just wrong. Perhaps managers also think that a tool won’t be as argumentative as a tester!

It is rare that staffing levels are reduced when test automation is introduced; on the contrary, more staff are usually needed, since we now need people with test script development skills in addition to people with testing skills. You wouldn't want to let four testers go and then find that you need eight test automators to maintain their tests!

Automation supports testing activities; it does not usurp them. Tools cannot make intelligent decisions about which tests to run, when, and how often. This is a task for humans able to assess the current situation and make the best use of the available time and resources.

Furthermore, automated testing is not automatic testing. There is much work for people to do in building the automated tests, analyzing the results, and maintaining the testware.

Having tests automated does—or at least should—make life better for testers. The most tedious and boring tasks are the ones that are most amenable to automation, since the computer will happily do repetitive tasks more consistently and without complaining. Automation can make test execution more efficient, but it is the testers who make the tests themselves effective. We have yet to see a tool that can think up tests as well as a human being can!

The objective as stated is a management objective, not an appropriate objective for automation. A better management objective is "Ensure that everyone is performing tasks they are good at." This is not an automation objective either, nor is "Reducing the cost of testing." These could be valid objectives, but they are related to management, not automation.

Better automation objective: The total cost of the automation effort should be significantly less than the total testing effort saved by the automation. This could be partially measured by an increase in tests run or coverage achieved per hour of human effort.

Objective 4: Reduce elapsed time for testing

Good ideas behind this objective:
• Reduce deadline pressure—any way we can save time is good.
• Testing is a bottleneck, so faster testing will help overall.
• We want to be quicker to market.

This one seems very sensible at first and sometimes it is even quantified—“Reduce elapsed time by X%”—which sounds even more impressive. However, this objective can be dangerous because of confusion between “testing” and “test execution.”

The first problem with this objective is that there are much easier ways to achieve it: run fewer tests, omit long tests, or cut regression testing. These are not good ideas, but they would achieve the objective as stated.

The second problem with this objective is its generality. Reducing the elapsed time for "testing" gives the impression we are talking about reducing the elapsed time for testing as a whole. However, test execution automation tools are focused on the execution of the tests (the clue is in the name!) not the whole of testing. The total elapsed time for testing may be reduced only if the test execution time is reduced sufficiently to make an impact on the whole. What typically happens, though, is that the tests are run more frequently or more tests are run. This can result in more bugs being found (a good thing), that take time to fix (a fact of life), and increase the need to run the tests again (an unavoidable consequence).

The third problem is that there are many factors other than execution that contribute to the overall elapsed time for testing: How long does it take to set up the automated run and clear up after it? How long does it take to recognize a test failure and find out what is actually wrong (test fault, software fault, environment problem)? When you are testing manually, you know the context—you know what you have done just before the bug occurs and what you were doing in the previous ten minutes. When a tool identifies a bug, it just tells you about the actual discrepancy at that time. Whoever analyzes the bug has to put together the context for the bug before he or she can really identify the bug.

In figures 1 and 2, the blocks represent the relative effort for the different activities involved in testing. In manual testing, there is time taken for editing tests, maintenance, set up of tests, executing the tests (the largest component of manual testing), analyzing failures, and clearing up after tests have completed. In figure 1, when those same tests are automated, we see the illusion that automating test execution will save us a lot of time, since the relative time for execution is dramatically reduced. However, figure 2 shows us the true picture—total elapsed time for testing may actually increase, even though the time for test execution has been reduced. When test automation is more mature, then the total elapsed time for all of the testing activities may decrease below what it was initially for manual testing. Note that this is not to scale; the effects may be greater than we have illustrated.

We now can see that the total elapsed time for testing depends on too many things that are outside the control or influence of the test automator.

The main thing that causes increased testing time is the quality of the software—the number of bugs that are already there. The more bugs there are, the more often a test fails, the more bug reports need to be written up, and the more retesting and regression testing are needed. This has nothing to do with whether or not the tests are automated or manual, and the quality of the software is the responsibility of the developers, not the testers or the test automators.

Finally, how much time is spent maintaining the automated tests? Depending on the test infrastructure, architecture, or framework, this could add considerably to the elapsed time for testing. Maintenance of the automated tests for later versions of the software can consume a lot of effort that also will detract from the savings made in test execution. This is particularly problematic when the automation is poorly implemented, without thought for maintenance issues when designing the testware architecture. We may achieve our goal with the first release of software, but later versions may fail to repeat the success and may even become worse.

Here is how the automator and tester should work together: The tester may request automated support for things that are difficult or time consuming, for example, a comparison or ensuring that files are in the right place before a test runs. The automator would then provide utilities or ways to do them. But the automator, by observing what the tester is doing, may suggest other things that could be supported and "sell" additional tool support to the tester. The rationale is to make life easier for the tester and to make the testing faster, thus reducing elapsed time.

Better automation objective: Reduce the elapsed time for all tool-supported testing activities. This is an ongoing objective for automation, seeking to improve both manual and existing automated testing. It could be measured by elapsed time for specified testing activities, such as maintenance time or failure analysis time.

Objective 5: Run more tests

Good ideas behind this objective:
• Testing more of the software gives better coverage.
• Testing is good, so more testing must be better.

More is not better! Good testing is not found in the number of tests run, but in the value of the tests that are run. In fact, the fewer tests for the same value, the better. It is definitely the quality of the tests that counts, not the quantity. Automating a lot of poor tests gives you maintenance overhead with little return. Automating the best tests (however many that is) gives you value for the time and money spent in automating them.

If we do want to run more tests, we need to be careful when choosing which additional tests to run. It may be easier to automate tests for one area of the software than for another. However, if it is more valuable to have automated tests for this second area than the first, then automating a few of the more difficult tests is better than automating many of the easier (and less useful) tests.

A raw count of the number of automated tests is a fairly useless way of gauging the contribution of automation to testing. For example, suppose testers decide there is a particular set of tests that they would like to automate. The real value of automation is not that the tests are automated but the number of times they are run. It is possible that the testers make the wrong choice and end up with a set of automated tests that they hardly ever use. This is not the fault of the automation, but of the testers' choice of which tests to automate.

It is important that automation is responsive, flexible, and able to automate different tests quickly as needed. Although we try to plan which tests to automate and when, we should always start automating the most important tests first. Once we are running the tests, the testers may discover new information that shows that different tests should be automated rather than the ones that had been planned. The automation regime needs to be able to cope with a change of direction without having to start again from the beginning.

[Figure 1 and Figure 2: bar charts of the relative effort for each testing activity, as described earlier in the article.]

During the journey to effective test automation, it may take far longer to automate a test than to run that test manually. Hence, trying to automate may lead, in the short term at least, to running fewer tests, and this may be OK.

Better automation objective: Automate the optimum number of the most useful and valuable tests, as identified by the testers. This could be measured as the number or percentage automated out of the valuable tests identified.

Objective 6: Automate X% of Testing

Good ideas behind this objective:
• We should measure the progress of our automation effort.
• We should measure the quality of our automation.

This objective is often seen as "Automate 100 percent of testing." In this form, it looks very decisive and macho! The aim of this objective is to ensure that a significant proportion of existing manual tests is automated, but this may not be the best idea.

A more important and fundamental point is to ask about the quality of the tests that you already have, rather than how many of them should be automated. The answer might be none: let's have better tests first! If they are poor tests that don't do anything for you, automating them still doesn't do anything for you (but faster!). As Dorothy Graham has often been quoted, "Automated chaos is just faster chaos."

If the objective is to automate 50 percent of the tests, will the right 50 percent be automated? The answer to this will depend on who is making the decisions and what criteria they apply. Ideally, the decision should be made through negotiation between the testers and the automators. This negotiation should weigh the cost of automating individual tests or sets of tests, and the potential costs of maintaining the tests, against the value of automating those tests. We've heard of one automated test taking two weeks to build when running the test manually took only thirty minutes, and it was only run once a month. It is difficult to see how the cost of automating this test will ever be repaid!

What percentage of tests could be automated? First, eliminate those tests that are actually impossible or totally impractical to automate. For example, a test that consists of assessing whether the screen colors work well together is not a good candidate for automation. Automating 2 percent of your most important and often-repeated tests may give more benefit than automating 50 percent of tests that don't provide much value.

Measuring the percentage of manual tests that have been automated also leaves out a potentially greater benefit of automation—there are tests that can be done automatically that are impossible or totally impractical to do manually. In figure 3 we see that the best automation includes tests that don’t make sense as manual tests and does not include tests that make sense only as manual tests.

Automation provides tool support for testing; it should not simply automate tests. For example, a utility could be developed by the automators to make comparing results easier for the testers. This does not automate any tests but may be a great help to the testers, save them a lot of time, and make things much easier for them. This is good automation support.
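As a concrete (and hypothetical) illustration of such a utility, the following Python sketch compares an actual output file against an expected one while masking volatile fields such as timestamps, a common cause of spurious mismatches. All names and the timestamp format are assumptions, not taken from this article.

    import re
    from itertools import zip_longest

    # Mask run-dependent timestamps (e.g. "2009-07-14 08:30:00") so that
    # otherwise-identical outputs from different runs still compare equal.
    TIMESTAMP = re.compile(r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}")

    def compare_files(actual_path, expected_path):
        """Return a list of (line_number, actual, expected) mismatches."""
        mismatches = []
        with open(actual_path) as actual, open(expected_path) as expected:
            pairs = zip_longest(actual, expected, fillvalue="<missing line>")
            for number, (a_line, e_line) in enumerate(pairs, start=1):
                if TIMESTAMP.sub("<TS>", a_line) != TIMESTAMP.sub("<TS>", e_line):
                    mismatches.append((number, a_line.rstrip(), e_line.rstrip()))
        return mismatches

The tester runs this instead of eyeballing two files; the automator decides, with the tester, which fields are safe to mask.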

Sticky Notes
For more on the following topics go to www.StickyMinds.com/bettersoftware:
• Dorothy Graham's blog on DDP and test automation
• Software Test Automation

Better automation objective: Automation should provide valuable support to testing. This could be measured by how often the testers used what was provided by the automators, including automated tests run, utilities, and other support. It could also be measured by how useful the testers rated the various types of support provided by the automation team. Another objective could be: the number of additional verifications made that couldn't be checked manually. This could be related to the number of tests, in the form of a ratio that should be increasing; for example, 50 automation-only verifications across 200 automated tests gives a ratio of 0.25, which should rise in later releases.

What are your objectives for test execution automation? Are they good ones? If not, this may seriously impact the success of your automation efforts. Don't confuse objectives for testing with objectives for automation. Choose more appropriate objectives and measure the extent to which you are achieving them, and you will be able to show how your automation efforts benefit your organization. {end}

[Figure 3]


© Dorothy Graham, 2013

Technical versus non-technical skills in test automation

Dorothy Graham
Software Testing Consultant

[email protected]

SUMMARY

In this paper, I discuss the roles of testers and test automators in test automation. Technical skills are needed by test automators, but testers who do not have technical skills should not be prohibited from writing and running automated tests.

Keywords
Tester, test automator, test automation, skills.

1. INTRODUCTION

Test automation is a popular topic in software testing, and an area where a number of organizations have had good success. Tests that may take days to run manually can be executed in hours, running overnight and at weekends, with greater accuracy and repeatability. Tests can be run more often, giving immediate feedback for new builds.

Yet despite the obvious potential, many organizations are still struggling to achieve good benefits from automation. I believe that one reason for this is confusion about the role of the "test automator": there is a common misperception that testers should take on this role. This paper explains why this may not be the best solution.

It is popular to encourage testers to develop programming skills; for example, at EuroSTAR 2012 a keynote speaker advised all testers to learn to code. I don't agree with this, and this paper, originally written for the CAST conference in 2010, explains why.

2. TERMS

I will start by defining the terms I use in this paper.

Test automation: the computer-assisted running of software tests, i.e. the automation of test execution.

Test automator: a person who builds and maintains the testware associated with automated tests. [4]

Tester: a person who identifies test conditions, designs test cases and verifies test results. A tester may also build and execute tests and compare test results. [4]

Testware: the artifacts required to plan, design and execute tests, such as documentation, scripts, inputs, expected outcomes, set-up and clear-up procedures, files, databases, environments, and any additional software or utilities used in testing. [4]


3. TEST AUTOMATION SKILLS

3.1 Existing perceptions

The automation of test execution is a popular application of computer technology to itself, and there are a number of books about test automation [1,2,3,4,7,8,10,11,12]. Many of them do not appear to discuss the skills needed (or, if they do, it is not obvious). There is a general perception that testers must be, or must become, technical, i.e. programmers, if they are to become involved in automation, although a few authors do make a distinction between testers and automators.

Linda Hayes, in her useful booklet on automation [7], says: "… developing test scripts is essentially a form of programming; for this role, a more technical background is needed." She distinguishes between "Test Developers", i.e. testers, and "Script Developers", which is part of the role of a test automator.

Dustin et al. [3] say: "When people think of AST [Automated Software Testing], they often think the skill set required is one of a 'tester', and that any manual tester can pick up and learn how to use an automated testing tool. Although the skills [of a tester] … are still needed to implement AST, a complement of skills similar to the broad range of skill sets needed to develop the software product itself is needed." (p 225)

A paper by Mosaic [13] mentions three roles: "Manual Test Engineer", "Automation Test Engineer" and "Lead Automator". In this model, the design of tests (i.e. the tester's role) is done by both test engineers; the automation work (i.e. the test automator's role) is done by the lead automator and the automation test engineer. The key distinction is who designs the tests; in my view, this is best done by the tester, collaborating with the test automator for tests that are to be automated.

3.2 Is test automation a technical task?

The answer to this question depends on what you include as part of "test automation". If you view it as the direct use of a test execution tool, i.e. writing, editing and running scripts written in the tool's scripting language, then it is a technical task, and programming (i.e. scripting) skills are needed.

Another technical aspect of test automation is the design of the testware architecture – the structure and relationship of all of the items of testware that comprise the artefacts required for automated tests to successfully run. The design of the testware architecture is a critical aspect for successful test automation, and the skills needed for this include technical expertise, as well as knowledge of how the tests are to be used. The person who designs the testware architecture may be called a test automator, test architect, or lead automator.

3.3 Constructing automated tests is not entirely a technical process

The construction of the automation architecture, and of the scripts and other testware that will be used to run automated tests, is a technical task; but automated testing is not just the structure of the architecture and scripts.

The whole purpose of test automation is to make it possible to run tests with minimal human involvement in test execution (and comparison).

There is a need for testers to be able to use automated tests, both to write tests to be run automatically, and to run those tests and view the results. The tests that are to be automated could be technical tests, such as those written by developers as part of Test-Driven Development or unit or integration testing, but system and acceptance tests can also be automated, and the testers who write those tests are not always technical (i.e. software developers).

The content of the test is determined by the tester; the implementation of the test is done by the automator.

4. TESTERS TO AUTOMATORS?

4.1 Testers become automators?

I have seen it work well to have a team of manual testers embarking on an automation project, where all (or nearly all) of the testers effectively become programmers, i.e. test programmers, or scripters. At a former colleague's company, five out of the team of six testers went on the tool vendor's training course and became familiar with the tool's scripting language. One tester decided he didn't want to become technical, so he concentrated on manual testing, but the others all became good test automators.

There were two interesting side-effects of the testers' newly acquired skill set. First, they had a lot more sympathy for the developers, as they now understood first-hand the frustrations of trying to get the computer to do what you wanted it to do. Second, they found that the developers treated them with a bit more respect, as they now also had some development skills. This led to a better relationship between the developers and testers.

Another example where it worked very well to have all of the testers become automators is described in a chapter by Lisa Crispin [2] in our forthcoming book. An agile team of 9 to 12 people were all involved in doing manual regression testing, so were highly motivated to automate 20% of their work, and everyone became involved in the automation.

4.2 A separate team of test automators?

I have seen other organizations where a separate team is set up to automate tests, leaving the testers free to concentrate on designing tests and running manual tests. As the automation team gets going, they automate tests nominated by the testers, freeing the testers from having to do those tests manually. The automation team provides a service to the testers, designing the testware architecture and the structure of the tests, and assisting where needed when problems are encountered with the automated tests.

For example, if an automated test fails, it could be because of a software fault (in which case the tester would have found a bug), but it could also fail for a technical reason, such as a problem with the environment, a missing testware item (i.e. a bug in the automated testware), or a problem with the tool itself. The tester, not being technical, will need technical assistance to identify the source of the problem.

So we have the situation where test automation does require technical skills, but we have testers who do not have those skills. Can this really work? Yes it can, but it needs two key separations, or layers of abstraction.

5. AUTOMATION SUCCESS NEEDS LAYERS OF ABSTRACTION

5.1 Technical Layer

Technical aspects are very important for test automation. A good testware architecture will have two layers of abstraction [6]. The technical layer will implement good software development practices for the testware, separating the tool itself and the direct scripting of the tool from the software or scriptware that calls and uses the lower-level scripts. Modularity and reuse are key factors in minimizing maintenance of automated testware. If something changes in the software, the testware will need to reflect that change. With lower levels of scripting (a recorded test or linear script being the lowest), a small change to a screen can result in making "magnetic trash" [9] of the automated tests.

If possible, the testware should be designed so that it can cope with changes in the software under test without needing any changes to the testware. If this is not possible, the effects of any change to the software being tested should be confined to only one testware artefact (or a minimum number if this is not practical).

This layer gives good maintainability to the automated test regime.
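A minimal sketch of what such confinement might look like in Python, assuming a Selenium-style web driver; the page class, locators and URL are illustrative assumptions, not from this paper. The point is that knowledge of the login screen lives in one testware artefact, so a change to that screen means updating one class, not every test.

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    class LoginPage:
        # All knowledge of the login screen is confined to this class.
        USER_FIELD = (By.ID, "username")
        PASSWORD_FIELD = (By.ID, "password")
        LOGIN_BUTTON = (By.ID, "login")

        def __init__(self, driver):
            self.driver = driver

        def login(self, user, password):
            self.driver.find_element(*self.USER_FIELD).send_keys(user)
            self.driver.find_element(*self.PASSWORD_FIELD).send_keys(password)
            self.driver.find_element(*self.LOGIN_BUTTON).click()

    def test_valid_login():
        # The test never touches locators; a screen change cannot break it
        # directly, only the LoginPage class above.
        driver = webdriver.Firefox()
        try:
            driver.get("https://example.test/login")  # illustrative URL
            LoginPage(driver).login("alice", "secret")
            assert "Welcome" in driver.page_source
        finally:
            driver.quit()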

5.2 Tester Layer

If all of the testers are technical, such as developers who are doing Test-Driven Development or unit testing, then this layer is not as critical. The Tester layer of abstraction is needed when system testers or user acceptance testers want to use test automation but do not want to become technical, i.e. programmers. To achieve this, the non-technical testers must be able both to write tests (that can then be run automatically) and to run tests, i.e. to "kick off" a set of automated tests.

If the testware architecture uses a keyword-driven approach [1,4,5,6], the testers can write tests using keywords that are related to the business or domain knowledge they are familiar with. Yes, they do have to follow the correct syntax for the keywords, but tools can make this relatively easy, for example by providing a drop-down list of valid keywords and checking the syntax of the parameters entered for each keyword.

The keywords are implemented (i.e. programmed) by test automators, using the scripting language of the tool, or any other appropriate programming language that they know; a sketch of this division of labour follows below. The testers are not involved in the implementation of the keywords, but they are able to use them to write tests. The testers also need to be able to select a set of tests to be run automatically; the test automators can make this easy, for example by providing options in a user-friendly interface to the automation.
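The following Python sketch shows the shape of such a keyword-driven layer. The keywords, the test table, and the stub implementations are illustrative assumptions, not from this paper or any particular tool: the tester writes the table; the automator writes the functions.

    # Testers write tests as rows of keywords with parameters:
    OPEN_ACCOUNT_TEST = [
        ("login", "alice", "secret"),
        ("open_account", "savings"),
        ("check_balance", "0.00"),
        ("logout",),
    ]

    # Automators implement each keyword once, hiding the tool details.
    def login(user, password):
        print("logging in as", user)

    def open_account(account_type):
        print("opening", account_type, "account")

    def check_balance(expected):
        print("checking balance is", expected)

    def logout():
        print("logging out")

    KEYWORDS = {
        "login": login,
        "open_account": open_account,
        "check_balance": check_balance,
        "logout": logout,
    }

    def run_test(steps):
        # A real tool would also validate keyword names and parameter
        # syntax here, as described above.
        for name, *args in steps:
            KEYWORDS[name](*args)

    run_test(OPEN_ACCOUNT_TEST)

Because the table is written entirely in domain terms, a non-technical tester can read, write, and run it without touching the implementations underneath.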

The testers also need to receive and understand the results of the automated tests, and the way in which this information is communicated to them is also designed by the test automator.

This separation of the tester from the automation is needed for the automation to grow within an organization and to give long-lasting benefits and widespread acceptance.

6. SUMMARY AND CONCLUSION

Test automation does need technical skill, for those who are closest to the tool itself. The skills of the tester and the skills of the test automator may be found in the same person, but it may work better to have different people perform the two roles. The test automator's role is critical in establishing a modular and well-structured testware architecture, separating the tool from the testware, and providing a tester-friendly interface to the testware for non-technical testers.

Not every tester can or should become a test automator. Many non-technical people are very good testers; they should be able to use test automation without needing to have technical skills. Getting to this point, however, does require good technical support, but that support does not have to be provided by the tester.

7. REFERENCES

[1] Buwalda, H., Janssen, D. and Pinkster, I. 2002. Integrated Test Design and Automation. Addison Wesley/Pearson Education, London.
[2] Crispin, L. 2010. Zero to 100% Regression Test Automation in One Year: an Agile Approach to Automation. In Graham, D. and Fewster, M., Experiences of Test Automation. [Publisher not yet determined]
[3] Dustin, E., Garrett, T. and Gauf, B. 2009. Implementing Automated Software Testing. Addison Wesley/Pearson Education, Boston, MA.
[4] Fewster, M. and Graham, D. 1999. Software Test Automation. Addison Wesley/Pearson Education, ACM Press, NY.
[5] Gijsen, M. 2009. Effective Automated Testing with a DSTL [Domain-Specific Test Language]. Paper from the author and http://www.linkedin.com/ppl/webprofile?action=ctu&id=5550465&pvs=pp&authToken=7sp6&authType=name&trk=ppro_getintr&lnk=cnt_dir
[6] Graham, D. and Fewster, M. 2012. Experiences of Test Automation. Addison Wesley/Pearson Education, Boston, MA.
[7] Hoffman, D. and Strooper, P. 1995. Software Design, Automated Testing, and Maintenance. International Thompson Computer Press, Boston, MA.
[8] Kaner, C., Falk, J. and Nguyen, H. Q. 1993. Testing Computer Software. Van Nostrand Reinhold, NY.
[9] Mosley, D. J. and Posey, B. A. 2002. Just Enough Software Test Automation. Yourdon Press/Pearson Education, Upper Saddle River, NJ.
[10] Siteur, M. M. 2005. Automate Your Testing! Sdu Uitgevers bv, Den Haag.
[11] Stottlemyer, D. 2001. Automated Web Testing Toolkit. Wiley, NY.
[12] [author unknown] 2002. Staffing Your Test Automation Team. Mosaic Inc, Chicago, IL. www.mosaicinc.com/mosaicinc/successful_test.htm