Why Automation Fails – in Theory and Practice
TRANSCRIPT
W3
Test Automation
10/15/2014 11:30:00 AM
Presented by: Jim Trentadue, Ranorex
Brought to you by:
340 Corporate Way, Suite 300, Orange Park, FL 32073 ∙ 888-268-8770 ∙ 904-278-0524 ∙ [email protected] ∙ www.sqe.com
Jim Trentadue, Ranorex
Jim Trentadue has more than fifteen years of experience as a coordinator and manager in the software testing field. Jim's various roles in testing have focused on test execution, automation, management, environment management, standards deployment, and test tool implementation. In the area of offshore testing, he has worked with multiple large firms on developing and coordinating cohesive relationships. As a speaker, Jim has presented at numerous industry conferences, chapter meetings, and at the University of South Florida's software testing class, where he mentors students on the testing industry and discusses trends for establishing future job searches and continued training.
Why Test Automation Fails in Theory and in Practice
Jim Trentadue Account / Sales Manager - Ranorex
[email protected] Wednesday, October 15th, 2014
STARWEST Conference
Agenda
• Test Automation industry recap
• Test Automation industry outlook
• Adoption challenges with Test Automation; common myths and perceptions
• Why are Test Automation adoptions failing?
• Why are Test Automation implementations failing?
• Correcting the Test Automation approach, concepts, and applications
• Session recap
Test Automation Industry recap
• The first Record/Playback tools appeared about 20–25 years ago
• The primary objective was to test desktop applications. Some tools could test for broken URL links but could not dig into the objects
• The tester was required to have scripting or programming knowledge to make the tests run effectively, even in Record/Playback
• Below is the evolution of Test Automation frameworks:
Record/Playback → Structured Testing (invokes more conditions) → Data-Driven → Keyword-Driven → Model/Object-Based → Actions-Based → Hybrid (combines two or more of the previous frameworks)
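To make the framework styles concrete, here is a minimal sketch contrasting a data-driven test (one procedure, many data rows) with a keyword-driven test (steps expressed as keyword rows, mapped to implementation functions by a dispatcher). The `add` function stands in for a hypothetical application under test; this is not any vendor's API.

```python
# Hypothetical application under test: a trivial calculator function.
def add(a, b):
    return a + b

# Data-driven: one test procedure executed against many data rows.
test_data = [
    (2, 3, 5),
    (0, 0, 0),
    (-1, 1, 0),
]

def run_data_driven():
    for a, b, expected in test_data:
        result = add(a, b)
        assert result == expected, f"add({a}, {b}) = {result}, expected {expected}"
    return len(test_data)  # number of rows executed

# Keyword-driven: test steps are (keyword, args) rows; a small
# dispatcher maps each keyword to its implementation function.
keywords = {
    "add": add,
}

def run_keyword_driven(steps):
    last = None
    for keyword, *args in steps:
        if keyword == "verify":
            assert last == args[0], f"expected {args[0]}, got {last}"
        else:
            last = keywords[keyword](*args)
    return last

# A test case becomes data the tester can edit without programming:
steps = [("add", 2, 3), ("verify", 5)]
```

The keyword table is what lets non-programmers author tests: new test cases are new rows, not new code.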
Test Automation Industry outlook
• The number of test automation tools has increased to 25–30, primarily focused on taking the scripting/programming out of the equation for manual testers
• Technical complexities have increased considerably to cover the following:
• License models now offer a Test Execution only component, apart from the license to create and maintain the automated tests and object repository
• Tools have the capability to integrate with various Continuous Integration tools and Test Management solutions
Test Automation adoption challenges
• The new challenges below have slowed test automation productivity:
• Rapid deployment times to production
• Shifts from a Waterfall SDLC to an Agile methodology
• How Test-Driven Development impacts when to automate tests
• How component testing such as database testing, data warehouse testing, web-service testing, or messaging testing impacts which tests to automate
• To ensure success, many organizations have employed a Proof of Concept study
Common myths and perceptions for automation
• Test Automation can't be accomplished!
• Significant increase in time and people required
• Existing testers are not needed once automation is in place
• Automation will serve all testing needs for a system
• Associates can absorb test automation without project or training impacts
• Implementing test automation is a click of a button
Case Study 1: Struggling with (against) Management
Excerpts from an Austrian test engineer / manager and his interactions with management
“It Must Be Good, I’ve Already Advertised It”
Management's intention was to reduce testing time for the system test. There was no calculation of our investment; we had to go with it because manual testing was no longer needed. The goal was to automate 100% of all test cases.
“Testers Aren’t Programmers”
My manager hired testers not to be programmers, so that he did not have more developers in the department. Once told that some tools require heavy scripting/coding, the message was relayed that testers need to do "Advanced Scripting" rather than "Programming".
“Automate Bugs”
One manager's idea was to automate bugs we received from our customer care center. We were told to read each bug and automate that exact user action. Not knowing the exact function, we hard-coded the user data into our automation. We ended up automating bugs for versions that were no longer in the field.
“Impress the Customers (the Wrong Way)”
My boss had the habit of installing the untested beta versions for presentations of the software in front of the customer. He would install unstable versions and then call developers at 5:30am to fix immediately. We introduced automated smoke tests to ensure good builds.
Experiences of Test Automation – Case Studies of Software Test Automation: Graham & Fewster
Test Automation adoption questions
Who
• are the right people to create, maintain test cases and object repository information?
• may execute automated tests instead of creating them?
• will be the dedicated go to person(s) for the tool?
What
• is our organization’s definition of automated testing: what it is and what it is not?
• are the test automation objectives?
• is the expected testing coverage sought after with test automation?
When
• are testing resources available for this initiative?
• can training happen in relation to the procurement?
• are there project deadlines that compete against test automation?
Where
• will the test automation take place; dedicated test environment?
• are the associates located? This will decide whether training should be remote or onsite
• will a central repository reside?
Why
• start a test automation initiative – what are the specific issues?
• run a vendor POC program; what do we want from this assessment?
• start with a top-down management approach?
How
• do our test automation objectives differ from our overall testing objectives?
• does manual testing fit into a test automation framework?
• do existing tools such as (CI, CM, TM) work with the automation solution?
Aligned into categories for questions that may not have been answered…or asked
Test Automation adoption failures
Reviewing some not so best practices and assumptions around test automation adoption
• Manual Test Cases may not have been written for automation with entry points, data and exit criteria
• Error handling is not considered as part of manual testing; separate from defect detection
• Modularity is not a key consideration for manual test case planning, as it is for test automation
Test Automation automates manual test cases
• Testers should just go through the entire regression test library and strive to have it all automated
• New application changes should just be added to the automation library of tests as they are completed
• This full automation run should happen overnight, and any discrepancy should be logged as an application defect
Test Automation must cover 100% of the application
• Replace the time required for manual testing with automated testing the first time, and each subsequent run will be shorter
• Every manual tester we have can automate their own tests; they have the testing knowledge to do so
• This does not have to be a separate initiative; there is ample time during each project release
Testers can automate their tests during a release
Case Study 2: Choosing the Wrong Tool
Experiences of Test Automation – Case Studies of Software Test Automation: Graham & Fewster
• The goal was to automate the testing of major functionality for web development tools at a large internet software development company
Background for the Case Study
• Most of the functional tests were executed manually. There were some unmaintained automation scripts that could potentially be useful if restored
Pre-existing Automation
• The GUI had gone through major changes for the application. The automation tool had to be flexible enough to automate the changes and be easy to maintain
New Tool or Major Maintenance Effort?
• The consensus was that the tool and coding required was difficult for manual testers to maintain. Additionally, the type of object recognition provided did not work
Moving Forward with existing tool?
• The team came to clearly understand how automated tools use objects and realized that identification by image was not adequate. Research was done to find the best tool to identify objects
What was done after the tool was replaced?
• Closely examine the tools that work best with the controls in your environments. Topics like maintainability and error-handling/debugging should be critical
Conclusion
Test Automation implementation failures
What could go wrong when trying to use the purchased solution
• The application(s) under test uses technologies that do not work well with the test automation tool
• When I attempt to play back the test I just recorded, the recording doesn't play back at all
• I have to use one test automation solution for one technology (Web), another for Mobile, yet another for legacy desktop apps
The test automation tool won't work on my application
• The automated test case(s) need to be re-recorded after most GUI changes, causing heavy maintenance
• It's not worth automating newer applications; automation is really only for regression testing of legacy applications
• It's said that automation is key with Agile development, but automation can't work when deployed in increments
Screens & objects in the AUT are changing; automate?
• We don’t have time to work with the test automation solution with overly aggressive project schedules and over-allocation
• Test Automation software is primarily marketed to testers with a programming and development skillset / background
• This initiative doesn’t have the cross-department support that it should to be effective
Automation tools are too complex for manual testers
Test Automation adoption corrections
Changing the mindset concerning test automation
• Automated test cases are separate from manual test cases, with entry points, data, and exit criteria
• Error handling is an essential part of automated test cases, separate from defect detection
• Modularity is the approach by which automated test cases are developed, to ensure reusability of each automated test step
Test Automation automates manual test cases
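The corrected design above can be sketched as a modular test case with an explicit entry point, test data, and exit criterion, where harness failures (error handling) are reported separately from application failures (defect detection). The "login" and "transfer" steps and the in-memory `session` dict are hypothetical placeholders, not a real tool's API.

```python
class AutomationError(Exception):
    """Failure in the test harness itself (environment, setup, timing),
    distinct from a defect in the system under test."""

def step_login(session, user):
    # Entry point: establish a known application state before testing.
    if session is None:
        raise AutomationError("no session: environment not ready")
    session["user"] = user
    return session

def step_transfer(session, amount):
    # Reusable step: the same module can serve many test cases.
    session["balance"] = session.get("balance", 100) - amount
    return session

def step_verify_balance(session, expected):
    # Exit criterion / defect detection: a failure here points at the SUT.
    assert session["balance"] == expected, (
        f"balance {session['balance']}, expected {expected}")

def run_test_case(data):
    session = {"balance": 100}
    try:
        step_login(session, data["user"])
        step_transfer(session, data["amount"])
        step_verify_balance(session, data["expected"])
        return "pass"
    except AutomationError as e:
        return f"automation-error: {e}"  # fix the test, not the app
    except AssertionError as e:
        return f"defect: {e}"            # log against the application

result = run_test_case({"user": "alice", "amount": 30, "expected": 70})
```

Keeping the two `except` branches separate is what lets a failed overnight run be triaged as "broken test" versus "broken application" before a defect is logged.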
• Testers start automation with a regression test library, but focus on tests that are time-consuming and have a high ROI
• Application changes need to be planned for in a Master Test Plan and automated during a maintenance window
• Any automation failure needs to be analyzed to determine whether the defect is in the automated test case or the system under test
Test Automation must cover 100% of the application
• The initial cycle of creating and executing automated tests will be longer than manual tests, with each cycle runtime reduced
• Manual testers will require training on the test automation tool, approach and best practices
• Test automation should be a separate initiative, defined with its own goals and objectives - apart from an application release
Testers can automate their tests during a release
Test Automation objectives aim for the highest ROI
Test Plan includes automated and manual test cases
Separate goals & timelines for Test Automation
Case Study 3: Develop a Multi-Solution Strategy
• A leading electronic design automation company had a family of products that was comprised of over 120 programs. This was GUI-based and ported to all leading industry workstation hardware platforms
Background for the Case Study
Current Testing Activities
• Three distinct test phases: Evaluation (done by end users), Integration, and System
• Integration and System testing were done manually and consumed 38 person-weeks of effort for every software release on every platform
Proposed Solution
• Evaluate and decide between purchasing a solution or building in-house
• Hire an outside consultant to validate the tool selection prior to purchase, then train testing staff on selected tool
• An in-house solution was built
Integration test automation
• Completed automation in just under a year
• First trial: manual took 2 person-weeks; automated took 1 person-week – but not everything could be automated
• Without automation, manual testing took 10 person-weeks on every platform; with automation, 3 person-weeks
System test automation
• Automated in two phases: breadth and depth
• Manual testing took 8 person-weeks per platform; automation took 4 person-weeks
• Approximately 79% of tests were automated
• Test effort per platform cut in half
• Consistent regression testing
• Less dependence on application SMEs
• Better use of testing resources
Key Benefits
Software Test Automation: Effective use of test execution tools – Fewster & Graham
Test Automation implementation corrections
• Develop a set of criteria that matches your portfolio of applications to automate, and ask vendors for a POC
• Start requesting the object names from development as they turn over new builds – very important for test automation
• Investigate software packages that contain everything in one; vendor management and procurement will thank you!
The test automation tool won't work on my application
• Find a tool that does not require re-recording. If a new control was added, you wouldn't rewrite your manual test case
• Consider starting your testing with automated tests. The chances of these tests staying current are much higher
• Test automation should be a key element in sprint planning, for both new functionality and existing maintenance
Screens & objects in the AUT are changing; automate?
• Schedule a just-in-time training session with the tool shortly after procurement, with staff not allocated to a critical project
• Lay out the resource plan showing how every tester contributes. More roles are needed for this initiative than just coding
• Testing management should work with the IT leaders to discuss the time required from a PM, BA, Dev, and DBA
Automation tools are too complex for manual testers
Develop a resource plan for all required associates
Understand the object landscape and prepare the test plan
A successful POC will prove the tool works on all apps
Changing the approach to putting test automation into practice
Case Study 4: Automating Government Systems
• The goal was to speed up Department of Defense (DoD) software delivery to the field. The technology was leading edge, but lengthy testing and certification processes were in place before software could be released
Background for the Case Study
Experiences of Test Automation – Case Studies of Software Test Automation: Graham & Fewster
• The tool had to be independent of the System Under Test; no modules should be added
Can’t Be Intrusive to System Under Test (SUT)
• The solution(s) had to work across Windows, Linux, Solaris, Mac, etc.
Must Be OS Independent
•The tool needed to support all languages that the applications were written in
Must Be Independent of the GUI
•The system should be able to handle operations through the GUI and backend processes
Must Automate Tests for Both Display & Non-Display Interfaces
•The tool needs to test a system of systems: multiple servers, monitors and displays interconnected to form one SUT
Must Work in a Networked Multicomputer Environment
• One of the main requirements from the DoD. Testers were SMEs in the application, but not software developers writing scripts
• The steps should be action driven: Projects → Test Cases → Test Steps → Actions
Non-Developers Should Be Able to Use the Tool
•Needed a framework that could allow for test management activities, including documenting a test case in the Automated Test and Re-Test (ATRT) solution, along with a hierarchical breakdown of test cases into test steps and sub-steps
Must Support an Automated Requirements Traceability Matrix
• Solutions selected spanned across technologies
• Reduced some test cycles from 5 testers to 1
• After initial resistance, the testing department now worked on automated test cases, both development and execution
• Additional processes for Continuous Integration and Test Management were now enabled
Test Automation ROI
Recap of the presentation
Reviewing the main points of the presentation
• The Test Automation industry focused on desktop, but is driving toward multi-technology stacks to cover a large number of platforms
• The primary skillset for those working in test automation was programming or scripting, but solutions are available for those without these skills
• Common perceptions and myths will exist; review the list of questions to be asked and use that to drive your strategy
• Present some of the real-life case studies to management, to ensure this does not occur during your research and implementation
• Review some common failures in the theory and implementation and compare if any of those are problematic at your organization
• Make the corrective action for each failure step accordingly. Incorporate test automation as part of your overall test strategy
Thank you for attending!
Jim Trentadue Account / Sales Manager - Ranorex
[email protected] Wednesday, October 1st 2014
Software Quality and Test Management Conference