Escaping Test Hell - Our Journey - XPDays Ukraine 2013

Posted on 06-May-2015


DESCRIPTION

My updated slides about the journey to hell and back to normality with respect to automated tests at scale, based on 10+ years of real experience from JIRA development teams. I delivered this talk at XPDays in Kiev in October 2013.

TRANSCRIPT

Automated Test Hell

Wojciech Seliga
wojciech.seliga@spartez.com

@wseliga

Our Journey

About me

• Coding for 30 years, now only in "free time"

• Agile Practices (inc. TDD) since 2003

• Dev Nerd, Tech Leader, Agile Coach, Speaker, PHB

• 6 years with Atlassian (JIRA Dev Manager)

• Spartez Co-founder & CEO

XP Promise
(chart: Cost of Change over Time - Waterfall vs. XP)

1.5 years ago

Almost 10 years of accumulating garbage automated tests

About 20 000 tests on all levels of abstraction

Very slow (even hours) and fragile feedback loop

Serious performance and reliability issues

Dispirited devs accepting RED as a norm

Feedback Speed

Test Quality

Test Code is Not Trash

Design, Maintain, Refactor, Share, Review, Prune, Respect, Discuss, Restructure

Test Pyramid

Unit Tests (including QUnit)

REST / HTML Tests

Selenium

Optimum Balance

Isolation, Speed, Coverage, Level, Access, Effort

Dangerous to tamper with: Maintainability, Quality / Determinism

Now

People - Motivation
Making GREEN the norm

Shades of Red

Shades of Green

Pragmatic CI Health

Build Tiers and Policy

Tier A1 - green soon after all commits: unit tests and functional* tests

Tier A2 - green at the end of the day: WebDriver and bundled plugins tests

Tier A3 - green at the end of the iteration: supported platforms tests, compatibility tests

Wallboards: Constant Awareness

Extensive Training

• assertThat over assertTrue/False and assertEquals (see the sketch after this list)

• avoiding races - Atlassian Selenium with its TimedElement

• Favouring unit tests over functional tests

• Promoting Page Objects

• Brownbags, blogs, code reviews
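To illustrate the first point above, here is a minimal, hypothetical JUnit 4 sketch (the test class and issue keys are invented for illustration) of why assertThat with Hamcrest matchers was promoted over assertTrue/assertEquals: a failing matcher explains what was expected and what was actually found, instead of producing a bare AssertionError.

import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.contains;
import static org.hamcrest.Matchers.hasSize;

import java.util.Arrays;
import java.util.List;
import org.junit.Test;

public class SearchResultsTest {
    @Test
    public void searchReturnsMatchingIssueKeys() {
        List<String> keys = Arrays.asList("JRA-1", "JRA-2", "JRA-3");

        // assertTrue(keys.size() == 3) would fail with a bare AssertionError;
        // these matchers report the expected and actual values in the failure message.
        assertThat(keys, hasSize(3));
        assertThat(keys, contains("JRA-1", "JRA-2", "JRA-3"));
    }
}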

Quality

Automatic Flakiness Detection & Quarantine

Re-run failed tests and see if they pass
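A minimal sketch of that idea as a JUnit 4 TestRule - a hypothetical FlakinessDetectingRule, not the actual Bamboo/JIRA quarantine machinery: a test that fails and then passes on a re-run is flagged as flaky (a quarantine candidate), while a test that stays red is reported as a genuine failure.

import org.junit.rules.TestRule;
import org.junit.runner.Description;
import org.junit.runners.model.Statement;

public class FlakinessDetectingRule implements TestRule {
    private final int maxRuns;

    public FlakinessDetectingRule(int maxRuns) {
        this.maxRuns = maxRuns;
    }

    @Override
    public Statement apply(final Statement base, final Description description) {
        return new Statement() {
            @Override
            public void evaluate() throws Throwable {
                Throwable firstFailure = null;
                for (int run = 1; run <= maxRuns; run++) {
                    try {
                        base.evaluate();
                        if (firstFailure != null) {
                            // Failed first, passed on a re-run: a prime quarantine candidate.
                            System.err.println("FLAKY: " + description + " passed on run " + run);
                        }
                        return;
                    } catch (Throwable t) {
                        if (firstFailure == null) {
                            firstFailure = t;
                        }
                    }
                }
                throw firstFailure; // consistently red: a genuine failure
            }
        };
    }
}

A test class would opt in with @Rule public FlakinessDetectingRule retry = new FlakinessDetectingRule(3);.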

Quarantine - Healing

SlowMo - expose races

Selenium 1

Selenium ditching - the sky did not fall in

Ditching - benefits

• Freed build agents - better system throughput

• Boosted morale

• Gazillion of developer hours saved

• Money saved on infrastructure

Ditching - due diligence

• conducting the audit - analysis of the coverage we lost

• determining which tests need to be rewritten (e.g. security-related)

• rewriting some of the tests (good job for new hires + a senior mentor)

Flaky Browser-based Tests

Races between test code and asynchronous page logic

Playing with the "loading" CSS class does not really help

Races Removal with Tracing

// in the browser:
function mySearchClickHandler() {
    doSomeXhr().always(function() {
        // This executes when the XHR has completed (either success or failure)
        JIRA.trace("search.completed");
    });
}
// In production code JIRA.trace is a no-op

// in my page object:
@Inject TraceContext traceContext;

public SearchResults doASearch() {
    Tracer snapshot = traceContext.checkpoint();
    getSearchButton().click(); // causes mySearchClickHandler to be invoked
    // This waits until the "search.completed" event has been emitted, *after* the previous snapshot
    traceContext.waitFor(snapshot, "search.completed");
    return pageBinder.bind(SearchResults.class);
}

Can we halve our build times?

Speed

Parallel Execution - Theory
(diagram: test batches spread evenly between start and end of build)

Parallel Execution
(diagram: batches over build time)

Parallel Execution - Reality Bites
(diagram: batches over build time, with agent availability delaying when batches start)

Dynamic Test Execution Dispatch - Hallelujah

"You can't manage what you can't measure."

not by W. Edwards Deming

If you believe just in it, you are doomed.

You can't improve something if you can't measure it

Profiler, Build statistics, Logs, statsd → Graphite
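As a rough sketch of the statsd → Graphite part, a build step can push phase timings as statsd timer metrics over UDP. The host, metric name and value below are assumptions for illustration, not the actual JIRA CI configuration (8125 is merely the conventional statsd port).

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class BuildMetrics {
    public static void timing(String metric, long millis) {
        // statsd wire format for a timer: "<name>:<value>|ms"
        byte[] payload = (metric + ":" + millis + "|ms").getBytes(StandardCharsets.UTF_8);
        try (DatagramSocket socket = new DatagramSocket()) {
            socket.send(new DatagramPacket(payload, payload.length,
                    new InetSocketAddress("statsd.internal", 8125)));
        } catch (Exception e) {
            // Metrics are best-effort; never fail the build because of them.
        }
    }

    public static void main(String[] args) {
        timing("ci.jira.unit_tests.compilation", 7 * 60 * 1000);
    }
}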

Anatomy of a Build*

Compilation
Packaging
Executing Tests
Fetching Dependencies
SCM Update
Agent Availability/Setup
Publishing Results

*Any resemblance to a Maven build is entirely accidental

JIRA Unit Tests Build

Compilation (7min)
Packaging (0min)
Executing Tests (7min)
Fetching Dependencies (1.5min)
SCM Update (2min)
Agent Availability/Setup (mean 10min)
Publishing Results (1min)

Decreasing test execution time to ZERO alone would not let us achieve our goal!

Agent Availability/Setup

• starved builds due to busy agents building very long builds

• time synchronization issue - NTPD problem

Fixes applied

• Proximity of SCM repo

• shallow git clones are not as fast and lightweight as expected, and they generate extra CPU load on the git server

• git clone per agent/plan + git pull + git clone per build (hard links!)

• Atlassian Stash was thankful (queue)

SCM Update - Checkout time

2 min → 5 seconds

Trade disk space for speed

• Fix Predator

• Sandboxing/isolation agent trade-off:
  rm -rf $HOME/.m2/repository/com/atlassian/*
  into
  find $HOME/.m2/repository/com/atlassian/ -name "*SNAPSHOT*" | xargs rm

• Network hardware failure found (dropping packets)

Fetching Dependencies

1.5 min → 10 seconds

Compilation

• Restructuring the multi-pom Maven project and its dependencies

• Maven 3 parallel compilation FTW: -T 1.5C*
*optimal factor found thanks to scientific trial-and-error research

7 min → 1 min

Unit Test Execution

• Splitting unit tests into 2 buckets: good and legacy (much longer); see the sketch below

• Maven 3 parallel test execution (-T 1.5C)

7 min → 5 min

3000 poor tests (5 min)

11000 good tests (1.5 min)
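One common way to express such a split in the code itself - a sketch, not necessarily how JIRA did it - is JUnit 4 categories: legacy tests carry a marker annotation, and the build runs the two buckets separately. The class, test names and the LegacyTests marker interface below are invented for illustration.

import org.junit.Test;
import org.junit.experimental.categories.Category;

public class IssueKeyParserTest {

    /** Marker interface for the slow, legacy bucket. */
    public interface LegacyTests {}

    @Test
    public void parsesProjectKey() {
        // fast, isolated "good" unit test - runs in the default (fast) bucket
    }

    @Category(LegacyTests.class)
    @Test
    public void parsesKeyAgainstRealDatabase() {
        // slow legacy test - excluded from the fast bucket and run separately
    }
}

The fast build then excludes the LegacyTests category (for example via Surefire's excludedGroups), while a separate, longer build runs only that category.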

Functional Tests

• Selenium 1 removal did help

• Faster reset/restore (avoid unnecessary work; intercepting SQL operations for debug purposes is costly because building stack traces is expensive)

• Restoring via Backdoor REST API

• Using the REST API for common setup/teardown operations (see the sketch after this list)
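A hedged sketch of what such a backdoor call can look like from test code, using plain HttpURLConnection. The class name, endpoint path and JSON payload are hypothetical; the real JIRA backdoor lives in its functional-test plugins, and this only shows the shape of restoring test data over REST instead of driving the UI.

import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class BackdoorClient {
    private final String baseUrl;

    public BackdoorClient(String baseUrl) {
        this.baseUrl = baseUrl;
    }

    /** Restores a named data snapshot before a test runs. */
    public void restoreData(String snapshotName) throws IOException {
        URL url = new URL(baseUrl + "/rest/testkit-backdoor/1.0/dataImport"); // hypothetical path
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestMethod("POST");
        connection.setRequestProperty("Content-Type", "application/json");
        connection.setDoOutput(true);
        byte[] body = ("{\"snapshot\":\"" + snapshotName + "\"}").getBytes(StandardCharsets.UTF_8);
        try (OutputStream out = connection.getOutputStream()) {
            out.write(body);
        }
        if (connection.getResponseCode() != 200) {
            throw new IOException("Backdoor restore failed: HTTP " + connection.getResponseCode());
        }
        connection.disconnect();
    }
}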

Functional Tests

We like this trend

Publishing Results

• Server log allocation per test → now using the Backdoor REST API (was Selenium)

• Bamboo DB performance degradation for rich build history - to be addressed

1 min → 40 s

Unexpected Problem

• Stability issues with our CI server

• The bottleneck changed from I/O to CPU

• Too many agents per physical machine

JIRA Unit Tests Build Improved

Compilation (1min)

Packaging (0min)

Executing Tests (5min)

Fetching Dependencies (10sec)

SCM Update (5sec)

Agent Availability/Setup (3min)*

Publishing Results (40sec)

Improvements Summary

Tests            | Before  | After  | Improvement
Unit tests       | 29 min  | 17 min | 41%
Functional tests | 56 min  | 34 min | 39%
WebDriver tests  | 39 min  | 21 min | 46%
Overall          | 124 min | 72 min | 42%

* Additional ca. 5% improvement expected once new git clone strategy is consistently rolled-out everywhere

The Quality Follows

But that's still bad

We want the CI feedback loop to take a few minutes at most

Splitting The Codebase

Codebase Split - Problems

• Organizational concerns - understanding, managing, integrating, releasing

• Mindset change - if something worked for 10 years, why change it?

• We damned ourselves with big buckets for all tests - where do they belong now?

Splitting the code base

• Step 0 - JIRA Importers Plugin (3.5 years ago)

• Step 1 - New Issue View and Navigator (JIRA 6.0)

We are still escaping hell. Hell sucks in your soul.

Conclusions

• Visibility and problem awareness help

• Maintaining a huge testbed is difficult and costly

• Measure the problem, measure improvements

• No prejudice - no sacred cows

• Automated tests are not a one-off investment; they are a continuous journey

• Performance is a damn important feature

Revised XP Promise
(chart: Cost of Change over Time - Waterfall vs. XP vs. Sad Reality)

Interested in such stuff?

Talk to me at the conference or visit http://www.spartez.com/careers

We are hiring in Gdańsk

Images - Credits

• Turtle - by Jonathan Zander, CC-BY-SA-3.0

• Loading - by MatthewJ13, CC-SA-3.0

• Magic Potion - by Koolmann1, CC-BY-SA-2.0

• Merlin Tool - by L. Mahin, CC-BY-SA-3.0

• Choose Pills - by *rockysprings, CC-BY-SA-3.0

• Flashing Red Light - by Chris Phan, CC BY 2.0

Thank You!

Tweet your feedback at @wseliga
