Mobile Testing Challenges at 6Wunderkinder
TRANSCRIPT
Intro
• What I will talk about.
• A little history of our Wunderlist 3 undertaking and the challenges we faced.
• Some of the software tools we use for reporting, monitoring and metrics.
• A brief example of our release flow.
• Some background of our automation efforts.
A Brief History of Wunderlist 3
• In late 2013 we started building an entirely new API for Wunderlist 3.
• We built around 30 new micro web services.
• Half of the apps have German names (they handle authentication/security).
• The other half have English names (they handle the service logic).
• During this same period we started building all new client applications.
• iOS, Mac, Android, Web, Chrome OS, Windows Desktop, Windows Phone. (and I’m sure I’m missing some…) ;)
• These were the dark days…
• Servers were in constant flux.
• Sync was constantly failing.
• Apps were crashing.
• Users were not happy at all! For example:
• I give up on @Wunderlist. You have ONE job and that’s keeping my lists in sync across devices.
• I want to love @wunderlist but SYNC FAIL! back to legal pads for me.
• @Wunderlist Please just work. I’m on the edge of my patience. It won’t sync. It won’t let me log in. WHYY JUST WORK.
• So we decided to focus our automation efforts on the API, integration tests, and adding as many monitoring scripts as possible.
• Any UI automation would have to wait.
• Approximately June 2014 - We started to see the light!
• Our services and apps were stabilising!
• Outages were few and far between.
• Apps were not crashing regularly.
• We weren’t receiving many sync failure complaints anymore.
• So on 30 July 2014 we launched Wunderlist 3.
• We spun up many EC2 instances to handle the traffic load.
• It actually ended up being a really boring launch. Thankfully! All the preparation we did paid off.
Monitoring, Reporting, Analytics
• Librato
• Returns realtime metrics of our many services.
• PagerDuty
• Alerts our on-call team when our errors, latency or queue thresholds are exceeded.
• On-call person can easily scale up more AWS resources, if needed.
• HockeyApp
• Sends us crash reports and captures analytics for our iOS, Mac and our Windows apps.
• Crashlytics
• Sends us crash reports and captures analytics for our Android apps.
• Analytics
• We use a homegrown system to capture all of our client analytics data.
• We use this data for A/B testing.
• This data also helps us decide where to focus our efforts, e.g. new features, platforms, OSs, testing, automation, etc.
• Integration, monitoring and API tests run continuously.
• Slack
• Almost all of the above report into our Slack channels.
• Example of our continuously running API Monitor and API Test scripts reporting to Slack.
• These scripts give us the flexibility of deploying often.
• Empower others to easily run the tests themselves!
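The Slack reporting described above can be sketched as a small script that posts a run summary to an incoming webhook. This is a minimal illustration, not our actual script: the webhook URL, method names, and message format are all assumptions.

```ruby
require "json"
require "net/http"
require "uri"

# Hypothetical incoming-webhook URL; a real one would live in a secrets store.
SLACK_WEBHOOK = URI("https://hooks.slack.com/services/T000/B000/XXXX")

# Build the Slack message for one monitoring/test run.
def slack_payload(suite:, passed:, failed:, duration_s:)
  status = failed.zero? ? ":white_check_mark:" : ":x:"
  {
    text: "#{status} #{suite}: #{passed} passed, #{failed} failed " \
          "in #{duration_s.round(1)}s"
  }
end

# Post the payload; callers (cron, CI) can treat a non-2xx response as a failure.
def report_to_slack(payload)
  Net::HTTP.post(SLACK_WEBHOOK, payload.to_json,
                 "Content-Type" => "application/json")
end
```

Because anyone can run the script and the results land in a shared channel, the whole team sees the same pass/fail signal after every deploy.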
Example of our release flow
• Client devs are given a feature to build.
• Client devs add unit tests, accessibility, and translations for the feature.
• We crowdsource our translation strings with getlocalization.com (or used to, at least; Microsoft now has a team dedicated to localisation).
• Client devs are responsible for testing their changes before merging.
• When changes are merged into master, automation tests (Android, iOS or Web) trigger.
• HockeyApp generates a nightly build.
• Internal users receive an update notification.
• We then “dog-food” the app internally.
• All business is funnelled in and out of Wunderlist. (Requirements, Design, Bug Tracking, etc…)
• Internally, we only ever use the development version of Wunderlist.
• Our Support/QA team then manually tests changes associated with the release.
• We (QA/Support) or company users report bugs/enhancements into Wunderlist.
• For larger releases we create a beta build.
• Depending on the platform, the build can be released to thousands of testers.
• When we feel the app is ready, it’s released into the wild.
• We constantly monitor crash reports, our continuous monitoring scripts, and our Librato data.
• For Android, it’s released in small increments through Google Play.
• Our Support team is world class!
• They keep a pulse on what’s happening with our users and the system at all times.
• We use TestObject to test against real devices, if we don’t have a device in-house.
Web Automation
• Our Web team is a machine!
• They’re constantly cranking out updates and have many frequent releases.
• Web is also one of our top user platforms.
• So it’s imperative we have a good regression test suite. Thanks to my colleague Vikram, we do!
Our Mobile Grid
So what can it do?
• Runs our Android and iOS automation suite.
• It can run single threaded, in parallel, or distribute the tests amongst devices.
• Collects important data about the devices at the beginning of each test run.
• We unfortunately (for now) have to share our devices with developers and designers, so it’s not unusual for devices to walk away.
• It captures critical test data (Appium log, logcat, stdout, stderr, screenshots, and video), then inserts it all into our reporting tool, Allure.
• Easy to run with a one-liner. (starts Appium server and creates nodes)
• rake android[parallel_rspec, grid, smoke]
• rake android[parallel, testobject, regression]
• rake ios[single, local, smoke] (iOS cannot run in parallel at the moment.)
• rake irb[android, local]
• To-dos: complete iOS parallelisation, Mac automation.
• Selenium Grid with connected Nodes.
• We capture OS, SDK, UDID, Manufacturer and Model.
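The device data collected at the start of each run (OS, SDK, manufacturer, model) can be read from `adb shell getprop` output. The sketch below parses that output; the property keys are standard Android build props, but the function name and how our grid actually invokes it are assumptions. The UDID itself would come from `adb devices`, not getprop.

```ruby
# Map of standard Android build properties to the fields our grid records.
PROPS = {
  "ro.build.version.release" => :os,
  "ro.build.version.sdk"     => :sdk,
  "ro.product.manufacturer"  => :manufacturer,
  "ro.product.model"         => :model,
}.freeze

# Parse `adb shell getprop` output into a hash of device info.
def parse_device_props(getprop_output)
  getprop_output.each_line.with_object({}) do |line, info|
    # getprop lines look like: [ro.product.model]: [Nexus 5]
    next unless line =~ /\A\[(.+?)\]:\s*\[(.*)\]/
    key = PROPS[Regexp.last_match(1)]
    info[key] = Regexp.last_match(2) if key
  end
end

# Per node, the grid could call something like:
#   parse_device_props(`adb -s #{udid} shell getprop`)
```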
Let’s see it run!
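The `rake android[...]` one-liners above might be backed by a task shaped roughly like this. The argument names come from the commands shown earlier; everything inside the task body (the helper, the spec layout, the tag scheme) is a guess for illustration, not our real Rakefile.

```ruby
require "rake"
include Rake::DSL

# Assemble the runner command; the spec directory layout and tags are assumptions.
def android_test_command(mode, target, suite)
  runner = mode == "parallel_rspec" ? "parallel_rspec" : "rspec"
  "#{runner} spec/android/#{suite} --tag #{target}"
end

# Rough shape of the task behind `rake android[parallel_rspec, grid, smoke]`.
task :android, [:mode, :target, :suite] do |_t, args|
  # start_appium_server_and_nodes(args[:target])  # the "one-liner" setup step
  sh android_test_command(args[:mode]   || "single",
                          args[:target] || "local",
                          args[:suite]  || "smoke")
end
```

Keeping the setup inside the task is what makes the one-liner possible: anyone can kick off a full device-grid run without knowing how Appium or the nodes are wired up.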
That is great & all but reporting is key!
• Our Android CI Pipeline.
• Unfortunately, not everyone is a fan. ;)