

Featuring research from

Issue 1

IBM Rational Mobile Testing Point-of-View


IBM Rational Mobile Testing Point-of-View


Most mobile apps have a multi-tier architecture: the code running on the mobile device itself is the “front-end” client to data and services supplied by more traditional middle-tier and data center “back-office” systems. Effective and comprehensive testing of mobile apps requires that all aspects of the “application” be addressed, not only the code on the mobile device.

The implication of comprehensive, multi-tier testing is that more than one tool and more than one test execution capability must be used and coordinated into a single application quality result. No one testing technique is adequate by itself for mobile apps. IBM Rational Quality Manager (RQM) is an excellent choice for tying together and managing the various test execution engines, IBM-supplied and non-IBM, that are required for mobile testing. This paper describes the IBM Point-of-View on the execution engines needed for a complete mobile testing solution.

There are several techniques for testing and validating mobile apps in use in the market today. Our Point-of-View is that an effective development project will employ all of the applicable techniques for their mobile app because each technique has its strengths and weaknesses. There is no single perfect answer for mobile app testing, and the various techniques available are not mutually exclusive. The most effective testing strategy balances the use of all forms of mobile test execution and rolls up the results from each into a comprehensive overall mobile app “quality metric”.

The following are various techniques for mobile app testing.

Manual testing

Manual testing is the most common approach to mobile testing in the industry today. It is an essential element of any quality plan for mobile apps because it is the only technique that currently provides results on the consumability of the app. Intuitiveness and consumability are crucial aspects of successful mobile apps and, so far, there is no mechanism for automating the testing of these aspects of the code.

But manual testing is also the most time-consuming, error-prone, and costly technique for mobile testing. Solutions that organize the manual test cases, guide the tester through execution, and store the test results can substantially reduce the costs associated with manual testing, and RQM includes just such capabilities.


Manual testing can be combined with other techniques such as crowd-sourcing and “device-cloud” techniques so that the costs and time required can be somewhat mitigated.

Emulators and simulators

Emulators come with all of the native mobile operating system development kits, and simulators are available from IBM and several other sources. Emulators attempt to replicate the actual mobile operating system running on top of some other hardware, such as a PC workstation. Simulators do not attempt to replicate the mobile OS, but instead provide a light-weight simulation of the user interface only.

Using emulators and simulators for some amount of mobile app testing can be cost-effective, especially in the early stages of code development. The typical code/deploy/debug cycle for emulators and simulators is usually much more rapid than for physical devices, and using these tools eliminates the need for a developer to have the real physical device in hand. However, there are subtle differences in behavior between device emulators and real physical devices (even for the very best emulator programs), and simulators do not allow execution of the application logic flow at all (only the UI look and flow). So, while emulators and simulators can be used to cut costs and speed development, they are generally not acceptable as the only form of test execution for mobile apps.

IBM Rational offers mobile simulators as part of the development tools for the IBM mobile enterprise solution. Emulators are always supplied directly by the suppliers of the mobile operating systems.

Test virtualization

Because mobile applications are multi-tier architecture, the process of setting up the infrastructure to support test execution of the code on the mobile device can be time-consuming and costly. All of the middleware servers and services need to be up and available, and typically it is not acceptable to use real production servers for testing purposes. In addition, many test case failures can occur not because of defects in the code under test, but instead because of problems in the connected components of the application running in other tiers. In other words, if the middle-tier app server has a problem, the mobile device accessing it will fail its test case.

Cost and deployment delays can be eliminated through use of solutions that effectively replicate the connecting components of the multi-tier system, so that testing can concentrate narrowly on the code executing on one specific tier of the app. By leveraging the IBM Rational Test Workbench (RTW) solution, test teams can avoid the need to set up complex middleware environments in support of test execution for code running on the mobile devices. RTW can emulate the middle-tier and back-end services and protocols so that test execution can concentrate on the client tier of the mobile app that is running on the device itself. Conversely, the middle-tier components of the mobile app need to be validated as well, and RTW can emulate the protocols delivered by the mobile device clients so that tests can be focused solely on the middle-tier functions and services, without the need to coordinate physical mobile devices.

Mobile app instrumentation

Ultimately there is a requirement to replace manual functional verification with some form of automated testing of the code that is executing on the mobile device. This area of the market is very new and immature. There are a variety of approaches that have been created by different vendors. Some of these approaches to “on device” automated function test are better suited to typical mobile business applications than others.

A typical approach is to place some kind of additional code on the device where the automated testing is to occur. This code acts as a local “on device” agent that drives automated user input into the application and monitors the behavior of the application resulting from this input.

The instructions for telling the agent what to input into the app are typically formatted as either a script or an actual computer program (for instance, written in Java). Creation of these automated test instructions usually requires some proficiency in programming, and many test organizations are short on such skills. Furthermore, creation of these automated test programs is a development effort in its own right, which can delay the delivery of the mobile app into production.

IBM Rational’s Point-of-View is that the creation of automated mobile function test scripts should be possible for testers who have no programming skills at all. The tester should be able to put the device into “record mode” and execute a series of tests while the testing solution (i.e., the test agent running on the device) captures the user input and converts it into a high-level script on the tester’s behalf. Once the tests have been “recorded” into a language that is close to human-written instructions, these test scripts can be organized, managed, and replayed whenever necessary.

Furthermore, since the language employed for the captured test scripts is nearly natural human language, the tester can easily read and modify the script to add elaboration and additional verification points to the instructions. If the language is suitably abstracted from the details of the underlying mobile operating system, these scripts can be executed against real physical devices other than the type used to capture the script in the first place.
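What such a near-natural-language script and its replay engine might look like can be sketched as follows. The three-command grammar (enter / tap / verify) and the FakeDevice driver are invented purely for illustration; they are not the actual IBM recording format.

```python
# Hypothetical replay of a "recorded" near-natural-language test script.
# A real solution would swap FakeDevice for a per-platform on-device agent,
# which is what makes the same script portable across device types.
import re

SCRIPT = """\
enter alice into UserField
tap LoginButton
verify WelcomeLabel shows Welcome, alice
"""

class FakeDevice:
    """Stand-in driver; a real agent would drive the actual UI."""
    def __init__(self):
        self.fields, self.log = {}, []
    def tap(self, widget):
        self.log.append(("tap", widget))
        if widget == "LoginButton":  # simulate the app reacting to the tap
            self.fields["WelcomeLabel"] = f"Welcome, {self.fields.get('UserField', '')}"
    def enter(self, text, widget):
        self.fields[widget] = text
    def text_of(self, widget):
        return self.fields.get(widget, "")

def replay(script, device):
    """Run each instruction against the device; return the failed lines."""
    failures = []
    for line in script.strip().splitlines():
        if m := re.match(r"tap (\w+)", line):
            device.tap(m[1])
        elif m := re.match(r"enter (\S+) into (\w+)", line):
            device.enter(m[1], m[2])
        elif m := re.match(r"verify (\w+) shows (.+)", line):
            if device.text_of(m[1]) != m[2]:
                failures.append(line)
    return failures
```

Because the script is plain text, a tester can add a `verify` line by hand without touching any programming language, which is the point the paragraph above makes.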

The current development effort in the IBM Labs is driving to produce just such an automated solution. This technique for automating mobile function tests is quite complementary to the other techniques described in this document, and can be used very effectively in combination with them. This automated mobile function testing capability is not commercially available today, but will become part of the Rational Test Workbench set of tools so that it can easily be combined with other testing techniques for mobile apps.

Device clouds

Some amount of testing on real physical devices is a generally acknowledged requirement prior to the production release of any mobile app. The previously described techniques address both manual and automated execution of tests on real devices, but what about the sheer number of different physical mobile devices on the market? There are literally thousands of different device types, running different release levels of mobile operating systems and connected to different network carriers and wireless networks. The combinatorial complexity of the possible permutations is almost beyond comprehension, and the cost of owning, setting up, and managing all of those combinations is prohibitive even for very well-funded projects.

A technique that can address this problem is to employ a “device cloud” testing solution. Device cloud is a term used to describe a very large array of real physical devices that have been made available on a rental basis for access across the internet much in the same way that general compute resources are made available in a generic “Test Cloud” solution. The test organization arranges to reserve some mobile devices for test for a certain amount of time, and deploys the mobile app code to the devices where automated tests are run using whatever “on device” automated test solution is the choice of the test organization.

Once the current testing cycle is completed, the reserved devices are relinquished back to the “device cloud” where they are available to be used for other mobile app testing, potentially by a completely different project.
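The reserve/test/release cycle described above can be sketched as a simple pool with a guard that always returns the device. The DeviceCloud class and its methods are hypothetical; real vendors such as Keynote DeviceAnywhere or Perfecto Mobile each expose their own reservation APIs.

```python
# Illustrative device-cloud lifecycle: reserve a device, run tests, release it.
# All class and device names are invented for the sketch.
from contextlib import contextmanager

class DeviceCloud:
    def __init__(self, inventory):
        self.available = set(inventory)
        self.reserved = set()

    def reserve(self, device):
        if device not in self.available:
            raise RuntimeError(f"{device} is not available")
        self.available.remove(device)
        self.reserved.add(device)

    def release(self, device):
        self.reserved.discard(device)
        self.available.add(device)

@contextmanager
def rented(cloud, device):
    """Guarantee the device goes back to the pool even if a test raises."""
    cloud.reserve(device)
    try:
        yield device
    finally:
        cloud.release(device)
```

The context manager captures the key operational point: once the testing cycle completes (or fails), the device is relinquished back to the cloud for other projects.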

This technique does not eliminate the need for manual and automated “on device” solutions, and in fact only works in conjunction with some form of execution of those other techniques. What this approach is good at is reducing the cost of ownership for the huge variety of device types that exist and can be expected to be employed by the users of the mobile app once it gets into production. A test organization can invest in purchasing just a few key mobile devices and “rent” the rest of the combinations from the “device cloud”. The same manual and automated techniques for mobile testing used on the stand-alone physical devices can also be used for the devices in the “cloud”, so results of the testing are consistent.

There are issues related to this kind of testing, whether the resources in the cloud are general compute resources or mobile device resources. Security of the app under test, the choice between public and private device clouds, and balancing the cost of the cloud against the potential cost of the defects eliminated are all issues that any cloud testing solution has to address.

IBM Rational offers integration between its overall mobile testing management solution (RQM) and a variety of business partners who operate device clouds. Keynote DeviceAnywhere is a Ready-for-Rational certified business partner that offers one such device cloud environment, and IBM is also working on certifying alternative device cloud vendors, such as Perfecto Mobile.

Summary

Our Point-of-View is that a comprehensive solution for mobile testing and quality management encompasses the full spectrum of mobile test techniques currently in use in the industry. The key elements of this solution are:

1. Use RQM to organize and manage the various test execution tools for mobile app testing. Consolidate and link the test results from these multiple execution tools into a single mobile app quality metric, and link data from test case failures back to development work items for defect removal.

2. Use RQM’s manual test management capabilities to reduce the costs and increase the efficiency of your manual testing on the mobile app.

3. Use RTW to isolate various tiers of the mobile app so that testing can be concentrated solely on those specific tiers. Test the code on the mobile device without needing to have a complete middle-tier up and running. Test the middle-tier mobile app logic without having to coordinate large numbers of mobile device clients. Use RQM to coordinate between RTW and the execution tool for the tier of the mobile app being tested.

4. Automate the tests for the code that executes directly on the mobile device using the pending RTW automated mobile testing capability. There is no need to hire skilled programmers in order to produce the automated versions of your manual test cases. Use your existing test staff for this.

5. Concentrate your investment in real physical devices to only the highest priority device types and OS release levels. Rent the other permutations from a device cloud vendor that is integrated with RQM so that the same consistent tests can be applied to the “cloud devices” as is used for your in-house physical devices.

6. Use emulators and simulators for the code construction stages of app development, and potentially for post-build “sniff tests”. Use the same Rational automated mobile test capability to automate emulator test cases as you use for the real physical devices (the solution works on both emulators and real devices).

IBM’s point of view for a comprehensive approach to mobile application testing is based on input from many sources. We’ve collaborated with clients who are developing mobile apps, we have many internal product development teams that have given us feedback, and we actively seek market trends from notable industry analysts. Gartner is one such analyst organization that has developed a guide for mobile testing. As you read through the guide from Gartner, look for recommendations that match the IBM mobile testing point of view.

Source: IBM
Leigh Williamson, Distinguished Engineer


From the Gartner files

A Guide to Mobile Testing Tools and Services

Mobile application testing will become more complex, critical and expensive. Organizations won’t find a tool to meet all their mobile testing needs, but can assemble a useful portfolio of tools to address many mobile testing challenges.

Key Findings

• Mobile testing is already challenging, and will be more so as devices become more diverse and sophisticated, platform variants emerge and applications grow more complex due to the introduction of concepts such as context.

• A wide range of tools support aspects of mobile testing, although no single source can provide a one-stop shop.

• Mobile testing tools are capable, but cannot automate or simulate all aspects of a mobile device and application.

Recommendations

• All organizations developing mobile applications should select a portfolio of testing tools and services to help manage the costs and risks associated with testing increasingly sophisticated contextual mobile applications.

• As applications become more demanding and complex, plan to perform more testing on real devices and networks, rather than through emulators.

• Prioritize your testing effort to concentrate on high-risk functions and the devices that are popular in your user population.

Analysis

For mobile developers, mobile testing is the “elephant in the room.” It will become increasingly expensive and challenging, because of device and platform diversity, variable network performance, growing application complexity and the increasing sophistication of mobile user experiences. In this research, we summarize the landscape of tools, techniques and services that may be required to create a complete mobile testing capability. We specifically highlight mobile issues, but not tasks such as unit testing and server-side performance testing, which are important, but have fewer mobile implications. We focus on testing for quality assurance; other tools and techniques may be needed to test security (see Note 1).

All vendors and products in this research are illustrations, and the inclusion or exclusion of a vendor does not imply a Gartner recommendation or criticism. We will publish a deeper analysis of specific categories of mobile testing tools and vendors in the future.

Why Mobile Testing Is Difficult

A mobile testing strategy and tool portfolio must address several key challenges:

• Diversity in platforms, OSs and devices: In some regions, up to five different mobile OSs have significant market share, and the installed base will include several versions of each. The most popular mobile OS — the Android — is the most fragmented, with five major versions corresponding to nine API sets that had over 1% market share in April 2012. This complexity is compounded by hundreds of different device designs, screen sizes and form-factor variations, such as tablets and handsets. In markets where developers must target feature phones, there can be more than 100 different device types in the installed base (see Note 2).

• Automation challenges: Sophisticated user experiences involve touch, gestures, GPS location, audio, sensors (such as accelerometers) and physical actions (such as touching the handset to Near Field Communication [NFC] readers). Such interactions can’t be fully scripted or simulated, and may involve manual testing on real devices.

• Network performance: Cellular networks are nondeterministic, and exhibit wide variations in bandwidth and latency. Users in different locations accessing different operators can experience vastly different application performance.

• Application complexity and sophistication: Mobile devices and applications are becoming more sophisticated, using techniques such as context, 3D graphics and gaming. Greater sophistication implies more complex testing.

• New OS versions often break applications: Developers have no control over when new OS versions will appear, and when or whether users will upgrade. Thus, it’s common for new OS releases to break existing native applications.

• Bug-fix latency: Some app stores have a submission latency of one to two weeks, meaning that bugs cannot be corrected rapidly, making application quality more important.

• New technology risks: Mobile is at the leading edge of new technologies, such as HTML5, that are not yet well-understood in terms of testing.

• Performance variations: There is a performance difference of greater than one order of magnitude across devices in the smartphone installed base. An app or HTML5 website that runs well on a top-end device may not be acceptable on a low-performance handset.

• Operator intervention: Operators may modify mobile Web content to optimize network performance so that desk-based testing may not show what a real-world handset user experiences.

• Contextual issues: Mobile applications and websites are used in a wide range of contexts, which raises many new testing challenges. For example, can the app be used by someone wearing gloves? Is the on-screen text readable by users of all ages? Are the screen buttons and Web page hyperlinks far enough apart so that they can’t be selected by mistake? Applications dealing with critical or regulated data may demand much more rigorous testing.

• Peripherals: Smartphones are acquiring a growing range of add-on devices, such as Bluetooth peripherals, that are generally not accessible to testing tools.

• Testing user opinions: Many applications will be distributed via public app stores, where user reviews and attitudes play a large part in determining whether an app is downloaded. Understanding user reactions to an app and its store collateral before it’s published may be valuable.

The challenges of testing are illustrated by the fact that even on platforms such as iOS, which exhibit limited fragmentation, application bugs are one of the most frequent causes of poor application reviews.

The Landscape for Mobile Testing Tools and Services

The mobile testing landscape is complex. While there are no simple solutions that address all needs, many products and services can form useful components of a testing strategy, including:

• Hardware cloud services

• Scripted testing tools

• Monitoring and analytics

• Emulators

• Code and Web analysis tools

• Testing services and crowdsourcing

Hardware Cloud Services

Cloud testing vendors take handsets and perform electronic modifications so that they can be rack-mounted and accessed remotely over the Internet. These handset farms allow access to a wide range of handsets on different network operators, and offer scripting tools to enable test automation. The main advantage of hardware cloud services is that they provide access to real devices, allowing testers to validate most aspects of application performance and behavior without the loss of fidelity introduced by emulators.

Cloud testing is an excellent way to access a wide range of handsets. However, it has limitations, as test automation may use unreliable techniques such as optical character recognition (OCR) and image recognition, and some interactions involving sensors, peripherals and location may not be available via the cloud. Hardware clouds can only provide access to handsets in a limited range of countries, as the cloud vendor has to establish hosting sites in each location. A typical cost for basic cloud testing services is approximately $10 to $25 per handset per hour, depending on the time purchased; access to automation tools or dedicated hardware costs more.

Cloud testing services are accessed via the Internet, so they can be used in conjunction with outsourced testing labor. Cloud services can also be used to access devices on a range of networks to test application performance (see Note 3 for selected vendors).

Scripted Testing Tools

Scripted testing tools operate with handsets connected to a PC (e.g., via USB), or are accessed remotely via a cloud vendor such as those discussed in the prior section. These tools allow developers to run automated tests under the control of a script. Some mobile platforms don’t allow test software to access low-level OS services, so scripted testing tools often resort to techniques such as OCR and image recognition to identify screen features and events, although these techniques may not be wholly reliable. Open-source browser scripting tools such as Selenium have equivalents on some mobile platforms such as Android and iOS. Like the hardware clouds discussed above, scripting tools can’t necessarily test all aspects of an application or a device, particularly if the testing involves sensors, peripherals or location-based services (see Note 4 for selected vendors).
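The image-recognition fallback mentioned above can be illustrated with a toy example that searches a text "screen capture" for a template and records a tap at its location. Real tools match pixel bitmaps and drive a live device; every name and the grid representation here are invented for the sketch.

```python
# Toy model of image-recognition-based scripting: when a platform blocks
# programmatic widget access, the tool locates a template in a screen
# capture and taps its coordinates instead.

SCREEN = [
    "................",
    "....[ LOGIN ]...",
    "................",
]

def locate(screen, template):
    """Return (row, col) near the centre of the template, or None."""
    for row, line in enumerate(screen):
        col = line.find(template)
        if col != -1:
            return (row, col + len(template) // 2)
    return None

def tap_template(screen, template, taps):
    """Record a tap where the template appears; fail if it is missing."""
    pos = locate(screen, template)
    if pos is None:
        raise AssertionError(f"template {template!r} not found on screen")
    taps.append(pos)
```

The sketch also shows why the paragraph calls these techniques unreliable: any change to the widget's rendering (theme, font, locale) changes the "image" and breaks the match.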

Monitoring and Analytics

Monitoring and mobile Web analytics tools can play a part in some mobile testing strategies to analyze application performance and the user experience. Monitoring tools typically allow organizations to track performance from several locations using several operator networks. These tools can provide ongoing checks on the user experience once a mobile app or website has been deployed, and can identify performance issues on specific operator networks. Some monitoring tools track native applications in addition to mobile websites (see Note 5 for selected vendors).

Emulators

Two types of mobile emulators are commonly encountered: browser emulators and handset emulators. The latter are included in most mobile platform software development kits (SDKs), and will be part of all native application development efforts. However, handset emulators have many limitations; they may not support a wide range of devices, and the emulation of features such as sensors, handset performance and network conditions is necessarily incomplete. Handset emulators are useful for early functional testing, but are not a substitute for access to real devices.

Browser emulators allow developers to test a mobile Web application with software that can be configured to behave like a wide range of smartphone and feature-phone browsers. Sophisticated browser emulators can imitate anything from a few hundred to a few thousand device types, and may include features such as scripted testing. Browser emulators are not perfect, but may be more effective than handset emulators, because browsers are less complex than handsets, and many share common underlying components such as WebKit. Some high-end browser emulators include a cloud service that accesses the application from a range of geographic locations and wireless networks, and may be alternatives to the cloud hardware products discussed above (see Notes 6 and 7 for selected vendors).

Code and Web Analysis Tools

Some types of errors may be detected by a static analysis of the source code or, in the case of mobile websites, the HTML. Analysis tools can identify issues such as inappropriate use of APIs, potential null pointer exceptions, cross-site scripting risks and resource leaks. Static analysis has significant limitations; it’s not available for all languages and platforms, and can’t detect all instances of errors, such as resource leaks. However, it’s a simple way to detect certain types of errors, and can be integrated with development environments, which makes it easy to incorporate in the application life cycle. Another category of analysis tool that may highlight certain types of risks is the website checker, which is a simple Web-hosted product that analyzes a site for compliance with mobile Web guidelines, such as those issued by the World Wide Web Consortium (W3C) or dotMobi (see Notes 7 and 8 for selected vendors).

Testing Services and Crowdsourcing

Several types of services may form part of a testing strategy:

• Outsourced testing: Testing is on the critical path for mobile application delivery, and may require more effort than the organization can easily provide, e.g., when releasing a new application version on several platforms. Organizations may use onshore or offshore service partners for testing, perhaps in conjunction with cloud services, to access a wide range of devices. Organizations such as network operators often use service providers for complex tasks such as handset compliance testing.

• Usability testing: Many IT organizations are not experts in all areas of mobile development, and may engage consultants to perform special-purpose testing and reviews of areas such as the application user interface. Crowdsourced testing may be used to assess usability.

• Crowdsourced testing: A crowdsourcing vendor manages a marketplace that connects individual testers with organizations that need applications tested. Many organizations are cautious about crowdsourced testing, as releasing preproduction applications to a relatively uncontrolled community of testers may have confidentiality implications. Crowdsourcing may not provide the same consistency of results as a more structured test environment, but it provides access to a large population of devices, users and networks, and may be an effective way to get early indications of how well applications may be reviewed when they enter a store. Crowdsourcing vendors have moved beyond functional testing to provide services such as surveys, user experience reviews and market research.

• Focus groups: In cases where the application is performing a critical function, or is dealing with regulated data or services such as healthcare, an organization may use monitored focus groups to test usability. This typically requires specially equipped facilities and trained staff, and is very expensive.

See Note 9 for selected testing service vendors.

Other Testing Considerations

Other issues to consider when planning a mobile testing tool portfolio and testing strategy include:

• Testing efficiency: It will generally be impossible to conduct tests on all devices in the marketplace, so it’s essential to focus testing efforts effectively. First, understand which devices your customers own, and which OS versions they use; then address the most popular cases first. Also review the application to identify the areas that pose greater risks and demand more testing, for example, functions involving mobile payment.

• Collateral: App stores require application collateral such as screen shots and descriptions. This material often dictates whether a user downloads an app, and thus must be tested for clarity and effectiveness. Focus groups and crowdsourcing are useful techniques.

• Implement in ways that minimize testing: Thin-client architectures may be easier to test than thick clients. One approach to mobile website delivery that can reduce the testing burden is to use dynamic adapter tools that adapt Web content to the capabilities of a device and browser. Such tools include libraries containing the characteristics of several thousand devices, and are commonly used to deliver mobile Web applications to feature phones that don’t support HTML5. If the tool is trusted to do a reasonable job of adaptation, then less testing may be required (see Note 10).

• Manual testing will be required: Although testing tools are improving, mobile devices and applications always involve interactions and peripherals that are not accessible to tools, or can only be partially simulated by tools. Examples include Bluetooth peripherals, some device sensors (such as the compass and accelerometers), GPSs and Wi-Fi systems for indoor location. Developers creating sophisticated applications should expect to perform some level of manual testing for the foreseeable future.
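The "testing efficiency" point in the list above amounts to a risk-weighted ordering: weight each (device, OS) pair by its share of your user base and each function by its risk, then test the highest-scoring combinations first. A small sketch follows; all figures and category names are illustrative only.

```python
# Risk-based test prioritisation sketch. Shares and risk weights below are
# made-up numbers; in practice they come from analytics and a risk review.
DEVICE_SHARE = {
    ("android", "2.3"): 0.40,
    ("ios", "5.1"): 0.30,
    ("android", "4.0"): 0.25,
    ("other", "-"): 0.05,
}
FEATURE_RISK = {"payment": 5, "browse_catalogue": 2, "about_page": 1}

def prioritise(device_share, feature_risk):
    """Return (device, os, feature) triples, highest risk-weighted score first."""
    combos = [
        ((device, os, feature), share * risk)
        for (device, os), share in device_share.items()
        for feature, risk in feature_risk.items()
    ]
    combos.sort(key=lambda item: item[1], reverse=True)
    return [combo for combo, _ in combos]
```

With these numbers, payment flows on the most popular device land at the top of the test plan and low-risk screens on rare devices fall to the bottom, which is exactly the concentration of effort the recommendation describes.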


Evidence

The contents of this research were derived from client discussions, research and vendor briefings.

IBM Rational Mobile Testing Point-of-View is published by IBM. Editorial content supplied by IBM is independent of Gartner analysis. All Gartner research is used with Gartner’s permission, and was originally published as part of Gartner’s syndicated research service available to all entitled Gartner clients. © 2012 Gartner, Inc. and/or its affiliates. All rights reserved. The use of Gartner research in this publication does not indicate Gartner’s endorsement of IBM’s products and/or strategies. Reproduction or distribution of this publication in any form without Gartner’s prior written permission is forbidden. The information contained herein has been obtained from sources believed to be reliable. Gartner disclaims all warranties as to the accuracy, completeness or adequacy of such information. The opinions expressed herein are subject to change without notice. Although Gartner research may include a discussion of related legal issues, Gartner does not provide legal advice or services and its research should not be construed or used as such. Gartner is a public company, and its shareholders may include firms and funds that have financial interests in entities covered in Gartner research. Gartner’s Board of Directors may include senior managers of these firms or funds. Gartner research is produced independently by its research organization without input or influence from these firms, funds or their managers. For further information on the independence and integrity of Gartner research, see “Guiding Principles on Independence and Objectivity” on its website, http://www.gartner.com/technology/about/ombudsman/omb_guide2.jsp.

Note 1. Quality Versus Security Testing
We use the term “security testing” to refer to tools that help identify security risks, such as applications vulnerable to hackers, or that might expose personal data such as GPS locations.

Note 2. Handset Fragmentation
Android poses the greatest smartphone fragmentation challenges, because the installed base contains many platform versions. There are varied device types, and manufacturers customize the platform in many ways. A useful site that lists active device versions is Android developer resources. Also, handsets don’t always follow the 80:20 rule, particularly in feature phones, where handset model distribution tends to have a long tail.

Note 3. Cloud Hardware Testing Vendors
Vendors that provide hosted testing farms include Keynote and Perfecto Mobile.

Note 4. Scripted Testing Tool Vendors
Tools and vendors that perform scripted or automated mobile testing include Experitest, TestPlant, Gorilla Logic, Parasoft Mobile Test, Tricentis and Jamo Solutions.

Note 5. Monitoring and Metrics Vendors
Tools and vendors that monitor mobile Web or native application performance include Tealeaf CX Mobile, Shunra (Mobile Zone), Compuware Gomez, Keynote and SmartBear.

Note 6. Emulator Vendors
Platform owners generally provide handset emulators for developer use. Tools and vendors that perform browser emulation include Adobe Device Central, Keynote, iPad Peek, the Opera Mini simulator and the dotMobi emulator.

Note 7. Website Checker Vendors
Tools and vendors that check or validate mobile websites include the W3C mobileOK checker and mobiReady.

Note 8. Static Analysis Vendors
Tools and vendors that perform static mobile code and HTML analysis include Klocwork, Coverity, Parasoft and Veracode.

Note 9. Testing Service Vendors
Usability testing services are available from vendors such as UserZoom, Testing Circle, Infostretch and Testing4Success. Crowdsourced testing vendors include Mob4Hire and uTest.

Note 10. Dynamic Web Adapter Tools
Tools and vendors that perform dynamic Web adaptation for mobile devices are discussed in “Magic Quadrant for Mobile Application Development Platforms” and include Andanza, Antenna, Usablenet and Netbiscuits.

Source: Gartner Research G00230814, Nick Jones, 19 June 2012