Operational Acceptance Test - White Paper, 2015 Capgemini



    Software testing is necessary because the item being tested does not always do what

    it is expected to do.

    - International Organization for Standardization 29119, Part 1

    Performance driven. Quality assured.

Operational Acceptance: an application of the ISO 29119 Software Testing standard

    Anthony J Woods, Testing Services, @usa63w

    Capgemini Australia

    5 June 2015


Introduction

Before any new asset is engaged for productive use, acceptance is one of the final steps in the software development life cycle.

For decades Operational Acceptance has been ill-defined, misunderstood, and/or just plain ignored, while User Acceptance has been written about and hailed as a final phase of testing before production. User Acceptance is but one side of the coin; Operational Acceptance is the other.

Whether a project team uses an agile, iterative or sequential development methodology, there are three key questions that need to be addressed:

1. Is the asset ready for productive use by the organization?

2. Has the asset been built in accordance with specifications?

3. Is the organization ready to operate and support the asset?

To address these questions, the project team needs to perform the non-functional aspects of acceptance testing and evaluation, commonly known as Operational Acceptance Testing (OAT). This white paper evaluates the quality characteristics associated with the operational acceptance test scope from the perspective of the newly released software testing standard, ISO 29119.

Background

Historically it was common practice for software development teams to test their own work, where their level of impartiality was limited to members of their own team. Over time, to improve the quality of software products, multiple types of test phases were introduced as the software development industry matured.

During the 1980s Paul Rook designed the V-Model methodology to improve the overall efficiency and effectiveness of software development processes. By this time, the differing test phases had grown to include Unit Testing, Component Testing, System Integration Testing (SIT), System Test (ST) and Acceptance Testing, [REF-1] but most testing was still being conducted within the development team. References to acceptance testing were (then) understood and interpreted to mean business or user acceptance testing (UAT). As a result of this interpretation, UAT was often perceived to be the last, or one of the last, lines of defence between a software development and its implementation into the production environment.

Arguably it was not until the year 2000, and the infamous Millennium Bug, that the need for wholly separate, designated test teams was realized. Since the turn of the century, more organizations have begun to realize the benefits of engaging independent consultants and teams that specialize in testing, thus ensuring the highest level of quality is attained for their asset developments.

As the testing industry matured, a need to assess the competency of test professionals was identified, and in 2002 [REF-3] the International Software Testing Qualifications Board (ISTQB) was founded. The ISTQB provided the testing industry with not only a baseline of good practice for software testing, but also multiple levels of certification for individuals: foundation, advanced, and expert.

The ISTQB [REF-2] defines Acceptance Testing as:

"Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system."

Through the practices of organizations like the ISTQB, and its glossary, the software test industry furthered its interpretation and application of test practices. Largely due to organizations like the ISTQB, the traditional twentieth-century understanding of acceptance testing (limited to that of the User) has been expanded into the new paradigm of the Operational Acceptance Test. This new paradigm signalled a clear evolution from a purely functional scope to a more holistic scope of acceptance testing, encompassing both functional and non-functional aspects.


After this new paradigm of Acceptance had been accepted by the development and testing communities, a new understanding of what is, and what is not, OAT should have been forthcoming. By 2010 the software quality industry had published the ISO 25000 SQuaRE [REF-5] series of standards, which outlined the scope of OAT, in respect to the specification and evaluation of quality requirements, at a framework and strategic level.

In 2013 the testing industry published the ISO 29119 Software Testing [REF-4] series of standards, which enabled effective and efficient testing by qualified and quantified means to support the delivery of reliable, robust IT assets.

    Operational Acceptance

In a traditional sequential development method, the Operational Acceptance Test (OAT) is likely to be executed near the end of the software development life cycle. This enables the test to be executed on a like or near-production instance of the asset that is to be implemented.

The OAT concept may not be easily accepted into an agile or iterative way of thinking, for the simple fact that these developments are not conducive to waiting for the asset to be wholly developed; therefore one of the following is likely:

- The project team will create a hybrid variant, with a sequentially styled OAT haphazardly attached near the end of the software development life cycle, increasing risk to the overall project; or

- The project team will forego or water down the quality of OAT within the software development life cycle, increasing the risk of substandard, low-quality and/or untested asset components being prematurely introduced into the production environment.

The ISTQB [REF-2] defines Operational Acceptance Testing as:

"Operational testing in the Acceptance test phase, typically performed in a (simulated) operational environment by operations and/or systems administration staff, focusing on operational aspects, e.g. recoverability, resource-behaviour, installability and technical compliance. See also operational testing."

Operational Testing: "Testing conducted to evaluate a component or system in its operational environment."

Alternatively, the agile/iterative project team may continue to refer to the test phase as OAT, but instead adopt an Operational Readiness and Assurance Test (OR&A). OR&A is a term frequently used by NASA1 to describe the concept of mission readiness. The term can be used wherever a certain level of assurance is required that a given status or specified capability exists (or will exist at a given point in time). [REF-5]

Operational Readiness is the process of preparing the future asset owner, and the support team, so that, at the time of implementation/cutover, they are fully ready to assume ownership and operation of the asset.

Assurance addresses the case where the stakeholders of a project are no longer willing to wait for the asset to be fully developed and handed over to the operations team, only to discover that something is askew that would otherwise diminish or prevent operation of the asset in the intended manner.

Assurance, in this context, refers to the act of re-assuring the project and organization stakeholders that their (iteratively) developing asset, and their organization, is in a state of operational readiness, or of providing a measure of assurance that the asset will be ready by the time it is required to be ready.

    Quality Characteristics and Test Subtypes

    Whether OAT or OR&A is being implemented, there are

    various characteristics of quality and subtypes of testing that

    subsume the scope of Operational Acceptance.

    Recently the international community advanced the quality of

    Testing through the development of the ISO 29119 series of

    standards whose primary focus is on testing and test

    coverage. These newly developed, and proposed,

    international standards clearly articulate multiple testing

    subtypes which are applicable to all variants of testing,

    including functional and non-functional.

    1 National Aeronautics and Space Administration, United States


Figure 1 below identifies the types and sub-types of testing associated with non-functional quality characteristics which may subsume the scope of an OAT or OR&A test. The test types may be grouped into three primary characteristics of quality, namely:

    Functional Stability

    Portability

    Reliability

    These testing characteristics, along with their respective

    sub-types, are each further explored below for definition, test

    method and test objective.

    Functional Stability

Functional testing, in itself, is normally an aspect of User Acceptance Testing (UAT), but it is also a necessary aspect in the performance of non-functional operational acceptance. From the perspective of a non-functional test execution:

It is not the primary role of functional testing to detect all of the defects within a given asset. It is not the responsibility of functional testing to ensure that any outstanding defects are resolved; nor does functional testing determine when the asset will be ready for deployment or production use.

The purpose of functional testing is to determine whether the functional requirements of the asset have been met. In respect to Operational Acceptance, functional testing is performed on behalf of a legitimate user of the asset, who is endeavouring to use the asset in the means that it is intended to be used and for its intended purpose. [REF-6] Ideally, functional testing is performed from the perspective of the end-user, the customer.

    Sub-types of functional stability include:

    Accessibility Testing

    Conversion Testing

    Stability Testing

    Usability Testing

Accessibility Testing

Accessibility testing is defined by the International Organization for Standardization (ISO) as a type of usability testing used to measure "the degree to which [an asset] can be operated by users with the widest possible range of characteristics and capabilities". [REF-4]

In simpler terms, it is a type of non-functional testing that specifically evaluates the asset's capability to interact with, and convey information to, people who may be disabled or impaired from using the asset in a traditional manner. Internationally, the World Wide Web Consortium's (W3C) Web Content Accessibility Guidelines (WCAG) have been generally accepted as the de facto standard for accessibility testing. [REF-7]

    Test Method

    Accessibility measures the level of access an asset has

    available for use by users with varying levels of ability and/or

    disability. Some people are unable to make use of their hands

    whereas others may have difficulty distinguishing between

    colours; there are deaf people as there are blind people.

    Accessibility removes the barriers that would otherwise make

    it difficult, or impossible, for these users to make use of the

    asset. [REF-8]

Figure 1. Quality Characteristics: the non-functional quality characteristics are grouped into three types (Functional Stability, Portability, Reliability) with the sub-types Accessibility, Backup/Recovery, Compatibility, Conversion, Disaster Recovery, Installability, Interoperability, Localization, Maintainability, Performance, Procedure, Security, Stability and Usability.


In accessibility testing, a test model is used that specifies the requirements, including any accessibility design standards (e.g. WCAG).

Where possible, agile and iterative projects may wish to consider the use of automated test tools that generate a report on the inaccessible aspects of the asset. Automation should suffice for detecting most aspects of accessibility requiring adjustment.
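As a minimal sketch of what such an automated check might look like, the following Python snippet scans HTML for img elements missing the text alternatives that WCAG requires. The markup and the function names are illustrative only, not taken from any particular accessibility tool:

```python
from html.parser import HTMLParser

class AltTextAuditor(HTMLParser):
    """Collects <img> tags that lack a non-empty alt attribute."""
    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            # An absent or empty alt attribute fails the text-alternative check.
            if not attr_map.get("alt"):
                self.missing_alt.append(attr_map.get("src", "<unknown>"))

def audit_alt_text(html: str) -> list:
    """Returns the src of every image that lacks a usable text alternative."""
    auditor = AltTextAuditor()
    auditor.feed(html)
    return auditor.missing_alt
```

A real tool would cover many more WCAG criteria (contrast, labels, focus order); the value of even a small check like this is that it can run on every build, long before a manual accessibility review.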

    Test Objective

The objective of this test type is to evaluate the degree to which an asset can be operated by users with the greatest range of diversity and ability, [REF-5] including accessibility requirements relating to, for example, age, or visual or hearing impairment.

    Scope Considerations

    Validation of asset design

    Validation of asset data schema

    Validation of asset construction

    Interdependencies

    Dynamic validation of accessibility compliance is dependent

    on functional user interfaces.

    When it comes to accessibility testing, the single best thing

    you can do is involve people with disabilities.

    [REF-7]

    Schedule Considerations

    With accessibility being a non-functional test activity, it is

    commonly bundled as a subcomponent of OAT and, in many

    asset development projects, left until the final moments of the

    development life cycle.

    Leaving accessibility testing until near the end of the project

    life cycle generally creates more work for the development

    teams by not discovering accessibility issues earlier in the

    project life cycle.

Various aspects of OAT, including accessibility, should be addressed early in the product development life cycle, often following design and preceding code development, thus improving overall asset quality.

Though Accessibility Testing is non-functional in nature, and generally follows asset development, its dynamic test activities are often performed via functional testing as part of User Acceptance Testing (UAT) and/or System Testing (ST). Therefore, it is not uncommon practice for projects to reallocate this test activity to a functional test team.

    Conversion Testing

Conversion testing is about ascertaining whether continued capability can be maintained after data format changes and/or software amendments are applied to the asset; for example, implementation of new database schemas or migration to new software platforms.

A common subtype of conversion testing is data migration testing. [REF-8]

    Test Method

Irrespective of the subtype of conversion testing to be engaged, each uses a method that specifies the requirements of the conversion process: its tolerances of variance; in particular, that which is to remain invariant through the conversion process; that which is new, modified, or made obsolete by the conversion; [REF-8] as well as any other organizational standards and/or architectural designs that the asset must adhere to.

    Test Objective

    The objective of conversion testing is to discover non-

    conformities, data losses and degradations of service in/within

    the asset; and to improve the overall quality of the conversion

    processes. [REF-9]
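As an illustrative sketch of this objective, the following Python example migrates rows into a new schema and then checks two invariants that must survive the conversion, the row count and the total balance. The schemas, table names and data are hypothetical:

```python
import sqlite3

def migrate_and_verify() -> bool:
    """Copies a legacy table to a new schema, then verifies conversion
    invariants (row count and total balance) held through the migration."""
    src = sqlite3.connect(":memory:")
    src.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, balance_cents INTEGER)")
    src.executemany("INSERT INTO customers VALUES (?, ?, ?)",
                    [(1, "Ada", 1500), (2, "Grace", 0), (3, "Alan", 720)])

    dst = sqlite3.connect(":memory:")
    dst.execute("CREATE TABLE customers_v2 (id INTEGER PRIMARY KEY, full_name TEXT, balance_cents INTEGER)")
    # The conversion step: the name column is renamed, values carry over as-is.
    for row in src.execute("SELECT id, name, balance_cents FROM customers"):
        dst.execute("INSERT INTO customers_v2 VALUES (?, ?, ?)", row)

    # Invariants: no rows lost, no balances degraded by the conversion.
    src_totals = src.execute("SELECT COUNT(*), SUM(balance_cents) FROM customers").fetchone()
    dst_totals = dst.execute("SELECT COUNT(*), SUM(balance_cents) FROM customers_v2").fetchone()
    return src_totals == dst_totals
```

Real data migration tests would add per-row reconciliation and tolerance rules for fields that are allowed to change, but the count-and-sum pattern above catches gross data loss cheaply.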

    Stability Testing

Stability testing is often confused with load testing, though it is actually a type of testing that evaluates how durable and stable the asset is, and whether it continues to function and perform throughout its life cycle.

    Test Method

A common test method is one where the stability requirements of the asset are specified in terms of the capability of the asset to avoid unexpected and/or unwanted effects after amendment.

    Test Objective

The objective of stability testing is to ascertain the degree to which an asset continues to function over its intended, and forecast, life cycle. [REF-8]
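One simple way to exercise this objective, sketched here under the assumption that a "failure" is any call whose result drifts from the first observed result, is a soak loop that invokes the asset repeatedly and counts such drifts:

```python
def soak_test(component, iterations=1000) -> int:
    """Repeatedly exercises `component` and counts failures, where a failure
    is any call whose result drifts from the first observed result."""
    expected = component()
    return sum(1 for _ in range(iterations) if component() != expected)

# A stateless component never drifts under repeated use ...
def stable():
    return 42

# ... whereas a component accumulating hidden state drifts on every call.
class Drifting:
    def __init__(self):
        self.calls = 0
    def __call__(self):
        self.calls += 1
        return self.calls
```

Real stability tests would run for hours or days against a deployed instance and mix in amendments (patches, config changes) between passes, but the count-failures-over-time shape is the same.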

    Usability Testing

Usability testing refers to the evaluation of an asset by testing it with representative users. Typically, test participants will endeavour to perform typical business behaviour and/or user functions using the asset as it would be used during normal production use. The objective is to identify usability issues, identify non-conformances (defects), and determine whether the asset meets pre-defined business requirements. [REF-10]

    Test Method

A usability testing method should outline the learnability of the asset, and specify the use requirements, inclusive of any design standards to which the asset must comply. The testing may be achieved using test labs, desktop-sharing programs, or other technologies used to monitor how people use the interfaces.

Note: ISO/IEC 9241 defines requirements for human-system interaction.


Test Objective

The objective of usability testing is to evaluate the asset to ensure that its interfaces or technologies are usable by end users for the intended purposes: effectively, efficiently and satisfactorily within the specified contexts of use.

    Schedule Considerations

Though non-functional in nature, and following asset development, the dynamic test activities of usability are often validated via functional testing completed during User Acceptance Testing (UAT). Therefore, it is not uncommon practice for projects to reallocate this test activity to a functional test team.

    Portability Testing

Portability testing evaluates the ease with which an asset may be transferred from one operating platform to another, including any re-configuration needed for it to be executed in various types of environments.

The objective of portability testing is to ascertain the degree of ease, or difficulty, with which an asset can be effectively and efficiently transferred from one operational platform to another, or with which the configuration of the operational platform can be altered.

Traditional thinking endeavoured to accommodate various quality characteristics in the evaluation of software quality, including adaptability of the asset, coexistence with other assets, and asset portability compliance.

These quality characteristics are confirmed via portability testing and the sub-types of:

    Compatibility Testing

    Installability Testing

    Interoperability Testing

    Localization Testing

    Test Method

The test method is developed in conjunction with its sub-types to minimise redundancy and test overlap. [REF-11] The method should outline the portability requirements, including any design standards to which the asset must conform. [REF-8] Portability is measured by the maximum amount of effort required to transfer the asset from one environment to another. [REF-12] For example, this could include evaluating whether the asset is able to be operated from a variety of different browsers and versions. [REF-8]
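One way to organise that evaluation is as a matrix run: the same portability check executed against every supported environment, with failures recorded per environment. The sketch below uses hypothetical platform descriptors; a real project would derive the matrix from its documented portability requirements:

```python
# Hypothetical supported-environment matrix (placeholders, not a real list).
PLATFORMS = [
    {"os": "Windows", "browser": "Edge"},
    {"os": "macOS", "browser": "Safari"},
    {"os": "Linux", "browser": "Firefox"},
]

def run_portability_matrix(check, platforms) -> list:
    """Runs one portability check against every platform descriptor and
    returns the descriptors that failed, so porting effort can be assessed
    per environment."""
    return [p for p in platforms if not check(p)]
```

The returned list of failing descriptors maps directly onto the "maximum effort to transfer" measure: each failing environment represents remediation work before the asset can be considered portable to it.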

    Compatibility Testing

Compatibility testing evaluates the degree to which an asset can satisfactorily operate, function and coexist with other independent assets operating and functioning in the same shared environment, and, where necessary, interoperate with those other assets. See also section 4.8, Interoperability Testing.

Test Method

The model of testing [REF-8] would typically identify whether:

- the asset is able to be installed into, and uninstalled from, a shared environment without adversely impacting said environment;

- multiple instances of said asset are able to be instantiated and/or utilised concurrently;

- multiple versions of said asset are able to coexist within said environment and/or be utilised concurrently within said environment;

- the asset articulates any environment constraints, including CPU, memory, architecture, and/or configuration.

    Test Objective

    The objective of Compatibility Testing is to ascertain whether

    the asset can function alongside other dependent and/or

    independent assets, whether communicating or non-

    communicating, in a shared environment.[REF-8]

    Scope Considerations

Phase test scope [REF-13] should consider inclusion of:

    Hardware

    Networks

    Operating System

    User Interface (i.e. browser, mobile, smart TV)

    Versions (backward and forward)

    Backward: testing of the asset in earlier releases/

    versions.

    Forward: testing of the asset in new or upcoming

    releases/versions.

    Schedule Considerations

    Exclusion or delay of this test activity may result in conflicts

    with other assets, inability to interoperate with other assets, or

    compromised and/or insecure environments.

    Installability Testing

Installability testing determines whether an asset can be installed, uninstalled, removed, and/or upgraded as required. ISO defines it as a type of portability testing conducted to evaluate whether an asset can be installed as required in all specified environments. [REF-4]

    Test Method

The method of testing for installability is specified in terms of the processes outlined in the asset's installation manual, for each pre-specified target environment. [REF-14] The installability requirements should be defined as part of the overall asset requirements.


Test Objective

A primary objective of installability testing is to validate whether the asset can be successfully made operational in the intended target environment; but it is also an objective to ascertain whether the environment, device or platform can be returned to the same state it was in before the asset was installed and uninstalled. Does the process of installation and un-installation of the asset permanently or adversely alter the host environment, device or platform?
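That question can be checked mechanically. The sketch below uses a deliberately toy install/uninstall pair: it snapshots the host file tree before installation and after removal, and verifies the two states match (a clean uninstall) while the installed state differs (the install actually did something):

```python
import os
import shutil
import tempfile

def snapshot(root) -> set:
    """Records every file and directory path under `root`, relative to it."""
    state = set()
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            state.add(os.path.relpath(os.path.join(dirpath, name), root))
    return state

def install(root):
    """Toy installer: creates an application directory and a config file."""
    os.makedirs(os.path.join(root, "myapp"))
    with open(os.path.join(root, "myapp", "config.ini"), "w") as fh:
        fh.write("[main]\n")

def uninstall(root):
    """Toy uninstaller: removes everything the installer created."""
    shutil.rmtree(os.path.join(root, "myapp"))

def install_uninstall_leaves_host_unchanged() -> bool:
    host = tempfile.mkdtemp()
    try:
        before = snapshot(host)
        install(host)
        installed = snapshot(host)
        uninstall(host)
        after = snapshot(host)
        return before == after and installed != before
    finally:
        shutil.rmtree(host, ignore_errors=True)
```

A production-grade check would also snapshot registry entries, services, environment variables and shared libraries, which is where real installers most often leave residue.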

    Scope Considerations

    Should the installability requirements be incomplete or

    missing, the test team may need to analyse possible attributes

    of asset installation which are more or less desirable.

    Derivation of installability requirements may be possible from

    these attributes.

Installability testing and validation should also be considered in conjunction with localization testing.

    Schedule Considerations

    Installability testing is sometimes referred to as Implementation

    or Deployment testing. It is common practice to retain

    Installability testing until near the end of the release

    development cycle. The project team ought to schedule the

    test early enough where any defects and adjustments required

    do not adversely impact the critical path, yet late enough to

    ensure a stable asset in which to validate.

    Interoperability Testing

Interoperability is the ability of differing assets to work together (inter-operate). It describes the capability of different assets to exchange data via a common set of exchange formats, to read and write the same file formats, and to use the same protocols. [REF-15]

ISO defines interoperability as "The capability to communicate, execute programs, or transfer data among various functional units in a manner that requires the user to have little or no knowledge of the unique characteristics of those units". [REF-16]

    Test Method

    Interoperability testing uses a model that outlines the syntax

    and data format compatibility, sufficient physical and logical

    connection methods, and the ease of use features. The

    interoperation requirements should include design standards

    to which the asset must conform.

    Test Objective

    Interoperability testing involves validation of whether an asset

    is capable of communicating and exchanging data and

    information with other assets within, or across, environments;

    and whether the asset is able to make effective use of said

    data and information received from the other assets. [REF-8]
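As a small illustration of validating such an exchange, assume a made-up JSON exchange format with a version field: one asset exports records, and a second asset imports them, rejecting payloads in a format it does not understand:

```python
import json

def producer_export(records) -> str:
    """Asset A serialises its records into the agreed exchange format
    (format name and version field are illustrative)."""
    return json.dumps({"format_version": 1, "records": records})

def consumer_import(payload: str):
    """Asset B parses the payload, validates the format version, and
    returns the records it can make effective use of."""
    doc = json.loads(payload)
    if doc.get("format_version") != 1:
        raise ValueError("unsupported exchange format")
    return doc["records"]
```

An interoperability test then asserts the round trip: what A exports, B imports without loss, and malformed or wrong-version payloads are rejected rather than silently misread.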

Localization Testing

Localization is sometimes referred to as Internationalization or Globalization. Irrespective of which term the organization uses, the purpose is to ascertain whether the asset can be understood in the common language of the local geographical region in which it is required to be used.

    Test Method

Localization testing should include (but is not limited to) an analysis of whether any text appearing in the asset [REF-8] has been mistranslated or misapplied, [REF-17] by way of testing that evaluates the linguistics used therein, specific to each country or region of use.
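Part of that analysis can be automated. The sketch below assumes message bundles are represented as per-locale dictionaries (a common but here hypothetical layout) and reports the message keys each locale is missing relative to the union of all keys:

```python
def bundle_key_gaps(bundles: dict) -> dict:
    """Given {locale: {message_key: translation}}, returns, per locale, the
    sorted message keys missing relative to the union of all locales' keys."""
    all_keys = set().union(*(b.keys() for b in bundles.values()))
    return {locale: sorted(all_keys - b.keys()) for locale, b in bundles.items()}
```

A missing key usually means an untranslated string will fall back to another language at runtime; catching gaps this way is cheap, while judging translation quality itself still requires a native reviewer per region.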

    Test Objective

The objective is to ensure that the people of a local region are capable of understanding and using the asset without having to translate a foreign language into their native language or, worse, a foreign variant of the same language into their own variant.

    Assuming that English is English is not always wise,

    particularly when safe operation of the asset is paramount.

    There are many variants of the language, not only differences

    in spelling between dictionaries, but where the interpretation

    and use of words and phrases alter amongst the English

    speaking countries (e.g. Australia, India, United Kingdom,

    United States and South Africa). This is not to mention those

    countries that use English as a second language.

    It is not only the English language that has many dialects and

    variants; the same is true for many other languages including

    Chinese, French, Russian and others.

    Reliability Testing

In respect to asset quality, reliability was first evaluated in terms of whether the asset can be easily analysed and changed, maintain stability, undergo maintenance, use resources efficiently, and be generally recoverable. These characteristics of quality are measured via reliability testing.

    Reliability testing is a type of testing executed to determine the

    probability that an asset will continue to perform its required

    function without failure, or within tolerable performance levels,

    under stated conditions for a stated period of time. [REF-18]

    Those conditions may include:

    The production operational state;

    The backup state;

    The failed / inoperative state;

    The failover (recovery) state;

    And the maintenance and enhancement state.


    Thus, the sub-types of Reliability Testing include:

    Backup & Recovery Testing

    Disaster Recovery Testing

    Maintainability Testing

    Performance Testing

    Procedure (Document) Testing

    Security Testing

    Test Method

    Reliability testing assesses the degree to which an asset is

    capable of performing its required functions, including

    measures of frequency within which incidents may occur, as it

    is used - under stated conditions - over a predetermined

    period of time.

The test method, in the production operation state, should articulate the required levels of reliability, including mean time to failure (MTTF) and mean time between failures (MTBF). [REF-8]

Note 1: The test method needs to include a functional definition of failure, as "failure" means different things to the user, the tester, and the person who needs to resolve the problem.

Note 2: The test method ought to include the operational profile of the asset, or an approach to determine the profile.
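For reference, once failure data has been collected the two measures reduce to simple arithmetic; a minimal sketch:

```python
def mttf(times_to_failure) -> float:
    """Mean time to failure: the average operating time observed before
    each failure, e.g. in hours."""
    return sum(times_to_failure) / len(times_to_failure)

def mtbf(total_operating_hours, failure_count) -> float:
    """Mean time between failures: total operating time in the observation
    window divided by the number of failures in that window."""
    return total_operating_hours / failure_count
```

For example, three observed runs of 100, 200 and 300 hours before failure give an MTTF of 200 hours, and 3 failures across 720 operating hours give an MTBF of 240 hours. Which measure applies depends on the functional definition of failure noted above.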

Backup & Recovery Testing

Backup and recovery testing is a subtype of reliability testing that evaluates the degree to which the asset can be recovered from a previously taken backup, and the extent to which the asset is capable of being restored, as measured by accuracy, completeness, cost, and time. [REF-4]

    Test Method

Backup and recovery testing makes use of a test method which articulates the backup and recovery requirements. These requirements specify a need to back up the operational state of the asset under test (including data, configuration and/or environment settings) at various points in time prior to the outage simulation or dataflow interruption, and then to restore the state of the asset under test from the backup closest to the interruption (simulated or staged event).

    The test compares the state of the restored asset under test

    to the state of the asset pre-failure. The test may also compare

    the process of restoration against the plan or procedures

    for restoration.

    Note: This type of testing may be used during the support of IT

    disaster recovery tests.

    Test Objective

    The purpose of backup/recovery testing is to determine if, in

    the event of failure, the [asset] can be restored from backup to

    its pre-failure state. [REF-19]
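A minimal sketch of that pre-failure/post-restore comparison follows, using a canonical JSON fingerprint of a toy operational state; the state layout and function names are illustrative:

```python
import hashlib
import json

def checksum(state) -> str:
    # json with sort_keys gives a canonical, order-independent fingerprint
    # of the operational state.
    return hashlib.sha256(json.dumps(state, sort_keys=True).encode()).hexdigest()

def take_backup(state) -> str:
    """Serialises the operational state at a point in time."""
    return json.dumps(state)

def restore(blob: str):
    """Reconstructs the operational state from a backup."""
    return json.loads(blob)

def verify_recovery() -> bool:
    state = {"orders": [1, 2, 3], "config": {"mode": "prod"}}
    pre_failure = checksum(state)
    blob = take_backup(state)
    state = {}               # simulated outage: operational state is lost
    state = restore(blob)    # recover from the backup closest to the failure
    return checksum(state) == pre_failure
```

Real B&R tests compare databases, file systems and configuration rather than an in-memory dictionary, but the shape is the same: fingerprint the pre-failure state, interrupt, restore, and compare fingerprints.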

Scope Considerations

In many projects it is common practice to combine the Backup and Recovery (B&R) test with the Disaster Recovery (DR) test to reduce overall test effort. Ascertaining whether B&R is to be amalgamated with, or segregated from, DR is the first step in determining the B&R test scope.

    Schedule Considerations

    Backup and recovery often proves itself invaluable to the

    project during the project development life cycle for both

    taking critical checkpoint backups and restoring to last known

    working instances. Thus, it is recommended to regularly test

    and validate B&R early on, and throughout the project

    life cycle.

    Backup and recovery testing may be dependent upon the

    implementation of equivalent or compatible hardware/software

    assets in multiple environments and/or data centres during

    instances where recoveries are to occur in alternate

    environments from which the backups were originally taken

    (e.g. during disaster recovery tests).

    Should backup and recovery testing not be performed during the project life cycle, there is a risk that adverse issues will not be discovered until after go-live.

    Disaster Recovery Testing
    In the Information Technology (IT) industry, and those industries which rely upon IT services, the term Disaster Recovery (DR) is often used in reference to the restoration of IT services following the catastrophic, or disastrous, loss of the primary data centre.

    Thus, a common purpose of disaster recovery testing is to

    determine if, in the event of such a catastrophic, or disastrous,

    loss of the primary data centre that the application, system or

    IT component can be successfully transferred to an alternate

    data centre and IT services restored thereto.

    Note: DR testing is considered to be a subcomponent of

    Business Continuity testing.

    Test Method

    Typically, disaster recovery testing uses a model of validating the DR strategy, plan or procedure: a document which details the disaster recovery requirements, including any required design standards to which the DR solution must comply. [REF-8]

    Testing may include additional aspects including procedures

    to be completed by operational staff, relocation of data,

    software, personnel, or other facilities, or the recovery of

    previously backed up data located at an offsite facility.
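    The comparison of a DR exercise against the documented plan might be sketched as follows. The `DrPlanStep` structure, the step names and the 90-minute recovery time objective (RTO) are hypothetical; real DR plans and objectives vary by organization:

```python
from dataclasses import dataclass

@dataclass
class DrPlanStep:
    """One documented step of a hypothetical DR plan, with its target duration."""
    name: str
    target_minutes: float

def evaluate_dr_run(plan: list, observed_minutes: dict, rto_minutes: float) -> dict:
    """Compare an observed DR exercise against the documented plan: flag
    steps that overran their target, and check the total against the RTO."""
    overruns = [s.name for s in plan
                if observed_minutes.get(s.name, float("inf")) > s.target_minutes]
    total = sum(observed_minutes.get(s.name, 0.0) for s in plan)
    return {"overruns": overruns, "total_minutes": total,
            "rto_met": total <= rto_minutes}

plan = [DrPlanStep("declare disaster", 10), DrPlanStep("fail over database", 45),
        DrPlanStep("redirect traffic", 20)]
observed = {"declare disaster": 8, "fail over database": 55, "redirect traffic": 15}
result = evaluate_dr_run(plan, observed, rto_minutes=90)
print(result)   # one step overran its target, but the overall RTO was met
```

    Overrunning steps would be fed back as corrections to the DR plan or as recovery defects.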


    In instances where new hardware or virtual configurations are to be deployed for the PROD and PROD-DR environments, it is recommended that the disaster recovery test be executed against the new-PROD and new-PROD-DR environments (before live users use them) wherever possible.

    Test Objective

    The objective of DR testing is to provide a level of assurance to

    the organization that

    the asset can be recovered in response to an untoward

    event; and/or

    the strategy/plan/procedure, as documented, appropriately

    describes how to recover the asset; and/or

    the environment or facilities are able to support and operate

    the new or amended asset, without degradation to other

    products which may share the environment / facilities; and/or

    the team or personnel possess the appropriate knowledge,

    familiarity and capability to implement and perform

    the recovery.

    Scope Considerations

    If disaster recovery testing is omitted, or not validated for its ability to function in conjunction with the documented procedures, the result may be an inability to recover the present asset; it may also adversely impact the ability of other pre-existing assets (in the same shared environment) to recover. [REF-20]

    For iterative projects an organization may wish to adopt a risk

    based regression testing approach for components being

    introduced into the production environment, thereby reducing

    potential impacts to any cohabiting assets, for instances

    where component (or partial component) implementations

    may occur.

    Schedule Considerations

    Using a sequential-style project methodology, the disaster recovery test phase is often the last test phase performed before engaging the implementation rehearsal exercises and production cutover. However, in the world of agile and iterative projects, disaster recovery testing may need to reconsider this traditional placement in the schedule. As components, instances and/or releases of the newly developed asset are introduced into the environment, they need to be continually revalidated for their recoverability.

    Depending on the size and complexity of the asset

    development, it is not uncommon practice for the project team

    to engage DR testing as an independent and separate test

    activity or phase from OAT.

    Maintainability Testing
    Maintainability corresponds to the ability to change the asset. Maintainability testing determines how easily an asset can be retained in an operational state, or in a pre-determined state or condition, through the application of an acceptable amount of effort. [REF-21] Maintainability can be indirectly evaluated by applying static analysis.

    Test Method

    Maintainability testing uses a model of the maintainability requirements of the asset. The requirements shall be specified in terms of the effort required to effect change through Adaptive, Corrective, Perfective and Preventive maintenance:

    Adaptive to changes in the environment.

    Corrective in that problems (defects) are resolved.

    Perfective in which enhancements may be applied.

    Preventive where actions may be applied to reduce future

    maintenance costs.

    The model should also include static analysis and reviews.
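    The note that maintainability can be indirectly evaluated by static analysis can be illustrated with a crude branch-counting metric. This is a sketch only; the heuristic is an assumption, and real static-analysis tools (lint, complexity analysers) are far more sophisticated:

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """A crude cyclomatic-complexity estimate: 1 plus the number of branch
    points found in the parsed source. Higher scores suggest code that is
    harder to change safely, i.e. less maintainable."""
    branch_nodes = (ast.If, ast.For, ast.While, ast.Try,
                    ast.BoolOp, ast.ExceptHandler)
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, branch_nodes) for node in ast.walk(tree))

simple = "def f(x):\n    return x + 1\n"
branchy = ("def g(x):\n"
           "    if x > 0:\n"
           "        return x\n"
           "    while x < 0:\n"
           "        x += 1\n"
           "    return x\n")
print(cyclomatic_complexity(simple), cyclomatic_complexity(branchy))
```

    Tracking such a metric across releases gives an indirect, repeatable signal of how maintainability is trending.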

    Performance Testing

    Performance testing is a technique used to ascertain the

    parameters of the asset in terms of responsiveness,

    effectiveness and stability under various workloads.

    This process involves quantitative tests performed to measure

    and report on the load, stress, endurance, volume and

    capacity threshold limits of the asset.

    Performance testing measures the quality attributes of the

    system, such as scalability, capacity and resource utilization.

    Test Method

    Performance testing pushes or stresses the asset to evaluate the limits and thresholds of its capabilities. This may be accomplished through: [REF-8]

    Capacity testing (overflow memory, disk capacity, network bandwidth)
    Endurance testing: run for hours on end to detect degradation over time
    Load testing: which assesses the behaviour of the asset under expected load
    Stress testing: where the asset is pushed beyond its anticipated peak load (memory, CPU, disk use, etc.)
    Volume testing (excessive transactions, overflow caches, data stores, etc.)
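    The load types above all rest on the same measurement idea, which can be illustrated with a minimal loop. This sketch is no substitute for a proper performance-test tool (e.g. JMeter or Gatling); the `fake_asset` target and the sequential, non-concurrent loop are simplifying assumptions:

```python
import math
import random
import time

def run_load_test(target, requests: int) -> list:
    """Invoke the target repeatedly and record each response time in ms.
    A real load test would use concurrent virtual users; this sequential
    loop only illustrates the measurement itself."""
    timings = []
    for _ in range(requests):
        start = time.perf_counter()
        target()
        timings.append((time.perf_counter() - start) * 1000.0)
    return timings

def percentile(timings: list, pct: float) -> float:
    """Nearest-rank percentile of the recorded response times."""
    ordered = sorted(timings)
    rank = max(0, math.ceil(pct / 100.0 * len(ordered)) - 1)
    return ordered[rank]

# A stand-in 'asset' whose response time varies a little per request.
def fake_asset():
    time.sleep(random.uniform(0.001, 0.003))

timings = run_load_test(fake_asset, requests=50)
p95 = percentile(timings, 95)
print(f"95th percentile: {p95:.1f} ms")   # compare against the agreed SLA
```

    Percentiles (rather than averages) are the usual basis for responsiveness acceptance criteria, since they expose worst-case behaviour under load.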

    Test Objective

    Performance testing is performed to ascertain how well a

    system performs in terms of responsiveness and stability

    under a particular workload. The objective is to evaluate how

    well an asset performs when it is placed under various types

    and sizes of load. The evaluation may include: capacity,

    endurance, load, stress and volume tests. [REF-8]


    Schedule Considerations
    Depending on the size and complexity of the asset development, it is not uncommon practice for the project team to engage Performance testing as an independent and separate test activity or phase from OAT.

    Procedure (Document) Testing

    The term procedure is defined, by ISO 26513 [REF-22], as an

    ordered series of steps that a user follows to do one or more

    tasks. Procedure Testing, on the other hand, is defined by

    ISO 29119-1 [REF-4], as a type of functional suitability testing

    conducted to evaluate whether procedural instructions for

    interacting with [an asset] or using its outputs meet user

    requirements and support the purpose of their use.

    By name alone it is easy to presume that procedure-type testing would only be suited for functional, rather than non-functional, test activities. However, it is arguable that, since the instructions themselves are being evaluated, certain aspects of procedure testing should in fact be included as part of the NFT suite.

    Test Method

    The documentation management process, as described in

    ISO 12207 [REF-23], includes review of formats, technical content,

    and presentation styles against established documentation

    standards.

    Testing is performed with both the asset and the

    documentation, to evaluate that the documentation is fit-for-

    purpose and supports the users sufficiently in their use of the

    asset. At a minimum, the documentation is validated [REF-21] to include:

    Procedure to receive, record, resolve, track incidents and

    provide information on their status;

    Procedure to test the asset in its intended operating

    environment;

    Procedure to release the asset for operational use;

    Procedure to use the asset in its actual operating

    environment;

    Thus the review and evaluation of procedural documentation

    should be part of the same project life cycle, and performed in

    conjunction with the asset development; so that both the

    asset and the procedural documentation are tested,

    distributed and maintained conjointly.
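    The minimum validation points above can be framed as a simple coverage check over the delivered document set. The procedure-type labels and document names below are hypothetical; a real review would also assess format, technical content and presentation style:

```python
# The four minimum procedure types validated above (incident handling,
# operational testing, release, and operational use).
REQUIRED_PROCEDURES = {"incident management", "operational testing",
                       "release", "operational use"}

def validate_doc_coverage(delivered_docs: dict) -> set:
    """Return the required procedure types that have no corresponding
    delivered document, i.e. the gaps in the documentation set."""
    covered = set(delivered_docs.values())
    return REQUIRED_PROCEDURES - covered

# Hypothetical delivered documents mapped to the procedure type they cover.
docs = {"INC-001 Incident Handling": "incident management",
        "REL-002 Release Runbook": "release",
        "OPS-003 Operator Guide": "operational use"}
missing = validate_doc_coverage(docs)
print(sorted(missing))   # ['operational testing'] -- a gap to raise as a defect
```

    Each gap found this way becomes a documentation non-conformity to be resolved before operational acceptance.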

    Test Objective

    The objective of Procedure evaluation is to ensure that the

    procedure is fit-for-purpose.

    Procedure evaluation is based on the required functions and

    qualities. Acceptance comes from the users, but the

    managers, developers, testers, and maintenance personnel

    must accept the documented procedure. [REF-20]

    Scope Considerations
    Should the procedural documentation be excluded from the test scope, the quality of the final procedural documentation produced may prove inadequate.

    If operational teams do not have fit-for-purpose procedural

    documentation, their ability to maintain, operate, recover and/

    or support the assets may be adversely impacted.

    Schedule Considerations

    Should procedure testing not be performed early enough during the project life cycle, an increased number of non-conformities (defects) may be encountered, resulting in additional delays to the project timeline. [REF-11]

    Security Testing

    ISO defines this as a type of testing conducted to evaluate the degree to which [an asset], and associated data and information, are protected so that unauthorised persons or systems cannot use, read or modify them, and authorized persons or systems are not denied access to them. [REF-4] It is a technique used to ascertain whether the asset protects the data and maintains its functionality as intended, in respect of authentication, authorization, availability, confidentiality, integrity and non-repudiation.

    Test Method
    There are varying techniques for assessing the security of an asset:

    Penetration testing, whereby the test mimics the actions of

    an unauthorised user.

    Privacy testing, where attempts are made to access private

    or secure data.

    Security audits, in which records or logs are inspected or reviewed, or the code and/or requirements of an asset are assessed for vulnerabilities.

    Vulnerability or code scanning, whereby automated test

    tools scan the asset for signs of specific

    known vulnerabilities.

    For each technique, or combination of techniques, applied the

    security requirements gathering [REF-24] is expanded to identify

    all potential test activities by:

    Creating an exhaustive list of all security vulnerabilities

    within the asset.

    Identifying all possible parameters for each of

    the vulnerabilities.

    Listing all of the testing activities for each parameter.
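    The three expansion steps above amount to building a test matrix, which might be sketched as follows. The vulnerability names, parameters and entry points are illustrative assumptions, not a real catalogue:

```python
from itertools import product

def build_security_test_matrix(vulnerabilities: dict) -> list:
    """Expand each listed vulnerability and its parameters into concrete
    test activities, one per (vulnerability, parameter, entry point) triple."""
    activities = []
    for vuln, details in vulnerabilities.items():
        for param, entry in product(details["parameters"], details["entry_points"]):
            activities.append(f"test {vuln} via {param} at {entry}")
    return activities

# A hypothetical catalogue produced by security requirements gathering.
catalogue = {
    "SQL injection": {"parameters": ["user id", "search term"],
                      "entry_points": ["login form", "report API"]},
    "session fixation": {"parameters": ["session token"],
                         "entry_points": ["login form"]},
}
matrix = build_security_test_matrix(catalogue)
print(len(matrix))   # 2*2 + 1*1 = 5 planned test activities
```

    Enumerating the matrix explicitly makes the security test scope auditable: every planned activity traces back to a listed vulnerability and parameter.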

    Test Objective

    Security testing, also known as Security & Penetration Testing,

    is a type of testing whose primary purpose is to evaluate the

    level of protection in which an asset (under test) secures and

    controls access to its associated data so that only authorized


    and authenticated persons or systems are granted access whilst unauthorized persons and systems are denied access.

    Scope Considerations

    According to the Business Continuity Institute (BCI) and the

    British Standards Institute (BSi), [REF-25] cyber-attack is the top

    threat to security in information technology assets today.

    Security validation and testing should be included, at

    appropriate levels according to risks related to the asset, the

    organization and its brand, for each and every environment

    the asset is intended to operate within.

    To exclude security testing would greatly increase the

    probability of asset misuse.

    Schedule Considerations

    The optimal time to execute security testing will vary from

    project to project. Threat model evaluations may be

    conducted during design, and automated code-scanning may

    occur throughout the project life cycle - particularly during or

    immediately following agile and iterative development cycles.

    More detailed vulnerability and threat analysis, and penetration testing, should be executed close to the development cycle to minimise repair cost, whilst balancing the need to test on a production-like deployment configuration.

    Depending on the size and complexity of the asset development, it is not uncommon practice for the project team to engage Security testing as an independent and separate test activity or phase from OAT.

    Conclusion

    Operational acceptance subsumes many types and sub-types

    of testing, covering various attributes of quality. Organizations

    may choose to employ Operational Acceptance Testing (OAT)

    or Operational Readiness & Assurance (OR&A) tests to

    evaluate these attributes associated with their traditional

    sequential or modern iterative software development

    methodologies, respectively.

    Application of the international software testing standard ISO 29119, by certified testing professionals who are knowledgeable in the interrelationship of the various standards as they relate to OAT and OR&A, will greatly increase an organization's probability of:

    Effectively and efficiently determining if/when an asset has

    been built to specification,

    Ascertaining if the asset is ready to be implemented, and

    Confirming if the organization is ready to operate and

    support the asset when it is implemented.

    Bibliographical References
    1. ISTQB, International Software Testing Qualifications Board Standard glossary of terms used in Software Testing, Version 2.1 (dd. April 1st, 2010)

    2. http://slideshare.net/suhasreddy1/v-model-final
    3. http://istqb.org/about-istqb/history.html

    4. ISO/IEC/IEEE 29119-1:2013 Software and Systems

    Engineering Software Testing Part 1

    5. http://en.wikipedia.org/wiki/

    Operations_readiness_and_assurance

    6. http://w3.org/wiki/Accessibility_testing

    7. Testing ASP.NET Web Applications, Jeff McWherter and

    Ben Hall, Ch.9 - Accessibility Testing

    8. ISO/IEC/IEEE DIS-2 29119-4:2013 Software and

    Systems Engineering Software Testing Part 4

    Test Techniques

    9. http://softwaretestinghelp.com/

    software-compatibility-testing

    10. Effective Testing & Quality Assurance in Data Migration

    Projects, 28/01/2014, Swati Jindal

    11. Operational Acceptance Testing Business Continuity

    Assurance, Dec 2012, Dirk Dach, et al

    12. Testing Code Security, by Maura A. van der Linden, 2007

    13. http://testingstandards.co.uk/installability_guidelines.htm

    14. http://en.wikipedia.org/wiki/Interoperability#Software

    15. ISO/IEC 2382-01, Information Technology Vocabulary,

    Fundamental Terms

    16. http://gala-global.org/localization-testing

    17. http://dictionary.reference.com/browse/maintain

    18. https://wiki.oasis-open.org/tab/TestingPolicy
    19. http://tutorialspoint.com/software_testing_dictionary/portability_testing.htm

    20. ISO/IEC 26513-2009 Requirements for Testers and

    Reviewers of User Documentation

    21. ISO/IEC 12207:2008 Systems and Software Engineering

    Software Life Cycle Processes

    22. http://web.utk.edu/~leon/rel/overview/reliability.html

    23. QAI India Ltd, 3rd Annual International Software Testing

    Conference, 2001

    24. 4th annual Horizon Scan Report by the Business

    Continuity Institute (BCI), in association with the British

    Standards Institute (BSi)

    25. http://usability.gov/how-to-and-tools/methods/usability-testing.html


    About Capgemini and Sogeti

    With more than 145,000 people in over 40 countries, Capgemini is one of the world's foremost providers of consulting, technology and outsourcing services. The Group reported 2014 global revenues of EUR 10.573 billion.

    Together with its clients, Capgemini creates and delivers business and technology solutions that fit their needs

    and drive the results they want. A deeply multicultural organization, Capgemini has developed its own way of

    working, the Collaborative Business Experience, and draws on Rightshore, its worldwide delivery model.

    Sogeti is a leading provider of technology and software testing, specializing in Application, Infrastructure and

    Engineering Services. Sogeti offers cutting-edge solutions around Testing, Business Intelligence & Analytics,

    Mobile, Cloud and Cyber Security. Sogeti brings together more than 20,000 professionals in 15 countries and has

    a strong local presence in over 100 locations in Europe, USA and India. Sogeti is a wholly-owned subsidiary of

    Cap Gemini S.A., listed on the Paris Stock Exchange.

    Together Capgemini and Sogeti have developed innovative, business-driven quality assurance (QA) and Testing services, combining best-in-class testing methodologies (TMap and TPI) to help organizations achieve their

    testing and QA goals. The Capgemini Group has created one of the largest dedicated testing practices in the

    world, with over 12,000 test professionals and a further 14,500 application specialists, notably through a common

    center of excellence with testing specialists developed in India.

    For more information, please visit:

    www.capgemini.com/testing or www.sogeti.com/testing

    © 2015 Capgemini and Sogeti. Rightshore is a registered trademark belonging to Capgemini.

    TMap, TMap NEXT, TPI and TPI NEXT are registered trademarks of Sogeti.

    No part of this document may be modified, deleted or expanded by any process or means without

    prior written permission from Capgemini.

    MCOS_GI_AH_20150717

    To find out how Capgemini and Sogeti's Testing Services can help your organization achieve its Testing and QA business

    goals, please contact your local Capgemini or Sogeti testing representative or our Global Testing Services Sales Team:

    Anthony Woods

    Test Manager

    Phone: + 61 (2) 9293 4024

    Mobile: + 61 420 580 085

    [email protected]

    Sudhir Pai

    VP Testing Service Line Lead - Australia

    Phone: + 61 3 9613 3253

    Mobile: + 61 407 328321

    [email protected]