QA Reviewer

Upload: mosh-moshin

Post on 05-Apr-2018


7/31/2019 QA Reviewer

    What is 'Software Quality Assurance'?

Software QA involves the entire software development PROCESS - monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with. It is oriented to 'prevention'.

    What is 'Software Testing'?

Testing involves operation of a system or application under controlled conditions and evaluating the results (e.g., 'if the user is in interface A of the application while using hardware B, and does C, then D should happen'). The controlled conditions should include both normal and abnormal conditions. Testing should intentionally attempt to make things go wrong to determine if things happen when they shouldn't or things don't happen when they should. It is oriented to 'detection'.

Organizations vary considerably in how they assign responsibility for QA and testing. Sometimes they're the combined responsibility of one group or individual. Also common are project teams that include a mix of testers and developers who work closely together, with overall QA processes monitored by project managers. It will depend on what best fits an organization's size and business structure.

    Does every software project need testers?

While all projects will benefit from testing, some projects may not require independent test staff to succeed.

Which projects may not need independent test staff? The answer depends on the size and context of the project, the risks, the development methodology, the skill and experience of the developers, and other factors. For instance, if the project is a short-term, small, low-risk project, with highly experienced programmers utilizing thorough unit testing or test-first development, then test engineers may not be required for the project to succeed.

In some cases an IT organization may be too small or new to have a testing staff even if the situation calls for it. In these circumstances it may be appropriate to instead use contractors or outsourcing, or adjust the project management and development approach (by switching to more senior developers and agile test-first development, for example). Inexperienced managers sometimes gamble on the success of a project by skipping thorough testing or having programmers do post-development functional testing of their own work, a decidedly high-risk gamble.

For non-trivial-size projects or projects with non-trivial risks, a testing staff is usually necessary. As in any business, the use of personnel with specialized skills enhances an organization's ability to be successful in large, complex, or difficult tasks. It allows for both a) deeper and stronger skills and b) the contribution of differing perspectives. For example, programmers typically have the perspective of 'what are the technical issues in making this functionality work?'. A test engineer typically has the perspective of 'what might go wrong with this functionality, and how can we ensure it meets expectations?'. A technical person who can be highly effective in approaching tasks from both of those perspectives is rare, which is why, sooner or later, organizations bring in test specialists.


    How can new Software QA processes be introduced in an existing organization?

A lot depends on the size of the organization and the risks involved. For large organizations with high-risk (in terms of lives or property) projects, serious management buy-in is required and a formalized QA process is necessary.

Where the risk is lower, management and organizational buy-in and QA implementation may be a slower, step-at-a-time process. QA processes should be balanced with productivity so as to keep bureaucracy from getting out of hand.

For small groups or projects, a more ad-hoc process may be appropriate, depending on the type of customers and projects. A lot will depend on team leads or managers, feedback to developers, and ensuring adequate communications among customers, managers, developers, and testers.

The most value for effort will often be in (a) requirements management processes, with a goal of clear, complete, testable requirement specifications embodied in requirements or design documentation, or in 'agile'-type environments extensive continuous coordination with end-users, (b) design inspections and code inspections, and (c) post-mortems/retrospectives.

Other possibilities include incremental self-managed team approaches such as 'Kaizen' methods of continuous process improvement, the Deming-Shewhart Plan-Do-Check-Act cycle, and others.

    What is verification? validation?

Verification typically involves reviews and meetings to evaluate documents, plans, code, requirements, and specifications. This can be done with checklists, issues lists, walkthroughs, and inspection meetings.

Validation typically involves actual testing and takes place after verifications are completed. The term 'IV & V' refers to Independent Verification and Validation.

    What is a 'walkthrough'?

A 'walkthrough' is an informal meeting for evaluation or informational purposes. Little or no preparation is usually required.

    What's an 'inspection'?

An inspection is more formalized than a 'walkthrough', typically with 3-8 people including a moderator, reader, and a recorder to take notes. The subject of the inspection is typically a document such as a requirements spec or a test plan, and the purpose is to find problems and see what's missing, not to fix anything. Attendees should prepare for this type of meeting by reading through the document; most problems will be found during this preparation. The result of the inspection meeting should be a written report.

Thorough preparation for inspections is difficult, painstaking work, but is one of the most cost-effective methods of ensuring quality. Employees who are most skilled at inspections are like the 'eldest brother' in the parable in 'Why is it often hard for organizations to get serious about quality assurance?'. Their skill may have low visibility but they are extremely valuable to any software development organization, since bug prevention is far more cost-effective than bug detection.

    What kinds of testing should be considered?

Black box testing - not based on any knowledge of internal design or code. Tests are based on requirements and functionality.


White box testing - based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths, conditions.

unit testing - the most 'micro' scale of testing; to test particular functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Not always easily done unless the application has a well-designed architecture with tight code; may require developing test driver modules or test harnesses.

incremental integration testing - continuous testing of an application as new functionality is added; requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.

integration testing - testing of combined parts of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.

functional testing - black-box type testing geared to functional requirements of an application; this type of testing should be done by testers. This doesn't mean that the programmers shouldn't check that their code works before releasing it (which of course applies to any stage of testing).

system testing - black-box type testing that is based on overall requirements specifications; covers all combined parts of a system.

end-to-end testing - similar to system testing; the 'macro' end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

sanity testing or smoke testing - typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or corrupting databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state.

regression testing - re-testing after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing approaches can be especially useful for this type of testing.

acceptance testing - final testing based on specifications of the end-user or customer, or based on use by end-users/customers over some limited period of time.

load testing - testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system's response time degrades or fails.

stress testing - term often used interchangeably with 'load' and 'performance' testing. Also used to describe such tests as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc.

performance testing - term often used interchangeably with 'stress' and 'load' testing. Ideally 'performance' testing (and any other 'type' of testing) is defined in requirements documentation or QA or Test Plans.

usability testing - testing for 'user-friendliness'. Clearly this is subjective, and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.

    install/uninstall testing - testing of full, partial, or upgrade install/uninstall processes.

recovery testing - testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

failover testing - typically used interchangeably with 'recovery testing'.


security testing - testing how well the system protects against unauthorized internal or external access, willful damage, etc.; may require sophisticated testing techniques.

compatibility testing - testing how well software performs in a particular hardware/software/operating system/network/etc. environment.

exploratory testing - often taken to mean a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it.

ad-hoc testing - similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it.

context-driven testing - testing driven by an understanding of the environment, culture, and intended use of software. For example, the testing approach for life-critical medical equipment software would be completely different than that for a low-cost computer game.

    user acceptance testing - determining if software is satisfactory to an end-user or customer.

    comparison testing - comparing software weaknesses and strengths to competing products.

alpha testing - testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.

beta testing - testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers.

mutation testing - a method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes ('bugs') and retesting with the original test data/cases to determine if the 'bugs' are detected. Proper implementation requires large computational resources.

    What are 5 common problems in the software development process?

poor requirements - if requirements are unclear, incomplete, too general, and not testable, there may be problems.

unrealistic schedule - if too much work is crammed in too little time, problems are inevitable.

inadequate testing - no one will know whether or not the software is any good until customers complain or systems crash.

featuritis - requests to add on new features after development goals are agreed on.

miscommunication - if developers don't know what's needed or customers have erroneous expectations, problems can be expected.

In agile projects, problems often occur when the project diverges from agile principles (such as forgetting that 'Business people and developers must work together daily throughout the project.' - see the Manifesto for Agile Software Development).

    What are 5 common solutions to software development problems?

solid requirements - clear, complete, detailed, cohesive, attainable, testable requirements that are agreed to by all players. In 'agile'-type environments, continuous close coordination with customers/end-users is necessary to ensure that changing/emerging requirements are understood.

realistic schedules - allow adequate time for planning, design, testing, bug fixing, re-testing, changes, and documentation; personnel should be able to complete the project without burning out.

adequate testing - start testing early on, re-test after fixes or changes, plan for adequate time for testing and bug-fixing. 'Early' testing could include static code analysis/testing, test-first development, unit testing by developers, built-in testing and diagnostic capabilities, automated post-build testing, etc.

stick to initial requirements where feasible - be prepared to defend against excessive changes and additions once development has begun, and be prepared to explain consequences. If changes are necessary, they should be adequately reflected in related schedule changes. If possible, work closely with customers/end-users to manage expectations. In 'agile'-type environments, initial requirements may be expected to change significantly, requiring that true agile processes be in place and followed.

communication - require walkthroughs and inspections when appropriate; make extensive use of group communication tools - groupware, wikis, bug-tracking tools and change management tools, intranet capabilities, etc.; ensure that information/documentation is available and up-to-date - preferably electronic, not paper; promote teamwork and cooperation; use prototypes and/or continuous communication with end-users if possible to clarify expectations.

    What is software 'quality'?

Quality software is reasonably bug-free, delivered on time and within budget, meets requirements and/or expectations, and is maintainable. However, quality is obviously a subjective term. It will depend on who the 'customer' is and their overall influence in the scheme of things. A wide-angle view of the 'customers' of a software development project might include end-users, customer acceptance testers, customer contract officers, customer management, the development organization's management/accountants/testers/salespeople, future software maintenance engineers, stockholders, magazine columnists, etc. Each type of 'customer' will have their own slant on 'quality' - the accounting department might define quality in terms of profits while an end-user might define quality as user-friendly and bug-free. (See the Softwareqatest.com Bookstore section's 'Software QA' category for useful books with more information.)

    What is 'good code'?

'Good code' is code that works, is reasonably bug-free, and is readable and maintainable. Some organizations have coding 'standards' that all developers are supposed to adhere to, but everyone has different ideas about what's best, or what is too many or too few rules. There are also various theories and metrics, such as McCabe Complexity metrics. It should be kept in mind that excessive use of standards and rules can stifle productivity and creativity. 'Peer reviews', 'buddy checks', pair programming, code analysis tools, etc. can be used to check for problems and enforce standards.

For example, in C/C++ coding, here are some typical ideas to consider in setting rules/standards; these may or may not apply to a particular situation:

    minimize or eliminate use of global variables.

use descriptive function and method names - use both upper and lower case, avoid abbreviations, use as many characters as necessary to be adequately descriptive (use of more than 20 characters is not out of line); be consistent in naming conventions.

use descriptive variable names - use both upper and lower case, avoid abbreviations, use as many characters as necessary to be adequately descriptive (use of more than 20 characters is not out of line); be consistent in naming conventions.

function and method sizes should be minimized; less than 100 lines of code is good, less than 50 lines is preferable.

    function descriptions should be clearly spelled out in comments preceding a function's code.

    organize code for readability.

use whitespace generously - vertically and horizontally.

    each line of code should contain 70 characters max.

    one code statement per line.
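As an illustration only (the function below is hypothetical, not drawn from the FAQ), here is a small C function that follows several of the rules above: a descriptive name, a comment preceding the function, no global variables, one statement per line, and lines under 70 characters:

```c
/* ComputeAverageScore: returns the arithmetic mean of the
   first 'score_count' entries of 'score_values', or 0.0
   when 'score_count' is zero or negative.                   */
double ComputeAverageScore(const double *score_values,
                           int score_count)
{
    double running_total = 0.0;

    if (score_count <= 0) {
        return 0.0;
    }

    for (int score_index = 0;
         score_index < score_count;
         score_index++) {
        running_total += score_values[score_index];
    }

    return running_total / score_count;
}
```

Whether names this long or limits this strict are appropriate depends on the organization, which is the point of the preceding caveat about rules stifling productivity.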


Level 3 - standard software development and maintenance processes are integrated throughout an organization; a Software Engineering Process Group is in place to oversee software processes, and training programs are used to ensure understanding and compliance.

Level 4 - metrics are used to track productivity, processes, and products. Project performance is predictable, and quality is consistently high.

Level 5 - the focus is on continuous process improvement. The impact of new processes and technologies can be predicted and effectively implemented when required.

Perspective on CMM ratings: During 1997-2001, 1018 organizations were assessed. Of those, 27% were rated at Level 1, 39% at 2, 23% at 3, 6% at 4, and 5% at 5. (For ratings during the period 1992-96, 62% were at Level 1, 23% at 2, 13% at 3, 2% at 4, and 0.4% at 5.) The median size of organizations was 100 software engineering/maintenance personnel; 32% of organizations were U.S. federal contractors or agencies. For those rated at Level 1, the most problematical key process area was in Software Quality Assurance.

ISO = 'International Organisation for Standardization' - The ISO 9001:2008 standard (which provides some clarifications of the previous standard 9001:2000) concerns quality systems that are assessed by outside auditors, and it applies to many kinds of production and manufacturing organizations, not just software. It covers documentation, design, development, production, testing, installation, servicing, and other processes. The full set of standards consists of: (a) Q9001-2008 - Quality Management Systems: Requirements; (b) Q9000-2005 - Quality Management Systems: Fundamentals and Vocabulary; (c) Q9004-2009 - Quality Management Systems: Guidelines for Performance Improvements. To be ISO 9001 certified, a third-party auditor assesses an organization, and certification is typically good for about 3 years, after which a complete reassessment is required. Note that ISO certification does not necessarily indicate quality products - it indicates only that documented processes are followed. Also see http://www.iso.org/ for the latest information. In the U.S. the standards can be purchased via the ASQ web site at http://www.asq.org/quality-press/

ISO 9126 is a standard for the evaluation of software quality and defines six high-level quality characteristics that can be used in software evaluation. It includes functionality, reliability, usability, efficiency, maintainability, and portability.

IEEE = 'Institute of Electrical and Electronics Engineers' - among other things, creates standards such as 'IEEE Standard for Software Test Documentation' (IEEE/ANSI Standard 829), 'IEEE Standard for Software Unit Testing' (IEEE/ANSI Standard 1008), 'IEEE Standard for Software Quality Assurance Plans' (IEEE/ANSI Standard 730), and others.

    ANSI = 'American National Standards Institute', the primary industrial standards body in the U.S.;publishes some software-related standards in conjunction with the IEEE and ASQ (AmericanSociety for Quality).

    Other software development/IT management process assessment methods besides CMMI andISO 9000 include SPICE, Trillium, TickIT, Bootstrap, ITIL, MOF, and CobiT.


See the Softwareqatest.com 'Other Resources' section for further information available on the web.

    What is the 'software life cycle'?

The life cycle begins when an application is first conceived and ends when it is no longer in use. It includes aspects such as initial concept, requirements analysis, functional design, internal design, documentation planning, test planning, coding, document preparation, integration, testing, maintenance, updates, retesting, phase-out, and other aspects.

    What makes a good Software Test engineer?

A good test engineer has a 'test to break' attitude, an ability to take the point of view of the customer, a strong desire for quality, and an attention to detail. Tact and diplomacy are useful in maintaining a cooperative relationship with developers, and an ability to communicate with both technical (developers) and non-technical (customers, management) people is useful. Previous software development experience can be helpful as it provides a deeper understanding of the software development process, gives the tester an appreciation for the developers' point of view, and reduces the learning curve in automated test tool programming. Judgement skills are needed to assess high-risk or critical areas of an application on which to focus testing efforts when time is limited.

    What makes a good Software QA engineer?

The same qualities a good tester has are useful for a QA engineer. Additionally, they must be able to understand the entire software development process and how it can fit into the business approach and goals of the organization. Communication skills and the ability to understand various sides of issues are important. In organizations in the early stages of implementing QA processes, patience and diplomacy are especially needed. An ability to find problems as well as to see 'what's missing' is important for inspections and reviews.


    What makes a good QA or Test manager?

A good QA, test, or QA/Test (combined) manager should:

    be familiar with the software development process

be able to maintain enthusiasm of their team and promote a positive atmosphere, despite what is a somewhat 'negative' process (e.g., looking for or preventing problems)

    be able to promote teamwork to increase productivity

    be able to promote cooperation between software, test, and QA engineers

    have the diplomatic skills needed to promote improvements in QA processes

have the ability to withstand pressures and say 'no' to other managers when quality is insufficient or QA processes are not being adhered to

    have people judgement skills for hiring and keeping skilled personnel


be able to communicate with technical and non-technical people, engineers, managers, and customers.

    be able to run meetings and keep them focused

    What's the role of documentation in QA?

Generally, the larger the team/organization, the more useful it will be to stress documentation, in order to manage and communicate more efficiently. (Note that documentation may be electronic, not necessarily in printable form, and may be embedded in code comments, may be embodied in well-written test cases, user stories, etc.) QA practices may be documented to enhance their repeatability. Specifications, designs, business rules, configurations, code changes, test plans, test cases, bug reports, user manuals, etc. may be documented in some form. There would ideally be a system for easily finding and obtaining information and determining what documentation will have a particular piece of information. Change management for documentation can be used where appropriate. For agile software projects, it should be kept in mind that one of the agile values is "Working software over comprehensive documentation", which does not mean 'no' documentation. Agile projects tend to stress the short-term view of project needs; documentation often becomes more important in a project's long-term context.

    What's the big deal about 'requirements'?

Depending on the project, it may or may not be a 'big deal'. For agile projects, requirements are expected to change and evolve, and detailed documented requirements may not be needed. However some requirements, in the form of user stories or something similar, are useful. For non-agile types of projects detailed documented requirements are usually needed. (Note that requirements documentation can be electronic, not necessarily in the form of printable documents, and may be embedded in code comments, or may be embodied in well-written test cases, wikis, user stories, etc.) Requirements are the details describing an application's externally-perceived functionality and properties. Requirements are ideally clear, complete, reasonably detailed, cohesive, attainable, and testable. A non-testable requirement would be, for example, 'user-friendly' (too subjective). A more testable requirement would be something like 'the user must enter their previously-assigned password to access the application'. Determining and organizing requirements details in a useful and efficient way can be a difficult effort; different methods and software tools are available depending on the particular project. Many books are available that describe various approaches to this task. (See the Softwareqatest.com Bookstore section's 'Software Requirements Engineering' category for books on Software Requirements.)

Care should be taken to involve ALL of a project's significant 'customers' in the requirements process. 'Customers' could be in-house personnel or outside personnel, and could include end-users, customer acceptance testers, customer contract officers, customer management, future software maintenance engineers, salespeople, etc. Anyone who could later derail the success of the project if their expectations aren't met should be included if possible.

Organizations vary considerably in their handling of requirements specifications. Often the requirements are spelled out in a document with statements such as 'The product shall.....'. 'Design' specifications should not be confused with 'requirements'; design specifications are ideally traceable back to the requirements.

In some organizations requirements may end up in high-level project plans, functional specification documents, in design documents, or in other documents at various levels of detail. No matter what they are called, some type of documentation with detailed requirements will be useful to testers in order to properly plan and execute tests. Without such documentation, there will be no clear-cut way to determine if a software application is performing correctly.


If testable requirements are not available or are only partially available, useful testing can still be performed. In this situation test results may be more oriented to providing information about the state of the software and risk levels, rather than providing pass/fail results. A relevant testing approach in this situation may include an approach called 'exploratory testing'. Many software projects have a mix of documented testable requirements, poorly documented requirements, undocumented requirements, and changing requirements. In such projects a mix of scripted and exploratory testing approaches may be useful. See the Softwareqatest.com 'Other Resources' page in the 'General Software QA and Testing Resources' section for articles on exploratory testing, and in the 'Agile and XP Testing Resources' section for articles on agile software development and testing.

'Agile' approaches use methods requiring close interaction and cooperation between programmers and stakeholders/customers/end-users to iteratively develop requirements, user stories, etc. In the XP 'test first' approach developers create automated unit testing code before the application code, and these automated unit tests essentially embody the requirements.


What steps are needed to develop and run software tests?

The following are some of the steps to consider:

Obtain requirements, functional design, and internal design specifications, user stories, and other available/necessary information

    Obtain budget and schedule requirements

Determine project-related personnel and their responsibilities, reporting requirements, required standards and processes (such as release processes, change processes, etc.)

Determine project context, relative to the existing quality culture of the product/organization/business, and how it might impact testing scope, approaches, and methods.

Identify application's higher-risk and more important aspects, set priorities, and determine scope and limitations of tests.

Determine test approaches and methods - unit, integration, functional, system, security, load, usability tests, etc.

Determine test environment requirements (hardware, software, configuration, versions, communications, etc.)

Determine testware requirements (automation tools, coverage analyzers, test tracking, problem/bug tracking, etc.)

    Determine test input data requirements

    Identify tasks, those responsible for tasks, and labor requirements

    Set schedule estimates, timelines, milestones

    Determine, where apprapriate, input equivalence classes, boundary value analyses, error classes

    Prepare test plan document(s) and have needed reviews/approvals

    Write test cases

    Have needed reviews/inspections/approvals of test cases

    Prepare test environment and testware, obtain needed user manuals/referencedocuments/configuration guides/installation guides, set up test tracking processes, set up loggingand archiving processes, set up or obtain test input data

    Obtain and install software releases

    Perform tests

    Evaluate and report results

    Track problems/bugs and fixes

    Retest as needed

    Maintain and update test plans, test cases, test environment, and testware through life cycle
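    The equivalence-class and boundary-value step above can be sketched in code. This is a minimal illustration assuming a hypothetical integer input field with an inclusive valid range; the field name and range are not from the original FAQ:

```python
def boundary_values(lo, hi):
    """Classic boundary-value picks for an inclusive integer range [lo, hi]:
    just outside, on, and just inside each boundary."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

def equivalence_classes(lo, hi):
    """One representative test input per equivalence class: below the range
    (invalid), inside the range (valid), above the range (invalid)."""
    return {"below": lo - 1, "valid": (lo + hi) // 2, "above": hi + 1}

# Example: a hypothetical 'quantity' field that accepts 1..100.
QUANTITY_BOUNDARIES = boundary_values(1, 100)   # [0, 1, 2, 99, 100, 101]
QUANTITY_CLASSES = equivalence_classes(1, 100)  # one value per class
```

    Each listed value then becomes the input of one test case, which keeps the input-selection step reviewable and repeatable.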


    What's a 'test plan'?

    A software project test plan is a document that describes the objectives, scope, approach, and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the 'why' and 'how' of product validation. It should be thorough enough to be useful but not so overly detailed that no one outside the test group will read it. The following are some of the items that might be included in a test plan, depending on the particular project:

    Title

    Identification of software including version/release numbers

    Revision history of document including authors, dates, approvals

    Table of Contents

    Purpose of document, intended audience

    Objective of testing effort

    Software product overview

    Relevant related document list, such as requirements, design documents, other test plans, etc.

    Relevant standards or legal requirements

    Traceability requirements

    Relevant naming conventions and identifier conventions

    Overall software project organization and personnel/contact-info/responsibilities

    Test organization and personnel/contact-info/responsibilities

    Assumptions and dependencies

    Project risk analysis

    Testing priorities and focus

    Scope and limitations of testing

    Test outline - a decomposition of the test approach by test type, feature, functionality, process, system, module, etc. as applicable

    Outline of data input equivalence classes, boundary value analysis, error classes

    Test environment - hardware, operating systems, other required software, data configurations, interfaces to other systems

    Test environment validity analysis - differences between the test and production systems and their impact on test validity

    Test environment setup and configuration issues

    Software migration processes

    Software CM processes

    Test data setup requirements

    Database setup requirements

    Outline of system-logging/error-logging/other capabilities, and tools such as screen capture software, that will be used to help describe and report bugs

    Discussion of any specialized software or hardware tools that will be used by testers to help track the cause or source of bugs

    Test automation - justification and overview

    Test tools to be used, including versions, patches, etc.

    Test script/test code maintenance processes and version control

    Problem tracking and resolution - tools and processes

    Project test metrics to be used

    Reporting requirements and testing deliverables

    Software entrance and exit criteria

    Initial sanity testing period and criteria

    Test suspension and restart criteria

    Personnel allocation

    Personnel pre-training needs

    Test site/location


    Test date

    Bug reporting date

    Name of developer/group/organization the problem is assigned to

    Description of problem cause

    Description of fix

    Code section/file/module/class/method that was fixed

    Date of fix

    Application version that contains the fix

    Tester responsible for retest

    Retest date

    Retest results

    Regression testing requirements

    Tester responsible for regression tests

    Regression testing results

    A reporting or tracking process should enable notification of appropriate personnel at various stages. For instance, testers need to know when retesting is needed, developers need to know when bugs are found and how to get the needed information, and reporting/summary capabilities are needed for managers.
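    As a rough sketch of such a tracking record and its notification hook - the fields mirror the report items listed above, but the class itself, its dates, and version strings are illustrative placeholders, not a real bug-tracker schema:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class BugReport:
    """A minimal bug-tracking record; field names follow the report items
    listed above, values here are illustrative only."""
    test_date: str
    reported_on: str
    assigned_to: str
    cause: str = ""
    fix_description: str = ""
    fixed_in_module: str = ""
    fix_date: Optional[str] = None
    fixed_in_version: Optional[str] = None
    retester: Optional[str] = None
    retest_passed: Optional[bool] = None
    notifications: list = field(default_factory=list)

    def mark_fixed(self, fix_date, version, retester):
        """Record the fix and queue a notification so the retester knows."""
        self.fix_date = fix_date
        self.fixed_in_version = version
        self.retester = retester
        self.notifications.append(f"retest needed: notify {retester}")

bug = BugReport("2019-07-01", "2019-07-01", "dev-team-a")
bug.mark_fixed("2019-07-03", "1.2.1", "tester-b")
```

    The notification list is the point of interest: each state change pushes a message, so the appropriate person hears about it at the right stage.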


    What is 'configuration management'?

    Configuration management covers the processes used to control, coordinate, and track: code, requirements, documentation, problems, change requests, designs, tools/compilers/libraries/patches, changes made to them, and who makes the changes. (See the 'Tools' section for web resources with listings of configuration management tools. Also see the Softwareqatest.com Bookstore section's 'Configuration Management' category for useful books with more information.)


    What if the software is so buggy it can't really be tested at all?

    The best bet in this situation is for the testers to go through the process of reporting whatever bugs or blocking-type problems initially show up, with the focus being on critical bugs. Since this type of problem can severely affect schedules, and indicates deeper problems in the software development process (such as insufficient unit testing or insufficient integration testing, poor design, improper build or release procedures, etc.) managers should be notified, and provided with some documentation as evidence of the problem.

    How can it be known when to stop testing?

    This can be difficult to determine. Most modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. Common factors in deciding when to stop are:

    Deadlines (release deadlines, testing deadlines, etc.)

    Test cases completed with certain percentage passed

    Test budget depleted

    Coverage of code/functionality/requirements reaches a specified point


    Bug rate falls below a certain level

    Beta or alpha testing period ends
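    The stop factors above can be expressed as a simple combined check. The thresholds below are illustrative assumptions, not recommended values:

```python
def stop_testing(criteria):
    """Return the list of stop factors (from the list above) that currently
    hold. Default thresholds are assumptions for illustration only."""
    reasons = []
    if criteria.get("deadline_reached"):
        reasons.append("deadline reached")
    if criteria.get("pass_rate", 0.0) >= criteria.get("required_pass_rate", 0.95):
        reasons.append("required pass rate met")
    if criteria.get("budget_remaining", 1) <= 0:
        reasons.append("test budget depleted")
    if criteria.get("coverage", 0.0) >= criteria.get("required_coverage", 0.85):
        reasons.append("coverage target met")
    if criteria.get("open_bug_rate", 1.0) <= criteria.get("acceptable_bug_rate", 0.1):
        reasons.append("bug rate below threshold")
    return reasons

# e.g. stop_testing({"pass_rate": 0.97, "coverage": 0.9}) reports two factors
```

    In practice the decision is a judgment call over several such factors at once, which is why the sketch returns all satisfied reasons rather than a single yes/no.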

    Also see 'Who should decide when software is ready to be released?' in the LFAQ section.


    What if there isn't enough time for thorough testing?

    Use risk analysis, along with discussion with project stakeholders, to determine where testing should be focused.

    Since it's rarely possible to test every possible aspect of an application, every possible combination of events, every dependency, or everything that could go wrong, risk analysis is appropriate to most software development projects. This requires judgement skills, common sense, and experience. (If warranted, formal methods are also available.) Considerations can include:

    Which functionality is most important to the project's intended purpose?

    Which functionality is most visible to the user?

    Which functionality has the largest safety impact?

    Which functionality has the largest financial impact on users?

    Which aspects of the application are most important to the customer?

    Which aspects of the application can be tested early in the development cycle?

    Which parts of the code are most complex, and thus most subject to errors?

    Which parts of the application were developed in rush or panic mode?

    Which aspects of similar/related previous projects caused problems?

    Which aspects of similar/related previous projects had large maintenance expenses?

    Which parts of the requirements and design are unclear or poorly thought out?

    What do the developers think are the highest-risk aspects of the application?

    What kinds of problems would cause the worst publicity?

    What kinds of problems would cause the most customer service complaints?

    What kinds of tests could easily cover multiple functionalities?

    Which tests will have the best high-risk-coverage to time-required ratio?
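    One informal way to apply these considerations is a likelihood-times-impact score per application area. A minimal sketch, with hypothetical area names and ratings:

```python
def prioritize(areas):
    """Rank test areas by a simple risk score (likelihood x impact) - a common
    informal risk-analysis technique, not a formal method."""
    return sorted(areas, key=lambda a: a["likelihood"] * a["impact"], reverse=True)

# Hypothetical areas rated 1-5 against the considerations listed above:
areas = [
    {"name": "checkout", "likelihood": 4, "impact": 5},    # complex, financial impact
    {"name": "help pages", "likelihood": 2, "impact": 1},  # low visibility of failure
    {"name": "login", "likelihood": 3, "impact": 5},       # most visible to users
]
ranked = prioritize(areas)  # checkout (20), login (15), help pages (2)
```

    Testing effort is then allocated top-down through the ranked list until time runs out, which is the essence of risk-based test focus.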

    What if the project isn't big enough to justify extensive testing?

    Consider the impact of project errors, not the size of the project. However, if extensive testing is still not justified, risk analysis is again needed and the same considerations as described previously in 'What if there isn't enough time for thorough testing?' apply. The tester might then do ad hoc or exploratory testing, or write up a limited test plan based on the risk analysis.

    How does a client/server environment affect testing?

    Client/server applications can be highly complex due to the multiple dependencies among clients, data communications, hardware, and servers, especially in multi-tier systems. Thus testing requirements can be extensive. When time is limited (as it usually is) the focus should be on integration and system testing. Additionally, load/stress/performance testing may be useful in determining client/server application limitations and capabilities. There are commercial and open source tools to assist with such testing. (See the 'Tools' section for web resources with listings that include these kinds of test tools.)
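    A minimal sketch of concurrent load generation against a client/server entry point, assuming `request_fn` stands in for a real network call; no specific load-testing tool is implied:

```python
import threading
import time

def load_test(request_fn, clients=10, requests_per_client=5):
    """Drive request_fn from several concurrent client threads and collect
    per-request latencies - the core of load/stress testing."""
    latencies = []
    lock = threading.Lock()

    def client():
        for _ in range(requests_per_client):
            start = time.perf_counter()
            request_fn()
            elapsed = time.perf_counter() - start
            with lock:  # latencies list is shared across client threads
                latencies.append(elapsed)

    threads = [threading.Thread(target=client) for _ in range(clients)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return {"requests": len(latencies), "max_latency": max(latencies)}

# A stand-in "server call" that just sleeps briefly:
stats = load_test(lambda: time.sleep(0.001), clients=4, requests_per_client=3)
```

    Real tools add ramp-up schedules, percentile reporting, and distributed load generation, but the measurement loop is the same idea.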


    How is testing affected by object-oriented designs?

    Well-engineered object-oriented design can make it easier to trace from code to internal design to functional design to requirements. While there will be little effect on black box testing (where an understanding of the internal design of the application is unnecessary), white-box testing can be oriented to the application's objects, methods, etc. If the application was well-designed this can simplify test design and test automation design.

    What is Extreme Programming and what's it got to do with testing?

    Extreme Programming (XP) is a software development approach for small teams on risk-prone projects with unstable requirements. It was created by Kent Beck who described the approach in his book 'Extreme Programming Explained' (See the Softwareqatest.com Books page.). Testing ('extreme testing') is a core aspect of Extreme Programming. Programmers are expected to write unit and functional test code first - before writing the application code. Test code is under source control along with the rest of the code. Customers are expected to be an integral part of the project team and to help develop scenarios for acceptance/black box testing. Acceptance tests are preferably automated, and are modified and rerun for each of the frequent development iterations. QA and test personnel are also required to be an integral part of the project team. Detailed requirements documentation is not used, and frequent re-scheduling, re-estimating, and re-prioritizing is expected. For more info on XP and other 'agile' software development approaches (Scrum, Crystal, etc.) see the resource listings in the 'Agile and XP Testing Resources' section.
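    The 'test first' practice can be illustrated with a small unit test written (notionally) before the code it verifies. The sales-tax function here is a hypothetical example, not drawn from the XP literature:

```python
import unittest

def total_with_tax(subtotal, tax_rate):
    """The application code - written only after the tests below had
    already defined its expected behavior."""
    return round(subtotal * (1 + tax_rate), 2)

class TestTotalWithTax(unittest.TestCase):
    """In test-first style, this class exists before total_with_tax does;
    the tests fail until the application code is written to satisfy them."""

    def test_applies_tax(self):
        self.assertEqual(total_with_tax(100.0, 0.08), 108.0)

    def test_zero_tax_leaves_subtotal_unchanged(self):
        self.assertEqual(total_with_tax(50.0, 0.0), 50.0)

# unittest.main() would rerun these on each development iteration.
```

    Because the tests are written first and kept under source control, they serve as the executable form of the requirement, which is what the text means by unit tests "embodying" requirements.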


    Article Purpose

    The purpose of this article is to present an overview of the ISO 9126 standard and to give a detailed description of the software quality model used by this standard.

    ISO 9126 is an international standard for the evaluation of software. The standard is divided into four parts which address, respectively, the following subjects: quality model; external metrics; internal metrics; and quality in use metrics. ISO 9126 Part one, referred to as ISO 9126-1, is an extension of previous work done by McCall (1977), Boehm (1978), FURPS and others in defining a set of software quality characteristics.

    ISO 9126-1 represents the latest (and ongoing) research into characterizing software for the purposes of software quality control, software quality assurance and software process improvement (SPI). This article defines the characteristics identified by ISO 9126-1. The other parts of ISO 9126, concerning metrics or measurements for these characteristics, are essential for SQC, SQA and SPI but the main concern of this article is the definition of the basic ISO 9126 Quality Model.

    The official ISO 9126 documentation can only be purchased and is subject to copyright. SQA.net only reproduces the basic structure of the ISO 9126 standard, and any descriptions, commentary or guidance are original material based on public domain information as well as our own experience.

    The ISO 9126-1 software quality model identifies 6 main quality characteristics, namely:

    Functionality

    Reliability

    Usability

    Efficiency

    Maintainability

    Portability

    These characteristics are broken down into subcharacteristics; a high-level table is shown below. It is at the subcharacteristic level that measurement for SPI will occur. The main characteristics of the ISO 9126-1 quality model can be defined as follows:

    Functionality

    Functionality is the essential purpose of any product or service. For certain items this is relatively easy to define, for example a ship's anchor has the function of holding a ship at a given location. The more functions a product has, e.g. an ATM machine, the more complicated it becomes to define its functionality. For software, a list of functions can be specified, i.e. a sales order processing system should be able to record customer information so that it can be used to reference a sales order. A sales order system should also provide the following functions:

    Record sales order product, price and quantity.

    Calculate total price.

    Calculate appropriate sales tax.

    Calculate date available to ship, based on inventory.

    Generate purchase orders when stock falls below a given threshold.

    The list goes on and on but the main point to note is that functionality is expressed as a totality of essential functions that the software product provides. It is also important to note that the presence or absence of these functions in a software product can be verified as either existing or not, in that it is a Boolean (either a yes or no answer). The other software characteristics listed (i.e. usability) are only present to some degree, i.e. not a simple on or off. Many people get confused between overall process functionality (in which software plays a part) and software functionality. This is partly due to the fact that Data Flow Diagrams (DFDs) and other modeling tools can depict both process functionality (as a set of data in/data out conversions) and software

    functionality. Consider a sales order process that has both manual and software components. A function of the sales order process could be to record the sales order, but we could implement a hard copy filing cabinet for the actual orders and only use software for calculating the price, tax and ship date. In this way the functionality of the software is limited to those calculation functions. SPI, or Software Process Improvement, is different from overall Process Improvement or Process Re-engineering; ISO 9126-1 and other software quality models do not help measure overall Process costs/benefits but only the software component. The relationship of software functionality to an overall business process is outside the scope of ISO 9126; it is only the software functionality, or essential purpose of the software component, that is of interest for ISO 9126.
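    The sales order functions listed earlier can be sketched as code, which also shows the Boolean presence check the text describes. The tax rate and reorder threshold are assumed values for illustration only:

```python
TAX_RATE = 0.05          # assumed flat sales-tax rate, for illustration
REORDER_THRESHOLD = 10   # assumed stock level that triggers a purchase order

def order_total(price, quantity):
    """Record sales order price and quantity -> total price."""
    return price * quantity

def sales_tax(total):
    """Calculate appropriate sales tax (flat rate assumed here)."""
    return round(total * TAX_RATE, 2)

def needs_purchase_order(stock_level):
    """Generate purchase orders when stock falls below a given threshold."""
    return stock_level < REORDER_THRESHOLD

# Presence of each listed function is a yes/no check, as the text notes:
FUNCTIONS_PRESENT = all(
    callable(f) for f in (order_total, sales_tax, needs_purchase_order)
)
```

    Each function either exists in the product or it does not - the Boolean verification - whereas how usable or efficient those functions are is a matter of degree.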

    Following functionality, there are 5 other software attributes that characterize the usefulness of the software in a given environment.

    Each of the following characteristics can only be measured (and are assumed to exist) when the functionality of a given system is present. In this way, for example, a system cannot possess usability characteristics if the system does not function correctly (the two just don't go together).

    Reliability

    Once a software system is functioning, as specified, and delivered, the reliability characteristic defines the capability of the system to maintain its service provision under defined conditions for defined periods of time. One aspect of this characteristic is fault tolerance, that is, the ability of a system to withstand component failure. For example if the network goes down for 20 seconds then comes back the system should be able to recover and continue functioning.
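    A minimal sketch of the fault-tolerance idea: a transient network failure (like the 20-second outage in the example) is waited out with retries instead of failing the whole service. The retry strategy itself is an illustration, not part of ISO 9126:

```python
import time

def call_with_recovery(operation, retries=3, delay=0.01):
    """Retry a failing operation a few times before giving up, so a brief
    component/environment failure does not take down the service."""
    last_error = None
    for _ in range(retries):
        try:
            return operation()
        except ConnectionError as err:
            last_error = err
            time.sleep(delay)  # wait out the transient failure
    raise last_error

# A fake network call that fails twice, then recovers:
attempts = {"count": 0}
def flaky():
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise ConnectionError("network down")
    return "ok"

result = call_with_recovery(flaky)  # succeeds on the third attempt
```

    Recoverability, by contrast, is about restoring a system that did fail (data, connections) - a broader concern than this per-call retry.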

    Usability

    Usability only exists with regard to functionality and refers to the ease of use for a given function. For example a function of an ATM machine is to dispense cash as requested. Placing common amounts on the screen for selection, i.e. $20.00, $40.00, $100.00 etc., does not impact the function of the ATM but addresses the Usability of the function. The ability to learn how to use a system (learnability) is also a major subcharacteristic of usability.

    Efficiency

    This characteristic is concerned with the system resources used when providing the required functionality. The amount of disk space, memory, network etc. provides a good indication of this characteristic. As with a number of these characteristics, there are overlaps. For example the usability of a system is influenced by the system's performance, in that if a system takes 3 hours to respond the system would not be easy to use, although the essential issue is a performance or efficiency characteristic.

    Maintainability

    The ability to identify and fix a fault within a software component is what the maintainability characteristic addresses. In other software quality models this characteristic is referenced as supportability. Maintainability is impacted by code readability or complexity as well as modularization. Anything that helps with identifying the cause of a fault and then fixing the fault is the concern of maintainability. Also the ability to verify (or test) a system, i.e. testability, is one of the subcharacteristics of maintainability.

    Portability

    This characteristic refers to how well the software can adapt to changes in its environment or with its requirements. The subcharacteristics of this characteristic include adaptability. Object oriented design and implementation practices can contribute to the extent to which this characteristic is present in a given system.


    The full table of Characteristics and Subcharacteristics for the ISO 9126-1 Quality Model is:

    Functionality

    Suitability - This is the essential Functionality characteristic and refers to the appropriateness (to specification) of the functions of the software.

    Accurateness - This refers to the correctness of the functions; an ATM may provide a cash dispensing function but is the amount correct?

    Interoperability - A given software component or system does not typically function in isolation. This subcharacteristic concerns the ability of a software component to interact with other components or systems.

    Compliance - Where appropriate certain industry (or government) laws and guidelines need to be complied with, i.e. SOX. This subcharacteristic addresses the compliant capability of software.

    Security - This subcharacteristic relates to unauthorized access to the software functions.

    Reliability

    Maturity - This subcharacteristic concerns frequency of failure of the software.

    Fault tolerance - The ability of software to withstand (and recover) from component, or environmental, failure.

    Recoverability - Ability to bring back a failed system to full operation, including data and network connections.

    Usability

    Understandability - Determines the ease with which the system's functions can be understood; relates to user mental models in Human Computer Interaction methods.

    Learnability - Learning effort for different users, i.e. novice, expert, casual etc.

    Operability - Ability of the software to be easily operated by a given user in a given environment.

    Efficiency

    Time behavior - Characterizes response times for a given throughput, i.e. transaction rate.

    Resource behavior - Characterizes resources used, i.e. memory, cpu, disk and network usage.

    Maintainability

    Analyzability - Characterizes the ability to identify the root cause of a failure within the software.

    Changeability - Characterizes the amount of effort to change a system.

    Stability - Characterizes the sensitivity to change of a given system, that is the negative impact that may be caused by system changes.

    Testability - Characterizes the effort needed to verify (test) a system change.

    Portability

    Adaptability - Characterizes the ability of the system to change to new specifications or operating environments.

    Installability - Characterizes the effort required to install the software.

    Conformance - Similar to compliance for functionality, but this characteristic relates to portability. One example would be Open SQL conformance which relates to portability of database used.

    Replaceability - Characterizes the plug and play aspect of software components, that is how easy it is to exchange a given software component within a specified environment.
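    The characteristic/subcharacteristic table above can be restated as a lookup structure, e.g. for building review checklists. The structure itself is a convenience for illustration, not part of the standard:

```python
# ISO 9126-1 characteristics mapped to their subcharacteristics, as tabled above.
ISO_9126_1 = {
    "Functionality": ["Suitability", "Accurateness", "Interoperability",
                      "Compliance", "Security"],
    "Reliability": ["Maturity", "Fault tolerance", "Recoverability"],
    "Usability": ["Understandability", "Learnability", "Operability"],
    "Efficiency": ["Time behavior", "Resource behavior"],
    "Maintainability": ["Analyzability", "Changeability", "Stability",
                        "Testability"],
    "Portability": ["Adaptability", "Installability", "Conformance",
                    "Replaceability"],
}

def subcharacteristics(characteristic):
    """Look up the subcharacteristics measured (for SPI) under one of the
    six main characteristics."""
    return ISO_9126_1[characteristic]
```

    Since measurement for SPI happens at the subcharacteristic level, a structure like this is a natural seed for a per-release quality checklist.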

    ISO 9126 Observations

    For the most part, the overall structure of ISO 9126-1 is similar to past models, McCall (1977) and Boehm (1978), although there are a couple of notable differences. Compliance comes under the functionality characteristic; this can be attributed to government initiatives like SOX. In many requirements specifications, all specified characteristics that are not pure functional requirements are listed as Non-Functional requirements. It is interesting to note, with ISO 9126, that compliance is seen as a functional characteristic.

    Using the ISO 9126 (or any other quality model) for derivation of system requirements brings clarity of definition of purpose and operating capability. For example a rules engine approach to compliance would enable greater adaptability, should the compliance rules change. The functionality for compliance could be implemented in other ways but these other implementation methods may not produce as strong an adaptability characteristic as a rules, or some other component based, architecture.

    Also, a designer typically will need to make trade-offs between two or more characteristics when designing the system. Consider highly modularized code: this code is usually easy to maintain, i.e. has a good changeability characteristic, but may not perform as well (for cpu resource) as unstructured program code. In a similar vein a normalized database may not perform as well as an unnormalized database. These trade-offs need to be identified, so that informed design decisions can be made.

    Although ISO 9126-1 is the latest proposal for a useful Quality Model of software characteristics, it is unlikely to be the last. One thing is certain: the requirements (including compliance) and operating environment of software will be continually changing, and with this change will come the continuing search for useful characteristics that facilitate measurement and control of the software production process.

    The Capability Maturity Model (CMM) is a service mark registered with the U.S. Patent and Trademark Office by Carnegie Mellon University (CMU) and refers to a development model that was created after study of data collected from organizations that contracted with the U.S. Department of Defense, who funded the research. This became the foundation from which CMU created the Software Engineering Institute (SEI). Like any model, it is an abstraction of an existing system.

    When it is applied to an existing organization's software development processes, it allows an effective approach toward improving them. Eventually it became clear that the model could be applied to other processes. This gave rise to a more general concept that is applied to business.

    Overview

    The Capability Maturity Model (CMM) was originally developed as a tool for objectively assessing the ability of government contractors' processes to perform a contracted software project. The CMM is based on the process maturity framework first described in the 1989 book Managing the Software Process by Watts Humphrey. It was later published in a report in 1993 (Technical Report CMU/SEI-93-TR-024 ESC-TR-93-177, February 1993, Capability Maturity Model for Software, Version 1.1) and as a book by the same authors in 1995.

    Though the CMM comes from the field of software development, it is used as a general model to aid in improving organizational business processes in diverse areas; for example in software engineering, system engineering, project management, software maintenance, risk management, system acquisition, information technology (IT), services, business processes generally, and human capital management. The CMM has been used extensively worldwide in government offices, commerce, industry and software development organizations.


    The result of the Air Force study was a model for the military to use as an objective evaluation of software subcontractors' process capability maturity. Humphrey based this framework on the earlier Quality Management Maturity Grid developed by Philip B. Crosby in his book "Quality is Free".[1] However, Humphrey's approach differed because of his unique insight that organizations mature their processes in stages based on solving process problems in a specific order. Humphrey based his approach on the staged evolution of a system of software development practices within an organization, rather than measuring the maturity of each separate development process independently. The CMM has thus been used by different organizations as a general and powerful tool for understanding and then improving general business process performance.

    Watts Humphrey's Capability Maturity Model (CMM) was published in 1988[3] and as a book in 1989, in Managing the Software Process.[4]

    Organizations were originally assessed using a process maturity questionnaire and a Software Capability Evaluation method devised by Humphrey and his colleagues at the Software Engineering Institute (SEI).

    The full representation of the Capability Maturity Model as a set of defined process areas and practices at each of the five maturity levels was initiated in 1991, with Version 1.1 being completed in January 1993.[5] The CMM was published as a book[6] in 1995 by its primary authors, Mark C. Paulk, Charles V. Weber, Bill Curtis, and Mary Beth Chrissis.

    Superseded by CMMI

    The CMM model proved useful to many, but its application in software development has sometimes been problematic. Applying multiple models that are not integrated within and across an organization could be costly in training, appraisals, and improvement activities. The Capability Maturity Model Integration (CMMI) project was formed to sort out the problem of using multiple models. For software development processes, the CMM has been superseded by CMMI, though the CMM continues to be a general theoretical process capability model used in the public domain.

    [edit]Adapted to other processes

    The CMM was originally intended as a tool to evaluate the ability of government

    contractors to perform a contracted software project. Though it comes from the area of

    software development, it can be, has been, and continues to be widely applied as a

  • 7/31/2019 QA Reviewer

    24/31

    general model of the maturity of processes (e.g., IT service management
    processes) in IS/IT (and other) organizations.

    Model topics

    Maturity model

    A maturity model can be viewed as a set of structured levels that describe how well the

    behaviors, practices and processes of an organization can reliably and sustainably

    produce required outcomes. A maturity model may provide, for example:

    a place to start

    the benefit of a community's prior experiences

    a common language and a shared vision

    a framework for prioritizing actions

    a way to define what improvement means for your organization

    A maturity model can be used as a benchmark for comparison and as an aid to

    understanding - for example, for comparative assessment of different organizations

    where there is something in common that can be used as a basis for comparison. In the

    case of the CMM, for example, the basis for comparison would be the organizations'

    software development processes.

    Structure

    The Capability Maturity Model involves the following five aspects:

    Maturity Levels: a 5-level process maturity continuum - where the uppermost (5th)

    level is a notional ideal state where processes would be systematically managed by

    a combination of process optimization and continuous process improvement.

    Key Process Areas: a Key Process Area (KPA) identifies a cluster of related

    activities that, when performed together, achieve a set of goals considered

    important.

    Goals: the goals of a key process area summarize the states that must exist for that

    key process area to have been implemented in an effective and lasting way. The

    extent to which the goals have been accomplished is an indicator of how much

    capability the organization has established at that maturity level. The goals signify

    the scope, boundaries, and intent of each key process area.


    Common Features: common features include practices that implement and

    institutionalize a key process area. There are five types of common features:

    Commitment to Perform, Ability to Perform, Activities Performed, Measurement and

    Analysis, and Verifying Implementation.

    Key Practices: the key practices describe the elements of infrastructure and

    practice that contribute most effectively to the implementation and institutionalization

    of the KPAs.

    Levels

    There are five levels defined along the continuum of the CMM and, according to the

    SEI: "Predictability, effectiveness, and control of an organization's software processes

    are believed to improve as the organization moves up these five levels. While not

    rigorous, the empirical evidence to date supports this belief".

    1. Initial (chaotic, ad hoc, individual heroics) - the starting point for use of a new or

    undocumented repeat process.

    2. Repeatable - the process is at least documented sufficiently such that repeating

    the same steps may be attempted.

    3. Defined - the process is defined/confirmed as a standard business process, and

    decomposed to levels 0, 1 and 2 (the latter being Work Instructions).

    4. Managed - the process is quantitatively managed in accordance with agreed-

    upon metrics.

    5. Optimizing - process management includes deliberate process

    optimization/improvement.

    Within each of these maturity levels are Key Process Areas (KPAs) which characterise

    that level, and for each KPA there are five definitions identified:

    1. Goals

    2. Commitment

    3. Ability

    4. Measurement

    5. Verification

    The KPAs are not necessarily unique to CMM, representing as they do the stages

    that organizations must go through on the way to becoming mature.


    The CMM provides a theoretical continuum along which process maturity can be

    developed incrementally from one level to the next. Skipping levels is not

    allowed/feasible.
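The "no skipping levels" rule above can be sketched in code. This is an illustrative toy only, not an SEI assessment tool: an organization's assessed maturity is the highest level for which its KPAs, and every lower level's KPAs, are all satisfied, so a gap at a lower level caps the result regardless of what is satisfied above it.

```python
# Illustrative sketch only (not an SEI tool): because levels cannot be
# skipped, assessed maturity is the highest *contiguous* level whose KPAs
# are all satisfied.

LEVELS = {1: "Initial", 2: "Repeatable", 3: "Defined", 4: "Managed", 5: "Optimizing"}

def assessed_level(satisfied_kpas):
    """Return the highest contiguous maturity level reached.

    satisfied_kpas maps level number -> True when all of that level's
    KPAs are satisfied. Level 1 has no KPA requirements, so every
    organization is at least level 1.
    """
    level = 1
    for candidate in (2, 3, 4, 5):
        if satisfied_kpas.get(candidate, False):
            level = candidate
        else:
            break  # a gap below means higher-level KPAs do not count
    return level

# Satisfying level 5 KPAs alone does not help while level 4 is unmet:
print(LEVELS[assessed_level({2: True, 3: True, 4: False, 5: True})])  # prints "Defined"
```

The early `break` is the whole point: it encodes the continuum's rule that maturity is built level by level.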

    N.B.: The CMM was originally intended as a tool to evaluate the ability of government

    contractors to perform a contracted software project. It has been used for and may be

    suited to that purpose, but critics pointed out that process maturity according to the

    CMM was not necessarily mandatory for successful software development. There

    were/are real-life examples where the CMM was arguably irrelevant to successful

    software development, and these examples include many shrinkwrap companies (also

    called commercial-off-the-shelf or "COTS" firms, or software package firms). Such firms

    would have included, for example, Claris, Apple, Symantec, Microsoft, and Lotus.

    Though these companies may have successfully developed their software, they would

    not necessarily have considered or defined or managed their processes as the CMM
    described as level 3 or above, and so would have fitted level 1 or 2 of the model. This

    did not - on the face of it - frustrate the successful development of their software.

    Level 1 - Initial (Chaotic)

    It is characteristic of processes at this level that they are (typically)

    undocumented and in a state of dynamic change, tending to be driven in an ad

    hoc, uncontrolled and reactive manner by users or events. This provides a

    chaotic or unstable environment for the processes.

    Level 2 - Repeatable

    It is characteristic of processes at this level that some processes are repeatable,

    possibly with consistent results. Process discipline is unlikely to be rigorous, but

    where it exists it may help to ensure that existing processes are maintained

    during times of stress.

    Level 3 - Defined

    It is characteristic of processes at this level that there are sets of defined and

    documented standard processes established and subject to some degree of

    improvement over time. These standard processes are in place (i.e., they are the

    AS-IS processes) and used to establish consistency of process performance
    across the organization.

    Level 4 - Managed

    It is characteristic of processes at this level that, using process metrics,

    management can effectively control the AS-IS process (e.g., for software

    development). In particular, management can identify ways to adjust and adapt


    the process to particular projects without measurable losses of quality or

    deviations from specifications. Process Capability is established from this level.

    Level 5 - Optimizing

    It is a characteristic of processes at this level that the focus is on continually

    improving process performance through both incremental and innovative
    technological changes/improvements.

    At maturity level 5, processes are concerned with addressing

    statistical common causes of process variation and changing the

    process (for example, to shift the mean of the process performance)

    to improve process performance. This would be done at the same

    time as maintaining the likelihood of achieving the established

    quantitative process-improvement objectives.

    Software process framework

    The software process framework documented here is intended to guide

    those wishing to assess an organization's or project's consistency with the

    CMM. For each maturity level there are five checklist types:

    Policy: Describes the policy contents and KPA goals recommended by the CMM.

    Standard: Describes the recommended content of select work products described in
    the CMM.

    Process: Describes the process information content recommended by the CMM.
    The process checklists are further refined into checklists for:

    roles
    entry criteria
    inputs
    activities
    outputs
    exit criteria
    reviews and audits
    work products managed and controlled
    measurements
    documented procedures
    training
    tools

    Procedure: Describes the recommended content of documented procedures
    described in the CMM.

    Level overview: Provides an overview of an entire maturity level. The level
    overview checklists are further refined into checklists for:

    KPA purposes (Key Process Areas)
    KPA goals
    policies
    standards
    process descriptions
    procedures
    training
    tools
    reviews and audits
    work products managed and controlled
    measurements

    Throughout the entire software project, the team does many things to find and prevent defects. Once

    the software has been built, it's time to look back and make sure that it meets the requirements. The

    goal of software testing is to make sure that the product does what the users and stakeholders need it

    to do. Software testers review the final product to make sure that the initial requirements have been

    met.

    In software testing, quality is defined as conformance to requirements. Every use case, functional

    requirement, and other software requirement defines a specific behavior that the software must
    exhibit. When the software does not behave the way that the requirements say it must behave, that is

    a defect. This means that software testers are responsible for figuring out whether the software that

    was produced by the team behaves in the way that the requirements it was built from say that it

    should.
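The definition above can be made concrete: each requirement defines a specific behavior, and a defect is any observed deviation from it. In the sketch below, the requirement text and the `discount()` function are invented purely for illustration; the point is that conformance checks fall directly out of the requirement's wording.

```python
# Toy illustration of "quality is conformance to requirements". The
# requirement FR-12 and the discount() function are hypothetical; a defect
# is any observed behavior that deviates from what the requirement says.

REQUIREMENT = "FR-12: Orders of $100 or more receive a 10% discount."

def discount(order_total):
    """Behavior claimed to implement the hypothetical FR-12."""
    return round(order_total * 0.9, 2) if order_total >= 100 else order_total

# Conformance checks derived directly from the requirement's wording:
assert discount(100.0) == 90.0    # boundary case: "$100 or more"
assert discount(99.99) == 99.99   # below the threshold: no discount
# A failure of either assertion would be, by definition, a defect.
```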


    Test Plans and Test Cases

    The goal of test planning is to establish the list of tasks that, if performed, will identify all of the

    requirements that have not been met in the software. The main work product is the test plan. There

    are many standards that can be used for developing test plans. The following table shows the outline of

    a typical test plan. (This outline was adapted from IEEE 829, the most common standard for software

    test plans.)

    Purpose

    A description of the purpose of the application under test.

    Features to be tested

    A list of the features in the software that will be tested. It is a catalog of all of the test cases

    (including a test case number and title) that will be conducted, as well as all of the base states.

    Features not to be tested

    A list of any areas of the software that will be excluded from the test, as well as any test cases that

    were written but will not be run.

    Approach

    A description of the strategies that will be used to perform the test.

    Suspension criteria and resumption requirements

    Suspension criteria are the conditions that, if satisfied, require that the test be halted. Resumption

    requirements are the conditions that are required in order to restart a suspended test.

    Environmental Needs

    A complete description of the test environment or environments. This should include a description of

    hardware, networking, databases, software, operating systems, and any other attribute of the

    environment that could affect the test.

    Schedule

    An estimated schedule for performing the test. This should include milestones with specific dates.

    Acceptance criteria

    Any objective quality standards that the software must meet, in order to be considered ready for

    release. This may include things like stakeholder sign-off and consensus, requirements that the

    software must have been tested under certain environments, minimum defect counts at various priority

    and severity levels, minimum test coverage numbers, etc.

    Roles and responsibilities

    A list of the specific roles that will be required for people in the organization, in order to carry out the

    test. This list can indicate specific people who will be testing the software and what they are
    responsible for.
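The outline above can be captured as a simple data structure, which is a convenient way to check that a draft plan covers every section. The section names follow the outline; every value is an illustrative placeholder, not content prescribed by IEEE 829.

```python
# A sketch of the IEEE 829-style outline as a data structure. Section names
# follow the outline above; all values are invented placeholders.

test_plan = {
    "Purpose": "Validate the editor's search-and-replace feature.",
    "Features to be tested": ["TC-47: lowercase replacement"],
    "Features not to be tested": ["Printing"],
    "Approach": "Manual scripted testing against the documented test cases.",
    "Suspension criteria and resumption requirements": {
        "suspend": "The build cannot be installed in the test environment.",
        "resume": "A new installable build has been delivered.",
    },
    "Environmental needs": {"os": "Windows 10", "application": "v2.3 beta"},
    "Schedule": {"test execution complete": "2019-08-15"},
    "Acceptance criteria": "No open priority-1 or priority-2 defects.",
    "Roles and responsibilities": {"tester": "J. Doe"},
}

# Sanity check: the draft plan covers every section of the outline.
outline = [
    "Purpose", "Features to be tested", "Features not to be tested", "Approach",
    "Suspension criteria and resumption requirements", "Environmental needs",
    "Schedule", "Acceptance criteria", "Roles and responsibilities",
]
assert all(section in test_plan for section in outline)
```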

    The test plan represents the overall approach to the test. In many ways, the test plan serves as a

    summary of the test activities that will be performed. It shows how the tests will be organized, and

    outlines all of the testers' needs that must be met in order to properly carry out the test. The test plan
    is especially valuable because it is not a difficult document to review, so the members of the

    engineering team and senior managers can inspect it. The bulk of the test planning effort is focused on

    creating the test cases. A test case is a description of a specific interaction that a tester will have, in

    order to test a single behavior of the software. Test cases are very similar to use cases, in that they are

    step-by-step narratives that define a specific interaction between the user and the software. However,

    unlike use cases, they contain references to specific features of the user interface. The test case

    contains actual data that must be entered into the software and the expected result that the software

    must generate.

    A typical test case includes these sections, usually laid out in a table:

    A unique name and number

    A requirement that this test case is exercising

    Preconditions that describe the state of the software before the test case (which is often a previous
    test case that must always be run before the current test case)

    Steps that describe the specific steps that make up the interaction

    Expected results that describe the expected state of the software after the test case is executed

    The following table shows an example of a test case that would exercise one specific behavior in
    the search-and-replace requirement. This requirement specified how a search-and-replace function
    must deal with case sensitivity. One part of that requirement said, "If the original text was all

    lowercase, then the replacement text must be inserted in all lowercase."

    Name: TC-47: Verify that lowercase data entry results in lowercase insert

    Requirement: FR-4 (Case sensitivity in search-and-replace), bullet 2

    Preconditions: The test document TESTDOC.DOC is loaded (base state BS-12).

    Steps:
    1. Click on the Search and Replace button.
    2. Click in the Search Term field.
    3. Enter "This is the Search Term".
    4. Click in the Replacement Text field.
    5. Enter "This IS THE Replacement TeRM".
    6. Verify that the Case Sensitivity checkbox is unchecked.
    7. Click the OK button.

    Expected Results:
    1. The search-and-replace window is dismissed.
    2. Verify that in line 38 of the document, the text "this is the search term" has
    been replaced by "this is the replacement term".
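A manual test case like TC-47 can also be expressed as an automated check. In the sketch below, the `Editor` class and its methods are hypothetical stand-ins for a real application or UI-automation API; only the test data and the expected result come from the table above.

```python
import re

# Sketch of an automated TC-47. The Editor class is a hypothetical, in-memory
# stand-in for the real application; it implements just enough of FR-4
# (bullet 2: an all-lowercase match gets an all-lowercase replacement) to run.

class Editor:
    """Toy in-memory editor, just enough to make the sketch runnable."""

    def __init__(self, text):
        self.text = text

    def search_and_replace(self, term, replacement, case_sensitive=False):
        if case_sensitive:
            self.text = self.text.replace(term, replacement)
            return
        # FR-4, bullet 2: if the matched text was all lowercase, insert
        # the replacement in all lowercase.
        def repl(match):
            found = match.group(0)
            return replacement.lower() if found.islower() else replacement
        self.text = re.sub(re.escape(term), repl, self.text, flags=re.IGNORECASE)

def test_tc47_lowercase_entry_results_in_lowercase_insert():
    editor = Editor("this is the search term")            # precondition: BS-12
    editor.search_and_replace("This is the Search Term",  # steps 1-7
                              "This IS THE Replacement TeRM",
                              case_sensitive=False)
    # Expected result: replacement inserted in all lowercase.
    assert editor.text == "this is the replacement term"

test_tc47_lowercase_entry_results_in_lowercase_insert()
```

Note how each section of the manual test case maps onto the automated one: preconditions become setup, steps become the method call, and expected results become assertions.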
