Checkpoints v1.01 TPI-Automotive


Upload: guru-prasad

Posted on 26-Dec-2015


DESCRIPTION

Checklist for scoring TPI maturity

TRANSCRIPT

Page 1: Checkpoints v1.01 TPI-Automotive

1. Test strategy

Test Strategy for single high-level test

1A1

1A2

1A3

1A4

1A5

1A6

Combined testing strategy for high-level tests

1B1

1B2

1B3

1B4

1B5

Combined strategy for high-level tests plus low-level tests (like Component Integration Tests) or evaluation

1C1

1C2

1C3

1C4


1C5

1C6

Combined strategy for all test and evaluation levels

1D1

1D2

1D3

1D4

1D5

1D6

2. Life-cycle model

2A1

2A2

Planning, Preparation, Design, Execution and Completion

2B1

2B2

3. Moment of involvement

3A1

Start of test basis

3B1

Start of requirements definition

3C1

Project initiation

3D1

4. Planning and estimation

Substantiated estimating and planning


4A1

4A2

4A3

Statistically substantiated estimating and planning

4B1

4B2

5. Test design techniques

Informal techniques

5A1

5A2

Formal techniques

5B1

5B2

5B3

5C1

6. Static test techniques

Inspection of test basis

6A1

6A2

Checklists

6B1

7. Metrics

Project metrics (product)

7A1

7A2

7A3

Project metrics (process)


7B1

7B2

System metrics

7C1

7C2

Organization metrics (>1 system)

7D1

7D2

8. Test automation

Use of tools

8A1

8A2

8A3

Managed test automation

8B1

8B2

8B3

8B4


8B5

8B6

Optimal test automation

8C1

8C2

8C3

8C4

8C5

9. Test environment

Managed and controlled test environment

9A1

9A2

9A3

9A4

9A5

9A6

9A7

9A8

Testing in the most suitable environment

9B1


9B2

9B3

Environment on call

9C1

10. Office and laboratory environment

10A1

10A2

10A3

11. Motivation and engagement

Assignment of budget and time

11A1

11A2

11A3

11A4

11A5

11A6

Testing integrated in project organisation

11B1

11B2

11B3

11B4

11B5

11B6

Test engineering

11C1

11C2

11C3

11C4

11C5

11C6

11C7

12. Test functions and training


12A1

12A2

12A3

12A4

(Formal) Methodical, Technical and Functional support, Management of the test process, testware and infrastructure

12B1

12B2

12B3

12B4

12B5

12B6

12B7

12B8

12C1

12C2

12C3

12C4

13. Scope of methodology

Project specific

13A1

13A2

13A3

Project specific with external scope

13B1

Organization generic

13C1

13C2

13C3

Organization optimizing, R&D activities

13D1


13D2

14. Communication

Internal communication

14A1

14A2

14A3

Project communication (defects, change control)

14B1

14B2

14B3

14B4

14B5

14B6

14B7

14B8

Communication in organization about the quality of the test processes

14C1

14C2

15. Reporting

Defects

15A1

15A2

15A3

Progress (status of tests and products), activities (costs and time, milestones), defects with priorities

15B1

15B2

Risks and recommendations, substantiated with metrics

15C1

15C2


15C3

15C4

15D1

16. Defect management

Internal defect management

16A1

16A2

Extensive defect management with flexible reporting facilities

16B1

16B2

16B3


16B4

Project defect management

16C1

16C2

16C3

17. Testware management

Internal testware management

17A1

17A2

17A3

External management of test basis and test object

17B1

17B2

17B3

17B4

17B5

17B6

17B7

External management of test basis and test object

17C1

17C2

18. Test process management


Planning and execution

18A1

18A2

Planning, executing, monitoring and adjusting

18B1

18B2

18B3

18B4

Monitoring and adjustment in organisation

18C1

18C2

18C3

19. Evaluation

Informal evaluation

19A1

19A2

19A3

19A4

Evaluation techniques

19B1

Evaluation strategy

19C1

19C2

19C3

19C4

19C5

20. Low-level testing

20A1

20A2

White-box techniques


20B1

20B2

20B3

Low-level test strategy

20C1

20C2

20C3

20C4

20C5

21. Integration test

Integration identified as a separate and planned process

21A1

21A2

21A3

21A4

21A5

Strategy for integration

21B1

21B2

21B3

21B4

21B5

21B6


21B7

Standardized strategy for integration

21C1

21C2

21C3


1. Test strategy

Test Strategy for single high-level test

Combined testing strategy for high-level tests

Each high-level test determines its own test strategy, based on the coordinating strategy, as is described in level A.

Combined strategy for high-level tests plus low-level tests (like Component Integration Tests) or evaluation

Each high-level test determines, on the basis of the coordination, its own test strategy, as described in level A.

A motivated consideration of the product risks takes place. Typical risk categories to be verified are: technical risks (an FMEA can be used as a basis), organizational risks associated with development/test, operational usage of the product, political and contractual risks, and liability.

The consideration implies at least the following aspects:
- Regression testing of unmodified parts of the software is part of this strategy when the test object is an update or new release of existing software.
- Software is often parameterized, for instance because of different country legislation. If this is the case, part of the test strategy should be whether and how the different parameter settings are tested, based on the estimated risks.
- The software to test can be commercial-off-the-shelf (COTS), reuse or core, tailor-made, and often a combination of these. The test strategy should take into account the different risk profiles of COTS, reuse/core and tailor-made software.
- Risks arising from the dependencies of products of the HW and SW baselines are taken into account, e.g. required backward compatibility or HW breaks.

The stakeholders of the product are involved in the process of defining the test strategy. At least the stakeholders (especially the acceptor of the product) have to be invited to review the proposed test strategy and its present status.

There is a differentiation in test depth, depending on the risks and, if present, on the acceptance, entry and exit criteria: not all system parts, variants and versions are tested equally thoroughly, and not all quality characteristics are tested (equally thoroughly). When incremental delivery of functionality takes place (so-called A-Muster, B-Muster, etc.), for every increment a clear decision is made about what must be tested and which regression tests must be carried out. One or more test design techniques are used, suited to the required depth of a test.

For re-tests, a (simple) strategy determination also takes place, in which a motivated choice between "test solutions only" and "full re-test" is made. At least a differentiation is made between changes in parameter settings and changes in source code.

Coordination takes place between the different high-level tests, often the system, acceptance and production acceptance tests, or supplier-side and commissioner-side testing, in the field of test strategy (risks, quality characteristics, area of consideration of the test, and planning).

The result of the coordination is a coordinated strategy, which is documented. During the total test process this strategy is controlled.

Deviations from the coordinating strategy are reported. A substantiated adjustment of the coordinating strategy is made based on the risks identified for these deviations. In case of incremental delivery, the validity of the strategy is checked for each increment. For re-tests, coordination takes place between the different test levels. In case the different test levels cross the borders of the organization, commissioner and supplier make clear decisions about the re-test on both sides, based on the entry and exit criteria.

Coordination takes place between the high-level tests and the low-level tests or the evaluation levels in the area of test strategy (risks, quality characteristics, area of consideration of the test/evaluation and planning).

The result of the coordination is a coordinated strategy, which is documented. During the total (evaluation and) test process this strategy is controlled.

(if applicable) Each low-level test determines, on the basis of the coordination, its own test strategy, as is described in key area "Low-level testing", level C.


Combined strategy for all test and evaluation levels

Each high-level test determines its own test strategy on the basis of the coordination, such as described in level A.

2. Life-cycle model

Planning, Preparation, Design, Execution and Completion

3. Moment of involvement

Start of test basis

Start of requirements definition

Project initiation

When the project is initiated, the activity "testing" is also started.

4. Planning and estimation

Substantiated estimating and planning

(if applicable) Each evaluation level determines, on the basis of the coordination, its own evaluation strategy, as is described in key area "Evaluation", level C .

Deviations from the coordinating strategy are reported. A substantiated adjustment of the coordinating strategy is made based on the risks identified for these deviations.

Coordination takes place between the high-level tests, the low-level tests and the evaluation levels in the area of test strategy (risks, quality characteristics, area of consideration of the test/evaluation and planning).

The result of the coordination is a coordinating strategy, which is documented. During the total evaluation and test process this strategy is controlled.

Each low-level test determines its own test strategy on the basis of the coordination, such as is described in the key area "Low-level testing", level C.

Each evaluation level determines its own evaluation strategy on the basis of the coordination, such as is described in the key area "Evaluation", level B.

Deviations from the coordinating strategy are reported. A substantiated adjustment of the coordinating strategy is made based on the risks identified for these deviations.

Planning, Design, Execution

For the tests, (at least) the following phases are recognized: planning, design and execution. These are performed consecutively, possibly per subsystem. A certain overlap between the phases is allowed.

Activities to be performed per phase are mentioned in Sheet 2. Each activity contains sub-activities and/or aspects. These sub-activities and/or aspects are meant as additional information and are not obligatory.

For the tests the following phases are distinguished: Planning, Preparation, Design, Execution and Completion. The phases are executed consecutively, possibly per subsystem. A certain overlap between the phases is allowed.

Each activity is supplied with sub-activities and/or aspects. These are meant as additional information and are not obligatory. Activities to be executed per phase are mentioned in Sheet 2.

Completion of test basis 

The activity "testing" starts simultaneously with or earlier than the completion of the test basis for a restricted part of the system that is to be tested separately. The system can be divided into several parts which are built, finished and tested separately. The testing of the first subsystem has to start at the same time or earlier than the completion of the test basis of that particular subsystem. 

The activity "testing" starts simultaneously with or earlier than the phase in which the test basis (often the functional specifications) is defined.

The activity "testing" starts simultaneously with or earlier than the phase in which the (customer and system) requirements are defined.


In the test process, estimating and planning are monitored, and adjustments are made if needed.

Statistically substantiated estimating and planning

This data is used to substantiate test estimating and planning.

5. Test design techniques

Informal techniques

The technique at least consists of: a) start situation, b) change process = test actions to be performed, c) expected result.
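The three-part structure above can be captured as a simple record. A minimal sketch in Python; the class, field names and the automotive-flavoured example content are all hypothetical, chosen only to illustrate the (a) start situation, (b) change process, (c) expected result pattern:

```python
# Hypothetical sketch of an informal test case record:
# (a) start situation, (b) change process (test actions), (c) expected result.
from dataclasses import dataclass
from typing import List

@dataclass
class TestCase:
    identifier: str
    start_situation: str       # (a) precondition / initial state
    test_actions: List[str]    # (b) test actions to be performed
    expected_result: str       # (c) what the tester should observe

    def is_complete(self) -> bool:
        # A case satisfies the technique only when all three elements are present.
        return bool(self.start_situation and self.test_actions and self.expected_result)

# Invented example content for illustration only.
tc = TestCase(
    identifier="TC-001",
    start_situation="Ignition on, ECU in default parameter set",
    test_actions=["Send diagnostic request 0x22", "Wait 50 ms"],
    expected_result="Positive response 0x62 within 100 ms",
)
print(tc.is_complete())  # → True
```

A checklist over such records (is every element filled in?) is one way to make the informal technique reviewable.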

Formal techniques

Besides informal techniques, formal techniques are used; unambiguous ways of getting from test basis to test cases are used.

A substantiated judgment is possible about the level of coverage, based on the collection of test cases (compared to the test basis). The testware is reusable (within the test team) by means of a uniform working method.

At least one mathematical method is used to derive test cases.

6. Static test techniques

Inspection of test basis

Checklists

7. Metrics

Project metrics (product)

Project metrics (process)

The test estimating and planning can be substantiated (so not just "we did it this way in the last project"). For basic activities it is clear how much time it costs to execute those activities.

In case of short-term changes forced by the commissioner and/or supplier, a re-planning of test activities is performed.  

Metrics about progress and quality are structurally maintained (on level B of the key area Metrics) for multiple, comparable projects.

The test cases are defined according to a documented technique which describes how test cases should be derived.

Mathematical methods 

Prior to the definition of the test cases, a study of the testability of the test basis is performed and documented. In this study checklists are used. The checklists are related to the test design techniques that are selected in the test strategy.

Static tests other than inspection of the test basis take place by means of checklists (approved by project and/or commissioner). These checklists are used to conduct a static test on the test object for non-functional quality characteristics.

In the (test) project, input metrics are recorded:
• used resources: hours,
• performed activities: hours and lead time,
• size and complexity of the tested system: number of functions and/or building effort, number of system requirements, Lines of Code, etc.

In the (test) project, output metrics are recorded:
• test products: specifications and test cases, log reports,
• test progress: performed tests, status (executed: passed/failed/not finished),
• number of defects: defects by test level, by subsystem, by cause, priority, status (new, in solution, corrected, retested),
• achieved code coverage for at least the low-level test, e.g. statement coverage C0, branch coverage C1.
The metrics are used in test reporting.
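As an illustration of recording such output metrics, here is a hedged sketch with invented example data and hypothetical field names, tallying test progress by status and defect counts by test level:

```python
# Hypothetical sketch: deriving two of the output metrics named above
# (test progress by status, defects by test level) from simple log records.
from collections import Counter

# Invented example records for illustration only.
executions = [
    {"test": "T1", "status": "passed"},
    {"test": "T2", "status": "failed"},
    {"test": "T3", "status": "not finished"},
    {"test": "T4", "status": "passed"},
]
defects = [
    {"id": "D1", "level": "system test", "status": "new"},
    {"id": "D2", "level": "system test", "status": "corrected"},
    {"id": "D3", "level": "integration test", "status": "retested"},
]

progress = Counter(e["status"] for e in executions)
defects_by_level = Counter(d["level"] for d in defects)

print(progress["passed"])               # → 2
print(defects_by_level["system test"])  # → 2
```

In practice these tallies would come from the defect administration and test log tooling, not hand-written lists; the point is only that the counts feed directly into test reporting.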


Metrics including trend analysis (e.g. a predefined curve compared with the actual situation) are used in test reporting.

System metrics

Metrics are used in the assessment of the effectiveness and efficiency of the test process.

Organization metrics (>1 system)

Organization-wide mutually comparable metrics are maintained for the already mentioned data.

8. Test automation

Use of tools

Managed test automation

If the decision on automation of the test execution is a positive one, there will now also be a tool for test execution.

In the (test) project Result measurements are made for at least 2 of the items mentioned below: 

defect detection effectiveness:
- the detected defects compared to the total defects present (in %); the latter is difficult to measure, but think of the number of defects found in later tests or in the first months after SOP (Start of Production);
- analysis of which previous test level should have detected the defects (this indicates something about the effectiveness of preceding test levels!);

defect detection efficiency:
- the number of detected defects per spent hour, measured over the entire test period or over several test levels;

test coverage level:
- test objectives covered by a test case compared to the number of possible test objectives (in %). These objectives can be determined for system requirements, software requirements and software design, e.g. functional coverage or requirement coverage;

testware defects:
- the number of "defects" detected whose cause turned out to be wrong testing, compared to the total number of defects found (in %);

perception of quality:
- by means of reviews and interviews of users, testers and other people involved, e.g. provided by quality departments.
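The ratios above are straightforward to compute. A small sketch with invented numbers; the function names are illustrative, not part of TPI-Automotive:

```python
# Hypothetical sketch of the result measurements as simple ratios.

def detection_effectiveness(found_in_test: int, found_later: int) -> float:
    """Detected defects vs. total defects present (test + later/field), in %."""
    total = found_in_test + found_later
    return 100.0 * found_in_test / total

def detection_efficiency(found_in_test: int, hours_spent: float) -> float:
    """Defects detected per spent test hour."""
    return found_in_test / hours_spent

def coverage_level(covered_objectives: int, total_objectives: int) -> float:
    """Test objectives covered by a test case vs. possible objectives, in %."""
    return 100.0 * covered_objectives / total_objectives

# Invented figures: 90 defects found in test, 10 after SOP, 450 test hours,
# 180 of 200 requirements covered.
print(detection_effectiveness(90, 10))  # → 90.0
print(detection_efficiency(90, 450))    # → 0.2
print(coverage_level(180, 200))         # → 90.0
```

Note that "total defects present" is only known approximately (defects found later serve as a proxy), which is exactly the caveat the checkpoint itself makes.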

Metrics mentioned above are recorded for development, maintenance and after SOP.

Metrics are used in assessing the effectiveness and efficiency of the separate test processes, to achieve an optimization of the generic test methodology and future test processes.

A decision has been taken to automate certain activities in the planning and/or execution phases. The test management and the party who allocates budget for the tools (generally the line management or project management) are involved in this decision.

Use is made of automated tools that support certain activities in the planning and execution phases (such as a scheduling tool, a defect registration tool and/or home-built stubs and drivers, code checkers, MiL, SiL and HiL).

The test management and the party allocating budget for the tools acknowledge that the tools being used provide more advantages than disadvantages.

A well-considered decision has been taken regarding the parts of the test execution that should or should not be automated. This decision involves those types of test tools and test activities that belong to the test execution.

The introduction of new test tools is preceded by an inventory of technical aspects (does the test tool work in the infrastructure?) and any possible preconditions set for the test process (for example, test cases should be established in a certain structure instead of in a free-text form, so that the test tool can use this as input).

If use is made of a test sequencer  for automated test execution, explicit consideration should be given during implementation to maintainability of the test scripts included.


Optimal test automation

There is a periodic review of the advantages of the test automation.

There is awareness of the developments on the test tool market.

9. Test environment

Managed and controlled test environment

Only with the permission of the test manager are changes allowed to the test object and the test environment.

Commissioner and supplier shall coordinate the configuration of the shared/non-shared test environment.

Testing in the most suitable environment

Most of the test tools can be reused for a future test project. To do so, the management of the test tools has been arranged. The fact that 'in general' test tools should be reusable means that test tools used explicitly within one test process need not be reusable.

The use of the test tools matches the desired methodology of the test process, which means that use of a test tool will not result in inefficiency or undesired limitations of the test process.

A well-considered decision has been taken regarding the parts of the test process that should or should not be automated. All possible types of test tool and all test activities are included in this decision.

There is insight into the cost/profit ratio for all test tools in use (where costs and profits need not merely be expressed in terms of money).

New test tools for the test process are implemented according to a structured process. Aspects that require attention within this process include:
- aims (what the automation should yield in terms of time, money and/or quality);
- scope (which test levels and which activities should be automated);
- required personnel and expertise (any training to be taken);
- required technical infrastructure;
- selecting the tool;
- implementation of the tool;
- developing maintainable scripts;
- institutionalizing management and control of the tool.

The test environment must be set up in time (which can also mean that the test object must be delivered in time, if the test object is part of the test environment). In case of a dedicatedly designed and/or built test environment (e.g. stub components), an early start of design, purchasing, installation and configuration must be planned.

The test environment is managed (with regard to setup, availability, maintenance, configuration management (soft- and hardware), error handling, authorizations, system parts supplied by suppliers and third parties …). The configuration is updated and reflects the expectations of the next test level. 

The saving and restoring of certain test situations with the associated version of the test environment (soft- and hardware) can be arranged quickly and easily. In the case of a prototype car which is only available for a limited time, this will be almost impossible to realise later on in the project; in that case this checkpoint can be neglected.

The environment is sufficiently representative for the test to be performed. This depends on the scope of the test and the type of testing. In general: the closer the test level is to the pre-production test, the more the test environment has to resemble the real environment.

The (hardware and software) requirements for the test environment are well defined, understood and documented.

The test environment which is supplied by the commissioner (e.g. prototype, external ECUs or prototype car) is defined and documented.

Each test is performed in the most suitable environment, either by execution in another environment (the environment of supplier or commissioner) or by quickly and easily adapting the own environment.


The risks taken with adapted and changed environments are analyzed and adequate measures have been taken.

Environment on call

The environment which is most suited for a test is very flexible and can quickly be adapted to changing requirements.

10. Office and laboratory environment

11. Motivation and engagement

Assignment of budget and time

Testing is regarded by the people involved as necessary and important.

An amount of time and budget is allocated for testing.

Testing integrated in project organisation

All those involved find that testing has a noticeable positive influence on the quality of the product.

The management wants to have insight into the depth and quality of testing.

In the project planning, the cycle of testing, rework and re-testing is taken into account.

Testing is involved in the planning of the delivery sequence of the parts.

The advice from testing is discussed in the project meetings.

Test engineering

The test team is involved in the design and realization to provide optimal testability of the system ("design for test").

The test team has sufficient knowledge and skills to provide a meaningful realization of the checkpoint mentioned above.

Recommendations of the test team are considered seriously by the organization and/or project. Management supports testers (with people and means) and works continually on the improvement of the test process.

Participation in testing is regarded as a "promotion"; testing has a high status.

The development process is of sufficient maturity: at least time and quality are controlled. Test jobs are described at organization level, including career possibilities and reward structures.

12. Test functions and training

The environment is finished in time for the test and there is no disturbance by other activities during the test. In the case of prototype cars the disturbance is usually present, because a prototype car can contain additional prototype ECUs which can influence the test. In this case the disturbance must be minimised as far as possible, under the control of the tester.

Adequate and timely office and laboratory environment 

The office and laboratory infrastructure needed for testing (offices, meeting rooms, telephones, PCs, network connections, office software, printers, data communication connections, etc.) is arranged on time.

Things related to office organization have a minimal impact on the progress of the test process (as little moving as possible, physical distance between testers and the rest of the project not too large, etc.).

The office and laboratory infrastructure needed for external testing (e.g. test benches, car testing abroad) has a well-defined connection to the headquarters.

Management controls testing based on time and money. A feature is that if the test time or budget is exceeded, initially a solution is sought within the test (doing overtime or employing extra people when exceeding these limits, or on the contrary cutting time and/or budget).

In the team there is enough knowledge and experience in the field of testing to complete testing tasks allocated to the team. The team can use knowledge built up in expertise groups dedicated to a certain subject (e.g. automated testing, HIL-test, etc.).  

The activities for testing are full-time for most participants during the test project (therefore not many conflicts with other activities).

There is a well-defined relationship between the testers and other disciplines in the project and the organization.

The management controls testing based on time, money and quality. A feature is that the solution for test problems (for example exceeding test time or budget) is also sought outside the test project. Possibly the developer is addressed here.

Test manager, integrator and testers


For the test, domain expertise is available to the test team.

(Formal) Methodical, Technical and Functional support, Management of the test process, testware and infrastructure

The role Technical Support is separately outlined.

The role Functional Support is separately outlined.

The persons who carry out these tasks have sufficient knowledge and experience.

The time needed for these tasks is planned. Supervision is carried out to see that these tasks are in fact performed.

The results of test review activities are used as input for further test process improvement.

13. Scope of methodology

Project specific

Methodology is formulated for each project.

Methodology is followed.

Project specific with external scope

Organization generic

The methodology is defined in a generic model for the organization. Each project works according to this generic model.

Variances are sufficiently argued and documented.

Organization optimizing, R&D activities

At least two roles are defined: test manager and tester. If it is part of the test project to integrate deliveries from 2nd or 3rd parties, the role of an integrator has to be defined additionally. The tasks and responsibilities, with the needed experience and possible training, have been defined and documented.

The test personnel has had specific test training (e.g. test management, test design techniques, etc.) or has sufficient experience in the field of testing.

The role Methodical Support is separately outlined. Its activities are defining and maintaining test instructions, procedures and techniques and advising about and evaluating the right application of the above.

The task Management test process is outlined separately and is responsible for the registration, storage, and availability of all management objects of the test process. Sometimes one will carry out the management oneself, in other cases one will organize and/or evaluate that management. Objects to be managed are progress, budgets and defects.  

The task Management testware is outlined separately and is responsible for the registration, storage, and availability of all management objects of the testware. Sometimes one will carry out the management oneself, in other cases one will organize and/or evaluate that management. Objects to be managed are test documentation, test basis, test objects (internal), test cases including test files and databases, test instructions and procedures. 

The task Management test infrastructure is outlined separately and is responsible for the registration, storage, and availability of all management objects of the test infrastructure. Sometimes one will carry out the management oneself, in other cases one will organize and/or evaluate that management. Objects to be managed are test environments (test databases) and test tools.  

Formal internal reviewing 

Parallel to the test plan, an internal reviewing plan for testing is formulated.

The person with the test reviewing task has no other tasks within the test team.

The person who performs the review has sufficient test knowledge and experience.

The aspects described cover at least: description of the full life-cycle model of testing, management of the test process (progress and quality), test product management, defect management, and test design techniques to be used.

The dependencies (test strategy, life cycle model, test design techniques, communication, reporting, defect management and testware management) arranging the interfaces between commissioner and supplier, are realized.  

There is a structured feedback process (both formally elicited and implemented by the R&D department) in the generic methodology.


Structural maintenance and innovation (R&D) are done in the generic methodology, e.g. on the basis of feedback.

14. Communication

Internal communication

Periodically, each team member participates in the meeting.

Deviations from the test plan are communicated and documented.

Project communication (defects, change control)

In the test team meeting, minutes are taken.

Agreements in this meeting are documented.

Testing is involved in change control for judging the impact of change proposals on the test effort.

Communication in organization about the quality of the test processes

Participants are representatives of the test teams and of the line department for testing.

15. Reporting

Defects

The defects found are reported periodically, divided into solved and unsolved defects (pending or closed).

Progress (status of tests and products), activities (costs and time, milestones), defects with priorities

The defects are reported, divided into severity categories according to clear and objective norms.

Risks and recommendations, substantiated with metrics

Possible trends with respect to progress and quality are documented and reported periodically.

There is a periodic meeting within the test team and within the development team. This meeting has a fixed agenda, and its main focus is progress (lead time and spent hours) and the quality of the object to be tested. The results of this meeting are documented by means of notes, a protocol or a status list.

In the test team meeting, besides progress and the quality of the test object, the quality of the test process is a fixed subject on the agenda.

Periodically, the test manager reports about the progress and about the quality of the object to be tested, including the risks, in the project meeting. The test manager also reports about the quality of the test process.

The test manager is informed immediately about changes in planned and agreed delivery dates of the test basis as well as the test object and test environment (e.g. mechanical parts, prototype cars, simulators, software, models).

In a periodic defects meeting (or analysis meeting), solutions to defects are discussed between representatives of the test team and of other parties involved (e.g. supplier and/or commissioner).

Agreements for support are made between the test team and the supplier of the test object(s). These agreements involve: solving of defects, solving of test-blocking defects, lines of communication, escalation procedure.  

There is a periodic meeting in which proposals for improvement of the test methodology used and the test processes are discussed.

It is agreed beforehand, preferably in the test plan, what the reporting covers:
•  Content of reports
•  Interval of report generation (periodically, on request and ad hoc)
•  Addressees of reports
•  Formal/informal
Besides the commissioner of the test, other stakeholders, like the developer of the system, must be reported to.

The progress of each test activity is documented and reported periodically. Aspects to be reported are: lead time, hours spent, which tests have been specified, what has been tested, what part of the object performed correctly and incorrectly and what must still be tested. 

A quality judgement on the test object is made. The judgement is based on the acceptance criteria, entry or exit criteria, if present, and related to the test strategy.

Page 22: Checkpoints v1.01 TPI-Automotive

The reporting contains risks (for the commissioner) and recommendations.

Advice is given not only in the area of testing but also on other parts of the project.

16. Defect management

Internal defect management

Extensive defect management with flexible reporting facilities

There is someone responsible for ensuring that defect administration is carried out properly and consistently.

The quality judgment and the detected trends are substantiated with metrics (from the defect administration and the progress monitoring) e.g. found defects against executed test cases per timeframe or executed test cases against planned test cases.
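As a hedged sketch (the field names and numbers below are invented for illustration, not part of TPI-Automotive), such metrics can be computed directly from the defect administration and the progress monitoring:

```python
# Hypothetical sketch: two simple metrics that substantiate a quality
# judgement, per timeframe (e.g. per week).

def defect_find_rate(defects_found: int, test_cases_executed: int) -> float:
    """Found defects per executed test case in a given timeframe."""
    return defects_found / test_cases_executed if test_cases_executed else 0.0

def execution_progress(executed: int, planned: int) -> float:
    """Share of planned test cases actually executed."""
    return executed / planned if planned else 0.0

# One timeframe of raw numbers from the administration (illustrative):
week = {"found": 12, "executed": 80, "planned": 100}
rate = defect_find_rate(week["found"], week["executed"])          # 0.15
progress = execution_progress(week["executed"], week["planned"])  # 0.8
```

Tracking these two numbers over successive timeframes yields exactly the trends the checkpoint asks to be reported.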

Recommendations have a Software Process Improvement character.

The different stages of the life-cycle of the defects are administrated (up to and including re-test). Possible statuses are:
• New
• Assigned
• In progress
• Postponed
• Rejected
• Ready for retest
• Retest OK
• Closed

The following items of the defect are recorded:
• unique identification
• person who raised the defect
• date
• severity category
• test object plus version
• problem description
• status

Defect data needed for later trend analysis are recorded in detail:
- test case
- test level
- system part
- sub system
- priority (test blocking Y/N)
- test object plus version
- cause (probable + definitive)
- all status transitions of the defect including dates
- a description of the problem solution
- (version of) test object in which the defect is solved
- problem solver
- test configuration
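The recorded attributes and statuses can be pictured as a simple record type. This is a minimal sketch: the status values follow the checklist, but the class layout and field names are illustrative assumptions, not a prescribed TPI-Automotive data model.

```python
# Minimal sketch of a defect record; status values follow the checklist,
# the class layout itself is an illustrative assumption.
from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    NEW = "New"
    ASSIGNED = "Assigned"
    IN_PROGRESS = "In progress"
    POSTPONED = "Postponed"
    REJECTED = "Rejected"
    READY_FOR_RETEST = "Ready for retest"
    RETEST_OK = "Retest OK"
    CLOSED = "Closed"

@dataclass
class Defect:
    defect_id: str            # unique identification
    raised_by: str            # person who raised the defect
    date: str
    severity: str             # severity category
    test_object_version: str  # test object plus version
    description: str
    status: Status = Status.NEW
    # all status transitions, with dates, are kept for later trend analysis
    history: list = field(default_factory=list)

    def transition(self, new_status: Status, when: str) -> None:
        self.history.append((self.status, new_status, when))
        self.status = new_status

d = Defect("D-001", "tester A", "2024-01-10", "major", "ECU-SW 1.2", "CAN timeout")
d.transition(Status.ASSIGNED, "2024-01-11")
d.transition(Status.READY_FOR_RETEST, "2024-01-15")
# d.history now holds both transitions, with dates, for trend analysis
```

Keeping every transition (rather than only the current status) is what makes the later trend analysis possible.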

The administration supports extensive reporting possibilities, which means that reports can be selected and sorted in different ways.


Project defect management

17. Testware management

Internal testware management

External management of test basis and test object

Management contains the relations between the various parts (test basis and test object).

Each requirement and/or design item is related to one or more test cases.

The new configuration items (delivered by internal and external parties) are only delivered and accepted in a standard format.

External management of test basis and test object

The transferred test products are actually reused.

18. Test process management

Synchronization takes place between defect management system of supplier and defect management of commissioner (e.g. workflow, possible states of defects, attributes, time for synchronization). This means that open defects mentioned by the supplier at the moment of a (partial) delivery should be entered in the defect management system of the commissioner. And at least defects found by the commissioner must be submitted to the defect management system of the supplier.

The defect management system is provided by the commissioner (usually the OEM) and is accessible by the parties involved in the project (also the ones outside the commissioner organization).

Only one defect management system is in use throughout the whole project (e.g. the development of a certain ECU), even if more than one independent organization is involved in the test process. The defects originating from the various disciplines, teams (also supplier and commissioner) or departments are submitted to the defect management system.

Every entity (commissioner, supplier, sub-supplier) has its own view of the defect management. This view gives access to only the information necessary for its job. Every entity manages its own information and decides what type of information is made accessible to whom by using authorization profiles.

The testware (test cases, test scripts, (initial) test data, trace files, etc.), test basis, test object, test environment (additional HW and SW components), test configurations, test documentation and test guidelines are managed internally according to a described procedure, containing steps for delivery, registration, archiving and referring.

The relations between the various parts (test basis, test object, testware, test environment, hardware, etc.) are made visible and controlled.

Transfer to the test team takes place according to a standard procedure. The parts comprising a transfer should be known:
·     which parts and versions of the test object (including Tailor-Made, Reuse/Core and Commercial-off-the-shelf components),
·     which (version of the) test basis,
·     solved defects, still open defects, including those from the developer himself,
·     optionally other parts (such as source code, required hard- or software, testware from previous tests) may be included in the transfer.

The test basis and the test object are managed by the project according to a described procedure, with steps for delivery, registering, archiving and reference.

The test team is informed about changes in test basis or test object in a timely fashion. This also applies for changes in Commercial-Off-The-Shelf or Tailor Made components or software.

These relations are traceable through separate versions (e.g. system requirement A, version 1.0, is related to functional design B, version 1.3, is related to programs C and D, version 2.5 and 2.7, and is related to test cases X to Z, version 1.4).
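Following the example in the text, such version-level traceability can be sketched as a simple mapping; the dictionary layout and the `covered` helper are illustrative assumptions, not a prescribed format.

```python
# Illustrative sketch of version-level traceability between test basis,
# design, programs and test cases, per the example in the text.
trace = {
    ("system requirement A", "1.0"): {
        "designs":    [("functional design B", "1.3")],
        "programs":   [("program C", "2.5"), ("program D", "2.7")],
        "test_cases": [("test case X", "1.4"), ("test case Z", "1.4")],
    },
}

def covered(requirement: tuple) -> bool:
    """A requirement counts as covered when at least one test case is linked."""
    return bool(trace.get(requirement, {}).get("test_cases"))
```

With the relations recorded per version, a change to one item immediately shows which designs, programs and test cases are affected.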

Criteria for compliance with system requirements, software requirements and software design are defined and related to the right version of these requirements and design (depending on scope of analysis whether all three must be realized).

The test products (or a selection agreed on beforehand) are completed after the end of the test (i.e. fully and up-to-date) and transferred to the maintenance organization, after which the transfer is formally agreed.


Planning and execution

Planning, executing, monitoring and adjusting
Monitoring of the execution of all planned activities takes place. Each activity is also to be monitored in terms of time and money.

Monitoring and adjustment in organisation

Deviations are documented and reported within the test process.

19. Evaluation

Informal evaluation

Reporting of the evaluation and its results is documented. The handling of the results is monitored.

Evaluation techniques
In evaluating (intermediate) products, techniques are used; in other words, a formal and described working method is applied.

Evaluation strategy
A conscious evaluation of risks takes place.

A choice is made from multiple evaluation techniques, suitable for the desired depth of an evaluation.

20. Low-level testing

White-box techniques

Prior to the actual test activities a test plan is formulated in which all activities to be performed are defined and the relevant stakeholders and their responsibilities are identified. For each activity there is an indication of the period in which it is to be executed, the resources (people or means) required and the products to be delivered.

The commissioner of the test reviews the test plan resulting from the planning phase. Changes in this plan should be offered to this commissioner for review.

In case of deviations, adjustments are made, either by adjusting the planning or by re-performing activities as planned. The adjustment is substantiated. Deviations are documented and communicated to the commissioner.

At an organizational level, monitoring of the application of the organization’s methodology (methods, standards, techniques and procedures) is executed.

In the case of deviations the risks are analyzed and adjustments are made, for instance by adjusting the methodology or by adapting activities or products so that they are in line with the methodology. The adjustment is substantiated.

Checklists in combination with peer-expertise are used for the evaluation. 

Testers and people representing the different stakeholders (e.g. project leader, technical experts, developer and commissioner) are involved in these evaluations.

There is a differentiation in the scope and the depth of the evaluation, depending on the possible risks and, if present, depending on the acceptance criteria: not all types of software are equally evaluated and not every quality attribute is equally evaluated.

For re-evaluations a (simple) strategy determination takes place, in which a conscious choice is made between 'evaluate solutions only' and 'complete re-evaluation'.

The strategy is defined and afterwards also executed. It is monitored that the tests are executed according to the strategy and, if necessary, the execution will be adjusted.

Life cycle: planning, design  and execution

For the low-level test (at least) the following phases are recognized: planning, design and execution. These are performed in sequence, for each subsystem, if applicable.

Each activity is supplied with subactivities and/or aspects. These are meant as additional information and are not obligatory. Activities to be executed per phase are mentioned in Sheet2.


The testware is reusable (within the test team) by a uniform working method.

Low-level test strategy

One or multiple formal or informal test design techniques are used, suitable for the desired test depth.

21. Integration test

Integration identified as a separate and planned process

The sequence of delivery from supplier(s) to testing has been previously defined and documented.

In the case of deviations from the plan (e.g. late delivery of parts) adjustments are made. The adjustment is substantiated.

Strategy for integration

Besides informal test design techniques the low-level tests also use formal test design techniques, providing an unambiguous way from the test basis to test cases.

For the low-level tests it is possible to make a substantiated statement about the level of coverage of the test set (compared to the test basis).

A motivated consideration of the product risks takes place whereby the commissioner is involved, for which knowledge of the system, its use and its operational management is required.

There is a differentiation with respect to area of consideration and depth of the tests, depending on the risks taken and, if present, on the acceptance criteria: not all kinds of programs are tested equally thoroughly; this is also the case for quality characteristics.

For retests a (simple) strategy determination takes place, in which a substantiated choice is made between variations of "test solutions only" and "complete retest".

The strategy is determined and subsequently executed. It is checked that the execution of the tests takes place according to the strategy. If necessary, adjustments are made.

A person is made responsible for integrating, including integration testing, the separate parts into an assembled system (integrator role).

The integration and testing sequence for both hardware and software (the so-called integration strategy), based on the software architecture and delivery plan of the parts, has been defined and documented. This planning includes:
- Activities
- Dependencies
- Milestones

Transfer to and from the test team takes place according to a standard procedure. The parts comprising a transfer should be known (in the form of a delivery report): which parts and versions of the test object, which version of the test basis, (un)solved defects, configuration.

The sequence of delivery and the entry criteria for the parts to be delivered are coordinated, documented and agreed upon by the parts supplier and integrator.

The number and sequence of integration steps are based on a motivated consideration of the estimated risks of the product or parts of the product, the technical constraints and the impact of the changes.

Entry criteria for the parts and exit criteria for the integration tests have been defined, documented and used. These criteria are preferably defined in terms of achieved test coverage and/or number of unsolved defects.
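Such criteria can be made checkable in a few lines. The thresholds below are invented for illustration only; actual values must come from the project's test plan.

```python
# Hedged sketch: an exit criterion expressed in achieved coverage and
# unsolved defects, as the checkpoint recommends. Thresholds are
# illustrative assumptions, not TPI-Automotive prescriptions.
def exit_criteria_met(achieved_coverage: float, unsolved_defects: int,
                      min_coverage: float = 0.90, max_unsolved: int = 3) -> bool:
    """True if the integration test for a part may be concluded."""
    return achieved_coverage >= min_coverage and unsolved_defects <= max_unsolved
```

Expressing the criteria as explicit numbers makes the decision to accept a part (or conclude a test) objective and repeatable.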

There is a differentiation in the depth of the tests, depending on the risks and, if present, depending on the acceptance and entry or exit criteria: not all parts, variants and versions are tested equally thoroughly and not all quality characteristics are tested (equally thoroughly).

For retests also a (simple) strategy determination takes place, in which a motivated choice between 'test solutions only' and 'full retest' is made.

Deviations from the coordinating strategy are reported, after which a substantiated adjustment to the coordinating strategy is made, based on the risks.


Standardized strategy for integration
The procedures are defined in a generic approach for the organization. Each project works according to this generic approach. Variances are sufficiently argued and documented.

Agreements for support by the part supplier are made. These agreements involve: solving of defects, solving of test-blocking defects, lines of communication (e.g. integration meetings on a regular basis), and escalation procedure.


Life cycle model - level A

Activity

formulate assignment

determine the test basis

set up organization

define infrastructure and tools

define general planning

produce test plan

Sub activities / aspects

·     For the Planning phase:

- commissioner and supplier
- scope
- aim
- preconditions
- starting points

- determine relevant documentation, like system requirements, software requirements, software design and other documents that are used to derive test cases
- identify documentation

define scope (area(s) of consideration): what will and what will not be tested

determine test strategy for this iteration 

- strategy determination
- estimating

- determine required functions
- allocate tasks, authorizations and responsibilities
- describe organization
- allocate personnel
- determine training
- determine communication structures
- determine reporting lines

identify test deliverables
- determine test products
- set up norms and standards

- define test environment
- define test tools
- define office environment
- define infrastructure planning

define test management
- define test process management (progress, quality, reporting)
- define infrastructure management
- define test product management
- define defects procedure

determine planning (aspects like activities, dependencies, milestones, start & end dates and needed resources)

- determine risks, threats and measures in relation to the test process
- determine critical test environments (e.g. EMC hall, prototype car)
- determine test plan
- fixate test plan (commissioner approval)

synchronize the planning of the test process with the complete product process


Activity Sub activities / aspects

design test cases and test scripts

realize test infrastructure

Activity Sub activities / aspects

intake test object and infrastructure

execute (re)tests

Life cycle model - Level B

Activity Sub activities / aspects

Activity Sub activities / aspects

evaluate test object

evaluate test process

formulate final report

Low Level testing - Level A

·     For the Design phase:

- test cases
- define starting test databases
- test scripts
- define parameter settings

specify entry check of test object and infrastructure

- checklist test object and infrastructure (completeness check)
- test script pre-test

- test environment
- test tools

·              For the Execution phase:

- intake infrastructure and test object (completeness check)
- perform pre-test

create initial data (starting conditions for execution of test cases)

- enter initial data
- set parameter values

- execute test scripts
- execute static tests (incl. evaluation of test results and analysing differences)

·             For the Preparation phase:

inspection of test basis (check if the test basis is suitable for the selected test design techniques)

- determine relevant documentation
- define checklists for study
- define documentation (study)
- report about testability

·             For the Completion phase:

Archive the testware (complete and bring the testware up to date, in a way that the testware is re-usable for other test processes)

- select testware to be archived
- collect and update testware
- transfer testware

- determine open defects and identified trends
- determine risks at release
- formulate advice

- evaluation of test strategy
- planning versus realization


Activity Sub-activities/aspects

formulate assignment

determine test basis

setup organization

describe test products

define infrastructure and tools

define test management

determine planning
- formulate global planning

produce test plan

Activity Sub-activities/aspects

design test cases and test scripts

Activity Sub-activities/aspects

execute (re-)tests

·     For the Planning phase: 

- commissioner and supplier
- area of consideration
- aim
- preconditions
- starting points

- determine relevant documentation
- identify documentation

- determine required functions
- allocate tasks, authorizations and responsibilities
- describe organization
- allocate personnel

- determine test products
- define test tools

- define test environment
- define test tools

- define test process management (progress, quality, reporting)
- define test product management
- define defect procedure

- determine risks, threats and measures
- determine test plan
- fixate test plan (commissioner approval)

·     For the Design phase: 

- test cases
- define initial data
- test scripts

·     For the Execution phase:  

- execute test scripts
- execute static tests (e.g. conformance checks on coding standards), incl. evaluation of test results and analysis of differences
- produce code metrics


Life cycle model - level A

Product

defined in test plan

defined in test plan

defined in test plan

defined in test plan

defined in test plan

defined in test plan

defined in test plan

defined in test plan

test plan

defined in test plan; ideally the planning is integrated in the overall project planning 


Product

- test cases
- initial data sets
- test scripts

operational test environment and tools

Product

testable test object

initial data sets

Life cycle model - Level B

Product

Product

testware

release advice, defined in final report

defined in final report

final report

Low Level testing - Level A

- test cases
- definition of starting test database / table for parameter settings
- test scripts

- test defects
- test reports

- test basis defects
- testability report


Product

determined in test plan

determined in test plan

determined in test plan

determined in test plan

determined in test plan

determined in test plan

determined in test plan

test plan

Product

Product

- test cases
- initial data sets
- test scripts

- test defects
- test reports