
Page 1: Metrics Selection Guidelines

Metrics Selection Guidelines

Metrics

A metric is a mathematical number that shows the relationship between two variables. It is a quantitative measure of the degree to which a system, component, or process possesses a given attribute.

1. Project Metrics – Schedule Variance (%)

Actual calendar time is compared with the planned calendar time and the schedule variance is calculated.

To note:
a. Planned calendar days is as per the plan; actual calendar days should be the actuals spent.
b. This is done for both completed and ongoing states. When ongoing, an approximation is made using the estimated days to completion and the % completed.

2. Project Metrics – Effort Variance (%)

Actual effort is compared against the planned effort and the effort variance is calculated.

To note:
a. Planned effort is the effort estimated upfront.
b. Actual effort is the effort as per the actuals.
c. This is done for both completed and ongoing states. When ongoing, an approximation is made using the estimated days to completion and the % completed.

Page 2: Metrics Selection Guidelines

3. Product Metrics – Size Variance

Actual size is compared with estimated size and the size variance is calculated. Project size is the total size of the project measured in terms of LOC, Function Points, or Feature Points.

To note:
a. The business model governs when the size measure is taken – during proposal or after requirements.
b. Use the same measure for planned and actuals.

4. Project Metrics – Requirement Stability Index

Requirement changes occur as the project progresses. Requirement changes are logged and traced to design, code, and test cases. For every change in requirement, impact analysis is done for schedule and effort. The requirement stability index is calculated to indicate the stability of the requirements.

To note:
a. Requirement changes always have to be updated.
b. All defects raised in reviews or testing with type 'RC' have to be incorporated separately in the Change Track sheet and closed to completion.
c. Changes need to be grouped and counted. Similar changes need to be grouped, and this decision is made by the PM/PL and the person responsible for requirements change management in the project.
d. The higher the stability index value, the higher the stability of the requirements. As it nears zero or goes negative, it shows the requirements are unstable.

Page 3: Metrics Selection Guidelines

5. Product Metrics – Productivity in Project (size measure: FP or Feature/Day)

Productivity of the project is the rate at which the project is produced. The total size of the project is factored against the total effort and this is calculated. This is generally done at the end of the project. Midway through the project, an estimate can be made of the size completed and factored against the effort put in.

To note: Productivity is calculated as per the size measure. Comparison between projects should be made in similar measures.

6. Product Metrics – Productivity in Coding

Coding productivity is measured by the size of the code (LOC) and the effort spent on coding.

To note:
a. Group it as per development environment, if applicable.

7. Product Metrics – Productivity in Test Case Preparation

Test case preparation productivity is measured by the size of the test cases (number of test cases) and the effort spent on preparing the test cases.

To note:
a. If there are test cases, test data, or test programs, have different measures for them.

8. Product Metrics – Productivity in Test Case Execution

Test case execution productivity is measured by the number of test cases executed and the effort spent on executing the test cases.

To note:
a. If there are test cases, test data, or test programs, have different measures for them.

Page 4: Metrics Selection Guidelines

9. Product Metrics – Productivity in Defect Detection

Defects occur in projects as a result of reviews and testing. Productivity in defect detection is measured by the total number of defects detected as a result of reviews and testing and the effort spent on reviews and testing.

10. Project Metrics – Productivity in Defect Fixation

Defects occur in projects as a result of reviews and testing. Productivity in defect fixation is measured by the total number of defects fixed as a result of reviews and testing and the effort spent on fixing the defects found during reviews and testing.

11. Project Metrics – Schedule Variance across Phases

Actual calendar time is compared with the planned calendar time and the schedule variance is calculated for each phase.

To note:
a. Planned calendar days is as per the plan; actual calendar days should be the actuals spent for the phase.
b. This is done for both completed and ongoing states. When ongoing, an approximation is made using the estimated days to completion and the % completed.

12. Project Metrics – Effort Variance across Phases

Actual effort for a phase is compared against the planned effort for the phase and the effort variance is calculated for that phase.

To note:
a. Planned effort is the effort estimated upfront for the phase.
b. Actual effort is the effort as per the actuals for the phase.
c. This is done for both completed and ongoing states. When ongoing, an approximation is made using the estimated days to completion and the % completed.

Page 5: Metrics Selection Guidelines

13. Project Metrics – Effort Distribution across Phases

The project is planned as per the Phase – Task – Activity hierarchy. Planned effort and actual effort are recorded for each activity, task, and phase. Phases for a project are decided during commencement of the project. Effort distribution across phases is calculated at the end of the project and acts as an input to future projects, further planning, and resource allocation.

To note: Give the distribution as per the project phases. There could be a few projects where some phases are not applicable; such phases should not be shown with a distribution of 0.

14. Project Metrics – Effort Distribution across Activity Types

The project is planned as per the Phase – Task – Activity hierarchy. Planned effort and actual effort are recorded for each activity, task, and activity type. Activities are grouped by activity type as per the organization standard.

To note: This metric can be very useful when you have to measure the effort spent on a few heads like Measurement, Miscellaneous, Quality Assurance, etc., which do not exactly fall under a phase.

15. Project Metrics – Cost of Quality

Cost of Quality is the total effort spent on testing, reviews, QA, measurement, training, and configuration management.

16. Project Metrics – Cost of Poor Quality

Cost of Poor Quality is the total effort spent on rework done for reviews, testing, and verification.

Page 6: Metrics Selection Guidelines

17. Product Metrics – Defect Density

Defects occur in projects as a result of reviews and testing. Defect density for the project is defined as the total number of defects per project size (KLOC/FP).

To note: Can be at project level or at work product level. It is the same formula with different applicability.

18. Residual Defect Density

RDD is defined as the number of defects caught in acceptance per project size (KLOC/FP).

19. Product Metrics – Defect Age

Defect age is the difference between the phase detected and the phase introduced. Defect age is calculated for each defect, and the average defect age is calculated over all the defects.

To note: Phase detected has to follow the phase sequence of the project. Phase introduced also follows the phase sequence and should clearly identify which phase would have introduced the defect.

20. Project Metrics – Review Efficiency (%)

Defects are detected via two detection mechanisms, reviews and testing. Review efficiency is the number of defects detected in reviews compared to the total defects. This metric, along with the phase in which defects occurred, can indicate the strength of the review process.

21. Project Metrics – Testing Efficiency (%)

Defects are detected internally in reviews and testing. Testing efficiency indicates how many defects are detected in testing before being detected at the customer site.

Page 7: Metrics Selection Guidelines

22. Process Metrics – Overall Defect Removal Efficiency (%)

Defects are detected internally in reviews and testing. Defect removal efficiency indicates how many defects are detected before being detected at the customer site. This is inclusive of reviews and testing.

23. Process Metrics – Phasewise Detection Efficiency

Defects detected, occurred, and escaped in each phase are counted and the phasewise detection efficiency is arrived at.

24. Defect Distribution across Phases

Defects are analyzed based on the various phases. The number of defects in each phase is proportioned against the total number of defects and the ratio is given. This metric is applied to the total defects found in the project in reviews and testing.

25. Process Metrics – Defect Distribution across Origin

Defects are analyzed based on origin. The number of defects for each item in the classification is proportioned against the total number of defects and the ratio is given. This metric is applied to the total defects found in the project in reviews and testing.

26. Process Metrics – Defect Distribution across Types

Defects are analyzed based on the various types. The number of defects for each item in the classification is proportioned against the total number of defects and the ratio is given. This metric is applied to the total defects found in the project in reviews and testing.

Page 8: Metrics Selection Guidelines

27. Process Metrics – Defect Distribution across Causes

Defects are analyzed based on the various causes. The number of defects for each item in the classification is proportioned against the total number of defects and the ratio is given. This metric is applied to the total defects found in the project in reviews and testing.

28. Process Metrics – Audit Execution Effectiveness

Audit Execution Effectiveness indicates the number of audits conducted versus planned.

29. Process Metrics – Process Compliance Index

The Process Compliance Index indicates the adherence of the projects to the established project process.

30. Process Metrics – SEPG Execution Effectiveness

SEPG Execution Effectiveness indicates the number of QMS releases planned versus the actual number of releases.

31. Process Metrics – % of Improvement Suggestions Addressed

Indicates the number of improvement suggestions addressed.

32. Process Metrics – % of Deviations for a Specific Process

The number of deviations requested by projects is collected to identify processes that frequently require deviations.

33. Process Metrics – Average Time for Deployment per QMS Release

Mean time taken by projects to incorporate the changes given in the QMS release.

Page 9: Metrics Selection Guidelines

Metrics Selection Guidelines

Formula Calculation Tips

A metric is a mathematical number that shows the relationship between two variables. It is a quantitative measure of the degree to which a system, component, or process possesses a given attribute.

Schedule variance = ((Actual Calendar days - Planned calendar days) + Start Variance) / Planned calendar days * 100

1. Original planned calendar days for the project was 200 calendar days.
2. Actual calendar days for the project was 225 calendar days.
3. Start variance is the difference between the planned and actual start.
4. Start variance was 5 calendar days.
5. Schedule variance = 15.00%.

The above measure is for the project as a whole.

Effort variance = (Actual Effort - Planned Effort)/Planned Effort * 100

1. Planned effort for the project was 1000 person days.
2. Actual effort for the project was 1125 person days.
3. Effort variance = 12.50%.
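Both variances follow the same pattern. A minimal Python sketch reproducing the two worked examples (the function names are illustrative, not part of the guideline):

def schedule_variance(actual_days, planned_days, start_variance=0):
    # Schedule variance (%) = ((actual - planned) + start variance) / planned * 100
    return ((actual_days - planned_days) + start_variance) / planned_days * 100

def effort_variance(actual_effort, planned_effort):
    # Effort variance (%) = (actual - planned) / planned * 100
    return (actual_effort - planned_effort) / planned_effort * 100

print(schedule_variance(225, 200, 5))  # 15.0 -> 15.00%
print(effort_variance(1125, 1000))     # 12.5 -> 12.50%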

Page 10: Metrics Selection Guidelines

Size variance = (Actual size - Estimated Size) / Estimated Size * 100

Estimated Size is 800 FP
Actual Size (FP/KLOC) is 1000 FP
Size variance = 25.00%

RSI = (1 - (No. of changed + No. of deleted + No. of added) / Total no. of initial requirements) * 100

No. of Changed is 6
No. of Deleted is 2
No. of Added is 4
Total No. of Initial Requirements is 120
RSI = 90.00%
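A short Python sketch of the two formulas above, using the sample numbers (the names are illustrative):

def size_variance(actual_size, estimated_size):
    # Size variance (%) = (actual - estimated) / estimated * 100
    return (actual_size - estimated_size) / estimated_size * 100

def requirement_stability_index(changed, deleted, added, initial_requirements):
    # RSI (%) = (1 - (changed + deleted + added) / initial requirements) * 100
    return (1 - (changed + deleted + added) / initial_requirements) * 100

print(size_variance(1000, 800))                   # 25.0 -> 25.00%
print(requirement_stability_index(6, 2, 4, 120))  # 90.0 -> 90.00%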

Page 11: Metrics Selection Guidelines

Productivity in project = Actual Project Size / Actual Effort spent for the project

Actual Project Size = 1000 FP
Actual Effort Spent = 1125 Person Days
Productivity for the project = 0.89 FP/Day

Productivity in Coding = Actual LOC or FP / Actual Effort spent for Coding

Actual Size = 100 KLOC
Effort Spent for Coding = 400 Person Days
Productivity in Coding = 250 LOC/Day

Productivity in Test Case Preparation = Actual No of test cases / Actual Effort spent on Test Case preparation

Actual No. of Test Cases Prepared = 3000
Actual Effort Spent = 50 Person Days
Productivity in Test Case Preparation = 60 TC/Day

Productivity in Test Case Execution = Actual No of Test cases (Planned + Adhoc) / Actual Effort spent on Testing

Actual No. of Test Cases = 3000
Adhoc Test Cases = 150
Actual Effort Spent on Testing = 30 PD
Productivity in Test Case Execution = 105 TC/Day
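All four productivity figures are simple size-per-effort ratios. A Python sketch with the sample values (note that 100 KLOC must first be converted to LOC):

def productivity(size, effort_person_days):
    # Productivity = output size / effort spent, per person day
    return size / effort_person_days

print(productivity(1000, 1125))      # ~0.89 FP/Day (project)
print(productivity(100_000, 400))    # 250.0 LOC/Day (coding; 100 KLOC = 100,000 LOC)
print(productivity(3000, 50))        # 60.0 TC/Day (test case preparation)
print(productivity(3000 + 150, 30))  # 105.0 TC/Day (execution: planned + adhoc)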

Page 12: Metrics Selection Guidelines

Productivity in Defect Detection = Actual no of Defects (Review + Testing) / Actual effort spent on (Review + Testing)

Actual No. of Review Defects = 650
Actual No. of Testing Defects = 450
Actual Effort Spent for Review = 25 PD
Actual Effort Spent for Testing = 30 PD
Productivity in Defect Detection = 20 Defects/PD

Productivity in Defect Fixation = Actual No of defects fixed / Actual Effort spent on Defect fixation

Actual No. of Defects Fixed = 1100
Actual Effort Spent on Defect Fixation = 50 PD
Productivity in Defect Fixation = 22 Defects/PD
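The same ratio applies to detecting and fixing defects; a quick check of the sample numbers in Python:

review_defects, testing_defects = 650, 450
review_effort, testing_effort = 25, 30  # person days
print((review_defects + testing_defects) / (review_effort + testing_effort))  # 20.0 defects detected/PD

defects_fixed, fixing_effort = 1100, 50
print(defects_fixed / fixing_effort)  # 22.0 defects fixed/PD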

Schedule variance for a Phase = (Actual Calendar days for a phase – Planned Calendar days for a phase + Start Variance for a phase) / (Planned Calendar days for a phase) * 100

Schedule variance for the various phases:
Presales is 10.00%
Requirements is 20.00%
Design is 8.00%
Development is 21.43%
Testing is 5.00%
Acceptance is 0.00%
Customer Implementation is 0.00%

Effort variance for a phase = (Actual Effort for a phase – Planned Effort for a phase) / (Planned Effort for a phase) * 100

Effort variance for the various phases:
Presales is 7.14%
Requirements is 17.65%
Design is 15.38%
Development is 7.14%
Testing is 25.00%
Acceptance is 11.11%
Customer Implementation is 11.11%
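Per-phase variance is the same formula applied phase by phase. A sketch with hypothetical planned/actual calendar days, chosen only so that three of the sample percentages come out:

def schedule_variance(actual, planned, start_variance=0):
    return ((actual - planned) + start_variance) / planned * 100

# Hypothetical (planned, actual) calendar days per phase.
phases = {"Requirements": (10, 12), "Design": (25, 27), "Development": (28, 34)}
for name, (planned, actual) in phases.items():
    print(name, round(schedule_variance(actual, planned), 2), "%")
# Requirements 20.0 %, Design 8.0 %, Development 21.43 %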

Page 13: Metrics Selection Guidelines

Effort distribution across phases = Effort in each phase / Total effort * 100

Planning is 5%
Requirements is 10%
Design is 25%
Development is 35%
Testing is 15%
Acceptance is 10%

Effort distribution across activity types = Effort in each activity group / Total effort * 100

SDLC Regular is 10%
Review is 10%
Test is 15%
Rework-Review is 10%
Rework-Test is 10%
Verification-Review is 5%
Verification-Test is 5%
Quality Assurance is 5%
Configuration Mgt is 5%
Measurement is 10%
Marketing is 5%
Training is 10%
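A distribution is each bucket's share of the total effort. A Python sketch with hypothetical per-phase person days that reproduce the sample phase distribution:

# Hypothetical actual effort (person days) per phase; total = 1000 PD.
effort = {"Planning": 50, "Requirements": 100, "Design": 250,
          "Development": 350, "Testing": 150, "Acceptance": 100}
total = sum(effort.values())
for phase, pd in effort.items():
    print(phase, pd / total * 100, "%")  # 5%, 10%, 25%, 35%, 15%, 10%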

Cost of Quality = (Review + Testing + Verification-Review + Verification Testing + QA + Config Mgmt + Measurement + Training + Re-Work Review + Re-Work Testing) / Total Effort * 100

Testing Effort = 80 PD
Review Effort = 100 PD
QA Effort = 30 PD
Measurement Effort = 20 PD
Training Effort = 20 PD
Config Mgt Effort = 20 PD
Cost of Quality = 24.00%

Cost of Poor Quality = Rework Effort / Total Effort * 100

Actual Effort Spent for Rework on Testing = 40 PD
Rework on Reviews = 40 PD
Rework on Verification = 50 PD
Cost of Poor Quality = 11.5%
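Both examples leave the total project effort implicit; a total of 1125 PD (the figure from the effort variance example) is consistent with the quoted 24.00%. A Python sketch under that assumption:

total_effort = 1125  # assumed, carried over from the effort variance example
quality_effort = 80 + 100 + 30 + 20 + 20 + 20  # testing, review, QA, measurement, training, CM
print(quality_effort / total_effort * 100)  # 24.0 -> Cost of Quality = 24.00%

rework_effort = 40 + 40 + 50  # rework on testing, reviews, verification
print(rework_effort / total_effort * 100)  # 11.55... -> quoted as 11.5%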

Page 14: Metrics Selection Guidelines

Defect density for the project = Total number of defects / Project size in KLOC or FP

Defect density for each work product = Total number of defects / Work product size measure

Total No. of Defects = 1100
Project Size = 200 KLOC or 1000 FP
Defect Density = 5.5 Defects/KLOC or 1.1 Defects/FP

Residual Defect Density = (No. of Defects caught in Acceptance/Project Size)

No. of Defects in Acceptance = 50
Project Size = 200 KLOC or 1000 FP
RDD = 0.25 Defects/KLOC or 0.05 Defects/FP
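Both densities are defect counts per size; in Python, with the sample numbers:

total_defects, acceptance_defects = 1100, 50
size_kloc, size_fp = 200, 1000
print(total_defects / size_kloc, total_defects / size_fp)            # 5.5 /KLOC, 1.1 /FP
print(acceptance_defects / size_kloc, acceptance_defects / size_fp)  # 0.25 /KLOC, 0.05 /FP (RDD)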

Defect age = Σ (Phase Detected – Phase Introduced) / Total no. of defects

Defect Introduced: During Requirements – Phase 2
Defect Detected: During Testing – Phase 5
Defect age is calculated using this phase difference.
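A small sketch of the average defect age over a hypothetical set of (phase introduced, phase detected) pairs:

# Hypothetical defects as (phase_introduced, phase_detected) pairs,
# phases numbered in project sequence (e.g. Requirements = 2, Testing = 5).
defects = [(2, 5), (3, 5), (4, 5)]
ages = [detected - introduced for introduced, detected in defects]
print(sum(ages) / len(ages))  # average defect age = 2.0 phases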

Review Efficiency = (Number of defects caught in reviews) / (Total number of defects caught ) * 100

Total No. of Defects in Reviews = 650
Total No. of Defects in Reviews + Testing = 1100
Review Efficiency = 59.09%

Testing Efficiency = (1 - (Defects found in Acceptance / Total no. of Testing Defects)) * 100

Total No. of Defects in Testing = 450
No. of Defects in Acceptance Testing = 50
Testing Efficiency = 88.88%
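In Python, with the sample counts (the guideline truncates 88.89 to 88.88):

review_defects, testing_defects, acceptance_defects = 650, 450, 50
print(review_defects / (review_defects + testing_defects) * 100)  # 59.09...% review efficiency
print((1 - acceptance_defects / testing_defects) * 100)           # 88.88...% testing efficiency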

Page 15: Metrics Selection Guidelines

Defect Removal Efficiency = (1- (Total defects Caught by customer / Total No of Defects)) * 100

No. of Defects caught by the customer during Review, Acceptance, and Implementation = 80
Total No. of Defects in Reviews and Testing = 1100
Defect Removal Efficiency = 92.72%
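In Python, with the sample counts:

customer_defects = 80    # caught by the customer
internal_defects = 1100  # caught internally in reviews and testing
print((1 - customer_defects / internal_defects) * 100)  # 92.727...% -> quoted as 92.72%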

Calculated from the defects detected in each phase, compared with the incoming escapes and the overall escapes.

Phasewise detection efficiency:
Requirements – 50%
Design – 45%

% of defects across various phases

Defect distribution across the various phases:
Planning is 5%
Requirements is 10%
Design is 25%
Development is 35%
Testing is 15%
Acceptance is 10%

% of defects across various origins

Defect distribution across the various origins:
Requirements < 10%
Design < 15%
Coding < 40%
Integration Testing < 20%
System Testing < 15%

% of defects across various types

Defect distribution across the various types:
Run Time Error < 5%
Missing Implementation < 10%
Validation < 10%
Data Invariant < 5%
Logical Error < 5%
Functional Specification < 5%
Build Error < 10%
GUI Error < 20%
Performance Errors < 20%
Design Errors < 10%

Page 16: Metrics Selection Guidelines

% of defects across various causes

Defect distribution across the various causes:
Lack of Standards 20%
Process Deficiency 10%
Lack of Training 10%
Schedule Pressure 15%
New Technology 10%
Clerical Error 5%
Others 10%
Lack of Domain Knowledge 20%

Audit Execution Effectiveness = (No. of audits conducted * 100) / No. of audits planned

Process Compliance Index = (No. of checkpoints complied with during the audit) / (Total applicable checkpoints)

Note:
1) No. of checkpoints complied with during the audit = Total no. of checkpoints - No. of checkpoints not applicable - No. of checkpoints not complied with during the audit
2) Total applicable checkpoints = Total no. of checkpoints - No. of checkpoints not applicable
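A short Python sketch of the compliance index using the two notes above (the checkpoint counts are hypothetical):

total_checkpoints = 60  # hypothetical audit checklist size
not_applicable = 10     # hypothetical
not_complied = 5        # hypothetical
applicable = total_checkpoints - not_applicable  # note 2
complied = applicable - not_complied             # equivalent to note 1
print(complied / applicable)  # 0.9 process compliance index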

SEPG Execution Effectiveness = (Actual no. of QMS releases * 100) / No. of QMS releases planned

% of improvement suggestions addressed = (No. of improvements addressed * 100) / No. of improvements suggested

% of deviations for a specific process = (No. of deviations requested for the process * 100) / Total no. of projects

Average time for deployment per QMS release = Sum of time taken by projects / No. of projects

Page 17: Metrics Selection Guidelines

Date: 3-Mar-09
Version: 1.0
Description of the Change: Initial Draft
Author: Raja Sekher