
Page 1: Software Engineering Practice - Software Metrics and Estimation

Software metrics and estimation

McGill ECSE 428 Software Engineering Practice

Radu Negulescu

Winter 2004

Page 2: Software Engineering Practice - Software Metrics and Estimation


About this module

Measuring software is very subjective and approximate, but necessary to answer key questions in running a software project:

• Planning: How much time/money needed?

• Monitoring: What is the current status?

• Control: How to decide closure?

Page 3: Software Engineering Practice - Software Metrics and Estimation


Metrics

What to measure/estimate?

Product metrics

• Size: LOC, modules, etc.

• Scope/specification: function points

• Quality: defects, defects/LOC, P1-defects, etc.

• Lifecycle statistics: requirements, fixed defects, open issues, etc.

• ...

Project metrics

• Time

• Effort: person-months

• Cost

• Test cases

• Staff size

• ...

Page 4: Software Engineering Practice - Software Metrics and Estimation


Basis for estimation

What data can be used as basis for estimation?

• Measures of size/scope

• Baseline data (from previous projects)

• Developer commitments

• Expert judgment

• “Industry standard” parameters

Page 5: Software Engineering Practice - Software Metrics and Estimation


Uncertainty of estimation

Cone of uncertainty

• [McConnell Fig. 8-2]

• [McConnell Table 8-1]

Sources of uncertainty

• Product related: requirements change; type of application (system, shrinkwrap, client-server, real-time, ...)

• Staff related: sick days, vacation time; turnover; individual abilities: analysts and developers (10:1 differences), debugging (20:1 differences), team productivity (5:1 differences)

• Process related: tool support (or lack thereof); process used

• …

Page 6: Software Engineering Practice - Software Metrics and Estimation


Estimate-convergence graph

[Figure: estimate-convergence graph (cf. McConnell Fig. 8-2). Estimate ranges for project cost (effort and size) and for project schedule narrow at successive milestones: initial product definition, approved product definition, requirements specification, product design specification, detailed design specification, product complete. The plotted multipliers converge from as wide as 0.25×…1.6× early on to 1.0× at product complete.]

Page 7: Software Engineering Practice - Software Metrics and Estimation


LOC metrics

LOC = lines of code

A measure of the size of a program

• Logical LOC vs. physical LOC: logical LOC excludes comments and blank lines, and a statement split across physical lines counts once

• Rough approximation: #statements, e.g. semicolons (see the counting sketch at the end of this slide)

Advantages

• Easy to measure

• Easy to automate

• Objective

Disadvantages

• Easy to falsify

• Encourages counter-productive coding practices

• Implementation-biased
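These properties make LOC counting easy to script; a minimal sketch, assuming C-style // comments and semicolon-terminated statements (not a tool prescribed in the course):

```python
def logical_loc(source: str) -> int:
    """Rough logical LOC: count semicolon-terminated statements,
    skipping blank lines and full-line // comments."""
    count = 0
    for line in source.splitlines():
        stripped = line.strip()
        if not stripped or stripped.startswith("//"):
            continue  # blank lines and comment lines do not count
        count += stripped.count(";")  # a statement split across lines counts once
    return count

# Example: four physical lines, but only two logical LOC
snippet = "int x = 0;\n// init\nx = x\n    + 1;"
print(logical_loc(snippet))  # -> 2
```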

Page 8: Software Engineering Practice - Software Metrics and Estimation


FP metrics

A measure of the scope of the program

• External inputs (EI): screens, forms, dialogues, controls, or messages through which an end user or another program adds, deletes, or changes data

• External outputs (EO): screens, reports, graphs, or messages generated for use by end users or other programs

• External inquiries (EQ): direct accesses to data in a database

• Internal logical files (ILF): major groups of end-user data; could be a “file” or “database table”

• External interface files (EIF): files controlled by other applications with which the program interacts

Page 9: Software Engineering Practice - Software Metrics and Estimation


Examples

[Source: David Longstreet]

[Screenshot examples of external inputs (EI) and external outputs (EO)]

Page 10: Software Engineering Practice - Software Metrics and Estimation


Examples

[Screenshot example of an external inquiry (EQ)]

Page 11: Software Engineering Practice - Software Metrics and Estimation


Examples

[Screenshot examples of an internal logical file (ILF) and an external interface file (EIF)]

Page 12: Software Engineering Practice - Software Metrics and Estimation


FP metrics

Complexity weights

      Low   Med   High
EI     3     4     6
EO     4     5     7
EQ     3     4     6
ILF    7    10    15
EIF    5     7    10

Influence multiplier: 0.65..1.35

• 14 factors

Page 13: Software Engineering Practice - Software Metrics and Estimation


Counting function points

Program Characteristic      Low Complexity    Medium Complexity    High Complexity    Total
                            count × weight    count × weight       count × weight
Inputs                       6 × 3             2 × 4                3 × 6              44
Outputs                      7 × 4             7 × 5                0 × 7              63
Inquiries                    0 × 3             2 × 4                4 × 6              32
Logical Internal Files       5 × 7             2 × 10               3 × 15            100
External Interface Files     9 × 5             0 × 7                2 × 10             65

Unadjusted Function Point total: 304

Influence Multiplier: 1.15

Adjusted Function Point Total: 349.6 (304 × 1.15)
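A quick cross-check of the arithmetic in this table (the weights are the complexity weights from the previous slide; the dictionary layout is just an illustrative choice):

```python
# Complexity weights per program characteristic: (low, medium, high)
WEIGHTS = {
    "Inputs": (3, 4, 6),
    "Outputs": (4, 5, 7),
    "Inquiries": (3, 4, 6),
    "Logical Internal Files": (7, 10, 15),
    "External Interface Files": (5, 7, 10),
}

# Item counts from the table above: (low, medium, high)
COUNTS = {
    "Inputs": (6, 2, 3),
    "Outputs": (7, 7, 0),
    "Inquiries": (0, 2, 4),
    "Logical Internal Files": (5, 2, 3),
    "External Interface Files": (9, 0, 2),
}

unadjusted = sum(
    count * weight
    for name in WEIGHTS
    for count, weight in zip(COUNTS[name], WEIGHTS[name])
)
print(unadjusted)                   # -> 304
print(round(unadjusted * 1.15, 1))  # with the 1.15 influence multiplier -> 349.6
```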

Page 14: Software Engineering Practice - Software Metrics and Estimation


Influence factors

1. Data communications: How many communication facilities are there to aid in the transfer or exchange of information with the application or system?

2. Distributed data processing: How are distributed data and processing functions handled?

3. Performance: Did the user require response time or throughput?

4. Heavily used configuration: How heavily used is the current hardware platform where the application will be executed?

5. Transaction rate: How frequently are transactions executed: daily, weekly, monthly, etc.?

6. On-line data entry: What percentage of the information is entered on-line?

7. End-user efficiency: Was the application designed for end-user efficiency?

Page 15: Software Engineering Practice - Software Metrics and Estimation


Influence factors

8. On-line update: How many ILFs are updated by on-line transactions?

9. Complex processing: Does the application have extensive logical or mathematical processing?

10. Reusability: Was the application developed to meet one user's or many users' needs?

11. Installation ease: How difficult is conversion and installation?

12. Operational ease: How effective and/or automated are start-up, back-up, and recovery procedures?

13. Multiple sites: Was the application specifically designed, developed, and supported to be installed at multiple sites for multiple organizations?

14. Facilitate change: Was the application specifically designed, developed, and supported to facilitate change?

Page 16: Software Engineering Practice - Software Metrics and Estimation


Influence score

Score   Influence
0       Not present, or no influence
1       Incidental influence
2       Moderate influence
3       Average influence
4       Significant influence
5       Strong influence throughout

Page 17: Software Engineering Practice - Software Metrics and Estimation


Influence score

INF = 0.65 + SCORE/100
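Here SCORE is the total of the 14 factor scores, so it ranges from 0 to 70 and INF spans the 0.65..1.35 range quoted earlier. For example, a total score of 50 gives INF = 0.65 + 50/100 = 1.15, the influence multiplier used in the counting example above.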

Page 18: Software Engineering Practice - Software Metrics and Estimation


FP metrics

Some advantages

• Based on specification (black-box)

• Technology independent

• Strong relationship to actual effort

• Encourages good development

Some disadvantages

• Needs extensive training

• Subjective

Page 19: Software Engineering Practice - Software Metrics and Estimation


Jones’ “rules of thumb” estimates

Code volumes:

• Approx. 100 LOC/FP, varies widely

[Source: C. Jones “Estimating Software Costs” 1998]

• Schedule: #calendar months = FP^0.4

• Development staffing: #persons = FP/150 (average; staffing follows a Rayleigh curve)

• Development effort: #months × #persons = FP^1.4/150 (sketched in code below)

McConnell

• Equation 8-1 “Software schedule equation”: #months = 3.0 * #man-months^(1/3)

• Table 8-9 “Efficient schedules”
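A minimal sketch of these rules of thumb in code (constants exactly as quoted above; treat the outputs as order-of-magnitude figures, not commitments):

```python
def jones_schedule_months(fp: float) -> float:
    """Jones: calendar schedule in months = FP^0.4."""
    return fp ** 0.4

def jones_staff(fp: float) -> float:
    """Jones: average development staffing = FP/150."""
    return fp / 150

def jones_effort_pm(fp: float) -> float:
    """Jones: development effort in person-months = FP^1.4 / 150."""
    return fp ** 1.4 / 150

def mcconnell_schedule_months(effort_pm: float) -> float:
    """McConnell Equation 8-1: #months = 3.0 * effort^(1/3)."""
    return 3.0 * effort_pm ** (1 / 3)

fp = 350
print(jones_schedule_months(fp))                       # ~10.4 months
print(jones_staff(fp))                                 # ~2.3 persons
print(jones_effort_pm(fp))                             # ~24.3 person-months
print(mcconnell_schedule_months(jones_effort_pm(fp)))  # ~8.7 months
```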

Page 20: Software Engineering Practice - Software Metrics and Estimation


Quality estimation

Typical tradeoff: product (scope) vs. cost (effort) vs. schedule (time)

Adding a dimension: quality

• Early quality will actually reduce costs and time

• Late quality is traded against the other parameters

Page 21: Software Engineering Practice - Software Metrics and Estimation


Quality estimation

Quality measure:

• Fault potential: # of defects introduced during development

• Defect rate: # of defects in the delivered product

[Source: C. Jones “Estimating Software Costs” 1998]

• Test case volumes: #test cases = FP^1.2

• Fault potential: #faults = FP^1.25

• Testing fault removal: ~30% per type of testing; 85…99% in total

• Inspection fault removal: 60..65% per inspection type (compounding sketched below)
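A minimal sketch of how the removal rates compound (conservatively assuming 30% removal per testing stage and 60% per inspection, the low ends of the ranges above):

```python
def test_case_volume(fp: float) -> float:
    """Jones: #test cases = FP^1.2."""
    return fp ** 1.2

def fault_potential(fp: float) -> float:
    """Jones: #faults introduced during development = FP^1.25."""
    return fp ** 1.25

def remaining_faults(fp: float, test_stages: int, inspections: int) -> float:
    """Faults left after each testing stage removes ~30% of those still
    present and each inspection step removes ~60%."""
    faults = fault_potential(fp)
    faults *= 0.70 ** test_stages   # testing: 30% removal per stage
    faults *= 0.40 ** inspections   # inspections: 60% removal per step
    return faults

fp = 350
print(test_case_volume(fp))                                # ~1130 test cases
print(remaining_faults(fp, test_stages=3, inspections=0))  # ~519 faults
```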

Page 22: Software Engineering Practice - Software Metrics and Estimation


Other typical estimates

[Source: C. Jones “Estimating Software Costs” 1998]

• Maintenance staffing: #persons = FP/750

• Post-release repair: rate = 8 faults/PM

• Software plans and docs: Page count = FP^1.15

• Creeping requirements: rate = 2%/month (0% … 5%/month, depending on method)

• Cost per requirement: $500/FP for initial requirements; $1200/FP close to completion

Page 23: Software Engineering Practice - Software Metrics and Estimation


Sample question

Consider a software project of 350 function points, assuming: the ratio of calendar time to development time (development speed) is 2; testing consists of unit, integration, and system testing; and new requirements are added at a rate of 3% per month.

(a) Using the estimation rules of thumb discussed in class, give an estimate for each of the following project parameters, assuming a waterfall process.

(i) The total effort, expressed in person-months.

(ii) The total cost of the project.

(iii) The number of inspection steps required to obtain fewer than 175 defects.

(b) Re-do the estimates in part (a) assuming that the project can be split into two nearly independent parts of 200 function points and 150 function points, respectively.
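One hedged sketch of part (a), inlining the rules of thumb from the previous slides (the labor rate in step (ii) is a hypothetical assumption; the slides do not fix one):

```python
fp = 350

# (i) Development effort: FP^1.4 / 150 (Jones)
effort_pm = fp ** 1.4 / 150                 # ~24.3 person-months

# (ii) Total cost requires a labor rate; assume (hypothetically)
# $10,000 per person-month:
cost = effort_pm * 10_000                   # ~$243,000 under that assumption

# (iii) Fault potential FP^1.25; unit, integration, and system testing
# each remove ~30%; each inspection step removes ~60%.
faults = fp ** 1.25 * 0.70 ** 3             # ~519 faults left after testing
inspections = 0
while faults >= 175:                        # target: fewer than 175 defects
    faults *= 0.40
    inspections += 1
print(round(effort_pm, 1), round(cost), inspections)  # ~24.3, ~243000, 2
```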

Page 24: Software Engineering Practice - Software Metrics and Estimation


Lifecycle statistics

Life cycle of a project item

• Well represented by a state machine

• E.g. a “bug” life cycle: in its simplest form, 3 states

May reach 10s of states when bug prioritization is involved

• Statistics on bugs, requirements, “issues”, tasks, etc.

[State diagram: Open →(DEV)→ Fixed →(QA)→ Closed, plus further QA arcs (e.g. reopening)]
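A minimal sketch of this three-state life cycle as code (the DEV and QA roles are from the diagram; the reopen-on-failed-verification arc is an assumption):

```python
from enum import Enum, auto

class BugState(Enum):
    OPEN = auto()
    FIXED = auto()
    CLOSED = auto()

def advance(state: BugState, actor: str, fix_ok: bool = True) -> BugState:
    """DEV moves Open -> Fixed; QA moves Fixed -> Closed when the fix
    verifies, or back to Open when it does not (assumed reopen arc)."""
    if state is BugState.OPEN and actor == "DEV":
        return BugState.FIXED
    if state is BugState.FIXED and actor == "QA":
        return BugState.CLOSED if fix_ok else BugState.OPEN
    return state  # any other (state, actor) pair leaves the bug unchanged

# Example: fixed, fails verification, fixed again, then closed
s = BugState.OPEN
s = advance(s, "DEV")                  # FIXED
s = advance(s, "QA", fix_ok=False)     # back to OPEN
s = advance(advance(s, "DEV"), "QA")   # FIXED, then CLOSED
print(s)  # BugState.CLOSED
```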

Page 25: Software Engineering Practice - Software Metrics and Estimation


Estimation process

[Figure contrasting the perceived estimation process with the actual one]

Page 26: Software Engineering Practice - Software Metrics and Estimation


Example procedure

What do you think of the following procedure:

[Source: Schneider, Winters - “Applying Use Cases” Addison-Wesley, 1999]

Starting point: use cases.

UUCP: unadjusted use case points

• Each use case is weighted 5, 10, or 15 according to its number of analysis classes

TCF: technical complexity factor

• 0.6 + sum(0.01 * TFactor)

• TFactor sum range: 14

EF: experience factor

• 1.4 + sum(-0.03 * EFactor)

• EFactor sum range: 4.5

UCP: use case points

• UUCP * TCF * EF

PH: person-hours

• UCP * (20..28) + 120
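A minimal sketch of the procedure (constants as quoted above; the use case counts and factor sums in the example are invented for illustration):

```python
def use_case_points(simple: int, average: int, complex_: int,
                    tfactor_sum: float, efactor_sum: float) -> float:
    """Schneider/Winters-style UCP with the constants quoted above."""
    uucp = 5 * simple + 10 * average + 15 * complex_  # weights by analysis classes
    tcf = 0.6 + 0.01 * tfactor_sum                    # technical complexity factor
    ef = 1.4 - 0.03 * efactor_sum                     # experience factor
    return uucp * tcf * ef

def person_hours(ucp: float, hours_per_ucp: float = 20.0) -> float:
    """Effort: UCP * (20..28) person-hours, plus the 120-hour constant."""
    return ucp * hours_per_ucp + 120

ucp = use_case_points(simple=3, average=4, complex_=1,
                      tfactor_sum=14, efactor_sum=4.5)
print(round(ucp, 1))             # ~65.5 use case points
print(round(person_hours(ucp)))  # ~1431 person-hours at 20 h/UCP
```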

Page 27: Software Engineering Practice - Software Metrics and Estimation


Estimation tips

Adapted from [McConnell].

Avoid tentative estimates.

• Allow time for the estimation activity.

Use baselined data.

Use developer-based estimates.

• Estimate by walkthrough.

Estimate by categories.

Estimate at a low level of detail.

Use estimation tools.

Use several different estimation techniques.

• Change estimation practices during a project.