TRANSCRIPT
Tutorial: COCOMO, Ada COCOMO
and COCOMO 2.0
Barry Boehm, USC COCOMO/SCM Forum 9, October 5, 1994
Outline
COCOMO Summary
Ada COCOMO Summary
COCOMO 2.0 Status
Summary
COCOMO Baseline Overview I
Inputs:
- Software product size estimates
- Software product, process, computer attributes
- Software reuse, maintenance, and increment parameters
- Software organization's project data

COCOMO

Outputs:
- Software development, maintenance cost and schedule estimates
- Cost, schedule distribution by phase, activity, increment
- COCOMO recalibrated to organization's data
COCOMO Baseline Overview II
Open interfaces and internals
- Published in Software Engineering Economics, Boehm, 1981
Numerous implementations, calibrations, extensions
- Incremental development, Ada, new environment technology
Arguably the most frequently-used software cost model worldwide
COCOMO FEATURES
- DEVELOPMENT COST AND SCHEDULE ESTIMATES
- 3 LEVELS OF FIDELITY
- MACRO AND MICRO ESTIMATION
- SENSITIVITY ANALYSIS
- SOFTWARE ADAPTATION, CONVERSION COSTS
- SOFTWARE MAINTENANCE COSTS
- CALIBRATION TO USER ENVIRONMENT
INTERMEDIATE COCOMO MODEL
1. DETERMINE NOMINAL EFFORT AS FUNCTION OF SIZE
ORGANIC MODE: MM_NOM = 3.2 (KDSI)^1.05
SEMI-DET MODE: MM_NOM = 3.0 (KDSI)^1.12
EMBEDDED MODE: MM_NOM = 2.8 (KDSI)^1.20
2. DETERMINE EFFORT MULTIPLIERS (EM)i AS FUNCTION OF 15 OTHER SOFTWARE COST DRIVER ATTRIBUTES
3. ESTIMATE DEVELOPMENT EFFORT AS: MM = MM_NOM x PRODUCT(i=1..15) (EM)i
4. ESTIMATE DEVELOPMENT SCHEDULE FROM MM
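The four steps above can be sketched in Python. This is a minimal illustration, not part of the tutorial: the mode constants (a, b) and schedule constants (c, d) are the published 1981 Intermediate COCOMO values, and the multiplier list in the example call is hypothetical.

```python
# Intermediate COCOMO sketch: nominal effort, adjusted effort, schedule.
# (a, b) are the effort constants; (c, d) the schedule constants.

MODES = {
    "organic":      (3.2, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded":     (2.8, 1.20, 2.5, 0.32),
}

def intermediate_cocomo(kdsi, mode, effort_multipliers):
    a, b, c, d = MODES[mode]
    mm_nominal = a * kdsi ** b          # step 1: nominal effort in man-months
    mm = mm_nominal                      # step 3: apply the 15 cost driver EMs
    for em in effort_multipliers:
        mm *= em
    tdev = c * mm ** d                   # step 4: development schedule, months
    return mm_nominal, mm, tdev

# Hypothetical 10-KDSI embedded-mode project with a few non-nominal drivers:
nom, mm, tdev = intermediate_cocomo(10, "embedded", [1.15, 0.94, 1.11])
```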
DISTINGUISHING FEATURES OF SOFTWARE DEVELOPMENT MODES

FEATURE                                      ORGANIC            SEMIDETACHED         EMBEDDED
Organizational understanding of              Thorough           Considerable         General
  product objectives
Experience in working with related           Extensive          Considerable         Moderate
  software systems
Product size range                           < 50 KDSI          < 300 KDSI           All sizes
Examples                                     Batch data         Most transaction
                                             reduction;         processing systems;
                                             scientific         most OS, DBMS;
                                             models             ambitious inventory,
                                                                production control
[Figure: software cost driver productivity ranges, smallest to largest: language experience; schedule constraint (1.23); data base size; turnaround time; virtual machine experience; virtual machine volatility; software tools; modern programming practices; storage constraint; applications experience; timing constraint (1.66); required reliability (1.87); product complexity (2.36).]
EXAMPLE - MEGABIT COMMUNICATIONS CO.
- 10,000 DSI COMMUNICATIONS PROCESSING SOFTWARE
- EMBEDDED MODE
- LOCAL USE - MODERATE EFFECT OF FAILURES
- 20,000 BYTE DATA BASE
- MICROPROCESSOR COMM-LINE HANDLING
- USES 70% OF AVAILABLE CPU CAPACITY
- USES 45K OF 64K STORE
SOFTWARE COST DRIVER RATINGS
[Fragment: product complexity rationale - I/O operations at physical I/O level, including device selection, status address checking and translations; error seeks, reads, processing, etc.; optimized I/O overlap.]
SOFTWARE DEVELOPMENT EFFORT MULTIPLIERS

COST DRIVER                               VERY LOW  LOW   NOMINAL  HIGH  VERY HIGH  EXTRA HIGH
PRODUCT ATTRIBUTES
  RELY Required software reliability      .75       .88   1.00     1.15  1.40
  DATA Data base size                               .94   1.00     1.08  1.16
  CPLX Product complexity                 .70       .85   1.00     1.15  1.30       1.65
COMPUTER ATTRIBUTES
  TIME Execution time constraint                          1.00     1.11  1.30       1.66
  ...
MICROPROCESSOR COMMUNICATIONS SOFTWARE

COST DRIVER  SITUATION                                            RATING
RELY         Local use of system; no serious recovery problems    Nominal
DATA         20,000 bytes                                         Low
CPLX         Communications processing                            Very High
TIME         Will use 70% of available time                       High
STOR         45K of 64K store (70%)                               High
VIRT         Based on commercial microprocessor hardware          Nominal
TURN         Two-hour average turnaround time                     Nominal
ACAP         Good senior analysts                                 High
AEXP         Three years                                          Nominal
PCAP         Good senior programmers                              High
VEXP         Six months                                           Low
LEXP         Twelve months                                        Nominal
MODP         Most techniques in use over one year                 High
TOOL         At basic minicomputer tool level                     Low
SCED         Nine months                                          Nominal
INTERMEDIATE COCOMO ESTIMATES VS. PROJECT ACTUALS
[Scatterplot; legend: o - semidetached mode, triangle - embedded mode]
COCOMO ESTIMATES VS. ACTUALS: ORGANIZATION A
Outline
COCOMO Summary
=> Ada COCOMO Summary
COCOMO 2.0 Status
Summary
Ada COCOMO STRUCTURAL OVERVIEW

- SIZE IN KDSI and Ada PROCESS SUM -> MM_NOM = 2.8 (KDSI)^(1.04 + SUM Wi)
- COST DRIVER RATINGS Ri -> EMi FROM RATING TABLES: MM = MM_NOM x PRODUCT(i=1..18) (EM)i
- PHASE DISTRIBUTION TABLES -> MM, TDEV FOR EACH PHASE
- INCREMENT SIZES, PHASING -> MM, TDEV ESTIMATES FOR EACH INCREMENT, USING PREVIOUS RELATIONS
- Ada PROCESS SUM -> TDEV = 3 (MM)^(0.32 + 0.2 SUM Wi), TDEV IN MONTHS
Ada COCOMO SCALING EQUATION
MM_NOM = (2.8) (KDSI)^(1.04 + SUM Wi)
Ada PROCESS MODEL FACTORS (each Wi ranges from 0.00 at the best rating to 0.05 at the worst):

- EXPERIENCE WITH Ada PROCESS MODEL: successful on > 1 mission-critical project; successful on 1 mission-critical project; general familiarity with practices; little familiarity with practices; no familiarity with practices
- DESIGN THOROUGHNESS BY PDR: fully (100%) (unit package specs compiled, bodies outlined); mostly (90%); generally (75%); often (60%); some (40%); little (20%)
- RISKS ELIMINATED BY PDR: fully (100%); mostly (90%); generally (75%); often (60%); some (40%); little (20%)
- REQUIREMENTS VOLATILITY DURING DEVELOPMENT: small noncritical changes; frequent noncritical changes; occasional moderate changes; frequent moderate changes; many large changes

ALL 0.04's: COCOMO EMBEDDED MODE

TYPICAL MM_NOM's:
SUM Wi    100 KDSI    500 KDSI
0.00      336         1,795
0.04      405         2,302
0.08      487         2,951
0.12      585         3,784
0.16      703         4,852
0.20      846         6,221
COST AND SCHEDULE IMPLICATIONS OF Ada PROCESS MODEL
- MORE TIME AND EFFORT TO PDR, CDR
- LESS TIME AND EFFORT IN CODE, INTEGRATION AND TEST
- LESS EFFORT OVERALL
- EARLIER IOC, LATER FOC
- REDUCED PROJECT COMMUNICATION OVERHEAD AND RESULTING DISECONOMIES OF SCALE

MM = a (SIZE)^(1.04 + b)
b = 0: FULL USE OF Ada PROCESS MODEL
b = 0.16: NON-USE OF Ada PROCESS MODEL
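The Ada COCOMO scaling equations above can be sketched directly; the function names are mine, and sum_w is the slide's SUM Wi (0.00 for full use of the Ada process model, up to 0.20 for non-use). The calls at the end reproduce the "typical MM_NOM" figures in the table above, within rounding.

```python
# Ada COCOMO scaling sketch, following the slide equations:
#   MM_NOM = 2.8 * KDSI ** (1.04 + sum_w)
#   TDEV   = 3.0 * MM   ** (0.32 + 0.2 * sum_w)

def ada_mm_nominal(kdsi, sum_w):
    return 2.8 * kdsi ** (1.04 + sum_w)

def ada_tdev(mm, sum_w):
    return 3.0 * mm ** (0.32 + 0.2 * sum_w)

mm_best = ada_mm_nominal(100, 0.00)   # slide table lists 336 for 100 KDSI
mm_emb = ada_mm_nominal(100, 0.16)    # slide table lists 703 (embedded-equivalent)
```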
OTHER CHANGES FROM STANDARD COCOMO
Ada EFFECTS
- REDUCED MULTIPLIERS FOR HIGH RELIABILITY, COMPLEXITY
- LARGER ANALYST INFLUENCE, SMALLER PROGRAMMER INFLUENCE
- LARGER LANGUAGE EXPERIENCE INFLUENCE
- LARGER MODERN PROGRAMMING PRACTICES INFLUENCE
- NEW INSTRUCTION-COUNTING RULES, INCLUDING REUSE EFFECTS
- COMPLEMENTARY MAINTENANCE EFFECTS
OTHER CHANGES FROM STANDARD COCOMO
GENERAL EFFECTS
LARGER TOOLS, TURNAROUND TIME INFLUENCE
VIRTUAL MACHINE VOLATILITY SPLIT INTO HOST AND TARGET EFFECTS
SCHEDULE-STRETCH PENALTY ELIMINATED
INCREMENTAL DEVELOPMENT MODEL INCORPORATED
SECURITY, REUSABILITY EFFECTS ADDED
ADA COCOMO vs. STANDARD COCOMO

[Figure: cost driver productivity ranges compared for: Schedule Constraint, Language Experience, Data Base Size, Turnaround Time, Computer Experience, Computer Volatility, Tools, Modern Practices, Storage Constraint, Applications Experience, Timing Constraint, Reliability, Team Capability.]
INCREMENTAL DEVELOPMENT MODEL
INPUTS
- KDSIi (i = 1, ..., n) - SIZE OF INCREMENT i
- CHOICE OF STARTING POINT FOR NEW INCREMENTS:
  - STARTi = SSRi (USED IN ORIGINAL GECOMO MODEL)
  - STARTi = PDWi (FOLLOWS Ada PROCESS MODEL; PDW ASSUMED TO BE 75% OF THE WAY FROM SSR TO PDR)
  - STARTi = PDRi (USED IN ORIGINAL SOFTWARE ENGINEERING ECONOMICS EXAMPLE, PAGES 499, 504)
- FOR INCREMENTS i = 2, ..., n: DELTA-Ti, MSi-1, AAFi
  - LOCATION IN TIME OF STARTING POINT FOR NEW INCREMENT IN RELATION TO PREVIOUS INCREMENT, IN TERMS OF AN OFFSET DELTA-Ti IN MONTHS FROM AN INPUT-SPECIFIED INCREMENT i-1 MILESTONE MSi-1: CDR, UTC, OR FQT
  - ADAPTATION ADJUSTMENT FACTOR AAFi = % OF SOFTWARE IN PREVIOUS INCREMENTS MODIFIED DURING DEVELOPMENT OF NEW INCREMENT
INCREMENTAL DEVELOPMENT SCHEDULE MODEL
EXAMPLE USING PDW AS NEW INCREMENT STARTING POINT
[Figure: the single-shot schedule for the overall product runs from SSR through PDR; Increment 1's schedule ends with CDR1, UTC1, FQT1; Increment 2 starts at PDW2, offset DELTA-T2 from an Increment 1 milestone, and runs PDR2, CDR2, UTC2, FQT2; Increment 3 starts at PDW3, offset DELTA-T3 from an Increment 2 milestone, and runs PDR3, CDR3, UTC3, FQT3.]
CAN ALSO START NEW INCREMENTS AT THEIR SSR OR PDR
INCREMENTAL DEVELOPMENT MODEL (Concluded)

OUTPUTS
FOR EACH INCREMENT i = 1, ..., n:
- CUMULATIVE SCHEDULE FOR EACH MILESTONE
- CUMULATIVE EFFORT TO EACH MILESTONE: MM_PDRi, MM_CDRi, MM_UTCi, MM_FQTi
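The start-point bookkeeping in the inputs above can be sketched as follows. This is an illustrative reading, not the tutorial's own code: it assumes each increment's milestone dates are supplied in months relative to that increment's own start, and that increment i begins delta_t months after the chosen milestone of increment i-1.

```python
# Sketch of incremental-development start-point bookkeeping.
# milestones: one dict per increment, e.g. {"CDR": 10, "UTC": 14, "FQT": 18},
#   with dates in months from that increment's start.
# anchors[i]: which increment i-1 milestone anchors increment i (index 0 unused).
# offsets[i]: the offset delta_t in months (index 0 unused).

def increment_starts(milestones, anchors, offsets):
    starts = [0.0]  # increment 1 starts at project start
    for i in range(1, len(milestones)):
        prev = milestones[i - 1]
        starts.append(starts[i - 1] + prev[anchors[i]] + offsets[i])
    return starts
```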
Partial List of COCOMO Packages - From STSC Report, Mar. 1993
CB COCOMO, GECOMO Plus, COCOMOID, GHL COCOMO, COCOMO1, REVIC, CoCoPro, SECOMO, COSTAR, SWAN, COSTMODL
Outline
COCOMO Summary
Ada COCOMO Summary
=> COCOMO 2.0 Status
Summary
COCOMO 2.0 Status
Model Overview
Affiliates' Overview
Activity Sequence - Madachy: Litton experience
Long Range Plan
Cost-Benefit Rationale
COCOMO 2.0 Model Overview
COCOMO 2.0 Objectives
Coverage of Future Market Sectors
Hierarchical Sizing Model
Stronger Reuse/Reengineering Model
Other Model Improvements
COCOMO 2.0 Objectives
Retain COCOMO internal, external openness
Develop a 1990-2000's software cost model
- Addressing full range of market sectors
- Addressing new processes and practices
Develop database, tool support for continuous model improvement
Develop analytic framework for economic analysis
- E.g., software product line return-on-investment analysis
Integrate with process, architecture research
The future of the software practices marketplace
[Figure: layers of the software practices marketplace, with estimated numbers of performers in the US in year 2005:
User programming (55M)
Application composition (0.7M)
Application generators (0.6M)
System integration (0.7M)
Infrastructure (0.75M)]
COCOMO 2.0 Coverage of Future SW Practices Sectors
User Programming: No need for cost model
Applications Composition: Use object counts or object points
- Count (weight) screens, reports, 3GL routines
System Integration; development of applications generators and infrastructure software:
- Prototyping: Applications composition model
- Early design: Function Points and 4-6 cost drivers
- Post-architecture: Source Statements or Function Points and 14-20 cost drivers
- Stronger reuse/reengineering model
University of Southern California Center for Software Engineering
Software Costing and Sizing Accuracy vs. Phase
[Figure: relative range of estimates of completed size (DSI) and cost ($) vs. phase, converging (e.g., 1.5x, 1.25x) as product detail grows; phases: Feasibility; Plans and Rqts.; Product Design; Detail Design; Devel. and Test; milestones: Concept of Operation, Rqts. Spec., Product Design Spec., Detail Design Spec., Accepted Software.]
Phases and Milestones
Baseline Object Point Estimation Procedure
Step 1: Assess Object-Counts: estimate the number of screens, reports, and 3GL components that will comprise this application. Assume the standard definitions of these objects in your ICASE environment.

Step 2: Classify each object instance into simple, medium and difficult complexity levels depending on values of characteristic dimensions. Use the following scheme:

For Screens:
  Number of       # and source of data tables:
  views           Total < 4              Total < 8                Total 8+
                  (< 2 srvr, < 3 clnt)   (2-3 srvr, 3-5 clnt)     (> 3 srvr, > 5 clnt)
  < 3             simple                 simple                   medium
  3 - 7           simple                 medium                   difficult
  8+              medium                 difficult                difficult

For Reports:
  Number of       (same data-table columns as for screens)
  sections
  contained
  0 or 1          simple                 simple                   medium
  2 or 3          simple                 medium                   difficult
  4+              medium                 difficult                difficult

Step 3: Weigh the number in each cell using the following scheme. The weights reflect the relative effort required to implement an instance of that complexity level:

  Object Type      Simple   Medium   Difficult
  Screen           1        2        3
  Report           2        5        8
  3GL Component                      10

Step 4: Determine Object-Points: add all the weighted object instances to get one number, the Object-Point count.

Step 5: Estimate the percentage of reuse you expect to be achieved in this project. Compute the New Object Points to be developed, NOP = (Object-Points) x (100 - %reuse) / 100.

Step 6: Determine a productivity rate, PROD = NOP / person-month, from the following scheme:

  Developers' experience and capability   Very Low   Low   Nominal   High   Very High
  ICASE maturity and capability           Very Low   Low   Nominal   High   Very High
  PROD                                    4          7     13        25     50

Step 7: Compute the estimated person-months: PM = NOP / PROD.
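Steps 1 through 7 can be sketched in Python. The weights and PROD values come from the tables above; the object counts and reuse percentage passed in are illustrative only.

```python
# Object-point estimation sketch following Steps 1-7.

WEIGHTS = {  # complexity weights per object type (Step 3)
    "screen": {"simple": 1, "medium": 2, "difficult": 3},
    "report": {"simple": 2, "medium": 5, "difficult": 8},
    "3gl":    {"difficult": 10},
}
PROD = {"very low": 4, "low": 7, "nominal": 13, "high": 25, "very high": 50}

def object_point_estimate(counts, pct_reuse, rating):
    """counts: {(object_type, complexity): n}. Returns (NOP, PM)."""
    op = sum(n * WEIGHTS[t][c] for (t, c), n in counts.items())  # Step 4
    nop = op * (100 - pct_reuse) / 100                            # Step 5
    return nop, nop / PROD[rating]                                # Steps 6-7

# Illustrative application: 4 simple screens, 2 medium reports, 1 3GL component
nop, pm = object_point_estimate(
    {("screen", "simple"): 4, ("report", "medium"): 2, ("3gl", "difficult"): 1},
    pct_reuse=25, rating="nominal")
```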
Software Development Estimation Model

Person-months = A x PRODUCT(EM) x (Size)^B

Size: either KSLOC (K source statements) or EKSLOC = (UFPs) x (KSLOC/UFP)
- UFP: Unadjusted Function Points (no technical complexity factors applied)
- KSLOC/UFP: a function of source language
- Adjusted for reuse
B: a function of exponent drivers
PRODUCT(EM): 5 EMs for Early Design; 14 for Post-Architecture
Function Count Procedure

Step 1: Determine function counts by type. The unadjusted function counts should be counted by a lead technical person based on information in the software requirements and design documents. The number of each of the five user function types should be counted (external input, external output, logical internal file, external interface file, and external inquiry).

Step 2: Determine complexity-level function counts. Classify each function count into Low, Average and High complexity levels depending on the number of data element types contained and the number of file types referenced. Use the following scheme:

  [Classification tables: For Files and Interfaces; For Outputs and Inquiries; For Inputs]

Step 3: Apply complexity weights. Weight the number in each cell using the following scheme. The weights reflect the relative value of the function to the user:

  Function Type   Low   Average   High
  Inputs          3     4         6
  Outputs         4     5         7
  Files           7     10        15
  Interfaces      5     7         10
  Queries         3     4         6

Step 4: Compute Unadjusted Function Points. Add all the weighted function counts to get one number, the Unadjusted Function Points.
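A sketch of Steps 1 through 4, using the complexity weights from Step 3 (these match the standard Albrecht function point weights); the function counts in the example are illustrative.

```python
# Unadjusted Function Point (UFP) sketch following Steps 1-4.

FP_WEIGHTS = {  # complexity weights per function type (Step 3)
    "input":     {"low": 3, "average": 4, "high": 6},
    "output":    {"low": 4, "average": 5, "high": 7},
    "file":      {"low": 7, "average": 10, "high": 15},
    "interface": {"low": 5, "average": 7, "high": 10},
    "inquiry":   {"low": 3, "average": 4, "high": 6},
}

def unadjusted_fp(counts):
    """counts: {(function_type, complexity): n} -> UFP total (Step 4)."""
    return sum(n * FP_WEIGHTS[t][c] for (t, c), n in counts.items())

# Illustrative counts: 2 average files, 3 low inputs, 1 high inquiry
ufp = unadjusted_fp({("file", "average"): 2, ("input", "low"): 3,
                     ("inquiry", "high"): 1})
```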
New Scaling Exponent Approach
Nominal person-months = A x (Size)^B
B = 1.01 + 0.01 SUM (exponent driver ratings)
- B ranges from 1.01 to 1.26
- 5 drivers; ratings from 0 to 5

Exponent drivers:
- Precedentedness
- Development flexibility
- Architecture / risk resolution
- Team cohesion
- Process maturity (being derived from SEI CMM)
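The exponent computation above is simple enough to state directly; the constant A and the ratings passed in below are illustrative, not calibrated values.

```python
# COCOMO 2.0 scaling-exponent sketch: B = 1.01 + 0.01 * sum(ratings),
# with five exponent drivers each rated 0 (Extra High) to 5 (Very Low),
# so B ranges from 1.01 to 1.26.

def scaling_exponent(ratings):
    assert len(ratings) == 5 and all(0 <= r <= 5 for r in ratings)
    return 1.01 + 0.01 * sum(ratings)

def nominal_pm(a, ksloc, ratings):
    return a * ksloc ** scaling_exponent(ratings)

# A fully precedented, mature project vs. a fully unprecedented one:
pm_best = nominal_pm(3.0, 100, [0, 0, 0, 0, 0])
pm_worst = nominal_pm(3.0, 100, [5, 5, 5, 5, 5])
```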
Rating Scheme for the COCOMO 2.0 Scale Factors

Scale Factor (Wi) ratings run Very Low (5), Low (4), Nominal (3), High (2), Very High (1), Extra High (0):

Precedentedness: thoroughly unprecedented; largely unprecedented; somewhat unprecedented; generally familiar; largely familiar; thoroughly familiar

Development Flexibility: rigorous; occasional relaxation; some relaxation; general conformity; ...; general goals

Architecture / risk resolution*: little (20%); some (40%); often (60%); generally (75%); mostly (90%); ...

Team cohesion: strongly adversarial; occasionally cooperative; moderately cooperative; ...; ...; seamless

Process maturity†: Weighted average of "Yes" answers to CMM Maturity Questionnaire

* % significant module interfaces specified, % significant risks eliminated.
† The form of the Process Maturity scale is being resolved in coordination with the SEI. The intent is to produce a process maturity rating as a weighted average of the project's responses to the most recent Capability Maturity Model-based Maturity Questionnaire, rather than to use the previous 1-to-5 maturity levels. The weights to be applied to the Maturity Questionnaire questions are still being determined.
Process Maturity Rating

Assess yes/no scores on new SEI Capability Maturity Model (CMM) Maturity Questionnaire
- Via Interim Profiles (roughly 2/year/project)
Weight questions by productivity impact for each Key Process Area (KPA)
- Determine weighted yes-fraction
Weight KPA scores by KPA level
- E.g., 60% on Level 2's; 50% on Level 3's
  => Rating = (.60)(2) + (.50)(3-2)
COCOMO 2.0 Model Overview
COCOMO 2.0 Objectives
Coverage of Future Market Sectors
Hierarchical Sizing Model
=> Stronger Reuse/Reengineering Model
Other Model Improvements
Nonlinear Reuse Effects
[Figure: relative cost (0.5, 0.75, 1.0) vs. amount modified; data on 2954 NASA modules.]
Nonlinear Reuse Effects
47% of software modification effort consumed by understanding existing software [Parikh, 1983]
Nonlinear growth of modified-module interfaces to check [Gerlach-Denskat, 1994]
- To modify k of n modules, N = k(n-k) + k(k-1)/2
- Example for n = 10 modules
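The interface-count formula above can be checked directly; the function name is mine.

```python
# Interface count for the nonlinear reuse effect: modifying k of n modules
# requires checking N = k*(n-k) + k*(k-1)/2 module interfaces
# (k modified-unmodified pairs plus modified-modified pairs).

def interfaces_to_check(n, k):
    return k * (n - k) + k * (k - 1) // 2

# For n = 10 modules, as in the example above:
counts = [interfaces_to_check(10, k) for k in range(11)]
```

Note that the count keeps growing with k even after k = n/2, which is one source of the nonlinearity plotted in the figure above.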
Reuse and Reengineering Effects
Add Assessment & Assimilation increment (AA)
- Similar to conversion planning increment
Add Software Understanding increment (SU)
- To cover nonlinear software understanding effects
- Apply only if reused software is modified
Results in revised Adaptation Adjustment Factor (AAF)
- Equivalent DSI = (Adapted DSI) x (AAF/100)
- AAF = AA + SU + 0.4(DM) + 0.3(CM) + 0.3(IM)
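The revised AAF arithmetic above can be sketched as follows; all parameters are percentages, and per the bullets above SU should be passed as 0 when the reused software is not modified.

```python
# Revised Adaptation Adjustment Factor sketch:
#   AAF = AA + SU + 0.4*DM + 0.3*CM + 0.3*IM   (all inputs in percent)
#   Equivalent DSI = Adapted DSI * (AAF / 100)

def equivalent_dsi(adapted_dsi, aa, su, dm, cm, im):
    aaf = aa + su + 0.4 * dm + 0.3 * cm + 0.3 * im
    return aaf, adapted_dsi * aaf / 100.0

# Illustrative adapted component: 1000 DSI, 10% design/code/integration mods
aaf, edsi = equivalent_dsi(1000, aa=4, su=10, dm=10, cm=10, im=10)
```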
Prototype of Software Understanding Ratingllncrement Table
Ratings run Very Low, Low, Nominal, High, Very High; the Very Low and Very High anchors:

Structure - Very Low: low-cohesion, high-coupling spaghetti code. Very High: strong modularity, information hiding, data/control structure.

Application Clarity - Very Low: no match between program and application world-views. Very High: clear match between program and application world-views.

Self-Descriptiveness - Very Low: obscure code; documentation missing, obscure, or obsolete. Very High: self-descriptive code; documentation up-to-date, well-organized, with design rationale.

SU Increment to AAF - (value per rating level)
COCOMO 2.0 Model Overview
COCOMO 2.0 Objectives
Coverage of Future Market Sectors
Hierarchical Sizing Model
Stronger Reuse/Reengineering Model
Other Model Improvements
Other Major COCOMO 2.0 Changes
Range versus point estimates
Requirements Volatility replaced by Breakage %
Scaling exponent: modes replaced by exponent drivers
Multiplicative cost driver changes
- Product CD's
- Platform CD's
- Personnel CD's
- Project CD's
No other major changes to maintenance model
Product Cost Drivers
Driver Rating Scale Multipliers
RELY same same
DATA same same
CPLX modified same
RUSE modified modified
Platform Cost Drivers
Driver Rating Scale Multipliers
TIME same same
STOR same same
PVOL (was VIRT) same same
TURN deleted
Personnel Cost Drivers
Driver Rating Scale Multipliers
ACAP same modified
PCAP same modified
AEXP modified same
PEXP (was VEXP) modified modified
LTEX modified modified
Project Cost Drivers

Driver  Rating Scale  Multipliers
MODP    deleted
TOOL    modified      modified
SCED    same          same
DRAFT 2.1
Table 7: Effort Multiplier Cost Driver Ratings for Stage 3

RELY: Very Low: slight inconvenience; Low: low, easily recoverable losses; Nominal: moderate, easily recoverable losses; High: high financial loss; Very High: risk to human life
DATA: Low: DB bytes/Pgm SLOC < 10; Nominal: 10 <= D/P < 100; High: 100 <= D/P < 1000; Very High: D/P >= 1000
CPLX: see Table 8
RUSE: Low: none; Nominal: across project; High: across program; Very High: across product line; Extra High: across multiple product lines
DOCU: Very Low: many life-cycle needs uncovered; Low: some life-cycle needs uncovered; Nominal: right-sized to life-cycle needs; High: excessive for life-cycle needs; Very High: very excessive for life-cycle needs
TIME: Nominal: <= 50% use of available execution time; High: 70%; Very High: 85%; Extra High: 95%
STOR: Nominal: <= 50% use of available storage; High: 70%; Very High: 85%; Extra High: 95%
PVOL: Low: major change every 12 mo., minor change every 1 mo.; Nominal: major: 6 mo., minor: 2 wk.; High: major: 2 mo., minor: 1 wk.; Very High: major: 2 wk., minor: 2 days
ACAP: Very Low: 15th percentile; Low: 35th; Nominal: 55th; High: 75th; Very High: 90th
PCAP: Very Low: 15th percentile; Low: 35th; Nominal: 55th; High: 75th; Very High: 90th
PCON: Very Low: 48% / year; Low: 24% / year; Nominal: 12% / year; High: 6% / year; Very High: 3% / year
AEXP: Very Low: <= 2 months; Low: 6 months; Nominal: 1 year; High: 3 years; Very High: 6 years
PEXP: Very Low: <= 2 months; Low: 6 months; Nominal: 1 year; High: 3 years; Very High: 6 years
LTEX: Very Low: <= 2 months; Low: 6 months; Nominal: 1 year; High: 3 years; Very High: 6 years
TOOL: Very Low: edit, code, debug; Low: simple, frontend, backend CASE, little integration; Nominal: basic lifecycle tools, moderately integrated; High: strong, mature lifecycle tools, moderately integrated; Very High: strong, mature, proactive lifecycle tools, well integrated with processes, methods, reuse
SITE: Collocation: Very Low: international; Low: multi-city and multi-company; Nominal: multi-city or multi-company; High: same city or metro. area; Very High: same building or complex; Extra High: fully collocated
SITE: Communications: Very Low: some phone, mail; Low: individual phone, FAX; Nominal: narrowband email; High: wideband electronic communication; Very High: wideband elect. comm., occasional video conf.; Extra High: interactive multimedia
SCED: Very Low: 75% of nominal; Low: 85%; Nominal: 100%; High: 130%; Very High: 160%
Table 8: Module Complexity Ratings versus Type of Module

Very Low:
- Control operations: Straight-line code with a few non-nested structured programming operators: DOs, CASEs, IF-THEN-ELSEs. Simple module composition via procedure calls or simple scripts.
- Computational operations: Evaluation of simple expressions: e.g., A=B+C*(D-E)
- Device-dependent operations: Simple read, write statements with simple formats.
- Data management operations: Simple arrays in main memory. Simple COTS-DB queries, updates.
- User interface management operations: Simple input forms, report generators.

Low:
- Control: Straightforward nesting of structured programming operators. Mostly simple predicates.
- Computational: Evaluation of moderate-level expressions: e.g., D=SQRT(B**2-4.*A*C)
- Device-dependent: No cognizance needed of particular processor or I/O device characteristics. I/O done at GET/PUT level.
- Data management: Single file subsetting with no data structure changes, no edits, no intermediate files. Moderately complex COTS-DB queries, updates.
- User interface: Use of simple graphic user interface (GUI) builders.

Nominal:
- Control: Mostly simple nesting. Some intermodule control. Decision tables. Simple callbacks or message passing, including middleware-supported distributed processing.
- Computational: Use of standard math and statistical routines. Basic matrix/vector operations.
- Device-dependent: I/O processing includes device selection, status checking and error processing.
- Data management: Multi-file input and single file output. Simple structural changes, simple edits. Complex COTS-DB queries, updates.
- User interface: Simple use of widget set.

High:
- Control: Highly nested structured programming operators with many compound predicates. Queue and stack control. Homogeneous, distributed processing. Single processor soft real-time control.
- Computational: Basic numerical analysis: multivariate interpolation, ordinary differential equations. Basic truncation, roundoff concerns.
- Device-dependent: Operations at physical I/O level (physical storage address translations; seeks, reads, etc.). Optimized I/O overlap.
- Data management: Simple triggers activated by data stream contents. Complex data restructuring.
- User interface: Widget set development and extension. Simple voice I/O, multimedia.

Very High:
- Control: Reentrant and recursive coding. Fixed-priority interrupt handling. Task synchronization, complex callbacks, heterogeneous distributed processing. Single-processor hard real-time control.
- Computational: Difficult but structured numerical analysis: near-singular matrix equations, partial differential equations. Simple parallelization.
- Device-dependent: Routines for interrupt diagnosis, servicing, masking. Communication line handling. Performance-intensive embedded systems.
- Data management: Distributed database coordination. Complex triggers. Search optimization.
- User interface: Moderately complex 2D/3D, dynamic graphics, multimedia.

Extra High:
- Control: Multiple resource scheduling with dynamically changing priorities. Microcode-level control. Distributed hard real-time control.
- Computational: Difficult and unstructured numerical analysis: highly accurate analysis of noisy, stochastic data. Complex parallelization.
- Device-dependent: Device timing-dependent coding, microprogrammed operations. Performance-critical embedded systems.
- Data management: Highly coupled, dynamic relational and object structures. Natural language data management.
- User interface: Complex multimedia, virtual reality.
Table 13: COCOMO 2.0 Effort Multipliers

[Table: multiplier values by rating (Very Low through Extra High) plus productivity range, for cost drivers RELY, DATA, CPLX, RUSE, DOCU, TIME, STOR, PVOL, ACAP, PCAP, PCON, PEXP, LTEX, TOOL, SITE, SCED; values not captured.]
scaling factors in Section 5 are handled in the same way as for Stage 3. In Stage 2, however, a reduced set of effort multiplier cost drivers is used. These are obtained by combining the Stage 3 cost drivers as shown in Table 14.

Table 14: Stage 2 and Stage 3 Cost Drivers

Stage 2 Cost Driver   Counterpart Combined Stage 3 Cost Drivers
RCPX                  RELY, DATA, CPLX, DOCU
RUSE                  RUSE
PDIF                  TIME, STOR, PVOL
PERS                  ACAP, PCAP, PCON
PREX                  AEXP, PEXP, LTEX
FCIL                  TOOL, SITE
SCED                  SCED

The corresponding combined baseline effort multipliers are given in Table 15. The resulting seven cost drivers are easier to estimate in early stages of software development than the 17 Stage 3 cost drivers.
Future COCOMO 2.0 Improvements
Effects of reuse, applications composition on schedule, phase distribution
Risk assessment - Madachy Expert COCOMO, Monte Carlo
Full life cycle coverage - System engineering, HW-SW integration, OT&E
COCOMO 2.0 Status
Model Overview
=> Affiliates' Overview
Activity Sequence - Madachy: Litton experience
Long Range Plan
Cost-Benefit Rationale
Figure 3. COCOMO 2.0 Affiliates' Program

[Figure: Affiliates and research sponsors provide funding support, sanitized data, feedback on models and tools, special project definition and support, and visiting researchers; in return they receive COCOMO 2.0 models and tools, the Amadeus metrics package, tutorials, reports, related research, and special project results, plus workshops and joint projects. The COCOMO project (USC-CS, USC-IOM, UCI-ICS) maintains the COCOMO 2.0 database; others contribute process definitions.]
Current COCOMO 2.0 Affiliates:
Aerospace, AT&T Bell Labs, EDS, E-Systems, Hewlett-Packard, Hughes, IDA, Litton Data Systems, Lockheed, MDAC, Motorola, Northrop, Rational, Rockwell, SAIC, SPC, Teledyne, TRW, USAF Rome Lab, US Army Research Lab, Xerox
via Technology Cooperation Agreement: CMU-SEI
in process: DISA, JPL, Loral, Sun, TI
COCOMO 2.0 Activity Sequence
Cost model review (draft prepared)
Hypothesis formulation (initial set prepared)
Data definitions (initial set prepared)
Monthly electronic data collection
- Amadeus data collection support
Data archiving
Incremental model development and refinement
- New USC COCOMO (baseline version prepared)
Related research
Definition Checklist for Source Statement Counts
Definition name: Logical Source Statements (basic definition)
Originator: COCOMO 2.0
Measurement unit: Physical source lines
The terms in this checklist are defined and discussed in CMU/SEI-92-TR-20.

Statement type (if a statement has multiple types, classify it as the type with the highest precedence):
1 Executable
2 Nonexecutable:
3   Declarations
4   Compiler directives
5   Comments
6     On their own lines
7     On lines with source code
8     Banners and nonblank spacers
9     Blank (empty) comments
10    Blank lines

How produced:
1 Programmed
2 Generated with source code generators
3 Converted with automated translators
4 Copied or reused without change
5 Modified
6 Removed

Usage:
1 In or as part of the primary product (includes)
2 External to or in support of the primary product (excludes)

Origin:
1 New work: no prior existence
2 Prior work taken or adapted from:
3   A previous version, build, or release
4   Commercial off-the-shelf software (COTS), other than libraries
5   Government furnished software (GFS), other than reuse libraries
6   Another product
7   A vendor-supplied language support library (unmodified)
8   A vendor-supplied operating system or utility (unmodified)
9   A local or modified language support library or operating system
10  Other commercial library
11  A reuse library (software designed for reuse)
12  Other software component or library
AMADEUS MEASUREMENT-DRIVEN ANALYSIS AND FEEDBACK SYSTEM

AMADEUS --- WHAT IS IT?
Amadeus is an automated system for software metric collection, metric analysis, metric reporting, metric graphing, and metric prediction.
Amadeus enables software process and product analysis and improvement.
Amadeus is very flexible, has an open architecture, and has a low entry barrier to usage.
Example metrics supported by Amadeus are: Size, Effort, Structure, Cycle time, Changes, Cost, Errors, Schedule.

Amadeus/Selby/Jan-94    Amadeus Software Research, Inc.
AMADEUS AUTOMATES METRICS

Amadeus is an automated metric collection, analysis, reporting, graphing, and prediction system
- Flexible, unobtrusive measurement system with low entry barrier
- Open architecture emphasizing user-extensibility and tailorability
- Enables process and product improvement

[Figure: specifications and agents feed the Amadeus Measurement System.]
COCOMO 2.0 Affiliate Usage Experience
Madachy
COCOMO 2.0 Kickoff Meeting USC Center for Software Engineering
University of Southern California July 26, 1994
OUTLINE
Overview
Cost Driver Rating
Tool Usage Evaluation and Examples
Tool Implementation Plan
Question and Answer
AFFILIATE CONSIDERATIONS
Relevance to process improvement efforts
Amadeus® tool learning
Tool integration with current data collection and cost estimation processes
Incomplete cost data and inconsistent data granularity - may need to strengthen cost collection procedures
Data confidentiality, security and legal considerations -check constraints as soon as possible and resolve
PROCESS IMPROVEMENT
COCOMO 2.0 Affiliate activities are compatible with process improvement via metrics definition and collection, and specific improvement of the cost estimation process.
To support continuous process improvement: -establish baselines by measuring products and processes -provide periodic status reports and graphs for feedback and improvement -perform comparative analysis, monitor progress relative to baselines and
support project tracking -analyze and summarize data for cost and schedule estimation -periodically update baselines
Monthly COCOMO data collection and analysis is an implementation of continuous improvement.
COST DRIVER RATING
Need to rate cost drivers in a consistent and objective fashion within an organization.
Cost driver ratings profile: -graphical depiction of historical ratings to be used as a reference baseline -used in conjunction with estimating tools to gauge new projects against past ones objectively
COST DRIVER RATINGS PROFILE
[Chart: historical cost driver ratings for projects PROJ1-PROJ6, plotted across rating levels Very Low through Extra High:

RELY - required software reliability; effect: slight inconvenience / low, easily recoverable losses / moderate, easily recoverable losses / high financial loss / risk to human life

DATA - data base size: DB bytes/Prog. SLOC < 10 / 10 <= D/P < 100 / 100 <= D/P < 1000 / D/P >= 1000

CPLX - product complexity: see attached table]
COST SAVINGS
Amadeus® saves costs by replacing current manual processes
- most of the cost of implementing Amadeus® is in planning and the development of custom templates and agents
Batch mode is also an option versus manual invocation
Break-even point of using automated procedures should occur relatively early
CUSTOMIZATION EXAMPLES
Form template for manual input of COCOMO 2.0 general project data took about 30 minutes.
Form template and agent to measure source code required less than 30 minutes per project.
Templates for importing data from metrics spreadsheet took about one hour.
Templates only need to be developed once and are reused thereafter.
Templates are platform independent.
ANOTHER PILOT PROJECT EXAMPLE
Before: it took 4-5 weeks for project personnel to provide the SEPG with the size of the source code.
-what was measured did not conform to the standard counting rules consistent with COCOMO sizing and cost estimation procedures.
After: Amadeus® could be invoked to measure the same code and provide results within seconds.
-not only are we confident that the measured size conforms to our cost model definition, additional measures are provided to increase visibility.
EVALUATION SUMMARY
Tool automation increases process visibility by providing consistent metrics not previously available.
Requires less effort over the long run compared to manual procedures.
Can leverage current processes if carefully planned.
Amadeus® can serve as a framework for integrating metrics and enable cross-metric analyses.
Customization requires only a minimal amount of technical skill.
Still have open issues for some projects.
PHASED IMPLEMENTATION APPROACH
Phase 1 - code analysis for pilot projects, and implementation of import, entry and report templates for project tracking metrics. Generate initial baselines for pilot projects.
Phase 2 - build up data repositories and strengthen analysis both within and across projects, providing process improvement insight.
Phase 3 - division-wide adoption and standardization, use of networking across repositories. This will provide a stable framework for continuous process improvement across all projects.
COCOMO 2.0 Status
Model Overview Affiliates' Overview Activity Sequence - Madachy: Litton experience
=> Long Range Plan
=> Cost-Benefit Rationale
COCOMO 2.0 Long Range Plan
[Timeline, 1995 through 1998-:
Project effort, schedule: IOC, then upgrades
Phase, activity distribution: IOC, then upgrades
Cost/quality model: IOC, then upgrades
Dynamics model, sizing model, ...]

IOC: Initial Operational Capability
Upgrades: Accuracy, functionality, tailored versions
Cost-Benefit Rationale
General COCOMO 2.0 benefits
Additional COCOMO 2.0 Affiliate benefits
COCOMO 2.0 Affiliate costs
Bottom Line
General COCOMO 2.0 Benefits
Reduced risk of inaccurate estimates -Stronger base for planning and control
More relevant, credible sensitivity analyses More accurate technology investment analyses - Tools, components, methods, processes
Acceleration of continuous process improvement -Means toward achieving SEI CMM Level 4
Means toward compliance with Government contract data reporting requirements
Added COCOMO 2.0 Affiliate Benefits
Early access to general COCOMO 2.0 benefits
Higher level of general COCOMO 2.0 benefits via specially-calibrated versions
Comparative benchmarking of project and organizational productivity
Amadeus, USC COCOMO 2.0, future tools
COCOMO 2.0 Affiliate Costs (Rough Estimates)
$15K/year, or portion of $25K general USC or UCI affiliate membership
1 person-month/site startup effort
1 person-day/project initialization effort
30 minutes/periodic project data update
- These vary as a function of existing data collection procedures
2 trips/person/year, synchronized with USC Research Review and COCOMO/SCM Forum
Bottom Line
Difficult to compute ROI
Varies as function of existing data collection and analysis efforts -If strong, added effort should be small - If weak, added effort should be reasonable
"If you think measurement is expensive, try ignorance"
Table 1: Comparison of COCOMO, Ada COCOMO, and COCOMO 2.0

Size:
- COCOMO: Delivered Source Instructions (DSI) or Source Lines Of Code (SLOC)
- Ada COCOMO: DSI or SLOC
- COCOMO 2.0 Stage 1: Object Points
- COCOMO 2.0 Stage 2: Function Points (FP) and Language
- COCOMO 2.0 Stage 3: FP and Language, or SLOC

Reuse:
- COCOMO, Ada COCOMO: Equivalent SLOC = linear f(DM, CM, IM)
- Stage 1: Implicit in model
- Stages 2 and 3: % unmodified reuse: SR; % modified reuse: nonlinear f(AA, SU, DM, CM, IM)

Requirements change:
- COCOMO: Requirements Volatility rating (RVOL)
- Ada COCOMO: RVOL rating
- Stage 1: Implicit in model
- Stages 2 and 3: Breakage %: BRAK

Maintenance:
- COCOMO, Ada COCOMO: Annual Change Traffic (ACT) = %added + %modified
- Stage 1: Object Point ACT
- Stages 2 and 3: Reuse model

Scale (b) in MM_NOM = a(Size)^b:
- COCOMO: Organic: 1.05; Semidetached: 1.12; Embedded: 1.20
- Ada COCOMO: Embedded: 1.04-1.24, depending on degree of: early risk elimination; solid architecture; stable requirements; Ada process maturity
- Stages 2 and 3: 1.01-1.26, depending on degree of: precedentedness; conformity; early architecture, risk resolution; team cohesion; process maturity (SEI)

Product Cost Drivers:
- COCOMO: RELY, DATA, CPLX
- Ada COCOMO: RELY*, DATA, CPLX*, RUSE
- Stage 1: None
- Stage 3: RELY, DATA, DOCU*†, CPLX*†, RUSE*†

Platform Cost Drivers:
- COCOMO: TIME, STOR, VIRT, TURN
- Ada COCOMO: TIME, STOR, VMVH, VMVT, TURN
- Stage 1: None
- Stage 2: Platform difficulty: PDIF*†
- Stage 3: TIME, STOR, PVOL (=VIRT)

Personnel Cost Drivers:
- COCOMO: ACAP, AEXP, PCAP, VEXP, LEXP
- Stage 1: None
- Stage 2: Personnel capability and experience: PERS*†, PREX*†

Project Cost Drivers:
- COCOMO: MODP, TOOL, SCED
- Ada COCOMO: MODP*, TOOL*, SCED, SECU
- Stage 1: None

* Different multipliers. † Different rating scale.