
Session 1: The SEL

Frank E. McGarry, NASA/Goddard

Vic Basili, University of Maryland

Michael Stark, NASA/Goddard



N94-11423

EXPERIMENTAL SOFTWARE ENGINEERING:

SEVENTEEN YEARS OF LESSONS IN THE SEL

F. McGarry

SOFTWARE ENGINEERING BRANCH

Code 552

Goddard Space Flight Center

Greenbelt, Maryland 20771

(301) 286-5048

ABSTRACT

This paper describes seven key principles developed by the Software Engineering Laboratory (SEL) at the Goddard Space Flight Center (GSFC) of the National Aeronautics and Space Administration (NASA). For the past 17 years, the SEL has been experimentally analyzing the development of production software as varying techniques and methodologies are applied in this one environment. The SEL has collected, archived, and studied detailed measures from more than 100 flight dynamics projects, thereby gaining significant insight into the effectiveness of numerous software techniques (e.g., References 1, 2, and 3), as well as extensive experience in the overall effectiveness of "Experimental Software Engineering". This experience has helped formulate follow-on studies in the SEL, and it has helped other software organizations better understand just what can and cannot be accomplished through experimentation.

INTRODUCTION

The Software Engineering Laboratory (SEL) was established in 1976 as a joint venture among Goddard Space Flight Center (GSFC) of the National Aeronautics and Space Administration (NASA), the University of Maryland, and Computer Sciences Corporation to study software technologies as applied to production software in the Flight Dynamics Division. The goal was to measure the effects of various methodologies on the development process and then to use on ensuing projects those techniques that proved favorable.

During its 17 years of existence, the SEL has experimented with more than 100 development efforts, collecting detailed information on each. For each project (experiment), a goal or set of goals was defined; then a plan was developed to ask the necessary questions and collect the necessary measures for the particular project. This approach has come to be called the Goal-Question-Metric (GQM) paradigm (Reference 4).
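As an illustration only (the goal, questions, and metric names below are invented examples, not SEL definitions or SEL tooling), a GQM plan can be captured as a simple structure that ties every metric back to the question and goal that justify collecting it:

    # Sketch only: a minimal data structure for a Goal-Question-Metric (GQM) plan.
    # The goal, questions, and metrics shown are invented, not SEL definitions.
    from dataclasses import dataclass, field

    @dataclass
    class Question:
        text: str
        metrics: list[str] = field(default_factory=list)  # measures that answer this question

    @dataclass
    class Goal:
        purpose: str
        questions: list[Question] = field(default_factory=list)

    reuse_goal = Goal(
        purpose="Assess whether OOD increases reuse on flight dynamics projects",
        questions=[
            Question("What fraction of delivered code is reused?",
                     metrics=["reused SLOC / total SLOC"]),
            Question("Does reuse differ between OOD and non-OOD projects?",
                     metrics=["reuse percentage per project", "design method used"]),
        ],
    )

    # Every metric is collected only because some question (and therefore some goal)
    # requires it; that traceability is the point of the GQM paradigm.
    for question in reuse_goal.questions:
        print(question.text, "->", ", ".join(question.metrics))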

The projects studied are all of the flight dynamics class, with similar levels of complexity. They ranged in size from 4 to 5 thousand source lines of code (KSLOC) to more than 1.5 million source lines of code (MSLOC), the typical project being around 150 KSLOC and requiring about 25 staff-years to develop. The relative homogeneity of the class of systems studied in the SEL is particularly attractive to experimental software engineering.

The SEL has generated more than 200 papers (Reference 5) documenting the studies that have been completed. Each of these typically reports on some particular method that was studied (e.g., Ada, OOD, IV&V, testing techniques), but some of the reports also reflect major lessons learned about overall approaches to software experimentation and research. Many papers, for instance, have been written about software metrics, obviously an integral part of this process but not its major driver. This kind of information is probably as important as the specific study results pertaining to a software development approach. Seven principles emerge from this activity as critical, in the SEL's view, for any organization pursuing efforts in experimental software engineering.

2. The Seven Principles of Experimental Software Engineering

In the course of conducting more than 100 experiments during the past 17 years, the SEL has collected and archived more than 135 megabytes (MB) of measurement data; it has documented its results in more than 200 reports and papers. The many valuable successes, as well as the mistakes, encountered during this activity have taught the SEL a great deal about the overall process of experimental software engineering (Reference 3). The information presented here is based on the 100 production projects studied, some 200,000 forms collected and analyzed, data and subjective information from 800 to 1,000 people, and more than 15 years of experience with various levels of software engineering experiments.

The seven key principles that the staff of the SEL has derived are these:

1. Improvement is characterized by continuous, sustained, and methodical change, not by waiting for some technology breakthrough.

2. Experimental data analysis must be addressed in a specific context/domain.

3. The goal of experimental software engineering must be self-improvement, not external comparisons.

4. Data collection must not be the dominant element of process improvement; analysis and application are the goal.

5. Data are uncertain and fallible; you must design experimentation to accept those facts.

6. There must be a separate organizational element, not the development organization itself, to package experience.

7. Effective packaging must be experience based.

2.1. Principle 1: Improvement is characterized by continuous, sustained, and methodical change, not by waiting for some technology breakthrough.

The SEL originally expected to identify specific technologies having a potential for remarkable improvements in productivity, but now, after so many years of study, we see that any genuine improvement results from a slow evolutionary process in which change is guided by experience and learning. Many software technologies, such as Ada, reuse, OOD, SADT, CASE, and integrated environments, were initially expected to improve software development by orders of magnitude, but in fact no "N to 1" gain in productivity has been realized through any single approach.

Overall, the SEL has found that any single software technique must be integrated into a repeatable process, along with a means of measuring effectiveness and providing feedback to the development organization, before it can have a lasting favorable effect on the development process in an organization. This concept of continuous improvement, of course, is similar to the concept of TQM and other improvement paradigms; in this environment, it has led to impressive gains in reliability (65 percent improvement in 17 years), in reuse (some classes rising from a 25-percent average to more than a 60-percent average), and in productivity of new software (25 percent greater in 1993 than in 1980), but it is important to note that these gains have been small incremental changes guided by the continuous examination of completed projects.

2.2. Principle 2: Experimental data analysis must be addressed in a specific context/domain.

One of the most significant barriers to successful experimental software engineering, and to data analysis in particular, is the failure of the analysts to completely understand and factor in the context from which the information was taken. Measurement data are a very attractive device to the software analyst, but unless the domain is understood, very misleading or erroneous conclusions will be drawn. It is imperative to understand the characteristics of the project from which the data were extracted.

Figure 1 depicts data that characterize the level of reuse for projects in the SEL during the past 8 years. The data represent all projects, developed on several different platforms, of varying size, using different methodologies, and, most importantly, having different goals. Without understanding the contexts of the data, one could easily conclude that the level of reuse has remained essentially constant in the SEL, and, on the average, that could be true.

Figure 2, however, depicts the same data with projects that used OOD identified by circles. These projects, all of a similar class, had a focus on reuse and OOD. This chart indicates, therefore, that the particular technology (OOD) is showing a favorable trend for software reuse, but also that many more circumstances (context) must be studied before that effect can be completely understood.

Figure 1. Recent SEL Projects With Detailed Reuse Information (multiple goals across all projects; percent reuse plotted by year, 1984-1992)

Figure 2. OOD-Specific Projects of "Similar Class" (focused reuse experiments; the same data with OOD projects circled)

Unfortunately, it is nearly impossible to capture every relevant piece of information pertaining to the context of a project and store it in the database. The analysts must understand not only the environment, the staff, the management style, the project goals, and the like; they must also understand any extenuating circumstances that could have caused certain characteristics. This can only be done by spending time with the developers and managers; very questionable conclusions can be drawn from bulk data alone. Because only limited information pertaining to the context can be captured in the database, the SEL discourages external usage of the database.

2.3. Principle 3: The goal of experimental software engineering must be self-improvement, not external comparisons.

Any process-improvement program is intended to guide the evolution of change toward some set of objectives within the organization. The principal goal is self-improvement; it is not comparison with other domains. When software organizations focus on external comparisons, they can easily lose sight of their own goals.

Also, problems may not be similar from one domain to another, and comparisons across domain boundaries are always uncertain. As an organization comes to understand the general characteristics of its own process and products, some high-level attributes may be compared generally, but that must be treated as a secondary goal, not as the primary goal of any experimental software engineering effort.


If the focus of any process improvement should be on the local domain, it makes little sense to expend effort generating broadly accepted standard definitions (lines of code, errors, etc.) or building national databases.

The experience of the SEL indicates that effort is better spent in understanding the organization's own domain, its own set of definitions, its own software data, and its own goals. As the organization matures, it may begin to map its own characteristics into the characteristics of other domains for some possible high-level comparisons, but, again, this is a secondary process of no certain value.

Consequently, the assumption that more broadly populated databases, local or national, will result in greater insight into experimental software engineering is not valid. Rather than continuously adding new data to a database, the key goal must be generating more focused data of higher quality.

2.4. Principle 4: Data collection must not be the dominant element of process improvement; analysis and application are the goal.

Software measurement is often viewed as a goal in itself, and it is sometimes assumed to be a measure of success. This erroneous perception of measurement is often sustained by the tremendous publicity given to measurement as a goal: the conferences, papers, studies, guidebooks, and so forth that support the conclusion that measurement itself is a goal worth pursuing, while ignoring the question of what application the measurement information will be put to and why.

As long as measurement is viewed as a goal in itself, and not as a means to an end, the measurement program will be doomed to failure. In fact, without a clear application, there is no reason to collect the measurement data. This fact in itself should minimize the amount of data that any software organization attempts to collect. But often, measurement programs are instituted so that measurement data can be used to qualify complexity and general characteristics of software without a prior definition of just what criteria are to be acceptable and for what reason. This usually leads to the collection of excessive amounts of data and very questionable (forced) interpretation of some of the information.

As a rule of thumb, there are three functional areas in typical measurement programs: data collection, data analysis, and general support. The bulk of any effort put into a significant software measurement program must be directed at analysis, not at collecting and processing the information. In the SEL, the typical analysis function consumes approximately 60 percent of the total measurement effort. The routine data collection is less than 10 percent of the effort, and the overall processing, archiving, quality assurance, and so forth runs about 30 percent of the effort.
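As a rough illustration (the planned staff-hour figures below are hypothetical; the target proportions simply restate the SEL's approximate experience quoted above), a planned measurement budget can be checked against this split before data collection begins:

    # Sketch only: compares a planned measurement budget against the SEL's rough
    # experience (analysis ~60%, routine collection <10%, processing/support ~30%).
    # The staff-hour figures in the example plan are hypothetical.
    SEL_EXPERIENCE = {"analysis": 0.60, "collection": 0.10, "support": 0.30}

    def measurement_budget_shares(hours):
        """hours: dict mapping 'collection', 'analysis', 'support' to planned staff-hours."""
        total = sum(hours.values())
        return {area: h / total for area, h in hours.items()}

    if __name__ == "__main__":
        plan = {"collection": 400, "analysis": 500, "support": 300}  # hypothetical plan
        for area, share in measurement_budget_shares(plan).items():
            target = SEL_EXPERIENCE[area]
            note = "" if share <= target + 0.10 else "  <- well above the SEL's experience"
            print(f"{area:<10} planned {share:6.1%}   SEL ~{target:.0%}{note}")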

2.5. Principle 5: Data are uncertain and fallible; you must design experimentation to accept those facts.

Many inexperienced researchers see any software metrics database populated with production software characteristics as a reservoir of answers for empirical studies in software engineering technology. Carrying out large numbers of statistical analyses on very large databases with hundreds or thousands of projects, such as computing numerous correlation coefficients on every set of parameters that can be made available, can easily lead to very erroneous implications. The context, goals, subjective information, and domain understanding are as important as the amount of data available for study.

Figure 3 is an example of one study that the SEL carried out in 1985. By merely running a large number of statistical correlations on a class of FORTRAN modules, one could almost conclude that large modules (as counted by lines of code) are better (less prone to error) than smaller ones, and that modules of higher complexity (as measured by McCabe complexity) are more reliable than less complex ones. These data, taken in a very limited context, differed completely from subsequent studies, which later showed the result to be inconclusive.

Figure 3. Software Measures in the SEL (error rate per module plotted against McCabe complexity and against lines of code for a sample of 688 modules)

CORRELATIONS (SAMPLE OF 688 MODULES)

                     TOTAL   EXECUTABLE   McCABE       HALSTEAD
                     LINES   LINES        COMPLEXITY   LENGTH
  HALSTEAD LENGTH    0.85    0.91         0.91         1.00
  McCABE COMPLEXITY  0.81    0.87         1.00
  EXECUTABLE LINES   0.84    1.00
  TOTAL LINES        1.00
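The pitfall is easy to reproduce. The following sketch (illustrative only; the input file and its columns are hypothetical, not the SEL database) computes the same kind of size-versus-error-rate correlation; a naive reading of a negative coefficient would "show" that bigger modules are more reliable, which is exactly the trap described above.

    # Sketch only: shows how a naive correlation over module data can suggest that
    # larger modules are "better" (lower error rate). The input file and its
    # columns (lines_of_code, errors) are hypothetical, not the SEL dataset.
    import csv
    from math import sqrt

    def pearson(xs, ys):
        """Plain Pearson correlation coefficient."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sqrt(sum((x - mx) ** 2 for x in xs))
        sy = sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    def size_vs_error_rate(path="modules.csv"):
        sizes, rates = [], []
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                size = float(row["lines_of_code"])
                sizes.append(size)
                rates.append(float(row["errors"]) / size)  # errors per line of code
        r = pearson(sizes, rates)
        print(f"correlation(module size, error rate) = {r:+.2f}")
        # A negative r tempts the conclusion "large modules are more reliable";
        # without context (module role, reuse, testing history) that inference is unsafe.

    if __name__ == "__main__":
        size_vs_error_rate()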

In addition to the fact that such data, processed with appropriate statistical analysis and correlation coefficients, will not automatically generate new insights, researchers also find, inevitably, that data are faulty, missing, inconsistent, or otherwise unusable, perhaps to such a degree that the database will be viewed as worthless and fully corrupted. This is a universal occurrence; one cannot assume that any organization can produce complete, accurate, and consistent software development data. Completely accurate data are precluded by the tremendous ambiguity and uncertainty inherent in the software process itself. Even the best data-collection procedures, contending with varying terminology, the subjective nature of much of the information, and the limited resources that can be spent on producing such information, can produce at most only a view of general trends. There are at least five key points that must be considered in handling data for experimental software engineering:

1. The context of the information.

2. Defined goals of the process and organization.

3. Subjective information from the developers and managers.

4. Measurement data.

5. Qualitative analysis of the data and information.

Each of these is vital in carrying out valid studies in experimental software engineering. To assume that the data by themselves can provide more insight into the process than any of the other factors is a mistake.

2.6. Principle 6: There must be a separate organizational element, not the development organization itself, to package experience.

Many measurement programs and process improvement programs fail because the developers are expected to use any collected set of measures and apply the data toward self-improvement. In fact, successful developers have a single goal: to produce quality software on a given schedule and for a given budget. They have neither the time nor the interest to develop measurement programs or to start writing new processes for ensuing projects. Because of this, a process improvement program is more likely to be effective when a separate organization is established specifically to acquire, assess, synthesize, and feed back data to the developers. The developers are then free to produce the software, having only to provide the small amount of additional information that this new organization needs to carry out the process improvement studies.

In addition to the analysts who are responsible for the synthesis and feedback to the developers, there should also be a support organization to handle the processing of data, quality assurance, archiving, library maintenance, and so forth; such a group is invaluable to the overall success of the process improvement efforts.

2.7. Principle 7: Effective packaging must be experience based.

One of the key lessons that the SEL has learned over the past 15 years is that the developers have excellent insight into which processes are useful and appropriate and which are of minimal use or even detrimental. They are therefore best qualified to produce software policies or processes for a development group. The experiences and general insight of the development teams must be incorporated into any attempt to generate processes or to carry out process improvement to any helpful degree.

As part of the paradigm of the experience factory (Reference 6), the first major step is to understand the local environment. This implies not only gathering data from development projects but also listening to the teams that produce software in that environment. Thus, the strengths, weaknesses, needs, and successes of the environment serve as a basis for useful processes captured in the form of standards or training programs, and, most important, the processes themselves will reflect the experiences of the development organization.

As developers gain experience with defined standard processes for particular domains, they will be better able to judge the impact of specific elements of the process. The experience packagers must adjust the process in response to observations that some elements (perhaps even something as specific as the design review process) are of no value; this includes changes to standards, training, tools, and general management practices.

ACKNOWLEDGEMENT

Kevin Orlin Johnson of CSC carried out the complete editing, integration, and organization of this paper in its final form.

REFERENCES

1. Green, S., et al., The Cleanroom Case Study in the Software Engineering Laboratory: Project Description and Early Analysis, SEL-90-002, March 1990.

2. McDermott, T., et al., Gamma Ray Observatory Dynamic Simulator in Ada (GRODY) Experiment Summary, SEL-90-004, September 1990.

3. Card, D. N., McGarry, F. E., and Page, G. T., "Evaluating Software Engineering Technologies," IEEE Transactions on Software Engineering, July 1987.

4. Basili, V. R., "Quantitative Evaluation of Software Methodology," Proceedings of the First Pan-Pacific Computer Conference, September 1985.

5. Morusiewicz, L., and Valett, J., Annotated Bibliography of Software Engineering Laboratory Literature, SEL-82-1106, November 1992.

6. Basili, V. R., and Caldiera, G., "Methodological and Architectural Issues in the Experience Factory," Proceedings of the Sixteenth Annual Software Engineering Workshop, SEL-91-006, December 1991.


EXPERIMENTAL SOFTWARE ENGINEERING

17 YEARS OF LESSONS IN THE SEL

FRANK McGARRY, NASA/GSFC


EXPERIMENTAL SOFTWARE ENGINEERING IN THE SEL: BACKGROUND

(Chart: number of experiments conducted, 1976-1992. Sample experiments: distributed development, CASE (2), Cleanroom (4), Ada (9), OOD (10), IV&V (3), testing approaches (4), design techniques (5), defect analysis.)

SEL HAS CONDUCTED OVER 100 EXPERIMENTS USING NASA PRODUCTION SOFTWARE PROJECTS


SUMMARY OF ACTIVITIES (1976-1992)

SOFTWARE DEVELOPMENT
  # PROJECTS: 105
  FORMS: 220,000
  PEOPLE: 600 TO 1,000
  NASA MISSIONS: 27
  EFFORT: 2,000 STAFF YEARS

EXPERIENCE PACKAGING (ANALYSIS)
  TECHNOLOGIES STUDIED: >50
  PAPERS PRODUCED (EXPERIMENTAL ANALYSIS): >300
  POLICIES PRODUCED (PROCESSES): DEVELOPMENT, MANAGEMENT, COSTING, AND OTHERS
  EFFORT: 200 STAFF YEARS

SUPPORT (DATA PROCESSING/ARCHIVING)
  DATABASE: MORE THAN 135 MB OF MEASURES
  REPORTS/PAPERS OUT: 5,000 TO 10,000
  FORMS PROCESSING: 220,000
  EFFORT: 100 STAFF YEARS

THE SEL HAS INFUSED THE EXPERIENCE OF OVER 100 PROJECTS BACK INTO THE DEVELOPMENT ENVIRONMENT

EXPERIENCE HAS RESULTED IN SEVEN GUIDING PRINCIPLES FOR EXPERIMENTAL SOFTWARE ENGINEERING

(1) IMPROVEMENT IS CHARACTERIZED BY CONTINUAL, SUSTAINED, AND METHODICAL CHANGE; NOT BY TECHNOLOGY BREAKTHROUGH

• WILL NOT ATTAIN "N TO 1" IMPROVEMENT (WAITING ON UNIQUE TECHNOLOGY)
• EFFECTIVE CHANGE/IMPROVEMENT MUST BE DRIVEN BY EXPERIENCE (MUST FOCUS ON SPECIFIC "DOMAIN")
• SINGLE "TECHNIQUES" ARE INCOMPLETE ANSWERS
• CHANGE MUST BE MEASURED AND DEMONSTRATED

SHOULD A TECHNOLOGY BREAKTHROUGH OCCUR, EXPERIENCE-DRIVEN ORGANIZATIONS WOULD BE IDEALLY POSTURED TO ADOPT IT.


ELEMENTS OF IMPROVED SOFTWARE

(Diagram: single methods such as data flow diagrams, structural testing, entity relationships, inspections, box structure analysis, reliability modeling, strong typing, abstractions, information hiding, modularity, inheritance, and polymorphism feed integrated methodologies such as SADT, Ada, and OOD, which in turn feed improvement techniques such as the experience factory, process improvement, and TQM.)

SUSTAINED IMPROVEMENT REQUIRES:
  - TIME
  - MORE THAN "METHOD" ADOPTION

SAMPLE TYPICAL GOALS:
  - LOWER COST
  - HIGHER RELIABILITY
  - IMPROVED MANAGEABILITY
  - DECREASED CYCLE TIME
  - HIGHER "QUALITY"

(2) EXPERIMENTAL DATA ANALYSIS MUST BE ADDRESSED IN SPECIFIC CONTEXT (DOMAIN)

• DATA OUT OF CONTEXT WILL BE MISLEADING AND RESULT IN ERRONEOUS IMPLICATIONS
• EXTREME CAUTION REQUIRED IN ANY GENERALIZATION OR IN EXTERNAL COMPARISONS
• SHARING OF DATA IS OF VERY LIMITED BENEFIT

(Charts: percent reuse by year, 1984-1992. Left: recent SEL projects with detailed reuse information (multiple goals across all projects). Right: OOD-specific projects of "similar class" (focused reuse experiments), with OOD projects circled.)

(3) GOAL OF EXPERIMENTAL SOFTWARE ENGINEERING MUST BE SELF-IMPROVEMENT, NOT EXTERNAL COMPARISONS

• COMPARING WITH EXTERNAL DOMAINS HAS VERY LIMITED VALUE
  - YOUR LOCAL DOMAIN IS THE WHOLE WORLD
  - EXTERNAL COMPARISONS OFTEN MISLEADING, INCONCLUSIVE, AND TIME CONSUMING
  - ANY COMPARISON MUST BE ADDRESSED AT EXTREMELY HIGH LEVEL (e.g., COMPARATIVE PROCESSES OR DEFECT RATE ON IDENTICAL PRODUCTS)

• NATIONALIZING THE EXPERIMENTAL PROCESS IS WRONG FOCUS
  - NATIONAL STANDARDS NOT OF SIGNIFICANT CONCERN (e.g., LINE OF CODE)
  - NATIONAL "DATA BASE" OF MEASUREMENT DATA NOT NEEDED (NOT AT THIS TIME)

• DEVELOP YOUR OWN DEFINITIONS
  - DEVELOP LOCAL DEFINITIONS (e.g., LINE OF CODE, ERROR CLASSES, etc.)
  - AS YOU MATURE, DEVELOP TRANSFORMATIONS (e.g., 1 LINE OF EXECUTABLE CODE ≈ 2 PHYSICAL LINES)

• KEY TO IMPROVED RESEARCH IS NOT BROADER POPULATED DATA BASES (BETTER QUALITY AT LOCAL LEVEL; NOT MORE DATA)

(4) DATA COLLECTION (MEASUREMENT) MUST NOT BE DOMINANT ELEMENT OF PROCESS IMPROVEMENT; ANALYSIS/APPLICATION IS THE GOAL

• EXCESSIVE EMPHASIS OCCASIONALLY PUT ON SEARCHING FOR "THE MEASURE"
  - AS MANAGEMENT AID, USE SIMPLE, PROVEN DATA (e.g., EARNED VALUE)
  - DON'T SEARCH FOR KEY "THRESHOLD" MANAGEMENT INDICATOR

• APPLICATION/USE OF MEASURES MUST DOMINATE COLLECTION/PROCESSING OF MEASURES
  - MORE "LESSONS LEARNED" ARE WRITTEN THAN READ
  - MINIMIZE MEASUREMENT TO DATA EXPLICITLY REQUIRED

• USE OF MEASUREMENT IS TOWARD "ENGINEERING" AND "UNDERSTANDING"
  - GOALS MUST BE DEFINED EXPLICITLY (e.g., ARE INSPECTIONS LOWERING ERROR RATES?)
  - DEVELOPERS HAVE RIGHT TO KNOW

(Chart: current SEL expenditures by function; collecting data about 7 percent, with the remainder split between support and analysis/packaging.)

GETTING PRIORITIES CORRECT (AS % OF THE DEVELOPMENT COST)

(Chart: SEL "improvement" cost as a percentage of development cost, broken into measuring, support, and analysis (packaging), for 1976-81 (excessive effort on "data"), 1982-87 (still too much "data"), and 1988-92 (mature experience factory). Costs include SME and some pure research efforts.)

(5) UNCERTAINTY/FALLIBILITY OF MEASUREMENT DATA IS FACT; DESIGN EXPERIMENTATION TO ACCEPT THAT

• MEASUREMENT DATA CANNOT BE TREATED AS EXACT OR COMPLETE INFORMATION
  - CONTEXT + DEFINED GOALS + SUBJECTIVE INFO + DATA + QUALITATIVE ANALYSIS (ALL PART OF THE EQUATION)

• RUNNING 1000 CORRELATION STUDIES ON 1000 MEASURES WILL CERTAINLY PRODUCE SOME QUESTIONABLE "BREAKTHROUGH"

• SUBJECTIVE INSIGHT EXTREMELY VALUABLE

• MULTIPLE EXPERIMENTS NECESSARY
  - SINGLE STUDIES CAN BE MISLEADING
  - CANNOT PROMISE QUICK INSIGHTFUL FEEDBACK

DO NOT PLACE UNFOUNDED CONFIDENCE IN RAW MEASUREMENT DATA

SOFTWARE MEASURES IN THE SEL

(Charts: error rate plotted against McCabe complexity and against lines of code for a sample of 688 modules.)

CORRELATIONS (SAMPLE OF 688 MODULES)

                     TOTAL   EXECUTABLE   McCABE       HALSTEAD
                     LINES   LINES        COMPLEXITY   LENGTH
  HALSTEAD LENGTH    0.85    0.91         0.91         1.00
  McCABE COMPLEXITY  0.81    0.87         1.00
  EXECUTABLE LINES   0.84    1.00
  TOTAL LINES        1.00

(6) MUST ESTABLISH SEPARATE ORGANIZATIONAL ELEMENT TO ADDRESS EXPERIENCE PACKAGING (CANNOT BE THE DEVELOPMENT ORGANIZATION)

• DEVELOPERS CANNOT BE RESPONSIBLE FOR ANALYSIS/PACKAGING
  - THEY HAVE OTHER PRIORITIES
  - WILL HAVE LIMITED INTEREST
  - BURDEN MUST BE SHARED

• ROLE OF "PACKAGING" ORGANIZATION IS TO ANALYZE, SYNTHESIZE, AND PACKAGE

(Diagram: the packagers study technology, produce policies, and archive data.)

(7) EFFECTIVE PACKAGING MUST BE EXPERIENCE BASED

• EXPERIENCES OF DEVELOPERS MUST BE REFLECTED IN "PACKAGE" (e.g., STANDARDS, PROCESS, ...)
• STANDARDS MUST REFLECT KNOWLEDGE/ENVIRONMENT/EXPERIENCES OF YOUR DOMAIN
• COOKBOOKS CAN AND HAVE BEEN PRODUCED AND ARE EXTREMELY VALUABLE

(Diagram: UNDERSTANDING - KNOW THE NATURE OF YOUR BUSINESS; ASSESSING - DETERMINE EFFECTIVE IMPROVEMENTS; PACKAGING - MAKE IMPROVEMENTS PART OF YOUR BUSINESS.)

POLICIES AND STANDARDS SHOULD REFLECT DOMAIN-SPECIFIC EXPERIENCES AND CHARACTERISTICS

TEST TECHNIQUES EXPERIMENT DESCRIPTION

• 3 APPROACHES STUDIED
  - CODE READING
  - FUNCTIONAL TESTING
  - STRUCTURAL TESTING
• 32 PEOPLE PARTICIPATED (GSFC, UM, CSC)
• 3 UNIT-SIZED (100 SLOC) PROGRAMS SEEDED WITH ERRORS

(Charts: percent of faults detected, and number of faults detected per hour of effort, for code reading, functional testing, and structural testing.)

EFFECTIVE TECHNOLOGY SHOULD FOCUS ON PERSONNEL POTENTIAL

SUMMARY

• EXPERIENCE-BASED IMPROVEMENT IS DOABLE FOR SOFTWARE

• UTILIZE THE OPINION/EXPERIENCE OF THE DEVELOPMENT ORGANIZATION

- THEY SHOULD DRIVE THE PROCESS

• DOMAINS/CONTEXTS MUST BE REALIZED

(ALL SOFTWARE IS NOT THE SAME)

• EXPERIENCE FACTORY STRUCTURE ENABLES SELECTION AND ADOPTION OF EVOLVING TECHNOLOGY (e.g., OOD)
