
Software Measurement Body of Knowledge

Alain Abran, Alain April
Software Engineering and IT Department, ETS (École de technologie supérieure) University, Montreal, Quebec, Canada

Luigi Buglione
Industry and Services Business Unit, Engineering.IT SPA, Rome, Italy

Abstract
Measurement is fundamental to sciences and to the engineering disciplines. In the 2004 version of the Guide to the Software Engineering Body of Knowledge—the SWEBOK Guide—the software measurement topic is dispersed throughout the Guide and discussed in every knowledge area. To facilitate and improve the teaching and use of measurement in software engineering, an integrated and more comprehensive view of software measurement has been built in the form of a software measurement body of knowledge. This entry presents this integrated view of software measurement. For the 2010 version of the SWEBOK Guide, it has been proposed that software measurement be assigned its own knowledge area.

INTRODUCTION

Measurement is fundamental to sciences and to the engineering disciplines. Similarly, the importance of measurement and its role in better management practices is widely acknowledged. Effective measurement has become one of the cornerstones of increasing organizational maturity. Measurement can be performed to support the initiation of process implementation and change, to evaluate the consequences of process implementation and change, or on the product itself.

At the inception of the Software Engineering Body of Knowledge[1] project, measurement activities had been assigned to all the Knowledge Area (KA) associate editors as a criterion for identifying relevant measurement-related knowledge in their respective KAs. Measurement is therefore a common theme present across most software engineering activities. However, no attempt had been made to ensure that the measurement-related topics were fully covered. This entry presents both an integrated view of the measurement-related knowledge spread throughout the other chapters of the SWEBOK and improved coverage of measurement as it is actually understood in software engineering. The information presented here on software measurement has been proposed for inclusion in the 2010 version of the Software Engineering Body of Knowledge, to be published jointly by the Computer Society and IEEE.

BREAKDOWN OF TOPICS FOR SOFTWARE MEASUREMENT

The breakdown of topics for software measurement is presented in Fig. 1, which gives a graphical representation of its top-level decomposition. Brief descriptions of the major topics, together with appropriate references for each topic, are given below.

BASIC CONCEPTS

This section presents the foundations of software measurement: the main definitions and concepts, as well as software information models.

Foundations

In sciences and engineering disciplines, it is the domain of knowledge referred to as “metrology” that focuses on the development and use of measurement instruments and measurement processes. The international consensus on metrology is documented in the ISO vocabulary of basic and general terms in metrology.[2] The ISO vocabulary defines 144 measurement-related terms, organized into the following five categories:

1. Quantities and units
2. Measurements
3. Devices for measurement
4. Properties of measurement devices
5. Measurement standards (etalons)

Encyclopedia of Software Engineering DOI: 10.1081/E-ESE-120044182
Copyright © 2011 by Taylor & Francis. All rights reserved.

To represent the relationships across the elements of these five categories, the classical representation of a production process is used: input, output, and control variables, together with the process itself. In Fig. 2, the


output is represented by the “measurement results” and the process itself by “measurement,” in the sense of measurement operations, while the control variables are the “etalons” and the “quantities and units.” The input is then represented by the “measuring devices,” and the measurement operations are themselves influenced as well by the “properties of measurement devices.”

As in any new domain of application, empirical trials represent the starting point for developing a measure, not necessarily following a rigorous process. But once a community of interest accepts a series of measures to quantify a concept (or a set of concepts through a modeling of the relationships across the concepts), it is usual to ascertain that the proposed measures are verified and validated.

Fig. 1 Breakdown of topics for the software measurement KA.

Fig. 2 Model of the categories of metrology terms in the ISO terminology.


Definitions and Concepts

In sciences and engineering disciplines, it is the domain of knowledge referred to as “metrology” that focuses on the development and use of measurement instruments and measurement processes. The international consensus on metrology is documented in the ISO vocabulary of basic and general terms in metrology.[2]

The quality of the measurement results (accuracy, reproducibility, repeatability, convertibility, random measurement errors) is essential for measurement programs to provide effective and bounded results. Key characteristics of measurement results and the related quality of measuring instruments are defined in the ISO international vocabulary on metrology previously cited. The theory of measurement establishes the foundation on which meaningful measurements can be made. The theory of measurement and scale types is discussed in Ref. [3].

Measurement is defined in the theory as “the assignment of numbers to objects in a systematic way to represent properties of the object.” An appreciation of software measurement scales, and of the implications of each scale type for the subsequent selection of data analysis methods, is important.

The meaningfulness of measurements is related to the following classification of scales:

1. If the numbers assigned merely provide labels to classify the objects, they are called nominal.
2. If they are assigned in a way that ranks the objects (e.g., good, better, best), they are called ordinal.
3. If they deal with magnitudes of the property relative to a defined measurement unit, they are called interval (the intervals between the numbers are uniform unless otherwise specified, and are therefore additive).
4. Measurements are at the ratio level if they have an absolute zero point, so that ratios of distances to the zero point are meaningful.
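As a rough illustration of why scale types matter for the subsequent selection of data analysis methods, the sketch below encodes which summary statistics remain meaningful at each level. The statistic names and groupings are illustrative only; they are not taken from the source.

```python
# Sketch (illustrative, not from the source): which summary statistics
# remain meaningful for each scale type described above.
MEANINGFUL_STATS = {
    "nominal": {"mode", "frequency"},
    "ordinal": {"mode", "frequency", "median", "percentile"},
    "interval": {"mode", "frequency", "median", "percentile", "mean", "std_dev"},
    "ratio": {"mode", "frequency", "median", "percentile", "mean", "std_dev",
              "geometric_mean", "ratio"},
}

def is_meaningful(stat: str, scale: str) -> bool:
    """Return True if applying `stat` to data on `scale` preserves meaning."""
    return stat in MEANINGFUL_STATS[scale]

print(is_meaningful("mean", "ordinal"))    # False: averaging ranks is not meaningful
print(is_meaningful("median", "ordinal"))  # True: the median of ranks is meaningful
```

For example, averaging ordinal defect-severity codes produces a number with no empirical interpretation, while reporting their median does not.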

Software Measurement Methods and Models

Software engineers who specialize in measurement should be able to design a software measurement method when required. This entails that the following concepts are known and can be used:

1. The measurement principle: This is a precise definition of the entities concerned and of the attribute to be measured. It involves a description of the empirical world and of the numerical world to which the entities are to be mapped:
   a. The empirical world can be described through conceptual modeling techniques or through mathematical axioms, or both.
   b. The numerical world can be, in the general case, any mathematical set, together with the operations performed on it. It can also be defined through the selection of one scale type (ordinal, interval, or ratio). This also includes the definition of units and other allowed composition operations on the mathematical structure.

2. The measurement method: This is a description of the mapping making it possible to obtain a value for a given entity. It involves some general properties of the mapping (mathematical view), together with a collection of assignment rules (operational description):
   a. Mapping properties: In addition to the homomorphism of the mapping, this can also include a description of other mapping properties; for instance, a unit axiom (the mandatory association of the number 1 with an entity of the empirical set) or, more generally, an adequate selection of a small finite representative set of elements ranked by domain practitioners.
   b. Numerical assignment rules: These correspond to an operational description of the mapping, i.e., how to map empirical entities to numerical values: identification rules, aggregation rules, procedural modeling of a family of measurement instruments, usage rules, etc.

3. The measurement procedure: This corresponds to a complete technical description of the modus operandi of the measurement method in a particular context (goal, precision, constraints, etc.) and with a particular measuring instrument.
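A minimal sketch of these three design products as a data structure may help fix the ideas. All names and the line-counting rule below are invented for illustration; neither the source nor any standard prescribes this representation.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class MeasurementMethod:
    """Illustrative sketch of the design products described above."""
    entity_type: str      # measurement principle: which entities are measured
    attribute: str        # measurement principle: which property is measured
    scale_type: str       # numerical world: nominal/ordinal/interval/ratio
    unit: str             # numerical world: the unit definition
    assignment_rule: Callable[[str], float]  # measurement method: the mapping

    def apply(self, entity: str) -> tuple[float, str]:
        """Measurement procedure: apply the method to one entity."""
        return self.assignment_rule(entity), self.unit

# Example: a (simplistic) source-code size method counting non-blank lines.
sloc = MeasurementMethod(
    entity_type="source file",
    attribute="size",
    scale_type="ratio",
    unit="LOC",
    assignment_rule=lambda text: float(
        sum(1 for ln in text.splitlines() if ln.strip())),
)
value, unit = sloc.apply("x = 1\n\ny = 2\n")
print(value, unit)  # 2.0 LOC
```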

Verification of the design of a software measure refers to the verification of each of the above design products of a software measurement method, and of its application in a specific measurement context. The concept of the measurement information model (MIM) is presented and discussed in Appendix A of ISO 15939 (see Fig. 3) to help in determining what must be specified during measurement planning, performance, and evaluation:

• A specific measurement method is used to collect a base measure for a specific attribute.
• The values of two or more base measures can then be combined through a computational formula (a measurement function) to construct a specific derived measure.
• These derived measures are in turn used in an analysis model to produce an indicator, which is a value; the indicator’s value is interpreted, in the language of the measurement user, to explain its relationship to the information needed, producing an Information Product that satisfies the user’s Information Needs.
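The flow above can be sketched end to end. The base measures, the measurement function, and the decision criterion are all invented for the example; they are not part of ISO 15939.

```python
# Hypothetical walk-through of the measurement information model flow:
# base measures -> measurement function -> derived measure
#               -> analysis model -> indicator.

# Data collection: base measures, each collected by a measurement method.
tasks_completed = 27      # base measure 1
tasks_planned = 36        # base measure 2

# Data preparation: a measurement function produces a derived measure.
progress = tasks_completed / tasks_planned   # -> 0.75

# Data analysis: an analysis model turns the derived measure into an
# indicator, interpreted in the language of the measurement user.
TARGET = 0.80  # invented decision criterion (information need: "on schedule?")
indicator = "behind plan" if progress < TARGET else "on plan"
print(f"progress = {progress:.0%} -> {indicator}")
```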


The bottom section of Fig. 4 represents “Data Collection” (i.e., the measurement methods and the base measures) in this MIM; the middle section, “Data Preparation,” using agreed-upon mathematical formulas and related labels (i.e., measurement functions and derived measures); and the top section, “Data Analysis” (i.e., the analysis model, indicator, and interpretation).

THE MEASUREMENT PROCESS

This topic follows the international standard ISO/IEC 15939, which describes a process that defines the activities and tasks necessary to implement a software measurement process and includes, as well, an MIM (Fig. 5).

Establish and Sustain Measurement Commitment

Each software measurement endeavor should be guided by organizational objectives and driven by a set of measurement requirements established by the organization and the project. An organizational objective might be “first-to-market with new products.” This in turn might engender a requirement that the factors contributing to this objective be measured, so that projects can be managed to meet it.

The scope of measurement may consist of a functional area, a single project, a single site, or even the whole enterprise. All subsequent measurement tasks related to this requirement should be within the defined scope. In addition, the stakeholders should be identified. The commitment must be formally established, communicated, and supported by resources. The organization’s commitment to measurement is an essential success factor, as evidenced by the assignment of resources for implementing the measurement process.

Plan the Measurement Process

Fig. 3 ISO 15939 measurement model.
Source: From Systems and software engineering—Measurement process.[4]

Fig. 4 Measurement steps and main phases within ISO 15939.

The organizational unit provides the context for measurement, so it is important to make this context explicit and to articulate the assumptions that it embodies and the constraints that it imposes. Characterization can be in terms of organizational processes, application domains, technology,


and organizational interfaces. An organizational process model is also typically an element of the organizational unit characterization.[4]

Information needs are based on the goals, constraints, risks, and problems of the organizational unit. They may be derived from business, organizational, regulatory, and/or product objectives. These information needs must be identified and prioritized. Then, a subset to be addressed must be selected and the results documented, communicated, and reviewed by stakeholders.[4]

Candidate measures must be selected, with clear links to the information needs. Measures must then be selected based on the priorities of the information needs and on other criteria, such as the cost of collection; the degree of process disruption during collection; ease of analysis; ease of obtaining accurate, consistent data; and so on.[4] Measurement planning encompasses collection procedures and schedules, as well as storage, verification, analysis, reporting, and configuration management of data.[4] The criteria for evaluation are influenced by the technical and business objectives of the organizational unit. Information products include those associated with the product being produced, as well as those associated with the processes being used to manage and measure the project.[4]

The measurement plan must be reviewed and approved by the appropriate stakeholders. This includes all data collection procedures; storage, analysis, and reporting procedures; evaluation criteria; schedules; and responsibilities. Criteria for reviewing these artifacts should have been established at the organizational unit level or higher and should be used as the basis for the reviews. Such criteria should take into consideration previous experience, the availability of resources, and potential disruptions to projects when changes from current practices are proposed. Approval demonstrates commitment to the measurement process.

Resources should be made available for implementing the planned and approved measurement tasks. Resource availability may be staged in cases where changes are to be piloted before widespread deployment. Consideration should be paid to the resources necessary for the successful deployment of new procedures or measures.[4] This includes evaluation of available supporting technologies, selection of the most appropriate technologies, and acquisition and deployment of those technologies. The next two processes of Fig. 5 (i.e., “Perform the measurement process” and “Evaluate measurement”) are described in more detail in the balance of this text.

THE MEASUREMENT BY SOFTWARE LIFECYCLE PHASE

This topic follows the international standard ISO/IEC 15939, which describes a process that defines the activities and tasks necessary to implement a software measurement process and includes, as well, an MIM.

Primary Processes

Software has a number of primary processes, which are described in great detail in the ISO 12207 standard.[5] This topic describes the typical software measurement activities that a software engineer engaged in measurement should master.

Software requirements

Fig. 5 The ISO 15939 process model.
Source: From Systems and software engineering—Measurement process.[4]

As a practical matter, it is typically useful to have some concept of the “volume” of the requirements for a particular software product. This number is useful in evaluating


the “size” of a change in requirements, in estimating the cost of a development or maintenance task, or simply for use as the denominator in other measurements.

Functional size measurement (FSM) is a technique for evaluating the size of a body of functional requirements; ISO has adopted the ISO 14143 family of international standards for the measurement of the functional size of software.
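As an illustration of functional size serving as the denominator in other measurements, as described above, the sketch below derives two common ratios. All counts are invented for the example.

```python
# Hypothetical example: functional size (an FSM result) used as the
# denominator in other measurements. All numbers are invented.
functional_size_fp = 250      # functional size in function points
defects_found = 30
effort_hours = 1600

defect_density = defects_found / functional_size_fp   # defects per FP
productivity = functional_size_fp / effort_hours      # FP per hour
print(f"{defect_density:.2f} defects/FP, {productivity:.3f} FP/hour")
```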

Software design

Measures can be used to assess or quantitatively estimate various aspects of a software design’s size, structure, or quality. Most of the measures that have been proposed depend on the approach used for producing the design. These measures are classified into two broad categories:[6,7]

• Function-oriented (structured) design measures: The design’s structure is obtained mostly through functional decomposition and is generally represented as a structure chart (sometimes called a hierarchical diagram), on which various measures can be computed.
• Object-oriented design measures: The design’s overall structure is often represented as a class diagram, on which various measures can be computed. Measures of the properties of each class’s internal content can also be computed.

Software construction

Numerous construction activities and artifacts can be measured, including code developed, code modified, code reused, code destroyed, code complexity, code inspection statistics, fault-fix and fault-find rates, effort, and scheduling. These measurements are useful for managing construction, ensuring quality during construction, and improving the construction process, among other purposes.

Even though the measures applied to software source code evolve with new programming languages and styles, some “older” measures continue to be discussed and revisited with new technologies (for instance, the cyclomatic complexity number).
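Cyclomatic complexity is one such older measure: for a control-flow graph with E edges, N nodes, and P connected components, M = E − N + 2P. A minimal sketch follows; the graph is an invented example representing a single function containing one if/else.

```python
# Cyclomatic complexity computed from a control-flow graph:
# M = E - N + 2P (E = edges, N = nodes, P = connected components).
def cyclomatic_complexity(edges, nodes, components=1):
    return len(edges) - len(nodes) + 2 * components

# Invented example: the control-flow graph of a function with one if/else.
nodes = {"entry", "cond", "then", "else", "exit"}
edges = [("entry", "cond"), ("cond", "then"), ("cond", "else"),
         ("then", "exit"), ("else", "exit")]
print(cyclomatic_complexity(edges, nodes))  # 5 - 5 + 2 = 2
```

The result, 2, matches the intuition that a single if/else yields two linearly independent paths through the function.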

Software testing

Numerous testing measures have been proposed in software engineering. The most important relate to the following:

• Program measurements to aid in planning and designing testing: Measures based on program size (e.g., source lines of code or function points) or on program structure (such as complexity) are used to guide testing. Structural measures can also include measurements among program modules, in terms of the frequency with which modules call each other (see Chapter 7, Section 4.2[8]; Chapter 9[9,10,11]).

• Fault density: A program under test can be assessed by counting and classifying the discovered faults by type. For each fault class, fault density is measured as the ratio between the number of faults found and the size of the program[11] (see Chapter 20[12]; Chapter 9[13]).

• Life test, reliability evaluation: A statistical estimate of software reliability, which can be obtained by reliability achievement and evaluation (see Chapter 5, sub-topic 2.2.5[1]), can be used to evaluate a product and decide whether or not testing can be stopped (see Chapter 9[14]; pp. 146–154[15]).

• Reliability growth models: These models provide a prediction of reliability based on the failures observed under reliability achievement and evaluation (see Chapter 5, sub-topic 2.2.5[1]). They generally assume that the faults that caused the observed failures have been fixed (although some models also accept imperfect fixes), and thus, on average, the product’s reliability exhibits an increasing trend. Dozens of models have now been published; many rest on some common assumptions, while others differ. Notably, these models are divided into failure-count and time-between-failures models (see Chapters 3, 4, and 7[13]; Chapter 9[14]).

• Coverage/thoroughness measures: Several test adequacy criteria require that the test cases systematically exercise a set of elements identified in the program or in the specifications (see Chapter 5, sub-area 3[1]). To evaluate the thoroughness of the executed tests, testers can monitor the elements covered, so that they can dynamically measure the ratio between covered elements and their total number. For example, it is possible to measure the percentage of covered branches in the program flow-graph, or the percentage of functional requirements exercised among those listed in the specifications document. Code-based adequacy criteria require appropriate instrumentation of the program under test[11] (see Chapter 9[9]; Chapter 8[14]).

• Fault seeding: Some faults are artificially introduced into the program before testing. When the tests are executed, some of these seeded faults will be revealed, and possibly some faults that were already there as well. In theory, depending on which and how many of the artificial faults are discovered, testing effectiveness can be evaluated and the remaining number of genuine faults estimated. In practice, statisticians question the distribution and representativeness of seeded faults relative to genuine faults, as well as the small sample size on which any extrapolations are based. Some also argue that this technique should be used with great care, since inserting faults into software involves the obvious risk of leaving them there (see Chapter 8[14]; Section 3.1[16]).

• Mutation score: In mutation testing (see Chapter 5, sub-topic 3.4.2[1]), the ratio of killed mutants to the total number of generated mutants can be a measure of the effectiveness of the executed test set (see Sections 3.2–3.3[16]).
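Several of the measures above reduce to simple ratios. The sketch below computes fault density, branch coverage, mutation score, and a seeded-fault estimate of the genuine fault population; all counts are invented, and the seeding estimate uses a simple capture-recapture style formula that is subject to the caveats discussed above.

```python
# Illustrative computations for several of the test measures above.
# All counts are invented for the example.

faults_found = 24
size_kloc = 8.0
fault_density = faults_found / size_kloc              # faults per KLOC -> 3.0

branches_covered = 180
branches_total = 200
branch_coverage = branches_covered / branches_total   # -> 0.9

mutants_killed = 42
mutants_generated = 60
mutation_score = mutants_killed / mutants_generated   # -> 0.7

# Fault seeding: a simple capture-recapture style estimate of the total
# number of genuine faults, assuming seeded and genuine faults are
# equally detectable (the contested assumption noted above).
seeded_total, seeded_found, genuine_found = 50, 40, 30
estimated_genuine = genuine_found * seeded_total / seeded_found   # -> 37.5

print(fault_density, branch_coverage, mutation_score, estimated_genuine)
```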


• Cost/effort estimation and other test process measures: Several measures related to the resources spent on testing, as well as to the relative fault-finding effectiveness of the various test phases, are used by managers to control and improve the test process. These test measures may cover aspects such as the number of test cases specified, executed, passed, and failed, among others. Evaluation of test phase reports can be combined with root-cause analysis (RCA) to evaluate test process effectiveness in finding faults as early as possible. Such an evaluation could be associated with the analysis of risks. Moreover, the resources worth spending on testing should be commensurate with the use/criticality of the application: different techniques have different costs and yield different levels of confidence in product reliability[11] (see Chapter 4, Chapter 21, Appendix B[12]; pp. 139–145[15]).

• Termination: A decision must be made as to how much testing is enough and when a test stage can be terminated. Thoroughness measures, such as achieved code coverage or functional completeness, as well as estimates of fault density or of operational reliability, provide useful support, but are not sufficient in themselves. The decision also involves considerations about the costs and risks incurred by potential remaining failures, as opposed to the costs implied by continuing to test. See also Chapter 5, sub-topic 1.2.1[1], Test selection criteria/Test adequacy criteria (see Chapter 2, Section 2.4[8]; Chapter 2[12]).
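Test process measures of the kind listed above (test cases specified, executed, passed, failed) are typically tracked per test phase. A small sketch with invented counts:

```python
# Hypothetical per-phase test process measures, as described above:
# execution progress and pass rate. All counts are invented.
phases = {
    "unit":        {"specified": 120, "executed": 120, "passed": 112},
    "integration": {"specified": 60,  "executed": 55,  "passed": 50},
    "system":      {"specified": 40,  "executed": 30,  "passed": 24},
}

for name, c in phases.items():
    execution_rate = c["executed"] / c["specified"]  # progress of the phase
    pass_rate = c["passed"] / c["executed"]          # quality signal
    print(f"{name}: executed {execution_rate:.0%}, passed {pass_rate:.0%}")
```

A manager might combine such figures with the fault-finding effectiveness of each phase to decide where testing resources are best spent.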

Supporting Processes

Software has a number of supporting processes, which are described in great detail in the ISO 12207 standard. A supporting process “supports another process as an integral part with a distinct purpose and contributes to the success and quality of the software project. A supporting process is employed and executed, as needed, by another process.”[5]

Software configuration management

Software configuration measures can be designed to provide specific information on the evolving product or to provide insight into the functioning of the software configuration management (SCM) process. A related goal of monitoring the SCM process is to discover opportunities for process improvement. Measurements of SCM processes provide a good means for monitoring the effectiveness of SCM activities on an ongoing basis. These measurements are useful in characterizing the current state of the process, as well as in providing a basis for making comparisons over time. Analysis of the measurements may produce insights leading to process changes and corresponding updates to the SCM process.[17]

Software libraries and the various SCM tool capabilities

provide sources for extracting information about the char-

acteristics of the SCM process (as well as providing project

and management information). For example, information

about the time required to accomplish various types of

changes would be useful in an evaluation of the criteria

for determining what levels of authority are optimal for

authorizing certain types of changes. Care must be taken to

keep the focus of the surveillance on the insights that can

be gained from the measurements, not on the measure-

ments themselves.
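For instance, the time required to accomplish various types of changes can be derived from SCM change records along the following lines (a minimal sketch; the record layout and dates are invented):

```python
from datetime import date

# Hypothetical SCM change records: (change type, opened, closed).
changes = [
    ("defect_fix", date(2009, 3, 1), date(2009, 3, 4)),
    ("defect_fix", date(2009, 3, 2), date(2009, 3, 10)),
    ("enhancement", date(2009, 3, 5), date(2009, 3, 25)),
]

def mean_turnaround_days(records, change_type):
    """Average days from change request to closure for one change type."""
    durations = [(closed - opened).days
                 for kind, opened, closed in records if kind == change_type]
    return sum(durations) / len(durations)

print(mean_turnaround_days(changes, "defect_fix"))  # 5.5
```

Such per-type turnaround figures could feed the evaluation, mentioned above, of which authority levels are appropriate for approving each type of change.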

Software engineering process

The measurement process itself should be evaluated against specified evaluation criteria to determine the strengths and weaknesses of the process. This may be performed by an internal

process or an external audit and should include feedback

from measurement users. Record lessons learned in an

appropriate database.[4] When incomplete or insufficient

data are available, appropriate techniques should be used

for analysis of historical databases.[18]

Software engineering tools and methods

Software engineering tools evolve continuously and it is

necessary to evaluate them.[14,19,20] The ISO JTC1/SC7/

WG4 is developing a series of standards related to the

evaluation of software tools (including criteria and measures). Regarding software methods, three groups are discussed in SWEBOK, Chapter 10 (heuristic, formal, and prototyping), each dealing with a series of approaches. The added value these method groups can bring to measurement is, respectively, an emphasis on the non-functional view of the project, the refinement of specifications, and prototype evaluation techniques.

Software quality

Because quality is a multidimensional attribute, its measurement is less straightforward to define than the measures discussed above. Furthermore, some dimensions of quality are likely to require measurement in qualitative rather than quantitative form.

A more detailed discussion of software quality measurement

is provided in the Software Quality KA, topic 3.4.

ISO models for software product quality and related

measurements are described in ISO 9126, parts 1 to 4

(see Chapter 15[7]; Chapters 9, 10[21]; Chapter 24[22]) and

often include measures to determine the degree of each

quality characteristic attained by the product.[23] When

selected properly, measures can support software quality

(among other aspects of the software life cycle processes)

in multiple ways. They can help

� guide the management decision-making process;
� find problematic areas and bottlenecks in the related software process; and
� software engineers assess the quality of their work for SQA (Software Quality Assurance) purposes and for longer-term process quality improvement.

With the increasing sophistication of software, questions of

quality go beyond whether or not the software works to how

well it achieves measurable quality goals. There are a few

more topics where measurement supports software quality

management (SQM) directly. These include assistance in

deciding when to stop testing. For this, reliability models

and benchmarks, both using fault and failure data, are useful.
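As one concrete example of such a reliability model (chosen here for illustration; the entry does not single out a specific model), the Goel–Okumoto model estimates expected cumulative faults as m(t) = a(1 − e^(−bt)). The parameters a and b would normally be fitted to observed failure data; the values below are invented:

```python
import math

# Goel-Okumoto reliability growth sketch: expected cumulative faults
# m(t) = a * (1 - exp(-b * t)); a and b are invented, not fitted, values.
def expected_faults(t, a=100.0, b=0.05):
    return a * (1.0 - math.exp(-b * t))

def remaining_faults(t, a=100.0, b=0.05):
    """Estimated faults still latent after t units of testing time."""
    return a - expected_faults(t, a, b)

print(round(remaining_faults(60), 1))  # 5.0
```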

The cost of software quality is an issue that is almost

always raised in deciding how a project's quality assurance

should be organized. Often, generic models of cost of

quality (CoQ) are used, which are based on when a defect

is found and how much effort it takes to fix the defect

relative to finding the defect earlier in the development

process.
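A generic CoQ model of this kind can be sketched as a set of relative fix-cost multipliers keyed by the phase in which a defect is found; the multipliers below are illustrative assumptions, not data from this entry:

```python
# Illustrative relative cost to fix a defect, by detection phase
# (assumed multipliers, for demonstration only).
FIX_COST_FACTOR = {
    "requirements": 1,
    "design": 3,
    "coding": 7,
    "testing": 15,
    "operation": 50,
}

def rework_cost(defects_by_phase, unit_cost=1.0):
    """Total relative rework cost given defect counts per detection phase."""
    return sum(FIX_COST_FACTOR[phase] * n * unit_cost
               for phase, n in defects_by_phase.items())

print(rework_cost({"design": 10, "testing": 4}))  # 90.0
```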

Project data may give a better picture of cost.

Discussion on this topic can be found in Ref. [24, pp.

39–50]. Related information can be found in the

SWEBOK chapters on Software Engineering Process

(Chapter 8) and Software Engineering Management

(Chapter 9).

Finally, the software quality reports provide valuable

information not only on the executed processes but also on

how all the software life cycle processes can be improved.

Discussions on these topics can be found in Ref. [25].

Measures of product quality can be represented using

mathematical and graphical techniques. This aids in the

interpretation of the measures. Authors assign the mea-

sures to the following categories:[3,13,21,26]

� Statistically based (e.g., Pareto analysis, run charts,

scatter plots, normal distribution)

� Statistical tests (e.g., the binomial test, chi-squared

test)

� Trend analysis

� Prediction (e.g., reliability models)
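The first of these categories can be illustrated with a small Pareto analysis of defect categories (the defect data are invented):

```python
from collections import Counter

# Invented defect records, one category label per defect found.
defects = (["interface"] * 40 + ["logic"] * 25 + ["data"] * 20
           + ["docs"] * 10 + ["build"] * 5)

def pareto(items):
    """Categories sorted by frequency, with cumulative percentages."""
    counts = Counter(items)
    total = sum(counts.values())
    cumulative, rows = 0, []
    for category, n in counts.most_common():
        cumulative += n
        rows.append((category, n, round(100 * cumulative / total, 1)))
    return rows

for category, n, cum_pct in pareto(defects):
    print(category, n, cum_pct)
```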

The statistically based and statistical tests categories pro-

vide a snapshot of the more troublesome areas of the soft-

ware product under examination. The resulting charts and

graphs are visualization aids, which the decision makers

can use to focus resources where they appear most needed.

Results from trend analysis may indicate that a schedule

has not been respected, such as in testing, or that certain

classes of faults will become more intense unless some

corrective action is taken. The predictive techniques can

assist in planning test time and in predicting failure.

Defect analysis consists of measuring defect occurrences and then applying statistical methods to understand the types of defects that occur most frequently, that is, answering questions in order to assess their density.[3,14,21] Defect analysis also aids in understanding trends, how well detection techniques are working, and how well the development and maintenance processes are progressing.

Measurement of test coverage helps to estimate how

much test effort remains to be done, and to predict possible

remaining defects. From these measurement methods,

defect profiles can be developed for a specific application

domain. Then, for the next software system within that

organization, the profiles can be used to guide the SQM

processes, that is, to expend the effort where the problems

are most likely to occur.

Similarly, benchmarks, or defect counts typical of that

domain, may serve as one aid in determining when the

product is ready for delivery.
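A benchmark comparison of that kind reduces to a simple check of observed defect density against the domain-typical value (the function and numbers are invented for illustration):

```python
# Illustrative delivery-readiness check against a domain benchmark.
def ready_for_delivery(defects_found, size_kloc, benchmark_density):
    """True if observed defects per KLOC do not exceed the domain benchmark."""
    return (defects_found / size_kloc) <= benchmark_density

print(ready_for_delivery(defects_found=45, size_kloc=30,
                         benchmark_density=2.0))  # True
```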

TECHNIQUES AND TOOLS

Measurement techniques and tools are useful to support

decision makers in the use of quantitative information.

Additional techniques can be found in the total quality

management (TQM) literature.

Measurement Techniques

Software measurement techniques may be used to analyze

software engineering processes and to identify strengths and

weaknesses. This can be performed to initiate process imple-

mentation and change, or to evaluate the consequences of

process implementation and change. The qualities of mea-

surement results, such as accuracy, repeatability, and repro-

ducibility, are issues in the measurement of software

engineering processes, since there are both instrument-

based and judgmental measurements, as, for example,

when assessors assign scores to a particular process.

A discussion and method for achieving quality of mea-

surement are presented in Ref. [27]. Process measurement

techniques have been classified into two general types:

analytic and benchmarking.

The two types of techniques[28] can be used together since

they are based on different types of information. First, the

analytical techniques are characterized as relying on

“quantitative evidence to determine where improvements

are needed and whether an improvement initiative has been

successful.” The analytical type is exemplified by the Quality

Improvement Paradigm (QIP) consisting of a cycle of under-

standing, assessing, and packaging.[29] The techniques pre-

sented next are intended as other examples of analytical

techniques, and reflect what is done in practice.[13,21,27,30,31]

Whether or not a specific organization uses all these techni-

ques will depend, at least partially, on its maturity:

� Experimental studies: Experimentation involves setting

up controlled or quasi experiments in the organization to

evaluate processes.[32] Usually, a new process is com-

pared with the current process to determine whether or

not the former has better process outcomes.

� Process simulation: This type of study can be used to

analyze process behavior, explore process improve-

ment potentials, predict process outcomes if the current

process is changed in a certain way, and control process

execution. Initial data about the performance of the

current process need to be collected, however, as a

basis for the simulation.

� Process definition: Process definition review is a means

by which a process definition (either a descriptive or a

prescriptive one, or both) is reviewed, and deficiencies

and potential process improvements identified. Typical

examples of this are presented in Refs. [33, 34]. An easy

operational way to analyze a process is to compare it to

an existing standard (national, international, or profes-

sional body), such as ISO 12207.[5] With this approach,

quantitative data are not collected on the process, or, if

they are, they play a supportive role. The individuals

performing the analysis of the process definition use

their knowledge and capabilities to decide what process

changes would potentially lead to desirable process out-

comes. Observational studies can also provide useful

feedback for identifying process improvements.[35]

� Root-cause analysis: This is another common analytical technique used in practice. It involves tracing back from detected problems (e.g., faults) to identify their process causes, with the aim of changing the process to avoid these problems in the future by removing the underlying, original cause rather than merely its visible, temporary effect. Examples for different types of processes are described in Refs. [36–38].

� Orthogonal defect classification (ODC): It is a techni-

que that can be used to link faults found with potential

causes. It relies on a mapping between fault types and

fault triggers.[39,40] The IEEE standard on the classifi-

cation of faults (or anomalies) may be useful in this

context (IEEE Standard for the Classification of

Software Anomalies—IEEE1044-02). ODC is thus a

technique used to make a quantitative selection for

where to apply RCA.

� Statistical process control (SPC): It is an effective way

to identify stability, or the lack of it, in the process

through the use of control charts and their interpreta-

tions. A good introduction to SPC in the context of

software engineering is presented in Ref. [41].

� Personal software process (PSP): It defines a series of

improvements to an individual’s development prac-

tices in a specified order. It is “bottom-up” in the

sense that it stipulates personal data collection and

improvements based on the data interpretations.

� Benchmarking techniques: Benchmarking depends on

identifying an “excellent” organization in a field and

documenting its practices and tools. Benchmarking

assumes that if a less-proficient organization adopts the

practices of the excellent organization, it will also

become excellent. Benchmarking involves assessing the

maturity of an organization or the capability of its pro-

cesses. It is exemplified by the software process assess-

ment work. A general introductory overview of process

assessments and their application is provided in Ref. [42].
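The SPC technique in the list above can be illustrated with a minimal 3-sigma control-limit computation (the samples are invented; practical individuals charts usually derive limits from moving ranges rather than the plain standard deviation used here):

```python
import statistics

# Invented process measure, e.g., defects found per inspection.
samples = [4, 6, 5, 7, 5, 4, 6, 5, 8, 5]

def control_limits(data, k=3):
    """Center line and lower/upper control limits at k standard deviations."""
    mean = statistics.mean(data)
    sd = statistics.pstdev(data)  # population std. dev. of the observed data
    return mean - k * sd, mean, mean + k * sd

lcl, cl, ucl = control_limits(samples)
out_of_control = [x for x in samples if not lcl <= x <= ucl]
print(cl)              # 5.5
print(out_of_control)  # [] -> process looks stable on this rule
```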

Measurement Tools

Software engineering management tools are subdivided

into three categories:[43]

� Project planning and tracking tools. These tools are

used in software project effort measurement and cost

estimation, as well as project scheduling.

� Risk management tools. These tools are used in identi-

fying, estimating, and monitoring risks.

� Measurement tools. The measurement tools assist in

performing the activities related to the software mea-

surement program.

QUANTITATIVE DATA

Quantitative data is identified by Vincenti as one of the engineering knowledge types.[44] According to Vincenti, quantitative data should be represented in tables and graphs that provide measurement references and codified experimental data, in order to show interested practitioners the practical results of consistently applying a shared practice in the domain. Such quantitative data is currently not referenced in any of the SWEBOK chapters.

Types of Entities

Quantitative data are typically collected on the following types of measured entities.

Organization

Data on organizations include, for example, the semiannual results of Sw-CMM (Software Capability Maturity Model) and CMMI (Capability Maturity Model Integration) appraisals,[45,46] as well as the results of prizes associated with performance management models such as the Malcolm Baldrige award in the United States and the EFQM (European Foundation for Quality Management) award in Europe.

Project

Data on projects are available from project benchmarking repositories such as that of the ISBSG (International Software Benchmarking Standards Group),[47] with more than 5000 projects gathered worldwide as of 2009. A characteristic of this repository is that the size of most of these projects is measured with one of the ISO-recognized FSM (functional size measurement) methods.

Resource

For the “resource” entity, P-CMM[48] appraisal data can be a good source of information about people management.[49] Other information on resources can be

retrieved from the ISBSG repository.[47]

Process

Process data are typically available from project reposi-

tories or from process assessment appraisals.[45,46]

Repositories of process assessment appraisal data are typi-

cally proprietary.

Product

The data collected on software products are based on either

individually defined measures and models, or on measures

and models proposed by standards organizations: ISO 9126 (which is being migrated into the new ISO 25000 series) contains both examples of measures and a model to quantify software product quality. There are some publications on data based on such measures and models, but with limited scope. An example of the use of the ISO 9126 approach for Web environments and for software package selection is documented in Ref. [50]. A mapping

of ISO 9126 to ISBSG is provided in Ref. [51].

Software Measurement Repositories

As of 2009, two data repositories were publicly available to the software engineering community: the ISBSG repository[47] and the PROMISE repository for quality studies.[52] The ISBSG repository had, as of the beginning of

2007, more than 4100 projects of several types. These

repositories are sources of data for external benchmark

analysis as well as examples of data collection procedures,

including standardized definitions for the data collected.

MEASUREMENT STANDARDS

Software engineers should be aware of the measurement standards to be used and referred to when developing, maintaining, and operating software. A de jure standard is typically defined and adopted by an (independent) standardization

body such as ISO and IEC, ANSI (United States),

AFNOR (France), etc. A de facto standard is typically

defined and adopted by a community of interests, such as

IEEE, ISBSG, SPIN (Software Process Improvement

Network), itSMF (Information Technology Service

Management Forum), PMI (Project Management

Institute), etc.

De Jure Standards

Software product quality

Software engineers should be able to specify product qual-

ity objectives and measure them throughout the software

life cycle. ISO defines a standard model for assessing the

quality of a software product (ISO 9126-01). This standard

presents three perspectives of software quality: internal (as

viewed by the software engineer), external (as viewed by

the client), and quality in use (as viewed by the end user).

While ISO 9126 proposes candidate measures for each

of these perspectives, none of these measures are yet con-

sidered by ISO as measurement standards. The ISO 9126

standard is currently under revision and will be renamed

and extended into the ISO 25000 series. This new version

will progressively include measurement standards. In addi-

tion, ISO currently proposes a software usability model

(ISO 9241-11) that can be used to measure the usability

of a software product.

Software functional size

Software engineers should be able to measure the size of a software product. Functional size measurement (FSM) quantifies software product functional size. The key concepts that a method must implement to be recognized as an FSM method are specified in ISO 14143-1. The ISO 14143 series includes

five additional parts (two international standards and three

technical reports) on various aspects related to FSM meth-

ods, such as verification criteria and functional domains.

Five FSM methods have been adopted as ISO standards in

conformity with ISO 14143-1:

� COSMIC-FFP: ISO 19761

� IFPUG FPA: ISO 20926

� Mk-II FPA: ISO 20968

� NESMA FPA: ISO 24570

� FISMA FPA: ISO 29881
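As a toy illustration of the general flavor of these methods (not a faithful implementation of any of them), an IFPUG-style unadjusted count multiplies counts of the five function types by complexity weights; the sketch uses the average-complexity weights only and omits the complexity-assessment and adjustment steps of the actual standard:

```python
# Average-complexity IFPUG weights for the five function types
# (toy illustration; real counting assesses low/average/high complexity).
AVG_WEIGHTS = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}

def unadjusted_fp(counts):
    """Sum of function-type counts times their average-complexity weights."""
    return sum(AVG_WEIGHTS[ftype] * n for ftype, n in counts.items())

print(unadjusted_fp({"EI": 10, "EO": 6, "EQ": 4, "ILF": 5, "EIF": 2}))  # 150
```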

Software measurement process

Software engineers should be able to apply a formal mea-

surement process. ISO 15939 documents the software mea-

surement process to be followed by software engineers, as

previously discussed.

De Facto Standards

ISBSG

The ISBSG has put into place a data collection standard on

software projects. The ISBSG data collection question-

naire[53] includes seven sections:

� Submitter information: information about the organiza-

tion and the people filling out the questionnaire.

� Project process: information about the project process.

The information collected can be detailed along the

various phases of a software life cycle.

� Technology: information about the tools used for

developing and carrying out the project and for each

stage of the software life cycle.

� People and work effort: three groups of people are

considered: development team, customers and end

users, and IT operators. Collected information includes

information about the various people working on the

project, their roles and their expertise, and the effort

expended for each phase of the software life cycle.

� Product: information about the software product itself

(e.g., software application type and deployment plat-

form, such as client/server, etc.).

� Project functional size: information about the software

product functional size, according to the adopted count

type, the measurement context, and the experience of

the people performing the count for the system.

� Project completion: an overall picture of the project, including project duration, defects,

number of lines of code, user satisfaction, and project

costs, including cost validation.

SPIN

A SPIN is a community of interest that supports and promotes

the use of software and systems engineering maturity models

(i.e., ISO 15504, Capability Maturity Model Integration,

Software Maintenance Maturity Model,[54] etc.) to improve

software engineering processes. Exemplary measurement practices are described in each maturity model.

itSMF

The itSMF is a community of interest that supports and

promotes service management frameworks such as ITIL,

BS 15000, and ISO 20000. These exemplary practices

apply to the IT infrastructure, services, and operations.

Exemplary measurement practices, such as key process indicators and service level agreements, are described in these references.

CONCLUSION

Achieving consensus by the profession on a software mea-

surement body of knowledge is a key milestone for the

evolution of software engineering toward a professional

status. This entry has presented the content of the new

knowledge area on software measurement proposed for

the next version of the Software Engineering Body of

Knowledge—Guide to the SWEBOK. This proposal is

designed to build, over time, consensus in industry, profes-

sional societies, and standards-setting bodies, among prac-

ticing software developers and in academia.

ACKNOWLEDGMENT

Figures 2 and 5 were adapted from Fig. 1 (p. 9) and Fig. A.1 (p. 21) of ISO/IEC 15939:2007.

These figures are not considered official ISO figures nor

were they authorized by ISO. Copies of ISO/IEC

15939:2007 can be purchased from ANSI at http://

webstore.ansi.org.

REFERENCES

1. SWEBOK. Guide to the Software Engineering Body of

Knowledge; IEEE Computer Society: Los Alamitos, CA,

http://www.swebok.org (accessed May 2010).

2. ISO/IEC Guide 99:2007, International vocabulary of

metrology—Basic and general concepts and associated

terms (VIM), 2007.

3. Kan, S.H. Metrics and Models in Software Quality

Engineering, 2nd Ed.; Addison-Wesley: Boston, MA, 2002.

4. ISO/IEC 15939:2007, Systems and software engineering—

Measurement process, 2007, http://webstore.ansi.org

(accessed May 2010).

5. ISO/IEC 12207:2008, Systems and software engineering—

Software life cycle processes, 2008.

6. Jalote, P. An Integrated Approach to Software Engineering,

2nd Ed.; Springer-Verlag: Secaucus, NJ, 1997.

7. Pressman, R. S. Software Engineering: A Practitioner’s

Approach, 6th Ed.; McGraw-Hill: New York, 2004.

8. Beizer, B. Software Testing Techniques; International

Thomson Press: Boston, MA, 1990.

9. Jorgensen, P.C. Software Testing: A Craftsman’s Approach,

2nd Ed.; CRC Press: Boca Raton, FL, 2004.

10. Bertolino, A.; Marre, M. How many paths are needed for

branch testing? J. Syst. Software 1996, 35 (2), 95–106.

11. IEEE 982.1-2005, IEEE Standard Dictionary of Measures

of the Software Aspects of Dependability, Institute of

Electrical and Electronics Engineers, Jan 1, 2006, 41 pp.

12. Perry, W. Effective Methods for Software Testing, Wiley:

New York, 1995.

13. Lyu, M.R. Handbook of Software Reliability Engineering,

Mc-Graw-Hill/IEEE: New York, 1996.

14. Pfleeger, S.L. Software Engineering: Theory and Practice,

2nd Ed.; Prentice-Hall: Englewood Cliffs, NJ, 2001.

15. Poston, R.M. Automating Specification-Based Software

Testing, IEEE Press: Los Alamitos, CA, 1996.

16. Zhu, H.; Hall, P.A.V.; May, J.H.R. Software unit test cover-

age and adequacy. ACM Comput. Surv. 1997, 29 (4),

366–427, http://citeseerx.ist.psu.edu/viewdoc/download?doi=

10.1.1.93.7961&rep=rep1&type=pdf (accessed May 2010).

17. Royce, W. Software Project Management, A United

Framework; Addison-Wesley: Boston, MA, 1998.

18. Strike, K.; El-Emam, K.; Madhavji, N. Software Cost

Estimation with Incomplete Data, NCR 43618, Technical

Report, National Research Council Canada, January 2000,

http://nparc.cisti-icist.nrc-cnrc.gc.ca/npsi/ctrl?action=rtdoc

&an=8913263&article=0&lang=en (accessed May 2010).

19. Mosley, V. How to assess tools efficiently and quantita-

tively. IEEE Software 1992, 9 (3), 29–32.

20. Valaer, L.A.; Babb, R.G. II. Choosing a user interface

development tool. IEEE Software 1997, 14 (4), 29–39.

21. Fenton, N.E.; Pfleeger, S. L. Software Metrics: A Rigorous

& Practical Approach, 2nd Ed.; International Thomson

Computer Press: Boston, MA, 1998.

22. Sommerville, I. Software Engineering, 7th Ed.: Addison-

Wesley: Reading, MA, 2005.

23. Grady, R.B. Practical Software Metrics for project

Management and Process Management; Prentice Hall,

Englewood Cliffs, NJ, 1992.

24. Rakitin, S.R. Software Verification and Validation: A

Practitioner’s Guide; Artech House, Inc.: Boston, MA, 1997.

25. McConnell, S. Code Complete: A Practical Handbook of

Software Construction; Microsoft Press: Redmond, WA,

1993, http://standards.ieee.org/reading/ieee/std_public/new_

desc/se/1012-2004.html (accessed May 2010).

26. Musa, J. Software Reliability Engineering: More Reliable

Software, Faster Development and Testing; McGraw Hill:

New York, 1999.

27. Goldenson, D.; El-Emam, K.; Herbsleb, J.; Deephouse, C.

Empirical Studies of Software Process Assessment

Methods, ISERN Technical Report, ISERN-97-09,

Germany, 1997, http://isern.iese.de/moodle/file.php/1/

Reports/reports97/isern-97-09.ps.gz (accessed May 2010).

28. Carver, R. H.; Tai, K. C. Replay and testing for concurrent

programs. IEEE Software 1991, 8 (2), 66–74.

29. Software Engineering Laboratory, Software Process

Improvement Guidebook, NASA/GSFC, Technical Report

SEL-95-102, April 1996.

30. Weinberg, G.M. Measuring Cost and Value, “Quality

Software Management,” First-Order Measurement, Dorset

House: New York; 1993, vol. 2, Chap. 8.

31. Zelkowitz, M.V.; Wallace, D. R. Experimental models

for validating technology, IEEE Comput. 1998, 31,

23–31, www.idi.ntnu.no/emner/empse/papers/zelkowitz.pdf

(accessed May 2010).

32. McGarry, F. et al. “Software Process Improvement in the

NASA Software Engineering Laboratory,” Software

Engineering Institute CMU/SEI-94-TR-22, 1994, http://

www.sei.cmu.edu/library/abstracts/reports/94tr022.cfm (acce-

ssed May 2010).

33. Bandinelli, S. et al. Modeling and improving an industrial soft-

ware process. IEEE Trans. Software Eng.1995, 21 (5), 440–454.

34. Kellner, M. et al. Process guides: Effective guidance for

process participants. Presented at the 5th International

Conference on the Software Process, Chicago, IL, 1998,

http://users.ece.utexas.edu/~perry/prof/ispa/icsp5/program.

html (accessed May 2010).

35. Agresti, W. The role of design and analysis in process

improvement. Presented at Elements of Software Process

Assessment and Improvement, Los Alamitos, CA, 1999.

36. Collofello, J.; Gosalia, B. An application of causal analysis

to the software production process. Software Pract. Exp.

1993, 23 (10) 1095–1105.

37. El-Emam, K.; Holtje, D.; Madhavji, N. Causal analysis of

the requirements change process for a large system. In

Proceedings of the International Conference on Software

Maintenance, Bari, Italy, 1997.

38. Nakajo, T.; Kume, H. A case history analysis of software

error cause-effect relationship. IEEE Trans. Software Eng.

1991, 17 (8), 830–838.

39. Chillarege, R. et al. Orthogonal defect classification—a

concept for in-process measurement. IEEE Trans.

Software Eng. 1992, 18 (11), 943–956.

40. Chillarege, R. Orthogonal defect classification. Handbook

of Software Reliability Engineering, M. Lyu, Ed.; IEEE

Computer Society Press: Los Alamitos, CA, 1996.

41. Florac, W.; Carleton, A. Measuring the Software Process:

Statistical Process Control for Software Process

Improvement; Addison-Wesley: Boston, MA, 1999.

42. Zahran, S. Software Process Improvement: Practical Guid-

elines for Business Success; Addison-Wesley: Reading,

MA, 1998.

43. Dorfman, M.; Thayer, R.H. Eds. Software Engineering; IEEE

Computer Society Press: Los Alamitos, CA, 2002; Vol. 1 and 2.

44. Vincenti, W.G. What Engineers Know and How They

Know It—Analytical Studies from Aeronautical History,

Johns Hopkins University Press: Baltimore, MD, 1990.

45. Software Engineering Institute, Process Maturity Profile

Sw-CMM 2006 Mid-Year Update, Software Engineering

Measurement and Analysis Team, Carnegie Mellon

University, August 2006; http://www.sei.cmu.edu/cmmi/

casestudies/profiles/swcmm.cfm (accessed May 2010).

46. Software Engineering Institute, Process Maturity Profile

CMMI SCAMPI Class A Appraisal Results 2007- Year-

End Update, Software Engineering Measurement and

Analysis Team, Carnegie Mellon University, March 2008;

http://www.sei.cmu.edu/cmmi/casestudies/profiles/cmmi.cfm

(accessed May 2010).

47. ISBSG, Data Repository Release 10 January 2007, http://

www.isbsg.org (accessed May 2010).

48. Curtis, B.; Hefley, B.; Miller, S. People Capability Maturity

Model (P-CMM) version 2.0, Software Engineering

Institute, Technical Report, CMU/SEI-2009-TR-003, http://

www.sei.cmu.edu/library/abstracts/reports/09tr003.cfm

(accessed May 2010).

49. Miller, S., People Capability Maturity Model – Product Suite

Maturity Profile, Software Engineering Institute, January

2008, http://www.sei.cmu.edu/ (accessed May 2010).

50. Franch, X.; Carvallo, J.P. Using quality models in software

package selection. IEEE Software January/February 2003,

20 (1), 34–41.

51. Cheikhi, L.; Abran, A.; Buglione, L. The ISBSG Software

Projects Repository from the ISO 9126 Quality Perspective.

J. Software Qual., American Society for Quality, March 2007,

9 (2), pp. 4–16.

52. Boetticher, G.; Menzies, T.; Ostrand, T. PROMISE

Repository of empirical software engineering data repository,

West Virginia University, Department of Computer Science,

2007, http://promisedata.org/ (accessed May 2010).

53. ISBSG, Data Collection Questionnaire—New

Development, Redevelopment, or Enhancement sized

using IFPUG or NESMA Function Points, version 5.10,

Sept 2007, http://www.isbsg.org (accessed May 2010).

54. April, A.; Abran, A. Software Maintenance Management:

Evaluation and Continuous Improvement; John Wiley &

Sons, Inc.: Hoboken, NJ, 2008.
