
The « SQALE » Models for Assessing the Quality of Software Source Code

Quality Model and Analysis Model

The need to assess and know the quality of the software one has commissioned or paid for is not new.

When an application is accepted, in addition to functional and performance testing, it should be possible to obtain, easily and precisely, other quality attributes of the expected deliverable.

For example, it should be possible to know how well the code will accommodate future evolutions (which are bound to come), or whether it will be easy to outsource its maintenance to a third-party team.

Although some large customers have developed their own methods, or even their own tools, to satisfy such assessment needs, regular assessment of delivered software has not become general practice. This situation is partly the result of the lack of an accepted, standardized method allowing systematic assessments to be performed within reduced schedules and costs.

This absence of methods to assess software is also felt by software development managers. They lack standard and simple means to assess the quality of developed software, so that they can monitor and control their projects along the three major axes of quality, cost and schedule.

A software development organization's maturity level and the capabilities of its processes may be assessed and measured on a scale provided by a standard model such as CMMI.

However, as far as the software product resulting from these processes is concerned, there is no complete and calibrated system to assess and rate its quality.


Best practice models such as CMMI for Acquisition [1] or CMMI for Development [2] do require the control and monitoring of the quality of the products during the whole project lifecycle.

For example, this is the case of practice SP 1.2 of the Process Area (PA) Process and Product Quality Assurance (PPQA), "Objectively Evaluate Work Products and Services", and of practice SP 2.3 of the PA Supplier Agreement Management (SAM), "Evaluate Selected Supplier Work Products".

Such practices would be much easier to implement if a simple and standard method were available to rate software source code.

The two main components of an assessment method are, on the one hand, a quality model and, on the other hand, an analysis model.

• The quality model defines the quality characteristics (or attributes) expected in the context of the assessment. Characteristics are broken down into detailed sub-characteristics. Such a breakdown allows precise control points to be defined for the product assessment.

• The analysis model defines the rules used to characterize or rate the products (or components thereof) based on the measurements obtained on the control points of the quality model. This is the "analysis model" defined in standard ISO/IEC 15939 [3].

The best known quality models are those from Boehm [4], McCall [5] or ISO 9126 [6]. Such models are general and have not been specifically developed to rate the source code of an application.

Several issues arise when trying to apply them to the assessment of software source code:

• Functional and external characteristics are mixed together with internal characteristics.

• The models present a viewpoint different from that of the assessor, who is responsible for rating the quality of a source code.

• The characteristics defined in the models are non-orthogonal and share redundant properties.

But the great weakness of such models is that none of them provides a practical and concrete method to rate an application.

In practice, in order to apply such models, one must create one's own assessment method. For example, a panel of experts may be gathered to give each sub-characteristic a rating between 1 and 5. By averaging the ratings, it is possible to identify the "good" sub-characteristics and the "weak" ones. Of course, such a method is costly, takes time to implement, and is not repeatable.

Therefore DNV has developed the SQALE method and models to satisfy the need to assess strategic, large-scale software source code. We have been using this method successfully for several years for our customers.

We describe hereafter the two SQALE models, as a contribution to the collective work necessary to build a common assessment standard.

The SQALE Quality Model

This model has four important properties:

• It has been built in a systematic manner, taking into account the lifecycle of source code.

• It is structured in ranked layers.

• It is a requirements model, not a set of best practices to implement.

• It takes into account both the developer's point of view and the software user's point of view.

To define the basic quality characteristics of our model, an approach based on the typical lifecycle of a source code file was used (Figure 1).

This file granularity level was retained because, within development projects, the common practice is that "owners" are usually assigned to files. It is also the granularity that is commonly supported by configuration management tools, compilers and linkers.

Whatever the overall lifecycle of an application (V, spiral or iterative lifecycle), a file will follow a sequence similar to the one described in Figure 1.

Figure 1: The "File" Lifecycle Used for the SQALE Quality Model. Dependencies between Activities and Quality Characteristics


Even if, at the overall level, there are comings and goings between different phases (for example between coding and testing), Figure 1 illustrates a general time sequence and some practical rules:

• Untested code will not be delivered.

• Evolutions must be taken into account, and the subsequent corrections made, before delivery.

• Unmaintainable code will not be reused.

We asked ourselves what the major quality prerequisites are for each step of the lifecycle. We used the quality characteristics and sub-characteristics of the three models (Boehm [4], McCall [5] and ISO 9126 [6]), which allowed us to establish the dependencies presented in Figure 1.

The needed "reusability" characteristic has been added, as it is not included in the ISO 9126 model.

This approach thus organizes the requirements and needs for source code quality sequentially. In this sequence, testability is the first characteristic required and reusability is the last.

The layered model presented in Figure 2 is the end result of this approach. This basic model may be adapted by taking into account the expected lifecycle of the source code to be assessed.

Since we have used a generic approach, it is possible to add other layers to satisfy other needs, such as code portability and/or code security.

Two other complementary and more detailed levels are derived from the quality characteristics (or quality factors) of the first level. A second level of sub-characteristics is then defined. A third level of control points on the software source code completes the model.

The standard Goal/Question/Metric (GQM) approach from Basili [8] was applied to build the two additional layers. The control points selected for a given assessment depend on the context, in particular on the programming languages used and on the execution environment of the software.
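As an illustration of how a GQM derivation can connect a quality goal to measurable control points, here is a minimal sketch; the goal and question wording is our own example, not taken from the SQALE specification.

```python
# Illustrative GQM-style derivation for one sub-characteristic.
# The goal and questions below are example wording, not SQALE reference data.
gqm = {
    "goal": "source code files are understandable by a maintainer",
    "questions": {
        "Is the control flow simple enough to follow?":
            ["essential complexity ev(G) per function"],
        "Is the code sufficiently documented?":
            ["comment percentage per file"],
    },
}

# A metric becomes a control point once a threshold is attached,
# e.g. "ev(G) <= 5" or "comment percentage > 25%" (see Figure 4).
for question, metrics in gqm["questions"].items():
    print(f"{question} -> {', '.join(metrics)}")
```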

The existing literature on the subject provided an extensive set of candidate control points, in particular the very complete taxonomy established by Diomidis Spinellis [9]. Figures 3 and 4 depict some elements of the complete quality model.

When necessary, sub-characteristics corresponding to sub-activities of the source code lifecycle have been defined. In this way, the Testability characteristic is broken down into Unit Testability and Integration Testability. In the same manner, Maintainability is broken down into Readability and Understandability: reading (and/or navigating) and understanding are considered to be the principal activities of a maintenance person working on a piece of source code developed by someone else.

In this structure, a sub-characteristic or a control point used in the model is never duplicated at any higher level; it is used only once in the model. Using a control point and a sub-characteristic only once is an essential rule of the SQALE model, which must be followed at all times.

The consequence of this rule is that no testability criterion will appear in the Maintainability layer. The only sub-characteristics that remain attached to this characteristic are Readability and Understandability. Sub-characteristics such as Testability or Evolutivity, traditionally associated with Maintainability in existing models, are now earlier characteristics of the SQALE model. This is a consequence of the fact that these characteristics are needed earlier in the lifecycle of a source code file.
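To make this structure and the single-use rule concrete, here is a minimal sketch of the three-level hierarchy with a mechanical check of the rule. The data layout is an illustrative assumption, not the SQALE tooling; the Testability control points shown are hypothetical, while the Maintainability ones echo the examples of Figure 4.

```python
# Minimal sketch of the three-level structure:
# characteristic -> sub-characteristic -> list of control points.
quality_model = {
    "Testability": {
        # The two control points below are hypothetical examples.
        "Unit level testability": ["no function with cyclomatic complexity v(G) > 10"],
        "Integration level testability": ["no cyclic dependency between modules"],
    },
    "Maintainability": {
        "Understandability": ["no function with essential complexity ev(G) > 5",
                              "comment percentage is > 25%"],
        "Readability": ["no file with more than 1000 lines", "no dead code"],
    },
}

def check_single_use(model: dict) -> None:
    """Enforce the SQALE rule that a sub-characteristic or a control point
    appears only once in the whole model."""
    seen_subs, seen_points = set(), set()
    for subs in model.values():
        for sub, points in subs.items():
            if sub in seen_subs:
                raise ValueError(f"sub-characteristic duplicated: {sub}")
            seen_subs.add(sub)
            for point in points:
                if point in seen_points:
                    raise ValueError(f"control point duplicated: {point}")
                seen_points.add(point)

check_single_use(quality_model)  # passes silently for the model above
```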

A particular property of the SQALE quality model is that it is a requirements model. Each control point in the model is a quality requirement to be satisfied. This is in accordance with Ph. Crosby's [7] first "absolute": "Quality must be defined as conformance to requirements." Consequently, a detected non-conformance is an absence of quality.

Therefore the control points of the quality model are "must haves" which, when complied with, uncontroversially describe pieces of source code of high quality. The base requirements, the control points, are either metrics with thresholds outside of which the source code is not compliant, or coding rules universally recognized as well founded and which allow no exceptions.

Figure 2: The SQALE Quality Model in its Standard Representation (the ranked characteristics, from the first required to the last: Testability, Reliability, Changeability, Efficiency, Maintainability, Reusability)

Figure 3: Ranked Sub-Characteristic Example
Testability: Unit level testability, Integration level testability
Reliability: Architecture related reliability, Logic related reliability, Instruction related reliability, Data related reliability, Exception handling, Fault tolerance
Changeability: Architecture related changeability, Logic related changeability, Data related changeability
Efficiency: Memory use, Processor use
Maintainability: Understandability, Readability
Reusability: Modularity, Transportability
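As a hedged illustration of how the control points described above act as binary requirements, the following sketch evaluates one file's measurements against a few thresholds borrowed from the examples of Figure 4. The measurement names and the file data are hypothetical; this is not the SQALE tooling, only a minimal sketch.

```python
# Minimal sketch: control points as pass/fail requirements evaluated on
# per-file measurements. Thresholds follow the examples of Figure 4;
# the measurement keys and the sample file data are hypothetical.
control_points = {
    "comment percentage > 25%":           lambda m: m["comment_pct"] > 25,
    "no file with more than 1000 lines":  lambda m: m["lines"] <= 1000,
    "no function with ev(G) > 5":         lambda m: m["max_essential_complexity"] <= 5,
    "no dead code":                       lambda m: m["dead_code_blocks"] == 0,
}

def non_compliances(measures: dict) -> list:
    """Return the control points that the measured file fails to satisfy."""
    return [name for name, check in control_points.items() if not check(measures)]

# Hypothetical measurements for one file:
file_measures = {"comment_pct": 15, "lines": 1240,
                 "max_essential_complexity": 4, "dead_code_blocks": 0}

print(non_compliances(file_measures))
# ['comment percentage > 25%', 'no file with more than 1000 lines']
```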

The requirements must also be consistent: it must be possible, in practice, to deliver a piece of software without any defect (non-compliance) with respect to the model's requirements.

The last property of the SQALE quality model is its capacity to support two different points of view, the developer's point of view and the owner's or user's point of view, as described in Figure 5.

By reading the model horizontally, non-compliances can be analyzed from the originating characteristic or sub-characteristic, and one can understand their impact on the development activities (testing, implementing changes, ...).

By reading the model vertically, the consequences, for the owner, of operating the software with the non-compliances can easily be understood.

Figure 4: Some SQALE Quality Model Control Points
Maintainability / Understandability: There is no function with an essential complexity ev(G) > 5
Maintainability / Understandability: There is no "continue" instruction
Maintainability / Understandability: There is no "break" instruction outside a switch structure
Maintainability / Understandability: The comment percentage is > 25%
Maintainability / Readability: There is no file with more than 1 000 lines
Maintainability / Readability: There is no dead code

Figure 5: The two Viewpoints of the SQALE Quality Model (an analytic view, provided by orthogonal characteristics, showing the impact of each non-conformity and improvement on the quality characteristics and lifecycle issues; and an external view, representing the perceived quality, evaluated by consolidation of the ranked characteristics)

The SQALE Analysis Model

The following discussion illustrates the fact that the analysis model is the most "touchy" element of a product assessment method.

In a hypothetical project, let us suppose that a project leader wishes to rate the maintainability of his or her application software using an overall grade.

In order to compute the overall grade, various base grades must be aggregated. This aggregation must be computed along two dimensions:

• In the dimension of the application's structure, the overall Maintainability grade of the application necessarily results from the Maintainability grades of each of its components and sub-components (for example, each class and each method of each class).

• In the dimension of the ranked quality model, the sub-characteristics (Readability and Understandability) which contribute to the Maintainability characteristic are aggregated.

Currently, many assessment reports are built by computing, at each aggregation step, the grade as an average of the component grades.

In many cases, the grades are weighted in an attempt to take into account the relative priorities of the sub-characteristics or the size of each assessed file.

This commonly used method, supported by many analysis tools, is not satisfactory since it produces erroneous diagnoses and consequently leads to wrong decisions.

In the hypothetical project, one of the requirements for Maintainability is that the source code is sufficiently commented. In this context, the threshold for an acceptable level of comments is fixed at 25% per file. If half of the software source code has a comment ratio of 15% and the other half has a comment ratio of 40%, then an overall grade computed by averaging the base grades will deliver a ratio of 27.5%, which will therefore be taken as an acceptable result for the overall maintainability of the application.

Sadly, in practice, the extra comments in one part of the software will not help in any way to maintain the part where comments are missing. Each piece of software, if it is to be maintainable, must have enough comments. Such issues do not happen with the SQALE analysis model.
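A small sketch, using hypothetical files that mirror the 15% / 40% example above, shows why the average looks acceptable while a per-file compliance check does not:

```python
# Hypothetical comment ratios: half of the files at 15%, half at 40%.
comment_ratios = {"a.c": 15.0, "b.c": 15.0, "c.c": 40.0, "d.c": 40.0}
THRESHOLD = 25.0  # acceptable comment percentage per file, as in the example

# Averaging: 27.5% > 25%, so the application looks acceptable overall...
average = sum(comment_ratios.values()) / len(comment_ratios)
print(f"average ratio: {average}%")        # 27.5

# ...while a per-file check shows that half of the files are non-compliant.
failing = [name for name, ratio in comment_ratios.items() if ratio <= THRESHOLD]
print(f"non-compliant files: {failing}")   # ['a.c', 'b.c']
```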

As stated previously, the quality model is a requirements model. The way it has been built ensures that the quality target is a total absence of non-compliances. Following Ph. Crosby, assessing a software source code is therefore similar to measuring the distance which separates it from its quality target.

To measure this distance, the concept of a remediation index has been defined and implemented in the SQALE analysis model.

An index is associated with each component of the software source code (for example, a file, a module or a class). The index represents the remediation effort which would be necessary to correct the non-compliances detected in the component with respect to the model's requirements.

Since the remediation index represents a work effort, the consolidation of the indices is a simple addition of uniform information, which is a critical advantage of our model.
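The consolidation principle described in the following paragraphs can be sketched as follows. The non-compliance types, the unit remediation efforts and the file data are assumed values for illustration only; in SQALE the remediation efforts are estimated by the development team.

```python
# Sketch: remediation-index consolidation by simple addition.
# Unit remediation efforts (here in minutes) per type of non-compliance
# are illustrative assumptions.
UNIT_EFFORT = {
    "presentation defect": 5,    # e.g. bad indentation, dead code
    "active code defect": 120,   # implies new unit / regression tests
}

# Hypothetical non-compliances detected per file: (type, count) pairs.
findings = {
    "parser.c": [("presentation defect", 12), ("active code defect", 3)],
    "report.c": [("presentation defect", 4)],
}

def file_index(non_compliances) -> int:
    """Base index of a file: unit effort x number of non-compliances, summed."""
    return sum(UNIT_EFFORT[kind] * count for kind, count in non_compliances)

indices = {name: file_index(items) for name, items in findings.items()}
print(indices)                # {'parser.c': 420, 'report.c': 20}
print(sum(indices.values()))  # 440 -> consolidated index, a plain sum
```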

A component index is computed by the simple addition of the indices of its elements. A characteristic index is computed by the addition of the base indices of its sub-characteristics.

A sub-characteristic index is computed by the addition of the indices of its control points.

Base indices are computed by rules which comply with the following principles:

• A base index takes into account the unit remediation effort needed to correct the non-compliance. In practice, this effort mostly depends on the type of the non-compliance. For example, correcting a presentation defect (bad indentation, dead code) does not have the same unit effort cost as correcting an active code defect (which implies the creation and execution of new unit tests, and possibly new integration and regression tests).

• A base index also considers the number of non-compliances. For example, a file with three methods that need to be broken down into smaller methods because of their complexity will have an index three times as high as a file with only one complex method, all other things being equal.

In the end, the remediation indices provide a means to measure and compare non-compliances of very different origins and very different types. Coding rule violations, threshold violations for a metric, or the presence of an antipattern can all be compared through their relative impact on the index.

Practical Use

The SQALE quality and analysis models have been used to perform many assessments of software source code, of various sizes and in different application domains.

The same layered quality model has been used to assess Cobol, Java, embedded Ada or C++ software source code. For Java, C++ and Ada, the quality model contains, of course, object-oriented metrics to assess Testability, Evolutivity and Reusability. The quality model also provides control points requiring the absence of antipatterns such as those identified by Brown [10].

The indices are computed based on the average remediation efforts estimated by the development team.

The index thresholds providing a rating on five levels (from "poor" to "excellent") are established by the application managers. Several graphs and diagrams are used to efficiently visualize the strengths and weaknesses of the assessed source code:

• Which pieces of software contribute the most to the risks.

• Which quality criteria are the most impacted.

• What the impact of the identified non-compliances is on the project or its users.

Some of the indicators we use in our assessments, and which in our experience provide good visibility and good information for decision making, are presented here (Figures 6 and 7). In the case of the application shown in this example, the Testability index is high. Therefore, the other external characteristics cannot be satisfied (even partially), even in the presence of rather good compliance with the control points of the higher characteristics.

Figure 6: Overall Application Index Dashboard

Figure 7: File Distribution Diagram based on the Indices for each Characteristic

The diagram presents a finding which may seem paradoxical: to markedly improve the Maintainability of the application, as perceived by the owner, it is preferable and more efficient to improve the Testability than to add comments.

The analysis indicators mentioned above have allowed us to localize precisely the high-priority modifications (quick wins) to apply in order to increase the Testability, and therefore the overall quality, of the application.

The analysis of the index density, as shown in Figure 8 for example (the density is the ratio of the index to the size of the piece of software), has also enabled us to compare the non-compliance distribution between Reused, Modified and newly Created source code files. Here, an additional effort is needed to improve the testability of the modified files, for example by refactoring.
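The density comparison can be illustrated with a short sketch. The category indices and sizes below are hypothetical (Figure 8 normalizes by the cyclomatic complexity v(G); any consistent size measure works for the illustration), and this is not the SQALE tooling.

```python
# Sketch: index density = remediation index / size, per category of files
# (Reused, Modified, newly Created). All numbers are hypothetical.
categories = {
    # category: (remediation index, size of the code in that category)
    "Reused":   (4000, 25000),
    "Modified": (1000,  3000),
    "Created":  ( 500,  2500),
}

densities = {cat: index / size for cat, (index, size) in categories.items()}
for cat, density in sorted(densities.items(), key=lambda item: -item[1]):
    print(f"{cat:8s} density = {density:.3f}")
# Modified files show the highest density in this sketch, the kind of signal
# that points refactoring effort at modified code, as discussed above.
```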

Conclusion

More than a quality model, we propose a standard approach for developing quality models derived from the software lifecycle. This approach enables us to precisely specify the expectations on software source code.

The resulting models remove several weaknesses of current reference quality models. The quality models implemented using the SQALE method present two similarities with the CMMI process model:

• On the one hand, like the CMMI, they organize the requirements in priority-ordered levels.

• On the other hand, like the CMMI model, which may be used to assess either capability or maturity, the SQALE quality models may be used either in an analytical manner or from the point of view of an owner or user of the source code.

Such similarities between the models and the assessment methods allow us to perform joint product/process assessments in a fast and efficient manner, with high added-value results for the development or owner teams.


References

1. SEI, CMMI for Acquisition, Version 1.2, CMU/SEI-2007-TR-017, Carnegie Mellon University, Pittsburgh, 2007.

2. SEI, CMMI for Development, Version 1.2, CMU/SEI-2006-TR-008, Carnegie Mellon University, Pittsburgh, 2006.

3. ISO/IEC, International Standard ISO/IEC 15939:2007(E), Systems and software engineering – Measurement process, 2007.

4. Boehm, B. W., Brown, J. R., Kaspar, H., Lipow, M., McLeod, G., and Merritt, M., Characteristics of Software Quality, North-Holland, 1978.

5. McCall, J. A., Richards, P. K., and Walters, G. F., Factors in Software Quality, National Technical Information Service, Vol. 1, 2 and 3, 1977.

6. ISO, International Organization for Standardization, ISO 9126-1:2001, Software engineering – Product quality – Part 1: Quality model, 2001.

7. Crosby, P. B., Quality Is Free: The Art of Making Quality Certain, McGraw-Hill, New York, 1979.

8. Basili, V., Software Modeling and Measurement: The Goal/Question/Metric Paradigm, Technical Report, University of Maryland at College Park, 1992.

9. Spinellis, D., Code Quality: The Open Source Perspective, Addison-Wesley, 2006, ISBN 0-321-16607-8.

10. Brown et al., AntiPatterns: Refactoring Software, Architectures, and Projects in Crisis, John Wiley, 1998, ISBN 978-0-471-19713.

TESTABILITY                        R       M      C
Unit level testability           4 130     970    495
Integration level testability      835     105      -
TOTAL                            4 965   1 074    495
(R = Reused, M = Modified, C = newly Created source code files)

Figure 8: Index Distribution and Density Graph (index distribution: R 76%, M 16%, C 8%; index density by v(G): R 0.173, M 0.330, C 0.195)

Author: Jean-Louis Letouzey


About inspearit

Contact:

Alain [email protected]

Le Visium, 22 avenue Aristide Briand, CS 90001, 94742 Arcueil Cedex, France. Tel: +33 (0)1 49 08 58 00

www.inspearit.com

Version 4