

SOFTWARE PROCESS IMPROVEMENT AND PRACTICE
Softw. Process Improve. Pract. 2003; 8: 217–231 (DOI: 10.1002/spip.184)

Tool Support for Geographically Dispersed Inspection Teams

Research Section
Filippo Lanubile*,†, Teresa Mallardo and Fabio Calefato
Dipartimento di Informatica, University of Bari, Italy

Software inspection is one of software engineering's best practices for detecting and removing defects early in the development process. However, the prevalence of manual activities and face-to-face meetings within software inspections hinders their applicability in the context of global software development, where software engineering activities are spread across multiple sites and even multiple countries.

In this article, we describe a web-based tool, called the Internet-Based Inspection System (IBIS), that aims to support geographically dispersed inspection teams. On the basis of findings from empirical studies of software inspections, the IBIS tool adopts a reengineered inspection process to minimize synchronous activities and coordination problems. We present the underlying process model, how the tool is used within the inspection stages, and experiences using the IBIS tool as the enabling infrastructure for distributed software inspections. Copyright 2004 John Wiley & Sons, Ltd.

KEY WORDS: software inspection; distributed teams; tool support; empirical study

1. INTRODUCTION

Over the last few years, enterprise organizations have embraced global software development, where software projects are distributed over multiple geographical sites, separated by a national boundary (Carmel 1999, Herbsleb and Moitra 2001). Searching for low-cost but well-trained developers in emerging countries, and integrating groups from mergers and acquisitions are two of the main forces behind global software development. However, distance causes overheads for management and affects teamwork cooperation, thus resulting in longer development time (Herbsleb et al. 2001).

∗ Correspondence to: Filippo Lanubile, Dipartimento di Informatica, University of Bari, via Orabona 4, 70126, Bari, Italy
† E-mail: [email protected]

Copyright 2004 John Wiley & Sons, Ltd.

The focus of our research is on software inspection, an industry-proven type of peer review for detecting and removing defects as early as possible in the software development cycle, thus reducing avoidable rework (Ebenau and Strauss 1994, Freedman and Weinberg 1990, Gilb and Graham 1993, Wiegers 2001). Even if software inspection is considered the 'best practice' for improving software quality, the prevalence of paper-based activities and face-to-face meetings hinders its applicability in the context of global software development (Johnson 1998). A case study at Alcatel's Switching and Routing Division shows that collocated inspection teams were more efficient and effective than distributed teams, where code inspections were conducted remotely (Ebert et al. 2001).

Software inspection is an excellent example of a traditional software process that might be globally deployed, on condition that it is reorganized to reduce synchronization and coordination,


and then supported by some Internet-mediated environment.

In this article, we describe a web-based tool, called the Internet-Based Inspection System (IBIS), that supports geographically dispersed inspection teams. Analogously to Perry et al. (2002), we used findings from empirical studies to determine how to restructure the inspection process in the context of geographical separation of reviewers.

Section 2 of the article summarizes and discusses software inspection, including the process variant we have adopted for our tool. Section 3 presents the IBIS tool and shows its use according to the underlying inspection process. Section 4 describes experiences where the IBIS tool was used as the enabling infrastructure for distributed software inspections. In Section 5, we review related work. In Section 6, we discuss the impact of synchronous and asynchronous activities on the software inspection process. Finally, in Section 7, we provide conclusions.

2. SOFTWARE INSPECTION

Software inspection is an industry best practice for delivering high-quality software. The main benefit of software inspections derives from detecting defects early during software development, thus reducing avoidable rework. Software inspections are distinguished from other types of peer reviews in that they rigorously define:

• a phased process to follow;
• roles performed by peers during review;
• a reading toolset to guide individual defect detection activity;
• forms and report templates to collect product and process data.

From the seminal work of Fagan (1976) to its many variants (Laitenberger and DeBaud 2000), the software inspection process is essentially made up of six consecutive steps, as shown in Figure 1.

During Planning, the moderator selects the inspection team, arranges the inspection material and sends it to the rest of the team, and makes a schedule for the next steps. During Overview, the moderator can optionally present process and product-related information for newcomers, if any. During Preparation, each inspector analyzes the document to become familiar with it and individually find potential defects. During the

Figure 1. Conventional inspection process (Planning, Overview, Preparation, Inspection Meeting, Rework, Follow-up)

Inspection Meeting, all the inspectors meet to collect and discuss the defects from the individual reviews, and review the document further to find additional defects. During Rework, the author revises the document to fix the defects. Finally, during Follow-up, the moderator verifies the author's fixes, gives a final recommendation, and collects process and product data for quality improvement.

The main changes from Fagan's original inspection have been a shift of primary goals for the Preparation and Inspection Meeting stages. The main goal of Preparation has changed from pure understanding to defect detection, and so inspectors have to individually take note of defects. Consequently, the main goal of the Inspection Meeting has been reduced from defect discovery to defect collection and discrimination, including the discussion of defects individually found during Preparation.

In the attempt to reduce the overall cost and total time of the inspection process, the need for a meeting of the whole inspection team has been debated among researchers and practitioners. Parnas and Weiss (1987) first dropped the team meeting in their Active Design Reviews. Then Votta (1993) showed how defect collection meetings lengthened the elapsed time of software inspections at Lucent Technologies by almost one-third, with defects discovered at the meeting (meeting gains) matched by defects found during Preparation but not recorded at the meeting (meeting losses). Further studies (Bianchi et al. 2001, Ciolkowski et al. 1997,


Fusaro et al. 1997, Land et al. 1997, Miller et al. 1998, Porter et al. 1995, Porter and Votta 1998) have also observed that the net meeting improvement (difference between meeting gains and meeting losses) was not positive, and thus nominal teams (teams that do not interact in a face-to-face meeting) are at least equivalent to real teams, at a lower cost and time. However, meetings have been found useful for filtering out false positives (defects erroneously reported as such by inspectors), training novices, and increasing self-confidence (Johnson and Tjahjono 1998, Land et al. 1997).

On the basis of the above empirical studies that question the need for traditional meetings, and on the behavioral theory of group performance, Sauer et al. (2000) have proposed a reorganization of the inspection process to reduce its overall cost and total time. The alternative design for software inspections mainly consists of replacing the Preparation and Inspection Meeting phases of the classical inspection process with three new sequential phases: Discovery, Collection, and Discrimination (see Figure 2).

The Discovery phase reflects the shift of goal for the Preparation phase, which has changed from pure understanding to defect detection, and so inspectors are asked to individually take note of defects.

The other two inspection phases are the result of separating the activities of defect collection (i.e.

Figure 2. Reengineered inspection process (Planning, Overview, Discovery, Collection, Discrimination, Rework, Follow-up)

putting together defects reported by individual reviewers) from defect discrimination (i.e. removing false positives), the goal of finding further defects through team activities having been removed. The Collection phase is an individual task and requires either the moderator or the author himself. The Discrimination phase is the only phase in which inspectors interact in a meeting. Sauer et al. (2000) suggest that the participation of the entire inspection team is not required; the number of discussants can be reduced to a minimal set, even a single expert reviewer paired with the author.

Another change for saving time and diminishing coordination overhead is introduced by skipping the Discrimination phase either entirely, passing all the collated defects directly to the author for rework, or partially, excluding from the discussion any potential defects (found by inspectors during the Discovery phase and merged in the Collection phase) that are considered to have a high likelihood of being true defects. Sauer et al. (2000) suggest selecting for the Discrimination phase only unique defects, that is, defects that were found by only one inspector during the Discovery phase, while excluding duplicates, that is, defects that were discovered by multiple inspectors and were merged during the Collection phase.
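This routing rule can be sketched as a simple filter. The following is an illustrative sketch with a hypothetical data model (each defect carrying a `reporters` list), not the actual IBIS implementation:

```python
def plan_discrimination(collated):
    """Partition collated defects following Sauer et al.'s suggestion:
    duplicates (reported by multiple inspectors) are presumed true
    and routed straight to Rework; unique defects go to Discrimination.
    Each defect is a dict with a 'reporters' list (hypothetical layout)."""
    to_discuss, to_rework = [], []
    for defect in collated:
        if len(defect["reporters"]) > 1:  # duplicate: merged during Collection
            to_rework.append(defect)
        else:                             # unique: found by a single inspector
            to_discuss.append(defect)
    return to_discuss, to_rework

discuss, rework = plan_discrimination([
    {"id": 1, "reporters": ["A"]},
    {"id": 2, "reporters": ["A", "B"]},
])
```

Skipping the Discrimination phase entirely would correspond to routing every collated defect to Rework.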

By reducing the need for synchronous communication between reviewers and the coordination overload on the moderator's shoulders, the reengineered inspection process provides the model upon which to build automated support for running distributed inspections.

3. IBIS

In order to provide an Internet-based infrastructure for geographically distributed inspection teams, we developed the Internet-Based Inspection System.

IBIS is mainly a web-based application, to maximize simplicity of use and deployment. All structured and persistent data are stored as XML documents (Bray et al. 2000), programmatically accessed via the DOM API (Wood et al. 1998), and automatically manipulated by XSL transformations (Adler et al. 2001, Clark 1999). Most of the required groupware features are built from dynamic web pages on the basis of scripts and server-side components. Event notification is achieved through automatic generation of e-mails.
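As an illustration of this storage approach, a per-inspector discovery log might be built as an XML document through the DOM API. The element names below are hypothetical, not the actual IBIS schema:

```python
from xml.dom.minidom import Document

def make_discovery_log(inspector, defects):
    """Build a per-inspector discovery log as an XML DOM document
    (illustrative schema, not the actual IBIS format)."""
    doc = Document()
    log = doc.createElement("discoveryLog")
    log.setAttribute("inspector", inspector)
    doc.appendChild(log)
    for d in defects:
        entry = doc.createElement("defect")
        entry.setAttribute("location", d["location"])
        entry.appendChild(doc.createTextNode(d["description"]))
        log.appendChild(entry)
    return doc

xml_text = make_discovery_log(
    "reviewer1", [{"location": "p. 3", "description": "ambiguous requirement"}]
).toxml()
```

An XSL transformation would then render such a log as an HTML page for the browser.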


In the following sections, we describe the use of the tool according to the seven stages that comprise the reengineered inspection process.

3.1. Planning

A new inspection starts when an author has sent a product to be reviewed and a moderator (who also acts as the coordinator of the inspection team) has determined that the product meets the entry criteria for inspection. The moderator selects a template for the inspection process, which may vary according to the inspection goal (e.g. analyze software requirements documents for the purpose of defect detection) and the reading technique, which inspectors will use for individual analysis (e.g. a checklist for software requirements documents). Then reviewers are invited to be part of the inspection team, and a specific reading technique is assigned to each inspector (see Figure 3).

Finally, the moderator can optionally schedule an overview meeting when the inspection team is unfamiliar with either the inspection process or the product.

3.2. Overview

The Overview meeting is optionally held as a virtual meeting to provide background information on the inspection process or the product being inspected.

The meeting is run using a helper application, called P2PConference (2004), which is launched from the browser. P2PConference is a peer-to-peer remote-conferencing tool, based on the JXTA

Figure 3. Team selection during Planning

framework (Wilson 2002). During a meeting, the speaker delivers his/her own text-based speech and the other participants can ask questions, after having 'raised their hands' (the moderator manages a queue of the pending questions). The moderator can also forbid participants to type and send statements. Figure 4 shows a screenshot of the conferencing tool.

3.3. Discovery

In the Discovery stage, members of the team individually perform the review task and record potential defects on a discovery log (Figure 5).

In order to support defect detection, inspectors can answer checklist questions or, if they need more guidance, they can follow a reading scenario (Basili et al. 1996, Porter et al. 1995), which

Figure 4. Remote conferencing during Overview

Figure 5. Defect logging during Discovery


gives a step-by-step description of the activities an inspector should perform while reading a document for the purpose of defect detection.

Defect logging can be performed in multiple sessions. When the Discovery task is finished, a notification message is sent to the moderator. The moderator can also browse discovery logs to check whether inspectors have adequately performed their task, and send invitations to complete the assignment.

3.4. Collection

In the Collection stage, all the Discovery logs are collapsed into a single defect inspection list. Then either the moderator or the author looks for duplicate defects within the merged collection of discovery logs.

This is an iterative task that can also be performed over multiple sessions. For each iteration, the merged defect list is sorted with respect to location fields (e.g. document page number or requirement number) or questions related to the assigned reading technique (i.e. which question in a checklist or a reading scenario was helpful for defect discovery). When two or more defects are recognized as identical or almost identical (Figure 6), they are collapsed into a single defect entry and set as a duplicate (Figure 7). Duplicates can be excluded from the Discrimination stage and be routed directly to the Rework stage.
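The merge-and-sort step can be sketched as follows; the record layout is hypothetical, since the real tool performs this through its web forms:

```python
def merge_discovery_logs(logs):
    """Collate per-inspector discovery logs into one defect list,
    sorted by location, keeping track of who reported each entry.
    `logs` maps inspector name -> list of defect dicts (hypothetical layout)."""
    merged = [dict(entry, reporter=inspector)
              for inspector, entries in logs.items()
              for entry in entries]
    merged.sort(key=lambda e: e["location"])  # e.g. page or requirement number
    return merged

merged = merge_discovery_logs({
    "A": [{"location": "req. 5", "description": "missing precondition"}],
    "B": [{"location": "req. 2", "description": "inconsistent term"}],
})
# sorted by location, so B's report on req. 2 comes first
```

Recognizing two entries as the same defect remains a human judgment; the tool only juxtaposes nearby entries to make that judgment easier.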

Looking at the merged defect list, the moderator may plan the Discrimination stage by selecting

Figure 6. Duplicate search during Collection

Figure 7. Duplicate setting during Collection

Figure 8. Discrimination planning during Collection

which defects are worth being discussed and which inspectors need to participate in the discussion (Figure 8). This decision can be supported by the display of inspectors' performance statistics, such as total number of reported defects and number of unique defects.

3.5. Discrimination

In the Discrimination stage, discussion takes place asynchronously, as in a discussion forum, where


Figure 9. Defects as discussion threads during Discrimination

each defect is mapped to a threaded discussion (Figure 9).

Discussants can add comments within a thread and express votes by rating any potential defect as a true defect or a false positive. When a consensus has been reached, the moderator can mark potential defects as false positives. False positives appear struck through in the discussion forum and are excluded from rework.

3.6. Rework

In the Rework stage, the author is invited to correct the defects found. The author fills in defect resolution entries, including information on how each defect has been fixed or, if not, the explanation (Figure 10). At the end, the author can upload the revised document, and a notification message is sent to the moderator.

3.7. Follow-up

In the Follow-up stage, the moderator determines whether the defects that have been found have been corrected by the author and whether additional defects have been introduced. In order to decide whether exit criteria have been met or another rework cycle is needed, the moderator may invite additional reviewers from among the inspection team members.

Once the inspection is closed with a final recommendation, the system sends a notification mail to the entire team, including a summary and a detailed report, as feedback for participants (Figure 11). The detailed report mainly consists of the final

Figure 10. Defect correction during Rework

Figure 11. Final report sent to the inspection team

defect list, together with the author's comments, and the list of defects that have been reported by each inspector. Both the summary and detailed reports include process and product data that have been collected during the inspection (the list of measures is shown in Table 1). Collected measures can be backed up to a project repository and analyzed for the purpose of process assessment and improvement.

4. EXPERIENCES WITH IBIS

We have conducted various distributed inspections as part of three consecutive editions of a web


Table 1. Product and process data measured in IBIS

Measure: Description
Document size: Number of pages (for requirements or design documents) or LOC (for source code).
Discovery effort: Person/hours spent during the Discovery stage.
Reported defects: Number of defects from the Discovery stage.
Unique defects: Number of defects that were reported by only one inspector during the Discovery stage.
Duplicate defects: Number of defects that were reported by multiple inspectors during the Discovery stage.
Collated defects: Number of defects resulting from merging unique and duplicate defects during the Collection stage.
True defects: Number of defects for which a general consensus is achieved during the inspection process.
Defect density: True defects/document size.
True unique defects: Number of true defects that were reported by only one inspector during the Discovery stage.
True duplicate defects: Number of true defects that were reported by multiple inspectors during the Discovery stage.
Removed false positives: Number of defects that are removed by the moderator at the end of the Discrimination stage because they are not considered as defects.
Slipped false positives: Number of defects from the Discrimination stage that are not acknowledged as true defects by the author during the Rework stage.
Fixed defects: Number of true defects for which the author provides a fix during the Rework stage.
Unfixed defects: Number of true defects for which the author does not provide a fix during the Rework stage.
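The counting measures in Table 1 are derivable from the per-inspector reports once defects have been matched. A minimal sketch, assuming each defect is identified by an agreed-upon ID and reading 'reported defects' as the total number of individual reports:

```python
from collections import Counter

def team_counts(reports):
    """Core counts from Table 1. `reports` maps inspector -> set of
    defect IDs found during Discovery (hypothetical representation)."""
    occurrences = Counter(d for ids in reports.values() for d in ids)
    unique = sum(1 for n in occurrences.values() if n == 1)
    duplicate = sum(1 for n in occurrences.values() if n > 1)
    return {
        "reported": sum(occurrences.values()),  # every individual report
        "unique": unique,                       # found by exactly one inspector
        "duplicate": duplicate,                 # found by several inspectors
        "collated": unique + duplicate,         # merged list after Collection
    }

counts = team_counts({"A": {1, 2, 3}, "B": {2, 4}})
# defect 2 was found by both A and B: 5 reports, 3 unique, 1 duplicate, 4 collated
```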

engineering course at the University of Bari. Participants were graduate students interacting with the IBIS tool from university laboratories or home, thus simulating the conditions of geographically dispersed inspection teams.

4.1. First Experience

In the first experience with IBIS, we conducted two distributed inspections with the goal of testing the tool, gathering initial feedback from users, and observing the behavior of unusually large inspection teams (Lanubile and Mallardo 2002). In the first inspection, ten reviewers had to find defects in the IBIS requirements document (19 pages long). In the second inspection, eight reviewers had to detect programming style violations in a dynamic web document (694 lines of markup elements mixed with scripting code), which was part of the IBIS configuration. Using the IBIS tool itself, we measured inspection performance both at the individual and team level.

Figure 12 shows the box plots of individual reviewers' performance. Individual findings varied considerably between reviewers, particularly for the first inspection. In the requirements inspection, there were a couple of outstanding reviewers who reported 24 defects each, with very few duplicates (one and four, respectively). On the other hand, there were also poor reviewers who contributed only three reported defects or no unique defects. In the second inspection (checking for style conformance), there were more defects reported by

Figure 12. Individual reviewers' performance in the first experience (box plots of reported, unique, and duplicate defects for Insp1: Reqmts and Insp2: ProgStyle)

each reviewer, but most defects overlapped rather than being unique contributions. Thus, many more reviewers independently discovered the same programming style violation than independently detected the same requirements defect.

Table 2 summarizes the measures for both inspections at the team level. Looking at the results, we highlight the following differences between the two inspections:

• Defects worth discussing, and thus selected for the Discrimination stage, were far more numerous for the requirements document (discussion filtering = 0.47) than for the dynamic web document (discussion filtering = 0.13).


Table 2. Inspection team performance in the first experience

Measure: Inspection 1 (requirements document) / Inspection 2 (programming style)
Collated defects: 72 / 68
Defects selected for discrimination: 34 / 9
Discussion filtering (selected defs/collated): 0.47 / 0.13
Messages: 78 / 16
Discussion intensity (messages/selected defs): 2.3 / 1.8
Removed false positives: 21 / 6
Slipped false positives: 7 / 8
Discrimination efficacy (removed FP/all FP): 0.75 / 0.43
True defects: 44 / 54

• Discussants were more active when raising comments about requirements defects (discussion intensity = 2.3) than programming style deviations (discussion intensity = 1.8).

• Discriminating true defects from false positives was much more effective for requirements defects (discrimination efficacy = 0.75) than for style violations (discrimination efficacy = 0.43).
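The three ratios used above follow directly from the counts in Table 2; for instance:

```python
def team_ratios(collated, selected, messages, removed_fp, slipped_fp):
    """Team-level ratios as defined in Table 2: discussion filtering,
    discussion intensity, and discrimination efficacy."""
    return {
        "discussion_filtering": round(selected / collated, 2),
        "discussion_intensity": round(messages / selected, 1),
        "discrimination_efficacy": round(removed_fp / (removed_fp + slipped_fp), 2),
    }

# Inspection 1 (requirements document), counts taken from Table 2
insp1 = team_ratios(collated=72, selected=34, messages=78,
                    removed_fp=21, slipped_fp=7)
# -> discussion filtering 0.47, discussion intensity 2.3,
#    discrimination efficacy 0.75, matching the table
```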

We also collected qualitative data from a questionnaire submitted to the participants at the end of the two inspections. Apart from reporting some problems with the tool, students stated that the IBIS tool was straightforward to use and convenient for the purpose, because they could work at a time and place of their choosing.

All students who had been invited into the Discrimination stage found the discussion useful for learning purposes, but most of them acknowledged that discussing programming style deviations was not so worthwhile, because the style was explicitly specified in a supplementary document (available to all the inspectors through IBIS).

When asked if they would have preferred to discuss in a face-to-face meeting or in a chat, the answers were of two types. Some answered that they would prefer a chat or a face-to-face meeting, because they would feel less alone and more active in the discussion. Others answered that they liked being part of an asynchronous discussion because, not being in a hurry, they had time to think about each other's messages and could write better comments.

4.2. Second Experience

In the second experience, IBIS was used as an instrument for an empirical study on software inspections (Lanubile and Mallardo 2003). The goal of the study was to find out how to reduce meeting effort by restricting the scope of discussions and the number of discussants. For this purpose, we aimed to identify: (1) which collated defects are worth a discussion, and (2) which discussants actively contribute to the decision of removing false positives. The former factor allows the moderator to shorten the discussion agenda, while the latter makes it possible to reduce the size of the discussion group. Both factors have an effect on the meeting effort.

The second edition of the web engineering course required students, as project work, to develop a web application, including documentation. The requirements documents of nine student projects (ranging from 7 to 23 pages) were submitted for inspection, and a member of each development team was selected to act as the author in the inspection of his/her requirements document. In order to have a trained moderator, one of the researchers played the role of moderator for all nine inspections. The rest of the inspection team was formed by two or three expert reviewers, external to the class, plus a student who was randomly selected from the class.

The Discrimination stage was planned to include all the collated defects (both unique and duplicate defects) and to invite the entire inspection team to the discussion.

We specifically tested the hypothesis that defects individually found by multiple inspectors (duplicates) are accepted as true defects in a group discussion, and can then skip the Discrimination stage, as suggested by Sauer et al. (2000).

We found that unique defects had higher chances than duplicates of being identified as false positives to be removed (conversely, duplicates had higher chances of being accepted as true defects). Figure 13 shows multiple bar plots for the two variables as measured in the nine inspections. We also found that decisions about false positives were actually supported by group consensus, because negative acknowledgments (votes as false positives) were proportionally higher for duplicates than for unique defects (as shown in Figure 14). These findings are consistent with the hypothesis of considering duplicates unworthy of a group discussion for the purpose of discrimination.


Figure 13. Percentage of duplicates and of unique defects removed as false positives in each inspection (Insp1–Insp9)

Figure 14. Votes as false positive per duplicate and per unique defect in each inspection (Insp1–Insp9)

We also tested another hypothesis in the work of Sauer et al. (2000), stating that an expert pair performs the Discrimination task as well as any larger group. Figure 15 shows multiple bar plots for three variables: messages from the moderator, the author, and the most active reviewer, respectively.

We found that most of the group discussions consisted of messages sent by either the moderator or the document's author. The other inspectors were less active in the discussion and mainly expressed their judgments by means of electronic votes without complementary messages. Thus, these findings confirm the indication of limiting discrimination meetings to a couple of discussants (in this case, the moderator and the author).

At the end of each inspection, we collected qualitative data from interviews with the participants. Students, when acting in the role of author of a requirements document, said they had found the discussion useful, because they had a chance to

Figure 15. Posted messages per sender (messages from the moderator, from the author, and from the most active reviewer, for each of Insp1–Insp9; y-axis 0–45)

learn from experts how to write a good requirements document.

When asked how to improve discussions in the Discrimination stage, most participants complained about late answers to their messages. However, they did not ask for a synchronous discussion or a face-to-face meeting; rather, they pointed out the lack of features such as subscription to discussion threads or presentation of the most recent and most active discussions.

4.3. Third Experience

In the third experience with IBIS, we ran a controlled experiment with the goal of finding out whether remote asynchronous discussions can replace face-to-face (F2F) meetings in the Discrimination stage without losses in effectiveness.

As in the second edition, students were required to develop a web application as the final course assignment. Again, the requirements documents of twelve student projects were peer reviewed with the help of the IBIS tool. However, this time six inspection teams performed the Discrimination task by discussing from a distance in an asynchronous mode, while a control group of six other inspection teams met in a university lab for the same purpose.

Participants were randomly assigned as reviewers to inspection teams. The Discrimination stage was planned to include the whole inspection team (three reviewers plus the author) and all the potential defects, which had been individually reported during the Discovery stage and merged by the author in the Collection stage. One of the inspectors was randomly selected to play the role of


moderator. The moderator was assigned the responsibility of managing the discussion and marking defects as false positives, provided that there was consensus among the other inspectors.
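As a sketch, the moderator-side rule described above (remove a potential defect as a false positive only with the other inspectors' consensus) might look like the following. The function name and vote encoding are illustrative, not IBIS's actual API, and consensus is modelled here as unanimity among the non-moderator inspectors:

```python
def can_remove_as_false_positive(votes):
    """Return True when every inspector other than the moderator has voted
    the potential defect a false positive. `votes` maps inspector name to
    'FP' or 'defect'; the moderator's own judgment is not in the map.
    (Hypothetical encoding; consensus modelled as unanimity.)"""
    return len(votes) > 0 and all(v == "FP" for v in votes.values())

# The author and two reviewers agree: the moderator may remove the defect.
print(can_remove_as_false_positive(
    {"author": "FP", "rev1": "FP", "rev2": "FP"}))      # True
# One dissenting vote blocks removal.
print(can_remove_as_false_positive(
    {"author": "FP", "rev1": "defect", "rev2": "FP"}))  # False
```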

We invited participants to F2F meetings two days in advance. In order to guarantee similar conditions between the two types of inspections, two days were made available for asynchronous discussions.

At the team level, we collected the same measures as in the first experience, with the exception of the number of posted messages and discussion intensity, which could not be measured for F2F meetings.

The results are summarized in Table 3. Asynchronous discussions were slightly more effective (average discrimination efficacy = 0.895) than F2F meetings (average discrimination efficacy = 0.888). Only in two cases (Insp8 and Insp10) is the effectiveness of the asynchronous discussion lower than the minimum effectiveness of the F2F meetings (Insp4). These low values seem to depend on poor discussion intensity.

Although the sample size is too small to run a test for differences, we can at least say that asynchronous discussions were as effective as F2F meetings at discriminating between false positives and true defects.
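Using the Table 3 counts of removed and slipped false positives, the per-team discrimination efficacy (removed FP / all FP) and the two group averages can be recomputed as a quick check. This is a sketch; note that averaging the unrounded ratios gives about 0.889 for the F2F group, while the 0.888 reported above comes from averaging the two-decimal per-team values shown in the table:

```python
# Removed and slipped false positives per inspection, from Table 3.
# Insp1-6 used F2F meetings; Insp7-12 used asynchronous discussions.
removed = [12, 8, 21, 5, 13, 5, 22, 13, 31, 14, 10, 6]
slipped = [3, 1, 0, 1, 3, 0, 0, 7, 2, 4, 0, 0]

# Discrimination efficacy = removed FP / all FP (all FP = removed + slipped).
efficacy = [r / (r + s) for r, s in zip(removed, slipped)]

f2f_avg = sum(efficacy[:6]) / 6    # over unrounded ratios: ~0.889
async_avg = sum(efficacy[6:]) / 6  # ~0.895
print(round(f2f_avg, 3), round(async_avg, 3))  # prints: 0.889 0.895
```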

5. RELATED WORK

Conventional software inspection activities are paper-based and require face-to-face meetings, but neither of these two characteristics survives the geographical separation of inspectors.

There have been various research prototypes that provide an infrastructure technology for distributed software inspections. In a survey of computer support systems for software inspections, MacDonald and Miller (1999) reviewed 14 tools that had moved the entire inspection process online, thus eliminating the need for paper. However, this first generation of tools did not provide support for distributed inspection teams. For example, ICICLE (Brothers et al. 1990) was designed to completely support the inspection of C and C++ code. The inspection meeting was intended to be collocated in the same room, with synchronized inspectors' screens. Another example is InspeQ (Knight and Myers 1993), which was developed to support only individual analysis, with no support for group activities.

Conversely, CSI (Mashayekhi et al. 1993), Scrutiny (Gintell et al. 1993) and ASSIST (MacDonald and Miller 1997) focus on supporting a distributed environment by emulating face-to-face inspection meetings. CSI supports synchronous activities by means of shared displays and audio-conferencing, while Scrutiny uses teleconferencing facilities or simple text-based messages between participants. ASSIST is a flexible inspection tool supporting different inspection processes. It adds a shared whiteboard and threads of discussion to text/audio/videoconferencing.

CSI, Scrutiny, and ASSIST can be considered the precursors of inspection tools based on group support systems (GSS), a form of groupware. A

Table 3. Inspection team performance in the third experience

                                        F2F meeting                          Asynchronous discussion
                              Insp1  Insp2  Insp3  Insp4  Insp5  Insp6   Insp7  Insp8  Insp9  Insp10  Insp11  Insp12
Collated defects                 19     25     43     27     35     17      31     28     45      60      17      35
Defects selected for
  discrimination                 19     25     43     27     35     17      31     28     45      60      17      35
Discussion filtering
  (selected defs/collated)     1.00   1.00   1.00   1.00   1.00   1.00    1.00   1.00   1.00    1.00    1.00    1.00
Messages                          –      –      –      –      –      –      72     48    124     151      38      85
Discussion intensity
  (messages/selected defs)        –      –      –      –      –      –    2.32   1.71   2.76    2.52    2.24    2.43
Removed false positives          12      8     21      5     13      5      22     13     31      14      10       6
Slipped false positives           3      1      0      1      3      0       0      7      2       4       0       0
Discrimination efficacy
  (removed FP/all FP)          0.80   0.89   1.00   0.83   0.81   1.00    1.00   0.65   0.94    0.78    1.00    1.00
True defects                      4     16     21     21     19     12       9      8     12      42       7      29


GSS tool supports synchronous meetings by means of a shared list of tools, including a categorizer, group outliner, topic commenter, voting, and so on. Modern GSS tools are Internet-based and can therefore also support group tasks in distributed organizational contexts.

The GRIP tool (Halling et al. 2001) has been developed for inspecting requirements documents by tailoring a COTS GSS. GRIP exploits the GSS built-in tools to run collaborative synchronous inspection meetings. Experiments performed in an academic environment have shown that GRIP-based inspections outperform traditional manual inspections. Inspectors interacted at the same time and in the same place, through laboratory computers (Grunbacher et al. 2003, Halling et al. 2003).

Another GSS tool has been successfully applied in an industrial environment to support code inspections (van Genuchten et al. 2001). Inspectors worked from their office desks but were located at the same organizational site.

GSS-based tools attempt to preserve the meeting-based character of the classical inspection process. IBIS also supports synchronous remote meetings, through the P2PConference tool, but only in the Overview stage, where novices learn from the author about the artifact to inspect and from the moderator about the process to follow.

Another approach is to restructure the inspection process to eliminate the need for the inspection meeting or to replace it with asynchronous discussion forums. The goal is to remove the main bottleneck of the inspection process without reducing inspection effectiveness. Furthermore, waiting for overlapping time windows for synchronous communication can dramatically increase the time to complete an inspection when the inspection team works from multiple time zones.

CSRS, combined with the FTArm inspection method (Johnson 1994), and CAIS (Mashayekhi et al. 1994) were the first tools to depart from conventional meeting-based inspections by focusing on message-based asynchronous communication. AISA (Stein et al. 1997) evolved from CAIS to provide inspectors with a web browser as a front-end for performing inspections. The University of Oulu has developed a number of web-based inspection tools, starting from WiP (Harjumaa and Tervonen 1998), then WiT (Harjumaa and Tervonen 2000), and finally XATI (Hedberg and Harjumaa 2002). These tools support asynchronous inspections while improving user interaction and interoperability, thanks to the evolution of web technologies (from Java applets to XML).

HyperCode (Perry et al. 2002) is a web-based tool that eliminates the inspection meeting and reduces the number of stages in the inspection process. Findings from individual defect discovery are shared among the inspection team as soon as they are detected. The Discovery stage and the Collection stage are performed concurrently, thus dramatically reducing synchronization tasks. HyperCode has been used at Lucent Technologies to support geographically distributed reviewers and has resulted in a reduction of about 25% in the time needed to complete code inspections. However, it is still unknown how shared knowledge of defects affects the effectiveness of inspections, especially considering that HyperCode does not provide any guidance to inspectors while reading.

As shown in Table 4, IBIS has many features similar to those proposed in asynchronous software inspection tools, which organize defects to form threaded discussions.

However, discussions in IBIS are optional (as in WiT) and are limited to discriminating true defects from false positives, according to the redefined process beneath the tool. When discussions for discrimination purposes are conducted, IBIS makes it possible to reduce both the number of participants (even down to a pair of discussants) and the number of defects that are worth a discussion (e.g. only unique defects). The selection is performed by the moderator before entering the Discrimination stage.
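A minimal sketch of that pre-Discrimination filtering, under the "only unique defects" criterion: given the collated list, keep for discussion only the defects reported by a single reviewer. The helper name and data structure are illustrative, not IBIS's API:

```python
def select_for_discrimination(collated):
    """Keep only unique defects, i.e. potential defects reported by exactly
    one reviewer; duplicates are accepted as true defects without discussion.
    `collated` maps defect id -> list of reviewers who reported it
    (hypothetical structure, not IBIS's data model)."""
    return [d for d, reporters in collated.items() if len(reporters) == 1]

collated = {
    "D1": ["rev1"],          # unique -> worth a discussion
    "D2": ["rev1", "rev2"],  # duplicate -> accepted as a true defect
    "D3": ["rev3"],          # unique -> worth a discussion
}
print(select_for_discrimination(collated))  # ['D1', 'D3']
```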

GRIP also makes it possible to reduce the number of reported defects that are worth a discussion (Grunbacher et al. 2003). However, the selection criterion adopted by GRIP is based on votes expressed by inspectors at the beginning of the Discrimination stage. Only reported defects with diverging votes are discussed but, unlike in IBIS, in a synchronous mode.

Furthermore, IBIS improves support for individual analysis by providing procedural reading scenarios in addition to checklists, and by allowing the moderator to assign a specific reading support on an individual basis. These features are shared only with GRIP.


Table 4. Comparison of the features in tool support for software inspection

Tool                                     | Document types    | Meeting           | Voting | Reading support      | Detection responsibility | Data collection and measurement | Defect classification | Enabling infrastructure
ICICLE (Brothers et al. 1990)            | Source code       | F2F               | No     | Ad hoc               | Identical                | Yes                             | Yes                   | Client-server
InspeQ (Knight and Myers 1993)           | Any text document | F2F               | No     | Checklists           | Identical                | No                              | No                    | Client-server
CSI (Mashayekhi et al. 1993)             | Any document      | Sync.             | No     | Checklists           | Identical                | Yes                             | Yes                   | Client-server
Scrutiny (Gintell et al. 1993)           | Any text document | Sync.             | Yes    | Ad hoc               | Identical                | Yes                             | Yes                   | Client-server
ASSIST (MacDonald and Miller 1997)       | Any document      | Async./sync.      | Yes    | Checklists           | Identical                | Yes                             | Yes                   | Client-server
GRIP (Halling et al. 2001)               | Any document      | Sync.             | Yes    | Checklists/scenarios | Identical/distinct       | Yes                             | Yes                   | GSS-based
GSS at Baan (van Genuchten et al. 2001)  | Any document      | Sync.             | Yes    | Ad hoc               | Identical                | No                              | No                    | GSS-based
CSRS (Johnson 1994)                      | Any text document | No                | Yes    | Checklists           | Identical                | Yes                             | Yes                   | Client-server
CAIS (Mashayekhi et al. 1994)            | Any text document | Sync.             | Yes    | Checklists           | Identical                | No                              | Yes                   | Client-server
AISA (Stein et al. 1997)                 | Any document      | Async.            | Yes    | Ad hoc               | Identical                | No                              | Yes                   | Web-based
WiP (Harjumaa and Tervonen 1998)         | Any text document | No                | No     | Checklists           | Identical                | No                              | Yes                   | Web-based
WiT (Harjumaa and Tervonen 2000)         | Any text document | Async. (optional) | No     | Checklists           | Identical                | No                              | Yes                   | Web-based
XATI (Hedberg and Harjumaa 2002)         | Any HTML document | No                | No     | Checklists           | Identical                | Yes                             | Yes                   | Web-based
HyperCode (Perry et al. 2002)            | Source code       | No                | No     | Ad hoc               | Identical                | No                              | No                    | Web-based
IBIS (Lanubile and Mallardo 2003)        | Any document      | Async. (optional) | Yes    | Checklists/scenarios | Identical/distinct       | Yes                             | Yes                   | Web-based


6. DISCUSSION

Software inspections have been adopted for many years by large industrial organizations because of their impact on product quality and the cost of nonquality. The first generation of computer support for software inspection used computers as replacements for clerical and labor-intensive tasks, to increase inspection effectiveness and reduce the cost of inspection. The ever-growing phenomenon of global software development has posed a new challenge to computer-supported software inspection: how to bridge the gap in physical distance among inspection team members.

In the previous section, we presented various attempts that have exploited Internet technology to let inspections be conducted online. One major approach has been to provide software inspection tools that incorporate synchronous meeting support systems as an alternative to face-to-face meetings. This approach (different place, same time) still preserves the classical inspection process, which is built around a structured encounter of technical people. However, the blocking effects inherent in synchronous meetings, due to the divergent schedules of participants, augment idle time and introduce delays that negatively affect the inspection interval time (Perry et al. 2002), and hence the interval time of the overall software development process. For this reason, asynchronous mechanisms are recommended for the software inspection process, while meetings are considered 'the phase of last resort' (Johnson 1998).

Idle premeeting time can be even longer for global software inspections when time zones differ between team members. Time zone disparity between distant sites can leave few or no overlapping work hours for synchronous activities such as meetings. Even a time difference of a few hours can be a challenge when coordinating personal schedules, because of dissimilar time slots for lunch and different national holidays.
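The overlap problem can be made concrete with a small calculation (a sketch; the sites and office hours below are invented for illustration): convert each site's office hours to UTC and intersect the two windows.

```python
def overlap_hours(site_a, site_b):
    """Overlapping office hours (in hours) between two sites.
    Each site is (utc_offset, start_local, end_local), hours in 0-24;
    working days that wrap past midnight are not handled in this sketch."""
    starts, ends = [], []
    for utc_offset, start, end in (site_a, site_b):
        starts.append(start - utc_offset)  # local time -> UTC
        ends.append(end - utc_offset)
    return max(0, min(ends) - max(starts))

bari = (1, 9, 17)         # UTC+1, 9:00-17:00 local
bangalore = (5.5, 9, 17)  # UTC+5:30, 9:00-17:00 local (hypothetical partner site)
print(overlap_hours(bari, bangalore))  # 3.5 overlapping hours
```

With a West Coast US partner (UTC-8, same office hours) the overlap drops to zero, which is exactly the situation where synchronous meetings become impractical.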

Another problem that hinders synchronous communication in global software development is language difference. English is the common cross-border language for software business. However, nonnative English speakers can find it difficult to understand, write, or speak at the pace imposed by real-time communication with native English-speaking partners.

7. CONCLUSIONS

In this paper, we have presented IBIS, a web-based support tool for geographically distributed inspections. On the basis of lessons learned from empirical studies of software inspections, IBIS follows a different-place, different-time approach, with a prevalence of asynchronous communication among inspectors. IBIS adopts a reengineered inspection process to minimize coordination problems without losing the advantages of inspections (phased process, reading techniques, roles, measurement) over less formal review processes. The conformance to a specific but flexible redesigned inspection process, and the support for different reading techniques in the individual defect discovery activity, make IBIS different from other asynchronous web-mediated inspection tools.

We can identify three lessons drawn from our experiences in running distributed inspections through the IBIS tool. First, we have learnt that, having removed the need for the inspection team to meet in the same place and at the same time, the usual team size limit of five to seven members can be exceeded without incurring an unbearable overhead. Second, we have learnt how to optimize the redesigned inspection process by focusing the discussion on those defects that have been reported by only one reviewer, and by restricting the number of discussants to a subset of the whole inspection team. Finally, we have learnt that asynchronous discussions can be as effective as F2F meetings when discriminating between true defects and false positives.

Further work will be to empirically assess the effects of synchronous and asynchronous approaches on inspection effectiveness, cost, and interval time. Pursuing these investigations further will require both controlled experiments and field studies, and the availability of a global software development context, with differences in distance, time zone, and national culture.

ACKNOWLEDGEMENTS

We thank the anonymous reviewers for their helpful improvement suggestions. We are also grateful to the students at the University of Bari who performed distributed inspections and provided valuable feedback to enhance the tool.


REFERENCES

Adler S, Berglund A, Caruso J, Deach S, Graham T, Grosso P, Gutentag E, Milowski A, Parnell S, Richman J, Zilles S. 2001. Extensible Stylesheet Language (XSL) 1.0, W3C Recommendation, World Wide Web Consortium. http://www.w3.org/TR/xsl/.

Basili VR, Green S, Laitenberger O, Lanubile F, Shull F, Sorumgard S, Zelkowitz M. 1996. The empirical investigation of perspective-based reading. Empirical Software Engineering 1(2): 133–164.

Bianchi A, Lanubile F, Visaggio G. 2001. A controlled experiment to assess the effectiveness of inspection meetings. Proceedings of 7th METRICS (London, England, April 2001). IEEE Computer Society Press: Los Alamitos, CA, 42–50.

Bray T, Paoli J, Sperberg-McQueen CM, Maler E. 2000. Extensible Markup Language (XML) 1.0 (Second Edition), W3C Recommendation, World Wide Web Consortium. http://www.w3.org/TR/REC-xml.

Brothers L, Sembugamoorthy V, Muller M. 1990. ICICLE: groupware for code inspection. Proceedings of Conference on Computer Supported Cooperative Work, Los Angeles, CA, October 1990, 169–181.

Carmel E. 1999. Global Software Teams: Collaborating Across Borders and Time Zones. Prentice Hall PTR: San Francisco, CA.

Ciolkowski M, Differding C, Laitenberger O, Munch J. 1997. Empirical Investigation of Perspective-based Reading: A Replicated Experiment. ISERN Report 97-13.

Clark J. 1999. Extensible Stylesheet Language Transformations (XSLT) 1.0, W3C Recommendation, World Wide Web Consortium. http://www.w3.org/TR/xslt.

Ebenau RG, Strauss SH. 1994. Software Inspection Process. McGraw Hill: New York, USA.

Ebert C, Parro CH, Suttels R, Kolarczyk H. 2001. Improving validation activities in a global software development. Proceedings of ICSE (Toronto, Canada, May 2001). IEEE Computer Society Press: Los Alamitos, CA, 545–554.

Fagan ME. 1976. Design and code inspections to reduce errors in program development. IBM Systems Journal 15(3): 182–211.

Freedman DP, Weinberg GM. 1990. Handbook of Walkthroughs, Inspections, and Technical Reviews. Dorset House Publishing: New York, USA.

Fusaro P, Lanubile F, Visaggio G. 1997. A replicated experiment to assess requirements inspection techniques. Empirical Software Engineering 2: 39–57.

Gilb T, Graham D. 1993. Software Inspection. Addison-Wesley: Reading, MA, USA.

Gintell J, Arnold J, Houde M, Kruszelnicki J, McKenney R, Memmi G. 1993. Scrutiny: a collaborative inspection and review system. Proceedings of 4th European Software Engineering Conference, Garmisch-Partenkirchen, Germany, September 1993.

Grunbacher P, Halling M, Biffl S. 2003. An empirical study on groupware support for software inspection meetings. Proceedings of 18th International Conference on Automated Software Engineering (Montreal, Canada, October 2003). IEEE Computer Society Press: Los Alamitos, CA, 4–11.

Halling M, Biffl S, Grunbacher P. 2003. An experiment family to investigate the defect detection effect of tool-support for requirements inspection. Proceedings of 9th International Software Metrics Symposium (Sydney, Australia, September 2003). IEEE Computer Society Press: Los Alamitos, CA, 278–285.

Halling M, Grunbacher P, Biffl S. 2001. Tailoring a COTS group support system for software requirements inspection. Proceedings of 16th International Conference on Automated Software Engineering (San Diego, CA, November 2001). IEEE Computer Society Press: Los Alamitos, CA, 201–210.

Harjumaa L, Tervonen I. 1998. A WWW-based tool for software inspection. Proceedings of 31st HICSS Conference (Hawaii, HI, January 1998), Vol. III. IEEE Computer Society Press: Los Alamitos, CA, 379–388.

Harjumaa L, Tervonen I. 2000. Virtual software inspections over the Internet. Proceedings of 3rd ICSE Workshop on Software Engineering over the Internet, Limerick, Ireland, June 2000.

Hedberg H, Harjumaa L. 2002. Virtual software inspections for distributed software engineering projects. Proceedings of ICSE International Workshop on Global Software Development, Orlando, FL, May 2002.

Herbsleb JD, Mockus A, Finholt TA, Grinter RE. 2001. An empirical study of global software development: distance and speed. Proceedings of ICSE (Toronto, Ontario, May 2001). IEEE Computer Society Press: Los Alamitos, CA, 81–90.

Herbsleb JD, Moitra D. 2001. Global software development. IEEE Software 18(2): 16–20.

Johnson PM. 1994. An instrumented approach to improving software quality through formal technical review. Proceedings of 16th International Conference on Software Engineering (Sorrento, Italy, May 1994). IEEE Computer Society Press: Los Alamitos, CA, 113–122.

Johnson PM. 1998. Reengineering inspection. Communications of the ACM 41(2): 49–52.


Johnson PM, Tjahjono D. 1998. Does every inspection really need a meeting? Empirical Software Engineering 3: 9–35.

Knight JC, Myers EA. 1993. An improved inspection technique. Communications of the ACM 36(11): 51–61.

Laitenberger O, DeBaud JM. 2000. An encompassing life cycle centric survey of software inspection. The Journal of Systems and Software 50: 5–31.

Land LPW, Jeffery R, Sauer C. 1997. Validating the defect detection performance advantage of group designs for software reviews: report of a replicated experiment. Caesar Technical Report 97/2, University of New South Wales, Sydney, Australia.

Lanubile F, Mallardo T. 2002. Preliminary evaluation of tool-based support for distributed inspection. Proceedings of ICSE International Workshop on Global Software Development, Orlando, FL, May 2002.

Lanubile F, Mallardo T. 2003. An empirical study of web-based inspection meetings. Proceedings of 2nd International Symposium on Empirical Software Engineering (Roma, Italy, October 2003). IEEE Computer Society Press: Los Alamitos, CA, 244–251.

MacDonald F, Miller J. 1997. A software inspection process definition language and prototype support tool. Software Testing, Verification, and Reliability 7(2): 99–128.

MacDonald F, Miller J. 1999. A comparison of computer support systems for software inspection. Automated Software Engineering 6: 291–313.

Mashayekhi V, Drake JM, Tsai W-T, Riedl J. 1993. Distributed, collaborative software inspection. IEEE Software 10(5): 66–75.

Mashayekhi V, Feulner C, Riedl J. 1994. CAIS: collaborative asynchronous inspection of software. Proceedings of 2nd ACM SIGSOFT Symposium on the Foundations of Software Engineering, New Orleans, LA, December 1994.

Miller J, Wood M, Roper M. 1998. Further experiences with scenarios and checklists. Empirical Software Engineering 3: 37–64.

Parnas DL, Weiss DM. 1987. Active design reviews: principles and practice. Journal of Systems and Software 7: 259–265.

Perry DE, Porter A, Wade MW, Votta LG, Perpich J. 2002. Reducing inspection interval in large-scale software development. IEEE Transactions on Software Engineering 28(7): 695–705.

Porter A, Votta LG. 1998. Comparing detection methods for software requirements specification: a replication using professional subjects. Empirical Software Engineering 3: 355–379.

Porter A, Votta LG, Basili VR. 1995. Comparing detection methods for software requirements inspections: a replicated experiment. IEEE Transactions on Software Engineering 21(6): 563–575.

P2PConference homepage. 2004. http://p2pconference.jxta.org (last accessed on 03/01/2004).

Sauer C, Jeffery DR, Land L, Yetton P. 2000. The effectiveness of software development technical reviews: a behaviorally motivated program of research. IEEE Transactions on Software Engineering 26(1): 1–14.

Stein M, Riedl J, Harner SJ, Mashayekhi V. 1997. A case study of distributed, asynchronous software inspection. Proceedings of 19th International Conference on Software Engineering (Boston, MA, May 1997). IEEE Computer Society Press: Los Alamitos, CA, 107–117.

van Genuchten M, van Dijk C, Scholten H, Vogel D. 2001. Using group support systems for software inspections. IEEE Software 18(3): 60–65.

Votta LG. 1993. Does every inspection need a meeting? ACM Software Engineering Notes 18(5): 107–114.

Wiegers KE. 2001. Peer Reviews in Software: A Practical Guide. Addison-Wesley: Reading, MA, USA.

Wilson BJ. 2002. JXTA. New Riders.

Wood L, Nicol G, Robie J, Byrne S, Le Hors A, Sutor R, Apparao V, Champion M, Isaacs S, Jacobs I, Wilson C. 1998. Document Object Model (DOM) Level 1 Specification, W3C Recommendation, World Wide Web Consortium. http://www.w3.org/TR/REC-DOM-Level-1/.
