Agile and Incremental Development of Large
Systems
Lars Taxén and Ulrik Pettersson
Linköping University Post Print
N.B.: When citing this work, cite the original article.
Original Publication:
Lars Taxén and Ulrik Pettersson, Agile and Incremental Development of Large Systems, 2010,
7th European Systems Engineering Conference (EuSEC 2010), Stockholm, Sweden, May 23–
26.
Postprint available at: Linköping University Electronic Press
http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-111349
Agile and Incremental Development of Large Systems
Lars Taxén
Linköping University
Ulrik Pettersson
SAAB Aerosystems
Copyright © 2010 by Lars Taxén, Ulrik Pettersson. Published and used by INCOSE with permission.
Abstract. Agile methods have been suggested as a means to meet the challenges in large
systems' development, in particular large-scale development of software. However, with
increasing scale, limitations of agile methods begin to materialize. To this end, we outline an
integration driven development (IDD) approach that combines plan-driven, incremental
development with agile methods. The planning is based on a visualization of dependencies
between needed system changes – deltas – called the anatomy. The IDD approach originated in
the Ericsson telecom practice more than a decade ago, and is now used regularly in large
system development projects.
Contents

Introduction
Background
Integration Driven Development
The anatomy
Planning
Realization
Variants
Variant 1 - Sub-system teams
Variant 2 - "Daily build"
Discussion and Conclusions
References
Introduction

The development of large systems implemented in hardware and software poses immense
challenges. In particular, large software projects often result in failures or less than anticipated
results (e.g. Standish Group International 2009). The trend towards increasing size and
complexity of systems shows no signs of flattening out. On the contrary, ultra-large-scale
(ULS) systems are expected in the near future (SEI Carnegie Mellon 2006).
ULS systems “will push far beyond the size of today’s systems and systems of systems by
every measure: number of lines of code; number of people employing the system for different
purposes; amount of data stored, accessed, manipulated, and refined; number of connections
and interdependencies among software components; and number of hardware elements. The
sheer scale of ULS systems will change everything” (ibid, ix). In particular, the social aspects
need to be considered:
We will need to look at [ULS systems] differently, not just as systems or systems of systems, but as
socio-technical ecosystems. We will face fundamental challenges in the design and evolution, orches-
tration and control, and monitoring and assessment of ULS systems. These challenges require break-
through research. (ibid, ix)
In this contribution, we outline an integration driven development (IDD) approach towards the
development of large systems, which combines plan-driven, incremental development with
agile methods. The planning is based on a visualization of dependencies between needed sys-
tem changes – deltas (∆) – called the anatomy. IDD originated in the early 1990s at Ericsson, a
major provider of telecommunication systems and services worldwide. Over the years the
approach has been substantially elaborated, and it is now used regularly in large system
development projects. Thus, we propose that the principles inherent in the IDD approach, and
the experiences from using it, may be valuable for addressing the challenges in large systems'
development.
Background
The traditional way of developing systems, also known as the "waterfall" approach, can
roughly be characterized as distributing the work to be done to various modules that are
individually developed and tested. Towards the end of the project, these modules are integrated
and tested in a so-called "big-bang" activity (see Figure 1). Consequently, the system qualities
cannot be guaranteed until late in the project. This approach has a number of reported problems
(see e.g. Royce 1970; Karlsson 2002; McConnell 2004): requirements may change during the
implementation; implementation problems may be discovered late in the project; user
involvement is lacking after the specification is written; and testing resources such as test
equipment and experienced testers are unevenly utilized towards the end of the project, to
mention but a few.
[Figure omitted: two panels plotting system qualities against time – "traditional" and "alternative".]
Figure 1: The general idea
The alternative way is to develop the system in many small steps – ∆:s – and verify the system
after each step. This means that each step:

- is a realization of a planned system change (∆),
- results in a working system version ready for advanced tests in target environment,
- is made within a few weeks or even faster.
There are several generally agreed advantages of developing a system in small steps:

- Continuous quality assurance – frequent and relevant feedback improves quality.
- Reducing risk – by reducing the size of a step we reduce the time and money invested in the step, and hence, the risks attached to it.
- Tracking progress – small steps provide frequent and indisputable milestones since true progress is visible at the end of each step.
- Improving performance – the things that you need to do frequently are things that you become good at and do efficiently.
- Increasing flexibility – we are never more than a few steps away from formal verification and a possible delivery to the customer.
Agile methods such as "daily build", Scrum, XP (Extreme Programming), continuous
integration, Evo (Gilb 1989) and others, are examples of stepwise development approaches
(see e.g. Meso and Jain 2006 for a comprehensive overview). No doubt, agile methods have
proven viable in industrial development of large systems (Larman and Vodde 2008). However,
with increasing scale of applications, issues begin to materialize:

In recent agile methods workshops with our large-company industry affiliates, the participants have
unanimously agreed that agile methods have helped them become more flexible and adaptive to change.
But they have also agreed that scalability and legacy practices have limited their range of adoption of
agile methods. (Boehm, in Fraser et al. 2006, 226)

Scalability issues are, among other things, "team-of-teams coordination and change
management, independent-team product interoperability, multi-customer change coordination
and unscalable COTS¹ or architectural sub-optimization on early increments" (ibid).
A central issue for scaling agile methods is to strike a balance between agility and
commonality:

Being successful in large-scale product development requires finding ways to enable self-directing teams
to work towards a common goal without compromising empowerment and feeling of ownership in the
teams - a task easier said than done. (Vilkki, in Fraser et al. 2006, 228)
We claim that the IDD approach is one way to address scaling of agile methods and other
issues relevant for large-scale system development. The paper is organized as follows. In the
next section, we outline IDD in three steps: we describe the system anatomy, the planning
process, and the agile realization of planned changes (∆). This is followed by two variants of
the basic IDD strategy: one with a focus on sub-system teams, and one variant similar to the
well-known "daily build" concept for software development. Finally, we discuss the IDD
strategy and draw some conclusions.
Integration Driven Development

Integration Driven Development consists of two major activities: rigorous anatomy-based
planning and agile realization of planned changes done by "∆ teams" working in parallel. We
begin with a short description of the anatomy construct.
The anatomy
The anatomy is the key to mastering parallel, stepwise development. It is an illustration –
preferably on one page – that shows the dependencies between capabilities in the system from
start-up to an operational system (Adler 1999; Anderstedt et al. 2002; Jönsson 2006; Taxén and
Lilliesköld 2005). Here, "capability" shall be understood as the capability of a certain system
element to provide a utility that other system elements need.
An example of a realistic system anatomy from the Ericsson practice is shown in Figure 2:
¹ Commercial, off-the-shelf (COTS) is a term for software or hardware products that are ready-made and
available for sale, lease, or license to the general public.
[Figure omitted: a one-page chart of capability boxes – e.g. "EMRPS Start", "TX on", "Set frequency", "Deblock TRX", "TRAB speech path", "Mobile access" – connected by dependency lines running from the bottom (basic capabilities) to the top of the anatomy.]
Figure 2: An anatomy for a radio-base station
Each box (the details of which are less important here) should be read as a capability provided
by one or several modules (subdued in the figure). The dependencies (lines) proceed from the
bottom to the top of the anatomy. For example, the capability "EMRPS Start" (an extension
module regional processor with a speech bus interface) is a basic capability. If this capability
fails, the whole system will fail. In line with this, the gist of the anatomy-centric approach is to
design and test the system in the same order as the capabilities are invoked. In a metaphorical
sense, this can be seen as the order in which the system "comes alive"; hence the term
"anatomy".
When doing IDD the anatomy is a vital "input" to the creation of a slightly different type of
anatomy – the ∆ anatomy. The ∆ anatomy does not show all the capabilities of a system, but
only the changes and additions of capabilities currently planned for realization (see Figure 3).
The white squares represent ∆:s in various stages of completion. For simplicity of drawing, the
∆ anatomy is constructed to be read from the top down, in contrast to the anatomy in Figure 2,
which is read bottom up.
[Figure omitted: a ∆ anatomy chart, read top-down, of several dozen work packages (∆:s) for the Design Base and Shipments 0–5, each annotated with start date, delivery to LSV, and requirement references; a colour legend marks each ∆ as completed, on schedule, on schedule but not started, at risk, or off track.]
Figure 3: An example of the ∆ anatomy
The dependencies between ∆:s constrain the order in which changes can be done, and
determine the possible degree of parallelism. There are two types of dependencies that should
be considered and managed:

- "Design Dependency" (∆A should be delivered before design of ∆B can start). The motivation for controlling this kind of dependency is to avoid, as far as possible, parallel development of software components, and by that, avoid the costly task of merging code.
- "Verification Dependency" (∆A must be delivered before verification of ∆B can start). This dependency indicates that design of ∆B can start before ∆A is ready, but verification of ∆B can only be done after the completion and integration of ∆A.
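The two dependency types lend themselves to a simple executable model. The following is a minimal sketch, not from the paper; the class, function names, and the example ∆:s are invented for illustration. A design dependency blocks the start of design, while a verification dependency only blocks verification:

```python
# Hypothetical model of the two dependency types between deltas.
from dataclasses import dataclass, field

@dataclass
class Delta:
    name: str
    design_deps: set = field(default_factory=set)  # must be delivered before design starts
    verif_deps: set = field(default_factory=set)   # must be delivered before verification starts

def can_start_design(delta, delivered):
    return delta.design_deps <= delivered

def can_start_verification(delta, delivered):
    return (delta.design_deps | delta.verif_deps) <= delivered

# dB has a design dependency on dA; dC only a verification dependency on dA.
dA = Delta("A")
dB = Delta("B", design_deps={"A"})
dC = Delta("C", verif_deps={"A"})

delivered = set()                       # nothing integrated yet
assert can_start_design(dA, delivered)
assert not can_start_design(dB, delivered)
assert can_start_design(dC, delivered)  # design may begin in parallel with dA
assert not can_start_verification(dC, delivered)

delivered.add("A")                      # dA integrated into a new working system version
assert can_start_design(dB, delivered)
assert can_start_verification(dC, delivered)
```

The asymmetry between the two checks is exactly what creates room for parallelism: a verification dependency permits overlapping design work that a design dependency would forbid.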
The use of the ∆ anatomy is based on several important principles:

- The anatomy should be easy to read and comprehend for all stakeholders involved. The intention is that the anatomy should be a common illustration of the tasks at hand. Thus, it becomes a means by which various decisions can be taken and evaluated.
- Developing the anatomy is a joint and continuously ongoing effort; it is not a one-time effort done by a few specialists. Although the anatomy is an efficient way to communicate the work at hand, the process of creating and maintaining the anatomy is far more important and valuable than the resulting anatomy chart.
- The anatomy is the prime instrument for planning and monitoring the project. As such, there is a unique anatomy for each project, geared precisely to the task at hand. During a typical project the anatomy will be revised at least once a week, reflecting the dynamic nature – new and changing needs and priorities – of system development.
- There should be only one anatomy for each system development project. If it turns out that there is a need for several interconnected anatomies, the level of detail is wrong. The reason for maintaining this principle is again instrumental: there should be only one common overall view of the work to be done, and this view should be sufficient for all stakeholders involved.
Planning
The purpose of the planning activity is to decide what should be developed based on customer
needs, how these needs should be transformed into suitable system changes (∆:s), and in what
order these changes are to be developed and integrated into the system. The planning is an
ongoing process that results in regular releases at predetermined intervals. Thus, a "rhythm" is
introduced in the development, something which appears to be advantageous for implementing
large-scale systems (e.g. Lui and Chan 2008).
In the planning activity, three major, interrelated tasks are performed by system management,
systems engineering and project management respectively (see Figure 4):
[Figure omitted: three interlocking planning views – System Management ("What & Why?", producing the Edition Plan and Edition Specifications), Systems Engineering ("Where & How?", producing the ∆ Anatomy), and Project Management ("When & Who?", producing the Integration Plan for editions E 4.0 and E 5.0).]
Figure 4: The anatomy-based planning
System management - What and Why? The purpose of this task is to specify the content of
system editions – official system versions delivered to customers – in terms of needed
functions and qualities, and maintain a system edition plan.
System engineering - Where and How? The system engineering activity transforms the
edition content into small verifiable system changes (∆:s) and clarifies their dependencies in a
∆ anatomy. This includes specifying each ∆ in terms of affected components and interfaces,
and estimating its size.
Project management - When and Who? The purpose of this task is to determine a suitable
integration order based on the ∆:s' dependencies and available development resources, add
these ∆:s to an integration plan, and kick off their realization by assigning ∆:s to development
teams.
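Determining an integration order that respects the ∆ anatomy is, at its core, a topological sort of the dependency graph. The following is a hypothetical sketch, not the authors' tooling: the capability names are borrowed from Figure 2, but the dependencies among them are invented for illustration.

```python
# Deriving a feasible integration order from a delta anatomy
# using Kahn's algorithm (a standard topological sort).
from collections import deque

def integration_order(deps):
    """deps maps each delta to the set of deltas that must be integrated first."""
    deps = {d: set(pre) for d, pre in deps.items()}
    ready = deque(sorted(d for d, pre in deps.items() if not pre))
    order = []
    while ready:
        d = ready.popleft()
        order.append(d)
        for other, pre in deps.items():
            if d in pre:
                pre.discard(d)
                if not pre:           # all prerequisites integrated
                    ready.append(other)
    if len(order) != len(deps):
        raise ValueError("cycle in the delta anatomy")
    return order

plan = integration_order({
    "EMRPS Start": set(),
    "TX on": {"EMRPS Start"},
    "Set frequency": {"EMRPS Start"},
    "Mobile access": {"TX on", "Set frequency"},
})
# The basic capability comes first, the dependent one last.
```

Deltas with no remaining prerequisites can be assigned to teams in parallel, which is how the plan bounds the achievable degree of parallelism.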
Realization
The realization activity can be regarded from two contexts: the coordination context, where the
coordination of ∆ deliveries is in focus, and the development context, where the development
of a certain ∆ by a ∆ team is in focus (see Figure 5).
[Figure omitted: ∆ teams working in parallel produce Working System Versions (16, 17, ..., 27) every two weeks; System Test is run in the target environment, while Product Integration Test (E 3.0) and Formal Product Verification (E 4.0) run in parallel with development. The lower left part shows the ∆ team milestones from Assignment via ∆-Go, UT Ready and ∆T Ready to ∆-OK.]
A&R – Analysis & Review
D&UT – Design & Unit Test (Components)
∆T – Delta Test (Complete System)
RT – Regression Test (Complete System)
Figure 5: The realization activity
The coordination context. From the planning activity, the overall work has been divided into
verifiable system changes, ∆:s, and an integration order has been planned. Each new system
version resulting from integrating a ∆ is called a Working System Version (WSV).
Figure 5 shows how new working versions of a system are produced frequently (typically
every second week). Each WSV is developed by a cross-functional team that has the
end-to-end responsibility for its realization. The typical size of a development team is five to
ten persons, and the development time may vary between a few weeks up to several months.
The task for a development team is to design, implement and verify a specified ∆. As can be
seen in Figure 5, development teams must work in parallel in order to keep up the frequent
(bi-weekly) integration pace. To avoid the situation of having several teams working with the
same components in parallel, dependencies between deltas must be clearly understood – the
anatomy – and the order of realization carefully planned – the integration plan.
Before a development team is allowed to deliver the realization of a ∆, they must make sure
that their new and modified components work in a complete system build based on the latest
WSV. That is, development teams don't deliver components or sub-systems but always
complete system versions that have been demonstrated to work as expected in a target-like
environment. This means that once a working system version is sent to System Test or Formal
Product Verification we don't expect to find any trivial software faults. Note that Formal
Product Verification is not done for each working system version, but only for system versions
representing the implementation of a complete edition (for example WSV 19 in Figure 5).
The ∆ team context. The procedure to be followed by each ∆ team is outlined in the lower left
part of Figure 5:

- Assignment: The ∆ team leader has been asked by the project manager to start the realization of the ∆.
- ∆-Go: The ∆ team has reviewed and analyzed the information specifying the ∆, identified the system components and documents that will be affected by the change, specified how the change will be verified (test cases), and agreed on a delivery date.
- UT Ready: The unit test is ready and all the changed and new components have been tested as agreed upon at the ∆-Go.
- ∆T Ready: The ∆ test is ready and the ∆ has been demonstrated to work in a system test environment without any major remarks.
- ∆-OK: All quality assurance activities agreed upon at the ∆-Go, including ∆ test and regression test, have been carried out without major remarks, and the ∆ team can deliver a new WSV. Regression testing is done to assure that previously working capabilities are still working.
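The milestone sequence above can be sketched as a simple ordered-gate model. This is an illustrative reconstruction, not tooling from the paper; the class, the example ∆ name, and the ASCII spellings of the ∆ milestones are all invented for the sketch.

```python
# Hypothetical gate model of the delta-team milestones:
# a team must pass the gates strictly in order, and may deliver
# a new Working System Version only after the final gate.
GATES = ["Assignment", "Delta-Go", "UT Ready", "DT Ready", "Delta-OK"]

class DeltaTeam:
    def __init__(self, delta):
        self.delta = delta
        self.passed = []

    def pass_gate(self, gate):
        expected = GATES[len(self.passed)]
        if gate != expected:
            raise ValueError(f"expected gate {expected!r}, got {gate!r}")
        self.passed.append(gate)

    def can_deliver_wsv(self):
        # Delivery of a new WSV is allowed only after Delta-OK.
        return self.passed == GATES

team = DeltaTeam("SS7: WP2 64-link support")
for gate in GATES:
    team.pass_gate(gate)
assert team.can_deliver_wsv()
```

The point of the model is the invariant it encodes: no component or sub-system leaves the team; only a fully gated, verified system version does.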
A precondition for the ∆ team is that the team really has an end-to-end responsibility (from
"needed change" to "new system version"). This requires that boundaries are clearly stated in
terms of criteria that must be fulfilled: 1) before an assignment can start, and 2) before a
delivery can be made. Teams must also have the possibility to make a true commitment, that
is, time for analysis, planning and negotiation. Moreover, teams are allowed to decide how to
do the work, and everything should be done to provide them with the environment and support
they ask for. Communication within and among teams ("team rooms", "daily stand-up
meetings", "coordination meetings" etc.) is encouraged and facilitated.
The ∆ teams should work according to agile methods. The realization of a ∆ can, for example,
be seen as one or several Scrum sprints. The coordination context would then show a number
of parallel, ongoing sprints. The teams are expected to use all kinds of agile methods such as
refactoring, test-driven development, pair programming, daily build, user stories, and so on.
IDD does not prescribe any specific methodology or tools that must be used by the teams; this
is up to the teams themselves to decide. Thus, IDD can be seen as an approach where rigorous
planning is combined with agile implementation.
Variants
The original IDD approach can be conveniently summarized as in Figure 6, where the
following principles are adhered to: 1) "the work is divided into verifiable system changes",
2) "teams have an end-to-end responsibility", 3) "teams do verification before integration",
4) "product verification (i.e. a formal verification before a new edition is released) is done in
parallel with development". There are, however, a number of variants of this approach that can
be exploited under certain circumstances.
[Figure omitted: the original IDD setup, with all four principles checked:
1. Work is divided into verifiable system changes
2. Teams have an end-to-end responsibility
3. Teams do verification before integration
4. Product Verification in parallel with Development]
Figure 6: The original approach
Variant 1 - Sub-system teams
In the original approach, the teams are expected to be general, long-lasting cross-functional
teams that can take on the realization of any ∆ regardless of which subsystems or components
the ∆ will affect. This requires that the ownership of the code and its design is "everyone's
responsibility", that is, each team has the right (and ability) to make all the changes needed in
order to get the realization of their current ∆ done.
However, for large, complex technical systems, clear individual ownership of and
responsibility for each and every component is considered crucial in order to preserve the
components' original architecture and "design idea" over time. Thus, for some systems (for
example, telecom systems and avionics systems) it's more common to form teams and
responsibility around components and subsystems, instead of having general development
teams that are expected to work everywhere in the system (see Figure 7).
[Figure omitted: subsystem teams (SIM, CCS, UCS, COM, AUT, FCS) replace the general ∆ teams; principles 1, 3 and 4 are checked, while principle 2 (end-to-end responsibility) is only partially applied.]
Figure 7: Forming subsystem teams
In this variant, the second principle, "teams have an end-to-end responsibility", is not fully
applied. Instead, the quality and integrity of each sub-system are emphasized more. As a
consequence, each ∆ delivery must be coordinated between impacted subsystem teams. For
example, the ∆ delivered to WSV 21 in Figure 7 impacts subsystem teams SIM, UCS, and
COM. Before these three teams are allowed to deliver their modified subsystems, they must do
a common test to show that their subsystems work together, that is, to show that the new
system version works as expected, and that the complete ∆ is properly implemented.
Variant 2 - “Daily build”
Applying the third principle, "verification before integration", means that each team must
work in isolation from the official WSV, that is, work on its own development branch and not
directly in the WSV branch. The main advantages of this are: 1) it's possible to always have
full control of the content and quality of the WSV, 2) it's possible to halt the realization of a ∆
without any extra work of "removing" things from the WSV, and 3) teams can choose when
they want to rebase their design base onto the latest official design base (WSV) instead of
being forced to always use the latest checked-in code for testing activities.
However, having teams working on their own development branches requires a modular
system architecture, with clear interfaces, and subsystems and components implementing
well-defined parts of the overall services provided by the system. If, from a total system
perspective, every valuable and verifiable change (∆) typically requires changes to a majority
of the components and subsystems, then developing many changes in parallel on different
branches may not be the best approach. Instead, it might be better to strive for one single
development branch, thus avoiding the risk of spending most of the development time merging
component versions developed in parallel.
This variant of stepwise development of large systems can be viewed as the well-known "daily
build" approach applied at the complete system level; perhaps not daily, but frequent (see
Figure 8).
[Figure omitted: a single code branch produces weekly Working System Versions (11, 12, ..., 33); subsystem teams check in fault corrections, contributions to the planned ∆, or "no harm" code preparing for a coming ∆. An Automated System Regression Test runs on the unverified code, followed by Function & System Test in target, with Product Integration Test (E 3.0) and Formal Product Verification (E 4.0) in parallel. Principles 1 and 4 are checked; principles 2 and 3 are not applied.]
Figure 8: "Daily build"
In the "daily build" approach the planning process is similar to the original approach. The
development scope is divided into verifiable system changes (∆:s), and the integration plan
shows in what order and when the ∆:s are expected to be implemented and ready for function
test.
The WSV is a weekly (or biweekly) created baseline in the one and only code branch. Every
night a complete system build is done and an automated regression test is run on the latest
checked-in version of all components. The quality and coverage of the regression test
determine the quality level of the WSV. In the best of worlds, the automated regression test
will check that basic functions, which worked in the latest edition or released WSV (for
example WSV no. 12 in the picture above), are still working. So, when establishing a WSV
baseline, we know that basic functions still work. However, it remains to check whether the
newly integrated ∆ works. In the "daily build" approach, "function test" is done after
integration, not before as in the previous two approaches.
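The nightly decision logic just described can be sketched as a small rule. This is a hypothetical reconstruction, not the authors' tooling; the function, the test names, and the promotion labels are invented for illustration.

```python
# Hypothetical nightly cycle of the "daily build" variant:
# a baseline (WSV) is promoted only if the complete system build
# and the automated regression suite both pass; function test of
# the newly integrated delta happens AFTER integration.
def nightly(build_ok, regression_results):
    """regression_results: {test_name: passed} for the automated suite."""
    if not build_ok:
        return None                 # no system build, nothing to baseline
    if all(regression_results.values()):
        return "promote-baseline"   # basic functions still work -> new WSV
    return "hold"                   # regression broke: keep the old baseline

assert nightly(True, {"call_setup": True, "handover": True}) == "promote-baseline"
assert nightly(True, {"call_setup": True, "handover": False}) == "hold"
assert nightly(False, {}) is None
```

The rule makes explicit why the coverage of the regression suite determines the quality level of the WSV: the baseline is only ever as trustworthy as the tests that gate it.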
Moreover, in the "daily build" approach, the second principle, "teams have an end-to-end
responsibility", is obviously not applied. Development teams deliver components without
knowing if the components are good enough to result in a new working system version with
the planned function and quality. In this approach, delivering working system versions is
typically the job for a test or integration team at the system level.
Discussion and Conclusions

Agile methods emerged as a well-justified reaction to the “heavy-weight” development ideals
promoted by formal “systems engineering” and the waterfall approach. As often happens with
such paradigm shifts, the pendulum may swing all the way to the other end of the spectrum.
Agile methods are sometimes seen as the “silver bullet” that will solve software development
issues once and for all. However, with increasing scale of projects, concerns with “agile-only”
methods begin to show. IDD might be seen as yet another step in the evolution of methods,
combining rigorous planning with agile development. In a sense, the IDD approach is an
empirical, strong-voiced answer to the question of whether plan-driven or agile methods should
be used (Boehm and Turner 2003); the answer is: both. With increasing scale of system
development, an element of stability in the agile methods is indispensable.
A critical element in the IDD approach is the ∆ anatomy. The anatomy construct has remained
in one form or another all through the years at Ericsson since it was first conceived almost two
decades ago. The main reason for this is undoubtedly that it visualizes the most important
thing when developing complex systems: how capabilities in the system depend on each
other. If a capability in the chain of dependencies is not present, the system will not work as
intended.
Moreover, the anatomy is consciously kept as simple as possible in order for it to be easily
understood by all stakeholders in the project. The importance of shared or common
understanding cannot be overestimated. If there is no common picture of the system to be
developed, the risk of failure is imminent. Because of its focus on communication rather than
technical details, the anatomy is one way of approaching the social dimension of ULS systems.
As with any approach, IDD of course has its drawbacks. The approach is hard to implement,
and requires a long-lasting, committed effort to “stick” in the organization. This has been
evident in efforts to transfer IDD to organizations other than Ericsson. The basic principles –
the anatomy, the planning procedure, and the execution of ∆:s – are easily comprehended.
Putting them to work is an entirely different matter.
Concluding the paper, the main point about IDD can be expressed as follows. If many agile
teams or “sprints” are running simultaneously, and all teams deliver parts of the same system,
then it is necessary to spend time on coordinating these teams. This effort concerns primarily
dividing the development scope into suitable changes or additions (∆:s), and deciding their
order of integration. Having done that, it is possible to have many teams working in parallel,
letting them apply whatever agile practices they like, and still have them stepwise develop one
and the same system. Hence, combining agile development practices with plan-driven methods
is not only possible; it is probably necessary in order to improve large-scale development
performance.
References

Adler, N. 1999. Managing Complex Product Development – Three approaches. EFI,
Stockholm School of Economics. ISBN: 91-7258-524-2
Anderstedt, J., Anderstedt, U., Karlsson, M., and Klasson, M. 2002. Projekt och helhet – att
leda projekt i praktiken. Författares Bokmaskin (in Swedish), Stockholm, ISBN:
91-7910-380-4.
Boehm, B. W., and Turner, R. 2003. Balancing Agility and Discipline: A Guide for the
Perplexed. Boston: Addison-Wesley.
Fraser, S., Boehm, B., Järkvik, J., Lundh, E., and Vilkki, K. 2006. How Do Agile/XP
Development Methods Affect Companies? In XP 2006 LNCS 4044, ed. P. Abrahamsson, M.
Marchesi, and G. Succi, 225–228. Springer-Verlag: Berlin Heidelberg.
Gilb, T. 1989. Principles of Software Engineering Management. Addison-Wesley Longman.
Jönsson, P. 2006. The Anatomy – An Instrument for Managing Software Evolution and
Evolvability. In Second International IEEE Workshop on Software Evolvability (SE'06),
31–37. Philadelphia, Pennsylvania, USA, September 24, 2006.
Karlsson, E. A. 2002. Incremental Development – Terminology and Guidelines. In Handbook
of Software Engineering and Knowledge Engineering, Volume , 381–401. World Scientific.
Larman, C., and Vodde, B. 2008. Scaling Lean & Agile Development: Thinking and
Organizational Tools for Large-Scale Scrum. Upper Saddle River, NJ: Addison-Wesley.
Lui, K., and Chan, C. C. 2008. Software Development Rhythms: Harmonizing Agile Practices
for Synergy. Wiley.
McConnell, S. 2004. Code Complete. 2nd edition. Microsoft Press. ISBN 1-55615-484-4.
Meso, P., and Jain, R. 2006. Agile Software Development: Adaptive Systems Principles and
Best Practices. Information Systems Management, 23(3), 19–30.
Royce, W. 1970. Managing the Development of Large Software Systems. In Proceedings of
IEEE WESCON 26 (August), 1–9. Retrieved March 9, 2010, from
http://www.cs.umd.edu/class/spring2003/cmsc838p/Process/waterfall.pdf
SEI Carnegie Mellon. 2006. Ultra-Large-Scale Systems: The Software Challenge of the
Future. ISBN: 0-9786956-0-7. Retrieved Nov 30, 2009 from
http://www.sei.cmu.edu/library/abstracts/books/0978695607.cfm?WT.DCSext.abstractsource
=RelatedLinks
Standish Group International. 2009. CHAOS Summary 2009. The 2009 update to the CHAOS
report. Retrieved Nov 30, 2009, from http://www.standishgroup.com
Taxén, L., and Lilliesköld, J. 2005. Manifesting Shared Affordances in System Development –
the System Anatomy. In ALOIS*2005, The 3rd International Conference on Action in
Language, Organisations and Information Systems, 15–16 March 2005, Limerick, Ireland,
28–47. Retrieved Nov 30, 2009, from http://www.alois2005.ul.ie/
Lars Taxén, Associate Professor, has more than 30 years of experience in the telecom
industry, where he has held several positions related to processes and information systems. His
thesis concerns the coordination of large, globally distributed development projects, with a
focus on ‘soft’ issues like sense-making. He has written a book and several book chapters, and
has published in various conference proceedings and journals. He is now active as a researcher
and consultant (homepage: www.neana.se).
Ulrik Pettersson has worked for 20 years in the systems and software engineering business,
taking on various roles such as system designer and project manager. For ten years, Ulrik
worked at Ericsson on the forming and establishment of Ericsson’s system development
strategies. Ulrik now manages a unit at Saab Aerosystems responsible for the strategies,
methods, and tools used to develop safety-critical avionic systems.