
Overcoming Barriers to Systems Engineering Process Improvement

Sarah A. Sheard,* Howard Lykins, and James R. Armstrong

Software Productivity Consortium, 2214 Rock Hill Road, Herndon, VA 20170


Received August 27, 1999; Accepted January 11, 2000

ABSTRACT

Neither systems engineering nor process improvement is new. Since 1992, INCOSE papers and others have been reporting success in documenting and improving processes. A considerable body of process improvement literature is available, particularly related to improvement of software development processes. Even systems engineering process improvement is gaining in popularity, judging from the increasing number of INCOSE papers detailing various efforts. Yet the nature of systems engineering poses challenges over and above those seen in other process improvement efforts. This paper focuses on identifying and resolving typical barriers to improving the systems engineering process. © 2000 John Wiley & Sons, Inc. Syst Eng 3: 59–67, 2000

INTRODUCTION

Increasing numbers of organizations are using systems engineering capability models to improve how they develop products or perform services. Many of the first adopters of these models were "systems engineering" organizations within companies that were also using the Capability Maturity Model® for Software (SW-CMM)¹ in their software development organizations. Lately, adopters increasingly include other types of organizations, such as government or commercial company divisions and groups who do not develop software. A promising occurrence is that some software development organizations are appraising their own ability to do systems engineering as a part of software development.

Each organization attempting to use these models discovers roadblocks, hurdles, and barriers to success. Assessments with a variety of organizations have allowed the authors to determine some common stumbling blocks to the improvement of systems engineering. This paper discusses these barriers and describes proven ways to overcome them.²


*Author to whom all correspondence should be addressed.

An early version of this paper was presented at the Ninth Annual International Symposium of the International Council on Systems Engineering, Brighton, England, June 6–11, 1999.

¹Capability Maturity Model and CMM are registered in the U.S. Patent and Trademark Office.

Systems Engineering, Vol. 3, No. 2, 2000

© 2000 John Wiley & Sons, Inc.

²In this paper, "systems engineering process improvement" means any attempt to improve a process that an organization considers to be "systems engineering." This is often independent of job titles or roles [Sheard, 1996]. Most such efforts involve one of the three systems engineering capability models mentioned in this paper.


THE PROCESS IMPROVEMENT PROCESS

Process improvement is not a new concept. Almost 100 years ago, Henry Ford proved that quality and productivity could be vastly improved by automating and standardizing the manufacture of cars. In the 1990s it became quite trendy to improve processes [Sheard, 1997a, 1997b], so trendy that backlash is already occurring [Pajerek, 1999].³ Among the various models, though, there is a basic agreement on how an organization should perform process improvement, as described in the paragraphs below and illustrated in Figure 1.

Phase 1: Select a model. Many organizations choose to learn lessons from industry and academia by following a path established by a capability model. For systems engineering, most organizations choose one or more of the three systems engineering capability models, namely, the Systems Engineering Capability Maturity Model (SE-CMM) [Bate et al., 1995], INCOSE's Systems Engineering Capability Assessment Model (SECAM) [INCOSE, 1996], or the recently released Electronic Industries Alliance (EIA) interim standard on Systems Engineering Capability (EIA/IS 731) [EIA, 1999]. Sheard and Lake [1998] compare these models in more detail.

Examples in this paper refer to whichever model was used by the organization at the time. Focus Areas (FAs) in EIA/IS 731 are equivalent to Process Areas (PAs) in the SE-CMM and to Key Focus Areas of the SECAM.

Note that there is no central authority to certify compliance to any of the systems engineering models, nor to certify assessors. Anyone can use the models for process improvement in any manner, and anyone can claim to be an assessor. This is in contrast to capability models owned by the Software Engineering Institute, such as the SW-CMM. The lack of centralized certification was a conscious choice by the models' authors, to address process improvement needs rather than contractor selection needs. See Sheard and Roedler [1999] for more on the two kinds of needs.

Phase 2: Evaluate yourself against the model. An assessment is performed according to the method accompanying the model. The method is tailored as desired by the assessed organization. Since there is no standard training for a systems engineering lead assessor, there is no agreement on what tailoring is allowed in order to receive any given set of ratings.

Phase 3: Determine where to improve. The assessment produces a set of findings, showing where the organization falls short of meeting the next level of maturity in each area. The organization uses these findings to establish an action plan to improve processes. Action planning follows the assessment and prioritizes the findings, creates alternative plans of action to resolve them, and selects the best alternatives.

Phase 4: Obtain funding for improvements. Often the most difficult part of a process improvement effort is getting the resources to make the improvements. Once an assessment is complete, some people sigh with relief, thankful that they can now stop thinking about this and get back to "real work." In reality, the whole point is for improvement to be made to said "real work." For this to happen, both a budget and management attention must be made available.

Phase 5: Make improvements. Making improvements is usually within the capability of systems engineers provided the following conditions exist:

• Funding and time are available.
• Knowledgeable and respected engineers are working on the process improvement effort.
• Management is paying attention to the success of the effort.

In practice, the process improvement group usually has to expend effort both to make improvements and to keep these three conditions true.

Phase 6: Repeat. Each cycle may raise the organization's assessed capability levels, but numbers do not tell the whole story. Continuous improvement is also necessary to keep processes flexible and responsive to the needs of the projects, the technology, and the competitive environment.

³Voiced frustrations include too much focus on the process rather than on having dedicated and intelligent people, too much emphasis on documenting the process and not enough on the unique needs of any given product, and focusing on a ratings level to be achieved rather than on doing things right.

Figure 1. Phases of process improvement.


BARRIERS TO SYSTEMS ENGINEERING PROCESS IMPROVEMENT

The following sections describe six barriers to systems engineering process improvement and recommended solutions to these problems.

Barrier 1: Thinking "We're Different"

Especially in systems engineering, which is needed most for unprecedented products, many groups believe industry knowledge does not apply because "we're different."

Examples:

• We're different: we hire the smartest people and give them free rein. If we tried to force them to follow processes, they wouldn't like it and they wouldn't be as creative.
• We're different: we don't start with a clean sheet of paper. We have legacy systems we have to build a new system around.
• We're different: we're system integrators; we represent the customer, we don't produce systems.
• We're different: we build components that are reused in multiple products.
• We're different: we're commercial, not DOD.
• We're Navy: Air Force processes do not apply.
• We do submarines: surface warfare processes do not apply.
• We do submarine communications: torpedo processes do not apply.

One reason groups may believe that capability models are not applicable is that their processes use different terminology from that in the maturity model chosen. This happens most frequently for groups that are not based in the aerospace and defense fields, such as commercial engineering of consumer products.

Example: A group chartered with maintaining internal business software (e.g., for taking and processing customer orders) did not have "projects" or "project managers." A terminology mapping had to be made; rather than projects, the group chose to look at the "build" process.
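As a small illustration of such a terminology mapping, the sketch below records how a few model terms might translate into an organization's own vocabulary. The terms shown are hypothetical, not taken from the assessed group; the point is only that the mapping can be captured explicitly and reused when interpreting the model.

# Hypothetical mapping from capability-model vocabulary to an
# organization's own terms (illustrative values only).
MODEL_TO_ORG_TERMS = {
    "project": "build",                      # the group plans and tracks "builds"
    "project manager": "build coordinator",
    "customer": "internal business sponsor",
}

def translate(model_term: str) -> str:
    """Return the organization's term for a model term, unchanged if unmapped."""
    return MODEL_TO_ORG_TERMS.get(model_term.lower(), model_term)

print(translate("Project"))        # -> build
print(translate("verification"))   # -> verification (no mapping needed)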

In many other cases, organizations are not aware of the capabilities of other companies in the industry and truly believe that they are unique in hiring smart people, building around legacy systems, or reusing components. In each case, these groups will find assistance in improving their own processes if they look at a systems engineering capability model in detail.

Solutions.

1. When publicizing the effort, emphasize that you will review the chosen capability model, select focus areas that are applicable to what you do, and tailor the areas to meet your business needs. Close review will most likely show that the model does incorporate valid systems engineering principles, which can be applied advantageously to most system development efforts.

2. Focus on what you do in your business that relates to the model, rather than on specific wording in the model. Do you need to understand problems thoroughly and see which of several ideas would solve the problems best? Do you need to look for what might go wrong and plan how to keep going even if it does? Do you have to determine what the pieces of the solution are and how they will fit together? If you use language like this and then show how the FAs support these needs, you will show even skeptics that most of the FAs in the model will apply.

3. When documenting processes, focus on capturing the interactions among people and groups rather than specifying processes for any one person to follow. Imperfect interactions annoy almost everyone, and this gives your effort leverage [Garcia, 1996]. People do not like being told how to work, but they want processes that define roles to help them do their job and get support when needed.

Barrier 2: No Generally Acknowledged Definition of Systems Engineering

Most organizations starting a systems engineering process improvement effort do not have a clear definition of what systems engineering is. Even the founders of the International Council on Systems Engineering do not agree [Sheard, 1996]. A recent wrinkle is that there is considerable overlap between systems engineering and software engineering [Sheard, 1998], particularly if the software engineering has been done by people mindful of the systems engineering process.


Example: A project had a lead design engineer, a program manager, and a systems engineer. The first two did not want to talk about what systems engineering needed to be done on the project because "that's the systems engineer's job." They needed to understand that systems engineering, like quality management, has to involve nearly everyone.

Example: An organization had a significant experience base with hardware-intensive programs (things that fly, record sensor data, etc.) and in recent years had been growing its capability to produce software systems. The organization had documented several good systems engineering processes. Software engineers on software-only projects were not even aware of the availability of these processes, as they believed their program did not do systems engineering, since there was no hardware.

Example: One organization had a systems engineering group and also had integrated product teams (IPTs). The engineers in the systems engineering group, however, tended to perform advanced studies only and did not participate on the IPTs. The IPTs were performing the required systems engineering by themselves, but there were disconnects when they felt "systems engineering" ought to be providing them with things such as system requirements changes. An assessment produced a finding that the organization needed a better understanding of how systems engineering would be accomplished.

Solution. First define a set of roles and interactions that encompass the functions shown in the model. No model insists that only people with the title of "systems engineer" should do systems engineering. It is best if a large part of the organization is aware of systems engineering practices and incorporates them into their work. Identify who performs the practices called for in each FA, involve those who perform the practices, and assess how well the FA criteria are met. An organizational process will have to show how the various groups contribute to engineering the organization's products.

Second, be aware of, and highlight, varieties of definitions of systems engineering. Sheard [2000] defines these types of systems engineers: Discovery (Type 1), Program systems engineering (Type 2), and Approach (Type 3). The second example above shows a conflict between Type 2 and Type 3 assumptions. The third example is a conflict between Type 1 and Type 2. The organization needs to validate all the definitions currently in use, and weave them into a coordinated set of engineering processes that solve business problems.

Example: One organization chose to assess 11 of the first 12 SE-CMM process areas, and none of the last six, because the people responsible for ensuring organizational support were generally different people from the engineers. The organization met its ratings goals with flying colors because it had scoped the assessment properly.

Barrier 3: Approaching Process Improvement as "Studying for the Test"

Table I shows some typical reasons for initiating systems engineering process improvement efforts. Some organizations have experienced improvement using, for example, the SW-CMM, and truly understand what improvement will do for them: eliminate confusion, streamline processes, ensure everyone knows what is going on and is working toward the same goal, and increase morale. These organizations undertake a systems engineering process improvement effort using the model as a guide for "best practices," something against which to baseline and measure how effective improvement efforts have been. Their goal is to improve processes, not only to improve compliance to the model in order to achieve a better rating. These organizations have the best chance to realize benefits from their efforts.

Table I. Typical Organizational Reasons for Systems Engineering Process Improvement

Other organizations begin a systems engineering process improvement effort in response to pressure. A competitor may advertise a higher maturity level. An executive's or manager's compensation may depend on successfully obtaining a certain assessed level. A government customer may require evidence of systems engineering processes in a Request for Proposal. In these situations, process improvement funding may become available where it formerly was not. This can begin an effort that will produce value for the organization when the processes are defined and implemented.

However, it is also possible that an understanding of what is involved in true improvement may be missing. There may be an attitude of "we want the benefits, but we don't want to change anything." In this case, the organization does whatever is required to comply on the surface, but minimizes "disruption" to everyday work. The process improvement effort becomes an exercise in looking good to the assessors.

Several problems arise when using this "cramming" approach as if there were a test to be passed. Two common problems and solutions are discussed here.

Problem 1. Documentation. Documentation produced to pass the test may not be usable. For example, if you write exactly 18 processes, one for each PA in the SE-CMM, these processes will not map well to how your organization works [Sheard and Roedler, 1999]. If a process does not reflect the way the organization works, it may increase the work of engineers rather than make it easier.

Solution. A better approach is to determine what processes are being implemented now, map these to the model, and find the best place to insert additional needed practices. This has the highest likelihood of changing the way work is done.

Example: Most organizations do not have a single Validation and Verification process. Most have verification processes at unit, subsystem, and system levels, and/or at the computer software configuration item (CSCI), build, integration, and system levels. The organization should map all these to the Validation and Verification areas of the model, not force them into one process.

Example: Rather than writing two configuration management processes, one for the SW-CMM and another for the SE-CMM, a group assessing to both models wrote one merged process.

To avoid these documentation problems, keep these two principles in mind:

1. Processes should be written using the organization's terminology. Insisting that the organization change terminology to demonstrate model compliance virtually guarantees that people performing the process will view it as something that is separate from "real work."

Example: An organization was formed from the remnants of three companies through a series of corporate acquisitions. The three groups had different and sometimes conflicting terminology. Model terminology was accepted as the basis for new processes because it was perceived as neutral and accepted by the rest of industry.

2. Map these processes to the model's FAs in a clear manner. At the start of an assessment, one organization provided the assessors with a table showing where every practice in a capability model was fulfilled in one or more organizational processes (or why it was tailored out). The process of creating that table is what made this organization's processes robust. Dwight Eisenhower said, "Plans are nothing. Planning is everything." (A minimal sketch of such a mapping appears after this list.)
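To make the mapping concrete, here is a minimal sketch, assuming invented focus area, practice, and process names rather than anything from the paper. It records which organizational process satisfies each model practice (or why the practice was tailored out) and flags practices that no process covers yet, which is exactly the kind of gap an assessment team looks for.

# Minimal sketch of a practice-to-process traceability matrix.
# Focus area names, practice labels, and process names are illustrative assumptions.
from collections import defaultdict

practice_map = {
    "Verification FA / practice 1": ["Unit Test Process", "System Integration Process"],
    "Verification FA / practice 2": ["System Qualification Test Process"],
    "Risk FA / practice 1":         [],  # not yet covered by any organizational process
    "Risk FA / practice 2":         ["tailored out: single-site, fixed-price effort"],
}

def coverage_report(mapping):
    """Group practices by status so gaps are visible before the assessors arrive."""
    report = defaultdict(list)
    for practice, processes in mapping.items():
        if not processes:
            report["uncovered"].append(practice)
        elif any(p.startswith("tailored out") for p in processes):
            report["tailored out"].append(practice)
        else:
            report["covered"].append(practice)
    return dict(report)

for status, practices in coverage_report(practice_map).items():
    print(f"{status}: {practices}")

The value, as the example above suggests, is less in the table itself than in the act of building it: every uncovered practice forces a decision to either extend an existing process or record a tailoring rationale.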

These solutions also protect the organization from having to change all its processes each time a maturity model is modified (or replaced, as EIA/IS 731 is replacing the SE-CMM and will in turn be replaced by the systems engineering part of the CMM Integration effort, currently merging software and systems engineering capability models).

Problem 2. Loss of management support. A second problem with "studying for the test" is that the organization may withdraw support for continuous improvement upon reaching the desired maturity level. Then processes cannot be updated and will get stale. The employees develop cynicism about the commitment of management to true improvement, which makes the next process improvement push even more difficult to accomplish.

Solution. The process improvement plan should focus on the long-term reasons shown in Table I. Expect reduction of enthusiasm after reaching a level. Manage this risk by elaborating the benefits of continuing and making it seem possible, with detailed plans. Perhaps plan continuing assessments per Sheard [1999]. Additionally, customer interest can help maintain focus.

Example: One organization lost out on a contract recompete it had expected to win. The customer debriefed them that their systems engineering processes were not mature enough. Up until then, the systems engineering process improvement effort had involved a few interested engineers working mostly on their own time. Suddenly, systems engineering process improvement obtained almost a full "head" of overhead funding. In this case, there is now both a focus on a level and adequate commitment to begin real improvement. The customer's concern with the true status of systems engineering processes (not just a level number) helped restart this process improvement effort.

Barrier 4: Assuming Training Is the Answer

"Change is good. You go first." – a T-shirt

Training is necessary to accomplish process improvement, but it is never sufficient. Unfortunately, many managers and organizations do not understand this. Improvement efforts that are initiated with fanfare and supplemented with training, but are dropped for all practical purposes shortly thereafter, will produce little more than cynicism.

Example: An organization wished to achieve Level 2 ratings in all process areas of the SE-CMM. Management funded a process group to write compliant processes, and then planned employee training. At that point, they believed, processes would be improved. Unfortunately, they did not understand that their own behavior would have to change as well.

Solution. It is no accident that the first step in staged process improvement models is improving project management. Managers must behave differently if they want their organizations to reap the benefits of improved process efficiency and effectiveness. They must set policies and see that they are implemented. They must understand and overcome resistance with steadfast support of process improvement efforts. They must reallocate resources (see Barrier 5 below) and reward people who achieve success in using the new processes. Managers who ask in every program review, "How are you overcoming obstacles to using our new requirements tool?" will see the tool implemented more thoroughly than those who just send requirements engineers to tool training.

Example: A crisis arose during the week of an assessment. A customer was demanding that the organization implement a change immediately. The manager was steadfast in insisting that all changes need to follow the recently developed change control process. Not having done so would have stalled future process improvement efforts.

Barrier 5: Lack of Resources

All process improvement models require that projects be given adequate schedule and budget to ensure that tasks can be completed with reasonable attention to quality. Yet process improvement efforts themselves are often denied adequate resources. This is particularly true among organizations that have not yet proven their ability to manage projects well by achieving Level 1 or Level 2 in the management areas of a systems engineering capability model. Lack of resources usually shows itself in one of the following ways:

• In budget pressures (significantly less than 1% of the engineering staff, for example, or an understanding that process work will generally be done on an employee's own time).
• In promises made without an understanding of how they will be kept. ("It is in my performance plan that your organization shall be Level 3 in EIA/IS 731 by December of this year.") This manager, like many others, does not understand the concept of a continuous model (among other things).
• By the attitude that the process group will be staffed by whoever does not have a direct charge number.
• By scheduling development projects so that the going is guaranteed to get rough, and discarding process improvement efforts when it does. (This is analogous to the behavior of Level 1 organizations who discard process when faced with a crisis.)

Solution. Look for other justification for funding. Funding is sometimes obtained for reasons that have nothing to do with a desire to increase predictability of an organization's internal processes. For example, systems engineering process improvement groups sometimes get started only after company-wide ISO 9000 compliance efforts, or software-driven SW-CMM efforts, which are in turn driven by the belief that either "ticket" is needed to maintain or improve competitive position. Such systems engineering process groups are fortunate, because either effort, when successful, draws attention to the need to treat process improvement like any other project. Distributing rewards and recognition to divisions that have achieved process improvement results can make resources available for copycat efforts in other divisions surprisingly quickly. Companies are sometimes successful in getting process improvement funds accepted as part of reimbursable contract work.

Example: A division of a large corporation achieved a high rating on the SE-CMM. A corporate executive publicly commended the manager of this division. Within 3 months, four other divisions requested systems engineering process assessments.

If resources are inadequate, those responsible for achieving maturity level goals should gather literature references to support a demand for a reasonable budget and schedule, calling on experienced assessors for support if necessary. Generally, a process effort should have a budget of 1–3% of the technical staff, funding a core process improvement group plus hours for other participants. There should also be management commitment to free up some of the more experienced project team members to participate in some manner, and senior management should ask middle managers to report progress in identifying and completing improvement tasks.
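As a rough illustration of the 1–3% guideline, the sketch below sizes a process improvement effort for a hypothetical 400-person technical staff; the headcount and hours-per-year figures are assumptions chosen for illustration, not values from the paper.

# Rough sizing of a process improvement budget from the 1-3% guideline.
# Staff size and productive hours per year are illustrative assumptions.
technical_staff = 400          # hypothetical engineering headcount
hours_per_person_year = 1800   # assumed productive hours per person per year

for fraction in (0.01, 0.03):
    ftes = technical_staff * fraction
    hours = ftes * hours_per_person_year
    print(f"{fraction:.0%} of staff -> {ftes:.0f} full-time equivalents "
          f"(~{hours:,.0f} hours/year for the core group plus participants)")

For this assumed organization the guideline works out to roughly 4 to 12 full-time equivalents, which is a useful sanity check against a plan that budgets only a fraction of one person.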

Barrier 6: Trying To Do It All at Once

"Why can't we just focus on everything?" – Dilbert's Pointy-Haired Boss

Once a decision has been made to proceed with process improvement, sometimes the decision is made to do too much at once. Either too many FAs are addressed at once (e.g., all of them), or sometimes people try to achieve Levels 1, 2, and 3 all in one step. While it appears there would be economies of scale in writing organizational processes all at once and then tailoring for projects, it is not known which are the "best" practices before the practices, as they are performed on projects, are documented. "Making up" what should be good practices is more likely to waste resources than save money, since the likelihood of made-up processes being implemented after writing is small.

Example: One organization decided to meet "all" process standards. This organization hired eight process engineers to study and sort the SW-CMM, ISO 9001, ISO 9000-3, the SE-CMM, and internal standards, and derive a grouped set of "requirements" for their processes. Then the process engineers created "compliant" processes from scratch, as they were not allowed to bother the people who were doing the work, for "they were busy." The "compliant" processes were so out of touch with the way things were currently done that they were never implemented.

Solutions. First, do not try to improve all FAs at once. Per Garcia [1996] and the Pareto principle [Scholtes et al., 1988],⁴ address the FAs that are giving you the most trouble right now. In this way, you maximize buy-in because the organization will see positive benefits to process improvement as soon as possible. How do you know which FAs give you the most trouble? Perform an assessment, either a full assessment or a mini-assessment [Sheard, 1999], and find out from the people performing the processes.
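As one way to apply this Pareto advice, the sketch below ranks focus areas by the number of assessment findings against them and keeps the few that account for most of the trouble. The focus area names and finding counts are invented for illustration; real inputs would come from the assessment just described.

# Pick the few focus areas causing most of the trouble, per the Pareto advice.
# Focus area names and finding counts are invented for illustration.
findings_per_fa = {
    "Understand Customer Needs": 2,
    "Manage Risk": 9,
    "Verify and Validate System": 7,
    "Plan Technical Effort": 5,
    "Integrate Disciplines": 1,
}

def top_trouble_areas(findings, share=0.8):
    """Return the smallest set of FAs accounting for roughly `share` of all findings."""
    total = sum(findings.values())
    selected, running = [], 0
    for fa, count in sorted(findings.items(), key=lambda kv: kv[1], reverse=True):
        selected.append(fa)
        running += count
        if running >= share * total:
            break
    return selected

print("Address first:", top_trouble_areas(findings_per_fa))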

Another way to approach a few FAs at a time is to follow the concept of "staging." The SW-CMM has organizational levels or "stages," and several authors [POSE⁵; Cusick, 1998; Ibrahim et al., 1997; SEI, 1999] have recommended stagings for systems engineering process improvement. The staging mentioned by the SE-CMM [Bate et al., 1995] is similar to that in Cusick [1998]: the idea is to first perform systems engineering via the Engineering PAs at Level 1, then add the Project PAs to support bringing Engineering PAs up to Level 2, and so forth. Another staging, as in the SW-CMM, concentrates first on project management, and then adds development engineering as an organization approaches Level 3 capability. The project management and process documentation activities called for at Level 2 are important to accomplish before Level 3, because Level 2 makes people believe in process discipline before insisting they change what they are doing, which is Level 3.

Table II shows that all systems engineering stagings are different; therefore, organizations should consider implementing improvements in whatever order best meets their needs.

Table II. Systems Engineering Model Stagings

⁴Twenty percent of your processes give you 80% of your trouble.

⁵Process Oriented Systems Engineering staged model, quoted in Johnson and Dindo [1998].

SUMMARY

This paper has discussed common barriers to improvement of the systems engineering process and has suggested ways to overcome them. One common barrier is that each organization is convinced that no one knows the trouble it has seen, and therefore nothing in the literature will apply to it. Another barrier is that definitions of "systems engineering" vary so much. Yet all three models have addressed the totality of systems engineering as currently practiced. In a way, these models themselves define "systems engineering."

When the improvement effort begins, "all the major mistakes are made the first day" [Rechtin, 1989, p. 48]. This happens if an organization focuses on "making a good grade," provides grossly inadequate resources, or tries to "get it over with" and improve all processes right now, hoping to get back to "business as usual" as soon as possible.

In general, solutions to the barriers require defining what improvements are needed by the organization, selecting an appropriate model, and tailoring the model to meet business needs. An implementable organizational systems engineering process requires a common set of roles and responsibilities and a common terminology definition. The models can help here but should not force a change in current processes that work. Process improvement should take precedence over achievement of a target ratings level; but even if the emphasis is on the level, all is not lost, and it is possible to get process improvement benefits as part of achieving the level.

A good understanding of what process improvement involves will start the systems engineering process improvement effort in the right direction.

ACKNOWLEDGMENTS

The authors thank Albert J. Truesdale for reviewing this paper, Debbie Morgan for coordinating the review, editing, and production processes, and Bobbie Troy for technical editing.

REFERENCES

R. Bate et al., Systems Engineering Capability Maturity Model (Version 1.1), Software Engineering Institute, Carnegie Mellon University, Pittsburgh, PA, 1995.

K. Cusick, Engineering management pocket guides: the "trip-tik" approach to capability maturity models, Proc Eighth Annu Int Symp INCOSE, Vancouver, British Columbia, Canada, July 26–30, 1998, pp. 267–273.

Electronic Industries Alliance (EIA), Systems engineering capability EIA/IS 731, http://www.geia.org/eoc/G47/731dwnld.htm, Arlington, VA, 1999.

S. Garcia, How to survive as a change agent in hostile territory: principles of process improvement terrorism, Proc Sixth Annu Int Symp INCOSE, Boston, July 7–11, 1996, pp. 579–587.

L. Ibrahim et al., Federal Aviation Administration integrated capability maturity model, http://www.faa.gov/ait/ait5/faa-icmm.htm, Federal Aviation Administration, Washington, DC, 1997.

International Council on Systems Engineering (INCOSE), Systems engineering capability assessment model (version 1.50), INCOSE, Seattle, WA, June 1996.

K. Johnson and J. Dindo, Expanding the focus of software process improvement to include systems engineering, Crosstalk (October 1998), pp. 13–18.

L. Pajerek, The age of process: applying historical perspective to the present day, Proc Ninth Annu Int Symp INCOSE, Brighton, England, June 6–11, 1999.

E. Rechtin, Systems architecting: creating and building complex systems, Prentice-Hall, Englewood Cliffs, NJ, 1989.

P. Scholtes et al., The team handbook: how to use teams to improve quality, Joiner Associates, Madison, WI, 1988.

S. Sheard, Twelve systems engineering roles, Proc Sixth Annu Int Symp INCOSE, Boston, July 7–11, 1996, pp. 481–488.

S. Sheard, Navigating the compliance frameworks quagmire, tutorial presented at the Seventh Annu Int Symp INCOSE, Los Angeles, August 3–7, 1997a.

S. Sheard, The frameworks quagmire, a brief look, Seventh Annu Int Symp INCOSE, Los Angeles, August 3–7, 1997b, pp. 159–166.

S. Sheard, Systems engineering for hardware and software systems: point-counterpoint, Proc Eighth Annu Int Symp INCOSE, Vancouver, British Columbia, Canada, July 26–30, 1998, pp. 935–942.

S. Sheard, Fifty ways to save your budget: reduced cost systems engineering process improvement, Proc Ninth Annu Int Symp INCOSE, Brighton, England, June 6–11, 1999, pp. 455–463.

S. Sheard, Types of systems engineering implementation, Proc Tenth Annu Int Symp INCOSE, Minneapolis, July 16–20, 2000.

S. Sheard and J. Lake, Systems engineering standards and models compared, Proc Eighth Annu Int Symp INCOSE, Vancouver, British Columbia, Canada, July 26–30, 1998, pp. 589–596.

S. Sheard and G. Roedler, Interpreting "continuous-view" capability models for higher levels of maturity, Syst Eng 2 (1999), 15–31.

Software Engineering Institute, Capability maturity model integration, http://www.sei.cmu.edu/cmm/cmms/cmms.integration.html, Carnegie Mellon University, Pittsburgh, PA, 1999.

Sarah A. Sheard has 20 years' experience in systems engineering, including hardware systems such as satellites and software systems such as air traffic control. Currently the deputy manager of the systems engineering group at the Software Productivity Consortium, Ms. Sheard helps companies activate systems engineering process improvement efforts and integrate the software and systems engineering aspects of both development work and process improvement efforts. Ms. Sheard first published a paper in the field of systems engineering process improvement at the 1992 meeting of NCOSE, when "Capturing the systems engineering process" won the Best Paper award in the processes track. Since that time, she has written organizational processes and process architectures, participated in developing two systems engineering capability models, and performed SE-CMM and EIA/IS 731 systems engineering assessments with organizations from large government contractors to small commercial companies. She is currently the chair of INCOSE's Measurement Technical Committee, and in the past has served INCOSE as chair of the Communications Committee and as Programs Chair of the Washington Metropolitan Area Chapter. Ms. Sheard received a B.A. in Chemistry from the University of Rochester and an M.S. in Chemistry from the California Institute of Technology.

Howard Lykins is a Senior Member of the Technical Staff at the Software Productivity Consortium, where he develops methods and processes for software and systems engineering. He holds a B.A. in Computer Science from Transylvania University and an M.S. in Computer Science from Washington University in St. Louis. Prior to joining the Consortium in 1991, Mr. Lykins was employed at Emerson Electric Company in St. Louis, where he developed control software for radar systems. He is coauthor of the Consortium's Integrated Systems and Software Engineering Process (ISSEP), and recently developed a detailed tailoring of ISSEP for component-based development. He has participated in a number of systems engineering benchmarks and miniassessments. Mr. Lykins is a member of the IEEE and co-chair of the INCOSE Model Driven System Design Working Group.

James R. Armstrong is an internationally known writer and lecturer on systems engineering and is the past president of the largest chapter of INCOSE. His 30 years' experience in systems engineering includes 21 years of systems development as program manager and lead engineer, with experience in test, production, field deployment, and configuration management. Programs included space, ground, ship, and airborne deployments of radar, communications, optics, and navigation systems. As the manager of the systems engineering group at the Software Productivity Consortium, Mr. Armstrong provides consulting expertise and training for members and performs SE-CMM assessments. He has participated in the development both of industry standards for systems engineering and of systems engineering capability models, including IEEE 1220 and EIA/IS 731. Mr. Armstrong received a B.S. in Mechanical Engineering from Rensselaer Polytechnic Institute in 1967 and an M.S. in Systems Management from the University of Southern California in 1977.
