
Available online at www.sciencedirect.com

Acta Astronautica 52 (2003) 361–370
www.elsevier.com/locate/actaastro

Predicting mission success in small satellite missions

Mark Saunders∗, Wayne Richie, John Rogers, Arlene Moore

NASA Langley Research Center, Space Science Support Office, Hampton, VA 23665, USA

Abstract

In our global society with its increasing international competition and tighter financial resources, governments, commercial entities and other organizations are becoming critically aware of the need to ensure that space missions can be achieved on time and within budgets. This has become particularly true for the National Aeronautics and Space Administration's (NASA) Office of Space Science (OSS), which has developed its Discovery and Explorer programs to meet this need. As technologies advance, space missions are becoming smaller and more capable than their predecessors. The ability to predict the mission success of these small satellite missions is critical to the continued achievement of NASA science mission objectives. The NASA OSS, in cooperation with the NASA Langley Research Center, has implemented a process to predict the likely success of missions proposed to its Discovery and Explorer programs. This process is becoming the basis for predicting mission success in many other NASA programs as well. This paper describes the process, methodology, tools and synthesis techniques used to predict mission success of this class of mission.

© 2002 Elsevier Science Ltd. All rights reserved.

1. Background

In response to constrained budgets and the science community's need for more frequent scientific flight investigations, the Office of Space Science (OSS) has developed an acquisition strategy designed to reduce the overall cost and schedule of small satellite missions. This reduction has been achieved by soliciting entire science investigations from the scientific community, capitalizing on the strengths of open competition and peer review. The science investigator is responsible for the total mission, from the development of the science objectives, through the design of the instruments, spacecraft, and mission operations centers, to the collection of science data and the subsequent data analysis. A process for evaluating these science investigations from a technical, management and cost perspective has been developed by OSS and NASA Langley Research Center to allow the best achievable science to be conducted.

To ensure equivalent science investigations can compete on an equal footing, the Explorer Program is divided into three different classes of missions: medium class explorers (MIDEX), small class explorers (SMEX), and university class explorers (UNEX). OSS solicits proposals for each of these about every two years. The Discovery Program is not divided into different classes and solicits proposals approximately every 18 months. The Discovery Program and MIDEX solicitations are usually conducted through a two-part process, with formal proposals submitted in response to an Announcement of Opportunity followed by a competitive funded study period, with 4–6 proposals selected from the first step. The SMEX and UNEX investigations are usually selected for flight after only a single proposal stage.

∗ Corresponding author.

0094-5765/03/$ - see front matter © 2002 Elsevier Science Ltd. All rights reserved. PII: S0094-5765(02)00176-5


Nomenclature

AO    announcement of opportunity
CDR   critical design review
CoDR  concept design review
EMI   electromagnetic interference
GDS   ground data system
NASA  National Aeronautics and Space Administration
OSS   Office of Space Science
PDR   preliminary design review
PRR   preship readiness review
WBS   work breakdown structure

When acquiring science investigations through this technique, the OSS typically assesses science investigations proposed to the Discovery and Explorer Program series against four evaluation criteria:

1. Scientific merit of the investigation.
2. Feasibility of achieving the science objectives.
3. Feasibility of the mission implementation approach.
4. Social benefits of the investigation.

Two separate panels conduct the evaluations. A science peer review panel is responsible for assessing the scientific merit of the science investigation and the feasibility of achieving the science objectives. A technical, management, cost and other program factors (TMCO) panel assesses the feasibility of the mission implementation approach and the social benefits of the investigation. For proposals that respond to the second step of the two-step approach, OSS usually convenes a single panel to assess the results of the funded study. In general, this second review examines the higher fidelity designs and development plans, but does so in a similar fashion to the initial evaluation. Unless the science has changed, the scientific merit of the investigation is not reassessed. Once selected for flight, the mission is subjected to at least one more formal confirmation review. Again, this review will look at the same elements of the mission, but at the level of fidelity of the preliminary design review (PDR). This review is consistent with NASA's review requirements specified in the NASA procedures and guidelines (NPG) 7120.5A, Program and Project Management, dated April 3, 1998.

2. Evaluation overview

The ability to predict the successful achievement of space science missions proposed to the Discovery and Explorer programs requires examination of all elements of these proposed science investigations. These facets include: scientific objectives; scientific instrument capabilities and development; mission design; spacecraft design, development and integration; launch integration and operations; mission and science operations; and finally science data analysis and archival. Each of the implementation aspects is examined by scientific peers and technical experts to determine the likelihood that the element can support the scientific objectives. The results of these analyses are synthesized and integrated into cohesive conclusions. Since the uncertainties driving a team's ability to deliver the hardware and software necessary to achieve scientific success become clearer as the science investigation/mission proceeds through its design and development, the ability to predict improves as the development progresses. To compensate for uncertainties early in development, the technical, cost and schedule resources are examined carefully to ensure that there are adequate reserves and margins for the current stage of development.

• The technical approach is examined in detail to determine the level of understanding and feasibility of the technical design details, the areas of unknowns, the trades necessary to resolve those unknowns, and the technical resources (for example, mass, power, data), particularly the reserves and margins that are available to handle the uncertainties and risks.

• The management and organization are examined to determine if the project organization matches the development approach and provides adequate skills, processes and management tools to ensure problems can be foreseen and addressed as they arise.

• The cost and schedule are analyzed to ensure that there are adequate resources to support the engineering, manufacturing, integration, operation and data analysis of the project and to achieve the scientific objectives.


Since the success of the technical development and subsequent mission and science operations depends on both the management and organization and the cost and schedule resources, all analyses are synthesized and integrated through a comprehensive process of examining the interrelationships between the three different areas. This synthesis is the essence of NASA's ability to predict mission success. Each of the three areas (technical, management and cost/schedule) will be described in detail, followed by a description of how these three areas are integrated to arrive at an overall prediction of mission success.

3. Predicting technical success

Most space mission teams proceed through a similar and standard design and development process. Once the team has established its scientific objectives (or other objectives such as commercial communication) and the payload necessary to meet these objectives, they begin to look at the overall requirements that the payload will place on the rest of the system architecture. From these requirements, the team can develop a mission concept and a system architecture which satisfies the basic system requirements. This typically includes the weight, power and data requirements for the payload. From this the team can estimate the weight, size, power, data and communication requirements for the spacecraft and ground systems. Given the weight and size of the spacecraft, the team can determine the appropriate launch vehicle and mission design to meet the technical objectives. Following the development of the system architecture, the team refines the design through increasing technical definition, system trade studies, and subsystem design. This process usually follows a similar path from concept development and technical definition, through preliminary design, detailed design, fabrication, assembly, integration and test, leading up to delivery of the system to the launch site. At each stage of design and development the unknowns are reduced and the margins for uncertainty are better defined. If teams are using the current state of the art, there is no reason why they should not be able to deliver the planned system (with all technical objectives met) on schedule and within budget. If this is true, then why are some missions successful and others not? Although technical difficulties do not in themselves lead to failure, there are a number of technical reasons that contribute to why missions are not successful. First, mission teams assume that they understand their mission well enough to ignore the standard rules of thumb on design margins, such as mass margins. Second, teams assume that they will not encounter significant difficulties in developing new technologies or advancing the state of the art necessary to enable their mission. Third, teams do not methodically, rigorously, and judiciously control and safeguard project resources (margins for mass, power, cost, schedule, etc.) throughout the project life cycle. As the realities of system design and development occur, teams are forced into using up margins and taking greater risks to overcome development problems.

The OSS/LaRC technical evaluation process is a systematic approach to examining each element of the system design, development approach, and operations plans. The evaluation teams include experts in management, mission design, spacecraft design and development, instrument design and development, mission operations, and ground data system design and development. Since OSS examines missions at various stages of development, these evaluation teams expect the level of design/development maturity to be commensurate with the point in the planned overall development schedule.

3.1. Mission design

The evaluation begins with examining the overall mission architecture to ensure that the mission team has considered all the elements of the system completely and correctly. In early stages of concept development it is not uncommon for teams to develop architectures which have elements that do not work well together. The criticality of this mistake depends on the degree of freedom that the team has left themselves, in terms of technical, schedule and cost margins, to solve problems. If teams have adequate margins, this type of mistake may not be considered a showstopper, but developing workable architectures will ensure that the mission is viewed in a positive light. The system architecture is also examined from a complexity and flexibility standpoint. How hard is the mission to accomplish correctly, and what degree of flexibility does the team have if problems occur during the mission? After examining the system architecture, the technical parameters of the mission design (launch energy and trajectory, orbit parameters, deep space trajectories, etc.) are verified through orbital mechanics analytical tools. In the early stages of development these analytical tools require inputs of assumptions (e.g., propulsion Isp) which can affect the outcome. It helps when the teams provide sufficient information to minimize the assumptions made by an evaluation team. This is particularly critical when discussions between the proposing team and the evaluation team are limited. This case usually occurs during the formal solicitation process. For OSS missions the follow-on reviews usually allow adequate discussion to illuminate misunderstandings. However, in attempts to maximize the return on investment in terms of the maximum payload, many teams make overly optimistic assumptions which may erode mass and power margins later in the development. Finally, mission duration and launch window constraints are considered when assessing the likelihood that the mission can sustain long operations periods or be launched during tight launch windows.
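To illustrate how a single assumption such as propulsion Isp can swing a verification result, consider the Tsiolkovsky rocket equation evaluated at a few candidate Isp values. This is a generic sketch, not a tool from the OSS/LaRC process; the dry mass, delta-v and Isp numbers are invented for the example.

```python
import math

def propellant_mass(dry_mass_kg: float, delta_v_ms: float, isp_s: float,
                    g0: float = 9.80665) -> float:
    """Propellant needed for a burn, from the Tsiolkovsky rocket equation:
    m_prop = m_dry * (exp(dv / (Isp * g0)) - 1)."""
    return dry_mass_kg * (math.exp(delta_v_ms / (isp_s * g0)) - 1.0)

# Illustrative numbers (not from the paper): a 300 kg dry spacecraft
# performing a 1.2 km/s orbit-insertion burn.
for isp in (220.0, 230.0, 240.0):  # optimistic vs. conservative Isp assumptions
    m = propellant_mass(300.0, 1200.0, isp)
    print(f"Isp = {isp:5.1f} s -> propellant = {m:6.1f} kg")
```

A spread of roughly 10 s in assumed Isp moves the propellant estimate by tens of kilograms here, which is exactly the kind of erosion of mass margin the evaluation looks for.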

3.2. Spacecraft design

The level of maturity of the spacecraft design is evident from the depth of the system and subsystem descriptions, block diagrams and equipment lists. Experts familiar with spacecraft design determine if the various subsystems interface appropriately, are composed of appropriate hardware and software, and have adequate contingencies for the heritage and complexity of the design and the development stage. Figs. 1 (weight contingency) and 2 (power contingency) represent minimum reserves for expected growth. Mass and power margins for unplanned/unexpected growth must be added on top of these contingencies. In addition, the level of redundancy and reliability is examined in light of the length and risk of the mission. When new technology or advances in the current state of the technology are required, back-up alternatives to these technologies are reviewed to ensure that they are feasible and can be implemented at the proper time. If no back-ups are planned, the schedule and cost margins are particularly important as the methods for managing the technology risks.
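Milestone-dependent contingency floors of the kind shown in Figs. 1 and 2 lend themselves to a mechanical check. The sketch below uses placeholder percentages (the figures' actual curves are not reproduced here), and the table and function names are illustrative.

```python
# Hypothetical minimum-contingency floors keyed by review milestone;
# these percentages are placeholders, NOT the exact values of Figs. 1 and 2.
MIN_MASS_CONTINGENCY_PCT = {"Proposal": 30, "CoDR": 25, "PDR": 20, "CDR": 10, "PRR": 2}

def mass_contingency_ok(current_best_estimate_kg: float,
                        allocation_kg: float,
                        milestone: str) -> bool:
    """True if the contingency held against the allocation meets the
    minimum floor for the given milestone."""
    contingency_pct = 100.0 * (allocation_kg - current_best_estimate_kg) / current_best_estimate_kg
    return contingency_pct >= MIN_MASS_CONTINGENCY_PCT[milestone]

print(mass_contingency_ok(100.0, 128.0, "CoDR"))  # 28% held vs. a 25% floor
```

The same shape of check applies to power, with the floors taken from the power-contingency curve instead.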

The mission duration and the design life of the spacecraft are carefully considered when examining the levels of redundancy, reliability and parts quality planned for the spacecraft. The ability of the spacecraft to gracefully degrade can alleviate some reliability concerns, but the level of spacecraft reliability needs to be commensurate with the mission duration and the impact of failure to accomplish the mission objectives.

Although the trend is toward integrated hardware and software development, software development is sometimes overlooked as a critical development area. Review teams expect to see a rational sequence and approach for developing, testing and verifying the flight software.

During the last 10 years, the size and cost of missions have come down considerably. This has allowed more missions to be accomplished. Since this has permitted the simpler missions to be accomplished, new missions are beginning to be significantly more complicated. The number and complexity of the mechanisms is a risk factor that must be handled appropriately. In these cases, the degree of hardware qualification for the planned environment and operation is particularly important. Accommodating failsafe features into these complex systems enhances the mission's reliability by providing opportunities to continue other portions of the mission when failures occur. If failsafe features are not incorporated, missions may not be successful.

In the early stages of the design process, the mission team is faced with many system trades in arriving at the optimum design. The plan for and the status of these trades, the conclusions drawn and the processes used to conduct these trades reveal how well the team is addressing the many design decisions.

To substantiate expert opinions and to ensure thoroughness in our reviews, analytical design tools are used to verify that the proposed spacecraft has the resources necessary to meet system requirements during mission operations. Although these tools are not perfect, they highlight design areas where resource stresses occur and allow reviewers opportunities to examine, in depth, how well the design operates. Thus, mission teams will benefit from including the best and most complete information in proposals or review packages.
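One simple instance of such a resource check is a mode-by-mode power budget: sum the loads active in each operating mode and flag any mode that eats into the required margin. All load values, mode definitions and the margin fraction below are invented for the sketch.

```python
# Illustrative equipment loads in watts (invented values).
spacecraft_loads_w = {
    "avionics": 35.0, "comm_tx": 60.0, "payload": 45.0, "heaters": 25.0,
}

MODES = {  # which loads are on in each operating mode (illustrative)
    "science":  ("avionics", "payload", "heaters"),
    "downlink": ("avionics", "comm_tx", "heaters"),
    "safe":     ("avionics", "heaters"),
}

def stressed_modes(available_w: float, margin_frac: float = 0.15):
    """Return the modes whose total demand exceeds the available power
    after a reserve margin is held back."""
    usable = available_w * (1.0 - margin_frac)
    return [mode for mode, loads in MODES.items()
            if sum(spacecraft_loads_w[l] for l in loads) > usable]

print(stressed_modes(130.0))  # downlink (120 W) exceeds 130 W less 15% margin
```

A review tool would of course carry many more loads, duty cycles and battery depth-of-discharge limits, but the stress points it flags are of this form.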


[Figure 1: chart (residue removed). Axes: minimum weight contingency, 0–60 percent, versus milestone (Proposal, CoDR, PDR, CDR, PRR); series for New, Next Generation and Production spacecraft at mass <50 kg and <500 kg.]

Fig. 1. Minimum standard weight contingencies (percent).

[Figure 2: chart (residue removed). Axes: minimum power contingency, 0–100 percent, versus milestone (Proposal, CoDR, PDR, CDR, PRR); series for New, Next Generation and Production spacecraft at power <500 W and <1500 W.]

Fig. 2. Minimum standard power contingencies (percent).

3.3. Payload design (including payload-to-spacecraft interfaces)

The payload is examined in much the same way as the spacecraft. The maturity and completeness of the designs are compared to the postulated point in the development process to ensure that the degree of completion is appropriate. Margins, technologies and redundancies are looked at in light of the payload complexity, technology readiness and heritage, and the complexity of the payload is compared to the state of development and qualification plans. Finally, the payload requirements on the spacecraft and ground systems are compared with the level of technical resources (e.g., power) available to ensure that the payload will be allowed to perform as specified. The payload-to-spacecraft interfaces are examined for complexity and thoroughness. A traceability matrix, which starts with the science objectives and walks through the mission, instrument, spacecraft and ground requirements, is extremely helpful in understanding how well the system is designed to meet the mission objectives.
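A traceability matrix of the kind described above can be represented as a simple mapping from each science objective to the requirements that support it at every level, so gaps can be found automatically. All objective and requirement names below are invented for the sketch.

```python
# Toy requirements-traceability chain; every identifier is illustrative.
traceability = {
    "SCI-1: map surface composition": {
        "mission":    ["MIS-3: 90-day mapping orbit"],
        "instrument": ["INS-2: spectrometer SNR > 100"],
        "spacecraft": ["SC-7: 45 W payload power in science mode"],
        "ground":     ["GDS-4: archive 2 GB/day"],
    },
}

def untraced(objective: str,
             levels=("mission", "instrument", "spacecraft", "ground")):
    """Return the system levels with no requirement tracing to the objective."""
    rows = traceability.get(objective, {})
    return [lvl for lvl in levels if not rows.get(lvl)]

print(untraced("SCI-1: map surface composition"))  # fully traced -> []
```

An evaluation would flag any objective for which `untraced` returns a non-empty list, since an objective with no supporting spacecraft or ground requirement cannot be met.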

3.4. Mission operations and ground data systems (GDS) design

The mission operations and ground data system designs must be compatible with the spacecraft and payload designs, so the mission planning and subsequent operations plans are looked at to ensure they reflect a reasonable and adequate approach. The mission operations and ground data system architecture is just as important as the flight hardware and software and has as big an impact on mission success. Missions have a better chance of succeeding when the mission operations and ground data systems are developed concurrently with the flight system. This allows both systems' hardware and software to begin working together early enough to identify problems that might occur. Mars Pathfinder used this technique to reduce the cost of the GDS by several factors, using existing hardware and software as part of the flight system test equipment, and most of the Discovery missions are following this example.

The technical aspects of the mission design affect how well the mission can be operated and thus must be carefully examined against the operations concept. The spacecraft operations can be complex, requiring very sophisticated software that must be verified prior to use. If the software is not planned to be loaded on the spacecraft prior to launch, then a version of the spacecraft hardware and software is required on the ground. Having this type of capability has also proven very effective when trying to resolve spacecraft anomalies after launch. The communication links between the spacecraft and the ground stations must have sufficient margin to ensure that the data can be adequately retrieved. The geometry between the spacecraft, the Earth and the Sun can affect this greatly. Since the ground antennas can be heavily committed, the planned needs for these assets or the need to build new ones must be addressed. When existing facilities are a significant part of the mission operations strategy, it is important to demonstrate the availability of these assets through commitments by the asset owners.

Finally, the drive for lower cost mission operations is leading to more innovative approaches. These new approaches must be carefully thought out and tested before being fully implemented. CONTOUR, Discovery's sixth mission, is designed to go into a hibernation mode between comet encounters. During this period, the spacecraft is placed in a safe mode and left unattended for months at a time. Given the recent Lewis spacecraft failure, where the spacecraft was left unattended for brief periods (i.e., weekends) and subsequently lost, this method will need to be carefully proven prior to full implementation.
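The link-margin question raised above is usually settled with a link budget. The sketch below keeps only the dominant terms (EIRP, free-space path loss, receiver G/T, Boltzmann's constant) and deliberately omits pointing, atmospheric and implementation losses; every numeric input is illustrative, not taken from any OSS mission.

```python
import math

def link_margin_db(eirp_dbw: float, range_km: float, freq_mhz: float,
                   rx_gain_dbi: float, required_cn0_dbhz: float,
                   system_noise_temp_k: float = 150.0) -> float:
    """Received C/N0 minus the required C/N0, in dB. Simplified budget:
    free-space path loss only, no pointing/atmospheric/implementation losses."""
    fspl_db = 32.45 + 20 * math.log10(range_km) + 20 * math.log10(freq_mhz)
    boltzmann_dbw = -228.6  # 10*log10(k), in dBW/K/Hz
    cn0 = (eirp_dbw - fspl_db + rx_gain_dbi
           - 10 * math.log10(system_noise_temp_k) - boltzmann_dbw)
    return cn0 - required_cn0_dbhz

# Illustrative case: 23 dBW spacecraft EIRP at 2200 MHz, 2000 km slant range,
# 35 dBi ground antenna, 45 dB-Hz required carrier-to-noise density.
print(round(link_margin_db(23.0, 2000.0, 2200.0, 35.0, 45.0), 1))
```

Rerunning the same budget at worst-case range and geometry is what establishes whether the margin the text calls for actually exists.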

3.5. Development approach (including manufacturing, integration and test)

The development approach is considered as important to mission success as the ability of the design to meet its requirements. Having an adequate approach and sufficient time to accomplish the development is critical to success. Many mission teams are going to a protoflight development approach (where the development unit is ultimately flown). Although this increases risk somewhat, it is proving acceptable in missions where technology is not a big driver. When new technology is required, more traditional development strategies with prototypes, brassboards, breadboards, engineering models, etc. allow development problems to be rectified prior to investing in the flight hardware. Whatever development strategy is chosen, each element of the development plan is carefully looked at to ensure that implementation of the design can be achieved. The schedule is compared to the planned development activities to ensure that sufficient time is available to meet delivery requirements and to overcome the development problems that will occur. Depending on the degree of difficulty of the development, one month of schedule slack per year of development time is a reasonable rule of thumb. If the development is particularly difficult, at least 1.5 months of slack per year may be necessary.

The methods for hardware flight qualification are an important aspect of the development plans. If heritage hardware is planned, the team needs to consider how closely the qualification of the original hardware matches the planned usage. Many teams are now routinely performing heritage reviews to identify how well the hardware meets the planned system performance. Part of the Lewis failure, however, has been attributed to inappropriate use of heritage (hardware use did not match the original usage), which resulted in a flawed attitude control system design. For new pieces of hardware, particularly mechanisms, lifetime testing is also needed to increase the likelihood of successful operation in orbit. The Genesis spacecraft, which has extensive mechanisms, is planning life testing at over two lifetimes to ensure confidence that the hardware will work on orbit.

As always, software development and testing must be matched to the hardware development. Concurrent software development allows the hardware and software to be tested in testbeds early in the development flow so that the bugs can be worked out. When software development is decoupled from the hardware flow, additional schedule reserve needs to be included to work out the software bugs. Early testing also allows for more burn-in of electronic components. Adequate burn-in is often overlooked or considered a luxury. Increasing the burn-in can increase confidence that the early failure modes have been eliminated, particularly for single string systems.

The completeness of environmental testing to verify that the system will operate as designed in the various mission environments (radiation, temperature, solar, vibration, acoustic, EMI, etc.) is also examined. Skipping tests when schedule or cost pressures arise increases the risk to mission success. Equally important is having adequate time in the schedule to resolve flight system anomalies found during testing. Invariably, workmanship or design flaws are found during this period that must be resolved prior to launch and, if at all possible, retested.

Finally, the launch site activities are examined to ensure that the team has planned adequate time for shipping, flight system to launch vehicle integration, hardware testing, fueling, etc. For many missions the launch window is very short; therefore it is critical that launch preparations have adequate schedule time, with margin, to accomplish these tasks.

When examining the development approach, the review team also looks carefully at the various development plans and processes to ensure that they are appropriate and complete for the given stage of development. Adequate systems engineering is an essential part of both the design and the development process. Through this discipline, integration and orchestration of the entire design/development is accomplished, and resolutions to development problems can be worked without inadvertently impacting other parts of the system. In addition, systems engineering maintains track of system resources and configurations. Test and verification plans reveal the degree to which the mission team plans to qualify, test and verify system performance. Adequate configuration control gives confidence that what is being tested matches plans. Finally, the quality assurance program is examined to assure that the plans and processes will ensure an acceptable quality product. ISO 9000 certification is now becoming a requirement, and the degree of certification improves confidence that the mission team has the right processes in place.
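The schedule-slack rule of thumb quoted earlier in this section (roughly one month of slack per year of development, rising to about 1.5 months per year for particularly difficult efforts) reduces to a one-line calculation; the function name is ours, not the paper's.

```python
def minimum_slack_months(development_months: float, difficult: bool = False) -> float:
    """Minimum schedule slack per the rule of thumb in the text:
    1 month/year normally, 1.5 months/year for difficult developments."""
    months_per_year = 1.5 if difficult else 1.0
    return months_per_year * development_months / 12.0

print(minimum_slack_months(36))                  # 3-year build -> 3.0 months
print(minimum_slack_months(36, difficult=True))  # -> 4.5 months
```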

4. Predicting management success

The design and development processes are dependent on an effective management approach. The organizational structure needs to be matched to the work and should be as simple as possible. Simplicity allows less-complicated project control techniques and processes, but is not always possible. Many projects have large organizations, with each organization responsible for an element of hardware or software. As the complexity of the organization grows, so does the importance of effective project control techniques and tools. Appropriate performance measurement systems and receivable/deliverable systems go a long way toward keeping management informed of development progress. Coupled with these, a good systems engineering team and systems engineering process will ensure that decision data is the best information available. In both large and small organizations, responsibility, authority and accountability need to be matched, and the decision-making process must be clear to all members of the team. Many development problems can be attributed to confused lines of authority, which inhibit decisions necessary to move through the development process. A key indicator of the quality of the management approach is how well the work breakdown structure corresponds to the organizational structure. When these are poorly matched, the lines of authority and responsibility are obscured.

Despite the desire by organizations to move into new business areas, successful past experience in performing the functions assigned is a very good indicator of the likelihood of success in similar functions for new missions. Unfortunately, organizations without experience do not usually fully understand all the pitfalls related to a given task. The quality and experience of key personnel is equally important. Most of the new smaller missions are fast-tracked with little margin for error. Individuals in leadership positions must have adequate training and experience if the project is expected to be delivered on time and within budget. The project personnel can expect to work overtime, so full-time or near full-time commitments are important to ensure adequate attention to detail. As obvious as this may seem, the continuing pressure to do more with less encourages organizations to assign more responsibilities than is sometimes prudent.

Procurement strategies and subcontract management cannot be overlooked, since much of the mission hardware will be procured. Mars Pathfinder assigned a procurement manager to track the progress of procurements, and this proved extremely beneficial. This person assisted the project manager in deciding the best procurement strategy, including incentive plans, and reported on the progress of hardware development and deliveries. This allowed early warning of procurement problems so that adequate management attention could be applied to mitigate delivery issues and to develop work-around plans.

Regardless of the management approach, risk

management and mitigation is critical to the success of any project. Without adequate risk management, it is impossible to predict whether a given mission can be successful, and NASA has now made it mandatory for all NASA projects. A comprehensive and tailored risk management plan should be developed as early as possible in the project life cycle, and then followed rigorously until the project is complete. This plan needs to encompass not only technical issues, but schedule and cost as well. Risk management and mitigation involves understanding the project's entire technical, cost and schedule envelope (all margins) so that problems may be resolved through the application of the appropriate resource. As examples, mass problems might be solved through the application of cost reserves, and technology development problems might be overcome with schedule reserves. Review teams look for evidence that the mission team understands the risks associated with their individual mission and has planned actions to deal with them. The identification of appropriate specific risk items, the effects of those risks on the project, and the methods for analysis, tracking and mitigation indicate a good understanding of the problems ahead and how to keep them constrained. Descope plans are also a necessary part of the risk management plan, but these should be used as a last resort. When descope plans are developed, it is very important to identify, for each descope item, the appropriate decision date and the effect on the mission, whether positive or negative.
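The risk-management elements described above (risk items, their effects and mitigations, and descope items with decision dates) amount to a simple register. The sketch below is purely illustrative: the field names, the 1-5 scales and the example entries are assumptions for this sketch, not part of the OSS/LaRC process.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative risk-register entries; field names and the 1-5 scales
# are assumptions for this sketch, not an official NASA format.
@dataclass
class RiskItem:
    description: str
    likelihood: int          # 1 (unlikely) .. 5 (near certain)
    consequence: int         # 1 (minor) .. 5 (mission loss)
    mitigation: str

    def score(self) -> int:
        # A common likelihood x consequence ranking
        return self.likelihood * self.consequence

@dataclass
class DescopeItem:
    description: str
    decision_date: date      # latest date the descope can still be taken
    effect_on_mission: str   # positive and negative effects, per the text

# Hypothetical examples
register = [
    RiskItem("Detector cooler exceeds power allocation", 3, 4,
             "Carry 20% power margin; fall back to passive radiator"),
    RiskItem("Flight software integration slips", 4, 4,
             "Early testbed deliveries; incremental builds"),
]
descopes = [
    DescopeItem("Delete secondary imaging channel", date(2004, 6, 1),
                "Saves mass and cost; reduces science return"),
]

# Track the highest-scoring risks first
for r in sorted(register, key=RiskItem.score, reverse=True):
    print(f"{r.score():>2}  {r.description}")
```

Sorting by score gives the review team the same at-a-glance view the text asks for: which risks most need tracking and mitigation attention.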

5. Predicting cost success

Predicting cost success is one of the most misunderstood elements of predicting mission success. Most of this can be attributed to a lack of understanding of how good cost analysis is conducted. This lack of understanding comes in part from the decoupling of cost from the technical aspects of the mission and from a lack of proper utilization of analytical cost tools. However, cost modeling is no different than technical analytical modeling: the results are only as good as the correlation of the analytical model with the system being measured. Unfortunately, correlating cost models is not as easy as it may seem. The OSS/LaRC cost analysis method is designed to provide the most complete assessment possible. Fig. 3 is a graphic representation of the elements of good cost analysis and how each element builds to form a comprehensive view of a given mission's budget. The objective of this cost analysis is to verify the mission team's cost estimates, not to develop a NASA estimate for what the mission should cost. As always, it behooves the proposer to provide as much and as convincing data as possible to substantiate their proposed costs. However, many teams assume too much optimism in the development process without supporting evidence. This usually leads to a poor assessment conclusion.

The cost analysis begins with a detailed review of

the project's cost documentation, including: the WBS, the basis of estimate (the rationale used by the team in developing its estimate), the funding profile, the project schedule, the staffing plan, costs by organization, contributions (elements provided at no cost) and the rationale for savings (heritage, new ways of doing business, etc.). This review is designed to examine the consistency of all the cost pieces and the rationale behind them. Once the review of the team's data is completed, the technical data from the mission are input into two different cost models. These models require a number of assumptions (e.g. heritage and technology readiness levels), and these assumptions are derived from the technical data presented by the mission team. The results of the two cost models are compared, and the differences reconciled. These results are then compared with the mission team's estimate. When differences are observed, they are examined by technical and cost members of the review team to determine whether the cause is poor correlation or an unrealistic mission team estimate. However, the cost model results are never used exclusively: historical cost actuals for similar items are compared with both the cost model results and with the mission team's estimate. These data provide an independent credibility check of the projected cost estimates. Based on data provided by the technical and management teams and data from the mission team, cost threats are identified and quantified, if possible. The analytical cost analysis is combined with the cost threats, risk items and risk mitigation approaches (including cost reserves) to arrive at an overall cost risk assessment and rationale.

Fig. 3. Cost analysis hierarchy. (The figure depicts the independent cost assessment process as five levels that build on one another: (1) analysis of the concept study/proposal; (2) independent tools, models and analogies; (3) cost threats and risks from all work below and from the technical/management analysis; (4) the cost assessment; (5) the overall cost risk.)
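The reconciliation step described above, two independent model estimates checked against each other and then against the mission team's estimate, can be sketched in a few lines. The tolerances and the dollar figures below are assumptions chosen for illustration; they are not the actual OSS/LaRC criteria, and the real parametric models are of course far richer than a single number.

```python
# Illustrative sketch of the model-reconciliation step: two independent
# parametric estimates are compared with each other and with the mission
# team's estimate. Thresholds and numbers are assumptions, not the
# actual OSS/LaRC criteria.

def reconcile(model_a: float, model_b: float, tol: float = 0.15) -> float:
    """Flag model disagreement beyond tol, then return the mean."""
    spread = abs(model_a - model_b) / max(model_a, model_b)
    if spread > tol:
        print(f"Models disagree by {spread:.0%}; re-examine assumptions")
    return (model_a + model_b) / 2.0

def assess(team_estimate: float, independent: float, tol: float = 0.20) -> str:
    """Compare the team's estimate with the reconciled independent one."""
    delta = (independent - team_estimate) / team_estimate
    if delta > tol:
        return f"team estimate looks {delta:.0%} low: possible cost threat"
    return "team estimate within independent range"

# Hypothetical life-cycle cost estimates ($M) for a small satellite mission
independent = reconcile(model_a=148.0, model_b=162.0)
print(assess(team_estimate=120.0, independent=independent))
```

The same pattern extends naturally to per-WBS-element comparisons, which is where the technical and cost reviewers look when the totals disagree.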

6. Synthesis of technical, management and cost data

Despite weaknesses and flaws found in reviews of each of the technical, management and cost areas, each area by itself may or may not indicate a poor chance of mission success. However, when viewed in the aggregate, the mission may still be considered to have a good chance of success even when significant technical flaws exist. Experience has shown that all development programs will encounter unforeseen events and problems, and that overcoming these is linked to all the pieces. Predicting mission success can be tied to the level of maturity of the design/development in relation to the planned state, the degree of technical, cost and schedule margins for the state of the development, and the quality of the management approach and management team. This can be boiled down to a few simple questions:

1. How hard is the investigation to implement and how well is it understood? Is there enough "envelope"? What are the inherent risks?

2. Will the overall mission design (spacecraft, launch vehicle, ground system, mission ops) allow successful implementation of the mission as proposed? If not, are there sufficient resources (time and schedule) to correct identified problems?

3. Does the proposed flight system design and development approach give the mission a reasonable probability of accomplishing its objectives? Does it depend on advanced or new technology not yet demonstrated, or are the mission requirements within existing capabilities? Does the mission have sufficient resiliency in appropriate resources (e.g., mass, power) to accommodate development uncertainties?

4. Does the schedule reveal an understanding of the work to be done and the time it takes to do it, and is there a reasonable probability of launching on time?

5. Will the management plan, organization, roles and responsibilities, and experience allow successful completion of the investigation?

6. Does the investigation have a reasonable chance of being accomplished within the proposed cost?

7. Does the mission team understand the risks and have adequate fallback plans to mitigate them, including the risk of using new technology, to assure that the mission can be completed as planned? Are the risks and the risk mitigation approaches adequate to give sufficient warning to ensure that they can be mitigated without impacting the mission objectives?

If the answer to all of these questions is positive, then the mission is likely to succeed. For each negative answer, however, the level of risk must rise.

Despite doing or planning to do everything
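The rule that each negative answer raises the level of risk amounts to a simple tally over the seven questions. The sketch below is a hypothetical illustration of that logic; the question wording is condensed and the low/medium/high thresholds and labels are assumptions, not official OSS risk ratings.

```python
# Hypothetical sketch of the question tally described above: every
# negative answer raises the assessed risk level. The thresholds and
# category labels are assumptions, not official OSS ratings.

QUESTIONS = [
    "Investigation difficulty understood, with enough envelope?",
    "Overall mission design allows implementation as proposed?",
    "Flight system within existing capability, with resiliency?",
    "Schedule realistic, with probable on-time launch?",
    "Management plan, organization and experience adequate?",
    "Reasonable chance of completion within proposed cost?",
    "Risks understood, with adequate fallback plans?",
]

def risk_rating(answers: list[bool]) -> str:
    """Map yes/no answers to the seven questions onto a risk level."""
    negatives = answers.count(False)
    if negatives == 0:
        return "low risk: mission likely to succeed"
    if negatives <= 2:
        return f"medium risk: {negatives} area(s) of concern"
    return f"high risk: {negatives} areas of concern"

# Example: all positive except the cost question
print(risk_rating([True, True, True, True, True, False, True]))
```

In practice the synthesis is of course a judgment call rather than a count, since a single severe negative can outweigh several mild ones, but the tally captures the monotonic relationship the text describes.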

correctly, however, mission success cannot be

guaranteed. There are many intangibles that can affect mission success over which the team may have little control. Some of these intangibles are:

political meddling,
customer meddling,
adversarial relationships,
failure of the customer to meet commitments,
poor communications,
geographic dispersion of project elements,
facility problems.

These uncontrollable factors can create problems for an otherwise excellent mission. If there are adequate margins in the critical mission resources (cost, schedule, mass, etc.), even these problems might be overcome.

7. Conclusions

As can be seen, predicting mission success is as simple as the application of good engineering practices and common sense. Most flaws in missions with poor chances can be traced to trying to do too much with too little, whether it is money, time, technical resources, technology or management. Over the past 5 years, mission teams proposing to Discovery and Explorer Announcements of Opportunity have been learning this lesson, and the trend is for increasing numbers of them to achieve low risk ratings with each AO. This is creating a pleasant but interesting problem for the Associate Administrator of OSS: an abundance of diverse selectable missions to choose from. Since the discriminators are getting very tight, most successful proposals to NASA's Discovery and Explorer programs are typically well into Phase A at the proposal stage. In the end, if missions are technically equivalent, the final selection may be made solely on the basis of the best science for the dollar cost.

For further reading

[1] Guide for Estimating and Budgeting Weight and Power Contingencies, AIAA-G-020-1992.