
INSTITUTE FOR DEFENSE ANALYSES

IDA Paper P-4470
Log: H 09-001006

June 2009

IDA’s Integrated Risk Assessment and Management Model

James S. Thomason, Project Leader

Approved for public release; distribution is unlimited.

This work was conducted under IDA’s independent research program. The publication of this IDA document does not indicate endorsement by the Department of Defense, nor should the contents be construed as reflecting the official position of that Agency.

© 2009 Institute for Defense Analyses, 4850 Mark Center Drive, Alexandria, Virginia 22311-1882 • (703) 845-2000.

This material may be reproduced by or for the U.S. Government.


“There has been plenty of risk management talk, but not much risk management action.”

Under Secretary of Defense for Policy Michèle Flournoy, testimony (as Senior Fellow, Center for Strategic and International Studies) to the House Armed Services Committee, 2005

“The United States, our allies, and our partners face a spectrum of challenges, including violent transnational extremist networks, hostile states armed with weapons of mass destruction, rising regional powers, emerging space and cyber threats, natural and pandemic disasters, and a growing competition for resources…. We must balance strategic risk across our responses, making the best use of the tools at hand within the U.S. Government and among our international partners.”

Secretary of Defense Robert Gates, National Defense Strategy 2008

iii

PREFACE

This paper has been prepared by the Institute for Defense Analyses (IDA) under a central research project (C6251). The overall approach to strategic risk assessment and management described in this paper was originally developed and tested by IDA on behalf of its Department of Defense (DoD) sponsors under IDA task order BB-6-2414. Three OSD offices (PA&E, AT&L, Policy) and the Joint Staff, J-8 sponsored this original work as part of a broader effort to support capabilities-based planning and decision making in DoD.

The original IDA task order (BB-6-2414) focused on developing analytic approaches for assessing "cross-capability" trades. The original framework name, ICCARM (Integrated Cross Capability Assessment and Risk Management Framework), reflected that priority. But the construct described herein is actually broader than that name suggests, so we have re-labeled the approach as IDA's Integrated Risk Assessment and Management Model (IRAMM) to make explicit that wider applicability for DoD and national security decision-making.

The author alone is accountable for the final content of this report, but many people have made important contributions. Essential analytic help has come from all IRAMM team members, including Mr. Darrell Morgeson, Mr. Gene Porter, Mr. Jason Dechant, Mr. Mike Fitzsimmons and Dr. Amy Alrich. Very important advisors along the way have included Mr. Mike Leonard, Dr. Bob Bovey, Mr. Stan Horowitz, Dr. Jerry Bracken, Mr. Karl Lowe, General (USAF, ret.) Larry Welch and ADM (USN, ret.) Dennis Blair. Great appreciation is due as well to all the people who have served as evaluators and guinea pigs in the various pilot tests, as well as to the DoD sponsors from OSD and the JS, especially Mr. Eric Coulter, Ms. Lisa Disbrow, Dr. Nancy Spruill, Mr. Jim Bexfield and Mr. Phil Rodgers. Finally, the author is grateful to IDA for a Central Research Program grant to prepare this paper, and to the reviewers of this particular document: Mr. Mike Dominguez, Mr. Vance Gordon, and Dr. Thomas Mahnken. Mrs. Leslie Norris and Mrs. Barbara Varvaglione provided excellent publication expertise, and thanks are due to them as well.


CONTENTS

PREFACE ............ iii
SUMMARY ............ S-1
Purpose ............ 1
Introduction ............ 2
Definition and Context of "Strategic Risk" ............ 3
Characteristics of the Risk Assessment Methodology ............ 6
Challenge Areas ............ 8
Risk Parameters ............ 10
    Likelihood ............ 11
    Consequences ............ 12
    Combining Parameters ............ 15
    Additional Risk Information ............ 18
Notional Phase II Results ............ 19
Phases III and IV of IRAMM ............ 21
Using IRAMM in the DoD Planning, Programming and Budget Execution (PPBE) Process ............ 24
Strategic Risk Management—Toward a New Era of Defense and National Security Decision Making ............ 26

FIGURES

1. Challenge Area Definitions for Pilot Test ............ 9
2. Calibration Points for Consequence Estimates ............ 13
3. Consequences Scale Aid ............ 14
4. Risk Assessment Output Format: "Risk Profile" ............ 17
5. Force Adequacy Scale ............ 18
6. Notional Strategic Risk Assessments with IRAMM ............ 19
7. Representative Judgments regarding Strategic Risks from Weapons of Mass Destruction ............ 20
8. IRAMM ............ 23


SUMMARY

A senior national security leader in the United States, the Secretary of Defense for instance, benefits greatly from having a small group of trusted senior civilian and military advisors who comprise a strong, genuine team. Such a team will share insights and evidence with each other and with the Secretary candidly and efficiently. Having a shared language and a common assessment framework for such deliberations seems manifestly valuable. This paper offers senior national security leaders a tested, flexible approach to building a lexicon and assessment framework in order to facilitate and expedite top-level teamwork. The approach proposed here should be valuable to senior leaders throughout their tenure, but it may be especially useful first during a major transition, to a new strategy, a new administration, or both.

The conceptual framework and management model introduced in this paper are, moreover, tested “tools at hand” that could help the Secretary of Defense (and the Chairman of the Joint Chiefs of Staff, for example) immediately in their efforts with senior advisors and aides to balance strategic risk, moving DoD from risk management “talk” to more risk management “action.”

The particular approach offered in this paper encourages senior national security leaders to focus explicitly on managing, mitigating and balancing strategic risk—risk to the Nation’s most important interests—as the central goal of the entire security enterprise. The concept of risk to the nation may be the most appropriate vehicle for formalizing the overall, integrated judgment necessary for strategic decision making. Accordingly, a formalized, quantified measurement of strategic risk can be one useful means for discriminating among alternative policy and program options. Establishing a viable, comprehensive process for senior decision makers to compare the relative merits of alternative policy and program options in mitigating risks to the nation--strategic risks--as they are making major defense program decisions has been the chief goal of IDA researchers in building IDA’s Integrated Risk Assessment and Management Model (IRAMM).

A team of IDA researchers developed and employed this framework, IRAMM, in several pilot tests, including a series of not-for-attribution interviews of more than two dozen senior civilian and military officials within DoD. Interviewees included the CJCS and most of the Combatant Commanders, Service Chiefs, and Under Secretaries of Defense. This paper describes the framework and assessment methodology, offers illustrative results, and concludes with some recommendations for applying the methodology to DoD and national security planning activities.

Four Phases in IRAMM

The full IRAMM process consists of four basic phases: (I) set up parameters for a baseline strategic risk assessment (the major national security challenge areas as seen by the current Administration, as well as the baseline, currently planned forces, capabilities and policies to be assessed); (II) conduct an initial strategic risk assessment of how well the baseline program would likely do against those challenges; (III) build alternative force/capability options for mitigating and/or reallocating that baseline strategic risk; (IV) conduct strategic risk assessments of the alternatives, assessing how well those options perform relative to the baseline and each other. After these four main phases of IRAMM, results are presented to senior leaders for deliberation, refinement, and senior-level decision.

Implementing Strategic Risk Management within DoD

To exploit this process in building and selecting force and capability options for the DoD future years defense program and beyond, DoD could adopt the strategic risk definition and scales described in this paper, or prudent variations on them. For challenge (major mission) areas, several possibilities exist. One option would center analysis on the four major scenario types identified as important by the Secretary in the Terms of Reference for the current 2010 QDR study plan. Another could be the taxonomy developed for DoD's recent Roles and Missions Study. In selecting a set of challenge areas, though, the objective should be to find one that is mission-oriented, simple, and as mutually exclusive and comprehensive as possible.

Regarding time frames for a baseline and alternatives assessment, it would be sensible to establish both a near-term and a mid-term or longer-term version of any set of challenge areas that is selected. While such a structure might seem to complicate matters, such a scheme allows evaluators to explicitly assess how well baseline and alternative force-capability options balance strategic risks across time periods – addressing head-on Secretary Gates’ concerns about DoD “over-insuring” for some hypothetical future challenges while taking undue risks regarding important current challenges.

The evaluators for such an application of IRAMM in this Planning Programming and Budget Execution (PPBE) cycle should involve as many members of DoD’s “Senior Level Review Group” as can be scheduled. This panel would thus include the Secretary, the Deputy Secretary, the Chairman and Vice Chairman of the Joint Chiefs of Staff, the individual chiefs of the military services, the senior civilian officials of the Department at the Service Secretary and Under-Secretary level, and the combatant commanders.


The prompt adoption of a viable strategic risk assessment and management process, one that is transparent to senior decision-makers and that features scales robust enough to permit intellectually coherent comparisons among overall baseline and alternative force capability options, could help trigger a more integrated era of strategic planning, programming and policy making in DoD. IDA has developed what we believe are the core elements of such a process, ready for use by the Secretary of Defense and the CJCS in this PPBE cycle as well as in a number of other important national security contexts, including potentially the Chairman’s Risk Assessment.

Additional Applications

At the National Security Council level, a version of this IRAMM process could be structured as well for regular baseline assessments by NSC members of the strategic risks to the country of relying upon the current set of relevant departmental-level budgets and programs, and then comparing that Phase II IRAMM assessment with promising options across the entire national security apparatus in a set of IRAMM Phase IV assessments.

In addition to IRAMM applications for regular DoD and potentially NSC-level strategic risk management and program decisions, the process could be used in a variety of training contexts, such as in service academies and in curricula of the war colleges and the National Defense University. Encouraging young and mid-career officers to think systematically in the strategic terms that IRAMM focuses on should help to promote more joint and whole-of-government approaches to the complex national security challenges that will increasingly confront the country and our allies in the years ahead.

In Summary

IRAMM is a flexible approach to strategic risk assessment and management that may be used now by DoD and other national security organizations. It may be used alone or in conjunction with a variety of other assessments and analytic tools. But a key premise of the IRAMM approach is that the methodology is credible and transparent to the senior decision-makers who will need to make overall judgments one way or another on future forces and related security capabilities for the good of the country. This process, while not necessarily the only possibility, offers a sound set of scales and an analytic framework that have proven understandable and viable to numerous senior leaders, both civilian and military, in the tests conducted of it thus far for the U.S. Department of Defense.


Purpose

A senior national security leader in the United States, the Secretary of Defense for instance, benefits greatly from having a small group of trusted senior civilian and military advisors who comprise a strong, genuine team. Such a team will share insights and evidence with each other and with the Secretary candidly and efficiently. Having a shared language and a common assessment framework for such deliberations seems manifestly valuable. While selecting a strong set of trusted advisors and aides may in the end be the most important ingredient for effective, efficient teamwork, finding and employing a shared framework and vocabulary ought to help substantially too. Accordingly, this paper offers senior national security leaders a tested, flexible approach to building such a lexicon and assessment framework in order to facilitate and expedite top-level teamwork. The approach proposed here should be valuable to senior leaders throughout their tenure, but it may also be especially useful first during a major transition, to a new strategy, a new administration, or both.

There is no presumption here that members of a real national security team must hold identical views. The idea, rather, is that by establishing a relatively simple but important lexicon and a flexible assessment framework, a secretary of defense and other senior national security leaders may move to highly productive discussions and decision meetings with their top deputies and advisors faster and more efficiently than without these tools. This process and discussion framework can also, we believe, help forge a security team's common ground more quickly than otherwise, more efficiently revealing for senior leaders their clearly substantive (versus merely superficial, terminological) agreements and differences across a range of vital and pressing national security issues.

The particular approach offered here encourages senior national security leaders to focus explicitly on managing, mitigating and balancing strategic risk—risk to the Nation's most important interests—as the central goal of the entire security enterprise. This is not the only way to employ this process. But a focus on strategic risk seems both prudent from a national security perspective and in line with, for example, the thinking of the current Secretary of Defense, as quoted above.

Introduction

The conceptual framework and management model introduced in this paper are "tools at hand" that could help the Secretary of Defense (and the Chairman of the Joint Chiefs of Staff, for example) immediately in their efforts with senior advisors and aides to balance strategic risk, moving DoD from risk management "talk" to more risk management "action." This approach to strategic risk assessment and management has been developed and tested by the Institute for Defense Analyses (IDA) on behalf of its Department of Defense (DoD) sponsors.1

IDA developed IRAMM and tested it under the sponsorship of three OSD offices (PA&E, AT&L, Policy) and the Joint Staff, J-8, as part of a broader effort to support capabilities-based planning and decision making in DoD. This broader effort was initiated in early 2004 with the objective of "developing a cross-capabilities assessment and integrated risk management analytic framework usable for assessing how well alternative mixes and levels of major functional capabilities can provide the nation with the wherewithal to successfully conduct high priority operations of various kinds and to meet the major demands of the U.S. defense strategy." In establishing this larger objective, study sponsors were motivated by what they perceived to be a dearth of tools or frameworks for conducting strategic-level analyses and for assessing the impact of major capability trade-off decisions on national security risk.

A team of IDA researchers has employed this framework, IDA's Integrated Risk Assessment and Management Model (IRAMM), in several pilot tests, including a series of not-for-attribution interviews of more than two dozen senior civilian and military officials within DoD. Interviewees included the CJCS and most of the Combatant Commanders, Service Chiefs and Under Secretaries of Defense. This paper describes the framework and assessment methodology, offers illustrative results, and concludes with some recommendations for applying the methodology to DoD and national security planning activities.

The full IRAMM process, described below, consists of four basic phases: (I) set up parameters for a baseline strategic risk assessment (the major national security challenge areas as seen by the current Administration, as well as the baseline, currently planned forces, capabilities and policies to be assessed); (II) conduct an initial strategic risk assessment of how well the baseline program would likely do against those challenges; (III) build alternative force/capability options for mitigating and/or reallocating that baseline strategic risk; (IV) conduct strategic risk assessments of the alternatives, assessing how well those options perform relative to the baseline and each other. After these four main phases of IRAMM, results are presented to senior leaders for deliberation, refinement, and senior-level decision.

DoD, like most other large public organizations, has traditionally made most of its resource allocation decisions in organizational or mission-area stovepipes, with limited success in optimizing across those stovepipes. For many years, much public policy literature has claimed that such sub-optimization is inevitable. The approach described here will not eliminate this thorny problem. However, it offers a viable way to attack it head-on: a means for the Secretary of Defense and the Chairman of the Joint Chiefs, for example, to reduce the extent of that sub-optimization and to move toward a more holistic optimization across DoD.

1 The original IDA task order (BB-6-2414) focused on developing analytic approaches for assessing "cross-capability" trades. The original framework name, ICCARM (Integrated Cross Capability Assessment and Risk Management Framework), reflected that priority. But the construct described herein is actually broader than that name suggests, so we have re-labeled the approach as IRAMM to make explicit that wider applicability for DoD and national security decision-making.

Definition and Context of “Strategic Risk”

In the DoD context, risk is defined as "probability and severity of loss linked to hazards."2 Many different risk assessment activities, in fact, coexist in DoD, in large part because there are many different types of "losses linked to hazards" relevant to the management of the Department. These take the form of technical risk, cost risk, schedule risk, operational risk, and many others. The current National Defense Strategy identifies four overarching categories of risk for the Department:3 force management risk, having to do with building the force and maintaining its readiness; operational risk, having to do with preparedness to win wars or achieve other operational objectives in the near term; future challenges risk, relating to preparedness for tomorrow's fights; and institutional risk, relating to the management of the Department itself.

2 Joint Publication 1-02, DoD Dictionary of Military and Associated Terms, p. 462. Other relevant definitions include: "risk assessment – the identification and assessment of hazards (first two steps of risk management process)"; "risk management – the process of identifying, assessing, and controlling risks arising from operational factors and making decisions that balance risk cost with mission benefits." The American Heritage Dictionary defines risk similarly, as "the possibility of suffering harm or loss."

3 The National Defense Strategy of the United States (Department of Defense, 2008).

All of these categorizations of risk are useful to particular constituencies within the DoD enterprise. On the other hand, none of them encompasses the entire DoD enterprise under a single measure of merit; in other words, none provides an integrated, holistic basis for assessing the impact of major decisions on the Department's whole set of missions. Strictly speaking, of course, the contributions of DoD to national security are far too numerous and complex to reduce to a single or even a few metrics. But in fact, senior leaders both inside and outside DoD must and ultimately do decide among alternative choices of policy, programs, and organization based on some holistic judgment of the effects of those choices on national security. How is such a judgment to be made or described?

The premise of the work presented here is that the goal of national security and defense strategy is to mitigate overall risk to the nation, and that therefore the concept of risk to the nation is the appropriate vehicle for formalizing the overall, integrated judgment necessary for strategic decision making. In order to distinguish risk in this context from other forms of risk, we introduce the term "strategic risk," defined as: "prospective political, economic and military losses or hazards facing the United States, based on the expected likelihood and character of future events and conditions." In this context, mitigation/reduction of strategic risk can be seen as the "objective function" of the DoD enterprise. Accordingly, a formalized, quantified measurement of strategic risk can be one useful means for discriminating among alternative policy and program options. Establishing a viable, comprehensive process for senior decision makers to compare the relative merits of alternative policy and program options in mitigating risks to the nation--strategic risks--as they are making major defense program decisions has been the chief goal of IDA researchers in building IRAMM.

It's the Law: In addition to its potential analytical utility for program and policy decisions, formal strategic risk assessment is a statutory requirement for DoD. In the language establishing the Quadrennial Defense Review, Title 10 directed that "assessment of risk . . . shall be undertaken by the Secretary of Defense in consultation with the Chairman of the Joint Chiefs of Staff. That assessment shall define the nature and magnitude of the political, strategic, and military risks associated with executing the missions called for under the national defense strategy."4

Strategic risk assessment remains a relatively new undertaking in the Department. The risk framework carried forward in the current National Defense Strategy does not include a functioning formal strategic risk assessment component.5 While the Chairman of the Joint Chiefs' annual risk assessment is firmly established, its main focus is on risk to the military's ability to fully execute its specific operational plans, not on assessing and comparing how well baseline and alternative forces could each mitigate strategic risks to national security when looking ahead. At the time IDA was developing this process, a former defense official (now the new Under Secretary of Defense for Policy) summed up the role and status of risk assessment in DoD this way:6

Any major force structure or programmatic change proposed in the QDR should be accompanied by an assessment of the associated types and levels of risk. Without such a risk assessment, it is impossible to make informed judgments about how much of a given capability is enough and whether a proposed change is in the nation's interests. Even more importantly, given the looming budget crisis, the OPTEMPO strains of the force, and the substantial security challenges we face as a nation, the Department of Defense must do a better job of establishing priorities and allocating risk. Previous QDRs have not done well on this score. There has been plenty of risk management talk, but not much risk management action.

The IRAMM process described here represents an attempt to fill this continuing, important gap in DoD's analytical, planning, and decision-support tool kit.

4 United States Code, Title 10, Chapter 2, Section 118.

5 United States Government Accountability Office, "Additional Actions Needed to Enhance DoD's Risk-Based Approach for Making Resource Decisions" (Washington: GAO, November 2005).

6 Michèle Flournoy, "QDR 2005: Goals and Principles," Testimony before the Committee on Armed Services, U.S. House of Representatives, September 14, 2005.

Characteristics of the Risk Assessment Methodology

The principal methodological challenge in constructing an approach to strategic risk assessment is the creation of a single objective measure from judgments that are chiefly subjective and integrative. Any estimate of strategic risk depends not only on non-verifiable predictions of the future, but also on value systems, that is, estimates of what aspects of national security are more or less important than others. As a result, strategic risk assessments, as a matter of procedure, must be tied closely to human judgment and expertise, although these can and should be informed and refined with supporting studies and analyses that are more quantitative and objective in nature.

Accordingly, IRAMM does not employ any complex algorithms for relating the myriad factors relevant to strategic risk. Its aim, rather, is to elicit the intuition and insights that the evaluators bring to the assessment of the basic elements of risk (probabilities combined with consequences) in a transparent and straightforward but still disciplined, well-quantified way. On this basis, results from individual evaluators can be compared, contrasted and combined with a view toward identifying and building consensus or clarifying reasons for disagreement. The quality or "validity" of assessment results will depend heavily on the breadth and depth of experience as well as the intellectual objectivity of the evaluators. This is a fundamental methodological choice from which much of the rest of the structure flows. In our view, this aspect of IRAMM is both one of its main implementation challenges and one of its greatest strengths.

Implementation challenges arise in several dimensions. The first, of course, is the inherent fallibility and variability of human judgment. More specific to IRAMM's national security context, the combination of breadth and depth in perspective and subject matter expertise required to be an effective evaluator is relatively rare, and therefore limits the pool of plausible assessors to the most senior leaders in the Department. These leaders have very limited time to participate but have broad access to a rich variety of scenario-oriented war games and other relevant quantitative analyses to provide an adequate basis for doing so. This time constraint, in turn, limits the level of detail that can usually be addressed in the strategic risk assessment questions and answers.

7

There is no doubt that this approach trades off depth and rigor for breadth and flexibility

(e.g., the capability to compare and contrast risk of military operations across challenge areas of

significantly different character), and it does so purposefully. It is breadth that makes strategic

risk assessment both difficult and important, and human judgment is one of the best available

mechanisms for dealing with the complexity associated with this kind of breadth. Conversely,

the narrower the question under consideration, the less likely it is that IRAMM is the best tool

for answering the question. Many other analytic tools and tailored metrics exist that provide

greater rigor on the rich variety of analytic questions facing DoD. Ideally, strategic risk

assessors would already be conversant with a wide range of evidence and insights developed

through other analyses of the most important issues of national security and defense. In any

case, they should be explicitly provided with the best available analyses in preparation for the

IRAMM evaluations. This body of knowledge would help them in shaping their judgments of

risk. The need that IRAMM itself is meant to fill is the aggregation of disparate and dissimilar

evidence, insights and intuition into a holistic framework that generates output in the form of

estimates of strategic risk.

The IRAMM methodology begins with a structured, interactive one-on-one interview.

For the senior-level assessments IDA conducted earlier, in interviews that usually lasted between 60

and 90 minutes, evaluator comments were recorded by our interviewers with the strict

understanding that no responses would be attributed to an individual. The interview consisted of

three phases. In the first phase, the interviewer introduced the evaluator to the assessment

format, procedure, and the definitions of key terms, including definitions of risk and of a set of

strategic DoD mission areas, or “challenge areas.” In the second phase, the evaluator answered a

series of questions organized around the set of challenge areas and intended to draw out the

evaluator’s opinions, world views, and expectations about key drivers of strategic risk. The

evaluator also estimated values for a few key risk-related parameters that were used to construct

a strategic risk score for each challenge area. Finally, in the brief third phase, the evaluator

answered several follow-on questions intended to check the consistency of the risk scores

produced in the second phase and to inform the construction of alternatives in Phase III of

IRAMM.

The following sections provide overviews of the key elements of this construct: the

challenge areas and the risk parameters.


Challenge Areas

IRAMM “challenge areas” define a set of strategic mission areas for DoD. Strategic risk

is discussed and estimated separately for each challenge area, so assessment output is formed

principally in terms of comparative levels and distribution of risk across the challenge areas.

This makes selection and definition of challenge areas a crucial element of the methodology.

This taxonomy of DoD missions serves as the main lens through which IRAMM enables

examination of risk and the impact of alternative force and capability options.

Clearly, DoD’s mission set can be divided in many ways. However, not every possible

categorization scheme is equally appropriate for a risk assessment exercise. To the contrary, a

few relatively strict criteria are best applied to the selection of challenge areas. First, the

categories should be relatively few in number in order to retain a strategic focus and to keep a

short assessment manageable. Second, categories should be operationally oriented, that is, they

should be relatively easy to disaggregate into military objectives against which prospective force

performance can be assessed. Third, categories should be roughly mutually exclusive, so that

risk can be assessed in each area independently. Fourth, they should be comprehensive in the

context of the decision environment that is being supported by the risk results, that is, all

categories considered together should cover the full range of military operations that generate

risk and they should also provide a structure for evaluating risk mitigation alternatives. And

fifth, the categories should link in some way to existing force sizing guidance so that total force

structures may be assessed with some reference to requirements for simultaneous operations.

A sixth criterion is important for practical rather than methodological reasons, and

unfortunately, it may come into conflict with one or more of the methodological considerations.

This criterion is that the challenge area set should be one that the senior leadership is comfortable

with. This may involve working with previously selected official categories that are not always

ideal from a purely academic or logical perspective.

Among the most prominent frameworks recently in effect have been the “1-4-2-1” force-sizing construct (including homeland defense, deterrence, major combat operations, and lesser contingencies),7 the “mature and emerging challenges” (traditional, irregular, catastrophic, and disruptive),8 and the 2006 QDR Force Planning Construct (including homeland defense, war on terror/irregular warfare, conventional campaigns, and deterrence).9

7 National Defense Strategy (Department of Defense, 2005), pp. 16-17.

Because the assessment

described in this paper was conducted concurrently with QDR’06 deliberations, the IDA team

developed a challenge area set derived from the guidance intended to direct that QDR analysis.

Figure 1 presents those challenge areas and their definitions.

Challenge Areas and Definitions:

Major Combat Operations – Large-scale operations conducted against a nation state that possesses significant regional military capacity, with global reach in selected capabilities, and the will to employ that capability in opposition to or in a manner threatening to U.S. national interests. (e.g., MCO-1, MCO-2, MCO-3)

Stability Operations – Military operations, in concert with the other elements of national power and multinational partners, to maintain or re-establish order and promote stability. (e.g., Iraq, Afghanistan, Bosnia, Haiti)

Homeland Defense – The protection of U.S. sovereignty, territory, domestic population, and critical defense infrastructure against external threats and aggression. (e.g., ballistic missile attack, cruise missile attack, piloted aircraft attack)

Counter-Terrorism – Continuous or contingency-based offensive operations undertaken to prevent, deter, and respond to terrorism. (e.g., strikes and raids, cooperation and training of foreign security forces, intelligence collection)

Combating WMD – Operations to interdict, destroy, secure, or render safe weapons of mass destruction held or in danger of being held by actors hostile to U.S. interests. (e.g., counter-proliferation strike, WMD interdiction, tracking and securing loose nuclear weapons/materiel)

Shaping Strategic Choices – Operations and posture intended to dissuade major regional powers from challenging U.S. or allies’ freedom of action. (e.g., basing choices, mil-mil cooperation, alliance building, investment choices)

Figure 1: Challenge Area Definitions for Pilot Test

8 Ibid., pp. 2-3.
9 Quadrennial Defense Review Report (2006), p. 38.


A few comments of clarification on the challenge area definitions used here are

warranted. First, as implied by the examples offered in Figure 1 for Stability Operations, this

challenge area includes both post-conflict “Phase IV” operations, and other types of stability

operations. Homeland Defense is defined here narrowly, in accordance with the then extant DoD Strategy for Homeland Defense and Civil Support.10 Though DoD plays a role in areas of Homeland Defense other than protection from attacks through the air and other strategic approaches, those areas were the only ones in which DoD was the lead federal agency. As a result, risk in the DoD Homeland Defense “mission space” was considered to be limited to these types of attacks. Risk from other types of attacks on the homeland was typically captured in the Combating WMD and Counter-Terrorism challenge areas.

In all cases, evaluators were encouraged to make reference to specific scenarios in their discussion of risk, including but not limited to those in current operations, operational plans, and official mid- and longer-range planning scenarios. Interviews were conducted at the SECRET level in order to accommodate discussion of particularly important classified planning factors, results of previous analyses, or intelligence that evaluators wanted to bring to bear on the questions at hand.

Risk Parameters

In reviewing the basic dictionary definition of risk (“the possibility of suffering harm or

loss”), two independent factors are evident. The first factor, “possibility,” is fairly easily

parameterized as a subjective probability or likelihood from zero to 100 percent. The second,

“harm or loss,” is not so readily quantifiable and will require further explication. But risk, both

as a concept and as a major metric in the IRAMM framework, is formed through the

combination of these two factors referred to here as “likelihood” and “consequences.”

Before describing the particular methods employed for quantifying these risk parameters,

it is important to note a caveat. Great numerical precision is an impossible goal in describing

phenomena as abstract, subjective, and speculative as those that drive strategic risk. To be sure,

IRAMM’s risk scoring system is very useful in rank ordering the challenge areas on the basis of

risk as well as in establishing magnitudes of differences in assessments of risk levels among

10 See DoD Strategy for Homeland Defense and Civil Support (Washington, DC: Department of Defense, June 2005), pp. 5-6.


different evaluators and across different mission areas (e.g., challenge area 1 carries three times

the risk of challenge area 2). More fundamentally, however, the scoring serves as a heuristic

device for disciplining and illuminating evaluators’ reasoning about the issues in question.

Likelihood

IRAMM’s likelihood parameter is simply a subjective probability, expressed as a

percentage, or as “chances out of ten,” of one or more events of a particular kind occurring over

the timeframe in question. A typical question used for soliciting this estimate would be: “Over

the decade 2010-2020, what are the chances that the U.S. military will be called on to conduct

one or more major combat operations?” While many evaluators are hesitant to apply likelihoods

to inherently unpredictable events, most also recognize the centrality of this factor in weighing

strategic options. Estimation of this parameter also tends to prompt sharper consideration on the

part of the evaluator of likelihood’s role in determining the importance of various threats or

opportunities. Richard Neustadt and Ernest May addressed this particular utility of subjective

probabilities in their classic work on the use of history in decision making, Thinking in Time, as

follows: “We know of no better way to force clarification of meanings while exposing hidden

differences.”11

For some challenge areas, however, likelihood recedes to the background of the

discussion because of the near certainty of having to conduct certain types of operations. For

example, there was general agreement among the earlier evaluators that counterterrorism

operations would be a persistent requirement for the U.S. military. In these cases, likelihood is

considered to be 100% and the risk discussion devolves essentially to an estimate of the expected

11 Richard Neustadt and Ernest May, Thinking in Time: Uses of History by Decision-Makers (New York: Free Press, 1986), p. 152. Their longer discussion of organizing senior decision makers’ discussions is instructive, and directly in line with the spirit of IRAMM. They argue, “The need is for tests prompting questions, for sharp, straightforward mechanisms the decision-makers and their aides might readily recall and use to dig into their own and each others’ presumptions. And they need tests that get at basics somewhat by indirection, not by frontal inquiry: not ‘what is your inferred causation, General?’ Above all, not, ‘what are your values, Mr. Secretary?’ Professionally trained Americans are shy about confronting one another, to say nothing of their bosses – or themselves – in such terms on the job. . . If someone says ‘a fair chance’. . . ask, ‘if you were a betting man or woman, what odds would you put on that?’ If others are present, ask the same of each, and of yourself, too. Then probe the differences: why? This is tantamount to seeking and then arguing assumptions underlying different numbers placed on a subjective probability assessment. We know of no better way to force clarification of meanings while exposing hidden differences. . . Once differing odds have been quoted, the question ‘why?’ can follow any number of tracks. Argument may pit common sense against common sense or analogy against analogy. What is important is that the expert’s basis for linking ‘if’ with ‘then’ gets exposed to the hearing of other experts before the lay official has to say yes or no.” pp. 151-152.


consequences of the nation having to rely upon the forces, capabilities and policies being

assessed in those challenge areas. This distinction will be addressed in more detail in the

“Combining Parameters” section below.

Consequences

The second risk parameter presents a knottier problem than the first. Much of what

makes strategic risk assessment challenging is the need to integrate and weigh judgments on a

very wide range of factors and values that constitute national security. IRAMM does not aim to

simplify this integrative cognitive process. Evaluators must judge for themselves, that is, apply

their own value systems in determining the relative importance of such losses as casualties,

economic damage, alliance relationships, confidence in government, etc. As evaluators come

together to discuss results at the end of Phase II, we have observed in the pilot exercises thus far

that consensus begins to emerge around a common value system.

What IRAMM aims to do is to standardize the terms and magnitudes associated with

evaluators’ judgments on these issues so that those judgments can be compared, both among

different challenge areas and among different evaluators. Take, for example, a comparison of

consequences between two hypothetical events: a terrorist attack on Washington, DC with a

weaponized biological agent; and a bloody, drawn-out major combat operation in the Middle

East.

Most evaluators would probably agree that the consequences of both scenarios would be

quite bad. But how bad is bad? In order to assess risk, we need more information about the

relative magnitudes of their consequences. Are they roughly equivalent? Or is one twice as bad

as the other?

The mechanism IRAMM uses to enable these types of apples-to-oranges comparisons is a

100-point consequences scale that is used for all challenge areas. The scale is defined by two

complementary means. First, “calibration points” for the scale are specified in the form of

sample scenarios that define the highest (90-100) and lowest (0-10) ranges of the scale. Those

calibration points are shown in Figure 2. Second, evaluators are provided with a detailed set of

generic negative consequences grouped into economic, military, and political categories. The

generic consequences are also arrayed vertically in groups corresponding to levels of severity.

This “Consequence Scale Aid” is shown in Figure 3.


Figure 2: Consequence Scale Calibration Points


The aid arrays generic Economic, Military, and Political consequences at five levels of severity, from most to least severe:

Level 1 (Most Severe)
• Economic: Extreme, semi-permanent structural and economic costs. Capital flows massively degraded and/or dollar collapses, jeopardizing US economic foundation. Alliances and economic agreements terminated.
• Military: Considerable loss of overall military force capability; recovery after 4 years. Covering worldwide mission areas adequately is impossible. Deterrence practically non-existent in key areas.
• Political: Potential international condemnation due to high civilian casualties. The US seen as unreliable by multiple allies or coalition partners, and new regional security orders emerge, leaving the US isolated, internally divided, and polarized. Loss of credibility and reputation as guarantor of new global security environment. Other nations create their own nuclear deterrents and aggressively compete with US in peer adversarial role.

Level 2
• Economic: Severe economic costs resulting from trade disruptions, operational factors, or property damage. Capital flows seriously degraded and/or substantial devaluation of dollar. Global economy stalled. Recovery eventually.
• Military: Considerable loss of overall military force capability; recovery within 4 years. Covering worldwide mission areas difficult. The conflict outcome may not be worth the cost. Deterrence weak in key area. Critical US vulnerability revealed to all from military surprise.
• Political: International criticism due to high civilian casualties. US strategic influence severely degraded. US credibility lost in one or more key regions in the world. Some coalitions fail; relationships with some key allies lost.

Level 3
• Economic: Serious economic costs due to trade disruptions, operational factors, or property damage. Capital flows degraded and/or value of dollar weakens. Economic disruptions possible, but no recession follows. Reconstruction of key economic capabilities could take months.
• Military: Moderate loss of overall military force capability; recovery within 8 months. Worldwide mission areas still covered. Overall mission success not questioned. Deterrence weaker, but still strong.
• Political: High civilian casualties. US weakened as major global political broker. International cooperation with US put at risk. US credibility weakened with one particular or a limited number of adversaries. US partners in region of conflict doubt US commitment and begin to forge separate security arrangements.

Level 4
• Economic: Some economic costs due to trade disruptions, operational factors, or property damage. Confidence quickly restored domestically and internationally.
• Military: Some loss of military force capability overall. Worldwide mission areas covered adequately.
• Political: Low or predicted civilian casualties. Some political opposition and deeper suspicion of US intentions in previously friendly countries. Some unwillingness of former allies to cooperate with US on other international security goals.

Level 5 (Least Severe)
• Economic: Negligible.
• Military: No major loss of military force capability overall. Worldwide mission areas covered adequately.
• Political: Low or predicted civilian casualties. Some minor political opposition and suspicion of US intentions in previously friendly countries.

Figure 3: Consequences Scale Aid

The consequences in Figure 3 do not correspond directly to any particular level on the

100-point scale, but rather serve two related purposes. First, they prompt evaluators to consider

the full range of economic, military and political factors that contribute to “strategic risk.” And

second, together they provide a common standard by which evaluators can begin to reason about

comparisons between dissimilar kinds of challenge areas and negative consequences.

There is also a third element of the methodology that provides refinement to the scale:

evaluators’ scores themselves. Each consequence (and risk) score that the evaluator provides

creates another calibration point for their successive scores in other challenge areas. Evaluators

are allowed to adjust their scores throughout the exercise in order to ensure that the scores reflect


their cumulative judgments as closely as possible, both in relation to each other, and in relation

to the given calibration points on the 100-point scale. Critically important here is that evaluators

are asked to confirm (to the interviewer) that the risk ratios for each pair of challenge areas

implied by their various scores are in fact valid representations of their own judgments on the

relative magnitudes of risk in each of the challenge areas. This feature is an essential,

fundamental difference between IRAMM and other risk methodologies that use only rank-

ordered, ordinal results (e.g., challenge area 1 has greater risk than challenge area 2); it permits

IRAMM evaluators in Phase IV (the comparison of alternatives) to make much more coherent,

intellectually credible assessments of how well alternatives mitigate strategic risk than would

comparisons using weaker, ordinal scales.
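To make this ratio check concrete, here is a minimal sketch (the challenge areas and score values below are hypothetical, not drawn from the interviews) of the pairwise risk ratios implied by a set of scores, which the evaluator is asked to confirm:

```python
from itertools import combinations

# Hypothetical risk scores for three challenge areas on the 100-point scale.
scores = {"MCO": 26, "Counter-Terrorism": 41, "Combating WMD": 49}

# Each pair of scores implies a risk ratio the evaluator must endorse,
# e.g. "Combating WMD carries about 1.9 times the risk of MCO."
for a, b in combinations(scores, 2):
    print(f"{a} vs {b}: implied risk ratio {scores[a] / scores[b]:.2f}")
```

Confirming each of these ratios is what elevates the scores from an ordinal ranking to a ratio scale, which the multiplication in the next section depends on.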

Combining Parameters

One of the greatest conceptual challenges of comparative risk assessment is the

combination of the two parameters of likelihood and consequences. In conversational use, the

question “what is the risk?” is commonly conceived of as meaning either “what are the odds?” or

“how bad would it get?” These conceptualizations of risk are not necessarily wrong, but they are

incomplete. A full measure of risk must account for both “what are the odds?” and “how bad

would it get?” Due to qualitative differences among the challenge areas used in the senior leader

interviews, IRAMM uses two different (but analogous) methods for combining these two

parameters into a single measure, or risk score.

The first method reflects a traditional actuarial view of risk and involves simply the

multiplication of likelihood and consequences estimates. This method is the same one that

would be used by an insurance company, for example, to estimate the risk posed to property by a

natural disaster. In such a case risk is synonymous with “expected loss,” which is equal to the

probability of a disaster times the likely property loss in the event a disaster does occur. In order

to apply this approach, it is essential that the “loss” metric, in this case the consequences score,

be measured on a scale with meaningful ratio properties. In other words, in order for

multiplication to be applied to the parameter without any distortion of information, a score of,

say, 40 must actually have twice the value of a score of 20. This is the reason why, as noted

above, evaluators are asked to confirm that ratios among their risk scores in fact reflect their

judgments on relative risk magnitudes. Given the ratio properties of the consequences scale,


then, this method is relatively straightforward to apply to discrete military events such as major

wars or individual attacks. For the senior leader interviews conducted to date, this method was

applied for the Major Combat Operations, Stability Operations, and Homeland Defense

challenge areas.
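As a minimal sketch (not the paper’s implementation; the function name and example inputs are illustrative), the actuarial method is a straight product of the two parameters:

```python
def risk_score(likelihood: float, consequences: float) -> float:
    """Actuarial risk, i.e. expected loss: subjective probability of the
    event (0.0-1.0) times its consequences score on the 0-100 scale."""
    if not 0.0 <= likelihood <= 1.0:
        raise ValueError("likelihood must be a probability in [0, 1]")
    if not 0.0 <= consequences <= 100.0:
        raise ValueError("consequences must be on the 0-100 scale")
    return likelihood * consequences

# Hypothetical evaluator inputs: a 50% chance of one or more MCOs over the
# decade, with consequences scored at 52 on the 100-point scale.
print(risk_score(0.5, 52.0))   # → 26.0
# The "level of hazard" method (below) is the special case likelihood = 100%.
print(risk_score(1.0, 41.0))   # → 41.0
```

Note that the multiplication is only meaningful because, as the text stresses, the consequences scale has ratio properties: a score of 40 must actually represent twice the loss of a score of 20.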

The second method draws on a somewhat different though analogous definition of risk.

This method equates risk with a “level of hazard.” In these cases, risk accrues not to discrete

events, but rather to persistent conditions or threats. In effect, likelihood is considered to be

100% for these areas, and the risk estimation becomes equivalent to an estimate of the expected

negative consequences associated with relying upon the force being assessed to address the

particular challenge areas. For the senior leader interviews conducted to date, this method was

applied for the Counterterrorism, Combating WMD, and Shaping Strategic Choices challenge

areas. In administering the exercise, the IDA team placed these three challenge areas after the

other challenge areas sequentially. Accordingly, the risk scores given for the first three

challenge areas provide solid reference or “anchors” for the latter challenge area scores.12

Finally, as previously noted, evaluators are allowed, indeed encouraged, to adjust their

risk scores at any time throughout the evaluation in order to maintain the integrity of the ratios

and rankings implied by their scores.

The resulting set of scores for each evaluator can then be visually displayed by

connecting each challenge area score with a curve, as shown in Figure 4. Each evaluator’s curve

constitutes his or her “risk profile.” The benefit of using this visualization technique is that the

risk profiles from all evaluators can be plotted together on one graph, allowing evaluators to see

quickly, in a group session bringing evaluators together, how their own risk profiles compare

with those of others. Also, central tendencies and variances of the group can be quickly

visualized, as can trends for different subgroups based on demographics or commonly held

beliefs about risk.

12 See Dan Ariely, Predictably Irrational (Harper Collins, 2008), on some important nuances in solid anchoring techniques.


Figure 4: Risk Assessment Output Format: “Risk Profile”
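The group-level summaries such a display supports (a mean risk curve and the middle 50% of scores per challenge area) can be sketched with the standard library; the profile data below are invented for illustration:

```python
from statistics import mean, quantiles

# Hypothetical risk scores (0-100) from four evaluators, one list per challenge area.
profiles = {
    "MCO":     [20, 30, 25, 29],
    "StabOps": [25, 35, 22, 26],
    "HLD":     [10, 18, 15, 21],
    "CT":      [35, 45, 40, 44],
    "C-WMD":   [45, 55, 48, 48],
    "Shaping": [25, 33, 28, 30],
}

for area, scores in profiles.items():
    # Quartile cut points of the group's scores; [q1, q3] is the middle 50% band.
    q1, _median, q3 = quantiles(scores, n=4)
    print(f"{area}: mean={mean(scores):.1f}, middle 50% in [{q1:.1f}, {q3:.1f}]")
```

Plotting each evaluator’s list as a line over the challenge areas, with the mean curve and the [q1, q3] band overlaid, reproduces the “risk profile” display described above.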

More generally, assessment results can be reported and examined through at least three

different lenses, each with somewhat different characteristics. First, standard descriptive

statistics of interview evaluators’ risk scores provide insight into data set properties such as

central tendencies and spread.13

A second lens uses comparisons of rank order statistics. In principle, the information

contained in these rank order statistics can readily be derived from the IRAMM risk scores

themselves. However, because of potential differences in the value systems that various

evaluators may bring to their Phase II assessments, and also because of a certain irreducible

abstraction in the topics being treated, we cannot be certain that two identical scores from two

different evaluators do, in fact, represent identical levels of risk. The use of the calibrated

consequence scales significantly mitigates this problem, but rank order statistics allow us to

summarize the data in terms wholly independent of comparison of absolute score values across

evaluators. The premise here is that two evaluators who both ranked, for example, Combating

WMD as the highest risk area agree on the relative risk of this area compared to others, even if

one gave the area a score of 30 and the other gave it a score of 60.
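A small sketch of this rank-order lens (the evaluator names and scores are invented): two evaluators with very different absolute scores can still share the same ordinal judgment:

```python
def rank_order(scores: dict) -> list:
    """Return challenge areas sorted from highest to lowest risk score."""
    return sorted(scores, key=scores.get, reverse=True)

# Two hypothetical evaluators: same ranking, very different absolute scores.
eval_a = {"Combating WMD": 30, "Counter-Terrorism": 25, "MCO": 12}
eval_b = {"Combating WMD": 60, "Counter-Terrorism": 41, "MCO": 20}

print(rank_order(eval_a))  # → ['Combating WMD', 'Counter-Terrorism', 'MCO']
# True: identical ordinal judgment despite scores differing by a factor of two.
print(rank_order(eval_a) == rank_order(eval_b))  # → True
```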

13 We do not discuss the statistical significance of differences among mean risk scores. Because the scores are subjective constructions specific to the context of this exercise, they do not represent samples of any “true” population of risk data.


The third lens is a purely qualitative one. Stripped of any methodological device, these

are the exercise data in raw form. These verbatim and paraphrased comments together constitute

DoD’s senior leaders’ world views, opinions, values, and expectations for the future as they

relate to defense strategy and force planning. Viewing the assessment results through this lens

reveals a rich landscape of strategic judgment and enables identification of patterns of

agreement, disagreement, and uncertainty.

Additional Risk Information

The creation of the risk profile shown above, that is, the scores together with all of the related

commentary and supporting rationale, is the primary objective for the initial strategic risk

assessment interview within the IRAMM process. Beyond the profile, the IDA team collected

three additional sets of risk-related information at the end of each senior-level interview.

• The first of these was a rating of expected risk trends for each challenge area over the decade 2010-2020. Specifically, evaluators were asked if they believed that risk would be constant, rising, rising strongly, declining, or declining strongly, based on the then extant FYDP.

• Second, evaluators were asked to rate the adequacy of currently programmed U.S. forces and capabilities to execute the mission objectives associated with each challenge area. For this assessment, evaluators used the five-category scale shown in Figure 5. “Failure” was assessed using well-specified goals and objectives for the force, derived from multiple, approved DoD resources (e.g., Defense Planning Scenarios).

• Finally, evaluators were asked for their judgments regarding desirable changes in U.S. military capabilities, both investments that would be likely to reduce strategic risk and disinvestments that might be made with minimal increase in strategic risk.

Figure 5: Force Adequacy Scale

1. Adequate / Ample – The force will almost certainly not fail*
2. Marginally Adequate – The force will probably not fail
3. Inadequate – Chances are about even that the force will fail
4. Very Inadequate – The force will probably fail
5. Severely Inadequate – The force will almost certainly fail


The combination of these data points, along with the strategic risk estimates and

supporting rationales, provides a robust description of how senior leaders view future challenges

and the ability of DoD to address them with a given (in this case baseline) force and related

capabilities.

Notional Phase II Results

Phase II baseline assessments result in a set of individual risk profiles, one for each

evaluator. Figure 6 provides notional profiles of this sort, derived from an IRAMM test.

[Figure: notional individual risk profiles plotted across the six challenge areas (Combating WMD, Counter-Terrorism, Shaping Strategic Choices, Stability Operations, MCOs, and HLD (DoD mission space)), scored from 0 to 100, together with a mean risk curve, a sample risk ranking, and yellow rectangles depicting the range of the middle 50% of risk scores for each area. The assessment produces extensive qualitative and quantitative data highlighting agreements, disagreements, and worldviews.]

Figure 6: Notional Strategic Risk Assessments with IRAMM

Evaluators’ individual strategic risk “profiles” are shown in grey, along with a mean risk

profile for all evaluators (in purple). Depicted in yellow bars here as well are the middle 50% of

the group’s scores, one for each challenge area. The box at the top right summarizes the notional

risk rankings of the baseline future force by the group -- across all six challenge areas.

Essentially, this represents a notional senior-level baseline strategic risk assessment of the

programmed future force at the time. In this illustration, evaluators assessed that with the given

force, the Nation would face significant risk regarding countering WMD and countering

terrorism, relative to its preparedness to deal with high intensity combat challenges. More


specifically, the mean strategic risk evaluations of this group, by challenge area, are as follows:

MCOs, 26; Stability Operations, 27; Homeland Defense, 16; Counter-Terrorism, 41;

Countering WMD, 49; and Shaping Strategic Choices, 29. The total mean strategic risk score

is the sum of these individual means, or 188.
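The group summary is simply a per-area mean followed by a sum across areas; a minimal sketch checking the arithmetic with the mean scores reported above:

```python
# Mean strategic risk scores by challenge area, as reported in the text.
mean_risk = {
    "MCOs": 26,
    "Stability Operations": 27,
    "Homeland Defense": 16,
    "Counter-Terrorism": 41,
    "Countering WMD": 49,
    "Shaping Strategic Choices": 29,
}

total = sum(mean_risk.values())
print(total)  # → 188, the total mean strategic risk score
```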

In Phase II, the IDA team also compiled a rich set of views from the senior evaluators

that represented their world views and judgments underlying their quantitative assessments. One

example of how such worldviews can be expressed is shown in the following figure.

Greater Risk Viewpoint:
• Proliferation of weapons of mass destruction to terrorists and rogue states is a virtual certainty.
• Deterrence will not be effective against non-state actors and WMD is likely to be employed. We’re in real trouble here.
• A successful attack using WMD could radically change the structure of society, akin to the effect of the Black Death on Europe in the 14th century.
• There is no doubt that terrorists such as UBL will use WMD on us if they can get their hands on it.
• Consequences of attack would be severe, in terms of both casualties and psychology.

Lesser Risk Viewpoint:
• Our policies, as well as the policies of others, are making it more difficult for adversaries to acquire, produce, or deliver WMD.
• Effective employment of WMD is difficult. Non-state actors would be plagued with problems concerning manufacture and delivery means.
• Were a low-yield nuclear weapon detonated in the U.S., there would be no military consequences and the political consequences would be positive. Such an attack tends to rally the country.
• Attacks would most likely occur in Europe, because as of now, they are less focused on defending against WMD.
• The global economy will be very resilient to single attacks.

Figure 7: Representative Judgments regarding Strategic Risks from Weapons of Mass Destruction (Those seeing greater vs. those seeing lesser risks)

The cells in the two columns of Figure 7 – one column for those viewing WMD as a large

risk, one column for those seeing it as a lesser risk – record actual quotes, non-attributable, from

various evaluators. Taken together, the judgments in a given column succinctly frame a

representative, composite world view (among these evaluators) for this challenge area. In effect,

they provide qualitative supporting rationale for the alternative risk estimates for this area as well


as serve as a basis for group discussion in the consensus-building sessions that follow the initial

one-on-one risk interviews. For the pilot test with senior leaders, the IDA team collected and

grouped these supporting rationale quotations in a way that caused a lively discussion among

leaders when world views differed. Discussions at this level seem essential to knowledge sharing

and consensus building as well as to providing supporting rationale for the final results of the full

IRAMM process.

However, it is the complementary nature of the quantitative risk estimates combined with the supporting rationale for those estimates that most distinguishes the IRAMM. The supporting rationale provides the basis for a substantive (and, as we experienced, lively) discussion among senior leaders on the causes of risk. It also serves to inform and enlighten the discussion with facts, analytical results, and logical reasoning on key elements of the risk landscape. Of course, these discussions are continuously ongoing in various forums and venues inside the Pentagon. The added dimension that IRAMM brings is the potential to use the quantitative risk estimates to gauge whether opinions and beliefs are changing as a result and, if so, how, to what degree, and why.14 These discussions are very important to consensus building and team building among participants with very different backgrounds, experiences, and in many cases different value systems. IRAMM facilitates the development of a common value system. More important, however, is that once the IRAMM process has been put in place, it provides an essential foundation for estimating the change in risk (good or bad) that derives from the assessment of alternative courses of action to mitigate risk.

Phases III and IV of IRAMM

In IRAMM, the baseline set of strategic risk assessments described above is structured through the first two phases of the process. Arguably much more important for strong DoD decision making than a baseline assessment by itself is a means of understanding, in commensurate terms, how changes in U.S. forces and capabilities, or in the future security environment, might affect that baseline strategic risk assessment. The rest of the IRAMM framework is structured to assemble and organize comparisons of how well the baseline forces and capabilities fare against alternatives ("force capability options") or in hypothetical alternative futures that might alter evaluators' assumptions and judgments about risk.

14 A side benefit is that these kinds of discussions using the IRAMM framework also point out the need for additional facts, figures, and analyses that can shed light on unresolved issues in the deliberations.

Phase III of IRAMM is thus devoted to eliciting and assembling promising force capability options for mitigating strategic risk relative to the baseline force capability option. Recommendations for such options may come from the evaluators themselves, their staffs, independent analyses, or special teams established to develop option building blocks. An option development team would normally be established to assemble such options from various building blocks of "puts," "takes," and any other important adjustments. Insofar as possible, these full-up force capability options would be designed as integrated wholes, not just buckets of piecemeal, disjoint items.

Overall, starting with strategic guidance for mitigating strategic risks, especially in some challenge areas more than others, or at the expense of others, a building block approach can be used in Phase III to craft a small number of promising alternatives within broad fiscal guidance for evaluation in Phase IV, using strategic risk mitigation as a major measure of merit. Such an approach was devised and used successfully by the IRAMM study team in a pilot test of the full IRAMM process.

Phase IV of IRAMM centers on assessing these alternatives and comparing them against the baseline, as well as against each other, in terms of how well they each mitigate strategic risk to the Nation. The results of this structured set of comparisons would then be compiled and presented for deliberation, refinement, and inclusion in decision-making processes by the most senior leaders in DoD and the White House. This conceptually straightforward process is depicted in Figure 8.

Figure 8: IRAMM

With a set of promising alternatives, a basic Phase IV question would then be: Can any alternatives mitigate overall strategic risk (SR) better than the baseline, i.e., result in a total SR of less than 188? If so, could such an overall result occur without inducing excessive strategic risk in any particular challenge area?

If evaluators were to judge, on balance, that one or more trades in the defense program (more investment in one set of capabilities and less investment in others, for instance) would be likely to produce higher rates of strategic risk mitigation return than the baseline investment pattern, such a decision could indeed be made by the Secretary. The net effect of shifting investments from a relatively low rate of (strategic risk mitigation) return to a relatively high rate of return could lower the overall mean strategic risk score of an option relative to the baseline.
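The bookkeeping behind this comparison is simple arithmetic and can be sketched in a few lines of code. The sketch below is purely illustrative: the challenge-area names and per-area scores are invented stand-ins (chosen so the baseline happens to sum to the 188 mentioned above), and the per-area risk ceiling is a hypothetical parameter, not part of IRAMM.

```python
# Hypothetical mean strategic risk scores by challenge area.
# These numbers are illustrative stand-ins, not IRAMM results;
# they are chosen so the baseline totals 188, echoing the text.
baseline = {"WMD": 55, "Irregular": 45, "Conventional": 40, "Disruptive": 48}

# A candidate force capability option: more investment against WMD and
# irregular challenges, paid for with somewhat higher conventional risk.
option_a = {"WMD": 48, "Irregular": 40, "Conventional": 46, "Disruptive": 48}

def total_sr(scores):
    """Overall strategic risk: the sum of the per-area mean scores."""
    return sum(scores.values())

def mitigates_better(option, baseline, per_area_cap=60):
    """An option improves on the baseline if it lowers total SR without
    pushing any single challenge area above an agreed risk ceiling
    (per_area_cap is a hypothetical threshold for illustration)."""
    lower_total = total_sr(option) < total_sr(baseline)
    no_excess = all(score <= per_area_cap for score in option.values())
    return lower_total and no_excess

print(total_sr(baseline))                  # 188 in this illustration
print(total_sr(option_a))                  # 182
print(mitigates_better(option_a, baseline))  # True
```

The point of the sketch is only that both Phase IV tests (lower overall SR, and no excessive risk in any one area) are mechanical once the per-area scores have been elicited; the hard work lies in the elicitation, not the arithmetic.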

In the pilot tests of IRAMM, evaluators did identify to the IDA team a number of investments that they believed would have strong strategic risk mitigation effects. They also identified some disinvestments they believed would have comparatively small strategic risk aggravation effects. From the perspective of safeguarding U.S. national interests and security, systematically identifying and assessing such promising high-risk-reduction and low-risk-aggravation trades would be a major positive outcome of using the full IRAMM framework to help manage strategic risk. Exploring such options systematically, with strong strategic risk assessment tools and a transparent evaluation process for senior decision makers, is at the heart of the framework proposed here.

Using IRAMM in the DoD Planning, Programming, Budgeting, and Execution (PPBE) Process

To exploit this process in building and selecting force and capability options for the DoD future years defense program and beyond, DoD could choose to adopt the strategic risk definition and scales described above, or prudent variations on them. For challenge areas, several possibilities exist. One option would center analysis on the four major scenario types that the Secretary has identified as important in the Terms of Reference for the current QDR study plan. Another could be the taxonomy developed for DoD's recent Roles and Missions Study. In selecting a set of challenge areas, though, the objective should be a set that is mission-oriented, simple, and as mutually exclusive and comprehensive as possible.

Regarding time frames for a baseline and alternatives assessment, it would be sensible to establish both a near-term and a mid- or longer-term version of any set of challenge areas that is selected. While such a structure might seem to complicate matters, it allows evaluators to explicitly assess how well baseline and alternative force capability options balance strategic risks across time periods, addressing head-on Secretary Gates' concerns about DoD "over-insuring" for some hypothetical future challenges while taking undue risks regarding important current challenges.

The evaluators for such an application of IRAMM in the PPBE process should involve as many members of DoD's "Senior Level Review Group" as can be scheduled. This panel would thus include the Secretary, the Deputy Secretary, the Chairman and Vice Chairman of the Joint Chiefs of Staff, the individual chiefs of the military services, the senior civilian officials of the Department at the Service Secretary and Under Secretary level, and the combatant commanders.

As for establishing a baseline force for this assessment, one option would be the new (now FY2010) DoD budget proposal and its currently associated FYDP. There may be other plausible candidates as well.

The process of administering the IRAMM interview itself merits a few comments. The methodology is meant to be simple and transparent, making the conduct of the exercise relatively straightforward. Nevertheless, a few procedural considerations are paramount:

• Not-for-attribution interviews with individual evaluators have proven the most effective format thus far for IRAMM. (A Secret-level questionnaire approach is also possible.)

• The interviewers must be fully conversant with the exercise methodology, definitions, and challenge area boundaries and must be comfortable adapting those elements to individual evaluators. Because of large variations among evaluator backgrounds, personalities, cognitive styles, etc., the line of questioning and rhythm of the interview can vary a great deal. The interview is highly interactive, not mechanical, and requires the interviewer to probe and question the evaluators’ reasoning.

• For the assessment format that IDA used in the 2005 interviews, an hour was the absolute minimum amount of time required to complete the Phase II baseline exercise. The first 15-20 minutes of the sessions usually consisted of introducing the framework and the definitions of the exercise. Many sessions would have benefited from even more than the 90 minutes usually available. Two hours would be a good target.

• At least one note taker must be present for each exercise, in addition to the interviewer. Most of the data generated by the assessment are the evaluator’s comments and explanations of rationales for his or her scores. The interviewer cannot effectively record these judgments and conduct the interview simultaneously.

Clearly, the IRAMM process requires a non-trivial time commitment from the senior leadership in the U.S. national security community, a group that has little time to spare except for very high pay-off activities. While it may be highly advisable for the senior leaders themselves to participate directly in initial, individual scoring sessions as depicted above in Phase II of the IRAMM process, other paths are viable. It would also be feasible for DoD's senior leaders to start with the Phase II results of a set of mid-level advisors' evaluations of strategic risks, discuss those results in a group session, do their own individual evaluations using modern groupware such as that used today in the Joint Staff (J8) offices (SAGD), compare their results, discuss underlying rationales for agreements and disagreements, and then move on to Phase III. Overall, IRAMM is a flexible process, and there are many ways to adapt it to senior leaders' schedules and particular team-building and decision-making needs.

Some of the complex problems that DoD needs to address, including negotiations with adversaries as well as with partners, and leading-edge research into concepts of operation for dealing with Improvised Explosive Devices or with deeply buried targets, cannot be informed very well by an approach of the sort described here. But this approach can work in complementary fashion with much of that leading-edge work. "Analytic Agenda" efforts to build costed force alternatives are an example of such work. Simulated trade-off assessments in the context of contingency plans and Defense Planning Scenarios are another. Promising force multiplier investments, as identified through such research, should regularly be offered up as strategic risk mitigators in Force Capability Options for evaluation in Phase IV of the IRAMM process. Overall, evidence and insights gathered through such studies can provide helpful background and option building blocks to senior evaluators as they conduct their baseline and alternatives assessments.

Strategic Risk Management—Toward a New Era of Defense and National Security Decision Making

The idea of conducting a serious strategic risk assessment may seem sensible enough to most defense and national security decision-makers. Still, at the end of a cycle to build a defense program, it would be possible for DoD report writers simply to compose a few well-crafted paragraphs depicting how the force and capability choices, while promising in terms of strategic risk, will certainly need to be monitored closely in the future, and assuring readers that DoD leaders plan to do so. Unfortunately, there is ample precedent for such a limited, end-of-the-study approach. Far better, however, would be the prompt adoption of a viable strategic risk assessment and management process, one that is transparent to senior decision-makers and that features scales robust enough to permit intellectually coherent comparisons among overall baseline and alternative force capability options. Such an initiative could help trigger a new, more integrated era of strategic planning, programming, and policy making in DoD. IDA has developed what we believe are the core elements of such a process, one which is now a tool "at hand," ready for use by the Secretary of Defense and the CJCS in this PPBE cycle as well as in a number of other important national security contexts, including potentially the Chairman's Risk Assessment.15

At the National Security Council level, a version of this IRAMM process could be structured as well for regular baseline assessments by NSC members of the strategic risks to the country of relying upon the current set of relevant departmental-level budgets and programs, and then comparing that Phase II IRAMM assessment with promising options across the entire national security apparatus in a set of IRAMM Phase IV assessments.16

15 The CJCS conducts a regular risk assessment (CRA) that could potentially benefit from an approach to strategic risk of the sort, and with the types of scales, described herein.

In addition to IRAMM applications for regular DoD and potentially NSC-level strategic risk management and program decisions, the process could be used in a variety of training contexts, such as in the service academies and in the curricula of the war colleges and the National Defense University. Encouraging young and mid-career officers to think systematically in the strategic terms that IRAMM focuses on should help to promote more joint and whole-of-government approaches to the complex national security challenges that will increasingly confront the country and our allies in the years ahead.

IRAMM is a flexible approach to strategic risk assessment and management that may be used now by DoD and other national security organizations. It may be used alone or in conjunction with a variety of other assessments and analytic tools. But a key premise of the IRAMM approach is that the methodology is credible and transparent to the senior decision-makers who will need to make overall judgments one way or another on future forces and related security capabilities for the good of the country. This framework, while not necessarily the only possibility, offers a sound set of scales and a process that have proven understandable and viable to numerous senior leaders, both civilian and military, in the tests conducted thus far for the U.S. Department of Defense.

16 The IDA study team has been in touch with members of the Project on National Security Reform, especially Dr. Jim Locher and Dr. Sheila Ronis, to develop such an approach for use at the NSC level as soon as it is feasible.

Report Documentation Page (Standard Form 298, Rev. 8/98; Form Approved OMB No. 0704-0188)

1. Report Date: June 2009
2. Report Type: Final
4. Title: IDA's Integrated Risk Assessment and Management Model
5a. Contract No.: DASW01-04-C-0003
5e. Task No.: C6251
6. Author: James S. Thomason
7. Performing Organization: Institute for Defense Analyses, 4850 Mark Center Drive, Alexandria, VA 22311-1882
8. Performing Organization Report No.: IDA Paper P-4470
9. Sponsoring/Monitoring Agency: Institute for Defense Analyses, 4850 Mark Center Drive, Alexandria, VA 22311-1882
12. Distribution/Availability Statement: Approved for public release; distribution unlimited.

14. Abstract: DoD, like most other large public organizations, has traditionally made most of its resource allocation decisions in organizational or mission-area stovepipes, with limited success in optimizing across those stovepipes. For many years, much public policy literature has claimed that such sub-optimization is inevitable. IDA's Integrated Risk Assessment and Management Model (IRAMM) approach will not eliminate this thorny problem. However, it offers a viable way to attack it head-on: a means for the Secretary of Defense and the Chairman of the Joint Chiefs of Staff, for example, to reduce the extent of that sub-optimization and to move toward a more holistic optimization across the enterprise. Specifically, the four-phase IRAMM process can help the Secretary and the Chairman immediately in their efforts with senior advisors and aides to balance DoD's forces and capabilities by using strategic risk mitigation as the chief measure of merit. IRAMM is a well-tested approach and includes strong scales that permit comparisons and trades across major missions, time periods, and program alternatives. IDA researchers developed IRAMM (for OSD and the JS) and have employed it in several pilot tests, including a series of not-for-attribution interviews of more than two dozen senior civilian and military officials within DoD. Evaluators included the CJCS and most of the Combatant Commanders, Service Chiefs, and Under Secretaries of Defense. Overall, IRAMM is a flexible approach to strategic risk assessment and management that may be used by DoD and other national security organizations, including the NSC. It may be used alone or in conjunction with a variety of other assessments and analytic tools. A key feature of the approach is that the methodology is simple and transparent to the senior decision-makers who will need to make overall judgments one way or another on future forces and related security capabilities for the good of the country.

15. Subject Terms: strategic risk, risk management, holistic assessments, enterprise-wide assessments, balance the force, balancing capabilities, strategic trades, planning, QDR, PPBE, capability-based planning
16. Security Classification: Report U; Abstract U; This Page U
17. Limitation of Abstract: UU
18. No. of Pages: 30

The Institute for Defense Analyses is a non-profit corporation that administers three federally funded research and development centers to provide objective analyses of national security issues, particularly those requiring scientific and technical expertise, and conduct related research on other national challenges.