Civilian Response Corps Force Review: The Application of Multi-Criteria Decision Analysis to Prioritize Skills Required for Future Diplomatic Missions
IGOR LINKOV(a)*, HEATHER ROSOFF(a), L. JAMES VALVERDE(a), MATTHEW E. BATES(a), BENJAMIN TRUMP(a), DANIEL FRIEDMAN(b), JESSIE EVANS(b) and JEFFREY KEISLER(c)

(a) US Army Engineer Research and Development Center, 696 Virginia Rd, Concord, MA 01742, USA
(b) US Department of State, Washington, DC, USA
(c) Management Science and Information Systems, University of Massachusetts Boston, Boston, MA 02125, USA
ABSTRACT
Created by the State Department’s Office of the Coordinator for Reconstruction and Stabilization, the Civilian Response Corps (CRC) contains a diverse pool of qualified and ready-to-deploy civilian professionals that support conflict prevention and response efforts in countries or regions that are at risk of, are currently in, or are transitioning from conflict or civil strife. As such, it is vital to optimize the CRC’s skill groupings to maximize adaptability and responsiveness to highly uncertain and trying political conditions and crises across the globe. The nature of the CRC value proposition is such that determining which skill set compositions deliver the greatest benefit requires a multi-faceted perspective that looks at a number of attributes and factors, both tangible and intangible. To meet these needs, an organizational decision-making approach utilizing multi-criteria decision analysis (MCDA) was applied to ensure that skill-grouping allocations were determined in a logical and robust manner. The MCDA analysis allowed for a wide range of worldviews and perspectives, drawn from select members of academia and partner agencies of the CRC who provided their expert opinions on the expected demand for skill groupings commonly identified as most necessary in a civilian ‘surge’ capacity. These skills were assessed with reference to a values hierarchy of representative country scenarios, missions and sub-missions identified by the Office of the Coordinator for Reconstruction and Stabilization. Of particular interest was the use of the MCDA method to prioritize CRC skill groupings and to help inform the Department of State’s understanding of the ‘ideal’ proportion and types of civilian skills for inclusion in the CRC. Copyright © 2012 John Wiley & Sons, Ltd.
KEY WORDS: MCDA; prioritization; political conflicts
1. INTRODUCTION
Failing or post-conflict states are of great concern to the international community, in part because of their high susceptibility to terrorist activity, violence, trafficking and humanitarian catastrophes. Even though military forces are likely to be heavily engaged in post-conflict situations, there is a demonstrated need for civilian forces to provide reconstruction and stabilization assistance to these struggling countries (Krasner and Pascual, 2005). In 2008, the US Department of State (DOS) commissioned the Office of the Coordinator for Reconstruction and Stabilization (S/CRS) to establish the US Civilian Response Corps (CRC), with the mindset of preparing a group of civilian federal employees specially trained and equipped to rapidly deploy overseas. The CRC is partitioned into two bodies, Active (CRC-A) and Standby (CRC-S). The active component (the primary subject of this study) includes officers that are full-time government employees trained and prepared to deploy within 48 hours for reconstruction, stabilization and conflict prevention efforts overseas. CRC-S officers are full-time employees of their departments who have specialized expertise that proves useful in reconstruction and stabilization operations and are available to deploy within 30 days. The CRC draws its members from a partnership of nine departments and agencies throughout the federal government, with the goal of ensuring that the complexities involved in conflict prevention and response are adequately addressed in ways that leverage all available resources of the US government for its missions. CRC-A is currently made up of 135 members and CRC-S 1062 members.
*Correspondence to: Igor Linkov, US Army Engineer Research and Development Center, 696 Virginia Rd, Concord, MA 01742. E-mail: [email protected]
Copyright © 2012 John Wiley & Sons, Ltd. Received 03 February 2011; Accepted 23 December 2011.

JOURNAL OF MULTI-CRITERIA DECISION ANALYSIS, J. Multi-Crit. Decis. Anal. (2012). Published online in Wiley Online Library (wileyonlinelibrary.com). DOI: 10.1002/mcda.1468
The first 2 years of CRC operations have shown that the demand for CRC support is broad—both in terms of the type and proportion of support services needed. For example, the CRC has deployed members to Afghanistan to provide national-level planning and integration expertise to the US Embassy and the International Security Assistance Force in Kabul. In Liberia, CRC efforts were directed more locally. Inter-agency team analysis led to the development of a conflict prevention programme focused on resolving land disputes and expanding police training in two key counties. The CRC also has provided humanitarian support, for example, deploying inter-agency teams within 72 hours of the Haiti earthquake to assist in disaster response and to begin planning for the transition to long-term recovery and reconstruction.
Nevertheless, operations from 2008 to 2010 have signalled that there may be opportunities to increase the effectiveness and efficiency of the CRC by examining workforce issues. At present, CRC-A is made up of 135 full-time members, with plans to grow to 264 within the next year. On the basis of the demand for CRC-A since its inception, it is unclear whether corps members possess the best possible skill groupings required to respond to crisis situations. The challenge of matching skill groupings to specific needs is non-trivial, given the inherent uncertainty in future missions and in the specificity of skill groupings provided by partner agencies. In addition, several political, social and humanitarian agencies deploy to struggling countries in times of crisis. Rather than duplicate the efforts of their counterparts, the skill groupings that CRC-A must maintain are intended to build upon the efforts of these other agencies. The value proposition, then, for CRC must be construed in holistic terms, with an understanding of the broader context within which it sits and operates.
The Office of the Coordinator for Reconstruction and Stabilization recognizes that the expertise of CRC-A is largely distributed, somewhat informally defined, and not fully integrated. To formally and defensibly integrate CRC needs and priorities, S/CRS initiated a review of the skill composition and future requirements of the CRC, referred to as the Force Review. This task is challenging for several reasons. Essentially, the goal of a flexible and rapid CRC-A response team is to prepare for chaotic situations that cannot be reliably predicted. Preparing CRC-A members for actual operations requires detailed judgments about different problems, combined with an understanding of how such a force can help reduce the instability and damage caused by those situations. Furthermore, although the corps is officially under the aegis of DOS, the CRC is an inter-agency effort that requires the support of its partner agencies that contribute personnel yet also compete for resources and influence. Finally, although the Force Review’s results provide insights about CRC-A skill-grouping prioritization and inform/justify recruitment strategies and hiring decisions, these decisions are ultimately made at a higher level of authority. At this higher level, skill-grouping prioritization is considered in the larger context of selecting CRC-A members from available human resources with specific resumes/qualifications, as well as with respect to human resource allocation sensitivities across partner agencies (e.g. decisions about the number of CRC-A members from each partner agency).
Multi-criteria methods have proven useful in many personnel selection contexts (e.g. Jessop, 2004). To prioritize the skill groupings of the CRC-A, a multi-criteria decision analysis (MCDA; see Belton and Stewart, 2001) approach was used. A linear multi-attribute value model (e.g. Dyer and Larsen, 1985) was developed to ensure that skill-grouping allocations logically incorporated tradeoffs among the wide range of considerations at the multiple levels. Key to framing the effort was defining the importance of skill groupings in terms of CRC-A’s preparedness to respond to crisis situations throughout the globe. CRC-A members’ level of preparedness was captured using a range from minimal to very high, rather than in terms that were objectively measurable but either not directly predictable or controllable (e.g. lives saved) or not directly connected to preparedness (e.g. individual credentials). The model output provided the CRC with an understanding of the existing challenges and future needs of CRC-A personnel.
2. OVERALL APPROACH
To connect the staffing decisions for CRC-A to the CRC’s organizational goals, a hierarchical value model was developed in collaboration with inter-agency stakeholders (Figure 1). This allowed separation of judgments according to relevance and expertise, while simultaneously limiting what could potentially be an overwhelming number of required judgments. The values hierarchy (Keeney, 1992) consists of four levels and captures the relationship between skill groupings and events where CRC preparedness is relevant. The iterative development of the hierarchical structure was informed by analysis of historical data about previous CRC engagements along with guiding documents summarizing related government, diplomatic and
military planning efforts (in part extending the approach to prioritizing ways of filling mission gaps used in Linkov et al. (2009)) and by interactive discussions with experts and stakeholders. Given the research emphasis on CRC-A, the time frame of skill deployment was assumed to be immediate and lasting up to a year of activity. Figure 1 presents the values hierarchy structure.
The hierarchical elements important to characterizing the CRC’s ability to effectively and efficiently respond to crisis situations include the following:
• At the top level, 11 representative country scenarios. The scenarios are general illustrative descriptions of situations, as opposed to listings of specific countries or scenarios, for which CRC-A might be prepared to respond (e.g. providing specialized capabilities in a country with a growing risk of extremist violence.1 S/CRS senior staff developed the scenarios on the basis of their perceived US political importance and likelihood of conflict over the next 5 years).
• At level two, eight missions CRC-A should be prepared to engage in each country scenario (e.g. Peacekeeping). The missions are defined in terms of the corps’ organizational objectives when deployed overseas.
• At level three, given the broad scope of each mission space, the components of a mission are divided into ten sub-missions contributing to the preparedness for each mission, such as Conflict Prevention and Defusing Intergroup Tensions.
• Lastly, the bottom level includes the 33 skill groupings deemed requisite to carrying out a mission and sub-mission, as required to equip CRC-A to respond to a representative country scenario. The 33 skill groupings were constructed from a very long list of particular skills that might be found on individual resumes. For example, rather than separately listing Senior Ministry Advisor (legal), Democracy Specialist (judicial) and Attorney, only the more general skill-grouping Rule of Law—Systemic is provided. At this level, an analysis of resumes and of prior activities helped to develop more manageable skill groupings.
Selected representative country scenarios, missions, sub-missions and skill groupings were reviewed, revised and approved by the S/CRS leadership. In addition, members of S/CRS developed definitions for all hierarchy elements, factoring in feedback received from agency partners.
Using this hierarchy, importance scores were derived for each skill grouping. Interviewees and individual experts associated with academic institutions and partner agencies provided inputs. These ranged from academic-driven strategic judgments at the higher levels
Figure 1. Structure of the values hierarchy. [Figure: CRC-A Preparedness to Respond to Crisis Situations at the top, decomposed through the Representative Country Scenario, Mission and Sub-Mission levels down to the Skill Grouping level (Skill 1, Skill 2, …, Skill 30).]
to agency/partner-driven tactical judgments at the lower levels. An effort was made to separate perceptions about strategic considerations (e.g. ‘peacekeeping is very important’) from perceptions about tactical considerations (e.g. ‘economic administration is critical to peacekeeping’). At each level, multiple individuals provided judgments.
To obtain judgments and then perform calculations, specific methods were selected on the basis of sources of expertise at different levels within the values hierarchy and the issues that had to be integrated. For example, at the representative country scenario level, judgments about the importance of preparedness for each representative country scenario (with respect to CRC-A’s overall preparedness) were required from senior DOS individuals. At the next two levels, academic experts provided operational judgments on the value added of CRC-A for a number of missions, each relative to a specific country scenario; moreover, for each mission, additional judgments were required about a number of sub-missions. Finally, agency partners’ tactical judgments about the importance of skill groupings were required for each sub-mission.
To minimize the burden of providing judgments, rank-based weighting was used to elicit the importance of a CRC-A response to each hierarchy element with respect to CRC-A’s value added for each element at the level above. For example, the CRC-A response to the sub-mission Conflict Prevention and Defusing Intergroup Tensions was elicited with consideration for the ranking of the Peacekeeping mission. Interviewees were also encouraged to provide verbal rationales for these judgments to allow for consistency checks and peer review. Using ordered ranking was particularly useful for the academic expert elicitations, as these individuals had limited availability (thereby precluding the use of more costly/time-consuming methodologies). Although rank-based weights are not as precise as explicit swing weights, this method is consistent with the goal of providing general guidance without the need for very precise weights, as might be provided by a single individual using this analysis as the sole input to a decision. The exception to this approach was at the skill-grouping level of the hierarchy, where, with a large number of possibilities and a more natural scoring scale, direct importance scores (rather than relative rankings) were assigned to those skill groupings identified as critical, important or useful to a sub-mission.
Because different rank-weighting formulas have different properties (e.g. producing relatively low weights for middle-ranked elements), different formulas were used for aggregating judgments and calculating weights at different levels. The formula selection process was driven by how many interviewees provided inputs about hierarchy elements, along with how many elements typically received non-zero importance rankings. For simplicity, we either used rank-sum or rank-reciprocal weights. Similarly, modelling choices were made about whether to aggregate by averaging ranks or weights, as well as the characteristics of the data at each level. In each case, numerous calculations were performed and their results compared, but a primary set of results was reported using specific calculations at each level.
To support the skill-grouping prioritization decision process, a variety of outputs were generated (rather than a single ranking). Because of the S/CRS’s intricate organizational structure, several substantial sensitivity analyses were prepared, including sensitivity of importance scores to the following: weight calculation methods at each level of the hierarchy, assumptions about normalization and treatment of missing data, and the source of judgments provided at each level. Additionally, comparisons and statistics were developed to demonstrate how inputs correlated with each other, such as inputs from different interviewees, inputs from interviewees at different levels and overall results at different levels. Preliminary results were presented and discussed at a large meeting with the CRC’s partner agencies. Interactions at that meeting (and for a defined period afterward) were integral to the development of the final report. These interactions ranged from questions about methodology to clarifications about inputs and provision of additional judgments. The effect of the overall MCDA approach was both to provide insights to S/CRS about organizational perspectives on CRC-A skill-grouping prioritization and to demonstrate to partner agencies that the MCDA analysis reflected their best understanding of the decision situation and possessed a sound theoretical basis. The MCDA analysis prepared S/CRS to move forward with its review and planning with the support of stakeholders whose cooperation is a fundamental prerequisite for successful implementation.
3. VALUE JUDGEMENT ELICITATION
To collect the rankings for the variables at each level of the hierarchy, elicitation sessions were conducted with select academic experts and each of the CRC partner agencies. The following is an overview of the elicitation sessions’ objectives and procedures.
3.1. Stakeholder elicitation
Elicitation sessions were conducted with seven academic representatives and multiple representatives
of the nine CRC partner agencies (Table I). Prior to the elicitation sessions, each interviewee was provided with a read-ahead packet detailing the background and context of the S/CRS CRC Force Review, including a breakdown of the values hierarchy and detailed definitions of the representative country scenarios, missions, sub-missions and skill groupings. Each elicitation session ranged from 1 to 2 hours in length, with ample time for a brief review of the project background, the elicitation process and discussion of any interviewee comments or questions. Discussion of the interviewees’ rationale for the provided rankings was specifically encouraged. Throughout the process, each academic and partner agency’s perspective was respected in its own right, and every perspective was weighted equally when all inputs were combined to yield a joint prioritization.
3.2. Academic elicitation
Experts unaffiliated with the CRC partner agencies were identified from governmental, non-governmental and academic communities by S/CRS, coupled with input from partner agencies. The selection of academic experts was based on the individual’s knowledge about the CRC and its deployment history, as well as the corps’ larger role within the US restructuring and stabilization efforts. Two rounds of interviews were conducted. In the first, six academic experts were interviewed in person, and an additional expert was interviewed over the phone. All second-round interviews were conducted over the phone.
Academic experts provided rankings for the top three levels of the hierarchy—representative country scenarios, missions and sub-missions. To rank the elements within each level of the hierarchy, academic experts used a ranking scale where ‘1’ represented areas where CRC-A could provide the most value added, ‘2’ represented the second most important variable reflecting CRC-A value added and so forth. Where the value added of one or more variables could not be determined, equal rankings were used. For variables academic experts deemed outside their area of expertise, no response was provided. Rankings were elicited on the entire matrix of 11 representative country scenarios, indexed by the eight missions and the ten sub-missions.
Complete rankings were required for representative country scenarios, although only those missions and sub-missions deemed relevant were selected. All of the interviewed academic experts provided complete rankings for the representative country scenarios, missions and sub-missions, with the exception of two interviewees who provided complete representative country scenario rankings but only partial mission rankings and no sub-mission rankings. Following the elicitation sessions, the collected category rankings were recalculated into normalized weights for analysis purposes.
3.3. Partner agency elicitation
Each of the nine partner agencies was instructed to identify three representatives to participate in the elicitation sessions. The criterion used for the selection of representatives was left to the discretion of the individual agencies. All nine agency partners were interviewed in person, and a second-round interview was conducted with DOS and the Department of Energy over the phone.
Partner agencies identified the skill groupings needed for CRC-A to provide the most value added with respect to a particular sub-mission, with specific consideration for both the type and proportion of skill groupings that might be included in a CRC-A response. In addition to receiving the read-ahead packet prior to the elicitation sessions, partner agency interviewees were provided with a skill-grouping workbook to complete. This workbook was developed as a tool to help partner agencies start thinking about the relevance of CRC-A skill groupings as they relate to specified sub-missions. During the elicitation sessions, the skill-grouping workbook was used to facilitate the elicitation of the skill groupings’ rankings for each of the sub-missions.
Partner agencies rated the skill groupings as having low, medium or high relevance to the completion of the specified sub-mission. A high rating indicated that the skill grouping is critical to the successful execution of a sub-mission response. A medium rating indicated that a skill grouping is necessary for a sub-mission’s execution, but absence does not assume that some
Table I. List of academics’ affiliations and Civilian Response Corps partner agencies

Academic affiliations                          | Partner agencies
National Defense University                    | Department of Agriculture
National Security Council                      | Department of Commerce
Office of the Coordinator for Counterterrorism | Department of Energy
RAND National Security Research Division       | Department of Homeland Security
Royal United Services Institute                | Department of Justice
The Stimson Center                             | Department of State
US Institute for Peace                         | Department of Transportation
                                               | Health and Human Services
                                               | USAID

USAID, United States Agency for International Development.
value added of the sub-mission cannot be realized, and a low rating indicated that the skill grouping assists with sub-mission execution, yet the skill grouping is not required. In this context, ‘medium’ is twice as important as ‘low’; ‘high’ is twice as important as ‘medium’, and ‘high’ is the upper bound of absolutely critical. Rankings were elicited on the entire matrix of ten sub-missions by 33 skill groupings.
With the exception of two agencies, representatives from all partner agencies completed the skill-grouping evaluations for all ten sub-missions. Representatives from one agency elected to comment only on hierarchical elements closely related to its own contributions and field, whereas representatives from another agency commented on all skill groupings except for two (one which they did not perceive to be within the purview of CRC-A and another which they believed to be a persistent country-team effort and therefore not needing a separate skill grouping). Similar to the academic preference elicitations, the collected category rankings were recalculated into normalized weights for analysis purposes.
4. MULTI-CRITERIA DECISION ANALYSIS IMPLEMENTATION
Elicited rankings for the representative country scenarios, missions, sub-missions and skill groupings were used as inputs to a linear multi-attribute value model.2 The model served as a mathematical framework for converting the elicited data into normalized weights, combining the data according to their hierarchical importance, and producing the final results from combining and ranking the elicited data. The MCDA findings are presented in Section 5, whereas the specific methodologies used are described below.
4.1. Calculation of raw average category weights
The calculation of raw average category weights is the process by which raw interview responses at each level of the values hierarchy are converted into mathematical weights. This process was completed at each of the four hierarchical levels (representative country scenarios, missions, sub-missions and skill groupings). To be useful for the MCDA, the raw interviewee rankings (e.g. of country scenarios with a 1st, 2nd or 3rd degree of importance) were converted to normalized percentage weights (e.g. to country scenarios accounting for 10, 5 or 3% of the total importance).
There are several robust and reasonable methods to reach this end (see, e.g. Stillwell et al., 1981). In this analysis, a combined approach of simple rank-sum (where $r_i$ is the rank of the $i$th element out of $n$, and its weight is $w_i = (n - r_i + 1) / \sum_{j=1}^{n} r_j$) and rank-reciprocal weighting (where the $i$th element has weight $w_i = (1/r_i) / \sum_{j=1}^{n} (1/r_j)$) was taken. The results have been compared with other approaches and shown to be insensitive to the weighting method used (see Section 5, Figure 6).
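The two base formulas above can be sketched in a few lines of Python. This is an illustrative sketch only (the function names and the example ranking are ours, not the paper's); it assumes a complete ranking 1..n with no ties, so the rank-sum denominator $\sum_j r_j$ equals $(n^2 + n)/2$.

```python
# Rank-based weights for a single interviewee's complete ranking.
# ranks[i] is the rank r_i of element i, with 1 = most important, out of n.

def rank_sum_weights(ranks):
    """w_i = (n - r_i + 1) / ((n^2 + n) / 2); weights sum to 1."""
    n = len(ranks)
    return [(n - r + 1) / ((n * n + n) / 2) for r in ranks]

def rank_reciprocal_weights(ranks):
    """w_i = (1 / r_i) / sum_j (1 / r_j); weights sum to 1."""
    total = sum(1.0 / r for r in ranks)
    return [(1.0 / r) / total for r in ranks]

ranks = [1, 2, 3, 4]                  # four elements ranked 1st..4th
ws = rank_sum_weights(ranks)          # ~[0.4, 0.3, 0.2, 0.1]
wr = rank_reciprocal_weights(ranks)   # ~[0.48, 0.24, 0.16, 0.12]
```

Note how rank-reciprocal weighting concentrates more weight on the top-ranked element (0.48 vs. 0.4) and less on middle ranks, which is exactly the property the formula-selection discussion below trades off.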
At the representative country scenario level, the raw importance ranking scores were converted into percent weights by using four different methods. If $i$ denotes the $i$th country scenario out of $n$ and if $k$ denotes the $k$th interviewee out of $m$, these methods are formally specified as follows:
(1) The average of the interviewees’ rank-sum weights,

$$w_{ik} = \frac{n - r_{ik} + 1}{(n^2 + n)/2}, \qquad w_i = \sum_k w_{ik} \, / \, m.$$

(2) The rank-sum weights of the average interviewee data,

$$w_i = \frac{n - \left( \sum_k r_{ik} / m \right) + 1}{(n^2 + n)/2}.$$

(3) The average of the interviewees’ rank-reciprocal weights,

$$w_i = \sum_k \frac{1/r_{ik}}{\sum_{j=1}^{n} 1/r_{jk}} \, \Big/ \, m.$$

(4) The rank-reciprocal weights of the average interviewee data,

$$w_i = \frac{1 \big/ \left( \sum_k r_{ik} / m \right)}{\sum_{j=1}^{n} 1/\bar{r}_j}, \qquad \text{where } \bar{r}_j = \sum_k r_{jk} / m.$$
The average of the weights calculated by the four methods was also calculated and (to avoid the artefacts of any one weighting method) was used for the final presentation of results.
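The four methods and their final average can be computed compactly with numpy. This is a sketch under assumed toy data (two interviewees, three scenarios; the matrix `R` and variable names are ours): `R[k, i]` holds interviewee k's rank for scenario i, with 1 = most important and a complete, tie-free ranking per interviewee.

```python
import numpy as np

# Toy rank matrix: m = 2 interviewees (rows), n = 3 country scenarios (columns).
R = np.array([[1.0, 2.0, 3.0],
              [2.0, 1.0, 3.0]])
m, n = R.shape
denom = (n * n + n) / 2                      # rank-sum normalizer (n^2 + n) / 2

# (1) Average of the interviewees' rank-sum weights.
w1 = ((n - R + 1) / denom).mean(axis=0)

# (2) Rank-sum weights of the average interviewee ranks.
w2 = (n - R.mean(axis=0) + 1) / denom

# (3) Average of the interviewees' rank-reciprocal weights.
rr = (1 / R) / (1 / R).sum(axis=1, keepdims=True)
w3 = rr.mean(axis=0)

# (4) Rank-reciprocal weights of the average interviewee ranks.
inv_avg = 1 / R.mean(axis=0)
w4 = inv_avg / inv_avg.sum()

# Final presentation: average of the four methods' weights.
w_final = (w1 + w2 + w3 + w4) / 4
```

Each of the five vectors sums to 1, so averaging the methods still yields a proper weight vector while damping any single method's artefacts.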
For the mission to representative country scenario-level importance rankings, the raw ranking scores were converted to percentage weights by using rank-sum weighting. Here, the missions where CRC-A was perceived to provide the most value added to address a potential representative country scenario were assigned the highest rank, the next highest rank to the second most value-added mission, and so on down the list until the least important mission received
a ranking of ‘1’. Missions noted as not relevant to responding to a potential representative country scenario were assigned a value of 0, and missions for which rankings were not provided were labelled N/A. These values were then summed, and the entry for each mission was divided by the sum to produce the rank-sum weight. In the end, the most important mission received the greatest weight, and the least important mission received the smallest weight. Rank-sum weights were used at this level because they are less prone to extreme values on higher-ranked categories than rank-reciprocal weights, and for this level of the hierarchy, that was desirable because the matrices were fairly sparse. The weights used in the primary results were calculated as the average of interviewees’ mission rank-sum weights across representative country scenarios.
At the sub-mission to mission level, the raw rankings were converted to percent weights by using rank-reciprocal weighting. This method calls for assigning the value of ‘1’ to the most important sub-mission for a mission, ‘2’ to the second most important sub-mission and so forth. Sub-missions identified as not important to a mission’s completion were assigned a value of 0. For the weighting conversion, the reciprocals of these values were normalized. Similar to the mission level, the most important sub-mission receives the greatest weight, and the least important sub-mission receives the smallest weight. To avoid losing distinction when data are averaged across interviewees, rank-reciprocal weights were used at this level (rather than rank-sum weights). The weights for the results were taken as the average of interviewees’ sub-mission rank-reciprocal weights across missions.
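The 0 / N/A conventions at these two levels can be sketched as follows. This is an illustrative implementation with hypothetical data (the function names are ours): at the mission level, ranks ascend with importance (least important relevant mission = 1) and are normalized directly, as the text describes; at the sub-mission level, 1 = most important and reciprocals are normalized. In both cases 0 marks "not relevant/important" and NaN marks a missing (N/A) judgment.

```python
import numpy as np

def rank_sum_from_ascending(ranks):
    """Mission level: ascending ranks (0 = not relevant, NaN = N/A) -> weights.
    Entries are divided by the sum of the provided values, per the text."""
    vals = np.where(np.isnan(ranks), 0.0, ranks)
    return vals / vals.sum()

def rank_reciprocal_from_descending(ranks):
    """Sub-mission level: 1 = most important; 0 = not important; NaN = N/A.
    Reciprocals of the positive ranks are normalized to sum to 1."""
    with np.errstate(divide="ignore"):
        inv = np.where((ranks > 0) & ~np.isnan(ranks), 1.0 / ranks, 0.0)
    return inv / inv.sum()

# Hypothetical judgments for six missions / six sub-missions:
mission_w = rank_sum_from_ascending(
    np.array([4.0, 3.0, 0.0, np.nan, 2.0, 1.0]))       # ~[0.4, 0.3, 0, 0, 0.2, 0.1]
submission_w = rank_reciprocal_from_descending(
    np.array([1.0, 2.0, 0.0, np.nan, 3.0, 4.0]))       # ~[0.48, 0.24, 0, 0, 0.16, 0.12]
```

Treating 0 and N/A entries as zero weight keeps each interviewee's weight vector normalized over only the elements they judged relevant, which matches the sparse matrices described above.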
For the skill-grouping level importance rankings, importance scores for each skill grouping with respect to each sub-mission were provided directly by the interviewees. As previously noted, partner agencies rated the skill groupings as having low, medium or high relevance to the completion of the specified sub-mission. A high-relevance skill grouping received a score of 4, medium a score of 2, low a score of 1 and N/A a score of 0. Interviewees were instructed to be mindful of the ratios of these scores and the fact that they would translate into final weighted scores. Here, the final importance weights were taken as the average of interviewees’ skill-grouping rank-sum weights across sub-missions.
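The categorical-to-score step can be sketched directly from the 4/2/1/0 mapping above. The ratings array below is hypothetical (two interviewees, two sub-missions, three skills, with invented labels); scores are normalized within each sub-mission and then averaged over interviewees and sub-missions, one plausible reading of the averaging described in the text.

```python
import numpy as np

# Map the categorical ratings to the scores given in the text.
SCORE = {"high": 4.0, "medium": 2.0, "low": 1.0, "n/a": 0.0}

# ratings[k][s] = interviewee k's ratings of three skills for sub-mission s.
ratings = [
    [["high", "medium", "low"], ["medium", "medium", "n/a"]],
    [["high", "low", "low"],    ["high", "medium", "low"]],
]

scores = np.array([[[SCORE[r] for r in row] for row in k] for k in ratings])
norm = scores / scores.sum(axis=2, keepdims=True)   # normalize per sub-mission
skill_weights = norm.mean(axis=(0, 1))              # average over k and s
```

Because 'medium' is exactly twice 'low' and 'high' twice 'medium', the 4/2/1 scores preserve the intended importance ratios before normalization.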
4.2. Calculation of net category weights
The calculation of net category weights is the process by which the raw average category weights calculated in Section 4.1 are combined to produce rankings sensitive to both interviewee responses and importance rankings across hierarchical levels.
The representative country scenarios are at the top level in the values hierarchy. For data at this level, the average weights are sufficient. For the mission, sub-mission and skill-grouping levels, the net category importance rankings were calculated by matrix multiplication. At the mission level, the matrix of eight average mission importance weights across the 11 country scenarios is multiplied by the array of 11 representative country scenario weights, yielding an array of the net category weights for the eight missions. The calculated output is the overall net category mission weights. This process of matrix multiplication was repeated at the sub-mission level by using the overall net category mission weights, and then again at the skill-grouping level, incorporating the net category sub-mission weights. Once the weights are computed and combined throughout the hierarchy, a ranked list of skill groupings from most needed to least needed is produced on the basis of the list of representative country scenarios.
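The roll-up just described is a chain of matrix-vector products. The sketch below uses toy dimensions (3 scenarios, 2 missions, 2 sub-missions, 4 skill groupings rather than the paper's 11/8/10/33) and invented weights; the key invariant is that each matrix column is a normalized weight vector for one parent element, so normalization is preserved at every level.

```python
import numpy as np

scenario_w = np.array([0.5, 0.3, 0.2])       # level 1: scenario weights (sum to 1)

M = np.array([[0.6, 0.7, 0.4],               # mission x scenario weight matrix;
              [0.4, 0.3, 0.6]])              # each column sums to 1
mission_w = M @ scenario_w                   # net mission weights

S = np.array([[0.5, 0.8],                    # sub-mission x mission
              [0.5, 0.2]])
submission_w = S @ mission_w                 # net sub-mission weights

K = np.array([[0.4, 0.1],                    # skill grouping x sub-mission
              [0.3, 0.2],
              [0.2, 0.3],
              [0.1, 0.4]])
skill_w = K @ submission_w                   # net skill-grouping weights
ranking = np.argsort(-skill_w)               # indices from most to least needed
```

Because every column of M, S and K sums to 1 and scenario_w sums to 1, each product is again a proper weight vector; the final `ranking` is the prioritized skill-grouping list.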
5. RESULTS
We now summarize the key results of the MCDA analysis. The prioritized list of skill-grouping net category importance rankings for CRC-A is presented first, followed by several sensitivity analyses conducted to assess the robustness of the MCDA findings. As we discuss below, the sensitivity analyses greatly enriched the CRC’s understanding of the areas of agreement and disagreement between inter-agency stakeholders.
5.1. Skill-grouping prioritization
For each of the 33 skill groupings, a net category importance ranking was generated from the MCDA model analysis. The findings indicate that the two most important skill groupings for CRC-A members to possess are (1) Conflict Specific Planning/Ops/Management/Assessment/Engagement/Coordination and (2) Strategic Communications, ranked with 5.17 and 4.17% of the total importance, respectively. Across the 33 skill groupings, the net category importance ranking scores are close for many items. Table II includes a complete ranked list of skill groupings from most needed to least needed on the basis of interview data from partner agencies.
CIVILIAN RESPONSE CORPS FORCE REVIEW
Copyright © 2012 John Wiley & Sons, Ltd. J. Multi-Crit. Decis. Anal. (2012). DOI: 10.1002/mcda
5.2. Sensitivity analysis

To explore the sensitivity of the MCDA analysis to variations in the hierarchy's major quantifiable variables, researchers compared the following: (1) individual partner agency importance rankings with the overall (total across all agency partners) net category skill-grouping rankings; (2) individual partner agencies' importance rankings for coupled systemic and technical skill groupings; (3) the influence of academic experts' inputs on partner agencies' importance rankings in the model output; and (4) the impacts of different mathematical weighting methods on skill-grouping prioritization.
5.2.1. Individual partner agency rankings versus combined average results. The level of agreement across agency partner importance rankings was assessed through a comparison of each partner agency's individual raw average skill-grouping importance rankings with the overall (total across all agency partners) net category skill-grouping rankings. A scatter plot for each agency partner was developed to visually demonstrate and contrast these relationships, allowing for the assessment of different perspectives on how skill groupings should be prioritized for the CRC-A to provide the most value added to overseas missions.
For example, Figure 2(a, b) depicts scatter plots for two representative partner agencies. The overall net category skill-grouping importance rankings are represented by the diagonal blue line (recall that a rank of 1 indicates the most needed skill grouping), and the respective agency partners' raw average importance rankings are represented by the red data points. Partner agency 1 representatives identified Conflict Specific Planning/Ops/Management/Assessment/Engagement/Coordination and Strategic Communications as, on average, two of the most needed skill groupings, which is consistent with the overall net category skill-grouping importance rankings (Figure 2a).
Conversely, partner agency 2 representatives ranked Corrections, Criminal Justice—Systemic and Policing—Systemic as, on average, the skill groupings of greatest importance. Partner agency 2 interviewees stressed the importance of CRC-A members taking on the role of 'searchers': individuals who go into a country with the intention of assessing and acquiring basic managerial and situational information, and then reach back for assistance either from home or from contracting agencies already on the ground.
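The degree of agreement visualized in these scatter plots can also be quantified. A hypothetical sketch (not from the paper) using Spearman rank correlation between an agency's skill-grouping ranks and the overall net category ranks:

```python
import numpy as np

def spearman_rho(rank_a, rank_b):
    """Spearman correlation from two tie-free rankings of the same items."""
    d = np.asarray(rank_a) - np.asarray(rank_b)
    n = len(d)
    return 1 - 6 * np.sum(d**2) / (n * (n**2 - 1))

overall = np.arange(1, 34)   # overall net category ranks 1..33
agency1 = overall            # an agency that fully agrees with the overall list
agency2 = overall[::-1]      # an agency that reverses the priorities

print(spearman_rho(overall, agency1))  # → 1.0 (perfect agreement)
print(spearman_rho(overall, agency2))  # → -1.0 (perfect disagreement)
```

A rho near 1 would correspond to red points hugging the diagonal line in Figure 2; values well below 1 correspond to the scattered pattern seen for partner agency 2.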
5.2.2. Systemic versus technical skill-grouping rankings. Initial conversations with the CRC about CRC-A member expertise often fell into two predominant skill categories, systemic and technical. Systemic skills refer to expertise in planning and management sector responsibilities, along with a general understanding of how all the technical aspects of a sector fit together. Technical skills refer to specialized knowledge and abilities that are called upon only at the technical and operational levels of an engagement. Given this dichotomy, individual partner agencies' importance rankings were compared for coupled systemic and technical skill groupings (for example, by comparing Policing—Systemic with Policing—Technical) to explore whether agencies were consistent in the prioritization of these two skill-grouping categories.
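The paired comparison can be illustrated with the overall net category weights from Table II (the per-agency weights that Figure 3 actually plots are not reproduced in the text):

```python
# Overall net category weights (%) for three systemic/technical pairs,
# taken from Table II; a positive difference favours the systemic skill.
pairs = {
    "Policing":              {"systemic": 3.33, "technical": 3.49},
    "Public Health":         {"systemic": 2.36, "technical": 2.25},
    "Public Infrastructure": {"systemic": 3.48, "technical": 3.22},
}

diffs = {s: w["systemic"] - w["technical"] for s, w in pairs.items()}
for sector, diff in diffs.items():
    leaning = "systemic" if diff > 0 else "technical"
    print(f"{sector}: systemic - technical = {diff:+.2f} ({leaning})")
```

On these overall numbers, Policing is the pair where the technical skill outranks its systemic counterpart, mirroring the Agency 9 exception noted below.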
Table II. Net category skill-grouping weights by importance

Overall weight (%)  Skill group
5.17  Conflict Specific Planning
4.17  StratComms
3.84  Immigration, Customs. . .
3.78  ADR/Transitional Justice
3.74  Force Protection
3.73  Rule of Law—Systemic
3.71  Diplomacy & Governance—Systemic
3.68  Security Sector Reform
3.64  Democracy and Political Processes
3.49  Policing—Technical
3.48  Public Infrastructure—Systemic
3.33  Policing—Systemic
3.24  Criminal Justice—Systemic
3.22  Public Infrastructure—Technical
3.13  Civil Administration
3.11  Economic Recovery—Systemic
3.03  Diplomacy & Governance—Field Officer
3.02  Civil Society/Media Development
2.77  Commerce
2.75  Management—Technical
2.70  Legal Administration
2.61  Rural Development
2.57  Agriculture—Systemic
2.48  Business and Financial Services
2.47  Criminal Justice—Technical
2.46  Counterterrorism
2.41  Education—Systemic
2.37  Corrections
2.36  Public Health—Systemic
2.25  Public Health—Technical
1.95  Agriculture—Technical
1.69  Labor—Systemic
1.63  Taxes and Monetary Policy

ADR, alternative dispute resolution.
In Figure 3, the difference between systemic and technical average percentage total importance rankings given to each skill (across all sub-missions) is shown. Results indicate that in most cases, the majority of agency partners placed a greater value on the systemic-related skill groupings. The exception is Agency 9, whose representatives ranked Policing—Technical, Public Health—Technical and Public Infrastructure—Technical as more important than their systemic counterparts.
[Figure 2 (image): paired scatter plots, 'Partner Agency 1 vs. Average Skill Grouping Rankings' and 'Partner Agency 2 vs. Average Skill Grouping Rankings', plotting Overall Rank against each agency's rank (scale 0 to 35) for the 33 skill groupings.]

Figure 2. (a) Partner agency 1 versus net category skill-grouping importance rankings. (b) Partner agency 2 versus net category skill-grouping importance rankings.
5.2.3. Raw versus net category importance rankings. To explore the robustness of the aggregated academic and partner agency assessments, the net category weights and corresponding ranks were compared with the raw average weights and ranks at the mission, sub-mission and skill-grouping levels. At each level, net category weights depend on the weights at all the higher levels (no net category results are shown for the country scenario level, as it is the top level of the values hierarchy).
Overall, results indicate that although there is some variation in the raw average and net category importance rankings, the values are relatively similar in terms of percentage of importance and rank order at all levels of the values hierarchy. For example, Figure 4 shows that at the mission level, the Conflict Mitigation/Resolution/Peacemaking net category captures 26.9% of the total importance and the raw average 28.8%. Aside from the raw average importance ranking being marginally greater, Conflict Mitigation/Resolution/Peacemaking is ranked by both raw average and net category data as the mission area where the CRC-A can provide the most value added.
Similarly, Figure 5 shows that at the skill-grouping level the Strategic Communications net category importance ranking is 4.2% and the raw average ranking is 4.3%. Whereas the product of the academic and partner agency rankings slightly diminishes the net category ranking, the overall result that Strategic Communications is one of the most needed skill groupings does not change. This pattern suggests that the importance rankings of partner agencies are not overly sensitive to input from the academic experts regarding the relative importance of each representative country scenario, mission or sub-mission.
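The two aggregates being compared here can be sketched side by side. In this illustrative sketch (invented numbers, mission level only), the raw average is a plain mean across scenarios, while the net category weight reweights each scenario by its elicited scenario weight:

```python
import numpy as np

rng = np.random.default_rng(2)

# Placeholder elicited weights: 8 missions under each of 11 scenarios.
mission_w = rng.random((8, 11))
mission_w /= mission_w.sum(axis=0)   # normalize within each scenario

scenario_w = rng.random(11)
scenario_w /= scenario_w.sum()       # normalized scenario weights

raw_avg = mission_w.mean(axis=1)     # unweighted mean across scenarios
net_cat = mission_w @ scenario_w     # scenario-weighted aggregate

print(np.round(raw_avg, 3))
print(np.round(net_cat, 3))
```

Both are valid importance profiles over the eight missions; they coincide exactly when the scenario weights are uniform, which is one way to read the paper's finding that the two sets of rankings stay close.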
5.2.4. Comparative assessment of mathematical weighting methods. To calculate the weights for the values elicited from each academic expert and partner agency, various mathematical methods and combinations of methods could be used. As previously mentioned, this paper explored a combined approach of
[Figure 3 (image): bar chart 'Comparing {Systemic - Technical} Unweighted-Average Skill Grouping Weights', showing the systemic-minus-technical weight difference (-4% to +4%) for Agriculture, Criminal Justice, Diplomacy & Governance (Systemic - Field Officer), Policing, Public Health and Public Infrastructure, for the average and for Agencies 1 to 9.]

Figure 3. Comparing systemic versus technical raw average skill-grouping rankings.
[Figure 4 (image): bar chart 'Net Category & Raw-Average Mission Importance Weights' (0% to 30%), comparing net category and raw average weights across the eight missions.]

Figure 4. Net category and raw average mission importance weights.
rank-sum and rank-reciprocal methods. To evaluate the effects of different mathematical weighting methods on the calculation of skill-grouping prioritization, a sensitivity analysis was conducted comparing the mixed-method approach used in this paper with a complete rank-sum and a complete rank-reciprocal approach. Figure 6 shows that the percentage of importance assigned to each of the skill groupings is largely insensitive to the specific weighting method chosen.
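The two rank-based schemes compared here follow standard formulas from the MCDA literature; the sketch below shows those textbook forms, not necessarily the paper's exact implementation:

```python
import numpy as np

def rank_sum_weights(ranks):
    """Rank-sum: w_i proportional to (n + 1 - r_i), a linear falloff with rank."""
    r = np.asarray(ranks, dtype=float)
    n = len(r)
    w = n + 1 - r
    return w / w.sum()

def rank_reciprocal_weights(ranks):
    """Rank-reciprocal: w_i proportional to 1 / r_i, a steeper falloff."""
    r = np.asarray(ranks, dtype=float)
    w = 1.0 / r
    return w / w.sum()

ranks = [1, 2, 3, 4]
print(rank_sum_weights(ranks))         # → [0.4, 0.3, 0.2, 0.1]
print(rank_reciprocal_weights(ranks))  # → [0.48, 0.24, 0.16, 0.12]
```

Both schemes preserve the elicited rank order and differ only in how steeply weight decays with rank, which helps explain why the resulting prioritization is insensitive to the choice when the underlying rankings are consistent.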
6. DISCUSSION
A linear MCDA model was used to identify the ideal types of civilian skill groupings needed to optimize CRC-A support of overseas conflict prevention and response efforts in countries or regions that are at risk of, are currently experiencing, or are transitioning from conflict or civil strife. The linear model was desirable because detailed scores were not available, and more specific rankings and recommendations would have produced only marginal insight while detracting from the elicitation discussions that were valuable to the decision-making process. The MCDA analysis allowed select members of academia and partner agencies to provide their expert opinion on the requirement and demand for skill groupings commonly identified as needed in a civilian 'surge' capacity. These skills were assessed with reference to a values hierarchy of representative country scenarios, missions and sub-missions. Discussions surrounding the hierarchical structure were critical to MCDA model development but also proved useful for fostering internal CRC discussions about the corps' objectives and structure.
Of particular interest to S/CRS was the use of the MCDA method to prioritize CRC-A skill groupings and to help inform S/CRS' understanding of the 'ideal' proportion and types of civilian skills for inclusion in the CRC. Importance rankings show that Conflict Specific Planning/Ops/Management/Assessment/Engagement/Coordination and Strategic Communications scored as the skill groupings through which the CRC-A was perceived as able to provide the most value added. The results reflect what emerged as a strong consensus among academic and partner agencies, namely, that CRC-A efforts should be directed at conducting systemic assessments of what support is required on the ground and subsequently determining where the involvement of more technically skilled personnel might be most valuable and necessary.
During elicitation sessions and the subsequent presentation of preliminary results, many of the interviewees emphasized the importance of CRC-A
[Figure 5 (image): bar chart 'Net Category & Raw-Average Skill Grouping Importance Weights' (0% to 6%), comparing net category and raw average weights across the 33 skill groupings.]

Figure 5. Net category and raw average skill-grouping importance weights.
members possessing a core group of systemic-related skills that would enable them to assess the crisis in its environmental context and more effectively respond to institutional needs on the ground. For instance, planning and operations-management skills were repeatedly cited as necessary to assess and acquire knowledge of the situation. CRC-A members were encouraged to look beyond their own areas of expertise to analyse complex situations and identify other problem areas that might benefit from CRC or other types of technical support. In addition, it was believed that to successfully design and implement new capacity-building support programmes, CRC-A members would need to possess strong communication skills that enable them to reach back to other US government agencies, international stakeholders and local stakeholders for coordination or collaboration.
Several analyses were conducted to test the sensitivity of the MCDA findings to variations in the hierarchy, and the results proved to be extremely robust. In a comparison of raw average systemic and technical skill-grouping rankings, the majority of the agency partners were found to consistently place greater value on the systemic-related skill groupings compared with their technical counterparts, an outcome that resonates with the interviewees' emphasis on CRC-A members possessing a base set of planning, communications and assessment skills. By no means does this analysis suggest that technical skill groupings are not beneficial. Rather, interviewees pointed out that CRC-A members would be able to use their understanding of the technical aspects of their sector to support sector-wide as well as cross-sector strategic planning.
Similarly, an analysis comparing net category and raw average skill-grouping importance rankings was conducted to assess the degree to which the model was sensitive to the weighting of inputs at different levels of the values hierarchy. Results indicate that the importance weights from one hierarchical level to the next are relatively similar. This finding holds true for all levels of the values hierarchy (missions, sub-missions and skill groupings), indicating that the agency partners' preferences are relatively insensitive to the input from the academic experts. This might be because the fundamental beliefs about the needs for the CRC-A's preparedness are shared by both academic experts and agency partners, or because the general consensus among agency partners outweighs the variability across academic inputs. Further assessment of the justification provided by academic experts for their importance rankings would be needed to fully explain these findings.
A third analysis compared various weighting methods to determine whether
[Figure 6 (image): bar chart 'Weighting Methods Compared' (0% to 6%), plotting the current mixed-method weights against rank-reciprocal (top 3 levels) and rank-sum (top 3 levels) weights for the 33 skill groupings.]

Figure 6. Weighting methods compared.
using different mathematical approaches at each level of the hierarchy would change the results of the MCDA analysis. Recall that a combined approach of rank-sum and rank-reciprocal methods was used for calculating weights. It was found that, independent of the weighting method used, CRC-A skill-grouping prioritization remained largely the same. To put this finding in perspective, if there had been considerable variability in the scoring (country scenario, mission and sub-mission rankings) used to define the skill groupings, the different weighting methods would likely have produced prioritized lists of skill groupings that did show significant differences. As such, this finding further validates that there is consistency in the expression of perspective and importance rankings between and within experts.
Lastly, the level of agreement across partner agencies' importance rankings was assessed through a comparison of each partner agency's raw average skill-grouping importance rankings with the overall (total across all agency partners) net category skill-grouping rankings. The results indicated that although there were similarities in rankings across partner agencies, there were also differences. During discussions with partner agencies at the presentation of preliminary results, it became clear that the disparity in the analysis was due to partner agencies' varied perceptions of the skills a systemic expert should possess. For example, partner agency 3 representatives ranked Strategic Communications and Public Infrastructure—Systemic as, on average, the skill groupings of greatest importance, whereas partner agency 4 interviewees ranked Corrections, Criminal Justice—Systemic and Policing—Systemic as the most important. Still other agency partners differed in kind: representatives from partner agency 5 elected to comment only on their own agency-related skill groupings, whereas partner agency 9 representatives leaned towards prioritizing technical skill groupings that are unaccounted for in this context. Consideration of this variability in systemic expert qualifications should be used to inform pending staffing and hiring decisions.
To this end, the MCDA analysis was conducted to provide contextual knowledge on the CRC-A's value added but was not intended to guide, set or inform US foreign policy. The method presents a scientifically robust and transparent approach that enables decision makers to accurately identify the CRC-A's future needs and existing gaps. Through this application of MCDA, a prioritized list of skill groupings for the CRC-A was provided. In addition, through the execution of extensive sensitivity analyses, the model inherently became more transparent and facilitated dialogue about an otherwise politically sensitive topic. What remains unanswered is how knowing which skill groupings CRC-A members should possess translates into the types and proportions of skills needed for inclusion in the CRC. Ultimately, the MCDA findings will be used to inform the decision-making process, yet human input and consideration of multiple additional factors are required to translate MCDA findings into actual staffing decisions. The MCDA model can be updated and revised as the CRC's organizational objectives and mission space continue to take shape. In addition, the MCDA approach can be further explored at the organizational level as a tool for identifying actual staffing requirements, or at the operational level to help decide which institutional support programmes to pursue.
ACKNOWLEDGEMENTS
The analysis presented in this study was conducted to provide contextual knowledge on the CRC-A's value added but was not intended to guide, set or inform US foreign policy. We would like to thank Dr. Mayank Mohan, Eric Chu, Paul Welle and Kun Zan for helpful discussions and support. Mr. Gary Russell was instrumental in framing the problem. This study was funded in part by the DOS. Permission was granted by the DOS and the US Army Chief of Engineers to publish this information. The views and opinions expressed in this paper are those of the individual authors and not those of the US DOS, US Army or other sponsor agencies.
ENDNOTES
1. The illustrative country scenarios used in the analysis are not included herein because of politics-related and security-related sensitivities. The example provided is, for the purposes of our discussion here, purely illustrative.
2. Similar applications have used various MCDA approaches, including multi-attribute utility (MAU) and the Analytic Hierarchy Process. For this effort, it was useful to think in terms of missions, but these were almost by definition unanticipated events and not amenable to probabilistic assessment, and our effort was aimed at organizational guidance rather than MAU maximization. With the limited time available for obtaining judgments from the most senior experts and decision makers, and with some uncertainty about how many informants would offer inputs regarding each of the ranked elements, we felt rank-based weighting would in this case be more flexible and would require fewer judgments than an orthodox Analytic Hierarchy Process approach, and certainly fewer and simpler judgments than an orthodox MAU approach.