The characteristics of problem structuring methods: A literature review
Chris M. Smith*
Alliance Manchester Business School, The University of Manchester, G28 Sackville Street Building, Sackville Street, Manchester, M1 3BB, UK. Christopher.Smith@manchester.ac.uk, +44 (0)161 306 3468
Duncan Shaw
Alliance Manchester Business School and Humanitarian and Conflict Research Institute (HCRI), The University of Manchester, H24 Sackville Street Building, Sackville Street, Manchester, M1 3BB, UK. duncan.shaw-2@manchester.ac.uk
* Corresponding author
Abstract
Problem structuring methods (PSMs) are a class of qualitative operational research
(OR) modelling approaches that were first developed approximately 40 years ago. Different
definitions of PSMs have been proposed, some focusing on the types of problems that PSMs
typically address, others on how they address these problems. Despite this, there is no clear
framework for what characteristics need to be present in an approach to warrant it being
regarded as a PSM. This presents a challenge to understanding what constitutes a PSM and
the acceptance of new PSMs. This exploratory paper develops a framework from a literature
review to identify similarities between PSMs. The framework reflects that PSMs hold
different philosophical assumptions to traditional OR and, thus, the framework is structured
according to the four pillars of ontological, epistemological, axiological and methodological
assumptions an approach makes. Across these assumptions, the framework poses 13
questions to determine if an approach could be a PSM. The effectiveness of the framework is
assessed by applying it to eight OR approaches to see if it successfully identifies PSMs.
Keywords: problem structuring methods; OR methodology; soft OR; literature review
1. Introduction
Problem structuring methods (PSMs) are qualitative approaches for making progress
with ill-structured problems (Rosenhead & Mingers, 2001b). PSMs sit within operational
research (OR) but represent an alternative paradigm for problem-solving, distinct from
‘traditional quantitative OR’ (Rosenhead & Mingers, 2001b). Each PSM is distinctive, but
this paper searches for their similarities. It has been 40 years since PSMs emerged (Kirby,
2000), but there is still no detailed characterisation of the features that are shared by PSMs.
As Ackermann (2012, p. 656) writes, “whilst it is believed that they have similar
characteristics and aim to support a particular type of problems there is not agreement as to
which method[ologies] do and do not comply”. There is no generally accepted definition
because the original PSMs were not derived from a common starting point. While the theory
of individual PSMs has advanced, the universal understanding of PSM methodology has been
more stagnant (Westcombe et al., 2006). Recognising this, Eden and Ackermann (2006) call
for research to analyse across PSMs, but the response to this has been sparse, with recent
publications focussing on mixing PSMs together and with traditional approaches (Kotiadis &
Mingers, 2014); developing evaluation frameworks (Midgley et al., 2013); or defining a
single PSM (Yearworth & White, 2014).
Clarity on the similarities of PSM characteristics is fundamental to the advancement
of the field. For example, a definition can assess the status of approaches that claim to be
PSMs, such as Visioning Choices (O’Brien & Meadows, 2006), WASAN (Shaw & Blundell,
2010), DPSIR (Bell, 2012) and Wuli–Shili–Renli (Li & Zhu, 2014). Thus, this paper
develops and tests a framework of inter-related questions that can be used to assess the
veracity of the PSM claims that such methods make. The classification may not matter for
established OR/non-OR approaches, but the classification of new methods will matter if they
compromise the identity of the PSM label, rendering the term meaningless.
The seminal work on PSMs by Rosenhead (1989) in ‘Rational Analysis for a
Problematic World’ identified that PSMs constituted a new paradigm of analysis when
compared with traditional OR. This, and the subsequent edition of the book (Rosenhead &
Mingers, 2001b), became the consistent naming convention for PSMs, with some approaches
considered PSMs and others not. This established a form of exclusivity that may have limited
the space for a rigorous debate concerning the philosophical, theoretical and methodological
characteristics of PSMs. Rosenhead and Mingers (2001a) argued that methods associated
with quantitative OR follow a more objectivist stance and are better suited to ‘tame’ problems
that can be more easily comprehended. In contrast, PSMs take a subjectivist stance (within an
interpretivist paradigm) and are suited to ‘wicked’ problems that are difficult to specify.
Rosenhead and Mingers (2001a) assumed that because wicked and tame problems are
opposites, the assumptions underpinning the two paradigms should also be diametrically
opposed. They took the assumptions underpinning traditional OR and defined the opposing
state as assumptions for PSMs (Figure 1). This meant traditional OR and PSMs were cast as
opposites, which did not fully recognise the commonalities between some of their
underpinning assumptions.
Rosenhead and Mingers (2001a) noted that when these characteristics were developed
in the 1970s, they were rather theoretical, a blueprint for future approaches that may be
developed; however, they have remained a dominant set of assumptions that underpin PSMs.
While the assumptions were a useful starting point in the 1970s, it is opportune to revisit the
philosophical, theoretical and methodological position of PSMs.
Underpinning our framework is a view that (compared to other problem-solving
approaches) PSMs make some unique assumptions about the nature of problems and how to
solve them (Rosenhead & Mingers, 2001b). These are underpinned by a framework of ideas
(Checkland & Scholes, 1990). Our framework is informed by the research methodology work
of Guba and Lincoln (1994; 2005). They define four constructs of the qualitative research
paradigm: ontology, the form and nature of reality and what can be known; epistemology, the
nature of relationships between the knower and what can be known; axiology, what is valued
in terms of research processes for generating knowledge; and methodology, how the knower
can find out what can be known. We use these constructs to build the four pillars of our
framework.
Characteristics of the quantitative OR paradigm | Characteristics of the qualitative OR paradigm
Problem formulation in terms of a single objective and optimization. Multiple objectives, if recognized, are subjected to trade-off onto a common scale. | Non-optimizing; seeking alternative solutions which are acceptable on separate dimensions, without trade-offs.
Overwhelming data demands, with consequent problems of distortion, data availability and data credibility. | Reduced data demands, achieved by greater integration of hard and soft data with social judgements.
Scientization and depoliticization, assumed consensus. | Simplicity and transparency, aimed at clarifying the terms of conflict.
People are treated as passive objects. | Conceptualizes people as active subjects.
Assumption of a single decision maker with abstract objective from which concrete actions can be deduced for implementation through a hierarchical chain of command. | Facilitates planning from the bottom-up.
Attempts to abolish future uncertainty, and pre-take future decisions. | Accepts uncertainty, and aims to keep options open.
Figure 1: Characteristics of quantitative and qualitative OR (Rosenhead & Mingers, 2001a)
Based on the literature, the framework identifies a suite of inter-related, common
characteristics of PSMs, some of which are shared with traditional OR approaches, as
expected given that they are both part of the OR family. All characteristics are included in the
framework, not only those that are unique to PSMs. The remainder of the paper is organised
as follows: Section 2 shows how PSMs have been defined in the literature. Section 3
introduces the methodology used to develop the four pillar framework. Section 4 introduces
the framework. Section 5 tests the usefulness of the framework by applying it to eight OR
approaches to understand if it can identify the three PSM approaches contained therein.
Section 6 discusses the findings in relation to existing theory. Section 7 draws conclusions
and limitations of this work.
2. Defining PSMs
In the 1970s a ‘Crisis in OR’ was identified (Thunhurst, 1973), suggesting that the
assumptions underpinning existing quantitative OR techniques were ill-equipped to deal with
the social problems being faced by organisations (Kirby, 2007). Responding to these new
problem characteristics, new approaches were developed with different methods of analysis,
viewing problems from a different philosophical position. Here we call these problem
structuring methods (PSMs); in the literature PSMs have been defined in three ways1:
problem characteristics, method of analysing problems and philosophical dimensions. First,
the characteristics of problems that PSMs address have been called ‘messy’ (Ackoff, 1979)
and ‘wicked’ (Rittel, 1972). Such problems are pluralistic (Jackson & Keys, 1984), as
stakeholders have divergent views about goals and objectives. The problems exist in dynamic
and complex systems that interact with each other (Ackoff, 1979). While these problems are
varied, and it is difficult to exhaustively list their attributes, Churchman (1967) believes them
to share many of the following properties: They cannot be exhaustively formulated, every
formulation is a statement of a solution, there is no stopping rule, there is no true or false,
there is no exhaustive list of operations, there are many explanations for the same problem,
every problem is a symptom of another problem, there is no immediate or ultimate test,
solutions are ‘one shot’ and every problem is unique. Other authors add that problems often
lack reliable data (Mingers & Brocklesby, 1997) and that standard mathematical techniques
are not applicable (Simpson, 1978) as problems are defined by a social construction by actors
(Keys, 2006) and require constant negotiation (Pidd, 2009). Given their diversity of form and
interpretation, problems rarely fit neatly into rigid analytical frameworks (Checkland, 1983).
Second, some definitions focus on how PSMs analyse a problem. PSMs build models
of situations (Franco, 2013), where a model is an integrated representation of a situation that
supports negotiation or develops new understanding. The models are qualitative (Ackermann,
2012), often representing data from differing worldviews (Mingers, 2011). PSMs reject
reductionism (Ackoff, 1979), where individual elements are optimised independently of the
whole. Instead, they manage complexity (Rosenhead, 2006) by taking a holistic approach and
seeking emergent system properties (Checkland, 1981). PSMs see problems as systems in
which elements are connected by interrelationships rather than static snapshots. Therefore,
PSMs explore systemic issues (Midgley et al., 2013), aiming to build shared understanding
and commitment across stakeholders (Ackermann, 2012) through facilitation (Franco &
Montibeller, 2010), participation (Rosenhead, 1996) and stimulating dialogue (Mingers &
White, 2010) through a structured decomposition of issues. Rather than relying on the
analysis of abstract data (Mingers, 2000), a social process of learning takes place through
which actions are agreed upon (Pidd, 2009).

Finally, philosophical definitions centre on PSMs
offering an alternative to ‘traditional OR’, which assumes that reality can be objectively
1 In defining PSMs here, and in Section 4, the paper recognises that some of the literature cited predates the emergence of PSMs (in some cases by decades). However, these early sources set the foundations for the development of OR research, including PSMs, and thus underpin the assumptions of PSMs.
modelled to identify efficient ways of achieving well-specified objectives (Rosenhead &
Mingers, 2001b). In contrast, PSMs take interpretivist and social constructivist views that
situations are constructed differently by different people, are therefore subjective, and require
participation (Rosenhead & Mingers, 2001a).
These three classes of definition do not provide a sufficiently detailed classification of
similarities across PSMs as they are based on anecdotal evidence from these papers (i.e. the
definitions are stated in the papers but not as the product of a research project designed to
discover the characteristics of PSMs). Also, those papers do not aim to explain a
classification system of PSM characteristics. This paper aims to offer a comprehensive
framework to classify PSMs that brings together a breadth of characteristics from across the
PSM literature base by integrating understanding from multiple authors and providing a
structure through which PSMs can be understood.
3. Methodology to identify the pillars of PSMs from the literature
To aid readability of this section, Figure 2 summarises and illustrates the coding
process. While the figure presents each stage in a linear fashion, the process was more cyclical,
particularly between the axial and relational coding, where the formation of the relational
codes helped to contextualise the axial codes and the resulting questions. Figure 2 is
illustrative of the process focussing only on Pillar 2 which yields 3 of the 13 questions. The
figure only shows three sources of literature, therefore not all open codes informing each
axial code are shown.
To explore the similarities among PSMs, we undertook a comprehensive literature review to
identify common characteristics. First, a set of search terms was identified to return a set of
papers to review from the literature. We started with just problem structuring methods but
wanted to expand the search beyond this generic term. Therefore, the five approaches
identified in Rosenhead and Mingers (2001b) were considered. Of these five approaches,
three (Soft Systems Methodology, Strategic Choice Approach, Strategic Options
Development and Analysis/Journey Making) dominate the written literature on PSMs, with
the other two (Drama Theory and Robustness Analysis) having a usage rate of less than 10%
in Munro and Mingers' (2002) survey of practitioners, and having a less expansive literature
base with fewer papers and fewer authors contributing to their theoretical and
methodological development. Therefore, these two approaches were dropped from the search
terms. We revisit both Drama Theory and Robustness Analysis in the Discussion as a
further test of the framework.
* Ackermann, F. (2012). Problem structuring methods “in the Dock”: Arguing the case for Soft OR. European Journal of Operational Research, 219(3), 652–658.
** Checkland, P. (1985a). Achieving “Desirable and Feasible” Change: An Application of Soft Systems Methodology. The Journal of the Operational Research Society, 36(9), 821–831.
*** Mingers, J., & White, L. (2010). A review of the recent contribution of systems thinking to operational research and management science. European Journal of Operational Research, 207(3), 1147–1161.
Figure 2: Illustration of the coding process for three pieces of literature relating to Pillar 2
Google Scholar searches were made for the terms problem structuring methods, soft
systems methodology, strategic choice approach, and strategic options development and
analysis or journey making. This identified 16,600 unique articles (some articles were returned
for multiple key words). This data set was too large for thorough analysis; therefore, the
for multiple key words). This data set was too large for thorough analysis; therefore, the
search was filtered by focussing on two key OR journals: the European Journal of
Operational Research (EJOR) and the Journal of the Operational Research Society (JORS)
(Figure 3). These journals were selected because they are the leading OR journals with a
remit to develop theoretical contributions in PSMs. Removing duplications reduced this
further. Additional key sources of highly-cited literature were also included based on
references from the reduced set of articles (see last row of Figure 3). These are not grouped
by search term as they emerged through the review process. This produced a more
manageable data set from which to identify key characteristics of PSMs2.
Search term | No journal filter | EJOR | JORS
"Problem Structuring Methods" | 2,070 | 83 | 176
"Soft Systems Methodology" | 12,600 | 81 | 260
"Strategic Choice Approach" | 1,630 | 38 | 92
"Strategic Options Development and Analysis" OR "Journey Making" | 2,340 | 56 | 74
Unique entries | 16,600 | 148 | 337
Other sources of highly-cited literature relating to the features of PSMs | 115
Figure 3: Search terms in Google Scholar, EJOR, JORS and other sources of literature
The process of identifying the common characteristics of PSMs from the literature
followed a three-stage coding process of open coding, axial coding and relational coding.
During open coding, each paper was read to identify the characteristics of PSMs they
contained. Where the same (or very similar) characteristics were found in different papers
they were grouped together in open codes. This broke down the data to allow comparisons
across the literature (Strauss & Corbin, 1990). To illustrate using Figure 2, in Checkland
(1985a) we identified “both (systematically) desirable and (culturally) feasible” and in
Ackermann (2012) “Changes that are both culturally feasible and systematically desirable
identified”. Here, repetition was removed to create a more usable dataset by grouping these
through open coding as ‘desirable and feasible outcomes’. This gave a comprehensive list of
common characteristics of PSMs extracted from the literature.
Next, axial coding identified connections across the open codes. Axial coding clusters
open codes to understand how the open codes are related and highlights patterns of
interaction. This identified characteristics that were prevalent in the literature and, thus, the
2 Space prohibits us from citing every source used in this paper. An additional bibliography is provided as a supplement, and is available from the EJOR online system.
characteristics to be included in the framework. For example, there were several open codes
relating to the political feasibility of outcomes; therefore, this was identified as an axial code
(see Figure 2). Dual coding (axial and open) in this way aided understanding of what was
meant by political feasibility through the open codes. That is, political feasibility included,
for example, seek buy-in through accommodation and desirable and feasible outcomes. This
led to a diverse interpretation of what political feasibility means for a PSM. Each axial code
was then developed into a question that embodied the open codes. These questions could be
asked of an approach to see if that PSM feature is present or not in an approach. For example,
for political feasibility, the question created was Does the approach aim to develop buy-in to
politically feasible outcomes? In total we identified 13 axial codes from the data set resulting
in 13 questions, each addressing a particular feature of PSMs.
Relational coding identified overarching relational theory within the axial codes.
Relational codes are core categories through which the theory can be understood. This
procedure validates the relationships across each group, filling in categories that need further
refinement and development (Strauss & Corbin, 1990).
Rosenhead (1989) suggested that PSMs constitute a new paradigm of analysis, an
alternative way of viewing the world. To understand what paradigm means for PSMs we
reviewed the OR and PSM literature and found Mingers (2003) and Shaw et al. (2006) already
drew on the highly cited authors Guba and Lincoln (1989, 1994, 2005) who identify four
constructs that underpin a paradigm: ontology, epistemology, axiology and methodology.
Mingers (2003) operationalized three of these constructs for OR, stating that they represent
the most general characteristics that OR approaches share (p. 561) – thus signalling the
potential of Guba and Lincoln's work to OR/PSMs. Analysis of the data led to the emergence
of these four constructs (relational codes) and naming them using Guba & Lincoln’s terms
helped us to understand how the identified characteristics of PSMs related to philosophical
constructs that underpin PSMs. Thus we were able to broaden our conceptualisation of the
data using this theory, leading to a more systematic approach to understand the data and a
more holistic way to understand the characteristics of PSMs.
The first construct, ontology, guides users on the form and nature of reality and what
is there that can be known about it. For Mingers, this translates into OR by identifying the
types of problems to which an approach can be applied, aspects to model and general system
characteristics required to apply an approach. This identifies the first relational code of the
framework, the characteristic and scope of the system modelled by the PSM, called systems
characteristics.
Epistemology considers the relationship between the knower and what can be known.
Mingers operationalized epistemology as how knowledge is created using an approach, by
whom, and identifying goals of this. Thus, our second relational code defines the knowledge
and involvement of stakeholders to ensure that the required breadth/depth of insight is
available. Axiology considers what is valued or considered right. Mingers operationalized
axiology as judging the value of the intervention and the insight it produces. Hence, the third
relational code represents the values of model building through the contribution of model
building to the discovery of new knowledge. Mingers published his paper in 2003, but in
2005, Guba and Lincoln added a fourth theoretical construct, methodology, which considers
how the inquirer should go about finding knowledge. Here, methodology is operationalized
as the structured process of analysis and modelling that an approach takes to formally build
and represent knowledge. Thus, the fourth relational code represents the structured analysis
of knowledge using formalised rules and its representation in models.
Next, the questions were evaluated using five measures from Keeney and Raiffa
(1993) which they developed to evaluate the appropriateness of criteria in MCDA
interventions: completeness, operationality, decomposability, absence of redundancy, and
minimum size. Completeness requires that all attributes of concern to the decision maker are
included. Operationality requires that the criteria are specific enough to compare and evaluate
actions effectively. Decomposability requires that the performance of an action in one
criterion can be judged independently of its performance in other criteria. Absence of
redundancy requires that two or more criteria do not represent the same thing. Minimum size
requires that there are not too many criteria, as this would make the framework too large and
impractical (Goodwin & Wright, 2004). We use these five measures because we have a
similar aim to Keeney and Raiffa, namely to understand the usability of a set of criteria for
evaluating an option; in our case, whether the questions can identify if an
approach has the characteristics of PSMs. Using these principles led to a set of 13 mutually
exclusive questions being developed, each aligned to one of four constructs, making the four
pillar framework. Finally, these questions were externally reviewed by two academics who
actively research PSMs, one being a developer of a new qualitative OR approach identified in
Section 1 of this paper. This helped to verify the appropriateness and applicability of these
questions to current PSMs and the range of newly developed approaches, increasing
confidence in the framework. In the discussion we consider the interrelated nature of these
questions and if all 13 are needed.
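Of the five measures, two lend themselves to a mechanical check. The sketch below is our own framing (the function name and the size threshold are illustrative, not from Keeney and Raiffa):

```python
# Illustrative checks for two of Keeney and Raiffa's (1993) five measures.
# Absence of redundancy and minimum size can be tested mechanically; the other
# three (completeness, operationality, decomposability) need expert judgement.
def mechanical_checks(questions, max_size=13):
    """Return which mechanically checkable measures a question set satisfies."""
    return {
        # No two criteria should represent the same thing.
        "absence_of_redundancy": len(set(questions)) == len(questions),
        # The set should not grow so large that it becomes impractical.
        "minimum_size": len(questions) <= max_size,
    }
```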
To build the framework, each relational code is defined as a pillar, and each question is
developed from an axial code which is linked to one of these four pillars. The four pillars of
PSMs are summarised in Figure 4.

Theoretical construct | Theoretical meaning (Guba and Lincoln) | Operationalised construct (Mingers) | The pillars of PSMs
Ontology | What is assumed to exist | What is included within a model | System Characteristics
Epistemology | The nature of knowledge | How and with whom is knowledge created | Knowledge and Involvement of Stakeholders
Axiology | Values for problem solving | How to judge the value of PSM research | Values of Model Building
Methodology | How to structure enquiry | (not operationalized by Mingers) | Structured Analysis
Figure 4: The theoretical construction of four pillars
4. Introducing the four pillar framework
We now introduce the 13 characteristics identified from the literature review. Each
feature aligns to one of the four pillars and specifies a question that, when asked of an
approach, uncovers if a PSM characteristic is present. If all 13 characteristics are present
within an approach, we suggest it may be eligible for consideration as a PSM; however,
the measure of proof may be higher than just exhibiting all 13 characteristics. While the
framework specifies characteristics of PSMs, we expect some characteristics to be shared
with non-PSMs as they may be characteristics of wider OR. The questions are numbered and
in italics throughout this section. In Section 5, we apply these questions to a range of PSMs
and non-PSMs to test the framework.
4.1. Pillar 1: Systems characteristics
OR approaches build models that reflect a system of elements that interact with each
other. Elements included in a model reflect the ontological assumptions of the modelling
approach.
The first characteristic of PSMs is that they build a model, or models. Moreover, each
analytical approach should be designed to model a system that has been identified. For
example, soft systems methodology (SSM) investigates human activity systems (Checkland
& Scholes, 1990) and strategic options development and analysis (SODA) analyses
strategically important causal relationships (Eden & Ackermann, 2001). PSMs should clearly
identify the analytical approach that is being applied to build and analyse a model.
Question 1: Does the approach identify a system to model?
Next, we consider the assumptions about the nature of the system being modelled.
Here we identify opposing positions between traditional OR and PSMs. These two views are
summarised by Franco and Montibeller (2010): In traditional OR, problem situations are
assumed to exist as external realities; in PSM, problems are socially constructed entities that
depend on how participants subjectively interpret the world. For PSMs, actors construct their
own interpretation of reality so that multiple subjective realities are inputs to be modelled
(White, 2009). Hence, PSMs ‘move away from “objectively” modelling the external world
towards modelling peoples’ concepts and beliefs about the world’ (Mingers, 1992, p. 3).
PSMs, inputs to a model are the subjective understanding of participants about the external
world they perceive. Question 2: Does the approach model participants’ subjective
interpretations of the world?
Another aspect of a PSM is how the approach tries to understand the system through
modelling. Ackermann (2012) states that PSMs focus on managing (rather than reducing)
complexity, looking at the whole picture and not breaking problems into constituent parts.
Rather than isolating parts and studying them independently, PSMs advocate ‘holism’ to
concentrate on the whole and analyse the relationships between parts to identify emergent
properties (Preece, Shaw, & Hayashi, 2013). Traditional OR approaches take a more
mechanistic view that assumes that phenomena are predictable and inherently understandable
(Jackson & Keys, 1984). The mechanistic view leads to reductionism (Ackoff, 1979), where
cause-and-effect relationships are measured assuming that knowledge of all these individual
relationships would lead to knowledge of the entire system. Question 3: Does the approach
seek to build a holistic understanding of the system?
4.2. Pillar 2: Knowledge and involvement of stakeholders
OR techniques use models to construct and represent knowledge about an area of
concern. The way in which knowledge is created reflects an approach’s epistemological
assumptions. Knowledge creation is aided by the representations of a problem situation in a
model. The form the model takes affects the knowledge-creation process. PSM models take a
qualitative form, are often diagrammatic (Ackermann, 2012) and represent differing
perspectives. This is in contrast to the quantitative representations of reality that typify
traditional OR models and represent a more objective standpoint. Question 4: Does the
approach build a qualitative model?
The next characteristic is the process of eliciting knowledge to build a PSM model.
Franco and Montibeller (2010) argue that building a model can be done in two forms, expert
and facilitator. In expert form, the problem situation faced by a client is given to the OR
consultant, who builds a model to develop a (quasi-)optimal solution. In facilitator form, the
consultant jointly develops a model through participant interaction, possibly in a group
workshop. Checkland and Winter (2005) suggest that some PSMs can also be implemented in
two modes. The first is formal facilitation of a group-level application (which they call
Mode 1). The facilitator is a process expert and facilitates the elicitation of the participants’
knowledge in the application of the approach (Phillips & Phillips, 1993). Alternatively, a
participant structures their own thinking using the principles of the approach (perhaps via an
interviewer), so the user is both a process and content expert and uses the approach to
facilitate their own thinking processes (called Mode 2). Between these two modes is self-
facilitation, where a group guides itself through a pre-defined process using prompts
(Johnson & Johnson, 2002). With facilitation, the model is the focus for participants
exploring the complexity of issues and beginning to transition their understanding either
achieving this clarity in a group or by themselves (Eden & Ackermann, 2006). Question 5:
Does the model building involve the facilitation of participants?
For PSMs, stakeholder learning is critical (Checkland, 1985a). This arises from
participants sharing situational knowledge to build joint definitions and construct problem
resolutions within a model. A shared model can act as a boundary object offering a shared
language, shared meaning and a common interest (Franco, 2013), helping stakeholders to
understand how their knowledge inter-relates (Ackermann, 2012) through
facilitating problem resolution via dialogue (Bryant, 2002). This learning can be done in
groups or individually. For example, participants working as individuals can also understand
the relevance of their own knowledge through a structured process (Shaw, Eden, &
Ackermann, 2009). Traditional OR will also lead to clients learning about the problem
situation; however, this will be through analysis of the model rather than through
participation in the model building process as described above. Question 6: Does the model
building enhance participants’ learning about the situation?
Finally, in Pillar 2, PSMs prioritise taking actions that are systemically desirable and
culturally feasible (Pidd, 2009). PSMs assume that it is better to have a good set of actions
that improve the situation and are politically feasible and implementable rather than optimal
solutions that may not get implemented (Checkland, 1981). Some other OR approaches seek
optimal solutions, but these may never be implemented if political factors do not also inform
the model. Political feasibility can be gained through recognising power structures and
getting buy-in from important stakeholders (Eden & Ackermann, 1998). Stakeholders can explore
perceptions of the problem and find agreement or accommodation between participants’
conflicting constructions (Checkland & Scholes, 1990). Participation goes beyond merely
consulting stakeholders, and envelops stakeholders into the model building process (Davis,
MacDonald, & White, 2010) to increase their commitment to implementing the outcome as
they then appreciate how their views inform the analysis, with the models reflecting solutions
they jointly developed (Franco & Montibeller, 2010). Participation in the process over time
develops buy-in to feasible outcomes. Question 7: Does the approach aim to develop buy-in
to politically feasible outcomes?
4.3. Pillar 3: The values of model building
An OR approach must have a set of values to guide the modelling and offer a standard
against which to judge the quality of analysis. These will reflect the axiological assumptions
of an approach. Guba and Lincoln (1989) introduce four measures for judging the quality and
rigour of qualitative research: credibility, transferability, dependability and confirmability.
These measures were used by Shaw (2006) to judge the value of Journey Making workshops
(similar to SODA) and showed their compatibility with PSMs; therefore, we employ these
values in the framework.
Credibility requires the data to accurately reflect stakeholders’ social constructions.
PSMs recognise that problems are multi-perspective, allow a range of distinctive views to be
explored and embrace conflicting objectives without collapsing them into a final single
function (Mingers, 2011). Instead of trying to define a ‘real’ or ‘objective’ problem, the focus
is on joint problem definitions, which encompass the main features of individual
perceptions (Franco & Montibeller, 2010). Where Question 2 is concerned with the inputs to
a model, this question is concerned with what happens to those inputs. Question 8 explores if
the final model preserves the different competing logics or social realities of participants
without forcing the model to represent a single objective reality. For example, different
worldviews are accommodated in SSM. These models are credible to participants as they can
identify their own views as present within the final models. Question 8: Is credibility
established in models by preserving multiple participant contributions?
Transferability is the extent to which methodological findings can be generalised and
used in other problem contexts. The model building approach should be suitably generic so it
is not limited to a single setting but can be used with a diverse set of problems and clients.
Question 9: Is the model building process suitably generic so it can be transferred to multiple
problem contexts?
Traditional OR methods attempt to show that outputs are dependable by
demonstrating their economic (substantive) rationality, or when outputs are appropriate to
achieve stated goals within limits imposed by given constraints (Eden & Ackermann, 1998).
PSMs are used in situations where a single goal and explicitly-stated constraints may not
exist, as they are constructions of different stakeholders. Therefore, in the absence of being
able to show that outcomes are substantively rational, PSMs need to demonstrate reliability in
outcomes by showing that a logical procedure has been followed. In part, dependability puts
focus on the process of collecting data (Shaw, 2006). This is called procedural rationality,
where ‘the procedure itself is the outcome of a publicly stated reasoning and so can gather
cognitive commitment from participants’ (Eden & Ackermann, 1998, p. 55). Procedural
rationality and involving users in the model building process make the process transparent
(Jackson, 2006), potentially increasing participants’ confidence in the outcomes. Question 10:
Does the model building process aim to create confidence in the outcome through procedural
rationality?
Confirmability requires that the data in a PSM model is grounded in the situation
being studied and not the facilitator’s own constructions. Furthermore, confirmability
suggests that the outcomes are grounded in the content of that model and are traceable to its
source (i.e. that a validated audit trail exists of stakeholder views leading to model content
leading to model outcomes). In PSMs, the validation of models is done by participants during
model building in a process called collaborative inquiry (Champion & Wilson, 2010). This
ensures that modellers accurately represent the views of participants. This is different from
the definition of validation applied to traditional OR by Pidd (2009) in which validation is a
process of assessing the degree to which the input-output relation of the model is the same as
that of the real system within some defined experimental frame. Confirmability focuses on
ensuring a transparent path of inferring findings. Question 11: Does the model act as an
audit trail that has been validated through collaborative enquiry?
4.4. Pillar 4: Structured analysis
OR approaches build and analyse a model and create knowledge, and how an
approach structures this reflects its methodological basis. Methodologically, PSMs comprise
a number of different tools that enable different stages of analysis. For example, SSM has
rich pictures and root definitions. These stages offer richness to an application, allowing a
range of considerations to be included in the analysis. Because PSMs are applied to wicked
problems, a multiplicity of different tools can help approach the problem from different
analytical perspectives to formalise and structure the knowledge of participants. The staged
approach results in flexibility within an application, where users can cycle through different
analytical tools according to the needs of the context (Checkland, 1985a). With a diverse
toolset, PSMs can structure the problem in different formats over several stages of analysis.
These different tools are well-documented in the literature. Question 12: Does the approach
structure knowledge through different stages of analyses?
While PSMs have different stages of analysis, they also use different types of
thinking: divergent and convergent. During divergent thinking, participants are encouraged to
think with variety to explore diverse issues, thus increasing the likelihood of
identifying creative solutions (Franco & Montibeller, 2010). Convergent thinking allows
participants to identify commonalities in views (Franco, 2013) and consolidate the best ideas
in preparation for the next stage (Shaw, 2003). Phillips and Phillips (1993) cite many
examples of poor practice in which groups converge and reject ideas before they are fully
explored which may lead to poorer outcomes. Question 13: Does the approach have distinct
phases for divergent and convergent thinking?
5. Testing the four pillars
Section 4 detailed four pillars, characteristics and 13 corresponding questions. This
section explores the validity of these 13 questions by considering if they effectively identify
PSMs from within the family of OR approaches.
To demonstrate breadth and variety of application, eight approaches have been
selected. Six approaches are selected using the Williams (2008) taxonomy of OR methods:
From PSMs, we chose SSM (Checkland, 1981), SODA (Eden & Ackermann, 1998) and the
strategic choice approach (SCA) (Friend & Hickling, 2005). From ‘methods to calculate an
attribute of a system’, we chose data envelopment analysis (DEA). From ‘methods to
replicate or forecast system behaviour’ we chose simulation. From ‘optimisation methods’,
we chose linear programming (LP). To assess if the framework can distinguish between
PSMs and approaches that are reported to be closely aligned with PSMs (Mingers &
Rosenhead, 2001), we also chose the viable system model (VSM) and system dynamics (SD).
In our selection of SD to test the framework, we appreciate the breadth of use of the
approach and so take a perspective of SD that is closest to it being a PSM. To define this
perspective, we take three recent EJOR papers on SD: Lane, Munro & Husemann (2016);
Thompson, Howick & Belton (2016); and Torres, Kunc & O’Brien (2017). We refer
specifically to these applications (rather than the widest spectrum of SD applications).
Consequently, we refer to SD3 in this paper to signal that we are not considering an all-
encompassing view of SD. As these papers represent the closest form of SD to PSMs, they
will provide the toughest test of the framework while also giving clarity to the answers to
each question. Similarly, to give a more specific definition to simulation, we have chosen a
mainstream approach to discrete event simulation (DES).
Having established the 13 characteristics, we now test their legitimacy by applying
them to the selected approaches. Importantly, application of these characteristics aims to test
the framework we have developed, not test the OR approaches to which they are being
applied. Informing our understanding of the non-PSM approaches is the established OR
literature relating to each of these approaches with the exception of SD3, as mentioned above.
Different amounts of literature were needed for each OR approach across the 13 questions to
answer them confidently; our answers were also checked with an expert in each field.
To conduct this test, there are four issues to consider: whether a generic or unique
scale is used to test the legitimacy of each characteristic, how many points should be on the
scale(s), what descriptors are used for each point, and on what basis to apply each
descriptor. First, using a unique scale to test each of the 13
characteristics could better reflect the diversity of characteristics and address unique features
of each characteristic. However, this option was rejected because it would be difficult to
agree on 13 different scales, and the usability of a framework with 13 unique scales may be
low. Instead, we opted for a generic scale to be used across all characteristics because it
allows users to become more familiar with its application, future-proofs the framework and is
sufficient to test the characteristics, which is the focus of this paper.
Second, we needed to determine a scale to identify if a feature of PSMs is present
within an approach. A binary yes/no scale was trialled but did not adequately represent the
diversity of how techniques were reported in the literature. For example, when considering
question 5, Does the model building involve the facilitation of participants?, it is not possible
to answer consistently for all techniques, as some, such as DES, can be built both with and
without facilitation. Therefore, our scale needed to represent a wider range of alternatives
found in the literature. We began with a 5-point Likert scale (strongly agree to strongly
disagree) but found it difficult to be confident and consistent on what the threshold should be
between different points on the scale, as the differences can be nuanced. This was also true of
a 4-point scale. We settled on a 3-point scale, which provided the opportunity for extreme
responses (1 or 3) as well as a middle option (2) when the literature was more equally
balanced.
Third, we needed to identify suitable descriptors for each of the three points on the
scale. To provide evaluation descriptors that cover all options, we decided that: descriptor 1
should be a positive response to the characteristic; descriptor 2 should be neutral; and
descriptor 3 should be a negative response. On the specificity of the descriptor, we trialled 1-
Must, 2-May and 3-Must not, but this language was too restrictive, implying 100%
compliance in options 1 and 3. This did not allow for outlier papers inconsistent with the
dominant narrative, meaning many techniques would be incorrectly classed as 2. We also
trialled softer descriptors (e.g. 1-Mostly, 2-Unclear and 3-Mostly not), but the equivocality of
‘unclear’ was not helpful. We finally trialled and accepted 1-Yes, 2-Often and 3-No, which
(as we describe below) allowed 1 and 3 to reflect the dominant narrative in the literature and
allowed 2 to reflect when there was not a dominant narrative.
Having established the three-point scale, to ensure consistent application of the
framework to each approach we followed the validation process reported in Shaw et al.
(2017), whereby two researchers independently coded the answers for each question on each of
the approaches. Where there were discrepancies between these assessments, increasingly tight
rules and definitions were agreed between the researchers and the process started again.
This continued until both researchers had a 100% coding match across all questions and
approaches. Here we consider one of these rules, namely the threshold required to allocate
each of the three points on the scale. The main difficulty was to specify the circumstances
under which to allocate a 2 on the scale, as the choice between 1 and 3 for a technique was
often clear. The basis of evaluation was the dominant narrative in the OR literature, with
the exception of SD, for which we used the SD3 papers.
Therefore, a journal paper in which an OR technique was used in a way that was not
consistent with that dominant narrative would not change the evaluation. For example, the
majority of the VSM literature does not use the model as a method of facilitation, so the
answer would be ‘no’ to question 5, Does the model building involve the facilitation of
participants? There is one paper that uses VSM as a facilitation tool (Tavella &
Papadopoulos, 2014), but this does not represent the dominant use of VSM in the literature
and so would not move the evaluation to ‘unclear’, which is reserved for techniques in which
there is no single dominant narrative. We now present the application of these questions to
the eight OR approaches. Most of these classifications are unproblematic and have been
stated briefly; longer answers are given in cases that are more contentious. Summary tables
are presented in Figures 5-8.
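The double-coding reconciliation described above can be sketched as a simple agreement check over the coded cells. A minimal sketch, in which the codings and cell labels are hypothetical and serve only to show the 100% match criterion:

```python
def coding_match(coder_a, coder_b):
    """Percentage of (question, approach) cells on which two independent
    coders assigned the same value on the 1/2/3 scale."""
    assert coder_a.keys() == coder_b.keys()
    agree = sum(coder_a[cell] == coder_b[cell] for cell in coder_a)
    return 100.0 * agree / len(coder_a)

# Hypothetical codings: 1 = yes, 2 = often, 3 = no.
coder_a = {("Q5", "DES"): 2, ("Q5", "DEA"): 3, ("Q8", "SSM"): 1, ("Q8", "LP"): 3}
coder_b = {("Q5", "DES"): 2, ("Q5", "DEA"): 2, ("Q8", "SSM"): 1, ("Q8", "LP"): 3}
print(coding_match(coder_a, coder_b))  # 75.0: one cell still to reconcile
```

In the process above, any result below 100.0 would trigger tighter coding rules and a fresh independent coding round.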
5.1. Pillar 1: Systems characteristics
1. Does the approach identify a system to model?
All of the OR approaches are clear about the system being modelled. SSM models the
human activity system (Checkland & Scholes, 1990), the “modelling language used for
making models of human activity systems is all the verbs in language; an indicator of logical
dependency; indicators of flows, concrete or abstract” (Checkland, 1981 p. 315). SODA
builds cognitive maps that are designed to represent the way in which a person defines an
issue (Eden & Ackermann, 2001). The cognitive map is made up of constructs (nodes) linked
to form chains (shown by arrows) of action-oriented argumentation (Eden & Ackermann,
1998). SCA builds several models that represent the interconnectedness of decisions with an
aim to reduce uncertainty (Friend, 2001). VSM outlines five sub-systems that are required for
an organisation to remain viable (Beer, 1981). SD3 draws causal loop diagrams based on
mental models of a situation, which are converted into level and rate equations that can be
quantitatively modelled (Torres, Kunc, & O’Brien, 2017). DES models show how an entity
moves through a system over time. A DEA model consists of inputs and outputs from a
system of decision making units (DMUs) that are used to calculate the relative efficiency of
DMUs within the system. LP models are built with constraints that define a feasible region,
which forms a convex polytope. An objective function is then either maximised or minimised
over this feasible region to give an optimal answer for the defined system.
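As a concrete illustration of the DEA calculation, the sketch below uses a simplifying assumption: with a single input and a single output, a DMU's efficiency reduces to its output-to-input ratio relative to the best ratio in the set, whereas the general multi-input, multi-output case solves one linear programme per DMU. The branch data are hypothetical.

```python
def dea_efficiency(dmus):
    """Relative efficiency for the single-input, single-output case:
    each DMU's output/input ratio divided by the best ratio in the set.
    (General DEA instead solves one linear programme per DMU.)"""
    ratios = {name: out / inp for name, (inp, out) in dmus.items()}
    best = max(ratios.values())
    return {name: ratio / best for name, ratio in ratios.items()}

# Hypothetical branches: (staff hours as input, transactions as output).
branches = {"A": (10, 50), "B": (20, 80), "C": (8, 48)}
print(dea_efficiency(branches))  # C attains the best ratio, so efficiency 1.0
```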
2. Does the approach model participants’ subjective interpretations of the world?
SSM builds models of the human activity system, in which a purposeful system is
modelled in the systems world from multiple perspectives, so subjectivity is a key feature of
what is elicited (Checkland, 1981). SODA builds models from ‘different subjective views of
the situation as expressed through individual interviews’ (Eden, 1995, p. 304). SCA models
represent subjective information (Friend & Hickling, 2005). For SD3, all applications elicit
participants’ interpretation about the problem situation as the inputs to a model, not an
objective representation of reality. VSM takes a system-in-the-world position in which the
laws underpinning the model, such as requisite variety, exist (Sinn, 1998), and which
objectively models an external reality. DES, DEA and LP all build models of external systems that are
objectively described.
3. Does the approach seek to build a holistic understanding of the system?
SSM, SODA, SCA, VSM and SD3 all prioritise the study of whole entities before the
study of parts. They codify system properties to represent how the system being studied
relates to the whole. This allows decision-makers to consider systemic properties. For
example, SCA uses the shaping mode to make judgments about the connectedness between
one field of choice and another (Friend & Hickling, 2005). VSM analyses information flows
and communication links between different parts of the system (Beer, 1981). SD3 models
seek to understand the whole system, as was demonstrated by Lane et al. (2016), who sought
to understand the unintended consequences of decisions.
DEA and LP do not attempt to gain a holistic understanding of the situation. These
models reduce complexity by breaking the system into constituent, related parts that are
formulated in a mechanistic way. The model can only give predefined answers about, for
example, an optimal solution or a sensitivity analysis, without understanding how this relates
to the whole.
DES is more flexible: a model may only seek to give a mechanistic understanding of
the world with a single output, such as queuing time, or it can be used in a more holistic
way, as in Robinson (2001), where efforts were made to understand “why a particular change
led to an improvement or worsening of the situation” (p. 909).
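To make the mechanistic single-output case concrete, the sketch below is a deliberately minimal discrete event simulation of a single-server queue, stepping through arrival and service events in time order and reporting only the mean queuing time. All names and parameter values are illustrative, not drawn from the paper.

```python
import random

def mean_queue_wait(n_customers, arrival_rate, service_rate, seed=0):
    """Minimal single-server DES: entities arrive at random intervals,
    wait if the server is busy, are served, and leave. The only output
    is the mean queuing time, i.e. the mechanistic single-output case."""
    rng = random.Random(seed)
    now = 0.0             # arrival clock
    server_free_at = 0.0  # time the server next becomes idle
    total_wait = 0.0
    for _ in range(n_customers):
        now += rng.expovariate(arrival_rate)  # next arrival event
        start = max(now, server_free_at)      # queue if server is busy
        total_wait += start - now             # queuing time for this entity
        server_free_at = start + rng.expovariate(service_rate)
    return total_wait / n_customers

# Illustrative run: one arrival per minute on average, served at two per minute.
print(mean_queue_wait(2000, arrival_rate=1.0, service_rate=2.0))
```

As utilisation rises (service_rate approaching arrival_rate), the reported mean wait grows sharply; a holistic use in the spirit of Robinson (2001) would interrogate why a configuration change improves or worsens the system rather than report the single number.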
Pillar 1: System characteristics
Q1 Does the approach identify a system to model?
Q2 Does the approach model participants’ subjective interpretations of the world?
Q3 Does the approach seek to build a holistic understanding of the system?

Question  SSM  SODA  SCA  VSM  SD3  DES    DEA  LP
Q1        yes  yes   yes  yes  yes  yes    yes  yes
Q2        yes  yes   yes  no   yes  no     no   no
Q3        yes  yes   yes  yes  yes  often  no   no

Figure 5: Answers to pillar 1 questions
5.2. Pillar 2: Knowledge and involvement of stakeholders
4. Does the approach build a qualitative model?
Qualitative models are built in SSM, SODA, SCA and VSM, while SD3 builds both
quantitative and qualitative models. In SD3, the qualitative model shows the
interrelationships between different elements of a system by qualitatively mapping the
feedback loops between these different elements. Quantitative data is then collected to show
the stocks and flows between the different elements of the system, which is the input for a
quantitative model. DEA, DES and LP build objective models to represent the situation using
quantitative variables that interconnect.
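The level-and-rate step described for SD3 can be sketched as a simple Euler integration of one stock. This is a minimal illustration only; the staffing numbers and variable names below are hypothetical, not taken from the cited applications.

```python
def simulate_stock(initial, inflow, outflow_frac, steps, dt=1.0):
    """Euler-integrate one level (stock): a constant inflow rate and an
    outflow proportional to the level, the quantitative counterpart of a
    balancing feedback loop in a causal loop diagram."""
    level, history = float(initial), [float(initial)]
    for _ in range(steps):
        level += (inflow - outflow_frac * level) * dt  # rate equation
        history.append(level)
    return history

# Hypothetical staffing model: 10 hires per period, 5% attrition per period.
trajectory = simulate_stock(initial=100, inflow=10.0, outflow_frac=0.05, steps=50)
# The level rises towards the equilibrium inflow / outflow_frac = 200.
```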
5. Does the model building involve the facilitation of participants?
SSM, as described by Checkland and Scholes (1990), can be used in facilitated Mode
1 as well as non-facilitated Mode 2. Likewise, VSM and SODA models can be built in Modes
1 or 2. SCA models are typically built in Mode 1. All SD3 approaches have elements of
facilitation. DES models are often built in expert mode with no facilitation; however, there is
an established body of literature in which these models are built using facilitation (e.g.
Robinson, 2001).
The selection of input-output variables to build DEA models is usually based on the result
of conversations between analysts and experts in the units being assessed, supported by
quantitative analysis (Casu et al., 2005). The model is then built in expert mode. Casu et al.
(2005) used Journey Making (derived from SODA) with a group of stakeholders to determine
input-output variables. This facilitated approach constitutes a different data collection
technique. However, while the model built with the stakeholders to identify input-output
variables was made through facilitation, the DEA model was built in expert mode. Therefore,
the dominant narrative is that DEA does not build models in facilitator mode. Likewise, LP
models are not built in a facilitated way.
6. Does the model building enhance participants’ learning about the situation?
Learning arises from participants sharing knowledge with each other, allowing them to
acquire and create knowledge by synthesising views (Edwards, Ababneh, Hall & Shaw,
2009). SSM does this by encouraging participants to discuss different worldviews during
group modelling, and encourages learning about the system (Checkland, 1985b). SODA
enables participants to share knowledge through the building of composite or group causal
maps. Friend and Hickling (2005) suggest that SCA groups should adopt open technology so
that many can share ideas, allowing participation to be interactive and learning to be
enhanced. Thompson et al. (2016) show that participants of facilitated SD3 workshops
experience critical learning incidents during the conceptualization phase. Kotiadis and
Mingers (2014) also show how participants can learn about the problem context during model
building and specification when it is facilitated in DES.
The purpose of the model building phase in VSM, DEA and LP is not to act as a vehicle
for participants to learn through facilitation, although learning may certainly arise from
that phase. Rather, the model is built so that the formal outcome of the analysis can
generate learning about the context being analysed; the focus is on clients learning about
potential solutions from the outputs.
7. Does the approach aim to develop buy-in to politically feasible outcomes?
To build buy-in and enhance political feasibility, approaches increase participation
through enveloping stakeholders in the process and addressing issues of power within the
problem situation. SSM envelops stakeholders by building different models with them during
the intervention. SODA establishes a joint understanding of a problem through building
shared group maps. These maps are either a composite of individual cognitive maps or a
single map built by a number of participants. Both ‘can provide a means of enabling group
members to jointly understand the perspectives of others, reflect on the emergent issues that
are surfaced from them and begin to negotiate an agreed strategic direction’ (Eden &
Ackermann, 1998 p 73). SCA builds shared models to increase understanding of a situation.
For example, decision graphs represent the linkages between different decision areas and the
focus for the group, while the different options are represented on a compatibility grid
(Friend, 2001). In addition, SCA integrates a policy stream that involves managing the
conflicting positions of those involved to develop commitment to the results (Friend &
Hickling, 2005).
SD3 authors report aiming to develop buy-in of participants and searching for
outcomes that can be implemented (e.g. Lane et al., 2016). DES often includes participants in
the process to develop fuller recommendations and increase the likelihood that they are
implemented (e.g. Robinson, 2001).
VSM considers power in the systems it models with the aim of understanding
business functions rather than increasing buy-in from powerful stakeholders. The purpose of
DEA and LP is not to explicitly envelop stakeholders or manage power relationships through
their modelling process to build buy-in to the outcomes.
Pillar 2: Knowledge and involvement of stakeholders
Q4 Does the approach build a qualitative model?
Q5 Does the model building involve the facilitation of participants?
Q6 Does the model building enhance participants’ learning about the situation?
Q7 Does the approach aim to develop buy-in to politically feasible outcomes?

Question  SSM  SODA  SCA  VSM  SD3  DES    DEA  LP
Q4        yes  yes   yes  yes  yes  no     no   no
Q5        yes  yes   yes  no   yes  often  no   no
Q6        yes  yes   yes  no   yes  often  no   no
Q7        yes  yes   yes  no   yes  often  no   no

Figure 6: Answers to pillar 2 questions
5.3. Pillar 3: Values of model building
8. Is credibility established in models by preserving multiple participant contributions?
During SSM multiple perspectives are accommodated, preserving multiple
contributions by modelling a range of root definitions and conceptual models. SODA
preserves multiple views in cognitive maps, stitching together participant models to form a
new model that encompasses multiple views (Smith & Shaw, 2018). SCA builds group
models via participants writing out their individual ideas so that competing contributions can
be compared, merged or preserved. This ensures that each participant feels that they
participated in building the model (Friend & Hickling, 2005).
The final models in VSM, SD3, DES, DEA and LP typically represent a single
(objective) reality; therefore, their purpose is not to represent different social realities. These
models may have input from several stakeholders, who each start off with different mental
models of the situation; however, there will be convergence to a single model that represents
the issue so competing perspectives are not retained. In some approaches, the final model can
be reconfigured, for example, using sensitivity analysis to show different scenarios. However,
these do not represent different social realities being embedded in a single model in the same
sense as SSM, SODA and SCA.
9. Is the model building process suitably generic so it can be transferred to multiple
problem contexts?
All eight OR approaches discussed here have been successfully deployed in multiple
and varied problem situations. For case studies of SSM, see Checkland and Scholes (1990);
for SODA, see Eden and Ackermann (1998); for SCA, see Friend and Hickling (2005). For
DEA, DES and LP, see Williams (2008); for SD3, see the three papers quoted above; for
VSM, see Beer (1981).
10. Does the model building process aim to create confidence in the outcome through
procedural rationality?
PSMs have to demonstrate that they are procedurally rational without hard data to
prove that the outcome is economically rational; therefore, transparency in the model
building process and the involvement of participants are essential. This is explicitly the
case for SSM, SODA, SCA, and SD3, which involve participants in the model building process. VSM does not
explicitly involve participants in the model building process and therefore must build
confidence by relying on the strength of the VSM and cybernetic principles.
DEA and LP can show economic rationality through hard data, and the reliability of
outcomes is accepted based on proof of outcomes, not just inputs. DES uses a combination of
both economic and procedural rationality, sometimes involving participants in the model
building process to increase confidence in outcomes.
11. Does the model act as an audit trail that has been validated through collaborative
enquiry?
The audit trail of models and other artefacts (e.g. reports) for all these OR approaches
should show the rationale behind how and why outcomes and outputs were reached. The
process of validating the audit trail through collaborative enquiry varies according to the
approach. In SSM, SODA and SCA, participants build models and the audit trail so will have
seen it develop throughout the process. Thus, participants validate the audit trail through
intensive collaborative enquiry. The audit trail can also be recorded, either through software
(such as Decision Explorer (SODA) and STRAD (SCA)), or by photographing models drawn
on paper.
VSM, DEA, DES and LP do not offer the same opportunity for collaborative enquiry
between stakeholders to continuously validate an audit trail because the model is likely to
have been built by an expert modeller. For these approaches, validation ensures that the
model accurately and objectively represents the system being modelled. For example, in
DES, although there are instances in which the model will be built with participant
facilitation thereby encouraging collaborative enquiry, validation is completed by simulating
a current state of the system and comparing this with historical data (Greasley & Smith,
2017).
Pillar 3: The values of model building
Q8 Is credibility established in models by preserving multiple participant contributions?
Q9 Is the model building process suitably generic so it can be transferred to multiple problem contexts?
Q10 Does the model building process aim to create confidence in the outcome through procedural rationality?
Q11 Does the model act as an audit trail that has been validated through collaborative enquiry?

Question  SSM  SODA  SCA  VSM  SD3  DES    DEA  LP
Q8        yes  yes   yes  no   no   no     no   no
Q9        yes  yes   yes  yes  yes  yes    yes  yes
Q10       yes  yes   yes  no   yes  often  no   no
Q11       yes  yes   yes  no   yes  no     no   no

Figure 7: Answers to pillar 3 questions

SD3 completes validation via both means. For example, Torres et al. (2017) shared
notes with participants during facilitated workshops. Quantitative SD models are validated in
the same way as traditional OR, checking the accuracy of outputs objectively against a
current state, or validating individual relationships.
5.4. Pillar 4: Structured analysis
12. Does the approach structure knowledge through different stages of analyses?
SSM, SODA, SCA, SD3, DES, DEA and LP all structure knowledge through
different stages of analysis. The approaches give guidance on the order of using the stages,
but they also give flexibility to revisit or switch between stages. For example, SSM has a 7-
stage process (Checkland & Scholes, 1990); SODA can be presented as a step-by-step guide
(Ackermann, Eden, & Brown, 2005); SCA has shaping, choosing, comparing and designing
phases (Friend & Hickling, 2005); SD3 traditionally has a 6-stage process (see Torres et al.,
2017); and DES, DEA and LP have generic phases such as problem formulation and model
analysis (Pidd, 2009).
VSM is neither a staged methodology nor a method; it is an abstract model or
blueprint for helping to design the structure of organisations (Mingers & Rosenhead, 2001).
Some authors propose staged approaches using principles from VSM, such as viable systems
diagnosis (Flood & Zambuni, 1990); however viable systems diagnosis is not the focus of
this paper.
13. Does the approach have distinct phases for divergent and convergent thinking?
SSM, SODA and SCA all have examples of structuring both types of thinking: SSM
encourages divergent thinking by looking at the transformation from different world views.
SODA facilitators encourage participants to expand the richness of a cognitive or group map.
SCA decision graphs help participants to consider how a range of issues are connected.
Thompson et al. (2016) suggest that divergent thinking takes place during SD3 model
definition and conceptualisation stages. DES, DEA and LP employ divergent thinking during
the model specification/problem formulation stage. All of these approaches exhibit
convergent thinking when participants select the most relevant and accurate information to
build the final model. VSM is a model and therefore does not specify different forms of
thinking.
Question   SSM   SODA   SCA   VSM   SD3   DES   DEA   LP
Q12        yes   yes    yes   no    yes   yes   yes   yes
Q13        yes   yes    yes   no    yes   yes   yes   yes

Pillar 4: Structured analysis
Q12 Does the approach structure knowledge through different stages of analysis?
Q13 Does the approach have distinct phases for divergent and convergent thinking?

Figure 8: Answer to pillar 4 questions

6. Discussion
Only the three established PSMs answered yes to all questions and, according to the
framework, should be classified as PSMs. Even an interpretation of SD at the soft end of the
technique resulted in question 8—about credibility built through preserving different
participants’ contributions, a key aspect to PSMs—being answered no. Thus, the framework
distinguishes the application of SD as described in the SD3 papers from PSMs. Had we not
taken such a soft interpretation of SD, we would expect more answers of often or no. Section
5 shows that the framework can identify PSMs from among other OR approaches.
To triangulate our findings we also applied the framework to the two PSMs from
Rosenhead & Mingers (2001b) not tested in Section 5, Robustness Analysis and Drama
Theory. Data for this was captured from chapters 8–11 of Rosenhead &amp; Mingers (2001b), as
well as from Rosenhead (1980) and Bryant (2002). This evidence suggests both
approaches can answer yes to all 13 questions. Below we discuss the wider implications and
contributions of the framework along with reflections on the framework and its development.
First, to understand how the four pillar framework contributes to the debate on PSMs,
Figure 9 compares the 13 characteristics in the framework against the six assumptions of the
alternative paradigm identified by Rosenhead and Mingers (2001a) from Figure 1. The six
assumptions are shown on the left with the 13 characteristics in the corresponding row of the
right-hand column. The last section lists five characteristics not covered by any of the
original assumptions.
Our findings support the assertion by Rosenhead and Mingers (2001a) that the
alternative paradigm makes different assumptions to traditional OR regarding problem-
solving and the nature of problems. We also agree with them that defining the characteristics
26
of PSMs as ‘not traditional OR’ is limited as some characteristics of PSMs are shared with
hard OR. Thus, we suggest that the theoretical assumptions made by Rosenhead and Mingers
(2001a) can be further improved to provide a stronger basis on which to consider the claims
by new approaches that may also belong to the family of PSMs, such as Visioning Choices,
WASAN, Wuli–Shili–Renli and DPSIR.
All eight approaches answered yes to questions 1 and 9: 1 – "Does the approach
identify a system to model?" and 9 – "Is the model building process suitably generic so it
can be transferred to multiple contexts?". As these questions do not distinguish PSMs from
non-PSMs, they may seem superfluous to the framework. However, the purpose and
contribution of this paper is to search for common characteristics of PSMs, of which these
are central. That these characteristics may also be characteristics of wider OR does not
exclude them from being characteristics of PSMs. Interestingly, neither question 1 nor
question 9 corresponds to any of Rosenhead &amp; Mingers' (2001a) assumptions of the PSM
paradigm (Figure 9); this is a difference between the aim of our work and that of Rosenhead and Mingers,
who sought opposing assumptions of PSMs and traditional OR.

Characteristics of Rosenhead and Mingers' Alternative Paradigm | Characteristics from the Four Pillar Framework
Non-optimizing; seeking alternative solutions which are acceptable on separate dimensions, without trade-offs. | Q7: Buy-in to politically feasible outcomes.
Reduced data demands, achieved by greater integration of hard and soft data with social judgements. | Q4: Build qualitative models. Q10: Shows procedural rationality. Q11: Builds validated audit trail of decision making.
Conceptualizes people as active subjects. | Q2: Models subjective interpretations. Q6: Facilitates participants' involvement. Q8: Credibility through preserving participant contributions.
Facilitates planning from the bottom-up. | Q6: Participant learning.
Accepts uncertainty, and aims to keep options open. | Q3: Holistic understanding.
Simplicity and transparency, aimed at clarifying the terms of conflict. | Not included.
Not covered by the original assumptions | Q1: Identify system to model. Q9: Generic model building approach. Q12: Different stages of analysis. Q13: Phases of divergent and convergent thinking.

Figure 9: Rosenhead and Mingers' (2001a) assumptions compared with the four pillar framework
Next, there are five instances in which the framework answers often, all relating to DES.
To understand this we found it useful to think in terms of Pidd's (1998) two streams of DES
projects: the simulation problem (concerned with the technical aspects of the model) and the
simulation project (concerned with the aims for the context and the modellers). Where a
characteristic related to the technical aspects of an approach, answers were consistently
either yes or no (Questions 1, 2, 4, 8, 9, 12 &amp; 13). Where a characteristic related to the
context or to the modeller's view of their own or the participants' roles, this brought a diversity of
application that was sometimes best answered with often (Questions 3, 5, 6, 7, 10 &amp; 11). This
was not surprising given that DES recognises the potential for qualitative modelling to make
initial sense of the system to be modelled quantitatively (Kotiadis & Mingers, 2006;
Robinson, 2007). Like some other traditional OR approaches, DES has scope to alter its
method of application and exhibit some of the softer characteristics of PSMs from the
framework. For example, DES models have been built using facilitation (Robinson, 2001),
and so may answer yes to questions from the second pillar. This is not saying that Robinson’s
(2001) use of DES constitutes a PSM but that the epistemological assumptions underpinning
this work are different to those assumed in the exclusively harder applications of DES. This
may result in the facilitated use of DES answering yes to more questions in the framework
than a purely hard application of DES. The inverse is true for the established PSMs, which
also have examples of non-standard use (Mingers, 2003). For example, Shaw, Smith and
Scully (2017) use tools from SODA to model secondary data (a non-standard use). Therefore,
validation of their models was not done through collaborative enquiry, meaning that the
framework may not classify the application as a PSM. We argue that, given a spectrum of
behaviour relating to the project elements, the adaptability of the framework is a strength.
PSMs should not be classified based on their historic definitions, but on the assumptions of
the approach in context and how it is used and adapted for a particular application.
It is not surprising that the SD3 papers answered only yes and no. This was due to the
clarity provided by the three selected EJOR SD papers, which were chosen to provide a
dominant narrative. A wider range of SD papers (with a wider interpretation of SD) could
return a different result with more often answers, reflecting the more balanced discussion in
the wider literature. Restricting the definition of SD in this way allowed for a more thorough
examination of the framework, as we took the view of SD that was as close as possible to
PSMs, thus justifying this action to support the aims of the paper.
The structure of the framework has a heavy reliance on Guba and Lincoln (1994,
2005) through relational coding and how these codes are understood for PSMs through
Mingers (2003). We feel this grounding in higher-level theory is necessary to differentiate
between the pillars and add clarity about the different ways PSMs are similar to each other
(specifically, the entities included in a model, the process by which knowledge is created, the
values of good research and the structure of enquiry).
We note the circular argument in classifying the three PSMs using the framework.
The literature on these three PSMs led to the identification of the characteristics and the
resulting questions, which were then used to decide if the PSMs satisfied their own criteria.
Critics could argue that this self-referencing approach does not demonstrate that the
established approaches are PSMs, but merely that they have the same characteristics already
identified in the literature. However, this misunderstands why the framework was applied to
the eight approaches. The framework was developed to understand common characteristics of
PSMs, and the inclusion of three PSMs was a test of the framework rather than a test of the
selected approaches. The important finding is the ability of the framework to identify the
PSMs, showing that they have common and defining characteristics.
The 13 questions each focus on a separate characteristic and should each be
considered independently; however, by their nature, the questions are not discrete – they are
interrelated. This is shown in two ways. First, to understand if an approach has a claim to
being a PSM, all 13 questions must be considered. Second, not every conceivable
combination of answers to the 13 questions is possible. For example, there is a relationship
between the answers to Questions 2 and 8. If an approach answers yes for Question 2 then
it may (but need not) answer yes for Question 8; however, if it answers no for Question 2 then
the approach must also answer no for Question 8.
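The interplay between the all-yes classification rule and this kind of dependency between questions can be sketched in code. This is purely our illustration, not part of the framework itself; the answers encoded below are only the pillar 3 and 4 results reported in Figures 7 and 8, so pillar 1 and 2 answers would be needed for a full classification.

```python
# Illustrative sketch (not the paper's method): encode the framework's answers
# as data, apply the all-"yes" classification rule, and check the Q2/Q8
# dependency. Pillar 3 and 4 answers are transcribed from Figures 7 and 8.

PILLAR_3_4 = {  # approach -> answers to Q8-Q13
    "SSM":  {"Q8": "yes", "Q9": "yes", "Q10": "yes",   "Q11": "yes", "Q12": "yes", "Q13": "yes"},
    "SODA": {"Q8": "yes", "Q9": "yes", "Q10": "yes",   "Q11": "yes", "Q12": "yes", "Q13": "yes"},
    "SCA":  {"Q8": "yes", "Q9": "yes", "Q10": "yes",   "Q11": "yes", "Q12": "yes", "Q13": "yes"},
    "VSM":  {"Q8": "no",  "Q9": "yes", "Q10": "no",    "Q11": "no",  "Q12": "no",  "Q13": "no"},
    "SD3":  {"Q8": "no",  "Q9": "yes", "Q10": "yes",   "Q11": "yes", "Q12": "yes", "Q13": "yes"},
    "DES":  {"Q8": "no",  "Q9": "yes", "Q10": "often", "Q11": "no",  "Q12": "yes", "Q13": "yes"},
    "DEA":  {"Q8": "no",  "Q9": "yes", "Q10": "no",    "Q11": "no",  "Q12": "yes", "Q13": "yes"},
    "LP":   {"Q8": "no",  "Q9": "yes", "Q10": "no",    "Q11": "no",  "Q12": "yes", "Q13": "yes"},
}

def all_yes(answers):
    """Classification rule: a PSM must answer yes to every question considered."""
    return all(a == "yes" for a in answers.values())

def q2_q8_consistent(answers):
    """Dependency: if Q2 is answered no, Q8 must also be answered no."""
    if answers.get("Q2") == "no":
        return answers.get("Q8") == "no"
    return True

# Only the three established PSMs pass even this partial (pillar 3 and 4) check.
candidates = [name for name, ans in PILLAR_3_4.items() if all_yes(ans)]
print(candidates)  # -> ['SSM', 'SODA', 'SCA']
```

Encoding the answers as data makes the interrelation explicit: an inconsistent combination (e.g. Q2 no with Q8 yes) is rejected before the classification rule is even applied.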
7. Conclusion
Through an exploratory review of the literature, this paper has identified
characteristics of PSMs and developed them into a theoretical framework that clarifies the
similarities between PSMs and their underpinning assumptions, and helps us understand their
unique identity as a family of OR approaches. It aims to prompt critical conversation about PSMs by
questioning and expanding on the dominant assumptions underpinning what it is to be a PSM
as first described 40 years ago. Through the framework, confidence can be placed in claims
of being a PSM made by new or candidate approaches, as the framework establishes the
conditions of evidence. By revisiting the assumptions underpinning PSMs, the framework can help to
refocus the development of PSM approaches and theory.
We recognise that this work is not without limitations. First, the paper takes a
particular view of PSMs through its selection criteria of drawing from the two most
prominent journals that regularly publish methodological and case study based papers (EJOR
and JORS) (Ranyard, Fildes, &amp; Hu, 2015). As Ranyard et al. (2015) also found, US-based
journals tend to overlook PSMs; even INFORMS-based journals "have refused to engage
with the topic, at least the formal problem structuring component" (p. 11). OMEGA is an
exception in publishing PSM case study papers (although far fewer of them), but most
of its PSM papers are written by European researchers who also publish in EJOR and
JORS; that is, there is not a distinct PSM literature available only outside of the dataset. Second,
the framework offers a view of PSMs which is a product of the literature considered and the
coding process adopted. We appreciate that some readers may not agree with every aspect of
the framework or identify aspects that they feel are missing. However, while we may have set
out to answer ‘what is a PSM’, we realise the resulting answer is more like Rosenhead’s
characteristics of PSMs in Figure 1; that is, it is a blueprint for debate. Third, placing an
approach on the three-point scale was easier for some questions than for others. This is
particularly true where there is a diversity of application of an approach and, to overcome this
spread (as we did for system dynamics), it is necessary to reduce the spectrum of use by
tightly specifying what the approach is. However, as the framework was designed to identify
characteristics of PSMs, it is less relevant how the non-PSMs fared, as this is not indicative of
the intended use of the framework.
In terms of future work, this framework could be further tested on established PSMs
such as Robustness Analysis (Rosenhead, 1978, 1980) and Drama Theory (Bryant, 1997).
Also, it could be applied to a range of qualitative approaches developed since Rosenhead and
Mingers published their work in 2001 to identify if they sit comfortably in the same family as
the methods that established the field of PSMs. The authors also repeat the calls of other
researchers cited in this paper for more research and development of theory that spans across
PSMs rather than focussing on a specific approach.
8. References

Ackermann, F. (2012). Problem structuring methods "in the Dock": Arguing the case for Soft OR. European Journal of Operational Research, 219(3), 652–658.
Ackermann, F., Eden, C., &amp; Brown, I. (2005). The Practice of Making Strategy: A Step-by-Step Guide. London: Sage Publications.
Ackoff, R. L. (1979). The Future of Operational Research is Past. The Journal of the Operational Research Society, 30(2), 93–104.
Beer, S. (1981). Brain of the Firm (2nd ed.). Chichester: John Wiley and Sons Ltd.
Bell, S. (2012). DPSIR = A Problem Structuring Method? An exploration from the "Imagine" approach. European Journal of Operational Research, 222(2), 350–360.
Bryant, J. (1997). The Plot Thickens: Understanding Interaction Through the Metaphor of Drama. Omega, The International Journal of Management Science, 25(3), 255–266.
Bryant, J. W. (2002). Confrontations in health service management: Insights from drama theory. European Journal of Operational Research, 142, 610–624.
Casu, B., Shaw, D., &amp; Thanassoulis, E. (2005). Using a group support system to aid input-output identification in DEA. The Journal of the Operational Research Society, 56(12), 1363–1372.
Champion, D., &amp; Wilson, J. M. (2010). The impact of contingency factors on validation of problem structuring methods. The Journal of the Operational Research Society, 61(9), 1420–1431.
Checkland, P. (1981). Systems Thinking, Systems Practice. Chichester: John Wiley &amp; Sons.
Checkland, P. (1983). O.R. and the Systems Movement: Mappings and Conflicts. The Journal of the Operational Research Society, 34(8), 661–675.
Checkland, P. (1985a). Achieving "Desirable and Feasible" Change: An Application of Soft Systems Methodology. The Journal of the Operational Research Society, 36(9), 821–831.
Checkland, P. (1985b). From Optimizing to Learning: A Development of Systems Thinking for the 1990s. The Journal of the Operational Research Society, 36(9), 757–767.
Checkland, P., &amp; Scholes, J. (1990). Soft Systems Methodology in Action. Chichester: Wiley.
Checkland, P., &amp; Winter, M. (2005). Process and content: two ways of using SSM. The Journal of the Operational Research Society, 57(12), 1435–1441.
Churchman, C. W. (1967). Wicked Problems. Management Science, 14(4), B141–B142.
Davis, J., MacDonald, A., &amp; White, L. (2010). Problem-structuring methods and project management: an example of stakeholder involvement using Hierarchical Process Modelling methodology. The Journal of the Operational Research Society, 61(6), 893–904.
Eden, C. (1995). On evaluating the performance of "wide-band" GDSS's. European Journal of Operational Research, 81(2), 302–311.
Eden, C., &amp; Ackermann, F. (1998). Making Strategy: The Journey of Strategic Management. Sage Publications.
Eden, C., &amp; Ackermann, F. (2001). SODA - The Principles. In J. Rosenhead &amp; J. Mingers (Eds.), Rational Analysis for a Problematic World Revisited (2nd ed., pp. 20–42). Chichester: Wiley.
Eden, C., &amp; Ackermann, F. (2006). Where next for problem structuring methods. The Journal of the Operational Research Society, 57(7), 766–768.
Edwards, J. S., Ababneh, B., Hall, M., &amp; Shaw, D. (2009). Knowledge management: a review of the field and of OR's contribution. The Journal of the Operational Research Society, 60, S114–S125.
Flood, R. L., &amp; Zambuni, S. A. (1990). Viable Systems Diagnosis. 1. Application with a Major Tourism Services Group. Systems Practice, 3(3), 225–248.
Franco, L. A. (2013). Rethinking Soft OR interventions: Models as boundary objects. European Journal of Operational Research.
Franco, L. A., &amp; Montibeller, G. (2010). Facilitated modelling in operational research. European Journal of Operational Research, 205(3), 489–500.
Friend, J. K. (2001). The Strategic Choice Approach. In J. Rosenhead &amp; J. Mingers (Eds.), Rational Analysis for a Problematic World Revisited (2nd ed., pp. 115–149). Chichester: Wiley.
Friend, J. K., &amp; Hickling, A. (2005). Planning Under Pressure: The Strategic Choice Approach (3rd ed.). Oxford: Elsevier.
Goodwin, P., &amp; Wright, G. (2004). Decision Analysis for Management Judgement (3rd ed.). Chichester: Wiley.
Greasley, A., &amp; Smith, C. M. (2017). Using activity based costing and simulation to reduce cost at a Police communications centre. Policing: An International Journal of Police Strategies &amp; Management, 40(2).
Guba, E. G., &amp; Lincoln, Y. S. (1989). Fourth Generation Evaluation. Newbury Park, CA: Sage.
Guba, E. G., &amp; Lincoln, Y. S. (1994). Competing paradigms in qualitative research. In N. K. Denzin &amp; Y. S. Lincoln (Eds.), Handbook of Qualitative Research (pp. 105–117). Thousand Oaks, CA: Sage.
Guba, E. G., &amp; Lincoln, Y. S. (2005). Paradigmatic Controversies, Contradictions, and Emerging Confluences. In N. K. Denzin &amp; Y. S. Lincoln (Eds.), The Sage Handbook of Qualitative Research (3rd ed., pp. 191–216). Thousand Oaks, CA: Sage Publications.
Jackson, M. C. (2006). Beyond problem structuring methods: reinventing the future of OR/MS. The Journal of the Operational Research Society, 57(7), 868–878.
Jackson, M. C., &amp; Keys, P. (1984). Towards a System of Systems Methodologies. The Journal of the Operational Research Society, 35(6), 473–486.
Johnson, P., &amp; Johnson, G. (2002). Facilitating group cognitive mapping of core competencies. In A. S. Huff &amp; M. Jenkins (Eds.), Mapping Strategic Knowledge (pp. 220–236). London: Sage.
Keeney, R. L., &amp; Raiffa, H. (1993). Decisions with Multiple Objectives: Preferences and Value Tradeoffs. Cambridge: Cambridge University Press.
Keys, P. (2006). On Becoming Expert in the Use of Problem Structuring Methods. The Journal of the Operational Research Society, 57(7), 822–829.
Kirby, M. W. (2000). Operations Research Trajectories: The Anglo-American Experience from the 1940s to the 1990s. Operations Research, 48(5), 661–670.
Kirby, M. W. (2007). Paradigm Change in Operations Research: Thirty Years of Debate. Operations Research, 55(1), 1–13.
Kotiadis, K., &amp; Mingers, J. (2006). Combining PSMs with hard OR methods: the philosophical and practical challenges. The Journal of the Operational Research Society, 57(7), 856–867.
Kotiadis, K., &amp; Mingers, J. (2014). Combining problem structuring methods with simulation: The philosophical and practical challenges. In S. Brailsford, L. Churilov, &amp; B. Dangerfield (Eds.), Discrete-Event Simulation and System Dynamics for Management Decision Making (pp. 52–75). Chichester: John Wiley &amp; Sons Ltd.
Lane, D. C., Munro, E., &amp; Husemann, E. (2016). Blending systems thinking approaches for organisational analysis: Reviewing child protection in England. European Journal of Operational Research, 251, 613–623.
Li, Y., &amp; Zhu, Z. (2014). Soft OR in China: A critical report. European Journal of Operational Research, 232(3), 427–434.
Midgley, G., Cavana, R. Y., Brocklesby, J., Foote, J. L., Wood, D. R. R., &amp; Ahuriri-Driscoll, A. (2013). Towards a new framework for evaluating systemic problem structuring methods. European Journal of Operational Research, 229(1), 143–154.
Mingers, J. (1992). Recent Developments in Critical Management Science. The Journal of the Operational Research Society, 43(1), 1–10.
Mingers, J. (2000). Variety is the spice of life: combining soft and hard OR/MS methods. International Transactions in Operational Research, 7(6), 673–691.
Mingers, J. (2003). A classification of the philosophical assumptions of management science methods. The Journal of the Operational Research Society, 54(6), 559–570.
Mingers, J. (2011). Soft OR comes of age—but not everywhere! Omega, 39(6), 729–741.
Mingers, J., &amp; Brocklesby, J. (1997). Multimethodology: Towards a framework for mixing methodologies. Omega, 25(5), 489–509.
Mingers, J., &amp; Rosenhead, J. (2001). An Overview of Related Methods. In J. Rosenhead &amp; J. Mingers (Eds.), Rational Analysis for a Problematic World Revisited (2nd ed., pp. 267–288). Chichester: John Wiley and Sons Ltd.
Mingers, J., &amp; White, L. (2010). A review of the recent contribution of systems thinking to operational research and management science. European Journal of Operational Research, 207(3), 1147–1161.
Munro, I., &amp; Mingers, J. (2002). The use of multimethodology in practice—results of a survey of practitioners. The Journal of the Operational Research Society, 53(4), 369–378.
O'Brien, F., &amp; Meadows, M. (2006). Developing a visioning methodology: Visioning Choices for the future of operational research. The Journal of the Operational Research Society, 58(5), 557–575.
Phillips, L. D., &amp; Phillips, M. C. (1993). Facilitated Work Groups: Theory and Practice. The Journal of the Operational Research Society, 44(6), 533–549.
Pidd, M. (1998). Computer Simulation in Management Science (4th ed.). Chichester: Wiley.
Pidd, M. (2009). Tools for Thinking: Modelling in Management Science (3rd ed.). Chichester: Wiley.
Preece, G., Shaw, D., &amp; Hayashi, H. (2013). Using the Viable System Model (VSM) to structure information processing complexity in disaster response. European Journal of Operational Research, 224(1), 209–218.
Ranyard, J. C., Fildes, R., &amp; Hu, T. (2015). Reassessing the scope of OR practice: The Influences of Problem Structuring Methods and the Analytics Movement. European Journal of Operational Research, 245(1), 1–13.
Rittel, H. (1972). On the planning crisis: systems analysis of the "first and second generations".
Robinson, S. (2001). Soft with a hard centre: discrete-event simulation in facilitation. The Journal of the Operational Research Society, 52(8), 905–915.
Robinson, S. (2007). PSMs: looking in from the outside. Journal of the Operational Research Society, 58(5), 689–691.
Rosenhead, J. (1978). An Education in Robustness. The Journal of the Operational Research Society, 29(2), 105–111.
Rosenhead, J. (1980). Planning under Uncertainty: II. A Methodology for Robustness Analysis. Journal of the Operational Research Society, 31(4), 331–341.
Rosenhead, J. (Ed.). (1989). Rational Analysis for a Problematic World. Chichester: John Wiley &amp; Sons.
Rosenhead, J. (1996). What's the Problem? An Introduction to Problem Structuring Methods. Interfaces, 26(6), 117–131.
Rosenhead, J. (2006). Past, present and future of problem structuring. The Journal of the Operational Research Society, 57(7), 759–765.
Rosenhead, J., &amp; Mingers, J. (2001a). A New Paradigm of Analysis. In J. Rosenhead &amp; J. Mingers (Eds.), Rational Analysis for a Problematic World Revisited (2nd ed., pp. 1–20). Chichester: Wiley.
Rosenhead, J., &amp; Mingers, J. (Eds.). (2001b). Rational Analysis for a Problematic World Revisited. Chichester: Wiley.
Shaw, D. (2003). Evaluating electronic workshops through analysing the "brainstormed" ideas. The Journal of the Operational Research Society, 54, 692–705.
Shaw, D. (2006). Journey Making Group Workshops as a Research Tool. The Journal of the Operational Research Society, 57(7), 830–841.
Shaw, D., &amp; Blundell, N. (2010). WASAN: The development of a facilitated methodology for structuring a waste minimisation problem. European Journal of Operational Research, 207(1), 350–362.
Shaw, D., Eden, C., &amp; Ackermann, F. (2009). Mapping causal knowledge: How managers consider their environment during meetings. International Journal of Management and Decision Making, 10(5/6), 321–340.
Shaw, D., Smith, C. M., &amp; Scully, J. (2017). Why did Brexit happen? Using causal mapping to analyse secondary, longitudinal data. European Journal of Operational Research, 263(3), 1019–1032.
Simpson, M. G. (1978). Those who can't? The Journal of the Operational Research Society, 29(6), 517–522.
Sinn, J. S. (1998). A Comparison of Interactive Planning and Soft Systems Methodology: Enhancing the Complementarist Position. Systemic Practice and Action Research, 11(4), 435–453.
Smith, C. M., &amp; Shaw, D. (2018). Horizontal recursion in soft OR. Journal of the Operational Research Society, 1–14.
Strauss, A. L., &amp; Corbin, J. M. (1990). Basics of Qualitative Research: Grounded Theory in Practice. Thousand Oaks, CA: Sage.
Tavella, E., &amp; Papadopoulos, T. (2014). Expert and novice facilitated modelling: A case of a Viable System Model workshop in a local food network. Journal of the Operational Research Society, 1–18.
Thompson, J. P., Howick, S., &amp; Belton, V. (2016). Critical Learning Incidents in system dynamics modelling engagements. European Journal of Operational Research, 249(3), 945–958.
Thunhurst, C. (1973). Who does operational research operate for? Annual Conference of the Operational Research Society, Torquay, UK.
Torres, J. P., Kunc, M., &amp; O'Brien, F. (2017). Supporting strategy using system dynamics. European Journal of Operational Research, 260(3), 1081–1094.
Westcombe, M., Franco, L. A., &amp; Shaw, D. (2006). Where Next for PSMs: A Grassroots Revolution? The Journal of the Operational Research Society, 57(7), 776–778.
White, L. (2009). Understanding problem structuring methods interventions. European Journal of Operational Research, 199(3), 823–833.
Williams, T. (2008). Management Science in Practice. Chichester: Wiley.
Yearworth, M., &amp; White, L. (2014). The non-codified use of problem structuring methods and the need for generic constitutive definition. European Journal of Operational Research.