This article has been accepted for publication in The British Journal for the Philosophy of Science. This is a preprint draft produced by the author. Please do not cite this version.
Many Meanings of “Heuristic”
Sheldon J. [email protected]
Department of Philosophy
Mount Allison University
Abstract A survey of contemporary philosophical and scientific literatures reveals that different authors employ the term “heuristic” in ways that deviate from, and are sometimes inconsistent with, one another. Given its widespread use in philosophy and cognitive science generally, it is striking that there appears little concern for a clear account of what phenomena heuristics pick out or refer to. In response, I consider several accounts of “heuristic”, and I draw a number of distinctions between different sorts of heuristics in order to make sense of various research programmes. I then develop a working characterization of “heuristic” which is shown to be coherent and robust enough to serve as a kind within philosophy and cognitive science broadly. My intent is not to pursue a unified account of “heuristic”, but to highlight some features of certain kinds of heuristics that are important for theorizing about cognition in order to proceed with a science of the mind/brain.
The term ‘heuristic’ and its cognates are widely used to label certain methods or
procedures that guide inference and/or judgement in problem-solving and decision-making.
Heuristics are commonly understood as economical shortcut procedures that may not lead
to optimal or correct results, but will generally produce outcomes that are in some sense
satisfactory or ‘good enough’. It is also widely acknowledged that heuristics can result in
systematic errors or mistakes. But all this is usually understood as a trade-off: heuristics
forego optimal outcomes in favour of economy; systematic errors may result on some
occasions, but they tend to produce sufficient solutions. And indeed, some sort of
evolutionary/adaptation story is often told to explain our use of and reliance on heuristics
in our day-to-day reasoning. Understood as such, it is received wisdom that heuristics are a
stock commodity in human cognition.
However, while it is agreed that humans use and generally rely on heuristics, a close
inspection of the philosophical and scientific literatures reveals that different authors
employ the term ‘heuristic’ in ways that deviate from one another, and are sometimes
inconsistent with one another. Not only is ‘heuristic’ used in diverse ways across and
within disciplines, but its meaning has evolved over the years. Consequently, as Jonathan
St.B.T. Evans ([2009], p. 36) remarks, ‘The term “heuristic” is deeply ambiguous’. Indeed,
the term very well may not express a single, unified concept. Or perhaps ‘heuristic’ is best
characterised in terms of a cluster of family resemblances. In any case, given its widespread
use in philosophy and cognitive science generally, it is striking to see that, while there isn’t
a consensus on what phenomena ‘heuristics’ pick out or refer to, little philosophical
attention has been paid to providing an appropriate characterisation, or characterisations,
of ‘heuristic’. We’re long overdue for such a task. It is unacceptable that philosophers
continue to offer accounts of reasoning, for example, while invoking an ambiguous term (see
Bechtel and Richardson [2010]; Hooker [2011]; Wimsatt [2007]); moreover, cognitive science
needs to be clear on its objects of study.
All this prompts the twofold aim of this paper, which is to clarify (some of) the many
uses of ‘heuristic’, and to offer an adequate working characterisation of the term to apply
to a specified range of cognitive phenomena. I should like to be clear that my intent is not
to pursue a unified account of ‘heuristic’, but to highlight some features of heuristics that
are important for theorising about reasoning and cognition, and to develop an account of
‘heuristic’ which is useful for proceeding with a science of the mind/brain. I begin, in
section 1, by fleshing out the motivation for demanding an appropriate characterisation, or
characterisations, of ‘heuristic’. In section 2, I discuss some definitions of the term, and I
give an extensive treatment of a generalised negative definition that is often found in the
literature. In section 3, I draw on the work of some influential heuristics researchers,
including George Polya, Herbert Simon, Daniel Kahneman and Amos Tversky, and Gerd
Gigerenzer. By considering the various accounts of these thinkers, we will be in a position
to make a number of distinctions between different kinds of heuristics. The result is an
initial taxonomy of heuristics which will help us to make sense of various claims made
about human cognition and of various research programmes. This taxonomy also enables
me to focus on a specific class of heuristics; in section 4 I develop a working characterisation
of these sorts of heuristics. I close in section 5 with some concluding remarks.
1 Why We Need an Appropriate Characterisation of ‘Heuristic’
Given its widespread use within (and without) the several disciplines of cognitive science, it
is intriguing that there is little concern for a fully developed characterisation of ‘heuristic’.
The lack of a developed characterisation of the term is disconcerting in at least two ways.
First, since heuristics have rapidly made their way as central tenets in many theories of
reason and cognition, we might expect something that approaches a refined and robust
explication of what is meant by ‘heuristic’. But, as far as I know, no one has produced such
an explication. This is problematic for cognitive science since, if it is indeed a science,
careful consideration should be taken to delimit heuristics qua entities of scientific theories.
To put it another way, if cognitive science is in the business of discovering and studying the
kinds within its domain, pains should be taken to appropriately characterise these kinds
(Carruthers [2006]; Pylyshyn [1999]); by all accounts, heuristics are included among the
kinds of cognitive science. Thus, being clear on what phenomena are picked out by
‘heuristic’ should be of concern to both scientists and philosophers who wish to make sense
of, and perhaps adopt, any theory of reason or cognition that claims a role for heuristics.
Secondly, many arguments—descriptive and normative—are made on the premise that
heuristics are ubiquitous in human cognition. Such arguments try to decide the extent to
which humans rely on heuristics, what heuristics people employ in what circumstances, or
whether and when it is rational to engage heuristics. But vagueness (or perhaps ambiguity)
in the concepts employed by a theory will be inherited by any arguments based on that
theory. Unclarity in what heuristics are in the first place therefore impedes progress on
matters involving the purported uses and abuses of heuristics in reasoning (taken here to
include judgement, inference, problem-solving, and decision-making).1 In the face of
vagueness (or ambiguity) in concepts, the arguments concerning human reasoning may very
well lack a sufficient basis to be anywhere near decisive. To be sure, if we do not know
what cognitive processes ‘heuristics’ refer to, then it is unclear whether and to what extent
purported heuristics have a role in our cognitive lives. For instance, there has been a
longstanding debate between those who believe that heuristic strategies are or can be
rational, and those who believe that rationality resides in the dictates of some
independently established normative theory which heuristics fail to meet (see Simon [1957];
Gigerenzer [1996]; Kahneman and Tversky [1996]; Samuels, Stich and Bishop [2002]). But
if it is unclear what either party means by ‘heuristic’, then the force of the arguments from
either side of the debate, or precisely what is being claimed, is equally unclear. This is
illustrated by Gigerenzer’s ([1996]) accusation that Kahneman and Tversky deploy vague
labels instead of describing real heuristics (see also Gigerenzer and Todd [1999]; Gigerenzer
and Brighton [2009]), and Kahneman and Tversky’s ([1996], p. 585) response that the
1My uses throughout this paper of terms such as ‘reason’, ‘inference’, ‘decision-making’ and the like should be understood in very broad senses.
heuristics they study may nevertheless be ‘assessed empirically; hence need not be defined
a priori’. We might thus see that, since a proper characterisation of ‘heuristic’ is
lacking—or at least a characterisation on which each camp can agree is lacking—the debate
concerning whether and which heuristics are (or can be) rational can scarcely advance, and
the deeper issue about the nature of rationality gets lost in confusion.
In sum, I perceive at least two significant problems arising from the absence of a clear
and robust characterisation of ‘heuristic’. The first problem concerns heuristics as objects
of scientific study; the second problem concerns arguments that invoke heuristics to make
descriptive and/or normative claims about human cognition. These two problems (perhaps
among others) act as barriers that inhibit the advancement of philosophical and scientific
research on reasoning and cognition. To overcome these barriers, we need an account of
heuristics that, on the one hand, is coherent and robust enough to plausibly fit a scientific
kind,2 and on the other hand, provides clarity in the objects of study. To the extent that
an account of heuristics is successful with respect to these tasks, we will better understand
how the mind/brain works and how heuristics fit within the various conceptual frameworks
of psychology and the rest of the cognitive sciences.
Before continuing on, it is important to forestall a worry that may be raised against the
project of this paper. One may claim that an attempt at characterising ‘heuristic’ may be
neither possible nor useful if the class of heuristics is constituted by phenomena that share
a family resemblance.3 If ‘heuristic’ is a term used by scientists or philosophers to refer to
2I prefer to use ‘scientific kind’ as opposed to ‘natural kind’, since I’m not prepared to argue that heuristics are natural kinds. I take a scientific kind to be something that is a proper object of scientific study. In this sense, whatever else a scientific kind may be, it had better be suitably characterised to play a certain (perhaps explanatory) role in a scientific theory. Suitability and level of characterisation will come in degrees, I suppose, depending on the role that the kind is supposed to play in the theory in question. Whatever the case may be, it is beyond the scope of this paper to develop or defend a theory of scientific kinds. The point I am making in the main text is that ‘heuristic’ generally isn’t adequately characterised to be a proper object of scientific study, or for it to play the role many researchers assume it to play in scientific theories.
3I thank an anonymous referee for raising this concern.
certain cognitive or perceptual processes whose properties overlap in various ways with one
another, then it would be futile to attempt to identify the core properties of heuristics.
Moreover, what processes are referred to as ‘heuristic’ may very well depend on the aims of
the theory and research in question, and this seems to vitiate the need for a definite
characterisation of the term.
Though this is a natural worry to have, it is not one that bears negatively on the
project of this paper. In the first instance, given the lack of work on characterising
‘heuristic’, it has yet to be determined whether the term is best characterised as a cluster
of family resemblances; so far as we presently know, this may not be the case. So some
value in this project can be found in initiating interest and research to make such a
determination, as well as in going some way to providing such a determination. Yet, even if
‘heuristic’ does consist of a cluster of family resemblances, again given the lack of work on
characterising the term, it is worthwhile to identify the relevant properties that make up
the family resemblances, for this would allow us to better understand various perceptual
and cognitive processes as heuristic. As we shall see, a close examination of several
phenomena that researchers call ‘heuristic’ reveals a number of relevant properties, which
allows for a classification of the phenomena into distinguishable kinds (section 3).
If we take our cue from Edouard Machery’s ([2009]) treatment of ‘concept’ or Marc
Ereshefsky’s ([1998], [2001]) treatment of ‘species’, we might suspect that a treatment of
‘heuristic’ will likewise result in a pluralistic conception of the term.4 And indeed, the
treatment I give below seems to suggest a sort of pluralism as it considers the many uses of
‘heuristic’ in the literature. In this light, the present project is important in two chief respects.
4Yet whereas Machery’s and Ereshefsky’s work each followed from a history of thinkers who have attempted to advance precise accounts of these respective concepts, the situation for ‘heuristic’ is different since there is generally a lack of any attempt at a detailed account of the term. Since the work must begin somewhere, it is my hope that this paper serves as a fruitful start to more thorough treatments of ‘heuristic’, even if it turns out to be pluralistic.
First, by trimming, pruning, and clearing away some of the brush that is the current state
of the uses of ‘heuristic’ in science and philosophy, we are put in a better position to
understand current and past research and/or arguments involving heuristics (in psychology,
computer intelligence, philosophy of science, and so on). More specifically, we will be able
to better understand what sorts of processes or phenomena are being researched or referred
to, and this will help us to navigate and make sense of much of the relevant work to date.
Second, the clarification provided in this paper can serve as a guide for future research.
Being clear, on the one hand, on the sorts of phenomena we call ‘heuristic’, and on the
other hand, on why we call such phenomena ‘heuristic’, allows us to structure various
problems for the scientist and philosopher, and it provides the materials to build
frameworks to conduct research and/or advance arguments. When, in section 4, I go on to
develop a working characterisation of ‘heuristic’, I identify a cluster of properties possessed
by heuristics of a certain sort. This advances the clarification project with respect to a
specific class of heuristics, providing an understanding of the respects in which heuristics in
this class relate to one another (what makes them fall within a class) and the ways in
which they relate to heuristics and other processes falling outside the class. All this
contributes to identifying the heuristics of interest as scientific kinds, thus remedying the
problems discussed above. At the same time, the working characterisation I develop is
open to revision, so we leave open the possibility of recharacterisations in light of further
evidence and analysis, which is exactly what we should expect of proper scientific kinds.
2 Inadequate Definitions
2.1 A positive attempt
Psychologist Gerd Gigerenzer is arguably the most prominent researcher of heuristics
today. Since the 1990s, Gigerenzer and his colleagues have investigated how heuristics can
produce reasonable or appropriate solutions in different ecological contexts. (For collections
of this work, see Gigerenzer et al. [1999]; Gigerenzer et al. [2011]; Todd et al. [2012].) In
a recent review article written with Wolfgang Gaissmaier, the following definition is
proposed.
HG A heuristic is a strategy that ignores part of the information, with the goal
of making decisions more quickly, frugally, and/or accurately than more
complex methods. (Gigerenzer and Gaissmaier [2011], p. 454; see also
Todd et al. [2012], p. 7)
It is unfortunate, however, that Gigerenzer and Gaissmaier only devote a short paragraph
to discussing this definition.5 For given the motivations for the present paper, a more
extensive treatment would be appreciated, if not expected. And indeed, HG is inadequate
as a general characterisation of heuristics.
A serious problem for HG concerns the unspecified ‘more complex methods’ that
heuristics are supposed to perform more quickly, frugally and/or accurately than. For there
will always be arbitrarily more complex methods with which to make decisions. If taken as
stated, HG allows that linear regression is a heuristic insofar as part of the information is
ignored (example: under the weak exogeneity assumption, errors in predictor values are
ignored), and decisions are made more quickly, frugally, and/or accurately than other more
complex methods (example: linear regression plus Bayesian analysis, or relaxed linear
regression which drops the weak exogeneity assumption). The arbitrariness of complexity
resides in always being able to add to a decision method. Of course, Gigerenzer would not
enjoy this result; indeed, linear regression is a procedure with which he contrasts (at least
5Though see (Gigerenzer and Todd [1999]) for a lengthier discussion. Gigerenzer and Todd define ‘fast and frugal heuristics’ thus: ‘Fast and frugal heuristics employ a minimum of time, knowledge, and computation to make adaptive choices in real environments’ (p. 14). This earlier definition is in some ways better than Gigerenzer’s more recent attempt.
some) heuristics (see Gigerenzer and Brighton [2009]). More generally, it is obvious from
Gigerenzer’s body of work that formal, complex, information-hungry procedures (such as
linear regression or Pareto/negative binomial distribution models) would not be
understood as heuristic by any means.6
Perhaps the most problematic worry for HG is that its central claim that heuristics
ignore ‘part of the information’ puts tight constraints on the types of strategies we would
consider heuristic. This can be seen once we question what ‘the’ information is supposed to
be. It is suggested by the discussion in (Gigerenzer and Gaissmaier [2011]), and it is made
explicit in (Gigerenzer and Brighton [2009], p. 111), that ignoring ‘part of the information’
means ‘ignoring cues, weights, and dependencies between cues’. But this is inadequate for
a general characterisation of heuristics, since there are strategies and procedures that seem
obviously heuristic but do not in any obvious sense ignore cues, weights, or dependencies
between cues; for example: ‘If you can’t solve a problem right away, try an indirect proof’.
What is more, some obviously heuristic strategies don’t seem to ignore information
simpliciter in any obvious way; for example: ‘Draw a diagram when trying to solve a
problem’ (in fact, this heuristic appears to require the use and integration, rather than the
ignoring, of more information, i.e., using and integrating visual information with semantic
information). (These examples of heuristics are from Polya [1957]; we shall see them again
below.) Hence, the central claim that heuristics ignore ‘part of the information’ puts too
narrow constraints on the sorts of strategies or procedures we would call ‘heuristic’—so
narrow that some strategies that are rightly considered heuristic are excluded. This is an
6Of course, the details of Gigerenzer’s work implicitly indicate the ‘more complex methods’ against which heuristics perform more quickly, frugally and/or accurately, as he often compares the performance of heuristics to the performance of regression models and Pareto/negative binomial distribution models (see Gigerenzer and Brighton [2009]). However, though Gigerenzer uses these latter models as paradigmatic of ‘more complex methods’, my argument is that, since arbitrarily more complex methods can always be derived, these paradigmatic ‘more complex methods’ can easily be characterised as heuristic according to HG.
undesirable result.
In light of these criticisms, it is clear that HG fails to provide us with a suitable or
robust characterisation of ‘heuristic’.7 We must do better.
Yet many researchers do not bother to provide even a simple definition of ‘heuristic’, taking its
meaning to be intuitive or obvious, when in fact it is not. As Simon and his colleagues once
remarked, ‘[“heuristic”] denotes an intuitive notion usually misunderstood when
oversimplified’ (Simon et al. [1967], p. 10). At the same time, some authors provide merely
perfunctory definitions, without any concern for whether the implicated concept is coherent
or robust in the above sense.8 I shall consider such perfunctory definitions next.
2.2 Heuristics vs. guaranteed correct outcomes
A common feature of almost all perfunctory accounts of heuristics is that ‘heuristic’ is
negatively defined (cf. Fodor [2008], p. 116). A heuristic is thus understood to be deficient
or inadequate in some respect. Indeed, one often happens upon a definition which states
that heuristics are procedures that can produce good outcomes but don’t guarantee correct
solutions (see Dunbar [1998]) or don’t always work (Fodor [2008]). In one textbook on
cognition, it is claimed that a ‘hallmark’ of a heuristic approach to solving a problem is
that ‘It will work part of the time or for part of the answer, but it is very unlikely to
furnish the complete, accurate answer’ (Ashcraft [2002], p. 465). At other times one finds
7As we shall see, the working characterisation of heuristic I develop below will have some features in common with HG. To my knowledge Gigerenzer, with his HG definition, has come the closest to providing an adequate characterisation of heuristics, my criticisms notwithstanding. Nevertheless, it will become apparent as we move along that the characterisation I develop is broader, encompassing a wider variety of cognitive phenomena. Additionally, I draw several distinctions (absent in Gigerenzer’s account) to delineate different kinds of heuristics (for example, differences that distinguish Gigerenzer-style heuristics from Polya-style heuristics).
8For example, in an influential book entitled Heuristics, Judea Pearl ([1984]) expounds and discusses several heuristics used in computation and computer science. Throughout the entire book, however, there are only two brief passages in which he attempts to characterise heuristics (see pp. vii; 3). Though I won’t pursue the issue here, I submit that these brief passages fail to do justice to a clear understanding of what heuristics are.
heuristics contrasted with algorithms (see Haugeland [1981]; Richardson [1998]), where
algorithms are understood as procedures that, when applied properly, invariably produce
correct outcomes; hence, heuristics are conceived simply as procedures that do not
invariably produce correct outcomes.
Jerry Fodor’s remarks in The Language of Thought wonderfully illustrate the
distinction between procedures that are supposed to guarantee right answers and shortcut
procedures that avoid excessive mechanical computations (i.e., heuristics):
In short, the computational load associated with the solution of a class of
problems can sometimes be reduced by opting for problem-solving procedures
that work only most of the time. Reliability is wagered for efficiency in such
cases, but there are usually ways of hedging the bet. Typically, heuristic
procedures are tried first; relatively slower, but relatively algorithmic,
procedures are ‘called’ when the heuristics fail. The way of marshaling the
available computational resources can often provide the optimal trade-off
between the speed of computation and the probability of getting the right
results. (Fodor [1975], pp. 166–7)
This sort of characterisation of heuristics is not surprising in light of the fact that there is
typically a risk of suffering negative consequences for cutting corners or taking shortcuts.
As Fodor ([1975], p. 166) remarks, ‘one can often get away with less than strict compliance
with such requirements [that guarantee a correct solution] so long as one is willing to
tolerate occasional mistakes’. Yet, certainly it is not the intended purpose of heuristics to
make mistakes. Rather they offer a computationally economical method of solving a
problem. Thus understood, heuristics offer a way to cut down on the computations needed
to arrive at a desired outcome.
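The computational pattern Fodor describes—try a cheap heuristic first, and ‘call’ a slower but reliable procedure only when the heuristic fails—can be sketched in code. The following is a minimal illustration of the pattern only, not of any particular heuristic from the literature; the function names and the toy divisor-finding task are my own.

```python
def solve_with_hedging(problem, heuristic, algorithm, check):
    """Try a cheap heuristic first; fall back to a slower but
    guaranteed-correct (algorithmic) procedure if it fails."""
    candidate = heuristic(problem)
    if candidate is not None and check(problem, candidate):
        return candidate          # fast path: the heuristic succeeded
    return algorithm(problem)     # slow path: exhaustive, but reliable

# Toy task: find a divisor of n greater than 1.
def small_prime_heuristic(n):
    # Shortcut: test only a few small primes, ignoring all other candidates.
    for p in (2, 3, 5, 7):
        if n % p == 0:
            return p
    return None  # heuristic failed

def trial_division(n):
    # GCO procedure: exhaustive search, guaranteed to find a divisor if one exists.
    for d in range(2, n + 1):
        if n % d == 0:
            return d

divides = lambda n, d: n % d == 0
print(solve_with_hedging(91, small_prime_heuristic, trial_division, divides))   # 7
print(solve_with_hedging(121, small_prime_heuristic, trial_division, divides))  # 11 (fallback used)
```

Note that the trade-off Fodor identifies is visible in the structure itself: the heuristic inspects four candidates at most, while the fallback may inspect nearly all of them.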
Acknowledging the benefits afforded by heuristic procedures is very important in
understanding the nature of heuristics. For we would hesitate to call a procedure that
invariably leads to disaster a heuristic procedure; nor would we want to call a procedure
that is irrelevant to the task at hand a heuristic procedure (with respect to the task). Yet
such procedures most certainly would not guarantee correct outcomes. Therefore, what
heuristics fail to achieve (viz. reliably getting the right results) should be thought of as a
corollary of a definition of ‘heuristic’, rather than made to be an essential feature.
Nevertheless, notice from Fodor’s remarks that the benefits offered by heuristics are
contrasted with associated risks of making mistakes—whatever good heuristics do, they do
not always get things right. In this way, Fodor provides a particularly clear example of the
attitude toward heuristics that typically leads to a negative characterisation in terms of
what they fail to do, or what they do not achieve. We can generalise over these sorts of
negative definitions to find a common distinction between heuristic processes and processes
that are guaranteed to produce correct answers or outcomes. I will refer to these latter as
GCO (guaranteed-correct-outcome) procedures.
GCO procedures are essentially formal operations, paradigmatically those of
mathematics or logic. For example, applying the rules of division will guarantee that you
will arrive at the correct answer to a specified division problem of arithmetic; constructing
truth-tables will guarantee a correct determination of whether a specified set of
propositions is consistent; and so on. However, GCO procedures exist beyond the paradigm
of logic or mathematics. We can easily come up with an algorithm to, say, determine
whether a lamp needs replacing, or to phone a friend. There are even algorithms for
playing tic-tac-toe which guarantee either a win or draw (Crowley and Siegler [1993]).
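The truth-table case mentioned above can be made concrete. Exhaustive enumeration of truth assignments is a GCO procedure in the strict sense: applied correctly, it cannot fail to settle consistency, though its cost doubles with each additional propositional variable. The sketch below encodes propositions as Boolean functions; this encoding is my own, chosen for brevity.

```python
from itertools import product

def consistent(propositions, num_vars):
    """GCO procedure: exhaustively check every truth assignment.
    Guaranteed correct when applied properly, but the search space
    doubles with each added propositional variable."""
    for assignment in product([True, False], repeat=num_vars):
        if all(p(assignment) for p in propositions):
            return True   # found a satisfying assignment
    return False          # no assignment works: the set is inconsistent

# {P, P -> Q, not-Q} is inconsistent; {P, P -> Q} is consistent.
P      = lambda v: v[0]
P_to_Q = lambda v: (not v[0]) or v[1]
not_Q  = lambda v: not v[1]

print(consistent([P, P_to_Q, not_Q], 2))  # False
print(consistent([P, P_to_Q], 2))         # True
```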
In general, GCO procedures necessitate the right answers, when applied correctly. In
contrast with these GCO procedures, heuristics are supposed to be procedures that
efficiently produce outcomes or answers that may be good enough for one’s purposes, and
that may sometimes turn out to be correct, but they won’t guarantee correct answers. Thus,
negative definitions of this type can be summed up as follows.
HN Heuristics are procedures that do not guarantee correct outcomes,
although they can provide certain benefits.
I will discuss the property of providing ‘certain benefits’ in more detail when I offer my
positive characterisation below. For now we will focus on the ‘do not guarantee correct
outcomes’ part of this definition. In particular, I shall address the extent to which the
claim that heuristics are procedures that do not guarantee correct outcomes does useful
work for us as a central part of a definition of heuristics.
Let us begin by observing that, in the small number of contexts considered so far, HN
makes perfect sense—in math, logic, tic-tac-toe, and so on, we can understand heuristics as
procedures that do not guarantee correct outcomes as GCO procedures do, though they
offer certain benefits. For example, randomly selecting a limited subset of a set of
propositions and simply looking at its members to see if any contradicts any other is a
heuristic process to check the consistency of the superset of propositions; ‘always start in
the centre square’ is a heuristic for tic-tac-toe (Dunbar [1998]). Such heuristics won’t
guarantee a correct outcome, but they are at least cognitively efficient. In general, HN is
applicable in cases when there exists a GCO procedure for a given problem. Of course, the
GCO procedure need not be feasible (as when the GCO procedure cannot be practically
performed by a human (or even machine) because it is too complex, or too
time-consuming, or too taxing on cognitive resources),9 and the GCO procedure need not
9Computational complexity theorists call such intractable problems ‘NP-complete’, and the Traveling Salesman problem is the paradigm case. We may note in passing that heuristics are the only viable procedures when dealing with NP-complete problems. Cf. the discussion below.
be known for a given problem, so long as there exists a GCO procedure in principle.
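The random-subset strategy just described illustrates HN directly, and can be sketched as follows. Finding a contradiction within the sample settles inconsistency, but a clean pass guarantees nothing, since the offending propositions may lie outside the sample; the benefit is that only a fixed number of propositions are ever examined. The representation of propositions as signed atoms and the pairwise contradiction test are stand-ins of my own.

```python
import random

def subset_consistency_heuristic(propositions, contradicts, k=3):
    """Heuristic in the sense of HN: inspect a random subset only.
    A detected contradiction proves inconsistency; detecting none
    proves nothing, since the culprits may lie outside the sample."""
    sample = random.sample(propositions, min(k, len(propositions)))
    for i, p in enumerate(sample):
        for q in sample[i + 1:]:
            if contradicts(p, q):
                return "inconsistent"
    return "no contradiction found (not a guarantee of consistency)"

# Toy representation: a proposition is a signed atom, e.g. ("P", True) for P.
contradicts = lambda p, q: p[0] == q[0] and p[1] != q[1]
props = [("P", True), ("Q", True), ("R", True), ("P", False)]
print(subset_consistency_heuristic(props, contradicts, k=4))  # 'inconsistent'
```

With a smaller k the verdict becomes probabilistic: the sample may simply miss the contradictory pair, which is exactly the sense in which the procedure forgoes a guaranteed correct outcome for economy.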
Notice, however, that the problems for which HN is usefully applicable are all
well-defined problems. It should be known well enough that well-defined problems have
well-defined problem-spaces, with discrete states and goals that are, or can be in principle,
known and understood. Tic-tac-toe, problems in math and logic, dialing a friend—they all
have well-defined problem-spaces. Having a well-defined problem-space is what allows for
the generation of algorithms that guarantee correct outcomes. For not only are the goals of
well-defined problems understood and known to be correct outcomes—that is, we know
precisely what a correct outcome is for these sorts of problems—additionally one can
identify definite and discrete steps by which to navigate the problem-space, proceeding
from one state to the next, so as to ultimately reach the goal state. This is what is meant
by a problem having a GCO procedure in principle—if the states and goals of a problem
are well-defined, then there will exist some procedure that will guarantee a correct
outcome, even if this requires exhaustive search and evaluation of the problem-space (even
if the problem is NP-complete; see note 9).
But here is the trouble: When faced with a well-defined problem, we are almost always
restricted to the use of heuristics in the sense of HN. To be sure, cognitive limitations or
time constraints often prevent us from carrying out a GCO procedure, or there is just no
known correct solution or no known GCO procedure (notwithstanding that these may exist
in principle). Indeed, the well-defined problems for which there are known feasible GCO
procedures are very few (Richardson [1998]). In this light, then, we may very well
understand heuristics as procedures that do not guarantee correct solutions, but in doing
so we seem to allow too much in. For when we consider real-world contexts, where time
and resource constraints are ubiquitous, engaging in GCO procedures won’t be a viable
option for a great many tasks. Thus, we come to an important problem for HN: The
widespread claim that much of human cognition is heuristic is nearly trivially true, since
much of our cognition cannot be anything but heuristic as described by HN. I will call this
the ‘triviality problem’.
Not only are GCO procedures rarely a viable option in the real-world (when we are
faced with a well-defined problem), but in addition, well-defined problems are relatively
rare. Instead, we are commonly confronted with ill-defined problems, which have undefined
or indeterminate states and/or goals. The problem-spaces for ill-defined problems cannot
be completely or adequately known, understood, or characterised, and so there are no clear
solution paths for ill-defined problems.10 Ill-defined problems include grandiose problems
such as stabilising the current state of the economy, and coming up with a feasible policy
to reduce carbon emissions, as well as more humble problems such as deciding whether to
take a job offer, planning a party, and writing a good paper. By their very nature,
ill-defined problems preclude the possibility of GCO procedures. For the goals of these
problems are often insufficiently defined, and as a result it is unknown what a ‘correct’
outcome would be, thereby making it impossible to have a procedure that would guarantee
a correct outcome. Moreover, even if an ill-defined problem has a sufficiently defined goal,
such as determining the culprit in a murder case, the means by which one would proceed to
achieve this goal would likely still be uncertain since either the mediating states may
remain indeterminate, or we are unable to properly evaluate the steps taken (for example,
whether the work one does on the murder case brings one closer to or further from
10The distinction between well-defined and ill-defined problems goes as far back as the infancy of artificial intelligence research (see McCarthy [1956]). McCarthy’s original characterisation has been reexamined and extended by a number of researchers (most notably Reitman [1964], [1965]; Newell [1969]; Simon [1973]; more recent treatments are found in Voss et al. [1983]; Shin et al. [2003]; Lynch et al. [2006]; Voss [2006]; Mitrovic and Weerasinghe [2009]). It is perhaps an important task to critically analyse the concept of an ill-defined problem to arrive at a fully developed account, but it is beyond the scope of this paper to provide this analysis. For those who desire a more thorough account of the distinction between ill-defined and well-defined problems, I refer them to the works cited here.
determining the culprit); there still would be no clear solution path.11 This situation is
particularly bad news for HN, for if there is no way to evaluate intermediate steps toward a
goal in such ill-defined contexts, then no sense can be made of heuristics being satisfactory,
reasonable, or ‘good enough’ solutions. That is, we would not know what would make a
heuristic solution satisfactory, reasonable, or ‘good enough’ in ill-defined contexts.12
Since it does not make much sense to speak of correct outcomes for many ill-defined
problems, much less processes that guarantee correct outcomes, little sense can be made of
HN in this context, in particular its claim that heuristics are procedures that do not
guarantee a correct solution. The situation would not be so bad if it were not the case that
a vast number of real-world problems humans typically face are characteristically
ill-defined. Alas, we do not live in a tic-tac-toe world. Thus, in addition to the triviality
problem cited above, HN faces what I call a ‘range problem’: ‘heuristic’ doesn’t apply to an
interesting range of cases, since we are typically confronted with ill-defined problems for
11While one may argue that there very well may be a correct way to, for example, stabilise the economy, the correct way is realised only retrospectively. If this weren’t so, many countries wouldn’t currently have so much difficulty in stabilising their economies. Aside from this rhetoric, however, it is a fact that there is great uncertainty in whether measures yet to be undertaken will advance us toward the goal or away from the goal (as above with respect to solving a murder case). Such uncertainty is owed to the multifarious complexity of the problem, but it is also owed to the indeterminateness of the problem-space. Thus, one cannot know a priori what the correct solution is. But this is just the nature of ill-defined problems, viz. there are no clear solution paths. See below for more on ill-defined problems.
12We should be careful not to confuse ill-defined problems that arise from trying to solve well-defined problems with the well-defined problems themselves. Such confusion may result when considering, for instance, attempting to discover a proof in mathematics or coming up with a succinct algorithm for an Internet search engine to retrieve specific websites. Certainly, the mathematical proof and the search algorithm are themselves well-defined, and they are paradigmatically so, as the demonstration (or subsequent derivation) of the proof and the execution of the algorithm both have identifiable discrete goals and intermediate states. But the problem of discovering a mathematical proof or a search algorithm very well may be—and I suppose is typically—ill-defined. For even though the goal states for these problems are well-defined, there is no clear solution path since the mediating states of the problem of discovery and the problem of development are fuzzy and indeterminate, i.e., they have ill-defined problem-spaces. This much has been recently noted. And indeed, it is generally agreed among philosophers and cognitive psychologists that discovery in general involves a measure of creativity and finesse which cannot be captured with a well-defined procedure, even though in some cases the object of discovery is a well-defined procedure. It was the problem of discovering a proof of Fermat’s last theorem that stalled three centuries’ worth of bright mathematicians, not the theorem itself; the acceptance of Wiles’ proof (after an initial correction) demonstrates this.
which there are no GCO procedures.
A possibility worth considering is that a deeper functional analysis of heuristics might
have it such that certain heuristics represent ill-defined problems as well-defined ones.
That is, a heuristic process might simplify a complex, ill-defined problem to a manageable
and well-defined problem. Or perhaps some independent, nonheuristic process is
responsible for representing ill-defined problems as well-defined problems, and then
heuristics are employed to solve these well-defined problems. If either of these possibilities
is the case, then it would make sense to claim that heuristics are processes that do not
guarantee correct outcomes, for they would be working within the confines of well-defined
problem-spaces for which there are (in principle) GCO procedures. This is certainly a
possibility, and many contemporary decision researchers, following the work of Simon,
agree that much of human reasoning involves search within and manipulation of
well-defined problem-spaces (Dunbar [1998]).
However, difficulties with this analysis arise when we realise that there will always
be innumerable ways to carve up ill-defined problems into problem-spaces consisting of
discrete states. Consider, for instance, a simple, though ill-defined, problem of deciding
whether to go for a picnic. What states should the problem-space consist of, and what
would be the goal state? Should there be only two states, (i) ‘go for a picnic’ and (ii) ‘not
go for a picnic’? Or should the problem-space consist of (i) ‘go for a picnic’, (ii) ‘not go for
a picnic, but go to a restaurant instead’, and (iii) ‘not go for a picnic, but stay home
instead’?13 I need not belabour the issue here of how difficult it is to model an ill-defined
problem as a well-defined one; the problem should be familiar to the reader. The difficulty
here is similar to what is known as ‘the problem of small worlds’, introduced by L. Jimmie
Savage ([1954]; see also Shafer [1986]). The problem of small worlds concerns whether
13This example is derived from Savage’s ([1954], pp. 15–6) famous omelette example.
expected utility assignments will maintain the same structure, and therefore dictate the
same decisions, upon refinements of a decision-problem. As Savage recognises, there doesn’t
seem to be a plausible way to define an operationally applicable criterion to determine
whether a well-defined model maps onto the original ill-defined problem. Hence, there is no
criterion for determining whether, or the extent to which, a given well-defined model adequately captures
an ill-defined problem. In this way, we might say that representing an ill-defined problem
as a well-defined problem is itself an ill-defined problem insofar as it is indeterminate how
to represent the states and goal(s) (i.e., the correct outcome) of the problem.
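Savage’s difficulty can be illustrated with a toy expected-utility calculation (the numbers and utilities are invented purely for illustration): the same picnic decision, carved into two different ‘small worlds’, yields different recommendations.

```python
def expected_utility(probs, utils):
    """Standard expected utility: sum of probability-weighted utilities."""
    return sum(p * u for p, u in zip(probs, utils))

def best(table):
    """The act with the highest expected utility."""
    return max(table, key=table.get)

# Coarse 'small world': two states, rain vs dry.
coarse = {
    "picnic":    expected_utility([0.5, 0.5], [0, 10]),   # EU = 5.0
    "stay_home": expected_utility([0.5, 0.5], [4, 4]),    # EU = 4.0
}

# Refined world: 'dry' split into windy-dry and calm-dry, with
# wind spoiling the picnic (a factor the coarse carving ignored).
refined = {
    "picnic":    expected_utility([0.5, 0.3, 0.2], [0, -2, 10]),  # EU = 1.4
    "stay_home": expected_utility([0.5, 0.3, 0.2], [4, 4, 4]),    # EU = 4.0
}

# The recommended act flips between the two carvings:
# best(coarse) is 'picnic'; best(refined) is 'stay_home'.
```

With no operational criterion for which carving adequately captures the raw problem, neither recommendation can claim to be the correct one.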
I do not mean to suggest that we never simplify ill-defined problems; in fact, we
probably do this often with certain ill-defined problems.14 Of course, whether and to what
extent people actually represent ill-defined problems as well-defined problems are empirical
questions. But unless evidence would show that representing ill-defined problems as
well-defined ones is ubiquitous, there remains a large class of problems for which it is
trivial to speak of heuristics as processes that do not guarantee correct outcomes.15
3 A Taxonomy of Heuristics
14A concern that might be raised here is that rational decision making under uncertainty may not be best described in terms of ill-defined problems. For example, many Bayesians and game theoreticians conceive decision-making problems under uncertainty as well-defined problems with definite, well-defined, rational solutions. Nevertheless, even though the formal apparatus of Bayesianism or game theory imposes a well-defined problem-space on an ill-defined problem, this doesn’t entail that the raw problem is itself well-defined. Given that uncertainty (or what we might call ‘unknown risk’) is an inherent feature of such raw problems, their problem-spaces are necessarily ill-defined. Cf. discussion in note 12. In fact it is arguable that Bayesians and game theoreticians represent ill-defined problems as well-defined ones just because doing so allows for formal analysis of what otherwise would be an unanalysable problem. The reader is referred to the works cited in note 10 for further clarification on the nature of ill-defined problems. See also the discussion above.
15Another negative definition implicitly suggested by some authors is that heuristics are procedures that do not optimise (though they can result in good outcomes) (see Carruthers [2006]; Fodor [2008]; Gigerenzer and Goldstein [1996]; Savage [1954]; Simon [1957]; von Neumann and Morgenstern [1944]). We may take this as a special case of HN if we understand optimisation as a ‘correct outcome’. Yet optimisation procedures are usually understood not to aim at guaranteeing correct outcomes, but at maximising some value, traditionally expected utility. Optimisation procedures take into account problems for which there are in principle no GCOs or GCO procedures. Nevertheless, most of what I said against HN can be applied to this negative heuristics-as-not-optimising definition, and I will refrain from rehearsing the range and triviality problems.
We have now considered several problems with some attempted or implicit definitions of
‘heuristic’. I will now turn to the task of offering a more adequate account of the term by
highlighting some features of heuristics that are important for theorising about cognition
generally. In undertaking this task, I draw a number of distinctions in order to give a
taxonomy of different kinds of heuristics. This will be helpful to identify some of the
important properties of heuristics, as well as to make sense of various research programmes
on heuristics. This will also serve to eventually focus in on a certain class of heuristics
found in the taxonomy—a class that is distinctively cognitive. It is this cognitive class of
heuristics that the characterisation of ‘heuristic’ developed in section 4 will apply to.
3.1 Stimulus-driven vs. heuristic
The first distinction I make is between stimulus-driven behaviour and heuristic-produced
behaviour. By stimulus-driven behaviour I do not intend to refer to the behaviourist idea
of stimulus-response that takes the mind to be a black box to remain unanalysed. Rather, I
mean to refer to some unspecified theory that takes a stimulus to be the cause, in a
coarse-grained fashion, of a response. By heuristic-produced behaviour, on the other hand,
I mean behaviour (including cognitive behaviour) that is brought about or caused by the
operation of some heuristic. Consider, for example, some heuristics found in the literature:
(1) In chess, consider first those moves that remove or protect pieces under
attack. (Simon et al. [1967])
(2) Probabilities are evaluated by the degree to which one thing or event is
representative of (resembles) another; the higher the representativeness
(resemblance) the higher the probability estimation. (Tversky and
Kahneman [1974])
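As an illustration only, heuristic (2) might be sketched as reading a probability estimate directly off a crude similarity measure; the feature-overlap measure and the example features below are my own stand-ins, not Tversky and Kahneman’s.

```python
def similarity(description, prototype):
    """Crude feature-overlap (Jaccard) similarity in [0, 1]."""
    d, p = set(description), set(prototype)
    return len(d & p) / len(d | p)

def representativeness_estimate(description, prototype):
    """Heuristic (2), sketched: the probability estimate is read
    directly off the degree of resemblance; base rates play no role."""
    return similarity(description, prototype)

# Hypothetical features for a 'librarian' stereotype and a person:
librarian = {"quiet", "tidy", "reads", "introverted"}
steve = {"quiet", "tidy", "shy"}
p = representativeness_estimate(steve, librarian)  # overlap 2/5 = 0.4
```

The sketch also exhibits the signature error pattern: because only resemblance matters, the (possibly very low) base rate of librarians in the population is ignored entirely.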
These heuristics, which may be taken as paradigm cases of heuristics, are not mere
stimulus-driven operations. Rather, they are relatively more complex operations involving
relatively more complex representations. Specifically, these heuristics are conceptual
operations, involving conceptual representations. To put it another way, these heuristics
are concept-mediated.16 In contrast, stimulus-driven behaviour does not implicate
conceptual representations, since the stimulus is supposed to be the cause (in a
coarse-grained way) of the response. As a paradigm example, we might take a case of
shuddering upon hearing an unexpected loud noise—it is unlikely that the loud noise is
conceptualised, and thus conceptual representations would have no role in producing the
shudder response; it is simply a brute reaction or reflex.
The distinction between heuristic-produced and stimulus-driven behaviour may seem
obvious, but it is an important distinction to make, especially in light of some of
Gigerenzer’s recent work. Gigerenzer ([2007], [2008b]) has speculated that some nonhuman
animal behaviour can be understood in terms of heuristics. According to Hutchinson and
Gigerenzer ([2005]), behavioural biologists have studied ‘rules of thumb’ since the middle of
the twentieth century, where rules of thumb describe some animal behaviour through
simple rules that largely appear to be innate. So, for instance, the manner in which the
16I adopt here the common view that concepts are, qua constituents of thought, complex mental entities that participate in a complex system of representations. That is, I envision that concepts are structured representations. And though I remain neutral on a specific theory of concepts (if Machery ([2009]) is right, then there is a plurality of scientific entities we call concepts), we might take Jesse Prinz’ ([2002]) theory of ‘proxytypes’ as (at least roughly) representative of the sort of structure I envision concepts to have. The proxytype theory allows for a rich range of representations, from perceptual representations to more abstract ones, to constitute conceptual structure. However, I do not wish to simply subscribe to Prinz’ proxytype theory, since I think he underestimates the role that other sorts of representations have in constituting concepts; in particular, I have in mind representations derived from knowledge of natural language (for example, knowing that a word operates as a noun constrains the sort of thing the word refers to). (Cf. Viger [2006].) Whatever the details may be of an appropriate theory of concepts, what is important for the present paper is that concepts be understood as complex structures that are at least partly constituted by a range of representations that cluster in cognition to form the concept in question. (See below for discussion on what makes a concept cognitive rather than perceptual, if a concept can be constituted by perceptual representations.)
wasp Polistes dominulus constructs its nest permits 155 possible different
hexagonal arrangements. However, only eighteen arrangements are ever observed, and the
eighteen arrangements appear to follow a rule of thumb ‘in which the wasp places each cell
at the site where the sum of the ages of the existing walls is greatest’ (Hutchinson and
Gigerenzer [2005], p. 103). Another example: If there are two light sources in a copepod’s
environment, it will follow ‘a trajectory as if it were pulled toward each source with a force
proportional to source intensity/distance²’. Yet behavioural biologists explain this seemingly complex
behaviour by a simple rule of thumb that says that the copepod ‘adjusts its orientation so
as to maximise the amount of light falling on [its] flat eye’ (Hutchinson and Gigerenzer
[2005], p. 103). Hutchinson and Gigerenzer claim that such rules of thumb ‘correspond
roughly’ (Hutchinson and Gigerenzer [2005], p. 98) to heuristics, though in other works
Gigerenzer appears to make the stronger claim that ‘rule of thumb’ is synonymous with
‘heuristic’ (Gigerenzer [2007], [2008b]; Gigerenzer and Gaissmaier [2011]). Gigerenzer
([2007]) goes on to explain that many other animal behaviours can be understood as being
the result of following rules of thumb or heuristics.17
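The as-if description of the copepod’s behaviour (the complex trajectory it appears to follow, not the simple rule of thumb that actually explains it) can be written out as follows; the scene and numbers are invented for illustration.

```python
import math

def pull(lights, pos):
    """Net 2-D 'pull' on the animal: each light source contributes a
    force proportional to intensity / distance**2, directed at the
    source, as in the as-if description of the copepod's trajectory."""
    fx = fy = 0.0
    for (lx, ly, intensity) in lights:
        dx, dy = lx - pos[0], ly - pos[1]
        d = math.hypot(dx, dy)
        f = intensity / d**2
        fx += f * dx / d   # unit vector toward the source, scaled
        fy += f * dy / d
    return fx, fy

# A bright light to the east and a dimmer one to the north:
lights = [(10.0, 0.0, 4.0), (0.0, 10.0, 1.0)]
fx, fy = pull(lights, (0.0, 0.0))
# The trajectory bends mostly toward the brighter source (fx > fy).
```

The point of the contrast is that the animal need compute nothing like this vector sum; the simple rule ‘maximise the light falling on the flat eye’ produces the same trajectory.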
However, given the distinction made here between stimulus-driven and
heuristic-produced behaviour, Gigerenzer’s belief that such animal behaviours are the result
of following heuristics may be misguided. This certainly appears to be the case with respect
to the copepod example—it is more likely that copepods are guided by stimulus-driven
mechanisms rather than heuristics; it is doubtful that copepods have the wherewithal to
possess or develop any concepts, let alone representational systems or mechanisms.
Perhaps stimulus-driven behaviour is best characterised as merely satisfying a rule, or
17For example, Gigerenzer ([2007]) believes that birds of paradise mate-selection behaviour can be explained as the females following the heuristic ‘Look over a sample of males, and go for the one with the longest tail’; Gigerenzer also claims that aggressor-assessment in deer stags follows a pattern of sequential reasoning, which can be described by a heuristic to the effect of ‘Use deepness of roar to estimate the size of rival first; use appearance afterward’.
conforming behaviour to a rule. As Gigerenzer’s examples illustrate, satisfying a rule may
give the appearance that the behaviour is produced by a heuristic, but such
stimulus-driven behaviours do not involve a rule as a causal ingredient.18 We might say
that merely satisfying a rule is not sufficient for heuristic-produced behaviour. (I shall say
more on satisfying rules below.)19
It is interesting to note that some heuristics may importantly involve stimulus-driven
responses.20 Consider for instance Gigerenzer’s gaze heuristic:
(3) (i) Visually fixate on a moving object; (ii) move toward the object; (iii)
adjust your speed such that the angle of your gaze remains constant.
(Gigerenzer [2001])
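A toy simulation of (3), under simplifying assumptions of my own (two dimensions, idealised projectile motion, and placing the fielder directly where the constant angle dictates, as a stand-in for continuous speed adjustment), shows how the rule yields interception without any trajectory computation:

```python
def simulate_gaze_heuristic(v_x=8.0, v_y=20.0, g=9.8, dt=0.01):
    """Toy fly-ball chase. The fielder never computes where the ball
    will land; it just holds the tangent of its gaze angle to the
    ball constant, per heuristic (3)."""
    xb = yb = 0.0   # ball position
    vy = v_y        # ball vertical velocity
    xf = -1.0       # fielder starts behind the launch point
    t = 0.0
    while t < 0.2:             # (i) fixate: briefly watch the ball rise
        xb += v_x * dt
        yb += vy * dt
        vy -= g * dt
        t += dt
    k = yb / (xb - xf)         # tangent of the now-fixed gaze angle
    while yb > 0:              # (ii)-(iii) move so the angle stays constant
        xb += v_x * dt
        yb += vy * dt
        vy -= g * dt
        xf = xb - yb / k
    return xb, xf

ball_x, fielder_x = simulate_gaze_heuristic()
# the fielder ends up approximately where the ball comes down
```

Nothing in the loop represents the ball’s future path; holding one perceptual quantity constant suffices, which is why the same rule is available to creatures that plausibly lack any conceptual wherewithal.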
According to Gigerenzer ([2007]), this heuristic appears to be used by baseball players
when they catch fly-balls, as well as by airplane pilots and boaters when they want to avoid
collisions (when one wants to avoid a collision, one uses the gaze heuristic to adjust one’s
speed to ensure that one doesn’t meet the object in question). Gigerenzer also claims that
birds, fish, flies, and other creatures use the gaze heuristic to catch prey. Now, it is difficult
18Many of Gigerenzer’s other assertions about animals following heuristics (see note 17) are likewise thrown into doubt, for the behaviour in question may be stimulus-driven. But I haven’t given here any conclusive argument or proof that the animal behaviours in question aren’t heuristic as I understand the term here (i.e., that the behaviour isn’t concept-mediated). Whether such behaviour invokes conceptual representations, and whether animals invoke rules as a causal ingredient in their behaviour, are empirical matters. The point here is just an exercise in drawing a needed distinction between stimulus-driven behaviour and heuristic-produced behaviour.
19I suppose that some might wish to claim that what I’m calling stimulus-driven behaviour may nevertheless satisfy a heuristic rule, and that in this way stimulus-driven behaviour still can be characterised as heuristic in nature. This move should be resisted, however, since any stimulus-driven behaviour can be characterised as satisfying some arbitrary rule or other. For example, the shudder response to a loud noise can be characterised as satisfying the rule ‘Shudder upon hearing an unexpected loud noise’, or ‘Display fear behaviour when confronted with unexpected intense auditory input’, or a variety of others. I take this to be an unacceptable result because the role such rules would have in stimulus-driven behaviour is unclear, and by all lights vacuous.
20Thanks to an anonymous referee for some insightful comments which led me to the present discussion, as well as some significant discussion in the following subsection.
to determine whether the gaze heuristic is inherently concept-mediated. While it is almost
certain that a human can use (3) conceptually, such as when a novice baseball player is
learning how to catch fly-balls, fish and insects arguably don’t possess the appropriate
conceptual wherewithal to likewise deploy (3). In this way, the cases of fish and insects
seem more like the copepod case of stimulus-driven behaviour. However, if Gigerenzer is
correct that (3) is the mechanism underlying the behaviour of baseball players and insects
alike, and that the gaze heuristic is indeed a heuristic, then it appears that my proposal
that heuristics are concept-mediated is thrown into question. In response, I should like to
emphasise that the distinction between stimulus-driven and heuristic-produced behaviour
should not be taken in the mutually exclusive sense. Rather, I make the distinction to
exclude mere stimulus-driven behaviour from (what I take to be) a useful and robust
account of heuristics. My claim is just that, if a behaviour is heuristic-produced, then the
behaviour is produced by a concept-mediated mechanism. I’m not making the further
claim that no stimulus-driven mechanisms can be involved. Furthermore, though it is
possible for the gaze heuristic to be concept-mediated, as in the case of the novice baseball
player, it need not. This seems to be obvious from the fish and insect cases, but also from
the fact that skill development can often ingrain behaviours to the extent that behaviour
becomes stimulus-driven. In light of these remarks, I submit that the gaze heuristic, insofar
as it is a heuristic, is a special sort of heuristic. This special sort of heuristic is captured by
the distinction I make in the following subsection.
3.2 Perceptual vs. cognitive heuristics
An additional important distinction to make is between perceptual heuristics and cognitive
heuristics. Perceptual psychologists who subscribe to the perceptual heuristics approach to
vision or speech perception believe that the human perceptual system is too deficient to
fully compute perceptual information coming in from the environment; or that the
perceptual information in the environment is too vague to produce a certainly veridical
output (Braunstein [1994]). And so, it is claimed, the human perceptual system relies on
heuristics to make approximations. An example of a visual perceptual heuristic is:
(4) If a repetitive texture gradually and uniformly changes scale, then
interpret the scale change as a depth change. (Feldman [1999], p. 215)
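Heuristic (4) can be sketched as a toy inference from texture-element scale to relative depth; the inverse-scale rule and the numbers below are my own illustrative assumptions, not Feldman’s model.

```python
def depth_from_texture(scales):
    """Heuristic (4), sketched: a gradual, uniform change in texture
    scale is read as a change in depth. Assume relative depth is
    proportional to 1/scale (smaller elements are taken to be
    farther away)."""
    reference = scales[0]
    return [reference / s for s in scales]

# Texture elements shrinking uniformly across the image:
scales = [1.0, 0.8, 0.64, 0.512]
depths = depth_from_texture(scales)
# depths increase monotonically: the surface is seen as receding
```

Note that the rule simply interprets its input; whether the scale change is really due to depth (rather than, say, a painted gradient) is exactly the sort of question the heuristic never asks, which is why such heuristics can be fooled by illusions.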
Although such perceptual heuristics are often expressed in conceptual terms, they are
nonconceptual in operation.
Now, there has been a recent movement in psychology and philosophical psychology,
commonly referred to as ‘embodied cognition’ or ‘situated conceptualisation’ (see Barsalou
[2005]), which understands conceptual processing as ‘grounded in’ the perceptual systems
and involving the (re)activation of perceptual representational processing (cf. note 16). In
light of this movement, which enjoys considerable empirical support, I don’t wish to make
any substantial claims about the relation between perception and cognition, or between the
perceptual systems of the brain and the conceptual systems. However, I believe that the
distinction with respect to perceptual and cognitive heuristics is useful, and indeed needed
in order to narrow in on certain cognitive kinds commonly discussed in the psychology and
philosophy literatures. Thus, I propose the following: Perceptual heuristics can be
understood to use only information resulting from and maintained by occurrent distal and
proximal stimuli, whereas the information used by cognitive heuristics does not rely on
occurrent stimuli.21 If this distinction is granted, then perceptual heuristics constitute a
special class of heuristics insofar as they are mediated by perceptual representations rather
21This account is inspired by a talk given by Jacob Beck at The 9th International Symposium of Cognition, Logic and Communication: Perception and Concepts, in Riga, Latvia, May 2013. I make no claim that Beck would be pleased with my gloss here.
than by concepts. In this way, (4) will constitute a perceptual heuristic but not a
conceptual (cognitive) heuristic. On the other hand, (1) and (2), given as paradigm cases
of heuristics above, are not perceptual heuristics. Rather, they are cognitive in nature;
they are cognitive heuristics.
Furthermore, we might now understand that the gaze heuristic (3) is not so much a
cognitive heuristic as it is a perceptual heuristic, despite Gigerenzer’s seeming claims to the
contrary. We might characterise the gaze heuristic more precisely as a percept-motor
heuristic, since it also involves online adjustments to bodily motion. In any case, as a
heuristic for object tracking, occurrent distal and proximal stimuli are required. This leaves
open the possibility for nonhuman animals that lack the appropriate conceptual
wherewithal to employ the gaze heuristic as a noncognitive percept-motor heuristic. So, on
the present account, the gaze heuristic need not be a cognitive heuristic. I say ‘need not
be’ because, as intimated above, it is possible for humans to use such a heuristic
cognitively (i.e., conceptually).
Before continuing on, it would be instructive to say something about the interesting
fact that nonhuman animals possess and use some of the same heuristics as humans. This
appears to be true of the gaze heuristic, and it may be true of others. If I were to guess, I
would suppose that the shared heuristics among human and nonhuman species are mainly
perceptual in nature. The motivation for this supposition has to do with having
evolutionary problems in common, along with possessing similar (though not always
identical) perceptual systems to solve those problems. In general, I believe we can point to
certain core systems or core capacities to explain at least some of the similarities and
differences between the sorts of heuristics species possess and use.22 Humans and a great
22The term ‘core capacities’ is used by Gigerenzer to explain how heuristics work (Gigerenzer and Gaissmaier [2011]), and in some respects the present account is similar to Gigerenzer’s. He claims that heuristics exploit core capacities. However, in contrast, I prefer to think of heuristics as operations employed by core capacities.
many nonhuman species have core visual systems, and object-tracking is a common
problem; so it seems sensible to assume that similar solutions were arrived at. On the other
hand, humans don’t always share core capacities with nonhuman species. Ants have core
systems and capacities to lay and recognise pheromone trails, and they may therefore
possess and utilise heuristics to track pheromone trails. It would be impossible for humans
to possess these sorts of heuristics since humans lack the requisite core capacities. Such
pheromone heuristics, if any there are, would be sorts of perceptual heuristics in the sense
described above insofar as they are not concept-mediated. Of course, it is possible that
humans and nonhumans share some cognitive heuristics, but in such cases similar cognitive
(conceptual) systems would be deployed. Shared conceptual systems across species, I
suppose, would be less common than shared or similar core perceptual capacities with
perceptual heuristics. At the same time, if all of this is correct, we might be able to learn
something about the perceptual and conceptual representational systems of nonhuman
animals if and when they deploy similar heuristics as humans.
3.3 Computational vs. cognitive heuristics
Some may take issue with the contention that heuristics exploit either conceptual or
perceptual representations by pointing out, for instance, that many heuristics developed by
computer programmers operate within systems that (arguably) do not possess, and
(arguably) are not capable of possessing, conceptual or perceptual
representations—representations of the sort we are wont to ascribe to cognitive systems.
To respond to this objection, we might draw a further distinction between cognitive
heuristics and what I will call computational heuristics.
I don’t want to make a big fuss about this proposed distinction. The point of it is to
home in on the sorts of operations that are distinctly cognitive, so that we might be able to
identify a certain class of heuristics as a cognitive kind. This latter task is important if, as
discussed in section 1, cognitive science is in the business of discovering and studying the
kinds within its domain. In so doing, I suggest that we set aside operations that may be
labelled ‘heuristic’ but do not require any sort of cognition—conceptual or perceptual—to
execute. A case in point is heuristics, with varying complexities, that facilitate economical
search within decision-trees. Nothing cognitive need be involved in such a decision-tree
search. Although cognitive heuristics may very well be computational in nature insofar as
the computational theory of mind holds, the way I am using ‘computational’ here is to refer
simply to processes that do not operate within a representational cognitive architecture.23
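A minimal sketch of such a computational heuristic, contrasting greedy, locally guided descent through a decision-tree with exhaustive search (the tree, scores, and values are invented for illustration):

```python
def exhaustive_best_leaf(tree):
    """Examine every leaf: guaranteed to find the best value, costly."""
    if "children" not in tree:
        return tree["value"]
    return max(exhaustive_best_leaf(c) for c in tree["children"])

def greedy_best_leaf(tree):
    """Computational heuristic: at each node, descend only into the
    child with the best local score. Cheap, but it can miss the
    optimum hidden behind a locally unpromising branch."""
    while "children" in tree:
        tree = max(tree["children"], key=lambda c: c["score"])
    return tree["value"]

tree = {"score": 0, "children": [
    {"score": 5, "children": [{"score": 1, "value": 6},
                              {"score": 2, "value": 7}]},
    {"score": 3, "children": [{"score": 0, "value": 9}]},  # hidden optimum
]}
# exhaustive search finds 9; greedy descent follows the locally
# better branch and settles for 7
```

Nothing in this search requires conceptual or perceptual representations; it is ‘heuristic’ purely in the economical-but-fallible sense, which is what the proposed label ‘computational heuristic’ is meant to capture.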
Notice that computational heuristics are still heuristics—they are just of a sort
different from cognitive heuristics, just as perceptual heuristics are of a different sort. And
it is an open matter whether and to what extent representational, nonheuristic cognitive
processes involve computational heuristics. However, the main point to the present
distinction is that cognitive heuristics, as I propose to understand them, are processes that
operate within a conceptual representational cognitive architecture.
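For concreteness, the take the best heuristic mentioned in note 23 can be sketched as lexicographic cue search; the city-size task is a version of Gigerenzer and Goldstein’s well-known example, but the cue values below are invented.

```python
def take_the_best(a, b, cues):
    """Sketch of take the best: try cues in (assumed) order of
    validity; the first cue that discriminates between the two
    options decides, and search stops there."""
    for cue in cues:
        ca, cb = cue(a), cue(b)
        if ca != cb:                 # this cue discriminates: stop
            return a if ca > cb else b
    return a                         # no cue discriminates: guess

# Toy 'which city is larger?' task with hypothetical cue values
# (1 = cue present, 0 = absent):
cities = {
    "Hamburg": {"capital": 0, "team": 1, "airport": 1},
    "Leipzig": {"capital": 0, "team": 0, "airport": 1},
}
cues = [lambda c: cities[c]["capital"],   # most valid cue first
        lambda c: cities[c]["team"],
        lambda c: cities[c]["airport"]]
# the first cue ties, the second discriminates -> "Hamburg"
```

Run by a human, this lexicographic search operates over conceptual representations of cities and cues, and so counts as a cognitive heuristic; implemented in a library-search program (note 23), the very same procedure counts as a computational heuristic.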
3.4 Methodological vs. inferential heuristics
Finally, I make a distinction between what I will call methodological heuristics and
inferential heuristics. This distinction falls within the category of cognitive heuristics. The
23Perhaps the proper relation between computational heuristics and cognitive heuristics is one in which the latter is a subset of the former, rather than an oppositional relation. But the purpose of making the distinction doesn’t hinge on the proper relation between the two. All that matters is that we identify cognitive heuristics as those that involve a representational cognitive system. Thus, even when what is best called a cognitive heuristic is employed in a nonrepresentational, noncognitive system, I would call it a computational heuristic in that setting. An example: Gigerenzer’s take the best heuristic (see below) has been successfully implemented for automated literature search in libraries (Lee et al. [2002]). When implemented in a computer program for literature search in libraries, take the best is a computational heuristic; when employed by a human, the heuristic is cognitive.
etymological understanding of the term ‘heuristic’ concerns methods of discovery. It was in
this sense that George Polya ([1957]) employed the term to refer to the endeavour of
studying the strategies and processes typically useful for solving problems. Some of his
strategies included:
(5) Draw a diagram when trying to solve a problem.
(6) If you can’t solve a problem right away, try an indirect proof.
(7) Try to restate the problem in different terms.
(8) Assume you have a solution and work backwards.
(9) Find a related problem that has been solved before, and try to use its
result or method.
Heuristics, in this tradition, are methodological devices for learning and
problem-solving. It is in this sense that the use of models and analogies are heuristic
devices to help us learn or understand something about our world, and that the techniques
offered by Polya are heuristic aids in solving problems. This is what I intend to refer to by
methodological heuristics. As such, methodological heuristics are what concern
philosophers of science and cognitive scientists who are interested in creative thought, the
process of discovery, and the construction and improvement of theories in science (for
example, Popper [1959]; Lakatos [1963–4]; Wimsatt [2007]; Bechtel and Richardson [2010]).
However, a different meaning of ‘heuristic’ was invoked in psychology with the Gestalt
theorists, and later with Simon’s notion of ‘satisficing’. This meaning was popularised in
the 1970s by the work of Daniel Kahneman and Amos Tversky (see Kahneman et al.
[1982]; Kahneman and Tversky [1973]; Tversky and Kahneman [1974]). The heuristics
studied by Kahneman and Tversky (and others such as Gigerenzer, and Simon to some
extent, although both Gigerenzer and Simon had interests in methodological heuristics) are
not precisely the methods of discovery and invention intimated by the term’s etymology.
The important distinction to recognise here is between the respective domains of use, and
their respective functions in our cognitive lives. As tools or devices for discovery and
problem-solving, methodological heuristics are aids to learning and understanding. In
contrast, the heuristics that are of interest to Kahneman and Tversky (and certain other
psychologists) are principles that guide judgement and inference. Thus, these latter are
what I call inferential heuristics. Inferential heuristics are not about learning or
understanding per se, but serve to facilitate judgements, inferences, and decision-making.
In a certain sense, inferential heuristics may be understood as special cases of
methodological heuristics, where inferential heuristics aid in the discovery of a solution to a
problem by providing the appropriate judgements and inferences. Nevertheless, inferential
heuristics remain distinct in kind from methodological heuristics insofar as each serves a
distinct cognitive function. Moreover, we can distinguish inferential heuristics from
methodological heuristics by making some generalisations. Inferential heuristics are often
epistemically opaque—people often employ these heuristics without knowing that they do
so, and without knowing the nature of these heuristics (that is, absent a psychologist
informing one of such things). Methodological heuristics, on the other hand, are generally
epistemically transparent—these methods are more or less easily identified; we often
consciously and deliberately employ them; their usefulness is usually known; and, because
of this, an individual is able to compare and manipulate them (and all this without a
psychologist informing one of such things). Furthermore, methodological heuristics are
typically cultivated by experience and therefore vary between individuals, whereas
inferential heuristics can be to a large extent immune to experience and nearly universal,
and some may even be innate.24 For example,
(9) Find a related problem that has been solved before, and try to use its
result or method,
is a technique that one acquires by working through many problems and drawing abstract
principles, whereas the representativeness heuristic
(2) Probabilities are evaluated by the degree to which one thing or event is
representative of (resembles) another; the higher the representativeness
(resemblance) the higher the probability estimation;
and the availability heuristic
(10) The frequency of a class or the probability of an event is assessed
according to the ease with which instances or associations can be brought
to mind (Tversky and Kahneman [1974]);
are pervasive, as those in the ‘heuristics and biases’ tradition have shown, and are not
cultivated or developed through conscious, intentional experience; in fact they are in many
respects immune to conscious, intentional revision.
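As a toy model only (the sampling story, the function name, and the ordered 'memory' list are my own illustrative assumptions, not Tversky and Kahneman's account), the availability heuristic in (10) might be caricatured as estimating frequency from whatever instances come to mind first:

```python
def availability_estimate(memory, category, k=10):
    """Caricature of the availability heuristic: estimate the
    frequency of a category as its proportion among the first k
    instances retrieved, where 'memory' lists instances in order
    of ease of retrieval (easiest first)."""
    retrieved = memory[:k]
    return sum(1 for item in retrieved if item == category) / len(retrieved)

# Vivid events come to mind easily, so their frequency is overestimated.
recalled = ['crash', 'crash', 'safe', 'crash', 'safe']
print(availability_estimate(recalled, 'crash', k=4))  # → 0.75
```

The sketch makes the opacity point vivid: nothing in the procedure computes an actual frequency; the estimate is fixed entirely by the order in which instances happen to be retrieved.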
This is not to say that inferential heuristics cannot be instantiated in higher-order
cognition, however. For instance, it is possible that one can learn and consciously employ
the availability or representativeness heuristic (cf. the discussion on the gaze heuristic
above). Moreover, it may not always be clear whether a given heuristic falls in the
methodological or inferential category. That there are fuzzy cases, however, does not bear
24 I use the qualifiers 'often' and 'generally' and 'typically' because there will undoubtedly be exceptions. The exceptions, I suppose, can contribute to our understanding of the natures of methodological and inferential heuristics.
negatively on the methodological-inferential distinction drawn here. Indeed, fuzzy cases
should not come as a surprise since it is not uncommon for us to learn something about the
world while we make inferences, nor is it uncommon that the very act of making a
judgement solves a problem. Nevertheless, the functional distinction between
methodological and inferential heuristics can be maintained so long as the functional role of
the heuristic can be determined.
3.5 Summary of distinctions
Let us briefly pause here to summarise the distinctions presented thus far.
Heuristic-produced behaviour is to be distinguished from stimulus-driven behaviour.
Moreover, there are many kinds of heuristics, including (though perhaps not exhausted by)
computational, perceptual, and cognitive. And of cognitive heuristics we can identify
different sorts: methodological heuristics and inferential heuristics. Some of these heuristics
may overlap or relate in ways that I haven’t accounted for here. Nevertheless, we are
presented with a working taxonomy to help us to understand past and current research and
arguments concerning heuristics, and to frame and guide future research and arguments.
The taxonomy is represented in Figure 1.
4 Characterising Cognitive Heuristics
Now that I have distinguished between different kinds of heuristics and discussed some of
the relations between them, I will develop a more detailed sketch of the class of heuristics
that seems to be of central interest to theorising about human cognition, namely cognitive
heuristics. Although my concern will be for cognitive heuristics in general, the
characterisation I will give will be most applicable to inferential heuristics. This will
become apparent quite quickly. However, as I proceed I will comment on how the
[Figure 1, not reproduced here: heuristics divide into perceptual, cognitive, and computational heuristics; cognitive heuristics divide into inferential and methodological heuristics; stimulus-driven behaviour falls outside the class of heuristics.]
Figure 1: A taxonomy of heuristics represented as class inclusions/exclusions. I leave open the possibility of some overlap of certain classes of heuristics.
characterisation of inferential heuristics I develop relates to methodological heuristics, as
well as to perceptual and computational heuristics. In this section, I will refer to cognitive
heuristics simply by ‘heuristics’ unless otherwise specified.
4.1 Heuristics as rules of thumb
Heuristics are almost invariably characterised as ‘rules of thumb’. So common is this
characterisation that one may even take ‘rule of thumb’ to be synonymous with ‘heuristic’
(see Pearl [1984]). (This use of ‘rule of thumb’ should not be confused with its use by
behavioural biologists which, in section 3.1 above, was described to be more associated
with stimulus-driven behaviour than with heuristic-produced behaviour.) Though I do not
want to commit to wholly endorsing this synonymity (see below), such a
rule-of-thumb definition illuminates several central qualities of heuristics. One such quality
is that heuristics are rules of some sort. Notice however that as rules, heuristics need not
(and perhaps should not) be understood as normative.25 Rather, as rules, heuristics are
procedures that can be specified and applied in a given situation. That is, ‘rule’, as it is
being used here, refers not to a prescriptive guide for conduct, but to a procedure that
(nonprescriptively) regulates or governs some process, or describes a method for performing
some operation (cf. Rawls [1955]).26
It would be useful here to make some distinctions with respect to the roles rules can
play in action or behaviour. It is common to differentiate between satisfying a rule and
following a rule (cf. Searle [1980]; Wittgenstein [1953], §§185–242). To satisfy a rule is
simply to behave in such a way that fits the description of the rule—merely to conform
behaviour to the rule. It is in this sense that the motions of the planets satisfy the rules
embodied by classical physics. On the other hand, following a rule implies a causal link
between the rule and some behaviour, and moreover that the rule is an intentional object.
Fodor’s recent account is particularly illuminating here: ‘following [a rule] R requires that
one’s behavior have a certain kind of etiology; roughly, that one’s intention that one’s
behavior conform with R explains (or partly explains) why it does conform to R’ (Fodor
[2008], pp. 37–8). Thus, merely satisfying a rule is not sufficient for following the rule.
Now, our concern with respect to cognitive heuristics is not so much about behaviour
as it is about reasoning.27 Let us therefore say that to satisfy a rule in reasoning is to
reason in such a way that merely conforms to or fits a description of the rule, whereas to
25 Kahneman and Tversky oppose heuristics to rules (see their [1982], [1996]). But they are contrasting normative rules with heuristics, the latter of which I take here to be characterised as descriptive rules.
26 Of course, there are normative matters with respect to the application of certain heuristics in certain situations. These are matters in which Gigerenzer and his followers are engaged. They defend a view of ecological rationality, which asserts (very roughly) that a heuristic is rational in a given environment insofar as it is adaptive in that environment. My own view is that a great deal more must be said about rationality than what ecological rationality offers in order to make it a substantive philosophical view. But a serious discussion of this matter is beyond the scope of this paper, so I set it aside here.
27 Reasoning might be construed as a special case of behaviour—cognitive behaviour, as it were—in which case there is not much substance to this distinction. But the point of the present discussion will be preserved either way.
follow a rule in reasoning implies a causal role for the rule in reasoning by its being an
intentional object (cognitively represented). I suppose that in both these ways a rule can
(nonprescriptively) regulate, govern, or describe a reasoning process.
We saw above that stimulus-driven behaviour is best characterised as satisfying a rule;
and since rule-satisfying behaviour does not implicate the involvement of cognitive
representations, stimulus-driven behaviour is distinguished from heuristic-produced
behaviour. In terms recently stated, merely satisfying a rule is not sufficient for
heuristic-produced behaviour. I believe we can likewise observe that satisfying a rule in
reasoning is not sufficient for heuristic reasoning, though not because satisfying a rule in
reasoning doesn’t involve cognitive representations (indeed, I’m not sure how much sense
can be made of nonrepresentational reasoning). Rather, satisfying a rule in reasoning is not
sufficient for heuristic reasoning because some heuristic must have a causal role in a
heuristic reasoning process, otherwise we would not be able to say that the reasoning
process was in fact heuristic. To paraphrase Fodor, heuristic reasoning requires that it have
a certain kind of etiology. Nevertheless, I do not think that heuristics are necessarily rules
that are followed in reasoning, for I do not think that heuristics are necessarily represented
in cognition. Fodor ([1975], [2000], [2008]) argues that reasoning generally requires
representing the rules according to which one reasons.28 But this does not seem to be a
necessary requirement of reasoning, especially with respect to heuristics. As we saw,
inferential heuristics are often epistemically opaque, and they are pretty much automatic
and sometimes impervious to change. In any case, I see no reason why heuristics must be
intentional objects in cognition. It seems entirely plausible that they can be rules that
28 More specifically, Fodor believes that acting (that is, intentional acting, as opposed to reflex or thrashing about) requires planning, which requires deciding on a plan, which requires representing plans; on the other hand, he believes that belief fixation requires hypothesis formation and confirmation, which requires representing hypotheses.
operate in cognition, but are neither consulted nor represented when employed. And it is
certainly not impossible for computational procedures to be executed without being
represented; even Fodor ([2008], p. 37) admits that there must be at least some rules that
are not represented in computations, e.g., those that instruct a Turing machine to move
along the tape. Thus, I propose that heuristics are plausible candidates to be
unrepresented rules in cognition, and hence that heuristics are employed, having a causal
role in reasoning, but not followed. This is not to say that being unrepresented is a
necessary condition for a rule to be heuristic. Indeed, many heuristics are explicitly
represented; this is especially true of methodological heuristics, and I believe it is obvious
that some inferential heuristics can be explicitly represented, such as when novices begin to
learn the skills of a trade. My claim, however, is that heuristics need not be, and often are
not, represented in cognition.
In light of these considerations, let me make a further distinction between, on the one
hand, satisfying a rule and following a rule, and on the other hand, acting in accordance with
a rule. ‘Acting in accordance with a rule’ is sometimes meant in the same sense as
satisfying a rule, but I will suggest a special way to understand it here. By ‘acting in
accordance with a rule’, I propose to mean that a rule is guiding one’s behaviour and that
there is a causal link between the rule and the behaviour—in Fodor’s terms, the rule is
implicated in the etiology of the behaviour that accords with the rule—but the rule is not
an intentional object in cognition. In terms of reasoning, reasoning in accordance with a
rule means that the rule has a causal role in the reasoning process, but is not represented.
Notice that in this way a rule still (nonprescriptively) regulates, governs, or describes a
reasoning process.
This notion of acting in accordance with a rule is roughly what Daniel Dennett ([1991])
refers to when he talks about ‘intermediate regularities’ in systems. In ‘Real Patterns’, he
explains intermediate regularities thus:
Philosophers have tended to ignore a variety of regularity intermediate between
the regularities of planets and other objects ‘obeying’ the laws of physics and
the regularities of rule-following (that is, rule-consulting) systems. These
intermediate regularities are those which are preserved under selection pressure:
the regularities dictated by principles of good design and hence homed in on by
self-designing systems. That is, a ‘rule of thought’ [. . . ] no more need be
explicitly represented than do the principles of aerodynamics that are honored
in the design of birds’ wings. (Dennett [1991], p. 43)
We can easily understand heuristics as such intermediate-level ‘rules of thought’ which are
happened upon throughout the course of learning, or in Dennett’s terms, throughout the
course of self-design. In fact, I am inclined to believe that inferential heuristics generally
are rules that one reasons in accordance with (as I am using the phrase), whereas
methodological heuristics are usually (though perhaps not always) followed. Whether this
is true, however, does not bear on the present point, which is that heuristics in general do
not have to be represented when employed. Let us therefore observe that:
H1 Heuristics are cognitive procedures that can be expressed as rules one
reasons in accordance with,
where ‘reasons in accordance with’ is to be understood in the special sense given here. H1
expresses the first quality of heuristics illuminated by the rule-of-thumb definition.
Another quality of heuristics illuminated by the rule-of-thumb definition is that a rule
of thumb is so called because a thumb can be used as a device for rough estimations;
historically, one’s thumb was likely used as a tool for rough measurements. We might take
this observation to imply that heuristics are imprecise cognitive tools. But this is too broad
a claim, for there are many imprecise inferential procedures that we would hesitate to call
heuristic. For instance, procedures that are irrelevant to the task at hand, or that
invariably result in disaster, are (in some sense) imprecise, but such procedures are by no
means heuristic (cf. discussion in section 2.2 above). Moreover, Gigerenzer has shown that
heuristics can sometimes outperform complex operations on certain decision-tasks. For
example, in choosing which of two cities is larger, Gigerenzer and Goldstein ([1996], [1999])
show that the recognition heuristic
(11) (i) Search among alternatives; (ii) stop search when one alternative is
recognised; (iii) choose the recognised object,
is more accurate than weighted linear models and multiple regression. Such results throw
into doubt the notion that heuristics are, or at least must be, imprecise.
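The recognition heuristic in (11) can be sketched as a minimal procedure. This is an illustrative sketch only, not Gigerenzer and Goldstein's implementation; the function name and the representation of what is recognised as a set are assumptions for illustration:

```python
def recognition_heuristic(a, b, recognised):
    """(i) Search among the two alternatives; (ii) stop when exactly
    one alternative is recognised; (iii) choose the recognised one."""
    if (a in recognised) != (b in recognised):  # exactly one is recognised
        return a if a in recognised else b
    return None  # the heuristic does not discriminate here

# e.g., inferring which of two cities is larger
print(recognition_heuristic('Berlin', 'Dornbirn', {'Berlin', 'Munich'}))  # → Berlin
```

Note that the procedure consults no magnitude information at all; its accuracy depends entirely on recognition being correlated with the criterion in the environment.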
A natural way around this issue is to invoke satisficing. Simon ([1957]) introduced the
notion of satisficing to describe procedures that do not aim to meet the standards of
rational decision theory (or any theory of rationality governed by the dictates of logic and
probability theory), but instead aim to meet some aspiration level that is in some sense
‘good enough’ relative to the desires, interests, and goals of the agent. It is certainly
possible for a satisficing procedure to set its aspiration level commensurate with an ideal, if
one exists. However, the point of satisficing is that the bar is flexibly positioned according
to the goals and desires of the satisficer.29
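Simon's contrast between optimising and satisficing can be put in a minimal sketch (the names and the numeric value function are illustrative assumptions): rather than ranking every alternative, the procedure stops at the first one that meets the aspiration level.

```python
def satisfice(options, value, aspiration):
    """Search options in the order encountered and return the first
    one whose value meets the aspiration level; do not search for
    the optimum."""
    for option in options:
        if value(option) >= aspiration:
            return option
    return None  # no option was 'good enough'

# With aspiration 5, search stops at 7 even though 9 is available.
print(satisfice([3, 7, 9], lambda x: x, 5))  # → 7
```

The flexibility of the bar is visible in the sketch: raising the aspiration towards the ideal makes the procedure approximate optimisation, while lowering it trades accuracy for a shorter search.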
29 To revisit an issue discussed in section 2.2 above, one may wonder whether heuristics represent ill-defined problems as well-defined problems, and thus whether heuristics just approximate norms. Though I don't wish to deny that this is a possibility, I don't think that this is generally the case. We have already seen the difficulties in representing ill-defined problems as well-defined problems. One of the key points of invoking the notion of satisficing is that heuristics simply do not need to do any of this modelling business. Flexibly setting goals relative to one's needs and desires seems to be a better and more efficient strategy for finite and constrained creatures such as ourselves than first modelling the problem and then setting about to (re)solve
Understanding heuristics as useful procedures that do not necessarily aspire to meet
some ideal implies that heuristics are essentially satisficing procedures. Sometimes this
might not be entirely obvious. Consider two examples of inferential heuristics. The first is
Gigerenzer’s take the last heuristic,
(12) Use the cue that discriminated on the most recent problem to choose an
object; if the cue does not discriminate use the cue that discriminated the
time before last; and so on. (Gigerenzer [2001])
This may not appear at first sight to be a satisficing procedure. However, if we take
determining a method of choice to be an initial goal in a problem, a certain aspiration level
might be set with respect to this initial goal, and using the method that was used to solve
the previous problem would satisfice. Consider also Kahneman and Tversky’s availability
heuristic, which we saw above:
(10) The frequency of a class or the probability of an event is assessed according
to the ease with which instances or associations can be brought to mind.
Again, this may not appear to be a satisficing procedure, but upon closer examination we
see that this heuristic opts not to determine the actual frequency or probability of an event
via complex calculations, but instead sets an aspiration level relative to the ease with
which instances come to mind; thus, availability of instances is a ‘good enough’, i.e.,
satisficing, indicator of frequency.
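The take the last procedure in (12) can likewise be sketched, under the simplifying assumptions (mine, not Gigerenzer's specification) that cues are numeric functions and that the alternative with the higher cue value is chosen:

```python
def take_the_last(a, b, cue_history):
    """Try the cue that discriminated most recently; if it does not
    discriminate between a and b, fall back to the cue used the time
    before last, and so on through the history."""
    for cue in reversed(cue_history):  # most recently successful cue first
        if cue(a) != cue(b):
            return a if cue(a) > cue(b) else b
    return None  # no remembered cue discriminates

# 'size' was used most recently; it ties here, so the search falls back
# to 'coast', which discriminates in favour of the first alternative.
size, coast = (lambda o: o['size']), (lambda o: o['coast'])
print(take_the_last({'size': 3, 'coast': 1}, {'size': 3, 'coast': 0}, [coast, size]))
```

The satisficing character shows up in the stopping rule: the procedure settles for the first cue that discriminates rather than searching for the most valid one.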
Methodological heuristics, on the other hand, appear to resist the characterisation as
satisficing procedures. For example, it is difficult to see how Polya’s heuristics above (5)–(9)
it. However, even if a satisficing procedure would model an ill-defined problem as a well-defined one, so beit. The account I provide still applies, so long as the formulated well-defined goal (the ideal) need not alwayscoincide with the ideal(s) of a formal rationality theory.
are satisficing strategies. Nevertheless, methodological heuristics are strategies that aim to
learn or understand things in the world, not by reverting to some standard operations such
as formal proofs or deductive reasoning, but by using flexible, defeasible procedures. Once
this is acknowledged, methodological heuristics begin to look like satisficing strategies, or
at least strategies arising from a need to satisfice. Polya makes this point:
We shall attain complete certainty when we shall have obtained the complete
solution, but before obtaining certainty we must often be satisfied with a more
or less plausible guess. We may need the provisional before we attain the final.
We need heuristic reasoning when we construct a strict proof as we need
scaffolding when we erect a building. (Polya [1957], p. 113)
Hence, methodological heuristics appear to be procedures that are often indispensable to
attaining complete and certain solutions; they are a provisional step that must be taken
sometimes before we reach our final goal; they are a satisficing step in the problem-solving
process.
Understanding heuristics as satisficing procedures helps our characterisation in a
number of ways. First, it does justice to the evidence that heuristics can sometimes (or
often) result in good inferences, since satisficing techniques can sometimes (or often) hit
the mark, and also since what is considered ‘good enough’ can very well be close to what is
ideal. We might note that this is the sense in which computational heuristics are generally
understood in computer programming and AI research—heuristics approximate some
presumed normative ideal (such as a correct or an optimised outcome), and some heuristics
are considered better than others if they result in better approximations (Horgan and
Tienson [1998]). Furthermore, this is roughly the sense in which many perceptual heuristics
are conceived as providing approximations to an ideal, especially the perceptual heuristics
employed by the human visual system.
Moreover, contra the HN definition, understanding heuristics as satisficing procedures
offers a positive account of what heuristics can do rather than just stating what they
cannot or do not do. This speaks to the suggestion that heuristics are useful devices for
inference and deliberation. But if a procedure fails to approximate entirely—if it produces
random results, or results that are completely off the mark—it is utterly useless, and thus
should not be considered heuristic.
Given these observations, understanding heuristics as satisficing procedures gives us a
robust notion. Let us therefore amend our characterisation thus:
H2 Heuristics are satisficing cognitive procedures that can be expressed as rules
one reasons in accordance with.
There are further essential features of heuristics that the rule-of-thumb definition
illuminates. For what a rule of thumb lacks in precision is made up for by its convenience
and reliability. A thumb is not as accurate for measuring as markings on a rigid stick, but
it is readily accessible, easily deployed, and dependable. The same can be said of heuristics.
A heuristic is dependable because it can be counted on to produce a satisficing solution.
This is in part why heuristics are powerful inference devices, and may in part explain why
heuristics are so ubiquitous in human reasoning and inference (Gigerenzer [2001]). Yet it is
not by chance or happenstance that a heuristic succeeds or fails. Rather, heuristics process
specific kinds of information in systematic and predictable ways. This accounts for why
they work so well in certain domains, but fail miserably in others. According to some
researchers, such as Gigerenzer, heuristics are attuned to specific domains or environments
that exhibit stable informational structures, and this is what enables a heuristic to be
successful in that domain or environment. However, if the heuristic is applied in another
domain or environment in which the same informational structures do not obtain, then the
heuristic will fail, and biases will ensue (Gigerenzer [2000], [2001], [2006]).
Heuristics are readily accessible because few cognitive resources are required to engage
them. At least this is so for inferential heuristics. We rarely, if ever, have to invest much
time and processing in thinking over whether we will employ one inferential heuristic or
another (or some other procedure). On the contrary, the empirical evidence shows that
inferential heuristics are typically available for use without the need for reflection. In fact,
many psychologists agree that some inferential heuristics are hardwired mechanisms that
are essentially automatic. (This corroborates my position that heuristics are cognitive
procedures that can be expressed as rules one reasons in accordance with.) In a similar
vein, heuristics are also easily deployed. This means that comparatively few cognitive
resources are needed in their processing, which is achieved by processing only readily
available or easily accessible information (in the environment or within the mind). That
heuristics are readily accessible and easily deployable allows other cognitive resources
(including time) to be spared, not only with respect to processing, but also with respect to
search (among information as well as procedure alternatives, if the latter occurs). This
enables swift, on-the-fly inferences, and frees up precious cognitive resources for other
tasks—virtues upon which survival generally depends. Thus, it seems that:
H3 Heuristics are satisficing cognitive procedures that can be expressed as rules
one reasons in accordance with; they require few cognitive resources for
their recruitment and execution.
Notice, however, that H3 appears to preclude methodological heuristics. This is the
point at which the present characterisation of heuristic processes departs from
methodological heuristics. Indeed, we often take our time in deciding which, if any,
methodological heuristic to use for a given problem. Moreover, methodological heuristics
can require considerable cognitive resources to be recruited and executed, and as such their
deployment may not be so easy. It stands to reason that, in general, methodological
heuristics will require more cognitive resources in their deployment and processing than
inferential heuristics, since the former generally involve substantial processing and
integration of conceptual content. But these are empirical matters to be explored on some
other occasion. The present account is content with focusing on inferential heuristics that
require few cognitive resources for their deployment.
4.2 Beyond rules of thumb: exploiting information
Before being satisfied with our working characterisation of cognitive inferential heuristics
developed thus far, let us revisit and emphasise a key issue that has been pervasive in much
of the discussion. The nature of heuristics qua cognitive processes involves particular
utilisation of conceptual representations. A rule of thumb per se, on the other hand, is
merely a static, rough unit of measure, involving few, if any, representations in its
employment. In this vein, rules of thumb do not involve the same sort of information
processing that heuristics do.
To put the point another way, when one employs a heuristic, as I am characterising it
here, it tells us something about one’s conceptual wherewithal—particularly about the
content of one’s concepts and the organisation of the larger system within which one’s
concepts participate. Consider, for example, Gigerenzer’s take the best heuristic:
(13) (i) Search among cues; (ii) stop search when one cue discriminates between
two alternatives; (iii) choose the object picked out by step (ii). (Gigerenzer
[2001]; Gigerenzer and Goldstein [1999])
When one employs this heuristic, certain beliefs are implied about the cue upon which the
choice in question was made. At the very least, we can infer that this cue was believed
to be the ‘best’ upon which to make the choice in question. Of course, this belief need not
be explicitly represented. Rather, it could be (and probably usually is) a tacit belief, which
is embodied by one’s concepts. In this sense, that the cue is believed to be the ‘best’
implies certain things about the conceptual content of the cue as possessed by the chooser,
as well as certain things about how the cue fits within the chooser’s organised conceptual
system. Similarly, when one employs the representativeness heuristic,
(2) Probabilities are evaluated by the degree to which one thing or event is
representative of (resembles) another; the higher the representativeness
(resemblance) the higher the probability estimation;
certain (tacit) beliefs are implicated concerning what one believes about and how one
conceptualises the objects or events under evaluation, and how the representations involved
in the evaluation fit within one’s conceptual system. The upshot here is that one’s
conceptual wherewithal has a significant role in the operations of inferential heuristics.
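Steps (i)–(iii) of take the best in (13) can be sketched as follows. This is a sketch only: the cue functions, their ordering by validity, and the convention that the higher cue value picks the object are assumptions for illustration, not Gigerenzer and Goldstein's implementation:

```python
def take_the_best(a, b, cues_by_validity):
    """(i) Search cues in order of validity; (ii) stop at the first
    cue that discriminates between the two alternatives; (iii) choose
    the object favoured by that cue."""
    for cue in cues_by_validity:
        va, vb = cue(a), cue(b)
        if va != vb:
            return a if va > vb else b
    return None  # no cue discriminates

# 'capital' is the most valid cue and discriminates first, so the
# first city is chosen without consulting any further cues.
print(take_the_best({'capital': 1, 'airport': 1}, {'capital': 0, 'airport': 1},
                    [lambda c: c['capital'], lambda c: c['airport']]))
```

The point about conceptual wherewithal can be read off the sketch: the procedure itself is trivial; what carries the inferential weight is which cues the agent possesses and how they are ordered.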
When one employs a rule of thumb, on the other hand, one’s conceptual wherewithal
may not play much of a role at all, and therefore the use of a rule of thumb may tell us
nothing about one’s concepts or conceptual system. ‘When constructing a truth tree,
decompose those formulas that use stacking rules before those that branch’; ‘To choose a
ripe cantaloupe, press the spot on the candidate cantaloupe where it was attached to the
plant and smell it; if the spot smells like the inside of a cantaloupe, it’s probably ripe’
(Pearl [1984]); ‘Start in the centre square when beginning a game of tic-tac-toe’ (Dunbar
[1998]); ‘Measure twice, cut once’—all these are rules that can be applied willy-nilly with
very little requisite conceptual resources or content. For example, we can imagine a
beginner at sentential logic being presented with the truth tree rule, say in a lecture, and
told to follow it. However, that this student is able to follow the rule tells us nothing
interesting about her beliefs, or more broadly about her concepts or conceptual
system—she is just doing what she is instructed to do, not really caring to think about
what she’s doing or why she’s doing it. It is in this sense that the student would be
employing the rule as a rule of thumb rather than as a heuristic.30 Importantly, what
distinguishes rules of thumb from heuristics, as I am claiming here, is the manner in which
the rule engages one’s conceptual wherewithal, and not the rule itself. In other words, the
distinction concerns not the procedure but the sort of information over which the
procedure operates. (Cf. the distinction between computational and cognitive heuristics in
section 3.3.)
A difference between rules of thumb and heuristics can also be found with respect to
the basic understanding of heuristics provided by H1, namely that heuristics can be
expressed as rules one reasons in accordance with. Let us recall here the distinction I made
between following a rule in reasoning and reasoning in accordance with a rule. Following a
rule in reasoning implies a causal link between the rule and some behaviour, and that the rule
is an intentional object in cognition; reasoning in accordance with a rule likewise implies a
causal link between the rule and some behaviour, but the rule does not have to be an
intentional object in cognition. Upon reflection we can now see that, whereas heuristics do
not have to be represented, rules of thumb usually are; that is, whereas heuristics are rules
that one reasons in accordance with, rules of thumb are typically rules that are followed.31
30 Certainly, with enough experience a student eventually understands truth trees and uses the rule in an automatic way. But this doesn’t mean that she is (once again) not caring about what she is doing or why she is doing it. Rather, I suppose that it is in virtue of a thorough knowledge of truth trees that the rule has become embedded in the student’s conceptual system in such a way that it can be deployed unconsciously.
31 On this account, it is possible that a rule starts out as a represented rule of thumb, and with enough experience the rule turns into a heuristic as one’s representational wherewithal plays more of a role when using the rule. This I believe to be plausible especially with respect to methodological heuristics.
I therefore resist simply equating heuristics with rules of thumb. The basis for this is
ultimately that heuristics exploit concepts, and they are successful because they do so.32
Let us draw the characterisation of heuristics that the discussion has led us to:
H4 Heuristics are satisficing cognitive procedures that can be expressed as rules one reasons in accordance with; they require few cognitive resources for their recruitment and execution; they operate by exploiting concepts.
As with H3, this characterisation should be understood to refer only to inferential
heuristics.
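The satisficing clause of H4 descends from Simon’s ([1957]) idea that a cogniser stops searching as soon as an option meets an aspiration level, rather than ranking every option (cf. Richardson [1998]). The contrast can be sketched minimally as follows; the restaurants, ratings, and aspiration threshold are hypothetical examples for illustration only, not anything proposed in the text:

```python
# Purely illustrative sketch: a satisficing search stops at the first
# option that meets an aspiration level, whereas an optimising search
# must score every option. The restaurants, ratings, and aspiration
# threshold below are hypothetical examples, not drawn from the text.

def satisfice(options, score, aspiration):
    """Return the first option whose score meets the aspiration level."""
    for option in options:              # examine options one at a time...
        if score(option) >= aspiration:
            return option               # ...and stop once one is good enough
    return None                         # no option met the aspiration level

def optimise(options, score):
    """Return the best option, at the cost of scoring all of them."""
    return max(options, key=score)

rating = {"Diner": 6.1, "Bistro": 8.4, "Cafe": 9.7}
names = ["Diner", "Bistro", "Cafe"]

print(satisfice(names, rating.get, aspiration=8.0))  # Bistro (Cafe never scored)
print(optimise(names, rating.get))                   # Cafe (every option scored)
```

The satisficing search returns ‘Bistro’ without ever scoring ‘Cafe’; that frugality, together with the attendant risk of missing the optimum, is precisely the trade-off H4 builds into heuristics.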
It is beyond the scope of this paper to offer an account of how heuristics are supposed to exploit concepts.33 Nevertheless, the purpose of introducing this aspect into the characterisation of inferential heuristics is to emphasise the central importance of understanding inferential heuristics as essentially conceptual processes. Not only does this corroborate the distinctions I made throughout section 3 between cognitive heuristics and other sorts of heuristics; in addition, as I discussed at the beginning of this subsection, an important feature of inferential heuristics is that they allow us to infer something about
32 This is similar to Gigerenzer’s conception of heuristics (see especially Gigerenzer [2000], [2001], [2008a], [2008b]; Gigerenzer and Todd [1999]). But there are important differences between Gigerenzer’s views and my own. For instance, Gigerenzer does not make the distinctions I do between kinds of heuristics, as evinced by his confounding what I call stimulus-driven and heuristic-produced behaviour. Further, Gigerenzer does not develop an understanding of heuristics as exploiting concepts, as I am presently doing. Finally, while Gigerenzer asserts that heuristics operate on little information, I emphasise that heuristics generally rely on (though maybe not compute) generous amounts of information (cf. Sterelny [2004], [2006]), which on my account is tacitly or implicitly embodied within one’s representational structures and overall conceptual system. See note 33.
33 To be more precise, I maintain that inferential heuristics exploit, not concepts per se, but information embodied by one’s conceptual system. In brief and rough form, here is how I envision such an account to go: When presented with a cognitive task, a set of conceptual representations are activated or primed, among which associational connections exist. These connections embody a rich source of implicit (putative) knowledge concerning one’s concepts—their content and organisation—as mentioned above. More specifically, they implicitly embody higher-order information (or knowledge) about one’s concepts. Heuristics exploit such metainformation by operating over the relations that exist among one’s concepts. (Cf. Clark [2002].) This account is admittedly incomplete as stated here; a full development will have to wait for some other occasion.
the content of one’s concepts and the organisation of the larger system within which one’s
concepts participate. This latter feature is interesting in itself, but it also indicates the sort
of empirical research that can be undertaken with respect to inferential heuristics and
conceptual cognition. For example, we might be able to design experiments that would
reveal information about one’s concepts based on the sorts of heuristics employed;
conversely, we might be able to predict and manipulate the sorts of heuristics employed if
we have garnered the requisite information about one’s concepts. Such research is made
possible on the present account of inferential heuristics, and it illustrates the extent to
which we can conceive heuristics as scientific kinds, as discussed in section 1.34 These are
virtues of H4.
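The referee’s worry addressed in note 34 can be made concrete. Below is a hedged sketch, with hypothetical function names, of two rules that are behaviourally equivalent for any system operating after January 1, 2000, yet encode different concepts; a probe dated before the cutoff is the kind of empirical test that would separate them:

```python
# Illustrative sketch of the behavioural-equivalence worry from note 34.
# Two rules agree on every input dated after 1 January 2000, yet encode
# different concepts; only a probe dated before the cutoff separates them.
# All names here are hypothetical, for illustration only.

from datetime import date

CUTOFF = date(2000, 1, 1)

def rule_1(y, when):
    """Do x if y, unless before 1 Jan 2000, in which case do z."""
    if when < CUTOFF:
        return "z"
    return "x" if y else None

def rule_2(y, when):
    """Do x if y, unless before 1 Jan 2000, in which case do w."""
    if when < CUTOFF:
        return "w"
    return "x" if y else None

# After the cutoff the rules are indistinguishable from behaviour alone:
today = date(2024, 6, 1)
assert rule_1(True, today) == rule_2(True, today)

# A pre-cutoff probe is the empirical test that separates them:
probe = date(1999, 12, 31)
print(rule_1(True, probe), rule_2(True, probe))  # z w
```

This is just the situation described in response (2) of note 34: the systems behave alike on all inputs actually encountered, and only a suitably designed test reveals which rule, and hence which concepts, each system possesses.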
At the same time, H4 captures what seem to be important features of certain
heuristics discussed in philosophy, psychology, and the other disciplines of cognitive science,
while being precise enough to pick out a specific kind of cognitive process. The
characterisation may therefore be used to help us understand a broad but distinct range of
34 One might suggest that things might not be so neat as this. The objection is that there are indefinitely many ways to specify the rules that implicitly characterise a cognitive system, and this would make it difficult, if not impossible, to determine what concepts a heuristic-using system has. For instance, we might have two rules: ‘Do x if y, unless it is before January 1, 2000, in which case do z’ and ‘Do x if y, unless it is before January 1, 2000, in which case do w’. These rules are equivalent for systems that operate only after January 1, 2000, and so we wouldn’t know what concepts to attribute to these systems. (Thanks to an anonymous referee for raising this objection along with the example.) I have two responses to this objection. (1) Given that the rules are equivalent for heuristic-using systems operating only after January 1, 2000, and given that it is in fact after that date, the difference between the rules (and between the corresponding concepts) is a difference that doesn’t make a difference, so there is no need to worry about it. On the other hand, if it were presently before January 1, 2000 (or if the rules incorporated the date of January 1, 2050), then there would be a difference, and this difference would allow us to distinguish between concepts possessed by systems employing either rule. And this is a good thing. (2) Furthermore, it is actually a boon for the present account of heuristics that there is confusion over what concepts a heuristic-using system possesses in the face of indefinitely many ways to specify the rules that implicitly characterise a cognitive system. For it indicates ways to empirically test the account and its implications; we can design empirical tests to determine what rules a system actually uses, and conversely what concepts the system possesses. That two heuristic-using systems can each behave similarly and yet possess different concepts for heuristics to exploit is an interesting and substantial fact that an adequate characterisation of ‘heuristic’ should accommodate. Fortunately, the present characterisation does this.
cognitive phenomena, and thereby make clear sense of certain psychological theories of
reasoning, inference, and decision-making, as well as of certain philosophical debates
concerning reason and rationality. Heuristics are supposed to be ubiquitous in human
cognition, and so coming this much closer to understanding the nature of heuristic
processes significantly contributes to understanding the nature of the mind in general.
5 Concluding Remarks
Let us briefly review what has been achieved in this paper. We started with a vague
understanding of what ‘heuristic’ means and what heuristics are. With the motivation of
clarifying the notion of ‘heuristic’ in order to advance research and investigations in
philosophy and cognitive science generally, I considered a perfunctory negative account:
(HN) Heuristics are procedures that do not guarantee correct outcomes. I argued that this
account is unsatisfactory, since it either fails to sensibly apply to a range of important
cases of practical cognition, or it trivialises interesting features of cognition. In this way, I
contend that negative definitions such as HN are not useful for the kinds of phenomena
that we are generally interested in. Following this, I drew a number of distinctions between
different kinds of heuristics, narrowing in on a cluster of properties to offer a positive
account; the rule-of-thumb analogy proved most helpful for this purpose. The end result
was a working characterisation of inferential heuristics: (H4) Heuristics are satisficing
cognitive procedures that can be expressed as rules one reasons in accordance with; they
require few cognitive resources for their recruitment and execution; they operate by
exploiting concepts.
This account is useful and interesting for investigating several issues in philosophy and
psychology, as it provides an adequate account of a cognitive kind within the domain of
cognitive science. And this cognitive kind can be seen to have a clear place and role in
various scientific theories, which makes it (what I had called in section 1) a scientific kind.
To be sure, each property in the cluster—satisficing, being rules, requiring few cognitive resources, and exploiting concepts—makes a substantial claim about inferential heuristics
and can be easily understood within our psychological theories. For example, the dual process theory, which is popular these days (Evans and Frankish [2009]; Kahneman [2011]), contends (roughly) that a cognitive process can be either implicit, automatic, and unconscious (type 1) or explicit, controlled, and conscious (type 2). In its infancy, this theory posited that heuristics were generally type 1 processes (Evans [1984], [2006]). More recently, however, the theory has come to recognise that heuristics can be processes of either type. We
can make sense of this in terms of how inferential heuristics are characterised here: A
heuristic can be either a type 1 or type 2 process, since all that is required is that the
process be as described by H4, which cuts across these types. Along with the sort of
empirical research discussed above that can be undertaken with respect to the ways in
which inferential heuristics bear on conceptual cognition, I take this to be a positive and
reassuring result.
The present characterisation also enables us to see the respects in which the disparate
conceptions of heuristics given by different researchers and authors overlap, as well as ways
in which they come apart. For example, as was indicated above, Gigerenzer’s use of
‘heuristic’ to refer to stimulus-driven ‘rules of thumb’ of animals is precluded by the
present characterisation, H4. Moreover, we can now clearly see why other heuristics, such
as the gaze heuristic, are not of the sort of heuristics that Kahneman and Tversky were
interested in when they began their research programme in the early 1970s; Kahneman and
Tversky were interested in the cognitive heuristics referred to by H4, not perceptual
heuristics. We can also see why some heuristics, such as those proposed by Polya, are
essentially methodological devices to provide insight, and thus are of a different species
from the inferential heuristics Kahneman and Tversky researched. More generally, H4 helps
us to understand whether and to what extent the examples of heuristics found in the
literature are of the same sort of phenomenon that other researchers discuss. In addition,
whereas Gigerenzer ([1996]) seems to suggest that precise computational modelling is
required to scientifically study heuristics (see also Gigerenzer and Todd [1999]), the present
characterisation offers a broader account, allowing the inclusion of non-Gigerenzer-style
heuristics. All in all, we gain a better understanding of the various ways in which the term
‘heuristic’ is used by different theorists and in different disciplines.
In addition, we are presented with new and exciting research projects to pursue. For
instance, I have not adequately described how heuristics exploit concepts. I’ve provided a
brief indication in note 33 of how I envision such an account to go, but much more work
needs to be done. The hope is that we learn more about conceptual cognition in concert
with heuristic cognition. What is it about concepts that facilitates heuristic processes? What is it about them that enables heuristics to be ‘fast and frugal’? What is it about them that results in the systematic biases of heuristics? An investigation into these matters
can yield further insight not only into how heuristics work, but also into more fundamental
issues concerning the nature and architecture of cognition.
Acknowledgements
I would like to thank Chris Viger, Richard Samuels, Rob Stainton, John Nicholas, and
Scott Bakker for comments and discussions on earlier versions of this paper. Thanks also
to two anonymous referees whose comments and suggestions helped to improve many
aspects of the paper.
Department of Philosophy
Mount Allison University
63D York Street
Sackville, NB E4L 1E2
Canada
References
Ashcraft, M. H. [2002]: Cognition (Third edition), New Jersey: Prentice Hall.
Barsalou, L. W. [2005]: ‘Situated Conceptualization’, in H. Cohen and C. Lefebvre (eds),
Handbook of Categorization in Cognitive Science, St. Louis, MO: Elsevier, pp. 619–50.
Bechtel, W. and Richardson, R. C. [2010]: Discovering Complexity: Decomposition and
Localization as Strategies in Scientific Research (Second edition), Cambridge, MA:
The MIT Press.
Braunstein, M. L. [1994]: ‘Decoding Principles, Heuristics and Inference in Visual Percep-
tion’, in G. Jansson, S. S. Bergstrom and W. Epstein (eds), Perceiving Events and
Objects, Hillsdale, NJ: Lawrence Erlbaum Associates, pp. 436–46.
Carruthers, P. [2006]: The Architecture of the Mind: Massive Modularity and the Flexibility
of Thought, Oxford: Oxford University Press.
Clark, A. [2002]: ‘Local Associations and Global Reason: Fodor’s Frame Problem and
Second-order Search’, Cognitive Science Quarterly, 2, pp. 115–40.
Crowley, K. and Siegler, R. S. [1993]: ‘Flexible Strategy Use in Young Children’s Tic-tac-toe’,
Cognitive Science, 17, pp. 531–61.
Dennett, D. C. [1991]: ‘Real Patterns’, The Journal of Philosophy, 88, pp. 27–51.
Dunbar, K. [1998]: ‘Problem Solving’, in W. Bechtel and G. Graham (eds), A Companion
to Cognitive Science, Malden, MA: Blackwell Publishing, pp. 289–98.
Ereshefsky, M. [1998]: ‘Species Pluralism and Anti-realism’, Philosophy of Science, 65, pp.
103–20.
Ereshefsky, M. [2001]: The Poverty of the Linnaean Hierarchy: A Philosophical Study of
Biological Taxonomy, Cambridge, UK: Cambridge University Press.
Evans, J. St. B. T. [1984]: ‘Heuristic and Analytic Processes in Reasoning’, British Journal
of Psychology, 75, pp. 451–68.
Evans, J. St. B. T. [2006]: ‘The Heuristic-analytic Theory of Reasoning: Extension and
Evaluation’, Psychonomic Bulletin & Review, 13, pp. 378–95.
Evans, J. St. B. T. [2009]: ‘How Many Dual-process Theories Do We Need? One, Two, or
Many?’, in J. St. B. T. Evans and K. Frankish (eds), In Two Minds: Dual Processes
and Beyond, Oxford: Oxford University Press, pp. 33–54.
Evans, J. St. B. T. and Frankish, K. [2009]: In Two Minds: Dual Processes and Beyond,
Oxford: Oxford University Press.
Feldman, J. [1999]: ‘Does Vision Work?: Towards a Semantics of Perception’, in E. Lepore
and Z. Pylyshyn (eds), What is Cognitive Science?, Malden, MA: Blackwell Publishers,
pp. 208–29.
Fodor, J. A. [1975]: The Language of Thought, Cambridge, MA: Harvard University Press.
Fodor, J. A. [2000]: The Mind Doesn’t Work That Way: The Scope and Limits of Compu-
tational Psychology, Cambridge, MA: The MIT Press.
Fodor, J. A. [2008]: LOT 2: The Language of Thought Revisited, Oxford: Clarendon Press.
Gigerenzer, G. [1996]: ‘On Narrow Norms and Vague Heuristics: A Reply to Kahneman and
Tversky’, Psychological Review, 103, pp. 592–96.
Gigerenzer, G. [2000]: Adaptive Thinking: Rationality in the Real World, New York: Oxford
University Press.
Gigerenzer, G. [2001]: ‘The Adaptive Toolbox’, in G. Gigerenzer and R. Selten (eds),
Bounded Rationality: The Adaptive Toolbox, Cambridge, MA: The MIT Press, pp.
37–50.
Gigerenzer, G. [2006]: ‘Bounded and Rational’, in R. J. Stainton (ed.), Contemporary Debates
in Cognitive Science, Malden, MA: Blackwell Publishing, pp. 115–33.
Gigerenzer, G. [2007]: Gut Feelings: The Intelligence of the Unconscious, New York: Viking (the Penguin Group).
Gigerenzer, G. [2008a]: Rationality for Mortals: How People Cope With Uncertainty, Oxford:
Oxford University Press.
Gigerenzer, G. [2008b]: ‘Why Heuristics Work’, Perspectives on Psychological Science, 3,
pp. 20–9.
Gigerenzer, G. and Brighton, H. [2009]: ‘Homo Heuristicus: Why Biased Minds Make Better
Inferences’, Topics in Cognitive Science, 1, pp. 107–43.
Gigerenzer, G. and Gaissmaier, W. [2011]: ‘Heuristic Decision Making’, Annual Review of
Psychology, 62, pp. 451–82.
Gigerenzer, G. and Goldstein, D. G. [1996]: ‘Reasoning the Fast and Frugal Way: Models
of Bounded Rationality’, Psychological Review, 103, pp. 650–69.
Gigerenzer, G. and Goldstein, D. G. [1999]: ‘Betting On One Good Reason: The Take the
Best Heuristic’, in G. Gigerenzer, P. M. Todd and the ABC Research Group (eds),
Simple Heuristics That Make Us Smart, New York: Oxford University Press, pp. 75–
95.
Gigerenzer, G., Hertwig, R. and Pachur, T. (eds) [2011]: Heuristics: The Foundations of
Adaptive Behavior, New York: Oxford University Press.
Gigerenzer, G. and Todd, P. M. [1999]: ‘Fast and Frugal Heuristics: The Adaptive Toolbox’,
in G. Gigerenzer, P. M. Todd and the ABC Research Group (eds), Simple Heuristics
That Make Us Smart, New York: Oxford University Press, pp. 3–34.
Gigerenzer, G., Todd, P. M. and the ABC Research Group (eds) [1999]: Simple Heuristics
That Make Us Smart, New York: Oxford University Press.
Haugeland, J. [1981]: ‘Semantic Engines: An Introduction to Mind Design’, in J. Haugeland
(ed.), Mind Design, Cambridge, MA: The MIT Press, pp. 34–50.
Hooker, C. [2011]: ‘Rationality as Effective Organisation of Interaction and Its Naturalist
Framework’, Axiomathes, 21, pp. 99–172.
Horgan, T. and Tienson, J. [1998]: ‘Rules’, in W. Bechtel and G. Graham (eds), A Com-
panion to Cognitive Science, Malden, MA: Blackwell Publishing, pp. 660–70.
Hutchinson, J. M. and Gigerenzer, G. [2005]: ‘Simple Heuristics and Rules of Thumb: Where
Psychologists and Behavioural Biologists Might Meet’, Behavioural Processes, 69, pp.
97–124.
Kahneman, D. [2011]: Thinking, Fast and Slow, Toronto: Doubleday Canada.
Kahneman, D., Slovic, P. and Tversky, A. (eds) [1982]: Judgment Under Uncertainty:
Heuristics and Biases, New York: Cambridge University Press.
Kahneman, D. and Tversky, A. [1973]: ‘Availability: A Heuristic for Judging Frequency and
Probability’, Cognitive Psychology, 5, pp. 207–32.
Kahneman, D. and Tversky, A. [1996]: ‘On the Reality of Cognitive Illusions’, Psychological
Review, 103, pp. 581–91.
Lakatos, I. [1963–4]: ‘Proofs and Refutations, I-IV’, British Journal for the Philosophy of
Science, 14, pp. 1–25, 120–39, 220–45, 296–342.
Lee, M. D., Loughlin, N. and Lundberg, I. B. [2002]: ‘Applying One Reason Decision-making:
The Prioritisation of Literature Searches’, Australian Journal of Psychology, 54, pp.
137–43.
Lynch, C., Ashley, K. D., Aleven, V. and Pinkwart, N. [2006]: ‘Defining Ill-Defined Domains:
A Literature Survey’, in V. Aleven, K. D. Ashley, C. Lynch and N. Pinkwart (eds),
Proceedings of the First International Workshop on Intelligent Tutoring Systems for Ill-
Defined Domains, Jhongli Taiwan: 8th International Conference on Intelligent Tutoring
Systems, pp. 1–10.
Machery, E. [2009]: Doing Without Concepts, Oxford: Oxford University Press.
McCarthy, J. [1956]: ‘The Inversion of Functions Defined by Turing Machines’, in C. E.
Shannon and J. McCarthy (eds), Automata Studies, Annals of Mathematical Studies,
Princeton, NJ: Princeton University Press, pp. 177–81.
Mitrovic, A. and Weerasinghe, A. [2009]: ‘Revisiting Ill-definedness and the Consequences for
ITSs’, in R. Mizoguchi, B. du Boulay and A. C. Graesser (eds), Artificial Intelligence
in Education: Building Learning Systems That Care: From Knowledge Representation
to Affective Modelling, Proceedings of the 14th International Conference on Artificial
Intelligence in Education, Amsterdam: IOS Press, pp. 375–82.
Newell, A. [1969]: ‘Heuristic Programming: Ill-structured Problems’, Progress in Operations
Research, 3, pp. 361–413.
Pearl, J. [1984]: Heuristics: Intelligent Search Strategies for Computer Problem Solving,
Reading, MA: Addison-Wesley Publishing Company.
Polya, G. [1957]: How to Solve It: A New Aspect of Mathematical Method, Garden City, NY:
Doubleday Anchor Books.
Popper, K. [1959]: The Logic of Scientific Discovery, London: Hutchinson.
Prinz, J. J. [2002]: Furnishing the Mind: Concepts and Their Perceptual Basis, Cambridge,
MA: The MIT Press.
Pylyshyn, Z. W. [1999]: ‘What’s in Your Mind’, in E. Lepore and Z. Pylyshyn (eds), What
is Cognitive Science? Malden, MA: Blackwell Publishers, pp. 1–25.
Rawls, J. [1955]: ‘Two Concepts of Rules’, Philosophical Review, 64, pp. 3–32.
Reitman, W. R. [1964]: ‘Heuristic Decision Procedures, Open Constraints and the Structure
of Ill-defined Problems’, in M. W. Shelly II and G. L. Bryan (eds), Human Judgments
and Optimality, New York, NY: John Wiley & Sons Inc., pp. 282–315.
Reitman, W. R. [1965]: Cognition and Thought: An Information Processing Approach, New
York, NY: John Wiley & Sons Inc.
Richardson, R. C. [1998]: ‘Heuristics and Satisficing’, in W. Bechtel and G. Graham (eds),
A Companion to Cognitive Science, Malden, MA: Blackwell Publishing, pp. 566–75.
Samuels, R., Stich, S. and Bishop, M. [2002]: ‘Ending the Rationality Wars: How to Make
Disputes About Human Rationality Disappear’, in R. Elio (ed.), Common Sense, Rea-
soning and Rationality, New York: Oxford University Press, pp. 236–68.
Savage, L. J. [1954]: The Foundations of Statistics (Second edition), New York: Dover Publications Inc.
Searle, J. [1980]: ‘Minds, Brains and Programs’, Behavioral and Brain Sciences, 3, pp.
417–57.
Shafer, G. [1986]: ‘Savage Revisited’, Statistical Science, 1, pp. 463–85.
Shin, N., Jonassen, D. H. and McGee, S. [2003]: ‘Predictors of Well-structured and Ill-
structured Problem Solving in an Astronomy Simulation’, Journal of Research in Sci-
ence Teaching, 40, pp. 6–33.
Simon, H. A. [1957]: Models of Man, Social and Rational: Mathematical Essays on Rational
Human Behavior in a Social Setting, New York: John Wiley.
Simon, H. A. [1973]: ‘The Structure of Ill-structured Problems’, Artificial Intelligence, 4,
pp. 181–201.
Simon, H. A., Newell, A., Minsky, M. and Feigenbaum, E. [1967]: ‘Heuristic Programs and
Algorithms’, SIGART Newsletter, 6, pp. 10–9.
Sterelny, K. [2004]: ‘Externalism, Epistemic Artifacts and the Extended Mind’, in R. Schantz
(ed.), The Externalist Challenge: New Studies on Cognition and Intentionality, New
York: Walter de Gruyter, pp. 239–54.
Sterelny, K. [2006]: ‘Cognitive Load and Human Decision, or, Three Ways of Rolling the
Rock Uphill’, in P. Carruthers, S. Laurence and S. Stich (eds), The Innate Mind:
Culture and Cognition, Oxford: Oxford University Press, pp. 218–233.
Todd, P. M., Gigerenzer, G. and the ABC Research Group. [2012]: Ecological Rationality:
Intelligence in the World (Evolution and Cognition), New York: Oxford University
Press.
Tversky, A. and Kahneman, D. [1974]: ‘Judgment Under Uncertainty: Heuristics and Biases’,
Science, New Series, 185, pp. 1124–31.
Viger, C. [2006]: ‘Symbols: What Cognition Requires of Representationalism’, Protosociol-
ogy: The International Journal of Interdisciplinary Research, 22, pp. 40–59.
von Neumann, J. and Morgenstern, O. [1944]: Theory of Games and Economic Behavior,
Princeton, NJ: Princeton University Press.
Voss, J. F. [2006]: ‘Toulmin’s Model and the Solving of Ill-structured Problems’, in D. Hitch-
cock and B. Verheij (eds), Arguing on the Toulmin Model: New Essays in Argument
Analysis and Evaluation, Berlin: Springer, pp. 303–11.
Voss, J. F., Greene, T. R., Post, T. A. and Penner, B. C. [1983]: ‘Problem Solving Skill in
the Social Sciences’, The Psychology of Learning and Motivation, 17, pp. 165–215.
Wimsatt, W. C. [2007]: Re-engineering Philosophy for Limited Beings: Piecewise Approxi-
mations to Reality, Cambridge, MA: Harvard University Press.
Wittgenstein, L. [1953]: Philosophical Investigations: The German Text, with a Revised English Translation (G. E. M. Anscombe, trans.), Oxford: Blackwell, 2001.