


Explico, Ergo Cognosco - Methodological Implications from Equating Knowledge to Explanations

Marco Valente∗

Abstract

The paper discusses the methodological implications of considering explanations as elementary units of knowledge and, consequently, of assuming the production of explanations as the ultimate goal of science.

We show that under this perspective it is possible to derive highly relevant indications on how a research project is carried out and on the elementary components of knowledge/explanations. Furthermore, we can exploit the format of explanations to clarify the process of assessing scientific knowledge.

One of the advantages of our proposal is that it applies to any kind of knowledge, overcoming the distinction between “hard” and “soft” sciences, frequently mentioned but vaguely defined. Rather, we state that the difference applies not to disciplines but to classes of phenomena whose nature requires distinct kinds of analysis because of differences in the nature of their explanations.

Though the proposal encompasses any type of knowledge, and therefore applies to any scientific discipline, we focus on the implications for Economics, whose methodological problems, we maintain, depend on the frequent use of analytical tools devised for one class of phenomena to tackle phenomena in a different category. We discuss a few examples of methodological approaches frequently adopted by economists in the light of considering explanations as the final goal.

Keywords: Methodology, Philosophy of science, Validation, Empirical assessment.

∗ Università dell’Aquila; LEM, Scuola Superiore Sant’Anna, Pisa; SPRU, University of Sussex, Brighton. Email: [email protected]


Contents

1 Knowledge as explanations
  1.1 Supporting evidence for equating knowledge to explanations
  1.2 Modes for increasing knowledge
    1.2.1 Knowledge revision
    1.2.2 Knowledge extension
    1.2.3 Knowledge deepening
  1.3 Example
  1.4 Classes of explanatory mechanisms
    1.4.1 Aggregative explanations
    1.4.2 Transformational explanations
    1.4.3 Logical/associative explanations

2 Scientific Knowledge
  2.1 Knowledge assessment
  2.2 Degrees of robustness, intrinsic subjectivity of assessment and finalized knowledge
  2.3 Validation vs. assessment
  2.4 Describing vs. explaining

3 Structurally Dynamic Phenomena
  3.1 Classes of phenomena
  3.2 Assessment and empirical evidence
  3.3 Historicity and structural dynamics

4 Implications for Economic Analysis
  4.1 Planning, developing and assessing research projects
  4.2 Economics of explanations
  4.3 Econometric modeling
  4.4 Agent-based modeling

5 Conclusions


Harry Potter: “Is this real? Or has this been happening inside my head?”
Dumbledore: “Of course it is happening inside your head, Harry, but why on earth should that mean that it is not real?”

Introduction

Many professional economists, as well as observers from outside the academy, hold the opinion that the social sciences in general, and Economics in particular, suffer from a methodological problem undermining their reliability as scientific disciplines. The perceived limits of Economics may be exemplified by the famous “Queen’s Question”, posed by Elizabeth II to a gathering of economists meant to explain the situation after the explosion of the financial crisis. After carefully listening to the reports, Her Majesty was recorded asking: “Why did nobody notice it?”.

In the last decades there has been a fiery debate among economists, roughly divided between those defending the approach adopted by the majority of the profession and those criticising this very approach and promoting different perspectives. The attacks against the dominant school of thought (the “neoclassical” approach) have intensified since the crisis because of its embarrassing failure to provide any warning of what is acknowledged as an obviously systemic crisis (no meteor struck Lehman Brothers), and even its incapacity to provide satisfactory answers to questions posed by royals as well as by companies, governments and society at large.

This work is not concerned with providing a full account of the large variety of theoretical positions among academic economists, but rather wishes to advance a proposal addressing what, we claim, is a methodological weakness shared by many scholars, and to provide a solid ground on which to assess economic theoretical proposals.


The ideological battle among economists is generally fought on the basis of a shared common understanding: that the winning party in a scientific dispute should be the one able to provide the most faithful description of observed events. For example, mainstream economists defend their record claiming to have succeeded in predicting most of the events of the last decades¹, and attack their opponents’ positions by accusing alternative schools of methodological sloppiness, of lacking a coherent proposal and of being unable to produce rigorous results. Heterodox partisans reply that they prefer “to be roughly right than precisely wrong”, and attack the mainstream for founding their models on unrealistic assumptions (rational behaviour and market equilibrium) that are able to generate good predictions only by carefully selecting the evidence so as to remove any inconvenient fact that risks undermining those assumptions.

Both positions share the same interpretation of what the goal of science should be and of the criterion that should be used to settle scientific disputes: science is supposed to provide faithful and detailed descriptions of observed events, and, in a scientific dispute, the faction producing the most accurate description must be considered the most trustworthy. This work argues that this approach is logically faulty and, besides, never applied in science, not even by researchers in the natural sciences, formally considered role models by many economists.

The problems generated by appealing to the faithfulness of representations as the main criterion to assess scientific theories can be exemplified by considering that there cannot exist a single description of reality independent of the purpose the description is meant to serve. The tiniest object, such as a speck of dust, not to mention highly complex ones such as firms, markets or economies, can potentially be described from infinitely many perspectives, collecting data on an infinite number of possibly relevant features. Which aspects should be left out, as required for any practical use, without affecting the results? Is it really always an advantage in science to include in the analysis as many details as possible?

We argue that in the natural sciences what matters is never the similarity of theoretical results to as much evidence as possible. On the contrary, scientists select very carefully the kind of evidence required to confirm or refute a theoretical statement, trying to reduce as much as possible the empirical features treated in the theory and, consequently, the data necessary for its assessment. It is not the adherence of theory to generic empirical evidence that matters, but its adherence to specific evidence, identified on the basis of the theory. As such, empirical evidence provides possibly highly relevant clues, but cannot be considered, on its own, a sufficient criterion to settle scientific disputes.

Rejecting generic adherence to empirical evidence as the sole or main criterion to assess theoretical statements requires the identification of an alternative approach able to evaluate scientific statements and, in general, to guide the process of scientific discovery. This work is concerned with advancing such a proposal, meant to apply not only to economics or the social sciences, but to any scientific discipline. The overall proposal consists in sustaining that conflicting analyses should not battle on the ground of being realistic, a criterion prone to manipulation and, ultimately, impossible to define. Rather, they should compete on providing the best explanations for clearly stated goals. The proposed criterion necessarily implies some degree of subjectivity, potentially leaving room for never-ending discussions, but it ensures, at the very least, that opposing parties agree on what, exactly, they disagree about.

The foundation of the proposal is the definition of the elementary units of knowledge, which we propose to be explanations.

¹ According to this view the crisis has been just a statistically insignificant detail. Such a position is well represented by a quote from a Nobel Laureate: “One thing we are not going to have, now or ever, is a set of models that forecasts sudden falls in the value of financial assets, like the declines that followed the failure of Lehman Brothers in September.” (Lucas, 2009).


This idea is well known in the literature on the philosophy of science (e.g. Hempel, 1965; Salmon, 1989). However, the contributions in this field have centered on the conceptual nature of explanations within a purely philosophical perspective (Reiss, 2012), for example discussing the ambiguity concerning the entities that should be part of the explanantia under different interpretations of methodological individualism (Hodgson, 2007). In this work we avoid taking a position in the philosophical debate, providing a definition of explanations (and, crucially, of the method to assess their validity) rooted in the practice of theoretical economists. We will derive the implications of our proposal, providing suggestions on how a research project should be designed, carried out and evaluated. We will sustain that economics is no different from any other scientific discipline, since any kind of knowledge admits the use of explanations as the universal format. The difference between economics and other disciplines concerns the nature of the phenomena it is interested in. While explanations are a universal format of knowledge, certain phenomena also admit additional, more elegant, formats of knowledge, such as mathematical expressions, which may be interpreted as compact representations of explanations. However, the possibility to adopt such a specific format is restricted to a specific class of phenomena respecting certain characteristics. When those characteristics are not present, as frequently happens in economics (but also in other disciplines, such as biology and physics itself), it is necessary to resort to the more general format of pure explanations, concatenating logical steps providing a coherent understanding of events.

The structure of this paper aims at describing a formal definition of explanation and at deriving the methodological consequences of this proposal. The first section below provides the formal definition, examples and arguments supporting the idea that any piece of knowledge in any domain can always be represented in terms of one or more explanations. The proposed definition is shown to produce useful implications, such as a classification of the possible routes for cumulating new knowledge and of the possible components of explanatory mechanisms.

In the second section we discuss how generic knowledge, a faculty of all human beings, turns into the scientific variety once it is complemented by a public process of assessment. In this section we also present several implications of the proposed assessment procedure, showing how it applies to any kind of scientific knowledge. We also discuss the possible motivations behind the methodological troubles in the social sciences, based on the properties of the phenomena of interest to the discipline. We show that the relative importance of certain stages of the assessment procedure differs depending on the nature of the phenomenon concerned. We will sustain that the social sciences are frequently (but not exclusively) concerned with a class of phenomena involving entities subject to morphological change or taking place within open systems, undermining certain types of assessment. However, this is not necessarily a limitation to pursuing scientific advancement, since many cases in physics and biology, among other disciplines, face the same conditions but do not seem to suffer from the methodological ailment afflicting the social sciences.

The final section before the conclusions suggests a possible protocol to be followed when pursuing a research project, such as a PhD program, following the implications of the proposed definition of scientific knowledge.

1 Knowledge as explanations

The goal of scientific research is, tautologically, to produce scientific knowledge. We propose to consider knowledge as the capacity to reach one of the three following goals:


1. Explain past events.

2. Explain likely future events.

3. Explain how to generate desired events.

All three goals rely on the same definition of knowledge, that is, the capacity to explain how we can logically describe the transformation of a system from one state to another. We can therefore adopt a definition of “knowledge” as explanation. That is, we can define a piece of knowledge as the explanatory link between two states of a system:

$$A \stackrel{\mu}{\Rightarrow} B$$

We maintain that asserting to know something consists in defining the description of a state, say A, which “explains” another state B by means of the explanatory mechanism µ.

Below we will elaborate on the methodological relevance of considering knowledge as composed of explanations. Before doing so, it is worth stressing a few points. First, when claiming that knowledge is made of explanations we are not claiming that objective reality works by causal relations. We limit ourselves to considering only the subjective, internal cognitive representation used to make sense of observations, following, among others, H. Simon, who, however, uses the same term “cause” for both cases:

“Our undertaking is logical rather than ontological. That is, we wish to be able to assert: ‘The sentence A stands in a causal relation to the sentence B (in a specified system of sentences)’; and not: ‘That which is denoted by sentence A causes that which is denoted by sentence B.’” (Simon, 1952, p. 517)

Second, our proposal entails the broadest possible interpretation of “explanation”. For example, in our perspective a theorem can be interpreted as “explaining” its claim B starting from its assumptions A by means of the proof µ. On the opposite extreme of a hypothetical spectrum of reliability we may have a piece of “knowledge” made as follows: B = “I got a parking ticket in the afternoon” linked to A = “a black cat crossed my way in the morning” by means of the explanatory (...) mechanism µ = “bad luck”.

Both cases, and anything in between, are ways to express bits of knowledge. The second example clearly shows that explanations (pieces of knowledge) form a highly differentiated set, varying in plausibility, realism, reliability, usefulness, convincing power, etc. We will discuss below the issues related to assessing knowledge for scientific purposes. At this stage, we do not distinguish “true” from “false” knowledge, or scientific from “lay” knowledge. Our claim is that human beings can be assumed to store knowledge in the format of links between two states, as in expert systems or the trouble-shooting sections of an appliance manual. This claim does not rule out alternative, even better, forms of knowledge used by certain types of people for certain specific cases. For example, the equation Y = X² expresses in a few symbols the infinitely large set of “explanations” formed by any real number linked to its square value. Our position is that any piece of knowledge can be equivalently expressed using the format of explanations, though, obviously, the explicit use of the explanatory format is not necessarily the most compact or convenient.


The purpose of representing knowledge as explanations concerns its generality, covering any possible type of knowledge, while other formats specific to particular domains (mathematics, chemistry, etc.) will be more powerful where applicable, though they will also inherit any property of the most generic format of explanations.
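As a purely illustrative aid, and not part of the original argument, the triple format can be sketched as a tiny data structure in Python; the class and function names below are ours, and the examples simply restate the black-cat and Y = X² cases above.

from dataclasses import dataclass
from typing import Any

@dataclass
class Explanation:
    """A piece of knowledge: an initial state A linked to a final state B
    by an explanatory mechanism mu, however tenuous."""
    A: Any    # description of the initial state
    mu: str   # the explanatory mechanism
    B: Any    # description of the final state

# The weak end of the reliability spectrum (the black-cat example):
superstition = Explanation(A="a black cat crossed my way in the morning",
                           mu="bad luck",
                           B="I got a parking ticket in the afternoon")

# A compact format such as Y = X^2 stands in for an infinitely large family of
# explanations; any finite sample of it can be unfolded into explicit triples.
def unfold_square_law(xs):
    return [Explanation(A=x, mu="Y = X^2", B=x ** 2) for x in xs]

print(unfold_square_law([1, 2, 3]))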

The third and final note concerns the boundary of our definition: given the breadth of the proposed definition, what is then left that is not knowledge? We sustain that the mere description of something is not knowledge, but only an unfinished component in need of another description and of an explanatory mechanism, however tenuous, in order to form a complete piece of knowledge. A description without a connection to another description, even an implicit one, is void of any meaning, and therefore cannot be used, or evaluated, unless linked to another description by an explanatory mechanism.

Negating the status of knowledge to descriptions may appear a bold statement, particularly since, as we will discuss below, we will derive potentially controversial consequences from this position. For example, we will sustain that it is not possible to provide an absolute, objective description of a real-world entity without reference to a specific phenomenon, and that different phenomena may entail different, even incompatible, descriptions of the very same real-world entity. Hence, pure descriptions such as data sets cannot be considered knowledge unless, at least implicitly, they are connected to other states to form an explanation.

Refusing to consider as knowledge bits of pure data (such as measurements produced by an experiment) does not reduce the potential relevance of providing detailed descriptions. There are cases of research projects that, provided an accepted (or, at least, conjectured) explanation, need detailed data to answer specific questions; in these cases the data (i.e. descriptions) are obviously of major importance. However, they are not knowledge on their own, but only a necessary (and possibly crucial) component of an explanation made of two descriptions and an explanatory mechanism. In other words, scientific experiments aiming at collecting data do not contradict our claim that, technically, knowledge is formed by an explanation; rather, they testify that the difficult bit of the research project concerns the collection of some data, while the other data and the explanatory mechanism are less difficult to retrieve and accept.

The present work focuses on the consequences of considering knowledge as formed by explanations, and therefore its relevance is confined to the extent that such an assumption is held true; we are not going to offer further support for the universality of explanations as a format for knowledge. Yet, our arguments would still hold if there were a few types of knowledge that cannot be translated into explanations. We think that even the most skeptical reader would accept that, at the very least, a large and relevant portion of knowledge can be expressed in this format, and this is sufficient to take the implications seriously. In any case, the following paragraph will sustain that there are solid reasons to believe that explanations are a format very close to the way the human mind stores knowledge, therefore suggesting that explanations are, indeed, the basic, universal format of knowledge. The remaining paragraphs in the section will focus on extracting methodologically relevant consequences. Firstly, we will propose a categorization of the modes available to increase knowledge, and then we will define three broad classes of elementary explanations that, jointly, permit the building of any conceivable piece of knowledge.

1.1 Supporting evidence for equating knowledge to explanations

We cannot produce definitive evidence that the format of knowledge as explanations we propose has a physiological foundation, and, besides, we are interested in showing the useful consequences of our proposed approach, not in investigating its origins. However, at the very least, we can point to a series of strong points supporting, to a varying degree, the interpretation of knowledge as represented by explanations.


This digression also has the ancillary purpose of providing a clearer intuition of what we mean by explanation.

A bit of introspection shows that we frequently store memories of the past not as data in an indexed dataset, where each bit of information is associated with the coordinates necessary for its retrieval. On the contrary, memories appear as part of a network linking sparse bits of information along many different directions. As a consequence, any data stored in our memory is a node connected to many other nodes, and to retrieve it we can travel across the network along a path vaguely leading towards the desired location until we stumble upon the desired objective.

As an example, consider when you fail to remember the name of a person cited in a conversation. To retrieve the missing information you will likely place that person in a context (e.g. past encounters, common friends, specific events likely shared with that person), until her name surfaces together with other details that, initially, were absolutely forgotten and are not actually relevant to the name of that person. Similarly, people find it much easier to remember long sequences of text when they are structured as music or poems, while memorizing plain text or, worse, unrelated sequences of words is all but impossible. All of this supports our claim that humans store knowledge as linked chains of elementary concepts.

The physical nature of the brain, and the known details of its functioning, are also broadly compatible with our proposal. It is widely accepted that memories (as well as skills and most, if not all, superior features of human behaviour) are fundamentally produced by the flow of electric pulses across the network of synapses linking neurons. For example, our brains seem to extend so as to provide a sort of “peripheral” memory: to remember a password, apparently forgotten, the best solution is to type it on a real or imaginary keyboard, exploiting our “fingers’ memory” to recollect the correct sequence of keys even when the set of numbers or letters is partially or totally (apparently) lost from our “symbolic memory”. Those who experience these types of memory know that the fingers “remember” a sequence far better than an unrelated bunch of data. All of these examples suggest that memory and superior human functions work with a rigid “IF-THEN” structure, which is the logical basis of our proposed definition of explanations as the universal form of knowledge.

Additional evidence, admittedly circumstantial, can be derived from the consideration that the development and survival of human intelligence must be rooted in the evolutionary advantage provided by high cognitive capabilities. The proposal to consider knowledge as explanations is compatible with a “learning system” based on the capacity to establish links between observations and ways to, firstly, explain the expected outcome and, if necessary, intervene to direct events in the most desired direction. For example, Hawkins and Blakeslee (2005) provide an ingenious interpretation of the available evidence on the functioning of the human brain based on the neo-cortex. This region is able to produce predictions and match them with actual observations. While superior animals (e.g. mammals) apply the neo-cortex to the sensorial part of the brain (i.e. they make predictions about what the senses can expect to perceive in the future), humans apply the neo-cortex to the activity-related regions, too. This means that we are able to make “predictions” about the consequences of our own actions, besides the environment, in effect “simulating” the alternative scenarios they may generate, evaluating their expected outcomes and adjusting hypothetical actions as necessary. Again, predictions are but one of the forms of (conjectured) explanations, i.e. knowledge according to our assumption.

As we will see in detail below, one of the consequences of interpreting knowledge as explanations is that new evidence contradicting existing evidence does not necessarily require the trashing of all prior experience.


On the contrary, it is particularly easy to save undisputed experience and simply add the novel bits incrementally to one’s own set of skills. The possibility of gradual cumulation and adjustment is a necessary feature for knowledge to be an evolutionary advantage.

In conclusion, we cannot point to any unequivocal proof that human intelligence generates and stores knowledge in a format compatible with explanations. However, at the very least we can safely sustain that there is more evidence in favor of our claim than against it. For the goal of the present work this suffices to continue exploring the consequences of this approach. The following paragraph discusses a taxonomy of the means to increase knowledge, that is, a classification of learning methods.

1.2 Modes for increasing knowledge

Doing research aims at increasing the stock of available knowledge and, notoriously, it is very difficult to provide a unified representation of how such activity is carried out. Here we limit our discussion to the logical consequences of considering knowledge as formed by explanations. We sustain that the insights obtained by our approach provide highly valuable consequences leading to pragmatic directives useful to researchers. These will not be groundbreaking, essentially stating formally the activities routinely performed by any experienced scientist. However, providing a coherent and systematic foundation for these activities is both useful for less experienced researchers and leads to relevant considerations, as we will see when discussing the evaluation of knowledge, the crucial issue for scientific knowledge.

It is obvious that new knowledge is generated when a novel event (real or hypothetical) contradicts currently held understanding, and therefore one form of knowledge generation involves the discarding, at least partial, of prior knowledge. While this is the archetypical form of discovery, in reality such forms of knowledge improvement are rare, observed mostly at the birth of new fields or when facing rare events. Far more frequently, and hence more relevantly, new knowledge is built gradually on top of existing knowledge, clarifying previously obscure bits of evidence and not requiring knowledge replacement. Our proposed definition identifies three possible routes to incrementally generate new knowledge expressed as explanations. Only the first type concerns the revision of an existing piece of knowledge, such as the rejection or qualification of currently held knowledge in the face of new evidence. The second extends a given piece of knowledge, placing it in a wider context. The last consists in identifying novel details of the mechanism producing a known result. Below we describe these three modes for generating novel knowledge in detail.

1.2.1 Knowledge revision

Given an existing body of knowledge stating that:

$$A \stackrel{\mu}{\Rightarrow} B$$

a revision consists in adjusting one or more of the components of the explanation. For example, new evidence may highlight counterexamples to a general law, requiring special treatment for specific cases. The new, revised knowledge may be expressed as:

$$A' \stackrel{\mu'}{\Rightarrow} B'; \qquad A'' \stackrel{\mu''}{\Rightarrow} B''$$

with $A = A' \cup A''$ and $B = B' \cup B''$. That is, we modify our definition of the initial state, final state and/or explanatory mechanism to accommodate the new evidence with the prior knowledge by better specifying the elements of the explanation.

This mode for generating new knowledge is the one traditionally considered to replace knowledge proved false, such as the rebuttal of the Ptolemaic model in favor of the Copernican perspective. However, in many cases the generation of new knowledge does not imply discarding the old. Rather, new knowledge refines and improves previous knowledge, which maintains its validity intact.

1.2.2 Knowledge extension

Extending knowledge means being able to represent an existing accepted explanation $A \stackrel{\mu}{\Rightarrow} B$ as an intermediate step in a chain of explanations forming a new, broader piece of knowledge. For example, limiting ourselves to adding one single step on each side, we may consider an extension obtained by adding a preceding description, call it $A_{-1}$, before the initial state of the existing piece of knowledge, and/or a subsequent consequence of its final state $B$, say $B_{+1}$:

$$A_{-1} \stackrel{\mu_{-1}}{\Rightarrow} A \stackrel{\mu}{\Rightarrow} B \stackrel{\mu_{+1}}{\Rightarrow} B_{+1}$$

In other terms, the new piece of knowledge will be $A_{-1} \stackrel{\mu'}{\Rightarrow} B_{+1}$, where the new explanation is a combination including the initial one, now part of a wider piece of knowledge: $\mu' \equiv \mu_{-1} \cdot \mu \cdot \mu_{+1}$.

The extension of knowledge concerns the formation of complex explanations by chaining elementary ones. A simple example of knowledge extension is the concatenation of two theorems, where the hypotheses of theorem 2 are the claim of theorem 1. A new theorem is therefore composed of the hypotheses of theorem 1 producing the claim of theorem 2, using as proof the sequential combination of the proofs of the two existing theorems. More generally, extending knowledge means connecting existing bits with newly discovered explanatory links.

1.2.3 Knowledge deepening

Similarly to the infinite recursion of questions posed by pestering children, any answer to any question can be further requested to be supported by a more detailed explanation. Deepening an existing piece of knowledge means asking how the explanatory mechanism µ actually carries out its role of turning the initial state A into the final state B. For example, the division of an existing explanatory mechanism µ into two distinct steps forming an intermediate state can be represented as:

$$A \overbrace{\stackrel{\mu'}{\Rightarrow} A' \stackrel{\mu''}{\Rightarrow}}^{\mu} B$$

Deepening knowledge therefore means opening the box of an accepted explanatory mechanism and describing in greater detail its intermediate steps, and their generative mechanisms, which, all together, comprise the original explanatory mechanism µ.

An example of knowledge deepening is the law of physics linking the pressure, temperature and volume of a gas. Such a law can be stated and used for practical purposes, being, on its own, a relevant piece of knowledge. However, researchers may (and actually did) investigate the underlying process giving rise to the law’s regularity and, on success, the same initial and final states can be explained in greater detail by describing the molecular (or atomic, or sub-atomic, ...) properties leading to the known and unmodified aggregate properties.
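To make the three modes more concrete, the following sketch (again purely illustrative: the functions and the paraphrase of the gas-law example are ours, not the paper’s) treats extension and deepening as operations on explanation triples.

from dataclasses import dataclass
from typing import Any, List

@dataclass
class Explanation:     # redefined here so the sketch is self-contained
    A: Any             # initial state
    mu: str            # explanatory mechanism
    B: Any             # final state

def extend(before: Explanation, core: Explanation, after: Explanation) -> Explanation:
    """Knowledge extension: chain A_-1 => A => B => B_+1 into A_-1 => B_+1."""
    assert before.B == core.A and core.B == after.A, "adjacent states must match"
    return Explanation(A=before.A,
                       mu=f"{before.mu} · {core.mu} · {after.mu}",
                       B=after.B)

def deepen(coarse: Explanation, intermediate: Any, mu1: str, mu2: str) -> List[Explanation]:
    """Knowledge deepening: split A =mu=> B into A =mu'=> A' =mu''=> B."""
    return [Explanation(A=coarse.A, mu=mu1, B=intermediate),
            Explanation(A=intermediate, mu=mu2, B=coarse.B)]

# Deepening the gas-law example: the aggregate regularity is kept, but an
# intermediate (molecular) description is inserted between the same two states.
gas_law = Explanation(A="gas heated at constant volume", mu="ideal gas law",
                      B="pressure increases")
print(deepen(gas_law, "mean molecular kinetic energy rises",
             "kinetic theory of gases", "more frequent and energetic wall collisions"))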


1.3 Example

We have shown that considering explanations as the building blocks of knowledge allows us to identify three directions of research, i.e. of attempts to increment existing knowledge. A research project is therefore composed of steps in one of these “directions” of research. Actually, it is likely that different directions are explored at different times, or even at the same time, pursued for different interests. Consider the following case as an example of how our proposal can be used to classify the steps of a research project (Yoshimoto et al., 2013).

1. Obesity is significantly correlated with cancer (existing knowledge).

2. Liver cancer is correlated with obesity (initial conjectural knowledge, as a revised qualification of a generic bit of knowledge).

3. Fat animals have a different variety of gut bacteria than non-fat individuals; some bacteria produce inflammation; inflammation promotes cancer (existing bits of knowledge, so far unrelated).

4. Experiment 1: compare fat and non-fat mice. No difference in cancer rates (negative knowledge: obesity on its own does not induce cancer).

5. Experiment 2: inject a carcinogen into mice. 5% of non-fat mice develop lung cancer; 100% of fat mice develop liver cancer (creates new knowledge, where A is “obese and cancer-prone mice” and B is “develop liver cancer”).

6. Investigate the explanatory mechanism µ of experiment 2. Discover that the cancerous cells in the mice’s livers were damaged like aged cells.

7. Experiment 3: test the role of gut bacteria. Mice receiving general antibiotics, reducing the number of bacteria in the gut, are far less likely to develop liver cancer than those without antibiotics.

8. Experiment 4: select the class of bacteria responsible for liver cancer. Administering mice only one type of antibiotic (killing Gram-positive bacteria only) produces the same cancer reduction.

9. Investigate the link between bacteria and cancer. High levels of an acid (DCA) are induced by a type of (Gram-positive) bacteria and, in turn, induce inflammation (and aging) in liver cells, favoring cancer.

A journalist reporting on the paper for the wider public² concludes: “There is, then, a chain of causation leading from the gut to the liver that promotes tumours in obese mice”. Our proposal is that scientists develop knowledge by cumulatively revising, extending and deepening existing knowledge until they reach new explanations, i.e. produce new knowledge. Essentially, the researchers revised, extended and deepened existing knowledge by producing ever more detailed explanations for phenomena known in general terms, advancing conjectures and devising specific experiments meant to validate or reject each conjecture. Notice also that the new knowledge is not conclusive. Crucial details are still missing, such as the metabolic process leading those bacteria to generate the chemicals responsible for attacking the cells, and the reasons for the response of the cells themselves.

² The Economist, June 29th, 2013: “A punch in the gut”.
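Purely as an illustration of the extension mode at work in this example, the reported “chain of causation leading from the gut to the liver” can be written as a concatenation of (initial state, mechanism, final state) triples; the wording below paraphrases the steps listed above and is not taken from the original study.

chain = [
    ("obese, carcinogen-exposed mice", "altered gut flora (more Gram-positive bacteria)", "shifted gut microbiome"),
    ("shifted gut microbiome", "bacterial metabolism raises DCA levels", "elevated DCA"),
    ("elevated DCA", "DCA induces inflammation and aging in liver cells", "damaged liver cells"),
    ("damaged liver cells", "inflammation promotes tumour growth", "liver cancer"),
]

# Knowledge extension: composing the chain gives back the coarse explanation
# "obese, carcinogen-exposed mice => liver cancer", whose composite mechanism is
# the concatenation of the intermediate mechanisms.
composite = (chain[0][0], " · ".join(step[1] for step in chain), chain[-1][2])
print(composite)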


We suggest that it is possible to interpret the history of scientific discoveries in any scientific discipline as a process of cumulatively deepening, extending and, occasionally, revising prior explanations. Obviously the types of explanations vary across disciplines and, with the type of phenomena considered, even within disciplines. In the following we propose a classification of the types of explanatory mechanisms available to express knowledge.

1.4 Classes of explanatory mechanisms

Notwithstanding the generality of the concept of explanation, there are only three broad classes of explanatory mechanisms liable to be used in explanations. This classification allows us to identify the nature of explanations, but does not imply that every explanatory mechanism used in practice must fall exclusively into one of the classes. Rather, most of the explanatory mechanisms used in practice are made of several sub-explanations classified in different categories that, jointly, form the actual explanation. The classification serves to decompose and identify the building blocks of an explanation, a necessary step to elaborate on the generation, representation and evaluation of knowledge.

We suggest that there are three classes of explanations: aggregative, transformational and logical/associative.

1.4.1 Aggregative explanations

We refer to aggregative explanatory mechanisms whenever the link between two states concerns the aggregation or disaggregation of elements in one or more hierarchically related sets. For example, any mathematical statement associating numerical values relies, ultimately, on computation, that is, on the elaboration of the basic aggregative operations³. Quantitative relations, “explaining” values in terms of other values, are a form of aggregative explanatory links. For example, the Pythagorean theorem may be understood in terms of the quantity identified by the square of the hypotenuse being “explained” by the sum of the squares of the other sides. Obviously, we are not sustaining that this interpretation is the correct one, but just that any mathematical concept, from a single value to the most complex equation, is compatible with an interpretation as an explanation, however impossibly cumbersome the use of this format may be in practice.

More in general, aggregative explanations concern not only values, but any relation between aggregate entities and their elementary components, in both directions. For example, the properties of a wall may be linked to the properties of the bricks forming it and, vice versa, the properties of a brick may depend on its position in the structure of the wall it is part of. Similarly, aggregative explanations relate the properties of colonies of bacteria to the symptoms generated in the body, the properties of economic actors to the aggregate features of a market or economic system, or molecules to their individual components through the laws of chemistry. In general we consider as aggregative explanations all relations among entities at different levels of aggregation.

1.4.2 Transformational explanations

This class contains the explanations that rely on the passing of time to indicate the irreversible transformation of the entities of a system from their initial state into their final state.

³ The very concept of number is, after all, the result of repeatedly summing the unit. Hence, numbers themselves may be “explained” in terms of the unit and the operation of summation.


Such explanations may also indicate a change in the nature of an entity, referring to properties not related to quantitative or other aggregative aspects.

As an example, consider the “knowledge” explaining the growth of a tree. At primary school children are taught about the seed, germination, the growth of roots, trunk and leaves, flowers, fruits, etc. The explanatory links between these elements are, partly but fundamentally, due to the transformation taking place over a period of time. Other examples may be growing organisms, aging, learning, technological change, etc. All these cases involve structural or morphological changes through time, besides, possibly, one or more quantitative dimensions.

A characteristic of many, if not all, of these types of explanations is the irreversibility of the changes, at least by the same process. That is, if the process explaining one phenomenon can be reversed, the return to the original state from the transformed one requires a completely different mechanism from the original transformation. For example, a seed generates a tree by one process, and the tree produces seeds by means of a totally different one.

1.4.3 Logical/associative explanations

A residual class is composed of explanations based on any type of association determined by formal rules, such as codifications or look-up tables. For example, coding specific colors with positive integers, assigning names to elements, etc. are examples of (components of) explanations not falling into the previous two categories. Explanations of this type can also be logical inferences, associating assumptions with the logical implications determined by specific rules.

The proposed classification indicates the logical building blocks of explanations, classes of components that are normally used to form higher-level (composite) explanations. As already mentioned, in most practical cases an explanation is a combination of elementary explanations from different classes. For example, the aging of a living organism is described in terms of structural and quantitative changes, as well as of some details of the chemistry involved in its metabolism. Considering the case of the growth of a tree, we need to use jointly explanations concerning the size of the elements, their numbers, their structural modification, the chemical processes, etc. Though the phenomenon is obviously due to the joint effects of all three types of explanations, the way we store knowledge about the phenomenon as a whole necessarily separates the contributions of the individual explanations.

A particularly relevant case consists of dynamic quantitative systems. In these cases it is possible to turn temporal explanations into the mere measurement of time. For example, the movement of planets in the solar system is obviously a dynamic phenomenon. However, some aspects of the system allow time to be considered a mere quantitative variable associated with the other relevant variables, such as the positions and masses of the planets. In this way mathematical relations suffice to express any relevant aspect. However, the possibility to consider time as a quantitative variable is not always available. As we will discuss below, in certain cases the irreversibility of time cannot be neglected, and therefore purely mathematical analyses are not sufficient to account for some phenomena.

2 Scientific Knowledge

Until now we have been deliberately ambiguous concerning the difference between generic knowledge and scientific knowledge.


The reason is that we believe the two forms of knowledge are fundamentally identical in nature and are treated by people in the same way, since scientists’ cognitive processes are no different from those of other human beings. The difference concerns two aspects. Firstly, scientific knowledge is required to be formally expressed in such a way that any other person sharing the language of the discipline is able to understand unambiguously the content of the knowledge as expressed by a fellow scientist. In this respect science differs from, say, the wisdom of experts, who can apply, but not perfectly communicate, their (tacit) knowledge.

Secondly, scientific knowledge undergoes a process of evaluation in order to assess its accordance with reality, either in absolute terms or with respect to competing pieces of knowledge. In this respect, the difference between lay and scientific knowledge is only a matter of the degree of confidence defining how reliable a piece of knowledge should be considered.

The assessment status of a piece of knowledge typically varies through time and may differ among fellow scientists. Through time a potential explanation may change from unproven conjecture to established truth, and finally to discredited historical curiosity. The variety of opinions across people concerning the reliability of a piece of knowledge, as generated by scientific assessment, may depend on limited information and/or capacity to deal with the subject. However, even among people in full command of all the skills required to understand the meaning of the explanation and of the information available to evaluate it, an irreducible minimum of subjectivity leaves room for the possibility to disagree about its reliability, i.e. about the conclusions to draw from the assessment procedure.

In this section we deal with the way knowledge is assessed, that is, how it is supported by evidence in order to increase the confidence placed in it. Below we provide a generic definition of the process of assessment for scientific knowledge resulting from equating knowledge to explanations. This definition of the assessment procedure has a number of interesting properties, discussed in the remaining part of this section.

2.1 Knowledge assessment

The ultimate goal of assessing scientific statements consists in evaluating how strongly empirical evidence supports a claimed piece of knowledge, that is, whether a stated explanation is “true” or not. Accepting that knowledge is expressed as a chain of explanations allows us to formalise the process of evaluation, clarify its limitations, and possibly help to solve potential problems.

A necessary requisite for scientific assessment is that the claimed explanation is clearly and coherently expressed, so as to avoid ambiguous statements preventing the identification of evidence confirming or negating the claimed piece of knowledge. Hence, we will subject to scientific assessment not the explanation itself, which is an abstract concept stored in people’s minds, but its expression in terms of a model. Here we define a model as the symbolic representation of the salient features of the system involved in the phenomenon of interest, implemented by means of a language, such as mathematics, a programming language, logical derivation, the laws of chemistry, etc. A modeling language is made of symbols and logically consistent rules governing the relations among symbols.⁴

Given the proposed structure of a piece of knowledge, the assessment concerns its three components: the initial state, the final state, and the transformation turning the former into the latter.

⁴ We are not concerned here with assessing the representational power of formal languages, so we simply assume that such a possibility exists and that all people interested in the phenomenon under assessment are able to understand the formal language and agree on whether a specific model is legal or not under the rules of the adopted language.


Figure 1: Knowledge is made of three elements: the initial state of a system, an explanatory mechanism transforming the initial state, and a final state. Knowledge is communicated by means of a formal language (Model), which is compared to Reality by means of an Assessment.

Figure 1 represents the three levels of reality, model and (cognitive) knowledge for the three elements of an explanation. The model provides a formal representation of the initial state $A$, a chain of formal passages applying the rules of the chosen language representing $\mu$, and, finally, a representation of the final state $B$. Notice that the expression of the model may represent the format of explanation only implicitly. For example, a mathematical model may be expressed as $Y = f(X)$. For assessment purposes, however, we will consider a collection of observations $(X^*_1, Y^*_1), (X^*_2, Y^*_2), \ldots, (X^*_Z, Y^*_Z)$ and test whether each of these observations follows the purported law expressed by $f(\cdot)$. Hence, the actual tests will be on the set of “explanations” generated by the available data: $X^*_1 \stackrel{f(\cdot)}{\Rightarrow} Y^*_1$, $X^*_2 \stackrel{f(\cdot)}{\Rightarrow} Y^*_2$, etc.
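The test just described can be sketched as follows; the data, the tolerance and the name validate are hypothetical, and the check simply treats each observed pair as one candidate “explanation” to be compared against the purported law f.

def validate(f, observations, tol=1e-6):
    """Fraction of observations whose final state matches f(initial state)."""
    hits = sum(1 for x, y in observations if abs(f(x) - y) <= tol)
    return hits / len(observations)

law = lambda x: x ** 2                        # the model's claimed relation, Y = f(X)
data = [(1.0, 1.0), (2.0, 4.0), (3.0, 9.1)]   # made-up measurements (X*, Y*)
print(validate(law, data, tol=0.05))          # 0.666...: two of the three observations conform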

In order to assess whether the knowledge claimed by the researcher, as expressed in the model, is actually supported by available evidence, it is necessary to perform an assessment of each of the components of the explanation.

The abstraction stage concerns the evaluation of whether the model selects the relevant features of the real-world system to be used in the explanation. Any entity can be described in infinitely many different ways, only some of which are relevant for the piece of knowledge considered. Consequently, a researcher advancing a scientific claim needs to support the choice of the specific features selected as relevant, and, possibly, the reasons for discarding other aspects of the real-world counterparts not deemed relevant for the phenomenon considered. Among the tasks of abstraction there is obviously also the assessment of the precision of the measures adopted.


The verification stage concerns the “proof” of the explanation, listing the individual steps that, according to the rules of the chosen language, support the passage from the assumptions in the initial state to the resulting final state. There are two types of verification that can be part of an assessment exercise.

A first type of verification concerns the correctness of the explanatory mechanism in the explanation according to the declared rules of the formal language adopted. An example of this type of verification is checking the proof of a theorem or checking the algorithm of a numerical estimation. In general, this type of verification concerns the evaluation of the correct application of the explanatory mechanism claimed in the explanation.

A second type of verification concerns the assessment of the proposed explanatory mechanism against evidence from the reality concerned. If available, the observation of the real-world intermediate steps of the phenomenon during its unfolding between the initial and final states of the explanation may confirm or negate the claim contained in the proposed explanatory mechanism. This type of verification may be, in some cases, impossible to attain directly because of the lack of available evidence on the intermediate stages, whose nature typically makes them less accessible than the initial and final states. However, the very nature of the explanation proposed may suggest where to look for evidence confirming (or denying) the proposed mechanism. This may be the case, for example, for evidence from evolutionary biology, where different explanatory mechanisms compete to explain the link between the same set of initial and final evidence. The competing conjectures imply different “missing links”; therefore the debate is resolved when (and if) new discoveries support one of the mutually incompatible explanatory mechanisms.

Validation compares the final state of the model to the observation of the equivalent state in reality. As with abstraction, validation entails the assessment of a description not as a universal representation, but for the specific purpose of the explanation. Hence, the description subject to validation is, to a degree, necessarily false, because one or another detail will be missing or misrepresented. However, what is asked is whether the description of the final state as resulting from the model is sufficiently accurate for the sake of the explanation, not in absolute terms. Therefore, all the considerations advanced for the case of abstraction apply also to the case of validation.

It is worth stressing that validation is particularly relevant in case the research project aims at predictions as a prominent result. Obviously, in these cases being able to show the capacity of the model to make correct predictions is of paramount importance. However, even detailed and consistent predictions, however useful in practice, are not sufficient, in general, to assess a piece of knowledge as scientific unless a complete explanation is attached to the validated results. Consider, as a mental experiment, a fortune teller able to predict consistently an apparently random event, such as the number extracted in a lottery. Though the activity may be obviously useful, it may not be deemed scientific unless the explanatory mechanism is clarified.

In order to turn from generic knowledge, believed or conjectured explanations held privately by people, to scientific knowledge, it is necessary to provide support along all three stages. The robustness and type of support necessary to establish one piece of knowledge as accepted will vary depending on the kind of knowledge one aims to provide and on the competing alternative explanations. Before discussing some of these issues we provide some comments on the implications of the proposed assessment procedure.


2.2 Degrees of robustness, intrinsic subjectivity of assessment and finalized knowledge

Possibly the most frequently cited statement in introductory courses of Statistics is that “correlation is not causation”, warning that (mis-)interpreting a statistical link as a logical one is an error. Yet, though not sufficient, correlation may point to a logical connection, and hence the presence of correlation may be used to support a conjecture, which is not an explanation (as yet). However, the statisticians’ warning shows that it is obviously possible (and, indeed, desirable) to obtain stronger support besides mere correlation. For example, considering only statistical tools, a test of causality may suggest the direction of causation, though these tests may still hide complex relations, such as those concerning two variables mutually reinforcing each other. In these conditions it is impossible to distinguish statistically the direct effects of one variable on the other from the possible situation where the variables are completely independent from each other but are both influenced by a third, unobservable, variable. Even stronger support can be obtained when the proponent manages to identify evidence on the generative mechanism of the data, digging, for example, into the micro-data detailing the individual behaviours producing the claimed aggregate results.
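A minimal numerical illustration of the last point, using entirely hypothetical data and only the Python standard library: two variables that never influence each other can still be strongly correlated when both are driven by a third, unobserved variable.

import random

random.seed(0)
Z = [random.gauss(0, 1) for _ in range(10_000)]    # unobserved common cause
X = [z + random.gauss(0, 0.3) for z in Z]          # X depends only on Z
Y = [z + random.gauss(0, 0.3) for z in Z]          # Y depends only on Z, never on X

def corr(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n
    var_a = sum((x - ma) ** 2 for x in a) / n
    var_b = sum((y - mb) ** 2 for y in b) / n
    return cov / (var_a * var_b) ** 0.5

print(corr(X, Y))   # roughly 0.9, despite no direct causal link between X and Y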

In summary, the result of the evaluation of a scientific statement is not binary, but is necessarily expressed in different degrees and, therefore, may never reach a definitive conclusion. It is possible, for example, that different scientists given the same information maintain in good faith different opinions, one judging the available evidence as sufficient to accept the proposed explanation and another still pointing to alternatives admitting the same evidence under a different explanation.

According to the proposed representation of knowledge as explanations, it is possible to communicate and evaluate perfectly the model representing the explanation, that is, the definition of the states and the proposed explanatory mechanism: $A$, $B$ and $\mu$. However, the fidelity of the representation to reality can always be questioned, since a model is, by definition, different from reality, and the features used to represent a real system in symbolic terms are necessarily arbitrary. Hence, two scientists proposing conflicting explanations may understand each other’s proposal, agree on their respective legitimacy and internal consistency, share the same trust in the evidence used to assess the two proposals, and still honestly disagree on whether the model is adequate as a representation for the purported goal.

The reason for the irreducible subjectivity of scientific assessment is that an explanation is necessarily generated out of a question, that is, the purpose of the research project producing the knowledge. In the natural sciences most research projects have obvious goals, hence the research question is so obvious as not even to merit attention, and all the attention can be focused on the proposed answers. However, in some cases even in the purest of scientific disciplines we find examples of (apparently) conflicting explanations for (apparently) the same phenomenon, due to the different goals pursued. Consider, for example, the explanations of the effects of gravitation provided by Einstein’s General Relativity and by Newton’s classical laws. Both theories deal with essentially the same systems and with largely overlapping phenomena, yet no scientist would assess one as correct and the other as wrong. The two bodies of knowledge, represented by largely incompatible explanations, are obviously meant to address different goals, so that, for example, no one would suggest using Einstein’s equations to compute the orbit of a planet in the Solar system. In the social sciences there is far more room for ambiguity, since the same economic phenomenon (say, the financial crisis) may be perceived as decisive evidence in the judgment of a whole theory or as an oddity irrelevant for theoretical analysis. The role


of scientific knowledge is to answer questions by providing explanations; thus, there is the possibility that scientists agree on the answers but disagree on the relevance of the question. The following paragraphs comment on some implications of this consideration.

2.3 Validation vs. assessment

Validation is the most widely adopted system to assess theoretical models (Windrum et al., 2007), to the point that many researchers seem to consider validation as the only criterion for assessing scientific claims. A widespread view is that a model can be considered scientifically proven when it is “validated”, meaning that the data produced by the model match those collected from reality. An example of such an approach is Friedman’s (in)famous position that unrealistic assumptions do not necessarily undermine a theory, which can only be judged according to its capacity to produce correct predictions (Friedman, 1953).

Considering validation as the only criterion to assess a scientific statement implies assuming that a certain final state corresponds to one and only one possible pair of initial state and explanatory mechanism. Only in this case may the exclusive reliance on validation be justified as proving the adequacy of the explanation as a whole. In these cases validation is sufficient to support the adequacy of the whole model, because the confirmation of the final state necessarily confirms also the whole explanation.

Figure 2: Case of sufficiency of validation as assessment criterion. Observing only one component, for example $B_2$, justifies the rejection of any alternative explanation.

For example, consider the hypothetical case of a phenomenon that can potentially be explained by a set of alternative explanations, each enjoying the same level of a priori acceptability in terms of, say, internal coherence, compatibility with related phenomena, etc. Figure 2 represents the case in which each explanation is composed of a separate initial state $A$, a distinct explanatory mechanism $\mu$ and a different final state $B$. If the empirical evidence shows that $B_2$ is the state actually observed in reality, then one is entitled to conclude that the explanation $A_2 \stackrel{\mu_2}{\Rightarrow} B_2$ is preferable to the alternatives.

However, there are cases in which evidence on the final state alone is not sufficient to justify the rejection of alternative explanations. Figure 3 shows the case in which merely observing $B_2$ does not help in assessing which explanation should be preferred. The reason is that the evidence $B_2$ may be produced by several means, i.e. there are several possible initial states and several explanatory mechanisms compatible with the observation $B_2$. Under these conditions validation alone is not conclusive, and a researcher proposing a specific piece of knowledge, say $A_2 \stackrel{\mu_2}{\Rightarrow} B_2$, needs also to find empirical support for $A_2$ as initial state and for $\mu_2$ as explanatory mechanism.


Figure 3: Insufficiency of validation as assessment criterion. Observing only one component, for example $B_2$, does not support the preference for any of the explanations admitting the observed case as final state of the system.
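A toy sketch of the two situations (our illustration, not part of the paper’s formalism) represents each candidate explanation as an (initial state, mechanism, final state) triple and keeps those compatible with the observed final state $B_2$: in the Figure 2 configuration a single candidate survives, while in the Figure 3 configuration validation alone leaves three candidates standing.

# candidate explanations as (A, mu, B) triples
figure2_candidates = [("A1", "mu1", "B1"), ("A2", "mu2", "B2"), ("A3", "mu3", "B3")]
figure3_candidates = [("A1", "mu1", "B2"), ("A2", "mu2", "B2"), ("A3", "mu3", "B2")]

def compatible(candidates, observed_final_state):
    """Keep the explanations whose final state matches the observation."""
    return [c for c in candidates if c[2] == observed_final_state]

print(compatible(figure2_candidates, "B2"))   # one survivor: validation settles the issue
print(compatible(figure3_candidates, "B2"))   # three survivors: A and mu need separate support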

In general, limiting the assessment of a proposed bit of scientific knowledge to validation, in cases where the observed evidence is shared by multiple alternative explanations, can lead to serious mistakes, since a researcher may be tempted to misinterpret undisputed evidence as supporting, incorrectly, a given explanation. The obsession with validation stems from considering the production of reliable predictions as the only goal of research. In such cases a validation-only assessment may be justified for practical purposes, such as producing accurate weather forecasts or gaining profits by predicting the future value of assets. For practical purposes any means may be used, as long as it can show a robust record of correct predictions. However, scientists cannot be satisfied by mere predictions, which may suffice for professionals with a clearly defined goal and little interest in the means used to reach those conclusions. Science, on the contrary, must necessarily provide a rational explanation for the motivations behind the predictions. Besides, correctly identifying the elements to consider and elaborating them in the proper way is the foundation for improving upon existing knowledge and potentially producing even better results (Tetlock and Gardner, 2014).

From the perspective of scientific understanding, as opposed to empirical applications, considering the accuracy of forecasting as the sole criterion to assess a scientific claim risks errors and methodological corruption. This is due to the temptation to lighten the burden of the forecaster by carefully selecting a biased subset of the outcomes to predict. This is not, in general, an error, since starting a scientific exploration by attacking a sample of cases before moving on to the generality of observations is a common and legitimate strategy. However, one must be aware of the risk of getting stuck with an approach suiting only a few special cases (the ones considered initially) but proving utterly useless for any other. This event may become particularly harmful considering that the initial success of a theory in dealing with the test cases can be expected to provide sufficient prestige, and consequent intellectual power, to influence future scientific directions; a mistake in this phase risks misdrawing the borders of a discipline and excluding any inconvenient evidence.

An example of the bias caused by over-reliance on validation is the case of mainstream economic theory, which stands accused of providing very elegant models (market equilibria among rational agents) apt to describe conditions rarely observed and, ultimately, of limited importance. Though discussed since the 1950s, neoclassical economic theory surged in popularity only in the mid 1970s at the expense of the approach adopted in the previous decades, rooted in Keynesian thinking. The motivation for this reversal is attributed in no small measure to the capacity of the “new” approach to provide a satisfactory explanation for the inflation experienced in that period and to suggest remedies for a problem that the preceding consensus struggled to tackle. Similarly


to the fate of Keynesian economics, doomed by its difficulty in dealing with inflation, the neoclassical approach too is threatened by its neglect of issues that most people consider crucial, such as market upheavals and panicky behaviours, which scholars seem to consider as not pertaining to the discipline (Lucas, 2009).

It is not possible to avoid this kind of methodological error by relying on “empirical” observation only, such as collecting data and assessing how frequently the theory fits real-world data. The mere frequency of an event admitting a large number of possible causes (e.g. “stable markets”) cannot compensate for the failure of a theory to deal with rarer, but far more relevant, events, such as financial crises. Many natural and social phenomena are of huge interest not because they are frequent but, on the contrary, because of the relevance of their impact, which is possibly enhanced, not diminished, by their low frequency. There are many examples of phenomena at the center of scientific analysis despite their rarity: the birth of a biological species, the appearance of a comet, the founding of a new industry are all rare events attracting far greater interest than their absence.

In the light of considering knowledge as the capacity to provide explanations, support for a given claim does not necessarily need to take the form of a large number of observations, but may consist of a detailed explanation of (possibly) rare and relevant observations. Assessing scientific knowledge requires evidence not only in terms of validation (accuracy of predictions) but also on the abstraction and verification stages (which aspects of the system are relevant? how are the outcomes generated?). Such questions are as relevant as validation since, as we explain in the next paragraph, the goal of science is not limited to describing reality but, most importantly, extends to explaining it.

2.4 Describing vs. explaining

An obvious property of abstraction is that it will always produce an incomplete representation of the system, so a strict test of “realism” of representation will systematically condemn any representation as substantially different from the reality it is purported to represent, and hence as necessarily “false”. So, the issue concerning abstraction is not the generic and universal realism of the representation adopted, but the adequacy of the (necessarily unrealistic) simplification of reality used to explain the properties expressed by the explanation.

Similarly, the theoretical description provided by the model’s final state will always be different, incomplete and imprecise, so as to systematically reject any claim of validating a theoretical statement. Nor can one universally adopt a criterion based on the distance between the representation adopted and the empirical observation, since it is not always the case that representations closer to reality are actually better than sketchier ones. Focusing on delivering the highest possible adherence to reality can frequently backfire. Consider, for example, the case of “overfitting”, the condition of a statistical model with a large number of parameters that is formally able to fit the entire empirical sample perfectly. The consequence of reaching a perfect match with the sample data, however, is to prevent the model from interpreting any out-of-sample data, such as those necessary for prediction. A model is estimated (in the statistical case) or, in general, generated not with the purpose of reproducing reality, but with a specific goal, and without specifying the goal of a representation we also lack the necessary yardstick to assess the relevance of the proposed description.
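The following minimal sketch of the overfitting point (degrees, sample sizes and the noisy sine function are arbitrary choices for illustration) fits a low-degree and a high-degree polynomial to a small noisy sample: the flexible model matches the sample far more closely but typically performs worse on fresh out-of-sample data.

import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(1)

def noisy_sine(x):
    return np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=x.shape)

x_in = np.linspace(0.0, 1.0, 15)      # small estimation sample
y_in = noisy_sine(x_in)
x_out = np.linspace(0.0, 1.0, 200)    # fresh observations, as needed for prediction
y_out = noisy_sine(x_out)

for degree in (3, 12):
    model = Polynomial.fit(x_in, y_in, degree)
    mse_in = np.mean((model(x_in) - y_in) ** 2)
    mse_out = np.mean((model(x_out) - y_out) ** 2)
    print(f"degree {degree:2d}: in-sample MSE {mse_in:.3f}, out-of-sample MSE {mse_out:.3f}")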

These considerations can be summarized by the statement that “the map is not the territory” (Korzybski, 1931). Just as the same geographical map can be assessed as either a faithful description or an unusable approximation of a territory, depending on the


needs of the reader, descriptions of reality cannot be judged as more or less supported by empirical evidence unless one specifies which phenomenon the proposer wants to explain.

To clarify this point, consider a simple example: the statement “a stone and a feather fall at the same speed”. Any real-world test would falsify the statement, as our intuition and everyday experience systematically provide evidence contrary to the claim. However, consider the same statement with the additional specification “in the absence of air resistance”: then the statement turns out to be true, surprising senses unused to environments lacking an atmosphere.5
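A back-of-the-envelope sketch of the example (a crude Euler integration with a quadratic drag term; masses, heights and the drag coefficient are invented for illustration) shows the two readings of the statement: with drag the two objects land at very different times, while switching drag off they land together regardless of mass.

G = 9.81  # gravitational acceleration, m/s^2

def fall_time(mass, drag_coeff, height=10.0, dt=1e-4):
    """Time to fall from `height`, with drag force drag_coeff * v**2 (Euler integration)."""
    v, h, t = 0.0, height, 0.0
    while h > 0.0:
        a = G - (drag_coeff / mass) * v * v
        v += a * dt
        h -= v * dt
        t += dt
    return t

for drag in (0.0, 0.05):    # no air resistance vs. a made-up quadratic drag
    stone = fall_time(mass=1.0, drag_coeff=drag)
    feather = fall_time(mass=0.005, drag_coeff=drag)
    print(f"drag={drag}: stone lands after {stone:.2f}s, feather after {feather:.2f}s")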

Adding more details to an experiment, and therefore making the description more accurate, does not necessarily make the results any more relevant as far as explanation is concerned. In our example, providing details such as colour, weight, chemical composition, bird species, environmental temperature, height of the fall, etc. would undoubtedly increase the realism of the representation, but would add nothing to the central meaning of the statement. Obviously, these details are irrelevant only insofar as the phenomenon “falling speed” is concerned, since the very same objects can be considered as involved in different phenomena, such as those concerning animal species or terrain samples, for which some of those extra features may become relevant.

In conclusion, there is no way to decide, once and for all, how a real-world entity should be represented for scientific purposes without stating clearly the purpose for which the description is made. The abstract description of a system can never be considered “perfect” or “complete” as far as scientific knowledge is concerned, but needs to be tuned to the specific needs of the piece of knowledge it is designed to be part of, and the description of an element successfully used within one explanation may be useless, or even plainly erroneous, for other pieces of knowledge, even when they refer to the very same entity.

3 Structurally Dynamic Phenomena

A research project reaching a result needs to state the claimed conjecture by declaring which system is affected and which phenomenon is concerned, and then must provide evidence concerning all three aspects of the claimed explanation. Though these steps need to be universally satisfied, not all scientific results are received with the same degree of trust, and some may remain the object of disputes.

In some cases disagreement among scholars depends on different judgments concerning the reliability of evidence, such as empirical observations judged as critical by some and irrelevant by others. It is obvious that in these cases different scholars may reach different conclusions, since in effect they use different representations of reality.

In any case, some claims are intrinsically weaker than others, independently of the degree of empirical support. For example, showing a strong correlation between two variables is a useful piece of knowledge, if appropriately supported. Yet, it is still possible to question the nature of the link between the two variables, and it is therefore a rather weak statement. Further analysis may identify the underlying phenomenon responsible for the correlation, thereby offering a stronger explanation than the mere statistical acknowledgement of the effects.

Moreover, competing accounts of the same phenomenon, based on the same evidence, may adopt totally different approaches and provide different explanations. In these

5 Galileo’s discovery was considered worth an experiment by the astronauts of the Apollo 15 mission, who filmed the simultaneous fall of a hammer and a feather on the surface of the Moon for educational (and propagandistic) purposes.


cases it may be difficult to assess which piece of knowledge is correct and which is to be rejected, creating uncertainty and, consequently, decreasing the confidence in both. In these cases it is necessary to identify a test, such as an implication compatible with one result but not with the other, and possibly to search for evidence concerning that implication.

However, there are cases in which scholars studying the same phenomena provide radically different interpretations that seem impossible to compare in order to reach a definitive settlement. The intensity of conflict among scholars varies greatly across disciplines, so that many observers agree on the existence of a methodological problem in some fields, distinguishing “hard” sciences (reputed to be methodologically sound) from “soft” ones requiring a different methodological approach.

Leveraging the definition of knowledge as explanation, it is possible to apply a different perspective. We argue that there is only one overall methodology for science, which, however, requires different implementations depending on well-defined features of the phenomena of interest.

3.1 Classes of phenomena

Some disciplines are widely considered as lacking reliable methodological foundations, up to the point that a few even question whether some of them, such as Economics, should be considered scientific disciplines at all. This opinion is rooted in a supposed methodological weakness of the discipline due to a number of supposedly unique features plaguing the subject. For example, an invited editorial in the New York Times states that “the fundamental challenge faced by economists [...] is our limited ability to run experiments”6. Such argumentation is obviously misdirected and self-defeating. In fact, even excusing the complete dismissal of the field of Experimental Economics as part of the discipline (one of whose founders was awarded the Nobel Prize), Economics would certainly not be the only discipline with limited access to scientifically controlled experiments (is Astronomy a science?). Yet, a methodological problem does exist in many social sciences, as testified by conflicting approaches that appear impossible to resolve (Balletti et al., 2015). In this section we expand on the proposed use of explanations as units of knowledge to derive some contributions to this methodological debate.

We maintain that methodological issues are not a concern of whole disciplines, but depend on the features of each phenomenon under study. We identify two classes of phenomena implying different approaches to the application of the generic assessment method made of the three stages of abstraction, verification and validation. Both types of phenomena can be found in practically any discipline, but their distribution clearly differs across disciplines, so it is not surprising that methodological opinions are expressed about whole fields. We argue that though there is a single overall methodological approach, the relative importance of the three stages varies largely depending on the nature of the phenomena considered, not on the discipline. We first define the two classes of phenomena, and then consider the implications for the assessment of scientific knowledge about each of them.

The class of structurally stable phenomena comprises events fully defined in terms of changes of values of a stable set of features, typically vectors of quantitative variables. To define the phenomena in this class we replace any aspect of the real-world system with one or more variables and consider the system as perfectly represented by those variables. The phenomenon itself is therefore defined by the states of the variables used to represent the system. While the states of the features may change as defined by the phenomenon,

6 Raj Chetty, “Yes, Economics Is a Science”, The New York Times, October 20th, 2013.


the very number and type of relevant features do not. In other terms, all the relevant aspects of real-world entities are perfectly expressed in terms of modifications of the values of a given set of variables. For this class of phenomena theoretical considerations follow the collection of data, as the objective measurement of the relevant data does not require any theoretical consideration.

The alternative class, which we may refer to as structurally dynamic, considers phenomena whose relevant features fail to be fully expressed by quantitative variables only, but require a far more complex, qualitative (and, potentially, vague) representation. These phenomena may have features admitting measurable aspects, but the resulting variables do not express completely, even as approximations, the full nature of the phenomenon (as, say, the speed of a body does in Newton’s gravitational model); rather, they are proxies, possibly relevant but necessarily partial, of non-quantitative properties that cannot be captured by a stable vector of variables. For example, consider a phenomenon concerning the growth of a plant, the development of an ailing patient or the features of a cloud. At each stage and each level of aggregation one may identify specific features relevant for certain purposes, but the nature of the entities involved will never be fully represented in the same way as, e.g., the properties of a planetary system can be represented by the vectors of mass, position and speed, or as the chemical properties of a compound result from those of its elemental components. In these cases, the data possibly involved in a theoretical analysis depend necessarily on the assumptions and goals of the theory. It is quite possible that alternative theories of the same phenomenon involve different and non-comparable measurements, since, for these phenomena, it is the theory that defines the relevant variables.7

In structurally dynamic phenomena the set of relevant features is not constant, but changes through time and across levels of aggregation. For these phenomena the variables relevant to describe the system at a certain time or at a certain aggregation level disappear at a different time or level, and new features emerge describing aspects of the very same entities that, at a different time, did not exist or made no sense. In such conditions phenomena cannot be fully described by the changes of state of some features, but require also the analysis of how the very vector of variables is endogenously modified. In other terms, the dynamics of the system would need to include not only a description of changes in the variables’ values, as in traditional calculus, but also a calculus of objects to account for the varying number of variables relevant at different moments or stages (Fontana and Buss, 1996). Such phenomena comprise entities whose identity is perfectly defined, but which undergo morphological modifications as part of the phenomenon. Consider, for example, the development of an embryo from conception to adulthood, or the development of a firm from its founding to its mature phase as a multinational. In both cases the units of analysis remain the same, but the features relevant to describe their properties are necessarily different at different stages, and their values contribute to explain the structural modifications they undergo.
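A minimal sketch of a structurally dynamic representation (ours, with made-up parameters in the spirit of an industry-dynamics toy model): the state of the system is a population of firms whose number, and hence the very dimension of any vector describing the system, changes endogenously as firms enter and exit.

import random

random.seed(3)
firms = [1.0, 1.0, 1.0]    # initial firm sizes; the length of the list is itself part of the state

for t in range(50):
    # incumbents grow or shrink by a random proportional shock
    firms = [size * random.uniform(0.8, 1.25) for size in firms]
    # exit: firms falling below a minimum viable size disappear, removing variables
    firms = [size for size in firms if size > 0.3]
    # entry: occasionally a new firm appears, adding a variable that did not exist before
    if random.random() < 0.3:
        firms.append(0.5)

print(f"{len(firms)} firms after 50 periods; sizes:", [round(size, 2) for size in firms])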

The only explicit reference to the methodological problems raised by structurally dynamic phenomena (Fontana and Buss, 1996) derives from a study of the elements at the blurred frontier between complex chemical compounds and simple living biological entities. However, many authors have raised essentially the same issue. The concept of emergent properties (Lane, 1993) refers to the change of the relevant features of the same system when observed at different hierarchical levels. Likewise, the effects of positive feedbacks in physical and social systems (Buchanan, 2014) generate ever-accelerating processes whose only possible result consists in the loss of relevance of exploded variables and the emergence

7 I wish to thank F. Sylos Labini (personal communication) for having pointed out the relevance of this issue.


of new ones. In the same vein we may also include the call for “economic significance” as a necessary complement to statistical significance in econometric analysis (McCloskey and Ziliak, 1996).

Notice that we have carefully avoided assigning structural properties to the systems under analysis, insisting instead that this is a property of the phenomena we decide to be interested in. The reason is that the very same system can be involved in different phenomena falling in both categories. Consider, for example, the system “boiling water”. One may consider the phenomenon related to global variables (i.e. temperature, pressure, volume, etc.), studying their values, changes and interactions, which is a structurally stable phenomenon fully represented by the relations among the variables expressing the salient features of the system. But one may also address the chaotic dynamics of patches of gaseous molecules within liquid ones, being concerned with the shape, dimension, upward path, location, etc. of the bubbles created in certain conditions. In this case one would need to study how liquid molecules switch to the gaseous state, what affects their aggregation, how they react as a collective within the environment, etc. This second phenomenon, concerning exactly the same system in the same conditions, is structurally dynamic, and the measures we may collect (e.g., number and dimension of bubbles, their shapes, etc.) would be useful proxies for some aspects of the phenomenon, but would not represent its real nature.

3.2 Assessment and empirical evidence

The distinction between the two classes of phenomena is relevant for identifying which assessment stages matter most in evaluating scientific knowledge. From the definition of structurally stable phenomena we can conclude that the systems generating the phenomena can be fully replaced by the relevant variables, and therefore the whole phenomenon is completely represented by the levels and changes of values of one or more variables. For example, the Newtonian representation of the phenomenon concerning bodies’ movements in space does not concern any aspect of planets other than the values of the variables representing their relevant states, such as speed, position and mass.

On the contrary, structurally dynamic phenomena cannot rely on variables’ values only for their representation. Though there can be a whole range of measures detailing relevant aspects of the phenomenon, the phenomenon itself is not represented completely by these values. Consider, for example, the evolutionary history of a biological species. One can collect data on a large number of characteristics affecting the phenomenon, some of which may turn out to be extremely relevant. However, no single variable, nor any fixed combination of variables, will ever represent the true nature of the phenomenon but, at best, can only proxy some of its properties. The social sciences, such as economics, deal constantly with this class of phenomena. For example, GDP is the universally adopted measure of the economic activity of a country. However, this measure is just a proxy for a whole set of characteristics (mostly non-observable) whose real nature is irreducible to measurable values, let alone to a single variable. For example, two countries with similar GDP levels (or, for that matter, similar values of any other measure) cannot be expected to produce similar outcomes in the way two planets in the same conditions can be expected to behave in the same way.

Though for both classes of phenomena assessing a scientific claim requires support from all three stages listed above, their relevance is likely to differ. In structurally stable phenomena the explanatory mechanism connecting the initial and final states consists of elaborations of mathematical operations or other logical steps whose correctness depends


on adherence to a well-defined set of rules. Hence, as long as the rules are known and accepted, there is little scope for debate in the verification stage, while the identification and precise measurement of the variables in the abstraction stage and, more importantly, the capacity to predict the final state, i.e. validation, are of the highest importance. On the contrary, the matching of empirical evidence with theoretical values in structurally dynamic phenomena (abstraction and validation) has only limited power, since the available data are partial proxies for the unobservable core of the phenomena. In these cases the real challenge for a scientific claim is to find support identifying the mechanisms at work during the unfolding of the phenomena, captured only partially and indirectly by the available measures. Hence, the verification stage is far more relevant than validating a claimed explanation on the basis of indirect and partial measures, following the spirit of the maxim8: “Not everything that counts can be counted, and not everything that can be counted counts”.

3.3 Historicity and structural dynamics

A frequent property of structurally dynamic phenomena consists in being strongly historical, that is, they can be observed only as a sequence of unique events, radically different in many respects from one another. Strictly speaking, one may maintain that this is always the case, since any observation is necessarily different from any other in history in at least some detail (e.g. the seconds passed since the Big Bang) that may have some relevance. However, there are phenomena that can easily be isolated and treated as samples from the same category, and therefore treated as replicable, while others show an irreducibly historical nature.

Thus, a planet observed now in the sky certainly differs from the same planet observed by Newton, but we can measure the same properties of the planet and assess whether the few centuries between the observations actually matter, with fairly high confidence that no appreciable change in the basic laws has occurred.

On the contrary, phenomena involving structural dynamics are typically strictly historical in nature, with the actual observations strongly influenced by the constantly changing environment surrounding the phenomenon of interest. The evolution of, say, eukaryotic organisms may share some high-level properties with the evolution of mammals, but the quantitative variables relevant in the two instances are obviously radically different. Similarly, the industrial revolution in Japan from the second half of the XIX century deals with issues similar to those of industrialization in other places and times, but it is also unique in many respects.

The uniqueness of each instance of structurally dynamic phenomena is a difficult problem from the methodological perspective. One solution is to try to identify the features common to all instances of a certain phenomenon, and then to apply to the variables so identified the same protocol as for structurally stable phenomena. This is, for example, the approach typically adopted in Economics. The model of perfect competition, for example, is proposed as a generic representation of every instance of a market satisfying certain conditions, and the specific features of each observation are considered as additional details attached to the underlying generic model. Under such an approach, researchers should distinguish between the “signal” represented by the underlying model and the “noise” composed of the idiosyncratic features specific to each observation. This is the same approach used in classical physics, where, for example, the “signal” of the equations driving the movement of a body can be identified after cleaning the observations

8 Frequently attributed to Albert Einstein, this statement is actually from Cameron (1963).


from the “noise” of, say, friction, measurement errors, effects of distant bodies, etc.

In the case of structurally dynamic phenomena, however, the features common to all instances are so few and so generic as to be, in the end, almost irrelevant. In other terms, for these phenomena the underlying signal is so weak as to be almost unobservable, and the core of the observations consists, essentially, of idiosyncratic noise. Indeed, in such phenomena any actual observation presents an overwhelming effect of events related to, but not part of, the core phenomenon one wishes to study, so that the contribution from the core of the phenomenon is reduced to a weak, trivial signal providing no relevant information. Moreover, the literal interpretation of a specific piece of evidence as having universal validity may lead to serious errors, as shown in the following examples of clashes between apparently robust theoretical assumptions and empirical violations.

• Law of demand: buyers prefer to pay less rather than more for a given purchase. However, whole industries rely on people willing to spend more, preferring, for example, to buy a certified Rolex rather than a factually indistinguishable fake copy offered at a tiny fraction of the original’s price.

• Monetary policy: interest rate cuts incentivize investment and consumption, stimulating economic growth. However, on many occasions interest rate cuts have been interpreted as announcements that the central bank expects a weakening of economic conditions. As a consequence, operators cut their expectations, triggering the very recession the central bank wanted to avoid in the first place.

• Relevance of statistical evidence: in case of controversy, collect statistical data and rely on the stronger evidence conveyed by the majority of observations. However, many relevant phenomena depend on minority events. For example, the vast majority of newly founded firms go bankrupt within a few months or remain small activities, so that a purely statistical analysis of the fraction of startups becoming important players (the “unicorns” in investors’ jargon) would be impossible (see the sketch after this list).
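A sketch with invented numbers of the point in the last bullet: when the outcome of interest is very rare, even a respectable sample contains almost no instances of it, so a purely frequency-based analysis has essentially nothing to work with.

import random

random.seed(7)
unicorn_rate = 0.001        # assumed share of startups that become dominant players
sample = [random.random() < unicorn_rate for _ in range(2_000)]

print("startups sampled:", len(sample))
print("unicorns in sample:", sum(sample))    # expected value is about 2 in the whole sample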

Each of the above examples, as well as many others reported in the literature, implies important qualifications delimiting the application of the general rule, providing a list of exceptional cases requiring special treatment. The problem is that the frequency of exceptions is so high that the supposedly general rule is no more than a reference point of no methodological use. That is, it is not possible to interpret empirical observations through the lenses of the general rule because of the large number (and importance) of the exceptions. On the contrary, the only viable procedure consists in reconstructing the observation by adopting the general rule along with all the necessary exceptions, with the latter frequently being the most relevant part. An example of the relevance of this research approach is the literature on path dependence (Paul, 1985; Arthur, 1989; Arthur, 1994), showing the importance of historical events in shaping current observations. From the methodological perspective, the obvious implication is that the only sensible way to make sense of current evidence is to go back in time, trying to link its features to relevant past events. That is, providing an explanation for the observation, not merely a description of it.
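A minimal sketch in the spirit of the lock-in models cited above (Arthur, 1989), with invented payoff numbers: adopters choose between two technologies whose attractiveness grows with the installed base, so small early chance events decide which technology ends up dominating, and different histories may lock in to different outcomes.

import random

def adoption_history(seed, adopters=2_000):
    random.seed(seed)
    installed = {"A": 0, "B": 0}
    for _ in range(adopters):
        # intrinsic preference (random) plus a bonus increasing with the installed base
        payoff_a = random.uniform(0, 1) + 0.01 * installed["A"]
        payoff_b = random.uniform(0, 1) + 0.01 * installed["B"]
        installed["A" if payoff_a >= payoff_b else "B"] += 1
    return installed

for seed in range(5):
    print(seed, adoption_history(seed))    # the early random draws decide the eventual winner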

When research focuses on historical events concerning structurally dynamic phenomena, the assessment stages of abstraction and validation (i.e. comparing actual measures to theoretical ones) lose much of their relevance. In these cases evidence from the core of the phenomenon is normally marred by a large number of idiosyncratic events taking place


at the same time, possibly strongly related to the events relevant to the phenomenon but outside the scope of the analysis. For example, economic phenomena are in general strictly intertwined with political, cultural, military and scientific events (among others), so that finding evidence of economic-only contributions in historical records is all but impossible. Yet, understanding such contributions while controlling for (but not including) other social factors is a highly relevant endeavour worth pursuing. In these cases the major result one may hope to obtain consists not so much in replicating an ever-changing, multi-faceted “reality”, even if that were possible, but rather in providing good explanations for the observed evidence. Such results, connecting the chain of events through time and among different components of the system, may provide useful guidance both for understanding crucial events and, potentially, for deciding a course of action in future cases with similar conditions. These goals can be reached only by focusing on the verification stage of assessment, trying to find the core events on which the relevant parts of the phenomenon hinge. In other terms, economists should aim at providing an answer to the Queen’s question, not sidestep it by claiming how improbable the crisis appeared to be.

4 Implications for Economic Analysis

In the sections above we argued that advancing science consists in finding more and better explanations for observed phenomena. Moreover, we proposed a distinction between classes of phenomena that is relevant for identifying the most sensitive aspects of the assessment process. In this final section we draw some implications addressing, in particular, methodological issues that typically concern scholars in economics, but which may be of interest also to neighbouring disciplines such as management, sociology, etc. The goal of this section is not to provide radically new indications, though a few original suggestions will emerge. Rather, we wish to show that the proposed perspective of knowledge-as-explanation fits neatly the common-sense indications broadly adopted by experienced scholars.

4.1 Planning, developing and assessing research projects

The first indication provided by an experienced researcher to a junior scholar, such as the suggestions of a PhD supervisor to a student, consists in identifying the research question for the project. According to our perspective the research question should be expressed as the missing element of an explanation. Thus, a project may be directed to identify the initial state responsible for a certain condition ($? \stackrel{\mu}{\Rightarrow} B$), to identify the intermediate steps linking two conditions of a system ($A \stackrel{?}{\Rightarrow} B$), or to devise the expected outcomes of a given system ($A \stackrel{\mu}{\Rightarrow} ?$).

Formatting a research question as an incomplete explanation to be filled also allows one to express the conjectured results expected from the research, to assess its relevance, and to evaluate the overall project. The expected, conjectured results can be placed in the missing element, so that evaluators may easily grasp the scope of the project proposal, for example assessing the likelihood of success and pondering the riskiest steps of the project. Similarly, independently of the difficulty of accomplishing the project, representing it in terms of new explanations makes it possible to evaluate its potential innovativeness in case of success, by comparing the purported new explanations with the currently held ones.
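As a purely illustrative sketch of this format (the class, field names and the example states are our own inventions, not an established tool), a research question can be stored as an explanation with one missing element, the slot that the project is expected to fill:

from dataclasses import dataclass
from typing import Optional

@dataclass
class Explanation:
    initial_state: Optional[str]    # A
    mechanism: Optional[str]        # mu
    final_state: Optional[str]      # B

    def missing(self):
        """Name the element(s) the research project still has to provide."""
        slots = {"A": self.initial_state, "mu": self.mechanism, "B": self.final_state}
        return [name for name, value in slots.items() if value is None]

# a verification-type question: initial and final states are documented,
# the explanatory mechanism is the conjecture to be supported
question = Explanation(initial_state="documented pre-crisis conditions",
                       mechanism=None,
                       final_state="observed financial crisis")
print("element to be filled:", question.missing())    # -> ['mu']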


Besides its usefulness in the evaluation of a project proposal, the format of knowledge as explanations is also valuable in maintaining a correct methodological attitude during the development of the project. Given the overall task of fulfilling the project objective, it will be necessary to break up the work into individual steps. Each step will consist of either retrieving the relevant bit of knowledge (i.e. explanation) from the existing literature, or devising a new one when necessary by, as we have suggested above, revising, extending or deepening existing knowledge. Hence, the researcher can coherently plan her work, marking each step, planned or accomplished, according to the method actually used, circumscribing the necessary literature, spelling out possible difficulties, etc. In short, considering a research project as a collection of organically related explanations helps to develop a coherently structured record of the activities planned and performed.

Assessing each step will be possible according to the three stages mentioned above. While each element of an explanation deserves consideration, depending on the innovativeness and the potential controversies the researcher will be driven to search for support for the most critical elements of her proposal.

Since research is, by definition, an uncertain business, it is very likely that an apparently promising line of research fails to deliver the expected results. Indeed, failing to fail may indicate an insufficiently innovative project. Failures do not necessarily imply the complete rejection of the entire research project. The very interest underpinning the initially planned research program means that the failure to proceed as planned is, on its own, at least partly surprising, and hence useful to communicate. Negative results are, in general, considered failures in terms of research credentials, although many observers hint that in many cases they should be given more credit. For example, disseminating the failures of tests on new drugs may save money and lives in future tests. In general, the format of explanations for representing knowledge offers the possibility of highlighting the importance of a negative result, permitting one to stress the apparent plausibility of a conjecture and, hence, the striking relevance of results negating it.

In case of failure to proceed according to the initial plan, the set of confirmed knowledge collected until the setback can be re-used, either devising a different route to fulfil the same overall result as in the original plan, or even designing a new research project on the basis of the new knowledge constituted by the unexpected failure. Such possibilities, commonly experienced by researchers, are facilitated by storing knowledge according to the format of explanations, each composing an element of a puzzle to be eventually combined following initial conjectures, confirmations and failures, revised conjectures, etc. In particular, the study of the reasons for the failure to build an explanation according to a certain mechanism is likely to provide indications suggesting a different, unexpected explanation playing the same role in the overall project as the one proven wrong. In this way, the project is not only saved, but also enriched by new, unplanned content.

Once a research project is completed it is necessary to support its claims. Using the format of explanations, the author will be able to defend her contributions by presenting the logic of the project as a sequence of chained explanations, and by listing the support collected for each of them. As we have seen above, supporting a scientific claim requires three distinct pieces of evidence, concerning the reasonableness of the initial state (abstraction), the congruence of the final state (validation) and the means by which the former turns into the latter (verification). Depending on the phenomenon targeted, the relative importance of each piece of support will vary. However, by adopting explicitly the format of explanations, evaluators will be able to clearly spot potential weaknesses and to flag precisely possible disagreements between author and evaluators in case their judgments diverge.


The adoption of explanations as the explicit format to represent a research project, such as a PhD program, a paper or a book, also allows one to identify potential pitfalls to be avoided. As we noted above, the choice of a certain description to represent a given real-world entity follows from the properties we wish to exploit for our explanation. Such a choice needs to be made carefully, especially when dealing with entities that may assume several different shapes under different circumstances. The generally wise strategy of building “upon the shoulders of giants”, i.e. improving on the prior results of recognized masters, may easily mislead a researcher into choosing elements introduced by a prominent scholar for a different purpose, and therefore possibly unsuitable for the purposes of the research at hand. An assumption adopted by a renowned researcher is not, per se, applicable without further consideration. Indeed, relaxing rigid assumptions or completing the analysis to include cases discarded by pioneers is a major tool to advance research, which does not undermine the role of predecessors but, on the contrary, may further increase their relevance. Conversely, blindly transplanting bits of a given research into a different context, without considering their adequacy and relying only on the prestige of the original authors, may lead to serious errors by producing inconclusive results. The authority of past masters is not a sufficient justification for the adoption of an assumption that, however relevant in a given context, needs to be re-discussed when used in a different one.

4.2 Economics of explanations

Economic theory, as it has been taught in most universities for decades, is defined as the discipline studying the “allocation of scarce resources”. In addition, most of its core results rely on the additional assumption of diminishing marginal returns, stating that productivity falls at increasing scales of production. The goal and the (apparently) technical assumption are particularly useful in allowing the representation of economic problems in mathematical terms, where any phenomenon is turned into quantitative variables producing results in the reassuring format of theorems.

The definition and the core assumption stem from the dawn of the discipline in the early XIX century. At the time, the most pressing concerns were related to agricultural production and population growth. Obviously, scarcity was of paramount importance, and diminishing marginal returns were both an evident technical property of the most relevant economic activity (farming) and the cornerstone for explaining prices and the distribution of income, for example between workers and landowners. It is interesting to note that at the time economics began to be discussed using mathematical tools, which produced the first, and possibly most famous, of a long series of spectacularly wrong economic predictions: the Malthusian expectation that economic development would necessarily lead to widespread famine and a nightmarish society surviving at the level of pure subsistence9. The failure of the Malthusian prediction to come true did not prevent the discipline from flourishing, nor the tools at its base from being adopted for most of the subsequent analysis. This is puzzling to those who consider science as devoted to providing faithful representations of empirical observations, but becomes perfectly understandable in terms of knowledge gained, from successes as well as from failures.
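A toy rendering, with invented parameters, of the Malthusian argument recalled above: agricultural output exhibits diminishing marginal returns to population, population grows whenever output per head exceeds subsistence, and income per head is therefore pushed back toward the subsistence level.

subsistence = 1.0
population = 10.0

for generation in range(50):
    output = 20.0 * population ** 0.5    # diminishing marginal returns to labour
    per_head = output / population
    # population grows in proportion to the surplus over subsistence
    population *= 1.0 + 0.3 * (per_head - subsistence) / per_head

print(f"output per head after 50 generations: {per_head:.2f} (subsistence level = {subsistence})")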

As we noted above, science is not concerned primarily with describing reality, but with explaining relevant phenomena. Mainstream economics (and also many of its critics) is a perfect example of the consequences of this methodological error. The way Ricardo and Malthus described economic phenomena was determined by the specific problems they wanted to tackle. The ideas and technical tools developed for a specific purpose can

9 For this reason Malthus earned the discipline the label of “dismal science”.


frequently be applied successfully to other areas, but this does not imply that the first analytical tools proposed are the best suited for any other application; other tools may be necessary to tackle different phenomena.

The economic tradition that remained most loyal to its classical roots is also the one that has had the greatest problems in dealing with areas of economic activity where the problem of scarcity is of minor relevance and the impact of diminishing marginal returns is minimal, or totally absent. Below we briefly list a few examples of areas where the core tenets of classical economics fail to play a relevant role and that, not coincidentally, are also areas where the economic approach rooted in scarcity and diminishing returns fails to provide results (or, worse, leads to wrong conclusions and policies).

Technological innovation entails a number of aspects that fail to be represented in terms of scarcity and that defy, in many respects, diminishing returns. More than allocation, what matters in the case of innovation is the creation of previously non-existent goods and services (and consumers’ needs). Moreover, innovation deals mostly with knowledge, which can be considered as subject to increasing marginal returns. It is not surprising, therefore, that economic theory mostly ignores innovations resulting in new products in its standard textbooks, and that critical schools rejecting the standard approach place innovation at the center of economic analysis (Nelson and Winter, 1982). It is also suggestive that standard policy recommendations concerning innovation inspired by standard economics try to introduce scarcity artificially where it is naturally absent. For example, though economists generally consider free markets as a superior coordinating system, they also support state intervention to reduce market competition in the form of patents, even in cases where this appears to be a socially inferior coordinating system (Marengo et al., 2012).

Similarly to the case of innovation, financial markets frequently show features different from other markets, and ignoring those differences produces severe problems. In financial markets “value” and “credit” are two core elements, and they too frequently defy scarcity as commonly understood and fail to show diminishing marginal returns. For example, the standard reaction of demand to a price increase is to reduce the quantity requested, while for financial assets the typical reaction is an increase in the demand for the asset (in expectation of capital gains). The very concept of scarcity also takes a peculiar form. For example, the mere diffusion of leveraging among the actors in a market potentially generates an infinite amount of credit that can be spent to increase the notional value of the assets owned by a party. While in normal circumstances notional values play no role, with the difference between assets and liabilities being the only relevant variable, the failure of an actor backing one class of assets leads to immediate contagion, as experienced in the Lehman Brothers case. Ignoring these peculiarities, many economists treat financial markets with the same analytical tools developed for “standard” markets, considering, for example, financial bubbles and flash-crashes as occasional aberrations departing from “normal” times characterized by equilibrium, even though the evidence suggests the opposite. Similarly, mainstream policy recommendations insist on light regulation, even when evidence shows that there are activities detrimental to socially effective market functioning.10

Finally, consider the economic systems affected by externalities. Economic theory is well aware of the disruptive effects of externalities on the standard results produced under the assumption of no external effects. However, the space devoted to externalities in mainstream economic theory is limited to brief mentions and vague recommendations, with only limited attention outside the circles of economists concerned with specific issues,

10 For example, the role of high-frequency trading is clearly destabilizing, involving a huge number of cancelled orders biasing the perceptions of operators, and amounting to more than 50% of trades on the most advanced markets. See Leal et al. (2014) for a critical analysis of high-frequency trading.


such as environmental economics. In reality, the role of externalities is crucial to understanding entire segments of increasing importance in modern economies. Markets for health services, information and risk, as well as any market where social media play an important role, depend crucially on externalities. Yet, the only contribution of economic theory is, essentially, a negative one, stating that in these cases markets will fail to provide socially desirable outcomes, and leaving to empiricists the task of solving specific problems without providing a global framework.

The above examples could be extended with the many others where authors question the relevance of core economic concepts, such as perfect rationality and equilibrium. In all these cases most of the critiques question the realism of economic assumptions, pointing to evidence suggesting that actual economic events differ substantially from those assumed by theoretical economic hypotheses. Our discussion, conversely, raises a different, and partly complementary, issue, that of appropriateness: how useful are those assumptions for explaining real-world events and solving specific problems?

Asserting the necessity of explanation as the ultimate goal of science implies relegating descriptions to an instrumental role, called upon in different formats depending on the necessities of the specific application. Thus, rather than building theories based on supposedly universally valid axioms, economists should seek collections of explanations for classes of phenomena, adopting the most appropriate assumptions for each case. For example, we may consider a class of models where scarcity is deemed important and derive certain conclusions, without restricting the possibility of considering other cases for which scarcity is not relevant and other factors drive economic phenomena. Similarly, assumptions concerning the rationality of agents should not be considered religious dogmas to accept or reject according to personal beliefs, but should be regarded as different tools available for use, upon careful justification, in different cases.

A consequence of the proposed approach is that it becomes very difficult to build a logically coherent theoretical construction providing clear-cut positive and normative indications, such as those offered by the currently dominant stream of economic thinking. Critics of that stream retort that they prefer “to be roughly right than precisely wrong”, and aim at rejecting in toto any analysis based on assumptions shown to be unfounded. Framed in terms of realism, this debate is unlikely ever to reach a conclusion. The reason is that it is always possible to find empirical evidence in favour of, as well as against, any specific assumption, and therefore empirical testing based on the similarity between theoretical data and empirical evidence alone will never succeed in settling scientific disputes.

Our proposal suggests that the conflict between alternative positions is less stark than currently assumed. There is room for a pragmatic approach based not on determining which approach is “true”, or better in absolute terms, but, more simply, on assessing which classes of problems are better tackled with which theoretical tools. Wherever irreducible conflicts remain, as is likely, pitting one theoretical approach against alternatives, judgments should be based on assessing which one provides the best explanation, rather than relying only on a statistical data-matching approach.

There are cases in which economic knowledge has the tools to produce specific results, such as detailed predictions or policy indications. In other cases economic knowledge will not be able to reach any sharp conclusion, and will still be useful by providing broad indications and possibly wise suggestions. In both cases, the scientific debate should be rooted in assessing which position has the stronger support, not in terms of descriptive power (i.e. adherence to an arbitrarily chosen selection of real-world measures), but in terms of logically and empirically founded explanatory power.


4.3 Econometric modeling

In recent years two main drivers have promoted the extensive use of econometrics as a major tool of economic analysis. On the one hand, the dominance of a quantitatively oriented economic theory favoured the interpretation of empirical evidence as the unfolding of measurable phenomena whose functional forms were dictated by theory, so that the only remaining interest consisted in quantifying the effects on the basis of actual measures. On the other hand, the huge increase in data availability, computing power and advances in statistical theory lowered the barriers to applying econometric analysis to essentially any kind of problem.

While a data-driven approach may be considered a positive development, because it does not require, at least apparently, the application of questionable theoretical assumptions, there have also been critical assessments. Part of these stem from the diffusion of econometric practices that fail to respect statistical best practice, essentially over-interpreting the economic meaning of empirical data (McCloskey and Ziliak, 1996). More seriously, however, many pieces of research seem to have dispensed completely with any theoretical framework, presenting results from econometric analysis with the status of facts that theoretical analysis needs to accommodate. Essentially, such an approach elevates economic data to the primary role of indisputable evidence, so that econometric results dominate, methodologically, any theoretical consideration. Such a perspective, which may be sensible for purely quantitative phenomena, risks producing absurdities when applied to a discipline whose data depend on a theoretical consensus, and not on physical properties.

An example, admittedly extreme, is the case of a paper[11] published in a prestigious journal (Proceedings of the National Academy of Sciences) correlating the damage produced by hurricanes with the gender of the name assigned by meteorological officers. Besides the highly dubious technical analysis (exposed by experts, for that matter), the very intention of pursuing this project, forcing statistical techniques up to their legitimate boundaries (and possibly beyond) and, eventually, managing to get it published, reveals a methodological stance by which theoretical considerations are supposed to be ancillary to “real” evidence produced by means of statistical analysis. In other terms, the approach adopted in the paper treats data as indisputably true, and then expects theory to catch up with the evidence, irrespective of any consideration of the means by which this may be done. This approach is fundamentally flawed.

[11] According to an increasingly diffused practice, the number of citations received by a paper is considered a proxy for its overall quality, affecting authors’ careers, research funding, etc. Since, in my opinion, the paper I refer to does not deserve any distinction, I avoid citing it; it can easily be retrieved by interested readers.

Economic systems generate phenomena that do not concern primarily quantitative variables, unlike typical phenomena in classical physics, for which the variables’ values embody all relevant aspects of the phenomenon of interest. Rather, quantitative measures for the phenomena we defined above as structural-dynamic cannot but be proxies designed to capture, with some degree of approximation, non-quantitative aspects of a phenomenon whose essential nature defies a purely quantitative representation. Consider, for example, the nature of GDP values, representing (but by no means coinciding with) the unquantifiable concepts of “economic activity”, “material wellbeing”, etc. of an economy. Clearly, GDP series can represent the state of the economy in some respects, but these data cannot be assumed to represent all economically relevant aspects of the country. To clarify, two bodies with the same mass moving at the same speed are perfectly identical from the perspective of the laws of motion, since those variables make up the very phenomenon the laws represent. Any consequential event will therefore be identical, irrespective of other features of the two bodies, such as color, temperature, chemical composition, etc. (provided, of course, that those features do not interfere with the relevant variables). On the contrary, two countries with the same GDP, or sharing any other similarity in the values of their proxies, cannot be expected to behave similarly, except in rare cases and, even then, in a very restricted sense. The reason is that the “laws” of economics do not concern the variables, but the behaviours of individuals and organisations contributing to, but not identifying with, the measurable variables assumed to proxy in quantitative terms their non-quantitative features. For this reason, a data-driven approach, sensible for cases in which the measures identify with the phenomenon, cannot be methodologically acceptable for cases in which the very nature of the data depends on the theory defining its content, interpretation and limitations. Concerning the paper on gendered names and hurricanes, the only potentially interesting aspect is the psychological bias by which female names would induce more benign, and hence more risky, attitudes than male ones. A researcher interested in such a phenomenon, and possibly its consequences, should have primarily explored the existence of such attitudes in decisions of the same class as those concerning hurricanes. Having gathered sufficiently strong evidence about this mechanism, a second mechanism would also need to be at work, namely that people’s decisions affect hurricane damage. Only if both explanatory mechanisms are sufficiently supported does the econometric exercise of correlating gendered names with recorded damage make methodological sense. Otherwise, it is just an exercise in data mining with no scientific interest.

Having set aside the cases that misuse econometric techniques, there remains ample room for applying statistical analysis in economic studies. Plenty of empirical applications, theoretical studies and policy-oriented research require, or may be greatly improved by, working on data. The condition for a sound methodological approach is that researchers spell out clearly the goal of their research, and this is best done by stating, at least implicitly, the explanation they are considering. This may concern pure, curiosity-driven research, or be related to a specific policy or strategic goal pursued by an authority or by a firm. In any case, the appropriate procedure follows a path: first, specify the goal aimed at (the conjectured explanation); from this follows the kind of data required; and, finally, the results meant to confirm or reject the explanation considered. When this is done, as many econometric works do, albeit in various formats, we obtain empirical evidence of statistical connections, typically some variation of conditional correlation. Such a result, which may in itself be sufficiently complex to be worth a research project on its own, is still an incomplete piece of research, to be finalised by further analysis.
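To make this workflow concrete, the following minimal sketch (all variable names, data and coefficients are invented for the illustration and are not drawn from the paper) goes through the path just described: a conjectured explanation, synthetic data of the kind it calls for, and the conditional correlation meant to confirm or reject it.

```python
# Hypothetical sketch of the workflow: conjectured explanation -> data -> test.
# All names and numbers are illustrative assumptions, not results from the paper.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500

# Conjectured explanation: "treatment" raises "outcome" once a known
# "confounder" is conditioned on. The synthetic data below stand in for the
# measurements such a conjecture would require.
confounder = rng.normal(size=n)
treatment = 0.5 * confounder + rng.normal(size=n)
outcome = 0.8 * treatment + 1.2 * confounder + rng.normal(size=n)
data = pd.DataFrame({"outcome": outcome,
                     "treatment": treatment,
                     "confounder": confounder})

# The "variation of conditional correlation" mentioned in the text: the partial
# association of treatment with outcome, holding the confounder fixed.
fit = smf.ols("outcome ~ treatment + confounder", data=data).fit()
print(fit.params["treatment"], fit.pvalues["treatment"])
```

Even a precise estimate of this kind remains, in the terms used above, evidence of a statistical connection rather than an explanation.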

While, famously, correlation is not causation, finding a correlation is a promising first step in establishing a nexus. However, this early stage of research necessarily requires a second step: identifying, testing and finding support for the actual mechanism underlying the connection highlighted by the data. Such an explanatory mechanism cannot be left to vague suggestions or anecdotal evidence: on the contrary, one needs to support the causal mechanism with at least as much evidence as was found for the statistical result. This step is required because quantitative measures of economic phenomena, being proxies of underlying qualitative properties, cannot be trusted to be universally replicated. However, the forces underlying the level, variety and variation of those proxies can be identified and exploited to uncover the explanatory mechanisms (of more universal validity) behind the numerical evidence, which in economics is necessarily affected by idiosyncratic, specific events.
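A correspondingly minimal sketch of this second step, under the same invented setting as above, is to simulate the conjectured behavioural mechanism on its own and check whether it is even capable of producing an association of the sign and rough size found in the data; independent evidence on the behaviour itself would of course still be required.

```python
# Illustrative check of a conjectured mechanism (all assumptions invented):
# simulate the micro-level behaviour the explanation appeals to and compare
# the association it implies with the one estimated from the data.
import numpy as np

rng = np.random.default_rng(1)

def mechanism_implied_correlation(n_agents=1000, behavioural_effect=0.8):
    """Toy generative mechanism: each agent's response depends on 'treatment';
    returns the treatment-response correlation the mechanism implies."""
    treatment = rng.normal(size=n_agents)
    response = behavioural_effect * treatment + rng.normal(size=n_agents)
    return np.corrcoef(treatment, response)[0, 1]

implied = np.mean([mechanism_implied_correlation() for _ in range(200)])
print(f"association implied by the conjectured mechanism: {implied:.2f}")
# Only if this implied association, together with independent evidence on the
# behaviour itself, is compatible with the econometric estimate does the
# statistical finding acquire explanatory status.
```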

In conclusion, econometric analysis is a powerful tool able to mine rich (but not conclusive) evidence on what happened in a given system. Researchers, however, need to complete the existential analysis with an explanatory component, supporting their findings with evidence on the generative mechanism of the quantitative results observed. The difference from purely quantitative phenomena is that explanatory mechanisms cannot be mere mathematical relations, since economic variables are proxies for non-quantitative events generated by human behaviour.

4.4 Agent-based modeling

Some of the same forces that increased the diffusion of econometric analysis (computing power, improved interfaces and techniques) have also contributed to the diffusion of agent-based simulation modeling, ABM (see, e.g., Axelrod and Tesfatsion, 2006). In this section we discuss the methodological indications that our proposal implies for the use of ABM as a research tool.

ABMs are computer programs designed to simulate interactions among virtual actors. Increasingly used in many social sciences besides economics, ABMs are attractive because they can potentially avoid the constraints normally needed to “close” a model: they provide results not only in the presence of external constraints or intrinsic properties of the system represented, but under the sole condition of the logical consistency of the code implementing the model. Along with this advantage, however, there are difficulties in their use that make the results provided by ABMs controversial among many scholars. The two major criticisms concern the difficulty of understanding the actual content of ABMs, and the problems due to the large number of parameters potentially affecting the results.

The major asset of ABMs as a research tool is that they generate the real-time dynamics of the system represented, providing in essence a virtual history that may be compared to actual series from real-world data. This means that ABMs are particularly useful when the interest of a research project is not only in static conditions, such as equilibrium values, but also (or primarily) in the patterns of the system, whether moving towards a stable condition or subject to constant variability. Consequently, ABMs are perfectly suited to investigating the intermediate steps between two states of the simulated system. Moreover, being computer programs, they are also well suited to systematic analysis of large data sets, such as those produced by large models or obtained by multiple repetitions of the same model.
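As an illustration of what such a virtual history looks like in practice, the following sketch describes an invented toy model (not one proposed in the paper): agents hold heterogeneous, adaptively revised expectations, and the model’s output is a full price series whose transient patterns, and not only its long-run level, can be compared with empirical data.

```python
# Toy ABM (illustrative assumptions only): agents hold heterogeneous
# expectations, demand depends on the gap between expectation and price, and
# the price reacts to aggregate excess demand. The run yields a full series.
import numpy as np

rng = np.random.default_rng(42)

N_AGENTS, T = 100, 200
expectation = rng.normal(loc=100.0, scale=5.0, size=N_AGENTS)  # initial beliefs
price = 100.0
price_history = []

for t in range(T):
    # Each agent demands more (less) the more it expects the price to rise (fall).
    excess_demand = np.tanh((expectation - price) / 10.0).mean()
    # The price moves with aggregate excess demand plus a small random shock.
    price += 0.5 * excess_demand + rng.normal(scale=0.2)
    # Agents adaptively revise their expectations towards the realised price.
    expectation += 0.1 * (price - expectation) + rng.normal(scale=0.5, size=N_AGENTS)
    price_history.append(price)

# price_history is the "virtual history": a step-by-step series that can be
# compared with an empirical series, including its out-of-equilibrium patterns.
```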

Though ABMs can obviously include mathematical expressions, generating, for example, the outcome of a set of linear equations, they can also contain non-linear expressions, as well as events that are hard to treat in standard mathematical terms, such as the creation of new entities, random events, elaborate logical conditions, etc.

Methodologically, ABMs are mostly presented as virtual replicas of real-world systems, used to support the claim that a given set of hypotheses (those implemented in the model’s code and initialization) is capable of generating certain empirically observed conditions. In these cases it is necessary to support the claim by stressing the similarity of the results (typically the final data of the simulation, but possibly also intermediate values) with real-world data (Windrum et al., 2007).

However, there is also an additional use of ABMs, focused not only (or primarily) on the sufficiency of the model to produce the observed data, but on its doing so by specific means. In fact, many economic phenomena, such as economic growth and cycles, can easily be generated by a large variety of formal structures, purely mathematical as well as computational. Thus, the capacity merely to generate a vague aggregate property is not sufficient to establish the adequacy of the model to represent the real world, in particular when it is expected to provide information on events lacking observations, such as predictions or cases concerning different systems for which data are unavailable. In these cases it becomes relevant for the model to be compatible with additional evidence besides replicating the overall data. This evidence needs to support the claim not only that the model produces outcomes resembling the overall properties of the system, but also that the intermediate steps (the explanatory chain of events) are compatible with as much evidence as is available about the process.

In conclusion, ABMs need to be used according to three steps. First, the model is built upon a set of assumptions about the elements relevant to the phenomenon. Second, the model is run to generate data series convincingly supporting the similarity between the simulated and empirical evidence concerning the overall phenomenon under investigation. Third, and crucially, the simulation runs need to be analysed to identify how the computational structure generated the results. The more unexpected the relation between model content and results, the more interesting it is. But for such interest to become scientific knowledge, it is necessary to clarify, beyond any doubt, the simulated chain of events leading to the phenomenon of interest. This step can be assessed objectively, once the code is known and the simulation data are analysed in detail. In addition, empirical support must be sought at least for the most controversial mechanisms, reassuring us that real-world cases follow a pattern similar to that produced by the simulated model. Following this approach it is possible to derive a number of consequences, potentially controversial yet highly relevant, which we briefly discuss below.
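A sketch of the third step, using the same invented toy model as above, consists in logging intermediate variables during the run so that the simulated chain of events can be inspected rather than merely asserted; here, for example, one checks whether large price movements are systematically preceded by a rise in the dispersion of beliefs.

```python
# Illustrative analysis of a simulation run: record intermediate variables and
# then inspect the generative chain inside the model (invented example).
import numpy as np

rng = np.random.default_rng(7)
N_AGENTS, T = 100, 500
expectation = rng.normal(100.0, 5.0, N_AGENTS)
price = 100.0
prices, dispersions = [], []

for t in range(T):
    excess_demand = np.tanh((expectation - price) / 10.0).mean()
    price += 0.5 * excess_demand + rng.normal(scale=0.2)
    expectation += 0.1 * (price - expectation) + rng.normal(scale=0.5, size=N_AGENTS)
    prices.append(price)
    dispersions.append(expectation.std())

# Trace the candidate mechanism: are large price changes preceded by periods of
# high belief dispersion? A sizeable lagged correlation is internal evidence of
# how the computational structure generated the aggregate result.
price_changes = np.abs(np.diff(prices))
lagged_dispersion = np.array(dispersions[:-1])
print(np.corrcoef(lagged_dispersion, price_changes)[0, 1])
```

Empirical support for the same mechanism would then have to be sought outside the model, as argued above.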

ABMs are a tool perfectly suited to the study of emergent properties (Lane, 1993), because they make it possible to generate a high-level, dynamic phenomenon on the basis of the interactions among lower-level components. Yet reproducing emergent properties is not sufficient: one has to show how this result has been reached in order to provide a full contribution to the discipline.

ABMs are frequently criticised because they typically include a huge number of parameters, most of which cannot be set on the basis of empirical observations, for lack of data. Thus, critics contend that the results cannot be trusted, since a different set of parameter values may generate different results. This criticism is correct as long as the modeller’s claims concern universal properties of the model, expected to be replicated independently of the potential space of configurations. Though such claims may be made, one may also aim at far less generic results, generated only within a tiny portion of all the potential configurations of the model. Such results would state, for example, that “when phenomenon X emerges, it can be triggered by mechanism Y ”. To be of interest, there is no need for X to present itself under a vast number of conditions. Rather, it is the chain of events Y linking the model content to X that matters. Requiring, on the contrary, that only universal statements can be accepted in economics implies restricting oneself to those properties valid for any economic system that has ever existed, or that could potentially have been brought into existence. Obviously, even if such statements did exist, they would be so vague and so few as to be of little use for any practical matter. Such an absurd requirement would be like asking biology or the medical disciplines to restrict their analysis to statements valid for any actual or potential living form. Many economic phenomena, like many cases in biology and other disciplines, are interesting not because they are ubiquitous, but because they are rare. When targeting rare events, the purpose of research is to explain the event in as much detail as possible, and for this purpose ABMs are a very powerful tool, provided they are used in the appropriate way.
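The kind of conditional claim discussed here can be made operational with a small parameter sweep, sketched below with invented parameter names, ranges and thresholds: only the configurations in which the phenomenon X appears are retained, and the question becomes whether mechanism Y is active in those runs.

```python
# Illustrative parameter sweep for a claim of the form
# "when phenomenon X emerges, it can be triggered by mechanism Y".
# Parameter names, ranges and the threshold for X are invented for the example.
import itertools
import numpy as np

rng = np.random.default_rng(3)

def run_model(feedback, noise, n_agents=100, n_steps=300):
    """Toy model as above; returns the volatility of the simulated price series."""
    expectation = rng.normal(100.0, 5.0, n_agents)
    price, series = 100.0, []
    for _ in range(n_steps):
        excess_demand = np.tanh((expectation - price) / 10.0).mean()
        price += feedback * excess_demand + rng.normal(scale=noise)
        expectation += 0.1 * (price - expectation) + rng.normal(scale=0.5, size=n_agents)
        series.append(price)
    return float(np.std(np.diff(series)))

runs = [{"feedback": f, "noise": s, "volatility": run_model(f, s)}
        for f, s in itertools.product([0.1, 0.5, 2.0], [0.1, 0.5])]

# "X": excess volatility above an (arbitrary) threshold; "Y": strong feedback.
x_runs = [r for r in runs if r["volatility"] > 0.8]
print("configurations where X emerges:", x_runs)
# The claim of interest is not that X appears under all configurations, but
# that, where it does, the runs share the mechanism Y -- which the sweep makes
# checkable run by run.
```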

Similar considerations can be made to reject criticisms of the incompleteness of simulation models, based on the asserted absence from the model of some real-world component. Models should not aim at being realistic copies of the world, but at answering questions. Thus, any answer is necessarily based on an implicit ceteris paribus clause implying that the results may differ when conditions change. Being able to reduce the model to unrealistically few components and yet show that it still produces certain results is an advantage of the model, not a limitation. Adding more and more elements may affect the results, but a claim of sufficiency is more powerful the fewer elements are included. Models meant for scientific analysis are not meant to be faithful descriptions, but to be useful in explaining well-specified phenomena.

5 Conclusions

This work is based on a simple claim: the goal of science is not to describe the world accurately, but to explain it, with descriptions being required elements of explanations whose appropriateness varies depending on the research question considered. This claim clashes with the tendency in many social sciences, such as economics, to pursue detailed and (it is claimed) objective quantitative descriptions of real-world systems as a necessary preliminary step in science. This approach, we maintain, leads to potentially erroneous results and endless inconclusive disputes. On the contrary, setting the generation of good explanations as the goal of science is a guiding principle providing very useful indications for designing, planning, generating and assessing scientific knowledge.

The paper aims to show that the shift from describing to explaining does not imply radical methodological innovations in the social sciences with respect to “hard” disciplines. Indeed, we maintain that there is no fundamental methodological difference between scientific disciplines, which differ only in the relative importance of certain steps of the assessment process, common to any kind of science. Such differences, where they apply, are determined not by intrinsic features of the disciplines but by the nature of the phenomena under analysis, and, therefore, different approaches need to be used within the same discipline when facing different kinds of phenomena, which we formally divide into two classes.

Making explicit the purpose of providing explanations as the ultimate goal of science provides useful guidance on methodological issues concerning the social sciences. We provide a formal definition of “explanation”, leading to several interesting consequences. We determine possible sources of scientific knowledge, defined as collections of explanations. Moreover, we can categorize the elementary components of explanations, and therefore of knowledge.

Moving from generic to scientific knowledge is a matter of reliability, with scientific knowledge undergoing a process of assessment. Leveraging the definition of knowledge as explanations, we discuss how assessment needs to be carried out, separating the cases in which objective conclusions can be drawn from those in which a subjective component cannot be removed.

The discussion of assessment identifies, in turn, which stage is more relevant in different cases. We show that different classes of phenomena rely in different ways on different elements of the assessment procedure. Leveraging this observation, we classify real-world phenomena into two classes: those fully defined by a fixed set of quantitative variables (stressing the stage of validation in the assessment procedure), and those formed by non-quantitative features, for which data can only be collected as proxies and which are composed of varying sets of (proxied) measures. In the latter case verification is more likely to play a crucial role.

The indications produced in the first sections of the paper allow us to draw a few conclusions in relation to economics. Namely, we discuss the implications of considering knowledge as formed by explanations for four topics. First, how a research project should be designed, evaluated, carried out, and, finally, assessed. Second, we draw the implications for the debate between orthodox economists and those criticising the mainstream approach. Third, we discuss methodological issues concerning econometrics and, finally, agent-based modeling.


References

Arthur, B. (1989), “Competing Technologies, Increasing Returns, and Lock-in by Historical Events”, Economic Journal, 99, pp. 116–131.

Arthur, B. (1994), Increasing Returns and Path Dependence in the Economy, University of Michigan Press.

Axelrod, R. and Tesfatsion, L. (2006), A Guide for Newcomers to Agent-Based Modeling in the Social Sciences, Handbooks in Economics Series, Elsevier/North-Holland, Amsterdam, the Netherlands.

Balletti, S., Maas, M. and Helbing, D. (2015), “On Disciplinary Fragmentation and Scientific Progress”, PLoS ONE.

Buchanan, M. (2014), Forecast: What Physics, Meteorology, and the Natural Sciences Can Teach Us About Economics, Bloomsbury USA.

Cameron, W. B. (1963), Informal sociology: a casual introduction to sociological thinking, Random House, New York.

Fontana, W. and Buss, L. W. (1996), “The Barrier of Objects: from Dynamical Systems to Bounded Organizations”, in J. Casti and A. Karlqvist, eds., “Boundaries and Barriers”, Addison-Wesley, Massachusetts.

Friedman, M. (1953), Essays in Positive Economics, University of Chicago Press.

Hawkins, J. and Blakeslee, S. (2005), On Intelligence, St. Martin’s Griffin.

Hempel, C. G. (1965), Aspects of Scientific Explanation and Other Essays in the Philosophy of Science, Free Press.

Hodgson, G. H. (2007), “Meanings of methodological individualism”, Journal of Economic Methodology, 14(2), pp. 211–226.

Korzybski, A. (1931), A Non-Aristotelian System and its Necessity for Rigour in Mathematics and Physics, Proceedings of the American Association for the Advancement of Science.

Lane, D. (1993), “Artificial Worlds and Economics. Parts 1 and 2”, Journal of Evolutionary Economics, 3, pp. 89–108, 177–197.

Leal, S., Napoletano, M., Roventini, A. and Fagiolo, G. (2014), “Rock around the Clock: An Agent-Based Model of Low- and High-Frequency Trading”, arXiv e-prints.

Lucas, R. (2009), “In defence of the dismal science”, The Economist.

Marengo, L., Pasquali, C., Valente, M. and Dosi, G. (2012), “Appropriability, Patents, and Rates of Innovation in Complex Products Industries”, Economics of Innovation and New Technology, 21(8), pp. 753–773.

McCloskey, D. and Ziliak, S. (1996), “The Standard Error of Regression”, Journal of Economic Literature, 34, pp. 97–114.


Nelson, R. R. and Winter, S. G. (1982), An Evolutionary Theory of Economic Change, Belknap Press, Cambridge, Mass. and London.

Paul, D. (1985), “Clio and the Economics of QWERTY”, American Economic Review, 75(2), pp. 332–337.

Reiss, J. (2012), “The explanation paradox”, Journal of Economic Methodology, 19(1), pp. 43–62.

Salmon, W. (1989), Four Decades of Scientific Explanation, University of Minnesota Press.

Simon, H. A. (1952), “On the Definition of the Causal Relation”, The Journal of Philosophy, 49(16), pp. 517–528.

Tetlock, P. and Gardner, D. (2014), Superforecasting: The Art and Science of Prediction, Crown Publishers.

Windrum, P., Fagiolo, G. and Moneta, A. (2007), “A Critical Guide to Empirical Validation of Agent-Based Models in Economics: Methodologies, Procedures, and Open Problems”, Computational Economics, 30(3), pp. 195–226.

Yoshimoto, S., Loo, T., Atarashi, K., Kanda, H., Sato, S., Oyadomari, S., Iwakura, Y., Oshima, K., Morita, H., Hattori, M., Honda, K., Ishikawa, Y., Hara, E. and Ohtani, N. (2013), “Obesity-induced gut microbial metabolite promotes liver cancer through senescence secretome”, Nature, 499(5), pp. 97–101.
