The Development of the Concept of Artificial Intelligence: Historical Overviews and Milestones


INTRODUCTION

The Development of the Concept of Artificial Intelligence: Historical Overviews and Milestones

Ronald Chrisley

1 Overview

It is fitting to place Newell's article at the beginning of this set, given the large influence it exerted on the conception of this reference work. It is no coincidence that the kind of enquiry in which Newell was engaged is a paradigmatic example of the kind of activity for which this set is meant to provide resources. His paper is an attempt to chart the history of the field of AI, and the organisation which he prefers - not by theories, methodologies, paradigms, tasks, nor cognitive functions, but instead by examining the intellectual issues involved - has substantial overlap with the approach taken in compiling this set. However, given that his interest is the field of AI, several of the issues he mentions are not so much conceptual as they are technological. That is, their impact on the concept of artificial intelligence is indirect, mediated by an implicit premiss: that intelligence is constituted by whatever techniques turn out to be necessary for workers in the field of artificial intelligence to synthesise it. Despite his intellectual ambitions, it must be noted that Newell reveals an engineering bias when he deems the philosophy of mind to be a peripheral matter, and claims that it is technological advances alone, and not conceptual developments, which allowed "serious" artificial intelligence to be done.

Understanding Gardner's approach is assisted by keeping in mind that this account was intended to situate the field of AI within a larger picture of cognitive science, "the mind's new science". While in some respects insightful, the account is nevertheless often derivative, relying heavily on McCorduck's Machines Who Think. Yet this, combined with the fact that it has been widely read, in fact makes it an important inclusion in the set in that it provides a concise summary of what has come to be the received history of artificial intelligence within the fields of AI and cognitive science themselves. Thus, emphasis is placed on Dartmouth, but that meeting is viewed as the culmination of an intellectual strand beginning in the seventeenth century with Descartes and Vaucanson, winding through Babbage, Boole, Whitehead & Russell, Shannon, Turing, McCulloch and Pitts, Wiener and von Neumann. The emphasis here is on the development and formalisation of logic on the one hand, and machines which can mechanise these logical norms and transitions on the other. The contribution of the Dartmouth participants is then, broadly, the joint application of these two ideas to the task of creating an intelligent machine.

As is common to many of the accounts, not only of the 1950s and 1960s in particular but of the entire history of artificial intelligence in general, a contrast is made between two approaches, which differ mainly in the extent to which they are guided by considerations of neurophysiological verisimilitude. Gardner opposes one pair with another: Newell and Simon vs. McCulloch and Pitts (contrast Boden's assessment of the latter pair, as mentioned below in the discussion on their article "A Logical Calculus of the Ideas Immanent in Nervous Activity"). The difference is the level at which similarity to human intelligence should be pursued: is it at a low, structural level, thus requiring detailed understanding of the brain; or is it at a higher, behavioural/functional level, which may be implemented in machines in a way quite different from how it is in the brain? From what Gardner says, one might be tempted to define the field of AI (as opposed to other fields trying to achieve artificial intelligence, such as cybernetics) as those researchers who took the latter approach. But the fact is that there were/are some cyberneticists who, like Newell and Simon, favoured a more abstract approach, sometimes on a level higher even than the Dartmouth group and their descendants.

The truth is that it is difficult to state a conceptual distinction between the two groups unless one looks ahead to the two conceptual developments in the field of AI which Gardner documents. After detailing some programs that were considered major advances in the field, and before concluding with discussions of the four early critics of AI (Weizenbaum, Dreyfus, Lighthill and Searle), he highlights (1) a move from general to more domain-specific systems; and (2) a move from procedural to declarative knowledge. Both of these changes made what was previously an implicit assumption into a central explicit doctrine: that intelligence is primarily a matter of knowledge. It is this focus on knowledge which is taken to be characteristic of the orthodoxy in the field of AI. Note that move (2) prevents a trivialisation of the claim that intelligence is knowledge. Almost anyone could agree with that claim if they were allowed to expand arbitrarily the extension of "knowledge"; but the focus on declarative knowledge renders those parts of life which can only be understood in terms of "knowing how" as peripheral to intelligence and thus AI.


The first sentence of Glymour, Ford and Hayes' overview, which equates artificial intelligence with android epistemology, makes it clear that they share the epistemic view of intelligence. This view suggests that the historical development of our understanding of the objects of knowledge is foundational for the development of the concept of artificial intelligence. Accordingly, Glymour et al. start with the logic, epistemology and metaphysics of Plato and Aristotle, and follow a thread of logic and its mechanisation through Lull, Pascal, Leibniz, and Bacon. A brief hiccup comes with Descartes' dualism. Though foreshadowing the hardware/software distinction, it nevertheless defies the possibility of the reduction of the mental to the physical. The logico-mechanical line continues to develop, however, via Hobbes, Kant, Boole, Frege and Russell. From Frege on, the intellectual dependence is consolidated into a genealogy where the teacher/father passes the torch on to the student/son, who in turn becomes a teacher: Frege begat Carnap, Carnap begat Simon and Pitts and Hempel, Hempel begat Massey begat Buchanan (begat Mycin?). Like Gardner, Glymour et al. make a neurophysiological/non-neurophysiological distinction between two approaches to artificial intelligence, "connectionist vs. symbolic". They trace the origin of what they take to be the central connectionist view (Hebbian learning at a synapse) back to nineteenth-century psychologists such as Brücke and especially Freud, and leave the story there.

Mazlish is not just interested in the development of the concept of artificial intelligence, nor just the development of automata, but mainly in the effect that these have had on humanity's conception of itself. The paper of his which has been included here was adapted from chapter three of his book The Fourth Discontinuity for inclusion in a special edition of The Stanford Humanities Review entitled Constructions of the Mind: Artificial Intelligence and the Humanities (which also contains the article from Agre and many other articles that would have made excellent inclusions in this set). Mazlish's claim is that artificial intelligence has made humanity rethink its place in the cosmos in as profound a way as did the rejection of a geo-centric astronomy and the adoption of an evolutionary biology. The article included here documents an aspect of the development of the concept of artificial intelligence that is otherwise intentionally neglected in this set: the concept's role in myth, fiction and the arts. That said, there are some notable omissions in his story. In focusing on automata, he neglects the roots of the concept of artificial intelligence in the Judeo-Christian scriptures and ancient myth. In addition to his observations concerning golems, one can go further to note that the move from "emeth" to "meth" is a case of symbol manipulation par excellence, with the role of "truth" (the meaning of "emeth") being crucial to the operation of the golem: an idea which was to remain a central aspect of at least logic-based artificial intelligence work in modern times. The significance of the golem is further heightened when one learns that more than one famous pioneer in the field of AI claimed to be heirs to the Rabbi Loew tradition. Lastly, Fritz Lang's film Metropolis, which features the famous robot Maria, is notably absent from Mazlish's comparative discussion.

2 Focus

The earliest scientific attempts at creating artificial intelligence focused on the concept of autonomy. Cohen locates the beginning of the epoch of modern automata with Descartes, while simultaneously linking Descartes' ideas with those that came before (e.g. Thales, Galen) and after (e.g. Leibniz, La Mettrie, Freud). If the rumours are true, then Descartes was one of that very rare company, a philosopher of artificial intelligence who actually tried to build an intelligent artefact. Nonetheless, it is his thinking, rather than his Francine, which has had monumental impact on the concept and practice of artificial intelligence. Despite being famous for embracing a dualist view of reality which places the mind outside of the physical sphere, his otherwise mechanistic view of nature played a crucial role in expanding the possibilities for artificial intelligence. This expansion began when La Mettrie extended Descartes' mechanistic view of animals to humanity itself. Cohen points out that what was important here was the fact that La Mettrie was "the first to state the problem of mind in terms of physics", and not La Mettrie's proof itself, which was merely "a statement of empirical correlation between physiological and psychical events". Crucial here, too, is mention of Vartanian's analysis of L'Homme-Machine, which locates the chief limitation of La Mettrie's model in its "insensitivity to duration". Vartanian's assertion that temporality is, by contrast, essential to human experience will be echoed (almost certainly unknowingly) centuries later in the dynamicist challenge to the symbolic approach to artificial intelligence (Volume III, Part II).

Cohen next focuses on contemporary critiques of Descartes, such as Cyrano de Bergerac's Estats et empires de la lune, which is striking in its similarity to modern thought experiments in this area, which often make use of a "Martian" perspective or some such. Giles Morfouace de Beaumont is mentioned for his attack on dualism and his defence of animals, asking "who could endow a machine with the sensitive soul of a beast or with power of reproduction?" His contemporary, Father G. H. Bougeant, argued that animals have language and can understand each other well, even if we cannot understand them. Spinoza (satirically) wrote that the golem "has as much life as any other human being, if one accepts the new viewpoint that the relation between body and mind is so loose that it can in a moment be lifted and replaced". This could be seen as a precursor to Wittgenstein's behaviourist/anti-dualist arguments. England's Lord Chief Justice Sir Matthew Hale was a severe critic of Cartesian reductionism, which he claimed "renders living creatures merely mechanisms or 'Artificial Engins'". Coleridge's attack on Descartes was unknowingly ironic: Coleridge argued that Descartes' dualism made as much sense as saying that one's feelings could be put in spatial relation to one another, and yet Descartes explicitly claimed that the mind was not spatial. But these last two make it clear how Descartes' literal limitation of the mechanical view to the case of animals has often been set aside, with the effect that at times Descartes was seen as a threat to dualism, not an adherent of it.

Cohen concludes with a discussion of what he takes to be Descartes' most important contribution: "his great faith in the mathematical method". Descartes' influence on science is put in comparison to Hobbes, eight years his senior, and Leibniz. Their faith in mathematics led to the first "robot-mathematician" by Pascal, and had Leibniz, having read about Lull's Ars Magna, "dream of a machine that could calculate so well as to be able to derive a complete mathematical system of the universe". Cohen notes Leibniz' view that the art of thinking would come to perfection through an analysis of games, a view which continues to play a crucial role in the field of artificial intelligence.

What Descartes says at the end of Part Five of the Discourse on the Method is striking in its relevance to the development of the concept of artificial intelligence. He considers the question of how we could tell from the behaviour of an automaton that it is not a real man (and therefore not really intelligent). Descartes offers two behavioural limitations of automata: (1) an automaton would be unable to produce different arrangements of words that are meaningful and appropriate; and (2) an automaton would only be able to imitate our behaviour in a fixed number of ways, since our reason is a "universal instrument" while an automaton must have a different organ for each purpose. Although Descartes relied on the implication of the form "failure to match human behaviour means an automaton is not intelligent" (lacks reason), it is not unfair to suppose that the same reasoning would have led him to assent to the converse: "if something is behaviourally equivalent to a human, it must possess reason". Although Descartes was not restricting his attention to linguistic behaviour, his emphasis on language as being a stumbling block for mechanised reason makes it reasonable to see the Turing Test, which does restrict its attention to linguistic behaviour, as a descendant of Descartes' thinking. For if the only obstacle to producing a reasoning automaton is its inability to use language flexibly, meaningfully and appropriately, then surely creating a machine which passes the Turing Test is sufficient for creating an artefact which truly thinks. Of course, the linguistic limitation was not the only obstacle Descartes mentioned: there was also the matter of universality to be overcome.
Thus, Descartes' experience with automata anticipates the more modern experience of interacting with Weizenbaum's Eliza program: initial fascination, but eventual disillusionment as it is realised that Eliza can only cope with a limited number of situations, pre-anticipated by her programmer. As with the linguistic case, this does not seem to be limited reasoning, but no reasoning at all. But this issue, too, was explicitly addressed by Turing in his results concerning universal machines (namely, that there exist machines which can be programmed to simulate any other machine) and the Church-Turing thesis (that for any effective procedure there is a Turing machine which can compute it). The upshot of Turing's work here is: Descartes was wrong; there are mechanisms which can act as "universal instruments". Although these foundational notions in artificial intelligence contradict Descartes' conclusions, they give his work a fundamental role by accepting his framing of the issues - in particular, the idea that the essence of reason is seen only in diverse linguistic behaviour. This assumption, and the questioning of it, is a dichotomy which would go on to generate many of the branches in the concept of, and work in, artificial intelligence.

A tantalising reference to a lost section of his Treatise on Man hints at another point of connection between Descartes' thinking and artificial intelligence. It might be thought that dualism would be a hindrance to artificial intelligence, since after the engineering work in the physical realm is completed, one would still have to bring about the mental substance that is necessary for true mentality and reason. But here Descartes speaks precisely of the creation of the soul, over and above the arrangement of matter. The fact that Descartes doubtlessly has divine creative powers in mind notwithstanding, his remark leads us to question: what is it about the dualistic picture that supposedly prevents artifice? Cannot the mental realm also be manipulated? If there is causal interaction between the physical and mental realms, as Descartes maintained, then why should there be any obstacle to creating and shaping the mental substance in a manner necessary and sufficient for reason? In this respect, Descartes' comment that reason "cannot be in any way derived from the potentiality of matter" seems misplaced, if understood in an unrestricted form. The happiest reconciliation of his views may be this: while material configurations cannot constitute reason, they can give rise to it. Fortunately, all that is required of the material-trafficking artificial intelligence practitioner is the latter.

Of final interest is Descartes' use of the analogy of the helmsman in the ship, the very analogy which Wiener would use when coining the term "cybernetics" some three hundred years later. At various times, including the present, it has been fashionable to label traditional, symbolic artificial intelligence "Cartesian", and therefore reject it in favour of a more cybernetic view, one which sees control theory and dynamics rather than computation as crucial for understanding intelligence and cognition. But notice here that Descartes is rejecting the cybernetic analogy, not in favour of a more detached, symbolic, dualistic, representational picture, but rather in favour of a view that sees mind and body even more tightly coupled than a helmsman is with his ship. To be fair, Descartes does restrict his remarks here to "movement, feelings and appetites", leaving open the possibility that reason or intelligence proper is, by contrast, of a detached nature. Despite the popularity of this interpretation, it is not one explicitly supported in this passage.

One can distinguish two ways in which one might help bring about artificial mentality. The first involves primarily engineering breakthroughs that result in mechanisms which are more mind-like. The second, more conceptual way attempts to create mind-like artefacts not so much by making machines more like minds, but by rendering minds to be more easily understood as machines. Hobbes makes this latter kind of contribution to artificial intelligence. The focus of interest of Western scientific culture had been reduced from mentality in general to the particular case of reason centuries, perhaps millennia, before Hobbes. Already in Plato we find a distinction between logos (reason) and dianoia (perceiving, contemplating, realising or recognising), and Aristotle distinguishes noesis, the process of understanding, from nous (related to the Greek word for seeing), an abstract ability to think (cf. Vroon 1980, chapter 2). But Hobbes further assisted the enterprise of artificial intelligence by in turn reducing reasoning to something more mechanically, indeed computationally, tractable: "Reason ... is nothing but reckoning". This view of reason as calculation has in recent times been frequently criticised as being too idealised, too "perfect" to apply to human cognition (although it is interesting to note that most critics have retained Hobbes' equation of reasoning and computation, and have instead questioned the focus on pure reason). The inference usually drawn is that artefacts built on the foundation of Hobbesian reason will either be so different from the familiar form of human intelligence that they will not be recognisable as intelligent, or (more frequently) that such machines will be too brittle to cope with the real, messy, unreasonable world.
But the target of such objections is not Hobbesian reason: Hobbes spends much of Chapter V making it clear how reason and the capacity for error ("absurdity") go hand in hand (thus anticipating similar, recent conclusions in the philosophy of cognitive science concerning the necessity, for true representational capacities, of being capable of misrepresenting (e.g. Fodor 1984)). So reason, although reckoning, is not merely applying inference rules to a database of strings which encode knowledge. Rather, it is a family of capacities which (in addition) compensate for the eight kinds of "absurdity" which Hobbes delineates. In this sense, Hobbes was much more radical than is typically acknowledged, since he was suggesting that all of reason (including the processes of concept-formation, error-detection and compensation, etc.), not just the syntactic move from premises to conclusion, can be understood as reckoning. Put another way, reasoning is but reckoning, but reckoning is more than blind syntactic symbol manipulation. Nevertheless, it is not this Hobbes, but Hobbes as usually interpreted, who had a monumental impact on the concept of artificial intelligence.

Hobbes is also of note for making an explicit connection between artificiality and Plato's individual/republic analogy. He opens Leviathan with:


"By art is created that great Leviathan, called a Commonwealth or State (in Latin, Civitas), which is but an artificial man". Here we have an intellectual precursor to an idea expressed in Turing's 1950 paper, and Minsky's "Society of Mind" (see the introductions to Volume II, Part I, sections 1 and 2 respectively).

Leibniz' Preface is included primarily because it is one of the locations of his famous calculemus (i.e. "let us calculate") line, which also appears in modified form, e.g. in his Dissertatio de Arte Combinatoria of 1666: if controversies were to arise, "there would be no more need of disputation between two philosophers than between two accountants. For it would suffice to take their pencils in their hands, and say to each other: Let us calculate." Although in the included passage the emphasis is on scientific rather than philosophical problems, the upshot is the same: the idea of reasoning and problem solving through mechanical computation. Although this idea is a vital ingredient for what is to come, on its own it envisions more a kind of prosthesis to enhance our own intelligence (see the inclusion from Ashby in Volume III, Part I) than an autonomous intelligent artefact. Note the qualification that immediately follows the slogan: "I still add: in so far as the reasoning allows on the given facts" (original emphasis). This concession was not to express itself in modern work as an acknowledgement of any fundamental limitation of the symbolic approach so much as it was to be manifested in two imperatives: to find an appropriate knowledge representation formalism; and later to ensure that enough knowledge was encoded in the system being constructed (cf. the inclusion from Lenat and Feigenbaum in Volume II, Part I).

There is much of interest in Leibniz' Monadology as well. The similarities between section 17 and Searle's Chinese Room thought experiment cannot be ignored. Whereas Descartes and Turing used external behaviour as evidence for the presence of mind, and Wittgenstein made such behaviour criterial for mentality, both Leibniz and Searle accept the possibility of behaviour which is intelligent-like, but move inside the purportedly intelligent system to find that, at least in cases of certain kinds of mechanism, there is, in fact, nobody home. Of course, there are important differences between Leibniz' mill and Searle's Chinese Room: Searle is not attacking the possibility of a machine having a mind (since he concedes, in a way Leibniz (predating La Mettrie) could not, that humans are machines), but only that certain kinds of mechanism - namely programs operating on symbols - are not sufficient for mentality. Furthermore, Leibniz uses external perception to see that there is nothing more than mechanism, and therefore no mentality, in the mill. Searle does not beg the question in this way - he does not appeal to external sense, but to introspection. Searle is not introspectively aware of any understanding of Chinese going on, so there is no understanding of Chinese going on. But it is interesting that Searle's conclusions to the effect that understanding is some sort of biological secretion fit in well with Leibniz' conclusion that mind (perception) must itself be a simple substance, rather than a compound of non-mental substances or mechanism.

More troubling are the difficulties that Leibniz' metaphysics, based on monads, gets him into, especially in sections 17, 18 and 64. Every simple substance constitutes/is constituted by perceptions, and the self-sufficiency of these simple substances makes them incorporeal automata. This is troubling for understanding the development of the concept of artificial intelligence, for we now see the notion of an automaton extended beyond the idea of physical mechanism to include "divine" or "natural" machines. It might seem a strange development indeed that forces a man of Leibniz' day to use "divine" and "natural" as synonyms: but it is the logical consequence of viewing nature as the product of divine art. Here, as in Genesis, the parallel made between man's artifice and God's artifice threatens an elimination of the natural/artificial distinction, thus trivialising the concept of artificial intelligence itself (see the General Introduction). However, even stranger is the reason that Leibniz gives for the inferiority of artificial (that is, man-made) automata: they are not sufficiently mechanical! Natural automata are machines which are made of smaller machines, which are in turn made of smaller machines, ad infinitum. It is notable that Leibniz does not conclude that man-made automata are therefore barred from being intelligent - the possibility is left open for them to be intelligent, even if inferior to us in that or other respects. Although we now believe that the mechanism of natural systems does bottom out in simple, non-mechanical substances, Leibniz' point can be rephrased to retain relevance: only automata which are hierarchical, exhibiting mechanical complexity on a variety of scales, will be fluid and adaptive enough to rival human/biological intelligence. But Leibniz' original objection to man-made automata was not this contingent: by declaring that natural automata were decomposable into machines ad infinitum, Leibniz was forever placing the sophistication of natural automata out of the reach of man's artifice, which must begin somewhere, with simple, non-mechanical substances. But Leibniz does not consider the possibility that man might start with what God has provided: simple biological machines, be they cells or chemicals. As long as each level of organisation built from these components is itself machinelike, then a man-made artefact could have the infinitistic mechanical structure Leibniz takes to be characteristic of natural automata. Would we withhold the label "man-made" from such devices simply because they got a head start from God's mechanical engineering? We do not require a carpenter to have created the wood she carves in order to say that her chair is man-made. So it seems that Leibniz' prohibition of sophisticated man-made automata turns out to be a contingent one after all.

Fryer and Marshall attempt to redress what they take to be an inaccuracy in the historical record: the labelling of Jacques de Vaucanson as a mere toy maker and entertainer. They persuasively argue that Vaucanson's aim was to understand behaviour by simulating (or even re-creating) it in automata. Their conclusions imply that Vaucanson may have been the first to pursue artificial intelligence not as an end in itself, but as a means to further understanding natural intelligence - an approach which would later be enshrined as a principal methodology of cognitive science.

Given what is said above concerning Cohen's paper, let alone what is said in Cohen's paper itself and in the historical overviews, nothing more needs to be said here concerning the selection from La Mettrie, except that the work is somewhat frustrating in the current context. A connection is never explored between man being a machine and the fact that machines are built by man, to yield the question: could an artificial man-machine be created by humans? That work was left to the many thinkers that were influenced by La Mettrie.

Until Babbage, the conceptual developments in artificial intelligence were far outstripping the technology necessary to attempt their realisation. L. F. Menabrea, originally a military engineering officer but later a general and Prime Minister of Italy, heard a description of the Analytical Engine by Babbage in 1840. He summarised Babbage's ideas in a paper, Babbage being too concerned with the development of his engines to publish any proper descriptions of them himself. Of interest to the concerns here is Menabrea's distinction between two aspects of intelligence: the mechanical, and the domain of reasoning or understanding - a conceptual move which, given the mechanistic view of calculation that Babbage presented to Menabrea, gives up on the Hobbesian ideal of reasoning as calculation and instead focuses on the residue of intellect which can be so understood. But it is not Menabrea's main text that is included here. Countess Ada Lovelace (daughter of Lord Byron, and namesake of the US Department of Defence programming language ADA) translated the paper and extensively annotated it, providing us with the best contemporary account - an account that even Babbage recognised to be clearer than his own. It is these notes (in fact, only Note A and part of Note G), rather than the description of the machine itself, that have most relevance to the development of the concept of artificial intelligence. In Note A, Lovelace makes two important conceptual points. First, she makes a distinction between an operator and that which is operated on; next she makes the point that the objects of operators need not be numerical, but anything which can be formalised:

The operating mechanism can even be thrown into action independently of any object to operate upon ... Again, it might act upon other things besides number, were objects found whose mutual fundamental relations could be expressed by those of the abstract science of operations, and which should be also susceptible of adaptations to the action of the operating notation and mechanism of the engine. (original emphasis)


Lovelace then offers music composition as an example of such a non-numerical case of computation.

Compare (Newell 1980: 137):

The mathematicians and engineers then [in the 1950s] responsible for computers insisted that computers only processed numbers - that the great thing was that instructions could be translated into numbers. On the contrary, we [Newell and Simon] argued, the great thing was that computers could take instructions and it was incidental, though useful, that they dealt with numbers. (original emphasis)

Together, these two points yield a distinction which would reappear in symbolic artificial intelligence in various forms: the program/data distinction, the heuristic/search method distinction, the knowledge/inference engine distinction (see the quote from Davis in the discussion of Crevier's paper, below), etc.

Lovelace follows Leibniz in thinking of the machine as a kind of intelligence prosthesis, but with a twist: the very same processes (and this could only be meant in a rather sophisticated sense of functional identity of processes) "pass through" our brains that go through the Analyser:

It were much to be desired, that when mathematical processes pass through the human brain instead of through the medium of inanimate mechanism, it were equally a necessity of things that the reasonings connected with operations should hold the same just place as a clear and well-defined branch of the subject of analysis, a fundamental but yet independent ingredient in the science, which they must do in studying the engine.

The suggestion is that because the Analytical Engine "cannot be confounded with other considerations", it will be able to reason more clearly and successfully than we do.

The section of Note G that is included is the source of the (in)famous "Lady Lovelace's Objection" (to use Turing's appellation) to artificial intelligence: that a computer could never think because it could never originate anything. Of course, as Hartree, Turing and some others admit, this is not what Lovelace actually said; she was speaking only of the Analytical Engine. But even this more modest (and surely correct) claim is in tension with what she says in Note A. There she takes great pains to say how much of an advance the Analytical Engine is over the Difference Engine, but in doing so seems to suggest that the Analytical Engine can do more than what is allowed by Note G:

... it is scarcely necessary to point out, for instance, that while the Difference Engine can merely tabulate, and is incapable of developing, the Analytical Engine can either tabulate or develope. (original emphasis)

We may say most aptly, that the Analytical Engine weaves algebraical patterns just as the Jacquard-loom weaves flowers and leaves. Here, it seems to us, resides much more of originality than the Difference Engine can be fairly entitled to claim. (original emphasis)

Further commentary on and context for Babbage is provided by Mazlish, who compares and contrasts him with his contemporaries: not only Huxley and Butler as stated in the title, but also lesser known figures. For example, in Carlyle we can perhaps see the first conscious acknowledgement of the "second way" of pursuing the goal of machines which are like humans: that of making humans more machinelike. It is usually thought that Babbage's contribution to artificial intelligence was clearly of the other, technological sort. But the case is not so clear-cut. That Babbage's work was a milestone in the development of computers is without question, and it is not in dispute that advances in computation led to drastic changes in the concept of artificial intelligence. But did Babbage's work have a more direct effect on the concept? Given Lady Lovelace's comments, it might be presumed that Babbage had no such pretensions, although it must be admitted that Mazlish notes the aspects of Babbage's work that suggest otherwise. Regardless of whether Babbage was explicitly concerned with making machines more like humans, it seems that he was also concerned with rendering humans, even God, as machinelike calculators. Furthermore, in Babbage we see a nexus in history: the culmination of historical strands such as the entertaining automata of the eighteenth century, a Leibnizian focus on games, and a Cartesian emphasis on mathematics, as well as the first instance of what was to become a defining aspect of work in artificial intelligence: funding by the military.

Simons documents the developments, initiated by Babbage, that would provide the technology essential for the modern, scientific approach to artificial intelligence: the electronic computer. He also discusses Alan Turing, whose importance to artificial intelligence is profound and multifaceted, tracing his ideas back to his influences: La Mettrie and Butler via Brewster, as well as Beutell and Hilbert. His discussion perhaps relies heavily on Hodges' biography of Turing, but it is none the worse for that.

Cordeschi looks for the roots of the background assumptions of modern artificial intelligence (what he calls the "culture of the artificial") in the pre-cybernetics period of 1930-1940. Of interest is a concession he makes early on, to the effect that if one sees artificial intelligence as primarily an engineering enterprise, then his analysis does not apply; instead, "artificial" just means "inorganic" in the pre-cybernetic and cybernetic periods, while it just means "computational" in artificial intelligence research after that - what Cordeschi calls "cognitive AI". Cordeschi makes his case by looking at the "robot approach" of Hull, which was, in essence, a synthetic approach to verifying behaviourism. But this was no simplistic, ideological behaviourism - Hull was willing to allow all manner of internal states, self-modification, and memory to play a role in his proposed machines. In this he was directly opposing McDougall, who claimed that it was the inability of machines to modify their behaviour on the basis of memory which established the non-mechanical nature of mind. Cordeschi looks at Hull's proposed circuits in some detail, following his work through to the distinction eventually made between merely purposive behaviour and the class of truly mental behaviours involving plans. It was this latter kind of behaviour which he hoped to explain with the concept of "ultra-automaticity": a response-anticipatory mechanism. Comparisons and contrasts are also made between Hull's methodology and the influential work of Craik. Cordeschi leaves us with "the fateful word" being uttered in 1943: surely he means the word "feedback", and chooses 1943 because it is the year in which Rosenblueth, Wiener and Bigelow published their seminal paper.

The work of Ross is another topic covered by Cordeschi, in the context of the refinements to Hull's methodology that also included the work of Walter and Ashby. Ross' designs, like those of the others just mentioned, are noteworthy in that they are examples of modern, scientific, but in some sense non-computational attempts to construct artificial intelligence. The first of the two papers from Ross merely describes such a machine; the second paper does this also, but in addition makes a point which would later become a key component of the concept of artificial intelligence: the idea that an intelligent machine need not resemble the mechanism underlying human or animal intelligence. This would be embellished later by Armer (Volume IV, Part II) and others in the flight analogy argument for artificial intelligence, and would also be reflected in the philosophical literature as the multiple-realisability of functional states.

Although most histories of the field of artificial intelligence emphasise the Dartmouth conference of 1956, there were several earlier meetings that played a critical role in the evolution of the concept of artificial intelligence, including the 1942 Cerebral Inhibition Meeting, the ten Macy Conferences held between 1946 and 1953, and the Hixon Symposium held in 1948. Edwards' contribution covers the highlights of these meetings with regard to artificial intelligence, including discussions of McCulloch, Pitts, Wiener, Rosenblueth, von Neumann, von Foerster, Shannon, and several others. However, his intention in this selection, as in the book from which it is taken, is not just to relay these events, but to establish the claim that the military backdrop of the research in this period is inseparable from the content of the research itself, that the concept of intelligence and the methodologies proposed for its synthesis were heavily influenced by the militaristic surround - economic and otherwise - of the researchers involved. This provides a counterpoint to the account, in (Dreyfus and Dreyfus 1988), of the rise of artificial intelligence in the 1950s and 1960s, an account which emphasises the (primarily US) military sponsorship of symbolic artificial intelligence over non-symbolic, neo-cybernetic approaches such as Rosenblatt's Perceptron work.

Discussed in the Edwards paper, and mentioned at the very end of Cordeschi's discussion, is the seminal 1943 paper by Rosenblueth, Wiener and Bigelow. This was the written version of a presentation by Rosenblueth which instigated the post-war series of conferences on cybernetics and from which the field of cybernetics eventually evolved. The paper had two goals. The first was to define behaviouristic study, which was done in a way that opposed such study to functionalism. The main contribution here was to provide a behavioural taxonomy, but even this was enough to have a lasting conceptual and methodological influence - for one thing, the taxonomy was so constructed as to be applicable to both organisms and machines. But one cybernetic generalisation that this taxonomy suggests - "all purposeful behaviour may be considered to require negative feedback" - is problematic. Feedback systems, a fortiori negative feedback systems, are only a subclass of purposive systems. The paper appears to transcend the traditional question of artificial intelligence by turning the tables and claiming that machines can do things that people cannot - until it is made clear that this is only meant in an uncontentious way, such as "having electrical output". The paper is more even-handed than much of the work, either supporting or oppositional, that followed; the authors allow for the possibility that one might later need different means of studying organisms and machines. It is noted that functional study reveals deep differences between organisms and machines: there are no wheels in organisms, they have a uniform distribution of energy, and they use spatial rather than temporal multiplication. The taxonomy offered would allow one to see the similarities.
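
The negative-feedback notion at the heart of the paper can be conveyed with a minimal sketch. The loop and its parameters below are illustrative inventions, not drawn from the 1943 paper: a purposive system repeatedly senses the discrepancy between its goal and its current state and acts so as to reduce it.

```python
def servo(target, position, gain=0.5, steps=20):
    """Goal-seeking via negative feedback: at each step the system senses
    its error (target - position) and applies a correction opposing it."""
    trajectory = [position]
    for _ in range(steps):
        error = target - position       # the sensed discrepancy
        position += gain * error        # act to reduce the discrepancy
        trajectory.append(position)
    return trajectory

# With gain 0.5 the error halves each step, so the position converges
# on the target.
path = servo(target=10.0, position=0.0)
```

The sketch also makes Rosenblueth, Wiener and Bigelow's wider point visible: nothing in the loop cares whether the "system" is a thermostat, a homing torpedo, or a hand reaching for a glass, which is why their taxonomy could span organisms and machines alike.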

But even more revealing of the differences between these cyberneticians and the symbolists to follow was the role of the machine in their work. Rosenblueth et al. admit that their focus on the synthesis of intelligence in non-protein materials is merely a practical matter, and might change: "the ultimate model of a cat is of course another cat". Thus the "behaviourist" cyberneticists were not as opposed to multiple-realisability as some functionalist critiques have made them out to be. In fact, it was the symbolists who placed significance on one particular way of achieving intelligent behaviour: programming a digital computer. True, it was acknowledged that this computer could be realised in any substrate, but this does not change the fact that the symbolists were in fact more restrictive than their behavioural cousins.

The second aim of the Rosenblueth, Wiener and Bigelow paper was to stress the importance of the concept of purpose. Their largely philosophical remarks here explain, to a much greater extent than does the part of the paper dealing with the first aim, why the paper appeared in a philosophy journal. Their analysis is that previous thinkers were rightly distrustful of the problematic notion of a final cause, but these thinkers made the mistake of throwing out the notion of purpose as well. It is argued that the notion of purpose is not opposed to determinism; causality is concerned with functional relationships, while teleology is only concerned with behaviour. Perhaps this, more than anything else, was the most important conceptual contribution of the paper: a further insight into how the normativity of intelligence could be realised in the meaninglessness of the mechanical.

Boden has more than once made the point that McCulloch and Pitts' paper "A Logical Calculus of the Ideas Immanent in Nervous Activity" started the field of artificial intelligence. She does this by arguing for it in her entry "Artificial Intelligence" in the Routledge Encyclopaedia of Philosophy, and also by including the paper as the first entry in her collection The Philosophy of Artificial Intelligence. Her main contention is that it "integrated three powerful ideas of the early twentieth century: propositional logic, the neuron theory of Charles Sherrington, and Turing computability". With regard to the latter, Boden notes that McCulloch and Pitts showed that "every net computes a function that is computable by a Turing machine", and that "their work inspired early efforts in both classical and connectionist AI because they appealed to logic and Turing computability, but described the implementation of these notions as a network of abstractly defined 'neurons' passing messages to their neighbours" (which makes it all the more noteworthy that McCulloch and Pitts do not cite Turing in their list of references). Boden's assessment of how their paper made the field of artificial intelligence possible is tripartite: it resulted in von Neumann's choice of binary arithmetic in designing the digital computer; it reminded researchers of the Hobbesian insight that calculation could underlie all reasoning, not just mathematics; and it launched the computational study of neural networks.
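
The formal device at the centre of the McCulloch-Pitts paper can be conveyed in a few lines. The sketch below is a simplified rendering of their threshold units - the particular weights and thresholds are illustrative choices, not taken from the paper - showing how such abstract "neurons" realise the propositional connectives:

```python
def mp_neuron(inputs, weights, threshold):
    """A McCulloch-Pitts unit: emits 1 (fires) iff the weighted sum
    of its binary inputs reaches the threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# Propositional connectives, each realised by a single unit:
AND = lambda a, b: mp_neuron([a, b], [1, 1], threshold=2)  # both inputs must fire
OR  = lambda a, b: mp_neuron([a, b], [1, 1], threshold=1)  # either input suffices
NOT = lambda a:    mp_neuron([a], [-1], threshold=0)       # an inhibitory input
```

Networks built from such units can realise any finite truth-function, which is the building block behind the result Boden cites: that the behaviour of any such net is a Turing-computable function.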

The Heims paper breaks with the rest of the set in being primarily biographical, focusing on the work and persons of McCulloch and Pitts; this, together with its absence of the political slant of the Edwards piece, makes it a balancing complement to that account.

Although the official 1951 publication date puts it after the watershed of 1950 which is being employed to divide the early and modern sections of this set, von Neumann's paper is merely a slightly edited version of one presented at the Hixon Symposium in 1948, which, together with the content of the paper, warrants putting it here. Von Neumann's primary interest is in exploring how biology might help us understand automata; the converse is a secondary interest. He begins with a discussion of the relative merits of analogue vs. digital machines. Although he notes that "the first well-integrated, large computing machine ever made was an analogy machine, V. Bush's Differential Analyzer", he goes on to show, in a manner quite different from Turing's universality-based arguments for the same conclusion, the advantages of digital computers. In particular, the noise-related error of analogue computers grows much more quickly as the number of components increases than does the round-off error of digital computers. Next, he compares biological systems with (digital) computers; admitting that the nervous system is partly analogue, he restricts his attention to the digital aspect. Although the neuron itself is not purely digital, he notes that the same goes for components in digital machines; what matters is whether a component's function is to be digital. Von Neumann then realises that this means that one must employ intentional terms just to make the analogue/digital distinction. This has substantial implications for those who would wish to use computation and artificial intelligence as a way of understanding intentionality. If even digitality is, as Brian Cantwell Smith puts it, a "post-intentional" notion (Smith 1995), then how can one appeal to digitality to explain intentionality (as does, e.g., Dretske 1981) in a non-circular manner?

Another interesting notion that von Neumann expresses in this paper, but which should probably receive further consideration, is his notion of a switching organ. What is important is not the idea of the organ itself, which is just a black box capable of causing several stimuli of the same kind as the ones which initiated it. Rather, the insight is in the conclusion that von Neumann draws: the energy of the organ must derive not from the original stimulus, but from an independent source. This concept might be of use in drawing the representational/non-representational distinction.

Next comes a comparison between the sizes of organisms and the computing elements of the day, with von Neumann concluding that the large size and unreliability of vacuum tubes was what was then limiting the size and power of artificial automata. Given the vast improvements in hardware since 1948, only the third of von Neumann's limitations on "current" technology is still applicable: the lack of a logical theory of automata. In a move that is in stark contrast with his foregoing adumbration of digitality, and indeed his intellectual image as a kind of "Mr. Digital", von Neumann argued that formal logic "is one of the technically most refractory parts of mathematics", because it deals with rigid, all-or-none concepts, thus cutting it off from the "best cultivated portions of mathematics". The theory of automata, he maintained, is currently a chapter in formal logic: it is combinatorial rather than analytical. It needs to change so that it is not binary, all-or-nothing. In particular, the actual time it takes to perform a computation must be taken into account. In this regard von Neumann was flying in the face of what has become a Turing machine orthodoxy, and presages the criticisms of the dynamicists (such as van Gelder 1998). The new logic of automata should also incorporate the probability, however slight, of an operation failing, rather than assuming perfect compliance. This issue illustrates why there is a need for such a theory in order to construct more complex automata. Unlike in nature, with digital computer design and construction we have to stop errors immediately. This is because we are more "scared" of errors, because of our ignorance of how to handle them (lack of a theory). Similarly, most of our diagnostic tools assume only one component is faulty - another example of how our ignorance limits our ability. One can see the connectionist emphasis on robustness and analogue systems as responding to this consideration, but not in the way von Neumann recommended - he would have criticised such work for its lack of theoretical foundations.

Von Neumann states the main result of the McCulloch-Pitts theory: any functioning which can be defined unambiguously in a finite number of words can be realised by a neural network. But he identifies two problems with this: (1) will the network be of practical size?; and (2) can every mode of behaviour be put unambiguously into words? The latter in effect challenges the Church-Turing thesis and the role it has played in justifying unquestioned faith in approaches to artificial intelligence based on the digital computer. Von Neumann's objection is that although words can capture any "special phase" of any behaviour, he thinks it unlikely that this could be done for any general concept, such as "analogy". Instead of capturing the behaviour in words, he suggests - in a way that recalls Rosenblueth, Wiener and Bigelow's comment that the best model of a cat is another cat - that "it is possible that the connection pattern of the brain itself is the simplest logical expression or definition of this principle [the general concept of analogy]". Here we have a von Neumann who is quite at odds with the symbolic, digital computation approach to artificial intelligence. But since he does not want to abandon logical theory altogether, he phrases his conclusion as a call for logic to undergo a "pseudomorphosis" to neurology.

The last portion of the paper is a fascinating discussion of the possibilities of self-reproducing automata, a notion of von Neumann's that has had lasting effect in several fields. It is in this section that the paper's first reference to Turing is made, building on his theory of automata. A transcription of the discussion that followed the paper's presentation has also been included.
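
Von Neumann's cellular-automaton construction is far too large to reproduce here, but the logical kernel of self-reproduction - a description that contains the means of copying itself - survives in miniature in the programming folklore of the quine. The sketch below is a standard illustration, not anything from von Neumann's paper: a program whose output is its own source text.

```python
# A quine: the string s plays the role of von Neumann's "description";
# the print statement plays the role of the constructor that, fed the
# description, reproduces the whole program, description included.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

The trick is the same division of labour von Neumann required: the description (the string) is both *interpreted* (formatted and printed) and *copied verbatim* (embedded in the output via `%r`), so the reproduction contains its own instructions for reproducing again.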

From the start, it was obvious that something would have to be included from McCorduck's famous personal history of the field of artificial intelligence. The difficulty was in deciding which chapter to include, since almost every chapter is highly relevant and revealing. Much the same could be said concerning Crevier's more recent history of the field. Although McCorduck's discussion of the Dartmouth conference is extensive, it was decided that since Dartmouth is discussed elsewhere (e.g. in Gardner's overview), her chapter covering the period from Dartmouth to the end of the 1960s would be included (further coverage of the early years of modern artificial intelligence is provided in other papers in the set - e.g. Brooks' paper, Article I in Volume III). At this point the goal of artificial intelligence was still a kind of universal, Leibnizian intelligence, as witnessed in Newell and Simon's General Problem Solver program and McCarthy's concentration on common sense reasoning. McCorduck covers this, but also documents the dissatisfaction which would eventually lead research away from small general intelligence programs toward large, knowledge-based approaches, shunning toy domains in favour of real-world tasks such as robotics (e.g. the Shakey project).

However, it was not robot controllers, but expert systems which were to demonstrate the successes of the knowledge-based approach. And, as Crevier (quoting Simon) says at the outset, it was not only the dissatisfaction with general approaches, but also advances in hardware which led to the rise of expert systems in the late 1960s, 1970s and 1980s: DENDRAL, MYCIN, TEIRESIAS and XCON. The key idea here was that the knowledge in a system, and the operations that organised and exploited that knowledge, could be kept distinct. This, following Lovelace, was a unification of von Neumann's separation of program and data with the idea that computers are symbol processors, not number-crunchers. Randall Davis' choice of the term "inference engine" is revealed to be an acknowledgement of this Babbagean legacy: "And knowing about Babbage, I figured inference engine was in the right spirit. It would orient them toward thinking of a machine because it was an engine, but orient them toward thinking of a machine that was doing inference, not arithmetic." Crevier's list of six advantages of expert systems is succinct and persuasive: they facilitate explanation and therefore user confidence, they allow for dynamic reordering of knowledge elements, they allow several alternatives to be considered, they are easily modifiable, they are semantically transparent, and they are robust - "one can usually remove any single rule without grossly affecting program performance" - a desideratum that would haunt workers in traditional artificial intelligence when the connectionists wielded it against them soon after.
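
The knowledge/inference-engine separation described above can be sketched in a few lines. The rules and facts below are invented for illustration - nothing here is drawn from DENDRAL or MYCIN - but the structural point stands: the rule base is inert data, and a single generic engine does all the inferring, so individual rules can be added, removed, or reordered without touching the engine.

```python
# Knowledge base: plain data, kept entirely separate from the machinery
# that exploits it. (Hypothetical rules, for illustration only.)
RULES = [
    ({"has_fever", "has_rash"}, "suspect_measles"),
    ({"suspect_measles"}, "recommend_isolation"),
]

def forward_chain(facts, rules):
    """A generic inference engine: repeatedly fire any rule whose
    conditions are all present, adding its conclusion as a new fact,
    until nothing new can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts
```

Running `forward_chain({"has_fever", "has_rash"}, RULES)` derives both conclusions in turn; deleting either rule degrades the output gracefully rather than breaking the engine - a toy version of the robustness Crevier lists among the six advantages.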

References

Boden, M. A. (1998) "Artificial Intelligence", in E. Craig (ed.), Routledge Encyclopaedia of Philosophy, London: Routledge.
Boden, M. A. (1990) The Philosophy of Artificial Intelligence, Oxford: Oxford University Press.
Dretske, F. (1981) Knowledge and the Flow of Information, Cambridge: MIT Press.
Dreyfus, H. L. and Dreyfus, S. E. (1988) "Making a Mind versus Modeling the Brain: Artificial Intelligence back at a branch point", Daedalus, Journal of the American Academy of Arts and Sciences 117(1): 15-44.
Fodor, J. A. (1984) "Semantics, Wisconsin Style", Synthese 59: 231-50.
Newell, A. (1980) "Physical Symbol Systems", Cognitive Science 4: 135-83.
Smith, B. C. (1995) On the Origin of Objects, Cambridge: MIT Press.
van Gelder, T. (1998) "The Dynamical Hypothesis in Cognitive Science", Behavioral and Brain Sciences 21: 615-28.
Vroon, P. A. (1980) Intelligence: on Myths and Measurement, Amsterdam: North-Holland.
