On the Eve of Artificial Minds

Chris Eliasmith

I review recent technological, empirical, and theoretical developments related to building sophisticated cognitive machines. I suggest that rapid growth in robotics, brain-like computing, new theories of large-scale functional modeling, and financial resources directed at this goal means that there will soon be a significant increase in the abilities of artificial minds. I propose a specific timeline for this development over the next fifty years and argue for its plausibility. I highlight some barriers to the development of this kind of technology, and discuss the ethical and philosophical consequences of such a development. I conclude that researchers in this field, governments, and corporations must take care to be aware of, and willing to discuss, both the costs and benefits of pursuing the construction of artificial minds.

Keywords: Artificial cognition | Artificial intelligence | Brain modelling | Machine learning | Neuromorphic computing | Robotics | Singularity

Prediction is difficult, especially about the future – Danish Proverb

Author

Chris Eliasmith
celiasmith@uwaterloo.ca
University of Waterloo, Waterloo, ON, Canada

Commentator

Daniela Hill
daniela.hill@gmx.net
Johannes Gutenberg-Universität Mainz, Germany

Editors

Thomas Metzinger
metzinger@uni-mainz.de
Johannes Gutenberg-Universität Mainz, Germany

Jennifer M. Windt
jennifer.windt@monash.edu
Monash University, Melbourne, Australia

1 Introduction

The prediction game is a dangerous one, but that, of course, is what makes it fun. The pitfalls are many: some technologies change exponentially but some don’t; completely new inventions, or fundamental limits, might appear at any time; and it can be difficult to say something informative without simply stating the obvious. In short, it’s easy to be wrong if you’re specific. (Although, it is easy to be right if you’re Nostradamus.) Regardless, the purpose of this essay is to play this game. As a consequence, I won’t be pursuing technical discussion on the finer points of what a mind is, or how to build one, but rather attempting to paint an abstract portrait of the state of research in fields related to machine intelligence broadly construed. I think the risks of undertaking this kind of prognostication are justified because of the enormous potential impact of a new kind of technology that lies just around the corner. It is a technology we have been dreaming about—and dreading—for hundreds of years. I believe we are on the eve of artificial minds.

In 1958 Herbert Simon & Allen Newell claimed that “there are now in the world machines that think” and predicted that it would take ten years for a computer to become world chess champion and write beautiful music (1958, p. 8). Becoming world chess champion took longer, and we still don’t have a digital Debussy. More importantly, even when a computer became world chess champion it was not generally seen as the success that Simon and Newell had expected. This is because the way in which Deep Blue beat Garry Kasparov did not strike many as advancing our understanding of cognition. Instead, it showed that brute force computation, and a lot of careful tweaking by expert chess players, could surpass human performance in a specific, highly circumscribed environment.

Excitement about AI grew again in the 1980s, but was followed by funding cuts and general skepticism in the “AI winter” of the 1990s (Newquist 1994). Maybe we are just stuck in a thirty-year cycle of excitement followed by disappointment, and I am simply expressing the beginning of the next temporary uptick. However, I don’t think this is the case. Instead, I believe that there are qualitative changes in methods, computational platforms, and financial resources that place us in a historically unique position to develop artificial minds. I will discuss each of these in more detail in subsequent sections, but here is a brief overview.

Statistical and brain-like modeling methods are far more mature than they have ever been before. Systems with millions (Garis et al. 2010; Eliasmith et al. 2012) and even tens of millions (Fox 2009) of simulated neurons are suddenly becoming common, and the scale of models is increasing at a rapid rate. In addition, the challenges of controlling a sophisticated, nonlinear body are being met by recent advances in robotics (Cheah et al. 2006; Schaal et al. 2007). These kinds of methodological advances represent a significant shift away from classical approaches to AI (which were largely responsible for the previously unfulfilled promises of AI) to more neurally inspired, brain-like ones. I believe this change in focus will allow us to succeed where we haven’t before. In short, the conceptual tools and technical methods being developed for studying what I call “biological cognition” (Eliasmith 2013) will make a fundamental difference to our likelihood of success.

Second, there have been closely allied and important advances in the kinds of computational platforms that can be exploited to run these models. So-called “neuromorphic” computing—hardware platforms that perform brain-style computation—has been rapidly scaling up, with several current projects expected to hit millions (Choudhary et al. 2012) and billions (Khan et al. 2008) of neurons running in real time within the next three to four years. These hardware advances are critical for performing efficient computation capable of realizing brain-like functions embedded in and controlling physical, robotic bodies.

Finally, unprecedented financial resources have been allocated by both public and private groups focusing on basic science and industrial applications. For instance, in February 2013 the European Union announced one billion euros in funding for the Human Brain Project, which focuses on developing a large-scale brain model as well as neuromorphic and robotic platforms. A month later, the Obama BRAIN initiative was announced in the United States. This initiative devotes the same level of funding to experimental, technological, and theoretical advances in neuroscience. More recently, there has been a huge amount of private investment:

Google purchased eight robotics and AI companies between Dec 2013 and Jan 2014, including industry leader Boston Dynamics (Stunt 2014).

Qualcomm has introduced the Zeroth processor, which is modeled after how a human brain works (Kumar 2013). They demonstrated a Field-Programmable Gate Array (FPGA) mock-up of the chip performing a reinforcement learning task on a robot.

Amazon has recently expressed a desire to provide the Amazon Prime Air service, which will use robotic quadcopters to deliver goods within thirty minutes of their having been ordered (Amazon 2013).

IBM has launched a product based on Watson, which famously beat the best human Jeopardy players (http://ibm.com/innovation/us/watson/). The product will provide confidence-based responses to natural language queries. It has been opened up to allow developers to use it in a wide variety of applications. They are also developing a neuromorphic platform (Esser et al. 2013).

In addition, there are a growing number of startups that work on brain-inspired computing, including Numenta, the Brain Corporation, Vicarious, DeepMind (recently purchased by Google for $400 million), and Applied Brain Research, among many others. In short, I believe there are more dollars being directed at the problem than ever before.

It is primarily these three forces that I believe will allow us to build convincing examples of artificial minds in the next fifty years. And, I believe we can do this without necessarily defining what it is that makes a “mind”—even an artificial one. As with many subtle concepts—such as “game,” to use Wittgenstein’s example, or “pornography,” to use Supreme Court Justice Potter Stewart’s example—I suspect we will avoid definitions and rely instead on our sophisticated, but poorly understood, methods of classifying the world around us. In the case of “minds,” these methods will be partly behavioural, partly theoretical, and partly based on judgments of similarity to the familiar. In any case, I do not propose to provide a definition here, but rather to point to reasons why the artifacts we continue to build will become more and more like the natural minds around us. In doing so, I survey recent technological, theoretical, and empirical developments that are important for supporting our progress on this front. I then suggest a timeline over which I expect these developments to take place. Finally, I conclude with what I expect to be the major philosophical and societal impacts of our being able to build artificial minds. As a reminder, I am adopting a somewhat high-level perspective on the behavioural sciences and related technologies in order to make clear where my broad (and likely wrong) predictions are coming from. In addition, if I’m not entirely wrong, I suspect that the practical implications of such developments will prove salient to a broad audience, and so, as researchers in the area, we should consider the consequences of our research.

2 Technological developments

Because I take it that brain-based approaches provide the “difference that makes a difference” between current approaches and traditional AI, here I focus on developments in neuromorphic and robotic technology. Notably, all of the developments in neuromorphic hardware that I discuss below are inspired by some basic features of neural computation. For instance, all of the neuromorphic approaches use spiking neural networks (SNNs) to encode and process information. In addition, there is unanimous agreement that biological computation is orders of magnitude more power-efficient than digital computation (Hasler & Marr 2013). Consequently, a central motivation behind exploring these hardware technologies is that they might allow for sophisticated information processing using small amounts of power. This is critical for applications that require the processing to be near the data, such as in robotics and remote sensing. In what follows I begin by providing a sample of several major projects in neuromorphic computing that span the space of current work in the area. I then briefly discuss the current state of high-performance computing and robotics, to identify the roles of the most relevant technologies for developing artificial minds.
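To make the shared computational substrate concrete, here is a minimal sketch of the kind of unit these platforms implement: a leaky integrate-and-fire (LIF) neuron, simulated in Python. All parameter values are illustrative rather than taken from any particular chip.

    import numpy as np

    # Minimal leaky integrate-and-fire (LIF) neuron; parameters are illustrative.
    dt = 0.001        # simulation time step (s)
    tau_m = 0.02      # membrane time constant (s)
    v_th = 1.0        # spike threshold
    v_reset = 0.0     # potential after a spike

    def simulate_lif(input_current):
        """Return spike times (s) for a sequence of input current samples."""
        v, spikes = 0.0, []
        for step, J in enumerate(input_current):
            # The membrane potential leaks toward zero while integrating input.
            v += (dt / tau_m) * (J - v)
            if v >= v_th:                 # threshold crossing emits a spike
                spikes.append(step * dt)
                v = v_reset
        return spikes

    # A constant supra-threshold current yields a regular spike train.
    print(len(simulate_lif(np.full(1000, 1.5))), "spikes in 1 s")

Information is then carried by the timing and rates of such spikes rather than by floating-point values, which is what allows the event-driven, low-power implementations described below.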

To complement its cognitively focused Watson project, IBM has been developing a neuromorphic architecture, a digital model of individual neurons, and a method for programming this architecture (Esser et al. 2013). The architecture itself is called TrueNorth. They argue that the “low-precision, synthetic, simultaneous, pattern-based metaphor of TrueNorth is a fitting complement to the high-precision, analytical, sequential, logic-based metaphor of today’s von Neumann computers” (Esser et al. 2013, p. 1). TrueNorth has neurons organized into 256-neuron blocks, in which each neuron can receive input from 256 axons. To assist with programming this hardware, IBM has introduced the notion of a “corelet,” which is an abstraction that encapsulates local connectivity in small networks. These act like small programs that can be composed in order to build up more complex functions. To date the demonstrations of the approach have focused on simple, largely feed-forward, standard applications, though across a wide range of methods, including Restricted Boltzmann Machines (RBMs), liquid state machines, Hidden Markov Models (HMMs), and so on. It should be noted that the proposed chip does not yet exist, and current demonstrations are on detailed simulations of the architecture. However, because it is a digital chip the simulations are highly accurate.
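The corelet idea can be illustrated with a toy sketch: fixed-size blocks of neurons whose local connectivity is encapsulated behind a simple interface, so that blocks compose into larger programs. The Python below is my own illustration, not IBM’s corelet language; only the 256-neuron, 256-axon block size comes from the description above.

    import numpy as np

    class Block:
        """Toy corelet-like unit: 256 neurons, each fed by 256 input axons."""
        SIZE = 256

        def __init__(self, rng):
            # Encapsulated local connectivity: random binary weights here.
            self.weights = rng.integers(0, 2, size=(self.SIZE, self.SIZE))
            self.threshold = 64

        def step(self, axon_spikes):
            """Map a binary input spike vector to a binary output spike vector."""
            drive = self.weights @ axon_spikes
            return (drive >= self.threshold).astype(int)

    def compose(blocks, axon_spikes):
        """Chain blocks so each block's spikes feed the next block's axons."""
        for block in blocks:
            axon_spikes = block.step(axon_spikes)
        return axon_spikes

    rng = np.random.default_rng(0)
    out = compose([Block(rng), Block(rng)], rng.integers(0, 2, size=256))
    print(out.sum(), "of 256 neurons spiked")

The point of the abstraction is the compose step: more complex functions are built by wiring blocks together rather than by programming individual neurons.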

A direct competitor to IBM’s approach is the Zeroth neuromorphic chip from Qualcomm. Like IBM, Qualcomm believes that constructing brain-inspired hardware will provide a new paradigm for exploiting the efficiencies of neural computation, targeted at the kind of information processing at which brains excel, but which is extremely challenging for von Neumann approaches. The main difference between these two approaches is that Qualcomm has committed to allowing online learning to take place on the hardware. Consequently, they announced their processor by demonstrating its application in a reinforcement learning paradigm on a real-world robot. They have released videos of the robot maneuvering in an environment and learning to only visit one kind of stimulus (white boxes: http://www.youtube.com/watch?v=8c1Noq2K96c). It should again be noted that this is an FPGA simulation of a digital chip that does not yet exist. However, the simulation, like IBM’s, is highly accurate.
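The demonstration just described is, at its core, a standard reinforcement-learning loop. The sketch below shows that pattern with tabular Q-learning on a toy task in which only “white box” stimuli are rewarded; it illustrates the paradigm, not Qualcomm’s (unpublished) algorithm.

    import random

    # Toy reinforcement learning: learn to visit white boxes and skip the rest.
    random.seed(0)
    colors = ["white", "red", "blue"]
    actions = ["visit", "skip"]
    Q = {(c, a): 0.0 for c in colors for a in actions}
    alpha, epsilon = 0.1, 0.2

    for episode in range(2000):
        color = random.choice(colors)
        # Epsilon-greedy action selection.
        if random.random() < epsilon:
            action = random.choice(actions)
        else:
            action = max(actions, key=lambda a: Q[(color, a)])
        # Visiting a white box pays off; visiting anything else costs.
        if action == "visit":
            reward = 1.0 if color == "white" else -1.0
        else:
            reward = 0.0
        # One-step value update (no successor state in this bandit-style task).
        Q[(color, action)] += alpha * (reward - Q[(color, action)])

    print({c: max(actions, key=lambda a: Q[(c, a)]) for c in colors})
    # Learned policy: visit white boxes, skip the others.

The interesting engineering claim is not the algorithm, which is decades old, but that such value updates can run in spiking hardware at robot-relevant power budgets.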

In the academic sphere, the SpiNNaker project at Manchester University has not focused on designing new kinds of chips, but has instead focused on using low-power ARM processors on a massive scale to allow large-scale brain simulations (Khan et al. 2008). As a result, the focus has been on designing approaches to routing that allow for the high-bandwidth communication that underwrites much of the brain’s information processing. Simulations on the SpiNNaker hardware typically employ spiking neurons, like IBM and Qualcomm, and occasionally allow for learning (Davies et al. 2013), as with Qualcomm’s approach. However, even with low-power conventional chips, the energy usage per neuron is projected to be higher on the SpiNNaker platform. Nevertheless, SpiNNaker boards have been used in a wider variety of larger-scale embodied and non-embodied applications. These include simulating place cells, path integration, simple sensory-guided movements, and item classification.

There are also a number of neuromorphic projects that use analog instead of digital implementations of neurons. Analog approaches tend to be several orders of magnitude more power-efficient (Hasler & Marr 2013), though also more noisy, unreliable, and subject to process variation (i.e., variations in the hardware due to variability in the size of components on the manufactured chip). These projects include work on the Neurogrid chip at Stanford University (Choudhary et al. 2012), and on a chip at ETH Zürich (Corradi, Eliasmith & Indiveri 2014). The Neurogrid chip has demonstrated larger numbers of simulated neurons—up to a million—while the ETH Zürich chip allows for online learning. More recently, the Neurogrid chip has been used to control a nonlinear, six-degree-of-freedom robotic arm, exhibiting perhaps the most sophisticated information processing from an analog chip to date.

In addition to the above neuromorphic projects, which are focused on cortical simulation, there have been several specialized neuromorphic chips that mimic the information processing of different perceptual systems. For example, the dynamic vision sensor (DVS) artificial retina developed at ETH Zürich performs real-time vision processing that results in a stream of neuron-like spikes (Lichtsteiner et al. 2008). Similarly, an artificial cochlea called AEREAR2 has been developed that generates spikes in response to auditory signals (Li et al. 2012). The development of these and other neuronal sensors makes it possible to build fully embodied spiking neuromorphic systems (Galluppi et al. 2014).

There have also been developments in traditional computing platforms that are important for supporting the construction of models that run on neuromorphic hardware. Testing and debugging large-scale neural models is often much easier with traditional computational platforms such as Graphics Processing Units (GPUs) and supercomputers. In addition, the development of neuromorphic hardware often relies on simulation of the designs before manufacture. For example, IBM has been testing their TrueNorth architecture with very large-scale simulations that have run up to 500 billion neurons. These kinds of simulations allow for designs to be stress-tested and fine-tuned before costly production is undertaken. In short, the development of traditional hardware is also an important technological advance that supports the rapid development of more biologically-based approaches to constructing artificial cognitive systems.

A third area of rapid technological development that is critical for successfully realizing artificial minds is the field of robotics. The success of recent methods in robotics has entered public awareness with the creation of the Google car. This self-driving vehicle has successfully navigated hundreds of thousands of miles of urban and rural roadways. Many of the technologies in the car were developed out of DARPA’s Grand Challenge to build an autonomous vehicle that would be tested in both urban and rural settings. Due to the success of the first three iterations of the Grand Challenge, DARPA is now funding a challenge to build robots that can be deployed in emergency situations, such as a nuclear meltdown or other disaster.

One of the most impressive humanoid robots to be built for this challenge is the Atlas, constructed by Boston Dynamics. It has twenty-eight degrees of freedom, covering two arms, two legs, a torso, and a head. The robot has been demonstrated walking bipedally, even in extremely challenging environments in which it must use its hands to help navigate and steady itself (http://www.youtube.com/watch?v=zkBnFPBV3f0). Several teams in this most recent Grand Challenge have been awarded a copy of Atlas, and have been proceeding to competitively design algorithms to improve its performance.

In fact, there have been a wide variety of significant advances in robotic control algorithms, enabling robots—including quadcopters, wheeled platforms, and humanoid robots—to perform tasks more accurately and more quickly than had previously been possible. This has resulted in one of the first human-versus-robot dexterity competitions recently being announced. Just as IBM pitted Watson against human Jeopardy champions, Kuka has pitted its high-speed arm against the human ping-pong champion Timo Boll (http://www.youtube.com/watch?v=_mbdtupCbc4). Despite the somewhat disappointing outcome, this kind of competition would not have been thought possible a mere five years ago (Ackerman 2014).

These three areas of technological development—neuromorphics, high-performance conventional computing, and robotics—are progressing at an incredibly rapid pace. And, more importantly, their convergence will allow a new class of artificial agents to be built. That is, agents that can begin processing information at very similar speeds and support very similar skills to those we observe in the animal kingdom. It is perhaps important to emphasize that my purpose here is predictive. I am not claiming that current technologies are sufficient for building a new kind of artificial mind, but rather that they lay the foundations, and are progressing at a sufficient rate to make it reasonable to expect that the sophistication, adaptability, flexibility, and robustness of artificial minds will rapidly approach those of the human mind. We might again worry that it will be difficult to measure such progress, but I would suggest that progress will be made along many dimensions simultaneously, so picking nearly any of these dimensions will result in some measurable improvement. In general, multi-dimensional similarity judgements are likely to result in “I’ll know it when I see it” kinds of reactions to classifying complicated examples. This may be derided by some as “hand-wavy”, but it might also be a simple acknowledgement that “mindedness” is complex. I would like to be clear that my claims about approaching human mindful behaviour are to be taken as applying to the vast majority of the many measures we use for identifying minds.

3 Theoretical developments

Along with these technological developments there have been a series of theoretical developments that are critical for building large-scale artificial agents. Some have argued that theoretical developments are not that important, suggesting that standard backpropagation at a sufficiently large scale is enough to capture complex perceptual processing (Krizhevsky et al. 2012). That is, building brain-like models is more a matter of getting a sufficiently large computer with enough parameters and neurons than it is of discovering some new principles about how brains function. If this is true, then the technological developments that I pointed to in the previous section may be sufficient for scaling to sophisticated cognitive agents. However, I am not convinced that this is the case.

As a result, I think that theoretical developments in deep learning, nonlinear adaptive control, high-dimensional brain-like computing, and biological cognition combined will be important to support continued advances in understanding how the mind works. For instance, deep networks continue to achieve state-of-the-art results in a wide variety of perception-like processing challenges (http://rodrigob.github.io/are_we_there_yet/build/classification_datasets_results.html#43494641522d3130). And while deep networks have traditionally been used for static processing, such as image classification or document classification, there has been a recent, concerted move to use them to model more dynamic perceptual tasks as well (Graves et al. 2013). In essence, deep networks are one among many techniques for modeling the statistics of time-varying signals, a skill central to animal cognition.

However, animals are also incredibly adept at controlling nonlinear dynamical systems, including their bodies. That is, biological brains can generate time-varying signals that allow successful and sophisticated interactions with their environment through their body. Critically, there have been a variety of important theoretical advances in nonlinear and adaptive control theory as well. New methods for solving difficult optimal control problems have been discovered through careful study of biological motor control (Schaal et al. 2007; Todorov 2008). In addition, advances in hierarchical control allow for real-time computation of difficult inverse kinematics problems on a laptop (Khatib 1987). And, finally, important advances in adaptive control allow for the automatic learning of both kinematic and dynamic models even in highly nonlinear and high-dimensional control spaces (Cheah et al. 2006).
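To give a concrete sense of the kind of problem at issue, the sketch below solves inverse kinematics for a planar two-link arm with the classic Jacobian-transpose iteration. This is a textbook method chosen for brevity, not the hierarchical or adaptive algorithms cited above.

    import numpy as np

    # Planar two-link arm: find joint angles that place the end-effector
    # at a target, using the Jacobian-transpose method.
    L1, L2 = 1.0, 1.0                       # link lengths

    def forward(q):
        """End-effector (x, y) for joint angles q = [shoulder, elbow]."""
        return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                         L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

    def jacobian(q):
        """2x2 Jacobian of end-effector position w.r.t. joint angles."""
        s1, c1 = np.sin(q[0]), np.cos(q[0])
        s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
        return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                         [ L1 * c1 + L2 * c12,  L2 * c12]])

    target = np.array([1.2, 0.8])
    q = np.array([0.3, 0.3])                # initial joint angles
    for _ in range(200):
        error = target - forward(q)         # task-space error
        q += 0.1 * jacobian(q).T @ error    # nudge joints along J^T * error

    print("final error:", np.linalg.norm(target - forward(q)))

Even this toy version hints at why adaptive methods matter: the update assumes the Jacobian (i.e., the arm model) is known, which is exactly what the adaptive approaches cited above learn on the fly.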

Concurrently with these more abstract characterizations of brain function there have been theoretical developments in neuroscience that have deepened our understanding of how biological neural networks may perform sophisticated information processing. Work using the Neural Engineering Framework (NEF) has resulted in a wide variety of spiking neural models that mirror data recorded from biological systems (Eliasmith & Anderson 1999, 2003). In addition, the closely related liquid computing (Maass et al. 2002) and FORCE learning (Sussillo & Abbott 2009) paradigms have been successfully exploited by a number of researchers to generate interesting dynamical systems that often closely mirror biological data. Together these kinds of methods provide quantitative characterizations of the computational power available in biologically plausible neural networks. Such developments are crucial for exploiting neuromorphic approaches to building brain-like hardware. And they suggest ways of testing some of the more abstract perceptual and control ideas in real-world, brain-like implementations.
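The central move in the NEF can be shown in a few lines: a heterogeneous population of neurons nonlinearly encodes a variable, and a linear decoder that recovers the variable is found by least squares. The sketch below uses rectified-linear rate neurons and made-up parameters, in place of the spiking LIF models used in the cited work.

    import numpy as np

    rng = np.random.default_rng(1)
    n_neurons = 50

    # Encoding: each neuron has its own gain, bias, and preferred direction,
    # and responds to the scalar x through a rectified-linear rate curve.
    gains = rng.uniform(0.5, 2.0, n_neurons)
    biases = rng.uniform(-1.0, 1.0, n_neurons)
    encoders = rng.choice([-1.0, 1.0], n_neurons)

    def rates(x):
        """Population firing rates for an array of input values x."""
        return np.maximum(0.0, gains * (encoders * x[:, None]) + biases)

    # Decoding: least-squares weights d such that rates(x) @ d ~= x.
    x_train = np.linspace(-1, 1, 200)
    d = np.linalg.lstsq(rates(x_train), x_train, rcond=None)[0]

    x_test = np.array([-0.5, 0.0, 0.7])
    print("decoded:", rates(x_test) @ d)    # approximately [-0.5, 0.0, 0.7]

Because both encoding and decoding are explicit, the same recipe extends to decoding functions of the represented variable and to wiring populations into the dynamical systems mentioned above.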

Interestingly, several authors have suggested that difficult perceptual and control problems are in fact mathematical duals of one another (Todorov 2009; Eliasmith 2013). This means that there are deep theoretical connections between perception and motor control. This realization points to a need to think hard about how diverse aspects of brain function can be integrated into single, large-scale models. This has been a major focus of research in my lab recently. One result of this focus is Spaun, currently the world’s largest functional brain model. This model incorporates deep networks, recent control methods, and the NEF to perform eight different perceptual, motor, and cognitive tasks (Eliasmith et al. 2012). Importantly, this is not a one-off model, but rather a single example among many that employs a general architecture intended to directly address integrated biological cognition (Eliasmith 2013). Currently, the most challenging constraints for running models like Spaun are technological—computers are not fast enough. However, the neuromorphic technologies mentioned previously should soon remove these constraints. So, in some sense, theory currently outstrips application: we have individually tested several critical assumptions of the model and shown that they scale well (Crawford et al. 2013), but we are not yet able to integrate full-scale versions of the components due to limitations in current computational resources.

Taken together, I believe that these recent theoretical developments demonstrate that we have a roadmap for how to approach the problem of building sophisticated models of biological cognition. No doubt not all of the methods we need are currently available, but it is not evident that there are any major conceptual roadblocks to building a cognitive system that rivals the flexibility, adaptability, and robustness of those found in nature. I believe this is a unique historical position. In the heyday of the symbolic approach to AI there were detractors who said that the perceptual problems solved easily by biological systems would be a challenge for the symbolic approach (Norman 1986; Rumelhart 1989). They were correct. In the heyday of connectionism there were detractors who said that standard approaches to artificial neural networks would not be able to solve difficult planning or syntactic processing problems (Pinker & Prince 1988; Fodor & Pylyshyn 1988; Jackendoff 2002). They were correct. In the heyday of statistical machine learning approaches (a heyday we are still in) there are detractors who say that mountains of data are not sufficient for solving the kinds of problems faced by biological cognitive systems (Marcus 2013). They are probably correct. However, as many of the insights of these various approaches are combined with control theory, integrated into models able to do efficient syntactic and semantic processing with neural networks, and, in general, become conceptually unified (Eliasmith 2013), it is less and less obvious what might be missing from our characterization of biological cognition.

4 Empirical developments

One thing that might be missing is, simply, knowledge. We have many questions about how real biological systems work that remain unanswered. Of course, complete knowledge of natural systems is not a prerequisite for building nearly functionally equivalent systems (see, e.g., flight). However, I believe our understanding of natural cognitive systems will continue to play an important role in deciding what kinds of algorithms are worth pursuing as we build more sophisticated artificial agents.

Fortunately, on this front there have been two announcements of significant resources dedicated to improving our knowledge of the brain, which I mentioned in the introduction. One is from the EU and the other from the US. Each is investing over $1 billion in generating the kind of data needed to fill gaps in our understanding of how brains function. The EU’s Human Brain Project (HBP) includes two central subprojects aimed at gathering mouse and human brain data to complement the large-scale models being built within the project. These subprojects will focus on genetic, cellular, vascular, and overall organizational data to complement the large-scale projects of this type already available (such as the Allen Brain Atlas, http://www.brain-map.org/). One central goal of these subprojects is to clarify the relationship between the mouse (which is highly experimentally accessible) and human subjects.

The American “brain research through advancing innovative neurotechnologies” (BRAIN) initiative is even more directly focused on large-scale gathering of neural data. Its purpose is to accelerate technologies to provide large-scale dynamic information about the brain that demonstrates how both single neurons and larger neural circuits operate. Its explicit goal is to “fill major gaps in our current knowledge” (http://www.nih.gov/science/brain/). It is a natural complement to the Human Connectome Project, which has been mapping the structure of the human brain on a large scale (http://www.humanconnectomeproject.org/). Even though it is not yet clear exactly what information will be provided by the BRAIN initiative, it is clear that significant resources are being put into developing technologies that draw on nanoscience, informatics, engineering, and other fields to measure the brain at a level of detail and scale not previously possible.


While both of these projects are just over a year old, they have both garnered international attention and been rewarded with sufficient funding to ensure a good measure of success. Consequently, it is likely that as we build more sophisticated models of brain function, and as we discover where our greatest areas of ignorance lie, we will be able to turn to the methods developed by these projects to rapidly gain critical information and continue improving our models. In short, I believe that there is a confluence of technological, theoretical, and empirical developments that will allow for bootstrapping detailed functional models of the brain. It is precisely these kinds of models that I expect will lead to the most convincing embodiments of artificial cognition that we have ever seen—I am even willing to suggest that their sophistication will rival that of natural cognitive systems.

5 A future timeline

Until this point I have been mustering evidence that there will soon be significant improvements in our ability to construct artificial cognitive agents. However, I have not been very specific about timing. The purpose of this section is to provide more quantification on the speed of development in the field.

In Table 1, the first column specifies the timeframe, the second suggests the number of neurons that will be simulatable in real time on standard hardware, the third suggests the number of neurons that will be simulatable in real time on neuromorphic hardware, and the last identifies relevant achievable behaviours within that timeframe.

Table 1

I believe that several of the computational technologies I have mentioned, as well as empirical methods for gathering evidence, are on an exponential trajectory by relevant measures (e.g., number of neurons per chip, number of neurons recorded; Stevenson & Kording 2011). On the technological side, if we assume a doubling every eighteen months, this is equivalent to an increase of about one order of magnitude every five years. I should also note that I am assuming that real-time simulation of neurons will be embedded in an interactive, real-world environment, and that the neuron count is for the whole system (not a single chip). For context, it is worth remembering that the human brain has about 10^11 neurons, though they are more computationally sophisticated than those typically simulated in hardware.
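The arithmetic behind that equivalence is worth making explicit, along with the trajectory it implies; the one-million-neuron starting point below is an assumption made purely for illustration.

    import math

    # Doubling every 18 months compounds to about one order of magnitude
    # every five years: 2 ** (60 / 18) ~= 10.08.
    growth_per_five_years = 2 ** (60 / 18)
    print(round(growth_per_five_years, 2))

    # Implied whole-system neuron counts over fifty years, assuming an
    # (illustrative) one million real-time neurons today.
    neurons = 1e6
    for year in range(0, 51, 5):
        print(f"year {year:2d}: ~10^{math.log10(neurons):.1f} neurons")
        neurons *= growth_per_five_years

On these assumptions, the human brain’s roughly 10^11 neurons would be matched in raw count around the twenty-five-year mark, though, as noted, simulated neurons are simpler than biological ones.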

Another caveat is that large-scale simulations on a digital von Neumann architecture are likely to hit a power barrier: the suggested scaling could probably still be achieved, but would be cost-prohibitive in fifty years. Consequently, a neuromorphic alternative is most likely to be the standard implementational substrate of artificial agents.

Finally, the behavioural characterizations I am giving are with a view to functions necessary for creating a convincing artificial mind in an artificial body. Consequently, my comments generally address perceptual, motor, and cognitive skills relevant to reproducing human-like abilities.

6 Consequences for philosophy

So suppose that, fifty years hence, we have developed an understanding of cognitive systems that allows us to build artificial systems that are on par with, or, if we see fit, surpass the abilities of an average person. Suppose, that is, that we can build artificial agents that can move, react, adapt, and think much like human beings. What consequences, if any, would this have for our theoretical questions about cognition? I take these questions to largely be in the domain of philosophy of mind. In this section I consider several central issues in philosophy of mind and discuss what sorts of consequences I take building a human-like artificial agent to have for them.

Being a philosopher, I am certain that, for any contemporary problem we consider, at least some subset of those who have a committed opinion about that problem will not admit that any amount of technical advance can “solve” it. I suspect, however, that their opinions may end up carrying about as much weight as a modern-day vitalist’s. To take one easy example, let us think for a moment about contemporary dualism. Some contemporary dualists hold that even if we had a complete understanding of how the brain functions, we would be no closer to solving the “hard problem” of consciousness (Chalmers 1996). The “hard problem” is the problem of explaining how subjective experience comes from neural activity. That is, how the phenomenal experiences we know from a first-person perspective can be accounted for by third-person physicalist approaches to understanding the mind. If indeed we have constructed artificial agents that behave much like people, share a wide variety of internal states with people, are fully empirically accessible, and report experiences like people, it is not obvious to what extent this problem will not have been solved. Philosophers who are committed to the notion that no amount of empirical knowledge will solve the problem will of course dismiss such an accomplishment on the strength of their intuitions. I suspect, however, that when most people are actually confronted with such an agent—one they can interrogate to their heart’s content and one about which they can have complete knowledge of its functioning—it will seem odd indeed to suppose that we cannot explain how its subjective experience is generated. I suspect it will seem as odd as someone nowadays claiming that we cannot expect to explain how life is generated despite our current understanding of biochemistry. Another way to put this is that the “strong intuitions” of contemporary dualists will hold little plausibility in the face of actually existing, convincing artificial agents, and so, I suspect, they will become even more of a rarity.

I refer to this example as “easy” because the central reasons for rejecting dualism are only strengthened, not generated, by the existence of sophisticated artificial minds. That is, good arguments against the dualist view are more or less independent of the current state of constructing agents (although the existence of such agents will likely sway intuitions). However, other philosophical conundrums, like Searle’s famous Chinese room (1980), have responses that depend fairly explicitly on our ability to construct artificial agents. In particular, the “systems reply” suggests that a sufficiently complex system will have the same intentional states as a biological cognitive system. For those who think that this is a good rejection of Searle’s strong intentionalist views, having systems that meet all the requirements of their currently hypothetical agents would provide strong empirical evidence consistent with their position. Of course, the existence of such artificial agents is unlikely to convince those, like Searle, who believe that there is some fundamental property of biology that allows intentionality to gain a foothold. But it does make such a position seem that much more tenuous if every means of measuring intentionality produces similar measurements across non-biological and biological agents. In any case, the realization of the systems reply does ultimately depend on our ability to construct sufficiently sophisticated artificial agents. And I am suggesting that such agents are likely to be available in the next fifty years.

More immediately, I suspect we will be able to make significant headway on several problems that have been traditionally considered philosophical before we reach the fifty-year mark. For example, the frame problem—i.e., the problem of knowing what representational states to update in a dynamic environment—is one that contemporary methods, like control theory and machine learning, struggle with much less than classical methods. Because the dynamics of the environment are explicitly included in the world-model being exploited by such control-theoretic and statistical methods, updating state representations naturally includes the kinds of expectations that caused such problems for symbolic approaches.
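A minimal example of why such methods sidestep the frame problem is the predict/update cycle of a Kalman filter: because the world-model contains the environment’s dynamics, every state variable is rolled forward on every step as a matter of course, with nothing singled out by hand for updating. The sketch below tracks a moving object’s position and velocity; all matrices are illustrative.

    import numpy as np

    # Kalman filter for a 1-D object with state [position, velocity].
    dt = 0.1
    F = np.array([[1.0, dt], [0.0, 1.0]])   # dynamics: pos += vel * dt
    H = np.array([[1.0, 0.0]])              # we only measure position
    Q = 0.01 * np.eye(2)                    # process noise (assumed)
    R = np.array([[0.25]])                  # measurement noise (assumed)

    x = np.array([[0.0], [1.0]])            # initial estimate: pos 0, vel 1
    P = np.eye(2)                           # estimate covariance

    for z in [0.11, 0.19, 0.33, 0.38, 0.52]:    # noisy position readings
        # Predict: the dynamics model updates *all* state variables.
        x, P = F @ x, F @ P @ F.T + Q
        # Update: one measurement corrects the whole state estimate.
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P

    print("position, velocity estimate:", x.ravel())

Note that velocity is updated even though it is never measured: the expectations that symbolic systems had to encode as explicit frame axioms fall out of the dynamics matrix for free.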

Similarly, explicit quantitative solutions are suggested for the symbol-grounding problem through integrated models that incorporate aspects of both statistical perceptual processing and syntactic manipulation. Even in simple models, like Spaun, it is clear how the symbols for digits that are syntactically manipulated are related to inputs coming from the external world (Eliasmith 2013). And it is clear how those same symbols can play a role in driving the model’s body to express its knowledge about those representations. As a result, the tasks that Spaun can undertake demonstrate both conceptual knowledge, through the symbol-like relationships between numbers (e.g., in the counting task), and perceptual knowledge, through categorization and the ability to drive its motor system to reproduce visual properties (e.g., in the copy-drawing task).

In some cases, rather than resolving philosophical debates, the advent of sophisticated artificial agents is likely to make these debates far more empirically grounded. These include debates about the nature of concepts, conceptual change, and functionalism, among others. However these debates turn out, it seems clear that having an engineered, working system that can generate behaviour as sophisticated as that which gave rise to these theoretical ideas in the first place will allow a systematic investigation of their appropriate application. After all, there are few, if any, limits on the empirical information we can garner from such constructed systems. In addition, our having built the system explicitly makes it unlikely that we would be unaware of some “critical element” essential in generating the observed behaviours.

Even without such a working system, I believe that there are already hints as to how these debates are likely to be resolved, given the theoretical approaches I highlighted earlier. For instance, I suspect that we will find that concepts are explained by a combination of vector space representations and a restricted class of dynamic processes defined over those spaces (Eliasmith 2013). Similarly, quantifying the adaptive nature of those representations and processes will indicate the nature of mechanisms of conceptual change in individuals (Thagard 2014). In addition, functionalism will probably seem too crude a hypothesis given a detailed understanding of how to build a wide variety of artificial minds. Perhaps a kind of “functionalism with error bars” will take its place, providing a useful means of talking about degrees of functional similarity and allowing a quantification of functional characterizations of complex systems. Consequently, suggestions about which functions are or are not necessary for “mindedness” can be empirically tested through explicit implementation and experimentation. This will not solve the problem of mapping experimental results to conceptual claims (a problem we currently face when considering non-human and even some human subjects), but it will make functionalism as empirically accessible as seems plausible.

In addition to these philosophical issues that may undergo reconceptualization with the construction of artificial minds, there are others that are bound to become more vexing. For example, the breadth of application of ethical theory may, for the first time, reach to engineered devices. If, after all, we have built artificial minds capable of understanding their place in the universe, it seems likely we will have to worry about the possibility of their suffering (Metzinger 2013). It does not seem that understanding how such devices work, or having explicitly built them, will be sufficient for dismissing them as having no moral status. While theories of non-human ethics have been developed, it is not clear how much or how little theories of non-biological ethics will be able to borrow from them.

I suspect that the complexities introduced to ethical theory will go beyond adding a new category of potential application. Because artificial minds will be designed, they may be designed to make what have traditionally been morally objectionable inter-mind relationships seem less problematic. Consider, for instance, a robot that is designed to gain maximal self-fulfillment out of providing service to people. That is, unlike any biological species of which we are aware, these robots place service to humans above all else. Is a slave-like relationship between humans and these minds still wrong in such an instance? Whatever our analysis of why slavery is wrong, it seems likely that we will be able to design artificial minds that bypass that analysis. This is a unique quandary because while it is currently possible for certain individuals to claim to have such slave-aligned goals, it is always possible to argue that they are simply mistaken in their personal psychological analysis. In the case of minds whose psychology is designed in a known manner, however, the having of such goals will at least seem much more genuine. This is only one among many new kinds of challenges that ethical theory will face with the development of sophisticated artificial agents (Metzinger 2013).

I do not take this (surely unreasonably brief) discussion to do justice to any of these subtle philosophical issues. My main purpose here is to provide a few example instances of how the technological developments discussed earlier are likely to affect our theoretical inquiry. On some occasions such developments will lead to strengthening already common intuitions; on others they may provide deep empirical access to closely related issues; and on still other occasions these developments will serve to make complex issues even more so.

7 The good and the bad

As with the development of many technologies—cars, electricity, nuclear power—the construction of artificial minds is likely to have both negative and positive impacts. However, there is a sense in which building minds is much more fraught than these other technologies. We may, after all, build agents that are themselves capable of immorality. Presumably we would much prefer to build Commander Data than to build HAL or the Terminator. But how to do this is by no means obvious. There have been several interesting suggestions as to how this might be accomplished, perhaps most notably from Isaac Asimov in his entertaining and thought-provoking exploration of the three laws of robotics. For my purposes, however, I will sidestep this issue—not because it is not important, but because more immediate concerns arise from considering the development of these agents from a technological perspective. Let me then focus on the more immediately pressing consequences of constructing intelligent machines.


The rapid development of technologies related to artificial intelligence has not escaped the notice of governments around the world. One of the primary concerns for governments is the potentially massive changes in the nature of the economy that may result from an increase in automatization. It has recently been suggested that almost half of the jobs in the United States are likely to be computerized in the next twenty years (Rutkin 2013). The US Bureau of Labor Statistics regularly publishes articles on the significant consequences of automation for the labour force in its journal Monthly Labor Review (Goodman 1996; Plewes 1990). This work suggests that greater automatization of jobs may cause standard measures of productivity and output to increase, while still increasing unemployment.

Similar interest in the economic and social impacts of automatization is evident in many other countries. For instance, Policy Horizons Canada, a think-tank that works for the Canadian government, has published work on the effects of increasing automatization and the future of the economy (Arshad 2012). Soon after the publication of our recent work on Spaun, I was contacted by this group to discuss the impact of Spaun and related technologies. It was clear from our discussion that machine learning, automated control, robotics, and so on are of great interest to those who have to plan for the future, namely our governments and policy makers (Padbury et al. 2014).

This is not surprising. A recent McKinsey report suggests that these highly disruptive technologies are likely to have an economic value of about $18 trillion by 2025 (Manyika et al. 2013). It is also clear from the majority of analyses that lower-paid jobs will be the first affected, and that the benefits will accrue to those who can afford what will initially be expensive technologies. Every expectation, then, is that automatization will exacerbate the already large and growing divide between rich and poor (Malone 2014; “The Future of Jobs: The Onrushing Wave” 2014). Being armed with this knowledge now means that individuals, governments, and corporations can support progressive policies to mitigate these kinds of potentially problematic societal shifts (Padbury et al. 2014).

Indeed, many of the benefits of automatization may help alleviate the potential downsides. Automatization has already had a significant impact on the growth of new technology, both speeding up the process of development and making new technology cheaper. The human genome project was a success largely because of the automatization of the sequencing process. Similarly, many aspects of drug discovery can be automatized by using advanced computational techniques (Leung et al. 2013). Automatization of more intelligent behaviour than simply generating and sifting through data is likely to have an even greater impact on the advancement of science and engineering. This may lead more quickly to cleaner and cheaper energy, advances in manufacturing, decreases in the cost of (and increases in access to) advanced technologies, and other societal benefits.

As a consequence, manufacturing is likely to become safer—a trend already seen in areas of manufacturing that employ large numbers of robots (Robertson et al. 2005). At the same time, additional safety considerations come into play as robotic and human workspaces themselves begin to interact. This concern has resulted in a significant focus in robotics on compliant robots. Compliant robots are those that have “soft” environmental interactions, often implemented by including real or virtual springs on the robotic platform. As a result, control becomes more difficult, but interactions become much safer, since the robotic system does not rigidly go to a target position even if there is an unexpected obstacle (e.g., a person) in the way.
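The virtual-spring idea can be written down directly: instead of commanding a position, the controller commands a force proportional to the position error, so contact with an obstacle caps the applied force instead of driving the arm through it. A one-dimensional sketch with made-up constants:

    # Compliant (impedance-style) control in one dimension: command a force
    # from a virtual spring and damper rather than a rigid position.
    k, c = 20.0, 4.0          # virtual spring and damper constants (assumed)
    m, dt = 1.0, 0.01         # effector mass and time step

    x, v = 0.0, 0.0           # effector position and velocity
    target = 1.0
    obstacle = 0.5            # e.g., a person blocking the path

    for _ in range(500):
        force = k * (target - x) - c * v    # spring-damper control law
        v += (force / m) * dt
        x += v * dt
        if x >= obstacle:                   # contact with the obstacle:
            x, v = obstacle, 0.0            # motion is blocked, not forced

    # The contact force settles at k * (target - obstacle), a bounded value
    # set by the virtual spring, which is what makes the interaction safe.
    print("steady contact force:", k * (target - obstacle))

A stiff position controller would instead ramp up its effort until the obstacle moved or broke; the spring constant k makes the trade between accuracy and safety explicit.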

As the workplace continues to become one where human and automated systems cooperate, additional concerns may arise as to what kinds of human-machine relationships employers should be permitted to demand. Will employees have the right not to work with certain kinds of technology? Will employers still have to provide jobs to employees who refuse certain work situations? These questions touch on many of the same subjects highlighted in the previous section regarding the ethical challenges that will be raised as we develop more and more sophisticated artificial minds.

Finally, much has been made of the possibility that the automatization of technological advancement could eventually result in machines designing themselves more effectively than humans can. This idea has captured the public imagination, and the point in time where this occurs is now broadly known as “The Singularity,” a term first introduced by von Neumann (Ulam 1958). Given the vast variety of functions that machines are built to perform, it seems highly unlikely that there will be anything analogous to a mathematical singularity—a clearly defined, discontinuous point—after which machines will be superior to humans. As with most things, such a shift, if it occurs, is likely to be gradual. Indeed, the earlier timeline is one suggestion for how such a gradual shift might occur. Machines are already used in many aspects of design, for performing optimizations that would not be possible without them. Machines are also already much better at many functions than people: most obviously mechanical functions, but more recently cognitive ones, like playing chess and answering trivia questions in certain circumstances.

Because the advancement of intelligent machines is likely to be smooth and continuous (even if exponential at times), we will likely remain in a position to make informed decisions about what machines are permitted to do. As with members of a strictly human society, we do not tolerate arbitrary behaviour simply because such behaviour is possible. If anything, we will be in a better position to specify appropriate behaviour in machines than we are in the case of our human peers. Perhaps we will need laws and other societal controls for determining forbidden or tolerable behaviour. Perhaps some people and machines will choose to ignore those laws. But, as a society, it is likely that we will enforce these behavioural constraints the same way we do now—with publicly sanctioned agencies that act on behalf of society. In short, the dystopian predictions we often see that revolve around the development of intelligent robots seem no more or less likely because of the robots. Challenges to societal stability are nothing new: war, hunger, poverty, and weather are constant destabilizing forces. Artificial minds are likely to introduce another force, but one that may be just as likely to be stabilizing as problematic.

Unsurprisingly, like many other technological changes, the development of artificial minds will bring with it both costs and benefits. It may even be the case that deciding what is a cost and what is a benefit is not straightforward. If indeed many jobs become automated, it would be unsurprising if the average working week became shorter. As a result, a large number of people may have much more recreational time than has been typical in recent history. This may seem like a clear benefit, as many of us look forward to holidays and time off work. However, it has been argued that fulfilling work is central to human happiness (Thagard 2010). Consequently, overly limited or unchallenging work may end up being a significant cost of automation.

As good evidence for costs and benefits becomes available, decision-makers will be faced with the challenge of determining what the appropriate roles of artificial minds should be. These roles will no doubt evolve as technologies change, but there is little reason to presume that unmanageable upheavals or “inflection points” will result from artificial minds being developed. While we, as a society, must be aware of, and prepared for, being faced with new kinds of ethical dilemmas, this has been a regular occurrence during the technological developments of the last several hundred years. Perhaps the greatest challenges will arise because of the significant wealth imbalances that may be exacerbated by limited access to more intelligent machines.

8 Conclusion

I have argued that we are at a unique point in the development of technologies that are critical to the realization of artificial minds. I have even gone so far as to predict that human-level intelligence and physical ability will be achieved in about fifty years. I suspect that for many familiar with the history of artificial intelligence such predictions will be easily dismissed. Did we not have such predictions over fifty years ago? Some have suggested that the singularity will occur by 2030 (Vinge 1993), others by 2045 (Kurzweil 2005). There were suggestions, and significant financial speculation, that AI would change the world economy in the 1990s, but this never happened. Why would we expect anything to be different this time around?

In short, my answer is encapsulated by the specific technological, theoretical, and empirical developments I have described above. I believe that they address the central limitations of previous approaches to artificial cognition, and are significantly more mature than is generally appreciated. In addition, the limitations they address—such as power consumption, computational scaling, control of nonlinear dynamics, and integrating large-scale neural systems—have been more central to prior failures than many have realized. Furthermore, the financial resources being directed towards the challenge of building artificial minds are unprecedented. High-tech companies, including Google, IBM, and Qualcomm, have invested billions of dollars in machine intelligence. In addition, funding agencies including DARPA (Defense Advanced Research Projects Agency), EU-IST (European Union—Information Society Technologies), IARPA (Intelligence Advanced Research Projects Agency), ONR (Office of Naval Research), and AFOSR (Air Force Office of Scientific Research) have contributed a similar or greater amount of financial support across a wide range of projects focused on brain-inspired computing. And the two special billion-dollar initiatives from the US and EU will serve to further deepen our understanding of biological cognition, which has inspired, and will continue to inspire, builders of artificial minds.

While I believe that the alignment of these forces will serve to underpin unprecedented advances in our understanding of biological cognition, there are several challenges to achieving the timeline I suggest above. For one, robotic actuators are still far behind the efficiency and speeds found in nature. There will no doubt be advances in materials science that will help overcome these limitations, but how long that will take is not yet clear. Similarly, sensors on the scale and precision of those available from nature are not yet available. This is less true for vision and audition, but definitely the case for proprioception and touch. The latter are essential for fluid, rapid motion control. It also remains to be seen how well our theoretical methods for integrating complex systems will scale. This will only become clear as we attempt to construct more and more sophisticated systems. This is perhaps the most fragile aspect of my prediction: expecting to solve difficult algorithmic and integration problems. And, of course, there are myriad other possible ways in which I may have underestimated the complexity of biological cognition: maybe glial cells are performing critical computations; maybe we need to describe genetic transcription processes in detail to capture learning; maybe we need to delve to the quantum level to get the explanations we need—but I am doubtful (Litt et al. 2006).

Perhaps it goes without saying that, all things considered, I believe the timeline I propose is a plausible one.1 This, of course, is predicated on there being the societal and political will to allow the development of artificial minds to proceed. No doubt researchers in this field need to be responsive to public concerns about the specific uses to which such technology might be put. It will be important to remain open, self-critical, and self-regulating as artificial minds become more and more capable. We must usher in these technologies with care, fully cognizant of, and willing to discuss, both their costs and their benefits.

Acknowledgements

I wish to express special thanks to two anonymous reviewers for their helpful feedback. Many of the ideas given here were developed in discussion with members of the CNRG Lab, participants at the Telluride workshops, and my collaborators on ONR grant N000141310419 (PIs: Kwabena Boahen and Rajit Manohar). This work was also funded by AFOSR grant FA8655-13-1-3084, Canada Research Chairs, and NSERC Discovery grant 261453.

1 It is quite different from that proposed by the HBP, for example. For further discussion of the differences in perspective between the HBP and my lab’s work, see Eliasmith & Trujillo (2013).


References

Ackerman, E. (2014). Robots playing ping pong: What’s real, what’s not? IEEE Spectrum.

Amazon (2013). Amazon Prime Air.

Arshad, I. (2012). People and machines: Competitors or collaborators in the emerging world of work?

Chalmers, D. (1996). The conscious mind: In search of a fundamental theory. Oxford, UK: Oxford University Press.

Cheah, C. C., Liu, C. & Slotine, J. J. E. (2006). Adaptive tracking control for robots with unknown kinematic and dynamic properties. The International Journal of Robotics Research, 25 (3), 283-296.

Choudhary, S., Sloan, S., Fok, S., Neckar, A., Trautmann, E., Gao, P., Stewart, T., Eliasmith, C. & Boahen, K. (2012). Silicon neurons that compute. In A. E. P. Villa, W. Duch, P. Érdi, F. Masulli and G. Palm (Eds.) Artificial neural networks and machine learning - ICANN 2012 (pp. 121-128). Berlin, GER: Springer. 10.1007/978-3-642-33269-2_16

Corradi, F., Eliasmith, C. & Indiveri, G. (2014). Mapping arbitrary mathematical functions and dynamical systems to neuromorphic VLSI circuits for spike-based neural computation. IEEE International Symposium on Circuits and Systems (ISCAS) 2014 (pp. 269-272). IEEE. 10.1109/ISCAS.2014.6865117

Crawford, E., Gingerich, M. & Eliasmith, C. (2013). Biologically plausible, human-scale knowledge representation. In M. Knauff, M. Pauen, N. Sebanz and I. Wachsmuth (Eds.) 35th Annual Conference of the Cognitive Science Society (pp. 412-417). Austin, TX: Cognitive Science Society.

Davies, S., Stewart, T., Eliasmith, C. & Furber, S. (2013). Spike-based learning of transfer functions with the SpiNNaker neuromimetic simulator. The 2013 International Joint Conference on Neural Networks (IJCNN) (pp. 1832-1839). IEEE. 10.1109/IJCNN.2013.6706962

Eliasmith, C. (2013). How to build a brain: A neural architecture for biological cognition. New York, NY: Oxford University Press.

Eliasmith, C. & Anderson, C. H. (1999). Developing and applying a toolkit from a general neurocomputational framework. Neurocomputing, 26-27, 1013-1018. 10.1016/S0925-2312(99)00098-3

Eliasmith, C. & Anderson, C. H. (2003). Neural engineering: Computation, representation and dynamics in neurobiological systems. Cambridge, MA: MIT Press.

Eliasmith, C., Stewart, T. C., Choo, X., Bekolay, T., DeWolf, T., Tang, Y. & Rasmussen, D. (2012). A large-scale model of the functioning brain. Science, 338 (6111), 1202-1205. 10.1126/science.1225266

Eliasmith, C. & Trujillo, O. (2013). The use and abuse of large-scale brain models. Current Opinion in Neurobiology, 25, 1-6. 10.1016/j.conb.2013.09.009

Esser, S. K., Andreopoulos, A., Appuswamy, R., Datta, P., Barch, D., Amir, A., Arthur, J., Cassidy, A., Flickner, M., Merolla, P., Chandra, S., Basilico, N., Carpin, S., Zimmerman, T., Zee, F., Alvarez-Icaza, R., Kusnitz, J. A., Wong, T. M., Risk, W. P., McQuinn, E., Nayak, T. K., Singh, R. & Modha, D. S. (2013). Cognitive computing systems: Algorithms and applications for networks of neurosynaptic cores. The 2013 International Joint Conference on Neural Networks (IJCNN) (pp. 1-10). IEEE. 10.1109/IJCNN.2013.6706746

Fodor, J. A. & Pylyshyn, Z. W. (1988). Connectionism and cognitive architecture: A critical analysis. Cognition, 28 (1-2), 3-71. 10.1016/0010-0277(88)90031-5

Fox, D. (2009). IBM reveals the biggest artificial brain of all time. Popular Mechanics.

Galluppi, F., Denk, C., Meiner, M. C., Stewart, T., Plana, L. A., Eliasmith, C., Furber, S. & Conradt, J. (2014). Event-based neural computing on an autonomous mobile platform. Proceedings of the IEEE International Conference on Robotics and Automation (ICRA). IEEE.

De Garis, H., Shuo, C., Goertzel, B. & Ruiting, L. (2010). A world survey of artificial brain projects, Part I: Large-scale brain simulations. Neurocomputing, 74 (1-3), 3-29. 10.1016/j.neucom.2010.08.004

Goodman, W. C. (1996). Software and engineering industries: Threatened by technological change? Monthly Labor Review, 119 (8), 37-45. 10.2307/41844604

Graves, A., Mohamed, A. & Hinton, G. E. (2013). Speech recognition with deep recurrent neural networks. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 6645-6649). Vancouver, Canada: IEEE.

Hasler, J. & Marr, H. B. (2013). Finding a roadmap to achieve large neuromorphic hardware systems. Frontiers in Neuroscience, 7 (118), 1-29. 10.3389/fnins.2013.00118

Jackendoff, R. (2002). Foundations of language: Brain, meaning, grammar, evolution. Oxford, UK: Oxford University Press.

Khan, M. M., Lester, D. R., Plana, L. A., Rast, A., Jin, X., Painkras, E. & Furber, S. B. (2008). SpiNNaker: Mapping neural networks onto a massively-parallel chip multiprocessor. The 2008 International Joint Conference on Neural Networks (IJCNN) (pp. 2849-2856). IEEE. 10.1109/IJCNN.2008.4634199

Khatib, O. (1987). A unified approach for motion and force control of robot manipulators: The operational space formulation. IEEE Journal of Robotics and Automation, 3 (1), 43-53.

Krizhevsky, A., Sutskever, I. & Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. In F. Pereira, C. J. C. Burges, L. Bottou and K. Q. Weinberger (Eds.) Advances in Neural Information Processing Systems 25.

Kumar, S. (2013). Introducing Qualcomm Zeroth processors: Brain-inspired computing. http://www.qualcomm.com/media/blog/2013/10/10/introducing-qualcomm-zeroth-processors-brain-inspired-computing

Kurzweil, R. (2005). The singularity is near. New York, NY: Penguin Books.

Leung, E. L., Cao, Z., Jiang, Z., Zhou, H. & Liu, L. (2013). Network-based drug discovery by integrating systems biology and computational technologies. Briefings in Bioinformatics, 14 (4), 491-505.

Lichtsteiner, P., Posch, C. & Delbruck, T. (2008). Temporal contrast vision sensor. IEEE Journal of Solid-State Circuits, 43 (2), 566-576.

Li, C., Delbruck, T. & Liu, S. (2012). Real-time speaker identification using the AEREAR2 event-based silicon cochlea. 2012 IEEE International Symposium on Circuits and Systems (pp. 1159-1162). 10.1109/ISCAS.2012.6271438

Litt, A., Eliasmith, C., Kroon, F., Weinstein, S. & Thagard, P. (2006). Is the brain a quantum computer? Cognitive Science, 30 (3), 593-603.

Maass, W., Natschläger, T. & Markram, H. (2002). Real-time computing without stable states: A new framework for neural computation based on perturbations. Neural Computation, 14 (11), 2531-2560.

Malone, P. (2014). Wealthy need to share the spoils of automation. http://www.canberratimes.com.au/comment/wealthy-need-to-share-the-spoils-of-automation-20140301-33sp6.html

Manyika, J., Chui, M., Bughin, J. & Dobbs, R. (2013). Disruptive technologies: Advances that will transform life, business, and the global economy. McKinsey Global Institute.

Marcus, G. (2013). Is “Deep Learning” a revolution in artificial intelligence? The New Yorker.

Metzinger, T. (2013). Two principles for robot ethics. In E. Hilgendorf and J. P. Günther (Eds.) Robotik und Gesetzgebung (pp. 263-302). Baden-Baden, GER: Nomos.

Newquist, H. P. (1994). The brain makers. Indianapolis, IN: Sams Publishing.

Norman, D. A. (1986). Reflection on cognition and parallel distributed processing. In J. L. McClelland and D. E. Rumelhart (Eds.) Parallel distributed processing: Explorations in the microstructure of cognition (p. 531). Cambridge, MA: MIT Press.

Padbury, P., Christensen, S., Wilburn, G., Kunz, J. & Cass-Beggs, D. (2014). MetaScan 3: Emerging technologies (p. 45). Policy Horizons Canada. http://www.horizons.gc.ca/eng/content/metascan-3-emerging-technologies-0

Pinker, S. & Prince, A. (1988). On language and connectionism: Analysis of a parallel distributed processing model of language acquisition. Cognition, 28 (1-2), 73-193. 10.1016/0010-0277(88)90032-7

Plewes, T. J. (1990). Labor force data in the next century. Monthly Labor Review, 113 (4).

Print Edition (2014). The future of jobs: The onrushing wave. The Economist.

Robertson, J., Sheppard, T. & Sarnes, S. (2005). Workers compensation claim frequency continues to decline, particularly for smaller claims. NCCI Research Brief, 2.

Rumelhart, D. E. (1989). The architecture of mind: A connectionist approach. In M. I. Posner (Ed.) Foundations of cognitive science (pp. 133-159). Cambridge, MA: MIT Press.

Rutkin, A. H. (2013). Report suggests nearly half of U.S. jobs are vulnerable to computerization. MIT Technology Review.

Schaal, S., Mohajerian, P. & Ijspeert, A. (2007). Dynamics systems vs. optimal control: A unifying view. Progress in Brain Research, 165, 425-445. 10.1016/S0079-6123(06)65027-9

Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3 (3), 417-424. 10.1017/S0140525X00005756

Simon, H. A. & Newell, A. (1958). Heuristic problem solving: The next advance in operations research. Operations Research, 6 (1), 1-10.

Stevenson, I. H. & Kording, K. P. (2011). How advances in neural recording affect data analysis. Nature Neuroscience, 14 (2), 139-142. 10.1038/nn.2731


Stunt, V. (2014). Why Google is buying a seemingly crazy collection of companies. CBC News.

Sussillo, D. & Abbott, L. F. (2009). Generating coherent patterns of activity from chaotic neural networks. Neuron, 63 (4), 544-557. 10.1016/j.neuron.2009.07.018

Thagard, P. (2010). The brain and the meaning of life. Princeton, NJ: Princeton University Press.

Thagard, P. (2014). Explanatory identities and conceptual change. Science & Education, 23 (7), 1531-1548. 10.1007/s11191-014-9682-1

Todorov, E. (2008). Optimal control theory. In K. Doya (Ed.) Bayesian brain: Probabilistic approaches to neural coding (pp. 269-298). Cambridge, MA: MIT Press.

Todorov, E. (2009). Parallels between sensory and motor information processing. In M. S. Gazzaniga (Ed.) The cognitive neurosciences (pp. 613-624). Cambridge, MA: MIT Press.

Ulam, S. (1958). Tribute to John von Neumann. Bulletin of the American Mathematical Society, 64 (3), 5.

Vinge, V. (1993). The coming technological singularity: How to survive in the post-human era. Vision 21: Interdisciplinary Science and Engineering in the Era of Cyberspace (pp. 11-22). Westlake, OH: NASA Conference Publication 10129.


Future Games
A Commentary on Chris Eliasmith

Daniela Hill

In this commentary, the future of artificial minds as it is presented by the target article will be reconstructed. I shall suggest two readings of Eliasmith’s claims: one regards them as a thought experiment, the other as a formal argument. While the latter reading is at odds with Eliasmith’s own remarks throughout the paper, it is nonetheless useful because it helps to reveal the implicit background assumptions underlying his reasoning. For this reason, I begin by “virtually reconstructing” his claims as an argument—that is, by formalizing his implicit premises and conclusion. This leads to my second claim, namely that more than technological equipment and biologically inspired hardware will be needed to build artificial minds. I then raise the question of whether we will produce minds at all, or rather functionally differentiated, fragmented derivates which might turn out not to be notably relevant for philosophy (e.g., from an ethical perspective). As a potential alternative to artificial minds, I present the notion of postbiotic systems. These two scenarios call for adjustments of ethical theories, as well as some caution in the development of already-existing artificial systems.

Keywords
Artificial minds | Artificial systems ethics | Biological cognition | Mindedness | Postbiotic system

Commentator
Daniela Hill
daniela.hill@gmx.net
Johannes Gutenberg-Universität Mainz, Germany

Target Author
Chris Eliasmith
celiasmith@uwaterloo.ca
University of Waterloo, Waterloo, ON, Canada

Editors
Thomas Metzinger
metzinger@uni-mainz.de
Johannes Gutenberg-Universität Mainz, Germany

Jennifer M. Windt
jennifer.windt@monash.edu
Monash University, Melbourne, Australia

1 Introduction

This commentary has two main aims: First, it aims to reconstruct the most important predictions and claims Eliasmith presents in his target article, as well as his reasons for endorsing them. Second, it plays its own version of “future games”—the “argumentation game”—by taking some suggestions presented by Eliasmith maximally seriously and then highlighting problems that might arise as a consequence. Of course, these consequences are of a hypothetical nature. Still, they are theoretically relevant for the question of what will be needed to build full-fledged artificial cognitive agents.

Chris Eliasmith discusses recent technological, theoretical, and empirical progress in research on Artificial Intelligence and robotics. His position is that current theories on cognition, along with highly sophisticated technology and the necessary financial support, will lead to the construction of sophisticated minded machines within the coming five decades (Eliasmith this collection, p. 2). And also vice versa: artificial minds will inform theories on biological cognition as well. Since these artificial agents are likely to transcend humans’ cognitive performance, theoretical (i.e., philosophical and ethical) as well as pragmatic (e.g., legal and cultural laws etc.) consequences have to be considered throughout the process of developing and constructing such machines.


The ideas Eliasmith presents are derived from developments in three areas: technology, theory, and funding; and I will demonstrate the background assumptions underlying these. In this way, I want to demonstrate that if we read Eliasmith as defending a formal argument (rather than a thought experiment), this argument has the form of a petitio principii. To illustrate this very clearly, a formal reconstruction of the (not explicitly endorsed, but implicitly assumed) arguments will be conducted. I then argue that even though Eliasmith’s claims fail when they are reconstructed as arguments, his suggestions provide an insightful contribution to the philosophical debate on artificial systems and the near future of related research. I further want to stress that we should perhaps confine ourselves to talking about less radical alternatives that do not necessarily include the mindedness of artificial agents, but have some element of biological cognition (architecture or software) in them. A number of subordinate questions have to be looked at in order to arrive at a point where a justified statement about the possibility of phenomenologically convincing artificial minds can be made. These considerations include more possibilities than simply the dichotomy of human-like vs. artificial. This is due to our having to think about possibilities that lie between or beyond these two extremes, such as fragmented minds and postbiotic systems, since they might soon emerge in the real world. The way in which these will be relevant to philosophy will be largely a question of their psychological make-up—most notably, their ability to suffer.

To start with, the following two sections will present some relevant aspects of the position expressed in the target article. They will summarize, and highlight, some of the article’s many informative and noteworthy suggestions. I shall also bring in some additional thoughts that I consider important. Afterwards, I will play a kind of future game of my own: I take Eliasmith’s predictions very seriously and point at some of the problems that might arise if we were to take his suggestions as arguments. To be fair, Eliasmith himself says that what he presents are “likely wrong” predictions (this collection, p. 3). So on a more charitable reading, his claims are not intended to be arguments at all. Yet the attempt to reconstruct them as a formal argument has the advantage of showing that his claims are based on a reasoning that is itself problematic.

2 Are artificial minds just around the corner?

Eliasmith’s perspective on the architecture of minds is a functionalist one (this collection, p. 2, p. 6, pp. 6–7, pp. 9–11, p. 13). The thread running through his paper is his interest in “understanding how the brain functions” and realizing “detailed functional models of the brain” (ibid., p. 9). The basic idea is that if we construct artificial minds and endow them with certain functions (such as natural language and human-like perceptual abilities), we can examine empirically, in a process comparable to reverse engineering, what it is that constitutes so-called mindedness (ibid., p. 11). But in their striving to unearth the nature of mindedness, it is not the task of artificial intelligence research or biology to deliver comprehensive and full-fledged theories on biological cognition in general and human cognition in particular. Rather, a very interesting reciprocal relationship between the two parties, in which one learns from the other, is what will propel forward our understanding of biological cognitive systems. In the following I give an overview of the most relevant points that are presented in the target article. They will be divided up into the original sections (technical, theoretical, and empirical).

First, in the technical area and according to Eliasmith, we are fairly far advanced—although there are certain hindrances to successfully implementing theories on this technology. The main obstacle is the size of artificial neuronal systems and, connected to that, their power consumption. Even though neuromorphic chips are being improved steadily, the number of neurons that can be reproduced artificially is still much lower than the number of neurons a human brain has. Thus, the processing of information is significantly slower than in natural cognitive systems (Eliasmith this collection, p. 14). Consequently, what can be realized in the field is still far from the complexity displayed by natural, biological cognition. However, as Eliasmith argues, since we are already in possession of the theoretical groundwork, the main barrier to overcome is technological (ibid., p. 9). Throughout the paper, Eliasmith informs the reader that if we had the technologies needed, artificial minds would immediately be created (ibid., e.g., p. 7, p. 9, p. 11). However, where Eliasmith emphasizes technological barriers, I would like to point out that theoretical obstacles exist as well. These mainly revolve around the fact that a system of ethics has to be created before we encounter artificial agents. Eliasmith also comments on the consequences for philosophy, arguing that some major positions in the philosophy of mind, such as functionalism, will receive more empirical grounding (ibid., p. 11).

It seems as if the tacit understanding that Eliasmith has of the function of artificial minds is that they serve as shared research objects of biology and artificial intelligence research in order to gain a better understanding of biological cognition (this collection, p. 9). That is of course only true if the functional architecture of the artificial agent indeed produces convincing behavior, similar to that of biological cognitive systems (humans and animals alike). To illustrate possible problems, one can think of the fact that in research we learn from animal experiments, even though these animals are quite different from us in many ways. They are, however, similar or at least comparable in one epistemically relevant and specific aspect, i.e., the one that is to be examined—for example, certain aspects of metabolism used to test whether a new drug causes liver failure in humans (Shanks et al. 2009, p. 5). It is the same with artificial agents: they are similar to us in their behavior and thus a worthwhile research object. As such, we could formulate the underlying reasoning as a variant of analytical behaviorism. Analytical behaviorists suppose that intrinsic states of a system are mirrored in certain kinds of behavior. Two systems displaying identical behavior on the outside can be investigated in order to detect whether they do so on the inside as well (Graham 2010). This means that we could gain insight on the origin of mental states from a functionally isomorphic system, i.e., an artificially constructed system that is identical in organization and behavior to the natural system copied.

Last, since it seems that it will be possible in the future, given the required hardware, to design artificial agents according to our needs, it does not appear far-fetched to assume that the quality of human life might consequently be improved to a great extent (Eliasmith this collection, p. 11). This requires, however, that we make up our own minds about how to interact with such agents, and which rights to grant and which to deny them. And the opposite case may not be disregarded either: it is imaginable that the artificial agents will at some point turn the tables and be the ones to decide on our rights (cf. Metzinger 2012). In highlighting aspects from different areas to be considered, Eliasmith reminds us of the possibilities that lie ahead of us, but also of the challenges that might show up and have to be faced. I want to suggest that we also take into consideration alternative outcomes that are not minds in the biological sense, but rather derivates of minds. I will therefore put the notion of postbiotic systems into play as a way of escaping the dichotomy “human-like” vs. “artificial” (Metzinger 2013). The philosophical point here is that the conceptual distinction between “natural” and “artificial” may well turn out to be non-exhaustive and non-exclusive: there might well, as Metzinger points out, be future systems that are neither artificial nor biological. By no means do I intend to argue against the use of scientific models, since they are what good research needs. Rather, I wish to draw attention to the possible emergence of intermediate systems, rather than only the extremes (i.e., human-like vs. artificial agents), or classes of systems that go beyond our traditional distinctions, but which nevertheless count as “minded”. As mentioned above, this is due to these intermediate or postbiotic systems being possible much earlier—probably preceding full-blown minded agents.

I will end this section by drawing attention to some of the author’s thoughts on the crucial elements of artificially-minded systems. According to Eliasmith, three types of skills are vital in building artificial minds: cognitive, perceptual, and motor skills have to be combined to create a certain behavior of the minded artificial agent. This behavior will then serve as the basis for us humans to judge whether we perceive the artificial agent as “convincing” or not (Eliasmith this collection, p. 9). Unfortunately, no closer specification of what it is to be “convincing” is given in the target article. No theoretical demarcation criterion is offered. What we can say with great certainty, however, is that in the end our subjective perception of the artificial agents will be the decisive criterion. One could speculate on whether it is merely an impression, or even an illusion, that leads us to concluding that we are facing a minded agent. According to Eliasmith, any system that produces a robust social hallucination in human observers will count as possessing a mind.

3 Playing the “argumentation game”

In the following I will play the “argumentation game” and for a moment assume that what Eliasmith presents us with actually is argumentation. The goal of this section is not to claim that Eliasmith really argues for the emergence of artificial minds in the classical way. Rather, I wish to highlight that possibly more than technological equipment and biologically inspired hardware need to be taken into account before research can present us with a mind, as outlined by Eliasmith. If we deconstruct his line of reasoning and virtually formalize the argument, we don’t find valid argumentation but rather a set of highly educated—and certainly informative—claims about the future, which doubtlessly help us prepare for a future not too far ahead of us. I will utilize the terms “argumentation”, “argument”, “premise”, and “conclusion” in the following, but it should always be remembered that these terms are meant only “virtually” or hypothetically. So let us see how Eliasmith proceeds:

If we play the argumentation game, a first result is that Eliasmith’s virtual argument becomes problematic at the moment he starts elaborating on theoretical developments that have been made and that will propel forward the development of “brain-like models” (this collection, p. 6). From the perspective of an incautious reader, the entire section “Theoretical developments” could be seen as resulting in a claim that can be traced back to a petitio principii. This means that the conclusion drawn at the end of the argumentative line is identical with at least one of the implicit premises. The implicit argumentation is made up of three relevant parts and unfolds as follows: first, building brain-like models is not only a matter of the available technological equipment (ibid., first paragraph; cf. premise 1). Instead, if we face a convincing artificially-minded agent, it is characterized by both sophisticated technological equipment and by our discovery of principles of how the brain functions, such as learning or motor control (ibid.; cf. premise 2). And so, in conclusion, it follows that if biological understanding and technological equipment come together, we will be able to build brain-like models and implement them in highly sophisticated cognitive agents (ibid.).

The incautious reader would now have to believe that Eliasmith is confusing necessary and sufficient conditions. Let us look at this assumed argument in some more detail. Formulated as a complete argument we would get: “If it is not the case that technological equipment alone leads to the building of brain-like models for artificial cognitive agents, but we face a good artificial minded agent which is endowed with certain technology as well as biologically inspired hardware, we have to conclude that this certain technology and biologically inspired hardware are not only necessary, but also sufficient for building brain-like models for artificial cognitive agents.”

The formal expression of this argument would be the following:

T: We have developed sophisticated technological equipment.

B: We have developed biologically-inspired hardware.

M: We can build brain-like models which can be implemented in artificial cognitive agents.

P1: ¬(T → M)
P2: M → (T & B)
C: (T & B) → M
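The invalidity asserted in the next paragraph can also be checked mechanically. The following brute-force truth-table search (my sketch in Python, not part of Hill's text) runs over all eight valuations of T, B, and M and looks for one that makes both premises true and the conclusion false:

from itertools import product

def implies(p, q):
    return (not p) or q

# Search all eight valuations for one where both premises hold
# but the conclusion fails.
for T, B, M in product([False, True], repeat=3):
    p1 = not implies(T, M)       # P1: not (T -> M)
    p2 = implies(M, T and B)     # P2: M -> (T & B)
    c = implies(T and B, M)      # C:  (T & B) -> M
    if p1 and p2 and not c:
        print(f"counterexample: T={T}, B={B}, M={M}")
# Prints: counterexample: T=True, B=True, M=False

The counterexample it finds is the intuitive one: with the technology and hardware in place (T and B true) but no mind built (M false), both premises hold while the conclusion fails.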


As is obvious from how the argument is constructed, it is invalid. So, what we can say at this point is that the combination of both technical features and biologically-inspired neuromorphic hardware very likely does get us some way, but we might have to consider which elements are missing so that we really end up building what will be perceived as minds. I shall propose some possibilities in the following section. The author even supposes that we will be able to build artificial agents ready to rival humans in cognitive ability (Eliasmith this collection, p. 9). I am convinced that it is not cognitive artificial agents that will be the crucial hurdle, but rather their mindedness. I am also convinced that the huge amount of money spent on certain research projects will most likely result in improved models of the brain, as suggested by Eliasmith (ibid., p. 8), but it is not obvious to me how investing a vast amount of money necessarily results in relevant findings. It is also possible that no real progress will be made. Stating the opposite, which Eliasmith does not, would resemble a claim based on expertise as bulletproof evidence. Sure enough, monetary sources are needed to make progress, but they are no guarantee. So possibly technology, biological theories on the brain’s functioning, and money might not, in the end, lead to sophisticated cognitive agents being built (ibid.). The point is not that we should not invest money unless a positive outcome is guaranteed. Rather, we need a theoretical criterion for mindedness that is philosophically convincing, rather than merely robust but epistemically unjustified social hallucinations. This theoretical criterion is what we lack.

4 What could artificial minds be?

In this section, I intend to sketch some important issues and questions for the future debate on artificial minds. I shall examine whether predictions on the concept of artificial minds can be made at the present state of the debate and based on the empirical data we currently have. This involves knowledge about what a mind is, and knowledge about how an artificial mind is characterized. In reconstructing Eliasmith’s understanding of what a mind is, we may find the following statement informative: he relies on behavioral, theoretical, and similarity-based methods (this collection, p. 3). The possible problem with this approach is that the characterization of the methods is very limited. To point to some relevant questions: what is the behavior of a mind? What about the fact that mind is not even close to being well understood theoretically? How do similarity-based methods avoid drawing problematic conclusions from analogies (cf. Wild 2012)? Importantly, at this point we are only talking about natural, biologically-grounded minds. Answers as to what an artificial mind is supposed to be might exceed the concept of mind in ways we are unable to tell at the present moment.

Let us see how Eliasmith characterizes artificial minds. One can see this as a judgment based on the similarity of behavior originating from two types of agents: humans and artificial. Functions need to be developed that are necessary for building an artificial mind. These functions lead to a certain kind of behavior. This behavior is achieved by perceptive, motor, and cognitive skills, which are needed to make the behavior seem human-like. Thus, the functions implemented on sophisticated kinds of technology will, in the end, lead to human-like behavior (Eliasmith this collection, p. 9). The reason why the argumentative step from cognition, perception, and motor skills to mindedness can be made is the underlying assumption that the behavior resulting from these three types of skills is convincing behavior in our eyes (Eliasmith this collection, p. 10). Similarity judgments, so Eliasmith argues, might appear “hand-wavy”. Still, he uses them to reduce the complexity that mindedness brings with it (ibid., pp. 5–6), and he certainly succeeds in drawing attention to a whole range of important issues. However, it could well be that the reduction to human-like behavior as the benchmark for assessing mindedness is too simple. After all, analytical behaviorism today counts as a failed philosophical research program. There could be much more to mindedness than behavior. We just do not know what this is yet. As a possible candidate we might consider the previously mentioned psychological make-up of artificial agents, such as their being endowed with internal states like ours. One might think of robust first-person perspectives, but also about emotions like pain, disappointment, happiness, fear, and the ability to react to these. Other options include interoceptive awareness or the ability to interact socially—and much more.

5 What should we brace ourselves for?

Given the complexity of mindedness and our very limited understanding of what constitutes it, what else can we talk about? We could consider further possibilities of artificial systems that might arise, thereby enlarging the set of constraints that has to be satisfied. Some of them seem much more likely than artificial minds, and they might precede minds chronologically. I would like to focus on the idea of fragmented minds on the one hand and of postbiotic systems on the other, as two versions of artificial systems. An artificially-constructed fragmented mind is characterized by only partial satisfaction of the constraints fulfilled by a human mind. It could thus, very much like autistic persons with savant syndrome (i.e., more than average competence in a certain domain, e.g., language learning or music), possess only some of our cognitive functions, but be strikingly better at them than normal humans are and ever could be, given their biological endowment.1 Postbiotic minds, on the other hand, could satisfy additional constraints that are not yet apparent presently. I will conclude with some reflections on the new kind of ethics that will have to be created in order to approach new kinds of cognitive agents. As pointed out above, I assume that cognitive agents will be possible much earlier than truly minded agents. Learning, remembering, and other cognitive functions can already be recreated in artificial systems like Spaun. Still, human cognition is very versatile and complex. A fully minded agent, in contrast to a merely cognitive agent, might also be able to experience herself as a cognitive agent.

1 In that case, the variable B from above (biologically inspired hardware) would not be a necessary condition for finding out more about mindedness.

Therefore, I propose that cognitive systems could be created that do not yet qualify as a copy of our cognitive facilities, but which cover only parts of our cognitive setup. I call these fragmented minds. Importantly, the word minds does not refer here to the artificiality of the system at all. There are human beings with fragmented minds, too, such as babies, who do not yet display the cognitive abilities we ascribe to adult humans in general, or the aforementioned autistic humans with savant syndrome. Fragmented minds are contrasted with what we experience as normal human minds. Fragmented means that the created system possesses only part of the abilities that our mind displays. The term mind delineates the—historically contingent—point of reference that is human beings. How are fragmented minds further characterized? Eliasmith himself gives us an example: we could design a robot (an artificial mind) that gains fulfillment from serving humans (this collection, p. 11). This would only be possible if aspects of our own minds were not part of the mental landscape of this robot. One such aspect can be roughly formulated as the will to design one’s own life. Folk psychology would most likely regard this robot as lacking a free will, which is in conformity with the idea of slavery that Eliasmith acknowledges (ibid.). So a fragmented mind is an artificial system that possesses part of a biological cognitive system’s abilities instead of the rich landscape most higher animals (e.g., some fishes and birds, certainly mammals), as well as humans, display.

Related to the aspect of fragmented minds is the idea that we could refrain from creating minds that might cause us a lot of moral and practical trouble, and instead focus on building sophisticated robots designed to carry out specific kinds of tasks. Why do we need to create artificial minds? What is the additional value gained? If these robots are not mindful, we will circumvent the vast majority of conceptual and ethical problems, such as legal questions (What is their legal status compared to ours?) or ethical considerations (If I am not sure whether an artificial agent can perceive pain, how should I treat it in order to not cause harm?). In that case, they would only be more capable technology than what we know at present, and most likely be of no major concern for the philosophy of mind. However, if they are mindful, we doubtlessly have to think about new ways of approaching them ethically.

Also ethically relevant are intermediate systems, systems that are not clearly either natural or artificial. These systems have been called postbiotic systems (Metzinger 2012, p. 268). What characterizes postbiotic systems is the fact that they are made up of both natural and artificial parts, thus belonging to neither of the exhaustive categories “natural” or “artificial”. In that way a natural system, e.g., an animal, could be controlled by artificially-constructed hardware (as in hybrid bio-robotics); or, in the opposite case, artificial hardware could be equipped with biologically-inspired software, which works in very much the same way as neuronal computation (Metzinger 2012, pp. 268–270; Metzinger 2013, p. 4). Perhaps Eliasmith’s own brain-like model Spaun is a postbiotic system in this sense, too. In what way would these systems become ethically relevant? Although the postbiotic systems in existence today do not have the ability to subjectively experience themselves and the world around them, they might have it in the future. In being able to subjectively experience their surroundings, they are probably also able to experience the state of suffering (Metzinger 2013, p. 4). Everything that is able to consciously experience a frustration of preferences as a frustration of its own preferences automatically becomes an object of ethical consideration, according to this principle. For such cases, we have to think of ethical guidelines before we are confronted with a suffering postbiotic mind, which could emerge much earlier than we expect. Before thinking about how to implement something as complex and unpredictable as an artificial mind, one should consider what one does not want to generate. This could, for example, be the ability to suffer, the inability to judge and act according to ethical premises, or the possibility of the system developing itself further in a way that is not controllable by, and potentially dangerous for, humans.

6 Conclusion

In this commentary, I have played the “argumentation game” as my own version of Eliasmith’s “future game”. The intention behind this was to demonstrate that we very likely need more than sophisticated technology and biologically-inspired hardware to build brain-like models ready to be applied in artificial cognitive agents. As such, I playfully took Eliasmith’s considerations on the future of artificial minds as arguments, and demonstrated that they would result in a petitio principii. In so doing, I highlighted that necessary conditions do not have to be sufficient as well. While this is common philosophical currency, it is instructive to spell it out in the case of artificial agents. So in the present case, what constitutes artificial cognitive systems and what is needed to gain a deeper understanding of how the mind works might include more factors than the two crucial ones Eliasmith outlines, namely biological understanding and its implementation in highly-sophisticated technology. I proposed some possibilities that might turn out to be informative for future considerations on what constitutes an artificial mind. In particular, I mentioned experiential aspects, such as the perception of emotions and reactions to them, as well as internal perceptions like interoceptive awareness. In general, this means that we need theoretical criteria that are convincing for philosophy in order to move beyond reliance on robust yet convincing social hallucinations. Further, to illustrate that the distinction between natural and artificial systems might not be exhaustive, I pointed to the notions of fragmented minds and postbiotic systems as possible developments for the nearer future. They have to be considered, in particular with respect to their ethical implications, before they are developed and implemented in practice.

Even though we lack a more fine-grained, deeper understanding of what constitutes minds, Eliasmith shows us that it is worth thinking about what we already have at hand for constructing artificially-minded systems. He demonstrates vividly that two factors—technology and biology—are of major importance on the route to artificially-cognitive, if not minded, agents. And he brings into discussion a number of far-reaching consequences that will apply in case we do succeed in building artificial minds within the next five decades. These will inform the development of these artificial systems as well as philosophical debate, on an ethical as well as a theoretical level. In this way, Eliasmith’s contribution has to be regarded as significant in terms of preparing us for the decades to come.

Acknowledgements

First and foremost, I am grateful to Thomas Metzinger and Jennifer M. Windt for letting me be part of this project, thus providing me with a unique opportunity to gain valuable experience. Further special thanks go to the two anonymous reviewers, as well as the editorial reviewers, for their insightful comments on earlier versions of this paper. Lastly, I wish to express my gratitude to Anne-Kathrin Koch for sharing her expertise with me.

References

Eliasmith, C. (2015). On the eve of artificial minds. In T. Metzinger & J. M. Windt (Eds.) Open MIND. Frankfurt a. M., GER: MIND Group.

Graham, G. (Ed.) (2010). Behaviorism. The Stanford Encyclopedia of Philosophy. http://plato.stanford.edu/entries/behaviorism/

Metzinger, T. (2012). Der Ego-Tunnel: Eine neue Philosophie des Selbst: Von der Hirnforschung zur Bewusstseinsforschung. Berlin, GER: Bloomsbury.

Metzinger, T. (2013). Two principles for robot ethics. In E. Hilgendorf & J.-P. Günther (Eds.) Robotik und Gesetzgebung (pp. 263-302). Baden-Baden, GER: Nomos.

Shanks, N., Greek, R. & Greek, J. (2009). Are animal models predictive for humans? Philosophy, Ethics, and Humanities in Medicine, 4 (2), 1-20. 10.1186/1747-5341-4-2

Wild, M. (2012). Fische. Kognition, Bewusstsein und Schmerz: Beiträge zur Ethik und Biotechnologie. Bern, CH: Bundesamt für Bauten und Logistik BBL.


Mind Games
A Reply to Daniela Hill

Chris Eliasmith

In her discussion of my original article, Hill reconstructs an argument I may have been making, argues that the distinction between natural and artificial minds is not exclusive, and suggests that my reliance on behaviour as a determiner of “mindedness” is a dangerous slip back to philosophical behaviourism. In reply, I note that the logical fallacy of which I’m accused (circular reasoning) is not the one present in the reconstruction of my argument (besides the point), and offer a non-fallacious reconstruction. More importantly, I note that logical analysis does not seem appropriate for the discussion in the target article. I then agree that natural and artificial minds do not make up two discrete categories for mindedness. Finally, I note that my research program belies any behaviourist motivations, and reiterate that even though behaviour is typically important for identifying minds, I do not suggest that it is a substitute for theory. However, the target article is not about such theory, but about the near-term likelihood of sophisticated artificial minds.

Keywords
Artificial minds | Behaviourism | Logical analysis | Minds

Author
Chris Eliasmith
celiasmith@uwaterloo.ca
University of Waterloo, Waterloo, ON, Canada

Commentator
Daniela Hill
daniela.hill@gmx.net
Johannes Gutenberg-Universität Mainz, Germany

Editors
Thomas Metzinger
metzinger@uni-mainz.de
Johannes Gutenberg-Universität Mainz, Germany

Jennifer M. Windt
jennifer.windt@monash.edu
Monash University, Melbourne, Australia

1 Introduction

I think Hill is right to wonder aloud about my methodology in the target article. After all, I just ignored the hard philosophical issue of saying what minds really are. I pretended (somewhat self-consciously) that we all know what minds are, and so that we will simply be able to tell when someone has created one, if they ever do. But I did that for a reason. The reason was this: I did not want to get lost in the minutiae of metaphysics when my focus was on a technological revolution—one with significant philosophical consequences (which is also not to say I don’t like such minutiae in their proper time and place).

2 A failure of logic

However, Hill was also not especially taken by the reasons I provided for expecting such developments. Hill’s suggestion is that the best reasonable argument you could construct from my original considerations was fallacious. Though, Hill is quick to point out that I didn’t take myself to be constructing an argument: “…not to claim that Eliasmith really argues for the emergence of artificial minds” (this collection, p. 4).

Nevertheless, her analysis is that what I have provided is best understood as a petitio principii (aka a circular argument): “this means that the conclusion drawn at the end of the argumentative line is identical with at least one of the implicit premises” (Hill this collection, p. 4). Unfortunately, the technical analysis offered (p. 5) is a non sequitur (i.e., there is no logical connection between the premises and conclusion). Regardless, one fallacy is as embarrassing as the other.

However, I'd like to suggest that if we wanted to recast the original paper as a logical argument, then a simple modus ponens will do: if we have a good theory and the technological innovations necessary to implement the theory, then we can build a minded agent. We have good (and improving) theory and will have the proper technological innovations (in the next 50 years); therefore, we will be able to build a minded agent (in the next 50 years). Indeed, most of the paper argues for the plausibility of these premises.
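Schematically, and only as a minimal sketch of my own gloss (the labels T, I, and M are mine, not Hill's): let T be "we have a good theory," I be "we have the technological innovations needed to implement it," and M be "we can build a minded agent." The argument then has the valid form

\[
\frac{(T \wedge I) \rightarrow M \qquad T \wedge I}{M}\ \ \text{(modus ponens)}
\]

so any remaining dispute is over the plausibility of the premises, not over the validity of the inference.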

More to the point, however, I think that we can take this as an object lesson in when logical inference is really just the wrong kind of analysis of a paper. Instead of trying to provide a logical argument in which the conclusion necessarily follows from the premises, I am providing a series of considerations that I believe make the conclusion likely, given both the current state of affairs and expected changes. In short, I think of the original paper as providing something more like a series of inferences to the best explanation: all of which are, technically, fallacious; and all of which are directed at establishing premises.
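To make explicit why such inferences are "technically fallacious" (a standard observation in logic, offered here as my own gloss): deductively, an inference to the best explanation has the invalid shape of affirming the consequent,

\[
\frac{H \rightarrow E \qquad E}{H}\ \ \text{(affirming the consequent)}
\]

where H is a hypothesis and E the evidence it would explain. The inference can nonetheless be inductively strong when H is the best available explanation of E.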

3 Back to minds

Despite disagreeing with the analysis of the logical structure of the paper, I do appreciate the emphasis that Hill has placed on the philosophical and ethical aspects of our attempts to construct minds. In the original article, I only very briefly touch on those issues. However, I would be quick to point out that I do not think, and never intended to suggest, that the distinction between "natural" and "artificial minds" was an absolute, "exhaustive," or "exclusive" one (Hill this collection, p. 3). Like most interesting and complex features, possession of 'mindedness' no doubt comes in degrees. In fact, I think that our attempts to construct artificial minds will provide a much better sense of the dimensions along which such a continuum is best defined.

Finally, I must admit that I find it somewhat alarming that I'm being characterized as a behaviourist in Hill's article; that has definitely never happened before: "Let us see how Eliasmith characterizes artificial minds. One can see this as a judgment based on the similarity of behaviour originating from two types of agents: humans and artificial" (this collection, p. 5). Hence, I was espousing "analytical behaviorism… a failed philosophical research program" (Hill this collection, p. 6). Indeed, I, like all behavioural scientists, believe that behaviour is one important metric for characterizing the systems of interest. However, the reason I focus on internal mechanisms in my own research, all the way down to the neural level, is that I believe those mechanisms give us critical additional constraints for identifying the right class of algorithms that give rise to behaviours. Consequently, I wholeheartedly agree with Hill that "There could be much more to mindedness than behaviour" (this collection, p. 6). So, for the record, I believe that our best theories of how to build minds are going to be highly informed by low-level mechanisms.

That being said, I also think that most people's judgments of whether or not something counts as being minded are going to come down largely to their being convinced of the naturalness, or "cognitiveness," of the behaviour exhibited by the agents we construct. Notice that there is a difference between a claim about how people will judge mindedness and a claim about theories of mindedness, or about how we ought best to arrive at that judgment. Turing was, after all, onto something with his test.

4 Conclusion

I noted in the original article (Eliasmith this collection) that I was attempting to avoid becoming mired in tangential debates regarding what it is to have a mind by simplifying the criteria for mindedness (for the purposes of that article). Exactly the kinds of debates I was attempting to avoid are raised in Hill's commentary. For example, I don't think we know if there is a clean contrast between a "fully minded agent" and a "merely cognitive agent" (Hill this collection, p. 6). Perhaps there is, and perhaps it is that a fully minded agent can "experience herself as a cognitive agent" (Hill this collection, p. 6), but perhaps not. This does not strike me as a decidable question at present.

So, perhaps my unwillingness to venture into the murky waters of necessary and sufficient conditions for having a mind came off as making me look like a behaviourist. But in truth, my purpose was rather to focus on providing a variety of evidence that, I think, suggests that artificial minds are not as far away as some have assumed. There is, I believe, a historically unique confluence of theory, technology, and capital happening as we speak.

References

Eliasmith, C. (2015). On the eve of artificial minds. In T. Metzinger & J. M. Windt (Eds.), Open MIND. Frankfurt a. M., GER: MIND Group.

Hill, D. (2015). Future games - A commentary on Chris Eliasmith. In T. Metzinger & J. M. Windt (Eds.), Open MIND. Frankfurt a. M., GER: MIND Group.

Eliasmith, C. (2015). Mind Games - A Reply to Daniela Hill. In T. Metzinger & J. M. Windt (Eds.), Open MIND: 12(R). Frankfurt am Main: MIND Group. doi: 10.15502/9783958570788