A (Very) Brief History of Artificial Intelligence


In this brief history, the beginnings of artificial intelligence are traced to philosophy, fiction, and imagination. Early inventions in electronics, engineering, and many other disciplines have influenced AI. Some early milestones include work in problem solving, which included basic work in learning, knowledge representation, and inference, as well as demonstration programs in language understanding, translation, theorem proving, associative memory, and knowledge-based systems. The article ends with a brief examination of influential organizations and current issues facing the field.

The history of AI is a history of fantasies, possibilities, demonstrations, and promise. Ever since Homer wrote of mechanical “tripods” waiting on the gods at dinner, imagined mechanical assistants have been a part of our culture. However, only in the last half century have we, the AI community, been able to build experimental machines that test hypotheses about the mechanisms of thought and intelligent behavior and thereby demonstrate mechanisms that formerly existed only as theoretical possibilities. Although achieving full-blown artificial intelligence remains in the future, we must maintain the ongoing dialogue about the implications of realizing the promise.1

Philosophers have floated the possibility of intelligent machines as a literary device to help us define what it means to be human. René Descartes, for example, seems to have been more interested in “mechanical man” as a metaphor than as a possibility. Gottfried Wilhelm Leibniz, on the other hand, seemed to see the possibility of mechanical reasoning devices using rules of logic to settle disputes. Both Leibniz and Blaise Pascal designed calculating machines that mechanized arithmetic, which had hitherto been the province of learned men called “calculators,” but they never made the claim that the devices could think. Étienne Bonnot, Abbé de Condillac, used the metaphor of a statue into whose head we poured nuggets of knowledge, asking at what point it would know enough to appear to be intelligent.

Science fiction writers have used the possibility of intelligent machines to advance the fantasy of intelligent nonhumans, as well as to make us think about our own human characteristics. Jules Verne in the nineteenth century and Isaac Asimov in the twentieth are the best known, but there have been many others, including L. Frank Baum, who gave us the Wizard of Oz. Baum wrote of several robots and described the mechanical man Tik-Tok in 1907, for example, as an “Extra-Responsive, Thought-Creating, Perfect-Talking Mechanical Man … Thinks, Speaks, Acts, and Does Everything but Live.” These writers have inspired many AI researchers.

Robots, and artificially created beings such as the Golem in Jewish tradition and Mary Shelley’s Frankenstein, have always captured the public’s imagination, in part by playing on our fears. Mechanical animals and dolls—including a mechanical trumpeter for which Ludwig van Beethoven wrote a fanfare—were actually built from clockwork mechanisms in the seventeenth century. Although they were obviously limited in their performance and were intended more as curiosities than as demonstrations of thinking, they provided some initial credibility to mechanistic views of behavior and to the idea that such behavior need not be feared. As the industrial world became more mechanized, machinery became more sophisticated and more commonplace. But it was still essentially clockwork.

25th Anniversary Issue

WINTER 2005, page 53. Copyright © 2005, American Association for Artificial Intelligence. All rights reserved. 0738-4602-2005 / $2.00

A (Very) Brief History of Artificial Intelligence
Bruce G. Buchanan

Chess is quite obviously an enterprise that requires thought. It is not too surprising, then, that chess-playing machines of the eighteenth and nineteenth centuries, most notably “the Turk,” were exhibited as intelligent machines and even fooled some people into believing the machines were playing autonomously. Samuel L. Clemens (“Mark Twain”) wrote in a newspaper column, for instance, that the Turk must be a machine because it played so well! Chess was widely used as a vehicle for studying inference and representation mechanisms in the early decades of AI work. (A major milestone was reached when the Deep Blue program defeated the world chess champion, Garry Kasparov, in 1997 [McCorduck 2004].)

The Turk, from a 1789 engraving by Freiherr Joseph Friedrich zu Racknitz.

With early twentieth-century inventions in electronics and the post–World War II rise of modern computers in Alan Turing’s laboratory in Manchester, the Moore School at Penn, Howard Aiken’s laboratory at Harvard, the IBM and Bell laboratories, and others, possibilities have given way to demonstrations. As a result of their awesome calculating power, computers in the 1940s were frequently referred to as “giant brains.”

Although robots have always been part of the public’s perception of intelligent computers, early robotics efforts had more to do with mechanical engineering than with intelligent control. Recently, though, robots have become powerful vehicles for testing our ideas about intelligent behavior. Moreover, giving robots enough common knowledge about everyday objects to function in a human environment has become a daunting task. It is painfully obvious, for example, when a moving robot cannot distinguish a stairwell from a shadow. Nevertheless, some of the most resounding successes of AI planning and perception methods are in NASA’s autonomous vehicles in space. DARPA’s grand challenge for autonomous vehicles was recently won by a Stanford team, with 5 of 23 vehicles completing the 131.2-mile course.2

On October 8, 2005, the Stanford Racing Team’s autonomous robotic car, Stanley, won the Defense Advanced Research Projects Agency’s (DARPA) Grand Challenge. The car traversed the off-road desert course southwest of Las Vegas in a little less than seven hours. (Photo courtesy, DARPA.)

Mars Rover. (Photo courtesy, NASA.)

But AI is not just about robots. It is also about understanding the nature of intelligent thought and action using computers as experimental devices. By 1944, for example, Herb Simon had laid the basis for the information-processing, symbol-manipulation theory of psychology:

“Any rational decision may be viewed as a conclusion reached from certain premises…. The behavior of a rational person can be controlled, therefore, if the value and factual premises upon which he bases his decisions are specified for him.” (Quoted in the appendix to Newell and Simon [1972].)

AI in its formative years was influenced by ideas from many disciplines. These came from people working in engineering (such as Norbert Wiener’s work on cybernetics, which includes feedback and control), biology (for example, W. Ross Ashby and Warren McCulloch and Walter Pitts’s work on neural networks in simple organisms), experimental psychology (see Newell and Simon [1972]), communication theory (for example, Claude Shannon’s theoretical work), game theory (notably by John von Neumann and Oskar Morgenstern), mathematics and statistics (for example, Irving J. Good), logic and philosophy (for example, Alan Turing, Alonzo Church, and Carl Hempel), and linguistics (such as Noam Chomsky’s work on grammar). These lines of work made their mark and continue to be felt, and our collective debt to them is considerable. But having assimilated much, AI has grown beyond them and has, in turn, occasionally influenced them.

Only in the last half century have we had computational devices and programming languages powerful enough to build experimental tests of ideas about what intelligence is. Turing’s seminal 1950 paper in the philosophy journal Mind is a major turning point in the history of AI. The paper crystallizes ideas about the possibility of programming an electronic computer to behave intelligently, including a description of the landmark imitation game that we know as Turing’s Test. Vannevar Bush’s 1945 paper in the Atlantic Monthly lays out a prescient vision of possibilities, but Turing was actually writing programs for a computer—for example, to play chess, as laid out in Claude Elwood Shannon’s 1950 proposal.

Early programs were necessarily limited in scope by the size and speed of memory and processors and by the relative clumsiness of the early operating systems and languages. (Memory management, for example, was the programmer’s problem until the invention of garbage collection.) Symbol-manipulation languages such as Lisp, IPL, and POP and time-sharing systems—on top of hardware advances in both processors and memory—gave programmers new power in the 1950s and 1960s. Nevertheless, there were numerous impressive demonstrations of programs actually solving problems that only intelligent people had previously been able to solve.

While early conference proceedings contain descriptions of many of these programs, the first book collecting descriptions of working AI programs was Edward Feigenbaum and Julian Feldman’s 1963 collection, Computers and Thought.

Arthur Samuel’s checker-playing program, described in that collection but written in the 1950s, was a tour-de-force given both the limitations of the IBM 704 hardware for which the program was written as a checkout test and the limitations of the assembly language in which it was written. Checker playing requires modest intelligence to understand and considerable intelligence to master. Samuel’s program (since outperformed by the Chinook program) is all the more impressive because the program learned through experience to improve its own checker-playing ability—from playing human opponents and playing against other computers. Whenever we try to identify what lies at the core of intelligence, learning is sure to be mentioned (see, for example, Marvin Minsky’s 1961 paper “Steps Toward Artificial Intelligence”).

Herb Simon.

John McCarthy.

Allen Newell, J. Clifford Shaw, and Herb Simon were also writing programs in the 1950s that were ahead of their time in vision but limited by the tools. Their LT program was another early tour-de-force, startling the world with a computer that could invent proofs of logic theorems—which unquestionably requires creativity as well as intelligence. It was demonstrated at the 1956 Dartmouth conference on artificial intelligence, the meeting that gave AI its name.

Newell and Simon (1972) acknowledge the convincingness of Oliver Selfridge’s early demonstration of a symbol-manipulation program for pattern recognition (see Feigenbaum and Feldman [1963]). Selfridge’s work on learning and a multiagent approach to problem solving (later known as blackboards), plus the work of others in the early 1950s, were also impressive demonstrations of the power of heuristics. The early demonstrations established a fundamental principle of AI to which Simon gave the name “satisficing”:

In the absence of an effective method guaranteeing the solution to a problem in a reasonable time, heuristics may guide a decision maker to a very satisfactory, if not necessarily optimal, solution. (See also Polya [1945].)
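Satisficing is easy to see in code. The sketch below is purely illustrative (the function name and the route-picking example are invented for this article, not drawn from any program described here): it accepts the first candidate that clears an aspiration level rather than scoring every candidate in search of the optimum.

```python
def satisfice(candidates, score, aspiration):
    """Return the first candidate whose score meets the aspiration level.

    This trades optimality for bounded effort: search stops as soon as a
    'good enough' solution appears, instead of examining all candidates.
    """
    for candidate in candidates:
        if score(candidate) >= aspiration:
            return candidate
    return None  # no satisfactory candidate found


# Example: pick any route taking 65 minutes or less (scores are
# negative minutes, so higher is better).
routes = [("scenic", 95), ("highway", 62), ("back roads", 70)]
choice = satisfice(routes, score=lambda r: -r[1], aspiration=-65)
# choice is ("highway", 62): acceptable, though not necessarily optimal.
```

Note that the scenic route is rejected and the back-roads route is never even examined; the first acceptable option wins.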

Minsky (1968) summarized much of thework in the first decade or so after 1950:

“The most central idea of the pre-1962 period was that of finding heuristic devices to control the breadth of a trial-and-error search. A close second preoccupation was with finding effective techniques for learning. In the post-1962 era the concern became less with ‘learning’ and more with the problem of representation of knowledge (however acquired) and with the related problem of breaking through the formality and narrowness of the older systems. The problem of heuristic search efficiency remains as an underlying constraint, but it is no longer the problem one thinks about, for we are now immersed in more sophisticated subproblems, e.g., the representation and modification of plans” (Minsky 1968, p. 9).
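The “breadth control” Minsky describes survives in modern methods such as beam search, sketched below as an illustrative rendering (this is not code from the period, and the toy string-building task is invented for this article): at each step only the few most promising partial solutions survive, so the heuristic score bounds how wide the trial-and-error frontier can grow.

```python
import heapq


def beam_search(start, expand, score, is_goal, beam_width=3, max_steps=20):
    """Trial-and-error search whose breadth is capped by a heuristic.

    expand(state) yields successor states; score(state) rates promise
    (higher is better); only the beam_width best states survive each step.
    """
    frontier = [start]
    for _ in range(max_steps):
        for state in frontier:
            if is_goal(state):
                return state
        successors = [s for state in frontier for s in expand(state)]
        if not successors:
            return None
        # Heuristic pruning: keep only the most promising candidates.
        frontier = heapq.nlargest(beam_width, successors, key=score)
    return None


# Toy example: build the string "aaaa" one letter at a time, scoring
# partial strings by how many positions already match the target.
target = "aaaa"
found = beam_search(
    start="",
    expand=lambda s: [s + c for c in "ab"] if len(s) < len(target) else [],
    score=lambda s: sum(1 for a, b in zip(s, target) if a == b),
    is_goal=lambda s: s == target,
)
# found == "aaaa", reached without enumerating all 2**4 candidate strings.
```

With beam_width=1 this degenerates to greedy hill climbing; with an unbounded beam it becomes exhaustive breadth-first search, which is exactly the explosion the early heuristics were meant to avoid.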


Marvin Minsky.

Oliver Selfridge.


Minsky’s own work on network representations of knowledge in frames and what he calls the “society of mind” has directed much research since then. Knowledge representation—both the formal and informal aspects—has become a cornerstone of every AI program. John McCarthy’s important 1958 paper, “Programs with Common Sense” (reprinted in Minsky [1968]), makes the case for a declarative knowledge representation that can be manipulated easily. McCarthy has been an advocate for using formal representations, in particular extensions to predicate logic, ever since. Research by McCarthy and many others on nonmonotonic reasoning and default reasoning, as in planning under changing conditions, gives us important insights into what is required for intelligent action and defines much of the formal theory of AI.

GPS (by Newell, Shaw, and Simon) and much of the other early work was motivated by psychologists’ questions and experimental methods (Newell and Simon 1972). Feigenbaum’s EPAM, completed in 1959, for example, explored associative memory and forgetting in a program that replicated the behavior of subjects in psychology experiments (Feigenbaum and Feldman 1963). Other early programs at Carnegie Mellon University (then Carnegie Tech) deliberately attempted to replicate the reasoning steps, including the mistakes, taken by human problem solvers in puzzles such as cryptarithmetic and in selecting stocks for investment portfolios. Production systems, and subsequent rule-based systems, were originally conceived as simulations of human manipulations of symbols in long-term and short-term memory. Donald Waterman’s 1970 dissertation at Stanford used one production system to play draw poker and another to learn how to play better.
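The production systems described above reduce to a small interpreter: a working memory of symbols and a set of condition–action rules fired repeatedly until quiescence. The sketch below is a hypothetical illustration (the kinship rules are invented; Waterman’s actual poker productions are not reproduced here):

```python
def run_production_system(working_memory, rules, max_cycles=100):
    """Repeatedly fire the first productive rule until quiescence.

    Each rule is a (condition, action) pair: condition tests working
    memory (a set of symbol strings); action returns symbols to add.
    """
    wm = set(working_memory)
    for _ in range(max_cycles):
        fired = False
        for condition, action in rules:
            if condition(wm):
                new = action(wm) - wm
                if new:
                    wm |= new
                    fired = True
                    break  # conflict resolution: first productive rule wins
        if not fired:
            break  # quiescence: no rule adds anything new
    return wm


# Toy kinship rules in the spirit of early symbol-manipulation demos.
rules = [
    (lambda wm: "parent(tom,bob)" in wm,
     lambda wm: {"ancestor(tom,bob)"}),
    (lambda wm: "ancestor(tom,bob)" in wm and "parent(bob,ann)" in wm,
     lambda wm: {"ancestor(tom,ann)"}),
]
memory = run_production_system({"parent(tom,bob)", "parent(bob,ann)"}, rules)
# memory now also contains ancestor(tom,bob) and ancestor(tom,ann).
```

The fixed "first rule wins" policy here is the simplest conflict-resolution strategy; real production systems experimented with recency- and specificity-based strategies to better model human memory.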

Thomas Evans’s 1963 thesis on solving analogy problems of the sort given on standardized IQ tests was the first to explore analogical reasoning with a running program. James Slagle’s dissertation program used collections of heuristics to solve symbolic integration problems from freshman calculus. Other impressive demonstrations coming out of dissertation work at MIT in the early 1960s by Danny Bobrow, Bert Raphael, Ross Quillian, and Fischer Black are described in Minsky’s collection, Semantic Information Processing (Minsky 1968).

Language understanding and translation were at first thought to be straightforward, given the power of computers to store and retrieve words and phrases in massive dictionaries. Some comical examples of failures of the table-lookup approach to translation provided critics with enough ammunition to stop funding for machine translation for many years. Danny Bobrow’s work showed that computers could use the limited context of algebra word problems to understand them well enough to solve problems that would challenge many adults. Additional work by Robert F. Simmons, Robert Lindsay, Roger Schank, and others similarly showed that understanding—even some translation—was achievable in limited domains. Although the simple look-up methods originally proposed for translation did not scale up, recent advances in language understanding and generation have moved us considerably closer to having conversant nonhuman assistants. Commercial systems for translation, text understanding, and speech understanding now draw on considerable understanding of semantics and context as well as syntax.

The Original Dendral Team, Twenty-Five Years Later. (Photograph courtesy, National Library of Medicine.)

Donald Michie.

Another turning point came with the development of knowledge-based systems in the 1960s and early 1970s. Ira Goldstein and Seymour Papert (1977) described the demonstrations of the Dendral program (Lindsay et al. 1980) in the mid-1960s as a “paradigm shift” in AI toward knowledge-based systems. Prior to that, logical inference, and resolution theorem proving in particular, had been more prominent. Mycin (Buchanan and Shortliffe 1984) and the thousands of expert systems following it became visible demonstrations of the power of small amounts of knowledge to enable intelligent decision-making programs in numerous areas of importance. Although limited in scope, in part because of the effort to accumulate the requisite knowledge, their success in providing expert-level assistance reinforces the old adage that knowledge is power.

The 1960s were also a formative time for organizations supporting the enterprise of AI. The initial two major academic laboratories were at the Massachusetts Institute of Technology (MIT) and CMU (then Carnegie Tech, working with the Rand Corporation), with AI laboratories at Stanford and Edinburgh established soon after. Donald Michie, who had worked with Turing, organized one of the first, if not the first, annual conference series devoted to AI, the Machine Intelligence workshops first held in Edinburgh in 1965. About the same time, in the mid-1960s, the Association for Computing Machinery’s Special Interest Group on Artificial Intelligence (ACM SIGART) began as an early forum for people in disparate disciplines to share ideas about AI. The international conference organization, IJCAI, started its biennial series in 1969. AAAI grew out of these efforts and was formed in 1980 to provide annual conferences for the North American AI community. Many other countries have subsequently established similar organizations.

AAAI Today

Founded in 1980, the American Association for Artificial Intelligence has expanded its service to the AI community far beyond the National Conference. Today, AAAI offers members and AI scientists a host of services and benefits:

■ The National Conference on Artificial Intelligence promotes research in AI and scientific interchange among AI researchers, practitioners, and scientists and engineers in related disciplines. (www.aaai.org/Conferences/National/)

■ The Conference on Innovative Applications of Artificial Intelligence highlights successful applications of AI technology; explores issues, methods, and lessons learned in the development and deployment of AI applications; and promotes an interchange of ideas between basic and applied AI. (www.aaai.org/Conferences/IAAI/)

■ The Artificial Intelligence and Interactive Digital Entertainment Conference is intended to be the definitive point of interaction between entertainment software developers interested in AI and academic and industrial researchers. (www.aaai.org/Conferences/AIIDE/)

■ AAAI’s Spring and Fall Symposia (www.aaai.org/Symposia/) and Workshops (www.aaai.org/Workshops/) programs afford participants a smaller, more intimate setting where they can share ideas and learn from each other’s AI research.

■ AAAI’s Digital Library (www.aaai.org/Library) and Online Services (www.aaai.org/Resources) include a host of resources for the AI professional (including more than 12,000 papers), individuals with only a general interest in the field (www.aaai.org/AITopics), as well as the professional press (www.aaai.org/Pressroom).

■ AAAI Press, in conjunction with The MIT Press, publishes selected books on all aspects of AI (www.aaai.org/Press).

■ The AI Topics web site gives students and professionals alike links to many online resources on AI (www.aaai.org/AITopics).

■ AAAI Scholarships benefit students and foster new programs, meetings, and other AI programs. AAAI also recognizes those who have made significant contributions to the science of AI and AAAI through an extensive awards program (www.aaai.org/Awards).

■ AI Magazine, called the “journal of record for artificial intelligence,” has been published internationally for 25 years (www.aaai.org/Magazine).

■ AAAI’s Sponsored Journals program (www.aaai.org/Publications/Journals/) gives AAAI members discounts on many of the top AI journals.

www.aaai.org

In the decades after the 1960s the demonstrations have become more impressive, and our ability to understand their mechanisms has grown. Considerable progress has been achieved in understanding common modes of reasoning that are not strictly deductive, such as case-based reasoning, analogy, induction, reasoning under uncertainty, and default reasoning. Contemporary research on intelligent agents and autonomous vehicles, among others, shows that many methods need to be integrated in successful systems.

There is still much to be learned. Knowledge representation and inference remain the two major categories of issues that need to be addressed, as they were in the early demonstrations. Ongoing research on learning, reasoning with diagrams, and integration of diverse methods and systems will likely drive the next generation of demonstrations.

With our successes in AI, however, comes increased responsibility to consider the societal implications of technological success and to educate decision makers and the general public so they can plan for them. The issues our critics raise must be taken seriously. These include job displacement, failures of autonomous machines, loss of privacy, and the issue we started with: the place of humans in the universe. On the other hand, we do not want to give up the benefits that AI can bring, including less drudgery in the workplace, safer manufacturing and travel, increased security, and smarter decisions to preserve a habitable planet.

The fantasy of intelligent machines still lives even as we accumulate evidence of the complexity of intelligence. It lives in part because we are dreamers. The evidence from working programs and limited successes points not only to what we don’t know but also to some of the methods and mechanisms we can use to create artificial intelligence for real. However, we, like our counterparts in biology creating artificial life in the laboratory, must remain reverent of the phenomena we are trying to understand and replicate.

Acknowledgments

My thanks to Haym Hirsh, David Leake, Ed Feigenbaum, Jon Glick, and others who commented on early drafts. They, of course, bear no responsibility for errors.

Notes

1. An abbreviated history necessarily leaves out many key players and major milestones. My apologies to the many whose work is not mentioned here. The AAAI website and the books cited contain other accounts, filling in many of the gaps left here.

2. DARPA’s support for AI research on fundamental questions as well as robotics has sustained much AI research in the U.S. for many decades.

References and Some Places to Start

AAAI. 2005. AI Topics Website (www.aaai.org/aitopics/history). Menlo Park, CA: American Association for Artificial Intelligence.

Blake, D. V., and Uttley, A. M., eds. 1959. Mechanisation of Thought Processes: Proceedings of a Symposium Held at the National Physical Laboratory on 24th, 25th, 26th, and 27th November, 1958. London: Her Majesty’s Stationery Office.

Bowden, B. V., ed. 1953. Faster Than Thought: A Symposium on Digital Computing Machines. New York: Pitman.

Buchanan, B. G., and Shortliffe, E. H. 1984. Rule-Based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project. Reading, MA: Addison-Wesley.

Bush, V. 1945. As We May Think. Atlantic Monthly 176(7): 101.

Cohen, J. 1966. Human Robots in Myth and Science. London: George Allen & Unwin.

Feigenbaum, E. A., and Feldman, J. 1963. Computers and Thought. New York: McGraw-Hill (reprinted by AAAI Press).

Goldstein, I., and Papert, S. 1977. Artificial Intelligence, Language, and the Study of Knowledge. Cognitive Science 1(1).

Lindsay, R. K.; Buchanan, B. G.; Feigenbaum, E. A.; and Lederberg, J. 1980. Applications of Artificial Intelligence for Chemical Inference: The DENDRAL Project. New York: McGraw-Hill.

McCorduck, P. 2004. Machines Who Think: Twenty-Fifth Anniversary Edition. Natick, MA: A. K. Peters, Ltd.

Minsky, M. 1961. Steps Toward Artificial Intelligence. Proceedings of the Institute of Radio Engineers 49: 8–30. New York: Institute of Radio Engineers. Reprinted in Feigenbaum and Feldman (1963).

Minsky, M. 1968. Semantic Information Processing. Cambridge, MA: MIT Press.

Newell, A., and Simon, H. 1972. Human Problem Solving. Englewood Cliffs, NJ: Prentice-Hall.

Polya, G. 1945. How To Solve It. Princeton, NJ: Princeton University Press.

Samuel, A. L. 1959. Some Studies in Machine Learning Using the Game of Checkers. IBM Journal of Research and Development 3: 210–229. Reprinted in Feigenbaum and Feldman (1963).

Shannon, C. E. 1950. Programming a Computer for Playing Chess. Philosophical Magazine 41: 256–275.

Turing, A. M. 1950. Computing Machinery and Intelligence. Mind 59: 433–460. Reprinted in Feigenbaum and Feldman (1963).

Winston, P. 1988. Artificial Intelligence: An MIT Perspective. Cambridge, MA: MIT Press.

Bruce G. Buchanan was a founding member of AAAI, secretary-treasurer from 1986–1992, and president from 1999–2001. He received a B.A. in mathematics from Ohio Wesleyan University (1961) and M.S. and Ph.D. degrees in philosophy from Michigan State University (1966). He is University Professor Emeritus at the University of Pittsburgh, where he has joint appointments with the Departments of Computer Science, Philosophy, and Medicine and the Intelligent Systems Program. He is a fellow of the American Association for Artificial Intelligence (AAAI), a fellow of the American College of Medical Informatics, and a member of the National Academy of Sciences Institute of Medicine. His e-mail address is [email protected].