Reinventing the Turing Test



TECHNOLOGYREVIEW.COM | MIT TECHNOLOGY REVIEW | VOL. 118 | NO. 3

Rewriting the "Imitation Game"

Some computer scientists are searching for more meaningful ways to measure artificial intelligence.

We have self-driving cars, knowledgeable digital assistants, and software capable of putting names to faces as well as any expert. But do these displays of machine aptitude represent genuine intelligence? For decades artificial-intelligence experts have struggled to find a practical way to answer the question.

"Asking whether an artificial entity is 'intelligent' is fraught with difficulties," says Mark Riedl, an associate professor at Georgia Tech. "Eventually a self-driving car will outperform human drivers, so we can even say that along one dimension, an AI is super-intelligent. But we might also say that it is an idiot savant, because it cannot do anything else, like recite a poem or solve an algebra problem."

The most famous effort to measure machine intelligence does not resolve the challenges involved; instead, it obscures them. In his 1950 paper "Computing Machinery and Intelligence," the British computer scientist Alan Turing considered the capacity of computers and turned to a black-box definition: if we accept humans as an intelligent species, then anything that exhibits behaviors indistinguishable from human behavior must also be intelligent.

Turing also proposed a test, called the "imitation game," in which a computer would prove its intelligence by convincing a person, through conversation, that it was also human. In the years since, the Turing test has been widely adopted and also widely criticized, not because of flaws in Turing's original idea, but because of flaws in its execution.

A chatbot called Eugene Goostman made headlines last June for passing the Turing test in a contest organized at the University of Reading in the U.K. The software convinced 30 percent of the human judges involved that it was human. But the chatbot relies on obfuscation and subterfuge rather than the natural back and forth of intelligent conversation.

"Turing's original description mandated a freewheeling conversation that could range over any subject, and there was no nonsense allowed," says Leora Morgenstern, an expert on AI who works at Leidos, a defense contractor headquartered in Virginia. AI's earliest proponents hoped to work toward some form of general intelligence. But as the complexity of the task unfurled, research fractured into smaller, more manageable tasks. This produced progress, but it also turned machine intelligence into something that could not easily be compared with human intellect.

Most AI researchers still pursue highly specialized areas, but some are turning their attention back to generalized intelligence and considering new ways to measure progress. For Morgenstern, a machine will demonstrate intelligence only when it can show that once it knows one intellectually challenging task, it can easily learn another related task. Riedl agrees that the test should be broad: "Conversation is just one aspect of human intelligence. Creativity is another. Problem solving and knowledge are others."

Riedl has designed an alternative to the Turing test, which he has dubbed the Lovelace 2.0 test (a reference to Ada Lovelace, a 19th-century English mathematician who programmed a seminal calculating machine). Riedl's test would focus on creative intelligence, with a human judge challenging a computer to create something (a story, poem, or drawing) and gradually making the task harder.

This test might not be the ideal successor to the Turing test. But it may offer a better way to understand machine intelligence than any simple test. "Who is to say being above a certain score is intelligent or being below is unintelligent?" Riedl says. "Would we ever ask such a question of humans?"

Simon Parkin

    SARAH MAZZETTI



Copyright of Technology Review is the property of MIT Technology Review and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use.