Beating Common Sense into Interactive Applications

Henry Lieberman, Hugo Liu, Push Singh, Barbara Barry

AI Magazine, Winter 2004. Copyright © 2004, American Association for Artificial Intelligence. All rights reserved.

A long-standing dream of artificial intelligence has been to put commonsense knowledge into computers—enabling machines to reason about everyday life. Some projects, such as Cyc, have begun to amass large collections of such knowledge. However, it is widely assumed that the use of common sense in interactive applications will remain impractical for years, until these collections can be considered sufficiently complete and commonsense reasoning sufficiently robust. Recently, at the Massachusetts Institute of Technology's Media Laboratory, we have had some success in applying commonsense knowledge in a number of intelligent interface agents, despite the admittedly spotty coverage and unreliable inference of today's commonsense knowledge systems. This article surveys several of these applications and reflects on interface design principles that enable successful use of commonsense knowledge.

Things fall down, not up. Weddings (sometimes) have a bride and a groom. If someone yells at you, they're probably angry. One of the reasons that computers seem dumber than humans is that they don't have common sense—a myriad of simple facts about everyday life and the ability to make use of that knowledge easily when appropriate. A long-standing dream of artificial intelligence has been to put that kind of knowledge into computers, but applications of commonsense knowledge have been slow in coming.

Researchers like Minsky (2000) and Lenat (1995), recognizing the importance of commonsense knowledge, have proposed that common sense constitutes the bottleneck for making intelligent machines, and they advocate working directly to amass large collections of such knowledge and heuristics for using it.

Considerable progress has been made over the last few years. There are now large knowledge bases of commonsense knowledge and better ways of using it than we have had before. We may have become too used to putting common sense in that category of "impossible" problems and overlooked opportunities to actually put this kind of knowledge to work. We need to explore new interface designs that don't require complete solutions to the commonsense problem but can make good use of partial knowledge and human-computer collaboration.

As the complexity of computer applications grows, it may be that the only way to make applications more helpful and avoid stupid mistakes and annoying interruptions is to make use of commonsense knowledge. Cellular telephones should know enough to switch to vibrate mode if you're at the symphony. Calendars should warn you if you try to schedule a meeting at 2 AM or plan to take a vegetarian to a steak house. Cameras should realize that if you took a group of pictures within a span of two hours, at around the same location, they are probably of the same event.

Initial experimentation with using common sense encountered significant obstacles. First, despite the vast amount of effort put into commonsense knowledge bases, coverage is still sparse relative to the amount of knowledge humans typically bring to bear. Second, inference with such knowledge is still unreliable, due to vagueness, exceptional cases, logical paradoxes, and other problems.

Question-Answering Versus Interface Agent Applications

Many early attempts at applying common sense fell into the category of question-answering, story-understanding, or information-retrieval problems. The hope was that use of commonsense inference would improve results beyond what was possible with simple keyword matching or statistical methods. See the sidebar "Other Applications of Commonsense Reasoning" for some examples of these kinds of applications.

When direct question-answering using commonsense inference works, this is great. But direct question-answering places very exacting demands on a system.

First, the user is expecting a direct answer. If the answer is good, the user will be happy; if the answer is not, the user will be critical of the system. If accuracy falls below a certain threshold in the long term, the user will give up using the system completely. Second, the system gets only one shot at finding the correct answer, and it must do so quickly enough to maintain the feeling of interactivity (no more than a few seconds).

Over the last few years, we have been working in the area of intelligent interface agents (Maes 1994). An interface agent is an AI program that attaches itself to a conventional interactive application (text or graphical editor, Web browser, spreadsheet, and so forth), watches the user's interactions, and is capable of operating the interface as would the user. The jobs of the agent are to provide help, assistance, suggestions, automation of common tasks, adaptation, and personalization of the interface.

Our experience has been that interface agents can use commonsense knowledge much more effectively than direct question-answering applications can because they place fewer demands on the system. Because all the capabilities of the interactive application remain available for the user to use in a conventional manner, it is no big deal if commonsense knowledge does not cover a particular situation. If a commonsense inference turns out wrong, users are often no worse off than they would be without any assistance.

The user is not expecting a direct answer to every action, only that the agent will come up with something helpful every once in a while. Since the agent operates in a continuous, long-term manner, if it cannot respond immediately, it can gather further evidence and perhaps deliver a meaningful interaction in the future. If the agent's knowledge is not sufficient, it can ask the user to fill in the gaps.

In short, the use of common sense in interface agents can be made "fail-soft." Interface agents are often proactive, "pushing" information rather than "pulling" it as query-response systems do, and it is easier to make the former kind of agents fail-soft.

Applications of Common Sense in Interface Agents

The remainder of this article will survey several of our lab's recent projects in this area to illustrate the principles above. Except where noted, these applications were built using knowledge drawn from Open Mind Common Sense (OMCS) (see the sidebar "The Open Mind Common Sense Project"), a commonsense knowledge base of over 725,000 natural language assertions built from the contributions of more than 15,000 people over the World Wide Web (Singh 2002). Many of these applications made use of early versions of ConceptNet, a semantic network of 1.6 million relations extracted from the OMCS corpus with 20 link types covering taxonomic, meronomic, temporal, spatial, causal, functional, and other kinds of relations.

We should note that all of the applications described below have the status of early prototypes. While none has had large-scale commercial deployment, many have fared well in small-scale user testing versus conventional applications. Details of testing are supplied in the references.
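To give a concrete, deliberately simplified picture of the kind of knowledge these applications consume, the following Python sketch builds a tiny ConceptNet-style semantic network from a handful of invented assertions and looks up the concepts related to a term. The assertions, relation names, and functions are illustrative assumptions, not the actual OMCS data or the ConceptNet API.

    # Minimal sketch of a ConceptNet-style semantic network.
    # The assertions below are invented examples, not real OMCS/ConceptNet data.
    from collections import defaultdict

    ASSERTIONS = [
        ("wedding", "HasSubevent", "exchange rings"),
        ("wedding", "AtLocation", "church"),
        ("bride", "PartOf", "wedding"),
        ("groom", "PartOf", "wedding"),
        ("oven", "AtLocation", "kitchen"),
        ("oven", "UsedFor", "bake bread"),
    ]

    def build_index(assertions):
        """Index assertions by concept so related concepts can be retrieved."""
        index = defaultdict(list)
        for head, relation, tail in assertions:
            index[head].append((relation, tail))
            index[tail].append((relation + "~", head))  # inverse direction
        return index

    def related_concepts(index, concept):
        """Return the concepts one link away from the given concept."""
        return list(index.get(concept, []))

    if __name__ == "__main__":
        kb = build_index(ASSERTIONS)
        print(related_concepts(kb, "wedding"))
        # [('HasSubevent', 'exchange rings'), ('AtLocation', 'church'),
        #  ('PartOf~', 'bride'), ('PartOf~', 'groom')]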


Common Sense in an Agent for Digital Photography

With the annotation and retrieval integration agent (Aria) (figure 1) (Lieberman, Rosenzweig, and Singh 2001), we attempt to leverage commonsense knowledge to semiautomatically annotate photos and proactively suggest relevant photos (Lieberman and Liu 2002). Aria observes a user as she or he types a story, parses the text in real time, and continuously displays a relevance-ordered list of photos. When the user inserts photos in text, the system automatically annotates the photos with relevant keywords. The basic Aria interface tested well in studies at the Kodak usability lab when compared with software shipped with Kodak cameras.

Commonsense knowledge is used in Aria to inform semantic recognition agents, which recognize people, places, and events in the text. These recognition agents extract appropriate annotations to be added to photos inserted in the text. In retrieval, commonsense knowledge is compiled into a semantic network, and associative reasoning helps to bridge semantic gaps (such as connect text about "wedding" to a photo annotated with "bride") (Liu and Lieberman 2002). The system also learns from personal assertions from the text (such as "My sister's name is Mary."), presumably unique to the author's context, which can be treated as a source of implicit knowledge in much the same manner as the commonsense assertions coming from Open Mind.

The application of common sense in Aria has several fail-soft aspects. Annotations suggested by the agent carry less weight than a user's annotations in retrieval and can be rejected or revised by the user. Similarly, in retrieval, common sense is used only to bridge semantic gaps and would never supersede explicit keyword matching. If a user finds a suggestion useful, she or he can choose to drag that photo into the text. But if the suggestion is inappropriate, the user's writing task is not disrupted.

Figure 1. Telling Stories with ARIA.
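As an illustration of the associative bridging step described above, here is a minimal Python sketch that ranks photos by how few links separate a story term from their annotations in a toy semantic network. The network, the breadth-first distance measure, and the data structures are assumptions made for this example, not Aria's actual retrieval code.

    # Illustrative sketch of bridging a story term to photo annotations over a
    # toy semantic network; not Aria's actual implementation.
    from collections import deque

    NEIGHBORS = {
        "wedding": ["bride", "groom", "cake", "church"],
        "bride": ["dress", "bouquet"],
        "vacation": ["beach", "airplane"],
    }

    def semantic_distance(start, target, max_hops=3):
        """Breadth-first search; returns hop count or None if not reachable."""
        seen, frontier = {start}, deque([(start, 0)])
        while frontier:
            node, hops = frontier.popleft()
            if node == target:
                return hops
            if hops < max_hops:
                for nxt in NEIGHBORS.get(node, []):
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.append((nxt, hops + 1))
        return None

    def rank_photos(story_term, photos):
        """Order photos by how close their annotations are to the story term."""
        def best(photo):
            dists = [semantic_distance(story_term, ann) for ann in photo["annotations"]]
            dists = [d for d in dists if d is not None]
            return min(dists) if dists else float("inf")
        return sorted(photos, key=best)

    if __name__ == "__main__":
        photos = [{"file": "img1.jpg", "annotations": ["bride"]},
                  {"file": "img2.jpg", "annotations": ["beach"]}]
        print([p["file"] for p in rank_photos("wedding", photos)])  # img1.jpg first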

Common Sense in Affective Classification of Text

Consider the text, "My wife left me; she took the kids and the dog." There are no obvious mood keywords such as "cry" or "depressed," or any other obvious cues, but the implications of the event described here are decidedly sad. This presents an opportunity for commonsense knowledge, a subset of which concerns the affective qualities of things, actions, events, and situations. From the Open Mind Common Sense knowledge base, a small society of linguistic models of affect was mined out, using a set of mood keywords as a starting point. The import of commonsense knowledge to this application is to make affective classification of text more comprehensive and reliable by considering underlying semantics, in addition to surface features.

Using this commonsense-informed approach, two applications were built. One is an e-mail editor, Empathy Buddy (figure 2), which uses Chernoff-style faces to interactively react to a user as she or he composes an e-mail using one of six basic Ekman emotions (Liu, Lieberman, and Selker 2003). A user study showed that users rated the affective software agent as being more interactive and intelligent than a randomized-face control.

Figure 2. Empathy Buddy Reacts to an E-mail.

Another application uses a hyperlinked color bar to help users visualize and navigate the affective structure of a text document (Liu, Lieberman, and Selker 2002). Using the tool, users were able to improve the speed of within-document information access tasks.

The affective model approach has been recently extended to modeling point-of-view and personality, analyzing an author's writings, and making a comparison of what several authors "might have thought" about a specified topic (Liu and Maes 2004).
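The following Python sketch captures the flavor of this keyword-seeded approach: a sentence is scored against mood seed words both directly and through commonsense-style links from everyday concepts to affect-laden ones. The seed lists, the concept links, and the scoring are invented for illustration and are not the models actually mined from Open Mind.

    # Rough sketch of affect sensing that goes beyond surface mood keywords by
    # consulting commonsense-style associations; the data here is invented.
    MOOD_SEEDS = {
        "sad": {"cry", "depressed", "lose", "leave", "funeral"},
        "happy": {"smile", "celebrate", "win", "party"},
    }

    # Invented "commonsense" links from everyday concepts to affect-laden concepts.
    CONCEPT_AFFECT_LINKS = {
        "wife left": "lose",
        "took the kids": "lose",
        "new car": "celebrate",
        "accident": "cry",
    }

    def classify_affect(text):
        """Score each mood by direct keywords plus commonsense-linked concepts."""
        text = text.lower()
        scores = {mood: 0 for mood in MOOD_SEEDS}
        for mood, seeds in MOOD_SEEDS.items():
            scores[mood] += sum(1 for w in seeds if w in text)
        for phrase, linked in CONCEPT_AFFECT_LINKS.items():
            if phrase in text:
                for mood, seeds in MOOD_SEEDS.items():
                    if linked in seeds:
                        scores[mood] += 1
        return max(scores, key=scores.get), scores

    if __name__ == "__main__":
        print(classify_affect("My wife left me; she took the kids and the dog."))
        # ('sad', {'sad': 2, 'happy': 0})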

Common Sense in Video Capture and Editing

The Cinematic Commonsense project (Barry and Davenport 2003) is being developed to provide feedback to documentary videographers during production. Commonsense knowledge relevant to the documentary subject domain is retrieved to assist videographers when they are in the field recording video footage. After each shot is recorded, metadata is created by the videographer in natural language and submitted as a query to a subset of the Open Mind database. For example, the shot metadata "a street artist is painting a painting" would yield a shot suggestion such as "the last thing you do when you paint a painting is clean the brushes" or "something that might happen when you paint a picture is paint gets on your hands." These assertions can be used by the filmmaker as a flexible shot list that is dynamically updated in accordance with the events the filmmaker is experiencing. Annotation of content is enriched, as in Aria, to support later search of image-based content. Collections of shots can also be ordered into rough temporal and causal sequences based on the associated commonsense annotations.

Initial tests with the system showed the need for more complex story understanding to create effective suggestions for the filmmaker. Currently, knowledge drawn from the Open Mind Experiences (Singh and Barry 2003) and LifeNet (Singh and Williams 2003) script acquisition systems (described later in this article) is being incorporated to enhance the reasoning ability of the system to accord with a few key documentary videographer goals, such as creating coherent narratives. In discussions with professional filmmakers, the prospect of tracking many potential narrative edits within a video database was seen as useful not only for closing gaps in content collections but also for illuminating alternative story ideas. This can encourage creative documentary in journalism, education, and everyday life.

Common Sense in Other Storytelling Applications

A common thread throughout the aforementioned applications is that they all assist the user in some sort of storytelling process. Storytelling is a great area for common sense because it draws on a wide spectrum of understanding of situations of everyday life. It can provide an intermediate level for the agent to understand and assist the user that is better than simple keywords but stops short of full natural language understanding.

Alexandro Artola's StoryIllustrator2 (figure 3) is like Aria in that it gives the user a story editor and photo database and tries to continuously retrieve photos relevant to the user's typing. However, instead of using an annotated personal photo collection, it employs Yahoo's image search to retrieve images from the Web. Commonsense knowledge is used for query expansion, so that a picture of a baby is associated with the mention of milk.

Figure 3. Common Sense Helps Associate Story Elements with Video Clips.

David Gottlieb and Josh Juster's OMAdventure1 (figure 4) dynamically generates a Dungeons-and-Dragons type virtual environment by using commonsense knowledge. If the current game location is a kitchen, the system poses the questions to Open Mind, "What do you find in a kitchen?" and "What locations are associated with a kitchen?" If "You find an oven in a kitchen," we ask "What can you do with an oven?" Objects such as the oven or operations such as cooking are then made available as moves in the game for the player to make, and the associated locations are the exits from the current situation. If the player is given the opportunity to create new objects and locations in the game, that can be a way of extending the knowledge. If the player adds a blender to a kitchen, now we know that blenders are something that can be found in a kitchen.

Figure 4. OMAdventure Dynamically Generates an Adventure Game's Universe by Using Commonsense Knowledge.
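A minimal Python sketch of this generate-from-knowledge loop is shown below. The tiny knowledge table, its relation names, and the helper functions are invented stand-ins for querying Open Mind; the point is only to show how objects, actions, and exits fall out of location-centered assertions, and how player additions feed back into the knowledge.

    # Sketch of OMAdventure-style room generation from commonsense assertions.
    # The tiny knowledge base and its format are invented stand-ins for Open Mind.
    KB = {
        ("kitchen", "contains"): ["oven", "refrigerator", "sink"],
        ("kitchen", "connects_to"): ["dining room", "pantry"],
        ("oven", "used_for"): ["cooking", "baking"],
    }

    def describe_room(location):
        """Build the objects, actions, and exits for a game location."""
        objects = KB.get((location, "contains"), [])
        exits = KB.get((location, "connects_to"), [])
        actions = {obj: KB.get((obj, "used_for"), []) for obj in objects}
        return {"location": location, "objects": objects,
                "actions": actions, "exits": exits}

    def add_object(location, obj):
        """Player-created objects extend the knowledge base, as described above."""
        KB.setdefault((location, "contains"), []).append(obj)

    if __name__ == "__main__":
        print(describe_room("kitchen"))
        add_object("kitchen", "blender")
        print(describe_room("kitchen")["objects"])  # now includes 'blender'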

Chian Chuu and Hana Kim's StoryFighter3 plays a game where the system and the user take turns contributing lines to a story. The game proposes a start state, for example, "John is sleepy," and an end state, "John is in prison," and the goal is to get from the start state to the end state in a specified number of sentences. Along the way there are "taboo" words that can't be mentioned ("You can't use the word 'arrest'") as an additional constraint to make the game more challenging. Common sense is used to deduce the consequences of an event ("If you commit a crime, you might go to jail") and to propose taboo words to exclude the most obvious continuations of the story.

Common Sense for Topic Spotting in Conversation

Nathan Eagle, Push Singh, and Sandy Pentland (Eagle, Singh, and Pentland 2003) are exploring the idea of a wearable computer with continuous audio (and perhaps ultimately, video) recording. They are interested not only in audio transcription, but in situational understanding—understanding general properties of the physical and social environment in which the computer finds itself, even if the user is not directly interacting with the machine.

Speech recognition is used to roughly transcribe the audio, but with current technology, speech transcription accuracy, especially for conversation, is poor. However, understanding general aspects of the situation such as whether the user is at home or at work, alone or with people, with friends or strangers, is indeed possible. Such recognition is vastly improved by using commonsense knowledge to map from topic-spotting words output by the speech recognizer ("lunch," "fries," "Styrofoam") to knowledge about everyday activities that the user might be engaged in (eating in a fast-food restaurant). Bayesian inference is used to rank hypotheses generated by ConceptNet.

Austin Wang and Justine Cassell used common sense in a virtual collaborative storytelling partner for children (Wang and Cassell 2003) whose goal is to improve literacy and storytelling skills. An on-screen character, SAM, starts telling a story and invites the child to continue the story at certain points. For example, "Jack and Jane were playing hide and seek. Jane hid in… now it's your turn."

The system uses speech recognition to listen to the child's story, but the recognition is not good enough to be sure of understanding everything the child had to say. Instead, the results of the recognition are used for rough topic-spotting, in the manner of Eagle, Singh, and Pentland's system.

In the hide and seek example, the system could hear the word "bedroom." Then commonsense knowledge is used to determine what is likely to be in a bedroom, such as a bed, closet, dresser, and so forth. The result is used to concoct a plausible continuation of the story when it is the virtual character's turn again to talk, for example, "Jane's parents walked into the bedroom while she was hiding under the bed."
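To make the Bayesian ranking step in Eagle, Singh, and Pentland's topic spotter concrete, here is a small Python sketch that scores everyday-activity hypotheses against spotted words with a naive Bayes calculation. The activities, priors, and conditional probabilities are invented placeholders for the associations a ConceptNet-style knowledge base would supply; they are not the actual system's numbers.

    # Sketch of ranking everyday-activity hypotheses from spotted words with a
    # naive Bayes score; the activities and probabilities are invented.
    import math

    PRIOR = {"eating in a fast-food restaurant": 0.1,
             "working at a desk": 0.3,
             "playing hide and seek": 0.05}

    # P(word | activity): invented conditional probabilities standing in for
    # associations a ConceptNet-style knowledge base might supply.
    LIKELIHOOD = {
        "eating in a fast-food restaurant": {"lunch": 0.4, "fries": 0.3, "styrofoam": 0.2},
        "working at a desk": {"lunch": 0.1, "meeting": 0.4},
        "playing hide and seek": {"bedroom": 0.3, "closet": 0.3},
    }

    def rank_activities(spotted_words, smoothing=0.01):
        """Return activities ordered by log P(activity) + sum log P(word | activity)."""
        scores = {}
        for activity, prior in PRIOR.items():
            score = math.log(prior)
            for word in spotted_words:
                score += math.log(LIKELIHOOD[activity].get(word, smoothing))
            scores[activity] = score
        return sorted(scores, key=scores.get, reverse=True)

    if __name__ == "__main__":
        print(rank_activities(["lunch", "fries", "styrofoam"])[0])
        # 'eating in a fast-food restaurant'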

Common Sense for a Dynamic Tourist Phrasebook

Globuddy (Musa et al. 2003), by Rami Musa, Andrea Kulas, Yoan Anguilette, and Madleina Scheidegger, uses common sense to aid tourists with translation. Phrasebooks like Berlitz will commonly provide a set of words and phrases useful in a common situation, such as a restaurant or hotel. But they can cover only a few such situations. With Globuddy, you can type in your (perhaps unusual) situation ("I've just been arrested") and it retrieves common sense surrounding that situation and feeds it to a translation service. "If you are arrested, you should call a lawyer." "Bail is a payment that allows an accused person to get out of jail until a trial." A recent implementation by Alex Faaborg and José Espinosa puts Globuddy on handheld and cellular telephone platforms (figure 5).

Figure 5. The Globuddy 2 Dynamic Phrasebook Gives Translations of Phrases Conceptually Related to a Seed Word or Phrase.

Common Sense for Word Completion

Applications like Globuddy play up the role of commonsense knowledge bases in determining what kinds of topics are "usual" or "ordinary." A simple, but powerful application of this is in predictive typing or word or phrase completion. Predictive typing can vastly speed up interfaces, especially in cases where the user has difficulty typing normally or is using small devices such as cellular telephones whose keyboards are small. Conventional approaches to predictive typing select a prediction either from a list of words the user recently typed or from an ordered list of the most commonly occurring words in English. Alex Faaborg and Tom Stocky (Stocky, Faaborg, and Lieberman 2004) have implemented a commonsense predictive text entry facility for a cellular telephone platform. It uses ConceptNet to find the next word that "makes sense" in the current context. For example, typing "train st" leads to the completion "train station" even though the user may not have typed that phrase before, nor is "station" the most common "st" word (figure 6).

Figure 6. Common Sense Can Lead to Good Suggestions for Word Completion.



Performance of common sense alone in this task is statistically comparable to, or slightly better than, conventional frequency-based methods when measured with single-candidate prediction, and may be much better on multiple-candidate prediction or when combined with conventional methods, especially where the conventional methods don't make strong predictions in particular cases. Similar approaches have great potential for use in other kinds of predictive and corrective interfaces.
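The following Python sketch illustrates the idea: candidate completions that match the typed prefix are ranked first by an invented relatedness score between the preceding word and the candidate, and only then by raw word frequency. The relatedness table and frequency list are toy assumptions, not ConceptNet or the actual telephone implementation.

    # Sketch of commonsense-informed word completion: candidates matching the
    # typed prefix are ranked by relatedness to the preceding context, falling
    # back to word frequency. The relatedness table is an invented stand-in.
    RELATED = {
        ("train", "station"): 0.9,
        ("train", "ticket"): 0.7,
        ("coffee", "store"): 0.4,
    }

    WORD_FREQUENCY = {"still": 900, "start": 800, "station": 300, "store": 500}

    def complete(context_word, prefix):
        """Rank dictionary words starting with the prefix."""
        candidates = [w for w in WORD_FREQUENCY if w.startswith(prefix)]
        def score(word):
            return (RELATED.get((context_word, word), 0.0), WORD_FREQUENCY[word])
        return sorted(candidates, key=score, reverse=True)

    if __name__ == "__main__":
        print(complete("train", "st")[0])  # 'station', though 'still' is more frequent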

Common Sense in a Disk Jockey's Assistant

Joan Morris-DiMicco, Carla Gomez, Arnan Sipitakiat, and Luke Ouko implemented a Common Sense Disk Jockey,4 an assistant for music selection in dance clubs. Disk jockeys often select music initially based on a few superficial parameters (age, ethnicity, dress) of the audience and then adjust their subsequent choices based on the reaction of the audience.

CSDJ uses Erik Mueller's ThoughtTreasure as a reasoning engine (Mueller 1998) to filter a list of MP3 files according to commonsense assumptions about what kind of music particular groups might like. It also incorporates an interface to a camera that measures activity levels of the dance floor to give feedback to the system as to whether the selection of a particular piece of music increased or decreased activity.

Common Sense for Mapping User Goals to Concrete Actions

We also have worked on some projects incorporating commonsense knowledge into conventional search engines. These applications still maintain the "one-shot" query-response interaction that we criticized in the beginning as being less suited to commonsense applications than continuously operating interface agents. However, we apply the common sense in a fundamentally different way than conventional attempts to add inference to search engines. The role of common sense is to map from the user's search goals, which are sometimes not explicitly stated, to keywords appropriate for a conventional search engine. We believe that this process will make it more likely that the user would receive good results in the case where conventional keywords wouldn't work well, thereby making the interface more fail-soft.

Two systems, Reformulator (Singh 2002) and GOOSE (Liu, Lieberman, and Selker 2002), are commonsense adjuncts to Google.

Reformulator, like Cyc, does inference on the subject matter of the search itself. Our work in improving search engine interfaces (Liu, Lieberman, and Selker 2002; Singh 2002) is motivated by the observation that forming good search queries can often be a tricky proposition. We studied expert users composing queries (Liu, Lieberman, and Selker 2002) and concluded that they usually already know something about the structure and contents of pages they are expecting to find. Once a little bit of search common sense has been used to decide on the nature of the expected results, the chain of reasoning leading from the high-level search intent to query formation is usually very straightforward and commonsensical.

By contrast, novice users lack the experience in chain reasoning from a high-level search intent to query formation, so they often state their search goal directly. For example, a novice may often type "my cat is sick" into a search engine rather than looking for "veterinarians, Boston, MA" even though the chain of reasoning is very straightforward.

In this situation, there is an opportunity for a search engine interface agent to observe a novice user's queries. The agent attempts to infer the user's intent, and when it is detected that a query may not return the best results, the agent can help to reformulate the query using search expertise and inferencing over commonsense knowledge, and opportunistically suggest "Did you mean to look for veterinarians in Boston, MA?" above the displayed results. In GOOSE, we were able to improve a significant number of queries made by novice users (figure 7). However, in that system, we still needed users to help the system by manually disambiguating the type of search goal. Our current work on automated disambiguation will allow us to develop an interface agent that does not interfere with the user's task at all and suggests a better query (appearing above the search results) only if it is able to offer a better one. This allows the interface agent to make use of common sense to improve the user experience in a fail-soft way. If common sense is too spotty to reformulate a query, no suggestion is offered.

Figure 7. The GOOSE Commonsense Search Engine.
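A toy Python sketch of this goal-to-query mapping is given below. The regular-expression rules, the inferred goals, and the query templates are invented for illustration; GOOSE's actual goal inference and reformulation machinery is considerably richer.

    # Sketch of GOOSE-style query reformulation: infer the likely goal behind a
    # plainly stated problem and propose a more effective query. The patterns
    # and goal templates are invented examples, not the actual GOOSE rules.
    import re

    GOAL_RULES = [
        (re.compile(r"my (\w+) is sick"), "find a veterinarian",
         "veterinarians, {city}"),
        (re.compile(r"i want to learn (\w+)"), "find a tutorial",
         "{topic} tutorial for beginners"),
    ]

    def reformulate(query, city="Boston, MA"):
        """Return (inferred goal, suggested query) or None if nothing matches."""
        for pattern, goal, template in GOAL_RULES:
            match = pattern.search(query.lower())
            if match:
                suggestion = template.format(city=city, topic=match.group(1))
                return goal, suggestion
        return None  # too spotty to help: offer no suggestion (fail-soft)

    if __name__ == "__main__":
        print(reformulate("My cat is sick"))
        # ('find a veterinarian', 'veterinarians, Boston, MA')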


Another application that also maps between users' goals and concrete actions is currently under development by Alex Faaborg, Sakda Chaiworawitkul, and Henry Lieberman for the composition of Web services.

In Tim Berners-Lee's proposed vision of the next-generation semantic Web (Berners-Lee, Hendler, and Lassila 2001), users can state high-level goals, and agent programs can scout out Web services that can satisfy those goals, possibly composing multiple services, each of which accomplishes a subgoal, without explicit direction from the user. For example, a request "Schedule a doctor's appointment for my mother within ten miles of her house" might involve looking up directories of doctors with a certain specialty; checking a reputation server; consulting a geographic server to check addresses, routes, or transit; synchronizing the mother's and doctor's schedules; and so on.

We fully concur with this vision. However, to date, most of the work on the semantic Web has focused on the formalisms, such as XML, OWL, SOAP, and UDDI, that will be used to represent metadata stored on the Web pages that will presumably be accessed by these agents. Little work is concerned with how an agent might actually put together semantic Web services to accomplish high-level goals for the user.

Looking at currently available and proposed Web service descriptions, we see that even if everyone agrees on the representation formalism, different services might ask for and return different kinds of information for the same services, and connecting them is still a task that now requires a human programmer to anticipate the form and structure of such services.

For example, a weather service might deliver a weather report given a Zip code. But if the user asked "What's the weather in Denver?" then something has to know how Zip codes are associated with cities. This is a job for common sense.

Common sense is used to compose Web services in a manner similar to the way it is used in GOOSE. User goals are obtained through two different interfaces: one that allows natural language statement of goals and another that provides a sidebar to a browser that proposes relevant services interactively as the user is browsing. ConceptNet is used to expand the user goal so that it can potentially match semantically related concepts that may appear in the Web service descriptions.


Thus we can achieve a much broader and more appropriate mapping of Web services than is possible with literal search through Web service descriptions alone.
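A compact Python sketch of this concept-expansion matching follows. The expansion table and the service descriptions are invented; the real system expands goals with ConceptNet and works over actual Web service metadata rather than free-text descriptions.

    # Sketch of matching a user goal to Web service descriptions by expanding the
    # goal with conceptually related terms; the expansion table and service
    # descriptions are invented for illustration.
    EXPANSIONS = {
        "weather": {"forecast", "temperature", "rain"},
        "denver": {"zip code", "colorado", "city"},
    }

    SERVICES = {
        "WeatherReportService": "returns a weather forecast for a given zip code",
        "StockQuoteService": "returns the current price for a ticker symbol",
        "ZipLookupService": "returns the zip code for a given city name",
    }

    def expand(goal_words):
        """Add conceptually related terms to the literal goal words."""
        expanded = set(goal_words)
        for word in goal_words:
            expanded |= EXPANSIONS.get(word, set())
        return expanded

    def match_services(goal):
        """Rank services by overlap between the expanded goal and their descriptions."""
        goal_terms = expand(goal.lower().split())
        scored = []
        for name, description in SERVICES.items():
            overlap = sum(1 for term in goal_terms if term in description.lower())
            if overlap:
                scored.append((overlap, name))
        return [name for overlap, name in sorted(scored, reverse=True)]

    if __name__ == "__main__":
        print(match_services("weather denver"))
        # ['WeatherReportService', 'ZipLookupService']: the second can supply the
        # zip code that the first one needs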

Interfaces for Improving Commonsense Knowledge Bases

One criticism of Open Mind and similar efforts is that knowledge expressed in single sentences is often implicitly dependent on an unstated context. For example, the sentence "At a wedding, the bride and groom exchange rings" might assume the context of a Christian or Jewish wedding, and might not be true in other cultures. Rebecca Bloom and Avni Shah5 implemented a system for contextualizing Open Mind knowledge by prompting the user to add explicit context elements to each assertion. Retrieval can then supply information about what context an assertion depends on or find analogous assertions in other contexts. For example, in a Hindu wedding, the bride and groom exchange necklaces that serve the same ritual function as rings do in the West.

Several projects involved interfaces for knowledge elicitation or feedback about the knowledge base itself. The Open Mind Web site itself contains several of what it calls "activities" that encourage users to fill in templates that call for a particular type of knowledge. Knowledge about the function of objects is elicited with a template "You __ with a __." Tim Chklovski developed an interface for prompting the user to disambiguate word senses in Open Mind (Chklovski and Mihalcea 2002) and for automatically performing simple analogies and asking the user to confirm or deny them (Chklovski 2003).
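As a small illustration of how such fill-in-the-blank activities can be turned into structured knowledge, the Python sketch below maps a filled template to a (concept, relation, concept) assertion. The template set and relation names are made up for this example and do not reflect the actual Open Mind implementation.

    # Sketch of template-based knowledge elicitation in the spirit of the Open
    # Mind "activities": a filled template becomes a structured assertion.
    TEMPLATES = {
        "You ____ with a ____.": "UsedFor",
        "A ____ is a kind of ____.": "IsA",
    }

    def assertion_from_template(template, fillers):
        """Combine a template and the user's fillers into (head, relation, tail)."""
        relation = TEMPLATES[template]
        if relation == "UsedFor":
            action, tool = fillers
            return (tool, "UsedFor", action)
        head, tail = fillers
        return (head, relation, tail)

    if __name__ == "__main__":
        print(assertion_from_template("You ____ with a ____.", ("write", "pen")))
        # ('pen', 'UsedFor', 'write')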

Andrea Lockerd's ThoughtStreams6 aims to acquire commonsense knowledge through simulation. Everyday life is modeled in a game world, similar to the game The Sims. An agent tracks user behavior in the world and tries to discover behavioral regularities with a similarity-based learning algorithm. It is also envisioned that a game character "bot" would be introduced that would occasionally ask human characters why they do things, in the manner of an inquisitive (but hopefully not too annoying) child.

Roles for Common Sense in Applications

Each of these applications uses common sense differently. None of them engage in "general-purpose" commonsense reasoning—while each makes use of a broad range of commonsense knowledge, each makes use of it in a particular way by performing only certain types of inferences. Also, while general-purpose logical inference is about what inferences are possible to make, commonsense inference is much more about what inferences are plausible to make. Our applications generally provide enough context about what is plausible so that we don't need to fall back on general logical mechanisms.



Sidebar: The Open Mind Common Sense Project

We built the Open Mind Common Sense (OMCS) Web site (openmind.media.mit.edu) to make it easy and fun for members of the general public to work together to build a commonsense database. OMCS was launched in September 2000, and as of November 2004 we have accumulated a corpus of about 690,000 pieces of commonsense knowledge from more than 14,000 people across the Web, many with no special training in computer science or artificial intelligence. Our top 100 list of contributors includes an artist, a chemist, a grandmother, a jockey, a 12-year-old child, a 911 police dispatcher, as well as people from many other walks of life. Our philosophy has been that because everyone has the common sense we wish to give our machines, everyone can contribute! The knowledge we have collected consists largely of the kinds of simple English assertions shown in table 1.

Our approach to knowledge acquisition was inspired by the success of information extraction methods (Cardie 1997). Rather than having our audience contribute knowledge directly in terms of some formal ontology, we instead encouraged them to express facts clearly in English via free-form and structured templates. Because they interacted with our system in plain English, they could begin entering information immediately—there was no extended training period where they had to learn a complicated knowledge-representation language. We extracted a semantic network from the resulting corpus of commonsense facts using a library of handcrafted lexico-syntactic pattern-matching rules (Liu and Singh 2004). This semantic network, called ConceptNet (www.conceptnet.org), consists of 280,000 links relating 300,000 concept nodes. The link types include a variety of common binary relations such as is-a, located-in, has-subevent, and so on. The concept nodes include a wide range of typical actions, objects, places, and properties, all expressed in terms of simple English phrases such as "go to restaurant" or "cup of coffee." Because of its basis in natural language, the meanings of the links and nodes in ConceptNet are difficult to fully disambiguate. Nevertheless, ConceptNet has proven to be surprisingly useful and easy to use, especially in the context of "fail-soft" applications where the inferences do not always have to be correct.
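To illustrate the spirit of this extraction step, here is a small Python sketch that turns simple English sentences into (concept, relation, concept) triples with two lexico-syntactic patterns. The patterns and relation names are invented for the example; the actual extraction library (Liu and Singh 2004) is far larger and more careful.

    # Sketch of extracting ConceptNet-style relations from plain-English OMCS
    # sentences with simple lexico-syntactic patterns; the two patterns below
    # are illustrative, not the project's actual extraction rules.
    import re

    PATTERNS = [
        (re.compile(r"^you can use (?:a |an )?(.+) to (.+)\.$", re.I), "UsedFor"),
        (re.compile(r"^(.+?) (?:is|are) found in (?:a |the )?(.+)\.$", re.I), "AtLocation"),
    ]

    def extract(sentence):
        """Return (concept, relation, concept) for the first matching pattern."""
        for pattern, relation in PATTERNS:
            match = pattern.match(sentence.strip())
            if match:
                return (match.group(1).strip(), relation, match.group(2).strip())
        return None

    if __name__ == "__main__":
        print(extract("You can use an oven to bake bread."))
        # ('oven', 'UsedFor', 'bake bread')
        print(extract("Books are found in a library."))
        # ('Books', 'AtLocation', 'library')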

Since the original Open Mind Common Sense Web site, there have been a number of projects that have taken new approaches to knowledge acquisition from the general public. The Open Mind Word Expert site (www.teach-computers.org) lets users tag the senses of the words in individual sentences drawn from both the OMCS corpus and the glosses of WordNet word senses (Chklovski and Mihalcea 2002). The Open Mind 1001 Questions site (www.teach-computers.org) uses analogical reasoning to pose questions to the user by analogy to what it already knows, and hence makes the user experience more interactive and engaging (Chklovski 2003). We developed the Open Mind Experiences site to allow users to teach stories in addition to facts, by presenting them with story templates based on Wendy Lehnert's plot units (Singh and Barry 2003). Finally, the upcoming LifeNet site lets users directly build probabilistic graphical models, and uses those models to immediately make inferences based on the knowledge that has been contributed so far (Singh and Williams 2003).

We are excited about these projects because it is now possible for AI researchers to work together and with the general public to build large-scale commonsense knowledge bases. Building practical commonsense reasoning systems is no longer solely the realm of multimillion-dollar Manhattan projects and can be pursued in distributed and collaborative fashion by the AI community as a whole.


People live in houses.
Running is faster than walking.
A person wants to eat when hungry.
Things often found together: lightbulb, contact, glass.
Coffee helps wake you up.
Birds can fly.
The effect of going for a swim is getting wet.
The first thing you do when you wake up is open your eyes.

Table 1. Sample of OMCS Corpus


Retrieving Event-Subevent Structure

It is sometimes useful to collect together all the knowledge that is relevant to some particular class of activity or event. For example, the Cinematic Commonsense project makes use of commonsense knowledge about event-subevent structure to make suitable shot suggestions at common events like birthdays and marathons. For the topic "getting ready for a marathon," the subevents gathered might include putting on your running shoes, picking up your number, and getting in your place at the starting line.
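A minimal Python sketch of turning such event-subevent knowledge into a shot list follows. The subevent table stands in for what a query to Open Mind might return; it is not the project's actual data or code.

    # Sketch of gathering event-subevent knowledge for shot suggestions; the
    # subevent data below is an invented stand-in for what Open Mind returns.
    SUBEVENTS = {
        "getting ready for a marathon": [
            "put on your running shoes",
            "pick up your number",
            "get in your place at the starting line",
        ],
        "birthday party": ["blow out the candles", "open presents"],
    }

    def shot_suggestions(event, limit=3):
        """Turn an event's subevents into a simple shot list for the videographer."""
        return [f"shot: {sub}" for sub in SUBEVENTS.get(event, [])][:limit]

    if __name__ == "__main__":
        for line in shot_suggestions("getting ready for a marathon"):
            print(line)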

Goal Recognition and Planning

The Reformulator and GOOSE search engines exploit commonsense knowledge about typical human goals to infer the real goal of the user from their search query. These search engines can make use of knowledge about actions and their effects to engage in a simple form of planning. After inferring the user's true intention, they look for a way to achieve it.

Temporal Projection

The MakeBelieve storytelling system (Liu and Singh 2002) makes use of the knowledge of temporal and causal relationships between events in order to guess what is likely to happen next. Using this knowledge it can generate stories like "David fell off his bike. David scraped his knee. David cried like a baby. David was laughed at. David decided to get revenge. David hurt people."
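The sketch below shows the general shape of this kind of temporal projection in Python: starting from a seed event, it repeatedly samples a plausible consequence from a table of cause-effect links. The links and the sampling strategy are invented for illustration and are much simpler than MakeBelieve's.

    # Sketch of MakeBelieve-style temporal projection: repeatedly follow
    # cause/effect links to extend a story. The causal links are invented.
    import random

    EFFECTS = {
        "fall off a bike": ["scrape a knee"],
        "scrape a knee": ["cry"],
        "cry": ["be comforted", "be laughed at"],
        "be laughed at": ["want revenge"],
    }

    def tell_story(seed_event, length=4, rng=None):
        """Generate a chain of events by sampling a plausible consequence each step."""
        rng = rng or random.Random(0)
        story, event = [seed_event], seed_event
        for _ in range(length):
            choices = EFFECTS.get(event)
            if not choices:
                break
            event = rng.choice(choices)
            story.append(event)
        return story

    if __name__ == "__main__":
        print(" -> ".join(tell_story("fall off a bike")))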

Particular Consequences of Broad Classes of Actions

Empathy Buddy senses the affect in passages of text by predicting only those consequences of actions and events that have some emotional significance. This can be done by chaining backwards from knowledge about desirable and undesirable states. For example, if being out of work is undesirable, and being fired causes you to be out of work, then the passage "I was fired from work today" can be sensed as undesirable.

Specific Facts about Particular Things

Specific facts like "the Golden Gate Bridge is located in San Francisco" or "a PowerBook is a kind of laptop computer" are often useful. Aria can reason that an e-mail mentioning "I saw the Golden Gate Bridge" implies "I was in San Francisco at the time," and it proactively retrieves photos taken in San Francisco for the user to insert into the e-mail.

Conceptual Relationships

A commonsense knowledge base can be used to supply "conceptually related" concepts. The Globuddy program retrieves knowledge about the events, actions, objects, and other concepts related to a given situation in order to make a custom phrasebook of concepts you might wish to have translations for in a given situation.

Limitations of the Role of Common Sense

We are well aware that there are limits to the use of common sense in applications. Again, these applications are early-stage prototypes and not large-scale fielded applications.

Among the principal problems we had in developing these systems was the spottiness of subject coverage of Open Mind. Open Mind knows a lot about certain things in some areas and little in others, as would be expected from a project that relies on volunteer labor. In some cases, we "beefed up" Open Mind's knowledge by deliberately entering facts we knew to be useful for a certain application. But we were happy to discover that the newly entered statements then "cross-fertilized" better behavior in some of the other applications, even when we weren't expecting it. As Open Mind–like projects scale up, these problems should decrease. All in all, we were more often surprised by the usefulness of what was in Open Mind than what it lacked.

We were limited by our inference methods. Inferences about events, actions, and objects as described above could not be counted on to be reliable and so were best used only when the situation was noncritical and inference chains were kept short. But we see the opportunity for better inference methods not simply by applying existing formal inference methods to commonsense knowledge but by developing new methods more suited to commonsense reasoning. We are exploring a method that interleaves context-sensitive inference steps with heuristic retrieval steps in a breadth-first manner. This will be the subject of a future paper.

We were also worried about the danger that commonsense-based suggestions might be distracting in the interface, since they compete for attention with more conventional interface elements. Happily, this did not turn out to be the case in user testing; we got strong positive reactions, and few users gave negative reactions that characterized the commonsense suggestions as a waste of time. Part of the reason was that, even when common sense doesn't lead to a correct conclusion, it often leads to a plausible conclusion. People are very much more tolerant of possibly correct wrong guesses than they are of guesses that are simply wildly off the mark.



Sidebar: Other Applications of Commonsense Reasoning

Because broad-spectrum application of commonsense reasoning is still not widespread, we can't yet point to very many applications, especially in the commercial world, that use it. But we don't want to leave the impression that we at MIT are alone in pursuing this. Doug Lenat's Cycorp currently has the largest commonsense knowledge base and the longest experience in creating applications with it. Several Cyc applications were directed at trying to improve information retrieval beyond simple keyword matching by performing implicit inference.

For example, in a demonstration of Cyc (Lenat 1995), one could ask a news database "Show me a picture of someone who is disappointed" and receive a picture of the second finisher in the Boston Marathon, by a chain of reasoning like the following:

A marathon is a contest;

The goal of a contest is to be first;

If you do not achieve your goals, then you will be disappointed.

A subsequent project, eCyc, attached Cyc to the Lycos search engine, again to perform the same kind of implicit inference. This is somewhat like the way common sense is used in Liu's GOOSE project (Liu, Lieberman, and Selker 2002), except that in GOOSE the commonsense inference is used to infer the goal of the user's query (which may not be stated), whereas in eCyc, the commonsense inference is applied to the subject matter of the query, as it is expressed in the query.

On Cycorp’s Web site (www.cyc.com), the security applica-tion CycSecure is described. It provides network vulnerabilityanalysis and intrusion detection. It is not clear how, if at all,general knowledge about everyday life is used in this applica-tion as opposed to specific and precise technical informationabout networks. But common sense could be useful in trackingterrorist threats, for example, by noting that the word Anthraxprobably refers to the disease in that context and not to theheavy metal band of that name.

Erik Mueller’s SensiCal (Mueller 2000) is a “sanity checker”that provides warnings in a calendar scheduling program if itdetects possible anomalies, such as scheduling a business meet-ing for 2 AM or planning to take a vegetarian friend to a steakhouse. It doesn’t prohibit such actions, but asks for confirma-tion in the case that they were performed inadvertently. Cychas also reported a sanity checker that is attached to a spread-sheet and is able to detect such problems as when a cell labeledas containing an age becomes less than zero.

Andrew Gordon (Gordon 2001) implemented a commonsense photo retrieval system that was tested in the Library of Congress. It uses a set of 768 "scripts" of everyday activities and relates these to the standardized vocabulary for annotating photographs. So, for example, a picture of children might have a related link to students (because the majority of students are children) in the annotation vocabulary, but Gordon's system can also supply teachers, because students and teachers are related in the school script. The interface was more of a typical information retrieval system than the real-time, proactive nature of our Aria.

There are also many projects that take a different and more formalistic slant to the commonsense problem, following the original idea of John McCarthy (McCarthy 1959). A recent overview can be found in Morgenstern and Davis (2004). Their approach is to isolate some particular capability of commonsense reasoning that they consider fundamental and attempt to do an exhaustive axiomatic analysis of it. Examples of domains that people have attacked in this manner are temporal reasoning or reasoning about the behavior of fluids. You could call this "formal reasoning about particular commonsense topics" as opposed to the "(informal) reasoning about common sense (as a whole)," which describes our approach here.

Obviously, when a formal approach is successful in identifying axioms and inference rules in a particular domain, it can replace hundreds or thousands of our Open Mind statements and reason with them more accurately. But each of these efforts is constrained to its domain, and it is not clear how many of these domains are necessary before what humans consider a reasonable scope of commonsense knowledge is attained. We prefer to work with a broad base of knowledge and see what can be done, even if the knowledge itself is shallow and the inferences inaccurate.

There are also several resources that provide broad-spectrum knowledge, but not what might be considered common sense in the sense of contingent knowledge about everyday life. For example, WordNet is a popular resource, used in many AI natural language programs, that provides word-sense disambiguation, hypernyms, and hyponyms. But its knowledge is mostly definitional. For example, WordNet defines a dog as a canine, which is a mammal. For ConceptNet, however, a dog is a pet, much more in line with the most salient feature for commonsense reasoning.

This also opens up broader consideration of what are termed knowledge-based applications in AI, including traditional expert systems, where the knowledge is expressed as rules. Many of these systems also operate in a commonsense domain, and thus could be said to be applications of commonsense knowledge. Like the applications produced by the formalists, they can have deep knowledge, but their scope is usually not very broad, and they are brittle outside their intended scope. The applications we have cited here, including the Cyc ones, Mueller's SensiCal, and Gordon's photographic system, share with our applications the property that they can talk about (almost) any subject that people would consider common knowledge, with a reasonable probability of doing something interesting.




Conclusions

We think that system implementers often fail to realize how underconstrained many user interface situations are. In many cases, systems either do nothing or perform actions that are essentially arbitrary. These applications show that there exists the potential to use commonsense knowledge to do something that at least might make sense as far as the user is concerned.

A little bit of knowledge is often better than nothing. Many applications, such as storytelling or language translation for tourists, can cover a broad range of subjects. With such applications, it is better to know a little bit about a lot of things than a lot about just a few things. Many past efforts have been stymied by insisting that coverage of the knowledge base be complete. They are often afraid to perform inferences because of the possibility of error. We rely on the interactive nature of the interface to provide feedback to the user and the opportunity for correction and completion.

Explicit input from the user is very expensive in the interface, so commonsense knowledge can act as an amplifier of that input, bringing in related facts and concepts that broaden the scope of the application.

Although our descriptions of each of these projects have been necessarily brief, we hope that the reader will be impressed by the breadth and variety of the applications of commonsense knowledge. We don't have to wait for complete coverage or completely reliable inference to put this knowledge to work, although as these improve, the applications will only get better. We think that the AI community ought to be paying more attention to this exciting area. After all, it's only common sense.

Notes

1. See web.media.mit.edu/~lieber/Teaching/Common-Sense-Course/Projects/OMAdventure/OMAdventure.doc

2. See www.media.mit.edu/~lieber/Teaching/Common-Sense-Course/Projects/StoryIllustrator/StoryIllustrator-Intro.html

3. See web.media.mit.edu/~lieber/Teaching/Common-Sense-Course/Projects/Storyfighter/Storyfighter-Intro.html

4. See web.media.mit.edu/~joanie/commonsense/CSDJ-finalpaper.doc

5. See web.media.mit.edu/~lieber/Teaching/Common-Sense-Course/Projects/Contextualizer/Contextualizer.pdf

6. See www.media.mit.edu/~lieber/Teaching/Common-Sense-Course/Projects/ThoughtStreams/ThoughtStreams-Intro.html

References

Barry, B.; and Davenport, G. 2003. Documenting Life: Videography and Common Sense. In Proceedings of the 2003 IEEE International Conference on Multimedia and Expo. Piscataway, NJ: Institute of Electrical and Electronics Engineers.

Berners-Lee, T.; Hendler, J.; and Lassila, O. 2001. The Semantic Web. Scientific American 284(5) (May): 28–37.

Cardie, C. 1997. Empirical Methods in Information Extraction. AI Magazine 18(4): 65–79.

Chklovski, T. 2003. LEARNER: A System for Acquiring Commonsense Knowledge by Analogy. Paper presented at the Second International Conference on Knowledge Capture (K-CAP 2003), Sanibel Island, FL, October 23–25.

Chklovski, T.; and Mihalcea, R. 2002. Building a Sense Tagged Corpus with Open Mind Word Expert. In Proceedings of the Workshop on "Word Sense Disambiguation: Recent Successes and Future Directions." East Stroudsburg, PA: Association for Computational Linguistics.

Davis, E.; and Morgenstern, L. 2004. Progress in Formal Commonsense Reasoning: Introduction to the Special Issue on Formalization of Common Sense. Artificial Intelligence Journal 153(1–2): 1–12.

Eagle, N.; Singh, P.; and Pentland, A. 2003. Common Sense Conversations: Understanding Casual Conversation Using a Common Sense Database. Paper presented at the IJCAI-03 Artificial Intelligence, Information Access, and Mobile Computing Workshop, Acapulco, Mexico, August 2003.

Gordon, A. S. 2001. Browsing Image Collections with Representations of Commonsense Activities. Journal of the American Society for Information Science and Technology 52(11): 925–929.

Lenat, D. B. 1995. CYC: A Large-Scale Investment in Knowledge Infrastructure. Communications of the ACM 38(11): 33–38.

Lewin, D. B. 2000. Intelligencer: Instilling Common Sense into Software. IEEE Intelligent Systems 15(4): 2–6.

Lieberman, H.; and Liu, H. 2002. Adaptive Linking between Text and Photos Using Common Sense Reasoning. Paper presented at the Second International Conference on Adaptive Hypermedia and Adaptive Web Based Systems (AH2002), Malaga, Spain, May 29–31.

Lieberman, H.; Rosenzweig, E.; and Singh, P. 2001. Aria: An Agent for Annotating and Retrieving Images. IEEE Computer 34(7): 57–61.

Liu, H.; and Lieberman, H. 2002. Robust Photo Retrieval Using World Semantics. Paper presented at the Third International Conference on Language Resources and Evaluation Workshop: Using Semantics for Information Retrieval and Filtering (LREC2002), Las Palmas, Canary Islands.

Liu, H.; Lieberman, H.; and Selker, T. 2003. A Model of Textual Affect Sensing Using Real-World Knowledge. In Proceedings of the 2003 International Conference on Intelligent User Interfaces. New York: Association for Computing Machinery.

Liu, H.; Lieberman, H.; and Selker, T. 2002. GOOSE: A Goal-Oriented Search Engine with Commonsense. Paper presented at the Second International Conference on Adaptive Hypermedia and Adaptive Web Based Systems (AH2002), Malaga, Spain, May 29–31.

Liu, H.; Lieberman, H.; and Selker, T. 2002a. Automatic Affective Feedback in an E-mail Browser. Technical Report SA02-01, November 2002. Cambridge, MA: Massachusetts Institute of Technology Media Laboratory Software Agents Group.

Liu, H.; and Maes, P. 2004. What Would They Think? A Computational Model of Attitudes. In Proceedings of the 2004 International Conference on Intelligent User Interfaces. New York: Association for Computing Machinery.

Liu, H.; and Singh, P. 2004. Commonsense Reasoning in and over Natural Language. Paper presented at the Eighth International Conference on Knowledge-Based Intelligent Information and Engineering Systems (KES2004), Wellington, New Zealand, September 22–24.

Liu, H.; and Singh, P. 2002. MAKEBELIEVE: Using Commonsense to Generate Stories. In Proceedings of the Eighteenth National Conference on Artificial Intelligence (AAAI-02), 957–958. Menlo Park, CA: AAAI Press.

Maes, P. 1994. Agents That Reduce Work and Information Overload. Communications of the ACM 37(7): 30–40.

McCarthy, J. 1990. Computers with Common Sense. In Formalization of Common Sense, ed. V. Lifschitz. Westport, CT: Ablex Publishers.

Minsky, M. 2000. Commonsense-Based Interfaces. Communications of the ACM 43(8): 67–73.

Mueller, E. T. 2000. A Calendar with Common Sense. In Proceedings of the 2000 International Conference on Intelligent User Interfaces, 198–201. New York: Association for Computing Machinery.

Mueller, E. T. 1998. Natural Language Processing with ThoughtTreasure. New York: Signiform.

Musa, R.; Kulas, A.; Anguilette, Y.; and Scheidegger, M. 2003. Globuddy, a Broad-Context Dynamic Phrasebook. In Proceedings of the International Conference on Modeling and Using Context (CONTEXT '03), Lecture Notes in Computer Science. Berlin: Springer-Verlag.

Singh, P. 2002. The Public Acquisition of Commonsense Knowledge. In Acquiring (and Using) Linguistic (and World) Knowledge for Information Access: Papers from the 2002 AAAI Spring Symposium, Technical Report SS-02-09. Menlo Park, CA: American Association for Artificial Intelligence.

Singh, P.; and Barry, B. 2003. Collecting Commonsense Experiences. Paper presented at the Second International Conference on Knowledge Capture (K-CAP 2003), Sanibel Island, FL, October 23–25.

Singh, P.; and Williams, W. 2003. LifeNet: A Propositional Model of Ordinary Human Activity. Paper presented at the Workshop on Distributed and Collaborative Knowledge Capture (DC-KCAP), Sanibel Island, FL, October 26, 2003.

Stocky, T.; Faaborg, A.; and Lieberman, H. 2004. Common Sense for Predictive Text Entry. Paper presented at the Conference on Human Factors in Computing Systems (CHI 2004), Vienna, Austria, April 24–29.

Wang, A.; and Cassell, J. 2003. Coauthoring, Collaborating, Criticizing: Collaborative Storytelling between Real and Virtual Children. Paper presented at the 2003 Vienna Workshop: Educational Agents—More than Virtual Tutors, Vienna, Austria, July 20.

Henry Lieberman is a research scientist at the Massachusetts Institute of Technology's Media Laboratory. He heads the Software Agents Group, which applies AI techniques to making user interfaces more intelligent and easier to use. He edited the books Spinning the Semantic Web (MIT Press) and Your Wish Is My Command (Morgan Kaufmann). He is now working on common sense because he wishes he had more of it.

Hugo Liu is a Ph.D. candidate in media arts and sciences at the Massachusetts Institute of Technology's Media Laboratory. He has published more than a dozen expository papers on common sense and its application to the cognitive modeling of emotion, personality, hermeneutics, and information foraging. His research has been recognized with two best paper prizes. Hugo is presently engaged in the computational modeling of creativity, social identity, aesthetic experience, and the subconscious. He also essays irreverently on philosophy and postmodernism, and of course, he is Google-friendly. His e-mail address is [email protected].

Push Singh is a doctoral candidate in MIT's Department of Electrical Engineering and Computer Science. His research is focused on finding ways to give computers humanlike common sense, and he is presently collaborating with Marvin Minsky to develop an architecture for commonsense thinking that makes use of many types of mechanisms for reasoning, representation, and reflection. He started the Open Mind Common Sense project at MIT, an effort to build large-scale commonsense knowledge bases by turning to the general public, and has worked on incorporating commonsense reasoning into a variety of real-world applications. Singh received his B.S. and M.Eng. in electrical engineering and computer science from MIT.

Barbara Barry is a Ph.D. candidate at the Massachusetts Institute of Technology's Media Laboratory in the Interactive Cinema group. Her research interests include computational documentary, commonsense knowledge acquisition, and technologies for reflective practice. She earned a B.F.A. from Massachusetts College of Art and an M.S. from the MIT Media Lab. Contact her at [email protected] or visit www.media.mit.edu/~barbara.