Semantics Interpretation

Posted on 05-Jan-2016




<ul><li><p>*Semantics Interpretation. Allen's Chapter 9; J&amp;M's Chapter 15</p></li><li><p>*Rule-by-rule semantic interpretation. Computing logical forms (i.e., semantic interpretation) can be coupled with parsing. A syntactic tree can also be generated from a specified logical form (i.e., semantic realization). To couple syntax and semantics, the meaning of every constituent must be determined. By using features, the relation between the meaning of a constituent and that of its subconstituents can be specified in the grammar. Each syntactic rule is associated with a semantic interpretation rule (this is called rule-by-rule semantic interpretation).</p></li><li><p>*Semantic interpretation and compositionality. Semantic interpretation is assumed to be a compositional process (similar to parsing). The meaning of a constituent is derived from the meaning of its subconstituents. Interpretations can be built incrementally from the interpretations of subphrases. Compositional models make grammars easier to extend and maintain.</p></li><li><p>*Semantic interpretation and compositionality. Integrating parsing and semantic interpretation is harder than it may seem. One classic obstacle is the mismatch between syntactic structures and the corresponding structures of the logical forms. The syntactic structure of Jill loves every dog is ((Jill) (loves (every dog))). Its unambiguous logical form is (EVERY d1 : (DOG1 d1) (LOVES1 l1 (NAME j1 Jill) d1)).</p></li><li><p>*Difficulties of semantic interpretation via compositionality. In the syntactic structure, every dog is a subconstituent of the VP, whereas in the logical form the situation is reversed. How could the meaning of every dog be represented in isolation and then used to construct the meaning of the sentence? Using quasi-logical forms is one way around this problem: (LOVES1 l1 (NAME j1 Jill) &lt;EVERY d1 DOG1&gt;).</p></li><li><p>*Problem with idioms. Another obstacle for the compositionality assumption is the presence of idioms: Jack kicked the bucket = Jack died. Solution 1: allow the semantic 
meaning to be assigned to the entire phrase, rather than building it incrementally. But what about The bucket was kicked by Jack? Jack kicked the bucket is ambiguous between (KICK1 k1 (NAME j1 Jack) &lt;THE b1 BUCKET1&gt;) and (DIE1 d1 (NAME j1 Jack)). Solution 2: add a new sense of die for the verb kick, with a subcategorization for an object BUCKET1.</p></li><li><p>*Semantic interpretation and compositionality. In compositional semantic interpretation, we should be able to assign a semantic structure to any syntactic constituent. For instance, we should be able to assign a uniform form of meaning to every verb phrase in any rule involving a VP. The meaning of the VP in Jack laughed can be shown by a unary predicate, yielding (LAUGHED1 l1 (NAME j1 Jack)). The VP in Jack loved Sue should also be represented by a unary predicate, yielding (LOVES1 l2 (NAME j1 Jack) (NAME s1 Sue)). But how can we express such complex unary predicates? Lambda calculus provides a formalism for representing them.</p></li><li><p>*Lambda calculus. Using lambda calculus, loved Sue is represented as (λ x (LOVES1 l1 x (NAME s1 Sue))). We can apply a lambda expression to an argument by a process called lambda reduction: ((λ x (LOVES1 l1 x (NAME s1 Sue))) (NAME j1 Jack))</p><p>= (LOVES1 l1 (NAME j1 Jack) (NAME s1 Sue))</p></li><li><p>*Lambda calculus. Lambda calculus provides a handy tool to couple syntax and semantics. Now even verb phrases with different structures can easily be conjoined: Sue laughs and opens the door. (λ a (LAUGHS1 l1 a)) and (λ a (OPENS1 o1 a)) can be conjoined into (λ a (&amp; (LAUGHS1 l1 a) (OPENS1 o1 a))). Applying this to (NAME s1 Sue) produces the logical form (&amp; (LAUGHS1 l1 (NAME s1 Sue)) (OPENS1 o1 (NAME s1 Sue))).</p></li><li><p>*Lambda calculus (Cont.). Prepositional phrase modifiers in noun phrases could be handled in different ways. One option: search for the location of modifiers in noun phrases and incorporate them into the interpretations. This works for the man in the store, but not for the man is in the store or the man was thought to be in the 
store. Another option: give an independent meaning to prepositional phrases, so in the store is represented by a unary predicate (λ o (IN-LOC1 o &lt;THE s1 STORE1&gt;)). The man in the store: (&lt;THE m1 (&amp; (MAN1 m1) (IN-LOC1 m1 &lt;THE s1 STORE1&gt;))&gt;)</p><p>The man is in the store: (IN-LOC1 &lt;THE m1 MAN1&gt; &lt;THE s1 STORE1&gt;)</p></li><li><p>*Compositional approach to semantics. In general, each major syntactic phrase corresponds to a particular semantic construction: VPs and PPs map to unary predicates, sentences map to propositions, NPs map to terms, and minor categories are used in building major categories.</p></li><li><p>*A simple grammar and lexicon for semantic interpretation. By using features, logical forms can be computed during parsing. The main extension needed is a SEM feature, which is added to each lexical entry and rule. Example: (S SEM (?semvp ?semnp)) → (NP SEM ?semnp) (VP SEM ?semvp). With an NP whose SEM is (NAME m1 Mary) and a VP whose SEM is (λ a (SEES1 e8 a (NAME j1 Jack))), the SEM feature of S is ((λ a (SEES1 e8 a (NAME j1 Jack))) (NAME m1 Mary)), which is reduced to (SEES1 e8 (NAME m1 Mary) (NAME j1 Jack)).</p></li><li><p>*Mary sees Jack</p></li><li><p>*Sample lexicon with SEM features</p></li><li><p>*A sample grammar with SEM and VAR features. A feature called VAR is also needed to store the discourse variable that corresponds to the constituent. The VAR feature is automatically generated when a lexical constituent is constructed from a word. The VAR feature is then passed up the parse tree by being treated as a head feature. This guarantees that the discourse variables are always unique. If ?v is m1, the SEM of the man is &lt;THE m1 (MAN1 m1)&gt;.</p></li><li><p>*Morphological rules with SEM feature. The lexical rules for morphological derivations must also be modified to handle SEM features. For instance, the rule that converts singular nouns to plural ones takes the SEM of the singular noun and adds a PLUR operator (L7). A similar technique is used for tense operators (L1, L2, and L3).</p></li><li><p>*Using chart parsing for semantic interpretation. Only two simple modifications are needed for the standard chart parser to handle semantic interpretation: When a 
lexical rule is instantiated, the VAR feature is set to a new discourse variable. Whenever a constituent is built, its SEM is simplified by any possible lambda reductions.</p></li><li><p>*Jill saw the dog</p></li><li><p>*Handling auxiliary verbs. (VP SEM (λ a1 (?semaux (?semvp a1)))) → (AUX SUBCAT ?v SEM ?semaux) (VP VFORM ?v SEM ?semvp). If ?semaux is the modal operator CAN1 and ?semvp is (λ x (LAUGHS1 e3 x)), then the SEM of the VP can laugh will be (λ a1 (CAN1 ((λ x (LAUGHS1 e3 x)) a1))) = (λ a1 (CAN1 (LAUGHS1 e3 a1))).</p></li><li><p>*Handling prepositional phrases. Prepositional phrases play two different semantic roles: A PP can be a modifier of a noun phrase or a verb phrase (cry in the corner). A PP may also be needed by a head word, in which case the preposition acts more as a term than as an independent predicate (Jill decided on a couch).</p></li><li><p>*PP as a modifier of a noun phrase. The appropriate rule for interpreting the PP is (PP SEM (λ y (?semp y ?semnp))) → (P SEM ?semp) (NP SEM ?semnp). If the SEM of P is IN-LOC1 and the SEM of the NP is &lt;THE c1 CORNER1&gt;, then the SEM of the PP in the corner will be (λ y (IN-LOC1 y &lt;THE c1 CORNER1&gt;)).</p></li><li><p>*PP as a modifier of a noun phrase. If the PP is used in the rule (CNP SEM (λ n (&amp; (?semcnp n) (?sempp n)))) → (CNP SEM ?semcnp) (PP SEM ?sempp), then the SEM of the CNP man in the corner will be (λ n (&amp; (MAN1 n) ((λ y (IN-LOC1 y &lt;THE c1 CORNER1&gt;)) n))), which is reduced to (λ n (&amp; (MAN1 n) (IN-LOC1 n &lt;THE c1 CORNER1&gt;))). Then, using the rule (NP VAR ?v SEM &lt;THE ?v ?semcnp&gt;) → (ART ROOT the) (CNP SEM ?semcnp), the SEM of the full NP the man in the corner can be built.</p></li><li><p>*PP as a modifier of a verb phrase. The appropriate rule is (VP VAR ?v SEM (λ x (&amp; (?semvp x) (?sempp ?v)))) → (VP VAR ?v SEM ?semvp) (PP SEM ?sempp). The SEM of the PP in the corner is (λ y (IN-LOC1 y &lt;THE c1 CORNER1&gt;)), and the SEM of the VP cry is (λ x (CRIES1 e1 x)), so the SEM of the VP cry in the corner will be (λ a (&amp; (CRIES1 e1 a) (IN-LOC1 e1 &lt;THE c1 CORNER1&gt;))).</p></li><li><p>*Parse tree of Cry in the corner</p></li><li><p>*PP as a subconstituent of a head word. Jill decided on a couch is ambiguous between: Jill made a decision while she was on a couch, and Jill made a decision about a couch. The appropriate rule for the second reading is VP → V[_pp:on] NP 
PP[on]. The desired logical form for this is (λ s (DECIDES-ON1 d1 s &lt;A c1 COUCH1&gt;)). Here the preposition on doesn't make any independent semantic contribution (it marks only a term). A new binary feature PRED is needed to determine the role of the preposition (i.e., a predicate (+) or a term (-)).</p></li><li><p>*Rules for handling PPs</p></li><li><p>*Different parse trees for the VP decide on the couch</p></li><li><p>*Lexicalized semantic interpretation. So far, all the complexity of semantic interpretation has been encoded in the grammar rules. A different approach is to encode the complexity in the lexical entries and keep the rules simple. The verb decide has two senses, with two different SEM values. Intransitive sense: (λ y (DECIDES1 e1 y)). Transitive sense: (λ o (λ y (DECIDES-ON1 e1 y o))).</p></li><li><p>*Lexicalized semantic interpretation. There is a trade-off between the complexity of the grammatical rules and the complexity of the lexical entries. The more complex the logical forms are, the more attractive the second approach becomes. For instance, to handle thematic roles with simple lexical entries, we need extra rules to classify the different types of verbs.</p></li><li><p>*Lexicalized semantic interpretation. See and eat both have transitive forms where the subject fills the AGENT role and the object fills the THEME role. Break also has a transitive sense, but there the subject fills the INSTR role and the object fills the THEME role. We need a new feature ROLES added to the lexical entries to identify the appropriate form, and a separate rule for each.</p></li><li><p>*Lexicalized semantic interpretation. (VP VAR ?v SEM (λ a (?semv ?v [AGENT a] [THEME ?semnp]))) → (V ROLES AGENT-THEME SEM ?semv) (NP SEM ?semnp). (VP VAR ?v SEM (λ a (?semv ?v [INSTR a] [THEME ?semnp]))) → (V ROLES INSTR-THEME SEM ?semv) (NP SEM ?semnp). Additional rules must be added for all the combinations of roles that verbs can use. This approach is cumbersome, because it requires adding thematic role information to the lexicon anyway (using the ROLES 
feature)</p></li><li><p>*Lexicalized semantic interpretation. It is simpler to enter the appropriate form only in the lexicon. See: (V VAR ?v SEM (λ o (λ a (SEES1 ?v [AGENT a] [THEME o])))). Break: (V VAR ?v SEM (λ o (λ a (BREAKS1 ?v [INSTR a] [THEME o])))). In this case, just a single rule is needed: (VP SEM (?semv ?semnp)) → (V SEM ?semv) (NP SEM ?semnp). The SEM of the VP see the book is ((λ o (λ a (SEES1 b1 [AGENT a] [THEME o]))) &lt;THE b2 BOOK1&gt;) = (λ a (SEES1 b1 [AGENT a] [THEME &lt;THE b2 BOOK1&gt;])). The SEM of the VP break the book is ((λ o (λ a (BREAKS1 b1 [INSTR a] [THEME o]))) &lt;THE b2 BOOK1&gt;) = (λ a (BREAKS1 b1 [INSTR a] [THEME &lt;THE b2 BOOK1&gt;])).</p></li><li><p>*Hierarchical lexicons. The problem with making the lexicon more complex is that there are too many words. The verb give allows: I gave the money; I gave John the money; I gave the money to John. The lexical entries for give include: (V SUBCAT _np SEM λ o λ a (GIVE * [AGENT a] [THEME o])); (V SUBCAT _np_np SEM λ r λ o λ a (GIVE * [AGENT a] [THEME o] [TO-POSS r])); (V SUBCAT _np_pp:to SEM λ r λ o λ a (GIVE * [AGENT a] [THEME o] [TO-POSS r])).</p></li><li><p>*Hierarchical lexicons. This causes a lot of redundancy in the lexicon. A better idea is to exploit the regularities of English verbs. A very large class of transitive verbs that describe an action (give, take, see, find, paint, etc.) use the same semantic interpretation rule for SUBCAT _np.</p></li><li><p>*Hierarchical lexicons. The idea of a hierarchical lexicon is to organize verb senses in such a way that their shared properties can be reused (via inheritance). INTRANS-ACT defines verbs with SUBCAT _none: λ s (?PREDN * [AGENT s]). TRANS-ACT defines verbs with SUBCAT _np: λ o λ a (?PREDN * [AGENT a] [THEME o]).</p><p>Give: (V ROOT give PREDN GIVE1 SUP {BITRANS-TO-ACT TRANS-ACT})</p></li><li><p>*Lexical hierarchy</p></li><li><p>*Handling simple questions. To handle simple questions, we only need to extend the appropriate grammar rules with the SEM feature. The lexical entries for the wh-words are extended with the SEM feature. who: (PRO WH {Q R} SEM WHO1 AGR {3s 3p}). But how do the SEM and GAP 
features interact?</p></li><li><p>*Who did Jill see?</p></li><li><p>*Prepositional phrase Wh-Questions. Questions can begin with prepositional phrases: In which box did you put the book? Where did Jill go? The semantic interpretation of these questions depends on the type of the PP.</p><p>(S INV - SEM (WH-query ?sems)) → (PP WH Q PRED ?p PTYPE ?pt SEM ?sempp) (S INV + SEM ?sems GAP (PP PRED ?p PTYPE ?pt SEM ?sempp))</p></li><li><p>*Prepositional phrase Wh-Questions. To handle wh-terms like where, the following rule is also needed: (PP PRED ?p PTYPE ?pt SEM ?sempp) → (PP-WRD PRED ?p PTYPE ?pt SEM ?sempp). There would also be two lexical entries: (PP-WRD PTYPE {LOC MOT} PRED - VAR ?v SEM &lt;WH ?v LOC1&gt;) and (PP PRED + VAR ?v SEM (λ x (AT-LOC x &lt;WH ?v LOC1&gt;))).</p></li><li><p>*Where did Jill go? But wh-questions starting with a PP with PRED + cannot be handled yet. The difficulty comes from the restriction that in the rule VP → VP PP, the GAP is passed only into the VP subconstituent. Thus there is still no way to create a PP gap that modifies a verb phrase.</p></li><li><p>*Relative Clauses. The similarity between relative clauses and wh-questions continues at the semantic level, too. The logical form of a relative clause is a unary predicate, which is produced by (CNP SEM (λ ?x (&amp; (?semcnp ?x) (?semrel ?x)))) → (CNP SEM ?semcnp) (REL SEM ?semrel). When the embedded sentence is completed, a lambda is wrapped around it to form the relative clause.</p></li><li><p>*Who Jill saw. The wh-term variable is specified in a feature called RVAR in the rule (REL SEM (λ ?v ?sems)) → (NP WH R RVAR ?v AGR ?a SEM ?semnp) (S [fin] INV - GAP (NP AGR ?a SEM ?semnp) SEM ?sems).</p></li><li><p>*Semantic Interpretation by Feature Unification versus Lambda Reduction. Semantic interpretation can be performed by just using feature values and variables. The basic idea is to introduce new features for the argument positions that would otherwise have been filled by lambda reductions.</p></li><li><p>*Jill saw the dog</p></li><li><p>*Features versus Lambda 
expressions</p></li><li><p>*Features versus Lambda expressions. One advantage of using features is that no special mechanism (e.g., lambda reduction) is needed for semantic interpretation. Another significant advantage is that a grammar in this form is reversible (it can be used to generate sentences). But not all lambda expressions can be eliminated using this technique. When handling conjoined subject phrases, as in Sue and Sam saw Jack, the SUBJ variable would need to unify with both Sue and Sam, which is not possible.</p></li><li><p>*Generating sentences from Logical Forms. Intuitively, it should be easy to reverse a grammar and use it for generation: decompose the logical form of each constituent into a series of lexical constituents with the appropriate meanings. But not all grammars are reversible, e.g., a grammar that relies on lambda reduction.</p></li><li><p>*Generating sentences from Logical Forms. Consider (&lt;PAST SEES1&gt; s1 (NAME j1 Jill) &lt;THE d1 DOG1&gt;). The S rule with SEM (?semvp ?semnp) cannot be unified with this logical form. The problem is that lambda reduction was used to convert the original logical form, which was ((λ a (&lt;PAST SEES1&gt; s1 a &lt;THE d1 DOG1&gt;)) (NAME j1 Jill)). Lambda abstraction can be used to find a match, but the problem is that there are three possible lambda abstractions: (λ e (&lt;PAST SEES1&gt; e (NAME j1 Jill) &lt;THE d1 DOG1&gt;)), (λ a (&lt;PAST SEES1&gt; s1 a &lt;THE d1 DOG1&gt;)), and (λ o (&lt;PAST SEES1&gt; s1 (NAME j1 Jill) o)).</p></li><li><p>*Realization versus Parsing. Parsing and realization can both be viewed as building a syntactic tree. A parser starts with the words and tries to find a tree that accounts for them, and hence determines the logical form of the sentence. A realizer starts with a logical form and tries to find a tree that accounts for it, and hence determines the words that realize it.</p></li><li><p>*Realization and Parsing. Using the standard top-down parsing algorithm for realization is extremely inefficient. Consider again (&lt;PAST SEES1&gt; s1...</p></li></ul>
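The lambda-reduction step that the slides apply to SEM values (for example, reducing loved Sue applied to Jack) can be sketched in a few lines of Python. This is a minimal illustrative sketch, not anything from Allen or J&amp;M: the tuple encoding of logical forms and the helper names (`substitute`, `reduce_once`) are assumptions made for the example.

```python
# Minimal sketch of lambda reduction over s-expression logical forms,
# mirroring (lambda x (LOVES1 l1 x (NAME s1 Sue))) applied to (NAME j1 Jack).
# Logical forms are nested tuples; lambdas are ('lambda', var, body).

def substitute(expr, var, value):
    """Replace every free occurrence of `var` in `expr` with `value`."""
    if expr == var:
        return value
    if isinstance(expr, tuple):
        if expr[0] == 'lambda' and expr[1] == var:
            return expr  # `var` is rebound inside this lambda; leave it alone
        return tuple(substitute(e, var, value) for e in expr)
    return expr

def reduce_once(application):
    """One lambda reduction: ((lambda x body) arg) -> body with x := arg."""
    fn, arg = application
    assert fn[0] == 'lambda', "left element must be a lambda expression"
    _, var, body = fn
    return substitute(body, var, arg)

# SEM of the VP "loved Sue": a unary predicate awaiting its subject
loved_sue = ('lambda', 'x', ('LOVES1', 'l1', 'x', ('NAME', 's1', 'Sue')))
jack = ('NAME', 'j1', 'Jack')

print(reduce_once((loved_sue, jack)))
# -> ('LOVES1', 'l1', ('NAME', 'j1', 'Jack'), ('NAME', 's1', 'Sue'))
```

The same helper covers the chart-parser modification the slides describe (simplifying each new constituent's SEM by any possible lambda reductions): whenever a rule builds a SEM of the form (?semvp ?semnp), `reduce_once` yields the flattened logical form.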