1
SIMS 290-2: Applied Natural Language Processing
Marti Hearst
Sept 15, 2004
2
Class Pace and Schedule
Need a foundation before you can do anything interesting:
Tokenizing, tagging, regexes
Text classification principles and techniques
Training vs. testing, processing corpora
Through (approximately) the 6th week, keep doing exercises from the NLTK tutorials to build that foundation.
2 more homeworks
I'm trying to make them bite-sized pieces
Weeks 7–10: Group miniproject on the Enron corpus
Will involve classification or information extraction
Different groups will do different things
May have a homework within this timeframe
Weeks 11–15: Another miniproject
Either on the Enron project or your choice
I will suggest ideas; you can propose them too
May also have 1–2 other homeworks in this timeframe
3
Language Modeling
A fundamental concept in NLP
Main idea:
For a given language, some words are more likely than others to follow each other, or
You can predict (with some degree of accuracy) the probability that a given word will follow another word.
Illustration: distributions of words in the class-participation exercise.
4
Adapted from slide by Bonnie Dorr
Next Word Prediction
From a NY Times story...
Stocks ...
Stocks plunged this ...
Stocks plunged this morning, despite a cut in interest rates
Stocks plunged this morning, despite a cut in interest rates by the Federal Reserve, as Wall ...
Stocks plunged this morning, despite a cut in interest rates by the Federal Reserve, as Wall Street began
5
Adapted from slide by Bonnie Dorr
Stocks plunged this morning, despite a cut in interest rates by the Federal Reserve, as Wall Street began trading for the first time since last ...
Stocks plunged this morning, despite a cut in interest rates by the Federal Reserve, as Wall Street began trading for the first time since last Tuesday's terrorist attacks.
6
Adapted from slide by Bonnie Dorr
Human Word Prediction
Clearly, at least some of us have the ability to predict future words in an utterance.
How?
Domain knowledge
Syntactic knowledge
Lexical knowledge
7
Adapted from slide by Bonnie Dorr
Claim
A useful part of the knowledge needed to allow word prediction can be captured using simple statistical techniques.
In particular, we'll rely on the notion of the probability of a sequence (a phrase, a sentence).
8
Adapted from slide by Bonnie Dorr
Applications
Why do we want to predict a word, given some preceding words?
Rank the likelihood of sequences containing various alternative hypotheses, e.g. for ASR:
Theatre owners say popcorn/unicorn sales have doubled...
Assess the likelihood/goodness of a sentence, e.g. for text generation or machine translation:
The doctor recommended a cat scan.
El doctor recomendó una exploración del gato. (the mistranslation reads as "an examination of the cat")
9
Adapted from slide by Bonnie Dorr
N-Gram Models of Language
Use the previous N-1 words in a sequence to predict the next word
Language Model (LM)
unigrams, bigrams, trigrams,…
How do we train these models?
Very large corpora
10
Adapted from slide by Bonnie Dorr
Simple N-Grams
Assume a language has V word types in its lexicon. How likely is word x to follow word y?
Simplest model of word probability: 1/V
Alternative 1: estimate the likelihood of x occurring in new text based on its general frequency of occurrence estimated from a corpus (unigram probability)
popcorn is more likely to occur than unicorn
Alternative 2: condition the likelihood of x occurring in the context of previous words (bigrams, trigrams,…)
mythical unicorn is more likely than mythical popcorn
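A minimal sketch of these alternatives in Python (the toy corpus below is hypothetical, invented so the counts come out as in the popcorn/unicorn example):

from collections import Counter

# Hypothetical toy corpus, just to make the counts concrete
corpus = ("the mythical unicorn ate popcorn . popcorn is popular . "
          "people buy popcorn . the mythical unicorn ran .").split()

unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
V = len(unigrams)                         # number of word types

print(1 / V)                              # simplest model: every type equally likely
print(unigrams["popcorn"] / len(corpus))  # unigram: popcorn is more frequent...
print(unigrams["unicorn"] / len(corpus))  # ...than unicorn
print(bigrams[("mythical", "unicorn")] / unigrams["mythical"])  # bigram: 1.0
print(bigrams[("mythical", "popcorn")] / unigrams["mythical"])  # bigram: 0.0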
11
A Word on Notation
P(unicorn)
Read this as "the probability of seeing the token unicorn"
Unigram tagger uses this.
P(unicorn|mythical)
Called the conditional probability.
Read this as "the probability of seeing the token unicorn given that you've seen the token mythical"
Bigram tagger uses this.
Related to the conditional frequency distributions that we've been working with.
12
Adapted from slide by Bonnie Dorr
Computing the Probability of a Word Sequence
Compute the product of component conditional probabilities?
P(the mythical unicorn) = P(the) P(mythical|the) P(unicorn|the mythical)
The longer the sequence, the less likely we are to find it in a training corpus
P(Most biologists and folklore specialists believe that in fact the mythical unicorn horns derived from the narwhal)
Solution: approximate using n-grams
13
Adapted from slide by Bonnie Dorr
Bigram Model
Approximate P(unicorn|the mythical) by P(unicorn|mythical)
Markov assumption: The probability of a word depends only on the probability of a limited history
Generalization: The probability of a word depends only on the probability of the n previous words
– trigrams, 4-grams, …
– the higher n is, the more data needed to train
– backoff models
P(wn | w1,...,wn-1) ≈ P(wn | wn-1)
14
Adapted from slide by Bonnie Dorr
Using N-Grams
For N-gram models
P(wn-1,wn) = P(wn | wn-1) P(wn-1)
By the Chain Rule we can decompose a joint probability, e.g. P(w1,w2,w3)
P(w1,w2,...,wn) = P(w1|w2,w3,...,wn) P(w2|w3,...,wn) … P(wn-1|wn) P(wn)
For bigrams then, the probability of a sequence is just the product of the conditional probabilities of its bigrams
P(the,mythical,unicorn) = P(unicorn|mythical) P(mythical|the) P(the|<start>)
P(w1,...,wn) = ∏k=1…n P(wk | wk-1)
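As a sketch, the product above is only a few lines of Python (the probability table is hypothetical; real numbers would be estimated from a corpus, as in the BeRP example that follows):

# Hypothetical bigram probabilities, for illustration only
bigram_prob = {
    ("<start>", "the"): 0.20,
    ("the", "mythical"): 0.01,
    ("mythical", "unicorn"): 0.50,
}

def sequence_prob(words, probs):
    # P(w1...wn) approximated as the product of P(wk | wk-1)
    p, prev = 1.0, "<start>"
    for w in words:
        p *= probs.get((prev, w), 0.0)  # unseen bigram -> probability 0
        prev = w
    return p

print(sequence_prob(["the", "mythical", "unicorn"], bigram_prob))
# 0.20 * 0.01 * 0.50 = 0.001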
15
Adapted from slide by Bonnie Dorr
Training and Testing
N-Gram probabilities come from a training corpus
overly narrow corpus: probabilities don't generalize
overly general corpus: probabilities don't reflect task or domain
A separate test corpus is used to evaluate the model, typically using standard metrics
held-out test set; development test set
cross-validation
results tested for statistical significance
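A minimal sketch of these setups (the 80/10/10 split and k = 5 are common conventions, not prescribed by the slides):

import random

sentences = [f"sentence {i}" for i in range(100)]  # stand-in for a real corpus
random.shuffle(sentences)

# Held-out split: train on most of the data, tune on dev, evaluate once on test
n = len(sentences)
train = sentences[: int(0.8 * n)]
dev = sentences[int(0.8 * n): int(0.9 * n)]
test = sentences[int(0.9 * n):]

# k-fold cross-validation: every sentence serves as test data exactly once
k = 5
folds = [sentences[i::k] for i in range(k)]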
16
Adapted from slide by Bonnie Dorr
A Simple Example
From BeRP: The Berkeley Restaurant Project
A testbed for a Speech Recognition project
System prompts user for information in order to fill in slots in a restaurant database:
Type of food, hours open, how expensive
After getting lots of input, can compute how likely it is that someone will say X given that they already said Y.
P(I want to eat Chinese food) = P(I | <start>) P(want | I) P(to | want) P(eat | to) P(Chinese | eat) P(food | Chinese)
17
Adapted from slide by Bonnie Dorr
A Bigram Grammar Fragment from BeRP
.001  Eat British     .03  Eat today
.007  Eat dessert     .04  Eat Indian
.01   Eat tomorrow    .04  Eat a
.02   Eat Mexican     .04  Eat at
.02   Eat Chinese     .05  Eat dinner
.02   Eat in          .06  Eat lunch
.03   Eat breakfast   .06  Eat some
.03   Eat Thai        .16  Eat on
18
Adapted from slide by Bonnie Dorr
.01  British lunch       .05  Want a
.01  British cuisine     .65  Want to
.15  British restaurant  .04  I have
.60  British food        .08  I don't
.02  To be               .29  I would
.09  To spend            .32  I want
.14  To have             .02  <start> I'm
.26  To eat              .04  <start> Tell
.01  Want Thai           .06  <start> I'd
.04  Want some           .25  <start> I
19
Adapted from slide by Bonnie Dorr
P(I want to eat British food) = P(I|<start>) P(want|I) P(to|want) P(eat|to) P(British|eat) P(food|British)
= .25*.32*.65*.26*.001*.60 ≈ .0000081
vs. P(I want to eat Chinese food) ≈ .00015
Probabilities seem to capture "syntactic" facts and "world knowledge":
eat is often followed by an NP
British food is not too popular
N-gram models can be trained by counting and normalization, as sketched below
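Here is "counting and normalization" as a short sketch (the one-sentence training corpus is a toy; a real model needs a large corpus):

from collections import Counter

tokens = "<start> i want to eat chinese food".split()  # toy training data

bigram_counts = Counter(zip(tokens, tokens[1:]))       # counting
context_counts = Counter(tokens[:-1])

# normalization: P(w2 | w1) = count(w1 w2) / count(w1)
bigram_prob = {(w1, w2): c / context_counts[w1]
               for (w1, w2), c in bigram_counts.items()}

print(bigram_prob[("want", "to")])  # 1.0 in this one-sentence corpus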
20
Adapted from slide by Bonnie Dorr
What do we learn about the language?
What's being captured with ...
P(want | I) = .32
P(to | want) = .65
P(eat | to) = .26
P(food | Chinese) = .56
P(lunch | eat) = .055
What about...
P(I | I) = .0023
P(I | want) = .0025
P(I | food) = .013
21
Modified from Massimo Poesio's lecture
Tagging with lexical frequencies
Secretariat/NNP is/VBZ expected/VBN to/TO race/VB tomorrow/NN
People/NNS continue/VBP to/TO inquire/VB the/DT reason/NN for/IN the/DT race/NN for/IN outer/JJ space/NN
Problem: assign a tag to race given its lexical frequency
Solution: choose the tag that has the greater probability:
P(race|VB): probability of the word "race" given the tag VB
P(race|NN): probability of the word "race" given the tag NN
Actual estimates from the Switchboard corpus:
P(race|NN) = .00041
P(race|VB) = .00003
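A tiny sketch of that decision rule, with the Switchboard estimates hard-coded (the dictionary layout and pick_tag function are illustrative, not a real tagger API):

likelihood = {("race", "NN"): 0.00041, ("race", "VB"): 0.00003}

def pick_tag(word, candidate_tags):
    # choose the tag under which the observed word is most probable
    return max(candidate_tags, key=lambda t: likelihood.get((word, t), 0.0))

print(pick_tag("race", ["NN", "VB"]))  # NN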
22
Modified from Diane Litman's version of Steve Bird's notes
Combining Taggers
Use more accurate algorithms when we can; back off to wider coverage when needed.
Try tagging the token with the 1st order tagger. If the 1st order tagger is unable to find a tag for the token, try finding a tag with the 0th order tagger. If the 0th order tagger is also unable to find a tag, use the NN_CD_Tagger to find a tag.
23
Modified from Diane Litman's version of Steve Bird's notes
BackoffTagger class

>>> train_toks = TaggedTokenizer().tokenize(tagged_text_str)

# Construct the taggers
>>> tagger1 = NthOrderTagger(1, SUBTOKENS='WORDS')
>>> tagger2 = UnigramTagger()  # 0th order
>>> tagger3 = NN_CD_Tagger()

# Train the taggers
>>> for tok in train_toks:
...     tagger1.train(tok)
...     tagger2.train(tok)
24
Modified from Diane Litman's version of Steve Bird's notes
Backoff (continued)
# Combine the taggers (in order, by specificity)
>>> tagger = BackoffTagger([tagger1, tagger2, tagger3])

# Use the combined tagger
>>> accuracy = tagger_accuracy(tagger, unseen_tokens)
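The listing above uses the 2004-era NLTK API, which no longer exists. A rough modern equivalent in NLTK 3.x, assuming the Brown corpus has been downloaded, might look like this:

import nltk
from nltk.corpus import brown

tagged_sents = brown.tagged_sents(categories="news")
train, test = tagged_sents[:3000], tagged_sents[3000:3500]

t0 = nltk.DefaultTagger("NN")               # roughly the NN_CD_Tagger's role
t1 = nltk.UnigramTagger(train, backoff=t0)  # 0th order
t2 = nltk.BigramTagger(train, backoff=t1)   # 1st order

print(t2.accuracy(test))  # older NLTK 3 releases call this .evaluate(test)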
25
Modified from Diane Litman's version of Steve Bird's notes
Rule-Based Tagger
The Linguistic Complaint
Where is the linguistic knowledge of a tagger?
Just a massive table of numbers
Aren't there any linguistic insights that could emerge from the data?
Could thus use handcrafted sets of rules to tag input sentences, for example: if a word follows a determiner, tag it as a noun.
26
Slide modified from Massimo Poesio's
The Brill tagger
An example of TRANSFORMATION-BASED LEARNING
Very popular (freely available, works fairly well)
A SUPERVISED method: requires a tagged corpus
Basic idea: do a quick job first (using frequency), then revise it using contextual rules
27
Brill Tagging: In more detail
Start with simple (less accurate) rules... learn better ones from the tagged corpus:
Tag each word initially with its most likely POS
Examine the set of transformations to see which most improves tagging decisions compared to the tagged corpus
Re-tag the corpus using the best transformation
Repeat until, e.g., performance doesn't improve
Result: a tagging procedure (an ordered list of transformations) which can be applied to new, untagged text, as sketched below
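A runnable miniature of this loop, using toy data and a single rule template ("change tag A to B when the previous tag is C"); this is an illustration of the idea, not Brill's actual implementation:

words = ["They", "are", "expected", "to", "race", "tomorrow"]
gold  = ["PRP", "VBP", "VBN", "TO", "VB", "NN"]
tags  = ["PRP", "VBP", "VBN", "TO", "NN", "NN"]   # initial most-likely tags

def errors(t):
    return sum(a != b for a, b in zip(t, gold))

def apply_rule(rule, t):
    a, b, prev = rule
    return [b if tag == a and i > 0 and t[i - 1] == prev else tag
            for i, tag in enumerate(t)]

# candidate transformations: (from_tag, to_tag, previous_tag)
tagset = set(gold)
candidates = [(a, b, p) for a in tagset for b in tagset for p in tagset if a != b]

learned = []
while True:
    best = min(candidates, key=lambda r: errors(apply_rule(r, tags)))
    if errors(apply_rule(best, tags)) >= errors(tags):
        break                          # no transformation improves the tagging
    tags = apply_rule(best, tags)      # re-tag the corpus with the best rule
    learned.append(best)               # ordered list of transformations

print(learned)  # e.g. [('NN', 'VB', 'TO')]
print(tags)     # race is now tagged VB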
28
Slide modified from Massimo Poesio's
An example
Examples:
They are expected to race tomorrow.
The race for outer space.
Tagging algorithm:
1. Tag all uses of "race" as NN (most likely tag in the Brown corpus):
• They are expected to race/NN tomorrow
• the race/NN for outer space
2. Use a transformation rule to replace the tag NN with VB for all uses of "race" preceded by the tag TO:
• They are expected to race/VB tomorrow
• the race/NN for outer space
29
First 20 Transformation Rules
From: Eric Brill. Transformation-Based Error-Driven Learning and Natural Language Processing: A Case Study in Part-of-Speech Tagging. Computational Linguistics, December 1995.
30
Transformation Rules for Tagging Unknown Words
From: Eric Brill. Transformation-Based Error-Driven Learning and Natural Language Processing: A Case Study in Part-of-Speech Tagging. Computational Linguistics, December 1995.
31
Adapted from Massimo Poesio's
Additional issues
Most of the difference in performance between POS algorithms depends on their treatment of UNKNOWN WORDS
Class-based N-grams
32
Modified from Diane Litman's version of Steve Bird's notes
Evaluating a Tagger
Tagged tokens – the original data
Untag (exclude) the data
Tag the data with your own tagger
Compare the original and new tags:
Iterate over the two lists checking for identity and counting
Accuracy = fraction correct (sketched below)
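A sketch of that recipe (tag_word is a hypothetical stand-in for whatever tagger you built):

def tag_word(w):
    return "NN"  # toy tagger: tags everything as NN

gold = [("the", "DT"), ("race", "NN"), ("is", "VBZ"), ("on", "IN")]

words = [w for w, _ in gold]                   # untag (exclude) the data
predicted = [(w, tag_word(w)) for w in words]  # tag with your own tagger

# iterate over the two lists checking for identity and counting
correct = sum(g == p for g, p in zip(gold, predicted))
print(correct / len(gold))                     # accuracy = fraction correct: 0.25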
33
Assessing the Errors
Why the tuple method? Dictionaries cannot be indexed by lists, so convert lists to tuples.
exclude returns a new token containing only the properties that are not named in the given list.
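A minimal illustration of the tuple point (toy tag pairs, invented for the example): tuples are hashable and so can key a dictionary, while lists cannot.

from collections import Counter

pairs = [("NN", "NN"), ("NN", "VB"), ("VB", "VB"), ("NN", "VB")]  # (gold, predicted)

errors = Counter()
for gold, pred in pairs:
    if gold != pred:
        errors[(gold, pred)] += 1  # a list key here would raise TypeError

print(errors.most_common())  # [(('NN', 'VB'), 2)]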
34
Assessing the Errors
35
Upcoming
First assignment due 8pm tonight
Turn in on the course Assignments page
For next week:
Read the Chunking tutorial. (The PDF version has the missing images.)
http://nltk.sourceforge.net/tutorial/chunking.pdf
We'll have an assignment getting practice with this.