Language Modeling
Page 1
Language Modeling
Page 2
Next Word Prediction
From a NY Times story...
• Stocks ...
• Stocks plunged this ….
• Stocks plunged this morning, despite a cut in interest rates
• Stocks plunged this morning, despite a cut in interest rates by the Federal Reserve, as Wall ...
• Stocks plunged this morning, despite a cut in interest rates by the Federal Reserve, as Wall Street began
Page 3
• Stocks plunged this morning, despite a cut in interest rates by the Federal Reserve, as Wall Street began trading for the first time since last …
• Stocks plunged this morning, despite a cut in interest rates by the Federal Reserve, as Wall Street began trading for the first time since last Tuesday's terrorist attacks.
Page 4
Human Word Prediction
Clearly, at least some of us have the ability to predict future words in an utterance. How?
• Domain knowledge
• Syntactic knowledge
• Lexical knowledge
Page 5
Claim
A useful part of the knowledge needed to allow word prediction can be captured using simple statistical techniques.
In particular, we'll rely on the notion of the probability of a sequence (a phrase, a sentence).
Page 6
N-grams
N-grams: 'N words'? An N-gram is a sequence of N consecutive words that one can find in a given corpus or set of documents.
An N-gram model is a probabilistic model that computes the probability of the Nth word occurring after the preceding N-1 words; it is also known as a language model in speech recognition.
Page 7
Some Interesting Facts
The most frequent 250 words account for approximately 50% of all tokens in any random text.
'the' is usually the most frequent word; 'the' occurs 69,971 times in the 1-million-word Brown corpus (about 7%).
The top 20 words in one year of the Wall Street Journal are:
• the, of, to, in, and, for, that, is, on, it, by, with, as, at, said, mister, from, its, are, he, million
Page 8
N-Gram Models of Language
Use the previous N-1 words in a sequence to predict the next word.
Language Model (LM)
• unigrams, bigrams, trigrams, …
How do we train these models?
• Very large corpora
Page 9
Applications
Why do we want to predict a word, given some preceding words?
• Rank the likelihood of sequences containing various alternative hypotheses, e.g. for ASR: Theatre owners say popcorn/unicorn sales have doubled...
• Assess the likelihood/goodness of a sentence, e.g. for text generation or machine translation: The doctor recommended a cat scan.
Page 10
Counting Words in Corpora
What is a word?
• e.g., are cat and cats the same word?
• September and Sept?
• zero and oh?
• Is _ a word? * ? '(' ?
• How many words are there in don't? Gonna?
Page 11
Terminology
Sentence: unit of written language
Utterance: unit of spoken language
Types: number of distinct words in a corpus (vocabulary size)
Tokens: total number of words
Page 12
Corpora
Corpora are collections of text and speech
• Brown Corpus
• Wall Street Journal
• AP news
• DARPA/NIST text/speech corpora (Call Home, ATIS, Switchboard, Broadcast News, TDT, Communicator)
• TRAINS, Radio News
Page 13
Simple N-Grams
Assume a language has V word types in its lexicon; how likely is word x to follow word y?
• Simplest model of word probability: 1/V
• Alternative 1: estimate the likelihood of x occurring in new text based on its general frequency of occurrence estimated from a corpus (unigram probability): popcorn is more likely to occur than unicorn
• Alternative 2: condition the likelihood of x occurring on the context of previous words (bigrams, trigrams, …): mythical unicorn is more likely than mythical popcorn
Page 14
Computing the Probability of a Word Sequence
Conditional probability:
• P(A1, A2) = P(A1) P(A2|A1)
The Chain Rule generalizes to multiple events:
• P(A1, …, An) = P(A1) P(A2|A1) P(A3|A1, A2) … P(An|A1, …, An-1)
Compute the product of component conditional probabilities:
• P(the mythical unicorn) = P(the) P(mythical|the) P(unicorn|the mythical)
The longer the sequence, the less likely we are to find it in a training corpus:
P(Most biologists and folklore specialists believe that in fact the mythical unicorn horns derived from the narwhal)
Solution: approximate using n-grams
Page 15
Bigram Model
Approximate P(unicorn|the mythical) by P(unicorn|mythical).
Markov assumption: the probability of a word depends only on a limited history.
Generalization: the probability of a word depends only on the previous n-1 words.
• trigrams, 4-grams, …
• the higher n is, the more data is needed to train
• backoff models
$P(w_n \mid w_1^{n-1}) \approx P(w_n \mid w_{n-1})$
Page 16
Using N-Grams
For N-gram models:
• P(wn-1, wn) = P(wn | wn-1) P(wn-1)
• By the Chain Rule we can decompose a joint probability, e.g. P(w1, w2, w3):
P(w1, w2, ..., wn) = P(w1 | w2, w3, ..., wn) P(w2 | w3, ..., wn) … P(wn-1 | wn) P(wn)
For bigrams, then, the probability of a sequence is just the product of the conditional probabilities of its bigrams:
P(the, mythical, unicorn) = P(unicorn | mythical) P(mythical | the) P(the | <start>)
$P(w_n \mid w_1^{n-1}) \approx P(w_n \mid w_{n-N+1}^{n-1})$
$P(w_1^n) \approx \prod_{k=1}^{n} P(w_k \mid w_{k-1})$
Page 17
Training and Testing
N-gram probabilities come from a training corpus
• overly narrow corpus: probabilities don't generalize
• overly general corpus: probabilities don't reflect task or domain
A separate test corpus is used to evaluate the model, typically using standard metrics
• held-out test set; development test set
• cross validation
• results tested for statistical significance
Page 18
A Simple Example
• P(I want to eat Chinese food) = P(I | <start>) P(want | I) P(to | want) P(eat | to) P(Chinese | eat) P(food | Chinese)
Page 19
A Bigram Grammar Fragment from BERP
Eat British    .001      Eat today    .03
Eat dessert    .007      Eat Indian   .04
Eat tomorrow   .01       Eat a        .04
Eat Mexican    .02       Eat at       .04
Eat Chinese    .02       Eat dinner   .05
Eat in         .02       Eat lunch    .06
Eat breakfast  .03       Eat some     .06
Eat Thai       .03       Eat on       .16
Page 20
British lunch       .01      Want a         .05
British cuisine     .01      Want to        .65
British restaurant  .15      I have         .04
British food        .60      I don't        .08
To be               .02      I would        .29
To spend            .09      I want         .32
To have             .14      <start> I'm    .02
To eat              .26      <start> Tell   .04
Want Thai           .01      <start> I'd    .06
Want some           .04      <start> I      .25
Page 21
P(I want to eat British food) = P(I|<start>) P(want|I) P(to|want) P(eat|to) P(British|eat) P(food|British)
= .25 × .32 × .65 × .26 × .001 × .60 ≈ .0000081
vs. P(I want to eat Chinese food) ≈ .00015
The probabilities seem to capture "syntactic" facts and "world knowledge":
• eat is often followed by an NP
• British food is not too popular
N-gram models can be trained by counting and normalizing.
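A minimal sketch of this computation (Python, not from the original slides), with the bigram probabilities above hard-coded; multiplying in log space avoids numerical underflow on longer sentences:

```python
import math

# Bigram probabilities taken from the BERP fragment above.
bigram_prob = {
    ("<start>", "i"): 0.25, ("i", "want"): 0.32, ("want", "to"): 0.65,
    ("to", "eat"): 0.26, ("eat", "british"): 0.001, ("eat", "chinese"): 0.02,
    ("british", "food"): 0.60, ("chinese", "food"): 0.56,
}

def sentence_probability(words, probs):
    """Multiply bigram probabilities; sum logs to avoid underflow on long sentences."""
    tokens = ["<start>"] + [w.lower() for w in words]
    log_p = 0.0
    for prev, cur in zip(tokens, tokens[1:]):
        p = probs.get((prev, cur), 0.0)
        if p == 0.0:
            return 0.0  # unseen bigram: zero probability without smoothing
        log_p += math.log(p)
    return math.exp(log_p)

print(sentence_probability("I want to eat British food".split(), bigram_prob))  # ~8.1e-06
print(sentence_probability("I want to eat Chinese food".split(), bigram_prob))  # ~1.5e-04
```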
Page 22
BERP Bigram Counts
Counts of wn (columns) following wn-1 (rows):

            I      Want    To     Eat    Chinese  Food   Lunch
I           8      1087    0      13     0        0      0
Want        3      0       786    0      6        8      6
To          3      0       10     860    3        0      12
Eat         0      0       2      0      19       2      52
Chinese     2      0       0      0      0        120    1
Food        19     0       17     0      0        0      0
Lunch       4      0       0      0      0        1      0
Page 23
BERP Bigram Probabilities
Normalization: divide each row's counts by the appropriate unigram count for wn-1.
Unigram counts:  I 3437   Want 1215   To 3256   Eat 938   Chinese 213   Food 1506   Lunch 459
Computing the bigram probability of I I:
• C(I, I) / C(all I)
• P(I|I) = 8 / 3437 = .0023
Maximum Likelihood Estimation (MLE): relative frequency, e.g.
$P(w_n \mid w_{n-1}) = \frac{freq(w_{n-1}, w_n)}{freq(w_{n-1})}$
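As a sketch of this counting-and-normalization step (Python; the toy corpus here is an assumption for illustration, not the BERP data):

```python
from collections import Counter

def train_bigram_mle(sentences):
    """Estimate P(w_n | w_{n-1}) as freq(w_{n-1}, w_n) / freq(w_{n-1})."""
    unigram_counts = Counter()
    bigram_counts = Counter()
    for sentence in sentences:
        tokens = ["<start>"] + sentence.lower().split()
        unigram_counts.update(tokens)
        bigram_counts.update(zip(tokens, tokens[1:]))
    return {(prev, cur): count / unigram_counts[prev]
            for (prev, cur), count in bigram_counts.items()}

corpus = ["I want to eat Chinese food", "I want to eat lunch", "I don't want to eat"]
probs = train_bigram_mle(corpus)
print(probs[("i", "want")])   # 2/3: 'want' follows 'i' in 2 of the 3 occurrences of 'i'
```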
Page 24
What do we learn about the language?
What's being captured with...
• P(want | I) = .32
• P(to | want) = .65
• P(eat | to) = .26
• P(food | Chinese) = .56
• P(lunch | eat) = .055
What about...
• P(I | I) = .0023
• P(I | want) = .0025
• P(I | food) = .013
Page 25
• P(I | I) = .0023: "I I I I want"
• P(I | want) = .0025: "I want I want"
• P(I | food) = .013: "the kind of food I want is ..."
Page 26
Approximating Shakespeare
As we increase the value of N, the accuracy of the n-gram model increases, since the choice of next word becomes increasingly constrained.
Generating sentences with random unigrams...
• Every enter now severally so, let
• Hill he late speaks; or! a more to leg less first you enter
With bigrams...
• What means, sir. I confess she? then all sorts, he is trim, captain.
• Why dost stand forth thy canopy, forsooth; he is this palpable hit the King Henry.
Page 27
Trigrams
• Sweet prince, Falstaff shall die.
• This shall forbid it should be branded, if renown made it empty.
Quadrigrams
• What! I will go seek the traitor Gloucester.
• Will you not tell me who I am?
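A minimal sketch of how such sentences can be generated, by sampling each next word in proportion to its bigram count given the previous word (Python; the tiny corpus is an assumption for illustration):

```python
import random
from collections import Counter, defaultdict

def build_bigram_sampler(tokens):
    """Map each word to a Counter of the words that follow it in the corpus."""
    followers = defaultdict(Counter)
    for prev, cur in zip(tokens, tokens[1:]):
        followers[prev][cur] += 1
    return followers

def generate(followers, start="<s>", max_len=15):
    """Sample successive words in proportion to their bigram counts."""
    word, output = start, []
    for _ in range(max_len):
        if word not in followers:
            break
        choices, weights = zip(*followers[word].items())
        word = random.choices(choices, weights=weights)[0]
        if word == "</s>":
            break
        output.append(word)
    return " ".join(output)

tokens = "<s> sweet prince falstaff shall die </s> <s> will you not tell me who i am </s>".split()
print(generate(build_bigram_sampler(tokens)))
```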
Page 28
There are 884,647 tokens, with 29,066 word form types, in the roughly one-million-word Shakespeare corpus.
Shakespeare produced 300,000 bigram types out of 844 million possible bigrams; so 99.96% of the possible bigrams were never seen (have zero entries in the table).
Quadrigrams are worse: what comes out looks like Shakespeare because it is Shakespeare.
Page 29
N-Gram Training Sensitivity
If we repeated the Shakespeare experiment but trained our n-grams on a Wall Street Journal corpus, what would we get?
This has major implications for corpus selection or design.
Page 30
Some Useful Empirical Observations
A small number of events occur with high frequency.
A large number of events occur with low frequency.
You can quickly collect statistics on the high-frequency events.
You might have to wait an arbitrarily long time to get valid statistics on low-frequency events.
Some of the zeroes in the table are really zeros, but others are simply low-frequency events you haven't seen yet. How do we address this?
Page 31
Smoothing Techniques
Every n-gram training matrix is sparse, even for very large corpora.
Solution: estimate the likelihood of unseen n-grams.
Problem: how do you adjust the rest of the corpus to accommodate these 'phantom' n-grams?
Page 32
Add-one Smoothing
For unigrams:
• Add 1 to every word (type) count
• Normalize by N (tokens) / (N (tokens) + V (types))
• The smoothed count (adjusted for additions to N) is
  $c_i^* = (c_i + 1)\frac{N}{N+V}$
• Normalize by N to get the new unigram probability:
  $p_i^* = \frac{c_i + 1}{N+V}$
For higher-order n-grams (here a history xy):
  $P(z \mid xy) = \frac{C(xyz) + 1}{C(xy) + V}$
Add-delta smoothing adds δ instead of 1:
  $P(z \mid xy) = \frac{C(xyz) + \delta}{C(xy) + \delta V}$
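A minimal sketch of add-one (and add-delta) smoothing for bigrams (Python), assuming counts have already been collected as in the MLE example above; the toy counts and vocabulary size are assumptions for illustration:

```python
from collections import Counter

def add_delta_bigram_prob(bigram_counts, unigram_counts, vocab_size, prev, cur, delta=1.0):
    """P(cur | prev) = (C(prev, cur) + delta) / (C(prev) + delta * V).
    delta=1.0 gives add-one (Laplace) smoothing."""
    return (bigram_counts[(prev, cur)] + delta) / (unigram_counts[prev] + delta * vocab_size)

# Toy counts (assumed for illustration only).
unigram_counts = Counter({"eat": 938, "want": 1215})
bigram_counts = Counter({("eat", "chinese"): 19, ("want", "to"): 786})
V = 1616  # assumed vocabulary size (number of word types)

print(add_delta_bigram_prob(bigram_counts, unigram_counts, V, "eat", "chinese"))  # seen bigram, smoothed
print(add_delta_bigram_prob(bigram_counts, unigram_counts, V, "eat", "unicorn"))  # unseen bigram, still nonzero
```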
Page 33
Witten-Bell Discounting
Basic idea: use the count of things you have seen once to estimate the things you haven't seen, i.e. estimate the probability of seeing something for the first time.
View the training corpus as a series of events, one for each token (N) and one for each new type (T). The probability of seeing something new is
$\frac{T}{N+T}$
This probability mass will be assigned to the unseen cases.
Page 34
Witten-Bell (cont.)
A zero n-gram is just an n-gram you haven't seen yet… but every n-gram in the corpus was unseen once… so:
• How many times did we see an n-gram for the first time? Once for each n-gram type (T)
• Estimate the total probability of unseen bigrams as
  $\frac{T}{N+T}$
• Each of the Z unseen cases will be assigned an equal portion of this probability mass: $\frac{T}{Z(N+T)}$
• Seen cases will have their probabilities discounted as
  $p_i^* = \frac{c_i}{N+T}$
Page 35
Witten-Bell (cont.)
But for bigrams we can condition on the first word:
• Instead of asking for the probability of finding a new bigram in general, we ask: what is the probability of finding a new bigram that starts with the given word wx?
Then the unseen portion is assigned probability mass
$\frac{T(w_x)}{N(w_x) + T(w_x)}$
And for seen cases we discount as follows:
$P^*(w_i \mid w_x) = \frac{C(w_x w_i)}{N(w_x) + T(w_x)}$
Page 36
Distributing Among the Zeros
If a bigram "wx wi" has a zero count, it receives an equal share of the unseen probability mass for histories starting with wx:
$P(w_i \mid w_x) = \frac{1}{Z(w_x)} \cdot \frac{T(w_x)}{N(w_x) + T(w_x)}$
where
• Z(wx): number of bigrams starting with wx that were not seen
• N(wx): actual frequency of bigrams beginning with wx
• T(wx): number of bigram types starting with wx
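A minimal sketch of these per-history Witten-Bell quantities (Python; the toy vocabulary and counts are assumptions for illustration):

```python
from collections import Counter

def witten_bell_bigram_prob(bigram_counts, vocab, prev, cur):
    """Witten-Bell estimate of P(cur | prev) for a single history word `prev`."""
    followers = {w2: c for (w1, w2), c in bigram_counts.items() if w1 == prev}
    N = sum(followers.values())   # N(prev): bigram tokens starting with prev
    T = len(followers)            # T(prev): bigram types starting with prev
    Z = len(vocab) - T            # Z(prev): possible followers never seen
    if cur in followers:          # seen case: discounted count
        return followers[cur] / (N + T)
    return T / (Z * (N + T))      # unseen case: equal share of reserved mass

vocab = {"i", "want", "to", "eat", "chinese", "food", "lunch"}
bigram_counts = Counter({("eat", "chinese"): 19, ("eat", "lunch"): 52, ("eat", "food"): 2})
print(witten_bell_bigram_prob(bigram_counts, vocab, "eat", "chinese"))  # seen, discounted
print(witten_bell_bigram_prob(bigram_counts, vocab, "eat", "to"))       # unseen, small but nonzero
```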
Page 37
Good-Turing Discounting
Re-estimate the amount of probability mass for zero (or low count) n-grams by looking at n-grams with higher counts.
• For any n-gram that occurs c times, assume it occurs c* times, where Nc is the number of n-grams occurring c times
• Estimate the smoothed count as
  $c^* = (c + 1)\frac{N_{c+1}}{N_c}$
• E.g. N0's adjusted count is a function of the count of n-grams that occur once, N1
• The probability mass assigned to unseen cases works out to be N1/N
• Counts for bigrams that never occurred (c0) will be the count of bigrams that occurred once divided by the count of bigrams that never occurred. How do we know the count of bigrams that never occurred?
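A minimal sketch of the Good-Turing adjusted count (Python), assuming the count-of-counts Nc have already been collected; the toy bigram counts are an assumption for illustration:

```python
from collections import Counter

def good_turing_adjusted_count(c, count_of_counts):
    """c* = (c + 1) * N_{c+1} / N_c, where N_c is the number of n-grams seen c times."""
    n_c = count_of_counts.get(c, 0)
    n_c_plus_1 = count_of_counts.get(c + 1, 0)
    if n_c == 0:
        return 0.0  # undefined without further smoothing of the N_c curve
    return (c + 1) * n_c_plus_1 / n_c

# Count-of-counts from some bigram table (assumed numbers for illustration).
bigram_counts = Counter({("eat", "lunch"): 3, ("eat", "chinese"): 1, ("want", "to"): 1, ("to", "eat"): 2})
count_of_counts = Counter(bigram_counts.values())   # N_1 = 2, N_2 = 1, N_3 = 1
print(good_turing_adjusted_count(1, count_of_counts))  # (1+1) * N_2 / N_1 = 1.0
```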
Page 38
Backoff methods (e.g. Katz ‘87)
If you don't have a count for an n-gram, back off to a weighted (n-1)-gram.
Alphas (backoff weights) are needed to make a proper probability distribution: if we simply backed off whenever the probability is zero, we would be adding extra probability mass.
For e.g. a trigram model:
• Compute unigram, bigram and trigram probabilities
• In use: where the trigram is unavailable, back off to the bigram if available, otherwise to the unigram probability
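A simplified sketch of this backoff lookup (Python). This is not the full Katz '87 scheme: the alpha weights here are placeholder constants, whereas Katz computes them from the discounted probability mass so the distribution still sums to one; the toy tables are assumptions for illustration.

```python
def backoff_trigram_prob(trigram_p, bigram_p, unigram_p, w1, w2, w3, alpha1=0.4, alpha2=0.4):
    """Back off from trigram to bigram to unigram.
    alpha1/alpha2 are placeholder weights; Katz '87 derives them from the
    reserved (discounted) probability mass of the higher-order model."""
    if (w1, w2, w3) in trigram_p:
        return trigram_p[(w1, w2, w3)]
    if (w2, w3) in bigram_p:
        return alpha1 * bigram_p[(w2, w3)]
    return alpha2 * unigram_p.get(w3, 0.0)

# Toy probability tables (assumed values for illustration only).
trigram_p = {("i", "want", "to"): 0.6}
bigram_p = {("want", "to"): 0.65, ("eat", "chinese"): 0.02}
unigram_p = {"food": 0.01}
print(backoff_trigram_prob(trigram_p, bigram_p, unigram_p, "i", "want", "to"))      # trigram hit
print(backoff_trigram_prob(trigram_p, bigram_p, unigram_p, "to", "eat", "chinese")) # backs off to bigram
```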
Page 39
Summary
N-gram models are an approximation to the correct model given by the chain rule.
N-gram probabilities can be used to estimate the likelihood
• of a word occurring in a context (of N-1 words)
N-gram models suffer from sparse data.
Smoothing techniques deal with the problem of events unseen in the training corpus.
Page 40
Statistical Techniques in NLP
Page 41
Example: Text Summarization
Let's say we are doing text summarization using sentence extraction.
Someone gave us documents in which each sentence is scored between 1 and 20, and extracting the top 10% scoring sentences can potentially form a summary.
We have 100 such documents.
Problem: use a machine learning technique to build a model that predicts the score of sentences in a new document.
Page 42
Example: Text Summarization
[Diagram: a corpus of 100 documents, each sentence scored 1 to 20, is fed into a machine learning algorithm to produce a corpus-based model.]
Page 43
Regression for our Text Summarization Problem
For simplicity, let's assume all documents are of equal length N.
Our training dataset (100 documents):
• We have 100×N sentences in total
• 100×N scores; let's call these scores y's
• For each sentence we know its position in the document; let's call these sentence positions x's, so we get 100×N sentence positions
$X = \{(x_1, y_1), (x_2, y_2), \ldots, (x_{100N-1}, y_{100N-1}), (x_{100N}, y_{100N})\}$
Page 44
Regression as Function Approximation
Using our training data X we must predict a score for each sentence in a new document.
We can do such prediction by finding a function y = f(x) that fits the training data well.
• Need to decide what f(x) to use
• Need to compute how well the chosen f(x) fits the training data
Page 45
Empirical Risk Minimization
Need to compute how well the chosen function f(x) fits the data.
We can define a loss function L(y, f(x)).
Find the average loss (empirical risk):
$R_{emp} = \frac{1}{N} \sum_{i=1}^{N} L(y_i, f(x_i))$
Simple loss function (squared error):
$L(y_i, f(x_i)) = (y_i - f(x_i))^2$
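A minimal sketch of computing the empirical risk with squared loss (Python); the candidate function and the toy position/score data are assumptions for illustration:

```python
def squared_loss(y, y_hat):
    return (y - y_hat) ** 2

def empirical_risk(xs, ys, f, loss=squared_loss):
    """Average loss of f over the training pairs (x_i, y_i)."""
    return sum(loss(y, f(x)) for x, y in zip(xs, ys)) / len(xs)

# Sentence positions and scores (toy values).
xs = [1, 2, 3, 4, 5]
ys = [10, 8, 7, 5, 4]
f = lambda x: 11 - 1.5 * x          # a candidate linear scoring function
print(empirical_risk(xs, ys, f))    # mean squared error of this candidate
```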
Page 46
Linear Regression
We can choose f(x) to be linear, polynomial, exponential, or any other class of functions.
If we use a linear function we get linear regression, a widely used modeling technique.
$f(x; \theta) = \theta_1 x + \theta_0$
i.e. y = mx + c, where m is the slope and c is the intercept on the y axis.
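A minimal sketch of fitting the slope and intercept by ordinary least squares in closed form (Python), reusing the toy sentence-position/score data assumed in the previous sketch:

```python
def fit_linear(xs, ys):
    """Ordinary least squares for y = m*x + c (minimizes mean squared error)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    m = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    c = mean_y - m * mean_x
    return m, c

xs = [1, 2, 3, 4, 5]        # sentence positions (toy values)
ys = [10, 8, 7, 5, 4]       # scores (toy values)
m, c = fit_linear(xs, ys)
print(m, c)                  # fitted slope and intercept
print(m * 6 + c)             # predicted score for a sentence at position 6
```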
Page 47
Text Summarization with Linear Regression
We had 100×N (xi, yi) pairs, where x was the sentence position and y was the score for the sentence. Let's look at a sample plot:
[Scatter plot: sentence position (x axis, 0–10) vs. score (y axis, 0–12), with the fitted regression line; each training point is (xi, yi), the fitted value is (xi, f(xi)), and the error is yi − f(xi).]
The line we have found by minimizing the average squared error is our model (summarizer).
Given any new x (i.e. a sentence position in a new document) we can predict y, the score that represents its significance to the summary.
Page 48
Naïve Bayes Classifier
Let's assume that instead of a score between 1 and 20, each sentence has been classified into one of two classes: IN SUMMARY (1) and NOT IN SUMMARY (0).
Now, given a document, we want to predict class (1) or (0) for each sentence in the document. All sentences in class (1) should be included in the summary.
This is a binary classification problem.
Page 49
Intuitive Example of Naïve Bayes Classifier
• Let us assume the objects can be classified as RED or GREEN
• We have a corpus with objects manually labeled as RED or GREEN
• We need to figure out whether the test object is RED or GREEN
• The number of green objects is twice that of red, so it is reasonable to assume that a new object (not observed yet) is twice as likely to be green as red (prior probability)
All figures in this example are from the StatSoft electronic textbook.
Page 50
Example (RED or GREEN classification)
There are 60 objects in total: 40 GREEN and 20 RED.
Prior probability of GREEN = # of GREEN objects / total # of objects = 40/60
Prior probability of RED = # of RED objects / total # of objects = 20/60
Page 51
Example (RED or GREEN classification)
Looking at the picture, it is reasonable to assume that the likelihood of an object being RED or GREEN can be computed from the number of RED or GREEN objects in its vicinity.
We need to figure out whether the test object X is GREEN or RED.
Likelihood of X given GREEN = # of GREEN objects in the vicinity of X / total # of GREEN
Likelihood of X given RED = # of RED objects in the vicinity of X / total # of RED
Page 52
Example (RED or GREEN classification)
Looking at the picture it is reasonable to assume that likelihood of object being RED or GREEN can be computed by number of RED or GREEN objects in the vicinity
Need to figure out if thetest object is GREEN or RED
Likelihood of X being GREEN # of GREEN objects in vicinity of XTotal # of GREEN
Likelihood of X being RED # of RED objects in vicinity of XTotal # of RED
1/40
3/20
Page 53
Example (RED or GREEN classification)
Posterior probability of X being GREEN = prior probability of GREEN × likelihood of X given GREEN = 40/60 × 1/40 = 1/60
Posterior probability of X being RED = prior probability of RED × likelihood of X given RED = 20/60 × 3/20 = 1/20
Hence, we classify our new object X as RED using our Bayesian classifier model.
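A minimal sketch of this prior × likelihood computation (Python), using the numbers from the example above; the unnormalized posteriors are compared directly since the normalizing constant is the same for both classes:

```python
def classify(counts_total, counts_in_vicinity):
    """Pick the class with the largest prior * likelihood (unnormalized posterior)."""
    total = sum(counts_total.values())
    posteriors = {}
    for label in counts_total:
        prior = counts_total[label] / total
        likelihood = counts_in_vicinity[label] / counts_total[label]
        posteriors[label] = prior * likelihood
    return max(posteriors, key=posteriors.get), posteriors

counts_total = {"GREEN": 40, "RED": 20}        # all labeled objects
counts_in_vicinity = {"GREEN": 1, "RED": 3}    # objects near the test point X
label, posteriors = classify(counts_total, counts_in_vicinity)
print(posteriors)   # {'GREEN': 1/60 ≈ 0.0167, 'RED': 1/20 = 0.05}
print(label)        # RED
```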
Page 54
Document Vectors
Documents can be represented as different types of vectors: binary vectors, multinomial vectors, feature vectors.
• Binary vector: for each dimension, 1 if the word type is in the document and 0 otherwise
• Multinomial vector: for each dimension, the number of times the word type appears in the document
• Feature vector: extract various features from the document and represent them in a vector; the dimension equals the number of features
Page 55
Example of a multinomial document vector
Screening of the critically acclaimed film NumaFung. Reserved tickets can be picked up on the day of the show at the box office at Arledge Cinema. Tickets will not be reserved if not paid for in advance.
Word counts: 4 THE, 2 TICKETS, 2 RESERVED, 2 OF, 2 NOT, 2 BE, 2 AT, 1 WILL, 1 UP, 1 SHOW, 1 SCREENING, 1 PICKED, 1 PAID, 1 ON, 1 OFFICE, 1 NUMAFUNG, 1 IN, 1 IF, 1 FOR, 1 FILM, 1 DAY, 1 CRITICALLY, 1 CINEMA, 1 CAN, 1 BOX, 1 ARLEDGE, 1 ADVANCE, 1 ACCLAIMED
Multinomial vector: (4, 2, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1)
Page 56
Example of a multinomial document vector
Word counts for a second document: 4 THE, 2 SEATS, 2 RESERVED, 2 OF, 2 NOT, 2 BE, 2 AT, 1 WILL, 1 UP, 1 SHOW, 1 SHOWING, 1 PICKED, 1 PAID, 1 ON, 1 OFFICE, 1 VOLCANO, 1 IN, 1 IF, 1 FOR, 1 FILM, 1 DAY, 1 CRITICALLY, 1 CINEMA, 1 CAN
Multinomial vector: (4, 2, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1)
But if we want to compare the previous vector with this new vector, we cannot do any computation like the cosine measure on these vectors. Why?
→ Different dimensions.
Hence, if you have multiple documents you first need to find the set of all words (dimensions) in the document set. The value for any word that does not appear in a given document is 0.
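A minimal sketch of building multinomial vectors over a shared vocabulary and comparing them with the cosine measure (Python; the toy documents are assumptions for illustration):

```python
import math
from collections import Counter

def multinomial_vectors(docs):
    """Count word occurrences per document over the union of all word types."""
    counts = [Counter(doc.lower().split()) for doc in docs]
    vocab = sorted(set().union(*counts))                  # shared dimensions
    return vocab, [[c[w] for w in vocab] for c in counts]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

docs = ["tickets will not be reserved if not paid for in advance",
        "seats will not be reserved if not paid for in advance"]
vocab, vectors = multinomial_vectors(docs)
print(cosine(vectors[0], vectors[1]))   # high similarity: the documents share most words
```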
Page 57
Feature Vectors
Instead of just using words to represent documents, we can also extract features and use them to represent the document.
For example, let's classify History and Physics documents.
We are given N documents, each labeled HISTORY or PHYSICS.
We can extract features like document length (LN), number of nouns (NN), number of verbs (VB), number of person names (PN), number of place names (CN), number of organization names (ON), number of sentences (NS), number of pronouns (PNN).
Page 58
Feature Vectors
Extracting such features, you get a feature vector of length K for each document, where K is the number of dimensions (features).
[Table: one feature vector per document, with columns Length, Noun Count, Verb Count, # Person Names, # Place Names, # Orgzn Names, …, and a class label (HIST or PHYSICS) for each document.]
Page 59
Feature Vectors
[Same feature vector table as on the previous slide.]
After such a feature vector representation we can use various learning algorithms, including Naïve Bayes and Decision Trees, to classify a new document using the training document set.