
CS 671 NLP: MACHINE LEARNING

Reading

Christopher M. Bishop, Pattern Recognition and Machine Learning. Springer, 2006.

Learning in NLP

• Language models may be implicit : we can’t describe how we use language so effortlessly

• Unknown future cases: Constantly need to interpret sentences we have never heard before

• Model structures: Learning can reveal properties (regularities) of the language system

Latent structures / Dimensionality reduction : reduce complexity and improve performance

Feedback in Learning

• Type of feedback:

– Supervised learning: correct answers for each example

Discrete (categories) : classification

Continuous : regression

– Unsupervised learning: correct answers not given

– Reinforcement learning: occasional rewards

Inductive learning

Simplest form: learn a function from examples

An example is a pair (x, y) : x = data, y = outcome

assume: y drawn from function f(x) : y = f(x) + noise

f = target function

Problem: find a hypothesis h such that h ≈ f

given a training set of examples

Note: highly simplified model:

– Ignores prior knowledge : some h may be more likely

– Assumes lots of examples are available

– Objective: maximize prediction for unseen data – Q. How?

Inductive learning method

• Construct/adjust h to agree with f on training set

• (h is consistent if it agrees with f on all examples)

• E.g., curve fitting:

y = f(x)

Regression:

y is continuous

Classification:

y : set of discrete values e.g. classes C1, C2, C3...

y ∈ {1,2,3...}

Regression vs Classification

Precision: A / Retrieved positives

Recall: A / Actual positives

Precision vs Recall

Regression

Polynomial Curve Fitting

Linear Regression

y = f(x) = Σ_i w_i φ_i(x)

φi(x) : basis function

wi : weights

Linear : function is linear in the weights

Quadratic error function --> derivative is linear in w
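A minimal numpy sketch of this fit (function and variable names are illustrative, not course code): with a polynomial basis, minimizing the sum-of-squares error is a linear least-squares problem with a closed-form solution.

```python
import numpy as np

def design_matrix(x, M):
    """Polynomial basis phi_i(x) = x**i for i = 0..M."""
    return np.vander(x, M + 1, increasing=True)

def fit_polynomial(x, t, M):
    """Least-squares weights: minimize 0.5 * sum_n (w . phi(x_n) - t_n)**2.
    The error is quadratic in w, so its gradient is linear in w and the
    minimum has the closed form w = (Phi^T Phi)^{-1} Phi^T t."""
    Phi = design_matrix(x, M)
    w, *_ = np.linalg.lstsq(Phi, t, rcond=None)   # numerically stable solver
    return w

# Toy data: noisy samples of sin(2*pi*x), as in Bishop's curve-fitting example.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 10)
t = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(10)
print(fit_polynomial(x, t, M=3))   # weights of a 3rd-order fit
```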

Sum-of-Squares Error Function

0th Order Polynomial

1st Order Polynomial

3rd Order Polynomial

9th Order Polynomial

Over-fitting

Root-Mean-Square (RMS) Error

[Figures: RMS error vs. polynomial order; table of polynomial coefficients up to 9th order; 9th-order polynomial fits for increasing data set size]

Regularization

Penalize large coefficient values
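A sketch of the regularized fit, using the same polynomial basis as the least-squares sketch above: adding a quadratic penalty λ‖w‖² simply adds λI to the normal equations and shrinks the coefficients.

```python
import numpy as np

def fit_polynomial_ridge(x, t, M, lam):
    """Minimize 0.5 * sum_n (w . phi(x_n) - t_n)**2 + 0.5 * lam * ||w||**2.
    Closed form: w = (lam * I + Phi^T Phi)^{-1} Phi^T t."""
    Phi = np.vander(x, M + 1, increasing=True)   # polynomial basis, as above
    A = lam * np.eye(M + 1) + Phi.T @ Phi
    return np.linalg.solve(A, Phi.T @ t)

# A larger lam shrinks the 9th-order coefficients and curbs over-fitting;
# lam = 0 recovers the unregularized least-squares fit.
```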

[Figures: regularized 9th-order fits for different regularization strengths; RMS error vs. regularization strength; polynomial coefficients with and without regularization]

Binary Classification

y = f(x)

Regression:

y is continuous

Classification:

y : discrete values e.g. 0,1,2...for classes C0, C1, C2...

Binary Classification: two classes, y ∈ {0,1}

Regression vs Classification

Binary Classification

Feature : Length

Feature : Lightness

Minimize Misclassification

Precision / Recall

C1 : class of interest

Which is higher: Precision, or Recall?

Precision / Recall

C1 : class of interest

(Positives)

Recall = TP / (TP + FN)

Precision / Recall

C1 : class of interest

Precision = TP / (TP + FP)

Decisions - Feature Space

- Feature selection : which feature is maximally discriminative?

Axis-oriented decision boundaries in feature space

Length – or – Width – or Lightness?

- Feature Discovery: construct g(), defined on the feature space, for better discrimination

Feature Selection: width / lightness

lightness is more discriminative

- but can we do better?

select the most discriminative feature(s)

- Feature selection : which feature is maximally discriminative?

Axis-oriented decision boundaries in feature space

Length – or – Width – or Lightness?

- Feature Discovery: discover discriminative function on feature space : g()

combine aspects of length, width, lightness

Feature Selection

Feature Discovery : Linear

Cross-Validation

Decision Surface: non-linear

Decision Surface : non-linear

overfitting!

Learning process

- Feature set : representative? complete?

- Sample size : training set vs test set

- Model selection:

Unseen data : overfitting?

Quality vs Complexity

Computation vs Performance

Best Feature set?

- Is it possible to describe the variation in the data in terms of a compact set of Features?

- Minimum Description Length

CS 671 NLP: NAIVE BAYES AND SPELLING

Presented by

Amitabha Mukerjee, IIT Kanpur

Reading

Reading:

1. Chapter 6 of Jurafsky & Martin, Speech and Language Processing (draft 2014 edition), “Spelling Correction and the Noisy Channel”. http://web.stanford.edu/~jurafsky/slp3/

2. P. Norvig, How to Write a Spelling Corrector. http://norvig.com/spell-correct.html

Spelling Correction

In [2], the authors used curvatures for accurate loacation and tracking of the center of the eye.

OpenCV has cascades for faces whih have been used for detcting faces in live videos.

- course project report 2013

black crows gorge on bright mangoes in still, dustgreen trees

?? “black cows” ?? “black crews” ??


Single-typing errors

loacation : insertion error

whih , detcting : deletion

crows -> crews : substitution

the -> hte : transposition

Damerau (1964) : 80% of all misspelled words are caused by a single error of these four types

Which errors have a higher "edit distance"?

Causes of Spelling Errors

Keyboard Based

83% novice and 51% overall were keyboard related errors

Immediately adjacent keys in the same row of the keyboard (50% of the novice substitutions, 31% of all substitutions)

Cognitive : may be more than one error; more likely to be real words

Phonetic : seperate → separate

Homonym : piece → peace ; there → their

Steps in spelling correction

Non-word errors:

Detection of non-words (e.g. hte, dtection)

Isolated word error correction

[naive bayesian; edit distances]

Actual word (real-word) errors:

Context dependent error detection and correction (e.g. “three are four types of errors”)

[can use language models e.g. n-grams]


Nonword and Word errors

loacation, detecting non-words

crews / crows word error

Non-word error:

For alphabet Σ, and dictionary D with strings in Σ*

given a string s ∈ Σ*, where s ∉ D,

find the w ∈ D that is most likely to have been intended, given that s was typed.

Word error: drop the condition s ∉ D (the typed string may itself be a dictionary word)

[Diagram: noisy channel. The intended word w = (wn, wn-1, …, w1) passes through the noisy channel and comes out as the mis-spelled string x = (xm, xm-1, …, x1); the spell checker makes the best guess ŵ from x]

ŵ = argmax_{w ∈ V} P(w|x)

Given the typed string t, find the most probable w : the ŵ for which P(w|t) is maximum

Probabilistic Spell Checker

[Diagram: source (intended word, from vocabulary V) → noisy channel → receiver (typed string)]

Probabilistic Spell Checker

Q. How to compute P(w|t) ?

Many times, it is easier to compute P(t|w)


Bayesian Classification

Given an observation x, determine which class w it belongs to

Spelling Correction:

Observation: String of characters

Classification: Word intended

Speech Recognition:

Observation: String of phones

Classification: Word that was said

PROBABILITY THEORY


Example

AIDS occurs in 0.05% of the population. A test is 99% effective in detecting the disease, but 5% of cases test positive in the absence of AIDS.

If you are tested +ve, what is the probability you have the disease?
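A worked application of Bayes' theorem with these numbers (the computation is not on the slide; P(D) = 0.0005, P(+|D) = 0.99 and P(+|¬D) = 0.05 are read off the statement above):

```latex
P(D \mid +) = \frac{P(+ \mid D)\,P(D)}{P(+ \mid D)\,P(D) + P(+ \mid \neg D)\,P(\neg D)}
            = \frac{0.99 \times 0.0005}{0.99 \times 0.0005 + 0.05 \times 0.9995}
            \approx \frac{0.000495}{0.05047} \approx 0.0098
```

So a positive test implies only about a 1% chance of having the disease: the disease is rare, and the 5% false-positive rate dominates.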

Probability theory

Apples and Oranges

Sample Space

Sample ω = pick two fruits, e.g. Apple, then Orange

Sample Space Ω = {(A,A), (A,O), (O,A), (O,O)}

= all possible worlds

Event e = set of possible worlds, e ⊆ Ω

• e.g. second one picked is an apple

Learning = discovering regularities

- Regularity : in repeated experiments the outcome need not be fully predictable

- Probability p(e) : "the fraction of possible worlds in which e is true” i.e. outcome is event e

- Frequentist view : p(e) = limiting frequency of e as the number of trials N → ∞

- Belief view : in a wager, equivalent odds (1-p):p that the outcome is in e, or vice versa

Why probability theory?

different methodologies attempted for uncertainty:

– Fuzzy logic

– Multi-valued logic

– Non-monotonic reasoning

But unique property of probability theory:

If you gamble using probabilities you have the best chance in a wager [de Finetti 1931]

⇒ if the opponent uses some other system, he is more likely to lose

Ramsey-de Finetti theorem (1931)

If agent X's degrees of belief are rational, then X's degrees-of-belief function defined by fair betting rates is (formally) a probability function.

Fair betting rates: the opponent decides which side one bets on.

Proof: fair odds result in a function pr() that satisfies the Kolmogorov axioms:

Normality : pr(S) ≥ 0

Certainty : pr(T) = 1

Additivity : pr(S1 ∨ S2 ∨ …) = Σ pr(Si), for mutually exclusive Si

Kolmogorovian model

Probability space Ω = set of all outcomes (events)

Event A may include multiple outcomes – e.g. several coin-tosses.

F : a σ-field on Ω : closed under countable union and under complement; maximal element Ω; empty set = impossible event

In practice, F = all possible subsets = powerset of Ω

(alternatives to the Kolmogorovian axiomatization exist)

Axioms of Probability

A probability measure p : F → [0,1], s.t.

- p is non-negative : p(e) ≥ 0

- unit sum p(Ω) = 1

i.e. no outcomes outside sample space

- additive : if e1, e2 are disjoint events (no common outcome):

p(e1) + p(e2) = p(e1 ∪ e2)

Joint vs. conditional probability

Marginal Probability

Conditional Probability, Joint Probability

Probability Theory

Rules of Probability

Sum Rule

Product Rule
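The two rules appear as formulas on the original slides; in standard notation they are:

```latex
\text{Sum rule:}\qquad p(X) = \sum_{Y} p(X, Y)

\text{Product rule:}\qquad p(X, Y) = p(Y \mid X)\, p(X)
```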

Example

A parasitic gap, a rare syntactic construction, occurs on average once in 100,000 sentences.

A pattern matcher finds sentences S with parasitic gaps:
if S has a parasitic gap (G), it says (T) with probability 0.95;
if S has no gap (~G), it wrongly says (T) with probability 0.005.

On a corpus of 100,000 sentences, how many are expected to be detected as G?

P(G) = 10^-5, P(T|G) = 0.95, P(T|~G) = 0.005 = 5×10^-3

Expected true detections ≈ 0.95; expected false detections ≈ 500

Probabilistic Spell Checker

Q. How to compute P(w|t) ?

Many times, it is easier to compute P(t|w)

Related by product rule:

p(X,Y) = p(Y|X) p(X)

= p(X|Y) p(Y)

Bayes’ Theorem

P(h | data) = P(data | h) P(h) / P(data)

posterior ∝ likelihood × prior

Bayes’ Theorem

Thomas Bayes (c. 1750): how can we infer causes from effects?

Can one learn the probability of a future event from the frequency of its occurrence in the past?

As new evidence comes in, probabilistic knowledge improves.

Basis for human expertise?

Initial estimate (prior belief P(h), not well formulated)

+ new evidence (support)

+ compute likelihood P(data | h)

→ improved posterior: P(h | data)

Example

A parasitic gap, a rare syntactic construction, occurs on average once in 100,000 sentences.

A pattern matcher finds sentences S with parasitic gaps:
if S has a parasitic gap (G), it says (T) with probability 0.95;
if S has no gap (~G), it wrongly says (T) with probability 0.005.

If the test is positive (T) for a sentence, what is the probability that there is a parasitic gap?

P(G) = 10^-5, P(T|G) = 0.95, P(T|~G) = 0.005 = 5×10^-3

Expected true detections ≈ 0.95; expected false detections ≈ 500

Example

P(G) = 10^-5, P(T|G) = 0.95, P(T|~G) = 0.005 = 5×10^-3

P(G|T) = P(T|G) · P(G) / P(T)

P(T) = P(T,G) + P(T,~G)   [Sum Rule]
     = P(T|G) · P(G) + P(T|~G) · P(~G)   [Product Rule]

P(G|T) = 0.95 × 10^-5 / [ 0.95 × 10^-5 + 5×10^-3 × (1 − 10^-5) ]
       = 0.0095 / (0.0095 + 4.99995)   [multiplying numerator and denominator by 10^3]
       = 0.0095 / 5.00945
       ≈ 0.0019, or about 1/500

Bernoulli Process

Two Outcomes – e.g. toss a coin three times:

HHH, HHT, HTH, HTT, THH, THT, TTH, TTT

Probability of k heads:

k      0    1    2    3
P(k)  1/8  3/8  3/8  1/8

Probability of success p, failure q = 1 − p : P(k successes in n trials) = C(n,k) p^k q^(n-k)

Permutations

Multinomial Coefficient

K = 2 Binomial coefficient

PERMUTATIONS

N distinct items, e.g. abcde : P_n = n! permutations

N items, n_i of type i, e.g. aabbbc : n! / (n_1! n_2! … n_K!) permutations (here 6! / (2!·3!·1!) = 60)

Precision: A / Retrieved positives

Recall: A / Actual positives

Precision vs Recall

Example

What is the recall of the test for parasitic gap?

What is its precision?

Recall = fraction of actual positives that are detected by the test = P(T|G) = 0.95

Precision = fraction of the test's positives that are true positives = P(G|T) ≈ 0.95 / 500.95 ≈ 0.0019 (about 1 in 500)

F-Score : harmonic mean of precision and recall, F1 = 2PR / (P + R)

Features may be high-dimensional

joint distribution P(x,y) varies considerably

though marginals P(x), P(y) are identical

Estimating the joint distribution requires a much larger sample: O(n^k) for k features with n values each, vs. n·k for the marginals

Entropy

Entropy: the uncertainty of a distribution.

Quantifying uncertainty (“surprisal”):

Event x

Probability px

Surprisal log(1/px)

Entropy: expected surprise (over p):

A coin-flip is most uncertain for a fair coin.

[Figure: entropy H vs. p(HEADS)]

H(p) = E_p[ log2(1/p_x) ] = − Σ_x p_x log2 p_x
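A minimal sketch of this expected-surprise computation (function name and example values are illustrative, not course code):

```python
import math

def entropy(probs):
    """H(p) = E_p[log2(1/p_x)] = -sum_x p_x * log2(p_x)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))   # 1.0 bit: fair coin, maximal uncertainty
print(entropy([0.9, 0.1]))   # ~0.47 bits: biased coin, less uncertain
```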

NON-WORD SPELL CHECKER


Spelling error as classification

Each word w is a class, related to many instances of the observed forms x

Assign w given x :

ŵ = argmax_{w ∈ V} P(w | x)

Noisy Channel : Bayesian Modeling

Observation x of a misspelled word

Find correct word w

ŵ = argmax_{w ∈ V} P(w | x)
  = argmax_{w ∈ V} P(x | w) P(w) / P(x)
  = argmax_{w ∈ V} P(x | w) P(w)

Non-word spelling error example

acress


Confusion Set

Confusion set of word w:

All typed forms t obtainable by a single application of insertion, deletion, substitution or transposition

Confusion set for acress

Error     Candidate     Correct    Error     Type
          correction    letter     letter
acress    actress       t          -         deletion
acress    cress         -          a         insertion
acress    caress        ca         ac        transposition
acress    access        c          r         substitution
acress    across        o          e         substitution
acress    acres         -          s         insertion
acress    acres         -          s         insertion

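A sketch of generating such a confusion set, in the spirit of Norvig's spell-correct.html (the function name and the lowercase alphabet are assumptions): every string one insertion, deletion, substitution or transposition away from the typed form; intersecting this set with the dictionary gives the candidate corrections.

```python
import string

def edits1(word):
    """All strings one edit (insertion, deletion, substitution,
    transposition) away from `word`."""
    letters = string.ascii_lowercase
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes    = [L + R[1:] for L, R in splits if R]
    transposes = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1]
    replaces   = [L + c + R[1:] for L, R in splits if R for c in letters]
    inserts    = [L + c + R for L, R in splits for c in letters]
    return set(deletes + transposes + replaces + inserts)

# Intersecting with a dictionary gives the candidate corrections:
# edits1("acress") contains "actress", "cress", "caress", "access",
# "across" and "acres".
```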

Kernighan et al 90

Confusion set of word w (one edit operation away from w):

All typed forms t obtainable by a single application of insertion, deletion, substitution or transposition

Different editing operations have unequal weights

Insertion and deletion probabilities : conditioned on letter immediately on the left – bigram model.

Compute probabilities based on training corpus of single-typing errors.

Unigram Prior probability

word       Frequency of word    P(word)
actress         9,321       .0000230573
cress             220       .0000005442
caress            686       .0000016969
access         37,038       .0000916207
across        120,844       .0002989314
acres          12,874       .0000318463

Counts from 404,253,213 words in the Corpus of Contemporary American English (COCA)

Channel model probability

Error model probability, Edit probability

Kernighan, Church, Gale 1990

Misspelled word x = x1, x2, x3… xm

Correct word w = w1, w2, w3,…, wn

P(x|w) = probability of the edit (deletion / insertion / substitution / transposition)

Computing error probability: confusion matrix

del[x,y]: count(xy typed as x)

ins[x,y]: count(x typed as xy)

sub[x,y]: count(x typed as y)

trans[x,y]: count(xy typed as yx)

Insertion and deletion conditioned on previous character
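With these counts, the per-edit channel probabilities are estimated roughly as in Kernighan, Church & Gale (1990), normalizing each confusion count by how often its context occurs in the training corpus (a sketch of the standard formulation, not copied from the slides):

```latex
P(x \mid w) =
\begin{cases}
  \dfrac{\mathrm{del}[w_{i-1}, w_i]}{\mathrm{count}(w_{i-1} w_i)} & \text{if deletion}\\[2ex]
  \dfrac{\mathrm{ins}[w_{i-1}, x_i]}{\mathrm{count}(w_{i-1})} & \text{if insertion}\\[2ex]
  \dfrac{\mathrm{sub}[x_i, w_i]}{\mathrm{count}(w_i)} & \text{if substitution}\\[2ex]
  \dfrac{\mathrm{trans}[w_i, w_{i+1}]}{\mathrm{count}(w_i w_{i+1})} & \text{if transposition}
\end{cases}
```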


Confusion matrix – deletion [Kernighan et al. 1990]

Confusion matrix – substitution

Channel model [Kernighan, Church, Gale 1990]

Channel model for acress

Candidate     Correct    Error    x|w      P(x|word)
correction    letter     letter
actress       t          -        c|ct     .000117
cress         -          a        a|#      .00000144
caress        ca         ac       ac|ca    .00000164
access        c          r        r|c      .000000209
across        o          e        e|o      .0000093
acres         -          s        es|e     .0000321
acres         -          s        ss|s     .0000342

Noisy channel probability for acress

Candidate     Correct    Error    x|w      P(x|word)     P(word)       10^9 · P(x|w)P(w)
correction    letter     letter
actress       t          -        c|ct     .000117       .0000231      2.7
cress         -          a        a|#      .00000144     .000000544    .00078
caress        ca         ac       ac|ca    .00000164     .00000170     .0028
access        c          r        r|c      .000000209    .0000916      .019
across        o          e        e|o      .0000093      .000299       2.8
acres         -          s        es|e     .0000321      .0000318      1.0
acres         -          s        ss|s     .0000342      .0000318      1.0

Using a bigram language model

“a stellar and versatile acress whose combination of sass and glamour…”

Counts from the Corpus of Contemporary American English with add-1 smoothing

P(actress|versatile) = .000021      P(whose|actress) = .0010
P(across|versatile)  = .000021      P(whose|across)  = .000006

P(“versatile actress whose”) = .000021 × .0010   = 210 × 10^-10
P(“versatile across whose”)  = .000021 × .000006 =   1 × 10^-10
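A toy sketch of the resulting comparison, with the channel and bigram probabilities hard-coded from the tables above (variable names are illustrative):

```python
# Score P(x|w) * P(w | "versatile") * P("whose" | w) for the two candidates
# in "a stellar and versatile acress whose ...".
channel      = {"actress": 0.000117,  "across": 0.0000093}   # P("acress" | w)
bigram_left  = {"actress": 0.000021,  "across": 0.000021}    # P(w | "versatile")
bigram_right = {"actress": 0.0010,    "across": 0.000006}    # P("whose" | w)

for w in ("actress", "across"):
    score = channel[w] * bigram_left[w] * bigram_right[w]
    print(w, score)
# The bigram context makes "actress" the clear winner, even though the
# unigram prior alone slightly favoured "across" (2.8 vs 2.7 above).
```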


Multiple Typing Errors

Multiple typing errors

Measures of string similarity

How similar is “intention” to “execution”?

For strings of same length – Hamming distance

Edit distance (A,B):

minimum number of operations that transform string A into string B

ins, del, sub, transp : Damerau-Levenshtein distance

Minimum Edit Distance

Each edit operation has a cost

Edit distance based measures

Levenshtein-Damerau distance

How similar is “intention” to “execution”?

Three views of edit operations

All views: cost = 5 edits

If substitution / transposition is not allowed [each then costs 2: a deletion plus an insertion], cost = 8 edits

Levenshtein Distance

len(A) = m; len (B) = n

create n × m matrix : A along x-axis, B along y

cost(i,j) = Levenshtein distance (A[0..i] , B[0..j])

= cost of matching substrings

Dynamic programming : solve by decomposition.

Dist(i, j) = min { Dist(i-1, j) + deletion cost ; Dist(i, j-1) + insertion cost ; Dist(i-1, j-1) + substitution cost (0 if A[i] = B[j]) }
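A minimal dynamic-programming sketch of this recurrence with unit costs (the function name is illustrative):

```python
def levenshtein(a, b):
    """Minimum number of insertions, deletions and substitutions
    turning string a into string b (unit cost per operation)."""
    m, n = len(a), len(b)
    # dist[i][j] = edit distance between a[:i] and b[:j]
    dist = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dist[i][0] = i          # delete all of a[:i]
    for j in range(n + 1):
        dist[0][j] = j          # insert all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if a[i - 1] == b[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,        # deletion
                             dist[i][j - 1] + 1,        # insertion
                             dist[i - 1][j - 1] + sub)  # substitution / match
    return dist[m][n]

print(levenshtein("intention", "execution"))   # 5
```

With substitution disallowed (i.e. costing 2, a deletion plus an insertion), the same table gives 8 for this pair, as noted above.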

Levenshtein Distance

WORD-FROM-DICTIONARY SPELL CHECKER

Real-word spelling errors

…leaving in about fifteen minuets to go to her house.

The design an construction of the system…

Can they lave him my messages?

The study was conducted mainly be John Black.

25-40% of spelling errors are real words [Kukich 1992]

Solving real-world spelling errors

For each word in sentence

Generate candidate set

the word itself

all single-letter edits that are English words

words that are homophones

Choose best candidates:

Noisy channel model

Task-specific classifier

Noisy channel for real-word spell correction

Given a sentence w1,w2,w3,…,wn

Generate a set of candidates for each word wi

Candidate(w1) = {w1, w’1 , w’’1 , w’’’1 ,…}

Candidate(w2) = {w2, w’2 , w’’2 , w’’’2 ,…}

Candidate(wn) = {wn, w’n , w’’n , w’’’n ,…}

Choose the sequence W that maximizes P(W)

Noisy channel for real-word spell correction

[Figure: candidate lattice for "two of thew": each word is expanded to its candidate set, e.g. two → {two, to, too, tao}, of → {of, off, on}, thew → {thew, the, threw, thaw}; the corrected output is the highest-probability path through the lattice]

Norvig’s Python Spelling Corrector

“How to Write a Spelling Corrector”: http://norvig.com/spell-correct.html
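A compressed sketch of the corrector described there, assuming the edits1 helper sketched earlier and a corpus file for the unigram counts (the path "big.txt" and the uniform error model are simplifications following Norvig's write-up, not the course's code):

```python
from collections import Counter
import re

# Unigram counts from a large corpus file (path assumed, as in Norvig's write-up).
WORDS = re.findall(r"[a-z]+", open("big.txt").read().lower())
COUNTS = Counter(WORDS)
N = sum(COUNTS.values())

def P(word):
    """Unigram prior P(w)."""
    return COUNTS[word] / N

def known(words):
    """Subset of `words` that appear in the dictionary."""
    return {w for w in words if w in COUNTS}

def candidates(word):
    """The word itself if known, else known words one edit away,
    else known words two edits away, else the word unchanged."""
    return (known([word])
            or known(edits1(word))                                    # edits1 sketched above
            or known(e2 for e1 in edits1(word) for e2 in edits1(e1))
            or [word])

def correct(word):
    """argmax_w P(w) over the candidates (error model kept uniform here)."""
    return max(candidates(word), key=P)
```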

Simplification: One error per sentence

Out of all possible sentences with one word replaced

w1, w’’2,w3,w4 two off thew

w1,w2,w’3,w4 two of the

w’’’1,w2,w3,w4 too of thew

Choose the sequence W that maximizes P(W)
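A sketch of that search under the one-error-per-sentence simplification; `candidates` and `sentence_prob` are assumed helpers (e.g. the candidate generator above and a bigram language model, optionally multiplied by the channel probabilities):

```python
def best_correction(words, candidates, sentence_prob):
    """words: list of tokens in the sentence.
    candidates(w): candidate set for w (includes w itself).
    sentence_prob(ws): language-model probability P(W), optionally
    weighted by the channel probabilities P(x|w)."""
    best, best_p = words, sentence_prob(words)
    for i, w in enumerate(words):
        for c in candidates(w):
            if c == w:
                continue
            trial = words[:i] + [c] + words[i + 1:]   # exactly one word replaced
            p = sentence_prob(trial)
            if p > best_p:
                best, best_p = trial, p
    return best

# best_correction(["two", "of", "thew"], candidates, sentence_prob) compares
# "two of the", "too of thew", "two off thew", ... and returns the
# highest-probability sequence.
```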

Where to get the probabilities

Language model

Unigram

Bigram

Etc

Channel model

Same as for non-word spelling correction

Plus need probability for no error, P(w|w)


Probability of no error

What is the channel probability for a correctly typed word?

P(“the”|“the”) = 1 – probability of mistyping

Depends on typist, task, etc.

.90 (1 error in 10 words)

.95 (1 error in 20 words) : value used here, say

.99 (1 error in 100 words)

.995 (1 error in 200 words)

from http://norvig.com/ngrams/ch14.pdf p.235

Peter Norvig’s “thew” example

x      w       x|w      P(x|w)      P(w)           10^9 · P(x|w)P(w)
thew   the     ew|e     0.000007    0.02           144
thew   thew             0.95        0.00000009     90
thew   thaw    e|a      0.001       0.0000007      0.7
thew   threw   h|hr     0.000008    0.000004       0.03
thew   thwe    ew|we    0.000003    0.00000004     0.0001

Choosing 0.99 instead of 0.95 (1 mistyping in 100 words), “thew” itself becomes more likely

State of the art noisy channel

We never just multiply the prior and the error model

Independence assumptions ⇒ the probabilities are not commensurate

Instead: weight them

Learn λ from a validation set (divide the training set into training + validation)

ŵ = argmax_{w ∈ V} P(x | w) P(w)^λ

Phonetic error model

Metaphone, used in GNU aspell

Convert misspelling to Metaphone pronunciation, e.g.: “Drop duplicate adjacent letters, except for C.”

“If the word begins with 'KN', 'GN', 'PN', 'AE', 'WR', drop the first letter.”

“Drop 'B' if after 'M' and if it is at the end of the word”

Find words whose pronunciation is 1-2 edit distance from misspelling’s

Score result list

Weighted edit distance of candidate to misspelling

Edit distance of candidate pronunciation to misspelling pronunciation


Improvements to channel model

Allow richer edits (Brill and Moore 2000)

ent → ant

ph → f

le → al

Incorporate pronunciation into the channel (Toutanova and Moore 2002)

Channel model

Factors that could influence p(misspelling|word)

The source letter

The target letter

Surrounding letters

The position in the word

Nearby keys on the keyboard

Homology on the keyboard

Pronunciations

Likely morpheme transformations


Nearby keys

Classifier-based methods

Instead of just channel model and language model

Use many more features over a wider context and build a classifier (machine learning).

Example:

whether/weather “cloudy” within +- 10 words

___ to VERB

___ or not

Q. How can we discover such features?
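A sketch of hand-built features of this kind for the whether/weather confusion set (the feature names and the window size are illustrative; any standard classifier could consume them):

```python
def confusion_features(tokens, i):
    """Features for deciding whether/weather at position i in `tokens`."""
    window = tokens[max(0, i - 10): i + 11]          # +-10 word context
    nxt  = tokens[i + 1] if i + 1 < len(tokens) else ""
    nxt2 = tokens[i + 2] if i + 2 < len(tokens) else ""
    return {
        "cloudy_in_window": "cloudy" in window,              # weather-ish context
        "followed_by_to": nxt == "to",                       # crude "___ to VERB" cue
        "followed_by_or_not": (nxt, nxt2) == ("or", "not"),  # "___ or not"
    }

# Features like these, extracted from many labelled sentences, train a
# classifier that picks the right member of the confusion set in context.
```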


Candidate generation

Words with similar spelling

Small edit distance to error

Words with similar pronunciation

Small edit distance of pronunciation to error


Damerau-Levenshtein edit distance

Minimal edit distance between two strings, where edits are: Insertion

Deletion

Substitution

Transposition of two adjacent letters


Candidate generation

80% of errors are within edit distance 1

Almost all errors within edit distance 2

Also allow insertion of space or hyphen

thisidea → this idea

inlaw → in-law

Language Model

Language modeling algorithms :

Unigram, bigram, trigram

Formal grammars

Probabilistic grammars


CS 671 NLP: NAÏVE BAYES AND SPELLING

Presented by

Amitabha Mukerjee, IIT Kanpur

HCI issues in spelling

Very confident in the correction : autocorrect

Less confident : give the best correction

Still less confident : give a correction list

Unconfident : just flag as an error

Noisy channel based methods

IBM

Mays, Eric, Fred J. Damerau and Robert L. Mercer. 1991. Context based spelling correction. Information Processing and Management, 23(5), 517–522

AT&T Bell Labs

Kernighan, Mark D., Kenneth W. Church, and William A. Gale. 1990. A spelling correction program based on a noisy channel model. Proceedings of COLING 1990, 205-210
