
Page 1

I256: Applied Natural Language Processing

Marti Hearst
Sept 27, 2006

Page 2

Evaluation Measures

Page 3

Evaluation Measures

Precision: the proportion of items you labeled X that the gold standard says really are X.

#correctly labeled by alg / all labels assigned by alg
= #True Positives / (#True Positives + #False Positives)

Recall: the proportion of items labeled X in the gold standard that you actually labeled X.

#correctly labeled by alg / all possible correct labels
= #True Positives / (#True Positives + #False Negatives)
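Both measures are just ratios of counts; a minimal sketch in Python (the function names are ours, not from the slides, and we assume the denominators are nonzero):

    def precision(tp, fp):
        # correctly labeled by alg / all labels assigned by alg
        return tp / (tp + fp)

    def recall(tp, fn):
        # correctly labeled by alg / all possible correct labels
        return tp / (tp + fn)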

Page 4

F-measure

You can “cheat” on precision by labeling (almost) nothing with X, and you can “cheat” on recall by labeling everything with X. The better you do on precision, the worse you do on recall, and vice versa. The F-measure is a balance between the two:

F = 2 * precision * recall / (precision + recall)
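Continuing the sketch above, the F-measure is the harmonic mean of the two (the zero-denominator guard is our addition):

    def f_measure(p, r):
        # harmonic mean of precision and recall
        return 2 * p * r / (p + r) if (p + r) > 0 else 0.0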

Page 5

Evaluation Measures

Accuracy: the proportion you got right.

(#True Positives + #True Negatives) / N, where N = TP + TN + FP + FN

Error: the proportion you got wrong.

(#False Positives + #False Negatives) / N
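The same counting style works here; a sketch with the same hypothetical helper convention:

    def accuracy(tp, tn, fp, fn):
        # proportion of all N items judged correctly
        n = tp + tn + fp + fn
        return (tp + tn) / n

    def error(tp, tn, fp, fn):
        # proportion judged incorrectly; equals 1 - accuracy
        n = tp + tn + fp + fn
        return (fp + fn) / n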

Page 6

Prec/Recall vs. Accuracy/Error

When to use Precision/Recall?

Useful when there are only a few positives and many, many negatives. Also good for ranked orderings.

– Search-results ranking

When to use Accuracy/Error?

When every item has to be judged, and it’s important that every item be correct. Error is better when the differences between algorithms are very small; it lets you focus on small improvements.

– Speech recognition

Page 7

Evaluating Partial Parsing

How do we evaluate it?

Page 8

Evaluating Partial Parsing
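The code on this slide did not survive transcription. A sketch of how an NP chunker can be scored in current NLTK (the one-rule grammar here is an assumption standing in for the slide’s “simple rule”; requires the conll2000 corpus):

    import nltk
    from nltk.corpus import conll2000  # run nltk.download('conll2000') first

    # One simple rule: an NP is an optional determiner, any number of
    # adjectives, then a singular noun.
    grammar = r"NP: {<DT>?<JJ>*<NN>}"
    cp = nltk.RegexpParser(grammar)

    # evaluate() compares guessed chunks against gold chunks and reports
    # precision, recall, and F-measure.
    test_sents = conll2000.chunked_sents('test.txt', chunk_types=['NP'])
    print(cp.evaluate(test_sents))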

Page 9

Testing our Simple Rule

Let’s see where we missed:
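The slide’s code is lost; one way to recover the missed chunks is NLTK’s ChunkScore, reusing cp and test_sents from the sketch above:

    # Accumulate scores by hand so the error lists stay inspectable.
    chunkscore = nltk.chunk.ChunkScore()
    for gold in test_sents[:100]:        # a small sample of gold trees
        guess = cp.parse(gold.leaves())  # re-chunk the (word, pos) tokens
        chunkscore.score(gold, guess)

    # Gold-standard chunks our simple rule failed to find:
    for chunk in chunkscore.missed()[:20]:
        print(chunk)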

Page 10

Update rules; Evaluate Again
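The revised rules were shown as an image; one plausible refinement (a hypothetical rule, not necessarily the slide’s) widens the NP pattern to cover plural and proper nouns and possessive pronouns, after which we re-run the same evaluation:

    # Optional determiner or possessive pronoun, adjectives, then one or
    # more nouns of any kind (NN, NNS, NNP, NNPS).
    grammar = r"NP: {<DT|PRP\$>?<JJ>*<NN.*>+}"
    cp = nltk.RegexpParser(grammar)
    print(cp.evaluate(test_sents))  # re-score after the change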

Page 11

Evaluate on More Examples

Page 12

Incorrect vs. Missed

Add code to print out which were incorrect:
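Continuing the ChunkScore sketch, incorrect() is the mirror image of missed(): chunks the algorithm proposed that the gold standard does not contain:

    # False positives: proposed chunks absent from the gold standard,
    # vs. missed(): gold chunks the rule failed to find.
    for chunk in chunkscore.incorrect()[:20]:
        print(chunk)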

Page 13

Missed vs. Incorrect

Page 14

What is a good Chunking Baseline?
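The baseline code on the following slides was shown as images. A standard choice (the unigram chunker from the NLTK book; we assume this is the spirit of the slides, not their literal code) assigns each word the chunk tag most often seen with its part-of-speech tag in training data:

    import nltk

    class UnigramChunker(nltk.ChunkParserI):
        # Baseline: pick each word's IOB chunk tag from its POS tag alone.
        def __init__(self, train_sents):
            # Flatten chunk trees into (word, pos, chunktag) triples, then
            # train a unigram tagger that maps POS tag -> chunk tag.
            train_data = [[(pos, chunktag) for (word, pos, chunktag)
                           in nltk.chunk.tree2conlltags(sent)]
                          for sent in train_sents]
            self.tagger = nltk.UnigramTagger(train_data)

        def parse(self, sentence):
            # sentence is a list of (word, pos) pairs.
            pos_tags = [pos for (word, pos) in sentence]
            tagged = self.tagger.tag(pos_tags)
            conlltags = [(word, pos, chunktag or 'O')  # 'O' for unseen tags
                         for (word, pos), (_, chunktag) in zip(sentence, tagged)]
            return nltk.chunk.conlltags2tree(conlltags)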

Page 15

Page 16

The Tree Data Structure
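The slide’s illustration is gone; for reference, NLTK represents chunk structures as nltk.Tree objects whose leaves are (word, pos) pairs:

    import nltk

    # A chunked sentence: an S node over NP chunks and loose tokens.
    np = nltk.Tree('NP', [('the', 'DT'), ('little', 'JJ'), ('cat', 'NN')])
    s = nltk.Tree('S', [np, ('sat', 'VBD'), ('down', 'RP')])

    print(s.label())    # 'S'
    print(s.leaves())   # flattened list of (word, pos) pairs
    for subtree in s.subtrees(lambda t: t.label() == 'NP'):
        print(subtree)  # each NP chunk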

Page 17

Baseline Code (continued)

Page 18

Evaluating the Baseline
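A sketch of scoring the hypothetical UnigramChunker above on held-out data, with the same conll2000 assumption as before:

    train_sents = conll2000.chunked_sents('train.txt', chunk_types=['NP'])
    test_sents = conll2000.chunked_sents('test.txt', chunk_types=['NP'])

    unigram_chunker = UnigramChunker(train_sents)
    print(unigram_chunker.evaluate(test_sents))  # chunk-level P/R/F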

Page 19

Cascaded Chunking
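The cascade itself was an image. The general pattern (this multi-stage grammar follows the NLTK book’s example, not necessarily the slide’s) is a grammar whose later rules build on chunks found by earlier ones, looped so that clauses can nest:

    grammar = r"""
      NP:     {<DT|JJ|NN.*>+}           # determiners/adjectives/nouns
      PP:     {<IN><NP>}                # preposition + NP
      VP:     {<VB.*><NP|PP|CLAUSE>+$}  # verb + its arguments
      CLAUSE: {<NP><VP>}                # NP + VP
      """
    cp = nltk.RegexpParser(grammar, loop=2)  # apply the cascade twice

    sentence = [("Mary", "NN"), ("saw", "VBD"), ("the", "DT"), ("cat", "NN"),
                ("sit", "VB"), ("on", "IN"), ("the", "DT"), ("mat", "NN")]
    print(cp.parse(sentence))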

Page 20

Page 21

Next Time

Summarization