
Page 1:

CS276A Text Retrieval and Mining

Lecture 18

Page 2:

Recap of the last lectures

Introduction to Text Classification
Machine Learning Algorithms for text classification:

Naïve Bayes: simple, cheap, linear classifier; quite effective

K Nearest Neighbor classification: simple, expensive at test time, high variance, non-linear

Rocchio vector space classification (centroids): simple, linear classifier; too simple

Decision Trees: pick out hyperboxes; nonlinear; use just a few features

Support Vector Machines: currently hip; linear or nonlinear (kernelized); effective at handling high-dimensional spaces; very effective

Page 3:

Today’s Topic

Text-specific issues in classification:
What kinds of features help or work well? Stemming and weighting?
What do different evaluation methods show?

Also: course evaluation forms, and a little bit about CS276B next quarter

And don’t forget: exam review session on Friday; Practical Exercise 2 was due yesterday!

Page 4:

The Real World

P. Jackson and I. Moulinier: Natural Language Processing for Online Applications

“There is no question concerning the commercial value of being able to classify documents automatically by content. There are myriad potential applications of such a capability for corporate Intranets, government departments, and Internet publishers”

“Understanding the data is one of the keys to successful categorization, yet this is an area in which most categorization tool vendors are extremely weak. Many of the ‘one size fits all’ tools on the market have not been tested on a wide range of content types.”

Page 5:

The Real World

Gee, I’m building a text classifier for real, now! What should I do?

How much training data do you have?
None
Very little
Quite a lot
A huge amount, and it’s growing

Page 6:

Manually written rules

No training data, adequate editorial staff? Never forget the hand-written rules solution!

If (wheat or grain) and not (whole or bread) then Categorize as grain

In practice, rules get a lot bigger than this, and can also be phrased using tf or tf.idf weights.

With careful crafting (human tuning on development data), performance is high: Construe achieved 94% recall and 84% precision over 675 categories (Hayes and Weinstein 1990).

But the amount of work required is huge: estimate two days per class, plus maintenance.
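As a minimal sketch, the rule above might look like the following in code; the helper names (tokens, classify_grain) are illustrative, not from Construe or any real system:

```python
def tokens(doc):
    """Lowercase bag of words for a document string."""
    return set(doc.lower().split())

def classify_grain(doc):
    """If (wheat or grain) and not (whole or bread), then GRAIN."""
    words = tokens(doc)
    if ({"wheat", "grain"} & words) and not ({"whole", "bread"} & words):
        return "GRAIN"
    return "OTHER"

print(classify_grain("canada wheat exports rose this season"))  # GRAIN
print(classify_grain("recipe for whole wheat bread"))           # OTHER
```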

Page 7:

Very little data?

If you’re just doing supervised classification, you should stick to something high-bias.
There are theoretical results that Naïve Bayes should do well in such circumstances (Ng and Jordan 2002, NIPS).

The interesting theoretical answer is to explore semi-supervised training methods: bootstrapping, EM over unlabeled documents, …

The practical answer is to get more labeled data as soon as you can. How can you insert yourself into a process where humans will be willing to label data for you?

Page 8:

A reasonable amount of data?

Perfect! We can use all our clever classifiers. Roll out the SVM!

But if you are using an SVM, NB, etc., you should probably be prepared with a “hybrid” solution where there is a Boolean overlay, or else use user-interpretable Boolean-like models such as decision trees.

Users like to hack, and management likes to be able to implement quick fixes immediately.

Page 9:

A huge amount of data?

This is great in theory for doing accurate classification…

But it could easily mean that expensive methods like SVMs (train time) or kNN (test time) are quite impractical

Naïve Bayes can come back into its own again! So can other methods with linear training/test complexity, like regularized logistic regression (though it is much more expensive to train).

Page 10:

A huge amount of data?

With enough data the choice of classifier may not matter much, and the best choice may be unclear

Data: Brill and Banko on context-sensitive spelling correction

But the fact that you have to keep doubling your data to improve performance is a little unpleasant

Page 11:

How many categories?

A few (well separated ones)? Easy!

A zillion closely related ones? Think: Yahoo! Directory, Library of Congress classification, legal applications. Quickly gets difficult!

Classifier combination is always a useful technique: voting, bagging, or boosting multiple classifiers.

There is much literature on hierarchical classification, but mileage is fairly unclear.

May need a hybrid automatic/manual solution.

Page 12:

How can one tweak performance?

Aim to exploit any domain-specific useful features that give special meanings or that zone the data, e.g., an author byline or mail headers.

Aim to collapse things that would be treated as different but shouldn’t be, e.g., part numbers, chemical formulas.

Page 13:

Does putting in “hacks” help?

You bet!

You can get a lot of value by differentially weighting contributions from different document zones:

Upweighting title words helps (Cohen & Singer 1996). Doubling the weight on title words is a good rule of thumb.

Upweighting the first sentence of each paragraph helps (Murata, 1999).

Upweighting sentences that contain title words helps (Ko et al., 2002).

Page 14:

Two techniques for zones

1. Have a completely separate set of features/parameters for different zones like the title

2. Use the same features (pooling/tying their parameters) across zones, but upweight the contribution of different zones

Commonly the second method is more successful: it costs you nothing in terms of sparsifying the data, but can give a very useful performance boost

Which is best is a contingent fact about the data.
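As a rough illustration, here is a minimal sketch of the second (pooling) technique, assuming a title weight of 2.0 (the rule of thumb from the previous slide); the zone names and weights are illustrative:

```python
from collections import Counter

# Pooled features across zones, with the title's contribution upweighted.
# ZONE_WEIGHTS and the doc layout are assumptions for this sketch.
ZONE_WEIGHTS = {"title": 2.0, "body": 1.0}

def zone_weighted_features(zones):
    """zones: dict mapping zone name -> text; returns weighted term counts."""
    feats = Counter()
    for zone, text in zones.items():
        weight = ZONE_WEIGHTS.get(zone, 1.0)
        for term in text.lower().split():
            feats[term] += weight     # same feature, zone-dependent weight
    return feats

doc = {"title": "jaguar reviews", "body": "the new jaguar model was reviewed"}
print(zone_weighted_features(doc)["jaguar"])  # 3.0 = 2.0 (title) + 1.0 (body)
```

The first technique would instead key features by zone (e.g., a separate title:jaguar feature), which fragments the training data across zones.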

Page 15:

Text Summarization techniques in text classification

Text Summarization: Process of extracting key pieces from text, normally by features on sentences reflecting position and content

Much of this work can be used to suggest weightings for terms in text categorization

See Kolcz, Prabakarmurthi, and Kalita, CIKM 2001: Summarization as feature selection for text categorization. They compare:

Categorizing purely with the title
Categorizing with the first paragraph only
Categorizing with the paragraph with the most keywords
Categorizing with the first and last paragraphs, etc.

Page 16:

Feature Selection: Why?

Text collections have a large number of features: 10,000 – 1,000,000 unique words … and more. Feature selection can:

Make using a particular classifier feasible
Some classifiers can’t deal with 100,000s of features

Reduce training time
Training time for some methods is quadratic or worse in the number of features

Improve generalization
Eliminate noise features; avoid overfitting

Page 17:

Recap: Feature Reduction

Standard ways of reducing the feature space for text:

Stemming
laugh, laughs, laughing, laughed → laugh

Stop word removal
E.g., eliminate all prepositions. Sometimes it seems like just the nouns have most of the information useful for classification.

Conversion to lower case

Tokenization
Break on all special characters: fire-fighter → fire, fighter
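A toy sketch of such a reduction pipeline (the suffix-stripping stemmer is a naive stand-in for a real stemmer like Porter’s, and the stop list is deliberately tiny):

```python
import re

STOP_WORDS = {"of", "the", "in", "on", "at", "to", "a", "an"}

def stem(token):
    """Toy stemmer: strip a few common suffixes (a real system would use Porter)."""
    for suffix in ("ing", "ed", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

def reduce_features(text):
    # Lowercase, break on all special characters (fire-fighter -> fire, fighter),
    # remove stop words, then stem.
    tokens = re.split(r"[^a-z0-9]+", text.lower())
    return [stem(t) for t in tokens if t and t not in STOP_WORDS]

print(reduce_features("The fire-fighter laughed at the laughing dogs"))
# ['fire', 'fighter', 'laugh', 'laugh', 'dog']
```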

Page 18:

Does stemming/lowercasing/… help?

As always it’s hard to tell, and empirical evaluation is normally the gold standard

But note that the role of tools like stemming is rather different for TextCat vs. IR:

For IR, you often want to collapse forms such as oxygenate and oxygenation, since all of those documents will be relevant to a query for oxygenation.

For TextCat, with sufficient training data, stemming does no good. It only helps in compensating for data sparseness (which can be severe in TextCat applications). Overly aggressive stemming can easily degrade performance.

Page 19:

Feature Selection

Yang and Pedersen (1997): a comparison of different selection criteria:

DF – document frequency
IG – information gain
MI – mutual information (pointwise)
CHI – chi-square

Common strategy: compute the statistic for each term, then keep the n terms with the highest value of this statistic.

Page 20:

χ² statistic (CHI)

χ² sums (fo − fe)²/fe over all table entries, where fo is the observed count and fe is the expected count under independence:

                    Term = jaguar    Term ≠ jaguar
Class = auto           2 (0.25)        500 (502)
Class ≠ auto           3 (4.75)       9500 (9498)

(observed counts fo, with expected counts fe in parentheses)

$$\chi^2(j,a) = \sum \frac{(f_o - f_e)^2}{f_e} = \frac{(2-0.25)^2}{0.25} + \frac{(3-4.75)^2}{4.75} + \frac{(500-502)^2}{502} + \frac{(9500-9498)^2}{9498} \approx 12.9$$

The null hypothesis (that the term and the class are independent) is rejected with confidence .999, since 12.9 > 10.83 (the critical value for .999 confidence).

Page 21:

χ² statistic (CHI)

There is a simpler formula for χ²:

$$\chi^2(t,c) = \frac{N\,(AD - CB)^2}{(A+C)\,(B+D)\,(A+B)\,(C+D)}$$

where N = A + B + C + D and

A = #(t, c)    B = #(t, ¬c)
C = #(¬t, c)    D = #(¬t, ¬c)
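A sketch of this formula in code, together with the common keep-top-n strategy from the Yang & Pedersen slide; the exact value differs slightly from the 12.9 above, which was computed with rounded expected counts:

```python
def chi_square(A, B, C, D):
    """chi-square for one term/class pair; A=#(t,c), B=#(t,~c), C=#(~t,c), D=#(~t,~c)."""
    N = A + B + C + D
    return N * (A * D - C * B) ** 2 / ((A + C) * (B + D) * (A + B) * (C + D))

# The jaguar/auto example: ~12.85 (the slide's 12.9 used rounded expected counts)
print(chi_square(A=2, B=500, C=3, D=9500))

def select_top_n(term_tables, n):
    """term_tables: dict term -> (A, B, C, D). Keep the n highest-scoring terms."""
    return sorted(term_tables, key=lambda t: chi_square(*term_tables[t]), reverse=True)[:n]
```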

Page 22:

Yang&Pedersen: Experiments

Two classification methods:
kNN (k nearest neighbors)
Linear Least Squares Fit (a regression method)

Collections:
Reuters-22173: 92 categories, 16,000 unique terms
Ohsumed (subset of Medline): 14,000 categories, 72,000 unique terms

ltc term weighting (normalized log tf.idf)
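For reference, a common reading of ltc weighting (log tf, idf, and cosine normalization in SMART notation) is:

$$w_{t,d} = \frac{\left(1 + \log \mathrm{tf}_{t,d}\right)\,\log \frac{N}{\mathrm{df}_t}}{\sqrt{\sum_{t'} \left[\left(1 + \log \mathrm{tf}_{t',d}\right)\,\log \frac{N}{\mathrm{df}_{t'}}\right]^2}}$$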

Page 23:
Page 24:

IG, DF, CHI Are Correlated.

Page 25:

Discussion

You can eliminate 90% of features for IG, DF, and CHI without decreasing performance.

In fact, performance increases with fewer features for IG, DF, and CHI.

Mutual information is very sensitive to small counts.

IG does best with the smallest number of features. Document frequency is close to optimal, and is by far the simplest feature selection method!

Similar results hold for LLSF (regression).

Page 26:

Feature Selection: Other Considerations

Generic vs. class-specific feature selection:
Completely generic (class-independent)
Separate feature set for each class
Mixed (à la Yang & Pedersen)

Maintainability over time: is aggressive feature selection good or bad for robustness over time?

Ideal: optimal features are selected as part of training, not as a preprocessing step.

Page 27:

Measuring Classification: Figures of Merit

Accuracy of classification
Main evaluation criterion in academia; more in a moment

Speed of training the statistical classifier
Some methods are very cheap; some very costly

Speed of classification (docs/hour)
No big differences for most algorithms; exceptions: kNN, complex preprocessing requirements

Effort in creating the training set / hand-built classifier
Human hours per topic

Page 28:

Measuring Classification: Figures of Merit

In the real world, economic measures. Your choices are:

Do no classification
That has a cost (hard to compute)

Do it all manually
Has an easy-to-compute cost if you are doing it like that now

Do it all with an automatic classifier
Mistakes have a cost

Do it with a combination of automatic classification and manual review of uncertain/difficult/“new” cases

Commonly the last method is the most cost-efficient and is the one adopted.

Page 29:

Concept Drift

Categories change over time.
Example: “president of the united states”
1999: “clinton” is a great feature
2002: “clinton” is a bad feature

One measure of a text classification system is how well it protects against concept drift.

Feature selection can be bad at protecting against concept drift.

Page 30:

Measures of Accuracy

Evaluation must be done on test data that is independent of the training data

Overall error rate
Not a good measure for small classes. Why?

Precision/recall for classification decisions
F1 measure: 1/F1 = ½ (1/P + 1/R) (rearranged below)

Correct estimate of the size of the category
Why is this different?

Stability over time / category drift

Utility
Costs of false positives / false negatives may be different. For example, cost = tp − 0.5·fp
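Rearranging the F1 definition above makes the familiar harmonic-mean form explicit:

$$\frac{1}{F_1} = \frac{1}{2}\left(\frac{1}{P} + \frac{1}{R}\right) \quad\Longrightarrow\quad F_1 = \frac{2PR}{P + R}$$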

Page 31:

Good practice department: Confusion matrix

In a perfect classification, only the diagonal has non-zero entries.

[Matrix figure: rows are the actual class, columns are the class assigned by the classifier. An entry of 53 at (i, j) means 53 of the docs actually in class i were put in class j by the classifier.]
Page 32:

Good practice department: N-Fold Cross-Validation

Results can vary based on sampling error due to different training and test sets.

Average results over multiple training and test sets (splits of the overall data) for the best results.

Ideally, test and training sets would be independent on each trial, but this would require too much labeled data.

Partition the data into N equal-sized disjoint segments.
Run N trials, each time using a different segment of the data for testing and training on the remaining N−1 segments.
This way, at least the test sets are independent.
Report average classification accuracy over the N trials.
Typically, N = 10.
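A minimal sketch of the procedure, where train and accuracy are placeholders for whatever classifier and metric are being evaluated:

```python
import random

def cross_validate(data, train, accuracy, n_folds=10, seed=0):
    """Average test accuracy over n_folds disjoint train/test splits."""
    data = data[:]                              # don't mutate the caller's list
    random.Random(seed).shuffle(data)
    folds = [data[i::n_folds] for i in range(n_folds)]  # N disjoint segments
    scores = []
    for i in range(n_folds):
        test = folds[i]                         # one segment for testing
        training = [x for j, fold in enumerate(folds) if j != i for x in fold]
        scores.append(accuracy(train(training), test))
    return sum(scores) / n_folds                # report the average over N trials
```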

Page 33:

Good practice department: Learning Curves

In practice, labeled data is usually rare and expensive.

Would like to know how performance varies with the number of training instances.

Learning curves plot classification accuracy on independent test data (Y axis) versus number of training examples (X axis).

One can combine the above with cross-validation, producing learning curves averaged over multiple trials.

Page 34:

Micro- vs. Macro-Averaging

If we have more than one class, how do we combine multiple performance measures into one quantity?

Macroaveraging: Compute performance for each class, then average.

Microaveraging: Collect decisions for all classes, compute contingency table, evaluate.

Page 35:

Micro- vs. Macro-Averaging: Example

Class 1:
                   Truth: yes    Truth: no
Classifier: yes        10            10
Classifier: no         10           970

Class 2:
                   Truth: yes    Truth: no
Classifier: yes        90            10
Classifier: no         10           890

Microaveraged (pooled) table:
                   Truth: yes    Truth: no
Classifier: yes       100            20
Classifier: no         20          1860

Macroaveraged precision: (0.5 + 0.9)/2 = 0.7
Microaveraged precision: 100/120 ≈ 0.83
Why this difference?
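Reproducing the numbers above in a short sketch (per-class tables stored as (tp, fp, fn, tn) tuples; the names are illustrative):

```python
# Per-class contingency counts from the tables above: (tp, fp, fn, tn)
tables = {"class1": (10, 10, 10, 970), "class2": (90, 10, 10, 890)}

def precision(tp, fp):
    return tp / (tp + fp)

# Macroaveraging: compute precision per class, then average the values.
macro = sum(precision(tp, fp) for tp, fp, _, _ in tables.values()) / len(tables)

# Microaveraging: pool the per-class tables, then compute precision once.
pooled_tp = sum(t[0] for t in tables.values())
pooled_fp = sum(t[1] for t in tables.values())
micro = precision(pooled_tp, pooled_fp)

print(macro)  # 0.7   -- every class counts equally
print(micro)  # ~0.83 -- every decision counts equally, so large classes dominate
```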

Page 36:

Yang & Liu: SVM vs. Other Methods

Page 37:

CS276B Web Search and Mining

Page 38:

CS276B

Web and related technologies:
Web search
Link analysis
XML
Collaborative Filtering

Page 39:

Evolution of search engines

First generation (1995–1997: AV, Excite, Lycos, etc.): use only “on-page” text data
Word frequency, language

Second generation (from 1998; made popular by Google, but everyone does it now): use off-page, web-specific data
Link (or connectivity) analysis
Click-through data (what results people click on)
Anchor text (how people refer to this page)

Third generation (still experimental): answer “the need behind the query”
Semantic analysis: what is this about?
Focus on user need, rather than on the query
Context determination
Helping the user
Integration of search and text analysis

Page 40:

Anchor text (first used by the WWW Worm – McBryan [Mcbr94])

The text in the vicinity of a hyperlink is descriptive of the page it points to. For example, anchor texts such as “Here is a great picture of a tiger”, “Tiger image”, and “Cool tiger webpage” all describe the same tiger page.

Page 41:

Search Engine Optimization II: Tutorial on Cloaking & Stealth Technology

Page 42:

Personalization

[Diagram: the Outride Personalized Search System. A user query goes through Query Augmentation and Result Processing, informed by the Outride schema (User × Content × History × Demographics), which draws on interests, demographics, click stream, search history, and application usage. It sits on top of intranet search and web search engines, whose schema is Keyword × Doc ID × Link Rank, and returns the result set through the Outride sidebar interface.]

Page 43:

Recap: XQuery

Møller and Schwartzbach

Page 44:

Collaborative filtering

Corporate intranets: recommendation, finding domain experts, …
E-commerce: product recommendations (Amazon)
Medical applications: matching patients to doctors, clinical trials, …
Customer Relationship Management: matching customer problems to internal experts in a support organization

Page 45:

Text Mining Tools

[Screenshot: a text mining tool that returns key terms]

Page 46:

Chance to do a major project

Something that can exploit things that we’ve looked at this quarter

Or things that we move to next quarter

A great learning context in which to investigate IR, classification and clustering, information extraction, link analysis, various forms of text mining, textbase visualization, collaborative filtering, …