Analyzing Weighting Schemes in Collaborative Filtering: Cold Start, Post Cold Start and Power Users


Analyzing Weighting Schemes in Collaborative Filtering:
Cold Start, Post Cold Start and Power Users

ACM SAC 2012
Alan Said, Brijnesh J. Jain, Sahin Albayrak
{alan, jain, sahin}@dai-lab.de

TU-Berlin

Outline

Movie Recommendation

Problem: Popularity Bias

Collaborative Filtering

Similarity Weighting Schemes

Experiments

Results

Conclusion

Recommender Systems

What they should do:
Find items which should be of interest to users
Find items which should be useful to users

What they often do instead:
Find items which are already known by users
Find items which users would have found anyway

Popularity Bias

What is popularity bias?

Some things are more popular than others:
Blockbuster movies (1): Pulp Fiction, Inception, etc.
Best-selling books (2): Steve Jobs Bio, A Song of Ice and Fire, etc.
Apps (3): Angry Birds, Skype, Kindle

1: IMDb most popular
2: Amazon 2011 best sellers
3: Most downloaded Android apps

Popularity Bias

Popular items = highly rated

Collaborative Filtering

Looks for users who share rating patterns

Use ratings from like-minded users to calculate a prediction for the user.

Boils down to:
The most similar users form a neighborhood.
The items which are most popular in the neighborhood are recommended.
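The neighborhood idea above can be sketched in a few lines; the toy ratings, user names, and function names here are illustrative assumptions, not data or code from the paper.

```python
# Minimal sketch of user-based collaborative filtering:
# find the most similar users, then recommend the items
# that score highest within that neighborhood.
from math import sqrt

# Hypothetical user -> {item: rating} data, for illustration only.
ratings = {
    "alice": {"Inception": 5, "Pulp Fiction": 4, "Memento": 4},
    "bob":   {"Inception": 5, "Pulp Fiction": 5, "Skyfall": 3},
    "carol": {"Skyfall": 4, "Casablanca": 5},
}

def cosine_sim(u, v):
    """Cosine similarity over the items both users rated."""
    common = set(ratings[u]) & set(ratings[v])
    if not common:
        return 0.0
    dot = sum(ratings[u][i] * ratings[v][i] for i in common)
    norm_u = sqrt(sum(r * r for r in ratings[u].values()))
    norm_v = sqrt(sum(r * r for r in ratings[v].values()))
    return dot / (norm_u * norm_v)

def recommend(user, k=2, n=3):
    """Recommend the unseen items scoring highest among the k nearest users."""
    neighbors = sorted(
        (v for v in ratings if v != user),
        key=lambda v: cosine_sim(user, v),
        reverse=True,
    )[:k]
    scores = {}
    for v in neighbors:
        for item, r in ratings[v].items():
            if item not in ratings[user]:  # only recommend unseen items
                scores[item] = scores.get(item, 0.0) + cosine_sim(user, v) * r
    return sorted(scores, key=scores.get, reverse=True)[:n]
```

With this toy data, `recommend("alice")` ranks Skyfall first, since it comes from bob, the neighbor most similar to alice.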

Collaborative Filtering: Similarities

Standard CF approaches do not consider the popularity of items when creating neighborhoods of similar users, i.e. they do not account for the popularity bias.

Percentage of ratings given to different popularity classes of movies in the Movielens 10 Million ratings dataset

Collaborative Filtering: Similarities

Distribution of ratings given to the three most popular movies in the Movielens 10 Million dataset

Weighting Schemes
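The formulas on this slide did not survive the transcript. As a sketch: the Inverse User Frequency weight is commonly defined in the CF literature (following Breese et al., 1998) as below; the linear-inverse form shown alongside it is an assumed counterpart for illustration, not necessarily the paper's exact definition.

```latex
% Inverse User Frequency: items rated by many users carry less weight.
% n   = total number of users
% n_i = number of users who rated item i
f_i = \log\frac{n}{n_i}

% Assumed linear-inverse counterpart (illustrative only):
w_i = 1 - \frac{n_i}{n}
```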

Experiments

Approach: Test two similarity weighting strategies in different scenarios on two different movie rating datasets.

Weighting: Linear Inverse & Inverse User Frequency

Datasets: Movielens 10M & Moviepilot

Scenarios: Cold Start, Post Cold Start, Power Users
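A minimal sketch of how such a popularity weighting could enter the similarity computation, here as an IUF-weighted cosine similarity; the toy data and the exact way the weight is combined are illustrative assumptions, not the paper's formulation.

```python
# Inverse User Frequency (IUF) weighting inside a user-user cosine
# similarity: items rated by many users contribute less to similarity.
from math import log, sqrt

# Hypothetical user -> {item: rating} data, for illustration only.
ratings = {
    "u1": {"blockbuster": 5, "niche_a": 4},
    "u2": {"blockbuster": 5, "niche_b": 3},
    "u3": {"blockbuster": 4, "niche_a": 5},
}

n_users = len(ratings)

# Count how many users rated each item.
item_count = {}
for user_ratings in ratings.values():
    for item in user_ratings:
        item_count[item] = item_count.get(item, 0) + 1

def iuf(item):
    """log(n / n_i): an item rated by every user gets weight 0."""
    return log(n_users / item_count[item])

def weighted_cosine(u, v):
    """Cosine similarity where each item is scaled by its IUF weight."""
    common = set(ratings[u]) & set(ratings[v])
    dot = sum(iuf(i) * ratings[u][i] * ratings[v][i] for i in common)
    norm_u = sqrt(sum(iuf(i) * ratings[u][i] ** 2 for i in ratings[u]))
    norm_v = sqrt(sum(iuf(i) * ratings[v][i] ** 2 for i in ratings[v]))
    if norm_u == 0 or norm_v == 0:
        return 0.0
    return dot / (norm_u * norm_v)
```

With these toy users, agreeing only on the blockbuster yields similarity 0 (its IUF weight is log(3/3) = 0), while sharing the niche item still counts, so u1 and u3 end up far more similar than u1 and u2.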

Results

When is it good to use popularity weighting?

>20% improvement in Precision

Ratings: 1-5 stars

30 - 100 items each

Movielens 10M

Results

When is it not good to use popularity weighting?

No significant improvement in Precision

Ratings: 0-10 stars

Moviepilot

Conclusion

Popular items create a problem for recommender systems due to favorable bias.

Similarity weighting can lessen the effects of the bias:
when the rating scale is compact
when users have neither very few nor very many ratings

Ongoing Work

What if lower precision does not mean poorer quality?
Lower precision can be an indicator of new, novel, or serendipitous recommendations; these will produce lower precision values in offline evaluation.

Currently evaluating the quality of recommender algorithms based on user feedback, not only on precision/recall/etc. values.

Users and Noise: The Magic Barrier of Recommender Systems (UMAP'12)

User satisfaction survey: www.dai-lab.de/~alan/survey

Questions?

Thank You!