Watch-It-Next: A Contextual TV Recommendation System
TRANSCRIPT
Watch-It-Next: A Contextual TV Recommendation System
Michal Aharon, Eshcar Hillel, Amit Kagian, Ronny Lempel, Hayim Makabee, Raz Nissim
Recommendation in Personal Devices and Accounts
Challenge: Recommendations in Shared Accounts and Devices
“I am a 34 yo man who enjoys action and sci-fi movies. This is what my children have done to my netflix account”
Our Focus: Recommendations for Smart TVs
Smart TVs can track what is being watched on them, but not who was watching.
Main problems:
- Inferring who has consumed each item in the past
- Inferring who is currently requesting the recommendations
- The "who" can be a subset of users
Solution: Using Context
Previous work: time of day
Context in this Work: Current Item Being Watched
This Work: Contextual Personalized Recommendations
WatchItNext problem: it is 8:30pm and “House of Cards” is on. What should we recommend to be watched next on this device?
Implicit assumption: there is a good chance that whoever is in front of the set now will remain there.
WatchItNext Inputs and Output
Input: the available programs, a.k.a. the “line-up”
Output: ranked recommendations
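As a rough illustration of this interface (the function and parameter names below are my own, not from the paper), the recommender maps a device, the item it is currently watching, and the line-up to a ranked list:

```python
from typing import Callable, List, Tuple

def recommend(device_id: str,
              context_item: str,
              lineup: List[str],
              score_fn: Callable[[str, str, str], float]) -> List[Tuple[str, float]]:
    """Rank the available programs (the "line-up") for a device,
    given the item it is currently watching (the context)."""
    scored = [(item, score_fn(device_id, context_item, item)) for item in lineup]
    # Output: the line-up sorted by descending score, i.e. ranked recommendations.
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```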
Collaborative Filtering
- A fundamental principle in recommender systems
- Taps similarities in patterns of consumption/enjoyment of items by users
- Recommends to a user what users with detected similar tastes have consumed/enjoyed
Collaborative Filtering – Mathematical Abstraction
- Consider a consumption matrix R of users and items
- r_{u,i} = 1 whenever person u consumed item i; in other cases, r_{u,i} might be person u's rating of item i
- The matrix R is typically very sparse… and often very large
[Diagram: R is a |U| x |I| matrix, with users as rows and items as columns]
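For concreteness, such a consumption matrix could be assembled from (user, item) viewing records roughly as follows (a sketch with made-up indices; the real R has hundreds of thousands of rows):

```python
import numpy as np
from scipy.sparse import csr_matrix

# Hypothetical viewing records: (user index, item index) pairs.
views = [(0, 2), (0, 5), (1, 2), (3, 7)]
num_users, num_items = 4, 10                     # |U| and |I|

rows = [u for u, _ in views]
cols = [i for _, i in views]
data = np.ones(len(views))                       # r_{u,i} = 1 for consumed items

# R is |U| x |I| and sparse: only the consumed (u, i) pairs are stored.
R = csr_matrix((data, (rows, cols)), shape=(num_users, num_items))
print(R.toarray())
```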
Collaborative Filtering – Matrix Factorization
- Latent factor models (LFM): map both users and items to some f-dimensional space R^f, i.e. produce f-dimensional vectors v_u and w_i for each user and item
- Define rating estimates as inner products: q_{u,i} = <v_u, w_i>
- Main problem: finding a mapping of users and items to the latent factor space that produces “good” estimates (a code sketch follows)
[Diagram: the |U| x |I| matrix R is approximated by the product V·W of a |U| x f matrix V and an f x |I| matrix W]
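A minimal SGD sketch of such a factorization (the plain squared-error SGD loop and the hyperparameters are assumptions for illustration; the paper's actual training procedure may differ):

```python
import numpy as np

def factorize(R, f=8, lr=0.01, reg=0.05, epochs=20, seed=0):
    """Learn f-dimensional rows of V (users) and W (items) so that
    <v_u, w_i> approximates the observed entries of a dense matrix R."""
    rng = np.random.default_rng(seed)
    num_users, num_items = R.shape
    V = 0.1 * rng.standard_normal((num_users, f))
    W = 0.1 * rng.standard_normal((num_items, f))
    observed = list(zip(*np.nonzero(R)))          # train only on observed entries
    for _ in range(epochs):
        for u, i in observed:
            err = R[u, i] - V[u] @ W[i]           # residual of q_{u,i} = <v_u, w_i>
            v_u = V[u].copy()                     # keep pre-update value for W's step
            V[u] += lr * (err * W[i] - reg * V[u])
            W[i] += lr * (err * v_u - reg * W[i])
    return V, W
```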
Main Contribution: “3-Way” Technique
- Learn a standard matrix factorization model (LFM/LDA)
- When recommending to a device d currently watching context item c, score each target item t as follows: S(t follows c | d) = Σ_{j=1..k} v_d(j) · w_c(j) · w_t(j) (sketched in code below)
- May require an additive shift to get rid of negative values
- The score is high for targets that agree with both the context and the device
- Results in “Sequential LFM/LDA” – a personalized contextual recommender
- Again, no need to model context or change the learning algorithm; learn as usual, just apply the change when scoring
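A minimal numpy sketch of this scoring rule (the variable names are mine; the device and item vectors are assumed to come from a factorization such as the one above):

```python
import numpy as np

def three_way_scores(v_d, w_c, W_lineup):
    """S(t follows c | d) = sum_j v_d(j) * w_c(j) * w_t(j) for every target t
    in the line-up, computed at once as a matrix-vector product."""
    scores = W_lineup @ (v_d * w_c)          # element-wise product, then dot with each w_t
    return scores - min(scores.min(), 0.0)   # optional additive shift to avoid negatives

# Illustrative usage with random vectors in an f = 8 latent space.
rng = np.random.default_rng(0)
f = 8
v_d = rng.standard_normal(f)                 # device vector
w_c = rng.standard_normal(f)                 # vector of the item currently being watched
W_lineup = rng.standard_normal((5, f))       # vectors of the 5 items in the line-up
ranking = np.argsort(-three_way_scores(v_d, w_c, W_lineup))
print(ranking)                               # line-up indices, best target first
```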
Data by the Numbers
Training data: three months' worth of viewership data

  Devices    Unique items*    Triplets
  339,647    17,232           More than 19M

  * Items are {movie, sports event, series} – not at the individual episode level

Test data: derived from one month of viewership data

  Setting        Test instances    Average line-up size
  Habitual       ~3.8M             390
  Exploratory    ~1.7M             349
Metric: Avg. Rank Percentile (ARP)
Note: with large line-ups, ARP is practically equivalent to average AUC
[Figure: example line-up in which candidate next items receive rank percentiles of 0.25, 0.50, 0.75, and 1.0]
Rank Percentile properties:
- Ranges in (0,1]
- Higher is better
- Random scores ~0.5 in large line-ups
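To make the metric concrete, the rank percentile of the item that was actually watched next can be computed from the scored line-up roughly as below (the exact tie-breaking used in the paper may differ):

```python
import numpy as np

def rank_percentile(scores, watched_idx):
    """Rank percentile of the item actually watched next: 1.0 if it is ranked
    first in the line-up, and ~0.5 in expectation under random scoring."""
    n = len(scores)
    rank = 1 + int(np.sum(scores > scores[watched_idx]))   # 1 = ranked best
    return (n - rank + 1) / n                               # in (0, 1], higher is better

# ARP is the mean rank percentile over all test instances.
scores = np.array([0.2, 0.9, 0.4, 0.1])
print(rank_percentile(scores, watched_idx=1))               # 1.0: watched item ranked first
```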
Baselines

  Name                    Personalized?   Contextual?
  General popularity      No              No
  Sequential popularity   No              Yes
  Temporal popularity     No              Yes
  Device popularity*      Yes             No
  LFM                     Yes             No
  LDA                     Yes             No

  * Only applicable to habitual recommendations
Contextual Personalized Recommenders
SequentialLDA [LFM]: 3-way element-wise multiplication of the device vector, context-item vector, and target-item vector
TemporalLDA [LFM]: regular LDA/LFM score, multiplied by Temporal Popularity
TempSeqLDA [LFM]: 3-way score multiplied by Temporal Popularity (see the combined-score sketch below)
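Under the same assumptions as the earlier sketches (a learned device vector, item vectors for the line-up, and a hypothetical per-item temporal-popularity vector), the combined variants reduce to element-wise products:

```python
import numpy as np

def temporal_lfm_scores(v_d, W_lineup, temporal_pop):
    """TemporalLDA/LFM-style score: regular inner-product score per line-up
    item, multiplied by that item's popularity at the current time of day."""
    return (W_lineup @ v_d) * temporal_pop

def temp_seq_scores(v_d, w_c, W_lineup, temporal_pop):
    """TempSeqLDA/LFM-style score: the 3-way contextual score multiplied by
    temporal popularity, one value per line-up item."""
    three_way = W_lineup @ (v_d * w_c)
    three_way -= min(three_way.min(), 0.0)   # shift to non-negative values before multiplying
    return three_way * temporal_pop
```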
Results (1): Sequential Context Matters
Degradation when using a random item as context indicates that the correct context item reflects the current viewing session, and implicitly the current watchers of the device.
Results (2): Sequential Context Matters
Device Entropy: the entropy of p(topic | device) as computed by LDA on the training data; high values correspond to diverse distributions
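For intuition, device entropy can be computed from the per-device topic distribution along these lines (a sketch, not the authors' code):

```python
import numpy as np

def device_entropy(topic_probs):
    """Entropy of p(topic | device): higher values mean the device's viewing is
    spread over many topics, e.g. a device shared by users with different tastes."""
    p = np.asarray(topic_probs, dtype=float)
    p = p[p > 0]                                  # skip zero-probability topics (0 * log 0 = 0)
    return float(-np.sum(p * np.log2(p)))

print(device_entropy([0.25, 0.25, 0.25, 0.25]))   # 2.0 bits: very diverse device
print(device_entropy([1.0, 0.0, 0.0, 0.0]))       # 0.0 bits: single-topic device
```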
Results (3): Exploratory Setting
Conclusions
- Multi-user or shared devices pose challenging recommendation problems.
- Sequential context helps – it “narrows” the topical variety of the program to be watched next on the device.
- Intuitively, context serves to implicitly disambiguate the current user or users of the device.
- The 3-Way technique is an effective way of incorporating sequential context that has no impact on learning.
Thank you! Questions? Please come visit the poster tomorrow.