probabilistic ranking
![Page 1: probabilistic ranking](https://reader031.vdocuments.mx/reader031/viewer/2022013100/548b085db47959606e8b4766/html5/thumbnails/1.jpg)
Ahmet Selman Bozkır
![Page 2: probabilistic ranking](https://reader031.vdocuments.mx/reader031/viewer/2022013100/548b085db47959606e8b4766/html5/thumbnails/2.jpg)
- Introduction to conditional probability, total probability & Bayes' theorem
- Historical background of probabilistic information retrieval
- Why probabilities in IR?
- The document ranking problem
- The Binary Independence Model
![Page 3: probabilistic ranking](https://reader031.vdocuments.mx/reader031/viewer/2022013100/548b085db47959606e8b4766/html5/thumbnails/3.jpg)
Given some event B with nonzero probability, P(B) > 0, we can define the conditional probability of an event A, given B, by

$$P(A \mid B) = \frac{P(A \cap B)}{P(B)}$$

The probability P(A | B) simply reflects the fact that the probability of an event A may depend on a second event B. So if A and B are mutually exclusive, $A \cap B = \varnothing$ and $P(A \mid B) = 0$.
![Page 4: probabilistic ranking](https://reader031.vdocuments.mx/reader031/viewer/2022013100/548b085db47959606e8b4766/html5/thumbnails/4.jpg)
| Resistance (Ω) | 5% tolerance | 10% tolerance | Total |
|---|---|---|---|
| 22 | 10 | 14 | 24 |
| 47 | 28 | 26 | 44 |
| 100 | 24 | 8 | 32 |
| **Total** | 62 | 38 | 100 |
Let's define three events:

1. A: "draw a 47 Ω resistor"
2. B: "draw a resistor with 5% tolerance"
3. C: "draw a 100 Ω resistor"

P(A) = P(47 Ω) = 44/100, P(B) = P(5%) = 62/100, P(C) = P(100 Ω) = 32/100

The joint probabilities are: P(A ∩ B) = P(47 Ω ∩ 5%) = 28/100, P(A ∩ C) = P(47 Ω ∩ 100 Ω) = 0, P(B ∩ C) = P(5% ∩ 100 Ω) = 24/100.
If we use these in the definition of conditional probability:

$$P(A \mid B) = \frac{P(A \cap B)}{P(B)} = \frac{28}{62}, \qquad
P(A \mid C) = \frac{P(A \cap C)}{P(C)} = 0, \qquad
P(B \mid C) = \frac{P(B \cap C)}{P(C)} = \frac{24}{32}$$
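As a quick check, here is a minimal Python sketch (not from the slides; the table is encoded as a dictionary of counts) that reproduces these conditional probabilities:

```python
from fractions import Fraction

# Joint counts from the table above: (resistance, tolerance) -> count.
counts = {
    (22, "5%"): 10, (22, "10%"): 14,
    (47, "5%"): 28, (47, "10%"): 26,
    (100, "5%"): 24, (100, "10%"): 8,
}
total = sum(counts.values())  # 100

def p(event):
    """Probability of the set of outcomes satisfying `event`."""
    return Fraction(sum(c for k, c in counts.items() if event(k)), total)

A = lambda k: k[0] == 47    # "draw a 47-ohm resistor"
B = lambda k: k[1] == "5%"  # "draw a resistor with 5% tolerance"
C = lambda k: k[0] == 100   # "draw a 100-ohm resistor"

def cond(e1, e2):
    """P(e1 | e2) = P(e1 and e2) / P(e2)."""
    return p(lambda k: e1(k) and e2(k)) / p(e2)

print(cond(A, B))  # P(A|B) = 28/62 = 14/31
print(cond(A, C))  # P(A|C) = 0
print(cond(B, C))  # P(B|C) = 24/32 = 3/4
```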
![Page 5: probabilistic ranking](https://reader031.vdocuments.mx/reader031/viewer/2022013100/548b085db47959606e8b4766/html5/thumbnails/5.jpg)
The probability P(A) of any event A defined on a sample space S can be expressed in terms of conditional probabilities. Suppose we are given N mutually exclusive events B_n, n = 1, 2, …, N, whose union equals S, as illustrated in the figure below.
[Figure: Venn diagram of the sample space S partitioned into mutually exclusive events B₁, B₂, …, B_N, with an event A overlapping several of the B_n.]

$$\bigcup_{n=1}^{N} B_n = S, \qquad A = A \cap S = \bigcup_{n=1}^{N} (A \cap B_n)$$

so the total probability of A is

$$P(A) = \sum_{n=1}^{N} P(A \cap B_n) = \sum_{n=1}^{N} P(A \mid B_n)\, P(B_n)$$
![Page 6: probabilistic ranking](https://reader031.vdocuments.mx/reader031/viewer/2022013100/548b085db47959606e8b4766/html5/thumbnails/6.jpg)
The definition of conditional probability applies to any two events. In particular, let B_n be one of the events defined above in the subsection on total probability. If P(A) ≠ 0,

$$P(B_n \mid A) = \frac{P(B_n \cap A)}{P(A)}$$

or, alternatively, if P(B_n) ≠ 0,

$$P(A \mid B_n) = \frac{P(A \cap B_n)}{P(B_n)}$$
![Page 7: probabilistic ranking](https://reader031.vdocuments.mx/reader031/viewer/2022013100/548b085db47959606e8b4766/html5/thumbnails/7.jpg)
One form of Bayes' theorem is obtained by equating these two expressions:

$$P(B_n \mid A) = \frac{P(A \mid B_n)\, P(B_n)}{P(A)}$$

Another form derives from a substitution of P(A) as given by the total probability theorem:

$$P(B_n \mid A) = \frac{P(A \mid B_n)\, P(B_n)}{P(A \mid B_1) P(B_1) + \cdots + P(A \mid B_N) P(B_N)}$$
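To make the two forms concrete, here is a small Python sketch with illustrative numbers (the priors and likelihoods are made up; only the formulas come from the slides):

```python
from fractions import Fraction

# A partition B_1..B_N of the sample space with priors P(B_n),
# plus likelihoods P(A | B_n). Numbers are illustrative only.
priors = [Fraction(1, 2), Fraction(3, 10), Fraction(1, 5)]       # P(B_n), sums to 1
likelihoods = [Fraction(1, 10), Fraction(1, 2), Fraction(9, 10)]  # P(A | B_n)

# Total probability: P(A) = sum_n P(A | B_n) P(B_n)
p_a = sum(l * p for l, p in zip(likelihoods, priors))

# Bayes' theorem: P(B_n | A) = P(A | B_n) P(B_n) / P(A)
posteriors = [l * p / p_a for l, p in zip(likelihoods, priors)]

print(p_a)              # P(A) = 19/50
print(posteriors)       # P(B_n | A)
print(sum(posteriors))  # 1, as it must be for a partition
```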
![Page 8: probabilistic ranking](https://reader031.vdocuments.mx/reader031/viewer/2022013100/548b085db47959606e8b4766/html5/thumbnails/8.jpg)
The first attempts to develop a probabilistic theory of retrieval were made over 30 years ago [Maron and Kuhns 1960; Miller 1971], and since then there has been a steady development of the approach. There are already several operational IR systems based upon probabilistic or semiprobabilistic models.

One major obstacle in probabilistic or semiprobabilistic IR models is finding methods for estimating the probabilities used to evaluate the probability of relevance that are both theoretically sound and computationally efficient.

The first models to be based upon such independence assumptions were the "binary independence indexing model" and the "binary independence retrieval model".

One area of recent research investigates the use of an explicit network representation of dependencies. The networks are processed by means of Bayesian inference or belief theory, using evidential reasoning techniques such as those described by Pearl 1988. This approach is an extension of the earliest probabilistic models, taking into account the conditional dependencies present in a real environment.
![Page 9: probabilistic ranking](https://reader031.vdocuments.mx/reader031/viewer/2022013100/548b085db47959606e8b4766/html5/thumbnails/9.jpg)
[Figure: the standard IR matching picture. A user information need leads to a query representation; documents lead to document representations; the open question is how to match the two.]
In traditional IR systems, matching between each document and query is attempted in a semantically imprecise space of index terms.

Probabilities provide a principled foundation for uncertain reasoning: can we use probabilities to quantify our uncertainties? Our understanding of the user need is uncertain, and whether a document has relevant content is an uncertain guess.
![Page 10: probabilistic ranking](https://reader031.vdocuments.mx/reader031/viewer/2022013100/548b085db47959606e8b4766/html5/thumbnails/10.jpg)
- Classical probabilistic retrieval model: the probability ranking principle, etc.
- (Naïve) Bayesian text categorization
- Bayesian networks for text retrieval

Probabilistic methods are one of the oldest but also one of the currently hottest topics in IR. Traditionally: neat ideas, but they've never won on performance. It may be different now.
![Page 11: probabilistic ranking](https://reader031.vdocuments.mx/reader031/viewer/2022013100/548b085db47959606e8b4766/html5/thumbnails/11.jpg)
In probabilistic information retrieval, the goal is the estimation of the probability of relevance P(R | q_k, d_m) that a document d_m will be judged relevant by a user with request q_k. In order to estimate this probability, a large number of probabilistic models have been developed.
Typically, such a model is based on representations of queries and documents (e.g., as sets of terms); in addition to this, probabilistic assumptions about the distribution of elements of these representations within relevant and nonrelevant documents are required.
By collecting relevance feedback data from a few documents, the model then can be applied in order to estimate the probability of relevance for the remaining documents in the collection.
![Page 12: probabilistic ranking](https://reader031.vdocuments.mx/reader031/viewer/2022013100/548b085db47959606e8b4766/html5/thumbnails/12.jpg)
- We have a collection of documents.
- A user issues a query.
- A list of documents needs to be returned.
- The ranking method is the core of an IR system: in what order do we present documents to the user?
- We want the "best" document to be first, the second best second, etc.
- Idea: rank by the probability of relevance of the document w.r.t. the information need: P(relevant | document_i, query)
![Page 13: probabilistic ranking](https://reader031.vdocuments.mx/reader031/viewer/2022013100/548b085db47959606e8b4766/html5/thumbnails/13.jpg)
For events a and b, Bayes' Rule:

$$p(a, b) = p(a \cap b) = p(a \mid b)\, p(b) = p(b \mid a)\, p(a)$$

$$p(a \mid b) = \frac{p(b \mid a)\, p(a)}{p(b)} = \frac{p(b \mid a)\, p(a)}{\sum_{x \in \{a, \bar a\}} p(b \mid x)\, p(x)}$$

where p(a) is the prior and p(a | b) is the posterior.

Odds:

$$O(a) = \frac{p(a)}{p(\bar a)} = \frac{p(a)}{1 - p(a)}$$
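A short Python sketch of the odds view, with illustrative numbers: the posterior odds are the prior odds multiplied by the likelihood ratio, which is the form the BIM derivation below exploits.

```python
# Posterior odds = likelihood ratio x prior odds; numbers are illustrative.
def odds(p):
    """O(a) = p(a) / (1 - p(a))."""
    return p / (1 - p)

p_a = 0.2              # prior p(a)
p_b_given_a = 0.9      # p(b | a)
p_b_given_not_a = 0.3  # p(b | not a)

prior_odds = odds(p_a)                              # 0.25
likelihood_ratio = p_b_given_a / p_b_given_not_a    # 3.0
posterior_odds = likelihood_ratio * prior_odds      # 0.75

# Convert back to a probability: p = O / (1 + O).
p_a_given_b = posterior_odds / (1 + posterior_odds)
print(p_a_given_b)  # ~0.4286, same as direct Bayes: 0.18 / 0.42
```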
![Page 14: probabilistic ranking](https://reader031.vdocuments.mx/reader031/viewer/2022013100/548b085db47959606e8b4766/html5/thumbnails/14.jpg)
Let x be a document in the collection. Let R represent relevance of a document w.r.t. a given (fixed) query and let NR represent non-relevance.

$$p(R \mid x) = \frac{p(x \mid R)\, p(R)}{p(x)}, \qquad p(NR \mid x) = \frac{p(x \mid NR)\, p(NR)}{p(x)}$$

- p(x|R), p(x|NR): the probability that if a relevant (non-relevant) document is retrieved, it is x.
- p(R), p(NR): the prior probability of retrieving a (non-)relevant document.
- We need to find p(R|x): the probability that a document x is relevant.
- p(R|x) + p(NR|x) = 1
- (Notation: R = {0, 1} vs. NR/R.)
![Page 15: probabilistic ranking](https://reader031.vdocuments.mx/reader031/viewer/2022013100/548b085db47959606e8b4766/html5/thumbnails/15.jpg)
Bayes' Optimal Decision Rule: x is relevant iff p(R|x) > p(NR|x).

PRP in action: rank all documents by p(R|x).
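A minimal sketch of the PRP, assuming we already have estimates of p(R|x) for each document (the probabilities here are placeholders):

```python
# PRP in action: given estimates of p(R | x) for each document,
# rank by that probability, highest first.
docs = {"d1": 0.10, "d2": 0.85, "d3": 0.40}  # doc id -> estimated p(R | x)

ranking = sorted(docs, key=docs.get, reverse=True)
print(ranking)  # ['d2', 'd3', 'd1']
```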
![Page 16: probabilistic ranking](https://reader031.vdocuments.mx/reader031/viewer/2022013100/548b085db47959606e8b4766/html5/thumbnails/16.jpg)
More complex case: retrieval costs. Let d be a document, C the cost of retrieval of a relevant document, and C′ the cost of retrieval of a non-relevant document. The Probability Ranking Principle then says: if

$$C \cdot p(R \mid d) + C' \cdot (1 - p(R \mid d)) \;\leq\; C \cdot p(R \mid d') + C' \cdot (1 - p(R \mid d'))$$

for all d′ not yet retrieved, then d is the next document to be retrieved. (We won't further consider loss/utility from now on.)
![Page 17: probabilistic ranking](https://reader031.vdocuments.mx/reader031/viewer/2022013100/548b085db47959606e8b4766/html5/thumbnails/17.jpg)
How do we compute all those probabilities? We do not know the exact probabilities and have to use estimates. The Binary Independence Retrieval (BIR) model, which we discuss later today, is the simplest model.

Questionable assumptions:
- "Relevance" of each document is independent of the relevance of other documents. (Really, it's bad to keep on returning duplicates.)
- A Boolean model of relevance.
![Page 18: probabilistic ranking](https://reader031.vdocuments.mx/reader031/viewer/2022013100/548b085db47959606e8b4766/html5/thumbnails/18.jpg)
Estimate how terms contribute to relevance: how do things like tf, df, and document length influence your judgments about document relevance? (One answer is the Okapi formulae (S. Robertson).)

Combine these to find the document relevance probability, then order documents by decreasing probability.
![Page 19: probabilistic ranking](https://reader031.vdocuments.mx/reader031/viewer/2022013100/548b085db47959606e8b4766/html5/thumbnails/19.jpg)
Basic concept:

"For a given query, if we know some documents that are relevant, terms that occur in those documents should be given greater weighting in searching for other relevant documents. By making assumptions about the distribution of terms and applying Bayes' Theorem, it is possible to derive weights theoretically."

(van Rijsbergen)
![Page 20: probabilistic ranking](https://reader031.vdocuments.mx/reader031/viewer/2022013100/548b085db47959606e8b4766/html5/thumbnails/20.jpg)
Traditionally used in conjunction with the PRP.

"Binary" = Boolean: documents are represented as binary incidence vectors of terms (cf. lecture 1): $\vec{x} = (x_1, \ldots, x_n)$, with $x_i = 1$ iff term i is present in document x.

"Independence": terms occur in documents independently. Different documents can be modeled as the same vector. This is the Bernoulli Naive Bayes model (cf. text categorization!).
![Page 21: probabilistic ranking](https://reader031.vdocuments.mx/reader031/viewer/2022013100/548b085db47959606e8b4766/html5/thumbnails/21.jpg)
Queries: binary term incidence vectors. Given a query q, for each document d we need to compute p(R|q,d). We replace this with computing p(R|q,x), where x is the binary term incidence vector representing d. We are interested only in the ranking, so we will use odds and Bayes' Rule:

$$O(R \mid q, \vec{x}) = \frac{p(R \mid q, \vec{x})}{p(NR \mid q, \vec{x})}
= \frac{\dfrac{p(R \mid q)\, p(\vec{x} \mid R, q)}{p(\vec{x} \mid q)}}{\dfrac{p(NR \mid q)\, p(\vec{x} \mid NR, q)}{p(\vec{x} \mid q)}}$$
![Page 22: probabilistic ranking](https://reader031.vdocuments.mx/reader031/viewer/2022013100/548b085db47959606e8b4766/html5/thumbnails/22.jpg)
$$O(R \mid q, \vec{x}) = \frac{p(R \mid q)}{p(NR \mid q)} \cdot \frac{p(\vec{x} \mid R, q)}{p(\vec{x} \mid NR, q)}$$

The first factor is constant for a given query; the second needs estimation. Using the independence assumption:

$$\frac{p(\vec{x} \mid R, q)}{p(\vec{x} \mid NR, q)} = \prod_{i=1}^{n} \frac{p(x_i \mid R, q)}{p(x_i \mid NR, q)}$$

So:

$$O(R \mid q, \vec{x}) = O(R \mid q) \cdot \prod_{i=1}^{n} \frac{p(x_i \mid R, q)}{p(x_i \mid NR, q)}$$
![Page 23: probabilistic ranking](https://reader031.vdocuments.mx/reader031/viewer/2022013100/548b085db47959606e8b4766/html5/thumbnails/23.jpg)
$$O(R \mid q, \vec{x}) = O(R \mid q) \cdot \prod_{i=1}^{n} \frac{p(x_i \mid R, q)}{p(x_i \mid NR, q)}$$

Since each x_i is either 0 or 1:

$$O(R \mid q, \vec{x}) = O(R \mid q) \cdot \prod_{x_i = 1} \frac{p(x_i = 1 \mid R, q)}{p(x_i = 1 \mid NR, q)} \cdot \prod_{x_i = 0} \frac{p(x_i = 0 \mid R, q)}{p(x_i = 0 \mid NR, q)}$$

Let $p_i = p(x_i = 1 \mid R, q)$ and $r_i = p(x_i = 1 \mid NR, q)$.

Assume, for all terms not occurring in the query (q_i = 0), that p_i = r_i. (This can be changed, e.g., in relevance feedback.) Then...
![Page 24: probabilistic ranking](https://reader031.vdocuments.mx/reader031/viewer/2022013100/548b085db47959606e8b4766/html5/thumbnails/24.jpg)
$$O(R \mid q, \vec{x}) = O(R \mid q) \cdot \underbrace{\prod_{x_i = q_i = 1} \frac{p_i}{r_i}}_{\text{all matching terms}} \cdot \underbrace{\prod_{x_i = 0,\; q_i = 1} \frac{1 - p_i}{1 - r_i}}_{\text{non-matching query terms}}$$

$$O(R \mid q, \vec{x}) = O(R \mid q) \cdot \underbrace{\prod_{x_i = q_i = 1} \frac{p_i (1 - r_i)}{r_i (1 - p_i)}}_{\text{all matching terms}} \cdot \underbrace{\prod_{q_i = 1} \frac{1 - p_i}{1 - r_i}}_{\text{all query terms}}$$
![Page 25: probabilistic ranking](https://reader031.vdocuments.mx/reader031/viewer/2022013100/548b085db47959606e8b4766/html5/thumbnails/25.jpg)
$$O(R \mid q, \vec{x}) = O(R \mid q) \cdot \prod_{x_i = q_i = 1} \frac{p_i (1 - r_i)}{r_i (1 - p_i)} \cdot \prod_{q_i = 1} \frac{1 - p_i}{1 - r_i}$$

The last product is constant for each query, so the first product is the only quantity to be estimated for rankings. Retrieval Status Value:

$$RSV = \log \prod_{x_i = q_i = 1} \frac{p_i (1 - r_i)}{r_i (1 - p_i)} = \sum_{x_i = q_i = 1} \log \frac{p_i (1 - r_i)}{r_i (1 - p_i)}$$
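A small Python sketch of the RSV computation, assuming per-term estimates p_i and r_i are already available (the values below are illustrative):

```python
import math

# Per-term BIM statistics for the terms shared by query and document:
# p_i = p(x_i = 1 | R, q), r_i = p(x_i = 1 | NR, q). Values illustrative.
matching_terms = [
    (0.6, 0.10),  # (p_i, r_i)
    (0.3, 0.05),
]

# RSV = sum over matching terms of log [ p_i (1 - r_i) / (r_i (1 - p_i)) ]
rsv = sum(math.log(p * (1 - r) / (r * (1 - p))) for p, r in matching_terms)
print(rsv)
```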
![Page 26: probabilistic ranking](https://reader031.vdocuments.mx/reader031/viewer/2022013100/548b085db47959606e8b4766/html5/thumbnails/26.jpg)
Estimating the RSV coefficients: for each term i, look at this table of document counts:

| Documents | Relevant | Non-relevant | Total |
|---|---|---|---|
| x_i = 1 | s | n - s | n |
| x_i = 0 | S - s | N - n - S + s | N - n |
| Total | S | N - S | N |

Estimates:

$$p_i \approx \frac{s}{S}, \qquad r_i \approx \frac{n - s}{N - S}$$

$$c_i = K(N, n, S, s) = \log \frac{s / (S - s)}{(n - s) / (N - n - S + s)}$$

For now, assume no zero terms.
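As a sketch, the c_i weight can be computed directly from the table's cells. The slide assumes no zero terms; the 0.5 added to each cell below is a common smoothing fix, not part of the slide's formula:

```python
import math

def c_i(N, n, S, s, smooth=0.5):
    """Term weight c_i = K(N, n, S, s) from the contingency table above.

    N: collection size, n: docs containing term i,
    S: relevant docs, s: relevant docs containing term i.
    `smooth` adds 0.5 to each cell (a common fix for zero counts;
    the slide instead assumes no zero terms).
    """
    return math.log(((s + smooth) / (S - s + smooth)) /
                    ((n - s + smooth) / (N - n - S + s + smooth)))

print(c_i(N=1000, n=50, S=10, s=4))  # ~2.64 with these illustrative counts
```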
![Page 27: probabilistic ranking](https://reader031.vdocuments.mx/reader031/viewer/2022013100/548b085db47959606e8b4766/html5/thumbnails/27.jpg)
If non-relevant documents are approximated by the whole collection, then $r_i$ (the probability of occurrence in non-relevant documents for the query) is $n/N$, and $\log \frac{1 - r_i}{r_i} = \log \frac{N - n}{n} \approx \log \frac{N}{n}$ = IDF!

$p_i$ (the probability of occurrence in relevant documents) can be estimated in various ways:
- from relevant documents, if we know some (relevance weighting can be used in a feedback loop);
- as a constant (Croft and Harper combination match): then we just get idf weighting of terms;
- proportional to the probability of occurrence in the collection, or, more accurately, to the log of this (Greiff, SIGIR 1998).
![Page 28: probabilistic ranking](https://reader031.vdocuments.mx/reader031/viewer/2022013100/548b085db47959606e8b4766/html5/thumbnails/28.jpg)
1. Assume that p_i is constant over all x_i in the query: p_i = 0.5 (even odds) for any given document.
2. Determine a guess of the relevant document set: V is a fixed-size set of the highest-ranked documents on this model (note: now a bit like tf.idf!).
3. Improve our guesses for p_i and r_i using the distribution of x_i in the documents in V. Let V_i be the set of documents in V containing x_i:
   - p_i = |V_i| / |V|
   - Assume that documents not retrieved are not relevant: r_i = (n_i - |V_i|) / (N - |V|)
4. Go to step 2 until the ranking converges, then return it.
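A rough Python sketch of this iteration, assuming simple in-memory structures (`docs`, `df`) and adding 0.5-style smoothing to keep the logarithms finite, which the slide itself does not spell out:

```python
import math

def bim_iterate(docs, query_terms, df, N, v_size=10, iters=5):
    """Sketch of the iteration above, with no relevance information.

    docs: {doc_id: set of terms}, query_terms: set of terms,
    df: {term: document frequency n_i}, N: collection size.
    """
    p = {t: 0.5 for t in query_terms}               # step 1: p_i = 0.5
    r = {t: (df[t] + 0.5) / (N + 1) for t in query_terms}  # smoothed n_i / N
    ranking = []
    for _ in range(iters):
        def rsv(d):
            return sum(math.log(p[t] * (1 - r[t]) / (r[t] * (1 - p[t])))
                       for t in query_terms & docs[d])
        ranking = sorted(docs, key=rsv, reverse=True)
        V = ranking[:v_size]                        # step 2: guess relevant set
        for t in query_terms:                       # step 3: re-estimate
            Vi = sum(1 for d in V if t in docs[d])
            p[t] = (Vi + 0.5) / (len(V) + 1)        # smoothed |V_i| / |V|
            r[t] = (df[t] - Vi + 0.5) / (N - len(V) + 1)
    return ranking                                  # step 4: after convergence
```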
![Page 29: probabilistic ranking](https://reader031.vdocuments.mx/reader031/viewer/2022013100/548b085db47959606e8b4766/html5/thumbnails/29.jpg)
- Guess a preliminary probabilistic description of R and use it to retrieve a first set of documents V, as above.
- Interact with the user to refine the description: learn some definite members of R and NR.
- Re-estimate p_i and r_i on the basis of these. Or combine the new information with the original guess (using a Bayesian prior):

$$p_i^{(2)} = \frac{|V_i| + \kappa\, p_i^{(1)}}{|V| + \kappa}$$

where κ is the prior weight.

- Repeat, thus generating a succession of approximations to R.
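A one-function sketch of this Bayesian update; κ and the counts are illustrative:

```python
def update_p(V_size, Vi_size, p_prev, kappa=5.0):
    """p_i^(2) = (|V_i| + kappa * p_i^(1)) / (|V| + kappa); kappa is the prior weight."""
    return (Vi_size + kappa * p_prev) / (V_size + kappa)

# Raw evidence says 4/10 = 0.4; the prior p_i^(1) = 0.5 pulls it back up.
print(update_p(V_size=10, Vi_size=4, p_prev=0.5))  # ~0.433
```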
![Page 30: probabilistic ranking](https://reader031.vdocuments.mx/reader031/viewer/2022013100/548b085db47959606e8b4766/html5/thumbnails/30.jpg)
Getting reasonable approximations of probabilities is possible, but it requires restrictive assumptions:
- term independence;
- terms not in the query don't affect the outcome;
- a Boolean representation of documents/queries/relevance;
- document relevance values are independent.

Some of these assumptions can be removed. Problem: we either require partial relevance information or can only derive somewhat inferior term weights.
![Page 31: probabilistic ranking](https://reader031.vdocuments.mx/reader031/viewer/2022013100/548b085db47959606e8b4766/html5/thumbnails/31.jpg)
- In general, index terms aren't independent.
- Dependencies can be complex.
- van Rijsbergen (1979) proposed a model of simple tree dependencies: exactly Friedman and Goldszmidt's Tree Augmented Naive Bayes (AAAI 13, 1996).
- Each term is dependent on one other term.
- In the 1970s, estimation problems held back the success of this model.
![Page 32: probabilistic ranking](https://reader031.vdocuments.mx/reader031/viewer/2022013100/548b085db47959606e8b4766/html5/thumbnails/32.jpg)
What is a Bayesian network?
- A directed acyclic graph.
- Nodes: events or variables, which assume values; for our purposes, all Boolean.
- Links: model direct dependencies between nodes.
![Page 33: probabilistic ranking](https://reader031.vdocuments.mx/reader031/viewer/2022013100/548b085db47959606e8b4766/html5/thumbnails/33.jpg)
[Figure: a two-parent network, a → c ← b, with priors p(a) and p(b) at the roots and a conditional table p(c|a,b) for all values of a, b, c; the links into c model conditional dependence.]

Bayesian networks model causal relations between events. Inference in Bayesian nets: given probability distributions for the roots and the conditional probabilities, we can compute the a priori probability of any instance; fixing assumptions (e.g., b was observed) causes recomputation of the probabilities.

For more information see: R.G. Cowell, A.P. Dawid, S.L. Lauritzen, and D.J. Spiegelhalter. 1999. Probabilistic Networks and Expert Systems. Springer Verlag. J. Pearl. 1988. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann.
![Page 34: probabilistic ranking](https://reader031.vdocuments.mx/reader031/viewer/2022013100/548b085db47959606e8b4766/html5/thumbnails/34.jpg)
[Figure: toy Bayesian network. Nodes: Finals (f), Project Due (d), No Sleep (n), Gloom (g), Triple Latte (t); links f → n, f → g, d → g, g → t. Conditional probability tables:]

- $P(f) = 0.3$, $P(d) = 0.4$
- $P(g \mid f, d) = 0.99$, $P(g \mid f, \bar d) = 0.9$, $P(g \mid \bar f, d) = 0.8$, $P(g \mid \bar f, \bar d) = 0.3$
- $P(n \mid f) = 0.9$, $P(n \mid \bar f) = 0.3$
- $P(t \mid g) = 0.99$, $P(t \mid \bar g) = 0.1$
![Page 35: probabilistic ranking](https://reader031.vdocuments.mx/reader031/viewer/2022013100/548b085db47959606e8b4766/html5/thumbnails/35.jpg)
- Independence assumption: P(t | g, f) = P(t | g)
- Joint probability: P(f, d, n, g, t) = P(f) P(d) P(n | f) P(g | f, d) P(t | g)

(for the Finals / Project Due / No Sleep / Gloom / Triple Latte network above)
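A small Python sketch that encodes the CPTs from the previous slide and evaluates the factored joint; the sanity checks (the joint sums to 1, and a marginal computed by brute-force enumeration) are my additions:

```python
import itertools

# CPTs from the toy network on the previous slide.
p_f, p_d = 0.3, 0.4
p_n = {True: 0.9, False: 0.3}                    # P(n=1 | f)
p_g = {(True, True): 0.99, (True, False): 0.9,
       (False, True): 0.8, (False, False): 0.3}  # P(g=1 | f, d)
p_t = {True: 0.99, False: 0.1}                   # P(t=1 | g)

def bern(p, v):
    """P(X = v) for a Bernoulli variable with P(X = 1) = p."""
    return p if v else 1 - p

def joint(f, d, n, g, t):
    """P(f, d, n, g, t) = P(f) P(d) P(n|f) P(g|f,d) P(t|g)."""
    return (bern(p_f, f) * bern(p_d, d) * bern(p_n[f], n)
            * bern(p_g[(f, d)], g) * bern(p_t[g], t))

# Sanity check: the joint sums to 1 over all 2^5 assignments.
print(sum(joint(*v) for v in itertools.product([True, False], repeat=5)))

# Marginal P(g=1) by enumeration over the other variables.
print(sum(joint(f, d, n, True, t)
          for f, d, n, t in itertools.product([True, False], repeat=4)))
```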
![Page 36: probabilistic ranking](https://reader031.vdocuments.mx/reader031/viewer/2022013100/548b085db47959606e8b4766/html5/thumbnails/36.jpg)
Goal: given a user's information need (evidence), find the probability that a document satisfies the need.

Retrieval model:
- model documents in a document network;
- model the information need in a query network.
![Page 37: probabilistic ranking](https://reader031.vdocuments.mx/reader031/viewer/2022013100/548b085db47959606e8b4766/html5/thumbnails/37.jpg)
[Figure: an inference network for retrieval, in two halves. The document network (large, but computed once per document collection) contains document nodes d_1, d_2, …, d_n, document-representation nodes t_1, t_2, …, t_n, and "concept" nodes r_1, r_2, r_3, …, r_k. The query network (small, computed once for every query) contains query-concept nodes c_1, c_2, …, c_m, high-level concept nodes q_1, q_2, and the goal node I.]

- d_i: documents
- t_i: document representations
- r_i: "concepts"
- c_i: query concepts
- q_i: high-level concepts
- I: goal node
![Page 38: probabilistic ranking](https://reader031.vdocuments.mx/reader031/viewer/2022013100/548b085db47959606e8b4766/html5/thumbnails/38.jpg)
1. Construct the Document Network (once!).
2. For each query:
   - construct the best Query Network and attach it to the Document Network;
   - find the subset of d_i's which maximizes the probability value of node I (the best subset);
   - retrieve these d_i's as the answer to the query.
![Page 39: probabilistic ranking](https://reader031.vdocuments.mx/reader031/viewer/2022013100/548b085db47959606e8b4766/html5/thumbnails/39.jpg)
[Figure: an example network. Document network: documents d_1, d_2 → terms/concepts r_1, r_2, r_3. Query network: concepts c_1, c_2, c_3 → query operators (AND/OR/NOT) q_1, q_2 → information need i.]
![Page 40: probabilistic ranking](https://reader031.vdocuments.mx/reader031/viewer/2022013100/548b085db47959606e8b4766/html5/thumbnails/40.jpg)
- Prior document probability: P(d) = 1/n.
- P(r|d): within-document term frequency; tf × idf based.
- P(c|r): 1-to-1 thesaurus.
- P(q|c): canonical forms of query operators; always use things like AND and NOT, and never store a full CPT (conditional probability table).
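A sketch of such canonical operator forms, assuming independent parents with beliefs b_i (the usual closed forms used in inference-network IR, rather than the exact tables of any particular system):

```python
from functools import reduce

# Canonical link matrices for query operators: closed forms instead of
# full CPTs, given parent beliefs b_i = P(parent_i = true).

def op_and(beliefs):
    """AND node: belief = product of parent beliefs."""
    return reduce(lambda a, b: a * b, beliefs, 1.0)

def op_or(beliefs):
    """OR node: belief = 1 - product of (1 - b_i)."""
    return 1.0 - reduce(lambda a, b: a * (1.0 - b), beliefs, 1.0)

def op_not(belief):
    """NOT node: belief = 1 - b."""
    return 1.0 - belief

# e.g. (c1 OR c2) AND (NOT c3) with concept beliefs 0.8, 0.5, 0.2:
print(op_and([op_or([0.8, 0.5]), op_not(0.2)]))  # 0.9 * 0.8 = 0.72
```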
![Page 41: probabilistic ranking](https://reader031.vdocuments.mx/reader031/viewer/2022013100/548b085db47959606e8b4766/html5/thumbnails/41.jpg)
[Figure: an example network. Document network: the documents Hamlet and Macbeth link to the terms "reason", "double", and "trouble". Query network: the query concepts "reason", "two", and "trouble" feed OR and NOT operators, which feed the user query node.]
![Page 42: probabilistic ranking](https://reader031.vdocuments.mx/reader031/viewer/2022013100/548b085db47959606e8b4766/html5/thumbnails/42.jpg)
- Prior probabilities don't have to be 1/n.
- The "user information need" doesn't have to be a query: it can be words typed, documents read, any combination …
- Phrases, inter-document links.
- Link matrices can be modified over time: user feedback, and the promise of "personalization".
![Page 43: probabilistic ranking](https://reader031.vdocuments.mx/reader031/viewer/2022013100/548b085db47959606e8b4766/html5/thumbnails/43.jpg)
- The document network is built at indexing time.
- The query network is built/scored at query time.
- Representation: link matrices from documents to any single term are like the postings entry for that term.
- Canonical link matrices are efficient to store and compute.
- Attach evidence only at the roots of the network; then we can do a single pass from roots to leaves.
![Page 44: probabilistic ranking](https://reader031.vdocuments.mx/reader031/viewer/2022013100/548b085db47959606e8b4766/html5/thumbnails/44.jpg)
All sources served by Google!