Introduction to Probability

Presentation for a reading session of Computer Vision: Models, Learning, and Inference
CHAPTER 2: INTRODUCTION TO PROBABILITY
COMPUTER VISION: MODELS, LEARNING AND INFERENCE
Lukas Tencer
Random variables
Computer vision: models, learning and inference. ©2011 Simon J.D. Prince
A random variable x denotes a quantity that is uncertain.

It may be the result of an experiment (flipping a coin) or a real-world measurement (measuring temperature).

If we observe several instances of x, we get different values. Some values occur more often than others, and this information is captured by a probability distribution.
Discrete Random Variables
Ordered values: visualized as a histogram
Unordered values: visualized as a Hinton diagram
Continuous Random Variable
The probability density function (PDF) describes the distribution of a continuous random variable; it integrates to 1: ∫ Pr(x) dx = 1.
Joint Probability
Consider two random variables x and y.

If we observe multiple paired instances, then some combinations of outcomes are more likely than others. This is captured in the joint probability distribution, written as Pr(x,y).

We can read Pr(x,y) as "the probability of x and y".
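As a small concrete sketch (the numbers below are invented for illustration and are not from the book), a discrete joint distribution can be stored as a table whose entries sum to 1:

```python
import numpy as np

# Hypothetical joint distribution Pr(x, y) for x in {0, 1, 2}, y in {0, 1}.
# Rows index x, columns index y; entries are made up for illustration.
joint = np.array([
    [0.10, 0.20],
    [0.15, 0.25],
    [0.05, 0.25],
])

# A valid joint distribution sums to 1 over all outcome pairs.
assert np.isclose(joint.sum(), 1.0)

# Pr(x=1, y=1): the probability of x and y taking these values together.
print(joint[1, 1])  # 0.25
```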
Marginalization
We can recover the probability distribution of any variable in a joint distribution by integrating (or summing) over the other variables:

Pr(x) = ∫ Pr(x,y) dy    (continuous case)
Pr(x) = Σ_y Pr(x,y)     (discrete case)
This works in higher dimensions as well: marginalizing leaves the joint distribution between whatever variables remain.
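For a discrete joint stored as a table, marginalization is just a sum over the axis being removed. A minimal sketch (the joint values are assumed for illustration):

```python
import numpy as np

# Hypothetical joint Pr(x, y): rows index x, columns index y.
joint = np.array([
    [0.10, 0.20],
    [0.15, 0.25],
    [0.05, 0.25],
])

# Marginalize out y to recover Pr(x), and marginalize out x to recover Pr(y).
pr_x = joint.sum(axis=1)  # sum over y -> [0.30, 0.40, 0.30]
pr_y = joint.sum(axis=0)  # sum over x -> [0.30, 0.70]

# Each marginal is itself a valid distribution.
assert np.isclose(pr_x.sum(), 1.0) and np.isclose(pr_y.sum(), 1.0)
```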
Conditional Probability
The conditional probability of x given that y = y1 is the relative propensity of variable x to take different outcomes given that y is fixed to equal y1.

It is written as Pr(x|y=y1).
Conditional probability can be extracted from the joint probability: take the appropriate slice and normalize it (because the slice alone does not sum to 1):

Pr(x|y=y1) = Pr(x, y=y1) / Pr(y=y1)
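The slice-and-normalize step can be sketched directly on a discrete joint table (the joint values below are assumed for illustration):

```python
import numpy as np

# Hypothetical joint Pr(x, y): rows index x, columns index y.
joint = np.array([
    [0.10, 0.20],
    [0.15, 0.25],
    [0.05, 0.25],
])

# Condition on y = 1: take the appropriate slice of the joint...
slice_y1 = joint[:, 1]  # [0.20, 0.25, 0.25], which sums to 0.70, not 1
# ...and normalize, because the slice alone does not sum to 1.
pr_x_given_y1 = slice_y1 / slice_y1.sum()

assert np.isclose(pr_x_given_y1.sum(), 1.0)
print(pr_x_given_y1)  # proportional to [0.20, 0.25, 0.25], rescaled to sum to 1
```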
More usually this is written in the compact form

Pr(x|y) = Pr(x,y) / Pr(y)

which can be re-arranged to give

Pr(x,y) = Pr(x|y) Pr(y)
This idea can be extended to more than two variables, e.g. Pr(w,x,y,z) = Pr(w|x,y,z) Pr(x|y,z) Pr(y|z) Pr(z).
Bayes’ Rule
From before:

Pr(x,y) = Pr(x|y) Pr(y)
Pr(x,y) = Pr(y|x) Pr(x)

Combining:

Pr(y|x) Pr(x) = Pr(x|y) Pr(y)

Re-arranging gives Bayes' rule:

Pr(y|x) = Pr(x|y) Pr(y) / Pr(x)
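The rule can be sketched numerically for a discrete example (the prior and likelihood values are invented for this sketch, not from the book):

```python
import numpy as np

# Hypothetical example: y in {0, 1}, x in {0, 1, 2}.
prior = np.array([0.6, 0.4])        # Pr(y)
likelihood = np.array([             # Pr(x | y): rows index y, each row sums to 1
    [0.7, 0.2, 0.1],
    [0.1, 0.3, 0.6],
])

x_observed = 2
# Evidence Pr(x): marginalize over y, i.e. sum_y Pr(x|y) Pr(y).
evidence = likelihood[:, x_observed] @ prior
# Bayes' rule: Pr(y|x) = Pr(x|y) Pr(y) / Pr(x).
posterior = likelihood[:, x_observed] * prior / evidence

assert np.isclose(posterior.sum(), 1.0)
print(posterior)  # ≈ [0.2, 0.8]
```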
Bayes’ Rule Terminology
In Pr(y|x) = Pr(x|y) Pr(y) / Pr(x):

Posterior Pr(y|x) – what we know about y after seeing x
Prior Pr(y) – what we know about y before seeing x
Likelihood Pr(x|y) – the propensity for observing a certain value of x given a certain value of y
Evidence Pr(x) – a constant that ensures the left-hand side is a valid distribution
Independence
If two variables x and y are independent, then variable x tells us nothing about variable y (and vice versa):

Pr(x|y) = Pr(x)
Pr(y|x) = Pr(y)
When variables are independent, the joint distribution factorizes into a product of the marginals:

Pr(x,y) = Pr(x) Pr(y)
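To make the factorization concrete, here is a small sketch (the marginals are invented for illustration): a joint built as the outer product of its marginals is exactly the independent case, and conditioning on y then just returns the marginal of x.

```python
import numpy as np

# Hypothetical marginals, assumed for this sketch.
pr_x = np.array([0.3, 0.4, 0.3])
pr_y = np.array([0.3, 0.7])

# Under independence the joint is the outer product Pr(x) Pr(y).
joint_indep = np.outer(pr_x, pr_y)
assert np.isclose(joint_indep.sum(), 1.0)

# Conditioning on any value of y recovers the marginal Pr(x):
# observing y tells us nothing about x.
pr_x_given_y0 = joint_indep[:, 0] / joint_indep[:, 0].sum()
assert np.allclose(pr_x_given_y0, pr_x)
```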
Expectation
The expectation tells us the expected or average value of some function f[x], taking into account the distribution of x.

Definition: E[f[x]] = Σ_x f[x] Pr(x) in the discrete case, or E[f[x]] = ∫ f[x] Pr(x) dx in the continuous case.
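The discrete definition can be sketched directly (the distribution values are assumed for illustration):

```python
import numpy as np

# Hypothetical discrete distribution over the values of x.
values = np.array([0.0, 1.0, 2.0])
pr_x = np.array([0.3, 0.4, 0.3])

def expectation(f, values, probs):
    """E[f[x]] = sum_x f(x) Pr(x) for a discrete distribution."""
    return np.sum(f(values) * probs)

mean = expectation(lambda x: x, values, pr_x)       # E[x] = 1.0
second = expectation(lambda x: x**2, values, pr_x)  # E[x^2] = 1.6
print(mean, second)
```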
Definition in two dimensions: E[f[x,y]] = ∫∫ f[x,y] Pr(x,y) dx dy.
Expectation: Common Cases
Common cases include the mean E[x], the k-th moment about zero E[x^k], the variance E[(x − μ)^2], and the covariance E[(x − μ_x)(y − μ_y)].
Expectation: Rules
Rule 1: The expected value of a constant is the constant:

E[k] = k
Rule 2: The expected value of a constant times a function is the constant times the expected value of the function:

E[k f[x]] = k E[f[x]]
Rule 3: The expectation of a sum of functions is the sum of the expectations of the functions:

E[f[x] + g[x]] = E[f[x]] + E[g[x]]
Rule 4: The expectation of a product of functions in variables x and y is the product of the expectations of the functions, provided x and y are independent:

E[f[x] g[y]] = E[f[x]] E[g[y]]
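All four rules can be checked numerically on small discrete distributions (the distributions and the functions f, g below are invented for this sketch):

```python
import numpy as np

# Hypothetical discrete distributions for independent variables x and y.
xs, pr_x = np.array([0.0, 1.0, 2.0]), np.array([0.3, 0.4, 0.3])
ys, pr_y = np.array([1.0, 3.0]), np.array([0.5, 0.5])

def E(vals, probs):
    """Discrete expectation: sum of values weighted by probabilities."""
    return np.sum(vals * probs)

k, f, g = 5.0, lambda x: x**2, lambda y: y + 1

# Rule 1: E[k] = k (expectation of a constant function).
assert np.isclose(E(np.full_like(xs, k), pr_x), k)
# Rule 2: E[k f[x]] = k E[f[x]].
assert np.isclose(E(k * f(xs), pr_x), k * E(f(xs), pr_x))
# Rule 3: E[f[x] + g[x]] = E[f[x]] + E[g[x]].
assert np.isclose(E(f(xs) + g(xs), pr_x), E(f(xs), pr_x) + E(g(xs), pr_x))
# Rule 4: x and y independent, so the joint is the outer product of the
# marginals and E[f[x] g[y]] = E[f[x]] E[g[y]].
joint = np.outer(pr_x, pr_y)
lhs = np.sum(np.outer(f(xs), g(ys)) * joint)
assert np.isclose(lhs, E(f(xs), pr_x) * E(g(ys), pr_y))
print("all four rules hold")
```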
Conclusions

• The rules of probability are compact and simple
• The concepts of marginalization, joint and conditional probability, Bayes' rule, and expectation underpin all of the models in this book
• One remaining concept – conditional expectation – is discussed later
Based on:
Computer vision: models, learning and inference. ©2011 Simon J.D. Prince
http://www.computervisionmodels.com/

Thank you for your attention
Additional: Calculating the k-th moment

The k-th moment about zero is E[x^k] = Σ_x x^k Pr(x).
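The k-th moment computation can be sketched for a discrete distribution (the distribution values are assumed for illustration):

```python
import numpy as np

# Hypothetical discrete distribution, assumed for this sketch.
values = np.array([0.0, 1.0, 2.0])
probs = np.array([0.3, 0.4, 0.3])

def moment(k, values, probs):
    """k-th moment about zero: E[x^k] = sum_x x^k Pr(x)."""
    return np.sum(values**k * probs)

print(moment(1, values, probs))  # first moment (mean): 1.0
print(moment(2, values, probs))  # second moment E[x^2]: 1.6
```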