Introduction to Probability
Chapter 2 of Computer Vision: Models, Learning and Inference, presented by Lukas Tencer


DESCRIPTION

Presentation for a reading session of Computer Vision: Models, Learning, and Inference

TRANSCRIPT

Page 1: Introduction to Probability

CHAPTER 2: INTRODUCTION TO PROBABILITY
COMPUTER VISION: MODELS, LEARNING AND INFERENCE
Lukas Tencer

Page 2: Introduction to Probability

Random variables


A random variable x denotes a quantity that is uncertain.

It may be the result of an experiment (flipping a coin) or a real-world measurement (measuring temperature).

If we observe several instances of x we get different values.

Some values occur more often than others, and this information is captured by a probability distribution.
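A minimal sketch of this idea (illustrative only, not from the slides; numpy is assumed and the outcome probabilities 0.3 and 0.7 are made-up values): repeatedly observing a discrete random variable and counting outcomes gives an empirical estimate of its distribution.

    import numpy as np

    rng = np.random.default_rng(0)
    # A biased coin: Pr(x=0) = 0.3, Pr(x=1) = 0.7 (example values)
    samples = rng.choice([0, 1], size=1000, p=[0.3, 0.7])
    counts = np.bincount(samples, minlength=2)
    empirical = counts / counts.sum()   # relative frequencies approximate Pr(x)
    print(empirical)                    # roughly [0.3, 0.7]

The more samples we draw, the closer these relative frequencies get to the underlying distribution.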

Page 3: Introduction to Probability

Discrete Random Variables


Ordered values – visualized as a histogram
Unordered values – visualized as a Hinton diagram
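Concretely, a discrete distribution is just a set of probabilities, one per possible value of x, and these probabilities sum to one: \sum_k Pr(x = k) = 1.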

Page 4: Introduction to Probability

Continuous Random Variable


A continuous random variable is described by the probability density function (PDF) of its distribution, which integrates to 1.
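In symbols, for a continuous variable x:

Pr(x) \geq 0    and    \int Pr(x) dx = 1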

Page 5: Introduction to Probability

Joint Probability


Consider two random variables x and y.

If we observe multiple paired instances, then some combinations of outcomes are more likely than others.

This is captured in the joint probability distribution, written as Pr(x, y).

Pr(x, y) can be read as “the probability of x and y”.


Page 7: Introduction to Probability

Marginalization


We can recover the probability distribution of any variable in a joint distribution by integrating (or summing) over the other variables.
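Written out, for a joint distribution Pr(x, y) (standard forms):

Pr(x) = \int Pr(x, y) dy        (continuous y)
Pr(x) = \sum_y Pr(x, y)         (discrete y)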


Page 10: Introduction to Probability

Marginalization


Marginalization works in higher dimensions as well – it leaves a joint distribution between whatever variables are left.
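For example, with a four-variable joint, marginalizing over y and z leaves the joint distribution of the remaining variables:

Pr(w, x) = \int \int Pr(w, x, y, z) dy dz

A minimal numerical sketch of the discrete case, assuming only numpy (the table values below are arbitrary):

    import numpy as np

    # Pr(w, x, y): an arbitrary three-variable discrete joint distribution
    joint = np.random.default_rng(1).random((2, 3, 4))
    joint /= joint.sum()               # normalize so the whole table sums to 1

    pr_x = joint.sum(axis=(0, 2))      # marginalize out w and y, leaving Pr(x)
    pr_wy = joint.sum(axis=1)          # marginalize out x, leaving the joint Pr(w, y)

    print(pr_x.sum(), pr_wy.sum())     # both print 1.0 (up to floating point error)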

Page 11: Introduction to Probability

Conditional Probability


The conditional probability of x given that y = y1 is the relative propensity of variable x to take different outcomes given that y is fixed to be equal to y1.

It is written as Pr(x | y = y1).

Page 12: Introduction to Probability

Conditional Probability


Conditional probability can be extracted from the joint probability: extract the appropriate slice and normalize it (because the slice does not sum to 1).
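In symbols, the slice-and-normalize operation is (standard form):

Pr(x | y = y1) = Pr(x, y = y1) / \int Pr(x, y = y1) dx = Pr(x, y = y1) / Pr(y = y1)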

Page 13: Introduction to Probability

Conditional Probability


This is more usually written in compact form, which can be re-arranged to express the joint in terms of the conditional, as written out below.
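The compact form and its rearrangement are the standard identities:

Pr(x | y) = Pr(x, y) / Pr(y)
Pr(x, y) = Pr(x | y) Pr(y) = Pr(y | x) Pr(x)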

Page 14: Introduction to Probability

Conditional Probability


This idea can be extended to more than two variables.
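For example, with four variables the joint repeatedly factorizes into conditionals (the standard chain rule):

Pr(w, x, y, z) = Pr(w | x, y, z) Pr(x | y, z) Pr(y | z) Pr(z)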

Page 15: Introduction to Probability

Bayes’ Rule


Bayes' rule follows from the two factorizations of the joint seen on the previous slides; combining them and re-arranging gives the rule below.
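In the same notation (standard steps):

From before:    Pr(x, y) = Pr(x | y) Pr(y) = Pr(y | x) Pr(x)
Combining:      Pr(y | x) Pr(x) = Pr(x | y) Pr(y)
Re-arranging:   Pr(y | x) = Pr(x | y) Pr(y) / Pr(x)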

Page 16: Introduction to Probability

Bayes’ Rule Terminology


Posterior – what we know about y after seeing x

Prior – what we know about y before seeing x

Likelihood – propensity for observing a certain value of x given a certain value of y

Evidence – a constant to ensure that the left-hand side is a valid distribution
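These terms label the pieces of Bayes' rule; writing the evidence as a marginalization over y gives the standard form:

Pr(y | x) = Pr(x | y) Pr(y) / \int Pr(x | y) Pr(y) dy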

Page 17: Introduction to Probability

Independence


If two variables x and y are independent, then variable x tells us nothing about variable y (and vice versa).
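In terms of conditional probability, independence means:

Pr(x | y) = Pr(x)        and        Pr(y | x) = Pr(y)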


Page 19: Introduction to Probability

Independence


When variables are independent, the joint factorizes into a product of the marginals:
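Pr(x, y) = Pr(x) Pr(y)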

Page 20: Introduction to Probability

Expectation


Expectation tells us the expected or average value of some function f[x], taking into account the distribution of x.

Definition:
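E[f[x]] = \sum_x f[x] Pr(x)         (discrete x)
E[f[x]] = \int f[x] Pr(x) dx        (continuous x)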

Page 21: Introduction to Probability

Expectation



Definition in two dimensions:
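E[f[x, y]] = \int \int f[x, y] Pr(x, y) dx dy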

Page 22: Introduction to Probability

Expectation: Common Cases

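The common cases referred to here include the following standard quantities (with \mu = E[x] and \mu_y = E[y]):

Mean:                          \mu = E[x]
k-th moment about zero:        E[x^k]
k-th moment about the mean:    E[(x - \mu)^k]
Variance:                      E[(x - \mu)^2]
Covariance of x and y:         E[(x - \mu)(y - \mu_y)]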

Page 23: Introduction to Probability

Expectation: Rules


Rule 1: The expected value of a constant is the constant:
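E[k] = k        (k a constant)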

Page 24: Introduction to Probability

Expectation: Rules


Rule 2: The expected value of a constant times a function is the constant times the expected value of the function:
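E[k f[x]] = k E[f[x]]        (k a constant)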

Page 25: Introduction to Probability

Expectation: Rules


Rule 3: The expectation of a sum of functions is the sum of the expectations of the functions:
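E[f[x] + g[x]] = E[f[x]] + E[g[x]]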

Page 26: Introduction to Probability

Expectation: Rules


Rule 4: The expectation of a product of functions of variables x and y is the product of the expectations of the functions, provided x and y are independent:
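E[f[x] g[y]] = E[f[x]] E[g[y]]        (x and y independent)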

Page 27: Introduction to Probability

Conclusions

• Rules of probability are compact and simple

• Concepts of marginalization, joint and conditional probability, Bayes' rule and expectation underpin all of the models in this book

• One remaining concept – conditional expectation – is discussed later


Page 28: Introduction to Probability

Based on:

Computer vision: models, learning and inference. ©2011 Simon J.D. Prince

http://www.computervisionmodels.com/

Thank you for your attention.

Page 29: Introduction to Probability

Additional

Calculating k-th moment
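In standard notation, with mean \mu = E[x], the k-th moment about the mean is

E[(x - \mu)^k] = \int (x - \mu)^k Pr(x) dx

(for a discrete variable the integral becomes a sum), and the k-th moment about zero is E[x^k].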