
Page 1: Information Theory

1. Entropy as an Information Measure
   - Discrete variable definition
   - Relationship to Code Length
   - Continuous variable: Differential Entropy
2. Maximum Entropy
3. Conditional Entropy
4. Kullback-Leibler Divergence (Relative Entropy)
5. Mutual Information

Page 2: Information Measure

• How much information is received when we observe a specific value of a discrete random variable x?

• The amount of information is the degree of surprise
  – A certain event conveys no information
  – More information is received when the event is unlikely

• The information content depends on the probability distribution p(x); we seek a quantity h(x)
• If there are two unrelated (independent) events x and y, we want h(x,y) = h(x) + h(y)
• This leads to the choice h(x) = −log2 p(x)
  – The negative sign ensures that the information measure is non-negative

• The average amount of information transmitted is the expectation with respect to p(x), referred to as the entropy:
  H[x] = −∑_x p(x) log2 p(x)
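As a quick illustration of these definitions (not part of the original slides; the probability values and helper names are made up for the example), the Python sketch below computes h(x) = −log2 p(x), checks the additivity property for independent events, and evaluates the entropy as the expectation of h(x):

```python
import math

def h(p):
    """Information content (surprise) of an event with probability p, in bits."""
    return -math.log2(p)

# A certain event carries no information; unlikely events carry more.
print(h(1.0))    # 0.0 bits
print(h(0.5))    # 1.0 bit
print(h(0.125))  # 3.0 bits

# Additivity for independent events x and y: h(x, y) = h(x) + h(y),
# because p(x, y) = p(x) * p(y) and the log turns the product into a sum.
px, py = 0.5, 0.25
assert math.isclose(h(px * py), h(px) + h(py))

def entropy_bits(probs):
    """H[x] = -sum_x p(x) log2 p(x): the expectation of h(x) under p(x)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy_bits([0.5, 0.25, 0.125, 0.125]))  # 1.75 bits
```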


Page 3: Usefulness of Entropy

• Uniform distribution
  – Random variable x has 8 possible states, each equally likely
  – We would need 3 bits to transmit the state
  – Also, H(x) = −8 × (1/8) log2(1/8) = 3 bits

• Non-uniform distribution
  – If x has 8 states with probabilities (1/2, 1/4, 1/8, 1/16, 1/64, 1/64, 1/64, 1/64), then H(x) = 2 bits

• The non-uniform distribution has smaller entropy than the uniform one
• Entropy has an interpretation in terms of disorder
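A minimal numerical check of the two examples above (illustrative only; `entropy_bits` is just a convenient name, not from the slides):

```python
import math

def entropy_bits(probs):
    """H(x) = -sum_i p_i log2 p_i, with the convention 0 * log 0 = 0."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

uniform = [1/8] * 8
nonuniform = [1/2, 1/4, 1/8, 1/16, 1/64, 1/64, 1/64, 1/64]

print(entropy_bits(uniform))     # 3.0 bits -- matches the 3-bit fixed-length code
print(entropy_bits(nonuniform))  # 2.0 bits -- smaller entropy for the less uniform distribution
```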


Page 4: Relationship of Entropy to Code Length

• Take advantage of the non-uniform distribution to use shorter codes for more probable events
• If x has 8 states (a, b, c, d, e, f, g, h) with probabilities (1/2, 1/4, 1/8, 1/16, 1/64, 1/64, 1/64, 1/64), we can use codewords of lengths (1, 2, 3, 4, 6, 6, 6, 6), e.g. a → 0, b → 10, c → 110, d → 1110, e–h → 111100, 111101, 111110, 111111
  average code length = (1/2)·1 + (1/4)·2 + (1/8)·3 + (1/16)·4 + 4·(1/64)·6 = 2 bits

• This equals the entropy of the random variable
• A shorter code is not possible, because the code string must be unambiguously decoded into its component parts
  – For example, 11001110 is uniquely decoded as the sequence c, a, d
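The sketch below spells out one prefix code with these codeword lengths (the specific bit patterns are an assumption consistent with the decoding example above, not stated explicitly in the transcript) and checks the average length and the decoding of 11001110:

```python
import math

# One prefix code with codeword lengths 1, 2, 3, 4, 6, 6, 6, 6 (assumed assignment).
code = {'a': '0', 'b': '10', 'c': '110', 'd': '1110',
        'e': '111100', 'f': '111101', 'g': '111110', 'h': '111111'}
probs = {'a': 1/2, 'b': 1/4, 'c': 1/8, 'd': 1/16,
         'e': 1/64, 'f': 1/64, 'g': 1/64, 'h': 1/64}

# Average code length equals the entropy: 2 bits.
avg_len = sum(probs[s] * len(code[s]) for s in code)
entropy = -sum(p * math.log2(p) for p in probs.values())
print(avg_len, entropy)  # 2.0 2.0

def decode(bits):
    """Greedy prefix decoding: no codeword is a prefix of another, so this is unambiguous."""
    inverse = {v: k for k, v in code.items()}
    out, buf = [], ''
    for b in bits:
        buf += b
        if buf in inverse:
            out.append(inverse[buf])
            buf = ''
    return ''.join(out)

print(decode('11001110'))  # 'cad'
```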


Page 5: Relationship between Entropy and Shortest Coding Length

• Noiseless coding theorem of Shannon
  – The entropy is a lower bound on the number of bits needed to transmit the state of a random variable
• Natural logarithms are used when relating entropy to other topics
  – Entropy is then measured in nats instead of bits


Page 6: History of Entropy: Thermodynamics to Information Theory


• Entropy is the average amount of information needed to specify the state of a random variable.

• The concept had a much earlier origin in physics
  – Context of equilibrium thermodynamics
  – Later given a deeper interpretation as a measure of disorder (developments in statistical mechanics)


Page 7: Entropy Persona

World of Atoms
• Ludwig Eduard Boltzmann (1844-1906)
  – Created statistical mechanics
• First law: conservation of energy
  – Energy is not destroyed but converted from one form to another
• Second law: principle of decay in nature – entropy increases
  – Explains why not all energy is available to do useful work
  – Relates the macrostate to the statistical behavior of microstates

World of Bits
• Claude Shannon (1916-2001)
• Stephen Hawking (gravitational entropy)

Page 8: Entropy by Considering Objects Division

• Divide N identical objects into bins so that n_i objects are in the i-th bin, where ∑_i n_i = N
• Number of different ways of allocating the objects to the bins:
  – N ways to choose the first object, N−1 ways for the second, ..., giving N·(N−1)···2·1 = N! orderings
  – We don't distinguish between rearrangements within each bin; in the i-th bin there are n_i! ways of reordering the objects
  – Total number of ways of allocating N objects to the bins: W = N! / ∏_i n_i!
  – W is called the multiplicity (also the weight of the macrostate)
• Entropy is the scaled log of the multiplicity:
  H = (1/N) ln W = (1/N) ln N! − (1/N) ∑_i ln n_i!
• Applying Stirling's approximation ln N! ≈ N ln N − N as N → ∞ gives
  H = −lim_{N→∞} ∑_i (n_i/N) ln (n_i/N) = −∑_i p_i ln p_i
• The overall distribution, expressed as the ratios p_i = n_i/N, is called the macrostate
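To make the limit concrete (an illustration, not part of the original slides), the sketch below computes (1/N) ln W directly via log-factorials for a fixed macrostate and shows it approaching −∑ p_i ln p_i as N grows; `math.lgamma` supplies ln n! = lgamma(n+1), and the N values are chosen so the bin counts are exact integers.

```python
import math

def scaled_log_multiplicity(counts):
    """(1/N) ln W with W = N! / prod_i n_i!, using lgamma(n + 1) = ln n!."""
    N = sum(counts)
    log_W = math.lgamma(N + 1) - sum(math.lgamma(n + 1) for n in counts)
    return log_W / N

def entropy_nats(probs):
    return -sum(p * math.log(p) for p in probs if p > 0)

fractions = [0.5, 0.25, 0.125, 0.125]           # the macrostate p_i = n_i / N
print(entropy_nats(fractions))                   # limiting value: about 1.2130 nats
for N in (80, 8000, 800000):
    counts = [int(f * N) for f in fractions]     # exact integer counts for these N
    print(N, scaled_log_multiplicity(counts))    # approaches the entropy as N grows
```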

Page 9: Entropy and Histograms

• If X can take one of M values (bins, states) with p(X = x_i) = p_i, the entropy is H[p] = −∑_i p_i ln p_i
• The minimum value of the entropy is 0, attained when one of the p_i = 1 and the other p_i are 0
  – noting that lim_{p→0} p ln p = 0
• A sharply peaked distribution has low entropy
• A distribution spread more evenly will have higher entropy

[Figure: histograms over 30 bins; the broader distribution has the higher entropy value]
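A small numpy sketch of these two extremes (the bin probabilities are invented for illustration), taking care to apply the convention 0 · ln 0 = 0:

```python
import numpy as np

def entropy_nats(p):
    """H[p] = -sum_i p_i ln p_i, using the convention that 0 * ln 0 = 0."""
    p = np.asarray(p, dtype=float)
    nz = p > 0
    return float(-(p[nz] * np.log(p[nz])).sum())

M = 30
broad = np.ones(M) / M                 # spread evenly over 30 bins
peaked = np.zeros(M); peaked[0] = 1.0  # all mass in a single bin

print(entropy_nats(peaked))            # 0.0 -- the minimum
print(entropy_nats(broad))             # ln 30 ~ 3.401 -- the maximum for 30 bins
```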


Page 10: Maximum Entropy Configuration

• Found by maximizing H using a Lagrange multiplier to enforce the normalization constraint on the probabilities:
  H̃ = −∑_i p(x_i) ln p(x_i) + λ (∑_i p(x_i) − 1)
• Setting the derivative with respect to p(x_i) to zero gives p(x_i) = 1/M for all i (the uniform distribution), with entropy H = ln M
• To verify that this is a maximum, evaluate the second derivative: ∂²H̃ / (∂p(x_i) ∂p(x_j)) = −δ_ij / p(x_i), which is negative definite
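As a sanity check of the Lagrange-multiplier result (not from the slides; assumes scipy is available), one can maximize the entropy numerically under the normalization constraint and confirm that the optimum is the uniform distribution with H = ln M:

```python
import numpy as np
from scipy.optimize import minimize

M = 5

def neg_entropy(p):
    # Minimize the negative entropy; clip to keep the log finite near the boundary.
    p = np.clip(p, 1e-12, 1.0)
    return float(np.sum(p * np.log(p)))

result = minimize(
    neg_entropy,
    x0=np.random.dirichlet(np.ones(M)),  # a random starting distribution
    method="SLSQP",
    bounds=[(0.0, 1.0)] * M,
    constraints=[{"type": "eq", "fun": lambda p: p.sum() - 1.0}],
)

print(result.x)      # close to [0.2, 0.2, 0.2, 0.2, 0.2]
print(-result.fun)   # close to ln 5 ~ 1.609
```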

Page 11: Entropy with Continuous Variable

• Divide x into bins of width Δ. For each bin there must exist a value x_i such that
  ∫_{iΔ}^{(i+1)Δ} p(x) dx = p(x_i) Δ   (mean value theorem)
• This gives a discrete distribution with probabilities p(x_i)Δ and entropy
  H_Δ = −∑_i p(x_i)Δ ln(p(x_i)Δ) = −∑_i p(x_i)Δ ln p(x_i) − ln Δ
• Omit the second term −ln Δ and consider the limit Δ → 0:
  H[x] = −∫ p(x) ln p(x) dx
• Known as the differential entropy
• The discrete and continuous forms of entropy differ by the quantity ln Δ, which diverges as Δ → 0
  – Reflects the fact that specifying a continuous variable very precisely requires a large number of bits
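The relation H_Δ = −∑ p(x_i)Δ ln p(x_i) − ln Δ can be seen numerically. The sketch below (an illustration, not from the slides) bins a standard Gaussian and shows that H_Δ grows like −ln Δ, while H_Δ + ln Δ approaches the differential entropy ½ ln(2πe) ≈ 1.419 nats:

```python
import numpy as np

def binned_entropy(delta, span=10.0):
    """H_delta = -sum_i p_i ln p_i for bin masses p_i ~ p(x_i) * delta of a standard Gaussian."""
    edges = np.arange(-span, span, delta)
    centers = edges + delta / 2
    density = np.exp(-centers**2 / 2) / np.sqrt(2 * np.pi)
    p = density * delta                  # mean-value-theorem approximation of the bin mass
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

h_diff = 0.5 * np.log(2 * np.pi * np.e)  # differential entropy of N(0, 1)
for delta in (1.0, 0.1, 0.01):
    h_delta = binned_entropy(delta)
    # H_delta diverges like -ln(delta); adding ln(delta) back recovers the differential entropy.
    print(delta, h_delta, h_delta + np.log(delta), h_diff)
```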

Page 12: Entropy with Multiple Continuous Variables

• Differential entropy for multiple continuous variables (x a vector):
  H[x] = −∫ p(x) ln p(x) dx
• For what distribution is the differential entropy maximized?
  – For a discrete distribution, it is the uniform distribution
  – For a continuous variable, it is the Gaussian, as we shall see

Page 13: Entropy as a Functional

• Ordinary calculus deals with functions
• A functional is an operator that takes a function as input and returns a scalar
• A widely used functional in machine learning is the entropy H[p(x)], which is a scalar quantity
• We are interested in the maxima and minima of functionals, analogous to those for functions
  – Called the calculus of variations

Page 14: Maximizing a Functional

• Functional: a mapping from a set of functions to a real value
• For what function is it maximized?
• Example: finding the shortest curve length between two points on a sphere (a geodesic)
  – With no constraints it is a straight line
  – When constrained to lie on a surface, the solution is less obvious – there may be several
• Constraints are incorporated using a Lagrangian

Page 15: Maximizing Differential Entropy

• Assume constraints on the first and second moments of p(x) as well as normalization:
  ∫ p(x) dx = 1,  ∫ x p(x) dx = µ,  ∫ (x − µ)² p(x) dx = σ²
• Constrained maximization is performed using Lagrange multipliers. Maximize the following functional with respect to p(x):
  −∫ p(x) ln p(x) dx + λ₁(∫ p(x) dx − 1) + λ₂(∫ x p(x) dx − µ) + λ₃(∫ (x − µ)² p(x) dx − σ²)
• Using the calculus of variations, the derivative of this functional is set to zero, giving
  p(x) = exp{−1 + λ₁ + λ₂ x + λ₃ (x − µ)²}
• Back-substituting into the three constraint equations leads to the result that the distribution that maximizes the differential entropy is the Gaussian

Page 16: Differential Entropy of Gaussian

• The distribution that maximizes the differential entropy is the Gaussian
• The value of the maximum entropy is
  H[x] = ½ {1 + ln(2πσ²)}
• The entropy increases as the variance increases
• Differential entropy, unlike discrete entropy, can be negative: H[x] < 0 for σ² < 1/(2πe)
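A quick check of the closed form (illustrative; assumes scipy is available): ½{1 + ln(2πσ²)} matches scipy's differential entropy for a Gaussian and turns negative once σ² drops below 1/(2πe):

```python
import numpy as np
from scipy.stats import norm

def gaussian_entropy(sigma2):
    """Differential entropy of N(mu, sigma^2) in nats: 0.5 * (1 + ln(2*pi*sigma^2))."""
    return 0.5 * (1.0 + np.log(2 * np.pi * sigma2))

threshold = 1.0 / (2 * np.pi * np.e)   # entropy is negative for sigma^2 below this value
for sigma2 in (1.0, 0.1, threshold, 0.01):
    closed_form = gaussian_entropy(sigma2)
    from_scipy = norm(scale=np.sqrt(sigma2)).entropy()
    print(sigma2, closed_form, float(from_scipy))
```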

Page 17: Conditional Entropy

• If we have a joint distribution p(x,y), we draw pairs of values of x and y
• If the value of x is already known, the additional information needed to specify the corresponding value of y is −ln p(y|x)
• The average additional information needed to specify y is the conditional entropy
  H[y|x] = −∬ p(y,x) ln p(y|x) dy dx
• By the product rule, H[x,y] = H[x] + H[y|x]
  – where H[x,y] is the differential entropy of p(x,y) and H[x] is the differential entropy of p(x)
• The information needed to describe x and y is thus the information needed to describe x plus the additional information needed to specify y given x
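A discrete analogue of the product-rule decomposition (the joint table below is invented for illustration): compute H[x], H[y|x] and H[x,y] from a small joint distribution and verify H[x,y] = H[x] + H[y|x].

```python
import numpy as np

# An invented 2x3 joint distribution p(x, y); rows index x, columns index y.
p_xy = np.array([[0.30, 0.10, 0.10],
                 [0.05, 0.25, 0.20]])

def H(p):
    """Entropy in nats of an array of probabilities (zero entries contribute nothing)."""
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

p_x = p_xy.sum(axis=1)                       # marginal p(x)
H_joint = H(p_xy)                            # H[x, y]
H_x = H(p_x)                                 # H[x]
H_y_given_x = -sum(                          # H[y|x] = -sum_{x,y} p(x,y) ln p(y|x)
    p_xy[i, j] * np.log(p_xy[i, j] / p_x[i])
    for i in range(p_xy.shape[0]) for j in range(p_xy.shape[1])
    if p_xy[i, j] > 0
)

print(H_joint, H_x + H_y_given_x)            # equal (up to rounding): H[x,y] = H[x] + H[y|x]
```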

Page 18: Relative Entropy

• Suppose we have modeled an unknown distribution p(x) by an approximating distribution q(x)
  – i.e., q(x) is used to construct a coding scheme for transmitting values of x to a receiver
  – The average additional amount of information required to specify the value of x as a result of using q(x) instead of the true distribution p(x) is given by the relative entropy, or K-L divergence
• Important concept in Bayesian analysis
  – Entropy comes from Information Theory
  – K-L divergence, or relative entropy, comes from Pattern Recognition, since it is a distance (dissimilarity) measure

Page 19: Relative Entropy or K-L Divergence


• The additional information required as a result of using q(x) in place of p(x):
  KL(p||q) = −∫ p(x) ln{ q(x) / p(x) } dx
• K-L divergence satisfies KL(p||q) ≥ 0, with equality iff p(x) = q(x)
  – The proof involves convex functions
• It is not a symmetric quantity: KL(p||q) ≠ KL(q||p)
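A discrete sketch of these properties (the distributions are invented for illustration): KL(p||q) = ∑ p ln(p/q) is non-negative, zero only when p = q, and not symmetric.

```python
import numpy as np

def kl(p, q):
    """KL(p || q) = sum_i p_i ln(p_i / q_i) in nats (terms with p_i = 0 contribute 0)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / q[nz])).sum())

p = np.array([0.8, 0.1, 0.1])
q = np.array([1/3, 1/3, 1/3])

print(kl(p, q), kl(q, p))   # about 0.460 and 0.511: both positive, and not equal
print(kl(p, p))             # 0.0 -- equality holds iff the distributions match
```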

Page 20: Convex Function


• A function f(x) is convex if every chord lies on or above the function
  – Any value of x in the interval from x = a to x = b can be written as λa + (1 − λ)b, where 0 ≤ λ ≤ 1
  – The corresponding point on the chord is λf(a) + (1 − λ)f(b)
  – Convexity implies f(λa + (1 − λ)b) ≤ λf(a) + (1 − λ)f(b)
    (point on curve ≤ point on chord)
  – By induction, we get Jensen's inequality: f(∑_i λ_i x_i) ≤ ∑_i λ_i f(x_i) for λ_i ≥ 0 with ∑_i λ_i = 1
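A tiny numerical illustration of Jensen's inequality for the convex function f(u) = −ln u (the weights and points below are made up for the example):

```python
import math

f = lambda u: -math.log(u)     # a convex function

lam = [0.2, 0.5, 0.3]          # weights: non-negative and summing to 1
xs = [0.5, 2.0, 4.0]           # arbitrary positive points

lhs = f(sum(l * x for l, x in zip(lam, xs)))   # f(sum_i lambda_i x_i)
rhs = sum(l * f(x) for l, x in zip(lam, xs))   # sum_i lambda_i f(x_i)
print(lhs, rhs, lhs <= rhs)    # Jensen: lhs <= rhs (True)
```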

Page 21: Proof of positivity of K-L Divergence

• Applying Jensen's inequality (with the convex function −ln u) to the KL definition yields the desired result:
  KL(p||q) = −∫ p(x) ln{ q(x)/p(x) } dx ≥ −ln ∫ q(x) dx = 0


Page 22: Mutual Information

• Given the joint distribution of two sets of variables, p(x,y)
  – If they are independent, it will factorize as p(x,y) = p(x)p(y)
  – If they are not independent, how close they are to being independent is given by the KL divergence between the joint distribution and the product of the marginals:
    I[x,y] ≡ KL( p(x,y) || p(x)p(y) ) = −∬ p(x,y) ln{ p(x)p(y) / p(x,y) } dx dy
• This is called the mutual information between the variables x and y
• Mutual information satisfies I[x,y] ≥ 0, with equality iff x and y are independent


Page 23: Mutual Information

• Using the sum and product rules, I[x,y] = H[x] − H[x|y] = H[y] − H[y|x]
  – Mutual information is the reduction in uncertainty about x given the value of y (or vice versa)

• Bayesian perspective: if p(x) is the prior and p(x|y) is the posterior, the mutual information is the reduction in uncertainty about x after y is observed
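Tying the two views together (the joint table is invented for illustration): compute I[x,y] both as KL(p(x,y) || p(x)p(y)) and via the entropy identity, and check that it vanishes for an independent joint.

```python
import numpy as np

def H(p):
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def mutual_information(p_xy):
    """I[x,y] computed as KL( p(x,y) || p(x) p(y) )."""
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    nz = p_xy > 0
    return float((p_xy[nz] * np.log(p_xy[nz] / (p_x @ p_y)[nz])).sum())

# A dependent joint (invented numbers) and an independent one built from its marginals.
p_xy = np.array([[0.30, 0.10, 0.10],
                 [0.05, 0.25, 0.20]])
p_indep = np.outer(p_xy.sum(axis=1), p_xy.sum(axis=0))

# I[x,y] as reduction in uncertainty: H[x] - H[x|y] = H[x] + H[y] - H[x,y]
H_x, H_y, H_xy = H(p_xy.sum(axis=1)), H(p_xy.sum(axis=0)), H(p_xy)
print(mutual_information(p_xy), H_x + H_y - H_xy)   # equal, and > 0 for dependent x, y
print(mutual_information(p_indep))                  # 0 (up to rounding) for independent x, y
```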


Page 24: Summary

• Entropy as an Information Measure
  – Average amount of information needed to specify the state of a random variable (a measure of disorder)
  – Discrete variable definition; relationship to code length
  – Continuous variable: differential entropy
• Maximum Entropy
  – Discrete variable: uniform distribution
  – Continuous variable: Gaussian distribution
• Conditional Entropy H[x|y]
  – Average additional information needed to specify x if the value of y is known
• Kullback-Leibler Divergence (Relative Entropy) KL(p||q)
  – Additional information required as a result of using q(x) in place of p(x)
• Mutual Information I[x,y] = H[x] − H[x|y]
  – Reduction in uncertainty about x after y is observed