Seminar at Princeton University

ABC Methods for Bayesian Model Choice
Christian P. Robert
Université Paris-Dauphine, IuF, & CREST
http://www.ceremade.dauphine.fr/~xian
Princeton University, April 03, 2012
Joint work with J.-M. Cornuet, A. Grelaud, J.-M. Marin, N. Pillai, & J. Rousseau

Upload: christian-robert

Posted on 10-May-2015

1,398 views

Category:

Education


1 download

DESCRIPTION

Dept of Economics, April 03, 2012

TRANSCRIPT

Page 1: seminar at Princeton University

ABC Methods for Bayesian Model Choice

ABC Methods for Bayesian Model Choice

Christian P. Robert

Université Paris-Dauphine, IuF, & CREST
http://www.ceremade.dauphine.fr/~xian

Princeton University, April 03, 2012
Joint work with J.-M. Cornuet, A. Grelaud,

J.-M. Marin, N. Pillai, & J. Rousseau

Page 2: seminar at Princeton University

ABC Methods for Bayesian Model Choice

Outline

Approximate Bayesian computation

ABC for model choice

Model choice consistency

Page 3: seminar at Princeton University

ABC Methods for Bayesian Model Choice

Approximate Bayesian computation

Approximate Bayesian computation

Approximate Bayesian computation
  ABC basics
  Alphabet soup
  simulation-based methods in Econometrics

ABC for model choice

Model choice consistency

Page 4: seminar at Princeton University

ABC Methods for Bayesian Model Choice

Approximate Bayesian computation

ABC basics

Regular Bayesian computation issues

When faced with a non-standard posterior distribution

π(θ|y) ∝ π(θ)L(θ|y)

the standard solution is to use simulation (Monte Carlo) to produce a sample

θ1, . . . , θT

from π(θ|y) (or approximately, by Markov chain Monte Carlo methods)

[Robert & Casella, 2004]

Page 5: seminar at Princeton University

ABC Methods for Bayesian Model Choice

Approximate Bayesian computation

ABC basics

Intractable likelihoods

Cases when the likelihood function f(y|θ) is unavailable and when the completion step

f(y|θ) = ∫_Z f(y, z|θ) dz

is impossible or too costly because of the dimension of z
© MCMC cannot be implemented!

Page 6: seminar at Princeton University

ABC Methods for Bayesian Model Choice

Approximate Bayesian computation

ABC basics

Illustration

Example

Coalescence tree: in population genetics, reconstitution of a common ancestor from a sample of genes via a phylogenetic tree that is close to impossible to integrate out [100 processor days with 4 parameters]

[Cornuet et al., 2009, Bioinformatics]

Page 7: seminar at Princeton University

ABC Methods for Bayesian Model Choice

Approximate Bayesian computation

ABC basics

The ABC method

Bayesian setting: target is π(θ)f(x|θ)

When the likelihood f(x|θ) is not in closed form, likelihood-free rejection technique:

ABC algorithm

For an observation y ∼ f(y|θ), under the prior π(θ), keep jointly simulating

θ′ ∼ π(θ) , z ∼ f(z|θ′) ,

until the auxiliary variable z is equal to the observed value, z = y.

[Tavaré et al., 1997]

Page 10: seminar at Princeton University

ABC Methods for Bayesian Model Choice

Approximate Bayesian computation

ABC basics

Why does it work?!

The proof is trivial:

f(θi) ∝ ∑_{z∈D} π(θi) f(z|θi) I_y(z)
      ∝ π(θi) f(y|θi)
      = π(θi|y) .

[Accept–Reject 101]

Page 11: seminar at Princeton University

ABC Methods for Bayesian Model Choice

Approximate Bayesian computation

ABC basics

Earlier occurrence

‘Bayesian statistics and Monte Carlo methods are ideallysuited to the task of passing many models over onedataset’

[Don Rubin, Annals of Statistics, 1984]

Note that Rubin (1984) does not promote this algorithm for likelihood-free simulation, but rather a frequentist intuition on posterior distributions: parameters from posteriors are more likely to be those that could have generated the data.

Page 12: seminar at Princeton University

ABC Methods for Bayesian Model Choice

Approximate Bayesian computation

ABC basics

A as approximative

When y is a continuous random variable, equality z = y is replaced with a tolerance condition,

ρ(y, z) ≤ ε

where ρ is a distance

Output distributed from

π(θ) Pθ{ρ(y, z) < ε} ∝ π(θ | ρ(y, z) < ε)

Page 14: seminar at Princeton University

ABC Methods for Bayesian Model Choice

Approximate Bayesian computation

ABC basics

ABC algorithm

Algorithm 1 Likelihood-free rejection sampler

for i = 1 to N do
  repeat
    generate θ′ from the prior distribution π(·)
    generate z from the likelihood f(·|θ′)
  until ρ{η(z), η(y)} ≤ ε
  set θi = θ′
end for

where η(y) defines a (possibly insufficient) statistic
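Algorithm 1 can be sketched in a few lines of Python. The normal-mean toy problem, the N(0, 10²) prior, and the tolerance below are illustrative assumptions, not taken from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)

def abc_rejection(y_obs, prior_sample, simulate, summary, dist, eps, n_keep):
    """Likelihood-free rejection sampler: keep theta' whenever the simulated
    summary falls within eps of the observed one."""
    kept = []
    while len(kept) < n_keep:
        theta = prior_sample()                       # theta' ~ pi(.)
        z = simulate(theta)                          # z ~ f(.|theta')
        if dist(summary(z), summary(y_obs)) <= eps:  # rho{eta(z), eta(y)} <= eps
            kept.append(theta)                       # set theta_i = theta'
    return np.array(kept)

y = rng.normal(2.0, 1.0, size=50)                    # pseudo-observed data, true mean 2
post = abc_rejection(
    y,
    prior_sample=lambda: rng.normal(0.0, 10.0),      # pi(theta) = N(0, 10^2)
    simulate=lambda t: rng.normal(t, 1.0, size=50),
    summary=lambda x: x.mean(),                      # eta(.): sample mean (sufficient here)
    dist=lambda a, b: abs(a - b),
    eps=0.1,
    n_keep=200,
)
print(post.mean())                                   # close to the sample mean of y
```

Lowering eps tightens the approximation at the cost of more prior simulations per acceptance.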

Page 15: seminar at Princeton University

ABC Methods for Bayesian Model Choice

Approximate Bayesian computation

ABC basics

Output

The likelihood-free algorithm samples from the marginal in z of:

πε(θ, z|y) = π(θ) f(z|θ) I_{Aε,y}(z) / ∫_{Aε,y×Θ} π(θ) f(z|θ) dz dθ ,

where Aε,y = {z ∈ D | ρ(η(z), η(y)) < ε}.

The idea behind ABC is that the summary statistics coupled with a small tolerance should provide a good approximation of the posterior distribution:

πε(θ|y) = ∫ πε(θ, z|y) dz ≈ π(θ|y) .

Page 17: seminar at Princeton University

ABC Methods for Bayesian Model Choice

Approximate Bayesian computation

ABC basics

Probit modelling on Pima Indian women

Example (R benchmark)
200 Pima Indian women with observed variables

• plasma glucose concentration in oral glucose tolerance test
• diastolic blood pressure
• diabetes pedigree function
• presence/absence of diabetes

Probability of diabetes function of above variables

P(y = 1|x) = Φ(x1β1 + x2β2 + x3β3) ,

Test of H0 : β3 = 0 for 200 observations of Pima.tr based on a g-prior modelling:

β ∼ N3(0, n (xᵀx)⁻¹)

Use of importance function inspired from the MLE estimate distribution

β ∼ N(β̂, Σ̂)
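A likelihood-free variant of this probit setup can be sketched as follows. The simulated design matrix, the "true" coefficients, the score-like summary Xᵀy, and the quantile-based tolerance are all hypothetical choices for illustration (the slides use the Pima.tr data and importance sampling instead):

```python
import math
import numpy as np

rng = np.random.default_rng(1)
Phi = np.vectorize(lambda t: 0.5 * (1.0 + math.erf(t / math.sqrt(2.0))))  # N(0,1) cdf

# hypothetical stand-in for the Pima.tr design: 200 observations, 3 covariates
n = 200
X = rng.normal(size=(n, 3))
beta_true = np.array([0.5, -0.3, 0.0])             # illustrative "truth", beta3 = 0
y_obs = (rng.random(n) < Phi(X @ beta_true)).astype(float)

# g-prior: beta ~ N3(0, n (X^T X)^{-1})
Sigma = n * np.linalg.inv(X.T @ X)
Lchol = np.linalg.cholesky(Sigma)

def eta(y):
    return X.T @ y                                  # score-like summary statistic

N = 5000
betas = (Lchol @ rng.normal(size=(3, N))).T         # N draws from the g-prior
dist = np.empty(N)
for i in range(N):
    z = (rng.random(n) < Phi(X @ betas[i])).astype(float)
    dist[i] = np.linalg.norm(eta(z) - eta(y_obs))
keep = betas[dist <= np.quantile(dist, 0.01)]       # epsilon = 1% distance quantile
print(keep.shape, keep.mean(axis=0))
```

The accepted draws play the role of the ABC rejection sample whose marginals are compared with MCMC output on the next slide.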

Page 21: seminar at Princeton University

ABC Methods for Bayesian Model Choice

Approximate Bayesian computation

ABC basics

Pima Indian benchmark

Figure: Comparison between density estimates of the marginals on β1 (left), β2 (center) and β3 (right) from ABC rejection samples (red) and MCMC samples (black).

Page 22: seminar at Princeton University

ABC Methods for Bayesian Model Choice

Approximate Bayesian computation

ABC basics

MA example

Consider the MA(q) model

x_t = ε_t + ∑_{i=1}^q ϑi ε_{t−i}

Simple prior: uniform prior over the identifiability zone, e.g. triangle for MA(2)

Page 23: seminar at Princeton University

ABC Methods for Bayesian Model Choice

Approximate Bayesian computation

ABC basics

MA example (2)

ABC algorithm thus made of

1. picking a new value (ϑ1, ϑ2) in the triangle

2. generating an iid sequence (εt)−q<t≤T

3. producing a simulated series (x′t)1≤t≤T

Distance: basic distance between the series

ρ((x′_t)_{1≤t≤T}, (x_t)_{1≤t≤T}) = ∑_{t=1}^T (x_t − x′_t)²

or between summary statistics like the first q autocorrelations

τj = ∑_{t=j+1}^T x_t x_{t−j}
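The three steps plus the autocovariance-based distance can be sketched as below; the true values ϑ = (0.6, 0.2), the series length, and the 1% acceptance quantile are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
T, q = 200, 2

def simulate_ma2(th1, th2, rng):
    eps = rng.normal(size=T + q)                   # iid noise (eps_t), t = -1, 0, ..., T
    return eps[q:] + th1 * eps[1:-1] + th2 * eps[:-2]

def autocov_stats(x):                              # summary: tau_j = sum_t x_t x_{t-j}
    return np.array([np.sum(x[j:] * x[:-j]) for j in range(1, q + 1)])

def prior_draw(rng):                               # uniform over the MA(2) triangle
    while True:
        t1, t2 = rng.uniform(-2, 2), rng.uniform(-1, 1)
        if t2 + t1 > -1 and t2 - t1 > -1:
            return t1, t2

x_obs = simulate_ma2(0.6, 0.2, rng)                # pseudo-observed series
s_obs = autocov_stats(x_obs)

N = 3000
draws = np.array([prior_draw(rng) for _ in range(N)])
dist = np.array([np.sum((autocov_stats(simulate_ma2(t1, t2, rng)) - s_obs) ** 2)
                 for t1, t2 in draws])
keep = draws[dist <= np.quantile(dist, 0.01)]      # epsilon = 1% quantile of distances
print(keep.mean(axis=0))
```

Replacing `autocov_stats` with the identity on the raw series gives the "basic distance between the series" version.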

Page 25: seminar at Princeton University

ABC Methods for Bayesian Model Choice

Approximate Bayesian computation

ABC basics

Comparison of distance impact

Evaluation of the tolerance on the ABC sample against both distances (ε = 100%, 10%, 1%, 0.1%) for an MA(2) model

Page 26: seminar at Princeton University

ABC Methods for Bayesian Model Choice

Approximate Bayesian computation

ABC basics

Comparison of distance impact

[Figure: ABC posterior samples of θ1 (left) and θ2 (right)]

Evaluation of the tolerance on the ABC sample against both distances (ε = 100%, 10%, 1%, 0.1%) for an MA(2) model

Page 28: seminar at Princeton University

ABC Methods for Bayesian Model Choice

Approximate Bayesian computation

ABC basics

ABC advances

Simulating from the prior is often poor in efficiency

Either modify the proposal distribution on θ to increase the density of x's within the vicinity of y...
[Marjoram et al, 2003; Bortot et al., 2007; Sisson et al., 2007]

...or view the problem as conditional density estimation and develop techniques to allow for larger ε
[Beaumont et al., 2002]

...or even include ε in the inferential framework [ABCµ]
[Ratmann et al., 2009]

Page 32: seminar at Princeton University

ABC Methods for Bayesian Model Choice

Approximate Bayesian computation

Alphabet soup

ABC-MCMC

Markov chain (θ(t)) created via the transition function

θ(t+1) = θ′ ∼ Kω(θ′|θ(t))   if x ∼ f(x|θ′) is such that x = y
                            and u ∼ U(0, 1) ≤ π(θ′)Kω(θ(t)|θ′) / π(θ(t))Kω(θ′|θ(t)) ,
         θ(t)               otherwise,

has the posterior π(θ|y) as stationary distribution
[Marjoram et al, 2003]

Page 34: seminar at Princeton University

ABC Methods for Bayesian Model Choice

Approximate Bayesian computation

Alphabet soup

ABC-MCMC (2)

Algorithm 2 Likelihood-free MCMC sampler

Use Algorithm 1 to get (θ(0), z(0))
for t = 1 to N do
  Generate θ′ from Kω(·|θ(t−1)),
  Generate z′ from the likelihood f(·|θ′),
  Generate u from U[0,1],
  if u ≤ [π(θ′)Kω(θ(t−1)|θ′) / π(θ(t−1))Kω(θ′|θ(t−1))] I_{Aε,y}(z′) then
    set (θ(t), z(t)) = (θ′, z′)
  else
    (θ(t), z(t)) = (θ(t−1), z(t−1)),
  end if
end for
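A minimal sketch of Algorithm 2 in Python, using a symmetric random-walk kernel Kω so that the kernel ratio cancels in the acceptance probability; the normal-mean toy target and all tuning constants are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
y = rng.normal(2.0, 1.0, size=40)                  # pseudo-observed data, true mean 2
eta = lambda x: x.mean()                           # summary statistic
eps, omega = 0.2, 0.5                              # tolerance and random-walk scale
log_prior = lambda t: -0.5 * t * t / 100.0         # pi(theta) = N(0, 10^2), up to a constant

# initialise with one draw from the rejection sampler (Algorithm 1)
while True:
    theta = rng.normal(0.0, 10.0)
    z = rng.normal(theta, 1.0, size=40)
    if abs(eta(z) - eta(y)) <= eps:
        break

chain = [theta]
for _ in range(2000):
    prop = rng.normal(theta, omega)                # K_omega: symmetric random walk
    z = rng.normal(prop, 1.0, size=40)
    # symmetric kernel: the K_omega ratio cancels, leaving pi(prop)/pi(theta) x indicator
    if (abs(eta(z) - eta(y)) <= eps
            and np.log(rng.random() + 1e-300) <= log_prior(prop) - log_prior(theta)):
        theta = prop
    chain.append(theta)
print(np.mean(chain))
```

The indicator is evaluated first, so a rejected simulation never requires the prior ratio.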

Page 35: seminar at Princeton University

ABC Methods for Bayesian Model Choice

Approximate Bayesian computation

Alphabet soup

Why does it work?

Acceptance probability that does not involve the calculation of the likelihood, since

πε(θ′, z′|y) / πε(θ(t−1), z(t−1)|y) × Kω(θ(t−1)|θ′) f(z(t−1)|θ(t−1)) / Kω(θ′|θ(t−1)) f(z′|θ′)

= [π(θ′) f(z′|θ′) I_{Aε,y}(z′)] / [π(θ(t−1)) f(z(t−1)|θ(t−1)) I_{Aε,y}(z(t−1))]
  × [Kω(θ(t−1)|θ′) f(z(t−1)|θ(t−1))] / [Kω(θ′|θ(t−1)) f(z′|θ′)]

= π(θ′)Kω(θ(t−1)|θ′) / π(θ(t−1))Kω(θ′|θ(t−1)) × I_{Aε,y}(z′) ,

the indicator in the denominator being equal to 1 along the chain.

Page 36: seminar at Princeton University

ABC Methods for Bayesian Model Choice

Approximate Bayesian computation

Alphabet soup

ABCµ

Use of a joint density

f(θ, ε|y) ∝ ξ(ε|y, θ) × πθ(θ) × πε(ε)

where y is the data and ξ(·|y, θ) is the prior predictive density of ε = ρ(η(z), η(y)) given θ and y when z ∼ f(z|θ)

Warning! Replacement of ξ(ε|y, θ) with a non-parametric kernel approximation.

[Ratmann, Andrieu, Wiuf and Richardson, 2009, PNAS]

Page 38: seminar at Princeton University

ABC Methods for Bayesian Model Choice

Approximate Bayesian computation

Alphabet soup

ABCµ details

Major drift: multidimensional distances ρk (k = 1, . . . , K) and errors εk = ρk(ηk(z), ηk(y)), with

εk ∼ ξk(ε|y, θ) ≈ ξ̂k(ε|y, θ) = (1/Bhk) ∑_b K[{εk − ρk(ηk(zb), ηk(y))}/hk]

then used in replacing ξ(ε|y, θ) with mink ξ̂k(ε|y, θ)

ABCµ involves acceptance probability

π(θ′, ε′)/π(θ, ε) × q(θ′, θ)q(ε′, ε)/q(θ, θ′)q(ε, ε′) × mink ξ̂k(ε′|y, θ′)/mink ξ̂k(ε|y, θ)

Page 40: seminar at Princeton University

ABC Methods for Bayesian Model Choice

Approximate Bayesian computation

Alphabet soup

ABCµ multiple errors

[© Ratmann et al., PNAS, 2009]

Page 41: seminar at Princeton University

ABC Methods for Bayesian Model Choice

Approximate Bayesian computation

Alphabet soup

ABCµ for model choice

[© Ratmann et al., PNAS, 2009]

Page 42: seminar at Princeton University

ABC Methods for Bayesian Model Choice

Approximate Bayesian computation

Alphabet soup

Which summary statistics?

Fundamental difficulty of the choice of the summary statistic when there is no non-trivial sufficient statistic [except when done by the experimenters in the field]

Page 43: seminar at Princeton University

ABC Methods for Bayesian Model Choice

Approximate Bayesian computation

Alphabet soup

Which summary statistics?


When a large collection of summary statistics is available, Joyce and Marjoram (2008) consider their sequential inclusion into the ABC target, with a stopping rule based on a likelihood ratio test.

Page 44: seminar at Princeton University

ABC Methods for Bayesian Model Choice

Approximate Bayesian computation

Alphabet soup

Which summary statistics?


Based on decision-theoretic principles, Fearnhead and Prangle (2012) end up with E[θ|y] as the optimal summary statistic.
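This semi-automatic construction can be sketched by regressing θ on features of pilot simulations and using the fitted value Ê[θ|y] as the summary statistic; the normal-mean toy model, the prior, and the chosen features are illustrative assumptions, not from the slides:

```python
import numpy as np

rng = np.random.default_rng(4)
n, M = 30, 2000

# pilot stage: (theta, data) pairs simulated from the prior predictive
thetas = rng.normal(0.0, 3.0, size=M)              # prior N(0, 3^2), an assumption
Z = rng.normal(thetas[:, None], 1.0, size=(M, n))  # toy model: y_i ~ N(theta, 1)

# linear regression of theta on simple data features approximates E[theta | y]
F = np.column_stack([np.ones(M), Z.mean(axis=1), Z.std(axis=1)])
coef, *_ = np.linalg.lstsq(F, thetas, rcond=None)

def summary(y):                                    # fitted E[theta | y], used as eta(y)
    return np.array([1.0, y.mean(), y.std()]) @ coef

y_obs = rng.normal(1.5, 1.0, size=n)
print(summary(y_obs))
```

The regression output then feeds into a standard rejection sampler in place of hand-picked statistics.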

Page 45: seminar at Princeton University

ABC Methods for Bayesian Model Choice

Approximate Bayesian computation

simulation-based methods in Econometrics

Econ’ections

Similar exploration of simulation-based techniques in Econometrics

I Simulated method of moments

I Method of simulated moments

I Simulated pseudo-maximum-likelihood

I Indirect inference

[Gourieroux & Monfort, 1996]

Page 46: seminar at Princeton University

ABC Methods for Bayesian Model Choice

Approximate Bayesian computation

simulation-based methods in Econometrics

Simulated method of moments

Given observations yᵒ_{1:n} from a model

y_t = r(y_{1:(t−1)}, ε_t, θ) , ε_t ∼ g(·)

simulate ε*_{1:n}, derive

y*_t(θ) = r(y_{1:(t−1)}, ε*_t, θ)

and estimate θ by

arg minθ ∑_{t=1}^n (yᵒ_t − y*_t(θ))²

Page 47: seminar at Princeton University

ABC Methods for Bayesian Model Choice

Approximate Bayesian computation

simulation-based methods in Econometrics

Simulated method of moments

Given observations yᵒ_{1:n} from a model

y_t = r(y_{1:(t−1)}, ε_t, θ) , ε_t ∼ g(·)

simulate ε*_{1:n}, derive

y*_t(θ) = r(y_{1:(t−1)}, ε*_t, θ)

and estimate θ by

arg minθ { ∑_{t=1}^n yᵒ_t − ∑_{t=1}^n y*_t(θ) }²
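A minimal sketch of this estimator on a hypothetical location model y_t = θ + ε_t (chosen for illustration, not from the slides), matching the two sums with simulated shocks drawn once and held fixed:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 500

# hypothetical location model: y_t = theta + eps_t, eps_t ~ N(0, 1)
y_obs = 1.0 + rng.normal(size=n)                   # observations, true theta = 1
eps_star = rng.normal(size=n)                      # simulated shocks, drawn once and fixed

def y_star(theta):
    return theta + eps_star                        # y*_t(theta) = r(., eps*_t, theta)

def objective(theta):                              # { sum_t y^o_t - sum_t y*_t(theta) }^2
    return (y_obs.sum() - y_star(theta).sum()) ** 2

grid = np.linspace(-2.0, 4.0, 6001)                # crude grid search for the arg min
theta_hat = grid[np.argmin([objective(t) for t in grid])]
print(theta_hat)
```

Keeping ε* fixed across evaluations of θ makes the objective smooth in θ, which is what allows the minimisation.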

Page 48: seminar at Princeton University

ABC Methods for Bayesian Model Choice

Approximate Bayesian computation

simulation-based methods in Econometrics

Method of simulated moments

Given a statistic vector K(y) with

Eθ[K(Yt)|y_{1:(t−1)}] = k(y_{1:(t−1)}; θ)

find an unbiased estimator of k(y_{1:(t−1)}; θ),

k̃(εt, y_{1:(t−1)}; θ)

Estimate θ by

arg minθ ‖ ∑_{t=1}^n [ K(yt) − ∑_{s=1}^S k̃(εˢt, y_{1:(t−1)}; θ)/S ] ‖

[Pakes & Pollard, 1989]

Page 49: seminar at Princeton University

ABC Methods for Bayesian Model Choice

Approximate Bayesian computation

simulation-based methods in Econometrics

Indirect inference

Minimise (in θ) the distance between estimators β̂ based on pseudo-models for genuine observations and for observations simulated under the true model and the parameter θ.

[Gourieroux, Monfort, & Renault, 1993;Smith, 1993; Gallant & Tauchen, 1996]

Page 50: seminar at Princeton University

ABC Methods for Bayesian Model Choice

Approximate Bayesian computation

simulation-based methods in Econometrics

Indirect inference (PML vs. PSE)

Example of the pseudo-maximum-likelihood (PML)

β̂(y) = arg maxβ ∑_t log f*(yt|β, y_{1:(t−1)})

leading to

arg minθ ‖β̂(yᵒ) − β̂(y¹(θ), . . . , yˢ(θ))‖²

when

yˢ(θ) ∼ f(y|θ) , s = 1, . . . , S

Page 51: seminar at Princeton University

ABC Methods for Bayesian Model Choice

Approximate Bayesian computation

simulation-based methods in Econometrics

Indirect inference (PML vs. PSE)

Example of the pseudo-score-estimator (PSE)

β̂(y) = arg minβ { ∑_t ∂log f*/∂β (yt|β, y_{1:(t−1)}) }²

leading to

arg minθ ‖β̂(yᵒ) − β̂(y¹(θ), . . . , yˢ(θ))‖²

when

yˢ(θ) ∼ f(y|θ) , s = 1, . . . , S

Page 52: seminar at Princeton University

ABC Methods for Bayesian Model Choice

Approximate Bayesian computation

simulation-based methods in Econometrics

ABC using indirect inference (1)

"We present a novel approach for developing summary statistics for use in approximate Bayesian computation (ABC) algorithms by using indirect inference (...) In the indirect inference approach to ABC the parameters of an auxiliary model fitted to the data become the summary statistics. Although applicable to any ABC technique, we embed this approach within a sequential Monte Carlo algorithm that is completely adaptive and requires very little tuning (...)"

[Drovandi, Pettitt & Faddy, 2011]

© Indirect inference provides summary statistics for ABC...

Page 53: seminar at Princeton University

ABC Methods for Bayesian Model Choice

Approximate Bayesian computation

simulation-based methods in Econometrics

ABC using indirect inference (2)

"...the above result shows that, in the limit as h → 0, ABC will be more accurate than an indirect inference method whose auxiliary statistics are the same as the summary statistic that is used for ABC (...) Initial analysis showed that which method is more accurate depends on the true value of θ."

[Fearnhead and Prangle, 2012]

© Indirect inference provides estimates rather than global inference...

Page 54: seminar at Princeton University

ABC Methods for Bayesian Model Choice

ABC for model choice

ABC for model choice

Approximate Bayesian computation

ABC for model choice
  Principle
  Gibbs random fields
  Generic ABC model choice

Model choice consistency

Page 55: seminar at Princeton University

ABC Methods for Bayesian Model Choice

ABC for model choice

Principle

Bayesian model choice

Principle

Several models M1, M2, . . . are considered simultaneously for dataset y, with the model index M central to inference.
Use of a prior π(M = m), plus a prior distribution on the parameter conditional on the value m of the model index, πm(θm)
Goal is to derive the posterior distribution of M,

π(M = m|data)

a challenging computational target when models are complex.

Page 56: seminar at Princeton University

ABC Methods for Bayesian Model Choice

ABC for model choice

Principle

Generic ABC for model choice

Algorithm 3 Likelihood-free model choice sampler (ABC-MC)

for t = 1 to T do
  repeat
    Generate m from the prior π(M = m)
    Generate θm from the prior πm(θm)
    Generate z from the model fm(z|θm)
  until ρ{η(z), η(y)} < ε
  Set m(t) = m and θ(t) = θm
end for

[Grelaud & al., 2009; Toni & al., 2009]
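Algorithm 3 can be sketched on a Poisson-versus-geometric pair (the example announced at the end of the slides); the priors, sample size, and tolerance below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 100
y_obs = rng.poisson(4.0, size=n)                   # data generated under the Poisson model
s_obs = y_obs.sum()                                # eta(y): sufficient for both models

T, eps = 20000, 5
counts = {1: 0, 2: 0}
for _ in range(T):
    m = 1 if rng.random() < 0.5 else 2             # m ~ pi(M = m), uniform here
    if m == 1:
        lam = rng.exponential(4.0)                 # Exp prior, mean 4 (an assumption)
        z = rng.poisson(lam, size=n)
    else:
        p = 1.0 - rng.random()                     # uniform prior on (0, 1]
        z = rng.geometric(p, size=n) - 1           # shift to support {0, 1, ...}
    if abs(z.sum() - s_obs) <= eps:
        counts[m] += 1

total = counts[1] + counts[2]
print(counts[1] / total)                           # approximates pi(M = Poisson | y)
```

The acceptance frequencies are exactly the estimator of π(M = m|y) described on the next slide.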

Page 57: seminar at Princeton University

ABC Methods for Bayesian Model Choice

ABC for model choice

Principle

ABC estimates

Posterior probability π(M = m|y) approximated by the frequency of acceptances from model m,

(1/T) ∑_{t=1}^T I_{m(t)=m} .

Page 58: seminar at Princeton University

ABC Methods for Bayesian Model Choice

ABC for model choice

Principle

ABC estimates

Extension to a weighted polychotomous logistic regression estimate of π(M = m|y), with non-parametric kernel weights

[Cornuet et al., DIYABC, 2009]

Page 60: seminar at Princeton University

ABC Methods for Bayesian Model Choice

ABC for model choice

Principle

The great ABC controversy

On-going controversy in phylogeographic genetics about the validity of using ABC for testing

Against: Templeton, 2008, 2009, 2010a, 2010b, 2010c, &tc, argues that nested hypotheses cannot have higher probabilities than nesting hypotheses (!)

Replies: Fagundes et al., 2008, Beaumont et al., 2010, Berger et al., 2010, Csilléry et al., 2010 point out that the criticisms are addressed at [Bayesian] model-based inference and have nothing to do with ABC...

Page 61: seminar at Princeton University

ABC Methods for Bayesian Model Choice

ABC for model choice

Gibbs random fields

Potts model

Potts model

Distribution with an energy function of the form

θS(y) = θ ∑_{l∼i} δ_{yl=yi}

where l∼i denotes a neighbourhood structure

In most realistic settings, the summation

Zθ = ∑_{x∈X} exp{θS(x)}

involves too many terms to be manageable and numerical approximations cannot always be trusted

Page 63: seminar at Princeton University

ABC Methods for Bayesian Model Choice

ABC for model choice

Gibbs random fields

Neighbourhood relations

Setup
Choice to be made between M neighbourhood relations

i ∼m i′ (0 ≤ m ≤ M − 1)

with

Sm(x) = ∑_{i∼m i′} I{xi=xi′}

driven by the posterior probabilities of the models.

Page 64: seminar at Princeton University

ABC Methods for Bayesian Model Choice

ABC for model choice

Gibbs random fields

Model index

Computational target:

P(M = m|x) ∝ ∫_{Θm} fm(x|θm) πm(θm) dθm π(M = m)

If S(x) sufficient statistic for the joint parameters (M, θ0, . . . , θM−1),

P(M = m|x) = P(M = m|S(x)) .

Page 66: seminar at Princeton University

ABC Methods for Bayesian Model Choice

ABC for model choice

Gibbs random fields

Sufficient statistics in Gibbs random fields

Each model m has its own sufficient statistic Sm(·) and S(·) = (S0(·), . . . , SM−1(·)) is also (model-)sufficient.

Explanation: For Gibbs random fields,

x|M = m ∼ fm(x|θm) = f¹m(x|S(x)) f²m(S(x)|θm)
                   = (1/n(S(x))) f²m(S(x)|θm)

where

n(S(x)) = # {x̃ ∈ X : S(x̃) = S(x)}

© S(x) is sufficient for the joint parameters

Page 68: seminar at Princeton University

ABC Methods for Bayesian Model Choice

ABC for model choice

Gibbs random fields

Toy example

iid Bernoulli model versus two-state first-order Markov chain, i.e.

f0(x|θ0) = exp(θ0 ∑_{i=1}^n I{xi=1}) / {1 + exp(θ0)}ⁿ ,

versus

f1(x|θ1) = (1/2) exp(θ1 ∑_{i=2}^n I{xi=xi−1}) / {1 + exp(θ1)}ⁿ⁻¹ ,

with priors θ0 ∼ U(−5, 5) and θ1 ∼ U(0, 6) (inspired by "phase transition" boundaries).
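A sketch of ABC model choice for this toy example, using the joint sufficient statistic S(x) = (S0(x), S1(x)); the observed-data parameter, sample size, and tolerance are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100

def sim_bernoulli(th0, rng):                       # model 0: iid, P(x_i = 1) = e^th0/(1+e^th0)
    p = 1.0 / (1.0 + np.exp(-th0))
    return (rng.random(n) < p).astype(int)

def sim_markov(th1, rng):                          # model 1: two-state first-order chain
    x = np.empty(n, dtype=int)
    x[0] = rng.integers(2)                         # uniform initial state (the 1/2 factor)
    p_same = 1.0 / (1.0 + np.exp(-th1))            # P(x_i = x_{i-1})
    for i in range(1, n):
        x[i] = x[i - 1] if rng.random() < p_same else 1 - x[i - 1]
    return x

def S(x):                                          # joint sufficient statistic (S0, S1)
    return np.array([np.sum(x == 1), np.sum(x[1:] == x[:-1])])

x_obs = sim_markov(2.0, rng)                       # pseudo-observed data from model 1
s_obs = S(x_obs)

T, eps = 20000, 2
acc = [0, 0]
for _ in range(T):
    if rng.random() < 0.5:
        m, z = 0, sim_bernoulli(rng.uniform(-5, 5), rng)   # theta0 ~ U(-5, 5)
    else:
        m, z = 1, sim_markov(rng.uniform(0, 6), rng)       # theta1 ~ U(0, 6)
    if np.abs(S(z) - s_obs).sum() <= eps:
        acc[m] += 1
print(acc)                                         # hat BF_01 = acc[0] / acc[1]
```

Because S is jointly sufficient here, the acceptance ratio converges to the true Bayes factor as ε shrinks, which is what the following comparison illustrates.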

Page 69: seminar at Princeton University

ABC Methods for Bayesian Model Choice

ABC for model choice

Gibbs random fields

Toy example (2)

(left) Comparison of the true BF_{m0/m1}(x0) with its ABC approximation (in logs), over 2,000 simulations and 4·10⁶ proposals from the prior. (right) Same when using a tolerance ε corresponding to the 1% quantile of the distances.

Page 70: seminar at Princeton University

ABC Methods for Bayesian Model Choice

ABC for model choice

Generic ABC model choice

More about sufficiency

If η1(x) sufficient statistic for model m = 1 and parameter θ1 and η2(x) sufficient statistic for model m = 2 and parameter θ2, (η1(x), η2(x)) is not always sufficient for (m, θm)

© Potential loss of information at the testing level

Page 72: seminar at Princeton University

ABC Methods for Bayesian Model Choice

ABC for model choice

Generic ABC model choice

Limiting behaviour of B12 (T →∞)

ABC approximation

B̂12(y) = ∑_{t=1}^T I_{mt=1} I_{ρ{η(zt),η(y)}≤ε} / ∑_{t=1}^T I_{mt=2} I_{ρ{η(zt),η(y)}≤ε} ,

where the (mt, zt)'s are simulated from the (joint) prior

As T goes to infinity, the limit is

Bε12(y) = ∫ I_{ρ{η(z),η(y)}≤ε} π1(θ1) f1(z|θ1) dz dθ1 / ∫ I_{ρ{η(z),η(y)}≤ε} π2(θ2) f2(z|θ2) dz dθ2
        = ∫ I_{ρ{η,η(y)}≤ε} π1(θ1) f1η(η|θ1) dη dθ1 / ∫ I_{ρ{η,η(y)}≤ε} π2(θ2) f2η(η|θ2) dη dθ2 ,

where f1η(η|θ1) and f2η(η|θ2) denote the distributions of η(z)

Page 74: seminar at Princeton University

ABC Methods for Bayesian Model Choice

ABC for model choice

Generic ABC model choice

Limiting behaviour of B12 (ε→ 0)

When ε goes to zero,

    B^{\eta}_{12}(y) = \frac{\int \pi_1(\theta_1)\, f^{\eta}_1(\eta(y)|\theta_1)\, d\theta_1}{\int \pi_2(\theta_2)\, f^{\eta}_2(\eta(y)|\theta_2)\, d\theta_2}

⚠ This is the Bayes factor based on the sole observation of η(y)


Limiting behaviour of B12 (under sufficiency)

If η(y) is a sufficient statistic in both models,

    f_i(y|\theta_i) = g_i(y)\, f^{\eta}_i(\eta(y)|\theta_i)

Thus

    B_{12}(y) = \frac{\int_{\Theta_1} \pi_1(\theta_1)\, g_1(y)\, f^{\eta}_1(\eta(y)|\theta_1)\, d\theta_1}{\int_{\Theta_2} \pi_2(\theta_2)\, g_2(y)\, f^{\eta}_2(\eta(y)|\theta_2)\, d\theta_2}
              = \frac{g_1(y) \int \pi_1(\theta_1)\, f^{\eta}_1(\eta(y)|\theta_1)\, d\theta_1}{g_2(y) \int \pi_2(\theta_2)\, f^{\eta}_2(\eta(y)|\theta_2)\, d\theta_2}
              = \frac{g_1(y)}{g_2(y)}\, B^{\eta}_{12}(y)\,.

[Didelot, Everitt, Johansen & Lawson, 2011]

⚠ No discrepancy only under cross-model sufficiency


Poisson/geometric example

Sample

    x = (x_1, \ldots, x_n)

from either a Poisson P(λ) or a geometric G(p) distribution. The sum

    S = \sum_{i=1}^{n} x_i = \eta(x)

is a sufficient statistic for either model, but not simultaneously for both.

Discrepancy ratio:

    \frac{g_1(x)}{g_2(x)} = \frac{S!\, n^{-S} \big/ \prod_i x_i!}{1 \big/ \binom{n+S-1}{S}}
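The discrepancy ratio can be evaluated exactly. A small check of my own: two samples sharing the same sufficient statistic S give different ratios, so the full-data Bayes factor B12(x) and the S-based factor cannot agree in general.

```python
from math import comb, factorial, prod

def discrepancy_ratio(x):
    """g1(x)/g2(x) for the Poisson/geometric pair: the factor separating
    the full-data Bayes factor from the one based on S = sum(x) alone."""
    n, S = len(x), sum(x)
    g1 = factorial(S) / (n ** S * prod(factorial(xi) for xi in x))  # Poisson: x | S is multinomial
    g2 = 1 / comb(n + S - 1, S)                                     # geometric: x | S is uniform
    return g1 / g2

# same sufficient statistic S = 6, yet different ratios
print(discrepancy_ratio([2, 2, 2]))
print(discrepancy_ratio([6, 0, 0]))
```

The two printed values differ by roughly two orders of magnitude, which is the point of the next slide.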


Poisson/geometric discrepancy

Range of B12(x) versus B^η_12(x): the values produced have nothing in common.


Formal recovery

Creating an encompassing exponential family

    f(x|\theta_1, \theta_2, \alpha_1, \alpha_2) \propto \exp\{\theta_1^{\mathrm{T}}\eta_1(x) + \theta_2^{\mathrm{T}}\eta_2(x) + \alpha_1 t_1(x) + \alpha_2 t_2(x)\}

leads to a sufficient statistic (η1(x), η2(x), t1(x), t2(x))
[Didelot, Everitt, Johansen & Lawson, 2011]


In the Poisson/geometric case, if ∏_i x_i! is added to S, there is no discrepancy.


This only applies in genuine sufficiency settings...

⚠ Inability to evaluate the information loss due to the summary statistics


Meaning of the ABC-Bayes factor

The ABC approximation to the Bayes factor is based solely on the summary statistics...

In the Poisson/geometric case, if E[y_i] = θ0 > 0,

    \lim_{n\to\infty} B^{\eta}_{12}(y) = \frac{(\theta_0 + 1)^2}{\theta_0}\, e^{-\theta_0}


MA example

[Figure: evolution (against ε) of the ABC Bayes factor, in terms of frequencies of visits to models MA(1) (left) and MA(2) (right), when ε equals the 10, 1, 0.1, 0.01% quantiles on insufficient autocovariance distances. Sample of 50 points from an MA(2) model with θ1 = 0.6, θ2 = 0.2. True Bayes factor equal to 17.71.]


MA example

[Figure: evolution (against ε) of the ABC Bayes factor, in terms of frequencies of visits to models MA(1) (left) and MA(2) (right), when ε equals the 10, 1, 0.1, 0.01% quantiles on insufficient autocovariance distances. Sample of 50 points from an MA(1) model with θ1 = 0.6. True Bayes factor B21 equal to .004.]
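The autocovariance summaries used in this MA comparison can be sketched as follows; this is a minimal version of my own (the simulator and the choice of lags are assumptions for illustration, not the slides' exact setup).

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_ma2(theta1, theta2, n=50):
    """MA(2) series x_t = e_t + theta1 * e_{t-1} + theta2 * e_{t-2}, e_t iid N(0, 1)."""
    e = rng.normal(size=n + 2)
    return e[2:] + theta1 * e[1:-1] + theta2 * e[:-2]

def autocovariances(x, lags=(0, 1, 2)):
    """Empirical autocovariances: natural but insufficient ABC summaries."""
    n = len(x)
    return np.array([np.dot(x[k:], x[:n - k]) / n for k in lags])

x = simulate_ma2(0.6, 0.2)
print(autocovariances(x))
```

In expectation the three values are (1 + θ1² + θ2², θ1 + θ1θ2, θ2), so they carry information about both models without being sufficient.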


A population genetics evaluation

Gene free

Population genetics example with
- 3 populations
- 2 scenarios
- 15 individuals
- 5 loci
- a single mutation parameter
- 24 summary statistics
- 2 million ABC proposals
- an importance [tree] sampling alternative


Stability of importance sampling

[Figure: boxplots of the importance sampling approximations, on the probability scale from 0 to 1.]


Comparison with ABC

Use of 24 summary statistics and DIY-ABC logistic correction

[Figure: ABC approximations (direct and with logistic correction) plotted against the importance sampling approximations.]


Comparison with ABC

Use of 15 summary statistics

[Figure: direct ABC approximations plotted against the importance sampling approximations.]


Comparison with ABC

Use of 15 summary statistics and DIY-ABC logistic correction

[Figure: ABC approximations (direct and with logistic correction) plotted against the importance sampling approximations.]


The only safe cases???

Besides specific models like Gibbs random fields, using distances over the data itself escapes the discrepancy...
[Toni & Stumpf, 2010; Sousa & al., 2009]

...and so does the use of more informal model fitting measures
[Ratmann & al., 2009]


Model choice consistency

ABC model choice consistency

Approximate Bayesian computation

ABC for model choice

Model choice consistency
    Formalised framework
    Consistency results
    Summary statistics
    Conclusions


Formalised framework

The starting point

Central question for the validation of ABC for model choice:

    When is a Bayes factor based on an insufficient statistic T(y) consistent?

Note: the conclusion drawn from T(y) through B^T_12(y) necessarily differs from the conclusion drawn from y through B12(y).


A benchmark if toy example

Comparison suggested by a referee of the PNAS paper [thanks]:
[X, Cornuet, Marin, & Pillai, Aug. 2011]

Model M1: y ~ N(θ1, 1), opposed to model M2: y ~ L(θ2, 1/√2), the Laplace distribution with mean θ2 and scale parameter 1/√2 (variance one).

Four possible statistics:

1. sample mean ȳ (sufficient for M1 but not for M2);
2. sample median med(y) (insufficient);
3. sample variance var(y) (ancillary);
4. median absolute deviation mad(y) = med(|y − med(y)|).
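A quick way to see why these four statistics behave so differently is to compute them on large samples from each model. The sketch below is mine; the population values quoted in the comments (MAD ≈ 0.674 for the standard normal, ≈ 0.490 for the variance-one Laplace) are standard facts.

```python
import numpy as np

rng = np.random.default_rng(2)

def four_stats(y):
    """The four candidate summaries for the normal/Laplace benchmark."""
    med = np.median(y)
    return {
        "mean": np.mean(y),                 # sufficient for M1, not for M2
        "median": med,                      # insufficient for both models
        "var": np.var(y, ddof=1),           # ancillary: close to 1 under both
        "mad": np.median(np.abs(y - med)),  # ~0.674 (normal) vs ~0.490 (Laplace)
    }

print(four_stats(rng.normal(0.0, 1.0, 100_000)))
print(four_stats(rng.laplace(0.0, 1.0 / np.sqrt(2), 100_000)))
```

The variance is uninformative for discrimination, while the mad clearly separates the two models, which is exactly the pattern the following figures display.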

[Figures: densities and boxplots of the posterior probabilities of model M1 under Gaussian and Laplace data (n = 100), for the different choices of statistics.]


Framework

Starting from the observed sample

    y = (y_1, \ldots, y_n),

not necessarily iid, with true distribution

    y \sim \mathbf{P}^n

Summary statistics

    T(y) = T^n = (T_1(y), T_2(y), \cdots, T_d(y)) \in \mathbb{R}^d

with true distribution T^n \sim \mathbf{G}^n.


Framework

Comparison of

– under M1, y ~ F_{1,n}(·|θ1), where θ1 ∈ Θ1 ⊂ R^{p1}
– under M2, y ~ F_{2,n}(·|θ2), where θ2 ∈ Θ2 ⊂ R^{p2}

turned into

– under M1, T(y) ~ G_{1,n}(·|θ1), and θ1|T(y) ~ π1(·|T^n)
– under M2, T(y) ~ G_{2,n}(·|θ2), and θ2|T(y) ~ π2(·|T^n)


Consistency results

Assumptions

A collection of asymptotic “standard” assumptions:

[A1] is a standard central limit theorem under the true model
[A2] controls the large deviations of the estimator T^n from the estimand µ(θ)
[A3] is the standard prior mass condition found in Bayesian asymptotics (d_i is the effective dimension of the parameter)
[A4] restricts the behaviour of the model density against the true density


[A1] There exist a sequence {v_n} converging to +∞, a distribution Q, a symmetric d × d positive definite matrix V_0, and a vector µ_0 ∈ R^d such that

    v_n V_0^{-1/2} (T^n - \mu_0) \xrightarrow{n\to\infty} Q, \quad \text{under } \mathbf{G}^n


[A2] For i = 1, 2, there exist sets F_{n,i} ⊂ Θ_i and constants ε_i, τ_i, α_i > 0 such that, for all τ > 0,

    \sup_{\theta_i \in F_{n,i}} G_{i,n}\left[\, |T^n - \mu_i(\theta_i)| > \tau\, |\mu_i(\theta_i) - \mu_0| \wedge \epsilon_i \,\middle|\, \theta_i \right] \left(|\mu_i(\theta_i) - \mu_0| \wedge \epsilon_i\right)^{-\alpha_i} \lesssim v_n^{-\alpha_i}

with

    \pi_i(F^c_{n,i}) = o(v_n^{-\tau_i}).


[A3] For u > 0, let

    S_{n,i}(u) = \{\theta_i \in F_{n,i};\ |\mu_i(\theta_i) - \mu_0| \le u\, v_n^{-1}\}.

If inf{|µ_i(θ_i) − µ_0|; θ_i ∈ Θ_i} = 0, there exist constants d_i < τ_i ∧ α_i − 1 such that

    \pi_i(S_{n,i}(u)) \sim u^{d_i} v_n^{-d_i}, \quad \forall u \lesssim v_n


[A4] If inf{|µ_i(θ_i) − µ_0|; θ_i ∈ Θ_i} = 0, then for any ε > 0 there exist U, δ > 0 and (E_n) such that, if θ_i ∈ S_{n,i}(U),

    E_n \subset \{t;\ g_i(t|\theta_i) < \delta\, g_n(t)\} \quad \text{and} \quad \mathbf{G}^n(E^c_n) < \epsilon.



Effective dimension

Understanding d_i in [A3]: defined only when µ_0 ∈ {µ_i(θ_i); θ_i ∈ Θ_i},

    \pi_i(\theta_i : |\mu_i(\theta_i) - \mu_0| < n^{-1/2}) = O(n^{-d_i/2})

so d_i is the effective dimension of the model Θ_i around µ_0.


Asymptotic marginals

Asymptotically, under [A1]–[A4], the marginal

    m_i(t) = \int_{\Theta_i} g_i(t|\theta_i)\, \pi_i(\theta_i)\, d\theta_i

is such that

(i) if inf{|µ_i(θ_i) − µ_0|; θ_i ∈ Θ_i} = 0,

    C_l\, v_n^{d-d_i} \le m_i(T^n) \le C_u\, v_n^{d-d_i}

and (ii) if inf{|µ_i(θ_i) − µ_0|; θ_i ∈ Θ_i} > 0,

    m_i(T^n) = o_{\mathbf{P}^n}\left[v_n^{d-\tau_i} + v_n^{d-\alpha_i}\right].


Within-model consistency

Under the same assumptions, if inf{|µ_i(θ_i) − µ_0|; θ_i ∈ Θ_i} = 0, the posterior distribution of µ_i(θ_i) given T^n is consistent at rate 1/v_n, provided α_i ∧ τ_i > d_i.

Note: d_i can truly be seen as an effective dimension of the model under the posterior π_i(·|T^n), since if µ_0 ∈ {µ_i(θ_i); θ_i ∈ Θ_i},

    m_i(T^n) \sim v_n^{d-d_i}


Between-model consistency

A consequence of the above is that the asymptotic behaviour of the Bayes factor is driven by the asymptotic mean value of T^n under both models. And only by this mean value!


Indeed, if

    \inf\{|\mu_0 - \mu_2(\theta_2)|;\ \theta_2 \in \Theta_2\} = \inf\{|\mu_0 - \mu_1(\theta_1)|;\ \theta_1 \in \Theta_1\} = 0,

then

    C_l\, v_n^{-(d_1-d_2)} \le m_1(T^n)\big/m_2(T^n) \le C_u\, v_n^{-(d_1-d_2)},

where C_l, C_u = O_{\mathbf{P}^n}(1), irrespective of the true model.
⚠ This only depends on the difference d_1 − d_2: no consistency


Else, if

    \inf\{|\mu_0 - \mu_2(\theta_2)|;\ \theta_2 \in \Theta_2\} > \inf\{|\mu_0 - \mu_1(\theta_1)|;\ \theta_1 \in \Theta_1\} = 0,

then

    \frac{m_1(T^n)}{m_2(T^n)} \ge C_u \min\left(v_n^{-(d_1-\alpha_2)},\ v_n^{-(d_1-\tau_2)}\right)


Consistency theorem

If

    \inf\{|\mu_0 - \mu_2(\theta_2)|;\ \theta_2 \in \Theta_2\} = \inf\{|\mu_0 - \mu_1(\theta_1)|;\ \theta_1 \in \Theta_1\} = 0,

the Bayes factor satisfies

    B^T_{12} = O\left(v_n^{-(d_1-d_2)}\right)

irrespective of the true model. It is inconsistent, since it always picks the model with the smallest effective dimension.


Consistency theorem

If P^n belongs to one of the two models and if µ_0 cannot be attained by the other one, i.e.

    0 = \min\left(\inf\{|\mu_0 - \mu_i(\theta_i)|;\ \theta_i \in \Theta_i\},\ i = 1, 2\right)
      < \max\left(\inf\{|\mu_0 - \mu_i(\theta_i)|;\ \theta_i \in \Theta_i\},\ i = 1, 2\right),

then the Bayes factor B^T_{12} is consistent.


Summary statistics

Consequences on summary statistics

The Bayes factor is driven by the means µ_i(θ_i) and by the relative position of µ_0 with respect to both sets {µ_i(θ_i); θ_i ∈ Θ_i}, i = 1, 2.

For ABC, this implies that the most suitable statistics T^n are ancillary statistics with different mean values under both models.

Otherwise, if T^n asymptotically depends on some of the parameters of the models, it is quite likely that there exists θ_i ∈ Θ_i such that µ_i(θ_i) = µ_0 even though model M1 is misspecified.


Toy example: Laplace versus Gauss [1]

If

    T^n = n^{-1}\sum_{i=1}^{n} X_i^4, \quad \mu_1(\theta) = 3 + \theta^4 + 6\theta^2, \quad \mu_2(\theta) = 6 + \cdots

and the true distribution is Laplace with mean θ_0 = 1, under the Gaussian model the value θ* = (2√3 − 3)^{1/2} leads to µ_0 = µ(θ*)
[here d_1 = d_2 = d = 1]

⚠ A Bayes factor associated with such a statistic is inconsistent


[Figure: density of the posterior probabilities based on the fourth moment statistic.]


[Figure: boxplots of the posterior probabilities under M1 and M2, n = 1000.]


Details: comparison of the distributions of the posterior probabilities that the data come from a normal model (as opposed to a Laplace model) with unknown mean, when the data consist of n = 1000 observations from either a normal (M1) or a Laplace (M2) distribution with mean one, and when the summary statistic in the ABC algorithm is restricted to the empirical fourth moment. The ABC algorithm uses proposals from the prior N(0, 4) and selects the tolerance as the 1% distance quantile.


Toy example: Laplace versus Gauss [0]

When

    T(y) = \left\{\bar{y}^{(4)}_n, \bar{y}^{(6)}_n\right\}

and the true distribution is Laplace with mean θ_0 = 0, then µ_0 = 6, µ_1(θ*_1) = 6 with θ*_1 = (2√3 − 3)^{1/2}
[d_1 = 1 and d_2 = 1/2]
thus

    B_{12} \sim n^{-1/4} \to 0\ :\ \text{consistent}

Under the Gaussian model, µ_0 = 3 and µ_2(θ_2) ≥ 6 > 3 for all θ_2, so

    B_{12} \to +\infty\ :\ \text{consistent}


[Figure: density of the posterior probabilities based on the fourth and sixth moments.]


Checking for adequate statistics

After running ABC, i.e. creating reference tables of (θ_i, x_i) from both joint distributions, ABC estimates θ̂_1 and θ̂_2 are straightforward to derive.

Evaluating E^1_{θ̂1}[T(X)] and E^2_{θ̂2}[T(X)] then allows for the detection of different means under both models.


[Figure: mean and median under normal data.]


[Figure: mean and median under Laplace data.]


Embedded models

When M1 is a submodel of M2, and if the true distribution belongs to the smaller model M1, the Bayes factor is of order

    v_n^{-(d_1-d_2)}


If the summary statistic is only informative about a parameter that is the same under both models, i.e. if d_1 = d_2, then

⚠ the Bayes factor is not consistent


Else d_1 < d_2, and the Bayes factor is consistent under M1. If the true distribution is not in M1, then

⚠ the Bayes factor is consistent only if µ_1 ≠ µ_2 = µ_0


Another toy example: Quantile distribution

    Q(p; A, B, g, k) = A + B\left[1 + \frac{1 - \exp\{-g\, z(p)\}}{1 + \exp\{-g\, z(p)\}}\right]\left[1 + z(p)^2\right]^{k} z(p)

with A, B, g and k the location, scale, skewness and kurtosis parameters.

Embedded models:
- M1: g = 0 and k ~ U[−1/2, 5]
- M2: g ~ U[0, 4] and k ~ U[−1/2, 5]
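The quantile distribution is simulated by pushing uniforms through Q; since z(p) = Φ⁻¹(p), one can equivalently feed standard normal draws through the bracketed factors. The sketch below is my own minimal implementation of the formula above.

```python
import numpy as np

def gk_quantile(z, A, B, g, k):
    """Q(p; A, B, g, k) written as a function of z = z(p) = Phi^{-1}(p)."""
    skew = (1 - np.exp(-g * z)) / (1 + np.exp(-g * z))
    return A + B * (1 + skew) * (1 + z**2) ** k * z

def simulate_quantile_dist(n, A, B, g, k, rng):
    """Inverse-CDF sampling: z(U) with U uniform is a standard normal draw."""
    return gk_quantile(rng.normal(size=n), A, B, g, k)

rng = np.random.default_rng(5)
x1 = simulate_quantile_dist(10_000, 0.0, 1.0, 0.0, 0.5, rng)  # M1: g = 0, symmetric
x2 = simulate_quantile_dist(10_000, 0.0, 1.0, 2.0, 0.5, rng)  # M2: g > 0, skewed
print(x1.mean(), x2.mean())
```

With g = 0 the skewness factor vanishes and the sample is symmetric around A, while g > 0 shifts mass to the right, which is the contrast the two embedded models exploit.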


Consistency [or not]

[Figure: boxplots of the posterior probabilities of model M1 under data from M1 and M2, across sample sizes and choices of summary quantiles.]


Details: comparison of the distributions of the posterior probabilities that the data come from model M1 when the data consist of 100 observations (left column), 1000 observations (central column) and 10,000 observations (right column) from either M1 or M2, when the summary statistics in the ABC algorithm are made of the empirical quantile at level 10% (first row), the empirical quantiles at levels 10% and 90% (second row), and the empirical quantiles at levels 10%, 40%, 60% and 90% (third row), respectively. The boxplots rely on 100 replicas, and the ABC algorithms are based on 10^4 proposals from the prior, with the tolerance chosen as the 1% quantile on the distances.


Conclusions

• Model selection is feasible with ABC
• Choice of summary statistics is paramount
• At best, ABC approximates π(· | T(y)), which concentrates around µ_0
• For estimation: {θ; µ(θ) = µ_0} = θ_0
• For testing: {µ_1(θ_1), θ_1 ∈ Θ_1} ∩ {µ_2(θ_2), θ_2 ∈ Θ_2} = ∅
