Some methodological issues in value of information analysis: an application of partial EVPI and EVSI to an economic model of Zanamivir
Karl Claxton and Tony Ades
Partial EVPIs
Light at the end of the tunnel……
…………..maybe it's a train
A simple model of Zanamivir
[Decision tree: influenza positive with pip = 0.340; under Zanamivir, complications pcz = 0.367 and hospitalisation phz = 0.018; under standard care, pcs = 0.452 and phs = 0.025]
Cost-effectiveness plane
[Scatter of incremental cost (£0 to £35) against incremental effect (-0.0004 to 0.0016 quality adjusted life years); corr = -0.06]
Cost-effectiveness acceptability curve
[Probability Zanamivir is cost-effective (0 to 1) against monetary value of health outcome (£0 to £100,000); ICER = £51,376; P[inb > 0] = 0.461]
Distribution of inb
[Histogram of incremental net benefit (inb) from -£40 to £40, approximately normal: mean = -£0.51, SD = £12.52]
EVPI for the decision
EVPI = EV(perfect information) - EV(current information)
[Decision trees: for EV(current) the Zan/Std choice is made before pip resolves; for EV(perfect) the choice is made after pip resolves]
Expected value of perfect information (EVPI)
[EVPI (£0 to £6) against monetary value of health outcome (£0 to £100,000)]
Partial EVPI
[Decision trees: for EV(perfect) pip resolves before the Zan/Std choice, with the remaining inputs p(..) still uncertain; for EV(current) the choice precedes the resolution of pip]
EVPIpip = EV(perfect information about pip) - EV(current information)
= expectation, over all resolutions of pip, of [EV(optimal decision for a particular resolution of pip) - EV(prior decision for the same resolution of pip)]
Partial EVPI

Some implications:
- information about an input is only valuable if it changes our decision
- information is only valuable if pip does not resolve at its expected value

General solution:
- linear and non-linear models
- inputs can be (spuriously) correlated
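As a sketch, the general two-level Monte Carlo solution for a partial EVPI can be written as follows. The two-strategy net-benefit function and the Beta/normal priors here are illustrative stand-ins, not the Zanamivir model:

```python
import numpy as np

rng = np.random.default_rng(1)

def net_benefit(pip, other):
    # Toy net benefit for two strategies (columns), standing in for the
    # economic model: strategy 1 gains with pip, strategy 0 with `other`.
    return np.column_stack([3.0 * other, 10.0 * pip - 2.0 * other])

# EV(current information): decide before anything resolves.
pip = rng.beta(3.4, 6.6, 20_000)       # illustrative prior on pip, mean 0.34
other = rng.normal(1.0, 0.3, 20_000)   # all remaining uncertain inputs
ev_current = net_benefit(pip, other).mean(axis=0).max()

# EV(perfect information about pip): outer loop over resolutions of pip,
# inner expectation over the inputs that remain uncertain.
ev_perfect_pip = 0.0
for p in rng.beta(3.4, 6.6, 2_000):
    inner = rng.normal(1.0, 0.3, 1_000)
    ev_perfect_pip += net_benefit(p, inner).mean(axis=0).max()
ev_perfect_pip /= 2_000

partial_evpi_pip = ev_perfect_pip - ev_current  # >= 0 up to Monte Carlo error
```

This is the brute-force nested expectation; it makes no linearity or independence assumptions about the model it wraps.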
Partial EVPI
[Partial EVPI (£0 to £4) against monetary value of health outcome (£20,000 to £70,000), one curve per input: EVPIpip, EVPIpcz, EVPIphz, EVPIpcs, EVPIphs, EVPIupa, EVPIrsd]
Felli and Hazen (98) "short cut": EVPIpip = EVPI when all other inputs are resolved at their expected values

Appears counter-intuitive:
- we resolve all other uncertainties and then ask what the value of pip is, i.e. a "residual" EVPIpip?

But:
- resolving at the expected value does not give us any information

Correct if:
- linear relationship between inputs and net benefit
- inputs are not correlated
                 pip      pcz      phz      pcs      phs      upa      rsd
Partial EVPI     1.02039  0.00688  0.53597  0.00000  0.95540  1.81184  3.54898
Felli and Hazen  1.02752  0.00363  0.50388  0.00000  0.92898  1.77514  3.52854
So why different values?
- The model is linear
- The inputs are independent?
Spurious correlation
      pip    pcz    phz    pcs    phs    upa
pcz   0.12
phz   0.00  -0.04
pcs   0.02   0.01   0.08
phs   0.02  -0.03   0.02   0.08
upa   0.05   0.00   0.06  -0.02   0.03
rsd  -0.06   0.02   0.00   0.00   0.01  -0.01
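Correlations of this size are exactly what finite sampling produces even when the inputs are truly independent; a quick illustrative sketch (independent uniform draws, not the model's actual distributions):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000  # a typical number of simulation draws

# Seven inputs sampled independently: their true correlations are all zero
draws = rng.random((n, 7))
corr = np.corrcoef(draws, rowvar=False)
off_diag = corr[~np.eye(7, dtype=bool)]

# Sample correlations of order +/- 1/sqrt(n) ~ 0.03 appear anyway,
# comparable to the entries in the matrix above
print(np.abs(off_diag).max())
```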
"Residual" EVPI
- wrong current-information position for partial EVPI
- what is the value of resolving pip when we already have perfect information about all other inputs?
- expect residual EVPIpip < partial EVPIpip
- EVPI when we resolve all other inputs at each realisation?
               pip      pcz      phz      pcs      phs      upa      rsd
Partial EVPI   1.02039  0.00688  0.53597  0.00000  0.95540  1.81184  3.54898
Residual EVPI  0.17985  0.00201  0.06510  0.00000  0.14866  0.49865  1.98472
Thompson and Evans (96) and Thompson and Graham (96)

inb simplifies to a single expression, which can be rearranged as a function of each input in turn: pip, pcz, phz, rsd, upd, phs, pcs
[The rearranged inb equations were not recovered in extraction]
- Felli and Hazen (98) used a similar approach
- Thompson and Evans (96) is a linear model
- emphasis on EVPI when the other inputs are set to their joint expected value
- requires payoffs as a function of the input of interest
Reduction in cost of uncertainty
- intuitive appeal
- consistent with conditional probabilistic analysis

RCU_E(pip) = EVPI - EVPI(pip resolved at its expected value)

But:
- pip may not resolve at E(pip), and prior decisions may change
- this is the value of perfect information if we are forced to stick to the prior decision, i.e. the value of a reduction in variance
- expect RCU_E(pip) < partial EVPIpip
               pip      pcz      phz      pcs      phs      upa      rsd
Partial EVPI   1.02039  0.00688  0.53597  0.00000  0.95540  1.81184  3.54898
ROL_E(pip)     0.32380 -0.00035  0.03770  0.00099  0.18698  0.56995  2.13151
Reduction in cost of uncertainty
- spurious correlation again?

RCUpip = E_pip[EVPI - EVPI(given realisation of pip)] = partial EVPI

RCUpip = EVPI - E_pip[EVPI(given realisation of pip)]
= [EV(perfect information) - EV(current information)] - E_pip[EV(perfect information, pip resolved) - EV(current information, pip resolved)]

               pip      pcz      phz      pcs      phs      upa      rsd
Partial EVPI   1.02039  0.00688  0.53597  0.00000  0.95540  1.81184  3.54898
ROLpip         1.16434  0.00451  0.50858  0.00060  0.99372  1.88313  3.69577
               pip      pcz      phz      pcs      phs      upa      rsd
Partial EVPI   1.02039  0.00688  0.53597  0.00000  0.95540  1.81184  3.54898
E[ROLpip]      1.02039  0.00688  0.53597  0.00000  0.95540  1.81184  3.54898
EVPI for strategies

Value of including a strategy?
- EVPI with and without the strategy included
- demonstrates bias
- difference = EVPI associated with the strategy?

EV(perfect information, all strategies included) - EV(perfect information, strategy excluded)

E_all inputs[Max_d(NB_d | all inputs)] - E_all inputs[Max_d-1(NB_d-1 | all inputs)]
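A minimal sketch of this comparison, using hypothetical net benefits for three strategies evaluated on shared parameter draws (toy numbers, not the Zanamivir model):

```python
import numpy as np

rng = np.random.default_rng(2)
theta = rng.normal(0.0, 1.0, 100_000)  # shared uncertain inputs

# Hypothetical net benefits, one column per strategy, evaluated on the
# same draws so the with/without comparison is like-for-like
nb = np.column_stack([
    np.zeros_like(theta),  # do nothing
    theta - 0.2,           # strategy A
    0.5 * theta + 0.1,     # strategy B: the candidate to include/exclude
])

ev_perfect_all = nb.max(axis=1).mean()           # E[Max_d(NB_d | all inputs)]
ev_perfect_excl = nb[:, :2].max(axis=1).mean()   # B excluded: E[Max_d-1(...)]

# Adding a strategy can only raise the inner max, so this is always >= 0
value_of_including_B = ev_perfect_all - ev_perfect_excl
```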
Conclusions on partials

Life is beautiful …… Hegel was right: progress is a dialectic
Maths don't lie …… but brute-force empiricism can mislead
EVSI …… it may well be a train
Hegel is right again: contradiction follows synthesis
EVSI for model inputs
- generate a predictive distribution for a sample of n
- sample from the predictive and prior distributions to form a preposterior
- propagate the preposterior through the model
- value of information for a sample of n
- find the n* that maximises EVSI - cost of sampling
EVSI for pip

Epidemiological study of size n:
- prior: pip ~ Beta(α, β)
- predictive: rip ~ Bin(n, pip)
- preposterior: pip' = (pip(α + β) + rip) / ((α + β) + n)

- as n increases, var(rip/n) falls towards var(pip)
- var(pip') < var(pip), and falls with n
- the pip' are the possible posterior means
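A sketch of this Beta-Binomial preposterior; the prior parameters (chosen to give mean 0.34) and the study size are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

a, b = 3.4, 6.6   # prior pip ~ Beta(a, b), mean a/(a+b) = 0.34 (illustrative)
n = 100           # proposed epidemiological sample size
m = 10_000        # Monte Carlo draws

# Predictive: draw pip from the prior, then rip | pip ~ Binomial(n, pip)
pip = rng.beta(a, b, m)
rip = rng.binomial(n, pip)

# Preposterior: the possible posterior means after observing rip of n
pip_post = (a + rip) / (a + b + n)

# E(pip') matches E(pip), but var(pip') < var(pip) and shrinks as n grows
print(pip_post.mean(), pip_post.var(), pip.var())
```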
EVSIpip
= reduction in the cost of uncertainty due to n observations on pip
= difference in partials (EVPIpip - EVPIpip'):

E_pip[E_other[Max_d(NB_d | other, pip)] - Max_d E_other(NB_d | other, pip)] -
E_pip'[E_other[Max_d(NB_d | other, pip')] - Max_d E_other(NB_d | other, pip')]

pip' has smaller variance, so any realisation is less likely to change the decision:
E_pip[E_other[Max_d(NB_d | other, pip)]] > E_pip'[E_other[Max_d(NB_d | other, pip')]]

E(pip') = E(pip), so:
E_pip[Max_d E_other(NB_d | other, pip)] = E_pip'[Max_d E_other(NB_d | other, pip')]
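Putting the pieces together, EVSIpip can be estimated by running the outer loop over the preposterior pip' instead of the prior. The sketch below uses a toy linear net-benefit function (so plugging in the posterior mean pip' is exact) and hypothetical priors, not the Zanamivir model:

```python
import numpy as np

rng = np.random.default_rng(4)
a, b, n, m = 3.4, 6.6, 320, 2_000  # illustrative prior, study size, MC draws

def net_benefit(pip, other):
    # Toy linear net benefit for two strategies (columns)
    return np.column_stack([3.0 * other, 10.0 * pip - 2.0 * other])

# EV(current information): decide before the study
pip0 = rng.beta(a, b, 20_000)
oth0 = rng.normal(1.0, 0.3, 20_000)
ev_current = net_benefit(pip0, oth0).mean(axis=0).max()

# Preposterior draws of the posterior mean pip' for a study of size n
pip_prior = rng.beta(a, b, m)
rip = rng.binomial(n, pip_prior)
pip_post = (a + rip) / (a + b + n)

# EV(sample information): decide after the study, i.e. separately for each
# realisation of pip', with the inner expectation over `other`
ev_sample = 0.0
for p in pip_post:
    other = rng.normal(1.0, 0.3, 1_000)
    ev_sample += net_benefit(p, other).mean(axis=0).max()
ev_sample /= m

evsi_pip = ev_sample - ev_current  # rises towards partial EVPIpip as n grows
```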
Expected Value of Sample Information (EVSIpip)
[Value of information (0 to 1.00) against sample size n (0 to 1600): EVSIpip rises with n from zero towards the prior partial EVPI; the posterior partial EVPI is also shown]
EVSIpip

Why not the difference in prior and preposterior EVPI?
- effect of pip' only through var(NB)
- we change the decision for the realisation of pip' once the study is completed
- the difference in prior and preposterior EVPI will underestimate EVSIpip
n           110         160       320       1750
EVSIpip     0.37790089  0.466544  0.635703  0.933119
EVPI-EVPI'  0.25157296  0.268576  0.292854  0.31551
Implications
- EVSI for any input that is conjugate
- generate the preposterior for the log odds ratio for complication and hospitalisation, etc.
- trial design for an individual endpoint (rsd)
- trial designs with a number of endpoints (pcz, phz, upd, rsd)
- n for an endpoint will be uncertain (n_pcz = n*pip, etc)
- consider optimal n and allocation (search for n*)
- combine different designs, eg: obs study (pip) and trial (upd, rsd), or obs study (pip, upd) and trial (rsd)…. etc