
Bradley-Terry Models
Stat 557

Heike Hofmann

hofroe.net/stat557/22-bradley-terry.pdf

Outline

• Definition: Bradley-Terry

• Fitting the model

• Extension: Order Effects

• Extension: Ordinal & Nominal Response

• Repeated Measures

Bradley-Terry Model (1952)

• Idea: based on pairwise comparisons, find overall ranking

• e.g. sports teams, wine tasting, ...

Example: American League Baseball

• 1987 season: each team played every other team 13 times

7.4 Bradley-Terry Model

Situation: data come from pairwise evaluations, e.g. athletic competitions (with outcome win/lose) or pairwise comparisons of product brands (wine tasting).

Goal: find an overall ranking (of the teams, players, products).

Denote by $\Pi_{ab}$ the probability that product $a$ is preferred to product $b$. We assume that ties are not allowed, i.e.

$$\Pi_{ab} + \Pi_{ba} = 1.$$

The Bradley-Terry model then is

$$\log \frac{\Pi_{ab}}{\Pi_{ba}} = \beta_a - \beta_b.$$

If $\beta_a = \beta_b$ the two products are equal, i.e. $\Pi_{ab} = \Pi_{ba} = 0.5$; if $\beta_a > \beta_b$ then $\Pi_{ab} > 0.5 > \Pi_{ba}$. To make the model identifiable, we use the constraint $\beta_I = 0$. Then

$$\Pi_{ab} = \frac{\exp(\beta_a - \beta_b)}{1 + \exp(\beta_a - \beta_b)} = \frac{\exp(\beta_a)}{\exp(\beta_a) + \exp(\beta_b)}.$$

Since we have $\binom{I}{2}$ values $\Pi_{ab}$ from which to estimate the $(I-1)$ parameters $\beta_a$, $a = 1, \ldots, I-1$, the degrees of freedom of the Bradley-Terry model are

$$df = \binom{I}{2} - (I-1) = I(I-1)/2 - (I-1) = (I-1)(I/2 - 1) = (I-1)(I-2)/2.$$

Example: American League Baseball  Teams of Milwaukee, Detroit, Toronto, New York, Boston, Cleveland, and Baltimore are compared pairwise (data from 1987). Each team played the other teams 13 times; wins and losses are given in the table below:

                        losing team
winning team  Baltimore Cleveland Boston  NY Toronto Detroit Milwaukee
Baltimore             0         7      1   3       1       4         2
Cleveland             6         0      6   6       5       4         4
Boston               12         7      0   7       6       2         6
NY                   10         7      6   0       6       8         6
Toronto              12         8      7   7       0       6         4
Detroit               9         9     11   5       7       0         6
Milwaukee            11         9      7   7       9       7         0

This table translates into a matrix of $\binom{I}{2} = 21$ columns:

            [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10] [,11] [,12] [,13] [,14] [,15] [,16] [,17] [,18] [,19] [,20] [,21]
Milwaukee     -1   -1   -1   -1   -1   -1    0    0    0     0     0     0     0     0     0     0     0     0     0     0     0
Detroit        1    0    0    0    0    0   -1   -1   -1    -1    -1     0     0     0     0     0     0     0     0     0     0
Toronto        0    1    0    0    0    0    1    0    0     0     0    -1    -1    -1    -1     0     0     0     0     0     0
NY             0    0    1    0    0    0    0    1    0     0     0     1     0     0     0    -1    -1    -1     0     0     0
Boston         0    0    0    1    0    0    0    0    1     0     0     0     1     0     0     1     0     0    -1    -1     0
Cleveland      0    0    0    0    1    0    0    0    0     1     0     0     0     1     0     0     1     0     1     0    -1
Baltimore      0    0    0    0    0    1    0    0    0     0     1     0     0     0     1     0     0     1     0     1     0
team.A.wins    6    4    6    6    4    2    6    8    2     4     4     6     6     5     1     7     6     3     6     1     7
team.B.wins    7    9    7    7    9   11    7    5   11     9     9     7     7     8    12     6     7    10     7    12     6

Team A is coded as 1, team B as -1. Using the rows of this table as variables and defining the response variable as

response <- cbind(team.A.wins,team.B.wins)

The Bradley-Terry model gives a fit of $G^2 = 15.7$ on 15 degrees of freedom, indicating a decently fitting model.
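As a quick check (not spelled out in the notes), the deviance can be compared to a chi-square distribution with 15 degrees of freedom:

# sketch: approximate goodness-of-fit p-value for G^2 = 15.737 on 15 df
1 - pchisq(15.737, df = 15)   # roughly 0.4, so no evidence of lack of fit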


http://www.baseball-reference.com/leagues/AL/1987-standings.shtml

Bradley-Terry Model

• Let πab = probability that a beats b

• assume πab + πba = 1, i.e. no ties are allowed (for now)

• logit model log πab/πba = µa - µb

with µ1 = 0 (estimability)

Bradley-Terry Model

• πab = exp(µa)/(exp(µa) + exp(µb)) (numeric check in the sketch below)

• πab > 0.5 if µa > µb

• the Bradley-Terry model is a quasi-symmetry model
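A quick numeric check of these formulas, using made-up abilities (the values are purely illustrative):

# hypothetical abilities for two teams (illustration only)
mu_a <- 1.0
mu_b <- 0.4
pi_ab <- exp(mu_a) / (exp(mu_a) + exp(mu_b))   # P(a beats b)
pi_ba <- exp(mu_b) / (exp(mu_a) + exp(mu_b))   # P(b beats a)
pi_ab          # about 0.65, > 0.5 since mu_a > mu_b
pi_ab + pi_ba  # exactly 1: no ties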

ABL - logit model: data

abl$pair <- fsym(abl$winner, abl$loser)
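fsym() is a small helper that is not defined on the slides; a plausible sketch, consistent with the pair labels shown later (e.g. "Detroit,Milwaukee"), builds a label that does not depend on which of the two teams won:

# assumed helper (not shown on the slides): symmetric pair label
fsym <- function(a, b) {
  a <- as.character(a)
  b <- as.character(b)
  factor(ifelse(a < b, paste(a, b, sep = ","), paste(b, a, sep = ",")))
}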

require(plyr)
abl.new <- ddply(abl, .(pair), function(x) {
  dummy <- as.numeric(teams == x$winner[1]) - as.numeric(teams == x$loser[1])
  return(c(dummy, x$times))
})
names(abl.new) <- c("pair", as.character(teams), "scoreA", "scoreB")

abl.tb <- glm(cbind(scoreA, scoreB) ~ Milwaukee + Detroit + Toronto +
                NY + Boston + Cleveland - 1,
              data = abl.new, family = binomial(link = logit))
summary(abl.tb)

ABL - logit model: data

> head(abl.new)
                 pair Milwaukee Detroit Toronto NY Boston Cleveland Baltimore scoreA scoreB
1    Baltimore,Boston         0       0       0  0      1         0        -1     12      1
2 Baltimore,Cleveland         0       0       0  0      0         1        -1      6      7
3   Baltimore,Detroit         0       1       0  0      0         0        -1      9      4
4 Baltimore,Milwaukee         1       0       0  0      0         0        -1     11      2
5        Baltimore,NY         0       0       0  1      0         0        -1     10      3
6   Baltimore,Toronto         0       0       1  0      0         0        -1     12      1

ABL - logit model

glm(formula = cbind(scoreA, scoreB) ~ Milwaukee + Detroit + Toronto + NY + Boston + Cleveland - 1, family = binomial(link = logit), data = abl.new)

Deviance Residuals:
     Min       1Q   Median       3Q      Max
-1.50067 -0.52962 -0.06604  0.16281  2.06170

Coefficients:
          Estimate Std. Error z value Pr(>|z|)
Milwaukee   1.5814     0.3433   4.607 4.09e-06 ***
Detroit     1.4364     0.3396   4.230 2.34e-05 ***
Toronto     1.2945     0.3367   3.845 0.000121 ***
NY          1.2476     0.3359   3.715 0.000203 ***
Boston      1.1077     0.3339   3.318 0.000908 ***
Cleveland   0.6839     0.3319   2.061 0.039345 *
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 49.699  on 21  degrees of freedom
Residual deviance: 15.737  on 15  degrees of freedom
AIC: 87.324

Number of Fisher Scoring iterations: 4
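The estimated abilities translate directly into fitted win probabilities via the inverse logit; a small sketch using the estimates above (Baltimore is the reference team with coefficient 0):

# sketch: fitted win probabilities from the coefficients above
plogis(1.5814 - 1.1077)   # P(Milwaukee beats Boston), about 0.62
plogis(1.5814 - 0)        # P(Milwaukee beats Baltimore), about 0.83
13 * plogis(1.5814)       # expected Milwaukee wins in 13 games vs Baltimore, about 10.8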

> head(abl)
      loser    winner times                pair
2   Detroit Milwaukee     7   Detroit,Milwaukee
3   Toronto Milwaukee     9   Milwaukee,Toronto
4        NY Milwaukee     7        Milwaukee,NY
5    Boston Milwaukee     7    Boston,Milwaukee
6 Cleveland Milwaukee     9 Cleveland,Milwaukee
7 Baltimore Milwaukee    11 Baltimore,Milwaukee

ABL - QS model: data

glm(formula = times ~ pair - 1 + winner, family = poisson(link = log), data = abl)

Coefficients:
                        Estimate Std. Error z value Pr(>|z|)
pairBaltimore,Boston      2.7532     0.3973   6.930 4.22e-12 ***
pairBaltimore,Cleveland   3.0539     0.3981   7.671 1.71e-14 ***
...
pairMilwaukee,NY          2.0248     0.3061   6.615 3.71e-11 ***
pairMilwaukee,Toronto     2.0050     0.3076   6.518 7.13e-11 ***
pairNY,Toronto            2.1818     0.3870   5.638 1.72e-08 ***
winnerDetroit            -0.1449     0.3111  -0.466  0.64131
winnerToronto            -0.2869     0.3103  -0.925  0.35520
winnerNY                 -0.3337     0.3102  -1.076  0.28198
winnerBoston             -0.4737     0.3105  -1.525  0.12718
winnerCleveland          -0.8975     0.3166  -2.835  0.00458 **
winnerBaltimore          -1.5814     0.3433  -4.607 4.09e-06 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for poisson family taken to be 1)

    Null deviance: 609.702  on 42  degrees of freedom
Residual deviance:  15.737  on 15  degrees of freedom
AIC: 222.05

Number of Fisher Scoring iterations: 5
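The two fits agree up to the choice of reference team: the logit fit uses Baltimore as baseline, the QS fit uses Milwaukee. A small check (assuming the logit fit was stored as abl.tb above):

# winner effects in the QS fit are the logit abilities re-centered at Milwaukee
-coef(abl.tb)["Milwaukee"]                          # -1.5814, matches winnerBaltimore
coef(abl.tb)["Boston"] - coef(abl.tb)["Milwaukee"]  # -0.4737, matches winnerBoston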

ABL - QS model

> library(BradleyTerry2)
> data(baseball, package = "BradleyTerry2")
> head(baseball)
  home.team away.team home.wins away.wins
1 Milwaukee   Detroit         4         3
2 Milwaukee   Toronto         4         2
3 Milwaukee  New York         4         3
4 Milwaukee    Boston         6         1
5 Milwaukee Cleveland         4         2
6 Milwaukee Baltimore         6         0

ABL - BT model: data

BTm(outcome = cbind(home.wins, away.wins), player1 = home.team, player2 = away.team, id = "team", data = baseball)

Deviance Residuals:
    Min      1Q  Median      3Q     Max
-1.6539 -0.0508  0.4133  0.9736  2.5509

Coefficients:
              Estimate Std. Error z value Pr(>|z|)
teamBoston      1.1077     0.3339   3.318 0.000908 ***
teamCleveland   0.6839     0.3319   2.061 0.039345 *
teamDetroit     1.4364     0.3396   4.230 2.34e-05 ***
teamMilwaukee   1.5814     0.3433   4.607 4.09e-06 ***
teamNew York    1.2476     0.3359   3.715 0.000203 ***
teamToronto     1.2945     0.3367   3.845 0.000121 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 78.015  on 42  degrees of freedom
Residual deviance: 44.053  on 36  degrees of freedom
AIC: 140.52

Number of Fisher Scoring iterations: 4

ABL - BT model

ABL

• Three different solutions:

• same fits

• different residual/null deviances

• different degrees of freedom

???

Bradley-Terry Model

• Assume, X has J categories (number of teams)

• There are a total of J(J-1)/2 pairs of categories

• (J-1) parameters are fit

• degrees of freedom: (J-1)(J-2)/2

ABL

• For 7 teams we have

• 21 pairs of teams

• we fit 6 parameters

• resulting in 15 degrees of freedom

ABL

• logit has correct deviance and degrees of freedom

• BTm uses extended data set (comes with package, regards home/away teams)

• the loglinear (QS) model computes the null deviance and its degrees of freedom differently; the residual deviance and degrees of freedom are the same as in the logit model (i.e. correct)

Home Advantage

• most sports show a home advantage

• 1987 season

                        losing team
winning team  Milwaukee Detroit Toronto   NY Boston Cleveland Baltimore
Milwaukee           0.0     7.0     7.4  7.6    8.0       9.2      10.8
Detroit             6.0     0.0     7.0  7.1    7.6       8.8      10.5
Toronto             5.6     6.0     0.0  6.7    7.1       8.4      10.2
NY                  5.4     5.9     6.3  0.0    7.0       8.3      10.1
Boston              5.0     5.4     5.9  6.0    0.0       7.9       9.8
Cleveland           3.8     4.2     4.6  4.7    5.1       0.0       8.6
Baltimore           2.2     2.5     2.8  2.9    3.2       4.4       0.0

The Bradley-Terry model is a quasi-symmetry model, since for QS we have

$$\log \mu_{ab} = \lambda + \lambda^A_a + \lambda^B_b + \lambda^{AB}_{ab},$$

with $\lambda^{AB}_{ab} = \lambda^{BA}_{ba}$ for all $a, b$. Then

$$\log \frac{\Pi_{ab}}{\Pi_{ba}} = \log \frac{\mu_{ab}}{\mu_{ba}}
= \lambda + \lambda^A_a + \lambda^B_b + \lambda^{AB}_{ab} - (\lambda + \lambda^A_b + \lambda^B_a + \lambda^{AB}_{ba})
= \underbrace{(\lambda^A_a - \lambda^A_b)}_{\beta_a} - \underbrace{(\lambda^B_a - \lambda^B_b)}_{\beta_b}.$$

Home Team Advantage

In some comparisons the order of comparison makes a difference: teams have an advantage if they play at home, and at a wine tasting the first wine tasted is usually thought better than the others. To account for this home team advantage, we extend the Bradley-Terry model to

$$\log \frac{\Pi_{ab}}{\Pi_{ba}} = \alpha + (\beta_a - \beta_b).$$

If $\alpha$ is significantly greater than 0, we do have a home team advantage.

Example: American League Baseball  For the 1987 baseball season we have a table containing wins–losses for the home and away teams:

                                  Away Team
Home Team  Milwaukee Detroit Toronto New York Boston Cleveland Baltimore
Milwaukee       –      4-3     4-2     4-3      6-1     4-2       6-0
Detroit        3-3      –      4-2     4-3      6-0     6-1       4-3
Toronto        2-5     4-3      –      2-4      4-3     4-2       6-0
New York       3-3     5-1     2-5      –       4-3     4-2       6-1
Boston         5-1     2-5     3-3     4-2       –      5-2       6-0
Cleveland      2-5     3-3     3-4     4-3      4-2      –        2-4
Baltimore      2-5     1-5     1-6     2-4      1-6     3-4        –

e.g. New York lost 4–2 away at Boston, and won 4–3 at home against Boston.

Fitting the extended Bradley-Terry model yields:

fit.BTO <- glm(response ~ Milwaukee + Detroit + Toronto + NY + Boston + Cleveland,
               family = binomial)


Bradley Terry with Order Effects

• assume that first team plays at home

• let πab be the probability that team a beats team b when team a goes first

• logit model log πab/πba = µ + µa - µb

• if µ significantly > 0 there is a home advantage

ABL

baseball$home.team <- data.frame(team = baseball$home.team, at.home = 1)
baseball$away.team <- data.frame(team = baseball$away.team, at.home = 0)
baseballModel2 <- update(baseballModel1, formula = ~ team + at.home)
summary(baseballModel2)
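The update() call assumes the plain fit was stored earlier; a plausible sketch of that step (the name baseballModel1 is taken from the anova output below, the assignment itself is not shown on the slides):

# assumed earlier step: the plain Bradley-Terry fit, before adding at.home
baseballModel1 <- BTm(outcome = cbind(home.wins, away.wins),
                      player1 = home.team, player2 = away.team,
                      id = "team", data = baseball)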

BradleyTerry2 package

> anova(baseballModel1, baseballModel2)
Analysis of Deviance Table

Response: cbind(home.wins, away.wins)

Model 1: ~team
Model 2: ~team + at.home
  Resid. Df Resid. Dev Df Deviance
1        36     44.053
2        35     38.643  1   5.4106

BTm(outcome = cbind(home.wins, away.wins), player1 = home.team, player2 = away.team, formula = ~team + at.home, id = "team", data = baseball)

Coefficients:
              Estimate Std. Error z value Pr(>|z|)
teamBoston      1.1438     0.3378   3.386 0.000710 ***
teamCleveland   0.7047     0.3350   2.104 0.035417 *
teamDetroit     1.4754     0.3446   4.282 1.85e-05 ***
teamMilwaukee   1.6196     0.3474   4.662 3.13e-06 ***
teamNew York    1.2813     0.3404   3.764 0.000167 ***
teamToronto     1.3271     0.3403   3.900 9.64e-05 ***
at.home         0.3023     0.1309   2.308 0.020981 *
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 78.015  on 42  degrees of freedom
Residual deviance: 38.643  on 35  degrees of freedom
AIC: 137.11

Number of Fisher Scoring iterations: 4

ABL

ABL - logit with order

response <- cbind(c(4,4,4,6,4,6, 3,4,4,6,6,4, 2,4,2,4,4,6, 3,5,2,4,4,6,
                    5,2,3,4,5,6, 2,3,3,4,4,2, 2,1,1,2,1,3),
                  c(3,2,3,1,2,0, 3,2,3,0,1,3, 5,3,4,3,2,0, 3,1,5,3,2,1,
                    1,5,3,2,2,0, 5,3,4,3,2,4, 5,5,6,4,6,4))  # 42 pair sets

xabl <- expand.grid(teamB = teams, teamA = teams)
idx <- with(xabl, which(teamA == teamB))
xabl <- xabl[-idx, ]

X <- matrix(0, nrow = nrow(xabl), ncol = length(teams))
for (i in 1:nrow(X)) {
  X[i, as.numeric(xabl$teamA)[i]] <- 1
  X[i, as.numeric(xabl$teamB)[i]] <- -1
}
X <- data.frame(X)
names(X) <- as.character(teams)

ABL - home advantage

      teamB     teamA scoreA scoreB Milwaukee Detroit Toronto NY Boston Cleveland Baltimore
2   Detroit Milwaukee      4      3         1      -1       0  0      0         0         0
3   Toronto Milwaukee      4      2         1       0      -1  0      0         0         0
4        NY Milwaukee      4      3         1       0       0 -1      0         0         0
5    Boston Milwaukee      6      1         1       0       0  0     -1         0         0
6 Cleveland Milwaukee      4      2         1       0       0  0      0        -1         0
7 Baltimore Milwaukee      6      0         1       0       0  0      0         0        -1

fit.BTO <- glm(cbind(scoreA, scoreB) ~ 1 + Milwaukee + Detroit + Toronto +
                 NY + Boston + Cleveland,
               family = binomial(link = logit), data = xabl)

glm(formula = cbind(scoreA, scoreB) ~ 1 + Milwaukee + Detroit + Toronto + NY + Boston + Cleveland, family = binomial(link = logit), data = xabl)

Coefficients:
            Estimate Std. Error z value Pr(>|z|)
(Intercept)   0.3023     0.1309   2.308 0.020981 *
Milwaukee     1.6196     0.3474   4.662 3.13e-06 ***
Detroit       1.4754     0.3446   4.282 1.85e-05 ***
Toronto       1.3271     0.3403   3.900 9.64e-05 ***
NY            1.2813     0.3404   3.764 0.000167 ***
Boston        1.1438     0.3378   3.386 0.000710 ***
Cleveland     0.7047     0.3350   2.104 0.035417 *
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 73.516  on 41  degrees of freedom
Residual deviance: 38.643  on 35  degrees of freedom
AIC: 137.11

Number of Fisher Scoring iterations: 4
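A quick interpretation sketch of the order effect, using the intercept estimate above: for two equally strong teams the home side wins with probability plogis(α).

# home-advantage sketch: win probability for the home team when beta_a = beta_b
plogis(0.3023)   # about 0.575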

ABL - home advantage

Bradley Terry - Extensions

• Ordinal Response: cumulative logit model, logit P(Y ≤ j) = µj + µa - µb

e.g. “loss”, “tie”, “win”

• Nominal Response: baseline-category logit model, log P(Y = j)/P(Y = J) = µj + µaj - µbj

Repeated Measures Models

• Extension of matched pairs data

• Multiple (T ≥ 3) measurements observed for same individual, e.g. individuals’ weekly progress

• Measurements for cluster of individuals (T ≥ 3), e.g. one litter, teeth at dentist’s visit, ...

Example: Drug Comparisons

• Cross-over effect of drugs A, B, C

• Interested in marginal distributions P(A=S), P(B=S), P(C=S)

8 Repeated Response Data

Many studies observe individuals repeatedly, e.g. longitudinal studies. For these data we will mainly be interested in the marginal distributions.

Let $(Y_1, Y_2, \ldots, Y_T)$ be a tuple of binary response variables observed at (time) points $t = 1, \ldots, T$ with $T > 2$. We are interested in the probability of success for each $t$, i.e. in $P(Y_t = 1)$. A logit model is then defined as

$$\text{logit } P(Y_t = 1) = \alpha + \beta_t,$$

with constraint $\beta_T = 0$ (or $\alpha = 0$).

If $\beta_1 = \beta_2 = \ldots = \beta_T = 0$ we observe marginal homogeneity, i.e. then $P(Y_1 = 1) = P(Y_2 = 1) = \ldots = P(Y_T = 1)$.

Example: Crossover Study of Drugs  Drugs A, B, and C are tested on 46 individuals with a chronic disease in a cross-over study, i.e. each individual is treated for some time with each drug. Responses are binary (success/failure) for each drug, giving the data set

A B C count

S S S 6

S S F 16

S F S 2

S F F 4

F S S 2

F S F 4

F F S 6

F F F 6

Total 46

One question of interest for these data is whether all of the drugs are equally effective or whether there are differences. This question translates to whether marginal homogeneity is present or not.

From the raw data we get estimates for the effectiveness of each drug as $P(A = S) = 28/46 = 0.61$, $P(B = S) = 28/46 = 0.61$, and $P(C = S) = 16/46 = 0.35$.
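A small sketch of this computation from the table above:

# marginal success proportions computed from the 2^3 table
counts <- c(SSS = 6, SSF = 16, SFS = 2, SFF = 4,
            FSS = 2, FSF = 4, FFS = 6, FFF = 6)
sum(counts[c("SSS", "SSF", "SFS", "SFF")]) / sum(counts)   # P(A = S) = 28/46 = 0.61
sum(counts[c("SSS", "SSF", "FSS", "FSF")]) / sum(counts)   # P(B = S) = 28/46 = 0.61
sum(counts[c("SSS", "SFS", "FSS", "FFS")]) / sum(counts)   # P(C = S) = 16/46 = 0.35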

In a comparison of A and C we are only interested in the sub-table

          drug C
drug A     F    S
     F    10    8
     S    20    8

Using the McNemar test statistic

$$\left(\frac{n_{21} - n_{12}}{\sqrt{n_{12} + n_{21}}}\right)^2 = \left(\frac{12}{\sqrt{28}}\right)^2 = 5.14 \quad \text{on } df = 1.$$
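The same statistic can be computed in R from the sub-table (a small sketch):

# McNemar statistic for the A vs C sub-table
tab <- matrix(c(10, 20, 8, 8), nrow = 2,
              dimnames = list("drug A" = c("F", "S"), "drug C" = c("F", "S")))
(tab["S", "F"] - tab["F", "S"])^2 / (tab["S", "F"] + tab["F", "S"])   # 5.14
mcnemar.test(tab, correct = FALSE)                                    # same statistic, 1 df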

From the raw data we can also find Wald CIs (using a Bonferroni adjustment for multiple testing). For a significance level of $\alpha = 0.05$ we need $(1 - \alpha/3) \cdot 100\% \approx 98.3\%$ CIs, and we get for the difference between A and C: $0.261 \pm 2.39 \cdot 0.108 = (0.002, 0.520)$.
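A sketch of this interval, assuming the usual variance formula for a difference of two dependent proportions (the formula itself is not stated in the notes):

# Bonferroni-adjusted Wald CI for P(A = S) - P(C = S)
n   <- 46
pA  <- 28 / n
pC  <- 16 / n
pAC <- 8 / n                                    # P(A = S, C = S) from the sub-table
se  <- sqrt((pA * (1 - pA) + pC * (1 - pC) - 2 * (pAC - pA * pC)) / n)   # about 0.108
z   <- qnorm(1 - 0.05 / (2 * 3))                # Bonferroni over 3 comparisons, about 2.39
(pA - pC) + c(-1, 1) * z * se                   # roughly (0.002, 0.520)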

For a modeling approach to marginal homogeneity we need a theoretical excursion on generalized loglinear models.

Generalized Loglinear Models

Generalized loglinear models are written as

$$C \log(A\mu) = X\beta.$$

If $C$ and $A$ are identity matrices, the generalized loglinear model becomes a standard loglinear model.
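As a minimal illustration (not from the notes), marginal homogeneity in a 2 × 2 table with cell means $\mu_{11}, \mu_{12}, \mu_{21}, \mu_{22}$ fits this form: take

$$A = \begin{pmatrix} 1 & 1 & 0 & 0 \\ 1 & 0 & 1 & 0 \end{pmatrix}, \qquad C = \begin{pmatrix} 1 & -1 \end{pmatrix}, \qquad \mu = (\mu_{11}, \mu_{12}, \mu_{21}, \mu_{22})',$$

so that

$$C \log(A\mu) = \log\frac{\mu_{11} + \mu_{12}}{\mu_{11} + \mu_{21}},$$

and the model $C \log(A\mu) = 0$ states exactly $P(Y_1 = 1) = P(Y_2 = 1)$.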


Multiple Binary Response

• Yt binary response for time points t = 1, ..., T

• logit model: logit P(Yt = 1) = α + βt; estimability: βT = 0 (or α = 0)

• Marginal homogeneity: β1 = ... = βT = 0


Drugs Crossover

> head(drugs.m)
  count id variable value
1     6  1        A     Y
2    16  2        A     Y
3     2  3        A     Y
4     4  4        A     Y
5     2  5        A     N
6     4  6        A     N
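The construction of drugs.m is not shown on the slides; a plausible sketch that reproduces head(drugs.m) above (the column layout and the reshape2 call are assumptions):

# assumed data prep: expand the 2^3 table into long format, one row per (pattern, drug)
library(reshape2)
drugs <- data.frame(A = rep(c("Y", "N"), each = 4),
                    B = rep(rep(c("Y", "N"), each = 2), 2),
                    C = rep(c("Y", "N"), 4),
                    count = c(6, 16, 2, 4, 2, 4, 6, 6),
                    id = 1:8)
drugs.m <- melt(drugs, id.vars = c("count", "id"),
                measure.vars = c("A", "B", "C"))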

glm(formula = value ~ variable - 1, family = binomial(link = logit), data = drugs.m, weights = count)

Deviance Residuals:
   Min      1Q  Median      3Q     Max
-3.698  -2.740  -0.220   2.152   3.986

Coefficients:
          Estimate Std. Error z value Pr(>|z|)
variableA   0.4418     0.3021   1.462   0.1436
variableB   0.4418     0.3021   1.462   0.1436
variableC  -0.6286     0.3096  -2.031   0.0423 *
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 191.31  on 24  degrees of freedom
Residual deviance: 182.60  on 21  degrees of freedom
AIC: 188.60

Number of Fisher Scoring iterations: 4

Drugs Crossover

Marginal Homogeneity

> anova(drugs.null, drugs.mh, test="Chisq")
Analysis of Deviance Table

Model 1: value ~ 1
Model 2: value ~ variable - 1
  Resid. Df Resid. Dev Df Deviance P(>|Chi|)
1        23     191.05
2        21     182.60  2    8.451   0.01462 *
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

The drop in deviance of 8.45 on 2 df (p ≈ 0.015) indicates that marginal homogeneity does not hold, i.e. the three drugs are not all equally effective.