
Lecture 10 – Single Factor Designs

Factor

New name for nominal/categorical independent variable

In ANOVA literature, categorical independent variables are called Factors. In that same literature, quantitative independent variables are called Covariates.

The values of a factor are called the Levels of the Factor.

So, a Factor is a nominal (aka categorical) independent variable.

One Factor design: Research involving only one nominal IV, i.e., one factor

Three general types of design

1. Between-subjects, no matching.

Different groups of participants. No attempt to match.

2. Between-subjects, participants matched.

With two groups, fairly easy. With more than two groups, gets harder.

The matching variable must be correlated with the DV.

3. Within-subjects design, same people serve at all levels of the factor.

These are sometimes called repeated measures designs.

This should seem familiar, because it’s the same trichotomy we encountered in the Comparing Two Groups lecture.


The Various Tests Comparing K Research Conditions, by Design and Dependent Variable Characteristics

Design                         Interval/Ratio DV                  Ordinal DV       Categorical DV

Independent groups /           US (unimodal symmetric):           Kruskal-Wallis   Crosstabulation with
Between-subjects design        One-way between-subjects ANOVA                      chi-square test
                               Skewed: Kruskal-Wallis

Matched participants or        Repeated measures ANOVA            Friedman ANOVA   Advanced analyses
Within-subjects /
Repeated measures design

If this looks familiar, it should. It’s the same table presented in Lecture 9 on Two group comparisons, except that it now covers comparisons of two or more groups.


One-Way Between-subjects Analysis of Variance

Comparing the means of 3 or more groups.

Suppose there are three groups – Group A, Group B, and Group C.

Why not just perform multiple t-tests?

t-test comparing the mean of Group A with the mean of Group B
t-test comparing the mean of Group A with the mean of Group C
t-test comparing the mean of Group B with the mean of Group C

These 3 t-tests exhaust the possible pairwise comparisons among 3 groups.

Problem with the above method: it is very difficult to compute the correct overall p-value for the collection of tests, which makes the method difficult to use in hypothesis testing.
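To see the scale of the problem, a hedged back-of-the-envelope check in R (my illustration, not part of the handout): if each of the three t-tests were run at alpha = .05 and the tests were independent, the chance of at least one false rejection across the set would be

# probability of at least one Type I error in 3 independent tests at alpha = .05
1 - (1 - 0.05)^3    # about .14, not the nominal .05

(The three tests are not actually independent, which is exactly why the correct overall p-value is hard to compute.)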

What is needed is a single omnibus test.

Such a test was provided by Sir R. A. Fisher. It’s based on the following idea


Consider 3 populations whose means are all equal:

Now consider samples from each of those populations

[Dot plots: the individual scores in each of the three samples, one row per sample. The spread within each sample is similar.]

Finally, consider the means of the three samples . . .

[Dot plot: the three sample means, clustered close together.]

Now think about the variability in the above dots.

Within-group variability

There is variability of the individual scores in each sample.

The variance of the scores within each sample would be an estimate of the population variance, σ². So the average of the 3 sample variances, (S₁² + S₂² + S₃²)/3, would be a really good estimate of σ².

Between-group variability

But there is more variability in the above situation. There is variability of the sample means, S²X̄.

From our study of the standard error of the mean, the variability of the means, S²X̄, would be approximately equal to σ²/N.

Equivalently, N·S²X̄ would be approximately equal to σ².

That is, N times the variance of the sample means would be about equal to the population variance.

So, we have two estimates of the population variance in the above scenario.

1) The estimate based on the average of the variances of the three samples.
2) The estimate based on N times the variance of the sample means.

When the samples are from populations with equal means, the two estimates will be about equal.


Now consider 3 populations whose means are NOT equal.

Now consider samples from each of these populations

[Dot plots: the individual scores in each of the three samples. The spread within each sample is similar to before, but the samples are centered at different places.]

Now consider the means of those samples . . .

[Dot plot: the three sample means, widely spread.]

Note that the variability of the individual scores within each sample is about the same as above.

BUT, note that when the population means are not equal, the means of samples from those populations are quite variable, much more so than they were when the population means were equal.

This means that in this case S²X̄ would be LARGER THAN σ²/N.

Equivalently, it means that N·S²X̄ would be LARGER THAN σ².

This suggests that N·S²X̄ is an indicator of whether or not the population means are equal.

If the means were equal, N·S²X̄ would be about equal to σ².

But if the means were not equal, N·S²X̄ would be larger than σ².

Since the variability of individual scores within samples was the same in both situations, Fisher proposed the following ratio as a test statistic:

              N·S²X̄                     N times the variance of the sample means
F = ----------------------------- = ---------------------------------------------
     Mean of the sample variances          Mean of the sample variances

If the population means are equal, F will be about equal to 1.

If the population means are not equal, F will be larger than 1.

Fisher computed the sampling distribution of F and proposed it as an omnibus test of the equality of population means. (He did not name the statistic F after himself.)
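A small simulation sketch of Fisher's idea (my illustration, not part of the handout; the function name and sample sizes are made up). With equal population means the two variance estimates agree and F hovers around 1; with unequal means the numerator grows.

# F = N * var(sample means) / mean(sample variances), for 3 groups of N = 20
set.seed(1)
f.once <- function(shift = 0) {
  g1 <- rnorm(20, mean = 0)           # three samples with common sigma = 1
  g2 <- rnorm(20, mean = shift)
  g3 <- rnorm(20, mean = shift)
  means <- c(mean(g1), mean(g2), mean(g3))
  vars  <- c(var(g1),  var(g2),  var(g3))
  (20 * var(means)) / mean(vars)      # between-groups estimate / within-groups estimate
}
mean(replicate(2000, f.once(0)))      # equal population means: F averages about 1
mean(replicate(2000, f.once(1)))      # unequal population means: F averages well above 1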



Specifics of the One-Way Between-subjects Analysis of Variance

The research design employs two or more independent conditions (no pairing).

The groups are identified by different levels of a single factor.

The dependent variable is interval / ratio scaled.

The distribution of scores within groups is unimodal and symmetric.

Variances of the populations being compared are equal.

Hypotheses:

H0: All population means are equal

H1: At least one inequality is present.

Test Statistic:

         Estimate of the population variance based on differences between sample means
F = ------------------------------------------------------------------------------------
     Estimate of the population variance based on differences between individual scores
     within samples

Hand computation when sample sizes are equal

         Common sample size × Variance of the sample means
F = -------------------------------------------------------
              Average of the sample variances

Compare the result to the tabled critical F value.

Likely values if the null is true: values around 1.

Likely values if the null is false: values larger than 1.
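A minimal sketch of this equal-n hand computation in R (the data values are made up for illustration):

# three hypothetical equal-sized samples
groups <- list(a = c(3, 5, 4, 6), b = c(5, 7, 6, 8), c = c(4, 6, 5, 7))
n      <- length(groups$a)                  # common sample size
means  <- sapply(groups, mean)
vars   <- sapply(groups, var)
F.stat <- (n * var(means)) / mean(vars)     # common n * variance of means / average of variances
F.stat
# compare to the tabled critical value, e.g. qf(.95, df1 = 3 - 1, df2 = 3 * (n - 1))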


Example problem

Michelle Hinton Watson, a '95 graduate of the program, interviewed employees and former employees of a local company, Company X. A set of 7 questions assessing overall job satisfaction was given to all respondents. She interviewed 107 persons who had left the company before she contacted them, 49 persons who left the company within a year after she contacted them, and 51 persons who were still working for the company a year after the initial contact. The interest here is in whether the three groups differ in average job satisfaction.

Specifying the analysis

Analyze -> General Linear Model -> Univariate

Data are mdbr\hinton\cxfile\cxfilenm.sav

Specifying a plot of means –


Click on the Post Hoc… button to specify Post Hoc comparisons of means.

Click on the Plot… button to specify a graph of means.

Click on the Options… button to specify descriptive statistics and estimates of effect size.

Specifying Post Hoc Comparisons

If the overall F statistic is significant, post hoc comparisons are often used to determine exactly which pairs of means are significantly different.

Post Hoc tests vary on a dimension of liberalness vs conservativeness.

Liberal Test (AKA Powerful)                           Conservative Test
Tends to find differences, some of them Type I        Tends to not find differences, even those
errors                                                that exist
Most powerful – able to find small differences        Least powerful – unable to see small differences
For the Affordable Care Act                           Supports Big Business

The LSD test is the most liberal. The Scheffé test is the most conservative. The Tukey’s-b test is a compromise between the above two extremes.

Liberal                                                                    Conservative
LSD ------------------------------ Tukey-b ------------------------------ Scheffé

Strategy: If a conservative test rejects the null, there is most likely a difference. If a liberal test fails to reject the null, there is most likely not a difference.
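As a hedged aside (not part of the handout), base R can produce Tukey comparisons directly from an aov fit; the variable names here follow the R example later in this handout.

# Tukey HSD pairwise comparisons with family-wise confidence intervals
TukeyHSD(aov(ovsat ~ finaldest, data = hinton1))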


The Options: Specifying display of Effect Size and Observed Power:


The output

Between-Subjects Factors

            Value   Value Label                  N
finaldest   .00     Left Co. before Q given    107
            1.00    Left Co. after Q given      49
            2.00    Stayed w. Co.               51

Descriptive Statistics
Dependent Variable: ovsat

finaldest    Mean    Std. Deviation      N
.00         3.2443        .78566       107
1.00        4.1399        .59009        49
2.00        3.9888        .57983        51
Total       3.6398        .80700       207

Levene's Test of Equality of Error Variances(a)
Dependent Variable: ovsat

   F      df1    df2    Sig.
 8.670     2     204    .000

Tests the null hypothesis that the error variance of the dependent variable is equal across groups.
a. Design: Intercept + finaldest

The p-value of .000 for Levene’s test of equality of error variance means that we should be particularly cautious when interpreting the comparisons of means that follow.

This is like the test of equality of variances printed with the independent-groups t. Alas, SPSS's GLM procedure offers no "unequal variances" ANOVA test, although Welch's unequal-variances ANOVA exists elsewhere (see the R note below).

We should inspect distributions for each group. We should also consider a nonparametric test of equality of location, which I will do.
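As a hedged side note (not in the original handout): base R provides Welch's unequal-variances one-way ANOVA via oneway.test(). The variable and data frame names below follow the R example later in this handout.

# Welch's one-way ANOVA: does not assume equal group variances
oneway.test(ovsat ~ finaldest, data = hinton1, var.equal = FALSE)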


Tests of Between-Subjects Effects

The first two lines of this table give technical information that is not needed for most analyses. For that reason, you should ignore the "Corrected Model" and "Intercept" lines in the table.

Dependent Variable: ovsat

Source            Type III Sum    df    Mean Square       F       Sig.   Partial Eta   Noncent.    Observed
                  of Squares                                             Squared       Parameter   Power(b)
Corrected Model      35.203(a)     2       17.602       36.288    .000      .262         72.575      1.000
Intercept          2620.378        1     2620.378     5402.145    .000      .964       5402.145      1.000
finaldest            35.203        2       17.602       36.288    .000      .262         72.575      1.000
Error                98.953      204         .485
Total              2876.449      207
Corrected Total     134.156      206

a. R Squared = .262 (Adjusted R Squared = .255)
b. Computed using alpha = .05


Partial Eta squared:

This is the effect size for one-way ANOVA. See effect sizes for ANOVA in the Power lecture for a characterization. Recall: Eta² = .01 for small; Eta² = .059 for medium; Eta² = .138 for large. Eta² = .262 means we have a SuperSized-with-Fries effect size.
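For reference, the partial eta squared in the table can be reproduced by hand from the sums of squares (this arithmetic is mine, not the handout's):

# partial eta squared = SS(finaldest) / (SS(finaldest) + SS(Error))
35.203 / (35.203 + 98.953)   # = .262, matching the table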

Observed Power

Observed power is the probability of finding a significant difference if the population means were exactly as different as the sample means.

The value, 1.000, means that if the population means were as different as the sample means, and many independent tests of the null hypothesis of equality of population means were run, the F would be significant in essentially 100% of those tests.
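The observed power figure can also be reproduced from the noncentrality parameter in the table (a hedged sketch using base R's noncentral F distribution):

# P(F > .05 critical value) when the noncentrality parameter is 72.575, df = 2 and 204
fcrit <- qf(.95, df1 = 2, df2 = 204)
1 - pf(fcrit, df1 = 2, df2 = 204, ncp = 72.575)   # essentially 1.000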


Homogeneous Subsets


Interpreting the table: If two means are in the same column, they are not significantly different. If two means appear only in different columns, they ARE significantly different.

SPSS Profile Plots

A picture is worth 1000 words.


ANOVA in Rcmdr

R -> Rcmdr -> Import data -> from SPSS data set . . . -> cxfilenm.sav
Statistics -> Means -> One-way ANOVA . . .

> AnovaModel.1 <- aov(ovsat ~ finaldest, data=hinton1)

> summary(AnovaModel.1)

             Df Sum Sq Mean Sq F value   Pr(>F)
finaldest     2  35.20  17.602   36.29 3.29e-14 ***
Residuals   204  98.95   0.485
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

> with(hinton1, numSummary(ovsat, groups=finaldest, statistics=c("mean", "sd")))

                             mean        sd data:n
Left Co. before Q given  3.244326 0.7856553    107
Left Co. after Q given   4.139942 0.5900895     49
Stayed w. Co.            3.988796 0.5798262     51


Note – the Groups variable must be recognized by Rcmdr as a factor.

Sample means with overall mean subtracted out.

Basic information in the Tests of Between-Subjects Effects table from SPSS.

> local({
+   .Pairs <- glht(AnovaModel.1, linfct = mcp(finaldest = "Tukey"))
+   print(summary(.Pairs))   # pairwise tests
+   print(confint(.Pairs))   # confidence intervals
+   print(cld(.Pairs))       # compact letter display
+   old.oma <- par(oma=c(0,5,0,0))
+   plot(confint(.Pairs))
+   par(old.oma)
+ })

Simultaneous Tests for General Linear Hypotheses

Multiple Comparisons of Means: Tukey Contrasts

Fit: aov(formula = ovsat ~ finaldest, data = hinton1)

Linear Hypotheses: These are pairwise differences and their significances

                                                      Estimate Std. Error t value Pr(>|t|)
Left Co. after Q given - Left Co. before Q given == 0   0.8956     0.1201   7.455   <1e-05 ***
Stayed w. Co. - Left Co. before Q given == 0            0.7445     0.1185   6.282   <1e-05 ***
Stayed w. Co. - Left Co. after Q given == 0            -0.1511     0.1393  -1.085    0.522
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
(Adjusted p values reported -- single-step method)

Simultaneous Confidence Intervals

Multiple Comparisons of Means: Tukey Contrasts

Fit: aov(formula = ovsat ~ finaldest, data = hinton1)

Quantile = 2.3579
95% family-wise confidence level

Linear Hypotheses: Confidence intervals on pairwise differences

                                                      Estimate     lwr     upr
Left Co. after Q given - Left Co. before Q given == 0   0.8956  0.6125  1.1788
Stayed w. Co. - Left Co. before Q given == 0            0.7445  0.4651  1.0238
Stayed w. Co. - Left Co. after Q given == 0            -0.1511 -0.4795  0.1772

The display below is a compact letter display of the pairwise comparisons: means that share a letter are not significantly different, and means with different letters are.

Left Co. before Q given   Left Co. after Q given   Stayed w. Co.
                    "a"                      "b"             "b"


Kruskal-Wallis One way Analysis of Variance by Ranks

The research design employs two or more independent conditions (no pairing).

The groups are distinguished by different levels of the independent variable.

The dependent variable is ordinal or the DV is interval/ratio scaled but the distributions within groups are skewed or have unequal variances between groups.

Hypotheses:

H0: All population locations are equal

H1: At least one inequality is present.

From Howell, D. (1997). Statistical Methods for Psychology. 4th Ed. p. 658. "It tests the hypothesis that all samples were drawn from identical populations and is particularly sensitive to differences in central tendency."

Test Statistic:

Kruskal-Wallis H statistic. The probability distribution of the H statistic when the null is true is the Chi-square distribution with degrees of freedom equal to the number of groups being compared minus 1.
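As a hedged check of this null distribution (my addition, using base R): with 3 groups, df = 3 - 1 = 2, so the .05 critical value is

qchisq(.95, df = 2)   # 5.991; an obtained H larger than this is significant at the .05 level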

Example problem

(Same problem as above). Mdbr\hinton\cxfile\cxfilenm.sav

The interest here is in whether the three groups – persons who had previously left the company, persons who left after the initial interview, and persons who stayed with the company after the initial interview – differ in their overall job satisfaction.

It is appropriate to conduct this test since the variances were not homogeneous in the above analysis, resulting in some suspicion about whether there actually are differences between the groups.


Specifying the analysis

Analyze -> Nonparametric tests -> Legacy Dialogs -> K Independent Samples

The Results

Kruskal-Wallis Test

Ranks
        finaldest                       N    Mean Rank
ovsat   .00   Left Co. before Q given  107       74.59
        1.00  Left Co. after Q given    49      142.67
        2.00  Stayed w. Co.             51      128.55
        Total                          207

Test Statistics(a,b)
              ovsat
Chi-Square   54.996
df                2
Asymp. Sig.    .000
a. Kruskal Wallis Test
b. Grouping Variable: finaldest

Alas, there are no post-hoc tests of which I’m aware for the Kruskal-Wallis situation. Some investigators will follow up with Mann-Whitney U-tests, using that test as a substitute for a true post-hoc test.
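The follow-up described above can be done in R with pairwise Wilcoxon (Mann-Whitney) tests plus a multiplicity correction. This is a hedged sketch of mine, not part of the original handout; the names follow the R example below.

# pairwise Mann-Whitney U tests with Holm-adjusted p values
with(hinton1, pairwise.wilcox.test(ovsat, finaldest, p.adjust.method = "holm"))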


This is the probability of a chi-square value as large as the obtained value of 54.996 if the null hypothesis of equal distributions were true.

Ranks are from smallest to largest, so group 0 appears to have the smallest scores.

(Callouts for the dialog box: click the grouping-variable button to invoke the range dialog and put the minimum and maximum group numbers in its two boxes; put the name(s) of the dependent variable(s) in the test variable list box.)

Kruskal-Wallis Test in Rcmdr

R -> Rcmdr -> Data -> Import data -> From SPSS data set . . . -> cxfilenm.sav
Statistics -> Nonparametric tests -> Kruskal-Wallis test . . .

> with(hinton1, tapply(ovsat, finaldest, median, na.rm=TRUE))

These are sample medians. Some people view the K-W test as a comparison of medians.

Left Co. before Q given   Left Co. after Q given   Stayed w. Co.
               3.142857                 4.142857        4.000000

> kruskal.test(ovsat ~ finaldest, data=hinton1)

Kruskal-Wallis rank sum test

data:  ovsat by finaldest
Kruskal-Wallis chi-squared = 54.996, df = 2, p-value = 1.142e-12


Note – the Groups variable must be recognized by Rcmdr as a factor.

Chi-Square Analysis of a Dichotomous Dependent Variable

The research design employs two or more independent conditions (no pairing).

The groups are distinguished by categories of an independent variable or factor.

The dependent variable is categorical. This test may be used when the DV is interval/ratio scaled or ordinal but you are uncomfortable with the numeric values. But you definitely should not categorize a variable that can be analyzed as a quantitative variable. You should categorize only in emergencies. Categorizing represents the most conservative assumption you can make about your dependent variable: that its values are only categorizable into High and Low.

Hypotheses:

H0: Percentages in each category are equal across populations

H1: At least one inequality is present.

Test Statistic:

Two-way chi-square. If the null is true, its probability distribution is the Chi-square distribution with degrees of freedom equal to the product (No. of DV categories - 1) x (No. of Groups - 1). Here, with 2 DV categories and 3 groups, df = (2 - 1) x (3 - 1) = 2.

Example problem

Same data and questions as above.

Categorizing the dependent variable using a median split.

Each OVSAT score was categorized as 0 if it was less than or equal to the median of all the OVSAT scores or 1 if it was greater than the overall median. This is called performing a median split.

The categorized variable is called SATGROUP.

frequencies variable=ovsat /sta=median.

Frequencies

Statistics
ovsat
N       Valid      207
        Missing      0
Median          3.8571

recode ovsat (lowest thru 3.8571=0)(else=1) into satgroup.
frequencies variable=satgroup.

satgroup
              Frequency   Percent   Valid Percent   Cumulative Percent
Valid   .00         101      48.8            48.8                 48.8
        1.00        106      51.2            51.2                100.0
        Total       207     100.0           100.0
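The same median split can be written in R (a hedged sketch of mine; the data frame name follows the R examples in this handout):

# median split: 0 = at or below the overall median, 1 = above it
med <- median(hinton1$ovsat, na.rm = TRUE)
hinton1$satgroup <- ifelse(hinton1$ovsat <= med, 0, 1)
table(hinton1$satgroup)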


Specifying the analysis

Analyze -> Descriptive Statistics -> Crosstabs


Put the dependent variable in the Row(s) field.

Put the independent variable in the Column(s) field.

Click on the "Cells" button to invoke this dialog box.

Check "Column" percentages.

The Results

Crosstabs

All three tests – analysis of variance, Kruskal-Wallis, and chi-square – resulted in the same conclusion, suggesting that there are significant differences between the satisfaction scores of the three groups. It appears that members of group 0 – those that had left prior to the interview – were generally least satisfied.


Click on the “Statistics” button and check the Chi-square box.

If the null hypothesis that all population percentages are equal were true, the probability of a chi-square as large as 51.221 would be .000 (less than .0005). So reject that hypothesis.

If you put the independent variable in the “Columns” field above and the dependent variable in the “Rows” field, and checked “Column” percentages, the table will be displayed as below.

Note that this display makes it easy to specify what values are being compared – they’re the % within finaldes values that I’ve circled.

Chi-square Test in Rcmdr

R -> Rcmdr -> Data -> Import Data -> From SPSS dataset -> cxfilenm.sav

The variable, satgroup, had already been created in the file that was imported by Rcmdr.

Data -> Manage Variables in Active Data set -> Convert numeric variables to factors . . .

Statistics -> Contingency Tables -> Two-way Tables . . .


Argh!!

Rcmdr requires that all variables compared using the chi-square test be factors.

So, the satgroup variable has to be converted to a factor before it can be used for the chi-square test.

Click on the [Yes] button.

Rcmdr chi-square analysis of contingency table output.

> local({
+   .Table <- xtabs(~satgroup+finaldest, data=hinton1)
+   cat("\nFrequency table:\n")
+   print(.Table)
+   .Test <- chisq.test(.Table, correct=FALSE)
+   print(.Test)
+ })

Frequency table:
        finaldest
satgroup  Left Co. before Q given  Left Co. after Q given  Stayed w. Co.
       0                       77                       7             17
       1                       30                      42             34

Pearson's Chi-squared test

data:  .Table
X-squared = 51.221, df = 2, p-value = 7.544e-12


One-way Repeated Measures ANOVA

In Mike Clark's thesis, three versions of the Big Five questionnaire were given to participants under three instructional conditions . . . (See Mike's latest book chapter in the Picture Show folder.)

1) Honest: Respond honestly.
2) Dollar: Respond honestly, but participants who score highest will be entered into a drawing.
3) Instructed: Respond to maximize your chances of obtaining a customer service job.

These three conditions are called the Honest, Dollar, and Instructed - H, D, and I - conditions respectively.

The question here concerns the mean score on the Conscientiousness scale across the three conditions.

If the participants were not paying attention to the instructions, then we’d expect the means to be equal.

But if participants faked in the second two conditions, we’d expect differences in mean Conscientiousness scores across the three conditions.

The data are in G:\MdbR\Clark\ClarkDataFiles\ClarkAndNewDataCombined070710.sav

The data for repeated measures analyses must be in different columns – an H column, a D column, and an I column for this problem.


Analysis

Menu sequence: Analyze -> General Linear Model -> Repeated Measures


Enter a name for the Repeated Measures factor here

Enter the number of levels of the factor.

Click the [Add] button.

Highlight the name of one of the variables to be included in the analysis and then click on the [->] button.


Click on the [Plots] button in the main dialog box and put the name of the repeated measures factor as the Horizontal Axis of the plot.

Click on the [Options] button in the main dialog box and check the three boxes shown below.

The Output

General Linear Model

[DataSet1] G:\MdbR\Clark\ClarkDataFiles\ClarkAndNewDataCombined070710.sav

Within-Subjects Factors
Measure: MEASURE_1

condit   Dependent Variable
1        hc
2        dc
3        ic

Descriptive Statistics

       Mean   Std. Deviation     N
hc    4.4029       .92630      249
dc    4.7979      1.05333      249
ic    5.4779       .96747      249

The GLM procedure first prints Multivariate Tests of the hypothesis of no difference between means. The multivariate tests are robust with respect to violations of the various assumptions of the analysis, although they are less powerful than the univariate tests below when those tests' assumptions are met.

Multivariate Tests(c)

Effect: condit
                     Value        F       Hypothesis df  Error df  Sig.  Partial Eta  Noncent.   Observed
                                                                         Squared      Parameter  Power(a)
Pillai's Trace        .471  110.015(b)        2.000       247.000  .000     .471       220.031     1.000
Wilks' Lambda         .529  110.015(b)        2.000       247.000  .000     .471       220.031     1.000
Hotelling's Trace     .891  110.015(b)        2.000       247.000  .000     .471       220.031     1.000
Roy's Largest Root    .891  110.015(b)        2.000       247.000  .000     .471       220.031     1.000

a. Computed using alpha = .05
b. Exact statistic
c. Design: Intercept; Within Subjects Design: condit

Mauchly's test should be nonsignificant. If it is significant, as it is below, then the most powerful test, labeled "Sphericity Assumed" below, should not be reported.

Mauchly's Test of Sphericity(b)
Measure: MEASURE_1

Within Subjects  Mauchly's W  Approx.      df  Sig.             Epsilon(a)
Effect                        Chi-Square             Greenhouse-  Huynh-Feldt  Lower-bound
                                                     Geisser
condit               .909       23.571      2  .000      .917         .923         .500

Tests the null hypothesis that the error covariance matrix of the orthonormalized transformed dependent variables is proportional to an identity matrix.

a. May be used to adjust the degrees of freedom for the averaged tests of significance. Corrected tests are displayed in the Tests of Within-Subjects Effects table.
b. Design: Intercept; Within Subjects Design: condit


Since Mauchly’s test was significant, only the last 3 tests below should be used. It happens, though, that for these data, all tests give the same result, so in this particular case, it doesn’t make a difference.

Tests of Within-Subjects Effects
Measure: MEASURE_1

Source                             Type III Sum     df       Mean      F       Sig.  Partial Eta  Noncent.   Observed
                                   of Squares               Square                   Squared      Parameter  Power(a)
condit         Sphericity Assumed     147.241       2       73.620  121.599   .000     .329       243.197     1.000
               Greenhouse-Geisser     147.241       1.833   80.321  121.599   .000     .329       222.909     1.000
               Huynh-Feldt            147.241       1.846   79.756  121.599   .000     .329       224.487     1.000
               Lower-bound            147.241       1.000  147.241  121.599   .000     .329       121.599     1.000
Error(condit)  Sphericity Assumed     300.297     496         .605
               Greenhouse-Geisser     300.297     454.622     .661
               Huynh-Feldt            300.297     457.840     .656
               Lower-bound            300.297     248.000    1.211

a. Computed using alpha = .05

Tests of Within-Subjects Contrasts
Measure: MEASURE_1

Source         condit      Type III Sum   df   Mean Square     F      Sig.  Partial Eta  Noncent.   Observed
                           of Squares                                       Squared      Parameter  Power(a)
condit         Linear         143.866      1      143.866   217.487  .000     .467       217.487     1.000
               Quadratic        3.374      1        3.374     6.142  .014     .024         6.142      .695
Error(condit)  Linear         164.050    248         .661
               Quadratic      136.246    248         .549

a. Computed using alpha = .05

(Ignore this table for this class.)

Tests of Between-Subjects Effects
Measure: MEASURE_1
Transformed Variable: Average

Source      Type III Sum    df   Mean Square       F       Sig.  Partial Eta  Noncent.    Observed
            of Squares                                           Squared      Parameter   Power(a)
Intercept     17883.568      1    17883.568   10565.248   .000      .977      10565.248     1.000
Error           419.784    248        1.693

a. Computed using alpha = .05

(Ignore this table for this situation.)

Profile Plots

Again, worth 1000 words.



Mean Conscientiousness scores increased significantly from 1 (Honest) to 2 (Dollar) to 3 (Instructed) conditions. The participants responded to the instructions in the expected fashion.

Rcmdr Repeated measures ANOVA


You can use R to perform repeated measures analyses of variance, but those analyses are not available in Rcmdr.
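A repeated measures ANOVA can be run in base R once the data are reshaped from wide to long format. The sketch below is mine, not Rcmdr output: the data frame name (clark) is an assumption, and base aov() gives only the "Sphericity Assumed" test (for Greenhouse-Geisser corrections, a package such as afex or ez would be needed).

# assume 'clark' has one row per participant with columns hc, dc, ic
long <- reshape(clark, direction = "long",
                varying = c("hc", "dc", "ic"),   # the three condition columns
                v.names = "consc",               # Conscientiousness score
                timevar = "condit",
                times = c("Honest", "Dollar", "Instructed"))
long$condit <- factor(long$condit)
long$id <- factor(long$id)                       # participant id created by reshape()

# one-way repeated measures ANOVA: condit as a within-subjects factor
summary(aov(consc ~ condit + Error(id/condit), data = long))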
