TRANSCRIPT
Statistics (cont.)
Psych 231: Research Methods in Psychology
Announcements
Journal Summary 2 assignment: due in class this week (Wednesday, Nov. 13th).
Group projects: plan to have your analyses done before Thanksgiving break; GAs will be available during lab times to help.
Poster sessions are the last lab sections of the semester (last week of classes), so start thinking about your posters. I will lecture about poster presentations on Monday (a week from today).
Samples and Populations
2 General kinds of Statistics
Descriptive statistics
• Used to describe, simplify, & organize data sets
• Describing distributions of scores
Inferential statistics
• Used to test claims about the population, based on data gathered from samples
• Takes sampling error into account. Are the results above and beyond what you'd expect by random chance?
[Diagram: inferential statistics are used to generalize from the sample back to the population]
Inferential Statistics
Purpose: to make claims about populations based on data collected from samples. What's the big deal?
Example experiment:
• Group A gets a treatment to improve memory
• Group B gets no treatment (control)
• After the treatment period, test both groups for memory
Results: Group A's average memory score is 80%; Group B's is 76%.
Is the 4% difference a "real" difference (statistically significant), or is it just sampling error?
[Diagram: one population; Sample A (treatment), mean = 80%; Sample B (no treatment), mean = 76%]
Testing Hypotheses
Step 1: State your hypotheses
Step 2: Set your decision criteria
Step 3: Collect your data from your sample(s)
Step 4: Compute your test statistics
Step 5: Make a decision about your null hypothesis: "Reject H0" or "Fail to reject H0"
Testing Hypotheses
Step 1: State your hypotheses
Null hypothesis (H0)
• "There are no differences (effects)"
• This is the hypothesis that you are testing
Alternative hypothesis(es) (HA)
• Generally, "not all groups are equal"
You aren't out to prove the alternative hypothesis (although it feels like this is what you want to do). If you reject the null hypothesis, then you're left with support for the alternative(s) (NOT proof!).
Testing Hypotheses
Step 1: State your hypotheses
In our memory example experiment:
Null H0: mean of Group A = mean of Group B
Alternative HA: mean of Group A ≠ mean of Group B (or, more precisely: Group A > Group B)
It seems like our theory is that the treatment should improve memory. That's the alternative hypothesis. That's NOT the one that we'll test with inferential statistics. Instead, we test the H0.
Testing Hypotheses
Step 1: State your hypotheses
Step 2: Set your decision criteria
Your alpha level will be your guide for when to:
• "reject the null hypothesis"
• "fail to reject the null hypothesis"
This could be the correct conclusion or an incorrect conclusion. There are two different ways to go wrong:
• Type I error: saying that there is a difference when there really isn't one (the probability of making this error is the "alpha level")
• Type II error: saying that there is not a difference when there really is one
Error types
                          Real world ('truth')
                          H0 is correct       H0 is wrong
Experimenter's conclusions
  Reject H0               Type I error        correct decision
  Fail to reject H0       correct decision    Type II error
Error types: Courtroom analogy
                          Real world ('truth')
                          Defendant is innocent    Defendant is guilty
Jury's decision
  Find guilty             Type I error             correct decision
  Find not guilty         correct decision         Type II error
Error types
Type I error (α): concluding that there is an effect (a difference between groups) when there really isn't one.
• Sometimes called the "significance level"
• We try to minimize this (keep it low): pick a low level of alpha
• Psychology: 0.05 and 0.01 are most common (see the simulation sketch below)
Type II error (β): concluding that there isn't an effect when there really is one.
• How likely you are to detect a difference if it is there
• Related to the statistical power of a test
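As an illustration beyond the slides, a minimal Python simulation sketch can show what the alpha level means in practice: when H0 is actually true, roughly alpha of all experiments will still come out "significant." The sample size, population values, and number of runs below are arbitrary assumptions for the sketch.

# Sketch: estimate the Type I error rate when H0 is really true.
# Both groups are drawn from the SAME population, so every "significant"
# result is a false alarm. Assumed values: n = 25 per group, 10,000 runs.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05
n_experiments = 10_000
false_alarms = 0

for _ in range(n_experiments):
    group_a = rng.normal(loc=75, scale=10, size=25)  # "treatment" (no real effect)
    group_b = rng.normal(loc=75, scale=10, size=25)  # control, same population
    t, p = stats.ttest_ind(group_a, group_b)
    if p < alpha:
        false_alarms += 1

print(f"Type I error rate = {false_alarms / n_experiments:.3f}")  # close to alpha (about 0.05)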
Testing Hypotheses
Step 1: State your hypotheses
Step 2: Set your decision criteria
Step 3: Collect your data from your sample(s)
Step 4: Compute your test statistics
• Descriptive statistics (means, standard deviations, etc.)
• Inferential statistics (t-tests, ANOVAs, etc.)
Step 5: Make a decision about your null hypothesis
• Reject H0: "statistically significant differences"
• Fail to reject H0: "not statistically significant differences"
Statistical significance
"Statistically significant differences": when you "reject your null hypothesis"
• Essentially this means that the observed difference is above what you'd expect by chance
• "Chance" is determined by estimating how much sampling error there is
• Factors affecting "chance":
  • Sample size
  • Population variability
Sampling error
[Figures: for samples of n = 1, n = 2, and n = 10, the population distribution is shown with the population mean, the individual scores (x), and the sample mean; sampling error = (population mean - sample mean)]
Generally, as the sample size increases, the sampling error decreases
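To make that point concrete, here is a small Python sketch (not part of the original slides; the population mean, SD, and number of samples are arbitrary assumptions) that draws repeated samples of several sizes and reports the average distance between the sample mean and the population mean.

# Sketch: the average sampling error |population mean - sample mean| shrinks as n grows.
import numpy as np

rng = np.random.default_rng(1)
pop_mean, pop_sd = 100, 15   # assumed population parameters

for n in (1, 2, 10, 100):
    sample_means = rng.normal(pop_mean, pop_sd, size=(5_000, n)).mean(axis=1)
    avg_error = np.abs(sample_means - pop_mean).mean()
    print(f"n = {n:3d}: average sampling error = {avg_error:.2f}")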
Sampling error
Typically, the narrower the population distribution, the narrower the range of possible samples, and the smaller the "chance."
[Figure: a population with small variability vs. a population with large variability]
Sampling error
These two factors combine to impact the distribution of sample means. The distribution of sample means is a distribution of all possible sample means of a particular sample size that can be drawn from the population.
[Figure: samples of size n are drawn from the population; their means (XA, XB, XC, XD, ...) form the distribution of sample means, whose average sampling error is the "chance"]
Significance
"A statistically significant difference" means: the researcher is concluding that there is a difference above and beyond chance, with the probability of making a Type I error at 5% (assuming an alpha level = 0.05).
Note: "statistical significance" is not the same thing as theoretical significance.
• Only means that there is a statistical difference
• Doesn't mean that it is an important difference
Non-Significance
Failing to reject the null hypothesis
• Generally, we are not interested in "accepting the null hypothesis" (remember, we can't prove things, only disprove them)
• Usually check to see if you made a Type II error (failed to detect a difference that is really there)
  • Check the statistical power of your test (a simulation sketch follows below)
  • Sample size may be too small
  • Effects that you're looking for may be really small
  • Check your controls; maybe there is too much variability
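As a hedged sketch of what "checking power" can look like (not from the slides; the true effect size, SD, and sample size are invented), the simulation below assumes a real 4-point treatment effect and estimates how often a t-test at α = .05 detects it.

# Sketch: estimate power (1 - Type II error rate) for a hypothetical true effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
alpha, n, true_effect, sd = 0.05, 25, 4, 10   # assumed values
n_experiments = 5_000
hits = 0

for _ in range(n_experiments):
    control = rng.normal(76, sd, n)
    treatment = rng.normal(76 + true_effect, sd, n)   # the treatment really works here
    _, p = stats.ttest_ind(treatment, control)
    if p < alpha:
        hits += 1

power = hits / n_experiments
print(f"Estimated power = {power:.2f}; Type II error rate = {1 - power:.2f}")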
Summary to this point
                          Real world ('truth')
                          H0 is correct       H0 is wrong
Experimenter's conclusions
  Reject H0               Type I error        correct decision
  Fail to reject H0       correct decision    Type II error
Example experiment:
• Group A gets a treatment to improve memory
• Group B gets no treatment (control)
• After the treatment period, test both groups for memory
Results: Group A's average memory score is 80%; Group B's is 76%.
[Figure: two sample distributions, XB = 76% and XA = 80%]
Is the 4% difference a "real" difference (statistically significant), or is it just sampling error?
H0: there is no difference between Group A and Group B
H0: μA = μB (the hypotheses are about populations)
"Generic" statistical test
Tests the question: are there differences between groups due to a treatment?
Two possibilities in the "real world" (compare the columns of the error-type table above):
• H0 is true (no treatment effect): there is one population; both samples come from it, and the observed difference (XB = 76%, XA = 80%) is just sampling error.
• H0 is false (there is a treatment effect): there are two populations; people who get the treatment change, so they form a new population (the "treatment population"), and the two samples (XB = 76%, XA = 80%) come from different populations.
[Figures: the two sample distributions drawn from one population (H0 true) vs. from two populations (H0 false)]
Why might the samples be different? (What is the source of the variability between groups?)
• ER: random sampling error
• ID: individual differences (if a between-subjects factor)
• TR: the effect of a treatment
"Generic" statistical test
The generic test statistic is a ratio of these sources of variability (a worked numeric sketch follows below):
Computed test statistic = (observed difference) / (difference expected by chance) = (TR + ID + ER) / (ID + ER)
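To show what that ratio looks like with numbers, here is a minimal Python sketch; the scores are invented, and the "difference expected by chance" is estimated with the standard error of the difference between two independent means (one common choice, not the only one).

# Sketch: generic test statistic = observed difference / difference expected by chance.
import numpy as np

group_a = np.array([82, 79, 85, 77, 81, 80])  # treatment (made-up scores)
group_b = np.array([75, 78, 74, 77, 76, 76])  # control (made-up scores)

observed_difference = group_a.mean() - group_b.mean()

# Difference expected by chance: standard error of the difference between means
se_diff = np.sqrt(group_a.var(ddof=1) / len(group_a) +
                  group_b.var(ddof=1) / len(group_b))

test_statistic = observed_difference / se_diff
print(f"observed difference = {observed_difference:.2f}")
print(f"difference expected by chance (SE) = {se_diff:.2f}")
print(f"computed test statistic = {test_statistic:.2f}")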
Sampling error
Recall: the distribution of sample means (all possible sample means of a particular sample size that can be drawn from the population) is what gives the average sampling error, i.e., the "chance" in that denominator.
“Generic” statistical test
The generic test statistic distribution: to reject the H0, you want a computed test statistic that is large,
• reflecting a large treatment effect (TR)
What's large enough? The alpha level gives us the decision criterion.
[Figure: the distribution of sample means gives rise to the distribution of the test statistic, (TR + ID + ER) / (ID + ER); the α-level determines where the decision boundaries go]
"Generic" statistical test
[Figure: distribution of the test statistic, with "Reject H0" regions beyond the decision boundaries and "Fail to reject H0" between them]
"One-tailed test": sometimes you know to expect a particular direction of difference (e.g., "improve memory performance"), so the rejection region is placed in only one tail.
“Generic” statistical test
Things that affect the computed test statistic
• Size of the treatment effect: the bigger the effect, the bigger the computed test statistic
• Difference expected by chance (sampling error):
  • Sample size
  • Variability in the population
Some inferential statistical tests
1 factor with two groups: t-tests
• Between groups: 2 independent samples
• Within groups: repeated measures samples (matched, related)
1 factor with more than two groups: Analysis of Variance (ANOVA), either between groups or repeated measures
Multi-factorial: factorial ANOVA
T-test
Design: 2 separate experimental conditions
Degrees of freedom
• Based on the size of the sample and the kind of t-test
Formula:
t = (X1 - X2) / (difference expected by chance)
• Numerator: the observed difference
• Denominator: based on sampling error; its computation differs for between and within t-tests
T-test
Reporting your results:
• The observed difference between conditions
• Kind of t-test
• Computed t-statistic
• Degrees of freedom for the test
• The "p-value" of the test
"The mean of the treatment group was 12 points higher than that of the control group. An independent samples t-test yielded a significant difference, t(24) = 5.67, p < 0.05."
"The mean score of the post-test was 12 points higher than that of the pre-test. A repeated measures t-test demonstrated that this difference was significant, t(12) = 5.67, p < 0.05."
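For reference, a minimal Python/SciPy sketch of both kinds of t-test; all scores and group sizes are invented for illustration.

# Sketch: independent-samples and repeated-measures t-tests with SciPy (invented data).
from scipy import stats

# Between groups: 2 independent samples
treatment = [80, 85, 78, 90, 83, 88, 79, 84]
control   = [72, 75, 70, 78, 74, 76, 73, 71]
t_ind, p_ind = stats.ttest_ind(treatment, control)
df_ind = len(treatment) + len(control) - 2
print(f"independent samples: t({df_ind}) = {t_ind:.2f}, p = {p_ind:.4f}")

# Within groups: repeated measures (same people measured twice, pre vs. post)
pre  = [60, 65, 58, 70, 63, 68, 59, 64]
post = [72, 75, 70, 78, 74, 76, 73, 71]
t_rel, p_rel = stats.ttest_rel(post, pre)
df_rel = len(pre) - 1
print(f"repeated measures: t({df_rel}) = {t_rel:.2f}, p = {p_rel:.4f}")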
Analysis of Variance
Designs: more than two groups
• 1 factor ANOVA, factorial ANOVA
• Both within-groups and between-groups factors
Test statistic is an F-ratio
Degrees of freedom
• Several to keep track of; the number of them depends on the design
Analysis of Variance
More than two groups: now we can't just compute a simple difference score, since there is more than one difference. So we use variance instead of simply the difference.
• Variance is essentially an average difference
F-ratio = (observed variance) / (variance expected by chance)
1 factor ANOVA
1 factor, with more than two levels: now we can't just compute a simple difference score, since there is more than one difference
• A - B, B - C, & A - C
1 factor ANOVA
Null hypothesis: H0: all the groups are equal
XA = XB = XC
Alternative hypotheses: HA: not all the groups are equal (the ANOVA tests this one!)
XA ≠ XB ≠ XC
XA ≠ XB = XC
XA = XB ≠ XC
XA = XC ≠ XB
Do further tests to pick between these specific patterns.
1 factor ANOVA
Planned contrasts and post-hoc tests:
• Further tests used to rule out the different alternative hypotheses
XA ≠ XB ≠ XC
XA ≠ XB = XC
XA = XB ≠ XC
XA = XC ≠ XB
e.g., Test 1: A ≠ B; Test 2: A ≠ C; Test 3: B = C
1 factor ANOVA
Reporting your results:
• The observed differences
• Kind of test
• Computed F-ratio
• Degrees of freedom for the test
• The "p-value" of the test
• Any post-hoc or planned comparison results
"The mean score of Group A was 12, Group B was 25, and Group C was 27. A 1-way ANOVA was conducted and the results yielded a significant difference, F(2,25) = 5.67, p < 0.05. Post hoc tests revealed that the differences between Groups A and B and Groups A and C were statistically reliable (respectively t(1) = 5.67, p < 0.05 and t(1) = 6.02, p < 0.05). Groups B and C did not differ significantly from one another."
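A minimal SciPy sketch of a one-way ANOVA with invented data; the pairwise follow-ups are shown as plain t-tests purely for illustration (a real analysis would use planned contrasts or a corrected post-hoc procedure such as Tukey's HSD).

# Sketch: one-way ANOVA on three invented groups, plus illustrative pairwise follow-ups.
from scipy import stats

group_a = [12, 10, 14, 11, 13, 12, 11, 13, 12]
group_b = [25, 27, 23, 26, 24, 25, 26, 24, 25]
group_c = [27, 29, 26, 28, 27, 26, 28, 27, 28]

# Omnibus test: are all of the group means equal?
f_stat, p_val = stats.f_oneway(group_a, group_b, group_c)
df_between = 3 - 1
df_within = len(group_a) + len(group_b) + len(group_c) - 3
print(f"F({df_between},{df_within}) = {f_stat:.2f}, p = {p_val:.4f}")

# Uncorrected pairwise comparisons, for illustration only
pairs = {"A vs B": (group_a, group_b),
         "A vs C": (group_a, group_c),
         "B vs C": (group_b, group_c)}
for name, (x, y) in pairs.items():
    t, p = stats.ttest_ind(x, y)
    print(f"{name}: t = {t:.2f}, p = {p:.4f}")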
Factorial ANOVAs
We covered much of this in our experimental design lecture.
More than one factor
• Factors may be within or between
• Overall design may be entirely within, entirely between, or mixed
Many F-ratios may be computed
• An F-ratio is computed to test the main effect of each factor
• An F-ratio is computed to test each of the potential interactions between the factors
Factorial ANOVAs
Reporting your results:
• The observed differences
  • Because there may be a lot of these, you may present them in a table instead of directly in the text
• Kind of design
  • e.g., "2 x 2 completely between factorial design"
• Computed F-ratios
  • You may see separate paragraphs for each factor, and for interactions
• Degrees of freedom for the test
  • Each F-ratio will have its own set of dfs
• The "p-value" of the test
  • You may want to just say "all tests were tested with an alpha level of 0.05"
• Any post-hoc or planned comparison results
  • Typically only the theoretically interesting comparisons are presented
Distribution of sample means
The following slides are available for a little more concrete review of the distribution of sample means.
Distribution of sample means
The distribution of sample means is a "theoretical" distribution between the sample and the population.
[Figure: Population → Distribution of sample means → Sample]
• Each entry is the mean of a group of scores
• The comparison distribution is a distribution of means
Distribution of sample means
A simple case
Population: 2, 4, 6, 8
• All possible samples of size n = 2
• Assumption: sampling with replacement
All 16 possible samples of size n = 2 and their means:

Sample   Mean     Sample   Mean     Sample   Mean     Sample   Mean
2, 2     2        4, 2     3        6, 2     4        8, 2     5
2, 4     3        4, 4     4        6, 4     5        8, 4     6
2, 6     4        4, 6     5        6, 6     6        8, 6     7
2, 8     5        4, 8     6        6, 8     7        8, 8     8

Distribution of sample means (mean: frequency):
2: 1    3: 2    4: 3    5: 4    6: 3    7: 2    8: 1

In the long run, the random selection of tiles leads to this predictable pattern.
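A short Python sketch (not from the slides) that enumerates the same 16 samples and previews the properties discussed next: the mean of the sample means equals the population mean, and their spread equals σ/√n.

# Sketch: enumerate all samples of size 2 (with replacement) from the population {2, 4, 6, 8}
# and check the mean and spread of the resulting distribution of sample means.
import numpy as np
from itertools import product

population = np.array([2, 4, 6, 8])
sample_means = np.array([np.mean(s) for s in product(population, repeat=2)])

pop_mean = population.mean()          # mu = 5
pop_sd = population.std()             # sigma (population SD)

print("number of samples:", len(sample_means))        # 16
print("mean of sample means:", sample_means.mean())   # 5.0, equals mu
print("SD of sample means:", sample_means.std())      # equals sigma / sqrt(2)
print("sigma / sqrt(n):", pop_sd / np.sqrt(2))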
Properties of the distribution of sample means
Shape
• If the population is normal, then the distribution of sample means will be normal
• If the sample size is large (n > 30), the distribution of sample means will be approximately normal regardless of the shape of the population
[Figure: population distribution and the corresponding distribution of sample means, n > 30]
Properties of the distribution of sample means
Center
• The mean of the distribution of sample means is equal to the mean of the population
• Same numeric value, different conceptual values
Properties of the distribution of sample means
Center: the mean of the distribution of sample means is equal to the mean of the population. Consider our earlier example.
Population: 2, 4, 6, 8
μ = (2 + 4 + 6 + 8) / 4 = 5
Distribution of sample means (mean: frequency): 2: 1, 3: 2, 4: 3, 5: 4, 6: 3, 7: 2, 8: 1
Mean of the sample means = (2 + 3 + 4 + 5 + 3 + 4 + 5 + 6 + 4 + 5 + 6 + 7 + 5 + 6 + 7 + 8) / 16 = 5
Properties of the distribution of sample means
Spread
• Depends on the standard deviation of the population
• And on the sample size
Putting them together, we get the standard deviation of the distribution of sample means, commonly called the standard error: σ/√n
Standard error
The standard error is the average amount that you'd expect a sample (of size n) to deviate from the population mean. In other words, it is an estimate of the error that you'd expect by chance (or by sampling).
Central Limit Theorem
All three of these properties are combined to form the Central Limit Theorem: for any population with mean μ and standard deviation σ, the distribution of sample means for sample size n will approach a normal distribution with a mean of μ and a standard deviation of σ/√n as n approaches infinity (a good approximation if n > 30).
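The theorem is easy to see by simulation. This sketch (not from the slides; the exponential population is an arbitrary, clearly non-normal choice) shows that means of samples of n = 30 center on μ with a spread close to σ/√n.

# Sketch: Central Limit Theorem by simulation, using a skewed (exponential) population.
import numpy as np

rng = np.random.default_rng(3)
n = 30                                   # sample size
# exponential(scale=1): population mean = 1, population SD = 1
sample_means = rng.exponential(scale=1.0, size=(20_000, n)).mean(axis=1)

print("mean of sample means:", round(sample_means.mean(), 3))   # close to mu = 1.0
print("SD of sample means:  ", round(sample_means.std(), 3))    # close to sigma/sqrt(30), about 0.183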
Distribution of sample means
Keep your distributions straight by taking care with your notation:
• Sample: mean X, standard deviation s
• Population: mean μ, standard deviation σ
• Distribution of sample means: mean equal to μ, standard deviation equal to the standard error, σ/√n