
SUMMARY

Two-sided t-test

$t \approx \dfrac{\bar{x}_1 - \bar{x}_2}{s_{\bar{x}_1 - \bar{x}_2}} \sqrt{n}$

This is not an exact formula! It just demonstrates the main ingredients.

$t = \dfrac{\bar{x} - \mu_0}{s} \sqrt{n}$

numerator: difference between means, i.e. variability between samples
denominator: variability within samples

Two-sided t-test

• The numerator indicates how much the means differ.
• This is an explained variation because it most likely results from the differences due to the treatment, or just from the differences in the populations (recall beer prices: different brands are differently expensive).
• The denominator is a measure of error. It measures individual differences of subjects.
• This is considered an error variation because we don't know why individual subjects in the same group are different.

$t \approx \dfrac{\bar{x}_1 - \bar{x}_2}{s_{\bar{x}_1 - \bar{x}_2}} \sqrt{n}$

[figure: 3 samples, annotated with explained variation (the numerator) and error variation (the denominator)]
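To make these ingredients concrete, here is a minimal R sketch (the two vectors are made-up illustration data, not from the lecture) of a two-sided, two-sample t-test: the numerator of the statistic comes from the difference between the sample means, the denominator from the variability within the samples.

# Hypothetical prices of two beer brands (illustration only)
brand1 <- c(32, 35, 30, 33, 36, 31)
brand2 <- c(28, 27, 30, 26, 29, 28)

t.test(brand1, brand2, alternative = "two.sided")   # Welch two-sample t-test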

ANOVA
• Compare as many means as you want with just one test.

$MS_W = \dfrac{SS_W}{df_W} = \dfrac{\sum_k \sum_i (x_i - \bar{x}_k)^2}{N - k}$

$MS_B = \dfrac{SS_B}{df_B} = \dfrac{\sum_k n_k (\bar{x}_k - \bar{x}_G)^2}{k - 1}$

$s = \sqrt{\dfrac{\sum_i (x_i - \bar{x})^2}{n - 1}} \implies s^2 = \dfrac{\sum_i (x_i - \bar{x})^2}{n - 1} = \dfrac{SS}{df}$

$t \approx \dfrac{\bar{x}_1 - \bar{x}_2}{s_{\bar{x}_1 - \bar{x}_2}} \sqrt{n}$

$F_{df_B,\,df_W} = \dfrac{MS_B}{MS_W}$
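A minimal R sketch of these formulas on three small made-up samples (illustration data, not from the lecture); the hand-computed MS_B, MS_W and F are checked against aov().

# Three hypothetical groups (illustration only)
g1 <- c(5, 6, 7, 6)
g2 <- c(8, 9, 7, 8)
g3 <- c(4, 5, 4, 5)

x   <- c(g1, g2, g3)
grp <- factor(rep(c("g1", "g2", "g3"), each = 4))

k <- nlevels(grp)        # number of groups
N <- length(x)           # total number of observations
grand_mean <- mean(x)

# Between-group mean square: MS_B = sum(n_k * (mean_k - grand_mean)^2) / (k - 1)
group_means <- tapply(x, grp, mean)
group_sizes <- tapply(x, grp, length)
MS_B <- sum(group_sizes * (group_means - grand_mean)^2) / (k - 1)

# Within-group mean square: MS_W = sum((x_i - mean of its group)^2) / (N - k)
MS_W <- sum((x - ave(x, grp))^2) / (N - k)

MS_B / MS_W              # the F statistic
summary(aov(x ~ grp))    # the same F from R's built-in one-way ANOVA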

Total variability
• What is the total number of degrees of freedom?
• Likewise, we have a total variation.

$df_B = k - 1$, $df_W = N - k$, so the total number of degrees of freedom is $df_B + df_W = N - 1$.

Hypothesis
• Let's compare three samples with ANOVA. Just try to guess what the hypotheses will be.

$H_0: \mu_1 = \mu_2 = \mu_3$
$H_A:$ at least one pair of samples is significantly different

• Follow-up multiple comparison steps – see which means are different from each other.

$F = \dfrac{\text{between-group variability}}{\text{within-group variability}}$

Multiple comparisons problem
• And there is another (more serious) problem with many t-tests. It is called the multiple comparisons problem.

http://www.graphpad.com/guides/prism/6/statistics/index.htm?beware_of_multiple_comparisons.htm
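To see why running many t-tests is a problem, here is a minimal R sketch (my own illustration, not from the lecture) of the familywise error rate: with m independent comparisons, each at significance level alpha, the chance of at least one false positive is 1 − (1 − alpha)^m.

alpha <- 0.05
m <- choose(5, 2)        # all pairwise t-tests among 5 groups -> 10 comparisons
1 - (1 - alpha)^m        # probability of at least one false positive, about 0.40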

NEW STUFF

Post hoc tests
• The F-test in ANOVA is the so-called omnibus test. It tests the means globally. It says nothing about which particular means are different.
• post hoc tests, multiple comparison tests
• Tukey Honestly Significant Differences

TukeyHSD(fit) # where fit comes from aov()
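A minimal end-to-end sketch of this workflow, using the beer price data set mentioned earlier in the lecture; the data frame beer_brands with columns Price and Brand follows the slide's own aov() call.

# One-way ANOVA followed by Tukey HSD post hoc comparisons
fit <- aov(Price ~ Brand, data = beer_brands)
summary(fit)      # omnibus F-test: do the brand means differ globally?
TukeyHSD(fit)     # which particular pairs of brands differ?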

ANOVA assumptions
• normality – all populations the samples come from are normal
• homogeneity of variance – variances are equal
• independence of observations – the results found in one sample don't affect the others

• The most influential is the independence assumption. Otherwise, ANOVA is relatively robust.
• We can sometimes violate
• normality – large sample size
• variance homogeneity – equal sample sizes + the ratio of any two variances does not exceed four
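A sketch of how the first two assumptions are often checked in R (my own addition, not prescribed by the lecture); fit and beer_brands refer to the aov sketch above.

# Normality: test the model residuals
shapiro.test(residuals(fit))

# Homogeneity of variance across groups
bartlett.test(Price ~ Brand, data = beer_brands)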

ANOVA kinds
• one-way ANOVA (analysis of variance with one-way classification, single-factor ANOVA)

aov(beer_brands$Price~beer_brands$Brand)

• two-way ANOVA (analysis of variance with two-way classification, two-factor ANOVA)
• Example: engagement ratio, measure two educational methods (with and without song) for men and women independently
• aov(engagement~method+sex)
• interactions between factors (a sketch follows below)

(engagement – dependent variable; method, sex – independent variables)
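A minimal sketch of the two-way case with an interaction term, assuming a data frame called study with columns engagement, method, and sex (the data frame name is mine; the formula follows the slide).

# Main effects only, as on the slide
fit_main <- aov(engagement ~ method + sex, data = study)

# Main effects plus the method x sex interaction
fit_int <- aov(engagement ~ method * sex, data = study)

summary(fit_int)   # the method:sex row tests the interaction between the factors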

CORRELATION

Introduction
• Up to this point we've been working with only one variable.
• Now we are going to focus on two variables.
• Two variables that are probably related. Can you think of some examples?
• weight and height
• time spent studying and your grade
• temperature outside and ankle injuries

Car data

Miles on a car   Value of the car
60 000           $12 000
80 000           $10 000
90 000           $9 000
100 000          $7 500
120 000          $6 000

• x – predictor, explanatory, independent variable
• What do you think y is called? Think about opposites to the x names.
• outcome
• determiner
• response
• stand-alone
• dependent

Car data (the same table as above)

• How may we show that these variables have a relationship? Tell me some of your ideas.
• scatterplot

Scatterplot
[figure: scatterplot]

Stronger relationship?
[figure: scatterplots]

Correlation
• Relation between two variables = correlation
• strong relationship = strong correlation, high correlation

Match these
[figure: scatterplots to match with the labels strong positive, strong negative, weak positive, weak negative]

Correlation coefficient
• r (Pearson's r) – a number that quantifies the relationship.

$r = \dfrac{\mathrm{cov}(X, Y)}{s_X s_Y}$

• numerator: covariance of X and Y. A statistic for how much X and Y co-vary, in other words, how much they vary together.
• denominator: standard deviations of X and Y. Describes how the two variables vary apart from each other, rather than with each other.
• r measures the strength of the relationship by looking at how closely the data fall along a straight line.

Covariance
• Watch the explanation video:

http://www.youtube.com/watch?v=35NWFr53cgA

$\mathrm{cov}(X, Y) = \dfrac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{n - 1}$

• divide by n − 1 for a sample but by n for a population
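A minimal R sketch computing the covariance and Pearson's r for the car data above (the numbers come from the slide's table; the variable names are mine).

miles <- c(60000, 80000, 90000, 100000, 120000)
value <- c(12000, 10000, 9000, 7500, 6000)

cov(miles, value)                             # sample covariance (divides by n - 1)
cor(miles, value)                             # Pearson's r, strongly negative here
cov(miles, value) / (sd(miles) * sd(value))   # the same value: cov / (s_x * s_y)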

Coefficient of determination
• Coefficient of determination – the percentage of variation in Y explained by the variation in X.
• Percentage of variance in one variable that is accounted for by the variance in the other variable.

[figure: scatterplots with r2 = 0, r2 = 0.25, r2 = 0.81]

from http://www.sagepub.com/upm-data/11894_Chapter_5.pdf

[figure: scatterplots to match with the r values +1, −1, +0.14, +0.93, −0.73]

• If X is age in years and Y is age in months, what will the correlation coefficient be?
• +1.0
• X is hours you're awake a day, Y is hours you're asleep a day.
• −1.0

Crickets
• Find a cricket, count the number of its chirps in 15 seconds, add 37, and you have just approximated the outside temperature in degrees Fahrenheit.
• National Weather Service Forecast Office:

http://www.srh.noaa.gov/epz/?n=wxcalc_cricketconvert

chirps in 15 sec   temperature (°F)
18                 57
20                 60
21                 64
23                 65
27                 68
30                 71
34                 74
39                 77
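A minimal R sketch (data taken from the table above, variable names mine) fitting a line to these points and comparing it to the "chirps + 37" rule of thumb.

chirps <- c(18, 20, 21, 23, 27, 30, 34, 39)
temp_f <- c(57, 60, 64, 65, 68, 71, 74, 77)

fit <- lm(temp_f ~ chirps)
coef(fit)             # intercept and slope of the fitted line
cor(chirps, temp_f)   # strength of the linear relationship

# The rule of thumb corresponds roughly to a slope of 1 and an intercept of 37
predict(fit, newdata = data.frame(chirps = 25))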

Hypothesis testing
• Even when two variables describing a sample of data seem to have a relationship, this could be just due to chance. The situation in the population may be different.
• r … sample correlation coefficient, ρ … population correlation coefficient
• What will the hypotheses look like?

Quiz (choices A–D): is the null hypothesis $H_0: r = 0$ or $H_0: \rho = 0$?

Hypothesis testing
• The test statistic has a t-distribution.
• Example: we are measuring the relationship between two variables, we have 25 participants, and we get the t-statistic = 2.71. Is there a significant relationship between X and Y?
• non-directional (two-tailed) test, df = n − 2 = 23
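A minimal R sketch finishing this example (my own addition; df = n − 2 is the usual degrees of freedom for a correlation test).

t_stat <- 2.71
df <- 25 - 2                                       # n - 2 = 23

2 * pt(abs(t_stat), df = df, lower.tail = FALSE)   # two-tailed p-value, below 0.05 -> significant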

Confidence intervals

Statistics course from https://www.udacity.com

try to guess:

95% CI = (−0.3995, 0.0914)
• reject the null
• fail to reject the null

95% CI = (0.1369, 0.5733)
• reject the null
• fail to reject the null

(A confidence interval that contains 0 means we fail to reject the null; one that excludes 0 means we reject it.)

Hypothesis testing
• A statistically correct way to decide about the relationship between two variables is, of course, hypothesis testing.
• In these two particular cases, the decision follows from the two confidence intervals above.

Correlation vs. causation
• causation – one variable causes another to happen
• e.g. the fact that it is raining causes people to take their umbrellas to work
• correlation – just means there is a relationship
• e.g. do happy people have more friends? Are they just happy because they have more friends? Or do they act a certain way which causes them to have more friends?

Correlation vs. causation

• There is a strong relationship between ice cream consumption and the crime rate.
• How could this be true?
• The two variables must have something in common with one another. It must be something that relates to both the level of ice cream consumption and the level of the crime rate. Can you guess what that is?
• Outside temperature.

from causeweb.org

Correlation vs. causation
• If you stop selling ice cream, does the crime rate drop? What do you think?
• It doesn't, because of the simple principle that correlations express the association that exists between two or more variables; they have nothing to do with causality.
• In other words, just because the level of ice cream consumption and the crime rate increase/decrease together does not mean that a change in one necessarily results in a change in the other.
• You can't interpret associations as being causal.

Correlation vs. causation
• In the ice cream example, there exists a variable (outside temperature) we did not think to control for.
• Such a variable is called a third variable, confounding variable, or lurking variable.
• The methodologies of scientific studies therefore need to control for these factors to avoid a 'false positive' conclusion that the dependent variables are in a causal relationship with the independent variable.

• Let's have a look at the dependence of the murder rate on temperature.

[figures: rates plotted against temperature, for a high assault period and a low assault period]

from http://www-personal.umich.edu/~bbushman/BWA05a.pdf (Journal of Personality and Social Psychology, 2005, Vol. 89, No. 1, 62–66)

http://xkcd.com/552/

Correlation and regression analysis
• Correlation analysis investigates the relationships between variables using graphs or correlation coefficients.
• Regression analysis answers questions like: what kind of relationship exists between variables X and Y (linear, quadratic, …), is it possible to predict Y using X, and with what error?

Simple linear regression
• also called single linear regression
• one y (dependent variable), one x (independent variable)
• the fitted line is y = a + bx: a – y-intercept (constant), b – slope
• ŷ is the estimated value; to distinguish it from the actual value y corresponding to the given x, statisticians use the hat notation.

Data set
• Students in higher grades carry more textbooks.
• The weight of the textbooks depends on the weight of the student.

[figure: scatterplot with one outlier marked; strong positive correlation, r = 0.926]

from Intermediate Statistics for Dummies

Build a model
• Find a straight line y = a + bx (a sketch in R follows)
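A minimal R sketch of fitting such a line, assuming a data frame called textbooks with columns student_weight and textbook_weight (the names are mine; the coefficients 3.69 and 0.113 cited below come from the textbook example).

# Fit the straight line textbook_weight = a + b * student_weight
fit <- lm(textbook_weight ~ student_weight, data = textbooks)

coef(fit)      # a (intercept, 3.69 in the textbook example) and b (slope, 0.113)
summary(fit)   # coefficients with significance tests, plus r-squared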

Interpretation
• y-intercept (3.69 in our case)
• it may or may not have a practical meaning
• Does it fall within the actual values in the data set? If yes, it is a clue it may have a practical meaning.
• Does it fall within negative territory where negative y-values are not possible? (e.g. weights can't be negative)
• Does the value x = 0 have a practical meaning (a student weighing 0)?
• However, even if it has no meaning, it may be necessary (i.e. significantly different from zero)!
• slope
• change in y due to a one-unit increase in x (i.e. if the student's weight increases by 1 pound, the textbook weight increases by 0.113 pounds)

• now you can use the regression line to estimate the y value for a new x

Regression model conditions
• After building a regression model you need to check whether the required conditions are met.
• What are these conditions?
• The y's have to have a normal distribution for each value of x.
• The y's have to have a constant spread (standard deviation) for each value of x.

Normal y's for every x
• For any value of x, the population of possible y-values must have a normal distribution.

from Intermediate Statistics for Dummies

Homoscedasticity condition
• As you move from left to right on the x-axis, the spread of the y-values around the line remains the same.

source: wikipedia.org

Confidence and prediction limits

95% confidence limits – this interval includes the true regression line with 95% probability. (confidence band)

95% prediction limits – this interval represents the 95% probability for the values of the dependent variable, i.e. 95% of data points lie within these lines. (prediction band)
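A sketch of how both bands can be obtained in R for the fit model from the textbook sketch above; interval = "confidence" and interval = "prediction" are standard options of predict() for linear models.

new_x <- data.frame(student_weight = c(80, 100, 120))   # hypothetical new x values

# Confidence band: uncertainty about the true regression line
predict(fit, newdata = new_x, interval = "confidence", level = 0.95)

# Prediction band: uncertainty about individual future y values (wider)
predict(fit, newdata = new_x, interval = "prediction", level = 0.95)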

Residuals
• To check the normality of the y-values you need to measure how far off your predictions were from the actual data, and to explore these errors.
• residual = actual value − predicted value

[figure: a data point (actual value), the corresponding predicted value on the regression line, and the residual between them]

from Intermediate Statistics for Dummies

Residuals
• The residuals are data just like any other, so you can find their mean (which is zero!!) and their standard deviation.
• Residuals can be standardized, i.e. converted to Z-scores, so you can see where each one falls on the standard normal distribution.
• Plotting residuals on a graph – residual plots.
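A sketch of these steps in R for the fit model from the textbook sketch; residuals(), rstandard(), and the default plot() method for linear models are standard base R.

res <- residuals(fit)
mean(res)              # essentially zero (up to rounding error)
sd(res)                # spread of the residuals

rstandard(fit)         # standardized residuals (roughly Z-scores)

plot(fit, which = 1)   # residuals vs. fitted values (checks constant spread)
plot(fit, which = 2)   # normal Q-Q plot of the residuals (checks normality)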

Using r2 to measure model fit
• r2 measures what percentage of the variability in y is explained by the model.
• The y-values of the data you collect have a great deal of variability in and of themselves.
• You look for another variable (x) that helps you explain that variability in the y-values.
• After you put that x variable into the model, and you find it's highly correlated with y, you want to find out how well this model did at explaining why the values of y are different.

Interpreting r2

• high r2 (80–90% is extremely high, 70% is fairly high)
• A high percentage of variability means that the line fits well, because there is not much left to explain about the value of y other than using x and its relationship to y.
• small r2 (0–30%)
• The model containing x doesn't help much in explaining the differences in the y-values.
• The model would not fit well. You need another variable to explain y other than the one you already tried.
• middle r2 (30–70%)
• x does help somewhat in explaining y, but it doesn't do the job well enough on its own.
• Add one or more variables to the model to help explain y more fully as a group.
• Textbook example: r = 0.93, r2 = 0.8649. Approximately 86% of the variability you find in textbook weights is explained by the average student weight. A fairly good model.
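A short sketch of reading r2 off the fitted model in R (again using the assumed textbooks data frame and fit model from the sketches above).

summary(fit)$r.squared                                       # about 0.8649 in the textbook example
cor(textbooks$student_weight, textbooks$textbook_weight)^2   # the same value: r squared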