
Page 1: Chapter 8 – Regression 2

Chapter 8 – Regression 2

Basic review, estimating the standard error of the estimate, and shortcut problems and solutions.

Page 2: Chapter 8 – Regression 2

You can use the regression equation when:

1. the relationship between X and Y is linear,

2. r falls outside the CI.95 around 0.000 and is therefore a statistically significant correlation, and

3. X is within the range of X scores observed in your sample.

Page 3: Chapter 8 – Regression 2


Simple problems using the regression equation

tY' = r * tX

tY' = .150 * 0.40 = 0.06

tY' = .40 * -1.70 = -0.68

tY' = .40 * 1.70 = 0.68

Page 4: Chapter 8 – Regression 2

Predictions from Raw Data

1. Calculate the t score for X: tX = (X - X-bar) / sX

2. Solve the regression equation: tY' = r * tX

3. Transform the estimated t score for Y into a raw score: Y' = Y-bar + (tY' * sY)
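These three steps translate directly into code. Here is a minimal Python sketch (the function names are my own, and it assumes you have already checked that r is significant and that X lies within the sample's range of X scores):

```python
def raw_to_t(x, mean_x, s_x):
    """Step 1: convert a raw X score to a t score."""
    return (x - mean_x) / s_x

def regression_estimate(r, t_x):
    """Step 2: the regression equation in t-score form, tY' = r * tX."""
    return r * t_x

def t_to_raw(t_y, mean_y, s_y):
    """Step 3: convert the estimated t score for Y back to a raw score."""
    return mean_y + t_y * s_y
```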

Page 5: Chapter 8 – Regression 2

Predicting from and to raw scores

Problem: Estimate the midterm point total given a study time of 400 minutes.

It is given that the estimated mean of the study time is 560 minutes and the estimated standard deviation is 216.02 (range = 260-860). It is given that the estimated mean of midterm points is 76 and their estimated standard deviation is 7.98. There were 10 pairs of tX, tY scores. The estimated correlation coefficient is .851.

Page 6: Chapter 8 – Regression 2


Can you use the regression equation?

Page 7: Chapter 8 – Regression 2

df     nonsignificant    .05    .01
1      -.996 to .996     .997   .9999
2      -.949 to .949     .950   .990
3      -.877 to .877     .878   .959
4      -.810 to .810     .811   .917
5      -.753 to .753     .754   .874
6      -.706 to .706     .707   .834
7      -.665 to .665     .666   .798
8      -.631 to .631     .632   .765
9      -.601 to .601     .602   .735
10     -.575 to .575     .576   .708
11     -.552 to .552     .553   .684
12     -.531 to .531     .532   .661
...
100    -.194 to .194     .195   .254
200    -.137 to .137     .138   .181
300    -.112 to .112     .113   .148
500    -.087 to .087     .088   .115
1000   -.061 to .061     .062   .081
2000   -.043 to .043     .044   .058
10000  -.019 to .019     .020   .026

Page 8: Chapter 8 – Regression 2

YES!

r(8) = .851, p < .01

400 minutes is inside the range of X scores seen in the random sample (260-860 minutes).

Page 9: Chapter 8 – Regression 2

Predicting from and to raw scores

1. Translate raw X to a tX score.

X      X-bar   sX       (X - X-bar) / sX = tX
400    560     216.02   (400 - 560) / 216.02 = -0.74

Page 10: Chapter 8 – Regression 2

Use the regression equation

2. Find the value of tY'.

r       r * tX = tY'
.851    .851 * -0.74 = -0.63

Page 11: Chapter 8 – Regression 2

3. Translate tY' to raw Y'.

Y-bar    sY     Y-bar + (tY' * sY) = Y'
76.00    7.98   76.00 + (-0.63 * 7.98) = 70.97
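Putting the three steps together in Python with the numbers above (a sketch; the intermediate values are kept unrounded, and the final answer still comes out to 70.97):

```python
mean_x, s_x = 560.0, 216.02   # study time (minutes)
mean_y, s_y = 76.0, 7.98      # midterm points
r = 0.851

t_x = (400 - mean_x) / s_x    # step 1: about -0.74
t_y = r * t_x                 # step 2: about -0.63
y_est = mean_y + t_y * s_y    # step 3: predicted midterm points

print(round(t_x, 2), round(t_y, 2), round(y_est, 2))   # -0.74 -0.63 70.97
```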

Page 12: Chapter 8 – Regression 2

A Caution

Never assume that a correlation will stay linear outside of the range you originally observed.

Therefore, never use the regression equation to make predictions from X values outside of the range you found in your sample.

Example: predicting the height of a 50-year-old adult from a study that examined the correlation of age and height in a sample composed only of children aged 14 or less.

Page 13: Chapter 8 – Regression 2

Correlation Characteristics: Which line best shows the relationship between age (X) and height (Y)?

Linear vs Curvilinear

Page 14: Chapter 8 – Regression 2


Reviewing the r table and reporting the results of calculating r from a random sample

Page 15: Chapter 8 – Regression 2

How the r table is laid out: the important columns

Column 1 of the r table shows degrees of freedom for correlation and regression (dfREG): dfREG = nP - 2.

Column 2 shows the CI.95 around 0.000 for varying degrees of freedom.

Column 3 shows the absolute value of the r that falls just outside the CI.95. Any r this far or further from 0.000 falsifies the hypothesis that rho = 0.000 and can be used in the regression equation to make predictions of Y scores for people who were not in the original sample but who were part of the population from which the sample is drawn.

Page 16: Chapter 8 – Regression 2

(The r table is repeated here; see Page 7.)

Page 17: Chapter 8 – Regression 2

(The r table is repeated here, with the following annotations; see Page 7.)

Find your degrees of freedom (nP - 2) in the df column.

If r falls within the 95% CI around 0.000, the result is not significant. You cannot reject the null hypothesis. You must assume that rho = 0.000.

Does the absolute value of r equal or exceed the value in the .05 column? Then r is significant with alpha = .05. If r is significant, you can consider it an unbiased, least squares estimate of rho. You can use it in the regression equation to estimate Y scores.

Page 18: Chapter 8 – Regression 2

Can we generalize to the population from the correlation in the sample?

A Type 1 error involves saying that there is a correlation in the population as a whole, when the correlation is actually 0.000 (and the null is true).

We carefully guard against Type 1 error by using significance tests to try to falsify the null hypothesis.

Page 19: Chapter 8 – Regression 2

Example: Anchovy pizza and horror films, rho = 0.000

H1: People who enjoy food with strong flavors also enjoy other strong sensations.

H0: There is no relationship between enjoying food with strong flavors and enjoying other strong sensations.

Ratings (scale 0-9):

anchovies:     7  7  3  3  0  8  4  1  1  1
horror films:  7  9  8  6  9  6  5  2  1  6

Can we reject the null hypothesis?

Page 20: Chapter 8 – Regression 2

Can we reject the null hypothesis?

[Scatterplot of the 10 pairs: pizza (anchovy) scores on the X axis, horror film scores on the Y axis, both running 0 to 8.]

Page 21: Chapter 8 – Regression 2

Can we reject the null hypothesis?

We do the math and we find that:

r = .352

df = 8

Page 22: Chapter 8 – Regression 2

(The r table is repeated here across several slides; see Page 7. With dfREG = 8, the CI.95 around 0.000 runs from -.631 to .631.)
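Before the answer on the next slide, here is a minimal Python check of the lookup, using only the df = 8 row of the table on Page 7; the second r is the study-time example from Page 8, included for contrast:

```python
# df = 8 row of the r table (Page 7): .05 and .01 critical values
CRIT_05, CRIT_01 = 0.632, 0.765

for r in (0.352, 0.851):
    if abs(r) >= CRIT_01:
        verdict = "p < .01"
    elif abs(r) >= CRIT_05:
        verdict = "p < .05"
    else:
        verdict = "n.s."
    print(f"r(8) = {r:.3f}, {verdict}")

# Output:
# r(8) = 0.352, n.s.
# r(8) = 0.851, p < .01
```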

Page 27: Chapter 8 – Regression 2

This finding falls within the CI.95 around 0.000.

We call such findings “nonsignificant.” Nonsignificant is abbreviated n.s. We would report this finding as follows: r(8) = 0.352, n.s.

Given that it fell inside the CI.95, we must assume that rho actually equals zero and that our sample r is .352 instead of 0.000 solely because of sampling fluctuation.

We go back to predicting that everyone will score at the mean of Y.

Page 28: Chapter 8 – Regression 2

In fact, the null hypothesis was correct; rho = 0.000.

I made up that example using numbers randomly selected from a random number table. So there really was no relationship between the two sets of scores: rho really equaled 0.000.

But samples don't give you an r of exactly zero; they fluctuate around 0.000.

Significance testing is your protection against mistaking sampling fluctuation for a real correlation. Significance testing protects against Type 1 error.

Page 29: Chapter 8 – Regression 2

We use significance testing to protect us from Type 1 error.

Our sample gave us an r of .352. Without the r table, we could have thought that .352 was far enough from zero to represent a true correlation in the population. In fact, 0.352 was the product of sampling fluctuation alone.

Significance testing is your protection against mistaking sampling fluctuation for a real correlation. Significance testing protects against Type 1 error.

Page 30: Chapter 8 – Regression 2

How to report a significant r

For example, let's say that you had a sample (nP = 30) and r = -.400.

Looking under nP - 2 = 28 dfREG, we find the interval consistent with the null is between -.360 and +.360.

So we are outside the CI.95 for rho = 0.000. We would write that result as r(28) = -.400, p < .05. That tells you the dfREG, the value of r, and that you can expect an r that far from 0.000 five or fewer times in 100 when rho = 0.000.

Page 31: Chapter 8 – Regression 2

Then there is Column 4

Column 4 shows the values that lie outside a CI.99. (The CI.99 itself isn't shown the way the CI.95 is in Column 2 because it isn't important enough.)

However, Column 4 gives you bragging rights. If your r is as far or further from 0.000 as the number in Column 4, you can say there is 1 or fewer chance in 100 of an r being this far from zero (p < .01).

For example, let's say that you had a sample (nP = 30) and r = -.525. The critical value at .01 is .463. You are further from 0.000 than that. So you can brag.

You write that result as r(28) = -.525, p < .01.

Page 32: Chapter 8 – Regression 2

To summarize

If r falls inside the CI.95 around 0.000, it is nonsignificant (n.s.) and you can't use the regression equation (e.g., r(28) = .300, n.s.).

If r falls outside the CI.95, but not as far from 0.000 as the number in Column 4, you have a significant finding and can use the regression equation (e.g., r(28) = -.400, p < .05).

If r is as far or further from zero as the number in Column 4, you can use the regression equation and brag while doing it (e.g., r(28) = -.525, p < .01).
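The three-way rule is straightforward to encode. Below is a minimal Python sketch; report_r is a hypothetical helper name, and the critical values must be supplied from the r table row for your dfREG (here the df = 28 row: .361 is inferred from the slide's CI.95 of plus or minus .360, and .463 is the .01 value quoted above):

```python
def report_r(r, df_reg, crit_05, crit_01):
    """Format an r result using the table's critical values for df_reg."""
    if abs(r) >= crit_01:
        return f"r({df_reg}) = {r:.3f}, p < .01"   # bragging rights
    if abs(r) >= crit_05:
        return f"r({df_reg}) = {r:.3f}, p < .05"   # significant: regression OK
    return f"r({df_reg}) = {r:.3f}, n.s."          # inside CI.95: predict the mean of Y

print(report_r(0.300, 28, 0.361, 0.463))    # r(28) = 0.300, n.s.
print(report_r(-0.400, 28, 0.361, 0.463))   # r(28) = -0.400, p < .05
print(report_r(-0.525, 28, 0.361, 0.463))   # r(28) = -0.525, p < .01
```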

Page 33: Chapter 8 – Regression 2

Can you reject H0?

df    nonsignificant    .05    .01
10    -.575 to .575     .576   .708
11    -.552 to .552     .553   .684
12    -.531 to .531     .532   .661
13    -.513 to .513     .514   .641
14    -.496 to .496     .497   .623
15    -.481 to .481     .482   .606
16    -.467 to .467     .468   .590
17    -.455 to .455     .456   .575
18    -.443 to .443     .444   .561
19    -.432 to .432     .433   .549
...
40    -.303 to .303     .304   .393
50    -.272 to .272     .273   .354
60    -.249 to .249     .250   .325

r = .386, nP = 19

dfREG = 17

Page 34: Chapter 8 – Regression 2

Can you reject H0?

(Same r table excerpt as on Page 33.)

r = -.386, nP = 47

dfREG = 45
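For these two practice problems, here is a quick Python sketch of the lookup. The critical values come from the table rows shown on Page 33; the table has no dfREG = 45 row, so the sketch conservatively uses the df = 40 row, a choice that is mine rather than the slides':

```python
problems = [
    # (r, df_reg, crit_05, crit_01) -- critical values from the r table rows above
    (0.386, 17, 0.456, 0.575),    # first problem: df = 17 row
    (-0.386, 45, 0.304, 0.393),   # second problem: no df = 45 row; df = 40 row is conservative
]

for r, df, c05, c01 in problems:
    if abs(r) >= c01:
        print(f"r({df}) = {r:.3f}, p < .01")
    elif abs(r) >= c05:
        print(f"r({df}) = {r:.3f}, p < .05")
    else:
        print(f"r({df}) = {r:.3f}, n.s.")

# Output:
# r(17) = 0.386, n.s.       (.386 < .456, inside the CI.95)
# r(45) = -0.386, p < .05   (.386 >= .304 but < .393)
```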

Page 35: Chapter 8 – Regression 2


How much better than the mean can we guess?

Page 36: Chapter 8 – Regression 2

Improved prediction

If we can use the regression equation rather than the mean to make individualized estimates of Y scores, how much better are our estimates?

We are making predictions about scores on the Y variable from our knowledge of the statistically significant correlation between X & Y and the fact that we know someone's X score.

The average unsquared error when we predict that everyone will score at the mean of Y equals sY, the ordinary standard deviation of Y.

How much better than that can we do?

Page 37: Chapter 8 – Regression 2

Estimating the standard error of the estimate the (very) long way

Calculate the correlation (which includes calculating s for Y).

If the correlation is significant, you can use the regression equation to make individualized predictions of scores on the Y variable.

The average unsquared error of prediction when you do that is called the estimated standard error of the estimate.

Page 38: Chapter 8 – Regression 2


Example for Prediction Error

A study was performed to investigate whether the quality of an image affects reading time.

The experimental hypothesis was that reduced quality would slow down reading time.

Quality was measured on a scale of 1 to 10. Reading time was in seconds.

Page 39: Chapter 8 – Regression 2

Quality vs Reading Time data: Compute the correlation

Quality (scale 1-10):      4.30  4.55  5.55  5.65  6.30  6.45  6.45
Reading time (seconds):    8.1   8.5   7.8   7.3   7.5   7.3   6.0

Is there a relationship? Check for linearity. Compute r.

Page 40: Chapter 8 – Regression 2

Calculate t scores for X

X       X - X-bar   (X - X-bar)²   tX = (X - X-bar) / sX
4.30    -1.31       1.71           -1.48
4.55    -1.06       1.12           -1.19
5.55    -0.06       0.00           -0.07
5.65     0.04       0.00            0.05
6.30     0.69       0.48            0.78
6.45     0.84       0.71            0.95
6.45     0.84       0.71            0.95

ΣX = 39.25, n = 7, X-bar = 5.61

SSW = 4.73
MSW = 4.73 / (7 - 1) = 0.79
sX = 0.89

Page 41: Chapter 8 – Regression 2

Calculate t scores for Y

Y      Y - Y-bar   (Y - Y-bar)²   tY = (Y - Y-bar) / sY
8.1     0.60       0.36            0.76
8.5     1.00       1.00            1.26
7.8     0.30       0.09            0.38
7.3    -0.20       0.04           -0.25
7.5     0.00       0.00            0.00
7.3    -0.20       0.04           -0.25
6.0    -1.50       2.25           -1.89

ΣY = 52.5, n = 7, Y-bar = 7.50

SSW = 3.78
MSW = 3.78 / (7 - 1) = 0.63
sY = 0.794

Page 42: Chapter 8 – Regression 2

Plot t scores

tX:   -1.48   -1.19   -0.07    0.05    0.78    0.95    0.95
tY:    0.76    1.28    0.39   -0.25    0.00   -0.25   -1.89

Page 43: Chapter 8 – Regression 2

t score plot with best fitting line: linear? YES!

[Scatterplot of the t scores with the best-fitting line. X axis: Image quality (t score), -2.00 to 2.00; Y axis: Reading time (t score), -2.00 to 2.00.]

Page 44: Chapter 8 – Regression 2

Calculate r

tX       tY       tX - tY   (tX - tY)²
-1.48     0.76    -2.24      5.02
-1.19     1.28    -2.47      6.10
-0.07     0.39    -0.46      0.21
 0.05    -0.25     0.30      0.09
 0.78     0.00     0.78      0.61
 0.95    -0.25     1.20      1.44
 0.95    -1.88     2.83      8.01

Σ(tX - tY)² = 21.48

Σ(tX - tY)² / (nP - 1) = 21.48 / 6 = 3.580

r = 1 - (1/2 * 3.580) = 1 - 1.79 = -0.790
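Here is the same shortcut as a short Python sketch, using the rounded t scores from this slide. Because those t scores are rounded to two decimals, the result (-0.79) matches the slide; unrounded data would give a slightly different value:

```python
t_x = [-1.48, -1.19, -0.07, 0.05, 0.78, 0.95, 0.95]
t_y = [0.76, 1.28, 0.39, -0.25, 0.00, -0.25, -1.88]

n_p = len(t_x)
ss_diff = sum((a - b) ** 2 for a, b in zip(t_x, t_y))   # sum of squared (tX - tY)
r = 1 - ss_diff / (2 * (n_p - 1))                       # the shortcut formula

print(round(ss_diff, 2), round(r, 2))   # 21.48 -0.79
```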

Page 45: Chapter 8 – Regression 2

Check whether r is significant

r = -0.790

df = nP - 2 = 5

alpha = .05. Look in the r table: with 5 dfREG, the CI.95 goes from -.753 to +.753.

r is significant! r(5) = -.790, p < .05

Page 46: Chapter 8 – Regression 2

We can calculate the Y' for every raw X

X:    4.30  4.55  5.55  5.65  6.30  6.45  6.45
Y':   8.42  8.23  7.54  7.47  7.01  6.91  6.91

Page 47: Chapter 8 – Regression 2

Can we show mathematically that regression estimates are better than mean estimates?

Y:      8.1   8.5   7.8   7.3   7.5   7.3   6.0
Y':     8.42  8.23  7.54  7.47  7.01  6.91  6.91
Y-bar:  7.5   7.5   7.5   7.5   7.5   7.5   7.5

To calculate the standard deviation, we take deviations of Y from the mean of Y, square them, add them up, divide by degrees of freedom, and then take the square root.

To calculate the standard error of the estimate, sEST, we will take the deviations of each raw Y score from its regression equation estimate, square them, add them up, divide by degrees of freedom, and take the square root.

We expect, of course, that there will be less error if we use regression.

Page 48: Chapter 8 – Regression 2

Estimated standard error of the estimate

Y      Y'      Y - Y'   (Y - Y')²
8.1    8.42    -0.32     0.10
8.5    8.23     0.27     0.07
7.8    7.54     0.26     0.07
7.3    7.47    -0.17     0.03
7.5    7.01     0.49     0.24
7.3    6.91     0.39     0.15
6.0    6.91    -0.91     0.83

SSRES = 1.49
MSRES = 1.49 / (7 - 2) = 0.298
sEST = 0.546
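The long-way computation is easy to verify in Python from the Y and Y' columns above; this sketch reproduces SSRES = 1.49, MSRES = 0.298, and sEST = 0.546:

```python
from math import sqrt

y      = [8.1, 8.5, 7.8, 7.3, 7.5, 7.3, 6.0]          # observed reading times
y_pred = [8.42, 8.23, 7.54, 7.47, 7.01, 6.91, 6.91]   # regression estimates Y'

n_p = len(y)
ss_res = sum((obs - est) ** 2 for obs, est in zip(y, y_pred))
ms_res = ss_res / (n_p - 2)    # divide by dfREG = nP - 2
s_est = sqrt(ms_res)

print(round(ss_res, 2), round(ms_res, 3), round(s_est, 3))   # 1.49 0.298 0.546
```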

Page 49: Chapter 8 – Regression 2

How much better?

MSRES = 0.298; MSY = 0.63

(MSY - MSRES) / MSY = (.63 - .298) / .63 = .527 ≈ 53%

About 53% less squared error when we use the regression equation instead of the mean to predict Y scores.

Page 50: Chapter 8 – Regression 2

How much better is the estimated standard error of the estimate than the estimated standard deviation?

sEST = 0.546; sY = 0.794

(sY - sEST) / sY = (.794 - .546) / .794 = .312 ≈ 31%

31% less error of prediction (using unsquared units) when we use the regression equation instead of the mean to predict.

Page 51: Chapter 8 – Regression 2

Mathematical magic

There is usually an alternative formula for calculating statistics that is easier to perform.

We went through a lot of extra steps to calculate sEST = 0.546. It is not necessary to calculate all of the estimated Y scores, find the difference between each actual Y score and Y', then square, sum, and divide by dfREG.

Page 52: Chapter 8 – Regression 2

Another way to phrase it: How much error did we get rid of?

Treat it as a weight loss problem. If Jack is 30 pounds overweight and he loses 40% of it, how much is he still overweight?

He lost .400 * 30 pounds = 12 pounds.

He has 30 - 12 = 18 pounds left to lose.

Page 53: Chapter 8 – Regression 2

How did we solve that problem?

First we found how much weight Jack had gotten rid of. That equaled the percent he lost (expressed as a proportion) times the amount overweight he started with.

He was 30 pounds overweight and lost 40% of it: 30 * .400 = 12.00. He lost 12 pounds.

Page 54: Chapter 8 – Regression 2

Then we found how much he was still overweight.

He started off 30 pounds overweight. He lost 12.00 pounds. So he had 30 - 12 = 18 pounds of overweight left.

To find what is left, subtract what you got rid of from what you started with.

Page 55: Chapter 8 – Regression 2


So to compute how much of something is left after some is lost, you need to know how much there was to start with and what percentage was gotten rid of.

Percentage gotten rid of times original quantity = amount gotten rid of.

Original quantity minus amount gotten rid of = what’s left.

Page 56: Chapter 8 – Regression 2

SSY = error to start; r² = proportion of error lost

SSY is the total amount of error we start with when predicting scores on Y. It is the amount of error when everyone is predicted to score at the mean.

The proportion of error you get rid of by using the regression equation as your predictor equals Pearson's correlation coefficient squared (r²)!

Page 57: Chapter 8 – Regression 2

To get the total error left, find how much you got rid of, then subtract from what you started with.

Amount you got rid of: SSY * r²

Amount left: SSRES = SSY - (SSY * r²)

Page 58: Chapter 8 – Regression 2


Now you want to estimate the average amount of unsquared error you will have left if you use the regression equation to make predictions for the whole population.

Page 59: Chapter 8 – Regression 2

SSRES = sum of squared error left when using the regression equation in your sample.

As usual, to estimate the average squared error of prediction in the population when you use the regression equation to predict Y scores, divide the sum of squares by degrees of freedom.

Page 60: Chapter 8 – Regression 2

The only change is that you used the regression equation to get SSRES.

So you divide SSRES by dfREG = nP - 2 to get MSRES, the average amount of squared error you will have left when you use the regression equation.

Then take the square root of MSRES to get sEST. sEST is your best estimate of the average unsquared error of prediction when you properly use the regression equation to predict Y scores.

Remember: to properly use the regression equation, r must be significant and X must be within the range of X scores observed in your random sample.

Page 61: Chapter 8 – Regression 2

Here are the formulae:

Residual sum of squared error if the regression equation is used: SSRES = SSY - (SSY * r²)

Estimated average amount of squared error left: MSRES = SSRES / dfREG = SSRES / (nP - 2)

Estimated average amount of unsquared error left: sEST = square root of MSRES

Page 62: Chapter 8 – Regression 2

Computing sEST the easier way!

In the problem for which we computed sEST the long way, we already knew that SSY = 3.78 and r = -0.790. Thus, r² = (-0.790)² = 0.624. Here is the computation:

SSRES = SSY - (SSY * r²) = 3.78 - (3.78 * 0.624) = 1.42

MSRES = 1.42 / (7 - 2) = 0.284

sEST = 0.533
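The shortcut is only a few lines in Python; this sketch reproduces the numbers above:

```python
from math import sqrt

ss_y, r, n_p = 3.78, -0.790, 7

ss_res = ss_y - ss_y * r ** 2   # 3.78 - (3.78 * 0.624) = 1.42
ms_res = ss_res / (n_p - 2)     # 0.284
s_est = sqrt(ms_res)            # 0.533

print(round(ss_res, 2), round(ms_res, 3), round(s_est, 3))   # 1.42 0.284 0.533
```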

Page 63: Chapter 8 – Regression 2

How much better?

sEST = 0.533; sY = 0.794

(sY - sEST) / sY = (.794 - .533) / .794 = .329 ≈ 33%

33% less error when we use the regression equation instead of the mean to predict. Note: the difference between 33% here and 31% when we calculated the long way is mostly due to rounding error in the long calculation; 33% is the more accurate figure.

Page 64: Chapter 8 – Regression 2

Stating the obvious:

The estimated standard deviation (s) was the estimated average unsquared distance of scores in the population from mu. If we are looking at Y scores, it is the average unsquared difference of Y scores from the mean of Y.

When using the regression equation we are predicting Y scores.

The estimated standard error of the estimate (sEST) is the estimated average unsquared distance of Y scores in the population from the regression-equation-based predicted Y scores.

Both reflect the error of prediction. Using the regression equation individualizes prediction. If r is significant, and prediction is restricted to values of X within the range of X scores seen in the random sample, using the regression equation leads to less error.

Page 65: Chapter 8 – Regression 2

Do one yourself.

Assume the original sum of squares for error is 420.00, nP = 22, and the sum of the squared differences between the tX and tY scores is 12.60.

What is r? Is r statistically significant? Write the results as you would in a report.

What is the estimated average unsquared distance of Y scores from the regression line?

What percent improvement is obtained when s is compared to sEST?

Page 66: Chapter 8 – Regression 2

Answers: What is r? Is it significant?

Compute r:

Σ(tX - tY)² = 12.60

Σ(tX - tY)² / (nP - 1) = 12.60 / 21 = .600

r = 1.000 - (1/2)(.600) = .700

Is r significant? r(20) = .700, p < .01

Page 67: Chapter 8 – Regression 2

What is the estimated average unsquared distance of scores in the population from the regression line?

That is the same as asking, “What is the estimated standard error of the estimate?”

SSRES = SSY - (SSY * r²) = 420.00 - [420.00 * (0.700)²] = 214.20

MSRES = 214.20 / 20 = 10.71

sEST = 3.27

Page 68: Chapter 8 – Regression 2

What percent improvement is obtained when s is compared to sEST?

MSW = SSW / df = 420.00 / 21 = 20.00

s = square root of 20.00 = 4.47

Page 69: Chapter 8 – Regression 2

Last and (perhaps) least:

Proportion improvement = (s - sEST) / s = (4.47 - 3.27) / 4.47 = .268

Percent improvement = proportion improvement * 100

In this case there was about a 26.8% improvement in unsquared error when you use the regression equation rather than the mean as your basis for predicting Y scores.
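The whole exercise can be checked with a few lines of Python (a sketch following the formulas above):

```python
from math import sqrt

ss_y, n_p = 420.00, 22
ss_t_diff = 12.60                     # sum of squared (tX - tY) differences

r = 1 - ss_t_diff / (2 * (n_p - 1))   # 1 - 12.60/42 = 0.700
ss_res = ss_y - ss_y * r ** 2         # 420.00 - 420.00 * 0.49 = 214.20
ms_res = ss_res / (n_p - 2)           # 214.20 / 20 = 10.71
s_est = sqrt(ms_res)                  # 3.27
s = sqrt(ss_y / (n_p - 1))            # sqrt(420.00 / 21) = sqrt(20.00) = 4.47

print(round(r, 3))                    # 0.7
print(round(s_est, 2), round(s, 2))   # 3.27 4.47
print(round((s - s_est) / s, 3))      # 0.268 -> about 26.8% improvement
```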

Page 70: Chapter 8 – Regression 2


Some final notes on types of error and alpha

Page 71: Chapter 8 – Regression 2

Error Types: Type 1 Error

A Type 1 error occurs when you accidentally get a random sample with an r outside the range predicted by the null hypothesis even though rho = 0.000. This forces you to reject the null hypothesis when there really is no relationship between X and Y in the population as a whole.

Scientists are conservative and set up conditions to avoid Type 1 errors.

Page 72: Chapter 8 – Regression 2

Error Types: Type 2 Error

A Type 2 error can only occur when there really is a correlation between X and Y in the population, but you accidentally get a sample r that falls within the range predicted by the null hypothesis. You must then fail to reject the null and assume rho = 0.000.

This assumption is incorrect, and the result is a Type 2 error.

Page 73: Chapter 8 – Regression 2

Alpha levels

Any result can be found by chance. However, some results are so strong that they are very unlikely. "Unlikely" is defined as occurring by chance 5 (or fewer) times in 100.

The risk of getting a weird sample that causes a Type 1 error is called alpha. Almost universally in the biomedical sciences, alpha = .05.