Dr. William Allan Kritsonis - Statistics


Chapter 5 - Data Analysis and Research

B. Interpreting, Analyzing, and Reporting the Results from Data

William Allan Kritsonis, PhD

INTRODUCTION

The purpose of this chapter is to interpret, analyze, and report the results from data. The chapter introduces methods and examples for the paired samples t-test, the independent samples t-test, the one-way ANOVA, and the bivariate Pearson correlation. Steps to the tables and external links to online tutorials are provided for each test. The software package SPSS 10.0 was used to analyze the data.

THE T-TEST

The t-test provides the probability that the null hypothesis is true when examining the difference between the means of two groups. Normally we use this test when the data sets are small.

There are two different t-tests:

- The paired samples t-test, and
- The independent samples t-test.

THE PAIRED SAMPLES T-TEST

When do we use it? We assume that the confidence level is 95% all the time.

1. When there is a natural relationship between the subjects from whom the two sets of scores are obtained.

Example 1: Looking at differences between pre- and post-tests of one group, you would choose the paired samples t-test, because the scores in both data sets came from the same persons.

Example 2: A teacher diagnostically tests her students at the beginning of the year. After intensive instruction, the test is repeated at the end of the semester. She is interested in knowing if the students have made significant gains.

THE INDEPENDENT SAMPLES T-TEST

(We use this t-test more often.)

When do we use it?

1. When there is no natural relationship between the subjects whose scores are being contrasted.
Comparing scores obtained from two different groups of people, you would use this t-test.

2. When the data are normally distributed and the variances of the groups are homogeneous.

Example 1: Two groups of students are identified: an experimental group and a control group. Both groups are pretested, an intervention is used with the experimental group and withheld from the control group, and both groups are posttested.

Example 2: TLI scores are collected on students who have attended schools using block scheduling and students who have attended schools with traditional scheduling. We want to know if the TLI scores are significantly different according to the schedule experienced by the students. (We reject the null when p < 0.05.)

ONE-WAY ANALYSIS OF VARIANCE (ANOVA)

About the one-way ANOVA:

1. It provides the probability that the null hypothesis is true when examining the mean differences among three or more groups. The procedure is equivalent to the t-test, except that it handles more than two groups.

2. The assumptions for the one-way ANOVA are the same as for the t-test: normal distributions and homogeneity of variances. We have to run Levene's test for homogeneity. There are two situations: (1) use Bonferroni when the data fail to reject the null of equal variances, and (2) use Tamhane when the data reject that null.

3. A probability value p < 0.05 indicates that a significant difference exists among the various means, but it does not indicate which means are significantly different and which are chance differences.

Example 1: Suppose we want to know whether GPA averages differ significantly according to undergraduate majors.
We would input all of the GPAs into one variable (this would be the dependent variable) and, in a second variable (an independent variable, often called a factor), assign a 1 if the GPA belonged to an English major, a 2 for History majors, a 3 for Psychology majors, and so on. (Do not use 0 as a group code, because it sometimes does not work.)

Example 2: TAAS scores are collected to describe the scores of students. Three different methods of teaching were used after the students had been divided into three equal groups. The socioeconomic level of each student was identified. The hypothesis tested was: is there a significant difference in TAAS scores according to the method used?

Example 3: Professors who are primarily university administrators, regular tenured professors, and regular non-tenured professors are rated by students according to the enthusiasm displayed in the classes they teach. The null hypothesis is: there is no significant difference in the degree of enthusiasm displayed among the three groups of professors.

The standard way to report the one-way ANOVA:

The null hypothesis is that there will be no significant differences in ______________________________________________________________. To test this hypothesis, the one-way ANOVA from SPSS (10.0) was used. The null hypothesis is accepted/not accepted: F(n-1, N-n) = ____, p = ____ (greater/less than 0.05).

We reject the null when p < 0.05; we fail to reject the null when p > 0.05.

Example 4: It was hypothesized that students who excel in fine arts are also the best students in the academic subjects. A measure of fine arts achievement and a measure of academic achievement were collected. The relationship of the two measures was analyzed statistically.
(Bivariate Pearson correlation)

Example 5: There will not be a significant relationship between the percent of students passing all TAAS tests and the size of the school districts. (Bivariate Pearson correlation)

The correlation coefficient is between -1 and 1. The closer it is to -1 or +1, the stronger the relationship; the closer it is to 0, the weaker the relationship.

Dr. William Allan Kritsonis

Review for the Comprehensive PhD Examination

A Single Factor ANOVA

To compare the effectiveness of three different methods of teaching reading, 26 children of equal reading aptitude were divided into three groups. Each group was instructed for a given period of time using one of the three methods. After completing the instruction period, all students were tested. The test results are shown in the following table. Is the evidence sufficient to reject the hypothesis that all three instruction methods are equally effective? Use alpha = 0.05.

Test scores:

    Method I   Method II   Method III
    45         45          44
    51         44          50
    48         46          45
    50         44          55
    46         41          51
    48         43          51
    45         46          45
    48         49          47
    47         44

To do the following:

1) Test the Normality Assumption.
2) Test the Equality of Variance Assumption.
3) Run the ANOVA test and produce the ANOVA Table.
4) Run Post-hoc comparisons.

SPSS Data Entry (check on scale):

readscr  teachmth
45       1
51       1
48       1
50       1
46       1
48       1
45       1
48       1
47       1
45       2
44       2
46       2
44       2
41       2
43       2
46       2
49       2
44       2
44       3
50       3
45       3
55       3
51       3
51       3
45       3
47       3

Run the Analysis

1) Check Normality.

Steps to the tables:

1. Analyze -> Descriptive Statistics -> Explore. Dependent List: readscr; Factor List: teachmth.
   Go to Statistics and check: Descriptives.
   Go to Plots and check: Box plots (Factor levels together) and Normality plots with tests.

Explore

Case Processing Summary (READSCR by TEACHMTH)

            Valid         Missing       Total
TEACHMTH    N   Percent   N   Percent   N   Percent
1.00        9   100.0%    0   .0%       9   100.0%
2.00        9   100.0%    0   .0%       9   100.0%
3.00        8   100.0%    0   .0%       8   100.0%

Tests of Normality (READSCR)

            Kolmogorov-Smirnov(a)       Shapiro-Wilk
TEACHMTH    Statistic  df   Sig.        Statistic  df   Sig.
1.00        .193       9    .200*       .933       9    .490
2.00        .173       9    .200*       .950       9    .667
3.00        .193       8    .200*       .919       8    .437

* This is a lower bound of the true significance.
a. Lilliefors Significance Correction

Test these two assumptions for each of the three groups:

(a) Normality
(b) Homogeneity (equality) of variance

Write a short paragraph in which you describe the results.

Analyzing the data:

(a) The assumption of normality was analyzed using two tests of significance: the Kolmogorov-Smirnov test and the Shapiro-Wilk test. The Kolmogorov-Smirnov test showed a probability coefficient of 0.2 for each group. Since this value is greater than 0.05, the Kolmogorov-Smirnov test did not reject the null hypothesis that the scores for each group are normally distributed.

The Shapiro-Wilk test showed a probability coefficient of 0.490 for Method I, 0.667 for Method II, and 0.437 for Method III. In all three cases the coefficient is greater than 0.05. Therefore, the Shapiro-Wilk test did not reject the null hypothesis that the scores for each group are normally distributed. The results of both the Kolmogorov-Smirnov test and the Shapiro-Wilk test support the assumption of normality.

(b) The assumption of homogeneity of variance was tested using the Levene test. The Levene test showed a probability coefficient of 0.042. Since this value is less than 0.05, the null hypothesis is rejected. The assumption of homogeneity of variance is not supported.

Post-hoc comparisons among the groups could be tested with either the Bonferroni or the Tamhane test, depending on whether or not the homogeneity of variance assumption was rejected. The Bonferroni test is appropriate if the homogeneity of variance assumption is supported, and the Tamhane test is appropriate when it is not supported.
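The assumption checks described above can be reproduced outside SPSS. Below is a minimal sketch in Python, assuming SciPy is installed (`scipy.stats.shapiro` for the Shapiro-Wilk test; `scipy.stats.levene` with `center="mean"`, which is the form of Levene's test SPSS runs). The Kolmogorov-Smirnov test with the Lilliefors correction is not in SciPy and is omitted here.

```python
# Assumption checks for the reading-scores example (data taken from this chapter).
# Assumes SciPy is installed; shapiro() and levene() are scipy.stats functions.
from scipy import stats

method1 = [45, 51, 48, 50, 46, 48, 45, 48, 47]
method2 = [45, 44, 46, 44, 41, 43, 46, 49, 44]
method3 = [44, 50, 45, 55, 51, 51, 45, 47]

# Normality: Shapiro-Wilk per group (SPSS reported sig. = .490, .667, .437).
for name, scores in [("Method I", method1),
                     ("Method II", method2),
                     ("Method III", method3)]:
    w, p = stats.shapiro(scores)
    print(f"{name}: Shapiro-Wilk W = {w:.3f}, p = {p:.3f}")

# Homogeneity of variance: Levene's test centered on the group means, as SPSS
# does (SPSS reported a Levene statistic of 3.641, sig. = .042).
stat, p_lev = stats.levene(method1, method2, method3, center="mean")
print(f"Levene: W = {stat:.3f}, p = {p_lev:.3f}")
```

With these data the Levene statistic comes out at about 3.64 with p near .042, matching the SPSS output, while all three Shapiro-Wilk p-values stay above 0.05, so the same conclusions follow: normality is supported, homogeneity of variance is not.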
Since the Levene test showed that the homogeneity of variance assumption was not supported, the Tamhane test was used to test differences between the means of the three groups.

Descriptives (READSCR)

Statistic                         TEACHMTH 1.00    TEACHMTH 2.00    TEACHMTH 3.00
Mean                              47.5556          44.6667          48.5000
Std. Error of Mean                .6894            .7454            1.3628
95% CI for Mean, Lower Bound      45.9657          42.9479          45.2776
95% CI for Mean, Upper Bound      49.1454          46.3855          51.7224
5% Trimmed Mean                   47.5062          44.6296          48.3889
Median                            48.0000          44.0000          48.5000
Variance                          4.278            5.000            14.857
Std. Deviation                    2.0683           2.2361           3.8545
Minimum                           45.00            41.00            44.00
Maximum                           51.00            49.00            55.00
Range                             6.00             8.00             11.00
Interquartile Range               3.5000           2.5000           6.0000
Skewness (Std. Error)             .335 (.717)      .450 (.717)      .429 (.752)
Kurtosis (Std. Error)             -.651 (1.400)    1.300 (1.400)    -.887 (1.481)

To analyze, interpret, and report the results from the data:

Method I has 9 scores, ranging from 45 as the lowest score to the highest score of 51.
The mean of the distribution is 47.56, the median is 48, and the standard deviation is 2.07. The skewness and kurtosis coefficients are 0.34 and -0.65, respectively. Method I can be considered a normal distribution.

Method II has 9 scores, ranging from 41 as the lowest score to the highest score of 49. The mean of the distribution is 44.67, the median is 44, and the standard deviation is 2.24. The skewness and kurtosis coefficients are 0.45 and 1.30, respectively. Method II can be considered a normal distribution.

Method III has 8 scores, ranging from 44 as the lowest score to the highest score of 55. The mean of the distribution is 48.5, the median is 48.5, and the standard deviation is 3.85. The skewness and kurtosis coefficients are 0.43 and -0.89, respectively. Method III can be considered a normal distribution.

2) Finish the Analysis: run the ANOVA test/table and the post-hoc comparisons.

Steps to the tables:

1) Analyze -> Compare Means -> One-Way ANOVA. Dependent List: readscr; Factor: teachmth.
   Go to Post Hoc and check:
   - Bonferroni
   - Tamhane's T2
   Go to Options and check:
   - Descriptive
   - Homogeneity of Variance

Oneway

Descriptives (READSCR)

                       Std.                     95% CI for Mean
TEACHMTH  N   Mean     Deviation  Std. Error   Lower Bound  Upper Bound  Minimum  Maximum
1.00      9   47.5556  2.0683     .6894        45.9657      49.1454      45.00    51.00
2.00      9   44.6667  2.2361     .7454        42.9479      46.3855      41.00    49.00
3.00      8   48.5000  3.8545     1.3628       45.2776      51.7224      44.00    55.00
Total     26  46.8462  3.1457     .6169        45.5756      48.1167      41.00    55.00

Test of Homogeneity of Variances (READSCR)

Levene Statistic  df1  df2  Sig.
3.641             2    23   .042

ANOVA (READSCR)

                Sum of Squares  df  Mean Square  F      Sig.
Between Groups  69.162          2   34.581       4.463  .023
Within Groups   178.222         23  7.749
Total           247.385         25

The null hypothesis (H0) is that there is no significant difference in the effectiveness of the three different methods of teaching reading. To test this hypothesis, the one-way ANOVA from SPSS (10.0) was used. The null hypothesis is rejected: F(2, 23) = 4.463, p = 0.023, which is less than 0.05.

Post Hoc Tests

Multiple Comparisons
Dependent Variable: READSCR

                                        Mean
                                        Difference                     95% Confidence Interval
Test        (I) TEACHMTH  (J) TEACHMTH  (I-J)      Std. Error  Sig.    Lower Bound  Upper Bound
Bonferroni  1.00          2.00          2.8889     1.3122      .114    -.4993       6.2771
                          3.00          -.9444     1.3526      1.000   -4.4369      2.5480
            2.00          1.00          -2.8889    1.3122      .114    -6.2771      .4993
                          3.00          -3.8333*   1.3526      .028    -7.3258      -.3408
            3.00          1.00          .9444      1.3526      1.000   -2.5480      4.4369
                          2.00          3.8333*    1.3526      .028    .3408        7.3258
Tamhane     1.00          2.00          2.8889*    1.3122      .035    .1815        5.5963
                          3.00          -.9444     1.3526      .909    -5.2769      3.3880
            2.00          1.00          -2.8889*   1.3122      .035    -5.5963      -.1815
                          3.00          -3.8333    1.3526      .091    -8.2019      .5352
            3.00          1.00          .9444      1.3526      .909    -3.3880      5.2769
                          2.00          3.8333     1.3526      .091    -.5352       8.2019

* The mean difference is significant at the .05 level.

Post-hoc comparisons among the groups could be tested with either the Bonferroni or the Tamhane test, depending on whether or not the homogeneity of variance assumption was rejected.
Since the Levene test showed that the homogeneity of variance assumption was not supported, the Tamhane test was used to test differences between the means of the three groups.

The Tamhane test indicated the following:

- There was a statistically significant difference between the means of Method I and Method II (sig. = 0.035). The mean for Method I was 2.889 points higher than the mean for Method II.

- There were no statistically significant differences between Methods I and III or between Methods II and III.

Links for SPSS 10.0 Tutorial

Statistical Package for the Social Sciences
It covers a broad range of statistical procedures that allow you to summarize data (e.g., compute means and standard deviations), determine whether there are significant differences between groups (e.g., t-tests, analysis of variance), examine relationships among variables (e.g., correlation, multiple regression), and graph results (e.g. ...
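The omnibus F test and the post-hoc comparisons reported above can likewise be checked outside SPSS. Below is a sketch in Python, again assuming SciPy is installed (`scipy.stats.f_oneway` for the F test). SciPy has no Tamhane T2 routine, so the post-hoc step is approximated here with Welch (unequal-variance) t-tests and a Sidak correction, which is the idea underlying Tamhane's T2 rather than a literal port of the SPSS procedure.

```python
# One-way ANOVA and a Tamhane-style post-hoc check for the reading-scores data.
# Assumes SciPy is installed. The Sidak-adjusted Welch t-tests below are an
# approximation of Tamhane's T2, not the exact SPSS routine.
from itertools import combinations
from scipy import stats

groups = {
    "Method I":   [45, 51, 48, 50, 46, 48, 45, 48, 47],
    "Method II":  [45, 44, 46, 44, 41, 43, 46, 49, 44],
    "Method III": [44, 50, 45, 55, 51, 51, 45, 47],
}

# Omnibus test (SPSS reported F(2, 23) = 4.463, p = .023).
f_stat, p_val = stats.f_oneway(*groups.values())
print(f"F = {f_stat:.3f}, p = {p_val:.3f}")

# Pairwise Welch t-tests (unequal variances), Sidak-adjusted for 3 comparisons.
n_pairs = 3
for (name_a, a), (name_b, b) in combinations(groups.items(), 2):
    t, p = stats.ttest_ind(a, b, equal_var=False)
    p_adj = 1 - (1 - p) ** n_pairs  # Sidak correction
    print(f"{name_a} vs {name_b}: t = {t:.3f}, adjusted p = {p_adj:.3f}")
```

For Method I versus Method II this approximation gives an adjusted p close to the .035 that SPSS reports in the Tamhane column, and the other two comparisons stay non-significant, matching the conclusions above.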