University of Minnesota, Educational Psychology
Conducting Classroom Research in Statistics Education: Issues, Challenges, and Examples
Andrew Zieffler, Ph.D.
University of Minnesota
Statistics Education Research: A Diverse Discipline or a Many-Headed Hydra?
- An interdisciplinary field of inquiry
- Not reliant on any one tradition of empirical research methodology
- Wide variety of research questions, methodologies, operational definitions, outcome variables studied, and findings
Statistics Education Research: Goals
- Improving instruction should be the key goal of any educational research (Raudenbush, 2005).
- Therefore, the goal of statistics education research should be the improvement of statistics teaching, leading to improved student learning.
- The Research Advisory Board of the Consortium for the Advancement of Undergraduate Statistics Education (CAUSE) (http://www.causeweb.org/research/) promotes research designed so that the results will have direct implications for instruction.
- Research studies in this area should specifically address classroom implications and the generation of new research questions.
Statistics Education Research: Improvement?
How can we improve future statistics education research, drawing on what is available now?
Four suggestions, based on a review of research on teaching and learning statistics at the college level:
- Higher-quality research questions
- Thorough literature reviews
- Attention to measurement
- Consideration of different methodologies
Formulating a Problem: Developing a Quality Research Question
- Narrow down the focus of the research question (Garfield, 2006).
- Many of the studies reviewed examined broad research questions.
- For example, "Does technology improve student learning?" versus "How does a particular technology help students to understand a particular statistical concept?"
- "The bottom line for judging research is, does it advance the current knowledge in the field in a significant way?" (Simon, 2004, p. 158). ("Field" can refer to practitioners or researchers.)
- How will the study contribute to the existing literature?
- Relate it to the teaching and learning of statistics.
- Doing so helps us meet the goal of improving instruction.
Relevant Information: The Importance of a Thorough Literature Review
- Role of the literature review: provide a critical review, analysis, and synthesis of the literature relevant to the particular topic being studied.
- Show how the literature reviewed is relevant to the research question being examined.
- Helps contextualize the research within the field (identifies gaps in previous research, etc.).
- Builds on the work of others.
- Because statistics education is an interdisciplinary area of research, researchers need to reflect that breadth in their evaluation of the prior research.
- Relevant research appears in journals from many disciplines (Teaching of Psychology, JSE, JRME, The American Statistician, etc.).
Read and Review the Literature: Be Exhaustive
- CAUSEweb
- Statistics education journals
- Journals in other disciplines
- Google Scholar
Importance of Measurement: Where Good Intentions Go Wrong
- Measurement refers to "the process of quantifying observations [or descriptions] about a quality or attribute of a thing or person" (Thorndike & Hagen, 1986, p. 5).
- The measurements used are essential to the findings that are produced.
- Measurements need to be valid and reliable (Pedhazur & Schmelkin, 1991).
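The slides do not show how reliability is quantified in practice. As one common illustration (an addition here, not from the deck), internal-consistency reliability is often summarized with Cronbach's alpha; a minimal sketch in Python, using hypothetical student responses:

```python
import numpy as np

def cronbach_alpha(scores):
    """Estimate internal-consistency reliability (Cronbach's alpha).

    scores: 2-D array-like, rows = respondents, columns = test items.
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 5 students answering 4 binary items (1 = correct)
responses = [
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
]
print(round(cronbach_alpha(responses), 3))  # prints 0.8
```

Values closer to 1 indicate that the items hang together as a measure of one construct; instructor-made tests often score poorly on indices like this, which is part of the psychometric concern raised here.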
- Descriptions of the development of the measurements, and evidence of their meaningfulness and appropriateness, are essential elements in the reporting of research.
- For example, in many of the studies, students' statistical knowledge or reasoning was quantified by assigning a test score to each student. These scores were then generally subjected to some kind of quantitative analysis.
- Measurements were typically course-specific student outcomes (e.g., final exam grades, course evaluations).
- Assessments using instructor-constructed items often have less desirable psychometric properties (e.g., Gullickson & Ellwein, 1985; Weller, 2001).
- Measurements often depend on a particular course, so they lack external validity.
- Learning outcomes are difficult to interpret when the assessment items used by the researcher are omitted: were the students tested on computational and procedural skills, or on higher levels of thinking and reasoning?
- Recommend the use of research instruments such as CAOS (see ARTIST; https://app.gen.umn.edu/artist/index.html).
- Carefully develop and validate researcher-created instruments.
Methodology: The "Gold Standard" Is Not Always the Gold Standard
- Imagine comparing a "traditional" course to a "reform" course, with students randomly assigned to each.
- Even though it seems experimental, this is NOT the gold standard.
- Potential problems remain:
  - Operational definitions: what is a "traditional" course?
  - External validity/generalizability
  - Issues of fidelity
  - Individual differences (teachers, classes, etc.)
- For instance, it may be better to compare:
  - Two different sets of activities to develop students' reasoning about and understanding of a particular topic (e.g., the sampling distribution)
  - Two different sequences of topics across many sections of the same class
- "Classical experimental method can be problematic in education" (Schoenfeld, 2000, p. 645).
Methodology: Analysis
"Good research is a matter not of finding the one best method, but of carefully framing that question most important to the investigator and the field and then identifying a disciplined way in which to inquire into it that will enlighten both the scholar and his or her community" (Shulman, 1997, p. 4).
- Methodology needs to be responsive to the purposes and contexts of the research (Howe & Eisenhart, 1990).
- Alternatives to controlled experiments:
  - Classroom-based research
  - Teaching experiments
  - Naturalistic observation
  - Videotaped interviews
Classroom Research in Statistics Education: Some Advice
- Plan, plan, plan:
  - Research question
  - Study design/methodology
  - Assessment/measurement
  - IRB
  - Pitfalls that may arise
- Form collaborative research groups (ASA, 2007):
  - Teachers of statistics
  - Faculty from other disciplines (e.g., psychology or education)
- See Garfield and Ben-Zvi (in press) for more arguments and suggestions for this type of research.
- Consult other experts and collaborate.
- "Look for collaborators who share your research interests but who may bring different background (even disciplines) and strengths to a new collaboration" (Garfield, 2006, p. 8).
- Collaboration removes the pressure of having to be an expert in everything.
References
American Statistical Association. (2007), “Using Statistics Effectively in Mathematics Education Research,” Retrieved Feb. 14, 2007, from ASA Web site: http://www.amstat.org/research_grants/pdfs/SMERReport.pdf.
Garfield, J. B. (2006), “Collaboration in Statistics Education Research: Stories, Reflections, and Lessons Learned,” in Proceedings of the Seventh International Conference on Teaching Statistics, eds. A. Rossman and B. Chance, Salvador, Bahia, Brazil: International Statistical Institute, pp. 1-11.
Garfield, J. and Ben-Zvi, D. (in press), “Developing Students’ Statistical Reasoning: Connecting Research and Teaching Practice,” Emeryville, CA: Key College Press.
Gullickson, A. R., and Ellwein, M. C. (1985), “Teacher-Made Tests: The Goodness-of-Fit Between Prescription and Practice,” Educational Measurement: Issues and Practice, 4(1), 15-18.
Howe, K., and Eisenhart, M. (1990), "Standards for Qualitative (and Quantitative) Research: A Prolegomenon," Educational Researcher, 19, 2-9.
Pedhazur, E. J., and Schmelkin, L. P. (1991), “Measurement, Design, and Analysis: An Integrated Approach,” Hillsdale, NJ: Lawrence Erlbaum Associates, Publishers.
Raudenbush, S. W. (2005), “Learning from Attempts to Improve Schooling: The Contribution of Methodological Diversity,” Educational Researcher, 34(5), 25-31.
Schoenfeld, A. H. (2000), "Purposes and Methods of Research in Mathematics Education," Notices of the AMS, 47(6), 641-649.
Shulman, L. S. (1997), "Disciplines of Inquiry in Education: A New Overview," in Complementary Methods for Research in Education, ed. R. M. Jaeger, Washington, DC: American Educational Research Association, pp. 3-29.
Simon, M. A. (2004), "Raising Issues of Quality in Mathematics Education Research," Journal for Research in Mathematics Education, 35(3), 157-163.
Thorndike, R. L., and Hagen, E. (1986), “Cognitive Abilities Test: Examiner's Manual Form 4,” Chicago, IL: Riverside.
Weller, L. D. Jr. (2001), “Building Validity and Reliability into Classroom Tests,” National Association of Secondary School Principals, NASSP Bulletin [Online], February.
Contact Information
Andrew Zieffler, Ph.D.
University of Minnesota
zief0002@umn.edu