CAEP Candidate Performance Data: Indicators of Teaching


Executive Summary: Indicators of Teacher Effectiveness
CAEP Initial Standards 4.2, 5.4

St. John Fisher College, Ralph C. Wilson, Jr. School of Education

Completer Case Studies – Teacher Effectiveness

Five case studies were conducted over three semesters, from Fall 2018 through Spring 2021. The case studies were qualitative, drawing on multiple data sources: completer action research, observation data, reflective journals, a nationally validated student survey (Tripod), and formative and summative student assessments. The methodology used a purposive sample of completers from the undergraduate (BS INCH and BS INAD) and graduate (MS SPED CHED and MS SPED ADOL) initial certification programs, one to three years post-graduation, who are now teaching in surrounding urban, suburban and rural districts. All completers are teaching in designated high-needs schools.

Of the five participants in the case studies:

| Initial Undergraduate Program | Initial Graduate Program |
| --- | --- |
| 1 BS Inclusive Education-Childhood | 2 MS Special Education Childhood Education |
| 2 BS Inclusive Education-Adolescence-English | |

Over the course of three semesters (Fall 2018, Spring 2020 and Spring 2021), five completer participants administered the Tripod survey, a nationally validated survey that measures teacher effectiveness from the student perspective. The validity and reliability of Cambridge Education's Tripod Survey were established as part of the 2010 Measures of Effective Teaching (MET) project, whose goal was to improve the quality of information about teaching effectiveness available to education professionals within states and districts: information that will help them build fair and reliable systems for measuring teacher effectiveness that can be used for a variety of purposes. The MET project was the largest study of classroom teaching ever conducted in the United States. More than 2,500 fourth- through ninth-grade teachers working in 317 schools participated in the study. The MET project was continued in 2012 and 2013, resulting in a sample of more than 44,500 students. Multiple studies, including some using random assignment, have established the reliability and validity of Tripod's 7Cs measures. Tripod's framework of effective teaching represents a set of instructional practices that research links to student achievement, engagement and motivation, as well as success skills and mindsets. Using the Tripod survey, educators can measure student perceptions in the following areas:

1. Teaching Effectiveness: Measures tied to each teacher are quality assured and benchmarked against national norms.
2. Student Engagement: Data concerning effort and motivation indicate for each classroom how students judge their own attitudes, behaviors and effort.
3. Student Satisfaction: Data indicate whether each classroom, building and district is a place where students feel safe, welcome and satisfied with their progress.
4. Whole-school Climate: Data from individual classrooms can be aggregated up to measures of whole-school climate. In addition, surveys include questions that pertain to the school as a whole.

Benchmark and Comparison Data

The EPP implemented a case study research design to explore the impact of instructional strategy use in improving student learning (Creswell, 2007). The case study approach is an empirical inquiry that investigates a phenomenon in depth and within its real-life context. Detailed, in-depth data collection involved multiple sources of information (e.g., completer action research, observation data, reflective journals, the nationally validated student survey (Tripod), formative and summative student assessments, pre- and post-surveys and/or focus groups). Within-case and across-case analyses were conducted to identify common themes. The case study research design does not lend itself to external benchmarking and comparison of data.

However, the Tripod survey instrument measures seven domains of teacher effectiveness. Scaled scores for constructs and composites range from 202 to 398, with 270 to 330 as the mid-range and 300 as the mid-point. Scaled scores for each item in the 7Cs combine responses to a single item for every student in a class. Results are presented as Low, Middle or High. High means the score was in the top 25% of scores from similar classrooms surveyed by Tripod over the past several years; Low means the score was in the bottom 25%; Middle means the score fell in the middle 50%. The following chart represents the overall composite scaled score for case study participants when compared against similar classrooms surveyed by Tripod over the past several years. These results, in the form of scaled scores, allow for comparisons because the composite scores are scaled on a standardization scale that includes scores from similar classrooms.
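To make the banding logic concrete, the following minimal Python sketch (ours, not Tripod's scoring code; the function name and fixed cutoffs are illustrative) maps a construct scaled score to a Low/Middle/High band using the 270-330 middle range described above. Tripod's actual bands are percentile-based comparisons against similar classrooms nationally, so this approximates the logic rather than the instrument.

```python
def tripod_band(scaled_score: float) -> str:
    """Map a Tripod construct scaled score to a Low/Middle/High band.

    Illustrative cutoffs only: the 270-330 middle range is taken from
    the report; Tripod itself bands by percentile (bottom 25%, middle
    50%, top 25%) against similar classrooms.
    """
    if not 202 <= scaled_score <= 398:
        raise ValueError("Tripod scaled scores range from 202 to 398")
    if scaled_score < 270:
        return "Low"
    if scaled_score <= 330:
        return "Middle"
    return "High"

# The two classroom-management scores discussed below fall under the
# 270-330 middle range:
print(tripod_band(236), tripod_band(262))  # Low Low
```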

Composite scaled scores for case study participants indicate that four of five participants scored within the middle score range. Tripod indicates that, because the mid-point of a scaled score is average, the majority of classrooms will find their 7C construct and composite scores in the middle of the distribution (270-330 for construct items) and "Middle" for composite scores. One participant scored overall in the high or top range, meaning their overall score was in the top 25% of scores from similar classrooms surveyed by Tripod in the past several years.

Except for the classroom management domain, where two completers scored below the average range, scores fell within the middle and high ranges, providing strong evidence that completers: show concern for students' well-being, encourage and value student ideas, spark and maintain student interest, help students understand content and resolve confusion, help students integrate and synthesize key ideas, and insist that students do their best work. In the classroom management domain, P4 taught virtually during Spring 2021, so the domain was not assessed. P1 scored in the low range (236) during Fall 2018, and P2 scored 262, closer to the mid-range, in classroom management. It should be noted that teacher observations of completers in the area of classroom management did not corroborate these student perceptions. The survey also provides strong evidence that completers communicate content and engage, support, and challenge students.

Observational Tool to Assess Teacher Effectiveness

During each of the Fall 2018, Spring 2020 and Spring 2021 case studies, a School of Education (SoE) faculty member completed classroom observations of the completer participants. Completers in Fall 2018 were observed once during implementation of the action research; Spring 2020 and Spring 2021 completers were observed twice during the term.

State- and nationally-validated, reliable observation tools were chosen by districts (C1: NYSUT 2014 Teacher Practice Rubric, Standard III, Instructional Practice; C2: Danielson 2011 Teacher Practice Rubric, Domain 3, Instruction). All five completers were assigned ratings in the Instructional Practice Domain/Standard on the rubric. No completer was rated Ineffective or Unsatisfactory across the three terms.

During Fall 2018, one completer was rated Developing in the areas of Aligning Instruction to Standards, Responding to Students, and Providing Directions and Procedures. This completer was in the first year of teaching and reported feeling challenged by student management, an area of practice that is a common challenge for first-year teachers. This completer's classroom management score on the Tripod survey was in the low range (262), confirming that both the teacher and students perceived this as a challenge area. The second completer from Fall 2018 was rated Basic in Using Assessment in Instruction. It should be noted that this performance level has two components: assessing students throughout the process and engaging students in self-assessment. The researcher noted that the completer was effective in assessing instruction but needed to engage students in self-assessment. Both Fall 2018 completers were rated Distinguished/Highly Effective or Proficient/Effective in the areas of Engaging Students in Learning and Communicating Content with Students. One completer was rated Distinguished in the area of Demonstrating Flexibility and Responsiveness to student needs; this completer's score on the Tripod survey construct Confer was high as well. These ratings confirm the findings of the action research, in which completers effectively engaged students in strategies to support content understanding and improve student learning.

The Spring 2020 case study improved the process by implementing two observations during the period of the study. The completer scored Proficient in all categories except Using Assessment in Instruction, which was scored at the Basic level during the first observation. The faculty researcher provided feedback specific to assessment and engagement, and during the second observation the completer improved in Using Assessment in Instruction (from Basic to Proficient) and in Engaging Students in Learning (from Proficient to Distinguished). Faculty researcher feedback was applied by Completer S2020 and, as a result, all domain areas were scored in the range of Proficient to Distinguished.

The Spring 2021 case study included two observations during the period of the study for both completers. Completer 1-21 instructed virtually, and each of her observations was conducted via Zoom; Completer 2-21 instructed face to face. The difference was due to COVID-19. Completer 1-21 received highly effective ratings across four of the six domains (aligning standards and engaging students; differentiating instruction; challenging students; using assessment and feedback to evoke growth) and effective ratings in two domains (maximizing understanding; setting expectations). Completer 2-21 received distinguished ratings across both observations in the domain of using assessment in instruction and proficient ratings across both observations in communicating content, using questioning and discussion techniques, engagement, and demonstrating flexibility and responsiveness.

Teacher Practice Rubrics: Validity and Reliability

During the Fall 2018 (completer 2), Spring 2020 (completer 1) and Spring 2021 (completer 2) case studies, the Danielson Framework for Teaching Rubric (2011) was used to observe completers. Only Domain 3, Instructional Practice, was scored. The Danielson Framework for Teaching (FFT) (2011 Revised Edition) is a research-based teacher evaluation protocol developed by Charlotte Danielson in 1996. The FFT is aligned with the INTASC standards, which represent the professional consensus of what a beginning teacher should know. The FFT divides the complex activity of teaching into 22 components (and 76 smaller elements) clustered into four domains of teaching responsibility:

• Planning and preparation (Domain 1),
• Classroom environment (Domain 2),
• Instruction (Domain 3), and
• Professional responsibilities (Domain 4).

As was the case with the Tripod survey, the validity and reliability of Danielson's Framework for Teaching were established by the MET project. Additionally, the FFT has been subjected to several validation studies over the course of its development and refinement, including an initial validation by Educational Testing Service (ETS).

During the Fall 2018 (completer 2) and Spring 2021 (completer 1) case studies, the NYSUT Teacher Practice Rubric (2014 Edition) was used for observations. Only Standard III, Instructional Practice, was scored. The NYSUT Teacher Practice Rubric was derived from the aforementioned Danielson Framework for Teaching by aligning the FFT to the New York Learning Standards. The first edition of the NYSUT Teacher Practice Rubric was published in 2011; since then it has undergone two revisions based on feedback from school districts across New York State. The most recent edition (2014) was used in our work. The rubric is one of the teacher practice rubrics approved in New York State for use statewide by districts implementing their Annual Professional Performance Reviews. The rubric is aligned with the New York State Teaching Standards and was extensively field-tested to establish validity and reliability. http://usny.nysed.gov/rttt/teachers-leaders/practicerubrics/Docs/NYSUTapp.pdf

In addition to the observations, the researchers asked completers to reflect on their practice through completer reflections. Across the case studies, reflections demonstrated that completers employed a variety of strategies to effectively engage learners, make content more accessible and meet learner needs. Strategies for differentiating instruction included choice, scaffolds and targeting of skills. Researchers noted that all completers used assessments to adapt instruction, analyzed assessment data to inform instruction, and employed a variety of strategies for meeting student needs. Differences in the sophistication of strategies for instructional practice were observed between completers in their first year of teaching and those with more than one year of experience.

In general, the case studies yielded useful information about our programs. Results across case studies provide evidence that completers were effectively applying the knowledge, skills and dispositions our preparation experiences were designed to achieve. Across observation instruments, student surveys, and completer reflections, strengths were revealed in completers' use of assessment-based instruction, communication of content and student engagement.

Ralph C. Wilson, Jr. School of Education, St. John Fisher College
Program Completer Action Research Project – Spring 2021
CAEP Initial Standards 4.1 and 4.2

Program Impact Study Design

I. Description of Study and Study Sample

The Spring 2021 Program Completer Action Research study was completed by two completers, referred to as Completer 1-Spring 2021 (S1-21 or CS1-21) and Completer 2-Spring 2021 (S2-21 or CS2-21), both of whom were selected from a purposive sample of 49 completers representing the undergraduate and graduate childhood/special education and adolescence/special education initial certification programs, who are now teaching in suburban, rural, or urban school districts. Two completers agreed to participate in the study. Completer S1-21 and Completer S2-21 were selected by faculty because they 1) were known to have a P-12 teaching position, and 2) were likely to participate in the project based on their participation in their classes. Completers were told that if they completed the project, they would receive a $125 honorarium (see Appendix A: Invitation to Participate email).

Completer S1-21 graduated from the Undergraduate Inclusive Childhood Education Program (INCH) in May 2019. Completer S2-21 graduated from the Undergraduate Inclusive Adolescence-English Education Program in December 2018. Completer S1-21 currently teaches 6th grade in a middle school in a city school district (urban); due to COVID, her classes are virtual. Her class size is 24 students. At the middle school where S1-21 teaches, 60% of the student population is economically disadvantaged. Completer S2-21 teaches 9th grade at a rural high school in upstate New York. She teaches face to face and her class size is 18 students. At the high school where S2-21 teaches, 48% of the student population is economically disadvantaged.

II. Methodology

Completers S1-21 and S2-21 met via Zoom in December 2020 with the associate dean and assessment coordinator to review study expectations and to get answers to any questions about the study. The directions and timeline of the study were presented, and access to the shared Google Doc was set up.

A. Directions and Timeline of the Completer Study

Initial Meeting: December 2020

1. Identify one of Marzano's Nine High Yield Instructional Strategies to implement in the classroom (see Appendix B: Marzano's Nine High Yield Instructional Strategies).
2. Review study expectations and Tripod Survey directions (see Appendix C: Guide to Tripod's 7Cs Framework).

Phase One: January 11-15, 2021

1. Identify where in the curriculum the chosen Marzano High Yield Instructional Strategy could be implemented.
2. Create and administer a pre-assessment to obtain a baseline of the students' knowledge base. Take data on the baseline results.
3. Implement a lesson using the identified High Yield Instructional Strategy.
4. Create and implement a formative assessment after the strategy implementation. Analyze the data from the assessment and create a plan to implement the high yield instructional strategy a second time for improved student learning.
5. Begin responding to your reflective journal in the Google Doc (see Appendix D: Reflective Journal Prompts) and reflect on the strategy implementation and the formative assessment analysis.

Phase Two: January 18-29, 2021

1. Implement the High Yield Instructional Strategy a second time.
2. Create and implement a summative assessment after the second strategy implementation.
3. Analyze the data from the summative assessment for overall success of the strategy implementation.
4. Add to the reflective journal on the Google Doc, discussing the results of the second implementation on student learning. Address next steps if you were to use the strategy in the future.
5. Schedule observations in phase 1 and/or phase 2 with the SJFC faculty researcher.

Phase Three: February 1-12 and 22-26, 2021

1. Send home consent forms for administration of the student Tripod Survey.
2. Administer the student Tripod Survey to students.
3. Review results from the Tripod Survey (see Appendix E: Understanding Your Tripod Survey Results – Teacher Report).
4. Add to the Google Doc reflective journal, discussing the results of the Tripod Survey.

III. Discussion of the Study Design and Implementation

A. Phase One: Pre-Assessment and First Strategy Implementation and Analysis

Completer S1-21 identified her 6th grade class for the study.

Completer S1-21 chose the high impact strategy of annotating text, articulating the following rationale: "The strategy that I chose to implement is annotating text. 'Annotation is one of the most powerful thinking tools for learning, understanding and remembering' (Harvey & Goudvis, 2017). Students will mostly focus on annotating by highlighting the text. Because students are all virtual, they will be using a digital highlighter.

Students will be learning to differentiate between important and unimportant details. Students will be taught to use two different color highlighters - one to identify important details and one to identify unimportant details. This will allow students to differentiate the two different types of details and visually be able to see this difference within the article by looking at the two different highlighter colors. The visual will provide insight into whether the article examined provides important information to the topic or information that is not important to the topic. This is also a strategy that can be carried over from the digital platform into paper and pencil. Whether students are working on their computers with an article or a printed article, students can highlight with two different colors to differentiate the details.

When teaching students to annotate the text, it will be important to model this skill to students. Students will be able to see how to differentiate the details and how an article may look once completed. This will also show students how to think deeply about the reading as I model how I determine if the information is important or not. Students should also understand why they are annotating. Harvey & Goudvis (2017) mention that annotation allows for readers to leave “tracks like this” and gives “readers a place to hold and remember their thinking.”

The pre-assessment tool presents students with a task on a Google Doc. Students read, during class time, a short passage adapted from a local newspaper. Within the passage, students may highlight or annotate if they would like, but the directions do not require or encourage them to do so. Below the passage are two text boxes with questions asking students to determine the important information and the unimportant information. Students fill in details from the article in these text boxes to show their ability to differentiate important and unimportant information. This pre-assessment comes as students are being introduced to the new skill of differentiating important and unimportant details. Students have begun to learn inquiry skills such as determining background knowledge and asking research questions. The class voted on a topic to review: social media, virtual learning, or testing in schools. The pre-assessment involved a passage about the topic the class chose, in this case social media. This connects to the topics learned in the previous lessons, as students have learned about determining background knowledge and asking research questions. Now, they will begin to focus on answering research questions.

Additionally, this pre-assessment is done through a Google Doc, a platform the students are familiar with. Students have used Google Suite products throughout the school year and are able to use the tools and navigate the pages. The pre-assessment therefore does not present platform-related barriers; it instead focuses on students' ability to demonstrate the skill.

Two attempts at pre-assessment were conducted because the first attempt resulted in few students participating; the virtual environment impacted participation. A second pre-assessment was conducted using Nearpod. Students were more responsive to completing the activity in Nearpod because they knew their teacher could track their progress easily in real time.

This pre-assessment consisted of four questions. Each question asked students to read a short selection of three to four sentences from a NewsELA article. It then listed the focus research question for the project (What are the benefits of social media?), followed by the three to four sentences from the article and a question asking students to write down either the important or the not important information in that selection. Students moved through one question at a time and were given 2-3 minutes before the class progressed to the next question. For this focus class, the teacher read each question aloud once before giving students time to answer.

The table below indicates the scores that each target student received on the S1-21 pre-assessment.

| Student | Question 1 | Question 2 | Question 3 | Question 4 | Overall |
| --- | --- | --- | --- | --- | --- |
| A.S. | Incorrect - identified unimportant information | Incorrect - identified unimportant information | Partially correct - identified 1 important and 1 unimportant | Incorrect - did not properly answer the question | 0.5/4 |
| A.R. | Partially correct - identified information that is important to question but not listed in selection | Correct | No answer | No answer | 1.5/4 |
| A.L. | Incorrect - did not properly answer the question | Partially correct - identified information that is important to question but not listed in selection | Incorrect - identified important information as unimportant | Incorrect - did not use information from the selection | 0.5/4 |
| D.J. | Incorrect - identified unimportant information | Incorrect - identified unimportant information | Correct | Correct | 2/4 |
| E.D. | Incorrect - did not properly answer the question | No answer | Incorrect - identified important information as unimportant | Incorrect - identified 1 unimportant and 2 important | 0/4 |
| G.S. | No answer | Correct | Incorrect - identified important information as unimportant | Incorrect - identified 1 unimportant and 2 important | 1/4 |
| I.P. | No answer | Incorrect - identified unimportant information | No answer | No answer | 0/4 |
| J.R. | No answer | Partially correct - part of the important information but not the why | Incorrect - identified important information as unimportant | Correct | 1.5/4 |
| J.L. | No answer | No answer | No answer | No answer | 0/4 |
| J.L. | Correct | Correct | Incorrect - identified important information as unimportant | Incorrect - identified 1 unimportant and 2 important | 2/4 |
| K.S. | No answer | No answer | No answer | No answer | 0/4 |
| M.H. | Correct | Partially correct - part of the important information but not the why | Correct | Incorrect - identified important information as unimportant | 2.5/4 |
| M.R. | Incorrect - identified unimportant information | Correct | Incorrect - identified important information as unimportant | Correct | 2/4 |
| N.A. | Incorrect - identified unimportant information | Incorrect - identified unimportant information | Incorrect - identified important information as unimportant | No answer | 0/4 |
| R.R. | No answer | No answer | No answer | No answer | 0/4 |
| S.J. | Incorrect - did not properly answer the question | Partially correct - part of the important information but not the why | Incorrect - identified important information as unimportant | Incorrect - did not use information from selection | 0.5/4 |
| S.S. | Incorrect - identified unimportant information | Incorrect - identified important information but not information from this selection | Incorrect - identified important information as unimportant | Incorrect - identified important information as unimportant | 0/4 |
| T.R. | No answer | No answer | No answer | No answer | 0/4 |
| V.T. | No answer | No answer | No answer | No answer | 0/4 |
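The Overall column is consistent with a simple partial-credit scheme: 1 point for a correct response, 0.5 for a partially correct response, and 0 for incorrect or missing responses. As a minimal sketch (ours; the completer's actual scoring procedure is not documented beyond the table), the overall scores and a class average can be reproduced like so:

```python
# Minimal sketch (ours, not the completer's tooling): reproduces the
# Overall column, assuming the scoring implied by the reported totals:
# 1 point for Correct, 0.5 for Partially correct, 0 otherwise.
POINTS = {"Correct": 1.0, "Partially correct": 0.5,
          "Incorrect": 0.0, "No answer": 0.0}

# A few rows transcribed from the table (category labels only).
responses = {
    "A.S.": ["Incorrect", "Incorrect", "Partially correct", "Incorrect"],
    "D.J.": ["Incorrect", "Incorrect", "Correct", "Correct"],
    "M.H.": ["Correct", "Partially correct", "Correct", "Incorrect"],
    "J.L.": ["No answer", "No answer", "No answer", "No answer"],
}

overall = {s: sum(POINTS[r] for r in rs) for s, rs in responses.items()}
print(overall)  # {'A.S.': 0.5, 'D.J.': 2.0, 'M.H.': 2.5, 'J.L.': 0.0}

# Class mean over this subset; the report's 0.7 average is computed
# over all 19 students entered into the pre-assessment.
print(round(sum(overall.values()) / len(overall), 2))  # 1.25
```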

The analysis of the pre-assessment by S1-21:

Overall, 19 students were entered into the pre-assessment: 8 students answered all questions, 5 students did not participate, and 6 students answered some of the questions. There were four questions in total. With each correct answer earning one point, the average score on a scale of 0 to 4 was 0.7.

“Students are displaying an inability to distinguish between important and unimportant information. This may be due to students not focusing on the research question and differentiating information based on the research question. While teaching students to annotate the text, I will often refer students back to the research question and focus students' attention on deciding what information best answers the question. Additionally, I will break the information down into chunks for students. The first pre-assessment was not completed by many students, which might be due to all the questions being given to them at once. Students participated more when the questions were broken down into chunks, with one question done at a time. It may further help students to focus on one sentence or paragraph at a time to decide whether the information is important or not important.”

Completer S2-21 identified her 9th grade English class for the study.

Which High Impact Evidence-Based Teaching Strategy did you choose to implement? Please provide a rationale to support your choice.

“With my teaching, I have noticed that I often tell students to edit, revise, and proofread their work. However, I have realized that several of my students do not actually know how to effectively revise their work. Most of my students just rush through this step in the writing process, when I believe that this step is, in fact, one of the most important aspects of writing. Therefore, I have decided to implement the RADR editing technique. The acronym stands for: R = Replace, A = Add, D = Delete, and R = Reorder. In order to pre-assess my students, I had them complete an expository writing assignment based on Lord of the Flies, a novel that we are currently reading. The writing prompt was: 'After reading Chapter 4 of Lord of the Flies, what kind of behavior do you believe the masks compel (cause) the children to take part in? Please use textual evidence to support your answer.' I told my students that this should be at least a one-paragraph response, but other than that, I did not give them much guidance on how to complete the assignment. Once they submitted their writing, I graded them on a standard rubric that I use for most expository writing (see rubric attached).”

“The reason why I chose this tool for pre-assessment was so I could see what their writing was like before giving them the RADR editing tool that will (hopefully) help them improve their writing skills. Overall, the pre-assessment went fairly well. This is my honors class, so they normally do a good job, but a lot of them made careless mistakes. Also, several of them did not do nearly enough analysis in their writing. Those were the two main errors or issues that I saw in my students' writing.”

The rubric is out of 20 points. I did have one student get a perfect score (a 20 out of 20), but this was not surprising to me because she has kept a 100 average almost all year, and is a really good writer and overall student. The lowest score was an 11 out of 20. This student is also a very good student, but he made lots of careless mistakes and did not fully answer the prompts. The class average turned out to be 15.38.

CLASSROOM OBSERVATION(S) of C1-21 and C2-21

C1-21

A School of Education faculty researcher completed remote classroom observations during the first implementation of the strategy and again following two weeks of strategy implementation. The observations took place on January 28, 2021 and February 11, 2021. The faculty member observed two lessons; each group consisted of 15 students.

The NYSUT Teacher Practice Rubric (2014 Edition), Standard III, Instructional Practice, was scored. The performance levels are Highly Effective, Effective, Developing, and Ineffective.

Element III.1: Teachers use research-based practices and evidence of student learning to provide developmentally appropriate and standards-driven instruction that motivates and engages students in learning.

| Indicator | Observation #1 | Observation #2 |
| --- | --- | --- |
| A. Aligns instruction to standards | Highly Effective | Highly Effective |
| B. Engages students | Highly Effective | Highly Effective |

Element III.2: Teachers communicate clearly and accurately with students to maximize their understanding and learning.

| Indicator | Observation #1 | Observation #2 |
| --- | --- | --- |
| A. Provides directions to students | Highly Effective | Highly Effective |
| B. Uses questioning techniques to engage students | Effective | Effective |
| C. Responds to students | Effective | Effective |
| D. Communicates content | Highly Effective | Highly Effective |

Element III.3: Teachers set high expectations and create challenging learning experiences for students.

| Indicator | Observation #1 | Observation #2 |
| --- | --- | --- |
| A. Articulates measures of success | Highly Effective | Highly Effective |
| B. Implements challenging learning experiences | Highly Effective | Highly Effective |

Element III.4: Teachers explore and use a variety of instructional approaches, resources, and technologies to meet diverse learning needs, engage students, and promote achievement.

| Indicator | Observation #1 | Observation #2 |
| --- | --- | --- |
| A. Differentiates instruction | Highly Effective | Highly Effective |

Element III.5: Teachers engage students in the development of multi-disciplinary skills, such as communication, collaboration, critical thinking, and use of technology.

| Indicator | Observation #1 | Observation #2 |
| --- | --- | --- |
| A. Provides synthesis, critical thinking, problem-solving, and opportunities for collaboration | Effective | Effective |

Element III.6: Teachers monitor and assess student progress, seek and provide feedback, and adapt instruction to student needs.

| Indicator | Observation #1 | Observation #2 |
| --- | --- | --- |
| A. Uses formative assessment to monitor and adjust pacing | Highly Effective | Highly Effective |
| B. Provides feedback during and after instruction | Highly Effective | Highly Effective |

Ratings Summary, Observation #1: Highly Effective 9/12 (75%); Effective 3/12 (25%); Developing 0/12 (0%); Ineffective 0/12 (0%).

Ratings Summary, Observation #2: Highly Effective 9/12 (75%); Effective 3/12 (25%); Developing 0/12 (0%); Ineffective 0/12 (0%).
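The summaries above are simple tallies of the twelve indicator ratings. A minimal sketch of that computation (ours, with the Observation #1 ratings transcribed from the tables above):

```python
from collections import Counter

# The twelve indicator ratings from Observation #1 above.
obs1 = [
    "Highly Effective", "Highly Effective",              # III.1 A-B
    "Highly Effective", "Effective", "Effective",
    "Highly Effective",                                  # III.2 A-D
    "Highly Effective", "Highly Effective",              # III.3 A-B
    "Highly Effective",                                  # III.4 A
    "Effective",                                         # III.5 A
    "Highly Effective", "Highly Effective",              # III.6 A-B
]

counts = Counter(obs1)
for level in ["Highly Effective", "Effective", "Developing", "Ineffective"]:
    n = counts.get(level, 0)
    print(f"{level}: {n}/{len(obs1)} ({n / len(obs1):.0%})")
# Highly Effective: 9/12 (75%); Effective: 3/12 (25%); others 0/12 (0%)
```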

In the first observation, C1-21 had a very effective opening and introduction to her lesson. She tied the day's work to the last session. She has exemplary management skills and strategies and was able to constantly monitor student engagement, participation and performance: reading comments in the chat, repeating instructions as often as needed, acknowledging student comments, and redirecting to the target task. C1-21 utilized multiple activities to reinforce the use of the strategy (Jamboard, graphic organizer, NewsELA). C1-21 modeled use of the strategy and utilized think-alouds effectively. Questioning improved over the course of the lesson to become more open-ended and support critical thinking (e.g., "Why do you think so?"). The lesson ended with a quick but very effective closing.

In the second observation, C1-21 welcomed each student as they joined the class, repeating instructions clearly for them to get started by opening the Nearpod link that she put in the chat. She clearly stated the objective and agenda for the lesson and launched a poll for the students to self-evaluate their performance against the objective so far. C1-21 is very enthusiastic, positive and encouraging, eliciting as much participation as she can. She gives clear, specific directions ("There are two ways to do this. The first way…"). She called on as many different students as possible to engage and assess them. When a student asked a question about the content, she encouraged the desire to learn more and elaborated on the information without detracting from the lesson. The Nearpod included several different guided and independent activities that reinforced the strategy (NewsELA articles, a game, open-ended questions). She managed the chat well, addressing irrelevant side conversations quickly and respectfully. Unbelievably strong management skills!

C2-21 Classroom Observations

An SoE faculty member completed a classroom observation during the first implementation of the strategy and a second observation following two weeks of implementation. The observations took place on January 21, 2021 and February 5, 2021. The faculty member observed two lessons: the first group consisted of 18 students, while the second group was 17 students. Domain 3, Instructional Practice, of the Teachscape Framework for Teaching Teacher Practice Rubric (2011) was used as the observation tool. This observation tool was chosen because it is the tool used by the administration for teacher review in the district. The tool has four performance levels: Distinguished, Proficient, Basic and Unsatisfactory.

Table 1: Domain 3 Instruction Elements and Ratings – Observation #1 and Observation #2

| Element | Observation #1 | Observation #2 |
| --- | --- | --- |
| 3a: Communicating with Students | Proficient | Proficient |
| 3b: Using Questioning and Discussion Techniques | Proficient | Proficient |
| 3c: Engaging Students in Learning | Proficient | Proficient |
| 3d: Using Assessment in Instruction | Distinguished | Distinguished |
| 3e: Demonstrating Flexibility and Responsiveness | Proficient | Proficient |

Ratings Summary, Observation #1: Distinguished 1/5 (20%); Proficient 4/5 (80%); Basic 0/5 (0%); Unsatisfactory 0/5 (0%).

Ratings Summary, Observation #2: Distinguished 1/5 (20%); Proficient 4/5 (80%); Basic 0/5 (0%); Unsatisfactory 0/5 (0%).

Narrative Comments: During the first observation, C2-21 opened by reviewing the importance of editing and revising written work and engaged several different students in sharing the strategies they currently use to do so. C2-21 then introduced the RADR strategy. She went over each step clearly, giving examples and nonexamples of each. She asked students to try each step with a current piece of work while circulating to answer questions. She used excellent skills to redirect, engage, and refocus students on the independent task, then provided time for students to share their thoughts with a peer. During the second observation, C2-21 stated the objective very clearly and reviewed what the students knew from previous lessons. She used good questioning to build on their responses and support deeper thinking. Students were very comfortable sharing their ideas and thought processes. C2-21 smoothly transitioned to the next activity, reminding remote students who had just entered what to be working on. She reviewed the RADR strategy step by step, writing prompts in the Google chat at the same time. One student in the class needs to process information by talking about it and often raised his hand to share what he was thinking. C2-21 managed this expertly, allowing him the time to voice his thinking while also engaging several other students and staying on track with the lesson. Great time management, and effective use of class time!

Phase Two: Second Strategy Implementation

C1-21 Strategy Implementation

1. Describe the strategy implementation lesson and the formative assessment you used at the end of the lesson.

“The lesson in which I taught the strategy consisted of two different class sessions. In the first class session, I gave students the second try of the pre-assessment, introduced important vs. unimportant details, and had students play a game. In the second class session, students reviewed important vs. unimportant details and read an article as a class and individually, identifying the important information.”

“In the first lesson, after the students completed the pre-assessment, I introduced students to what was important by referencing the questions in the pre-assessment and the research question we were focusing on: 'What are some of the benefits of social media?' I provided the students examples of what would be important and what would not be important under this research question. Students then played a game where they practiced differentiating between important and unimportant details. Within the game, they were focusing on the question 'What are some of the benefits of social media?' The 8 questions gave a single sentence of information and students identified whether it was important or not important.”

“In the second lesson, students began by reviewing our learning objectives as well as our class topic of social media. I reviewed what it meant if information was important or not important. I then introduced an activity with Google Jamboard. The center of the page had a target on it. I reminded students that we are looking for information that is about our focus question. I introduced the analogy of playing darts and wanting to hit the center to earn the most points, and explained that this is similar to important information. At the bottom of the page, there were 9 cards that students would move based on whether they contained important or not important information. Important information went into the center of the target; not important information went outside of the target.”

“In the next part of the lesson on the second day, I showed students an article from NewsELA. We read the article together and I paused after every few lines to ask students if the information I read was important or not important. They would respond either yes or no in the chat of Google Meet. If it was identified as important, I would highlight it and explain why it was or was not important. I modeled reading almost the whole article and then students completed a reading of the article on their own. Students were instructed to do just as I had done: read the article, pause every few sentences, and highlight the information that was important. After highlighting, they were asked to summarize the article in a few sentences.”

2. Reflect on the strategy implementation and the overall effectiveness of the lesson.

“The strategy implementation went well and I was able to cover a lot of pieces within the two lessons. The first lesson had more student participation, but students were struggling to differentiate between the two types of details. Students were actively involved, using a game and then responding in the chat while I read an article; students were engaged in this lesson. In the second lesson, students did not seem as engaged or as actively participating. Ten students completed the first activity and 5 students completed the second activity during the class period. Of those students, the average was 6.3 out of 10 points in differentiating between important and unimportant details on the Jamboard activity. Only five students participated in the NewsELA activity; of these, students scored an average of 2.6 out of 5 points.”

The Jamboard activity shows that students are starting to understand the difference between important and unimportant details. In this activity, students dragged cards, each containing a sentence, to spots signifying either important or not important details; the information was isolated to one sentence at a time. The NewsELA activity shows, with the small sample size, that students were struggling to differentiate between important and unimportant details within a whole article. In the previous two activities, students viewed information in chunks; in this activity, students were looking at the whole article at once.

While students may have struggled with completing the last activity, these two assessments show that students are becoming more comfortable with differentiating important and unimportant details. Students are doing better when viewing the information in chunks but are struggling when the information is provided as a whole.

3. Analyze the results of the formative assessment. What information did the assessment provide to guide your second implementation of the High Impact Evidence-Based Strategy?

Only five of 24 students completed the NewsELA article assessment. Six students were absent; the rest either did not participate during the given time or read the article but did not engage with the text by highlighting any information. In the NewsELA article, there were 5 sections that I identified as important to our research question. On average, students identified 2.6 of these passages. Two students identified 4 of the 5 sections of important information, one identified three, and two identified one. All five students had also highlighted information that was not important. Many of the students highlighted sentences surrounding the important information and identified them as important even though the neighboring information was not.

While not many students completed the activity (5 on the NewsELA article), it did show that students were struggling with identifying information within the article. The students were able to identify some of the important information but needed to work on focusing on each individual sentence rather than on a paragraph as a whole. While modeling for students, I would read a paragraph and pause at the line break. I would ask students to identify whether the information was important or not important by answering in the chat. Students may have continued to use this strategy, reading the paragraph and stopping at the line break. While modeling, I broke some paragraphs into sentences to decide whether they were important, but this may not have been obvious to students. To help students focus on information in even smaller chunks, it may be more helpful to model with one sentence at a time rather than waiting until the line break to pause.

On the Jamboard, eleven students completed the activity with an average score of 6 out of 10 points. There were five pieces of information that were important and five that were not important. Students were to drag each piece of information to either the important or the not important section of the target. Looking across the 10 pieces of information, students seemed to struggle on some of the same items.

All students but one were able to identify one card as important (Social media may make people worry about how they look). All students identified one card as unimportant, as this one was done as a model. Eight students incorrectly marked as not important a card that identified important information (On the screen, we see what people want to show us). Students may have interpreted these in different ways out of the context of the original article. They may also have believed that it is important for people to show what they want; I had interpreted it as meaning we may not see the truth when this happens, and that it could be a potential problem. Five students marked a card as not important (There's a lot of make-believe on screens) when it should have been marked important. Similar to the card mentioned above, students may not have understood what this make-believe on the screens might be, and so did not view it as a potential problem.

Five students incorrectly identified a not important statement as important (A study found that college students feel very anxious). The same thing happened with the statement "Anxiety is now the most common mental-health problem in the US," which five students marked as important. This shows that students see anxiety as a potential problem, but they did not consider whether it had to do with social media. The information does not make clear whether this anxiety is due to social media, something else, or multiple factors; therefore, the information is not important because it cannot be attributed to social media.

Similarly, four students incorrectly marked a statement as important (We need more balance in our lives) when it was not important. It is not important as it does not clearly have anything to do with social media. Students may have not been focused on the research question when making this decision. Five students marked a statement as not important that was supposed to be important (We care too much about the number of likes or followers on Instagram). Maybe these students thought that this was a good thing and did not identify it as a problem despite the author writing “we care too much”.

The Jamboard responses show that students may have interpreted the information differently outside of the context of the article. Students also did not remain completely focused on the research question as they identified information that could be potential problems but that were not related to social media.

Based on the information provided by the Google Jamboard and NewsELA activities, the next lesson continued to focus on differentiating between important and unimportant information. Students were still struggling to remain focused on the research question and to deal with information within a whole article rather than in separate chunks. A model will be provided for students before they work independently. This time, however, the completer will pause after each sentence and think aloud to decide whether that information answers the specific research question and whether or not it is important. This will help students view the information in chunks, as they will have seen it modeled this way as well.

Average of 6 out of 10 on Jamboard (11 students); 2.6 out of 5 on NewsELA (5 students).

| Student | Jamboard Target Activity | NewsELA Highlighting Activity | Pre-Assessment |
| --- | --- | --- | --- |
| A.S. | 6 out of 10 accurately identified | Did not highlight anything | 0.5/4 |
| A.L. | No work | No work | 0.5/4 |
| A.R. | No work | No work | 1.5/4 |
| A.L. | No work | No work | --- |
| D.J. | 6 out of 10 accurately identified | 3 out of 5 highlighted correctly, 6 incorrectly (incorrect due to background information not portraying problem) | 2/4 |
| E.D. | 4 out of 10 accurately identified | No work | 0/4 |
| F.S. | 10 out of 10 accurately identified | 4 out of 5 highlighted correctly, 4 incorrectly (incorrect due to background information not portraying problem) | --- |
| G.S. | No work | No work | 1/4 |
| I.P. | 4 out of 10 accurately identified | No work | 0/4 |
| J.R. | No work | Did not highlight anything | 1.5/4 |
| J.L. | 8 out of 10 accurately identified | No work | 0/4 |
| J.N. | No work | No work | --- |
| J.L. | 9 out of 10 accurately identified | 4 out of 5 highlighted correctly, 5 incorrectly (incorrect due to background information not portraying problem) | 2/4 |
| K.S. | No work | No work | 0/4 |
| M.H. | 6 out of 10 accurately identified | 1 out of 5 highlighted correctly, 1 incorrectly (incorrect due to background information not portraying problem) | 2.5/4 |
| M.R. | No work | No work | 2/4 |
| N.R. | No work | No work | --- |
| N.A. | 3 out of 10 accurately identified | Did not highlight anything | 0/4 |
| P.R. | 9 out of 10 accurately identified | 1 out of 5 correctly highlighted, 1 incorrectly (incorrect due to background information not portraying problem) | --- |
| R.R. | No work | No work | 0/4 |
| S.J. | No work | No work | 0.5/4 |
| S.S. | 2 out of 10 accurately identified | Did not highlight anything | 0/4 |
| T.R. | No work | No work | 0/4 |
| V.T. | No work | No work | 0/4 |

CS1-21 Results of Second Strategy Implementation Summative Assessment (in the original color-coded table, green represents growth shown)

| Student | Pre-Assessment | First Implementation | Review | Second Implementation |
| --- | --- | --- | --- | --- |
| A.S. | 0 correct - 2 reversals, 1 both important and not important info included, 1 not relevant answer | 6/10 on Jamboard activity, did not complete NewsELA | 10 correct, 8 incorrect | 3 correct - 1 reversal, 1 both important and not important |
| A.L. | Did not complete | Did not complete | Did not complete | Did not complete |
| A.R. | 1 correct - 1 not relevant answer, 2 not answered | Did not complete | Did not complete | Did not complete |
| A.L. | 0 correct - 3 not relevant answers, 1 reversal | Did not complete | Did not complete | Did not complete |
| D.J. | 2 correct - 2 reversals | 6/10 on Jamboard, 3/5 on NewsELA | 8 correct, 3 incorrect, 2 missed | 2 correct - 2 both important and not important |
| E.D. | 0 correct - 1 reversal, 2 not relevant answers | 4/10 on Jamboard, did not complete NewsELA | Did not complete | 1 correct - 3 both important and not important info included, 1 reversal |
| F.S. | Did not complete | 10/10 on Jamboard, 4.5/5 on NewsELA | 8 correct, 7 incorrect, 2 missed | 3 correct - 1 reversal, 1 both important and not important info included |
| G.S. | 1 correct - 2 reversals | Did not complete | Did not complete | Did not complete |
| I.P. | 0 correct - 1 reversal | 4/10 on Jamboard, did not complete NewsELA | Did not complete | 0 correct - 2 reversals, 2 both important and not important info included |
| J.R. | 1 correct - 1 reversal, 1 not relevant answer | Did not complete | Did not complete | 0 correct - 1 reversal, 1 both important and not important info included |
| J.L. | Did not complete | 8/10 on Jamboard, did not complete NewsELA | 3 correct, 4 incorrect, 6 missed | 1 correct - 3 not relevant, 1 both important and not important info included |
| J.N. | Did not complete | Did not complete | Did not complete | Did not complete |
| J.L. | 2 correct - 2 reversals | 9/10 on Jamboard, 4/5 on NewsELA | 4 correct, 3 incorrect, 6 missed | 3 correct - 2 both important and not important info included |
| K.S. | Did not complete | Did not complete | Did not complete | Did not complete |
| M.H. | 2 correct - 1 reversal, 1 not relevant answer | 6/10 on Jamboard, 1/5 on NewsELA | Did not complete | 0 correct - 2 both important and not important info included, 3 reversals |
| M.R. | 2 correct - 2 reversals | Did not complete | Did not complete | 0 correct - 4 not relevant answers |
| N.R. | Did not complete | Did not complete | Did not complete | Did not complete |
| N.A. | 0 correct - 3 reversals | 3/10 on Jamboard, did not complete NewsELA | Did not complete | 1 correct - 2 not relevant answers, 2 both important and not important info included |
| P.R. | Did not complete | 9/10 on Jamboard, 1/5 on NewsELA | Did not complete | Did not complete |
| R.R. | Did not complete | Did not complete | Did not complete | Did not complete |
| S.J. | 0 correct - 1 reversal, 2 not relevant answers | Did not complete | 9 correct, 1 incorrect, 1 missed | Did not complete |
| S.S. | 0 correct - 3 reversals, 1 not relevant answer | 2/10 on Jamboard, did not complete NewsELA | Did not complete | Did not complete |
| T.R. | Did not complete | Did not complete | Did not complete | Did not complete |
| V.T. | Did not complete | Did not complete | Did not complete | Did not complete |

C1-21: Analysis of data from the results of the summative assessment, the impact on student learning, and the overall effectiveness of the strategy

Students improved from the pre-assessment to the summative assessment. Eleven of 24 students completed the post-assessment, with an average score of 2 out of 5 points; the average score on the pre-assessment was 0.7 out of 4 points. On the pre-assessment, 22 responses were marked incorrect because students reversed important and not important information when answering the questions; on the post-assessment, this mistake was made only 10 times. Additionally, students provided not-relevant information on both assessments, sometimes including information that was not listed in the selected reading. A trend observed throughout this unit was that students did better when asked what was important than when asked to report information that was not important. On the post-assessment, when asked what information was important, 13 responses were correct; when asked what information was not important, 1 response was correct. Of the five post-assessment questions, three asked for important information and two asked for not important information. Students seemed to struggle to keep track of which kind of information each question was asking for.

Overall, 5 students showed growth from the pre-assessment to the post-assessment. One student's score remained the same, while three students' scores decreased from the pre-assessment. Perhaps these students were not paying close attention to the focus question or to whether it asked them to identify important or not important information. For 15 of the 24 students, the completer was unable to determine growth because they did not complete the pre-assessment, the post-assessment, or both.
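The report does not state how scores on the two different scales (pre-assessment out of 4, post-assessment out of 5) were compared to judge growth. One plausible approach, sketched below with hypothetical score pairs, normalizes each score by its maximum before comparing:

```python
# Minimal sketch (ours; the report does not describe its comparison
# method): classify each student's change from the pre-assessment
# (out of 4) to the post-assessment (out of 5) by normalizing each
# score to a 0-1 fraction of its maximum before comparing.
def growth(pre, post, pre_max=4, post_max=5):
    if pre is None or post is None:
        return "undetermined"  # one or both assessments missing
    diff = post / post_max - pre / pre_max
    if diff > 0:
        return "growth"
    return "same" if diff == 0 else "decline"

# Hypothetical (pre, post) pairs; None marks a missing assessment.
scores = {"A.S.": (0.5, 3), "D.J.": (2, 2), "J.L.": (None, 1)}
for student, pair in scores.items():
    print(student, growth(*pair))  # growth, decline, undetermined
```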

C2-21 Strategy Implementation

Describe the strategy implementation lesson and the formative assessment you used at the end of the lesson. Students answered the questions, “If I ask you to revise or proofread your work, what does that mean to you? What are some strategies you use to edit your work?” They discussed their answers and were then shown an infographic with the RADR editing strategy. The completer explained each letter and what it stood for, and gave examples of ways students could implement the strategy in the quickwrite that they had previously completed.

Students were provided two minutes to do the following two steps:

● First, choose one letter from the RADR strategy to implement in their quickwrite (QW).
● Then, after implementing the strategy, write down on a scrap sheet of paper which strategy they implemented, what edit they made, and why they made that edit.

This caused the students to think carefully about the edits they were making and the benefits of those revisions.

After the two minutes, students shared their revisions with a partner and then shared out to the entire class. This process was implemented twice. Afterward, students were provided time to make any final edits and were told to let the RADR infographic drive the rationale behind their revisions.

2. Reflect on the strategy implementation and the overall effectiveness of the lesson.

“Overall, the students did a really good job with the implementation of the RADR editing technique. All of them really made good use of their editing time, and all of them were able to vocalize why they made the edits they did, and how it benefited their writing. I also knew this implementation was successful because when I asked them what they thought the purpose of writing down what they edited and why, and the purpose of this overall activity, they gave great and meaningful responses. They all understood that they had to write down what they edited to actually force them to make revisions that would be beneficial to their writing, and to help them better understand why we revise our work and how to properly do that.”

3. Analyze the results of the formative assessment. What information did the assessment provide to guide your second implementation of the High Impact Evidence-Based Strategy?

“I could tell that the implementation of this strategy was effective because 13 of my 18 students increased their scores. The others who did not increase their scores did not backtrack; their scores just stayed the same (one of those students already had a perfect score). This helped guide my plan for the second implementation of my strategy because I now have the goal of all of my students increasing their scores, especially from their pre-assessment score. Also, for my second implementation of the strategy, I want to emphasize using the RADR technique to help them choose better evidence, and to help them improve upon their analysis of the evidence used in their writing.”

1. Reflect on the second strategy implementation and the overall effectiveness of the lesson. (Administer the summative assessment).

The second implementation of the RADR strategy had students complete an expository quickwrite related to the class's current text, Lord of the Flies. The prompt asked them three questions:

❏ Give a quick summary of the chapter; who was considered the “beast”?
❏ How would you describe the boys’ behavior in the passage? Please use a piece of textual evidence to support your response (don’t forget to analyze and cite).
❏ Lastly, were you surprised that Ralph and Piggy were involved in the murder?

While the prompt was different, the main difference between the first and second strategy implementations was that students were asked to reflect on the evidence and analysis they used in their writing. For instance, students were told that while using the RADR editing technique they should decide whether the evidence they used to support their response was the best possible evidence they could have chosen. They were then instructed to use the RADR editing technique to ensure that they truly analyzed the evidence in their response. The completer instructed the students to take two minutes to complete one aspect of the RADR strategy, write down which edit they made and why, and then share their edit with a partner and with the class as a whole. They did this activity twice. Once they were done, the students were provided time to make any other necessary revisions using the RADR strategy.

C2-21 indicated, “I found this implementation very successful because yet again my students were able to vocalize and make sense of the revisions they made in their writing, and because I saw significant improvement with my students’ analysis skills.”

C2-21 Results of Second Strategy Implementation Summative Assessment

Student # | Rubric Score for Pre-Assessment (out of 20 points) | Rubric Score after First Implementation | +/- Scores | Rubric Score after Second Implementation | Total Gain from Pre-Assessment to Post-Assessment | Notations
1 | 15 | 16 | 1 | 20 | 5
2 | 16 | 18.5 | 2.5 | 19 | 3
3 | 14 | 14 | 0 | 18.5 | 4.5
4 | 17 | 17 | 0 | 19.5 | 2.5
5 | 16 | 18 | 2 | 19.5 | 3.5
6 | 11.5 | 19 | 7.5 | 20 | 8.5
7 | 17.5 | 18.5 | 1 | 19 | 1.5
8 | 17 | 19 | 2 | 19 | 2
9 | 17 | 19 | 2 | 19 | 2
10 | 17 | 17 | 0 | 20 | 3
11 | 18 | 19 | 1 | 20 | 2
12 | 19.5 | 19.5 | 0 | 20 | 0.5
13 | 19.5 | 20 | 0.5 | 20 | 0.5
14 | 18 | 19 | 1 | 20 | 2
15 | 17 | 19 | 2 | 19 | 2
16 | 15.5 | 16.5 | 1 | 17 | 1.5
17 | 11 | 18 | 7 | 19 | 8
18 | 20 | 20 | 0 | 20 | 0 | Started out with perfect score
Average score for class | 15.38 | 18.17 | | 19.3 |

C2-21 found the RADR strategy to be very effective in impacting the students' learning. The assessment data show that every student (other than the student who started with a perfect score) increased their rubric score on the quickwrites, and 15 of 18 students increased their rubric score by multiple points from the pre-assessment to the summative assessment. Additionally, the class average increased from 15.38 out of 20 to 19.3 out of 20. Beyond the increase in rubric scores, C2-21 reflected, “I also noticed that my students were making a lot less careless mistakes in their writing due to using the RADR strategy.”
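To make the table's arithmetic explicit, the following minimal Python sketch (illustrative only, not part of the completer's study) recomputes the total gain column as the rubric score after the second implementation minus the pre-assessment score, for a subset of rows:

    # Illustrative sketch, not part of the study: total gain = rubric score
    # after the second implementation minus the pre-assessment score.
    scores = {  # student #: (pre-assessment, after 2nd implementation)
        1: (15, 20),
        6: (11.5, 20),
        17: (11, 19),
        18: (20, 20),
    }
    for student, (pre, post) in scores.items():
        print(f"Student {student}: gain = {post - pre}")
    # Prints gains of 5, 8.5, 8, and 0, matching the Total Gain column above.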

Phase Three: Tripod Student Survey Implementation and Analysis

Completer C1-21 administered the Tripod student survey and 12 students participated. The classroom management portion of the student survey was not administered because the class was virtual.

C1-21 Tripod Student Survey Results – 7 Cs Scores (Overall Survey Score: 300)

Construct | Care | Confer | Captivate | Clarify | Consolidate | Challenge | Class Mgmt.
Scaled Score | 318 | 326 | 322 | 288 | 290 | 256 | N/A
Rating | Medium | Medium | High | Medium | High | Low | Medium


The Tripod survey measures seven domains of teacher effectiveness. Scaled scores for each construct and for the composite range from 202 to 398, with 270 to 330 as the mid-range and 300 as the mid-point. The scaled score for each item in the 7Cs combines the responses to that item from every student in the class. Results are presented as Low, Medium, or High; High means the score was in the top 25% of scores from similar classrooms surveyed by Tripod over the past several years.
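As a minimal illustration of the banding logic described above, the following Python sketch (a hypothetical helper using only the cutoffs this report states, not Tripod's proprietary construct-level norms) classifies a composite scaled score:

    # Hypothetical helper, not Tripod's scoring code: classifies a composite
    # scaled score using only the ranges this report describes
    # (scale 202-398, mid-range 270-330).
    def composite_band(score: float) -> str:
        if not 202 <= score <= 398:
            raise ValueError("Tripod scaled scores range from 202 to 398")
        if score < 270:
            return "Low"
        if score <= 330:
            return "Mid Range"
        return "High"

    print(composite_band(300))  # C1-21's overall composite -> "Mid Range"
    print(composite_band(362))  # CS20's overall composite (below) -> "High"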

The overall composite score of 300 falls within the mid-range, indicating that C1-21 ranked in the middle part of the scale (between 270 and 330), where the majority of classrooms fall. A teacher in this range has scores similar to those of most teachers and classrooms surveyed on the teaching practices and student engagement measures. Completer C1-21 showed concern for students' well-being, made learning enjoyable, and gave students time to explain ideas. Challenge is the only area that fell below the middle part of the scale. Areas to focus on include helping students learn to correct mistakes, asking frequent questions to ensure understanding, and not letting students give up when the work gets hard.

C1-21 reflected on the administration of the survey with the students. The class has been virtual since the beginning of the school year; C1-21 has never met the students in person and sees them for only 30 minutes a week, when most of them do not turn on their cameras or unmute their microphones. Twelve students participated in the survey, which is half of the class.

For the Care dimension, C1-21 indicated, “It has been hard to understand how the students individually feel and direct support to them individually. I was surprised to see that these were in the medium to high range as it is very hard to specifically focus on individual students. Confer is also difficult as the students do not turn on their cameras or microphones often. However, the students reported that students speak up and share their ideas about class work. This is sometimes done through the chat but mostly is to answer a question I pose or to ask a clarifying question on the assignment.”

C1-21 felt that the section on Clarify was an area where students may not be able to accurately rate each of the statements. The first and fifth statements mention comments on student work; since the class does not receive grades, teacher comments are often just to say good job or to tell the students they are missing a certain part of the assignment. C1-21 indicated that gauging student understanding has been difficult in a virtual environment, as students often do not speak up to ask for help and cannot be seen. Captivate was one of the few areas where C1-21 felt that the students could accurately rate their experiences, as this can be judged both virtually and in person. Ratings in this area were between medium and high, which pleased C1-21 given the small sample size of the survey. For Consolidate, C1-21 has tried to summarize lessons at the end of class as goals for the day are revisited, and students noticed this in the first statement. Nearpod allows C1-21 to see students' work in real time and track where they are in the lesson. Overall, C1-21 reflected that she is concerned the survey may not be an accurate representation, since instruction during COVID-19 has been virtual. However, C1-21 states, “While in the virtual world, I want to continue working on summarizing the lesson at the end of the day and asking more questions throughout the lesson to ensure understanding and that students are on track.”

C2-21 Tripod Student Survey Results – 7 Cs Scores

C2-21 surveyed 15 students and reported that the administration went well. Overall Survey Score: 312

Construct | Care | Confer | Captivate | Clarify | Consolidate | Challenge | Class Mgmt.
Scaled Score | 294 | 330 | 288 | 324 | 306 | 350 | 290
Rating | Medium | High | Medium | High | Medium | High | Medium


The Tripod survey measures seven domains of teacher effectiveness. Scaled scores for each construct and for the composite range from 202 to 398, with 270 to 330 as the mid-range and 300 as the mid-point. The scaled score for each item in the 7Cs combines the responses to that item from every student in the class. Results are presented as Low, Medium, or High; High means the score was in the top 25% of scores from similar classrooms surveyed by Tripod over the past several years.

The overall composite score of 312 falls within the mid-range, indicating that C2-21 ranked in the middle part of the scale (between 270 and 330), where the majority of classrooms fall. A teacher in this range has scores similar to those of most teachers and classrooms surveyed on the teaching practices and student engagement measures. Subcomponent scores ranged from 288 to 350, with the area of Challenge scoring in the high range. Completer C2-21 demonstrated relative strengths in helping students correct mistakes, providing feedback that supports students to improve, inviting the suggestions and thoughts of students, and ensuring students explain their thinking, elaborate on their answers, and don't give up when things get hard. Opportunities for enhancement include making learning more interesting and enjoyable.

C2-21 indicated that administering the Tripod survey was very easy to do. C2-21 reported that some students thought some questions were weird and struggled with the wording of some of the questions, but overall everything seemed to go smoothly. C2-21 found the survey's information about students' perceptions “definitely interesting to see my strengths and weaknesses according to the survey. I found that my biggest strengths were the feedback I give to my students with their work. It seems as though most of the students felt that I give helpful feedback, and that makes me feel good because I obviously want them to improve upon their skills, so the fact that they feel like I give them good feedback means that I am helping them improve in ELA.”

C2-21 indicated that one area the survey identified as needing improvement is knowing when something is bothering the students. C2-21 indicated, “This piece of feedback was troubling for me because I always do ‘mindful Mondays’ where I check in with my students and give them an opportunity to share their feelings, but I guess I need to do a better job at recognizing when my students are having a bad day.” C2-21 indicated that another area where scores were lower was “student behavior in this class makes the teacher angry.” C2-21 reflected on instances where a couple of students were disrespectful to other students in the room, so C2-21 pulled those students out and talked to them. Her reflection on how to use the survey information to improve her instruction: “I guess with this feedback I need to make sure that I am mindful about how I handle certain behaviors.”

Phase Three Reflection Prompts

Two questions were asked of C1-21 and C2-21 to make connections between their case studies and their SJFC preparation program. The questions and responses are summarized below:

1. Think about the strategies you used for the case study. In which course(s) do you think you learned about the strategy? What did your SJFC coursework teach you about why this strategy would be important and the impact the strategy may have on student learning and performance?

C1-21 reflected: “One of the classes that I think most applies to the work in this study was the course, Introduction to Differentiated Curriculum, Instruction, and Assessment. During this class, we learned about providing multiple ways of representing content and expression. Throughout this study, I tried to accomplish this. Students participated in multiple ways of differentiating between important and not important details.”

C2-21 reflected: “I did not learn about this particular strategy in my coursework at SJFC; I actually learned about it through a professional development opportunity provided by my district. However, my SJFC coursework was what taught me that a strategy such as the RADR strategy would be successful. During my time studying education at SJFC, I learned that there are several components that make a lesson successful. Two of those components are modeling for the students and making sure the students are engaged. Before having the students practice the RADR strategy, I modeled ways in which they might use the strategy with their writing.”

2. How did your SJFC coursework help you use assessments to drive instruction? And how is this study an application of that? (InTASC 6,7,8)

C1-21 reflected: “During my SJFC coursework, I learned about the different types of assessment. Throughout this study, I was able to include both formative and summative assessments. These were essential at different parts of this instruction cycle to gauge student understanding. I was able to start with a pre-assessment to understand the students' prior knowledge. Once I taught differentiating between important and unimportant details, I was able to include more formative assessments on NewsELA and Google Jamboard to check students' progress. At the end, the summative assessment consisted of short answer questions within a Nearpod. These various assessments allowed me to see the entire process of student work and progress.”

C1-21 also reflected: “This study demonstrated the planning-assessment-instruction cycle. The pre-assessment determined the current understanding and needs of the students. From there, I planned the lessons knowing that students did not have a very strong understanding of differentiating between these details. In each of the lessons I taught, I included an informal, formative assessment which guided the following lessons.”

C2-21 reflected: “My coursework at SJFC explicitly taught me the importance of assessments to drive instruction. The education courses taught me the difference between formative and summative assessments, which has not only been helpful with this study, but also with my teaching in general. Through Fisher I learned to use formative assessments to help gauge how my students are doing in the learning process, and then I use those formative assessments as feedback for myself to know what I need to work on, or review with my students. Then, the summative assessment is that final major assessment that tests their knowledge at the end of a unit. For this study I used the pre-assessment as a formative assessment to help me understand what students needed improvement on within their writing. I also used the first implementation of the RADR strategy as a formative assessment to see if the RADR strategy was actually beneficial to my students’ writing, and to see if I could improve the implementation of this strategy at all. This knowledge of assessments all came from my coursework at SJFC.”

Appendix A

Phase Three Reflection Prompts

1. Describe your experience with implementing the Tripod Survey with your students.

2. What did you learn from the student survey results and how will you use the information to impact your future practice?

3. Think about the strategies you used for the case study. In which course(s) do you think you learned about the strategy?

4. What did your SJFC coursework teach you about why this strategy would be important and the impact the strategy may have on student learning and performance? (InTASC 4,5)

5. How did this strategy help to make the discipline accessible and meaningful for learners so your students could master the content? (InTASC 4,5).

6. How did your SJFC coursework help you use assessments to drive instruction? And how is this study an application of that? (InTASC 6,7,8)

7. What, specifically, in your SJFC program prepared you to carry out the responsibilities you confront on your job? (InTASC 9,10)

8. Is there anything else you’d like to tell us about your preparation in the SoE preparation program?

Appendix B: Invitation to participate email

Good afternoon,

I hope you are well. As a well-respected alumnus of the Ralph C. Wilson, Jr. School of Education, I am reaching out to you to ask for your assistance in obtaining feedback about your preparation at SJFC and how that translates into your work in the classroom.

As part of our accreditation process through CAEP (Council for the Accreditation of Educator Preparation), we are required to work with our graduates in the field to explore, from your vantage point, how your preparation supports your students’ overall learning.

If you are willing to participate in the 2020-2021 study, the action research process would occur over six weeks and will include:

*Implementing a research-based high impact instructional strategy and reflecting, through blogs or journals, on the impact of this strategy on your students' learning. You will have the opportunity to pick the strategy you want to implement.

*Participating in two classroom observations of your use of the high impact instructional strategy

*Implementing a student survey

At the completion of the project, you will receive a $125 honorarium and the deepest gratitude from the School of Education. If you agree to participate, the School of Education will work with your district's administration to obtain all necessary permissions. What we ultimately need from you is the opportunity to gain a firsthand look at the outstanding work you accomplish every day in the classroom.

Please let me know by Monday, Oct. 26th or earlier, if you would be willing to participate in and support this project OR if you are not able to participate at this time. I appreciate you considering this request, as it will help the School of Education to meet the requirement of the accreditation body.

Thank you,

Dr. Whitney Rapp, Associate Dean

Ralph C. Wilson, Jr. School of Education St. John Fisher College,

Rochester, NY

Appendix C: Reflective Journal Prompts

Directions: After each phase of the study is completed, please respond to the corresponding reflection prompts.

Phase One (Week of Jan 11-15, 2021)

1. Which High Impact Evidence-Based Teaching Strategy did you choose to implement? Please provide a rationale to support your choice.

2. Describe the pre-assessment tool you implemented with your students and provide a rationale for choosing the tool.

3. Discuss the results of the pre-assessment and how the results inform the first strategy implementation.

Phase Two (January 18 – January 29, 2021) Reflection Prompts

1. Describe the strategy implementation lesson and the formative assessment you used at the end of the lesson.

2. Reflect on the strategy implementation and the overall effectiveness of the lesson.

3. Analyze the results of the formative assessment. What information did the assessment provide to guide your second implementation of the High Impact Evidence-Based Strategy?

Phase Three (Feb 1-12 and Feb 22-26, 2021) Reflection Prompts

1. Reflect on the second strategy implementation and the overall effectiveness of the lesson.

2. Describe your experience with implementing the Tripod Survey with your students.

Phase Four (March) Reflection Prompts

1. Analyze data from the results of the summative assessment, the impact on student learning, and the overall effectiveness of the strategy.

2. If you were to use this strategy in the future, what might be some next steps you would take?

3. What did you learn from the student survey results and how will you use the information to impact your future practice?


Ralph C. Wilson, Jr. School of Education St. John Fisher College

Program Completer Action Research Project – Spring 2020 CAEP Initial Standards 4.1 and 4.2

Program Impact Study Design

I. Description of Study and Study Sample

The spring 2020 Program Completer Action Research study was completed by one completer (referred to as Completer – Spring 2020, S20, or CS20), selected from a purposive sample of 12 completers representing the undergraduate and graduate childhood/special education and adolescence/special education initial certification programs who are now teaching in suburban, rural, or urban school districts. Two completers agreed to participate in the study, but one withdrew before it commenced. The completer who participated graduated in May 2018 and was selected by faculty because he 1) was known to have a P-12 teaching position, and 2) was likely to participate in the project based on his participation in his classes. Completer S20 was told that if he completed the project, he would receive a $100 honorarium (see Appendix A: Invitation to Participate email).

Completer S20 graduated from the undergraduate inclusive education program with a B.S. in Inclusive Adolescent Education (English) in May 2018. He is currently a 10th grade English teacher in a rural school district. At the high school where he teaches, 41% of the student population is economically disadvantaged. Completer S20 taught two 10th grade English classes, among other classes, with 16 students in one class and nine in the other.

II. Methodology

Completer S20 met on campus on December 10, 2019 with the faculty researcher to review study expectations and to answer any questions about the study. The directions and timeline of the study were presented, and access to the shared Google Doc was set up.


A. Directions and Timeline of the Completer Study

Initial Meeting: December 10, 2019

1. Identify one of Marzano’s Nine High Yield Instructional Strategies to implement in the classroom (see Appendix B: Marzano’s Nine High Yield Instructional Strategies).
2. Review study expectations and Tripod Survey directions (see Appendix C: Guide to Tripod's 7Cs Framework).

Phase One: Jan 13th – 16th, 2020

1. Identify where in the curriculum Marzano’s Nine High Yield Instructional Strategy could be implemented.
2. Create and administer a pre-assessment to obtain a baseline of the students’ knowledge base. Take data on the baseline results.
3. Implement a lesson using the identified High Yield Instructional Strategy.
4. Create and implement a formative assessment after the strategy implementation. Analyze the data from the assessment and create a plan to implement the High Yield Instructional Strategy a second time for improved student learning.
5. Begin responding to your reflective journal in Google Doc (see Appendix D: Reflective Journal Prompts) and reflect on the strategy implementation and the formative assessment analysis.

Phase Two: Feb 3rd – 6th, 2020

1. Implement the High Yield Instructional Strategy a second time.
2. Create and implement a summative assessment after the second strategy implementation.
3. Analyze the data from the summative assessment for overall success of the strategy implementation.
4. Add to the reflective journal on Google Doc, discussing the results of the second implementation on student learning. Address next steps if you were to use the strategy in the future.

Phase Three: Feb 14-28, 2020

1. Send home consent forms for administration of the student Tripod Survey.
2. Administer the student Tripod Survey to students.
3. Review results from the Tripod Survey (see Appendix E: Understanding Your Tripod Survey Results – Teacher Report).
4. Add to the Google Doc Reflective Journal discussing the results of the Tripod Survey.


III. Discussion of the Study Design and Implementation

A. Phase One: Pre-Assessment and First Strategy Implementation and Analysis

Completer S20 identified his two 10th grade English classes for the study. One class had 9 students and the other had 16 students.

Completer S20 chose the cooperative learning strategy, articulating the following rationale:

I chose to implement Cooperative Learning in order to promote student collaboration. We are going to use literature circles to read segments of Hiroshima by John Hersey. The text is written from the perspective of six individuals, so each group will read the events from their assigned character’s point of view and share their findings with the class (Phase 1 Reflection).

Completer S20 administered a pre-assessment on the literature text his students were reading to measure what students knew about the types of imagery and the events that surrounded the bombing of Hiroshima and Nagasaki. Armed with the pre-assessment data, he would design a post-assessment: a poster that includes a timeline of events from the assigned character's perspective and the examples of imagery the students found in the text. After the project, the students should be comfortable enough with the content to succeed on the exam.

The table below indicates the score that each target student received on the pre-assessment.

Student ID # | Rubric Score Pre-Assessment (20)
1 | 13
2 | 8
3 | 6
4 | 6
5 | 13
6 | 4
7 | 12
8 | 10
9 | 5
10 | 0
11 | 6
12 | 10
13 | 8
14 | 14
15 | 6
16 | 10
17 | 7
18 | 9
19 | 7
20 | 9
21 | 7
22 | 10
23 | 15
24 | 12
25 | 2

The analysis of the pre-assessment provided the following conclusions according to CS20 (Completer S20):

Students were unable to get a perfect score because the assessment asked questions that were specific to the text that would be read throughout the unit. The remaining points were earned by knowing the different types of imagery and basic World War II knowledge. Among these students, the average score was 8.36 out of 20. CS20 noted that the low average score indicated that the material to be covered in this unit was new to his students; therefore, the unit itself was worth doing. Students mostly lost points because they were unable to summarize the events of the text and did not know general World War II history. This knowledge would be derived from reading the text, so the literature circles should target those weak points. CS20 concluded that he needed to make sure the summarizer in each literature circle shared their summary so that everyone knew the events that transpired in each chapter.
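For reference, the reported class average can be reproduced directly from the pre-assessment table above (a minimal Python sketch; the score list is transcribed from that table):

    # Pre-assessment rubric scores (out of 20) for the 25 students,
    # transcribed from the table above.
    scores = [13, 8, 6, 6, 13, 4, 12, 10, 5, 0, 6, 10, 8, 14,
              6, 10, 7, 9, 7, 9, 7, 10, 15, 12, 2]
    print(round(sum(scores) / len(scores), 2))  # -> 8.36, as CS20 reports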

To implement student collaboration, CS20 noted that students read Hiroshima by John Hersey in literature circles. Each student in the group had a role: summarizer, illustrator, or linguist. The students read their assigned pages and completed their respective assignments. The formative assessment was a collection of their literature circle work: a summary from the summarizer, a drawn moment of imagery from the illustrator, and a collection of defined vocabulary words from the linguist. These tasks could not be completed properly without having read the text, and it also assessed the illustrator on their knowledge of imagery. By completing these tasks, the students were preparing for the post-assessment while working collaboratively.

Reflecting on this first strategy implementation, CS20 noted:

So far, the literature circles were successful. They promoted independent reading and autonomous work while utilizing group collaboration and accountability. The students who were typically unmotivated felt pressured into working because they did not want to let their group down. I had a group that was attempting to skip the reading, and I was worried that this group was toxic. Also, one student was not doing his work at all, and I didn’t want this group to get frustrated. I was going to give a reflection survey to see how students felt about their group. If a group was having issues, I could address the problem and adjust my grading.


The analysis of the first strategy implementation provided the following conclusions:

1) The students did a great job with the formative assessment.
2) My two major concerns were summary writing and following directions. I had two students who summarized the first or second page of the book but left out all the details from later pages. This indicated that they did not read the whole text.
3) There were a lot of moving parts for this group project to work, so the linguists might forget to do all of their jobs.
4) I am going to remind the students of what all the jobs entailed and give them a chance to make corrections. Hopefully, this will clear up any misconceptions before starting Chapter 2.

Classroom Observation

A School of Education faculty researcher completed a classroom observation during the first implementation of the strategy. The observation took place on February 3, 2020. The faculty member observed two lessons: the first at 8:30 a.m. and the second at 1:30 p.m. The first group consisted of 16 students, while the second group had 9 students. Both groups read the same chapter of the same book. Standard III: Instructional Practice of the Danielson Teacher Practice Rubric (2014) was used as the observation evaluation tool. This observation tool was chosen because it is the tool used by the administration for teacher review in the district. The tool has four possible ratings: Distinguished, Proficient, Basic, and Unsatisfactory.

Table 1: Domain 3 Elements and Ratings – First Observation
Domain 3: Instruction (Announced Classroom Observation)

3a: Communicating with Students | Proficient
3b: Using Questioning and Discussion Techniques | Proficient
3c: Engaging Students in Learning | Proficient
3d: Using Assessment in Instruction | Basic
3e: Demonstrating Flexibility and Responsiveness | Proficient

Ratings Summary: Distinguished ratings – 0/5 (0%), Proficient ratings – 4/5 (80%), Basic ratings – 1/5 (20%), Unsatisfactory ratings – 0/5 (0%)

During the lesson, CS20 used the cooperative learning strategy to teach the book Hiroshima by John Hersey. The book has six major characters, each with their own plotline. Each group read about one character and followed that character's story through the book. The students had three roles: illustrator, summarizer, and linguist. There was no discussion leader. Students were engaged during the lesson and were seen discussing the book. The essential question for the day was “What happens to my character in Chapter 5?” The students were engaged in their groups, but there was no cross-communication among the groups. At the end of the lesson, every student got up and left the class. Feedback to CS20 from the faculty researcher included that groups should share what they read because they were reading the same book but following different characters; students ought to learn about other characters from their classmates so that they comprehend the entire book. This feedback was communicated to CS20.

Table 2: Domain 3 Elements and Ratings – Second Observation
Domain 3: Instruction (Announced Classroom Observation)

3a: Communicating with Students | Proficient
3b: Using Questioning and Discussion Techniques | Proficient
3c: Engaging Students in Learning | Distinguished
3d: Using Assessment in Instruction | Proficient
3e: Demonstrating Flexibility and Responsiveness | Proficient

During the second observation at 1:30 p.m., CS20 implemented some of the feedback provided by the faculty researcher after the morning class to better support learning. Applying the feedback, he conferred with each group during the lesson to ensure they understood what they were reading. Then, at the end of the lesson, each group shared what they did with the entire class. This enhanced the general comprehension of the book. Observation scores improved in the areas of engaging students in learning and using assessment in instruction.

Ratings Summary: Distinguished ratings – 1/5 (20%), Proficient ratings – 4/5 (80%), Basic ratings – 0/5 (0%), Unsatisfactory ratings – 0/5 (0%)

Phase Two: Second Strategy Implementation

For the second strategy implementation, based on the first formative assessment results, CS20 continued the reading of the book Hiroshima. The analysis of the second strategy implementation provided the following conclusions to CS20:

1) The cooperative strategy sparked conversation within the groups, which means the strategy was effective at promoting collaboration.
2) Although the students had their own individual roles, they did a good job discussing the reading to make sure they carried out their respective parts correctly.
3) One oversight was that the characters in the book encounter each other once in a while, but each group did not have background information on the other groups' characters. Therefore, in the second implementation, each group shared its summary with the class after each literature circle:

Table 2: Analysis of the data from the results of the summative assessment, the impact on student learning, and the overall effectiveness of the strategy.

Student # | Rubric Score Pre-Assessment (20) | Rubric Score Post-Assessment (20) | +/- Scores | Notations
1 | 13 | 20 | +7 | Above Standard
2 | 8 | 20 | +12 | Above Standard
3 | 6 | 20 | +14 | Above Standard
4 | 6 | 19 | +13 | Above Standard
5 | 13 | 18 | +5 | At Standard
6 | 4 | Absent | N/A | N/A
7 | 12 | 20 | +8 | Above Standard
8 | 10 | 20 | +10 | Above Standard
9 | 5 | 20 | +15 | Above Standard
10 | 0 | Absent | N/A | N/A
11 | 6 | 18 | +12 | At Standard
12 | 10 | 19 | +9 | Above Standard
13 | 8 | 16 | +8 | At Standard
14 | 14 | 20 | +6 | Above Standard
15 | 6 | Absent | N/A | N/A
16 | 10 | 15.5 | +5.5 | At Standard
17 | 7 | 20 | +13 | Above Standard
18 | 9 | 16 | +7 | At Standard
19 | 7 | 20 | +13 | Above Standard
20 | 9 | 20 | +11 | Above Standard
21 | 7 | 20 | +13 | Above Standard
22 | 10 | 20 | +10 | Above Standard
23 | 15 | 20 | +5 | Above Standard
24 | 12 | 19 | +7 | Above Standard
25 | 2 | 20 | +18 | Above Standard

During the second implementation, the students were much more confident about what their roles entailed; therefore, the summaries were much stronger and the literature circles worked more cohesively. Completer S20 noticed that the students who were not the summarizer were more likely to skim the chapter because the plot was not very relevant to, for example, the linguist role. This problem was corrected by making sure students read for at least the first 20 minutes of class without discussing or writing anything. Overall, the strategy seemed to have accomplished the intended goals. A comparison of the pre-assessment and post-assessment scores indicated that all students' scores improved after implementation of the cooperative learning strategy. The range of improvement was between 5 and 18 points, confirming the effectiveness of the strategy on improved student learning. Completer S20 noted that in the future he would add a discussion leader role: the students worked collaboratively, but they did not truly discuss the book unless they were curious about something or had a simple question. The strategy did not by itself force profound or meaningful discussions that would push students to higher levels of Bloom's Taxonomy.

B. Phase Three: Tripod Student Survey Implementation and Analysis

Completer Survey

Completer S20 administered the Tripod student survey and stated that the survey administration went well.


Table 1: Tripod Student Survey – 7 Cs Scores (Overall Survey Score: 362)

Construct | Care | Confer | Captivate | Clarify | Consolidate | Challenge | Class Mgmt.
Scaled Score | 348 | 386 | 368 | 364 | 364 | 356 | 346
Rating | High | High | High | High | High | High | Medium


The Tripod survey measures seven domains of teacher effectiveness. Scaled scores for each construct and for the composite range from 202 to 398, with 270 to 330 as the mid-range and 300 as the mid-point. The scaled score for each item in the 7Cs combines the responses to that item from every student in the class. Results are presented as Low, Medium, or High; High means the score was in the top 25% of scores from similar classrooms surveyed by Tripod over the past several years.

The overall composite score of 362 falls within the high range, indicating that CS20 ranked in the top quarter of similar classrooms surveyed on the teaching practices and student engagement measures. Similarly, all seven subcomponent scores fell within the high range (346-386), providing strong evidence that Completer S20 showed concern for students' well-being, encouraged and valued student ideas, sparked and maintained student interest, and helped students understand content and resolve confusion. In addition, he explained, modeled, and implemented routines to systematize class procedures, helped students integrate and synthesize key ideas, and insisted that students do their best work.


Phase Three Reflection Prompts

1. Describe your experience with implementing the Tripod Survey with your students.

The students did well with the Tripod survey. There were no issues with implementation.

2. What did you learn from the student survey results and how will you use the information to impact your future practice?

The results from the survey were overwhelmingly positive, which is a great feeling. I scored a “medium” for the questions about checking for understanding and summarizing the lesson. I think those two categories are connected. If I do a better job at summarizing the lesson during closure, the students will probably feel more comfortable with understanding all the central material. I could also utilize more formative assessment techniques throughout each day.

3. Think about the strategies you used for the case study. In which course(s) do you think you learned about the strategy?

I learned about the literature circle in Dr. Arndt’s School, Ability, and Learning class, and I learned about the power of collaboration in Dr. Rapp’s curriculum class. My approach to writing a unit is always inspired by Dr. Guarino’s methods class.

4. What did your SJFC coursework teach you about why this strategy would be important and the impact the strategy may have on student learning and performance? (InTASC 4,5)

My coursework taught me that the social component to schooling is just as important as the knowledge that is acquired. We discuss social justice and citizenship constantly in these classes, so I always try to embed these skills and concepts into every lesson. By having students discuss meaningfully, they can remember the content better for retention on exams, and form relationships with their peers to create a better school culture.

5. How did this strategy help to make the discipline accessible and meaningful for learners so your students could master the content? (InTASC 4,5).

The text that we used was my district's required reading. It is very dry, long, and harrowing. Expediting the reading of the text through literature circles made the content much more digestible and got the main ideas across without students losing interest in the content.

6. How did your SJFC coursework help you use assessments to drive instruction? And how is this study an application of that? (InTASC 6,7,8)

In Dr. Guarino’s methods class, we had multiple conversations about effective and ineffective assessments. The content of an examination does not need to be a secret to the students. They should know what they are supposed to learn; therefore, the pre- and post-assessments included the exact same information. Also, the standards and essential questions were always posted so that students would know what they will be tested on.


7. What, specifically, in your SJFC program prepared you to carry out the responsibilities you confront on your job? (InTASC 9,10)

I use the backwards design concepts from Dr. Guarino's methods class every time I design a unit. I use instructional strategies like think-pair-share, fishbowl, gallery walk, and concept mapping from Dr. Rapp and Dr. Schlosser's classes. The constant literary analysis in my English coursework has been extremely valuable when closely reading texts with my classes.

8. Is there anything else you’d like to tell us about your preparation in the SoE preparation program?

No

Appendix A: Invitation to participate email

Good afternoon,

I hope you are well. As a well-respected alumnus of the Ralph C. Wilson, Jr. School of Education, I am reaching out to you to ask for your assistance in obtaining feedback about your preparation at SJFC and how that translates into your work in the classroom.

As part of our accreditation process through CAEP (Council for the Accreditation of Educator Preparation), we are required to work with our graduates in the field to explore, from your vantage point, how your preparation supports your students’ overall learning.

If you are willing to participate in the 2019-20 study, the action research process would occur over six weeks and will include:

*Implementing a research-based high impact instructional strategy and reflecting, through blogs or journals, on the impact of this strategy on your students' learning. You will have the opportunity to pick the strategy you want to implement.

*Responding to a brief survey about your use of the instructional practice

*Participating in two classroom observations of your use of the high impact instructional strategy

*Implementing a student survey

At the completion of the project, you will receive a $125 honorarium and the deepest gratitude from the School of Education.


If you agree to participate, the School of Education will work with your district’s administration to obtain all necessary permissions. What we ultimately need from you is the opportunity to gain a firsthand look at the outstanding work you accomplish every day in the classroom.

Please let me know by Monday, Oct. 28th or earlier, if you would be willing to participate in and support this project OR if you are not able to participate at this time. I appreciate you considering this request, as it will help the School of Education to meet the requirement of the accreditation body.

Thank you,

Dr. Susie Hildenbrand

Dr. Susan M. Hildenbrand

Associate Dean

Ralph C. Wilson, Jr. School of Education

St. John Fisher College, Rochester, NY

[email protected]

(585)385-7297

Dr. Linda McGinley
Assessment Coordinator
Ralph C. Wilson, Jr. School of Education
[email protected]
(585) 385-8179


Appendix D: Reflective Journal Prompts

Directions: After each phase of the study is completed, please respond to the corresponding reflection prompts.

Phase One (Week of Jan 13th – 16th, 2020)

1. Which High Impact Evidence-Based Teaching Strategy did you choose to implement? Please provide a rationale to support your choice.

2. Describe the pre-assessment tool you implemented with your students and provide a rationale for choosing the tool.

3. Discuss the results of the pre-assessment and how the results inform the first strategy implementation.

Phase Two (Feb 3rd – 6th, 2020) Reflection Prompts

1. Describe the strategy implementation lesson and the formative assessment you used at the end of the lesson.

2. Reflect on the strategy implementation and the overall effectiveness of the lesson.

3. Analyze the results of the formative assessment. What information did the assessment provide to guide your second implementation of the High Impact Evidence-Based Strategy?

Phase Three (Feb 14-28, 2020) Reflection Prompts

1. Reflect on the second strategy implementation and the overall effectiveness of the lesson.

2. Describe your experience with implementing the Tripod Survey with your students.

Phase Four (March) Reflection Prompts

1. Analyze data from the results of the summative assessment, the impact on student learning, and the overall effectiveness of the strategy.

2. If you were to use this strategy in the future, what might be some next steps you would take?


3. What did you learn from the student survey results and how will you use the information to impact your future practice?


Ralph C. Wilson, Jr. School of Education St. John Fisher College

Program Completer Case Studies – Fall 2018 CAEP Initial Standards 4.1 and 4.2

Program Impact Study Design

I. Description of Study and Study Sample

The Fall 2018 case studies were qualitative, consisting of multiple methodological components: completer action research, observation data, reflective journals, a nationally validated student survey (Tripod), formative and summative student assessments, and pre- and post-surveys and/or a focus group.

The Fall 2018 case studies were completed by two completers (Completer One – C1 and Completer Two – C2) drawn from a purposive sample of 24 completers representing the undergraduate (12) and graduate (12) childhood/special education and adolescence/special education initial certification programs. The associate dean of the School of Education attended the spring student reception in May 2018 and put out a request for volunteers to potentially participate in the study beginning in fall 2018. The initial certification attendees filled out forms indicating whether they were interested in the opportunity, along with their contact information. The potential participants all graduated in 2018, consistent with the best practice set by CAEP of working with completers no more than three to five years post-graduation. In October 2018, emails were sent to all 24 positive respondents, and two indicated their willingness to participate. Completers were told that if they completed the project, they would receive a $100 honorarium (see Appendix A: Invitation to Participate email). Approval for each completer's participation was obtained from their building administrator.

C1 and C2 both graduated in May 2018 from the MS Special Education/Childhood Education Program. C1's current position is a third grade classroom teacher in a suburban school district; there are 23 students in the class, and 58% of the students receive free or reduced lunch. C2's current position is a fifth grade classroom teacher in a rural school district; there are 17 students in the class, and 62% of the students receive free or reduced lunch.

II. Methodology

C1 and C2 met on campus on November 12th, 2018 with the Associate Dean and the Assessment Coordinator to review study expectations and to answer any questions about the study. The directions and timeline of the study were presented, and access to the shared Google Doc was set up.

A. Directions and Timeline of the Completer Study

Initial Meeting: November 12th, 2018
1. Identify one of Marzano’s Nine High Yield Instructional Strategies to implement in the classroom (see Appendix B: Marzano’s Nine High Yield Instructional Strategies).
2. Review study expectations and Tripod Survey directions (see Appendix C: Guide to Tripod's 7Cs Framework).
3. Respond to electronic pre-survey about teacher preparation program (Appendix F: Standard 4 Pre-survey).

Phase One: November 26th – November 30th, 2018
1. Identify where in the curriculum Marzano’s Nine High Yield Instructional Strategy could be implemented.
2. Create and administer a pre-assessment to obtain a baseline of the students’ knowledge base. Take data on the baseline results.


3. Implement a lesson using the identified High Yield Instructional Strategy.
4. Create and implement a formative assessment after the strategy implementation. Analyze the data from the assessment and create a plan to implement the High Yield Instructional Strategy a second time for improved student learning.
5. Begin responding to the Reflective Journal Google Doc (see Appendix D: Reflective Journal Prompts) and reflect on the strategy implementation and the formative assessment analysis.

Phase Two: December 3rd – December 7th, 2018
1. Schedule a time to be observed implementing the High Yield Instructional Strategy for a second time.
2. Implement the High Yield Instructional Strategy a second time.
3. Create and implement a summative assessment after the second strategy implementation.
4. Analyze the data from the summative assessment for overall success of the strategy implementation.
5. Add to the Google Doc Reflective Journal discussing the results of the second implementation and the impact on student learning. Address next steps if you were to use the strategy in the future.

Phase Three: December 10th – December 14th, 2018
1. Send home consent forms for administration of the student Tripod Survey.
2. Administer the student Tripod Survey to students.
3. Review results from the Tripod Survey (see Appendix E: Understanding Your Tripod Survey Results – Teacher Report).
4. Add to the Google Doc Reflective Journal discussing the results of the Tripod Survey.

Phase Four: Week of December 17th, 2018
1. Participate in a focus group/post-survey about the study experience.

III. Discussion of the Study Design and Implementation

A. Completer Teacher Preparation Pre-Survey Results

Before the study began, C1 and C2 were sent an electronic survey to assess their perceptions about their teacher preparation program and their ability to effectively apply the professional knowledge, skills, and dispositions that preparation experiences were designed to achieve, including having a positive impact on student learning (Standard 4.2 & 4.3). The survey can be found in Appendix F. The results from the survey include:

(1) 100% of respondents strongly agree that the SJFC graduate initial certification program prepared them to have a positive impact on students’ learning.
(2) 100% of respondents strongly agree or agree that the SJFC graduate initial certification program prepared them to contribute to an expected level of student learning growth.
(3) 100% of respondents strongly agree or agree that the SJFC graduate initial certification program prepared them to effectively apply the professional knowledge, skills, and dispositions to a diverse group of learners in the classroom.
(4) 100% of respondents strongly agree that the SJFC graduate initial certification program prepared them to analyze and interpret formative and summative data to inform lesson planning and instruction to increase students’ learning.
(5) The most valuable learning outcomes from the SJFC graduate initial certification program included the Social Justice class and the “understanding of the value of relationships and instructing the whole child based on his or her diverse needs.”
(6) The respondents noted that there was very little missing from the SJFC graduate initial certification program, but noted that it would have been helpful to have preparation on how to write grants and find resources for schools that have few resources.
(7) One respondent summarized that “the class discussions and assignments, along with the field experiences and student teaching, made a tremendous impact on me.”

B. Phase One: Pre-Assessment and First Strategy Implementation and Analysis

Completer One (C1)

C1 chose the cooperative learning strategy, articulating the following rationale: “I’m choosing to implement Marzano’s instructional strategy of cooperative learning in my third grade classroom at Longridge Elementary School. Throughout the day, I provide students with opportunities to work in heterogeneous groupings through ‘I do, we do, you do’ instructional activities. More specifically, I will be implementing the aforementioned strategy during Tier 1 math instruction to look more closely at the effect of cooperative learning on student achievement in math. Each day, I work with two math groups that consist of twelve and eleven students respectively. The groups are labelled by color, purple group and yellow group, as the groups were created as an equal mix of academic levels and behaviors. During instruction, students will be given opportunities for collaborative learning as they work through their problem sets with a partner or small group before being asked to independently apply the skill being taught” (Phase One Reflection Response).

C1 chose a rounding worksheet provided with the EngageNY math modules, which the students completed independently. The worksheet consisted of 26 questions: for the first thirteen, students were asked to round two-digit numbers to the nearest 10, and for the second thirteen, students were asked to round three-digit numbers to the nearest 100. C1 chose this pre-assessment as a way to inform instruction for Topic C of Grade 3 Module 2 using EngageNY. In this topic, students further develop their skills by learning to round (3.NBT.1). The analysis of the pre-assessment provided the following conclusions according to C1:

(1) Out of 23 students, four students scored 100%, two students scored between 75-99%, six students scored between 50-74%, and nine students scored 50% or below (three students were absent).
(2) The results indicated that many of the students need direct instruction and practice opportunities for rounding multi-digit numbers to the nearest tens and hundreds place.
(3) For the first strategy implementation, two heterogeneous cooperative learning groups will be created to allow for supportive peer interactions.

For the first implementation of the strategy based on the pre-assessment results, C1 implemented the strategy of cooperative learning during Tier 1 math instruction. The lesson objective for Lesson 18 in Module 2 was to decompose twice to subtract measurements, including three-digit minuends with zeros in the tens and ones places. After working through subtraction with the standard algorithm in Lesson 18, students would need to estimate differences by rounding numbers to the nearest ten or hundred in Lesson 19 (to be taught Monday, December 10). One of the two cooperative learning groups worked with C1 (which included direct instruction on the skill, guided practice, and working in pairs) while the other group worked with the Zearn Math program, and then the groups switched in an afternoon session. Zearn Math is a form of digital Tier 1 K-5 math instruction that is aligned to the Eureka Math/EngageNY modules. Each day, students work through engaging digital content at their own pace and learn targeted lessons. According to the Zearn website, “Zearn Math is built on the Universal Design for Learning (UDL) framework, a set of research-based guidelines for developing flexible learning environments that accommodate individual learning differences. Zearn Math aligns with UDL principles by providing students with multiple ways of acquiring knowledge.” C1 found she can incorporate cooperative learning by inviting more advanced students to help their peers who were repeatedly getting stuck on a specific concept or question. At the end of the math instruction, the students were given the Lesson 19 Exit Ticket created by EngageNY as a formative assessment.

C1 reflected that the lesson had successes and challenges. Most of the students were able to use their peers to resolve misconceptions or mistakes in their problems, but other students chose to seek out one-to-one assistance from C1 instead of utilizing peer support. The analysis of the first strategy implementation provided the following conclusions according to C1:

(1) The exit ticket had three math problems, and the results were: five students scored 0/3, three students scored 1/3, six students scored 2/3, and six students scored 3/3 (three students did not complete the exit ticket).
(2) Most of the students did well with the first problem; problems two and three presented the most incorrect responses. C1 feels the difficulties with problems two and three may be because half of the students are reading below grade level and the problems are written at a third grade level. In addition, no models of place value charts were allowed with the exit ticket.

A School of Education faculty member completed a classroom observation during the first implementation of the strategy. Standard III: Instructional Practice of NYSUT's Teacher Practice Rubric (2014) was used as the observation evaluation tool. This tool was chosen because it is the one used by administration for teacher review in the district. The tool has four possible indicator ratings: Highly Effective, Effective, Developing, and Ineffective.


Table 3: Standard III Elements and Indicator Ratings – C1

Element III.1: Teachers use research-based practices and evidence of student learning to provide developmentally appropriate and standards-driven instruction that motivates and engages students in learning.
A. Aligns instruction to standards – Developing
B. Engages students – Effective

Element III.2: Teachers communicate clearly and accurately with students to maximize their understanding and learning.
A. Provides directions and procedures – Developing
B. Uses questioning techniques to engage students – Developing
C. Responds to students – Developing
D. Communicates content – Effective

Element III.3: Teachers set high expectations and create challenging learning experiences for students.
A. Articulates measures of success – Developing
B. Implements challenging learning experiences – Effective

Element III.4: Teachers explore and use a variety of instructional approaches, resources, and technologies to meet diverse learning needs, engage students, and promote achievement.
A. Differentiates instruction – Developing

Element III.5: Teachers engage students in the development of multi-disciplinary skills, such as communication, collaboration, critical thinking, and use of technology.
A. Provides synthesis, critical thinking, problem-solving, and opportunities for collaboration – Effective

Element III.6: Teachers monitor and assess student progress, seek and provide feedback, and adapt instruction to student needs.
A. Uses formative assessment to monitor and adjust pacing – Developing
B. Provides feedback during and after instruction – Developing

Ratings Summary: Highly Effective ratings – 0/12 (0%); Effective ratings – 4/12 (33%); Developing ratings – 8/12 (67%); Ineffective ratings – 0/12 (0%)

Completer Two (C2)

C2 chose the Summarizing and Note Taking strategy, articulating the following rationale: "I chose to implement Summarizing and Note Taking of Marzano's Nine High Yield Instructional Strategies. The rationale behind my decision comes from research presented in the Learning-Focused Lesson Plan, which is the program used in the district where I am employed. According to the program, 'summarizing is a learning strategy, not a teaching strategy. Students do summarizing in order to construct meaning for their new learning.' Additionally, this selection was made because the placement I am currently in is a 5th grade classroom in a majority-minority school. There is a broad range of diverse needs in my classroom, with nine out of the sixteen students reading at least two grades below level. Five of those nine students read at a second-grade level or below as fifth graders. This makes it incredibly hard for them to break apart the content in many of the nonfiction texts they are required to read. Also, it makes discussing upper-level concepts such as author's craft and figurative language more of a challenge, since they are more prone to reach frustration level early on in their reading. In addition to reading below grade level, seven of my sixteen students are English Language Learners whose first language was Spanish. It is important to note that these students come from homes where Spanish is the primary language spoken and that when they are in school they mostly speak Spanish with their peers" (Phase One Reflection Response).

C2 chose a teacher-created pre-assessment for their students: a short survey formatted in a Likert-scale template. The survey asked students questions such as "when I read, I highlight or underline important dates and events," to which students responded by indicating whether they take part in the behavior "always," "sometimes," or "never." The survey questions were read aloud to the class, and examples of each behavior were provided. The student responses were anonymous to ensure that the students felt comfortable providing honest answers. The analysis of the pre-assessment provided the following conclusions according to C2:
(1) The students indicated that they rarely took part in summarizing activities.
(2) C2 determined that more explicit instruction on how to properly take notes and use summarizing strategies was needed.
(3) C2 created guided notes and prompting questions to use while reading realistic text.

For the first implementation of the strategy based on the pre-assessment results, C2 read aloud a non-fiction article on the Chinese New Year from NEWSELA. C2 provided scaffolded, fill-in-the-blank notes to guide the students' interaction with the text. Guiding questions were posed periodically while the students listened to the story. After the read aloud was complete, the students were asked to provide a summary of the story in their own words. C2 reflected that the strategy implementation seemed helpful because students were able to use the scaffolded fill-in-the-blank note-taking sheets as a guide to support their interaction with the text. By having the notes, students were held more accountable for using their reread strategy to actually go back into the text and find important details and information. This supported their ability to provide a summary of what they had read, which ultimately supported their overall understanding of the text. The analysis of the first strategy implementation provided the following conclusions according to C2:
(1) After analyzing the student summaries, C2 was able to see growth in the student responses, including the use of specific text details related to the prompt.
(2) C2 noted that the students still struggled with writing a detailed summary, so C2 created a planning sheet titled "Super Summarizers."
(3) C2 also created a "Three Dollar Summary" sheet, which will challenge the students to be more concise and detailed while writing summaries by requiring a summary of 300 words or less.

B. Phase Two: Second Strategy Implementation and Summative Assessment

Completer One (C1)

For the second strategy implementation based on the formative assessment results, C1 allowed the students to work in pairs with a buddy of their choice, work independently, or work with C1 in a small group. The majority of the students chose to work in pairs. The students' task during the second strategy implementation was to complete a four-page practice test that contained multi-step problems concerning rounding to the nearest tens and hundreds, subtracting three-digit numbers, measuring elapsed time, and comparing precise measurements to estimated measurements (a review of Module 2). The analysis of the second strategy implementation provided the following conclusions according to C1:
(1) Although the practice tests were not collected during this second iteration, C1 was able to informally assess students' understanding of concepts stressed in Module 2 when the class worked through each problem as a whole group the following day using the Smartboard.
(2) This whole-group activity allowed different perspectives to be shared, and showed that a variety of strategies was beneficial for the classroom of diverse learners when working through challenging, multi-step word problems.

For the summative assessment, C1 used a four-page Module 2 Math Assessment created by EngageNY for the NYS Common Core Mathematics Curriculum, using problems one and two for the summative assessment based on the content of the cooperative learning strategy implementation.


The analysis of the summative assessment provided the following conclusions according to C1:
(1) For problem one, 17 students received full credit, one student received partial credit, and five students received no credit. Overall, 78% of students received credit for problem one.
(2) For problem two, 10 students received full credit and 13 students received no credit. Overall, 43% of students received credit for problem two.
(3) After reviewing students' assessments, it is clear that most students feel comfortable showing and rounding a three-digit number to the nearest hundred, as responses to problem one indicated. At the beginning of this module, students had many opportunities to work cooperatively in small groups or with partners when working through their problem sets pertaining to rounding two-digit and three-digit numbers to the nearest ten and nearest hundred. As indicated by the summative assessment results, cooperative learning was an effective strategy for teaching and practicing the concept of rounding on a number line.
(4) The results from problem two indicate that cooperative learning did not necessarily have a positive correlation to student success with subtracting three-digit numbers that included zeros. Although students were given various opportunities to work with each other and learn in a small-group setting, perhaps there needed to be more teacher-led instruction, time for clarification, and a reintroduction of the Hundreds/Tens/Ones chart to visually represent subtracting with larger numbers.
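To make the arithmetic behind these percentages explicit, the following is a minimal sketch (hypothetical code, not part of the study; the counts are taken from conclusions (1) and (2) above, and the function name is illustrative):

```python
# Minimal sketch: reproducing the percent-credit figures from C1's
# summative assessment analysis. Counts come from the report above.

def percent_credit(full: int, partial: int, total: int) -> float:
    """Percent of students receiving full or partial credit on a problem."""
    return 100 * (full + partial) / total

TOTAL_STUDENTS = 23  # class size reported in the pre-assessment analysis

# Problem one: 17 full credit, 1 partial credit, 5 no credit
print(round(percent_credit(17, 1, TOTAL_STUDENTS)))  # -> 78

# Problem two: 10 full credit, 13 no credit
print(round(percent_credit(10, 0, TOTAL_STUDENTS)))  # -> 43
```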

C1 stated in their reflective response journal that, based on their overall analysis, their next steps would be to:
(1) Incorporate more movement activities and strengthen peer relationships through cooperative learning opportunities.
(2) Implement cooperative learning experiences that incorporate movement activities to encourage students to be more engaged and energized.
(3) Build students' interdependence, trust, and openness to one another, which could be done through cooperative learning.

Completer Two (C2)

For the second strategy implementation based on the formative assessment results, C2 chose a fictional text for the read aloud (The Sneetches, by Dr. Seuss). As part of the study design, C2 was observed by a faculty member during the second strategy implementation.

The analysis of the second strategy implementation provided the following conclusions according to C2:
(1) The students were able to access background knowledge about the difference between summarizing and retelling at the beginning of the lesson.
(2) The students indicated that they enjoyed the read aloud.

For the summative assessment, C2 presented the aforementioned Super Summarizers and Three Dollar Summary sheets. The students completed the summarizing activities independently, with teacher support as needed. The analysis of the summative assessment provided the following conclusions according to C2:
(1) Student responses showed a deeper understanding of the summarizing strategy.
(2) Using the organizers in place of an open response format increased the accuracy of the summary.

C2 stated in their reflective response journal that, based on their overall analysis, their next steps would be to:
(1) Use the summarizing organizers immediately to scaffold student understanding.
(2) Continue activities that allow practice in discriminating retelling from summarizing.

A School of Education faculty member completed the observation during the second implementation of the strategy. Domain 3: Instruction (Announced/Unannounced Classroom Observation) of the Danielson 2011 Rubric was used as the observation evaluation tool. This tool was chosen because it is the one used by administration for teacher review in the district. The tool has four possible indicator ratings: Distinguished, Proficient, Basic, and Unsatisfactory.

Table 4: Domain 3 Elements and Ratings – C2

Domain 3: Instruction (Announced/Unannounced Classroom Observation)
3a: Communicating with Students – Proficient
3b: Using Questioning and Discussion Techniques – Proficient
3c: Engaging Students in Learning – Proficient
3d: Using Assessment in Instruction – Basic
3e: Demonstrating Flexibility and Responsiveness – Distinguished

Ratings Summary: Distinguished ratings – 1/5 (20%); Proficient ratings – 3/5 (60%); Basic ratings – 1/5 (20%); Unsatisfactory ratings – 0/5 (0%)
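As an illustration of how a ratings summary like the one above can be tallied from the indicator ratings, here is a minimal sketch (hypothetical code; the labels and counts are taken from Table 4):

```python
from collections import Counter

# Minimal sketch: tallying indicator ratings (Table 4, C2) into the
# summary percentages reported above.
ratings = ["Proficient", "Proficient", "Proficient", "Basic", "Distinguished"]
counts = Counter(ratings)
total = len(ratings)

for label in ["Distinguished", "Proficient", "Basic", "Unsatisfactory"]:
    n = counts.get(label, 0)
    print(f"{label} ratings - {n}/{total} ({round(100 * n / total)}%)")
```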

C. Phase Three: Tripod Student Survey Implementation and Analysis

Completer One (C1)

C1 made the following comments about the administration of the Tripod Survey in their classroom: "Overall, I found that the Tripod Survey was easily accessible for my students, as well as myself. I quickly found and printed the student codes, and explained to students that each code was unique to them. After reading the administration guide, students logged in to the survey on their Chromebooks. I did have to circulate during this portion, as many students were typing in the wrong web address or missing letters. Although we use Chromebooks daily, my students vary in their typing accuracy and fluency. As soon as students entered their codes, they began to take the survey. A handful of students found the bubble scales to be overwhelming, as they had never seen a scale of that sort before. I read through the options at the top (for example: never, not always, sometimes, mostly, or always), and explained that students needed to read each item, then rate it according to their thoughts/feelings/experiences associated with the item. I was not sure if I could read each item aloud, so I had students try their best to read and complete the survey independently. The very last section, which was comprised of questions pertaining to students' backgrounds, was unanticipated. Many students were confused about this part, so I had them skip it if it was challenging to read or understand" (Phase Three Reflection Response).

Table 5: Tripod Student Surveys – 7 Cs Scores – C1

Overall Survey Score: 286 (Medium)
Care: 312 (Medium)
Confer: 286 (Medium)
Captivate: 294 (Medium)
Clarify: 290 (Medium)
Consolidate: 280 (Medium)
Challenge: 282 (Medium)
Class Mgmt.: 262 (Low)

*Range for average scores: 270 – 330 or Medium Rating
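The Low/Medium/High labels follow from the score bands: the report states that 270 – 330 is the Medium range, and the table labels imply that scores below 270 are Low and scores above 330 are High (e.g., C1's Class Mgmt. score of 262 is Low, and C2's Challenge score of 336, in Table 6 below, is High). A minimal sketch of that mapping (hypothetical code; the cutoffs outside the stated Medium range are inferred):

```python
# Minimal sketch: mapping Tripod 7Cs scores to rating bands. The 270-330
# Medium range is stated in the report; the Low/High cutoffs are inferred
# from the table labels (262 -> Low for C1, 336 -> High for C2).

def tripod_band(score: int) -> str:
    if score < 270:
        return "Low"
    if score <= 330:
        return "Medium"
    return "High"

c1_scores = {"Care": 312, "Confer": 286, "Captivate": 294, "Clarify": 290,
             "Consolidate": 280, "Challenge": 282, "Class Mgmt.": 262}
for construct, score in c1_scores.items():
    print(f"{construct}: {score} ({tripod_band(score)})")
```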

The analysis of the Tripod Survey provided the following conclusions according to C1:
(1) There is room to grow in the areas of clarifying, consolidating, challenging, and classroom management.
(2) C1 wholeheartedly agrees that instruction time is often lost to time spent on redirection or a lack of firm routines.
(3) C1 feels that the results from the Tripod Student Survey provide authentic student feedback to inform instruction going forward.

Completer Two (C2)

C2 made the following comments about the administration of the Tripod Survey in their 5th grade classroom: "Implementing the Tripod Survey with my students was a challenge. Unfortunately, my school has not yet become a 1:1 building. So, as a result of not using the laptops often, students sometimes struggle with logging in and accessing a webpage. This impacted the results of the study because I was running around trying to get the students logged in on the Tripod page, and as I was doing so, I was also giving directions. I had asked the students to wait until I was finished giving directions before they started to click through their responses; however, by the time I had even said that, I already had three students who had just clicked through the entire survey" (Phase Three Reflection Response).


Table 6: Tripod Student Surveys – 7 Cs Scores – C2

Overall Survey Score: 290 (Medium)
Care: 302 (Medium)
Confer: 292 (Medium)
Captivate: 274 (Medium)
Clarify: 288 (Medium)
Consolidate: 304 (Medium)
Challenge: 336 (High)
Class Mgmt.: 236 (Low)

*Range for average scores: 270 – 330 or Medium Rating

The analysis of the Tripod Survey provided the following conclusions according to C2:
(1) Second-language barriers can definitely impact a student's understanding of a task.
(2) The awareness that some students do not find school enjoyable causes sadness and frustration.
(3) C2 will provide more opportunities for creative writing to increase engagement and feelings of competence.

D. Phase Four: Focus Group/Post-Survey Excerpts (Appendix G: Focus Group/Post-Survey Questions)

Appendix A: Invitation to participate email

Good afternoon,

As a notable recent graduate of the SJFC School of Education, you are invited to assist us in obtaining feedback about your preparation at SJFC and how that translates into your work in the classroom. As part of our accreditation process through CAEP (Council for the Accreditation of Educator Preparation), we are required to work with our graduates in the field to explore, from your vantage point, how your preparation supports your students' overall learning. If you are willing and interested in participating, this process would occur over seven weeks and include:

* Implementing a research-based, high-impact instructional strategy and reflecting, through blogs and journals, on the impact of this strategy on your students' learning. In addition, you would be asked to participate in a culminating focus group.
* Responding to a brief survey about your use of the instructional practice.
* Implementing a student survey.

At the completion of the project, you will receive a $100 honorarium and our deepest gratitude. If you agree to participate, we will work with your district's administration to obtain all necessary permissions. What we ultimately need from you is the opportunity to gain a firsthand look at the outstanding work you accomplish every day in the classroom. Please let us know by Friday, Nov. 17th if you would be willing to participate in and support this project. We can be reached by email or phone.

Sincerely,

Dr. Susan Hildenbrand, Associate Dean, Ralph C. Wilson School of Education, [email protected], (585) 385-7297
Dr. Linda McGinley, Assessment Coordinator, Ralph C. Wilson School of Education, [email protected], (585) 385-8179

Appendix D: Reflective Journal Prompts

Directions: After each phase of the study is completed, please respond to the corresponding reflection prompts.

Phase One (Week of Nov. 26th, 2018) Reflection Prompts
1. Which High Impact Evidence-Based Teaching Strategy did you choose to implement? Please provide a rationale to support your choice.
2. Describe the pre-assessment tool you implemented with your students and provide a rationale for choosing the tool.
3. Discuss the results of the pre-assessment and how the results inform the first strategy implementation.

Phase Two (Week of Dec. 3rd, 2018) Reflection Prompts
1. Describe the strategy implementation lesson and the formative assessment you used at the end of the lesson.
2. Reflect on the strategy implementation and the overall effectiveness of the lesson.
3. Analyze the results of the formative assessment. What information did the assessment provide to guide your second implementation of the High Impact Evidence-Based Strategy?

Phase Three (Week of Dec. 10th, 2018) Reflection Prompts
1. Reflect on the second strategy implementation and the overall effectiveness of the lesson.
2. Describe your experience with implementing the Tripod Survey with your students.

Phase Four (Week of Dec. 17th, 2018) Reflection Prompts
1. Analyze data from the results of the summative assessment, the impact on student learning, and the overall effectiveness of the strategy.
2. If you were to use this strategy in the future, what might be some next steps you would take?
3. What did you learn from the student survey results, and how will you use the information to impact your future practice?

Appendix G

Focus Group/Post-Survey Questions – CAEP 4 Program Completer Study

1. Think about the strategies you used for the case study. In which course(s) do you think you learned about the strategy?
2. What did the coursework teach you about why this strategy would be important and the impact the strategy may have on student learning and performance? (InTASC Content Knowledge, Application of Content: 4, 5)
3. How did this strategy help to make the discipline (ELA and social studies) accessible and meaningful for learners so your students could master the content? (InTASC Content Knowledge, Application of Content: 4, 5)
4. How did coursework help you use assessments to drive instruction? And how is this study an application of that? (InTASC 6-8: assessment, planning, instructional strategies)
5. What, specifically, in your program prepared you to carry out the responsibilities you confront on your job? (InTASC Standards 9-10: professional learning, ethical practice, leadership and collaboration)
6. Is there anything else you'd like to tell us about your preparation in the SoE program?
