Track My Progress Technical Guide


The Technical Guide provides the reliability and validity documentation for the Track My Progress assessment.



Track My Progress® Technical Guide

Copyright and Trademark Notice

© 2014 True Progress, LLC. All rights reserved. Track My Progress is a trademark of True Progress, LLC. Windows is a trademark of Microsoft Corporation. Macintosh and Safari are trademarks of Apple Computer, Inc., registered in the US and other countries. Internet Explorer and Windows are registered trademarks of Microsoft Corporation in the US and/or other countries. Adobe, Adobe Reader, and Adobe Flash Player are either registered trademarks or trademarks of Adobe Systems Incorporated in the US and/or other countries. Firefox is a trademark of the Mozilla Foundation.

Information in this document is subject to change without notice and does not represent a commitment on the part of True Progress, LLC. The software described in this document is furnished under a license agreement or non-disclosure agreement. The software may be used only in accordance with the terms of the agreement. This document and the software described within it may not, in whole or in part, be copied, photocopied, reproduced, translated, or reduced to any electronic medium or machine-readable form other than that which has been specified herein without prior written consent from True Progress, LLC.

Version 1.0

Published by True Progress LLC, PO Box 5753 • NH • 03766 • USA TEL: 800-294-0989 • [email protected] • www.trackmyprogress.com


Table of Contents

Introduction
The Assessment Process
   Four Test Windows
   Computer Adaptive Testing
   Test Item Types
Using Track My Progress Assessment Data
   Appropriate uses of Track My Progress assessment data
   Scale Scores
   Percentile Scores
   Color Categories
   Using Data to Guide Learning
   Universal Screening
   Defining risk
   When do interventions begin?
   Instructional Groups
   Differentiating Instruction
   Diagnostic
   Test Question Analysis
   Expected Progress
   Goal Setting
Validity and Reliability of Track My Progress
   Item Pool Development
   Item Types
   The Computer Adaptive Test Engine
   Pilot Testing
   Reliability
   Validity
   Reading vs. Mathematics
   National Norms: Math
   Score Trends
Frequently Asked Questions
Track My Progress Educator Support
References


Introduction

Track My Progress is an online assessment that provides instant guidance on your students' progress against the Common Core State Standards. Track My Progress provides Reading and Math assessments that can be given four times a year to guide your instruction and decision-making. This Technical Guide will help you understand the guiding purpose of Track My Progress as well as its technical underpinnings.

Track My Progress is designed with an understanding of how important instructional time is for your students: it generates valuable insights without disrupting your time with your students. The adoption of the Common Core State Standards provides the opportunity to develop a new kind of assessment, one that evaluates students' deeper understanding of fundamental concepts while offering an engaging format with innovative question types. As educators, we developed Track My Progress to meet the needs of today's demanding classroom environments. Our goal is to provide extremely useful progress data on your students while preserving your instructional time. We designed Track My Progress to be easy to use and simple to understand so that the assessment process does not overwhelm your instructional time.

Track My Progress measures the depth and rigor of the Common Core while challenging students' critical thinking and problem-solving skills. It uses computer adaptive test technology to test students at their level in less time than with paper and pencil. Track My Progress provides 20-minute tests for Reading and Math four times per year. It engages students with a variety of innovative test question formats that go beyond multiple choice to assess the deeper level of understanding that the Common Core requires. Every test question is designed to measure a specific Common Core standard, allowing you to confidently measure what you are teaching.

As soon as students complete a Track My Progress test, you can view data for each Common Core subject and domain. The assessment measures progress on a nationally normed scale. It offers diagnostic reporting of strengths and weaknesses for each student or for the entire class or grade.


Track My Progress organizes scores into four performance categories: at-risk, borderline, on grade level, and above grade level. These color categories allow you to easily see the various needs of your students. The assessment offers a thorough view of each student's test experience, enabling you to base instruction on specific needs. Track My Progress allows you to:

• See a list of a student's test questions, organized by the Common Core State Standard they measure.

• See whether a student answered each test question correctly or incorrectly.

• See the time it took a student to complete each test question.

• See whether a student skipped a test question.

• Explore further to see the exact questions and student answers.

Track My Progress helps identify the Common Core strengths and weaknesses for students in Reading and Math. The reporting allows you to identify each student's instructional level to better differentiate instruction and interventions. The reporting also provides guidance on forming instructional groups for each Common Core subject and domain. By assessing students over the course of the school year you can evaluate the progress of each student. You can see which students are making expected progress and which students are falling behind. This progress data can be used to inform decisions about the effectiveness of interventions and core instruction.

The Track My Progress early learner interface uses audio directions, large interactive buttons, and non-distracting graphics to enable group administration. Educators do not need to sit with students individually for each assessment. Track My Progress helps you to measure student learning accurately.


The Assessment Process

Four Test Windows

Track My Progress assessments are organized around four test windows each year. Students take one reading test and one math test in each test window. Students should test in the same month of each test window. For example, if you test in August for the fall test window, then it would be best to test in November, February, and May, as that would put you on a test schedule of the first month in each test window. This will ensure that you have equal time intervals between each set of math and reading tests.

Table 1: Track My Progress Test Windows

Students should test within the same two-week period during the test window. If one student tests at the beginning of a test window and another student tests at the end of the test window, for example, the scores will not be comparable. The summer test window can either be used to evaluate the progress of summer school students, or with careful planning can be used as a fourth test window during the school year. To test four times a year, a school could test in August, November, February and May (the first month of each test window). If a school starts the year in September the test months could be September, December, March and June (the second month of each test window).

Computer Adaptive Testing

Track My Progress is a computer adaptive test. This means that the difficulty of the test questions adapts to the learning zone of the student. If a student is above grade level and answers initial questions correctly, the test will become more challenging as harder test questions are selected. If a student is below grade level, poor performance on initial questions will lead the test to provide easier test questions.
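The step-up/step-down behavior can be pictured with a minimal sketch in Python. This is a deliberate simplification with made-up numbers; the actual engine selects items with a Rasch-based algorithm, described later in this guide.

    # Minimal illustration of adaptive difficulty (a simplification; the
    # production engine uses Rasch-based item selection, described later).
    def next_difficulty(current, answered_correctly, step):
        """Step difficulty up after a correct answer, down after an incorrect one."""
        return current + step if answered_correctly else current - step

    difficulty, step = 0.0, 1.0
    for was_correct in [True, True, False, True, False]:
        difficulty = next_difficulty(difficulty, was_correct, step)
        step = max(step * 0.7, 0.2)  # narrow the range as the test homes in
        print(f"next item difficulty: {difficulty:+.2f}")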


Figure 1: Computer Adaptive Test Progression

The graph above simulates a reading test for a grade 1 student. Correct answers are in blue. Incorrect answers are in red. The difficulty of the next test question increases or decreases based on whether the previous answer was correct or incorrect. Initially, there may be large changes in test question difficulty. Towards the end of the test the range in difficulty narrows as the test homes in on the student's learning zone. Computer adaptive testing offers several benefits:

• Fewer test questions are required, making for a shorter test.

• Students are less frustrated by fewer above-level questions.

• Students are not bored by too many below-level questions.

• Teachers receive relevant feedback from more meaningful test questions for each student.


Test Item Types

The Track My Progress Reading test offers ten different item types. These item types are designed to challenge students' thinking and, in some cases, probe for a deeper or more rigorous level of understanding. This provides a richer assessment experience than can be offered using only multiple-choice test items.

• Multiple Choice: Student selects the correct answer from 2 or 4 answer choices.

• Fill in the Blank: Student selects an answer choice to complete the sentence.

• Select a Word(s): Student selects a word (or words) from a passage by clicking to highlight the word(s).

• Match: Student drags four pictures/words to match four target pictures/words.

• Multiple Solutions: Student selects all correct answers from 6 choices.

• Maze: Student selects one of three words to fill in a blank. Every seventh word of the passage has a blank that the student needs to complete.

• Fluency Passage: Student reads a timed passage, then answers a multiple-choice question.

• Select a Sentence(s): Student selects 1-3 sentences from a passage by clicking to highlight the sentence(s).

• Passage with Multiple Choice: Student reads a passage and then selects an answer from four choices.

• Passage with Multiple Solutions: Student reads a passage and then selects 2-5 answers from six choices.

Table 2: Track My Progress Reading Test Item Types

The math test offers nine different test item types. These test items are designed to challenge the student to demonstrate a deeper level of understanding than can be shown with only multiple-choice test items. In addition, some math test items use on-screen tools such as a ruler or protractor. The student must drag the tool into position in order to answer the question.


• Multiple Choice: Student selects the correct answer from 2 or 4 answer choices.

• Fill in the Box: Student selects an answer choice to complete the number sentence.

• Distribute: Student distributes objects by clicking and dragging.

• Match: Student drags four pictures/numbers to match four target pictures/numbers.

• Multiple Solutions: Student selects all correct answers from 6 choices.

• Open Response: Student uses a number keypad to enter the answer.

• Fluency: Student solves three number fact problems under a time constraint.

• Plot a Point: Student plots a point on a grid or number line.

• Graph a Line: Student plots a line on a grid by clicking and dragging to create the line.

Table 3: Track My Progress Math Test Item Types


Using Track My Progress Assessment Data

Appropriate uses of Track My Progress assessment data

Track My Progress is designed to address several specific needs. The data from the assessment lets you know how each student is doing, and how your class is doing as a whole. However, it is important to keep in mind that no single data point should be the sole indicator of student progress. Track My Progress provides several data points over the course of the school year. A comprehensive data-informed instructional program considers many sources of data and does not support making significant educational decisions based on an isolated data point. When evaluating student progress there are many sources of data available to us. Quiz scores, homework quality, attendance, motivation, and class participation are just a few examples of important sources of data.

Let's look at an example from Track My Progress data. In the graph below you see the reading test of a student tested in the spring of third grade. She has a scale score of 615, which is at the 19th percentile, and based on this school's Response to Intervention protocol indicates she is at-risk (i.e., below the 20th percentile). Fortunately, this is not the only test score this school has for the student. The graph also includes other data points and provides a more complete picture of how this student is progressing.

Figure 2: Student Progress Over Five Test Events

The student's score of 640 in the fall of third grade is at the 40th percentile. The student's score of 706 in the fall of grade four is at the 39th percentile. This student is making expected progress in reading and appears to have done unusually poorly on one out of five tests.


Sometimes we do not have multiple data points over a period of years and we must respond to a student's needs and decide whether additional or differentiated instruction is required. However, we always have additional data points when we consider sources like attendance, class participation, homework quality, and class quiz scores. The principle is to always corroborate data with other sources and, when possible, with progress-over-time data, to get a fuller picture of student progress.

It is also vital to keep in mind that a standardized test score is not a measure of a student, or even of a student's ability. A standardized test score is simply a quantification of the student's representation of their learning at a given time in a specific context. The test score indicates how the student did on that test, not who that student is as a learner.

Scale Scores

The Track My Progress scale score shows the level of Common Core State Standards knowledge and skill in reading and mathematics. Track My Progress scores are based on U.S. national averages and they are expressed on a common numeric scale across grade levels and subjects. The Track My Progress scale uses standardized measurement units to track student progress. Each unit on the scale represents the same amount of progress regardless of where the student is located on the scale. For example, a student whose math score increases from 540 to 560 makes the same amount of progress as a student whose reading score increases from 500 to 520.

Each student's Track My Progress score is based on individually selected questions that specifically measure the student's learning. Track My Progress weights the number of correct answers for each student by the difficulty of the questions. This provides a common scale for all students, subjects, and grade levels. The Track My Progress scale for K-5 ranges from approximately 250 to over 950. The table below shows the average scale score for Reading and Math for each grade level.


Table 4: Track My Progress End of Year Average Scale Scores by Grade Level

Percentile Scores

Percentiles allow students to be ranked among peers who completed the test during the same testing window of the year. These rankings are based on the data collected during the norming period from a nationally representative US sample. The percentiles range from 1-99 and can give a teacher, administrator, or anyone involved in the student's education an idea of how well the student is learning. For example, a student ranked in the 70th percentile would be considered above average, while a student ranked in the 20th percentile is below average. A percentile is not a measure of how many questions the student answered correctly. If a student is in the 30th percentile it means they did better than 29% of the students who took the test during the same month in the norming period.
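As a sketch in Python, a percentile rank under this definition could be computed against a norming sample like this. The sample scores and function are illustrative, not the actual Track My Progress norm tables, which are larger and organized by test window:

    # Illustrative percentile-rank computation against a hypothetical
    # norming sample (not the published Track My Progress norms).
    def percentile_rank(score, norming_scores):
        """Percent of norming-sample scores that the given score beats."""
        below = sum(1 for s in norming_scores if s < score)
        return max(1, min(99, round(100 * below / len(norming_scores))))

    norming_sample = [540, 560, 575, 590, 600, 615, 630, 650, 670, 700]
    print(percentile_rank(616, norming_sample))  # beats 6 of 10 scores -> 60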

Color Categories

The color of each progress bar on Track My Progress graphs is based on the percentile of the score. For example, if a student's score is red it means that the score is below the 26th percentile. If a student's score is green it is above the 60th percentile. The percentile scores are updated for each test window. The school administrator can customize these settings based on the school or district policies for cut scores. For example, if a school uses the 20th percentile to define at-risk for universal screening, the school administrator can make this adjustment in Track My Progress so that all student scores at the 20th percentile or below will appear in red.
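The mapping from percentile to color can be sketched as follows. The default cut points mirror the examples in this section (red below the 26th percentile, green above the 60th); the function and dictionary are illustrative, and in practice an administrator changes the values through settings, not code:

    # Mapping a percentile to a color category with adjustable cut scores
    # (illustrative defaults drawn from the examples in this guide).
    DEFAULT_CUTS = {"at_risk": 25, "borderline": 40, "on_grade_level": 60}

    def color_category(percentile, cuts=DEFAULT_CUTS):
        if percentile <= cuts["at_risk"]:
            return "red (at-risk)"
        if percentile <= cuts["borderline"]:
            return "yellow (borderline)"
        if percentile <= cuts["on_grade_level"]:
            return "blue (on grade level)"
        return "green (above grade level)"

    print(color_category(19))  # red (at-risk)
    print(color_category(55))  # blue (on grade level)
    print(color_category(22, {**DEFAULT_CUTS, "at_risk": 20}))  # yellow under a 20th-percentile policy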


Table 5: Track My Progress Default Cut Scores for Color Categories

Using Data to Guide Learning

As soon as a student has completed an assessment you can view data instantly by signing in to your Track My Progress account. You will be able to guide your decision-making by answering important questions:

• Which of my students are at-risk for school failure (universal screening)?

• What are my instructional groups?

• How can I differentiate my instruction for each student (Diagnostic & Test Question Analysis)?

• Are my students making expected progress?

Universal Screening

Upon signing in to your account you will see your students' scale scores for Common Core math. You can use the buttons in the left column to switch to Reading. You can also use these buttons to select specific domains of the Common Core. In the screen below you can see the 23 students for this grade three class, sorted by scale score. The three students in red at the bottom of the screen have been identified as at-risk for school failure in math. These students are below the cut score that defines a score as at-risk. The three students in yellow have scores that have been identified as borderline. The scores in blue are on grade level and the students in green are above grade level. The cut scores for these categories are defined by your school administrator.

Universal Screening is typically the first step in an educator's use of data in a school-wide multi-level instructional system for preventing school failure. Universal Screening is the name for the process many schools use to identify students who may be struggling and need additional instruction, practice, or support. The word 'universal' indicates that all students in the school will participate in this process. The word 'screening' suggests that this process will be quick and efficient and will not compromise instructional time. A lengthy, complex, or in-depth assessment should not serve as a "screener" because it would unnecessarily take away from core instructional time. Therefore, universal screening is a process schools use to efficiently identify students early in the school year who may need additional attention. The goal is to identify struggling students early in the school year and early in their academic career so help can be provided to prevent school failure. Identifying at-risk students during their first two years in school, and providing them with additional support before they have significant academic problems, increases their chances of establishing and maintaining appropriate levels of academic progress.

Track My Progress provides a number of advantages as a universal screener compared to more traditional pencil and paper assessments:

• Aligned to the Common Core State Standards

• Multi-skill assessment

• Nationally normed

• Adjustable cut scores

• Four-times-a-year administration

• Easy and efficient to administer

• Provides an overall score as well as Common Core domain scores

• No scoring by hand or data entry required

Defining risk

Track My Progress reports universal screening results by way of the color of each student's score. Red scores are categorized as at-risk, yellow as borderline, blue as on grade level, and green as above grade level. The cut scores that define these color categories can be changed from a School Administrator account to suit the needs of your school or district. The color of each progress bar is based on the percentile of the score. For example, if a student's score is red it means that the score is below the 26th percentile. If a student's score is green it is above the 60th percentile. The percentile scores are updated for each test window.

The graph below shows a typical Track My Progress distribution of scores. Most of the scores are in the grade level (blue) and above grade level (green) range. However, there are three borderline scores (yellow) and three at-risk scores (red). Therefore, the results of the fall universal screening for this class have identified three students as at-risk (at the 25th percentile or below) and three students as borderline (between the 26th and 40th percentiles).


Figure 3: Track My Progress Groups Report

It is possible that the default cut scores in Track My Progress may identify more or fewer than 20 percent of your students, depending on the demographics of your particular school. In this situation, you might consider choosing a cut score that reflects the performance abilities of students enrolled in your district or school. Choosing a more stringent cut score, such as the 40th percentile for at-risk, will result in more students appearing with red scores. Choosing a less stringent cut score, such as the 10th percentile for at-risk, will result in fewer students appearing with red scores. The key is using cut scores that make sense for your school or district. A definition of at-risk that identifies half of the students in red is unsustainable because schools likely do not have the resources to provide additional instruction to half the students at the school. Similarly, when the Track My Progress results only return students in the blue and green color categories it likely means that the cut scores are not stringent enough and there are students who are falling behind their peer group but have not been identified by the universal screening process.


When do interventions begin?

How we respond to a student whose Track My Progress score is identified as "at-risk" or "borderline" by the universal screening process varies by state and district. Some states or districts have prescribed procedures to follow once a student has been identified during the universal screening process. Others take a more flexible approach and respond to each student on a case-by-case basis. The Track My Progress philosophy is to answer data questions with multiple data points whenever possible.

Some districts use a "direct approach" where an at-risk score on the universal screening requires that interventions start right away and the student is assigned to a different instructional tier based solely on the one universal screening result. The rationale for this strategy is that at-risk students should not be delayed in receiving interventions due to further observation and progress monitoring. A limitation of this method is that some students may be falsely identified as at-risk during the universal screening and unnecessarily participate in an intervention. Other districts use progress monitoring as a second form of assessment to confirm the universal screening finding. In the progress-monitoring method, all students with at-risk universal screening scores are monitored for an additional amount of time before they are assigned to a different instructional tier.

Instructional Groups

These same categories (at-risk, borderline, on grade level, and above grade level) can be used to guide your decision-making in forming instructional groups. For small group instruction you can organize your groups based on the color of their scale score. These scores and groupings will be different for each Common Core subject and domain. If you are planning an instructional block for Geometry you can click that button in the left column to see the scores and groups for the Geometry domain. The colors are guideposts for your groupings. In the example in Figure 3, above, you may decide to group the red and yellow students together given the proximity of their scores. Or you may decide to group the students with green scores in two separate groups.

Differentiating Instruction

There are two data analysis techniques to apply to Track My Progress data in order to differentiate instruction for your students. The first is using the diagnostic feature of Track My Progress to see your students' overall strengths and weaknesses. The second is test question analysis, which provides you with a very concrete view of what your students do or do not understand for each Common Core domain.


Diagnostic

Click the Diagnostic button at the top left of your screen, below the Track My Progress logo. This changes your data view from instructional groupings and universal screening to a diagnostic view of student strengths and weaknesses. The default view represents the overall strengths and weaknesses for your entire class, defined by Common Core subjects and domains. Generally, for medium to large classes, you will not see major differences between the Math domains or between the Reading domains like you might for an individual student. If you do see a domain for your overall class that is markedly lower than the other domains for that subject, this indicates a specific weakness for your class relative to the national average. This may indicate a gap in curriculum for the current or previous years.

You can see the diagnostic strengths and weaknesses for an individual student by clicking their name in the list in the left column. Here you will find more variability between domain scores than you probably saw for the entire class. Some students may have relatively balanced scores that do not indicate relative strengths or weaknesses. With other students you are likely to see some very specific relative weaknesses and strengths. The diagnostic scores represented in Figure 4 indicate a profile that is generally on grade level. However, the overall math score is borderline. The diagnostic profile reveals that Common Core Measurement is the specific domain that may be undermining overall math performance, with Common Core Base Ten as a secondary weakness. Differentiating instruction for this student could focus on supplemental instruction in Measurement and Base Ten concepts.

Figure 4: Track My Progress Diagnostic Graph


Test Question Analysis

Track My Progress is a transparent assessment, which means that you can drill into your data to see the test questions your students worked on, including the answers that they provided. This bridges the gap between abstract scale scores and the concrete reality of the kinds of problems your students can and cannot handle. To see more information on the specific domain performance of a student follow these steps:

1. When viewing the diagnostic graph for a student, click the lowest domain score.

2. This will bring you to a vertical bar graph that shows the student's performance in this domain for all available test dates.

3. Click on the most recent vertical bar to see a list of test questions the student saw for the domain in question.

Figure 5: Track My Progress Test Item Report

Figure 5 represents the five Measurement and Data questions the student was tested on. The table indicates the standard for each test question as well as the difficulty level, the time to complete the question, and whether the student answered the question correctly, answered it incorrectly, or skipped it.

1. Place your cursor on the standard to see a definition of what the test question is designed to measure.

2. Click on the row to see the actual test question.

The test question view allows you to see exactly what concepts or problems did and did not challenge the student. You can click See Student Answer to see how the student answered the question. This gives you a window into their thinking and the nature of their misunderstanding of the concept. You can use the blue arrow at the top center of the screen to page through each of the questions to get a better sense of what level is best for instructing this student in this domain.


Figure 6: Track My Progress Test Question View

Expected Progress

Track My Progress is a nationally normed assessment that allows you to track your students' progress relative to the national average. The table below shows the average scale score for Reading and Math for each grade level at the end of the school year. The table also shows the average or expected yearly progress and weekly progress for each grade level. You can compare your students' progress to the national average to determine if a student is on track, falling behind, or closing the gap.

Table 6: Track My Progress Expected Progress by Grade Level
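As a sketch, comparing a student's observed growth to the expected weekly rate is simple arithmetic. The expected rate and the 25% bands below are illustrative placeholders, not published norms; in practice the rate comes from Table 6:

    # Comparing observed weekly growth to the expected (national average)
    # weekly rate. The rate and bands are illustrative, not published norms.
    def progress_status(score_then, score_now, weeks, expected_weekly_rate):
        observed = (score_now - score_then) / weeks
        if observed >= expected_weekly_rate * 1.25:
            return f"closing the gap ({observed:.2f} points/week)"
        if observed >= expected_weekly_rate * 0.75:
            return f"on track ({observed:.2f} points/week)"
        return f"falling behind ({observed:.2f} points/week)"

    # Fall score 615, winter score 640, 12 weeks apart, expected 1.5 points/week:
    print(progress_status(615, 640, 12, 1.5))  # closing the gap (2.08 points/week)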


Goal Setting

Track My Progress can be used for educational goal setting for your students. The table below displays the average yearly progress and the average weekly progress for Math and Reading tests for each grade level. Typically an educational goal will be set for a student who is behind her peer group. The strategy is to select a goal that will enable the student to close the gap with her peers. If a below-grade-level student only makes average yearly progress she will continue to be behind her peers and will be at increased risk for school failure with each passing year. Also displayed in the table below are the "accelerated weekly progress*" and the "very accelerated weekly progress*" values for Math and Reading for each grade level. These values can help you set goals for your students that aim at closing the gap with their peer group.

Table 7: Track My Progress Expected Rates of Progress for Goal Setting

The first question that commonly arises during the goal setting process is, "How ambitious a goal should I set for my student?" The answer depends on a number of factors that you will need to evaluate to choose an appropriate goal for your student:

• What has been the student's learning rate prior to the proposed intervention?

• What is the student's motivation to learn?

• How intensive is the intervention?

• Is the intervention large group or small group?

• How have other students responded to this intervention?


To set a goal for your student, follow these steps (a worked example appears after the list):

1. Choose the subject or domain for which you would like to set a goal. Typically this will be the area of intervention and the student's lowest Track My Progress domain score. For example, for a student who has a Foundational Reading score in red and is about to begin using a computer-based phonics program, the Foundational Reading domain should be used for goal setting. Alternatively, the subject (Reading or Math) can be used if it does not make sense to select a domain.

2. Identify the student's baseline score from the most recent test.

3. Determine how ambitious a goal is appropriate for this student. For example, average weekly progress, accelerated weekly progress, or very accelerated weekly progress can be selected from the table above. This decision will provide you with the weekly growth rate to use.

4. Determine the number of weeks between the most recent test and the next scheduled test. Multiply the number of weeks by the weekly growth rate from step 3. Add this number to the baseline score (step 2).
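Steps 2 through 4 reduce to one line of arithmetic. The numbers below are illustrative; the weekly rate would come from Table 7 for the chosen level of ambition:

    # Goal-setting steps 2-4 as arithmetic (illustrative numbers; the weekly
    # rate comes from Table 7 for the chosen level of ambition).
    baseline_score = 615      # step 2: most recent test score
    weekly_rate = 2.0         # step 3: e.g., "accelerated weekly progress"
    weeks_to_next_test = 12   # step 4: weeks until the next scheduled test

    goal_score = baseline_score + weeks_to_next_test * weekly_rate
    print(f"goal for the next test window: {goal_score:.0f}")  # 615 + 12 * 2.0 = 639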

It is important to keep in mind the principle of using multiple data points in educational decision-making. This means continuing to track the progress of your student through multiple test windows or supplementing your assessment process with more frequent progress monitoring.


Validity and Reliability of Track My Progress

Track My Progress aims to support instruction and learning in mathematics and reading by combining best practice in educational assessment, as defined by the American Educational Research Association (2011), with the latest insights of domain experts in education, psychometrics, developmental psychology, and cognitive science. Track My Progress uses computer adaptive testing (CAT), which dynamically selects questions targeted to each student's level of knowledge and understanding of the material (for overviews see, e.g., Lange, 2007; Wainer, et al., 2000). The decrease in testing time gained by computer adaptive testing allows teachers to devote more time to the prime motivator of learning: the instructional time they spend with their students. Students' answers are stored, so they can be tracked, even across school years, and detailed diagnostic reports are provided to highlight students' strengths as well as their weaknesses.

Other state-of-the-art features include the use of innovative test items that challenge students to demonstrate a deeper level of understanding and provide a richer assessment experience. Track My Progress test items provide students with the opportunity to earn partial credit on certain test questions. Several test item types ask students to select multiple correct answer choices. If a student correctly selects some but not all correct answers, partial credit is recorded. If a student selects all correct answers and some incorrect answers, partial credit is also recorded.
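One plausible partial-credit rule for such multiple-solution items can be sketched as follows. This scoring function is an illustration only; the actual Track My Progress rubric is not specified in this guide:

    # A plausible partial-credit rule for multiple-solution items (an
    # illustration; the actual Track My Progress rubric is not published here).
    def partial_credit(selected, correct):
        """Fraction of correct choices found, penalized for incorrect picks."""
        selected, correct = set(selected), set(correct)
        hits = len(selected & correct)
        false_picks = len(selected - correct)
        return max(0.0, min(1.0, (hits - false_picks) / len(correct)))

    print(partial_credit({"A", "C"}, {"A", "C", "E"}))            # some but not all correct -> 2/3
    print(partial_credit({"A", "B", "C", "E"}, {"A", "C", "E"}))  # all correct plus 1 wrong -> 2/3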

Item Pool Development

Track My Progress test items are designed to measure progress against the Common Core State Standards for mathematics and reading. From concept through design, we relied on the expertise of teachers, as well as master's- and doctoral-level domain consultants, to create large item banks with thousands of items covering each CCSS standard from kindergarten through grade eight. Items at each level of difficulty are available (e.g., very easy, easy, medium, and hard) so that each student's strengths and weaknesses can be assessed accurately.

Item Types

The Common Core State Standards are intended to support the development and implementation of high-quality curricula and instruction. These standards draw on a wide variety of sources, including state departments of education, scholars, assessment developers, professional organizations, and educators from kindergarten through college. Notably, new academic skills have been introduced, like fluency in reading, and new advanced processes and proficiencies within mathematics. For instance, the kindergarten curriculum now includes algebraic thinking, measurement and data, and numeric operations in base ten, while grade 8 is now expected to cover statistics and probability.

Assessing the CCSS standards requires an approach that goes beyond the standard multiple-choice format, and Track My Progress uses a variety of item types that challenge students beyond the more superficial responses required by multiple choice alone. In addition, students are given computer-based virtual tools to use in solving problems. For example, some items require that students use an onscreen ruler or protractor to measure lengths and angles. These item types and online tools were prototyped and then subjected to usability testing with students. This was followed by a teacher review process that allowed teachers to try these test items for themselves and provide feedback and suggestions. The test item types were then piloted in a yearlong national field test followed by a collection of educator feedback. At each step of the test item type development process, student and educator feedback was incorporated and the necessary changes were made.

The Computer Adaptive Test Engine

The Track My Progress CAT approach does away with the assumption that all students learn the same way or are at the same general level of learning. Instead, the different areas and sub-areas in reading and math are assessed according to students' particular strengths and weaknesses. The major steps in this process are shown in Figure 7. By targeting the questions to individual students, testing efficiency and diagnostic precision are improved, as fewer questions are needed to obtain valid and reliable scores. Also, by omitting questions that are either too hard or too easy, student motivation is enhanced because students experience testing as a meaningful activity.


Figure 7: An Overview of the Track My Progress CAT Engine

The Track My Progress CAT engine is a direct implementation of the Rasch estimation algorithms described in Lange (2007) and Linacre (1998), which allow arbitrary mixtures of binary (right vs. wrong) and partial credit items with multiple scoring levels. Given estimates of items' parameters, persons' overall locations are estimated from their answers using a Joint Maximum Likelihood Estimation (JMLE) approach, similar to that used in the Winsteps software (Linacre, 2013). In addition to students' scores on the sub-areas of math and reading, the CAT engine also computes the fit of students' responses to the Rasch model. A proprietary item selection method ensures that students do not encounter any items they have already answered on previous occasions. For piloting purposes, CAT runs can also include new pilot items that are not scored; instead, their results are stored for later analysis.

The Figure 7 flowchart proceeds as follows: use the student's previous Math or ELA estimate, or administer starter items; select the item from a database of calibrated mathematics and reading questions (with varying contents and difficulty levels) that will provide the most information about this particular student; administer the item and score the student's answer; estimate the student's current performance in Math or ELA; and, if the estimate is sufficiently precise and all sub-areas are covered, store the results in the tracking database, otherwise select another item.
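A stripped-down sketch of this loop, assuming the dichotomous Rasch model, is shown below. It is a simplification: the production engine also handles partial-credit items, runs JMLE jointly over persons and items, and applies a proprietary selection rule, none of which are reproduced here. The item bank, simulated student, and stopping rule are all invented for illustration.

    import math
    import random

    # Stripped-down CAT loop under the dichotomous Rasch model (a sketch,
    # not the production engine).
    def p_correct(theta, b):
        """Rasch probability of a correct answer (ability theta, difficulty b)."""
        return 1.0 / (1.0 + math.exp(-(theta - b)))

    def update_theta(theta, responses):
        """One Newton-Raphson step toward the maximum-likelihood ability."""
        expected = sum(p_correct(theta, b) for _, b in responses)
        info = sum(p_correct(theta, b) * (1.0 - p_correct(theta, b)) for _, b in responses)
        score = sum(x for x, _ in responses)
        theta += (score - expected) / max(info, 1e-6)
        return max(-4.0, min(4.0, theta))  # keep perfect/zero scores from diverging

    bank = [-2.0 + 0.25 * i for i in range(17)]  # calibrated item difficulties (logits)
    theta, responses, true_theta = 0.0, [], 0.8  # simulated student at +0.8 logits

    for _ in range(10):
        b = min(bank, key=lambda d: abs(d - theta))  # most informative: difficulty near theta
        bank.remove(b)                               # never re-administer an item
        x = 1 if random.random() < p_correct(true_theta, b) else 0
        responses.append((x, b))
        theta = update_theta(theta, responses)

    se = 1.0 / math.sqrt(sum(p_correct(theta, b) * (1.0 - p_correct(theta, b)) for _, b in responses))
    print(f"estimated ability: {theta:+.2f} logits (SE {se:.2f})")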


Pilot Testing

To be included in Track My Progress assessments, questions first must survive review by content experts. Next, rigorous pilot testing is performed in agreement with the requirements of the standards for educational and psychological testing provided by the American Educational Research Association (2011). Across math and reading, extensive pilot studies were performed using data from over 50,000 students in 9 states: California, Connecticut, Florida, Georgia, Illinois, Massachusetts, New York, Rhode Island, and Tennessee. Throughout these studies, Rasch measurement (see, e.g., Bond and Fox, 2007; Custer, Omar, and Pomplun, 2006; Jungnam, et al., 2009) was used to evaluate, calibrate, and select the test questions using Rasch scaling criteria. In particular, ambiguous questions or questions that failed to consistently reflect knowledge of or insight into mathematics or reading were identified and omitted. Items that produced unfair scores with respect to demographic subgroups of students were rejected.

As is recommended in the literature (e.g., Baumer et al., 2009), piloting used the traditional process (i.e., administering a fixed set of items), while a CAT version embedded varying sets of pilot items during testing. Also, using the approach recommended by Kolen and Brennan (2004), some students answered a few of the items in the next lower grade, thereby allowing a comparison between the two grades. Based on these overlapping items, a "vertical scale" was created that assesses students from Kindergarten through Grade 8 with the same 'yardstick.' The vertical scale makes it very easy to determine students' progress within a single grade, while also providing a baseline to assess progress as students move to the next grade.

Grade   Mathematics   Reading
1       0.92          0.90
2       0.91          0.91
3       0.94          0.89
4       0.96          0.88
5       0.96          0.87
6       0.95          0.97
7       0.97          1.03
8       0.98          1.02

Table 8: Median Standard Error of Estimated Person Scores (Logits) by Grade


Reliability

Score reliability is reflected in the potential variation in the Rasch estimates (SE) obtained for a single student (i.e., lower variation indicates greater reliability). Table 8 lists the median standard error across grades 1 through 8 for math and reading. The standard errors are expressed in terms of the theoretical units of the Rasch model (logits). Note that the values are quite similar, and when expressed as reliability coefficients all values exceed 0.90. These values are well above the critical values established in the literature. It can be seen that test scores tend to be slightly more reliable in the lower grades (smaller SE) than in the higher grades (larger SE). This trend is more pronounced for reading than for mathematics.
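The conversion from a standard error in logits to a reliability coefficient can be sketched with the usual Rasch person-separation formula, reliability = 1 − SEM²/SD². The person standard deviation below is a hypothetical value chosen for illustration; Table 8 supplies only the SEM values:

    # Converting a median SEM (logits) into a reliability coefficient via
    # the Rasch person-separation formula. The person SD is hypothetical.
    def rasch_reliability(sem, person_sd):
        return 1.0 - (sem ** 2) / (person_sd ** 2)

    # Grade 1 mathematics, SEM = 0.92 logits, assumed person SD = 3.0 logits:
    print(f"{rasch_reliability(0.92, 3.0):.2f}")  # 0.91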

Validity

The validity of Track My Progress tests derives from independent sources. Content validity was ensured by a panel of 15 teachers who reviewed each item to confirm whether it was aligned to the designated Common Core State Standard and whether it was appropriate for the intended grade level. Rejected items were revised to address the concerns of the panel. Test validity was addressed through the use of Rasch scaling (Bond and Fox, 2007; Lange, 2007). Valid measurement requires that all items form a single difficulty hierarchy. This was enforced through item selection and rewriting. Moreover, the item hierarchy was constructed to be invariant across subgroups, thereby ensuring a uniform score interpretation and an absence of bias.


Figure 8: Correlation Between Track My Progress Math and ISAT Across Grades 3 through 8, Based on 2011 Illinois Student Data (N=281)

An empirical validity test is provided for Mathematics by comparing some student scores to those obtained on another widely accepted mathematics test. In 2011 all students in Illinois in grades 3 through 8 were required by state law to take the ISAT test, which contains Pearson's nationally normed (and also Rasch scaled) SAT-10. Mathematics sub-scores on the ISAT were available for 218 students who also completed a pilot form of Track My Progress. Figure 8 shows a strong correlation between these two tests, as students who score high on one test also tend to score high on the other. The correlation with the ISAT (X-axis) scores was 0.83, which explains about 69% of the variance. In the figure all test results are expressed as z-scores. The high level of agreement with students' scores on a well-established test of mathematical achievement strongly supports the concurrent validity of the Track My Progress approach. During the development of the Track My Progress assessment no other Common Core assessments were available to provide a more precise correlation study. As other Common Core assessments are implemented across the country we will conduct additional test validity studies.

Reading vs. Mathematics

At the time of writing this report, 10,916 students across Kindergarten through Grade 8 had completed both our Mathematics and English Language Arts tests.


The correlation between the two scores was 0.87, a value that agrees with the literature. It is not possible to plot over 10,000 score pairs, and therefore the Rasch estimates of reading were divided into 8 equally spaced categories by rounding to the nearest logit value. Figure 9 plots students’ average mathematics estimates for each of these eight categories. It can be seen that the relation is essentially linear (r = 0.99), indicating that students’ progress in reading (X-axis) and math (Y-axis) are highly correlated.

Figure 9: The relation between Reading (X-axis) and Mathematics (Y-axis). The fitted regression line is y = 1.0373x − 0.5789 (R² = 0.988).
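The binning behind Figure 9 can be sketched with simulated data. The score generator below is invented purely to make the example run; only the rounding-and-averaging logic mirrors the procedure described in the text:

    import random

    # Sketch of the Figure 9 binning: round each reading estimate to the
    # nearest logit, then average the math estimates within each bin.
    # The simulated scores are invented for illustration.
    random.seed(1)
    pairs = []
    for _ in range(10_000):
        reading = random.gauss(0.0, 2.0)
        pairs.append((reading, 0.9 * reading + random.gauss(0.0, 1.0)))

    bins = {}
    for reading, math_est in pairs:
        bins.setdefault(round(reading), []).append(math_est)

    for logit in sorted(bins):
        mean_math = sum(bins[logit]) / len(bins[logit])
        print(f"reading bin {logit:+d} logits: mean math {mean_math:+.2f} ({len(bins[logit])} students)")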

National Norms: Math

National norms were computed for Track My Progress math in Grades 4 and 8 based on a three-step linking process.

1. Track My Progress vs. New York Mathematics Assessment: In 2011, a sample of 279 students completed the New York Mathematics Assessment as well as our Mathematics test. Convergent validity is strongly supported, as these two sets of test scores show a correlation of 0.78. Using equipercentile equating, Track My Progress mathematics scores can thus be expressed on the same scale as the New York Mathematics Assessment.



2. New York Mathematics Assessment vs. National Assessment of Educational Progress: New York's mathematics performance in grades 4 and 8 on the latest NAEP test (given in 2009) can be found via NAEP's Data Explorer at http://nces.ed.gov/nationsreportcard/naepdata/. For each of the 50 states, this site provides means as well as the 10th, 25th, 50th, 75th, and 90th score percentiles. Assuming normality, New York's test results could thus be transformed into approximate NAEP scores.

Figure 10: Linear regression of NAEP on Track My Progress math

3. Track My Progress vs. NAEP: The two linking steps described above were chained to derive a direct link between Track My Progress and NAEP mathematics in Grades 4 and 8, respectively. Figure 10 shows six points in the estimated NAEP and Track My Progress mathematics distributions in New York: the mean (M), P10, P25, P50, P75, and P90. Note that the least-squares regression lines fit very well, explaining over 99% of the variation among these statistics of the two tests. Also, the high end of the Grade 4 regression line nearly parallels the lower part of the line for Grade 8, so the same basic linear relation is carried forward from grade 4 to grade 8. Given this close relation, Track My Progress national percentiles could therefore be estimated from the NAEP results.


The national percentiles for grades other than 4 and 8 were obtained by linear interpolation.
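The two mechanical pieces of this linking chain, equipercentile equating (step 1) and linear interpolation between the anchored grades, can be sketched as follows. All score values are invented for illustration:

    # Sketch of the linking chain's two mechanical pieces. All numbers are
    # invented for illustration.
    def equipercentile(score, from_scores, to_scores):
        """Map a score to the value at the same percentile of another test."""
        from_sorted, to_sorted = sorted(from_scores), sorted(to_scores)
        pct = sum(1 for s in from_sorted if s <= score) / len(from_sorted)
        idx = min(int(pct * len(to_sorted)), len(to_sorted) - 1)
        return to_sorted[idx]

    def interpolate_norm(grade, norm_grade4, norm_grade8):
        """Fill in a norm for a grade between the anchored grades 4 and 8."""
        return norm_grade4 + (norm_grade8 - norm_grade4) * (grade - 4) / (8 - 4)

    tmp_scores = [600, 620, 650, 660, 700, 720, 740, 760, 800, 830]  # Track My Progress (invented)
    ny_scores = [640, 655, 662, 671, 684, 690, 701, 715, 722, 738]   # NY assessment (invented)
    print(equipercentile(700, tmp_scores, ny_scores))  # NY-scale score at the same percentile: 690
    print(interpolate_norm(6, 240.0, 280.0))           # grade 6 norm halfway between anchors: 260.0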

National Norms: Reading

National norms for reading were derived based on Figure 9, above, which revealed a close relation between reading and math. As a first approximation, students' scores on these two tests were equated, thereby providing a solid basis for estimating national percentiles in reading.

Figure 11: Average Performance in Mathematics and Reading

Score Trends

Figure 11 shows students' average scale scores across Grades 1 through 6 in Mathematics (solid line) and English Language Arts (dashed line). It can be seen that the scores show similar trends, with students' reading scores being slightly higher in the lower grades than their mathematics scores. Also, the lines level off somewhat in the higher grades, indicating that students gain slightly less as they progress.


Frequently Asked Questions

One of the best readers in my class scored much lower than my other students. What should I do next?

The first step is to go through your student's test, question by question, to see what kinds of mistakes the student made. You can use the forward arrow at the top of the first test question to page through each test question. You can ask these questions of the data:

• Did the student skip many test questions?

• Did the student answer some questions in a very short amount of time relative to the complexity of the task?

• Did the student make mistakes that really surprise you given what you know of the student's skills?

• Did the student have difficulty with a particular domain or set of standards that brought down her overall score?

Probably the surest method of unraveling the mystery of an unexpected test score is to sit down with the student and ask her to answer each question as you watch. You can ask her to explain why she chose the answer that she did. You can review with her the answer choice she provided during the actual test and ask her to explain her thinking. In general there are four common causes of unexpectedly low test scores:

1. The student was not engaged in the test, did not try her best, or wanted to be finished very quickly.

2. The test uncovered a specific domain or area of weakness that brought down the overall score.

3. The student struggled with the computer interface and made mistakes related to poor computer skills, not to poor math or reading skills.

4. There was a technical problem during the test and the test questions did not display or function properly.

All of the students in my class are in the red. Why don't I see any instructional groups in different colors?

This commonly occurs when teaching a class of students who are well below the national average. What is most important in your use of Track My Progress is your ability to see your most needy students in red (the students who are falling behind their peers in the class) and, if possible, your instructional groups organized by levels of skill and readiness. To address this issue your school administrator needs to modify the cut scores that define the color categories in Track My Progress.


What do you recommend if I find that one of my students has skipped a lot of the test questions?

The finding from a test where a student skipped a lot of test questions or guessed on many questions (i.e., spent very little time on each question) is that the student was not engaged in the test process. Track My Progress is a computer adaptive test, which means that each time the student skips a test question or answers incorrectly, the next question is easier. If the student skips or guesses incorrectly on multiple test questions, the remaining test questions will become very easy relative to the student's grade level. You can also review the actual test questions to confirm that the student is skipping test questions that are at or below her skill level.

What this means is that the student is not engaged in the test process and possibly not engaged in her overall learning process and goals. If this is the case, look to interventions designed to address engagement and motivation issues. Help the student develop personal goals related to her learning in school. Help her understand how assessments help you understand better how to help her reach her goals.

I created goals for my students using the Track My Progress process for goal setting. But one of my students finished the school year below the goal I set for him and below where he started the school year. How is this possible?

There are several important concepts to keep in mind when setting goals for students and using data for educational decision-making. The first is the concept of using multiple data points in your decision-making. It is possible that your student had a very poor academic year and did not make expected progress. If this were the case you would have seen it in the two or three sets of Track My Progress scores over the course of the school year: you would have seen 'flat' or declining scores. This is why Track My Progress is designed to allow for four test windows a year, to avoid over-reliance on a single test score for educational decisions. If, on the other hand, this student showed a steady progression through the fall, winter, and spring test windows and then had a very poor test in the summer window, it likely means that this was a 'bad day' for the student and he was not able to show his best work. Here, you need to look to other sources of data (quiz scores, test scores, homework quality, attendance, attitude, etc.) to confirm that he is on track, or perhaps discover that he is not and that he should be followed more closely at the start of the next school year.

Are test scores reliable for kindergarten students who have very little computer experience? This is an important question and one you can answer by carefully watching your students during their Track My Progress test sessions.

• Do you see your students struggling to drag answers to their intended location?

• Do you find you are constantly moving around the computer lab helping students and showing them what to do?

• Are students more interested in exploring the computer-mouse interface than in answering questions and showing what they can do?

Some schools use the fall and sometimes winter test events for kindergarten students purely as an introduction to computer test taking; in other words, they do not use the data for educational decision making. Other schools will not test kindergarten students in the first or second test windows of the year but will work on computer skills instead. Your strategy for using Track My Progress will depend on your specific students and their familiarity with computers. Some educators find it helpful to compare computer-based assessments for young students to other forms of assessment. For example, we have seen kindergarten students early in the school year who are completely overwhelmed by a one-on-one reading screening with an adult they have only just met. They are not familiar with interacting with adults this way and do not have a conception of what the assessment means. We suggest thinking of the computer-based assessment in a similar way: when a student is placed in a new context for an assessment (whether it is computer-based, one-on-one, or pencil and paper), we have to interpret any resulting scores or findings with this in mind.

Why doesn't Track My Progress provide a score for every standard? Track My Progress provides scores at the 'subject' and 'domain' levels. This means you will find reading and math scores for students at the subject level, and at the domain level you will find scores for the different 'strands' or 'domains' within each subject. Track My Progress is focused on providing an efficient assessment that does not monopolize instructional time and does not overwhelm or discourage students. To that end, we do not assess students on every Common Core standard in every test session. The math test delivers 25 test questions, the reading test 20. To provide an accurate and reliable score on each standard for each student, we would need to test each standard 3-5 times, which would make for a lengthy and possibly discouraging test experience for students.

The Track My Progress focus on subjects and domains provides a high-level view of student progress without compromising instructional time or student and staff morale. The subject and domain scores offer sufficient detail for pinpointing strengths and weaknesses and gauging progress over time, and the drill-down view of performance on specific test questions allows for the question-level analysis that completes the picture for a teacher moving forward with their students.
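As a back-of-the-envelope illustration of the trade-off, consider what per-standard scoring would do to test length. The standard count below is an assumed figure for illustration, not a claim about any particular grade:

```python
# Back-of-the-envelope: why per-standard scores would inflate test length.
standards_per_grade = 25        # assumed, illustrative figure
items_per_standard = (3, 5)     # items needed for a reliable per-standard score
delivered_items = 25            # actual Track My Progress math test length

low = standards_per_grade * items_per_standard[0]
high = standards_per_grade * items_per_standard[1]
print(f"Per-standard scoring would need {low}-{high} questions per session "
      f"versus the {delivered_items} actually delivered.")
```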

Why is Track My Progress referred to as a 'transparent assessment'? We believe that for an educational assessment to truly improve learning it must be transparent. This means that educators can see every test question seen by every student, the answers that students provided, and how much time they spent on each test question. Additionally, educators can use the 'improve this question' feature to submit feedback to our test question development team, giving educators a voice in the development and content of the test. We have learned from experience that using assessment data without this transparency can be limiting. For example, learning that a student received a low score for Foundational Reading would indicate that the student needs additional support in this area. However, if we can also see that the student skipped several of these questions, spent 3 seconds on another, and clearly guessed on the last, it helps us understand that the student may need support with engagement and motivation more than with the actual skills in the Foundational Reading domain. A test that is not transparent fails to provide the story and context behind the scores that are needed to intervene responsibly.

Should we change the default cut scores for the color categories? For many schools the default cut scores that define the red, yellow, blue, and green categories are sufficient to identify learning groups, students 'at-risk' for falling further behind their peers, and students who are ready for more of a challenge. If your school staff does not have experience with universal screening and benchmarking students, you may want to continue with the default cut scores. Here are a few examples of scenarios where schools did decide to change the defaults (a configuration sketch follows these examples):

• Most teachers only see students with scores in the blue and green color categories. This occurs at a school that is performing above the national average, where nearly all students are above the 40th percentile. It can be valuable to adjust the cut scores to identify the students who are falling behind their peer group. If a student is at the 40th percentile while the core of the class is closer to the 65th percentile, that student may still need additional support to close the gap or avoid falling further behind.

• Most teachers only see students with scores in the red and yellow color categories. This occurs at a school that is performing below the national average, where most of the students are below the 40th percentile. It will not be possible to provide additional or intensive interventions to all students identified as 'at-risk.' However, there will likely still be students who are behind their peer group and will need additional support to close the gap and not fall further behind.

• Your state or district has a prescribed definition for 'at-risk' and/or 'borderline.' If your school is already working with a prescribed definition of 'at-risk' and/or 'borderline,' it can be helpful to have the Track My Progress cut scores updated to match. This will keep everyone on the same page in terms of definitions and conventions.
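Conceptually, the cut scores are just configurable thresholds on a percentile scale. The sketch below shows the idea; the default values, function name, and adjusted values are illustrative assumptions, not Track My Progress's shipped configuration.

```python
# Minimal sketch: mapping a percentile to a color category using
# configurable cut scores. All threshold values are assumptions.
DEFAULT_CUTS = {"red": 25, "yellow": 40, "blue": 75}  # upper bounds (percentile)

def color_for(percentile, cuts=DEFAULT_CUTS):
    if percentile < cuts["red"]:
        return "red"      # at-risk
    if percentile < cuts["yellow"]:
        return "yellow"   # borderline
    if percentile < cuts["blue"]:
        return "blue"     # on grade level
    return "green"        # ready for more challenge

# A school performing above the national average might raise the cuts so
# that students falling behind the local peer group still surface:
local_cuts = {"red": 45, "yellow": 55, "blue": 85}
print(color_for(40), color_for(40, local_cuts))  # -> blue red
```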

I would like to set an end of year goal for my students using Track My Progress scale scores. Is this appropriate? Track My Progress scores are based on U.S. national averages and are expressed on a common numeric scale across grade levels and subjects. The scale uses standardized measurement units to track student progress: each unit on the scale represents the same amount of progress regardless of where the student is located on the scale. This means you can use Track My Progress scale scores and the expected growth table to set goals for your students. Keep in mind the Track My Progress philosophy of using multiple data points in your educational decision making. For example, the overall trend of four Track My Progress scores is a better indicator of student progress than any single end-of-year score, and it is best to coordinate your Track My Progress goal setting with other sources of data.
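In code form, a goal of this kind is simply the starting scale score plus expected growth, possibly stretched for an ambitious target. The growth figures below are placeholder assumptions; actual values come from the Track My Progress expected growth table.

```python
# Minimal sketch: an end-of-year scale-score goal from a fall score.
# The growth-by-grade values are placeholders, not published figures.
EXPECTED_ANNUAL_GROWTH = {1: 60, 2: 50, 3: 40}

def end_of_year_goal(fall_score, grade, stretch=1.0):
    """stretch > 1.0 sets an ambitious goal; 1.0 tracks expected growth."""
    return fall_score + EXPECTED_ANNUAL_GROWTH[grade] * stretch

print(end_of_year_goal(472, grade=1))                # 532.0
print(end_of_year_goal(472, grade=1, stretch=1.25))  # 547.0
```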

How do I see a report on how all of my students did with a particular test question? Track My Progress is an adaptive assessment, which means each test is customized to the particular learning zone of each student. As a result, students typically see different questions on each test, so it is not possible to report on how all of your students did on a single question.

Do the assessments get harder from one test window to the next? Track My Progress assessments are adaptive, which means the difficulty level of the assessment is determined by what the student is ready for. For example, most students will make expected progress from the fall assessment to the winter assessment. Because of their increased learning over the intervening months, these students will be ready for, and will see, more challenging test questions on the winter assessment than they experienced in the fall. For a struggling student who did not make much progress over those months, however, the winter assessment questions could be the same difficulty as the fall questions. What does change from test window to test window are the norms: the percentile scores on the School Administrator interface as well as the colors in the graphs. For example, a grade one student might have a score of 472 in the fall that would be blue (on grade level). If that student earned a score of 472 in the winter it would be yellow (borderline), because the peer group of first grade students across the country made progress while this student did not. If the student scored 472 again at the end of the year it would be red (at-risk), because the peer group's progress over the year is reflected in the updated norms for each test window.
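The sketch below replays this grade one example: the scale score holds at 472 while the norms, and therefore the percentile and color, shift each window. The norm and cut values are invented placeholders shaped to match the example, not published norms.

```python
# Minimal sketch: the same scale score maps to different percentiles
# (and colors) as norms update each window. Values are placeholders.
GRADE1_NORMS = {            # scale score -> approximate percentile
    "fall":   {472: 50},
    "winter": {472: 35},
    "spring": {472: 20},
}

def category(percentile):
    if percentile < 25:
        return "red (at-risk)"
    if percentile < 40:
        return "yellow (borderline)"
    return "blue (on grade level)"

for window in ("fall", "winter", "spring"):
    pct = GRADE1_NORMS[window][472]
    print(f"{window}: score 472 -> {pct}th percentile -> {category(pct)}")
```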

Do all the students in my class see the same test questions? Track My Progress is a computer adaptive test that uses a question bank of over five thousand questions. During a test, Track My Progress chooses the difficulty of the next question based on how well the student answered the previous questions. For example, if a student answers three medium questions correctly, the test will then deliver more medium and hard questions and fewer easy questions. If a second student answers the medium questions incorrectly, that student will see fewer medium questions and more easy questions. As a result, these two students are unlikely to have seen any of the same test questions.
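A simplified selection rule illustrates the behavior described above. Real computer adaptive tests choose items with an item response theory model rather than a streak count, and the even three-way split of the bank is assumed, so treat this strictly as a sketch.

```python
# Minimal sketch: streak-based stand-in for adaptive item selection.
# Pool sizes and the three-question window are assumptions.
import random

BANK = {"easy": list(range(1800)),
        "medium": list(range(1800, 3600)),
        "hard": list(range(3600, 5400))}  # ~5,400 question IDs

def next_question(recent_results):
    """Choose the next difficulty from the last three responses."""
    correct = sum(recent_results[-3:])
    if correct == 3:
        level = random.choice(["medium", "hard"])  # doing well: push harder
    elif correct == 0:
        level = "easy"                             # struggling: step down
    else:
        level = "medium"
    return level, random.choice(BANK[level])

print(next_question([True, True, True]))     # e.g. ('hard', 4012)
print(next_question([False, False, False]))  # e.g. ('easy', 712)
```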

Can my students take the math and reading tests on the same day? The math and reading tests cannot be taken on the same day. If a student completes the math test and then signs in again on the same day, she will be greeted with a message indicating that her reading test will be available the next day.

Can students take Track My Progress tests at home? Students need to take Track My Progress tests at school and under teacher supervision; they should not receive assistance answering test questions, which can happen at home. Because it is technically possible for students to sign in at home and take a Track My Progress test, sign-in cards should be collected after test sessions have been completed. As an additional security measure, teachers can turn off a test until the day it is to be taken at school.

Track My Progress Educator Support

Track My Progress educator support is available through a number of channels. Please contact us with any questions or comments.

• Phone: 800-294-0989
• Web: http://get.trackmyprogress.com/contact/
• Chat: Sign in to your administrator account and select the chat option at the bottom of the screen for live chat with a Track My Progress Educator Advocate.
• Email: [email protected]
• Online documentation: http://support.trackmyprogress.com
