
Post on 30-Dec-2015


  • Lesson Five: Validity & Practicality

  • Contents
    - Introduction: Definition of Validity
    - Types of validity
      - Non-empirical: Face Validity, Content Validity
      - Empirical: Construct Validity, Criterion-related Validity
    - Practicality

  • Introduction
    A writing test asks test takers to write on the following topic: "Is Photography an Art or a Science?"
    Is this a valid writing test? Why or why not?
    You should be clear about what exactly you want to test (i.e., no other irrelevant abilities or knowledge).
    Validity concerns what a test measures and how well it measures what it is intended to measure.

  • Definition of Validity
    "The extent to which inferences made from assessment results are appropriate, meaningful, and useful in terms of the purpose of the assessment" (cited in Brown 22).
    A valid test = a test that measures what it is intended to measure, and nothing else (i.e., no external knowledge or other skills measured at the same time).
    E.g., a listening test measures listening skill and nothing else. It shouldn't favor any students.

  • Non-empirical Validity
    Involves inspection, intuition, and common sense:
    - Consequential validity
    - Face validity
    - Content validity

  • Consequential Validity
    Encompasses all the consequences of a test (Brown 26):
    - Its accuracy in measuring intended criteria
    - Its impact on the preparation of test-takers
    - Its effect on the learner
    - The social consequences of a test's interpretation and use
    - The effect on students' motivation, subsequent performance in a course, independent learning, study habits, and attitude toward school work

  • Face Validity (1)
    You judge whether the test is valid by looking at it.
    It looks right to other testers, teachers, testees, the general public, etc.
    It appears to measure the knowledge or abilities it claims to measure.

  • Face Validity (2)
    Face validity asks the question: does the test, on the face of it, appear from the learner's perspective to test what it is designed to test? (Brown 27)
    Face validity cannot be empirically tested.
    It is essential to all kinds of tests, but it is not enough.

  • Content Validity (1)
    "A test is said to have content validity if its content constitutes a representative sample of the language skills, structures, etc. with which it is meant to be concerned." (Hughes 1989)
    Also called rational or logical validity.
    Especially important for achievement, progress, and diagnostic tests.
    A valid test contains appropriate and representative content.

  • Content Validity (2)
    A test with content validity contains a representative sample of the course (objectives), and quantifies and balances the test components (given a percentage weighting).
    Check against:
    - Test specifications (test plan)
    - Notes, textbooks
    - Course syllabus/objectives
    - Another teacher or subject-matter experts

  • Content Validity (3)
    An example: a fill-in quiz on the use of articles (see Brown 23).
    Does it have content validity if used as a listening/speaking test?
    Classroom tests should always have content validity.
    Rule of thumb for achieving content validity: always use direct tests.

  • Criterion-related Validity (1)
    The extent to which the criterion of the test has actually been reached.
    How far results on the test agree with those provided by some independent and highly dependable assessment of the candidates' ability.

  • Criterion-related Validity (2)
    Two kinds of criterion-related validity. Concurrent validity:
    - How closely the test result parallels test takers' performance on another valid test, or criterion, which is thought to measure the same or similar abilities
    - Test and criterion are administered at about the same time
    - Possible criteria: an established test or some other measure within the same domain (e.g., course grades, teachers' ratings)

  • Criterion-related Validity (3)
    Example situation: a conversation class whose objectives cover a large number of functions; testing all of them would take 45 minutes per student.
    Question: is a 10-minute test a valid measure?
    Method: a random sample of students takes the full 45-minute test (the criterion test); compare scores on the short version with those on the criterion test. If there is a high level of agreement, the short version is a valid test.

  • Criterion-related Validity (4)
    Validity coefficient:
    - A mathematical measure of similarity (a correlation between test scores and criterion scores)
    - Perfect agreement → validity coefficient = 1
    - E.g., a coefficient of 0.7: (0.7)^2 = 0.49 → 49%, meaning the test and the criterion share almost 50% of their variance
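The validity coefficient described above is a correlation coefficient (typically Pearson's r), and squaring it gives the proportion of variance the test and the criterion share. A minimal sketch in Python of the short-test vs. criterion-test comparison from the previous slide (the student scores are invented for illustration, not taken from the lesson):

```python
# Hypothetical scores for five students: a 10-minute short test vs. the
# full 45-minute criterion test (illustrative numbers only).
short_test = [72, 85, 60, 90, 78]
criterion = [70, 88, 58, 93, 75]

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

r = pearson_r(short_test, criterion)
shared_variance = r ** 2  # proportion of variance the two measures share
print(f"validity coefficient r = {r:.2f}")
print(f"shared variance r^2 = {shared_variance:.2f}")
```

A coefficient near 1 would support treating the 10-minute version as a valid stand-in for the 45-minute criterion; with r = 0.7 the two measures would share only (0.7)^2 = 49% of their variance, as the slide notes.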

  • Criterion-related Validity (5)
    Predictive validity: how well the test result predicts future performance/success.
    - The correlation is done at a future time
    - Important for the validation of aptitude tests, placement tests, and admissions tests
    - Criterion: outcome of the course (pass/fail), teachers' ratings later

  • Construct Validity (1)
    Construct:
    - Any underlying ability (trait) which is hypothesized in a theory of language ability
    - "Any theory, hypothesis, or model that attempts to explain observed phenomena in our universe of perceptions" (Brown 25)

  • Construct Validity (2)
    Originated for psychological tests.
    Refers to the extent to which the test may be said to measure a theoretical construct or trait, which is normally unobservable and abstract, at different levels (e.g., personality, self-esteem; proficiency, communicative competence).
    It examines whether the test is a true reflection of the theory of the trait being measured.

  • Construct Validity (3)
    A test has construct validity if it can be demonstrated that it measures just the ability which it is supposed to measure.
    Two examples:
    1. Reading ability: involves a number of sub-abilities, e.g., skimming, scanning, guessing the meaning of unknown words, etc.

  • Construct Validity (4)
    We need empirical research to establish whether such a distinct ability exists and can be measured.
    Construct validity is needed because we have to demonstrate that we are indeed measuring just that ability in a particular test.

  • Construct Validity (5)
    2. Measuring an ability indirectly, e.g., writing ability:
    - We need to look to a theory of writing ability for guidance as to the form (i.e., content, techniques) an indirect test should take
    - The theory of writing tells us that writing ability rests on a number of sub-abilities, e.g., punctuation, organization, word choice, grammar, etc.
    - Based on the theory, we construct multiple-choice tests to measure these sub-abilities

  • Construct Validity (6)
    But how do we know this test really is measuring writing ability? Validation methods:
    - Compare scores on the pilot test with scores on a direct writing test; a high level of agreement → yes
    - Administer a number of tests, each measuring one construct; score the composition (direct test) separately for each construct, then compare scores

  • Construct Validity (7)
    Examines whether the test is a true reflection of the theory of the trait being measured.
    In language testing, a construct is any underlying ability/trait which is hypothesized in a theory of language ability.
    Necessary in the case of indirect testing.
    Can be assessed by comparing the scores of a group of students on two tests.

  • Practicality
    Practical considerations when planning tests or ways of measurement, including cost and the time/effort required:
    - Economy (cost; time for administration and scoring)
    - Ease of scoring and score interpretation
    - Ease of administration
    - Ease of test compilation
    A test should be practical to use, but also valid and reliable.
