Validity in psychological testing


Page 1: Validity in psychological testing
Page 2: Validity in psychological testing

Reliability

Test reliability refers to the degree to which a test is consistent and stable in measuring what it is intended to measure.

Most simply put, a test is reliable if it is consistent within itself and across time.

To understand the basics of test reliability, think of a bathroom scale that gave you drastically different readings every time you stepped on it, regardless of whether you had gained or lost weight. If such a scale existed, it would be considered unreliable.
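To make the scale analogy concrete, consistency across occasions can be quantified as a correlation between repeated measurements (test-retest reliability). The following is a minimal Python sketch; the scores are invented purely for illustration.

```python
import numpy as np

# Hypothetical scores for the same six people measured on two occasions.
occasion_1 = np.array([72.0, 65.5, 80.2, 90.1, 55.0, 68.3])
occasion_2 = np.array([71.4, 66.0, 79.8, 91.0, 54.2, 69.1])

# Test-retest reliability: the Pearson correlation between occasions.
# A dependable "scale" yields r close to 1; an erratic one drifts toward 0.
r = np.corrcoef(occasion_1, occasion_2)[0, 1]
print(f"test-retest reliability r = {r:.2f}")
```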

Page 3: Validity in psychological testing

Validity

Test validity refers to the degree to which the test actually measures what it claims to measure.

Test validity is also the extent to which inferences, conclusions, and decisions made on the basis of test scores are appropriate and meaningful.

Page 4: Validity in psychological testing

The Relationship of Reliability and Validity

Test validity is requisite to test reliability. If a test is not valid, then reliability is moot.

In other words, if a test is not valid, there is no point in discussing reliability, because test validity is required before reliability can be considered in any meaningful way. Likewise, if a test is not reliable it is also not valid.

Page 5: Validity in psychological testing

Classical models divided the concept into various "validities," such as:

• content validity
• criterion validity
• construct validity

Page 6: Validity in psychological testing

The modern view is that validity is a single unitary construct.

Page 7: Validity in psychological testing

Cronbach and Meehl's subsequent publication (1955) grouped predictive and concurrent validity into a "criterion-oriented" category, which eventually became criterion validity.

Page 8: Validity in psychological testing

A single interpretation of any test may require several propositions to be true (or may be questioned by any one of a set of threats to its validity). Strong evidence in support of a single proposition does not lessen the requirement to support the other propositions.

Evidence to support (or question) the validity of an interpretation can be grouped into one of five categories:

• Evidence based on test content
• Evidence based on response processes
• Evidence based on internal structure
• Evidence based on relations to other variables
• Evidence based on consequences of testing

Page 9: Validity in psychological testing

In 1995, Samuel Messick's article described validity as a single construct composed of six "aspects." In his view, various inferences made from test scores may require different types of evidence, but not different validities.

Page 10: Validity in psychological testing

In science and statistics, validity has no single agreed definition but generally refers to the extent to which a concept, conclusion or measurement is well-founded and corresponds accurately to the real world. The word "valid" is derived from the Latin validus, meaning strong. The validity of a measurement tool (e.g., a test in education) is considered to be the degree to which the tool measures what it claims to measure.

In psychometrics, validity has a particular application known as test validity: "the degree to which evidence and theory support the interpretations of test scores" ("as entailed by proposed uses of tests").[1]

In the area of scientific research design and experimentation, validity refers to whether a study is able to scientifically answer the questions it is intended to answer.

In clinical fields, the validity of a diagnosis and associated diagnostic tests may be assessed.

Page 11: Validity in psychological testing

Construct validity
• Convergent validity
• Discriminant validity

Convergent validity refers to the degree to which a measure is correlated with other measures that it is theoretically predicted to correlate with.

Discriminant validity describes the degree to which the operationalization does not correlate with other operationalizations that it theoretically should not be correlated with.
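One common way to examine both properties is to correlate the new measure with measures it theoretically should and should not relate to. A minimal sketch, using invented scores and hypothetical measure names:

```python
import numpy as np

# Invented scores: a new anxiety questionnaire, an established anxiety
# inventory (theoretically related), and a vocabulary test (unrelated).
new_anxiety = np.array([12, 18, 25, 30, 8, 22, 15, 27])
old_anxiety = np.array([14, 17, 27, 29, 9, 20, 16, 26])
vocabulary  = np.array([40, 55, 38, 60, 52, 45, 58, 41])

convergent   = np.corrcoef(new_anxiety, old_anxiety)[0, 1]  # expect high
discriminant = np.corrcoef(new_anxiety, vocabulary)[0, 1]   # expect near 0
print(f"convergent r = {convergent:.2f}, discriminant r = {discriminant:.2f}")
```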

Page 12: Validity in psychological testing

Content validity

Content validity is a non-statistical type of validity that involves "the systematic examination of the test content to determine whether it covers a representative sample of the behavior domain to be measured" (Anastasi & Urbina, 1997, p. 114). For example, does an IQ questionnaire have items covering all areas of intelligence discussed in the scientific literature?

Page 13: Validity in psychological testing

Content validity evidence involves the degree to which the content of the test matches a content domain associated with the construct. For example, a test of the ability to add two numbers should include a range of combinations of digits. A test with only one-digit numbers, or only even numbers, would not have good coverage of the content domain. Content-related evidence typically involves subject matter experts (SMEs) evaluating test items against the test specifications.

A test has content validity built into it by careful selection of which items to include (Anastasi & Urbina, 1997). Items are chosen so that they comply with the test specification, which is drawn up through a thorough examination of the subject domain.

Foxcroft et al. (2004, p. 49) note that the content validity of a test can be improved by using a panel of experts to review the test specifications and the selection of items. The experts will be able to review the items and comment on whether the items cover a representative sample of the behaviour domain.

Page 15: Validity in psychological testing

Representation validity

Representation validity, also known as translation validity, is about the extent to which an abstract theoretical construct can be turned into a specific practical test.

Page 16: Validity in psychological testing

Face validity is an estimate of whether a test appears to measure a certain criterion; it does not guarantee that the test actually measures phenomena in that domain. Indeed, when a test is subject to faking (malingering), low face validity might make the test more valid.

Face validity is very closely related to content validity. While content validity depends on a theoretical basis for assuming that a test assesses all domains of a certain criterion (e.g., does assessing addition skills yield a good measure of mathematical skills? To answer this, you have to know what kinds of arithmetic skills mathematical skills include), face validity relates to whether a test appears to be a good measure or not. This judgment is made on the "face" of the test, so it can also be made by a layperson.

Face validity is a starting point, but it should never be taken as proof that a test is valid for any given purpose. Experts have been wrong before: the Malleus Maleficarum ("Hammer of Witches") had no support for its conclusions other than the self-imagined competence of two "experts" in witchcraft detection, yet it was used as a "test" to condemn and burn at the stake perhaps 100,000 women as "witches."

Page 17: Validity in psychological testing

Criterion validity

Criterion validity evidence involves the correlation between the test and a criterion variable (or variables) taken as representative of the construct. In other words, it compares the test with other measures or outcomes (the criteria) already held to be valid. For example, employee selection tests are often validated against measures of job performance (the criterion), and IQ tests are often validated against measures of academic performance (the criterion).

If the test data and criterion data are collected at the same time, this is referred to as concurrent validity evidence. If the test data is collected first in order to predict criterion data collected at a later point in time, then this is referred to as predictive validity evidence.

Page 18: Validity in psychological testing

Concurrent validity

Concurrent validity refers to the degree to which the operationalization correlates with other measures of the same construct that are measured at the same time. Returning to the selection test example, this would mean that the tests are administered to current employees and then correlated with their scores on performance reviews.

Predictive validity

Predictive validity refers to the degree to which the operationalization can predict (or correlate with) other measures of the same construct that are measured at some time in the future. Again, with the selection test example, this would mean that the tests are administered to applicants, all applicants are hired, their performance is reviewed at a later time, and then their scores on the two measures are correlated.
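Statistically, both designs reduce to the same correlation; only the timing of the criterion data differs. A minimal sketch of the selection-test example, with invented scores:

```python
import numpy as np

# Hypothetical selection-test scores for eight people.
test_scores = np.array([55, 62, 70, 48, 81, 66, 59, 74])

# Concurrent design: performance ratings collected from current
# employees at the same time as the test.
ratings_now = np.array([3.1, 3.4, 4.0, 2.8, 4.5, 3.6, 3.2, 4.1])

# Predictive design: applicants are tested, hired, and rated a year
# later. The calculation is identical; only the timing differs.
ratings_later = np.array([3.0, 3.6, 3.9, 2.9, 4.4, 3.5, 3.3, 4.2])

print("concurrent r =", round(np.corrcoef(test_scores, ratings_now)[0, 1], 2))
print("predictive r =", round(np.corrcoef(test_scores, ratings_later)[0, 1], 2))
```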

Page 19: Validity in psychological testing

Diagnostic validity

In clinical fields such as medicine, the validity of a diagnosis, and associated diagnostic tests or screening tests, may be assessed.

In regard to tests, the validity issues may be examined in the same way as for psychometric tests as outlined above, but there are often particular applications and priorities. In laboratory work, the medical validity of a scientific finding has been defined as the 'degree of achieving the objective', namely of answering the question which the physician asks.[2]

Important requirements in clinical diagnosis and testing are sensitivity and specificity: a test needs to be sensitive enough to detect the relevant problem if it is present (and therefore avoid too many false negative results), but specific enough not to respond to other things (and therefore avoid too many false positive results).[3]
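Both quantities follow directly from the counts in a two-by-two table of test results against true status. A short sketch with invented counts:

```python
# Sensitivity and specificity from a 2x2 confusion matrix.
# All counts below are invented for illustration.
true_positives  = 90   # test positive, condition present
false_negatives = 10   # test negative, condition present
true_negatives  = 850  # test negative, condition absent
false_positives = 50   # test positive, condition absent

# Sensitivity: proportion of actual cases the test detects
# (high sensitivity -> few false negatives).
sensitivity = true_positives / (true_positives + false_negatives)

# Specificity: proportion of non-cases the test correctly rules out
# (high specificity -> few false positives).
specificity = true_negatives / (true_negatives + false_positives)

print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
# -> sensitivity = 0.90, specificity = 0.94
```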

Page 20: Validity in psychological testing

In psychiatry there is a particular issue with assessing the validity of the diagnostic categories themselves. In this context:[4]

• Content validity may refer to symptoms and diagnostic criteria.
• Concurrent validity may be defined by various correlates or markers, and perhaps also treatment response.
• Predictive validity may refer mainly to diagnostic stability over time.
• Discriminant validity may involve delimitation from other disorders.

Page 21: Validity in psychological testing

These were incorporated into the Feighner Criteria and Research Diagnostic Criteria that have since formed the basis of the DSM and ICD classification systems.

Page 22: Validity in psychological testing

Kendler in 1980 distinguished between:[4]

• antecedent validators (familial aggregation, premorbid personality, and precipitating factors)

• concurrent validators (including psychological tests)

• predictive validators (diagnostic consistency over time, rates of relapse and recovery, and response to treatment)

Page 23: Validity in psychological testing

Nancy Andreasen (1995) listed several additional validators (molecular genetics and molecular biology, neurochemistry, neuroanatomy, neurophysiology, and cognitive neuroscience) that are all potentially capable of linking symptoms and diagnoses to their neural substrates.[4]

Kendell and Jablensky (2003) emphasized the importance of distinguishing between validity and utility, and argued that diagnostic categories defined by their syndromes should be regarded as valid only if they have been shown to be discrete entities with natural boundaries that separate them from other disorders.[4]

Page 24: Validity in psychological testing

Robins and Guze proposed in 1970 what were to become influential formal criteria for establishing the validity of psychiatric diagnoses. They listed five criteria:[4]

1. Distinct clinical description (including symptom profiles, demographic characteristics, and typical precipitants)
2. Laboratory studies (including psychological tests, radiology and postmortem findings)
3. Delimitation from other disorders (by means of exclusion criteria)
4. Follow-up studies showing a characteristic course (including evidence of diagnostic stability)
5. Family studies showing familial clustering

Page 25: Validity in psychological testing

Kendler (2006) emphasized that to be useful, a validating criterion must be sensitive enough to validate most syndromes that are true disorders, while also being specific enough to invalidate most syndromes that are not true disorders.

On this basis, he argues that the Robins and Guze criterion of "runs in the family" is inadequately specific, because most human psychological and physical traits would qualify: for example, an arbitrary syndrome comprising a mixture of "height over 6 ft, red hair, and a large nose" will be found to "run in families" and be "hereditary", but this should not be considered evidence that it is a disorder. Kendler has further suggested that "essentialist" gene models of psychiatric disorders, and the hope that we will be able to validate categorical psychiatric diagnoses by "carving nature at its joints" solely as a result of gene discovery, are implausible.[5]

Page 26: Validity in psychological testing

Questions To Ask When Evaluating Tests

Page 27: Validity in psychological testing

TEST COVERAGE AND USE

There must be a clear statement of recommended uses and a description of the population for which the test is intended. The principal question to ask when evaluating a test is whether it is appropriate for your intended purposes as well as your students. The use intended by the test developer must be justified by the publisher on technical grounds. You then need to evaluate your intended use against the publisher's intended use.

Questions to ask:
1. What are the intended uses of the test? What interpretations does the publisher feel are appropriate? Are inappropriate applications identified?
2. Who is the test designed for? What is the basis for considering whether the test applies to your students?

Page 28: Validity in psychological testing

APPROPRIATE SAMPLES FOR TEST VALIDATION AND NORMING

The samples used for test validation and norming must be of adequate size and must be sufficiently representative to substantiate validity statements, to establish appropriate norms, and to support conclusions regarding the use of the instrument for the intended purpose. The individuals in the norming and validation samples should represent the group for which the test is intended in terms of age, experience and background.

Questions to ask:
1. How were the samples used in pilot testing, validation and norming chosen? How is this sample related to your student population? Were participation rates appropriate?
2. Was the sample size large enough to develop stable estimates with minimal fluctuation due to sampling errors? Where statements are made concerning subgroups, are there enough test-takers in each subgroup?
3. Do the difficulty levels of the test and criterion measures (if any) provide an adequate basis for validating and norming the instrument? Are there sufficient variations in test scores?

Page 29: Validity in psychological testing

RELIABILITY

The test is sufficiently reliable to permit stable estimates of the ability levels of individuals in the target group. Fundamental to the evaluation of any instrument is the degree to which test scores are free from measurement error and are consistent from one occasion to another when the test is used with the target group. Sources of measurement error, which include fatigue, nervousness, content sampling, answering mistakes, misinterpreting instructions and guessing, contribute to an individual's score and lower a test's reliability.

Different types of reliability estimates should be used to estimate the contributions of different sources of measurement error. Inter-rater reliability coefficients provide estimates of errors due to inconsistencies in judgment between raters. Alternate-form reliability coefficients provide estimates of the extent to which individuals can be expected to rank the same on alternate forms of a test. Of primary interest are estimates of internal consistency, which account for error due to content sampling, usually the largest single component of measurement error.

Page 30: Validity in psychological testing

Questions to ask:
1. How have reliability estimates been computed? Have appropriate statistical methods been used? (For example, split-half reliability coefficients should not be used with speeded tests, as they will produce artificially high estimates.)
2. What are the reliabilities of the test for different groups of test-takers? How were they computed?
3. Is the reliability sufficiently high to warrant using the test as a basis for decisions concerning individual students?
4. To what extent are the groups used to provide reliability estimates similar to the groups the test will be used with?
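One widely used internal-consistency estimate is Cronbach's alpha, computed from an item-response matrix. The sketch below uses invented responses and is illustrative only, not a substitute for the coefficients a publisher reports:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal consistency; rows are respondents, columns are items."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Invented responses: 6 respondents x 4 items on a 1-5 scale.
responses = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [3, 3, 3, 4],
    [5, 4, 5, 5],
    [1, 2, 1, 2],
    [4, 4, 4, 3],
])
print(f"alpha = {cronbach_alpha(responses):.2f}")
```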

Page 31: Validity in psychological testing

CRITERION VALIDITY

The test adequately predicts academic performance. In terms of an achievement test, criterion validity refers to the extent to which a test can be used to draw inferences regarding achievement. Empirical evidence in support of criterion validity must include a comparison of performance on the validated test against performance on outside criteria. A variety of criterion measures are available, such as grades, class rank, other tests and teacher ratings. There are also several ways to demonstrate the relationship between the test being validated and subsequent performance. In addition to correlation coefficients, scatterplots, regression equations and expectancy tables should be provided.

Questions to ask:
1. What criterion measure has been used to evaluate validity? What is the rationale for choosing this measure?
2. Is the distribution of scores on the criterion measure adequate?
3. What is the overall predictive accuracy of the test? How accurate are predictions for individuals whose scores are close to cut-points of interest?
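An expectancy table of the kind mentioned above can be built by tabulating the proportion of students who reach the criterion within each test-score band. A minimal sketch with invented scores and pass/fail outcomes:

```python
import numpy as np

# Invented data: admission-test scores and whether each student later
# earned a passing course grade (the criterion).
scores = np.array([35, 42, 48, 55, 61, 67, 72, 78, 84, 91])
passed = np.array([ 0,  0,  1,  0,  1,  1,  1,  1,  1,  1])

# Expectancy table: probability of success within each score band.
bands = [(0, 50), (50, 70), (70, 101)]
for lo, hi in bands:
    mask = (scores >= lo) & (scores < hi)
    print(f"score {lo}-{hi - 1}: {mask.sum()} students, "
          f"{passed[mask].mean():.0%} passed")
```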

Page 32: Validity in psychological testing

CONTENT VALIDITY

Content validity refers to the extent to which the test questions represent the skills in the specified subject area. Content validity is often evaluated by examining the plan and procedures used in test construction. Did the test development procedure follow a rational approach that ensures appropriate content? Did the process ensure that the collection of items would represent appropriate skills?

Other questions to ask:
1. Is there a clear statement of the universe of skills represented by the test? What research was conducted to determine desired test content and/or evaluate content?
2. What was the composition of expert panels used in content validation? How were judgments elicited?
3. How similar is this content to the content you are interested in testing?

Page 33: Validity in psychological testing

CONSTRUCT VALIDITY

The test measures the "right" psychological constructs. Intelligence, self-esteem and creativity are examples of such psychological traits. Evidence in support of construct validity can take many forms. One approach is to demonstrate that the items within a measure are inter-related and therefore measure a single construct. Inter-item correlation and factor analysis are often used to demonstrate relationships among the items. Another approach is to demonstrate that the test behaves as one would expect a measure of the construct to behave. For example, one might expect a measure of creativity to show a greater correlation with a measure of artistic ability than with a measure of scholastic achievement.

Questions to ask:
1. Is the conceptual framework for each tested construct clear and well founded? What is the basis for concluding that the construct is related to the purposes of the test?
2. Does the framework provide a basis for testable hypotheses concerning the construct? Are these hypotheses supported by empirical data?
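The inter-item correlation approach can be illustrated directly: consistently positive, sizeable correlations among items are one piece of evidence that they tap a single construct (factor analysis probes the structure more formally). A minimal sketch with invented item responses:

```python
import numpy as np

# Invented responses: 8 respondents x 3 items intended to measure
# one construct. Rows are respondents, columns are items.
items = np.array([
    [4, 4, 5],
    [2, 3, 2],
    [5, 4, 4],
    [1, 2, 1],
    [3, 3, 4],
    [4, 5, 4],
    [2, 2, 3],
    [5, 5, 5],
])

# Inter-item correlation matrix; rowvar=False correlates columns (items).
print(np.round(np.corrcoef(items, rowvar=False), 2))
```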

Page 34: Validity in psychological testing

TEST ADMINISTRATION

Detailed and clear instructions outline appropriate test administration procedures. Statements concerning test validity and the accuracy of the norms can only generalize to testing situations which replicate the conditions used to establish validity and obtain normative data. Test administrators need detailed and clear instructions to replicate these conditions. All test administration specifications, including instructions to test takers, time limits, use of reference materials and calculators, lighting, equipment, seating, monitoring, room requirements, testing sequence, and time of day, should be fully described.

Questions to ask:
1. Will test administrators understand precisely what is expected of them?
2. Do the test administration procedures replicate the conditions under which the test was validated and normed? Are these procedures standardized?

Page 35: Validity in psychological testing

TEST REPORTING

The methods used to report test results, including scaled scores, subtest results and combined test results, are described fully along with the rationale for each method. Test results should be presented in a manner that will help schools, teachers and students to make decisions that are consistent with appropriate uses of the test. Help should be available for interpreting and using the test results.

Questions to ask:
1. How are test results reported? Are the scales used in reporting results conducive to proper test use?
2. What materials and resources are available to aid in interpreting test results?

Page 36: Validity in psychological testing

TEST AND ITEM BIAS

The test is not biased or offensive with regard to race, sex, native language, ethnic origin, geographic region or other factors. Test developers are expected to exhibit a sensitivity to the demographic characteristics of test-takers. Steps can be taken during test development, validation, standardization and documentation to minimize the influence of cultural factors on individual test scores. These steps may include evaluating items for offensiveness and cultural dependency, using statistics to identify differential item difficulty, and examining the predictive validity for different groups. Tests are not expected to yield equivalent mean scores across population groups. Rather, tests should yield the same scores and predict the same likelihood of success for individual test-takers of the same ability, regardless of group membership.

Questions to ask:
1. Were the items analyzed statistically for possible bias? What method(s) was used? How were items selected for inclusion in the final version of the test?
2. Was the test analyzed for differential validity across groups? How was this analysis conducted?
3. Was the test analyzed to determine the English language proficiency required of test-takers? Should the test be used with non-native speakers of English?
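As a crude illustration of screening for differential item difficulty, one can compare an item's proportion correct across groups of test-takers; operational methods (for example, Mantel-Haenszel) additionally condition on ability, which this sketch omits. All data are invented:

```python
import numpy as np

# Invented item responses (1 = correct, 0 = incorrect) for one item,
# from two groups assumed to be matched on total test score.
group_a = np.array([1, 1, 0, 1, 1, 0, 1, 1, 1, 0])
group_b = np.array([1, 0, 0, 1, 0, 0, 1, 0, 1, 0])

# Compare proportion correct; a large gap between matched groups flags
# the item for closer review.
p_a, p_b = group_a.mean(), group_b.mean()
print(f"proportion correct: group A = {p_a:.2f}, group B = {p_b:.2f}, "
      f"difference = {p_a - p_b:+.2f}")
```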