Terms in this set (33)
A judgment or estimate of how well a test measures what it purports to measure in a particular context
- The process of gathering and evaluating evidence about validity.
- Both test developers and test users may play a role in the validation of a test.
- Test users may validate a test with their own group of testtakers (local validation)
- This is a measure of validity based on an evaluation of the subjects, topics, or content covered by the items in the test
- A judgment of how adequately a test samples behavior representative of the universe of behavior that the test was designed to sample
- Do the test items adequately represent the content that should be included in the test?
- This is a measure of validity obtained by evaluating the relationship of scores obtained on the test to scores on other tests or measures
- A judgment of how adequately a test score can be used to infer an individual's most probable standing on some measure of interest (i.e. the criterion)
This is a measure of validity arrived at by executing a comprehensive analysis of:
- how scores on the test relate to other test scores and measures, and
- how scores on the test can be understood within some theoretical framework for understanding the construct that the test was designed to measure
- The ability of a test to measure a theorized construct (e.g. intelligence, aggression, personality, etc.) that it purports to measure.
- If a test is a valid measure of a construct, high scorers and low scorers should behave as theorized.
- All types of validity evidence, including evidence from the content- and criterion-related varieties of validity, come under the umbrella of construct validity
- A judgment concerning how relevant the test items appear to be.
- If a test appears to measure what it purports to measure "on the face of it," it could be said to be high in face validity.
- Many self-report personality tests are high in face validity, whereas projective tests, such as the Rorschach, tend to be low in face validity (i.e. it is not apparent what is being measured).
- A perceived lack of face validity may lead to a lack of confidence in the test measuring what it purports to measure
A plan regarding the types of information to be covered by the items, the number of items tapping each area of coverage, the organization of the items in the test, etc.
- Developed a method whereby raters judge each item as to whether it is essential, useful but not essential, or not necessary for job performance
- If more than half the raters indicate that an item is essential, the item has at least some content validity
- The content validity of a test varies across cultures and time
- Political considerations may also play a role
Developed the content validity ratio (CVR)
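Lawshe's ratio can be computed directly from the panel counts: CVR = (n_e − N/2) / (N/2), where n_e is the number of raters judging an item "essential" and N is the total number of raters. A minimal sketch (the panel sizes below are made up for illustration):

```python
# Lawshe's content validity ratio (CVR) for a single test item.
# CVR = (n_e - N/2) / (N/2): 0 when exactly half the panel says
# "essential", positive when more than half do, 1.0 when all do.
def content_validity_ratio(n_essential, n_panelists):
    half = n_panelists / 2
    return (n_essential - half) / half

# 7 of 10 hypothetical raters judge the item essential:
print(content_validity_ratio(7, 10))   # -> 0.4
```

A positive CVR corresponds to the rule above: more than half the raters marked the item essential, so it has at least some content validity.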
- The standard against which a test or a test score is evaluated
- An adequate criterion is relevant for the matter at hand, valid for the purpose for which it is being used, and uncontaminated, meaning it is not part of the predictor
- An index of the degree to which a test score is related to some criterion measure obtained at the same time (concurrently)
- An index of the degree to which a test score predicts some criterion, or outcome, measure in the future
- Tests are evaluated as to their predictive validity
- A correlation coefficient that provides a measure of the relationship between test scores and scores on the criterion measure
- Validity coefficients are affected by restriction or inflation of range
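Both points can be illustrated with a short Python sketch: the validity coefficient is just a Pearson correlation between test and criterion scores, and discarding the lower-scoring cases (as happens when only hired applicants have criterion data) restricts the range and shrinks the coefficient. The scores and ratings below are hypothetical:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / sqrt(sxx * syy)

# Hypothetical test scores and criterion (job-performance) ratings:
test = [55, 60, 65, 70, 75, 80, 85, 90, 95, 100]
crit = [2.1, 2.4, 3.0, 2.8, 3.4, 3.6, 3.3, 4.0, 4.2, 4.5]

r_full = pearson_r(test, crit)

# Restriction of range: keep only people scoring 80 or above
# (e.g. only the applicants who were actually hired).
kept = [(t, c) for t, c in zip(test, crit) if t >= 80]
r_restricted = pearson_r([t for t, _ in kept], [c for _, c in kept])

print(round(r_full, 3), round(r_restricted, 3))
```

With this data the restricted coefficient comes out noticeably lower than the full-range one, even though the underlying test–criterion relationship is unchanged.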
The degree to which an additional predictor explains something about the criterion measure that is not explained by predictors already in use
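For the two-predictor case this gain can be quantified with the standard multiple-correlation formula, R² = (r_y1² + r_y2² − 2·r_y1·r_y2·r_12) / (1 − r_12²); the incremental validity of the new predictor is R² minus the variance the old predictor already explained. A sketch using hypothetical correlations:

```python
def incremental_r2(r_y1, r_y2, r_12):
    """Gain in explained criterion variance (R^2) from adding predictor 2
    to predictor 1, via the two-predictor multiple-correlation formula."""
    r2_both = (r_y1**2 + r_y2**2 - 2 * r_y1 * r_y2 * r_12) / (1 - r_12**2)
    return r2_both - r_y1**2

# Hypothetical values: existing predictor correlates .50 with the
# criterion, the new one .40, and the two predictors correlate .30.
print(round(incremental_r2(0.50, 0.40, 0.30), 3))   # -> 0.069
```

Note that a new predictor adds nothing when it is fully redundant with the old one (when r_y2 = r_y1 · r_12 the gain is exactly zero), which is the point of evaluating incremental rather than raw validity.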
- An expectancy table shows the percentage of people within specified test-score intervals who subsequently were placed in various categories of the criterion (e.g. placed in "passed" category or "failed" category).
- In a corporate setting test scores may be divided into intervals (e.g. poor, adequate, excellent) and examined in relation to job performance (e.g. satisfactory or unsatisfactory). Expectancy tables, or charts, may show us that the higher the initial rating, the greater the probability of job success
- Graphic representation of an expectancy table
- Tells us that the higher the initial rating, the greater the probability of job success
- Shows the percentage of people within specified test-score intervals who subsequently were placed in various categories of the criterion (for example, placed in a "passed" category or "failed" category)
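Building such a table is straightforward: bin past test takers by score interval and compute the percentage in each bin who later landed in the "passed" category. A minimal sketch (the records and intervals below are hypothetical):

```python
# Hypothetical (test score, passed?) records for previous test takers.
records = [(52, False), (58, False), (61, False), (63, True),
           (67, True), (72, False), (75, True), (78, True),
           (83, True), (88, True), (91, True), (95, True)]

# Score intervals: lower bound inclusive, upper bound exclusive.
intervals = [(50, 65), (65, 80), (80, 100)]

def expectancy_table(records, intervals):
    """Percentage of people in each score interval who later passed."""
    table = {}
    for lo, hi in intervals:
        group = [passed for score, passed in records if lo <= score < hi]
        table[(lo, hi)] = 100 * sum(group) / len(group)
    return table

for (lo, hi), pct in expectancy_table(records, intervals).items():
    print(f"{lo}-{hi}: {pct:.0f}% passed")
```

With this data the pass percentage rises interval by interval (25%, 75%, 100%), which is exactly the "higher initial rating, greater probability of success" pattern the table is meant to reveal.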
- The proportion of people who are accurately identified as possessing or not possessing a particular trait, behavior, characteristic, or attribute based on test scores
- For example, hit rate could refer to the proportion of people a test accurately identifies as possessing or exhibiting a particular trait, behavior, characteristic, or attribute
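With a cutoff score, the hit rate is the proportion of correct decisions (flagging people who have the trait, not flagging those who don't), and false positives are people flagged who do not have it. A sketch with hypothetical scores, trait labels, and cutoff:

```python
def classification_rates(scores, has_trait, cutoff):
    """Return (hit rate, false-positive rate) for a cutoff score.

    A 'hit' is any correct decision: flagging someone who has the
    trait, or not flagging someone who does not have it.
    """
    hits = false_pos = 0
    for score, trait in zip(scores, has_trait):
        flagged = score >= cutoff
        if flagged == trait:
            hits += 1
        if flagged and not trait:
            false_pos += 1
    n = len(scores)
    return hits / n, false_pos / n

# Hypothetical test takers: score and whether they truly have the trait.
scores    = [40, 55, 60, 70, 75, 80, 90, 95]
has_trait = [False, False, True, False, True, True, True, True]

hit_rate, fp_rate = classification_rates(scores, has_trait, cutoff=70)
print(hit_rate, fp_rate)   # -> 0.75 0.125
```

Raising or lowering the cutoff trades false positives against misses, which is why cutoffs are chosen with the relative costs of each error in mind.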
Evidence of homogeneity
How uniform a test is in measuring a single concept
Evidence of changes with age
Some constructs are expected to change over time (e.g. reading rate)
Evidence of pretest/posttest changes
Test scores change as a result of some experience between a pretest and a posttest (e.g. therapy)
Evidence from distinct groups
Scores on a test vary in a predictable way as a function of membership in some group (e.g. scores on the Psychopathy Checklist for prisoners vs. civilians)
Scores on the test undergoing construct validation tend to correlate highly in the predicted direction with scores on older, more established, tests designed to measure the same (or a similar) construct
Validity coefficient showing little relationship between test scores and other variables with which scores on the test should not theoretically be correlated
- A class of mathematical procedures, frequently employed as data reduction methods, designed to identify the factors or specific variables on which people may differ
- A new test should load on a common factor with other tests of the same construct
- A factor inherent in a test that systematically prevents accurate, impartial measurement
- Bias implies systematic variation in test scores
- Prevention during test development is the best cure for test bias
Minimize test bias
- Prevention during test development is the best cure for test bias, though a procedure called estimated true score transformations represents one of many available post hoc remedies
- An error in measurement characterized by a tool of assessment indicating that the test taker possesses or exhibits a particular trait, ability, behavior, or attribute when in fact the test taker does not
- A numerical or verbal judgment that places a person or attribute along a continuum identified by a scale of numerical or word descriptors called a rating scale
- A judgment resulting from the intentional or unintentional misuse of a rating scale.
- Raters may be either too lenient, too severe, or reluctant to give ratings at the extremes (central tendency error)
Central tendency error
- A type of rating error wherein the rater exhibits a general reluctance to issue ratings at either the positive or negative extreme
- Consequently, all or most of the rater's ratings tend to cluster in the middle of the rating continuum
- A tendency to give a particular person a higher rating than he or she objectively deserves because of a favorable overall impression
The extent to which a test is used in an impartial, just, and equitable way
Gottfredson and group differences on tests
According to Gottfredson, the answer to group differences on tests will not come from measurement-related research, because differences in scores on many of the tests in question arise principally from differences in job-related abilities