Chapter 6
Validity
Terms in this set (42)
Validity
• A judgement or estimate of how well a test measures what it purports to measure in a particular context
Validation
• Process of gathering and evaluating evidence about validity
Three types of validity
• Content validity
• Criterion related validity
• Construct validity
Three approaches to assessing validity
• Scrutinizing the test's content
• Relating scores obtained on the test to other test scores or other measures
• Executing a comprehensive analysis of:
1. How scores on the test relate to other test scores and measures
2. How scores on the test can be understood within a theoretical framework for understanding the construct that the test was designed to measure
Face Validity
• The extent to which a test appears, to the person being tested, to measure what it is supposed to measure
• A judgement concerning how relevant the test items appear to be
• E.g. an inkblot test may appear low in face validity, while an extroversion/introversion test may appear high in face validity
• A lack of face validity can undermine confidence in the perceived effectiveness of the test, which may reduce test takers' motivation and cooperation
Content Validity
• How well the test samples the behaviour or content domain it intends to measure
• Important for occupational psychologists: legally, a selection test must not discriminate on grounds of gender, race or any factor other than the individual's ability to do the job
Quantification of content validity
• Ask a panel of experts to rate whether each skill or knowledge area being tested is:
1. Essential
2. Useful but not necessary
3. Not necessary
Content validity ratio
• CVR = (number of panelists saying a particular question is essential − total number of panelists/2) divided by (total number of panelists/2)
• Negative CVR= fewer than half the panelists indicate a question to be essential
• Zero CVR= exactly half the panelists view a question to be essential
• Positive CVR= more than half (but not all) the panelists view a question is essential
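The CVR formula and the sign rules above can be sketched in Python; the panel counts below are invented for illustration:

```python
# Lawshe's content validity ratio (CVR) for a single item, a minimal sketch.
# n_essential: panelists rating the item "essential"; n_total: panel size.

def content_validity_ratio(n_essential: int, n_total: int) -> float:
    """CVR = (n_e - N/2) / (N/2)."""
    half = n_total / 2
    return (n_essential - half) / half

print(content_validity_ratio(5, 10))   # exactly half say essential -> 0.0
print(content_validity_ratio(8, 10))   # more than half -> positive (0.6)
print(content_validity_ratio(3, 10))   # fewer than half -> negative (-0.4)
```

Note that CVR reaches its maximum of 1.0 only when every panelist rates the item essential, matching the "more than half (but not all)" wording for positive values.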
Criterion Related Validity
• A test score is evaluated against an external criterion, e.g. pass/fail on a driving test or an exam
• Predictive Validity
• Concurrent Validity
Predictive Validity
• E.g. if there is a correlation between number of hours of study done now and a later exam score, can hours of study predict the exam score?
• Degree to which a test score predicts a criterion measure
• Can be used to predict outcome of a treatment or therapy
Concurrent Validity
• If there is a correlation between a variable now and an external criterion measured at the same time, e.g. is there a correlation between score on a depression scale and current home life?
• Degree to which a test score is related to some criterion measure obtained at the same time
Criterion contamination
• Occurs when the criterion measure is itself based on a predictor measure
• E.g. using a guard's opinion to rate inmates' violence potential when that same guard's opinion was also used as the criterion: the same measure cannot serve as both predictor and criterion
Validity coefficient
• Provides a measure of the relationship between test scores and scores on the criterion measure
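In practice the validity coefficient is usually a correlation coefficient between test scores and criterion scores; a minimal sketch with invented data:

```python
# Validity coefficient as the Pearson correlation between test scores and
# criterion scores (all data below are hypothetical).
import numpy as np

test_scores      = np.array([40, 55, 60, 68, 75, 90])
criterion_scores = np.array([50, 52, 61, 70, 74, 85])  # e.g. performance ratings

validity_coefficient = np.corrcoef(test_scores, criterion_scores)[0, 1]
print(f"validity coefficient r = {validity_coefficient:.2f}")
```

A coefficient near 1 indicates strong criterion-related validity; near 0, the test tells you little about the criterion.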
Incremental validity
• Degree to which an additional predictor explains something about the criterion measure that is not explained by predictors already in use
• E.g. hours spent in the library may predict academic success over and above what GPA alone predicts
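One common way to quantify incremental validity is to compare explained criterion variance (R²) with and without the additional predictor; a sketch with invented data:

```python
# Incremental validity sketch (all data invented): does adding "library hours"
# improve prediction of exam score beyond GPA alone?
import numpy as np

gpa     = np.array([2.1, 2.8, 3.0, 3.3, 3.6, 3.9])
library = np.array([1.0, 4.0, 2.0, 6.0, 5.0, 8.0])   # hours per week
exam    = np.array([55., 64., 61., 75., 72., 88.])

def r_squared(X, y):
    """Proportion of criterion variance explained by a least-squares fit."""
    X = np.column_stack([np.ones(len(y)), X])        # add intercept column
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_gpa  = r_squared(gpa.reshape(-1, 1), exam)
r2_both = r_squared(np.column_stack([gpa, library]), exam)
print(f"R² (GPA only): {r2_gpa:.2f}  R² (GPA + library): {r2_both:.2f}")
# The increase in R² reflects the new predictor's incremental validity.
```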
Expectancy data
• Provide information that can be used in evaluating the criterion related validity of a test
• Illustrate the likelihood that the test taker will score within some interval of scores on a criterion measure, which may be labelled 'passing' or 'failing'
Taylor Russell tables
• Provide an estimate of the extent to which inclusion of a particular test in the selection system will actually improve selection
• E.g. provide an estimate of the percentage of employees hired by the use of a particular test who will be successful at their jobs.
• Combination of 3 variables:
• The tests validity
• The selection ratio (a numerical value that reflects the relationship between the number of people to be hired and the number of people available to be hired)
• The base rate (the percentage of people hired under the existing system for a particular position)
• Provides the personnel officer with an estimate of how much using the test would improve selection over existing methods
• Judges utility of a particular test & determines the increase over current procedures
Naylor-Shine Tables
• Used to obtain the difference between the means of the selected and unselected groups, deriving an index of what the test adds to already established procedures
• Determines the increase in average score on some criterion measure and judges the utility of a particular test
Base rate
• Extent to which a particular trait, behaviour, characteristic or attribute exists in the population
Hit rate
• The proportion of people a test accurately identifies as possessing or exhibiting a particular trait, behaviour, characteristic or attribute
Miss rate
• Proportion of people the test fails to identify as having, or not having, a particular characteristic or attribute
False positive
• A miss where the test predicts that the test taker did possess the particular characteristic or attribute being measured when they did not
False negative
• A miss where the test predicts that the test taker did not possess the particular characteristic being measured when they did
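The four outcomes above can be counted directly from predicted versus actual status; a sketch with invented classifications (here "hit" is read as any correct classification, whether positive or negative):

```python
# Hit rate, miss rate, false positives and false negatives, computed from
# hypothetical test predictions vs actual status (1 = has the attribute).

predicted = [1, 1, 0, 0, 1, 0, 1, 0]
actual    = [1, 0, 0, 0, 1, 1, 1, 0]

hits            = sum(p == a for p, a in zip(predicted, actual))
false_positives = sum(p == 1 and a == 0 for p, a in zip(predicted, actual))
false_negatives = sum(p == 0 and a == 1 for p, a in zip(predicted, actual))

n = len(actual)
print(f"hit rate:  {hits / n:.2f}")        # proportion classified correctly
print(f"miss rate: {(n - hits) / n:.2f}")  # misses = false pos + false neg
print(f"false positives: {false_positives}, false negatives: {false_negatives}")
```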
Construct Validity
• Judgement about the appropriateness of inferences drawn from test scores regarding individuals' standing on a variable called a construct
• How well do inferences drawn from a test score relate to current theories/knowledge (Constructs)
Construct
• Unobservable, underlying trait that a test developer may use to describe test behaviour or criterion performance
• An informed scientific idea developed or hypothesised to describe or explain behaviour
• E.g. intelligence describes why a person performs as they do in school; anxiety, etc.
• Evidence of construct validity:
1. test homogeneous, measures a single construct
2. test scores increase or decrease as a function of age, time or experimental manipulation
3. test scores obtained after some event or passing of time differ from pre-test scores
4. test scores obtained from people of distinct groups vary
5. test scores correlate with scores of other tests in accordance with what would be predicted
Homogeneity
• how uniform a test is at measuring a single concept
One item analysis procedure
• Focuses on the relationship between test takers' scores on individual items and their scores on the entire test
• Each item is then analysed with respect to how high scorers versus low scorers responded to it
Convergent evidence
• Measuring the validity of a test against a pre-existing, already validated test
Discriminant evidence
• A validity coefficient showing little relationship between scores on the test being construct-validated and scores on measures with which they should not correlate
• (Opposite of convergent evidence)
Factor Analysis
• A mathematical procedure for identifying 'factors': underlying attributes, characteristics, etc., that account for patterns in test scores
Exploratory Factor Analysis
• Deciding how many factors to retain; establishing or extracting factors
Confirmatory Factor Analysis
• Factor structure is hypothesised and tested for its fit with observed covariance structure of the measured variables
Factor Loading
• Conveys information about the extent to which the factor determines the test score or scores
Test bias
• A factor inherent in a test that systematically prevents accurate, impartial measurement
Systematic
• The same every time
Intercept bias
• When a test systematically underpredicts or overpredicts the performance of members of a particular group with respect to the criterion (e.g. people with green eyes)
• Named for the intercept: the point where the regression line crosses the Y axis differs between groups
Slope bias
• When a test systematically yields significantly different validity coefficients for members of different groups
Rating Error
• A rating is a numerical or verbal judgement that places a person or attribute along a continuum identified by a scale of numerical or word descriptors (a rating scale); a rating error is a judgement resulting from the misuse of such a scale
Leniency error (aka. Generosity error)
• When the test scorer is lenient and gives higher marks than are deserved
Severity error
• Test scorer being extremely critical
Central tendency error
• When rater exhibits systematic reluctance to give ratings at either the positive or negative extreme
Rankings
• Procedure put in place to overcome restriction of range rating errors where individuals are measured against each other instead of an absolute scale
Halo effect
• Tendency to rate a particular individual higher than a fair, objective rating would warrant, due to liking the person; e.g. Miley Cyrus giving a speech on validity would be marked better by her fans than by objective professionals