Reliability
Terms in this set (20)
Reliability
Consistency of measurement; how consistent test results and other assessment results are from one measurement to another.
inter-rater reliability
consistency of rating among different markers
intra-rater reliability
The consistency of scoring by the same rater from one occasion to another
parallel-forms reliability
assesses the consistency of the results of two tests constructed in the same way from the same content domain
Test-retest method
Give the same test to the same group with a time interval between tests, from several minutes to several years. Measure of stability
Equivalent forms method
Give two forms of the test to the same group in close succession
Test-retest with equivalent forms
Give two forms of the test to the same group with increased time interval between forms
Split-half method
Give test once. Score two equivalent halves of test (e.g., odd items and even items); correct the correlation between halves to fit the whole test by the Spearman-Brown formula. Measure of internal consistency
Kuder-Richardson method and coefficient Alpha
Give test once. Score total test and apply the Kuder-Richardson formula.
Inter-rater method
Give a set of student responses requiring judgmental scoring to two or more raters and have them independently score the responses.
Measure of consistency of ratings
Internal consistency
estimates deal with the sources of errors within the test and the scoring procedures.
Stability estimates
show how consistent test scores are over time.
equivalence
estimates indicate how scores on alternate forms of a test are equivalent.
Stability (Test-Retest Reliability)
Reliability can be computed by giving the test more than once to the same candidates. It provides an estimate of the stability of the test scores.
True Score and Error Score
The CTS measurement theory defines two sources of variance: true score variance and error score variance. Systematic variations are reasonable or logical variations: if a candidate studies hard, he or she is going to do better in the second administration. Since the reason for such a variation is known, it is called systematic. However, if the rater cannot find a logical reason for the observed variations, they are called unsystematic. Reliability is estimated in order to calculate the true scores of the candidates. True scores are usually difficult to calculate, since tests and measurements contain error.
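The CTS model (observed score = true score + error score, with reliability the proportion of observed-score variance that is true-score variance) can be illustrated by simulation. The variances below are invented assumptions for the sketch.

```python
import random
import statistics

random.seed(42)

# CTS model: observed score X = true score T + error score E.
# Assume true scores with variance 100 (sd 10) and random error
# with variance 25 (sd 5).
true_scores = [random.gauss(50, 10) for _ in range(10_000)]
errors = [random.gauss(0, 5) for _ in range(10_000)]
observed = [t + e for t, e in zip(true_scores, errors)]

# Reliability = true-score variance / observed-score variance;
# with these assumed variances it should land near 100 / (100 + 25) = 0.8.
reliability = statistics.pvariance(true_scores) / statistics.pvariance(observed)
```

The simulation shows why unsystematic (random) error lowers reliability: the larger the error variance relative to the true-score variance, the smaller the ratio.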
Kuder-Richardson Reliability Coefficients (KR-20)
Reliability estimates based on item variances would call for splitting the test into halves in every way possible, computing a reliability coefficient for each split, and then averaging these coefficients. Instead, the Kuder-Richardson reliability coefficient (KR-20) arrives at the same result more conveniently, without computing the reliability of every possible split-half combination: KR-20 = (k / (k - 1)) (1 - Σpq / S²), where k is the number of items on the test, p is the proportion of candidates answering an item correctly, q = 1 - p, and S² is the total variance of the test scores.
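A sketch of the KR-20 computation, using invented dichotomous (1/0) item responses; the formula applied is the standard KR-20, with k the number of items, p the per-item proportion correct, and S² the variance of total scores.

```python
import statistics

# Hypothetical dichotomous responses: rows = candidates, columns = items.
responses = [
    [1, 1, 1, 0, 1, 1, 0, 1],
    [1, 0, 1, 1, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 0, 1],
    [1, 1, 1, 1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0, 1, 0, 0],
]

k = len(responses[0])                      # number of items
n = len(responses)                         # number of candidates
totals = [sum(row) for row in responses]   # total score per candidate
s2 = statistics.pvariance(totals)          # variance of total scores

# Sum p*q over items, where p is the proportion answering the item correctly.
sum_pq = 0.0
for j in range(k):
    p = sum(row[j] for row in responses) / n
    sum_pq += p * (1 - p)

# KR-20 = (k / (k - 1)) * (1 - sum(pq) / S^2)
kr20 = (k / (k - 1)) * (1 - sum_pq / s2)
```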
Kuder-Richardson Reliability Coefficients (KR-21)
If the items are of nearly equal difficulty and independent of each other, the reliability coefficient can be computed by a formula that is both easier to compute and requires less information. KR-21 is a shortcut method that is less accurate than KR-20.
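The shortcut character of KR-21 shows in what it needs: only the number of items, the mean, and the variance of the total scores, with no per-item data. The totals below are invented; the formula is the standard KR-21.

```python
import statistics

# Hypothetical total scores from an 8-item test (no item-level data needed).
k = 8
totals = [6, 5, 3, 8, 2]

m = statistics.mean(totals)        # mean total score
s2 = statistics.pvariance(totals)  # variance of total scores

# KR-21 = (k / (k - 1)) * (1 - M(k - M) / (k * S^2))
kr21 = (k / (k - 1)) * (1 - (m * (k - m)) / (k * s2))
```

Because KR-21 assumes all items are equally difficult, it generally gives a somewhat lower (more conservative) estimate than KR-20 on the same data.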