T(u_j = 1 | theta_i) = 1 / (1 + e^(-a(theta_i - b_j)))
--Applicable to data where all test items have the same discrimination parameter
-- Left side = probability of responding correctly to the test item (j)
-- theta = latent factor (xenophobia, test anxiety, idiocy, etc.)
-- b = "difficulty," the point on the theta continuum where there is equal likelihood of answering correctly or not
--a = discrimination parameter, slope of the theta trace line at point b.
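A minimal sketch of this trace line in Python, assuming NumPy is available; the function name and parameter values are illustrative, not from any real dataset:
```python
import numpy as np

def trace_line(theta, a, b):
    """Probability of a correct response: T(u_j = 1 | theta) = 1 / (1 + exp(-a * (theta - b)))."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Illustrative values: common discrimination a = 1.2, item difficulty b = 0.5.
print(trace_line(theta=0.5, a=1.2, b=0.5))   # at theta = b the probability is exactly .5
print(trace_line(theta=2.0, a=1.2, b=0.5))   # well above b, so the probability is high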
L ∝ Π over patterns u of [ ∫ Π from j = 1 to k of T_j(theta)^(u_j) * (1 - T_j(theta))^(1 - u_j) * phi(theta) d theta ]^(r_u)
-- Tj = the trace line, the probability of answering item j correctly
-- (1 - Tj) = the complement, the probability of answering item j incorrectly
-- Π from j = 1 to k of ... = items are locally independent (of each other), so you can multiply the item probabilities together to get the probability of the entire response pattern on the test
-- ...as a function of theta = theta is assumed to follow a standard normal distribution (the phi(theta) term), so it is a continuous random variable
-- ∫ ... d theta = integrates theta out because the model assumes one dimension; a multidimensional model integrates over the extra dimensions (theta2, theta3, etc.) as well
-- Π_u (...)^(r_u) = multiplies the pattern probabilities across all subjects, each pattern raised to r_u (the number of subjects who gave that pattern), to get the likelihood for the whole sample, BECAUSE subjects are assumed independent (a code sketch of the pattern likelihood follows below)
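A small sketch of the inner product over items (local independence) at a single fixed theta, using hypothetical item parameters and a made-up response pattern; the integral over theta is handled with quadrature, sketched after the next card:
```python
import numpy as np

def pattern_likelihood_given_theta(u, a, b, theta):
    """Multiply T_j^(u_j) * (1 - T_j)^(1 - u_j) across the k items (local independence)."""
    T = 1.0 / (1.0 + np.exp(-a * (theta - b)))   # one trace-line value per item
    return np.prod(T**u * (1.0 - T)**(1 - u))

# Hypothetical 4-item test; u codes the response pattern (1 = correct, 0 = incorrect).
a = np.array([1.0, 1.2, 0.8, 1.5])
b = np.array([-1.0, 0.0, 0.5, 1.0])
u = np.array([1, 1, 0, 0])
print(pattern_likelihood_given_theta(u, a, b, theta=0.0))
```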
-- Points along the theta continuum at which the curve is evaluated; the areas of the rectangles under the curve between these points are added up to closely approximate the true integral of the curve. The more quadrature points, the closer the summation comes to the real integral, but the longer the computation takes.
-- Quadrature points / area summation are used to approximate the likelihood function in EM MML
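A sketch of that rectangle-style quadrature summation, assuming an equally spaced grid from -4 to +4 with standard normal weights; the item parameters and response pattern are again hypothetical:
```python
import numpy as np

def marginal_pattern_probability(u, a, b, n_points=41):
    """Approximate the integral of the pattern likelihood times phi(theta)
    by a weighted sum over quadrature points."""
    theta_q = np.linspace(-4.0, 4.0, n_points)              # quadrature points
    w = np.exp(-0.5 * theta_q**2)
    w /= w.sum()                                            # normalized N(0, 1) weights
    T = 1.0 / (1.0 + np.exp(-a * (theta_q[:, None] - b)))   # (n_points, n_items)
    cond = np.prod(T**u * (1.0 - T)**(1 - u), axis=1)       # pattern likelihood at each point
    return np.sum(w * cond)

a = np.array([1.0, 1.2, 0.8, 1.5])
b = np.array([-1.0, 0.0, 0.5, 1.0])
u = np.array([1, 1, 0, 0])
# More quadrature points -> closer to the true integral, but more work.
print(marginal_pattern_probability(u, a, b, n_points=11))
print(marginal_pattern_probability(u, a, b, n_points=101))
```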
Expectation-Step -- Find the likelihood function of theta (the area under the curve)
-- Use this to calculate the expected frequencies of each response pattern (TFTF, FFTT, etc.), i.e., the expected number of subjects responding with each pattern at each level of theta
Maximization-Step -- Use these expected frequencies to find the maximum value of the same likelihood function under these conditions:
-Theta is random, with M = 0 and SD = 1
-The expected pattern frequencies are fixed at the values from the E-step above
-The item parameters (a and b) are free, so they are adjusted to find the highest possible value of the likelihood function
Plug the new item parameters back into the E-step and find a new likelihood function
Repeat until convergence (no change, or a change smaller than 0.000001); see the code sketch below
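A compact, illustrative sketch of the E-step / M-step cycle for a 2PL model, assuming NumPy and SciPy are available. It loops over subjects rather than over distinct response patterns (equivalent, since identical patterns just contribute the same term r_u times), and the starting values, quadrature grid, and tolerance are assumptions, not prescriptions:
```python
import numpy as np
from scipy.optimize import minimize

def em_mml_2pl(U, n_points=21, max_iter=100, tol=1e-6):
    """U is an (n_subjects, n_items) 0/1 response matrix; returns estimated a and b."""
    n_sub, n_items = U.shape
    theta_q = np.linspace(-4.0, 4.0, n_points)          # quadrature points
    w = np.exp(-0.5 * theta_q**2)
    w /= w.sum()                                         # N(0, 1) prior weights (M = 0, SD = 1)
    a, b = np.ones(n_items), np.zeros(n_items)           # starting values
    prev_ll = -np.inf
    for _ in range(max_iter):
        # E-step: posterior weight of each quadrature point for each subject,
        # then expected frequencies at each theta level.
        T = 1.0 / (1.0 + np.exp(-a * (theta_q[:, None] - b)))       # (n_points, n_items)
        cond = np.exp(U @ np.log(T).T + (1 - U) @ np.log(1 - T).T)  # (n_sub, n_points)
        marg = cond @ w                                             # marginal prob per subject
        post = cond * w / marg[:, None]                             # rows sum to 1
        n_q = post.sum(axis=0)                    # expected number of people at each point
        r_jq = post.T @ U                         # expected number correct per item and point

        # M-step: with expected frequencies fixed, tweak each item's (a, b)
        # to maximize the expected complete-data log-likelihood.
        for j in range(n_items):
            def neg_ll(p, j=j):
                Tj = 1.0 / (1.0 + np.exp(-p[0] * (theta_q - p[1])))
                Tj = np.clip(Tj, 1e-10, 1 - 1e-10)
                return -np.sum(r_jq[:, j] * np.log(Tj) + (n_q - r_jq[:, j]) * np.log(1 - Tj))
            a[j], b[j] = minimize(neg_ll, x0=[a[j], b[j]], method="Nelder-Mead").x

        ll = np.sum(np.log(marg))                 # marginal log-likelihood
        if abs(ll - prev_ll) < tol:               # convergence: change < 0.000001
            break
        prev_ll = ll
    return a, b

# Illustrative use with simulated data: 500 subjects, 5 items, made-up true parameters.
rng = np.random.default_rng(0)
true_a, true_b = np.array([1.0, 1.5, 0.8, 1.2, 2.0]), np.array([-1.0, 0.0, 0.5, 1.0, -0.5])
theta = rng.standard_normal((500, 1))
U = (rng.random((500, 5)) < 1.0 / (1.0 + np.exp(-true_a * (theta - true_b)))).astype(float)
print(em_mml_2pl(U))
```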
--During an M-step, item parameters (a and b) are estimated to give maximum likelihood. They are then plugged back into their item response functions (the trace lines, as functions of theta)
--Multiply all of the item response functions associated with a given response pattern (so for TFTF: probability of correct times incorrect times correct times incorrect)
--This gives the likelihood of that specific response pattern, which depends only on theta because all other parameters are now "known" (estimated)
--Assume theta is standard normal, as before
--Use this prior to form the posterior distribution: multiply it by the pattern likelihood (the product of the trace lines) and divide by the sum of the same quantity across all quadrature points, which makes it sum (integrate) to 1
-- The posterior distribution gives the relative weight of each theta level for people who gave that response pattern. Multiply it by the number of people in the sample who gave that pattern, then sum across patterns, to get the total expected number of people at each level (quadrature point) of the theta distribution (see the sketch below)
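A sketch of that posterior calculation for a single response pattern over a grid of quadrature points; the item parameters and the pattern count are hypothetical:
```python
import numpy as np

def posterior_over_theta(u, a, b, n_points=21):
    """Prior weight at each quadrature point times the pattern likelihood there,
    divided by the sum over all points so the result sums to 1."""
    theta_q = np.linspace(-4.0, 4.0, n_points)
    prior = np.exp(-0.5 * theta_q**2)
    prior /= prior.sum()                                     # standard normal prior
    T = 1.0 / (1.0 + np.exp(-a * (theta_q[:, None] - b)))
    like = np.prod(T**u * (1.0 - T)**(1 - u), axis=1)        # likelihood of the pattern
    post = prior * like
    return theta_q, post / post.sum()

a = np.array([1.0, 1.2, 0.8, 1.5])
b = np.array([-1.0, 0.0, 0.5, 1.0])
theta_q, post = posterior_over_theta(np.array([1, 1, 0, 0]), a, b)
r_u = 37                       # hypothetical number of people who gave this pattern
expected_people = r_u * post   # expected number of those people at each quadrature point
print(theta_q[np.argmax(post)], expected_people.sum())   # sums back to 37
```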
--"Information" is the reciprocal of the precision of the parameter estimate (theta). Precision in analysis is defined as the variability. Therefore the information value for estimated theta is:
I= 1/SD^2theta
--SD^2 (the error variance of theta) for a test is:
1 / ( sum from j = 1 to k of [T_j']^2 / (T_j (1 - T_j)) )
or the reciprocal of the sum, over all k test items, of the squared derivative of the probability of a correct response (T_j' = dT_j/dtheta), divided by the probability of a correct response times the probability of an incorrect response.
--So information is the reciprocal of all that,
--which can be rearranged to:
I = sum from j = 1 to k of [T_j']^2 / (T_j (1 - T_j))
(the reciprocal of a reciprocal cancels out).
In words: the sum over the items, from j = 1 to k, of the squared derivative of T with respect to theta, divided by (probability correct times probability incorrect).
-- To find the information for one given item, just remove the summation and evaluate the expression for item j.
-- In a 2PL model: T_j = 1 / (1 + e^(-a_j (theta_i - b_j)))
--Plug this T into the general equation for I above; since dT_j/dtheta = a_j T_j (1 - T_j), the algebra works out to:
for any item j in a 2PL model: I_j = a_j^2 T_j (1 - T_j)
or the squared discrimination parameter times the probability of a correct response times the probability of an incorrect response.
-- For a 1PL model (all items share the same discrimination, taken as a = 1): I_j = T_j (1 - T_j)
--For 3PL:
I_j = a_j^2 * ((1 - T_j) / T_j) * ((T_j - g_j)^2 / (1 - g_j)^2)
where g_j is the "guessing" parameter (lower asymptote). (See the code sketch below.)
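A small sketch of these information formulas with hypothetical item parameters; test information is just the sum of the item informations, and the standard error of theta is 1 over the square root of that sum:
```python
import numpy as np

def info_2pl(theta, a, b):
    """I_j = a_j^2 * T_j * (1 - T_j); setting a = 1 gives the 1PL case T_j * (1 - T_j)."""
    T = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * T * (1.0 - T)

def info_3pl(theta, a, b, g):
    """I_j = a_j^2 * ((1 - T_j) / T_j) * ((T_j - g_j)^2 / (1 - g_j)^2)."""
    T = g + (1.0 - g) / (1.0 + np.exp(-a * (theta - b)))   # 3PL trace line with lower asymptote g
    return a**2 * ((1.0 - T) / T) * ((T - g)**2 / (1.0 - g)**2)

# Hypothetical 3-item test evaluated at theta = 0.
a = np.array([1.0, 1.2, 0.8])
b = np.array([-0.5, 0.0, 1.0])
test_info = info_2pl(0.0, a, b).sum()          # test information = sum of item informations
print(test_info, 1.0 / np.sqrt(test_info))     # and SE(theta) = 1 / sqrt(information)
print(info_3pl(0.0, a, b, g=np.array([0.2, 0.25, 0.2])))
```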
--These statistics group examinees by their model-estimated theta, so the "observed" response proportions end up depending on the fitted model. This is inappropriate for a chi-square distribution and badly inflates Type I error (false positives).
-- The tests put subjects into equal-sized subgroups, so the results are heavily sample-dependent, and the way those subgroups are chosen (how many, how they are spaced) strongly affects the resulting statistic.
-- Find the observed proportions of correct responses using only the collected data. So, for everyone who got a summed score of 8, for example, what proportion answered the item correctly? Do this for every possible summed score.
-- Then find the expected proportions correct: the model-implied likelihood of a right or wrong response within each summed-score group, based on the fitted IRT model.
-- S-X^2_i = sum from k = 1 to n-1 of N_k * (O_ik - E_ik)^2 / (E_ik * (1 - E_ik))
k = index of the summed-score group (n = number of items, so the groups run from 1 to n - 1)
O_ik = observed proportion correct, already calculated
E_ik = expected proportion correct, already calculated
N_k = number of people in summed-score group k
--Since the observed proportions do not depend on the model, there is no conflict as with the other fit statistics
--The grouping by summed score does not depend on sample size
--Data are grouped by each possible summed score (also not sample-dependent)
-- Can also be used on polytomous items (items with more than two response categories); a computational sketch follows below
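A minimal sketch of computing S-X^2 for one item once the observed proportions, expected proportions, and group sizes are in hand; all numbers below are made up for illustration:
```python
import numpy as np

def s_x2(observed, expected, group_n):
    """S-X^2 = sum over summed-score groups of N_k * (O_k - E_k)^2 / (E_k * (1 - E_k))."""
    O, E, N = (np.asarray(x, dtype=float) for x in (observed, expected, group_n))
    return np.sum(N * (O - E)**2 / (E * (1.0 - E)))

# Made-up values for one item on a 4-item test (summed-score groups 1, 2, 3):
O = [0.20, 0.55, 0.80]   # observed proportion correct in each score group
E = [0.25, 0.50, 0.78]   # model-implied proportion correct in each score group
N = [40, 55, 35]         # number of examinees in each score group
# Compare the result to a chi-square distribution (degrees of freedom depend on the
# number of score groups and the number of estimated item parameters).
print(s_x2(O, E, N))
```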