SEM Test 1
Terms in this set (73)
γ = relationship (path) from an exogenous variable predicting an endogenous variable
β = relationship endogenous variable predicting another endogenous variable (between the endogenous variables)
measurement error in the measurement model. What's left over after the shared variance (what the latent variable captures) is pulled out
What dimensions is Ψ psi matrix, and what does it contain?
a p X p variance-covariance matrix of the residuals (zetas)
ε/measurement errors on the Y side (the endogenous side)
xi/ksi - An exogenous/independent variable. All the shared variance from the items.
ξ = an exogenous (independent) predictor variable. It's all the shared variance.
η = an endogenous (dependent) variable
λ = Path from latent independent variables to their indicators.
The number of free parameters
φ = variance/covariance matrix for the exogenous xi variables. If you have 2 xi's, then phi would be a 2 x 2 matrix. Exogenous variables. You have the variance of x1, the variance of x2, and the covariance of x1 & x2. All the variances related to each xi: how much is captured by each, and how much they covary
θδ = theta-delta matrix
In CFA: the variance/covariance of the error terms. An i X i variance/covariance matrix of relations among the residual terms of X
* Helps answer: after you take out the latent variable, is there correlation left over among the residuals?
# of observed exogenous variables
# of observed endogenous variables
# latent endogenous variables
# of latent exogenous variables
ζ = prediction error. The residuals. What's left over after all the X's try to predict the same thing. We add in our predictors, all predicting this outcome, and whatever is left over means we're missing something (like maybe variables aren't related to the Y, or maybe we need to add some better items)
the number of free parameters
Purpose of t rule
You want to know whether the number of parameters being estimated in your model is less than or equal to the number of known values (unique variances and covariances) you're putting into it.
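The t rule check above can be sketched in a few lines (a hypothetical helper name, not from the course materials):

```python
def t_rule_ok(t, p):
    """t rule: free parameters t must be <= p(p+1)/2, the number of
    unique variances/covariances among p observed variables."""
    known = p * (p + 1) // 2
    return t <= known

# 4 observed variables give 4*5/2 = 10 known values
print(t_rule_ok(8, 4))   # True: 8 <= 10, identification is possible
print(t_rule_ok(12, 4))  # False: 12 > 10, the model is not identified
```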
A scalar is a 1 x 1 matrix. To multiply a matrix by a scalar, you multiply every element in the matrix by that number.
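For example, in numpy:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
# Scalar multiplication hits every element
print(3 * A)
# [[ 3  6]
#  [ 9 12]]
```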
How do you know if 2 matrices can be multiplied?
If number of columns in first matrix equals number of rows in second one.
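A quick numpy check of that conformability rule:

```python
import numpy as np

A = np.zeros((2, 3))  # 2 rows, 3 columns
B = np.zeros((3, 5))  # 3 rows, 5 columns
print((A @ B).shape)  # (2, 5): columns of A (3) match rows of B (3)
# B @ A would raise a ValueError: B has 5 columns but A has only 2 rows
```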
In matrix notation, is row first or column first?
row X column
All columns are independent of each other
What is a singular matrix? (and one implication of it)
Matrix with linear dependence, where one column is related to another column. You can't invert a singular matrix.
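A small numpy demonstration of both points (the linear dependence and the failed inversion):

```python
import numpy as np

# Second column = 2 x the first column: linear dependence, so S is singular
S = np.array([[1.0, 2.0],
              [2.0, 4.0]])
print(abs(np.linalg.det(S)) < 1e-12)  # True: the determinant is zero
try:
    np.linalg.inv(S)
except np.linalg.LinAlgError:
    print("a singular matrix cannot be inverted")
```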
How do you divide a matrix?
You take the inverse of the second matrix, then multiply.
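In numpy that looks like (B here chosen to be invertible):

```python
import numpy as np

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])
B = np.array([[1.0, 0.0],
              [0.0, 2.0]])
# "A divided by B" in matrix terms: multiply A by the inverse of B
result = A @ np.linalg.inv(B)  # [[4. , 3.5], [2. , 3. ]]
```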
What does a transposed matrix look like?
Flip vertical, rotate counterclockwise 90. Row 1 becomes column 1.
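In numpy, transposition is just `.T`:

```python
import numpy as np

M = np.array([[1, 2, 3],
              [4, 5, 6]])   # 2 x 3
print(M.T)                  # row 1 becomes column 1
# [[1 4]
#  [2 5]
#  [3 6]]
print(M.T.shape)            # (3, 2): the dimensions swap
```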
What are the dimensions of the transpose of a 5x3 matrix?
3x5
What does a symmetric matrix look like?
Same on both sides (like a correlation table)
Variable that originates inside the model, predicted by some other variable inside the model.
What is the beta matrix
All the relationships between endogenous variables.
What variable represents X?
What variable represents Y?
One-directional paths only. No feedback loops.
Model with feedback loops
More than enough information to estimate your model.
You have just enough information to estimate your model, to estimate each of those points in the covariance matrix.
Assumptions of ML estimation
1. Sample size is large
2. Model-implied matrix and observed matrix are non-singular (no linear dependence)
3. All observations are independent and identically distributed.
Disadvantages of ML estimation (2)
1. Need larger sample size, which is bad because that's going to jack up our chi square
2. Assumes normality for all the variables
Advantages of ML estimation (4)
1. Estimates of the parameters are unbiased (they will be close to the true population values)
2. Consistent: As N becomes large, parameter estimates get closer to population params
3. Efficient: As N becomes large, variance of the estimate is minimized
4. Scale-invariant & Scale-free: Scale of the variables doesn't matter. You can add 10 to all X values and get the same result.
This means you can analyze covariances OR correlations.
What are Barron and Kenny's steps for investigating mediation?
Step 1: A: What is the direct effect of X on the mediator? If the mediator is going to explain something, then X has to have SOME relationship with it.
Step 2: C: What is the effect of X on the outcome variable? There should ultimately be an effect.
Step 3: B: Controlling for X, the mediator should still predict the outcome.
Step 4: The full model. You're looking for a change in C (called c-prime/C'). If C' is equal to zero, then you have full mediation. If you test the full model by controlling for M, and you have nothing left over, then you have full mediation.
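The four steps above can be sketched with simulated data and plain least squares (hypothetical paths a = 0.5, b = 0.4, c' = 0.1; the `coefs` helper is made up for this sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)            # X -> M (path a)
y = 0.4 * m + 0.1 * x + rng.normal(size=n)  # M -> Y (b) plus direct X -> Y (c')

def coefs(predictors, outcome):
    """OLS slopes via least squares (intercept dropped from the return)."""
    X = np.column_stack([np.ones(len(outcome))] + predictors)
    return np.linalg.lstsq(X, outcome, rcond=None)[0][1:]

a = coefs([x], m)[0]           # Step 1: X must predict M
c = coefs([x], y)[0]           # Step 2: X must predict Y (total effect)
b, c_prime = coefs([m, x], y)  # Steps 3-4: M predicts Y; compare c' to c
# Here c' (~0.1) is smaller than c (~0.3) but not zero: partial mediation
```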
What's a problem with the Sobel test?
It assumes the sampling distribution of the indirect effect (a*b) is normal, which it often isn't.
Which is better? Bootstrapping or ML?
Bootstrapping is better than ML because it gets around the normality assumption.
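A minimal bootstrap sketch of the indirect effect a*b (simulated data; the `indirect` helper is made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)
y = 0.4 * m + rng.normal(size=n)

def indirect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                   # X -> M slope
    X = np.column_stack([np.ones(len(y)), m, x])
    b = np.linalg.lstsq(X, y, rcond=None)[0][1]  # M -> Y controlling for X
    return a * b

boots = []
for _ in range(1000):
    idx = rng.integers(0, n, n)  # resample cases with replacement
    boots.append(indirect(x[idx], m[idx], y[idx]))
lo, hi = np.percentile(boots, [2.5, 97.5])
# If the 95% CI (lo, hi) excludes 0, mediation is supported --
# no normality assumption about a*b is needed
```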
List 3 problems with Chi-square
1. Requires a large sample size, but a large N also makes it more likely that the model won't fit.
2. Not robust to violations of normality.
3. Not robust to complex models.
How can you adjust Chi Square?
1. Adjust for sample size, e.g., by using the normed chi-square (chi-square divided by df).
Mnemonic for RMSEA
SEA-sick. You want less than .06
Mnemonic for CFI
Let's See If I get a 90 on this test, but I hope I get a 95.
Advantages of CFI... (2)
Corrects for model complexity.
Disadvantage of CFI (2)
Sensitive to mis-specified loadings
Advantage of SRMR
Less sensitive to sample size
What SRMR value do you want?
<= .08 because S looks like an 8
Disadvantages of GFI (3)
1. Isn't robust to sample size
2. Sensitive to non-normality
3. Insensitive to model complexity
What is an exactly identified model?
It has exactly as many parameters as you need to estimate the model, but that means fit will always be perfect (zero degrees of freedom). So you should constrain some aspects of your model, not let everything correlate with everything.
What does a Wald statistic tell you?
It tells you what happens when you drop a path.
3 ways to test for mediation
1. Baron & Kenny causal steps (OK)
2. Sobel test (better)
3. Bootstrapping (best)
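The Sobel test in the middle of that list is just a z-statistic on a*b (a hypothetical helper; a, b and their standard errors come from the two mediation regressions):

```python
from math import sqrt

def sobel_z(a, se_a, b, se_b):
    # z = a*b / sqrt(b^2 * se_a^2 + a^2 * se_b^2)
    return (a * b) / sqrt(b**2 * se_a**2 + a**2 * se_b**2)

z = sobel_z(0.5, 0.1, 0.4, 0.1)  # made-up estimates and SEs
print(abs(z) > 1.96)  # True: indirect effect significant at alpha = .05
```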
Advantages of ULS (1)
Useful when using data that is not normally distributed
Disadvantages of ULS (4)
1. Still consistent but not an efficient estimator.
2. Not scale-invariant and not scale-free, so we have to use covariances.
3. As N increases, your error estimates will be slightly larger.
4. No overall fit measure, no Chi-Squares, etc. So no way to assess fit.
When do you use GLS?
1. When you have non-normal data or heteroskedasticity.
2. Useful for repeated measures/longitudinal.
Advantages of GLS
1. Consistent estimator
2. Scale free and scale-invariant
Disadvantages of GLS
Can be biased
Variance/Covariance matrix of the relationships between endogenous variables
Gamma matrix - matrix of the relationships (paths) from the exogenous to the endogenous variables.