STA 4210 Final
Terms in this set (52)
Give the definition of the Type I Error of a general hypothesis test
A Type I error occurs when you reject the null hypothesis when the null hypothesis is true.
Name one point that the fitted regression line always passes through
(Xbar, Ybar)
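A quick numerical check of this property; the data below are made up purely for illustration:

```python
import numpy as np

# Hypothetical data, just to illustrate the property.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

# Least-squares slope and intercept for simple linear regression.
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()

# Evaluating the fitted line at x-bar recovers y-bar exactly,
# because b0 = ybar - b1 * xbar by construction.
y_at_xbar = b0 + b1 * x.mean()
print(np.isclose(y_at_xbar, y.mean()))  # True
```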
R^2
SSR/SST
F*
MSR/MSE
Properties of X matrix
None of these
Properties of X'X
Symmetric
Properties of H Matrix
Symmetric, Idempotent
Properties of I Matrix
Diagonal, Symmetric, Idempotent
Properties of J Matrix
Symmetric
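The symmetry and idempotency of the hat matrix can be verified numerically; the small design matrix here is hypothetical:

```python
import numpy as np

# A small hypothetical design matrix: intercept column plus one predictor.
X = np.column_stack([np.ones(5), np.array([1.0, 2.0, 3.0, 4.0, 5.0])])

# Hat matrix H = X (X'X)^{-1} X'.
H = X @ np.linalg.inv(X.T @ X) @ X.T

print(np.allclose(H, H.T))    # True: symmetric
print(np.allclose(H @ H, H))  # True: idempotent, H H = H
```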
What could possibly increase when a predictor is added?
SSR, Multiple R^2, Adjusted R^2
What could possibly decrease when a predictor is added?
SSE, Adjusted R^2
multicollinearity definition
The predictor variables are highly correlated with one another
Technique to correct multicollinearity
Center the predictors (create new centered variables); ridge regression is another remedy
(T/F) Statistical learning refers to a set of approaches for estimating f in the model Y = f(X) + e
TRUE
(T/F) We can divide learning problems into supervised and unsupervised situations. Linear regression, logistic regression and linear discriminant analysis are all supervised learning approaches.
TRUE
(T/F) K nearest neighbors (KNN) is a nonparametric approach, so it always performs better than linear regression when the true relationship is linear.
FALSE
(T/F) In general the more flexible a method is, the lower its test MSE will be.
FALSE
(T/F) The bias-variance trade-off means that as a method gets more flexible the bias will decrease and the variance will increase but expected test MSE may go up or down.
TRUE
(T/F) If a method has high bias, then small changes in the training data can result in large changes in f-hat
FALSE
(T/F) For a binary response variable Y with 0/1 coding, linear regression is equivalent to logistic regression.
FALSE
(T/F) For classification problems, the KNN classifier can produce a test error that is smaller than the Bayes error rate.
FALSE
(T/F) For regression problems, the validation set MSE is always larger than the test set MSE.
FALSE
(T/F) K-fold cross-validation usually leads to validation MSEs that are more stable than those found using the validation set approach.
TRUE
(T/F) According to the Bonferroni inequality, if g simultaneous interval estimates of different E(Yh) are each constructed with confidence level 1 - alpha, then the overall family confidence level is at least 1 - (alpha/g). Assume that g is an integer greater than or equal to 2.
FALSE
(T/F) For simple linear regression, if the correlation coefficient is equal to -1, then Yi = Yhati for all i = 1,2,.... n
TRUE
(T/F) The Normal Probability Plot is used to determine if the assumption of linearity of the regression function has been violated.
FALSE
(T/F) The coefficient of determination can take on a value between -1 and +1.
FALSE
(T/F) If a simple linear regression goes through the origin (0,0) then the slope is equal to 0: b1 = 0.
FALSE
In the normal error regression model, the distribution of Yi is N(____, ____)
β0 + β1Xi , σ^2
In SLR the degrees of freedom for SSR equal the degrees of freedom for SSE if n = ___
3
Suppose a 95% confidence interval for E(Yh) is (-0.7,1.2) Then a possible 95% prediction interval for Yh(new) could be (____,___)
(-1.5,2)
The mathematical expression for SSR is given by
∑ i = 1 to n [(Yhati - Ybar)^2]
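A sketch verifying the ANOVA decomposition SST = SSR + SSE, using made-up data:

```python
import numpy as np

# Hypothetical data for a simple linear regression fit.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([1.2, 1.9, 3.2, 4.1, 4.8, 6.3])

b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()
y_hat = b0 + b1 * x

SSR = np.sum((y_hat - y.mean()) ** 2)   # regression sum of squares
SSE = np.sum((y - y_hat) ** 2)          # error sum of squares
SST = np.sum((y - y.mean()) ** 2)       # total sum of squares

print(np.isclose(SSR + SSE, SST))  # True: ANOVA identity
```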
There are 15 observations taken of 6 predictor variables. A multiple linear regression model was fit using all 6 predictor variables. The dimension of b is ....
7x1
Simplify the following : SSR(X1,X3) - SSR(X3) + SSR(X2 | X3)
SSR(X1,X2 | X3)
When X3 and X4 are uncorrelated, SSR (X4 | X3) =
SSR(X4)
A categorical predictor variable with 6 different category types would need ____ indicator variables to describe it.
5
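A minimal sketch of building the 5 indicator columns for a 6-level categorical predictor; the category labels and observations are invented for illustration:

```python
# Build 0/1 indicator (dummy) variables for a 6-level categorical predictor,
# dropping one reference level. Labels and data are hypothetical.
levels = ["A", "B", "C", "D", "E", "F"]
data = ["B", "F", "A", "C", "A", "E", "D"]

reference = levels[0]                       # "A" serves as the baseline level
dummy_levels = [lv for lv in levels if lv != reference]

# One indicator column per non-reference level: 6 categories -> 5 indicators.
dummies = [[1 if obs == lv else 0 for lv in dummy_levels] for obs in data]
print(len(dummies[0]))  # 5
```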
4 predictors were used to fit a model. Find ∑ i=1 to n of (hii)
5
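Since H is idempotent, the sum of the leverages equals trace(H) = rank(X) = p. A numerical check with a hypothetical 15-observation design (4 predictors plus an intercept, so p = 5):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 15, 4                      # 15 observations, 4 predictors
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])  # p = k + 1 = 5

H = X @ np.linalg.inv(X.T @ X) @ X.T
print(round(np.trace(H), 6))  # 5.0 -- sum of the h_ii equals p
```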
Cook's Distance is a quantity used to identify ....
influential points
A researcher reports that for a linear regression model, the regression sum of squares is three times as large as the error sum of squares. Compute R^2 for this model
0.75
Under what conditions would Ridge Regression be used?
When there is multicollinearity present
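A closed-form ridge sketch on simulated collinear data (all values here are made up), showing the shrinkage relative to OLS:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
x1 = rng.normal(size=n)
x2 = x1 + 0.01 * rng.normal(size=n)   # x2 nearly duplicates x1: multicollinearity
X = np.column_stack([x1, x2])
y = x1 + x2 + rng.normal(size=n)

# Ridge solution (X'X + lambda*I)^{-1} X'y stabilizes the near-singular system.
lam = 1.0
b_ridge = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ y)
b_ols = np.linalg.solve(X.T @ X, X.T @ y)

# Ridge coefficients are shrunk toward zero relative to the unstable OLS fit.
print(np.linalg.norm(b_ridge) < np.linalg.norm(b_ols))  # True
```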
What is exactly being tested for the Modified Levene Test? What is the Rejection Region?
Ho: the error variance is constant. Ha: the error variance is not constant. Rejection region: |t*| > t(1 - alpha/2; n - 2)
What does a large value of hii indicate? How do we determine if hii is large?
An observation that is outlying with respect to its X values (high leverage). Rule of thumb: hii > 2p/n is considered large
Interpret the partial R-square value R^2 (3 | 1,2) = 0.42
Adding X3 to a model that already contains X1 and X2 explains 42% of the variability in Y that X1 and X2 left unexplained.
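A sketch of computing a partial R^2 as (SSE(X1,X2) - SSE(X1,X2,X3)) / SSE(X1,X2), on simulated data:

```python
import numpy as np

def sse(X, y):
    """Residual sum of squares from an OLS fit of y on the columns of X."""
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ b
    return float(resid @ resid)

rng = np.random.default_rng(2)
n = 40
x1, x2, x3 = rng.normal(size=(3, n))
y = x1 + 0.5 * x2 + 2.0 * x3 + rng.normal(size=n)

ones = np.ones(n)
X_reduced = np.column_stack([ones, x1, x2])       # model with X1, X2
X_full = np.column_stack([ones, x1, x2, x3])      # model adding X3

# R^2(3 | 1,2): the share of the SSE left by (X1, X2) that adding X3 removes.
partial_r2 = (sse(X_reduced, y) - sse(X_full, y)) / sse(X_reduced, y)
print(0.0 <= partial_r2 <= 1.0)  # True
```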
Name the statistics used to identify influential Y observations
DFFITS, Cook's Distance, DFBETAS
Name the statistics used to identify extreme Y observations
Semi-studentized, studentized, studentized deleted residuals.
Give a brief explanation of how subset selection works.
Iteratively add or remove predictors, testing at each step, until every predictor remaining in the model is significant.
Explain why logistic regression would be used instead of linear regression.
When the response variable Y is binary (coded 0/1) rather than continuous
DFFITS cutoff value
|DFFITS| >= 1 is considered influential
Cook's Distance cutoff value
Di >= F(0.50,p,n-p) is considered extreme
DFBeta cutoff value
|DFBeta| >= 1 considered large
VIF cutoff value
VIF(bhat_k) > 10 indicates serious multicollinearity
Var(bk)
σ^2 times the (k+1, k+1) entry of (X'X)^-1
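For simple linear regression this entry has the known closed form 1/∑(xi - xbar)^2, which gives a quick check; the x values are made up:

```python
import numpy as np

# Simple linear regression design: the (2,2) entry of (X'X)^{-1}
# should equal 1 / sum((x - xbar)^2), so Var(b1) = sigma^2 / Sxx.
x = np.array([1.0, 2.0, 4.0, 7.0, 11.0])
X = np.column_stack([np.ones_like(x), x])

XtX_inv = np.linalg.inv(X.T @ X)
entry = XtX_inv[1, 1]                      # position (k+1, k+1) for k = 1
closed_form = 1.0 / np.sum((x - x.mean()) ** 2)

print(np.isclose(entry, closed_form))  # True
```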