Multiple Regression Analysis
Terms in this set (35)
Assumption of SLR.4
All other factors affecting y are uncorrelated with x
Multiple Regression Analysis
Allows us to explicitly control for many other factors that simultaneously affect the DV
E(u | educ, exper) = 0
Other factors affecting y are unrelated, on average, to x1 and x2. We are stating that average ability levels must be the same across all individuals, regardless of education or experience.
Key assumption for the general multiple regression model
E(u | x1, x2, x3, ..., xk) = 0. At a minimum, this means that all factors in the unobserved error term must be uncorrelated with the explanatory variables.
The intercept β̂0
Predicted value of y when x1 = 0 and x2 = 0
When x2 is held fixed
the change in ŷ shows the effect that a change in x1 has on y, holding x2 constant
Holding Other Factors fixed
allows researchers to give the estimates a ceteris paribus interpretation even when the data have not been collected in that way
Residual for each observation
Defined as û_i = y_i − ŷ_i, the difference between the actual and fitted value for observation i
OLS fitted values and residuals
Sample covariance between each IV and the OLS residuals is zero (no relationship)
The point (x̄1, x̄2, ȳ) is always on the OLS regression line: the sample averages of the variables always satisfy the fitted equation.
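These algebraic properties can be checked numerically. A minimal sketch with simulated data (the model, coefficients, and variable names here are illustrative assumptions, not from this set):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.0 + 2.0 * x1 - 0.5 * x2 + rng.normal(size=n)

X = np.column_stack([np.ones(n), x1, x2])        # design matrix with intercept
beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]  # OLS coefficient estimates
y_hat = X @ beta_hat                             # fitted values
u_hat = y - y_hat                                # residuals

print(abs(u_hat.sum()))                  # ~0: residuals sum to zero
print(abs(u_hat @ x1), abs(u_hat @ x2))  # ~0: zero sample covariance with each IV
# The point (x1bar, x2bar, ybar) lies on the fitted line:
print(np.isclose(beta_hat @ [1.0, x1.mean(), x2.mean()], y.mean()))  # True
```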
Partialling Out Interpretation
β̂1 = (Σ r̂_i1 y_i) / (Σ r̂_i1²), where the r̂_i1 are the residuals from a simple regression of x1 on x2. The residuals are the part of x_i1 that is uncorrelated with x_i2, so β̂1 measures the effect of x1 after the effect of x2 has been partialled out.
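The partialling-out formula can be compared against a direct multiple regression. A sketch with simulated data (all names and numbers are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
x2 = rng.normal(size=n)
x1 = 0.6 * x2 + rng.normal(size=n)       # x1 deliberately correlated with x2
y = 1.0 + 2.0 * x1 + 3.0 * x2 + rng.normal(size=n)

# Direct multiple regression of y on (1, x1, x2)
X = np.column_stack([np.ones(n), x1, x2])
beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]

# Step 1: regress x1 on x2, keep the residuals r_hat
Z = np.column_stack([np.ones(n), x2])
r_hat = x1 - Z @ np.linalg.lstsq(Z, x1, rcond=None)[0]

# Step 2: beta1_hat = sum(r_hat * y) / sum(r_hat ** 2)
beta1_partial = (r_hat @ y) / (r_hat @ r_hat)

print(np.isclose(beta_hat[1], beta1_partial))  # True: the two routes agree
```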
When simple regression is identical to the multivariate case
The partial effect of x2 is zero, or x1 and x2 are uncorrelated in the sample. If the degree of correlation between x1 and x2 is high, the simple and multiple regression estimates can be quite different (multicollinearity).
SST (Total Sums of squares)
Σ(y_i − ȳ)². The squared difference between each individual value and the sample average, summed over all observations.
SSE Explained sum of Squares
Σ(ŷ_i − ȳ)². The squared difference between each predicted value and the sample average, summed over all observations.
SSR (Sum of squared residuals)
Σû_i², where û_i = y_i − ŷ_i
SST=SSR+SSE
Total sums of squares equals sum of squared residuals added to the Explained sum of squares
R squared
SSE/SST or 1-SSR/SST
Never decreases, and usually increases when another IV is added to the regression
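The decomposition and the two R² formulas can be verified directly. A minimal sketch, assuming a simulated model (coefficients and names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 150
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.0 + 2.0 * x1 - 1.0 * x2 + rng.normal(size=n)

X = np.column_stack([np.ones(n), x1, x2])
y_hat = X @ np.linalg.lstsq(X, y, rcond=None)[0]  # fitted values
u_hat = y - y_hat                                 # residuals

SST = ((y - y.mean()) ** 2).sum()       # total sum of squares
SSE = ((y_hat - y.mean()) ** 2).sum()   # explained sum of squares
SSR = (u_hat ** 2).sum()                # sum of squared residuals

print(np.isclose(SST, SSE + SSR))            # True: SST = SSE + SSR
print(np.isclose(SSE / SST, 1 - SSR / SST))  # True: both R^2 formulas agree
```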
Factor that determines whether an explanatory variable should be added to a model
Whether the explanatory variable has a nonzero partial effect on y in the population
Expected Values of the OLS estimators
Are the multiple regression estimators also unbiased estimators of the population parameters?
MLR.1 Assumption
Linear in parameters, where β0, β1, ..., βk are the unknown parameters of interest and u is an unobservable random error
MLR.2 Assumption
Random Sampling, Sample of n observations from the population model
Assumption MLR.3
Zero Conditional Mean: the error u has an expected value of zero given any values of the independent variables, E(u | x1, ..., xk) = 0. The covariance between u and each of x1, ..., xk is zero; they have no systematic relationship.
Failure of the Zero Conditional Mean
Misspecifying the functional relationship (as in 3.31), omitting an important factor that is correlated with any of the IVs, measurement error, or an explanatory variable that is jointly determined with y
exogenous explanatory variable
xj is exogenous if it is uncorrelated with u; if xj is correlated with u in any way, then it is endogenous
Assumption MLR.4 No Perfect Collinearity
If an independent variable is in exact linear combination with any of the other independent variables then we say the model suffers from perfect collinearity and cannot be estimated by OLS. They can be correlated just not perfectly correlated.
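Perfect collinearity is easy to demonstrate numerically: a regressor built as an exact linear combination of others makes the design matrix rank deficient, so X'X cannot be inverted. A sketch with simulated data (names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
x3 = 2 * x1 + x2                          # exact linear combination of x1 and x2
X = np.column_stack([np.ones(n), x1, x2, x3])

# X has 4 columns but only rank 3, so X'X is singular and
# the OLS estimates are not uniquely defined:
print(np.linalg.matrix_rank(X))           # 3
```

Dropping x3 (or either of x1, x2) restores full rank; correlated but not perfectly correlated regressors remain estimable.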
Theorem 3.1 unbiasedness of OLS
The OLS estimators are unbiased estimators of the population parameters
Including an Irrelevant Variable
Has no effect on unbiasedness, and the irrelevant parameter's estimate will converge to zero, but it does have an undesirable effect on the variances of the OLS estimators
Omitted Variable Bias
Exclusion of a relevant variable creates a biased OLS model.
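The direction and size of the bias can be seen in simulation: omitting a relevant x2 that is correlated with x1 pushes the simple-regression slope toward β1 + β2·δ1, where δ1 is the slope from regressing x2 on x1. A sketch under assumed illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000                          # large n, so estimates sit near their expectations
x1 = rng.normal(size=n)
x2 = 0.5 * x1 + rng.normal(size=n)   # x2 correlated with x1 (delta1 = 0.5)
y = 1.0 + 2.0 * x1 + 3.0 * x2 + rng.normal(size=n)

# Simple regression of y on x1 only, omitting the relevant x2
X1 = np.column_stack([np.ones(n), x1])
beta_tilde = np.linalg.lstsq(X1, y, rcond=None)[0]

# Slope lands near beta1 + beta2*delta1 = 2 + 3*0.5 = 3.5, not the true beta1 = 2
print(beta_tilde[1])
```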
Homoskedasticity Assumption MLR.5
The error u has the same variance given any values of the explanatory variables. In other words, Var(u | x1, ..., xk) = σ²
Variance: Multicollinearity
Error variance: a larger σ² implies larger variance for the OLS estimators; σ² can be reduced by adding variables to the regression. Total sample variation: a larger SST_j implies smaller variance for the estimators. Linear relationships among the IVs: a larger Rj² implies larger variance for the estimators.
Variance of the OLS Estimators: Error Variance
The more noise in the equation (a larger σ²), the more difficult it is to estimate the partial effect of any IV on the DV. σ² is a feature of the population. For a given dependent variable y, there is only one way to reduce the error variance: add more explanatory variables.
SST_j: Total sample variation in xj
The component of the variance that depends on the total sample variation in xj. Prefer as much sample variation in xj as possible; one way to increase it is to increase the sample size.
Linear Relationships among IV's Rjsquared
As Rj² increases toward one, Var(β̂j) gets very large. Rj² is the proportion of the total variation in xj that can be explained by the other independent variables in the equation. The smallest Var(β̂j) is obtained when Rj² = 0.
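The factor 1/(1 − Rj²), often called the variance inflation factor, can be computed by regressing xj on the other IVs. A sketch with simulated data (the noise levels are illustrative assumptions chosen to produce a low and a high Rj²):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
x2 = rng.normal(size=n)
Z = np.column_stack([np.ones(n), x2])         # the "other" regressors

vifs = []
for noise_sd in (1.0, 0.1):                   # smaller noise -> higher Rj^2
    xj = x2 + noise_sd * rng.normal(size=n)   # candidate regressor xj
    resid = xj - Z @ np.linalg.lstsq(Z, xj, rcond=None)[0]
    Rj2 = 1 - (resid @ resid) / ((xj - xj.mean()) ** 2).sum()
    vifs.append(1 / (1 - Rj2))                # variance inflation factor

print([round(v, 1) for v in vifs])            # the second value is far larger
```

As Rj² climbs toward one, the inflation factor, and with it Var(β̂j), grows without bound.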
Variances in Misspecified Models
Var(β̃1) is always smaller than Var(β̂1) (the variance of the slope from the simple regression is smaller than that from the multiple regression) unless x1 and x2 are uncorrelated in the sample
When β2 ≠ 0, include x2 in the model
Any bias in β̃1 does not shrink as the sample size grows, while Var(β̃1) and Var(β̂1) both shrink as the sample size grows, so the multicollinearity induced by x2 becomes less important. When x2 is excluded, the variation it would explain is left in the error term.
BLUE
Best: smallest variance. Linear: can be expressed as a linear function of the data. Unbiased: E(β̂j) = βj. Estimator. For any other linear unbiased estimator β̃j, Var(β̂j) is less than or equal to Var(β̃j).