## ISQM Chapter 4

##### Created by:

hermboy8  on September 27, 2010

##### Description:

Multivariate Data Analysis Key Terms



#### Definitions

Variance Inflation Factor (VIF) Indicator of the effect that the other independent variables have on the standard error of a regression coefficient. The variance inflation factor is directly related to the tolerance value (VIF = 1/TOL). Large VIF values also indicate a high degree of collinearity or multicollinearity among the independent variables.
Transformation A variable may have an undesirable characteristic, such as nonnormality, that detracts from the ability of the correlation coefficient to represent the relationship between it and another variable. A transformation, such as taking the logarithm or square root of the variable, creates a new variable and eliminates the undesirable characteristic, allowing for a better measure of the relationship. Transformations may be applied to either the dependent or independent variables, or both. The need and specific type of transformation may be based on theoretical reasons (such as transforming a known nonlinear relationship) or empirical reasons (identified through graphical or statistical means).
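A minimal sketch of the VIF calculation (hypothetical data, not from the chapter): with only two independent variables, the R² from regressing one on the other is simply their squared correlation, so tolerance and VIF can be computed directly from r.

```python
# Hypothetical two-predictor example: here R^2 of x1 regressed on x2
# equals their squared Pearson correlation, so VIF = 1 / (1 - r^2).
def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

x1 = [1, 2, 3, 4, 5]          # assumed sample values
x2 = [2, 4, 5, 4, 5]

r = pearson_r(x1, x2)
tolerance = 1 - r ** 2        # TOL = 1 - R^2
vif = 1 / tolerance           # VIF = 1 / TOL
```

With these assumed values, r² works out to 0.6, giving a tolerance of 0.4 and a VIF of 2.5.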
Total sum of squares (SST) Total amount of variation that exists to be explained by the independent variables. This baseline value is calculated by summing the squared differences between the mean and actual values for the dependent variable across all observations.
Tolerance Commonly used measure of collinearity and multicollinearity. The tolerance of variable i (TOLi) is 1 - R²i, where R²i is the coefficient of determination for the prediction of variable i by the other independent variables in the regression variate. As the tolerance value grows smaller, the variable is more highly predicted by the other independent variables (collinearity).
Suppression effect The instance in which the expected relationships between independent and dependent variables are hidden or suppressed when viewed in a bivariate relationship. When additional independent variables are entered, the multicollinearity removes "unwanted" shared variance and reveals the "true" relationship.
Sum of squares regression (SSR) Sum of the squared differences between the mean and predicted values of the dependent variable for all observations. It represents the amount of improvement in explanation of the dependent variable attributable to the independent variable(s).
Sum of squared errors (SSE) Sum of the squared prediction errors (residuals) across all observations. It is used to denote the variance in the dependent variable not yet accounted for by the regression model. If no independent variables are used for prediction, it becomes the squared errors using the mean as the predicted value and thus equals the total sum of squares.
Studentized residual The most commonly used form of standardized residual. It differs from other methods in how it calculates the standard deviation used in standardization. To minimize the effect of any observation on the standardization process, the standard deviation of the residual for observation i is computed from regression estimates omitting the ith observation in the calculation of the regression estimates.
Stepwise estimation Method of selecting variables for inclusion in the regression model that starts by selecting the best predictor of the dependent variable. Additional independent variables are selected in terms of the incremental explanatory power they can add to the regression model. Independent variables are added as long as their partial correlation coefficients are statistically significant. Independent variables may also be dropped if their predictive power drops to a nonsignificant level when another independent variable is added to the model.
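The three sums of squares defined above fit together as SST = SSR + SSE. A small sketch with hypothetical data (not from the chapter) makes the decomposition concrete:

```python
# Hypothetical simple-regression data illustrating SST = SSR + SSE
# (total variation = explained variation + unexplained variation).
x = [1, 2, 3, 4]
y = [2, 3, 5, 6]
n = len(x)
mx, my = sum(x) / n, sum(y) / n

# Least-squares slope and intercept for the simple regression.
b1 = sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
     sum((a - mx) ** 2 for a in x)
b0 = my - b1 * mx
y_hat = [b0 + b1 * a for a in x]

sst = sum((b - my) ** 2 for b in y)                 # total sum of squares
ssr = sum((p - my) ** 2 for p in y_hat)             # regression sum of squares
sse = sum((b - p) ** 2 for b, p in zip(y, y_hat))   # sum of squared errors
```

For this data the fitted line is ŷ = 0.5 + 1.4x, giving SST = 10, SSR = 9.8, and SSE = 0.2.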
Statistical relationship Relationship based on the correlation of one or more independent variables with the dependent variable. Measures of association, typically correlations, represent the degree of relationship because there is more than one value of the dependent variable for each value of the independent variable.
Standardization Process whereby the original variable is transformed into a new variable with a mean of 0 and a standard deviation of 1. The typical procedure is to first subtract the variable mean from each observation's value and then divide by the standard deviation. When all the variables in a regression variate are standardized, the b0 term (the intercept) assumes a value of 0 and the regression coefficients are known as beta coefficients, which enable the researcher to compare directly the relative effect of each independent variable on the dependent variable.
Standard error of the estimate (SEE) Measure of the variation in the predicted values that can be used to develop confidence intervals around any predicted value. It is similar to the standard deviation of a variable around its mean, but instead is the expected distribution of predicted values that would occur if multiple samples of the data were taken.
Standard error Expected distribution of an estimated regression coefficient. The standard error is similar to the standard deviation of a set of data values, but instead denotes the expected range of the coefficient across multiple samples of the data. It is useful in statistical tests of significance that test to see whether the coefficient is significantly different from zero (i.e., whether the expected range of the coefficient contains the value of zero at a given level of confidence). The t value of a regression coefficient is the coefficient divided by its standard error.
Specification error Error in predicting the dependent variable caused by excluding one or more relevant independent variables. This omission can bias the estimated coefficients of the included variables as well as decrease the overall predictive power of the regression model.
Singularity The extreme case of collinearity or multicollinearity in which an independent variable is perfectly predicted (a correlation of ±1.0) by one or more independent variables. Regression models cannot be estimated when a singularity exists. The researcher must omit one or more of the independent variables involved to remove this.
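The standardization procedure described above can be sketched in a few lines (hypothetical observations, sample standard deviation assumed):

```python
# Standardization: subtract the mean, divide by the standard deviation.
# The resulting z-scores have mean 0 and standard deviation 1.
values = [10.0, 12.0, 14.0, 18.0, 16.0]   # hypothetical observations
n = len(values)
mean = sum(values) / n
std = (sum((v - mean) ** 2 for v in values) / (n - 1)) ** 0.5  # sample std
z = [(v - mean) / std for v in values]
```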
Simple regression Regression model with a single independent variable, also known as bivariate regression.
Significance level (alpha) Commonly referred to as the level of statistical significance, the significance level represents the probability the researcher is willing to accept that the estimated coefficient is classified as different from zero when it actually is not. This is also known as Type I error. The most widely used level of significance is .05, although researchers use levels ranging from .01 (more demanding) to .10 (less conservative and easier to find significance).
Sampling error The expected variation in any estimated parameter (intercept or regression coefficient) that is due to the use of a sample rather than the population. Sampling error is reduced as the sample size is increased and is used to statistically test whether the estimated parameter differs from zero.
Residual (e or ε) Error in predicting our sample data. Seldom will our predictions be perfect. We assume that random error will occur, but that this error is an estimate of the true random error in the population (ε), not just the error in prediction for our sample (e). We assume that the error in the population we are estimating is distributed with a mean of 0 and a constant (homoscedastic) variance.
Regression variate Linear combination of weighted independent variables used collectively to predict the dependent variable.
Regression Coefficient Numerical value of the parameter estimate directly associated with an independent variable; for example, in the model Y = b0 + b1X1 the value b1 is the regression coefficient for the variable X1. The regression coefficient represents the amount of change in the dependent variable for a one-unit change in the independent variable. In the multiple predictor model (e.g., Y = b0 + b1X1 + b2X2), the regression coefficients are partial coefficients because each takes into account not only the relationships between Y and X1 and between Y and X2, but also between X1 and X2. The coefficient is not limited in range, because it is based on both the degree of association and the scale units of the independent variable. For instance, two variables with the same association to Y would have different coefficients if one independent variable was measured on a 7-point scale and another was based on a 100-point scale.
Reference Category The omitted level of a nonmetric variable when a dummy variable is formed from the nonmetric variable.
PRESS statistic Validation measure obtained by eliminating each observation one at a time and predicting this dependent value with the regression model estimated from the remaining observations.
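For simple regression, the leave-one-out procedure behind the PRESS statistic can be sketched directly (hypothetical data; the helper names are illustrative, not from the text):

```python
# PRESS: hold each observation out in turn, refit the simple regression
# on the remaining cases, and sum the squared held-out prediction errors.
def fit_simple(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b1 = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
         sum((x - mx) ** 2 for x in xs)
    return my - b1 * mx, b1          # intercept, slope

def press(xs, ys):
    total = 0.0
    for i in range(len(xs)):
        b0, b1 = fit_simple(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        total += (ys[i] - (b0 + b1 * xs[i])) ** 2
    return total

x = [1, 2, 3, 4, 5]                  # assumed sample values
y = [2.1, 2.9, 5.2, 4.1, 6.0]
```

Because each prediction comes from a model that never saw the observation, PRESS is always at least as large as the in-sample SSE, which is what makes it a validation measure.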
Power Probability that a significant relationship will be found if it actually exists. Complements the more widely used significance level alpha.
Polynomial Transformation of an independent variable to represent a curvilinear relationship with the dependent variable. By including a squared term (X²), a single inflection point is estimated. A cubic term estimates a second inflection point. Additional terms of a higher power can also be estimated.
Partial Regression Plot Graphical representation of the relationship between the dependent variable and a single independent variable. The scatterplot of points depicts the partial correlation between the two variables, with the effects of other independent variables held constant. This portrayal is particularly helpful in assessing the form of the relationship (linear versus nonlinear) and the identification of influential observations.
Partial F (or t) values Statistical test for the additional contribution to prediction accuracy of a variable above that of the variables already in the equation. When a variable (Xa) is added to a regression equation after other variables are already in the equation, its contribution may be small even though it has a high correlation with the dependent variable. The reason is that Xa is highly correlated with the variables already in the equation. The partial F value is calculated for all variables by simply pretending that each, in turn, is the last to enter the equation. It gives the additional contribution of each variable above all others in the equation. A low or insignificant partial F value for a variable not in the equation indicates its low or insignificant contribution to the model as already specified. A t value may be calculated instead of F values in all instances, with the t value being approximately the square root of the F value.
Partial Correlation Coefficient Value that measures the strength of the relationship between the criterion or dependent variable and a single independent variable when the effects of the other independent variables in the model are held constant. For example, rY,X2·X1 measures the variation in Y associated with X2 when the effect of X1 on both X2 and Y is held constant. This value is used in sequential variable selection methods of regression model estimation (e.g., stepwise, forward addition, or backward elimination) to identify the independent variable with the greatest incremental predictive power beyond the independent variables already in the regression model.
Part Correlation Value that measures the strength of the relationship between a dependent and a single independent variable when the predictive effects of the other independent variables in the regression model are removed. The objective is to portray the unique predictive effect due to a single independent variable among a set of independent variables. Differs from the partial correlation coefficient, which is concerned with incremental predictive effect.
Parameter Quantity (measure) characteristic of the population. For example, µ and σ² are the symbols used for the population parameters mean (µ) and variance (σ²). They are typically estimated from sample data in which the arithmetic average of the sample is used as a measure of the population average and the variance of the sample is used to estimate the variance of the population.
Outlier In strict terms, an observation that has a substantial difference between the actual value for the dependent variable and the predicted value. Cases that are substantially different with regard to either the dependent or independent variables are often termed outliers. In all instances, the objective is to identify observations that are inappropriate representations of the population from which the sample is drawn, so that they may be discounted or even eliminated from the analysis as unrepresentative.
Null plot Plot of residuals versus the predicted values that exhibits a random pattern. A null plot is indicative of no identifiable violations of the assumptions underlying regression analysis.
Normal Probability Plot Graphical comparison of the shape of the sample distribution to the normal distribution. In the graph, the normal distribution is represented by a straight line angled at 45 degrees. The actual distribution is plotted against this line, so any differences are shown as deviations from the straight line, making identification of differences quite simple.
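The contrast between the partial and part (semipartial) correlations can be shown with the standard first-order formulas for the two-predictor case (the bivariate correlations below are assumed values, not from the text): the partial correlation divides out X1's influence from both Y and X2, while the part correlation removes it from X2 only.

```python
import math

# First-order formulas for the two-predictor case.
# r_y1, r_y2, r_12 are hypothetical bivariate correlations.
r_y1, r_y2, r_12 = 0.60, 0.50, 0.40

numer = r_y2 - r_y1 * r_12
# Partial correlation of Y and X2, controlling X1 in both variables:
partial = numer / math.sqrt((1 - r_y1 ** 2) * (1 - r_12 ** 2))
# Part (semipartial) correlation: X1 removed from X2 only:
part = numer / math.sqrt(1 - r_12 ** 2)
```

Because the partial correlation's denominator also shrinks by the variance X1 explains in Y, the partial correlation is always at least as large in magnitude as the part correlation.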
Moderator Effect Effect in which a third independent variable (the moderator variable) causes the relationship between a dependent/independent variable pair to change, depending on the value of the moderator variable. It is also known as an interactive effect and is similar to the interaction effect seen in analysis of variance methods.
Multiple regression Regression model with two or more independent variables.
Leverage points Type of influential observation defined by one aspect of influence termed leverage. These observations are substantially different on one or more independent variables, so that they affect the estimation of one or more regression coefficients.
Linearity Term used to express the concept that the model possesses the properties of additivity and homogeneity. In a simple sense, linear models predict values that fall in a straight line by having a constant unit change (slope) of the dependent variable for a constant unit change of the independent variable.
Measurement error Degree to which the data values do not truly measure the characteristic being represented by the variable. For example, when asking about total family income, many sources of measurement error make the data values imprecise.
Least squares Estimation procedure used in simple and multiple regression whereby the regression coefficients are estimated so as to minimize the total sum of the squared residuals.
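A small sanity check of the least squares criterion (hypothetical data): the closed-form simple-regression estimates minimize the total sum of squared residuals, so perturbing either fitted parameter can only increase the SSE.

```python
# Least squares for simple regression: the fitted slope/intercept give
# the smallest possible sum of squared residuals.
x = [1, 2, 3, 4, 5]                 # assumed sample values
y = [1.8, 3.1, 4.0, 4.9, 6.2]

n = len(x)
mx, my = sum(x) / n, sum(y) / n
b1 = sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
     sum((a - mx) ** 2 for a in x)
b0 = my - b1 * mx

def sse(intercept, slope):
    # Sum of squared residuals for a candidate line.
    return sum((b - (intercept + slope * a)) ** 2 for a, b in zip(x, y))

best = sse(b0, b1)
```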
Intercept (b0) Value on the Y axis (dependent variable axis) where the line defined by the regression equation Y = b0 + b1X1 crosses the axis. It is described by the constant term b0 in the regression equation. In addition to its role in prediction, the intercept may have a managerial interpretation. If the complete absence of the independent variable has meaning, then the intercept represents that amount. For example, when estimating sales from past advertising expenditures, the intercept represents the level of sales expected if advertising is eliminated. But in many instances the constant has only predictive value because in no situation are all independent variables absent. An example is predicting product preference based on consumer attitudes. All individuals have some level of attitude, so the intercept has no managerial use, but it still aids in prediction.
Influential observation An observation that has a disproportionate influence on one or more aspects of the regression estimates. This influence may be based on extreme values of the independent or dependent variables, or both. Influential observations can either be "good," by reinforcing the pattern of the remaining data, or "bad," when a single or small set of cases unduly affects the regression estimates. It is not necessary for the observation to be an outlier, although many times outliers can be classified as influential observations as well.
Homoscedasticity Description of data for which the variance of the error terms (e) appears constant over the range of values of an independent variable. The assumption of equal variance of the population error ε (where ε is estimated from the sample value e) is critical to the proper application of linear regression. When the error terms have increasing or modulating variance, the data are said to be heteroscedastic.
Indicator coding Method for specifying the reference category for a set of dummy variables where the reference category receives a value of 0 across the set of dummy variables. The regression coefficients represent the group differences in the dependent variable from the reference category. Indicator coding differs from effects coding, in which the reference category is given the value of -1 across all dummy variables and the regression coefficients represent group deviations on the dependent variable from the overall mean of the dependent variable.
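The two coding schemes can be sketched for a three-level nonmetric variable (the level names and the choice of reference category are arbitrary illustrations, not from the text):

```python
# Indicator (dummy) coding vs. effects coding for a three-level factor.
# The last level is taken as the reference category (an arbitrary choice).
levels = ["low", "medium", "high"]

def indicator_code(value):
    # One 0/1 dummy per non-reference level; the reference gets all zeros.
    return [1 if value == lvl else 0 for lvl in levels[:-1]]

def effects_code(value):
    # Same dummies, except the reference category is coded -1 throughout,
    # so coefficients become deviations from the overall mean.
    if value == levels[-1]:
        return [-1] * (len(levels) - 1)
    return indicator_code(value)
```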
Forward addition Method of selecting variables for inclusion in the regression model by starting with no variables in the model and then adding one variable at a time based on its contribution to prediction.
Degrees of freedom Value calculated from the total number of observations minus the number of estimated parameters. These parameter estimates are restrictions on the data because, once made, they define the population from which the data are assumed to have been drawn. For example, in estimating a regression model with a single independent variable, we estimate two parameters, the intercept (b0) and a regression coefficient for the independent variable (b1). In estimating the random error, defined as the sum of the prediction errors (actual minus predicted dependent values) for all cases, we would find (n-2) degrees of freedom. Degrees of freedom provide a measure of how restricted the data are to reach a certain level of prediction. If the number of degrees of freedom is small, the resulting prediction may be less generalizable because all but a few observations were incorporated in the prediction. Conversely, a large degrees of freedom value indicates the prediction is fairly robust with regard to being representative of the overall sample of respondents.
Correlation coefficient Coefficient that indicates the strength of the association between any two metric variables. The sign (+ or -) indicates the direction of the relationship. The value can range from +1 to -1, with +1 indicating a perfect positive relationship, 0 indicating no relationship, and -1 indicating a perfect negative or reverse relationship (as one variable grows larger, the other variable grows smaller).
Collinearity Expression of the relationship between two (collinearity) or more (multicollinearity) independent variables. Two independent variables are said to exhibit complete collinearity if their correlation coefficient is 1, and complete lack of collinearity if their correlation coefficient is 0. Multicollinearity occurs when any single independent variable is highly correlated with a set of other independent variables. An extreme case of collinearity/multicollinearity is singularity, in which an independent variable is perfectly predicted (i.e., correlation of 1.0) by another independent variable (or more than one).
Coefficient of determination (R²) Measure of the proportion of the variance of the dependent variable about its mean that is explained by the independent, or predictor, variables. The coefficient can vary between 0 and 1. If the regression model is properly applied and estimated, the researcher can assume that the higher the value of R², the greater the explanatory power of the regression equation, and therefore the better the prediction of the dependent variable.
Beta coefficient Standardized regression coefficient (see standardization) that allows for a direct comparison between coefficients as to their relative explanatory power of the dependent variable. Whereas regression coefficients are expressed in terms of the units of the associated variable, thereby making comparisons inappropriate, beta coefficients use standardized data and can be directly compared.
Backward elimination Method of selecting variables for inclusion in the regression model that starts by including all independent variables in the model and then eliminating those variables not making a significant contribution to prediction.
All-possible-subsets regression Method of selecting the variables for inclusion in the regression model that considers all possible combinations of the independent variables. For example, if the researcher specifies four potential independent variables, this technique would estimate all possible regression models with one, two, three, and four variables. The technique would then identify the model(s) with the best predictive accuracy.
Adjusted coefficient of determination Modified measure of the coefficient of determination that takes into account the number of independent variables included in the regression equation and the sample size. Although the addition of independent variables will always cause the coefficient of determination to rise, the adjusted coefficient of determination may fall if the added independent variables have little explanatory power or if the degrees of freedom become too small. This statistic is quite useful for comparison between equations with different numbers of independent variables, differing sample sizes, or both.
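The commonly used adjustment formula is 1 - (1 - R²)(n - 1)/(n - k - 1), where n is the sample size and k the number of independent variables. A quick sketch with assumed values shows the behavior described above:

```python
def adjusted_r2(r2, n, k):
    # n = sample size, k = number of independent variables.
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

# R-squared always rises when a predictor is added, but if the new
# variable adds little explanation the adjusted value can fall.
# The numbers below are hypothetical:
before = adjusted_r2(0.80, 20, 3)   # three-variable model
after = adjusted_r2(0.81, 20, 4)    # small R^2 gain, one more variable
```

Here the raw R² rises from .80 to .81, yet the adjusted value drops (0.7625 to about 0.759), penalizing the weak added predictor.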
