Methods Final
Terms in this set (79)
Binary Variable
Same as a dummy variable. Dichotomous variable with only two values.
Odds
(Number of Occurrences of Outcome A)/(Number of Occurrences of Outcome B)
Odds Ratio
The ratio of the odds at one value of the independent variable to the odds at the next-lower value of the independent variable
Common Logarithm
Base-10 Logs
Logits
Natural log transformations of odds
Natural Logarithm (ln)
Base-e Logs
Percentage Change in the Odds
The increase in the odds converted into a percentage: (odds ratio − 1) × 100
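To make the arithmetic behind odds, odds ratios, logits, and percentage change concrete, here is a minimal Python sketch using made-up counts (30 of 100 cases with the outcome at the lower value of the independent variable, 60 of 100 at the higher value):

```python
import math

def odds(n_a, n_b):
    """Odds = occurrences of outcome A / occurrences of outcome B."""
    return n_a / n_b

# Hypothetical counts at two adjacent values of the independent variable
o_low = odds(30, 70)    # odds at the lower value
o_high = odds(60, 40)   # odds at the next-higher value

odds_ratio = o_high / o_low           # odds at higher value vs. next-lower value
logit_low = math.log(o_low)           # logit: natural log transformation of odds
pct_change = (odds_ratio - 1) * 100   # percentage change in the odds
```

With these numbers the odds ratio is 3.5, so moving up one value of the independent variable increases the odds by 250 percent.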
Maximum Likelihood Estimation
Takes the sample-wide probability of observing a specific value of a binary dependent variable and sees how well that probability predicts the outcome for each individual case in the sample. Then takes the independent variable into account and determines whether knowing the independent variable reduces prediction errors
Likelihood Function
A number that summarizes how well a model's predictions fit the observed data
Logged Likelihood
The natural log of likelihood
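The likelihood function and its log can be illustrated with a small hypothetical sample of binary outcomes and model-predicted probabilities (all numbers invented for illustration):

```python
import math

y = [1, 1, 0, 1, 0]             # observed binary outcomes
p = [0.8, 0.7, 0.3, 0.6, 0.4]   # hypothetical model-predicted P(y = 1) per case

# Likelihood: the product, over all cases, of the probability the model
# assigned to the outcome that was actually observed
likelihood = 1.0
for yi, pi in zip(y, p):
    likelihood *= pi if yi else (1 - pi)

# Logged likelihood: the natural log of the likelihood (a sum of logs),
# which avoids multiplying many small numbers together
log_likelihood = sum(math.log(pi if yi else 1 - pi) for yi, pi in zip(y, p))
```

Better-fitting models assign higher probabilities to the observed outcomes, so they have larger (less negative) logged likelihoods.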
Marginal Effects at Representative Values
Report changes in the probability of the dependent variable across the values of one independent variable, calculated separately for discrete categories of another independent variable
Marginal Effects at the Means
The effect of one independent variable on the probability of the dependent variable while holding the other independent variables constant at their sample means
Average Marginal Effects
Calculate the probability of the dependent variable's value for each individual in the sample under the assumption that each individual has one value of the independent variable. Then calculate the same under the other value of the independent variable. Find the difference for each individual and take the average.
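A minimal sketch of an average marginal effect for a binary independent variable, assuming a logistic model with hypothetical coefficients (b0, b1, b2 are invented, not estimated from data):

```python
import math

def pr(x, z, b0=-1.0, b1=0.8, b2=0.5):
    """Logistic P(y = 1) with hypothetical coefficients."""
    xb = b0 + b1 * x + b2 * z
    return 1 / (1 + math.exp(-xb))

# Each sample member's observed value on the other independent variable
sample_z = [0, 1, 1, 2, 3]

# Set x = 1 for every individual, then x = 0; difference the probabilities
diffs = [pr(1, z) - pr(0, z) for z in sample_z]

# The average of the individual differences is the average marginal effect
ame = sum(diffs) / len(diffs)
```

Unlike marginal effects at the means, each individual keeps its own observed value of the other variable; only the averaging happens at the end.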
Adjusted R-Square
A conservative version of R-Square
Correlation Analysis
Produces a measure of association known as Pearson's Correlation Coefficient or Pearson's r.
Dummy Variable
A variable for which all cases falling into a specific category assume the value of 1, and all cases not falling into that category assume a value of 0.
Error Sum of Squares
Prediction error. The component of the total sum of squares that is not explained by the regression equation
Interaction Effect
An interaction relationship. When the effect of one independent variable on the dependent variable differs across values of another independent variable.
Interaction Variable
The multiplicative product of two (or more) independent variables that tries to account for the interaction effect
Multicollinearity
Occurs when the independent variables are related to each other so strongly that it becomes difficult to estimate the partial effect of each independent variable on the dependent variable
Multiple Regression
Isolates the effects of one independent variable on the dependent variable while controlling for the effects of another independent variable
Partial Regression Coefficient
Estimates the mean change in the dependent variable for each unit change in the independent variable, controlling for the other independent variables in the model
Pearson's Correlation Coefficient
Symbolized by a lowercase r. Determines the direction and strength of an interval-level relationship.
Prediction Error
The difference between the regression line and the actual data point
R-Square
A PRE measure. Assesses how much better the dependent variable can be predicted by knowing the independent variable than not.
Regression Analysis
Produces the regression coefficient
Regression Coefficient
The slope of the regression line. Estimates the size of the effect of the independent variable on the dependent variable
Regression Line
y=a+b(x) where a is the y intercept or constant and b is the slope of the line (the regression coefficient)
Regression Sum of Squares
The component of the total sum of squares that comes from knowing the independent variable
Scatterplot
A graph that plots each case according to its values on the independent variable (horizontal axis) and the dependent variable (vertical axis)
Total Sum of Squares
An overall summary of the variation in the dependent variable. Also represents all our errors in guessing the value of the dependent variable
The sum of the squared differences between the actual value of the dependent variable and its mean value.
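The regression-line cards above can be tied together with a short Python sketch on a small made-up dataset, computing the slope, intercept, the three sums of squares, R-square, and Pearson's r:

```python
import math

# Hypothetical interval-level data
x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]

n = len(x)
mean_x = sum(x) / n
mean_y = sum(y) / n

# Regression coefficient (slope) b and constant (y-intercept) a: y = a + b(x)
sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
sxx = sum((xi - mean_x) ** 2 for xi in x)
b = sxy / sxx
a = mean_y - b * mean_x

predicted = [a + b * xi for xi in x]

# Total sum of squares = regression sum of squares + error sum of squares
tss = sum((yi - mean_y) ** 2 for yi in y)                   # total variation
rss = sum((pi - mean_y) ** 2 for pi in predicted)           # explained by x
ess = sum((yi - pi) ** 2 for yi, pi in zip(y, predicted))   # prediction error

r_square = rss / tss                     # PRE measure of fit
pearson_r = sxy / math.sqrt(sxx * tss)   # direction and strength
```

Note that R-square equals the square of Pearson's r in the two-variable case, and the regression and error components always sum to the total sum of squares.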
.05 Level of Significance
The minimum standard. For a relationship to be considered significant, Type 1 error, declaring a relationship when there is none, must occur less than 5% of the time
Asymmetrical Measure of Association
A measure of association whose value may vary depending on which variable is considered the independent variable and which the dependent variable
Chi-Square Test of Significance
Determines whether the observed dispersal of cases departs significantly from what would happen if the null hypothesis was correct
Helps determine statistical significance when using nominal and ordinal level data
Concordant Pair
Consistent with a positive relationship
Confidence Interval Approach
Compare the confidence intervals of two or more sample means. If the intervals overlap, then the null hypothesis cannot be rejected. If the intervals do not overlap, then we can reject the null hypothesis
Cramer's V
Based on the Chi-Square. Takes a value between 0, no relationship, and 1, a perfect relationship. Cannot be interpreted as a gauge of predictive accuracy
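The chi-square statistic and Cramer's V can be computed by hand for a small hypothetical 2 × 2 cross-tabulation (counts invented for illustration):

```python
import math

# Observed cross-tabulation: rows are IV categories, columns are DV categories
observed = [[30, 10],
            [20, 40]]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
n = sum(row_totals)

# Compare each observed count with the count expected under the null
# hypothesis of no relationship: expected = (row total * column total) / n
chi_square = 0.0
for i, row in enumerate(observed):
    for j, o in enumerate(row):
        e = row_totals[i] * col_totals[j] / n
        chi_square += (o - e) ** 2 / e

# Cramer's V rescales chi-square to run between 0 and 1
k = min(len(observed), len(observed[0])) - 1
cramers_v = math.sqrt(chi_square / (n * k))
```

The larger the departures of the observed counts from the expected counts, the larger chi-square grows, and the closer V moves toward 1.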
Critical Value
Marks the upper plausible boundary of random error and so defines the null hypothesis's limit. Depends on the degrees of freedom and the level of significance chosen by the researcher
Discordant Pair
Consistent with a negative relationship
Error Bar Chart
The means are represented by circles. The upper and lower confidence boundaries are shown as horizontal bars
Eyeball Test of Statistical Significance
Uses the mean difference and its standard error. Based on the confidence interval approach.
Lambda
Designed to measure the strength of a relationship between two categorical variables, at least one of which is a nominal-level variable
Measure of Association
Tells how well the independent variable works in explaining the dependent variable
Null Hypothesis
The supposition that, in the population, there is no difference in the dependent variable across values of the independent variable
P-Value Approach
The researcher determines the exact probability of obtaining the observed sample difference under the assumption that the null hypothesis is correct. If the Probability value is < .05 then the null hypothesis can be rejected. If the probability value is > .05, then the null hypothesis cannot be rejected
Proportional Reduction in Error (PRE)
A prediction-based metric that varies in magnitude between 0 and 1 that tells how much better the dependent variable can be predicted when the independent variable is known than when it is not known. No help = 0. Perfect Prediction = 1
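Lambda (defined above) is a PRE measure, so its logic can be shown directly: count prediction errors without the independent variable, count them again with it, and take the proportional reduction. The cross-tabulation below is hypothetical:

```python
# Cross-tab of a nominal IV (rows) by a nominal DV (columns), invented counts
table = [[40, 10],
         [15, 35]]

col_totals = [sum(col) for col in zip(*table)]
n = sum(col_totals)

# Errors ignoring the IV: guess the DV's modal category for every case
e1 = n - max(col_totals)

# Errors knowing the IV: guess the modal DV category within each row
e2 = sum(sum(row) - max(row) for row in table)

# PRE: 0 = knowing the IV is no help, 1 = perfect prediction
lambda_ = (e1 - e2) / e1
```

Here e1 = 45 and e2 = 25, so knowing the independent variable reduces prediction errors by about 44 percent.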
Somers' d
Appropriate for gauging the strength of ordinal-level relationships
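Concordant, discordant, and tied pairs can be counted directly, and Somers' d built from them. A minimal sketch on hypothetical ordinal data, using the asymmetric version that treats pairs tied on the dependent variable as errors:

```python
from itertools import combinations

# Each case: (value on the ordinal IV, value on the ordinal DV); invented data
cases = [(1, 1), (1, 2), (2, 2), (2, 3), (3, 1), (3, 3)]

c = d = tied_y = 0
for (x1, y1), (x2, y2) in combinations(cases, 2):
    if x1 == x2:
        continue                       # pairs tied on the IV are set aside
    if (x1 - x2) * (y1 - y2) > 0:
        c += 1                         # concordant: consistent with a + relationship
    elif (x1 - x2) * (y1 - y2) < 0:
        d += 1                         # discordant: consistent with a - relationship
    else:
        tied_y += 1                    # tied pair: different IV values, same DV value

somers_d = (c - d) / (c + d + tied_y)
```

With these cases there are 6 concordant, 3 discordant, and 3 tied pairs, giving a weak positive d of 0.25.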
Standard Error of Difference
a measure used to examine the difference between two scores and determine if there is a significant difference
Symmetrical Measure of Association
A measure of association whose value will be the same when either variable is considered the independent variable or the dependent variable.
Test of Statistical Significance
Helps decide whether an observed relationship between an independent variable and a dependent variable exists in the population or could have happened by chance
Test Statistic (t-ratio)
Tells exactly how many standard errors separate the sample difference from zero, the difference claimed by the null hypothesis: (sample difference − 0) / standard error of the difference
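The t-ratio computation is simple division; here is a sketch with a hypothetical mean difference and standard error:

```python
# Hypothetical sample result: mean difference of 4.0, standard error 1.5
mean_diff = 4.0
se_diff = 1.5

# The null hypothesis claims the true difference is zero
t_ratio = (mean_diff - 0) / se_diff

# In a large sample, |t| greater than the critical value of about 1.96
# lets us reject the null hypothesis at the .05 level
significant_at_05 = abs(t_ratio) > 1.96
```

Here the sample difference sits about 2.67 standard errors from zero, so the null hypothesis can be rejected at the .05 level.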
Tied Pair
Has different values on the independent variable but the same value on the dependent variable.
Type 1 Error
When the researcher concludes that there is a relationship in the population when there is none
Type 2 Error
When the researcher concludes that there is no relationship in the population when there is one.
95 percent confidence interval
The interval within which 95% of all possible sample estimates will fall by chance: sample mean ± 1.96 × standard error
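Plugging hypothetical numbers into the 95 percent confidence interval formula:

```python
# Hypothetical sample mean and standard error (large sample assumed,
# so the z-based multiplier 1.96 applies rather than a t value)
sample_mean = 50.0
standard_error = 2.0

lower = sample_mean - 1.96 * standard_error
upper = sample_mean + 1.96 * standard_error
```

Under the confidence interval approach, two group means whose intervals like this one do not overlap would let us reject the null hypothesis.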
census
Measurements for all members of a population
central limit theorem
If we were to take an infinite number of samples of size n from a population of N members, the means of those samples would be normally distributed
degrees of freedom
The number of individual scores that can vary without changing the sample mean. Statistically written as 'N-1' where N represents the number of subjects
inferential statistics
A set of procedures for deciding how closely a relationship observed in a sample corresponds to the unobserved relationship in the population from which it was drawn
normal distribution
describes a symmetrical, bell shaped curve that shows the distribution of many physical and psychological attributes
population
Large aggregations of units
population parameter
A characteristic of a population
probability
The likelihood of the occurrence of an event or set of events
random sample
A sample that has been randomly drawn from a population. Necessary for a sample statistic to yield an accurate estimate of a population parameter
random sampling error
The difference between the sample result and the result of a census conducted using identical procedures
random selection
Occurs when every member of the population has an equal chance of being included in the sample
range
Distance between highest and lowest scores in a set of data
response bias
Occurs when some cases in the sample are more likely than others to be measured
sample
A number of cases or observations drawn from a population
sample proportion
The number of cases falling into one category of the variable divided by the number of cases in the sample
sample statistic
An estimate of a population parameter based on a sample drawn from the population
sampling frame
an available list that approximates the population and from which we draw a sample to represent the population
selection bias
Occurs when nonrandom processes create compositional differences between test groups and the control group
standard deviation
The extent to which the cases in an interval-level distribution fall on or close to the mean of the distribution
standard error
the standard deviation of a sampling distribution
standardization
Occurs when the numbers in a distribution are converted into standard units of deviation from the mean of the distribution
Student's t-distribution
Probability distribution that can be used for making inferences about a population mean when the sample size is small
variance
The average of the squared deviations
Z score
a measure of how many standard deviations you are away from the mean. (deviation from the mean)/(standard deviation)
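Variance, standard deviation, standard error, and z scores fit together as one chain of arithmetic; a sketch on a small invented set of scores, treating it as a full population (so dividing by n rather than n − 1):

```python
import math

scores = [2, 4, 4, 4, 5, 5, 7, 9]   # hypothetical scores
n = len(scores)
mean = sum(scores) / n

# Variance: the average of the squared deviations from the mean
variance = sum((s - mean) ** 2 for s in scores) / n
std_dev = math.sqrt(variance)

# Standard error: the standard deviation of the sampling distribution
# of the mean, estimated as std_dev / sqrt(n)
std_error = std_dev / math.sqrt(n)

# Z score: a case's deviation from the mean in standard-deviation units
z = (9 - mean) / std_dev
```

For these scores the mean is 5 and the standard deviation is 2, so a score of 9 sits exactly 2 standard deviations above the mean.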