Psych 210 Exam 2
Terms in this set (47)
What are the three assumptions that underlie parametric tests?
1. The dependent variable is assessed using a scale measure.
2. The participants are randomly selected.
3. The distribution of the population of interest must be approximately normal.
What does it mean when we say that a statistical test is robust?
The test will produce fairly accurate results even when the data suggest the population might not meet some of the assumptions
What are the six steps of hypothesis testing?
1. Identify the populations, comparison distribution, and assumptions
2. State the null and research hypotheses
3. Determine the characteristics of the comparison distribution
4. Determine the critical values (cutoffs)
5. Calculate the test statistic
6. Make a decision
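The six steps above can be sketched as a one-sample z test in Python. All the numbers here (mu = 100, sigma = 15, n = 30, sample mean 105) are invented for illustration:

```python
from statistics import NormalDist

# Hypothetical example: a sample of n = 30 scores with mean 105, drawn from
# a population with mu = 100 and sigma = 15 (all numbers invented).
mu, sigma, n, sample_mean, alpha = 100, 15, 30, 105, 0.05

# Step 3: the comparison distribution is the distribution of sample means,
# with standard error sigma / sqrt(n).
se = sigma / n ** 0.5

# Step 4: two-tailed critical value at alpha = .05 (about 1.96).
z_crit = NormalDist().inv_cdf(1 - alpha / 2)

# Step 5: calculate the test statistic.
z = (sample_mean - mu) / se

# Step 6: make a decision.
decision = "reject H0" if abs(z) > z_crit else "fail to reject H0"
print(round(z, 2), decision)
```

With these numbers z is about 1.83, which falls short of the 1.96 cutoff, so the decision is to fail to reject the null hypothesis.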
For the population SS/N = σ2, but for a sample SS/n is a biased estimator of σ2. Explain what that means and why it occurs. Why would SS/n typically underestimate the population variance?
...
Dividing SS by "n-1" results in a "sliding" adjustment, yielding an unbiased estimate of σ2. Explain the adjustment made by "n-1", especially as it relates to sample size.
n-1 = degrees of freedom; the number of scores that are free to vary when we estimate a population parameter from a sample
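A quick simulation illustrates the bias (a sketch; the population parameters, sample size, and seed are arbitrary). Averaged over many small samples, SS/n comes out near (n-1)/n times the true variance, while SS/(n-1) comes out near the true variance:

```python
import random
from statistics import mean

random.seed(0)
mu, sigma = 0.0, 1.0          # true population parameters (sigma^2 = 1)
n, reps = 5, 20000            # small samples exaggerate the bias

biased, unbiased = [], []
for _ in range(reps):
    x = [random.gauss(mu, sigma) for _ in range(n)]
    m = mean(x)
    ss = sum((xi - m) ** 2 for xi in x)   # deviations from the SAMPLE mean
    biased.append(ss / n)                 # SS/n: biased estimator
    unbiased.append(ss / (n - 1))         # SS/(n-1): unbiased estimator

print(round(mean(biased), 2), round(mean(unbiased), 2))
```

The biased average lands near 0.8 (that is, (n-1)/n with n = 5) because scores are closer to their own sample mean than to the population mean, which shrinks SS.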
Be able to explain how the CLT permits us to estimate the position of population mean if we only know sample information? Why is it that we can take info from one sample and infer things about the population?
...
Explain what is represented by the null and alternate hypotheses? Be able to generate examples of each. Specify attributes of each. Why do we focus so much on the null hypothesis for hypothesis testing?
Null (H0): there is no difference between the population means (e.g., a new teaching method produces the same mean test score as the old one)
Research (H1): there is a difference. We focus on the null because it is the hypothesis the test can actually reject
Explain the difference between directional and non-directional hypotheses. What are the implications of using a one-tail vs. two-tail hypothesis test? When should each be used?
A one-tailed test assesses data in only one direction (above or below H0) and should be used only when there is a strong directional prediction; a two-tailed test assesses data in both directions (both above and below H0) and is the conventional default
What is meant by a Type I error (be sure that you can describe it in practical terms, for a real investigation, not just referring to the null hypothesis)?
False positive; we rejected the null hypothesis when we should have failed to reject it (in reality there is no change/difference)
If α = .05, what does that mean (in practical terms)? Why don't we set alpha even lower?
The probability of rejecting the null hypothesis when it is actually true (a Type I error) is set at 5%. We don't set alpha even lower because making rejection harder increases the risk of a Type II error (missing a real effect)
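A simulation sketch of what alpha = .05 means in practice (the population values and seed are arbitrary): when the null hypothesis is true, a two-tailed z test at alpha = .05 rejects in about 5% of samples.

```python
import random
from statistics import NormalDist, mean

random.seed(1)
mu, sigma, n, alpha = 100, 15, 30, 0.05
z_crit = NormalDist().inv_cdf(1 - alpha / 2)

# Draw many samples from a population where H0 is TRUE and count rejections.
reps, rejections = 10000, 0
for _ in range(reps):
    m = mean(random.gauss(mu, sigma) for _ in range(n))
    z = (m - mu) / (sigma / n ** 0.5)
    rejections += abs(z) > z_crit

rate = rejections / reps
print(rate)   # close to 0.05: the long-run Type I error rate
```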
Why do scientists want to minimize Type I errors? Think about potential costs associated with a Type I error?
They lead people to act on an effect that does not exist (e.g., adopting an ineffective treatment or policy), which wastes resources and can cause harm
What is meant by a Type II error (again think in practical terms)? What are some factors that influence the likelihood of this error?
False negative; we failed to reject the null hypothesis when there really is a change/difference. More likely with small samples, high variability, small true effects, or a very low alpha
Explain the way(s) in which the experimenter influences or has control over Type I and Type II errors (direct & indirect).
Direct: set the alpha level (Type I risk) and choose a one- vs. two-tailed test. Indirect: increase sample size and reduce error variance (e.g., eliminate confounding variables), which lowers the Type II error risk
How are Type I and Type II errors different from experimental biases (malfeasance) in the conduct of research?
Biases are introduced purposefully (malfeasance), whereas Type I and Type II errors happen without the researcher intending them
Explain the connection between sampling distributions (as specified by the CLT) and hypothesis testing.
...
What is meant by "statistical significance"? What is meant by "p < .05"? This is a probability of what? Under what circumstances do we make such statements?
Statistical significance: Outcome is very unlikely to have occurred if the null is true (no actual difference)
p < .05: the probability of obtaining a test statistic at least as extreme as the one observed, if the null hypothesis were true, is less than 5%. We make such statements only when the observed statistic exceeds the critical value
What is a confidence interval, and what does it reveal? What is it centered around? How is it related to hypothesis testing? How is it different?
Confidence interval: an interval estimate based on a sample statistic; it includes the population mean a certain percentage of the time if the same population is sampled from repeatedly
- it is centered around the sample mean
- Confidence intervals give us a range of plausible values and an estimate of the precision of our parameter estimate. Hypothesis tests give a binary decision about whether to reject the null hypothesis
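As a sketch of the connection, here is a 95% CI built from invented numbers (sample mean 105, known sigma = 15, n = 30): if the interval contains the null-hypothesized mean, the corresponding two-tailed z test fails to reject at the same alpha.

```python
from statistics import NormalDist

# Hypothetical numbers: sample mean 105, known sigma = 15, n = 30.
sample_mean, sigma, n = 105, 15, 30
se = sigma / n ** 0.5
z = NormalDist().inv_cdf(0.975)          # about 1.96 for a 95% interval

# Interval centered around the SAMPLE mean.
lower, upper = sample_mean - z * se, sample_mean + z * se
print(round(lower, 2), round(upper, 2))
```

Here the interval runs from about 99.63 to 110.37; it contains 100, so a two-tailed test of H0: mu = 100 at alpha = .05 would fail to reject.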
What does it mean to say test is directional or nondirectional?
Directional: one-tailed test
Nondirectional: two-tailed test
What are the advantages and disadvantages of using a one-tailed test, and why do we almost always use a two-tailed test?
One-tailed tests only test for change in one direction, where two-tailed tests look for change in both directions
Be able to describe Cohen's effect size. What is meant, in practical terms, for a small, medium, or large effect size?
Cohens d: a measure of effect size that expresses the difference between two means in terms of standard deviation
- conventional benchmarks: small (d ≈ 0.2) means the group difference is small relative to the spread of scores; medium ≈ 0.5; large ≈ 0.8 standard deviations between means
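A minimal sketch of computing d for two groups (the data are made up; with equal n, the pooled SD is the root mean square of the two group SDs):

```python
from statistics import mean, stdev

# Invented two-group data just to illustrate the formula.
group1 = [12, 15, 14, 10, 13, 16, 11, 14]
group2 = [10, 12, 11, 9, 12, 13, 10, 11]

# Pooled standard deviation (equal n), then d = mean difference / pooled SD.
s1, s2 = stdev(group1), stdev(group2)
sp = ((s1 ** 2 + s2 ** 2) / 2) ** 0.5
d = (mean(group1) - mean(group2)) / sp
print(round(d, 2))
```

For these made-up scores d comes out around 1.24, i.e., the group means sit more than one pooled standard deviation apart (a large effect by Cohen's benchmarks).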
What does an effect size contribute that significance does not? Is it possible to derive significance, but have a weak effect size? When might this be more likely?
Strength of a difference.
- it is possible to have significance together with a small effect size
- this is most likely when the sample size is very large, since even tiny differences then become statistically significant
What is meant by power? Why is it important? Be sure to understand our power diagram and be able to interpret variations.
Power: the probability of correctly rejecting the null hypothesis when the null hypothesis is false
- goal in statistics is to increase power
What are three ways to increase the power of a statistical test?
1. Increase sample size
2. Decrease standard deviation
3. Increase the mean difference between levels of the independent variable (use a stronger manipulation)
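The first of these can be sketched analytically for a one-sample two-tailed z test (the 5-point shift, sigma = 15, and the n values are invented; the lower rejection tail is ignored because it is negligible when the true mean lies above mu0):

```python
from statistics import NormalDist

def power_one_sample_z(mu0, mu1, sigma, n, alpha=0.05):
    """Approximate power of a two-tailed one-sample z test, counting only
    the upper rejection tail (which dominates when mu1 > mu0)."""
    se = sigma / n ** 0.5
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    # P(sample mean falls beyond the upper critical value when mu = mu1)
    return 1 - NormalDist(mu1, se).cdf(mu0 + z_crit * se)

# Invented numbers: detecting a 5-point shift against sigma = 15.
for n in (10, 30, 100):
    print(n, round(power_one_sample_z(100, 105, 15, n), 2))
```

Power climbs from roughly .18 at n = 10 to roughly .92 at n = 100: the same true difference becomes far easier to detect as the standard error shrinks.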
Why must a specific alternative hypothesis be identified to calculate power? How does power hypotheses differ from our original (null & research) hypotheses?
...
Be able to demonstrate the impact of several variables (e.g., alpha, n, directional/non-directional tests, difference to be detected, variability) on power and beta. Think about what each does to our power diagram.
...
What are some criticisms of, or concerns regarding, traditional hypothesis testing?
...
What alternatives to traditional hypothesis testing are available? How do they differ (assumptions, interpretation, etc.)?
...
How does the sampling distribution of the mean change when sigma is unknown?
...
What is the difference between the t and z distributions? Explain why the t distribution needs to be wider than that for z?
...
How is the shape of the t distribution affected by df? Explain why.
...
We calculated the "proportion of variance accounted for" (r2) by the IV. What does this mean? What does it reveal about our experiment that isn't revealed by significance testing?
...
There are multiple ways of evaluating the outcome from an investigation. Compare the interpretation of: significance, confidence interval, effect size, power, and proportion of variance accounted for.
...
What is the value of replicating a study that has already found statistically significant results?
...
How does our sampling distribution change when we use two groups in hypothesis testing?
...
What is meant by independent groups designs (aka between subjects designs or between groups designs)? Give examples.
...
Sometimes we have equal n in two groups that are being compared and sometimes not. What impact does this have on our calculations? What is meant by a "pooled standard error", and why is that necessary?
...
What is meant by homogeneity of variance (homoscedasticity)? How can we test to see whether our data meet this assumption?
...
What assumptions are behind independent groups tests? What is meant if we say that a given test is robust regarding these assumptions? What does this tell us regarding the importance of the assumptions for the independent groups test?
...
What is meant by dependent groups? Give examples of research designs appropriately analyzed with these techniques.
...
How is the sampling distribution altered for dependent groups vs. independent groups? Compare the size of the standard error that one would typically obtain for dependent vs. independent groups.
...
How does the correlation between sets of scores influence the outcome for dependent groups? What does this suggest regarding the use of matching variables?
...
How is power affected by using independent vs. dependent groups? Discuss the impact of the size of the correlation on power.
...
Consider the pros and cons for using dependent groups for hypothesis testing.
...
What difficulties are encountered in estimating the required sample size for a given study? What kinds of information are required before you can estimate the appropriate sample size?
...
How does specifying a desired effect size help in estimating sample size? How is this considered a short-cut, i.e., what information is no longer required by using the effect size?
...
Why do we calculate confidence intervals?
...
How does considering our conclusions in terms of effect size help to prevent incorrect interpretations of our findings?
...