SEM = The standard error of the mean is an estimate of the standard deviation of a statistic (here, the sample mean); that is, an estimate of the amount of variation due to sampling error that we can expect in sample means. The SEM is often also called the standard deviation of the mean. Here are the key differences between the standard deviation (SD) and the standard error of the mean (SEM):
• The SD quantifies scatter — how much the values vary from one another.
• The SEM quantifies how precisely you know the true mean of the population (how much of the variance that you see is due to sampling error). It takes into account both the value of the SD and the sample size.
• Both SD and SEM are in the same units -- the units of the data.
• The SEM, by definition, is always smaller than the SD.
• The SEM gets smaller as your samples get larger. This makes sense, because the mean of a large sample is likely to be closer to the true population mean than is the mean of a small sample. With a huge sample, you'll know the value of the mean with a lot of precision even if the data are very scattered.
• The SD does not change predictably as you acquire more data. The SD you compute from a sample is the best possible estimate of the SD of the overall population. As you collect more data, you'll assess the SD of the population with more precision. But you can't predict whether the SD from a larger sample will be bigger or smaller than the SD from a small sample. (This is not strictly true. It is the variance -- the SD squared -- that doesn't change predictably, but the change in SD is trivial and much much smaller than the change in the SEM.)
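The SD/SEM contrast above can be seen numerically. The sketch below (hypothetical data and values, not from the original notes) draws samples of increasing size from one population: the SD stays near the population value, while the SEM shrinks roughly as 1/√n.

```python
import math
import random
import statistics

random.seed(1)
# Hypothetical population: scores with mean 100 and SD 15 (illustrative values)
population = [random.gauss(100, 15) for _ in range(100_000)]

results = {}
for n in (10, 100, 1000):
    sample = population[:n]
    sd = statistics.stdev(sample)      # scatter of the values themselves
    sem = sd / math.sqrt(n)            # precision of the sample mean
    results[n] = (sd, sem)
    print(f"n={n:4d}  SD={sd:6.2f}  SEM={sem:6.2f}")
```

As the notes say: the SEM is always smaller than the SD and keeps shrinking as n grows, while the SD does not change predictably.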
• Size of the effect
To calculate power, you need an idea of the effect size you are looking for. This can come from past research in the area (effect sizes found in previous studies) or, if there is no past research in the area (unlikely), you can fall back on Cohen's values (see Cohen's (1988) guidelines on effect size). (Effect size = the degree to which differences in the DV are attributable to the IV.)

A large effect size is easier to detect than a small one. You will need more power to find small effect sizes.
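For two groups, the standardized effect size (Cohen's d) is the mean difference divided by the pooled SD. A minimal sketch with made-up data (the scores below are invented for illustration):

```python
import math
import statistics

def cohens_d(a, b):
    """Standardized mean difference: (mean_a - mean_b) / pooled SD."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * statistics.variance(a) +
                  (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / math.sqrt(pooled_var)

# Hypothetical scores for two conditions
treatment = [14, 16, 15, 17, 18, 16, 15]
control = [12, 13, 14, 12, 15, 13, 14]
d = cohens_d(treatment, control)
print(round(d, 2))
```

Cohen's (1988) benchmarks treat d ≈ 0.2 as small, 0.5 as medium, and 0.8 as large.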

• The criterion significance level (i.e. the significance level at which you are prepared to accept that the results are probably not due to sampling error)
The probability level that you are willing to accept as the likelihood that the results are due to sampling error

• Sample size:
If the effect we are looking for is rather small, then the larger the sample size, the greater the power we'll have to detect it
The larger the sample, the greater the power. With only a few participants, a large difference in means could simply be due to sampling error.
With more participants there is a greater chance of detecting a significant effect, and we can be more confident that the effect is due to something other than sampling error

• The type of statistical test you use
Parametric tests are more powerful than nonparametric tests
E.g. a t-test is more likely to find an effect than its non-parametric equivalent

• Whether the design is between participants or within participants
Repeated measures designs increase power because they reduce within-participant variability, as each participant acts as his or her own control.
Use repeated measures rather than independent wherever possible!

• Whether the hypothesis is one- or two-tailed
If a one-tailed hypothesis is appropriate, then use it. Two-tailed tests require larger sample sizes to compensate for the loss of power
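All four factors above (effect size, criterion alpha, sample size, and number of tails) appear in a power calculation. The sketch below uses a normal approximation for a two-sample test; it is an illustrative formula, not one from the notes, and ignores the negligible contribution of the far tail.

```python
from statistics import NormalDist

def approx_power(d, n_per_group, alpha=0.05, two_tailed=True):
    """Approximate power of a two-sample z-test.
    d = standardized effect size (Cohen's d); n_per_group = size of each group."""
    z = NormalDist()
    # critical value: alpha split across both tails, or all in one tail
    crit = z.inv_cdf(1 - (alpha / 2 if two_tailed else alpha))
    noncentrality = d * (n_per_group / 2) ** 0.5
    # probability the test statistic lands beyond the critical value
    return 1 - z.cdf(crit - noncentrality)

# Medium effect (d = 0.5), 64 per group, two-tailed: roughly 80% power
print(approx_power(0.5, 64))
# Same d and n but one-tailed: more power
print(approx_power(0.5, 64, two_tailed=False))
```

Playing with the arguments reproduces each claim in the list: power rises with d, rises with n, rises with a more lenient alpha, and a one-tailed test beats a two-tailed one.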
Two-tailed hypothesis =
A two-tailed test, also known as a non-directional hypothesis test, is the standard test of significance for determining whether there is a relationship between variables in either direction. Two-tailed tests do this by dividing the alpha of .05 in two and putting half on each side of the bell curve.

One-tailed (one direction) = a directional hypothesis, used to determine whether there is a relationship between the two variables in one particular direction. Used when you have a good idea, based on the literature or previous experiments, that there is likely to be a directional difference between the variables
Puts all of the alpha (.05) on one side, making the test more sensitive and able to detect subtler differences in that direction
If you are using a significance level of 0.05, a two-tailed test allots half of your alpha to testing the statistical significance in one direction and half of your alpha to testing statistical significance in the other direction.

This means that .025 is in each tail of the distribution of your test statistic. When using a two-tailed test, regardless of the direction of the relationship you hypothesize, you are testing for the possibility of the relationship in both directions.
For example, we may wish to compare the mean of a sample to a given value x using a t-test. Our null hypothesis is that the mean is equal to x. A two-tailed test will test both whether the mean is significantly greater than x and whether the mean is significantly less than x. The mean is considered significantly different from x if the test statistic is in the top 2.5% or bottom 2.5% of its probability distribution, resulting in a p-value less than 0.05.

The important point is that if we make a specific prediction, then we will need a smaller test statistic to find a significant result (since we are looking in only one tail). However, if the prediction turns out to be in the wrong direction, then we'll miss the effect that does exist. And if we don't predict a direction before collecting data, it will be too late to do so afterwards; in that case we can no longer claim a one-tailed test.
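The "smaller test statistic" point can be checked directly from the critical values at alpha = .05 (using the z distribution for simplicity):

```python
from statistics import NormalDist

alpha = 0.05
z = NormalDist()
one_tailed_crit = z.inv_cdf(1 - alpha)      # all of alpha in one tail
two_tailed_crit = z.inv_cdf(1 - alpha / 2)  # alpha split across both tails
print(round(one_tailed_crit, 3))   # ~1.645
print(round(two_tailed_crit, 3))   # ~1.960
```

A one-tailed result only needs to clear ~1.645 rather than ~1.96, which is exactly why it has more power in the predicted direction and none in the other.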

Standard error of the mean= an estimate of how much of the variation about the mean is due to sampling error [The standard error is an estimate of the standard deviation of a statistic. An estimate of the amount of variation due to error that we can expect in sample means.]
Confidence intervals, limits, and levels= the next step in the interpretation of standard error

means = POINT ESTIMATES of the population parameters, giving a range (confidence interval with associated confidence level) is better/more realistic

Confidence interval is the "range of values of a sample statistic that is likely (at a given level of probability, called a confidence level) to contain a population parameter"

Confidence level: confidence level is the degree of confidence, or certainty, that the researcher wants to be able to place in the confidence interval
Confidence level = 1-alpha
For results with a 90% level of confidence, the value of alpha is 1 - 0.90 = 0.10.
For results with a 95% level of confidence, the value of alpha is 1 - 0.95 = 0.05.
For results with a 99% level of confidence, the value of alpha is 1 - 0.99 = 0.01

The confidence level is the probability that the parameter being estimated by the statistic falls within the confidence interval. It is usually expressed as a percentage, but it can also take the form of a proportion (sometimes called a confidence coefficient). The confidence levels cited above were 90%, 95% and 99%. Most researchers in the social sciences select either 95%, which is very confident, or 99%, which is about as confident as we would ever need to be.

Confidence limits (also known as confidence bounds) are simply "the upper and lower values of a confidence interval, that is, the values defining the range of a confidence interval"
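Putting the pieces together: the mean (point estimate), the SEM, and a chosen confidence level give the confidence limits. A large-sample sketch using the normal approximation (a t distribution would be more exact for small n), with invented data:

```python
import math
from statistics import NormalDist, mean, stdev

# Hypothetical sample of measurements
data = [4.8, 5.1, 5.4, 4.9, 5.0, 5.2, 5.3, 4.7, 5.1, 5.0]
n = len(data)
point_estimate = mean(data)
sem = stdev(data) / math.sqrt(n)        # standard error of the mean

confidence_level = 0.95                 # so alpha = 1 - 0.95 = 0.05
z = NormalDist().inv_cdf(1 - (1 - confidence_level) / 2)

# confidence limits: the upper and lower values of the interval
lower, upper = point_estimate - z * sem, point_estimate + z * sem
print(f"{confidence_level:.0%} CI: ({lower:.3f}, {upper:.3f})")
```

Raising the confidence level (say to 99%) widens the interval: more certainty of containing the parameter costs a less precise range.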