Two-tailed hypothesis=

A two-tailed test, also known as a test of a non-directional hypothesis, is the standard test of significance, used to determine whether there is a relationship between variables in either direction. Two-tailed tests do this by dividing the alpha of .05 in two and placing half in each tail of the bell curve.

One-tailed (one direction)= tests a directional hypothesis, to determine whether there is a relationship between the two variables in one particular direction. Used when you have a good idea, based on the literature/previous experiments, that there is likely to be a directional difference between the variables.

Puts all of the alpha (p = .05) on one side, making the test more sensitive and able to detect subtle differences in that direction.

If you are using a significance level of 0.05, a two-tailed test allots half of your alpha to testing the statistical significance in one direction and half of your alpha to testing statistical significance in the other direction.

This means that .025 is in each tail of the distribution of your test statistic. When using a two-tailed test, regardless of the direction of the relationship you hypothesize, you are testing for the possibility of the relationship in both directions.

For example, we may wish to compare the mean of a sample to a given value x using a t-test. Our null hypothesis is that the mean is equal to x. A two-tailed test will test both whether the mean is significantly greater than x and whether the mean is significantly less than x. The mean is considered significantly different from x if the test statistic is in the top 2.5% or bottom 2.5% of its probability distribution, resulting in a p-value less than 0.05.

The important point is that if we make a specific prediction, then we will need a smaller test statistic to find a significant result (since we are looking in only one tail). However, if the prediction turns out to be in the wrong direction, then we will miss an effect that does exist. And if we do not predict a direction before collecting the data, it is too late to do so afterwards; in that case we can no longer claim a one-tailed test.
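The tail-splitting above can be seen numerically. The sketch below uses only the Python standard library, and for that reason uses a z-test (population SD assumed known) rather than a t-test, since the normal tail area is available via math.erfc; the sample values and sigma are made up for illustration:

```python
import math
from statistics import mean

def z_test(sample, mu0, sigma):
    """One-sample z-test: returns z, one-tailed p (upper tail), two-tailed p."""
    n = len(sample)
    z = (mean(sample) - mu0) / (sigma / math.sqrt(n))
    p_one = 0.5 * math.erfc(z / math.sqrt(2))   # P(Z >= z): area in one tail
    p_two = math.erfc(abs(z) / math.sqrt(2))    # area in both tails combined
    return z, p_one, p_two

# Hypothetical data; H0: mu = 5.0, sigma assumed known to be 0.2
z, p_one, p_two = z_test([5.3, 5.1, 5.5, 5.2, 5.4], mu0=5.0, sigma=0.2)
```

When the observed effect lies in the predicted direction, the one-tailed p-value is exactly half the two-tailed one, which is why a directional prediction makes it easier to reach p < .05.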

See: http://www.ats.ucla.edu/stat/mult_pkg/faq/general/tail_tests.htm

Standard error of the mean= an estimate of how much of the variation about the mean is due to sampling error. [The standard error is an estimate of the standard deviation of a statistic; here, it estimates the amount of variation we can expect among sample means.]
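The idea that the SEM is the standard deviation of sample means can be illustrated by simulation (a sketch with made-up numbers: 2000 samples of n = 25 from a normal population with mu = 100 and sigma = 15, so the theoretical SEM is 15/sqrt(25) = 3):

```python
import math
import random
import statistics

random.seed(0)
mu, sigma, n = 100, 15, 25

# Draw many samples and measure the spread (SD) of their means
means = [statistics.mean(random.gauss(mu, sigma) for _ in range(n))
         for _ in range(2000)]

empirical_sem = statistics.stdev(means)   # SD of the sample means
theoretical_sem = sigma / math.sqrt(n)    # sigma / sqrt(n) = 3.0
```

The empirical SD of the 2000 sample means comes out close to the theoretical value of 3.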

Confidence intervals, limits, and levels= the next step in the interpretation of standard error

Means = POINT ESTIMATES of the population parameters; giving a range (a confidence interval with an associated confidence level) is better/more realistic.

Confidence interval= the "range of values of a sample statistic that is likely (at a given level of probability, called a confidence level) to contain a population parameter"

Confidence level= the degree of confidence, or certainty, that the researcher wants to be able to place in the confidence interval

Confidence level = 1-alpha

For results with a 90% level of confidence, the value of alpha is 1 - 0.90 = 0.10.

For results with a 95% level of confidence, the value of alpha is 1 - 0.95 = 0.05.

For results with a 99% level of confidence, the value of alpha is 1 - 0.99 = 0.01

The confidence level is the probability that the parameter being estimated by the statistic falls within the confidence interval. It is usually expressed as a percentage, but it can also take the form of a proportion (sometimes called a confidence coefficient). Commonly used confidence levels are 68%, 95%, and 99%. Since the 68% confidence level is only about two-thirds certainty, most researchers in the social sciences select either 95%, which is very confident, or 99%, which is about as confident as we would ever need to be.

Confidence limits (also known as confidence bounds)= simply "the upper and lower values of a confidence interval, that is, the values defining the range of a confidence interval"
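Putting these pieces together, a 95% confidence interval for a mean can be sketched as mean +/- 1.96 x SEM (1.96 is the normal-approximation critical value for alpha = .05; a t-based interval would be slightly wider for small samples, and the sample here is made up):

```python
import math
from statistics import mean, stdev

def normal_ci(sample, z_star=1.96):
    """Confidence limits: point estimate +/- critical value * standard error."""
    m = mean(sample)
    sem = stdev(sample) / math.sqrt(len(sample))
    return m - z_star * sem, m + z_star * sem   # (lower limit, upper limit)

lower, upper = normal_ci([98, 102, 101, 99, 100, 103, 97, 100])
```

The two returned values are the confidence limits; the range between them is the confidence interval, and z_star encodes the confidence level (e.g. 2.58 would give 99%).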

http://jalt.org/test/bro_35.htm

Effect size= an objective measure of the magnitude, or importance, of the observed effect; the proportion of the total variation accounted for by the treatment effect

How much of the difference in the DV can be attributed to the IV (e.g. the impact of having two experimental conditions, in terms of the difference between the means of the two groups). It is the difference between two means, expressed in standard deviations. If there is a large overlap between the CIs of the two means, the effect size will be small; if there is little or no overlap between the CIs, the effect size will be large.

The t-test tells us whether there is a difference, the direction of the difference, and whether it is significant.

But it does not tell us HOW MUCH of this difference is due to the IV.

The effect size (d), expressed in standard deviations, can be calculated as (difference between the two means) / (mean SD of the two groups)

And interpreted using Cohen's conventions: d around 0.2 is small, around 0.5 is medium, and around 0.8 or above is large.
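The formula above can be sketched in code. The "mean SD of the two groups" is implemented here as the pooled standard deviation, one common choice; the group data are made up:

```python
import math
from statistics import mean, stdev

def cohens_d(group1, group2):
    """Effect size d: difference between means, in pooled-SD units."""
    n1, n2 = len(group1), len(group2)
    s1, s2 = stdev(group1), stdev(group2)
    # Pooled SD (assumption: equal-variance pooling across the two groups)
    pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (mean(group1) - mean(group2)) / pooled

d = cohens_d([14, 16, 15, 17, 18], [12, 13, 14, 12, 14])
```

Here the means differ by 3 points while the pooled SD is a little over 1, so d comes out well above the "large" benchmark.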

Pearson's correlation coefficient r= another measure of the magnitude/strength of an effect (the strength of the correlation), ranging from 0 to 1

PROBABILITY

Let the independent normal random variables
$$
Y_1, Y_2, \ldots, Y_n
$$
have the respective distributions
$$
N(\mu, \gamma^2 x_i^2)
$$
, i = 1, 2, ..., n, where
$$
x_1, x_2, \ldots, x_n
$$
are known but not all the same and none of which is equal to zero. Find the maximum likelihood estimators for μ and γ².
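A sketch of the derivation, via the standard approach of maximizing the log-likelihood (check against your own working):
$$
\ln L(\mu, \gamma^2) = -\frac{n}{2}\ln(2\pi\gamma^2) - \sum_{i=1}^{n}\ln|x_i| - \sum_{i=1}^{n}\frac{(y_i - \mu)^2}{2\gamma^2 x_i^2}
$$
Setting the partial derivatives with respect to μ and γ² equal to zero and solving gives
$$
\hat{\mu} = \frac{\sum_{i=1}^{n} y_i/x_i^2}{\sum_{i=1}^{n} 1/x_i^2}, \qquad \hat{\gamma}^2 = \frac{1}{n}\sum_{i=1}^{n}\frac{(y_i - \hat{\mu})^2}{x_i^2}
$$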
