23 terms

Inferential Statistics

Used to interpret and draw general conclusions about research findings.

What do inferential stats do / what question do inferential stats answer?

*Help us rule out the possibility that our results are due to random chance

"What does my data say about what I expected?"

"What is the probability that the result might have occurred by chance alone?"

Core logic of hypothesis testing: what do researchers assume? what do researchers "test against" / seek to discount?

The researcher assumes that the sample mean is NOT different from the population mean.

*If my sample isn't different from the population, then what is the probability that I would have gotten the sample mean that I found?

Null hypothesis (what is it, be able to produce a null hypothesis from a research example)

Initial assumption that the researcher tests. The researcher tries to refute/reject this by "rejecting the null"

It's the opposite of the theory.

μ₁ = μ₂

How is the null hypothesis notated?

H₀

μ₁ = μ₂

Alternative hypothesis (what is it, be able to produce an alternative hypothesis from research example)

The opposite of the null hypothesis.

What the experimenter desired or expected all along.

How is the alternative hypothesis notated?

H₁

μ₁ ≠ μ₂

Steps involved in hypothesis-testing in psychology

1. start by formulating a theory & specifying your hypothesis

2. collect a sample

3. run an inferential statistical test on the sample (one-sample z-test or one-sample t-test); the test statistic yields a probability value (p-value)

4. evaluate the probability of the results / what are the chances the results are a fluke?

5. decide whether or not you want to reject the null hypothesis and make an inference about the theory (based on the p < .05 cutoff)

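The five steps above can be sketched in Python. This is a minimal example with made-up numbers: a hypothetical memory-training study scored against a known population (μ = 100, σ = 15), so a one-sample z-test applies.

```python
from statistics import NormalDist

# Hypothetical data: IQ-style scores after a memory-training program.
# Population parameters are assumed known: mu = 100, sigma = 15.
mu, sigma = 100, 15

# Steps 1-2: theory says training raises scores; collect a sample (n = 25).
sample = [104, 110, 98, 107, 112, 101, 109, 105, 99, 111,
          106, 103, 108, 100, 113, 102, 105, 114, 105, 110,
          108, 104, 101, 109, 107]
n = len(sample)
x_bar = sum(sample) / n

# Step 3: one-sample z-test -> test statistic.
z = (x_bar - mu) / (sigma / n ** 0.5)

# Step 4: two-tailed p-value -- the chance of a result at least this
# extreme if the null (no difference) were true.
p = 2 * (1 - NormalDist().cdf(abs(z)))

# Step 5: decide using the alpha = .05 cutoff.
alpha = 0.05
decision = "reject the null" if p < alpha else "retain the null"
print(f"z = {z:.2f}, p = {p:.4f} -> {decision}")
```
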
Inferential test statistics: One-sample z-test

Tests the likelihood that the difference between your obtained sample and the population mean is due to chance

Use when the sample is sufficiently large and sigma is known

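As a sketch, the z-test statistic is just the sample mean's distance from μ measured in standard-error units (the function name and numbers here are illustrative):

```python
def one_sample_z(x_bar, mu, sigma, n):
    # z = (sample mean - population mean) / (sigma / sqrt(n))
    return (x_bar - mu) / (sigma / n ** 0.5)

# e.g., a sample of 36 people with mean 105, against mu = 100, sigma = 15
z = one_sample_z(105, 100, 15, 36)
```

For a two-tailed test, |z| ≥ 1.96 corresponds to p < .05.
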
Inferential test statistics: One-sample t-test

Does the same thing as a z-test, but is used when σ is unknown, primarily for small samples (less than 25)
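
A minimal sketch of the t statistic: same shape as z, but with the sample standard deviation (n − 1 denominator) standing in for σ. In practice a library routine such as scipy.stats.ttest_1samp would also return the p-value; the sample below is made up.

```python
from statistics import mean, stdev

def one_sample_t(sample, mu):
    # t = (sample mean - mu) / (s / sqrt(n)), where s is the
    # sample standard deviation (n - 1 denominator).
    n = len(sample)
    return (mean(sample) - mu) / (stdev(sample) / n ** 0.5)

# small hypothetical sample (n = 5) tested against mu = 5
t = one_sample_t([5, 7, 6, 8, 9], 5)
```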

Know when to use a z vs. t-test

Z-test: when sample size is sufficiently large and we know sigma (σ)

t-test: when the sample is small and/or σ is unknown

Know the difference between a z-score and a z-test

...

p-value (know what it is and how to interpret)

used to evaluate the likelihood your results emerged due to chance

*results from your findings, not set by researcher

What does p< .05 mean? What does p> .05 mean?

p < .05 - the probability that your finding is due to chance is less than 5% (it's not very likely)

p > .05 - the probability that your finding is due to chance is greater than 5%

Alpha level

the cut-off point for deciding whether or not we think our results are due to chance

*typically set at .05

*researcher dictates this

Difference between alpha and p-value

alpha level - what researcher decides is a cutoff point

p-value - what you get when you run an inferential test, it's the actual likelihood your results could be due to chance

How is the p-value expressed?

always expressed as a decimal (e.g., p = .03)

Statistically significant vs. non-significant results

significant - p < .05

non-significant - p > .05

Know when to reject vs. retain the null hypothesis

If results are significant (p < .05): REJECT

If results are non-significant (p > .05): RETAIN
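
The reject/retain rule is mechanical enough to write down as a one-line sketch (function name is illustrative):

```python
def decide(p_value, alpha=0.05):
    # reject the null when p < alpha; otherwise retain it
    return "reject" if p_value < alpha else "retain"
```

Note that changing alpha changes the decision for the same p-value: decide(0.03) rejects at α = .05 but retains at α = .01.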

Type 1 and Type 2 errors

Type 1: reject the null hypothesis when it's true / inferring your theory is right when it's wrong

Type 2: fail to reject the null when it's false / inferring your theory is wrong when it's actually right

How do you decrease the probability of a type 1 error?

Set a lower significance level (e.g., α = .001)

How do you decrease the probability of a type 2 error?

Set a more lenient significance level (e.g., α = .10)
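
One way to see why alpha controls the Type 1 error rate: simulate many studies where the null is actually true, and count how often p < alpha anyway. This is a sketch with made-up parameters; the false-rejection rate should land near alpha.

```python
import random
from statistics import NormalDist

random.seed(42)  # reproducible simulation
mu, sigma, n = 100, 15, 25   # hypothetical population and sample size
alpha = 0.05
trials = 5000

false_rejections = 0
for _ in range(trials):
    # Each sample is drawn from the null population itself,
    # so every rejection here is a Type 1 error.
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    z = (sum(sample) / n - mu) / (sigma / n ** 0.5)
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    if p < alpha:
        false_rejections += 1

type1_rate = false_rejections / trials  # should be close to alpha
```

Lowering alpha (say, to .001) shrinks this false-rejection rate, at the cost of making Type 2 errors more likely.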

Be able to describe what each error is and explain what they are in context of an example (e.g., if given 2 hypotheses, be able to say what Type 1 and 2 errors would be given those hypotheses and which one you'd want to minimize)

...