Experimental Psychology Final
Terms in this set (44)
Analysis of Variance (ANOVA)
A statistical test for analyzing data from experiments that is especially useful when the experiment has more than one independent variable or more than two levels of an independent variable.
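To make the arithmetic concrete, here is a minimal stdlib-Python sketch of the F ratio a one-way ANOVA computes: between-groups variance divided by within-groups variance. The data and group sizes are hypothetical, chosen only for illustration.

```python
from statistics import mean

def one_way_anova_f(*groups):
    """F ratio = between-groups variance (MSB) / within-groups variance (MSW)."""
    all_scores = [score for group in groups for score in group]
    grand_mean = mean(all_scores)
    k = len(groups)          # number of conditions
    n = len(all_scores)      # total number of scores

    # Between-groups sum of squares: how far each group mean is from the grand mean
    ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)
    # Within-groups sum of squares: random error around each group's own mean
    ss_within = sum((score - mean(g)) ** 2 for g in groups for score in g)

    ms_between = ss_between / (k - 1)   # df between = k - 1
    ms_within = ss_within / (n - k)     # df within = N - k
    return ms_between / ms_within

# Three hypothetical conditions; the third group's mean is clearly higher
print(one_way_anova_f([1, 2, 3], [2, 3, 4], [6, 7, 8]))  # 21.0
```

A large F means the group means differ far more than random error alone would predict.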
Confounding Variables
Variables, other than the independent variable, that may be responsible for the differences between your conditions.
Construct Validity
The degree to which a study, test, or manipulation measures and/or manipulates what the researcher claims it does.
Content Validity
The extent to which a measure represents a balanced and adequate sampling of relevant dimensions, knowledge, and skills. In many measures and tests, participants are asked a few questions from a large body of knowledge. A test has this if its content is a fair sample of the larger body of knowledge.
Control Group
Participants who are randomly assigned to not receive the experimental treatment. These participants are compared to the treatment group to determine whether the treatment had an effect.
Demographics
Characteristics of a group, such as gender, age, and social class.
Empty Control Group
A group that does not get any kind of treatment; the group gets nothing, not even a placebo. Because of the experimenter biases that may result from such a group, you will usually want to avoid using one.
Experimental Hypothesis
A prediction that the treatment will cause an effect.
Experimental Group
Participants who are randomly assigned to receive the treatment.
External Validity
The degree to which the results of a study can be generalized to other participants, settings, and times.
Functional Relationship
The shape of a relationship. Depending upon this between the independent and dependent variable, a graph of the relationship may look like a straight line, a U, an S, or some other shape.
Hypothesis
A testable prediction about the relationship between two or more variables.
Hypothesis-Guessing
When participants alter their behavior to conform to their guess as to what the research hypothesis is. This can be a serious threat to construct validity, especially if participants guess correctly.
Independent Random Assignment
Randomly determining for each individual participant which condition he will be in. For example, you might flip a coin for each participant to determine what group he will be assigned.
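The coin-flip procedure can be sketched in a few lines of Python; the participant names and the optional seed (used only to make a demonstration repeatable) are illustrative.

```python
import random

def independently_assign(participants, seed=None):
    """Simulate a coin flip for each participant, independently of the others."""
    rng = random.Random(seed)
    return {p: ("treatment" if rng.random() < 0.5 else "control")
            for p in participants}

groups = independently_assign(["P1", "P2", "P3", "P4", "P5", "P6"])
# Each participant lands in "treatment" or "control" regardless of the others,
# so the two groups are not guaranteed to end up equal in size.
```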
Internal Consistency
The degree to which each question on a scale correlates with the other questions. This is high if answers to each item correlate highly with answers to all other items.
Internal Validity
The degree to which a study establishes that a factor causes a difference in behavior. If a study lacks this, the researcher may falsely believe that a factor causes an effect when it doesn't.
Linear Relationship
A relationship between an independent and dependent variable that is graphically represented by a straight line.
Null Hypothesis
The hypothesis that there is no relationship between two or more variables. The null hypothesis can be disproven, but it cannot be proven.
Null Results
Results that fail to disconfirm the null hypothesis; results that fail to provide convincing evidence that the factors are related. Null results are inconclusive because the failure to find a relationship could be due to your design lacking the power to find it. In other words, many null results are Type 2 errors.
Operational Definition
A publicly observable way to measure or manipulate a variable; a "recipe" for how you are going to measure or manipulate your factors.
Random Error
Variations in scores due to unsystematic, chance factors.
Random Sample
A sample that has been randomly selected from a population. If you randomly select enough participants, the results will be fairly representative of the entire population. Random sampling is often used to maximize a study's external validity; it does not promote internal validity.
Reliability
A general term, often referring to the degree to which a participant would get the same score if retested. It can also refer to the degree to which scores are free from random error. A measure can be reliable but not valid; however, a measure cannot be valid if it is not reliable.
Standard Deviation
A measure of the extent to which individual scores deviate from the population mean. The more scores vary from each other, the larger this will tend to be.
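A quick stdlib-Python illustration: two hypothetical score sets share the same mean, but the set whose scores deviate more from that mean has the larger standard deviation.

```python
from statistics import pstdev  # population standard deviation

tight = [4, 5, 5, 6]    # scores cluster near the mean of 5
spread = [1, 3, 7, 9]   # same mean of 5, but scores vary far more

# The more scores deviate from the mean, the larger the standard deviation
print(pstdev(tight))   # ~0.71
print(pstdev(spread))  # ~3.16
```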
Statistically Significant
When a statistical test says that the relationship we have observed is probably not due to chance alone, we say that the results are this. In other words, because the relationship is probably not due to chance, we conclude that there probably is a real relationship between our variables.
T-test
The most common way of analyzing data from a simple experiment. It involves computing a ratio between two things: (1) the difference between your group means, and (2) the standard error of the difference (an index of the degree to which group means could differ by chance alone).
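That ratio can be sketched in stdlib Python. This is the pooled-variance form for two independent groups (it assumes the two populations have equal variances), and the scores are hypothetical.

```python
from math import sqrt
from statistics import mean, variance

def independent_t(group1, group2):
    """t = (difference between group means) / (standard error of the difference)."""
    n1, n2 = len(group1), len(group2)
    # Pool the two sample variances (assumes equal population variances)
    pooled = ((n1 - 1) * variance(group1) + (n2 - 1) * variance(group2)) / (n1 + n2 - 2)
    se_diff = sqrt(pooled * (1 / n1 + 1 / n2))
    return (mean(group1) - mean(group2)) / se_diff

# Hypothetical treatment vs. control scores
print(round(independent_t([4, 5, 6], [1, 2, 3]), 3))  # 3.674
```

The larger the t value, the less likely the difference between the group means is due to chance alone.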
Type 1 Error
Rejecting the null hypothesis when it is in fact true. In other words, declaring a difference statistically significant when the difference is really due to chance.
Type 2 Error
Failure to reject the null hypothesis when it is in fact false. In other words, failing to find a relationship between your variables when there really is a relationship between them.
Three reasons participants might change between pretest and posttest
Maturation, history, and testing.
In experimental research
In this, the variables are manipulated rather than merely observed.
Two levels of the independent variable
In the simple experiment there are two of these values. These can be two different types of treatment or amounts of treatment administered.
The difference between the experimental group and the control group
One group receives the treatment (the administration of the independent variable), and the other serves as a comparison for the results.
Why you shouldn't let your groups become "groups"
Doing so creates a threat to internal validity.
When to use a T-test
Use this when analyzing data from a simple experiment.
Within-groups variability
Reflects the effects of random error.
2x2 factorial experiment
An experiment with two independent variables, each of which has two levels.
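As an illustration, here is a sketch with hypothetical cell means (the factor names "drug" and "therapy" are made up) showing how main effects and an interaction fall out of a 2x2 design.

```python
# Hypothetical cell means for a 2x2 factorial: drug (IV A) x therapy (IV B)
cells = {
    ("no_drug", "no_therapy"): 10, ("no_drug", "therapy"): 20,
    ("drug",    "no_therapy"): 30, ("drug",    "therapy"): 60,
}

# Main effect of the drug: average each row of the design, then compare
drug_effect = ((cells[("drug", "no_therapy")] + cells[("drug", "therapy")]) / 2
               - (cells[("no_drug", "no_therapy")] + cells[("no_drug", "therapy")]) / 2)

# Interaction: does therapy's effect depend on the level of the drug factor?
therapy_without_drug = cells[("no_drug", "therapy")] - cells[("no_drug", "no_therapy")]  # 10
therapy_with_drug = cells[("drug", "therapy")] - cells[("drug", "no_therapy")]           # 30
interaction = therapy_with_drug - therapy_without_drug

print(drug_effect)  # 30.0
print(interaction)  # 20 (nonzero, so an interaction is present)
```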
Mixed design
An experimental design that has at least one within-subjects factor and one between-subjects factor.
Power
The ability to find the differences between conditions.
Independent variable
The variable being manipulated by the experimenter. Participants are assigned to a level of this by independent random assignment.
When to use the ANOVA
Use this when there is more than one independent variable or more than two levels of an independent variable.
Between-groups variance
Indicates the extent to which the group means vary or differ.
In correlational research
In this, variables are observed rather than manipulated.
Matched pairs design
An experimental design in which the participants are paired off by matching them on some variable assumed to be correlated with the dependent variable.
Dependent variable
The factor that the experimenter predicts will be affected by the independent variable; the participants' response that the experimenter is measuring.