Research Methods in Psychology: Chapter 10
Terms in this set (76)
In psychological science, the term experiment specifically means that the researchers manipulated at least one variable and measured another
A variable that is controlled, such as when the researchers assign participants to a particular level (value) of the variable
Variables that take the form of records of behavior or attitudes, such as self-reports, behavioral observations, or physiological measures.
- The manipulated variable in an experiment
- The name comes from the fact that the researcher has some "independence" in assigning people to different levels of this variable.
The levels of a study's independent variables
- The measured variable
- Also known as the outcome variable
- How a participant acts on the measured variable depends on the level of the independent variable.
When researchers are manipulating an independent variable
They need to make sure they are varying only one thing at a time
- Any variable that an experimenter holds constant
- Technically not variables because they do not vary
- Important for establishing internal validity
- Allow researchers to separate one potential cause from another, thus eliminating alternative explanations for the results
Three rules for establishing causation:
1.) Covariance
2.) Temporal precedence
3.) Internal validity
Minimum requirements for a study to be an experiment
A measured variable and manipulated variable
If independent variables do not vary
covariance cannot be established
A level of an independent variable that is intended to represent "no treatment" or a neutral condition
The other level or levels of the independent variable when a study has a control group
A control group that is exposed to an inert treatment, such as a sugar pill
All experiments need a ____________________ group but do not necessarily need a _______________________ group
To establish temporal precedence
Experimenters control which variable came first. By manipulating the independent variable, the experimenter virtually ensures that the cause came before the effect.
Establishing temporal precedence
allows experiments to be superior to correlational designs because experiments unfold over time
A well-designed experiment
establishes internal validity, which is one of the most important validities to interrogate when you encounter causal claims
To be internally valid
a study must ensure that the causal variable, and not other factors, is responsible for the change in the effect variable
Alternative explanations that are potential threats to internal validity
An experimenter's mistake in designing the independent variable; it is the second variable that happens to vary systematically along with the intended independent variable and therefore is an alternative explanation for the results; a classic threat to internal validity
In an experiment, the levels of a variable coinciding in some predictable way with experimental group membership, creating a potential confound.
In an experiment, when the levels of a variable fluctuate independently of experimental group membership. This unsystematic variability can obscure, or make it difficult to detect, differences in the dependent variable
- An effect that occurs in an experiment when the kinds of participants in one level of the independent variable are systematically different from those in the other.
- Can also result if the experimenters assign one type of person to one condition and another type of person to another condition
- Can also occur when experimenters let participants choose which group they want to be in
- The use of a random method (e.g., flipping a coin) to assign participants into different experimental groups
- Assigning participants at random to different levels of the independent variable - by flipping a coin, rolling a die, or using a random number generator - controls for all sorts of potential selection effects
- Random assignment does not usually create numbers that are perfectly even; however, it can often result in fairly even distributions
- A way of desystematizing the types of participants who end up in each level of the independent variable
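The coin-flip logic described above can be sketched in a few lines of Python (a hypothetical illustration; the function name and participant labels are invented for the example):

```python
import random

def random_assignment(participants, conditions=("treatment", "control")):
    """Assign each participant to a condition at random,
    like flipping a coin for each person."""
    return {person: random.choice(conditions) for person in participants}

# Ten hypothetical participants; as the cards note, group sizes
# will usually come out close to even, but not perfectly even.
groups = random_assignment([f"P{i}" for i in range(1, 11)])
```

Because each assignment is an independent "coin flip," this sketch controls for selection effects without guaranteeing equal group sizes.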
In the simplest type of random assignment
researchers assign participants at random to one condition or another in the experiment
In practice, random assignment
does not always work perfectly, especially when the samples are on the small side
- An experimental design technique in which participants who are similar on some measured variable are grouped into sets; the members of each matched set are then randomly assigned to different experimental conditions. Also known as matching
- Some researchers choose to use this method when the study involves a small group of participants
- Matching has the advantage of randomness: because each member of the matched set is randomly assigned, the technique prevents selection effects.
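As a rough sketch (the function name and scores are invented here), matched-groups assignment can be implemented by ranking participants on the matching variable, slicing them into matched sets, and shuffling conditions within each set:

```python
import random

def matched_groups(participants, score, n_conditions=2):
    """Rank participants on a matching variable, form matched sets
    of similar participants, then randomly assign the members of
    each set to different conditions."""
    ranked = sorted(participants, key=score)
    assignment = {}
    for i in range(0, len(ranked), n_conditions):
        matched_set = ranked[i:i + n_conditions]
        conditions = list(range(n_conditions))
        random.shuffle(conditions)  # random assignment within the set
        for person, condition in zip(matched_set, conditions):
            assignment[person] = condition
    return assignment

# Hypothetical IQ scores as the matching variable.
iq = {"A": 110, "B": 95, "C": 120, "D": 98}
groups = matched_groups(iq, score=iq.get)
```

The two lowest scorers form one matched set and the two highest another, so each condition ends up with one low and one high scorer, while the coin flip inside each set still guards against selection effects.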
An experimental design technique in which different groups of participants are placed into different levels of the independent variable.
An experimental technique in which there is only one group of participants, and each person is presented with all levels of the independent variable
Two basic forms of independent-groups designs
posttest-only design and the pretest/posttest design
- An experimental design in which participants are randomly assigned to independent variable groups and are tested on the dependent variable once
- This design satisfies all three criteria for causation. They allow researchers to test for covariance by detecting differences in the dependent variable. They establish temporal precedence because the independent variable comes first in time. And when they are conducted well, they establish internal validity
An experimental design in which participants are randomly assigned to at least two groups and are tested on the key dependent variable twice - once before and once after exposure to the independent variable.
Researchers might use a pretest/posttest design when
they want to evaluate whether random assignment made the groups equal
A pretest/posttest design
works well to track how participants in the experimental groups have changed over time in response to some manipulation
The posttest-only design may be
the most basic type of independent-groups experiment, but its combination of random assignment plus a manipulated variable can lead to powerful causal conclusions
The pretest/posttest design
adds a pretesting step to the most basic independent-groups design.
Researchers might use a pretest/posttest design if
they want to be extra sure that the two groups were equivalent at pretesting - as long as the pretest does not make the participants change their more spontaneous behavior
An experimental design in which participants are exposed to all the levels of an independent variable at roughly the same time, and a single attitudinal or behavioral preference is the dependent variable.
A type of within-groups design in which participants are measured on a dependent variable more than once - that is, after exposure to each level of the independent variable.
The principal advantage of a within-groups design
is that it ensures the participants in the two groups will be equivalent; after all, they are the same participants
Within-groups designs also give researchers more power to notice differences between conditions and can be attractive because they generally require fewer participants overall
Power refers to the ability of a study to show a statistically significant result when an independent variable truly has an effect in the population
Because within-groups designs enable researchers to manipulate an independent variable and incorporate comparison conditions
they provide an opportunity to establish covariance
In a within-groups design, a threat to internal validity in which exposure to one condition changes participants' responses to a later condition
An effect in which a long sequence might lead participants to get better at the task, or to get tired or bored toward the end
An effect in which some form of contamination carries over from one condition to the next.
- When researchers present the levels of the independent variable to participants in different orders
- With counterbalancing, any order effects should cancel each other out when all the data are collected
When researchers counterbalance conditions (or levels) in a within-groups design
they have to split their participants into groups; each group receives one of the condition sequences
When all possible condition orders are represented in a within-groups experiment.
When only some of the possible condition orders are represented.
One way to partially counterbalance is to
present the conditions in a randomized order for each subject
A technique for partial counterbalancing that ensures each condition appears in each position at least once
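Both flavors of counterbalancing can be generated programmatically. As a sketch (function names are my own), full counterbalancing enumerates every possible order, while a cyclic scheme, often called a Latin square, gives each condition each serial position exactly once:

```python
from itertools import permutations

def full_counterbalance(conditions):
    """Every possible condition order (full counterbalancing)."""
    return list(permutations(conditions))

def cyclic_orders(conditions):
    """Partial counterbalancing via a cyclic (Latin-square-style)
    scheme: each condition appears in each serial position once."""
    n = len(conditions)
    return [[conditions[(row + col) % n] for col in range(n)]
            for row in range(n)]

# With four conditions, the cyclic scheme needs only 4 orders
# instead of the 24 required by full counterbalancing.
orders = cyclic_orders(["A", "B", "C", "D"])
```

Each participant group would then receive one of the generated sequences, letting order effects cancel out across groups.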
Within-groups designs have three main disadvantages:
(1) repeated-measures designs have the potential for order effects, which can threaten internal validity
(2) a within-groups design might not be possible or practical
(3) when people see all levels of the independent variable, they may change the way they would normally act
A threat to internal validity that occurs when some cue leads participants to guess a study's hypotheses or goals
In a true within-groups design
participants are exposed to all levels of a meaningful independent variable
In a pretest/posttest design
participants see only one level of the independent variable, not all levels
In an experiment
researchers operationalize two constructs: the independent variable and the dependent variable
To interrogate the construct validity of the independent variables
you would ask how well the researchers manipulated (or operationalized) them
To evaluate a manipulation
you can simply assess its face validity. Does the manipulation (operationalization) fit the researchers' definition of the construct?
Another way to evaluate a manipulation's construct validity
is to see whether and how other researchers have used this manipulation before
An extra dependent variable that researchers can insert into an experiment to help them quantify how well an experimental manipulation worked
- A simple study, using a separate group of participants, that is completed before (or sometimes after) conducting the study of primary interest
- Researchers may use pilot study data to confirm the effectiveness of their manipulations
When evaluating the construct validity of an experiment
- you assess the quality of two operationalizations: one for the independent variable and one for the dependent variable
- Testing and collecting additional data are ways researchers can show results that support their theory
When you are interrogating the construct validity of an experiment
you can ask what evidence shows that the manipulations and measures actually represent the intended constructs in the theory
When interrogating a causal claim's external validity
- you should ask how the experimenters recruited their participants
- you should ask about random sampling - randomly gathering a sample from a population
When interrogating internal validity
you should ask about random assignment - randomly assigning each participant in a sample into one experimental group or another
To get a clean manipulation
Researchers may have to conduct their studies in an artificial environment, such as a university laboratory, even though studies conducted there may not be representative of how people behave in the real world
When the difference between conditions is not statistically significant
you cannot conclude that there is covariance - you cannot conclude that the independent variable had a detectable effect on the dependent variable
If there is no covariance
the study does not support a causal claim
Three fundamental internal validity questions that are worth asking for any experiment:
(1) Did the experimental design ensure that there were no design confounds or did some other variable accidentally covary along with the intended independent variable?
(2) If the experimenters used an independent-groups design, did they control for selection effects by using random assignment or matching?
(3) If the experimenters used a within-groups design, did they control for order effects by counterbalancing?
Why does Max's experiment satisfy the causal criterion of temporal precedence?
b. Because the participants shook the experimenter's hand before rating her friendliness
In Max's experiment described above, what was a control variable
d. The standard greeting the experimenter used while shaking hands
What type of design is Max's experiment?
a. Posttest-only design
Max randomly assigned people to shake hands either with the "warm hands" experimenter or the "cold hands" experimenter. Why did he randomly assign participants?
c. Because he wanted to avoid selection effects
Which of the following questions would be interrogating the construct validity of Max's experiment?
b. How well did Max's "experimenter friendliness" rating capture participants' actual impressions of the experimenter?