the degree to which the results of a study apply to individuals and realistic behaviors outside the study
the degree to which a study provides causal information about behavior
the degree to which the results of a study can be replicated under similar conditions
the definition of an abstract concept used by a researcher to measure or manipulate the concept in a research study
data collection involving noninvasive observation of individuals in their natural environments
a measure of the extent to which different observers rate behaviors in similar ways
data collection where control is exerted over the conditions under which the behavior is observed (ex. memory, problem solving speed)
a specific type of archival data that involves analysis of what someone has said
the manipulated variable in an experiment
the presence of some other factor that affects the dependent variable and can decrease internal validity
a third variable or effect that affects all aspects of the study; NOT AUTOMATICALLY A CONFOUND
an extraneous factor that affects the results in such a way that one cannot tell what caused the effect/difference/relationship
Assignment to the different LEVELS of the independent variable
a type of research design where a comparison is made, but there is no random assignment. Ex. Men vs. Women
behavior is measured both before and after a treatment or condition is implemented
a scale of data that involves numerical responses that are equally spaced (ex. Likert scale); does not have a true 0
a scale of data that involves numerical responses that allow comparison between individual scores. (ex. reaction time). Here 0 is typically the lowest possible value.
indicates that a survey measures the behavior it is designed to measure. The WHOLE Survey.
on the surface, a study or scale appears to be intuitively valid
Order of conditions in a within-subjects design can affect data collected in different conditions. Type of validity affected: INTERNAL
Minimize: counterbalance the order of conditions; omit the pre-test or use unobtrusive measures
Multiple testing sessions (or first testing) affects subsequent testing. INTERNAL
Minimize: omit the pre-test, use unobtrusive measures, or counterbalance
Regression Toward the Mean
Extreme scores will on average be less extreme on a retest.
Minimize: Use a control group, avoid extreme scorers
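Regression toward the mean can be seen in a quick simulation. This is a hypothetical sketch (all numbers made up): each person has a stable true ability, and each test score adds random measurement error, so extreme first-test scorers look less extreme on retest.

```python
import random

random.seed(0)

# Hypothetical population: stable true ability plus per-test random error.
true_ability = [random.gauss(100, 10) for _ in range(10_000)]
test1 = [t + random.gauss(0, 10) for t in true_ability]
test2 = [t + random.gauss(0, 10) for t in true_ability]

# Select the extreme scorers on the first test (top 5%).
cutoff = sorted(test1)[int(0.95 * len(test1))]
extreme = [i for i, s in enumerate(test1) if s >= cutoff]

mean_test1 = sum(test1[i] for i in extreme) / len(extreme)
mean_test2 = sum(test2[i] for i in extreme) / len(extreme)

# On retest the extreme group's mean moves back toward the
# population mean (100), even though nothing about them changed.
print(round(mean_test1, 1), round(mean_test2, 1))
```

This is why a control group matters: without one, the drop on retest could be mistaken for a treatment effect.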
Changes due to some event that occurs during the study, which might have affected the results.
Minimize: Use a control group, possibly shorter duration of the experiment
Changes due to normal growth or predictable changes.
Minimize: Use a control group, possibly shorter duration
Loss of participants during a study; those who drop out may differ from those who continue.
INTERNAL AND EXTERNAL
Minimize: Use a large group, or follow-up procedures with a portion of those who leave the study. Or change the design (ex. cohort-sequential).
Any factor that creates groups that are not equal at the start of the study.
Minimize: Random SELECTION and Random ASSIGNMENT. If one can't, investigate covariates
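Random assignment can be sketched in a few lines. This is a hypothetical example (participant IDs and condition names are made up): shuffle the sample, then deal participants out to the levels of the independent variable so groups start out equivalent on average.

```python
import random

random.seed(42)

# Made-up sample and levels of the independent variable.
participants = [f"P{i:02d}" for i in range(1, 21)]
levels = ["control", "low dose", "high dose"]

random.shuffle(participants)          # random order
groups = {level: [] for level in levels}
for i, p in enumerate(participants):  # deal out like cards
    groups[levels[i % len(levels)]].append(p)

for level, members in groups.items():
    print(level, len(members))
```

Shuffling before dealing keeps group sizes nearly equal while still making every assignment random.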
Studying participants can change their behavior. (ex. workers' productivity increases when they know they are being studied)
Minimize: Observe participants unobtrusively, make responses anonymous, use deception/disguise true purpose of study
Participants change their behavior based on the perceived purpose of the study.
Minimize: Use a blind procedure and/or unobtrusive observation; use deception. Assess whether demand characteristics are a problem by asking participants about their perceptions of the purpose of the research. If individuals do guess the hypothesis, analyze their data separately or exclude them.
Diffusion of Treatment
Changes in participants' behavior in one condition because of information they obtained about the procedures in other conditions.
Minimize: Blind procedure, use of deception
Any preconceived idea by the researcher about how the experiment should turn out can influence the results.
Minimize: Use a blind procedure (at least single-blind; double-blind if concerned about both participant and observer effects)
Any change in the calibration of the measuring instrument over the course of the study.
Minimize: Careful specification and control of the measurement procedures. Standardized instruments, administration, data collection procedures, and training
each participant experiences all levels of the variable
Testing/Order effects more likely
each participant experiences only one level of the variable
Sample chosen such that individuals are chosen with a specific probability.
participants are not chosen with a known probability
Lowers EXTERNAL and INTERNAL validity
Probability sample: each member of the population has an equal chance of being selected using random sampling.
Advantage: reduces sampling error.
Disadvantage: difficult to ensure that each member of a large population can be chosen in a sample
Probability Sample: clusters of individuals are identified and then a subset of clusters is randomly chosen to sample from.
Advantage: Makes it easier to choose members randomly from smaller clusters, better representing the population
Disadvantage: Can ignore segments of population that are not in the clusters chosen for the sample.
Probability Sample: members of a population are selected such that the proportion of a group in the sample is equal to the proportion of that group in the population using random sampling.
Advantage: Reduces bias to an identified characteristic
Disadvantage: As with simple random sampling, it can be difficult to ensure equal probability of being chosen in a large population
Convenience: members of the population are chosen based on convenience, ex. whoever volunteers
Advantage: easier to obtain
Disadvantage: May not represent the population well due to selection bias because random sampling is not used.
Convenience: Members of the population are selected such that the proportion of a group in the sample is equal to the proportion of that group in the population
Advantage: Easier to obtain and allows for better representation
Disadvantage: May not represent population well due to selection bias
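The difference between simple random and stratified random sampling is easy to see in code. This is a hypothetical sketch with a made-up population (30% group "A", 70% group "B"): the stratified sample fixes each group's share exactly, while the simple random sample only matches it on average.

```python
import random

random.seed(1)

# Made-up population: 30% in group "A", 70% in group "B".
population = [("A" if i < 300 else "B") for i in range(1000)]

# Simple random sample: every member has an equal chance of selection.
simple = random.sample(population, 100)

# Stratified random sample: sample each group in proportion to its
# share of the population (30 from A, 70 from B).
a_members = [p for p in population if p == "A"]
b_members = [p for p in population if p == "B"]
stratified = random.sample(a_members, 30) + random.sample(b_members, 70)

print(simple.count("A"), stratified.count("A"))
```

The stratified count of "A" is exactly 30 by construction; the simple random count varies from sample to sample.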
Type I Error
Error made in a significance test when the researcher rejects the null hypothesis when it is actually true
Type II Error
Error made in a significance test when the researcher fails to reject the null hypothesis when the null hypothesis is NOT true
ability of a significance test to detect an effect or relationship if one exists.
method of testing scores' internal consistency that indicates the average correlation between scores (form of reliability)
a between-subjects experiment that involves sets of participants matched on a specific characteristic, with each member of the set randomly assigned to a different level of the independent variable.
Counterbalancing technique where the number of orders of conditions used is equal to the number of conditions in the study.
Useful when the number of conditions makes full counterbalancing of all possible orders impractical
An experiment or quasi-experiment that includes more than one independent variable. Within-Subjects variable
Time Series Design
a research design where patterns of scores over time are compared from before a treatment is implemented and after a treatment is implemented.
Interrupted Time Series Design
a time series design where the treatment is an independent event that cannot be controlled (war, a new law, etc)
Noninterrupted Time Series Design
a time series design where the "treatment" is implemented by the researcher
a developmental design where a single sample of participants is followed over time and tested at different ages.
Attrition, testing effects
a developmental design where multiple samples of participants of different ages are tested once
The best of longitudinal and cross-sectional
multiple samples of participants of different ages over time and tested at different ages
Criterion Related Validity
determining the validity of scores by examining the relationship between the survey scores and other established measures of behavior of interest. (Ex. SAT and ACT) Predictive Validity
Problems with Survey Research
Knowledge vs. Recall
Ways to increase power
increase sample size
increase effect size (ex. double dosage of medicine)
within-subjects instead of between
If you have to do between use a matched design
The individual ITEMS on a survey/test measure what is supposed to be being measured.
the range of data is restricted in some way and there is no variability in the data.
participation is voluntary
must avoid unnecessary harm
can leave at any point
benefits outweigh risks
debrief purpose and benefits