Research Methods
Terms in this set (29)
Statistical Regression
The tendency of extreme scores on any measure to revert (or regress) toward the mean of a distribution when the measure is administered a second time. Regression is a function of the amount of error in the measure and the test-retest correlation.
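The definition above can be made concrete with a short simulation. This is an illustrative sketch, not part of the original set: scores are sampled with measurement error, people who score extremely on the first test are selected, and their mean on a second administration falls back toward the population mean.

```python
# Illustrative sketch (assumed numbers, not from the original notes):
# regression toward the mean under repeated testing.
import random
import statistics

rng = random.Random(0)  # fixed seed for reproducibility

POP_MEAN = 100   # population mean of the true scores
TRUE_SD = 15
ERROR_SD = 10    # measurement error is what drives the regression effect

def observe(true_score):
    # observed score = stable true score + random measurement error
    return true_score + rng.gauss(0, ERROR_SD)

true_scores = [rng.gauss(POP_MEAN, TRUE_SD) for _ in range(5000)]
test1 = [observe(t) for t in true_scores]
test2 = [observe(t) for t in true_scores]

# Select people with extreme scores on the first administration.
extreme = [(s1, s2) for s1, s2 in zip(test1, test2) if s1 > 130]
mean_first = statistics.mean(s1 for s1, _ in extreme)
mean_retest = statistics.mean(s2 for _, s2 in extreme)
# mean_retest sits between mean_first and POP_MEAN: the extreme group
# "regresses" toward the population mean on retest.
```

The larger the error variance relative to true-score variance, the stronger the regression effect, consistent with the note that regression is a function of the amount of error in the measure.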
External Validity
The extent to which the results can be generalized or extended to persons, settings, times, measures, and characteristics other than those in this particular experiment.
Incremental Validity
A type of validity used to determine whether a new psychometric assessment increases the predictive ability of an existing method of assessment. In other words, incremental validity asks whether the new test adds information beyond what could be obtained with simpler, already existing methods.
Internal Validity
The extent to which the experimental manipulation or intervention, rather than extraneous influences, can account for the results, changes, or group differences.
Strong Inference
Strong inference is a model of scientific inquiry that emphasizes the need for alternative hypotheses, rather than a single hypothesis, in order to avoid confirmation bias.
The method, very similar to the scientific method, is described as:
1. Devising alternative hypotheses;
2. Devising a crucial experiment (or several of them), with alternative possible outcomes, each of which will, as nearly as possible, exclude one or more of the hypotheses;
3. Carrying out the experiment so as to get a clean result;
4. Recycling the procedure, making subhypotheses or sequential hypotheses to refine the possibilities that remain, and so on.
Construct Validation
Construct validity refers to whether a scale measures or correlates with the theorized psychological scientific construct (e.g., "fluid intelligence") that it purports to measure. In other words, it is the extent to which what was to be measured was actually measured.
In the context of experimental design, this refers to a type of experimental validity that pertains to the interpretation or bias of the effect that was demonstrated in an experiment. In the context of a psychological assessment, it refers to the extent to which a measure has been shown to assess the construct of interest.
Pretest Sensitization
Administration of a pretest may alter the influence of the experimental condition that follows.
Solomon Four Group Design
An experimental design that is used to evaluate the effect of pretesting. The design can be considered as a combination of the pretest-posttest control group design and a posttest-only design in which pretest (provided versus not provided) and the experimental intervention (treatment vs. no treatment) are combined.
Moderator
A variable that influences the relationship of two variables of interest. The relationship between the variables (A and B) changes or is different as a function of some other variable (sex, age, ethnicity).
Mediator
The process, mechanism or means through which a variable produces a particular outcome. Beyond knowing that A may cause B, the mechanism elaborates precisely what happens (biologically or psychologically) that explains how B results.
Latent Variable
Variables that are not directly observed but are rather inferred (through a mathematical model) from other variables that are observed (directly measured). Mathematical models that aim to explain observed variables in terms of latent variables are called latent variable models.
Spuriousness
A mathematical relationship in which two events or variables have no direct causal connection, yet it may be wrongly inferred that they do, due either to coincidence or to the presence of a third, unseen factor (referred to as a "confounding factor" or "lurking variable"). Suppose a correlation is found between A and B. Aside from coincidence, there are three possible relationships:
A causes B,
B causes A,
OR
C causes both A and B.
In the last case there is a spurious correlation between A and B. In a regression model where A is regressed on B but C is actually the true causal factor for A, this misleading choice of independent variable (B instead of C) is called specification error.
Because correlation can arise from the presence of a lurking variable rather than from direct causation, it is often said that "Correlation does not imply causation".
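The "C causes both A and B" case can be simulated directly (an invented example, not from the original text): A and B correlate strongly even though neither causes the other, and the correlation disappears once C is partialled out.

```python
# Illustrative sketch: a spurious A-B correlation produced by a lurking
# variable C that causes both A and B.
import random

rng = random.Random(1)
n = 2000

def pearson(xs, ys):
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

C = [rng.gauss(0, 1) for _ in range(n)]   # the lurking variable
A = [c + rng.gauss(0, 0.5) for c in C]    # A depends on C, not on B
B = [c + rng.gauss(0, 0.5) for c in C]    # B depends on C, not on A

r_ab = pearson(A, B)                      # sizeable, despite no A->B link
resid_a = [a - c for a, c in zip(A, C)]   # A with C's influence removed
resid_b = [b - c for b, c in zip(B, C)]
r_given_c = pearson(resid_a, resid_b)     # near zero once C is removed
```

Regressing A on B here, rather than on the true cause C, would be exactly the specification error described above.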
Effect Size
A measure of the strength or magnitude of an experimental effect. Also, a way of expressing the difference between conditions (treatment vs. control) in a common metric across measures and studies. The method is based on computing the difference between the means of interest on a particular measure and dividing this by the standard deviation (the pooled standard deviation of the conditions).
An effect size is a measure of the strength of the relationship between two variables in a statistical population, or a sample-based estimate of that quantity
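The pooled-standard-deviation formula described above can be sketched in a few lines (the score lists are invented purely for illustration):

```python
# Illustrative sketch of an effect-size computation (Cohen's d style):
# d = (mean_treatment - mean_control) / pooled standard deviation.
import statistics

def effect_size(treatment, control):
    n1, n2 = len(treatment), len(control)
    v1 = statistics.variance(treatment)  # sample variances (n - 1 denominator)
    v2 = statistics.variance(control)
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(treatment) - statistics.mean(control)) / pooled_sd

# Hypothetical scores, for illustration only.
treatment = [24, 27, 25, 30, 28, 26]
control = [20, 22, 21, 23, 19, 24]
d = effect_size(treatment, control)  # positive: treatment mean is higher
```

Because the difference is divided by a standard deviation, the result is unit-free and can be compared across measures and studies.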
Maturation
Processes within the individual, reflecting changes over time that may serve as a threat to internal validity.
Non-equivalent Control Group Design
A group used in quasi-experiments to rule out or make less plausible specific threats to internal validity. The group is referred to as nonequivalent because it is not formed through random assignment in the investigation.
Falsification
The act of disproving a proposition, hypothesis, or theory. Falsificationism strives for falsification of hypotheses instead of proving them.
E.g.: The assertion that "all swans are white" is falsifiable, because one could empirically observe a swan that is not white. However, not all statements that are falsifiable in principle are falsifiable in practice. For example, "it will be raining here in one million years" is theoretically falsifiable but not practically so.
The concept was made popular by Karl Popper.
Multi-trait, Multi-method matrix
An approach to examining Construct Validity developed by Campbell and Fiske. The set of correlations obtained by administering several measures to the same subject. These measures include two or more methods (self-report, direct observation). The purpose of the matrix is to evaluate convergent and discriminant validity and to separate trait from method variance.
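The logic of the matrix can be shown with a toy simulation (assumed numbers, not Campbell and Fiske's data): two traits are each measured by two methods, and convergent correlations (same trait, different methods) come out larger than heterotrait-monomethod correlations (different traits, same method).

```python
# Illustrative sketch: a toy multitrait-multimethod setup.
import random

rng = random.Random(6)
n = 2000

def pearson(xs, ys):
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

traits = [[rng.gauss(0, 1) for _ in range(n)] for _ in range(2)]
methods = [[rng.gauss(0, 1) for _ in range(n)] for _ in range(2)]

def measure(trait, method):
    # each score = trait variance + some shared method variance + error
    return [t + 0.5 * m + 0.5 * rng.gauss(0, 1)
            for t, m in zip(trait, method)]

scores = {(t, m): measure(traits[t], methods[m])
          for t in (0, 1) for m in (0, 1)}

convergent = pearson(scores[(0, 0)], scores[(0, 1)])        # same trait, different methods
heterotrait_mono = pearson(scores[(0, 0)], scores[(1, 0)])  # different traits, same method
```

The gap between the two correlations is what lets the matrix separate trait variance from method variance.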
Multiple Operationalism
It refers to the use of two or more measures (rather than just one measure) to represent a construct; defining a construct by several measures or in several ways. Typically, researchers are interested in a general construct (depression, anxiety) and seek relations among variables that are evident beyond any single operation or measure used to define the construct (opp. of single operationism).
Selection X History
One of the groups has a historical experience (exposure to some event outside of the investigation) that the other group did not have and that experience might plausibly explain the results. The threat of history is selective and applies to only one (or some but not all) of the groups.
Relationship between Reliability and Validity
Reliability: Reliability is concerned with questions of stability and consistency: does the same measurement tool yield stable and consistent results when repeated over time? Reliability refers to a condition where a measurement process yields consistent scores (given an unchanged measured phenomenon) over repeat measurements.
Validity: Validity refers to the extent we are measuring what we hope to measure (and what we think we are measuring).
At best, we have a measure that has both high validity and high reliability. It yields consistent results in repeated application and it accurately reflects what we hope to represent.
It is possible to have a measure that has high reliability but low validity - one that is consistent in getting bad information or consistent in missing the mark. It is also possible to have one that has low reliability and low validity - inconsistent and not on target.
Finally, it is not possible to have a measure that has low reliability and high validity - you can't really get at what you want or what you're interested in if your measure fluctuates wildly.
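The "high reliability, low validity" case can be made concrete with a toy simulation (an assumed example, not from the original notes): a scale that consistently taps the wrong trait agrees with itself across administrations yet barely correlates with the intended construct.

```python
# Illustrative sketch: a reliable but invalid measure.
import random

rng = random.Random(2)
n = 2000

def pearson(xs, ys):
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

construct = [rng.gauss(0, 1) for _ in range(n)]    # what we hope to measure
other_trait = [rng.gauss(0, 1) for _ in range(n)]  # what the scale actually taps

def administer():
    # small random error: the scale is consistent, just consistently off target
    return [t + rng.gauss(0, 0.2) for t in other_trait]

time1, time2 = administer(), administer()

reliability = pearson(time1, time2)    # high test-retest agreement
validity = pearson(time1, construct)   # near zero: consistently missing the mark
```

A noisy version of the same scale would show the "low reliability, low validity" case; the converse (low reliability, high validity) cannot be constructed, matching the point above.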
Suppressor Variable
A variable that increases the predictive validity of another variable (or set of variables) when it is included in a regression equation. For instance, if you examine the effect of a treatment (e.g., medication) on an outcome (e.g., healing from a disease), suppression means that instead of the drop in the treatment-outcome relation you would expect when a third variable is included, the opposite happens: including the suppressor variable in the equation increases, rather than decreases, the relation between treatment and outcome.
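A classic construction of suppression (assumed for illustration, not the medication example): the predictor is a true signal contaminated by noise, and the suppressor measures only that noise. Adding the suppressor to the regression raises the predictor's weight.

```python
# Illustrative sketch of suppression: X2 carries no information about Y,
# yet adding it to the regression increases X1's coefficient.
import random

rng = random.Random(3)
n = 2000

T = [rng.gauss(0, 1) for _ in range(n)]  # true signal
E = [rng.gauss(0, 1) for _ in range(n)]  # contaminating noise

Y = T[:]                                 # outcome depends only on the signal
X1 = [t + e for t, e in zip(T, E)]       # predictor = signal + noise
X2 = E[:]                                # suppressor: shares X1's noise, not Y

def center(v):
    m = sum(v) / len(v)
    return [x - m for x in v]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

y, x1, x2 = center(Y), center(X1), center(X2)

# Simple regression of Y on X1 alone.
b_simple = dot(y, x1) / dot(x1, x1)

# Multiple regression of Y on X1 and X2 (2x2 normal equations).
s11, s12, s22 = dot(x1, x1), dot(x1, x2), dot(x2, x2)
sy1, sy2 = dot(y, x1), dot(y, x2)
det = s11 * s22 - s12 * s12
b1 = (sy1 * s22 - sy2 * s12) / det  # X1's weight with the suppressor included
b2 = (s11 * sy2 - s12 * sy1) / det  # the suppressor's (negative) weight
# b1 exceeds b_simple: including X2 strengthens the X1-Y relation.
```

The suppressor works by "soaking up" the irrelevant variance in X1, leaving a cleaner estimate of the signal's effect.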
Explain the principle of falsifiability as a means of evaluating theory. Give an example of a theory that does not meet this criterion well, and why.
Falsifiability is the ability of a theory—a working framework for explaining and predicting natural phenomena—to be disproved by an experiment or observation. The ability to evaluate theories against observations is essential to the scientific method, and as such, the falsifiability of theories is key to this and is the prime test for whether a proposition or theory can be described as scientific.
E.g. Creationism states that the universe, and life, were directly created by God. It is not falsifiable as its proponents base the theory on a human text (the Bible) which provides accounts of creation and other events that cannot be tested by observation or experiment but are instead accepted as infallible truth.
Discuss the use of idiographic methods in research. Within this context, describe repeated measures research designs and multi-level hierarchical approaches. What are the strengths and potential weaknesses of these designs for research? What do such approaches add to nomothetic inquiry?
In psychology, idiographic describes the study of the individual, who is seen as a unique agent with a unique life history, with properties setting him/her apart from other individuals. A common method to study these unique characteristics is an (auto)biography, i.e. a narrative that recounts the unique sequence of events that made the person who she is. Nomothetic describes the study of classes or cohorts of individuals. Here the subject is seen as an exemplar of a population with corresponding personality traits and behaviours.
Idiographic assessment is the measurement of variables and functional relations that have been individually selected, or derived from assessment stimuli or contexts that have been individually tailored, to maximize their relevance for the particular individual. One cost is to internal validity: such studies are not as controlled.
Advantage: repeated measurement yields many observations of an independent variable, so the temporal precedence of variables and behavior can be established.
The problem with idiographic data is that it can be hard to generalize and can be less reliable.
The combination of repeated measures from each individual in a study and a random-effects model can be an effective way to capture individual variation. One advantage of this method is that it can compare and distinguish between intra- and inter-individual variability. It can be used to examine psychological processes within a subject and then compare them to the population average of those same variables. The random-effects model can also yield more accurate parameter estimates.
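The intra- vs inter-individual distinction can be sketched with simulated data (assumed parameters, not from any study): each subject gets a stable level (a random intercept) plus occasion-to-occasion noise, and the two variance components are recovered separately.

```python
# Illustrative sketch: decomposing repeated-measures data into
# between-subject (inter-individual) and within-subject
# (intra-individual) variability.
import random
import statistics

rng = random.Random(4)
n_subjects, n_occasions = 100, 20

data = []
for _ in range(n_subjects):
    level = rng.gauss(0, 2)  # stable individual level (random intercept)
    data.append([level + rng.gauss(0, 1) for _ in range(n_occasions)])

# Inter-individual variability: variance of the subject means.
subject_means = [statistics.mean(series) for series in data]
between_var = statistics.variance(subject_means)

# Intra-individual variability: average within-subject variance.
within_var = statistics.mean(statistics.variance(series) for series in data)
# between_var reflects stable differences between people; within_var
# reflects fluctuation across occasions within a person.
```

A multilevel (hierarchical) model formalizes this same decomposition and, unlike the simple averages here, also pools information across subjects for more accurate estimates.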
From a group of experimental designs attached to the test (with x's and o's) evaluate any major threats to the validity in terms of history, maturation, statistical regression, testing, and selection, and combinations such as selection x history.
Kazdin (2003) describes four types of group designs with random assignment of subjects to each group: a) pre- and post-test control group, b) post-test only control group, c) Solomon four-group and d) factorial designs. Although a pretest is generally very advantageous, one weakness is that it could sensitize the subject to the treatment effect. Kazdin (2003) and Cook and Campbell (1979) also describe a few different types of quasi-experimental designs where random assignment is not used: a) pre- and post- test design (one and two groups), and b) post-test only design (one, two and multiple groups). Quasi-experiments can serve as a starting ground to measure effects of manipulation and provide information for more controlled studies in the future.
The lack of a pre-test in quasi-experiments makes it difficult to estimate group differences prior to treatment (Kazdin, 2003; Cook & Campbell, 1979). Especially in designs without a control group, it is hard to eliminate threats such as maturation, history, testing, statistical regression, and other third-variable factors that could affect the change between pre- and post-test. Although not ideal, when pre-tests and control conditions are not possible, researchers can minimize threats to validity and rival explanations by using known base rates, archival records, matching on correlated variables (age, sex, social class, etc.), large sample sizes, common sense, theory, and experience.
What are the methodological advantages of experimental manipulation over passive observation in drawing inferences from study results? Under what conditions might you favor the use of observational research? Provide an example favoring experimental manipulation or epidemiological/observation methods.
In observational studies, the investigator evaluates the variables of interest by selecting groups rather than by experimentally manipulating those variables. The goal is to show associations between variables; it is generally not possible to demonstrate causation. One limitation is that such studies do not allow strong inferences to be drawn about what led to the outcome of interest.
Two types of observational studies are case-control and cohort designs. These designs address a range of questions about how variables interact to produce an outcome (mediators) and about the characteristics (moderators) that influence whether, or for whom, the outcome occurs.
Observational studies require special attention to construct validity, i.e., to whether the conclusions can be attributed to the constructs the researcher had in mind rather than to other influences.
As we have already read, although quasi-experimental designs have their limitations, there are ways that researchers can minimize threats to validity and rival explanations by using known base rates, archival records, matching subjects on other correlate variables (age, sex, social class, etc.), large sample size, common sense, theory, and experience.
What is SEM and how is it used in psychological research? Identify the basic steps and the advantages they confer over traditional methods of examining relationships among measured variables.
It is a statistical technique for testing and estimating causal relations using a combination of statistical data and qualitative causal assumptions.
Structural Equation Models (SEM) allow both confirmatory and exploratory modeling, meaning they are suited to both theory testing and theory development. Confirmatory modeling usually starts out with a hypothesis that gets represented in a causal model. The concepts used in the model must then be operationalized to allow testing of the relationships between the concepts in the model. The model is tested against the obtained measurement data to determine how well the model fits the data. The causal assumptions embedded in the model often have falsifiable implications which can be tested against the data.
Among the strengths of SEM is the ability to construct latent variables: variables which are not measured directly, but are estimated in the model from several measured variables each of which is predicted to 'tap into' the latent variables.
In SEM, interest usually focuses on latent constructs--abstract psychological variables like "intelligence" or "attitude toward the brand"--rather than on the manifest variables used to measure these constructs. Measurement is recognized as difficult and error-prone. By explicitly modeling measurement error, SEM users seek to derive unbiased estimates for the relations between latent constructs. To this end, SEM allows multiple measures to be associated with a single latent construct.
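One way to see why multiple fallible indicators help (a toy illustration of the idea, not an actual SEM estimation): averaging several noisy indicators of the same latent variable tracks it better than any single indicator does.

```python
# Illustrative sketch: multiple manifest indicators "tapping into" one
# latent variable; their composite correlates with the latent score
# more strongly than any single error-laden indicator.
import random

rng = random.Random(5)
n = 2000

def pearson(xs, ys):
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

latent = [rng.gauss(0, 1) for _ in range(n)]  # the unobserved construct
# Three manifest indicators, each = latent score + measurement error.
indicators = [[x + rng.gauss(0, 1) for x in latent] for _ in range(3)]
composite = [sum(vals) / 3 for vals in zip(*indicators)]

r_single = pearson(indicators[0], latent)   # attenuated by measurement error
r_composite = pearson(composite, latent)    # errors partly average out
```

SEM goes further than this simple averaging: it models the measurement error explicitly and estimates relations between the latent constructs themselves.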
Define moderator and mediator effects. Describe a research study (of your own invention) in which the relationship between four variables are studied and both mediation and moderator effects are predicted. Include confounding and possible suppressor effects in your discussion.
A four-variable system contains an independent variable (X), a dependent variable (Y), and a third variable that may be a mediator (M), a moderator (W), a confounder (C), or a suppressor (S).
Mediation: the causal process by which an independent variable affects a dependent variable
Confounding: The confounding hypothesis suggests that a third variable explains the relationship between an independent and dependent variable. Unlike the mediational hypothesis, confounding does not necessarily imply a causal relationship among the variables.
Suppressor: A variable that increases the predictive validity of another variable (or set of variables) when included in a regression equation. If the magnitude of the relationship between an independent and a dependent variable becomes larger when a third variable is included, this indicates suppression.
E.g., a treatment outcome study in which the treatment is the mediator and the baseline level of dysfunction is the moderator; having camp counselors evaluate both the IV and the outcome measures (DV) could be a confounding variable because they are biased.
Critique the research described in a narrative supplied in class identifying the threats to causal inference, including fallacies in reasoning and measurement difficulties that you suspect would lead to erroneous conclusions.
Inferences are said to possess internal validity if a causal relation between two variables is properly demonstrated.
A causal inference may be based on a relation when three criteria are satisfied:
1. the "cause" precedes the "effect" in time (temporal precedence),
2. the "cause" and the "effect" are related (covariation), and
3. there are no plausible alternative explanations for the observed covariation (nonspuriousness).
Ambiguous Temporal Precedence
Confounding
Selection Bias
History
Maturation
Repeated testing (also referred to as Testing Effects)
Instrument change (Instrumentality)
Regression toward the mean
Mortality/differential attrition
Selection-maturation interaction
Diffusion
Compensatory rivalry/resentful demoralization
Experimenter bias
Define qualitative approaches and describe how they differ from the more conventional quantitative approaches. From your readings in class, what advantages if any accrued from the use of qualitative coding?
Although qualitative research is based on descriptive and lengthy narrative accounts with little standardized measurement, it's important to note that it can be done rigorously and systematically (Kazdin, 2003). Qualitative research is a methodical way of understanding and evaluating individual experiences. The analysis is based on the researcher's interpretation and identification of important themes and ideas in the narratives of the participants. However, sometimes researchers come up with a coding scheme a priori based on a theoretical framework (deductive approach) and can re-evaluate the codes as they go through the data (Weitzman, NIH e-course). Another step that is encouraged in qualitative studies is for investigators to consult with other researchers about their identification of important themes and their interpretation of the data. Additionally, the subject's feedback is also collected and taken into consideration. These additional steps add to the triangulation process in which multiple methodologies, perspectives, and analyses are used to strengthen the conclusion of the study.
However, even with multiple methods and the elaborate details available from participants, the results are still vulnerable to misinterpretation (interpretive validity). And even with rich details of day-to-day experiences, qualitative studies usually have very small sample sizes, which can make it hard to generalize the results (external validity). Confirmability is the extent to which the results are free of the experimenter's bias and can be replicated by others. However, if the outcome of the study is based on the unique situation and experience of the participants and the subjective interpretation of the researcher, replication seems unlikely with a different set of participants and researchers.
Despite its limitations, qualitative studies can be especially useful as part of a mixed methods design that combines qualitative and quantitative research (Creswell et al., 2011). A qualitative exploratory study can generate hypotheses about key constructs that can be further evaluated with quantitative studies. Qualitative data can also be collected as a follow-up to better understand quantitative data. However, carefully designing and properly executing a mixed methods study can be a complex and cumbersome process. It requires an integrative multi-member research team with diverse expertise, specific leadership qualities, a bigger budget, access to the increased resources needed for multiple methodologies, continued training, additional time commitment for frequent meetings, large sample sizes, analytical expertise in mixed methods research, etc. Researchers should evaluate both the theoretical need and the resources available before taking on a mixed methods research project.
-Qualitative studies are done systematically and with precision if done well.
-Studying subjects in context
-Confirmability: gaining consensus on the data; has to do with replication.
- Coding and then classifying important themes are the first two steps.