Psy341 Research Design Final
Terms in this set (97)
Factor
independent variable
Levels of a factor
the number of levels of a factor is equal to the number of variations or groups of that factor
Factorial designs
an experimental design that has more than one independent variable
Interactions
one independent variable's influence on the dependent variable changes depending on the level of the other independent variable
Main effect
focus on the effect of a single independent variable on the dependent variable
2X2 designs
two factors with 2 levels each
2X3 designs
two factors with 2 levels in the first IV and 3 levels in the second IV
2X2X2 designs
three factors with two levels in each IV
Two-way analysis of variance
simultaneously test how two separate factors influence the DV
Results in APA style of an interaction
The effect of IV1 on the DV depended on the level of IV2
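A main effect compares marginal means, while an interaction shows up as unequal simple effects. A minimal sketch using hypothetical cell means (all numbers invented for illustration):

```python
from statistics import mean

# Hypothetical cell means for a 2x2 design: rows = IV1 levels, columns = IV2 levels
cells = [
    [10, 20],   # IV1 level 1
    [10, 40],   # IV1 level 2
]

# Main effects: compare marginal (row/column) means
iv1_marginals = [mean(row) for row in cells]          # means across IV2
iv2_marginals = [mean(col) for col in zip(*cells)]    # means across IV1

# Interaction: the effect of IV2 differs across levels of IV1
iv2_effect_at_iv1_level1 = cells[0][1] - cells[0][0]  # 20 - 10 = 10
iv2_effect_at_iv1_level2 = cells[1][1] - cells[1][0]  # 40 - 10 = 30
print(iv2_effect_at_iv1_level1 != iv2_effect_at_iv1_level2)  # unequal -> interaction
```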
MANOVA
multivariate analysis of variance; used when you have two or more dependent variables that are conceptually related
MANCOVA
a MANOVA (two or more conceptually related DVs) with covariates added
When to use a two-way ANOVA?
when you have two independent variables (factors)
Pretest-posttest design
we measure the dependent variable before and after exposing participants to treatment or intervention
Repeated-measures design
a type of within-subjects design where participants are exposed to each level of the independent variable and are measured on the dependent variable after each level; unlike the pretest-posttest design, there is no baseline measurement
Longitudinal design
participants are repeatedly measured on the dependent variable over a period of time
Advantages of a repeated measures design
-can assess change or make relative comparisons
-fewer research participants needed
-keep individual differences constant
Disadvantages of a repeated measures design
-potential threats to internal validity
-potential external validity concerns
-potential logistics challenge
Attrition (mortality)
the differential dropout of participants from a study
Testing effects
participants' scores changing on subsequent measurements simply because of their increased familiarity with the instrument
Instrumentation
unwanted changes in the measurement instrument over the course of the study
History
an unexpected outside event occurring during the study that could influence participants' responses
Maturation
physiological changes occurring in participants
Order effects
influence that the sequence of experimental conditions can have on the dependent variable
Practice effects
changes in a participant's responses or behaviors due to increased experience with the measurement instrument, not the variable under investigation
Fatigue effects
deterioration in measurements due to participants becoming tired, less attentive or careless
carryover effects
exposure to one treatment changes participants' reactions to another treatment
Sensitization effects
with each treatment, participants become more likely to guess the research hypotheses and change their behavior accordingly
Counterbalancing
using all potential treatment sequences in a within-subjects design
Why is counterbalancing important?
minimizes potential order effects
Why is counterbalancing important in a repeated measures design?
relegate potential order effects to random error, but in a larger design, the issue of order effects can turn into a research question
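Complete counterbalancing for k conditions means running every one of the k! possible presentation orders. With hypothetical condition labels, this can be sketched as:

```python
from itertools import permutations

# Hypothetical condition labels for a three-condition within-subjects design
conditions = ["A", "B", "C"]

# Complete counterbalancing: every possible presentation order is used
orders = list(permutations(conditions))
for order in orders:
    print(" -> ".join(order))

print(f"{len(orders)} orders for {len(conditions)} conditions")  # 3! = 6
```

With many conditions the number of orders grows factorially, which is one reason larger designs fall back on partial counterbalancing schemes such as Latin squares.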
repeated-measures ANOVA
a statistic used to test a hypothesis from a within-subjects design with three or more conditions
dependent means t-test
a statistic used to determine whether there is a statistically significant difference between two related sets of scores
Multi-group design
an experimental design with three or more groups that allows us to have multiple levels of an independent variable
Multi-group designs contain
1 categorical independent variable with 3 or more levels
1 continuous dependent variable
Analysis of variance ANOVA
allows us to compare several means using one statistical test
Omnibus
overall
ANOVA
-test for an overall difference between groups
-it tells us that the group means are different but it doesn't tell us exactly which means differ
small F value
more of the variance is due to error and not the independent variable
large F value
more of the variance is due to the independent variable and not due to error
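The F ratio is between-group (explained) variance divided by within-group (error) variance. A hand computation on invented scores for three groups, using only the standard library:

```python
from statistics import mean

# Hypothetical scores for three groups (three levels of one IV)
groups = [
    [4, 5, 6, 5],
    [7, 8, 9, 8],
    [6, 6, 7, 5],
]

grand = mean(x for g in groups for x in g)
k = len(groups)                       # number of groups
n_total = sum(len(g) for g in groups)

# Between-group (explained) and within-group (unexplained) sums of squares
ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)

ms_between = ss_between / (k - 1)     # df_between = k - 1
ms_within = ss_within / (n_total - k) # df_within = N - k
F = ms_between / ms_within
print(round(F, 2))                    # F = 14.0 for these invented data
```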
Effect size in ANOVA
eta² (η²); the ANOVA counterpart of Cohen's d
Planned contrast
Tests comparisons between groups that we predicted AHEAD OF TIME. A clear directional hypothesis
Post-hoc test
Tests all of the possible pairwise comparisons between conditions in a way that accounts for the fact that we DID NOT PREDICT them ahead of time. A more conservative, non-directional test
one-way analysis of variance (one-way ANOVA)
a statistical test that determines whether responses from the different conditions are essentially the same or whether the responses from at least one condition differ from the others
between-group variability
explained variance
within-group variability
unexplained variance
The bigger the explained variance
the smaller the unexplained variance
Why do we need to conduct follow-up tests in ANOVA?
The ANOVA doesn't tell us where the action is in terms of which specific conditions differ from each other
two-group design
an experimental design that compares two groups or conditions; also known as a simple experiment because it is the most basic way to establish cause and effect
extraneous variable
a factor other than the intended treatment that might change the outcome variable
internal validity
the degree to which we can rule out other possible causal explanations for an observed relationship between the independent and dependent variables
temporal precedence
when changes in the suspected cause (treatment) occur before changes in the effect (outcome)
covariation
when changes in one variable are associated with changes in another variable; part of determining causality
A two group design has
-1 categorical independent variable with 2 levels
-1 continuous dependent variable
dependent variable
measured on a ratio or interval scale
independent variable
measured on a nominal or ordinal scale
Random assignment
ensures that participants are placed in groups in an unbiased way and that each participant has an equal chance of being in any group
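Random assignment can be sketched by shuffling participant IDs and splitting the shuffled list; the IDs, group sizes, and seed here are all invented for illustration:

```python
import random

random.seed(1)  # fixed seed only so the illustration is reproducible

participants = list(range(1, 13))   # 12 hypothetical participant IDs
random.shuffle(participants)        # every ordering is equally likely

# Split the shuffled list into two equal groups
treatment, control = participants[:6], participants[6:]
print("treatment:", sorted(treatment))
print("control:  ", sorted(control))
```

Because the shuffle is uniform, each participant has an equal chance of landing in either group, which is exactly the property the definition above requires.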
Null hypothesis
no difference between groups
alternative (experimental) hypothesis
a difference between groups
p value is .001, .01, .05
very unlikely that the null hypothesis is correct
p value is .10, .20, .50, .80, 1.0
we fail to reject the null hypothesis; the data provide little evidence against it
independent samples t-test
a statistical test comparing the groups' means to see if the groups differ to a degree that could not have just happened accidentally or by chance
t-test formula
t = (mean difference − null-hypothesized difference of 0) / standard error
The bigger the t
the smaller the p
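The formula above can be worked through on invented scores for two independent groups, using the pooled-variance version (which assumes equal variances):

```python
from statistics import mean, variance
from math import sqrt

# Hypothetical scores for two independent groups of equal size
group1 = [12, 14, 11, 15, 13, 12]
group2 = [16, 18, 15, 17, 19, 16]

n1, n2 = len(group1), len(group2)

# Pooled variance, then the standard error of the difference in means
sp2 = ((n1 - 1) * variance(group1) + (n2 - 1) * variance(group2)) / (n1 + n2 - 2)
se = sqrt(sp2 * (1 / n1 + 1 / n2))

# t = (mean difference - null-hypothesized difference of 0) / standard error
t = (mean(group1) - mean(group2)) / se
df = n1 + n2 - 2
print(f"t({df}) = {t:.2f}")
```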
effect size .20
small
effect size .50
medium
effect size .80
large
The bigger the d value
the more meaningful it is
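Cohen's d for two groups is the mean difference divided by the pooled standard deviation; the scores here are invented for illustration:

```python
from statistics import mean, stdev
from math import sqrt

# Hypothetical scores for two equal-sized groups
group1 = [10, 12, 11, 13, 9, 11]
group2 = [13, 15, 14, 16, 12, 14]

# Pooled SD for equal group sizes: sqrt of the average of the two variances
pooled_sd = sqrt((stdev(group1) ** 2 + stdev(group2) ** 2) / 2)
d = abs(mean(group1) - mean(group2)) / pooled_sd
print(round(d, 2))   # compare against the benchmarks: .20 small, .50 medium, .80 large
```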
confound
a variable that the researcher unintentionally varies along with the manipulation
control group
any condition that serves as the comparison group in an experiment
writing the results of a t-test in APA style
Participants experienced greater anxiety toward real spiders (M = 47.00, SD = 11.03) than toward pictures of spiders (M = 40.00, SD = 9.29). This difference, however, was not significant, t(22) = -1.68, p = .11
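The reported t can be reproduced from the summary statistics alone, assuming equal groups of n = 12 each (an assumption inferred from df = 22; the group size is not stated in the card):

```python
from math import sqrt

# Summary statistics from the APA-style example; n = 12 per group is assumed
m1, sd1, m2, sd2, n = 40.00, 9.29, 47.00, 11.03, 12

sp2 = ((n - 1) * sd1**2 + (n - 1) * sd2**2) / (2 * n - 2)  # pooled variance
se = sqrt(sp2 * (2 / n))                                   # SE of the mean difference
t = (m1 - m2) / se
print(f"t({2 * n - 2}) = {t:.2f}")   # matches the reported t(22) = -1.68
```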
Survey
a quantitative research strategy for systematically collecting information from a group of individuals
Open-ended questions
a question participants answer using their own words
Forced-choice format
a scale where a person must choose between only two response alternatives for each item
Likert type questions
closed-ended questions with predetermined sets of response alternatives
leading questions
...
double-barreled question
prompts participants to provide a single response to an item that asks two separate questions
Negatively worded questions
...
Reliability
...
internal consistency reliability
the degree to which the individual items in a scale are interrelated. How interrelated are the individual items in scale?
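One common index of internal consistency is Cronbach's alpha. A hand computation on invented item responses (rows = participants, columns = items), using only the standard library:

```python
from statistics import variance

# Hypothetical responses: 5 participants x 4 scale items
data = [
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [4, 4, 4, 3],
    [1, 2, 2, 1],
]

k = len(data[0])                      # number of items
items = list(zip(*data))              # one tuple of scores per item
totals = [sum(row) for row in data]   # each person's total scale score

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals)
alpha = (k / (k - 1)) * (1 - sum(variance(i) for i in items) / variance(totals))
print(round(alpha, 2))
```

Highly interrelated items make the item variances small relative to the variance of the totals, pushing alpha toward 1.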
test-retest reliability
the temporal stability of a measure. How consistent is the scale over time?
alternative-form reliability
a form of reliability that evaluates how well a measure correlates with a similar, but different, measure of the same variable. How consistent is the scale with other measures of the variable?
Validity
the degree to which a scale measures what it claims to measure
Face validity
the degree to which a scale appears, on the surface, to measure the intended variable. Does the scale appear to be measuring the variable?
Content validity
the degree to which the items on a scale reflect the range of material that should be included in a measurement of the target variable. Do the items on the scale represent the various aspects of the variable being measured?
Construct validity
the extent to which the scale actually measures the desired construct; established by evaluating the convergent and discriminative validity of the measurement; Does the scale actually measure the intended variable?
Convergent validity
the degree to which scores on a measurement correspond to measures of other theoretically related variables; used to help establish the construct validity of a measurement; Does the scale relate to other measures of the variable?
Discriminant validity
the extent to which a measurement does not correspond to measures of unrelated variables; used to help establish the construct validity of a measurement; Does the scale relate to measures of unrelated variables?
Criterion validity
the extent to which a measurement relates to a particular outcome or behavior; established by evaluating the concurrent and predictive validity of the measurement; Does the scale relate to a relevant outcome or behavior?
Concurrent validity
the extent to which a measurement corresponds with an existing outcome or behavior; used to establish the criterion validity of a measurement; Does the scale relate to a relevant outcome or behavior that was measured at the same time?
Predictive validity
the extent to which a measurement corresponds with a particular outcome or behavior that occurs in the future; used to establish the criterion validity of a measurement; Does the scale relate to a relevant outcome or behavior that occurs in the future, after the scale is completed?
Randomization
...
descriptive statistics
statistics that describe or summarize quantitative data in a meaningful way
observational research
the viewing and recording of a predetermined set of behaviors
reactivity
when a participant's behavior is affected by the fact that they are being observed
basic research
research directed at understanding the fundamental aspects of a phenomenon
applied research
research designed to be used in real-world situations