PSYC 2001 - Exam

Surveys and Polls: what's important to keep in mind when designing them?
1. Leading questions:
- How fast do you think the car was going when he hit the other car?

2. Double-barreled questions:
- Do you enjoy swimming and wearing sunscreen?

3. Negatively worded questions: when disagreement is the socially desirable answer
- People who do not drive with a suspended license should never be punished, disagree or agree?
1. Response sets: answering a number of questions in the same way; this weakens construct validity

2. Acquiescence: agreeing ("yea-saying") with a number of items instead of considering each specific item

3. Fence sitting: playing it safe by choosing the response in the middle of the scale, or the neutral response

Also, people try to look good/smart and respond in socially desirable ways.
What is a case study?
An in-depth study of a single individual or event.

What is archival research?
Research that uses previously recorded data.

What is the difference between a biased and a representative sample?
Biased sample: some members of the population of interest have a higher chance of being included in the sample than others.
Representative sample: all members of the population have an equal chance of being included; members are chosen at random.

How can you know if a sample is biased?
- Convenience sampling: sampling those who are easiest to contact
- Self-selection: sampling only those who volunteer

How can you obtain a representative sample?
Probability sampling: every member of the population of interest has an equal chance of being selected.
Simple random sampling: assigning a number to each member of the population and then using a random number table to select the sample.

What are the different types of probability sampling?
1. Cluster sampling: clusters within a population are randomly selected, and then everyone in those clusters is included
2. Multistage sampling: has 2 stages: (1) collecting a random sample of clusters, (2) collecting a random sample of people from those clusters
3. Stratified random sampling: identifying specific demographic categories and then randomly selecting individuals from each category
4. Oversampling: like stratified sampling, but intentionally overrepresenting one or more groups
5. Systematic sampling: choosing a random starting point and then selecting every Nth member, e.g. every 4th person

What is purposive sampling? What is snowball sampling? What is quota sampling? What is convenience sampling?
Purposive: studying only certain types of people, so you select only those types of people.
Snowball: recruiting participants based on recommendations from the initial sample.
Quota: using nonrandom sampling to fill a quota for each category of participants in the sample.
Convenience: sampling those who are easiest to contact.

What are some nonprobability / nonrandom sampling techniques?
1. Purposive
2. Snowball
3. Quota
4. Convenience

Why are random sampling and random assignment good?
Random sampling increases external validity. Random assignment increases internal validity (it is used to assign participants to groups at random).

Why is external validity important?
When making frequency claims, you are reporting how often something happens in a population, so a study needs high external validity / generalizability to support a frequency claim.

What are bivariate correlations?
Associations that involve exactly 2 variables.

What are association claims?
Claims about variables that are associated; such variables are said to correlate / covary.

What are the types of associations and the levels of association strength?
Association types:
- Positive (high x goes with high y)
- Negative (high x goes with low y)
- Zero (no association)
Strength (based on the correlation coefficient r):
- around .10 = weak
- around .30 = moderate
- .50 or higher = strong

How can you interrogate association claims?
Look at the 4 validities:
1. Construct: how well was each variable measured?
2. Statistical: how well does the data support the conclusion?
3. Internal: can we make a causal inference from the association?
4. External: to whom can the association be generalized?

What is effect size?
The strength of the relationship between 2 or more variables (the strength of their correlation); the degree to which the phenomenon is present in the population, or the degree to which the null hypothesis is false.
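To make the association-type and strength cards above concrete, here is a minimal Python sketch (my own illustration, not part of the original set) that computes Pearson's r by hand for two made-up variables and labels the result using the rough weak / moderate / strong benchmarks; the data, function names, and cutoff mapping are assumptions for illustration only.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient for two equal-length lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    ss_x = sum((a - mean_x) ** 2 for a in x)
    ss_y = sum((b - mean_y) ** 2 for b in y)
    return cov / sqrt(ss_x * ss_y)

def label_strength(r):
    """Rough label using the card's benchmarks (~.10 weak, ~.30 moderate, >= .50 strong)."""
    r = abs(r)
    if r < 0.30:
        return "weak"
    if r < 0.50:
        return "moderate"
    return "strong"

# Hypothetical data: hours studied vs. exam score
hours = [1, 2, 3, 4, 5, 6]
score = [55, 60, 58, 70, 75, 80]

r = pearson_r(hours, score)
direction = "positive" if r >= 0 else "negative"
print(f"r = {r:.2f}: a {label_strength(r)} {direction} association")
```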
What is statistical significance? What is it measured by?
The probability of obtaining the result by chance if there is no real association in the population. It is measured by the p value; if p < .05, the result is considered statistically significant.

What is a curvilinear association?
One where the correlation coefficient is close to 0 and the relationship between the two variables isn't a straight line, e.g. age and health care system usage.

What's the difference between correlation and causation? How can you determine causation?
Correlation = the two variables are associated.
Causation = X causes Y.
To determine causation:
1. the variables must be correlated / have covariance
2. there must be temporal precedence between the variables
3. there must be internal validity; there can be no alternative explanations for the relationship

What are mediator and moderator variables?
Moderating variables: the relationship between the 2 variables changes depending on the level of another, external variable, e.g. residential mobility moderates the relationship between success and attendance.
Mediating variables: an explanation of the relationship between the independent and dependent variables, e.g. turning on a stove -> heat generated -> water boils (independent -> mediating -> dependent).

What do mediators and moderators ask?
Mediators: "why are these 2 variables related?"
Moderators: "are these 2 variables linked in the same way in every situation?"

What are longitudinal designs? What are their strengths and weaknesses?
Measuring the same variable(s) in the same people at different times in their lives; used often in developmental psychology.
Strength: temporal precedence is established.
Cons: expensive and time-consuming, and participants can drop out.

What are cross-sectional studies?
Measuring variables at a single point in time. There is no experimental procedure, so no variables are manipulated by the researcher; the researcher simply records characteristics.
Strengths: can measure many variables at once, and is fast.
Cons: cannot infer causation.

What dictates if a study is an experiment?
To be an experiment, at least one variable has to be manipulated and another measured.

What are the types of variables?
1. Independent variable (x): manipulated
2. Dependent variable (y): measured
3. Control variables: variables held constant across conditions

Why do experiments support causal claims?
1. They establish covariance
2. They establish temporal precedence
3. They establish internal validity by ruling out extraneous variables

What types of groups are there in an experiment?
1. Comparison group (to compare against)
2. Control group (no-treatment condition)
3. Treatment group (one or more treatment conditions)
4. Placebo group (placebo control)

What are threats to internal validity in an experiment?
1. Design confounds
2. Selection effects
3. Order effects

What are confounds? What causes them?
Confounds = confuse. Design confounds are external variables, other than the independent variable, that affect the dependent variable, creating an alternative explanation for the results. They are caused by systematic variability, where one comparison group is affected by something the other isn't. Unsystematic variability is random and affects both groups equally.

What is a selection effect? Give an example. How can it be avoided?
When the participants in different groups are systematically different, e.g. some participants may live farther away and therefore choose the more intensive treatment in an experiment testing a cure for their condition.
It can be avoided through random assignment of participants to levels of the independent variable.
It can also be avoided with matched groups: participants are ranked from lowest to highest on some variable, grouped into sets of two (for two conditions), and one member of each set is then assigned to each condition.
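As a small companion to the selection-effects card above, here is a Python sketch (my own, not from the study set) of random assignment: hypothetical participant labels are shuffled and split into two equal groups.

```python
import random

# 20 hypothetical participants
participants = [f"P{i}" for i in range(1, 21)]

# Shuffle, then split: every participant has an equal chance of
# ending up in either condition, so pre-existing differences are
# spread evenly across groups, avoiding selection effects.
random.shuffle(participants)
treatment_group = participants[:10]
control_group = participants[10:]

print("Treatment:", treatment_group)
print("Control:  ", control_group)
```

A matched-groups variant would instead rank participants on a relevant variable, pair them off, and assign one member of each pair to each condition.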
What are the kinds of designs for assigning groups?
1. Independent-groups design
2. Within-groups design

What is an independent-groups design?
When different groups of participants are placed at different levels of the independent variable, e.g. randomly assigning each participant either a medium or a large bowl.

What is a within-groups design?
When each participant is presented with all levels of the independent variable, e.g. having every participant use both the medium and the large bowl.

What are the types of within-groups designs?
1. Repeated-measures design: exposes participants to each level of the IV and measures the dependent variable more than once
2. Concurrent-measures design: exposes participants to all IV levels at once, and a single preference is the DV

What are posttest-only designs? What about pretest/posttest designs?
Posttest-only: participants are randomly assigned to different levels of the independent variable and tested on the dependent variable only once.
Pretest/posttest: each group also gets a pretest before the manipulation, to check that the groups are equivalent at the beginning of the experiment.

Question: what type of experimental design can address selection effects?
Matched-groups design.

What are order effects?
Confounds in within-groups designs where being exposed to one condition affects how the participant reacts to later conditions. There are 2 types:
1. Practice / fatigue effects: participants get better over time due to practice, or worse due to fatigue
2. Carryover effects: contamination from one condition carries over to the next, e.g. drinking regular coffee then decaf; the first coffee still has an effect even though the decaf won't.

How can you avoid order effects?
Counterbalancing: presenting the levels of the IV to participants in different orders. Full counterbalancing presents all possible condition orders; partial counterbalancing uses only some of the orders.

Question: practice effects and carryover effects are examples of what kind of effects?
Order effects (confounds that affect how participants react to later conditions).

What are demand characteristics?
When participants pick up on cues that lead them to guess the experiment's hypothesis.

How can you evaluate construct validity?
Construct validity looks at how well the variables were measured and manipulated. To evaluate, you can:
1. do a manipulation check: add an extra dependent variable to see whether the experimental manipulation worked
2. do a pilot study: run a simple study with a separate group before the actual experiment

How can you evaluate external validity?
External validity looks at how well a claim generalizes to the rest of the population. To evaluate, you can ask:
1. generalizing to other people: were the participants randomly selected?
2. generalizing to other situations: given different but similar independent/dependent variables, would the results be the same?

How can you evaluate statistical validity?
Statistical validity looks at how well the data support the causal claim. To evaluate, you can:
1. see if the difference is statistically significant
2. see how large the effect size is

What is the difference between r and d benchmarks?
Weak: r ≈ .10, d ≈ 0.2
Medium: r ≈ .30, d ≈ 0.5
Strong: r ≈ .50, d ≈ 0.8
(r benchmarks go up in steps of .20; d benchmarks go up in steps of 0.3)
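To accompany the r and d benchmarks card above, here is a brief Python sketch (my own illustration, not from the set) computing Cohen's d for two hypothetical groups using the pooled standard deviation; the scores are made up.

```python
from math import sqrt
from statistics import mean, stdev

def cohens_d(group1, group2):
    """Cohen's d: standardized mean difference using the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    s1, s2 = stdev(group1), stdev(group2)
    pooled_sd = sqrt(((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2))
    return (mean(group1) - mean(group2)) / pooled_sd

# Hypothetical memory scores for a treatment and a control group
treatment = [14, 16, 15, 18, 17, 16]
control = [12, 13, 14, 12, 15, 13]

d = cohens_d(treatment, control)
print(f"d = {d:.2f}")  # compare against the 0.2 / 0.5 / 0.8 benchmarks
```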
Question: experiments use random assignment to avoid which of the following? 1. Random selection 2. Selection effects 3. Getting participants they don't want 4. Demand characteristics
2. Selection effects

Question: what is used to control order effects in an experiment?
Counterbalancing.

Question: what is an advantage of within-groups designs?
They need fewer participants. Disadvantages: they are harder to run, take more time, and have order effects.

How can you evaluate internal validity?
Internal validity looks at whether there are alternative explanations for the outcome. To evaluate, ask:
1. were there any design confounds?
2. if an independent-groups design was used, did the researchers control for selection effects using random assignment or matching?
3. if a within-groups design was used, did the researchers control for order effects?

What are the threats to internal validity?
There are 12. Mnemonic: THIS PO(OR) DAMD.
Testing: testing can result in order effects
History: external events affect participants
Instrumentation: observers change coding standards over time
Selection effects: systematic differences between groups
Placebo effects: improving due to belief in the treatment
Observer bias: the observer's expectations influence the results
Order effects: carryover confounds
Regression (to the mean): extreme values move toward the mean over time due to random factors at the time of testing
Demand characteristics: participants figure out the study's hypothesis
Attrition: people dropping out
Maturation: participants change or adapt to the environment over time
Design confounds: alternative explanations built into the design

What are null effects? What can cause them?
When the outcome of an experiment shows no relationship between the IV and DV, so the experiment does not show the expected effect. They can be caused by weak manipulations, or by ceilings or floors in the experiment's design.
Weak manipulation = the changes in the IV are too small to produce an effect
Ceiling effect = the task is too easy, so everyone scores at the top
Floor effect = the task is too hard, so everyone scores at the bottom

What are interaction effects?
When the effect of one IV depends on the level of another IV; interaction effects can only occur when there are multiple independent variables. Another definition is the "difference in differences."

What is a factorial design?
An experiment with more than one IV, e.g. studying the effects of driving while on the phone, with IVs of cell phone use and driver's age.

What are participant variables?
Variables whose levels are selected, not manipulated.

What results do you interpret when analyzing a factorial study with 2 IVs?
Two main effects (looking at each IV separately) and one interaction effect (seeing whether the IVs' effects depend on each other). The interaction effect is almost always more important than the main effects.

What are the types of factorial designs?
1. Independent-groups factorial designs
2. Within-groups factorial designs
3. Mixed factorial designs
Designs can also be extended by increasing the number of levels of an independent variable, or by increasing the number of independent variables.

What is an independent-groups factorial design?
When both IVs are studied as independent groups (in a 2 x 2 example with 12 participants per cell: 48 participants).

What is a within-groups factorial design?
When both IVs are manipulated within groups (in the same example: 48 / 4 = 12 participants).

What is a mixed factorial design?
When one IV is manipulated as independent groups and the other within groups (in the same example: 48 / 2 = 24 participants).

How are factorial designs notated?
__ x __
The number of blanks = the number of IVs; the number in each blank = the number of levels of that IV.

Question: if you have a 2x2x3 within-groups factorial design with 20 participants in each cell, how many participants are required overall?
20 participants. Since the design is within-groups, the same 20 participants take part in every condition.
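A quick worked example in Python (not from the set) of the participant arithmetic behind the factorial cards above, using the hypothetical 20-participants-per-cell figure.

```python
from math import prod

# A 2 x 2 x 3 design: the number of cells is the product of the levels of each IV
levels = [2, 2, 3]
cells = prod(levels)          # 12 cells
per_cell = 20                 # hypothetical participants per cell

# Independent-groups: each cell needs its own participants
independent_n = cells * per_cell   # 240

# Within-groups: the same participants experience every cell
within_n = per_cell                # 20

print(cells, independent_n, within_n)  # 12 240 20
```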
How can you increase the number of levels of an independent variable?
e.g. going from low and high heat to low, medium, and high heat.

How can you increase the number of independent variables?
e.g. going from one IV (heat) to two IVs (heat and cold).

Are most psychological study outcomes main effects?
No, most outcomes are interactions, showing that the effect of one IV depends on the level of another.

What are quasi-experiments?
Experiments in which the researcher is not able to randomly assign participants to conditions.

What is a one-group posttest-only design?
When there is only one group, which receives the treatment and is measured only afterwards. Because there is only one group, there is no control or comparison group.

What is a one-group pretest-posttest design?
Similar to the one-group posttest-only design, but because there is also a pretest, there is a baseline measure for comparison.

What is a nonequivalent control group design?
When a separate control group is introduced. The groups are not equivalent because there is no random assignment.

Why is pretesting good?
It improves internal validity by providing a baseline measure to compare the end results to.

Question: in a quasi-experiment, researchers have how much control over the experiment?
Some, but not all.

What is an (interrupted) time series design?
A design that examines the DV over an extended period of time, before and after the IV is introduced.

What is a control series design?
Like a time series design, but with a control group that isn't exposed to the IV.

What are things to keep in mind regarding internal validity in quasi-experiments?
THIS PO(OR) DAMD: Testing, History, Instrumentation, Selection effects, Placebo effects, Observer bias, Order effects, Regression to the mean, Design confounds, Attrition effects, Maturation effects, Demand characteristics.

What are small-N studies? What are their cons?
Studies that have few participants, sometimes only one.
Cons:
- they may not generalize well to the rest of the population

How do quasi-experiments fare on the four validities?
Good construct validity, good statistical validity, can have poor internal validity, and can have good external validity.

When conducting quasi-experimental designs, researchers tend to give up _____ for some ____.
Internal validity, external validity. Internal validity suffers because the researchers can't fully control the experiment, so there can be alternative explanations. External validity is often good for much the same reason.

How do small-N studies fare on the four validities?
Construct: can be high if the definitions are good.
Statistical: not really relevant in small-N studies.
Internal: can be very high if the study is carefully designed.
External: can be problematic depending on the study's goals.

One disadvantage of small-N designs is what?
They are not always generalizable.

What are the types of replication?
1. Direct replication: repeating the study as closely as possible
2. Conceptual replication: exploring the same research question, but using different procedures
3. Replication-plus-extension: replicating the original study, but adding variables to test additional questions

If a study uses the same variables at an abstract level but operationalizes them in different ways, what kind of replication is this?
Conceptual replication.

What is a meta-analysis?
A statistical analysis of many studies from the scientific literature, yielding a quantitative summary.

What is the file-drawer problem?
The idea that meta-analyses can overestimate the support for a theory, because studies that found a null effect are less likely to be published than studies with significant results.

What is generalizing?
How applicable a study's results are to the general population or to another setting. External validity is all about generalization; it looks at how a sample is obtained, not how many people are in the sample.

What is generalization mode? What validity matters most in this mode?
The mode used when researchers want to generalize the findings from the study's sample to a larger population of interest. External validity matters most here.
Typical claims in this mode:
- Frequency (always)
- Association and causal (sometimes)

What is theory-testing mode? What validity matters most in this mode? What claims are usually in theory-testing mode?
A mode used when testing association or causal claims to investigate whether there is support for a particular theory. External validity matters less than internal validity in this case.
Typical claims in this mode:
- Association
- Causal

Question: what is open science collaboration?
The idea that researchers should share their data and materials so others can collaborate and verify the results.

Which claim is most likely in generalization mode?
- 4/10 teenagers can't identify fake news when they see it
- reading stressful news makes adults anxious
- people who walk faster live longer
Answer: 4/10 teenagers can't identify fake news when they see it. Generalization mode and frequency claims go hand in hand.

What is the acronym WEIRD? What is it for?
"Western, educated, industrialized, rich, and democratic." It's a cultural descriptor for typical psychology research participants.

What is an example of a field setting?
Field settings are studies that take place in real-life settings, e.g. a park.

What is experimental realism?
Laboratory research that is as realistic and engaging as experiences in the real world.

What is the difference between quantitative and qualitative research?
Quantitative: numerical data; answers questions like "how many" or "how much".
Qualitative: textual data; answers questions like "what", "how", or "why".

What are guiding principles of qualitative research?
1. Reflexivity: recognizing the constructs that implicitly and explicitly influence the research process

What are ways to collect qualitative data?
1. Interviews
2. Focus groups
3. Surveys
4. Field notes
5. Researcher as instrument

What is triangulation?
Using multiple sources of data to come to a conclusion.

What are methods of qualitative research?
1. Thematic analysis
2. Grounded theory
3. Discourse analysis
4. Phenomenological analysis
5. Linguistic analysis

What is thematic analysis?
Identifying recurring patterns (themes) in the data.

What is grounded theory?
Inductive strategies for collecting and analyzing data in order to develop theories.

What is discourse analysis?
Analyzing language "beyond the sentence."

What is phenomenological analysis?
A detailed examination of a person's experience and their perception of that experience.

What are credibility, dependability, confirmability, and transferability in qualitative research?
Credibility: the results are true and believable.
Dependability: the results are repeatable with the same participants, coders/observers, and context.
Confirmability: other researchers would confirm the results.
Transferability: how well the results can be transferred to other situations.

What are nominal, ordinal, interval, and ratio scales?
Nominal = categorical data
Ordinal = numerical data that is ranked in order
Interval = numerical data in equal intervals with no absolute zero
Ratio = interval data with an absolute zero

What is a frequency distribution? What are some examples?
A table or graph that depicts how often each value of a variable is observed, e.g. pie charts, bar graphs, histograms, frequency polygons.

What are the 3 measures of central tendency?
Mean: the average
Median: the middle number
Mode: the most frequent number

What are standard deviation, variance, and range? What is variability overall?
Standard deviation = the average deviation of scores from the mean
Variance = the standard deviation squared
Range = the difference between the highest and lowest scores
Variability overall = how spread out the scores are, described by measures such as the standard deviation and the range.

What's effect size? How can it be measured?
The strength of the association between variables. It can be measured by the correlation coefficient r.

What is inferential statistics?
Techniques that use chance and probability to make decisions about the meaning of data.

What is the null hypothesis vs the research hypothesis? When do you retain the null hypothesis?
Null hypothesis = nothing happens; there is no effect or relationship.
Research hypothesis = something happens; there is a relationship between the variables.
You reject the null hypothesis when p is below the alpha level (p < .05); you retain (fail to reject) it when it is not (p ≥ .05).

What is a type 1 error? What about type 2?
Type 1: a false positive; there isn't an effect but you conclude there is.
Type 2: a false negative; there is an effect but you conclude there isn't.

What is power? What is power affected by?
The likelihood of not making a type 2 error, i.e. the likelihood that a study will find a statistically significant result when the IV really has an effect. Power is affected by the alpha level, sample size, variability, and effect size, NOT by the mean.

How can you reduce the risk of a type 1 error? What about a type 2 error?
Type 1: use a stricter (lower) alpha level.
Type 2: increase power, e.g. by increasing the sample size, reducing variability, or strengthening the manipulation (larger effect size).

What are the steps of significance testing?
1. Input the data into a software program
2. Conduct the analysis
3. Interpret the output
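As a concrete companion to the descriptive-statistics and significance-testing cards above, here is a short Python sketch (my own illustration, not course material) that computes the central tendencies and variability measures for a hypothetical set of scores and then runs an independent-samples t test with scipy, interpreting p against the .05 alpha level. The data and variable names are made up, and scipy is assumed to be installed.

```python
from statistics import mean, median, mode, stdev, variance
from scipy import stats  # requires scipy to be installed

# Hypothetical exam scores for two groups
group_a = [72, 75, 78, 75, 80, 77, 73, 79]
group_b = [68, 70, 65, 72, 69, 71, 67, 70]

# Descriptive statistics (central tendency and variability) for group A
print("mean:", mean(group_a))
print("median:", median(group_a))
print("mode:", mode(group_a))
print("standard deviation:", round(stdev(group_a), 2))
print("variance (SD squared):", round(variance(group_a), 2))
print("range:", max(group_a) - min(group_a))

# Inferential step: independent-samples t test, interpreted against alpha = .05
t, p = stats.ttest_ind(group_a, group_b)
print(f"t = {t:.2f}, p = {p:.4f}")
if p < 0.05:
    print("Statistically significant: reject the null hypothesis.")
else:
    print("Not significant: retain the null hypothesis.")
```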