Chapter 4 - Defining and Measuring Variables
Terms in this set (96)
Variable
Any factor or attribute that can assume two or more values.
They are the ways in which people differ from each other, how some change over time or act differently across diverse settings, and ways in which other species, objects and environments differ.
Qualitative Variables
Represent properties that differ in "type" (i.e., a type of attribute or quality), such as biological sex, religious affiliation, eye colour, and marital status.
Planning, decision, and execution errors represent a qualitative variable.
Quantitative Variables
Represent properties that differ in "amount."
People differ quantitatively in their height, weight, degree of shyness, time spent in a learning task, and blood alcohol level. Sounds can vary in intensity and perceived loudness. Task performance is a common quantitative variable in behavioural research.
What is a similarity between a qualitative and a quantitative variable?
Qualitative variables, like quantitative, can generate numerical data and be statistically analyzed.
With qualitative variables, we count the number of instances that occur within each category, report that information as frequencies, percentages, or proportions, and perform statistical tests as needed.
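The counting described above can be sketched in Python; the eye-colour observations below are made-up illustration data, not from the text.

```python
# Counting instances of a qualitative variable within each category and
# reporting them as frequencies and percentages (made-up eye-colour data).
from collections import Counter

eye_colours = ["brown", "blue", "brown", "green", "brown", "blue", "brown", "hazel"]

freq = Counter(eye_colours)  # frequency of each category
percentages = {colour: 100 * count / len(eye_colours)
               for colour, count in freq.items()}
```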
Discrete Variable
Between any two adjacent values (ex: 0, 1, 2, 3 children) no intermediate values are possible.
*Intermediate values are impossible.
Ex: a family may have 2 or 3 kids, but they can't have 2.76 kids.
Continuous Variables
Between any two adjacent scale values, intermediate values are possible.
*Intermediate values are possible - precision limited only by our measurement tools.
Ex: No matter how close the times of two people finishing a task are, it is always possible for another score to exist between them. So, if they have times of 4.0 and 5.0, then times of 4.1, 4.2, 4.3, etc. are possible. We can always obtain new intermediate values by considering one additional decimal place.
When we study continuous variables, why must they be converted into discrete variables?
When we measure time (in minutes, seconds, etc.), no matter how many decimal places we carry the measurement out to, there has to be a cutoff.
This creates discrete values beyond which we simply round up or down.
What type of graph is more appropriate to display qualitative variables?
A bar graph.
When the variable on the X-axis is quantitative but discrete (ex: size of a family), this type of graph is also appropriate because it avoids implying that the variable has intermediate scale values.
When measuring sound intensity what type of graph is it appropriate to display results on?
A line graph because sound intensity is a continuous quantitative variable.
When a graph presents many findings (the x-axis has many values), it's often easier to grasp the overall pattern of two variables and how they're related in a line graph.
What type of graph would you use when measuring memory?
Line graphs are often used to highlight the nonlinear or linear relations between variables and memory performance. Items to be recalled would be plotted on the x-axis and recall scores on the y-axis.
What graphs are used to portray discrete quantitative variables along the x-axis?
Line and bar graphs.
Independent Variable
Is the presumed cause in a cause-effect relation; in experiments, it is a factor that researchers manipulate or systematically vary in order to assess its influence on some behaviour or outcome.
Dependent Variable
Is the presumed effect in a cause-effect relation; in an experiment, it is the behaviour or outcome that the researcher measures to determine whether the independent variable has produced an effect.
What is the difference between independent and dependent variables?
Basically, independent variables are those proposed to be causes, and dependent variables are those proposed to be effects.
The dependent variable "depends" on the independent variable.
The outcome of the dependent variable is presumed to depend on the value of (level of OR condition of) the independent variable.
Situational Variable
A characteristic that differs across environments or stimuli.
Subject Variable
A personal characteristic that differs across individuals.
What are experiments best suited for?
Identifying cause-effect relationships, because they provide researchers a high degree of control over the variables being examined. Researchers create different levels or conditions of the independent variable and then expose participants to those conditions.
Ex: a key aspect of participants' behaviour (the dependent variable) is measured in each of the conditions and compared to determine whether the independent variable influenced the participants' behaviour.
What is the hallmark of experimentation?
Manipulating independent variables.
What are the key points about independent and dependent variables?
1) The "same" factor can be an independent or dependent variable depending on the particular research question being studied.
Ex: Question 1: Does stress affect one's craving for alcohol? AND Question 2: Does alcohol influence the strength of one's stress response when in a threatening situation?
2) Researchers often want to know how an independent variable influences multiple dependent variables; i.e., a study may examine how an independent variable influences two or more dependent variables.
Ex: Question: Does alcohol affect people's altruism and willingness to take risks?
3) Researchers may examine how two or more independent variables simultaneously influence the same dependent variable or variables; i.e., a study may examine how multiple independent variables simultaneously influence one or more dependent variables.
Ex: Question: What effects do consuming alcohol (IV#1) and energy drinks (IV#2) individually and mixed together have on people's subsequent desire to drink more alcohol?
Hypothetical Constructs
Underlying characteristics or processes that are not directly observed but instead are inferred from measurable behaviours or outcomes.
Many concepts that behavioural scientists study represent psychological attributes that cannot be directly observed, so we observe measurable responses that are presumed to reflect these underlying attributes.
Ex: motivation, aptitude, memory, stress, personality, intelligence, self-esteem, happiness, etc. They represent psychological states or processes that are hypothesized to exist but can't be directly observed. What we observe are measurable responses that are presumed to reflect these psychological attributes.
Mediator Variable
A variable that provides a causal link in the sequence between an independent variable and a dependent variable.
Between the independent and dependent variables there is a proposed link, the mediator variable. When the mediator variable is added between the IV and DV, it provides an explanatory causal link.
They are often internal psychological constructs that are hypothesized to represent a mechanism by which an IV influences a DV. External factors can also serve as mediator variables.
Helps explain "why" an IV influences a DV.
Moderator Variable
A factor that alters the strength or direction of the relation between an independent and dependent variable.
Helps explain "when" and "for whom" an IV produces a particular effect.
The different values of a ________________ variable represent different "types" of an attribute such as different college majors or ethnic groups.
Qualitative
The different values of a ________________ variable represent different amounts of some attribute, such as height, weight, or the speed of a response.
Quantitative
In terms of cause and effect, the ________________ variable represents the cause and the ________________ represents the effect.
Independent; Dependent
A ________________ variable is a variable that helps to explain why an independent variable influences a dependent variable.
Mediating
Operational Definition
Refers to defining a variable in terms of the procedures used to measure or manipulate it. The precise statement of how a conceptual variable is turned into a measured variable.
Scientists need to decide on specific procedures for measuring variables in research. If the research is an experiment involving manipulating variables to create different conditions to which participants will be exposed, then researchers must also decide on procedures - the operations used to measure and manipulate - as a concrete way of defining those variables.
In terms of hypothetical constructs, operational definitions translate abstract/hypothetical concepts that can't be directly observed into tangible, measurable variables. It is a road map for others to replicate a study.
Is a rating on a scale from -2 (strongly disagree) to +2 (strongly agree) in response to the statement "I am in love with my current partner" an operational or conceptual definition?
Operational
Is a feeling of deep longing for, and sense of commitment to, another person an operational or conceptual definition?
Conceptual
Is an emotion in which the presence or thought of another person triggers arousal, desire, and a sense of caring for that person an operational or conceptual definition?
Conceptual
Measurement
Is the process of systematically assigning values (numbers, labels, or other symbols) to represent attributes of organisms, objects or events.
The term systematic means values are assigned according to some rule.
Scales Of Measurement
Refers to rules for assigning scale values to measurements.
Scale Values
Numerical scores or category labels that represent the variables being measured. Scale values of 0, 1, 2, 3, 4... can be used to represent the number of errors made by someone, can be the names of academic majors, etc.
Ex: 1 = biology, 2 = math, 3 = business, etc. The numbers are substitute symbols devoid of any quantitative information. The numbers can be assigned any way the researcher wishes.
Why are scales of measurement important?
While numbers can be assigned to any variable, the mathematical operations that researchers can meaningfully perform on those numbers (add, subtract, multiply, etc.) and the ways in which researchers analyze and interpret their data are influenced by the scale of measurement used.
There are four measurement scales: nominal, ordinal, interval and ratio.
What is conveyed as scientists progress from nominal to ratio scales?
Levels of measurement; progressively more information about the variable being measured.
Nominal Scale
The scale values represent only qualitative differences (difference of type rather than amount) of the attribute of interest.
Key characteristic: Different scale values only represent different qualities. Ex: Olympic teams and their primary colour of team jersey, classifying people's political affiliation, students college majors, or people as having different types of anxiety disorders.
Involves creating a set of labels or names for categories that are mutually exclusive and assigning each case into one of those categories. All cases within each category must be equivalent on the attribute represented by that category.
Found in everyday life such as menus in restaurants categorizing food, or computers itemizing documents into different folders.
Common and important, but considered the weakest level of measurement because they provide the least amount of information regarding the cases being measured. All one can assume is that the cases within a category are equivalent to one another and different from the cases in other categories.
Even though numbers can be assigned to represent categories, the numbers themselves are arbitrary. They provide no quantitative information and aren't amenable to the basic arithmetic operations of addition, subtraction, etc.
Ordinal Scale
The different scale values represent relative differences in the amount of some attribute such as rankings.
Key Characteristic: Scale values represent quantitative ordering. Ex: Order of finishing a race, categorizing people as young children, adolescents, adults based on age, or documents as top, moderate and low priority.
Values on these scales can be represented by numbers or category labels that imply an ordering on some quantitative dimension.
They provide information about how different people/entities stand relative to one another on dimensions such as greater, less, better or worse than.
They don't tell us the distance between rankings, or whether the amount of distance between rankings is equivalent across the entire scale range.
Interval Scale
When equal distances between values on the scale reflect equal differences in the amount of the attribute being measured.
Key Characteristic: Equal scale intervals represent equal quantitative differences. Ex: Temperatures in a stadium throughout the day, scores from intelligence tests, personality trait tests, or attitude scales.
The information yielded is more precise than the information we obtain at the nominal and ordinal levels of measurement.
Ratio Scale
When equal distances between values on the scale reflect equal differences in the amount of the attribute being measured and the scale also has a true zero point.
These scales provide the most amount of information about the attribute being assessed.
Key Characteristic: Equal scale intervals represent equal quantitative differences, and there is a true zero point. Ex: Time to finish a race in seconds.
While scale intervals may be equal in both interval and ratio systems, only a scale with a true zero point lets you use the scale values to create meaningful ratios. With ratio scales you can measure quantities such as contrast, time, income, and length.
What kind of measurement do you use to measure place of residence such as dorms, frat houses, or off campus?
Nominal
What kind of measurement do you use to measure the number of college credits completed?
Ratio
What kind of measurement do you use to measure the current standing of intramural softball teams (1st, 2nd, 3rd place)?
Ordinal
Standardized Procedure
When collecting data, each measurement is taken under conditions that are as equivalent as possible.
Accuracy
The degree to which a measure yields results that agree with a known standard.
Systematic Error
Also called a bias, is a consistent degree of error that occurs with each measurement.
Occurs when the measured variable is influenced by other conceptual variables that are not part of the conceptual variable of interest. Ex: optimism as influenced by religious belief.
These variables systematically increase or decrease the scores on the measured variable.
You can try to eliminate the bias by recalibrating your scale against a known standard.
Why can't measurement accuracy be determined for many psychological variables?
In behavioural science, the accuracy of physical instruments used to measure many variables (speed, frequency of response, duration, force, weight, etc.) "can" be calibrated against known standards. But the accuracy of measures of many psychological variables can't be determined because known standards don't exist.
Ex: measuring IQ with a psychological test because what "is" the known standard against which you can compare the accuracy of those measuring instruments?
In these cases, as well as in cases where accuracy can be determined, researchers devote their attention to other aspects of measurement: reliability and validity.
Reliability
A measure's reliability is assessed by examining its consistency: it yields consistent numbers. The extent to which a measure is free from random error. Reliable measures give you consistent "readings". Ex: a bathroom scale.
According to Classical Test Theory, the reliability of a measure reflects the degree to which it is free from random measurement error.
Researchers estimate the reliability of measures by assessing their consistency.
Does a measure yield consistent, repeatable results under conditions where consistency would be expected?
Random Measurement Error
Random fluctuations/variations that occur during measurement and cause the obtained scores to deviate from a true score.
The greater the random error, the less reliable the measuring instrument will be.
When studying behaviour, what can random measurement error be introduced by?
-Participants' characteristics (ex: momentary fluctuations in mood or attentiveness).
-Measurement setting or procedures (ex: chance variations in room temperatures or how instructions are delivered).
-The measuring instrument itself (ex: ambiguously worded test items or criteria for classifying behaviour, fluctuations in attentiveness of observers who are doing the recording).
-Other factors (ex: random mistakes in transcribing data).
Test Retest Reliability
One common way of assessing reliability; it is determined by administering the same measure to the same participants on two or more occasions under equivalent test conditions.
Ex: Testing personality traits or intelligence. Traits and intelligence are relatively stable characteristics that don't change substantially over time. A test would be given to participants say at the beginning of a 30 day month, then again at the end of that 30 day month. Then statistically analyze how well their scores at time 1 correlate with scores at time 2. The stronger the correlation between the two sets of test scores, the higher the reliability of the measure.
Examines how strongly test scores at Time 1 correlate with test scores at Time 2.
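The Time 1/Time 2 correlation described above can be sketched in Python. The pearson_r helper and the participant scores are hypothetical illustrations, not from the text.

```python
# Test-retest reliability: correlate the same participants' scores at Time 1
# and Time 2. A strong positive correlation suggests a reliable measure.
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

time1 = [10, 12, 9, 15, 11]  # made-up scores at the start of the month
time2 = [11, 13, 9, 14, 12]  # made-up scores at the end of the month
r = pearson_r(time1, time2)  # values near 1.0 indicate high reliability
```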
Split Half Reliability
The items that compose a test are divided into two subsets and the correlation between subsets is determined.
Internal consistency calculated by correlating a person's score on one half of the items with their score on the other half of the items.
Is one example of an overall approach to estimating reliability that is often called internal reliability or internal-consistency reliability. It assesses the interrelatedness of the items within a measure, and researchers interpret stronger interrelatedness as evidence of higher test reliability.
Ex: Cronbach's alpha, which reflects how strongly the individual items in a test (or in a subset of a test) correlate with one another overall. It is the most common, and the best, index of internal consistency, and gives an estimate of the average correlation among all of the items on the scale.
Examines how strongly scores on the two halves of a test correlate (or examines the average correlation of all possible split-halves).
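As a sketch of the internal-consistency idea, Cronbach's alpha can be computed from the item variances and the variance of respondents' total scores. The item scores below are made-up illustration data.

```python
# Cronbach's alpha: alpha = (k/(k-1)) * (1 - sum(item variances) / variance of totals)
def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(items):
    """items: one list of scores per test item, each covering the same respondents."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # each respondent's total score
    return (k / (k - 1)) * (1 - sum(variance(it) for it in items) / variance(totals))

items = [
    [3, 4, 3, 5, 4],  # item 1 scores for five respondents (made-up)
    [2, 4, 3, 5, 3],  # item 2
    [3, 5, 2, 5, 4],  # item 3
]
alpha = cronbach_alpha(items)  # higher alpha = stronger internal consistency
```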
Interobserver Reliability
Also called interrater reliability; represents the degree to which independent observers show agreement in their observations.
Examines how well the codings, ratings or other observations of two or more independent observers agree.
Validity
Can we truthfully infer that a measure actually does what it is claimed to do? Is it assessing what it's claiming to assess?
Ex: If the developers of a psychological test claim it measures shyness, does evidence indicate that the test actually measures shyness and not some other psychological characteristic?
Although researchers commonly use the term validity as if it were a property of the measure itself, the concept refers to the inferences that we draw about a measure and the results we obtain from it.
For a measure to be valid, it must first have good reliability. But just because a measure is reliable doesn't mean it's valid.
Asks: Are we truly measuring what we intend to measure? Can we infer that a measure is really assessing what it is supposed to assess?
Face Validity
Concerns the degree to which the items on a measure appear to be reasonable. Extent to which the measured variable appears to be an adequate measure of the conceptual variable.
It's not a scientific form of validity, but it can be important. It may influence how readily people accept the results of a test as valid.
It's possible for a test to have low face validity and still measure what it claims to measure. However, just because a test "looks good" (has face validity) doesn't mean it is scientifically valid.
Content Validity
Represents the degree to which the items on a measure adequately represent the entire range or set of items that could have been appropriately included. OR degree to which the measured variable appears to have adequately sampled from the potential domain of questions that might relate to the conceptual variable of interest.
Asks: Does the content of a measure adequately represent the range of relevant content?
Criterion Validity
Addresses the relation between scores on a measure and an outcome.
Asks: Do scores on a measure relate to scores on a relevant criterion?
Predictive Validity
A type of criterion validity, demonstrated when a measure recorded at one time predicts a criterion that occurs in the future.
When criterion validity involves attempts to foretell the future.
Asks: Will the scores on a measure help us predict future scores on a criterion?
Concurrent Validity
A type of criterion validity; The relation between scores on a measure and an outcome, when that measure and outcome are assessed at the same time (concurrently).
When criterion validity involves assessment of the relationship between a self-report and a behavioural measure that are assessed at the same time.
Asks: Do the scores on a measure help us estimate current scores on a criterion?
Construct Validity
Is demonstrated when a measure truly assesses the construct that it is claimed to assess. Extent to which a measured variable actually measures the conceptual variable it is designed to assess.
Asks: Does a measure assess the construct that it is claimed to assess, and not some other construct?
Is the broadest and most theoretically based type of validity. Established by the pattern of how a particular measure relates to other measures that, in theory, should or should not be related to it.
Evidence for the construct validity of a measure also builds as the results of studies support its content and criterion validity.
What are the types of criterion validity?
Predictive and Concurrent
What are the types of construct validity?
Convergent and Discriminant
Convergent Validity
A type of construct validity; scores on a measure should correlate highly (converge) with scores on other measures of the same construct.
Extent to which a measured variable is found to be related to other measured variables designed to measure the same conceptual variable.
Discriminant Validity
A type of construct validity; scores on a measure should not correlate too strongly, if at all, with measures of other constructs.
Extent to which a measured variable is found to be unrelated to the measured variables designed to assess different conceptual variables.
True or false? The speedometer on a car consistently overestimates the car's true speed by 5%. This type of error is called random measurement error.
False
True or false? Test retest reliability and split half reliability are different methods of assessing whether a psychological test yields consistent measurements.
True
True or false? Predictive and concurrent validity both represent types of criterion validity.
True
True or false? Construct validity is the most theoretical type of validity.
True
Qualitative Variable
Apples vs. oranges. Variable levels are categories and values reflect difference in "kind".
Quantitative Variable
Small apples vs. large apples. Variable levels exist on a continuum from low to high and values reflect difference in "amount".
Conceptual Variable
Often expressed in general, theoretical, qualitative, or subjective terms and important in hypothesis building process. Ex: satisfaction, aggression, depression, decision making skills, etc
Converging Operations
The use of different operationalizations of the same conceptual variable, allowing the researcher to triangulate on the conceptual variable of interest.
Ex: In attachment research, using a questionnaire recording experiences in close relationships, a behavioural operation such as the "strange situation" experiment, and a physiological tool such as measuring one's cortisol level after separation.
Conceptual and Measured Variables in a Correlational Research Design
This design tests the correlational relationship between the conceptual variables of job satisfaction and job performance, using a specific operational definition of each. If the research hypothesis is correct (job performance is correlated with job satisfaction) and if the measured variables actually measure the conceptual variables, then a relationship between the two measured variables should be observed.
Nominal Scales
Used to differentiate among members of a category (provide labels).
Ex: 1 = Catholic, 2 = Jewish, 3 = Muslim, etc
Ex: 1 = Liberal, 2 = Conservative, 3 = NDP
Limitations: It is entirely arbitrary and mathematical operations are nonsensical.
Ordinal Scales
Used to "rank" order participants on some variable.
Ex: 1 = weakest, 10 = strongest.
There is no assumption of equal intervals.
Interval Scales
Used to measure incremental changes in the measured variable.
Distances between consecutive values are assumed to be equal. Ex: on a 7-pt scale of happiness, the difference between a score of 2 and 3 is assumed to be the same as the difference between a score of 5 and 6 (i.e., 1 unit).
It lacks a true zero point. Ex: a score of zero on a temperature scale doesn't mean an "absence" of temperature, and a zero on a personality test doesn't mean a person has no personality.
You can say that the difference between 10 and 11 degrees is the same amount of difference between 20 and 21 degrees but NOT that 20 degrees is twice as hot as 10 degrees.
Most scales in psychology are really quasi-interval, but at best they are treated statistically as if they were interval.
Ratio Scales
Interval scales WITH a true zero point. Examples are height, weight, time.
There are few psychological variables that can be measured on a ratio scale.
What do mediators explain?
A causal relationship which sheds light on the process by which the IV influences the DV.
If you are looking at the relationship between pornography consumption and one's level of fidelity in a relationship, the level of commitment within the relationship can be a causal link between consumption and fidelity levels.
Moderator Variables
A moderator variable influences the direction and/or strength of the relationship between two variables.
Moderators "change" or qualify the IV-DV relationship.
For example, after watching violent video games, girls become disgusted and act nicer, whereas boys get excited and act aggressively. But note that video games don't cause gender, gender is a pre-existing condition.
What are reliability and validity?
Techniques for evaluating the relationship between measured and conceptual variables.
Random Error
Chance fluctuations in measured variables. Some sources are: misreading/misunderstanding the questions, measurement of individuals on different days or in different places, the experimenter makes errors in recording responses, or individuals mark their answers incorrectly.
Random and Systematic Error
Random error - coding errors, participants inattention to and misperception of questions, etc.
Scores on a measured variable, such as a Likert scale measure of anxiety, will be caused not only by the conceptual variable of interest (anxiety), but also by random measurement error as well as other conceptual variables that are unrelated to anxiety.
Reliability is increased to the extent that random error has been eliminated as a cause of the measured variable.
Construct validity is increased to the extent that the influence of systematic error has been eliminated.
Test Retest Reliability
Extent to which scores on the same measure, administered at two different times, correlated with each other.
Retesting Effects
When the same measure is given twice, responses on the second administration may be influenced by the measure having been taken the first time.
Equivalent Forms Reliability
Extent to which scores on similar, but not identical measures, administered at two different times, correlate with each other.
Internal Consistency
Extent to which the scores on the items of a scale correlate with each other, usually assessed using coefficient alpha.
Item To Total Correlations
Correlations between the score on each of the individual items and the total scale excluding the item itself. Those items that don't correlate highly with the total score can be deleted from the scale. The resulting scale has higher reliability.
Kappa (K)
Used in interrater reliability, is a statistic used as the measure of agreement among judges.
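The card above defines kappa conceptually. As a sketch, Cohen's kappa for two raters corrects the observed agreement for agreement expected by chance; the rater codings below are made-up illustration data.

```python
# Cohen's kappa: kappa = (p_observed - p_expected) / (1 - p_expected)
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters' codings."""
    n = len(rater_a)
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    # Chance agreement: product of the raters' marginal proportions per category
    p_exp = sum((ca[c] / n) * (cb[c] / n) for c in set(rater_a) | set(rater_b))
    return (p_obs - p_exp) / (1 - p_exp)

a = ["yes", "yes", "no", "no", "yes", "no", "yes", "no", "yes", "no"]
b = ["yes", "no", "no", "no", "yes", "no", "yes", "yes", "yes", "no"]
kappa = cohens_kappa(a, b)  # 8/10 observed agreement, 0.5 expected by chance
```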
Criterion Variable
The name given to the behavioural variable when validity is assessed through the correlation of a self-report measure with a behavioural measured variable.
The correlation is an assessment of the self-report measure's criterion validity.
How do you improve reliability and validity?
1. Conduct a pilot test by trying out a questionnaire or other research on a small group of individuals.
2. Use multiple items.
3. Increase item response range.
4. Write good items.
5. Attempt to get respondents to take questions seriously.
6. Attempt to make items nonreactive.
7. Consider face and content validity.
8. When possible, use existing measures.
Descriptive Statistics
A summary of both (A) typical scores (measures of central tendency) and (B) how spread out and diverse the scores are (measures of variability).
Used by researchers to summarize and "describe" data found during research. They are used to describe or summarize data in ways that are meaningful and useful.
Typically researchers deal with lots of data and they provide a way for the researchers to summarize the main properties of a large group of data into just a few numbers.
This lets the researcher show what the data are without tons and tons of numbers.
Some examples of descriptive statistics are frequency distributions, measures of center (i.e., mean, median, mode), range, and standard deviation.
Inferential Statistics
Ways of analyzing data that allow the researcher to make conclusions about whether a hypothesis was supported by the results. You can remember the term inferential because it comes from the word 'inference,' meaning 'to draw a conclusion from clues in the environment.'
Two common types are the t-test and the analysis of variance.
Analysis of Variance
Researchers usually use the nickname ANOVA for this test.
It is a test that compares the average scores between three or more different groups in a study to see if the groups are different from each other.
It can analyze multiple groups at once. The difference between ANOVA and a t-test is simply how the equation works to analyze the groups.
The ANOVA compares multiple groups, while a t-test can only compare two groups.
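A minimal sketch of the idea above, computing a one-way ANOVA F statistic by hand; the three groups of scores are made-up illustration data, not from the text.

```python
# One-way ANOVA: F = (between-groups mean square) / (within-groups mean square)
def one_way_anova_f(groups):
    all_scores = [x for g in groups for x in g]
    grand_mean = sum(all_scores) / len(all_scores)
    # How far each group's mean sits from the grand mean, weighted by group size
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # How much scores vary around their own group's mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_between = len(groups) - 1
    df_within = len(all_scores) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

groups = [[4, 5, 6], [7, 8, 9], [10, 11, 12]]  # three made-up conditions
f_stat = one_way_anova_f(groups)  # large F suggests the group means differ
```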
Mean
The arithmetic average of the entire distribution. It is calculated by finding the sum of the data and dividing it by the total number of data points.
Ex:
(8+4+9+3+5+8+6+6+7+8+10) Total sum = 74
Total amount of values = 11
74/11 = mean of 6.73
To calculate add up all the numbers, then divide by the total amount of numbers given.
Mode
It is the number that appears most frequently in the set of data. Ex: 2, 10, 12, 2, 6, 28, 2 = Mode: 2
Median
Data point in a distribution that divides the distribution in half. The middle value in a set of data.
It is calculated by first listing the data in numerical order then locating the value in the middle of the list. When working with an odd set of data, the median is the middle number.
For example, the median in a set of 9 data is the number in the 5th place. (3, 6, 9, 11, "13", 15, 17, 19, 21).
When working with an even set of data, you find the average of the two middle numbers. For example, in a data set of 10, you would find the average of the numbers in the fifth and sixth places. (2, 4, 6, 8, "10, 12", 14, 16, 18, 20)
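The worked examples for the mean, mode, and median above can be checked with Python's statistics module, a sketch using the same numbers as the cards.

```python
# Checking the mean, mode, and median examples with the statistics module.
import statistics

mean = statistics.mean([8, 4, 9, 3, 5, 8, 6, 6, 7, 8, 10])  # 74 / 11
mode = statistics.mode([2, 10, 12, 2, 6, 28, 2])
median_odd = statistics.median([3, 6, 9, 11, 13, 15, 17, 19, 21])
median_even = statistics.median([2, 4, 6, 8, 10, 12, 14, 16, 18, 20])
```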
Standard Deviation
A statistic calculated as the square root of the variance; the variance is obtained by taking the mean of the squared differences between each value and the mean value.
Because the differences are squared, the units of variance are not the units of the data. This is why the standard deviation is the square root of the variance.
Standard deviations and variances are common measures of dispersion.
Say we have a bunch of numbers: 9, 2, 5, 4, 12, 7, 8, 11.
To calculate the standard deviation of those numbers:
1. Work out the Mean (the simple average of the numbers)
2. Then for each number: subtract the Mean and square the result
3. Then work out the mean of those squared differences.
4. Take the square root of that and we are done!
The formula for the (population) standard deviation:

σ = √( Σ(xᵢ - x̄)² / N )

where:
σ = the standard deviation
xᵢ = each value in the dataset
x̄ (x with a bar over it) = the arithmetic mean of the data
N = the total number of data points
Σ(xᵢ - x̄)² = the sum of (xᵢ - x̄)² over all data points
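The four steps above can be implemented directly; a sketch computing the population standard deviation for made-up data.

```python
# The four steps, implemented directly (population standard deviation).
import math

data = [9, 2, 5, 4, 12, 7, 8, 11]
mean = sum(data) / len(data)                        # step 1: the mean
squared_diffs = [(x - mean) ** 2 for x in data]     # step 2: squared differences
variance = sum(squared_diffs) / len(squared_diffs)  # step 3: mean of those squares
sigma = math.sqrt(variance)                         # step 4: square root
```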