victoria research methods

Goals of behavioral science (4)
1. Description of behavior
2. Prediction of behavior
3. Determining the causes of behavior
4. Explanation of behavior
Temporal precedence
the order of events: the cause must come before the effect in time
Covariation of cause and effect
when the cause is present, the effect occurs
when the cause is absent, the effect does not occur
Basic research
tries to answer questions about the nature of behavior and cognition
Applied research
conducted to address practical problems and evaluate potential solutions
Program evaluation
assesses the social reforms and innovations that occur in government, education, healthcare, etc
Empiricism
the idea that knowledge is based on observation
Operational definition
the set of procedures used to measure or manipulate a variable
Variable
any event, situation, behavior or individual characteristic that varies
Positive linear relationship
increases in the values of one variable are accompanied by increases in the values of the other variable
Negative linear relationship
increases in the values of one variable are accompanied by decreases in the values of the other variable
Curvilinear relationship
increases in the values of one variable are accompanied by systematic increases and decreases in the values of the other variable
Correlation coefficient
numerical index of the strength of relationship between variables
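For illustration, a minimal Python sketch of how a Pearson correlation coefficient can be computed from paired scores; the variable names and data below are invented, not from the card set.

```python
# Sketch: computing a Pearson correlation coefficient by hand.
# The paired scores below are hypothetical.
import math

hours_studied = [2, 4, 6, 8, 10]      # hypothetical variable X
exam_score    = [55, 62, 70, 78, 90]  # hypothetical variable Y

def pearson_r(x, y):
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    sd_x = math.sqrt(sum((xi - mean_x) ** 2 for xi in x))
    sd_y = math.sqrt(sum((yi - mean_y) ** 2 for yi in y))
    return cov / (sd_x * sd_y)

print(pearson_r(hours_studied, exam_score))  # close to +1: strong positive linear relationship
```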
Nonexperimental method
relationships are studied by making observations or measurements of the variables of interest; both variables are measured and examined for covariation (correlation)
aka correlational method
two problems:
1. direction of cause and effect
2. third variable problem
Experimental method
one variable is manipulated and the other is then measured; allows cause-and-effect conclusions; relies on experimental control and randomization
Third variable problem
there may be no direct causal relationship between the 2 variables; another variable could cause a relationship
Independent variable
the variable that is manipulated; considered to be the cause
Dependent variable
the variable that is measured; considered to be the effect
Internal validity
the ability to draw conclusions about causal relationships from the results of the study;
Requires analysis of 3 elements:
1. temporal precedence
2. covariation between the 2 variables
3. need to eliminate plausible alternative explanations for observed relationship
External validity
the extent to which the results can be generalized to other populations and settings
Construct validity
concerns whether our methods of studying variables are accurate
Response set
a tendency to respond to all questions from a particular perspective rather than to provide answers that are directly related to the questions; the most common is social desirability
Closed-ended questions
respondents answer from a fixed list of alternatives; more structured
Open-ended questions
free response
Rating scales
a scale of how much or how strongly one feels about a certain topic
Graphic rating scale
mark along a line between a description at each end
Semantic differential scale
a measure of the meaning of concepts; behaviors, ideas, and objects are rated on a 7-point scale (anything can be measured)
Panel study
the same people are surveyed at two or more points in time
Confidence interval
the true population value lies within this interval around the obtained sample result
Sampling error
the difference between a sample result and the true population value; the confidence interval indicates the likely amount of this error (the margin of error)
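A minimal sketch of how a 95% confidence interval and its margin of error can be computed for a survey proportion; the sample size and proportion below are hypothetical.

```python
# Sketch: 95% confidence interval for a sample proportion from a survey.
# Sample size and proportion are hypothetical.
import math

n = 400        # hypothetical number of respondents
p_hat = 0.62   # hypothetical proportion answering "yes"

z = 1.96  # z value for a 95% confidence level
margin_of_error = z * math.sqrt(p_hat * (1 - p_hat) / n)

print(f"95% CI: {p_hat - margin_of_error:.3f} to {p_hat + margin_of_error:.3f}")
```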
Probability sampling
each member of the population has a specific probability of being chosen (required when you want to make precise statements about a specific population)
Simple random sampling
type of probability sampling- every member of the population has an equal probability of being selected for the sample
Stratified random sampling
type of probability sampling- the population is divided into subgroups (strata) and random sampling techniques are used to select sample members from each stratum
Cluster sampling
type of probability sampling- used when no list of population members is available; existing groups (clusters) of people are identified and then randomly sampled
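A minimal sketch contrasting the simple random and stratified random sampling described above; the population list and the "class year" strata are invented for illustration.

```python
# Sketch: simple random vs. stratified random sampling from a hypothetical population list.
import random

random.seed(1)
population = [{"id": i, "year": random.choice(["freshman", "senior"])} for i in range(1000)]

# Simple random sampling: every member has an equal chance of being selected.
simple_sample = random.sample(population, k=50)

# Stratified random sampling: randomly sample within each stratum (here, class year).
strata = {}
for person in population:
    strata.setdefault(person["year"], []).append(person)
stratified_sample = [p for members in strata.values() for p in random.sample(members, k=25)]

print(len(simple_sample), len(stratified_sample))
```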
Non-probability sampling
the probability of any particular member of the population being chosen is unknown
Haphazard sampling
type of non-probability sampling- all about convenience- "take them where you find them"
Purposive sampling
type of non-probability sampling- obtain a sample of people who meet predetermined criterion
Quota Sampling
type of non-probability sampling- the sample is chosen to reflect the proportions of subgroups in the population, but haphazard techniques are used to fill each quota
Sample frame
used to evaluate samples- the actual population of individuals (or clusters) from which a random sample will be drawn
Response rate
used to evaluate samples- the percentage of people in the sample who actually completed the survey
Reliability
comes before validity; refers to the consistency or stability of a measure of behavior
true score
the real score on the variable being measured
Measurement error
the variability in observed scores produced when a measure is unreliable; an observed score can be thought of as the true score plus measurement error
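A minimal sketch of the "true score plus measurement error" idea: repeated measurements of the same person scatter around the true score when the measure is unreliable; all numbers are hypothetical.

```python
# Sketch: observed score = true score + measurement error.
import random

random.seed(0)
true_score = 100   # the person's real standing on the variable (hypothetical)
error_sd = 5       # larger spread of error -> less reliable measure

# Each observed score is the true score plus random measurement error.
observed_scores = [true_score + random.gauss(0, error_sd) for _ in range(10)]
print([round(s, 1) for s in observed_scores])  # scores vary around 100 because of error
```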
Test-retest reliability
measuring the same individual at two points in time
Internal consistency reliability
assessed with responses to different questions (items) collected at only one point in time
3 types:
1. split-half reliability
2. Cronbach's alpha
3. item-total correlations
Split-half reliability
type of internal consistency- correlation of the total score on one half of the test with the total score on the other half
Cronbach's alpha
type of internal consistency- assessed by examining the average correlation of each item (question) in a measure with every other question; best choice for reliability
Item-total correlations
type of internal consistency- the correlation between each individual item's score and the total score on all items of the measure
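A minimal sketch, using a made-up 5-respondent by 4-item response matrix, of how split-half reliability and Cronbach's alpha can be computed.

```python
# Sketch: split-half reliability and Cronbach's alpha on hypothetical item responses.
# Rows are respondents, columns are items.
from statistics import pvariance

responses = [
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [1, 2, 1, 2],
]

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# Split-half: correlate the total score on odd items with the total score on even items.
odd_totals  = [row[0] + row[2] for row in responses]
even_totals = [row[1] + row[3] for row in responses]
print("split-half r:", round(pearson_r(odd_totals, even_totals), 2))

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total scores).
k = len(responses[0])
item_vars = [pvariance([row[i] for row in responses]) for i in range(k)]
total_var = pvariance([sum(row) for row in responses])
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print("Cronbach's alpha:", round(alpha, 2))
```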
Interrater reliability
the extent to which raters agree in their observations; Cohen's kappa
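A minimal sketch of Cohen's kappa for two raters coding the same observations; the category labels and codings are hypothetical.

```python
# Sketch: Cohen's kappa for agreement between two raters on made-up categorical codings.
from collections import Counter

rater_a = ["on-task", "off-task", "on-task", "on-task", "off-task", "on-task"]
rater_b = ["on-task", "off-task", "off-task", "on-task", "off-task", "on-task"]

n = len(rater_a)
observed_agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Expected agreement by chance, based on each rater's marginal proportions.
counts_a, counts_b = Counter(rater_a), Counter(rater_b)
categories = set(rater_a) | set(rater_b)
expected_agreement = sum((counts_a[c] / n) * (counts_b[c] / n) for c in categories)

kappa = (observed_agreement - expected_agreement) / (1 - expected_agreement)
print(round(kappa, 2))
```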
Face validity
does the test look like it is going to do what it is supposed to do (not very important)
Content Validity
does the instrument (test) cover the content it is supposed to (achievement tests- knowledge)
Predictive Validity
Scores on the measure predict behavior on a criterion measured at a future time, e.g., using SAT scores to predict a student's freshman GPA; the criterion does not have to be academic (e.g., predicting who is likely to attempt suicide)
Concurrent Validity
Scores on the measure are related to a criterion measured at the same time
Convergent Validity
Scores of the measure are related to other measures of the same construct
Discriminant Validity
the measure does not correlate with measures of constructs it should not be related to
Reactivity
a problem in measurement in which awareness of being measured (knowing what is being tested) changes the behavior being observed
Nominal Scale
categories with no numeric scales; ex. males/females
Ordinal Scale
Rank ordering; numeric values are limited; ex. 2-, 3-, and 4-star restaurants
Interval Scale
Numeric properties are literal; equal intervals between values but no true zero; ex. intelligence, temperature
Ratio Scale
Has a true zero point that indicates the absence of the variable measured; ex. reaction time, weight, age
Observational methods
can be classified as primarily quantitative or qualitative
Qualitative methods
focus on people behaving in natural settings and describing their world in their own words; conclusions are based on interpretations drawn by the investigator
Quantitative methods
Focus on specific behaviors that can be easily quantified (counted); conclusions are based upon statistical analysis of data
Naturalistic observation
The researcher makes observations of individuals in their natural environments
Participant observation
The researcher observes the setting from the inside by taking part in it, experiencing events as the participants do
Systematic observation
Observations of one or more specific variables, usually made in a precisely defined setting
Coding system
A set of rules used to categorize observations
Case Studies
An observational method that provides a description of an individual (rare cases)
Psychobiography
A type of case study in which a researcher applies psychological theory to explain the life of an individual
Archival Research
The use of existing sources of information for research. Sources include statistical records, survey archives, and written records
Content analysis
A systematic analysis of existing documents
Hypothesis
Tentative idea or question that is waiting for evidence to support or refute it
Prediction
An assertion about what will occur in a particular research investigation
Theory
A systematic body of ideas about a particular topic or phenomenon
Anatomy of a Research Article
1. Abstract
2. Introduction
3. Method
4. Results
5. Discussion
6. References
Abstract
A summary of the research report that includes information about the hypothesis, the procedure, and the results
Introduction
The researcher outlines the problem that has been investigated and the specific hypotheses being tested
Method
It is divided into subsections that include:
overview of the design
description of characteristics of participants
detailed procedures
materials necessary
Unlike the other major sections, it does not begin on a new page
Results
The researcher presents the findings in three ways:
1. narrative form
2. statistical language
3. tables and graphs
Discussion
The researcher reviews the research from various perspectives
The Belmont Report
Defined the principles and applications that have guided more detailed regulations and the APA ethics code; three basic ethical principles:
1. beneficence
2. respect for persons (autonomy)
3. justice
Informed consent
Potential participants should be provided with all information that might influence their decision about whether to participate
Beneficence
The need for research to maximize benefits and minimize any possible harmful effects of participation
Deception
Occurs when there is active misrepresentation of information
Role Playing
Alternative to deception - the experimenter describes the situation to participants and then asks them how they would respond to the situation
Simulation
Alternative to deception - real world situation; can be used to examine conflict between competing individuals or jury deliberation for example
Honest experiments
Alternative to deception - Participants agree to have their behavior studied and to know what the researchers hope to accomplish
Institutional review board (IRB)
Local review agency composed of at least five individuals; every college that receives federal funding must have an IRB
Exempt research
Research in which there is no risk is exempt from review.
Examples:
Anonymous questionnaires
Surveys and educational tests
Naturalistic observation
Archival research
Minimal risk research
The risks of harm to participants are no greater than risks encountered in daily life or in routine physical and psychological tests
Debriefing
An opportunity for the researcher to deal with issues of withholding information, deception, and potential harmful effects; occurs after the study
Confounding variable
A variable that varies along with the independent variable
Posttest-only design
Type of basic experiment;
1. Obtain two equivalent groups of participants
2. Introduce the independent variable
3. Measure the effect of the independent variable on the dependent variable
Selection difference
Occurs when the people selected for the different conditions differ in some systematic way; random assignment is used to ensure the groups do not differ
Pretest-posttest design
The dependent variable is measured both before (pretest) and after (posttest) manipulation of the independent variable
Mortality or Attrition
Dropout factor in experiments
Independent group design (between-subjects design)
Participants are randomly assigned to various conditions so that each participates in only one group; comparisons are made between different groups
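A minimal sketch of random assignment to two independent groups (as in a posttest-only design); the participant IDs and group labels are hypothetical.

```python
# Sketch: randomly assigning participants to two equivalent groups.
import random

random.seed(42)
participants = [f"P{i:02d}" for i in range(1, 21)]  # hypothetical participant IDs
random.shuffle(participants)

half = len(participants) // 2
group_1 = participants[:half]   # e.g., receives the experimental treatment
group_2 = participants[half:]   # e.g., serves as the comparison (control) group

# The dependent variable would then be measured once and the groups compared.
print("Group 1:", group_1)
print("Group 2:", group_2)
```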
Repeated measures design (within-subjects design)
Each participant receives all levels of the independent variable; comparisons are made within the same individuals
Order effect
The order of presenting the treatments affects the dependent variable
Practice effect or learning effect
Performance on the second task may improve simply because of practice on the first
Carry over effect
The effect of the first treatment carries over to influence the response to the second treatment
Counterbalancing
Presenting the treatments in all possible orders, or randomly determining the order for each participant
Matched pairs design
Participants are matched in pairs on a participant variable (e.g., age or ability) and the members of each pair are then assigned to different conditions
Latin Square
A technique to control for order effects without using all possible orders
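A minimal sketch of a simple (cyclic) Latin square for four treatments: each treatment appears once in every ordinal position across participants, without using all 24 possible orders; the treatment labels are hypothetical.

```python
# Sketch: building a cyclic Latin square of treatment orders.
treatments = ["A", "B", "C", "D"]   # hypothetical treatment labels
n = len(treatments)

# Each row is one participant's order; each treatment appears once per position
# across the n orders, so order effects are spread evenly.
latin_square = [[treatments[(row + col) % n] for col in range(n)] for row in range(n)]

for participant, order in enumerate(latin_square, start=1):
    print(f"Participant {participant}: {' -> '.join(order)}")
```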
Active Voice
Subject matter first.
Example.
Participants took...
I gave...