Research Methods Midterm


Quantitative Research
Measure objective facts
Focus on variables
Reliability is key
Value free
Researcher is detached
Theory and data are separate
Independent of context
Many cases, subjects
Statistical analysis
"data condensers" see bigger picture
Qualitative Research
Construct social reality, cultural meaning
Focus on interactive processes, events
Authenticity is key
Values are present and explicit
Researcher is involved
Theory and data are fused
Situationally constrained
Few cases, participants
Thematic analysis

"data enhancers" see key aspects more clearly
Mixed Methods Research
Uses both quantitative and qualitative techniques in an effort to build convincing claims about the relationships between attributes and outcomes
Basic Research
research that fills gaps in existing knowledge; it tries to learn things that aren't always directly applicable or immediately useful
Applied Research
research that seeks to answer a question in the real world and to solve a problem
Exploratory Studies
Purpose: to explore a new area or an existing phenomenon in a new context; to begin to define key features of a phenomenon; paves the way for future research
Descriptive Studies
Purpose: to describe situations, events, behaviors, beliefs, attitudes, processes, etc.; key concepts are already defined
Explanatory Studies
Purpose: to explain causal relationships; key concepts are defined and there are hypotheses about their relationships
"Good" Research Questions
Have not already been answered.
Are important to the field / have connections to other concepts in the field.
Will provide useful data regardless of the results.
Are answerable using the resources available to you.
Are clear, unambiguous, and easily understood.
Are specific enough to suggest the data you will need to collect in your study.
Paradigm
Broad, foundational assumptions shared by essentially all researchers in a field
Metatheory
A theory concerned with the investigation, analysis, or description of theory itself; ideas about how concepts in a field should be thought about and researched
Theory
A system of assumptions, accepted principles, and rules of procedure devised to analyze, predict, or otherwise explain the nature or behavior of a specified set of phenomena.
A theory appears in abstract, general terms and generates more specific hypotheses (testable propositions).
Model
A tentative ideational structure used as a testing device
Principles, rules, classifications, and typologies can also fall under 'model'; they establish vocabulary and define the phenomena of interest. Models can also be mathematical.
Basic Elements of a Theory
1) Concepts (labels and definitions of variables)
-Description
2) Regularities in the relationships among concepts (variables)
-Prediction
-Explanation
Functions of Theory
1) Organize and summarize knowledge; communication
2) Focuses attention on important variables and relationships
3) Clarify what is observed: helps understand relationships and interpret findings
4) Observational: tells us what to observe and how to observe it.
5) Predicts outcomes
6) Research heuristic: a good theory generates research
7) Generative: challenge existing cultural life and generate new ways of living
A Good Theory?
1) Theoretical scope
What does it cover?
2) Heuristic value
Can it be used to generate research?
Is it testable?
3) Validity
Value, or worth
Correspondence, or fit
Coherence: logical, consistent, no contradictions
Generalizability
4) Parsimony
Is it simple? (Or, at least simpler than alternatives?)
5) Usefulness
Does it help practitioners understand or take steps to address issues they encounter in their jobs?
Sampling
process by which participants are selected for a research study
Probability Sampling
Every individual has a known, non-zero probability of being selected
Must have a sampling frame
Selection is random
Non-Probability Sampling
Any form of sampling that doesn't meet the criteria for a probability sample
Sometimes the goal is still representativeness, but not always
Simple Random Sampling
A sampling procedure that assures each element in the population of an equal chance of being included in the sample. (probability sampling method)
Systematic Sampling
A procedure in which the selected sampling units are spaced regularly throughout the population; that is, every nth unit is selected. (probability sampling method)
Stratified Sampling
Population divided into strata first, then sampled.
Strata should have some relationship to the study concepts or goals.
Homogeneity within subgroups
Heterogeneity between subgroups
(probability sampling method)
Cluster Sampling
Initial sampling units are groups of elements.
You can then sample all the elements in each chosen group, or sample further within each group.
Heterogeneity within subgroups
Homogeneity between subgroups

(probability sampling method)
Sampling Element
an object on which measurements are taken
Coverage Error
Your sampling frame doesn't cover the entire population, and excluded elements are different in meaningful ways from included elements
Sampling Error
The uncertainty that results from not having sampled the entire population
Can be calculated (based on the variation in your sample and the sample size) and reported as a margin of error or confidence interval
Purposive Sampling
Elements are purposefully chosen because of some characteristic; the researcher usually knows the typical case.
Options include maximizing or minimizing variability, or factoring in opposing views or extreme cases.
(non-probability sampling method)
Quota Sampling
Determine which characteristics are of interest and set a quota for each level of that characteristic.
Similar to stratified sampling, but participants are not randomly selected.
(non-probability sampling method)
Snowball Sampling
Initial participants identify more participants.
(non-probability sampling method)
Convenience Sampling
Choosing elements because they are easy to access.
AKA opportunity or accidental sampling
(non-probability sampling method)
Hypotheses
Statements that we assume, for the sake of argument, to be true
Usually propose a relationship between two or more variables or concepts
May suggest a direction for the relationship or may simply propose that a relationship exists
This type of hypothesis is called the alternative hypothesis or research hypothesis
Not always stated or explicit
Logic of Disconfirming Hypotheses
1) A researcher cannot prove the alternative hypothesis, even with overwhelming evidence
2) Hypotheses can, however, be rejected based on negative evidence
3) Therefore, researchers often set up a second hypothesis contradicting their research hypothesis and seek to reject it in their study.
Null Hypothesis
States that there is no association between input (independent) variables and output (dependent) variables
Assumed to be true when researchers start their investigations
The researcher accepts the burden of collecting evidence that shows the null hypothesis is inadequate.
Statistical analysis compares observed results to the null hypothesis
Experiments
One of the oldest approaches to scientific research

Experiments typically try to collect evidence for a correlational or causal relationship between two or more variables
True Experiment
there is a control or comparison group, or multiple measures
participants randomly assigned
Pre-Experiment
there is NO control or comparison group, nor multiple measures
Quasi-Experiment
there is a control or comparison group, or multiple measures
participants are NOT randomly assigned
Between Subjects Experimental Design
Examines differences between individuals or groups
Each group exposed to a different treatment
Quicker for each subject but requires a larger pool of subjects to account for individual differences
Within Subjects Experimental Design
Examines differences in a particular variable for individual subjects (or all subjects on average)
Each group exposed to all treatments (sometimes in different orders)
Removes individual differences but adds concern for ordering of treatments and increases time burden per subject
Validity
Are we accurately measuring the thing we're supposed to be measuring (internal validity)? To what extent can the study's results be generalized to other settings (external validity)?
Reliability
Are the measurements repeatable in your study?
Improve Reliability
Improve your questions
Train raters
Test your survey / instrument for reliability - don't assume it's reliable!
Ask more questions
Pilot / pretest
Improve Validity
Compare your test or its results to existing measures
Ground your work in existing theories and models
Triangulate your data
Have experts assess your measures
Pilot test and get feedback from participants
History
Internal Validity Threat
An unanticipated event occurred during the experiment that affected the outcome variable
Example: You are testing the impact of cultural competency training on academic librarians; midway through the semester-long training program, the campus is embroiled in a scandal when the university president makes a racist remark on camera
Maturation
Internal Validity Threat
Levels of the outcome variable changed due to normal developmental processes as a function of time
Example: You measure high school students' information literacy skills in September and again in June; if they have increased, it might be attributable to the passage of time
Regression
Internal Validity Threat
Because of imperfect reliability, subjects selected on the basis of extreme scores tend to "regress" toward the mean on subsequent tests
Example: You select the bottom 10% of incoming freshmen according to their scores on a mandatory information literacy test for your test group; on the post-test, their scores improve simply because extreme scores tend to move back toward the mean.
Mortality
Internal Validity Threat
Differential loss of participants across groups or over time
Example: Subjects who found the experimental intervention too difficult dropped out of the study.
Testing
Internal Validity Threat
Pre-test affects scores on the post-test
Example: You want to test whether your new curriculum improves information literacy, but the pre-test sensitizes students to the topics that will be covered in the post-test
Selection
Internal Validity Threat
Groups are not equivalent at the start of the study
Example: You implement a new information literacy curriculum in one section of a course but not another; however, students self-selected the section and all of the athletes enrolled in the same section.
Design Contamination
Internal Validity Threat
Subjects "compare notes" across groups and the control group ends up receiving some treatment
Example: You want to test the impact of your pathfinders on final exam scores, but students in the experimental group share the link with students in the control group.
John Henry Effect / Resentful Demoralization
Internal Validity Threat
Subjects in the control group know they are not receiving the intervention, causing them to purposely overperform (John Henry effect) or underperform (resentful demoralization).
External Validity Threats
1) Poor sampling
Sample is not actually representative of the population, for any number of reasons
2) Hawthorne Effect
Participants modify their behavior because they know they are being observed
3) Setting (ecological validity)
Real life is messy; researchers trade environmental control for authenticity. Will people behave the same way outside of the lab?
Nonresponse Error
Occurs when the people selected for the survey who do not respond are different from those who do respond in a way that is important to the study.
Measurement Error
Results from inaccurate answers to questions and stems from poor question wording, survey mode effects, or aspects of the respondents' behavior.
The Response Task
(Way to reduce measurement error)
Minimize problems in these areas:
Comprehension
Recall
Judgment
Formatting
Editing
Minimize Comprehension Problems (Response Task)
Def: Understanding the individual words in the question
Understanding word meaning in the context of the question
Goals:
Keep questions short
Break questions apart if necessary
Use simple words
No jargon or acronyms
Define key terms and reference periods
Avoid "double-barreling" - asking two questions at once
Avoid hidden assumptions
Minimize Recall Problems (Response Task)
Def: Applies mainly to factual information
Goals:
Provide cues
Use an appropriate reference period
Be specific
Use a longer question (maybe)
Diaries or logs
Calendars or aids during the interview
Minimize Judgment Problems (Response Task)
Def: For factual questions: "Does the information I've recalled fit the criteria in question?" (e.g. reference period)
For attitudinal questions: "I know I have feelings / beliefs about this topic, but what are they within the specific parameters of the question?" (e.g. I know I am generally opposed to censorship, but what about in the specific instance the question portrays?)
Information used to answer previous questions is particularly likely to come to mind
Goals:
Avoid negatively-worded statements
Pay attention to question order
Avoid hypothetical questions
Use consistent response scales
For factual questions, use an appropriate reference period
Use examples, but carefully
Minimize Format Problems (Response Task)
Def: Forced choice, scales, ratings, etc.
Remember: Choice of response categories can impact all other response tasks too!
Goals:
Response categories should:
Match the format of the question
Be mutually exclusive
Cover all circumstances
Avoid vague quantifiers ("usually," "often")
Label scale points
Avoid long lists
Make sure it is clear how the respondent should indicate his/her answer
Minimize Editing Problems (Response Task)
Def: I have comprehended the question and recalled, judged, and formatted my response, but I don't think it accurately represents me, so I say something else.
Social desirability
Goals:
Private self-administration of sensitive questions
Reassuring phrases
"There are no right or wrong answers."
"We want your opinion."
"People have different opinions about this."
"Remember, your answers are confidential."
Naturalistic Research
Takes place in real-world settings
Researcher does not attempt to manipulate the phenomenon of interest
Within this broad approach, many different data collection methods can be used
Observation (direct or participatory)
Interviews
Open-ended and often situated in the setting
Analysis of artifacts and other existing content
Observation
Researcher role may be complete participant (researcher status unknown to participants), observer-as-participant, participant-as-observer, or complete observer
Pros: researcher gains first-hand experience and can record information as it occurs; unusual aspects or aspects that are "invisible" to participants can be observed; allows for exploration of sensitive topics
Cons: researcher may be seen as intrusive; some information not appropriate to report; quality of observations highly dependent on researcher's skill level; difficult to gain rapport with certain participants (e.g. children)
Interviews
May be conducted face-to-face, over the phone, or via correspondence (e.g. email); may be conducted with individual participants or groups
Pros: useful when observation is impossible; participants can provide historical information; researcher controls line of questioning
Cons: information gained is indirect and filtered; not always possible to interview in a natural setting; researcher's presence or personality may bias responses; not all people equally responsive or articulate
Artifacts and Existing Content
Can include public and private documents, photographs, videos, art objects, software, film, etc.
Pros: gives the researcher access to language / words / creative expression of respondents; can be accessed any time; no transcription necessary in some cases
Cons: May be difficult to access and interpret; materials may be incomplete, inauthentic, or inaccurate; not all participants equally articulate or creative
Ethnography
A subcategory of naturalistic research
Originally developed in anthropology as an approach to interpreting cultures and explaining how everyday events and details of experience in a particular setting and time create "webs of meaning" for members of the culture
Term is now applied to a wide variety of qualitative studies where the intent is to provide a detailed, in-depth description of everyday life and practice (thick description)
Case Studies
Research studies focused on a single case or (small) set of cases
Natural settings; no manipulation
Quantitative and/or qualitative data collection and analysis
Typically answer why and how questions; helpful for exploring, classifying, and/or generating hypotheses
Evaluating Qualitative Research
Truth-value, applicability, consistency, neutrality
Credibility
The extent to which the data adequately reflect the construct that is being studied.
Truth-Value:
How can one establish confidence in the "truth" of the findings of a particular inquiry for the subjects (respondents) with whom, and the context in which, the inquiry was carried out?
Transferability
The extent to which the results can be applied to another context.
Applicability:
How can one determine the extent to which the findings of a particular inquiry have applicability in other contexts or with other subjects (respondents)?
Dependability
The coherence of the internal process and the way the researcher accounts for changing conditions.
Consistency:
How can one determine whether the findings of an inquiry would be repeated if the inquiry were replicated with the same (or similar) subjects (respondents) in the same (or similar) context?
Confirmability
The extent to which the characteristics of the data, as posited by the researcher, can be confirmed by others who read or review the research results.
Neutrality:
How can one establish the degree to which the findings of an inquiry are determined by the subjects (respondents) and conditions of the inquiry and not by the biases, motivations, interests, or perspectives of the inquirer?
Establishing Credibility
Prolonged engagement
Persistent observation
Triangulation
Negative case analysis
Checking interpretations against raw data
Member checking
Transparency in data collection and analysis
Referential adequacy
Peer debriefing
Establishing Transferability
Use theoretical sampling
Describe context, setting and participants in detail (thick description)
Burden placed on research consumer as much as researcher
Establishing Dependability
Provide details of methods
Do stepwise replication
Constantly compare raw data with reduced data
Remain in the research context for an adequate period of time
Collect data from multiple sources
Have multiple researchers examine and analyze data
Save data for later reanalysis
Establishing Confirmability
Maintain an Audit Trail:
a transparent description of the research steps taken from the start of a research project to the development and reporting of findings.
Pragmatism
Primary concern is "what works" to solve practical problems
Theories and models judged by their fruits and consequences, not by their origins or their relations to antecedent data or facts.
Theories are instruments or tools for coping with reality
There is a reality outside the mind, but we can never fully know it.
"we would be better off if we stopped asking questions about... what is really "real" and devoted more attention to the ways of life we are choosing and living when we ask the questions we ask." (Cherryholmes, 1992)
Evidence Based Practice
Premise is that LIS professionals should base their decisions on the best possible evidence, which they can gather by:
Reading and staying current with existing research
Generating their own data by conducting local studies
EBP especially important when resources are limited
EBP helps LIS professionals be proactive instead of reactive
EBP builds the LIS research base
Action Research
A form of EBP; action research is located farther toward the "applied" end of the spectrum than DBR
Design Based Research
A form of EBP; premise: lab studies don't match the real world, but observation alone doesn't actually improve practice.
DBR focuses specifically on the design, implementation, and impact of some intervention or object (for example a new course or assignment, a piece of software, etc.)
DBR's goals include not only local action but also theory contribution and generalized design guidelines