
All notes from the last half of class

Qualitative Research Definition

The investigation of phenomena, typically in an in-depth and holistic fashion, through the collection of rich narrative materials using a flexible research design.

Characteristics of Qualitative Research Design

• Requires the researcher to become the research instrument (participant observer);
• Requires ongoing analysis of the data to formulate subsequent strategies (emergent design) and to determine when field work (data collection) is done.

Qualitative Research Traditions

• Ethnography
• Phenomenology
• Grounded Theory
• Narrative Research
• Participatory Action Research
• Mixing Worlds - Mixed Methods Research

Ethnography

Qualitative research methodology for investigating cultures - 'learn from'

Underlying assumption of ethnography

Every human group eventually evolves a culture that guides the members' view of the world

Culture with Ethnography

Culture is not tangible, it is inferred from the words, actions, and products of members of a group

Emic vs etic

Emic= insider perspective
Etic=outsider perspective

Goal of Ethnography

To uncover tacit knowledge

Phenomenological Research

Seeks to understand people's everyday life experiences and perceptions.

Why? Because truth about reality is found within these experiences

• Very small participant group (~10) and in-depth conversations
• Useful for poorly defined or understood human experiences

What does phenomenological research ask?

What is the essence of this experience or phenomenon and what does it mean?

What is essence in phenomenological research?

Essence is what makes a phenomenon what it is, the essential aspects

Grounded Theory

Inductive theory building research.

Seeks to understand the 'why' of people's actions by asking the people themselves; the researchers then 'ask' the data:

• First: What is the primary problem?
• Second: What behaviors do people use to resolve the problem?

What does grounded mean?

'Grounded' means the problem must be discovered from within the data

Simultaneous sampling, analysis, and data collection in an iterative process

Narrative research

• Researcher studies the lives of individuals through the use of 'story'
• Underlying premise is that stories are the mode for communicating and making sense of life events
• Findings are usually a 're-telling' of the overarching story

What do Narrative analysts ask?

"WHY was the story told this way?"

Participatory Action Research

• Collaborative efforts in all aspects of the research design and process
• Goal is empowerment, move to action
• Overtly stated intention (bias) to produce positive change within the community where study occurs
• Increasing popularity

Mixed Method Research

The Benefits
• Able to get the best of both worlds
• The two major traditions dovetail well
• Improves confidence in validity
• Some questions are best answered by a combination of methods

Common Applications of Mixed Methods

• Developing instruments (measurement tools)
o Usually not within one study, but a series of studies will use mixed methods
• Explication
o Quant can identify relationships
o Qual can fill in the 'why'
• Intervention Development
o What is likely to be effective?
o What might the problems be?

Population

• The entire set of individuals (or 'cases' or 'elements') in which a researcher is interested. (e.g., study of American nurses with doctoral degrees)
• Whole pie

Sampling

• The process of selecting a portion of the population (subset) to represent the entire population
• Representative sample is one whose main characteristics most closely match those of the population
• Slice of the pie

Sampling Bias

• Distortions that arise when a sample is not representative of the population from which it was drawn
• Human attributes are not homogeneous, so we need representatives of all the variety that exists
• Always think about who did NOT participate in any given study

Inclusion/exclusion Criteria

• The criteria used by a researcher to designate the specific attributes of the target population, and by which participants are selected (or not selected) for participation in a study.
• Ex: only English-speaking participants qualify
• What is necessary to qualify for the study?

Nonprobability sampling

• Nonrandom selection of participants/elements
• Less likely to produce a representative sample
• Methods:
o Convenience
o Snowball
o Quota
o Consecutive
o Purposive

Probability sampling

• Random selection of participants/elements
• Different from random assignment into groups
• Each element has an equal and independent chance of being selected into the study
• Methods:
o Simple random sampling
o Stratified
o Cluster
o Systematic

Convenience Sampling

• Selection of the most readily available people as participants in a study
• Very common method
• High risk of bias/weakest form
• Why? The sample may be atypical
• Example: Nurse distributing questionnaires about vitamin use to first 100 contacts

Quota Sampling

• The nonrandom selection of participants in which the researcher pre-specifies characteristics of the sample's subpopulations (or strata), to increase its representativeness
• Convenience sampling methods are used, ensuring an appropriate number of cases from each stratum
• Improvement over strict convenience sampling, but still prone to bias

Consecutive sampling

• 'Rolling enrollment' of ALL people over a period of time (the longer the period, the better)
• Reduces risk of bias, but is not always practical or relevant to the study question

Purposive Sampling

Qual and Quan

• Hand-picking the sample, e.g., hand-picking professionals who are knowledgeable about what you are studying
• Judgment sampling
• Who will be most knowledgeable, most typical?
• Limited use in quantitative research
• Valuable in qualitative research

Snowball Sampling

Quan and Qual

• The selection of participants by means of referrals from earlier participants; also referred to as network sampling
• Helps identify difficult to find participants
• Has limitations...

Qualitative sampling:
Theoretical Sampling

• Qualitative sampling method
• Members are selected based on emerging findings/theory
• Aim to discover untapped categories and properties, and to ensure adequate representation and full variation of important themes
• Nonprobability

Probability Sampling:
Random Sampling

• The selection of a sample such that each member of a population has an equal probability of being included
• Free from researcher bias
• Rarely feasible, especially when you have a large population

Stratified Random Sampling

• The random selection of study participants from two or more strata of the population independently
• Improves representativeness
• Sometimes not possible, when the stratifying information is unavailable
• Similar to quota sampling, but with random selection

Cluster Sampling

• Successive random sampling of units
• Large groupings ("clusters") are randomly selected first (e.g., nursing schools), then smaller elements are randomly selected (e.g., nursing students)
• Practical for large or widely dispersed populations
• Used by census surveyors

Systematic Random Sampling

• The selection of study participants such that every kth (e.g., every tenth) person (or element) is selected
• In practice, treated as essentially equivalent to simple random sampling (the probability methods are contrasted in the sketch below)
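
A minimal Python sketch contrasting three of the probability methods above; the population of 1,000 hypothetical nurses and the RN/APRN strata are invented for illustration.

import random

population = [f"nurse_{i}" for i in range(1000)]

# Simple random sampling: each element has an equal, independent chance
simple = random.sample(population, k=100)

# Systematic sampling: random start, then every kth element
k = len(population) // 100              # sampling interval (every 10th here)
start = random.randrange(k)
systematic = population[start::k]

# Stratified sampling: sample independently within each stratum
strata = {"RN": population[:700], "APRN": population[700:]}
stratified = []
for name, members in strata.items():
    n = round(100 * len(members) / len(population))   # proportional allocation
    stratified.extend(random.sample(members, n))

print(len(simple), len(systematic), len(stratified))  # 100 100 100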

Sample Size & Power Analysis

• Sampling error = the difference between population values (average responses) and sample values (average responses)
• What could have been vs what actually was
• So, the larger the sample, the smaller the sampling error (increasing the sample size brings the sample average closer to the population average)
• The larger the sample, the more representative it is likely to be
• POWER ANALYSIS: procedure for estimating how large a sample should be. The smaller the predicted differences between groups, the greater the sample size will need to be (see the sketch below)
• Done pre-sampling
• Sampling plan is an important area to critique and evaluate
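
A minimal sketch of an a priori power analysis using the statsmodels library (assumed available); the planning effect sizes are invented, standard Cohen benchmarks.

from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Medium expected difference (d = 0.5), alpha = .05, power = .80
n_medium = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(round(n_medium))   # ~64 participants per group

# Small expected difference (d = 0.2): a much larger sample is needed
n_small = analysis.solve_power(effect_size=0.2, alpha=0.05, power=0.80)
print(round(n_small))    # ~393 participants per group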

Data Collection Techniques

Qualitative:
• Interviews
o Unstructured
o Semi-structured
• Focus groups
• Life histories/diaries
• Records or other forms of documents
• Observation

Quantitative:
• Use of an instrument or scale
• Oral interview
• Written questionnaire
• Observation
• Biophysiologic measurement

Primary question type

• Open-ended Question
o Useful for capturing what the participant prioritizes
o Takes more time
o Participants may be unwilling to thoroughly address the question
o Possibility of collecting rich responses
• Close-ended Question
o More difficult to develop
o Require participants to 'box themselves in'
o Easier to analyze and compare responses
o More efficient
o May lack depth
o May miss important responses (can't ask about what you don't know about)
o Respondents must pick from a fixed set of responses, which may not accurately capture how they feel and may weaken the validity of the test

Scales (Quantitative)

• Scales provide a way to capture gradations in the strength or intensity of individual characteristics in numerical form
• Voted "Most likely to be seen": Likert Scale
• Watch for RESPONSE SET BIASES:
o Social desirability response set bias - answering in a manner that is consistent with the 'norm' (answering the way the researcher might want me to answer)
o Extreme response set bias - tendency to consistently mark extremes of response
o Acquiescence response set bias - the 'yea-sayers' and the 'nay-sayers' = agreeing or disagreeing with the statements regardless of the content
• Can seek to reduce the risk of these biases: sensitively worded questions, facilitating an open atmosphere, guaranteeing confidentiality, alternating + and - worded statements
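
A minimal Python sketch of one of these strategies - reverse-scoring negatively worded Likert items before summing a scale score; the items and responses are invented.

responses = {"item1": 5, "item2": 2, "item3": 4, "item4": 1}
negatively_worded = {"item2", "item4"}   # hypothetical reverse-keyed items

def score(item, value, scale_max=5, scale_min=1):
    # Flip reverse-keyed items so a high score always means more of the trait
    if item in negatively_worded:
        return scale_max + scale_min - value
    return value

total = sum(score(item, v) for item, v in responses.items())
print(total)   # 5 + (6-2) + 4 + (6-1) = 18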

Issue of validity

o Is the topic at hand likely to tempt respondents to present themselves in the best light?
o Are they being asked to reveal potentially undesirable traits?
o Should we trust that they actually feel/act the way they say they do?

Observation

• Sometimes fits better than self-report, depending on the question and population (e.g., behavior of autistic children)
• Again, biases and other issues come into play:
o Observer bias leading to faulty inference or description (intra and inter-rater reliability)
o Validity questions - am I just seeing what I want to see, or what I thought beforehand I would see?
o Hasty decisions may result in incorrect classifications or ratings
o Observer 'drift'
• When possible, these issues are mitigated through thorough observer training, breaks, re-training, well-planned timing of observations

What is Measurement?

• Measurement involves rules for assigning numbers to qualities of objects in order to designate the quantity of the attribute
• We're familiar with the 'rules' for measuring temp, weight, etc.
• Rules are also developed for measuring variables/attributes for nursing studies

Advantage of Measurement

• Enhances objectivity = that which can be independently verified (2 nurses weighing the same baby using the same scale)
• Fosters precision ("rather tall" versus 6'2") and allows for making fine distinctions between people
• Measurement is a language of communication that carries no connotations

Levels of measurement:
Nominal-scale

• The lowest level of measurement that involves the assignment of characteristics into categories
o Females - category 1
o Males - category 2
• The number assigned to the category has no inherent meaning (the numbers are interchangeable)
• Useful for collecting frequencies

Levels of Measurement:
Ordinal-scale

A level of measurement that ranks, in 'order' (1,2,3,4) the intensity or quality of a variable along some dimension.
1 = is completely dependent
2 = needs another person's assistance
3 = needs mechanical assistance
4 = is completely independent
• Does not define how much greater one rank is than another (no relative value given)

Level of Measurement:
Interval-scale

A level of measurement in which an attribute of a variable is rank ordered on a scale that has equal distances between points on the scale
Example: SAT scores - 550, 500, 450 (equal 50-point intervals)
• Likert scales and most other questionnaires fall here
• The differences between scores are meaningful
• Amenable to sophisticated statistics

Ratio-scale measurement

• A level of measurement in which there are equal distances between score units, and that has a true meaningful zero point.
o Weight - 200 lbs is twice as much as 100 lbs.
o Visual analog scale - 'No pain' is a true zero
• Higher levels of measurement are preferred because more powerful statistics can be used to analyze the information
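
A minimal Python sketch pairing each level of measurement with the kind of statistic it supports; all values are invented.

from statistics import mode, median, mean

gender_codes = [1, 2, 2, 1, 2]        # nominal: frequencies/mode only
mobility_ranks = [1, 3, 2, 4, 2]      # ordinal: ranks/median
sat_scores = [450, 500, 550]          # interval: equal distances, means
weights_lbs = [100, 150, 200]         # ratio: true zero, ratios

print(mode(gender_codes))             # most frequent category (label only)
print(median(mobility_ranks))         # middle rank
print(mean(sat_scores))               # 500; 550-500 equals 500-450
print(weights_lbs[2] / weights_lbs[0])  # 2.0 -> "twice as much" is meaningful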

Errors of Measurement

• Values and scores from even the best measuring instruments have a certain amount of error - some of it random and varied
• Obtained score = True score + Error
o "Obtained score" is the score for one participant on the scale/questionnaire
o "True score" is what the score would be IF the measure/instrument could be infallible.
o "Error" can be both random/varied (we just have to deal with this) and systematic (bad)

Factors related to Errors of Measurement = BIAS

• Situational contaminants
• Response set biases
• Transitory personal factors
• Administration variations
• Item sampling - which items are on the test, and do they capture the attribute of interest?

Reliability of Measuring Instruments

Reliability: The consistency and accuracy with which an instrument measures the attribute it is designed to measure
A reliable instrument is close to the true score; it minimizes error

Test-Retest Reliability:

Assesses the stability of an instrument by giving the same test to the same sample twice, then comparing the scores
• Gets at the question of time-related factors that may introduce error.
• Only appropriate for those characteristics that don't change much over time
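
A minimal sketch of test-retest reliability as the correlation between two administrations of the same instrument to the same sample (statistics.correlation requires Python 3.10+); the scores are invented.

from statistics import correlation

time1 = [12, 18, 25, 30, 22, 15]
time2 = [13, 17, 26, 29, 24, 14]

# A coefficient near +1.00 suggests a stable (reliable) instrument
print(round(correlation(time1, time2), 2))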

Internal Consistency:

The degree to which the subparts (each item) of an instrument are all measuring the same attribute or trait
• Cronbach's alpha is a reliability index that estimates the internal consistency or homogeneity of an instrument (the closer to +1.00, the more internally consistent the instrument)
• Best means of assessing the sampling of items
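
A minimal from-scratch Cronbach's alpha using its standard formula, alpha = k/(k-1) x (1 - sum of item variances / variance of total scores); the participant-by-item scores are invented.

from statistics import variance

# Rows = participants, columns = the k items on the instrument
items = [
    [4, 5, 4, 5],
    [2, 3, 2, 2],
    [5, 5, 4, 4],
    [3, 3, 3, 4],
    [1, 2, 1, 2],
]
k = len(items[0])
item_vars = [variance(col) for col in zip(*items)]
total_var = variance([sum(row) for row in items])

alpha = k / (k - 1) * (1 - sum(item_vars) / total_var)
print(round(alpha, 2))   # the closer to +1.00, the more internally consistent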

Interrater Reliability:

the degree to which two raters or observers, operating independently, assign the same values for an attribute being measured or observed. The more congruence, the more accurate/reliable the instrument
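
A minimal sketch of interrater reliability using Cohen's kappa from scikit-learn (assumed available); the two raters' independent classifications of the same ten observations are invented.

from sklearn.metrics import cohen_kappa_score

rater_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
rater_b = [1, 0, 1, 0, 0, 1, 0, 0, 1, 1]

# Kappa corrects raw agreement for chance; 1.0 = perfect agreement (0.8 here)
print(cohen_kappa_score(rater_a, rater_b))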

Validity -

the degree to which an instrument measures what it is intended to measure.
o You can have reliability without validity, but you can't have validity without reliability

Content Validity

The degree to which the items in an instrument adequately cover the whole of the content of interest
• Usually evaluated by a panel of experts in the content area

Criterion-Related Validity

The degree to which scores on an instrument are correlated with an external criterion
• Is there a clearly established criterion?
• Simulation = accurate reflection of nursing skill. If a written test attempts to capture the info in a simulation, the simulation becomes the criterion by which validity can be tested

Construct Validity

The degree to which an instrument measures the construct under investigation. What exactly is being measured? Could it be something other than what it looks like?

Descriptive Statistics

Synthesize and describe the data set
• Example - what is the average weight loss of patients with cancer?
• Provide foundational information and theory for inferential statistics
• Helps you assess the representativeness of the sample
• These are valuable

Inferential Statistics

• Provide a means for drawing conclusions about a population, given the data from a sample
• Based on the laws of probability
• Allows objective criteria for hypothesis testing

Research hypothesis:

Patients exposed to the film will breastfeed longer than those who do not see the film. (States which direction the findings are expected to go.)

Null hypothesis:

There is no difference in breastfeeding length between the two groups: 1) seeing film 2) not seeing film
• Our goal: rejection of the null hypothesis, because we cannot directly demonstrate that the research hypothesis is correct

p value

p values tell you whether the results are likely to be real
• Simply means that the results are not likely to be attributed to a chance occurrence
• In a study, 'significance' refers to an investigator's hypothesis being supported
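
A minimal sketch of testing the null hypothesis above with an independent-samples t-test in scipy (assumed available); the breastfeeding durations, in weeks, are invented.

from scipy.stats import ttest_ind

film_group = [14, 18, 20, 16, 22, 19, 17]
no_film_group = [10, 12, 15, 11, 14, 13, 12]

t_stat, p_value = ttest_ind(film_group, no_film_group)
if p_value < 0.05:
    print(f"p = {p_value:.3f}: reject the null hypothesis")
else:
    print(f"p = {p_value:.3f}: fail to reject the null hypothesis")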

Effect size analysis:

Conveys the estimated magnitude of a relationship without making any statement about whether the apparent relationship in the data reflects a true relationship
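
A minimal from-scratch Cohen's d, a common effect-size index for the difference between two group means; it reuses the invented film/no-film data from the t-test sketch above.

from statistics import mean, variance

def cohens_d(a, b):
    # Difference in means divided by the pooled standard deviation
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / pooled_var ** 0.5

film_group = [14, 18, 20, 16, 22, 19, 17]
no_film_group = [10, 12, 15, 11, 14, 13, 12]

# Conveys magnitude only - it says nothing about statistical significance
print(round(cohens_d(film_group, no_film_group), 2))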

Discussion Section

• What's here? Interpretation of study findings
• Requires making multiple inferences
• Inference: use of logical reasoning to draw conclusions based on limited information
• Can we really make valid inferences based on 'stand-ins'? Yes, if we use rigorous design
• Investigators are often indirect (at best) in addressing issues of validity - you must be the judge

Assessing good research design
The main question:

To what degree did the investigators provide reliable and valid evidence?
• Investigator's primary design goal - control confounding variables

Confounding variable:

an extraneous, often unknown variable, that correlates (positively or negatively) with both the dependent variable and the independent variable.
• Confounding is a major threat to the validity of inferences made about cause and effect (internal validity)

Intrinsic (internal)

• Come with the research subjects
• These are factors that are simply characteristics of the individual subject
• Example: Physical activity intervention to improve CV function in LTC patients
o Age
o Gender
o Smoking hx
o Physical activity hx
• All are extraneous variables, and all likely related to the outcome (dependent) variable
• Associated with the research subject

Extrinsic (external)

• Are part of the research situation
• Result in 'situational contaminants'
• If not sufficiently addressed, these factors raise question about whether something in the study context influenced the results
Associated with the research situation or context (situational contaminants) rather than with the subjects themselves

Controlling extrinsic factors
Goal -

create study and data collection conditions that don't change from participant to participant
• What does this look like in a study?
o All data collected in the same setting
o All data collected at the same time of day
o Data collectors use a formal script, are trained in delivery of any verbal communication
o Intervention protocols are very specific

Controlling Intrinsic Factors
1. Random assignment into groups:

goal is to have groups that are equal with respect to ALL confounding variables, not just the ones we know about
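
A minimal sketch of random assignment (distinct from random sampling: the sample is already recruited, and chance decides who gets the intervention); participant IDs are invented.

import random

participants = [f"p{i}" for i in range(20)]
random.shuffle(participants)   # chance, not the researcher, forms the groups

treatment = participants[:10]
control = participants[10:]
print(len(treatment), len(control))   # 10 10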

Controlling Intrinsic Factors 2. Homogeneity:

limits confounders by including only people who are 'the same' on the confounding variable
• Shows up in exclusion/inclusion criteria
• Limits generalizability
Example: If gender is a confounder, sample only men

Controlling Intrinsic Factors 3. Matching:

Researcher uses information about each individual and attempts to match them with a corresponding individual, creating comparable groups. Has practical limitations...

Controlling Intrinsic Factors 4. Statistical controls:

Use of statistical tests to control for confounding variables (e.g., ANCOVA)
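
A minimal sketch of ANCOVA-style statistical control as a regression in statsmodels (assumed available); the confounder (a baseline score) enters the model as a covariate alongside group, and all data are invented.

import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "group":    ["tx", "tx", "tx", "ctl", "ctl", "ctl"],
    "baseline": [50, 55, 60, 52, 58, 61],    # confounding variable
    "outcome":  [68, 72, 75, 60, 66, 70],
})

# Outcome modeled on group while adjusting for baseline
model = smf.ols("outcome ~ C(group) + baseline", data=df).fit()
print(model.params)   # the group coefficient is the baseline-adjusted effect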

Construct Validity

• The degree to which the particulars of the study (the sample, the settings, the treatments, the instruments) are accurate representations of the higher-order constructs they are meant to represent
• If there are errors here, results can be misleading
• Most practically comes up in terms of the validity of tools (measurement instruments)

Statistical Conclusion Validity

• The degree to which the selected statistical tests for any given study accurately detect true relationships.
• Statistical power: the ability of the research design to detect true relationships. Achieved primarily through sample size, based on a power analysis
• Not all reports will tell you what the necessary sample size was determined to be...

Internal Validity

• How possible is it to make an inference that the independent variable is truly the causal factor?
• Any 'competing explanation' for the cause = threat to internal validity

Internal Validity 1. Selection

Any pre-existing, often unknown, differences between groups (is a risk for any non-randomly assigned groups)

Internal Validity 2. History -

Events occurring at the same time as the study that may impact outcomes (flu shot example)

Internal Validity 3. Maturation

Those effects that occur simply because time has passed (wound healing, growth, post op recovery)

Internal Validity 4. Mortality/Attrition

Who dropped out? Who stayed in? Are previously equivalent groups no longer equivalent? A dropout rate of 20% or more is of concern
• Look at the groups at the end and determine whether the comparison is still valid; attrition greater than 20% can invalidate the study
• This is one reason we do pilot studies

External Validity

• Addresses how well the relationships that are observed in a study hold up in the real world
• Tied to generalizability
• Two main design pieces:
o How representative is the sample?
o Replication prospects (answered through multi-site studies or systematic reviews)
• Do the same findings hold true in a variety of settings or in diverse sample groups?
Recognize that we are always looking at a sample and trying to apply the findings to society; with a good sample group, you can generalize

EBP and Critical thinking

• Requires a questioning approach, a 'why?' approach
• Willingness to challenge the status quo
• Asking - what is the evidence that suggests that what I'm doing is the best thing?
• Investigator carrying out the research must do a similar query: What evidence is there that the results are true?
• Ok to maintain a skeptic's attitude until the accuracy of evidence is evaluated

Managing Qualitative Data
REDUCTIONIST in nature

A 4-hour interview leaves hours of data to look through, so researchers transcribe it - reducing it to meaningful information (hence 'reductionist')

• Transcribing - is it accurate and valid?
o Researchers typically do their own transcribing; it is the researcher's responsibility to ensure the transcription is accurate
o Overall goal is to get to know your data
• "Immersing" oneself in the data - getting to know your own data ("drowning in data")
• Maintaining files: computers vs. manually cutting out hard copies, filing them, and developing an organizing system

Managing Qualitative Data
Developing a Category Scheme (template)

Look at the data first, then prioritize - themes or categories begin to develop only after you have reviewed the data
• You can't make the data fit predetermined categories; they need to develop as you go through the data
o Example: participation in a walking program - concrete/descriptive
o Asking questions of the data through 'constant comparison':
• What's going on? What is this person representing?
• What does this stand for? What does it mean?
• What else is like this? Who else is like this?
• What is this distinct from? What is different?
o Example: anniversary of birth trauma - abstract/conceptual
o How moms coped with it (people can develop PTSD symptoms related to a traumatic birth)
o The themes that began to emerge were the prologue to the anniversary, the day of, the day after, and looking ahead to future anniversaries
o Then seeing where each category fit in

Coding Qualitative Data

• An opportunity for refining the category scheme
• You can relook at the data and may see a new category; some categories may be removed or renamed. Then you go back to step one and see how the transcripts fit the new category scheme (refining). You go back and forth and revisit the data often; it is not a linear process
• A complex process - the risk of error and sloppiness is high because of the complexity and analysis fatigue

Analytic Procedures (conceptual process)
CONSTRUCTIONIST in nature

Taking the codes and piecing them back together, making a new, integrated whole

Search for 'themes':

commonalities within the data that bring meaning to the experience under study
o What are the relationships within and among the identified themes?

Iterative process:

initial themes are identified, then the analyst returns to the data with those themes in mind, asking, "does this fit?" A refining and clarifying process... a circular process of looking at the data, stepping back to identify themes, returning to previously determined themes, checking themes with other participants, then going back again
• Seeing if it fits, whether it is accurate, and whether others find similar themes and connections

Analytic Procedures
• Validation of findings:

aim is to minimize bias associated with analysis by only one researcher
• Think about validity by rechecking the work, making sure interpretations are actually about the data and not about biases the researcher brought in. Researchers should acknowledge their biases and make them work with, rather than against, the analysis. It is the consumer's responsibility to recognize those biases and judge whether they got in the way
• There is a risk of getting only one person's perspective; the researcher needs to work to mitigate that risk

Analytic Procedures
• Integration:

developing an overall structure - either a theory, conceptual map, or overarching description
• This integration piece is the 'so what?' piece
• Look at the end of the research and ask "so what?" The researcher needs to do a good job of linking the study to practice, and the "so what" needs to be focused so people can see how to utilize the research; if they can't, the question may need to be re-asked or reworded...
• It is difficult to get qualitative research funded, and that has a bearing on the write-up and on why the researchers argue it is important and relevant
• Goal is to develop an overall perspective or idea that is useful at the bedside
• Findings take the form of a conceptual map, a concept analysis, or an overarching description - moving inductively from what these participants experienced

The Validity Debate

• A controversial term in qualitative circles
• Does the term 'validity', defined as "the quality of being sound and well-founded", apply to qualitative research?
• Current conclusion on the debate is to agree to approach validity from a 'parallel perspective'
• The terms TRUSTWORTHINESS and INTEGRITY are parallel to reliability and validity in quantitative research
• Overall, there is agreement about the need for high-quality research and standards by which to determine what is 'high-quality'

Criteria for Trustworthiness
Lincoln and Guba (1985)

credibility, dependability, confirmability, transferability, authenticity

Credibility:

having confidence in the truth of the data and the interpretations (parallel to validity)

Dependability:

the interpretations hold true over time and over conditions (parallel to reliability)

Confirmability:

interpretations can be independently agreed upon (parallel to objectivity or neutrality)

Transferability:

the degree to which the interpretations have application in other settings (parallel to generalizability) This is not the job of the researcher, but of the consumer

Authenticity:

the degree to which researchers show the range of experiences within their data so that readers develop a sensitivity through rich contextual descriptions

Strategies to Enhance Quality
(and reduce threats to integrity/trustworthiness)

• Prolonged Engagement (scope)
• Persistent Observation (depth)
• Triangulation - means of avoiding single-method, single-observer biases
o Data: time, space, person
o Method: data sources
o Investigator: collaborative analysis
• Audit (decision) trails and Inquiry Audits
• Peer Review and Debriefing
• Thick description- providing sufficient detail to allow the reader to make judgments about the study's credibility

Evaluating Qualitative Findings

• Credibility (believability) - have you been convinced of the fit between the data and the interpretations presented?
• Close scrutiny - is there evidence that alternative interpretations were considered? Are limitations and their effects discussed?
• Importance - is there new understanding, new insight that is presented or does it seem common sense?

Systematic Review:

Systematic reviews are the foundation of evidence-based practice

a rigorous synthesis of research findings on a specific research question. Can be:
Narrative - AKA "Literature Review"
Statistical - AKA "Meta-Analysis"

Meta Analysis

is a quantitative and statistical approach to combining findings across studies that are reasonably similar
Every meta-analysis is a systematic review, but not every systematic review is a meta-analysis

• Main goal is to objectively integrate (synthesize) all the individual study findings on a specific topic
• Essentially use all the activities in an individual study EXCEPT what?
• Collecting original data

Transforming individual test statistics back to standard deviation units through the common metric of effect size. This allows a look at whether there is a relationship between the variables AND estimates the magnitude of the relationship
**Just need a general understanding of this: individual results are converted back to a standard-deviation unit called an effect size, which lets findings be compared and combined across studies, as in the sketch below**
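
A minimal sketch of the general idea: invented per-study effect sizes (Cohen's d) are weighted by the inverse of their variances and combined into one pooled, fixed-effect estimate.

# (effect size d, variance of d) for each included study - values invented
studies = [
    (0.40, 0.04),
    (0.25, 0.02),
    (0.55, 0.09),
]

weights = [1 / var for _, var in studies]
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
print(round(pooled, 2))   # estimated magnitude of the overall relationship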

Meta Analysis
THE FILE DRAWER PROBLEM

AKA publication bias
Non-significant findings often go unpublished: researchers may not want to take the time to write them up and may abandon a study when no significant results occur. It can also happen on the publication side, when a manuscript is sent back and not wanted by the journal - "stuffed in the file drawer" - and we don't know how many there are...

• Research community tends NOT to publish non-significant findings, leading to the risk of an 'overestimation of effect'
• To mitigate this bias, an estimate is often made of how many non-significant studies it would take to reverse the conclusion of the meta-analysis; researchers set that threshold to gauge publication bias (see the sketch below)
• An attempt is being made to address this issue by creating an environment in which all studies, significant or not, can be published
• The old justification was space limitation, but that is no longer the case
• Check whether the issue is addressed in the meta-analysis
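
A minimal sketch of one classic version of that estimate, Rosenthal's fail-safe N; the per-study Z scores are invented.

# How many unpublished null-result studies would it take to drag the
# combined result below significance?
z_scores = [2.1, 1.8, 2.5, 1.9]   # one Z per included study
k = len(z_scores)
z_crit = 1.645                     # one-tailed alpha = .05

fail_safe_n = (sum(z_scores) ** 2) / z_crit ** 2 - k
print(round(fail_safe_n))   # a large N suggests the conclusion is robust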
