Clinical Research Exam 1
Terms in this set (127)
1. Basic science research
2. Translational research
3. Clinical research
Basic science research
conducted in lab, not on people
Translational research
prevent disease in humans
Clinical research
- patient oriented
- health services
PICO
Patient, Intervention, Comparison, Outcome
Drug Approval Process
Investigational New Drug (IND)
A drug not yet approved for marketing by the FDA but available for use in experiments to determine its safety and efficacy.
Phase I (early human)
small sample size, healthy volunteers
Phase II (Early efficacy)
Determine efficacy and safety
- patients with disease
- randomized and double-blinded
Phase III (late efficacy)
Many patients with disease
- identify side effects
New Drug Application (NDA)
If there is sufficient data to demonstrate that a drug is safe and effective, the company submits an NDA as a formal request that the FDA approve it for marketing.
Phase IV (postmarketing studies)
focused on safety issues
higher exposure to drug, better idea of side effects
- compares cost/benefit
- compares treatment options
Belmont Report
ethical principles and guidelines for the protection of human subjects of research
Belmont report 3 ethical principles
1. Respect for persons
2. Beneficence
3. Justice
Respect for persons → informed consent
- not needed if you can document minimal risk
Beneficence
- show acceptable risk-benefit ratio
Institutional Review Board (IRB)
A committee at each institution where research is conducted to review every experiment for ethics and methodology.
at the beginning
Research exempt from IRB review
1. Surveys, interviews, or observations
2. Studies of existing records, data, or specimens
3. Research on normal education practices
Research misconduct
fabrication, falsification, or plagiarism
Conflicts of interest
1. Dual role for investigator and clinician
- leads to bias
2. Financial conflicts
- need to be transparent
Responding to conflicts
establish a DSMB
Data and Safety Monitoring Board
- independent group
- ensure the safety and welfare of subjects by monitoring data for safety concerns, etc
is it ethical?
Research protocol
details each procedural step of the research design
Components of a Research Protocol
1. Research Question
2. Background and significance
3. Study design
4. Study subjects
5. Variables to be measured
Research question
objective of the study
background and significance
context and rationale
1. Observational study
- case report/series
- retrospective/prospective cohort
2. Experimental study (clinical trial)
Inclusion criteria
characteristics that the prospective subjects must have if they are to be included in the study
Exclusion criteria
Characteristics that eliminate a potential subject from the study to avoid extraneous effects
predictor variable (independent variable)
Observational: used to identify predictors for disease
- ex. age, sex, race
- ex. aspirin, sodium bicarb
Outcome variable (dependent variable)
- ex. MI, death, AKI
Confounding variable
extraneous factor that interferes with the action of the independent variable on the outcome
- Randomization and a larger sample size minimize this!
Null hypothesis
the hypothesis that there is no significant difference between predictors and outcome
Hypothesis
related to the research question
- statement of effect
Goal of a study
results that can be generalized to a certain population
- optimize validity!
- minimize errors
Internal validity
Is the outcome caused by the intervention/predictors?
External validity
extent to which we can generalize findings to real-world settings (all patients)
Random error
results due to chance
- reduced by increasing sample size
Systematic error (bias)
results due to bias
- reduced ONLY by minimizing bias
Convenience sampling
using a sample of people who are readily available to participate
- may cause sampling bias
random sampling (GOLD STANDARD)
simple random sampling
A sampling procedure in which each member of the population has an equal chance of being included in the sample.
stratified random sampling
Population divided into subgroups (strata) and random samples taken from each strata
- identifies particular demographic categories of interest, like minorities
- makes sure everyone is represented
Goals of recruitment
1. minimize nonresponders
2. enroll enough subjects
Simple hypothesis
Expresses a predicted relationship between one independent variable and one dependent variable
Complex hypothesis
More than one independent variable and one dependent variable
Directional hypothesis
Direction is known.
- Ex: anxiety is higher in pharmacy students
Nondirectional hypothesis
Direction is not stated, only an association noted.
- Ex: there is an association between being a pharmacy major and stress levels.
Type I error
State that there is a difference when there is not.
- Reject a null that was true
- Occurs due to chance
Type II error
Say there is no difference when there is one.
- Accept a null hypothesis that is not true.
- Occurs usually due to small sample size.
Effect size
describes the strength of an association
- larger differences = smaller sample size needed
Alpha (α)
Probability of committing a type I error
- use an alpha of 0.05 = 5% chance of incorrectly rejecting the null when it is actually true
- Closely related to the p-value
Beta (β)
Probability of committing a type II error
- use 0.1 to 0.2 = there is a 20% chance of not detecting a real difference.
- use a lower beta to avoid a type II error
The ability to detect a difference between groups (Power=1-beta)
- normal power is 0.8 or 80% ability to detect a difference between groups.
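The α/β/power cards above can be made concrete with a small numeric sketch. This is illustrative only (not from the card set): it approximates the power of a two-sided two-proportion z-test at α = 0.05 using the normal approximation; the function names and example event rates are assumptions.

```python
import math

def normal_cdf(z):
    # Standard normal CDF via the error function (stdlib only)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def power_two_proportions(p1, p2, n_per_group):
    """Approximate power to detect p1 vs p2 (two-sided alpha = 0.05).

    Normal-approximation formula; n_per_group is the number of
    subjects in each arm.
    """
    z_alpha = 1.96  # two-sided critical value for alpha = 0.05
    p_bar = (p1 + p2) / 2.0
    se_null = math.sqrt(2.0 * p_bar * (1.0 - p_bar) / n_per_group)
    se_alt = math.sqrt(p1 * (1 - p1) / n_per_group + p2 * (1 - p2) / n_per_group)
    z = (abs(p1 - p2) - z_alpha * se_null) / se_alt
    return normal_cdf(z)

# Same 10% vs 5% difference in event rate: a bigger sample gives
# more power, i.e. a smaller beta (power = 1 - beta)
low_n = power_two_proportions(0.10, 0.05, 100)
high_n = power_two_proportions(0.10, 0.05, 1000)
```

With 100 per arm this sketch gives power well under the conventional 0.8, while 1000 per arm exceeds it, matching the card's point that type II errors usually come from small sample sizes.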
p-value
Probability that the result is due to chance. If p < 0.05, the null hypothesis is rejected; if p > 0.05, the null hypothesis is accepted (result attributed to CHANCE)
Case report
Detailed published report on an individual patient. (anecdotal evidence only)
- Includes observations, usually something unique or unexpected.
- non-study design
Case series
Similar to a case report; tracks a very small number of patients with an observation of interest.
- Can be confounded by selection bias (limits external validity)
- Non-study design
Descriptive study
- just describes trends
Analytic study
- attempts to show causality
cohort study design
Can be prospective (outcome has not occurred yet) or retrospective (outcome has already occurred)
prospective cohort study structure
Investigator chooses a group of subjects and measures characteristics that might predict subsequent outcomes.
- periodic measurements of outcomes.
- Can be one group or multiple (following smokers v. following smokers and non-smokers)
Strengths of Prospective Cohort Study
measure predictor variables completely
minimizes recall bias
Incidence
the proportion of people who develop a disease over time
weaknesses of prospective cohort study
expensive and inefficient
smaller sample size
causal inference is challenging because of confounding variables
retrospective cohort study structure
all measurements, follow-up, and outcomes are from the past
- similar to prospective
Strengths/weaknesses of a retrospective cohort study
S: less cost and less time-consuming
W: existing data may not be complete
Cross-sectional study design
- "slice" of time
- Describes variables, identifies prevalence.
- Often used as a starting point for a cohort study.
Prevalence
The proportion who have a disease at a given time.
Strengths/weaknesses of cross-sectional study
S: fast, inexpensive, no loss to follow-up
W: can't establish causation, difficult with rare disease states.
Case-control study design
Observational (generally retrospective)
- Grouping is by outcome of interest, looking for predictors
- strength of association using odds ratio
Strengths of case-control studies
- good for examining rare outcomes/diseases
- good for generating hypothesis
Weaknesses of case-control studies
- can't determine incidence/prevalence
- only one outcome can be studied
- susceptible to sampling bias
- bias due to measurement error = recall bias
Ways to avoid Sampling bias
Matching: compare people on same baseline
Explanations for associations
1. NOT real, due to chance alone
2. NOT real, due to bias
3. REAL, but opposite
- effect → cause
4. REAL, due to confounding variable
5. REAL, cause → effect
Coping with confounding variables
1. Design phase
- specification (exclusion variables)
2. Analysis phase
Evidence favoring causality
- consistency of research
- strength of association
- direct correlations
- biological plausibility
Investigator applies a treatment (intervention) and observes an outcome
- Best design to know causation
- Classic design is random blinded trial
Strengths of Clinical Trials
potential to demonstrate causality, randomization to intervention is possible, blinding is easier
Weaknesses of Clinical Trials
expensive, time consuming, may put patient at risk, addresses a narrow question, not feasible for many questions.
Steps in a randomized-clinical trial
1. Select a sample from population
2. Measure baseline variables
3. Randomize participants
4. Apply intervention
5. Follow cohorts
6. Measure outcome variables
7. Analyze results
Selecting a sample for clinical trials
Include: subjects with high risk for an outcome, subjects likely to have the greatest effect from treatment
Exclude: treatment would be harmful, treatment is unlikely to be effective, subject unlikely to adhere to treatment, subject unlikely to follow up, any practical problems, unethical to enroll
Purpose of randomization
Theoretically eliminates influence of confounding variables before the start of the study
Allocation concealment
researchers are unaware of which group patients are being assigned to at the time they enter the study
- ex. opaque, sealed envelopes
Blocked randomization
equal distribution between treatment and control groups
stratified block randomization
divides cohort into groups based on important predictor variables
Purpose of blinding
Eliminates influence of confounding or bias after the start of the study
Clinical outcomes
medical events that occur as a result of disease or treatment
- ex. symptoms, death, MI
Surrogate outcome
An indirect outcome (such as a physiological measure)
- ex. Cholesterol level, pH, BP, HgbA1c
Intention to treat analysis
compares outcomes between groups regardless of what happened to patients (drop-out, death)
per protocol analysis
A comparison of treatment groups that includes only those patients who completed the treatment originally allocated.
- more reflective of effect
Non-inferiority trial
A randomized clinical trial designed to establish that the drug of interest is not inferior to the gold-standard therapy.
- NEW drugs
goal: make sure new drug is NOT worse
Non-randomized clinical trial
absence of randomization
Each person is given treatment and compared to their own baseline w/o treatment
- Individuals are their own control
Summary of study designs
study of studies
does NOT reduce bias
(garbage in = garbage out)
- evidence based medicine
study that analyzes a large number of other studies
- NO definitive results
Why are systematic reviews important?
1. ↑ sample size = ↓ results due to chance
2. Bias (publication bias/ reporting bias/ academic bias)
3. Heterogeneity = how applicable is it = ↑ I²
↑ I² = something other than chance affecting the outcome (close to 100%)
- 0% = ALL due to chance
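As a worked sketch of the I² idea (illustrative; the function name is an assumption), Higgins' I² can be computed from Cochran's Q statistic and its degrees of freedom (number of studies minus 1):

```python
def i_squared(q, df):
    """Percent of between-study variability beyond chance.

    I^2 = (Q - df) / Q * 100, floored at 0%.
    """
    if q <= 0:
        return 0.0
    return max(0.0, (q - df) / q) * 100.0

# Q twice its df -> half the variability is beyond chance
moderate = i_squared(20, 10)
# Q at or below df -> 0%, consistent with "ALL due to chance"
none = i_squared(5, 10)
```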
Forest plots of meta-analytic findings
look at distribution of lines: (same side?)
confidence interval: (narrow or wide?)
diamond (summary): (on one side?)
I²: (big or small?)
# events: (small or large?)
Confidence Interval (CI)
indicates a range in which the population mean is believed to be found.
- represented by lines of plots
if the CI includes 1 = NOT statistically significant
- goal = narrow CI
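A minimal sketch of the decision rule on this card (function name assumed): for ratio measures like RR or OR, significance hinges on whether the CI excludes 1.

```python
def ci_excludes_one(lower, upper):
    """True when a ratio's CI excludes 1 (statistically significant)."""
    return not (lower <= 1.0 <= upper)

sig = ci_excludes_one(1.2, 2.5)      # CI entirely above 1 -> significant
not_sig = ci_excludes_one(0.8, 1.3)  # CI straddles 1 -> not significant
```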
Questions to consider for meta-analysis
1. Was the search exhaustive?
2. Was the risk of bias assessed?
3. Is the process reproducible?
reporting bias in meta-analysis
1. Were multiple databases searched?
2. Did the authors use hand-searching?
3. Were there language restrictions?
Grey literature
unpublished reports, conference papers, and grant proposals
- contact author!
How to avoid bias in meta-analysis
1. randomization (Selection bias)
2. Allocation concealment = tamperproof (Selection bias)
3. Blinding (detection/performance bias)
4. Selective reporting (reporting bias)
Publication bias
journals are more likely to publish studies with statistically significant results than those that have null results
Funnel plot
A graphical display used in meta-analysis to look for publication bias.
- large trials are on top
big hole = publication bias
= missing studies for null (false positive)
Qualitative research
research that relies on what is seen in field or naturalistic settings more than on statistical data
- inductive structure = moves from specific to general
goal = understand phenomenon
Qualitative Research Methods
Aim to understand the nature or meaning of experiences, behavior, ways of thinking, and culture, which cannot be quantified into numbers
qualitative research design
Data collection using interviews, fieldwork, observation, photos, texts, and other subjective measures
Qualitative research analysis
researcher interprets the data and "constructs" the results
Role of qualitative research
1. understand the stories
2. quality improvement
3. inform guidelines
Probability
the chance of something happening
- # of ways event can occur /# total
- ex: 50% chance of flipping heads
Odds
# of events/# of non-events
- Example: odds of flipping heads is 1:1
Odds vs. Probability
P = 0, O = 0
P = 0.25, O = 1/3
P = 0.50, O = 1
P = 0.75, O = 3
Outcome can be on the side or top!
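The conversions behind the table above can be sketched in a few lines (illustrative; function names are assumptions): odds = p / (1 − p) and probability = odds / (1 + odds).

```python
def prob_to_odds(p):
    """Odds = p / (1 - p); undefined at p = 1."""
    return p / (1.0 - p)

def odds_to_prob(o):
    """Probability = odds / (1 + odds)."""
    return o / (1.0 + o)

# Reproduces the table: P = 0.25 -> O = 1/3, P = 0.50 -> O = 1, P = 0.75 -> O = 3
```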
Event Rate (Risk)
# of events/# total
- example: if 38 people out of 120 have an MI, the risk is 38/120
Interpretation of risk
Patients exposed to drug A have a 10% risk (chance) of death
- Drug A = 10/100 for death
Absolute Risk Reduction (ARR)
Risk in treatment - risk in control
- ex: Drug A = 10% death, Drug B = 5% death, so ARR = -5% for drug B
- MINUS sign = reduction
The risk of death is 5 percentage points lower in patients taking drug B compared to drug A
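The card's drug A vs drug B example as a tiny sketch (function name assumed):

```python
def absolute_risk_reduction(risk_treatment, risk_control):
    """ARR = risk in treatment - risk in control (negative = reduction)."""
    return risk_treatment - risk_control

# Drug B (5% death) vs drug A (10% death): ARR = -0.05,
# i.e. risk is 5 percentage points lower on drug B
arr = absolute_risk_reduction(0.05, 0.10)
```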
Relative Risk (RR)
Risk of treatment/risk of control
CANNOT be calculated for case-control studies
- NOT sensitive to event rate
RR death (A vs B) = (10/100)/(5/100) = 2
- Patients exposed to drug A are twice as likely to die as those exposed to drug B
Odds Ratio (OR)
Odds of treatment / odds of control
- ex: OR death (A vs B) = (10/90)/(5/95) = 2.1
ONLY way to calculate case-control study
- note: lower event rate = safer to make conclusions
Patients exposed to Drug A have 2.1 times higher odds of dying compared to patients exposed to drug B
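Both ratio measures from the two cards above, using the same hypothetical 100-patients-per-arm example (10 vs 5 deaths); function names are assumptions:

```python
def relative_risk(events_t, n_t, events_c, n_c):
    """RR = risk in treatment group / risk in control group."""
    return (events_t / n_t) / (events_c / n_c)

def odds_ratio(events_t, n_t, events_c, n_c):
    """OR = odds in treatment group / odds in control group."""
    return (events_t / (n_t - events_t)) / (events_c / (n_c - events_c))

rr = relative_risk(10, 100, 5, 100)   # (10/100)/(5/100) = 2.0
oratio = odds_ratio(10, 100, 5, 100)  # (10/90)/(5/95) ~ 2.1
```

Note how close the OR is to the RR here: with low event rates the two converge, which is why the card says lower event rates make OR-based conclusions safer.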
Number needed to treat (harm)
Number you must treat for ONE patient to benefit/be harmed. (NNT = 1/ARR)
- ex. NNH is 7, this means that for every 7 people we put on the drug, one more will have a bleed.
- NNT: lower # = BETTER (round up)
- NNH: higher # = BETTER (round down)
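A sketch of the reciprocal rule with the rounding conventions the cards give (NNT rounds up, NNH rounds down); function names assumed:

```python
import math

def nnt(arr):
    """Number needed to treat: round UP 1/|ARR| (lower is better)."""
    return math.ceil(1.0 / abs(arr))

def nnh(ari):
    """Number needed to harm: round DOWN 1/|ARI| (higher is better)."""
    return math.floor(1.0 / abs(ari))

treat = nnt(-0.05)  # 5-point absolute risk reduction -> treat 20 for 1 benefit
harm = nnh(0.15)    # 15-point absolute risk increase -> 1 harm per 6 treated
```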