Clinical Research Exam 1
Terms in this set (127)
Biomedical research
1. Basic science research
2. Translational research
3. Clinical research
Basic science research
conducted in lab, not on people
Clinical research
prevent disease in humans
 patient-oriented
 epidemiological
 health services
PICO
Patient, Intervention, Comparison, Outcome
Drug Approval Process
Drug discovery
Step 1: preclinical trials
Investigational New Drug (IND)
Step 2: a drug not yet approved for marketing by the FDA but available for use in experiments to determine its safety and efficacy
Phase I (early human)
Step 3: small sample size, healthy volunteers
Phase II (Early efficacy)
Determine efficacy and safety
 patients with disease
 randomized and double-blinded; minimizes bias
Phase III
Many patients with disease
 identify side effects
 RCT
New Drug Application (NDA)
If there is sufficient data to demonstrate that a drug is safe and effective, the company submits an NDA as a formal request that the FDA approve it for marketing.
Phase IV (postmarketing studies)
focused on safety issues (higher exposure to drug, better idea of side effects)
 compares cost/benefit
 compares treatment options
Belmont report
ethical principles and guidelines for the protection of human subjects of research
Belmont report 3 ethical principles
1. Respect for persons
2. Beneficence
3. Justice
respect for persons
informed consent
 not needed if you can document minimal risk
Beneficence
show acceptable risk benefit ratio
Justice
fairness; rightfulness
Institutional Review Board (IRB)
A committee at each institution where research is conducted that reviews every experiment for ethics and methodology, at the beginning of the study.
exempt research
1. Surveys, interviews, or observations
2. Studies of existing records, data, or specimens
3. research on normal educational practices
Scientific Misconduct
fabrication, falsification, or plagiarism
Conflicts of interest
1. Dual role for investigator and clinician
 leads to bias
2. Financial conflicts
 need to be transparent
Responding to conflicts
establish a DSMB
DSMB
Data and Safety Monitoring Board
 independent group
 ensures the safety and welfare of subjects by monitoring data for safety concerns, etc.
 asks "is it ethical to continue?" at interim points (e.g., the halfway point)
Research Protocol
details each procedural step of the research design
Components of a Research Protocol
1. Research Question
2. Background and significance
3. Study design
4. Study subjects
5. Variables to be measured
6. Statistics
Research question
objective of the study
FINER
FINER criteria
Feasible
Interesting
Novel
Ethical
Relevant
background and significance
context and rationale
Study Designs
1. Observational Study
 case report/series
 retrospective/prospective cohort
 cross-sectional
 case-control
2. Clinical trial
 RCT
inclusion criteria
characteristics that the prospective subjects must have if they are to be included in the study
exclusion criteria
Characteristics that eliminate a potential subject from the study to avoid extraneous effects
predictor variable (independent variable)
Observational: used to identify predictors for disease
 ex. age, sex, race
Clinical: intervention
 ex. aspirin, sodium bicarb
dependent variable
outcome
 ex. MI, death, AKI
confounding variable
extraneous factor that interferes with the action of the independent variable on the outcome
 Randomization and larger sample sizes minimize this!
null hypothesis
the hypothesis that there is no significant difference between predictors and outcome
Hypothesis
related to research question
 statement of effect
Goal of a study
to make inferences that can be generalized to a certain population
 optimize validity!
 minimize errors
internal validity
Is outcome caused by intervention/predictors?
external validity
extent to which we can generalize findings to real-world settings (all patients)
Random error
results due to chance
 reduced by increasing sample size
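A quick simulation (a sketch added here, not part of the original card set) showing that random error shrinks as sample size grows: sample proportions from 1,000 coin flips cluster much more tightly around the true value 0.5 than proportions from 10 flips.

```python
import random
import statistics

def sample_proportion(rng, n):
    # Simulate n fair coin flips and return the proportion of heads.
    return sum(rng.random() < 0.5 for _ in range(n)) / n

rng = random.Random(0)  # fixed seed so the sketch is reproducible
small = [sample_proportion(rng, 10) for _ in range(500)]    # small samples
large = [sample_proportion(rng, 1000) for _ in range(500)]  # large samples

# Spread around the true value (random error) is far larger for n = 10.
print(statistics.stdev(small), statistics.stdev(large))
```

The true proportion is 0.5 in both cases; only the scatter (random error) differs. Systematic error, by contrast, would shift estimates away from 0.5 no matter how large n gets.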
Systematic error
results due to bias
 reduced ONLY by minimizing bias
convenience sampling
using a sample of people who are readily available to participate
 may cause sampling bias
probability sampling
random sampling (GOLD STANDARD)
simple random sampling
A sampling procedure in which each member of the population has an equal probability of being included in the sample.
stratified random sampling
Population divided into subgroups (strata) and random samples taken from each stratum
 identifies particular demographic categories of interest, like minorities
 makes sure every group is equally represented
Goals of recruitment
1. minimize non-responders
2. have enough subjects
simple hypothesis
Expresses a predicted relationship between one independent variable and one dependent variable
complex hypothesis
More than one independent variable and one dependent variable
onesided hypothesis
Direction is known.
 Ex: anxiety is higher in pharmacy students
twosided hypothesis
Direction is not stated, only an association noted.
 Ex: there is an association between being a pharmacy major and stress levels.
Type I error
False positive: state that there is a difference when there is not.
 Reject a null that was true
 Occurs due to chance
type II error
False negative: say there is no difference when there is one.
 Fail to reject a null hypothesis that is not true.
 Usually occurs due to small sample size.
Effect size
describes the strength of an association
 larger differences = smaller sample size needed
Alpha (ɑ)
Probability of committing a type 1 error
 use an alpha of 0.05 = 5% chance of incorrectly rejecting null when it is actually true
 Closely related to the p-value
Beta (β)
Probability of committing a type 2 error
 use 0.1 to 0.2; β = 0.2 means a 20% chance of not detecting a real difference
 use a lower beta to avoid a type II error
Power
The ability to detect a difference between groups (Power = 1 − β)
 typical power is 0.8, an 80% chance of detecting a real difference between groups
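How α, β, and effect size drive sample size can be sketched with the standard normal-approximation formula for comparing two proportions (an illustration added here; `n_per_group` is a hypothetical helper, not from the card set — real studies use dedicated software):

```python
import math
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate subjects per group to detect p1 vs. p2
    (two-sided alpha, normal approximation; illustrative only)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for power = 0.80
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Detecting a 60% vs. 50% event rate with 80% power:
print(n_per_group(0.60, 0.50))
# A smaller effect size needs far more subjects:
print(n_per_group(0.55, 0.50))
```

Halving the difference roughly quadruples the required n — the "larger differences = smaller sample size needed" rule from the effect-size card.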
Pvalue
Probability that the observed difference is due to CHANCE. If p < 0.05, the null hypothesis is rejected; if p > 0.05, we fail to reject the null hypothesis.
Case reports
Detailed published report on an individual patient. (anecdotal evidence only)
 Includes observations, usually something unique or unexpected.
 non-study design
case series
Similar to case report, tracks a very small number of patients with an observation of interest.
 Can be confounded by selection bias (limits external validity)
 Non-study design
Descriptive studies
just describes trends
Analytic studies
attempts to show causality
cohort study design
Can be prospective (outcome has not occurred yet) or retrospective (outcome has already occurred)
prospective cohort study structure
Investigator chooses a group of subjects and measures characteristics that might predict subsequent outcomes.
 periodic measurements of outcomes.
 Can be one group or multiple (following smokers v. following smokers and nonsmokers)
Strengths of Prospective Cohort Study
calculates incidence
describes predictors
measure predictor variables completely
minimizes recall bias
incidence
the proportion of people who develop a disease over time
weaknesses of prospective cohort study
expensive and inefficient
smaller sample size
causal inference is challenging because of confounding variables
retrospective cohort study structure
all measurements, followup, and outcomes are from the past
 similar to prospective
Strengths/weaknesses of a retrospective cohort study
S: less costly and less time-consuming
W: existing data may not be complete
Cross-sectional study
Observational.
 "slice" of time
 Describes variables, identifies prevalence.
 Often used as a starting point for cohort study.
prevalence
The proportion who have a disease at a given time.
Strengths/weaknesses of cross-sectional study
S: fast, inexpensive, no loss to follow-up
W: can't establish causation, difficult with rare disease states.
Case-control studies
Observational (Generally retrospective)
 Grouping is by outcome of interest, looking for predictors
 strength of association using odds ratio
Strengths of casecontrol studies
 good for examining rare outcomes/diseases
 good for generating hypotheses
Weaknesses of casecontrol studies
 can't determine incidence/prevalence
 only one outcome can be studied
 susceptible to sampling bias (underrepresented population)
 bias due to measurement error = recall bias
Ways to avoid Sampling bias
Matching: compare people with the same baseline characteristics
Explanation for associations
1. NOT real, due to chance alone
2. NOT real, due to bias
3. REAL, but opposite
 effect → cause
4. REAL, due to confounding variable
5. REAL, cause → effect
Coping with confounding variables
1. Design phase
 matching
 specification (exclusion variables)
2. Analysis phase
 stratification
 adjustment
Evidence favoring causality
 consistency of research
 strength of association
 direct correlations
 biological plausibility
Clinical trials
Investigator applies a treatment (intervention) and observes an outcome
 Best design to know causation
 Classic design is the randomized blinded trial
Strengths of Clinical Trials
potential to demonstrate causality, randomization to intervention is possible, blinding is easier
Weaknesses of Clinical Trials
expensive, time-consuming, may put patients at risk, addresses a narrow question, not feasible for many questions
Steps in a randomized clinical trial
1. Select a sample from population
2. Measure baseline variables
3. Randomize participants
4. Apply intervention
5. Follow cohorts
6. Measure outcome variables
7. Analyze results
Selecting a sample for clinical trials
Inclusion: subjects with high risk for an outcome, subjects likely to have greatest effect from treatment
Exclusion: treatment would be harmful, treatment is unlikely to be effective, subject unlikely to adhere to treatment, subject unlikely to follow up, any practical problems, unethical to enroll
Purpose of randomization
Theoretically eliminates influence of confounding variables before the start of the study
Allocation concealment
researchers are unaware of which group patients are being assigned to at the time they enter the study
 use opaque, sealed envelopes
block randomization
equal distribution between treatment and control groups
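A minimal sketch of block randomization (the helper is hypothetical, not from the card set): within every block of four, exactly two subjects go to treatment and two to control, so group sizes stay balanced throughout enrollment.

```python
import random

def block_randomize(n_blocks, block=("T", "T", "C", "C"), seed=42):
    """Shuffle each block of assignments independently so treatment (T)
    and control (C) stay balanced 2:2 within every block of 4."""
    rng = random.Random(seed)  # seeded so the sketch is reproducible
    allocation = []
    for _ in range(n_blocks):
        b = list(block)
        rng.shuffle(b)
        allocation.extend(b)
    return allocation

alloc = block_randomize(3)
print(alloc)  # 12 assignments: exactly 6 T and 6 C, balanced in every block
```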
stratified block randomization
divides cohort into groups based on important predictor variables
Purpose of blinding
Eliminates influence of confounding or bias after the start of the study
Clinical outcomes
medical events that occur as a result of disease or treatment
 ex. symptoms, death, MI
surrogate outcomes
An indirect outcome (such as a physiological measure)
 ex. Cholesterol level, pH, BP, HgbA1c
Intention to treat analysis
compares outcomes between groups regardless of what happened to patients (dropout, death)
 more real-life!
per protocol analysis
A comparison of treatment groups that includes only those patients who completed the treatment originally allocated.
 more reflective of the treatment effect itself
Noninferiority trial
A randomized clinical trial designed to establish that the drug of interest is not inferior to the gold-standard therapy.
 NEW drugs
 goal: make sure new drug is NOT worse
Nonrandomized clinical trial
absence of randomization
Crossover design
Each person is given treatment and compared to their own baseline w/o treatment
 Individuals are their own control
Summary of study designs
systematic review
study of studies
 does NOT reduce bias (garbage in = garbage out)
 evidence-based medicine
meta-analytic study
study that analyzes a large number of other studies
 NO definitive results
Why are systematic reviews important?
1. ↑ sample size = ↓ results due to chance
2. Bias (publication bias/ reporting bias/ academic bias)
3. Heterogeneity = how applicable is it = ↑ I²
I² value
↑ I² = something other than chance is affecting the outcome (close to 100%)
 0% = ALL variability due to chance
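The usual way I² is computed (Higgins' formula, stated here for illustration) is from Cochran's Q statistic and the degrees of freedom (number of studies minus 1):

```python
def i_squared(q, df):
    """Higgins I²: percent of total variability beyond chance.
    q  = Cochran's Q statistic from the meta-analysis
    df = number of studies - 1"""
    if q <= 0:
        return 0.0
    return max(0.0, (q - df) / q) * 100

print(i_squared(10, 4))  # (10 - 4) / 10 = 60% heterogeneity
print(i_squared(3, 4))   # Q below df is truncated to 0% (all chance)
```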
Forest plots of meta-analytic findings
 look at distribution of lines: same side?
 confidence intervals: narrow or wide?
 diamond (summary): on one side?
 I²: big or small?
 # events: small or large?
Confidence Interval (CI)
indicates a range in which the population mean is believed to be found.
 represented by the lines of the plots
 if the CI includes 1 = NOT statistically significant
 goal = narrow CI
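A sketch of where a CI's width comes from, using a 95% Wald interval for a proportion and the 38/120 MI example from the event-rate card (`proportion_ci` is a hypothetical helper added here):

```python
import math

def proportion_ci(events, n, z=1.96):
    """95% Wald confidence interval for a proportion (illustrative only)."""
    p = events / n
    se = math.sqrt(p * (1 - p) / n)  # standard error shrinks as n grows
    return (p - z * se, p + z * se)

lo, hi = proportion_ci(38, 120)
print(round(lo, 2), round(hi, 2))
# The same observed rate in a 10x larger sample gives a narrower CI:
lo2, hi2 = proportion_ci(380, 1200)
print(round(hi2 - lo2, 3), "<", round(hi - lo, 3))
```

Narrower CI = more precise estimate, per the card's goal. (The "includes 1 = not significant" rule applies to ratios like RR/OR; for differences the null value is 0.)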
Questions to consider for meta-analysis
1. Was the search exhaustive?
2. Was the risk of bias assessed?
3. Is the process reproducible?
reporting bias in meta-analysis
1. Were multiple databases searched?
2. Did the authors use hand-searching?
3. Were there language restrictions?
grey literature
unpublished reports, conference papers, and grant proposals
 contact author!
How to avoid bias in meta-analysis
1. randomization (Selection bias)
2. Allocation concealment = tamper-proof (Selection bias)
3. Blinding (detection/performance bias)
4. Selective reporting (reporting bias)
publication bias
journals are more likely to publish studies with statistically significant results than those that have null results
funnel plot
A graphical display used in meta-analysis to look for publication bias.
 large trials are on top
 big hole = publication bias = missing studies with null results (published record skews toward false positives)
Qualitative Research
research that relies on what is seen in field or naturalistic settings more than on statistical data
 inductive structure = moves from specific to general
 goal = understand a phenomenon
Qualitative Research Methods
Aim to understand the nature or meaning of experiences, behavior, ways of thinking, and culture, which cannot be quantified into numbers
qualitative research design
Data collection using interviews, fieldwork, observation, photos, texts, and other subjective measures
Qualitative research analysis
the researcher analyzes the data and "constructs" the results
Role of qualitative research
1. understand the stories
2. quality improvement
3. inform guidelines
Probability
the chance of something happening
 # of ways event can occur /# total
 ex: 50% chance of flipping heads
Odds
# of events/# of nonevents
 Example: odds of flipping heads is 1:1
Odds vs. Probability
P = 0, O = 0
P = 0.25, O = 1/3
P = 0.50, O = 1
P = 0.75, O = 3
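The table above can be reproduced with two one-line conversions (a sketch; the function names are mine):

```python
def prob_to_odds(p):
    """Odds = events / non-events = p / (1 - p)."""
    return p / (1 - p)

def odds_to_prob(o):
    """Probability = odds / (1 + odds)."""
    return o / (1 + o)

for p in (0.25, 0.50, 0.75):
    print(p, "->", round(prob_to_odds(p), 4))  # matches the table above
```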
2x2 table
Outcome can be on the side or top!
Event Rate (Risk)
# of events/# total
 example: if 38 people out of 120 have an MI, the risk is 38/120
Interpretation of risk
Patients exposed to drug A have a 10% risk (chance) of death
 Drug A = 10/100 for death
Absolute Risk Reduction (ARR)
Risk in treatment  risk in control
 ex: Drug A = 10% death, Drug B = 5% death, so ARR = 5% for drug B
 MINUS sign = reduction
ARR interpretation
The risk of death is 5 percentage points lower in patients taking drug B compared to drug A
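The ARR arithmetic from the example, as a sketch (the function name and the A-minus-B ordering follow the Drug A vs. Drug B example above):

```python
def absolute_risk_reduction(risk_a, risk_b):
    """ARR = risk in one group minus risk in the other;
    here, Drug A's risk minus Drug B's risk."""
    return risk_a - risk_b

# Drug A: 10/100 deaths; Drug B: 5/100 deaths
arr = absolute_risk_reduction(10 / 100, 5 / 100)
print(arr)  # 0.05 = 5 percentage points lower risk with Drug B
```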
Relative Risk (RR)
Risk of treatment/risk of control
 *CANNOT be calculated for case-control studies*
 NOT sensitive to event rate
RR interpretation
RR death (A vs. B) = (10/100)/(5/100) = 2
 Patients exposed to drug A are twice as likely to die as those exposed to drug B
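The RR calculation from the same numbers (a sketch; `relative_risk` is a name I chose):

```python
def relative_risk(events_a, n_a, events_b, n_b):
    """RR = risk in group A / risk in group B.
    Not valid for case-control studies (no true risks there)."""
    return (events_a / n_a) / (events_b / n_b)

print(relative_risk(10, 100, 5, 100))  # 2.0: Drug A patients twice as likely to die
```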
Odds Ratio (OR)
Odds of treatment / odds of control
 ex: OR death (A vs. B) = (10/90)/(5/95) = 2.1
 ONLY measure that can be calculated for a case-control study
 note: lower event rate = safer to make conclusions
OR interpretation
Patients exposed to Drug A have 2.1 times higher odds of dying compared to patients exposed to drug B
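The OR from the same 2x2 numbers, showing why it differs slightly from the RR (non-events, not totals, go in the denominator); sketch only:

```python
def odds_ratio(events_a, n_a, events_b, n_b):
    """OR = odds in group A / odds in group B.
    The only association measure valid for case-control studies."""
    odds_a = events_a / (n_a - events_a)  # 10 / 90
    odds_b = events_b / (n_b - events_b)  # 5 / 95
    return odds_a / odds_b

print(round(odds_ratio(10, 100, 5, 100), 1))  # 2.1 (vs. RR of 2.0)
```

With rarer events the odds approach the risks, so OR ≈ RR — the "lower event rate = safer to make conclusions" note above.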
Number needed to treat (harm)
Number you must treat for ONE patient to benefit/be harmed (1/ARR)
 ex: an NNH of 7 means that for every 7 people we put on the drug, one more will have a bleed.
 NNT: lower # = BETTER (round up)
 NNH: higher # = BETTER (round down)
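NNT/NNH arithmetic with the rounding conventions from the card (a sketch; the helper names are mine):

```python
import math

def nnt(arr):
    """Number needed to treat = 1 / ARR, rounded UP (conservative)."""
    return math.ceil(1 / arr)

def nnh(ari):
    """Number needed to harm = 1 / absolute risk increase, rounded DOWN."""
    return math.floor(1 / ari)

print(nnt(0.05))  # ARR of 5 points: treat 20 patients to help one
print(nnt(0.03))  # 1 / 0.03 = 33.3, rounded up to 34
print(nnh(0.15))  # 1 / 0.15 = 6.7, rounded down to 6
```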
Risk/Odds summary