HLTH 440 Chapter 14
Terms in this set (67)
Compare Formative Evaluations and Process Evaluations:
Process evaluation: looks back on the implementation process and what went well and what went poorly
Formative Evaluation: informs and guides the program as it goes
Compare Formative and Summative Evaluations:
Formative: Descriptions of strategies
Summative: Experimental, Quasi-experimental, and non-experimental designs
What are the main objectives of process evaluation?
1. To describe how closely the program implementation followed protocols.
2. How successful was it in recruiting and reaching members of the priority population?
3. How many people participated?
4. How many products or services were distributed?
5. What other factors may have competed with or confounded program results?
What are the elements of a process evaluation?
Fidelity, Dose, Recruitment, Reach, Response, Context
What is formative evaluation?
-focuses on quality of program content and program implementation
-collects data and informs stakeholders of important findings to improve a program or its delivery
-allows planners to make these changes before program is completed
Which elements of a comprehensive formative evaluation provide assurance that programs are supported by stakeholders and are evidence-based?
Justification and Evidence
What is the relationship of summative evaluation to process, impact, and outcome evaluation?
Includes impact eval (knowledge, attitudes, skills, environment, behaviors)
Includes outcome eval (mortality, morbidity, disability)
What are elements of a comprehensive formative evaluation?
Justification, Evidence, Capacity, Resources, Consumer-Orientation, Multiplicity, Support, Inclusion, Accountability, Adjustment, Recruitment, Reach, Response, Interaction, Satisfaction
(Definitions in individual cards)
What procedures are used in formative evaluation?
Key Informant Interviews
Expert Panel Reviews
Program and Evaluation Forms
What are elements and strategies related to summative evaluation?
experimental, quasi-experimental, and non-experimental designs
What are the most important questions in selecting evaluation design?
-How much time is available to conduct the evaluation?
---Do stakeholders want basic results or do they want a more sophisticated analysis?
---What indicators are stakeholders most interested in tracking?
-What financial or budgetary resources are available to conduct the evaluation?
-How many participants can be included in the evaluation?
-Are you more interested in qualitative or quantitative data?
-Do you have the data analysis skills or access to statistical consultants?
-Is it important to be able to generalize your findings to another population?
-Are the stakeholders concerned with validity and reliability?
-Do you have the ability to randomize participants into experimental and control groups?
-Do you have access to a comparison group?
What are the four steps in choosing evaluation design?
1. Orient oneself to the situation: identify resources, constraints, and hidden agendas
2. Define the problem: determine what is to be evaluated
3. Make a decision about the design: qualitative vs. quantitative
4. Choose how to measure the dependent variable, how to collect the data, how the data will be analyzed, and how results will be reported
Compare and Contrast Qualitative and Quantitative methods of evaluation:
Quantitative Method: Deductive, numeric data
-levels of occurrence, provide proof, measure levels of action and trends
Qualitative Method: Inductive, narrative data
-Depth of understanding, study motivation, enable discovery, allow insights into behavior and trends
List the various qualitative methods that can be used in evaluation and research:
Content Analysis: a systematic review identifying specific characteristics of messages
Delphi Techniques: a process that generates consensus through a series of questionnaires
Films, photographs, and videotapes
Nominal Group Process
What is internal validity:
the degree to which measured change can be attributed to the program; strong internal validity allows evaluators to state with more confidence that the program actually made the difference
Identify threats to internal validity:
--History: unanticipated events between pretest and posttest
--Maturation: participants in program show pretest to posttest differences due to growing older/more mature
--Testing: participants become familiar with test format due to repeated testing
--Instrumentation: a change in the measurement instrument, or in how it is administered, between pretest and posttest
--Statistical Regression: when high/low scores naturally move closer to mean on posttest
--Selection: differences in experimental and comparison groups due to lack of randomization
--Attrition: participants who drop out of the program
--Compensatory Equalization of Treatments: those administering the program compensate the control group with extra goods or services because they will not tolerate the inequality between groups
--Compensatory Rivalry: when the control group is seen as the underdog and is motivated to work harder
--Resentful Demoralization of Respondents Receiving Less Desirable Treatments: participants become resentful and demoralized about receiving the less desirable treatment, which can depress their outcomes
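The statistical regression threat above can be demonstrated with a small simulation (all numbers are invented for illustration): when a program recruits the most extreme pretest scorers, their posttest mean drifts back toward the population mean even with no program at all.

```python
# Hypothetical simulation of statistical regression (regression to the mean).
import random

random.seed(1)

# Each person has a stable "true" score; each test adds independent noise.
true_scores = [random.gauss(50, 10) for _ in range(1000)]
pretest = [t + random.gauss(0, 10) for t in true_scores]
posttest = [t + random.gauss(0, 10) for t in true_scores]

# Select the 100 lowest pretest scorers, as a program targeting
# "at-risk" individuals might.
lowest = sorted(range(1000), key=lambda i: pretest[i])[:100]

pre_mean = sum(pretest[i] for i in lowest) / 100
post_mean = sum(posttest[i] for i in lowest) / 100

# With no treatment, the selected group's posttest mean moves back
# toward the population mean of 50 purely by chance.
print(f"pretest mean of selected group:  {pre_mean:.1f}")
print(f"posttest mean of selected group: {post_mean:.1f}")
```

An evaluator who saw only this group's pretest and posttest could wrongly credit the program for the improvement, which is why a comparison or control group matters.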
How to control threats to internal validity?
-random assignment to experimental and control groups
-use of control or comparison groups
What is external validity?
the extent to which the program can be expected to produce similar effects in other populations
What are threats to External Validity?
--Social Desirability: individuals give responses to impress or satisfy the perceived wants of the evaluator
--Expectancy Effect: when attitudes projected onto individuals cause them to act in a certain way
--Hawthorne Effect: behavior change due to special status of those being tested
--Placebo Effect: change in behavior due to belief in the treatment
How to control threats to external validity?
-conducting the program several times in a variety of settings with different participants
--Blind/Double blind/Triple Blind studies
How can evaluation design increase control?
-helps ensure that conclusions drawn about the program will be accurate
What two comprehensive elements of formative evaluation relate to the development and content of a program?
What are three comprehensive elements of formative evaluation that pertain to promoting a program and ensuring that people in the priority population are aware of the program?
Recruitment, Reach, Response
in a partnership arrangement, when each organization performs its work as previously arranged
the process whereby planners make necessary changes to the program or its implementation based on feedback from participants and partners.
-MOST CRITICAL PART OF FORMATIVE EVALUATION
refers to formative, process, and summative evaluation and suggests these types of evaluation are clearly distinct
an evaluation wherein participants do not know if they belong to the experimental group or control group
The individual, organizational, and community resources that enable a community to take action.
-requires evaluators to carefully examine the abilities and competency of those who are designing and implementing a program (strengths and weaknesses).
As part of a summative evaluation or research study, a nonequivalent group (not randomly selected) that does not receive the treatment or program but is compared with the experimental group
One that has an unpredictable or unexpected impact on the dependent variable
A dedicated effort to understand a priority population prior to developing an intervention and then keeping this knowledge at the center of all program planning decisions
Assesses the presence of any confounding factors
As part of a summative evaluation or research study, a randomly selected group of individuals, similar to the experimental group, that does not receive the treatment or program but is compared with the experimental group.
Measures dollars spent on a program versus dollars saved or gained
Measures dollars spent on a program against the effect it produces
-how much it costs to produce a certain effect
compares interventions to determine which is least expensive in the context of impact achieved
applying a generally accepted principle to an individual case
diagrams that display steps or associations between elements in the evaluation process, using specific and unique notations; in this chapter they relate exclusively to summative evaluation
the number of program units delivered
an evaluation wherein neither participants nor those implementing the program know which group is experimental and which group is the control
a body of data that can be used to make decisions about planning
random assignment to experimental and control groups with measurement of both groups
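The random assignment described in the card above can be sketched in a few lines; the participant names and helper function here are placeholders, not part of any standard tool.

```python
# Minimal sketch of random assignment for an experimental design.
import random

def randomize(participants, seed=None):
    """Randomly split participants into experimental and control groups."""
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"experimental": shuffled[:half], "control": shuffled[half:]}

groups = randomize([f"participant_{i}" for i in range(20)], seed=42)
# Both groups would then be measured (e.g., pretest and posttest) per the design.
print(len(groups["experimental"]), len(groups["control"]))
```

Because assignment is random, preexisting differences are spread evenly across groups, which is how this design defends against the selection threat listed earlier.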
as part of a summative evaluation or research study, a group of individuals that receives the treatment or intervention
extent to which the program can be expected to produce similar effects in other populations
Ensures that programs are implemented either as intended or as per protocol
extent to which a program can be expected to produce similar effects in other populations
ensures that the right partners are involved with a program
Individual cases are studied to formulate a general principle
Can be defined in two ways:
1. in planning: the degree to which practitioners effectively work and communicate with the program participants
2. In evaluation: when participants in the control or comparison group interact with and learn from the experimental group
provides assurance that programs are supported by key stakeholders
refers to the number of components or activities that make up an intervention
-multiple component programs cater more effectively to the varied needs of consumers and tend to be accepted more readily
use of pretest and posttest comparisons, or posttest analysis only, without a control group or comparison group
A set of procedures used to try out various processes during program development using a small group of participants prior to implementation
Testing components of a program, service, or product with the priority population after the completion of a program
Testing components of a program, service, or product with the priority population prior to implementation
Can be defined in two ways:
1. getting feedback from the priority population on products, messages, and materials before launching a social marketing campaign
2. Collecting baseline data prior to program implementation that will be compared with the posttest data to measure the effectiveness of programs
an inductive method that produces narrative data
a deductive method that produces numeric data
use of a treatment group and a non-equivalent (nonrandomized) comparison group with measurement of both groups
Portion of the priority population that has an opportunity to participate in a program
making those in the priority population aware of a program
the human, fiscal, and technical assets available to plan, implement, and evaluate a program.
-relate to adequate internal or external funding and assistance from partner organizations
ensuring that an adequate number of people participate in a program
approval after participation
any combination of measurements and judgments that permit conclusions to be drawn about impact, outcome, or benefits of a program or method
Ensures that programs have appropriate built-in reinforcement components to assist participants with the expected level of involvement and/or behavior change
An evaluation wherein neither the participants, nor those implementing the program, nor the evaluators, know which group is experimental and which group is the control.