Planning and Evaluating: Evaluation Approaches (Chapters 13/14)

Terms in this set (65)

Background Info on Evaluation
Adequacy of resources:
-Cost identification analysis: compares the different interventions available for a program to determine which intervention would be the least expensive
-Cost-benefit analysis: compares the dollar value of the benefits received with the dollars invested in the program
-Cost-effectiveness analysis: quantifies the effects of a program in non-monetary terms (such as cases prevented or years of life saved) relative to its cost
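
A minimal sketch in Python of how the three analyses frame the same two hypothetical interventions; the costs, benefits, and outcome counts are invented for illustration and are not values from the chapter.

# Hypothetical costs and outcomes for two interventions (illustrative only)
cost_a, cost_b = 50_000.0, 72_000.0            # total program costs in dollars
benefit_a, benefit_b = 140_000.0, 150_000.0    # estimated dollar benefits (e.g., averted medical costs)
quits_a, quits_b = 80, 120                     # non-monetary outcome: participants who quit smoking

# Cost identification analysis: which intervention is least expensive?
cheapest = "A" if cost_a < cost_b else "B"

# Cost-benefit analysis: dollar benefit received per dollar invested
cba_a, cba_b = benefit_a / cost_a, benefit_b / cost_b

# Cost-effectiveness analysis: cost per unit of non-monetary outcome
cea_a, cea_b = cost_a / quits_a, cost_b / quits_b

print(f"Least expensive: Intervention {cheapest}")
print(f"Benefit per dollar: A={cba_a:.2f}, B={cba_b:.2f}")
print(f"Cost per quit: A=${cea_a:,.0f}, B=${cea_b:,.0f}")

With these made-up numbers the analyses can point in different directions: cost identification and cost-benefit favor A, while cost-effectiveness favors B because each quit costs less.
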
Multiplicity
Degree to which multiple components are built into the program

Support
Degree to which a support component is built into the intervention

Inclusion
Extent to which an adequate range and number of participants are involved in the program

Accountability
Extent to which the staff is fulfilling its responsibilities

Adjustment
Degree to which programs, services, or activities are modified based on feedback received from participants, partners, or other stakeholders

Recruitment
Degree to which members of the priority population are adequately recruited through appropriate channels and places consistent with cultural characteristics

Reach
Proportion of the priority population given the opportunity to participate in the program

Response
Proportion of the priority population actually participating in the program

Interaction
Quality of interactions between planners and participants

Satisfaction
Degree to which the needs of participants are being met, how satisfied they are with the program, service, or activity, and their belief that a positive impact is being made in their lives

Procedures Used in Formative Evaluation
Focus groups, surveys, interviews, expert panel reviews, quality circles, protocol checklists, Gantt charts, program and evaluation forms, direct observation

Process Evaluation
Looks back on the implementation process and measures what went well and what went poorly.

Main Objectives of Process Evaluation
-How closely the program implementation followed protocols
-How successful it was in recruiting and reaching members of the priority population
-How many people participated or how many products or services were distributed
-Other factors that may have competed with or compounded program results

Elements of Process Evaluation: Fidelity
Extent to which the program or service was delivered as planned or per protocol, including use of timelines and logic models

Dose
Number of program units delivered

Recruitment, Reach, Response, and Context
Recruitment, reach, and response are defined as above; context refers to external factors that may influence program results (a worked sketch with hypothetical numbers follows below)
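
A small sketch, with invented counts, of how reach, response, dose, and fidelity might be computed during process evaluation; the numbers and variable names are assumptions, not figures from the chapter.

# Hypothetical process-evaluation counts (assumed for illustration)
priority_population = 1200   # people in the priority population
invited = 900                # people given the opportunity to participate
enrolled = 310               # people who actually participated
sessions_delivered = 24      # program units delivered (dose)
sessions_planned = 30        # program units called for in the protocol

reach = invited / priority_population              # proportion offered the program
response = enrolled / priority_population          # proportion actually participating
fidelity = sessions_delivered / sessions_planned   # one simple proxy for delivery as planned

print(f"Reach: {reach:.0%}  Response: {response:.0%}  "
      f"Dose: {sessions_delivered} units  Fidelity: {fidelity:.0%}")
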
Summative Evaluation
Purpose is to assess the effectiveness of the intervention and the extent to which the outcomes of interest changed as a result of the program. Uses pre and post measures.

Impact Evaluation
Focuses on intermediate or immediate measures such as behavior change or changes in attitudes, knowledge, and awareness

Outcome Evaluation
Measures the degree to which end points such as diseases or injuries actually decreased. Impact and outcome evaluations together constitute summative evaluation.

Purpose of Evaluation
1. To determine achievement of objectives related to improved health status
2. To improve program implementation
3. To provide accountability to funders, the community, and other stakeholders
4. To increase community support for initiatives
5. To contribute to the scientific base for community public health interventions
6. To inform policy decisions

Framework for Program Evaluation: Step 1-Engaging Stakeholders
Who are the stakeholders? Those involved in program operations, those served or affected by the program, and the primary users of the evaluation results.

Step 2-Describing the Program
Sets the frame of reference for all decisions in the evaluation process. Describes the mission, goals, objectives, capacity to effect change, stage of development, and how the program fits into the larger community. A logic model is usually used in this step.

Step 3-Focusing the Evaluation Design
Makes sure the interests of stakeholders are addressed. Identifies the purpose of the evaluation, how it will be used, the questions to be asked, and the design of the evaluation, and finalizes any agreements about the process.

Step 4-Gathering Credible Evidence
Decides on measurement indicators, sources of evidence, quality and quantity of evidence, and logistics for collecting evidence. Organizes data, including specific processes related to coding, filing, and cleaning (a rough sketch follows below).
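
As a rough illustration of the data-organizing side of Step 4 (coding, filing, and cleaning), here is a hedged sketch using pandas; the file name, column names, and coding scheme are hypothetical assumptions, not part of the framework itself.

import pandas as pd

# Hypothetical post-program survey export (file and columns are assumptions)
df = pd.read_csv("post_survey.csv")

# Cleaning: drop exact duplicate records and rows missing a participant ID
df = df.drop_duplicates().dropna(subset=["participant_id"])

# Coding: convert a text response into a numeric indicator for analysis
df["met_goal"] = (df["goal_status"].str.strip().str.lower() == "yes").astype(int)

# Filing: save the cleaned, coded dataset for the justifying-conclusions step
df.to_csv("post_survey_clean.csv", index=False)
print(df["met_goal"].mean())  # proportion of respondents reporting they met their goal
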
Step 5-Justifying Conclusions
Compare the evidence against standards of acceptability. Judge the worth, merit, or significance of the program. Create recommendations for action based on the results.

Step 6-Ensuring Use and Sharing Lessons Learned
Use and dissemination of findings in the real world; the needs of each group of stakeholders are addressed.

Standards of Evaluation
1. Utility standards-ensure the needs of users are satisfied
2. Feasibility standards-ensure the evaluation is viable and pragmatic
3. Propriety standards-ensure the evaluation is ethical
4. Accuracy standards-ensure the evaluation findings are correct

Practical Problems or Barriers in Evaluation
1. Planners fail to build evaluation into the planning process
2. Adequate resources are not available to conduct an appropriate evaluation
3. Organizational restrictions prevent hiring consultants
4. Effects are hard to detect because changes are small, come slowly, or do not last
5. The length of time allotted for the program and evaluation is not realistic
6. Restrictions limit collection of data among the priority population
7. It is difficult to determine cause and effect
8. It is difficult to evaluate multi-strategy interventions
9. Discrepancies between professional standards and actual practice exist
10. Evaluators' motives to demonstrate success introduce bias
11. Stakeholders' perceptions of the evaluation's value may vary too drastically
12. Intervention strategies are not delivered as intended or are not culturally specific

Evaluation in the Program Planning Stages
Evaluation must reflect the program goals and objectives. It must be planned in the early stages of development and be in place before the start of the program. Baseline data-measures reflecting the initial status of participants. Initial data regarding the program should be analyzed promptly so any necessary adjustments can be made. By creating the summative evaluation early in the planning process, planners can ensure the results are less biased.

Ethical Considerations
Evaluation should never cause mental, emotional, or physical harm to those in the priority population. Participants should always be informed of the purpose and potential risks and should give consent. No individual should ever have personal information revealed in any setting or circumstance. When appropriate, evaluation plans should be approved by institutional review boards.

Internal Evaluation
An individual trained in evaluation and personally involved with the program conducts the evaluation.

Advantages of Internal Evaluation
Familiar with the organization and program history. Knows the decision-making style of those in the organization. Present to remind people of results now and in the future. Able to communicate results more frequently and clearly. Less expensive.

Disadvantages of Internal Evaluation
Possibility of evaluator bias or conflict of interest

External Evaluation
Conducted by someone not connected with the program. The evaluator should be credible, objective, have a clear role in the evaluation design, and accurately report findings.

Advantages of External Evaluation
Objective review and fresh perspective. Can ensure an unbiased evaluation outcome. Brings global knowledge from working in a variety of settings. Typically brings more breadth and depth of technical expertise.

Disadvantages of External Evaluation
More expensive. Can be somewhat isolated, often lacking knowledge of and experience with the program.

Evaluation Results
Who will receive the results of the evaluation? Different aspects of the evaluation can be stressed depending on a group's needs and interests, and different stakeholders may want different questions answered. Planning for the evaluation should include a determination of how the results will be used.

Evaluation Approaches
Refers to formative, process, and summative evaluation and suggests these types of evaluation are clearly distinct.

Evaluation Designs
Relates to summative evaluation: experimental, quasi-experimental, and non-experimental designs.

Questions for Selection of a Summative Evaluation Design
How much time do you have to conduct the evaluation? What financial resources are available? How many participants can be included? Quantitative or qualitative data? Do you have data analysis skills or access to statistical consultants? In what ways can validity be increased? Is it important to be able to generalize your findings to other populations? Are the stakeholders concerned with validity and reliability? Do you have the ability to randomize participants into experimental and control groups? Do you have access to a comparison group?

Steps in Selecting an Evaluation Design
1. Orientation to the situation-resources, constraints, hidden agendas
2. Defining the problem-dependent, independent, and confounding variables
3. Basic design decision-qualitative, quantitative, or a combination
4. Plans for measurement, data collection, data analysis, and reporting of results

Quantitative Data Collection
Deductive, applying a principle to a case. Produces numeric data such as counts, ratings, and scores.

Qualitative Data Collection
Inductive, examining a case to form a principle. Produces narrative data such as words and descriptions. Good for programs that emphasize individual outcomes or in cases where other descriptive information from participants is needed. Also useful during process evaluation.

Qualitative Methods Used in Evaluation
Case studies, content analysis, Delphi technique, ethnographic studies, film ethnography, focus groups, historical analysis, in-depth interviewing, nominal group process, participant-observer studies, quality circles, unobtrusive techniques

Four Ways Qualitative and Quantitative Methods Might Be Integrated
Model 1: Qualitative methods are used to help develop quantitative measures and instruments.
Model 2: Qualitative methods are used to help explain quantitative findings.
Model 3: Quantitative methods are used to embellish a primarily qualitative study.
Model 4: Qualitative and quantitative methods are used equally and in parallel.

Experimental Group
Group of individuals who receive the intervention

Control Group
Group that does not receive the intervention. Should be similar to the experimental group; assignment is random. Participants have the right to the status quo, the right to be informed of the purpose, and the right to new services, and should not be subjected to ineffective or harmful programs.

Comparison Group
When individuals cannot be randomly assigned to an experimental or control group, a non-equivalent comparison group may be formed.

Experimental Design
Offers the greatest control over confounding variables. Involves random assignment to experimental and control groups with measurement of both groups. Produces the most interpretable and defensible evidence of effectiveness (see the sketch below).
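
A minimal sketch of the random-assignment and pre/post measurement logic behind an experimental design, using only Python's standard library; the participant pool, scores, and the simple change comparison are illustrative assumptions, not a prescribed analysis.

import random

# Hypothetical pool of recruited participants (IDs only)
participants = list(range(1, 41))
random.shuffle(participants)

# Random assignment to experimental and control groups
midpoint = len(participants) // 2
experimental, control = participants[:midpoint], participants[midpoint:]

# Invented pre/post scores on some outcome measure (e.g., a knowledge test)
pre = {pid: random.uniform(40, 60) for pid in participants}
post = {pid: pre[pid] + (random.uniform(5, 15) if pid in experimental else random.uniform(-2, 3))
        for pid in participants}

def mean_change(group):
    """Average post-minus-pre change for a group."""
    return sum(post[p] - pre[p] for p in group) / len(group)

# Comparing the change in the two groups is the core of the pre/post experimental logic
print(f"Experimental change: {mean_change(experimental):.1f}")
print(f"Control change:      {mean_change(control):.1f}")
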
Quasi-Experimental Design
Results in interpretable and supportive evidence of program effectiveness, but cannot control for all factors that affect the validity of the results. There is no random assignment to groups; comparisons are made between the experimental and comparison groups.

Non-Experimental Design
Without the use of a comparison or control group, offers little control over the factors that affect the validity of the results.

Internal Validity
Degree to which the change that was measured can be attributed to the program. Many factors can threaten internal validity; most threats can be controlled through randomization.

Threats to Internal Validity
History, maturation, testing, instrumentation, statistical regression, selection, mortality, diffusion or imitation of interventions, compensatory equalization of treatments, compensatory rivalry, resentful demoralization, and interaction of several threats

External Validity
The extent to which the program can be expected to produce similar effects in other populations. Factors that threaten external validity are sometimes known as reactive effects, since they cause individuals to react in a certain way.

Threats to External Validity
Social desirability, expectancy effect, Hawthorne effect, placebo effect, and multiple X interference. These can be counteracted by making a greater effort to treat all subjects identically.
Blind study-participants do not know which group they are in
Double blind-neither planners nor participants know which groups participants are in
Triple blind-group assignment is not available to participants, planners, or evaluators