RPTA 360 MIDTERM with Sampling
Keri Schwab, Cal Poly, Spring 2014
Terms in this set (90)
Five P's of Evaluation
Impacts, activities, people involvement, knowledge, attitudes, skills, aspirations, long-term impact.
Safety concerns, master planning, adequate facilities.
Accountability, cost-benefit, cost-effectiveness, equitable provision of services, economic impact.
Motivations/satisfaction, changes in attitudes, changes in knowledge, changes in skills and abilities, learning transfer, interactions/relationships, demographic characteristics.
Performance review, assess training needs, provide feedback for improvement.
Finite, no in-between numbers, can be counted with exact numbers.
Measured along a continuum--weight, temperature, height.
Some implied rank order. 1st, 2nd or 3rd. First in class.
Nominal/ Categorical Data
Have a true zero--can create percentages, averages, and it is measurable. EX: height and weight
Ordered categories have intervals and predictable size differences. No determined zero point. EX: Temperature.
External evaluator, external standards and accreditation, often based on minimum requirements.
Gut reactions, day to day feedback observations, not systematic and not scientific.
Goal-Attainment approach to eval
Goal-Free approach to eval
No pre-determined ideas--evaluates the actual effects, outcomes, and impacts without considering what they were supposed to be.
When not to evaluate:
1. When you're not sincere about making decisions to improve your program.
2. If you know your program has serious problems.
3. When you don't have goals and objectives.
4. When a program is just starting.
5. If the disadvantages will outweigh the advantages.
Why do we evaluate?
1. To determine accountability.
2. To assess or establish a baseline.
3. To assess the attainment of goals and objectives.
4. To ascertain outputs and outcomes.
5. To determine the keys to successes and failures.
6. To improve and set future directions.
7. To comply with external standards.
Set a road map that the evaluator will follow from the beginning to the end of the project.
Pieces of information that are collected to help determine whether criteria are being met.
Interpretation and decisions based on data gathered and questions asked.
Participants have a right to
No coercion, (always optional unless job required), written consent, do no harm (physical, social, psycho, emotional, enviro), and results.
Multiple perspectives, many truths, goal-free model, realities are multiple, different for everyone, inductive reasoning from specific details, moving out to the general, qualitative data.
Facts and truth can be known, one truth, goal-attainment model, one reality, empirical observation, testing, deductive reasoning from general theory, quantitative data.
Deductive (general, broad theory, moves to specific data) Useful for assessing specific goals, objectives, logic models, Preordinate: all set ahead of time. Highly controlled, rules and procedures.
Most relevant questions first, less important later, group more controversial with less controversial, demographics last.
Ranking in order from most important to least important numerically.
Indicate your perceptions of staff on each of the following (terms listed as opposite pairs): Mean/Friendly, Helpful/Unhelpful, Disorganized/Organized.
How satisfied are you with the following? 1-5/Completely Unsatisfied/Neutral/Completely Satisfied
Close-Ended Question Types
Likert scale, semantic differential, rankings
Close-Ended Questions Weaknesses
Must ask exactly what you want to know, must provide all answers appropriate to the question, including "other" can provide too much/more info than you can use, no explanation for responses.
Close-Ended Questions Strengths
Easy for participants to complete, easily coded/entered for data analysis.
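The "easily coded" strength above can be sketched in code. This is a minimal illustration, assuming a hypothetical 5-point satisfaction scale; the labels and numeric codes are made up for the example, not taken from the course.

```python
# Sketch: coding close-ended Likert responses as numbers for analysis.
# The label-to-code mapping below is a hypothetical example scale.
LIKERT_CODES = {
    "Completely Unsatisfied": 1,
    "Unsatisfied": 2,
    "Neutral": 3,
    "Satisfied": 4,
    "Completely Satisfied": 5,
}

def code_responses(responses):
    """Map each text response to its numeric code for data entry."""
    return [LIKERT_CODES[r] for r in responses]

coded = code_responses(["Neutral", "Completely Satisfied", "Unsatisfied"])
print(coded)  # [3, 5, 2]
```

Because every allowed answer is known in advance, each response maps directly to a number, which is exactly what makes close-ended items easy to enter and analyze.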
Open-Ended Question Weaknesses
Difficult to analyze (lots of data), dependent on education or vocabulary level, takes longer, threatening if broad, or if participant is unsure.
Tool for discovering, understanding lived experience. Many conclusions can be drawn. Descriptions rather than quantity. Evaluator is the instrument.
Open-Ended Question Strengths
Unfamiliar with population, not sure of possible responses, interest in a variety of responses, less evaluator influence.
No pre-formed answers, respondent supplies answers (short answer, short description, explanation)
The measurement tool (the survey, checklist)
To gauge, quantify, or assess something.
Individual questions/statements on the instrument.
Numeric values to which some type of meaning is attributed.
What can we measure?
-Opinions/ Values/ Attitudes
-Feelings/ Emotional Responses
what someone has done or will do
ex. how many times this quarter have you...
Opinions/ Value/ Attitudes
ex. what did you enjoy about...
what is important to you when you attend...
Feelings/ Emotional Responses
ex. how much did this trip make you feel happy?
how does volunteering make you feel?
ex. what is the height of a basketball hoop?
ex. What is your age?
Where is your hometown?
When Pilot Testing Survey
-Check for misunderstandings/ problems
- make changes
-small, simple sample
-distribute and administer exactly the same
-ask about questions
When Distributing Survey
-Explain project clearly and briefly
---why should they care?
Used in qualitative research, focus on information--not numbers, continue until saturation.
Participants recommend friends, who recommend friends, who recommend friends, who.... biased, not likely to be representative.
Judgement, sample drawn from experts on topic.
Divide population into subgroups, draw sample to fill quota. Not representative of population.
Easy access to sample of your population, affordable, weak generalizability.
You believe sample is representative--people, activities, location, times--all "represent" your research purpose, and justify why.
Sample not drawn by chance, biases could occur, caution about making generalizations.
Areas from which to collect data--geographic areas, city blocks, schools, rec centers--Randomly choose people within each cluster.
Stratified Random Sample
Organize population into subsets and then select sample from each subset.
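The stratified procedure above can be sketched in a few lines. This is a minimal illustration, assuming a hypothetical population split into two made-up strata ("youth" and "adult"); the labels and sizes are not from the course.

```python
import random

# Sketch of stratified random sampling: organize the population into
# subsets (strata), then draw randomly from each subset.
# The "youth"/"adult" labels and sizes are illustrative assumptions.
random.seed(1)  # reproducible draw for the sketch

population = [("youth", f"Y{i}") for i in range(50)] + \
             [("adult", f"A{i}") for i in range(50)]

def stratified_sample(pop, per_stratum):
    """Group members by stratum label, then sample within each subgroup."""
    strata = {}
    for label, person in pop:
        strata.setdefault(label, []).append(person)
    sample = []
    for members in strata.values():
        sample.extend(random.sample(members, per_stratum))
    return sample

picked = stratified_sample(population, per_stratum=3)
print(len(picked))  # 6: three drawn from each of the two subgroups
```

Drawing within each subgroup guarantees every stratum appears in the sample, which a plain random draw cannot promise.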
Every nth person until the sample size is reached--higher chance of unintended bias.
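Systematic "every nth person" selection is a one-line slice. A minimal sketch, assuming a hypothetical 20-person roster and an interval of n=4 (both made-up illustration values):

```python
# Sketch of systematic sampling: take every nth member of a roster.
# The roster names and interval n=4 are illustrative assumptions.
roster = [f"person_{i}" for i in range(20)]

def every_nth(pop, n, start=0):
    """Select every nth member, beginning at `start`, until the list ends."""
    return pop[start::n]

sample = every_nth(roster, n=4)
print(sample)  # ['person_0', 'person_4', 'person_8', 'person_12', 'person_16']
```

If the roster has a repeating pattern that lines up with the interval (say, every 4th slot is a team captain), the fixed step picks it up every time--which is the bias risk the card warns about.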
Simple Random Sampling
Useful if you know the entire population. Randomly draw names from population, or number generator. Least amount of bias.
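When the whole population is known, the random draw described above is exactly what a standard-library sampler does. A minimal sketch with made-up sizes (population of 100, sample of 10):

```python
import random

# Sketch of simple random sampling from a fully known population.
# Population size 100 and sample size 10 are illustrative assumptions.
population = list(range(100))

# random.sample draws without replacement; every member has an
# equal chance of selection, so this is the least-biased draw.
sample = random.sample(population, 10)
print(len(sample))  # 10
```

Because the draw is without replacement, the 10 selected members are all distinct.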
Non-Probability Sampling Types
Purposive, Convenience, Quota, Expert, Snowball.
Everyone has same chance of being chosen, sample is more likely to represent population, removes bias, can draw inferences--generalize to larger population.
Probability Sampling Types
Simple random, stratified, systematic, cluster.
Unintended bias or errors such as: misunderstood questions, interviewer error, lack of knowledge, time the survey is given, location.
Difference between traits of sample and traits of population--larger difference, larger error.
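The "difference between traits" idea can be shown with a tiny worked example. All numbers below are made up for illustration: a population of ages and an unrepresentative sample of only the youngest members.

```python
# Sketch: sampling error as the gap between a sample statistic and the
# population value it estimates. All numbers are illustrative assumptions.
population = [20, 22, 25, 30, 35, 40, 45, 50]
sample = [20, 22, 25]  # an unrepresentative draw of only the youngest

pop_mean = sum(population) / len(population)   # 33.375
samp_mean = sum(sample) / len(sample)          # ~22.33
error = abs(samp_mean - pop_mean)
print(round(error, 2))  # 11.04
```

The bigger the mismatch between who is in the sample and who is in the population, the larger this gap grows--the relationship the card states.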
Everyone has same chance of being selected.
Not everyone has equal chance of being selected.
Continuous sampling until reaching data saturation (qualitative).
Portion of people that reflect that population.
All people who might make up a group.
Using more than one method to gather data.
Consider: time, conducive setting, manageable data collection & analysis, clear instructions & questions.
The degree to which an assessment tool produces stable and consistent results.
Refers to how well a test measures what it is purported to measure.
Intuition, gut reaction
formal, procedures, criteria, data
Why is the eval being done? Improve/increase something?
Ask about behavior, attitude, knowledge, reasons
field observations, archival records
we can learn about emotions, process, social interaction, environment
How to create behavior observation checklist
list all possible behaviors
Types of observations
duration: how long
frequency: how often
interval: every"X" minutes
latency: how long between cue and response
purpose of and how to pilot a survey
•Check for problems, misunderstandings
•Small, similar sample
•Distribute and administer exactly same
•Ask about questions (easy to understand?)
Field test: will the collection method actually work?
strengths and weaknesses of closed ended survey questions
•Easy for participants to complete
•Easily coded/entered for data analysis
•Must ask exactly what you want to know
•Must provide all answers appropriate to question
•Including "Other"...can provide too much/more info than you can use
•No explanation for responses
tips for writing quality survey questions
1.Avoid double-barreled questions; one idea per question.
2.Clear, brief, and simple
3.Avoid leading questions
4.Do not ask for estimates
5.Use respondents' language
6.Write clear instructions
7.Avoid negative questions "Do you plan to not attend a baseball game?"
8.Do not use vague words like "often", "frequently", or "seldom"
9.Use precise alternatives (bad: should people recycle or not? Better: Should people recycle, compost, throw away or other?)
10.Pilot test your questionnaire
just the facts, simplify data
directly from findings
summarize from evaluation
justify conclusions based on findings
proposed actions linked from conclusions related to the purpose
involve your unbiased judgment