Assessment Test #1
Terms in this set (26)
Goal of ESEA
Evaluate the progress of underserved students
Goal of NCLB
Promote high standards, reduce achievement gap, and increase accountability
Goal of ESSA
Same as NCLB, but with flexibility in implementation
Goal of Standards for Ed. and Psy. Testing
Provide guidelines for test development, evaluation, and uses
Difference between testing and assessment
Testing:
-Final exams, midterms, quizzes
-"How much did students learn?"
-Paper-pencil "traditional testing"
-PRODUCT that measures a set of objectives
Assessment:
-Broad, non-restrictive label for the kinds of measuring a teacher must do
-Determine students' status; find out what students know
-PROCEDURE that measures a set of objectives
Why do teachers assess? (7 reasons)
Determine students' current status
Monitor students' progress
Determine one's own instructional effectiveness
Influence public perceptions of educational effectiveness
Provide basis for teacher evaluation
Identify and clarify instructional intentions
Difference between large grain and small grain objectives? Can you identify each?
-Focused, specific objectives are small scope or "small-grain-size" (can be done very quickly)
-Broad, general objectives are broad scope or "large-grain-size" (takes longer)
-To tell the difference...look at the time needed to achieve the objective.
Define each assessment domain. Give an example of each.
"Will the assessments be built to focus on the cognitive domain, the affective domain, or the psychomotor domain?"
cognitive- deal with intellectual operations (e.g., solving a math problem)
affective- deal with attitudes, interests, and values (e.g., a student's attitude toward reading)
psychomotor- deal with large-muscle or small-muscle physical skills (e.g., throwing a ball, handwriting)
Difference between norm-referenced and criterion-referenced tests? Know how to explain the score. Know key words to distinguish between the two.
Norm-referenced compares a student to a group of students. (comparison)(percentile)
Criterion-referenced- do they know the material? Did they meet the standards?(percentages)
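The scoring difference between the two can be illustrated with a short sketch (the scores and the norm group below are hypothetical):

```python
def percentile_rank(score, norm_group):
    """Norm-referenced: percent of the comparison group scoring BELOW this student."""
    below = sum(1 for s in norm_group if s < score)
    return 100 * below / len(norm_group)

def percent_correct(points_earned, points_possible):
    """Criterion-referenced: what percentage of the material did the student master?"""
    return 100 * points_earned / points_possible

# Hypothetical norm group of ten students' scores
norm_group = [55, 60, 62, 70, 71, 75, 78, 80, 85, 90]
print(percentile_rank(75, norm_group))  # 50.0 -> scored better than half the group (comparison)
print(percent_correct(75, 100))         # 75.0 -> mastered 75% of the objectives (percentage)
```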
What is the difference between selected response and constructed response?
Selected:
Student selects a response from a set of responses.
Fill in the blank with a word bank (also multiple choice, true/false, matching)
Constructed: (gives you a better idea of what students know)
Student constructs (or creates) a response.
Short answer (single word, a couple of words, or a sentence)
Display (cooking, creating, etc)
Put these in order: write objectives, plan assessment, design lesson, look at standards
look at standards, write objectives, plan assessment, & design lesson
Reliability: the extent to which assessments are CONSISTENT.
Understand the reliability coefficient. What do the scores mean?
-A score that indicates the relationship between students' scores on two different tests (test as whole)
-Ranges from -1.0 to +1.0
-Above .80 = very good reliability
-Below .50 = not considered a reliable test
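As a sketch, the coefficient is the Pearson correlation between two sets of test scores; the five students' scores below are hypothetical:

```python
import statistics

def reliability_coefficient(scores_1, scores_2):
    """Pearson correlation between students' scores on two tests (-1.0 to +1.0)."""
    mean1, mean2 = statistics.mean(scores_1), statistics.mean(scores_2)
    cov = sum((a - mean1) * (b - mean2) for a, b in zip(scores_1, scores_2))
    ss1 = sum((a - mean1) ** 2 for a in scores_1) ** 0.5
    ss2 = sum((b - mean2) ** 2 for b in scores_2) ** 0.5
    return cov / (ss1 * ss2)

# Hypothetical scores for five students on two administrations of the same test
time1 = [78, 85, 92, 64, 88]
time2 = [80, 83, 95, 60, 90]
print(round(reliability_coefficient(time1, time2), 2))  # 0.99 -> above .80, very good reliability
```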
Identify the three types of reliability measures. Give an example of each.
Test-Retest (used in the classroom)
-Give the same assessment twice, separated by an amount of time (days, weeks, months). Reliability would be the correlation between the scores at Time 1 and Time 2.
Alternate Form (used in the classroom)
-Create two forms of the same test (slightly varied items). Reliability would be the correlation between the scores of Test 1 and Test 2.
-does not mean mix questions around on the same test
Internal Consistency (will be from standardized tests and given to us)
-Compare one half of the test to the other half. Use Kuder-Richardson 20 (KR-20) or Cronbach's coefficient alpha (α).
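For items scored right/wrong (0/1), KR-20 is the special case of Cronbach's alpha, so one function covers both; a minimal sketch with hypothetical quiz data:

```python
import statistics

def cronbach_alpha(item_scores):
    """item_scores: one inner list per item, holding every student's score on
    that item. For 0/1 items this equals KR-20."""
    k = len(item_scores)
    item_vars = sum(statistics.pvariance(item) for item in item_scores)
    totals = [sum(student) for student in zip(*item_scores)]  # total score per student
    total_var = statistics.pvariance(totals)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical 4-item quiz scored 0/1 for five students (rows = items, columns = students)
items = [
    [1, 1, 0, 1, 0],
    [1, 1, 0, 1, 1],
    [1, 0, 0, 1, 0],
    [1, 1, 1, 1, 0],
]
print(round(cronbach_alpha(items), 2))  # 0.75
```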
Explain SEM. Know how to explain a score.
-Standard Error of Measurement (SEM) - Are an individual's scores consistent?
-The lower the SEM, the more consistent the scores. We want the SEM to be low.
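A common formula, given a test's standard deviation and reliability, is SEM = SD × √(1 − reliability); a sketch with hypothetical numbers shows why higher reliability means a lower SEM:

```python
def standard_error_of_measurement(sd, reliability):
    """SEM = SD * sqrt(1 - reliability): lower SEM means more consistent scores."""
    return sd * (1 - reliability) ** 0.5

# Hypothetical test with a standard deviation of 10 points
print(round(standard_error_of_measurement(10, 0.91), 2))  # 3.0 (high reliability -> low SEM)
print(round(standard_error_of_measurement(10, 0.75), 2))  # 5.0 (lower reliability -> higher SEM)
```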
How can you make sure your assessments are reliable? That you grade all students' work equally?
-Use test-retest or alternate forms to check consistency; use a rubric to grade all students' work equally.
Validity: the accuracy of an assessment.
Think about HOW the assessment is being used.
Validity focuses on the INTERPRETATION of the results.
There is no such thing as a "valid test" or an "invalid test."
Identify and describe the three validity cautions.
Teacher observations- Validity is based on the interpretation of the results. Sometimes, teacher judgments and interpretations can be "off the mark."
Content Underrepresentation- If a test fails to capture all aspects that should be measured, the content is underrepresented.
Example: Test of overall reading comprehension that omits certain types of reading passages (fiction, nonfiction, poetry, etc.).
Construct-Irrelevant Variance- Sometimes, a student's score can be influenced by outside factors.
Example: A math test that requires strong reading skills may not show accurate math understanding if the student is a struggling reader.
Explain the four sources for validity. (TRIV)
Test content- Does my assessment adequately represent the content of the standard being measured?
Response Processes- Will a student interpret the assessment the way I intended?
Internal structure- Does the internal organization of my assessment confirm an accurate measure of the content?
Relations to other variables- Do scores on my assessment relate to other relevant measures and outcomes the way I would expect?
Define the three types of validity. (CCC)
Content-related- Does my assessment measure the content standards - knowledge, skills, and/or attitudes?
Criterion-related- The degree to which a score predicts performance on a criterion variable.
Example: How well does the ACT predict students' performance in college?
Construct-related- A "construct" is an underlying trait that is responsible for some observable behavior.
Do my test items measure the number of constructs I intend to measure?
Know the relationship between reliability and validity.
A test can be reliable (consistent) without being valid (accurate).
A test cannot be valid (accurate) unless it is reliable (consistent).
Absence of bias
What are the two forms of bias in classroom assessment?
What is most appropriate way for a teacher to reduce/eliminate bias?
What is the purpose of an IEP?
Identifies yearly standards and sets goals
Specifies services needed by student
Identifies assessment modifications
What is the difference between accommodation and modification?
-Accommodations must NOT alter the nature of the skills and knowledge being tested.
-Accommodation- assists the student without changing what is expected or what they need to know; gives them an even playing field.
-Modification- changes the content of the test itself (e.g., marking out two of four multiple-choice options); will be specified in the IEP or 504 plan.