Exam 1: Chapters 1, 2, 6, 7, & 11
Terms in this set (80)
ESEA (Elementary and Secondary Education Act)
The most significant federal statute influencing U.S. educational testing
NCLB (No Child Left Behind Act)
Signed into law in 2002 by President George W. Bush, this act mandated that students be assessed in twice as many grades as previously required.
IASA (Improving America's Schools Act)
Under this act, states were relatively free to carry out whatever sorts of educational assessment they thought appropriate; the chief exception was the assessment of students being educated, at least in part, with federal dollars.
AYP (Adequate Yearly Progress)
The NCLB requirement for which, in 2011, President Obama made allowances so that schools could apply for a waiver if they were not going to meet it.
Any attempt to determine students' status with respect to educational variables of interest
Typically thought of as a paper and pencil measurement
Anything a teacher measures in the classroom
Knowledge or skills students have acquired
Students' feelings, values, opinions, likes, dislikes, etc.
The notion that students possess an inborn potential about which schools can do little to influence
The notion that students' academic potential is malleable.
Why is it important to educate parents concerning assessment?
So that parents are familiar enough to understand their children's test scores. As educators, we need parents on our side.
Classroom assessments should be unequivocally focused on the action options that teachers have at their disposal.
The ends sought from instruction.
The means that teachers employ in an attempt to promote achievement.
Behavioral objectives
Objectives written to indicate the behavior to be measured.
Cognitive objectives
Objectives that deal with a student's intellectual operations.
Affective objectives
Objectives that deal with students' attitudes, interests, and values.
Psychomotor objectives
Objectives that deal with large-muscle and small-muscle skills.
A list of the objectives to be covered in an assessment, distinguishing between lower-level and higher-level cognitive skills.
During this era in the 1970s and 1980s, legislators in most states enacted a string of laws requiring students to pass basic skills tests in order to receive their high school diplomas or, in some cases, to be advanced to higher grade levels.
Norm-referenced interpretation
Interprets a student's performance in relation to the performance of students who have previously taken the same assessment; e.g., scoring at the 90th percentile means the student scored better than 90% of the others who took the test.
Criterion-referenced interpretation
An absolute interpretation of assessment because it hinges on the extent to which the student has actually mastered the curricular aim the test represents; e.g., 90% mastery means the student mastered 90% of the test content.
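The two interpretations above boil down to different arithmetic. A minimal sketch (the scores, norm group, and function names are invented for illustration, not from any testing package):

```python
def percentile_rank(score, norm_group):
    """Norm-referenced: percent of the norm group scoring below this student."""
    below = sum(1 for s in norm_group if s < score)
    return 100 * below / len(norm_group)

def mastery_percent(items_correct, items_total):
    """Criterion-referenced: percent of the tested curricular aim mastered."""
    return 100 * items_correct / items_total

# Hypothetical norm group of 20 prior test-takers.
norm_group = [55, 60, 62, 64, 65, 68, 70, 71, 72, 73,
              74, 75, 76, 78, 80, 82, 84, 85, 92, 95]

print(percentile_rank(90, norm_group))  # relative: outscored 18 of 20 -> 90.0
print(mastery_percent(45, 50))          # absolute: 45 of 50 items -> 90.0
```

Note that the same number, 90, means different things: the percentile depends entirely on how others performed, while the mastery percent depends only on the student's own performance against the test.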
Selected-response item
A student is required to recognize the correct answer on a test item.
Constructed-response item
A student is required to produce a product or display a specific behavior.
Importance of the degree of measurability and the grain-size of curricular aims
More specific curricular aims are considered "small-grain-size" instructional objectives, whereas more general curricular aims are regarded as "large-grain-size" instructional objectives. Teachers should strive to come up with a half dozen or so truly salient, broad, yet measurable, curricular aims for their classrooms. Teachers need to create assessments to help them determine whether their students have, as hoped, achieved their curricular aims.
Contributions of Benjamin Bloom
He proposed a classification system for educational objectives organized around three domains: cognitive, affective, and psychomotor.
Contributions of Robert Glaser
He first introduced the notion of criterion-referenced measurement.
Two types of educational standards
Content standards and performance standards
Content standards
Describe the knowledge or skills that educators want students to learn; from a classroom teacher's perspective, the most useful kind of standard.
Performance standards
Identify the desired level of proficiency at which educators want a standard mastered.
NAEP (National Assessment of Educational Progress)
A congressionally mandated project of the U.S. Department of Education to assess students' knowledge and skills in geography, reading, writing, mathematics, science, history, the arts, civics, and other academic areas.
NAGB (National Assessment Governing Board)
Created by Congress in 1988 to formulate policies; its responsibilities include the development of assessment objectives.
Reliability
Represents the consistency with which an assessment procedure measures whatever it's measuring.
Validity
Reflects the defensibility of the score-based inferences made on the basis of an educational assessment procedure.
Absence of bias
Signifies the degree to which assessment procedures are free of elements that would offend or penalize students on the basis of gender, ethnicity, and so on.
The 8 general item-writing guidelines/commandments
1.) No opaque directions
2.) No ambiguous statements
3.) No unintended clues
4.) No complex syntax
5.) No unnecessary advanced vocabulary
6.) Adequate space is provided for students to respond
7.) Items are not split on two different pages
8.) Group items of same format together
Selected-response item types
True or False (binary-choice), multiple binary-choice, matching, and multiple-choice items.
The guidelines for Binary-Choice items
1.) Phrase item to elicit thoughtfulness
2.) No negative statements
3.) Use only one concept per item; the item needs to be entirely true or entirely false.
4.) Balance response categories; No systematic patterns.
5.) Maintain item-length similarity
6.) Avoid absolute terms and other specific determiners. Words like "always" and "never" usually mean the item is false; words like "sometimes" and "usually" are a clue that the item is true.
A category of selected-response items in which two options are provided for students to choose between; True/False items are the most popular.
The guidelines for Multiple Binary-Choice items
1.) Separate item clusters vividly from one another. Make it clear a new cluster is commencing. Can use asterisks, dots, lines, boxes, and so forth.
2.) Make certain each item is linked to the cluster's stem in a meaningful way.
A category of selected-response items in which a cluster of items is presented, requiring a binary response to each item in the cluster.
"Stem" of a cluster
The informational portion of the cluster; it is the part the student must read to gain the information needed to answer the questions and select an answer.
"Items" of a cluster
The questions the student must answer on the left side, and the binary-choice answers on the right side (true or false).
Multiple-Choice item guidelines
1.) Stuff the stem: place as much of the item's content as possible in the stem.
2.) Avoid negatively stated stems/alternatives
3.) Attending to alternative length. The correct answer should not stick out like a sore thumb.
4.) Make sure each answer position is used for the correct answer approximately the same number of times.
5.) Never use "all of the above." Only use "none of the above" when it will help you make the correct inference.
6.) The stem should be able to stand alone.
7.) Responses are homogeneous in content.
8.) Absolute terms are avoided in alternatives.
9.) A key term appearing in both the item and correct answer is avoided
10.) Use questions whenever possible instead of incomplete statements.
11.) If the item is in a fill-in-the-blank format, the blank is at the end.
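Guideline 4 above (balancing the positions of correct answers) can be checked mechanically. A small sketch with an invented answer key:

```python
from collections import Counter

# Hypothetical answer key for a 12-item multiple-choice quiz.
answer_key = ["B", "A", "D", "C", "B", "A", "C", "D", "A", "B", "C", "D"]

# Count how often each position holds the correct answer;
# a heavily skewed count is an unintended clue for test-wise students.
position_counts = Counter(answer_key)
print(position_counts)  # here: A, B, C, D each used 3 times
```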
Another form of selected-response items. They are made up of a "stem" and "alternatives." In this case, the "stem" is the question being asked, and the "alternatives" are various answer choices for that "stem." Incorrect "alternatives" are called "distractors."
The most popular (most used) of all item types; can measure both lower-level and higher-level cognitive abilities; can contain options with relative correctness; can measure a huge variety of student skills and knowledge; more reliable than T/F or matching items.
Alternatives
The answer choices for a Multiple-Choice item.
Distractors
The incorrect alternatives devised to distract the student.
Disadvantage/Disadvantages of Multiple-Choice items
1.) They only require students to recognize the correct answer, so guessing plays a factor.
2.) Of all the selected-response items, these take students the longest to complete. Thus, less content can be covered.
Matching items
Another form of selected-response item containing directions and two lists; the list on the left is termed the "premises," and the list on the right the "responses."
Disadvantage/Disadvantages of Matching items
As with many selected-response items they tend to measure memorization of low-level facts.
Matching item guidelines
1.) Employ homogeneous lists.
2.) Use relatively brief lists, placing the shorter list on the right and the lengthier on the left.
3.) Employ more responses than premises so the last answer isn't given away.
4.) Order the responses logically (alphabetically or chronologically) so students can quickly find the answer and move on to the next item.
5.) Describe the basis for the matching and the number of times responses may be used. For example, "Match the city on the left with the state capital on the right. You will use a state capital only once."
6.) Place all of the premises and responses for an item on the same page.
Short-Answer items
A constructed-response item in which a student must supply a word, phrase, or a single sentence.
They usually assess knowledge; their major advantage is that students must produce the correct answer rather than simply recognize it. They are tougher to score: the longer the response, the harder it is to score reliably.
Short-Answer item guidelines
1.) Use direct questions rather than incomplete statements, especially for younger students.
2.) Ensure it truly is a SHORT-answer item.
3.) If using fill-in-the-blank, position the blank at the end of the statement.
4.) If using fill-in-the-blank, use only one or two blanks per item; items with too many blanks are referred to as "Swiss-cheese items."
5.) To avoid unintended clues, make all blanks the same length.
Essay items
The most common constructed-response item, ranging from asking students to write a paragraph or two to asking them to write many pages on a specific topic. The best choice when educators want to measure many complex learning outcomes.
Disadvantage/Disadvantages of Essay items
1.) They are difficult to write properly.
2.) Difficult to score
3.) Difficult to determine reliability of the scoring. They are notoriously unreliable. The more complex the students' responses, the more attention the teacher will need to give to scoring.
Essay Item Guidelines
1.) Indicate to the student both the lower and upper limits of the expected response length. For example, state something like, "In three to five sentences..." Including the upper limit keeps students from writing their entire life story when you only wanted 3-5 sentences. Do not employ the "in the space below" strategy.
2.) The most important part is the description of the task. This is termed the prompt. The student must truly understand what is being sought to provide an appropriate response.
3.) Indicate how many points the item is worth and let the students know how much time they should spend on the item.
4.) Do not allow students to pick and choose which items to answer from a list. Responses are difficult to score reliably, and comparison is impossible when students are not answering the same items.
5.) Consider your item from the perspective of your students. Answer the item as you would if you were a student in your class.
Restricted-response essay items
Items that limit the form and content of the students' responses. For example, "Cells with a nucleus contain three basic parts. In 3-6 sentences, name the three basic parts and provide a definition of each."
Extended-response essay items
Items that provide students with far more latitude in responding. For example, "In 5-10 paragraphs, describe what you believe to be the most important issue facing education today. Provide a rationale for your choice and propose potential solutions."
Steps for effectively scoring students' responses to constructed-response items
1.) Devise the key in advance
2.) Decide early how much mechanics will count, and let the students know early on as well. If you indicate that mechanics won't be factored into students' grades, then you must stick to your guns and not let mechanics influence the way you score.
3.) Score one item at a time.
What is the central purpose for classroom assessment?
To draw appropriate inferences about students' status so you can make more appropriate educational decisions.
What are the "yesteryear" reasons why teachers should know about assessment?
Diagnosing students' strengths and weaknesses, monitoring students' progress, assigning grades, and determining one's own instructional effectiveness.
Diagnosing students' strengths and weaknesses
To make sure students comprehend the content being taught, while also instructionally addressing any difficulties they may have with the subject matter.
Teachers also need to know their students' prior accomplishments so as not to waste instructional time reiterating content students have already mastered.
Monitoring Student Progress
It helps educators determine whether their students are making satisfactory progress. Although educators can informally assess the progress of their students, they need to systematically monitor students' progress through assessment in order to eliminate the assumption that progress is being made. Through systematic assessment, educators know factually whether their students are making progress or not.
Assigning grades
The most prevalent reason teachers give when asked why they assess. It should not be the exclusive reason for assessment. A grade is a form of evidentiary support for a student's progress and must be based on collected evidence of the student's accomplishments in order to be fair and inclusive. It can also be used as a motivator for progress.
Determining one's own instructional effectiveness
Pre-testing and post-testing the students is a clear way to determine whether your instruction has been effective or ineffective in the classroom.
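The pre-test/post-test comparison amounts to simple arithmetic on paired scores. A minimal sketch (the class scores and function name are hypothetical):

```python
def average_gain(pre_scores, post_scores):
    """Mean per-student gain from pre-test to post-test, paired by student."""
    gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
    return sum(gains) / len(gains)

# Hypothetical scores for five students, same order in both lists.
pre = [40, 55, 60, 35, 50]
post = [70, 75, 85, 60, 80]

print(average_gain(pre, post))  # a clearly positive mean gain -> 26.0
```

A positive average gain suggests the instruction moved students forward, though in practice teachers would also look at individual students' gains, not just the class mean.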
What are the three "new" reasons for why educators should know about assessment?
Influencing public perceptions of educational effectiveness, helping to evaluate teachers, and clarifying teachers' instructional methods.
Helping to influence public perceptions of educational effectiveness
Since the 1970s and 1980s, newspapers and media publications have shared test scores with the public. Once scores are publicly shared and ranked, educators are inevitably evaluated on their performance. This greatly influences the future of educators.
Help to evaluate teachers
Legislators want to "hold teachers accountable" not only by requiring compelling pre- and post-test data showing that progress is made and measured, but also by requiring teachers to produce "satisfactory" improvements in students' achievement, reflected directly in their students' test scores.
Clarifying teachers' instructional methods
A way to ensure that tests are no longer regarded as an afterthought. Assessments should be intentionally and deliberately created before instruction begins. The high-stakes testing movement has motivated educators, even in low-stakes settings, to create assessments prior to any instructional planning. This ensures the educator fully understands what to incorporate into classroom instructional activities before assessing, making for more effective teaching.
What are the dividends or benefits for teachers who clearly state curricular aims and who create assessments before teaching?
More accurate learning progressions, more on-target practice activities, and more lucid expositions.
How will the educator receive more accurate learning progressions by creating their assessments first?
They will know how to identify and teach their students the subskills they need to acquire before they can master what's being taught.
How will the educator create more on-target practice activities by creating their assessments first?
By being more precise in aligning the in-class activities with the outcomes desired.
How will the educator receive more lucid expositions because they created their assessment first?
The educator knows the outcome of their instruction by clearly understanding what will be assessed at the end. This foresight and planning provide clearer explanations of the content and a clearer sense of direction for the students, making both the educator and the students more effective.
List the ways to make multiple-choice distractors more plausible.
1.) Use students' most common errors
2.) Use important-sounding words that are relevant to the item stem, but don't overdo it!
3.) Use words that have verbal associations with the item stem
4.) Use textbook language or other phraseology that has the appearance of truth
5.) Use incorrect answers that are likely to result from misunderstanding or carelessness
6.) Use distractors that are homogeneous and similar in context to the correct answer
7.) Use distractors that are parallel in form and grammatically consistent with item's stem
8.) Make the distractors similar to the correct answer in length, vocabulary, sentence structure, and complexity of thought
Holistic Rubric or Holistic Scoring
Focuses on scoring content as a whole. The scoring is for an "overall grade."
Analytic Rubric or Analytic Scoring
Focuses on scoring content in a fine-grained, specific point-allocation approach. The scoring contains specific criteria and a corresponding numerical range to signify whether the student has satisfied the criteria or not.
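The contrast between the two rubric types can be sketched in a few lines (the criteria, point values, and scale are invented for illustration):

```python
# Analytic scoring: points are allocated per criterion, then summed.
rubric_max = {"thesis": 5, "evidence": 5, "organization": 5, "mechanics": 5}
student = {"thesis": 4, "evidence": 3, "organization": 5, "mechanics": 4}

analytic_total = sum(student.values())  # fine-grained: 16 of 20 possible points

# Holistic scoring: no per-criterion breakdown; the scorer renders
# one overall judgment of the whole response, e.g., 3 on a 1-4 scale.
holistic_rating = 3

print(analytic_total, holistic_rating)
```

The analytic total tells the student where points were lost; the holistic rating is faster to produce but carries no diagnostic detail.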