Instructional Assessment System

Anonymous
Asked: Feb 3rd, 2019

Question Description

Please create a 300–400 word response to this chapter. I do not expect summaries of the readings, but rather reflection.

"Be sure to write it in your own words."

See attached.


Chapter 6
Develop Assessment and Accountability Systems to Monitor Student Progress
Rose M. Ylimaki

Key Topics
• Accountability Levels and Issues
• A Comprehensive Assessment System
• Summative and Formative Assessments
• Assessment-Curriculum Connections

According to ISLLC Standard 2, effective instructional leaders develop assessment and accountability systems to monitor student progress. Today's instructional leaders face accountability pressures at all levels. Principals/instructional leaders must be able to implement and monitor various summative and formative assessments, align assessments with curriculum and instruction, and lead difficult conversations regarding achievement gaps. In other words, principals/instructional leaders must have the assessment literacy (knowledge and skills) to enhance learning opportunities and close achievement gaps. This chapter presents leadership for effective accountability and assessment systems (formative and summative) that improve teaching and learning for all students.

Extended Reflection 6.1
Define accountability. Describe your current assessment system, including summative and formative assessments. Be prepared to share your thoughts with your colleagues. Refer to your answers as you read the chapter.

Accountability Levels: Direct and Indirect Influences

As a landmark of education reform in the United States, the No Child Left Behind (NCLB) Act of 2001 uses accountability to leverage policy implementation. Some studies have indicated direct positive effects of NCLB on leveraging much-needed improvements for students in persistently underperforming schools (e.g. Skrla et al., 2004). Findings from other studies disagree. For instance, analyses of state test score trends revealed the marginal effects of NCLB accountability policy on student achievement gaps and graduation rates between racial and socioeconomic groups of students (Lee, 2006; Orfield, Losen, & Balfanz, 2006).
Still other studies have raised concerns about the effects of NCLB policy on the narrowing of teaching practices and curriculum (e.g. Daly, 2009; Ylimaki, 2011), and on the narrowing of local leaders' decision-making practices to a focus on standardized testing data (e.g. Duke et al., 2003; Luizzi, 2006). Thus, studies of NCLB accountability indicate direct influences on student achievement overall, marginal direct influences on closing racial/ethnic achievement gaps, and some unintended (indirect) influences on narrowing the curriculum and teacher decision-making strategies. Direct and indirect influences of accountability are further described in the next two sections.

Direct Influences

Federal and State

The federal NCLB Act of 2001 and related state testing mandates use accountability as a direct lever of policy implementation. In particular, the No Child Left Behind Act articulates consequences for failure to make adequate yearly progress on state tests toward a goal of 100 percent proficiency by the year 2014. As this chapter is written, thirty-two states have filed for waivers regarding the 2014 goal. Regardless of the 100 percent proficiency waivers, if schools do not make adequate yearly progress on state-administered standardized tests over a series of years, their leaders and teachers face severe consequences, including conversion to charter school status, staff restructuring, and reconstitution. In a similar vein, Race to the Top rewards schools for attaining "labels" of high performance. School performance indicators may also affect principals in other ways, including, in particular, administrator and teacher evaluations. For instance, Arizona mandated that districts base 33–50 percent of principal evaluations on student academic growth or student outcomes on state tests (Arizona Revised Statutes § 15-203(A)(38)). The Common Core curriculum also uses accountability as a lever for increased rigor and postsecondary preparation.
As this chapter is written, forty-five states have adopted the Common Core Standards, and each state must develop an accountability system to ensure that each child has access to a "high quality education" and post-secondary options. According to the Common Core Standards initiative, schools must accomplish these goals by: (1) driving school and district performance towards college and career readiness; (2) distinguishing among students' performances in order to provide supports and interventions to students most in need; (3) providing timely and transparent data to promote action at all levels; and (4) fostering continuous improvement throughout the system. PARCC assessments will also directly influence district and school use of resources within each of these recommended processes. If PARCC assessments require students to be proficient in particular strategies and in using technology to demonstrate their proficiency, how are districts likely to respond? Districts are likely to expend resources on technology and professional development aimed at these tested strategies.

Local District

Closely related, instructional leaders are often directly influenced by, and held accountable for, many local district accountability policies or implementation mandates. For example, many districts mandate the use of locally developed benchmark assessments or commercial tests, such as Galileo, that have predictive value for state test performance. That is, quarterly benchmarks are aligned with state test items, and student benchmark performance predicts how well students will perform on the yearly state assessments. The logic is as follows: if students struggle on particular benchmark items, they are likely to struggle on the state assessments. Benchmark assessment results, then, provide teachers with critical information to guide instruction for the remainder of the school year.
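The predictive logic of benchmark assessments can be illustrated with a short script. This is a minimal, hypothetical sketch: the student labels, scores, and the 70-percent flagging threshold are invented for the example and are not drawn from Galileo or any district's actual system.

```python
# Hypothetical sketch: flag students whose quarterly benchmark scores
# suggest they are likely to struggle on the yearly state assessment.
# The threshold and the data are invented for illustration only.

def flag_for_intervention(benchmark_scores, threshold=70):
    """Return (student, score) pairs below `threshold` percent,
    sorted so the lowest-scoring students appear first."""
    at_risk = [(name, score) for name, score in benchmark_scores.items()
               if score < threshold]
    return sorted(at_risk, key=lambda pair: pair[1])

# Quarterly benchmark results for one (hypothetical) class
quarter1 = {"Student A": 82, "Student B": 64, "Student C": 71, "Student D": 58}

for name, score in flag_for_intervention(quarter1):
    print(f"{name}: {score}% -- plan re-teaching before the state assessment")
```

A teacher or PLC could run something like this after each quarterly benchmark to decide where to focus instruction for the remainder of the year.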
Many districts hold principals and teachers accountable for benchmark assessment results, with rewards that include performance pay tied to teacher evaluation systems and even merit pay. As a result, some studies have noted a narrowing of curriculum and teacher decision-making focus to the benchmark and state assessment results (e.g. Johnson & Johnson, 2005; Ylimaki, 2011). Today's instructional leaders must be careful to avoid narrowing the curriculum to standardized test items.

Indirect Influences

School and district leaders are also influenced and held accountable in indirect ways. For instance, local/regional newspapers and organizations often publish state assessment results that indirectly put leaders under tremendous pressure to improve performance. Such reports frequently rank districts and schools according to their performance on assessments, with top-performing districts/schools at the top and low-performing ones at or near the bottom. When a district and/or school performs at or near the bottom of such a ranked list over a series of years, local school boards, parents, and other community members often see their local schools (and leaders) as deficient. As a result, instructional leaders may change their priorities to focus on tests. And while a focus on high test performance is not bad in and of itself, the related narrowing of curriculum and decisions is problematic for educating the whole child (e.g. Johnson & Johnson, 2005; Ylimaki, 2011).

Consider the following questions:
• How often have your professional learning communities (PLCs) focused on standardized test data analysis?
• How much time do your PLCs spend on formative assessments aligned with standardized test improvement?

Now think about these questions:
• How often have your PLCs focused on community/civic engagement or service learning?
• How often do you and your colleagues talk about students' funds of knowledge and background information as assets for the curriculum?
• How often do your PLCs talk about improving the arts and humanities?

Answers to these questions point to the need for a comprehensive assessment system aligned to curriculum, instruction, and learning for the whole child.

Fieldwork 6.1: Ask a Principal about Accountability Pressures
Ask a principal about the different types of accountability he/she faces. Prompt him or her to talk about national accountability sources (NCLB, Race to the Top, Common Core assessments), state accountability (testing mandates, principal/teacher evaluation systems), and local sources (newspaper rankings, community, school board, benchmark assessments). Then ask the principal how he/she manages these accountability pressures. Finally, ask him/her how the accountability pressures have reduced or even eliminated other goals (e.g. service learning, the arts). Be prepared to share your responses with the class.

A Comprehensive Assessment System and its Components

Effective instructional leaders use formative and summative assessment measures as essential components of a comprehensive accountability system that connects assessments, instruction, and curriculum for the whole child within local communities and beyond. We can divide an assessment system into two broad categories of assessments: summative and formative. Essentially, summative assessments provide information about what students have learned at a particular point in time, and formative assessments provide feedback about what students are learning during an instructional period. Teachers use formative assessment feedback to modify their instruction in ways that help all children learn more during subsequent instruction.

Summative Assessments

Summative assessment (assessment of learning) typically documents how much learning has occurred at a particular point in time. Overall, the purpose of summative assessment is to measure the level of student, school, or program success.
Today's instructional leaders must know how to analyze summative assessment data and use those data to analyze program effectiveness and develop plans for school or curriculum improvement (see Chapter 10 for suggestions). At the same time, leaders must recognize summative assessments as part of an overall comprehensive assessment system. Summative assessments are most often given periodically to determine, at a particular point in time, what students know and do not know. The term "summative assessment" is commonly associated with large-scale standardized tests, such as state assessments, but summative assessments are also used in district and classroom programs. In this sense, the key is to think of summative assessments as a way to measure student learning at a particular point in time. Summative assessments provide important information, but that information helps only in evaluating specific points in the learning process. Because summative assessments of learning are spread out over time and occur after instruction (from a few weeks to a year), they are useful for evaluating the effectiveness of programs and access to a quality education. In the wake of NCLB and related accountability policies, instructional leaders must examine summative data for evidence of equitable opportunities for learning as well as academic achievement.

Equity Audits and Assessment Processes

Skrla et al. (2004) developed an equity audit process to help school leaders and other school members systematically examine summative data, looking for equity of learning opportunities in their schools. More specifically, Skrla et al. (2004) posited twelve indicators grouped into three categories for equity audits: teacher quality equity, programmatic equity, and achievement equity. High quality teachers are key determinants of students' opportunities to be academically successful (Skrla et al., 2004).
Yet students of color and students from low-income backgrounds often have non-certified teachers and/or teachers with less experience and training. According to Skrla et al. (2004), if children of color and children living in poverty get lower quality teachers than their Anglo peers from middle and upper class neighborhoods, we cannot expect equitable achievement. Furthermore, if the inequity in teacher quality is distributed across a school or district, the result is likely to be a systemic inequity in achievement. Similar quality patterns can exist within schools. In your school, do the more experienced teachers teach advanced placement courses? Do the least experienced teachers teach intervention classes?

Equity in the quality of programs is just as important as teacher quality (Skrla et al., 2004). Skrla and colleagues recommend an audit of four key indicators of program quality:
1. special education
2. gifted and talented education
3. bilingual education
4. student discipline.

Historically, students of color and students from low-income backgrounds are over-represented in special education and under-represented in gifted/talented programs. In the equity audit, the indicator for quality in special education and gifted/talented programs is whether all student groups are represented in reasonably proportionate percentages. With regard to bilingual education, the question is whether students are being well served and not simply segregated from the kind of quality instruction necessary to make academic progress. Students who are routinely and consistently removed from classes for discipline are also denied equal access to learning. In combination, teacher quality equity and programmatic equity contribute to achievement equity.

Fieldwork 6.2
Work with your principal, PLC, and/or grade level team to conduct an equity audit, looking at data regarding teacher quality, programmatic quality, bilingual education, and discipline. Add the resulting data to your Data Wall.
Interview your PLC members to gain an understanding of their perceptions about the resulting equity data. Work with your team to develop plans to attain more equitable academic achievement in your school.

Equity audits require deep trust among staff members and instructional leaders who can facilitate difficult conversations with regard to race, whiteness, language, and poverty. Moreover, today's instructional leaders must have strategies and analytical tools to help school members get beyond deficit views and blaming external factors for achievement gaps. See Chapter 2 for additional culturally responsive assessment strategies that help school members move beyond deficit views of traditionally marginalized students.

Summative data provide instructional leaders with many understandings about what students have learned as well as their opportunities for learning (teacher quality and programmatic quality). Yet summative assessments happen too far away from instruction and other school practices to allow instructional adjustments or interventions during the learning process. For that, we need formative assessments.

Formative Assessments

Formative assessments (assessments for learning) provide information or feedback to modify teaching and learning activities. According to Heritage, Kim, Vendlinski, and Herman (2009), formative assessment is "a systematic process to continuously gather evidence and provide feedback about learning while instruction is underway" (p. 24). Popham (2003) adds that formative assessment is a planned process; it does not happen accidentally.
Teachers who regularly utilize formative assessments are better able to: (1) determine what curriculum content students already know, and to what degree, during the instructional process; (2) decide what minor modifications or major changes in instruction they need to make so that all students can succeed in upcoming instruction and on subsequent assessments; (3) create appropriate lessons and activities for groups of learners or individual students; and (4) inform students about their current progress in order to help them set goals for improvement.

Common Formative Assessments

Ainsworth and Viegut (2006) recommend common formative assessments designed by teams of teachers and then administered to students by each participating teacher periodically throughout the academic school year. In particular, common formative assessments assess student understanding of the particular curriculum standards that the grade-level or department educators are currently focusing on in their individual classrooms. Teachers collaboratively score the assessments, analyze the results, and discuss ways to achieve improvements in student learning on the next common formative assessment they will administer. In this way, assessment informs decision-making during the instructional process. If the common formative assessments are aligned to the large-scale assessments in terms of what students will need to know and be able to do on those assessments, the formative assessment results will provide valuable information regarding what students already know and what they yet need to learn in order to do well on summative assessments. Using formative assessment results, educators can adjust instruction to better prepare students for success on the large-scale, summative assessments. Further, educators can use formative assessments to understand each child's learning approach, background knowledge, and any misconceptions that may negatively affect their comprehension of material.
Instructional leaders/principals play a vital role in implementing common formative assessment processes in their schools. They must look for creative ways to change daily teaching schedules to promote collaborative curriculum development, instructional planning, and analysis of student progress. By freeing participating teachers to meet in appropriate teams, administrators/instructional leaders provide teachers with the support necessary to plan and align curriculum, instruction, and assessments.

In sum, whether to regard an assessment as formative or summative depends on the assessment's purpose and how it is to be used. Summative and formative assessments are integral to a comprehensive assessment system. Further, instructional leaders/principals and teachers need to consider what they regularly assess, what they do not regularly assess, and for what purpose. With many schools under tremendous pressure to quickly raise standardized test scores, teachers and principals may prioritize tested standards in terms of instruction and formative assessments to the exclusion of promoting civic responsibility, inclusion, and social justice. The next several subsections describe a comprehensive assessment system (summative and formative) with components that inform a multi-dimensional curriculum (described in Chapter 3) and consider the issues identified in equity audits. When instructional leaders connect a comprehensive assessment system to a multi-dimensional curriculum, they have the potential to avoid the narrow curriculum and limited decision-making identified in the research on NCLB (e.g. Johnson & Johnson, 2005; Ylimaki, 2011).

Fieldwork 6.3: Formative or Summative?
Decide if the following assessment examples are formative or summative:
1. The assessment is a final measure of how students performed on constructed response items on multiple measures taught during the quarter.
2. The teacher uses the results from a unit test to inform instruction for the same students during the next unit of study.
3. The teacher provides students with the opportunity to revise and then improve their performance on a particular assessment during the evaluation process.
4. Students complete their revisions and the final evaluation is determined.
Be prepared to share your responses.

The entire alignment process begins with the policy-related curriculum (now the Common Core) as well as students' funds of knowledge, and then continues through each successive practice. Common formative (school-based) assessments should be intentionally aligned to all of these standards. Previously, in many states, teachers needed to "power" or prioritize long lists of discrete state standards. The Common Core Standards are relatively brief and relatively equal in importance; they work better in their totality than as individual standards. In this prioritization process, the Common Core Standards are the power standards (Shanahan, 2012). Teachers then align classroom performance assessments to both the Common Core Standards and the school-based common formative assessments. School-based common formative assessments are deliberately aligned to the formative and summative district benchmark assessments (typically administered quarterly) and to end-of-course or end-of-year summative assessments. Last, district benchmark assessments and the end-of-course assessments are deliberately aligned to the annual state assessments. In this model, curriculum standards (academic core including students' funds of knowledge, humanities, and social goals) are aligned with daily practice. Teachers work together to gain deep, collective understandings about what children need to learn in relation to their backgrounds/funds of knowledge. Next, teachers determine big ideas and essential questions to focus instruction and assessment for each standard.
In so doing, teachers cross-reference students' funds of knowledge and the Common Core Standards with state assessment data and state assessment requirements. In this part of the process, teachers examine state assessment data to see where students are scoring low and to identify, in the state test requirements, those standards that receive the greatest emphasis. Teachers consider how students' funds of knowledge may be assets for learning these standards. Teachers then make any instructional modifications as needed and continue the cycle. These sequential steps are described in the next several sub-sections.

Develop a Common Understanding of Standards-based Curriculum

Because the Common Core Standards are minimal in number and recursive in nature, instructional leaders should provide teachers with a full K–12 set of the core standards rather than a set per grade level. Many previous state standards were much more linear and discrete, so grade level standards, and asking teachers to prioritize or "power" them, made sense. The point is that leaders must examine standards and adjust any alignment or prioritization processes in relation to the needs of particular standards. In any case, teachers should develop deep, collective understandings of standards prior to the development of formative assessments. After teachers conduct a thorough review of the Common Core Standards, or whatever standards are mandated, they need to consider interdisciplinary application, students' funds of knowledge, and broader social goals.

Develop Big Ideas and Essential Questions

Teachers determine the "Big Ideas" inherent in the Common Core standards and relevant social goals for their schools and communities. Big Ideas are statements of understanding that students derive from study of particular standards.
These Big Ideas often occur to students during the "ah-ha" moments of the learning process, particularly when teachers guide students to draw conclusions and make connections among curriculum standards and the needs of an equitable and just democratic society. Wiggins and McTighe (1998) describe Big Ideas as enduring understandings, "the important understandings that we want students to get inside of and retain after they've forgotten many of the details" (p. 10). The value of developing Big Ideas lies in the collaborative process among teachers. As educators analyze the standards, they get to know exactly what the standards require them to teach and the students to learn. Many teacher teams reorganize the information into a graphic organizer that makes sense to all teacher participants. The standards become less daunting, and the educator is better able to consider how best to teach the concepts and skills that have now become much clearer. Beyond recall of new information (knowledge), students learn to make inferences and gain insights connecting new information, prior understandings, and skills that they will need to use throughout their lives. Moreover, students learn to access and draw on their background experiences or funds of knowledge in order to gain these new insights and understandings.

Formulate Essential Questions Matched to the Big Ideas

"Essential questions represent the essence of what you believe students should examine and know in the short time they have with a teacher" (Hayes-Jacobs, 1997, p. 26). Essential questions serve as instructional filters (Ainsworth & Viegut, 2006) for selecting the most appropriate lessons and activities to advance student understanding of the concepts and skills. The goal is for students to be able to respond to Essential Questions with the Big Ideas, stated in their own words, at the conclusion of an instructional unit.
Identification of Big Ideas and Essential Questions lays the foundation for developing classroom and common assessments. In the following example, teachers matched Essential Questions to corresponding Big Ideas (in parentheses).

Essential Questions and Corresponding Big Ideas (Grade 5)
1. Why do we need to know and be able to use text structure? (Understanding how to use text structure is a way to comprehend the main idea of a text and is a necessary reading skill applicable to all expository text.)
2. Why learn how to graphically organize text structure? (Graphic organizers provide shortcuts for accessing various text structures.)
3. How can we apply text organizational skills in writing? (Graphic organizers can be used to guide the prewriting phase of writing an expository text.)

Collaboratively Design Common Formative Pre- and Post-Assessments

After teachers develop Big Ideas and Essential Questions, they work in teams to collaboratively design common formative pre- and post-assessments matched directly to those concepts, skills, and "Big Ideas" in the standards. Assessment questions and items need to be written so as to address each of the concepts inherent in the Common Core. Items also need to match the specific level of rigor of the identified standard (see information about level of rigor in Chapter 3). For example, if the standard requires students to analyze or evaluate, assessment items need to reflect that higher level of cognition and rigor. In order to assess student proficiency on the Common Core Standards, teachers will need to develop selected-response common formative assessments as well as constructed-response common formative assessments in which students write their own responses to the Essential Questions. The Big Ideas, stated in the students' own words, along with supporting details derived from the standards, should appear in the students' written responses.
Combining two types of assessments (selected- and constructed-response) provides teachers with a multiple-measure assessment of students' understandings. Teachers may analyze constructed responses of expository text and other writing forms to assess students' funds of knowledge in relation to the learned curriculum (see Chapters 2 and 3). Because teachers know in advance the students' funds of knowledge as well as the core concepts, skills, and understandings students will be required to demonstrate on the common formative pre- and post-assessments, each individual teacher on the team can plan and teach an instructional unit that is aligned with the curriculum and the assessments used to evaluate students' learning progress.

Administer and Score Common Formative (Pre-) Assessments

Teachers administer common formative assessments individually and then score them with colleagues in a collaborative setting, particularly for constructed responses. Collaborative scoring of constructed-response assessments requires timely professional development. Teachers will need a conceptual and procedural framework for scoring common formative assessments, including a rationale and process for scoring assessments. To begin, teachers consider their purposes for the common formative assessment (what they most want to find out about their students' learning) before deciding which type of assessment would best meet their needs. Questions might revolve around:
• what understandings and/or funds of knowledge the students have with regard to using particular Common Core Standards
• what application of the concepts and skills embedded in those standards the students can demonstrate
• what kind of integration of understanding the students have gained (i.e., whether or not students can articulate the Big Ideas in their own words and then support those Big Ideas with details from the standard concepts and skills).
For example, teachers may answer the above questions and decide that the best type of assessment to meet these multiple purposes is a constructed-response assessment, such as an extended-response writing assessment. After administration of the assessment in individual classrooms, teachers then meet to score the assessments. Students can also learn to self-assess their work against a set of established scoring guide criteria. Such self-assessment enables students to identify where their strengths lie and where they need to improve: "Engaging in self-assessment prior to receiving feedback … shifts the primary responsibility for improving the work to the student, where it belongs" (Stiggins et al., 2004, p. 195). Many teachers prefer to wait until they have more experience with their own evaluation of common formative assessments before they involve students. Teachers who do involve their students in the evaluation of assessments, using a scoring guide the students help create, will benefit from these efforts, including providing students with greater ownership of the learning process.

Administer and Score Common Formative (Post-) Assessments for Use by Data Teams

The collaborative scoring process described below will provide participating teachers with the degree of reliable and valid feedback on student performance they need to "inform" their future instruction. The sequence of eight steps to collaboratively score constructed-response student papers is as follows (Ainsworth & Viegut, 2006):
1. Discuss the definition of key terms.
2. Review the criteria in the task-specific or generic rubric created by the team that will be used to score papers. Clarify and revise any subjective criteria to ensure consensus of understanding among members.
3. Read through student assessments and select exemplar/"anchor" papers/constructed responses and "range finder" papers to use during group scoring practice.
4. Conduct a group scoring practice to evaluate selected student papers representing a range of student responses. The practice session should include the following activities.
5. Begin with actual scoring of student papers using the rubric while referencing "anchor" and "range finder" papers.
6. Double-score or conduct a "read behind" scoring of each student paper to ensure inter-rater reliability.
7. Resolve adjacent or discrepant scoring disagreements by having a third evaluator score the paper in question.
8. Record rubric scores on a student roster for each class of students.

Analyze Post-Assessment Results in Data Teams

Compare pre- to post-assessment results, reflect on the process, and make plans for further improvement during the next instructional cycle. The term "data team" refers to a grade-level, cross-grade-level, or department team of educators composed of teachers who all teach the same content standards to their students, and who meet regularly for the express purpose of analyzing common assessment data. Data teams often include educators from the areas of special education, English language acquisition, and performing arts. The data team process is a practical method that school systems throughout the nation are using to make their data meaningful in terms of improving instruction and student achievement. Data team processes frequently feature the following five steps, but the steps are highly recursive and interactive. The data team process enables teacher teams to collaboratively identify in student work the strengths or characteristics of proficiency as well as the learning challenges of non-proficiency. Once the analysis is complete, the team establishes a very specific goal for student improvement on the next common assessment and selects the most effective instructional strategies to meet that goal.

Figure 6.1 Common Formative Assessment and Rubric

• Step 1: Chart the data.
Record the number and percentage of students who met or exceeded the established proficiency score on the common formative pre-assessment, and do the same for those who did not. Record data regarding teacher quality (experience, and certification where appropriate) and programmatic quality (percentages of students in special education and gifted/talented, and time spent in bilingual education and discipline).

• Step 2: Analyze the results. Identify strengths in proficient student papers and areas of need in non-proficient student papers. Identify proportional percentages for traditionally marginalized students in special education and gifted/talented programs, as well as time spent in bilingual education and discipline programs.

• Step 3: Set a goal. Write a specifically worded goal statement, based on the common pre-assessment results and equity audit results, that represents achievable student improvements on the common post-assessment along with equity improvements.

• Step 4: Select effective teaching strategies. Select the most effective instructional strategies (experience-based and research-based) to achieve the identified goal.

• Step 5: Determine the results indicators. Decide how to gauge the effectiveness of the team's selected instructional strategies and other processes designed to increase equity in learning opportunities.

Each step is further described below.

Step one. The first step of the data team process is to record on a group chart each teacher's student assessment results from the common formative pre-assessment. For example, if all the participating teachers agreed to score the common formative pre-assessment using a percentage scale and decided prior to administration that the "cut" score for student proficiency would be 80 percent, then each teacher first separates all the student papers into two broad groups: those that scored 80 percent or higher (proficient and above) and those that scored below 80 percent (non-proficient).
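The tallying just described is simple enough to sketch in a few lines. In this illustration, the 80 percent cut score and the percentage bands come from the chapter's example; the roster of scores is hypothetical and stands in for one teacher's class:

```python
# Illustrative sketch: sort one class's pre-assessment percentage scores
# into the proficiency bands described above (cut score = 80 percent).
# The scores below are hypothetical.
scores = [45, 62, 71, 78, 80, 84, 91, 95, 55, 88]

bands = {
    "above proficient (90-100)": [s for s in scores if s >= 90],
    "proficient (80-89)": [s for s in scores if 80 <= s < 90],
    "almost proficient (70-79)": [s for s in scores if 70 <= s < 80],
    "beginning proficient (below 70)": [s for s in scores if s < 70],
}

# The two broad groups reported on the team's group chart.
proficient = sum(1 for s in scores if s >= 80)
print(f"proficient and above: {proficient}/{len(scores)} "
      f"({100 * proficient / len(scores):.0f}%)")
for band, members in bands.items():
    print(f"{band}: {len(members)} students")
```

Each teacher would run this kind of tally on his or her own roster before the results are merged onto the team's group chart, as the next paragraph describes.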
Each teacher records on a student roster the number and percentage of students who scored in each of these two categories and submits the roster to a team member, who prepares a group chart with all participating teachers' scores. To examine students' learning needs more specifically during the data team meeting, teachers often sort their own proficient student papers into "proficient" and "above proficient" percentage bands (80–89 percent and 90–100 percent, respectively). Teachers then sort the non-proficient papers into "almost proficient" and "beginning proficient" percentage bands (70–79 percent and below 70 percent, respectively). Because these scores come from a common pre-assessment administered before any instruction takes place, in all likelihood there will be many students scoring below 70 percent. Each teacher records the student data on an individual class roster and submits this data to the person designated to chart the data for the entire team. Teachers who decide to use a rubric rather than a percentage scale to evaluate student proficiency instead record the number and percentage of students in each class who scored at or above the rubric "cut" score and those who did not. As a group, teachers then record data regarding teacher quality (experience, and certification where appropriate) and programmatic quality (percentages of students in special education and gifted/talented, and time spent in bilingual education and discipline).

Step two. Data team time is devoted to analyzing student learning needs (strengths/funds of knowledge or assets, and learning challenges) and equity issues grounded in the data. If the participating teachers designed and administered a selected-response type of common formative assessment using a percentage scale, they first review their sorted student papers by level of proficiency.
Next, teachers discuss the student responses to determine students' learning strengths, challenges, and uses of funds of knowledge. In analyzing selected responses, educators will need to determine which particular items the students marked correctly and which they marked incorrectly. The teachers can then conduct an item analysis of each assessment item to pinpoint the concepts and skills students do and do not understand. Teachers then prepare a T-chart of strengths and challenges based on their findings and rank-order the challenges according to the greatest need. Finally, teachers need to examine written student work as opposed to completed multiple-choice answer sheets. This can only be accomplished if the common formative assessment includes constructed-response items. For example, if teachers gave a common formative assessment that asked students to draw on their background knowledge and apply it to a curriculum standard, teachers may be able to assess students' funds of knowledge, any misconceptions, and the degree to which each student uses his or her background knowledge to comprehend the standard and content. After determining students' funds of knowledge/background knowledge, strengths, and greatest areas of need (by rank-ordering the challenges to proficiency as described above), data team members are ready to set goals for the post-assessment results. Further, after identifying proportional percentages for traditionally marginalized students in special education and gifted/talented programs, as well as time spent in bilingual education and discipline programs, data team members are ready to set goals for improving equity in a systemic way.

Step three. Data team members use the assessment results to set appropriate goals for students. Many team members struggle with goal setting in terms of percentages, often arbitrarily choosing a certain percentage as the goal.
In the example above, imagine that the pre-assessment data showed only 15 percent of students scoring at the proficient and above levels. Since the data team will write a post-assessment goal based on a short instructional cycle, typically a month in duration, many team members hesitate to set high goals (e.g., that 100 percent of students will attain proficiency). Yet dramatic gains in student learning are possible if participating teachers focus on the concepts and skills from the specific standards on which the common formative assessments are based, and if all teachers (regular and special area teachers) work together to help children learn those concepts and skills. By referring back to the Step 1 chart created by the data team, the members use their actual student data to write a realistic goal statement. With student papers already sorted into developmental categories, the teachers can project with more accuracy the number and percentage of students likely to achieve proficiency and above on the common formative post-assessment. Conzemius and O'Neil (2001) developed a system for writing goals called SMART goals, which are specific, measurable, achievable, relevant, and timely. For example:

• The percentage of Grade 4 students scoring at proficiency and above in math multiple-step story problems will increase from 17% to 88% by April 30 (4 weeks), as measured by a grade-level-developed common formative post-assessment that each participating teacher will administer to students on April 30.

• The percentage of Grade 10 students of color with out-of-class or in-school discipline will decrease to a proportional level, from 60% to 23%, by April 30 (12 weeks), as measured by a school-wide calculation of data on April 30.

SMART goals do not specify the how, that is, the instructional strategies that individual teachers will use to achieve those results. Instructional strategies are determined in the next step.

Step four.
After the data team sets the short-term SMART goals for the current instructional cycle or grading period, the participating teachers generate a list of possible instructional or systemic change strategies to meet them. The teachers (regular and special) might choose to begin by reflecting upon their own professional experience and identifying strategies that have proven effective. The data team then conducts a search of research-based instructional or systemic change strategies in order to identify and select the most appropriate techniques for the goals the team has set. After strategies have been identified, data team members review the list and decide upon the two or three that they think will have the greatest impact on improving student learning (or equity in learning opportunities) relative to the specific purpose: the rank-ordered challenges identified in Step 2. All of the teachers then agree to use each of the selected strategies during the instructional cycle or grading period. If teachers are not experienced with one or more of the strategies, instructional leaders/principals and coaches will need to provide mentoring or coaching, along with professional development about the strategy, during faculty meetings and/or PLC meetings.

Step five. The final step in the data team process is to decide what evidence the team will need to determine whether the instructional strategies selected in Step 4 proved effective. Specifically, the team is interested in the positive indicators that will give them evidence that their selected strategies accomplished their purpose. Data teams then write "results indicator" statements to represent the effectiveness of their selected strategies. Returning to the reading/writing example, the results indicator statements might be written as follows:

• Students who scored in the non-proficient range on the common formative pre-assessment will score in the proficient range on the post-assessment.
• The students will be able to:
• correctly use graphic representation to illustrate text structure in a given expository text
• correctly identify the text structure
• incorporate text structure vocabulary and structure to develop an original piece of writing
• communicate their understanding of the process they used to apply text structure for comprehension, and how they used that structure in their own writing.

When the five steps are completed, the data team creates an action plan to guide the instructional improvement process. The data team leader then shares this action plan with the principal, along with a summary of the team's progress as recorded on the data team's five-step documents. This action plan can be as brief or as detailed as the school members would like, but should address:

• instructional strategies used
• resources, materials, and additional collaboration time needed
• rigor required
• instructional differentiation
• informal coaching and classroom support visits needed, and when these classroom visits will be conducted
• sources for help if teachers encounter problems with implementation of the strategies
• strategies to eliminate systemic inequities in the school (teacher experience, program quality, bilingual education, and discipline)
• additional help or support needed from administrators.

Post-Assessment Data Team Meetings

After the common formative post-assessments are administered and scored at the end of the month, the data from each participating member of the team must again be charted prior to the data team meeting. The actual number and percentage of students who scored in each of the proficient and non-proficient categories on the common formative post-assessment are calculated and recorded for each educator on the team. The teachers then compare their pre-assessment results with the post-assessment results and later represent their gains on a data wall display.
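The pre/post comparison that feeds the data wall is, at bottom, a per-teacher and team-wide percentage calculation. A minimal sketch of that charting step follows; the teacher names and proficiency counts are hypothetical, and the 80 percent cut score is assumed from the chapter's earlier example:

```python
# Illustrative sketch: compare each teacher's pre- vs. post-assessment
# proficiency percentages and the gain, as a data team might chart
# before posting results on a data wall. All figures are hypothetical.
rosters = {
    "Teacher A": {"pre": (3, 24), "post": (19, 24)},  # (proficient, total)
    "Teacher B": {"pre": (5, 26), "post": (21, 26)},
    "Teacher C": {"pre": (4, 25), "post": (17, 25)},
}

gains = {}
for teacher, counts in rosters.items():
    pre_pct = 100 * counts["pre"][0] / counts["pre"][1]
    post_pct = 100 * counts["post"][0] / counts["post"][1]
    gains[teacher] = post_pct - pre_pct
    print(f"{teacher}: {pre_pct:.0f}% -> {post_pct:.0f}% proficient "
          f"(gain {gains[teacher]:.0f} points)")

# Team-wide totals for the data wall summary row.
team_pre = sum(c["pre"][0] for c in rosters.values())
team_post = sum(c["post"][0] for c in rosters.values())
team_total = sum(c["pre"][1] for c in rosters.values())
print(f"Team: {100 * team_pre / team_total:.0f}% -> "
      f"{100 * team_post / team_total:.0f}% proficient")
```

The same tally could be disaggregated by subgroup (as the chapter's equity discussion requires) by keeping separate pre/post counts per subgroup rather than per teacher.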
In this context, a data wall is not only a graphical representation of student achievement gains, as measured by pre- and post-common formative assessment data collected during an instructional cycle, but also a representation of how students' funds of knowledge are reflected in the curriculum. According to Ainsworth and Viegut (2006), the data wall is a three-part display that includes the group's identifying information, the student data results and the analysis of that data, and the team-determined goal and results indicators. The display continually changes to reflect the current emphasis of instruction and provides a visible means of assessing growth from one instructional cycle to the next. Naturally, faculties celebrate the gains in student learning evident on the data walls. At the same time, data walls provide visible evidence of the challenges to improving learning for individual students and subgroups. NCLB specifically requires schools to report student progress by subgroup in an attempt to close achievement gaps according to race, gender, and socioeconomic status. When teachers attain SMART achievement goals and have evidence that they attained equitable student achievement among the various subgroups (e.g., gender, race/ethnicity) in culturally responsive ways, then teachers, administrators, students, and parents have reason to celebrate. If this is not the case, however, team members must be able to discuss these challenges openly. Instructional leaders can use the following questions to guide this discussion:

• Did we use the identified strategies effectively and in culturally responsive ways?
• Did we use the identified strategies with sufficient frequency?
• Did we differentiate the identified strategies to meet the diverse learning needs of students who are not yet proficient?
• Did we use the identified strategies to eliminate systemic inequities for traditionally marginalized students?
• Do we need further assistance or practice in how to use the identified instructional or systemic change strategies?
• Do we need to abandon a strategy and use others that might be more effective?
• How are our students doing overall, and why do we think they performed the way they did?
• What are we going to do about intervening for students who are not proficient?
• How will we accelerate instruction for students who continue to excel, so that we keep them motivated and progressing according to their own learning needs?
• Which instructional or systemic change strategies produced the greatest results?
• What other modifications do we want to make in our work of collaboratively designing, administering, scoring, and analyzing common formative assessments?

Using these conversations and reflections, teachers then determine when to plan the next administration of a common formative assessment. They will repeat each of the steps in the data team process and plan for greater improvements in equity and student learning.

Fieldwork 6.4

Observe a data team in your school or a nearby school. What aspects of the data team process described above did you observe? What else did the team do? How, if at all, did the team use common formative assessments? What other kinds of assessment data did the group use (e.g., students' funds of knowledge, equity audits, state assessment/summative data)?

Strategies for Leaders

How do instructional leaders put all of this information on accountability, assessment, and equity into practice? This section provides guidance on what today's instructional leaders need to know and be able to do, as well as analytical tools and strategies for practice. Instructional leaders need to:

Know:
• requirements for rigor in text and instructional strategies
• requirements of assessments
• options for formative assessments
• options for summative assessments
• students' funds of knowledge and assets.
Be able to:
• help teachers develop deep understandings of curriculum standards
• align curriculum with assessments and instructional approaches
• monitor student progress on summative and formative assessments
• provide time for collaborative work on data, assessments, and culturally responsive instructional strategies
• disaggregate data by subgroup
• conduct equity audits
• lead difficult conversations about achievement equity and gaps.

How might you use these strategies in relation to the case study below?

Case 6.1 Impressive Teaching But Unimpressive School Label

Middle school principal Isabel Sanchez stared at the state assessment results. Her school had improved in all academic areas, but the improvements were not enough to move out of a "C" rating. Dr. Sanchez is the principal of a school in a barrio with tremendous pride in its Mexican-American culture, with many citizens of Mexican descent as well as immigrants (documented and undocumented). The school also has a high percentage of English language learners and children living in poverty. She knew that the ratings would be published in the local newspaper in the morning and that her teachers would be very disappointed. At the same time, Dr. Sanchez had worked very hard to develop a data-driven culture focused on continuous improvement. She hoped the teachers would see the ratings as a challenge, and she would not have long to wait.

When the scores were released, Dr. Sanchez called a special faculty meeting so that she could discuss the results with her staff in person. Dr. Sanchez watched her teachers' positive reactions as she presented the increases for each grade level. Then she said, "Well, all of these improvements show our hard work is paying off, but we have farther to go. The state still gave us a 'C' rating." Cindy, one of her best veteran teachers, spoke up first: "Well, okay, so we have to work harder." Meg, one of her new teachers, said, "I think most of my kids did their best.
I don't want to sound like I'm making an excuse, but the bar is very high, and many of the students did not have background experiences in the topics featured in the reading passages. What did I miss in the common formative assessments?" Jose responded with a bit more skepticism: "I think the tests are doing what they are supposed to do. They are telling us where we stand as a school and a district. The formative assessments gave us good information to help the kids develop from where they are. You're right. It's a steep climb with some of those nonfiction passages." Dr. Sanchez responded, "I have to say I'm pleased with the way all of you are trying to use these results to improve your practices."

Over the past year, the school had prioritized its standards and developed common formative assessments. A data team, along with the principal, analyzed the results with teachers. In the last quarter, Cindy and some other teachers developed an intervention system for students who were struggling with the common formative assessments and district benchmarks. Dr. Sanchez made a mental note to talk with Cindy and the other intervention teachers about the content of this program.

The next morning, Dr. Sanchez stopped by the "intervention" room to observe Cindy and a couple of other teachers and assistants working with students on non-fiction reading, writing, and math. She was immediately struck by the rigor of the reading material and dialogue. In the discussion, students were questioning underlying assumptions in the text and making connections across texts as well as to their personal experiences. During the course of the lesson, the students talked about the big idea in their own words. Students naturally completed a formative assessment in which they mapped the text structure of each text and made comparisons across the readings. Overall, Dr.
Sanchez told the teachers she was impressed and that, with more time, she believed their efforts would yield results on the state assessments. At the same time, Dr. Sanchez wondered if she and the teachers were missing something beyond time. She reflected on all the assessment and alignment activities from the past year. Teachers had worked collaboratively to develop and score common formative assessments, all of which were aligned with the state tests and the state standards. What were they missing? Why was the school coasting?

Questions

1. What were they missing?
2. Why was the school coasting?
3. How did Dr. Sanchez and the teachers use formative and summative assessments to meet accountability requirements and improve student learning? What else could they have done?
4. How do the principal's and teachers' assessment practices consider the needs of the students and community that are not as obvious in the state assessment data?

Extended Reflection 6.2

Revisit the definition of accountability that you wrote at the beginning of the chapter. How, if at all, has your definition changed? How, if at all, would you like to change your school's assessment system based upon what you learned in this chapter? Be prepared to share your thoughts in a small group.

Summary

This chapter examined accountability and a comprehensive assessment system, including summative and formative assessments. Recent major policies (e.g., NCLB) indicate direct influences of accountability on student achievement overall, marginal direct influences on closing racial/ethnic achievement gaps, and some unintended (indirect) influences of narrowing the curriculum and constraining teacher decision-making. In order to meet accountability demands at all levels, today's instructional leaders must use formative and summative assessment measures as essential components of a comprehensive accountability system that connects assessments, instruction, and curriculum for increasingly diverse students.
Thus, this chapter divided an assessment system into two broad categories: summative and formative assessments. Essentially, summative assessments provide information about what students have learned at a particular point in time, and formative assessments provide feedback about what students are learning during instruction. NCLB and related accountability policies require instructional leaders to examine appropriate data for evidence of equitable opportunities for learning and academic achievement. Drawing on Skrla et al.'s (2004) reconception of equity audits, this chapter provided instructional leaders with strategies to examine data with regard to teacher quality, programmatic quality, and overall academic opportunity. It also provided guidance for assessing students' funds of knowledge as resources for the curriculum. As schools and communities have become increasingly diverse, curriculum rigor and accountability have increased. New instructional leaders must integrate their assessment literacy skills with understandings of equity and intercultural education to develop and sustain a comprehensive assessment system aimed at improved teaching and learning for all students.

References

Ainsworth, L. & Viegut, D. (2006). Common formative assessments: How to connect standards-based instruction and assessment. Thousand Oaks, CA: SAGE.
Arizona State Legislature. (2011). Arizona Revised Statutes § 15–203(A)(38).
Conzemius, A. & O'Neil, J. (2001). Building shared responsibility for student learning. Alexandria, VA: Association for Supervision and Curriculum Development.
Daly, A. (2009). Rigid response in an age of accountability. Educational Administration Quarterly, 45(2), 168–216.
Duke, D., Grogan, M., Tucker, P., & Heinecke, W. (2003). Educational leadership in an age of accountability: The Virginia experience. Albany, NY: SUNY Press.
Hayes-Jacobs, H. (1997). Mapping the big picture: Integrating curriculum and assessment K–12.
Alexandria, VA: Association for Supervision and Curriculum Development.
Heritage, M., Kim, J., Vendlinski, T., & Herman, J. (2009). From evidence to action: A seamless process in formative assessment? Educational Measurement: Issues and Practice, 28(3), 24–31.
Johnson, B. & Johnson, D. (2005). High stakes: Poverty, testing, and failure in American schools (2nd ed.). Lanham, MD: Rowman & Littlefield.
Lee, J. (2006). Tracking achievement gaps and assessing the impact of NCLB on the gaps: An in-depth look into national and state reading and math outcome trends. Cambridge, MA: Civil Rights Project at Harvard University.
Luizzi, B. D. (2006). Accountability and the principalship: The influence of No Child Left Behind on middle school principals in Connecticut (Doctoral dissertation). New York: Teachers College, Columbia University. Retrieved July 24, 2009 from http://proquest.umi.com/pqdlink?Ver=1&Exp=07-232014&FMT=7&DID=1203564851&RQT=309
Orfield, G., Losen, D., & Balfanz, R. (2006). Confronting the graduation rate crisis in Texas. Cambridge, MA: The Civil Rights Project at Harvard University. Retrieved August 23, 2009 from www.civilrightsproject.ucla.edu/research/dropouts_gen.php
Popham, W. J. (2003). Test better, teach better: The instructional role of assessment. Alexandria, VA: Association for Supervision and Curriculum Development.
Shanahan, P. (2012). Shanahan on literacy. Retrieved December 15, 2012 from www.shanahanonliteracy.com/2012/07/common-core-or-guided-reading.html
Skrla, L., Scheurich, J., Garcia, J., & Nolly, G. (2004). Equity audits: A practical leadership tool for developing equitable and excellent schools. Educational Administration Quarterly, 40(1), 133–161.
Stiggins, R., Arter, J., Chappuis, J., & Chappuis, S. (2004). Classroom assessment for student learning: Doing it right—using it well. Portland, OR: Assessment Training Institute.
U.S. Congress. (2001). No Child Left Behind Act.
Retrieved September 28, 2003 from www.ed.gov/nclb/landing.jhtml
Wiggins, G. & McTighe, J. (1998). Understanding by design. Alexandria, VA: Association for Supervision and Curriculum Development.
Ylimaki, R. (2011). Critical curriculum leadership: A framework for progressive education. New York: Routledge.

Tutor Answer

Microtutor_Burchu
School: UCLA

Attached.

Running head: INSTRUCTIONAL ASSESSMENT SYSTEM

Instructional Assessment System
Name
Institution

Instructional Assessment System
One of the most significant challenges I face as an instructional leader is the implementation and alignment of assessments with the curriculum and instruction. Being a great instructional leader requires sufficient knowledge and skills to improve students' learning opportu...

Review

Anonymous
Tutor went the extra mile to help me with this essay. Citations were a bit shaky, but I appreciated how well he handled APA style and how willing he was to change it even though I didn't specify. Got a B+, which is believable and acceptable.
