Performance-Based Assessments, psychology homework help

Description

Performance-Based Assessments


Chapter 6 examines the purpose of authentic assessment, as well as the pros and cons of using authentic assessments with children. Imagine that your program, school, or center has adopted performance assessment to measure a child’s growth. Your supervisor has asked you to create a letter for families explaining performance assessment. In your letter, you must address the following:
  • At least two reasons why using performance assessment with children is a reliable method of measuring growth.
  • At least two typical concerns associated with performance assessment and how you will address those concerns.
  • At least two different performance assessments that you will use in the classroom and why you will use these assessments.

Post your letter to the discussion forum.



*You must properly cite and reference the course text in every discussion. A citation is a parenthetical note within the body of your response. It comes after a direct quote or a paraphrase. A reference comes at the end of your response and refers to the required reading or material. Use in-text citations.*

Howard, V. F., & Aiken, E. (2015). Assessing learning and development in young children. San Diego, CA: Bridgepoint Education.

Unformatted Attachment Preview

Chapter 6: Performance-Based Assessment

Pretest
1. Worksheets, exams, and screening tests are examples of performance-based assessments. T/F
2. Performance-based assessments provide an authentic evaluation of student learning. T/F
3. Performance-based assessment is particularly vulnerable to error based on bias. T/F
4. Portfolios are best used to collaborate with children. T/F
5. Rubrics are easy to design and use. T/F
Answers can be found at the end of the chapter.

Learning Objectives
By the end of this chapter, you should be able to:
  • Define and give examples of performance-based assessment.
  • Identify the advantages and understand the limitations of performance-based assessment.
  • Identify multiple approaches used to assess performance in early childhood.
  • Explain the purposes and development of portfolios.
  • Design and implement rubrics to evaluate performance-based outcomes.

Gretchen is a Head Start teacher in a predominantly migrant community in south Florida. Her 17 students range in age from 2 to 4 and are developmentally diverse. This Head Start curriculum is aligned with Florida's Early Learning and Developmental Standards, which are developmentally based. These standards encourage a play-based curriculum, so Gretchen arranges authentic and culturally relevant activities. Children will be guided in free play, art, sensory experiences, science projects, embedded early mathematics, stories and dramatic play, and gross and fine motor activities. Gretchen spends the first few weeks of school establishing routines, such as transitions, meals, and independent and group play. Additionally, Gretchen talks with, reads stories to, and is a play partner to all the children as she carefully observes her students' social development, problem-solving skills, early academic competencies, movement, and communication strengths. She recognizes that 4-year-old Kristen is well liked by the other children, is eager to learn, has already learned to count to 20, and can reliably retell the main parts of a story after a single reading. At the same time, 3-year-old Stetson prefers playing alone, is easily provoked to aggression, and has a difficult time sitting through the reading of a short book, either alone or with other children.

It could be said that Gretchen conducted performance-based assessment of her students. That is, she observed Kristen, Stetson, and the rest of her children performing curriculum-based activities. Through the remainder of the year, Gretchen will continue to make performance-based assessments to be used to plan activities to challenge Kristen and scaffold Stetson, as well as the other 15 children in the class. Outcomes for these activities will also be continuously monitored to ensure that all children make adequate progress.

Introduction
Like all early childhood educators, Gretchen has continuous access to the most authentic evidence of learning—the products of children's performance on daily activities. By using artifacts of learning continuously and purposefully to gauge growth and make decisions, Gretchen expertly uses performance-based assessment as a best practice to facilitate high-quality early childhood education. This chapter will explain the purpose and execution of a wide range of performance-based measurements, all of which Gretchen might use to assess and make decisions regarding Kristen and Stetson.
6.1 Performance-Based Assessment

Performance-based assessment (PBA) can be defined as the collection of data on students as they engage in learning. When a teacher either directly or indirectly asks a child to "show me," an opportunity for PBA is created. Unlike traditional assessment procedures that tend to focus only on outcomes, PBA involves both the process of learning as well as the product (Palm, 2008). According to Schurr (2012), "Performance assessment is an assessment of how well individuals can do something, as opposed to determining what they know about doing something" (p. 8). Likewise, ECE professionals are concerned with how children behave while completing a task—their focus, effort, language, movement, interactions with others, and so on—as well as the degree to which the outcome of their actions conforms to criteria or standards.

A term closely aligned with performance-based assessment is authentic assessment, or the assessment of children in real settings as they perform the actions and engage in the activities that are expected of them in those settings. Of all the forms of assessment, authentic assessment has been recognized as a foundational practice in early childhood education (Susman-Stillman, Bailey, & Webb, 2014). EC educators generally agree that the work of children is play, since it represents the most developmentally fertile context for cognitive, social, and language growth. With this in mind, authentic curriculums are play based. If a play-based curriculum is authentic, it follows that assessment should align with the curriculum and be authentic as well. For example, although it is common to assess a child's understanding of prepositions using a worksheet, it is also possible to assess understanding by observing this skill in the context of building blocks, where children are asked questions such as "What will you put under your airport hangar?" The latter is an example of authentic assessment, where under is assessed in the context of play. Whenever assessment is decontextualized, or removed from activities that are developmentally appropriate, the assessment becomes less authentic.

Authentic assessment reflects the understanding that knowledge and skills alone are insufficient for learning. Acquired knowledge and skills must be applied in order to be meaningful. Furthermore, learned information must be meaningfully applied in real-world situations if the learning process is to be complete. Actionable learning (or performance) filters personalized, acquired knowledge and skills through problem solving, creativity, and often a group process. These processes are essential aspects of 21st-century learning priorities that include problem solving, creativity, collaboration and team skills, technology, critical thinking, and communication.

Performance-based assessments are compatible with developmentally appropriate practices that guide ECE because they assess children as they engage in authentic early childhood activities and play. Consequently, aligning both process and product requires tools that differ from other forms of assessment. Performance-based assessment may be both a formative and a summative process. However, it is most commonly used formatively, whereby ECE professionals regularly measure student progress in the context of daily learning routines and activities.
Examples of performances that provide a context for or indicators of child development are social and dramatic play, self-regulation and rules-based behavior, inquiry learning in science, applied projects involving social sciences and math, reading, and storytelling. Table 6.1 lists some examples of PBA artifacts, which are derived from activities such as interviews, games, directed assignments, live performance, and projects.

Table 6.1: Performance-based artifacts
  • Interviews: Teachers ask questions about specific content, and students respond orally. Example: After the class took a field trip to the local zoo, the teacher interviewed the students individually to find out what they observed and what new information they gained.
  • Games: Teachers engage in games with students or observe students playing games together. Example: While watching two students play a game in which they matched cards with digits to cards with dots, the teacher observed that both students had mastered an understanding of the numbers 1 through 9.
  • Directed assignments: Teachers give specific tasks for students to complete. Example: During language arts time in a first-grade classroom, one of the center choices is sentence sequencing using sentence strips and a large pocket chart. The teacher can observe how the students sequence sentences from a mentor text that the class explored together during group time.
  • Performance: Teachers observe students engaged in performance situations. Example: During the spring music performance, the teacher observed students' ability to sing songs and move to music.
  • Projects: Students create works to demonstrate understanding. Example: After taking a field trip to the local fire station and reading many books about fire trucks, the students created a fire truck out of a refrigerator box, and the teacher observed skills in multiple domains.

The terms performance-based assessments and authentic assessments are used in this chapter to describe methods of evaluating behavior in vivo. In vivo means assessors observe and evaluate children as they live, engaging in developmentally appropriate activities, solving real-life problems, and interacting with others in meaningful contexts—all of which enable children to adapt to the changing expectations in their lives.

Performance-based assessment can be viewed in two ways: a general understanding of the nature of performance/authentic assessment, and a more formal definition comprising three dimensions by which an assessment qualifies as performance based (Frey, Schmitt, & Allen, 2012). That is, performance activities must take into account three dimensions: (a) the context or setting within which an activity is assessed must be real or natural for young children, (b) children must demonstrate their learning through authentic activities, and (c) the procedures used to score children's performance to determine progress must align with the performance expectations. See Table 6.2 for examples of the three assessment dimensions of performance assessment.
Table 6.2: Examples of selected assessment dimensions of performance assessment
  • Example 1: prekindergarten (performance). Context: After hearing the story of Little Red Riding Hood, props from the story (red hoods, baskets, wolf masks, and so on) are provided in the dramatic play area (natural setting for young children). Children: Children perform the various roles and act out the story Little Red Riding Hood. Scoring: This activity was designed to address learner outcomes for listening comprehension, vocabulary development, problem solving, and social interactions. Skill mastery was assessed using questions while reading the story and anecdotal notes for the play.
  • Example 2: second grade (project). Context: After reading about mammals over multiple days in small group reading, children examine several skulls of real mammals (connection to real world), noting the sizes of the teeth, lengths of the noses, and locations of the ears. Children: Children draw and label a mammal head of their choosing. Scoring: This activity was designed to address learner outcomes for reading comprehension, vocabulary development, and art. Skill mastery was assessed using questions while students were reading, a checklist for mammal skull anatomy, and a rubric for their mammal skull drawings.
Source: Adapted from Frey, Schmitt, & Allen, 2012.

Performance-based assessments can be as different and unique as the classes in which they are used. In addition to working naturally in ECE settings, PBAs have numerous other advantages.

Advantages of Performance-Based and Authentic Assessments
Performance-based assessment is essentially bound to observational procedures. Determining how children do things requires that they be watched. Watching children while they are engaged in meaningful activities is an informal assessment process. That is, teachers assess behavior as it unfolds rather than explicitly eliciting a series of behaviors according to a scripted format. As mentioned in previous chapters, observational assessment of children's abilities is always subject to observer bias and therefore somewhat subjective.

Reflection: Give an example in which subjective assessment may provide an advantage to decision making while also being a possible source of error in decisions made.

Using authentic or performance-based assessment is a best practice in early childhood education (Bagnato, Goins, Pretti-Frontczak, & Neisworth, 2014). As mentioned earlier, early childhood philosophy tends to endorse authentic experiences as a medium for whole child growth and development. Therefore, assessing children within meaningful activities should uncover meaningful evidence of their growth and development. If children are assessed playing with toys they engage with every day, interacting with people with whom they are familiar, and doing tasks that are similar to ones they regularly tackle, then the most "truthful" evidence of development is likely to be revealed from the assessment. On the other hand, when assessment is conducted in situations that are contrived, outcome data are less likely to faithfully reveal a child's abilities. A child may perform poorly in a testing situation that features new materials, assessors, time limitations, settings, and task requirements.
However, this same child might successfully demonstrate an understanding of the same concepts during a class project—a feat that is relevant to real-world expectations. For example, a student may struggle to complete a worksheet that requires her to count clusters of dots (a contrived task), but when the same child is asked to count how many classmates will be eating lunch and get the corresponding number of napkins, she will have no problem counting classmates or napkins (a real-world expectation).

Performance-Based and Authentic Assessment Practices
Because learning in authentic contexts involves a wide range of ages and developmentally appropriate activities, many methods have been developed to assess children's growth during these activities. For example, a professional working with infants would likely use different means of assessing developmental progress than a kindergarten teacher, as the type and complexity of behaviors and contexts will vary substantially. However, a set of guiding principles applies to all performance-based assessments, including establishing learning objectives, using multiple measures of growth, documenting and analyzing performance for decision making, and using performance data to give children feedback.

Learning Objectives
As with all assessment, PBA begins with identifying the learning outcome. Knowing what a child should gain from an activity informs the type of assessment. For example, if the learning objective is to "develop a sense of humor," a child might be most likely to "show" a sense of humor during free or dramatic play or by listening to a funny story. However, humor is somewhat subjective and may therefore be best assessed by observing children during activities and using a subjective scale such as a rubric or a Likert scale (discussed later in this chapter). After clearly identifying learning objectives (see Chapter 4), performance activities are designed to achieve those outcomes. Assessment starts at the beginning of instruction in order to determine baseline or beginning skill levels. Suppose a teacher found upon preassessment that, of nine children, six had already mastered a concept. This information would be used to determine whether instruction was needed for those six children and to plan appropriate activities for the other three who had yet to learn the concept.

Using the Right Assessment
Having established learning objectives, the next step is to consider a range of assessment tools that are most likely to reliably reveal mastery. As seen from the previous example, it is possible to assess student gains in more than one way. Selecting the right tool requires the assessment to align with the target behavior by answering the question, "Is the learning objective met?" It also must be practical for an assessor to execute, given other demands. For example, a checklist (see Figure 6.1) is a quick way to indicate that a skill such as "humor in young children" has been observed—but it is imprecise. If a child seems to be progressing typically, a checklist is a perfectly reasonable choice. However, if a child is experiencing difficulty, a more intensive assessment may be necessary—such as daily observation to assess the frequency of "uses/acknowledges humor with prompting" or anecdotal notes about when a child intentionally did something that others found funny, the context in which the child created humor, and who found the child's actions funny.

Figure 6.1: PBA in action: Checklist
Checklists are a quick and easy way to document behaviors as they are observed in everyday interactions. The example checklist, dated October 5, 2015, lists five children (Anjali, Bernie, Chas, Gaul, and Kalisha) against four humor-related behaviors (repeats behavior that draws laughter; laughs at funny stories; tells jokes; makes faces, sounds, body postures, or products that are humorous for age), with a mark recorded for each behavior observed.
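For teachers who keep such checklists electronically, a checklist like Figure 6.1 boils down to a short list of behavior statements and a mark per child. The sketch below is purely illustrative; the data layout, function name, and sample observations are assumptions, while the behavior labels and children's names come from the figure.

```python
from datetime import date

# Illustrative sketch: recording checklist marks like those in Figure 6.1.
BEHAVIORS = [
    "Repeats behavior that draws laughter",
    "Laughs at funny stories",
    "Tells jokes",
    "Makes faces, sounds, body postures, or products that are humorous for age",
]

# Each record is (child, behavior observed, date of observation).
observations = [
    ("Anjali", BEHAVIORS[0], date(2015, 10, 5)),
    ("Anjali", BEHAVIORS[1], date(2015, 10, 5)),
    ("Bernie", BEHAVIORS[2], date(2015, 10, 5)),
]

def behaviors_checked(child, records):
    """Return the set of behaviors that have been checked off for one child."""
    return {behavior for name, behavior, _ in records if name == child}

print(behaviors_checked("Anjali", observations))
```

Because a checklist only records that a behavior was seen, not how often or in what context, the more intensive options mentioned above (frequency counts, anecdotal notes) remain the better choice when a child is struggling.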
As with other forms of assessment, it is essential to collect performance evidence that may be used to make instructional decisions. How these data are collected depends on the tool selected. For example, if a professional wants to analyze a 13-month-old child's language competencies, a good way to gather data is to videotape interactions between the child and mother. The evidence can be viewed over and over, and no information is likely to be overlooked. In this case the tool can be very detailed and follow administrative procedures for quantifying the types and numbers of words used by the child, his or her social responsiveness, and receptive responses to the mother's communication in a natural context.

Similarly, consider a preschooler's art project in which children are making animal masks. Assessment can be conducted during the project to gauge behaviors such as holding a crayon, sustained attention to the task, sharing materials with others, and spontaneous communication about the animals and masks. In addition, the final mask may be assessed using a rubric that takes into account traits such as neatness, creativity, the relationship of the mask's features to real animals, and problem solving. The mask itself can be put into the child's portfolio. Still other performance measures are administered only during the process itself. For example, if children are learning new songs and dances for the school holiday festival, an assessor might watch the final performance and rate students' skills on a scale to indicate mastery of words and hand gestures to the songs, motor skills in learning steps and executing movements, social skills in following directions, and confidence in performing in front of a crowd.

Providing Feedback
An important element of teaching and learning is providing feedback (Hattie, 2009). Performance-based assessment is no different. Using outcome data from PBA, EC professionals can relate what children do well and what they need to work on to move to the next level. For example, a teacher may say to a child who has just built a bridge with blocks, "Meg, you built a bridge so that I can walk the little man over without getting wet—good planning!" and "If you add some blocks to the ends, the bridge will be high enough to drive your blue boat under." Feedback may also come in the form of questions that move children forward: "Meg, your bridge is not quite long enough to get over the river. Could you build it so my little man won't get wet?" While PBA outcome data are useful to professionals, sharing these data with children further enhances learning.
Even for very young children, immediate feedback in the form of smiling, clapping, and hugging tells children that they have done well. More complex feedback based on performance assessment that communicates how infants and toddlers may advance can be given by modeling, physically helping, and providing simple verbal prompts. For instance, if infant Josie is capable of cruising along furniture, she could be helped to transition to free standing by holding her hand, encouraging her to move away from furniture, and excitedly saying, "Josie, you're standing all by yourself!" This physical support and praise will motivate Josie to try to move away from furniture and stand independently, which will eventually lead to walking.

Case Study
Mr. McDonald was recently hired to teach Early Head Start. As he described it, the children came to school the first day like "bright shiny pennies." They were cheerful and could not wait to show him their new shoes and school supplies. However, Mr. McDonald had no idea what these children were capable of or where he should begin teaching. His plan was to design a range of developmentally appropriate activities for the first several days that would permit him to get to know each child. Through these activities, he believed he would be able to assess students' language, knowledge, social and motor skills, and behavioral strengths. The following are some of the activities Mr. McDonald used during the first week of school:
  • Each day during circle time, he introduced a new song, including "The More We Get Together," "I'm a Little Tea Pot," "The ABCs," and "The Wheels on the Bus."
  • Each day, he set up several centers from which children could choose to play. These featured manipulatives, a water table, blocks, and dramatic play.
  • During meals, children ate family style, and Mr. McDonald sat at a different table each day so he could talk with new children.
Critical-Thinking Questions
1. Is Mr. McDonald using performance-based assessment?
2. What tactics might improve Mr. McDonald's understanding of the children in his class? Suggest a few.

Avoiding Issues With Performance-Based Assessment
Although PBA offers clear advantages in understanding children and linking learning activities to developmental growth, there are also serious considerations regarding the use of informal assessment procedures. In fact, research comparing the relative strength of authentic assessment to more conventional measurement methods (such as worksheets) is lacking, even though teachers tend to prefer authentic measurements to traditional assessment (Bagnato et al., 2014). This preference may be linked to several factors, including a belief that authentic assessment data are more meaningful, that less formal assessments take less time to collect data and score, or that the assessments more closely align to student outcomes and activities.

Just as standardized tests must accurately measure what they intend to measure (validity) and must do so consistently (reliability), it is also important for informal measures to assess a child honestly and dependably. If a teacher's filter is clouded by preconceptions about a child—based on history, biases, or the teacher's own measurement skills—assessment of the child's performance can be skewed and in turn influence the child's opportunities.
In fact, there is evidence that ECE personnel are alarmingly unreliable when classroom assessment outcomes are compared to external evaluation of the same children (Brookhart, 2011). To a significant degree, EC educators' assessments of children are influenced by assessor subjectivity (Waterman, McDermott, Fantuzzo, & Gadsden, 2012). To increase the quality of PBA outcomes and subsequent decisions, the following practices should be implemented:
1. Multiple types of assessment should be used to provide depth, strengthen perceptions, and reduce the possibility of measurement bias. For example, anecdotal notes about a child's social interactions with peers may be supported with videos, photographs, and social-skills checklists with the same behavior targets.
2. Interobserver reliability is necessary to calibrate the accuracy of data collection. It is recommended that teacher judgments occasionally be compared to those made by a second observer to see if there is agreement (a simple percent-agreement check is sketched after this list). Interobserver reliability is especially helpful if a professional is using the data to make important programmatic decisions regarding a child.
3. Performance-based assessments should serve a formative process rather than a summative one; PBAs are generally informal measures, and therefore they tend to lack the technical validation of standardized tools that are useful in making credible summative decisions. On the other hand, PBA data can and should be paired with more formal assessment information whenever an important decision is being considered.
4. ECE professionals should practice designing and scoring performance-based assessments for authentic learning opportunities. Collaboration with and feedback from skilled mentors will help emerging assessors align assessments to accurately evaluate progress toward learning outcomes.
5. PBA should be constructed to permit direct observation of behaviors so as to indicate mastery of learning outcomes. Assessments that rely on opinions about behaviors that cannot actually be seen are much more subject to bias. For example, it is common for teachers to assume that a child who is learning quickly is motivated. Making this assumption about what is going on in a child's head (which cannot be seen) actually constitutes a barrier to intervention because unseen phenomena are inaccessible and cannot be directly taught or assessed. In contrast, it is possible to observe how often or how well a child completes a task when given the opportunity. A child can be seen completing a task, whereas motivation must be inferred from visible behaviors, such as task completion. Moreover, the amount and quality of work completed can be compared from one opportunity to the next. If progress toward task completion is unsatisfactory, intervention strategies might be evaluated and changed by collecting data using the same assessment. If behavior can be described and scored in a way that is direct and observable—as through the use of a running record as opposed to a rating scale—biased outcomes are less likely.

It may be noted that the practices described above are not unique to PBA. Multiple measures, reliability of measurement, practice, using data to make decisions, and relying on objective descriptions of behavior are all common to other kinds of assessments. Because PBA relies heavily on observational methods, these practices are reintroduced here to remind readers of strategies they can use to enhance the accuracy of PBA outcomes.
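The interobserver check described in practice 2 can be kept very simple. The sketch below is a minimal illustration rather than anything prescribed by the chapter: it computes percent agreement between a teacher and a second observer who scored the same checklist items, and the function name, sample ratings, and recalibration note are assumptions added for the example.

```python
# Minimal sketch (illustrative only): percent agreement between two observers
# who scored the same checklist items for one child.
def percent_agreement(observer_a, observer_b):
    """Return the share of items on which the two observers agreed."""
    if not observer_a or len(observer_a) != len(observer_b):
        raise ValueError("Both observers must rate the same nonempty set of items.")
    matches = sum(a == b for a, b in zip(observer_a, observer_b))
    return matches / len(observer_a)

teacher_ratings = ["observed", "observed", "not observed", "observed", "not observed"]
second_observer = ["observed", "not observed", "not observed", "observed", "not observed"]

print(f"Agreement: {percent_agreement(teacher_ratings, second_observer):.0%}")
# Prints "Agreement: 80%"; persistently low agreement would signal a need to
# recalibrate how the target behaviors are defined and recorded.
```

A chance-corrected statistic such as Cohen's kappa could be substituted when the stakes of the decision warrant it.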
Challenge
Using the Venn diagram (see Figure 6.2), classify the following characteristics as being representative of either traditional assessment, performance-based assessment, or both.
  • Compares students to other students' performance
  • Assesses depth of knowledge well
  • Measures information that is hard to quantify for comparison
  • Measures low-order thinking
  • Scoring is easy and fast
  • Scoring is objective
  • Assesses small subset of knowledge
  • Scoring is subjective
  • Scoring is complicated
  • Assessment is part of the learning process
  • Learning and assessment take place over time
  • Capable of assessing creativity well
  • Students are actively engaged in self-assessment
  • Many possible answers may be correct
  • Assesses wide scope of knowledge
  • Measures behavior where students work cooperatively
  • Student reacts to fixed stimulus
  • Measures behavior where student acts individually

Figure 6.2: Comparison of assessment characteristics. This Venn diagram, with overlapping circles labeled Traditional Assessment and Performance-Based Assessment, is used to classify the characteristics of learning associated with traditional assessment, performance-based assessment, or both.
Refer to the Appendix for the answers.
Source: Based on Schurr, 2012.

6.2 Portfolio Approach to Assessment

In addition to conventional assessments such as screenings and standardized tests, early childhood education has long relied on developmentally appropriate, informal, and authentic measurements of children in individual and group contexts. The documentation generated by both formal and informal assessments can be used together in a way that tells children's stories as they grow over time. Portfolios provide one way of aggregating information from various assessment sources to tell these stories.

Reasons to Use Portfolios
Although portfolio assessments may come in many forms, a portfolio is a metaphorical storyboard used to document children's ongoing work. Portfolios may be used informally or may be very purposeful. For example, some ECE professionals may add documents to a portfolio whenever they think the information is useful. Other professionals plan their portfolio to include regular documentation that illustrates progress according to targeted learning standards and behavioral observations.

There are three key reasons to use portfolios. First, they provide parents with actual evidence of a child's work, and serve to provide an opportunity for periodic and ongoing communication between parents and teachers. Parents and educators can appreciate growth across domains by viewing portfolio entries across time. Second, portfolios provide comprehensive evidence of progress that can be used to make instructional decisions. Portfolios have traditionally been used to archive informal assessment information, but it is also acceptable to include more data-driven student performance work. Finally, portfolios are increasingly being used in a more summative way to document program quality.
Consider the following example. Suppose Ms. Rosa started a three-ring binder for each of the students in her preschool classroom at the beginning of the school year. Throughout the year, she added developmental checklists, work samples (particularly art and writing), dictations of stories, anecdotal notes (often about math and science reasoning), pictures (frequently of things that could not be kept, like block towers), video recordings, and observational assessments that documented actions (such as dramatic play or gross motor skills) to each child's portfolio. She organized everything by curricular domain (language, motor, cognition, art and music, social, self-concept) and date to document how each of her students grew and changed over the year. She used the portfolios to guide parent-teacher conferences in the spring. It took Rosa several years to master this carefully orchestrated process of archiving documents, but the portfolios she now creates meaningfully assess progress while excluding unnecessary or irrelevant material.

Portfolio Logistics
Portfolios may take many forms. Children may have individual binders, poster boards, or large folios that contain their work. As mentioned, a portfolio is not a haphazard collection of interesting artifacts or finished work (Laski, 2013). Rather, portfolios are most powerful when they adopt a focused and systematic approach to serve the purpose of assessment. To that end, a teacher must first establish the purpose of implementing portfolio assessment. All entries should then support that purpose. As a tool for assessment, one purpose is to document progress. The domains being assessed may vary from child to child and teacher to teacher. For example, Teacher A may establish portfolios to mark the emergence of early phonemic awareness skills of 4-year-olds with the goal of rhyming mastery by the time the children transition to kindergarten. At the same time, Teacher B may be more interested in creating and maintaining portfolios to document creativity and critical-thinking skills in her 4-year-olds, and thus may include a range of art and science project samples. Once the portfolio's purpose has been established, it becomes clearer what trail of evidence is needed to document progress toward the objective. Although there may be some overlap, the materials included in these two sets of portfolios will clearly be very different.

The way in which portfolios are organized may also be linked to their purpose, though not necessarily. Portfolios may be organized by theme or topic, chronology, type of product, quality of work, and so on (Schurr, 2012). The point of organization is to facilitate the telling of a story through portfolio sharing. Professionals should adopt a completed portfolio checklist (see Figure 6.3) that serves numerous purposes, including to provide a manifest of entries, communicate the portfolio's organizational logic, prompt child and teacher to enter completed work, and notify archivists when evidence of critical artifacts has been satisfied.

Figure 6.3: Portfolio organizational checklist
It is important to organize portfolios so they are informative and useful. That is often accomplished using an organizational checklist of portfolio components.
PORTFOLIO CHECKLIST (portfolio collection; evidence entered and date; check mark for completion)
  • Demographic information
  • Beginning reading screening: Completed 9/3/14
  • Mid-year reading screening: Completed 1/7/15
  • End of year reading screening
  • Evidence of Soc. Emotional Standard 1.0: Photo of Sean working with group on science problem (12/15/14); Standard Met: Proficient
  • Evidence of Cognitive Standard 1.0: Observational notes of Sean working independently to build a bridge with recycled items (10/3/14); Standard Met: Proficient
  • Evidence of Language Standard 1.0: Audio recording of Sean telling a story during circle (9/20/14; 1/5/14)

Portfolios are as varied as the teachers who create them. That said, there are certain key elements that drive a portfolio's utility as an assessment piece.

Elements of a Portfolio
Many different kinds of authentic evidence may be included in a portfolio (see Table 6.3). Regardless of the specific artifacts included, educators, children, and parents should all be involved in a portfolio's development and utility if it is to meet the criteria for performance-based assessment (Seitz & Bartholomew, 2008).

Table 6.3: Types of authentic evidence for portfolios
  • Writing samples: May include journal entries, stories, scribble progressions, and so on
  • Art projects: Examples of painting, drawing, coloring, cutting, gluing, and so on
  • Academic work samples: May include more formal academic documentation, such as traditional worksheets, science journal entries, and data collection, like graphs
  • Videos: Used to record performances (that is, singing), physical accomplishments (that is, climbing a structure on the playground), interactions (dramatic play with others), and so on
  • Photographs: Can document children's physical abilities, like gross and fine motor skills, or can be used to more easily collect other forms of evidence, like a large piece of art or a sculpture created outside with natural materials—neither of which could be kept or stored easily
  • Skills checklists: Used to easily note which skills children have accomplished and when
  • Anecdotal notes: Used to record children's words and actions
  • Learning stories: Narrative account written by a teacher of an event that represents growth in a child; examples can be found at http://tomdrummond.com/learning-stories
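For programs that keep portfolios digitally, the evidence types in Table 6.3 can be tagged so that entries are easy to filter by domain, standard, or date, in the spirit of the systematic organization described above. The sketch below is a hypothetical illustration only; the field names and helper function are assumptions, and the sample entry simply echoes the Sean photograph item from Figure 6.3.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative sketch of a digital portfolio entry; field names are hypothetical.
@dataclass
class PortfolioEntry:
    child: str
    artifact_type: str   # e.g., "photograph", "anecdotal note", "work sample"
    description: str
    domain: str          # e.g., "language", "motor", "cognition", "social-emotional"
    standard: str        # learning standard the artifact documents
    collected_on: date

portfolio = [
    PortfolioEntry("Sean", "photograph",
                   "Working with a group on a science problem",
                   "social-emotional", "Soc. Emotional Standard 1.0",
                   date(2014, 12, 15)),
]

def evidence_for(standard, entries):
    """Return all portfolio entries that document progress toward one standard."""
    return [entry for entry in entries if entry.standard == standard]

print(evidence_for("Soc. Emotional Standard 1.0", portfolio))
```

Tagging entries this way also supports the "collect, select, reflect" cycle discussed next, since redundant or off-purpose artifacts are easy to spot and prune.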
Four primary elements define a portfolio, according to Seitz and Bartholomew (2008):
1. Roles. Of the three players involved in developing a portfolio, each has particular responsibilities, yet there is overlap. Teachers facilitate portfolio utility, students create products for the collection, and parents provide support and encouragement. All members are responsible for collaborating and reviewing entries while learning from the process.
2. Big picture. Factors to be considered when envisioning a portfolio are standards and learning outcomes and developmentally appropriate benchmarks. Each element added to the portfolio should show evidence of progress toward learning outcomes.
3. The collection. The collection can be guided by the mantra "collect, select, reflect." When collecting content (photos, paper products, videos, anecdotal notes), the pieces chosen should illustrate maturation in skills, knowledge, and behavior according to mastery of assessment standards. Archival selection is thoughtful, and not every item produced by a child should be entered. A teacher can help ensure the portfolio contains sufficient evidence to document progress while also parsing the collection for redundancy, irrelevancy, and efficiency. At regular intervals, careful reflection on collection entries helps children appraise progress and consider what has been learned and how they can use the information to grow. At the same time, ECE professionals analyze the entries, looking for evidence of progress or concerns.
4. Presentation. Children are able to share their portfolio with parents or others, often with help from teachers. As children become familiar with and reflect on their own progress, they become ready to communicate their progress to others. Both informally (when children have made notable breakthroughs or accomplishments) and formally (during school–family meetings), teachers can set aside time for children to highlight their accomplishments. Since a portfolio reflects a child's journey, the emphasis is on the child's personal growth, and the telling of this journey belongs to children. This presentation should be a celebration of growth. Since there is always growth, there is always cause for celebration.

Challenge
Look over the work samples from a 3-year-old provided in Figure 6.4. Decide which of these artifacts you would include in a portfolio for this child, and why. Using the link below, find your state early learning standards and document which 3-year-old standards (that is, physical, language, social–emotional, cognitive) are being met by the samples you choose to include.
State-by-state links to Early Learning Guidelines

Figure 6.4: Sample artifacts
A. Sticker and marker drawing of "a lady"
B. Alphabet BINGO card using watercolor paints to mark known letters
C. Drawing of "mommy and daddy" and an "elephant"
D. Thank-you note, "Thank you for the bracelet, Grandma. I love you."

6.3 Project-Based Learning and Assessment

According to Gilbert (2005), knowledge has evolved from a noun to a verb. Knowledge is more than what we have; it is what we do. That is, as we act on the world around us, knowledge is created and new meaning is derived. Participating in activities that channel our experiences is knowledge in action, or project-based learning. Project-based learning is consistent with Reggio Emilia, Montessori, constructivism, and other dominant approaches to early education (Helm & Katz, 2011). Each of these approaches is based on the principle that children learn by doing and become active problem solvers, and in doing so make meaning of their world. The characteristics of project-based learning also align with the performance-based assessment criteria explained earlier, whereby children set about resolving a complex task, usually in groups.
In very early childhood, a project is often seen in the context of problem solving. It has been found that parents facilitate problem solving by offering scaffolded tasks that require age-appropriate critical thinking, such as saying to a 3-year-old, "The birds get hungry in the winter because there is not as much food. What can we do to help the birds?" Even parents of infants can provide challenges, such as hiding a toy inside a box and asking, "Where did the toy go?" (Carlson, 2003). In this case exploring a problem is the project, and parents are interested in if or how the problem is solved, what kind of answer is found, and if any related side effects, such as social skill and language development, accrue. These early problem-based activities with parents have been associated with increased levels of self-regulation (Carlson, 2003).

Assessment of Project-Based Learning
There are many ways to assess children's problem-solving performance in project-based learning. Presentations, live performances, demonstrations, group work, and experiments all offer students opportunities to engage in creative processes that require critical thinking. Even more traditional written assessments can become performance based if they are designed around problems derived from the real world in areas such as social–personal, performing and visual arts, community and global, as well as STEM problems—Science, Technology, Engineering, and Math (Schurr, 2012). Although some of these examples are not appropriate for very young children, those in early elementary grades can engage in a range of complex activities through project-based learning experiences.

Final products produced through project-based learning may be assessed using checklists, rating scales, and observations, all of which can be used to document specific skills or standards. During the assessment, educators can also suggest alternate conclusions, changes, next steps, and ways to improve (Schurr, 2012). Since deep learning is facilitated through project-based learning, assessment should provide depth and breadth. In other words, educators should assess more than simple knowledge. Children's work should reveal their ability to use this knowledge, make connections, evaluate their work, and ask new questions. Assessment should come in a variety of meaningful forms, all intended both to measure progress and enhance formative growth. For example, photographs, anecdotal records, skills checklists, and work samples can all be used to assess children's new understandings over the course of a project.

Rubrics
A common tool for evaluating artifacts and entire projects is a rubric, which breaks complex skills into smaller units and evaluates each separately (see Figure 6.5). Subscores can be combined at the end to provide an overall assessment. For example, a teacher who wants to assess a group presentation might evaluate the following subskills: knowledge of the subject, communication, and collaboration. In developing the rubric, each of these subskills will be further broken into qualitative descriptors that range from poor performance to exemplary performance, with different point values assigned by proficiency.

Figure 6.5: Criteria behaviors for each quality indicator
Rubrics can be used to examine proficiency of skills or performance using a gradient scale. This rubric looks at the criterion of group work through a range of demonstrated levels of proficiency.
  • Not seen: Child does not engage with other children, does not complete tasks assigned to role in group.
  • Emerging: Child engages with other children, completes some assigned role tasks.
  • Developing: Child problem solves with other children, completes assigned role tasks, and is integral to team completion of project.
  • Proficient: Child problem solves with other children, shows leadership in helping and engaging other children, makes suggestions and includes ideas of other children toward solutions, completes all personal work, and helps define and work toward completion of project.

Rubrics have broad utility as a generalized assessment method and are applicable to most learning outcomes. For example, a rubric may be used to evaluate handwriting samples, art projects, and social and play skills. There are two main advantages of using rubrics over traditional assessments. First, assessors' attention is focused on identifying criteria and measuring student work accordingly, thereby reducing the arbitrariness and unreliability in scoring. Second, those who are being assessed can be told what is expected of their performance or work based on the criteria established in the rubric.

The best rubrics give the steps in the assignment or task in explicit language for the learner, identify the process that the learner must use, and describe what the final product or performance will look like. Thus, the rubric guides the learner in the acquisition of the skill, process or knowledge required. (Mindes, 2011, p. 122)

Today rubric use is ubiquitous at all levels of education. One might think that because rubrics are so widely used, they are also easy to create. This is not the case. To create a rubric that is useful for both the evaluator and the student, the language must be direct and complete. However, it is challenging to anticipate and describe all the possible ways that a person might demonstrate progressive mastery of criteria, which might explain why rubrics are often frustrating to develop and unsatisfying to score (Andrade, 2005).

To alleviate this frustration, guidelines for building a project-based rubric are explained below. Andrade (2005) and Baryla, Shelley, and Trainor (2012) provided the following tips for designing quality rubrics:
1. Descriptions of criteria must be written clearly and define explicit, observable behavior.
2. Criteria should be aligned to learning standards while avoiding jargon.
3. Criteria should be specific to the task; generalized rubrics tend to be less useful (for example, a rubric that defines proficiency as "age-appropriate drawing" is much more difficult to score than one that says "draws basic figures with three details").
4. Quality indicators must be distinct; when descriptions under these categories are vague or indistinct, the rubric is reduced to a pass/fail checklist, which is not the intention.
5. Descriptions of behavior should be stated positively as much as possible (for example, writing "child gives suggestions to other children to help solve problems" is positive, whereas "child does not help group solve problems by giving suggestions" is discouraging).
6. Descriptions of behaviors within indicators should be succinct, not overly detailed or lengthy.
7. Avoid using too many criteria, particularly when there may be overlap.
Designing a Rubric
The first step in designing a rubric is to identify the critical attributes of a high-quality project. For example, reasonable indicators for evaluating children's work on a group project may include positive teamwork, proficiency of language use, creativity of solutions, effort put forth, content knowledge, and production of a meaningful outcome. While teachers may use a variety of sources to identify key criteria, learning outcomes and standards should always be considered. Generally, four to six criteria are sufficient for selecting critical indicators.

Once the indicators have been selected, the next step is to set up qualitative categories of proficiency. As mentioned earlier, degrees of proficiency are basically an articulated Likert scale. A Likert scale rates behavior on a gradient, ranging from low to high. People complete Likert questions all the time but are usually not aware of how these are designed. Figure 6.6 shows an example of a Likert item.

Figure 6.6: Sample Likert scale
A sample rating item asks the reader to "rate child's group work on a scale of 1 to 4." The numbers correspond with the categories of unsatisfactory (1), emerging (2), developing (3), and proficient (4).

The reader may see from this scale that rankings are highly judgmental, requiring respondents to possess their own barometer of group work that may vary considerably from professional to professional. Even though well-trained observers may be able to use such a scale to their advantage, a rubric takes a Likert scale to the next level by adding depth and dimension to each of the ratings. Adding behavioral descriptions to the ratings should facilitate greater understanding and reliability of scoring the same behavior. After determining the range of quality indicators (usually three to five categories), a rubric developer will then fill in the narrative details to explain what behaviors exemplify the intersection between criteria and quality indicator (see Figure 6.7).

Figure 6.7: PBA in action: Rubrics
Rubrics provide a manageable way to assess multiple components of students' work or performance. The example rubric scores play skills across four quality indicators: not present (0, requires a high level of scaffolding), developing (1, requires some scaffolding), emerging (2, independently performing skills with little prompting), and mastery (3, independently performing skills with no prompting).
  • Enters groups of two to three children: rarely demonstrates without prompting (0); enters group with verbal prompting (1); occasionally enters group independently (2); consistently enters group when opportunities arise (3).
  • Imitates others during play: rarely demonstrates (0); imitates adults during turn-taking play with adults (1); imitates peers and adults occasionally without prompting (2); frequently imitates others and prompts others to imitate self (3).

When the rubric is complete, all the cells will be clearly defined. Rubrics provide rich information regarding a child's development; if the same ones are used over time, these rubrics can show patterns of progress toward proficiency. To see these patterns of progress, professionals will compare a child's current performance to his or her past performance, rather than comparing one child to another.
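To show how a completed rubric such as Figure 6.7 might be scored and tracked over time, here is a minimal, hypothetical sketch: each quality indicator is mapped to its point value and the subscores are summed. The criterion names and sample ratings are assumptions for illustration, not a scoring scheme given in the chapter.

```python
# Illustrative sketch: scoring a play-skills rubric like Figure 6.7.
LEVELS = {"not present": 0, "developing": 1, "emerging": 2, "mastery": 3}

def total_score(ratings):
    """Sum the point values across all rubric criteria for one observation."""
    return sum(LEVELS[level] for level in ratings.values())

october = {"enters groups": "developing", "imitates others during play": "not present"}
january = {"enters groups": "emerging", "imitates others during play": "developing"}

print(total_score(october), "->", total_score(january))  # prints "1 -> 3"
```

Comparing the two totals, or better, the level reached on each criterion, mirrors the chapter's point that progress is judged against the child's own earlier performance rather than against other children.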
Using Available Rubrics
Many rubrics have already been developed and are widely available to EC professionals. When possible, it makes sense to use these rubrics rather than creating new ones, considering that someone has already completed the hard work of developing and testing the tool. Several digital sources—such as teacher websites and digital applications—not only provide predeveloped rubrics, but also provide templates and ways to score and store rubrics electronically (for example, see iRubric).

On the other hand, since it is challenging to develop a good rubric, many poor ones exist. The Center for Teaching and Learning (2014) described several characteristics to look for when evaluating a rubric, including:
1. Are the indicators written clearly and concisely so that children can understand?
2. Do indicators provide sufficient information to guide children's actions?
3. Is there sufficient range in the indicators to span children's possible performance levels?
4. Are the criteria defined in a way that permits scorers to evaluate performance in an accurate and unbiased way?
5. Are all the criteria equally important?
6. Does the rubric evaluate both how children perform (process) as well as the quality of the outcome (product)?
When a good rubric cannot be found, teachers must either fall back on their own rubric design expertise or take an existing rubric and adapt it.

Challenge
Write descriptions for each of the empty cells in this performance rubric. For each of the criteria below, provide a behavioral description for the quality indicators not seen, emerging, developing, and proficient:
  • Group work
  • Language use
  • Creativity
  • Effort
  • Content knowledge
  • Meaningful outcome
Refer to the Appendix for the answers.

6.4 Other Forms of Performance-Based Assessment

There are a variety of ways that EC professionals can go about collecting PBA in natural settings. The most authentic PBAs will result from the intentional selection of a specific mode of assessment in order to produce the most accurate assessment of a child's skills, abilities, or behaviors. This can be achieved through either teacher-designed or commercial forms of assessment.

Interactional Assessments
Perhaps the most organic assessment is that which takes place within the context of interactions with children. Interactional assessment is conducted when educators evaluate children while engaging in authentic activities with them. Typical daily activities offer opportunities for interactional assessment. Like all assessment, the process of collecting evidence of progress is purposeful, both in terms of behaviors that teachers expect to observe and strategies for collecting data. The interactional approach is particularly useful for assessing language and communication. Social interactions that may include puppet shows, reading, and meals are natural opportunities to assess language and other developmental areas.
Although interactions commonly take place between children and between other adults and children, interactional assessment is specifically structured to take place while an observer is taking part in the action. To educators, natural benefits of this authentic practice over traditional assessment include immediate feedback, scaffolding as needed, and acting and feeling credible as a conversation partner. At the same time, children engaged with their teacher or evaluator benefit from the immediate feedback and the activity's authentic feel.

The first step in interactional assessment is to determine what knowledge, skills, or behaviors are to be targeted. Once an observer-participant establishes the purpose, a means of assessing targeted behaviors can be determined, as well as how the data will be used within a formative assessment framework. Finally, it is equally important that observers allow themselves to be surprised by unexpected behaviors. If the purpose is defined too narrowly or the assessment technique too structured, observers are likely to overlook behaviors that are the very point of the assessment. For example, if a teacher is playing with a small group of children who are building a space station, her intent may be to observe and evaluate their vocabulary. If the focus is limited to language content alone, however, she may miss children's use of pragmatics, imagination, persistence, group problem solving, the isolation of some children, and fine motor development.

Consider the example of a speech therapist named Gina who records anecdotal notes of 3-year-old children's behaviors during breakfast. While Gina eats with five of the children, she purposefully attends to how her students' language is developing. Because Sam is working on following two-step directions, Gina makes a complex request: "Sam, please pour yourself some milk and then pass the pitcher to Anand." Sam passes the milk to Anand, but does not pour milk for himself. After making a note of this omission, Gina turns to two of the girls. "Rachel, your auntie said you went to the fair this weekend. Did you see Clara and her rooster in the Poultry Barn?" This interactional prompt is intended to facilitate peer-to-peer conversation, and Gina observes the two girls exchange information about the rooster and then transition to the topic of horses. Gina records a description of the length of this student-led interaction, their knowledge of fair-going content, and the extent to which they use pragmatic strategies to communicate successfully (such as turn taking, listening, staying on topic, asking and answering questions, and repairing miscommunication). All these observations provide substantive evidence of the degree to which Gina's students are moving toward mastery of learning standards.

The interactional assessment approach has a high potential for observer bias due to its subjective nature and the difficulty of being a participant assessor. That is, observers must balance the need to be a true partner in an activity with the need to carefully observe and take note of a wide range of planned and unplanned behaviors. At the same time, this approach is also likely to yield the most truthful behavior—that is, behavior that is most representative of a child's abilities to perform in the real world. For very young children (infants, toddlers, and preschoolers), the most natural context is play.
For example, an interactional assessment between a 20-month-old and an ECE professional that involves blocks makes it possible to see how the child moves, communicates, processes problems, and interacts socially—without administering a single test item. To administer interactional assessments accurately, ECE professionals must possess a deep understanding of the developmental expectations for the age of the children they are assessing. They must also establish and maintain joint attention and be capable of creating conditions that are likely to elicit a range of behaviors so that a child can perform at his or her best. It should also be clear that no single observation is likely to capture the breadth and depth of a child's behavioral capacity.

Today's teachers interact with and appraise students continuously, with both explicit and implicit intent. Doing this effectively requires sound, research-based, practice-oriented preservice training and continuous professional development to guide meaningful interactional assessment. Interactional assessment generates richer and more contextually meaningful information than assessment in which the observer is not a party to the activities (Ishihara & Chiba, 2014). Often, however, preservice teachers do not receive the necessary training and continued support to perform interactional assessments effectively. In addition to not receiving adequate training, ECE professionals do not always have the tools to help them make meaningful interactional assessments (Moreno & Klute, 2011).

Reflection
What are some adult behaviors that might limit an assessor's ability to accurately evaluate a child through interactional assessment?

Although interactional assessment may be informal and a part of every educator's daily routine, formal versions also exist. An example of a formal tool developed specifically to introduce authenticity into interactional assessment for infants and toddlers is the Learning Through Relating Child Assets Record (LTR-CAR) (Moreno & Klute, 2011). The first consideration in assessment is the developmental goal; keeping the LTR-CAR assessment indicators in mind, educators select typical activities, such as social play, that are likely to give the child the opportunity to demonstrate the skill(s) being assessed. For example, if the developmental goal is "delayed imitation of an adult," the educator might use an activity that involves turn taking and problem solving. During play, the assessor would look for, but not directly prompt (such as by asking, "How did I make a spider bed?"), spontaneous delayed imitation (Moreno & Klute, 2011).

It may seem that a standardized tool could not fit the character of a truly authentic interaction. Yet even for older students, a framework for performance and PBA is established in order to elicit behavior in a meaningful context. In fact, the assessment, teaching–learning, assessment cycle depends on systematic planning for and reflective assessment of child–professional interactions (Grisham-Brown, Hallam, & Brookshire, 2006). The LTR-CAR is both an interactional assessment and a play-based assessment. However, not all play-based assessment involves actively engaging with children, as we will see in the following section.
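To make the data-collection side of interactional assessment more concrete, the following is a minimal sketch, in Python, of how an observer-participant such as Gina might record and summarize anecdotal notes electronically. It is offered only as an illustration: the field names (child, target, context, observed, note) and the sample entries are hypothetical and are not part of the LTR-CAR or any other published tool.

from collections import defaultdict

# Minimal sketch of an anecdotal log for interactional assessment.
# Each entry ties an observed behavior to the target skill it documents,
# so notes can later be summarized by child and by target within a
# formative assessment cycle. All names and entries are hypothetical.
log = [
    {"child": "Sam", "target": "follows two-step directions",
     "context": "breakfast", "observed": False,
     "note": "Passed the pitcher to Anand but did not pour his own milk."},
    {"child": "Rachel", "target": "peer-to-peer conversation",
     "context": "breakfast", "observed": True,
     "note": "Sustained exchange about the fair; took turns and stayed on topic."},
]

def summarize(entries):
    """Group notes by (child, target) so progress toward each target can be reviewed."""
    grouped = defaultdict(list)
    for entry in entries:
        grouped[(entry["child"], entry["target"])].append(entry)
    return grouped

for (child, target), notes in summarize(log).items():
    status = "observed" if any(n["observed"] for n in notes) else "not yet observed"
    print(f"{child} - {target}: {status} ({len(notes)} note(s))")

A structure like this also leaves room to log unplanned behaviors as new targets, which is consistent with the chapter's advice that observers allow themselves to be surprised.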
Play-Based Assessment

As discussed throughout this chapter, performance-based assessment is conducted within natural contexts for learning, based on developmentally appropriate practices. Until children reach kindergarten, and even then, best practice suggests that play is the most natural and effective context for learning many necessary social, emotional, cognitive, language, and motor skills. Thus, play-based assessment is performance-based assessment in which children's abilities are evaluated in the natural context of play.

As mentioned in earlier chapters, early childhood educators are increasingly feeling pressure to replace play-based curricula with academic-based curricula. According to Russo (2012), ECE professionals are put in a position of having to defend the value of play to administrators and parents, even as research supports the important connection between play and the nurturing of cognitive, social, and behavioral development, which does not necessarily evolve from more structured academic activities (Bergen, 2002). As a consequence, ECE professionals must be mindful of the value of 21st-century skills such as risk taking, social competency, imagination and creativity, and problem solving that are derived from play.

Reflection
When might it be best to assess play as an outside observer? As a partner in play?

Professionals can conduct play-based assessments during naturally occurring and child-regulated play, using either teacher-developed or research-based tools. A number of play-based assessment instruments have been developed, such as the Play Assessment Scale, the Transdisciplinary Play Assessment-2 (TDPA-2), and Playing in Early Childhood (Kelly-Vance & Ryalls, 2008). However, the potential of these tools to provide meaningful outcome information has not been realized, since they tend to lack rigorous standards of documentation, and scorers have difficulty achieving interrater agreement, particularly for cognitive behaviors (O'Grady & Dusing, 2014). For example, the TDPA-2 requires a 5-day training before one becomes a competent administrator. For the time being, educators may be better served by developing their own performance-based measures.

Still, there are compelling reasons to use play as the context for assessment. Play provides a natural and appealing context in which children can express themselves, which allows them to be comfortable and happy and thus perform their best. Further, play is child initiated and child maintained. This unscripted behavior reveals a wide range of emotions and relevant evidence of maturity across domains. Play-based assessment is both a learning and a demonstrating medium that allows adults to discover how children approach tasks and how they learn from their experiences. Consequently, play has the potential to enhance learning in many different ways.

It is therefore necessary to link assessment strategies to the purposes for which assessment is conducted. Play itself may be the target (such as independent, parallel, and reciprocal play), or professionals may be more interested in the learning that takes place through play, such as fine motor skills, problem solving, language use, and social–emotional development. Unlike interactional assessment, play-based assessment may or may not feature the observer's active participation in children's play.
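Because the chapter notes that scorers often struggle to reach interrater agreement on play-based instruments, it may help to see how agreement is commonly checked. The short Python sketch below computes simple percent agreement between two observers who rated the same children on the four-level scale used in the chapter's rubric examples; the children and ratings are invented for illustration, and published tools typically report more rigorous statistics (such as Cohen's kappa) alongside raw agreement.

# Percent agreement between two observers rating the same play episode.
# The four-level scale mirrors the chapter's rubric examples; the
# children and ratings below are hypothetical.
observer_a = {"Liza": "Proficient", "Raymond": "Developing", "Jack": "Emerging"}
observer_b = {"Liza": "Proficient", "Raymond": "Emerging", "Jack": "Emerging"}

def percent_agreement(ratings_a, ratings_b):
    """Share of children for whom both observers chose the same level."""
    shared = [child for child in ratings_a if child in ratings_b]
    if not shared:
        return 0.0
    matches = sum(1 for child in shared if ratings_a[child] == ratings_b[child])
    return matches / len(shared)

print(f"Agreement: {percent_agreement(observer_a, observer_b):.0%}")  # prints 67%

When agreement falls below a working benchmark (80 percent is often cited in observational research), the indicator wording is usually the first thing to revisit, which circles back to the rubric-quality questions earlier in the chapter.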
For one example of play-based assessment, suppose Mr. Marco observed his kindergarteners playing a game that they called "Groundies." By observing the children's invented game, Mr. Marco could see that Liza was a strong communicator; she explained the rules to Raymond, who had not played the game before. Mr. Marco also watched all the children climb, jump, run, and balance on the playground equipment, demonstrating many maturing gross motor skills. He used his phone to take a quick video of the children, and later, when reviewing the video, he noticed that Jack and Javier had resolved a conflict by talking through a misunderstanding without assistance from any of the adults on the playground.

As with other forms of assessment, play-based assessment (unless one of the tools mentioned earlier is used) should be designed by first considering its purpose. What does the observer hope to learn by observing a child? Next, the observer may select or arrange a typical early childhood activity that should give the child an opportunity to demonstrate the targeted behaviors. Finally, an assessment device or method must be selected or designed. Kelly-Vance and Ryalls (2008) recommend the following best practices in arranging the environment to facilitate play-based assessment:

1. Play-based assessment may be conducted in any authentic setting, including a home, child care center, school, or clinic.
2. Toys should be age appropriate. For example, if observing cognitive skills, toys that enable children to manipulate in different ways, construct, or problem solve should be made available.
3. The environment should not be chaotic, but also not pristine, as some clutter can lead to problem-solving opportunities that may not occur in a Spartan setting.
4. Free play may better reveal children's behavioral competencies than facilitated play; if the target behavior is not elicited or on display during free play, the observer may need to intervene and facilitate an opportunity for the child to exhibit the behavior.
5. Play with peers is appropriate if social skills or communication are important to observe, but it may hinder assessment if the observer is more interested in seeing the complexity of a child's cognitive skills.
6. Those conducting play-based assessment require a strong knowledge of child development in order to yield valid observations.
7. A variety of tools may be used to assess children in play, including video recordings, checklists, rubrics, narration, and so on.

Having established the conditions for assessing children's competencies during play, an EC professional may choose to be a part of play or to observe play without participating in it. This decision will influence the assessment's outcome, since children play differently when an adult is present than when playing alone or with other children. While observing children during play, whether facilitated or not, professionals should constantly keep learning objectives in mind. To remember what to look for, it is helpful to refer to learning outcomes multiple times a day. In doing so, professionals document what children can and cannot do relative to age-based expectations. If a child can perform a skill, planning should facilitate acquisition of more advanced skills.
For example, if a child is playing with other children with prompting, planning should focus on removing those prompts or shifting the prompting to other children (such as, "Billy, do you see someone who could help you cook breakfast?"). Similarly, if a child cannot yet perform a skill, the ECE professional may consider continuing instruction as planned or modifying instruction to provide more modeling, prompting, or feedback.

Routines-Based Assessment

For very young children, performance assessment takes place in the context of daily routines. Just as routines-based intervention is considered best practice for very young children, so too is routines-based assessment (McWilliam, 2010). For infants and toddlers, even those not receiving interventions, routines are activities that occur regularly and naturally during families' daily lives. These include dressing, eating, making transitions, playing, and so on. During these activities, infants and toddlers communicate, solve problems, move, and interact in their most natural ways. Additionally, the individuals who interact with children during these routines are typically family members. Thus, assessment is conducted relative to parents' or other consistent caregivers' satisfaction with children's behaviors during these routines. Finally, if assessment indicates a need, intervention is planned as embedded facilitation of learning within regular daily routines.

An early intervention specialist, Ms. Unrau, made weekly visits to see 14-month-old Kia, a child born with Williams syndrome, and her parents, Bridget and Michael. Using a routines-based intervention model, Ms. Unrau explained how Kia's parents could support her development while engaging in typical daily activities. This week Ms. Unrau filled out an observational log while watching Michael give Kia a bath and help her complete her range-of-motion exercises. Ms. Unrau provided a narrative of the type of information Michael could record between home visits to monitor progress (see Figure 6.8). In this way both parents and early intervention professionals directly link routines-based intervention to routines-based assessment.

Figure 6.8: Routines-based assessment observation log
Routines-based assessment is primarily used with very young children as they are observed in their natural routines in order to see developmental progress. This example shows parental data for Kia, who is working on range of motion during bath time. From the data, it is possible to see that Kia's parents are vigilant in following through with range of motion and that some movements were more difficult. Her parents and early interventionist can use the data to think about ways to improve Kia's range of movements.

Kia's Range of Motion Log (routine: bathing)
Instructions: While engaging in daily routines, check whether Kia completes each movement on the right (R) and left (L) side, and circle whether she completes the movement with full range of motion with or without parent assistance. Make a note when something happens during the routine that affects movement (for example, if she cries during a movement, the types of objects that motivate independent movement, or if she has partial but not full range of independent movement).

Arms: reach to front; reach out to side
Trunk: reaches across midline; reaches behind
Hips: flexes at hip; moves leg away from center; moves leg past center to cross

Sample comments from the log:
- Reach to front: "Kia refused to reach with her left arm and resisted assistance. Raised her arm to her shoulder with assistance."
- Reaches behind: "Kia was getting tired and did not like the plastic bear we used, so she did not reach behind on the right; she did attempt to reach behind for the bunny on the left, but needed assistance for full range."

Before planning an intervention, families are interviewed to determine needs and contexts for routines-based activities. Interviews may be designed according to the following routines-based outline, during which the interviewer should take complete notes, putting a star next to items that seem particularly important to parents:

1. Ask families to identify their main concerns about their child and how these concerns affect family members' lives.
2. Ask families to review a typical day, explaining regularly occurring events.
3. Highlight areas of concern. For example, "Every time I try to leave the house, Ainsley cries uncontrollably."
4. Rate routines. As parents complete their description of each routine, the interviewer may ask them to evaluate their satisfaction with that routine.
5. Final questions:
   a. What worries you most?
   b. If there was one thing you could change about your life, what would it be?
6. Summary: The interviewer summarizes the interview outcomes, putting emphasis on the routines that were starred.
7. Make choice(s) and sequence priorities. Based on the starred routines from the interview, parents identify and rank those that are of most concern (McWilliam, 2010).

This assessment approach leads directly to identifying the learning outcomes that are important to families and can be used to plan intervention. Recommendations, training, and implementation are targeted toward supporting families who wish to improve the flow of daily routines in ways that will advance the child's development.

Consistent with this family-centered assessment of child performance is ongoing assessment that is likewise family centered. For example, the Vanderbilt Home Visit Script (VHVS) is an interview that focuses on how families believe "things have been going" (McWilliam, 2004, p. 151). The interviewer is advised to follow evidence-based interviewing strategies that provide emotional support to families. These include making positive comments about the child, responding to family requests and indirect expressions of need, focusing on the entire family and not just on the child and/or primary caregiver(s), treating family members as if they were neighbors rather than "clients," and demonstrating empathy by considering issues from the perspective of families (McWilliam, Tocci, & Harbin, 1998). The VHVS has seven general questions, modified below, on which an interview will expand based on initial assessment and planning:

1. How has it been going?
2. Is there anything you would like to ask me?
3. How has it been going with respect to priority routines? (Ask about each priority separately.)
4. Is there a time of day that is difficult?
5. How is (each family member) doing?
6. Do you have, or have you had, recent or upcoming appointments?
7. Do you feel you are doing too much or not enough with your child?

During each regular visit with families, professionals repeat these questions so that parents both feel validated as effective caregivers and can be supported with additional recommendations, training, or referrals to other services. This authentic assessment procedure will be periodically augmented by more systematic standardized assessment, but as a formative tool, routines-based assessment provides the primary basis for intervention.

Problem-Based Assessment

Routines-based assessment is grounded in observing children as they go about their typical routines, and it often solves problems in doing so. Yet there is a form of PBA that is completely structured around children solving problems. Problem-based learning is a process by which children research, innovate, and resolve a specific problem. Hattie (2009) found that problem-based learning was a powerful strategy for enhancing higher-level cognitive skills such as critical thinking and creativity. For example, it is recommended that teaching science and math through inquiry in early childhood is best achieved when paired with problem-based questions (Wang, Kinzie, McGuire, & Pan, 2010). Fostering scientific thinking through performance-based activities is the basis for inquiry-based science instruction.

Since real-world problems are seldom discipline specific, it makes sense that performance-based tasks designed to foster scientific understanding would integrate curricular objectives for other disciplines, such as reading, writing, social skills, and technology. Indeed, an international interest in STEM has filtered down to early childhood, and many examples of problem-based curricula have been designed to promote scientific understanding. These include Pathways to Science, the Scientific Literacy Project, Scratchjr, and the Head Start on Science and Communication Program (Senocak, Samarapungavan, Aksoy, & Tosun, 2013). Science itself depends on measuring phenomena, so assessment considerations for students' acquisition of scientific knowledge are a natural partner to learning itself. Assessment may consist of informal measures such as rubrics, observations, competitions, and presentations, or gains in critical thinking may be measured using standardized tools such as the Test of Critical Thinking (Kim, VanTassel-Baska, Bracken, Feng, & Stambaugh, 2014).

In one example of problem-based assessment, suppose that Ms. Gladwell's second graders were learning about composting as part of a science unit on where food comes from. Intrigued by the idea, the children began to brainstorm ways they might start composting at their school. After researching a variety of approaches, they decided to start worm boxes. The second graders held a bake sale and helped host a community rummage sale to raise money to purchase compost bins and worms. Then members of the class created posters and visited the other classrooms in the school to explain the worm boxes. They started composting in October, and by May they were able to use the compost in the flower beds around the school. In doing so, they cut down on lunchroom waste by 25%. Ms. Gladwell evaluated this project during and after the work.
During the project, she assessed the process by scoring children on a rubric that included teamwork; the number, quality, and practicality of solutions; planning the composting and fund-raising activities as well as executing those plans; the children's own data collection; and their analysis of the success of the composting project. In addition to the process, Ms. Gladwell evaluated the final product, including factors such as impact, which was calculated by measuring the volume of waste and from interviews with school staff and with members of the other classrooms where the children presented; the quality of both the oral and the written/visual presentation of final results, which was assessed with a rubric; and the depth of understanding and application of the science of composting, which was assessed with a summative quiz.

Mathematics, too, provides a natural context for problem-based learning, because math is problem solving. Using a problem-based approach to understanding mathematics adds to children's interest and their elaboration of prior knowledge. It puts mathematics in motion. Infants, toddlers, and preschoolers develop early mathematical concepts when they play with clay, blocks, drawings, and visual schedules to solve problems related to mass, volume, quantity, addition, and time (Charlesworth & Leali, 2012). Again, assessment of these early math skills will focus on how children solve problems (the process) as well as what children conclude (the product). Measuring acquisition of math constructs and learning-outcome mastery may include observation, rubrics, checklists, permanent products, and verbal explanations.

Challenge
Consider the following problem-based activities.

Pre-K case: Children in a preschool program are given a whole-group challenge to match animals to their habitats. The learning objectives of this task include the following:

1. Identify five new North American wild and/or domestic animals in both aquatic and land environments.
2. Apply basic rules of ecology (for example, legs, fins, wings) to match animals to land, water, and sky.
3. Create an ecological mural, matching images (from magazines, drawings, photos) of animals to their respective habitats.
4. Explain to other children and adults the newly learned animals, their habitat(s), and the reasons why particular animals were matched to particular habitats.

Relevant Learning Standard (IL) 4, 21 months to 36 months: Children use their communication skills to indicate interest in observations, experiences, and engagement with the world around them. Children actively experiment with their environment to make new discoveries.

To build a base, EC professionals will introduce early ecological concepts through activities such as reading stories, relating previous experiences, and watching short videos about many different animals (some known and some new) that live in forests, lakes, and streams, as well as those that fly. During these activities, the characteristics that enable animals to adapt to their environments (such as wings, fur, legs, and fins) will be explained. Children will act out animal parts, pretending to have the characteristic features relative to habitats. Finally, children will design, draw, color, and populate earthly environments with the respective animals.

K–3 case: Three first graders are given the problem of identifying solutions for abandoned pets in the community.
Learning Objective 1: Students will identify the main causes of the problem.
Learning Objective 2: Students will use the team problem-solving process to brainstorm, evaluate ideas, and develop solutions.
Learning Objective 3: Students will present their final solution to the class using a chart.

Relevant Common Core Standard (CA) 1.5.1: Students describe the human characteristics of familiar places and the varied backgrounds of U.S. citizens and residents in those places. Recognize the ways in which they are all part of the same community, sharing principles, goals, and traditions despite their varied ancestry; the forms of diversity in their school and community; and the benefits and challenges of a diverse population.

After researching the topic, reading stories about abandoned pets, and interviewing friends and family about their experiences with abandoned pets, the children are ready for more real-world research. At the students' request, the teacher arranges for them to meet with the director of an animal shelter, a pet control officer, and a philanthropist. Later, the parent of one of the teammates arranges an afternoon field visit by the group to the animal shelter. When these activities are completed, the children are ready to make recommendations to the class for how to reduce the number of abandoned pets.

Based on these cases, complete the following:

1. Write three behaviors you might assess through performance on the animal–habitat matching task and on the abandoned pet project.
2. Think of five different ways the first-grade team members could demonstrate their knowledge and skills through this project.
3. Think of three ways the preschoolers might demonstrate their understanding of habitats.
4. How does your assessment approach differ from preschool to early elementary?

Commercial PBA Tools

In addition to the modes of assessment previously discussed—which are often teacher designed—commercial PBA tools are also available. The two commercial assessments used most frequently by EC educators are the Work Sampling System (WSS) and the Creative Curriculum/Teaching Strategies Gold (CCTS) (Susman-Stillman et al., 2014). These tools represent the future of early childhood assessment, since both systems offer authentic, comprehensive, developmentally appropriate, standards-based, and curriculum-referenced instruments that may be used online. They also allow data to be compiled and reports to be generated using online portfolio analysis. Some ongoing assessment tools, such as Teaching Strategies Gold, include a reliability certification for teachers to complete when teacher observation is part of the assessment process.

Creative Curriculum/Teaching Strategies Gold

The Creative Curriculum/Teaching Strategies Gold is the most frequently implemented commercial assessment system. For example, Creative Curriculum is adopted twice as often in preschool programs in Florida as the next most frequently used assessment tool (Flanagan & Greenberg, 2013). This program was designed to be an authentic and standards-based framework for assessing student progress and linking outcomes to curricular activities (Lambert, Kim, & Burts, 2014). As such, Creative Curriculum fits well with, and has been adopted as a part of, the Head Start Outcomes Framework.
Designed for children from birth through kindergarten, the assessment uses 38 predictors of school success to focus educators' attention on gathering evidence to support mastery. Teacher observation checkpoints and prompted "opportunity cards" that ask guided questions to support observation and documentation are supplemented with an online portfolio system, which is linked to an automated report-generating program. Children's skills, concepts, and behaviors are observed as they perform daily activities.

The CCTS focuses on nine developmentally appropriate areas, including four developmental domains (cognition, language, motor, and socioemotional) and five content areas (literacy, math, social studies, arts, and science/technology) (Lambert, Kim, Taylor, & McGee, 2010). For example, under the objective "Demonstrates positive approach to learning" are four developmental indicators that range from "Pays attention to sights and sounds" to "Sustains attention to tasks or projects over time (days/weeks), and returns to activities after interruptions." Under each of these indicators, which show a progression from "not present" to sophisticated "mastery," teachers enter narrative explanations, document files, or video clips that represent a child's performance based on daily observations and interactions. An example of a narrative entry would be, "Latreisha returned to work on her storyboard 2 days after she started."

Many educators download the CCTS app to their phones, whereby they can easily record video of a child's performance to illustrate mastery of a standard. The video clip can then be sent directly to the CCTS database, where progress is documented. For example, suppose that during an art project, Jillian is observed holding a marker with her thumb and two fingers (progress over a previous assessment in which she held the marker with her thumb and four fingers). Jillian's therapist takes a 5-second video using her smartphone and immediately sends the video file to the central database.

The Work Sampling System

The Work Sampling System has been in use for some time and is the second most commonly used commercial program for assessing young children (Susman-Stillman et al., 2014). Like the Creative Curriculum, WSS is designed for performance assessment that accounts for progress in children's skills, understandings, and behaviors. Assessment is facilitated through checklists, portfolios, and the generation of meaningful reports. Seven preschool through third-grade learning domains are assessed in WSS: socioemotional, language/literacy, art, social studies, mathematics, scientific thinking, and motor. Using a very structured framework, teachers are guided to make selections and score children's work using carefully scaffolded judgments about their performance (Brookhart, 2011). Brookhart (2011) concluded that when teachers lack the support of this structured scaffolding, they have difficulty reliably assessing portfolios, which can affect later decision making. Since the purpose of formative assessment is to inform teaching decisions that optimize learning opportunities, it is necessary both to collect good data and to interpret it correctly.

Summary and Resources

Chapter Summary

Performance-based assessment is an authentic way to evaluate children as they go about the business of learning through real-world activities. Early childhood educators agree that PBA, or authentic assessment, is a cornerstone best practice.
A natural link exists between early childhood best practices and performance-based assessment, as both are predicated on the most natural and meaningful constructs of child development. PBA also lends itself to the most basic of assessment purposes, which is to inform teaching so as to provide children with rich and purposeful experiences suited to their needs. Because PBA is primarily a formative assessment approach, it makes sense that the types of activities that lend themselves to ongoing assessment are themselves dynamic, continuous, and ever changing.

Portfolios constitute a way to collect and analyze artifacts of progress that include various forms of PBA. Portfolios should be designed with purpose, whereby ECE professionals select, reject, and reflect on items that represent growth according to the standards or objectives identified by the developer. Forms of PBA discussed in this chapter include interactional assessment, play-based assessment, routines-based assessment, problem-based assessment, and commercially designed performance evaluation systems. Perhaps the most common way to score performance using each of these methods is a rubric. Even though each of these popular methods falls under the authentic assessment umbrella, research on their validity and reliability is not extensive. In fact, research suggests that due to the subjective nature of PBA, there is a high risk and occurrence of bias in scoring PBAs (Kane & Mitchell, 2013). Recommendations for improving PBA's effectiveness include better preparing EC personnel in child development; training providers to collect and analyze data accurately; and intentionally matching the assessment strategy with the purpose for which outcome information will be used.

Posttest

1. Performance-based assessment and principles of developmentally appropriate practices are compatible because
a. both emphasize learning in natural contexts
b. both rely on the Common Core mandate
c. both place a high value on early academic learning rather than play
d. both came from work conducted by the NAEYC

2. Which of the following is an advantage of performance-based assessment over traditional assessment practices?
a. PBA's emphasis on formal evaluation results in highly valid outcomes.
b. PBAs are easy to design and interpret.
c. PBAs are authentic, giving children the best opportunity to show their full capabilities.
d. Parents, paraprofessionals, and professionals require no training to be proficient in using PBA.

3. Which of the following is a disadvantage of performance-based assessments?
a. They are costly to design and interpret.
b. They are more subject to bias than standardized tests.
c. They are less useful for making instructional decisions for a child than are screening tools.
d. They cannot be aligned with learning objectives.

4. Which of the following is NOT a strategy for increasing the quality of performance-based assessment data?
a. Conduct a statistical item analysis to establish construct validity.
b. Use multiple assessments and assessment procedures.
c. Write the assessment in a way that behavior can be observed rather than inferred.
d. Have a second evaluator compare outcomes to determine if there is agreement and evidence of reliability.

5. All of the following are examples of a typical portfolio entry EXCEPT
a. photographs
b. work samples
c. skills checklists
d. evaluations of teachers by administrators

6. Which of the following is a principle that should be followed in developing portfolios?
a. Portfolios should be loosely organized, with few boundaries that would limit what and when items are added to the collection.
b. Students should be included in designing, selecting, and assessing portfolio entries for progress.
c. Portfolios should be shared with all relevant team members but are not typically useful to parents.
d. Teachers should enter all of a child's work into his or her portfolio.

7. Children's conceptual knowledge and critical thinking as developed through problem-based learning should be
a. determined by weekly testing
b. assessed annually through high-stakes testing
c. determined by assessing how children solve problems and evaluating their solutions
d. decided by assessing the product rather than the process

8. A good rubric
a. has a large number of criteria and indicators
b. should have indicators that overlap with at least 50% of adjacent indicators
c. is not difficult to create
d. is aligned to expected learning objectives

9. Play-based assessment
a. is conducted during dramatic group play
b. assumes that play is the best early childhood context for learning social, cognitive, language, and motor skills
c. is conducted by contriving play situations to elicit specific developmental behaviors
d. is associated with play therapy to help children with emotional disabilities

10. Performance-based assessment outcomes are used to make instructional decisions that may include any of the following EXCEPT
a. placing a child in a program or grade relative to the child's IQ
b. continuing instruction without making any changes
c. changing logistics, such as when instruction is given, how long a child spends on a task, or how many opportunities are provided for learning
d. changing the goals to be more or less advanced to more closely match a child's development

Answers: 1 (a), 2 (c), 3 (b), 4 (a), 5 (d), 6 (b), 7 (c), 8 (d), 9 (b), 10 (a)

Critical-Thinking and Discussion Questions

1. Although PBA is considered a best practice in ECE, it is also easily influenced by bias. What steps might you take to ensure that the PBA you use is bias free?
2. Of all the ways to document PBA that were discussed in this chapter, which do you think is the best for documenting play-based assessment? Routines-based assessment? Problem-based assessment? Why?
3. Portfolios can include a wide range of artifacts. What artifacts do you think are most valuable? Why?
4. How do performance-based assessments differ from observational assessments (which were discussed in Chapter 4)?
5. Who might you work with to build interobserver reliability of your PBA? How might you work with them to do so?

Additional Resources

Learn more about creating digital portfolios in order to easily manage students' work.
http://technologyinearlychildhood.com/2013/06/06/using-digital-tools-to-create-a-portfolio-for-your-students

This news article, "Assessing Young Children: What's Old, What's New, and Where Are We Headed?," details how PBA fits into the larger context of assessment in ECE.
http://www.earlychildhoodnews.com/earlychildhood/article_view.aspx?ArticleID=210

The article "Tool Trend: Using Early Childhood Rubrics" provides an explanation for using rubrics in ECE as well as examples.
http://www.brighthubeducation.com/teaching-preschool/77875-using-rubrics-in-the-preschool-classroom

This book chapter, titled "Best Practices in Play Assessment and Intervention," profiles how play-based assessment can be used to support early intervention.
http://www.nasponline.org/publications/booksproducts/bp5samples/549_bpv71_33.pdf

This article from Education World, "Project-Based and Problem-Based Learning in Early Childhood," provides a clear and easy-to-follow narrative about the ways in which project-based and problem-based learning are similar and different.
http://www.educationworld.com/a_curr/virtualwkshp/virtualwkshp002.shtml

The Project TaCTICS website provides more of the "how to" of routines-based assessment.
http://tactics.fsu.edu/modules/modOne.html

You can learn more about analytic and holistic rubrics, as well as a third type of rubric, through this easy-to-read "Know Your Terms" article.
http://www.cultofpedagogy.com/holistic-analytic-single-point-rubrics

Answers and Rejoinders to Chapter Pretest

1. False. Performance assessments measure active engagement and the outcome of active engagement in authentic learning experiences. Although worksheets, exams, and screening tests have their place, they do not capture the process of learning or the outcome of learning by doing.
2. True. Performance assessment is linked to activities that are themselves authentic relative to children's age and developmentally appropriate expectations. Assessment of growth aligns directly with learning outcomes in natural settings, where children engage in real-world activities.
3. True. Student growth is subjectively assessed using a variety of authentic tools. To the extent that learned behavior is described clearly, evaluator bias is removed. However, evaluators will still make relative judgments about children's abilities. These can be influenced by previous experiences with children, which ...

Explanation & Answer

Attached is the completed work. Thank you.

Performance assessment
Name
Course
Professor
25th May 2017

Performance assessment
Name of institution
Address
25th May 2017

Dear Parents/Guardians,
RE: Performance assessment of your children
This letter is meant to inform you about our institution's decision to use performance assessment in monitoring your child's growth. Ther...

