Paper reviewing course objectives listed below:

Anonymous

Question Description

Please review the directions below carefully! This paper should include peer-reviewed articles and the attached course content ONLY! Please cite in text and within the reference section using APA format. This paper does not have any length requirements. Please attempt to describe and define objectives using layman's terms and keep direct quotes and paraphrasing to a minimum. CITE ALL OUTSIDE INFORMATION. Please incorporate specific examples into each objective.

DIRECTIONS:

For the final, you will use the course objectives (listed below) and create a project that displays your understanding of the objectives of the course. The display of knowledge can take any shape or form (i.e., paper, video presentation, or etc.) but should provide the reader (someone who is just learning this content) with a clear picture of your understanding of the content (definitions) and how the concepts are related (those that are and those that aren’t).

Objectives

By participating in and completing the course, students should be able to:

  1. Define measurement, measurement scales, and scoring
  2. Differentiate between the various testing applications (e.g., norming, equating, etc.)
  3. Apply and define characteristics of Classical Test Theory
  4. Explain and differentiate between reliability and validity
    1. Consistency of measurement
    2. Interrater reliability
    3. Consistency of ratings
    4. Various types of validity
  5. Understand and apply Item Analysis
  6. Articulate Item Response Theory
  7. Understand and apply basic statistical terms
  8. Applications of Formative Assessments

Elements being graded:

  • Are all course objectives covered, including 4a-4d?
    • Hint: Use the course objectives as headings and subheadings.
  • Does the project show your understanding of the definitions for each objective listed?
  • Does the project show your understanding of how some/all concepts are related?
  • Does the project show your understanding of how some/all concepts are not related?
  • Would a person who does not know this content understand the information as presented?

Think outside the box – diagrams, concept maps, etc. I am not looking for a standard paper – I want you to present the information in a way that makes the most sense to you. I am grading for content coverage (90%) and a clean presentation (10%) of the information.

Unformatted Attachment Preview

STATE COLLABORATIVE ON ASSESSMENT AND STUDENT STANDARDS
TECHNICAL ISSUES IN LARGE-SCALE ASSESSMENT

A Practitioner’s Introduction to Equating with Primers on Classical Test Theory and Item Response Theory

Prepared for the Technical Issues in Large-Scale Assessment (TILSA) State Collaborative on Assessment and Student Standards (SCASS) of the Council of Chief State School Officers (CCSSO) by Joseph Ryan, Arizona State University, and Frank Brockmann, Center Point Assessment Solutions. June 2009

The Council of Chief State School Officers

The Council of Chief State School Officers (CCSSO) is a nonpartisan, nationwide, nonprofit organization of public officials who head departments of elementary and secondary education in the states, the District of Columbia, the Department of Defense Education Activity, and five U.S. extra-state jurisdictions. CCSSO provides leadership, advocacy, and technical assistance on major educational issues. The Council seeks member consensus on major educational issues and expresses their views to civic and professional organizations, federal agencies, Congress, and the public.

State Collaborative on Assessment and Student Standards

The State Collaborative on Assessment and Student Standards (SCASS) projects were created in 1991 by the Council of Chief State School Officers to encourage and assist states in working collaboratively on assessment design and development for a variety of topics and subject areas. These projects are organized and facilitated within CCSSO by the Division of State Services and Technical Assistance.

Technical Issues in Large-Scale Assessment (TILSA)

TILSA is part of the State Collaborative on Assessment and Student Standards (SCASS) project whose mission is to provide leadership, advocacy, and service by focusing on critical research in the design, development, and implementation of standards-based assessment systems that measure the achievement of all students. TILSA addresses state needs for technical information about large-scale assessment by providing structured opportunities for members to share expertise, learn from each other, and network on technical issues related to valid, reliable, and fair assessment; designing and carrying out research that reflects common needs across states; arranging presentations by experts in the field of educational measurement on current issues affecting the implementation and development of state programs; and developing handbooks and guidelines for implementing various aspects of standards-based assessment systems. This long-standing partnership has conducted a wide variety of research over the years into critical issues affecting the technically sound administration of K–12 assessments, including research on equating, setting cut-scores, consequential validity, generalizability theory, use of multiple measures, alignment of assessments to standards, accommodations, assessing English language learners, and the reliability of aggregate data. In addition, TILSA has provided professional development in critical topics in measurement for its members. The partnership has developed technical reports on each topic it researched. In addition, TILSA has produced guidelines for developing state assessment programs.

The Council of Chief State School Officers’ Board of Directors
T. Kenneth James (Arkansas), President; Susan Gendron (Maine), President-Elect
Directors: Michael Flanagan (Michigan), Steve Payne (West Virginia), Susan Castillo (Oregon), Judy Jeffrey (Iowa), Alexa Posny (Kansas), Gerald Zahorchak (Pennsylvania), Jim Rex (South Carolina)

CCSSO Staff and Consultants
John Tanner, Director, Center for Innovative Measures
Robert M. Olsen, Senior State Collaborative Manager
Doug Rindone, TILSA Coordinator
Duncan MacQuarrie, TILSA Assistant Coordinator
Phoebe Winter, TILSA Consultant

Council of Chief State School Officers
State Collaborative on Assessment and Student Standards
One Massachusetts Avenue, NW, Suite 700, Washington, DC 20001-1431
Phone (202) 336-7000, Fax (202) 408-8072
www.ccsso.org

Copyright © 2007–2009 by the Council of Chief State School Officers. All rights reserved.

Table of Contents

Acknowledgements

INTRODUCTION
  1. Audience
  2. Purpose
  3. Industry Standards and Other References
  4. How This Handbook Is Organized
  5. Conventions Used in This Handbook

CHAPTER 1: AN OVERVIEW OF ASSESSMENT, LINKING, AND EQUATING CONCEPTS
  1-A. Valid Inferences about Students: The Purpose of Assessment
  1-B. Validity and the Industry Standards
  1-C. Learning Targets
  1-D. Linking and Equating
  1-E. Common Misconceptions about Equating

CHAPTER 2: A PRIMER OF CLASSICAL AND IRT MEASUREMENT THEORIES
  2-A. Fundamental Concepts of Classical Test Theory
  2-B. Fundamental Concepts of Item Response Theory (IRT)
    Basic IRT Models
    Conceptualizing IRT under the 1-Parameter or “Rasch” Model
    The 2- and 3-Parameter IRT Models
    Scoring Method
    Parameter Invariance and Scale Indeterminacy with IRT Models
    Numbers, Scales, and Scaling
    Common IRT Uses and Applications

CHAPTER 3: BASIC TERMS, CONCEPTS, AND DESIGNS FOR EQUATING
  3-A. Equating Designs
    Equivalent Groups (Random Groups) Design
    Single Group Design
    Single Group Design with Counterbalancing
    Anchor Test Design
  3-B. Related Concepts and Procedures: Item Banking, Matrix Sampling, Spiraling
    Item Bank Development
    Matrix Sampling
    Spiraling
  3-C. Imprecision in Measurement
    Random Error

CHAPTER 4: THE MECHANICS OF EQUATING
  4-A. Conceptual Overview
  4-B. Classical Test Theory (CTT) Equating
    Linear Equating
    Equipercentile Equating
    Linear vs. Equipercentile Equating
  4-C. Item Response Theory (IRT) Equating
    Equating through Common Items
    Equating by Applying an Equating Constant
    Equating by Concurrent or Simultaneous Calibration
    Equating with Common Items through Test Characteristics Curves
    Common Person IRT Calibration
    Pre-Equating and Post-Equating

CHAPTER 5: COMMON EQUATING ISSUES
  5-A. Changes in Test Specifications
  5-B. Anchor Item Considerations
  5-C. Open-Ended or Constructed Response Items
  5-D. Writing Assessments
  5-E. Paper-and-Pencil and Computerized Testing
  5-F. Issues in Vertical Scaling vs. Horizontal Equating
  5-G. Item Banking
  5-H. Standard Setting and Accountability
  5-I. Technical Documentation
  5-J. Inferences Based on Linking and Equating
  5-K. Quality Control Issues

REFERENCES AND RECOMMENDED READING
  6-A. Reference List
Acknowledgements

This publication was sponsored by the Council of Chief State School Officers (CCSSO) and developed in cooperation with the Technical Issues in Large-Scale Assessment (TILSA) collaborative under the leadership of Doug Rindone, who played a critical role in advancing this publication and supporting its goals. TILSA’s Subcommittee on Equating initiated the project, led by Subcommittee Chair Michael Muenks (Missouri Dept. of Elementary and Secondary Education). Duncan MacQuarrie (CCSSO) and Phoebe Winter (consultant) provided leadership and guidance throughout the development of this document. Joseph Ryan (professor emeritus, Arizona State University) served as chief editor and provided research, writing, editing and psychometric/equating expertise. Frank Brockmann (Center Point Assessment Solutions) drafted much of the original text, composed diagrams, and provided ongoing logistical and project management support.

The TILSA collaborative provided direct input to help shape the development and revision of this document. The TILSA Subcommittee on Equating reviewed draft material in June 2007 (Nashville, TN), October 2007 (Salt Lake City, UT), February 2008 (Atlanta, GA), and June 2008 (Orlando, FL), and the TILSA group as a whole also provided valuable feedback during these meetings that helped guide the project as it moved forward.

During its later stages of development, this handbook was reviewed externally by the Technical Special Interest Group of National Assessment of Educational Progress (NAEP) coordinators. Comments from this external review helped provide additional editorial refinement and focus for this document. Contributing members include:

NAEP State Coordinators: Vickie Baker, West Virginia; Kate Beattie, Minnesota; Barbara Bianchi, New Mexico; Pauline Bjornson, North Dakota; Challis Breithaupt, Maryland; Dianne Chadwick, Iowa; Mike Chapman, Montana; Mark Decandia, Kentucky; William Donkersgood, Wyoming; Jeanne Foy, Alaska; David Gebhardt, New Hampshire; Wendy Geiger, Virginia; Carrie Giovanonne, Arizona; Cynthia Hollis, Missouri; Patsy Kersteter, Delaware; Tor Loring-Meier, Nevada; Jo Ann Malone, Mississippi; Angie Mangiantini, Washington; Jan Martin, South Dakota; Andy Metcalf, Illinois; John Moon, Nebraska; Pam Sandoval, Colorado; Renee Savoie, Connecticut; Barbara Smey-Richman, New Jersey; Bert Stoneberg, Idaho; Jessica Valdez, California

NAEP State Coordinators (Former or Interim): Therese Carr, South Carolina; Elaine Hultengren, Oregon; Chris Webster, South Carolina; Carolyn Trombe, New York

Tribal Urban District Assessment Coordinators: Margaret Bartz, Chicago Public Schools; Maria Lourdes de Hoyos, Austin Independent School District

State Department of Education, Bureau of Assessment: Gail Pagano, Connecticut

NAEP Coaches: Dale Carlson, Gordon Ensign, James Friedebach, Carole White

NAEP State Service Center: Jason Nicholas

Grateful thanks also to Michael Kolen, University of Iowa, for his critical review and very thoughtful suggestions for this handbook and to Hariharan Swaminathan, University of Connecticut, who offered advice on an earlier version of the text.

Introduction

1. Audience

This handbook focuses primarily on equating test forms. Equating is a technical procedure or process conducted to establish comparable scores on different versions of a test, allowing them to be used interchangeably.
It is an important aspect of establishing and maintaining the technical quality of a testing program by directly impacting the validity of assessments—the degree to which evidence and theory support the interpretations of test scores. When two test forms have been successfully equated, educators can validly interpret performance on one test form as having the same substantive meaning as the equated score on the other test form. There are a number of substantive and technical issues involved in equating and many potential pitfalls in its use. This handbook was written for decision makers to guide them in addressing these issues and to help them avoid potential problems. Intended as both a guide and teaching tool, it aims to provide readers with the practical knowledge needed to make appropriate decisions, especially readers who may have arrived at their current position from a non-technical background. Therefore, this publication is for

• newly appointed assessment personnel coming from non-technical disciplines who need practical guidance with regard to equating decisions and their potential impacts
• experienced psychometric experts who may benefit by offering an equating primer as a resource to non-psychometrician colleagues to encourage a better understanding of the issues
• policy personnel in the position of explaining the reasoning behind prior equating decisions or advocating the future direction of an assessment program
• psychometricians who would benefit from basic models that illustrate past decisions and how they related to policy
• anyone who might benefit from a better understanding of what equating is, why it is done, and how common problems might be avoided

2. Purpose

Equating is an essential tool in educational assessment due to the critical role it plays in several key areas: establishing validity across forms and years; fairness; test security; and, increasingly, continuity in programs that release items or require ongoing development. Although the practice of equating is rooted in long-standing practices that go back many decades, one of the driving forces behind this handbook has been the notion that a great deal of information about equating is unfamiliar to or not easily accessed by practitioners. This information appears in scholarly texts, professional journals, and other publications. It derives from the accumulated experiences of people who are trained and experienced in measurement and have highly technical understandings of the issues; however, even when expert consultants advise policymakers on equating-related matters, communicating the substantive significance of the technical concepts in a user-friendly manner is a challenge. Thus the primary aim of this handbook is to provide an abbreviated conceptual background of key concepts and describe some common ...
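To give a concrete picture of what the handbook means by establishing comparable scores on different test forms, here is a minimal sketch of linear equating, one of the classical methods named in the handbook's table of contents (4-B, Linear Equating). The function name, the score distributions, and the resulting numbers below are hypothetical illustrations for this review, not material from the handbook itself.

```python
import statistics

def linear_equate(score_x, scores_form_x, scores_form_y):
    """Map a raw score from Form X onto the scale of Form Y by matching the
    two forms' means and standard deviations (a mean-sigma linear equating)."""
    mean_x, sd_x = statistics.mean(scores_form_x), statistics.pstdev(scores_form_x)
    mean_y, sd_y = statistics.mean(scores_form_y), statistics.pstdev(scores_form_y)
    # A score that sits z standard deviations above Form X's mean is treated as
    # equivalent to the score z standard deviations above Form Y's mean.
    return mean_y + (sd_y / sd_x) * (score_x - mean_x)

# Hypothetical raw-score distributions from an equivalent-groups administration:
form_x = [12, 15, 18, 20, 22, 25, 27, 30]   # slightly harder form
form_y = [14, 17, 20, 22, 24, 27, 29, 32]   # slightly easier form

print(round(linear_equate(20, form_x, form_y), 1))  # a 20 on Form X maps to 22.0 on Form Y
```

Under this sketch's assumptions, a student who earned 20 points on the harder Form X is reported as equivalent to a student who earned 22 points on the easier Form Y, which is exactly the "interchangeable scores" idea the handbook describes.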

Tutor Answer

Sarajem
School: Purdue University

Attached.

Running head: PAPER REVIEW


PAPER REVIEW
NAME:
INSTITUTION:
DATE

PAPER REVIEW

Measurement, measurement scales, and scoring
Measurement is defined as assigning values to a test, object, person, etc., according to some rule. For measurement to happen, three things are needed: a variable, the object being measured, and a unit of measurement. The variable is the characteristic, quality, or property that is being measured in the object, for example, the weight of a book or the height of the teacher. The object is the material or person being measured, for example, the teacher or the book.

Measurement scales are the ways in which the variables described above are categorized. There are four types of measurement scales: nominal, ordinal, interval, and ratio (Embretson & Reise, 2013). The nominal scale uses numbers only to label or categorize objects, for example, coding 1 for males and 2 for females. The ordinal scale ranks related objects, for example, ranking students in the final exam results as first, second, third, and so on down to last, or rating the pain level of a disease from 0 to 10, with 0 being the least pain and 10 the greatest. The ratio scale represents the quantity of the object, for example, measuring students' heights in units of meters or centimeters. The interval scale is similar to the ratio scale, the only difference being that on a ratio scale zero is the lowest possible value, while on an interval scale negative values are part of the measurement. For example, the weight of a pen cannot be below 0 g, so weight is a ratio scale, whereas temperature can be −10°C, so temperature in degrees Celsius is an interval scale.
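To make the four scale types concrete for a reader who is new to this content, here is a small illustrative sketch. The data and variable names are invented for this review (they are not from the course readings); the point is simply which kind of summary is meaningful at each level of measurement.

```python
from statistics import mean, median
from collections import Counter

gender_codes = [1, 2, 2, 1, 2]        # nominal: 1 = male, 2 = female (labels only)
exam_ranks   = [1, 2, 3, 4, 5]        # ordinal: order matters, spacing does not
temps_c      = [-10, 0, 15, 22, 30]   # interval: equal spacing, but 0 is not "no heat"
weights_g    = [4.8, 5.0, 5.1, 5.3]   # ratio: true zero, so ratios are meaningful

print(Counter(gender_codes).most_common(1))  # nominal -> counts/mode only
print(median(exam_ranks))                    # ordinal -> median and percentiles
print(mean(temps_c))                         # interval -> means and differences
print(max(weights_g) / min(weights_g))       # ratio -> statements like "about 10% heavier"
```

The design point is that each scale supports everything the scale below it supports plus one more kind of statement, which is why ratio data can always be downgraded to a ranking but rankings can never be turned back into quantities.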


Scoring is the act of assigning values to an object based on a test, standard, or performance (Moss & Brookhart, 2019). Examples include the process of classifying a person as male or female and the process of assigning a number to a person's weight.
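As a simple illustration of scoring, here is a hypothetical sketch (not part of the assigned readings) in which a short answer key turns a student's raw responses into item scores and a total test score.

```python
# Hypothetical example: scoring a 5-item multiple-choice quiz against an answer key.
answer_key = ["B", "D", "A", "C", "B"]
student    = ["B", "D", "C", "C", "B"]

# Each correct response is scored 1 and each incorrect response 0 (dichotomous scoring).
item_scores = [1 if resp == key else 0 for resp, key in zip(student, answer_key)]
total_score = sum(item_scores)

print(item_scores)  # [1, 1, 0, 1, 1]
print(total_score)  # 4 out of 5
```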

Testing applications
There are three testing applications: norming, equating, and scaling. In psychological assessment, norming is the process of preparing norms, that is, the typical performance of the whole group, so that an individual's test results can be compared with the group's performance. Equating is a technical procedure used to establish comparable scores on different versions (forms) of a test so that the forms can be used interchangeably (Ryan & Brockmann, 2009), while scaling involves scoring each group member's performance and ranking it to th...

Review

Anonymous
Goes above and beyond expectations!
