Florida International CH11 WK8 Organization's Effectiveness Measurement Plan PPT


Description

For this Assignment, imagine you are a consultant for a nonprofit organization. You may create a fictional program/organization, or you may select an existing program at your agency/organization. As the consultant, you will create a presentation outlining a plan to measure the effectiveness of the organization.

Your presentation will describe Impact Program Evaluation and the various research designs. Additionally, you will assist the organization in understanding the pros and cons of each design and, based on its program, provide an explanation of the best option for program evaluation.

As explained in Chapter 11 of your text, there are common threats to internal validity that prevent organizations from acquiring the accurate data needed to effectively plan and improve a program. Describe and discuss the seven threats to internal validity as they apply to the organization's program. Include in the presentation a description of ways in which the organization is vulnerable to these threats, and provide methods to prevent these threats to internal validity.

Provide appropriate citations and references in APA format when utilizing the text and scholarly sources in your presentation. Your Assignment should include a 15-slide PowerPoint presentation, not including the title and reference slides. Please keep text on the slide uncluttered and to the point, and use the notes area to provide a script of what would be said during an actual presentation. Refer to the Assignment Rubric under Course Resources to ensure that you have covered all of the expectations for the presentation.

Attachment Preview

Chapter 11: Impact Program Evaluation and Hypothesis Testing
Kettner, Designing and Managing Programs: An Effectiveness-Based Approach, 5e
© 2017, SAGE Publications.

Differentiating Impact Program Evaluation From Performance Measurement
• Impact program evaluation focuses on changes in program participants, not on changes in organizations and communities
• Impact program evaluation attempts to determine whether the program has an impact and achieves positive outcomes (results) for participants

Differentiating Impact Program Evaluation From Performance Measurement
Key distinguishing features:
• Frequency
  – Impact program evaluation tends to be episodic
  – Performance measurement is an ongoing activity
• Issue(s)
  – Impact program evaluation is frequently driven by specific stakeholder questions
  – Performance measurement deals with general performance issues that are relatively constant over time

Differentiating Impact Program Evaluation From Performance Measurement
Key distinguishing features:
• Attribution of outcomes
  – Impact program evaluation is concerned with determining whether the program actually caused the outcome (result) achieved, or whether the outcome is caused by some other factor
  – Performance measurement generally assumes attribution

Impact Program Evaluation
• Attempts to demonstrate that an outcome (result) is attributable to the program (a cause-and-effect relationship between a program and its outcome)
• An impact program evaluation can demonstrate that a social service program is:
  – Successful
  – Unsuccessful due to theory failure
  – Unsuccessful due to program failure

Impact Program Evaluation
• Successful programs have alignment between the program hypothesis (theory) and the implementation of the program design (cause), which produces the desired outcome (result)
• Unsuccessful programs are not aligned:
  – Theory failure: alignment between the program hypothesis (theory) and the implementation of the program design, but the program does not produce the desired outcome (result)
  – Program failure: misalignment between the program hypothesis (cause) and the implementation of the program design, which fails to produce the desired outcome (effect)

Impact Program Evaluation
• If a program fails to achieve its desired outcome (result) due to theory failure:
  – The result is still a valid test of the program, and inferences can be made
• If a program fails to achieve the desired outcome (result) due to program failure:
  – The program has not been properly tested, and inferences about the success or failure of the program cannot be made

Impact Program Evaluation and Hypothesis Testing
Program hypothesis:
• Makes the assumptions underlying a social service program's outcome (result) explicit
• Establishes a framework to bring internal consistency to the implementation of a social service program
• Enables inputs, process, outputs, and outcomes to be examined for internal consistency

Impact Program Evaluation and Hypothesis Testing
Programs can fail to achieve their desired outcome (result) due to flaws in either the theory of the program or the implementation of the program:
• Theory failure: the program is implemented according to its design, but the anticipated outcome (result) is not achieved
• Theory failure results from a flaw in the hypothesis underlying the program
• Program failure: the program is not implemented according to its design
• Program failure: the hypothesis may or may not be correct

Research Designs for Impact Program Evaluation
• The essence of impact program evaluation is comparison
• The purpose of comparison is to determine what actually happens to clients as a result of participation in a program, compared to clients who do not participate
• Comparisons are usually made two ways:
  – Two different groups are compared (experimental and comparison groups)
  – One group is compared to itself (a group serves as both the experimental group and its own comparison group)
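To make the two modes of comparison concrete, here is a minimal Python sketch (not from the Kettner text; all scores and names are hypothetical illustration data) that computes an impact estimate each way: once by comparing two different groups, and once by comparing one group to itself.

```python
# Minimal sketch of the two comparison modes described above.
# All scores are hypothetical illustration data, not real program results.

def mean(scores):
    return sum(scores) / len(scores)

# Mode 1: two different groups are compared (experimental vs. comparison).
experimental_post = [72, 68, 75, 70]   # outcome scores for program participants
comparison_post   = [61, 64, 59, 62]   # outcome scores for non-participants
between_group_effect = mean(experimental_post) - mean(comparison_post)

# Mode 2: one group is compared to itself (pretest vs. posttest).
group_pre  = [55, 58, 60, 57]          # scores before the program
group_post = [66, 70, 69, 68]          # scores for the same clients afterward
within_group_effect = mean(group_post) - mean(group_pre)

print(f"Between-group estimate of impact: {between_group_effect:.1f}")
print(f"Within-group estimate of impact:  {within_group_effect:.1f}")
```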
Research Designs for Impact Program Evaluation
Three types of impact program evaluation designs are frequently used in human service programs:
• Single-group pretest/posttest design
• Nonequivalent comparison group design
• Randomized experimental design

Internal Validity
• The extent to which the program evaluator can prove that the program intervention caused changes in the program participant
• Example: treatment for depression
  – Positive events outside of treatment: the client got a new job, fell in love
  – Negative events outside of treatment: the client's spouse is diagnosed with cancer, or the client is laid off

Threats to Internal Validity
• History
  – Something major happens that affects only one group and not the other (experimental and control group)
  – Examples: 9/11/2001, COVID, midterms

Threats to Internal Validity
• Mortality
  – Many subjects drop out of one condition and not the other
  – Dropout skews the group averages
  – The evaluator can control neither mortality nor history

Threats to Internal Validity
• Instrumentation
  – Different measurement methods are used with the control and experimental groups
  – Both groups should get the same:
    • Questionnaire
    • Instructions
    • Conditions

Threats to Internal Validity
• Maturation
  – Outcomes of the evaluation vary as a natural result of the passage of time
  – Example: a new parent gets more comfortable with parenting and feels less intimidated simply by spending more time with the baby

Threats to Internal Validity
• Testing/Repeated Testing
  – Are participants learning, or just remembering the test questions?
  – Participants may show higher scores at the end of the program because the same test was administered twice; familiarity with the items and awareness of the questionnaire's purpose inflate the results

Threats to Internal Validity
• Selection Bias
  – Experimental and control groups are selected and assigned based on some characteristic, or different recruitment methods are used for different groups
  – Members of the experimental and control groups are not truly equivalent

Threats to Internal Validity
• Diffusion or Imitation of the Program
  – The control group is affected by the treatment
  – This can happen because individuals in the control and treatment groups talk to each other about the treatment
  – Usually an issue in research involving training or informational programs
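Two of the threats above, mortality and selection bias, lend themselves to simple numeric checks. The sketch below is a hypothetical Python diagnostic (the data and group sizes are invented, not from the text): it compares dropout rates across conditions and pretest means across groups, since a large gap on either check signals a threat worth investigating.

```python
# Hypothetical diagnostic checks for two threats to internal validity.
# Data values are invented for illustration only.

def mean(xs):
    return sum(xs) / len(xs)

# Mortality (attrition): compare dropout rates across conditions.
enrolled = {"experimental": 50, "control": 50}
completed = {"experimental": 44, "control": 31}
for group in enrolled:
    dropout = 1 - completed[group] / enrolled[group]
    print(f"{group}: dropout rate {dropout:.0%}")
# A much higher dropout rate in one condition suggests that the
# groups remaining at posttest are no longer comparable.

# Selection bias: compare baseline (pretest) means across groups.
baseline_experimental = [54, 57, 55, 60, 58]
baseline_control      = [47, 49, 45, 50, 48]
gap = mean(baseline_experimental) - mean(baseline_control)
print(f"Baseline gap between groups: {gap:.1f}")
# A large pretest gap indicates the groups were not equivalent
# before the program began.
```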
Research Designs for Impact Program Evaluation
Single-Group Pretest/Posttest Design
• A single group serves as both the experimental group and its own comparison group
• An initial measurement or observation (the pretest) occurs before clients begin the program
• The measurement or observation is related to the client characteristic the program is designed to change
• The initial measurement or observation becomes the baseline used for comparison purposes
• After completion of the program, a second measurement or observation (the posttest) occurs

Research Designs for Impact Program Evaluation
Single-Group Pretest/Posttest Design
• The pretest and posttest are exactly the same but are administered at different times
• Compare the pretest scores to the posttest scores; the resulting change (impact) is attributed to participation in the program
• The counterfactual (a measure of what would have happened without the program) is the difference between scores
• The evaluator cannot be certain that the difference is the result of the program, due to the possibility of confounding factors or threats to validity

Research Designs for Impact Program Evaluation
The Nonequivalent Comparison Group Design
• Begins to address the issue of threats to internal validity (selection bias and testing)
• Two separate and distinct groups are created:
  – The experimental group is comprised of individuals who will participate in the program
  – The comparison group is comprised of individuals who are "statistically similar" to the clients participating in the program but who will not actually participate in the program
• The difference between the scores of the experimental and comparison groups is the measure of the program's impact

Research Designs for Impact Program Evaluation
The Randomized Experimental Design
• The most valid of the three approaches because it involves the random assignment of participants to the experimental group and the control group
• Random assignment controls for all the threats to internal validity except for the testing effect
• The process and analysis are the same as for the nonequivalent comparison group design
• The randomized experimental design produces data that most clearly demonstrate the real impact of the program

Assignment Unit Eight
• Describe impact program evaluation
• Describe the various designs and the pros and cons of each:
  – Single-group pretest/posttest design
  – The nonequivalent comparison group design
  – The randomized experimental design
• Describe and discuss the seven threats to internal validity as they apply to the organization's program. Include in the presentation a description of ways in which the organization is vulnerable to these threats and provide methods to prevent these threats to internal validity.

Video
• https://www.unicefirc.org/KM/IE/impact_1.php

Kettner, Designing and Managing Programs: An Effectiveness-Based Approach, 5e
© 2017, SAGE Publications.
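To recap how each of the three designs in the attachment above yields an impact estimate, here is a short, self-contained Python sketch. It is a teaching illustration using invented data, not a procedure from the Kettner text: a single-group change score, a posttest difference against a nonequivalent comparison group, and random assignment for the experimental design.

```python
import random

# Hypothetical illustration of the three impact evaluation designs.
# All scores and IDs are invented; none of this is real program data.

def mean(xs):
    return sum(xs) / len(xs)

# 1. Single-group pretest/posttest design:
#    the same clients are measured before and after the program.
pretest  = [50, 52, 48, 55, 51]
posttest = [63, 66, 60, 70, 64]
single_group_impact = mean(posttest) - mean(pretest)

# 2. Nonequivalent comparison group design:
#    participants vs. a "statistically similar" non-participant group.
experimental_post = [63, 66, 60, 70, 64]
comparison_post   = [54, 52, 57, 55, 53]
comparison_impact = mean(experimental_post) - mean(comparison_post)

# 3. Randomized experimental design:
#    random assignment makes the groups equivalent at the outset.
applicants = list(range(20))          # 20 hypothetical applicant IDs
random.shuffle(applicants)
experimental_ids = applicants[:10]    # randomly assigned to the program
control_ids      = applicants[10:]    # randomly assigned to the control

print(f"Single-group estimate:     {single_group_impact:.1f}")
print(f"Comparison-group estimate: {comparison_impact:.1f}")
print(f"Experimental group IDs:    {sorted(experimental_ids)}")
```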

Explanation & Answer

Here is the complete PowerPoint presentation. I will be around. Hit me up should you need clarifications or edits.

Organization's Effectiveness Measurement Plan
Name
Institution
Tutor
Date

Organization's Effectiveness Measurement Plan
Introduction
• Measuring the organization's effectiveness is very significant
• It helps in knowing whether the goals are achievable
• A good plan is vital for measuring the organization's effectiveness
• It helps in determining the actions to take
• Based on the measurement, the program might be improved or terminated
• A quality plan aids in examining change between two groups
• It reveals the effect of interventions on a particular group (Bedford, Bisbe & Sweeney, 2019)

Effectiveness Measurement Plan
• The Children's Home Society of Florida is a program for children
• It aims at improving children's welfare
• Its major goal is providing a quality environment for children's development
• Impact program evaluation is essential for measuring the program's effectiveness
• It involves testing a change initiated by a program (Gertler et al., 2018)
• The program can be considered effective if it achieves its objectives
• Impact evaluation will examine the results after the program

Impact Program Evaluation Research Designs
Single-Group Pretest/Posttest Design
• The design determines the program interventions' effects (Abdulkadi...

