Running Head: Methods of reporting evaluation findings
Methods of reporting evaluation findings
Introduction
How evaluation findings are presented is of key significance because the presentation shapes the implementation strategy of the evaluation program. The method of presenting evaluation findings is determined mainly by the nature of the audience being addressed and should meet that audience's specific needs.
Methods of reporting evaluation findings
1. Written report
This is the official reporting of the evaluation findings and follows a formal format. Vito and Higgins (2015, p. 136) echo Snyder, who asserts that these reports should be presented in a manner that is usable and readily understandable to practitioners. The report should provide enough information and advice on what is to be done and offer any available alternatives. It is essential for peer reviewers and practitioners.
2. Face-to-face (Verbal)
This is an active mode of presentation. According to Vito and Higgins (2015, p. 136), the information presented should be succinct. The advantage of this method is that stakeholders and practitioners are directly involved, and this involvement maximizes the benefit to the community and stakeholders.
3. Electronic presentation
This is done through the use of TV, radio, projectors, and videos (Wall, 2014). In this category, only the key research messages are projected to the audience. This works well for all settings and audiences. The use of websites also helps the information reach a larger audience.
4. Social media
This method is mostly deemed informal; findings are shared through social platforms such as blogs and Twitter (Wall, 2014). The key message is quickly shared with a large audience.
The utilization of these findings can be maximized by ensuring that the data and information collected by the researchers are easily understandable and are handed over to the practitioners, and by encouraging these two teams to communicate (Vito & Higgins, 2015, p. 136). Also, according to Vito and Higgins (2015), the stakeholders should be identified, the program should be relevant to the needs of the target group, and the users should be encouraged to commit to the program.
With regard to interpretation and reporting, it is ethical to identify the target group and its needs and to report to the group using an appropriate method that it easily understands (Trevisan & Walser, 2014).
References
Vito, G. F., & Higgins, G. E. (2015). Practical program evaluation for criminal justice. Routledge.
Trevisan, M. S., & Walser, T. M. (2014). Evaluability assessment: Improving evaluation quality
and use. Sage Publications.
Wall, J. E. (2014). Program evaluation model: 9-step process. Sage Solutions. http://region11s4.lacoe.edu/attachments/article/34
Chapter 9
Reporting and Using Evaluations
Reporting and Using Evaluations
• Report must be clearly and specifically communicated
• Information must be given to those who can use it
• The aim of the report must be to provide information and advice on what should be done and which alternatives are worth consideration
Abstract or Program Summary
• Present the purpose of the program, how the evaluation was conducted, and a summary of research findings
Presenting the Theory Supporting the Program
• Identify and describe theory that will serve as the basis for the program and the expectation that it will have the desired impact
Presenting the Process Evaluation
• Emphasis is on how the host organization served as the basis for the implementation of the program
• The evaluator should describe program implementation
• Describe program construction and delivery in detail
• Provide a description of how services were delivered
Presenting the Process Evaluation
The process evaluation should include:
• Relevant description of program operations
• Consider the management of the program
• Describe and assess how the desired service was provided by the program
• Must tap the opinions of program staff and clients
• Describe the nature of the client population
• Identify and frankly present management issues
Presenting the Process Evaluation
The process evaluation should include:
• How implementation issues affect service delivery
• Assessing service delivery involves observing and describing how it was delivered by program staff
Presenting the Impact Evaluation
• This is the meat of the evaluation
• Conclusions are reached and supporting data are presented
• The summary is a crucial aspect of the report because it is the first section consulted by readers
Factors Influencing the Use of Program Evaluation Results
• Researchers focus on analysis to determine “what works” in criminal justice
• Practitioners are looking for language to tell them how to do it
• Criminology needs to get both data and information into the hands of policy makers and administrators
• Must identify stakeholders in the process
Factors Influencing the Use of Program Evaluation Results
• “Collaborative Evaluation” assumes that active ongoing engagement between evaluators and program staff will result in stronger evaluation designs, enhanced data collection and analysis, and results that stakeholders understand and use
Table 9.1 Focus of Process and Impact Evaluations

Process Evaluation Focus
  Inputs: Personnel, Equipment, Expenditures, Other resources
  Results: Arrests, People trained, Barriers installed, Other tasks accomplished

Impact Evaluation Focus
  Outcomes: Crimes reduced, Fear abated, Accidents reduced, Other reductions in problems
Table 9.2 Interpreting Results of Process and Impact Evaluations

Process Evaluation Results (response implemented as planned or not) crossed with Impact Evaluation Results (problem declined or not):

Problem declined and no other likely cause
  Response implemented as planned: A. Evidence that the response caused the decline
  Response not implemented as planned: C. Suggests that the response was accidentally effective or that other factors may have caused the decline

Problem did not decline
  Response implemented as planned: B. Evidence that the response was ineffective
  Response not implemented as planned: D. Little is learned
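
The logic of Table 9.2 can be read as a simple two-by-two lookup. The following Python sketch is our illustration, not from the book or slides, encoding the four combined interpretations A through D:

    # Illustrative sketch (not from the book): Table 9.2 as a lookup table.
    # Keys are (response implemented as planned?, problem declined with no
    # other likely cause?); values are the combined interpretation.
    INTERPRETATIONS = {
        (True, True): "A. Evidence that the response caused the decline",
        (True, False): "B. Evidence that the response was ineffective",
        (False, True): ("C. Suggests that the response was accidentally "
                        "effective or that other factors caused the decline"),
        (False, False): "D. Little is learned",
    }

    def interpret(implemented_as_planned, problem_declined):
        """Combine process and impact evaluation results per Table 9.2."""
        return INTERPRETATIONS[(implemented_as_planned, problem_declined)]

    print(interpret(implemented_as_planned=True, problem_declined=True))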
GENNARO F. VITO & GEORGE E. HIGGINS
• Examines major evaluation types (as well as the benefits, concerns, and constraints of each), including needs evaluations, theory evaluations, process evaluations, outcome/impact evaluations, and cost-efficiency evaluations.
• Defines the data points each evaluation type requires, and the manner in which these data can be collected.
• Demonstrates how different types of evaluations can be used together to provide clear information regarding a program’s overall performance level.
• Cites and makes use of actual, real-world policy evaluations and vetted programs.
When closely examined, many of the most prominent criminal justice policies to emerge over the last 30 years are found wanting.
High rates of incarceration and recidivism reflect the flaws of many contemporary crime prevention approaches. It’s becoming
increasingly clear that policy must not just be executed, but properly evaluated. Practical Program Evaluation for Criminal
Justice describes applicable, step-by-step instructions on how to determine whether an initiative is truly necessary prior to its
adoption (thus eliminating the risk of wasting resources), as well as how to accurately gauge its effectiveness during initial rollout
stages. This is achieved through the gradual introduction of basic data analysis procedures and statistical techniques, which, once
mastered, can prove or disprove a program’s worth and make for an improved criminal justice system. Practical Program
Evaluation for Criminal Justice provides the knowledge and tools needed to successfully apply the principles of
fiscal responsibility, accountability, and evidence-based practice to criminal justice reform plans.
Gennaro F. Vito is a Distinguished University Scholar and professor in the Department of Justice Administration at the University
of Louisville, where he has a faculty appointment in the Administrative Officer’s Course of the Southern Police Institute. He holds a
Ph.D. in public administration from The Ohio State University. He is a past President and Fellow of the Academy of Criminal Justice
Sciences and received its Bruce Smith Sr. Award in 2012. His research interests concern criminal justice policy analysis,
program evaluation, and police management.
George E. Higgins is a Professor in the Department of Justice Administration at the University of Louisville where he also serves
as the Ph.D. Program Coordinator. He received his Ph.D. in Criminology from Indiana University of Pennsylvania in 2001. He has
published more than 100 journal articles and book chapters primarily in the areas of criminological theory testing, racial profiling,
and cybercrime. In 2009, he was awarded the Coramae Richey Mann Leadership Award, which is the top award from the Minority
and Women’s Section of the Academy of Criminal Justice Sciences for research and leadership in race and ethnicity research. He
is the past editor of the American Journal of Criminal Justice and the current editor of The Journal of Criminal Justice Education.
Criminal Justice | Public Policy
PRACTICAL PROGRAM EVALUATION FOR CRIMINAL JUSTICE
An applicable, step-by-step explanation of how to conduct credible criminal justice program evaluations
and improve the quality and effectiveness of the criminal justice system.
Routledge
www.routledge.com
PRACTICAL PROGRAM EVALUATION FOR CRIMINAL JUSTICE
GENNARO F. VITO
GEORGE E. HIGGINS
First published 2015 by Anderson Publishing
Published 2015 by Routledge
2 Park Square, Milton Park, Abingdon, Oxon OX14 4RN
and by Routledge
711 Third Avenue, New York, NY 10017
Routledge is an imprint of the Taylor & Francis Group, an informa business
Copyright © 2015 Taylor & Francis. All rights reserved.
No part of this book may be reprinted or reproduced or
utilised in any form or by any electronic, mechanical, or other means, now
known or hereafter invented, including photocopying and recording, or in any
information storage or retrieval system, without permission in writing from
the publishers.
Notices
No responsibility is assumed by the publisher for any injury and/or damage to
persons or property as a matter of products liability, negligence or otherwise,
or from any use or operation of any methods, products, instructions or ideas
contained in the material herein.
Practitioners and researchers must always rely on their own experience and
knowledge in evaluating and using any information, methods, compounds, or
experiments described herein. In using such information or methods they should
be mindful of their own safety and the safety of others, including parties for
whom they have a professional responsibility.
Product or corporate names may be trademarks or registered trademarks, and
are used only for identification and explanation without intent to infringe.
This book and the individual contributions contained in it are protected under copyright by the Publisher (other than as
may be noted herein).
Library of Congress Cataloging-in-Publication Data
Application Submitted
British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library
ISBN-13: 978-1-4557-7770-9 (pbk)
Dedication
To the Vito and Higgins families.
CONTENTS
Digital Assets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Chapter 1
Getting Started with Program Evaluation . . . . . . . . . . . . . . . . 1
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Administrator and Evaluator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
Strengths and Weaknesses of Program Evaluation . . . . . . . . . . . . . . . . . 3
Evidence-Based Practices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
Meta-Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Campbell Collaboration (Crime and Justice Group) . . . . . . . . . . . . . . . 12
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
Discussion Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Chapter 2
Planning a Program Evaluation . . . . . . . . . . . . . . . . . . . . . . . . 15
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
Problem-Oriented Policing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
Planning an Evaluation Strategy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
Logic Model. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
Politics of Evaluation Research . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
Ethical Issues in Evaluation Research . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Discussion Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Chapter 3
Needs Assessment Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . 31
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Definition of Needs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Data Sources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
Problems with Needs Assessment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
Discussion Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
Chapter 4
Theory-Driven Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Evaluability Assessment Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Describing and Producing Program Theory . . . . . . . . . . . . . . . . . . . . . . 50
Analyzing Program Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
Discussion Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
Additional Readings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
Chapter 5
Process Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Process Evaluation: Program Implementation . . . . . . . . . . . . . . . . . . . . 64
Process Evaluation: Monitoring Conduct of Evaluation Research Design . . . . . . . . . . . . . . . . . . . 66
Process Evaluation: Use of Qualitative Methods . . . . . . . . . . . . . . . . . . 69
Process Evaluation Assessment: Evidence-Based Correctional Program Checklist . . . . . . . . . . . . . . 75
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
Discussion Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
Chapter 6
Outcome Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
Classic Experimental Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
To Experiment or not to Experiment? . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
Quasi-Experimental Research Design . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
Before-and-After Design (One Group Pre-Test, Post-Test Design) . . . . 91
Question of Causation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
Discussion Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
Chapter 7
Cost-efficiency Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
Costs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
Limits of Cost Analyses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
Discussion Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
Chapter 8
Measurement and Data Analysis . . . . . . . . . . . . . . . . . . . . . . 111
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
Measurement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
Statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
Discussion Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
Chapter 9
Reporting and Using Evaluations . . . . . . . . . . . . . . . . . . . . . . 127
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
Review of Operation CeaseFire Chicago . . . . . . . . . . . . . . . . . . . . . . . 128
Factors Influencing the Use of Program Evaluation Results . . . . . . . . 136
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
Discussion Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
Chapter 10 Looking Ahead: A Call to Action in Evaluation Research . . . . 141
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
Point 1: Use the Best Possible Research Design . . . . . . . . . . . . . . . . . 142
Point 2: Evaluators must get Involved in the Very Beginning of the Program . . . . . . . . . . . . . . . 142
Point 3: Evaluators must Include Some Measure of Cost in their Analyses . . . . . . . . . . . . . . . . 143
Point 4: Evaluation Leads to the Development of Evidence-Based Practice . . . . . . . . . . . . . . . . 144
Point 5: Get out into the Field . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
Point 6: Prepare to Partner with Practitioners . . . . . . . . . . . . . . . . . . . 145
Conclusion. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
Discussion Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
Glossary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
DIGITAL ASSETS
Interactive resources can be accessed for free by registering at
www.routledge.com/cw/vito
For the Instructor
● Test bank: Compose, customize, and deliver exams using an online assessment package in a free Windows-based authoring tool that makes it easy to build tests using the unique multiple-choice and true or false questions created for Practical Program Evaluation for Criminal Justice. What’s more, this authoring tool allows you to export customized exams directly to Blackboard, WebCT, eCollege, Angel, and other leading systems. All test bank files are also conveniently offered in Word format.
● PowerPoint lecture slides: Reinforce key topics with focused PowerPoint slides that provide a perfect visual outline with which to augment your lecture. Each individual book chapter has its own dedicated slideshow.
● Instructor’s guides: Design your course around customized learning objectives, discussion questions, and other instructor tools.
PREFACE
Job training in prison, rehabilitation programming, or policing tactics are all organized ways that individuals or organizations attempt
to achieve their goals in criminal justice. A substantial amount of
resources and effort result in a number of services and products that
help those involved with criminal justice to meet the needs of society. The criminal justice system includes programs, facilities, and
policies to help individuals lead fruitful, satisfying, healthier, or safer
lives. Primarily, those involved with the criminal justice system have
needs that are being met through publicly funded efforts. These efforts
are called programs. The methods that are used to plan, monitor, or
improve programming are the basis of program evaluation. The evaluation methods in this book apply not only to criminal justice, but to
public and private agencies and for-profit organizations.
Management is a component of the success of a criminal justice program. Attempts to improve productivity or employee morale all require
good planning techniques, feedback on the impact of the plan, and the
utilization of the feedback. This book and the examples within reflect
publicly funded programming, and the principles of program evaluation have been widely used in criminal justice.
The book is arranged in a manner reflecting all of the pieces of
an evaluation. Chapter 1 provides the basic definitions of the book.
Chapter 2 provides the basics of planning a program evaluation, and
the issues that an evaluator needs to be aware of when conducting an
evaluation. Chapter 3 provides the foundation of a needs evaluation.
The book then shifts into developing an understanding of the underlying theory of the program. Chapter 4 is focused on theory evaluation.
Chapter 5 moves the reader through the different parts of a process
evaluation. Chapter 6 focuses on an outcomes and impact evaluation.
Chapter 7 helps the reader understand the issues involved in a cost-efficiency evaluation. In Chapter 8 the reader gets some perspective
on how measurement and statistics are used in program evaluations.
Chapter 9 provides the reader with an overview of writing a report and
the uses of an evaluation. Chapter 10 provides a call to action in criminal justice program evaluation.
We wrote this book in the hope that improved evaluation will lead
to more effective and efficient programming in criminal justice. We
give all students this charge.
Gennaro F. Vito and George E. Higgins
University of Louisville
1
GETTING STARTED WITH
PROGRAM EVALUATION
Keywords
systematic review
Maryland Report
meta-analysis
Campbell Collaboration (Crime and Justice Group)
CHAPTER OUTLINE
Introduction 1
Administrator and Evaluator 2
Strengths and Weaknesses of Program Evaluation 3
Evidence-Based Practices 5
  Maryland Report 6
  “What Works” 6
  “What’s Promising” 7
  “What Doesn’t Work” 8
Meta-Analysis 9
Campbell Collaboration (Crime and Justice Group) 12
Summary 12
Discussion Questions 13
References 13
Program evaluation is the systematic assessment of the operation and/
or outcomes of a program or policy, compared to a set of implicit or
explicit standards, as a means of contributing to the improvement of
the program or policy.
—Carol Weiss (1998)
Introduction
The aim of program evaluation is to determine the effectiveness
of an intervention. In this book, we are interested in the effectiveness
of crime prevention programs, whether they are aimed at individuals
(treatment of offenders or victims, prosecution of career criminals)
or communities (crime prevention programs), and the operations of
the components of the criminal justice system (police, courts, corrections—both juvenile and adult).
The measurement of efficiency is the key problem faced by an
evaluation researcher. Accountability can only be established if the
evaluation measures are valid indicators of performance. Of course,
the measures must also be readily available. Evaluation of crime
programs is a significant exercise that can have a direct impact on
society. Effective crime programs are vitally needed to deal with the
crime problem and all the facets of public safety, such as rising crime
rates, fear of crime among the public, crowded jails and prisons, and
government expenditures. The pressure for information to deal with
these issues is evident. The role of evaluation research in criminal
justice is to provide evidence about the effects of programs and policies that are designed to enhance public safety in a manner that is
accessible and informative to policy makers (Lipsey, 2005, p. 8). This
text will provide an introduction to the research methods and guidelines necessary to conduct a successful evaluation.
Administrator and Evaluator
As Adams (1975, p. 5) has indicated, there are two significant
actors in the evaluation process: the program administrator and the
evaluator. The administrator must be committed to research and the
creation of an organizational climate that encourages the production, reporting, and use of the findings of the evaluation. The evaluator must guide the conduct of the research process from beginning to
end: its structure and methodology, use of valid measures of performance, and maintaining the balance between theory and practical,
applied policy implications of the results. These two actors must work
together and communicate well to produce a valid and reliable program evaluation. Each has different sources of expertise: the administrator knows what the program was designed to do and how it should
operate, and the evaluator has methodological skills and knows how
to design accurate research projects.
Suchman (1974, p. 5) identified three major aspects of the demand
for evaluation research:
● The social problem (in our case, crime).
● The service agencies (the components of the criminal justice system).
● The public (who seek protection from crime).
Again, the emphasis is on the determination of the worth of programs and policies designed to prevent crime. Of course, this is why
the term evaluation is so pertinent. In this text, we will be concerned
with the determination of the research methods and techniques that
can accurately assess the value of a crime program.
Monitoring is necessary to establish accountability for program results. Program evaluation informs the monitoring process.
Traditionally, such monitoring assumes two forms: external and
internal. Public agencies, like those in the criminal justice system,
are externally accountable to political authorities and must regularly
report to them about activities and processes. Internally, criminal
justice agencies must provide useful, decision-oriented information
on the program compliance of their facilities to flag problems before
they become crises so that timely adaptations can be implemented
(Sylvia & Sylvia, 2012, p. 25). Ultimately, evaluation research can
inform and improve the operations of criminal justice programs and
enhance service delivery.
Strengths and Weaknesses of Program
Evaluation
Program evaluation is both a political and a scientific process.
The two issues are tied together inextricably. The scientific validity of the study colors and determines the value and objectivity of
the research findings and their policy implications. This is “applied”
research in that the primary objective of program evaluation is to
determine whether the crime prevention program is reaching its
goals—defined and desired results.
Evaluation research is different from other research in seven basic
ways (Weiss, 1998, pp. 6–8):
1. Use for decision making: The results of evaluation research are
designed with use in mind. The evaluation should provide a basis
for decision making in the future and provide information to
determine whether a program should be continued and expanded
or terminated.
2. Program-derived questions: The research questions are derived
from the goals of the program and its operations rather than being defined
by the evaluator alone. The core of the study is administrative and
operational: Is the program accomplishing what it is designed to
do? Is it reaching its “target population”—that is, the clients that
the program was supposed to serve? Does the program make
effective and efficient use of its resources, both physical and
financial?
3. Judgmental quality: Objectivity requires that the evaluator focuses
on whether the program is achieving its desired goals. It is imperative that these goals are stated in a clear and measurable fashion
that accurately documents effective performance.
4. Action setting: The most important thing going on is the program,
not the research. The program administrator and staff control
access to information, records, and their clients. The research
must deal with this reality and construct research designs that are
feasible in the real-world setting.
5. Role conflicts: The administrator’s priority is providing program
services, which often makes him or her unresponsive to the needs
of the evaluation. Typically, the administrator believes strongly
in the value and worth of the program services. The judgmental
nature of the findings and the establishment of accountability are
often viewed as a threat to both the program and the administrator personally. The possibility of friction between the program and
research and the administrator and evaluator is almost inevitable.
Programs are often tied to both the ego and professional reputation of the administrator and staff. Some programs (e.g., Drug
Abuse Resistance Education, or D.A.R.E.) are politically attractive,
and as a result have lives of their own that defy objectivity and
rational assessment. Negative outcomes are not always accepted
in a rational manner. In fact, one of the great ironies of evaluation research is that negative findings often fail to kill a program
and positive results seldom save one. This is due to the fact that so
many crime prevention programs are tied to availability of grant
funding. The presence or absence of funding often determines
program survival regardless of the program evaluation research
findings.
6. Publication: Publication of evaluation research is vital to the
establishment of a base of information on effective crime prevention programs. To be published, the research must be carefully
designed and executed and the statistical analysis must be valid
and accurate.
7. Allegiance: The evaluator is clearly conflicted on this aspect. He
or she has obligations to the organization that funds the study,
to the scientific requirements of research objectivity, and to work
for the betterment of society through the determination of program effectiveness. These obligations can be contradictory and
the researcher must face this reality. For example, program officials often need real-time assessments of tactics as they unfold. If
the evaluator discovers problems during the process evaluation of
program implementation that might jeopardize its success, then
the evaluator has an obligation to alert program officials without
compromising ethical concerns (Joyce & Ramsey, 2013, p. 361).
Thus, research findings have utility in that they can guide future
programming and crime policy. However, the results of evaluation research are not always clear and unequivocal, even when
the research design is valid and vital. Findings are often small in
magnitude and effect and may be influenced by forces (both social
and political) that cannot be isolated from those of the program itself.
Thus, program administrators and policy makers typically consider
the evaluation research results in combination with other factors
such as public opinion, cost, availability of staff and facilities, and
possible alternatives (Weiss, 1998, pp. 3–4).
Determination of effectiveness is the crucial difficulty facing the
evaluator. The practical problems of obtaining valid data and faithfully executing the research design often conflict with the realities
of operating the crime prevention program. The evaluator never has
complete control over the research process because of the need to
administer the program. Program operations drive the evaluation
and its design. The relationship between the needs of the evaluation and the program must be continually balanced. Ultimately, the
evaluation research design must be flexible enough to address this
relationship. Cooperation between the program administrator and
evaluator must be firmly established and maintained. The research
must be tied to the changing nature of daily program operations.
The ultimate aim of evaluation research is to guide rational policy making that is based on valid research findings. The rationale is
that programs that have been deemed effective will be expanded to
other areas and locations and those that fail will be terminated and
abandoned. In recent years, evidence-based best practices have
grown significantly. Here, the emphasis is on rationality. Ideally,
there should be a systematic, evidence-based foundation for criminal justice policies and programs that will increase both the accountability and effectiveness of the criminal justice system (Mears, 2010,
p. 2). The search for what works has extended across all areas of the
criminal justice system (both adult and juvenile)—police, courts, and
corrections.
Evidence-Based Practices
One form of analysis that categorizes, analyzes, and summarizes
the research information on a particular criminal justice policy or
program is the systematic review. Systematic reviews “use rigorous
methods to locate, appraise, and synthesize findings from criminal
justice program evaluation studies. Typically, they give their criteria
for including studies, conduct an extensive search to find them, code
their key features (especially their methodology), and provide conclusions of their review” (Welsh & Farrington, 2001, p. 161).
For example, Braga (2001) conducted a traditional, narrative review of studies of the effects of hot-spot policing on crime.
He reviewed nine studies—five randomized experiments and
four nonequivalent control group quasi-experiments. Overall, the
research findings revealed that focused police efforts (aggressive
disorder enforcement, directed patrols, proactive arrests, and problem solving) can prevent crime in high-risk places that feature concentrations of crime without resulting in crime displacement effects.
However, the study was unable to determine exactly what types
of police interventions are most preferable in controlling crime at
hot-spot locations.
Maryland Report
Probably the most pertinent example of a systematic review
is the “Maryland Report.” This report was written in response to a
request from the U.S. Congress to review existing research on criminal justice programs and identify those determined to be most effective (Sherman et al., 1997, p. 4). The report reviewed more than 500
crime prevention program evaluations and classified them according to both their scientific methodology and their research findings to establish a list of program effectiveness (i.e., what works,
what doesn’t work, and what is promising). Their “Maryland scale of
scientific methods” rested primarily on three factors (Sherman et al.,
1998, p. 4):
● Control of other variables that might have been the true causes of any connection between the program and crime.
● Measurement error from such matters as loss of subjects over time or low interview response rates.
● Statistical power to detect the magnitude of program effects.
The authors also classified evaluation reports according to five
additional research design criteria, which are listed in Table 1.1.
The methodological rigor of the research increases with each category level and enhances the validity of the findings. These concepts
will be discussed in detail in Chapter 6.
“What Works”
The authors of the “Maryland Report” used this research-based
classification system to identify programs in three categories: “What
Works,” “What’s Promising,” and “What Doesn’t Work.” Several programs were designated under the category “What Works.” In particular, family and parent training was identified as an effective
intervention for delinquents and adolescents at risk for delinquency.
Coaching of high-risk youth in thinking skills was effective in treating high-risk youth in schools. For adults, vocational training yielded
excellent results for older, male ex-offenders, thus stressing the need
to provide jobs after incarceration. For convicted offenders, rehabilitation programs with risk-focused treatments demonstrated promise.
Table 1.1 Research Design Criteria used in the “Maryland Report”

Level 1: There is a correlation between a crime prevention program and a measure of crime or crime risk factors at a single point in time.
Level 2: There is a temporal sequence between the program and crime or risk outcome that is clearly observed, or the presence of a comparison group without demonstrated comparability to the treatment group.
Level 3: The research features a comparison of two or more comparable units of analysis, one with and one without the program services.
Level 4: The research features a comparison between multiple units with and without the program, controlling for other factors, or using comparison units that evidence only minor differences.
Level 5: The research features random assignment of comparable units to program and comparison groups.
Source: Sherman et al. (1998), pp. 4–5.
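
The five levels can be thought of as a cascade of design checks, applied from the most rigorous design downward. The Python sketch below is a hypothetical illustration; the attribute names are ours, not Sherman et al.’s:

    # Hypothetical sketch: assign a Maryland scale level (1-5) from a study's
    # design attributes, checking the most rigorous design first.
    # Attribute names are illustrative; see Table 1.1 for the actual criteria.
    def maryland_level(random_assignment,
                       multi_unit_comparison_with_controls,
                       comparable_comparison_group,
                       temporal_sequence_or_weak_comparison):
        if random_assignment:                        # Level 5
            return 5
        if multi_unit_comparison_with_controls:      # Level 4
            return 4
        if comparable_comparison_group:              # Level 3
            return 3
        if temporal_sequence_or_weak_comparison:     # Level 2
            return 2
        return 1                                     # Level 1: correlation only

    # A quasi-experiment with a comparable no-treatment group scores Level 3.
    print(maryland_level(False, False, True, True))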
In addition, therapeutic community treatment programs for drug-using offenders were effective while they were incarcerated.
In the area of crime prevention, nuisance abatement action on
landlords was designated as a proven method of preventing drug
dealing in rental housing. Extra police patrols were effective in dealing with crime “hot spots.” Monitoring by specialized police units and
incarceration reduced the crime threat posed by high-risk, repeat
offenders, while on-scene arrests controlled employed domestic
abusers.
“What’s Promising”
Under this category, the authors of the “Maryland Report” noted
several programs that the research identified as having a potential
impact on crime. With law enforcement, one such policy focuses
on proactive drunk-driving arrests with breath testing as a method
of combating driving while intoxicated. Community policing programs that provided citizen meetings to set priorities had promising
results. Polite field interrogation of suspicious persons by the police
had potential crime prevention aspects. Mailing of arrest warrants
to domestic violence suspects who leave the scene before the police
arrive provided a method of solving a problem after the incident took
place. Higher numbers of police officers in cities seemed to impact
the crime rate in those cities. Gang monitoring by community workers and probation and police officers provided a potential deterrent
effect to prevent gang violence. A police focus on proactive arrests for
carrying concealed weapons was also a potential way to prevent violence in communities.
With delinquents, community-based mentoring by Big Brothers/
Big Sisters of America and community-based after-school recreation
programs helped to stem delinquency. Job Corps residential training
programs for at-risk youth offered an alternative, conventional lifestyle to combat delinquency.
For adults, treatment alternatives such as battered women shelters, drug courts, and drug treatment in jails followed by drug testing
in the community offered ways to deal with community-based problems that feed the criminal justice system.
In terms of crime prevention, moving urban public housing residents to suburban homes had the potential to protect crime victims
and eliminate sites that tend to attract offenders and crime. Using
two clerks in already-robbed convenience stores could prevent future
victimization. Target-hardening methods such as street closures, barricades, and rerouting streets were also identified as ways to prevent
crime through environmental design in neighborhoods.
Under both of the preceding categories, programs that had demonstrated treatment effects were identified for the purpose of spreading their impact. Communities faced with similar problems could
implement these programs with a reasonable expectation that they
will be effective.
“What Doesn’t Work”
Of course, it is also useful to identify crime programs that have
proven to be ineffective, such as gun buy-back programs. In particular, the “Maryland Report” identified “sensational” popular programs
that the research documented as failures. Initiatives such as D.A.R.E.,
correctional “boot camps” using traditional military basic training,
“scared straight” programs whereby minor juvenile offenders visit
adult prisons, shock probation, shock parole, residential programs
for juvenile offenders using challenging experiences in rural settings
(“wilderness” programs), and split sentences adding jail time to probation or parole failed to provide the deterrent effect to prevent reoffending. Similarly, home detention with electronic monitoring and
intensive supervision (ISP) on probation or parole failed to prevent
recidivism in community corrections.
Overall, the “Maryland Report” clearly identified some myths and
realities about effective crime prevention and treatment programs.
The ability of the evaluation results to indicate effectiveness helps to
identify programs and policies that can be expanded and have the
potential to reduce crime.
Meta-Analysis
Meta-analysis is a more methodologically sophisticated approach
to reviewing the literature on a particular intervention (theory, program, or policy) than systematic review. Pratt (2010) defines both
the nature and purpose of meta-analysis. It “attempts to integrate
the findings of multiple independent tests of a similar hypothesis in
a more objective manner by treating the empirical study as the unit
of analysis” (Pratt, 2010, p. 154). He likens meta-analysis to the computation of a batting average in baseball. In meta-analysis, the effect
size indicates how many hits on average are related to the dependent
variable across the studies considered. In addition, the effect size can
also reflect the relationships between the independent and dependent variables and across varied methodologies (i.e., longitudinal
studies).
In this fashion, meta-analysis provides a distinct advantage
over the traditional narrative reviews of an intervention (e.g., the
“Maryland Report”) where scholars review studies, select them
according to rigor of their methodologies, and then interpret the
importance of their findings. With regard to program evaluation, its
key strategy is to identify all available studies on a policy or program,
code their findings and methodologies into objective categories, and
then conduct quantitative analysis of these data (Wells, 2009, p. 271).
The use of meta-analysis in criminal justice research has grown over
time. Wells (2009) identified 176 meta-analysis studies in criminal
justice published between 1978 and 2006, with most occurring after
2000. In terms of program evaluation, the majority of these meta-analytic studies (99—56.3% of the total number) were “pragmatic” in
nature—that is, assessments of the effectiveness or utility of a practical intervention (Wells, 2009, p. 280).
Meta-analysis offers several distinct advantages over traditional
reviews of the literature. First, it can provide a single precise estimate
of the effect size between two variables, thus providing an indication
of the strength of the relationship between them. Second, it is possible to obtain the effect size of the relationship across different methodologies. To control for differences in rigor between methodologies,
it is possible to code each study according to its methodology. Third,
it makes it possible to consider a subject over time. As new studies
on a subject are conducted, they can be added to the meta-analysis
(Pratt, 2010, pp. 155–156). Of course, the quality of the meta-analysis
is dependent on those of the studies included in the review.
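
To make the pooling step concrete, the sketch below computes a fixed-effect, inverse-variance weighted mean, one standard way a single precise estimate of effect size is obtained across studies. It is a minimal illustration with made-up numbers, not a procedure taken from Pratt (2010):

    import math

    # Minimal fixed-effect meta-analysis sketch (illustrative data).
    # Each study i contributes an effect size d_i with variance v_i;
    # the pooled estimate weights each study by 1 / v_i.
    def pooled_effect(effects, variances):
        weights = [1.0 / v for v in variances]
        pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
        se = math.sqrt(1.0 / sum(weights))  # standard error of the pooled estimate
        return pooled, se

    effects = [0.30, 0.10, 0.25]    # hypothetical study effect sizes
    variances = [0.02, 0.05, 0.04]  # hypothetical sampling variances
    d, se = pooled_effect(effects, variances)
    print(f"pooled d = {d:.3f} (SE = {se:.3f})")  # pooled d = 0.245 (SE = 0.103)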
Pratt (2010) also addresses the question of when meta-analysis
should be performed. With regard to program evaluation, there are
issues here. First, a meta-analysis can clarify the relationship between
an intervention and its effectiveness when published studies of it
have generated “mixed” results. This is done by considering such factors as the methodological differences across studies as well as the
theoretical variables under consideration. Second and most important to us, meta-analysis can assess whether a policy or program
“works” across time and places by examining the literature on the
intervention and generating an effect size (Pratt, 2010, p. 158). In fact,
meta-analysis was invented for this particular purpose—to assess the
effectiveness of various treatments and interventions in education,
psychology, medicine, and interventions with offenders (Pratt, 2010,
p. 160). In criminal justice, meta-analysis makes it possible to assess
the effectiveness of a policy or program by synthesizing and quantifying the magnitude of effect size (Wells, 2009, p. 269).
An example of this type of meta-analysis is a study of 66 published and unpublished evaluations of prison-based drug treatment
programs (therapeutic communities, residential substance abuse
treatment, group counseling, boot camps designed for drug offenders, and narcotic maintenance programs) (Mitchell et al., 2007).
The methodological requirements for inclusion in the analysis were
(Mitchell et al., 2007, pp. 356–357):
● The evaluation utilized an experimental design (with a control group) or quasi-experimental design (that included a no-treatment comparison group).
● It measured both post-program drug use and reoffending.
● The research was conducted between 1980 and 2004.
● The evaluation reported enough information to calculate an effect size.
Overall, the research results indicated that prison participants in
therapeutic communities had lower rates of post-program drug use
and reoffending. Thus, the research on therapeutic communities had
the “most consistent evidence of treatment effectiveness” (Mitchell et al., 2007, p. 369).
Similarly, Shaffer (2011, pp. 500–501) conducted a meta-analysis
of 60 studies on drug court treatment programs. The studies were
coded along five categories:
1. Study characteristics (e.g., affiliation of authors, type of publication, and publication year).
2. Sample characteristics (e.g., race, gender, age, and criminal
history).
3. General program characteristics (e.g., program length, setting,
adult/juvenile, year implemented, and graduation rate).
4. Methodological characteristics (e.g., study design, sample size,
attrition rate, outcomes, length of follow-up, and statistical power).
5. Outcome characteristics (e.g., type of outcome and calculated
effect size).
In addition, the data were coded under 11 categories:
1. Target population (e.g., type of charge, type of offender, and
motivation).
2. Assessment (e.g., areas assessed and method of assessment).
3. Leverage (e.g., drug court model, consequences of failure, and
benefits of graduation).
4. Philosophy (e.g., view toward substance abuse, primary role of
judge, and flexibility of policies regarding rewards and sanctions).
5. Treatment characteristics (e.g., length of treatment, graduation
rate, treatment type, treatment targets, and adolescents’ participation in Alcoholics Anonymous and Narcotics Anonymous
[AA/NA]).
6. Predictability (e.g., system of rewards and punishers and immediacy of response to infractions).
7. Intensity (e.g., average number of contacts and standard conditions and requirements).
8. Service delivery (e.g., single provider, internal provider, and dedicated caseloads).
9. Staff characteristics (e.g., initial training, conference attendance,
and team meetings).
10. Funding (e.g., adequate funding and federal funding).
11. Quality assurance (e.g., internal and external QA and advisory
board).
Therefore, this study focused on the impact of the policies and
procedures of the drug courts upon their effectiveness. The findings of the research indicated that the drug courts were most effective when violent and noncompliant offenders were excluded from
the program. Also notable was the fact that pre-plea drug courts were
the most effective (Shaffer, 2011, p. 513). Thus, the study provided
some information on how drug courts should be operated to achieve
optimal results.
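
In practice, a coding scheme like this is implemented as one structured record per study, which can then be analyzed quantitatively. The sketch below is a hypothetical illustration covering only a few of the categories above; the field names are ours, not Shaffer’s:

    from dataclasses import dataclass

    # Hypothetical per-study coding record (illustrative field names).
    @dataclass
    class DrugCourtStudy:
        publication_year: int   # study characteristics
        sample_size: int        # methodological characteristics
        study_design: str       # e.g., "experimental" or "quasi-experimental"
        excludes_violent: bool  # target population
        pre_plea_model: bool    # leverage / drug court model
        effect_size: float      # outcome characteristics

    studies = [
        DrugCourtStudy(2003, 310, "quasi-experimental", True, True, 0.18),
        DrugCourtStudy(2007, 150, "experimental", False, False, 0.05),
    ]

    # A moderator analysis then compares mean effect sizes across codes,
    # for example pre-plea versus post-plea courts.
    pre_plea = [s.effect_size for s in studies if s.pre_plea_model]
    print(sum(pre_plea) / len(pre_plea))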
In another meta-analysis of drug courts, Mitchell and colleagues (2012) reviewed 154 independent evaluations of drug courts
(92 aimed at adults, 34 for juveniles, and 28 that targeted DWI
[driving while intoxicated] offenders). They determined that the
adult drug court participants had lower recidivism rates than nonparticipants—from 38% to 50% less, with the treatment effect lasting
three years. However, drug courts for DWI offenders did not seem to be as effective, and juvenile drug courts had an even smaller impact upon
recidivism.
These studies demonstrate how meta-analysis can provide
focused results on the strength of the effectiveness of criminal justice
programs. They also considered the quality of the research and the
nature of the intervention in their analyses. This type of information
is particularly relevant to decision makers.
Campbell Collaboration (Crime and
Justice Group)
A more recent development to promote the creation of valid evaluation research is the Campbell Collaboration. Named for experimental psychologist and noted methodologist Donald T. Campbell,
the collaboration was established to prepare, maintain, and disseminate evidence-based research on the effects of interventions in
education, social welfare, and crime and justice. Like the “Maryland
Report,” the Campbell Collaboration sponsors and encourages the
development of rigorous and valid evaluation reports that can contribute to the policy-making process. In particular, its Crime and
Justice Group aims to prepare and maintain systematic reviews of
crime programs and policies and make them electronically accessible
through its website (http://www.campbellcollaboration.org/crime_
and_justice/index.php) to all concerned parties. The Crime and
Justice Group also promotes the establishment of methodological
criteria for including studies in their reviews, securing research funding, and making the best knowledge about the effectiveness of crime
programs and policies available to all (Farrington & Petrosino, 2001).
The website provides abstracts of reports and articles on crime
prevention programs. For example, 35 manuscripts were listed on the
website at the time of writing this book. Many of the studies were on
programs listed under the “Maryland Report,” such as hot-spot policing, drug courts, domestic violence, and early family/parent training
programs. However, recent crime prevention target-hardening techniques were also reviewed, including the effects of improved street
lighting and closed-circuit television surveillance on crime. Offender
treatment programs, such as mentoring programs for juveniles, were
also identified.
Summary
In this chapter, we introduced the purpose of criminal justice program evaluation. In their most valid form, the results of a program
evaluation can provide information for decision makers to guide
criminal justice policy in a rational manner. Systematic reviews and
meta-analyses of criminal justice program evaluations help to inform
this process. The recent efforts of the “Maryland Report” and the
Campbell Collaboration (Crime and Justice Group) have enhanced
the ability of evaluation research to achieve these goals.
The emphasis of program evaluation is clearly on accountability. In the words of the “Maryland Report,” we want to know what
works—that is, policies and programs that effectively prevent crime
and delinquency. The rigorous testing of crime prevention programs
through valid research methods in program evaluation will assure us
that information and knowledge about these efforts will be provided
and, hopefully, disseminated to practitioners and the public.
Discussion Questions
1. Review and discuss the strengths and weaknesses of program
evaluation.
2. How did the “Maryland Report” determine how research on criminal justice programs should be evaluated?
3. In the “Maryland Report,” what programs demonstrated evidence
that they worked, did not work, or were promising?
4. What are the aims of the Campbell Collaboration?
5. Why is it important to determine what works in criminal justice?
How do systematic reviews and meta-analyses help to determine
program effectiveness?
References
Adams, S. (1975). Evaluative research in corrections: A practical guide. Washington,
D.C.: U.S. Department of Justice.
Braga, A. (2001). The Effects of Hot Spots Policing on Crime. Annals of the American
Academy of Political and Social Science, 578, 104–125.
Farrington, D. P., & Petrosino, A. (2001). The campbell collaboration crime and justice
group. The Annals of the Academy of Political and Social Science, 578, 35–49.
Joyce, N. M., & Ramsey, C. H. (2013). Commentary on smart policing. Police Quarterly,
16(3), 358–368.
Lipsey, M. W. (2005). Improving the evaluation of anticrime programs: Committee
on improving evaluation of anticrime programs. Washington, DC: The National
Academies Press.
Mears, D. P. (2010). American criminal justice policy: An evaluation approach to increasing accountability and effectiveness. New York: Cambridge University Press.
Mitchell, O., Wilson, D. B., Eggers, A., & MacKenzie, D. L. (2012). Assessing the effectiveness of drug courts on recidivism: A meta-analytic review of traditional and nontraditional drug courts. Journal of Criminal Justice, 40(1), 60–71.
Mitchell, O., Wilson, D. B., & MacKenzie, D. L. (2007). Does incarceration-based drug treatment reduce recidivism? A meta-analytic synthesis of the research. Journal of Experimental Criminology, 3, 353–375.
Pratt, T. C. (2010). Meta-analysis in criminal justice and criminology: What it is, when it’s
useful, and what to watch out for. Journal of Criminal Justice Education, 21(2), 152–167.
Shaffer, D. K. (2011). Looking inside the black box of drug courts: A meta-analytic
review. Justice Quarterly, 28(3), 493–521.
Sherman, L., Gottfredson, D., MacKenzie, D., Eck, J., Reuter, P., & Bushway, S. (1997). Preventing crime: What works, what doesn’t, what’s promising: A report to the United States Congress. Washington, DC: National Institute of Justice.
Sherman, L., Gottfredson, D., MacKenzie, D., Eck, J., Reuter, P., & Bushway, S. D. (1998). National Institute of Justice research in brief: What works, what doesn’t, what’s promising. Washington, DC: National Institute of Justice.
Suchman, E. A. (1974). Evaluative research: Principles and practice in public service and social action programs. New York: Russell Sage Foundation.
Sylvia, R. D., & Sylvia, K. M. (2012). Program evaluation and planning for the public manager. Long Grove, IL: Waveland Press.
Weiss, C. (1998). Evaluation research: Methods for assessing program effectiveness. Englewood Cliffs, NJ: Prentice-Hall.
Welsh, B. C., & Farrington, D. P. (2001). Toward an evidence-based approach to preventing crime. The Annals of the American Academy of Political and Social Science, 578, 158–173.
Wells, E. (2009). The uses of meta-analysis in criminal justice research: A quantitative review. Justice Quarterly, 26(2), 268–294.