HSE 410 Module Six Activity Guidelines and Rubric
Overview: As a case manager, you will likely work with diverse populations. It is very important that you learn about these populations and continue learning about different groups throughout your career. In this assignment, you will have an opportunity to better understand a minority group. You will then discuss how your understanding of important aspects of this group will help you in working with it and in creating case management interventions for it.
Prompt: Review the module readings. Select a minority group and create a handout with images and key points on the cultural and ethnic aspects that need to
be considered in working with and creating case management interventions with this population. Provide references to support your statements.
Specifically, the following critical elements must be addressed:
• Identifies and describes a minority group
• Identifies and includes at least two images of the selected minority group that illustrate cultural and ethnic aspects that impact interventions
• Identifies four key points of the cultural and ethnic aspects of the identified minority group that need to be considered when working with the minority group
• Summarizes considerations that need to be factored in when working with and creating case management interventions for the identified minority group
Guidelines for Submission: Submit your single-page handout as a letter-sized (8 ½ x 11 inches) Microsoft Word document or PDF file. References must be formatted in APA style.
Critical Elements

Minority Group Identified (Value: 15)
• Exemplary (100%): Meets "Proficient" criteria and uses substantial details to establish the central issues identified within the minority group
• Proficient (85%): Identifies and describes a minority group
• Needs Improvement (55%): Identifies a minority group, but the description of the minority group lacks details
• Not Evident (0%): No minority group is identified

Cultural and Ethnic Aspects of Minority Group Illustrated (Value: 25)
• Exemplary (100%): Meets "Proficient" criteria and the images provided support details as to why the cultural and ethnic aspects of the minority group are relevant in client care
• Proficient (85%): Includes images illustrating the cultural or ethnic aspects of the minority group identified
• Needs Improvement (55%): Includes some cultural or ethnic aspects of the minority group identified and/or the images are not appropriate for illustrating cultural or ethnic aspects of the minority group identified
• Not Evident (0%): Does not include images

Cultural and Ethnic Aspects (Value: 25)
• Exemplary (100%): Meets "Proficient" criteria and key points are supported by relevant research
• Proficient (85%): Identifies key points of the cultural and ethnic aspects that need to be considered when working with this minority group
• Needs Improvement (55%): Identifies some key points of the cultural and ethnic aspects that need to be considered when working with this minority group and/or key points lack details
• Not Evident (0%): No key points included

Summary of Considerations (Value: 30)
• Exemplary (100%): Meets "Proficient" criteria and includes relevant research to support the conclusions
• Proficient (85%): Summarizes considerations that need to be factored in when working with and creating case management interventions for the identified minority group
• Needs Improvement (55%): Includes a summary of considerations with few details or support
• Not Evident (0%): No summary of considerations provided

Articulation of Response (Value: 5)
• Exemplary (100%): Submission is free of errors related to citations, grammar, spelling, syntax, and organization and is presented in a professional and easy-to-read format
• Proficient (85%): Submission has no major errors related to citations, grammar, spelling, syntax, or organization
• Needs Improvement (55%): Submission has major errors related to citations, grammar, spelling, syntax, or organization that negatively impact readability and articulation of main ideas
• Not Evident (0%): Submission has critical errors related to citations, grammar, spelling, syntax, or organization that prevent understanding of ideas

Earned Total: 100%
Conducting Program Evaluation with Hispanics in Rural Settings: Ethical Issues and Evaluation Cha...
Loi, Claudia X. Aguado; McDermott, Robert J.
American Journal of Health Education; Jul/Aug 2010; 41(4); ProQuest Central, p. 252
PROGRAM EVALUATORS AND ETHICAL CHALLENGES: A NATIONAL SURVEY

MICHAEL MORRIS
ROBIN COHN
University of New Haven

A random sample of American Evaluation Association members was surveyed concerning the ethical challenges they encountered in their evaluation work. Respondents who indicated that they had faced such challenges differed significantly (in amount and type of evaluation experience, as well as professional discipline) from those who said that they had never encountered an ethical conflict. Ethical problems associated with the reporting of findings by the evaluator were, by far, the most frequently mentioned. Also frequently described were conflicts involving the misinterpretation/misuse of results by stakeholders, contracting with stakeholders, and adherence to disclosure agreements. A framework for interpreting the study's findings, based on understanding the subjective commitment of evaluators to the roles of scientist and/or helping professional, is proposed.

AUTHORS' NOTE: An earlier version of this article was presented at the 1992 annual meeting of the American Evaluation Association in Seattle. We wish to thank the following individuals for the valuable feedback they provided during the pilot phase of this study: Molly Engle, Joanne Farley, Jennifer Greene, Sandra Mathison, Hallie Preskill, Michael Reed, and Rosalie Torres.

EVALUATION REVIEW, Vol. 17, No. 6, December 1993, 621-642. © 1993 Sage Publications, Inc.

Although ethical issues in program evaluation have received a great deal of attention in the relevant scholarly literature, empirical studies of the ethical challenges encountered by evaluators are extremely rare. The overwhelming majority of analyses employ an anecdotal database that is sometimes linked to a theory-driven conceptual framework (e.g., Fang and Ellwein 1990; Mariner 1990; Mathison 1991; Perloff and Perloff 1980; Sieber and Sanders 1978; Skaburskis 1987; Smith 1985; Windle and Neigher 1978). In a few investigations, evaluators have been asked to respond to a list of statements representing ethical standards in evaluation (DeBrey 1989; McKillip and Garberg 1986; Sheinfeld and Lord 1981). In these studies,
respondents rate each statement on such dimensions as applicability to their
evaluation work, importance, compatibility with other statements, and level
of personal agreement. As useful as this research is, it does not directly
examine the ethical problems and dilemmas that evaluators experience.
The only published study that straightforwardly addresses these conflicts
was conducted by Newman and Brown (1992). Each item in their
survey
instrument described a violation of one of the 30 evaluation standards
developed by the Joint Committee on Standards for Educational Evaluation
(1981), with respondents providing ratings of both frequency and seriousness
for every violation. The researchers found that experienced evaluators perceived Utility and Feasibility standards to be more frequently violated than
Accuracy and Propriety ones; no major differences between the four areas
were found on the seriousness dimension.
These results are intriguing, but our ability to draw conclusions from them
about the personal experiences of evaluators is limited by the study’s methodology. For example, respondents’ generalized perceptions of violation
frequencies might not necessarily be highly correlated with their estimates
of how often they have encountered those problems in their own evaluation
work. Even more important, the structured nature of the Newman and Brown
(1992) survey inevitably restricts the domain of the possible responses to
those contained in the instrument. Of the 96 evaluation "pitfalls" (i.e., violations) described by the Joint Committee (1981), only 30 (less than a third) were included in their study. Although pragmatic considerations
undoubtedly played a major role in the researchers’ decision to work with
just a subset of the violations, the fact remains that selection of a different
subset could have generated results significantly different from those actually
obtained in their investigation.
The issue being raised here, however, goes beyond the number of items
chosen for the study. No list of structured items, even one that contained each
of the Joint Committee’s 96 pitfalls, is likely to be exhaustive. There is always
the possibility that such a list will omit scenarios that would be viewed as
ethically problematical by at least some evaluators. As long as it is the
researcher who determines the universe of situations to be evaluated by the
respondent, this limitation is unavoidable.
Against this background, it is noteworthy that no published research on
evaluation ethics has employed the open-ended methodology used by Pope
and Vetter (1992) in their study of ethical conflicts encountered by members
of the American Psychological Association (APA). Respondents in their
survey were simply asked to describe briefly "an incident that you or a colleague have faced in the last year or two that was ethically challenging to you" (p. 398). With this procedure, it is the respondent's conceptualization of ethical concerns, not the researcher's, that serves as the point of departure.
If applied to evaluators, it holds the promise of generating a more accurate
picture of their experiences with self-defined ethical challenges than other
approaches can produce. Consequently, the research described in this article
used a respondent-driven methodology to examine the ethical conflicts faced
by a large, representative sample of evaluators.
METHOD
SUBJECTS
The population from which the survey sample was randomly drawn
consisted of all individuals with United States addresses who were listed in
the 1991 American Evaluation Association (AEA) Directory and for whom
information concerning highest degree, employment setting, and primary
discipline was provided. Of the 2,142 names in the Directory, 1,732 (81%)
met these criteria. A questionnaire was sent to 700 of these individuals. The
initial mailing generated a response rate of 43.3%, with a follow-up mailing
to nonrespondents yielding an additional 22.3%. Thus the overall response
rate was 65.6% (N = 459). This rate compares very favorably with those
obtained in Shadish and Epstein’s (1987) survey of Evaluation Network and
Evaluation Research Society members (57%), Rog’s (1990) survey of the
AEA membership (55%), and Pope and Vetter’s (1992) survey of the APA
membership (51%).
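For readers tracing the numbers, the reported sample size and overall response rate follow directly from the two mailings (a simple arithmetic check using only the figures given above):

\[
0.433 \times 700 \approx 303, \qquad 0.223 \times 700 \approx 156, \qquad
\frac{303 + 156}{700} = \frac{459}{700} \approx 65.6\%.
\]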
Respondents did not differ significantly from nonrespondents (at p < .05
or less) on any dimension for which we had comparative data: highest degree,
employment setting, primary discipline, and gender.
SURVEY INSTRUMENT
The questionnaire consisted of two sections. Section 1 asked respondents
if they had ever encountered an ethical problem in their work as a program
evaluator to which they had to respond. Those who answered affirmatively
were asked to (1) describe the ethical problems they had faced most frequently in their work (as many as three different problems could be described)
and (2) describe the single most serious ethical problem they had ever
encountered. (If a respondent described only one "frequent" problem, and did not answer the question concerning the most serious problem, we treated the one description that was given as the answer to both questions.) It is important to note that this section of the survey contained an introduction instructing respondents to assume that "the definition of 'ethical' being used in this study is exactly the same as your definition of that term."
Section 2 solicited background information, with the primary focus being
the extent and nature of the respondent's evaluation experience. Individuals
were asked to indicate the number of years they had worked in program evaluation, as well as the approximate number of evaluations they had actually
conducted. They were also asked about their level of experience (1 = very
little, 5 = a great deal) in five different evaluation areas: evaluability assessment, needs assessment, implementation/process evaluation, outcome/
impact assessment, and cost-benefit analysis. Finally, respondents were
asked to estimate what percentage of their evaluations they had conducted
as an external evaluator and what percentage they had conducted as an
internal evaluator.
CONTENT ANALYSIS PROCEDURES
The Standards for Program Evaluation (SPE) developed by the Evaluation
Research Society Standards Committee (1982) were used to categorize each
one of the ethical problems described by respondents. (Recall that a single
respondent could generate as many as four different problems: the three most
frequent and the one most serious.) More specifically, each problem was
assigned as many as three different code numbers drawn from the list of 55
numbered statements contained in the SPE. The total number of codes
assigned to any given problem depended on how many ethical issues were
raised by that problem. Overall, we assigned an average of 1.84 codes to each
problem.
In applying the SPE to our data we modified the list in one significant
way. Preliminary coding had indicated that five of the SPE statements dealing
with data analysis and interpretation (Standards 31 to 35) were so conceptually similar that, in practice, it was not possible to distinguish between them
with any confidence for coding purposes. Consequently, we decided to
collapse these five standards into one: "Appropriate analytic procedures should be applied to the data."
We considered, but decided against, using the standards developed by the
Joint Committee on Standards for Educational Evaluation (1981) to analyze
the responses. During the planning phase of the study we actually applied
both sets of standards to pilot data and concluded that the SPE were better suited to the data. The SPE's modest advantage lay in their organization; subgroups of standards are presented in an order roughly corresponding to the phases of an unfolding evaluation. For many of the ethical problems described by our respondents, the stage of the evaluation in which the conflict was perceived to have occurred appeared to be an important consideration.

RESULTS

RESPONDENT CHARACTERISTICS

Nearly two thirds of the respondents possessed a doctoral degree (59% Ph.D., 7% Ed.D.), 28% had a master's, 4% a bachelor's, and 3% were "other." Respondents' primary disciplines are reported in Table 1, where it can be seen that nearly half identified with either education or psychology.

TABLE 1: Primary Discipline of Respondents (in percentages) (n = 456). NOTE: Figures do not total 100% due to rounding.
The largest subgroup of respondents worked in a college or university
setting, with most of the others fairly evenly divided between federal agencies, state agencies, nonprofit organizations, and private business (see Table 2).
With respect to gender, 55% of the respondents were male, and 45% were
female.
EVALUATION EXPERIENCE
A small percentage of the respondents (3%) indicated that they had never
conducted an evaluation; this subgroup was excluded from all subsequent
analyses. For the 97% who said that they had done at least one evaluation, their experience in the field averaged 11.4 years (SD = 7.3), with a range spanning from 1 to 44 years. Another indication of the respondents' considerable degree of experience can be found in Table 3: 60% had conducted 11 or more evaluations.

TABLE 2: Employment Settings of Respondents (in percentages) (n = 455). NOTE: Figures do not total 100% due to rounding.

TABLE 3: Respondents' Evaluation Experience (n = 445)

The nature of respondents' evaluation activities varied. Overall, their highest level of involvement was in impact assessment (M = 4.15) and process evaluation (3.93). There was a modest amount of participation in needs assessment (2.93), and a low level of involvement in cost-benefit analysis (2.02) and evaluability assessment (1.87).

Both external and internal evaluators were well represented among the respondents (see Table 4). At the extremes, those who had done external evaluations exclusively accounted for 23% of the sample, whereas purely internal evaluators comprised 18%.

TABLE 4: External Evaluation Experience (n = 429)
EXPERIENCE WITH ETHICAL CHALLENGES
Nearly two thirds of the respondents (65%) said that they had encountered
ethical problems in their evaluation work, whereas the remainder (35%) said
that they had not. These two groups, hereafter referred to as the "challenged group" and the "unchallenged group," differed significantly from one another
on several dimensions. The challenged group averaged more years of evaluation experience than the unchallenged group (12.6 vs. 8.9, t[434] = 5.09,
p < .001) and had also conducted a greater number of evaluations (scale Ms: 2.96 vs. 2.12, t[443] = 6.95, p < .001). The challenged group had also devoted
a greater percentage of its time to external evaluation than had the unchallenged group (Ms: 55% vs. 45%, t[426] = 2.58, p < .01).
Because years of experience and number of evaluations conducted were
highly correlated in our sample (r = .60, p < .001), we examined the
relationship of each of these variables to membership in the challenged group
while holding the other one constant. We found that the significant relationship of number of evaluations conducted to challenged group membership
held for all three levels of years tested (1 to 6 years: t[141] = 3.81, p < .001; 7 to 14 years: t[139] = 1.70, p < .05; and 15 to 44 years: t[150] = 2.30, p < .05; all tests one-tailed). In contrast, none of the relationships between years
of evaluation experience and challenged group membership were significant
at the .05 level when the number of evaluations conducted was held constant.
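The stratified comparison described above can be illustrated with a short analysis sketch. This is not the authors' code; the data frame and column names (years, n_evals_scale, challenged) are hypothetical stand-ins. It simply repeats a one-tailed independent-samples t test within each years-of-experience band:

import pandas as pd
from scipy import stats

def stratified_ttests(df: pd.DataFrame) -> None:
    """Compare the evaluations-conducted scale between challenged and
    unchallenged respondents within each years-of-experience band."""
    bands = [(1, 6), (7, 14), (15, 44)]  # bands reported in the article
    for lo, hi in bands:
        band = df[df["years"].between(lo, hi)]
        challenged = band.loc[band["challenged"], "n_evals_scale"]
        unchallenged = band.loc[~band["challenged"], "n_evals_scale"]
        # One-tailed test: challenged respondents are expected to score higher.
        result = stats.ttest_ind(challenged, unchallenged, alternative="greater")
        print(f"{lo}-{hi} years: t = {result.statistic:.2f}, one-tailed p = {result.pvalue:.3f}")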
Chi-square analysis revealed that being in the challenged group was related to one's highest degree (χ² = 9.89, df = 4, p < .05) and primary discipline (χ² = 20.9, df = 5, p < .001). The percentage of respondents in the
challenged group for each degree subsample is given in parentheses: Ph.D.
(70%), master’s (63%), Ed.D. (55%), bachelor’s (50%), and other (36%).
The corresponding figures for discipline are evaluation (88%), sociology
(87%), research/statistics (69%), psychology (62%), other (61%), and education (56%).
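As a hedged illustration of how an association like the degree-by-group result reported above could be computed (again with hypothetical column names, not the authors' code), the sketch below runs a chi-square test of independence on a contingency table of highest degree by challenged-group membership; with five degree categories and two groups, the test has df = 4, matching the value reported in the text:

import pandas as pd
from scipy.stats import chi2_contingency

def degree_by_group(df: pd.DataFrame) -> None:
    """Chi-square test of independence: highest degree vs. challenged-group status."""
    table = pd.crosstab(df["degree"], df["challenged"])  # 5 x 2 contingency table
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.4f}")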
It is noteworthy that, in terms of both degree and discipline, involvement
in the field of education was associated with a relatively low probability of
being in the challenged group. This finding is supported by more refined
analysis. For example, master’s-level respondents who identified education
as their primary discipline were significantly less likely than master’s-level
respondents in other fields to be in the challenged group (46% vs. 68%, χ² = 3.96, df = 1, p < .05). For those with Ph.D.s, the relationship approached significance (61% vs. 73%, χ² = 3.46, df = 1, p < .07).
NATURE OF ETHICAL CHALLENGES
This and all remaining sections of the Results focus on the 290 respondents who comprised the challenged group. In describing the ethical conflicts
they most frequently encountered, these individuals generated 555 different
problem descriptions. In answering the question about the single most serious
conflict they had ever encountered, they produced 263 descriptions. Table 5
presents, for both types of problems, the 10 SPE codes that we assigned most
often to the respondents’ answers.
It is important to note that the unit of analysis for the results reported in the "Most Frequent Problems" column of Table 5 is the respondent, not the individual problems he or she described. Thus each number in that column refers to the percentage of the challenged group who mentioned a given issue at least once when describing the ethical problems they had most frequently encountered. (The unit-of-analysis distinction is not relevant to the "Most Serious Problem" column because only one serious problem could be generated by each respondent.)
Inspection of Table 5 reveals that the five codes assigned most often in the "Most Frequent Problems" column are the same, and in the same rank order, as those appearing in the "Most Serious Problem" column. Thus there is a great deal of commonality in the issues underlying respondents' most frequent and most serious ethical challenges.
To develop a more detailed understanding of the conflicts associated with
the codes that were most often assigned, we performed a follow-up content
analysis of problems that had been assigned any of the four top-ranked codes
(40, 51, 2, 47). The results of this analysis are presented in Tables 6 through 9.
A review of these results indicates that, where the reporting of evaluation
findings is concerned (Code 40), the major ethical conflict faced by respondents was feeling pressured by stakeholders (usually the primary client) to
distort the facts (Table 6). The problem of how to respond when one stumbles
upon information in an evaluation that is morally or legally volatile is also
worthy of note. Although this was an uncommon occurrence, with only 14%
of the Code 40 group saying that they had encountered it frequently, it was
a component of the conflicts described by nearly one quarter of the Code 40
respondents to the "most serious problem" question.
In terms of what can go wrong after evaluation findings are presented
(Code 51), the results show a fairly wide variety of possibilities (Table 7).
TABLE 5: The 10 SPE codes assigned most often to respondents' most frequent and most serious ethical problems (table values not legible in this copy)
TABLE 6: Challenges in Presenting Findings (in percentages). NOTE: Percentages are not based on the challenged group as a whole but only on that segment of the challenged group whose problem descriptions were assigned Code 40. a. Percentages in this column total more than 100% because it is possible for one respondent to be represented in as many as three different categories. Within a category, however, a respondent is never counted more than once.
When describing their most frequent problems in this area, nearly a third of
the Code 51 group mentioned suppression or nonuse of the final report. The
use of findings by stakeholders to punish someone (including the evaluator)
was encountered about as often as straightforward distortion or misinterpretation. Because nearly 20% of the Code 51 respondents described the misuse
they witnessed in a nonspecific fashion, the relative magnitudes of the
percentages reported in Table 7 might have differed if more detailed information had been available concerning the circumstances underlying the
answers coded as "unspecified misuse."
The ethical problems encountered during the entry/contracting stage of the evaluation (Code 2, Table 8) are dominated by what we label the "stacked deck" phenomenon: The evaluator believes that the primary client is not sincerely committed to obtaining accurate information about the program in question but only wants the research done to buttress decisions already made, such as seeking additional support, terminating the program, firing someone, and so on. The majority of Code 2 respondents to the "most frequent problems" question cited this issue, and it also was a defining characteristic of over half of the conflicts described by Code 2 respondents to the "most serious problem" item.
TABLE 7: Challenges of Misinterpretation and Misuse (in percentages). NOTE: Percentages are not based on the challenged group as a whole but only on that segment of the challenged group whose problem descriptions were assigned Code 51. a. Percentages in this column total more than 100% because it is possible for one respondent to be represented in as many as three different categories. Within a category, however, a respondent is never counted more than once. b. Figures do not total 100% due to rounding.

TABLE 8: Challenges in Contracting With Stakeholders (in percentages). NOTE: Percentages are not based on the challenged group as a whole but only on that segment of the challenged group whose problem descriptions were assigned Code 2. a. Percentages in this column total more than 100% because it is possible for one respondent to be represented in as many as three different categories. Within a category, however, a respondent is never counted more than once.

Finally, disclosure problems (Code 47) tended to involve either confidentiality or ownership/dissemination issues (Table 9). Whereas the specific forms of these conflicts were cited about equally often in response to the frequency question, the list of serious challenges was led by concerns over unintentional, inadvertent violations of individual confidentiality. Nearly half of all Code 47 respondents to the "most serious problem" question described such a conflict.

TABLE 9: Challenges in Adhering to Disclosure Agreements (in percentages). NOTE: Percentages are not based on the challenged group as a whole but only on that segment of the challenged group whose problem descriptions were assigned Code 47. a. Percentages in this column total more than 100% because it is possible for one respondent to be represented in as many as three different categories. Within a category, however, a respondent is never counted more than once.
SUBGROUP COMPARISONS
For respondents in the challenged group, it is appropriate to ask whether
the different types of conflicts they reported are related to their background
characteristics. To this end we focused on the eight SPE codes in Table 5 that
appeared in the lists of both the most frequent and most serious problems.
Table 10 summarizes the results of this analysis.
As can be seen, the greatest number of significant relationships involved
Code 40, conflicts in presenting findings. Respondents who experienced
frequent problems in this area averaged more years of experience (13.8 vs.
10.9, t[281] = 3.35, p < .001), had conducted more evaluations (4.1 vs. 3.8,
t[288] = 2.50, p < .05), and had emphasized process evaluation to a greater extent in their work (4.1 vs. 3.8, t[286] = 2.71, p ...) than those who had ...