How Many Interviews Are Enough?
An Experiment with Data Saturation and
Variability
GREG GUEST
ARWEN BUNCE
LAURA JOHNSON
Family Health International
Guidelines for determining nonprobabilistic sample sizes are virtually nonexistent.
Purposive samples are the most commonly used form of nonprobabilistic sampling,
and their size typically relies on the concept of “saturation,” or the point at which no
new information or themes are observed in the data. Although the idea of saturation
is helpful at the conceptual level, it provides little practical guidance for estimating
sample sizes, prior to data collection, necessary for conducting quality research.
Using data from a study involving sixty in-depth interviews with women in two West
African countries, the authors systematically document the degree of data saturation
and variability over the course of thematic analysis. They operationalize saturation
and make evidence-based recommendations regarding nonprobabilistic sample
sizes for interviews. Based on the data set, they found that saturation occurred within
the first twelve interviews, although basic elements for metathemes were present as
early as six interviews. Variability within the data followed similar patterns.
Keywords: interviewing; saturation; variability; nonprobability sampling; sample size; purposive
Financial support for this research was provided by the U.S. Agency for International Development through Family Health International, although the views expressed in this article do not necessarily reflect those of either organization. The authors thank Kerry McLoughlin (Family Health International), Betty Akumatey (University of Ghana, Legon), and Lawrence Adeokun (Association for Reproductive and Family Health, Ibadan, Nigeria). Without their hard work, this article would not have been possible.

While conducting a literature review of guidelines for qualitative research in the health sciences, we were struck by how often the term theoretical saturation arose. Article after article recommended that purposive sample sizes be determined by this milestone (e.g., Morse 1995; Sandelowski 1995; Bluff 1997; Byrne 2001; Fossey et al. 2002), and a good number of journals in the health sciences require that theoretical saturation be a criterion by which to justify adequate sample sizes in qualitative inquiry. Saturation has, in fact, become the gold standard by which purposive sample sizes are determined in health science research.
Equally striking in our review was that the same literature did a poor job of
operationalizing the concept of saturation, providing no description of how
saturation might be determined and no practical guidelines for estimating
sample sizes for purposively sampled interviews. This dearth led us to carry
out another search through the social and behavioral science literature to see
if, in fact, any generalizable recommendations exist regarding nonprobabilistic sample sizes. After reviewing twenty-four research methods books and seven databases, we found our suspicions confirmed: very little headway has been made in this regard. Morse's (1995:147) comments succinctly sum up
the situation; she observed that “saturation is the key to excellent qualitative
work,” but at the same time noted that “there are no published guidelines or
tests of adequacy for estimating the sample size required to reach saturation.”
Our experience, however, tells us that it is precisely a general, numerical
guideline that is most needed, particularly in the applied research sector.
Individuals designing research—laypeople and experts alike—need to know how many interviews they should budget for and write into their protocol before they enter the field. This article responds to that need, and we hope it provides an evidence-based foundation on which subsequent researchers can
expand. Using data from a study involving sixty in-depth interviews with
women in two West African countries, we systematically document the
degree of data saturation and variability over the course of our analysis and
make evidence-based recommendations regarding nonprobabilistic sample
sizes.
We stress here that we intentionally do not discuss the substantive findings from our research; they will be presented elsewhere. This is a methodological article, and we felt that including a discussion of our study findings
would be more distracting than informative. We do provide some background for the study, but our focus is mainly on the development and structure of our codebook and its evolution across the analysis process.
NONPROBABILISTIC AND PURPOSIVE SAMPLING
Calculating the adequacy of probabilistic sample sizes is generally straightforward and can be estimated mathematically based on preselected parameters and objectives (i.e., x statistical power with y confidence intervals). In
theory, all research can (and should when possible) use probabilistic sampling methodology, but in practice, it is virtually impossible to do so in the
field (Bernard 1995:94; Trotter and Schensul 1998:703). This is especially
true for hard-to-reach, stigmatized, or hidden populations.
Research that is field oriented in nature and not concerned with statistical
generalizability often uses nonprobabilistic samples. The most commonly
used samples, particularly in applied research, are purposive (Miles and
Huberman 1994:27). Purposive samples can be of different varieties—
Patton (2002), for example, outlined sixteen types of purposive samples—
but the common element is that participants are selected according to predetermined criteria relevant to a particular research objective. The majority of
articles and books we reviewed recommended that the size of purposive samples be established inductively and sampling continue until “theoretical saturation” (often vaguely defined) occurs. The problem with this approach,
however, is that guidelines for research proposals and protocols often require
stating up front the number of participants to be involved in a study (Cheek
2000). Waiting to reach saturation in the field is generally not an option.
Applied researchers are often stuck with carrying out the number of interviews they prescribe in a proposal, for better or worse.1 A general yardstick is
needed, therefore, to estimate the point at which saturation is likely to occur.
Although numerous works we reviewed explain how to select participants
(e.g., Johnson 1990; Trotter 1991) or provide readers with factors to consider
when determining nonprobabilistic sample sizes (Miles and Huberman
1994; Bernard 1995; Morse 1995; Rubin and Rubin 1995; Flick 1998;
LeCompte and Schensul 1999; Schensul, Schensul, and LeCompte 1999;
Patton 2002), we found only seven sources that provided guidelines for
actual sample sizes. Bernard (2000:178) observed that most ethnographic
studies are based on thirty to sixty interviews, while Bertaux (1981) argued that
fifteen is the smallest acceptable sample size in qualitative research. Morse
(1994:225) outlined more detailed guidelines. She recommended at least six
participants for phenomenological studies; approximately thirty to fifty participants for ethnographies, grounded theory studies, and ethnoscience studies;
and one hundred to two hundred units of the item being studied in qualitative
ethology. Creswell’s (1998) ranges are a little different. He recommended
between five and twenty-five interviews for a phenomenological study and
twenty to thirty for a grounded theory study. Kuzel (1992:41) tied his recommendations to sample heterogeneity and research objectives, recommending
six to eight interviews for a homogeneous sample and twelve to twenty data
sources “when looking for disconfirming evidence or trying to achieve maximum variation.” None of these works present evidence for their recommendations. The remaining two references—Romney, Batchelder, and Weller
(1986) and Graves (2002)—do provide rationale for their recommendations
for quantitative data and are discussed later in the article.
STUDY BACKGROUND
The original study for which our data were collected examined perceptions of social desirability bias (SDB) and accuracy of self-reported behavior
in the context of reproductive health research. Self-reports are the most commonly used measure of sexual behavior in the health sciences, and yet concern has been raised about the accuracy of these measures (Brody 1995;
Zenilman et al. 1995; Weinhardt et al. 1998; Schwarz 1999; Weir et al. 1999;
Crosby et al. 2002). One of the key factors identified as affecting report accuracy is a participant’s concern with providing socially desirable answers
(Paulhus 1991; Geary et al. 2003). Our study was therefore designed to
inform HIV research and intervention programs that rely on self-reported
measures of sexual behavior.
Using semistructured, open-ended interviews, we examined how women
talk about sex and their perceptions of self-report accuracy in two West African countries—Nigeria and Ghana. We solicited suggestions for reducing
SDB and improving self-report accuracy within various contexts. In addition, we asked participants to provide feedback regarding methods currently
used to mitigate SDB within the context of HIV research and prevention,
such as audio-computer-assisted self-interviews (ACASI) and manipulating
various aspects of the interview.
METHODS
Sampling and Study Population
A nonprobabilistic, purposive sampling approach was used. We wanted
to interview participants at high risk for acquisition of HIV and who would
be appropriate candidates for HIV prevention programs in the two study
sites. We therefore interviewed women who met at least three basic criteria:
(1) were eighteen years of age or older, (2) had vaginal sex with more than one
male partner in the past three months, and (3) had vaginal sex three or more
times in an average week. Women at the highest risk for HIV in Nigeria and
Ghana tend to be engaged in some form of sex work (although not all self-identify as sex workers), so fieldworkers recruited sex workers for our study.
TABLE 1
Sample Characteristics

                                 Ghana (n = 30)   Nigeria (n = 30)   Combined (N = 60)
Age
  Mean                           26.3             32.0               29.1
  Range                          20–35            19–53              19–53
Years of education
  Mean                           6.8              10.3               8.5
  Range                          0–12             0–17               0–17
Number of ethnic groups          12               3                  15
Marital status
  Single                         20 (66.7%)       13 (43.3%)         33 (55%)
  Married                        0                1 (3.3%)           1 (1.7%)
  Divorced/separated             10 (33.3%)       14 (46.7%)         24 (40%)
  Widowed                        0                2 (6.7%)           2 (3.3%)
Previous research experience     13 (43.3%)       6 (20%)            19 (31.7%)
In Nigeria, thirty high-risk women were recruited from three sites in the
city of Ibadan, which correspond to different socioeconomic environments:
brothels, a college campus, and known pick-up points for sex workers. The
sampling process was similar in Ghana. Thirty high-risk women were
recruited from greater Accra. Three high-risk sites were identified for
recruitment and included a red light area, a hotel, and a hostel. Table 1 presents the sample characteristics for the two sites.
Data Collection and Analysis
The interview guide consisted of six structured demographically oriented
questions, sixteen open-ended main questions, and fourteen open-ended
subquestions. Subquestions were asked only if a participant’s response to the
initial question did not cover certain topics of interest. All respondents were
asked identical questions in the same sequence, but interviewers probed
inductively on key responses. The guide was divided into the following six
domains of inquiry:
• perceptions of sexually oriented research,
• discussion of sex and condoms within the community (i.e., among peers),
• discussion of sex and condoms within the research context,
• interviewer characteristics,
• remote interviewing techniques (ACASI, phone interviews), and
• manipulating the environment of an interview.
Data were collected between September 15 and December 12, 2003.
Interviews were conducted in English, Twi, and Ga in Ghana and in English,
Pidgin English, and Yoruba in Nigeria. All interviews were tape recorded,
and verbatim responses to each question were translated and transcribed by
local researchers, using a standardized transcription protocol (McLellan,
MacQueen, and Niedig 2003). Transcripts were reviewed by the principal
investigator at each site for translation accuracy and revised when necessary.
Thematic analysis was performed on the translated transcripts using Analysis Software for Word-based Records (AnSWR; Centers for Disease Control
and Prevention 2004).
A codebook was developed by two data analysts, using a standard iterative process (MacQueen et al. 1998). In this process, each code definition has
five parts: (1) a “brief definition” to jog the analyst’s memory; (2) a “full definition” that more fully explains the code; (3) a “when to use” section that
gives specific instances, usually based on the data, in which the code should
be applied; (4) a “when not to use” section that gives instances in which the
code might be considered but should not be applied (often because another
code would be more appropriate); and (5) an “example” section of quotes
pulled from the data that are good examples of the code.
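For readers who keep their codebooks in software, the five-part structure just described maps naturally onto a small record type. The sketch below is only an illustration of that structure, assuming Python as the tool; the field names are ours, and the sample entry loosely paraphrases the "religion" code shown later in Table 3 rather than quoting the study's actual codebook.

```python
# Illustrative sketch only: one way to hold the five-part code definitions
# described above. Field names are ours; the entry loosely paraphrases the
# "religion" code shown later in Table 3.

from dataclasses import dataclass, field

@dataclass
class CodeDefinition:
    name: str
    brief: str                                   # short reminder of the code's meaning
    full: str                                    # fuller explanation of the code
    when_to_use: str                             # instances in which the code applies
    when_not_to_use: str = ""                    # instances better served by another code
    examples: list[str] = field(default_factory=list)   # quotes pulled from the data

religion = CodeDefinition(
    name="religion",
    brief="religious imperative to tell the truth",
    full="Statements of a religious imperative to tell the truth (even if not linked to gain)",
    when_to_use="Respondent invokes God, faith, or religious duty when explaining honesty",
    when_not_to_use="Honesty motivated by practical gain (a more specific code applies)",
)
print(religion.name, "-", religion.brief)
```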
In our analysis, the lead analyst created an initial content-based coding
scheme for each set of six interviews. Intercoder agreement was assessed for
every third interview using combined segment-based Kappa scores run on
two double-coded transcripts (Carey, Morgan, and Oxtoby 1996). Coding
discrepancies (individual codes receiving Kappa scores of 0.5 or less) were
discussed and resolved by the analysis team, the codebook revised accordingly, and recoding performed when necessary to ensure consistent application of codes. The resulting overall Kappa score, by individual question, was
0.82. To identify key themes, we ran frequency reports in AnSWR.
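As a rough illustration of this kind of intercoder check, the sketch below computes a per-code Cohen's kappa from two coders' segment-level code applications and flags codes at or below the 0.5 threshold mentioned above. It is a simplified stand-in, not the combined, segment-based procedure of Carey, Morgan, and Oxtoby (1996) or the AnSWR implementation, and the segment and code names are hypothetical.

```python
# Simplified illustration: per-code Cohen's kappa for two coders' segment-level
# code applications, flagging codes at or below the 0.5 discrepancy threshold.
# Not the exact procedure used in the study; segment ids and codes are hypothetical.

def cohen_kappa(a, b):
    """Cohen's kappa for two equal-length binary vectors."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    p_a, p_b = sum(a) / n, sum(b) / n            # proportion of segments each coder applied the code
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)
    return 1.0 if expected == 1 else (observed - expected) / (1 - expected)

def kappa_by_code(segments, coder_a, coder_b):
    """coder_a, coder_b: dicts mapping code -> set of segment ids it was applied to."""
    scores = {}
    for code in set(coder_a) | set(coder_b):
        vec_a = [int(s in coder_a.get(code, set())) for s in segments]
        vec_b = [int(s in coder_b.get(code, set())) for s in segments]
        scores[code] = cohen_kappa(vec_a, vec_b)
    return scores

segments = ["q1_s1", "q1_s2", "q2_s1", "q2_s2", "q3_s1"]
coder_a = {"trust_necessary": {"q1_s1", "q2_s1"}, "religion": {"q3_s1"}}
coder_b = {"trust_necessary": {"q1_s1", "q2_s2"}, "religion": {"q3_s1"}}
flagged = {c: round(k, 2) for c, k in kappa_by_code(segments, coder_a, coder_b).items() if k <= 0.5}
print(flagged)   # codes to discuss and, if needed, recode: {'trust_necessary': 0.17}
```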
THE EXPERIMENT
The Methods section refers to the procedures we used in our substantive
analysis, yet these procedures did not provide us with the data we needed to
determine thematic development and evolution over time and eventually the
point at which saturation occurred in our data. We had to develop additional
procedures and methods to operationalize and document data saturation.
Saturation can be of various types, with the most commonly written about
form being “theoretical saturation.” Glaser and Strauss (1967:65) first
defined this milestone as the point at which “no additional data are being
found whereby the (researcher) can develop properties of the category. As he
sees similar instances over and over again, the researcher becomes empirically confident that a category is saturated . . . when one category is saturated,
nothing remains but to go on to new groups for data on other categories, and
attempt to saturate these categories also.”
For serious practitioners of the grounded theory approach, the term theoretical saturation refers specifically to the development of theory. Theoretical saturation occurs when all of the main variations of the phenomenon have
been identified and incorporated into the emerging theory. In this approach,
the researcher deliberately searches for extreme variations of each concept in
the theory to exhaustion.
Although theoretical saturation is the most commonly used term in published works, frequency of use within multiple bodies of literature has
resulted in its meaning becoming diffuse and vague. To avoid propagating
this transgression, we rely on a more general notion of data saturation and
operationalize the concept as the point in data collection and analysis when
new information produces little or no change to the codebook. We wanted to
find out how many interviews were needed to get a reliable sense of thematic
exhaustion and variability within our data set. Did six interviews, for example, render as much useful information as twelve, eighteen, twenty-four, or
thirty interviews? Did any new themes, for example, emerge from the data
gathered between interview thirteen and interview thirty? Did adding thirty
more interviews from another country make any difference?
To answer these questions, we documented the progression of theme
identification—that is, the codebook structure—after each set of six interviews, for a total of ten analysis rounds.2 We monitored the code network and
noted any newly created codes and changes to existing code definitions. We
also documented frequency of code application after each set of six interviews was added. The reasoning behind this latter measure was to see if the
relative prevalence of thematic expression across participants changed over
time. It could be the case, to take a hypothetical example, that one code in the
first round of analysis was applied to all six of the transcripts from one site,
implying an initial high prevalence across participants. It could also be true
that the same code was never applied in the remaining twenty-four transcripts and that another code emerged for the first time in the seventh transcript and was applied to the rest of the transcripts for a frequency of twenty-four. We needed to assess this variability.
We created a cumulative audit trail, updating our records after analysis of
each set of six transcripts. So, in our first analysis, we analyzed the first six
transcripts, then added six more in our second analysis for an n of twelve, and
so on. We started with the data from Ghana and kept adding six transcripts
until we had completed all thirty interviews from this site. Six transcripts
from Nigeria were then added to the analysis for an n of 36, and the process
was repeated until all sixty of the interviews from both sites had been analyzed and the code definitions finalized. In all, we completed ten successive
and cumulative rounds of analysis on sets of six interviews. Internal codebook structure (conceptually relating codes to one another) was not manipulated until all of the base codes had been identified and all sixty transcripts
coded.
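The cumulative audit trail described above is straightforward to mimic in code. The sketch below is a minimal illustration assuming each transcript has been reduced to the set of codes applied to it; it records the new codes and the per-code prevalence after each batch of six transcripts. The data structures are hypothetical stand-ins for AnSWR output.

```python
# Minimal sketch of the cumulative audit trail: after each batch of six
# transcripts, record which codes appear for the first time and how many
# participants each code has been applied to so far.

def audit_rounds(transcripts, batch_size=6):
    """transcripts: list of sets, each holding the codes applied to one transcript."""
    seen, rounds = set(), []
    for start in range(0, len(transcripts), batch_size):
        batch = transcripts[start:start + batch_size]
        new_codes = set().union(*batch) - seen
        seen |= new_codes
        analyzed = transcripts[:start + len(batch)]
        prevalence = {c: sum(c in t for t in analyzed) for c in seen}
        rounds.append({"interviews": len(analyzed),
                       "new_codes": sorted(new_codes),
                       "prevalence": prevalence})
    return rounds

# Toy run with invented codes: new-code discovery tails off after the early batches
example = [{"trust", "religion"}] * 6 + [{"trust", "reputation"}] * 6 + [{"trust"}] * 6
for r in audit_rounds(example):
    print(r["interviews"], "interviews; new codes:", r["new_codes"])
```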
Below, we provide a summary of these data. Specifically, we present data
illustrating the point in analysis when codes were created or definitions
changed. We also examine frequency of code application across participants
and estimate the point at which the distribution of code frequency stabilized.
For all of our analyses, the unit of analysis is the individual participant (i.e.,
transcript), and the data items are the individual codes (i.e., expressions of themes).
Code Development
After analyzing all thirty interviews from Ghana, the codebook contained
a total of 109 content-driven codes, all of which had been applied to at least
one transcript. Of these codes, 80 (73%) were identified within the first six
transcripts. An additional 20 codes were identified in the next six transcripts,
for a cumulative total of 100, or 92% of all codes applied to the Ghana data.
As one would expect, the remaining 9 codes were identified with progressively less frequency (see Figure 1, the five columns on the left, interviews 1–
30). Clearly, the full range of thematic discovery occurred almost completely
within the first twelve interviews—at least based on the codebook we developed (more on this later).
Surprisingly, not much happened to the number of codes once we started
adding data from the other country. Only five new codes (out of a total of
114) had to be created to accommodate the Nigerian data (see Figure 1, the
five columns on the right, interviews 31–60), one of which was new in substance. Four of the five codes were nonsubstantive in nature but were created
to capture nuances in the Nigerian data. Two of these four new codes were
needed for the unique subgroup of campus-based sex workers in Nigeria who
typically do not refer to themselves or their friends as sex workers or to their
sexual partners as clients. We therefore needed new codes that were the
equivalent of codes used in other transcripts for talk among sex worker
friends and talk about sex worker clients, but without the reference to sex
workers. The result was two codes that were not new in substance but rather
variations of the original codes tailored to the specific situation of the campus-based women. The two other codes were related to researcher qualities.
One covered nonspecific statements that researchers’ behavior and attitudes
were important. The other code was akin to an "other" category and captured infrequently mentioned interviewer qualities.

FIGURE 1
Code Creation over the Course of Data Analysis
[Bar chart of the number of new codes identified in each successive set of six interviews. Ghana: interviews 1–6 = 80, 7–12 = 20, 13–18 = 5, 19–24 = 2, 25–30 = 2. Nigeria: interviews 31–36 = 2, 37–42 = 0, 43–48 = 1, 49–54 = 1, 55–60 = 1.]
Code Definition Changes
Creating a team-based codebook undoubtedly requires making changes
to code definitions as data analysis progresses (MacQueen et al. 1998). Since
this process itself will ultimately affect the absolute range of thematic expression identified in the data (e.g., if a code is augmented to be more inclusive of
certain concepts), we documented all code definition changes throughout the
analysis.
Table 2 illustrates the progression of definition changes throughout the
analysis process. A total of thirty-six code revisions occurred throughout the
entire analysis. The majority of changes (seventeen) occurred during the second round of analysis, and after analyzing only twelve interviews, 58% of the
total number of changes had been made. Twenty-two codes had their definitions revised only once, and seven codes were revised twice; no definition was revised more than twice throughout the entire analysis.
TABLE 2
Code Definition Changes by Round of Analysis

Analysis   Interviews   Definition Changes                Cumulative   Cumulative
Round      Analyzed     in Round             Percentage   Frequency    Percentage

Ghana data
1          6            4                    11           4            11
2          12           17                   47           21           58
3          18           7                    20           28           78
4          24           3                    8            31           86
5          30           2                    6            33           92

Nigeria data
6          36           3                    8            36           100
7          42           0                    0            36           100
8          48           0                    0            36           100
9          54           0                    0            36           100
10         60           0                    0            36           100
As with the code creation data, adding the Nigerian data to the mix rendered little change to the codebook structure, with only three definition
changes across thirty interviews, and these three changes occurred within the
first six transcripts from this country. It appears that by the time we began
looking at the Nigeria data, the structure of the codebook had been relatively
well established, and incoming data offered few novel insights.
Codebook definitions also did not change much from a qualitative perspective. Of the thirty-six total code revisions, twenty-eight (78%) were
made only to the “when to use” section of the definition, indicating that the
substance of the code definition itself did not really change. Rather, clarifications and instructions for application were made more explicit. Of the ten
codes whose actual definition changed over the course of the project, seven
of the changes made the code more inclusive, thus expanding the conceptual
scope of the definition. For example, the code “religion,” which refers to
“statements of a religious imperative to tell the truth,” was changed to
include both the positive and negative effects of religion on the veracity of
self-reported behavior after analyzing the first set of transcripts from Nigeria
(see Table 3). Three of the ten definition revisions narrowed the scope of the
definition.
In Table 3, we present the definitions and subsequent revisions for the seven codes that were revised twice to provide examples of how codes were changed. Space constraints prohibit listing all code changes. The round number given for each definition version indicates the round of analysis in which the code was originally created (Original) or in which the revision was made (Revisions 1 and 2). Percentages in parentheses after the code name indicate the proportion of the sixty transcripts to which the code was applied.

TABLE 3
Sample Code Definition Revisions

Trust necessary (52%)
  Original (R1): Full definition: Discussion of the necessity of trust before participant will discuss sexual matters with another woman; if there is no trust, will not talk about it. When to use: Have to know somebody's "character" before talking personally; can't trust just anybody and often don't know most of the women well.
  Revision 1 (R2): Full definition: Discussion of the necessity of trust before participant will discuss sexual matters with another woman; if there is no trust, will not talk about it. When to use: Have to know somebody's "character" before talking personally; can't trust just anybody and often don't know most of the women well; includes statements on confidences coming back to haunt you.
  Revision 2 (R6): Full definition: Discussion of the necessity of trust before discussing sexual matters with another woman; if there is no trust, will not talk about it. When to use: Have to know somebody's "character" before talking personally; can't trust just anybody and often don't know most of the women well; includes statements on confidences coming back to haunt you; includes setting the right atmosphere (sharing own personal information, creating a safe space, etc.) before asking personal questions.

Talk with SW friends (73%)
  Original (R1): Full definition: Talk between the participant and a close friend who is a sex worker (SW). When to use: Often occurs as: can only share very personal matters with one to two close friends (concept of space, can go to their rooms but nobody else's).
  Revision 1 (R1): Full definition: Talk between the participant and a close friend who is a SW. When to use: If using the word friend in this discussion, use this code; often occurs as: can only share very personal matters with one to two close friends (concept of space, can go to their rooms but nobody else's).
  Revision 2 (R2): Full definition: Talk between the participant and a close friend who is a SW. When to use: If using the word friend in this discussion, use this code; often occurs as: can only share very personal matters with one to two close friends (concept of space, can go to their rooms but nobody else's); use if obvious/implied that the friend is a SW, even if not specifically stated.

Talk with non-SWs (15%)
  Original (R2): Full definition: Talk among the participants and people who are not SWs.
  Revision 1 (R2): Full definition: Talk among the participants and people who are not SWs. When to use: Includes both participant stating that she cannot talk to non-SWs about sex and/or condoms and statements that non-SWs don't talk about sex and/or condoms.
  Revision 2 (R3): Full definition: Talk among the participants and people who are not SWs. When to use: Includes both participant stating that she cannot talk to non-SWs about sex and/or condoms and statements that non-SWs don't talk about sex and/or condoms. When not to use: DO NOT use for statements that she can talk about sexual issues with a non-SW (that would be coded as "cwho_sep").

Religion (25%)
  Original (R1): Full definition: Statements of a religious imperative to tell the truth (even if not linked to gain).
  Revision 1 (R3): Full definition: Statements of a religious imperative to tell the truth (even if not linked to gain). When to use: Includes people not being honest because they have turned away from God or are "wicked" (if this is meant in a religious sense).
  Revision 2 (R6): Full definition: Religion or religious conviction and beliefs affect a person's honesty when discussing sexual issues. When to use: Includes statements of a religious imperative to tell the truth (even if not linked to gain); includes people not being honest because they have turned away from God or are "wicked" (if this is meant in a religious sense); includes religion as a barrier to being honest about sexual issues.

Personal help (62%)
  Original (R1): Full definition: Being honest while in the study to receive personal help (advice, getting out). When to use: Can be moral, religious, and/or pragmatic, often all in same statement.
  Revision 1 (R2): Full definition: Being honest while in the study in order to receive personal help (advice, getting out). When to use: Can be moral, religious, and/or pragmatic, often all in same statement; includes statements that will be honest because the answers might help someone else.
  Revision 2 (R3): Full definition: Being honest while in the study to receive personal help (advice, getting out). When to use: Can be moral, religious, and/or pragmatic, often all in same statement; includes statements that will be honest because the answers might help someone else; the "help" includes learning.

Reputation (55%)
  Original (R1): Full definition: Reputation and self-image (don't want to be shamed/embarrassed/laughed at, do want to seem popular) drives respondent's (dis)honesty. When to use: Sometimes give higher numbers because they want to seem "pretty" and popular; sometimes give lower numbers because they are embarrassed by how many men they sleep with in a day and don't want to seem greedy or money hungry; also code here at the top level if the respondent says, "I'll tell the truth because otherwise I'll be caught out in my lies."
  Revision 1 (R2): Full definition: Reputation and self-image (don't want to be shamed/embarrassed/laughed at, do want to seem popular) drives respondent's (dis)honesty. When to use: Sometimes give higher numbers because they want to seem "pretty" and popular; sometimes give lower numbers because they are embarrassed by how many men they sleep with in a day and don't want to seem greedy or money hungry; also code here at the top level if the respondent says, "I'll tell the truth because otherwise I'll be caught out in my lies"; cannot be honest about not using condoms because they know that it is "proper" to use them.
  Revision 2 (R3): Full definition: Reputation and self-image (don't want to be shamed/embarrassed/laughed at, do want to seem popular) drives respondent's (dis)honesty. When to use: Sometimes give higher numbers because they want to seem "pretty" and popular; sometimes give lower numbers because they are embarrassed by how many men they sleep with in a day and don't want to seem greedy or money hungry or because they are "shy"; also code here at the top level if the respondent says, "I'll tell the truth because otherwise I'll be caught out in my lies"; cannot be honest about not using condoms because they know that it is "proper" to use them.

Sex work stigma (17%)
  Original (R1): Full definition: Statements on the stigma of being a SW. When to use: Reactions from others ("not a human being"; Ghanaians very insulting about it, accused of bringing AIDS to Ghana).
  Revision 1 (R1): Full definition: Statements on the stigma of being a SW. When to use: Reactions from others ("not a human being"; Ghanaians very insulting about it, accused of bringing AIDS to Ghana); if talk about SW job as "disgraceful" (implies public stigma).
  Revision 2 (R2): Full definition: Statements on the stigma of being a SW. When to use: Reactions from others ("not a human being"; Ghanaians very insulting about it, accused of bringing AIDS to Ghana); if talk about SW job as "disgraceful" (implies public stigma); includes perceived stigma (e.g., talk about hiding while doing this job).
Thematic Prevalence
Another critical dimension we needed to address was the overall relative
importance of codes, for if codes developed in the early stages turned out to
be the most important, additional interviews would yield seriously diminishing returns on the time (and money) invested in them.
Here, we define the importance of a code as the proportion of individual
interviews to which a code is applied. We make the assumption that the number of individuals independently expressing the same idea is a better indicator of thematic importance than the absolute number of times a theme is
expressed and coded. After all, one talkative participant could express the
same idea in twenty of her responses and increase the overall absolute frequency of a code application significantly.
The first question we asked with respect to code frequency was at what
point did relative frequency of code application stabilize, if at all? To assess
this, we used Cronbach’s alpha to measure the reliability of code frequency
distribution as the analysis progressed. We present alpha values between
each successive round of analysis, with each round containing an additional
set of six interviews (see Table 4). The data transition point from one country
to the next is also noted. For the Cronbach’s alpha, .70 or higher is generally
considered an acceptable degree of internal consistency (Nunnally and
Bernstein 1994).
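For readers who want to reproduce this kind of reliability check, the sketch below computes Cronbach's alpha on a codes-by-rounds frequency matrix. Treating rounds as "items" and codes as "cases" is our reading of the procedure; the exact matrix layout used in the study (and whether frequencies were cumulative) is an assumption, and the numbers below are invented.

```python
# Sketch of the reliability check: Cronbach's alpha over a codes-by-rounds
# frequency matrix. The layout (rounds as items, codes as cases) and the
# numbers are illustrative assumptions, not the study's actual data.

import numpy as np

def cronbach_alpha(matrix):
    """matrix: rows = codes (cases), columns = rounds (items)."""
    m = np.asarray(matrix, dtype=float)
    k = m.shape[1]                          # number of rounds
    item_vars = m.var(axis=0, ddof=1)       # variance of each round across codes
    total_var = m.sum(axis=1).var(ddof=1)   # variance of per-code totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

freqs = [[6, 5],   # code applied to 6 participants in round 1, 5 in round 2
         [4, 4],
         [1, 2],
         [0, 1]]
print(round(cronbach_alpha(freqs), 4))
```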
The data in Table 4 show that the alpha value is above .70 between the first
two sets of interviews and that reliability of code frequency distribution
increases as the analysis progresses, with the greatest increase occurring
when the third group of interviews (interviews 13–18) are added. The consistency of application frequency appears to hold even when adding the interviews from Nigeria. In fact, internal consistency is higher for all ten rounds
(i.e., both sites) combined (.9260) than for either of the five rounds of data
analysis exclusive to each site (Ghana = .8766, Nigeria = .9173). Also, when
we average code frequencies for each site and compare the two distributions,
reliability between the two data sets remains high with an alpha of .8267.
TABLE 4
Internal Consistency of Code Frequencies

Rounds               Interviews      Cronbach's Alpha

Ghana only
1–2                  1–12            .7048
1–3                  1–18            .7906
1–4                  1–24            .8458
1–5                  1–30            .8766

Ghana and Nigeria
1–6                  1–36            .8774
1–7                  1–42            .8935
1–8                  1–48            .9018
1–9                  1–54            .9137
1–10                 1–60            .9260

µGhana, µNigeria     1–30, 31–60     .8267

Another question we had concerned the frequency dynamics associated with high prevalence codes, that is, codes applied to many transcripts. Did, for example, themes that appeared to be important after six or twelve interviews remain important after analyzing all sixty interviews? Using the categorize function in SPSS, we transformed code frequencies into three groups:
low, medium, and high. Based on these data, we found that the majority of
codes that were important in the early stages of analysis remained so
throughout. Of the twenty codes that were applied with a high frequency in
round 1 of the analysis, fifteen (75%) remained in this category throughout
the analysis. Similarly, twenty-six of the thirty-one high-frequency codes
(84%) in the second round of analysis (i.e., after twelve transcripts) remained
in this category during the entire analysis.
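A rough equivalent of that low/medium/high grouping, assuming an equal-count (tertile) split like the one produced by SPSS's categorize function, is sketched below with pandas; the code names and frequencies are invented for illustration.

```python
# Sketch of the low/medium/high grouping using pandas' qcut (an equal-count,
# tertile split analogous to SPSS's categorize function). Invented data.

import pandas as pd

freq = pd.Series({"trust_necessary": 31, "reputation": 33, "personal_help": 37,
                  "religion": 15, "sex_work_stigma": 10, "talk_non_sw": 9},
                 name="participants_applied")
prevalence = pd.qcut(freq, q=3, labels=["low", "medium", "high"]).rename("prevalence")
print(pd.concat([freq, prevalence], axis=1))
```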
We showed above that high-frequency codes in the early stages of our
analysis tended to retain their relative prevalence over time. But were there
any high-frequency codes that emerged later in the analysis and that we
would have missed had we only six or twelve interviews to analyze? The data
in Table 5 address this question. After analyzing all sixty interviews, a total
of thirty-six codes were applied with a high frequency to the transcripts. Of
these, thirty-four (94%) had already been identified within the first six interviews, and thirty-five (97%) were identified after twelve. In terms of the
range of commonly expressed themes, therefore, very little appears to have
been missed in the early stages of analysis.
TABLE 5
Presence of High-Prevalence Codes in Early Stages of Analysis

Frequency after R10      Number      Percentage Present in R1    Percentage Present after R2
(Sixty Interviews)       of Codes    (First Six Interviews)      (First Twelve Interviews)
High                     36          94                          97
Medium                   39          56                          83
Low                      39          62                          82
DISCUSSION
Based on our analysis, we posit that data saturation had for the most part
occurred by the time we had analyzed twelve interviews. After twelve interviews, we had created 100 codes: 92% of the 109 codes ultimately developed for all thirty of the Ghanaian transcripts and 88% of the 114 codes developed across both countries and all sixty interviews. Moreover, four of the five new codes identified in the Nigerian data were not novel in substance but rather were variations on already existing themes. In short, after analysis of twelve interviews, new themes emerged infrequently, and progressively less often as analysis continued.
Code definitions were also fairly stable after the second round of analysis
(twelve interviews), by which time 58% of all thirty-six definition revisions
had occurred. Of the revisions, more than three-fourths clarified specifics
and did not change the core meaning of the code. Variability of code frequency appears to be relatively stable by the twelfth interview as well, and,
while it improved as more batches of interviews were added, the rate of
increase was small and diminished over time.
It is hard to say how generalizable our findings might be. One source of
comparison is consensus theory developed by Romney, Batchelder, and
Weller (1986). Consensus theory is based on the principle that experts tend to
agree more with each other (with respect to their particular domain of expertise) than do novices and uses a mathematical proof to make its case.
Romney, Batchelder, and Weller found that small samples can be quite sufficient in providing complete and accurate information within a particular cultural context, as long as the participants possess a certain degree of expertise
about the domain of inquiry (“cultural competence”). Romney, Batchelder,
and Weller (1986:326) calculated that samples as small as four individuals can render extremely accurate information with a high confidence level (.999) if they possess a high degree of competence for the domain of inquiry in question. Johnson (1990) showed how consensus analysis can
be used as a method for selecting participants for purposive samples.
While consensus theory uses structured questions and deals with knowledge, rather than experiences and perceptions per se, its assumptions and
estimates are still relevant to open-ended questions that deal with perceptions
and beliefs. The first assumption of the theory is that an external truth exists
in the domain being studied, that there is a reality out there that individuals
experience. Some might argue that in the case we presented, there is no external truth because we asked participants their opinions and perceptions, rather
than, say, asking them to identify and name species of plants. This is partially
true, but the individuals in our sample (and in most purposive samples/
subsamples for that matter) share common experiences, and these experiences comprise truths. Many women in our study, for example, talked about
fear of having their involvement in sex work exposed to the public, particularly via the media. Such fear and distrust is a reality in the daily lives of
these women and is thus reflected in the data.
The second and third assumptions within the consensus model are that
participants answer independently of one another and that the questions
asked comprise a coherent domain of knowledge. The former assumption
can be met by ensuring that participants are interviewed independently and in
private. The latter assumption can be achieved by analyzing data collected
from a given instrument compartmentally, by domain. Moreover, the data
themselves can provide insights into the degree to which knowledge of one
domain transfers to another. Themes that are identified across multiple
domains and shared among numerous participants could be identified, post
facto, as part of one larger “domain” of experience.
Our study included a relatively homogeneous population and had fairly
narrow objectives. This brings up three related and important points: interview structure, instrument content, and participant homogeneity. With respect to the
first point, we assume a certain degree of structure within interviews; that is,
a similar set of questions would have to be asked of all participants. Otherwise, one could never achieve data saturation; it would be a moving target, as
new responses are given to newly introduced questions. For this reason, our
findings would not be applicable to unstructured and highly exploratory
interview techniques.
With respect to instrument content, the more widely distributed a particular experience or domain of knowledge, the fewer the number of participants
required to provide an understanding of the phenomenon of interest. You
would not need many participants, for example, to find out the name of the
local mayor or whether the local market is open on Sunday. Even a small convenience sample would likely render useful information in this case. Conversely, as Graves (2002:169) noted, “Lack of widespread agreement among
respondents makes it impossible to specify the ‘correct’ cultural belief.”
It really depends on how you want to use your data and what you want to
achieve from your analysis. As Johnson (1998:153) reminds us, “It is critical
to remember the connection between theory, design (including sampling),
and data analysis from the beginning, because how the data were collected,
both in terms of measurement and sampling, is directly related to how they
can be analyzed.” If the goal is to describe a shared perception, belief, or
behavior among a relatively homogeneous group, then a sample of twelve
will likely be sufficient, as it was in our study. But if one wishes to determine
how two or more groups differ along a given dimension, then you would
likely use a stratified sample of some sort (e.g., a quota sample) and might
purposively select twelve participants per group of interest.
If your aim is to measure the degree of association between two or more
variables using, say, a nonparametric statistic, you would need a larger sample. Graves (2002:72-75) presented an example of a two × two contingency
table of height and weight of the San Francisco 49ers. Using a sample of 30,
Graves calculated a chi-square value of 3.75 for the association between
height and weight. This value is not quite statistically significant at the .05
level. However, when the sample size is doubled, but the relative proportions are kept constant, the chi-square value doubles to 7.5, which is highly
significant. Graves (2002:73) therefore recommended collecting samples of
between 60 and 120 for such correlative analyses (and, the larger the number,
the more ways you can cross-cut your data). 3
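Graves's point follows from the chi-square formula itself: if every cell count is multiplied by a constant while the proportions stay fixed, the expected counts scale by the same constant and the statistic scales linearly. The short check below illustrates this with a hypothetical two-by-two table (not Graves's actual height/weight data), using SciPy with the continuity correction turned off so the doubling is exact.

```python
# Numerical check of the scaling argument: hold the cell proportions of a
# hypothetical 2 x 2 table constant and double the sample size; the chi-square
# statistic doubles (continuity correction off so the doubling is exact).

import numpy as np
from scipy.stats import chi2_contingency

table_n30 = np.array([[10, 5],
                      [5, 10]])            # hypothetical counts, n = 30
table_n60 = table_n30 * 2                  # same proportions, n = 60

chi2_30, p_30, _, _ = chi2_contingency(table_n30, correction=False)
chi2_60, p_60, _, _ = chi2_contingency(table_n60, correction=False)
print(f"n=30: chi2={chi2_30:.2f}, p={p_30:.3f}")   # not significant at .05
print(f"n=60: chi2={chi2_60:.2f}, p={p_60:.3f}")   # significant at .05
```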
Our third point relates to sample homogeneity. We assume a certain
degree of participant homogeneity because in purposive samples, participants are, by definition, chosen according to some common criteria. The
more similar participants in a sample are in their experiences with respect to
the research domain, the sooner we would expect to reach saturation. In our
study, the participants were homogeneous in the sense that they were female
sex workers from West African cities. These similarities appear to have been
enough to render a fairly exhaustive data set within twelve interviews. Inclusion of the younger, campus-based women, however, did require creating a
few new codes relatively late in the analysis process, which may signal that
their lifestyles and experiences are somewhat distinct from their street- and
brothel-based counterparts, but as mentioned earlier, these “new” codes were
really just variations on existing themes. Structuring databases in a way that
allows for a subgroup analysis and that can identify thematic variability
within a sample is one way to assess the cohesiveness of a domain and its
relationship to sample heterogeneity.
A final issue we wish to raise pertains to codebook structure and the age-old “lumper-splitter problem.” Indeed, we have met qualitative researchers
whose codebooks contain more than five hundred codes (each with values!).
At the other extreme, a researcher may extract only four or five themes from a
large qualitative data set. Clearly, the perception of saturation will differ
between these two instances; as Morse (1995) pointed out, saturation can be
an “elastic” concept. At the crux of the discussion is how and when we define
themes and how we eventually plan to present our data. Ryan and Bernard
(2003) noted that the problem of defining a theme has a long history, and
many terms have been used to describe what we call themes. The authors go
on, however, to define themes as “abstract (and often fuzzy) constructs that
link . . . expressions found in text” and that “come in all shapes and sizes”
(p. 87). Ultimately, themes should be able to be linked to data points; that is,
one should be able to provide evidence of a given theme within the text being
analyzed. In our view, codes are different from themes, in that the former are
formal renderings of the latter. Codes are applied to the data (often electronically), whereas themes emerge from the data.
Ryan and Bernard (2004) asserted that when and how saturation is
reached depends on several things: (1) the number and complexity of data,
(2) investigator experience and fatigue, and (3) the number of analysts
reviewing the data. In addition, some researchers warn that completing analysis too soon runs the risk of missing more in-depth and important content
(Wilson and Hutchinson 1990:123). While true, we feel that conceptualizing
saturation primarily as researcher dependent misses an important point: How
many interviews or data points are enough to achieve one’s research objectives given a set research team? Without a doubt, anyone can find, literally,
an infinite number of ways to parse up and interpret even the smallest of qualitative data sets. At the other extreme, an analyst could gloss over a large data
set and find nothing of interest. In this respect, saturation is reliant on
researcher qualities and has no boundaries. The question we pose, however,
frames the discussion differently and asks, “Given x analyst(s) qualities, y
analytic strategy, and z objective(s), what is the fewest number of interviews
needed to have a solid understanding of a given phenomenon?” Could we,
for example, go back through our data and find new themes to add to the 114
existing ones? Sure we could, but if we used the same analysts and techniques and had the same analytic objectives, it is unlikely. The data are finite,
and the stability of our codebook would bear this out if the original parameters remained constant in a reanalysis.
We have discussed codebook development while processing data, as
would be expected in a grounded theory approach. But many codebook revisions are removed from the data collection process and consist of restructuring (usually hierarchically) the relationships between codes after code definitions have been finalized and code application completed. This is true in
our case; we first identified as many codes as we thought were relevant to our
objectives, finalized the codebook, and then discussed overarching themes.
The result of such a process is often a codebook that has several higher level
metathemes that may or may not serve as parent codes to child codes.
Such post hoc rearrangement does not affect saturation per se—since the
range of thematic content in the codebook does not change—but it will likely
influence how we think about and present our data. We should also point out
that a lumper may identify only a few metathemes in the first place and never
have enough codes to bother with ordering themes hierarchically or applying
a data reduction technique.
Regardless of how one derives metathemes from a data set, if it is these
overarching themes that are of primary interest to the researcher, saturation,
for the purpose of data presentation and discussion, will likely occur earlier
in the process than if more fine-grained themes are sought. Our postcoding
data reduction and interpretation process rendered four metathemes. It is difficult to say, post facto, whether we would have had enough context to have
derived these metathemes early on in the process, but in retrospect, looking at
the metathemes and their constituent code frequencies, enough data existed
after six interviews to support these four themes. The basic elements were
there. The connections among the codes that eventually made up the overarching themes, however, may not have been apparent in the early stages of
analysis, or we may have identified several other themes that dwindled in
importance as transcripts were added and the analysis progressed. Nonetheless, the magic number of six interviews is consistent with Morse’s (1994)
(albeit unsubstantiated) recommendation for phenomenological studies. Similar evidence-based recommendations can be found for qualitative research
in technology usability. Nielsen and Landauer (1993) created a mathematical
model based on results of six different projects and demonstrated that six
evaluators (participants) can uncover 80% of the major usability problems
within a system, and that after about twelve evaluators, this diagnostic number tends to level off at around 90%. 4
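Their result is commonly summarized with a simple cumulative-discovery model: if each evaluator independently uncovers a fixed proportion lambda of the problems present, then i evaluators uncover a proportion 1 - (1 - lambda)^i of them. The sketch below plugs in an illustrative lambda of 0.25, which is our assumption rather than Nielsen and Landauer's estimate, to show the diminishing-returns shape they describe.

```python
# Cumulative-discovery curve: with each evaluator independently finding a fixed
# proportion lam of the problems, i evaluators find 1 - (1 - lam)**i of them.
# lam = 0.25 is an illustrative assumption, not Nielsen and Landauer's estimate.

def proportion_found(i, lam=0.25):
    """Expected proportion of problems uncovered by i independent evaluators."""
    return 1 - (1 - lam) ** i

for i in (1, 6, 12):
    print(f"{i:2d} evaluators: {proportion_found(i):.0%} of problems found")
```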
Our experiment documents thematic codebook development over the
course of analyzing sixty interviews with female sex workers from two West
African countries. Our analysis shows that the codebook we created was
fairly complete and stable after only twelve interviews and remained so even
after incorporating data from a second country. If we were more interested in
high-level, overarching themes, our experiment suggests that a sample of six
interviews may have been sufficient to enable development of meaningful
themes and useful interpretations. We call on other researchers to conduct
similar experiments to see if, in fact, our results are generalizable to other
domains of inquiry (particularly broader domains), types of groups, or other
forms of data collection methods, such as focus groups, observation, or
historical analysis.
At the same time, we want to caution against assuming that six to twelve
interviews will always be enough to achieve a desired research objective or
using the findings above to justify “quick and dirty” research. Purposive
samples still need to be carefully selected, and twelve interviews will likely
not be enough if a selected group is relatively heterogeneous, the data quality
is poor, and the domain of inquiry is diffuse and/or vague. Likewise, you will
need larger samples if your goal is to assess variation between distinct groups
or correlation among variables. For most research enterprises, however, in
which the aim is to understand common perceptions and experiences among
a group of relatively homogeneous individuals, twelve interviews should
suffice.
NOTES
1. Ethics review committees also usually require that sample sizes be written into protocols, and
deviating from approved sampling procedures can involve time-consuming protocol amendments.
2. We chose sets of six because six is a divisor of thirty, and this number was the smallest recommended sample size we identified within the literature.
3. Note that although chi-square is highly useful for structured categorical responses, it is not
suitable for data collected from an open-ended instrument such as ours. Contingency tables
require that the frequencies in one cell are mutually exclusive and contrastive of other cells in the
table (i.e., an individual weighs either 200 lb or more or 199 lb and less, or a medical intervention
is either successful or not). In the case of open-ended questions, the presence of a trait within an
individual (e.g., expression of a theme) cannot be meaningfully contrasted with the absence of
this trait. That is, the fact that an individual does not mention something during the interview is
not necessarily indicative of its absence or lack of importance.
4. Nielsen and Landauer (1993) also calculated that the highest return on investment was
obtained with about five evaluators. It would be interesting to see if these monetary figures transfer to other domains of research.
REFERENCES
Bernard, H. R. 1995. Research methods in anthropology. Walnut Creek, CA: AltaMira.
⎯⎯⎯. 2000. Social research methods. Thousand Oaks, CA: Sage.
Bertaux, D. 1981. From the life-history approach to the transformation of sociological practice.
In Biography and society: The life history approach in the social sciences, ed. by D. Bertaux,
29–45. London: Sage.
Bluff, R. 1997. Evaluating qualitative research. British Journal of Midwifery 5 (4): 232–35.
Brody, S. 1995. Patients misrepresenting their risk factors for AIDS. International Journal of
STD & AIDS 6:392–98.
Byrne, M. 2001. Evaluating the findings of qualitative research. Association of Operating Room
Nurses Journal 73 (3): 703–6.
Carey, J., M. Morgan, and M. Oxtoby. 1996. Intercoder agreement in analysis of responses to
open-ended interview questions: Examples from tuberculosis research. Cultural Anthropology Methods Journal 8:1–5.
Centers for Disease Control and Prevention. 2004. AnSWR: Analysis Software for Word-based
Records, version 6.4. Atlanta, GA: Centers for Disease Control and Prevention.
Cheek, J. 2000. An untold story: Doing funded qualitative research. In Handbook for qualitative
research, 2nd ed., ed. N. Denzin and Y. Lincoln, 401–20. Thousand Oaks, CA: Sage.
Creswell, J. 1998. Qualitative inquiry and research design: Choosing among five traditions.
Thousand Oaks, CA: Sage.
Crosby, R., R. DiClemente, D. Holtgrave, and G. Wingood. 2002. Design, measurement, and
analytical considerations for testing hypotheses relative to condom effectiveness against nonviral STIs. Sexually Transmitted Infections 78:228–31.
Flick, U. 1998. An introduction to qualitative research: Theory, method and applications. Thousand Oaks, CA: Sage.
Fossey, E., C. Harvey, F. McDermott, and L. Davidson. 2002. Understanding and evaluating
qualitative research. Australian and New Zealand Journal of Psychiatry 36:717–32.
Geary, C., J. Tchupo, L. Johnson, C. Cheta, and T. Nyiama. 2003. Respondent perspectives on
self-report measures of condom use. AIDS Education and Prevention 15 (6): 499–515.
Glaser, B. and A. Strauss. 1967. The discovery of grounded theory: Strategies for qualitative
research. New York: Aldine Publishing Company.
Graves, T. 2002. Behavioral anthropology: Toward an integrated science of human behavior.
Walnut Creek, CA: Rowman & Littlefield.
Johnson, J. C. 1990. Selecting ethnographic informants. Thousand Oaks, CA: Sage.
⎯⎯⎯. 1998. Research design and research strategies. In Handbook of methods in cultural
anthropology, ed. H. R. Bernard, 131–72. Walnut Creek, CA: AltaMira.
Kuzel, A. 1992. Sampling in qualitative inquiry. In Doing qualitative research, ed. B. Crabtree
and W. Miller, 31–44. Newbury Park, CA: Sage.
LeCompte, M., and J. Schensul. 1999. Designing and conducting ethnographic research. Walnut Creek, CA: AltaMira.
MacQueen, K. M., E. McLellan, K. Kay, and B. Milstein. 1998. Codebook development for
team-based qualitative analysis. Cultural Anthropology Methods Journal 10 (12): 31–36.
McLellan, E., K. M. MacQueen, and J. Niedig. 2003. Beyond the qualitative interview: Data
preparation and transcription. Field Methods 15 (1): 63–84.
Miles, M., and A. Huberman. 1994. Qualitative data analysis. 2nd ed. Thousand Oaks, CA:
Sage.
Morse, J. 1994. Designing funded qualitative research. In Handbook for qualitative research, ed.
N. Denzin and Y. Lincoln, 220–35. Thousand Oaks, CA: Sage.
⎯⎯⎯. 1995. The significance of saturation. Qualitative Health Research 5:147–49.
Nielsen, J., and T. K. Landauer. 1993. A mathematical model of the finding of usability problems. Proceedings of INTERCHI 93:206–13.
Nunnally, J. C., and I. H. Bernstein. 1994. Psychometric theory. 3rd ed. New York: McGraw-Hill.
Patton, M. 2002. Qualitative research and evaluation methods. 3rd ed. Thousand Oaks, CA:
Sage.
Paulhus, D. 1991. Measurement and control of response bias. In Measures of personality and
social psychological attitudes, Vol. 1, ed. J. Robinson, P. R. Shaver, and L. S. Wrightsman,
17–59. New York: Academic Press.
Romney, A., W. Batchelder, and S. Weller. 1986. Culture as consensus: A theory of culture and
informant accuracy. American Anthropologist 88:313–38.
Rubin, H., and I. Rubin. 1995. Qualitative interviewing: The art of hearing data. Thousand
Oaks, CA: Sage.
Ryan, G., and H. R. Bernard. 2003. Techniques to identify themes. Field Methods 15:85–109.
⎯⎯⎯. 2004. Techniques to identify themes in qualitative data, http://www.analytictech.com/
mb870/ryan-bernard_techniques_to_identify_themes_in.htm (accessed September 2004).
Sandelowski, M. 1995. Sample size in qualitative research. Research in Nursing and Health
18:179–83.
Schensul, S., J. Schensul, and M. LeCompte. 1999. Essential ethnographic methods. Walnut
Creek, CA: AltaMira.
Schwarz, N. 1999. Self-reports: How the questions shape the answers. American Psychologist
54:93–105.
Trotter, R., II. 1991. Ethnographic research methods for applied medical anthropology. In Training manual in applied medical anthropology, ed. C. Hill, 180–212. Washington, DC: American Anthropological Association.
Trotter, R., II, and J. Schensul. 1998. Methods in applied anthropology. In Handbook of methods
in cultural anthropology, ed. H. R. Bernard, 691–736. Walnut Creek, CA: AltaMira.
Weinhardt, M., A. Forsyth, M. Carey, B. Jaworski, and L. Durant. 1998. Reliability and validity
of self-report measures of HIV-related sexual behavior: Progress since 1990 and recommendations for research and practice. Archives of Sexual Behavior 27:155–80.
Weir, S., R. Roddy, L. Zekeng, and K. Ryan. 1999. Association between condom use and HIV infection: A randomised study of self reported condom use measures. Journal of Epidemiology and Community Health 53:417–22.
Wilson, H. S., and S. Hutchinson. 1990. Methodologic mistakes in grounded theory. Nursing
Research 45 (2): 122–24.
Zenilman, J., C. Weisman, A. Rompalo, N. Ellish, D. Upchurch, E. Hook, and D. Celentano.
1995. Condom use to prevent STDs: The validity of self-reported condom use. Sexually
Transmitted Diseases 22:15–21.
Greg Guest is a sociobehavioral scientist at Family Health International, where he conducts research on the sociobehavioral aspects of reproductive health. His most recent
work deals with HIV/AIDS prevention and behavioral components of clinical trials in
Africa. Dr. Guest also has an ongoing interest in the ecological dimensions of international health and the integration of qualitative and quantitative methodology. His most
recent publications include “Fear, Hope, and Social Desirability Bias Among Women at
High Risk for HIV in West Africa” (Journal of Family Planning and Reproductive Health
Care, forthcoming), "HIV Vaccine Efficacy Trial Participation: Men-Who-Have-Sex-With-Men's Experience of Risk Reduction Counseling and Perceptions of Behavior
Change” (2005, AIDS Care), and the edited volume Globalization, Health and the Environment: An Integrated Perspective (2005, AltaMira). He is currently co-editing (with
Kathleen MacQueen) the Handbook for Team-Based Qualitative Research.
Arwen Bunce is a senior research analyst and qualitative specialist at Family Health
International in North Carolina. Her research interests include the intersection of reproductive health and human rights, and the impact of sociocultural factors on women’s
health and well-being. Her previous research includes work on access to medical care for immigrants and on the construct of self-rated health. Her publications include "The Assessment of Immigration Status in Health Research" (with S.
Loue, Vital Health Statistics, 1999) and “The Effect of Immigration and Welfare Reform
Legislation on Immigrants’ Access to Health Care, Cuyahoga and Lorain Counties”
(with S. Loue and M. Faust, Journal of Immigrant Health, 2000).
Laura Johnson is a research associate at Family Health International. She performs
qualitative and quantitative data analysis on a variety of research topics, including
youth and family planning, reliability of self-reported data, and costs and delivery of
family planning services. Her recent publications include “Respondent Perspectives on
Self-Report Measures of Condom Use” (with C. Waszak Geary, J.P. Tchupo, C. Cheta,
and T. Nyama, AIDS Education and Prevention, 2003), "Excess Capacity and the Cost of Adding Services at Family Planning Clinics in Zimbabwe" (with B. Janowitz, A. Thompson, C. West, C. Marangwanda, and N. B. Maggwa, International Family Planning Perspectives, 2002), and "Quality of Care in Family Planning Clinics in Jamaica: Do Clients and Providers Agree?" (with K. Hardee, O. P. McDonald, and C. McFarlane,
West Indian Medical Journal, 2001).
Working Toward the Common Good:
An Online University’s Perspectives on Social Change
Many institutions of higher education in the United States, and indeed around the world, are reaching out to their neighborhoods as members of the community, contributing to the common good through research, service, and educational opportunities. This descriptive study explores the understandings and practices around this kind of activity at one university whose mission is to create positive social change. While current literature indicates that researchers are
examining campus-community engagements, very little research has been done on community
engagement when the institution works primarily online and the communities involved are
geographically dispersed and dependent on individual choices and preferences. The goal of the
study was to discover how members of one such online university currently understand and
practice the mission, thereby providing a baseline of understanding for curriculum planning and for mentoring student research projects and service activities. Through a series of interviews
conducted with faculty members, students, and alumni, several themes were identified. These
results give rise to several implications for the university in developing its community outreach,
along with some suggestions for further research. The discussion of findings for this university
might have applicability to other institutions of higher education, both online and traditional,
with a similar commitment to the community.
Background to the Study
With the advances in online education and the significant number of institutions that maintain campuses in multiple locations, demonstrating mission fulfillment has become more challenging for colleges and universities. The reach of the university is broader in such programs, and mission efficacy relies on more than established relationships with constituency groups local to the institution. For online education providers in particular, mission fulfillment must rely upon intentional promotion within
curricular structures, student services, and philosophical expectations that allow university
members to carry out the institution's mission in their own communities. Finding references that speak to mission fulfillment in online and geographically dispersed programs is particularly difficult, given the limited number of writings that deal with this topic. In fact, a
review of the literature for mission and online learning finds a greater focus on how the decision
to deliver online instruction can become part of the institution’s mission, not upon how the
existing mission can be assured through online delivery (Checkoway, 2001; Johnson et al.,
2014; Levy, 2003). The complexity of understanding what is meant by “positive social change”,
the mission for the university in this study, adds to the difficulty of using traditional images of
“community” within mission fulfillment.
Defining and Describing Social Change
The term "social change" has been defined and analyzed across the academic disciplines, reflecting the particular perspective and research agenda of each discipline. In one study, a
proposal for social change in schools (Jean-Marie, Normore, & Brooks, 2009), the authors
reported that their literature review was aided by such identifiers and organizers as equity,
diversity, social justice, liberatory education, race, gender, ethics, urban school, global
education, critical pedagogy, oppression, social change, social development, and social order,
among others. From the review of the literature around these key terms, Jean-Marie, Normore,
and Brooks see social change as bringing about a “new social order” in which marginalized
peoples would have the same educational and social opportunities as those more privileged.
As the list of identifiers above suggests, the concepts of social justice and equity have
been significant in discussions of social change in education, psychology, and social and cultural
studies (see also Curry-Stevens, 2007; Drury & Reicher, 2009; Moely, Furco, & Reed, 2008; and
Peterson, 2009). The writing and advocacy of Paulo Freire, Ivan Illich, civil rights leaders, and
feminists during the last half of the 20th century influenced these understandings and helped
shape the particular emphases of social change in recent decades.
Hoff and Hickling-Hudson (2011) sought descriptors of social change that would be
appropriate for education and noted that Farley, writing in 1990, offered an understanding of
social change as “alterations in behaviour patterns, social relationships, institutions, and social
structure over time” (Hoff & Hickling-Hudson, 2011, 189). However, Hoff and Hickling-Hudson
found this inadequate from an educational point of view because of its value-neutral stance. They
preferred a definition that would give social change a “connotation of social progress or social
development beneficial to society” (189). For this reason, they chose the definition proposed by
Aloni in 2002, which places social change as challenging “trends of discrimination, exploitation,
oppression, and subjugation displayed by groups who regard themselves as favored and, thus,
take privileges for themselves and deprive other groups of the right to a dignified life” (Hoff &
Hickling-Hudson, 2011, 189). In other words, the change in social change is defined here in
positive and value-laden terms that relate more particularly to the agents of social change than to
others they might want to change. They were careful to add that this cannot be cast in universal
or absolute terms, but it is dependent on particular contexts and circumstances (see also Itay,
2008, writing in political science).
Armstrong and Miller (2006), working in continuing education and innovation studies, respectively,
identified influences on the meaning of social change arising from new political and social
realities. For instance, during the economic recession of the late 1970s and early 1980s,
education was seen to be increasingly determined by the needs and forces of the market and less
by concerns for equity and social justice, a conclusion suggested also by Atkinson (2010) in
adult education and Feldman (2001) in economic history. However, we witness today a
movement again toward social justice and equity issues (Ryan & Ruddy, 2015), brought about in
part by Occupy activism (e.g., Cortez, 2013), current political debates, experience in campus
outreach programs (e.g., Patterson, Cronley, West, & Lantz, 2014), social media (e.g., Taha,
Hastings, & Minei, 2015), and exposure to other cultures in a globalized world (e.g., Bossaller,
Frasher, Norris, Marks, & Trott, 2015).
Armstrong and Miller also noted that increasing global and international contact has led to revisions in the meaning of social purpose as narrowly defined in Western terms and contexts, and to the "grand narrative" of modernism being replaced by less absolute and dogmatic postmodern discourses, an idea echoed in adult education by Holst (2007). As a consequence,
projects with a social change purpose are considered to be more effective when local community
partners participate in determining needs and shaping the outcomes collaboratively (Bahng,
2015; Lees, 2007; Lewis, 2004; Nichols, Gaetz, & Phipps, 2015; Silverman & Xiaoming, 2015).
Brennan (2008) added that the social context in which higher education operates today
calls for universities to be responsive in a number of ways to their constituent societies. One of
these responses, playing “a role in constructing the ‘just and stable’ society”, returns the social
change mission to the goals of equity, which he suggested includes equitable access to the
credentials needed to participate as equals in the new societal realities and guarantees of
autonomy and freedom. Furman and Gruenewald (2004), working in educational administration,
described yet another new influence on understandings of social change: ecological concerns.
Their argument was that “environmental crises are inseparable from social crises” (48), primarily
because they usually have to do with the misuse of racial and economic power.
Overall, it is apparent that social change and social purpose have been focused primarily
on equity issues, although their working definitions, both implicit and explicit, reflect a spectrum
of meanings ranging from simple activism around race, gender, and poverty, for instance, to
more nuanced understandings of the impact of technology developments, diversity,
globalization, as well as the ecological environment. More recently, this focus has received
renewed attention as the gap between rich and poor is seen to be widening and the middle class
to be diminishing (Gillis & McLellan, 2013; Goldberg, 2012; Guy, 2012).
It is important to keep in mind that “social change” can be either an action or a result,
product or process, noun or verb. While educators need a clear end-in-view for their work with
students, processual understandings of social change may serve them better in planning for the
kinds of learning experiences that will bring about the desired results. The central concept of "conscientization" in Freire's writings on social change speaks as much to process as to product (Hickling-Hudson, 2014), and the use of the concept of "transformation" rather than "results" in
reporting on social change projects (e.g., Sewell, 2005; Silverman & Xiaoming, 2015) further
supports this.
One of the most frequently made distinctions in social change is that between charity and
helping on the one hand and change and justice on the other. In many cases, the distinction is
assumed (e.g., Moely, Furco, & Reed, 2008); in other cases, it is elaborated. In simplest terms,
charity work sets out to help someone; change efforts aim to modify social arrangements toward
equity (Mitchell, 2008). In cultural and social studies, charity has been identified as
“transactional” service; change and social justice as “transformational” (Peterson, 2009, 541,
545). From a social work perspective, charity seeks to discover the immediate elements of a
particular individual’s needs and deal with them; change investigates the wider picture of all
those with similar needs and how the whole group might be helped by systemic change (Allen-Meares, 2008). In effect, charity addresses the symptoms of a social injustice; change seeks to remove the root causes (Allen-Meares, 2008; Mitchell, 2008; Peterson, 2009). Participants in the former can usually see immediate results for their efforts; those in the latter work for the long term and may never see final results, or at least they will discover that results are usually not
immediately apparent (Mitchell, 2008). At its worst, charity may be patronizing, perpetuating
rather than overcoming the differential in power—the “us versus them” dichotomy—which may
have brought about the need in the first place. At its best, change may not only amend the
situation of the needy but also strengthen authentic relationships among all those involved as it
redistributes and shares power more equally between those who are privileged and those who are
not. In the reciprocity between the needy and change agents, each benefits although in different
ways (Peterson, 2009).
Writing within the context of human services, Netting, O’Connor, & Fauri (2007) picked
up on many of the distinctions between charity and change but put them in an entirely different
light. They replaced charity with focused or peripheral change; that is, advocacy for individuals
providing “relatively short-term interventions designed to gain access to, utilization of, or
improve the existing service delivery system” (60). These interventions are critical in
operationalizing an organization’s mission in that they focus on implementing and achieving the
intent of particular policies and processes. They are usually manifested as case advocacy—
working for “individual clients whose rights have been violated and/or whose access to benefits
have been denied" (63). Netting, O'Connor, and Fauri also replaced "change" with
“transformation” described as “long-term, structural interventions designed to change the status
quo at broad community, state, regional, or even national level” (60). These kinds of
interventions may involve “social movement organizations, campaigns for social justice . . . and
coalitions with system reform goals” (60). They may threaten the status quo and are usually
manifested as cause advocacy—working in “an arena, locus of change, or target,” which may be
“an organization . . . legislation, law, and/or community or other large system” (63).
While the literature in general clearly weighs in on the side of change over charity, some
writers have raised points in favor of taking a more holistic view of social change that includes
both charity and change. Netting, O’Connor, and Fauri (2007), for instance, proposed that
because both case advocacy and cause advocacy fall within the professional roles of human
services providers, both must be planned for and their success evaluated. One argument in favor
of a more holistic view is that charity may be needed as a necessary first step to improve
immediate and pressing conditions. Change can then subsequently address the policies and social
institutions that need reform and/or revitalization (Hoff & Hickling-Hudson, 2011). This
argument takes on merit when one considers that change may take time whereas charity may
bring some immediate relief. In a similar vein, charity may also be considered an important first
step to build trust between social change activists and those for whom they work, which, once
established, can be a basis on which to take later steps collectively toward political change
(Peterson, 2009).
Over two decades ago, Boyer claimed, "At no time in our history has the need been greater for connecting the work of the academy to the social and environmental challenges beyond the campus" (1990, xii). Duderstadt, a decade later, noting some of the pitfalls to an institution of higher learning that arise from the expectation that it will "address social needs and concerns", nevertheless declared that "it is clear that public service must continue to be an
important responsibility of the American university” (2000, 2003, 146). For the purpose of this
study, when individuals associated with colleges and universities find ways to serve their local
communities and contribute to the common good, their efforts are identified as contributing to
positive social change.
Research Method
The goal of this study was to explore and analyze the current state of understanding and
practice around social change at one online university with geographically dispersed students and
faculty. We selected a qualitative research design for this study in an effort to get at the
understandings of faculty members, students, and alumni in their experience of social change
processes and how they make meaning out of those experiences (see Creswell, 2003). The site
selected for the project is a comprehensive, regionally accredited, for-profit institution originally founded in 1970 as a distance learning provider. It currently enrolls approximately 60,000
students. The institution is an appropriate site for this research in that creating positive social
change was the university’s mission from its founding. The mission statement is prominently
displayed in university publications, shared widely with new faculty members and students, and
frequently discussed in online forums and other venues.
Although the researchers considered both focus group and individual interviews as data collection methods, we ultimately decided that individual interviews would provide the richest information
and would also permit comparisons among interview groups. Informed by both the literature
review and the goals for the study, the researchers prepared an interview guide, cross-referencing the interview questions against the research goals. (The interview
questions are provided in Appendix A.) A research team, consisting of six faculty members,
completed inter-rater reliability training and piloted the interview guide. The study was approved
by the university's Institutional Review Board, and appropriate measures were taken to preserve the confidentiality of responses: interviewers signed confidentiality agreements, and pseudonyms were substituted for real names in any reporting of the study. A small gift card ($50
for Amazon.com) was sent to participants in appreciation for their time and willingness to be
interviewed.
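The study does not report which agreement statistic, if any, was computed during the inter-rater reliability training. As a purely illustrative sketch (the Python function, the coder labels, and the theme codes below are hypothetical, not the authors'), Cohen's kappa is one common way to check how far two coders' labels over the same transcript segments agree beyond chance:

from collections import Counter

def cohens_kappa(labels_a, labels_b):
    # Chance-corrected agreement between two coders who labeled the same segments.
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b[c] for c in set(labels_a) | set(labels_b)) / (n * n)
    return 1.0 if expected == 1 else (observed - expected) / (1 - expected)

# Hypothetical labels from two coders for five transcript segments.
coder_1 = ["helping", "ripple", "helping", "education", "helping"]
coder_2 = ["helping", "ripple", "education", "education", "helping"]
print(round(cohens_kappa(coder_1, coder_2), 2))  # 0.69 for this toy example

Simple percent agreement (0.80 in this toy example) overstates reliability because it ignores agreement expected by chance, which is why a chance-corrected index is often preferred.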
Working in pairs, the researchers interviewed three groups of participants selected via
purposeful, referral sampling from the institution’s faculty, students, and alumni. Interviewees
were identified by their colleagues, teachers, or mentors as active participants in social change activities and as able and willing to articulate their understandings in a considered way. Eight current students, ten faculty members, and twelve graduates (including five very recent graduates) made up the pool of interviewees.
Interviews were conducted via telephone and transcribed verbatim from digital recordings. Within each pair of researchers, there was a lead interviewer and an observer who
debriefed after each interview. The observer also kept interview notes and verified interview
transcripts; member checks were also used to confirm the accuracy of the transcripts. Two
analyses of the responses were undertaken, concurrently but independently, to provide different
perspectives for comparison. The analysis began with the interview transcripts, looking for
recurring ideas and common themes. An initial, open coding identified key participant responses, followed by a second coding that labeled the nature of each emerging theme. Following the second coding, the researchers developed working definitions for each theme. The interviews
were coded a third and final time, during which the working definitions provided a framework
for confirming the code, and illustrative quotes were noted.
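The paper does not describe the bookkeeping used across these three coding passes. As a minimal sketch under assumed names (Interview, CodedSegment, THEME_DEFINITIONS, and the sample data are all illustrative, not drawn from the study), coded segments and working theme definitions can be kept in simple data structures and tallied by participant group:

from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class CodedSegment:
    theme: str   # label assigned during the second coding pass
    quote: str   # illustrative quote noted during the third pass

@dataclass
class Interview:
    participant: str                      # pseudonym, per the confidentiality procedures
    group: str                            # "student", "faculty", or "alumni"
    segments: list = field(default_factory=list)

# Hypothetical working definitions of the kind developed after the second coding pass.
THEME_DEFINITIONS = {
    "focus_on_other": "Social change framed as improving life for other people.",
    "ripple_effect": "Change beginning with one person and expanding outward.",
}

def theme_counts_by_group(interviews):
    # Count how many interviews in each group contain each theme at least once.
    counts = defaultdict(lambda: defaultdict(int))
    for interview in interviews:
        for theme in {seg.theme for seg in interview.segments}:
            counts[theme][interview.group] += 1
    return counts

sample = [
    Interview("Brian", "faculty",
              [CodedSegment("focus_on_other", "...improve the life or lives of others.")]),
    Interview("Kim", "student",
              [CodedSegment("ripple_effect", "...make some kind of ripple.")]),
]
for theme, by_group in theme_counts_by_group(sample).items():
    print(theme, dict(by_group))

A tally of this kind makes it easy to see which themes recur across the student, faculty, and alumni groups and to retrieve the illustrative quotes noted in the final pass.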
Coded Analysis
Significant Common Themes
When interviewees were asked to define social change and provide examples from their
own experiences, their answers and their responses to follow-up probes yielded richly nuanced and diverse concepts, spanning a wide spectrum of ideas and reflecting the broad sweep of the
university’s official definition. Themes emerged about the focus on others, the charitable nature
of social change, the way small actions in social change could expand from one or a few to
many, and about the central role of education in changing perspectives and bringing about social
change.
Focus on the “Other”
Most participants gave definitions of social change that were “other”-focused; that is,
social change was seen as an important goal in order to improve some aspect of life for other
people, but not necessarily for themselves. Others might need to benefit from social change, but
the participants in this study did not typically include themselves in the change population. For
instance, Brian, a faculty member, stated that social change “is anything and everything an
individual does to improve the life or lives of others.” In some cases, those “others” had unmet
personal needs: their quality of life was seen as insufficient or their wellbeing was somehow in
question.
Few participants first thought of social systems or community-at-large initiatives as they
discussed social change, but they often added the larger community in an expansion of their
definition. In some cases, this seemed to be added almost as an after-thought. Ray, an
undergraduate faculty member, defined social change “as a group of people who are getting
involved, who are giving of themselves, whether it be in terms of time or money or effort or all
of the above, to make an impact on both individual people’s lives and society as a whole”. Other
respondents took in the larger community immediately. Arsi, for instance, an alumna whose
work focused on the intergenerational transfer of learning, spoke of that expansion to the wider
community in these terms: “[S]ocial change has a lot to do with making a contribution to society
that will not only improve individuals’ lives but will collectively improve the environments in
which they live, and that can expand beyond just personal agendas.” Only a few respondents
spoke specifically of social change within the boundaries of democracy and related political
principles, but the possible expansive nature of social change was a clear theme: “Social
change,” stated faculty member Christine, “is tinkering with the world.”
Helping and Altering
Consistent with the focus on “the other” and with a framework that centers on individual
needs, most participants used language associated with helping to describe the actions that
support social change. Typical definitions included words such as “contribute”, “serve”, “give”,
or “provide”, reinforcing the idea that social change is something that participants initiated for
another individual or set of individuals with specific needs. Pam, an alumna who works in
mental health, spoke of “project(s) that will kind of better the populations that they’re serving,”
while Brian spoke of disadvantaged people and the need to “give them the dignity” of a job.
Marg, another alumna, took up the idea of service: “You have something that you see you can
start off with service projects or volunteering and charity work and all of that,” but she extended
this to include a larger context: “I recognize(d) the social injustices taking place everywhere, in
many communities . . .” And Diane, an MBA alumna, stated that “social change is about helping
every individual achieve their potential so that they can reach down and help the next one up.”
In addition to using language that anchored social change within the concept of helping,
many interviewees described their own social change actions in terms of the desired effect on
others. They used terms such as “(re)build”, “develop”, “empower”, “improve”, and “modify” to
describe the outcomes of their work for social change. Tom, a faculty member with philosophical
groundings in the quality movement, strives to encourage people to build on the positive. “Social
change is making something better” and encouraging that movement forward.
The Ripple Effect
The vast majority of respondents noted that a single person can be responsible for social
change: only two of the 30 respondents indicated that a “critical mass” (Eileen’s term, further
arbitrarily defined as 30% of a population by Diane) was necessary to effect significant social
change. At the same time, most participants acknowledged that although social change can begin with a single individual, his or her efforts require expansion. Many participants used the term "ripple" to
note the movement from the single person to a group of people, and then to a larger impact. Kim,
a student who came to the university precisely because of the social change mission, is a teacher.
She instructs her own students that “whatever they do should be important to them and make
some kind of ripple.” Alumnus Charlie called it a “gravitational wave,” as in physics, that
ultimately impacts the farthest reaches of the universe.
For the most part, social change was seen in terms of making progress. Paige noted the
idea of “paying it forward” and other interviewees used the concept of moving forward in a
positive way as part of their social change definition. Over half the interviewees thought that
both accentuating the positive and removing the negative were involved in social change, but
nearly as many indicated that a focus on the positive was crucial for social change. Only one
respondent indicated that the single goal of social change was to remove a negative. The notion
of social change by an individual, often for the benefit of another individual, was prevalent.
Changing Perspectives and the Role of Education
Participants in each interview group identified education as an important feature of how
they understand and approach social change. Alice, an alumna who had a successful military
career and now focuses her efforts on teaching, put it this way: “Social change to me is being
able to, I guess, implement or work hand-in-hand with students to help them further their
education so that we help our community become a better community. It's making sure that
education is the priority as well as being concerned about the community and the economic
status of the community and the children in the schools.”
Moreover, each group had representatives who spoke of “transformations” in perspective
as a key feature of social change. Brenda, an alumna who studied aging women, linked social
change to changing perspectives: “Social change is taking the norms, the mindset, the
expectations, the assumptions of a society and beginning to shift them, hopefully in a positive
way.” Wendy, an alumna who has started her own school, acknowledged that her hope and her
goal “is that kind of the change that the school is in our community--that it goes beyond just the
children and the families here, but actually that we start this new conversation of what education
can be.” Margaret, a faculty member in human services, spoke of beginning social change at a
“very grassroots level, where you can shape a person’s values, or maybe their attitude, maybe
their beliefs . . . which in turn, basically diffuses out to other aspects of society.”
Secondary Themes
Reliance on Context
The task of articulating a definition of social change was not simple for most participants.
In terms of elaborating on social change definitions and examples, some participants noted the
importance of context. Becky, a doctoral student in Public Policy and Administration, focused on
context: “Let’s see. Well, that depends on the project. It can be an individual that’s changed
something in their life or it could be a process that’s changed or it could be a policy. That’s hard
without knowing an example.”
Social Change and Benefit to the Initiator
“Who is social change for?” As respondents considered the beneficiaries of social
change, some admitted that social change action promotes benefit for the change initiator.
Paige noted that the first thing that changes in social change is often the self: “Well, I hope first,
before anything, we’re changing our lives, who we are, what we believe, and what we think. You
have to do that first before you can actually make a difference in the community.” Charlie, an
alumnus who has founded a business to promote cross-cultural communications, spoke similarly
of the need to build the “self” in order to effect social change: “And by doing that I enrolled
[here] and hoped to develop those strengths in myself, which gets back to the Gandhi point that
you become the change you want to see by empowering myself, educating myself, engaging
myself . . .” Arsi proposed that social change serves a dual purpose. “I think it’s not only for the
person that initiates the social change but I think it’s for a broader audience and it can include the
community.” Ray stated that this is a “central truth to the human experience. When you help
people, you personally benefit, and when you help enough people or you get together a large
enough group, you can help society benefit.” Christine admitted, “I think very selfishly. It’s
definitely for myself because of all the things that go with it, but I think the goal is that there will
be some value or benefit for us universally.”
Discussion and Implications
The participants in this study were focused on others, an admirable quality, enacting the
“servant-leadership model” (Greenleaf, 1977, among others) for improving organizational
effectiveness and creating change. A few of the participants acknowledged benefit to themselves
in engaging in social change activity, usually in the form of personal satisfaction that can come
about by doing something good for others. One interviewee expressed the even more cynical
view that all we do is tainted by a level of self-interest. Social justice and equity were seen by some to be objectives for social change action, but in the form of bringing about for others what they themselves already possess. A few spoke of supporting democracy by their actions, where
all work together for the common good.
The enthusiasm and momentum around helping others were very notable in this group of interviews. By itself, however, a focus on improving conditions for another may not be
sufficient for thorough-going social change. Under some conditions, especially when root causes
are not addressed, it can be experienced as disempowering and patronizing by the recipients,
creating two levels in a community—the helpers operating from a privileged position and the
helped operating from a position of need and deficit—and neither level is transformed by the
activity. Importantly, it may not always reveal that one might be implicated as a member of a
group that could very well be the source of the problem being addressed.
As indicated, one of the persistent themes in the scholarly discussion of social change is
the clear distinction between charity and helping on the one hand and change and justice on the
other. In the coded analysis made of the definitions and descriptions of social change, the theme
“charity and alteration” was one of the most prominent. It was described as serving or helping
others so that their lives and possibly the lives of an ever widening circle will be changed. The
analysis found that the participants in this study tended to speak more often in terms of “charity”
than “change”.
Real-life examples of social change activity, however, are seldom as clear cut as descriptions of charity and change in the literature suggest. Charity predominated in the descriptions and actions of the participants: social change activity was conducted by individuals or small groups engaged in the same effort and focused on a specific needy group, and most of the change was seen as making a difference in the lives of individuals being served rather than in the systemic structures that make up society and its institutions. Many participants nevertheless saw their activities as contributing to change in a larger context. Much of this change
was envisaged in terms of hopeful thinking about the long-term potential and “ripple effect” of
their efforts, rather than in terms of the impact of deliberately planned or collaborative action.
The larger changes were considered post hoc effects rather than outcomes planned from the
beginning. Not apparent were strategies based on an analysis of systemic flaws and developed to
address root causes, bringing all players into the planning, and being deliberate about making
long-term and sustainable changes.
The analysis that looked for common themes in the responses produced encouraging
news for those who work in higher education. Both faculty members and students spoke of the
transformative power of education to change perspectives and attitudes. They spoke of the
power of class discussion forums, learning from different others in classes, curricular topics that
specifically addressed needs and opportunities for social change activity, practical projects
undertaken as class assignments, and the example of faculty members and other students who
were engaged in social change activity. Faculty members also spoke of the importance of one-on-one mentoring of students who were in the process of developing a change project.
Suggestions for Further Research
This study opens up several questions that suggest areas for additional research, some arising from expanding and strengthening the original study, and others following up on leads from this study. Among the first set of questions is how widespread these views and understandings of social change are within the context of this university. This interview study, with its referral sampling approach, might usefully be expanded to the whole university community for a more thoroughgoing data set, perhaps employing a survey to provide quantitative measures of the strength of the many responses represented by the sample in the original study. Then too, given that this study was conducted in one institution whose mission is to create positive social change, what would other institutions, traditional and online, find if they were to conduct similar investigations? Such a question matters for any institution that wants to more fully realize its social responsibility in community outreach, because this kind of inquiry provides an initial sense of some of the common themes, with their strengths and weaknesses, that might already exist in the institution.
As a follow-up on leads from this study, studies of teaching and learning strategies might help determine which are most effective for expanding ideas of charity to include a change dimension, and for preparing students in the skills needed for social change as efforts toward justice and equity and/or empowerment and agency.
Limitations of the Study
This was an exploratory study whose purpose was to discover the understanding and
practice of positive social change as a component of the mission of a large U.S. online
university. The sample was small and purposefully selected for the participants' involvement in social change activities. As a result, it comprised mostly participants who live and study in the United States. There was a general intent to include participants of diverse racial and ethnic backgrounds and genders. The end result is a range of values, along with diversity in culture, gender, and ethnicity, in this group of participants, and an equally wide-ranging number and variety of contexts and opportunities for social action being addressed by them. While this is a limitation of the study, it is also representative of the
complexity of understanding social change and those who are active within it.
Missing from the research design is the involvement of a designated external community
in the project. Our identified “community” includes the faculty, students, and alumni of the
institution. As faculty members, the researchers are part of this community and we relied upon
other faculty, our students, and our alumni to help identify participants, perfect the interview
guide, provide a debriefing after each interview, and support member checks. The University’s
external communities, less well defined, are all the communities in which our students, faculty,
and alumni practice positive social change. Creating touch points with all external community constituency groups, a challenge even for land-based institutions, would have been prohibitive for a study of this size.
Conclusion
The findings of this study indicate that faculty, alumni, and students at this particular
institution show strong passion to make changes for "the greater good." The study also shows that for
those who are actively involved, some of the distinctions made in the literature do not hold in
their understandings and practice of social change. Activities that at first glance might seem to
fall into the category of charity were also undertaken with the expectation of a “ripple effect”
that would manifest itself as change in the broader society. “Helping” and “altering” concepts
were used together to describe the purpose of an activity. In other words, service activities were
often understood to be aiming for social justice or self-efficacy, which takes them out of the realm of simply helping (a potentially disempowering relationship) and into the domain of real change, moving from a focus on a single individual or group of individuals toward creating an impact through
these individuals on the wider community. This move from charity to change was not always
fully understood by participants, and it could be strengthened further by preparing students with the skills and knowledge to turn their scholarly understandings and personal commitment into more effective community engagement and long-lasting impact that more deliberately looks toward creating systemic change.
References
Allen-Meares, P. (2008). Schools of social work contribution to community partnerships: The
renewal of the social compact in higher education. Journal of Human Behavior in the
Social Environment, 18(2), 79–100.
Armstrong, P., & Miller, N. (2006). Whatever happened to social purpose? Adult educators’
stories of political commitment and change. International Journal of Lifelong Education,
25(3), 291-305.
Bahng, G. (2015). Using community partnerships to create critical service learning: The case of
Mar Vista Gardens. Journal of Public Affairs Education, 21(1), 55-70.
Bossaller, J. S., Frasher, J., Norris, S., Marks, C. P., & Trott, B. (2015). Learning about social
justice through experiential learning abroad. Reference and User Services
Quarterly, 54(3), 6-11.
Boyer, E. L. (1990). Scholarship reconsidered: Priorities of the professoriate. Carnegie
Foundation for the Advancement of Teaching.
Brennan, J. (2008). Higher education and social change. Higher Education, 56(3), 381-393.
Checkoway, B. (2001). Renewing the civic mission of the American research university. The
Journal of Higher Education, 72(2), 125-147. Special Issue: The Social Role of Higher
Education.
Cortez, G. A. (2013). Occupy public education: A community's struggle for educational
resources in the era of privatization. Equity and Excellence in Education, 46(1), 7-19.
Creswell, J. W. (2003). Research design: Quantitative, qualita...