4. After each mistake, you are to flip the next "Shock Generator" switch on the console, progressing from left to right. You note that a label corresponds to every 5th mark on the dial, with the first mark labeled slight shock, the 5th mark labeled moderate shock, the 10th strong shock, and so on through very strong shock, intense shock, extreme intensity shock, and danger: severe shock.
You begin. The learner at first gives some correct answers, but then he makes a few errors. Soon you are beyond the 5th mark (moderate shock) and are moving in the direction of more and more severe shocks. As you turn the dial, the learner's reactions increase in intensity from a grunt at the 10th mark (strong shock) to painful groans at higher levels, to anguished cries of "Get me out of here!" at the extreme intensity shock levels, to a deathly silence at the highest level. When you protest at administering the stronger shocks, the experimenter tells you, "The experiment requires that you continue." Occasionally he says, "It is absolutely essential that you continue."

Source: From the film Obedience, © 1968 by Stanley Milgram, © renewed 1998 by Alexandra Milgram, and distributed by Penn State Media Sales.
This is a simplified version of Stanley Milgram's famous obedience experiments, begun at Yale University in 1960. Outside the laboratory, Milgram surveyed Yale undergraduates and asked them to indicate at what level they would terminate their shocks if they were in the study. Now, please mark on the console below the most severe shock that you would agree to give the learner (Exhibit 3.3). The average (mean) maximum shock level predicted by the Yale undergraduates was 9.35, corresponding to a strong shock. Only one student predicted that he would provide a stimulus above that level, at the very strong level. Responses were similar from other student groups.

Obedience experiments (Milgram's): A series of famous experiments conducted during the 1960s by Stanley Milgram, a psychologist from Yale University, testing subjects' willingness to cause pain to another person when instructed to do so.
But the actual average level of shock administered by the 40 adults who volunteered for the experiment was 24.53, higher than extreme intensity shock and just short of danger: severe shock. Of Milgram's original 40 subjects, 25 complied entirely with the experimenter's demands, going all the way to the top of the scale (labeled simply as XXX). Judging from the subjects' visibly high stress, and from their subsequent reports, they believed that the learner was receiving physically painful shocks. (In fact, no electric shocks were actually delivered.)

Exhibit 3.3 Shock Meter
We introduce the Milgram experiment not to discuss obedience to authority but instead to introduce research ethics. We refer to Milgram's obedience studies throughout this chapter because they ultimately had as profound an influence on scientists' thinking about ethics as on how we understand obedience to authority. Although Milgram died in 1984, the controversy around his work did not. A recent review of the transcripts and interviews with many participants raises additional concerns even about the experiment's scientific validity, as well as its ethics (Perry 2013).
Throughout this book, we discuss ethical problems common to various research methods; in this particular chapter, we present in more detail some of the general ethical principles that professional social scientists use in monitoring their work.

Nuremberg war crime trials: Trials held in Nuremberg, Germany, in the aftermath of World War II, in which leaders of Nazi Germany were charged with war crimes and crimes against humanity; frequently considered the first trials for people charged with genocide.

Historical Background
Formal procedures for the protection of participants in research grew out of some widely publicized abuses. A defining event occurred in 1946, when the Nuremberg war crime trials exposed horrific medical experiments conducted during World War II by Nazi doctors in the name of science. During the 1950s and 1960s, American military personnel and Pacific Islanders were sometimes unknowingly exposed to radiation during atomic bomb tests. And in the 1970s, Americans were shocked to learn that researchers funded by the U.S. Public Health Service had, for decades, studied 399 low-income African American men diagnosed with syphilis in the 1930s to follow the natural course of the illness (Exhibit 3.4). In the Tuskegee syphilis study, many participants were not informed of their illness and were denied treatment until 1972, even though a cure (penicillin) was developed in the 1950s (Jones 1993).

Tuskegee syphilis study: Research study conducted by a branch of the U.S. government, lasting for roughly 40 years (ending in the 1970s), in which a sample of African American men diagnosed with syphilis were left untreated, without their knowledge, to enable researchers to learn about the natural course of the disease.
Exhibit 3.4 Tuskegee Syphilis Experiment

Chapter 3 Ethics in Research 47

Such egregious violations of human rights resulted, in the United States, in the creation of the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. The commission's 1979 Belmont Report (U.S. Department of Health, Education, and Welfare 1979) established three basic principles for the protection of human subjects (Exhibit 3.5):

1. Respect for persons: treating persons as autonomous agents and protecting those with diminished autonomy

2. Beneficence: minimizing possible harms and maximizing benefits

3. Justice: distributing benefits and risks of research fairly

Belmont Report: Report in 1979 of the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research stipulating three basic ethical principles for the protection of human subjects: respect for persons, beneficence, and justice.
The Department of Health and Human Services and the Food and Drug Administration then translated these principles into specific regulations, which were adopted in 1991 as the Federal Policy for the Protection of Human Subjects. This policy has shaped the course of social science research ever since, and you will have to consider it as you design your own research investigations. Some professional associations (such as the American Psychological Association, the American Political Science Association, and the American Sociological Association), university review boards, and ethics committees in other organizations set standards for the treatment of human subjects by their members, employees, and students; these standards are designed to comply with the federal policy.

Respect for persons: In human subjects ethics discussions, treating persons as autonomous agents and protecting those with diminished autonomy.

Beneficence: Minimizing possible harms and maximizing benefits.

Justice: As used in human research ethics discussions, distributing benefits and risks of research fairly.

Federal Policy for the Protection of Human Subjects: Federal regulations codifying basic principles for conducting research on human subjects; used as the basis for professional organizations' guidelines.
Federal regulations require that every institution that seeks federal funding for biomedical or behavioral research on human subjects have an institutional review board (IRB) that reviews research proposals. If you do research for a class assignment, you may need to prepare a brief IRB proposal so board members can be sure that your project meets all ethical standards. IRBs at universities and other agencies apply ethics standards that are set by federal regulations but can be expanded or made more specific by the IRB (Sieber 1992:5, 10). To promote adequate review of ethical issues, the regulations require that IRBs include members with diverse backgrounds. The Office for Protection From Research Risks in the National Institutes of Health monitors IRBs (with the exception of research involving drugs, which is the responsibility of the federal Food and Drug Administration).

Institutional review board (IRB): A group of organizational and community representatives required by federal law to review the ethical issues in all proposed research that is federally funded, involves human subjects, or has any potential for harm to subjects.

Office for Protection From Research Risks, National Institutes of Health: Federal agency that monitors institutional review boards (IRBs).
Ethical Principles

The American Sociological Association (ASA), like other professional social science organizations, has adopted ethical guidelines for practicing sociologists that are more specific than the federal regulations. Professional organizations may also review complaints of unethical practices when asked.

The Code of Ethics of the ASA (1997) is summarized at the ASA website (www.asanet.org); the complete text of the code is also available at this site. Mostly, ethical issues in research are covered by four guidelines:

1. To protect research subjects

2. To maintain honesty and openness

3. To achieve valid results

4. To encourage appropriate application

Exhibit 3.5 Belmont Report Principles: Respect for Persons, Beneficence, Justice

Each of these guidelines became a focus of the debate about Milgram's experiments, to which we will return frequently. Did Milgram respect the spirit expressed in these principles? You will find that there is no simple answer to the question of what is (or isn't) ethical research practice.
Protecting Research Subjects

This guideline, our most important, can be divided into four specific directions:

1. Avoid harming research participants.

2. Obtain informed consent.

3. Avoid deception in research, except in limited circumstances.

4. Maintain privacy and confidentiality.

Avoid Harming Research Participants

This standard may seem straightforward, but it can be difficult to interpret in specific cases. Does it mean that subjects should not be harmed even mentally or emotionally? That they should feel no anxiety or distress? The most serious charge leveled against the ethics of Milgram's study was that he had harmed his subjects. A verbatim transcript of one session will give you an idea of what participants experienced as they operated the "shock generator," which made it appear they were delivering increasingly severe shocks to the learner (Milgram 1965:67):
150 volts delivered. You want me to keep going?

165 volts delivered. That guy is hollering in there. . . . He's liable to have a heart condition. You want me to go on?

180 volts delivered. He can't stand it! I'm not going to kill that man in there! You hear him hollering? He's hollering. He can't stand it. . . . I mean, who is going to take responsibility if anything happens to that gentleman? [The experimenter accepts responsibility.] All right.

195 volts delivered. You see he's hollering. Hear that. Gee, I don't know. [The experimenter says: "The experiment requires that you go on."] I know it does, sir, but I mean he don't know what he's in for. He's up to 195 volts.

210 volts delivered.

225 volts delivered.

240 volts delivered.
The experimental manipulation generated "extraordinary tension" (Milgram 1963:377). Subjects were observed to "sweat, tremble, stutter, bite their lips, groan and dig their fingernails into their flesh" (Milgram 1963:375).
Milgram's "Behavioral Study of Obedience" was published in 1963 in the Journal of Abnormal and Social Psychology. The next year, the American Psychologist published a critique of the experiment's ethics by psychologist Diana Baumrind (1964:421). From Baumrind's perspective, the emotional disturbance in subjects was "potentially harmful because it could easily effect an alteration in the subject's self-image or ability to trust adult authorities in the future" (p. 422). Milgram (1964) quickly countered,

Momentary excitement is not the same as harm. As the experiment progressed there was no indication of injurious effects in the subjects; and as the subjects themselves strongly endorsed the experiment, the judgment I made was to continue the experiment. (p. 849)
Milgram (1963) also attempted to minimize harm to subjects with postexperiment procedures "to assure that the subject would leave the laboratory in a state of well being" (p. 374). A friendly reconciliation was arranged between the subject and the victim, and an effort was made to reduce any tensions that arose as a result of the experiment. In some cases, the dehoaxing or debriefing discussion was extensive, and all subjects were promised (and later received) a comprehensive report (Milgram 1964:849). But Baumrind (1964) was unconvinced: "It would be interesting to know what sort of procedures could dissipate the type of emotional disturbance just described" (p. 422).

Debriefing: A researcher's informing subjects after an experiment about the experiment's purposes and methods and evaluating subjects' personal reactions to the experiment.
When Milgram (1964:849) surveyed subjects in a follow-up, 83.7% endorsed the statement that they were "very glad" or "glad" to have been in the experiment, 15.1% were "neither sorry nor glad," and just 1.3% were "sorry" or "very sorry" to have participated. Interviews by a psychiatrist a year later found "no evidence of any traumatic reactions" (Milgram 1974:197). Subsequently, Milgram argued, "The central moral justification for allowing my experiment is that it was judged acceptable by those who took part in it" (Milgram, as cited in Cave & Holm 2003:32).

In a later article, Baumrind (1985:168) dismissed the value of the self-reported lack of harm of subjects who had been willing to participate in the experiment and noted that 16% did not endorse the statement that they were glad they had participated in the experiment. Many social scientists, ethicists, and others concluded that Milgram's procedures had not harmed subjects and so were justified by the knowledge they produced; others sided with Baumrind's criticisms (Miller 1986:88-138).
Or consider the possible harm to subjects in the famous prison simulation study at Stanford University (Haney, Banks, & Zimbardo 1973). Philip Zimbardo's prison simulation study was designed to investigate the impact of being either a guard or a prisoner in a prison, a total institution. The researchers selected apparently stable and mature young male volunteers and asked them to sign a contract to work for 2 weeks as a guard or a prisoner in a simulated prison. Within the first 2 days after the prisoners were incarcerated in a makeshift basement prison, the prisoners began to be passive and disorganized, and the guards became "sadistic," verbally and physically aggressive (Exhibit 3.6). Five "prisoners" were soon released for depression, uncontrollable crying, fits of rage, and, in one case, a psychosomatic rash. Instead of letting things continue for 2 weeks as planned, Zimbardo and his colleagues terminated the experiment after 6 days to avoid harming subjects.

Prison simulation study (Zimbardo's): Famous study from the early 1970s, organized by Stanford psychologist Philip Zimbardo, demonstrating the willingness of college students quickly to become harsh disciplinarians when put in the role of (simulated) prison guards over other students; usually interpreted as demonstrating an easy human readiness to become cruel.
Participants playing the prisoner role certainly felt some stress, but postexperiment discussion sessions seemed to relieve this; follow-up during the next year indicated no lasting negative effects on the participants and some benefits in the form of greater insight. And besides, Zimbardo and his colleagues had no way of predicting the bad outcome; indeed, they were themselves surprised (Haney et al. 1973).

Withholding beneficial treatment can be another way of causing harm to subjects. Sometimes, in an ethically debatable practice, researchers will actually withhold treatments from some subjects, knowing that those treatments would probably help the people, to accurately measure how much they helped. For example, in some recent studies of AIDS drugs conducted in Africa, researchers provided different levels of
Obtain Informed Consent
Just defining informed consent may also be more difficult than it first appears. To be informed, consent must
be given by persons who are competent to consent, have consented voluntarily, are fully informed about the
research, and have comprehended what they have been told (Reynolds 1979). Yet, you probably realize, as did
Baumrind (1985), that because of the inability to communicate perfectly, "Full disclosure of everything that
could possibly affect a given subject's decision to participate is not possible, and therefore cannot be ethically
required" (p. 165).
Obtaining informed consent creates additional challenges for researchers. For instance, the language
of the consent form must be clear and understandable yet sufficiently long and detailed to explain what
will actually happen in the research. Examples A (Exhibit 3.7) and B (Exhibit 3.8) illustrate two different
approaches to these trade-offs. Consent form A was approved by a university for a substance abuse survey
with undergraduate students. It is brief and to the point but leaves quite a bit to the imagination of the
prospective participants. Consent form B reflects the requirements of an academic hospital's IRB. Because
the hospital is used to reviewing research proposals involving drugs and other treatment interventions
with hospital patients, it requires a very detailed and lengthy explanation of procedures and related issues,
even for a simple survey. Requiring prospective participants to sign such lengthy forms can reduce their
willingness to participate in research and perhaps influence their responses if they do agree to participate
(Larson 1993: 114).
When an experimental design requires subject deception, researchers may withhold information before
the experiment but then debrief subjects after the experiment ends (Milgram did this). In the debriefing, the
researcher explains what really happened in the experiment, and why, and responds to subjects' questions. A
carefully designed debriefing procedure can often help research participants deal with their anger or embar-
rassment at having been deceived (Sieber 1992: 39-41), thus substituting for fully informed consent before the
experiment.
Exhibit 3.7 Consent Form A
University of Massachusetts Boston
Department of Sociology
October 28, 2014
Dear
The health of students and their use of alcohol and drugs are important concerns for every college and
university. The enclosed survey is about these issues at UMass/Boston. It is sponsored by University
Health Services and the PRIDE Program (Prevention, Resources, Information, and Drug Education). The
questionnaire was developed by graduate students in Applied Sociology, Nursing, and Gerontology.
You were selected for the survey with a scientific, random procedure. Now it is important that you
return the questionnaire so that we can obtain an unbiased description of the undergraduate student body.
Health Services can then use the results to guide campus education and prevention programs.
The survey requires only about 20 minutes to complete. Participation is completely voluntary and anonymous. No one will be able to link your survey responses to you. In any case, your standing at the University will not be affected whether or not you choose to participate. Just be sure to return the enclosed postcard after you mail the questionnaire so that we know we do not have to contact you again.
Please return the survey by November 15th. If you have any questions or comments, call the PRIDE program at 287-5680 or Professor Schutt at 287-6250. Also call the PRIDE program if you would like a summary of our final report.

Thank you in advance for your participation.
Finally, some participants can't truly give informed consent. College students, for instance, may feel unable to refuse if their professor asks them to be in an experiment. Legally speaking, children cannot give consent to participate in research; a child's legal guardian must give written informed consent to have the child participate in research (Sieber 1992). Then, the child must in most circumstances be given the opportunity to give or withhold assent to participate in research, usually by a verbal response to an explanation of the research. Special protections exist for other vulnerable populations: prisoners, pregnant women, mentally disabled persons, and educationally or economically disadvantaged persons. And in a sense, anyone deliberately deceived in an experiment cannot be said to really have given "informed" consent, since the person wasn't honestly told what would happen.
Social media and digital technologies have in recent years opened the doors to new kinds of ethical
problems in research, by blurring the lines between public and private behavior. If you have a Facebook or
Myspace page with 600 "friends," is that your private page, or a public document? In Chapter 8, we'll see how
social researchers are eagerly mining such data for information on people's social networks; "Employers are
looking at people's online postings and Googling information about them, and I think researchers are right
behind them," said Professor Nicholas Christakis (as cited in Rosenbloom 2007: 2), a Harvard sociologist in a
New York Times article in 2007. But the federal guidelines under which institutional review boards are set up
didn't anticipate the Internet. "The [human subject] rules were made for a different world, a pre-Facebook
world," said Samuel D. Gosling, a psychology professor at the University of Texas who uses Facebook as a data
source. "There is a rule that you are allowed to observe public behavior, but it's not clear if online behavior is
public or not" (as cited in Rosenbloom 2007:2).
In truth, though, the public versus private debate is a long-standing issue in social science. Laud
Humphreys (1970) decided that truly informed consent would be impossible to obtain for his study of the social
background of men who engage in homosexual behavior in public facilities. Humphreys served as a lookout, a "watch queen," for men who were entering a public bathroom in a city park with the
intention of having sex. In a number of cases, he then left the bathroom and copied
the license plate numbers of the cars driven by the men. One year later, he visited
the homes of the men and interviewed them as part of a larger study of social issues.
Humphreys changed his appearance so that the men did not recognize him. In his
book Tearoom Trade, Humphreys concluded that the men who engaged in what
were widely viewed as deviant acts were, for the most part, married, suburban men
whose families were unaware of their sexual practices. But debate has continued ever
since about Humphreys's failure to tell the men what he was really doing in the bathroom or why he had come
to their homes for the interview. He was criticized by many, including some faculty members at the University
of Washington who urged that his doctoral degree be withheld. However, many other professors and some
members of the gay community praised Humphreys for helping normalize conceptions of homosexuality
(Miller 1986: 135).
If you served on your university's IRB, would you allow research such as Humphreys's to be conducted?
Tearoom Trade: Book by Laud Humphreys investigating the social background of men who engage in homosexual behavior in public facilities; controversially, he did not obtain informed consent from his subjects.
Avoid Deception in Research, Except in Limited Circumstances

Deception occurs when subjects are misled about research procedures. Frequently, this is done to simulate real-world conditions in the lab. The goal is "to get subjects to accept as true what is false or to give a false impression" (Korn 1997). In Milgram's experiment, actually giving electric shocks to the "stooge" would be cruel. Yet, to test obedience, the task had to be troubling for the subjects. Milgram (1974:187-188) insisted that the deception was absolutely essential. Many other psychological and social psychological experiments would be worthless if subjects understood what was really happening to them while the experiment was in progress. But is this sufficient justification to allow the use of deception?
Some important topics have been cleverly studied using deception. Gary Marshall and Philip Zimbardo (of prison study fame), in a 1979 study, told the student volunteers that they were being injected with a vitamin supplement to test its effect on visual acuity (Korn 1997:2-3). But to determine the physiological basis of emotion, they actually injected them with adrenaline, so that their heart rate and sweating would increase, and then placed them in a room with a student stooge who acted silly. Jane Allyn Piliavin and Irving Piliavin, in a 1972 study, staged fake seizures on subway trains to study helpfulness (Korn 1997:3-4). Again, would you allow such deceptive practices if you were a member of your university's IRB? Giving people stimulating drugs, apart from the physical dangers, is using their very bodies for research without their knowledge. Faking an emergency may lessen one's willingness to help in the future or may, in effect, punish the research subjects through embarrassment for their reaction to what is really "just an experiment."
But perhaps risk, not deception per se, is the real problem: Elliot Aronson and Judson Mills's (1959) study
of severity of initiation to groups is a good example of experimental research that does not pose greater-than-
everyday risks to subjects but still uses deception. This study was conducted at an all-women's college in the
1950s. The student volunteers who were randomly assigned to the "severe initiation" experimental condition
had to read a list of embarrassing words. Even in the 1950s, reading a list of potentially embarrassing words in a
laboratory setting, then listening to a taped discussion, was unlikely to increase the risks to which students were
exposed in their everyday lives. Moreover, the researchers informed subjects that they would be expected to
talk about sex and could decline to participate in the experiment if this requirement would bother them. None
dropped out. To further ensure that no psychological harm was caused, Aronson and Mills explained the true
nature of the experiment to subjects after the experiment. The subjects did not seem perturbed: "None of the
Ss expressed any resentment or annoyance at having been misled. In fact, the majority were intrigued by the
experiment, and several returned at the end of the academic quarter to ascertain the result" (p. 179).
Are you satisfied that this procedure caused no harm? The minimal deception in the Aronson and Mills
experiment, coupled with the lack of any ascertainable risk to subjects and a debriefing, satisfies the ethical
standards for research of most psychologists and IRBs, even today.
Maintain Privacy and Confidentiality
Maintaining privacy and confidentiality after a study is completed is another way to protect subjects, and the
researcher's commitment to that standard should be included in the informed consent agreement (Sieber 1992).
Procedures to protect each subject's privacy, such as locking records and creating special identifying codes,
must be created to minimize the risk of access by unauthorized persons. For the protection of health care data,
the Health Insurance Portability and Accountability Act (HIPAA), passed by Congress in 1996, created
much more stringent regulations. As implemented by the U.S. Department of Health and Human Services in
2000 (and revised in 2002), the HIPAA Final Privacy Rule applies to "oral, written, and electronic information that relates to the past, present, or future physical or mental health or condition of an individual" (Legal Information Institute 2006, §1320d[6][B]). The HIPAA Rule requires that researchers have valid authorization for any use or disclosure of "protected health information" (PHI) from a health care provider. Waivers of authorization can be granted in special circumstances (Cava, Cushman, & Goodman 2007).
However, statements about confidentiality should be realistic. In 1993,
sociologist Rik Scarce was jailed for 5 months for contempt of court after refusing
to testify to a grand jury about so-called ecoterrorists. Scarce, a PhD candidate at
Washington State University at the time, was researching radical environmentalists
and may have had information about a 1991 "liberation" raid on an animal research
lab at Washington State. Scarce was eventually released from jail, but he never did
violate the confidentiality he claimed to have promised his informants (Scarce
2005). Laws allow research records to be subpoenaed and may require reporting child
abuse. A researcher also may feel compelled to release information if a health-or life-
threatening situation arises and participants need to be alerted.
The National Institutes of Health can issue a Certificate of Confidentiality to protect researchers from being legally required to disclose confidential information. Researchers who focus on high-risk populations or behaviors or sensitive topics may seek such a certificate.

Health Insurance Portability and Accountability Act (HIPAA): A U.S. federal law passed in 1996 that guarantees, among other things, specified privacy rights for medical patients, in particular those in research settings.

Confidentiality: Provided by research in which identifying information that could be used to link respondents to their responses is available only to designated research personnel for specific research needs.

Certificate of Confidentiality: Document issued by the National Institutes of Health to protect researchers from being legally required to disclose confidential information.

Research That Matters
You are driving on the highway at about 3 p.m. on a Friday when you see a police officer standing by the road, lights flashing. The officer motions you to pull off the road and stop in an area marked off with traffic cones. You are relieved and surprised when someone in plain clothes working with the police officer then walks over to your car and asks if you would consent to be in a survey. You then notice two large signs that say NATIONAL ROADSIDE VOLUNTARY SURVEY. You are offered $10 to provide an oral fluid sample and answer a few additional questions about drug use.

This is what happened to 10,909 U.S. motorists between July 20 and December 1, 2007, at sites across the United States. Those who agreed to the oral fluid collection were also offered an additional $5 to complete a short drug-use disorder questionnaire. Before they drove off, participants were also offered a $50 incentive to provide a blood sample. Drivers who were found to be too impaired to be able to drive safely (blood alcohol level above the legal limit) were given a range of options, including switching with an unimpaired passenger, getting a free ride home, or spending the night in a local motel (at no expense to them). None were arrested or given citations, and no crashes occurred in connection with the study. Those younger than 21 years and those who were pregnant were given informational brochures about the special risk they face if they consume alcohol.

John H. Lacey and others from the Pacific Institute for Research and Evaluation, C. Debra Furr-Holden of Johns Hopkins University, and Amy Berning from the National Highway Traffic Safety Administration (NHTSA, which funded the study) reported the procedures for this survey in a 2011 article in Evaluation Review. The data collected were maintained as anonymous, so no research participants could be linked to their survey responses. The 2007 National Roadside Survey identified 10.5% of the drivers as using illegal drugs and 3% as having used medications.

Source: Lacey, John H., Tara Kelley-Baker, Robert B. Voas, Eduardo Romano, C. Debra Furr-Holden, Pedro Torres, and Amy Berning. 2011. Drug-involved driving in the United States: Methodology for the 2007 National Roadside Survey. Evaluation Review 35:319-353.
Maintaining Honesty and Openness
Protecting subjects, then, is the primary focus of research ethics. But researchers have obligations to other
groups, including the scientific community, whose concern with validity requires that scientists be open in
disclosing their methods and honest in presenting their findings. To assess the validity of a researcher's conclusions
and the ethics of this researcher's procedures, you need to know how the research was conducted.
This means that articles or other reports must include a detailed methodology section, perhaps supplemented
by appendixes containing the research instruments or websites or other contact information where more
information can be obtained. Biases or political motives should be acknowledged because research distorted
by political or personal pressures to find particular outcomes is unlikely to be carried out in an honest and
open fashion.
Gina Perry's (2013) Behind the Shock Machine challenges Milgram's adherence to the goal of honesty and
openness, although his initial 1963 article included a description of study procedures, including details about
the procedures involved in the learning task, administration of the "sample shock," the shock instructions and
the preliminary practice run, the standardized feedback from the "victim" and from the experimenter, and the
measures used. Many more details, including pictures, were provided in Milgram's (1974) subsequent book.
Perry, though, has revealed misleading statements in Milgram's reports.
Achieving Valid Results
The pursuit of objective knowledge, the goal of validity, justifies our investigations and our claims to the use
of human subjects. We have no business asking people to answer questions, submit to observations, or participate
in experiments if we are simply trying to trumpet our own prejudices or pursue our personal interests. If,
however, we approach our research projects objectively, setting aside our predilections in the service of
learning a bit more about human behavior, we can honestly represent our actions as potentially contributing to
the advancement of knowledge.
The details in Milgram's 1963 article and 1974 book on the obedience experiments make a compelling case
for his commitment to achieving valid results-to learning how obedience influences behavior. In Milgram's
(1963) own words,
It has been reliably established that from 1933-45 millions of innocent persons were systematically
slaughtered on command.... Obedience is the psychological mechanism that links individual action
to political purpose. It is the dispositional cement that binds men to systems of authority.... For many
persons obedience may be a deeply ingrained behavior tendency.... Obedience may [also] be ennobling
and educative and refer to acts of charity and kindness, as well as to destruction. (p. 371)
Milgram (1963) then explains how he devised experiments to study the process of obedience in a way that
would seem realistic to the subjects and still allow "important variables to be manipulated at several points in
the experiment" (p. 372). Every step in the experiment was carefully designed to ensure that subjects received
identical stimuli and that their responses were measured carefully.
Milgram's (1963) attention to validity is also apparent in his reflections on the particular conditions of his
experiment, for, he notes, "Understanding of the phenomenon of obedience must rest on an analysis of [these]
conditions" (p. 377). These particular conditions included the setting for the experiment at Yale University,
its purported "worthy purpose" to advance knowledge about learning and memory, and the voluntary
participation of the subject as well as of the learner (as far as the subject knew). The importance of some of these
"particular conditions" (such as the location at Yale) was then tested in subsequent replications of the basic
experiment (Milgram 1965).
However, not all psychologists agreed that Milgram's approach could achieve valid results. Baumrind's
(1964) critique begins with a rejection of the external validity, the generalizability, of the experiment: "The
laboratory is unfamiliar as a setting and the rules of behavior ambiguous.... Therefore, the laboratory is not
the place to study degree of obedience or suggestibility, as a function of a particular experimental condition"
(p. 423). And so, "the parallel between authority-subordinate relationships in Hitler's Germany and in Milgram's
laboratory is unclear" (p. 423).
Milgram (1964) quickly published a rejoinder in which he disagreed with (among other things) the notion
that it is inappropriate to study obedience in a laboratory setting: "A subject's obedience is no less problematical
because it occurs within a social institution called the psychological experiment" (p. 850).
Milgram (1974: 169-178) also pointed out that his experiment had been replicated in other places and
settings with the same results, that there was considerable evidence that subjects had believed that they actually
were administering shocks, and that the "essence" of his experimental manipulation (the request that subjects
comply with a legitimate authority) was shared with the dilemma faced by people in Nazi Germany and
soldiers at the My Lai massacre in Vietnam (Miller 1986: 182-183).
But Baumrind (1985) was still not convinced. In a follow-up article in the American Psychologist, she
argued that "far from illuminating real life, as he claimed, Milgram in fact appeared to have constructed a set of
conditions so internally inconsistent that they could not occur in real life" (p. 171).
Milgram assumed that obedience could fruitfully be studied in the laboratory; Baumrind disagreed. Both,
however, buttressed their ethical arguments with assertions about the external validity (or invalidity) of the
experimental results. They agreed, in other words, that a research study is partly justified by its valid findings-the
knowledge to be gained. If the findings aren't valid, they can't justify the research at all. It is hard to justify any risk for
human subjects, or even any expenditure of time and resources, if our findings tell us nothing about human behavior.
Kristen Kenny was the first in her family to graduate from college. She studied art at the Massachusetts
College of Art and soon started working in theater, doing everything from set design, hair, and makeup to
costume design and acting. The arts have their fair share of interesting characters; this was the
beginning of Kenny's training in dealing with a variety of difficult personalities and
learning how to listen and how to react.
After years of working a variety of jobs in the entertainment field, Kenny found herself working as a receptionist
in the music industry, a hotbed of difficult personalities, contracts, and negotiations. Within a short time,
Kenny had been promoted to assistant talent buyer for small clubs and festivals in the Boston area. This job
helped Kenny develop the skill of reading dense contract documents and being able to identify what contractual
language stays and what is deleted. Eventually the music industry started to wane and Kenny was laid off, but a friend
at a local hospital who was in dire need of someone who could interpret volumes of documents and deal with bold
personalities asked her to apply for a job as their IRB administrator. Kenny had no idea what an IRB was, but she attended
meetings and conferences to learn the IRB trade. Three years later, Kenny was asked to join the Office of Research and
Sponsored Programs at the University of Massachusetts, Boston, as the IRB administrator.
Now, as a research compliance specialist II, Kenny maintains the IRB and other regulatory units and has developed a
curriculum and program for the Office of Research and Sponsored Programs. And if you look hard enough, you can find
her clothing and fabric designs on eBay, Etsy, and her own website.
Encouraging Appropriate Application
Finally, scientists must consider the uses to which their research is put. Although many scientists believe that
personal values should be left outside the laboratory, some feel that it is proper, even necessary, for scientists
to concern themselves with the way their research is used.
Milgram made it clear that he was concerned about the phenomenon of obedience precisely because of its
implications for people's welfare. As you have already learned, his first article (1963) highlighted the atrocities
committed under the Nazis by citizens and soldiers who were "just following orders." In his more comprehensive
book on the obedience experiments (1974), he also used his findings to shed light on the atrocities committed in
the Vietnam War at My Lai, slavery, the destruction of the American Indian population, and the internment of
Japanese Americans during World War II. Milgram makes no explicit attempt to tell us what to do about the
problem. In fact, as a dispassionate psychological researcher, Milgram (1974) tells us, "What the present study
[did was] to give the dilemma (of obedience to authority) contemporary format by treating it as subject matter for
experimental inquiry, and with the aim of understanding rather than judging it from a moral standpoint."
Yet it is impossible to ignore the very practical implications of Milgram's investigations. His research
highlighted the extent of obedience to authority and identified multiple factors that could be manipulated to
lessen blind obedience (such as encouraging dissent by just one group member, removing the subject from
direct contact with the authority figure, and increasing the contact between the subject and the victim).
A widely publicized experiment on the police response to domestic violence, mentioned earlier, provides
an interesting cautionary tale about the uses of science. Lawrence Sherman and Richard Berk (1984)
arranged with the Minneapolis police department for the random assignment of persons accused of domestic
violence to be either arrested or simply given a warning. The results of this field experiment indicated that
those who were arrested were less likely subsequently to commit violent acts against their partners. Sherman
(1993) explicitly cautioned police departments not to adopt mandatory arrest policies based solely on the
results of the Minneapolis experiment. Although we now know that the original finding of a deterrent effect
of arrest did not hold up in many other cities where the experiment was repeated, Sherman (1992: 150-153)
later suggested that implementing mandatory arrest
policies might have prevented some subsequent cases of spouse abuse. In particular, in a follow-up study in
Omaha, arrest warrants reduced repeat offenses among spouse abusers who had already left the scene when
police arrived. However, this Omaha finding was not publicized, so it could not be used to improve police
policies. So how much publicity is warranted, and at what point in the research should it occur?
Or what can researchers do if others misinterpret their findings, or use them in misleading ways? For
example, during the 1980s, Murray Straus, a prominent researcher of family violence (wife battering, child abuse,
corporal punishment, and the like) found in his research that in physical altercations between husband and wife,
the wife was just as likely as the husband to throw the first punch. This is a startling finding when taken by itself.
But Straus also learned that regardless of who actually hit first, the wife nearly always wound up being physically
injured far more severely than the man. Whoever started the fight, she lost it (Straus & Gelles 1988). In this respect
(as well as in certain others), Straus's finding that "women hit first as often as men" is quite misleading when taken
by itself. When Straus published his findings, a host of social scientists and feminists protested loudly on the
grounds that his research was likely to be misused by those who believe that wife battering is not, in fact, a serious
problem. It seemed to suggest that, really, men are no worse in their use of violence than are women. Do research-
ers have an obligation to try to correct what seem to be misinterpretations of their findings?
Conclusion
Different kinds of research produce different kinds of ethical problems. Most survey research, for instance, creates
few if any ethical problems and can even be enjoyable for participants. In fact, researchers from Michigan's Institute
for Survey Research interviewed a representative national sample of adults and found that 68% of those who had
participated in a survey were somewhat or very interested in participating in another; the more times respondents
had been interviewed, the more willing they were to participate again (Reynolds 1979: 56-57). Conversely, some
experimental studies in the social sciences that have put people in uncomfortable or embarrassing situations have
generated vociferous complaints and years of debate about ethics (Reynolds 1979; Sjoberg 1967).
Research ethics should be based on a realistic assessment of the overall potential for harm and benefit to
research subjects. In this chapter, we have presented some basic guidelines, and examples in other chapters
suggest applications, but answers aren't always obvious. For example, full disclosure of "what is really going on"
in an experimental study is unnecessary if subjects are unlikely to be harmed. In one student observation study
on cafeteria workers, for instance, the IRB didn't require consent forms to be signed. The legalistic forms and
signatures, they felt, would be more intrusive or upsetting to workers than the very benign and confidential
research itself. The committee put the feelings of subjects above the strict requirement for consent.
Ultimately, then, these decisions about ethical procedures are not just up to you, as a researcher, to make.
Your university's IRB sets the human subjects protection standards for your institution and will require that
researchers (even, in most cases, students) submit their research proposal to the IRB for review. So an
institutional committee, following professional codes and guidelines, will guard the ethical propriety of your
research; but still, that is an uncertain substitute for your own conscience.
Key Terms
Belmont Report
Beneficence
Certificate of Confidentiality
Federal Policy for the Protection of Human Subjects
Health Insurance Portability and Accountability Act (HIPAA)
Institutional review board (IRB)
Justice
Nuremberg war crime trials
Obedience experiments (Milgram's)