A summary of the ethics of a current business incident
Please write a summary of a current (within the last 12 months) business incident, found
via the news media, describing the case’s ethics parameters.
1. Please include your assessment of the ethics issues, decisions, and actions within the case.
2. Please demonstrate the “Ethics-driven decision model” and combine the decision model
with the business incident you chose. The decision model is included in the uploaded file.
3. Please refer to the sample summary I have uploaded and write the summary based on the
format of the sample summary. You can use boxes and bullet points in the paper similar
to the sample. Thank you!
4. Please write this paper no longer than eight pages.
5. Please also refer to the book “The Honest Truth About Dishonesty.” You do not necessarily
need to include any content from the book, but I will upload the PDF file of the book if
you feel like using it.
In the summary:
Step 1: Please present the facts comprehensively and concisely, including only those relevant
to the case, with proper MLA in-text citations.
Step 2: Please identify the good and bad dimensions of the issue, and state your thoughts.
Step 3: Please identify at least three stakeholders and how they might stand to gain or lose
from the issue.
Step 4: Please provide two alternative courses of action for the business ethics issue and
one wild scenario.
Step 5: Please evaluate the two alternative courses and the one wild scenario with
regard to all the stakeholders identified in Step 3.
Step 6: Please provide a comprehensive rationale for why you chose the guidance you sought.
Step 7: Please make a decision and substantiate your claims based on one of your
alternative courses of action.
1. Please use Times New Roman, 12-point font, and double spacing.
2. Please use MLA citations.
An Ethics-Focused Decision-Making Model
1. Determine the Facts
a. Resist an immediate “judgment call”
i. Be aware of how sports, for example, has influenced
all of us to make quick choices about what is within the rules
b. Gather all facts—not just ones supporting one point-of-view
c. Check multiple sources of information
d. Differentiate between “facts” and “assumptions”
e. “Follow the Money”—who makes money based upon
the outcome?
2. Identify the Ethical Issues Involved
a. Are there dimensions of “right and wrong”; “fair or
unfair”; “good or bad”?
3. Identify All the Stakeholders
a. Who (Groups) and how many?
b. How might I be biased—how are certain
Stakeholders “related” to me?
4. Delineate (and/or Create)
Alternative Courses of Action
a. Generate at least two different courses
b. Imagine one “wild” scenario (This is how creativity
enters the process)
5. Assess How the Alternatives will
Affect Stakeholders
a. Envision the results of each alternative
decision/course of action
b. Determine beneficial versus harmful consequences
c. Consider the Law
d. Recognize your own potential “conflict of interest”
6. Seek Guidance
a. People you respect
b. Subject matter experts
7. Make a Decision; Act; Monitor the
Outcome(s)
a. Make a choice
b. Do something constructive
c. Be prepared to DEFEND your choice with logical,
fact-based reasoning
d. “All Solutions Just Create Different Problems”—
Follow up and Correct
An Ethics-Focused Decision-Making Model
Determine the Facts
Identify the Ethical Issues Involved
Identify All the Stakeholders
Delineate (and/or Create) Alternative Courses of Action
Assess How the Alternatives will Affect Stakeholders
Make a Decision; Act; Monitor the Outcome(s)
Comparisons and Contrasts (1984 & Now)
Monitor (1984): “Always eyes watching you and the voice enveloping you. Asleep or awake, working or eating, indoors or out of doors, in the bath or bed—no escape. Nothing was your own except the few cubic centimetres in your skull” (Orwell 4) is one of the most famous passages in the novel 1984. In the society Orwell describes, everyone in the community is monitored through a telescreen; every moment of your life is observed by the government’s Thought Police. The government knows all of your actions. Privacy in that society does not matter; we can even say there is no privacy at all. The people there live as if inside a transparent box.

Monitor (now): Nowadays, when we look back on the novel 1984, we may think such things are ridiculous: a group or a person monitoring your actions every moment of every day would be unacceptable to us. Yet Orwell’s prediction from 1949 seems to have come true; only the method of monitoring has changed. The telescreen has been replaced by the surveillance camera and the record of our internet searches. When we use Google and Facebook, we constantly see advertising similar to things we have searched for or purchased before. Sometimes we even find advertising for a product we are looking for but have never searched for or bought; many suspect this is because Google and Facebook capture our audio when we chat with friends and family about the product. In 2013, Edward Joseph Snowden, a former CIA employee, leaked files to The Washington Post showing that the US government had started the PRISM program in 2007 to monitor the internet activity and records of American citizens (Miller). In 2019, there were more than six million surveillance cameras in the UK, the second-highest number per capita in the world (Carlo). In our image, the UK and the US are countries that value privacy and freedom; yet eyes are watching their cities, just as in the story of 1984.
Limit the mind (1984): “In the end the Party would announce that two and two made five, and you would have to believe it. It was inevitable that they should make that claim sooner or later: the logic of their position demanded it” (Orwell 102). In our society, the claim that two plus two makes five does not make sense. In the novel, however, believing it is essential; it is forced on people without question. The government of 1984 also makes people believe that “War is peace; Freedom is slavery; Ignorance is strength” (Orwell 34). Under this totalitarian government, anything can go from unreasonable to reasonable, and all the people must believe and agree with it. Language and thought have been redefined and controlled by the government. People are not allowed to hold their own opinions; what they may have is only what the government permits them to have.

Limit the mind (now): “Two plus two is five” always seems unreasonable in our society. In China, however, the government seems to control its people’s thoughts. Since 2017, the Chinese government has set up re-education camps in Xinjiang; in 2019 it was estimated that one to three million inmates were held in these camps (Alan). In these camps, the Chinese government aims to re-educate Uyghurs, forcing them to learn Chinese and to change their religion. China is an atheist country, and in the camps people are forced to give up their religion and to act like most Chinese people, accepting the governing party as their faith; they should trust the president, and everything he does must be right. Language and religion are being forcibly changed for the Uyghurs in Xinjiang, and much of their culture and thought has been altered into a form the government approves of. Ethnic minorities who disagree or refuse to change are sent to the camps. What the Chinese government is doing is similar to the government of 1984.
Change the history (1984): “Every record has been destroyed or falsified, every book has been rewritten, every picture has been repainted, every statue and street and building has been renamed, every date has been altered. And that process is continuing day by day and minute by minute” (Orwell). Everything seems factual to the people, but it may not be. History may no longer be history; it may be only a story. The government revised history and rebuilt the “truth” of every event. The people knew nothing of the facts; they may think everything is reasonable and fair, and may even think they live in heaven. They are living outside the truth every second.

Change the history (now): Governments in our society also try to revise the past. China never acknowledges the June Fourth Incident, in which it used the military to kill students; in China, “June Fourth” is even a banned term, and most teenagers do not know what the incident was. The June Fourth Incident seems to have been erased from Chinese history. In North Korea, people believe that the first burger was invented by their late president Kim Jong Il. These governments are trying to hide or revise history to make their societies seem more beautiful and perfect. People living inside this box may think their community is the perfect one; standing outside the box, we see that they are living in a fake place and may never know the real history. In a totalitarian, closed society, history is easier to revise. As in 1984, people in China and North Korea cannot tell whether a thing is true or not. They are living a fake history, a story, and are therefore easier for the government to control.
Controlled media (1984): “There was a whole chain of separate departments dealing with proletarian literature, music, drama and entertainment generally. Here were produced rubbishy newspapers containing almost nothing except sport, crime and astrology, sensational five-cent novelettes, films oozing with sex, and sentimental songs …” (Orwell 55). The government in 1984 controlled everything: what you could see, what you could know, and what you could believe. Everything you learned each day came from the government, which could make you feel what is right and what is wrong. The government even created your entertainment, and through that enjoyment it could make you accept the values it wanted to spread. The media in 1984 is the main instrument for controlling people’s minds.

Controlled media (now): In China and North Korea, most of the news is controlled by the government; people can see only the reports the government selects. For example, North Korean news once claimed the country had successfully landed a man on Mars (Mckay). We all know this is fake; North Korea does not have the technology to accomplish it. Yet most people in North Korea believe it, and it makes them think North Korea is the best, that they live in a strong country and enjoy a good life. That is the power of the media. We may not think we are living in this kind of situation, but “since the Telecommunications Act of 1996, the act that reduced the Federal Communications Commission (FCC) regulations on cross-ownership, 90% of U.S. media is owned by six companies: Viacom, News Corporation, Comcast, CBS, Time Warner and Disney” (Corcoran; Lutz). In the U.S., most of the media is controlled by a small number of owners, and news tends to be biased toward its funding companies or the government. Governments and some enterprises can easily use these media to shape people’s minds and thinking. Fake news grows more and more common, and fact-checking becomes harder and harder. What we see is what others want us to see.
One-party state (1984): “More commonly, people who had incurred the displeasure of the Party simply disappeared and were never heard of again” (Orwell 57). Under the ruling party of 1984, all people are supposed to do is believe and agree with the government. There is only one party, and it accepts neither “no” nor the people who disagree with it. No policy published by the government is ever opposed by anyone; those who disagree simply disappear. The government in this one-party state holds maximum power and cannot be denied.

One-party state (now): China and North Korea are examples of one-party states. The defining feature of a one-party state is that there is only one party and no one to reject the decisions it makes, even when a decision is ridiculous. For example, in 2018 the Chinese president, Xi Jinping, removed the presidential term limit (BBC); in other words, President Xi became, in effect, the king of China. Even more absurd, not one vote was cast against this change. From North Korea, we often hear news that the leader has killed people or officials who did not stand at his side. China and North Korea are similar to 1984: it is hard to find any voice of argument or disagreement with the government. The party is the greatest power in the country, and all the people are expected to believe in it, with no dissent or differing opinion.
Inform on each other (1984): “Nearly all children nowadays were horrible. What was worst of all was that by means of such organizations as the Spies they were systematically turned into ungovernable little savages, and yet this produced in them no tendency whatsoever to rebel against the discipline of the Party. On the contrary, they adored the Party and everything connected with it …” (Orwell 31). In the society of 1984, anyone around you can be a spy; the person closest to you may be the one who pushes you into hell. People live in fear in this society. Everyone who informs on others is considered a hero, whether child or adult; children may even inform on their own parents.

Inform on each other (now): In China, there was a political movement called the Cultural Revolution, during which people behaved much like those in the story of 1984. People who informed on others, whether family, friends, or teachers, were considered heroes. They had been brainwashed and regarded their leader as a god. When people did something their leader found unsuitable, they were caught, and many of them were killed; yet no one thought there was any problem. This distorted society lasted for ten years. During those ten years, people did not trust one another; what they believed in was discovering the wrongs (in the leader’s eyes) that others were doing. Everyone wanted to inform on others; everyone wanted to be a hero. In that society, horror was everywhere.
For these comparisons and contrasts, many things that happened in 1984 are happening in our society now:
-Monitoring of people
-Limiting the mind
-Informing on each other
-Changing history
-Controlled media
-One-party state
Step 2: Identify ethical issues involved
Good or bad:
Governments using this kind of high-pressure regime to control their people produce no good result for the people; instead, the people suffer under it. The government controls the freedom, minds, and rights of the people. All the people are living in a transparent box.
Legal or illegal:
These governments are controlling their people in an unethical way, but it is legal. In some places, such as China and North Korea, people have no right to sue the government. The rules and the laws are set by the government itself, so whatever it does must be legal.
Step 3: Identify all the stakeholders
Citizens: Citizens are the largest portion of a country. They are the majority of the population, and they are the ones who suffer; every action of the government affects them.
Government: The government is the party that decides what to do for the country and its citizens, including whether to use such methods to control the people.
The world: As a totalitarian country grows stronger, its government may extend its influence around the world through economics or the military. The world will be affected, because the country may spread its methods abroad.
Step 4: Delineate (and/or create) alternative courses of action
-The government keeps controlling its people
In this scenario, the government does nothing and keeps doing what it is doing.
-The government gives up controlling its people
The government abandons what it has been doing and gives freedom back to its citizens.
-Wild scenario: Awaken the people
We do something (for example, publicize on social media) to make people see the world and their society clearly, and to make them realize they should jump out of, or fight against, the transparent box rather than accept living in it.
Step 5: Assess how the alternatives will affect stakeholders

The government keeps controlling its people:
-Government: The ruling party is happy with this, because it can keep control of the country.
-Citizens: Citizens are still oppressed by the totalitarian government.

The government gives up controlling its people:
-Government: The government will face many different opinions; it becomes harder for it to control the country or set policy.
-Citizens: Citizens will get back their freedom.
-The world: The threat of the totalitarian country to the world will shrink.

Awaken the people:
-Government: People may start to resist; they will unite to fight for their rights, and the government’s control will weaken.
-Citizens: Citizens will fight for their freedom and thought, and they can get back what they should have.
-The world: People’s minds may become more open.
Step 6: Seek guidance

Who do you talk with? Why choose them? What do you expect from them?

-Citizens of New Zealand: New Zealand is the freest country in the world, so I would ask them what freedom is, what people can have inside liberty, and how beautiful it is.
-Defectors from North Korea: They are people who realized they should jump out of the box, so I would ask how and in what ways they suffered, and when and how they realized they were living in the box.
Step 7: Make a decision; act; monitor the outcome(s)
1984 is a story, or a prediction; but now we find that we seem to be living inside this novel. What should we do? Accept it and keep living this way? No: we should make a change. We should awaken the people; we have to realize that our lives must change. The small power of a group of people may not be enough against a government, but we can influence others. Some defectors from North Korea have published books to tell the world the facts inside the country. They may have only a little power, but they are still trying to affect the world. What about us and our society? Take action, and keep our community from becoming the same as 1984.
THE (HONEST) TRUTH ABOUT DISHONESTY
How We Lie to Everyone—Especially Ourselves
To my teachers, collaborators, and students, for making research fun and exciting.
And to all the participants who took part in our experiments over the years—you are
the engine of this research, and I am deeply grateful for all your help.
INTRODUCTION: Why Is Dishonesty So Interesting?
From Enron to our own misbehaviors . . . A fascination with cheating . . . Becker’s parking problem
and the birth of rational crime . . . Elderly volunteers and petty thieves . . . Why behavioral
economics and dishonesty?
CHAPTER 1: Testing the Simple Model of Rational Crime (SMORC)
Get rich cheating . . . Tempting people to cheat, the measure of dishonesty . . . What we know
versus what we think we know about dishonesty . . . Cheating when we can’t get caught . . . Market
vendors, cab drivers, and cheating the blind . . . Fishing and tall tales . . . Striking a balance
between truth and cheating.
CHAPTER 2: Fun with the Fudge Factor
Why some things are easier to steal than others . . . How companies pave the way for dishonesty . .
. Token dishonesty . . . How pledges, commandments, honor codes, and paying with cash can
support honesty . . . But lock your doors just the same . . . And a bit about religion, the IRS, and
CHAPTER 2B: Golf
Man versus himself . . . A four-inch lie . . . Whether ’tis nobler in the mind to take the mulligan . . .
CHAPTER 3: Blinded by Our Own Motivations
Craze lines, tattoos, and how conflicts of interest distort our perception . . . How favors affect our
choices . . . Why full disclosure and other policies aren’t fully effective . . . Imagining less
conflicted compensation . . . Disclosure and regulation are the answers—or not.
CHAPTER 4: Why We Blow It When We’re Tired
Why we don’t binge in the morning . . . Willpower: another limited resource . . . Judgment on an
empty stomach . . . How flexing our cognitive and moral muscles can make us more dishonest . . .
Self-depletion and a rational theory of temptation.
CHAPTER 5: Why Wearing Fakes Makes Us Cheat More
The secret language of shoes . . . From ermine to Armani and the importance of signaling . . . Do
knockoffs knock down our standards of honesty? . . . Can gateway fibs lead to monster lies? . . .
When “what the hell” wreaks havoc . . . There’s no such thing as one little white lie . . . Halting the
CHAPTER 6: Cheating Ourselves
Claws and peacock tails . . . When answer keys tell us what we already knew . . . Overly optimistic
IQ scores . . . The Center for Advanced Hindsight . . . Being Kubrick . . . War heroes and sports
heroes who let us down . . . Helping ourselves to a better self-image.
CHAPTER 7: Creativity and Dishonesty: We Are All Storytellers
The tales we tell ourselves and how we create stories we can believe . . . Why creative people are
better liars . . . Redrawing the lines until we see what we want . . . When irritation spurs us onward
. . . How thinking creatively can get us into trouble.
CHAPTER 8: Cheating as an Infection: How We Catch the Dishonesty Germ
Catching the cheating bug . . . One bad apple really does spoil the barrel (unless that apple goes to
the University of Pittsburgh) . . . How ambiguous rules + group dynamics = cultures of cheating . .
. A possible road to ethical health.
CHAPTER 9: Collaborative Cheating: Why Two Heads Aren’t Necessarily Better than One
Lessons from an ambiguous boss . . . All eyes are on you: observation and cheating . . . Working
together to cheat more? . . . Or keeping one another in line . . . Cheating charitably . . . Building
trust and taking liberties . . . Playing well with others.
CHAPTER 10: A Semioptimistic Ending: People Don’t Cheat Enough!
Cheer up! Why we should not be too depressed by this book . . . True crime . . . Cultural differences
in dishonesty . . . Politicians or bankers, who cheats more? . . . How can we improve our moral
List of Collaborators
Bibliography and Additional Readings
About the Author
Also by Dan Ariely
About the Publisher
Why Is Dishonesty So Interesting?
There’s one way to find out if a man is honest—ask him.
If he says “yes,” he is a crook.
—GROUCHO MARX
My interest in cheating was first ignited in 2002, just a few months after the collapse of Enron. I was
spending the week at some technology-related conference, and one night over drinks I got to meet
John Perry Barlow. I knew John as the erstwhile lyricist for the Grateful Dead, but during our chat I
discovered that he had also been working as a consultant for a few companies—including Enron.
In case you weren’t paying attention in 2001, the basic story of the fall of the Wall Street darling
went something like this: Through a series of creative accounting tricks—helped along by the blind
eye of consultants, rating agencies, the company’s board, and the now-defunct accounting firm Arthur
Andersen—Enron rose to great financial heights only to come crashing down when its actions could no
longer be concealed. Stockholders lost their investments, retirement plans evaporated, thousands of
employees lost their jobs, and the company went bankrupt.
While I was talking to John, I was especially interested in his description of his own wishful
blindness. Even though he consulted for Enron while the company was rapidly spinning out of control,
he said he hadn’t seen anything sinister going on. In fact, he had fully bought into the worldview that
Enron was an innovative leader of the new economy right up until the moment the story was all over
the headlines. Even more surprising, he also told me that once the information was out, he could not
believe that he failed to see the signs all along. That gave me pause. Before talking to John, I assumed
that the Enron disaster had basically been caused by its three sinister C-level architects (Jeffrey
Skilling, Kenneth Lay, and Andrew Fastow), who together had planned and executed a large-scale
accounting scheme. But here I was sitting with this guy, whom I liked and admired, who had his own
story of involvement with Enron, which was one of wishful blindness—not one of deliberate dishonesty.
It was, of course, possible that John and everyone else involved with Enron were deeply corrupt,
but I began to think that there may have been a different type of dishonesty at work—one that relates
more to wishful blindness and is practiced by people like John, you, and me. I started wondering if
the problem of dishonesty goes deeper than just a few bad apples and if this kind of wishful blindness
takes place in other companies as well.* I also wondered whether my friends and I would have
behaved similarly if we had been the ones consulting for Enron.
I became fascinated by the subject of cheating and dishonesty. Where does it come from? What is the
human capacity for both honesty and dishonesty? And, perhaps most important, is dishonesty largely
restricted to a few bad apples, or is it a more widespread problem? I realized that the answer to this
last question might dramatically change how we should try to deal with dishonesty: that is, if only a
few bad apples are responsible for most of the cheating in the world, we might easily be able to
remedy the problem. Human resources departments could screen for cheaters during the hiring
process or they could streamline the procedure for getting rid of people who prove to be dishonest
over time. But if the problem is not confined to a few outliers, that would mean that anyone could
behave dishonestly at work and at home—you and I included. And if we all have the potential to be
somewhat criminal, it is crucially important that we first understand how dishonesty operates and then
figure out ways to contain and control this aspect of our nature.
WHAT DO WE know about the causes of dishonesty? In rational economics, the prevailing notion of
cheating comes from the University of Chicago economist Gary Becker, a Nobel laureate who
suggested that people commit crimes based on a rational analysis of each situation. As Tim Harford
describes in his book The Logic of Life,* the birth of this theory was quite mundane. One day, Becker
was running late for a meeting and, thanks to a scarcity of legal parking, decided to park illegally and
risk a ticket. Becker contemplated his own thought process in this situation and noted that his decision
had been entirely a matter of weighing the conceivable cost—being caught, fined, and possibly towed
—against the benefit of getting to the meeting in time. He also noted that in weighing the costs versus
the benefits, there was no place for consideration of right or wrong; it was simply about the
comparison of possible positive and negative outcomes.
And thus the Simple Model of Rational Crime (SMORC) was born. According to this model, we all
think and behave pretty much as Becker did. Like your average mugger, we all seek our own
advantage as we make our way through the world. Whether we do this by robbing banks or writing
books is inconsequential to our rational calculations of costs and benefits. According to Becker’s
logic, if we’re short on cash and happen to drive by a convenience store, we quickly estimate how
much money is in the register, consider the likelihood that we might get caught, and imagine what
punishment might be in store for us if we are caught (obviously deducting possible time off for good
behavior). On the basis of this cost-benefit calculation, we then decide whether it is worth it to rob
the place or not. The essence of Becker’s theory is that decisions about honesty, like most other
decisions, are based on a cost-benefit analysis.
The SMORC is a very straightforward model of dishonesty, but the question is whether it accurately
describes people’s behavior in the real world. If it does, society has two clear means for dealing with
dishonesty. The first is to increase the probability of being caught (through hiring more police officers
and installing more surveillance cameras, for example). The second is to increase the magnitude of
punishment for people who get caught (for example, by imposing steeper prison sentences and fines).
This, my friends, is the SMORC, with its clear implications for law enforcement, punishment, and
dishonesty in general.
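The SMORC described above reduces to a single inequality: act dishonestly whenever the benefit exceeds the probability of being caught multiplied by the punishment. As a minimal sketch of that logic (the function name and the example numbers are invented for illustration, not taken from the book):

```python
def smorc_decision(benefit, p_caught, punishment):
    """Toy Simple Model of Rational Crime (SMORC): cheat if and only if
    the expected gain exceeds the expected cost. Note that right and
    wrong play no part whatsoever in the calculation."""
    expected_cost = p_caught * punishment
    return benefit > expected_cost

# Becker's illegal-parking decision: the meeting is worth more than the
# expected fine (0.2 * 50 = 10 < 100), so the model says park illegally.
worth_it = smorc_decision(benefit=100, p_caught=0.2, punishment=50)

# Society's two levers from the text map onto the two parameters:
# raise p_caught (more police, more cameras) or raise punishment
# (steeper fines and sentences) until the inequality flips.
deterred = not smorc_decision(benefit=100, p_caught=0.9, punishment=500)
```

Under this sketch, the only way to reduce dishonesty is to move one of those two dials, which is exactly the implication the chapter goes on to question.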
But what if the SMORC’s rather simple view of dishonesty is inaccurate or incomplete? If that is the
case, the standard approaches for overcoming dishonesty are going to be inefficient and insufficient.
If the SMORC is an imperfect model of the causes of dishonesty, then we need to first figure out what
forces really cause people to cheat and then apply this improved understanding to curb dishonesty.
That’s exactly what this book is about.*
Life in SMORCworld
Before we examine the forces that influence our honesty and dishonesty, let’s consider a quick thought
experiment. What would our lives be like if we all strictly adhered to the SMORC and considered
only the costs and benefits of our actions?
If we lived in a purely SMORC-based world, we would run a cost-benefit analysis on all of our
decisions and do what seems to be the most rational thing. We wouldn’t make decisions based on
emotions or trust, so we would most likely lock our wallets in a drawer when we stepped out of our
office for a minute. We would keep our cash under the mattress or lock it away in a hidden safe. We
would be unwilling to ask our neighbors to bring in our mail while we’re on vacation, fearing that
they would steal our belongings. We would watch our coworkers like hawks. There would be no
value in shaking hands as a form of agreement; legal contracts would be necessary for any transaction,
which would also mean that we would likely spend a substantial part of our time in legal battles and
litigation. We might decide not to have kids because when they grew up, they, too, would try to steal
everything we have, and living in our homes would give them plenty of opportunities to do so.
Sure, it is easy to see that people are not saints. We are far from perfect. But if you agree that
SMORCworld is not a correct picture of how we think and behave, nor an accurate description of our
daily lives, this thought experiment suggests that we don’t cheat and steal as much as we would if we
were perfectly rational and acted only in our own self-interest.
Calling All Art Enthusiasts
In April 2011, Ira Glass’s show, This American Life, featured a story about Dan Weiss, a young
college student who worked at the John F. Kennedy Center for the Performing Arts in Washington,
D.C. His job was to stock inventory for the center’s gift shops, where a sales force of three hundred
well-intentioned volunteers—mostly retirees who loved theater and music—sold the merchandise to visitors.
The gift shops were run like lemonade stands. There were no cash registers, just cash boxes into
which the volunteers deposited cash and from which they made change. The gift shops did a roaring
business, selling more than $400,000 worth of merchandise a year. But they had one big problem: of
that amount, about $150,000 disappeared each year.
When Dan was promoted to manager, he took on the task of catching the thief. He began to suspect
another young employee whose job it was to take the cash to the bank. He contacted the U.S. National
Park Service’s detective agency, and a detective helped him set up a sting operation. One February
night, they set the trap. Dan put marked bills into the cashbox and left. Then he and the detective hid in
the nearby bushes, waiting for the suspect. When the suspected staff member eventually left for the
night, they pounced on him and found some marked bills in his pocket. Case closed, right?
Not so, as it turned out. The young employee stole only $60 that night, and even after he was fired,
money and merchandise still went missing. Dan’s next step was to set up an inventory system with
price lists and sales records. He told the retirees to write down what was sold and what they
received, and—you guessed it—the thievery stopped. The problem was not a single thief but the
multitude of elderly, well-meaning, art-loving volunteers who would help themselves to the goods
and loose cash lying around.
The moral of this story is anything but uplifting. As Dan put it, “We are going to take things from
each other if we have a chance . . . many people need controls around them for them to do the right thing.”
THE PRIMARY PURPOSE of this book is to examine the rational cost-benefit forces that are
presumed to drive dishonest behavior but (as you will see) often do not, and the irrational forces that
we think don’t matter but often do. To wit, when a large amount of money goes missing, we usually
think it’s the work of one coldhearted criminal. But as we saw in the art lovers’ story, cheating is not
necessarily due to one guy doing a cost-benefit analysis and stealing a lot of money. Instead, it is more
often an outcome of many people who quietly justify taking a little bit of cash or a little bit of
merchandise over and over. In what follows we will explore the forces that spur us to cheat, and
we’ll take a closer look at what keeps us honest. We will discuss what makes dishonesty rear its ugly
head and how we cheat for our own benefit while maintaining a positive view of ourselves—a facet
of our behavior that enables much of our dishonesty.
Once we explore the basic tendencies that underlie dishonesty, we will turn to some experiments
that will help us discover the psychological and environmental forces that increase and decrease
honesty in our daily lives, including conflicts of interest, counterfeits, pledges, creativity, and simply
being tired. We’ll explore the social aspects of dishonesty too, including how others influence our
understanding of what’s right and wrong, and our capacity for cheating when others can benefit from
our dishonesty. Ultimately, we will attempt to understand how dishonesty works, how it depends on
the structure of our daily environment, and under what conditions we are likely to be more and less honest.
In addition to exploring the forces that shape dishonesty, one of the main practical benefits of the
behavioral economics approach is that it shows us the internal and environmental influences on our
behavior. Once we more clearly understand the forces that really drive us, we discover that we are
not helpless in the face of our human follies (dishonesty included), that we can restructure our
environment, and that by doing so we can achieve better behaviors and outcomes.
It’s my hope that the research I describe in the following chapters will help us understand what
causes our own dishonest behavior and point to some interesting ways to curb and limit it.
And now for the journey . . .
Testing the Simple Model of Rational Crime (SMORC)
Let me come right out and say it. They cheat. You cheat. And yes, I also cheat from time to time.
As a college professor, I try to mix things up a bit in order to keep my students interested in the
material. To this end, I occasionally invite interesting guest speakers to class, which is also a nice
way to reduce the time I spend on preparation. Basically, it’s a win-win-win situation for the guest
speaker, the class, and, of course, me.
For one of these “get out of teaching free” lectures, I invited a special guest to my behavioral
economics class. This clever, well-established man has a fine pedigree: before becoming a legendary
business consultant to prominent banks and CEOs, he had earned his juris doctor and, before that, a
bachelor’s at Princeton. “Over the past few years,” I told the class, “our distinguished guest has been
helping business elites achieve their dreams!”
With that introduction, the guest took the stage. He was forthright from the get-go. “Today I am going
to help you reach your dreams. Your dreams of MONEY!” he shouted with a thumping, Zumba-trainer
voice. “Do you guys want to make some MONEY?”
Everyone nodded and laughed, appreciating his enthusiastic, non-buttoned-down approach.
“Is anybody here rich?” he asked. “I know I am, but you college students aren’t. No, you are all
poor. But that’s going to change through the power of CHEATING! Let’s do it!”
He then recited the names of some infamous cheaters, from Genghis Khan through the present,
including a dozen CEOs, Alex Rodriguez, Bernie Madoff, Martha Stewart, and more. “You all want
to be like them,” he exhorted. “You want to have power and money! And all that can be yours through
cheating. Pay attention, and I will give you the secret!”
With that inspiring introduction, it was now time for a group exercise. He asked the students to close
their eyes and take three deep, cleansing breaths. “Imagine you have cheated and gotten your first ten
million dollars,” he said. “What will you do with this money? You! In the turquoise shirt!”
“A house,” said the student bashfully.
“A HOUSE? We rich people call that a MANSION. You?” he said, pointing to another student.
“To the private island you own? Perfect! When you make the kind of money that great cheaters make,
it changes your life. Is anyone here a foodie?”
A few students raised their hands.
“What about a meal made personally by Jacques Pépin? A wine tasting at Châteauneuf-du-Pape?
When you make enough money, you can live large forever. Just ask Donald Trump! Look, we all know
that for ten million dollars you would drive over your boyfriend or girlfriend. I am here to tell you
that it is okay and to release the handbrake for you!”
By that time most of the students were starting to realize that they were not dealing with a serious
role model. But having spent the last ten minutes sharing dreams about all the exciting things they
would do with their first $10 million, they were torn between the desire to be rich and the recognition
that cheating is morally wrong.
“I can sense your hesitation,” the lecturer said. “You must not let your emotions dictate your actions.
You must confront your fears through a cost-benefit analysis. What are the pros of getting rich by
cheating?” he asked.
“You get rich!” the students responded.
“That’s right. And what are the cons?”
“You get caught!”
“Ah,” said the lecturer, “There is a CHANCE you will get caught. BUT—here is the secret! Getting
caught cheating is not the same as getting punished for cheating. Look at Bernie Ebbers, the ex-CEO
of WorldCom. His lawyer whipped out the ‘Aw, shucks’ defense, saying that Ebbers simply did not
know what was going on. Or Jeff Skilling, former CEO of Enron, who famously wrote an e-mail
saying, ‘Shred the documents, they’re onto us.’ Skilling later testified that he was just being
‘sarcastic’! Now, if these defenses don’t work, you can always skip town to a country with no extradition laws!”
Slowly but surely, my guest lecturer—who in real life is a stand-up comedian named Jeff Kreisler
and the author of a satirical book called Get Rich Cheating—was making a hard case for
approaching financial decisions on a purely cost-benefit basis and paying no attention to moral
considerations. Listening to Jeff’s lecture, the students realized that from a perfectly rational
perspective, he was absolutely right. But at the same time they could not help but feel disturbed and
repulsed by his endorsement of cheating as the best path to success.
At the end of the class, I asked the students to think about the extent to which their own behavior fit
with the SMORC. “How many opportunities to cheat without getting caught do you have in a regular
day?” I asked them. “How many of these opportunities do you take? How much more cheating would
we see around us if everyone took Jeff’s cost-benefit approach?”
Setting Up the Testing Stage
Both Becker’s and Jeff’s approaches to dishonesty comprise three basic elements: (1) the
benefit that one stands to gain from the crime; (2) the probability of getting caught; and (3) the
expected punishment if one is caught. By comparing the first component (the gain) with the last two
components (the costs), the rational human being can determine whether committing a particular crime
is worth it or not.
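The cost-benefit comparison at the heart of the SMORC can be sketched as a simple expected-value rule. The function and the numbers below are purely illustrative (the book does not give concrete figures): cheat only if the gain exceeds the probability of getting caught times the expected punishment.

```python
def smorc_decision(gain, p_caught, punishment):
    """Simple Model of Rational Crime: cheat only when the benefit
    exceeds the expected cost (chance of capture times penalty)."""
    expected_cost = p_caught * punishment
    return gain > expected_cost

# Hypothetical numbers: a $1,000 gain, a 10 percent chance of being
# caught, and a $5,000 fine if caught. Expected cost is $500, so a
# purely rational SMORC agent would cheat here.
print(smorc_decision(1000, 0.10, 5000))  # True
```

Note that this toy rule predicts more cheating whenever the gain rises or the detection probability falls; the experiments described in this chapter test exactly those two predictions.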
Now, it could be that the SMORC is an accurate description of the way people make decisions
about honesty and cheating, but the uneasiness experienced by my students (and myself) with the
implications of the SMORC suggests that it’s worth digging a bit further to figure out what is really
going on. (The next few pages will describe in some detail the way we will measure cheating
throughout this book, so please pay attention.)
My colleagues Nina Mazar (a professor at the University of Toronto) and On Amir (a professor at
the University of California at San Diego) and I decided to take a closer look at how people cheat.
We posted announcements all over the MIT campus (where I was a professor at the time), offering
students a chance to earn up to $10 for about ten minutes of their time.* At the appointed time,
participants entered a room where they sat in chairs with small desks attached (the typical exam-style
setup). Next, each participant received a sheet of paper containing a series of twenty different
matrices (structured like the example you see on the next page) and was told that the task was to
find in each of these matrices two numbers that added up to 10 (we call this the matrix task, and we
will refer to it throughout much of this book). We also told them that they had five minutes to solve as
many of the twenty matrices as possible and that they would get paid 50 cents per correct answer (an
amount that varied depending on the experiment). Once the experimenter said, “Begin!” the
participants turned the page over and started solving these simple math problems as quickly as they could.
Below is a sample of what the sheet of paper looked like, with one matrix enlarged. How quickly
can you find the pair of numbers that adds up to 10?
Figure 1: Matrix Task
This was how the experiment started for all the participants, but what happened at the end of the five
minutes was different depending on the particular condition.
Imagine that you are in the control condition and you are hurrying to solve as many of the twenty
matrices as possible. After a minute passes, you’ve solved one. Two more minutes pass, and you’re
up to three. Then time is up, and you have four completed matrices. You’ve earned $2. You walk up to
the experimenter’s desk and hand her your solutions. After checking your answers, the experimenter
smiles approvingly. “Four solved,” she says and then counts out your earnings. “That’s it,” she says,
and you’re on your way. (The scores in this control condition gave us the actual level of performance
on this task.)
Now imagine you are in another setup, called the shredder condition, in which you have the
opportunity to cheat. This condition is similar to the control condition, except that after the five
minutes are up the experimenter tells you, “Now that you’ve finished, count the number of correct
answers, put your worksheet through the shredder at the back of the room, and then come to the front
of the room and tell me how many matrices you solved correctly.” If you were in this condition you
would dutifully count your answers, shred your worksheet, report your performance, get paid, and be
on your way.
If you were a participant in the shredder condition, what would you do? Would you cheat? And if
so, by how much?
With the results for both of these conditions, we could compare the performance in the control
condition, in which cheating was impossible, to the reported performance in the shredder condition,
in which cheating was possible. If the scores were the same, we would conclude that no cheating had
occurred. But if we saw that, statistically speaking, people performed “better” in the shredder
condition, then we could conclude that our participants overreported their performance (cheated)
when they had the opportunity to shred the evidence. And the degree of this group’s cheating would be
the difference in the number of matrices they claimed to have solved correctly above and beyond the
number of matrices participants actually solved correctly in the control condition.
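The measure of cheating described above is just the gap between the two group averages. As a minimal sketch (the per-participant scores here are made up; only the averages of four and six come from the text):

```python
from statistics import mean

# Hypothetical individual scores; the real raw data are not given.
control_solved   = [4, 3, 5, 4, 4]   # actually solved (cheating impossible)
shredder_claimed = [6, 5, 7, 6, 6]   # self-reported (worksheets shredded)

# Group-level cheating: claimed performance minus actual performance.
cheating = mean(shredder_claimed) - mean(control_solved)
print(cheating)  # 2.0 extra matrices, on average
```

Because the worksheets are destroyed, cheating can only be inferred at the group level, never pinned on any individual participant, which is part of what makes the design work.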
Perhaps somewhat unsurprisingly, we found that given the opportunity, many people did fudge their
score. In the control condition, participants solved on average four out of the twenty matrices.
Participants in the shredder condition claimed to have solved an average of six—two more than in the
control condition. And this overall increase did not result from a few individuals who claimed to
solve a lot more matrices, but from lots of people who cheated by just a little bit.
More Money, More Cheating?
With this basic quantification of dishonesty under our belts, Nina, On, and I were ready to investigate
what forces motivate people to cheat more and less. The SMORC tells us that people should cheat
more when they stand a chance of getting more money without being caught or punished. That sounds
both simple and intuitively appealing, so we decided to test it next. We set up another version of the
matrix experiment, only this time we varied the amount of money the participants would get for
solving each matrix correctly. Some participants were promised 25 cents per question; others were
promised 50 cents, $1, $2, or $5. At the highest level, we promised some participants a whopping
$10 for each correct answer. What do you think happened? Did the amount of cheating increase with
the amount of money offered?
Before I divulge the answer, I want to tell you about a related experiment. This time, rather than
taking the matrix test themselves, we asked another group of participants to guess how many answers
those in the shredder condition would claim to solve correctly at each level of payment. Their
predictions were that the claims of correctly solved matrices would increase as the amount of money
went up. Essentially, their intuitive theory was the same as the premise of the SMORC. But they were
wrong. It turned out that when we looked at the magnitude of cheating, our participants added two
questions to their scores on average, regardless of the amount of money they could make per question.
In fact, the amount of cheating was slightly lower when we promised our participants the highest
amount of $10 for each correct answer.
Why wouldn’t the level of cheating increase with the amount of money offered? Why was cheating
slightly lower at the highest level of payment? This insensitivity to the amount of reward suggests that
dishonesty is most likely not an outcome of a cost-benefit analysis. If it were, the increase in the
benefit (the amount of money offered) would lead to more cheating. And why was the level of
cheating lowest when the payment was greatest? I suspect that when the amount of money that the
participants could make per question was $10, it was harder for them to cheat and still feel good
about their own sense of integrity (we will come back to this later). At $10 per matrix, we’re not
talking about cheating on the level of, say, taking a pencil from the office. It’s more akin to taking
several boxes of pens, a stapler, and a ream of printer paper, which is much more difficult to ignore.
To Catch a Thief
Our next experiment looked at what might happen if participants felt that there was a higher
probability of getting caught cheating. Basically, we inserted the mental equivalent of a partially
operating security camera into the experiment.
We asked one group of participants to shred one half of their worksheet—which meant that if they
were dishonest, we might find some evidence of it. We asked a second group to shred the whole
worksheet, meaning that they could get off scot-free. Finally, we asked a third group to shred the whole
worksheet, leave the testing room, and pay themselves from a sizable bowl of money filled with more
than $100 in small bills and coins. In this self-paying condition, participants could not only cheat and
get away with it, but they could also help themselves to a lot of extra cash.
Again, we asked a different group to predict how many questions, on average, participants would
claim to solve correctly in each condition. Once again, they predicted that the human tendency for
dishonesty would follow the SMORC and that participants would claim to solve more matrices as the
probability of getting caught decreased.
What did we find? Once again, lots of people cheated, but just by a bit, and the level of cheating
was the same across all three conditions (shredding half, shredding all, shredding all and self-paying).
NOW, YOU MIGHT wonder if the participants in our experiments really believed that in our
experimental setting, they could cheat and not get caught. To make it clear that this was indeed the
case, Racheli Barkan (a professor at Ben-Gurion University of the Negev), Eynav Maharabani (a
master’s candidate working with Racheli), and I carried out another study where either Eynav or a
different research assistant, Tali, proctored the experiment. Eynav and Tali were similar in many
ways—but Eynav is noticeably blind, which meant that it was easier to cheat when she was in charge.
When it was time to pay themselves from the pile of money that was placed on the table in front of the
experimenter, participants could grab as much of the cash as they wanted and Eynav would not be
able to see them do so.
So did they cheat Eynav to a greater degree? They still took a bit more money than they deserved,
but they cheated just as much when Tali supervised the experiments as they did when Eynav was in charge.
These results suggest that the probability of getting caught doesn’t have a substantial influence on the
amount of cheating. Of course, I am not arguing that people are entirely uninfluenced by the likelihood
of being caught—after all, no one is going to steal a car when a policeman is standing nearby—but the
results show that getting caught does not have as great an influence as we tend to expect, and it
certainly did not play a role in our experiments.
YOU MIGHT BE wondering whether the participants in our experiments were using the following
logic: “If I cheat by only a few questions, no one will suspect me. But if I cheat by more than a small
amount, it may raise suspicion and someone might question me about it.”
We tested this idea in our next experiment. This time, we told half of the participants that the
average student in this experiment solves about four matrices (which was true). We told the other half
that the average student solves about eight matrices. Why did we do this? Because if the level of
cheating is based on the desire to avoid standing out, then our participants would cheat in both
conditions by a few matrices beyond what they believed was the average performance (meaning that
they would claim to solve around six matrices when they thought the average was four and about ten
matrices when they thought the average was eight).
So how did our participants behave when they expected others to solve more matrices? They were
not influenced even to a small degree by this knowledge. They cheated by about two extra answers
(they solved four and reported that they had solved six) regardless of whether they thought that others
solved on average four or eight matrices.
This result suggests that cheating is not driven by concerns about standing out. Rather, it shows that
our sense of our own morality is connected to the amount of cheating we feel comfortable with.
Essentially, we cheat up to the level that allows us to retain our self-image as reasonably honest individuals.
Into the Wild
Armed with this initial evidence against the SMORC, Racheli and I decided to get out of the lab and
venture into a more natural setting. We wanted to examine common situations that one might encounter
on any given day. And we wanted to test “real people” and not just students (though I have discovered
that students don’t like to be told that they are not real people). Another component missing from our
experimental paradigm up to that point was the opportunity for people to behave in positive and
benevolent ways. In our lab experiments, the best our participants could do was not cheat. But in
many real-life situations, people can exhibit behaviors that are not only neutral but are also charitable
and generous. With this added nuance in mind, we looked for situations that would let us test both the
negative and the positive sides of human nature.
IMAGINE A LARGE farmer’s market spanning the length of a street. The market is located in the
heart of Be’er Sheva, a town in southern Israel. It’s a hot day, and hundreds of merchants have set out
their wares in front of the stores that line both sides of the street. You can smell fresh herbs and sour
pickles, freshly baked bread and ripe strawberries, and your eyes wander over plates of olives and
cheese. The sound of merchants shouting praises of their goods surrounds you: “Rak ha yom!” (only
today), “Matok!” (sweet), “Bezol!” (cheap).
Eynav and Tali entered the market and headed in different directions, Eynav using a white cane to
navigate the market. Each of them approached a few vegetable vendors and asked each of the sellers
to pick out two kilos (about 4.5 pounds) of tomatoes for them while they went on another errand.
Once they made their request, they left for about ten minutes, returned to pick up their tomatoes, paid,
and left. From there they took the tomatoes to another vendor at the far end of the market who had
agreed to judge the quality of the tomatoes from each seller. By comparing the quality of the tomatoes
that were sold to Eynav and to Tali, we could figure out who got better produce and who got worse.
Did Eynav get a raw deal? Keep in mind that from a purely rational perspective, it would have
made sense for the seller to choose his worst-looking tomatoes for her. After all, she could not
possibly benefit from their aesthetic quality. A traditional economist from, say, the University of
Chicago might even argue that in an effort to maximize the social welfare of everyone involved (the
seller, Eynav, and the other consumers), the seller should have sold her the worst-looking tomatoes,
keeping the pretty ones for people who could also enjoy that aspect of the tomatoes. As it turned out,
the visual quality of the tomatoes chosen for Eynav was not worse and, in fact, was superior to those
chosen for Tali. The sellers went out of their way, and at some cost to their business, to choose
higher-quality produce for a blind customer.
WITH THOSE OPTIMISTIC results, we next turned to another profession that is often regarded with
great suspicion: cab drivers. In the taxi world, there is a popular stunt called “long hauling,” which is
the official term for taking passengers who don’t know their way around to their destination via a
lengthy detour, sometimes adding substantially to the fare. For example, a study of cab drivers in Las
Vegas found that some cabbies drive from McCarran International Airport to the Strip by going
through a tunnel to Interstate 215, which can amount to a fare of $92 for what should be a two-mile trip.
Given the reputation that cabbies have, one has to wonder whether they cheat in general and whether
they would be more likely to cheat those who cannot detect their cheating. In our next experiment we
asked Eynav and Tali to take a cab back and forth between the train station and Ben-Gurion
University of the Negev twenty times. The way the cabs on this particular route work is as follows: if
you have the driver activate the meter, the fare is around 25 NIS (about $7). However, there is a
customary flat rate of 20 NIS (about $5.50) if the meter is not activated. In our setup, both Eynav and
Tali always asked to have the meter activated. Sometimes drivers would tell the “amateur”
passengers that it would be cheaper not to activate the meter; regardless, both of them always insisted
on having the meter activated. At the end of the ride, Eynav and Tali asked the cab driver how much
they owed them, paid, left the cab, and waited a few minutes before taking another cab back to the
place they had just left.
Looking at the charges, we found that Eynav paid less than Tali, despite the fact that they both
insisted on paying by the meter. How could this be? One possibility was that the drivers had taken
Eynav on the shortest and cheapest route and had taken Tali for a longer ride. If that were the case, it
would mean that the drivers had not cheated Eynav but that they had cheated Tali to some degree. But
Eynav had a different account of the results. “I heard the cab drivers activate the meter when I asked
them to,” she told us, “but later, before we reached our final destination, I heard many of them turn the
meter off so that the fare would come out close to twenty NIS.” “That certainly never happened to
me,” Tali said. “They never turned off the meter, and I always ended up paying around twenty-five NIS.”
There are two important aspects to these results. First, it’s clear that the cab drivers did not perform
a cost-benefit analysis in order to optimize their earnings. If they had, they would have cheated Eynav
more by telling her that the meter reading was higher than it really was or by driving her around the
city for a bit. Second, the cab drivers did better than simply not cheat; they took Eynav’s interest into
account and sacrificed some of their own income for her benefit.
Clearly there’s a lot more going on here than Becker and standard economics would have us believe.
For starters, the finding that the level of dishonesty is not influenced to a large degree (to any degree
in our experiments) by the amount of money we stand to gain from being dishonest suggests that
dishonesty is not an outcome of simply considering the costs and benefits of dishonesty. Moreover, the
results showing that the level of dishonesty is unaltered by changes in the probability of being caught
makes it even less likely that dishonesty is rooted in a cost-benefit analysis. Finally, the fact that many
people cheat just a little when given the opportunity to do so suggests that the forces that govern
dishonesty are much more complex (and more interesting) than predicted by the SMORC.
What is going on here? I’d like to propose a theory that we will spend much of this book examining.
In a nutshell, the central thesis is that our behavior is driven by two opposing motivations. On one
hand, we want to view ourselves as honest, honorable people. We want to be able to look at
ourselves in the mirror and feel good about ourselves (psychologists call this ego motivation). On the
other hand, we want to benefit from cheating and get as much money as possible (this is the standard
financial motivation). Clearly these two motivations are in conflict. How can we secure the benefits
of cheating and at the same time still view ourselves as honest, wonderful people?
This is where our amazing cognitive flexibility comes into play. Thanks to this human skill, as long
as we cheat by only a little bit, we can benefit from cheating and still view ourselves as marvelous
human beings. This balancing act is the process of rationalization, and it is the basis of what we’ll
call the “fudge factor theory.”
To give you a better understanding of the fudge factor theory, think of the last time you calculated
your tax return. How did you make peace with the ambiguous and unclear decisions you had to make?
Would it be legitimate to write off a portion of your car repair as a business expense? If so, what
amount would you feel comfortable with? And what if you had a second car? I’m not talking about
justifying our decisions to the Internal Revenue Service (IRS); I’m talking about the way we are able
to justify our exaggerated level of tax deductions to ourselves.
Or let’s say you go out to a restaurant with friends and they ask you to explain a work project you’ve
been spending a lot of time on lately. Having done that, is the dinner now an acceptable business
expense? Probably not. But what if the meal occurred during a business trip or if you were hoping that
one of your dinner companions would become a client in the near future? If you have ever made
allowances of this sort, you too have been playing with the flexible boundaries of your ethics. In
short, I believe that all of us continuously try to identify the line where we can benefit from dishonesty
without damaging our own self-image. As Oscar Wilde once wrote, “Morality, like art, means
drawing a line somewhere.” The question is: where is the line?
I THINK JEROME K. Jerome got it right in his 1889 novel, Three Men in a Boat (to Say Nothing of
the Dog), in which he tells a story about one of the most famously lied-about topics on earth: fishing.
Here’s what he wrote:
I knew a young man once, he was a most conscientious fellow and, when he took to fly-fishing,
he determined never to exaggerate his hauls by more than twenty-five per cent.
“When I have caught forty fish,” said he, “then I will tell people that I have caught fifty, and so
on. But I will not lie any more than that, because it is sinful to lie.”
Although most people haven’t consciously figured out (much less announced) their acceptable rate
of lying like this young man, this overall approach seems to be quite accurate; each of us has a limit to
how much we can cheat before it becomes absolutely “sinful.”
Trying to figure out the inner workings of the fudge factor—the delicate balance between the
contradictory desires to maintain a positive self-image and to benefit from cheating—is what we are
going to turn our attention to next.
Fun with the Fudge Factor
Here’s a little joke for you:
Eight-year-old Jimmy comes home from school with a note from his teacher that says, “Jimmy stole
a pencil from the student sitting next to him.” Jimmy’s father is furious. He goes to great lengths to
lecture Jimmy and let him know how upset and disappointed he is, and he grounds the boy for two
weeks. “And just wait until your mother comes home!” he tells the boy ominously. Finally he
concludes, “Anyway, Jimmy, if you needed a pencil, why didn’t you just say something? Why didn’t
you simply ask? You know very well that I can bring you dozens of pencils from work.”
If we smirk at this joke, it’s because we recognize the complexity of human dishonesty that is
inherent to all of us. We realize that a boy stealing a pencil from a classmate is definitely grounds for
punishment, but we are willing to take many pencils from work without a second thought.
To Nina, On, and me, this little joke suggested the possibility that certain types of activities can
more easily loosen our moral standards. Perhaps, we thought, if we increased the psychological
distance between a dishonest act and its consequences, the fudge factor would increase and our
participants would cheat more. Of course, encouraging people to cheat more is not something we
want to promote in general. But for the purpose of studying and understanding cheating, we wanted to
see what kinds of situations and interventions might further loosen people’s moral standards.
To test this idea, we first tried a university version of the pencil joke: One day, I sneaked into an
MIT dorm and seeded many communal refrigerators with one of two tempting baits. In half of the
refrigerators, I placed six-packs of Coca-Cola; in the others, I slipped in a paper plate with six $1
bills on it. I went back from time to time to visit the refrigerators and see how my Cokes and money
were doing—measuring what, in scientific terms, we call the half-life of Coke and money.
As anyone who has been to a dorm can probably guess, within seventy-two hours all the Cokes
were gone, but what was particularly interesting was that no one touched the bills. Now, the students
could have taken a dollar bill, walked over to the nearby vending machine and gotten a Coke and
change, but no one did.
I must admit that this is not a great scientific experiment, since students often see cans of Coke in
their fridge, whereas discovering a plate with a few dollar bills on it is rather unusual. But this little
experiment suggests that we human beings are ready and willing to steal something that does not
explicitly reference monetary value—that is, something that lacks the face of a dead president.
However, we shy away from directly stealing money to an extent that would make even the most pious
Sunday school teacher proud. Similarly, we might take some paper from work to use in our home
printer, but it would be highly unlikely that we would ever take $3.50 from the petty-cash box, even if
we turned right around and used the money to buy paper for our home printer.
To look at the distance between money and its influence on dishonesty in a more controlled way, we
set up another version of the matrix experiment, this time including a condition where cheating was
one step removed from money. As in our previous experiments, participants in the shredder condition
had the opportunity to cheat by shredding their worksheets and lying about the number of matrices
they’d solved correctly. When the participants finished the task, they shredded their worksheet,
approached the experimenter, and said, “I solved X* matrices, please give me X dollars.”
The innovation in this experiment was the “token” condition. The token condition was similar to the
shredder condition, except that the participants were paid in plastic chips instead of dollars. In the
token condition, once participants finished shredding their worksheets, they approached the
experimenter and said, “I solved X matrices, please give me X tokens.” Once they received their
chips, they walked twelve feet to a nearby table, where they handed in their tokens and received cold, hard cash.
As it turned out, those who lied for tokens that a few seconds later became money cheated by about
twice as much as those who were lying directly for money. I have to confess that, although I had
suspected that participants in the token condition would cheat more, I was surprised by the increase in
cheating that came with being one small step removed from money. As it turns out, people are more
apt to be dishonest in the presence of nonmonetary objects—such as pencils and tokens—than actual money.
From all the research I have done over the years, the idea that worries me the most is that the more
cashless our society becomes, the more our moral compass slips. If being just one step removed from
money can increase cheating to such a degree, just imagine what can happen as we become an
increasingly cashless society. Could it be that stealing a credit card number is much less difficult from
a moral perspective than stealing cash from someone’s wallet? Of course, digital money (such as a
debit or credit card) has many advantages, but it might also separate us from the reality of our actions
to some degree. If being one step removed from money liberates people from their moral shackles,
what will happen as more and more banking is done online? What will happen to our personal and
social morality as financial products become more obscure and less recognizably related to money
(think, for example, about stock options, derivatives, and credit default swaps)?
Some Companies Already Know This!
As scientists, we took great care to carefully document, measure, and examine the influence of being
one step removed from money. But I suspect that some companies intuitively understand this principle
and use it to their advantage. Consider, for example, this letter that I received from a young consultant named Jonah:
Dear Dr. Ariely,
I graduated a few years ago with a BA degree in Economics from a prestigious college and
have been working at an economic consulting firm, which provides services to law firms.
The reason I decided to contact you is that I have been observing and participating in a
very well documented phenomenon of overstating billable hours by economic consultants. To
avoid sugar coating it, let’s call it cheating. From the most senior people all the way to the
lowest analyst, the incentive structure for consultants encourages cheating: no one checks to
see how much we bill for a given task; there are no clear guidelines as to what is acceptable;
and if we have the lowest billability among fellow analysts, we are the most likely to get axed.
These factors create the perfect environment for rampant cheating.
The lawyers themselves get a hefty cut of every hour we bill, so they don’t mind if we take
longer to finish a project. While lawyers do have some incentive to keep costs down to avoid
enraging clients, many of the analyses we perform are very difficult to evaluate. Lawyers
know this and seem to use it to their advantage. In effect, we are cheating on their behalf; we
get to keep our jobs and they get to keep an additional profit.
Here are some specific examples of how cheating is carried out in my company:
• A deadline was fast approaching and we were working extremely long hours. Budget
didn’t seem to be an issue and when I asked how much of my day I should bill, my boss (a
midlevel project manager) told me to take the total amount of time I was in the office and
subtract two hours, one for lunch and one for dinner. I said that I had taken a number of
other breaks while the server was running my programs and she said I could count that as
a mental health break that would promote higher productivity later.
• A good friend of mine in the office adamantly refused to overbill and consequently had
an overall billing rate that was about 20 percent lower than the average. I admire his
honesty, but when it was time to lay people off, he was the first to go. What kind of
message does that send to the rest of us?
• One person bills every hour he is monitoring his email for a project, whether or not he
receives any work to do. He is “on-call,” he says.
• Another guy often works from home and seems to bill a lot, but when he is in the office
he never seems to have any work to do.
These kinds of examples go on and on. There is no doubt that I am complicit in this
behavior, but seeing it more clearly makes me want to fix the problems. Do you have any
advice? What would you do in my situation?
Unfortunately, the problems Jonah noted are commonplace, and they are a direct outcome of the way
we think about our own morality. Here is another way to think about this issue: One morning I
discovered that someone had broken the window of my car and stolen my portable GPS system.
Certainly, I was very annoyed, but in terms of its economic impact on my financial future, this crime
had a very small effect. On the other hand, think about how much my lawyers, stockbrokers, mutual
fund managers, insurance agents, and others probably take from me (and all of us) over the years by
slightly overcharging, adding hidden fees, and so on. Each of these actions by itself is probably not
very financially significant, but together they add up to much more than a few navigation devices. At
the same time, I suspect that unlike the person who took my GPS, those white-collar transgressors
think of themselves as highly moral people because their actions are relatively small and, most
important, several steps removed from my pocket.
The good news is that once we understand how our dishonesty increases when we are one or more
steps removed from money, we can try to clarify and emphasize the links between our actions and the
people they can affect. At the same time, we can try to shorten the distance between our actions and
the money in question. By taking such steps, we can become more cognizant of the consequences of
our actions and, with that awareness, increase our honesty.
LESSONS FROM LOCKSMITHS
Not too long ago, one of my students named Peter told me a story that captures our misguided
efforts to decrease dishonesty rather nicely.
One day, Peter locked himself out of his house, so he called around to find a locksmith. It
took him a while to find one who was certified by the city to unlock doors. The locksmith
finally pulled up in his truck and picked the lock in about a minute.
“I was amazed at how quickly and easily this guy was able to open the door,” Peter told me.
Then he passed on a little lesson in morality he learned from the locksmith that day.
In response to Peter’s amazement, the locksmith told Peter that locks are on doors only to
keep honest people honest. “One percent of people will always be honest and never steal,”
the locksmith said. “Another one percent will always be dishonest and always try to pick your
lock and steal your television. And the rest will be honest as long as the conditions are right—
but if they are tempted enough, they’ll be dishonest too. Locks won’t protect you from the
thieves, who can get in your house if they really want to. They will only protect you from the
mostly honest people who might be tempted to try your door if it had no lock.”
After reflecting on these observations, I came away thinking that the locksmith was
probably right. It’s not that 98 percent of people are immoral or will cheat anytime the
opportunity arises; it’s more likely that most of us need little reminders to keep ourselves on
the right path.
How to Get People to Cheat Less
Now that we had figured out how the fudge factor works and how to expand it, as our next step we
wanted to figure out whether we could decrease the fudge factor and get people to cheat less. This
idea, too, was spawned by a little joke:
A visibly upset man goes to see his rabbi one day and says, “Rabbi, you won’t believe what
happened to me! Last week, someone stole my bicycle from synagogue!”
The rabbi is deeply upset by this, but after thinking for a moment, he offers a solution: “Next week
come to services, sit in the front row, and when we recite the Ten Commandments, turn around and
look at the people behind you. And when we get to ‘Thou shalt not steal,’ see who can’t look you in
the eyes and that’s your guy.” The rabbi is very pleased with his suggestion, and so is the man.
At the next service, the rabbi is very curious to learn whether his advice panned out. He waits for
the man by the doors of the synagogue, and asks him, “So, did it work?”
“Like a charm,” the man answers. “The moment we got to ‘Thou shalt not commit adultery,’ I
remembered where I left my bike.”
What this little joke suggests is that our memory and awareness of moral codes (such as the Ten
Commandments) might have an effect on how we view our own behavior.
Inspired by the lesson behind this joke, Nina, On, and I ran an experiment at the University of
California, Los Angeles (UCLA). We took a group of 450 participants and split them into two groups.
We asked half of them to try to recall the Ten Commandments and then tempted them to cheat on our
matrix task. We asked the other half to try to recall ten books they had read in high school before
setting them loose on the matrices and the opportunity to cheat. Among the group who recalled the ten
books, we saw the typical widespread but moderate cheating. On the other hand, in the group that was
asked to recall the Ten Commandments, we observed no cheating whatsoever. And that was despite
the fact that no one in the group was able to recall all ten.
This result was very intriguing. It seemed that merely trying to recall moral standards was enough to
improve moral behavior. In another attempt to test this effect, we asked a group of self-declared
atheists to swear on a Bible and then gave them the opportunity to claim extra earnings on the matrix
task. What did the atheists do? They did not stray from the straight-and-narrow path.
A few years ago I received a letter from a woman named Rhonda who attended the University
of California at Berkeley. She told me about a problem she’d had in her house and how a little
ethical reminder helped her solve it.
She was living near campus with several other people—none of whom knew one another.
When the cleaning people came each weekend, they left several rolls of toilet paper in each of
the two bathrooms. However, by Monday all the toilet paper would be gone. It was a classic
tragedy-of-the-commons situation: because some people hoarded the toilet paper and took
more than their fair share, the public resource was destroyed for everyone else.
After reading about the Ten Commandments experiment on my blog, Rhonda put a note in
one of the bathrooms asking people not to remove toilet paper, as it was a shared commodity.
To her great satisfaction, one roll reappeared in a few hours, and another the next day. In the
other note-free bathroom, however, there was no toilet paper until the following weekend,
when the cleaning people returned.
This little experiment demonstrates how effective small reminders can be in helping us
maintain our ethical standards and, in this case, a fully stocked bathroom.
These experiments with moral reminders suggest that our willingness and tendency to cheat could be
diminished if we are given reminders of ethical standards. But although using the Ten Commandments
and the Bible as honesty-building mechanisms might be helpful, introducing religious tenets into
society on a broader basis as a means to reduce cheating is not very practical (not to mention the fact
that doing so would violate the separation of church and state). So we began to think of more general,
practical, and secular ways to shrink the fudge factor, which led us to test the honor codes that many
universities already use.
To discover whether honor codes work, we asked a group of MIT and Yale students to sign such a
code just before giving half of them a chance to cheat on the matrix tasks. The statement read, “I
understand that this experiment falls under the guidelines of the MIT/Yale honor code.” The students
who were not asked to sign cheated a little bit, but the MIT and Yale students who signed this
statement did not cheat at all. And that was despite the fact that neither university has an honor code
(somewhat like the effect that swearing on the Bible had on the self-declared atheists).
We found that an honor code worked in universities that don’t have an honor code, but what about
universities that have a strong honor code? Would their students cheat less all the time? Or would they
cheat less only when they signed the honor code? Luckily, at the time I was spending some time at the
Institute for Advanced Study at Princeton University, which was a great petri dish in which to test this idea.
Princeton University has a rigorous honor system that’s been around since 1893. Incoming freshmen
receive a copy of the Honor Code Constitution and a letter from the Honor Committee about the honor
system, which they must sign before they can matriculate. They also attend mandatory talks about the
importance of the Honor Code during their first week of school. Following the lectures, the incoming
Princetonians further discuss the system with their dorm advising group. As if that weren’t enough,
one of the campus music groups, the Triangle Club, performs its “Honor Code Song” for the incoming freshmen.
For the rest of their time at Princeton, students are repeatedly reminded of the honor code: they sign
an honor code at the end of every paper they submit (“This paper represents my own work in
accordance with University regulations”). They sign another pledge for every exam, test, or quiz (“I
pledge my honor that I have not violated the honor code during this examination”), and they receive
biannual reminder e-mails from the Honor Committee.
To see if Princeton’s crash course on morality has a long-term effect, I waited two weeks after the
freshmen finished their ethics training before tempting them to cheat—giving them the same
opportunities as the students at MIT and Yale (which have neither an honor code nor a weeklong
course on academic honesty). Were the Princeton students, still relatively fresh from their immersion
in the honor code, more honest when they completed the matrix task?
Sadly, they were not. When the Princeton students were asked to sign the honor code, they did not
cheat at all (but neither did the MIT or Yale students). However, when they were not asked to sign the
honor code, they cheated just as much as their counterparts at MIT and Yale. It seems that the crash
course, the propaganda on morality, and the existence of an honor code did not have a lasting
influence on the moral fiber of the Princetonians.
These results are both depressing and promising. On the depressing side, it seems that it is very
difficult to alter our behavior so that we become more ethical and that a crash course on morality will
not suffice. (I suspect that this ineffectiveness also applies to much of the ethics training that takes
place in businesses, universities, and business schools.) More generally, the results suggest that it’s
quite a challenge to create a long-term cultural change when it comes to ethics.
On the positive side, it seems that when we are simply reminded of ethical standards, we behave
more honorably. Even better, we discovered that the “sign here” honor code method works both when
there is a clear and substantial cost for dishonesty (which, in the case of Princeton, can entail
expulsion) and when there is no specific cost (as at MIT and Yale). The good news is that people
seem to want to be honest, which suggests that it might be wise to incorporate moral reminders into
situations that tempt us to be dishonest.*
ONE PROFESSOR AT Middle Tennessee State University got so fed up with the cheating among his
MBA students that he decided to employ a more drastic honor code. Inspired by our Ten
Commandments experiment and its effect on honesty, Thomas Tang asked his students to sign an honor
code stating that they would not cheat on an exam. The pledge also stated that they “would be sorry
for the rest of their lives and go to Hell” if they cheated.
The students, who did not necessarily believe in Hell or agree that they were going there, were
outraged. The pledge became very controversial, and, perhaps unsurprisingly, Tang caught a lot of
heat for his effort (he eventually had to revert to the old, Hell-free pledge).
Still, I imagine that in its short existence, this extreme version of the honor code had quite an effect
on the students. I also think the students’ outrage indicates how effective this type of pledge can be.
The future businessmen and women must have felt that the stakes were very high, or they would not
have cared so much. Imagine yourself confronted by such a pledge. How comfortable would you feel
signing it? Would signing it influence your behavior? What if you had to sign it just before filling out
your expense reports?
The possibility of using religious symbols as a way to increase honesty has not escaped
religious scholars. There is a story in the Talmud about a religious man who becomes
desperate for sex and goes to a prostitute. His religion wouldn’t condone this, of course, but at
the time he feels that he has more pressing needs. Once alone with the prostitute, he begins to
undress. As he takes off his shirt, he sees his tzitzit, an undergarment with four pieces of
knotted fringe. Seeing the tzitzit reminds him of the mitzvoth (religious obligations), and he
quickly turns around and leaves the room without violating his religious standards.
Adventures with the IRS
Using honor codes to curb cheating at a university is one thing, but would moral reminders of this type
also work for other types of cheating and in nonacademic environments? Could they help prevent
cheating on, say, tax-reporting and insurance claims? That is what Lisa Shu (a PhD student at Harvard
University), Nina Mazar, Francesca Gino (a professor at Harvard University), Max Bazerman (a
professor at Harvard University), and I set out to test.
We started by restructuring our standard matrix experiment to look a bit like tax reporting. After they
finished solving and shredding the matrix task, we asked participants to write down the number of
questions that they had solved correctly on a form we modeled after the basic IRS 1040EZ tax form.
To make it feel even more as if they were working with a real tax form, it was stated clearly on the
form that their income would be taxed at a rate of 20 percent. In the first section of the form, the
participants were asked to report their “income” (the number of matrices they had solved correctly).
Next, the form included a section for travel expenses, where participants could be reimbursed at a
rate of 10 cents per minute of travel time (up to two hours, or $12) and for the direct cost of their
transportation (up to another $12). This part of the payment was tax exempt (like a business expense).
The participants were then asked to add up all the numbers and come up with their final net payment.
There were two conditions in this experiment: Some of the participants filled out the entire form and
then signed it at the bottom, as is typically done with official forms. In this condition, the signature
acted as verification of the information on the form. In the second condition, participants signed the
form first and only then filled it out. That was our “moral reminder” condition.
What did we find? The participants in the sign-at-the-end condition cheated by adding about four
extra matrices to their score. And what about those who signed at the top? When the signature acted
as a moral reminder, participants claimed only one extra matrix. I am not sure how you feel about
“only” one added matrix—after all, it is still cheating—but given that the one difference between
these two conditions was the location of the signature line, I see this outcome as a promising way to reduce dishonesty.
Our version of the tax form also allowed us to look at the requests for travel reimbursements. Now,
we did not know how much time the participants really spent traveling, but if we assumed that due to
randomization, the average amount of travel time was basically the same in both conditions, we could
see in which condition participants claimed higher travel expenses. What we saw was that the amount
of requests for travel reimbursement followed the same pattern: Those in the signature-at-the-bottom
condition claimed travel expenses averaging $9.62, while those in the moral reminder (signature-at-the-top) condition claimed that they had travel expenses averaging $5.27.
ARMED WITH OUR evidence that when people sign their names to some kind of pledge, it puts them
into a more honest disposition (at least temporarily), we approached the IRS, thinking that Uncle Sam
would be glad to hear of ways to boost tax revenues. The interaction with the IRS went something like this:
ME: By the time taxpayers finish entering all the data onto the form, it is too late. The cheating
is done and over with, and no one will say, “Oh, I need to sign this thing, let me go back and
give honest answers.” You see? If people sign before they enter any data onto the form, they
cheat less. What you need is a signature at the top of the form, and this will remind everyone
that they are supposed to be telling the truth.
IRS: Yes, that’s interesting. But it would be illegal to ask people to sign at the top of the form.
The signature needs to verify the accuracy of the information provided.
ME: How about asking people to sign twice? Once at the top and once at the bottom? That way,
the top signature will act as a pledge—reminding people of their patriotism, moral fiber,
mother, the flag, homemade apple pie—and the signature at the bottom would be for verification.
IRS: Well, that would be confusing.
ME: Have you looked at the tax code or the tax forms recently?
IRS: [No reaction.]
ME: How about this? What if the first item on the tax form asked if the taxpayer would like to
donate twenty-five dollars to a task force to fight corruption? Regardless of the particular
answer, the question will force people to contemplate their standing on honesty and its
importance for society! And if the taxpayer donates money to this task force, they not only
state an opinion, but they also put some money behind their decision, and now they might be
even more likely to follow their own example.
IRS: [Stony silence.]
ME: This approach may have another interesting benefit: You could flag the taxpayers who
decide not to donate to the task force and audit them!
IRS: Do you really want to talk about audits?*
Despite the reaction from the IRS, we were not entirely discouraged, and continued to look for other
opportunities to test our “sign first” idea. We were finally (moderately) successful when we
approached a large insurance company. The company confirmed our already substantiated theory that
most people cheat, but only by a little bit. They told us that they suspect that very few people cheat
flagrantly (committing arson, faking a robbery, and so on) but that many people who undergo a loss of
property seem comfortable exaggerating their loss by 10 to 15 percent. A 32-inch television becomes
40 inches, an 18k necklace becomes 22k, and so on.
I went to their headquarters and got to spend the day with the top folks at this company, trying to
come up with ways to decrease dishonest reporting on insurance claims. We came up with lots of
ideas. For instance, what if people had to declare their losses in highly concrete terms and provide
more specific details (where and when they bought the items) in order to allow less moral flexibility?
Or if a couple lost their house in a flood, what if they had to agree on what was lost (although as we
will see in chapter 8, “Cheating as an Infection,” and chapter 9, “Collaborative Cheating,” this
particular idea might backfire). What if we played religious music when people were on hold? And
of course, what if people had to sign at the top of the claim form or even next to each reported item?
As is the way with such large companies, the people I met with took the ideas to their lawyers. We
waited six months and then finally heard from the lawyers—who said that they were not willing to let
us try any of these approaches.
A few days later, my contact person at the insurance company called me and apologized for not
being able to try any of our ideas. He also told me that there was one relatively unimportant
automobile insurance form that we could use for an experiment. The form asked people to record
their current odometer reading so that the insurance company could calculate how many miles they
had driven the previous year. Naturally, people who want their premium to be lower (I can think of
many) might be tempted to lie and underreport the actual number of miles they drove.
The insurance company gave us twenty thousand forms, and we used them to test our sign-at-the-top
versus the sign-at-the-bottom idea. We kept half of the forms with the “I promise that the information I
am providing is true” statement and signature line on the bottom of the page. For the other half, we
moved the statement and signature line to the top. In all other respects, the two forms were identical.
We mailed the forms to twenty thousand customers and waited a while, and when we got the forms
back we were ready to compare the amount of driving reported on the two types of forms. What did we find?
When we estimated the amount of driving that took place over the last year, those who signed the
form first appeared to have driven on average 26,100 miles, while those who signed at the end of the
form appeared to have driven on average 23,700 miles—a difference of about 2,400 miles. Now, we
don’t know how much those who signed at the top really drove, so we don’t know if they were
perfectly honest—but we do know that they cheated to a much lesser degree. It is also interesting to
note that this magnitude of decreased cheating (which was about 15 percent of the total amount of
driving reported) was similar to the percentage of dishonesty we found in our lab experiments.
TOGETHER, THESE EXPERIMENTAL results suggest that although we commonly think about
signatures as ways to verify information (and of course signatures can be very useful in fulfilling this
purpose), signatures at the top of forms could also act as a moral prophylactic.
COMPANIES ARE ALWAYS RATIONAL!
Many people believe that although individuals might behave irrationally from time to time,
large commercial companies that are run by professionals with boards of directors and
investors will always operate rationally. I never bought into this sentiment, and the more I
interact with companies, the more I find that they are actually far less rational than
individuals (and the more I am convinced that anyone who thinks that companies are rational
has never attended a corporate board meeting).
What do you think happened after we demonstrated to the insurance company that we could
improve honesty in mileage reporting using their forms? Do you think the company was eager
to amend their regular practices? They were not! Or do you think anyone asked (maybe
begged) us to experiment with the much more important problem of exaggerated losses on
property claims—a problem that they estimate costs the insurance industry $24 billion a year?
You guessed it—no one called.
When I ask people how we might reduce crime in society, they usually suggest putting more police on
the streets and applying harsher punishments for offenders. When I ask CEOs of companies what they
would do to solve the problem of internal theft, fraud, overclaiming on expense reports, and sabotage
(when employees do things to hurt their employer with no concrete benefit to themselves), they
usually suggest stricter oversight and tough no-tolerance policies. And when governments try to
decrease corruption or create regulations for more honest behavior, they often push for transparency
(also known as “sunshine policies”) as a cure for society’s ills. Of course, there is little evidence that
any of these solutions work.
By contrast, the experiments described here show that doing something as simple as recalling moral
standards at the time of temptation can work wonders to decrease dishonest behavior and potentially
prevent it altogether. This approach works even if those specific moral codes aren’t a part of our
personal belief system. In fact, it’s clear that moral reminders make it relatively easy to get people to
be more honest—at least for a short while. If your accountant were to ask you to sign an honor code a
moment before filing your taxes or if your insurance agent made you swear that you were telling the
whole truth about that water-damaged furniture, chances are that tax evasion and insurance fraud
would be less common.*
What are we to make of all this? First, we need to recognize that dishonesty is largely driven by a
person’s fudge factor and not by the SMORC. The fudge factor suggests that if we want to take a bite
out of crime, we need to find a way to change the way in which we are able to rationalize our actions.
When our ability to rationalize our selfish desires increases, so does our fudge factor, making us more
comfortable with our own misbehavior and cheating. The other side is true as well; when our ability
to rationalize our actions is reduced, our fudge factor shrinks, making us less comfortable with
misbehaving and cheating. When you consider the range of undesirable behaviors in the world from
this standpoint—from banking practices to backdating stock options, from defaulting on loans and
mortgages to cheating on taxes—there’s a lot more to honesty and dishonesty than rational calculations.
Of course, this means that understanding the mechanisms involved in dishonesty is more complex
and that deterring dishonesty is not an easy task—but it also means that uncovering the intricate
relationship between honesty and dishonesty will be a more exciting adventure.
The income tax has made more liars out of the American people than golf has.
—WILL ROGERS
There’s a scene in the movie The Legend of Bagger Vance where Matt Damon’s character, Rannulph
Junuh, is attempting to get his golf game back, but he makes a critical error and his ball ends up in the
woods. After making it back onto the green, he moves a twig that is just adjacent to the ball in order
to create a clear path for his shot. As he moves the twig the ball rolls a tiny bit to the side. According
to the rules, he has to count it as a stroke. At that point in the match, Junuh had gained enough of a lead
that if he ignored the rule, he could win, making a comeback and restoring his former glory. His
youthful assistant tearfully begs Junuh to ignore the movement of the ball. “It was an accident,” the
assistant says, “and it’s a stupid rule anyway. Plus, no one would ever know.” Junuh turns to him and
says stoically, “I will. And so will you.”
Even Junuh’s opponents suggest that most likely the ball just wobbled and returned to its former
position or that the light tricked Junuh into thinking that the ball moved. But Junuh insists that the ball
rolled away. The result is an honorably tied game.
That scene was inspired by a real event that occurred during the 1925 U.S. Open. The golfer, Bobby
Jones, noticed that his ball moved ever so slightly as he prepared for his shot in the rough. No one
saw, no one would ever have known, but he called the stroke on himself and went on to lose the
match. When people discovered what he’d done and reporters began to flock to him, Jones famously
asked them not to write about the event, saying “You might as well praise me for not robbing banks.”
This legendary moment of noble honesty is still referred to by those who love the game, and for good reason.
I think this scene—both cinematic and historic—captures the romantic ideal of golf. It’s a
demonstration of man versus himself, showing both his skill and nobility. Perhaps these characteristics of self-reliance, self-monitoring, and high moral standards are why
golf is often used as a metaphor for business ethics.