Public Integrity, 20: S89–S105, 2018
Copyright © American Society for Public Administration
ISSN: 1099-9922 print/1558-0989 online
Thinking about Thinking: Beyond Decision-Making
Rationalism and the Emergence of Behavioral Ethics
James S. Bowman
Florida State University
This article examines behavioral ethics propositions, cognitive distortions, and bias reduction
techniques. It then considers how practitioners and academicians in the public administration profession have responded to this nascent, intriguing field. The purpose is to review critical components of
behavioralism and to examine the present status of behavioral decision science in the practice
and study of public service ethics. How people think about thinking is important, because to address
problems effectively, there must be an understanding of why individuals believe what they believe,
how they think about issues, and why decisions are made.
Keywords: behavioral ethics research, biases, bounded ethicality, decision-making, motivated reasoning
“Only those who understand their own potential for unethical behavior can become the
decision makers that they aspire to be.” Banaji, Bazerman, and Chugh (2003)
People constantly make decisions, but often do not reflect on the process of decision-making
itself. Philosophical and economic rational choice models work well—except when they do not:
almost no one operates with perfect information and calculates all costs and benefits. Not only
do orthodox approaches place considerable cognitive demands on decision-makers, but they
also devalue unconscious, emotional, and tacit elements in making judgments. Further, while
certainly valuable, these decision models (e.g., Svara, 2015) frequently fail to delineate a
relationship between moral theorizing and ethical action. As Gazzaniga (2008, p. 148) observes,
“It has been hard to find any correlation between moral reasoning and proactive moral behavior.
In fact, in most studies, none has been found” (also consult Haidt, 2001, p. 817).
Instead of supposing that people are rational, behavioral ethics investigates whether or not
they act optimally. The data suggest that humans are far less rational than assumed (Ariely,
2008; Bazerman & Tenbrunsel, 2011; Brooks, 2011), as decisions are made by homo sapiens,
not homo economicus. Feelings, intuitions, and perceptions are at least as significant in affecting
behavior as logic, rationality, and calculation. Indeed, neuroscientific studies (e.g., Prehn &
Heekeren, 2014) reveal that judgment and conduct are guided more by emotion than by reason.
The brain is hardwired so that beliefs come first and explanations for belief second.
Rational decision theories, in short, are incomplete because they do not acknowledge
“innate psychological responses when faced with ethical dilemmas” (Bazerman & Tenbrunsel,
2011, p. 4). This implies that information-processing heuristics—mental shortcuts, hidden
biases, implicit motives, unacknowledged moral feelings, behavioral patterns, rules of thumb,
personal intuitions—influence conduct (Shao, Aquino, & Freeman, 2008). Many choices are
based, paradoxically, on subconscious thoughts and prejudices. They are often so powerful that
people act in ways inconsistent with their own values. The upshot: humans are not as good as
they think they are, and are apt to act unethically without being cognizant of that behavior
(Banaji et al., 2003).
Reflecting on metacognition (how people think about thinking) is vital for theoretical and
practical reasons. Thus, if unethical behavior is intentional, following the classical decision-making model, then attention should be directed to those who purposely commit improper acts.
If unethical conduct occurs unconsciously, however, then researchers should also identify
tendencies that negatively influence honest individuals. In addition to research concerns, it is
important to recognize that illusion and self-deception can jeopardize the formulation of policy
when independent analysis and thoughtful judgment are required. As Daniel Kahneman warns,
“[T]he failure of decisionmakers to grapple with the inner workings of their own minds” risks
avoidable mistakes that could have fateful consequences (quoted in Lewis, 2017, p. 247).
Accordingly, the purpose here is to (a) review critical components of behavioralism, and (b)
portray its present status in the study of public administration ethics (the need for the former will
become apparent upon discussion of the latter). The article assesses behavioral ethics precepts and
blind spots, de-biasing countermeasures, as well as the practitioner and scholarly reaction of the
public administration profession to behavioral ethics. The aim is a more nuanced comprehension
of why moral agents believe what they believe, how individuals think about issues, and why decisions are made, as well as an understanding of how public administration has received the behavioral sciences.
BEHAVIORAL ETHICS PROPOSITIONS
The field of behavioral economics, as supplemented by psychology and neuroscience, has
existed for nearly four decades. It has fundamentally challenged how the ethics of human
conduct is understood by painting a clearer picture of how men and women think about and
act on information. Yet the application of its findings to societal and ethical problems is largely
a phenomenon of the last ten years (Oliver, 2015; Thaler, 2017; see also Lewis, 2017); among
the many landmark scholarly and popular works that might be cited here are Ariely (2008),
Kahneman (2011), Shafir (2012), and Thaler and Sunstein (2009).
Controlled laboratory and field experiments in cognitive science, social psychology, and
neuroimaging have repeatedly shown how policymakers, average citizens, and students can
be wrong about their motivations, justifications for their beliefs, and the accuracy of their
memories. The choices that they make may be unconscious, because the brain is set on
automatic pilot: most of the stimuli it receives are processed instinctively, as a mere fraction
of that information makes it into consciousness. This makes brains efficient (not everything
need be attended to at once), but the effect is that judgments are made based on information
of which there is not full awareness (Bennett, 2014).
Investigators have developed a set of complementary principles that help explain decision-making. The resulting insights into systematic errors found in concealed biases, brain quirks, and behavioral traps highlight the underlying science behind dishonorable acts. Actions may be unintentional and based on inadequate knowledge, involve improper application of moral
tenets, and/or simply miss the ethical dimension of issues. Just as people are tricked by visual
illusions, they are fooled by illusions about how they make choices. The problem is that most
individuals are often not rational actors (Ariely, 2008; Mercier & Gorman, 2017); they are
fallible, normal human beings.
Pervasive and overlapping behavioral science claims are examined below. Primarily
descriptive rather than normative, the propositions show how cognitive heuristics, psychological tendencies, social and organizational pressures, and seemingly irrelevant situational factors
can account for the dishonesty of otherwise honest people. These ubiquitous phenomena have
serious consequences, as they distort knowledge, corrupt public discourse, and conceal
solutions to problems. The question may not be so much whether a decision-maker is moral,
but rather in what circumstances and to what degree. It is not only a matter of knowing what
is right, but also about thinking of the meaning and relevance of rectitude in a given situation
(Kaptein, 2013). Ethical issues are often embedded in decisions that appear to lack moral
ramifications. Due to space limitations, the list of comparable concepts that follows is simply
illustrative of some of the more compelling ones (see Samson, 2016a as well as Bowman &
West, 2018 for a fuller discussion and additional citations).
Bounded Rationality/Bounded Ethicality
Bounded rationality, a term coined by Herbert Simon, describes a “behavioral model [in which]
human rationality is very limited, very much bounded by the situation and by homo sapien
computational powers” (1983, p. 34). Individuals often do not have complete and accurate
information, and, even if they did, they would have a less-than-perfect capacity to process it and reach an optimal choice.
Sub-optimization can overlook significant facts, omit stakeholders, or give insufficient
attention to long-term consequences. Decision-maker rationality also may be affected by
self-interest, false assumptions, subliminal inclinations, innate responses to ethical circumstances, and failures in problem definition (Bazerman & Chugh, 2006). People can be blind
to the obvious and blind to their blindness (Bazerman & Tenbrunsel, 2011; Kahneman, 2011).
Bounded ethicality results in virtuous men and women making questionable decisions. The
pressures and demands facing managers, for example, can cause them to depend on habit,
instead of deliberation (Chugh, 2004), leading to a related principle: fast and slow thinking.
System 1/System 2 Thinking
System 1 is a rapid, intuitive way to process information and yields an instinctive response, or
gut reaction, which can be a useful guide for many decisions. Indeed, Hoomans (2015) reports
that adults make an incredible 35,000 conscious and non-conscious choices every day. In most
situations, there is just not enough time for another approach. System 1 is an effortless,
decision-without-thought default process for arriving at routine judgments in a quick, visceral,
and easy way. Generally, the fast system is efficient and good enough—the immediate, obvious
answer feels right—but it is also prone to prejudice and error. System 2 is a slow-paced,
thoughtful strategy that weighs the merits and demerits of an issue. Judgments made under
stress might rely on System 1 when System 2 is warranted, because the linkages from emotional
brain circuits to cognitive brain components are stronger than those from cognitive circuits to
emotional systems. As Harvard’s David Ropeik observes, the “architecture of the brain ensures
that we feel first and think second” (2012, p. 12).
Haidt (2001) finds that intuitions like hunches are the primary source of moral judgments,
as rational arguments are commonly used post hoc to justify determinations. Compromising
those moral positions, unlike “split-the-difference” economic issues, is often unimaginable.
Moreover, detecting error does not necessarily lead to change: the slow system’s reasoning
ability, in fact, may be invoked to generate rationalizations for decisions already made. Initial
views are also strengthened by “confirmation bias,” as people focus on data that reinforce
their pre-existing opinions. Political scientist Thorson (2016) points out that these “belief
echoes” persist even when misinformation is corrected. Individuals have a built-in tendency
to expect and see what they want to expect and see, so that fact-checkers with their
unwelcome facts are perceived as prejudiced—as prejudiced as those whose facts they check.
Confirmation bias is one of the most persistent errors the brain makes; its concern is less
about objective truth and more about avoiding cognitive dissonance. Dubious beliefs—tax
cuts proportionally increase revenue, healthcare “death panels,” widespread in-person voter
fraud, President Obama is Muslim and/or not a citizen, Iraq–Al-Qa‘ida September 11
collaboration, and global warming denial—exemplify how System 1 thinking interacts with
“motivated reasoning,” a related supposition.
Motivated Reasoning

Facts mean little if someone subscribes to a belief different from what the facts dictate. Motivated reasoning indicates that individuals are psychologically geared to maintain their existing
evaluations, independent of facts (Redlawsk, Civettini, & Emmerson, 2010). Opinions are based
on beliefs. People feel what they think, as “emotion assigns value to things and reason can only
make choices on the basis of those valuations” (Brooks, 2011, p. 21). As people become more informed, they hold such viewpoints—despite their utter implausibility—even more firmly, making them more likely to be wrong and resulting in a kind of invincible ignorance.
This “smart idiot” phenomenon explains why corrections to false information have a “backfire effect”: when presented with documented facts, some people become less likely to believe
them (Mooney, 2012; Sloman & Fernbach, 2017). Thus, if something or someone is disliked by
the “true believer,” contradictory information can be discounted to the point that the object of
dislike will be loathed as much or more than before. The smarter the person is, the greater
the ability to rationalize. When faith meets evidence, evidence does not have a chance.
Using this defense mechanism is psychologically easier than changing beliefs and admitting error. Once something is accepted as true, it is difficult to falsify the belief. If confidence is lost in society’s institutions and their experts, that may explain why many Americans are not affected by facts. This repudiation dangerously undermines the very idea of objective reality. The illogical denial of reality, for example, puts both unvaccinated
children and the community at risk. Motivated reasoning, like many behavioral principles,
operates at the subconscious level. Sincere claims can be made that one is not influenced by prejudice, even though the “unbiased opinion” is self-serving—a process facilitated by framing.

Framing
How issues are cast influences how people react to them (Kern & Chugh, 2009). Many workplace decisions, for instance, have both a business and an ethical dimension, and decision-makers may give primacy to one or the other. If the situation is seen as a business matter, it could
lead to “ethical fading,” and allow the emotional, impulsive “want” self to be dominant—
especially since people tend to accept the frame that is provided. Improper behavior may occur
instinctively, without deliberation, as moral concerns are set aside in pursuit of other goals like
efficiency. Tenbrunsel and Messick (2004, p. 114) use the term “ethical cleansing” to describe
how individuals “unconsciously transform ethical decisions into ones that are ethically clean.”
Further, “want” choices are made in the present, while “should” choices take place before
and after the decision. Ironically, this “want-should theory” separation of the two selves—
paralleling fast and slow thinking—can allow people to believe they are more virtuous than
is actually the case. There is also a tendency to choose and then engage in faded, faux moral
reasoning to justify the determination. Because human beings generally value morality, they
are motivated to forget the details of their unjust actions in a kind of “ethical amnesia.” In
contrast, when the situation is interpreted primarily in ethical terms, the thoughtful, deliberative
“should” self emerges, and fading and cleansing would not occur. Recognizing both the
business side and the ethical side of judgments is crucial, then, if one seeks to do things right
and do right things. In short, how a problem is framed affects susceptibility to the effects
already noted as well as those discussed next.
Bias and Decision-Making Errors
In considering additional cognitive distortions, mental shortcuts, and unconscious presumptions,
it is important to recognize that the brain interprets all experiences based on its model of the
world. People hold certain beliefs because they fit the sense they have made of their environment. As already seen, this process can be inaccurate—and convincing—in part because of
unacknowledged competing emotions and conflicting intuitions. Subliminal predispositions,
described below, materialize in differing circumstances, appear in many guises, interact in
pernicious ways to impair judgment, and are not readily susceptible to reasoned debate and evidence.
First, “status quo bias” takes place when someone is faced with choices, and the default
option operates (deciding not to decide) to live in the moment. This occurs: (a) by taking the
path of least resistance (“effort aversion”), (b) by valuing what one has (“present preference”)
because she owns it (“endowment effect” or “loss aversion”), and (c) by staying the course so that the person can hope to capitalize on “sunk costs” (Schmidt, 2016). It would be a serious error
to underestimate the difficulty in changing lifetime habits: the 600,000 patients who have heart
bypasses each year are told their lifestyle (diet, exercise, smoking) must change because surgery
is a temporary fix. Change or die? The answer is not a question of awareness or knowledge.
Instead, the reaction is the “deaf effect”: over 90% of patients choose death, as immediate
pleasures override long-term survival (Rainer & Geiger, 2011; see also Cotteleer & Murphy,
2015). Applied to the management of organizational change, loss aversion suggests that likely losers will fight a lot harder against reforms than potential winners will fight for them.
An additional word about present preference vis-à-vis global warming may be helpful:
people care a lot more about the tangible present than the distant future. Mental space is limited,
and brains emphasize immediate issues. Voters may indicate that they care about climate
change, but it is not a high priority. Policies dealing with climate change that have clear,
short-run benefits are more likely to generate support than those that focus on the long term
(Victor, Obradovich, & Amaya, 2017).
The second predilection, “preference falsification,” happens when someone suppresses what
they think in order to agree with what others think—“private truths, public lies” (Kuran, 1997). This phenomenon may prop up social stability; however, when norms erode, otherwise puzzling events occur, such as the seemingly sudden and widespread endorsement of marriage equality,
abrupt condemnation (and support) of the Confederate battle flag and monuments, the long overdue reckoning on sexual harassment, and unexpected enthusiasm for nativism in recent years.
Third, “overconfidence bias” assumes comprehension when comprehension does not exist.
This well-documented tendency shows individuals believe that they know far more than they
actually do. In fact, the least proficient often overestimate their abilities, as cupidity begets
confidence. The greatest enemy of knowledge is not ignorance, but the illusion of knowledge
(Sloman & Fernbach, 2017). Overstating one’s talents, for instance, famously clouds drivers’
judgments: most rate themselves better than average (the “Lake Wobegon effect”). The vast
majority of the population believes that it cannot multi-task well enough to drive safely, yet
many drivers admit to regular texting (Hogan Assessment Systems, 2017). It is not just that they
do not know what they do not know, but that they do not factor their limitations into their
decisions: they are quite certain about uncertainty. The problem is that the incompetent cannot
understand they are incompetent—a perilous phenomenon in management decision-making.
Fourth, when faced with a dilemma, people may predict that they will make an honorable
choice, but when actually faced with the dilemma, they do not. Overestimating their moral
capacity, they nonetheless still consider themselves to be ethical (“ethical ...