Unit #1: Knowledge, Belief, Bullshit, and Responsibility
WORKLOAD 57
COURSE READER #1
Table of Contents
Reading #1: Why People Believe Conspiracy Theories
Reading #2: Michael Shermer: The Pattern Behind Self-Deception
Reading #3: Why Facts Don’t Change Our Minds
Reading #4: The Internet Isn’t Making Us Dumber — It’s Making Us More ‘Meta-Ignorant’
Reading #5: Trump’s ‘Dangerous Disability’? It’s the Dunning-Kruger Effect
Reading #6: Why Expertise Matters
Reading #7: Climate Science Meets a Stubborn Obstacle: Students
Reading #8: Why Fake News Spreads Like Wildfire on Facebook
Reading #9: Plato predicted ‘Pizzagate’ (or rather, fake news more generally)
Reading #10: Facebook’s Role in Trump’s Win is Clear. No Matter What Mark Zuckerberg Says.
Reading #11: 4 Reasons Why People Ignore Facts and Believe Fake News
Reading #12: Ban the Term ‘Fake News’
Reading #13: Truth Matters
Reading #14: Scientists are Testing a “Vaccine” against Climate Change Denial
Reading #15: Solving the Problem of Fake News
Reading #1: Why People Believe Conspiracy Theories
December 11, 2016
After a man took a gun to a pizzeria to investigate a fake conspiracy theory, psychology professor Viren Swami of Anglia Ruskin
University in Cambridge, U.K. discusses why people are susceptible.
MICHEL MARTIN, HOST: It's been a week since a North Carolina man armed with a rifle traveled to a Washington, D.C., pizza
restaurant. He says he was investigating online claims that Hillary Clinton and her campaign manager John Podesta were
running a child sex trafficking ring from the basement of that restaurant. Now, these online claims are false and have been
repeatedly debunked, but it got us wondering what would drive a person to believe wild stories like this, especially when
there are so many facts that clearly disprove them.
So we called up Viren Swami, a professor of social psychology at Anglia Ruskin University in Cambridge in the United Kingdom.
The psychology of conspiracy theories is among his research interests, and I started by asking him if there's a profile of the
kind of person who tends to believe in conspiracy theories.
VIREN SWAMI: There are a number of psychological traits that we have found to be associated with belief in conspiracy
theories. I would suggest that profiling the conspiracy theorist is going to be quite difficult because, A, there are a range of
different conspiracy theories, and B, I think the conspiracy narrative itself is believed by so many people that to come to a
profile that fits everyone just wouldn't be possible.
MARTIN: How is this different from people who are just, say, bigots - you know, who just adhere to certain fixed beliefs about
certain people?
SWAMI: I think they can be separate. I think there are some times when they may be inherently linked. But I think they're very
different narratives, and they're very different belief systems. I think one of the things about conspiracy theories, particularly in the United States, is that nationally representative surveys suggest that up to about 50 percent of the
population believe in at least one conspiracy theory. And there aren't any clear ideological divides. So you find conspiracist
ideation among the left wing, among the middle ground, among the right wing.
Now, the content of the conspiracy theories might differ. So a common right-wing conspiracy theory would be the Obama
birther conspiracy theory, whereas a left-wing conspiracy theory might be this idea that the financial crisis was intentionally
caused in order to extend the power of the Federal Reserve.
So the content might change, but the prevalence of it doesn't seem to differ whether you're left-wing or right-wing. I think
some people who believe in conspiracy theories believe in those ideas because it restores a sense of agency. It gives them a
sense of power. It gives them a sense that they can do something about the world.
Now, one of the things we find quite consistently is that people who believe in conspiracy theories tend to be alienated, tend
to be divorced from mainstream politics.
MARTIN: The New York Times in reporting on this recounts a recent poll conducted by Fairleigh Dickinson University that says
that 63 percent of registered American voters believe in at least one political conspiracy theory.
Now, why might that be, particularly in a country like the United States where people really pride themselves on being
optimistic and can-do? Why would that be such a pervasive part of the character of our politics?
SWAMI: I think there are going to be lots of different explanations for that. I think one explanation would center on the basis
that a huge swathe of the American population feel disaffected, feel alienated. They don't feel like big politics represents
them.
One of the other things I think - as an outsider looking in at American politics, I think one of the big changes we've seen is the
use of a conspiracy narrative as a means of mobilizing people.
So Trump and the people around him are using a conspiracy narrative in my view to - not just to kind of have an argument
with people but actually to mobilize people, to use it as a way of getting people involved in a campaign. And I think that was
one of the big changes for me. Historically, it's incredibly rare to see that actually happen.
MARTIN: I'm wondering what your take is on the effect of the Internet on this propensity to believe conspiracy theories. I
think that one might assume that access to more information would attenuate this because people can easily find out that
what they think is true is not true or that there are countervailing facts. But it doesn't seem to work that way.
SWAMI: Human beings have a very natural tendency to take in information that fits their own perspective of the world. And
we tend to reject information or reject evidence that we disagree with. And we do that for a very simple reason. We don't like
it when we feel wrong. We don't like it when people tell us we're wrong because that damages our psychological well-being.
We don't like thinking that our view of the world, our perspective of the world is incorrect.
So what tends to happen is that we look for information; we look for evidence that fits what we already know or what we
already believe, and we try to avoid information or evidence that we either disagree with or that we know doesn't fit with our
perspective. And if someone comes along and says, here's the evidence, your natural tendency's actually to rehearse
arguments against that evidence.
MARTIN: So what does one do? I mean what does one do because people who are in the information-disseminating business
as you are and as we are, frankly, tend to believe that, you know, the facts matter and that this is important and that if you
continue to kind of repeat the truth, that that somehow drives out the false narrative. That doesn't seem to be the case, so is
there another way we should be thinking about this?
SWAMI: I think the first thing I would say is that we need to teach people and teach everyone how to be better critical
thinkers, how to use information, how to understand pieces of information and how to look at information and work out
whether it's good or bad information. That for me is the first step. But I don't think that's going to be enough.
I think if you kind of go along with this idea that conspiracy theories are more likely to emerge when people feel disaffected,
when people feel alienated, then the natural outcome of that - the natural answer to what we should do is that we should be
promoting greater democratic access. We should allow for everyone to be part of a democratic process in which they have a
say, in which they have a voice. And once you start to have that, I think you will start to see the conspiracy theories start to
diminish.
MARTIN: That's Viren Swami. He's a professor of social psychology at Anglia Ruskin University. That's based in Cambridge in
the United Kingdom. Thank you so much for speaking with us.
___________________________________________________________________________________________________
Reading #2: Michael Shermer: The Pattern Behind Self-Deception
TED2010 · 19:01 · Filmed Feb 2010
0:12 So since I was here last in '06, we discovered that global climate change is turning out to be a pretty serious issue, so we
covered that fairly extensively in Skeptic magazine. We investigate all kinds of scientific and quasi-scientific controversies, but
it turns out we don't have to worry about any of this because the world's going to end in 2012.
0:32 Another update: You will recall I introduced you guys to the Quadro Tracker. It's like a water dowsing device. It's just a
hollow piece of plastic with an antenna that swivels around. And you walk around, and it points to things. Like if you're looking
for marijuana in students' lockers, it'll point right to somebody. Oh, sorry. (Laughter) This particular one that was given to me
finds golf balls, especially if you're at a golf course and you check under enough bushes. Well, under the category of "What's
the harm of silly stuff like this?" this device, the ADE 651, was sold to the Iraqi government for 40,000 dollars apiece. It's just
like this one, completely worthless, in which it allegedly worked by "electrostatic magnetic ion attraction," which translates to
"pseudoscientific baloney" — would be the nice word — in which you string together a bunch of words that sound good, but it
does absolutely nothing. In this case, at trespass points, allowing people to go through because your little tracker device said
they were okay, actually cost lives. So there is a danger to pseudoscience, in believing in this sort of thing.
1:45 So what I want to talk about today is belief. I want to believe, and you do too. And in fact, I think my thesis here is that
belief is the natural state of things. It is the default option. We just believe. We believe all sorts of things. Belief is natural;
disbelief, skepticism, science, is not natural. It's more difficult. It's uncomfortable to not believe things. So like Fox Mulder on
"X-Files," who wants to believe in UFOs? Well, we all do, and the reason for that is because we have a belief engine in our
brains. Essentially, we are pattern-seeking primates. We connect the dots: A is connected to B; B is connected to C. And
sometimes A really is connected to B, and that's called association learning.
2:30 We find patterns, we make those connections, whether it's Pavlov's dog here associating the sound of the bell with the
food, and then he salivates to the sound of the bell, or whether it's a Skinnerian rat, in which he's having an association
between his behavior and a reward for it, and therefore he repeats the behavior. In fact, what Skinner discovered is that, if
you put a pigeon in a box like this, and he has to press one of these two keys, and he tries to figure out what the pattern is,
and you give him a little reward in the hopper box there — if you just randomly assign rewards such that there is no pattern,
they will figure out any kind of pattern. And whatever they were doing just before they got the reward, they repeat that
particular pattern. Sometimes it was even spinning around twice counterclockwise, once clockwise and peck the key twice.
And that's called superstition, and that, I'm afraid, we will always have with us.
3:22 I call this process "patternicity" — that is, the tendency to find meaningful patterns in both meaningful and meaningless
noise. When we do this process, we make two types of errors. A Type I error, or false positive, is believing a pattern is real
when it's not. Our second type of error is a false negative. A Type II error is not believing a pattern is real when it is. So let's do
a thought experiment. You are a hominid three million years ago walking on the plains of Africa. Your name is Lucy, okay? And
you hear a rustle in the grass. Is it a dangerous predator, or is it just the wind? Your next decision could be the most important
one of your life. Well, if you think that the rustle in the grass is a dangerous predator and it turns out it's just the wind, you've
made an error in cognition, made a Type I error, false positive. But no harm. You just move away. You're more cautious. You're
more vigilant. On the other hand, if you believe that the rustle in the grass is just the wind, and it turns out it's a dangerous
predator, you're lunch. You've just won a Darwin award. You've been taken out of the gene pool.
4:27 Now the problem here is that patternicities will occur whenever the cost of making a Type I error is less than the cost of
making a Type II error. This is the only equation in the talk, by the way. We have a pattern detection problem, in that assessing the difference between a Type I and a Type II error is highly problematic, especially in split-second, life-and-death situations.
So the default position is just: Believe all patterns are real — All rustles in the grass are dangerous predators and not just the
wind. And so I think that we evolved ... there was a natural selection for the propensity for our belief engines, our pattern-seeking brain processes, to always find meaningful patterns and infuse them with these sort of predatory or intentional
agencies that I'll come back to.
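Shermer's cost asymmetry can be made concrete with a few lines of arithmetic. The Python sketch below is illustrative only; the probability and costs are assumed numbers chosen for the example, not figures from the talk.

    # Minimal sketch of Shermer's "only equation": believe the pattern
    # whenever the expected cost of a false alarm (Type I) is lower than
    # the expected cost of a miss (Type II). All numbers are assumptions.

    p_predator = 0.05         # chance the rustle really is a predator
    cost_false_alarm = 1.0    # Type I: flee from mere wind (wasted effort)
    cost_miss = 1000.0        # Type II: ignore a real predator (fatal)

    expected_cost_believe = (1 - p_predator) * cost_false_alarm    # 0.95
    expected_cost_disbelieve = p_predator * cost_miss              # 50.0

    # Even with predators rare, believing every rustle is the cheaper policy:
    print("believe" if expected_cost_believe < expected_cost_disbelieve else "disbelieve")

On these assumed numbers, treating every rustle as a predator costs 0.95 per rustle on average, versus 50 for skepticism, which is why "believe all patterns are real" wins as the default.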
5:10 So for example, what do you see here? It's a horse head, that's right. It looks like a horse. It must be a horse. That's a
pattern. And is it really a horse? Or is it more like a frog? See, our pattern detection device, which appears to be located in the
anterior cingulate cortex — it's our little detection device there — can be easily fooled, and this is the problem. For example,
what do you see here? Yes, of course, it's a cow. Once I prime the brain — it's called cognitive priming — once I prime the
brain to see it, it pops back out again even without the pattern that I've imposed on it. And what do you see here? Some
people see a Dalmatian dog. Yes, there it is. And there's the prime. So when I go back without the prime, your brain already
has the model so you can see it again. What do you see here? Planet Saturn. Yes, that's good. How about here? Just shout out
anything you see. That's a good audience, Chris. Because there's nothing in this. Well, allegedly there's nothing.
6:15 This is an experiment done by Jennifer Whitson at U.T. Austin on corporate environments and whether feelings of
uncertainty and being out of control make people see illusory patterns. That is, almost everybody sees the planet Saturn. People
that are put in a condition of feeling out of control are more likely to see something in this, which is allegedly patternless. In
other words, the propensity to find these patterns goes up when there's a lack of control. For example, baseball players are
notoriously superstitious when they're batting, but not so much when they're fielding. Because fielders are successful 90 to 95
percent of the time. The best batters fail seven out of 10 times. So their superstitions, their patternicities, are all associated
with feelings of lack of control and so forth.
7:06 What do you see in this particular one here, in this field? Anybody see an object there? There actually is something here,
but it's degraded. While you're thinking about that, this was an experiment done by Susan Blackmore, a psychologist in
England, who showed subjects this degraded image and then ran a correlation between their scores on an ESP test: How much
did they believe in the paranormal, supernatural, angels and so forth. And those who scored high on the ESP scale, tended to
not only see more patterns in the degraded images but incorrect patterns. Here is what you show subjects. The fish is
degraded 20 percent, 50 percent and then the one I showed you, 70 percent.
7:50 A similar experiment was done by another [Swiss] psychologist named Peter Brugger, who found significantly more
meaningful patterns were perceived on the right hemisphere, via the left visual field, than the left hemisphere. So if you
present subjects the images such that it's going to end up on the right hemisphere instead of the left, then they're more likely
to see patterns than if you put it on the left hemisphere. Our right hemisphere appears to be where a lot of this patternicity
occurs. So what we're trying to do is bore into the brain to see where all this happens.
8:19 Brugger and his colleague, Christine Mohr, gave subjects L-DOPA. L-DOPA's a drug, as you know, given for treating
Parkinson's disease, which is related to a decrease in dopamine. L-DOPA increases dopamine. An increase of dopamine caused
subjects to see more patterns than those that did not receive the dopamine. So dopamine appears to be the drug associated
with patternicity. In fact, neuroleptic drugs that are used to eliminate psychotic behavior, things like paranoia, delusions and
hallucinations, these are patternicities. They're incorrect patterns. They're false positives. They're Type I errors. And if you give
them drugs that are dopamine antagonists, they go away. That is, you decrease the amount of dopamine, and their tendency
to see patterns like that decreases. On the other hand, amphetamines like cocaine are dopamine agonists. They increase the
amount of dopamine. So you're more likely to feel in a euphoric state, creativity, find more patterns.
9:19 In fact, I saw Robin Williams recently talk about how he thought he was much funnier when he was doing cocaine, when
he had that issue, than now. So perhaps more dopamine is related to more creativity. Dopamine, I think, changes our signal-to-noise ratio. That is, how accurate we are in finding patterns. If it's too low, you're more likely to make too many Type II
errors. You miss the real patterns. You don't want to be too skeptical. If you're too skeptical, you'll miss the really interesting
good ideas. Just right, you're creative, and yet you don't fall for too much baloney. Too high and maybe you see patterns
everywhere. Every time somebody looks at you, you think people are staring at you. You think people are talking about you.
And if you go too far on that, that's just simply labeled as madness. It's a distinction perhaps we might make between two
Nobel laureates, Richard Feynman and John Nash. One sees maybe just the right number of patterns to win a Nobel Prize. The
other one also, but maybe too many patterns. And we then call that schizophrenia.
10:17 So the signal-to-noise ratio then presents us with a pattern-detection problem. And of course you all know exactly what
this is, right? And what pattern do you see here? Again, I'm putting your anterior cingulate cortex to the test here, causing you
conflicting pattern detections. You know, of course, this is Via Uno shoes. These are sandals. Pretty sexy feet, I must say.
Maybe a little Photoshopped. And of course, the ambiguous figures that seem to flip-flop back and forth. It turns out what
you're thinking about a lot influences what you tend to see. And you see the lamp here, I know. Because the light's on here. Of
course, thanks to the environmentalist movement we're all sensitive to the plight of marine mammals. So what you see in this
particular ambiguous figure is, of course, the dolphins, right? You see a dolphin here, and there's a dolphin, and there's a
dolphin. That's a dolphin tail there, guys.
11:16 (Laughter)
11:21 If we can give you conflicting data, again, your ACC is going to be going into hyperdrive. If you look down here, it's fine.
If you look up here, then you get conflicting data. And then we have to flip the image for you to see that it's a setup. The
impossible crate illusion. It's easy to fool the brain in 2D. So you say, "Aw, come on Shermer, anybody can do that in a Psych
101 text with an illusion like that." Well here's the late, great Jerry Andrus' "impossible crate" illusion in 3D, in which Jerry is
standing inside the impossible crate. And he was kind enough to post this and give us the reveal. Of course, camera angle is
everything. The photographer is over there, and this board appears to overlap with this one, and this one with that one, and
so on. But even when I take it away, the illusion is so powerful because of how our brains are wired to find those certain kinds
of patterns.
12:10 This is a fairly new one that throws us off because of the conflicting patterns of comparing this angle with that angle. In
fact, it's the exact same picture side by side. So what you're doing is comparing that angle instead of with this one, but with
that one. And so your brain is fooled. Yet again, your pattern detection devices are fooled.
12:28 Faces are easy to see because we have an additional evolved facial recognition software in our temporal lobes. Here's
some faces on the side of a rock. I'm actually not even sure if this is — this might be Photoshopped. But anyway, the point is
still made. Now which one of these looks odd to you? In a quick reaction, which one looks odd? The one on the left. Okay. So
I'll rotate it so it'll be the one on the right. And you are correct. A fairly famous illusion — it was first done with Margaret
Thatcher. Now, they trade up the politicians every time. Well, why is this happening? Well, we know exactly where it happens,
in the temporal lobe, right across, sort of above your ear there, in a little structure called the fusiform gyrus. And there's two
types of cells that do this, that record facial features either globally, or specifically these large, rapid-firing cells, first look at
the general face. So you recognize Obama immediately. And then you notice something quite a little bit odd about the eyes
and the mouth. Especially when they're upside down, you're engaging that general facial recognition software there.
13:30 Now I said back in our little thought experiment, you're a hominid walking on the plains of Africa. Is it just the wind or a
dangerous predator? What's the difference between those? Well, the wind is inanimate; the dangerous predator is an
intentional agent. And I call this process agenticity. That is the tendency to infuse patterns with meaning, intention and
agency, often invisible beings from the top down. This is an idea that we got from a fellow TEDster here, Dan Dennett, who
talked about taking the intentional stance.
13:59 So it's a type of that expanded to explain, I think, a lot of different things: souls, spirits, ghosts, gods, demons, angels,
aliens, intelligent designers, government conspiracists and all manner of invisible agents with power and intention, are
believed to haunt our world and control our lives. I think it's the basis of animism and polytheism and monotheism. It's the
belief that aliens are somehow more advanced than us, more moral than us, and the narratives always are that they're coming
here to save us and rescue us from on high. The intelligent designer's always portrayed as this super intelligent, moral being
that comes down to design life. Even the idea that government can rescue us — that's no longer the wave of the future, but
that is, I think, a type of agenticity: projecting somebody up there, big and powerful, will come rescue us.
14:46 And this is also, I think, the basis of conspiracy theories. There's somebody hiding behind there pulling the strings,
whether it's the Illuminati or the Bilderbergers. But this is a pattern detection problem, isn't it? Some patterns are real and
some are not. Was JFK assassinated by a conspiracy or by a lone assassin? Well, if you go there — there's people there on any
given day — like when I went there, here — showing me where the different shooters were. My favorite one was he was in
the manhole. And he popped out at the last second, took that shot. But of course, Lincoln was assassinated by a conspiracy.
So we can't just uniformly dismiss all patterns like that. Because, let's face it, some patterns are real. Some conspiracies really
are true. Explains a lot, maybe.
15:28 And 9/11 has a conspiracy theory. It is a conspiracy. We did a whole issue on it. Nineteen members of Al Qaeda plotting
to fly planes into buildings constitutes a conspiracy. But that's not what the "9/11 truthers" think. They think it was an inside
job by the Bush administration. Well, that's a whole other lecture. You know how we know that 9/11 was not orchestrated by
the Bush administration? Because it worked.
15:50 (Laughter)
15:53 (Applause)
15:56 So we are natural-born dualists. Our agenticity process comes from the fact that we can enjoy movies like these.
Because we can imagine, in essence, continuing on. We know that if you stimulate the temporal lobe, you can produce a
feeling of out-of-body experiences, near-death experiences, which you can do by just touching an electrode to the temporal
lobe there. Or you can do it through loss of consciousness, by accelerating in a centrifuge. You get a hypoxia, or a lower
oxygen. And the brain then senses that there's an out-of-body experience. You can use — which I did, went out and did —
Michael Persinger's God Helmet, that bombards your temporal lobes with electromagnetic waves. And you get a sense of out-of-body experience.
16:35 So I'm going to end here with a short video clip that sort of brings all this together. It's just a minute and a half. It ties
together all this into the power of expectation and the power of belief. Go ahead and roll it.
16:46 Narrator: This is the venue they chose for their fake auditions for an advert for lip balm.
16:51 Woman: We're hoping we can use part of this in a national commercial, right? And this is test on some lip balms that we
have over here. And these are our models who are going to help us, Roger and Matt. And we have our own lip balm, and we
have a leading brand. Would you have any problem kissing our models to test it?
17:11 Girl: No.
17:13 Woman: You wouldn't? (Girl: No.) Woman: You'd think that was fine.
17:15 Girl: That would be fine. (Woman: Okay.)
17:17 So this is a blind test. I'm going to ask you to go ahead and put a blindfold on. Kay, now can you see anything? (Girl: No.)
Pull it so you can't even see down. (Girl: Okay.)
17:30 Woman: It's completely blind now, right?
17:32 Girl: Yes. (Woman: Okay.)
17:34 Now, what I'm going to be looking for in this test is how it protects your lips, the texture, right, and maybe if you can
discern any flavor or not.
17:45 Girl: Okay. (Woman: Have you ever done a kissing test before?)
17:48 Girl: No.
17:50 Woman: Take a step here. Okay, now I'm going to ask you to pucker up. Pucker up big and lean in just a little bit, okay?
18:26 Woman: Okay. And, Jennifer, how did that feel?
18:30 Jennifer: Good.
18:39 Girl: Oh my God!
18:46 Michael Shermer: Thank you very much. Thank you. Thanks.
Reading #3: Why Facts Don’t Change Our Minds
New discoveries about the human mind show the limitations of reason.
The New Yorker
February 27, 2017 Issue
By Elizabeth Kolbert
In 1975, researchers at Stanford invited a group of undergraduates to take part in a study about suicide. They were presented
with pairs of suicide notes. In each pair, one note had been composed by a random individual, the other by a person who had
subsequently taken his own life. The students were then asked to distinguish between the genuine notes and the fake ones.
Some students discovered that they had a genius for the task. Out of twenty-five pairs of notes, they correctly identified the
real one twenty-four times. Others discovered that they were hopeless. They identified the real note in only ten instances.
As is often the case with psychological studies, the whole setup was a put-on. Though half the notes were indeed genuine—
they’d been obtained from the Los Angeles County coroner’s office—the scores were fictitious. The students who’d been told
they were almost always right were, on average, no more discerning than those who had been told they were mostly wrong.
In the second phase of the study, the deception was revealed. The students were told that the real point of the experiment
was to gauge their responses to thinking they were right or wrong. (This, it turned out, was also a deception.) Finally, the
students were asked to estimate how many suicide notes they had actually categorized correctly, and how many they thought
an average student would get right. At this point, something curious happened. The students in the high-score group said that
they thought they had, in fact, done quite well—significantly better than the average student—even though, as they’d just
been told, they had zero grounds for believing this. Conversely, those who’d been assigned to the low-score group said that
they thought they had done significantly worse than the average student—a conclusion that was equally unfounded.
“Once formed,” the researchers observed dryly, “impressions are remarkably perseverant.”
A few years later, a new set of Stanford students was recruited for a related study. The students were handed packets of
information about a pair of firefighters, Frank K. and George H. Frank’s bio noted that, among other things, he had a baby
daughter and he liked to scuba dive. George had a small son and played golf. The packets also included the men’s responses
on what the researchers called the Risky-Conservative Choice Test. According to one version of the packet, Frank was a
successful firefighter who, on the test, almost always went with the safest option. In the other version, Frank also chose the
safest option, but he was a lousy firefighter who’d been put “on report” by his supervisors several times. Once again, midway
through the study, the students were informed that they’d been misled, and that the information they’d received was entirely
fictitious. The students were then asked to describe their own beliefs. What sort of attitude toward risk did they think a
successful firefighter would have? The students who’d received the first packet thought that he would avoid it. The students
in the second group thought he’d embrace it.
Even after the evidence “for their beliefs has been totally refuted, people fail to make appropriate revisions in those beliefs,”
the researchers noted. In this case, the failure was “particularly impressive,” since two data points would never have been
enough information to generalize from.
The Stanford studies became famous. Coming from a group of academics in the nineteen-seventies, the contention that
people can’t think straight was shocking. It isn’t any longer. Thousands of subsequent experiments have confirmed (and
elaborated on) this finding. As everyone who’s followed the research—or even occasionally picked up a copy of Psychology
Today—knows, any graduate student with a clipboard can demonstrate that reasonable-seeming people are often totally
irrational. Rarely has this insight seemed more relevant than it does right now. Still, an essential puzzle remains: How did we
come to be this way?
In a new book, “The Enigma of Reason” (Harvard), the cognitive scientists Hugo Mercier and Dan Sperber take a stab at
answering this question. Mercier, who works at a French research institute in Lyon, and Sperber, now based at the Central
European University, in Budapest, point out that reason is an evolved trait, like bipedalism or three-color vision. It emerged on
the savannas of Africa, and has to be understood in that context.
Stripped of a lot of what might be called cognitive-science-ese, Mercier and Sperber’s argument runs, more or less, as follows:
Humans’ biggest advantage over other species is our ability to coöperate. Coöperation is difficult to establish and almost as
difficult to sustain. For any individual, freeloading is always the best course of action. Reason developed not to enable us to
solve abstract, logical problems or even to help us draw conclusions from unfamiliar data; rather, it developed to resolve the
problems posed by living in collaborative groups.
“Reason is an adaptation to the hypersocial niche humans have evolved for themselves,” Mercier and Sperber write. Habits of
mind that seem weird or goofy or just plain dumb from an “intellectualist” point of view prove shrewd when seen from a
social “interactionist” perspective.
Consider what’s become known as “confirmation bias,” the tendency people have to embrace information that supports their
beliefs and reject information that contradicts them. Of the many forms of faulty thinking that have been identified,
confirmation bias is among the best catalogued; it’s the subject of entire textbooks’ worth of experiments. One of the most
famous of these was conducted, again, at Stanford. For this experiment, researchers rounded up a group of students who had
opposing opinions about capital punishment. Half the students were in favor of it and thought that it deterred crime; the
other half were against it and thought that it had no effect on crime.
The students were asked to respond to two studies. One provided data in support of the deterrence argument, and the other
provided data that called it into question. Both studies—you guessed it—were made up, and had been designed to present
what were, objectively speaking, equally compelling statistics. The students who had originally supported capital punishment
rated the pro-deterrence data highly credible and the anti-deterrence data unconvincing; the students who’d originally
opposed capital punishment did the reverse. At the end of the experiment, the students were asked once again about their
views. Those who’d started out pro-capital punishment were now even more in favor of it; those who’d opposed it were even
more hostile.
If reason is designed to generate sound judgments, then it’s hard to conceive of a more serious design flaw than confirmation
bias. Imagine, Mercier and Sperber suggest, a mouse that thinks the way we do. Such a mouse, “bent on confirming its belief
that there are no cats around,” would soon be dinner. To the extent that confirmation bias leads people to dismiss evidence of
new or underappreciated threats—the human equivalent of the cat around the corner—it’s a trait that should have been
selected against. The fact that both we and it survive, Mercier and Sperber argue, proves that it must have some adaptive
function, and that function, they maintain, is related to our “hypersociability.”
Mercier and Sperber prefer the term “myside bias.” Humans, they point out, aren’t randomly credulous. Presented with
someone else’s argument, we’re quite adept at spotting the weaknesses. Almost invariably, the positions we’re blind about
are our own.
A recent experiment performed by Mercier and some European colleagues neatly demonstrates this asymmetry. Participants
were asked to answer a series of simple reasoning problems. They were then asked to explain their responses, and were given
a chance to modify them if they identified mistakes. The majority were satisfied with their original choices; fewer than fifteen
per cent changed their minds in step two.
In step three, participants were shown one of the same problems, along with their answer and the answer of another
participant, who’d come to a different conclusion. Once again, they were given the chance to change their responses. But a
trick had been played: the answers presented to them as someone else’s were actually their own, and vice versa. About half
the participants realized what was going on. Among the other half, suddenly people became a lot more critical. Nearly sixty
per cent now rejected the responses that they’d earlier been satisfied with.
This lopsidedness, according to Mercier and Sperber, reflects the task that reason evolved to perform, which is to prevent us
from getting screwed by the other members of our group. Living in small bands of hunter-gatherers, our ancestors were
primarily concerned with their social standing, and with making sure that they weren’t the ones risking their lives on the hunt
while others loafed around in the cave. There was little advantage in reasoning clearly, while much was to be gained from
winning arguments.
Among the many, many issues our forebears didn’t worry about were the deterrent effects of capital punishment and the
ideal attributes of a firefighter. Nor did they have to contend with fabricated studies, or fake news, or Twitter. It’s no wonder,
then, that today reason often seems to fail us. As Mercier and Sperber write, “This is one of many cases in which the
environment changed too quickly for natural selection to catch up.”
Steven Sloman, a professor at Brown, and Philip Fernbach, a professor at the University of Colorado, are also cognitive
scientists. They, too, believe sociability is the key to how the human mind functions or, perhaps more pertinently,
malfunctions. They begin their book, “The Knowledge Illusion: Why We Never Think Alone” (Riverhead), with a look at toilets.
Virtually everyone in the United States, and indeed throughout the developed world, is familiar with toilets. A typical flush
toilet has a ceramic bowl filled with water. When the handle is depressed, or the button pushed, the water—and everything
that’s been deposited in it—gets sucked into a pipe and from there into the sewage system. But how does this actually
happen?
In a study conducted at Yale, graduate students were asked to rate their understanding of everyday devices, including toilets,
zippers, and cylinder locks. They were then asked to write detailed, step-by-step explanations of how the devices work, and to
rate their understanding again. Apparently, the effort revealed to the students their own ignorance, because their self-assessments dropped. (Toilets, it turns out, are more complicated than they appear.)
Sloman and Fernbach see this effect, which they call the “illusion of explanatory depth,” just about everywhere. People
believe that they know way more than they actually do. What allows us to persist in this belief is other people. In the case of
my toilet, someone else designed it so that I can operate it easily. This is something humans are very good at. We’ve been
relying on one another’s expertise ever since we figured out how to hunt together, which was probably a key development in
our evolutionary history. So well do we collaborate, Sloman and Fernbach argue, that we can hardly tell where our own
understanding ends and others’ begins.
“One implication of the naturalness with which we divide cognitive labor,” they write, is that there’s “no sharp boundary
between one person’s ideas and knowledge” and “those of other members” of the group.
This borderlessness, or, if you prefer, confusion, is also crucial to what we consider progress. As people invented new tools for
new ways of living, they simultaneously created new realms of ignorance; if everyone had insisted on, say, mastering the
principles of metalworking before picking up a knife, the Bronze Age wouldn’t have amounted to much. When it comes to new
technologies, incomplete understanding is empowering.
Where it gets us into trouble, according to Sloman and Fernbach, is in the political domain. It’s one thing for me to flush a
toilet without knowing how it operates, and another for me to favor (or oppose) an immigration ban without knowing what
I’m talking about. Sloman and Fernbach cite a survey conducted in 2014, not long after Russia annexed the Ukrainian territory
of Crimea. Respondents were asked how they thought the U.S. should react, and also whether they could identify Ukraine on
a map. The farther off base they were about the geography, the more likely they were to favor military intervention.
(Respondents were so unsure of Ukraine’s location that the median guess was wrong by eighteen hundred miles, roughly the
distance from Kiev to Madrid.)
Surveys on many other issues have yielded similarly dismaying results. “As a rule, strong feelings about issues do not emerge
from deep understanding,” Sloman and Fernbach write. And here our dependence on other minds reinforces the problem. If
your position on, say, the Affordable Care Act is baseless and I rely on it, then my opinion is also baseless. When I talk to Tom
and he decides he agrees with me, his opinion is also baseless, but now that the three of us concur we feel that much more
smug about our views. If we all now dismiss as unconvincing any information that contradicts our opinion, you get, well, the
Trump Administration.
“This is how a community of knowledge can become dangerous,” Sloman and Fernbach observe. The two have performed
their own version of the toilet experiment, substituting public policy for household gadgets. In a study conducted in 2012,
they asked people for their stance on questions like: Should there be a single-payer health-care system? Or merit-based pay
for teachers? Participants were asked to rate their positions depending on how strongly they agreed or disagreed with the
proposals. Next, they were instructed to explain, in as much detail as they could, the impacts of implementing each one. Most
people at this point ran into trouble. Asked once again to rate their views, they ratcheted down the intensity, so that they
either agreed or disagreed less vehemently.
Sloman and Fernbach see in this result a little candle for a dark world. If we—or our friends or the pundits on CNN—spent less
time pontificating and more trying to work through the implications of policy proposals, we’d realize how clueless we are and
moderate our views. This, they write, “may be the only form of thinking that will shatter the illusion of explanatory depth and
change people’s attitudes.”
One way to look at science is as a system that corrects for people’s natural inclinations. In a well-run laboratory, there’s no
room for myside bias; the results have to be reproducible in other laboratories, by researchers who have no motive to confirm
them. And this, it could be argued, is why the system has proved so successful. At any given moment, a field may be
dominated by squabbles, but, in the end, the methodology prevails. Science moves forward, even as we remain stuck in place.
In “Denying to the Grave: Why We Ignore the Facts That Will Save Us” (Oxford), Jack Gorman, a psychiatrist, and his daughter,
Sara Gorman, a public-health specialist, probe the gap between what science tells us and what we tell ourselves. Their
concern is with those persistent beliefs which are not just demonstrably false but also potentially deadly, like the conviction
that vaccines are hazardous. Of course, what’s hazardous is not being vaccinated; that’s why vaccines were created in the first
place. “Immunization is one of the triumphs of modern medicine,” the Gormans note. But no matter how many scientific
studies conclude that vaccines are safe, and that there’s no link between immunizations and autism, anti-vaxxers remain
unmoved. (They can now count on their side—sort of—Donald Trump, who has said that, although he and his wife had their
son, Barron, vaccinated, they refused to do so on the timetable recommended by pediatricians.)
The Gormans, too, argue that ways of thinking that now seem self-destructive must at some point have been adaptive. And
they, too, dedicate many pages to confirmation bias, which, they claim, has a physiological component. They cite research
suggesting that people experience genuine pleasure—a rush of dopamine—when processing information that supports their
beliefs. “It feels good to ‘stick to our guns’ even if we are wrong,” they observe.
The Gormans don’t just want to catalogue the ways we go wrong; they want to correct for them. There must be some way,
they maintain, to convince people that vaccines are good for kids, and handguns are dangerous. (Another widespread but
statistically insupportable belief they’d like to discredit is that owning a gun makes you safer.) But here they encounter the
very problems they have enumerated. Providing people with accurate information doesn’t seem to help; they simply discount
it. Appealing to their emotions may work better, but doing so is obviously antithetical to the goal of promoting sound science.
“The challenge that remains,” they write toward the end of their book, “is to figure out how to address the tendencies that
lead to false scientific belief.”
“The Enigma of Reason,” “The Knowledge Illusion,” and “Denying to the Grave” were all written before the November
election. And yet they anticipate Kellyanne Conway and the rise of “alternative facts.” These days, it can feel as if the entire
country has been given over to a vast psychological experiment being run either by no one or by Steve Bannon. Rational
agents would be able to think their way to a solution. But, on this matter, the literature is not reassuring. ♦
Elizabeth Kolbert has been a staff writer at The New Yorker since 1999. She won the 2015 Pulitzer Prize for general nonfiction
for “The Sixth Extinction: An Unnatural History.”
This article appears in other versions of the February 27, 2017, issue, with the headline “That’s What You Think.”
Reading #4: The Internet Isn’t Making Us Dumber — It’s Making Us More ‘Meta-Ignorant’
By William Poundstone
New York Magazine
July 27, 2016
At five-foot-six and 270 pounds, the bank robber was impossible to miss. On April 19, 1995, he hit two Pittsburgh banks in
broad daylight. Security cameras picked up good images of his face — he wore no mask — and showed him holding a gun to
the teller. Police made sure the footage was broadcast on the local eleven o’clock news. A tip came in within minutes, and just
after midnight, the police were knocking on the suspect’s door in McKeesport. Identified as McArthur Wheeler, he was
incredulous. “But I wore the juice,” he said.
Wheeler told police he rubbed lemon juice on his face to make it invisible to security cameras. Detectives concluded he was
not delusional, not on drugs — just incredibly mistaken.
Wheeler knew that lemon juice is used as an invisible ink. Logically, then, lemon juice would make his face invisible to
cameras. He tested this out before the heists, putting juice on his face and snapping a selfie with a Polaroid camera. There was
no face in the photo! (Police never figured that out. Most likely Wheeler was no more competent as a photographer than he
was as a bank robber.) Wheeler reported one problem with his scheme: The lemon juice stung his eyes so badly that he could
barely see.
Wheeler went to jail and into the annals of the world’s dumbest criminals. It was such a feature, in the 1996 World Almanac,
that brought Wheeler’s story to the attention of David Dunning, a Cornell psychology professor. He saw in this tale of dimwitted woe something universal. Those most lacking in knowledge and skills are least able to appreciate that lack. This
observation would eventually become known as the Dunning-Kruger effect.
Dunning and a graduate student, Justin Kruger, embarked on a series of experiments testing this premise. They quizzed
undergraduate psychology students on grammar, logic, and jokes, then asked the students to estimate their scores and also
estimate how well they did relative to others (on a percentile basis). The students who scored lowest had greatly exaggerated
notions of how well they did. Dunning had expected that, but not the magnitude of the effect. His first reaction to the results
was “Wow.” Those who scored near the bottom estimated that their skills were superior to two-thirds of the other students.
Later research went far beyond the university. For one experiment, Dunning and Kruger recruited gun hobbyists at a
trapshooting and skeet-shooting competition. Volunteers took a ten-question gun safety and knowledge quiz adapted from
one published by the National Rifle Association. Again, the gun owners who knew the least about firearm safety wildly
overestimated their knowledge.
Like most rules, this one has exceptions. “One need not look far,” Dunning and Kruger wrote, “to find individuals with an
impressive understanding of the strategies and techniques of basketball, for instance, yet who could not ‘dunk’ to save their
lives. (These people are called coaches.)” But of course coaches understand their own physical limitations. Similarly, “most
people have no trouble identifying their inability to translate Slovenian proverbs, reconstruct a V-8 engine, or diagnose acute
disseminated encephalomyelitis.”
The Dunning-Kruger effect requires a minimal degree of knowledge and experience in the area about which you are ignorant
(and ignorant of your ignorance). Drivers, as a group, are subject to the effect — bad drivers usually think they’re good
drivers — but those who have never learned how to drive are exempt.
Since Dunning and Kruger first published their results in the 1999 paper “Unskilled and Unaware of It: How Difficulties in
Recognizing One’s Own Incompetence Lead to Inflated Self-Assessments,” the effect named for them has become a meme. It
strikes a universal chord: As Dunning put it, the overconfident airhead “is someone we’ve all met.” Actor John Cleese concisely
explains the Dunning-Kruger effect in a much-shared YouTube video: “If you’re very, very stupid, how can you possibly realize
that you’re very, very stupid? You’d have to be relatively intelligent to realize how stupid you are … And this explains not just
Hollywood but almost the entirety of Fox News.” But the 1999 paper makes clear the authors’ opinion that the first place to
look for a Dunning-Kruger ignoramus is in the mirror.
There is now an active field of research into how the internet is changing what we learn and remember. In a 2011 experiment
helmed by Daniel Wegner of Harvard, volunteers were presented with a list of 40 trivia facts — short, pithy statements such as
“An ostrich’s eye is bigger than its brain.” Each person was instructed to type all 40 statements into a computer. Half the
volunteers were told to remember the facts. The other half weren’t. Also, half were informed that their work would be stored
on the computer. The other half were told that it would be erased immediately after the task’s completion.
The volunteers were later given a quiz on the facts they’d typed. Those instructed to remember the information scored no
better than those who hadn’t been told to do so. But those who believed that their work would be erased scored much better
compared to those who believed it would be saved. This was true whether they were trying to remember the facts or not.
The conscious mind exercises little choice in remembering and forgetting. Nobody decides to forget a client’s name or to
remember forever the lyrics of a detested pop tune. It just happens.
The Harvard experiment’s results are consistent with a pragmatic system of memory. It is impossible to remember everything.
The brain must constantly be doing triage on memories, without conscious intervention. And apparently it recognizes that
there is less need to stock our minds with information that can be readily retrieved. (It may be a very long time before you
need to know how big an ostrich’s eyeball is.) So facts are more often forgotten when people believe the facts will be
archived. This phenomenon has earned a name — the Google effect — describing the automatic forgetting of information that
can be found online.
If you take the Google effect to the point of absurdity, selfies would cause amnesia. But a 2013 study conducted by Linda
Henkel of Fairfield University pointed in that direction. Henkel noticed that visitors to art museums are obsessed with taking
cell-phone shots of artworks and often are less interested in looking at the art itself. So she performed an experiment at
Fairfield University’s Bellarmine Museum of Art. Undergraduates took a docent tour in which they were directed to view
specific artworks. Some were instructed to photograph the art, and others were simply told to take note of it. The next day
both groups were quizzed on their knowledge of the artworks. The visitors who snapped pictures were less able to identify
works and to recall visual details.
Our unconscious curators of memory must be aware of how quickly and easily any needed fact can be called up. This implies
that our broadband networks have created a new regime of learning and memory, one in which facts are less likely to be
retained and are more quickly forgotten. In a few years, we’ll probably all be wearing devices that shoot a 24-7 video stream
of our lives. Will social media make amnesiacs of us all?
Uploaded keystrokes are just one of many ways we have of storing information outside our brains. Long before our virtual
social networks, we shared memory, knowledge, and expertise among our real social networks. I’m not a foodie, but I have
friends who can recommend interesting new restaurants. I don’t know doctors, but I have a general practitioner who can
recommend a specialist. We get by in the world, not by knowing everything but by knowing people.
Distributed memory can counteract misinformation — to a degree, anyway. Surveys have shown that most people think
antibiotics will fight viruses. Wrong. But as Dan M. Kahan of Yale points out, it hardly matters. “Most people” are not going to
self-prescribe azithromycin. The important thing is to know that it’s a good idea to go to a doctor when we’re sick and to
follow that doctor’s instructions.
The Google effect is another adaptation to distributed memory. The cloud is a friend who happens to know everything. It’s
always available, provides the answer in seconds, and never gets upset with dumb questions. It’s little wonder we depend on
it to the point of absurdity. Economist Seth Stephens-Davidowitz noted that the third-most-common Google search containing
the phrase “my penis” is “How big is my penis?” You’d think a ruler would have a better answer.
Most — more than 50 percent — of millennials can’t name anyone who shot a U.S. president or discovered a planet; they
don’t know the ancient city celebrated for its hanging gardens, the one destroyed by Mount Vesuvius, or the emperor said to
have fiddled while Rome burned; and most millennials can’t name the single word uttered by the raven in Edgar Allan Poe’s
poem.
The conventional reaction to such reports is a blend of shock and amusement. It’s terrible how little young people/ordinary
citizens know — right? It’s worth asking how we know it’s so terrible and whether it’s terrible at all.
Ignorance can be rational. Economist Anthony Downs made that claim in the 1950s. He meant that there are many situations
in which the effort needed to acquire knowledge outweighs the advantage of having it. Maybe you got your diploma and a
high-paying job without ever learning about that poem with the raven in it. Why learn it now?
The contemporary world regards knowledge with ambivalence. We admire learning and retain the view that it is a desirable
end in itself. But our more entitled side sees learning as a means to an end — to social advancement, wealth, power,
something. We are suspicious of education that lacks an ulterior motive; we click on listicles entitled “8 College Degrees With
the Worst Return on Investment.”
Ours is the golden age of rational — and rationalized — ignorance. Information is being produced, devalued, and made
obsolete at a prodigious rate. Every day the culture changes beneath our feet. It is harder than ever to keep up or even to be
sure that keeping up matters anymore. We are left speculating about how important it is to stay current on the Middle East,
contemporary novels, local politics, wearable technology, and college basketball. A friend recently wondered aloud whether it
was okay to not know anything about Game of Thrones. The observation that you can look up any needed bit of information
dodges the issue. You can’t Google a point of view.
The poorly informed don’t necessarily know less. They just know different things. A gamer who spends all his free time playing
video games will have an encyclopedic understanding of those games. He is ill-informed only by arbitrary standards of what’s
important. Not everyone agrees that there is a fixed set of facts that all should know. But absent such a set, the concept of
being well-informed becomes a hopelessly relative one.
Today’s mediascape does not provide much guidance. It encourages us to create personal, solipsistic filters over information,
making it unprecedentedly easy to gorge on news of favorite celebrities, TV shows, teams, political ideologies, and tech toys.
This leaves less time and attention for everything else. The great risk isn’t that the internet is making us less informed or even
misinformed. It’s that it may be making us meta-ignorant — less cognizant of what we don’t know.
Adapted from Head in the Cloud: Why Knowing Things Still Matters When Facts Are So Easy to Look Up by William
Poundstone. Copyright © 2016 by William Poundstone.
_______________________________________________________________________________________
15|Page
Reading #5: Trump’s ‘Dangerous Disability’? It’s the Dunning-Kruger Effect
by Faye Flam
Bloomberg View
May 12, 2017
We’re all ignorant, but Trump takes it to a different level.
Last week, behavioral researchers at Brown University held a colloquium
titled “Analytic thinking, bullshit receptivity, and fake news sensitivity.”
At an informal gathering afterwards, the conversation turned to the not-completely-unrelated topic of Donald Trump.
Earlier that week, syndicated columnist George Will offered an amateur
diagnosis of sorts. Will’s assessment, based on Trump’s off-base
statements about the Civil War and other topics, was that the president
suffers from a “dangerous disability” -- not only because he’s ignorant,
and ignorant of his ignorance, but because he “does not know what it is
to know something.”
It turns out Will is on to something, and not just because a few academics agree with him. His observations about Trump may
have prompted him to independently discover a kind of meta-incompetence known as the Dunning-Kruger effect.
Brown cognitive psychologist Steven Sloman, who brought up Will’s column after the colloquium, is something of an expert on
ignorance of one’s own ignorance. Earlier this year, he co-authored a book called “The Knowledge Illusion: Why We Never
Think Alone.” The book describes a series of experiments in which people were asked to assess how much they knew about
the way various systems work -- from toilets to single-payer health-care systems. People generally rated their knowledge of
those systems as high -- but then, when asked to explain in detail how those systems actually worked, most couldn’t.
The guest speaker who talked about “bullshit receptivity,” psychologist Gordon Pennycook of Yale, has also done work on self-reflection and overconfidence. That put both of them in a good position to evaluate Will’s analysis of Trump.
Neither went so far as to call Trump’s lack of self-awareness a disorder. Sloman said that the knowledge illusion is a common
form of human fallibility, but Trump takes it to an exceptional degree. And for most people, the knowledge illusion is
punctured once they realize they can’t explain health care or toilets or zippers as well as they thought. Not so Trump. When
asked to explain something, he changes the subject, his confidence in his knowledge unwavering.
And Trump’s assumptions of expertise go beyond ordinary appliances and policy issues. He recently advised the Navy on how
to improve aircraft carrier technology: “It sounded bad to me. Digital. They have digital. What is digital? And it’s very
complicated, you have to be Albert Einstein to figure it out.”
Both Sloman and Pennycook study a trait called reflectivity, which can predict whether people are likely to be highly deluded
about their own knowledge. The standard reflectivity test includes questions such as this: If it takes five machines five minutes
to make five widgets, how long would it take 100 machines to make 100 widgets? And this: In a lake, there is a patch of lily
pads. Every day, the patch doubles in size. If it takes 48 days for the patch to cover the entire lake, how long would it take for
the patch to cover half of the lake?
For most people, these questions take a moment’s thought, though the less-reflective types will blurt out the most intuitive
and wrong answer: It’s not 100 minutes for the widgets and not 24 days for the lake.
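For readers who want to check their intuitions, each answer takes one line of arithmetic:

\[
\frac{5\ \text{widgets}}{5\ \text{machines} \times 5\ \text{min}} = \frac{1\ \text{widget}}{5\ \text{machine-min}}
\;\Rightarrow\;
100\ \text{machines} \times 5\ \text{min} \times \frac{1\ \text{widget}}{5\ \text{machine-min}} = 100\ \text{widgets},
\]

so the widgets take five minutes, not 100. And since the lily-pad patch doubles daily and covers the lake on day 48,

\[
\text{coverage}(48) = 1, \quad \text{coverage}(d) = 2\,\text{coverage}(d-1)
\;\Rightarrow\;
\text{coverage}(47) = \tfrac{1}{2},
\]

so the lake is half covered on day 47, not day 24.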
Pennycook borrows the definition of bullshit from philosopher Harry Frankfurt: information that’s presented with no concern
for the truth. In an earlier study, he gave subjects randomly generated phrases such as “Hidden meaning transforms
unparalleled abstract beauty,” which he called “pseudo-profound bullshit.” Low scores on the reflectivity test correlated with
people seeing deep meaning in the statements.
16|Page
In another recent paper, he examined the connection between reflectivity and the Dunning-Kruger effect. Named after
psychologists David Dunning and Justin Kruger, the effect describes the way people who are the least competent at a task
often rate their skills as exceptionally high because they are too ignorant to know what it would mean to have the skill.
Pennycook and his colleagues concluded that the Dunning-Kruger effect applies to people’s very ability to reason and reflect.
They asked subjects to take a test of reflectivity and to guess how they did. Many of those who were unreflective thought they
did well, since they had no idea what it would mean to be reflective and therefore were too incompetent to evaluate their
own performance.
This is getting close to Will’s diagnosis of Donald Trump as a person who thinks he knows everything but in fact doesn’t know
what it is to know something. The Dunning-Kruger effect and the knowledge illusion aren’t disorders, but are part and parcel
of being human. Some people, however, are much more subject to these than others. And Trump seems to occupy an
extreme end of the spectrum.
Is there any hope for Trump? His experience as president may make him more aware of how little he knows, said Sloman.
Trump recently told reporters, for example, that “Nobody knew health care was so complicated.” He couldn’t quite bring
himself to admit being wrong without sharing the blame with the rest of the world -- but perhaps it’s a start.
Faye Flam is a Bloomberg View columnist. She was a staff writer for Science magazine and a columnist for the Philadelphia
Inquirer, and she is the author of “The Score: How the Quest for Sex Has Shaped the Modern Man.”
Follow @fayeflam on Twitter
_______________________________________________________________________________________
17|Page
Reading #6: Why Expertise Matters
April 7, 2017
Adam Frank
I am an expert — a card-carrying, credential-bearing expert.
Of course, there are only a few hundred people on the planet who care about the thing I'm an expert on: the death of stars
like the sun. But, nonetheless, if the death of stars like the sun is something you want to know about, I'm one of the folks to
call. I tell you this not to brag or make my mom proud, but because expertise has been getting a bad rap lately. It's worth a
moment of our time to understand exactly why this "death of expertise" is happening — and what it means for the world.
The attack on expertise was given its most visceral form by British politician Michael Gove during the Brexit campaign last year
when he famously claimed, "people in this country have had enough of experts." The same kinds of issues, however, are also
at stake here in the U.S. in our discussions about "alternative facts," "fake news" and "denial" of various kinds. That issue can
be put as a simple question: When does one opinion count more than another?
By definition, an expert is someone whose learning and experience lets them understand a subject more deeply than you or I do
(assuming we're not an expert in that subject, too). The weird thing about having to write this essay at all is this: Who would
have a problem with that? Doesn't everyone want their brain surgery done by an expert surgeon rather than the guy who
fixes their brakes? On the other hand, doesn't everyone want their brakes fixed by an expert auto mechanic rather than a
brain surgeon who has never fixed a flat?
Every day, all of us entrust our lives to experts from airline pilots to pharmacists. Yet, somehow, we've come to a point where
people can put their ignorance on a subject of national importance on display for all to see — and then call it a virtue.
Here at 13.7, we've seen this phenomenon many times. When we had a section for comments, it would quickly fill up with
statements like "the climate is always changing" or "CO2 is a trace gas so it doesn't matter" when we posted pieces on the
science of climate change.
Sometimes, I would try and engage these commenters. I love science — and all I really want is for people to understand the
remarkable process through which we've learned about the world, including Earth's climate. But I'd quickly find these
commenters had virtually no understanding of even the most basic principles of atmospheric science. Worse, they were not
even interested in the fact that there were basic principles. Yet there they were proudly proclaiming to the world that their
opinion trumped all those scientists who'd spent decades studying exactly those principles. It's still always kind of stunning to
me. What if I showed up at their jobs and told them everything they knew and put into daily practice was crap?
How did we reach this remarkable state of affairs? The answer to that question can be found in a new book by Tom Nichols
titled The Death of Expertise: The Campaign Against Established Knowledge and Why it Matters. Nichols is a politically
conservative professor of international relations at the U.S. Naval War College. (He's also a five-time undefeated Jeopardy!
champion, so you don't wanna mess with him.)
First, it's important to note, Nichols is not arguing for a slavish adherence to anything that comes out of an expert's mouth. In
a wonderful essay that preceded the book, he tells us: "It's true that experts can make mistakes, as disasters from thalidomide
to the Challenger explosion tragically remind us." Later he adds:
"Personally, I don't think technocrats and intellectuals should rule the world: we had quite enough of that in the late 20th
century, thank you, and it should be clear now that intellectualism makes for lousy policy without some sort of political
common sense. Indeed, in an ideal world, experts are the servants, not the masters, of a democracy."
But Nichols is profoundly troubled by the willful "know-nothing-ism" he sees around him. Its principal cause, he argues, is
the new mechanisms that shape our discussions (i.e. the Internet and social media). He writes:
18|Page
"There was once a time when participation in public debate, even in the pages of the local newspaper, required submission of
a letter or an article, and that submission had to be written intelligently, pass editorial review, and stand with the author's
name attached... Now, anyone can bum rush the comments section of any major publication. Sometimes, that results in a
free-for-all that spurs better thinking. Most of the time, however, it means that anyone can post anything they want, under
any anonymous cover, and never have to defend their views or get called out for being wrong."
Nichols also points to excesses of partisanship in politics, the weakening of expectations in schools and, finally, to human
nature. The last cause, he says, is particularly troubling. As he puts it:
"It’s called the Dunning-Kruger effect, which says, in sum, that the dumber you are, the more confident you are that you're
not actually dumb. And when you get invested in being aggressively dumb...well, the last thing you want to encounter are
experts who disagree with you, and so you dismiss them in order to maintain your unreasonably high opinion of yourself."
Part of the problem, says Nichols, is that while the democratization of knowledge is great, it's threatened by the strange
insistence that every opinion has equal weight. That's an idea, he rightly says, that has nothing to do with democracy:
"Having equal rights does not mean having equal talents, equal abilities, or equal knowledge. It assuredly does not mean that
'everyone's opinion about anything is as good as anyone else's.' And yet, this is now enshrined as the credo of a fair number of
people despite being obvious nonsense."
Nichols' perspective is an essential one if we are to begin digging ourselves out of the hole we find ourselves in. In a complex,
technological world, most of us are experts at something. More importantly, being a true expert means having a healthy dose
of humility. If you have really studied something and really gone deep into how it works, then you should come away knowing
how much you don't know. In a sense, that is the real definition of an expert — knowing the limits of one's own knowledge.
In the end, we need to become insistent on the value of knowing things and the value of recognizing when others know what
we do not. If we don't, there will be a steep price to pay for the democracy we hold so dear.
Adam Frank is a co-founder of the 13.7 blog, an astrophysics professor at the University of Rochester, a book author and a self-described "evangelist of science." You can keep up with more of what Adam is thinking on Facebook and Twitter:
@adamfrank4
_______________________________________________________________________________________
19|Page
Reading #7: Climate Science Meets a Stubborn Obstacle: Students
The New York Times
By AMY HARMON
JUNE 4, 2017
WELLSTON, Ohio — To Gwen Beatty, a junior at the high school in this proud, struggling, Trump-supporting town, the new
science teacher’s lessons on climate change seemed explicitly designed to provoke her.
So she provoked him back.
When the teacher, James Sutter, ascribed the recent warming of the Earth to heat-trapping gases released by burning fossil
fuels like the coal her father had once mined, she asserted that it could be a result of other, natural causes.
When he described the flooding, droughts and fierce storms that scientists predict within the century if such carbon emissions
are not sharply reduced, she challenged him to prove it. “Scientists are wrong all the time,” she said with a shrug, echoing
those celebrating President Trump’s announcement last week that the United States would withdraw from the Paris climate
accord.
When Mr. Sutter lamented that information about climate change had been removed from the White House website after Mr.
Trump’s inauguration, she rolled her eyes.
“It’s his website,” she said.
For his part, Mr. Sutter occasionally fell short of his goal of providing Gwen — the most vocal of a raft of student climate
skeptics — with calm, evidence-based responses. “Why would I lie to you?” he demanded one morning. “It’s not like I’m
making a lot of money here.”
She was, he knew, a straight-A student. She would have had no trouble comprehending the evidence, embedded in ancient
tree rings, ice, leaves and shells, as well as sophisticated computer models, that atmospheric carbon dioxide is the chief culprit
when it comes to warming the world. Or the graph he showed of how sharply it has spiked since the Industrial Revolution,
when humans began pumping vast quantities of it into the air.
Thinking it a useful soothing device, Mr. Sutter assented to Gwen’s request that she be allowed to sand the bark off the
sections of wood he used to illustrate tree rings during class. When she did so with an energy that, classmates said, increased
during discussion points with which she disagreed, he let it go.
When she insisted that teachers “are supposed to be open to opinions,” however, Mr. Sutter held his ground.
“It’s not about opinions,” he told her. “It’s about the evidence.”
“It’s like you can’t disagree with a scientist or you’re ‘denying science,’” she sniffed to her friends.
Gwen, 17, could not put her finger on why she found Mr. Sutter, whose biology class she had enjoyed, suddenly so
insufferable. Mr. Sutter, sensing that his facts and figures were not helping, was at a loss. And the day she grew so agitated by
a documentary he was showing that she bolted out of the school left them both shaken.
“I have a runner,” Mr. Sutter called down to the office, switching off the video.
He had chosen the video, an episode from an Emmy-winning series that featured a Christian climate activist and high
production values, as a counterpoint to another of Gwen’s objections, that a belief in climate change does not jibe with
Christianity.
“It was just so biased toward saying climate change is real,” she said later, trying to explain her flight. “And that all these
people that I pretty much am like are wrong and stupid.”
Classroom Culture Wars
20|Page
As more of the nation’s teachers seek to integrate climate science into the curriculum, many of them are reckoning with
students for whom suspicion of the subject is deeply rooted.
In rural Wellston, a former coal and manufacturing town seeking its next act, rejecting the key findings of climate science can
seem like a matter of loyalty to a way of life already under siege. Originally tied, perhaps, to economic self-interest, climate
skepticism has itself become a proxy for conservative ideals of hard work, small government and what people here call “self-sustainability.”
A tractor near Wellston, an area where coal and manufacturing were once the primary employment opportunities. Credit Maddie McGarvey for The New York Times
Assiduously promoted by fossil fuel interests, that powerful link to a collective worldview largely explains why just 22 percent of Mr. Trump’s supporters in a 2016 poll said they believed that human activity is warming the
planet, compared with half of all registered voters. And the prevailing outlook among his base may in turn have facilitated the
president’s move to withdraw from the global agreement to battle rising temperatures.
“What people ‘believe’ about global warming doesn’t reflect what they know,” Dan Kahan, a Yale researcher who studies
political polarization, has stressed in talks, papers and blog posts. “It expresses who they are.”
But public-school science classrooms are also proving to be a rare place where views on climate change may shift, research
has found. There, in contrast with much of adult life, it can be hard to entirely tune out new information.
“Adolescents are still heavily influenced by their parents, but they’re also figuring themselves out,” said Kathryn Stevenson, a
researcher at North Carolina State University who studies climate literacy.
Gwen’s father died when she was young, and her mother and uncle, both Trump supporters, doubt climate change as much as
she does.
“If she was in math class and teacher told her two plus two equals four and she argued with him about that, I would say she’s
wrong,” said her uncle, Mark Beatty. “But no one knows if she’s wrong.”
As Gwen clashed with her teacher over the notion of human-caused climate change, one of her best friends, Jacynda Patton,
was still circling the taboo subject. “I learned some stuff, that’s all,” Jacynda told Gwen, on whom she often relied to supply
the $2.40 for school lunch that she could not otherwise afford.
Hired a year earlier, Mr. Sutter was the first science teacher at Wellston to emphasize climate science. He happened to do so
at a time when the mounting evidence of the toll that global warming is likely to take, and the Trump administration’s
considerable efforts to discredit those findings, are drawing new attention to the classroom from both sides of the nation’s
culture war.
Since March, the Heartland Institute, a think tank that rejects the scientific consensus on climate change, has sent tens of
thousands of science teachers a book of misinformation titled “Why Scientists Disagree About Global Warming,” in an effort
to influence “the next generation of thought,” said Joseph Bast, the group’s chief executive.
21|Page
The Alliance for Climate Education, which runs assemblies based on the consensus science for high schools across the country,
received new funding from a donor who sees teenagers as the best means of reaching and influencing their parents.
Idaho, however, this year joined several other states that have declined to adopt new science standards that emphasize the
role human activities play in climate change.
At Wellston, where most students live below the poverty line and the needle-strewn bike path that abuts the marching band’s
practice field is known as “heroin highway,” climate change is not regarded as the most pressing issue. And since most
Wellston graduates do not go on to obtain a four-year college degree, this may be the only chance many of them
have to study the impact of global warming.
But Mr. Sutter’s classroom shows how curriculum can sometimes influence culture on a subject that stands to have a more
profound impact on today’s high schoolers than on their parents.
“I thought it would be an easy A,” said Jacynda, 16, an outspoken Trump supporter. “It wasn’t.”
God’s Gift to Wellston?
Mr. Sutter, who grew up three hours north of Wellston in the largely Democratic city of Akron, applied for the job at Wellston
High straight from a program to recruit science professionals into teaching, a kind of science-focused Teach for America.
He already had a graduate-level certificate in environmental science from the University of Akron and a private sector job
assessing environmental risk for corporations. But a series of personal crises that included his sister’s suicide, he said, had
compelled him to look for a way to channel his knowledge to more meaningful use.
The fellowship gave him a degree in science education in exchange for a three-year commitment to teach in a high-needs Ohio
school district. Megan Sowers, the principal, had been looking for someone qualified to teach an Advanced Placement course,
which could help improve her financially challenged school’s poor performance ranking. She hired him on the spot.
Mr. Sutter walking with his students on a nature trail near the high school, where he pointed out evidence of climate change. Credit Maddie McGarvey for The New York Times
But at a school where most teachers were raised in the same southeastern corner of Appalachian Ohio as their students, Mr.
Sutter’s credentials themselves could raise hackles.
“He says, ‘I left a higher-paying job to come teach in an area like this,’” Jacynda recalled. “We’re like, ‘What is that supposed to mean?’”
“He acts,” Gwen said with her patented eye roll, “like he’s God’s gift to Wellston.”
In truth, he was largely winging it.
Some 20 states, including a handful of red ones, have recently begun requiring students to learn that human activity is a major
cause of climate change, but few, if any, have provided a road map for how to teach it, and most science teachers, according
to one recent survey, spend at most two hours on the subject.
Chagrined to learn that none of his students could recall a school visit by a scientist, Mr. Sutter hosted several graduate
students from nearby Ohio University.
22|Page
On a field trip to a biology laboratory there, many of his students took their first ride on an escalator. To illustrate why some
scientists in the 1970s believed the world was cooling rather than warming (“So why should we believe them now?” students
sometimes asked), he brought in a 1968 push-button phone and a 1980s Nintendo game cartridge.
“Our data and our ability to process it is just so much better now,” he said.
In the A.P. class, Mr. Sutter took an informal poll midway through: In all, 14 of 17 students said their parents thought he was,
at best, wasting their time. “My stepdad says they’re brainwashing me,” one said.
Jacynda’s father, for one, did not raise an eyebrow when his daughter stopped attending Mr. Sutter’s class for a period in the
early winter. A former coal miner who had endured two years of unemployment before taking a construction job, he declined
a request to talk about it.
“I think it’s that it’s taken a lot from him,” Jacynda said. “He sees it as the environmental people have taken his job.”
And having listened to Mr. Sutter reiterate the overwhelming agreement among scientists regarding humanity’s role in global
warming in answer to another classmate’s questions — “What if we’re not the cause of it? What if this is something that’s
natural?” — Jacynda texted the classmate one night using an expletive to refer to Mr. Sutter’s teaching approach.
But even the staunchest climate-change skeptics could not ignore the dearth of snow days last winter, the cap to a year that
turned out to be the warmest Earth has experienced since 1880, according to NASA. The high mark eclipsed the record set just
the year before, which had eclipsed the year before that.
In woods behind the school, where Mr. Sutter had his students scout out a nature trail, he showed them the preponderance
of emerald ash borers, an invasive insect that, because of the warm weather, had not experienced the usual die-off that
winter. There was flooding, too: Once, more than 5.5 inches of rain fell in 48 hours.
The field trip to a local stream where the water runs neon orange also made an impression. Mr. Sutter had the class collect
water samples: The pH levels were as acidic as “the white vinegar you buy at a grocery store,” he told them. And the drainage,
they could see, was from the mine.
It was the realization that she had failed to grasp the damage done to her immediate environment, Jacynda said, that made
her begin to pay more attention. She did some reading. She also began thinking that she might enjoy a job working for the
Environmental Protection Agency — until she learned that, under Mr. Trump, the agency would undergo huge layoffs.
“O.K., I’m not going to lie. I did a 180,” she said that afternoon in the library with Gwen, casting a guilty look at her friend.
“This is happening, and we have to fix it.”
After fleeing Mr. Sutter’s classroom that day, Gwen never returned, a pragmatic decision about which he has regrets. “That’s
one student I feel I failed a little bit,” he said.
As an alternative, Gwen took an online class for environmental science credit; she does not recall it ever mentioning
climate change. She and Jacynda had other things to talk about, like planning a bonfire after prom.
As they tried on dresses last month, Jacynda mentioned that others in their circle, including the boys they had invited to prom,
believed the world was dangerously warming, and that humans were to blame. By the last days of school, most of Mr. Sutter’s
doubters, in fact, had come to that conclusion.
“I know,” Gwen said, pausing for a moment. “Now help me zip this up.”
A version of this article appears in print on June 5, 2017, on Page A1 of the New York edition with the headline: Obstacle for
Climate Science: Skeptical, Stubborn Students.
_______________________________________________________________________________________
23|Page
Reading #8: Why Fake News Spreads Like Wildfire on Facebook
Chicago Tribune
September 3, 2017
Mark Buchanan
How can we fight back against the fake news infecting our information feeds and political systems? New research suggests
that education and filtering technology might not be enough: The very nature of social media networks could be making us
peculiarly vulnerable.
The intentional spreading of false stories has been credited with swaying such monumental events as last year's Brexit vote in
Britain and the U.S. presidential election. Tech firms such as Alphabet Inc. unit Google and Facebook Inc. have been trying to
find ways to weed it out, or at least help users spot it. Some say we need to start earlier, educating children on how to think
critically.
But understanding the unique epidemiology of fake news may be no less important. Unlike a typical virus, purveyors of
falsehood don't have to infect people at random. Thanks to the wealth of information available on social media and the
advent of targeted advertising, they can go straight for the most susceptible and valuable victims — those most likely to
spread the infection.
This insight emerges from a recent study by three network theorists who ran computer simulations of the way fake news
moves through social networks. Using state-of-the-art machine learning algorithms, they examined how individuals might
learn to recognize false news and sought to identify the most important factors in helping fake news spread.
They found that the most important catalyst of fake news was the precision with which the purveyor targeted an audience —
a task that can easily be accomplished using the data that tech companies routinely gather and sell to advertisers. The key was
to seed an initial cluster of believers, who would share or comment on the item, recommending it to others through Twitter or
Facebook. False stories spread farther when they were initially aimed at poorly informed people who had a hard time telling if
a claim was true or false.
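The targeting effect is easy to see in a toy cascade model. The sketch below is a minimal construction for illustration only, not the researchers' actual simulation (which used machine-learning agents); the network layout, the belief probabilities, and every name in it (believe_prob, cascade, and so on) are invented assumptions. It posits a network split between poorly informed users, who usually believe what they see shared, and better-informed users, who usually do not.

# A toy cascade sketch, not the study's model: the same story reaches far
# more people on average when its initial seeds are aimed at users who are
# bad at spotting falsehoods. All numbers here are illustrative assumptions.
import random

random.seed(7)
N = 1000
# First half: poorly informed users, likely to believe a shared story;
# second half: better-informed users, likely to dismiss it.
believe_prob = [0.30] * (N // 2) + [0.05] * (N // 2)

# Each user is connected to six others: five from their own half
# (crude homophily) plus one drawn from the whole network.
def draw_connections(i):
    own_half = range(0, N // 2) if i < N // 2 else range(N // 2, N)
    return random.sample(own_half, 5) + [random.randrange(N)]

connections = [draw_connections(i) for i in range(N)]

def cascade(seed):
    # Spread outward from one seeded believer: every exposed user believes
    # (and re-shares) with probability equal to their own gullibility.
    believers, frontier = {seed}, [seed]
    while frontier:
        for c in connections[frontier.pop()]:
            if c not in believers and random.random() < believe_prob[c]:
                believers.add(c)
                frontier.append(c)
    return len(believers)

TRIALS = 200
random_reach = sum(cascade(random.randrange(N)) for _ in range(TRIALS)) / TRIALS
targeted_reach = sum(cascade(random.randrange(N // 2)) for _ in range(TRIALS)) / TRIALS
print(f"average reach, random seeding:   {random_reach:.0f} of {N} users")
print(f"average reach, targeted seeding: {targeted_reach:.0f} of {N} users")

In runs of this sketch, targeted seeding should reach substantially more users on average: a seed planted among the poorly informed almost always ignites a large cascade through that half of the network, while a random seed often lands among skeptics and dies out after a step or two.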
Hence, we've unwittingly engineered a social media environment that is inherently prone to fake news epidemics. When
marketers use information on surfing habits, opinions and social connections to aim ads at people with just the right interests,
this can facilitate beneficial economic exchange. But in the wrong hands, the technology becomes a means for the precision
seeding of propaganda.
It's hard to see how this can change without altering the advertising-centric business model of social media. One of the
network theorists, Christoph Aymanns, suggests that big social media companies could counteract fake news by preventing
advertisers from targeting users on the basis of political views, or even by suspending all targeted ads during election
campaigns. But this might be impossible, given how important such advertising has become to the economy. Alternatively,
opponents of fake news could use the same targeting technology to identify and educate the most vulnerable people — say,
providing them with links to information that might help them avoid being fooled.
The study does offer one positive conclusion: Broad awareness of fake news should tend to work against its success.
Campaigns were much less successful when individuals in the model learned strategies to recognize falsehoods while being
fully aware that purveyors were active. This suggests that public information campaigns can work, as Facebook's seemed to do
ahead of the French election in May.
In other words, fake news is like a weaponized infectious agent. Immunization through education can help, but it might not be
a comprehensive defense.
Mark Buchanan, a physicist and science writer, is the author of the book "Forecast: What Physics, Meteorology and the Natural
Sciences Can Teach Us About Economics."
24|Page
Reading #9: Plato predicted ‘Pizzagate’ (or rather, fake news more generally)
By David Lay Williams December 13, 2016
For conspiracy theorists, “Pizzagate” didn’t end when a man brought a gun to Comet Ping Pong, a Washington pizza restaurant, in a misguided attempt to rescue nonexistent child sex slaves. Some conspiracy theorists see Edgar Maddison Welch’s effort as the latest “false flag,” a coverup or distraction orchestrated by the government or other powerful figures. (Jose Luis Magana/AP)
Last week, a heavily armed man stormed a popular family pizzeria in Washington, on the theory that it harbored child sex
slaves. He shot the lock off an interior door and pointed his assault rifle at one of the pizza chefs before surrendering to police
— but not before searching for signs of an elaborate sex slave scheme at the restaurant.
Of course, there was no such insidious scheme. It was a fiction created through feverish chatter in social media, the latest in
an alarming trend of fake news stories.
Much has been written about these proliferating “news” stories and their possible effects on the 2016 election. As many have
noted, the increasing reliance on the Internet for news has made it harder to distinguish fact from fiction. And growing
political polarization leads many to seek confirmation of their biases rather than facts.
But something deeper is involved, as Plato can help us understand.
In perhaps the best-known pages from his celebrated “Republic,” Plato’s character Socrates asks his interlocutors to imagine
prisoners chained to a cave wall. They have lived there their entire lives, seeing nothing but the shadows on the wall.
Naturally, he observes, “what the prisoners would take for true reality is nothing other than the shadows.”
After one prisoner is released to the outside world, he comes to realize the difference between the shadows and reality.
The Allegory of the Cave can be read in many ways, including politically. The cave prisoners are epistemologically impaired,
unable to distinguish fact from fiction, in part because of external manipulations. Socrates’s account of the cave includes
powerful and manipulative “puppeteers” who cast the shadows that the prisoners mistake for reality. In Plato’s day, those
puppeteers were often “sophists” — skilled and well-compensated mercenary orators who, Plato alleges, constructed
arguments for money without regard for their truth or falsehood.
Today’s puppeteers include the purveyors of fake news. Like the sophists, fake-news authors are typically more interested in
profiteering than in informing citizens.
But manipulation for profit isn’t the only issue, according to Plato. He was a citizen of democratic Athens, the birthplace of
Western democracy, and knew democracy intimately. Democracies, he wrote, celebrate two political values above all: liberty
and equality. While democratic citizens instinctively rally around these ideas (and Plato is sensitive to their virtues), he also
fears that they can give birth to serious social pathologies.
How so? Liberty includes “freedom of speech,” he notes, and therefore the “license in it to do whatever one wants.”
Democratic citizens, in other words, are free to say both what is true and what is false.
Plato might not find this freedom of speech so problematic if it weren’t combined with a pervasive ethos of “equality [for]
equals and unequals alike,” as he puts it, in which opinions “are all alike and must be honored on an equal basis.” The noble
side of this equality is that no opinion should be dismissed just because of anyone’s class or origins. But there’s an ignoble
side, he emphasizes, that puts truth and falsehood on equal footing. And when citizens have the liberty to insist on
falsehoods, they’ll be competing, in the marketplace of ideas, with genuine, demonstrable truths.
And this democratic equality, Plato explains, tends to be especially dismissive of authority — the parents, teachers and
philosophers he thought should govern in a just society.
25|Page
We’ve all seen how lies can obscure the truth and how that combination of liberty and respect for equality has been
complicating public policy. Consider the overwhelming evidence that fossil fuel use is changing the climate, or the expert
research showing no connection between vaccinations and autism — and then the citizens who choose instead to believe
corporate interests, celebrities or conspiracy theorists, as if their speculations were equal to scientific expertise.
Fake news is just the latest manifestation. In a culture where speech is free and all “opinions” are equal, it gets harder and
harder for citizens to tell reputable from disreputable sources of news.
Distressingly, Plato tells us that this is a natural result of mature democracies.
Plato famously offers a solution in his “Republic”: Replace democracy with an aristocratic regime ruled by uniquely gifted and
elaborately trained philosophers who are professionals at telling truth from falsehood.
Even if that were desirable, this solution is as unlikely to be realized today as it was in ancient Athens. Plato offers more
appealing solutions in his later dialogue, “The Laws.” There he suggests a mixed constitution that borrows from many types of
regimes, including democracy and expert rule, allowing some especially well-informed “opinions” to get special consideration
in public deliberations. He also promotes an educational system that inculcates a love of the truth. “Truth is the leader of all
good things,” he writes, “for gods and of all things for human beings.”
If citizens make it a priority to pursue truth, our democracy has some hope. But Plato’s “Republic” advises us that, without
special interventions, the pathologies leading to fake news will be with us for a long time.
David Lay Williams is a professor of political science at DePaul University and the author of “Rousseau’s Platonic
Enlightenment” and essays on Plato’s political philosophy, and is currently writing a book on economic inequality in the history
of political thought.
26|Page
Reading #10: Facebook’s Role in Trump’s Win is Clear. No Matter What Mark Zuckerberg Says.
The Washington Post
By Margaret Sullivan
September 7, 2017
What a ridiculous notion, Mark Zuckerberg scoffed shortly after the election, that his social-media company — innocent, well-intentioned Facebook — could have helped Donald Trump’s win.
“Personally I think the idea that fake news on Facebook . . . influenced the election in any way — I think is a pretty crazy idea,”
he said. “Voters make decisions based on their lived experience.”
In fact, voters make their decisions based on many factors, not just their “lived experience.”
Disinformation spread on Facebook clearly was one — a big one. That was obvious in November. It was obvious in April when
Facebook, to its credit, announced some moves to combat the spread of lies in the form of news stories.
It’s even more obvious now after Wednesday’s news that Facebook sold ads during the campaign to a Russian “troll farm,”
targeting American voters with “divisive social and political messages” that fit right in with Donald Trump’s campaign strategy.
The news, reported Wednesday by The Washington Post, fits right in with the findings of a fascinating recent study by
Harvard’s Berkman Klein Center for Internet and Society. Analyzing reams of data, it documented the huge role that
propaganda, in various forms, played in the 2016 campaign.
“Attempts by the [Hillary] Clinton campaign to define her campaign on competence, experience, and policy positions were
drowned out by coverage of alleged improprieties associated with the Clinton Foundation and emails,” the study said.
The Trump campaign masterfully manipulated these messages. Truth was not a requirement.
And Facebook was the indispensable messenger. As the Harvard study noted: “Disproportionate popularity on Facebook is a
strong indicator of highly partisan and unreliable media.”
We don’t know everything about Facebook’s role in the campaign. What we do know — or certainly ought to know by now —
is to not take Facebook at its word. It always plays down its influence, trying for a benign image of connecting us all in a warm
bath of baby pictures, tropical vacations and games of Candy Crush.
The company recently changed its mission statement, as John Lanchester noted in a blistering takedown in the London Review
of Books, mocking the “canting pieties” of such corporate efforts. What used to be just a soft ideal of “making the world more
open and connected” is now giving people “the power to build community and bring the world closer together.”
The new mission statement didn’t specifically mention bringing Russia and the United States closer together. But Facebook
managed to accomplish that anyway.
Here’s an undeniable fact: Facebook is about advertising. And it is so wildly successful at leveraging our eyeballs and spending
power into ad dollars that it is now valued at nearly $500 billion.
But for all its power and wealth, Facebook is a terribly opaque enterprise. (It recently hired former New York Times public
editor Liz Spayd, a former Post managing editor, to help with “transparency.” Let’s just say that she has her work cut out for
her.)
Facebook also has never acknowledged the glaringly obvious — that it is essentially a media company, where many of its 2
billion active monthly users get the majority of their news and information. As I’ve been pointing out here for more than a
year, it constantly makes editorial decisions, but never owns them.
When its information is false, when it is purchased and manipulated to affect the outcome of an election, the effect is
enormous. When the information purveyors are associated with a foreign adversary — with a clear interest in the outcome of
the American election — we’re into a whole new realm of power.
27|Page
Would Donald Trump be president today if Facebook didn’t exist? Although there is a long list of reasons for his win, there’s
increasing reason to believe the answer is no.
I don’t know how to deal with Facebook’s singular power in the world. But having everyone clearly acknowledge it — including
the company itself — would be a start.
For more by Margaret Sullivan visit wapo.st/sullivan
Margaret Sullivan is The Washington Post’s media columnist. Previously, she was The New York Times public editor, and the
chief editor of The Buffalo News, her hometown paper.
Follow @sulliview
_______________________________________________________________________________________
28|Page
Reading #11: 4 Reasons Why People Ignore Facts and Believe Fake News
Business Insider
Michael Shermer
Mar. 18, 2017
The new year has brought us the apparently new phenomena of fake news and alternative facts, in which black is white, up is
down, and reality is up for grabs.
The inauguration crowds were the largest ever. No, that was not a “falsehood,” proclaimed Kellyanne Conway as she defended Sean Spicer’s inauguration attendance numbers: “our press secretary…gave alternative facts to that.”
George Orwell, in fact, was the first to identify this problem in his classic “Politics and the English Language” (1946). In the essay, Orwell explained that political language “is designed to make lies sound truthful” and consists largely of “euphemism, question-begging and sheer cloudy vagueness.”
But if fake news and alternative facts are not a new phenomenon, and popular writers like Orwell identifi...