Synthese (2012) 184:247–259
DOI 10.1007/s11229-010-9773-8
Is knowledge justified true belief?
John Turri
University of Waterloo, Waterloo, ON, Canada; e-mail: john.turri@gmail.com
Received: 27 April 2010 / Accepted: 5 August 2010 / Published online: 21 August 2010
© Springer Science+Business Media B.V. 2010
Abstract Is knowledge justified true belief? Most philosophers believe that the
answer is clearly ‘no’, as demonstrated by Gettier cases. But Gettier cases don’t obviously refute the traditional view that knowledge is justified true belief (JTB). There are
ways of resisting Gettier cases, at least one of which is partly successful. Nevertheless,
when properly understood, Gettier cases point to a flaw in JTB, though it takes some
work to appreciate just what it is. The nature of the flaw helps us better understand the
nature of knowledge and epistemic justification. I propose a crucial improvement to
the traditional view, relying on an intuitive and independently plausible metaphysical
distinction pertaining to the manifestation of intellectual powers, which supplements
the traditional components of justification, truth and belief.
Keywords Knowledge · Gettier problem · Epistemic justification ·
Manifestation · Stephen Hetherington · Brian Weatherson
“The explication of knowledge as ‘justified true belief’, though it involves many pitfalls[,] . . . is,
I believe, essentially sound.”
– Sellars (1975, p. 99)
1 The end of an era
The textbooks tell us that Edmund Gettier’s (1963) paper “Is Justified True Belief
Knowledge?” changed the course of epistemology by refuting the traditional view
that knowledge is justified true belief (hereafter JTB) (e.g. Chisholm 1989, 90 ff.;
Moser 1992; Feldman 2003, 25 ff.). Gettier produced two cases wherein intuitively
the subject gains a justified true belief but fails thereby to know, demonstrating that
justified true belief does not suffice for knowledge. Examples in this mold we call Gettier cases. Gettier was not the first to produce Gettier cases, but that needn’t concern
us here.1
Gettier cases follow a recipe (Zagzebski 1996, pp. 288–289; compare Sosa 1991,
p. 238). Start with a belief sufficiently justified to meet the justification requirement
for knowledge. Then add an element of bad luck that would normally prevent the
justified belief from being true. Lastly add a dose of good luck that “cancels out the
bad,” so the belief ends up true anyhow. It has proven difficult to explain why this
“double luck” precludes knowledge.
Consider this classic Gettier case.
(HUSBAND) Mary enters the house and looks into the living room. A familiar appearance greets her from her husband’s chair. She thinks, “My husband
is home,” and then walks into the den. But Mary misidentified the man in the
chair. It was not her husband but his brother, who she had no reason to think was
even in the country. However, her husband was seated along the opposite wall of
the living room, out of Mary’s sight, dozing in a different chair. (Adapted from
Zagzebski 1996, pp. 285–286)2
Virtually all epistemologists intuit that Mary has a justified true belief, but does not
know, that her husband is home.3 Many regard HUSBAND and its ilk as obvious
counterexamples to JTB.
Consider also this case.
(BARN) Henry and his son are driving through the country. Henry pulls over
to stretch his legs, and while doing so regales his son with a list of items currently in view along the roadside. “That’s a tractor. That’s a combine. That’s a
horse. That’s a silo. And that’s a fine barn,” Henry adds, pointing to the nearby
roadside barn. It is indeed a fine barn Henry sees. But unbeknownst to them, the
locals have recently and secretly replaced nearly every barn in the county with
papier-mâché fake barns. Henry happens to see the one real barn in the whole county.
Had he instead set eyes on any of the numerous nearby fakes, he would have
falsely believed it was a barn. (Adapted from Goldman 1976, pp. 172–173)4
Henry has a justified true belief, though intuitions divide over whether he knows.5 But
many regard BARN as a counterexample to JTB.6
1 Matilal 1986, pp. 135–137 teaches us that the classical Indian philosopher Sriharsa constructed similar
examples in the 1100s to confound his opponents. Chisholm 1989, pp. 92–93 reminds us that Meinong and
Russell produced similar examples earlier in the twentieth century.
2 The case resembles Chisholm’s (1989, p. 93) sheep-in-the-field example.
3 Stanley 2005 denies that a Gettier subject’s belief is justified.
4 Goldman attributes the case to Carl Ginet.
5 Lycan 2006 says Henry does know, and Sosa 2007 defends a view that entails as much (see his treatment
of the “kaleidoscope perceiver” and “jokester” in Chaps. 2 and 5).
6 Some claim that the fake-barn case is not really a Gettier case, even though it appears to threaten JTB.
I set aside this worry here.
It will be convenient to represent the argument against JTB as follows:
(Anti-JTB)
1. If JTB is true, then the Gettier subject knows.
2. The Gettier subject does not know.
3. So JTB is not true. (From 1 and 2)
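The inference itself can be checked mechanically. Here is a minimal sketch in Lean (the names `anti_jtb`, `JTB`, and `GettierKnows` are illustrative labels of mine, not anything from the literature), treating the two premises as hypotheses and deriving the conclusion:

```lean
-- Anti-JTB as a propositional argument: premise 1 (h1) and
-- premise 2 (h2) jointly entail conclusion 3 by modus tollens.
theorem anti_jtb (JTB GettierKnows : Prop)
    (h1 : JTB → GettierKnows)  -- 1. If JTB is true, the Gettier subject knows.
    (h2 : ¬GettierKnows)       -- 2. The Gettier subject does not know.
    : ¬JTB :=                  -- 3. So JTB is not true.
  fun hJTB => h2 (h1 hJTB)
```

The formalization makes vivid that the argument’s validity is not in question; everything turns on the premises, and premise 2 in particular is what the defenses considered below contest.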
Here is the plan for the rest of the paper. Sections 2 and 3 critically evaluate recent defenses of JTB, due to Stephen Hetherington and Brian Weatherson. Section 4 presents
a new defense of JTB. Section 5 explains why knowledge is nevertheless not justified
true belief. Section 6 shows how very close JTB came to getting it right.
2 Near failure
Stephen Hetherington rejects premise 2 (Hetherington 1999).7 He contends that a Gettier subject knows despite coming perilously close to not knowing, and supplements this by
diagnosing intuitions to the contrary.
Because we are imperfect, much of our knowledge is fallible. Fallible knowing is
a species of failable knowing. You failably know Q just in case (a) you know that Q,
but (b) the following disjunction is true: either (i) your belief and your justification for
Q together are consistent with Q’s non-truth, or (ii) Q’s truth and your justification for
Q together are consistent with your not believing Q, or (iii) Q’s truth and your belief
together are consistent with your lacking justification for Q. Disjunct (i) corresponds
to fallible knowing.8
Failability comes in degrees. Let a failing world be one where we have only two
of the three aforementioned conditions for knowledge—i.e. B (belief) and J (justification) but not T (truth), or T and J but not B, or T and B but not J. If you actually
failably know Q, then the actual world is not a failing world.9 But how easily might it
have turned out to be a failing world? The easier it might have, the more failable your
knowledge is. In other words, the closer the closest world where you lack one of J, T
or B, the more failable your knowledge.
Having thus set the stage, Hetherington’s first key move is to claim, “If failable
knowing can be more or less failable, then in principle there can be instances of failable knowing which are very failable” (Hetherington 1999, p. 571). If this is true, then
the way lies open to interpret Gettier cases as instances of very failable knowledge,
but knowledge nonetheless.
7 Hetherington may ultimately wish to defend only the claim that justified true belief suffices for knowledge,
rather than JTB itself. See Hetherington 1999, p. 174 and 2007.
8 Reed 2002 (esp. 155 n. 21) argues that Hetherington’s characterization of fallible knowing is incorrect.
I set aside this worry here. Knowledge failable via (ii) or (iii) lacks a familiar name; we might call it ‘diffident’ or ‘dogmatic’ respectively, though these labels seem appropriate only when you might easily have
lacked belief or justification while satisfying the other two conditions.
9 More fully spelled out, the actual world is not a failing world relative to you, Q, and the present time. And
technically worlds are too coarse-grained for Hetherington’s purposes; centered worlds are more appropriate. Since nothing turns on these details I set them aside. Hetherington uses ‘failure world’ rather than
‘failing world’.
Hetherington’s second key move is to diagnose philosophers’ intuitions to the contrary (Hetherington 1999, 575 ff.). They sense that typical instances of knowledge
differ in quality from what we observe in Gettier cases. They interpret this difference
as that between knowing and not knowing. But here they go wrong. The difference is
rather that between failably knowing and very failably knowing. The double-luck in
Gettier cases ensures that the Gettier subject very easily might have lacked T despite
having B and J. But by the same token the double-luck ensures that the Gettier subject
actually has T, B and J. In a word, Gettier subjects don’t fail to know—they very
failably know (or, as I will sometimes put it, they just barely know).
I emphasize that Hetherington does not claim to have proved Gettier subjects know.
Rather he claims to have motivated a principled alternative interpretation of Gettier
cases, consistent with JTB’s truth. If there are criteria for knowledge, then presumably
it is possible to just barely meet them. Why not say that Gettier subjects just barely
know rather than fail to know?10
Some philosophers respond that they aren’t mistaken in the way Hetherington suggests (e.g. Lycan 2006, p. 162). The basic intuition is just that the Gettier subject
doesn’t know. It does not take a detour through some indeterminate qualitative difference from a normal case of knowledge, which difference is then glossed as that
between knowing and not knowing.
A different response tests whether the supposed mistake occurs in other cases
of very failable knowledge, as Hetherington’s diagnosis would lead us to expect. It
would discredit Hetherington’s diagnosis if we don’t mistake just barely knowing for
not knowing in these other cases.
Consider this case:
(CAT) Catherine sits on her patio, contemplating the recent tragic news. She
perceives a cat slinking in the yard’s corner, and on that basis believes there is a
cat in the yard. But the tragic news so preoccupies Catherine that she could very
easily have failed to form that belief, though it would still have been true (the
cat would still be there) and justified for her (she still would have perceived it).
Catherine clearly knows there is a cat in the yard. But she just barely knows (there is
a very nearby failing world lacking B despite the presence of T and J). Yet this does
not confuse us. We do not mistake her just barely knowing for not knowing.
We can deepen this objection by extending Hetherington’s characterization of failing worlds. Recall that a failing world is one where only two of J, T and B are present.
Let a double-failing world be one where only one of them is present. A nearby double-failing world should make for greater failability than a nearby failing world, and
so make the subject come even closer to not knowing. If Hetherington’s diagnosis is
correct, then we should be even more liable to mistake these cases for not knowing.
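Hetherington’s taxonomy, as extended here, can be stated crisply. The following Lean sketch (the function and predicate names are my own illustrative labels) encodes a world by which of J, T and B it supplies, and counts how many are present:

```lean
-- Hypothetical encoding: j, t, b record whether justification,
-- truth and belief are present at a (centered) world.
def present (j t b : Bool) : Nat :=
  j.toNat + t.toNat + b.toNat

-- A failing world has exactly two of the three conditions;
-- a double-failing world has exactly one.
def failingWorld (j t b : Bool) : Prop := present j t b = 2
def doubleFailingWorld (j t b : Bool) : Prop := present j t b = 1

-- E.g. the nearby world in CAT (T and J present, B absent) is a
-- failing world:
example : failingWorld true true false := rfl
```

Degrees of failability would then be modeled by how close the nearest world satisfying `failingWorld` (or `doubleFailingWorld`) lies; the Boolean sketch leaves that modal distance unformalized.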
10 Compare also Hetherington 1998, p. 456, where he urges us to not commit the “epistemic counterfactuals fallacy”—that is, to not infer that you actually don’t know from the fact that you counterfactually err in a
very close possible world. Neta and Rohrbaugh 2004, p. 401 likewise caution us against overestimating the
significance of threats that “remain purely counterfactual.” Of these they say, “Even though things could
have gone epistemically less well, and almost did go epistemically less well, in point of fact, the threat was
avoided and the actual case remains epistemically unproblematic.” (They deny that Henry knows, however,
because he faces actual imminent epistemic threats.)
But we do not make that mistake. Consider this case:
(PAIN) The automatic door improbably malfunctions and closes prematurely,
striking Dora hard on her ankle. This causes excruciating pain, on which basis
Dora believes that she is in serious pain. But very easily the door could have
delivered a mere glancing blow, causing only very minor discomfort rather than
pain. Had it done so, due to hypochondria Dora still would have believed that
she was in serious pain. (Adapted from Sosa 2007, p. 26.)
Dora clearly knows that she is in serious pain. But her knowledge is very failable: very
easily she might have had the belief without its being true or justified. Double-failing
worlds lurk nearby. Yet this does not confuse us.
Consider also this case:
(SQUAD) Courtney has been court-martialed and sentenced to death by firing
squad. She refuses the blindfold so she can look her executioners in the eye. All
ten of them aim their rifles at her. The commanding officer takes an unsteady
breath and yells, “Fire!” Nearly impossibly, in unison all ten rifles click but fail
to discharge. Courtney, who had gracefully maintained her nerve throughout,
laughs aloud, “I am lucky to still be alive.” (Compare the “turtle watcher” case
in Unger 1968, p. 160.)
Courtney clearly knows that she is still alive. But in most nearby worlds she is dead,
and so neither believes, nor has justification for believing, that she is alive. Double-failing (indeed, triple-failing) worlds lurk nearby. Yet again this does not confuse us.
In light of these problems for Hetherington’s diagnosis, it pays to reconsider his
claim, “If failable knowing can be more or less failable, then in principle there can
be instances of failable knowing which are very failable.” Hetherington’s opponents
could concede this, but insist that just barely knowing is possible in some ways but
not others. Gettier cases exemplify the latter sort; CAT, PAIN and SQUAD exemplify
the former. Knowledge, they might say, is consistent with some but not all types of
near-miss. Defenders of JTB must look elsewhere for aid in overcoming Gettier cases.
3 Counting costs
Brian Weatherson also suggests rejecting premise 2. He concedes that intuition favors
2, but contends that broader theoretical considerations may favor rejecting it.
Weatherson proposes two main criteria for judging a philosophical theory (Weatherson 2003, p. 11).11 First, to what extent does it respect our pretheoretical intuitions?
Second, how simple and systematic is it? The first criterion is relatively straightforward. You check to see how often it agrees with our intuitions about actual and
possible cases. You also check to see whether it possesses resources to effectively
explain away the disagreement for the cases where it disagrees. (Just as a friend can
11 Initially Weatherson (2003, pp. 8–10) distinguishes four criteria. I’m counting them differently. I’m
also ignoring some subtleties he mentions concerning property “naturalness,” though my presentation of
the second criterion seems to capture its most important aspects.
sometimes show you respect by properly correcting you, so can a theory respect our
intuitions by occasionally correcting them.) As for the second criterion, short and
clear analyses are simple; analyses that illuminate a concept’s relationships with other
“significant” concepts are systematic.
JTB gives the intuitively wrong verdict in Gettier cases. But this is not decisive
because no theory can respect all our pretheoretical intuitions (Weatherson 2003,
p. 24). Moreover JTB might possess resources for explaining away its disagreement
with intuition in these cases. And even if it possesses no such resources, it is simple
and it might be significantly simpler and more systematic than its rivals, enough so to
outweigh any disadvantage due to Gettier cases.
So here we have a framework for rescuing JTB. Unfortunately Weatherson stops
short of fully deploying it. He discusses JTB to illustrate a point about philosophical
method (i.e. counterexamples’ significance), and so goes no further than claiming it
is “prima facie plausible” that JTB excels on the second criterion (Weatherson 2003,
pp. 2, 11, 27). Nevertheless we have seen enough to judge that his strategy will not
succeed.
Consider the view that knowledge is genuinely undefeated justified true belief
(GUJTB) (Klein 1971, 1976).12 While obviously closely related to JTB, it adds further necessary conditions. We need not go into detail about these further conditions,
but suffice it to say that they cause it to return the intuitively correct verdict in standard
Gettier cases. So it outperforms JTB on the first criterion. As for the second criterion,
it sacrifices some simplicity to better match our intuitions; but it is guaranteed to be
more systematic than JTB. Whatever illuminating relationships JTB reveals between
knowledge and other significant concepts, GUJTB will illuminate all of those plus
knowledge’s relationship to defeasibility. Thus Weatherson’s calculus favors GUJTB
over JTB.13
An analogous point is true of my positive proposal presented in Sects. 5 and 6
below. My proposal also adds a further necessary condition to the analysis of knowledge, which reveals its relationship to a concept fundamental to our understanding of
the world, namely, that of an outcome manifesting a disposition. More on all of this
later.
Thus while Weatherson’s methodological observations are plausible, they offer little
solace to JTB’s defenders.
12 Sometimes Klein restricts his definition to inferential knowledge, but that qualification needn’t concern
us.
13 A defender of JTB might object, as an anonymous referee suggested, that Weatherson’s calculus does
no such thing, because JTB does reflect knowledge’s relation to defeasibility; it’s just that, according to
JTB, the relation is superficial. In response, however superficial the relation is, so long as there is one, JTB
fails to reflect it, because it simply ignores defeasibility. So for this objection to clearly succeed, there would
have to be no relationship between defeasibility and knowledge. But then the objection amounts to little
more than simply denying that GUJTB is true. I also worry that responding this way would trivialize the
criterion of being systematic, since a view could always be defended by saying, “No, my view is systematic
because it reveals the relationship between X and Y, namely, that there is none!” In the final analysis, then,
I don’t think this objection succeeds.
4 Good and bad
A new argument defending JTB is that “bad” counterparts of Gettier subjects know Q,
and these bad counterparts know Q only if Gettier subjects know Q, so Gettier subjects
know Q.
Meet Bad Henry.
(HOOLIGAN) Bad Henry is a hooligan who does bad things. He wants to
destroy a barn. He will destroy a barn. He drives out into the country to find one.
He pulls over after an hour, retrieves his bazooka, and takes aim with unerring
accuracy at the roadside barn he sees. Calm, cool and collected as he pulls the
trigger, he thinks, “That sure is a nice barn . . . now was a nice barn — ha!”
He destroyed the barn. He feels no remorse. He is forever after known as “Bad
Henry, bane of barns.” He is bad — very bad.
Bad Henry knows he is destroying a barn as he pulls the trigger. To know that, he had
to know it was a barn as he took aim. So he did know it was a barn.
Now we add the twist: Bad Henry was in Fake Barn Country and just happened to
shoot at the only barn around. Indeed Bad Henry destroyed the very barn that Good
Henry gazed upon earlier that same day, from the very spot that Good Henry stood
gazing. All the other “barns” were papier-mâché fakes. Nevertheless the intuition remains: Bad
Henry knew he was destroying a barn. So he knew it was a barn.
It’s very plausible to suppose that Bad Henry knows it’s a barn only if Good Henry
knows it’s a barn. Bad Henry does know it’s a barn. So Good Henry knows too.
Meet Bad Mary.
(CHIP) Bad Mary is a vindictive sadist who wants to cause her hapless husband,
Benedict, excruciating pain. She will cause him excruciating pain. Drawing on
her unsurpassed electrical engineering skills, she designs and secretly implants
in Benedict’s neck a small chip that administers excruciatingly painful electrical
shocks. She places the device’s control panel in the den, with transponders in
every room in the house. The device works only if Benedict is home, as Mary
well knows.
Presently Mary enters the house and looks into the living room. A familiar
appearance greets her from Benedict’s chair. She thinks, “My worthless lazy
husband is home – now’s my chance!” and dashes into the den. Eagerly she
activates the chip, which unleashes wave after wave of excruciating electrical
shocks through Benedict’s body. She revels in his frantic screams emanating
from the living room.
She electrocuted her husband. She feels no remorse. She is forever after known
as “Bad Mary, bane of Benedict.” She is bad – very bad.
Bad Mary knew she was electrocuting her husband as she triggered that device. To
know that, she had to know her husband was home. So she did know he was home.
Now we add the twist: Bad Mary misidentified the man in the chair. It isn’t Benedict
but his brother, whom she had no reason to think was even in the country. Benedict was
dozing in a different living-room chair, out of Mary’s sight. Nevertheless the intuition
remains: Bad Mary knew she was electrocuting her husband. So she did know he was
home.
It’s very plausible to suppose that Bad Mary knows only if Good Mary (the protagonist of HUSBAND) knows. Bad Mary does know. So Good Mary knows too.
Generalizing this line of thought, we may conjecture that for any Gettier subject,
there is a relevantly similar “bad” counterpart who does know, which counterpart
knows only if the Gettier subject knows. So the Gettier subject knows.14
Why think that the bad counterpart knows only if the Gettier subject knows? Because
they have exactly the same evidence, which they use exactly the same way. If it’s good
enough for the one to know, it’s good enough for the other.15
Of course, a critic might say that this cuts both ways, and so we should conclude that
since it isn’t good enough for the Gettier subject to know, then it isn’t good enough for
the bad counterpart to know either. (As the saying goes, one person’s modus ponens
is another’s modus tollens.) But I doubt that JTB’s critics would pretend that it’s as
obvious to them that we should apply modus tollens here, as it is that the original Gettier subjects don’t know. In other words, we’ve shifted to terrain much more favorable
to JTB. JTB’s lonely defenders will certainly view this as progress.
One response to the cases is to liken the claim that Bad Mary knew she was electrocuting her husband (or the claim that Bad Henry knew he was destroying a barn) to the
claim that Medieval European peasants knew that the Earth was flat.16 The latter claim
is either false or involves a sense of ‘know’ different from the sense epistemologists
are interested in. But this maneuver doesn’t succeed. On the one hand, if it’s false that
Medieval peasants knew the Earth was flat, then that’s because it was false that the
Earth was flat, and you can’t know false things. But Gettiered beliefs are by definition
true, so it’s unclear why we should treat the two cases similarly. On the other hand, if
it’s a true claim involving a different and epistemologically irrelevant sense of ‘know’,
then that sense of ‘know’ is non-factive (otherwise, the claim that the peasants knew
would be false). But again, Gettiered beliefs are by definition true, so it’s unclear why
we would employ the non-factive sense to describe the Gettier subject. Much more
would need to be said in order to make the analogy plausible.
Some people have suggested to me that in the context of evaluating action, ‘know’
might be ambiguous between the ascription of a cognitive achievement on the one
hand, and intentional action on the other.17 When we say that Bad Henry knew he was
blowing up a barn, we mean that he intentionally blew up a barn, but this doesn’t entail
that he knew it was a barn. This response will seem ad hoc unless it’s backed up by
14 Beebe and Buckwalter (2010) report experimental results suggesting that people “are more likely to say
that agents know their actions will bring about certain side-effects, if the effects are bad than if they are
good,” which is analogous to Knobe’s (2003) interesting finding that people are more likely to say that an
agent intentionally brought about a bad side-effect than a good one. Beebe and Buckwalter’s work supports
the claim that Bad Henry knows he is destroying a barn, and that Bad Mary knows she is electrocuting her
husband.
15 I imagine that those attracted to the view that your “practical environment” can affect what you know
might have principled grounds for disagreeing with the claim expressed by this conditional. See e.g. Fantl
and McGrath 2002, 2007; Hawthorne 2004, Chap. 4; and Stanley 2005, Chap. 5.
16 Thanks to Allan Hazlett for conversation on this point.
17 As E.J. Coffman and Sharifa Mohamed independently suggested to me.
actual linguistic evidence of ambiguity. The response also strikes me as implausible,
since this conjunction sounds terrible: ‘Henry knows he’s destroying a barn, but he
doesn’t know it’s a barn’. The most obvious explanation for why it sounds terrible is
that it’s contradictory. Henry’s knowing that he’s destroying a barn straightforwardly
entails his knowing that it’s a barn. Perhaps there is some other explanation of the
conjunction’s infelicity, but it’s the critic’s burden to produce it.
A better response to the argument would be to say that the moral dimension of
the “bad” cases causes a performance error by polluting our intuitions about whether
the bad people know. We recognize that these bad people deserve strong censure for
their actions, and one way of accentuating the blame they deserve is to say that they
“knew” that they were about to do something bad. It makes it sound worse to say that
someone knew his action would have a certain awful consequence than to say that
he was justified in thinking that his action would have that awful consequence. The
knowledge-ascription is literally false, but serves a purpose.
This last response strikes me as having some merit. But if it’s the best response
available, then JTB’s proponents will consider themselves to have made real progress.
They will have gone from confronting what was supposed to be an irresistible counterexample to JTB, to confronting a somewhat plausible, but by no means clearly
successful, attempt to undermine intuitions about the bad counterparts.
Even though I agree that JTB is subtly wrong (in a way I’ll explain momentarily),
I still think that the argument reviewed in this section ought to make us reconsider the
intuition that the Gettier subject doesn’t know. We should be less confident now than
we were before that the Gettier subject doesn’t know. To that extent, the argument
succeeds. And given how widely and unquestioningly accepted the intuition is, this is
a noteworthy development.
5 Why knowledge is not merely justified true belief
In these final two sections, I would like to suggest a way of evaluating JTB once we’ve
given up on the claim that it suffers irredeemably from counterexamples. There are
always ways to resist purported counterexamples. Although I do think JTB is on the
right track, I also think it’s false because it omits something important about the nature
of knowledge. The fact that it overlooks something essential to knowledge naturally
leads us to suspect that it will face genuine counterexamples. But counterexamples
are diagnostic tools. They can suggest that an analysis has gone wrong, without necessarily telling us where it goes wrong. An account of the underlying problem is more
valuable than an example suggesting that such a problem exists. I’ll now try to provide
such an account.
Start with the observation that gaining knowledge involves the exercise of intellectual abilities or powers (compare e.g. Aristotle’s Posterior Analytics; Reid 1764,
1785; Kant 1781; Sellars 1956). We gain perceptual knowledge in virtue of exercising our perceptual abilities. We gain knowledge of necessary truths, and knowledgeably deduce consequences therefrom, in virtue of exercising our rational powers. We
gain introspective knowledge of our own mental condition in virtue of exercising our
reflective powers. As far as I can tell, there is no instance where knowledge is gained
without the subject exercising her intellectual powers.18
Abilities and powers are dispositions.19 For an outcome to manifest a disposition,
it isn’t enough for the outcome to be due to the disposition in just any way. We have
fairly consistent intuitions about this across a broad range of cases. Consider this pair
of cases:
(BROKE) I sat at the table feeding baby Mario his breakfast. Unfortunately my
glass of orange juice was within reach, and Mario swatted it off the table. Spoon
in one hand, baby in the other, I helplessly watched the glass tumble down, down,
down. It broke.
(PACKED) My wife and I were excited as we packed our belongings to move
into our new home. “Remember, we must carefully pack this glass, because it is
very fragile,” said my wife, holding up her favorite glass. I followed her advice.
The glass was carefully packed.
In each case the outcome is in some way due to the glass’s fragility. (Neither outcome obtains only because of fragility—in BROKE Mario and the floor help out, in
PACKED my effort also contributes—but that doesn’t spoil the point.) Yet we all recognize an important difference: the outcomes are not due in the same way to fragility.
In BROKE the glass broke because it was fragile, and its breaking manifests its fragility. In PACKED the glass was carefully packed because it was fragile, but its being
carefully packed does not manifest its fragility. Breaking is the right sort of outcome
to manifest fragility, but being carefully packed isn’t.
Consider also this pair of cases.
(BOIL) You place a cup of water in the microwave and press start. The magnetron generates microwaves that travel into the central compartment, penetrate
the water and excite its molecules. Soon the water boils.
(FIRE) You place a cup of water in the microwave and press start. The magnetron generates microwaves that cause an insufficiently insulated wire in the
control circuit to catch fire, which fire deactivates the magnetron and spreads to
the central compartment. Soon the water boils.
Both outcomes are in some way due to the microwave’s boiling power. But again,
we all recognize an important difference. The outcome in BOIL manifests the microwave’s boiling power, whereas the outcome in FIRE does not. We have a plain way
to mark the distinction in ordinary language: in the former case, but not the latter, the
microwave boils the water.
The pairs of examples highlight a general distinction between (A) an outcome happening merely because of a disposition and (B) an outcome manifesting a disposition.
18 Would innate knowledge be a counterexample to this claim? Not really, because it doesn’t seem like
we gain innate knowledge. Rather, we’re created with it. And we then retain it via our power of memory.
19 Or if they’re not types of disposition, they still share with dispositions the crucial feature I’m concerned
with below, namely, that certain outcomes manifest them. This is all that matters for present purposes, so
for simplicity in the main text I will continue mainly speaking of dispositions.
No metaphysical theory teaches us this distinction. We understand it perfectly well
pretheoretically.
Since the distinction between A and B is a perfectly general one, it applies to intellectual powers and abilities too. One central outcome of the exercise of intellectual
powers is the formation of true beliefs. We can ask how such an outcome—namely, the
formation of a true belief—relates to the relevant intellectual power. For instance, we
can ask whether it manifests the relevant power. If it fails to do so, then we wouldn’t
expect the subject to gain knowledge, just as in FIRE the microwave doesn’t boil the
water.
Notice a striking similarity between the “double luck” recipe for generating Gettier cases and what happens in FIRE. FIRE exemplifies that exact same pattern. The
microwave initiates a process that would normally result in the water’s boiling. Bad
luck strikes: the magnetron is disabled, which would normally result in the water not
boiling. But then “good” luck strikes: the damaged circuit starts afire, resulting in
the water’s boiling anyhow. Double luck prevents the outcome from manifesting the
relevant power. Exactly the same thing happens in Gettier cases. This should increase
our confidence that we’ve correctly identified a more general pattern of failure that
Gettier cases belong to.20
Given that JTB ignores the central role of intellectual powers in the production of
knowledge, it’s no wonder that the most difficult cases facing JTB involve a defective
relationship between a subject’s intellectual powers and the true belief that their operation helps produce. Of course, there are clever ways of trying to offset the intuitive
cost JTB incurs as a result of Gettier cases, several of which we considered earlier. But
none of those strategies can offset the fact that JTB misses something fundamentally
important about the nature of knowledge, namely, the particular way that intellectual
powers must relate to true beliefs, and how that fits into a more general pattern relating
dispositions to outcomes.
Notice that even though my diagnosis of JTB’s omission is motivated on grounds
independent of Gettier’s discussion, it nevertheless can explain why Gettier cases pose
trouble for JTB. Indeed, equipped with the view that knowledge is true belief manifesting intellectual power, one could predict that JTB will give the wrong verdict precisely
in Gettier-style cases. Combine this with the fact that the proposal emerges naturally
from a fairly innocent observation shared among a diverse group of distinguished
philosophers, and it becomes clear that the proposal merits serious consideration.
6 Explaining JTB’s persistent appeal
Wilfrid Sellars once remarked that despite the Gettier problem, he still thought JTB
was “essentially sound.” The traditionalists, Sellars thought, were basically right when
they said, “K = JTB.”
If we make but one assumption, then my proposal would vindicate both Sellars and the tradition by showing JTB to be almost right. Traditionalists had all the right pieces in place, but failed to fully appreciate how they must be interrelated to yield knowledge.
20 For more discussion, see Turri (forthcoming).
Assume that a person’s intellectual powers or abilities are the source of epistemic
justification for her.21 Given that assumption, to say ‘knowledge is true belief manifesting intellectual power’ is basically to say K = J→TB, where the arrow represents the relation of manifestation. This addition constitutes a small but crucial improvement on the traditional view, relying only on an intuitive and independently plausible
metaphysical distinction to supplement the traditional components.
I think this speaks greatly in favor of the assumption that intellectual powers are the
source of epistemic justification. That is, the fact that the assumption reveals JTB to
be almost right counts in its favor. There’s a reason why so many smart people found
JTB so attractive for so long. Once we’re convinced that JTB is false, the next most
plausible explanation for the attraction is that JTB is close to being true. And if my
analysis here is correct, then JTB hit very close to the mark indeed.
Acknowledgments For helpful conversation and feedback on relevant material, I am happy to thank
Margaux Carter, E.J. Coffman, John Greco, Allan Hazlett, Stephen Hetherington, Sharifa Mohamed, Ram
Neta, Duncan Pritchard, Andrew Rotondo, Ernest Sosa, Angelo Turri, and an anonymous referee.
References
Aristotle. (1941). Posterior analytics. In R. McKeon (Ed.) The basic works of Aristotle (G. R. G. Mure,
Trans.). New York: Random House.
Beebe, J., & Buckwalter, W. (2010). The epistemic side-effect effect. Forthcoming in Mind and Language. Accessed 23 July 2010.
Bloomfield, P. (2000). Virtue epistemology and the epistemology of virtue. Philosophy and Phenomenological Research, 60(1), 23–43.
Chisholm, R. M. (1989). Theory of knowledge (3rd ed.). Englewood Cliffs, NJ: Prentice Hall.
Fantl, J., & McGrath, M. (2002). Evidence, pragmatics, and justification. The Philosophical Review,
111(1), 67–94.
Fantl, J., & McGrath, M. (2007). On pragmatic encroachment in epistemology. Philosophy and
Phenomenological Research, 75(3), 558–589.
Feldman, R. (2003). Epistemology. Upper Saddle River, NJ: Prentice Hall.
Gettier, E. (1963). Is justified true belief knowledge? Analysis, 23(6), 121–123.
Goldman, A. (1976). Discrimination and perceptual knowledge. The Journal of Philosophy, 73(20),
771–791.
Greco, J. (1993). Virtues and vices of virtue epistemology. Canadian Journal of Philosophy, 23, 413–432.
Hawthorne, J. (2004). Knowledge and lotteries. Oxford: Oxford University Press.
Hetherington, S. (1998). Actually knowing. The Philosophical Quarterly, 48(193), 453–469.
Hetherington, S. (1999). Knowing failably. The Journal of Philosophy, 96(11), 565–587.
Hetherington, S. (Ed.). (2006). Epistemology futures. Oxford: Oxford University Press.
Hetherington, S. (2007). Is this a world where knowledge has to include justification? Philosophy and
Phenomenological Research, 75(1), 41–69.
Kant, Immanuel. 1781 [1996]. Critique of pure reason (W. S. Pluhar, Trans.). Indianapolis: Hackett.
Klein, P. (1971). A proposed definition of propositional knowledge. The Journal of Philosophy, 68(16),
471–482.
Klein, P. (1976). Knowledge, causality, and defeasibility. The Journal of Philosophy, 73(20), 792–812.
Knobe, J. (2003). Intentional action and side effects in ordinary language. Analysis, 63(3), 190–194.
21 Compare Sosa (1991), Greco (1993), Zagzebski (1996) and Bloomfield (2000).
Lycan, W. G. (2006). On the Gettier Problem problem. In S. Hetherington (Ed.), Epistemology futures.
Oxford: Oxford University Press.
Matilal, B. K. (1986). Perception: An essay on classical indian theories of knowledge. Oxford: Oxford
University Press.
Moser, P. K. (1992). The Gettier Problem. In J. Dancy & E. Sosa (Eds.), A companion to epistemology. Malden, MA: Blackwell.
Neta, R., & Rohrbaugh, G. (2004). Luminosity and the safety of knowledge. Pacific Philosophical
Quarterly, 85(4), 396–406.
Reed, B. (2002). How to think about fallibilism. Philosophical Studies, 107(2), 143–157.
Reid, Thomas. 1764 [1997]. In D. Brookes (Ed.) An inquiry into the human mind on the principles of
common sense. University Park, PA: Pennsylvania State University Press.
Reid, Thomas. 1785 [2002]. In D. Brookes (Ed.) Essays on the intellectual powers of man. Edinburgh:
Edinburgh University Press.
Sellars, W. (1956). Empiricism and the philosophy of mind. In Science, perception and reality, Atascadero,
CA: Ridgeview, 1963.
Sellars, W. (1975). Epistemic principles. In H. Castañeda (Ed.) Action, knowledge, and reality. Indianapolis: Bobbs-Merrill. (Reprinted in Epistemology: An anthology, by E. Sosa, J. Kim, J. Fantl,
& M. McGrath, Eds., 2008, Blackwell).
Sosa, E. (1991). Knowledge in perspective. Cambridge: Cambridge University Press.
Sosa, E. (2007). A virtue epistemology: Apt belief and reflective knowledge, v. 1. Oxford: Oxford
University Press.
Stanley, J. (2005). Knowledge and practical interests. Oxford: Oxford University Press.
Turri, J. (Forthcoming). Manifest failure: The Gettier Problem solved. Forthcoming in Philosophers’
Imprint.
Unger, P. (1968). An analysis of factual knowledge. The Journal of Philosophy, 65(6), 157–170.
Weatherson, B. (2003). What good are counterexamples? Philosophical Studies, 115(1), 1–31.
Zagzebski, L. (1996). Virtues of the mind: An inquiry into the nature of virtue and the ethical foundations
of knowledge. Cambridge: Cambridge University Press.
Philosophical Papers
ISSN: 0556-8641 (Print) 1996-8523 (Online) Journal homepage: https://www.tandfonline.com/loi/rppa20
Justification, Internalism, and Cream Cheese
Anthony Brueckner
To cite this article: Anthony Brueckner (2009) Justification, Internalism, and Cream Cheese,
Philosophical Papers, 38:1, 13-20, DOI: 10.1080/05568640902933403
To link to this article: https://doi.org/10.1080/05568640902933403
Published online: 06 Jun 2009.
Philosophical Papers
Vol. 38, No. 1 (March 2009): 13-20
Justification, Internalism, and Cream Cheese
Anthony Brueckner
Abstract: This paper is a critique of John Gibbons’s main example against internalism
about justification in ‘Access Externalism’. I argue that the underdescription of the
example defeats its force against internalism.
There is a controversy in epistemology about the nature of justification:
internalism versus externalism about justification. While there are many
ways of characterizing the dispute, a fairly standard and useful framing
of the issues is this: according to internalism, justification supervenes upon
introspectively accessible properties of the believer. That is, the internalist
about justification holds that necessarily, if believer A and believer B are
indistinguishable in respect of their ‘internal’, introspectively accessible
properties, then they are indistinguishable in respect of the justification
they have for the beliefs they hold.
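The supervenience claim just stated can be put schematically. The following formalization is mine, not Brueckner’s or Gibbons’s, and is offered only as a compact restatement of the prose:

```latex
% Access Internalism as a supervenience thesis.
%   Int(x):  x's introspectively accessible ('internal') properties
%   J(x,p):  the justification x has for believing that p
\[
  \Box\,\forall A\,\forall B\,
  \bigl[\, \mathrm{Int}(A) = \mathrm{Int}(B)
  \;\rightarrow\; \forall p\; J(A,p) = J(B,p) \,\bigr]
\]
```

Read: necessarily, internal duplicates are justificatory duplicates.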
Externalism about justification is the denial of internalism. So a reliabilist
externalist can hold, for example, that the historical properties of my
belief that a cup is on the table are relevant to the question whether I have
justification for that belief. When asking whether I do have justification for
my cup belief, we can sensibly ask: Does the belief in question issue from a
reliable belief-forming process, a process with a sufficiently high truth ratio
(divide the number of true beliefs issuing from the process by the total
number of beliefs issuing from the process)? The reliabilist credentials of
my cup belief are surely not introspectively accessible to me, though they
are determinative of the justificatory status of my cup belief, according to
the reliabilist externalist about justification.1
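The parenthetical truth-ratio recipe amounts to a simple fraction. A rendering in symbols (the notation R, T, N and the threshold θ are mine, purely illustrative):

```latex
% Reliability (truth ratio) of a belief-forming process P:
%   T(P): number of true beliefs issuing from P
%   N(P): total number of beliefs issuing from P
\[
  R(P) \;=\; \frac{T(P)}{N(P)},
  \qquad
  \text{$P$ is reliable iff } R(P) \ge \theta
\]
% for some suitably high threshold \theta.
```

On these terms, the reliabilist externalist holds that the justificatory status of the cup belief depends on R(P), a quantity not introspectively accessible to the believer.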
1 See, e.g., Alvin Goldman’s ‘What Is Justified Belief?’, in G. S. Pappas (ed.), Justification
and Knowledge: New Studies in Epistemology (Dordrecht: Reidel, 1979).
© 2009 The Editorial Board, Philosophical Papers
In ‘Access Externalism’, John Gibbons calls the foregoing form of
internalism about justification ‘Access Internalism’.2 He holds that the
strategy of arguing against Access Internalism is ‘fairly straightforward’:
‘You just tell some stories’. (Gibbons 2006, 20). That is, you try to
construct a thought experiment that refutes Access Internalism by
counterexample:
If you can have two people who are the same on the inside, in the relevant
sense of that expression, but different on the outside where, intuitively, one
of them is justified but the other is not, then internalism about justification is
false. If internalism is false, externalism is true. No matter how important the
inside is, some of the stuff on the outside matters as well. The presence or
absence of an external fact, all by itself, is directly relevant to justification.
(20-1)
Gibbons’s main example involves his desire for a mushroom, jalapeno,
and cream cheese omelette in the morning. I will recount Gibbons’s
example and go on to argue that it is underdescribed in crucial respects. The
subsequent discussion will show that Gibbons does not have the materials
for a counterexample to Access Internalism. That view about justification
survives Gibbons’s attack.
Here is the example. John checked the refrigerator the night before
to see if all the ingredients for a mushroom, jalapeno, and cream cheese
omelette were present, and they were. Since John’s wife rarely eats
breakfast, he says that in the morning ‘it was reasonable for me to
believe that the ingredients were still there’. (22) However, there was a
note in plain sight on the refrigerator door, saying We’re out of cream
cheese. According to John, ‘I didn’t notice the note, but I should have’.
(22) He continues, ‘After all, this is where we leave notes of this sort in
our house’. (22). John says, ‘I thought I was having cream cheese for
breakfast, but I should have known better’. (22) In general, says John, ‘If
I should have known that ~p [in this case, that I am not having cream
cheese for breakfast], then I’m not justified in believing that p’. (22)
2 See Gibbons’s ‘Access Externalism’, Mind, 115, pp. 19-39. All page references in the text are to this article.
Consider a second case, a variant on the first. This time no note is on
the door, and there is cream cheese in the fridge. John’s belief that he is
having cream cheese for breakfast is based on the same reasonable
grounds as in the first case.
Unlike in the first case, there was no evidence (or potential evidence)
available to me to override these reasonable grounds. So in this case, I am
justified in my belief about breakfast. (22)
So in the two cases we have ‘two people the same on the inside, different
on the outside, where one is justified and the other is not’. (22)
I think that we need to say a bit more about the two cases. Let
~C=There is no cream cheese in the fridge.3 According to John, in the first
case he should have known that ~C, and hence he is not justified in
believing that C. But why exactly should John have known that ~C?
Because the sign saying that ~C was available to John; ‘When the note is
on the door, I am responsible for seeing it’. (23) But it is not just the
availability of the evidence embodied in the note that makes John
responsible for seeing the note. It also seems crucial that the note fits
into a conventional system. If there were no household convention
regarding the posting of notes regarding the lack of provisions in the
fridge, then John’s failure to notice the available evidence would not
constitute a violation of an epistemic responsibility of his.
Thus, it seems that we are to understand that the posting of the
note—say a 5- x 7-inch white card that is placed between a postcard of
Paris and a postcard of Berlin when it is posted at all—is part of a
conventional system within the Gibbons household. Though the example
is not elaborated, suppose that the details of the convention are as
follows. John is in charge of all the grocery shopping, and when
something in the fridge runs out, then either John or his wife (whoever
first sees the shortage) posts a note in order to remind John to get a
3 I am switching from the proposition I will not be having cream cheese for breakfast for reasons of simplicity.
replacement.4 The note, then, is a source of evidence as to what is in the
fridge, evidence as to what is missing from the Gibbons’s standard stock
of items. The absence of a note is also a source of evidence as to what is in
the fridge, evidence as to what is not missing from the standard stock of
items. The Gibbonses like cream cheese. If there is no note posted that
says that cream cheese is missing, then this is evidence that there is still
cream cheese in the fridge.5
We can suppose that these two sources of evidence are imperfect. For
example, John’s wife, as he says, sometimes writes a note about the lack
of cream cheese but forgets to post it. Similarly, we can suppose that
sometimes a note is posted and left standing even after John has
remedied the deficiency. Nevertheless, if there is no note posted, then
this is good evidence that all is well with the standard stock of items in the
fridge (such as cream cheese). Similarly, if there is a note posted, say to
the effect that the cream cheese is gone, then this is good evidence there is
no cream cheese in the fridge.
Let us now turn to a case in which John, barely conscious at 3:00 AM,
gets some milk out of the fridge. As he falls back asleep, he seems to
vaguely remember that no cream cheese was present in the usual spot in
the fridge; he seems to vaguely remember, instead, a lizard perched in
that spot. Upon awakening, John glumly goes to the fridge, believing
that no cream cheese is inside for the making of his favorite omelette.
He fails to check to see whether a note is posted. In fact, no note is
posted, and there is plenty of cream cheese in the fridge. His sleep-addled apparent memory was merely apparent.
4 We can suppose that John is not morally responsible for reading the note so as to have an up-to-date shopping list. Maybe John is in general horribly exploited by his wife, doing all the shopping and cleaning while his wife does nothing but eat truffles and listen to operas.
5 We could imagine a conventional system with a different point. Suppose that the Gibbons’s fridge is so labyrinthine that the posting of a note is simply meant to spare the reader from searching for fifteen minutes through the endless tins and jars for the depleted item. Given the structure of Gibbons’s example, what cannot be supposed is that when no note is posted, this state of affairs has no evidential value whatsoever. No note posted regarding cream cheese is evidence that neither spouse has found cream cheese to be lacking.
Just as John is responsible for seeing the note when it is on the door (23), similarly, John is
responsible for seeing that no note is posted when none is. In this case,
then, John should have known that the cream cheese was not depleted,
just as in the very first case, John should have known that the cream
cheese was depleted. In the current case, then, John is not justified in
believing that there is no cream cheese in the fridge. There is evidence
available to John (lack of note) which he has ignored.6
It seems clear that we are being bidden to think that John has an
epistemic responsibility (or obligation) to check the available sources of
evidence involving the posting and non-posting of refrigerator notes,
where this responsibility (or obligation) is relevant to the question
whether John’s beliefs about the fridge’s contents are justified. In the
first case, John fails to check and thus fails to acquire available evidence
(from reading the note) that would undermine his justification for
believing that there is cream cheese in the fridge—evidence which John
is responsible for acquiring given the conventional system. In the third
case, John fails to check and thus fails to acquire available evidence
(from seeing that there is no note posted) that would undermine his
justification for believing that there is no cream cheese in the fridge.7
Let us now return to the second case, where (a) no note is posted, and
(b) John correctly believes that there is cream cheese in the fridge. In
order to get a counterexample to internalism about justification, it has to
be true that (1) John is the same on the inside in both the first and the
second cases, and (2) John is not justified in believing that there is cream
cheese in the fridge in the first case while he is justified in believing that
6 Gibbons’s point does not have to be put in terms of knowledge. For example, in his first
case, suppose that there had been cream cheese in the fridge after all even given the
posting of the note. Even so, John ought to have noticed the evidence present in the note
to the effect that there was no cream cheese in the fridge, and so he ought not to have
believed that there was cream cheese in the fridge. Hence John was not justified in
believing that there was cream cheese in the fridge. I take it that these points are congenial
to Gibbons’s externalist framework.
7 I will not distinguish between an epistemic responsibility to check and an epistemic
obligation to check.
there is cream cheese in the fridge in the second case. In order for (1) to
hold, we must suppose that John does not check the space where notes are
posted in the second case. For if John did check, then he would have a belief
that no note was posted, or a seeming memory of a blank space between
the Paris and Berlin postcards. But then John would not be the same on
the inside in the two cases, since John does not check the relevant space
in the first case and does not believe that no note was posted (and does
not have a seeming memory of a blank space between the Paris and
Berlin postcards).
But now it is not so obvious that John is justified in believing that
there is cream cheese in the fridge in the second case. If John is not
justified in the second case, then: (2) does not hold, and the alleged
counterexample to internalism fails. John’s failure in the second case to
check the available sources of evidence (note posted in relevant space;
no note there posted) seems to constitute a violation of his epistemic
responsibility. True, if he had checked, then the additional evidence he
would have acquired (no note posted) would not have undermined his
belief that there is cream cheese in the fridge. But it seems that John has
violated his epistemic responsibility all the same, and this seems to show
that he is not justified in believing that there is cream cheese in the
fridge in the second case. As Gibbons says, ‘… if the belief that p is the
result of a failure to fulfill your epistemic obligations, if it’s an
irresponsible belief, then you shouldn’t believe that p, and you are not
justified in believing if you do’. (26)
Now either (I) John has an epistemic responsibility/obligation to
check the available sources of evidence regarding the posting and non-posting of refrigerator notes, or (II) John does not have such an
epistemic responsibility/obligation. If (II), then we have been presented
with no case at all against internalism. Suppose, then, that we accept (I).
To go on to say that John is justified in believing that there is cream
cheese in the fridge in the second case is to commit oneself to the
following position. Violating an epistemic responsibility/obligation to
check an available source of evidence regarding p undercuts justification
for believing that p only when the source would provide evidence against a
belief that p. Here is an example of such a position. I am charged with
determining the day-by-day location of a battalion of troops. I use
evidential sources A and B (the battalion’s own reckoning of where it is,
and its rear guard’s reckoning of where it is). But it is my epistemic
responsibility to get a final check by GPS before I file my report on the
battalion’s location. According to the position under consideration, my
failure to consult the GPS undercuts my justification for my A- and B-based location-belief only when the GPS would provide evidence against
my location-belief. This position seems very implausible.
I conclude that Gibbons’s cream cheese cases fail to refute internalism
about justification, since he has not provided us with cases that are the
same on the inside (no checking of the note-posting space in both) but
which differ in respect of justification.
Perhaps we can generate a counterexample to internalism by
considering a pair of cases in which no conventional evidential system
exists. Consider Professor Spacey, a distracted mathematician who is in
the throes of constructing an apparent proof of Riemann’s hypothesis.
He wanders into the common room and sees the dead body of Professor
Termine lying in a pool of blood. Without a glance around the well-lit
room, Professor Spacey turns and leaves, thinking that M=It is at this
point a great mystery as to who killed Termine. The preoccupied Spacey,
however, neglected to notice the presence of a large man holding a
bloody machete, standing frozen in plain sight against a well-lit wall.
Spacey should have taken notice of this evidence, and hence his belief of
M is unjustified.
In an alternative scenario, things unfold in exactly the same way,
except that the large man leaves before Spacey arrives. So there is no
hulking piece of evidence that our professor irresponsibly overlooks. Just
as in the first two cream cheese cases, our protagonist is internally
indistinguishable in the two murder scenarios. Spacey lacks justification
for believing M in the first scenario. If he has justification for believing M
in the second scenario, then we have a counterexample to internalism
about justification. But it seems clear that Spacey is unjustified in the
second scenario. It seems that Spacey again has an epistemic
responsibility to make at least a quick visual check of the room. If Spacey
is the same ‘on the inside’ in both scenarios, then we must, as we did, set
up the second scenario in such a way that Spacey fails to make such a
check. So even though there is no available undermining evidence
against M in the second scenario, Spacey is nevertheless unjustified in
believing M in that scenario, in virtue of his failing to fulfill his minimal
epistemic responsibility to make at least a cursory visual check on the
room where a murder has apparently occurred. So even in a pair of cases
where there is no evidential conventional system in force, we fail to see a
counterexample to internalism about justification.8
I conclude that Access Internalism is not refuted by Gibbons’s alleged
counterexample. It can still be maintained that justification supervenes
upon introspectively accessible properties of the believer.
University of California, Santa Barbara
brueckne@philosophy.ucsb.edu
8 Thanks to Wally Siewert for discussion of a similar case.
Philosophy Compass 7/12 (2012): 821–831, 10.1111/j.1747-9991.2012.00528.x
Collective Scientific Knowledge
Melinda Fagan*
Department of Philosophy, Rice University
Abstract
Philosophical debates about collective scientific knowledge concern two distinct theses: (1) groups
are necessary to produce scientific knowledge, and (2) groups have scientific knowledge in their
own right. Thesis (1) has strong support. Groups are required, in many cases of scientific inquiry,
to satisfy methodological norms, to develop theoretical concepts, or to validate the results of
inquiry as scientific knowledge. So scientific knowledge-production is collective in at least three
respects. However, support for (2) is more equivocal. Though some examples suggest that groups
have scientific knowledge independently of their individual members, these cases are also
explained in terms of relational complexes of members’ beliefs.
1. Introduction
Is there collective scientific knowledge? The traditional answer is no. Philosophical
accounts of science and knowledge tend to emphasize individuals, treating groups as epistemic epiphenomena. However, these individualistic assumptions are increasingly questioned.1 The following sections survey these questions, focusing on two collectivist claims:
(1) groups produce scientific knowledge, and (2) groups have scientific knowledge. Section
2 introduces key concepts and distinctions. Section 3 examines support for thesis (1),
describing three ways groups may be required for scientific knowledge-production.
Section 4 examines support for thesis (2), which rests on the premise that scientific groups
have irreducibly collective beliefs. This premise does not follow from thesis (1), but
requires further argument. The most significant argument for this premise rests on Margaret Gilbert’s idea of a ‘plural subject.’ Section 5 sketches Gilbert’s general theory and its
application to scientific belief. But an alternative view, that group belief is a relational
complex of individual beliefs, better accounts for social aspects of science. This alternative,
though compatible with much of plural subjects theory, undercuts support for thesis (2).
Section 6 summarizes the results of this survey, and indicates their broader significance.
2. Preliminaries
Discussions of science and the social are rife with ambiguity and misunderstanding. Some
basic distinctions will be helpful in navigating this contested theoretical terrain. The traditional analysis of knowledge as justified true belief, though notoriously inadequate, indicates three conditions widely considered necessary for knowledge. Accordingly, a
tripartite distinction of aspects of knowledge can be drawn at the outset:
i) knowledge-producing practices;
ii) knowledge had by a subject; and
iii) what is known (i.e., the content of knowledge).2
© 2012 The Author
Philosophy Compass © 2012 Blackwell Publishing Ltd
Nearly all contemporary philosophical discussions of knowledge assume that (iii) is propositional knowledge; that what is known is some proposition p. In this framework, which
I assume throughout, there are three senses in which knowledge could be collective:
ic) knowledge that p is collectively produced;
iic) subject S collectively knows that p; and
iiic) p’s content is collective.
‘Collective’ here means, roughly, ‘irreducibly group-involving.’ Reduction, and therefore
irreducibility, can be understood in different ways; I discuss these concepts further below.
The notion of a group assumed here is more intuitive: a collection of two or more persons acting together for some shared purpose.3 There are evidently groups in this sense,
in scientific and other contexts. The question at issue is how such groups relate to scientific knowledge.
Exactly what is collective differs for each of (ic)-(iiic): the process of acquiring knowledge (ic), epistemic attitude of knowing (iic), or content of a proposition (iiic). The last,
which hinges on the nature of reference, propositional content and social metaphysics, is
beyond the scope of this essay.4 This leaves two theses to consider:
(1) Groups are necessary for producing (some) scientific knowledge.
(2) Some scientific knowledge is irreducibly had by groups.
For brevity, I will refer to the process by which scientific knowledge is produced as
‘inquiry.’5 So thesis (1) could be restated as ‘Groups are necessary for inquiry,’ where
inquiry is understood more narrowly than in common usage. Note that neither collectivist thesis is universal. So, if true, (1) and (2) do not rule out scientific knowledge for individuals. Though some (notably Nelson 1990) have defended the idea that groups rather
than individuals are the primary scientific knowers, most collectivists take it for granted
that individuals have and produce scientific knowledge. Many also treat theses (1) and (2)
as closely connected (e.g., Wray 2007). If the two are conflated, evidence for collective
inquiry is taken as evidence that groups have knowledge in their own right. But it is
important to distinguish theses (1) and (2), for reasons brought out in the following sections.
3. Collective scientific inquiry
In order for a process of inquiry to produce scientific knowledge, at least three conditions
must be satisfied:
(1a) inquiry is properly performed;
(1b) the result of inquiry (p) is true; and
(1c) p is accepted as scientific knowledge.6
If involvement of a group (or groups) is necessary to satisfy any of (1a)-(1c), then inquiry
is collective and thesis (1) is true. Philosophical, historical and social studies of science
suggest that, in many cases, groups are necessary to satisfy all three conditions. I consider
each in turn.
© 2012 The Author
Philosophy Compass © 2012 Blackwell Publishing Ltd
Philosophy Compass 7/12 (2012): 821–831, 10.1111/j.1747-9991.2012.00528.x

Whether groups are necessary for condition (1a) to be satisfied is difficult to say, as we lack a general account of scientific method. Methodological standards in science vary enormously across disciplines, locales and historical periods, and new standards and methods of obtaining evidence are continually invented. However, several very broadly applied norms for inquiry do demand the involvement of groups. For example, hypotheses must be confirmed by evidence. In fields such as genomics, high-energy physics, nanotechnology, biomedicine, and many others, assembling evidence sufficient to test a hypothesis is beyond the ability of any single researcher. A collaborative team is therefore
required to satisfy the norm (Staley 2007, Wray 2002). This holds a fortiori for the norm
that multiple independent lines of evidence are needed to establish a hypothesis. Assembling diverse bodies of evidence and establishing convergence among them is, in many
cases, a job requiring multiple researchers (Galison 1997). More generally, the robustness
and reliability of results is enhanced by diverse perspectives on an object of inquiry,
which interact to reveal implicit, hitherto unquestioned, background assumptions (Longino
2002). So in many (if not all) episodes of inquiry, groups are required to satisfy (1a).
I assume that a proposition’s truth or falsity is independent of processes by which it is
established as scientific knowledge.7 But this does not rule out a necessary role for
groups in satisfying (1b). This is because scientific results are articulated in terms of theoretical concepts. Where those results are true, there is a good fit or mapping between
conceptual and worldly domains (Giere 1988). Developing a conceptual domain such
that good fit can be achieved requires effort. As a matter of historical fact, this effort
involves groups in many cases. In at least some of these, such as the concepts of
induced radioactivity, nuclear fission, and Heisenberg’s Uncertainty Principle, it is difficult to imagine how conceptual developments could have occurred without interactions
among distinct groups of scientists (Andersen 2009, Bouvier 2004). Insofar as collaboration is necessary to develop theoretical concepts, which figure in true theories, groups
are needed to satisfy (1b).
Validation of a result as scientific knowledge marks the end of a process of inquiry.8
‘Acceptance’ in condition (1c) is thus to be understood as an action which closes an episode of inquiry, not an attitude had by an epistemic subject (see §4).9 It is tempting to
suppose that such an act of acceptance can only be accomplished by a scientific community. The idea that scientific knowledge is essentially public, and that the end of inquiry
involves community-level acceptance, is deeply-entrenched in our contemporary understanding of science, and embodied in practices of publication and peer review. If this is
correct, then groups are necessary to satisfy (1c). However, given the diversity and flexibility of scientific practices, and their perennially fuzzy boundaries, it seems too strong an
assumption that group acceptance is a universal requirement for producing scientific
knowledge. There may be cases of genuine scientific knowledge in which individual
acceptance suffices to validate a result as scientific knowledge. But at least in many contexts, it takes a scientific community to satisfy (1c).
So thesis (1) is supported by at least three conditions for inquiry. Though it would go too far to claim that all inquiry is collective via (1a)-(1c), there is good reason to think that some inquiry is collective, in one or more respects. For thesis (2), however, matters are otherwise. Some collectivists are likely to object at this point that thesis (2) follows automatically from community-level acceptance of scientific knowledge (1c). If a scientific
community accepts a result as scientific knowledge, so the thought goes, then surely the
community itself has that knowledge. However, knowledge that a scientific community
has upon acceptance of a result as scientific knowledge can be interpreted in several ways,
not all of which are collective in an interesting sense. A scientific consensus that p might
simply be the aggregate of individual scientists’ beliefs, or some more sophisticated combination of their attitudes (Tuomela 1992, Corlett 1996, Niiniluoto 2003). Thesis (2)
asserts something stronger: that scientific groups have irreducibly collective knowledge. The
latter concept must now be considered more closely.
4. Collective scientific belief
Irreducibly collective scientific knowledge had by a group is, in a sense to be specified,
independent of knowledge had by its individual members. By analogy with the individual
case, necessary conditions for a group S to collectively know that p include:
(2a) S believes that p;
(2b) p is true; and
(2c) S’s belief that p is appropriately based on properly-performed inquiry.
I assume that any further conditions required for S to know that p are independent of
collective aspects of knowledge, and that satisfaction of condition (2b) is similarly independent.10 The two sets of conditions associated with theses (1) and (2) display a kind of
complementarity, with linkages between (1a) and (2c), and between (2a) and (1c). But these paired
conditions are distinct.
Conditions (1a) and (2c) are both about properly-performed inquiry. Suppose properly-performed inquiry is necessarily collective, in that only a group can accomplish it.11
Then knowledge-production is collective, via condition (1a). It does not follow that
knowledge thereby produced is irreducibly had by a group via (2c). Condition (2c)
demands that a subject’s belief that p be appropriately based on the inquiry that yielded p
as an item of scientific knowledge. But, unless ‘appropriate basing’ requires direct experience or a complete representation of the entire process of inquiry, (2c) does not constrain
the nature of the epistemic subject. And it is implausible that appropriate basing requires
so much.12 If it did, then scientific knowledge would be restricted to those with intimate
experience of the evidential practices involved in its production. However, scientific
knowledge is famously unrestricted in this respect. For example, experiments in high-energy physics are performed by thousands of researchers, and involve evidential practices
that no single individual comprehends in detail (Galison 1997, Knorr Cetina 1999, Staley
2007). This does not, however, prevent individuals from knowing that an experiment has
a certain outcome (Giere 2007).
Appropriate basing does plausibly require that certain social epistemic relations (e.g.,
trust and authority) hold between those involved in inquiry and those who know the
result. And the former include groups, at least in some cases (§3). But this does not mean
that groups are knowing subjects in these cases. For all that has been said so far, of
course, they might be. The point is that, given thesis (1), further argument is needed to
establish thesis (2) via condition (2c). The question of who has scientific knowledge must
be considered in its own right. Condition (2a) is therefore the crux for thesis (2). If
groups cannot have scientific beliefs in their own right, then thesis (2) must be false.
Conditions (2a) and (1c) are similarly connected, though their tie is complicated by the
vexed issue of belief and acceptance. The belief/acceptance distinction is motivated by
two uncontroversial premises: first, that groups lack the neurological structures and psychological mechanisms characteristic of individual epistemic agents; and second, that
groups can endorse viewpoints distinct from the views of individual members, as in committee decisions, jointly-authored reports, election results, and many other familiar examples. Taken together, these two premises suggest that, while groups cannot have beliefs in
exactly the same way that individuals do, they can be subjects of an epistemic attitude like
belief. ‘Acceptance’ is the usual term for this epistemic attitude, which is available to both
individuals and groups (Cohen 1992, Pettit 1992, Wray 2001, Gilbert 2002). It is not
self-evident, however, that groups cannot have beliefs, full-stop. This issue is extensively
debated in philosophy of mind (Schmitt 2003; see Mathiesen 2006 for references). What
is clear is that if ‘belief’ is understood as a psychologically rich concept, making individual
psychology essential, then groups cannot have beliefs. Such a ‘thick’ construal of belief
necessitates an attitude of acceptance that can be attributed to groups. But a psychologically ‘thin’ construal of belief does not.
The term ‘belief’ in condition (2a) refers to a psychologically thin conception, which
does not by definition prohibit groups from having this attitude. A sufficiently thin conception of belief subsumes thicker concepts of both belief and acceptance, thereby bracketing the question of their relation.13 So my examination of thesis (2) does not take the belief/acceptance contrast as a point of departure. This is the approach used by Staley (2007) to examine collective scientific belief, and recommended by Mathiesen (2006) for
epistemology more generally. An alternative approach is to take collective scientific
knowledge as hinging on the contrast between these two ‘thick’ epistemic attitudes,
which differ in aims, mechanisms of formation, shaping influences, and guiding ideals
(Mathiesen 2007, 210–213). But this has the disadvantage of complicating analysis by
multiplying attitudes and encouraging conflation of theses (1) and (2).14
The connection of (1c) and (2a) can now be seen. Suppose the subject that accepts p
as scientific knowledge must be a group; e.g., a scientific community. Then inquiry is
collective, via (1c). It does not thereby follow that knowledge produced by such inquiry
is had by the accepting group, via (2a). Recall that a group’s acceptance of p is an
action that concludes an episode of inquiry (§3). Condition (2a) requires that the group
have an epistemic attitude toward p, belief in the ‘thin’ sense, that is irreducibly collective – i.e., not reducible to analogous epistemic attitudes had by individual members of
the group. This follows from (1c) only given the further premise that a group’s act of
acceptance is or entails an irreducibly collective epistemic attitude: belief that p. It is
plausible that an act of acceptance entails some attitude attributable to the group (Staley
2007, Schmitt forthcoming).15 But establishing that this attitude is irreducibly collective
requires further argument; it does not follow directly from (1c). So although the two
sets of conditions (a-c) are linked, support for (1) does not automatically accrue to (2).
Studies of collective scientific belief that do not explicitly distinguish theses (1) and (2),
but argue from collective features of scientific inquiry such as consensus statements and
jointly-authored papers (Beatty 2006, Wray 2006, 2007, Staley 2007) offer direct support only for the former.16
Arguments for collective knowledge had by scientific groups do not cite features of
inquiry. Instead, they focus on examples in which an epistemic attitude (belief in the psychologically thin sense) is attributed to a group but not its individual members (e.g.,
Gilbert 2000, Beatty 2006, Rolin 2008). This line of argument aims to establish thesis (2)
via (2a), by demonstrating that some scientific groups have beliefs that cannot be reduced
to members’ beliefs. The most straightforward cases of reduction are summative; i.e., a
group’s belief is just the sum or aggregate of individual members’ beliefs. For group G to
summatively believe that p, it is necessary and sufficient that all or most of G’s members
believe that p. Summative group belief is reducible to the beliefs of individual members, in
the sense that the latter provide necessary and sufficient conditions for the former. This is
a classic sense of reduction, though not the only one that bears on questions of scientific
knowledge (Fagan 2011, forthcoming). Scientific consensus is often interpreted as summative. It is common to suppose, for example, that there is consensus in the molecular
biology community that biological information flows only from DNA to RNA to protein
(i.e., the Central Dogma), just in case most molecular biologists believe this.17
This example is instructive, because in fact there is no summative consensus on molecular biology’s Central Dogma. Exceptions to the one-way flow of biological information
from DNA to RNA to protein are well-documented, such as reverse transcription from
RNA to DNA and protein-mediated epigenetic processes that impact development and
evolution. Most molecular biologists today do not believe the Central Dogma, in a strict
sense. Yet the idea that biological information follows a linear track from DNA to RNA
to protein is still prevalent, though few molecular biologists would endorse it if pressed.
This and many other examples indicate that science involves non-summative group beliefs that p, for which it is neither necessary nor sufficient that all or most members
believe that p (Gilbert 2000). Collective belief is often identified with non-summative
belief (e.g., Gilbert 1989, 288–292). This identification presupposes that the summative/non-summative distinction coincides with the reducible/irreducible distinction. If
the two distinctions coincide, then non-summative beliefs (for which it is neither necessary nor sufficient that all or most individual members believe that p) are just those group
beliefs that are irreducible to members’ beliefs. Identification of these as collective beliefs naturally follows, and with it thesis (2) via (2a). However, the summative/non-summative and reducible/irreducible distinctions do not necessarily coincide in this way.
The concept of reduction is notoriously resistant to unequivocal characterization. Even
assuming the classic notion, reduction by necessary and sufficient conditions, there are
other relations than simple summation by which individual and group beliefs can be connected, such that the former provide necessary and sufficient conditions for the latter
(Corlett 1996). For example, a group’s belief may be the belief had by the greatest number of members, the intersection of all members’ beliefs, the belief had by the most
authoritative members, the average or median belief of members (for beliefs with quantitative content), the belief had by representatives of all G’s members after deliberation in
accordance with norms most members of G endorse, etc. Therefore, not all non-summative
group beliefs are irreducible in the classic sense. Furthermore, on a collective interpretation, the Central Dogma example is confusing: the molecular biology community
believes the Central Dogma, though most molecular biologists do not. Unlike stock
collectivist examples of juries and hiring committees, in which a group’s viewpoint is
clearly identified, it is not obvious that the molecular biology community really has a
viewpoint here. Yet the Central Dogma does play some role on the scientific stage. The concept of collective belief irreducibly had by a scientific group does not illuminate it, however.
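The menu of aggregation relations surveyed above can be made concrete with a short sketch. The toy model below is my own illustration rather than anything from the cited literature: the member names, the authority ranking, and the single proposition p are all invented for the example. It shows how non-summative rules (here, an authority rule and an intersection rule) can still supply necessary and sufficient conditions for group belief in terms of individual members' beliefs:

```python
# Toy model: three aggregation rules, each a function from members'
# attitudes toward a proposition p to a group-level verdict. Because each
# rule is such a function, each supplies necessary and sufficient
# conditions for group belief in terms of individuals' beliefs.

def summative(beliefs):
    """Group believes p iff most members believe p."""
    return sum(beliefs.values()) > len(beliefs) / 2

def authoritative(beliefs, authority):
    """Group believes p iff its most authoritative member believes p."""
    leader = max(authority, key=authority.get)
    return beliefs[leader]

def unanimous(beliefs):
    """Group believes p iff the intersection of members' beliefs
    includes p, i.e. every member believes p."""
    return all(beliefs.values())

# Invented members' attitudes toward p, plus an invented authority ranking.
beliefs = {"ana": True, "bo": False, "chris": False}
authority = {"ana": 3, "bo": 1, "chris": 2}

print(summative(beliefs))                 # False: only a minority believes p
print(authoritative(beliefs, authority))  # True: the top authority believes p
print(unanimous(beliefs))                 # False: not every member believes p
```

On this way of putting it, the authority rule yields a group belief that diverges from the majority view, so a group belief can be non-summative without being irreducible in the classic sense.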
A more general and fruitful way to think about group belief is relational. The basic idea
is that a group belief is not just a heap of individual beliefs, but includes relations among
them as well: trust, authority, and the like. So relational group beliefs are not simple
things, but systems of members’ beliefs and social epistemic relations among them
(Niiniluoto 2003, 271–273). Tuomela’s ‘positional group belief’ (1995), Corlett’s ‘sophisticated summative belief’ (1996), Ernst & Chant’s ‘equilibrium view’ (2007) and Fagan’s
‘interactive belief’ (2011) are all refinements of this basic relational view. Despite being
framed in terms of groups having irreducibly collective knowledge, Wray’s (2007)
account of epistemic interdependence among members of scientific groups also falls into
this category.18 The relational view provides a plausible account of the Central Dogma:
its persistence in molecular biology today is fully determined by molecular biologists’
beliefs, together with relations of epistemic authority that structure the molecular biology
community. In this case, teaching practices are important. All or most students of molecular biology are taught, by experts whose authority they accept, that the Central Dogma
is (basically) correct. So the idea persists, though students who go on to become practicing molecular biologists later learn that it is, strictly speaking, false. So we are not faced
with the stark discontinuity of the collectivist interpretation: the molecular biology community believes the Central Dogma, though most molecular biologists do not. Instead,
the relational view of group belief reveals connections among individual beliefs, and the
role of these connections in the group at issue.
Summative group belief may be treated as a special, limiting case of relational group
belief. For a group to summatively believe that p, it is necessary and sufficient that all or most of its members believe that p. For a group to relationally believe that p, it is necessary and sufficient that its belief that p derives from members’ beliefs concerning p and
social epistemic relations among those beliefs. Summative cases are just those involving
minimal social epistemic relations. More typically, relational group belief involves aspects
of social life, which knit the members together into a group. So there is a sense in which
relational beliefs are group-involving. But this does not entail that relational belief is had
by a group as such, as thesis (2) requires. Nothing more than beliefs of individual members, singly or arranged in relations, is posited. The sense in which relational belief is collective is that of thesis (1).19 So the relational account does not support thesis (2) via (2a).
An argument for the latter must support the idea of a group believer, over and above the
individual members.
5. Plural subjects
Examples of both summative and non-summative group belief, as we have seen, are readily interpreted in relational terms. And the relational view has many other attractions: it
avoids introducing a new mode of epistemic agency, dovetails with socio-historical
accounts of scientific inquiry, and encourages investigation of social epistemic systems that
can impact scientific consensus. The collective interpretation, on the other hand, introduces a new epistemic agent, characterizing the group itself as a “locus of power and knowledge” (Tollefsen 2004). But, so far, this idea has been specified only negatively:
non-summative, irreducible, not determined by members’ beliefs. There is no positive
account of or motivation for a group’s having collective scientific knowledge. Margaret
Gilbert’s theory of “plural subjects” transforms this situation. This theory not only defines
collective scientific belief had by a group, but also provides the ‘missing link’ between
conditions (1c) and (2a). Understandably, then, many studies of collective scientific
knowledge presuppose that there are plural subjects with collective beliefs, in Gilbert’s
sense (e.g., Bouvier 2004, Wray 2006, 2007, Staley 2007, Rolin 2008). Plural subjects
theory is the primary support for thesis (2). So it merits careful consideration.
Gilbert’s central concept is joint commitment.20 Joint commitments, on her view, are
created by two or more individuals mutually expressing willingness to enter into a commitment to believe, intend, or act, in some way, together. So joint commitments depend
on individuals’ attitudes, with respect to their formation. But, once created, a joint commitment constitutes a group as a plural subject: its unity fuses members into a “corporate body,” whose members have interlocking obligations and entitlements. A group’s joint
commitment is a simple whole – not a complex combination of members’ personal commitments or attitudes. This distinguishes Gilbert’s version of collectivism from relational
group belief (§4). However, Gilbert’s account includes the latter as well, conceived as a
web of obligations and entitlements among members of a plural subject. A consequence
of the simple irreducibility of joint commitment is that individual members of a plural
subject cannot ‘opt out.’ Failure to conform to the obligations imposed by a joint
commitment incurs a cost: expulsion from the group, or loss of status, etc. If individuals
tend to prefer to avoid costs, minimize risk, and avoid direct conflicts between personal
and group viewpoints (all plausible assumptions), it follows that joint commitments stabilize human belief and action.
Collective belief is one form of joint commitment. If “some persons are jointly committed to believe as a body that p,” then there is a collective belief that p, had by the
group composed of those persons (Gilbert 2000, 39–41). A joint commitment to believe
that p constitutes a group as a plural subject with the belief that p. Gilbert’s theory
extends to collective action as well: activities performed by a group rather than individuals, such as walking together or jointly writing a paper (1989, 2003, 2006). Indeed, on
this theory, collective belief entails joint acceptance, and vice versa: a group is jointly
committed to believe that p if and only if that group jointly accepts that p (1989,
194–195). So a group’s act of joint acceptance entails an irreducibly collective belief that
p. Because of these conceptual ties, rooted in the concept of joint commitment, condition (2a) follows from (1c) on Gilbert’s theory (see §4). In this way, plural subjects theory
runs theses (1) and (2) together: both stem from joint commitment.
Joint commitments also impose constraints on individual members: each must support the group’s belief that p, either explicitly (asserting “we believe that p”) or tacitly (by not calling p into question or expressing doubt that p). The costs of failing to provide such support discourage dissent or questioning of collective belief, and impose epistemic conformity on individual members of the group. Given these definitions, conceptual ties and empirical
assumptions about individual behavior, Gilbert’s theory makes the following predictions
about scientific change (Gilbert 2000, Fagan 2011):
(P1) Consensus tends to persist, while doubts or heterodox ideas tend to be suppressed.
(P2) Ideas that challenge scientific consensus tend to come from outsiders or new group
members, for whom costs are less.
(P3) A shift to a new consensus is marked by expressions of support by prestigious scientists.
Gilbert’s theory can therefore be evaluated in the same way as scientific theories that posit
theoretical entities. Collective scientific belief, like classic theoretical entities such as the
electron, is an unobservable theoretical construct, which (realists claim) has important
concrete effects. Perhaps the realist strategies that succeeded with the electron could be
deployed on behalf of a putative social entity.21
Inference to the best explanation (IBE) is widely used throughout the natural and
social sciences to argue for the existence of unobservable entities and processes.22 For
an IBE argument to succeed, a theory must not just explain phenomena of interest
(here, features of scientific change), but explain them better than available alternatives.
The relevant alternative in this case is relational group belief. If P1-P3 are borne out by
actual patterns of scientific change, which the relational account of group belief cannot
explain as well, then IBE supports thesis (2), via (2a) and (1c), which are linked on
Gilbert’s theory. Suppose, for the sake of argument, that P1-P3 are genuine patterns of scientific change.23 Which account of group belief better explains them? Though it is
difficult to say in general what makes one explanation better than another, here there is
a clear answer. The very features that render joint commitment irreducible – its simplicity, unity, and independence from a complex of individuals’ epistemic attitudes – make
its connection to those attitudes mysterious. This is not to say that plural subjects theory
provides no explanation whatsoever of social phenomena. But it cannot provide a better
explanation than an alternative that omits the concept of joint commitment. Any social
phenomenon that can be explained in terms of plural subjects of joint commitment can
be explained more simply without irreducible joint commitment, in terms of the network of obligations and entitlements that structures relations of members’ beliefs to one
another – that is, in terms of relational belief. Irreducibly collective scientific belief is
not supported by IBE.
Indeed, much of Gilbert’s account, including the formation of joint commitment and
aspects of joint action, conforms to the relational account (Beatty 2006, Staley 2007,
Wray 2007). This suggests that irreducible joint commitment is something of an idle
wheel in Gilbert’s own theory. Plural subject explanations of social phenomena invoke
obligations and entitlements of members to one another. It is through these networks of
mutual constraint that costs of defaulting on joint commitment are imposed. Irreducible
joint commitment could be jettisoned, and the plural subjects theory would do the same
explanatory work, with fewer stipulations and assumptions.24 But this trades the idea of irreducibly collective belief had by groups for that of a relational system of individuals’
beliefs. Again, the overall result is support for thesis (1), but not thesis (2).
6. Conclusion
While there is good reason to think that production of scientific knowledge is a collective
process (thesis 1), the idea that scientific groups have knowledge of their own (thesis 2)
remains dubious. There are at least three ways groups can be necessary for scientific
knowledge-production: satisfaction of methodological norms, development of theoretical
concepts, and validation of results as scientific knowledge. Historical, sociological and
philosophical studies of science suggest that groups do play these roles in many (if not all)
cases. So there is strong support for thesis (1). But it does not follow, from the involvement of groups in inquiry, that groups are knowers in their own right. Support for thesis
(2) is equivocal at best. Arguments for this thesis presuppose that groups have beliefs. At
least three senses of group belief can be distinguished: summative, relational, and collective. But only the last of these allows for scientific knowledge that is collective in a philosophically interesting sense distinct from thesis (1). The relational alternative undercuts
arguments for (2) based on non-summative examples and Gilbert’s sophisticated plural
subjects theory. These considerations do not definitively prove that groups cannot have scientific knowledge, or that thesis (2) is false. They show, however, that there is little
reason to accept it.
This conclusion does not, however, vitiate the importance of collective conceptions of
science. A ‘deflated,’ relational modification of plural subjects theory is useful in explicating social aspects of key scientific episodes: engagements with the public (Beatty 2006),
collaborative research (Staley 2007), and conceptual innovation (Andersen 2009). The
moral is not to abandon the idea of collective scientific knowledge, but to focus on
aspects of inquiry that include epistemically significant interactions among individuals.
Crucial roles of groups in scientific knowledge-production can be illuminated without
supposing that groups themselves have scientific knowledge.
Acknowledgements
Many thanks to Hanne Anderson, Michael Bratman, Margaret Gilbert, Frank Hindriks,
Gerhard Preyer, Bill Rehg, Fred Schmitt, Hanoch Sheinman, Brad Wray, and two anonymous reviewers for helpful discussion and comments on the ideas in this essay.
Short Biography
Melinda Bonnie Fagan is Assistant Professor of Philosophy at Rice University in Houston, Texas, where she teaches philosophy of science, theory of knowledge and social
epistemology. She has PhDs in Biological Sciences (Stanford University, 1998) and
History and Philosophy of Science (Indiana University, Bloomington, 2007) and has
published over twenty articles and book chapters on biology and philosophy of science.
Her biological research focused on colonial organisms (plants and protochordates) and the
evolution of histocompatibility. Her current research focuses on interrelations of experiment, modeling, and social interaction in biomedicine. She has recently completed a
book on philosophy of science and stem cell research (forthcoming, Palgrave-Macmillan).
Notes
* Correspondence: Department of Philosophy, Rice University, MS 14, PO Box 1892, Houston, TX, USA,
77251-1892. Email: mbf2@rice.edu.
1. For discussions of general social epistemology and group belief, see: Synthese 73 (1987); Episteme 1 (2004); Social Epistemology 21 (2007); and Schmitt (1994, 2003).
2. These conditions follow Longino (2002, 77).
3. The notion of ‘acting together’ can be analyzed in various ways (e.g., Bratman 1999; Gilbert 2003). Note that groups so defined are small, face-to-face assemblages; it is possible that enduring social institutions do not qualify as groups in this sense.
4. The associated collectivist thesis is that the content of (some) scientific knowledge makes ineliminable reference to groups. Whether this is so depends on the nature of reference and propositional content, as well as on social metaphysics. These issues are beyond the scope of this essay.
5. The referent here might be more accurately described as ‘successful inquiry.’
6. These conditions are, again, modeled on the traditional analysis of knowledge as justified true belief.
7. Excepting propositions about those processes or their novel material products.
8. In principle, investigation can always be re-opened, but in practice every episode of inquiry terminates at some point.
9. Thanks to an anonymous reviewer for raising this point.
10. With the possible exception of cases where p’s content refers to collective aspects of knowledge. These cases, though important in some areas of social science, can be put aside for the purposes of this essay.
11. As a universal claim, this is too strong (see §3). I assume it here only to exhibit the independence of theses (1) and (2).
12. Knorr Cetina (1999) seems to endorse this view; it is critiqued in Giere (2007).
13. See Schmitt (forthcoming) for more discussion of this issue.
14. Conflation is encouraged because accounts of belief and acceptance use features of inquiry to characterize attitudes had by epistemic agents, effectively running theses (1) and (2) together.
15. Staley’s (2007) concept of “group belief” is just such an attitude: a basis for actions such as issuing a group statement (333, note 1). Thanks to an anonymous reviewer for bringing this point to my attention.
16. The situation is actually more complicated, as most collectivists endorse Gilbert’s plural subjects theory, which links conditions (1c) and (2a) in a different way. Section 5 addresses Gilbert’s account and argues against this line of support for thesis (2).
17. This is Crick’s version of the Central Dogma (a term he coined, perhaps ironically).
18. Wray argues that only groups that are functionally organized to achieve the goal of scientific knowledge, such that members are mutually interdependent, can have scientific knowledge, and that research teams, but not sub-fields or the scientific community as a whole, meet this condition (2007, 340–343). His arguments are therefore, like other defenses of the relational view, concerned with knowledge-production (thesis 1).
19. Relational group belief therefore meshes with the considerations in §3. For example, groups of interacting scientists (research teams, sub-fields, or whole disciplines) often issue statements derived from member individuals’ beliefs and from social epistemic interactions among them (Wray 2006; Staley 2007).
20. For details, see Gilbert (1989), Chapter 4; also Gilbert (2000, 39–41; 2006, 7).
21. The IBE argument is not explicit in Gilbert (2000). Instead Gilbert, understandably, proceeds on the assumption that her theory is correct. The IBE argument is, however, easily reconstructed if this assumption is relaxed (details in Fagan 2011, forthcoming).
22. See Psillos (1999) for a systematic defense of scientific realism based on IBE.
23. In fact, none of P1–P3 has been independently confirmed by empirical studies of science (Wray 2006; Fagan 2011; Fagan forthcoming). So it is premature, at least, to think they need explaining.
24. More detailed versions of this explanatory argument appear in Fagan (2011, forthcoming).

© 2012 The Author. Philosophy Compass © 2012 Blackwell Publishing Ltd.
Philosophy Compass 7/12 (2012): 821–831, 10.1111/j.1747-9991.2012.00528.x
Works Cited
Andersen, Hanne. ‘Modeling Collective Belief in Science.’ 2nd Biennial Conference of the Society for the Philosophy of Science in Practice, Un...