#Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures
new media & society
2017, Vol. 19(3) 329–346
© The Author(s) 2015
Reprints and permissions: sagepub.co.uk/journalsPermissions.nav
DOI: 10.1177/1461444815608807
journals.sagepub.com/home/nms
Adrienne Massanari
University of Illinois at Chicago, USA
Abstract
This article considers how the social-news and community site Reddit.com has become
a hub for anti-feminist activism. Examining two recent cases of what are defined as “toxic
technocultures” (#Gamergate and The Fappening), this work describes how Reddit’s design,
algorithm, and platform politics implicitly support these kinds of cultures. In particular, this
piece focuses on the ways in which Reddit’s karma point system, aggregation of material
across subreddits, ease of subreddit and user account creation, governance structure,
and policies around offensive content serve to provide fertile ground for anti-feminist and
misogynistic activism. The ways in which these events and communities reflect certain
problematic aspects of geek masculinity are also considered. This research is informed by
the results of a long-term participant-observation and ethnographic study into Reddit’s
culture and community and is grounded in actor-network theory.
Keywords
Algorithms, design, Gamergate, gender, online communities, online harassment,
platform politics, Reddit, The Fappening, toxic technocultures
Introduction
In 2014, a spate of anti-feminist action and harassment highlighted the ongoing problems
that women face when engaging in online spaces. One event, “The Fappening,” centered on
Corresponding author:
Adrienne Massanari, Department of Communication, University of Illinois at Chicago, Chicago, IL 60607, USA.
Email: amass@uic.edu
illegally acquired nudes of celebrities (most prominently Jennifer Lawrence) distributed
and discussed via anonymous image-board 4chan and Reddit.com. The second,
#Gamergate (GG), ostensibly a hashtag “movement” spawned by individuals who purported to be frustrated by a perceived lack of ethics within gaming journalism, became a
campaign of systematic harassment of female and minority game developers, journalists,
and critics and their allies. Both were emblematic of an ongoing backlash against women
and their use of technology and participation in public life. Discussions of harassment
online often cast a broad net, focusing on the legal aspects or offering large-scale policy
solutions that might reduce victimization (Citron, 2014). Fewer, however, examine the
ways certain design decisions and assumptions of use unintentionally may enable and/or
implicitly encourage these spaces to become hotbeds of misogynistic activism.
In this article, I examine how the platform and algorithmic politics (Bucher, 2012;
Gillespie, 2010; Van Dijck, 2013) of Reddit.com provides fertile ground for these kinds
of toxic spaces to emerge. By focusing on the ways in which a single platform’s design
and politics can support these kinds of activities, I hope to highlight the ways in which
Reddit implicitly reifies the desires of certain groups (often young, white, cis-gendered,
heterosexual males) while ignoring and marginalizing others. This project is grounded in
actor-network theory (ANT) (Latour, 1992, 2005), which emphasizes the importance of
considering how non-human technological agents (algorithms, scripts, policies) can
shape and are shaped by human activity, and is informed by the results of a 3-year ethnographic study and observation of Reddit’s communities and culture (Massanari,
2015). In particular, this article focuses on the ways in which Reddit’s karma point system, aggregation of material across subreddits, ease of subreddit and user account creation, governance structure, and policies around offensive content implicitly encourage a
pattern of what I call “toxic technocultures” to take hold and have an outsized presence
on the platform.
Reddit as cultural platform
Despite its growing popularity as a unique platform for user-generated content, and controversial role as a site for citizen journalism, Reddit remains an underexplored space
within new media scholarship. Reddit is an open-source platform on which anyone can
create their own community of interest (subreddit). Individuals can also download the
entire Reddit codebase and use the platform for their own ends. Subreddits are wide and
varied, but often reflect a geek sensibility, with many revolving around computing, science, or fandom interests. Reddit depends on user-submitted and user-created content, as
well as a large number of volunteer moderators who set and enforce the rules of individual subreddits. Creating an account allows one to customize the vast list of subreddits
and subscribe to only those of interest—these then constitute the “front” page for an
individual Redditor (reddit member). When a Redditor first creates an account, they are
subscribed to a default list of subreddits, which are intended to demonstrate the breadth
of the site’s communities.1 While Redditors may curate their feed to unsubscribe from all
default subreddits, they still remain an integral part of the Reddit experience for new and
lurking users—as material from these subreddits often populates /r/all (the default, non-logged-in page that individuals see when visiting http://www.reddit.com).
In addition to personalizing their front page, Redditors can upvote material they find
interesting or worthwhile and downvote that which they find off-topic or otherwise uninteresting. Highly upvoted material—both links and comments—appears higher on the
site (or subreddit’s) front page and thus receives more attention from viewers. Each link
and comment displays a number of points (score), which corresponds loosely to the
number of upvotes minus the number of downvotes a given item has received.2 This
score translates into karma points for a user’s account, a kind of currency that marks an
individual’s contributions to the Reddit community.
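The vote arithmetic described above can be modeled in a few lines. This is an illustrative sketch only, not Reddit’s production code (the function names are mine), and it omits the "fuzzing" Reddit applies to displayed totals:

```python
# Illustrative model (not Reddit's actual implementation) of the scoring
# described in the text: a displayed score is roughly upvotes minus
# downvotes, and an account's karma aggregates the scores of its
# submissions and comments.

def display_score(upvotes: int, downvotes: int) -> int:
    """Approximate point total shown beside a link or comment."""
    return upvotes - downvotes

def account_karma(items: list[tuple[int, int]]) -> int:
    """A user's karma, summed over their (upvotes, downvotes) pairs."""
    return sum(display_score(up, down) for up, down in items)
```

Because every upvote simultaneously raises an item’s visibility and its author’s karma, the model makes plain why reposting already-popular material is individually rational on the platform.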
While featuring very basic profile pages, Reddit has less in common with social-networking spaces such as Facebook or Google+ than it does with message boards and early
community sites such as the WELL. Because accounts are pseudonymous and easily
created, interactions on the platform’s myriad subreddits tend to feature elements of play
and candor that one might not associate with traditional social-networking spaces that
enforce a “one-name/real name” policy (Massanari, 2015). Presumably to encourage this
sense of play and candor, Reddit’s administrators take an extremely hands-off approach
toward content shared by users. The few rules they enforce prohibit sharing private information (doxxing), or sexualized images of minors, distributing spam, interfering with the
site’s regular functioning, and manipulating voting (reddit.com, 2014).
Reddit has quickly become a popular center of geek culture. Because anyone can create a subreddit on any topic, niche interests are well represented on the site. So finding
others interested in an obscure anime show is easy, as there is probably a subreddit that
already exists for discussing it, or one can easily be created. In addition, Reddit’s default
subreddits (which tend to have the largest subscriber base) skew toward geek interests,
with gaming (/r/gaming), science and technology (/r/science and /r/technology), news
(/r/news and /r/worldnews), and popular culture (/r/Music, /r/movies) landing regularly
on /r/all.3 Also popular are subreddits dedicated to sharing knowledge, such as /r/askscience or /r/explainlikeimfive. Popular celebrity Redditors include famed astrophysicist Neil deGrasse Tyson, philanthropist and Microsoft founder Bill Gates, and former Star Trek star William Shatner.
Reddit is, of course, a community of communities (as each subreddit is independently
moderated), and thus embraces a multitude of cultures. While many of them share a geek
sensibility, the fact that GG and The Fappening found a welcome home on Reddit is not
to suggest that these events are the direct result of geek culture per se. However, both
events were precipitated by individual actions that do suggest a technological expertise
and embeddedness within the habitus (Bourdieu, 1977) of geek culture (the former
within the gaming community; the latter within a criminal hacking underground).
Likewise, Reddit’s multitude of communities (subreddits) are regulated by the unifying
nature of the platform’s algorithm, which both rewards individual contribution and
emphasizes popular and recent content, and its mostly hands-off moderation policies.
Geek culture and geek masculinity
As discussed earlier, Reddit’s most popular subreddits and general ethos tend to coalesce
around geek interests—technology, science, popular culture (particularly of the science
fiction, fantasy, and comic book variety), and gaming. Thus, some examination of geek
culture and, given the gendered nature of the two cases discussed herein, geek masculinity is warranted. Geeks valorize expertise and specialized knowledge, and geek culture
often revolves around the acquisition, sharing, and distribution of this knowledge with
others. They often value wit, cleverness, and craft, negotiating between a sense of collectivism and individualism within the communities of which they are a part (Coleman,
2013)—and the interactions on Reddit’s many subreddits exemplify this tendency. But
despite the ways in which geek culture may welcome and promote deep engagement
with niche, often unpopular interests, it often demonstrates a fraught relationship to
issues of gender and race. As Kendall (2011) argues, the stereotypical image of the nerd4
still conflates interests in computing and technology with a specific kind of gender and
racial formation as it
… conjoins five statements: (1) Computers are an important but problematic type of technology.
(2) Nerds understand and enjoy computers. (3) Those who understand and enjoy computers are
nerds. (4) Nerds are socially inept and undesirable. (5) Nerds are white men. (p. 519)
Likewise, the “revenge fantasies” of Silicon Valley founders, in which the geek or
nerd gains power and moves from a marginal position to dominate their competitors,
almost always valorize a white man (Fan, 2014). Online interactions in geek-friendly
spaces such as Reddit are equally racialized and gendered and often presume a white
male centrality (Milner, 2013).
So to discuss geek and nerd culture is to discuss masculinity—in particular, white
male masculinity. Like other gender expressions, geek masculinity is both liminal and
performative. However, it both repudiates and reifies elements of hegemonic masculinity
(Connell and Messerschmidt, 2005). For example, geek masculinity often embraces facets of hypermasculinity by valorizing intellect over social or emotional intelligence. At
the same time, geek masculinity rejects other hypermasculine traits, as “the geek” may
show little interest in physical sports and may also demonstrate awkwardness regarding
sexual/romantic relationships (Kendall, 2011).
Despite the increasing cultural acceptance of geek pastimes, those who identify with
geek culture often feel marginal, as their interests are marked by the dominant culture as
odd or weird. Because of this, critiques of the immense amount of capital (particularly
cultural and intellectual capital) that geeks possess may be met with skepticism or outright hostility. Suggesting that geek culture can also be oppressive and marginalize certain populations may create a sense of cognitive dissonance for these individuals, who
likely view themselves as perpetual outsiders and thus are unable or unwilling to recognize their own immense privilege (Penny, 2014). Geek masculinity also embraces a kind
of techno/cyberlibertarian ethos, valuing the notion of a rational, autonomous individual
and meritocratic idealism (Turner, 2006). Therefore, critiques about the limited diversity
of geek communities such as Reddit are often subsumed under a banner of choice—that
the reason more women or people of color do not participate is because they do not want
to—rather than a recognition of the structural barriers that might make participation difficult or unappealing.
Spaces dedicated to geek culture and STEM interests (like Reddit) may exhibit the
tendency to view women as either objects of sexual desire or unwelcome interlopers or
both—making them doubly unwelcoming for women (Varma, 2007). Herring and
Stoerger’s (2014) work underscores the gendered nature of online discourse generally
and the ways in which it can serve as a barrier to entry for women. Likewise, in his analysis of the free culture movement, Reagle (2012) articulates a number of ways in which
community values and norms come to shape why female participation in these spaces is
contested and fraught. These include the argumentation style often characteristic of geek
culture, the openness of communities, which often leads to their domination by
trolls or other problematic members, and a “rhetoric of freedom and choice” which overemphasizes individual choice as the reason why women may not participate and ultimately devalues such conversations as infringing upon members’ freedom of speech
(Reagle, 2012: ¶3). All these factors are also at play on Reddit’s platform, but are complicated by the way voting makes material more or less visible on the site.
Toxic technocultures—two cases
Perhaps because of its entanglement with geek masculinity, and its complicated relationship with issues of race and gender, Reddit serves as a nexus where various toxic technocultures thrive. I am using the phrase “toxic technocultures” to describe the toxic
cultures that are enabled by and propagated through sociotechnical networks such as
Reddit, 4chan, Twitter, and online gaming. Toxic technocultures are related to, but distinct from, other issue-based, networked (boyd, 2011) and affective publics (Papacharissi,
2015), as they may coalesce around a particular issue or event, but tactics used within
these cultures often rely heavily on implicit or explicit harassment of others. The toxic
technocultures I discuss here demonstrate retrograde ideas of gender, sexual identity, sexuality, and race and push against issues of diversity, multiculturalism, and progressivism.
This is not to suggest that individuals within these cultures are not diverse themselves in
terms of their backgrounds, or reasons for participating, or that they all share the same
vision of what the culture is “about.” However, the larger discourse which characterizes a
“toxic technoculture” often relies on an Othering of those perceived as outside the culture, on outmoded and poorly understood applications of evolutionary psychology, and on a valorization of masculinity masquerading as a peculiar form of “rationality.”
Toxic technocultures are unique in their leveraging of sociotechnical platforms as both a channel of coordination and of harassment, and in their seemingly leaderless, amorphous quality. Members of these communities often demonstrate technological prowess
in engaging in ethically dubious actions such as aggregating public and private content
about the targets of their actions (for potential doxxing purposes or simply their own
enjoyment) and exploiting platform policies that often value aggregating large audiences
while offering potential harassment victims little protection. At the same time, individuals affiliated with toxic technocultures both champion the power of the community
as a way to effect change or voice displeasure with others they view as being adversaries,
while still distancing themselves from what they perceive as the more ethically dubious
(and illegal) actions of others, suggesting they are “not really part” of whatever toxic technoculture they are acting within.
Reddit is merely a recent iteration of a vast number of online spaces where toxic technocultures coalesce and propagate. From the USENET groups to the darknet to 4chan and
other chan-style image boards, toxic technocultures have always thrived in an environment of little accountability, anonymity, and the increased globalization enabled by online
technologies (Bernstein et al., 2011; Pfaffenberger, 1996). However, many of these spaces
remain relatively (and purposefully) inaccessible to the average internet user, often requiring technological expertise to set up proxies (in the case of the darknet) or cultural expertise to understand the myriad memes, in-jokes, and linguistic short-hand that serves as the
lingua franca of spaces like 4chan. Reddit is interesting because of its prominence and its
positioning within the online domain as a social news/entertainment/community site (as
in, there is something for everyone). The barriers to entry are few; even if Redditors often
rely on sharing links, commenting, and recounting memes and stories to encourage community connection, a new user can participate by simply voting.
Gamergate (GG)
In August 2014, a blog written by the jilted ex-lover of a female independent game
designer was posted to the SomethingAwful forums in a thread about terrible breakups.
It was quickly removed by moderators, but soon found its way to anonymous imageboard 4chan. Authored by Eron Gjoni, the blog featured excruciating detail about his
ill-fated relationship with Depression Quest (DQ) creator Zoe Quinn and included
screenshots of alleged Facebook message conversations between the two. Quinn had
already been the target of harassment after she initially posted DQ to the Steam Greenlight
service (a platform for independent games still in development to be reviewed and gain
exposure) in 2013, with individuals sending her rape and death threats. But after the post,
Quinn became the centerpiece and token figure in a hateful campaign to delegitimize and
harass women and their allies in the gaming community. Because Gjoni’s blog incorrectly implied that Quinn’s success was due in no small part to her intimate relationships
with games journalists who wrote positive reviews of DQ, some within the gaming community argued that it was just another instance of questionable ethics in games journalism (Stuart, 2014).
Actor and right-wing conservative Adam Baldwin responded early on to the controversy, coining the hashtag GG and becoming an active supporter of the movement. While
purportedly a reaction to a perceived lack of ethics in digital games journalism that
Quinn’s alleged improprieties represented, those rallying behind the hashtag have instead
used this moment to engage in concentrated harassment of game developers, feminist
critics, and their male allies on Twitter and other platforms. Use of GG or even @mentions of those prominently targeted by harassers (such as Feminist Frequency’s Anita
Sarkeesian) continues to lead to further harassment of private individuals who are perceived as “anti-GG.” For their part, GGs insist that any harassment is done by individuals
not affiliated with the GG community (despite their use of the hashtag). While it is possible that certain people have used GG as a convenient cover to engage in harassment
while not being truly invested in the issues, the lack of public leadership by organizers
means that condemnations of harassment do little to stem the problem (Stuart, 2014). As
Coleman (2013) found in her ethnographic work with Anonymous, one of the most difficult aspects of “leaderless” movements is that some may use them as a kind of cover
for their own selfish ends.
After discussions of Quinn and GG were finally banned from 4chan by administrator
Christopher “moot” Poole in late September 2014 (which many GGs viewed as the ultimate betrayal and proof that so-called “social justice warriors” (SJWs) were infiltrating
even their most sacred of spaces), they moved to another chan-style board, 8chan.co
(Stuart, 2014). Twitter, 4chan, and 8chan have all been used as spaces for harassment;
however, the public face of GG has centered on Reddit’s /r/KotakuInAction (KIA).
While actual engagement with those perceived as “anti-GG” occurs in spaces such as
Twitter and on YouTube, KIA serves as a hub for information about ongoing attempts to
pressure companies to pull their advertisements from websites considered sympathetic to
social justice in their coverage of the games industry—with gaming website Kotaku
considered a prime offender.
/r/KIA takes its name from yet another subreddit with a strongly anti-feminist bent:
/r/TumblrInAction (TIA), and unsurprisingly, they share some of the same moderators.
Designed originally to satirize the culture of Tumblr, TIA has since shifted to become
a meeting place for Redditors to mock feminism, non-binary and trans* gender identities, and social activism. Likewise, discussions on /r/KIA tend to be strongly anti-feminist and often express libertarian and/or conservative political sentiments. Part of KIA’s prominence within the GG “debate” is likely due to Reddit’s anti-doxxing policies and to the fact that discussions on KIA are moderated and pseudonymous, rather than fully
anonymous as they are on 8chan, making some sort of accountability theoretically
possible, if unlikely.
The Fappening
Around the time that GG was gaining steam in late August 2014, a large cache of stolen
photographs of celebrities was posted to 4chan. Many of the images were private female
celebrity selfies that had been stored using Apple’s iCloud service. While a number of
women were victimized by the hack, many of the images featured Jennifer Lawrence,
star of The Hunger Games series of films. After the stolen photographs were scrubbed
from 4chan, they continued to propagate across the web—most notably on the subreddit
/r/thefappening, which served as a disturbing hub of discussion about the images and the
celebrities involved.5 /r/thefappening was extremely popular—with 100,000 new subscribers signing up in the first 24 hours of its existence (UnholyDemigod, 2014). Because
Reddit’s algorithm is heavily influenced by both new and highly upvoted content, /r/all
featured numerous links to the stolen images. Thus, if a new visitor were to stumble
across Reddit from 30 August until 7 September, when /r/thefappening was finally pulled
from the site and other popular subreddits also banned the images, they would have the
impression that Redditors were obsessed with upvoting, sharing, and discussing nude
pictures of celebrities. The tone of many of /r/thefappening discussions was gleeful, with
few individuals expressing concern over the ethical questions that both dissemination
and viewing the images raised, instead focusing on what additional photographs might
come to light or what other female celebrity might be targeted next.
It is important to note that this was not the first time Jennifer Lawrence had been the
object of Reddit interest. Her forthrightness and self-effacing nature have gained her a loyal
following on the site, particularly as her ethos suggests a kind of authenticity and candor
that many Redditors prize—and her status as a quintessential “cool girl” who embodies
sexual desirability while remaining unthreatening probably did not hurt (Peterson,
2014). Her presence on the site took several forms: reaction GIFs (animated images that
loop and encapsulate a specific, often witty, emotional response) injected regularly into
threads, discussion about her down-to-earth nature and approachability, and a subreddit
(/r/jenniferlawrence) dedicated to sharing images and news about her (although, more of
the former than the latter).6 Given this, the discourse on /r/thefappening and /r/thefappeningdiscussion regarding Lawrence’s images was particularly stomach-churning—as it
became quickly apparent that some Redditors had no trouble victimizing a person that at
least a portion of the community had previously idolized.
Reddit administrators later noted that the site’s traffic increased exponentially as a
result of /r/thefappening, requiring constant intervention to keep the rest of the site running. Additionally, numerous Digital Millennium Copyright Act (DMCA) infringement notices, filed on behalf of those impacted by the hack, also required administrator action. But the subreddit’s ban was not ensured until it was revealed that a number
of the photographs included those of then-underaged gymnast McKayla Maroney, which
constituted a violation of Reddit’s policy prohibiting sexualized images of minors (alienth,
2014b). One reason Reddit administrators might have been reluctant to ban /r/
thefappening sooner may have been monetary: in 6 days, subscribers purchased enough
Reddit gold (a kind of currency that defrays Reddit’s server costs) to run the entire site for
a month (Greenberg, 2014). So, the reason /r/thefappening and its associated images
were finally banned from Reddit had little to do with the ethical questions they raised, the
invasion of privacy they represented, or the fact that their viewing and distribution represented a sex crime [as Lawrence later claimed in a Vanity Fair piece (Vanity Fair, 2014)].
And long after /r/thefappening’s demise, the images continued to propagate through many
smaller subreddits—including /r/fappeningdiscussion (still in existence as of August
2015), where any new caches of celebrity nudes continue to be shared.
How Reddit’s design, policies, and culture support toxic
technocultures
While the lurid and public nature of both The Fappening and GG might have inevitably
meant some discussion on Reddit, their outsized presence on the platform is a consequence of its culture, politics, and design. Borrowing from Gillespie (2010), Van Dijck
(2013), and Bucher (2012), and drawing on ANT, I am using the term “platform politics”
to mean the assemblage of design, policies, and norms that encourage certain kinds of
cultures and behaviors to coalesce on platforms while implicitly discouraging others.
Disentangling the community’s norms from the ways those norms are shaped by the
platform and administrative policies becomes difficult in a space such as Reddit, as they
are co-constitutive of one another. In this section, I broaden out from considering just the
cases of The Fappening and GG to argue that the culture and design politics of Reddit
implicitly allow anti-feminist and racist activist communities to take hold.
ANT’s strength as a theoretical framework is that it sensitizes us to the often unintended consequences of non-human actants (bots, scripts, algorithms, policies)
and the ways in which they shape online cultures. In this vein, a critical factor that
shapes the prominence of anti-feminist activity on the platform is karma. As I mentioned earlier, karma is a point system that purports to represent how much Redditors
value a particular account’s contribution. Postings and comments are accompanied by
a point total (score), which is some variation on upvotes minus downvotes that is
fuzzed so that spammers and others are less likely to game the system (jeffzem, 2014).
Scores also affect the visibility of a given comment or posting; when comments are
sorted by the default “best,” those comments that are highly upvoted and have received
a large number of comment replies are listed higher than others.7 Each user account
has an associated amount of karma based on the scores of their comments and postings to Reddit as a whole. This system valorizes individual contributions and suggests
that the site is democratic in terms of what material becomes popular. At the same
time, such a system implicitly incentivizes certain activities that might gain karma for
the Redditor: for example, reposts of popular material across multiple subreddits
(thus the vast spread of material from The Fappening and GG across Reddit) and comments that reflect the general ethos of Reddit’s culture in terms of its cyber/technolibertarian bent, gender politics, and geek sensibilities. As other scholars have noted,
such a system can create “herding” or power law effects around particular material,
biasing individuals to mirror the voting behavior of others (Muchnik et al., 2013).
While many subreddits hide karma totals for a time in an effort to diminish these
kinds of bandwagon effects, such attempts are relatively ineffectual. Also compounding this problem is Reddit’s default sorting filter—users must actively change it if
they would like to see more controversial material, and as comment sections on popular posts can easily go into the hundreds or thousands, it seems likely that most
Redditors simply read the comments deemed “best” by others and vote on those.
While such a system implies that it is directly democratic (suggesting one person = one
vote), the ability for a single individual to create multiple accounts means that it is
also easily gamed.8
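The default “best” sort discussed above has a publicly documented basis: Reddit’s open-source code ranks comments by the lower bound of the Wilson score confidence interval on their upvote ratio, so a comment must have both a high ratio and enough votes to support it before it rises. A minimal sketch, with the z constant taken from Reddit’s source at the time (the function name is mine):

```python
import math

# Sketch of the comment-confidence sort from Reddit's open-source code
# (the default "best" ordering): comments are ranked by the lower bound
# of the Wilson score interval on their upvote ratio, so a high ratio
# must also be well-sampled before a comment ranks highly.

def wilson_lower_bound(upvotes: int, downvotes: int,
                       z: float = 1.281551565545) -> float:
    """Lower bound of the Wilson score interval for the upvote ratio."""
    n = upvotes + downvotes
    if n == 0:
        return 0.0
    p = upvotes / n  # observed upvote ratio
    return ((p + z * z / (2 * n)
             - z * math.sqrt((p * (1 - p) + z * z / (4 * n)) / n))
            / (1 + z * z / n))
```

One consequence for visibility: a comment with a single upvote scores well below one with a hundred upvotes and a handful of downvotes, so early engagement compounds, reinforcing the herding effects described above.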
Reddit’s aggregation of material across the subreddits it hosts is another design choice that can implicitly suppress certain types of content while highlighting others, and it also
serves as an unintentional barrier to participation. /r/all, the non-logged-in version of
Reddit’s home page, provides a kind of barometer of the community as a whole. The
specifics of the algorithm used to sort /r/all are complicated, but it generally highlights
material across subreddits that is new and considered popular (meaning highly upvoted).
To make it on to the first pages of /r/all, a subreddit must already have a substantially
large subscriber base (as links only appear there if they have a substantially large score,
which means many of them are the default subreddits to which a person is subscribed
when they first create a Reddit account). Recognizing that some popular material may
not shine the best light on the Reddit community, administrators have allowed subreddit
moderators to opt out of /r/all. This means some highly subscribed and
highly popular not-safe-for-work (NSFW) subreddits such as /r/gonewild no longer populate the site’s front page unless a person is subscribed to them (alienth, 2014a). But
subreddits are only removed if their own moderators ask for their removal, meaning that
plenty of posts from popular and objectionable subreddits often populate /r/all, including, for example, /r/fatpeoplehate (FPH) (a subreddit devoted to fat-shaming and ridiculing the Health at Every Size movement). While seasoned Redditors often spend their time
curating the subreddits to which they subscribe, often to avoid this kind of material, new
users (and lurkers, and non-logged in users) would see /r/all and reasonably assume that
it represents the dominant culture of Reddit. So, the problem becomes circular in nature—
those who do not see themselves or their views reflected in the subreddits populating /r/
all might choose to not participate, further compounding the likelihood that such perspectives do not make it to the top of /r/all.
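The weighting of /r/all toward material that is both new and highly upvoted can be made concrete with the “hot” ranking from Reddit’s open-source code (as published in this period; simplified here, with the function name and test dates my own). The logarithm means early votes count far more than later ones, while the time term gives newer submissions a steadily growing head start:

```python
import math
from datetime import datetime, timezone

# Sketch of the "hot" ranking from Reddit's open-source code: log10 of
# the net score means the first 10 votes count as much as the next 100,
# so early votes dominate, while the time term grants newer posts a
# bonus worth an extra order of magnitude of votes every 45,000 seconds
# (12.5 hours).

REDDIT_EPOCH = 1134028003  # offset (seconds) used in Reddit's source

def hot(upvotes: int, downvotes: int, posted: datetime) -> float:
    score = upvotes - downvotes
    order = math.log10(max(abs(score), 1))
    sign = 1 if score > 0 else -1 if score < 0 else 0
    seconds = posted.timestamp() - REDDIT_EPOCH
    return round(sign * order + seconds / 45000, 7)
```

This structure helps explain the circularity described above: once a post from a large subreddit gathers its first burst of votes, recency and the logarithmic score term keep it on /r/all, where it attracts still more votes.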
It is apparent that Reddit administrators are at least somewhat aware of the perception that /r/all often features a limited subset of material and viewpoints, and they have tried to address this by changing the default set of subreddits to which a Redditor is initially
subscribed. In May 2014, a large number of new default subreddits were introduced and
others removed, likely in an attempt to broaden the appeal of Reddit for new users. While
other new subreddits were defaulted with little disagreement, such as /r/OldSchoolCool
and /r/mildlyinteresting, it was the female-oriented /r/TwoXChromosomes (TwoX) that
provoked the most hostility from a portion of the Reddit user base. Some Redditors
expressed dismay and outright anger that they would be confronted by discussions of sexual assault, periods, or female body image. Some inquired why /r/
mensrights (a subreddit dedicated to the men’s rights movement) was not defaulted as
well. Others suggested that TwoX would now become a default space for men on Reddit,
filled with comments like "As a man …" or "Not a female but …." Still others argued that defaulting TwoX, rather than doing anything more to address the gender imbalance on Reddit, would backfire and simply make it a terrible space for the women who had once found it supportive (sodypop, 2014). Visibility in the form of defaulting did create a toxic environment in
TwoX (at least initially), with individuals being harassed and trolled, and comment
threads subjected to mass downvoting by other Redditors who were angered by the
change, even though all they needed to do was unsubscribe from the subreddit if they did
not want to see it on their front page.
The issue of visibility becomes salient, not just in terms of the ways in which it reflects
Reddit’s sorting algorithm, but also when subreddits, particularly those that do not reflect
a particular kind of (white) geek masculinity, are elevated to prominence via /r/all. A
vocal minority of Redditors can hijack these subreddits' content, and their subscribers may become the targets of specific harassment efforts. In contrast, material that does align with the
kind of (white) geek masculinity that many within the Reddit community prize faces little resistance. For example, during The Fappening, the stolen images quickly propagated
across subreddits, earned upvotes, and thus appeared with frequency on /r/all. Likewise,
when GG was still an allowable topic on Reddit’s many gaming subreddits (/r/gaming,
/r/Games, /r/PCMasterRace, etc.), the same story or video appeared in many different
guises on /r/all. These posts then accrued even more upvotes as they spread and were cross-posted to other subreddits. And because upvotes translate into visibility on Reddit, and earlier votes count more heavily than later ones, downvoting something after it has become extremely popular is likely to have little effect.
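This dynamic can be sketched from Salihefendic's (2010) public description of Reddit's "hot" ranking, which combines a logarithmically weighted vote score with a linear time bonus. The function and constant names below are illustrative, and the constants follow that published description rather than Reddit's current code:

```python
from datetime import datetime
from math import log10

# Epoch used in the published description of the ranking (Salihefendic, 2010).
EPOCH = datetime(1970, 1, 1)

def hot(ups: int, downs: int, date: datetime) -> float:
    """Score a post: each tenfold increase in net votes adds the same amount
    to the score, while newer posts receive a steadily growing time bonus."""
    score = ups - downs
    order = log10(max(abs(score), 1))   # first 10 votes weigh as much as the next 90
    sign = 1 if score > 0 else -1 if score < 0 else 0
    seconds = (date - EPOCH).total_seconds() - 1134028003
    return round(sign * order + seconds / 45000, 7)

# A downvote arriving after a post is already popular barely moves the score:
t = datetime(2014, 9, 1)
late_effect = hot(1000, 50, t) - hot(1000, 51, t)   # tiny
early_effect = hot(10, 0, t) - hot(10, 1, t)        # two orders of magnitude larger
```

Because the vote term grows logarithmically while the time term grows linearly, early votes largely decide a post's fate; once something is popular, even mass downvoting shifts its rank only marginally, consistent with the pattern described above.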
Reddit’s platform design also provides little support for discouraging material that
might be objectionable or harassing. The only recourse administrators provide to users is
the ability for individual accounts to report links and comments to moderators. In the
report form, a logged-in Redditor can indicate that the content is breaking one of the five
rules of Reddit, or can provide another short, 100-character explanation. As with other
sites that rely on “flagging” as a mechanic for handling offensive content, Reddit’s tools
are limited and do little to support a public, deliberative discussion as to why something
might be objectionable (Crawford and Gillespie, 2014). And, there is no clear way to
report an entire subreddit for objectionable content, other than messaging the administrators directly. Additionally, site administrators actively discourage Redditors from engaging in “witch-hunts” by overusing the report tool or downvoting indiscriminately, instead
encouraging them to create their own communities (subreddits) where they can implement their own rules around interactions (Auerbach, 2014). However, Reddit already
functions as a de facto vote-brigading platform, as it encourages large numbers of people
to visit (and comment on) material other sites host. The real issue, as some Redditors
argue, is the lack of transparency around brigading on Reddit proper and a limited set of
moderator tools for handling these events (RobKhonsu, 2015).
The Reddit platform makes it easy to create multiple subreddits and user accounts,
even after they have been banned. For example, after /r/creepshots (dedicated to sharing
sexualized images of unknowing women) was banned, it was reborn as /r/
CandidFashionPolice. Likewise, /r/niggers (banned in 2013) found new life as the
equally odious /r/GreatApes and /r/coontown. And, while the relative ease with which
users can create multiple accounts may encourage individuals to be more honest—allowing them to discuss sensitive personal issues without concern that it might be repeated to
friends or coworkers, for example—it may also enable them to engage in unethical
behavior with little repercussion. A Redditor’s karma and previous postings/comments
may help others determine whether their contributions are productive, but they do not ensure that an account will not be used for harassment or will not continue to submit offensive content unless administrators step in.
The platform’s policies and approach to community governance also encourage the
continued presence of toxic technocultures. Reddit administrators are loath to intervene in any meaningful way in content disputes, citing Reddit's role as an impartial or "neutral"
platform for discussion. As former CEO Yishan Wong noted in a particularly tone-deaf
posting regarding the company's decision (later reversed) not to ban /r/thefappening, "each man is responsible for his [sic] own soul"—suggesting that while he might personally find the stolen images objectionable, each person had to make that choice for themselves (yishan, 2014). In the aftermath, Reddit administrators also stated
that they "feel it is necessary to maintain as neutral a platform as possible, and to let the
communities on Reddit be represented by the actions of the people who participate in
them” (alienth, 2014b). But remaining “neutral” in these cases valorizes the rights of the
majority while often trampling over the rights of others.
Much of administrators’ limited engagement around these issues is the result of a
design choice to aggregate, instead of actually host, content. This means that except for
self-posts (where a Redditor simply creates a text-only posting in a given subreddit) and
the threaded comment discussion that occurs on a given link, most material is hosted on
outside platforms such as Imgur, YouTube, Twitter, and Tumblr. Thus, Reddit administrators do not have to concern themselves with the appropriateness of a given piece of
content—they are simply linking to (and thus, redistributing) material that is already
present online. In the case of The Fappening, administrators could suggest that the platform was merely distributing the content rather than actually hosting it, perhaps
providing a way to circumvent DMCA takedown notices. Likewise, the platform’s rules
banning the sharing of private information could be circumvented somewhat in the case
of GG—Redditors could link to material that was clearly meant to encourage others to
harass or doxx Quinn and others—but because it was hosted elsewhere, it might be
allowed.
Reddit’s reliance on the unpaid labor of its users also has significant implications for
the perpetuation of toxic technocultures on the platform. A large number of volunteer moderators are responsible for enforcing the rules of the subreddits they moderate, encouraging submissions from other Reddit members, adjudicating any conflicts
that arise, and enforcing bans. As scholars have argued, particularly in relation to open-source software development and free culture projects such as Wikipedia (Bruns, 2008;
Reagle, 2012), “free labor is not necessarily exploited labor” (Terranova, 2003).
However, like many other social media platforms, Reddit profits from this unpaid labor,
while shifting responsibility for the content shared to individual users.
Reddit’s platform also provides moderators few tools to deal with the complexities of
moderating subreddits, such as removing offensive content and banning users. Moderators
must rely on third-party plug-ins (again, created by unpaid labor), most of which are
considered insufficient and cumbersome. Because moderators are unpaid positions, it is
not surprising that few individuals are willing to do the time-consuming job, or can do it
well. This means that mini-fiefdoms often spring up, whereby a very few moderators
control a large segment of the subreddits—resulting in something more akin to an autocratic, rather than democratic, system of governance (Auerbach, 2014). It is incredibly
difficult, too, for powerful moderators to be removed from their positions, however inefficient or problematic they become (see, for example, Alfonso, 2014b). Often moderators
of the more pernicious subreddits coalesce; for example, one of the moderators for the
popular subreddit /r/BlackPeopleTwitter also moderates /r/CuteFemaleCorpses, /r/
BeatingWomen2, and /r/HurtingAnimals.
There seems to be a deep reluctance on the part of the administrators to alienate any
part of their audience, no matter how problematic, as it will mean less traffic and ultimately less revenue for Reddit. In the case of The Fappening, the fluid and decentralized
nature of Reddit meant that these images were not just contained to the /r/thefappening
subreddit. Less prominent subreddits became distribution points for the images, even
after they were clearly identified as illegally obtained. For example, a moderator of /r/
Celebs expressed ambivalence about allowing the images to be submitted, but also seemed to express glee at what was termed the "insane traffic" the subreddit was receiving
because of the hack (atticus138, 2014). Later, these moderators chose to ban the images,
but only after site administrators banned /r/thefappening—most likely because they
feared that /r/Celebs would meet the same fate. As I mentioned above, simply allowing
images from The Fappening to propagate for a few days before banning them was
extremely profitable for Reddit’s coffers.
Resisting or questioning the design, policy, and culture of Reddit remains difficult.
Specific attempts to modify aspects of the platform's design are numerous, mostly through
modifications made to the CSS file used by subreddits. For example, a number of subreddits hide the down arrow next to postings, often in an attempt to encourage more positive
interactions. Without administrator intervention, however, there is simply very little
recourse at the disposal of users and moderators who believe that subreddits such as /r/
TheRedPill support rape culture, or that KIA tacitly condones harassment, other than
the creation of “metasubreddits” such as /r/TheBluePill and /r/GamerGhazi, which
attempt to serve as discursive counterbalances. The most well known of these are /r/
ShitRedditSays (SRS) and its affiliated subreddits, which act as a sort of feminist, antiracist Reddit-within-reddit. SRS actively critiques the emphasis on karma acquisition
and scoring by inverting upvotes and downvotes, and its moderators enforce a much
stricter policy around the kind of content and speech that is allowed in its subreddits.
However, SRS is the frequent object of many Reddit conspiracy theories (see /r/
SRSMythos) and anti-SRS subreddits (for example, /r/SRSsucks), with its members
often portrayed as “not real Redditors,” and SJWs intent on infringing on others’ free
speech. This echoes the familiar refrain within the gaming and geek communities that
some individuals are not “real gamers” or “real fans”—a label that is almost always
applied to those who challenge or question the idea that these spaces are designed for
white males.
Conclusion
Both GG and The Fappening created an odd paradox, by which individuals associated
with each event viewed their actions as somehow noble (at least in the case of the former)
or at least unproblematic, while engaging in what even superficially could be considered
unethical activity. Both communities raised money for charities, only to have the donations refused by the recipient foundations. While entirely understandable and unsurprising to anyone outside these toxic technocultures, these refusals were portrayed as somehow surprising, shocking, or hypocritical by those within. Implied in both cases is the idea that
women should be shamed and deserve a lower standard of privacy because of their sexual
activities. Both events are also indicative of a pattern of toxic technocultures that have
gained an outsized presence on the Reddit platform.
Given the fluid, permeable nature of the Internet, it is important to understand how
these kinds of interactions on Reddit are also reflective of and influenced by other platform cultures. Toxic technocultures propagate precisely because of the liminal and fluid
connectedness of Internet platforms. During the height of GG, for example, administrators claimed repeated attempts to doxx Quinn were the result of 4chan brigading Reddit
(cupcake1713, 2014). While possibly true, this also problematically positions certain
uses of Reddit as “more legitimate” than others. Many 4chan users are probably Reddit
users, and vice versa. It is interesting, too, that administrators are unwilling to consider users from outside of Reddit as really being Redditors, but that those within Reddit who actively contribute to its sometimes-toxic nature are.
As some have noted, anonymous image board culture (as represented by spaces such
as 4chan and 8chan) prizes, “… unfettered emergence of consensus. Moderation is an
unnatural intervention” (A Man In Black, 2014: ¶11). So perhaps the toxic technocultures we see gaining traction on Reddit are partially the result of the kinds of interactions
these anonymous spaces seem to cultivate and prize. However, as I mentioned above, a
number of factors make Reddit in particular a welcoming space. These include the site’s
design, its governance structure and algorithmic logic, administrator unwillingness to
intervene and make universal decisions regarding offensive content, and its reputation as
a geek-friendly environment. I do not mean to suggest that Reddit's administrators willingly court or are even supportive of the kinds of toxic technocultures that coalesce on their platform, but these cultures are an indirect consequence of its technological politics. And
although individual administrators may express distaste at the ways in which Reddit is
used, they are loath to make any concrete changes, effectively suggesting a lack of awareness when it comes to how the platform has been, and will continue to be, used to mobilize anti-feminist publics.
Understanding the ways in which toxic technocultures develop, are sustained, and
exploit platform design is imperative. New media scholars as well as activists would be well served by exploring these publics, however unsavory, from this perspective, as it
could provide insight into alternative designs and/or tools that may combat their spread.
Especially important is considering the ways in which technological design choices of
spaces such as Reddit often implicitly reflect a particular kind of geek masculinity—one
that is laden with problematic assumptions about who can enter these spaces and how
they can participate.
Afterword
During the course of researching and writing this piece, a number of significant changes
to Reddit’s content policy and administrative team occurred. The first change, announced
in February 2015, banned so-called “revenge porn” from the site. In June, new policies to
discourage the harassment of Reddit members eventually led to the banning of /r/FPH and
several other subreddits (kn0thing, 2015). Both policy changes were instituted by interim
CEO Ellen Pao, and the latter in particular led to a kind of uprising by certain individuals
who viewed it as the first of many steps toward the site’s capitulation to political correctness and SJWs. In addition to spamming the site with FPH clones, some Redditors posted
anti-Pao propaganda, which dominated the site's front page for several days. Still other
Redditors wondered why other subreddits, such as the racist /r/coontown, were spared
elimination. Tellingly, subreddits such as /r/KIA were on the anti-Pao frontlines, becoming vocal supporters of a Change.org petition to remove her as CEO (lleti, 2015).
The final straw came in July 2015, when popular Reddit community administrator
and AMA coordinator Victoria Taylor was fired without administrators notifying the
moderators of /r/IAMA and other subreddits who depended on her assistance. This was
viewed by many as further evidence of the dysfunctional relationship between Reddit
administrators and moderators (Lynch and Swearingen, 2015). Blame for the botched
departure of Taylor was placed on Pao’s shoulders. The next week, Pao resigned and cofounder Steve Huffman reemerged to become the site’s new CEO. Ellen Pao (2015)
subsequently penned an opinion piece for The Washington Post suggesting that the
“trolls are winning” on Reddit and across the Internet given the numerous death threats
and invective she received. Meanwhile, newly appointed CEO Huffman suggested that
the policy changes implemented under Pao would stay and would likely be augmented
by several others. Huffman proposed new content rules that would prohibit anything that
“incites harm or violence against an individual or group of people” or “harasses, bullies,
or abuses.” In addition, a new category would be created, much like Reddit’s NSFW
classification, that would quarantine “indecent” spaces, making them not searchable or
publicly listed (spez, 2015).
It remains to be seen how Reddit will develop in light of these new policies. However,
the resistance by a vocal group of Redditors to these changes provides further evidence that
the technological affordances, and Reddit’s platform politics, have cultivated a space where
toxicity is normalized. Huffman's proposed solution, to allow but neither publicize nor profit
from hate-filled subreddits, does nothing to stem the underlying problem. Members of subreddits such as /r/coontown (banned as of August 2015) or /r/CuteFemaleCorpses do not
stay contained to their own toxic spaces, but are participants in other, more mainstream areas
of Reddit. This means that their retrograde views continue to be implicitly legitimized by
Reddit administrators. Most disturbingly, the notion that advertising revenue will not be collected from these objectionable subreddits effectively means that the rest of Reddit—including its anti-racist, feminist, and progressive spaces—would in fact be subsidizing the
existence of its toxic neighbors. Such a choice could indeed lead to the capitulation of Reddit
to the “trolls” (as Pao calls them), unless something radically shifts in the coming months.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship,
and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this
article.
Notes
1. A list of defaults can be found at https://www.reddit.com/r/defaults. URLs for subreddits are shortened to /r/subredditname for readability. Capitalization reflects the subreddit's name as listed at http://www.reddit.com/reddits.
2. Reddit's algorithm weights votes logarithmically, so that the first 10 votes are counted more than the next 100 and so forth (Salihefendic, 2010).
3. A list of the most-subscribed subreddits is at http://redditmetrics.com/top.
4. I am using the terms "nerd" and "geek" interchangeably.
5. The subreddit's name is a portmanteau of a slang term for masturbation popular on Reddit, "fap," and "the happening."
6. The stolen images were posted to /r/jenniferlawrence for a time, but later removed by moderators. In response, the sidebar of the subreddit was changed to add a new rule that specified leaks were no longer allowed.
7. Other comment/link filters include "top," which pushes the top-most voted comments/links (and their responses) to the top; "new," which orders the comment/link threads by time of submission; "hot," which reorders the thread to indicate which comments or links are currently being upvoted; and "controversial," which filters the comments/links by those that have the most similar number of upvotes and downvotes (Salihefendic, 2010).
8. See the case of /u/Unidan, a popular Redditor who was found to be engaging in vote manipulation by creating a number of sockpuppet accounts, which he used to upvote his own contributions and downvote those with whom he disagreed (Alfonso, 2014a).
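The "controversial" filter described in note 7 rewards posts whose upvote and downvote totals are both large and closely balanced. A minimal sketch of such a score, modeled on the controversy formula in the codebase Reddit once open-sourced (the function name here is illustrative, and the exact formula may differ from Reddit's current code):

```python
def controversy(ups: int, downs: int) -> float:
    """Highest when a post draws many votes split almost evenly;
    zero when the vote is entirely one-sided."""
    if ups <= 0 or downs <= 0:
        return 0.0
    magnitude = ups + downs                                  # total attention
    balance = downs / ups if ups > downs else ups / downs    # 1.0 = even split
    return magnitude ** balance

# An evenly split post outranks a lopsided one with the same total votes:
controversy(500, 500)   # 1000.0
controversy(900, 100)   # ~2.15
```

Sorting by this score surfaces exactly the heavily contested threads, such as the mass-downvoted TwoX discussions described earlier, rather than consensus material.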
References
Alfonso F III (2014a) How Unidan went from being Reddit’s most beloved user to its most disgraced. Available at: http://www.dailydot.com/news/reddit-unidan-shadownban-vote-manipulation-ben-eisenkop/ (accessed 6 August 2015).
Alfonso F III (2014b) Meet the Reddit power user who helped bring down r/technology. Available
at:
http://www.dailydot.com/politics/reddit-maxwellhill-moderator-technology-flaw/
(accessed 6 August 2015).
alienth (2014a) Experimental Reddit change: subreddits may now opt-out of /r/all. Available at:
http://www.reddit.com/r/changelog/comments/2a32sq/experimental_reddit_change_subreddits_may_now/ (accessed 6 August 2015).
alienth (2014b) Time to talk. Available at: https://www.reddit.com/r/announcements/comments/2fpdax/
time_to_talk/ (accessed 6 August 2015).
A Man In Black (2014) How imageboard culture shaped Gamergate. Available at: http://boingboing.net/2014/12/31/how-imageboard-culture-shaped.html (accessed 6 August 2015).
atticus138 (2014) This shit is bananas. Available at: http://www.reddit.com/r/Celebs/comments/2f4woz/
this_shit_is_bananas/ (accessed 6 August 2015).
Auerbach D (2014) Does Reddit have a transparency problem? Available at: http://www.slate.
com/articles/technology/technology/2014/10/reddit_scandals_does_the_site_have_a_transparency_problem.html (accessed 6 August 2015).
Bernstein MS, Monroy-Hernández A, Harry D, et al. (2011) 4chan and /b/: an analysis of anonymity and ephemerality in a large online community. In: Proceedings of the ICWSM 2011,
Barcelona, Spain, 17–21 July 2011, pp. 50–57. Menlo Park, CA: AAAI Press.
Bourdieu P (1977) Outline of a Theory of Practice. Cambridge: Cambridge University Press.
boyd d (2011) Social network sites as networked publics: affordances, dynamics, and implications.
In: Papacharissi Z (ed.) Networked Self: Identity, Community, and Culture on Social Network
Sites. New York: Routledge, pp. 39–58.
Bruns A (2008) Blogs, Wikipedia, Second Life, and Beyond: From Production to Produsage. New
York: Peter Lang.
Bucher T (2012) Want to be on the top? Algorithmic power and the threat of invisibility on
Facebook. New Media & Society 14: 1164–1180.
Citron DK (2014) Hate Crimes in Cyberspace. Cambridge, MA: Harvard University Press.
Coleman EG (2013) Coding Freedom: The Ethics and Aesthetics of Hacking. Princeton, NJ:
Princeton University Press.
Connell RW and Messerschmidt JW (2005) Hegemonic masculinity: rethinking the concept.
Gender & Society 19: 829–859.
Crawford K and Gillespie T (2014) What is a flag for? Social media reporting tools and the
vocabulary of a complaint. New Media & Society. Epub ahead of print 15 July. DOI:
10.1177/1461444814543163.
cupcake1713 (2014) Latest Zoe Quinn drama explodes. Spiritual successors takes on the job of
undertaker and ferryman across the Styx to /r/Shadowban (comment). Available at: http://
www.reddit.com/r/SubredditDrama/comments/2ecvrb/latest_zoe_quinn_drama_explodes/
cjypzmn (accessed 6 August 2015).
Fan CT (2014) Not all nerds. Available at: http://thenewinquiry.com/essays/not-all-nerds/
(accessed 6 August 2015).
Gillespie T (2010) The politics of “platforms.” New Media & Society 12: 347–364.
Greenberg A (2014) Hacked celeb pics made Reddit enough cash to run its servers for a month.
Available at: http://www.wired.com/2014/09/celeb-pics-reddit-gold/ (accessed 6 August
2015).
Herring SC and Stoerger S (2014) Gender and (a)nonymity in computer-mediated communication.
In: Ehrlich S, Meyerhoff M and Holmes J (eds) The Handbook of Language, Gender, and
Sexuality. 2nd ed. Hoboken, NJ: John Wiley and Sons, pp. 567–586.
jeffzem (2014) How is a comment’s karma calculated? Available at: http://www.reddit.com/r/
TheoryOfReddit/comments/2ikhvu/how_is_a_comments_karma_calculated/ (accessed 6
August 2015).
Kendall L (2011) “White and nerdy”: computers, race, and the nerd stereotype. Journal of Popular
Culture 44: 505–524.
kn0thing (2015) Promote ideas, protect people (comment). Available at: http://www.reddit.com/r/
blog/comments/35ym8t/promote_ideas_protect_people/cr917vo (accessed 6 August 2015).
Latour B (1992) Where are the missing masses? The sociology of a few mundane artifacts. In:
Bijker WE and Law J (eds) Shaping Technology/Building Society. Cambridge, MA: MIT
Press, pp. 225–257.
Latour B (2005) Reassembling the Social: An Introduction to Actor-Network Theory. Oxford:
Oxford University Press.
lleti (2015) 100,000 people have now signed the Change.org petition, requesting that Ellen “from
my cold, dead hands” Pao step down as CEO of Reddit Inc. Available at: https://www.
reddit.com/r/KotakuInAction/comments/3c4tvh/100000_people_have_now_signed_the_
changeorg/ (accessed 6 August 2015).
Lynch B and Swearingen C (2015) Why we shut down Reddit’s “Ask Me Anything” forum.
Available at: http://www.nytimes.com/2015/07/08/opinion/why-we-shut-down-reddits-ask-me-anything-forum.html (accessed 6 August 2015).
Massanari AL (2015) Participatory Culture, Community, and Play: Learning from Reddit. New
York: Peter Lang.
Milner RM (2013) FCJ-156 Hacking the social: Internet memes, identity antagonism, and the logic
of lulz. The Fibreculture Journal, issue 22. Available at: http://twentytwo.fibreculturejournal.
org/fcj-156-hacking-the-social-internet-memes-identity-antagonism-and-the-logic-of-lulz/
(accessed 6 August 2015).
Muchnik L, Aral S and Taylor SJ (2013) Social influence bias: a randomized experiment. Science
341: 647–651.
Pao E (2015) Former Reddit CEO Ellen Pao: the trolls are winning the battle for the Internet.
Available at: https://www.washingtonpost.com/opinions/we-cannot-let-the-internet-trolls-win/2015/07/16/91b1a2d2-2b17-11e5-bd33-395c05608059_story.html (accessed 6 August 2015).
Papacharissi Z (2015) Affective Publics: Sentiment, Technology, and Politics. New York: Oxford
University Press.
Penny L (2014) On nerd entitlement. Available at: http://www.newstatesman.com/laurie-penny/
on-nerd-entitlement-rebel-alliance-empire (accessed 6 August 2015).
Peterson AH (2014) Jennifer Lawrence and the history of cool girls. Available at: http://www.
buzzfeed.com/annehelenpetersen/jennifer-lawrence-and-the-history-of-cool-girls (accessed
6 August 2015).
Pfaffenberger B (1996) “If I want it, it’s ok”: usenet and the (outer) limits of free speech. The
Information Society 12: 365–386.
Reagle J (2012) “Free as in sexist?” Free culture and the gender gap. First Monday 18(1). Available
at: http://firstmonday.org/article/view/4291/3381
reddit.com (2014) Rules of Reddit. Available at: http://www.reddit.com/rules/ (accessed 6 August
2015).
RobKhonsu (2015) Promote ideas, protect people (comment). Available at: http://www.reddit.
com/r/blog/comments/35ym8t/promote_ideas_protect_people/cr967zt?context=1 (accessed
6 August 2015).
Salihefendic A (2010) How Reddit ranking algorithms work. Available at: http://amix.dk/blog/
post/19588 (accessed 6 August 2015).
sodypop (2014) What’s that, Lassie? The old defaults fell down a well? (comment). Available at:
http://www.reddit.com/r/blog/comments/24yqep/whats_that_lassie_the_old_defaults_fell_
down_a/chbyaqq (accessed 6 August 2015).
spez (2015) Let’s talk content. AMA. Available at: https://www.reddit.com/r/announcements/
comments/3djjxw/lets_talk_content_ama/ (accessed 6 August 2015).
Stuart K (2014) Zoe Quinn: “All Gamergate has done is ruin people’s lives.” Available at: http://
www.theguardian.com/technology/2014/dec/03/zoe-quinn-gamergate-interview (accessed 6
August 2015).
Terranova T (2003) Free labor: producing culture for the digital economy. Available at: http://
www.electronicbookreview.com/thread/technocapitalism/voluntary (accessed 6 August
2015).
Turner F (2006) From Counterculture to Cyberculture: Stewart Brand, the Whole Earth Network,
and the Rise of Digital Utopianism. Chicago, IL: University of Chicago Press.
UnholyDemigod (2014) The Fappening. Available at: http://www.reddit.com/r/MuseumOfReddit/
comments/2pclqw/the_fappening/ (accessed 6 August 2015).
Van Dijck J (2013) The Culture of Connectivity: A Critical History of Social Media. Oxford:
Oxford University Press.
Vanity Fair (2014) Cover exclusive: Jennifer Lawrence calls photo hacking a “Sex Crime.” Available
at: http://www.vanityfair.com/vf-hollywood/2014/10/jennifer-lawrence-cover (accessed 6 August
2015).
Varma R (2007) Women in computing: the role of geek culture. Science as Culture 16: 359–376.
yishan (2014) Every man is responsible for his own soul. Available at: http://www.redditblog.
com/2014/09/every-man-is-responsible-for-his-own.html (accessed 6 August 2015).
Author biography
Adrienne Massanari is an Assistant Professor in the Department of Communication at the University
of Illinois at Chicago.
Safiya Umoja Noble, Algorithms of Oppression: How Search Engines Reinforce Racism,
New York University Press, 2018.
Introduction
The Power of Algorithms
This book is about the power of algorithms in the age of neoliberalism and the ways those digital decisions reinforce oppressive social relationships and enact new modes of racial profiling, which I have termed technological redlining. By making visible the ways that capital, race, and gender are factors in creating unequal conditions, I am bringing light to various forms of technological redlining that are on the rise. The near-ubiquitous use of algorithmically driven software, both visible and invisible to everyday people, demands a closer inspection of what values are prioritized in such automated decision-making systems. Typically, the practice of redlining has been most often used in real estate and banking circles, creating and deepening inequalities by race, such that, for example, people of color are more likely to pay higher interest rates or premiums just because they are Black or Latino, especially if they live in low-income neighborhoods. On the Internet and in our everyday uses of technology, discrimination is also embedded in computer code and, increasingly, in artificial intelligence technologies that we are reliant on, by choice or not. I believe that artificial intelligence will become a major human rights issue in the twenty-first century. We are only beginning to understand the long-term consequences of these decision-making tools in both masking and deepening social inequality. This book is just the start of trying to make these consequences visible. There will be many more, by myself and others, who will try to make sense of the consequences of automated decision making through algorithms in society.

Part of the challenge of understanding algorithmic oppression is to understand that mathematical formulations to drive automated decisions are made by human beings. While we often think of terms such as "big data" and "algorithms" as being benign, neutral, or objective, they are anything but. The people who make these decisions hold all types of
values, many of which openly promote racism, sexism, and false notions
of meritocracy, which is well documented in studies of Silicon Valley
and other tech corridors.
For example, in the midst of a federal investigation of Google’s alleged
persistent wage gap, where women are systematically paid less than men
in the company’s workforce, an “antidiversity” manifesto authored by
James Damore went viral in August 2017,1 supported by many Google
employees, arguing that women are psychologically inferior and incapable of being as good at software engineering as men, among other
patently false and sexist assertions. As this book was moving into press,
many Google executives and employees were actively rebuking the assertions of this engineer, who reportedly works on Google search infrastructure. Legal cases have been filed, boycotts of Google from the
political far right in the United States have been invoked, and calls for
greater expressed commitments to gender and racial equity at Google
and in Silicon Valley writ large are under way. What this antidiversity
screed has underscored for me as I write this book is that some of the
very people who are developing search algorithms and architecture are
willing to promote sexist and racist attitudes openly at work and beyond,
while we are supposed to believe that these same employees are developing “neutral” or “objective” decision-making tools. Human beings are
developing the digital platforms we use, and as I present evidence of the
recklessness and lack of regard that is often shown to women and people
of color in some of the output of these systems, it will become increasingly difficult for technology companies to separate their systematic and
inequitable employment practices, and the far-right ideological bents of
some of their employees, from the products they make for the public.
My goal in this book is to further an exploration into some of these
digital sense-making processes and how they have come to be so fundamental to the classification and organization of information and at
what cost. As a result, this book is largely concerned with examining the
commercial co-optation of Black identities, experiences, and communities in the largest and most powerful technology companies to date,
namely, Google. I closely read a few distinct cases of algorithmic oppression for the depth of their social meaning to raise a public discussion of the broader implications of how privately managed, black-boxed
information-sorting tools have become essential to many data-driven
decisions. I want us to have broader public conversations about the implications of the artificial intelligentsia for people who are already systematically marginalized and oppressed. I will also provide evidence and
argue, ultimately, that large technology monopolies such as Google need
to be broken up and regulated, because their consolidated power and
cultural influence make competition largely impossible. This monopoly
in the information sector is a threat to democracy, as is currently coming to the fore as we make sense of information flows through digital
media such as Google and Facebook in the wake of the 2016 United
States presidential election.
I situate my work against the backdrop of a twelve-year professional
career in multicultural marketing and advertising, where I was invested
in building corporate brands and selling products to African Americans
and Latinos (before I became a university professor). Back then, I believed, like many urban marketing professionals, that companies must
pay attention to the needs of people of color and demonstrate respect
for consumers by offering services to communities of color, just as is
done for most everyone else. After all, to be responsive and responsible
to marginalized consumers was to create more market opportunity. I
spent an equal amount of time doing risk management and public relations to insulate companies from any adverse risk to sales that they
might experience from inadvertent or deliberate snubs to consumers of
color who might perceive a brand as racist or insensitive. Protecting my
former clients from enacting racial and gender insensitivity and helping
them bolster their brands by creating deep emotional and psychological attachments to their products among communities of color was my
professional concern for many years, which made an experience I had
in fall 2010 deeply impactful. In just a few minutes while searching on
the web, I experienced the perfect storm of insult and injury that I could
not turn away from. While Googling things on the Internet that might
be interesting to my stepdaughter and nieces, I was overtaken by the
results. My search on the keywords “black girls” yielded HotBlackPussy.
com as the first hit.
Hit indeed.
Since that time, I have spent innumerable hours teaching and researching all the ways in which it could be that Google could completely
fail when it came to providing reliable or credible information about
Figure I.1. First search result on keywords “black girls,” September 2011.
women and people of color yet experience seemingly no repercussions
whatsoever. Two years after this incident, I collected searches again, only
to find similar results, as documented in figure I.1.
In 2012, I wrote an article for Bitch magazine about how women and
feminism are marginalized in search results. By August 2012, Panda (an
update to Google’s search algorithm) had been released, and pornography was no longer the first series of results for “black girls”; but other
girls and women of color, such as Latinas and Asians, were still pornified. By August of that year, the algorithm changed, and porn was suppressed in the case of a search on “black girls.” I often wonder what kind
of pressures account for the changing of search results over time. It is
impossible to know when and what influences proprietary algorithmic
design, other than that human beings are designing them and that they
are not up for public discussion, except as we engage in critique and
protest.
This book was born to highlight cases of such algorithmically driven
data failures that are specific to people of color and women and to underscore the structural ways that racism and sexism are fundamental
to what I have coined algorithmic oppression. I am writing in the spirit
of other critical women of color, such as Latoya Peterson, cofounder of
the blog Racialicious, who has opined that racism is the fundamental
application program interface (API) of the Internet. Peterson has argued that anti-Blackness is the foundation on which all racism toward
other groups is predicated. Racism is a standard protocol for organizing behavior on the web. As she has said, so perfectly, “The idea of a
n*gger API makes me think of a racism API, which is one of our core
arguments all along—oppression operates in the same formats, runs the
same scripts over and over. It is tweaked to be context specific, but it’s
all the same source code. And the key to its undoing is recognizing how
many of us are ensnared in these same basic patterns and modifying our
own actions.”2 Peterson’s allegation is consistent with what many people
feel about the hostility of the web toward people of color, particularly
in its anti-Blackness, which any perusal of YouTube comments or other
message boards will serve up. On one level, the everyday racism and
commentary on the web is an abhorrent thing in itself, which has been
detailed by others; but it is entirely different with the corporate platform
vis-à-vis an algorithmically crafted web search that offers up racism and
sexism as the first results. This process reflects a corporate logic of either
willful neglect or a profit imperative that makes money from racism and
sexism. This inquiry is the basis of this book.
In the following pages, I discuss how “hot,” “sugary,” or any other
kind of “black pussy” can surface as the primary representation of Black
girls and women on the first page of a Google search, and I suggest that
something other than the best, most credible, or most reliable information output is driving Google. Of course, Google Search is an advertising
company, not a reliable information company. At the very least, we must
ask when we find these kinds of results, Is this the best information?
For whom? We must ask ourselves who the intended audience is for a
variety of things we find, and question the legitimacy of being in a “filter
bubble,”3 when we do not want racism and sexism, yet they still find
their way to us. The implications of algorithmic decision making of this
sort extend to other types of queries in Google and other digital media
platforms, and they are the beginning of a much-needed reassessment
of information as a public good. We need a full-on reevaluation of the
implications of our information resources being governed by corporate-controlled advertising companies. I am adding my voice to a number
of scholars such as Helen Nissenbaum and Lucas Introna, Siva Vaidhyanathan, Alex Halavais, Christian Fuchs, Frank Pasquale, Kate Crawford, Tarleton Gillespie, Sarah T. Roberts, Jaron Lanier, and Elad Segev,
to name a few, who are raising critiques of Google and other forms of
corporate information control (including artificial intelligence) in hopes
that more people will consider alternatives.
Over the years, I have concentrated my research on unveiling the
many ways that African American people have been contained and
constrained in classification systems, from Google’s commercial search
engine to library databases. The development of this concentration was
born of my research training in library and information science. I think
of these issues through the lenses of critical information studies and critical race and gender studies. As marketing and advertising have directly
shaped the ways that marginalized people have come to be represented
by digital records such as search results or social network activities, I
have studied why it is that digital media platforms are resoundingly
characterized as “neutral technologies” in the public domain and often,
unfortunately, in academia. Stories of “glitches” found in systems do not
suggest that the organizing logics of the web could be broken but, rather,
that these are occasional one-off moments when something goes terribly
wrong with near-perfect systems. With the exception of the many scholars whom I reference throughout this work and the journalists, bloggers, and whistleblowers whom I will be remiss in not naming, very few
people are taking notice. We need all the voices to come to the fore and
impact public policy on the most unregulated social experiment of our
times: the Internet.
These data aberrations have come to light in various forms. In 2015,
U.S. News and World Report reported that a “glitch” in Google’s algorithm led to a number of problems through auto-tagging and facial-recognition software that was apparently intended to help people search
through images more successfully. The first problem for Google was that
its photo application had automatically tagged African Americans as
“apes” and “animals.”4 The second major issue reported by the Post was
that Google Maps searches on the word “N*gger”5 led to a map of the
White House during Obama’s presidency, a story that went viral on the
Internet after the social media personality Deray McKesson tweeted it.
These incidents were consistent with the reports of Photoshopped
images of a monkey’s face on the image of First Lady Michelle Obama
that were circulating through Google Images search in 2009. In 2015,
you could still find digital traces of the Google autosuggestions that associated Michelle Obama with apes. Protests from the White House led
to Google forcing the image down the image stack, from the first page,
so that it was not as visible.6 In each case, Google’s position is that it
is not responsible for its algorithm and that problems with the results
would be quickly resolved. In the Washington Post article about “N*gger
House,” the response was consistent with other apologies by the company: “‘Some inappropriate results are surfacing in Google Maps that
should not be, and we apologize for any offense this may have caused,’
Figure I.2. Google Images results for the keyword “gorillas,” April 7, 2016.
Figure I.3. Google Maps search on “N*gga House” leads to the White House,
April 7, 2016.
Figure I.4. Tweet by Deray McKesson about Google Maps search and the White
House, 2015.
Figure I.5. Google’s standard “related” searches associate “Michelle Obama” with the
term “ape.”
a Google spokesperson told U.S. News in an email late Tuesday. ‘Our
teams are working to fix this issue quickly.’”7
***
These human and machine errors are not without consequence, and
there are several cases that demonstrate how racism and sexism are
part of the architecture and language of technology, an issue that needs
attention and remediation. In many ways, these cases that I present are
specific to the lives and experiences of Black women and girls, people
largely understudied by scholars, who remain ever precarious, despite
our living in the age of Oprah and Beyoncé in Shondaland. The implications of such marginalization are profound. The insights about sexist
or racist biases that I convey here are important because information
organizations, from libraries to schools and universities to governmental
agencies, are increasingly reliant on or being displaced by a variety of
web-based “tools” as if there are no political, social, or economic consequences of doing so. We need to imagine new possibilities in the area of
information access and knowledge generation, particularly as headlines
about “racist algorithms” continue to surface in the media with limited
discussion and analysis beyond the superficial.
Inevitably, a book written about algorithms or Google in the twenty-first century is out of date immediately upon printing. Technology is
changing rapidly, as are technology company configurations via mergers, acquisitions, and dissolutions. Scholars working in the fields of
information, communication, and technology struggle to write about
specific moments in time, in an effort to crystallize a process or a phenomenon that may shift or morph into something else soon thereafter.
As a scholar of information and power, I am most interested in communicating a series of processes that have happened, which provide
evidence of a constellation of concerns that the public might take up
as meaningful and important, particularly as technology impacts social
relations and creates unintended consequences that deserve greater attention. I have been writing this book for several years, and over time,
Google’s algorithms have admittedly changed, such that a search for
“black girls” does not yield nearly as many pornographic results now
as it did in 2011. Nonetheless, new instances of racism and sexism keep
appearing in news and social media, and so I use a variety of these cases
to make the point that algorithmic oppression is not just a glitch in the
system but, rather, is fundamental to the operating system of the web.
It has direct impact on users and on our lives beyond using Internet
applications. While I have spent considerable time researching Google,
this book tackles a few cases of other algorithmically driven platforms to
illustrate how algorithms are serving up deleterious information about
people, creating and normalizing structural and systemic isolation, or
practicing digital redlining, all of which reinforce oppressive social and
economic relations.
While organizing this book, I have wanted to emphasize one main
point: there is a missing social and human context in some types of
algorithmically driven decision making, and this matters for everyone engaging with these types of technologies in everyday life. It is of
particular concern for marginalized groups, those who are problematically represented in erroneous, stereotypical, or even pornographic
ways in search engines and who have also struggled for nonstereotypical or nonracist and nonsexist depictions in the media and in libraries.
There is a deep body of extant research on the harmful effects of stereotyping of women and people of color in the media, and I encourage
readers of this book who do not understand why the perpetuation of
racist and sexist images in society is problematic to consider a deeper
dive into such scholarship.
This book is organized into six chapters. In chapter 1, I explore the
important theme of corporate control over public information, and I
show several key Google searches. I look to see what kinds of results
Google’s search engine provides about various concepts, and I offer a
cautionary discussion of the implications of what these results mean in
historical and social contexts. I also show what Google Images offers on
basic concepts such as “beauty” and various professional identities and
why we should care.
In chapter 2, I discuss how Google Search reinforces stereotypes, illustrated by searches on a variety of identities that include “black girls,”
“Latinas,” and “Asian girls.” Previously, in my work published in the
Black Scholar,8 I looked at the postmortem Google autosuggest searches
following the death of Trayvon Martin, an African American teenager
whose murder ignited the #BlackLivesMatter movement on Twitter
and brought attention to the hundreds of African American children,
women, and men killed by police or extrajudicial law enforcement. To
add a fuller discussion to that research, I elucidate the processes involved
in Google’s PageRank search protocols, which range from leveraging
digital footprints from people9 to the way advertising and marketing
interests influence search results to how beneficial this is to the interests
of Google as it profits from racism and sexism, particularly at the height
of a media spectacle.
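For readers unfamiliar with the mechanics the chapter elucidates: the published core of PageRank (Brin and Page, 1998) ranks a page by the rank of the pages linking to it, computed by iterating until the scores stabilize. The sketch below is that textbook computation only, an illustration with a made-up four-page graph; it is emphatically not Google's production system, which, as the text argues, layers advertising, personalization, and other proprietary signals on top.

```python
# Textbook PageRank power iteration on a tiny, hypothetical link graph.
# Illustrative only: real search ranking blends many undisclosed signals.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start with uniform rank
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:                     # dangling page: spread evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:                                # share rank among outlinks
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Hypothetical four-page web in which every other page links to "hub".
graph = {
    "a": ["hub"],
    "b": ["hub", "a"],
    "c": ["hub"],
    "hub": ["a"],
}
ranks = pagerank(graph)
print(max(ranks, key=ranks.get))  # prints "hub": the most-linked page wins
```

The point of the sketch is the book's point in miniature: which page "wins" is entirely a function of design decisions (the damping factor, the link structure counted) made by people, not a neutral measure of quality.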
In chapter 3, I examine the importance of noncommercial search engines and information portals, specifically looking at the case of how a
mass shooter and avowed White supremacist, Dylann Roof, allegedly
used Google Search in the development of his racial attitudes, attitudes
that led to his murder of nine African American AME Church members
while they worshiped in their South Carolina church in the summer
of 2015. The provision of false information that purports to be credible news, and the devastating consequences that can come from this
kind of algorithmically driven information, is an example of why we
cannot afford to outsource and privatize uncurated information on the
increasingly neoliberal, privatized web. I show how important records
The Trauma Floor
The secret lives of Facebook moderators in America
By Casey Newton Feb 25, 2019, 8:00am EST
Content warning: This story contains discussion of serious mental health issues and racism.
The panic attacks started after Chloe watched a man die.
She spent the past three and a half weeks in training, trying to harden herself against the daily
onslaught of disturbing posts: the hate speech, the violent attacks, the graphic pornography. In a
few more days, she will become a full-time Facebook content moderator, or what the company
she works for, a professional services vendor named Cognizant, opaquely calls a “process
executive.”
For this portion of her education, Chloe will have to moderate a Facebook post in front of her
fellow trainees. When it’s her turn, she walks to the front of the room, where a monitor displays
a video that has been posted to the world’s largest social network. None of the trainees have seen
it before, Chloe included. She presses play.
The video depicts a man being murdered. Someone is stabbing him, dozens of times, while he
screams and begs for his life. Chloe’s job is to tell the room whether this post should be
removed. She knows that section 13 of the Facebook community standards prohibits videos that
depict the murder of one or more people. When Chloe explains this to the class, she hears her
voice shaking.
Returning to her seat, Chloe feels an overpowering urge to sob. Another trainee has gone up to
review the next post, but Chloe cannot concentrate. She leaves the room, and begins to cry so
hard that she has trouble breathing.
No one tries to comfort her. This is the job she was hired to do. And for the 1,000 people like
Chloe moderating content for Facebook at the Phoenix site, and for 15,000 content reviewers
around the world, today is just another day at the office.
Over the past three months, I interviewed a dozen current and former employees of Cognizant in
Phoenix. All had signed non-disclosure agreements with Cognizant in which they pledged not to
discuss their work for Facebook — or even acknowledge that Facebook is Cognizant’s client.
The shroud of secrecy is meant to protect employees from users who may be angry about a
content moderation decision and seek to resolve it with a known Facebook contractor. The
NDAs are also meant to prevent contractors from sharing Facebook users’ personal information
with the outside world, at a time of intense scrutiny over data privacy issues.
But the secrecy also insulates Cognizant and Facebook from criticism about their working
conditions, moderators told me. They are pressured not to discuss the emotional toll that their job
takes on them, even with loved ones, leading to increased feelings of isolation and anxiety. To
protect them from potential retaliation, both from their employers and from Facebook users, I
agreed to use pseudonyms for everyone named in this story except Cognizant’s vice president of
operations for business process services, Bob Duncan, and Facebook’s director of global partner
vendor management, Mark Davidson.
Collectively, the employees described a workplace that is perpetually teetering on the brink of
chaos. It is an environment where workers cope by telling dark jokes about committing suicide,
then smoke weed during breaks to numb their emotions. It’s a place where employees can be
fired for making just a few errors a week — and where those who remain live in fear of the
former colleagues who return seeking vengeance.
It’s a place where, in stark contrast to the perks lavished on Facebook employees, team leaders
micromanage content moderators’ every bathroom and prayer break; where employees,
desperate for a dopamine rush amid the misery, have been found having sex inside stairwells and
a room reserved for lactating mothers; where people develop severe anxiety while still in
training, and continue to struggle with trauma symptoms long after they leave; and where the
counseling that Cognizant offers them ends the moment they quit — or are simply let go.
The moderators told me it’s a place where the conspiracy videos and memes that they see each
day gradually lead them to embrace fringe views. One auditor walks the floor promoting the idea
that the Earth is flat. A former employee told me he has begun to question certain aspects of the
Holocaust. Another former employee, who told me he has mapped every escape route out of his
house and sleeps with a gun at his side, said: “I no longer believe 9/11 was a terrorist attack.”
Chloe cries for a while in the break room, and then in the bathroom, but begins to worry that she
is missing too much training. She had been frantic for a job when she applied, as a recent college
graduate with no other immediate prospects. When she becomes a full-time moderator, Chloe
will make $15 an hour — $4 more than the minimum wage in Arizona, where she lives, and
better than she can expect from most retail jobs.
The tears eventually stop coming, and her breathing returns to normal. When she goes back to
the training room, one of her peers is discussing another violent video. She sees that a drone is
shooting people from the air. Chloe watches the bodies go limp as they die.
She leaves the room again.
Eventually a supervisor finds her in the bathroom, and offers a weak hug. Cognizant makes a
counselor available to employees, but only for part of the day, and he has yet to get to work.
Chloe waits for him for the better part of an hour.
When the counselor sees her, he explains that she has had a panic attack. He tells her that, when
she graduates, she will have more control over the Facebook videos than she had in the training
room. You will be able to pause the video, he tells her, or watch it without audio. Focus on your
breathing, he says. Make sure you don’t get too caught up in what you’re watching.
“He said not to worry — that I could probably still do the job,” Chloe says. Then she catches
herself: “His concern was: don’t worry, you can do the job.”
On May 3, 2017, Mark Zuckerberg announced the expansion of Facebook’s “community
operations” team. The new employees, who would be added to 4,500 existing moderators, would
be responsible for reviewing every piece of content reported for violating the company’s
community standards. By the end of 2018, in response to criticism of the prevalence of violent
and exploitative content on the social network, Facebook had more than 30,000 employees
working on safety and security — about half of whom were content moderators.
The moderators include some full-time employees, but Facebook relies heavily on contract labor
to do the job. Ellen Silver, Facebook’s vice president of operations, said in a blog post last year
that the use of contract labor allowed Facebook to “scale globally” — to have content
moderators working around the clock, evaluating posts in more than 50 languages, at more than
20 sites around the world.
The use of contract labor also has a practical benefit for Facebook: it is radically cheaper. The
median Facebook employee earns $240,000 annually in salary, bonuses, and stock options. A
content moderator working for Cognizant in Arizona, on the other hand, will earn just $28,800
per year. The arrangement helps Facebook maintain a high profit margin. In its most recent
quarter, the company earned $6.9 billion in profits, on $16.9 billion in revenue. And while
Zuckerberg had warned investors that Facebook’s investment in security would reduce the
company’s profitability, profits were up 61 percent over the previous year.
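The figures in the paragraph above hang together arithmetically. A quick back-of-the-envelope check, using only the numbers reported in the article (the 48-week working year is my assumption, introduced solely to reconcile the $15 hourly rate with the $28,800 annual figure):

```python
# Sanity-checking the compensation and profit figures cited in the article.
hourly_wage = 15            # Cognizant moderator pay in Phoenix, per the text
weeks_worked = 48           # ASSUMED: a 40-hour week over ~48 paid weeks
moderator_annual = hourly_wage * 40 * weeks_worked
print(moderator_annual)     # 28800, matching the article's annual figure

median_facebook = 240_000   # median Facebook employee compensation, per the text
print(round(median_facebook / moderator_annual, 1))  # 8.3x pay gap

# Quarterly results cited: $6.9B profit on $16.9B revenue
print(round(6.9 / 16.9 * 100, 1))  # 40.8 percent profit margin
```

So a moderator earns roughly an eighth of the median Facebook employee's compensation, while the platform they police runs at a roughly 41 percent quarterly margin.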
Since 2014, when Adrian Chen detailed the harsh working conditions for content moderators at
social networks for Wired, Facebook has been sensitive to the criticism that it is traumatizing
some of its lowest-paid workers. In her blog post, Silver said that Facebook assesses potential
moderators’ “ability to deal with violent imagery,” screening them for their coping skills.
Bob Duncan, who oversees Cognizant’s content moderation operations in North America, says
recruiters carefully explain the graphic nature of the job to applicants. “We share examples of the
kinds of things you can see … so that they have an understanding,” he says. “The intention of all
that is to ensure people understand it. And if they don’t feel that work is potentially suited for
them based on their situation, they can make those decisions as appropriate.”
Until recently, most Facebook content moderation has been done outside the United States. But
as Facebook’s demand for labor has grown, it has expanded its domestic operations to include
sites in California, Arizona, Texas, and Florida.
The United States is the company’s home and one of the countries in which it is most popular,
says Facebook’s Davidson. American moderators are more likely to have the cultural context
necessary to evaluate U.S. content that may involve bullying and hate speech, which often
involve country-specific slang, he says.
Facebook also worked to build what Davidson calls “state-of-the-art facilities, so they replicated
a Facebook office and had that Facebook look and feel to them. That was important because
there’s also a perception out there in the market sometimes … that our people sit in very dark,
dingy basements, lit only by a green screen. That’s really not the case.”
It is true that Cognizant’s Phoenix location is neither dark nor dingy. And to the extent that it
offers employees desks with computers on them, it may faintly resemble other Facebook offices.
But while employees at Facebook’s Menlo Park headquarters work in an airy, sunlit complex
designed by Frank Gehry, its contractors in Arizona labor in an often cramped space where long
lines for the few available bathroom stalls can take up most of employees’ limited break time.
And while Facebook employees enjoy a wide degree of freedom in how they manage their days,
Cognizant workers’ time is managed down to the second.
A content moderator named Miguel arrives for the day shift just before it begins, at 7 a.m. He’s
one of about 300 workers who will eventually filter into the workplace, which occupies two
floors in a Phoenix office park.
Security personnel keep watch over the entrance, on the lookout for disgruntled ex-employees
and Facebook users who might confront moderators over removed posts. Miguel badges in to the
office and heads to the lockers. There are barely enough lockers to go around, so some
employees have taken to keeping items in them overnight to ensure they will have one the next
day.
The lockers occupy a narrow hallway that, during breaks, becomes choked with people. To
protect the privacy of the Facebook users whose posts they review, workers are required to store
their phones in lockers while they work.
Writing utensils and paper are also not allowed, in case Miguel might be tempted to write down a
Facebook user’s personal information. This policy extends to small paper scraps, such as gum
wrappers. Smaller items, like hand lotion, are required to be placed in clear plastic bags so they
are always visible to managers.
To accommodate four daily shifts — and high employee turnover — most people will not be
assigned a permanent desk on what Cognizant calls “the production floor.” Instead, Miguel finds
an open workstation and logs in to a piece of software known as the Single Review Tool, or
SRT. When he is ready to work, he clicks a button labeled “resume reviewing,” and dives into
the queue of posts.
Last April, a year after many of the documents had been published in the Guardian, Facebook
made public the community standards by which it attempts to govern its 2.3 billion monthly
users. In the months afterward, Motherboard and Radiolab published detailed investigations into
the challenges of moderating such a vast amount of speech.
Those challenges include the sheer volume of posts; the need to train a global army of low-paid
workers to consistently apply a single set of rules; near-daily changes and clarifications to those
rules; a lack of cultural or political context on the part of the moderators; missing context in
posts that makes their meaning ambiguous; and frequent disagreements among moderators about
whether the rules should apply in individual cases.
Despite the high degree of difficulty in applying such a policy, Facebook has instructed
Cognizant and its other contractors to emphasize a metric called “accuracy” over all else.
Accuracy, in this case, means that when Facebook audits a subset of contractors’ decisions, its
full-time employees agree with the contractors. The company has set an accuracy target of 95
percent, a number that always seems just out of reach. Cognizant has never hit the target for a
sustained period of time — it usually floats in the high 80s or low 90s, and was hovering around
92 at press time.
Miguel diligently applies the policy — even though, he tells me, it often makes no sense to him.
A post calling someone “my favorite n-----” is allowed to stay up, because under the policy it is
considered “explicitly positive content.”
“Autistic people should be sterilized” seems offensive to him, but it stays up as well. Autism is
not a “protected characteristic” the way race and gender are, and so it doesn’t violate the policy.
(“Men should be sterilized” would be taken down.)
In January, Facebook distributes a policy update stating that moderators should take into account
recent romantic upheaval when evaluating posts that express hatred toward a gender. “I hate all
men” has always …