Module 2: TED Talk "We're Building a Dystopia Just to Make People Click on Ads" Analysis






  • Watch the video and read the transcript before answering the analysis questions on the video.
  • Take notes on the relevant content in the TED Talk.
  • Choose 2 analysis questions you would like to respond to on the discussion board.


  • Watch the video and take notes in a column format, since you will also be taking notes on the article "Post Modern Consumerism and the Construction of the Self" and the TED Talk "We're Building a Dystopia Just to Make People Click on Ads." These are two sources on similar topics.
  • Compare both sources, identify similarities or differences, and settle on your objective: whether you want to agree or disagree with the author's/speaker's purpose.

3. We're Building a Dystopia: Transcript

Read and take notes on the transcript.

  • Highlight unfamiliar vocabulary and paraphrase content.
  • Include 4 of the unfamiliar words and your paraphrase(s) in your summary/response essay.

4. We're Building a Dystopia: Analysis Questions

  • Watch the video, read the transcript of the TED Talk, and respond to the analysis questions.
  • Choose 2 analysis questions to respond to on the discussion board.
  • Generate/ask 1 discussion question based on your understanding of the TED Talk in a post on the discussion board.

5. Sample Summary & Response with Analysis Questions

  • Read through the student sample.
  • Scroll down to the end of the essay and answer analysis questions.
  • Remember to annotate the sample by underlining and bracketing important details.

View the lecture and focus on the guidelines for writing a summary and response. Each section will be 175 words, for a total of 350 words. You can format the response with more than one paragraph when addressing the varying points; the summary, however, has to be one paragraph. Remember to include the author's last name and the title of the article, or the title of the TED Talk, in quotation marks. Even if you paraphrase, citation is not required for this assignment.
Attached is the summary response essay rubric indicating the required criteria and evaluation standards for the assignment. You can review it before you write your essay and then use it to gauge expectations when you proofread your essay.


We're Building a Dystopia Just to Make People Click on Ads

So, when people voice fears of artificial intelligence, very often, they invoke images of humanoid robots run amok. You know? Terminator? You know, that might be something to consider, but that's a distant threat. Or, we fret about digital surveillance with metaphors from the past. "1984," George Orwell's "1984," it's hitting the bestseller lists again. It's a great book, but it's not the correct dystopia for the 21st century. What we need to fear most is not what artificial intelligence will do to us on its own, but how the people in power will use artificial intelligence to control us and to manipulate us in novel, sometimes hidden, subtle and unexpected ways. Much of the technology that threatens our freedom and our dignity in the near-term future is being developed by companies in the business of capturing and selling our data and our attention to advertisers and others: Facebook, Google, Amazon, Alibaba, Tencent.

01:18 Now, artificial intelligence has started bolstering their business as well. And it may seem like artificial intelligence is just the next thing after online ads. It's not. It's a jump in category. It's a whole different world, and it has great potential. It could accelerate our understanding of many areas of study and research. But to paraphrase a famous Hollywood philosopher, "With prodigious potential comes prodigious risk."

01:53 Now let's look at a basic fact of our digital lives, online ads. Right? We kind of dismiss them. They seem crude, ineffective. We've all had the experience of being followed on the web by an ad based on something we searched or read. You know, you look up a pair of boots and for a week, those boots are following you around everywhere you go. Even after you succumb and buy them, they're still following you around. We're kind of inured to that kind of basic, cheap manipulation. We roll our eyes and we think, "You know what? These things don't work."
Except, online, the digital technologies are not just ads. Now, to understand that, let's think of a physical world example. You know how, at the checkout counters at supermarkets, near the cashier, there's candy and gum at the eye level of kids? That's designed to make them whine at their parents just as the parents are about to sort of check out. Now, that's a persuasion architecture. It's not nice, but it kind of works. That's why you see it in every supermarket. Now, in the physical world, such persuasion architectures are kind of limited, because you can only put so many things by the cashier. Right? And the candy and gum, it's the same for everyone, even though it mostly works only for people who have whiny little humans beside them. In the physical world, we live with those limitations.

03:26 In the digital world, though, persuasion architectures can be built at the scale of billions and they can target, infer, understand and be deployed at individuals one by one by figuring out your weaknesses, and they can be sent to everyone's phone private screen, so it's not visible to us. And that's different. And that's just one of the basic things that artificial intelligence can do.

03:57 Now, let's take an example. Let's say you want to sell plane tickets to Vegas. Right? So in the old world, you could think of some demographics to target based on experience and what you can guess. You might try to advertise to, oh, men between the ages of 25 and 35, or people who have a high limit on their credit card, or retired couples. Right? That's what you would do in the past.

04:20 With big data and machine learning, that's not how it works anymore. So, to imagine that, think of all the data that Facebook has on you: every status update you ever typed, every Messenger conversation, every place you logged in from, all your photographs that you uploaded there. If you start typing something and change your mind and delete it, Facebook keeps those and analyzes them, too.
Increasingly, it tries to match you with your offline data. It also purchases a lot of data from data brokers. It could be everything from your financial records to a good chunk of your browsing history. Right? In the US, such data is routinely collected, collated and sold. In Europe, they have tougher rules.

05:16 So, what happens then is, by churning through all that data, these machine-learning algorithms -- that's why they're called learning algorithms -- they learn to understand the characteristics of people who purchased tickets to Vegas before. When they learn this from existing data, they also learn how to apply this to new people. So, if they're presented with a new person, they can classify whether that person is likely to buy a ticket to Vegas or not. Fine. You're thinking, an offer to buy tickets to Vegas. I can ignore that. But the problem isn't that. The problem is, we no longer really understand how these complex algorithms work. We don't understand how they're doing this categorization. It's giant matrices, thousands of rows and columns, maybe millions of rows and columns, and not the programmers and not anybody who looks at it, even if you have all the data, understands anymore how exactly it's operating any more than you'd know what I was thinking right now if you were shown a cross section of my brain. It's like we're not programming anymore, we're growing intelligence that we don't truly understand.

06:45 And these things only work if there's an enormous amount of data, so they also encourage deep surveillance on all of us so that the machine learning algorithms can work. That's why Facebook wants to collect all the data it can about you. The algorithms work better.

07:01 So, let's push that Vegas example a bit. What if the system that we do not understand was picking up that it's easier to sell Vegas tickets to people who are bipolar and about to enter the manic phase. Such people tend to become over-spenders, compulsive gamblers.
They could do this, and you'd have no clue that's what they were picking up on. I gave this example to a bunch of computer scientists once and afterwards, one of them came up to me. He was troubled and he said, "That's why I couldn't publish it." I was like, "Couldn't publish what?" He had tried to see whether you can indeed figure out the onset of mania from social media posts before clinical symptoms, and it had worked, and it had worked very well, and he had no idea how it worked or what it was picking up on.

07:59 Now, the problem isn't solved if he doesn't publish it, because there are already companies that are developing this kind of technology, and a lot of the stuff is just off the shelf. This is not very difficult anymore.

08:14 Do you ever go on YouTube meaning to watch one video and an hour later you've watched 27? You know how YouTube has this column on the right that says, "Up next" and it auto-plays something? It's an algorithm picking what it thinks that you might be interested in and maybe not find on your own. It's not a human editor. It is what algorithms do. It picks up on what you have watched and what people like you have watched, and infers that that must be what you're interested in, what you want more of, and just shows you more. It sounds like a benign and useful feature, except when it isn't.

08:54 So, in 2016, I attended rallies of then-candidate Donald Trump to study as a scholar the movement supporting him. I study social movements, so I was studying it, too. And then I wanted to write something about one of his rallies, so I watched it a few times on YouTube. YouTube started recommending to me and auto-playing to me white supremacist videos in increasing order of extremism. If I watched one, it served up one even more extreme and auto-played that one, too. If you watch Hillary Clinton or Bernie Sanders content, YouTube recommends and auto-plays conspiracy left, and it goes downhill from there.
09:44 Well, you might be thinking, this is politics, but it's not. This isn't about politics. This is just the algorithm figuring out human behavior. I once watched a video about vegetarianism on YouTube and YouTube recommended and auto-played a video about being vegan. It's like you're never hardcore enough for YouTube.

10:05 (Laughter)

10:06 So, what's going on? Now, YouTube's algorithm is proprietary, but here's what I think is going on. The algorithm has figured out that if you can entice people into thinking that you can show them something more hardcore, they're more likely to stay on the site watching video after video going down that rabbit hole while Google serves them ads. Now, with nobody minding the ethics of the store, these sites can profile people who are Jew haters, who think that Jews are parasites and who have such explicit anti-Semitic content, and let you target them with ads. They can also mobilize algorithms to find for you look-alike audiences, people who do not have such explicit anti-Semitic content on their profile but who the algorithm detects may be susceptible to such messages, and lets you target them with ads, too. Now, this may sound like an implausible example, but this is real. ProPublica investigated this and found that you can indeed do this on Facebook, and Facebook helpfully offered up suggestions on how to broaden that audience. BuzzFeed tried it for Google, and very quickly they found, yep, you can do it on Google, too. And it wasn't even expensive. The ProPublica reporter spent about 30 dollars to target this category.

11:55 So last year, Donald Trump's social media manager disclosed that they were using Facebook dark posts to demobilize people, not to persuade them, but to convince them not to vote at all. And to do that, they targeted specifically, for example, African-American men in key cities like Philadelphia, and I'm going to read exactly what he said. I'm quoting.
12:22 They were using "nonpublic posts whose viewership the campaign controls so that only the people we want to see it see it. We modeled this. It will dramatically affect her ability to turn these people out."

12:38 What's in those dark posts? We have no idea. Facebook won't tell us.

12:44 So, Facebook also algorithmically arranges the posts that your friends put on Facebook, or the pages you follow. It doesn't show you everything chronologically. It puts the order in the way that the algorithm thinks will entice you to stay on the site longer.

13:03 Now, so this has a lot of consequences. You may be thinking somebody is snubbing you on Facebook. The algorithm may never be showing your post to them. The algorithm is prioritizing some of them and burying the others.

13:21 Experiments show that what the algorithm picks to show you can affect your emotions. But that's not all. It also affects political behavior. So, in 2010, in the midterm elections, Facebook did an experiment on 61 million people in the US that was disclosed after the fact. So, some people were shown, "Today is election day," the simpler one, and some people were shown the one with that tiny tweak with those little thumbnails of your friends who clicked on "I voted." This simple tweak. OK? So, the pictures were the only change, and that post shown just once turned out an additional 340,000 voters in that election, according to this research as confirmed by the voter rolls. A fluke? No. Because in 2012, they repeated the same experiment. And that time, that civic message shown just once turned out an additional 270,000 voters. For reference, the 2016 US presidential election was decided by about 100,000 votes. Now, Facebook can also very easily infer what your politics are, even if you've never disclosed them on the site. Right? These algorithms can do that quite easily. What if a platform with that kind of power decides to turn out supporters of one candidate over the other?
How would we even know about it?

15:18 Now, we started from someplace seemingly innocuous -- online ads following us around -- and we've landed someplace else. As a public and as citizens, we no longer know if we're seeing the same information or what anybody else is seeing, and without a common basis of information, little by little, public debate is becoming impossible, and we're just at the beginning stages of this. These algorithms can quite easily infer things like your people's ethnicity, religious and political views, personality traits, intelligence, happiness, use of addictive substances, parental separation, age and gender, just from Facebook likes. These algorithms can identify protesters even if their faces are partially concealed. These algorithms may be able to detect people's sexual orientation just from their dating profile pictures.

16:26 Now, these are probabilistic guesses, so they're not going to be 100 percent right, but I don't see the powerful resisting the temptation to use these technologies just because there are some false positives, which will of course create a whole other layer of problems. Imagine what a state can do with the immense amount of data it has on its citizens. China is already using face detection technology to identify and arrest people. And here's the tragedy: we're building this infrastructure of surveillance authoritarianism merely to get people to click on ads. And this won't be Orwell's authoritarianism. This isn't "1984." Now, if authoritarianism is using overt fear to terrorize us, we'll all be scared, but we'll know it, we'll hate it and we'll resist it.
But if the people in power are using these algorithms to quietly watch us, to judge us and to nudge us, to predict and identify the troublemakers and the rebels, to deploy persuasion architectures at scale and to manipulate individuals one by one using their personal, individual weaknesses and vulnerabilities, and if they're doing it at scale through our private screens so that we don't even know what our fellow citizens and neighbors are seeing, that authoritarianism will envelop us like a spider's web and we may not even know we're in it.

18:14 So, Facebook's market capitalization is approaching half a trillion dollars. It's because it works great as a persuasion architecture. But the structure of that architecture is the same whether you're selling shoes or whether you're selling politics. The algorithms do not know the difference. The same algorithms set loose upon us to make us more pliable for ads are also organizing our political, personal and social information flows, and that's what's got to change.

18:54 Now, don't get me wrong, we use digital platforms because they provide us with great value. I use Facebook to keep in touch with friends and family around the world. I've written about how crucial social media is for social movements. I have studied how these technologies can be used to circumvent censorship around the world. But it's not that the people who run, you know, Facebook or Google are maliciously and deliberately trying to make the country or the world more polarized and encourage extremism. I read the many well-intentioned statements that these people put out. But it's not the intent or the statements people in technology make that matter, it's the structures and business models they're building. And that's the core of the problem. Either Facebook is a giant con of half a trillion dollars and ads don't work on the site, it doesn't work as a persuasion architecture, or its power of influence is of great concern. It's either one or the other.
It's similar for Google, too.

20:17 So, what can we do? This needs to change. Now, I can't offer a simple recipe, because we need to restructure the whole way our digital technology operates. Everything from the way technology is developed to the way the incentives, economic and otherwise, are built into the system. We have to face and try to deal with the lack of transparency created by the proprietary algorithms, the structural challenge of machine learning's opacity, all this indiscriminate data that's being collected about us. We have a big task in front of us. We have to mobilize our technology, our creativity and yes, our politics so that we can build artificial intelligence that supports us in our human goals but that is also constrained by our human values. And I understand this won't be easy. We might not even easily agree on what those terms mean. But if we take seriously how these systems that we depend on for so much operate, I don't see how we can postpone this conversation anymore. These structures are organizing how we function and they're controlling what we can and we cannot do. And many of these ad-financed platforms, they boast that they're free. In this context, it means that we are the product that's being sold. We need a digital economy where our data and our attention is not for sale to the highest-bidding authoritarian or demagogue.

22:15 (Applause)

22:22 So, to go back to that Hollywood paraphrase, we do want the prodigious potential of artificial intelligence and digital technology to blossom, but for that, we must face this prodigious menace, open-eyed and now.

22:40 Thank you.

22:42 (Applause)

TED Talk "We're Building a Dystopia Just to Make People Click on Ads": Analysis Questions

  • What are algorithms? How are they monitored and by whom?
  • What is the impact they have on SNS users?
  • Are the consequences greater in a broader perspective?
  • Is raising awareness about the insidious nature of ads mobilizing users? Or is this constant surveillance out of one's control?
  • What do you think the speaker means by "dystopia"? Are you familiar with "dystopian" characteristics?
  • Do you agree/disagree and why?
  • Read through the transcript and generate one discussion question from the content.

Summary Response Essay Rubric

Summary/Critical Reading
  • Needs Improvement: Captures some of the main ideas, though missing a summary of the larger argument. Most in original language. Little engagement with text and/or partial understanding of material. May interject opinion rather than neutrally summarizing the author's ideas.
  • Meets Expectations: Accurately summarizes main argument in original language; may overlook one or two key supporting points. Demonstrates engagement with and solid understanding of material.
  • Exceeds Expectations: Neutrally and accurately synthesizes the main ideas and argument of the text, along with key supporting claims, in original language. Demonstrates deep intellectual engagement with and understanding of material.

Response/Thesis
  • Needs Improvement: The thesis lacks clarity or a clear argument and engagement with text.
  • Meets Expectations: The thesis demonstrates engagement with the text and a clear, compelling argument in response to the text, regarding its effectiveness or significance, though it may be lacking in specificity or detail.
  • Exceeds Expectations: The thesis demonstrates insightful engagement with and a clear, detailed, compelling argument in response to the text, regarding its effectiveness or significance.

Organization/Development
  • Needs Improvement: Includes an introduction and conclusion, yet they contain ambiguities or irrelevant information. Body paragraphs include summary and critique, but sequencing of ideas is illogical or hard to follow. Ineffective transitions.
  • Meets Expectations: Provides an organizational structure that includes an effective introduction and conclusion. Body paragraphs include well-developed summary and critique with effective topic sentences, sequencing of ideas and transitions, though some development may be needed.
  • Exceeds Expectations: Provides an effective organizational structure that includes sophisticated introductory and conclusion paragraphs. All body paragraphs include well-developed summary and critique with strong topic sentences, effective sequencing of ideas and smooth transitions.

Grammar/Syntax
  • Needs Improvement: The essay contains patterns of sentence-level incoherence and lacks sentence variety and effective word choice. Includes many distracting errors in grammar, spelling, and punctuation.
  • Meets Expectations: Tone is rhetorically effective, and ideas are clearly articulated using precise word choice and varied sentence structures, though some sentences may include minor phrasing or word choice issues. Contains no distracting patterns of grammar, spelling and punctuation errors.
  • Exceeds Expectations: Tone is rhetorically effective, and ideas are clearly articulated using precise word choice and varied sentence structures. Grammar, spelling, and punctuation are conventionally appropriate with very few errors, none of which interferes with coherence.



Writing Module 2: Outline

I. Part 1
A. In this talk, Zeynep Tufekci argues that artificial intelligence is likely to make the world an unsafe place.
B. Artificial intelligence, on its own, cannot cause harm, but it is going to be employed by people in power to control and manipulate consumers in unexpected ways.
C. Algorithms impact social network users in that they are used to influence and manipulate them.

II. Part 2

A. In the article "Post Modern Consumerism and the Construction of the Self," the author outlines how the concept of consumerism shapes people's identity in the modern era.
B. Although the article "Post Modern Consumerism and the Construction of the Self" and the TED Talk "We're Building a Dystopia Just to Make People Click on Ads" deal with the same topic, the two sources can be compared in several ways.
C. Both sources agree that advertisements are used to influence consumers into purchasing particular items.
D. However, the two sources differ in that the article focuses on human behaviors, while the TED Talk focuses on the impacts of modern technology.

III. Part 3

A. Artificial intelligence
B. We fret about digital surveillance
C. Dystopia
D. Digital lives


IV. Question 4

A. I believe that understanding the insidious nature of ads empowers an individual and that constant surveillance is never out of an individual's control.
B. When someone is aware of how advertisements are designed to manipulate people into purchasing, they become able to control their influence.
C. When the speaker speaks about a dystopian society, she refers to an undesirable and frightening society.

V. Question 5
A. In this summary, the student's primary idea is to understand the impacts of modern communication modes on language abilities.
B. Specifically, the student focuses on how internet messaging and texting have negatively impacted grammar development among contemporary teenagers.
C. The essay flows chronologically, beginning from the general and moving to the specific.
D. Throughout the essay, the student uses the article as a point of reference.
E. The student's final argument, in summary, is that although technology may have had negative impacts on grammar, it has not stopped people from communicating, and there is no need to worry.
F. One major weakness of the summary is that the inclusion of questions with responses below them interferes with its flow.

Student Sample

“Texting, Techspeak…”
Because of the evolution of societies, languages and the way we practice them have changed over time. Indeed, due to technology, the modification of our way of living since the 90's has had a direct impact on the way we communicate with others. The internet, text messaging and all those new practices are new tools which allow us to use our language in new ways. Drew P. Cingel and S. Shyam Sundar wrote an interesting essay dealing with this issue. Through this article, the authors explored adolescents' habits and used research in order to determine if technology has a bad effect on English grammar. Two specific questions emerged. The first one is: "What is the relationship between the number of text messages an adolescent sends and his/her scores on a grammar test?" The second question is: "Is the use of different styles of adaptation common in text messages related differently to the grammar assessment scores of adolescents?" In order to find an accurate answer, the two authors defined the notions of two theories: the social cognitive theory and the low road theory.
Basically, the social cognitive theory explains that people do not build their behavior by trying new methods to express themselves, but rather learn through observation in a social context. In other words, people tend to "copy" or "reproduce" behaviors they see instead of looking and searching deep down inside themselves for their own way to express their personalities. How does this process affect writing? "[The implications of the study denote] that adolescents who learn techspeak through observation are more motivated to recreate the language in an effort to keep up with their peers and also the speed requirements of interacting via text messaging." In order to respond to the question "has the social cognitive theory had a negative effect on adolescent grammar skills?", the authors speak about studies that found that "students who learn writing through observation develop a better base for writing."
In addition to the social cognitive theory, Cingel and Sundar also use an alternative theory of
learning called the Low Road/High Road theory. This theory explains that “two tasks similar in
nature, such as composing a text message and composing standard English writing, will involve an
automatic transfer of skills.” The authors claimed that “over time, such a direct, unconscious transfer
of textual adaptations from techspeak to informal writing in school is likely to result in lower
grammatical ability." This approach led the authors to two hypotheses: "the more an adolescent sends or receives text messages each day, the lower their scores on a grammar assessment" and "the more adaptations an adolescent reports using in sent text messages, the lower their score on a grammar assessment."
To summarize, this article demonstrates that texting, chatting, and all those new practices have a negative impact on the grammar skills of adolescents. Abbreviations, omissions and substitutions adversely alter the language we have spoken and written for a long time. And obviously, that is true: all the research on the topic and the theories highlight the negative effect of those new practices. Having said that, it appears to me that there is a vast difference between the negative effects it has on grammar skills compared to the negative effects it has on adolescents. I think that a lot of people who read this article can misunderstand and oversimplify the real impact those new practices can have. In other words, my question is: "Is the simplification of grammar really something that has a bad effect on our communication methods?"
In order to respond to this question, I reflect on this first idea: what is the role/purpose of languages? Our languages, whether oral or written, are used in order to transmit feelings, ideas, and personalities. The only goal of language is to communicate with our peers, and so to exist through our core values and our behaviors. One's way of speaking will be different according to who you are, where you come from, the education you receive and, the most important criterion, who you want to be. For example, a book writer will need perfect grammar skills, while a farmer (theoretically) will have less need to use those skills in order to make a living. We are all different, and differentiating ourselves by the way we speak or write is a good thing, as long as we are understood by our interlocutors.
Additionally, it can also be a disadvantage if the restructuring of language is involuntary. As an example, when I started to play the guitar, I learnt by myself. Then, I started to take some classes where I showed my professor some of the compositions I had made. Those compositions were very complex, the tempo was very strange, but I did not know it, because I did not have the ability to analyze what I was doing. My professor told me "it is not a bad thing to do things in your own way, since you understand what you are doing"...


