Annotated Bibliography
Purpose
This assignment will help you to analyze, summarize, and evaluate various types of
research material.
Remember: You are conducting research on ONE of the topics presented in Module 4:
1. Social Media and Free Speech OR 2. Threat of Artificial Intelligence OR 3. Voting
Rights. The sources examined in this Annotated Bibliography will be used in your
Argument Essay. In essence, this annotated bibliography will become the basis of your
Works Cited page for the Argument Essay.
Definition: An annotated bibliography is your works cited list, with added information
for each source. In addition to the citation, each annotation should include a
concise summary and evaluation of the source. Each annotation should be a
strong paragraph (about 150 words), double spaced. The annotations should be
alphabetized by citation, just as you would alphabetize your works cited.
Directions:
1. Before you begin the assignment, click on the learning resource "What is an
Annotated Bibliography and How do I Write One." Also review
carefully "How to Get Started With Your Argument Essay Research."
2. Create an MLA-formatted Annotated Bibliography that lists and annotates
at least FIVE independently researched scholarly and/or critical articles on the
topic you are writing about (remember to CHOOSE ONE). Credible sources
may include scholarly articles, newspaper articles/editorials, and news
organizations such as NPR and CNN.
3. Save your document as a Word or Google Docs file and submit it as a file upload.
11/1/21, 2:23 PM
EBSCOhost
EBSCO Publishing Citation Format: MLA 8th Edition (Modern Language Assoc.):
NOTE: Review the instructions at http://support.ebsco.com.eznvcc.vccs.edu/help/?int=ehost&lang=&feature_id=MLA
and make any necessary corrections before using. Pay special attention to personal names, capitalization, and
dates. Always consult your library resources for the exact formatting and punctuation guidelines.
Works Cited
Simms, Andrew. “Beware an AI-Fuelled World.” New Scientist, vol. 240, no. 3204, Nov. 2018, pp. 24–25.
EBSCOhost, doi:10.1016/S0262-4079(18)32127-4.
Section: COMMENT
Beware an AI-fuelled world
Fears of an artificial intelligence apocalypse make the news, but it is AI-driven inequality we
should worry about
ONE of the biggest potential impacts of artificial intelligence is often overlooked. Rather than the
frequently touted extremes of technological utopia or an end to humanity, AI could entrench and deepen
the status quo, intensifying business as usual by ramping up overconsumption and inequality. For many
scientists, this is a big concern.
Scientists for Global Responsibility, a campaign group for scientists and engineers that I work for,
recently surveyed its 750 members about AI's effects on the future. Nine out of 10 respondents thought
that AI would deliver more power and economic benefit to corporations than to citizens. Eight out of 10
thought AI would lead to a dystopian future, rather than a utopian or unchanged one.
Mark Carney, governor of the Bank of England, recently said AI is part of a fourth industrial revolution,
which will not only tilt the balance of power further away from low-paid workers to the owners of capital,
but "substantially boost productivity and supply". In other words, AI will enable us to make a lot more
stuff using fewer people, and as a result is likely to worsen overconsumption and unemployment levels.
Predictions about how AI and automation will affect work suggest that anything from 35 to 50 per cent of
all jobs could be at risk, according to the University of Oxford and the Bank of England. Profits from this
change are likely to flow to corporations and their owners rather than their workforces, deepening
inequality and exacerbating the decline in wages relative to global wealth seen over the past 20 years.
Alarm bells about an AI dystopia are already ringing. There are fears about the development of weapons
that decide for themselves who to kill, so-called killer robots, and about how AI could lead to a
supercharged surveillance society, where everything you do is tracked and recorded. AI is also being
used to intensify environmentally damaging resource extraction. An embattled oil and gas industry sees
AI as a "godsend", as one leading industry journal put it, and is now using the technology to help find
new places to look.
The recent UN special report on meeting the 1.5°C climate target concluded that rapid, far-reaching and
unprecedented transitions were needed across the whole of society, with low energy demand and low
material consumption being the top priorities. This seems at odds with using AI to seek more fossil fuels.
But the worst effects will only happen if we let them. My colleagues and I are calling for 20 per cent of all
AI research funding to be used to assess its potential benefits and harms, so that we can make informed
choices. Technology isn't destiny. We don't have to do something just because we can.
By Andrew Simms
Copyright of New Scientist is the property of New Scientist Ltd. and its content may not be copied or
emailed to multiple sites or posted to a listserv without the copyright holder's express written permission.
However, users may print, download, or email articles for individual use.
Works Cited
Revell, Timothy. “‘AIs Are Really Dumb. They Don’t Even Have the Intelligence of a 6-Month-Old.’” New
Scientist, vol. 242, no. 3233, June 2019, pp. 44–45. EBSCOhost, eznvcc.vccs.edu/login?
url=https://search.ebscohost.com/login.aspx?direct=true&db=iih&AN=137120267&site=ehost-live&scope=site.
'AIs are really dumb. They don't even have the intelligence of a 6-month-old'
Yoshua Bengio is one of the founding fathers of artificial intelligence. Timothy Revell finds out
why he doesn't fear a machine apocalypse
OVER the past decade, machine intelligence has vastly improved. That is in large part due to deep
learning, a technique that gives computers the ability to teach themselves. It underpins everything from
world-beating chess and Go algorithms to digital voice assistants like Amazon's Alexa and Apple's Siri.
Yoshua Bengio is one of the pioneers of deep learning, and has spent his career at the forefront of AI
research. He was recently awarded the A. M. Turing Award, which is often called the Nobel prize of
computing, along with two other deep learning pioneers: Geoffrey Hinton at the University of Toronto
and Google, and Yann LeCun, who is chief AI scientist at Facebook. The trio will split the $1 million
prize.
Bengio remains in academia at the University of Montreal, Canada, but co-founded an AI incubator and
advises on a couple of startups. He resisted the draw of a juicy Silicon Valley salary, because he
believes "humans are more important than money". That being said, he isn't humanity's biggest fan. For
all his optimism about the future of machines, he wouldn't put it past us to mess things up.
How do you think AI will be able to actually help people?
The progress we have made in machine learning has been pretty amazing, and it can empower almost
any sector of society.
I have invested a lot of effort in healthcare applications, especially the goal of machines being able to
diagnose cancer from medical images. I also think there is a lot of potential when it comes to climate
change, such as using AI to help predict the consequences of societal transformations. And we have
been very active in looking at how machine learning can help promote human rights, by detecting
gender bias, for example.
Does AI have any potential risks?
How much time do I have? Lethal autonomous weapons, for example, are morally wrong because
computers don't understand moral values. That means they might not be able to question the legitimacy
of an order. They are also wrong for security reasons, as they can threaten the global balance of power.
That could be dangerous for all of us.
AI can also be used to control people, monitoring where they are and reinforcing the power of
authoritarian governments in a way that wasn't possible before. For example, we know that China has
hundreds of millions of cameras in the streets, equipped with the technology for facial recognition.
Another related danger that people talk about less is how AI could be used to influence others. Think
about advertising, the influence of which is usually seen as benign, and extend those techniques to
politics. It might not matter if I buy Coke or Pepsi, but it does matter if I vote for Trump or not, right?
Then you have the issue of bias and diversity, where AI can reinforce some of the negative aspects of
our current societies. And finally, there is the concentration of power. As a powerful technology left to its
own devices, it is just going to reinforce the power of those who control it, and that is bad for democracy
as well.
Do you think the big technology companies, like Google or Facebook, have too much power?
Yes. I think the trend is concerning. There is a snowball effect. The more data you have, the more
customers you have. And if you are rich, you can pay for the best researchers. This is bad for innovation
and it's bad for democracy. It is bad for innovation, because innovation comes from diversity, from many
different people with different goals trying different things. And then for democracy... well, democracy,
what does it mean? It is power to the people. If power is concentrated in a few hands, that isn't good.
How do you feel about the future of AI?
I am very optimistic about the science making a lot of progress in the coming decades. I think brains are
very complicated machines, but I think we will figure out the principles of intelligence, which will help us
make better AI.
So I am very optimistic on that side, but I also realise that it might take decades, or even centuries, for
all we know. But we will get there, unless we self-destruct in some way as a social organisation.
Is an AI apocalypse something people should be afraid of?
Well, I'm not.
Why not?
Because that scenario just doesn't fit my understanding of the science of AI right now. I don't see it as
credible. Now, I don't have a crystal ball, and the science of AI 50 years from now will be very different. I
think we need to be prudent, and it is good that there are people who are thinking about these issues,
but making it a political or social question at this point is very premature. We should be worrying about
those other, shorter term issues that, for sure, are happening and need our attention.
Are the short-term issues unstoppable, or is there a way to halt them?
There is a way, but it isn't an easy one. We need to have society at large understand those issues and
bring them to the forefront of the political agenda, so that governments act as they should.
Finally, what do people get wrong about AI?
So many people overestimate the intelligence of these systems. AIs are really dumb. They don't
understand the world. They don't understand humans. They don't even have the intelligence of a
6-month-old. Yes, they can beat the world champion at Go. But that doesn't mean they can do anything
else.
A related misconception is that intelligent robots are taking over the world. People project their own
emotions and feelings onto the machines that we will build in the future and think, "well, if I was that
machine, I would be angry that all these guys have been enslaving me, and now I'm out for revenge".
But I think this is nonsense. We are designing those machines, which means the real danger is if an AI
gets into the wrong hands, and is then used in ways that will hurt us. It isn't that the AI is malevolent, it is
the humans that are stupid and/or greedy.
By Timothy Revell. Timothy Revell is assistant news editor at New Scientist. Follow him on Twitter @timothyrevell
INSERT SURNAME HERE: 1
THE THREAT OF ARTIFICIAL INTELLIGENCE
AI (artificial intelligence) is one of the most controversial topics in today's world. With
technological advancement, machines can now teach themselves and execute tasks on their
own. However, AI has been received with mixed reactions: some researchers deeply support it,
while others dismiss it outright. For instance, Timothy Revell suggests that AI is not truly
intelligent; machines merely perform what humans have technically instructed them to do. On
the contrary, Andrew Simms believes that AI is part of a coming industrial revolution that is
likely to present many demerits (unemployment) alongside some merits (high productivity).
In his arguments, Timothy suggests that some individuals overestimate artificial
intelligence. These systems are completely dumb because they cannot understand the dynamics
of the world, in particular humans. Although an AI can beat the world champion at Go, that
does not mean it is capable of anything more, because these machines are ultimately built and
powered by humans (Revell 44).
Additionally, AI is only dangerous when it is left in irresponsible hands. Consider, for
example, the issues of diversity and bias, which arise when the technology's power is
concentrated among a few, such as those who own social media platforms like Facebook. When
the technology is left to those who own it, it benefits only those who regulate it, which is a
negative quality for democracy. Timothy also states that lethal autonomous weapons are morally
wrong because computers cannot understand moral values (Revell 44). That is why they cannot
question the legitimacy of the orders they are given.
On the other hand, Andrew Simms believes that AI will contribute to a future industrial
revolution in which corporations will make much more stuff using fewer people. The
consequence will be an escalation of unemployment levels (Simms 24). There are also fears that
advancement in AI will bring intelligent weapons able to decide for themselves whom to kill,
and a society in which everything is tracked and recorded. Sensing the dangers associated with
AI, Simms suggests devoting at least 20 percent of AI research funding to examining the harms
and benefits of continued advancement in AI (Simms 24).
Although both authors present reasonable viewpoints regarding AI, I would strongly
support Andrew Simms's argument that we should not do something just because we are
capable of it. There should be funded research dedicated to examining the merits and demerits
of advancement in AI, which would enable informed decision-making. The effects of
technological advancement are already evident, especially in corporate settings, where the
human workforce is being replaced by machines and unemployment has already begun to set in.
This aligns with Andrew's viewpoint that the future of AI will enable high productivity with less
human staffing, but that the worst of it will be the state of unemployment.
Works Cited
Revell, Timothy. “‘AIs Are Really Dumb. They Don’t Even Have the Intelligence of a 6-Month-
Old.’” New Scientist, vol. 242, no. 3233, 2019, pp. 44–45. Crossref, doi:10.1016/s0262-
4079(19)31038-3.
Simms, Andrew. “Beware an AI-Fuelled World.” New Scientist, vol. 240, no. 3204, 2018, pp.
24–25. Crossref, doi:10.1016/s0262-4079(18)32127-4.