Autonomous Robotic Technology Could Pose a Serious Threat to Humanity
Robotic Technology. 2014.
COPYRIGHT 2014 Greenhaven Press, a part of Gale, Cengage Learning
Article Commentary
Moral Machines: Teaching Robots Right from Wrong, by Wendell Wallach & Colin Allen (2009), 2,714 words from Chapter 12, "Dangers, Rights, and Responsibilities," pp. 189-190, 191-197. By permission of Oxford University Press, USA. Copyright © 2009 by Wallach & Allen. All rights reserved. Used with permission.
"If and when (ro)bots develop a high degree of autonomy, simple control systems that restrain inappropriate behavior will
not be adequate."
In the following viewpoint, Wendell Wallach and Colin Allen argue that predicting whether autonomous robots with advanced
artificial intelligence will benefit or harm humanity is difficult. However, they reason, assuming that engineers can simply design
moral robots is unrealistic. Creating robots in today's moral environment may not prevent unforeseen future consequences, Wallach
and Allen assert. Indeed, they maintain, robots with moral restraints might generate robotic offspring that are more likely to survive
without these restraints. To gain public support for robotic developments, Wallach and Allen conclude, engineers must ensure that
autonomous robots pose no threat to humanity. Wallach is a consultant with Yale University's Interdisciplinary Center for Bioethics.
Allen is a professor of history and philosophy of science at Indiana University.
As you read, consider the following questions:
1. According to roboticist Jordan Pollack, as cited by Wallach and Allen, why are self-replicating robots unlikely to pose a major threat?
2. What does AI scientist Hugo de Garis think is a potential negative impact of AI research, according to the authors?
3. In the authors' opinion, what might serve as surrogate forms of "punishment" for autonomous robots?
Tomorrow's Headlines:
"Robots March on Washington Demanding Their Civil Rights"
"Terrorist Avatar Bombs Virtual Holiday Destination"
"Nobel Prize in Literature Awarded to IBM's Deep-Bluedora"
"Genocide Charges Leveled at FARL (Fuerzas Armadas Roboticas de Liberacion)"
"Nanobots Repair Perforated Heart"
"VTB (Virtual Transaction Bot) Amasses Personal Fortune in Currency Market"
"UN Debates Prohibition of Self-Replicating AI [artificial intelligence]"
"Serial Stalker Targets Robotic Sex Workers"
Are these headlines that will appear in this century or merely fodder for science fiction writers? In recent years, an array of serious computer
scientists, legal theorists, and policy experts have begun addressing the challenges posed by highly intelligent (ro)bots participating with humans
in the commerce of daily life. Noted scientists like Ray Kurzweil and Hans Moravec talk enthusiastically about (ro)bots whose intelligence will
be superior to that of humans, and how humans will achieve a form of eternal life by uploading their minds into computer systems. Their
predictions of the advent of computer systems with intelligence comparable to humans around 2020-50 are based on a computational theory of
mind and the projection of Moore's law [1] over the next few decades. Legal scholars debate whether a conscious AI may be designated a "person"
for legal purposes, or eventually have rights equal to those of humans. Policy planners reflect on the need to regulate the development of
technologies that could potentially threaten human existence as humans have known it. The number of articles on building moral decision-making
faculties into (ro)bots is a drop in the proverbial bucket in comparison to the flood of writing addressing speculative future scenarios....
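To give a sense of the scale such Moore's-law projections imply, here is a rough worked illustration (our own arithmetic, using arbitrary round numbers; the viewpoint itself cites no figures):

```python
# Worked illustration only (not from the source text): if a quantity doubles
# every 18 months, then over 30 years it doubles 30 * 12 / 18 = 20 times,
# i.e. grows by a factor of 2**20, roughly a millionfold. Extrapolations of
# this kind underlie forecasts of human-level AI arriving around 2020-50.
doublings = 30 * 12 / 18            # 20 doublings over three decades
growth = 2 ** doublings
print(f"{doublings:.0f} doublings -> growth factor of {growth:,.0f}")
# prints: 20 doublings -> growth factor of 1,048,576
```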
No Robot Rebellion
The futuristic (ro)botic literature spins scenarios of intelligent machines acting as moral or immoral agents, beyond the control of the engineers
who built them. (Ro)bots play a pivotal role in both Utopian and dystopian visions.
Speculation that AI systems will soon equal if not surpass humans in their intelligence feeds technological fantasies and fears regarding a future
robot takeover. Perhaps, as some versions of the future predict, a species of self-replicating (ro)bots will indeed threaten to overwhelm humanity.
However, Bill Joy's famous jeremiad in Wired (2000) against self-reproducing technology notwithstanding, self-replicating robots are unlikely to
be a major threat. The roboticist Jordan Pollack of Brandeis University points out that unlike pathogens or replicating nanotechnology, (ro)bots
require significant resources both in the form of raw materials and infrastructure to reproduce themselves. Arresting (ro)bot reproduction is a
simple matter of destroying the infrastructure or shutting down the supply chain. Daniel Wilson also captured some of the absurdity in overblown
fears of a robot takeover in his dryly humorous yet informative How to Survive a Robot Uprising: Tips on Defending Yourself Against the Coming
Rebellion.
Nonetheless, tactics for stopping large robots from replicating are not likely to be successful when dealing with tiny nanobots. On the other hand,
nanobots, even in this age of miniaturization, are unlikely to be very intelligent. Intelligent or not, the gray goo scenarios beloved by alarmists in
which self-replicating nanobots eat all the organic material on earth symbolize the serious ethical challenges posed by nanotechnology. And it is
also possible, as Michael Crichton dramatized in his novel Prey, that groups of nanobots working together might display threatening swarm
behavior.
Confronting the Singularity
Futurists interested in the advent of a Singularity [2] or advanced systems with AGI [artificial general intelligence] commonly refer to the need for
friendly AI. The idea of friendly AI is meant to capture the importance of ensuring that such systems will not destroy humanity. However, it is
often hard to tell how committed those who speak of this project are to the hard work that would be necessary to make AI friendly, or whether they
are giving this project lip service in order to quell the apprehension that advanced AI may not be benign—a fear that might lead to policies that
interfere with the headlong charge toward superhuman AI.
The concept of friendly AI was conceived and developed by Eliezer Yudkowsky, a cofounder of the Singularity Institute for AI. The institute
assumes that the accelerating development of IT [information technology] will eventually produce smarter-than-human AI and has as its stated
goal to confront the opportunities and risks posed by this challenge. Eliezer is a brilliant young man whose ideas sometimes border on genius. He
is almost religiously devoted to the belief that a Singularity is inevitable. His thoughts on making AI friendly presume systems will soon have
advanced faculties that will facilitate training them to value humans and to be sensitive to human considerations.
Yudkowsky proposes that the value of being "friendly" to humans is the top-down principle that must be integrated into AGI systems well before a
speculative critical juncture known as the "hard takeoff." As opposed to a "soft takeoff," where the transition to a Singularity takes place over a
long period of time, the "hard takeoff" theory predicts that this transition will happen very abruptly, perhaps taking only a few days. The idea is
that once a system with near-human faculties turns inward and begins modifying its own code, its development could take off exponentially. The
fear is that such a system will soon far exceed humans in its capacities and, if it is not friendly to humans, might treat humans no better than
humans treat nonhuman animals or even insects.
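The contrast the authors describe can be pictured with a toy growth model (our own illustration; the viewpoint gives no equations): a soft takeoff looks like improvement at a roughly constant external rate, while a hard takeoff looks like improvement that compounds with the system's current capability.

```python
# Toy model (not from the source): contrasting a "soft" takeoff, where
# capability grows by a fixed external increment each step, with a "hard"
# takeoff, where a self-modifying system's gains are proportional to its
# current capability. All numbers are arbitrary illustrations.

def soft_takeoff(capability: float, rate: float, steps: int) -> float:
    """Capability grows by a fixed external increment each step (linear)."""
    for _ in range(steps):
        capability += rate
    return capability

def hard_takeoff(capability: float, gain: float, steps: int) -> float:
    """Each step's improvement is proportional to current capability
    (exponential), mimicking recursive self-modification."""
    for _ in range(steps):
        capability *= (1 + gain)
    return capability

if __name__ == "__main__":
    # Start both scenarios at the same (arbitrary) near-human level.
    print("soft:", soft_takeoff(1.0, rate=0.1, steps=50))   # grows to 6.0
    print("hard:", hard_takeoff(1.0, gain=0.1, steps=50))   # grows past 117
```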
Wiring AI with Values
Ben Goertzel does not believe that Yudkowsky's friendly AI strategy is likely to be successful. Goertzel is one of the leading scientists working on
building an AGI. His Novamente project is presently directed at building an AGI that functions within the popular online universe Second Life,
and he believes that this will be possible within the next decade given adequate funding. Goertzel's concern is that being "friendly" to humans is
not likely to be a natural value for an AGI and therefore is less likely to survive successful rounds of self-modification. He proposes that an AGI be
designed around a number of basic values. In a working paper on AI morality, Goertzel makes a distinction between those abstract basic
values—for example, creating diversity, preserving existing patterns that have proved valuable, and keeping oneself healthy—that might be easy
to ground in the system's architecture and hard-to-implement basic values that would need to be learned through experience. Among these "hard
basic values" are preserving life and making other intelligent or living systems happy. Without experience, it would be difficult for the system to
understand what life or happiness is.
Goertzel suggests that it will be possible to "explicitly wire the AGI with the Easy basic values: ones that are beneficial to humans but also natural
in the context of the AGI itself (hence relatively likely to be preserved through the AGI's ongoing self-modification process)," and he advocates the
strategy of using "an experiential training approach to give the system the Hard basic values." He properly tempers these suggestions with a dose
of humility:
Finally, at risk of becoming tiresome, I will emphasize one more time that all these remarks are just speculation and intuitions. It is
my belief that we will gain a much better practical sense for these issues when we have subhuman but still moderately intelligent
AGI's [sic] to experiment with. Until that time, any definitive assertions about the correct route to moral AGI would be badly out of
place.
Making Moral Machines
We agree with Goertzel that although it may be important to reflect on serious future possibilities arising from intelligent systems, it will be
difficult to make headway on formulating strategies for making those systems moral. First, computer scientists will need to discover which
platforms are likely to lead toward a (ro)bot with AGI.
Peter Norvig, director of research at Google and coauthor of the classic modern textbook Artificial Intelligence: A Modern Approach, is among
those who believe that morality for machines will have to be developed alongside AI and should not be solely dependent on future advances. By
now, it should be evident that this is also how we view the challenge of developing moral machines.
Fears that advances in (ro)botic technology might be damaging to humanity underscore the responsibility of scientists to address moral
considerations during the development of systems. One AI scientist particularly sensitive to the challenges that advanced AI could pose is Hugo de
Garis, who heads the Artificial Intelligence Group at Wuhan University in Wuhan, China. De Garis is working on building brains out of billions of
artificial neurons. He has been particularly vocal in pointing out the potential negative impact from AI research, including his own. He foresees a
war between those who are supportive of advanced artilects (a term he has derived from "artificial intellects" to refer to ultraintelligent machines)
and those who fear artilects.
Nick Bostrom, a philosopher who founded both the World Transhumanist Association and the Future of Humanity Institute at Oxford University,
proposes that superintelligent machines will far surpass humans in the quality of their ethical thinking. However, Bostrom cautions that given that
such machines would be intellectually superior and unstoppable, it behooves their designers to provide them with human-friendly motivations.
Bostrom, like Josh Storrs Hall, ... generally holds that superintelligent systems will act in a way that is beneficial to humanity. Michael Ray LaChat
of the Methodist Theological School in Ohio goes a step further in predicting the development of AI into an entity that "will become as morally
perfect as human beings can imagine.... The empathetic imagination of this entity will take into account the suffering and pain of all truly sentient
beings in the process of decision-making.... Human persons will increasingly come to rely on the moral decisions of this entity." Perhaps, as
LaChat's writings suggest, the word "entity" should be replaced with the word "deity."
The Risks of Blind Faith
We do not pretend to be able to predict the future of AI. Nevertheless, the more optimistic scenarios are, to our skeptical minds, based on
assumptions that border on blind faith. It is far from clear which platforms will be the most successful for building advanced forms of AI. Different
platforms will pose different challenges and will call for different remedies. (Ro)bots with emotions, for example, represent a totally
different species from (ro)bots without emotions.
However, we agree that systems with a high degree of autonomy, with or without superintelligence, will need to be provided with something like
human-friendly motivations or a virtuous character. Unfortunately, there will always be individuals and corporations who develop systems for their
own ends. That is, the goals and values they program into (ro)bots may not serve the good of humanity. Those who formulate public policy will
certainly direct attention to this prospect. It would be most helpful if engineers took the potential for misuse into account in designing advanced AI
systems.
The development of systems without appropriate ethical restraints or motivations can have far-reaching consequences, even when (ro)bots have
been developed for socially beneficial ends.... The U.S. Department of Defense is particularly interested in replacing humans in dangerous
military enterprises with robots. One stated goal is saving the lives of human soldiers. Presumably, robot soldiers will not be programmed with
anything as restrictive as Asimov's First Law [3]. Will, for example, the desirability of saving human lives by building robotic soldiers for combat
outweigh the difficulty of guaranteeing that such machines are controllable and can't be misused?
From the perspective of designing moral machines, the importance of the futuristic scenarios is that they function as cautionary tales, warning
engineers to be on guard that solutions to present problems will not hold unintended future consequences. For example, what will happen when
military robots come into contact with service robots in a home that have been programmed with Asimov's First Law? Initially, one might assume
that very little would change for either the military or the service robot, but eventually, as robots acquired the capacity to reprogram or restructure
the way they process information, more serious consequences might result from this meeting, including the prospect that one robot would
reprogram the other.
Small, Incremental Changes
In the meantime, the more pressing concern is that very small incremental changes made to structures through which an AMA [artificial moral
agent] acquires and processes the information it is assimilating will lead to subtle, disturbing, and potentially destructive behavior. For instance, a
robot that is learning about social factors related to trust might overgeneralize irrelevant features, for example eye, hair, or skin color, and develop
unwanted prejudices as a result.
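To make that worry concrete, here is a minimal sketch (ours, not the authors'; the data and the "color" attribute are invented) of how a trivial trust learner trained on a skewed sample ends up keying its predictions on an irrelevant surface feature:

```python
# Illustrative sketch only: a trivial "trust" learner that counts how often
# each observed feature value co-occurs with a trustworthy interaction.
# Because the training sample is skewed, an irrelevant surface feature ends
# up driving its predictions -- an overgeneralized, unwanted prejudice.
from collections import defaultdict

# Hypothetical, hand-made training data: (surface_feature, was_trustworthy)
training = [
    ("blue", True), ("blue", True), ("blue", True),
    ("brown", False), ("brown", False), ("brown", True),
]

counts = defaultdict(lambda: [0, 0])  # feature -> [trustworthy, untrustworthy]
for feature, trustworthy in training:
    counts[feature][0 if trustworthy else 1] += 1

def predicted_trust(feature: str) -> float:
    """Estimated probability of trustworthiness, keyed only on the
    irrelevant surface feature learned from the biased sample."""
    pos, neg = counts.get(feature, [0, 0])
    total = pos + neg
    return pos / total if total else 0.5  # fall back to ignorance

print(predicted_trust("blue"))   # 1.0   -- unwarranted confidence
print(predicted_trust("brown"))  # 0.33  -- unwarranted suspicion
```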
Learning systems may well be one of the better options for developing sophisticated AMAs, but the approach holds its own set of unique issues.
During adolescence, learning systems will need to be quarantined, sheltering humans from their trials and errors. The better-learning (ro)bots will
be open systems—expanding the breadth of information they integrate, learning from their environment, other machines, and humans. There is
always the prospect that a learning system will acquire knowledge that conflicts directly with its in-built restraints. Whether an individual (ro)bot
will "be conflicted" by such knowledge or use it in a way that circumvents restraints we do not know. Of particular concern is the possibility that a
learning system could discover a way to override control mechanisms that function as built-in restraints.
If and when (ro)bots develop a high degree of autonomy, simple control systems that restrain inappropriate behavior will not be adequate. How can
engineers build in values and control mechanisms that are difficult, if not impossible, for the system to circumvent? In effect, advanced systems
will require values or moral propensities that are integral to the system's overall design and that the system neither can nor would consider
dismantling. This was Asimov's vision of a robot's positronic brain being designed around the Three Laws.
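A minimal sketch of what such an integral restraint might look like (our own illustration with hypothetical names; the authors specify no implementation) places the checks between deliberation and actuation, so the planner cannot reach the actuators without passing through them:

```python
# Sketch only (ours, not the authors'): a restraint layer that every candidate
# action must pass before execution. The checks are inspired by Asimov's First
# and Third Laws; the point is the placement of the filter in the architecture,
# not fidelity to Asimov.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool = False        # would the action injure a human?
    ordered_by_human: bool = False   # was it commanded by a human?
    endangers_self: bool = False     # would it destroy the robot?

def permitted(action: Action) -> bool:
    """Return True only if the action survives the ordered restraints."""
    if action.harms_human:                          # never harm a human
        return False
    if action.endangers_self and not action.ordered_by_human:
        return False                                # self-preservation yields to orders
    return True

def execute(action: Action) -> None:
    # The planner cannot call the actuators directly; everything routes here.
    if permitted(action):
        print(f"executing: {action.name}")
    else:
        print(f"refused: {action.name}")

execute(Action("carry patient to safety", ordered_by_human=True))
execute(Action("clear debris by detonation", harms_human=True))
```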
The Value of a Bottom-Up Approach
One of the attractions of a bottom-up approach to the design of AMAs is that control mechanisms serving as restraints on the system's behavior
might evolve in a manner where they are indeed integrated into the overall design of the system. In effect, integral internal restraints would act like
a conscience that could not be circumvented except in pursuit of a goal whose importance to humanity was clear. Storrs Hall and others have
stressed this point in favoring the evolutionary approaches for building a conscience for machines. However, bottom-up evolution spawns a host of
progeny, and those that adapt and survive will not necessarily be only those whose values are transparently benign from the perspective of humans.
Punishment—from shame to incarceration—or at least fear of punishment plays some role in human development and in restraining inappropriate
behavior. Unfortunately, it is doubtful that the notion of being punished would have any lasting effect in the development of (ro)bots. Could a
(ro)bot really be designed to fear being turned off? Certainly something corresponding to a sense of failure or even shame might be programmed
into a future (ro)botic system to occur if it is unsuccessful at achieving its goals. Furthermore, mechanisms for hampering the system's pursuit of
its goals, for example slowing down its information or energy supply if it violates norms, might serve as surrogate forms of "punishment" for
simple autonomous (ro)bots. However, more advanced machines will certainly find ways to circumvent these controls to discover their own
sources of energy and information.
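One way to picture such a surrogate penalty (a sketch of our own; the class and method names are hypothetical) is a supervisor that throttles the robot's access to information or energy after a norm violation and lets the allowance recover gradually:

```python
# Illustrative sketch only (not described at this level by the authors):
# a surrogate "punishment" implemented as throttling. A norm violation
# lengthens the delay imposed on the robot's resource requests, and the
# penalty decays over subsequent requests.
import time

class ThrottledSupply:
    def __init__(self, base_interval: float = 0.0):
        self.interval = base_interval   # seconds enforced between requests
        self.last_request = 0.0

    def record_violation(self, severity: float) -> None:
        """A norm violation increases the delay on future requests."""
        self.interval += severity

    def request(self, resource: str) -> str:
        # Block until the penalty interval has elapsed since the last request.
        wait = self.interval - (time.monotonic() - self.last_request)
        if wait > 0:
            time.sleep(wait)
        self.last_request = time.monotonic()
        self.interval = max(0.0, self.interval * 0.9)  # penalty decays slowly
        return f"granted: {resource}"

supply = ThrottledSupply()
supply.record_violation(severity=0.5)   # hypothetical norm breach
print(supply.request("map update"))     # delayed by the accumulated penalty
```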
A Pandora's Box
The restraining influence of authentic feelings of failure or shame suggests the value of a (ro)bot having emotions of its own. Unfortunately,
introducing emotions into (ro)bots is a virtual Pandora's box filled with both benefits and ethical challenges. As William Edmonson, a lecturer in
the School of Computer Science at the University of Birmingham, writes, "emotionally immature Robots will present humans with strange
behaviours and these might raise ethical concerns. Additionally, of course, the Robots themselves may present the ethical challenge—is it
unethical to construct Robots with emotions, or unethical to build them without emotions?"
From both a technical and moral perspective, building (ro)bots capable of feeling psychological and physical pain is not a simple matter.... While
undesirable emotions may play an important role in human moral development, introducing them into (ro)bots for this purpose alone is likely to
create many more problems than it solves.
Designing or evolving restraints that are integral to the overall architecture of a (ro)bot is among the more fascinating challenges future roboticists
will need to address. Their success in developing adequate control systems may well determine the technological feasibility of designing AMAs,
and whether the public will support building systems that display a high degree of autonomy.
Footnotes
1. Moore's law originally held that the number of transistors on a chip would double every 18 months. The law was later applied to computational capacity and, in the eyes of some, can be applied to other technologies.
2. The Singularity refers to an event in which an explosive growth in artificial intelligence (AI) exceeds the intelligence of humans. Theorists
believe that when AIs come to understand their own construction and see ways to improve themselves, these self-improvements could lead to the
singularity. Because humans cannot understand an intelligence beyond their own, predicting the outcome of the singularity is, many reason,
impossible.
3. Science fiction writer Isaac Asimov posited three laws of robotics: 1) A robot may not harm a human nor, by inaction, allow a human to come to harm. 2) A robot must obey an order from a human, except when that order conflicts with the first law. 3) A robot must protect its own existence, as long as that protection does not conflict with the first or second law.
Books
Yoseph Bar-Cohen, David Hanson, and Adi Marom. The Coming Robot Revolution: Expectations and Fears About Emerging Intelligent, Humanlike Machines. New York: Springer, 2009.
Michael Barnes and Florian Jentsch, eds. Human-Robot Interactions in Future Military Operations. Burlington, VT: Ashgate, 2010.
Shahzad Bashir and Robert D. Crews, eds. Under the Drones: Modern Lives in the Afghanistan-Pakistan Borderlands. Cambridge, MA: Harvard University Press, 2012.
Gregory Benford and Elisabeth Malartre. Beyond Human: Living with Robots and Cyborgs. New York: Forge, 2008.
Rodney Brooks. Flesh and Machines: How Robots Will Change Us. New York: Pantheon Books, 2002.
Erik Brynjolfsson and Andrew McAfee. Race Against the Machine: How the Digital Revolution Is Accelerating Innovation, Driving Productivity, and Irreversibly Transforming Employment and the Economy. Lexington, MA: Digital Frontier Press, 2012.
Peter H. Diamandis and Steven Kotler. Abundance: The Future Is Better than You Think. New York: Free Press, 2012.
Martin Ford. The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future. United States: Acculant, 2009.
Keith Frankish and William Ramsey, eds. Cambridge Handbook of Artificial Intelligence. Cambridge: Cambridge University Press, 2011.
E.L. Gaston, ed. Laws of War and 21st-Century Conflict. New York: International Debate Education Association, 2012.
J.P. Gunderson and L.F. Gunderson. Robots, Reasoning, and Reification. New York: Springer, 2009.
J. Storrs Hall. Beyond AI: Creating the Conscience of the Machine. Amherst, NY: Prometheus Books, 2007.
Jeff Hawkins and Sandra Blakeslee. On Intelligence. New York: Times Books, 2004.
Armin Krishnan. Killer Robots: Legality and Ethicality of Autonomous Weapons. Burlington, VT: Ashgate, 2009.
James Howard Kunstler. Too Much Magic: Wishful Thinking, Technology, and the Fate of the Nation. New York: Atlantic Monthly Press, 2012.
Ray Kurzweil. How to Create a Mind: The Secret of Human Thought Revealed. New York: Viking, 2012.
Ray Kurzweil. The Singularity Is Near: When Humans Transcend Biology. New York: Penguin, 2005.
Roger D. Launius and Howard E. McCurdy. Robots in Space: Technology, Evolution, and Interplanetary Travel. Baltimore: Johns Hopkins University Press, 2008.
David N.L. Levy. Robots Unlimited: Life in a Virtual Age. Wellesley, MA: A.K. Peters, 2006.
Patrick Lin, Keith Abney, and George A. Bekey, eds. Robot Ethics: The Ethical and Social Implications of Robotics. Cambridge, MA: MIT Press, 2011.
David McFarland. Guilty Robots, Happy Dogs: The Question of Alien Minds. New York: Oxford University Press, 2008.
Gaurav S. Sukhatme, ed. The Path to Autonomous Robots: Essays in Honor of George A. Bekey. New York: Springer, 2009.
Jocelyne Troccaz. Medical Robotics. Hoboken, NJ: Wiley, 2012.
Kevin Warwick. Artificial Intelligence: The Basics. New York: Routledge, 2012.
Source Citation (MLA 8th Edition)
Wallach, Wendell, and Colin Allen. "Autonomous Robotic Technology Could Pose a Serious Threat to Humanity." Robotic Technology, edited by
Louise Gerdes, Greenhaven Press, 2014. Opposing Viewpoints. Opposing Viewpoints in Context,
http://link.galegroup.com/apps/doc/EJ3010899205/OVIC?u=viva2_jsrcc&sid=OVIC&xid=49f085ba. Accessed 7 Sept. 2018. Originally
published as "Dangers, Rights, and Responsibilities," Moral Machines: Teaching Robots Right from Wrong,, vol. 189, USA, 2009.
Gale Document Number: GALE|EJ3010899205