AI researcher says amoral robots pose a danger to humanity
Rensselaer Polytechnic professor wants robots to understand good from evil
By Sharon Gaudin
Senior Writer, Computerworld | Mar 7, 2014 3:01 PM PT
With robots becoming increasingly powerful, intelligent and autonomous, a scientist at
Rensselaer Polytechnic Institute says it's time to start making sure they know the
difference between good and evil.
"I'm worried about both whether it's people making machines do evil things or the
machines doing evil things on their own," said Selmer Bringsjord, professor of cognitive
science, computer science and logic and philosophy at RPI in Troy, N.Y. "The more powerful
the robot is, the higher the stakes are. If robots in the future have autonomy..., that's a
recipe for disaster.
"If we were to totally ignore this, we would cease to exist," he added.
Bringsjord has been studying artificial intelligence, or AI, since he was a grad student in
1985 and he's been working hand-in-hand with robots for the past 17 years. Now he's
trying to figure out how he can code morality into a machine.
That effort, on many levels, is a daunting task.
Robots are only now beginning to act autonomously. A DARPA robotics challenge late last
year showed just how much human control robots -- especially, humanoid robots -- still
need. The same is true with weaponized autonomous robots, which the U.S. military has
said need human controllers for big, and potentially lethal, decisions.
But what happens in 10 or 20 years when robots have advanced exponentially and are
working in homes as human aides and care givers? What happens when robots are fully at
work in the military or law enforcement, or have control of a nation's missile defense
system?
It will be critical that these machines know the difference between a good action and one
that is harmful or deadly.
Bringsjord said it may be impossible to give a robot the right answer on how to act in every
situation it encounters because there are too many variables. Complicating matters is the
question of who will ultimately decide what is right and wrong in a world with so many
shades of gray.
Giving robots a sense of good and bad could come down to basic principles. As author,
professor and visionary Isaac Asimov noted in writing The Three Laws of Robotics, a
robot would have to be encoded with at least three basic rules.
1. Don't hurt a human being, or through inaction, allow a human being to be hurt.
2. A robot must obey the orders a human gives it unless those orders would result in a human
being harmed.
3. A robot must protect its own existence as long as it does not conflict with the first two laws.
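Bringsjord's goal of turning ethics into "logical mathematical terms" can be illustrated with a toy sketch of priority-ordered rule checking. This is purely an assumption-laden illustration, not how any real robotics system (or Bringsjord's own research) encodes ethics; the action flags are hypothetical.

```python
# Toy sketch: Asimov-style laws as an ordered list of checks.
# A candidate action is a dict of (hypothetical) boolean flags;
# the first law it violates vetoes the action.

LAWS = [
    ("Law 1: do not harm a human",
     lambda a: not a["harms_human"]),
    ("Law 2: obey human orders unless they conflict with Law 1",
     lambda a: a["obeys_order"] or a["order_harms_human"]),
    ("Law 3: protect own existence unless it conflicts with Laws 1-2",
     lambda a: a["self_preserving"] or a["self_sacrifice_required"]),
]

def permitted(action):
    """Check laws in priority order; return (ok, first_violated_law)."""
    for name, check in LAWS:
        if not check(action):
            return False, name
    return True, None

# A harmless, obedient action passes all three laws.
action = {
    "harms_human": False,
    "obeys_order": True,
    "order_harms_human": False,
    "self_preserving": True,
    "self_sacrifice_required": False,
}
print(permitted(action))  # (True, None)
```

Even this trivial encoding shows the difficulty Bringsjord raises: the hard part is not the rule ordering but deciding, for a real situation, what value a flag like `harms_human` should take.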
"We'd have to agree on the ethical theories that we'd base any rules on," said Bringsjord.
"I'm concerned that we're not anticipating these simple ethical decisions that humans have
to handle every day. My concern is that there's no work on anticipating these kinds of
decisions. We're just going ahead with the technology without thinking about ethical
reasoning."
Even when those needs are anticipated, any rules about right and wrong would have to be
built into the machine's operating system so it would be more difficult for a user or hacker
to override them and put the robot to ill use.
Mark Bunger, a research director at Lux Research, said it's not crazy to think that robots
without a sense of morality could cause a lot of trouble.
"This is a very immature field," said Bunger. "The whole field of ethics spends a lot of time
on the conundrums, the trade-offs. Do you save your mother or a drowning girl? There's
hundreds of years of philosophy looking at these questions.... We don't even know how to
do it. Is there a way to do this in the operating system? Even getting robots to understand
the context they're in, not to mention making a decision about it, is very difficult. How do
we give a robot an understanding about what it's doing?"
Dan Olds, an analyst with The Gabriel Consulting Group, noted that robots will be the most
useful to us when they can act on their own. However, the more autonomous they are, the
more they need to have a set of rules to guide their actions.
Part of the problem is that robots are advancing without nearly as much thought being
given to their guiding principles.
"We want robots that can act on their own," said Olds. "As robots become part of our daily
lives, they will have plenty of opportunities to crush and shred us. This may sound like
some far off future event, but it's not as distant as some might think.
"We can't build an infant machine and let it grow up in a human environment so it can
learn like a human child would learn," said Bringsjord. "We have to figure out the ethics
and then figure out how to turn ethics into logical mathematical terms."
He also noted that robots need to be able to make decisions about what they should and
shouldn't do -- and make those decisions quickly.
Bringsjord noted, "You don't want a robot that never washes the darned dishes because it's
standing there wondering if there's an ethical decision to be made."
Does Artificial Intelligence Pose a Threat?
A panel of experts discusses the prospect of machines capable of autonomous reasoning
By
TED GREENWALD
May 10, 2015 11:08 p.m. ET
Paging Sarah Connor!
After decades as a sci-fi staple, artificial intelligence has leapt into the mainstream.
Between Apple’s Siri and Amazon’s Alexa, IBM’s Watson and Google Brain, machines that
understand the world and respond productively suddenly seem imminent.
The combination of immense Internet-connected networks and machine-learning
algorithms has yielded dramatic advances in machines’ ability to understand spoken and
visual communications, capabilities that fall under the heading “narrow” artificial
intelligence. Can machines capable of autonomous reasoning—so-called general AI—be far
behind? And at that point, what’s to keep them from improving themselves until they have
no need for humanity?
The prospect has unleashed a wave of anxiety. “I think the development of full artificial
intelligence could spell the end of the human race,” astrophysicist Stephen Hawking told
the BBC. Tesla founder Elon Musk called AI “our biggest existential threat.” Former
Microsoft Chief Executive Bill Gates has voiced his agreement.
How realistic are such concerns? And how urgent? We assembled a panel of experts from
industry, research and policy-making to consider the dangers—if any—that lie ahead.
Taking part in the discussion are Jaan Tallinn, a co-founder of Skype and the think tanks
Centre for the Study of Existential Risk and the Future of Life Institute; Guruduth S.
Banavar, vice president of cognitive computing at IBM’s Thomas J. Watson Research
Center; and Francesca Rossi, a professor of computer science at the University of Padua, a
fellow at the Radcliffe Institute for Advanced Study at Harvard University and president of
the International Joint Conferences on Artificial Intelligence, the main international
gathering of researchers in AI.
Here are edited excerpts from their conversation.
What’s the risk?
WSJ: Does AI pose a threat to humanity?
MR. BANAVAR: Fueled by science-fiction novels and movies, popular treatment of this
topic far too often has created a false sense of conflict between humans and machines.
“Intelligent machines” tend to be great at tasks that humans are not so good at, such as
sifting through vast data. Conversely, machines are pretty bad at things that humans are
excellent at, such as common-sense reasoning, asking brilliant questions and thinking out
of the box. The combination of human and machine, which we consider the foundation of
cognitive computing, is truly revolutionizing how we solve complex problems in every field.
AI-based systems are already making our lives better in so many ways: Consider
automated stock-trading agents, aircraft autopilots, recommendation systems, industrial
robots, fraud detectors and search engines. In the last five to 10 years, machine-learning
algorithms and advanced computational infrastructure have enabled us to build many new
applications.
However, it’s important to realize that those algorithms can only go so far. More complex
symbolic systems are needed to achieve major progress—and that’s a tall order. Today’s
neuroscience and cognitive science barely scratch the surface of human intelligence.
My personal view is that the sensationalism and speculation around general-purpose,
human-level machine intelligence is little more than good entertainment.
MR. TALLINN: Today’s AI is unlikely to pose a threat. Once we shift to discussing long-term
effects of general AI (which, for practical purposes, we might define as AI that’s able to do
strategy, science and AI development better than humans), we run into the
superintelligence control problem.
WSJ: What is the superintelligence control problem?
MR. TALLINN: Even fully autonomous robots these days have off switches that allow
humans to have ultimate control. However, the off switch only works because it is outside
the domain of the robot. For instance, a chess computer is specific to the domain of chess
rules, so it is unaware that its opponent can pull the plug to abort the game.
However, if we consider superintelligent machines that can represent the state of the world
in general and make predictions about the consequences of someone hitting their off
switch, it might become very hard for humans to use that switch if the machine is
programmed (either explicitly or implicitly) to prevent that from happening.
WSJ: How serious could this problem be?
MR. TALLINN: It’s a purely theoretical problem at this stage. But it would be prudent to
assume that a superintelligent AI would be constrained only by the laws of physics and the
initial programming given to its early ancestor.
The initial programming is likely to be a function of our knowledge of physics—and we
know that’s still incomplete! Should we find ourselves in a position where we need to
specify to an AI, in program code, “Go on from here and build a great future for us,” we’d
better be very certain we know how reality works.
As to your question, it could be a serious problem. It is important to retain some control
over the positions of atoms in our universe [and not inadvertently give control over them
to an AI].
MS. ROSSI: AI is already more “intelligent” than humans in narrow domains, some of which
involve delicate decision making. Humanity is not threatened by them, but many people
could be affected by their decisions. Examples are autonomous online trading agents,
health-diagnosis support systems and soon autonomous cars and weapons.
We need to assess their potential dangers in the narrow domains where they will function
and make them safe, friendly and aligned with human values. This is not an easy task, since
even humans are not rationally following their principles most of the time.
Affecting everyday life
WSJ: What potential dangers do you have in mind for narrow-domain AI?
MS. ROSSI: Consider automated trading systems. A bad decision in these systems may be
(and has been) a financial disaster for many people. That will also be the case for self-driving cars. Some of their decisions will be critical and possibly affect lives.
WSJ: Guru, how do you view the risks?
MR. BANAVAR: Any discussion of risk has two sides: the risk of doing it and the risk of not
doing it. We already know the practical risk today of decisions made with incomplete
information by imperfect professionals—thousands of lives, billions of dollars and slow
progress in critical fields like health care. Based on IBM’s experience with implementing
Watson in multiple industries, I maintain that narrow-domain AI significantly mitigates
these risks.
I will not venture into the domain of general AI, since it is anybody’s speculation. My
personal opinion is that we repeatedly underestimate the complexity of implementing it.
There simply are too many unknown unknowns.
WSJ: What proactive steps is International Business Machines taking to mitigate
risks arising from its AI technology?
MR. BANAVAR: Cognitive systems, like other modern computing systems, are built using
cloud-computing infrastructure, algorithmic code and huge amounts of data. The behavior
of these systems can be logged, tracked and audited for violations of policy. These cognitive
systems are not autonomous, so their code, data and infrastructure themselves need to be
protected against attacks. People who access and update any of these components can be
controlled.
The data can be protected through strong encryption and its integrity managed through
digital signatures. The algorithmic code can be protected using vulnerability scanning and
other verification techniques. The infrastructure can be protected through isolation,
intrusion protection and so on.
These mechanisms are meant to support AI safety policies that emerge from a deeper
analysis of the perceived risks. Such policies need to be identified by bodies like the SEC,
FDA and more broadly NIST, which generally implement standards for safety and security
in their respective domains.
WSJ: Watson is helping doctors with diagnoses. Can it be held responsible for a
mistake that results in harm?
MR. BANAVAR: Watson doesn’t provide diagnoses. It digests huge amounts of medical data
to provide insights and options to doctors in the context of specific cases. A doctor could
consider those insights, as well as other factors, when evaluating treatment options. And
the doctor can dig into the evidence supporting each of the options. But, ultimately, the
doctor makes the final diagnostic decision.
MS. ROSSI: Doctors make mistakes all the time, not because they are bad, but because they
can’t possibly know everything there is to know about a disease. Systems like Watson will
help them make fewer mistakes.
MR. TALLINN: I’ve heard about research into how doctors compare to automated
statistical systems when it comes to diagnosis. The conclusion was that the doctors, at least
on average, were worse. What’s more, when doctors second-guessed the system, they made
the result worse.
MR. BANAVAR: On the whole, I believe it is beneficial to have more complete information
from Watson. I, for one, would personally prefer that anytime as a patient!
The human impact
WSJ: Some experts believe that AI is already taking jobs away from people. Do you
agree?
MR. TALLINN: Technology has always had the tendency to make jobs obsolete. I’m
reminded of an Uber driver whose services I used a while ago. His seat was surrounded by
numerous gadgets, and he demonstrated enthusiastically how he could dictate my
destination address to a tablet and receive driving instructions. I pointed out to him that, in
a few years, maybe the gadgets themselves would do the driving. To which he gleefully
replied that then he could sit back and relax—leaving me to quietly shake my head in the
back seat. I do believe the main effect of self-driving cars will come not from their
convenience but from the massive impact they will have on the job market.
In the long run, we should think about how to organize society around something other
than near-universal employment.
MR. BANAVAR: From time immemorial, we have built tools to help us do things we can’t
do. Each generation of tools has made us rethink the nature and types of jobs. Productivity
goes up, professions are redefined, new professions are created and some professions
become obsolete. Cognitive systems, which can enhance and scale the capabilities of our
minds, have the potential to be even more transformative.
The key question will be how to build institutions to quickly train professionals to exploit
cognitive systems as their assistants. Once learned, these skills will make every individual a
better professional, and this will set a new bar for the nature of expertise.
WSJ: How should the AI community prepare?
MR. TALLINN: There is significant uncertainty about the time horizons and whether a
general AI is possible at all. (Though, being a physicist, I don’t see anything in physics that
would prevent it!) Crucially, though, the uncertainty does not excuse us from thinking
about the control problem. Proper research into this is just getting started and might take
decades, because the problem appears very hard.
MS. ROSSI: I believe we can design narrowly intelligent AI machines in a way that most
undesired effects are eliminated. We need to align their values with ours and equip them
with guiding principles and priorities, as well as conflict-resolution abilities that match
ours. If we do that in narrowly intelligent machines, they will be the building blocks of
general AI systems that will be safe enough to not threaten humanity.
MR. BANAVAR: In the early 1990s, when it became apparent the health-care industry
would be computerized, patient-rights activists in multiple countries began a process that
resulted in confidentiality regulations a decade later. In the U.S. as in other places, it is now
technologically feasible to track HIPAA compliance, and it is possible to enforce the liability
regulations for violations. Similarly, the serious question to ask in the context of narrow-domain AI is: what are the rights that could be violated, and what are the resulting
liabilities?
MS. ROSSI: As we have safety checks that need to be passed by anybody who wants to sell a
human-driven car, there will need to be new checks to be passed by self-driving cars. Not
only will the code running in such cars need to be carefully verified and validated, but we
will also need to check that the decisions will be made according to ethical and moral
principles that we would agree on.
MR. BANAVAR: What are the rights of drivers, passengers, and passersby in a world with
self-driving cars? Is it a consumer’s right to limit the amount of information that can be
exchanged between a financial adviser and her cognitive assistant? Who is liable for the
advice—the financial adviser, the financial-services organization, the builder of the
cognitive assistant or the curator of the data? These are as much questions about today’s
world, [about how we regulate] autonomous individuals and groups with independent
goals, as they are about a future world with machine intelligence.
Mr. Greenwald is a news editor for The Wall Street Journal in San Francisco. He can be
reached at ted.greenwald@wsj.com.
Stephen Hawking, Elon Musk, and Bill Gates Warn About
Artificial Intelligence
By Michael Sainato • 08/19/15 12:30pm
Stephen Hawking, Elon Musk, and Bill Gates. (Photo: Getty Images)
Some of the most popular sci-fi movies—2001: A Space Odyssey, The Terminator, The
Matrix, Transcendence, Ex Machina, and many others—have been based on the notion that
artificial intelligence will evolve to a point at which humanity will not be able to control its
own creations, leading to the demise of our entire civilization. This fear of rapid technology
growth and our increasing dependence on it is certainly warranted, given the capabilities of
current machines built for military purposes.
Already, technology has had a significant impact on warfare since the Iraq war began in
2003. Unmanned drones provide sustained surveillance and swift attacks on targets, and
small robots are used to disarm improvised explosive devices. The military is
currently funding research to produce more autonomous and self-aware robots to diminish
the need for human soldiers to risk their lives. Marc Raibert, the founder of Boston Dynamics,
released a video showing a terrifying six-foot tall, 320-lb. humanoid robot named Atlas,
running freely in the woods. The company, which was bought by Google in 2013 and
receives grant money from the Department of Defense, is working on developing an even
more agile version.
The inherent dangers of such powerful technology have inspired several leaders in the
scientific community to voice concerns about Artificial Intelligence.
“Success in creating AI would be the biggest event in human history,” wrote Stephen
Hawking in an op-ed, which appeared in The Independent in 2014. “Unfortunately, it might
also be the last, unless we learn how to avoid the risks. In the near term, world militaries
are considering autonomous-weapon systems that can choose and eliminate targets.”
Professor Hawking added in a 2014 interview with the BBC, "humans, limited by slow
biological evolution, couldn’t compete and would be superseded by A.I.”
The technology Mr. Hawking describes has already begun to emerge in several forms, ranging from
U.S. scientists using computers to input algorithms predicting the military strategies of
Islamic extremists to companies such as Boston Dynamics that have successfully built
mobile robots, steadily improving upon each prototype they create.
Mr. Hawking recently joined Elon Musk, Steve Wozniak, and hundreds of others in issuing a
letter unveiled at the International Joint Conference on Artificial Intelligence last month in Buenos Aires, Argentina.
The letter warns that artificial intelligence can potentially be more dangerous than nuclear
weapons.
Elon Musk called the prospect of artificial intelligence "our biggest existential threat" in a
2014 interview with MIT students at the AeroAstro Centennial Symposium. “I’m
increasingly inclined to think that there should be some regulatory oversight, maybe at the
national and international level, just to make sure that we don’t do something very foolish.”
Mr. Musk cites his decision to invest in the Artificial Intelligence firm, DeepMind, as
a means to “just keep an eye on what’s going on with artificial intelligence. I think there is
potentially a dangerous outcome there.”
Microsoft co-founder Bill Gates has also expressed concerns about Artificial
Intelligence. During a Q&A session on Reddit in January 2015, Mr. Gates said, “I am in the
camp that is concerned about super intelligence. First the machines will do a lot of jobs for
us and not be super intelligent. That should be positive if we manage it well. A few decades
after that though the intelligence is strong enough to be a concern. I agree with Elon Musk
and some others on this and don’t understand why some people are not concerned.”
The threats enumerated by Hawking, Musk, and Gates are real and worthy of our
immediate attention, despite the immense benefits artificial intelligence can potentially
bring to humanity. As robot technology increases steadily toward the advancements
necessary to facilitate widespread implementation, it is becoming clear that robots are
going to be in situations that pose a number of courses of action. The ethical dilemma of
bestowing moral responsibilities on robots calls for rigorous safety and preventative
measures that are fail-safe, or the threats are too significant to risk.