
lruhn010

Economics

Description

Should you have a stake in artificial intelligence research

Science fiction offers numerous stories of intelligent machines attempting, or succeeding at, exterminating or enslaving humans, including the “Matrix” and “Terminator” movies. The basis for these stories is that research ends up having cataclysmic and unintended consequences. This is an old theme in literature; “Frankenstein” touches on the same idea. Although in some story narratives the consequences are completely unforeseeable (something that seems very innocent leads to disaster), the level of care exercised generally affects the probability of an accident, and that likely applies here. If it is possible that work on artificial intelligence could lead to the rise of malevolent, intelligent machines, don’t we all have an interest in making sure the research is done in as safe and responsible a manner as possible? Or alternatively, does AI create what can be thought of as a risk externality? A risk externality arises when a risk of harm is created for third parties, even though the harm may never materialize. Drunk driving is a risk externality, because the drunk driver could crash at any time and harm third parties, although not every impaired driver ends up crashing. I think AI also counts as a cataclysmic risk externality, because the harm involved could easily be nationwide or worldwide.

For this assignment I want you to think about this topic and write 2-3 pages on it. Some questions you might want to address include: does risk truly create an externality even when an accident or harm does not happen? If so, should risk externalities be subject to government regulation imposing safety provisions before an accident occurs, or should the party generating the risk be free to operate without government permission or oversight, provided it is financially liable for the losses if the harm occurs? Does the level of potential disaster, the cataclysmic element of the destruction of “life as we know it,” matter here? Does it matter, as you think about this issue, whether the risk of a cataclysmic outcome is “real” or “imaginary”? If so, how can we demonstrate that the “Matrix” or “Terminator” dangers are real, short of the disaster occurring (or Arnold Schwarzenegger appearing through time travel), at which point it is too late to take precautions? Other types of research or activities might pose similar risks, such as genetic research producing a super virus, or nuclear engineering; so if you think you should have decision rights over the conduct of work in AI, how broadly can those rights be extended?



Explanation & Answer

Attached.

Running head: AI AND EXTERNALITIES

AI and Externalities
Student’s Name
Professor’s Name
Course
Date

AI and Externalities

Advances in technology have driven increased attempts, through research, to create machines that possess intelligence enabling them to perform tasks once reserved for human experts. Research in artificial intelligence continuously makes it possible for machines to perform tasks such as cancer diagnosis, or the operation of driverless cars in Washington, District of Columbia (Scherer, 2016). Beyond these explicit benefits, however, research in the field of artificial intelligence is linked to risks to human beings and their interactions. For example, the use of artificial intelligence in the health sector to replace doctors (even at the research/development stage) could be accompanied by fai...

