Analyzing a Logical Fallacy

Description

In this paper, we are asked to analyze a logical fallacy and write an essay about it. The attachment below contains four articles, each of which includes different fallacies. We should choose only one fallacy from a single article and analyze it. The paper prompt is also attached and includes all of the requirements.

Unformatted Attachment Preview

identity crisis. In a sense it’s keeping the brain in a sort of time warp. [ARTICLE 1] Is Facebook Really Causing an Identity Crisis? It is hard to see how living this way on a daily basis will not result in brains, or rather minds, different from those of previous generations. I often wonder whether real conversation in real time may eventually give way to these sanitised and easier screen dialogues, in much the same way as killing, skinning and butchering an animal to eat has been replaced by the convenience of packages of meat on the supermarket shelf. By Shea Bennett August 5, 2011 This week I read with interest the various news reports surrounding comments made by Baroness Greenfield, noted professor of synaptic pharmacology at Lincoln College, Oxford, about how social media is causing what she refers to as an “identity crisis” among its users. The Baroness was, of course, speaking to The Daily Mail, that veritable bastion of British newspaper journalism. She made similar remarks to the paper back in 2009, where she shared her horror about what social networking was doing to the nation’s precious children. The Baroness, who sits in the House of Lords within the British Parliament, and is a life peer of Ot Moor in the county of Oxfordshire, has claimed that the increased focus on the development of Internet friendships has the potential to ‘rewire’ the brain, leaving people craving instant gratification and nullifying their ability to concentrate for long periods of time. Before making these proclamations, the Baroness spent several years immersed in all aspects of social media, building up thousands of likes on her Facebook page and an army of followers on Twitter. She used the platforms to share and digest information, and was mindful that, in the interest of balance, she positioned herself in such a way that her research and findings would be both accurate and fair. She suggested that some Facebook users feel obliged to act like mini celebrities in an effort to attract attention, and only do things in their lives that are “Facebook worthy.” “It’s almost as if people are living in a world that’s not a real world, but a world where what counts is what people think of you or (if they) can click on you,” she said. “Think of the implications for society if people worry more about what other people think about them than what they think about themselves.” Oh, wait. Sorry. She didn’t do any of that. In fact, the Baroness didn’t do anything, except regurgitate the same biased and largely unfounded claptrap to the same downmarket rag. She doesn’t have a Twitter profile and the only presence she has on Facebook is to be found in one of the bog-standard (and, I have to say, eternally irritating) Wikipedialooting community pages. Quite. The Baroness had a few choice words to say about that other social network, too: There’s no new research. I’m not convinced there was any old research. In fact, it wouldn’t surprise me if the Baroness hadn’t actually spoken to the Mail at all, and they’d simply re-written the original story from a slightly different angle. What concerns me is the banality of so much that goes out on Twitter. Why should someone be interested in what someone else has had for breakfast? It reminds me of a small child, “Look at me Mummy, I’m doing this. 
Look at me Mummy I’m doing that.” It’s almost as if they’re in some kind of In the past Greenfield has been heavily criticised for this line of thinking, notably by British science writer Ben Goldacre, who observed that the Baroness’s pronouncements are never supported by 1 accompanying evidence, and that she shows alarming naivety by not being able to predict “that her ‘speculations’ and ‘hypotheses’ will inevitably result in scare stories in the press.” We no longer have to read The New Yorker to read the best, because the best is anywhere and everywhere. Indeed, there’s every chance that the best won’t be found in The New Yorker. Sure, Gladwell’s critique received a ton of attention, but most of that was generated by Twitter. And that’s the oldest trick in the book – if you want to benefit from the most publicity, criticize the thing that’s the most public. On Goldacre, the Baroness replied that he was “like the people who denied that smoking caused cancer.” Yes: this is the kind of monster that we’re dealing with. Of course, Greenfield is hardly alone in exposing her almost total disregard for the merits of Facebook et al. Last year, pseudo social intellectual Malcolm Gladwell poopooed the net benefits of social media, notably in its potential as a force for change. “Why does it matter who is eating whose lunch on the Internet? Are people who log on to their Facebook page really the best hope for us all?” he opined. This is the same Malcolm Gladwell who has openly admitted that he doesn’t have much use for the internet. These are the same fears that are paralyzing the newspaper industry. But news isn’t going away – it’s simply the way that we consume news that is changing, and anyone who resists will (and must) be left behind. After all, it’s less than a decade ago since we were all walking around listening to a handful of tracks on our portable CD players. Then Steve Jobs gave us the iPod, and things were never the same. My 10year old son doesn’t even know what a CD is. And why would he? They were rubbish, and couldn’t hold a candle to the sheer, majestic and, I dare say, unmatched glory of vinyl. The medium has also made marketing maestro Seth Godin look uncomfortably out of touch, and even a little bit hypocritical. “I don’t use Twitter. It’s not really me,” he has said. “I also don’t actively use Facebook, and I’m not adding any friends, though I still have an account for the day when I no doubt will.” Strange, then, that he seems to make observations about the subject in every second or third article that he publishes on his blog. [ARTICLE 2] Pro and Con: Should Gene Editing Be Performed on Human Embryos? All of this opinion, of course (because that’s all that it is), comes from the same place: fear. Fear of the unknown. Of change. Fear that what you’ve been doing and saying and preaching and selling your entire life no longer works. Fear that the world has passed you by. That you don’t get it. That you’ve turned into your parents. How did that happen? When did you suddenly become old-fashioned? The most potent use of the new gene editing technique CRISPR is also the most controversial: tweaking the genomes of human embryos to eliminate genes that cause disease. We don’t allow it now. Should we ever? It’s greed, as well. Ironically, for attention, which can be so leveraged through social channels, but it’s also greed for respect. 
The Greenfields and Gladwells of this world cannot so easily be assured of this when the internet, and social media in particularly, dramatically levels the playing field, allowing the thoughts and ideas of anyone to rise to the very top – if they’re good enough. PRO: RESEARCH ON GENE EDITING IN HUMANS MUST CONTINUE By John Harris In February of this year, the Human Fertilization and Embryology Authority in the United Kingdom approved a request by the Francis Crick Institute in 2 consent,” he has said, constitute “strong arguments against engaging in” gene editing. London to modify human embryos using the new gene editing technique CRISPR-Cas9. This is the second time human embryos have been employed in such research, and the first time their use has been sanctioned by a national regulatory authority. The scientists at the Institute hope to cast light on early embryo development—work which may eventually lead to safer and more successful fertility treatments. This makes no sense at all. We have literally no choice but to make decisions for future people without considering their consent. All parents do this all the time, either because the children are too young to consent, or because they do not yet exist. George Bernard Shaw and Isadora Duncan knew this. When, allegedly, she said to him “why don’t we make a baby together … with my looks and your brains it cannot fail” she was proposing a deliberate germline determining decision in the hope of affecting their future child. Shaw’s more sober response—“Yes but what if it has my looks and your brains!”—identifies a different possible, but from the child’s perspective equally nonconsensual, outcome. Rightly, neither Shaw nor his possible partner thought their decision needed to wait for the consent of the resulting child. The embryos, provided by patients undergoing in vitro fertilization, will not be allowed to develop beyond seven days. But in theory—and eventually in practice—CRISPR could be used to modify disease-causing genes in embryos brought to term, removing the faulty script from the genetic code of that person’s future descendants as well. Proponents of such “human germline editing” argue that it could potentially decrease, or even eliminate, the incidence of many serious genetic diseases, reducing human suffering worldwide. Opponents say that modifying human embryos is dangerous and unnatural, and does not take into account the consent of future generations. Who is right? Needless to say, parents and scientists should think responsibly, based on the best available combination of evidence and argument, about how their decisions will affect future generations. However, their decision-making simply cannot include the consent of the future children. Let’s start with the objection that embryo modification is unnatural, or amounts to playing God. This argument rests on the premise that natural is inherently good. But diseases are natural, and humans by the millions fall ill and die prematurely—all perfectly naturally. If we protected natural creatures and natural phenomena simply because they are natural, we would not be able to use antibiotics to kill bacteria or otherwise practice medicine, or combat drought, famine, or pestilence. The health care systems maintained by every developed nation can aptly be characterized as a part of what I have previously called “a comprehensive attempt to frustrate the course of nature.” What’s natural is neither good nor bad. 
Natural substances or natural therapies are only better that unnatural ones if the evidence supports such a conclusion. Finally, there’s the argument that modifying genomes is inherently dangerous because we can’t know all the ways it will affect the individual. But those who fear the risks of gene editing don’t take into account the inherent dangers in the “natural” way we reproduce. Two-thirds of human embryos fail to develop successfully, most of them within the first month of pregnancy. And every year, 7.9 million children—6 percent of total births worldwide—are born with a serious defect of genetic or partially genetic origin. Indeed so risky is unprotected sex that, had it been invented as a reproductive technology rather than found as part of our evolved biology, it is highly doubtful it would ever have been licensed for human use. Certainly we need to know as much as possible about the risks of gene-editing human embryos before such research can proceed. But when the suffering and death caused by such terrible single- The matter of consent has been raised by Francis Collins, director of the National Institutes of Health. “Ethical issues presented by altering the germline in a way that affects the next generation without their 3 gene disorders as cystic fibrosis and Huntington’s disease might be averted, the decision to delay such research should not be made lightly. Just as justice delayed is justice denied, so, too, therapy delayed is therapy denied. That denial costs human lives, day after day. embryos or gametes to produce a child—and in some 40 countries, passed laws against it. The issue of human germline modification stayed on a slow simmer during the first decade of the 21st century. But it roared to a boil in April 2015, when researchers at Sun Yat-sen University announced they had used CRISPR to edit the genomes of nonviable human embryos. Their experiment was not very successful in technical terms, but it did focus the world’s attention. CON: DO NOT OPEN THE DOOR TO EDITING GENES IN FUTURE HUMANS By Marcy Darnovsky This is not an entirely new question. The prospect of creating genetically modified humans was openly debated back in the late 1990s, more than a decade and a half before CRISPR came on the scene and several years before the human genome had been fully mapped. In December 2015, controversy about using CRISPR to produce children was a key agenda item at the International Summit on Human Gene Editing organized by the national science academies of the United States, the United Kingdom, and China. Nearly every speaker agreed that at present, making irreversible changes to every cell in the bodies of future children and all their descendants would constitute extraordinarily risky human experimentation. By all accounts, far too much is unknown about issues including off-target mutations (unintentional edits to the genome), persistent editing effects, genetic mechanisms in embryonic and fetal development, and longer-term health and safety consequences. It wasn’t long before we saw provocative headlines about designer babies. Princeton mouse biologist Lee Silver, writing in Time magazine in 1999, imagined a fertility clinic of the near future that offered “Organic Enhancement” for everyone, including people with “no fertility problems at all.” He even wrote the ad copy: “Keep in mind, you must act before you get pregnant. Don't be sorry after she's born. 
This really is a once-in-a-lifetime opportunity for your child-to-be.” Conversations about putting new gene editing tools into fertility clinics need to begin with an obvious but often overlooked point: By definition, germline gene editing would not treat any existing person’s medical needs. At best, supporters can say that it might re-weight the genetic lottery in favor of different outcomes for future people—but the unknown mechanisms of both CRISPR and human biology suggest that unforeseeable outcomes are close to inevitable. The gene editing tool known as CRISPR catapulted into scientific laboratories and headlines a few short years ago. Fast on its heels came the reemergence of a profoundly consequential controversy: Should these new techniques be used to engineer the traits of future children, who would pass their altered genes to all the generations that follow? During the same millennial shift, policymakers in dozens of countries came to a very different conclusion about the genetic possibilities on the horizon. They wholeheartedly supported gene therapies that scientists hoped (and are still hoping) can safely, effectively, and affordably target a wide a range of diseases. But they rejected human germline modification—using genetically altered Beyond technical issues are profound social and political questions. Would germline gene editing be justifiable, in spite of the risks, for parents who might transmit an inherited disease? It’s certainly not necessary. Parents can have children unaffected by the disease they have or carry by using thirdparty eggs or sperm, an increasingly common way to form families. Some heterosexual couples may hesitate to use this option because they want a child 4 who is not just spared a deleterious gene in their lineage, but is also genetically related to both of them. They can do that too, with the embryo screening technique called pre-implantation genetic diagnosis (PGD), a widely available procedure used in conjunction with in vitro fertilization. In opening the door to one kind of germline modification, we are likely opening it to all kinds. Permitting human germline gene editing for any reason would likely lead to its escape from regulatory limits, to its adoption for enhancement purposes, and to the emergence of a market-based eugenics that would exacerbate already existing discrimination, inequality, and conflict. We need not and should not risk these outcomes. PGD itself raises social and ethical concerns about what kind of traits should be selected or de-selected. These questions are particularly important from a disability rights perspective (which means they’re important for all of us). But screening embryos for disease is far safer for resulting children than engineering new traits with germline gene editing would be. Yet this existing alternative is often omitted from accounts of the controversy about gene editing for reproduction. [ARTICLE 3] Nicholas Negroponte: Internet Access is a Human Right By BIG THINK EDITORS September 2014 It is true that a few couples—a very small number—would not be able to produce unaffected embryos, and so could not use PGD to prevent disease inheritance. Should we permit germline gene editing for their sake? If we did, could we limit its use to cases of serious disease risk? What constitutes a human right? Abstractly, a human right is one that is inherent and inalienable to all human beings. 
They are the elements of social life any individual should reasonably expect to be granted solely for the fact that they are alive. According to the Universal Declaration of Human Rights, there exist thirty such elements ranging from the Right to Equality to Freedom of Religion to the Right to Rest and Leisure. Some are more abstract than others, some more integral to survival than the rest. Near the end of the list is the Right to Education, which is the focus of Big Think expert Nicholas Negroponte's recent interview, featured today on this site and embedded below: From a policy perspective, how would we draw the distinction between a medical and enhancement purpose for germline modification? In which category would we put short stature, for example? We know that taller people tend to earn more money. So do people with paler skins. Should arranging for children with financially or socially “efficient” varieties of height and complexion be considered medical intervention? Think back to the hypothetical fertility clinic offering “Organic Enhancement” as a “once-in-alifetime opportunity for your child-to-be.” Think back to the 1997 movie Gattaca, about a society in which the genetically enhanced— merely perceived to be biologically superior—are born into the physical reality of those whom we might now call the one percent. These are fictional accounts, but they are also warnings of a possible human (or not so human) future. The kinds of social changes they foresee, once set in motion, could be as difficult to reverse as the genetic changes we’re talking about. You'll notice that Negroponte employs the transitive property to include an addendum to the Right to Education. In the 21st century access to the internet is inextricably linked to a proper, thorough education. Therefore, the internet is, or should be considered, a human right: And Internet access is such a fundamental part of learning that by extension it is almost certainly a human right and within a very short period of time it will be particularly because of those who don’t have schools, those who have to do their learning on their 5 own. And for them Internet access is access to other people. It’s not so much the knowledge. It’s not the Wikipedia but it’s the connection to others, particularly kids to other kids – peer to peer learning. So yes, Internet access will be a human right. At the moment it’s edging up to it and probably not everybody agrees but they will shortly. views in a speech before the Internet Innovation Alliance, a coalition of businesses and nonprofits. O'Rielly described five "governing principles" that regulators should rely on, including his argument that Internet access should not be considered a necessity. Here's what he said: It is important to note that Internet access is not a necessity in the day-to-day lives of Americans and doesn’t even come close to the threshold to be considered a basic human right. I am not in any way trying to diminish the significance of the Internet in our daily lives. I recognized earlier how important it may be for individuals and society as a whole. But, people do a disservice by overstating its relevancy or stature in people’s lives. People can and do live without Internet access, and many lead very successful lives. Instead, the term “necessity” should be reserved to those items that humans cannot live without, such as food, shelter, and water. 
It's a fascinating argument that would no doubt ruffle the feathers of those who believe a list of essential human rights should be kept brief to preserve its magnitude. But if the avenue to selfbetterment is one that mustn't ever be obstructed, certainly the internet resides there. Negroponte goes on to propose and posit various ways to help people living in remote parts of the world obtain web access by way of geostationary satellites. It would "only" cost a couple billion dollars, which sounds like a lot but Negroponte tosses out the argument that it's less than what the world routinely wastes for more selfish endeavors. If the U.N. is really that dedicated to protecting and promoting human rights, they may want to look into Negroponte's altruistic proposal. It is even more ludicrous to compare Internet access to a basic human right. In fact, it is quite demeaning to do so in my opinion. Human rights are standards of behavior that are inherent in every human being. They are the core principles underpinning human interaction in society. These include liberty, due process or justice, and freedom of religious beliefs. I find little sympathy with efforts to try to equate Internet access with these higher, fundamental concepts. From a regulator’s perspective, it is important to recognize the difference between a necessity or a human right and goods such as access to the Internet. Avoiding the use of such rhetorical traps is wise. [ARTICLE 4] Internet access “not a necessity or human right,” says FCC Republican The FCC is required by Congress to expand broadband access. By Jon Brodkin June 26, 2015 O'Rielly's other governing principles are that "the Internet cannot be stopped," that we should "understand how the Internet economy works" and "follow the law; don't make it up," and that "the benefits of regulation must outweigh the burdens." O'Rielly was nominated to the commission by President Barack Obama and confirmed by the Senate; the president nominates both Democratic and Republican commissioners, ensuring that the ruling party maintains a 3-2 advantage. Federal Communications Commission member Michael O’Rielly yesterday argued that "Internet access is not a necessity or human right" and called this one of the most important "principles for regulators to consider as it relates to the Internet and our broadband economy." O'Rielly, one of two Republicans on the Democratic-majority commission, outlined his 6 While O'Rielly is certainly correct that one can live without Internet access but not food or water, the FCC is essentially required by Congress to act on the presumption that all Americans should have Internet access. The Telecommunications Act of 1996 requires the FCC to "encourage the deployment on a reasonable and timely basis of advanced telecommunications capability to all Americans" by implementing "price cap regulation, regulatory forbearance, measures that promote competition in the local telecommunications market, or other regulating methods that remove barriers to infrastructure investment." other communications services but no strict requirement that everyone in the US be offered broadband. Availability varies widely throughout the country, with many rural customers lacking fast, reliable Internet service. World Wide Web inventor Tim Berners-Lee says that Web access should be considered a human right. "Access to the Web is now a human right," BernersLee said in a 2011 speech. "It's possible to live without the Web. It's not possible to live without water. 
But if you've got water, then the difference between somebody who is connected to the Web and is part of the information society, and someone who (is not) is growing bigger and bigger." The FCC is required to determine on a regular basis whether broadband is being extended to all Americans "in a reasonable and timely fashion" and must "take immediate action to accelerate deployment" if it finds this isn't happening. The last time the FCC did this was in January of this year; O'Rielly voted against the FCC's conclusion that broadband isn't being deployed quickly enough and that the definition of broadband should be changed to support higher-bandwidth applications. A United Nations report in 2011 said disconnecting people from the Internet is a human rights violation. Vint Cerf, who co-created the networking technology that made the Internet possible, wrote that Internet access is not a human right, arguing that "technology is an enabler of rights, not a right itself... at one time if you didn’t have a horse it was hard to make a living. But the important right in that case was the right to make a living, not the right to a horse. Today, if I were granted a right to have a horse, I’m not sure where I would put it." O'Rielly and Wheeler have disagreed on several other votes affecting broadband availability and the terms under which it's offered. O'Rielly cast unsuccessful, dissenting votes against Wheeler's plan to reclassify Internet providers as common carriers and impose net neutrality rules, against Wheeler's plan to overturn state laws that protect Internet providers from municipal competition, and against Wheeler's plan to use the LifeLine phone service subsidy program to subsidize broadband for poor people. FCC Chairman Tom Wheeler said in a speech today that "broadband should be available to everyone everywhere." Wheeler: If slow speeds are enough, why do you heavily promote faster service? The FCC was created in 1934 with the mandate to ensure universal access to telephone service at reasonable prices. Today there is a "Universal Service Fund" to subsidize access to Internet and 7 ENGLISH 124 PAPER #2 RECONSTRUCTING FALLACIES (200 points—Peer Review Draft Due November 14; Revised Draft Due November 21) OVERVIEW In this unit, we have discussed the nature of logical fallacies and their role in everyday argumentation, especially online. Likewise, we have discussed a variety of issues related to technological progress, human nature, and internet access. Choose a logical fallacy from one of the texts for this unit OR one that you have discovered in your own research that pertains to the issues we have discussed. If you choose to find your own, the fallacy should be in some “official” text: an op-ed, a news article, a claim made in an interview, or in an organization’s text for dissemination (i.e. on their website, in a leaflet, in a video, etc.) Construct a version of the argument in which the fallacy is removed and replaced with a rhetorically valid claim that relies on relevant and sound reasoning and evidence rather than flawed logic. Make sure to avoid editorializing or otherwise including your own feelings on the topic. To use an analogy, you’re a bit like a mechanic here: you don’t care about the make and model of the customer’s vehicle; you are only repairing the carburetor. Thus, you should not alter the substance of the argument, but simply help to strengthen it by removing and replacing the fallacy with a valid claim. 
If the argument in which the fallacy exists is, at its center, ethically dubious or morally challenged, explain why it cannot be repaired by simply removing and replacing the fallacy.

INSTRUCTIONS
Write a 1400-1600 word paper in which you:
- Summarize and explicate the argument in which the fallacy is used.
- Analyze the fallacy, centering on why and how it weakens the argument, as well as what the implications of that weakening might be (social, cultural, political, etc.). Make sure you explicitly identify and explicate the fallacy (e.g. appeal to fear, appeal to authority, etc.). It is possible that a single claim could contain multiple fallacies; if so, analyze each in turn. This work is the most important component of your paper.
- You must use a minimum of four sources: (1) The source of the fallacy itself; and (3) Other sources that help put the fallacy or the argument in which it sits in context. All sources must be properly integrated into the paper in MLA format (search 'Quoting, Paraphrasing, and Summarizing' on the OWL for more information). These sources must be literary, journalistic, or academic in nature and not subject to serious challenges to their credibility. I recommend using the Grossmont College Library "Gateway to Research."
- You may write more than 1600 words, but not less than 1400.

Explanation & Answer

Attached.

Outline
Analyzing a Logical Fallacy
Fallacy: Internet Access is a Human Right
I. Analysis
II. Works Cited


Name
Institution
Tutor
Date
Fallacy: Internet Access is a Human Right
In the article "Internet Access is a Human Right," Nicholas Negroponte argues that internet access is, and should be recognized as, a human right. Referring to the definition of human rights, he notes that such rights are inalienable and inherent to all human beings: they are elements of social life that every individual should expect to be granted simply by virtue of being alive (Negroponte 1). However, some rights are more integral to survival than others. In particular, Negroponte relies on the right to education in defending his position that internet access is indeed a human right. Since access to education is a human right, and since access to the internet is an essential part of the learning process, he argues that internet access should likewise be classified as a human right. In his view, those who do not have schools, or who must do their learning on their own, need the internet to do so. Consequently, internet access becomes as important as the learning itself and is therefore, in his view, a human right.
Generally, a superficial examination of the issue would suggest that internet access is indeed a human right, just as Negroponte claims. His argument makes a convincing case that internet access is important to the acquisition of education in the modern era, and given that education is a basic human right, we seem obliged to recognize internet access as a basic human right as well (Negroponte 1). Before accepting this conclusion, however, we need to examine one major issue relating to the basic characteristics of human rights.

According to Cullen (312), one basic characteristic of human rights is that they are universal and inalienable, meaning that everyone across the globe is entitled to them. To some extent, the universal and inalienable nature of human rights makes internet access equally important, since no one should be prevented from accessing the internet. One major dilemma, however, lies in the fact that the internet remains inaccessible in certain regions of the world, and as such, the individuals in those regions are denied that right (Akrivopoulou 12). One question worth asking is whether legal m...


