English Essay


zngurjx12

Humanities

Description

I have already written my essay draft; I just need it corrected and properly set up.

Directions are in the attached files.

Unformatted Attachment Preview

Checklist for Essays – Abbreviated

Use this version for pasting at the bottom of your essays.

• My paper has 1” margins.
• My paper is double-spaced throughout. (There are no extra line spaces between paragraphs.)
• My paper includes a correct MLA heading.
• My paper is written in Times New Roman 12-point font.
• Each page includes my last name, a space, and the page number in the header at the upper right-hand corner of each page.
• When required, my paper includes a correctly formatted Works Cited page.
• Authors are referred to by their full, correctly spelled names the first time.
• Authors are referred to by their correctly spelled last names thereafter.
• If introduced, an author’s profession or area of expertise is stated.
• Titles of authors’ works are stated and correctly punctuated/capitalized.
• Unnecessary words are eliminated when introducing authors and their works.
• The titles of shorter works (essays, short videos, poems, chapters in books) are placed in quotation marks (e.g., “Plate Tectonics”). The first letter of each key word in the title is capitalized.
• The titles of longer works (books, journal titles, full-length films) are italicized (e.g., Anatomy). The first letter of each key word in the title is capitalized.
• This text is written to an academic audience outside of this class. The text does not assume that my audience has read the same things I have, and the essay fully explains key concepts and provides context.
• Throughout the essay, statements are made that reflect my judgment based on my interpretation of evidence and my reasoning. They are not based on personal belief and opinion.
• My statements demonstrate that I have arrived at conclusions that I am committed to, recognizing that my audience wants to read works by authors who take clear stands and support them.
• I use the present tense throughout the essay when referring to source texts, unless the source refers to something that actually happened in history or is expected to happen in the future.
• I use rhetorically accurate verbs (e.g., argue, assert, claim, examine, explain, identify) to characterize the statements I and other authors make. I don’t use “says” or “mentions.”
• I don’t use “I believe” or “In my opinion.” Instead, I refer to my “judgments” based on evidence and sound reasoning.
• A citation follows every paraphrase, summary, or direct quotation. Each citation is placed at the end of a sentence. Each citation is placed in parentheses. EXAMPLE: (Chan 7).
• Generally, in-text citations are formatted as follows: (AuthorLastName Page#).
• I’ve used the In-Text Citation Cheat Sheet or another authoritative source for preparing my in-text citations.
• Periods are placed AFTER the closing parenthesis of each citation, not after the sentence.
• There are no commas or other punctuation between the author and the page number in the basic (Author #) in-text citation.
• Every direct quotation is surrounded by quotation marks and accurately/exactly records the words from its source text.
• Every quotation is introduced by (or ends with) a signal phrase.
• Every quotation is explained in my own words.
• Quotations are not placed at the very beginning or end of a paragraph.
• Quotations flow grammatically with the rest of the text.
• Omitted words are shown using ellipses (but these are only used in the middle of a quotation). I do not use ellipses at the beginnings or ends of quotations when omitting words.
• ‘Single quotation marks’ are used when a title or quote is placed inside of “double quotation marks.”
• I have checked my work and corrected awkward phrasing, wordiness, grammar errors, and incorrect words (often typos).
• I have read my work aloud to catch additional awkward phrasing, wordiness, grammar errors, and incorrect words (often typos).
• I have placed periods and commas inside of quotation marks when they are directly next to each other. (Remember that sentences with in-text citations have no periods or commas until after the in-text citation.)
• I have eliminated “etc.” and replaced it with “for example” or “including.”
• I don’t use the pronoun “you” in this paper.
• I don’t make unnecessary shifts between first-, second-, and third-person pronouns in my sentences and paragraphs. Students often shift wildly from we, you, it, and they when talking about the same thing, creating confusion.
• I have corrected shifts in verb tense (e.g., past, present, future, conditional).
• I have checked my sentences for singular/plural agreement and corrected any errors.
• I’ve addressed all aspects of the prompt.

• Submit the final revision of your 4-5 page essay that fulfills the prompt, has been MLA formatted/cited, and has been revised to ensure that all checklist items have been addressed.
• Review the assignment sheet carefully before you begin developing your draft.
• Develop and defend a strong and focused main claim. The assignment sheet includes many questions and suggestions to help you think about this, as does the handout, “Developing Claims.”
• Include at least ONE direct quotation and ONE paraphrase from different provided sources in each body paragraph of the essay.
• Include a counterargument from an actual source and rebut it.
• Include your own experience with the topic you’ve selected.
• Note: A Works Cited list is only required if you’ve used sources I haven’t provided. Outside sources are not required.
• Final Submission: Attach a Word file (preferable) or PDF file that is properly MLA formatted.
• Paste the completed abbreviated checklist at the bottom of your essay or attach it as a separate document. The abbreviated version is more succinct and doesn’t include the explanations the full version does. Check off each item after carefully addressing it.
• Note that all final essays will be submitted to SafeAssign, the college’s plagiarism detection tool.
• Deductions: 5-10 point deduction if checklist items haven’t been checked off and completed.
• 5 point deduction for consistent MLA in-text citation and paper formatting issues. You are required to use a resource to ensure your in-text citations are accurate.
• 5-10 point deduction for not meeting the minimum page count. The minimum page count must include text to the bottom of the page.

Resources that I wrote the essay from:
• https://www.youtube.com/watch?v=t7Xr3AsBEK4&feature=youtu.be
• You Are Not a Gadget
• “How Social Media Is a Toxic Mirror”

Instructor comment: The essay is strong, although the counterargument rebuttal could use some development. I am concerned about its origins. The thinker who produced the draft could have written a much, much stronger peer review.

Connected, but Alone

A proper exposition of the context of this discussion must appreciate the advances the human race has made and where it stands right now. The twenty-first century is a particularly interesting time to be a human being. For the first time in recorded history, a species holds a remarkable degree of autonomy over itself and its environment.
Homo sapiens sapiens has sophisticated methods of communication. It has successfully tamed every ecosystem on the planet. Human beings harness energy and use it to travel and to engage in industry. The most remarkable aspect of this evolution is that it is not biological; it is technological. In the last four decades, individuals have improved the methods of communication exponentially. Early inventors like Alexander Graham Bell created the telephone, but that is ancient history compared to the most basic tools of communication today. The mobile phone is ubiquitous. Technology has moved so far beyond smoke signals, horns, messenger birds, and couriers that audio-visual channels of communication are now readily available to the public.

The thesis of this paper, however, is not to rehash what must be common knowledge about the advances in human interaction. It is to show that information and communication technology (ICT) has not brought people closer together; its impact has been the exact opposite. It is sad that anyone would ever have to say this, tragic even, because ICT came about to make communication easier. It is something of a paradox that technology would take away the humanity of something so intrinsic to normal behavior. Sherry Turkle argues that we are “connected, but alone.” The argument is that technology has brought people so close together that it has now begun to isolate them. ICT has narrowed the number of people a reasonable person comes into contact with. People are hived off from each other based on their interests, hobbies, and likes. The currently prevailing rationale is that tailoring content to the needs of each person will help them enjoy the vast amounts of information on the internet. The truth is that people lose their diversity. The constant flux of similar details makes individuals homogenous.

ICT has become so commercialized that its primary objective is no longer to connect people but to get messages across. The commercial utilities of mass media have hijacked communication. Today, advertisements seem to matter more to these service providers than the actual volume and quality of communication. The exchange of human emotions and ideas is tempered by grouping mechanisms such as social media groups, subscriptions, and paid access, which make it difficult for people to come together. One can say that the McDonaldization of society has seeped into how we interact with each other.

Turkle’s TED Talk excoriates people for their use of the internet. At the very beginning of her talk, she recounts her first appearance on the TED stage, years earlier, when her daughter, then five years old, sat in the front row of the audience. Before this later talk, her daughter instead sent her a text: “Mom, you are gonna rock!” (1:39). While Turkle is categorical that the emotions between her and her daughter have not changed, the way they show support for each other has. She notes that, suddenly, it seemed that the technology had overtaken the human spirit. “Technology is taking us where we don’t wanna go,” she warns. While she still appreciates the advantages that ICT has to offer, she is wary of how central these devices have become to human life. They have become such a part of everyday life that they threaten to usurp human emotion. They change people intrinsically. For instance, people text or email during board meetings.
A few years ago, that would have seemed odd, wrong even. Today, Turkle sarcastically speaks of a new skill: texting while maintaining eye contact. That shows just how ingrained mobile telephony has become in society. Children complain that their parents no longer have enough time for them because technology allows emails to be sent and received at the dinner table (3:43). Turkle calls it being “alone together.” People have immersed themselves in the virtual world so much that regular interaction suffers for it.

The virtual plane has made it possible for individuals to experience things that were previously impossible. It opens up great possibilities, and the media can create “ideals.” These are particularly dangerous to the impressionable mind. Rachel Simmons, a leadership development specialist at Smith College, argues that movies, magazines, and television enforce a virtual set of ideals on young people. They carry these impressions into the real world, where they are inapplicable at best, and that warps how their identities take shape. Studies show that the body image and self-esteem of the average female college student correlate directly with how her peers view her on visual social media. Instagram and Snapchat give teens the means to gain the approval of others and to compare themselves with others. One study found that female college students who actively sought the approval of others on Facebook were more likely to link their sense of self-worth to their looks. A sixteen-year-old confessed to the growing blur between a simple like on social media and the feeling of acceptance: “It affects teens subconsciously just seeing how many likes they get and how much attention they get just for how they look” (Simmons). That points to a dangerous shift in perception. The real world values people by their character, their deeds, and their relationships. A generation that relies on standards as shallow as appearance is bound to lose the plot when the exigencies of real life confront it (Simmons).

While many people seem to gain fulfillment from pontificating on the adverse effects of ICT, one must acknowledge that technology only mirrors society. Society is driven by primal needs, even though many would rather hide behind civility (Lanier 34). The wellness industry, for one, is a reflection of this. Simmons notes that body-image worship has become so prevalent because the quantity of readily available information has multiplied. However, even without the internet, people would still want well-toned bodies, regardless of the means of achieving them. The point is that blaming the internet for people’s primal desires is escapist; it avoids the real problem.

This counterargument seems plausible on its face. What one must keep in mind, however, is that advances in technology make it easier for these primal needs to be nurtured. The internet and the proliferation of mobile phones have created industries that thrive on primal human desires. They normalize hedonism, and that never ends well. While it is true that ICT mirrors society, it also magnifies it. Industries use hyperbole to manipulate the audience toward a commercial end. That makes the technology as culpable as the advertisers who ply the public with unrealistic, Photoshop-enhanced images. Teenagers are losing their self-worth at the altar of likes and approval on the internet instead of focusing on building their character and loving themselves for who they are.
These are just some of the ways in which technology is stealing the humanity out of human interaction.

Works Cited

Lanier, Jaron. You Are Not a Gadget: A Manifesto. 1st ed. New York: Alfred A. Knopf, 2010. Print.

Simmons, Rachel. “How Social Media Is a Toxic Mirror.” Time 19 Aug. 2016. Web.

Turkle, Sherry. “Connected, but Alone?” TED, 2012. Web. 8 Oct. 2017.
How Social Media Is a Toxic Mirror
Time Magazine, Ideas Column, Aug. 19, 2016
Rachel Simmons

Rachel Simmons is a leadership development specialist at Smith College and the author of Odd Girl Out and The Curse of the Good Girl.

We’ve long understood that movies, magazines and television damage teens’ body image by enforcing a “thin ideal.” Less known is the impact of social media on body confidence. With the rapid aging down of smart phone ownership, most parents spend “digital parenting” time on character coaching, making sure their kids think before they post and refrain from cyberbullying. For at least a decade, educators like me have argued that social media’s biggest threat was its likeness to a bathroom wall, letting teens sling insults with the recklessness that comes only with anonymity.

Not anymore. Social media has also become a toxic mirror.

Earlier this year, psychologists found robust cross-cultural evidence linking social media use to body image concerns, dieting, body surveillance, a drive for thinness and self-objectification in adolescents. Note: that doesn’t mean social media cause the problems, but that there’s a strong association between them.

Visual platforms like Facebook, Instagram and Snapchat deliver the tools that allow teens to earn approval for their appearance and compare themselves to others. The most vulnerable users, researchers say, are the ones who spend most of their time posting, commenting on and comparing themselves to photos. One study found that female college students who did this on Facebook were more likely to link their self-worth to their looks. Interestingly, while girls report more body image disturbance and disordered eating than boys, studies have shown both can be equally damaged by social media.

And thanks to an array of free applications, selfie-holics now have the power to alter their bodies in pictures in a way that’s practically on par with makeup and other beauty products. If the Internet has been called a great democratizer, perhaps what social media has done is let anyone enter the beauty pageant.
Teens can cover up pimples, whiten teeth and even airbrush with the swipe of a finger, curating their own image to become prettier, thinner and hotter. All this provides an illusion of control: if I spend more time and really work at it, I can improve at being beautiful. “I don’t get to choose how I’m going to leave my apartment today,” one young woman told me. “If I could, my body would look different. But I can choose which picture makes my arms look thinner.”

But invariably, the line between a “like” and feeling ranked becomes blurred. “I think it affects teens subconsciously just seeing how many likes they get and how much attention they get just for how they look,” one 16-year-old told me.

What teens share online is dwarfed by what they consume. Pre-Internet, you had to hoof it to the grocery store to find a magazine with celebrity bodies—or at least filch your mother’s copy from the bathroom. Now the pictures are as endless as they are available. Teens can spend hours fixating on the toned arms or glutes of celebrities, who hawk their bodies as much as their talent.

The meteoric rise of the “wellness” industry online has launched an entire industry of fitness celebrities on social media. Millions of followers embrace their regimens for diet and exercise, but increasingly, the drive for “wellness” and “clean eating” has become stealthy cover for more dieting and deprivation. This year, an analysis of 50 so-called “fitspiration” websites revealed messaging that was indistinguishable, at times, from pro-anorexia (pro-ana) or “thinspiration” websites. Both contained strong language inducing guilt about weight or the body, and promoted dieting, restraint and fat and weight stigmatization. Writing in Vice, 24-year-old Ruby Tandoh recounted how a focus on “healthy” and “clean” eating and “lifestyle” enabled her to hide her increasingly disordered eating and deflect concerned peers. “I had found wellness,” she wrote. “I was not well.”

Many teens are media-literate about movies and magazines; they take in digitally altered images with a critical eye. Less clear is how social media literate they are. The older adolescents I work with often shrug off conversations about the perils of social media with a “duh” or “I know that already.” That doesn’t mean they’re not listening, or feeling worried that their bodies don’t measure up. (Besides, this population is hardly famous for gushing gratitude for parental advice.)

So what can parents do? Ask teens their opinion of the ways people modify their own appearance online: Why do people do it? What do they gain, and from whom? Sometimes just naming a feeling as normal can make a young adult feel less alone. It never hurts to tell your teen they matter more than their looks. As they peer into the mirror on the screen, a good old-fashioned “I love you exactly as you are” may be more timely than ever.

This book is dedicated to my friends and colleagues in the digital revolution. Thank you for considering my challenges constructively, as they are intended. Thanks to Lilly for giving me yearning, and Ellery for giving me eccentricity, to Lena for the mrping, and to Lilibell, for teaching me to read anew.

CONTENTS PREFACE PART ONE What is a Person? Chapter 1 Missing Persons Chapter 2 An Apocalypse of Self-Abdication Chapter 3 The Noosphere Is Just Another Name for Everyone’s Inner Troll PART TWO What Will Money Be?
Chapter 4 Digital Peasant Chic Chapter 5 The City Is Built to Music Chapter 6 The Lords of the Clouds Renounce Free Will in Order to Become Infinitely Lucky Chapter 7 The Prospects for Humanistic Cloud Economics Chapter 8 Three Possible Future Directions PART THREE The Unbearable Thinness of Flatness Chapter 9 Retropolis Chapter 10 Digital Creativity Eludes Flat Places Chapter 11 All Hail the Membrane PART FOUR Making The Best of Bits Chapter 12 I Am a Contrarian Loop Chapter 13 One Story of How Semantics Might Have Evolved PART FIVE Future Humors Chapter 14 Home at Last (My Love Affair with Bachelardian Neoteny) Acknowledgments 3 Preface IT‟S EARLY in the twenty-first century, and that means that these words will mostly be read by nonpersons—automatons or numb mobs composed of people who are no longer acting as individuals. The words will be minced into atomized search-engine keywords within industrial cloud computing facilities located in remote, often secret locations around the world. They will be copied millions of times by algorithms designed to send an advertisement to some person somewhere who happens to resonate with some fragment of what I say. They will be scanned, rehashed, and misrepresented by crowds of quick and sloppy readers into wikis and automatically aggregated wireless text message streams. Reactions will repeatedly degenerate into mindless chains of anonymous insults and inarticulate controversies. Algorithms will find correlations between those who read my words and their purchases, their romantic adventures, their debts, and, soon, their genes. Ultimately these words will contribute to the fortunes of those few who have been able to position themselves as lords of the computing clouds. The vast fanning out of the fates of these words will take place almost entirely in the lifeless world of pure information. Real human eyes will read these words in only a tiny minority of the cases. And yet it is you, the person, the rarity among my readers, I hope to reach. The words in this book are written for people, not computers. I want to say: You have to be somebody before you can share yourself. 1 PART ONE What is a Person? 2 CHAPTER 1 Missing Persons SOFTWARE EXPRESSES IDEAS about everything from the nature of a musical note to the nature of personhood. Software is also subject to an exceptionally rigid process of “lock-in.” Therefore, ideas (in the present era, when human affairs are increasingly software driven) have become more subject to lock-in than in previous eras. Most of the ideas that have been locked in so far are not so bad, but some of the so-called web 2.0 ideas are stinkers, so we ought to reject them while we still can. Speech is the mirror of the soul; as a man speaks, so is he. PUBLILIUS SYRUS Fragments Are Not People Something started to go wrong with the digital revolution around the turn of the twenty-first century. The World Wide Web was flooded by a torrent of petty designs sometimes called web 2.0. This ideology promotes radical freedom on the surface of the web, but that freedom, ironically, is more for machines than people. Nevertheless, it is sometimes referred to as “open culture.” Anonymous blog comments, vapid video pranks, and lightweight mashups may seem trivial and harmless, but as a whole, this widespread practice of fragmentary, impersonal communication has demeaned interpersonal interaction. Communication is now often experienced as a superhuman phenomenon that towers above individuals. 
A new generation has come of age with a reduced expectation of what a person can be, and of who each person might become. The Most Important Thing About a Technology Is How It Changes People When I work with experimental digital gadgets, like new variations on virtual reality, in a lab environment, I am always reminded of how small changes in the details of a digital design can have profound unforeseen effects on the experiences of the humans who are playing with it. The slightest change in something as seemingly trivial as the ease of use of a button can sometimes completely alter behavior patterns. For instance, Stanford University researcher Jeremy Bailenson has demonstrated that changing the height of one‟s avatar in immersive virtual reality transforms self-esteem and social self-perception. Technologies are extensions of ourselves, and, like the avatars in Jeremy‟s lab, our identities can be shifted by the quirks of gadgets. It is impossible to work with information technology without also engaging in social engineering. One might ask, “If I am blogging, twittering, and wikiing a lot, how does that change who I am?” or “If the „hive mind‟ is my audience, who am I?” We inventors of digital technologies are like stand-up comedians or neurosurgeons, in that our work resonates with deep 3 philosophical questions; unfortunately, we‟ve proven to be poor philosophers lately. When developers of digital technologies design a program that requires you to interact with a computer as if it were a person, they ask you to accept in some corner of your brain that you might also be conceived of as a program. When they design an internet service that is edited by a vast anonymous crowd, they are suggesting that a random crowd of humans is an organism with a legitimate point of view. Different media designs stimulate different potentials in human nature. We shouldn‟t seek to make the pack mentality as efficient as possible. We should instead seek to inspire the phenomenon of individual intelligence. “What is a person?” If I knew the answer to that, I might be able to program an artificial person in a computer. But I can‟t. Being a person is not a pat formula, but a quest, a mystery, a leap of faith. Optimism It would be hard for anyone, let alone a technologist, to get up in the morning without the faith that the future can be better than the past. Back in the 1980s, when the internet was only available to small number of pioneers, I was often confronted by people who feared that the strange technologies I was working on, like virtual reality, might unleash the demons of human nature. For instance, would people become addicted to virtual reality as if it were a drug? Would they become trapped in it, unable to escape back to the physical world where the rest of us live? Some of the questions were silly, and others were prescient. How Politics Influences Information Technology I was part of a merry band of idealists back then. If you had dropped in on, say, me and John Perry Barlow, who would become a cofounder of the Electronic Frontier Foundation, or Kevin Kelly, who would become the founding editor of Wired magazine, for lunch in the 1980s, these are the sorts of ideas we were bouncing around and arguing about. Ideals are important in the world of technology, but the mechanism by which ideals influence events is different than in other spheres of life. Technologists don‟t use persuasion to influence you—or, at least, we don‟t do it very well. 
There are a few master communicators among us (like Steve Jobs), but for the most part we aren‟t particularly seductive. We make up extensions to your being, like remote eyes and ears (web-cams and mobile phones) and expanded memory (the world of details you can search for online). These become the structures by which you connect to the world and other people. These structures in turn can change how you conceive of yourself and the world. We tinker with your philosophy by direct manipulation of your cognitive experience, not indirectly, through argument. It takes only a tiny group of engineers to create technology that can shape the entire future of human experience with incredible speed. Therefore, crucial arguments about the human relationship with technology should take place between developers and users before such direct manipulations are designed. This book is about those arguments. The design of the web as it appears today was not inevitable. In the early 1990s, there 4 were perhaps dozens of credible efforts to come up with a design for presenting networked digital information in a way that would attract more popular use. Companies like General Magic and Xanadu developed alternative designs with fundamentally different qualities that never got out the door. A single person, Tim Berners-Lee, came to invent the particular design of today‟s web. The web as it was introduced was minimalist, in that it assumed just about as little as possible about what a web page would be like. It was also open, in that no page was preferred by the architecture over another, and all pages were accessible to all. It also emphasized responsibility, because only the owner of a website was able to make sure that their site was available to be visited. Berners-Lee‟s initial motivation was to serve a community of physicists, not the whole world. Even so, the atmosphere in which the design of the web was embraced by early adopters was influenced by idealistic discussions. In the period before the web was born, the ideas in play were radically optimistic and gained traction in the community, and then in the world at large. Since we make up so much from scratch when we build information technologies, how do we think about which ones are best? With the kind of radical freedom we find in digital systems comes a disorienting moral challenge. We make it all up—so what shall we make up? Alas, that dilemma—of having so much freedom—is chimerical. As a program grows in size and complexity, the software can become a cruel maze. When other programmers get involved, it can feel like a labyrinth. If you are clever enough, you can write any small program from scratch, but it takes a huge amount of effort (and more than a little luck) to successfully modify a large program, especially if other programs are already depending on it. Even the best software development groups periodically find themselves caught in a swarm of bugs and design conundrums. Little programs are delightful to write in isolation, but the process of maintaining large-scale software is always miserable. Because of this, digital technology tempts the programmer‟s psyche into a kind of schizophrenia. There is constant confusion between real and ideal computers. Technologists wish every program behaved like a brand-new, playful little program, and will use any available psychological strategy to avoid thinking about computers realistically. 
The brittle character of maturing computer programs can cause digital designs to get frozen into place by a process known as lock-in. This happens when many software programs are designed to work with an existing one. The process of significantly changing software in a situation in which a lot of other software is dependent on it is the hardest thing to do. So it almost never happens. Occasionally, a Digital Eden Appears One day in the early 1980s, a music synthesizer designer named Dave Smith casually made up a way to represent musical notes. It was called MIDI. His approach conceived of music from a keyboard player‟s point of view. MIDI was made of digital patterns that represented keyboard events like “key-down” and “key-up.” That meant it could not describe the curvy, transient expressions a singer or a saxophone player can produce. It could only describe the tile mosaic world of the keyboardist, not the watercolor world of the violin. But there was no reason for MIDI to be concerned with the whole 5 of musical expression, since Dave only wanted to connect some synthesizers together so that he could have a larger palette of sounds while playing a single keyboard. In spite of its limitations, MIDI became the standard scheme to represent music in software. Music programs and synthesizers were designed to work with it, and it quickly proved impractical to change or dispose of all that software and hardware. MIDI became entrenched, and despite Herculean efforts to reform it on many occasions by a multi-decade-long parade of powerful international commercial, academic, and professional organizations, it remains so. Standards and their inevitable lack of prescience posed a nuisance before computers, of course. Railroad gauges—the dimensions of the tracks—are one example. The London Tube was designed with narrow tracks and matching tunnels that, on several of the lines, cannot accommodate air-conditioning, because there is no room to ventilate the hot air from the trains. Thus, tens of thousands of modern-day residents in one of the world‟s richest cities must suffer a stifling commute because of an inflexible design decision made more than one hundred years ago. But software is worse than railroads, because it must always adhere with absolute perfection to a boundlessly particular, arbitrary, tangled, intractable messiness. The engineering requirements are so stringent and perverse that adapting to shifting standards can be an endless struggle. So while lock-in may be a gangster in the world of railroads, it is an absolute tyrant in the digital world. Life on the Curved Surface of Moore’s Law The fateful, unnerving aspect of information technology is that a particular design will occasionally happen to fill a niche and, once implemented, turn out to be unalterable. It becomes a permanent fixture from then on, even though a better design might just as well have taken its place before the moment of entrenchment. A mere annoyance then explodes into a cataclysmic challenge because the raw power of computers grows exponentially. In the world of computers, this is known as Moore‟s law. Computers have gotten millions of times more powerful, and immensely more common and more connected, since my career began—which was not so very long ago. It‟s as if you kneel to plant a seed of a tree and it grows so fast that it swallows your whole village before you can even rise to your feet. So software presents what often feels like an unfair level of responsibility to technologists. 
Because computers are growing more powerful at an exponential rate, the designers and programmers of technology must be extremely careful when they make design choices. The consequences of tiny, initially inconsequential decisions often are amplified to become defining, unchangeable rules of our lives. MIDI now exists in your phone and in billions of other devices. It is the lattice on which almost all the popular music you hear is built. Much of the sound around us—the ambient music and audio beeps, the ring-tones and alarms—are conceived in MIDI. The whole of the human auditory experience has become filled with discrete notes that fit in a grid. Someday a digital design for describing speech, allowing computers to sound better than they do now when they speak to us, will get locked in. That design might then be adapted to music, and perhaps a more fluid and expressive sort of digital music will be developed. But even if that happens, a thousand years from now, when a descendant of ours is traveling at relativistic 6 speeds to explore a new star system, she will probably be annoyed by some awful beepy MIDI-driven music to alert her that the antimatter filter needs to be recalibrated. Lock-in Turns Thoughts into Facts Before MIDI, a musical note was a bottomless idea that transcended absolute definition. It was a way for a musician to think, or a way to teach and document music. It was a mental tool distinguishable from the music itself. Different people could make transcriptions of the same musical recording, for instance, and come up with slightly different scores. After MIDI, a musical note was no longer just an idea, but a rigid, mandatory structure you couldn‟t avoid in the aspects of life that had gone digital. The process of lock-in is like a wave gradually washing over the rulebook of life, culling the ambiguities of flexible thoughts as more and more thought structures are solidified into effectively permanent reality. We can compare lock-in to scientific method. The philosopher Karl Popper was correct when he claimed that science is a process that disqualifies thoughts as it proceeds—one can, for example, no longer reasonably believe in a flat Earth that sprang into being some thousands of years ago. Science removes ideas from play empirically, for good reason. Lock-in, however, removes design options based on what is easiest to program, what is politically feasible, what is fashionable, or what is created by chance. Lock-in removes ideas that do not fit into the winning digital representation scheme, but it also reduces or narrows the ideas it immortalizes, by cutting away the unfathomable penumbra of meaning that distinguishes a word in natural language from a command in a computer program. The criteria that guide science might be more admirable than those that guide lock-in, but unless we come up with an entirely different way to make software, further lock-ins are guaranteed. Scientific progress, by contrast, always requires determination and can stall because of politics or lack of funding or curiosity. An interesting challenge presents itself: How can a musician cherish the broader, less-defined concept of a note that preceded MIDI, while using MIDI all day long and interacting with other musicians through the filter of MIDI? Is it even worth trying? Should a digital artist just give in to lock-in and accept the infinitely explicit, finite idea of a MIDI note? 
If it‟s important to find the edge of mystery, to ponder the things that can‟t quite be defined—or rendered into a digital standard—then we will have to perpetually seek out entirely new ideas and objects, abandoning old ones like musical notes. Throughout this book, I‟ll explore whether people are becoming like MIDI notes—overly defined, and restricted in practice to what can be represented in a computer. This has enormous implications: we can conceivably abandon musical notes, but we can‟t abandon ourselves. When Dave made MIDI, I was thrilled. Some friends of mine from the original Macintosh team quickly built a hardware interface so a Mac could use MIDI to control a synthesizer, and I worked up a quick music creation program. We felt so free—but we should have been more thoughtful. By now, MIDI has become too hard to change, so the culture has changed to make it seem fuller than it was initially intended to be. We have narrowed what we expect from the most commonplace forms of musical sound in order to make the technology adequate. It wasn‟t Dave‟s fault. How could he have known? 7 Digital Reification: Lock-in Turns Philosophy into Reality A lot of the locked-in ideas about how software is put together come from an old operating system called UNIX. It has some characteristics that are related to MIDI. While MIDI squeezes musical expression through a limiting model of the actions of keys on a musical keyboard, UNIX does the same for all computation, but using the actions of keys on typewriter-like keyboards. A UNIX program is often similar to a simulation of a person typing quickly. There‟s a core design feature in UNIX called a “command line interface.” In this system, you type instructions, you hit “return,” and the instructions are carried out.* A unifying design principle of UNIX is that a program can‟t tell if a person hit return or a program did so. Since real people are slower than simulated people at operating keyboards, the importance of precise timing is suppressed by this particular idea. As a result, UNIX is based on discrete events that don‟t have to happen at a precise moment in time. The human organism, meanwhile, is based on continuous sensory, cognitive, and motor processes that have to be synchronized precisely in time. (MIDI falls somewhere in between the concept of time embodied in UNIX and in the human body, being based on discrete events that happen at particular times.) UNIX expresses too large a belief in discrete abstract symbols and not enough of a belief in temporal, continuous, nonabstract reality; it is more like a typewriter than a dance partner. (Perhaps typewriters or word processors ought to always be instantly responsive, like a dance partner—but that is not yet the case.) UNIX tends to “want” to connect to reality as if reality were a network of fast typists. If you hope for computers to be designed to serve embodied people as well as possible people, UNIX would have to be considered a bad design. I discovered this in the 1970s, when I tried to make responsive musical instruments with it. I was trying to do what MIDI does not, which is work with fluid, hard-to-notate aspects of music, and discovered that the underlying philosophy of UNIX was too brittle and clumsy for that. The arguments in favor of UNIX focused on how computers would get literally millions of times faster in the coming decades. The thinking was that the speed increase would overwhelm the timing problems I was worried about. 
Indeed, today‟s computers are millions of times faster, and UNIX has become an ambient part of life. There are some reasonably expressive tools that have UNIX in them, so the speed increase has sufficed to compensate for UNIX‟s problems in some cases. But not all. I have an iPhone in my pocket, and sure enough, the thing has what is essentially UNIX in it. An unnerving element of this gadget is that it is haunted by a weird set of unpredictable user interface delays. One‟s mind waits for the response to the press of a virtual button, but it doesn‟t come for a while. An odd tension builds during that moment, and easy intuition is replaced by nervousness. It is the ghost of UNIX, still refusing to accommodate the rhythms of my body and my mind, after all these years. I‟m not picking in particular on the iPhone (which I‟ll praise in another context later on). I could just as easily have chosen any contemporary personal computer. Windows isn‟t UNIX, but it does share UNIX‟s idea that a symbol is more important than the flow of time and the underlying continuity of experience. The grudging relationship between UNIX and the temporal world in which the human 8 body moves and the human mind thinks is a disappointing example of lock-in, but not a disastrous one. Maybe it will even help make it easier for people to appreciate the old-fashioned physical world, as virtual reality gets better. If so, it will have turned out to be a blessing in disguise. Entrenched Software Philosophies Become Invisible Through Ubiquity An even deeper locked-in idea is the notion of the file. Once upon a time, not too long ago, plenty of computer scientists thought the idea of the file was not so great. The first design for something like the World Wide Web, Ted Nelson‟s Xanadu, conceived of one giant, global file, for instance. The first iteration of the Macintosh, which never shipped, didn‟t have files. Instead, the whole of a user‟s productivity accumulated in one big structure, sort of like a singular personal web page. Steve Jobs took the Mac project over from the fellow who started it, the late Jef Raskin, and soon files appeared. UNIX had files; the Mac as it shipped had files; Windows had files. Files are now part of life; we teach the idea of a file to computer science students as if it were part of nature. In fact, our conception of files may be more persistent than our ideas about nature. I can imagine that someday physicists might tell us that it is time to stop believing in photons, because they have discovered a better way to think about light—but the file will likely live on. The file is a set of philosophical ideas made into eternal flesh. The ideas expressed by the file include the notion that human expression comes in severable chunks that can be organized as leaves on an abstract tree—and that the chunks have versions and need to be matched to compatible applications. What do files mean to the future of human expression? This is a harder question to answer than the question “How does the English language influence the thoughts of native English speakers?” At least you can compare English speakers to Chinese speakers, but files are universal. The idea of the file has become so big that we are unable to conceive of a frame large enough to fit around it in order to assess it empirically. What Happened to Trains, Files, and Musical Notes Could Happen Soon to the Definition of a Human Being It‟s worth trying to notice when philosophies are congealing into locked-in software. 
For instance, is pervasive anonymity or pseudonymity a good thing? It‟s an important question, because the corresponding philosophies of how humans can express meaning have been so ingrained into the interlocked software designs of the internet that we might never be able to fully get rid of them, or even remember that things could have been different. We ought to at least try to avoid this particularly tricky example of impending lock-in. Lock-in makes us forget the lost freedoms we had in the digital past. That can make it harder to see the freedoms we have in the digital present. Fortunately, difficult as it is, we can still try to change some expressions of philosophy that are on the verge of becoming locked in place in the tools we use to understand one another and the world. 9 A Happy Surprise The rise of the web was a rare instance when we learned new, positive information about human potential. Who would have guessed (at least at first) that millions of people would put so much effort into a project without the presence of advertising, commercial motive, threat of punishment, charismatic figures, identity politics, exploitation of the fear of death, or any of the other classic motivators of mankind. In vast numbers, people did something cooperatively, solely because it was a good idea, and it was beautiful. Some of the more wild-eyed eccentrics in the digital world had guessed that it would happen—but even so it was a shock when it actually did come to pass. It turns out that even an optimistic, idealistic philosophy is realizable. Put a happy philosophy of life in software, and it might very well come true! Technology Criticism Shouldn’t Be Left to the Luddites But not all surprises have been happy. This digital revolutionary still believes in most of the lovely deep ideals that energized our work so many years ago. At the core was a sweet faith in human nature. If we empowered individuals, we believed, more good than harm would result. The way the internet has gone sour since then is truly perverse. The central faith of the web‟s early design has been superseded by a different faith in the centrality of imaginary entities epitomized by the idea that the internet as a whole is coming alive and turning into a superhuman creature. The designs guided by this new, perverse kind of faith put people back in the shadows. The fad for anonymity has undone the great opening-of-everyone‟s-windows of the 1990s. While that reversal has empowered sadists to a degree, the worst effect is a degradation of ordinary people. Part of why this happened is that volunteerism proved to be an extremely powerful force in the first iteration of the web. When businesses rushed in to capitalize on what had happened, there was something of a problem, in that the content aspect of the web, the cultural side, was functioning rather well without a business plan. Google came along with the idea of linking advertising and searching, but that business stayed out of the middle of what people actually did online. It had indirect effects, but not direct ones. The early waves of web activity were remarkably energetic and had a personal quality. People created personal “homepages,” and each of them was different, and often strange. The web had flavor. Entrepreneurs naturally sought to create products that would inspire demand (or at least hypothetical advertising opportunities that might someday compete with Google) where there was no lack to be addressed and no need to be filled, other than greed. 
Google had discovered a new permanently entrenched niche enabled by the nature of digital technology. It turns out that the digital system of representing people and ads so they can be matched is like MIDI. It is an example of how digital technology can cause an explosive increase in the importance of the “network effect.” Every element in the system—every computer, every person, every bit—comes to depend on relentlessly detailed adherence to a common standard, a common point of exchange. 10 Unlike MIDI, Google‟s secret software standard is hidden in its computer cloud* instead of being replicated in your pocket. Anyone who wants to place ads must use it, or be out in the cold, relegated to a tiny, irrelevant subculture, just as digital musicians must use MIDI in order to work together in the digital realm. In the case of Google, the monopoly is opaque and proprietary. (Sometimes locked-in digital niches are proprietary, and sometimes they aren‟t. The dynamics are the same in either case, though the commercial implications can be vastly different.) There can be only one player occupying Google‟s persistent niche, so most of the competitive schemes that came along made no money. Behemoths like Facebook have changed the culture with commercial intent, but without, as of this time of writing, commercial achievement.* In my view, there were a large number of ways that new commercial successes might have been realized, but the faith of the nerds guided entrepreneurs on a particular path. Voluntary productivity had to be commoditized, because the type of faith I‟m criticizing thrives when you can pretend that computers do everything and people do nothing. An endless series of gambits backed by gigantic investments encouraged young people entering the online world for the first time to create standardized presences on sites like Facebook. Commercial interests promoted the widespread adoption of standardized designs like the blog, and these designs encouraged pseudonymity in at least some aspects of their designs, such as the comments, instead of the proud extroversion that characterized the first wave of web culture. Instead of people being treated as the sources of their own creativity, commercial aggregation and abstraction sites presented anonymized fragments of creativity as products that might have fallen from the sky or been dug up from the ground, obscuring the true sources. Tribal Accession The way we got here is that one subculture of technologists has recently become more influential than the others. The winning subculture doesn‟t have a formal name, but I‟ve sometimes called the members “cybernetic totalists” or “digital Maoists.” The ascendant tribe is composed of the folks from the open culture/Creative Commons world, the Linux community, folks associated with the artificial intelligence approach to computer science, the web 2.0 people, the anticontext file sharers and remashers, and a variety of others. Their capital is Silicon Valley, but they have power bases all over the world, wherever digital culture is being created. Their favorite blogs include Boing Boing, TechCrunch, and Slashdot, and their embassy in the old country is Wired. Obviously, I‟m painting with a broad brush; not every member of the groups I mentioned subscribes to every belief I‟m criticizing. In fact, the groupthink problem I‟m worried about isn‟t so much in the minds of the technologists themselves, but in the minds of the users of the tools the cybernetic totalists are promoting. 
The central mistake of recent digital culture is to chop up a network of individuals so finely that you end up with a mush. You then start to care about the abstraction of the network more than the real people who are networked, even though the network by itself is meaningless. Only the people were ever meaningful. When I refer to the tribe, I am not writing about some distant “them.” The members of 11 the tribe are my lifelong friends, my mentors, my students, my colleagues, and my fellow travelers. Many of my friends disagree with me. It is to their credit that I feel free to speak my mind, knowing that I will still be welcome in our world. On the other hand, I know there is also a distinct tradition of computer science that is humanistic. Some of the better-known figures in this tradition include the late Joseph Weizenbaum, Ted Nelson, Terry Winograd, Alan Kay, Bill Buxton, Doug Englebart, Brian Cantwell Smith, Henry Fuchs, Ken Perlin, Ben Schneiderman (who invented the idea of clicking on a link), and Andy Van Dam, who is a master teacher and has influenced generations of protégés, including Randy Pausch. Another important humanistic computing figure is David Gelernter, who conceived of a huge portion of the technical underpinnings of what has come to be called cloud computing, as well as many of the potential practical applications of clouds. And yet, it should be pointed out that humanism in computer science doesn‟t seem to correlate with any particular cultural style. For instance, Ted Nelson is a creature of the 1960s, the author of what might have been the first rock musical (Anything & Everything), something of a vagabond, and a counterculture figure if ever there was one. David Gelernter, on the other hand, is a cultural and political conservative who writes for journals like Commentary and teaches at Yale. And yet I find inspiration in the work of them both. Trap for a Tribe The intentions of the cybernetic totalist tribe are good. They are simply following a path that was blazed in earlier times by well-meaning Freudians and Marxists—and I don‟t mean that in a pejorative way. I‟m thinking of the earliest incarnations of Marxism, for instance, before Stalinism and Maoism killed millions. Movements associated with Freud and Marx both claimed foundations in rationality and the scientific understanding of the world. Both perceived themselves to be at war with the weird, manipulative fantasies of religions. And yet both invented their own fantasies that were just as weird. The same thing is happening again. A self-proclaimed materialist movement that attempts to base itself on science starts to look like a religion rather quickly. It soon presents its own eschatology and its own revelations about what is really going on—portentous events that no one but the initiated can appreciate. The Singularity and the noosphere, the idea that a collective consciousness emerges from all the users on the web, echo Marxist social determinism and Freud‟s calculus of perversions. We rush ahead of skeptical, scientific inquiry at our peril, just like the Marxists and Freudians. Premature mystery reducers are rent by schisms, just like Marxists and Freudians always were. They find it incredible that I perceive a commonality in the membership of the tribe. To them, the systems Linux and UNIX are completely different, for instance, while to me they are coincident dots on a vast canvas of possibilities, even if much of the canvas is all but forgotten by now. 
At any rate, the future of religion will be determined by the quirks of the software that gets locked in during the coming decades, just like the futures of musical notes and personhood. Where We Are on the Journey 12 It‟s time to take stock. Something amazing happened with the introduction of the World Wide Web. A faith in human goodness was vindicated when a remarkably open and unstructured information tool was made available to large numbers of people. That openness can, at this point, be declared “locked in” to a significant degree. Hurray! At the same time, some not-so-great ideas about life and meaning were also locked in, like MIDI‟s nuance-challenged conception of musical sound and UNIX‟s inability to cope with time as humans experience it. These are acceptable costs, what I would call aesthetic losses. They are counterbalanced, however, by some aesthetic victories. The digital world looks better than it sounds because a community of digital activists, including folks from Xerox Parc (especially Alan Kay), Apple, Adobe, and the academic world (especially Stanford‟s Don Knuth) fought the good fight to save us from the rigidly ugly fonts and other visual elements we‟d have been stuck with otherwise. Then there are those recently conceived elements of the future of human experience, like the already locked-in idea of the file, that are as fundamental as the air we breathe. The file will henceforth be one of the basic underlying elements of the human story, like genes. We will never know what that means, or what alternatives might have meant. On balance, we‟ve done wonderfully well! But the challenge on the table now is unlike previous ones. The new designs on the verge of being locked in, the web 2.0 designs, actively demand that people define themselves downward. It‟s one thing to launch a limited conception of music or time into the contest for what philosophical idea will be locked in. It is another to do that with the very idea of what it is to be a person. Why It Matters If you feel fine using the tools you use, who am I to tell you that there is something wrong with what you are doing? But consider these points: Emphasizing the crowd means deemphasizing individual humans in the design of society, and when you ask people not to be people, they revert to bad moblike behaviors. This leads not only to empowered trolls, but to a generally unfriendly and unconstructive online world. Finance was transformed by computing clouds. Success in finance became increasingly about manipulating the cloud at the expense of sound financial principles. There are proposals to transform the conduct of science along similar lines. Scientists would then understand less of what they do. Pop culture has entered into a nostalgic malaise. Online culture is dominated by trivial mashups of the culture that existed before the onset of mashups, and by fandom responding to the dwindling outposts of centralized mass media. It is a culture of reaction without action. Spirituality is committing suicide. Consciousness is attempting to will itself out of existence. 13 It might seem as though I‟m assembling a catalog of every possible thing that could go wrong with the future of culture as changed by technology, but that is not the case. All of these examples are really just different aspects of one singular, big mistake. The deep meaning of personhood is being reduced by illusions of bits. Since people will be inexorably connecting to one another through computers from here on out, we must find an alternative. 
We have to think about the digital layers we are laying down now in order to benefit future generations. We should be optimistic that civilization will survive this challenging century, and put some effort into creating the best possible world for those who will inherit our efforts. Next to the many problems the world faces today, debates about online culture may not seem that pressing. We need to address global warming, shift to a new energy cycle, avoid wars of mass destruction, support aging populations, figure out how to benefit from open markets without being disastrously vulnerable to their failures, and take care of other basic business. But digital culture and related topics like the future of privacy and copyrights concern the society we‟ll have if we can survive these challenges. Every save-the-world cause has a list of suggestions for “what each of us can do”: bike to work, recycle, and so on. I can propose such a list related to the problems I‟m talking about: Don‟t post anonymously unless you really might be in danger. If you put effort into Wikipedia articles, put even more effort into using your personal voice and expression outside of the wiki to help attract people who don‟t yet realize that they are interested in the topics you contributed to. Create a website that expresses something about who you are that won‟t fit into the template available to you on a social networking site. Post a video once in a while that took you one hundred times more time to create than it takes to view. Write a blog post that took weeks of reflection before you heard the inner voice that needed to come out. If you are twittering, innovate in order to find a way to describe your internal state instead of trivial external events, to avoid the creeping danger of believing that objectively described events define you, as they would define a machine. These are some of the things you can do to be a person instead of a source of fragments to be exploited by others. There are aspects to all these software designs that could be retained more humanistically. A design that shares Twitter‟s feature of providing ambient continuous contact between people could perhaps drop Twitter‟s adoration of fragments. We don‟t really know, 14 because it is an unexplored design space. As long as you are not defined by software, you are helping to broaden the identity of the ideas that will get locked in for future generations. In most arenas of human expression, it‟s fine for a person to love the medium they are given to work in. Love paint if you are a painter; love a clarinet if you are a musician. Love the English language (or hate it). Love of these things is a love of mystery. But in the case of digital creative materials, like MIDI, UNIX, or even the World Wide Web, it‟s a good idea to be skeptical. These designs came together very recently, and there‟s a haphazard, accidental quality to them. Resist the easy grooves they guide you into. If you love a medium made of software, there‟s a danger that you will become entrapped in someone else‟s recent careless thoughts. Struggle against that! The Importance of Digital Politics There was an active campaign in the 1980s and 1990s to promote visual elegance in software. That political movement bore fruit when it influenced engineers at companies like Apple and Microsoft who happened to have a chance to steer the directions software was taking before lock-in made their efforts moot. That‟s why we have nice fonts and flexible design options on our screens. 
It wouldn‟t have happened otherwise. The seemingly unstoppable mainstream momentum in the world of software engineers was pulling computing in the direction of ugly screens, but that fate was avoided before it was too late. A similar campaign should be taking place now, influencing engineers, designers, businesspeople, and everyone else to support humanistic alternatives whenever possible. Unfortunately, however, the opposite seems to be happening. Online culture is filled to the brim with rhetoric about what the true path to a better world ought to be, and these days it‟s strongly biased toward an antihuman way of thinking. The Future The true nature of the internet is one of the most common topics of online discourse. It is remarkable that the internet has grown enough to contain the massive amount of commentary about its own nature. The promotion of the latest techno-political-cultural orthodoxy, which I am criticizing, has become unceasing and pervasive. The New YorkTimes, for instance, promotes so-called open digital politics on a daily basis even though that ideal and the movement behind it are destroying the newspaper, and all other newspapers. * It seems to be a case of journalistic Stockholm syndrome. There hasn‟t yet been an adequate public rendering of an alternative worldview that opposes the new orthodoxy. In order to oppose orthodoxy, I have to provide more than a few jabs. I also have to realize an alternative intellectual environment that is large enough to roam in. Someone who has been immersed in orthodoxy needs to experience a figure-ground reversal in order to gain perspective. This can‟t come from encountering just a few heterodox thoughts, but only from a new encompassing architecture of interconnected thoughts that can engulf a person 15 with a different worldview. So, in this book, I have spun a long tale of belief in the opposites of computationalism, the noosphere, the Singularity, web 2.0, the long tail, and all the rest. I hope the volume of my contrarianism will foster an alternative mental environment, where the exciting opportunity to start creating a new digital humanism can begin. An inevitable side effect of this project of deprogramming through immersion is that I will direct a sustained stream of negativity onto the ideas I am criticizing. Readers, be assured that the negativity eventually tapers off, and that the last few chapters are optimistic in tone. * The style of UNIX commands has, incredibly, become part of pop culture. For instance, the URLs (universal resource locators) that we use to find web pages these days, like http://www.jaronlanier.com/, are examples of the kind of key press sequences that are ubiquitous in UNIX. * “Cloud” is a term for a vast computing resource available over the internet. You never know where the cloud resides physically. Google, Microsoft, IBM, and various government agencies are some of the proprietors of computing clouds. * Facebook does have advertising, and is surely contemplating a variety of other commercial plays, but so far has earned only a trickle of income, and no profits. The same is true for most of the other web 2.0 businesses. Because of the enhanced network effect of all things digital, it‟s tough for any new player to become profitable in advertising, since Google has already seized a key digital niche (its ad exchange). In the same way, it would be extraordinarily hard to start a competitor to eBay or Craigslist. Digital network architectures naturally incubate monopolies. 
That is precisely why the idea of the noosphere, or a collective brain formed by the sum of all the people connected on the internet, has to be resisted with more force than it is promoted. * Today, for instance, as I write these words, there was a headline about R, a piece of geeky statistical software that would never have received notice in the Times if it had not been “free.” R‟s nonfree competitor Stata was not even mentioned. (Ashlee Vance, “Data Analysts Captivated by R‟s Power,” New York Times, January 6, 2009.) 16 CHAPTER 2 An Apocalypse of Self-Abdication THE IDEAS THAT I hope will not be locked in rest on a philosophical foundation that I sometimes call cybernetic totalism. It applies metaphors from certain strains of computer science to people and the rest of reality. Pragmatic objections to this philosophy are presented. What Do You Do When the Techies Are Crazier Than the Luddites? The Singularity is an apocalyptic idea originally proposed by John von Neumann, one of the inventors of digital computation, and elucidated by figures such as Vernor Vinge and Ray Kurzweil. There are many versions of the fantasy of the Singularity. Here‟s the one Marvin Minsky used to tell over the dinner table in the early 1980s: One day soon, maybe twenty or thirty years into the twenty-first century, computers and robots will be able to construct copies of themselves, and these copies will be a little better than the originals because of intelligent software. The second generation of robots will then make a third, but it will take less time, because of the improvements over the first generation. The process will repeat. Successive generations will be ever smarter and will appear ever faster. People might think they‟re in control, until one fine day the rate of robot improvement ramps up so quickly that superintelligent robots will suddenly rule the Earth. In some versions of the story, the robots are imagined to be microscopic, forming a “gray goo” that eats the Earth; or else the internet itself comes alive and rallies all the net-connected machines into an army to control the affairs of the planet. Humans might then enjoy immortality within virtual reality, because the global brain would be so huge that it would be absolutely easy—a no-brainer, if you will—for it to host all our consciousnesses for eternity. The coming Singularity is a popular belief in the society of technologists. Singularity books are as common in a computer science department as Rapture images are in an evangelical bookstore. (Just in case you are not familiar with the Rapture, it is a colorful belief in American evangelical culture about the Christian apocalypse. When I was growing up in rural New Mexico, Rapture paintings would often be found in places like gas stations or hardware stores. They would usually include cars crashing into each other because the virtuous drivers had suddenly disappeared, having been called to heaven just before the onset of hell on Earth. The immensely popular Left Behind novels also describe this scenario.) There might be some truth to the ideas associated with the Singularity at the very largest scale of reality. It might be true that on some vast cosmic basis, higher and higher forms of consciousness inevitably arise, until the whole universe becomes a brain, or something along those lines. Even at much smaller scales of millions or even thousands of years, it is more exciting to imagine humanity evolving into a more wonderful state than we can presently articulate. 
The only alternatives would be extinction or stodgy stasis, which would be a little disappointing and sad, so let us hope for transcendence of the human condition, as we now 17 understand it. The difference between sanity and fanaticism is found in how well the believer can avoid confusing consequential differences in timing. If you believe the Rapture is imminent, fixing the problems of this life might not be your greatest priority. You might even be eager to embrace wars and tolerate poverty and disease in others to bring about the conditions that could prod the Rapture into being. In the same way, if you believe the Singularity is coming soon, you might cease to design technology to serve humans, and prepare instead for the grand events it will bring. But in either case, the rest of us would never know if you had been right. Technology working well to improve the human condition is detectable, and you can see that possibility portrayed in optimistic science fiction like Star Trek. The Singularity, however, would involve people dying in the flesh and being uploaded into a computer and remaining conscious, or people simply being annihilated in an imperceptible instant before a new super-consciousness takes over the Earth. The Rapture and the Singularity share one thing in common: they can never be verified by the living. You Need Culture to Even Perceive Information Technology Ever more extreme claims are routinely promoted in the new digital climate. Bits are presented as if they were alive, while humans are transient fragments. Real people must have left all those anonymous comments on blogs and video clips, but who knows where they are now, or if they are dead? The digital hive is growing at the expense of individuality. Kevin Kelly says that we don‟t need authors anymore, that all the ideas of the world, all the fragments that used to be assembled into coherent books by identifiable authors, can be combined into one single, global book. Wired editor Chris Anderson proposes that science should no longer seek theories that scientists can understand, because the digital cloud will understand them better anyway. * Antihuman rhetoric is fascinating in the same way that self-destruction is fascinating: it offends us, but we cannot look away. The antihuman approach to computation is one of the most baseless ideas in human history. A computer isn‟t even there unless a person experiences it. There will be a warm mass of patterned silicon with electricity coursing through it, but the bits don‟t mean anything without a cultured person to interpret them. This is not solipsism. You can believe that your mind makes up the world, but a bullet will still kill you. A virtual bullet, however, doesn‟t even exist unless there is a person to recognize it as a representation of a bullet. Guns are real in a way that computers are not. Making People Obsolete So That Computers Seem More Advanced Many of today‟s Silicon Valley intellectuals seem to have embraced what used to be speculations as certainties, without the spirit of unbounded curiosity that originally gave rise to them. Ideas that were once tucked away in the obscure world of artificial intelligence labs have gone mainstream in tech culture. The first tenet of this new culture is that all of reality, including humans, is one big information system. That doesn‟t mean we are condemned to a meaningless 18 existence. Instead there is a new kind of manifest destiny that provides us with a mission to accomplish. 
The meaning of life, in this view, is making the digital system we call reality function at ever-higher “levels of description.” People pretend to know what “levels of description” means, but I doubt anyone really does. A web page is thought to represent a higher level of description than a single letter, while a brain is a higher level than a web page. An increasingly common extension of this notion is that the net as a whole is or soon will be a higher level than a brain. There‟s nothing special about the place of humans in this scheme. Computers will soon get so big and fast and the net so rich with information that people will be obsolete, either left behind like the characters in Rapture novels or subsumed into some cyber-superhuman something. Silicon Valley culture has taken to enshrining this vague idea and spreading it in the way that only technologists can. Since implementation speaks louder than words, ideas can be spread in the designs of software. If you believe the distinction between the roles of people and computers is starting to dissolve, you might express that—as some friends of mine at Microsoft once did—by designing features for a word processor that are supposed to know what you want, such as when you want to start an outline within your document. You might have had the experience of having Microsoft Word suddenly determine, at the wrong moment, that you are creating an indented outline. While I am all for the automation of petty tasks, this is different. From my point of view, this type of design feature is nonsense, since you end up having to work more than you would otherwise in order to manipulate the software‟s expectations of you. The real function of the feature isn‟t to make life easier for people. Instead, it promotes a new philosophy: that the computer is evolving into a life-form that can understand people better than people can understand themselves. Another example is what I call the “race to be most meta.” If a design like Facebook or Twitter depersonalizes people a little bit, then another service like Friendfeed—which may not even exist by the time this book is published—might soon come along to aggregate the previous layers of aggregation, making individual people even more abstract, and the illusion of high-level metaness more celebrated. Information Doesn’t Deserve to Be Free “Information wants to be free.” So goes the saying. Stewart Brand, the founder of the Whole Earth Catalog, seems to have said it first. I say that information doesn‟t deserve to be free. Cybernetic totalists love to think of the stuff as if it were alive and had its own ideas and ambitions. But what if information is inanimate? What if it‟s even less than inanimate, a mere artifact of human thought? What if only humans are real, and information is not? Of course, there is a technical use of the term “information” that refers to something entirely real. This is the kind of information that‟s related to entropy. But that fundamental kind of information, which exists independently of the culture of an observer, is not the same as the kind we can put in computers, the kind that supposedly wants to be free. Information is alienated experience. You can think of culturally decodable information as a potential form of experience, very much as you can think of a brick resting on a ledge as storing potential energy. When the brick is 19 prodded to fall, the energy is revealed. That is only possible because it was lifted into place at some point in the past. 
In the same way, stored information might cause experience to be revealed if it is prodded in the right way. A file on a hard disk does indeed contain information of the kind that objectively exists. The fact that the bits are discernible instead of being scrambled into mush—the way heat scrambles things—is what makes them bits. But if the bits can potentially mean something to someone, they can only do so if they are experienced. When that happens, a commonality of culture is enacted between the storer and the retriever of the bits. Experience is the only process that can de-alienate information. Information of the kind that purportedly wants to be free is nothing but a shadow of our own minds, and wants nothing on its own. It will not suffer if it doesn‟t get what it wants. But if you want to make the transition from the old religion, where you hope God will give you an afterlife, to the new religion, where you hope to become immortal by getting uploaded into a computer, then you have to believe information is real and alive. So for you, it will be important to redesign human institutions like art, the economy, and the law to reinforce the perception that information is alive. You demand that the rest of us live in your new conception of a state religion. You need us to deify information to reinforce your faith. The Apple Falls Again It‟s a mistake with a remarkable origin. Alan Turing articulated it, just before his suicide. Turing‟s suicide is a touchy subject in computer science circles. There‟s an aversion to talking about it much, because we don‟t want our founding father to seem like a tabloid celebrity, and we don‟t want his memory trivialized by the sensational aspects of his death. The legacy of Turing the mathematician rises above any possible sensationalism. His contributions were supremely elegant and foundational. He gifted us with wild leaps of invention, including much of the mathematical underpinnings of digital computation. The highest award in computer science, our Nobel Prize, is named in his honor. Turing the cultural figure must be acknowledged, however. The first thing to understand is that he was one of the great heroes of World War II. He was the first “cracker,” a person who uses computers to defeat an enemy‟s security measures. He applied one of the first computers to break a Nazi secret code, called Enigma, which Nazi mathematicians had believed was unbreakable. Enigma was decoded by the Nazis in the field using a mechanical device about the size of a cigar box. Turing reconceived it as a pattern of bits that could be analyzed in a computer, and cracked it wide open. Who knows what world we would be living in today if Turing had not succeeded? The second thing to know about Turing is that he was gay at a time when it was illegal to be gay. British authorities, thinking they were doing the most compassionate thing, coerced him into a quack medical treatment that was supposed to correct his homosexuality. It consisted, bizarrely, of massive infusions of female hormones. In order to understand how someone could have come up with that plan, you have to remember that before computers came along, the steam engine was a preferred metaphor for understanding human nature. All that sexual pressure was building up and causing the machine to malfunction, so the opposite essence, the female kind, ought to balance it out and reduce the pressure. This story should serve as a cautionary tale. 
The common use of computers, as we 20 understand them today, as sources for models and metaphors of ourselves is probably about as reliable as the use of the steam engine was back then. Turing developed breasts and other female characteristics and became terribly depressed. He committed suicide by lacing an apple with cyanide in his lab and eating it. Shortly before his death, he presented the world with a spiritual idea, which must be evaluated separately from his technical achievements. This is the famous Turing test. It is extremely rare for a genuinely new spiritual idea to appear, and it is yet another example of Turing‟s genius that he came up with one. Turing presented his new offering in the form of a thought experiment, based on a popular Victorian parlor game. A man and a woman hide, and a judge is asked to determine which is which by relying only on the texts of notes passed back and forth. Turing replaced the woman with a computer. Can the judge tell which is the man? If not, is the computer conscious? Intelligent? Does it deserve equal rights? It‟s impossible for us to know what role the torture Turing was enduring at the time played in his formulation of the test. But it is undeniable that one of the key figures in the defeat of fascism was destroyed, by our side, after the war, because he was gay. No wonder his imagination pondered the rights of strange creatures. When Turing died, software was still in such an early state that no one knew what a mess it would inevitably become as it grew. Turing imagined a pristine, crystalline form of existence in the digital realm, and I can imagine it might have been a comfort to imagine a form of life apart from the torments of the body and the politics of sexuality. It‟s notable that it is the woman who is replaced by the computer, and that Turing‟s suicide echoes Eve‟s fall. The Turing Test Cuts Both Ways Whatever the motivation, Turing authored the first trope to support the idea that bits can be alive on their own, independent of human observers. This idea has since appeared in a thousand guises, from artificial intelligence to the hive mind, not to mention many overhyped Silicon Valley start-ups. It seems to me, however, that the Turing test has been poorly interpreted by generations of technologists. It is usually presented to support the idea that machines can attain whatever quality it is that gives people consciousness. After all, if a machine fooled you into believing it was conscious, it would be bigoted for you to still claim it was not. What the test really tells us, however, even if it‟s not necessarily what Turing hoped it would say, is that machine intelligence can only be known in a relative sense, in the eyes of a human beholder.* The AI way of thinking is central to the ideas I‟m criticizing in this book. If a machine can be conscious, then the computing cloud is going to be a better and far more capacious consciousness than is found in an individual person. If you believe this, then working for the benefit of the cloud over individual people puts you on the side of the angels. But the Turing test cuts both ways. You can‟t tell if a machine has gotten smarter or if you‟ve just lowered your own standards of intelligence to such a degree that the machine seems smart. If you can have a conversation with a simulated person presented by an AI program, can you tell how far you‟ve let your sense of personhood degrade in order to make the illusion work for you? 
21 People degrade themselves in order to make machines seem smart all the time. Before the crash, bankers believed in supposedly intelligent algorithms that could calculate credit risks before making bad loans. We ask teachers to teach to standardized tests so a student will look good to an algorithm. We have repeatedly demonstrated our species‟ bottomless ability to lower our standards to make information technology look good. Every instance of intelligence in a machine is ambiguous. The same ambiguity that motivated dubious academic AI projects in the past has been repackaged as mass culture today. Did that search engine really know what you want, or are you playing along, lowering your standards to make it seem clever? While it‟s to be expected that the human perspective will be changed by encounters with profound new technologies, the exercise of treating machine intelligence as real requires people to reduce their mooring to reality. A significant number of AI enthusiasts, after a protracted period of failed experiments in tasks like understanding natural language, eventually found consolation in the adoration for the hive mind, which yields better results because there are real people behind the curtain. Wikipedia, for instance, works on what I call the Oracle illusion, in which knowledge of the human authorship of a text is suppressed in order to give the text superhuman validity. Traditional holy books work in precisely the same way and present many of the same problems. This is another of the reasons I sometimes think of cybernetic totalist culture as a new religion. The designation is much more than an approximate metaphor, since it includes a new kind of quest for an afterlife. It‟s so weird to me that Ray Kurzweil wants the global computing cloud to scoop up the contents of our brains so we can live forever in virtual reality. When my friends and I built the first virtual reality machines, the whole point was to make this world more creative, expressive, empathic, and interesting. It was not to escape it. A parade of supposedly distinct “big ideas” that amount to the worship of the illusions of bits has enthralled Silicon Valley, Wall Street, and other centers of power. It might be Wikipedia or simulated people on the other end of the phone line. But really we are just hearing Turing‟s mistake repeated over and over. Or Consider Chess Will trendy cloud-based economics, science, or cultural processes outpace old-fashioned approaches that demand human understanding? No, because it is only encounters with human understanding that allow the contents of the cloud to exist. Fragment liberation culture breathlessly awaits future triumphs of technology that will bring about the Singularity or other imaginary events. But there are already a few examples of how the Turing test has been approximately passed, and has reduced personhood. Chess is one. The game of chess possesses a rare combination of qualities: it is easy to understand the rules, but it is hard to play well; and, most important, the urge to master it seems timeless. Human players achieve ever higher levels of skill, yet no one will claim that the quest is over. Computers and chess share a common ancestry. Both originated as tools of war. Chess began as a battle simulation, a mental martial art. The design of chess reverberates even further into the past than that—all the way back to our sad animal ancestry of pecking orders and competing clans. 
Likewise, modern computers were developed to guide missiles and break secret military codes. Chess and computers are both direct descendants of the violence that drives evolution in 22 the natural world, however sanitized and abstracted they may be in the context of civilization. The drive to compete is palpable in both computer science and chess, and when they are brought together, adrenaline flows. What makes chess fascinating to computer scientists is precisely that we‟re bad at it. From our point of view, human brains routinely do things that seem almost insuperably difficult, like understanding sentences—yet we don‟t hold sentence-comprehension tournaments, because we find that task too easy, too ordinary. Computers fascinate and frustrate us in a similar way. Children can learn to program them, yet it is extremely difficult for even the most accomplished professional to program them well. Despite the evident potential of computers, we know full well that we have not thought of the best programs to write. But all of this is not enough to explain the outpouring of public angst on the occasion of Deep Blue‟s victory in May 1997 over world chess champion Gary Kasparov, just as the web was having its first major influences on popular culture. Regardless of all the old-media hype, it was clear that the public‟s response was genuine and deeply felt. For millennia, mastery of chess had indicated the highest, most refined intelligence—and now a computer could play better than the very best human. There was much talk about whether human beings were still special, whether computers were becoming our equal. By now, this sort of thing wouldn‟t be news, since people have had the AI way of thinking pounded into their heads so much that it is sounding like believable old news. The AI way of framing the event was unfortunate, however. What happened was primarily that a team of computer scientists built a very fast machine and figured out a better way to represent the problem of how to choose the next move in a chess game. People, not machines, performed this accomplishment. The Deep Blue team‟s central victory was one of clarity and elegance of thought. In order for a computer to beat the human chess champion, two kinds of progress had to converge: an increase in raw hardware power and an improvement in the sophistication and clarity with which the decisions of chess play are represented in software. This dual path made it hard to predict the year, but not the eventuality, that a computer would triumph. If the Deep Blue team had not been as good at the software problem, a computer would still have become the world champion at some later date, thanks to sheer brawn. So the suspense lay in wondering not whether a chess-playing computer would ever beat the best human chess player, but to what degree programming elegance would play a role in the victory. Deep Blue won earlier than it might have, scoring a point for elegance. The public reaction to the defeat of Kasparov left the computer science community with an important question, however. Is it useful to portray computers themselves as intelligent or humanlike in any way? Does this presentation serve to clarify or to obscure the role of computers in our lives? Whenever a computer is imagined to be intelligent, what is really happening is that humans have abandoned aspects of the subject at hand in order to remove from consideration whatever the computer is blind to. This happened to chess itself in the case of the Deep Blue-Kasparov tournament. 
There is an aspect of chess that is a little like poker—the staring down of an opponent, the projection of confidence. Even though it is relatively easier to write a program to “play” poker than to play chess, poker is really a game centering on the subtleties of nonverbal communication between people, such as bluffing, hiding emotion, understanding your 23 opponents‟ psychologies, and knowing how to bet accordingly. In the wake of Deep Blue‟s victory, the poker side of chess has been largely overshadowed by the abstract, algorithmic aspect—while, ironically, it was in the poker side of the game that Kasparov failed critically. Kasparov seems to have allowed himself to be spooked by the computer, even after he had demonstrated an ability to defeat it on occasion. He might very well have won if he had been playing a human player with exactly the same move-choosing skills as Deep Blue (or at least as Deep Blue existed in 1997). Instead, Kasparov detected a sinister stone face where in fact there was absolutely nothing. While the contest was not intended as a Turing test, it ended up as one, and Kasparov was fooled. As I pointed out earlier, the idea of AI has shifted the psychological projection of adorable qualities from computer programs alone to a different target: computer-plus-crowd constructions. So, in 1999 a wikilike crowd of people, including chess champions, gathered to play Kasparov in an online game called “Kasparov versus the World.” In this case Kasparov won, though many believe that it was only because of back-stabbing between members of the crowd. We technologists are ceaselessly intrigued by rituals in which we attempt to pretend that people are obsolete. The attribution of intelligence to machines, crowds of fragments, or other nerd deities obscures more than it illuminates. When people are told that a computer is intelligent, they become prone to changing themselves in order to make the computer appear to work better, instead of demanding that the computer be changed to become more useful. People already tend to defer to computers, blaming themselves when a digital gadget or online service is hard to use. Treating computers as intelligent, autonomous entities ends up standing the process of engineering on its head. We can‟t afford to respect our own designs so much. The Circle of Empathy The most important thing to ask about any technology is how it changes people. And in order to ask that question I‟ve used a mental device called the “circle of empathy” for many years. Maybe you‟ll find it useful as well. (The Princeton philosopher often associated with animal rights, Peter Singer, uses a similar term and idea, seemingly a coincident coinage.) An imaginary circle of empathy is drawn by each person. It circumscribes the person at some distance, and corresponds to those things in the world that deserve empathy. I like the term “empathy” because it has spiritual overtones. A term like “sympathy” or “allegiance” might be more precise, but I want the chosen term to be slightly mystical, to suggest that we might not be able to fully understand what goes on between us and others, that we should leave open the possibility that the relationship can‟t be represented in a digital database. If someone falls within your circle of empathy, you wouldn‟t want to see him or her killed. Something that is clearly outside the circle is fair game. 
For instance, most people would place all other people within the circle, but most of us are willing to see bacteria killed when we brush our teeth, and certainly don‟t worry when we see an inanimate rock tossed aside to keep a trail clear. The tricky part is that some entities reside close to the edge of the circle. The deepest controversies often involve whether something or someone should lie just inside or just outside the circle. For instance, the idea of slavery depends on the placement of the slave outside the circle, to make some people nonhuman. Widening the circle to include all people and end slavery 24 has been one of the epic strands of the human story—and it isn‟t quite over yet. A great many other controversies fit well in the model. The fight over abortion asks whether a fetus or embryo should be in the circle or not, and the animal rights debate asks the same about animals. When you change the contents of your circle, you change your conception of yourself. The center of the circle shifts as its perimeter is changed. The liberal impulse is to expand the circle, while conservatives tend to want to restrain or even contract the circle. Empathy Infla...

Explanation & Answer


Surname 1
Name:
Course:
Instructor:
Date:
Connected, but Alone
A proper exposition of this discussion must begin by appreciating the advances the human race has made and where it stands right now. The twenty-first century is a particularly interesting time to be a human being. For the first time in recorded history, a species exercises far-reaching control over itself and its environment. Homo sapiens sapiens has sophisticated methods of communication, has adapted to nearly every ecosystem on the planet, and harnesses energy to travel and to power industry. The most remarkable aspect of this evolution is that it is not biological; it is technological. In the last four decades alone, humanity has improved its methods of communication dramatically.
Earlier inventors such as Alexander Graham Bell gave us the telephone in the nineteenth century, but modern communication technology has produced far more advanced tools. The mobile phone is now ubiquitous. Technology has moved so far beyond smoke signals, horns, messenger birds, and couriers that audio-visual channels of communication are readily available to the general public. The purpose of this paper, however, is not to repeat what is already common knowledge about the advances in human interaction. Rather, this paper argues that information and communication technology (ICT) has not brought people closer together; its impact has been the exact opposite.
It is sad, even tragic, that anyone should have to make this argument. The major reason for developing ICT was to ease communication, and it is a bitter irony that technology
would strip the humanity from something so intrinsic to ordinary human behavior. Sherry Turkle, a professor at the Massachusetts Institute of Technology who studies how technology shapes human relationships, argues that we are "connected, but alone." Her argument is that technology has brought people so close together that it has now begun to isolate them. ICT has narrowed the range of people with whom an ordinary person comes into contact. Users are hived off from one another according to the interests, hobbies, and likes the technology records about them. The prevailing rationale is that tailoring content to each person's interests helps them enjoy the vast amount of information on the internet. In practice, people lose their diversity: the constant flow of similar content makes individuals homogeneous.
The increased commercialization of ICT has shifted its primary objective from connecting people to merely pushing messages across. The commercial utilities...

