Write 2 essays - 2 pages each

1. Both Michaels and McAfee & Brynjolfsson believe that our world has changed in specific ways, and that we must consider a number of factors in order to fully take advantage of our place in society. What factors are of chief concern in each text? How do these authors’ ideas about society, and how to survive in it, contrast and compare? 2 pages.

2. Consider the different approaches that Wong, Rhode, and Kay & Shipman have to success in the modern world, and the different obstacles they see to that success. How do their ideas relate to one another? To the other authors in Unit 2? In Unit 1? 2 pages.

Files & links:

Kay, Katty and Claire Shipman. “The Confidence Gap.”

Wong, Grace. “Silicon Valley’s Other Diversity Problem: Age of Bias in Tech.”

Please use the attached files; for "The Second Machine Age" by McAfee & Brynjolfsson, use only chapter 12.

The files named "1" through "4" are Michaels's texts.

Unformatted Attachment Preview

ERIK BRYNJOLFSSON AND ANDREW MCAFEE

To Martha Pavlakis, the love of my life. To my parents, David McAfee and Nancy Haller, who prepared me for the second machine age by giving me every advantage a person could have.

Chapter 1 THE BIG STORIES
Chapter 2 THE SKILLS OF THE NEW MACHINES: TECHNOLOGY RACES AHEAD
Chapter 3 MOORE'S LAW AND THE SECOND HALF OF THE CHESSBOARD
Chapter 4 THE DIGITIZATION OF JUST ABOUT EVERYTHING
Chapter 5 INNOVATION: DECLINING OR RECOMBINING?
Chapter 6 ARTIFICIAL AND HUMAN INTELLIGENCE IN THE SECOND MACHINE AGE
Chapter 7 COMPUTING BOUNTY
Chapter 8 BEYOND GDP
Chapter 9 THE SPREAD
Chapter 10 THE BIGGEST WINNERS: STARS AND SUPERSTARS
Chapter 11 IMPLICATIONS OF THE BOUNTY AND THE SPREAD
Chapter 12 LEARNING TO RACE WITH MACHINES: RECOMMENDATIONS FOR INDIVIDUALS
Chapter 13 POLICY RECOMMENDATIONS
Chapter 14 LONG-TERM RECOMMENDATIONS
Chapter 15 TECHNOLOGY AND THE FUTURE (Which Is Very Different from "Technology Is the Future")
Acknowledgments
Notes
Illustration Sources
Index

"Technology is a gift of God. After the gift of life it is perhaps the greatest of God's gifts. It is the mother of civilizations, of arts and of sciences." —Freeman Dyson

WHAT HAVE BEEN THE most important developments in human history? As anyone investigating this question soon learns, it's difficult to answer. For one thing, when does 'human history' even begin? Anatomically and behaviorally modern Homo sapiens, equipped with language, fanned out from their African homeland some sixty thousand years ago.1 By 25,000 BCE2 they had wiped out the Neanderthals and other hominids, and thereafter faced no competition from other big-brained, upright-walking species.
We might consider 25,000 BCE a reasonable time to start tracking the big stories of humankind, were it not for the development-retarding ice age earth was experiencing at the time.3 In his book Why the West Rules—For Now, anthropologist Ian Morris starts tracking human societal progress in 14,000 BCE, when the world clearly started getting warmer. Another reason it’s a hard question to answer is that it’s not clear what criteria we should use: what constitutes a truly important development? Most of us share a sense that it would be an event or advance that significantly changes the course of things—one that ‘bends the curve’ of human history. Many have argued that the domestication of animals did just this, and is one of our earliest important achievements. The dog might well have been domesticated before 14,000 BCE, but the horse was not; eight thousand more years would pass before we started breeding them and keeping them in corrals. The ox, too, had been tamed by that time (ca. 6,000 BCE) and hitched to a plow. Domestication of work animals hastened the transition from foraging to farming, an important development already underway by 8,000 BCE.4 Agriculture ensures plentiful and reliable food sources, which in turn enable larger human settlements and, eventually, cities. Cities in turn make tempting targets for plunder and conquest. A list of important human developments should therefore include great wars and the empires they yielded. The Mongol, Roman, Arab, and Ottoman empires—to name just four— were transformative; they affected kingdoms, commerce, and customs over immense areas. Of course, some important developments have nothing to do with animals, plants, or fighting men; some are simply ideas. Philosopher Karl Jaspers notes that Buddha (563–483 BCE), Confucius (551–479 BCE), and Socrates (469–399 BCE) all lived quite close to one another in time (but not in place). In his analysis these men are the central thinkers of an ‘Axial Age’ spanning 800–200 BCE. 
Jaspers calls this age "a deep breath bringing the most lucid consciousness" and holds that its philosophers brought transformative schools of thought to three major civilizations: Indian, Chinese, and European.5 The Buddha also founded one of the world's major religions, and common sense demands that any list of major human developments include the establishment of other major faiths like Hinduism, Judaism, Christianity, and Islam. Each has influenced the lives and ideals of hundreds of millions of people.6 Many of these religions' ideas and revelations were spread by the written word, itself a fundamental innovation in human history. Debate rages about precisely when, where, and how writing was invented, but a safe estimate puts it in Mesopotamia around 3,200 BCE. Written symbols to facilitate counting also existed then, but they did not include the concept of zero, as basic as that seems to us now. The modern numbering system, which we call Arabic, arrived around 830 CE.7 The list of important developments goes on and on. The Athenians began to practice democracy around 500 BCE. The Black Death reduced Europe's population by at least 30 percent during the latter half of the 1300s. Columbus sailed the ocean blue in 1492, beginning interactions between the New World and the Old that would transform both.

The History of Humanity in One Graph

How can we ever get clarity about which of these developments is the most important? All of the candidates listed above have passionate advocates—people who argue forcefully and persuasively for one development's sovereignty over all the others. And in Why the West Rules—For Now Morris confronts a more fundamental debate: whether any attempt to rank or compare human events and developments is meaningful or legitimate. Many anthropologists and other social scientists say it is not. Morris disagrees, and his book boldly attempts to quantify human development.
As he writes, "reducing the ocean of facts to simple numerical scores has drawbacks but it also has the one great merit of forcing everyone to confront the same evidence—with surprising results."8 In other words, if we want to know which developments bent the curve of human history, it makes sense to try to draw that curve. Morris has done thoughtful and careful work to quantify what he terms social development ("a group's ability to master its physical and intellectual environment to get things done") over time.* As Morris suggests, the results are surprising. In fact, they're astonishing. They show that none of the developments discussed so far has mattered very much, at least in comparison to something else—something that bent the curve of human history like nothing before or since. Here's the graph, with total worldwide human population graphed over time along with social development; as you can see, the two lines are nearly identical:

FIGURE 1.1 Numerically Speaking, Most of Human History Is Boring.

For many thousands of years, humanity was on a very gradual upward trajectory. Progress was achingly slow, almost invisible. Animals and farms, wars and empires, philosophies and religions all failed to exert much influence. But just over two hundred years ago, something sudden and profound arrived and bent the curve of human history—of population and social development—almost ninety degrees.

Engines of Progress

By now you've probably guessed what it was. This is a book about the impact of technology, after all, so it's a safe bet that we're opening it this way in order to demonstrate how important technology has been. And the sudden change in the graph in the late eighteenth century corresponds to a development we've heard a lot about: the Industrial Revolution, which was the sum of several nearly simultaneous developments in mechanical engineering, chemistry, metallurgy, and other disciplines.
So you've most likely figured out that these technological developments underlie the sudden, sharp, and sustained jump in human progress. If so, your guess is exactly right. And we can be even more precise about which technology was most important. It was the steam engine or, to be more precise, one developed and improved by James Watt and his colleagues in the second half of the eighteenth century. Prior to Watt, steam engines were highly inefficient, harnessing only about one percent of the energy released by burning coal. Watt's brilliant tinkering between 1765 and 1776 increased this more than threefold.9 As Morris writes, this made all the difference: "Even though [the steam] revolution took several decades to unfold . . . it was nonetheless the biggest and fastest transformation in the entire history of the world."10 The Industrial Revolution, of course, is not only the story of steam power, but steam started it all. More than anything else, it allowed us to overcome the limitations of muscle power, human and animal, and generate massive amounts of useful energy at will. This led to factories and mass production, to railways and mass transportation. It led, in other words, to modern life. The Industrial Revolution ushered in humanity's first machine age—the first time our progress was driven primarily by technological innovation—and it was the most profound time of transformation our world has ever seen.* The ability to generate massive amounts of mechanical power was so important that, in Morris's words, it "made mockery of all the drama of the world's earlier history."11

FIGURE 1.2 What Bent the Curve of Human History? The Industrial Revolution.

Now comes the second machine age. Computers and other digital advances are doing for mental power—the ability to use our brains to understand and shape our environments—what the steam engine and its descendants did for muscle power.
They're allowing us to blow past previous limitations and taking us into new territory. How exactly this transition will play out remains unknown, but whether or not the new machine age bends the curve as dramatically as Watt's steam engine, it is a very big deal indeed. This book explains how and why. For now, a very short and simple answer: mental power is at least as important for progress and development—for mastering our physical and intellectual environment to get things done—as physical power. So a vast and unprecedented boost to mental power should be a great boost to humanity, just as the earlier boost to physical power so clearly was.

Playing Catch-Up

We wrote this book because we got confused. For years we have studied the impact of digital technologies like computers, software, and communications networks, and we thought we had a decent understanding of their capabilities and limitations. But over the past few years, they started surprising us. Computers started diagnosing diseases, listening and speaking to us, and writing high-quality prose, while robots started scurrying around warehouses and driving cars with minimal or no guidance. Digital technologies had been laughably bad at a lot of these things for a long time—then they suddenly got very good. How did this happen? And what were the implications of this progress, which was astonishing and yet came to be considered a matter of course? We decided to team up and see if we could answer these questions. We did the normal things business academics do: read lots of papers and books, looked at many different kinds of data, and batted around ideas and hypotheses with each other. This was necessary and valuable, but the real learning, and the real fun, started when we went out into the world. We spoke with inventors, investors, entrepreneurs, engineers, scientists, and many others who make technology and put it to work.
Thanks to their openness and generosity, we had some futuristic experiences in today's incredible environment of digital innovation. We've ridden in a driverless car, watched a computer beat teams of Harvard and MIT students in a game of Jeopardy!, trained an industrial robot by grabbing its wrist and guiding it through a series of steps, handled a beautiful metal bowl that was made in a 3D printer, and had countless other mind-melting encounters with technology.

Where We Are

This work led us to three broad conclusions. The first is that we're living in a time of astonishing progress with digital technologies—those that have computer hardware, software, and networks at their core. These technologies are not brand-new; businesses have been buying computers for more than half a century, and Time magazine declared the personal computer its "Machine of the Year" in 1982. But just as it took generations to improve the steam engine to the point that it could power the Industrial Revolution, it's also taken time to refine our digital engines. We'll show why and how the full force of these technologies has recently been achieved and give examples of its power. "Full," though, doesn't mean "mature." Computers are going to continue to improve and to do new and unprecedented things. By "full force," we mean simply that the key building blocks are already in place for digital technologies to be as important and transformational to society and the economy as the steam engine. In short, we're at an inflection point—a point where the curve starts to bend a lot—because of computers. We are entering a second machine age. Our second conclusion is that the transformations brought about by digital technology will be profoundly beneficial ones.
We’re heading into an era that won’t just be different; it will be better, because we’ll be able to increase both the variety and the volume of our consumption. When we phrase it that way—in the dry vocabulary of economics—it almost sounds unappealing. Who wants to consume more and more all the time? But we don’t just consume calories and gasoline. We also consume information from books and friends, entertainment from superstars and amateurs, expertise from teachers and doctors, and countless other things that are not made of atoms. Technology can bring us more choice and even freedom. When these things are digitized—when they’re converted into bits that can be stored on a computer and sent over a network—they acquire some weird and wonderful properties. They’re subject to different economics, where abundance is the norm rather than scarcity. As we’ll show, digital goods are not like physical ones, and these differences matter. Of course, physical goods are still essential, and most of us would like them to have greater volume, variety, and quality. Whether or not we want to eat more, we’d like to eat better or different meals. Whether or not we want to burn more fossil fuels, we’d like to visit more places with less hassle. Computers are helping accomplish these goals, and many others. Digitization is improving the physical world, and these improvements are only going to become more important. Among economic historians there’s wide agreement that, as Martin Weitzman puts it, “the long-term growth of an advanced economy is dominated by the behavior of technical progress.” 12 As we’ll show, technical progress is improving exponentially. Our third conclusion is less optimistic: digitization is going to bring with it some thorny challenges. This in itself should not be too surprising or alarming; even the most beneficial developments have unpleasant consequences that must be managed. 
The Industrial Revolution was accompanied by soot-filled London skies and horrific exploitation of child labor. What will be their modern equivalents? Rapid and accelerating digitization is likely to bring economic rather than environmental disruption, stemming from the fact that as computers get more powerful, companies have less need for some kinds of workers. Technological progress is going to leave behind some people, perhaps even a lot of people, as it races ahead. As we’ll demonstrate, there’s never been a better time to be a worker with special skills or the right education, because these people can use technology to create and capture value. However, there’s never been a worse time to be a worker with only ‘ordinary’ skills and abilities to offer, because computers, robots, and other digital technologies are acquiring these skills and abilities at an extraordinary rate. Over time, the people of England and other countries concluded that some aspects of the Industrial Revolution were unacceptable and took steps to end them (democratic government and technological progress both helped with this). Child labor no longer exists in the UK, and London air contains less smoke and sulfur dioxide now than at any time since at least the late 1500s.13 The challenges of the digital revolution can also be met, but first we have to be clear on what they are. It’s important to discuss the likely negative consequences of the second machine age and start a dialogue about how to mitigate them—we are confident that they’re not insurmountable. But they won’t fix themselves, either. We’ll offer our thoughts on this important topic in the chapters to come. So this is a book about the second machine age unfolding right now—an inflection point in the history of our economies and societies because of digitization. 
It's an inflection point in the right direction—bounty instead of scarcity, freedom instead of constraint—but one that will bring with it some difficult challenges and choices. This book is divided into three sections. The first, composed of chapters 1 through 6, describes the fundamental characteristics of the second machine age. These chapters give many examples of recent technological progress that seem like the stuff of science fiction, explain why they're happening now (after all, we've had computers for decades), and reveal why we should be confident that the scale and pace of innovation in computers, robots, and other digital gear is only going to accelerate in the future. The second part, consisting of chapters 7 through 11, explores bounty and spread, the two economic consequences of this progress. Bounty is the increase in volume, variety, and quality and the decrease in cost of the many offerings brought on by modern technological progress. It's the best economic news in the world today. Spread, however, is not so great; it's ever-bigger differences among people in economic success—in wealth, income, mobility, and other important measures. Spread has been increasing in recent years. This is a troubling development for many reasons, and one that will accelerate in the second machine age unless we intervene. The final section—chapters 12 through 15—discusses what interventions will be appropriate and effective for this age. Our economic goals should be to maximize the bounty while mitigating the negative effects of the spread. We'll offer our ideas about how to best accomplish these aims, both in the short term and in the more distant future, when progress really has brought us into a world so technologically advanced that it seems to be the stuff of science fiction. As we stress in our concluding chapter, the choices we make from now on will determine what kind of world that is.
* Morris defines human social development as consisting of four attributes: energy capture (per-person calories obtained from the environment for food, home and commerce, industry and agriculture, and transportation), organization (the size of the largest city), war-making capacity (number of troops, power and speed of weapons, logistical capabilities, and other similar factors), and information technology (the sophistication of available tools for sharing and processing information, and the extent of their use). Each of these is converted into a number that varies over time from zero to 250. Overall social development is simply the sum of these four numbers. Because he was interested in comparisons between the West (Europe, Mesopotamia, and North America at various times, depending on which was most advanced) and the East (China and Japan), he calculated social development separately for each area from 14,000 BCE to 2000 CE. In 2000, the East was higher only in organization (since Tokyo was the world's largest city) and had a social development score of 564.83. The West's score in 2000 was 906.37. We average the two scores.

* We refer to the Industrial Revolution as the first machine age. However, "the machine age" is also a label used by some economic historians to refer to a period of rapid technological progress spanning the late nineteenth and early twentieth centuries. This same period is called by others the Second Industrial Revolution, which is how we'll refer to it in later chapters.

"Any sufficiently advanced technology is indistinguishable from magic." —Arthur C. Clarke

IN 2012, we went for a drive in a car that had no driver. During a research visit to Google's Silicon Valley headquarters, we got to ride in one of the company's autonomous vehicles, developed as part of its Chauffeur project.
Initially we had visions of cruising in the back seat of a car that had no one in the front seat, but Google is understandably skittish about putting obviously autonomous autos on the road. Doing so might freak out pedestrians and other drivers, or attract the attention of the police. So we sat in the back while two members of the Chauffeur team rode up front. When one of the Googlers hit the button that switched the car into fully automatic driving mode while we were headed down Highway 101, our curiosities—and self-preservation instincts—engaged. The 101 is not always a predictable or calm environment. It’s nice and straight, but it’s also crowded most of the time, and its traffic flows have little obvious rhyme or reason. At highway speeds the consequences of driving mistakes can be serious ones. Since we were now part of the ongoing Chauffeur experiment, these consequences were suddenly of more than just intellectual interest to us. The car performed flawlessly. In fact, it actually provided a boring ride. It didn’t speed or slalom among the other cars; it drove exactly the way we’re all taught to in driver’s ed. A laptop in the car provided a real-time visual representation of what the Google car ‘saw’ as it proceeded along the highway—all the nearby objects of which its sensors were aware. The car recognized all the surrounding vehicles, not just the nearest ones, and it remained aware of them no matter where they moved. It was a car without blind spots. But the software doing the driving was aware that cars and trucks driven by humans do have blind spots. The laptop screen displayed the software’s best guess about where all these blind spots were and worked to stay out of them. We were staring at the screen, paying no attention to the actual road, when traffic ahead of us came to a complete stop. The autonomous car braked smoothly in response, coming to a stop a safe distance behind the car in front, and started moving again once the rest of the traffic did. 
All the while the Googlers in the front seat never stopped their conversation or showed any nervousness, or indeed much interest at all in current highway conditions. Their hundreds of hours in the car had convinced them that it could handle a little stop-and-go traffic. By the time we pulled back into the parking lot, we shared their confidence.

The New New Division of Labor

Our ride that day on the 101 was especially weird for us because, only a few years earlier, we were sure that computers would not be able to drive cars. Excellent research and analysis, conducted by colleagues whom we respect a great deal, concluded that driving would remain a human task for the foreseeable future. How they reached this conclusion, and how technologies like Chauffeur started to overturn it in just a few years, offers important lessons about digital progress. In 2004 Frank Levy and Richard Murnane published their book The New Division of Labor.1 The division they focused on was between human and digital labor—in other words, between people and computers. In any sensible economic system, people should focus on the tasks and jobs where they have a comparative advantage over computers, leaving computers the work for which they are better suited. In their book Levy and Murnane offered a way to think about which tasks fell into each category. One hundred years ago the previous paragraph wouldn't have made any sense. Back then, computers were humans. The word was originally a job title, not a label for a type of machine. Computers in the early twentieth century were people, usually women, who spent all day doing arithmetic and tabulating the results. Over the course of decades, innovators designed machines that could take over more and more of this work; they were first mechanical, then electro-mechanical, and eventually digital. Today, few people if any are employed simply to do arithmetic and record the results.
Even in the lowest-wage countries there are no human computers, because the nonhuman ones are far cheaper, faster, and more accurate. If you examine their inner workings, you realize that computers aren't just number crunchers, they're symbol processors. Their circuitry can be interpreted in the language of ones and zeroes, but equally validly as true or false, yes or no, or any other symbolic system. In principle, they can do all manner of symbolic work, from math to logic to language. But digital novelists are not yet available, so people still write all the books that appear on fiction bestseller lists. We also haven't yet computerized the work of entrepreneurs, CEOs, scientists, nurses, restaurant busboys, or many other types of workers. Why not? What is it about their work that makes it harder to digitize than what human computers used to do?

Computers Are Good at Following Rules . . .

These are the questions Levy and Murnane tackled in The New Division of Labor, and the answers they came up with made a great deal of sense. The authors put information processing tasks—the foundation of all knowledge work—on a spectrum. At one end are tasks like arithmetic that require only the application of well-understood rules. Since computers are really good at following rules, it follows that they should do arithmetic and similar tasks. Levy and Murnane go on to highlight other types of knowledge work that can also be expressed as rules. For example, a person's credit score is a good general predictor of whether they'll pay back their mortgage as promised, as is the amount of the mortgage relative to the person's wealth, income, and other debts. So the decision about whether or not to give someone a mortgage can be effectively boiled down to a rule.
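Levy and Murnane's observation that a mortgage decision "can be effectively boiled down to a rule" translates directly into code. Here is a minimal sketch of such a rule; the function name, thresholds, and ratios below are invented for illustration and are not taken from their text:

```python
def approve_mortgage(amount, credit_score, annual_income, wealth, total_debt):
    """Toy rule-based mortgage decision. Every cutoff here is an
    illustrative assumption, not a real underwriting standard."""
    min_score = 700                      # credit score must be at least this high
    income_floor = 0.25 * amount         # income must exceed a quarter of the loan...
    wealth_floor = amount                # ...or wealth must exceed the loan amount
    debt_ceiling = 0.5 * annual_income   # existing debt must stay manageable

    return (credit_score >= min_score
            and (annual_income > income_floor or wealth > wealth_floor)
            and total_debt <= debt_ceiling)

print(approve_mortgage(200_000, 720, 60_000, 10_000, 20_000))  # True
print(approve_mortgage(200_000, 650, 60_000, 10_000, 20_000))  # False: low score
```

Because every input and cutoff is explicit, the decision is purely mechanical; no pattern recognition is required, which is exactly why tasks like this sit at the easily computerized end of Levy and Murnane's spectrum.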
Expressed in words, a mortgage rule might say, "If a person is requesting a mortgage of amount M and they have a credit score of V or higher, annual income greater than I or total wealth greater than W, and total debt no greater than D, then approve the request." When expressed in computer code, we call a mortgage rule like this an algorithm. Algorithms are simplifications; they can't and don't take everything into account (like a billionaire uncle who has included the applicant in his will and likes to rock-climb without ropes). Algorithms do, however, include the most common and important things, and they generally work quite well at tasks like predicting payback rates. Computers, therefore, can and should be used for mortgage approval.*

. . . But Lousy at Pattern Recognition

At the other end of Levy and Murnane's spectrum, however, lie information processing tasks that cannot be boiled down to rules or algorithms. According to the authors, these are tasks that draw on the human capacity for pattern recognition. Our brains are extraordinarily good at taking in information via our senses and examining it for patterns, but we're quite bad at describing or figuring out how we're doing it, especially when a large volume of fast-changing information arrives at a rapid pace. As the philosopher Michael Polanyi famously observed, "We know more than we can tell."2 When this is the case, according to Levy and Murnane, tasks can't be computerized and will remain in the domain of human workers. The authors cite driving a vehicle in traffic as an example of such a task. As they write,

As the driver makes his left turn against traffic, he confronts a wall of images and sounds generated by oncoming cars, traffic lights, storefronts, billboards, trees, and a traffic policeman. Using his knowledge, he must estimate the size and position of each of these objects and the likelihood that they pose a hazard. . . .
The truck driver [has] the schema to recognize what [he is] confronting. But articulating this knowledge and embedding it in software for all but highly structured situations are at present enormously difficult tasks. . . . Computers cannot easily substitute for humans in [jobs like driving].

So Much for That Distinction

We were convinced by Levy and Murnane's arguments when we read The New Division of Labor in 2004. We were further convinced that year by the initial results of the DARPA Grand Challenge for driverless cars. DARPA, the Defense Advanced Research Projects Agency, was founded in 1958 (in response to the Soviet Union's launch of the Sputnik satellite) and tasked with spurring technological progress that might have military applications. In 2002 the agency announced its first Grand Challenge, which was to build a completely autonomous vehicle that could complete a 150-mile course through California's Mojave Desert. Fifteen entrants performed well enough in a qualifying run to compete in the main event, which was held on March 13, 2004. The results were less than encouraging. Two vehicles didn't make it to the starting area, one flipped over in the starting area, and three hours into the race only four cars were still operational. The "winning" Sandstorm car from Carnegie Mellon University covered 7.4 miles (less than 5 percent of the total) before veering off the course during a hairpin turn and getting stuck on an embankment. The contest's $1 million prize went unclaimed, and Popular Science called the event "DARPA's Debacle in the Desert."3 Within a few years, however, the debacle in the desert became the 'fun on the 101' that we experienced. Google announced in an October 2010 blog post that its completely autonomous cars had for some time been driving successfully, in traffic, on American roads and highways.
By the time we took our ride in the summer of 2012 the Chauffeur project had grown into a small fleet of vehicles that had collectively logged hundreds of thousands of miles with no human involvement and with only two accidents. One occurred when a person was driving the Chauffeur car; the other happened when a Google car was rear-ended (by a human driver) while stopped at a red light.4 To be sure, there are still many situations that Google's cars can't handle, particularly complicated city traffic or off-road driving or, for that matter, any location that has not already been meticulously mapped in advance by Google. But our experience on the highway convinced us that it's a viable approach for the large and growing set of everyday driving situations. Self-driving cars went from being the stuff of science fiction to on-the-road reality in a few short years. Cutting-edge research explaining why they were not coming anytime soon was outpaced by cutting-edge science and engineering that brought them into existence, again in the space of a few short years. This science and engineering accelerated rapidly, going from a debacle to a triumph in a little more than half a decade. Improvement in autonomous vehicles reminds us of Hemingway's quote about how a man goes broke: "Gradually and then suddenly."5 And self-driving cars are not an anomaly; they're part of a broad, fascinating pattern. Progress on some of the oldest and toughest challenges associated with computers, robots, and other digital gear was gradual for a long time. Then in the past few years it became sudden; digital gear started racing ahead, accomplishing tasks it had always been lousy at and displaying skills it was not supposed to acquire anytime soon. Let's look at a few more examples of surprising recent technological progress.
Good Listeners and Smooth Talkers

In addition to pattern recognition, Levy and Murnane highlight complex communication as a domain that would stay on the human side in the new division of labor. They write that, “Conversations critical to effective teaching, managing, selling, and many other occupations require the transfer and interpretation of a broad range of information. In these cases, the possibility of exchanging information with a computer, rather than another human, is a long way off.”6 In the fall of 2011, Apple introduced the iPhone 4S featuring “Siri,” an intelligent personal assistant that worked via a natural-language user interface. In other words, people talked to it just as they would talk to another human being. The software underlying Siri, which originated at the California research institute SRI International and was purchased by Apple in 2010, listened to what iPhone users were saying to it, tried to identify what they wanted, then took action and reported back to them in a synthetic voice. After Siri had been out for about eight months, Kyle Wagner of technology blog Gizmodo listed some of its most useful capabilities: “You can ask about the scores of live games—‘What’s the score of the Giants game?’—or about individual player stats. You can also make OpenTable reservations, get Yelp scores, ask about what movies are playing at a local theater and then see a trailer. If you’re busy and can’t take a call, you can ask Siri to remind you to call the person back later. This is the kind of everyday task for which voice commands can actually be incredibly useful.”7 The Gizmodo post ended with caution: “That actually sounds pretty cool. Just with the obvious Siri criterion: If it actually works.”8 Upon its release, a lot of people found that Apple’s intelligent personal assistant didn’t work well.
It didn’t understand what they were saying, asked for repeated clarifications, gave strange or inaccurate answers, and put them off with responses like “I’m really sorry about this, but I can’t take any requests right now. Please try again in a little while.” Analyst Gene Munster catalogued questions with which Siri had trouble:

• Where is Elvis buried? Responded, “I can’t answer that for you.” It thought the person’s name was Elvis Buried.
• When did the movie Cinderella come out? Responded with a movie theater search on Yelp.
• When is the next Halley’s Comet? Responded, “You have no meetings matching Halley’s.”
• I want to go to Lake Superior. Responded with directions to the company Lake Superior X-Ray.9

Siri’s sometimes bizarre and frustrating responses became well known, but the power of the technology is undeniable. It can come to your aid exactly when you need it. On the same trip that afforded us some time in an autonomous car, we saw this firsthand. After a meeting in San Francisco, we hopped in our rental car to drive down to Google’s headquarters in Mountain View. We had a portable GPS device with us, but didn’t plug it in and turn it on because we thought we knew how to get to our next destination. We didn’t, of course. Confronted with an Escherian maze of elevated highways, off-ramps, and surface streets, we drove around looking for an on-ramp while tensions mounted. Just when our meeting at Google, this book project, and our professional relationship seemed in serious jeopardy, Erik pulled out his phone and asked Siri for “directions to U.S. 101 South.” The phone responded instantly and flawlessly: the screen turned into a map showing where we were and how to find the elusive on-ramp. We could have pulled over, found the portable GPS and turned it on, typed in our destination, and waited for our routing, but we didn’t want to exchange information that way. We wanted to speak a question and hear and see (because a map was involved) a reply.
Siri provided exactly the natural language interaction we were looking for. A 2004 review of the previous half-century’s research in automatic speech recognition (a critical part of natural language processing) opened with the admission that “Human-level speech recognition has proved to be an elusive goal,” but less than a decade later major elements of that goal have been reached. Apple and other companies have made robust natural language processing technology available to hundreds of millions of people via their mobile phones.10 As noted by Tom Mitchell, who heads the machine-learning department at Carnegie Mellon University: “We’re at the beginning of a ten-year period where we’re going to transition from computers that can’t understand language to a point where computers can understand quite a bit about language.”11

Digital Fluency: The Babel Fish Goes to Work

Natural language processing software is still far from perfect, and computers are not yet as good as people at complex communication, but they’re getting better all the time. And in tasks like translation from one language to another, surprising developments are underway: while computers’ communication abilities are not as deep as those of the average human being, they’re much broader. A person who speaks more than one language can usually translate between them with reasonable accuracy. Automatic translation services, on the other hand, are impressive but rarely error-free. Even if your French is rusty, you can probably do better than Google Translate with the sentence “Monty Python’s ‘Dirty Hungarian Phrasebook’ sketch is one of their funniest ones.” Google offered, “Sketch des Monty Python ‘Phrasebook sale hongrois’ est l’un des plus drôles les leurs.” This conveys the main gist, but has serious grammatical problems.
There is less chance you could have made progress translating this sentence (or any other) into Hungarian, Arabic, Chinese, Russian, Norwegian, Malay, Yiddish, Swahili, Esperanto, or any of the other sixty-three languages besides French that are part of the Google Translate service. But Google will attempt a translation of text from any of these languages into any other, instantaneously and at no cost for anyone with Web access.12 The Translate service’s smartphone app lets users speak more than fifteen of these languages into the phone and, in response, will produce synthesized, translated speech in more than half of the fifteen. It’s a safe bet that even the world’s most multilingual person can’t match this breadth. For years instantaneous translation utilities have been the stuff of science fiction (most notably The Hitchhiker’s Guide to the Galaxy’s Babel Fish, a strange creature that, once inserted in the ear, allows a person to understand speech in any language).13 Google Translate and similar services are making it a reality today. In fact, at least one such service is being used right now to facilitate international customer service interactions. The translation services company Lionbridge has partnered with IBM to offer GeoFluent, an online application that instantly translates chats between customers and troubleshooters who do not share a language. In an initial trial, approximately 90 percent of GeoFluent users reported that it was good enough for business purposes.14

Human Superiority in Jeopardy!

Computers are now combining pattern matching with complex communication to quite literally beat people at their own games. The February 14 and 15, 2011, episodes of the TV game show Jeopardy! included a contestant that was not a human being. It was a supercomputer called Watson, developed by IBM specifically to play the game (and named in honor of legendary IBM CEO Thomas Watson, Sr.). Jeopardy!
debuted in 1964 and in 2012 was the fifth most popular syndicated TV program in America.15 On a typical day almost 7 million people watch host Alex Trebek ask trivia questions on various topics as contestants vie to be the first to answer them correctly.* The show’s longevity and popularity stem from its being easy to understand yet extremely hard to play well. Almost everyone knows the answers to some of the questions in a given episode, but very few people know the answers to almost all of them. Questions cover a wide range of topics, and contestants are not told in advance what those topics will be. Players also have to be simultaneously fast, bold, and accurate—fast because they compete against one another for the chance to answer each question; bold because they have to try to answer a lot of questions, especially harder ones, in order to accumulate enough money to win; and accurate because money is subtracted for each incorrect answer. Jeopardy!’s producers further challenge contestants with puns, rhymes, and other kinds of wordplay. A clue might ask, for example, for “A rhyming reminder of the past in the city of the NBA’s Kings.” 16 To answer correctly, a player would have to know what the acronym NBA stood for (in this case, it’s the National Basketball Association, not the National Bank Act or chemical compound n-Butylamine), which city the NBA’s Kings play in (Sacramento), and that the clue’s demand for a rhyming reminder of the past meant that the right answer is “What is a Sacramento memento?” instead of a “Sacramento souvenir” or any other factually correct response. Responding correctly to clues like these requires mastery of pattern matching and complex communication. And winning at Jeopardy! requires doing both things repeatedly, accurately, and almost instantaneously. During the 2011 shows, Watson competed against Ken Jennings and Brad Rutter, two of the best knowledge workers in this esoteric industry. Jennings won Jeopardy! 
a record seventy-four times in a row in 2004, taking home more than $3,170,000 in prize money and becoming something of a folk hero along the way.17 In fact, Jennings is sometimes given credit for the existence of Watson.18 According to one story circulating within IBM, Charles Lickel, a research manager at the company interested in pushing the frontiers of artificial intelligence, was having dinner in a steakhouse in Fishkill, New York, one night in the fall of 2004. At 7 p.m., he noticed that many of his fellow diners got up and went into the adjacent bar. When he followed them to find out what was going on, he saw that they were clustered in front of the bar’s TV watching Jennings extend his winning streak beyond fifty matches. Lickel saw that a match between Jennings and a Jeopardy!-playing supercomputer would be extremely popular, in addition to being a stern test of a computer’s pattern matching and complex communication abilities. Since Jeopardy! is a three-way contest, the ideal third contestant would be Brad Rutter, who beat Jennings in the show’s 2005 Ultimate Tournament of Champions and won more than $3,400,000.19 Both men had packed their brains with information of all kinds, were deeply familiar with the game and all of its idiosyncrasies, and knew how to handle pressure. These two humans would be tough for any machine to beat, and the first versions of Watson weren’t even close. Watson could be ‘tuned’ by its programmers to be either more aggressive in answering questions (and hence more likely to be wrong) or more conservative and accurate. In December 2006, shortly after the project started, when Watson was tuned to try to answer 70 percent of the time (a relatively aggressive approach) it was only able to come up with the right response approximately 15 percent of the time. 
Jennings, in sharp contrast, answered about 90 percent of questions correctly in games when he buzzed in first (in other words, won the right to respond) 70 percent of the time.20 But Watson turned out to be a very quick learner. The supercomputer’s performance on the aggression vs. accuracy tradeoff improved quickly, and by November 2010, when it was aggressive enough to win the right to answer 70 percent of a simulated match’s total questions, it answered about 85 percent of them correctly. This was impressive improvement, but it still didn’t put the computer in the same league as the best human players. The Watson team kept working until mid-January of 2011, when the matches were recorded for broadcast in February, but no one knew how well their creation would do against Jennings and Rutter. Watson trounced them both. It correctly answered questions on topics ranging from “Olympic Oddities” (responding “pentathlon” to “A 1976 entry in the ‘modern’ this was kicked out for wiring his epee to score points without touching his foe”) to “Church and State” (realizing that the answers all contained one or the other of these words, the computer answered “gestate” when told “It can mean to develop gradually in the mind or to carry during pregnancy”). While the supercomputer was not perfect (for example, it answered “chic” instead of “class” when asked about “stylish elegance, or students who all graduated in the same year” as part of the category “Alternate Meanings”), it was very good. Watson was also extremely fast, repeatedly buzzing in before Jennings and Rutter to win the right to answer questions. In the first of the two games played, for example, Watson buzzed in first 43 times, then answered correctly 38 times. Jennings and Rutter combined to buzz in only 33 times over the course of the same game.21 At the end of the two-day tournament, Watson had amassed $77,147, more than three times as much as either of its human opponents. 
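The aggressiveness-versus-accuracy tuning described above amounts to setting a confidence threshold for buzzing in: buzz on more questions and accuracy falls; buzz only on sure things and accuracy rises. A toy simulation makes the tradeoff concrete (a hypothetical sketch with invented numbers and names, not IBM’s actual tuning code):

```python
import random

def simulate(buzz_threshold, n_questions=10_000, seed=42):
    """Toy contestant: each question yields a confidence score in [0, 1];
    the player buzzes in whenever confidence exceeds the threshold, and
    higher confidence means a better chance of answering correctly."""
    rng = random.Random(seed)
    buzzes = correct = 0
    for _ in range(n_questions):
        confidence = rng.random()
        if confidence > buzz_threshold:
            buzzes += 1
            if rng.random() < confidence:  # correctness tracks confidence
                correct += 1
    buzz_rate = buzzes / n_questions
    accuracy = correct / buzzes if buzzes else 0.0
    return buzz_rate, accuracy
```

In this toy model, lowering the threshold pushes the buzz rate toward Watson’s aggressive 70 percent but drags accuracy down, which is the shape of the curve the Watson team spent four years flattening.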
Jennings, who came in second, added a personal note on his answer to the tournament’s final question: “I for one welcome our new computer overlords.” He later elaborated, “Just as factory jobs were eliminated in the twentieth century by new assembly-line robots, Brad and I were the first knowledge-industry workers put out of work by the new generation of ‘thinking’ machines. ‘Quiz show contestant’ may be the first job made redundant by Watson, but I’m sure it won’t be the last.”22

The Paradox of Robotic ‘Progress’

A final important area where we see a rapid recent acceleration in digital improvement is robotics—building machines that can navigate through and interact with the physical world of factories, warehouses, battlefields, and offices. Here again we see progress that was very gradual, then sudden. The word robot entered the English language via the 1921 Czech play, R.U.R. (Rossum’s “Universal” Robots) by Karel Capek, and automatons have been an object of human fascination ever since.23 During the Great Depression, magazine and newspaper stories speculated that robots would wage war, commit crimes, displace workers, and even beat boxer Jack Dempsey.24 Isaac Asimov coined the term robotics in 1941 and provided ground rules for the young discipline the following year with his famous Three Laws of Robotics:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.25

Asimov’s enormous influence on both science fiction and real-world robot-making has persisted for seventy years. But one of those two communities has raced far ahead of the other.
Science fiction has given us the chatty and loyal R2-D2 and C-3PO, Battlestar Galactica’s ominous Cylons, the terrible Terminator, and endless varieties of androids, cyborgs, and replicants. Decades of robotics research, in contrast, gave us Honda’s ASIMO, a humanoid robot best known for a spectacularly failed demo that showcased its inability to follow Asimov’s third law. At a 2006 presentation to a live audience in Tokyo, ASIMO attempted to walk up a shallow flight of stairs that had been placed on the stage. On the third step, the robot’s knees buckled and it fell over backward, smashing its faceplate on the floor.26 ASIMO has since recovered and demonstrated skills like walking up and down stairs, kicking a soccer ball, and dancing, but its shortcomings highlight a broad truth: a lot of the things humans find easy and natural to do in the physical world have been remarkably difficult for robots to master. As the roboticist Hans Moravec has observed, “It is comparatively easy to make computers exhibit adult-level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.” 27 This situation has come to be known as Moravec’s paradox, nicely summarized by Wikipedia as “the discovery by artificial intelligence and robotics researchers that, contrary to traditional assumptions, high-level reasoning requires very little computation, but low-level sensorimotor skills require enormous computational resources.” 28* Moravec’s insight is broadly accurate, and important. As the cognitive scientist Steven Pinker puts it, “The main lesson of thirty-five years of AI research is that the hard problems are easy and the easy problems are hard. . . . As the new generation of intelligent devices appears, it will be the stock analysts and petrochemical engineers and parole board members who are in danger of being replaced by machines. 
The gardeners, receptionists, and cooks are secure in their jobs for decades to come.”29 Pinker’s point is that robotics experts have found it fiendishly difficult to build machines that match the skills of even the least-trained manual worker. iRobot’s Roomba, for example, can’t do everything a maid does; it just vacuums the floor. More than ten million Roombas have been sold, but none of them is going to straighten the magazines on a coffee table. When it comes to work in the physical world, humans also have a huge flexibility advantage over machines. Automating a single activity, like soldering a wire onto a circuit board or fastening two parts together with screws, is pretty easy, but that task must remain constant over time and take place in a ‘regular’ environment. For example, the circuit board must show up in exactly the same orientation every time. Companies buy specialized machines for tasks like these, have their engineers program and test them, then add them to their assembly lines. Each time the task changes—each time the location of the screw holes moves, for example—production must stop until the machinery is reprogrammed. Today’s factories, especially large ones in high-wage countries, are highly automated, but they’re not full of general-purpose robots. They’re full of dedicated, specialized machinery that’s expensive to buy, configure, and reconfigure.

Rethinking Factory Automation

Rodney Brooks, who co-founded iRobot, noticed something else about modern, highly automated factory floors: people are scarce, but they’re not absent. And a lot of the work they do is repetitive and mindless. On a line that fills up jelly jars, for example, machines squirt a precise amount of jelly into each jar, screw on the top, and stick on the label, but a person places the empty jars on the conveyor belt to start the process. Why hasn’t this step been automated?
Because in this case the jars are delivered to the line twelve at a time in cardboard boxes that don’t hold them firmly in place. This imprecision presents no problem to a person (who simply sees the jars in the box, grabs them, and puts them on the conveyor belt), but traditional industrial automation has great difficulty with jelly jars that don’t show up in exactly the same place every time. In 2008 Brooks founded a new company, Rethink Robotics, to pursue and build untraditional industrial automation: robots that can pick and place jelly jars and handle the countless other imprecise tasks currently done by people in today’s factories. His ambition is to make some progress against Moravec’s paradox. What’s more, Brooks envisions creating robots that won’t need to be programmed by high-paid engineers; instead, the machines can be taught to do a task (or retaught to do a new one) by shop floor workers, each of whom needs less than an hour of training to learn how to instruct their new mechanical colleagues. Brooks’s machines are cheap, too. At about $20,000, they’re a small fraction of the cost of current industrial robots. We got a sneak peek at these potential paradox-busters shortly before Rethink’s public unveiling of their first line of robots, named Baxter. Brooks invited us to the company’s headquarters in Boston to see these automatons, and to see what they could do. Baxter is instantly recognizable as a humanoid robot. It has two burly, jointed arms with claw-like grips for hands; a torso; and a head with an LCD face that swivels to ‘look at’ the nearest person. It doesn’t have legs, though; Rethink sidestepped the enormous challenges of automatic locomotion by putting Baxter on wheels and having it rely on people to get from place to place. The company’s analyses suggest that it can still do lots of useful work without the ability to move under its own power.
To train Baxter, you grab it by the wrist and guide the arm through the motions you want it to carry out. As you do this, the arm seems weightless; its motors are working so you don’t have to. The robot also maintains safety; the two arms can’t collide (the motors resist you if you try to make this happen) and they automatically slow down if Baxter senses a person within their range. These and many other design features make working with this automaton a natural, intuitive, and nonthreatening experience. When we first approached it, we were nervous about catching a robot arm to the face, but this apprehension faded quickly, replaced by curiosity. Brooks showed us several Baxters at work in the company’s demo area. They were blowing past Moravec’s paradox—sensing and manipulating lots of different objects with ‘hands’ ranging from grips to suction cups. The robots aren’t as fast or fluid as a well-trained human worker at full speed, but they might not need to be. Most conveyor belts and assembly lines do not operate at full human speed; they would tire people out if they did. Baxter has a few obvious advantages over human workers. It can work all day every day without needing sleep, lunch, or coffee breaks. It also won’t demand healthcare from its employer or add to the payroll tax burden. And it can do two completely unrelated things at once; its two arms are capable of operating independently.

Coming Soon to Assembly Lines, Warehouses, and Hallways Near You

After visiting Rethink and seeing Baxter in action, we understood why Texas Instruments Vice President Remi El-Ouazzane said in early 2012, “We have a firm belief that the robotics market is on the cusp of exploding.” There’s a lot of evidence to support his view.
The volume and variety of robots in use at companies is expanding rapidly, and innovators and entrepreneurs have recently made deep inroads against Moravec’s paradox.30 Kiva, another young Boston-area company, has taught its automatons to move around warehouses safely, quickly, and effectively. Kiva robots look like metal ottomans or squashed R2-D2s. They scuttle around buildings at about knee-height, staying out of the way of humans and one another. They’re low to the ground so they can scoot underneath shelving units, lift them up, and bring them to human workers. After these workers grab the products they need, the robot whisks the shelf away and another shelf-bearing robot takes its place. Software tracks where all the products, shelves, robots, and people are in the warehouse, and orchestrates the continuous dance of the Kiva automatons. In March of 2012, Kiva was acquired by Amazon—a leader in advanced warehouse logistics—for more than $750 million in cash.31 Boston Dynamics, yet another New England startup, has tackled Moravec’s paradox head-on. The company builds robots aimed at supporting American troops in the field by, among other things, carrying heavy loads over rough terrain. Its BigDog, which looks like a giant metal mastiff with long skinny legs, can go up steep hills, recover from slips on ice, and do other very dog-like things. Balancing a heavy load on four points while moving over an uneven landscape is a truly nasty engineering problem, but Boston Dynamics has been making good progress. As a final example of recent robotic progress, consider the Double, which is about as different from the BigDog as possible. Instead of trotting through rough enemy terrain, the Double rolls over cubicle carpets and hospital hallways carrying an iPad. It’s essentially an upside-down pendulum with motorized wheels at the bottom and a tablet at the top of a four- to five-foot stick.
The Double provides telepresence—it lets the operator ‘walk around’ a distant building and see and hear what’s going on. The camera, microphone, and screen of the iPad serve as the eyes, ears, and face of the operator, who sees and hears what the iPad sees and hears. The Double itself acts as the legs, transporting the whole assembly around in response to commands from the operator. Double Robotics calls it “the simplest, most elegant way to be somewhere else in the world without flying there.” The first batch of Doubles, priced at $2,499, sold out soon after the technology was announced in the fall of 2012.32 The next round of robotic innovation might put the biggest dent in Moravec’s paradox ever. In 2012 DARPA announced another Grand Challenge; instead of autonomous cars, this one was about automatons. The DARPA Robotics Challenge (DRC) combined tool use, mobility, sensing, telepresence, and many other long-standing challenges in the field. According to the website of the agency’s Tactical Technology Office,

The primary technical goal of the DRC is to develop ground robots capable of executing complex tasks in dangerous, degraded, human-engineered environments. Competitors in the DRC are expected to focus on robots that can use standard tools and equipment commonly available in human environments, ranging from hand tools to vehicles, with an emphasis on adaptability to tools with diverse specifications.33

With the DRC, DARPA is asking the robotics community to build and demonstrate high-functioning humanoid robots by the end of 2014. According to an initial specification supplied by the agency, they will have to be able to drive a utility vehicle, remove debris blocking an entryway, climb a ladder, close a valve, and replace a pump.34 These seem like impossible requirements, but we’ve been assured by highly knowledgeable colleagues—ones competing in the DRC, in fact—that they’ll be met.
Many saw the 2004 Grand Challenge as instrumental in accelerating progress with autonomous vehicles. There’s an excellent chance that the DRC will be similarly important at getting us past Moravec’s paradox.

More Evidence That We’re at an Inflection Point

Self-driving cars, Jeopardy! champion supercomputers, and a variety of useful robots have all appeared just in the past few years. And these innovations are not just lab demos; they’re showing off their skills and abilities in the messy real world. They contribute to the impression that we’re at an inflection point—a bend in the curve where many technologies that used to be found only in science fiction are becoming everyday reality. As many other examples show, this is an accurate impression. On the Star Trek television series, devices called tricorders were used to scan and record three kinds of data: geological, meteorological, and medical. Today’s consumer smartphones serve all these purposes; they can be put to work as seismographs, real-time weather radar maps, and heart- and breathing-rate monitors.35 And, of course, they’re not limited to these domains. They also work as media players, game platforms, reference works, cameras, and GPS devices. On Star Trek, tricorders and person-to-person communicators were separate devices, but in the real world the two have merged in the smartphone. They enable their users to simultaneously access and generate huge amounts of information as they move around. This opens up the opportunity for innovations that venture capitalist John Doerr calls “SoLoMo”—social, local, and mobile.36 Computers historically have been very bad at writing real prose. In recent times they have been able to generate grammatically correct but meaningless sentences, a state of affairs that’s been mercilessly exploited by pranksters.
In 2008, for example, the International Conference on Computer Science and Software Engineering accepted the paper “Towards the Simulation of E-commerce” and invited its author to chair a session. This paper was ‘written’ by SCIgen, a program from the MIT Computer Science and Artificial Intelligence Lab that “generates random Computer Science research papers.” SCIgen’s authors wrote that, “Our aim here is to maximize amusement, rather than coherence,” and after reading the abstract of “Towards the Simulation of E-commerce” it’s hard to argue with them:37

Recent advances in cooperative technology and classical communication are based entirely on the assumption that the Internet and active networks are not in conflict with object-oriented languages. In fact, few information theorists would disagree with the visualization of DHTs that made refining and possibly simulating 8-bit architectures a reality, which embodies the compelling principles of electrical engineering.38

Recent developments make clear, though, that not all computer-generated prose is nonsensical. Forbes.com has contracted with the company Narrative Science to write the corporate earnings previews that appear on the website. These stories are all generated by algorithms without human involvement. And they’re indistinguishable from what a human would write:

Forbes Earnings Preview: H.J. Heinz

A quality first quarter earnings announcement could push shares of H.J. Heinz (HNZ) to a new 52-week high as the price is just 49 cents off the milestone heading into the company’s earnings release on Wednesday, August 29, 2012. The Wall Street consensus is 80 cents per share, up 2.6 percent from a year ago when H.J. reported earnings of 78 cents per share. The consensus estimate remains unchanged over the past month, but it has decreased from three months ago when it was 82 cents. Analysts are expecting earnings of $3.52 per share for the fiscal year.
Analysts project revenue to fall 0.3 percent year-over-year to $2.84 billion for the quarter, after being $2.85 billion a year ago. For the year, revenue is projected to roll in at $11.82 billion.39

Even computer peripherals like printers are getting in on the act, demonstrating useful capabilities that seem straight out of science fiction. Instead of just putting ink on paper, they are making complicated three-dimensional parts out of plastic, metal, and other materials. 3D printing, also sometimes called “additive manufacturing,” takes advantage of the way computer printers work: they deposit a very thin layer of material (ink, traditionally) on a base (paper) in a pattern determined by the computer. Innovators reasoned that there is nothing stopping printers from depositing layers one on top of the other. And instead of ink, printers can also deposit materials like liquid plastic that gets cured into a solid by ultraviolet light. Each layer is very thin—somewhere around one-tenth of a millimeter—but over time a three-dimensional object takes shape. And because of the way it is built up, this shape can be quite complicated—it can have voids and tunnels in it, and even parts that move independently of one another. At the San Francisco headquarters of Autodesk, a leading design software company, we handled a working adjustable wrench that was printed as a single part, no assembly required.40 This wrench was a demonstration product made out of plastic, but 3D printing has expanded into metals as well. Autodesk CEO Carl Bass is part of the large and growing community of additive manufacturing hobbyists and tinkerers. During our tour of his company’s gallery, a showcase of all the products and projects enabled by Autodesk software, he showed us a beautiful metal bowl he designed on a computer and had printed out. The bowl had an elaborate lattice pattern on its sides.
Bass said that he’d asked friends of his who were experienced in working with metal—sculptors, ironworkers, welders, and so on—how the bowl was made. None of them could figure out how the lattice was produced. The answer was that a laser had built up each layer by fusing powdered metal. 3D printing today is not just for art projects like Bass’s bowl. It’s used by countless companies every day to make prototypes and model parts. It’s also being used for final parts ranging from plastic vents and housings on NASA’s next-generation Moon rover to a metal prosthetic jawbone for an eighty-three-year-old woman. In the near future, it might be used to print out replacement parts for faulty engines on the spot instead of maintaining stockpiles of them in inventory. Demonstration projects have even shown that the technique could be used to build concrete houses.41 Most of the innovations described in this chapter have occurred in just the past few years. They’ve taken place in areas where improvement had been frustratingly slow for a long time, and where the best thinking often led to the conclusion that it wouldn’t speed up. But then digital progress became sudden after being gradual for so long. This happened in multiple areas, from artificial intelligence to self-driving cars to robotics. How did this happen? Was it a fluke—a confluence of a number of lucky one-time advances? No, it was not. The digital progress we’ve seen recently is certainly impressive, but it’s just a small indication of what’s to come. It’s the dawn of the second machine age. To understand why it’s unfolding now, we need to understand the nature of technological progress in the era of digital hardware, software, and networks. In particular, we need to understand its three key characteristics: that it is exponential, digital, and combinatorial. The next three chapters will discuss each of these in turn. 
* In the years leading up to the Great Recession that began in 2007, companies were giving mortgages to people with lower and lower credit scores, income, and wealth, and higher and higher debt levels. In other words, they either rewrote or ignored their previous mortgage approval algorithms. It wasn’t that the old mortgage algorithms stopped working; it was that they stopped being used.

* To be precise, Trebek reads answers and the contestants have to state the question that would give rise to this answer.

* Sensorimotor skills are those that involve sensing the physical world and controlling the body to move through it.

“The greatest shortcoming of the human race is our inability to understand the exponential function.” —Albert A. Bartlett

ALTHOUGH HE’S COFOUNDER OF Intel, a major philanthropist, and recipient of the Presidential Medal of Freedom, Gordon Moore is best known for a prediction he made, almost as an aside, in a 1965 article. Moore, then working at Fairchild Semiconductor, wrote an article for Electronics magazine with the admirably direct title “Cramming More Components onto Integrated Circuits.” At the time, circuits of this type—which combined many different kinds of electrical components onto a single chip made primarily of silicon—were less than a decade old, but Moore saw their potential. He wrote that, “Integrated circuits will lead to such wonders as home computers—or at least terminals connected to a central computer—automatic controls for automobiles, and personal portable communications equipment.” 1 The article’s most famous forecast, however, and the one that has made Moore a household name, concerned the component cramming of the title:

The complexity for minimum component costs has increased at a rate of roughly a factor of two per year. . . . Certainly over the short term this rate can be expected to continue, if not to increase.
Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least ten years.2

This is the original statement of Moore’s Law, and it’s worth dwelling for a moment on its implications. “Complexity for minimum component costs” here essentially means the amount of integrated circuit computing power you could buy for one dollar. Moore observed that over the relatively brief history of his industry this amount had doubled each year: you could buy twice as much power per dollar in 1963 as you could in 1962, twice as much again in 1964, and twice as much again in 1965. Moore predicted this state of affairs would continue, perhaps with some change to timing, for at least another ten years. This bold statement forecast circuits that would be more than five hundred times as powerful in 1975 as they were in 1965.* As it turned out, however, Moore’s biggest mistake was in being too conservative. His “law” has held up astonishingly well for over four decades, not just one, and has been true for digital progress in areas other than integrated circuits. It’s worth noting that the time required for digital doubling remains a matter of dispute. In 1975 Moore revised his estimate upward from one year to two, and today it’s common to use eighteen months as the doubling period for general computing power. Still, there’s no dispute that Moore’s Law has proved remarkably prescient for almost half a century.3

It’s Not a Law: It’s a Bunch of Good Ideas

Moore’s Law is very different from the laws of physics that govern thermodynamics or Newtonian classical mechanics. Those laws describe how the universe works; they’re true no matter what we do. Moore’s Law, in contrast, is a statement about the work of the computer industry’s engineers and scientists; it’s an observation about how constant and successful their efforts have been.
We simply don’t see this kind of sustained success in other domains. There was no period of time when cars got twice as fast or twice as fuel efficient every year or two for fifty years. Airplanes don’t consistently have the ability to fly twice as far, or trains the ability to haul twice as much. Olympic runners and swimmers don’t cut their times in half over a generation, let alone a couple of years. So how has the computer industry kept up this amazing pace of improvement? There are two main reasons. First, while transistors and the other elements of computing are constrained by the laws of physics just like cars, airplanes, and swimmers, the constraints in the digital world are much looser. They have to do with how many electrons per second can be put through a channel etched in an integrated circuit, or how fast beams of light can travel through fiber-optic cable. At some point digital progress bumps up against its constraints and Moore’s Law must slow down, but it takes a while. Henry Samueli, chief technology officer of chipmaker Broadcom Corporation, predicted in 2013 that “Moore’s Law is coming to an end—in the next decade it will pretty much come to an end so we have 15 years or so.” 4 But smart people have been predicting the end of Moore’s Law for a while now, and they’ve been proved wrong over and over again.5 This is not because they misunderstood the physics involved, but because they underestimated the people working in the computer industry. The second reason that Moore’s Law has held up so well for so long is what we might call ‘brilliant tinkering’—finding engineering detours around the roadblocks thrown up by physics. When it became difficult to cram integrated circuits more tightly together, for example, chip makers instead layered them on top of one another, opening up a great deal of new real estate.
When communications traffic threatened to outstrip the capacity even of fiber-optic cable, engineers developed wavelength division multiplexing (WDM), a technique for transmitting many beams of light of different wavelengths down the same single glass fiber at the same time. Over and over again brilliant tinkering has found ways to skirt the limitations imposed by physics. As Intel executive Mike Mayberry puts it, “If you’re only using the same technology, then in principle you run into limits. The truth is we’ve been modifying the technology every five or seven years for 40 years, and there’s no end in sight for being able to do that.” 6 This constant modification has made Moore’s Law the central phenomenon of the computer age. Think of it as a steady drumbeat in the background of the economy.

Charting the Power of Constant Doubling

Once this doubling has been going on for some time, the later numbers overwhelm the earlier ones, making them appear irrelevant. To see this, let’s look at a hypothetical example. Imagine that Erik gives Andy a tribble, the fuzzy creature with a high reproductive rate made famous in an episode of Star Trek. Every day each tribble gives birth to another tribble, so Andy’s menagerie doubles in size each day. A geek would say in this case that the tribble family is experiencing exponential growth. That’s because the mathematical expression for how many tribbles there are on day x is 2^(x – 1), where the x – 1 is referred to as an exponent. Exponential growth like this is fast growth; after two weeks Andy has more than sixteen thousand of the creatures. Here’s a graph of how his tribble family grows over time:

FIGURE 3.1 Tribbles over Time: The Power of Constant Doubling

This graph is accurate, but misleading in an important sense. It seems to show that all the action occurs in the last couple of days, with nothing much happening in the first week.
But the same phenomenon—the daily doubling of tribbles—has been going on the whole time with no accelerations or disruptions. This steady exponential growth is what’s really interesting about Erik’s ‘gift’ to Andy. To make it more obvious, we have to change the spacing of the numbers on the graph. The graph we’ve already drawn has standard linear spacing; each segment of the vertical axis indicates two thousand more tribbles. This is fine for many purposes but, as we’ve seen, it’s not great for showing exponential growth. To highlight it better, we’ll change to logarithmic spacing, where each segment of the vertical axis represents a tenfold increase in tribbles: an increase first from 1 to 10, then from 10 to 100, then from 100 to 1,000, and so on. In other words, we scale the axis by powers of 10 or orders of magnitude. Logarithmic graphs have a wonderful property: they show exponential growth as a perfectly straight line. Here’s what the growth of Andy’s tribble family looks like on a logarithmic scale:

FIGURE 3.2 Tribbles over Time: The Power of Constant Doubling

This view emphasizes the steadiness of the doubling over time rather than the large numbers at the end. Because of this, we often use logarithmic scales for graphing doublings and other exponential growth series. They show up as straight lines and their speed is easier to evaluate; the bigger the exponent, the faster they grow, and the steeper the line.

Impoverished Emperors, Headless Inventors, and the Second Half of the Chessboard

Our brains are not well equipped to understand sustained exponential growth. In particular, we severely underestimate how big the numbers can get. Inventor and futurist Ray Kurzweil retells an old story to drive this point home.
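The doubling series behind the tribble graphs is easy to reproduce. The short Python sketch below (ours, not the authors') generates the daily counts and shows why the logarithmic view is a straight line: the log of a doubling series climbs by the same constant step every day.

```python
# Tribble population: starts at 1 and doubles daily, so day x has 2**(x - 1).
import math

days = range(1, 16)                      # two weeks: day 1 through day 15
tribbles = [2 ** (x - 1) for x in days]

print(tribbles[-1])                      # 16384: "more than sixteen thousand"

# On a linear axis the early days look flat; on a log axis the same series
# is a straight line, because log10 of a doubling series grows by a constant
# step (log10(2), roughly 0.301) each day.
log_counts = [math.log10(n) for n in tribbles]
steps = [round(b - a, 3) for a, b in zip(log_counts, log_counts[1:])]
print(set(steps))                        # every daily step is the same: {0.301}
```

The constant step is exactly what makes exponential growth plot as a straight line on logarithmic axes, as the figures in this section illustrate.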
The game of chess originated in present-day India during the sixth century CE, the time of the Gupta Empire.7 As the story goes, it was invented by a very clever man who traveled to Pataliputra, the capital city, and presented his brainchild to the emperor. The ruler was so impressed by the difficult, beautiful game that he invited the inventor to name his reward. The inventor praised the emperor’s generosity and said, “All I desire is some rice to feed my family.” Since the emperor’s largess was spurred by the invention of chess, the inventor suggested they use the chessboard to determine the amount of rice he would be given. “Place one single grain of rice on the first square of the board, two on the second, four on the third, and so on,” the inventor proposed, “so that each square receives twice as many grains as the previous.” “Make it so,” the emperor replied, impressed by the inventor’s apparent modesty. Moore’s Law and the tribble exercise allow us to see what the emperor did not: sixty-three instances of doubling yields a fantastically big number, even when starting with a single unit. If his request were fully honored, the inventor would wind up with 2^64 – 1, or more than eighteen quintillion grains of rice. A pile of rice this big would dwarf Mount Everest; it’s more rice than has been produced in the history of the world. Of course, the emperor could not honor such a request. In some versions of the story, once he realizes that he’s been tricked, he has the inventor beheaded. Kurzweil tells the story of the inventor and the emperor in his 2000 book The Age of Spiritual Machines: When Computers Exceed Human Intelligence. He aims not only to illustrate the power of sustained exponential growth but also to highlight the point at which the numbers start to become so big they are inconceivable:

After thirty-two squares, the emperor had given the inventor about 4 billion grains of rice.
That’s a reasonable quantity—about one large field’s worth—and the emperor did start to take notice. But the emperor could still remain an emperor. And the inventor could still retain his head. It was as they headed into the second half of the chessboard that at least one of them got into trouble.8

Kurzweil’s great insight is that while numbers do get large in the first half of the chessboard, we still come across them in the real world. Four billion does not necessarily outstrip our intuition. We experience it when harvesting grain, assessing the fortunes of the world’s richest people today, or tallying up national debt levels. In the second half of the chessboard, however—as numbers mount into trillions, quadrillions, and quintillions—we lose all sense of them. We also lose sense of how quickly numbers like these appear as exponential growth continues. Kurzweil’s distinction between the first and second halves of the chessboard inspired a quick calculation. Among many other things, the U.S. Bureau of Economic Analysis (BEA) tracks American companies’ expenditures. The BEA first noted “information technology” as a distinct corporate investment category in 1958. We took that year as the starting point for when Moore’s Law entered the business world, and used eighteen months as the doubling period. After thirty-two of these doublings, U.S. businesses entered the second half of the chessboard when it comes to the use of digital gear. That was in 2006. Of course, this calculation is just a fun little exercise, not anything like a serious attempt to identify the one point at which everything changed in the world of corporate computing. You could easily argue with the starting point of 1958 and a doubling period of eighteen months. Changes to either assumption would yield a different break point between the first and second halves of the chessboard.
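Both calculations in this passage can be checked directly. The Python sketch below (ours, not the authors') tallies the chessboard's rice and reruns the break-point exercise under the chapter's stated assumptions of a 1958 start and an eighteen-month doubling period:

```python
# Rice on the chessboard: square k holds 2**k grains (k = 0..63).
grains_per_square = [2 ** k for k in range(64)]

total = sum(grains_per_square)           # equals 2**64 - 1
print(total)                             # 18446744073709551615 (~18.4 quintillion)

# Cumulative grains after the first half of the board (squares 1-32):
first_half = sum(grains_per_square[:32])
print(first_half)                        # 4294967295: "about 4 billion grains"

# The corporate-computing exercise: start Moore's Law's business clock in
# 1958 and count 32 doublings at 18 months (1.5 years) each.
def break_point(start_year=1958, doubling_years=1.5, doublings=32):
    return start_year + doubling_years * doublings

print(break_point())                     # 2006.0
# Changing an assumption moves the break point, as the text notes:
print(break_point(doubling_years=2.0))   # 2022.0
```

Stretching the doubling period from eighteen months to two years pushes the break point from 2006 to 2022, which is exactly the sensitivity the authors flag.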
And business technologists were not only innovating in the second half; as we’ll discuss later, the breakthroughs of today and tomorrow rely on, and would be impossible without, those of the past. We present this calculation here because it underscores an important idea: that exponential growth eventually leads to staggeringly big numbers, ones that leave our intuition and experience behind. In other words, things get weird in the second half of the chessboard. And like the emperor, most of us have trouble keeping up. One of the things that sets the second machine age apart is how quickly that second half of the chessboard can arrive. We’re not claiming that no other technology has ever improved exponentially. In fact, after the one-time burst of improvement in the steam engine Watt’s innovations created, additional tinkering led to exponential improvement over the ensuing two hundred years. But the exponents were relatively small, so it only went through about three or four doublings in efficiency during that period.9 It would take a millennium to reach the second half of the chessboard at that rate. In the second machine age, the doublings happen much faster and exponential growth is much more salient.

Second-Half Technologies

Our quick doubling calculation also helps us understand why progress with digital technologies feels so much faster these days and why we’ve seen so many recent examples of science fiction becoming business reality. It’s because the steady and rapid exponential growth of Moore’s Law has added up to the point that we’re now in a different regime of computing: we’re now in the second half of the chessboard.
The innovations we described in the previous chapter—cars that drive themselves in traffic; Jeopardy!-champion supercomputers; autogenerated news stories; cheap, flexible factory robots; and inexpensive consumer devices that are simultaneously communicators, tricorders, and computers—have all appeared since 2006, as have countless other marvels that seem quite different from what came before. One of the reasons they’re all appearing now is that the digital gear at their hearts is finally both fast and cheap enough to enable them. This wasn’t the case just a decade ago. What does digital progress look like on a logarithmic scale? Let’s take a look.

FIGURE 3.3 The Many Dimensions of Moore’s Law

This graph shows that Moore’s Law is both consistent and broad; it’s been in force for a long time (decades, in some cases) and applies to many types of digital progress. As you look at it, keep in mind that if it used standard linear scaling on the vertical axis, all of those straight-ish lines would look like the first graph above of Andy’s tribble family—horizontal most of the way, then suddenly close to vertical at the end. And there would really be no way to graph them all together—the numbers involved are just too different. Logarithmic scaling takes care of these issues and allows us to get a clear overall picture of improvement in digital gear. It’s clear that many of the critical building blocks of computing—microchip density, processing speed, storage capacity, energy efficiency, download speed, and so on—have been improving at exponential rates for a long time. To understand the real-world impacts of Moore’s Law, let’s compare the capabilities of computers separated by only a few doubling periods. The ASCI Red, the first product of the U.S. government’s Accelerated Strategic Computing Initiative, was the world’s fastest supercomputer when it was introduced in 1996.
It cost $55 million to develop and its one hundred cabinets occupied nearly 1,600 square feet of floor space (80 percent of a tennis court) at Sandia National Laboratories in New Mexico.10 Designed for calculation-intensive tasks like simulating nuclear tests, ASCI Red was the first computer to score above one teraflop—one trillion floating point operations* per second—on the standard benchmark test for computer speed. To reach this speed it drew eight hundred kilowatts of power, about as much as eight hundred homes would. By 1997, it had reached 1.8 teraflops. Nine years later another computer hit 1.8 teraflops. But instead of simulating nuclear explosions, it was devoted to drawing them and other complex graphics in all their realistic, real-time, three-dimensional glory. It did this not for physicists, but for video game players. This computer was the Sony PlayStation 3, which matched the ASCI Red in performance, yet cost about five hundred dollars, took up less than a tenth of a square meter, and drew about two hundred watts.11 In less than ten years exponential digital progress brought teraflop calculating power from a single government lab to living rooms and college dorms all around the world. The PlayStation 3 sold approximately 64 million units. The ASCI Red was taken out of service in 2006. Exponential progress has made possible many of the advances discussed in the previous chapter. IBM’s Watson draws on a plethora of clever algorithms, but it would be uncompetitive without computer hardware that is about one hundred times more powerful than Deep Blue, its chess-playing predecessor that beat the human world champion, Garry Kasparov, in a 1997 match. Speech recognition applications like Siri require lots of computing power, which became available on mobile phones like Apple’s iPhone 4S (the first phone that came with Siri installed). The iPhone 4S was about as powerful, in fact, as Apple’s top-of-the-line Powerbook G4 laptop had been a decade earlier.
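The scale of that improvement is easy to quantify from the figures quoted above. This back-of-the-envelope Python sketch (ours; the inputs are the rounded numbers from the text) compares computation per dollar and per watt for the two machines:

```python
# Rough comparison of ASCI Red (1996-97) and the PlayStation 3, using the
# figures quoted in the text: both peaked around 1.8 teraflops.
TERAFLOPS = 1.8e12  # 1.8 trillion floating point operations per second

asci_red = {"cost_usd": 55e6, "power_watts": 800e3}   # ~$55M, ~800 kW
ps3      = {"cost_usd": 500.0, "power_watts": 200.0}  # ~$500, ~200 W

# How many times more flops per dollar, and per watt, the PS3 delivered:
flops_per_dollar_gain = (TERAFLOPS / ps3["cost_usd"]) / (TERAFLOPS / asci_red["cost_usd"])
flops_per_watt_gain = (TERAFLOPS / ps3["power_watts"]) / (TERAFLOPS / asci_red["power_watts"])

print(round(flops_per_dollar_gain))  # 110000: ~110,000x more computation per dollar
print(round(flops_per_watt_gain))    # 4000: ~4,000x more computation per watt
```

At equal performance, the ratios reduce to cost and power alone; a roughly 110,000-fold gain per dollar in about a decade is the "second half of the chessboard" in miniature.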
As all of these innovations show, exponential progress allows technology to keep racing ahead and makes science fiction reality in the second half of the chessboard.

Not Just for Computers Anymore: The Spread of Moore’s Law

Another comparison across computer generations highlights not only the power of Moore’s Law but also its wide reach. As is the case with the ASCI Red and the PlayStation 3, the Cray-2 supercomputer (introduced in 1985) and iPad 2 tablet (introduced in 2011) had almost identical peak calculation speeds. But the iPad also had a speaker, microphone, and headphone jack. It had two cameras; the one on the front of the device was Video Graphics Array (VGA) quality, while the one on the back could capture high-definition video. Both could also take still photographs, and the back camera had a 5x digital zoom. The tablet had receivers that allowed it to participate in both wireless telephone and Wi-Fi networks. It also had a GPS receiver, digital compass, accelerometer, gyroscope, and light sensor. It had no built-in keyboard, relying instead on a high-definition touch screen that could track up to eleven points of contact simultaneously.12 It fit all of this capability into a device that cost much less than $1,000 and was smaller, thinner, and lighter than many magazines. The Cray-2, which cost more than $35 million (in 2011 dollars), was by comparison deaf, dumb, blind, and immobile.13 Apple was able to cram all of this functionality in the iPad 2 because a broad shift has taken place in recent decades: sensors like microphones, cameras, and accelerometers have moved from the analog world to the digital one. They became, in essence, computer chips. As they did so, they became subject to the exponential improvement trajectories of Moore’s Law.
Digital gear for recording sounds was in use by the 1960s, and an Eastman Kodak engineer built the first modern digital camera in 1975.14 Early devices were expensive and clunky, but quality quickly improved and prices dropped. Kodak’s first digital single-lens reflex camera, the DCS 100, cost about $13,000 when it was introduced in 1991; it had a maximum resolution of 1.3 megapixels and stored its images in a separate, ten-pound hard drive that users slung over their shoulders. However, the pixels per dollar available from digital cameras doubled about every year (a phenomenon known as “Hendy’s Law” after Kodak Australia employee Barry Hendy, who documented it), and all related gear got exponentially smaller, lighter, cheaper, and better over time.15 Accumulated improvement in digital sensors meant that twenty years after the DCS 100, Apple could include two tiny cameras, capable of both still and video photography, on the iPad 2. And when it introduced a new iPad the following year, the rear camera’s resolution had improved by a factor of more than seven.

Machine Eyes

As Moore’s Law works over time on processors, memory, sensors, and many other elements of computer hardware (a notable exception is batteries, which haven’t improved their performance at an exponential rate because they’re essentially chemical devices, not digital ones), it does more than just make computing devices faster, cheaper, smaller, and lighter. It also allows them to do things that previously seemed out of reach. Researchers in artificial intelligence have long been fascinated (some would say obsessed) with the problem of simultaneous localization and mapping, which they refer to as SLAM. SLAM is the process of building up a map of an unfamiliar building as you’re navigating through it—where are the doors? where are the stairs? what are all the things I might trip over?—and also keeping track of where you are within it (so you can find your way back downstairs and out the front door).
For the great majority of humans, SLAM happens with minimal conscious thought. But teaching machines to do it has been a huge challenge. Researchers thought a great deal about which sensors to give a robot (cameras? lasers? sonar?) and how to interpret the reams of data they provide, but progress was slow. As a 2008 review of the topic summarized, SLAM “is one of the fundamental challenges of robotics . . . [but it] seems that almost all the current approaches can not perform consistent maps for large areas, mainly due to the increase of the computational cost and due to the uncertainties that become prohibitive when the scenario becomes larger.” 16 In short, sensing a sizable area and immediately crunching all the resulting data were thorny problems preventing real progress with SLAM. Until, that is, a $150 video-game accessory came along just two years after the sentences above were published. In November 2010 Microsoft first offered the Kinect sensing device as an addition to its Xbox gaming platform. The Kinect could keep track of two active players, monitoring as many as twenty joints on each. If one player moved in front of the other, the device made a best guess about the obscured person’s movements, then seamlessly picked up all joints once he or she came back into view. Kinect could also recognize players’ faces, voices, and gestures and do so across a wide range of lighting and noise conditions. It accomplished this with digital sensors including a microphone array (which pinpointed the source of sound better than a single microphone could), a standard video camera, and a depth perception system that both projected and detected infrared light. Several onboard processors and a great deal of proprietary software converted the output of these sensors into information that game designers could use.17 At launch, all of this capability was packed into a four-inch-tall device less than a foot wide that retailed for $149.99. 
The Kinect sold more than eight million units in the sixty days after its release (more than either the iPhone or iPad) and currently holds the Guinness World Record for the fastest-selling consumer electronics device of all time.18 The initial family of Kinect-specific games let players play darts, exercise, brawl in the streets, and cast spells à la Harry Potter.19 These, however, did not come close to exhausting the system’s possibilities. In August of 2011 at the SIGGRAPH (short for the Association of Computing Machinery’s Special Interest Group on Graphics and Interactive Techniques) conference in Vancouver, British Columbia, a team of Microsoft employees and academics used Kinect to “SLAM” the door shut on a long-standing challenge in robotics. SIGGRAPH is the largest and most prestigious gathering devoted to research and practice on digital graphics, attended by researchers, game designers, journalists, entrepreneurs, and most others interested in the field. This made it an appropriate place for Microsoft to unveil what the Creators Project website called “The Self-Hack That Could Change Everything.”* 20 This was the KinectFusion, a project that used the Kinect to tackle the SLAM problem. In a video shown at SIGGRAPH 2011, a person picks up a Kinect and points it around a typical office containing chairs, a potted plant, and a desktop computer and monitor.21 As he does, the video splits into multiple screens that show what the Kinect is able to sense. It immediately becomes clear that if the Kinect is not completely solving the SLAM problem for the room, it’s coming close. In real time, Kinect draws a three-dimensional map of the room and all the objects in it, including a coworker. It picks up the word DELL pressed into the plastic on the back of the computer monitor, even though the letters are not colored and only one millimeter deeper than the rest of the monitor’s surface.
The device knows where it is in the room at all times, and even knows how virtual ping-pong balls would bounce around if they were dropped into the scene. As the technology blog Engadget put it in a post-SIGGRAPH entry, “The Kinect took 3D sensing to the mainstream, and moreover, allowed researchers to pick up a commodity product and go absolutely nuts.” 22 In June of 2011, shortly before SIGGRAPH, Microsoft had made available a Kinect software development kit (SDK) giving programmers everything they needed to start writing PC software that made use of the device. After the conference there was a great deal of interest in using the Kinect for SLAM, and many teams in robotics and AI research downloaded the SDK and went to work. In less than a year, a team of Irish and American researchers led by our colleague John Leonard of MIT’s Computer Science and Artificial Intelligence Lab announced Kintinuous, a “spatially extended” version of KinectFusion. With Kintinuous, users could use a Kinect to scan large indoor volumes like apartment buildings and even outdoor environments (which the team scanned by holding a Kinect outside a car window during a nighttime drive). At the end of the paper describing their work, the Kintinuous researchers wrote, “In the future we will extend the system to implement a full SLAM approach.” 23 We don’t think it will be long until they announce success. When given to capable technologists, the exponential power of Moore’s Law eventually makes even the toughest problems tractable. Cheap and powerful digital sensors are essential components of some of the science-fiction technologies discussed in the previous chapter. The Baxter robot has multiple digital cameras and an array of force and position detectors. All of these would have been unworkably expensive, clunky, and imprecise just a short time ago. 
A Google autonomous car incorporates several sensing technologies, but its most important ‘eye’ is a Cyclopean LIDAR (a combination of “LIght” and “raDAR”) assembly mounted on the roof. This rig, manufactured by Velodyne, contains sixty-four separate laser beams and an equal number of detectors, all mounted in a housing that rotates ten times a second. It generates about 1.3 million data points per second, which can be assembled by onboard computers into a real-time 3D picture extending one hundred meters in all directions. Some early commercial LIDAR systems available around the year 2000 cost up to $35 million, but in mid-2013 Velodyne’s assembly for self-navigating vehicles was priced at approximately $80,000, a figure that will fall much further in the future. David Hall, the company’s founder and CEO, estimates that mass production would allow his product’s price to “drop to the level of a camera, a few hundred dollars.” 24 All these examples illustrate the first element of our three-part explanation of why we’re now in the second machine age: steady exponential improvement has brought us into the second half of the chessboard—into a time when what’s come before is no longer a particularly reliable guide to what will happen next. The accumulated doubling of Moore’s Law, and the ample doubling still to come, gives us a world where supercomputer power becomes available to toys in just a few years, where ever-cheaper sensors enable inexpensive solutions to previously intractable problems, and where science fiction keeps becoming reality. Sometimes a difference in degree (in other words, more of the same) becomes a difference in kind (in other words, different than anything else). The story of the second half of the chessboard alerts us that we should be aware that enough exponential progress can take us to astonishing places. Multiple recent examples convince us that we’re already there. 
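The LIDAR figures quoted above (sixty-four beams, ten rotations per second, about 1.3 million points per second) imply a per-beam sampling rate that can be computed directly; this short Python sketch (ours, not the authors') does so:

```python
# Velodyne LIDAR figures as quoted in the text.
beams = 64
rotations_per_second = 10
points_per_second = 1.3e6

# Each laser beam therefore returns roughly this many range samples
# during every single rotation of the housing:
points_per_beam_per_rotation = points_per_second / (beams * rotations_per_second)
print(round(points_per_beam_per_rotation))  # ~2031 samples per beam per rotation
```

Two thousand range samples per beam per rotation is what lets the onboard computers assemble a dense, real-time 3D picture out to one hundred meters.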
* Since 2⁹ = 512. * Multiplying 62.34 by 24358.9274 is an example of a floating point operation. The decimal point in such operations is allowed to 'float' instead of being fixed in the same place for both numbers. * In this context, a "hack" is an effort to get inside the guts of a piece of digital gear and use it for an unorthodox purpose. A self-hack is one carried out by the company that made the gear in the first place. "When you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind." —Lord Kelvin "HEY, HAVE YOU HEARD about . . . ?" "You've got to check out . . . " Questions and recommendations like these are the stuff of everyday life. They're how we learn about new things from our friends, family, and colleagues, and how we spread the word about exciting things we've come across. Traditionally, such cool hunting ended with the name of a band, restaurant, place to visit, TV show, book, or movie. In the digital age, sentences like these frequently end with the name of a website or a gadget. And right now, they're often about a smartphone application. Both of the major technology platforms in this market—Apple's iOS and Google's Android—have more than five hundred thousand applications available.1 There are plenty of "Top 10" and "Best of" lists available to help users find the cream of the smartphone app crop, but traditional word of mouth has retained its power. Not long ago Matt Beane, a doctoral student at the MIT Sloan School of Management and a member of our Digital Frontier team, gave us a tip. "You've got to check out Waze; it's amazing." But when we found out it was a G...

Explanation & Answer


Succeeding in a Changed World - Outline
Thesis Statement: The two texts agree that people must respond to a transformed world by
defining a path of their own, but they differ in several respects on what matters most for a
person to thrive in such an environment.
I. Introduction
II. Michaels’s monoculture
III. The second machine age
IV. Similarities
A. Awareness is vital
B. Learning the systems
V. Difference
A. Avoiding the monoculture versus integrating in the machine age
VI. Conclusion


Succeeding in a Changed World
The article by Michaels and that by McAfee and Brynjolfsson address two issues in the
modern world that have seen extensive change in recent years. Michaels addresses the economic
story of the modern world, in which every aspect of society is reduced to some economic value.
McAfee and Brynjolfsson, on the other hand, address ever-evolving technologies and the threat
they pose of outpacing human abilities. The two texts agree that people should respond to this
new world by defining a path of their own, but they differ in several respects on what matters
most for a person to thrive in such environments.
The monoculture, according to Michaels, is the overtaking of cultural, social, and
personal values by the economic worth of every aspect of modern life. The author recognizes
a rift between personal values such as beauty, love, and honesty and the core business
values of efficiency and maximum productivity. Likewise, he argues that the monoculture
consistently drives social relationships to become transactional. The community is also run as a
business, since most entities seek to make profits. This degradation of values and of traditional
social wellbeing is further seen in art, which is recast as “creative industries within the world
of markets” (Michaels, 2011).
Similar to the economic monoculture, McAfee and Brynjolfsson a...
