Proofreading: Rhetorical Analysis Assignment


Nyrk977

Writing

Description

I would like someone to help proofread my essay: checking the text to detect errors in spelling, meaning, punctuation, grammar, and formatting.

Here are my assignment requirements, my essay, and the document I analyzed.

Unformatted Attachment Preview

Fangyi Zhao
Professor Alkire
LLD 100A
September 10, 2017

Rhetorical Analysis of Professional Writing

Introduction

In a society where resources and temptations multiply constantly, both public life (political movements, social movements, and the like) and personal life (a job interview, daily shopping) demand ever greater communicative power. The effect of communication depends on the breadth of one's rhetorical skills and rhetorical resources rather than on the amount of knowledge alone. Therefore, most professional journals and articles employ rhetoric, and the devices writers use most often are rhetorical appeals and rhetorical strategies. Writers draw on four important appeals: logos, ethos, pathos, and kairos. Logos is the rational and logical appeal, pathos the emotional appeal, ethos the ethical appeal, and kairos the appeal to timing, place, and culture. Writers may also use rhetorical strategies such as description, narration, exemplification, and definition to help prove an argument.

For a computer science major, coding seems like the biggest part of the work. However, the algorithm is the soul of the computer, and it cannot be separated from any software. When building a piece of software, different programmers take different approaches to the same goal. How, then, do we find the best among these different versions? Most people might judge software by its interface, user interaction, and functionality, but from a professional standpoint, the best software is the one that takes the least time, and algorithms make the greatest contribution to that goal.

The article comes from the textbook for my CS-146 class, Introduction to Algorithms, published in 2009 and written by Cormen, Leiserson, Rivest, and Stein. Section 1.1 of the chapter "The Role of Algorithms in Computing" introduces what an algorithm is and what role algorithms play in computing, using several rhetorical appeals and rhetorical strategies. The section presents the basic concept of an algorithm and uses example applications to show how algorithms apply in different cases. The purpose of Cormen, Leiserson, Rivest, and Stein's text is to give general readers a basic idea of the algorithm concept. Because it is a professional article, the authors rely on logos and ethos, that is, logic and authority, rather than pathos. The main rhetorical strategies they use are definition, exemplification, and comparison and contrast; the definitions and examples make the article more credible and logical. I will examine these appeals and strategies in more detail below.

Analysis of Rhetorical Appeals

Logos

Logos, which means logic, is the main rhetorical appeal in this article because it helps readers follow each step and element of the whole concept logically. Cormen, Leiserson, Rivest, and Stein use facts to show how algorithms solve problems. For example, the Human Genome Project identified all the 100,000 genes in human DNA, determined their sequences, stored the information, and analyzed the data; to achieve this, it had to use sophisticated algorithms. This helps readers see clearly how huge the project is, how complicated it is, and how important the algorithm is. Furthermore, the authors use other applications to show the importance of algorithms.
Oil companies want to know how to maximize profit, Internet service providers want to serve customers more effectively, and political candidates want to buy the most useful advertising to win elections. The authors show that all of the problems above can be solved by linear programming, which also belongs to the field of algorithms. These facts help the authors persuade readers that algorithms play a big role in computing.

Ethos

Ethos, which means reliability and credibility, is another rhetorical appeal used in the article. Cormen, Leiserson, Rivest, and Stein state that an algorithm should be correct in order to produce a correct answer, yet they also address the incorrect case. Even though most people assume that a wrong algorithm will produce wrong answers, the authors record the special case in which answers can still be useful when the error rate can be controlled. These statements make them more reliable because they do not claim arbitrarily that an algorithm must always be right; they acknowledge the special cases. The authors also devote considerable space to Appendices A through D, which include examples, graphs, statistics, probability, and data that further explain the concepts mentioned in the book. Furthermore, the authors are all professionals: "Thomas H. Cormen is a professor of Computer Science and former Director of the Institute for Writing and Rhetoric at Dartmouth College, Charles E. Leiserson is Professor of Computer Science and Engineering at the Massachusetts Institute of Technology, Ronald L. Rivest is Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science at the Massachusetts Institute of Technology, and Clifford Stein is Professor of Industrial Engineering and Operations Research at Columbia University" (Web MIT 2017). They also use a large number of references, listed at the end of the book, to strengthen their credibility. Therefore, ethos is the other main rhetorical appeal in this article.

Analysis of Rhetorical Strategies

Definition

Definition is a common rhetorical strategy in academic writing that helps writers define an idea or explain an academic term. In Section 1.1 of "The Role of Algorithms in Computing," the authors define the key term at the outset: "Informally, an algorithm is any well-defined computational procedure that takes some value, or set of values, as input and produces some value, or set of values, as output" (Introduction to Algorithms 5). The definition explains that an algorithm takes input and follows a specified procedure to produce output. Because "algorithm" is the key word of the article, defining it helps the authors illustrate their thesis in more detail and helps readers build a sense of what an algorithm is.

Exemplification

Exemplification, which means using examples to describe or analyze the thesis, is another effective rhetorical strategy the authors use to show how algorithms work. For example, the authors give readers an input sequence, {31, 41, 59, 26, 41, 58}; after a sorting algorithm runs, the system produces the output sequence {26, 31, 41, 41, 58, 59}. From the example, readers can easily see that the input is an unordered array of numbers and that the sorting algorithm changes the sequence from unordered to ordered. The authors used concrete numbers and professional phrasing to let readers know exactly what a sorting algorithm is.
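To make the example concrete, the following small Python sketch (my own illustration, not code from the textbook) reproduces the transformation the authors describe, with Python's built-in routine standing in for the book's sorting algorithms:

    # The authors' example input, sorted with Python's built-in sorted().
    input_sequence = [31, 41, 59, 26, 41, 58]
    output_sequence = sorted(input_sequence)
    print(output_sequence)  # [26, 31, 41, 41, 58, 59]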
For another example, the authors describe a mechanical design built from a library of parts, where some parts may include other parts. The design therefore needs an ordering in which each part appears before any part that uses it. If the design includes n parts, there are n! possible orders, and the factorial function grows faster than almost any other function, so trying every order is infeasible. Finding a workable order requires an algorithm for what the authors call an instance of topological sorting. Even though the example includes some professional concepts, readers can still easily understand the importance of algorithms, and such questions clearly need a computer to answer. That, too, shows how important algorithms are in computing. Overall, exemplification is an effective way for the authors to present their ideas in this article.

Comparison and Contrast

Comparison and contrast is a rhetorical strategy that helps writers develop and analyze an article's ideas. "Comparisons examine similarities; contrasts examine differences" (Alkire 23). The authors use comparison and contrast to analyze complicated definitions and to describe ideas vividly. In this article, they compare several problems: a transport company wants the shortest path to reduce its costs, an Internet routing node wants the shortest route to pass a message quickly, and a traveler wants the fastest route to a destination. All of these problems have a set of candidate solutions from which an algorithm finds the "best" one. The authors then use contrast to show that not every problem has a set of candidate solutions: "For example, suppose we are given a set of numerical values representing samples of a signal, and we want to compute the discrete Fourier transform of these samples. The discrete Fourier transform converts the time domain to the frequency domain, producing a set of numerical coefficients, so that we can determine the strength of various frequencies in the sampled signal" (Introduction to Algorithms 9). Computing the discrete Fourier transform does not involve choosing among candidate solutions, yet it can still be solved by an algorithm, the "fast Fourier transform." Through this contrast, the authors show that algorithms solve not only "find the best" questions but also problems like the discrete Fourier transform, which demonstrates how many different kinds of questions algorithms can answer. This rhetorical strategy strengthens their thesis that algorithms are truly important in computing.

Conclusion

In the article, the authors use logos to introduce the concept of the algorithm step by step and cite facts to emphasize the role of algorithms. They also use ethos, supplementing the definitions and applications of algorithms, to show their expertise and credibility. The strategy of definition helps readers understand the concept of the algorithm and how it relates to computing. Exemplification brings many examples that show how algorithms work in real life and how important their role is in computing.
Finally, the writers use contrast to extend the applications of the algorithm and to reinforce its power in computing, all in support of their thesis.

https://mitpress.mit.edu/books/introduction-algorithms

Introduction to Algorithms, Third Edition
Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, Clifford Stein
The MIT Press, Cambridge, Massachusetts; London, England
© 2009 Massachusetts Institute of Technology
ISBN 978-0-262-03384-8 (hardcover); ISBN 978-0-262-53305-8 (paperback)

Contents

Preface

I Foundations
Introduction
1 The Role of Algorithms in Computing: 1.1 Algorithms; 1.2 Algorithms as a technology
2 Getting Started: 2.1 Insertion sort; 2.2 Analyzing algorithms; 2.3 Designing algorithms
3 Growth of Functions: 3.1 Asymptotic notation; 3.2 Standard notations and common functions
4 Divide-and-Conquer: 4.1 The maximum-subarray problem; 4.2 Strassen's algorithm for matrix multiplication; 4.3 The substitution method for solving recurrences; 4.4 The recursion-tree method for solving recurrences; 4.5 The master method for solving recurrences; 4.6 Proof of the master theorem
5 Probabilistic Analysis and Randomized Algorithms: 5.1 The hiring problem; 5.2 Indicator random variables; 5.3 Randomized algorithms; 5.4 Probabilistic analysis and further uses of indicator random variables

II Sorting and Order Statistics
Introduction
6 Heapsort: 6.1 Heaps; 6.2 Maintaining the heap property; 6.3 Building a heap; 6.4 The heapsort algorithm; 6.5 Priority queues
7 Quicksort: 7.1 Description of quicksort; 7.2 Performance of quicksort; 7.3 A randomized version of quicksort; 7.4 Analysis of quicksort
8 Sorting in Linear Time: 8.1 Lower bounds for sorting; 8.2 Counting sort; 8.3 Radix sort; 8.4 Bucket sort
9 Medians and Order Statistics: 9.1 Minimum and maximum; 9.2 Selection in expected linear time; 9.3 Selection in worst-case linear time

III Data Structures
Introduction
10 Elementary Data Structures: 10.1 Stacks and queues; 10.2 Linked lists; 10.3 Implementing pointers and objects; 10.4 Representing rooted trees
11 Hash Tables: 11.1 Direct-address tables; 11.2 Hash tables; 11.3 Hash functions; 11.4 Open addressing; 11.5 Perfect hashing
12 Binary Search Trees: 12.1 What is a binary search tree?; 12.2 Querying a binary search tree; 12.3 Insertion and deletion; 12.4 Randomly built binary search trees
13 Red-Black Trees: 13.1 Properties of red-black trees; 13.2 Rotations; 13.3 Insertion; 13.4 Deletion
14 Augmenting Data Structures: 14.1 Dynamic order statistics; 14.2 How to augment a data structure; 14.3 Interval trees

IV Advanced Design and Analysis Techniques
Introduction
15 Dynamic Programming: 15.1 Rod cutting; 15.2 Matrix-chain multiplication; 15.3 Elements of dynamic programming; 15.4 Longest common subsequence; 15.5 Optimal binary search trees
16 Greedy Algorithms: 16.1 An activity-selection problem; 16.2 Elements of the greedy strategy; 16.3 Huffman codes; 16.4 Matroids and greedy methods; 16.5 A task-scheduling problem as a matroid
17 Amortized Analysis: 17.1 Aggregate analysis; 17.2 The accounting method; 17.3 The potential method; 17.4 Dynamic tables

V Advanced Data Structures
Introduction
18 B-Trees: 18.1 Definition of B-trees; 18.2 Basic operations on B-trees; 18.3 Deleting a key from a B-tree
19 Fibonacci Heaps: 19.1 Structure of Fibonacci heaps; 19.2 Mergeable-heap operations; 19.3 Decreasing a key and deleting a node; 19.4 Bounding the maximum degree
20 van Emde Boas Trees: 20.1 Preliminary approaches; 20.2 A recursive structure; 20.3 The van Emde Boas tree
21 Data Structures for Disjoint Sets: 21.1 Disjoint-set operations; 21.2 Linked-list representation of disjoint sets; 21.3 Disjoint-set forests; 21.4 Analysis of union by rank with path compression

VI Graph Algorithms
Introduction
22 Elementary Graph Algorithms: 22.1 Representations of graphs; 22.2 Breadth-first search; 22.3 Depth-first search; 22.4 Topological sort; 22.5 Strongly connected components
23 Minimum Spanning Trees: 23.1 Growing a minimum spanning tree; 23.2 The algorithms of Kruskal and Prim
24 Single-Source Shortest Paths: 24.1 The Bellman-Ford algorithm; 24.2 Single-source shortest paths in directed acyclic graphs; 24.3 Dijkstra's algorithm; 24.4 Difference constraints and shortest paths; 24.5 Proofs of shortest-paths properties
25 All-Pairs Shortest Paths: 25.1 Shortest paths and matrix multiplication; 25.2 The Floyd-Warshall algorithm; 25.3 Johnson's algorithm for sparse graphs
26 Maximum Flow: 26.1 Flow networks; 26.2 The Ford-Fulkerson method; 26.3 Maximum bipartite matching; 26.4 Push-relabel algorithms; 26.5 The relabel-to-front algorithm
VII Selected Topics
Introduction
27 Multithreaded Algorithms: 27.1 The basics of dynamic multithreading; 27.2 Multithreaded matrix multiplication; 27.3 Multithreaded merge sort
28 Matrix Operations: 28.1 Solving systems of linear equations; 28.2 Inverting matrices; 28.3 Symmetric positive-definite matrices and least-squares approximation
29 Linear Programming: 29.1 Standard and slack forms; 29.2 Formulating problems as linear programs; 29.3 The simplex algorithm; 29.4 Duality; 29.5 The initial basic feasible solution
30 Polynomials and the FFT: 30.1 Representing polynomials; 30.2 The DFT and FFT; 30.3 Efficient FFT implementations
31 Number-Theoretic Algorithms: 31.1 Elementary number-theoretic notions; 31.2 Greatest common divisor; 31.3 Modular arithmetic; 31.4 Solving modular linear equations; 31.5 The Chinese remainder theorem; 31.6 Powers of an element; 31.7 The RSA public-key cryptosystem; 31.8 Primality testing; 31.9 Integer factorization
32 String Matching: 32.1 The naive string-matching algorithm; 32.2 The Rabin-Karp algorithm; 32.3 String matching with finite automata; 32.4 The Knuth-Morris-Pratt algorithm
33 Computational Geometry: 33.1 Line-segment properties; 33.2 Determining whether any pair of segments intersects; 33.3 Finding the convex hull; 33.4 Finding the closest pair of points
34 NP-Completeness: 34.1 Polynomial time; 34.2 Polynomial-time verification; 34.3 NP-completeness and reducibility; 34.4 NP-completeness proofs; 34.5 NP-complete problems
35 Approximation Algorithms: 35.1 The vertex-cover problem; 35.2 The traveling-salesman problem; 35.3 The set-covering problem; 35.4 Randomization and linear programming; 35.5 The subset-sum problem

VIII Appendix: Mathematical Background
Introduction
A Summations: A.1 Summation formulas and properties; A.2 Bounding summations
B Sets, Etc.: B.1 Sets; B.2 Relations; B.3 Functions; B.4 Graphs; B.5 Trees
C Counting and Probability: C.1 Counting; C.2 Probability; C.3 Discrete random variables; C.4 The geometric and binomial distributions; C.5 The tails of the binomial distribution
D Matrices: D.1 Matrices and matrix operations; D.2 Basic matrix properties

Bibliography
Index

Preface

Before there were computers, there were algorithms. But now that there are computers, there are even more algorithms, and algorithms lie at the heart of computing.

This book provides a comprehensive introduction to the modern study of computer algorithms. It presents many algorithms and covers them in considerable depth, yet makes their design and analysis accessible to all levels of readers. We have tried to keep explanations elementary without sacrificing depth of coverage or mathematical rigor.

Each chapter presents an algorithm, a design technique, an application area, or a related topic. Algorithms are described in English and in a pseudocode designed to be readable by anyone who has done a little programming. The book contains 244 figures, many with multiple parts, illustrating how the algorithms work. Since we emphasize efficiency as a design criterion, we include careful analyses of the running times of all our algorithms.

The text is intended primarily for use in undergraduate or graduate courses in algorithms or data structures.
Because it discusses engineering issues in algorithm design, as well as mathematical aspects, it is equally well suited for self-study by technical professionals.

In this, the third edition, we have once again updated the entire book. The changes cover a broad spectrum, including new chapters, revised pseudocode, and a more active writing style.

To the teacher

We have designed this book to be both versatile and complete. You should find it useful for a variety of courses, from an undergraduate course in data structures up through a graduate course in algorithms. Because we have provided considerably more material than can fit in a typical one-term course, you can consider this book to be a "buffet" or "smorgasbord" from which you can pick and choose the material that best supports the course you wish to teach.

You should find it easy to organize your course around just the chapters you need. We have made chapters relatively self-contained, so that you need not worry about an unexpected and unnecessary dependence of one chapter on another. Each chapter presents the easier material first and the more difficult material later, with section boundaries marking natural stopping points. In an undergraduate course, you might use only the earlier sections from a chapter; in a graduate course, you might cover the entire chapter.

We have included 957 exercises and 158 problems. Each section ends with exercises, and each chapter ends with problems. The exercises are generally short questions that test basic mastery of the material. Some are simple self-check thought exercises, whereas others are more substantial and are suitable as assigned homework. The problems are more elaborate case studies that often introduce new material; they often consist of several questions that lead the student through the steps required to arrive at a solution.

Departing from our practice in previous editions of this book, we have made publicly available solutions to some, but by no means all, of the problems and exercises. Our Web site, http://mitpress.mit.edu/algorithms/, links to these solutions. You will want to check this site to make sure that it does not contain the solution to an exercise or problem that you plan to assign. We expect the set of solutions that we post to grow slowly over time, so you will need to check it each time you teach the course.

We have starred (*) the sections and exercises that are more suitable for graduate students than for undergraduates. A starred section is not necessarily more difficult than an unstarred one, but it may require an understanding of more advanced mathematics. Likewise, starred exercises may require an advanced background or more than average creativity.

To the student

We hope that this textbook provides you with an enjoyable introduction to the field of algorithms. We have attempted to make every algorithm accessible and interesting. To help you when you encounter unfamiliar or difficult algorithms, we describe each one in a step-by-step manner. We also provide careful explanations of the mathematics needed to understand the analysis of the algorithms. If you already have some familiarity with a topic, you will find the chapters organized so that you can skim introductory sections and proceed quickly to the more advanced material.

This is a large book, and your class will probably cover only a portion of its material.
We have tried, however, to make this a book that will be useful to you now as a course textbook and also later in your career as a mathematical desk reference or an engineering handbook.

What are the prerequisites for reading this book?

- You should have some programming experience. In particular, you should understand recursive procedures and simple data structures such as arrays and linked lists.
- You should have some facility with mathematical proofs, and especially proofs by mathematical induction. A few portions of the book rely on some knowledge of elementary calculus. Beyond that, Parts I and VIII of this book teach you all the mathematical techniques you will need.

We have heard, loud and clear, the call to supply solutions to problems and exercises. Our Web site, http://mitpress.mit.edu/algorithms/, links to solutions for a few of the problems and exercises. Feel free to check your solutions against ours. We ask, however, that you do not send your solutions to us.

To the professional

The wide range of topics in this book makes it an excellent handbook on algorithms. Because each chapter is relatively self-contained, you can focus in on the topics that most interest you.

Most of the algorithms we discuss have great practical utility. We therefore address implementation concerns and other engineering issues. We often provide practical alternatives to the few algorithms that are primarily of theoretical interest.

If you wish to implement any of the algorithms, you should find the translation of our pseudocode into your favorite programming language to be a fairly straightforward task. We have designed the pseudocode to present each algorithm clearly and succinctly. Consequently, we do not address error-handling and other software-engineering issues that require specific assumptions about your programming environment. We attempt to present each algorithm simply and directly without allowing the idiosyncrasies of a particular programming language to obscure its essence.

We understand that if you are using this book outside of a course, then you might be unable to check your solutions to problems and exercises against solutions provided by an instructor. Our Web site, http://mitpress.mit.edu/algorithms/, links to solutions for some of the problems and exercises so that you can check your work. Please do not send your solutions to us.

To our colleagues

We have supplied an extensive bibliography and pointers to the current literature. Each chapter ends with a set of chapter notes that give historical details and references. The chapter notes do not provide a complete reference to the whole field of algorithms, however. Though it may be hard to believe for a book of this size, space constraints prevented us from including many interesting algorithms.

Despite myriad requests from students for solutions to problems and exercises, we have chosen as a matter of policy not to supply references for problems and exercises, to remove the temptation for students to look up a solution rather than to find it themselves.

Changes for the third edition

What has changed between the second and third editions of this book? The magnitude of the changes is on a par with the changes between the first and second editions. As we said about the second-edition changes, depending on how you look at it, the book changed either not much or quite a bit.

A quick look at the table of contents shows that most of the second-edition chapters and sections appear in the third edition.
We removed two chapters and one section, but we have added three new chapters and two new sections apart from these new chapters.

We kept the hybrid organization from the first two editions. Rather than organizing chapters by only problem domains or according only to techniques, this book has elements of both. It contains technique-based chapters on divide-and-conquer, dynamic programming, greedy algorithms, amortized analysis, NP-completeness, and approximation algorithms. But it also has entire parts on sorting, on data structures for dynamic sets, and on algorithms for graph problems. We find that although you need to know how to apply techniques for designing and analyzing algorithms, problems seldom announce to you which techniques are most amenable to solving them.

Here is a summary of the most significant changes for the third edition:

- We added new chapters on van Emde Boas trees and multithreaded algorithms, and we have broken out material on matrix basics into its own appendix chapter.
- We revised the chapter on recurrences to more broadly cover the divide-and-conquer technique, and its first two sections apply divide-and-conquer to solve two problems. The second section of this chapter presents Strassen's algorithm for matrix multiplication, which we have moved from the chapter on matrix operations.
- We removed two chapters that were rarely taught: binomial heaps and sorting networks. One key idea in the sorting networks chapter, the 0-1 principle, appears in this edition within Problem 8-7 as the 0-1 sorting lemma for compare-exchange algorithms. The treatment of Fibonacci heaps no longer relies on binomial heaps as a precursor.
- We revised our treatment of dynamic programming and greedy algorithms. Dynamic programming now leads off with a more interesting problem, rod cutting, than the assembly-line scheduling problem from the second edition. Furthermore, we emphasize memoization a bit more than we did in the second edition, and we introduce the notion of the subproblem graph as a way to understand the running time of a dynamic-programming algorithm. In our opening example of greedy algorithms, the activity-selection problem, we get to the greedy algorithm more directly than we did in the second edition.
- The way we delete a node from binary search trees (which includes red-black trees) now guarantees that the node requested for deletion is the node that is actually deleted. In the first two editions, in certain cases, some other node would be deleted, with its contents moving into the node passed to the deletion procedure. With our new way to delete nodes, if other components of a program maintain pointers to nodes in the tree, they will not mistakenly end up with stale pointers to nodes that have been deleted.
- The material on flow networks now bases flows entirely on edges. This approach is more intuitive than the net flow used in the first two editions.
- With the material on matrix basics and Strassen's algorithm moved to other chapters, the chapter on matrix operations is smaller than in the second edition.
- We have modified our treatment of the Knuth-Morris-Pratt string-matching algorithm.
- We corrected several errors. Most of these errors were posted on our Web site of second-edition errata, but a few were not.
- Based on many requests, we changed the syntax (as it were) of our pseudocode. We now use "=" to indicate assignment and "==" to test for equality, just as C, C++, Java, and Python do.
Likewise, we have eliminated the keywords do and then and adopted "//" as our comment-to-end-of-line symbol. We also now use dot-notation to indicate object attributes. Our pseudocode remains procedural, rather than object-oriented. In other words, rather than running methods on objects, we simply call procedures, passing objects as parameters.
- We added 100 new exercises and 28 new problems. We also updated many bibliography entries and added several new ones.
- Finally, we went through the entire book and rewrote sentences, paragraphs, and sections to make the writing clearer and more active.

Web site

You can use our Web site, http://mitpress.mit.edu/algorithms/, to obtain supplementary information and to communicate with us. The Web site links to a list of known errors, solutions to selected exercises and problems, and (of course) a list explaining the corny professor jokes, as well as other content that we might add. The Web site also tells you how to report errors or make suggestions.

How we produced this book

Like the second edition, the third edition was produced in LaTeX 2e. We used the Times font with mathematics typeset using the MathTime Pro 2 fonts. We thank Michael Spivak from Publish or Perish, Inc., Lance Carnes from Personal TeX, Inc., and Tim Tregubov from Dartmouth College for technical support. As in the previous two editions, we compiled the index using Windex, a C program that we wrote, and the bibliography was produced with BibTeX. The PDF files for this book were created on a MacBook running OS 10.5.

We drew the illustrations for the third edition using MacDraw Pro, with some of the mathematical expressions in illustrations laid in with the psfrag package for LaTeX 2e. Unfortunately, MacDraw Pro is legacy software, having not been marketed for over a decade now. Happily, we still have a couple of Macintoshes that can run the Classic environment under OS 10.4, and hence they can run MacDraw Pro, mostly. Even under the Classic environment, we find MacDraw Pro to be far easier to use than any other drawing software for the types of illustrations that accompany computer-science text, and it produces beautiful output. (Footnote: We investigated several drawing programs that run under Mac OS X, but all had significant shortcomings compared with MacDraw Pro. We briefly attempted to produce the illustrations for this book with a different, well known drawing program. We found that it took at least five times as long to produce each illustration as it took with MacDraw Pro, and the resulting illustrations did not look as good. Hence the decision to revert to MacDraw Pro running on older Macintoshes.) Who knows how long our pre-Intel Macs will continue to run, so if anyone from Apple is listening: Please create an OS X-compatible version of MacDraw Pro!

Acknowledgments for the third edition

We have been working with the MIT Press for over two decades now, and what a terrific relationship it has been! We thank Ellen Faran, Bob Prior, Ada Brunstein, and Mary Reilly for their help and support.

We were geographically distributed while producing the third edition, working in the Dartmouth College Department of Computer Science, the MIT Computer Science and Artificial Intelligence Laboratory, and the Columbia University Department of Industrial Engineering and Operations Research. We thank our respective universities and colleagues for providing such supportive and stimulating environments.
Julie Sussman, P.P.A., once again bailed us out as the technical copyeditor. Time and again, we were amazed at the errors that eluded us, but that Julie caught. She also helped us improve our presentation in several places. If there is a Hall of Fame for technical copyeditors, Julie is a sure-fire, first-ballot inductee. She is nothing short of phenomenal. Thank you, thank you, thank you, Julie! Priya Natarajan also found some errors that we were able to correct before this book went to press. Any errors that remain (and undoubtedly, some do) are the responsibility of the authors (and probably were inserted after Julie read the material).

The treatment for van Emde Boas trees derives from Erik Demaine's notes, which were in turn influenced by Michael Bender. We also incorporated ideas from Javed Aslam, Bradley Kuszmaul, and Hui Zha into this edition.

The chapter on multithreading was based on notes originally written jointly with Harald Prokop. The material was influenced by several others working on the Cilk project at MIT, including Bradley Kuszmaul and Matteo Frigo. The design of the multithreaded pseudocode took its inspiration from the MIT Cilk extensions to C and by Cilk Arts's Cilk++ extensions to C++.

We also thank the many readers of the first and second editions who reported errors or submitted suggestions for how to improve this book. We corrected all the bona fide errors that were reported, and we incorporated as many suggestions as we could. We rejoice that the number of such contributors has grown so great that we must regret that it has become impractical to list them all.

Finally, we thank our wives, Nicole Cormen, Wendy Leiserson, Gail Rivest, and Rebecca Ivry, and our children, Ricky, Will, Debby, and Katie Leiserson; Alex and Christopher Rivest; and Molly, Noah, and Benjamin Stein, for their love and support while we prepared this book. The patience and encouragement of our families made this project possible. We affectionately dedicate this book to them.

Thomas H. Cormen, Lebanon, New Hampshire
Charles E. Leiserson, Cambridge, Massachusetts
Ronald L. Rivest, Cambridge, Massachusetts
Clifford Stein, New York, New York
February 2009

I Foundations

Introduction

This part will start you thinking about designing and analyzing algorithms. It is intended to be a gentle introduction to how we specify algorithms, some of the design strategies we will use throughout this book, and many of the fundamental ideas used in algorithm analysis. Later parts of this book will build upon this base.

Chapter 1 provides an overview of algorithms and their place in modern computing systems. This chapter defines what an algorithm is and lists some examples. It also makes a case that we should consider algorithms as a technology, alongside technologies such as fast hardware, graphical user interfaces, object-oriented systems, and networks.

In Chapter 2, we see our first algorithms, which solve the problem of sorting a sequence of n numbers. They are written in a pseudocode which, although not directly translatable to any conventional programming language, conveys the structure of the algorithm clearly enough that you should be able to implement it in the language of your choice.
The sorting algorithms we examine are insertion sort, which uses an incremental approach, and merge sort, which uses a recursive technique known as "divide-and-conquer." Although the time each requires increases with the value of n, the rate of increase differs between the two algorithms. We determine these running times in Chapter 2, and we develop a useful notation to express them.

Chapter 3 precisely defines this notation, which we call asymptotic notation. It starts by defining several asymptotic notations, which we use for bounding algorithm running times from above and/or below. The rest of Chapter 3 is primarily a presentation of mathematical notation, more to ensure that your use of notation matches that in this book than to teach you new mathematical concepts.

Chapter 4 delves further into the divide-and-conquer method introduced in Chapter 2. It provides additional examples of divide-and-conquer algorithms, including Strassen's surprising method for multiplying two square matrices. Chapter 4 contains methods for solving recurrences, which are useful for describing the running times of recursive algorithms. One powerful technique is the "master method," which we often use to solve recurrences that arise from divide-and-conquer algorithms. Although much of Chapter 4 is devoted to proving the correctness of the master method, you may skip this proof yet still employ the master method.

Chapter 5 introduces probabilistic analysis and randomized algorithms. We typically use probabilistic analysis to determine the running time of an algorithm in cases in which, due to the presence of an inherent probability distribution, the running time may differ on different inputs of the same size. In some cases, we assume that the inputs conform to a known probability distribution, so that we are averaging the running time over all possible inputs. In other cases, the probability distribution comes not from the inputs but from random choices made during the course of the algorithm. An algorithm whose behavior is determined not only by its input but by the values produced by a random-number generator is a randomized algorithm. We can use randomized algorithms to enforce a probability distribution on the inputs, thereby ensuring that no particular input always causes poor performance, or even to bound the error rate of algorithms that are allowed to produce incorrect results on a limited basis.

Appendices A-D contain other mathematical material that you will find helpful as you read this book. You are likely to have seen much of the material in the appendix chapters before having read this book (although the specific definitions and notational conventions we use may differ in some cases from what you have seen in the past), and so you should think of the Appendices as reference material. On the other hand, you probably have not already seen most of the material in Part I. All the chapters in Part I and the Appendices are written with a tutorial flavor.

1 The Role of Algorithms in Computing

What are algorithms? Why is the study of algorithms worthwhile? What is the role of algorithms relative to other technologies used in computers? In this chapter, we will answer these questions.

1.1 Algorithms

Informally, an algorithm is any well-defined computational procedure that takes some value, or set of values, as input and produces some value, or set of values, as output. An algorithm is thus a sequence of computational steps that transform the input into the output.
We can also view an algorithm as a tool for solving a well-specified computational problem. The statement of the problem specifies in general terms the desired input/output relationship. The algorithm describes a specific computational procedure for achieving that input/output relationship.

For example, we might need to sort a sequence of numbers into nondecreasing order. This problem arises frequently in practice and provides fertile ground for introducing many standard design techniques and analysis tools. Here is how we formally define the sorting problem:

Input: A sequence of n numbers ⟨a1, a2, ..., an⟩.
Output: A permutation (reordering) ⟨a1', a2', ..., an'⟩ of the input sequence such that a1' ≤ a2' ≤ ... ≤ an'.

For example, given the input sequence ⟨31, 41, 59, 26, 41, 58⟩, a sorting algorithm returns as output the sequence ⟨26, 31, 41, 41, 58, 59⟩. Such an input sequence is called an instance of the sorting problem. In general, an instance of a problem consists of the input (satisfying whatever constraints are imposed in the problem statement) needed to compute a solution to the problem.
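To ground the formal definition, here is a short, runnable sketch of one solution to the sorting problem, insertion sort, which the book develops in Chapter 2. The Python rendering below is my own translation under that assumption, not the book's pseudocode:

    def insertion_sort(a):
        """Sort list a into nondecreasing order, in place.

        The incremental approach the book describes: grow a sorted
        prefix of the list one element at a time.
        """
        for j in range(1, len(a)):
            key = a[j]
            i = j - 1
            # Shift larger elements of the sorted prefix to the right.
            while i >= 0 and a[i] > key:
                a[i + 1] = a[i]
                i -= 1
            a[i + 1] = key
        return a

    # The book's instance of the sorting problem:
    print(insertion_sort([31, 41, 59, 26, 41, 58]))
    # -> [26, 31, 41, 41, 58, 59]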
Because many programs use it as an intermediate step, sorting is a fundamental operation in computer science. As a result, we have a large number of good sorting algorithms at our disposal. Which algorithm is best for a given application depends on, among other factors, the number of items to be sorted, the extent to which the items are already somewhat sorted, possible restrictions on the item values, the architecture of the computer, and the kind of storage devices to be used: main memory, disks, or even tapes.

An algorithm is said to be correct if, for every input instance, it halts with the correct output. We say that a correct algorithm solves the given computational problem. An incorrect algorithm might not halt at all on some input instances, or it might halt with an incorrect answer. Contrary to what you might expect, incorrect algorithms can sometimes be useful, if we can control their error rate. We shall see an example of an algorithm with a controllable error rate in Chapter 31 when we study algorithms for finding large prime numbers. Ordinarily, however, we shall be concerned only with correct algorithms.

An algorithm can be specified in English, as a computer program, or even as a hardware design. The only requirement is that the specification must provide a precise description of the computational procedure to be followed.

What kinds of problems are solved by algorithms?

Sorting is by no means the only computational problem for which algorithms have been developed. (You probably suspected as much when you saw the size of this book.) Practical applications of algorithms are ubiquitous and include the following examples:

- The Human Genome Project has made great progress toward the goals of identifying all the 100,000 genes in human DNA, determining the sequences of the 3 billion chemical base pairs that make up human DNA, storing this information in databases, and developing tools for data analysis. Each of these steps requires sophisticated algorithms. Although the solutions to the various problems involved are beyond the scope of this book, many methods to solve these biological problems use ideas from several of the chapters in this book, thereby enabling scientists to accomplish tasks while using resources efficiently. The savings are in time, both human and machine, and in money, as more information can be extracted from laboratory techniques.

- The Internet enables people all around the world to quickly access and retrieve large amounts of information. With the aid of clever algorithms, sites on the Internet are able to manage and manipulate this large volume of data. Examples of problems that make essential use of algorithms include finding good routes on which the data will travel (techniques for solving such problems appear in Chapter 24), and using a search engine to quickly find pages on which particular information resides (related techniques are in Chapters 11 and 32).

- Electronic commerce enables goods and services to be negotiated and exchanged electronically, and it depends on the privacy of personal information such as credit card numbers, passwords, and bank statements. The core technologies used in electronic commerce include public-key cryptography and digital signatures (covered in Chapter 31), which are based on numerical algorithms and number theory.

- Manufacturing and other commercial enterprises often need to allocate scarce resources in the most beneficial way. An oil company may wish to know where to place its wells in order to maximize its expected profit. A political candidate may want to determine where to spend money buying campaign advertising in order to maximize the chances of winning an election. An airline may wish to assign crews to flights in the least expensive way possible, making sure that each flight is covered and that government regulations regarding crew scheduling are met. An Internet service provider may wish to determine where to place additional resources in order to serve its customers more effectively. All of these are examples of problems that can be solved using linear programming, which we shall study in Chapter 29.

Although some of the details of these examples are beyond the scope of this book, we do give underlying techniques that apply to these problems and problem areas.
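As a concrete taste of the linear programming just mentioned, the toy model below maximizes profit from two products under a shared resource limit. It is only an illustrative sketch: the numbers are invented, and it assumes the third-party SciPy library (scipy.optimize.linprog) rather than the book's Chapter 29 simplex algorithm:

    from scipy.optimize import linprog  # assumes SciPy is installed

    # Maximize profit 3x + 2y subject to:
    #   x + y <= 10   (shared resource limit)
    #   0 <= x <= 6, 0 <= y <= 8
    # linprog minimizes, so negate the objective to maximize.
    result = linprog(
        c=[-3, -2],                  # objective coefficients (negated)
        A_ub=[[1, 1]], b_ub=[10],    # x + y <= 10
        bounds=[(0, 6), (0, 8)],
    )
    print(result.x, -result.fun)     # optimal plan and profit: [6. 4.] 26.0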
We also show how to solve many specific problems, including the following:

- We are given a road map on which the distance between each pair of adjacent intersections is marked, and we wish to determine the shortest route from one intersection to another. The number of possible routes can be huge, even if we disallow routes that cross over themselves. How do we choose which of all possible routes is the shortest? Here, we model the road map (which is itself a model of the actual roads) as a graph (which we will meet in Part VI and Appendix B), and we wish to find the shortest path from one vertex to another in the graph. We shall see how to solve this problem efficiently in Chapter 24.

- We are given two ordered sequences of symbols, X = ⟨x1, x2, ..., xm⟩ and Y = ⟨y1, y2, ..., yn⟩, and we wish to find a longest common subsequence of X and Y. A subsequence of X is just X with some (or possibly all or none) of its elements removed. For example, one subsequence of ⟨A, B, C, D, E, F, G⟩ would be ⟨B, C, E, G⟩. The length of a longest common subsequence of X and Y gives one measure of how similar these two sequences are. For example, if the two sequences are base pairs in DNA strands, then we might consider them similar if they have a long common subsequence. If X has m symbols and Y has n symbols, then X and Y have 2^m and 2^n possible subsequences, respectively. Selecting all possible subsequences of X and Y and matching them up could take a prohibitively long time unless m and n are very small. We shall see in Chapter 15 how to use a general technique known as dynamic programming to solve this problem much more efficiently.

- We are given a mechanical design in terms of a library of parts, where each part may include instances of other parts, and we need to list the parts in order so that each part appears before any part that uses it. If the design comprises n parts, then there are n! possible orders, where n! denotes the factorial function. Because the factorial function grows faster than even an exponential function, we cannot feasibly generate each possible order and then verify that, within that order, each part appears before the parts using it (unless we have only a few parts). This problem is an instance of topological sorting, and we shall see in Chapter 22 how to solve this problem efficiently. [See the sketch following this list.]

- We are given n points in the plane, and we wish to find the convex hull of these points. The convex hull is the smallest convex polygon containing the points. Intuitively, we can think of each point as being represented by a nail sticking out from a board. The convex hull would be represented by a tight rubber band that surrounds all the nails. Each nail around which the rubber band makes a turn is a vertex of the convex hull. (See Figure 33.6 on page 1029 for an example.) Any of the 2^n subsets of the points might be the vertices of the convex hull. Knowing which points are vertices of the convex hull is not quite enough, either, since we also need to know the order in which they appear. There are many choices, therefore, for the vertices of the convex hull. Chapter 33 gives two good methods for finding the convex hull.
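Here is the promised sketch of the parts-ordering (topological sorting) problem from the list above. The dependency data is hypothetical, invented purely for illustration, and the ordering uses Python's standard-library graphlib module rather than the Chapter 22 algorithm:

    from graphlib import TopologicalSorter

    # Hypothetical parts library: each part maps to the sub-parts it uses.
    # Every sub-part must appear before any part that includes it.
    uses = {
        "engine": {"piston", "crankshaft"},
        "car": {"engine", "wheel"},
        "wheel": {"tire"},
    }

    order = tuple(TopologicalSorter(uses).static_order())
    print(order)
    # One valid order, e.g.:
    # ('piston', 'crankshaft', 'tire', 'engine', 'wheel', 'car')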
These lists are far from exhaustive (as you again have probably surmised from this book's heft), but exhibit two characteristics that are common to many interesting algorithmic problems:

1. They have many candidate solutions, the overwhelming majority of which do not solve the problem at hand. Finding one that does, or one that is "best," can present quite a challenge.

2. They have practical applications. Of the problems in the above list, finding the shortest path provides the easiest examples. A transportation firm, such as a trucking or railroad company, has a financial interest in finding shortest paths through a road or rail network because taking shorter paths results in lower labor and fuel costs. Or a routing node on the Internet may need to find the shortest path through the network in order to route a message quickly. Or a person wishing to drive from New York to Boston may want to find driving directions from an appropriate Web site, or she may use her GPS while driving.

Not every problem solved by algorithms has an easily identified set of candidate solutions. For example, suppose we are given a set of numerical values representing samples of a signal, and we want to compute the discrete Fourier transform of these samples. The discrete Fourier transform converts the time domain to the frequency domain, producing a set of numerical coefficients, so that we can determine the strength of various frequencies in the sampled signal. In addition to lying at the heart of signal processing, discrete Fourier transforms have applications in data compression and multiplying large polynomials and integers. Chapter 30 gives an efficient algorithm, the fast Fourier transform (commonly called the FFT), for this problem, and the chapter also sketches out the design of a hardware circuit to compute the FFT.
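To connect the description above to something executable, the fragment below computes a discrete Fourier transform of a small sampled signal. It assumes NumPy's FFT routine, not the book's Chapter 30 implementation:

    import numpy as np  # assumes NumPy is installed

    # Sample a signal containing 5 Hz and 12 Hz components,
    # 64 samples over one second.
    t = np.arange(64) / 64.0
    signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)

    coeffs = np.fft.fft(signal)       # time domain -> frequency domain
    # Pick the two strongest of the first 32 frequency bins.
    strongest = np.argsort(np.abs(coeffs[:32]))[-2:]
    print(sorted(strongest))          # [5, 12]: the dominant frequencies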
Data structures

This book also contains several data structures. A data structure is a way to store and organize data in order to facilitate access and modifications. No single data structure works well for all purposes, and so it is important to know the strengths and limitations of several of them.

Technique

Although you can use this book as a "cookbook" for algorithms, you may someday encounter a problem for which you cannot readily find a published algorithm (many of the exercises and problems in this book, for example). This book will teach you techniques of algorithm design and analysis so that you can develop algorithms on your own, show that they give the correct answer, and understand their efficiency. Different chapters address different aspects of algorithmic problem solving. Some chapters address specific problems, such as finding medians and order statistics in Chapter 9, computing minimum spanning trees in Chapter 23, and determining a maximum flow in a network in Chapter 26. Other chapters address techniques, such as divide-and-conquer in Chapter 4, dynamic programming in Chapter 15, and amortized analysis in Chapter 17.

Hard problems

Most of this book is about efficient algorithms. Our usual measure of efficiency is speed, i.e., how long an algorithm takes to produce its result. There are some problems, however, for which no efficient solution is known. Chapter 34 studies an interesting subset of these problems, which are known as NP-complete.

Why are NP-complete problems interesting? First, although no efficient algorithm for an NP-complete problem has ever been found, nobody has ever proven that an efficient algorithm for one cannot exist. In other words, no one knows whether or not efficient algorithms exist for NP-complete problems. Second, the set of NP-complete problems has the remarkable property that if an efficient algorithm exists for any one of them, then efficient algorithms exist for all of them. This relationship among the NP-complete problems makes the lack of efficient solutions all the more tantalizing. Third, several NP-complete problems are similar, but not identical, to problems for which we do know of efficient algorithms. Computer scientists are intrigued by how a small change to the problem statement can cause a big change to the efficiency of the best known algorithm.

You should know about NP-complete problems because some of them arise surprisingly often in real applications. If you are called upon to produce an efficient algorithm for an NP-complete problem, you are likely to spend a lot of time in a fruitless search. If you can show that the problem is NP-complete, you can instead spend your time developing an efficient algorithm that gives a good, but not the best possible, solution.

As a concrete example, consider a delivery company with a central depot. Each day, it loads up each delivery truck at the depot and sends it around to deliver goods to several addresses. At the end of the day, each truck must end up back at the depot so that it is ready to be loaded for the next day. To reduce costs, the company wants to select an order of delivery stops that yields the lowest overall distance traveled by each truck. This problem is the well-known "traveling-salesman problem," and it is NP-complete. It has no known efficient algorithm. Under certain assumptions, however, we know of efficient algorithms that give an overall distance which is not too far above the smallest possible. Chapter 35 discusses such "approximation algorithms."

Parallelism

For many years, we could count on processor clock speeds increasing at a steady rate. Physical limitations present a fundamental roadblock to ever-increasing clock speeds, however: because power density increases superlinearly with clock speed, chips run the risk of melting once their clock speeds become high enough. In order to perform more computations per second, therefore, chips are being designed to contain not just one but several processing "cores." We can liken these multicore computers to several sequential computers on a single chip; in other words, they are a type of "parallel computer." In order to elicit the best performance from multicore computers, we need to design algorithms with parallelism in mind. Chapter 27 presents a model for "multithreaded" algorithms, which take advantage of multiple cores. This model has advantages from a theoretical standpoint, and it forms the basis of several successful computer programs, including a championship chess program.
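As a minimal illustration of the multicore idea described above (my own sketch, using Python worker processes rather than the book's multithreaded pseudocode from Chapter 27), the snippet splits a large summation into independent chunks so separate cores can work in parallel:

    from concurrent.futures import ProcessPoolExecutor

    def chunk_sum(bounds):
        lo, hi = bounds
        return sum(range(lo, hi))

    if __name__ == "__main__":
        n = 10_000_000
        # Divide the input into four independent chunks, one per core.
        step = n // 4
        chunks = [(i * step, (i + 1) * step) for i in range(4)]
        with ProcessPoolExecutor(max_workers=4) as pool:
            total = sum(pool.map(chunk_sum, chunks))
        print(total == n * (n - 1) // 2)  # True: matches the serial formula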
For a concrete example, let us pit a faster computer (computer A) running insertion sort against a slower computer (computer B) running merge sort. They each must sort an array of 10 million numbers. (Although 10 million numbers might seem like a lot, if the numbers are eight-byte integers, then the input occupies about 80 megabytes, which fits in the memory of even an inexpensive laptop computer many times over.) Suppose that computer A executes 10 billion instructions per second (faster than any single sequential computer at the time of this writing) and computer B executes only 10 million instructions per second, so that computer A is 1000 times faster than computer B in raw computing power. To make the difference even more dramatic, suppose that the world's craftiest programmer codes insertion sort in machine language for computer A, and the resulting code requires 2n^2 instructions to sort n numbers. Suppose further that just an average programmer implements merge sort, using a high-level language with an inefficient compiler, with the resulting code taking 50 n lg n instructions. To sort 10 million numbers, computer A takes

    2 · (10^7)^2 instructions / (10^10 instructions/second) = 20,000 seconds (more than 5.5 hours),

while computer B takes

    50 · 10^7 · lg 10^7 instructions / (10^7 instructions/second) ≈ 1163 seconds (less than 20 minutes).

By using an algorithm whose running time grows more slowly, even with a poor compiler, computer B runs more than 17 times faster than computer A! The advantage of merge sort is even more pronounced when we sort 100 million numbers: where insertion sort takes more than 23 days, merge sort takes under four hours. In general, as the problem size increases, so does the relative advantage of merge sort.
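The arithmetic above is easy to replay; a minimal Python sketch using exactly the instruction counts and machine speeds assumed in the text:

import math

n = 10**7
a_time = 2 * n**2 / 10**10              # computer A: 2n^2 instructions at 10^10/s
b_time = 50 * n * math.log2(n) / 10**7  # computer B: 50 n lg n at 10^7/s
print(a_time)           # 20000.0 seconds, i.e. more than 5.5 hours
print(round(b_time))    # about 1163 seconds, i.e. under 20 minutes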
Algorithms and other technologies

The example above shows that we should consider algorithms, like computer hardware, as a technology. Total system performance depends on choosing efficient algorithms as much as on choosing fast hardware. Just as rapid advances are being made in other computer technologies, they are being made in algorithms as well. You might wonder whether algorithms are truly that important on contemporary computers in light of other advanced technologies, such as

- advanced computer architectures and fabrication technologies,
- easy-to-use, intuitive, graphical user interfaces (GUIs),
- object-oriented systems,
- integrated Web technologies, and
- fast networking, both wired and wireless.

The answer is yes. Although some applications do not explicitly require algorithmic content at the application level (such as some simple, Web-based applications), many do. For example, consider a Web-based service that determines how to travel from one location to another. Its implementation would rely on fast hardware, a graphical user interface, wide-area networking, and also possibly on object orientation. However, it would also require algorithms for certain operations, such as finding routes (probably using a shortest-path algorithm), rendering maps, and interpolating addresses.

Moreover, even an application that does not require algorithmic content at the application level relies heavily upon algorithms. Does the application rely on fast hardware? The hardware design used algorithms. Does the application rely on graphical user interfaces? The design of any GUI relies on algorithms. Does the application rely on networking? Routing in networks relies heavily on algorithms. Was the application written in a language other than machine code? Then it was processed by a compiler, interpreter, or assembler, all of which make extensive use of algorithms. Algorithms are at the core of most technologies used in contemporary computers.

Furthermore, with the ever-increasing capacities of computers, we use them to solve larger problems than ever before. As we saw in the above comparison between insertion sort and merge sort, it is at larger problem sizes that the differences in efficiency between algorithms become particularly prominent.

Having a solid base of algorithmic knowledge and technique is one characteristic that separates the truly skilled programmers from the novices. With modern computing technology, you can accomplish some tasks without knowing much about algorithms, but with a good background in algorithms, you can do much, much more.

Exercises

1.2-1 Give an example of an application that requires algorithmic content at the application level, and discuss the function of the algorithms involved.

1.2-2 Suppose we are comparing implementations of insertion sort and merge sort on the same machine. For inputs of size n, insertion sort runs in 8n^2 steps, while merge sort runs in 64 n lg n steps. For which values of n does insertion sort beat merge sort?

1.2-3 What is the smallest value of n such that an algorithm whose running time is 100n^2 runs faster than an algorithm whose running time is 2^n on the same machine?

Problems

1-1 Comparison of running times
For each function f(n) and time t in the following table, determine the largest size n of a problem that can be solved in time t, assuming that the algorithm to solve the problem takes f(n) microseconds. (The table has one row for each of f(n) = lg n, √n, n, n lg n, n^2, n^3, 2^n, and n!, and one column for each of t = 1 second, 1 minute, 1 hour, 1 day, 1 month, 1 year, and 1 century.)

Chapter notes

There are many excellent texts on the general topic of algorithms, including those by Aho, Hopcroft, and Ullman [5, 6]; Baase and Van Gelder [28]; Brassard and Bratley [54]; Dasgupta, Papadimitriou, and Vazirani [82]; Goodrich and Tamassia [148]; Hofri [175]; Horowitz, Sahni, and Rajasekaran [181]; Johnsonbaugh and Schaefer [193]; Kingston [205]; Kleinberg and Tardos [208]; Knuth [209, 210, 211]; Kozen [220]; Levitin [235]; Manber [242]; Mehlhorn [249, 250, 251]; Purdom and Brown [287]; Reingold, Nievergelt, and Deo [293]; Sedgewick [306]; Sedgewick and Flajolet [307]; Skiena [318]; and Wilf [356].
Some of the more practical aspects of algorithm design are discussed by Bentley [42, 43] and Gonnet [145]. Surveys of the field of algorithms can also be found in the Handbook of Theoretical Computer Science, Volume A [342] and the CRC Algorithms and Theory of Computation Handbook [25]. Overviews of the algorithms used in computational biology can be found in textbooks by Gusfield [156], Pevzner [275], Setubal and Meidanis [310], and Waterman [350].

2 Getting Started

This chapter will familiarize you with the framework we shall use throughout the book to think about the design and analysis of algorithms. It is self-contained, but it does include several references to material that we introduce in Chapters 3 and 4. (It also contains several summations, which Appendix A shows how to solve.)

We begin by examining the insertion sort algorithm to solve the sorting problem introduced in Chapter 1. We define a "pseudocode" that should be familiar to you if you have done computer programming, and we use it to show how we shall specify our algorithms. Having specified the insertion sort algorithm, we then argue that it correctly sorts, and we analyze its running time. The analysis introduces a notation that focuses on how that time increases with the number of items to be sorted. Following our discussion of insertion sort, we introduce the divide-and-conquer approach to the design of algorithms and use it to develop an algorithm called merge sort. We end with an analysis of merge sort's running time.

2.1 Insertion sort

Our first algorithm, insertion sort, solves the sorting problem introduced in Chapter 1:

Input: A sequence of n numbers ⟨a1, a2, ..., an⟩.
Output: A permutation (reordering) ⟨a'1, a'2, ..., a'n⟩ of the input sequence such that a'1 ≤ a'2 ≤ ... ≤ a'n.

The numbers that we wish to sort are also known as the keys. Although conceptually we are sorting a sequence, the input comes to us in the form of an array with n elements.

In this book, we shall typically describe algorithms as programs written in a pseudocode that is similar in many respects to C, C++, Java, Python, or Pascal. If you have been introduced to any of these languages, you should have little trouble reading our algorithms.

[Figure 2.1: Sorting a hand of cards using insertion sort.]

What separates pseudocode from "real" code is that in pseudocode, we employ whatever expressive method is most clear and concise to specify a given algorithm. Sometimes, the clearest method is English, so do not be surprised if you come across an English phrase or sentence embedded within a section of "real" code. Another difference between pseudocode and real code is that pseudocode is not typically concerned with issues of software engineering. Issues of data abstraction, modularity, and error handling are often ignored in order to convey the essence of the algorithm more concisely.

We start with insertion sort, which is an efficient algorithm for sorting a small number of elements. Insertion sort works the way many people sort a hand of playing cards. We start with an empty left hand and the cards face down on the table. We then remove one card at a time from the table and insert it into the correct position in the left hand. To find the correct position for a card, we compare it with each of the cards already in the hand, from right to left, as illustrated in Figure 2.1. At all times, the cards held in the left hand are sorted, and these cards were originally the top cards of the pile on the table.

We present our pseudocode for insertion sort as a procedure called INSERTION-SORT, which takes as a parameter an array A[1..n] containing a sequence of length n that is to be sorted. (In the code, the number n of elements in A is denoted by A.length.) The algorithm sorts the input numbers in place: it rearranges the numbers within the array A, with at most a constant number of them stored outside the array at any time. The input array A contains the sorted output sequence when the INSERTION-SORT procedure is finished.

[Figure 2.2: The operation of INSERTION-SORT on the array A = ⟨5, 2, 4, 6, 1, 3⟩. Array indices appear above the rectangles, and values stored in the array positions appear within the rectangles. (a)-(e) The iterations of the for loop of lines 1-8. In each iteration, the black rectangle holds the key taken from A[j], which is compared with the values in shaded rectangles to its left in the test of line 5. Shaded arrows show array values moved one position to the right in line 6, and black arrows indicate where the key moves to in line 8. (f) The final sorted array.]

INSERTION-SORT(A)
1  for j = 2 to A.length
2      key = A[j]
3      // Insert A[j] into the sorted sequence A[1..j-1].
4      i = j - 1
5      while i > 0 and A[i] > key
6          A[i+1] = A[i]
7          i = i - 1
8      A[i+1] = key
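For readers who want to execute the procedure, here is one possible transcription into Python. Python lists are 0-indexed, unlike the 1-indexed pseudocode, so every bound shifts down by one; the comments map each statement back to lines 1-8:

def insertion_sort(a):
    # Sorts the list a in place, mirroring INSERTION-SORT above.
    for j in range(1, len(a)):        # line 1: j = 2 to A.length, shifted to 0-indexing
        key = a[j]                    # line 2
        # Insert a[j] into the sorted sequence a[0..j-1] (line 3).
        i = j - 1                     # line 4
        while i >= 0 and a[i] > key:  # line 5: i > 0 becomes i >= 0
            a[i + 1] = a[i]           # line 6
            i = i - 1                 # line 7
        a[i + 1] = key                # line 8
    return a

print(insertion_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]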
Loop invariants and the correctness of insertion sort

Figure 2.2 shows how this algorithm works for A = ⟨5, 2, 4, 6, 1, 3⟩. The index j indicates the "current card" being inserted into the hand. At the beginning of each iteration of the for loop, which is indexed by j, the subarray consisting of elements A[1..j-1] constitutes the currently sorted hand, and the remaining subarray A[j+1..n] corresponds to the pile of cards still on the table. In fact, elements A[1..j-1] are the elements originally in positions 1 through j-1, but now in sorted order. We state these properties of A[1..j-1] formally as a loop invariant:

    At the start of each iteration of the for loop of lines 1-8, the subarray A[1..j-1] consists of the elements originally in A[1..j-1], but in sorted order.

We use loop invariants to help us understand why an algorithm is correct. We must show three things about a loop invariant:

Initialization: It is true prior to the first iteration of the loop.

Maintenance: If it is true before an iteration of the loop, it remains true before the next iteration.

Termination: When the loop terminates, the invariant gives us a useful property that helps show that the algorithm is correct.

When the first two properties hold, the loop invariant is true prior to every iteration of the loop. (Of course, we are free to use established facts other than the loop invariant itself to prove that the loop invariant remains true before each iteration.) Note the similarity to mathematical induction, where to prove that a property holds, you prove a base case and an inductive step. Here, showing that the invariant holds before the first iteration corresponds to the base case, and showing that the invariant holds from iteration to iteration corresponds to the inductive step.

The third property is perhaps the most important one, since we are using the loop invariant to show correctness. Typically, we use the loop invariant along with the condition that caused the loop to terminate. The termination property differs from how we usually use mathematical induction, in which we apply the inductive step infinitely; here, we stop the "induction" when the loop terminates.
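One way to make the invariant tangible is to check it at the top of every iteration. The sketch below is illustrative instrumentation, not part of the book's procedure; the assertion restates the invariant in 0-indexed terms:

def insertion_sort_checked(a):
    original = list(a)  # remember the input so we can state the invariant
    for j in range(1, len(a)):
        # Loop invariant: a[0..j-1] holds the elements originally in
        # a[0..j-1], but in sorted order.
        assert a[:j] == sorted(original[:j])
        key = a[j]
        i = j - 1
        while i >= 0 and a[i] > key:
            a[i + 1] = a[i]
            i -= 1
        a[i + 1] = key
    return a

print(insertion_sort_checked([5, 2, 4, 6, 1, 3]))  # no assertion fires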
Let us see how these properties hold for insertion sort.

Initialization: We start by showing that the loop invariant holds before the first loop iteration, when j = 2. (When the loop is a for loop, the moment at which we check the loop invariant just prior to the first iteration is immediately after the initial assignment to the loop-counter variable and just before the first test in the loop header. In the case of INSERTION-SORT, this time is after assigning 2 to the variable j but before the first test of whether j ≤ A.length.) The subarray A[1..j-1], therefore, consists of just the single element A[1], which is in fact the original element in A[1]. Moreover, this subarray is sorted (trivially, of course), which shows that the loop invariant holds prior to the first iteration of the loop.

Maintenance: Next, we tackle the second property: showing that each iteration maintains the loop invariant. Informally, the body of the for loop works by moving A[j-1], A[j-2], A[j-3], and so on by one position to the right until it finds the proper position for A[j] (lines 4-7), at which point it inserts the value of A[j] (line 8). The subarray A[1..j] then consists of the elements originally in A[1..j], but in sorted order. Incrementing j for the next iteration of the for loop then preserves the loop invariant. A more formal treatment of the second property would require us to state and show a loop invariant for the while loop of lines 5-7. At this point, however, we prefer not to get bogged down in such formalism, and so we rely on our informal analysis to show that the second property holds for the outer loop.

Termination: Finally, we examine what happens when the loop terminates. The condition causing the for loop to terminate is that j > A.length = n. Because each loop iteration increases j by 1, we must have j = n + 1 at that time. Substituting n + 1 for j in the wording of the loop invariant, we have that the subarray A[1..n] consists of the elements originally in A[1..n], but in sorted order. Observing that the subarray A[1..n] is the entire array, we conclude that the entire array is sorted. Hence, the algorithm is correct.

We shall use this method of loop invariants to show correctness later in this chapter and in other chapters as well.

Pseudocode conventions

We use the following conventions in our pseudocode.

- Indentation indicates block structure. For example, the body of the for loop that begins on line 1 consists of lines 2-8, and the body of the while loop that begins on line 5 contains lines 6-7 but not line 8. Our indentation style applies to if-else statements as well. (In an if-else statement, we indent else at the same level as its matching if. Although we omit the keyword then, we occasionally refer to the portion executed when the test following if is true as a then clause. For multiway tests, we use elseif for tests after the first one.) Using indentation instead of conventional indicators of block structure, such as begin and end statements, greatly reduces clutter while preserving, or even enhancing, clarity. (Each pseudocode procedure in this book appears on one page so that you will not have to discern levels of indentation in code that is split across pages.)

- The looping constructs while, for, and repeat-until and the if-else conditional construct have interpretations similar to those in C, C++, Java, Python, and Pascal. (Most block-structured languages have equivalent constructs, though the exact syntax may differ. Python lacks repeat-until loops, and its for loops operate a little differently from the for loops in this book.) In this book, the loop counter retains its value after exiting the loop, unlike some situations that arise in C++, Java, and Pascal. Thus, immediately after a for loop, the loop counter's value is the value that first exceeded the for loop bound. We used this property in our correctness argument for insertion sort. The for loop header in line 1 is for j = 2 to A.length, and so when this loop terminates, j = A.length + 1 (or, equivalently, j = n + 1, since n = A.length). We use the keyword to when a for loop increments its loop counter in each iteration, and we use the keyword downto when a for loop decrements its loop counter. When the loop counter changes by an amount greater than 1, the amount of change follows the optional keyword by.
- The symbol "//" indicates that the remainder of the line is a comment.

- A multiple assignment of the form i = j = e assigns to both variables i and j the value of expression e; it should be treated as equivalent to the assignment j = e followed by the assignment i = j.

- Variables (such as i, j, and key) are local to the given procedure. We shall not use global variables without explicit indication.

- We access array elements by specifying the array name followed by the index in square brackets. For example, A[i] indicates the ith element of the array A. The notation ".." is used to indicate a range of values within an array. Thus, A[1..j] indicates the subarray of A consisting of the j elements A[1], A[2], ..., A[j].

- We typically organize compound data into objects, which are composed of attributes. We access a particular attribute using the syntax found in many object-oriented programming languages: the object name, followed by a dot, followed by the attribute name. For example, we treat an array as an object with the attribute length indicating how many elements it contains. To specify the number of elements in an array A, we write A.length. We treat a variable representing an array or object as a pointer to the data representing the array or object. For all attributes f of an object x, setting y = x causes y.f to equal x.f. Moreover, if we now set x.f = 3, then afterward not only does x.f equal 3, but y.f equals 3 as well. In other words, x and y point to the same object after the assignment y = x. Our attribute notation can "cascade." For example, suppose that the attribute f is itself a pointer to some type of object that has an attribute g. Then the notation x.f.g is implicitly parenthesized as (x.f).g. In other words, if we had assigned y = x.f, then x.f.g is the same as y.g. Sometimes, a pointer will refer to no object at all. In this case, we give it the special value NIL.

- We pass parameters to a procedure by value: the called procedure receives its own copy of the parameters, and if it assigns a value to a parameter, the change is not seen by the calling procedure. When objects are passed, the pointer to the data representing the object is copied, but the object's attributes are not. For example, if x is a parameter of a called procedure, the assignment x = y within the called procedure is not visible to the calling procedure. The assignment x.f = 3, however, is visible. Similarly, arrays are passed by pointer, so that a pointer to the array is passed, rather than the entire array, and changes to individual array elements are visible to the calling procedure. (A short sketch after this list illustrates these pointer semantics.)

- A return statement immediately transfers control back to the point of call in the calling procedure. Most return statements also take a value to pass back to the caller. Our pseudocode differs from many programming languages in that we allow multiple values to be returned in a single return statement.

- The boolean operators "and" and "or" are short circuiting. That is, when we evaluate the expression "x and y" we first evaluate x. If x evaluates to FALSE, then the entire expression cannot evaluate to TRUE, and so we do not evaluate y. If, on the other hand, x evaluates to TRUE, we must evaluate y to determine the value of the entire expression. Similarly, in the expression "x or y" we evaluate the expression y only if x evaluates to FALSE. Short-circuiting operators allow us to write boolean expressions such as "x ≠ NIL and x.f = y" without worrying about what happens when we try to evaluate x.f when x is NIL. (The sketch after this list also demonstrates this guard.)

- The keyword error indicates that an error occurred because conditions were wrong for the procedure to have been called. The calling procedure is responsible for handling the error, and so we do not specify what action to take.
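These pointer and short-circuit semantics match what Python does with objects and lists, so they are easy to demonstrate. Record below is a made-up class, used only for illustration:

class Record:
    def __init__(self, f):
        self.f = f

obj = Record(7)
alias = obj          # copies the pointer, not the object
obj.f = 3
print(alias.f)       # 3: obj and alias point to the same object

def callee(x, arr):
    x.f = 10         # visible to the caller: attribute set through the pointer
    x = Record(99)   # not visible: rebinds only the local parameter
    arr[0] = 42      # visible: array elements are shared through the pointer

a = [5, 2, 4]
callee(obj, a)
print(obj.f, a)      # 10 [42, 2, 4]

# Short-circuiting "and" guards attribute access, as in "x != NIL and x.f = y":
obj = None
print(obj is not None and obj.f == 10)  # False; obj.f is never evaluated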
Exercises

2.1-1 Using Figure 2.2 as a model, illustrate the operation of INSERTION-SORT on the array A = ⟨31, 41, 59, 26, 41, 58⟩.

2.1-2 Rewrite the INSERTION-SORT procedure to sort into nonincreasing instead of nondecreasing order.

2.1-3 Consider the searching problem:

Input: A sequence of n numbers A = ⟨a1, a2, ..., an⟩ and a value v.
Output: An index i such that v = A[i] or the special value NIL if v does not appear in A.

Write pseudocode for linear search, which scans through the sequence, looking for v. Using a loop invariant, prove that your algorithm is correct. Make sure that your loop invariant fulfills the three necessary properties.

2.1-4 Consider the problem of adding two n-bit binary integers, stored in two n-element arrays A and B. The sum of the two integers should be stored in binary form in an (n+1)-element array C. State the problem formally and write pseudocode for adding the two integers.
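To make the scan in Exercise 2.1-3 concrete, one possible Python shape is sketched below; the loop-invariant proof, which is the point of the exercise, is deliberately not attempted here:

def linear_search(a, v):
    # Candidate invariant: at the start of each iteration,
    # v does not appear in a[0..i-1].
    for i in range(len(a)):
        if a[i] == v:
            return i
    return None  # plays the role of NIL

print(linear_search([31, 41, 59, 26], 59))  # 2
print(linear_search([31, 41, 59, 26], 7))   # None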
2.2 Analyzing algorithms

Analyzing an algorithm has come to mean predicting the resources that the algorithm requires. Occasionally, resources such as memory, communication bandwidth, or computer hardware are of primary concern, but most often it is computational time that we want to measure. Generally, by analyzing several candidate algorithms for a problem, we can identify a most efficient one. Such analysis may indicate more than one viable candidate, but we can often discard several inferior algorithms in the process.

Before we can analyze an algorithm, we must have a model of the implementation technology that we will use, including a model for the resources of that technology and their costs. For most of this book, we shall assume a generic one-processor, random-access machine (RAM) model of computation as our implementation technology and understand that our algorithms will be implemented as computer programs. In the RAM model, instructions are executed one after another, with no concurrent operations.

Strictly speaking, we should precisely define the instructions of the RAM model and their costs. To do so, however, would be tedious and would yield little insight into algorithm design and analysis. Yet we must be careful not to abuse the RAM model. For example, what if a RAM had an instruction that sorts? Then we could sort in just one instruction. Such a RAM would be unrealistic, since real computers do not have such instructions. Our guide, therefore, is how real computers are designed. The RAM model contains instructions commonly found in real computers: arithmetic (such as add, subtract, multiply, divide, remainder, floor, ceiling), data movement (load, store, copy), and control (conditional and unconditional branch, subroutine call and return). Each such instruction takes a constant amount of time.

The data types in the RAM model are integer and floating point (for storing real numbers). Although we typically do not concern ourselves with precision in this book, in some applications precision is crucial. We also assume a limit on the size of each word of data. For example, when working with inputs of size n, we typically assume that integers are represented by c lg n bits for some constant c ≥ 1. We require c ≥ 1 so that each word can hold the value of n, enabling us to index the individual input elements, and we restrict c to be a constant so that the word size does not grow arbitrarily. (If the word size could grow arbitrarily, we could store huge amounts of data in one word and operate on it all in constant time, clearly an unrealistic scenario.)

Real computers contain instructions not listed above, and such instructions represent a gray area in the RAM model. For example, is exponentiation a constant-time instruction? In the general case, no; it takes several instructions to compute x^y when x and y are real numbers. In restricted situations, however, exponentiation is a constant-time operation. Many computers have a "shift left" instruction, which in constant time shifts the bits of an integer by k positions to the left. In most computers, shifting the bits of an integer by one position to the left is equivalent to multiplication by 2, so that shifting the bits by k positions to the left is equivalent to multiplication by 2^k. Therefore, such computers can compute 2^k in one constant-time instruction by shifting the integer 1 by k positions to the left, as long as k is no more than the number of bits in a computer word. We will endeavor to avoid such gray areas in the RAM model, but we will treat computation of 2^k as a constant-time operation when k is a small enough positive integer.
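The shift trick is easy to observe from a high-level language. In this Python sketch the equivalence holds for any k, since Python integers are arbitrary precision; on real hardware it holds only while k stays within the word size, as the text notes:

for k in range(8):
    # Shifting 1 left by k positions multiplies it by 2, k times over.
    print(k, 1 << k, 2**k)  # the last two columns always agree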
In the RAM model, we do not attempt to model the memory hierarchy that is common in contemporary computers. That is, we do not model caches or virtual memory. Several computational models attempt to account for memory-hierarchy effects, which are sometimes significant in real programs on real machines. A handful of problems in this book examine memory-hierarchy effects, but for the most part, the analyses in this book will not consider them. Models that include the memory hierarchy are quite a bit more complex than the RAM model, and so they can be difficult to work with. Moreover, RAM-model analyses are usually excellent predictors of performance on actual machines.

Analyzing even a simple algorithm in the RAM model can be a challenge. The mathematical tools required may include combinatorics, probability theory, algebraic dexterity, and the ability to identify the most significant terms in a formula. Because the behavior of an algorithm may be different for each possible input, we need a means for summarizing that behavior in simple, easily understood formulas.

Even though we typically select only one machine model to analyze a given algorithm, we still face many choices in deciding how to express our analysis. We would like a way that is simple to write and manipulate, shows the important characteristics of an algorithm's resource requirements, and suppresses tedious details.

Analysis of insertion sort

The time taken by the INSERTION-SORT procedure depends on the input: sorting a thousand numbers takes longer than sorting three numbers. Moreover, INSERTION-SORT can take different amounts of time to sort two input sequences of the same size depending on how nearly sorted they already are. In general, the time taken by an algorithm grows with the size of the input, so it is traditional to describe the running time of a program as a function of the size of its input. To do so, we need to define the terms "running time" and "size of input" more carefully.

The best notion for input size depends on the problem being studied. For many problems, such as sorting or computing discrete Fourier transforms, the most natural measure is the number of items in the input, for example, the array size n for sorting. For many other problems, such as multiplying two integers, the best measure of input size is the total number of bits needed to represent the input in ordinary binary notation. Sometimes, it is more appropriate to describe the size of the input with two numbers rather than one. For instance, if the input to an algorithm is a graph, the input size can be described by the numbers of vertices and edges in the graph. We shall indicate which input size measure is being used with each problem we study.

The running time of an algorithm on a particular input is the number of primitive operations or "steps" executed. It is convenient to define the notion of step so that it is as machine-independent as possible. For the moment, let us adopt the following view. A constant amount of time is required to execute each line of our pseudocode. One line may take a different amount of time than another line, but we shall assume that each execution of the ith line takes time ci, where ci is a constant. This viewpoint is in keeping with the RAM model, and it also reflects how the pseudocode would be implemented on most actual computers. (There are some subtleties here. Computational steps that we specify in English are often variants of a procedure that requires more than just a constant amount of time. For example, later in this book we might say "sort the points by x-coordinate," which, as we shall see, takes more than a constant amount of time. Also, note that a statement that calls a subroutine takes constant time, though the subroutine, once invoked, may take more. That is, we separate the process of calling the subroutine, passing parameters to it and so on, from the process of executing the subroutine.)

In the following discussion, our expression for the running time of INSERTION-SORT will evolve from a messy formula that uses all the statement costs ci to a much simpler notation that is more concise and more easily manipulated. This simpler notation will also make it easy to determine whether one algorithm is more efficient than another.

We start by presenting the INSERTION-SORT procedure with the time "cost" of each statement and the number of times each statement is executed. For each j = 2, 3, ..., n, where n = A.length, we let tj denote the number of times the while loop test in line 5 is executed for that value of j. When a for or while loop exits in the usual way (i.e., due to the test in the loop header), the test is executed one time more than the loop body. We assume that comments are not executable statements, and so they take no time.
INSERTION-SORT(A)                                      cost   times
1  for j = 2 to A.length                               c1     n
2      key = A[j]                                      c2     n - 1
3      // Insert A[j] into the sorted
       //     sequence A[1..j-1].                      0      n - 1
4      i = j - 1                                       c4     n - 1
5      while i > 0 and A[i] > key                      c5     Σ (j=2 to n) tj
6          A[i+1] = A[i]                               c6     Σ (j=2 to n) (tj - 1)
7          i = i - 1                                   c7     Σ (j=2 to n) (tj - 1)
8      A[i+1] = key                                    c8     n - 1

The running time of the algorithm is the sum of running times for each statement executed; a statement that takes ci steps to execute and executes n times will contribute ci · n to the total running time. (This characteristic does not necessarily hold for a resource such as memory. A statement that references m words of memory and is executed n times does not necessarily reference mn distinct words of memory.) To compute T(n), the running time of INSERTION-SORT on an input of n values, we sum the products of the cost and times columns, obtaining

    T(n) = c1·n + c2(n - 1) + c4(n - 1) + c5 Σ (j=2 to n) tj
           + c6 Σ (j=2 to n) (tj - 1) + c7 Σ (j=2 to n) (tj - 1) + c8(n - 1).

Even for inputs of a given size, an algorithm's running time may depend on which input of that size is given. For example, in INSERTION-SORT, the best case occurs if the array is already sorted. For each j = 2, 3, ..., n, we then find that A[i] ≤ key in line 5 when i has its initial value of j - 1. Thus tj = 1 for j = 2, 3, ..., n, and the best-case running time is

    T(n) = c1·n + c2(n - 1) + c4(n - 1) + c5(n - 1) + c8(n - 1)
         = (c1 + c2 + c4 + c5 + c8)·n - (c2 + c4 + c5 + c8).

We can express this running time as an + b for constants a and b that depend on the statement costs ci; it is thus a linear function of n.

If the array is in reverse sorted order, that is, in decreasing order, the worst case results. We must compare each element A[j] with each element in the entire sorted subarray A[1..j-1], and so tj = j for j = 2, 3, ..., n. Noting that

    Σ (j=2 to n) j = n(n + 1)/2 - 1   and   Σ (j=2 to n) (j - 1) = n(n - 1)/2

(see Appendix A for a review of how to solve these summations), we find that in the worst case, the running time of INSERTION-SORT is

    T(n) = c1·n + c2(n - 1) + c4(n - 1) + c5(n(n + 1)/2 - 1)
           + c6(n(n - 1)/2) + c7(n(n - 1)/2) + c8(n - 1)
         = (c5/2 + c6/2 + c7/2)·n^2 + (c1 + c2 + c4 + c5/2 - c6/2 - c7/2 + c8)·n
           - (c2 + c4 + c5 + c8).

We can express this worst-case running time as a·n^2 + b·n + c for constants a, b, and c that again depend on the statement costs ci; it is thus a quadratic function of n.
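A quick experiment makes the gap between the best and worst cases visible. The sketch below counts executions of the line-5 test (the quantity Σ tj) for a sorted and a reverse-sorted input; the counts match the formulas tj = 1 and tj = j:

def count_line5_tests(a):
    tests = 0
    for j in range(1, len(a)):
        key = a[j]
        i = j - 1
        tests += 1                        # the test that may start the while loop
        while i >= 0 and a[i] > key:
            a[i + 1] = a[i]
            i -= 1
            tests += 1                    # each re-test after a shift
        a[i + 1] = key
    return tests

n = 100
print(count_line5_tests(list(range(n))))         # sorted: n - 1 = 99
print(count_line5_tests(list(range(n, 0, -1))))  # reversed: n(n+1)/2 - 1 = 5049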
Typically, as in insertion sort, the running time of an algorithm is fixed for a given input, although in later chapters we shall see some interesting "randomized" algorithms whose behavior can vary even for a fixed input.

Worst-case and average-case analysis

In our analysis of insertion sort, we looked at both the best case, in which the input array was already sorted, and the worst case, in which the input array was reverse sorted. For the remainder of this book, though, we shall usually concentrate on finding only the worst-case running time, that is, the longest running time for any input of size n. We give three reasons for this orientation.

- The worst-case running time of an algorithm gives us an upper bound on the running time for any input. Knowing it provides a guarantee that the algorithm will never take any longer. We need not make some educated guess about the running time and hope that it never gets much worse.

- For some algorithms, the worst case occurs fairly often. For example, in searching a database for a particular piece of information, the searching algorithm's worst case will often occur when the information is not present in the database. In some applications, searches for absent information may be frequent.

- The "average case" is often roughly as bad as the worst case. Suppose that we randomly choose n numbers and apply insertion sort. How long does it take to determine where in subarray A[1..j-1] to insert element A[j]? On average, half the elements in A[1..j-1] are less than A[j], and half the elements are greater. On average, therefore, we check half of the subarray A[1..j-1], and so tj is about j/2. The resulting average-case running time turns out to be a quadratic function of the input size, just like the worst-case running time.

In some particular cases, we shall be interested in the average-case running time of an algorithm; we shall see the technique of probabilistic analysis applied to various algorithms throughout this book. The scope of average-case analysis is limited, because it may not be apparent what constitutes an "average" input for a particular problem. Often, we shall assume that all inputs of a given size are equally likely. In practice, this assumption may be violated, but we can sometimes use a randomized algorithm, which makes random choices, to allow a probabilistic analysis and yield an expected running time. We explore randomized algorithms more in Chapter 5 and in several other subsequent chapters.

Order of growth

We used some simplifying abstractions to ease our analysis of the INSERTION-SORT procedure. First, we ignored the actual cost of each statement, using the constants ci to represent these costs. Then, we observed that even these constants give us more detail than we really need: we expressed the worst-case running time as a·n^2 + b·n + c for some constants a, b, and c that depend on the statement costs ci. We thus ignored not only the actual statement costs, but also the abstract costs ci.

We shall now make one more simplifying abstraction: it is the rate of growth, or order of growth, of the running time that really interests us. We therefore consider only the leading term of a formula (e.g., a·n^2), since the lower-order terms are relatively insignificant for large values of n. We also ignore the leading term's constant coefficient, since constant factors are less significant than the rate of growth in determining computational efficiency for large inputs. For insertion sort, when we ignore the lower-order terms and the leading term's constant coefficient, we are left with the factor of n^2 from the leading term. We write that insertion sort has a worst-case running time of Θ(n^2) (pronounced "theta of n-squared"). We shall use Θ-notation informally in this chapter, and we will define it precisely in Chapter 3.
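To see why only the leading term matters, one can tabulate the quadratic against n^2 for made-up coefficients (a = 2, b = 100, c = 50 are arbitrary here):

a, b, c = 2, 100, 50  # hypothetical coefficients of a*n^2 + b*n + c
for n in (10, 100, 1000, 10000):
    full = a * n**2 + b * n + c
    # The ratio approaches the constant a as n grows: the n^2 term dominates,
    # and the coefficient itself is the part Θ-notation then discards.
    print(n, full, full / n**2)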
We usually consider one algorithm to be more efficient than another if its worst-case running time has a lower order of growth. Due to constant factors and lower-order terms, an algorithm whose running time has a higher order of growth might take less time for small inputs than an algorithm whose running time has a lower order of growth. But for large enough inputs, a Θ(n^2) algorithm, for example, will run more quickly in the worst case than a Θ(n^3) algorithm.

Exercises

2.2-1 Express the function n^3/1000 - 100n^2 - 100n + 3 in terms of Θ-notation.

2.2-2 Consider sorting n numbers stored in array A by first finding the smallest element of A and exchanging it with the element in A[1]. Then find the second smallest element of A, and exchange it with A[2]. Continue in this manner for the first n - 1 elements of A. Write pseudocode for this algorithm, which is known as selection sort. What loop invariant does this algorithm maintain? Why does it need to run for only the first n - 1 elements, rather than for all n elements? Give the best-case and worst-case running times of selection sort in Θ-notation.

2.2-3 Consider linear search again (see Exercise 2.1-3). How many elements of the input sequence need to be checked on the average, assuming that the element being searched for is equally likely to be any element in the array? How about in the worst case? What are the average-case and worst-case running times of linear search in Θ-notation? Justify your answers.

2.2-4 How can we modify almost any algorithm to have a good best-case running time?

2.3 Designing algorithms

We can choose from a wide range of algorithm design techniques. For insertion sort, we used an incremental approach: having sorted the subarray A[1..j-1], we inserted the single element A[j] into its proper place, yielding the sorted subarray A[1..j].

In this section, we examine an alternative design approach, known as "divide-and-conquer," which we shall explore in more detail in Chapter 4. We'll use divide-and-conquer to design a sorting algorithm whose worst-case running time is much less than that of insertion sort. One advantage of divide-and-conquer algorithms is that their running times are often easily determined using techniques that we will see in Chapter...