k− 1, the kth smallest element in the entire array can be found as the kth smallest element in the left part of the partitioned array. And if s

Copyright © 2012, 2007, 2003 Pearson Education, Inc., publishing as Addison-Wesley. All rights reserved. Printed in the United States of America.
Library of Congress Cataloging-in-Publication Data
Levitin, Anany. Introduction to the design & analysis of algorithms / Anany Levitin. — 3rd ed.
Includes bibliographical references and index.
ISBN-13: 978-0-13-231681-1
ISBN-10: 0-13-231681-1
1. Computer algorithms. I. Title. II. Title: Introduction to the design and analysis of algorithms.
QA76.9.A43L48 2012 005.1—dc23 2011027089

Contents

New to the Third Edition
Preface

1 Introduction
1.1 What Is an Algorithm?
1.2 Fundamentals of Algorithmic Problem Solving (Understanding the Problem; Ascertaining the Capabilities of the Computational Device; Choosing between Exact and Approximate Problem Solving; Algorithm Design Techniques; Designing an Algorithm and Data Structures; Methods of Specifying an Algorithm; Proving an Algorithm's Correctness; Analyzing an Algorithm; Coding an Algorithm)
1.3 Important Problem Types (Sorting; Searching; String Processing; Graph Problems; Combinatorial Problems; Geometric Problems; Numerical Problems)
1.4 Fundamental Data Structures (Linear Data Structures; Graphs; Trees; Sets and Dictionaries)

2 Fundamentals of the Analysis of Algorithm Efficiency
2.1 The Analysis Framework (Measuring an Input's Size; Units for Measuring Running Time; Orders of Growth; Worst-Case, Best-Case, and Average-Case Efficiencies; Recapitulation of the Analysis Framework)
2.2 Asymptotic Notations and Basic Efficiency Classes (Informal Introduction; O-notation; Ω-notation; Θ-notation; Useful Property Involving the Asymptotic Notations; Using Limits for Comparing Orders of Growth; Basic Efficiency Classes)
2.3 Mathematical Analysis of Nonrecursive Algorithms
2.4 Mathematical Analysis of Recursive Algorithms
2.5 Example: Computing the nth Fibonacci Number
2.6 Empirical Analysis of Algorithms
2.7 Algorithm Visualization

3 Brute Force and Exhaustive Search
3.1 Selection Sort and Bubble Sort
3.2 Sequential Search and Brute-Force String Matching
3.3 Closest-Pair and Convex-Hull Problems by Brute Force
3.4 Exhaustive Search (Traveling Salesman Problem; Knapsack Problem; Assignment Problem)
3.5 Depth-First Search and Breadth-First Search

4 Decrease-and-Conquer
4.1 Insertion Sort
4.2 Topological Sorting
4.3 Algorithms for Generating Combinatorial Objects (Generating Permutations; Generating Subsets)
4.4 Decrease-by-a-Constant-Factor Algorithms (Binary Search; Fake-Coin Problem; Russian Peasant Multiplication; Josephus Problem)
4.5 Variable-Size-Decrease Algorithms (Computing a Median and the Selection Problem; Interpolation Search; Searching and Insertion in a Binary Search Tree; The Game of Nim)

5 Divide-and-Conquer
5.1 Mergesort
5.2 Quicksort
5.3 Binary Tree Traversals and Related Properties
5.4 Multiplication of Large Integers and Strassen's Matrix Multiplication
5.5 The Closest-Pair and Convex-Hull Problems by Divide-and-Conquer

6 Transform-and-Conquer
6.1 Presorting
6.2 Gaussian Elimination (LU Decomposition; Computing a Matrix Inverse; Computing a Determinant)
6.3 Balanced Search Trees (AVL Trees; 2-3 Trees)
6.4 Heaps and Heapsort
6.5 Horner's Rule and Binary Exponentiation
6.6 Problem Reduction (Computing the Least Common Multiple; Counting Paths in a Graph; Reduction of Optimization Problems; Linear Programming; Reduction to Graph Problems)

7 Space and Time Trade-Offs
7.1 Sorting by Counting
7.2 Input Enhancement in String Matching (Horspool's Algorithm; Boyer-Moore Algorithm)
7.3 Hashing (Open Hashing (Separate Chaining); Closed Hashing (Open Addressing))
7.4 B-Trees

8 Dynamic Programming
8.1 Three Basic Examples
8.2 The Knapsack Problem and Memory Functions
8.3 Optimal Binary Search Trees
8.4 Warshall's and Floyd's Algorithms (Warshall's Algorithm; Floyd's Algorithm for the All-Pairs Shortest-Paths Problem)

9 Greedy Technique
9.1 Prim's Algorithm
9.2 Kruskal's Algorithm (Disjoint Subsets and Union-Find Algorithms)
9.3 Dijkstra's Algorithm
9.4 Huffman Trees and Codes

10 Iterative Improvement
10.1 The Simplex Method (Geometric Interpretation of Linear Programming; An Outline of the Simplex Method; Further Notes on the Simplex Method)
10.2 The Maximum-Flow Problem
10.3 Maximum Matching in Bipartite Graphs
10.4 The Stable Marriage Problem

11 Limitations of Algorithm Power
11.1 Lower-Bound Arguments (Trivial Lower Bounds; Information-Theoretic Arguments; Adversary Arguments; Problem Reduction)
11.2 Decision Trees (Decision Trees for Sorting; Decision Trees for Searching a Sorted Array)
11.3 P, NP, and NP-Complete Problems (P and NP Problems; NP-Complete Problems)
11.4 Challenges of Numerical Algorithms

12 Coping with the Limitations of Algorithm Power
12.1 Backtracking (n-Queens Problem; Hamiltonian Circuit Problem; Subset-Sum Problem; General Remarks)
12.2 Branch-and-Bound (Assignment Problem; Knapsack Problem; Traveling Salesman Problem)
12.3 Approximation Algorithms for NP-Hard Problems (Traveling Salesman Problem; Knapsack Problem)
12.4 Algorithms for Solving Nonlinear Equations (Bisection Method; Method of False Position; Newton's Method)

Epilogue
Appendix A Useful Formulas for the Analysis of Algorithms (Properties of Logarithms; Combinatorics; Important Summation Formulas; Sum Manipulation Rules; Approximation of a Sum by a Definite Integral; Floor and Ceiling Formulas; Miscellaneous)
Appendix B Short Tutorial on Recurrence Relations (Sequences and Recurrence Relations; Methods for Solving Recurrence Relations; Common Recurrence Types in Algorithm Analysis)
References
Hints to Exercises
Index

New to the Third Edition

Reordering of chapters to introduce decrease-and-conquer before divide-and-conquer

Restructuring of Chapter 8 on dynamic programming, including all-new introductory material and new exercises focusing on well-known applications

More coverage of the applications of the algorithms discussed

Reordering of select sections throughout the book to achieve a better alignment of specific algorithms and general algorithm design techniques

Addition of the Lomuto partition and Gray code algorithms

Seventy new problems added to the end-of-chapter exercises, including algorithmic puzzles and questions asked during job interviews

Preface

The most valuable acquisitions in a scientific or
technical education are the general-purpose mental tools which remain serviceable for a lifetime.
—George Forsythe, "What to do till the computer scientist comes." (1968)

Algorithms play a central role in both the science and the practice of computing. Recognition of this fact has led to the appearance of a considerable number of textbooks on the subject. By and large, they follow one of two alternatives in presenting algorithms. One classifies algorithms according to a problem type. Such a book would have separate chapters on algorithms for sorting, searching, graphs, and so on. The advantage of this approach is that it allows an immediate comparison of, say, the efficiency of different algorithms for the same problem. The drawback of this approach is that it emphasizes problem types at the expense of algorithm design techniques.

The second alternative organizes the presentation around algorithm design techniques. In this organization, algorithms from different areas of computing are grouped together if they have the same design approach. I share the belief of many (e.g., [BaY95]) that this organization is more appropriate for a basic course on the design and analysis of algorithms. There are three principal reasons for the emphasis on algorithm design techniques. First, these techniques provide a student with tools for designing algorithms for new problems. This makes learning algorithm design techniques a very valuable endeavor from a practical standpoint. Second, they seek to classify the multitude of known algorithms according to an underlying design idea. Learning to see such commonality among algorithms from different application areas should be a major goal of computer science education. After all, every science considers classification of its principal subject a major, if not the central, point of its discipline. Third, in my opinion, algorithm design techniques have utility as general problem-solving strategies, applicable to problems beyond computing.
Unfortunately, the traditional classification of algorithm design techniques has several serious shortcomings, from both theoretical and educational points of view. The most significant of these shortcomings is the failure to classify many important algorithms. This limitation has forced the authors of other textbooks to depart from the design technique organization and to include chapters dealing with specific problem types. Such a switch leads to a loss of course coherence and almost unavoidably creates a confusion in students' minds.

New taxonomy of algorithm design techniques

My frustration with the shortcomings of the traditional classification of algorithm design techniques has motivated me to develop a new taxonomy of them [Lev99], which is the basis of this book. Here are the principal advantages of the new taxonomy:

The new taxonomy is more comprehensive than the traditional one. It includes several strategies—brute force, decrease-and-conquer, transform-and-conquer, space and time trade-offs, and iterative improvement—that are rarely if ever recognized as important design paradigms.

The new taxonomy covers naturally many classic algorithms (Euclid's algorithm, heapsort, search trees, hashing, topological sorting, Gaussian elimination, Horner's rule—to name a few) that the traditional taxonomy cannot classify. As a result, the new taxonomy makes it possible to present the standard body of classic algorithms in a unified and coherent fashion.

It naturally accommodates the existence of important varieties of several design techniques. For example, it recognizes three variations of decrease-and-conquer and three variations of transform-and-conquer.

It is better aligned with analytical methods for the efficiency analysis (see Appendix B).

Design techniques as general problem-solving strategies

Most applications of the design techniques in the book are to classic problems of computer science.
(The only innovation here is an inclusion of some material on numerical algorithms, which are covered within the same general framework.) But these design techniques can be considered general problem-solving tools, whose applications are not limited to traditional computing and mathematical problems. Two factors make this point particularly important. First, more and more computing applications go beyond the traditional domain, and there are reasons to believe that this trend will strengthen in the future. Second, developing students' problem-solving skills has come to be recognized as a major goal of college education. Among all the courses in a computer science curriculum, a course on the design and analysis of algorithms is uniquely suitable for this task because it can offer a student specific strategies for solving problems.

I am not proposing that a course on the design and analysis of algorithms should become a course on general problem solving. But I do believe that the unique opportunity provided by studying the design and analysis of algorithms should not be missed. Toward this goal, the book includes applications to puzzles and puzzle-like games. Although using puzzles in teaching algorithms is certainly not a new idea, the book tries to do this systematically by going well beyond a few standard examples.

Textbook pedagogy

My goal was to write a text that would not trivialize the subject but would still be readable by most students on their own. Here are some of the things done toward this objective.

Sharing the opinion of George Forsythe expressed in the epigraph, I have sought to stress major ideas underlying the design and analysis of algorithms. In choosing specific algorithms to illustrate these ideas, I limited the number of covered algorithms to those that demonstrate an underlying design technique or an analysis method most clearly. Fortunately, most classic algorithms satisfy this criterion.
In Chapter 2, which is devoted to efficiency analysis, the methods used for analyzing nonrecursive algorithms are separated from those typically used for analyzing recursive algorithms. The chapter also includes sections devoted to empirical analysis and algorithm visualization.

The narrative is systematically interrupted by questions to the reader. Some of them are asked rhetorically, in anticipation of a concern or doubt, and are answered immediately. The goal of the others is to prevent the reader from drifting through the text without a satisfactory level of comprehension.

Each chapter ends with a summary recapping the most important concepts and results discussed in the chapter.

The book contains over 600 exercises. Some of them are drills; others make important points about the material covered in the body of the text or introduce algorithms not covered there at all. A few exercises take advantage of Internet resources. More difficult problems—there are not many of them—are marked by special symbols in the Instructor's Manual. (Because marking problems as difficult may discourage some students from trying to tackle them, problems are not marked in the book itself.) Puzzles, games, and puzzle-like questions are marked in the exercises with a special icon.

The book provides hints to all the exercises. Detailed solutions, except for programming projects, are provided in the Instructor's Manual, available to qualified adopters through Pearson's Instructor Resource Center. (Please contact your local Pearson sales representative or go to www.pearsonhighered.com/irc to access this material.) Slides in PowerPoint are available to all readers of this book via anonymous ftp at the CS Support site: http://cssupport.pearsoncmg.com/.

Changes for the third edition

There are a few changes in the third edition. The most important is the new order of the chapters on decrease-and-conquer and divide-and-conquer.
There are several advantages in introducing decrease-and-conquer before divide-and-conquer:

Decrease-and-conquer is a simpler strategy than divide-and-conquer.

Decrease-and-conquer is applicable to more problems than divide-and-conquer.

The new order makes it possible to discuss insertion sort before mergesort and quicksort.

The idea of array partitioning is now introduced in conjunction with the selection problem. I took advantage of an opportunity to do this via the one-directional scan employed by Lomuto's algorithm, leaving the two-directional scan used by Hoare's partitioning to a later discussion in conjunction with quicksort. Binary search is now considered in the section devoted to decrease-by-a-constant-factor algorithms, where it belongs.

The second important change is the restructuring of Chapter 8 on dynamic programming. Specifically:

The introductory section is completely new. It contains three basic examples that provide a much better introduction to this important technique than computing a binomial coefficient, the example used in the first two editions. All the exercises for Section 8.1 are new as well; they include well-known applications not available in the previous editions.

I also changed the order of the other sections in this chapter to get a smoother progression from the simpler applications to the more advanced ones.

The other changes include the following. More applications of the algorithms discussed are included. The section on the graph-traversal algorithms is moved from the decrease-and-conquer chapter to the brute-force and exhaustive-search chapter, where it fits better, in my opinion. The Gray code algorithm is added to the section dealing with algorithms for generating combinatorial objects. The divide-and-conquer algorithm for the closest-pair problem is discussed in more detail. Updates include the section on algorithm visualization, approximation algorithms for the traveling salesman problem, and, of course, the bibliography.
I also added about 70 new problems to the exercises. Some of them are algorithmic puzzles and questions asked during job interviews.

Prerequisites

The book assumes that a reader has gone through an introductory programming course and a standard course on discrete structures. With such a background, he or she should be able to handle the book's material without undue difficulty. Still, fundamental data structures, necessary summation formulas, and recurrence relations are reviewed in Section 1.4, Appendix A, and Appendix B, respectively. Calculus is used in only three sections (Sections 2.2, 11.4, and 12.4), and to a very limited degree; if students lack calculus as an assured part of their background, the relevant portions of these three sections can be omitted without hindering their understanding of the rest of the material.

Use in the curriculum

The book can serve as a textbook for a basic course on design and analysis of algorithms organized around algorithm design techniques. It might contain slightly more material than can be covered in a typical one-semester course. By and large, portions of Chapters 3 through 12 can be skipped without the danger of making later parts of the book incomprehensible to the reader. Any portion of the book can be assigned for self-study. In particular, Sections 2.6 and 2.7 on empirical analysis and algorithm visualization, respectively, can be assigned in conjunction with projects.

Here is a possible plan for a one-semester course; it assumes a 40-class-meeting format.

Lecture  Topic                                                              Sections
1        Introduction                                                       1.1-1.3
2, 3     Analysis framework; O, Θ, Ω notations                              2.1, 2.2
4        Mathematical analysis of nonrecursive algorithms                   2.3
5, 6     Mathematical analysis of recursive algorithms                      2.4, 2.5 (+ App. B)
7        Brute-force algorithms                                             3.1, 3.2 (+ 3.3)
8        Exhaustive search                                                  3.4
9        Depth-first search and breadth-first search                        3.5
10, 11   Decrease-by-one: insertion sort, topological sorting               4.1, 4.2
12       Binary search and other decrease-by-a-constant-factor algorithms   4.4
13       Variable-size-decrease algorithms                                  4.5
14, 15   Divide-and-conquer: mergesort, quicksort                           5.1-5.2
16       Other divide-and-conquer examples                                  5.3 or 5.4 or 5.5
17-19    Instance simplification: presorting, Gaussian elimination,
         balanced search trees                                              6.1-6.3
20       Representation change: heaps and heapsort or Horner's rule
         and binary exponentiation                                          6.4 or 6.5
21       Problem reduction                                                  6.6
22-24    Space-time trade-offs: string matching, hashing, B-trees           7.2-7.4
25-27    Dynamic programming algorithms                                     3 from 8.1-8.4
28-30    Greedy algorithms: Prim's, Kruskal's, Dijkstra's, Huffman's        9.1-9.4
31-33    Iterative improvement algorithms                                   3 from 10.1-10.4
34       Lower-bound arguments                                              11.1
35       Decision trees                                                     11.2
36       P, NP, and NP-complete problems                                    11.3
37       Numerical algorithms                                               11.4 (+ 12.4)
38       Backtracking                                                       12.1
39       Branch-and-bound                                                   12.2
40       Approximation algorithms for NP-hard problems                      12.3

Acknowledgments

I would like to express my gratitude to the reviewers and many readers who have shared with me their opinions about the first two editions of the book and suggested improvements and corrections. The third edition has certainly benefited from the reviews by Andrew Harrington (Loyola University Chicago), David Levine (Saint Bonaventure University), Stefano Lombardi (UC Riverside), Daniel McKee (Mansfield University), Susan Brilliant (Virginia Commonwealth University), David Akers (University of Puget Sound), and two anonymous reviewers.

My thanks go to all the people at Pearson and their associates who worked on my book. I am especially grateful to my editor, Matt Goldstein; the editorial assistant, Chelsea Bell; the marketing manager, Yez Alayan; and the production supervisor, Kayla Smith-Tarbox.
I am also grateful to Richard Camp for copyediting the book, Paul Anagnostopoulos of Windfall Software and Jacqui Scarlott for its project management and typesetting, and MaryEllen Oliver for proofreading the book.

Finally, I am indebted to two members of my family. Living with a spouse writing a book is probably more trying than doing the actual writing. My wife, Maria, lived through several years of this, helping me any way she could. And help she did: over 400 figures in the book and the Instructor's Manual were created by her. My daughter Miriam has been my English prose guru over many years. She read large portions of the book and was instrumental in finding the chapter epigraphs.

Anany Levitin
anany.levitin@villanova.edu
June 2011

1 Introduction

Two ideas lie gleaming on the jeweler's velvet. The first is the calculus, the second, the algorithm. The calculus and the rich body of mathematical analysis to which it gave rise made modern science possible; but it has been the algorithm that has made possible the modern world.
—David Berlinski, The Advent of the Algorithm, 2000

Why do you need to study algorithms? If you are going to be a computer professional, there are both practical and theoretical reasons to study algorithms. From a practical standpoint, you have to know a standard set of important algorithms from different areas of computing; in addition, you should be able to design new algorithms and analyze their efficiency. From the theoretical standpoint, the study of algorithms, sometimes called algorithmics, has come to be recognized as the cornerstone of computer science. David Harel, in his delightful book pointedly titled Algorithmics: the Spirit of Computing, put it as follows:

Algorithmics is more than a branch of computer science. It is the core of computer science, and, in all fairness, can be said to be relevant to most of science, business, and technology. [Har92, p.
6]

But even if you are not a student in a computing-related program, there are compelling reasons to study algorithms. To put it bluntly, computer programs would not exist without algorithms. And with computer applications becoming indispensable in almost all aspects of our professional and personal lives, studying algorithms becomes a necessity for more and more people.

Another reason for studying algorithms is their usefulness in developing analytical skills. After all, algorithms can be seen as special kinds of solutions to problems—not just answers but precisely defined procedures for getting answers. Consequently, specific algorithm design techniques can be interpreted as problem-solving strategies that can be useful regardless of whether a computer is involved. Of course, the precision inherently imposed by algorithmic thinking limits the kinds of problems that can be solved with an algorithm. You will not find, for example, an algorithm for living a happy life or becoming rich and famous. On the other hand, this required precision has an important educational advantage. Donald Knuth, one of the most prominent computer scientists in the history of algorithmics, put it as follows:

A person well-trained in computer science knows how to deal with algorithms: how to construct them, manipulate them, understand them, analyze them. This knowledge is preparation for much more than writing good computer programs; it is a general-purpose mental tool that will be a definite aid to the understanding of other subjects, whether they be chemistry, linguistics, or music, etc. The reason for this may be understood in the following way: It has often been said that a person does not really understand something until after teaching it to someone else.
Actually, a person does not really understand something until after teaching it to a computer, i.e., expressing it as an algorithm. . . . An attempt to formalize things as algorithms leads to a much deeper understanding than if we simply try to comprehend things in the traditional way. [Knu96, p. 9]

We take up the notion of algorithm in Section 1.1. As examples, we use three algorithms for the same problem: computing the greatest common divisor. There are several reasons for this choice. First, it deals with a problem familiar to everybody from their middle-school days. Second, it makes the important point that the same problem can often be solved by several algorithms. Quite typically, these algorithms differ in their idea, level of sophistication, and efficiency. Third, one of these algorithms deserves to be introduced first, both because of its age—it appeared in Euclid's famous treatise more than two thousand years ago—and its enduring power and importance. Finally, investigation of these three algorithms leads to some general observations about several important properties of algorithms in general.

Section 1.2 deals with algorithmic problem solving. There we discuss several important issues related to the design and analysis of algorithms. The different aspects of algorithmic problem solving range from analysis of the problem and the means of expressing an algorithm to establishing its correctness and analyzing its efficiency. The section does not contain a magic recipe for designing an algorithm for an arbitrary problem. It is a well-established fact that such a recipe does not exist. Still, the material of Section 1.2 should be useful for organizing your work on designing and analyzing algorithms.

Section 1.3 is devoted to a few problem types that have proven to be particularly important to the study of algorithms and their application. In fact, there are textbooks (e.g., [Sed11]) organized around such problem types.
I hold the view—shared by many others—that an organization based on algorithm design techniques is superior. In any case, it is very important to be aware of the principal problem types. Not only are they the most commonly encountered problem types in real-life applications, they are used throughout the book to demonstrate particular algorithm design techniques.

Section 1.4 contains a review of fundamental data structures. It is meant to serve as a reference rather than a deliberate discussion of this topic. If you need a more detailed exposition, there is a wealth of good books on the subject, most of them tailored to a particular programming language.

1.1 What Is an Algorithm?

Although there is no universally agreed-on wording to describe this notion, there is general agreement about what the concept means:

An algorithm is a sequence of unambiguous instructions for solving a problem, i.e., for obtaining a required output for any legitimate input in a finite amount of time.

This definition can be illustrated by a simple diagram (Figure 1.1). The reference to "instructions" in the definition implies that there is something or someone capable of understanding and following the instructions given. We call this a "computer," keeping in mind that before the electronic computer was invented, the word "computer" meant a human being involved in performing numeric calculations. Nowadays, of course, "computers" are those ubiquitous electronic devices that have become indispensable in almost everything we do. Note, however, that although the majority of algorithms are indeed intended for eventual computer implementation, the notion of algorithm does not depend on such an assumption.

As examples illustrating the notion of the algorithm, we consider in this section three methods for solving the same problem: computing the greatest common divisor of two integers.
These examples will help us to illustrate several important points:

- The nonambiguity requirement for each step of an algorithm cannot be compromised.
- The range of inputs for which an algorithm works has to be specified carefully.
- The same algorithm can be represented in several different ways.
- There may exist several algorithms for solving the same problem.
- Algorithms for the same problem can be based on very different ideas and can solve the problem with dramatically different speeds.

FIGURE 1.1 The notion of the algorithm. [A diagram: a problem gives rise to an algorithm, which a "computer" executes to transform an input into an output.]

Recall that the greatest common divisor of two nonnegative, not-both-zero integers m and n, denoted gcd(m, n), is defined as the largest integer that divides both m and n evenly, i.e., with a remainder of zero. Euclid of Alexandria (third century B.C.) outlined an algorithm for solving this problem in one of the volumes of his Elements, most famous for its systematic exposition of geometry. In modern terms, Euclid's algorithm is based on applying repeatedly the equality

    gcd(m, n) = gcd(n, m mod n),

where m mod n is the remainder of the division of m by n, until m mod n is equal to 0. Since gcd(m, 0) = m (why?), the last value of m is also the greatest common divisor of the initial m and n.

For example, gcd(60, 24) can be computed as follows:

    gcd(60, 24) = gcd(24, 12) = gcd(12, 0) = 12.

(If you are not impressed by this algorithm, try finding the greatest common divisor of larger numbers, such as those in Problem 6 in this section's exercises.)

Here is a more structured description of this algorithm:

Euclid's algorithm for computing gcd(m, n)
Step 1  If n = 0, return the value of m as the answer and stop; otherwise, proceed to Step 2.
Step 2  Divide m by n and assign the value of the remainder to r.
Step 3  Assign the value of n to m and the value of r to n. Go to Step 1.
Alternatively, we can express the same algorithm in pseudocode:

ALGORITHM Euclid(m, n)
    //Computes gcd(m, n) by Euclid's algorithm
    //Input: Two nonnegative, not-both-zero integers m and n
    //Output: Greatest common divisor of m and n
    while n ≠ 0 do
        r ← m mod n
        m ← n
        n ← r
    return m

How do we know that Euclid's algorithm eventually comes to a stop? This follows from the observation that the second integer of the pair gets smaller with each iteration and it cannot become negative. Indeed, the new value of n on the next iteration is m mod n, which is always smaller than n (why?). Hence, the value of the second integer eventually becomes 0, and the algorithm stops.

Just as with many other problems, there are several algorithms for computing the greatest common divisor. Let us look at the other two methods for this problem. The first is simply based on the definition of the greatest common divisor of m and n as the largest integer that divides both numbers evenly. Obviously, such a common divisor cannot be greater than the smaller of these numbers, which we will denote by t = min{m, n}. So we can start by checking whether t divides both m and n: if it does, t is the answer; if it does not, we simply decrease t by 1 and try again. (How do we know that the process will eventually stop?) For example, for numbers 60 and 24, the algorithm will try first 24, then 23, and so on, until it reaches 12, where it stops.

Consecutive integer checking algorithm for computing gcd(m, n)
Step 1  Assign the value of min{m, n} to t.
Step 2  Divide m by t. If the remainder of this division is 0, go to Step 3; otherwise, go to Step 4.
Step 3  Divide n by t. If the remainder of this division is 0, return the value of t as the answer and stop; otherwise, proceed to Step 4.
Step 4  Decrease the value of t by 1. Go to Step 2.

Note that unlike Euclid's algorithm, this algorithm, in the form presented, does not work correctly when one of its input numbers is zero.
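Both methods above can be sketched in a few lines of Python (a hedged translation; the function names are mine, not the book's):

```python
def euclid_gcd(m, n):
    """Euclid's algorithm: repeatedly replace (m, n) by (n, m mod n)."""
    while n != 0:
        m, n = n, m % n
    return m  # gcd(m, 0) = m


def consecutive_int_gcd(m, n):
    """Consecutive integer checking; assumes m and n are positive,
    since, as noted above, the method fails if an input is zero."""
    t = min(m, n)
    while m % t != 0 or n % t != 0:
        t -= 1  # candidate fails; decrease it and try again
    return t
```

For example, euclid_gcd(60, 24) and consecutive_int_gcd(60, 24) both return 12, the former after two divisions, the latter only after trying every candidate from 24 down to 12.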
This example illustrates why it is so important to specify the set of an algorithm's inputs explicitly and carefully.

The third procedure for finding the greatest common divisor should be familiar to you from middle school.

Middle-school procedure for computing gcd(m, n)
Step 1  Find the prime factors of m.
Step 2  Find the prime factors of n.
Step 3  Identify all the common factors in the two prime expansions found in Step 1 and Step 2. (If p is a common factor occurring p_m and p_n times in m and n, respectively, it should be repeated min{p_m, p_n} times.)
Step 4  Compute the product of all the common factors and return it as the greatest common divisor of the numbers given.

Thus, for the numbers 60 and 24, we get

    60 = 2 · 2 · 3 · 5
    24 = 2 · 2 · 2 · 3
    gcd(60, 24) = 2 · 2 · 3 = 12.

Nostalgia for the days when we learned this method should not prevent us from noting that the last procedure is much more complex and slower than Euclid's algorithm. (We will discuss methods for finding and comparing running times of algorithms in the next chapter.) In addition to inferior efficiency, the middle-school procedure does not qualify, in the form presented, as a legitimate algorithm. Why? Because the prime factorization steps are not defined unambiguously: they require a list of prime numbers, and I strongly suspect that your middle-school math teacher did not explain how to obtain such a list. This is not a matter of unnecessary nitpicking. Unless this issue is resolved, we cannot, say, write a program implementing this procedure. Incidentally, Step 3 is also not defined clearly enough. Its ambiguity is much easier to rectify than that of the factorization steps, however. How would you find common elements in two sorted lists?

So, let us introduce a simple algorithm for generating consecutive primes not exceeding any given integer n > 1. It was probably invented in ancient Greece and is known as the sieve of Eratosthenes (ca. 200 B.C.).
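Before turning to the sieve, the middle-school procedure itself can be sketched in Python, using trial division as one possible way to supply the prime-factorization step that the procedure leaves unspecified (a hedged sketch; the function names are mine):

```python
from collections import Counter


def prime_factors(n):
    """Trial-division factorization: the multiset of prime factors of n."""
    factors = Counter()
    d = 2
    while d * d <= n:
        while n % d == 0:   # divide out each prime factor completely
            factors[d] += 1
            n //= d
        d += 1
    if n > 1:               # whatever remains is itself prime
        factors[n] += 1
    return factors


def middle_school_gcd(m, n):
    """Product of the common prime factors, each taken min(p_m, p_n) times."""
    fm, fn = prime_factors(m), prime_factors(n)
    result = 1
    for p in fm.keys() & fn.keys():   # Step 3: factors common to both expansions
        result *= p ** min(fm[p], fn[p])
    return result
```

For 60 and 24 this reproduces the computation above: prime_factors(60) gives {2: 2, 3: 1, 5: 1}, prime_factors(24) gives {2: 3, 3: 1}, and middle_school_gcd(60, 24) returns 2 · 2 · 3 = 12.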
The algorithm starts by initializing a list of prime candidates with consecutive integers from 2 to n. Then, on its first iteration, the algorithm eliminates from the list all multiples of 2, i.e., 4, 6, and so on. Then it moves to the next item on the list, which is 3, and eliminates its multiples. (In this straightforward version, there is an overhead because some numbers, such as 6, are eliminated more than once.) No pass for number 4 is needed: since 4 itself and all its multiples are also multiples of 2, they were already eliminated on a previous pass. The next remaining number on the list, which is used on the third pass, is 5. The algorithm continues in this fashion until no more numbers can be eliminated from the list. The remaining integers of the list are the primes needed.

As an example, consider the application of the algorithm to finding the list of primes not exceeding n = 25:

    2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25
    2  3     5     7     9    11    13    15    17    19    21    23    25
    2  3     5     7          11    13          17    19          23    25
    2  3     5     7          11    13          17    19          23

For this example, no more passes are needed because they would eliminate numbers already eliminated on previous iterations of the algorithm. The remaining numbers on the list are the consecutive primes less than or equal to 25.

What is the largest number p whose multiples can still remain on the list to make further iterations of the algorithm necessary? Before we answer this question, let us first note that if p is a number whose multiples are being eliminated on the current pass, then the first multiple we should consider is p · p because all its smaller multiples 2p, ..., (p − 1)p have been eliminated on earlier passes through the list. This observation helps to avoid eliminating the same number more than once. Obviously, p · p should not be greater than n, and therefore p cannot exceed √n rounded down (denoted ⌊√n⌋ using the so-called floor function).
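The elimination process just described can be sketched directly in Python (a hedged translation; the function name is mine, and 0 is used as the "eliminated" marker):

```python
import math


def sieve(n):
    """Sieve of Eratosthenes: all primes <= n, for an integer n > 1."""
    A = list(range(n + 1))                 # A[p] = p for the candidates 2..n
    for p in range(2, math.isqrt(n) + 1):  # p need not exceed floor(sqrt(n))
        if A[p] != 0:                      # p survived all earlier passes
            j = p * p                      # smaller multiples are already gone
            while j <= n:
                A[j] = 0                   # eliminate this multiple of p
                j += p
    return [A[p] for p in range(2, n + 1) if A[p] != 0]
```

Calling sieve(25) yields [2, 3, 5, 7, 11, 13, 17, 19, 23], matching the passes traced above.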
We assume in the following pseudocode that there is a function available for computing ⌊√n⌋; alternatively, we could check the inequality p · p ≤ n as the loop continuation condition there.

ALGORITHM Sieve(n)
    //Implements the sieve of Eratosthenes
    //Input: A positive integer n > 1
    //Output: Array L of all prime numbers less than or equal to n
    for p ← 2 to n do A[p] ← p
    for p ← 2 to ⌊√n⌋ do    //see note before pseudocode
        if A[p] ≠ 0    //p hasn't been eliminated on previous passes
            j ← p ∗ p
            while j ≤ n do
                A[j] ← 0    //mark element as eliminated
                j ← j + p
    //copy the remaining elements of A to array L of the primes
    i ← 0
    for p ← 2 to n do
        if A[p] ≠ 0
            L[i] ← A[p]
            i ← i + 1
    return L

So now we can incorporate the sieve of Eratosthenes into the middle-school procedure to get a legitimate algorithm for computing the greatest common divisor of two positive integers. Note that special care needs to be exercised if one or both input numbers are equal to 1: because mathematicians do not consider 1 to be a prime number, strictly speaking, the method does not work for such inputs.

Before we leave this section, one more comment is in order. The examples considered in this section notwithstanding, the majority of algorithms in use today—even those that are implemented as computer programs—do not deal with mathematical problems. Look around for algorithms helping us through our daily routines, both professional and personal. May this ubiquity of algorithms in today's world strengthen your resolve to learn more about these fascinating engines of the information age.

Exercises 1.1

1. Do some research on al-Khorezmi (also al-Khwarizmi), the man from whose name the word "algorithm" is derived. In particular, you should learn what the origins of the words "algorithm" and "algebra" have in common.

2. Given that the official purpose of the U.S. patent system is the promotion of the "useful arts," do you think algorithms are patentable in this country? Should they be?
3. a. Write down driving directions for going from your school to your home with the precision required from an algorithm's description.
   b. Write down a recipe for cooking your favorite dish with the precision required by an algorithm.

4. Design an algorithm for computing ⌊√n⌋ for any positive integer n. Besides assignment and comparison, your algorithm may only use the four basic arithmetical operations.

5. Design an algorithm to find all the common elements in two sorted lists of numbers. For example, for the lists 2, 5, 5, 5 and 2, 2, 3, 5, 5, 7, the output should be 2, 5, 5. What is the maximum number of comparisons your algorithm makes if the lengths of the two given lists are m and n, respectively?

6. a. Find gcd(31415, 14142) by applying Euclid's algorithm.
   b. Estimate how many times faster it will be to find gcd(31415, 14142) by Euclid's algorithm compared with the algorithm based on checking consecutive integers from min{m, n} down to gcd(m, n).

7. Prove the equality gcd(m, n) = gcd(n, m mod n) for every pair of positive integers m and n.

8. What does Euclid's algorithm do for a pair of integers in which the first is smaller than the second? What is the maximum number of times this can happen during the algorithm's execution on such an input?

9. a. What is the minimum number of divisions made by Euclid's algorithm among all inputs 1 ≤ m, n ≤ 10?
   b. What is the maximum number of divisions made by Euclid's algorithm among all inputs 1 ≤ m, n ≤ 10?

10. a. Euclid's algorithm, as presented in Euclid's treatise, uses subtractions rather than integer divisions. Write pseudocode for this version of Euclid's algorithm.
    b. Euclid's game (see [Bog]) starts with two unequal positive integers on the board. Two players move in turn. On each move, a player has to write on the board a positive number equal to the difference of two numbers already on the board; this number must be new, i.e., different from all the numbers already on the board.
The player who cannot move loses the game. Should you choose to move first or second in this game?

11. The extended Euclid's algorithm determines not only the greatest common divisor d of two positive integers m and n but also integers (not necessarily positive) x and y, such that mx + ny = d.
    a. Look up a description of the extended Euclid's algorithm (see, e.g., [KnuI, p. 13]) and implement it in the language of your choice.
    b. Modify your program to find integer solutions to the Diophantine equation ax + by = c with any set of integer coefficients a, b, and c.

12. Locker doors  There are n lockers in a hallway, numbered sequentially from 1 to n. Initially, all the locker doors are closed. You make n passes by the lockers, each time starting with locker #1. On the ith pass, i = 1, 2, ..., n, you toggle the door of every ith locker: if the door is closed, you open it; if it is open, you close it. After the last pass, which locker doors are open and which are closed? How many of them are open?

1.2 Fundamentals of Algorithmic Problem Solving

Let us start by reiterating an important point made in the introduction to this chapter: We can consider algorithms to be procedural solutions to problems. These solutions are not answers but specific instructions for getting answers. It is this emphasis on precisely defined constructive procedures that makes computer science distinct from other disciplines. In particular, this distinguishes it from theoretical mathematics, whose practitioners are typically satisfied with just proving the existence of a solution to a problem and, possibly, investigating the solution's properties.

We now list and briefly discuss a sequence of steps one typically goes through in designing and analyzing an algorithm (Figure 1.2).

Understanding the Problem

From a practical perspective, the first thing you need to do before designing an algorithm is to understand completely the problem given.
Read the problem's description carefully and ask questions if you have any doubts about the problem, do a few small examples by hand, think about special cases, and ask questions again if needed.

There are a few types of problems that arise in computing applications quite often. We review them in the next section. If the problem in question is one of them, you might be able to use a known algorithm for solving it. Of course, it helps to understand how such an algorithm works and to know its strengths and weaknesses, especially if you have to choose among several available algorithms. But often you will not find a readily available algorithm and will have to design your own. The sequence of steps outlined in this section should help you in this exciting but not always easy task.

An input to an algorithm specifies an instance of the problem the algorithm solves. It is very important to specify exactly the set of instances the algorithm needs to handle. (As an example, recall the variations in the set of instances for the three greatest common divisor algorithms discussed in the previous section.) If you fail to do this, your algorithm may work correctly for a majority of inputs but crash on some "boundary" value. Remember that a correct algorithm is not one that works most of the time, but one that works correctly for all legitimate inputs. Do not skimp on this first step of the algorithmic problem-solving process; otherwise, you will run the risk of unnecessary rework.

FIGURE 1.2 Algorithm design and analysis process. [A flowchart: understand the problem; decide on computational means, exact vs. approximate solving, and an algorithm design technique; design an algorithm; prove correctness; analyze the algorithm; code the algorithm.]

Ascertaining the Capabilities of the Computational Device

Once you completely understand a problem, you need to ascertain the capabilities of the computational device the algorithm is intended for. The vast majority of
algorithms in use today are still destined to be programmed for a computer closely resembling the von Neumann machine—a computer architecture outlined by the prominent Hungarian-American mathematician John von Neumann (1903–1957), in collaboration with A. Burks and H. Goldstine, in 1946. The essence of this architecture is captured by the so-called random-access machine (RAM). Its central assumption is that instructions are executed one after another, one operation at a time. Accordingly, algorithms designed to be executed on such machines are called sequential algorithms.

The central assumption of the RAM model does not hold for some newer computers that can execute operations concurrently, i.e., in parallel. Algorithms that take advantage of this capability are called parallel algorithms. Still, studying the classic techniques for design and analysis of algorithms under the RAM model remains the cornerstone of algorithmics for the foreseeable future.

Should you worry about the speed and amount of memory of a computer at your disposal? If you are designing an algorithm as a scientific exercise, the answer is a qualified no. As you will see in Section 2.1, most computer scientists prefer to study algorithms in terms independent of specification parameters for a particular computer. If you are designing an algorithm as a practical tool, the answer may depend on a problem you need to solve. Even the "slow" computers of today are almost unimaginably fast. Consequently, in many situations you need not worry about a computer being too slow for the task. There are important problems, however, that are very complex by their nature, or have to process huge volumes of data, or deal with applications where the time is critical. In such situations, it is imperative to be aware of the speed and memory available on a particular computer system.
Choosing between Exact and Approximate Problem Solving

The next principal decision is to choose between solving the problem exactly or solving it approximately. In the former case, an algorithm is called an exact algorithm; in the latter case, an algorithm is called an approximation algorithm. Why would one opt for an approximation algorithm? First, there are important problems that simply cannot be solved exactly for most of their instances; examples include extracting square roots, solving nonlinear equations, and evaluating definite integrals. Second, available algorithms for solving a problem exactly can be unacceptably slow because of the problem's intrinsic complexity. This happens, in particular, for many problems involving a very large number of choices; you will see examples of such difficult problems in Chapters 3, 11, and 12. Third, an approximation algorithm can be a part of a more sophisticated algorithm that solves a problem exactly.

Algorithm Design Techniques

Now, with all the components of algorithmic problem solving in place, how do you design an algorithm to solve a given problem? This is the main question this book seeks to answer by teaching you several general design techniques.

What is an algorithm design technique? An algorithm design technique (or "strategy" or "paradigm") is a general approach to solving problems algorithmically that is applicable to a variety of problems from different areas of computing.

Check this book's table of contents and you will see that a majority of its chapters are devoted to individual design techniques. They distill a few key ideas that have proven to be useful in designing algorithms. Learning these techniques is of utmost importance for the following reasons. First, they provide guidance for designing algorithms for new problems, i.e., problems for which there is no known satisfactory algorithm.
Therefore—to use the language of a famous proverb—learning such techniques is akin to learning to fish as opposed to being given a fish caught by somebody else. It is not true, of course, that each of these general techniques will be necessarily applicable to every problem you may encounter. But taken together, they do constitute a powerful collection of tools that you will find quite handy in your studies and work.

Second, algorithms are the cornerstone of computer science. Every science is interested in classifying its principal subject, and computer science is no exception. Algorithm design techniques make it possible to classify algorithms according to an underlying design idea; therefore, they can serve as a natural way to both categorize and study algorithms.

Designing an Algorithm and Data Structures

While the algorithm design techniques do provide a powerful set of general approaches to algorithmic problem solving, designing an algorithm for a particular problem may still be a challenging task. Some design techniques can be simply inapplicable to the problem in question. Sometimes, several techniques need to be combined, and there are algorithms that are hard to pinpoint as applications of the known design techniques. Even when a particular design technique is applicable, getting an algorithm often requires a nontrivial ingenuity on the part of the algorithm designer. With practice, both tasks—choosing among the general techniques and applying them—get easier, but they are rarely easy.

Of course, one should pay close attention to choosing data structures appropriate for the operations performed by the algorithm. For example, the sieve of Eratosthenes introduced in Section 1.1 would run longer if we used a linked list instead of an array in its implementation (why?). Also note that some of the algorithm design techniques discussed in Chapters 6 and 7 depend intimately on structuring or restructuring data specifying a problem's instance.
Many years ago, an influential textbook proclaimed the fundamental importance of both algorithms and data structures for computer programming by its very title: Algorithms + Data Structures = Programs [Wir76]. In the new world of object-oriented programming, data structures remain crucially important for both design and analysis of algorithms. We review basic data structures in Section 1.4.

Methods of Specifying an Algorithm

Once you have designed an algorithm, you need to specify it in some fashion. In Section 1.1, to give you an example, Euclid's algorithm is described in words (in a free and also a step-by-step form) and in pseudocode. These are the two options that are most widely used nowadays for specifying algorithms.

Using a natural language has an obvious appeal; however, the inherent ambiguity of any natural language makes a succinct and clear description of algorithms surprisingly difficult. Nevertheless, being able to do this is an important skill that you should strive to develop in the process of learning algorithms.

Pseudocode is a mixture of a natural language and programming language-like constructs. Pseudocode is usually more precise than natural language, and its usage often yields more succinct algorithm descriptions. Surprisingly, computer scientists have never agreed on a single form of pseudocode, leaving textbook authors with a need to design their own "dialects." Fortunately, these dialects are so close to each other that anyone familiar with a modern programming language should be able to understand them all.

This book's dialect was selected to cause minimal difficulty for a reader. For the sake of simplicity, we omit declarations of variables and use indentation to show the scope of such statements as for, if, and while. As you saw in the previous section, we use an arrow "←" for the assignment operation and two slashes "//" for comments.
In the earlier days of computing, the dominant vehicle for specifying algorithms was a flowchart, a method of expressing an algorithm by a collection of connected geometric shapes containing descriptions of the algorithm's steps. This representation technique has proved to be inconvenient for all but very simple algorithms; nowadays, it can be found only in old algorithm books.

The state of the art of computing has not yet reached a point where an algorithm's description—be it in a natural language or pseudocode—can be fed into an electronic computer directly. Instead, it needs to be converted into a computer program written in a particular computer language. We can look at such a program as yet another way of specifying the algorithm, although it is preferable to consider it as the algorithm's implementation.

Proving an Algorithm's Correctness

Once an algorithm has been specified, you have to prove its correctness. That is, you have to prove that the algorithm yields a required result for every legitimate input in a finite amount of time. For example, the correctness of Euclid's algorithm for computing the greatest common divisor stems from the correctness of the equality gcd(m, n) = gcd(n, m mod n) (which, in turn, needs a proof; see Problem 7 in Exercises 1.1), the simple observation that the second integer gets smaller on every iteration of the algorithm, and the fact that the algorithm stops when the second integer becomes 0.

For some algorithms, a proof of correctness is quite easy; for others, it can be quite complex. A common technique for proving correctness is to use mathematical induction because an algorithm's iterations provide a natural sequence of steps needed for such proofs. It might be worth mentioning that although tracing the algorithm's performance for a few specific inputs can be a very worthwhile activity, it cannot prove the algorithm's correctness conclusively.
But in order to show that an algorithm is incorrect, you need just one instance of its input for which the algorithm fails.

The notion of correctness for approximation algorithms is less straightforward than it is for exact algorithms. For an approximation algorithm, we usually would like to be able to show that the error produced by the algorithm does not exceed a predefined limit. You can find examples of such investigations in Chapter 12.

Analyzing an Algorithm

We usually want our algorithms to possess several qualities. After correctness, by far the most important is efficiency. In fact, there are two kinds of algorithm efficiency: time efficiency, indicating how fast the algorithm runs, and space efficiency, indicating how much extra memory it uses. A general framework and specific techniques for analyzing an algorithm's efficiency appear in Chapter 2.

Another desirable characteristic of an algorithm is simplicity. Unlike efficiency, which can be precisely defined and investigated with mathematical rigor, simplicity, like beauty, is to a considerable degree in the eye of the beholder. For example, most people would agree that Euclid's algorithm is simpler than the middle-school procedure for computing gcd(m, n), but it is not clear whether Euclid's algorithm is simpler than the consecutive integer checking algorithm. Still, simplicity is an important algorithm characteristic to strive for. Why? Because simpler algorithms are easier to understand and easier to program; consequently, the resulting programs usually contain fewer bugs. There is also the undeniable aesthetic appeal of simplicity. Sometimes simpler algorithms are also more efficient than more complicated alternatives. Unfortunately, it is not always true, in which case a judicious compromise needs to be made.

Yet another desirable characteristic of an algorithm is generality.
There are, in fact, two issues here: generality of the problem the algorithm solves and the set of inputs it accepts. On the first issue, note that it is sometimes easier to design an algorithm for a problem posed in more general terms. Consider, for example, the problem of determining whether two integers are relatively prime, i.e., whether their only common divisor is equal to 1. It is easier to design an algorithm for a more general problem of computing the greatest common divisor of two integers and, to solve the former problem, check whether the gcd is 1 or not. There are situations, however, where designing a more general algorithm is unnecessary or difficult or even impossible. For example, it is unnecessary to sort a list of n numbers to find its median, which is its ⌈n/2⌉th smallest element. To give another example, the standard formula for roots of a quadratic equation cannot be generalized to handle polynomials of arbitrary degrees.

As to the set of inputs, your main concern should be designing an algorithm that can handle a set of inputs that is natural for the problem at hand. For example, excluding integers equal to 1 as possible inputs for a greatest common divisor algorithm would be quite unnatural. On the other hand, although the standard formula for the roots of a quadratic equation holds for complex coefficients, we would normally not implement it on this level of generality unless this capability is explicitly required.

If you are not satisfied with the algorithm's efficiency, simplicity, or generality, you must return to the drawing board and redesign the algorithm. In fact, even if your evaluation is positive, it is still worth searching for other algorithmic solutions. Recall the three different algorithms in the previous section for computing the greatest common divisor: generally, you should not expect to get the best algorithm on the first try.
At the very least, you should try to fine-tune the algorithm you already have. For example, we made several improvements in our implementation of the sieve of Eratosthenes compared with its initial outline in Section 1.1. (Can you identify them?) You will do well if you keep in mind the following observation of Antoine de Saint-Exupéry, the French writer, pilot, and aircraft designer: "A designer knows he has arrived at perfection not when there is no longer anything to add, but when there is no longer anything to take away."¹

Coding an Algorithm

Most algorithms are destined to be ultimately implemented as computer programs. Programming an algorithm presents both a peril and an opportunity. The peril lies in the possibility of making the transition from an algorithm to a program either incorrectly or very inefficiently. Some influential computer scientists strongly believe that unless the correctness of a computer program is proven with full mathematical rigor, the program cannot be considered correct. They have developed special techniques for doing such proofs (see [Gri81]), but the power of these techniques of formal verification is limited so far to very small programs.

As a practical matter, the validity of programs is still established by testing. Testing of computer programs is an art rather than a science, but that does not mean that there is nothing in it to learn. Look up books devoted to testing and debugging; even more important, test and debug your program thoroughly whenever you implement an algorithm.

Also note that throughout the book, we assume that inputs to algorithms belong to the specified sets and hence require no verification. When implementing algorithms as programs to be used in actual applications, you should provide such verifications.
Of course, implementing an algorithm correctly is necessary but not sufficient: you would not like to diminish your algorithm's power by an inefficient implementation. Modern compilers do provide a certain safety net in this regard, especially when they are used in their code optimization mode. Still, you need to be aware of such standard tricks as computing a loop's invariant (an expression that does not change its value) outside the loop, collecting common subexpressions, replacing expensive operations by cheap ones, and so on. (See [Ker99] and [Ben00] for a good discussion of code tuning and other issues related to algorithm programming.) Typically, such improvements can speed up a program only by a constant factor, whereas a better algorithm can make a difference in running time by orders of magnitude. But once an algorithm is selected, a 10–50% speedup may be worth an effort.

¹ I found this call for design simplicity in an essay collection by Jon Bentley [Ben00]; the essays deal with a variety of issues in algorithm design and implementation and are justifiably titled Programming Pearls. I wholeheartedly recommend the writings of both Jon Bentley and Antoine de Saint-Exupéry.

A working program provides an additional opportunity in allowing an empirical analysis of the underlying algorithm. Such an analysis is based on timing the program on several inputs and then analyzing the results obtained. We discuss the advantages and disadvantages of this approach to analyzing algorithms in Section 2.6.

In conclusion, let us emphasize again the main lesson of the process depicted in Figure 1.2: As a rule, a good algorithm is a result of repeated effort and rework. Even if you have been fortunate enough to get an algorithmic idea that seems perfect, you should still try to see whether it can be improved. Actually, this is good news since it makes the ultimate result so much more enjoyable.
(Yes, I did think of naming this book The Joy of Algorithms.) On the other hand, how does one know when to stop? In the real world, more often than not a project's schedule or the impatience of your boss will stop you. And so it should be: perfection is expensive and in fact not always called for. Designing an algorithm is an engineering-like activity that calls for compromises among competing goals under the constraints of available resources, with the designer's time being one of the resources.

In the academic world, the question leads to an interesting but usually difficult investigation of an algorithm's optimality. Actually, this question is not about the efficiency of an algorithm but about the complexity of the problem it solves: What is the minimum amount of effort any algorithm will need to exert to solve the problem? For some problems, the answer to this question is known. For example, any algorithm that sorts an array by comparing values of its elements needs about n log₂ n comparisons for some arrays of size n (see Section 11.2). But for many seemingly easy problems such as integer multiplication, computer scientists do not yet have a final answer.

Another important issue of algorithmic problem solving is the question of whether or not every problem can be solved by an algorithm. We are not talking here about problems that do not have a solution, such as finding real roots of a quadratic equation with a negative discriminant. For such cases, an output indicating that the problem does not have a solution is all we can and should expect from an algorithm. Nor are we talking about ambiguously stated problems. Even some unambiguous problems that must have a simple yes or no answer are “undecidable,” i.e., unsolvable by any algorithm. An important example of such a problem appears in Section 11.3. Fortunately, a vast majority of problems in practical computing can be solved by an algorithm.
Before leaving this section, let us be sure that you do not have the misconception—possibly caused by the somewhat mechanical nature of the diagram of Figure 1.2—that designing an algorithm is a dull activity. There is nothing further from the truth: inventing (or discovering?) algorithms is a very creative and rewarding process. This book is designed to convince you that this is the case.

Exercises 1.2

1. Old World puzzle A peasant finds himself on a riverbank with a wolf, a goat, and a head of cabbage. He needs to transport all three to the other side of the river in his boat. However, the boat has room for only the peasant himself and one other item (either the wolf, the goat, or the cabbage). In his absence, the wolf would eat the goat, and the goat would eat the cabbage. Solve this problem for the peasant or prove it has no solution. (Note: The peasant is a vegetarian but does not like cabbage and hence can eat neither the goat nor the cabbage to help him solve the problem. And it goes without saying that the wolf is a protected species.)

2. New World puzzle There are four people who want to cross a rickety bridge; they all begin on the same side. You have 17 minutes to get them all across to the other side. It is night, and they have one flashlight. A maximum of two people can cross the bridge at one time. Any party that crosses, either one or two people, must have the flashlight with them. The flashlight must be walked back and forth; it cannot be thrown, for example. Person 1 takes 1 minute to cross the bridge, person 2 takes 2 minutes, person 3 takes 5 minutes, and person 4 takes 10 minutes. A pair must walk together at the rate of the slower person's pace. (Note: According to a rumor on the Internet, interviewers at a well-known software company located near Seattle have given this problem to interviewees.)

3.
Which of the following formulas can be considered an algorithm for computing the area of a triangle whose side lengths are given positive numbers a, b, and c?
a. S = √(p(p − a)(p − b)(p − c)), where p = (a + b + c)/2
b. S = ½ bc sin A, where A is the angle between sides b and c
c. S = ½ a·hₐ, where hₐ is the height to base a

4. Write pseudocode for an algorithm for finding real roots of equation ax² + bx + c = 0 for arbitrary real coefficients a, b, and c. (You may assume the availability of the square root function sqrt(x).)

5. Describe the standard algorithm for finding the binary representation of a positive decimal integer
a. in English.
b. in pseudocode.

6. Describe the algorithm used by your favorite ATM machine in dispensing cash. (You may give your description in either English or pseudocode, whichever you find more convenient.)

7. a. Can the problem of computing the number π be solved exactly?
b. How many instances does this problem have?
c. Look up an algorithm for this problem on the Internet.

8. Give an example of a problem other than computing the greatest common divisor for which you know more than one algorithm. Which of them is simpler? Which is more efficient?

9. Consider the following algorithm for finding the distance between the two closest elements in an array of numbers.

ALGORITHM MinDistance(A[0..n − 1])
//Input: Array A[0..n − 1] of numbers
//Output: Minimum distance between two of its elements
dmin ← ∞
for i ← 0 to n − 1 do
    for j ← 0 to n − 1 do
        if i ≠ j and |A[i] − A[j]| < dmin
            dmin ← |A[i] − A[j]|
return dmin

Make as many improvements as you can in this algorithmic solution to the problem. If you need to, you may change the algorithm altogether; if not, improve the implementation given.

10. One of the most influential books on problem solving, titled How To Solve It [Pol57], was written by the Hungarian-American mathematician George Pólya (1887–1985). Pólya summarized his ideas in a four-point summary.
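As a hint at the kinds of improvements exercise 9 invites, here is a hedged sketch (not the book's official solution) of two possibilities: removing the redundancies of the given quadratic version, and replacing the algorithm altogether with a presort-based one.

```python
def min_distance_improved(a):
    # Quadratic version with the redundancies removed: j starts at i + 1,
    # so i == j never occurs and each pair is examined only once, and
    # |A[i] - A[j]| is computed once rather than twice.
    dmin = float("inf")
    n = len(a)
    for i in range(n - 1):
        for j in range(i + 1, n):
            d = abs(a[i] - a[j])
            if d < dmin:
                dmin = d
    return dmin

def min_distance_presort(a):
    # Asymptotically better alternative: after sorting, the two closest
    # elements must be adjacent, so one linear scan suffices
    # (O(n log n) overall, dominated by the sort).
    b = sorted(a)
    return min(b[k + 1] - b[k] for k in range(len(b) - 1))
```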
Find this summary on the Internet or, better yet, in his book, and compare it with the plan outlined in Section 1.2. What do they have in common? How are they different?

1.3 Important Problem Types

In the limitless sea of problems one encounters in computing, there are a few areas that have attracted particular attention from researchers. By and large, their interest has been driven either by the problem's practical importance or by some specific characteristics making the problem an interesting research subject; fortunately, these two motivating forces reinforce each other in most cases. In this section, we are going to introduce the most important problem types:

Sorting
Searching
String processing
Graph problems
Combinatorial problems
Geometric problems
Numerical problems

These problems are used in subsequent chapters of the book to illustrate different algorithm design techniques and methods of algorithm analysis.

Sorting

The sorting problem is to rearrange the items of a given list in nondecreasing order. Of course, for this problem to be meaningful, the nature of the list items must allow such an ordering. (Mathematicians would say that there must exist a relation of total ordering.) As a practical matter, we usually need to sort lists of numbers, characters from an alphabet, character strings, and, most important, records similar to those maintained by schools about their students, libraries about their holdings, and companies about their employees. In the case of records, we need to choose a piece of information to guide sorting. For example, we can choose to sort student records in alphabetical order of names or by student number or by student grade-point average. Such a specially chosen piece of information is called a key. Computer scientists often talk about sorting a list of keys even when the list's items are not records but, say, just integers. Why would we want a sorted list?
To begin with, a sorted list can be a required output of a task such as ranking Internet search results or ranking students by their GPA scores. Further, sorting makes many questions about the list easier to answer. The most important of them is searching: it is why dictionaries, telephone books, class lists, and so on are sorted. You will see other examples of the usefulness of list presorting in Section 6.1. In a similar vein, sorting is used as an auxiliary step in several important algorithms in other areas, e.g., geometric algorithms and data compression. The greedy approach—an important algorithm design technique discussed later in the book—requires a sorted input.

By now, computer scientists have discovered dozens of different sorting algorithms. In fact, inventing a new sorting algorithm has been likened to designing the proverbial mousetrap. And I am happy to report that the hunt for a better sorting mousetrap continues. This perseverance is admirable in view of the following facts. On the one hand, there are a few good sorting algorithms that sort an arbitrary array of size n using about n log₂ n comparisons. On the other hand, no algorithm that sorts by key comparisons (as opposed to, say, comparing small pieces of keys) can do substantially better than that.

There is a reason for this embarrassment of algorithmic riches in the land of sorting. Although some algorithms are indeed better than others, there is no algorithm that would be the best solution in all situations. Some of the algorithms are simple but relatively slow, while others are faster but more complex; some work better on randomly ordered inputs, while others do better on almost-sorted lists; some are suitable only for lists residing in the fast memory, while others can be adapted for sorting large files stored on a disk; and so on. Two properties of sorting algorithms deserve special mention.
A sorting algorithm is called stable if it preserves the relative order of any two equal elements in its input. In other words, if an input list contains two equal elements in positions i and j where i < j, then in the sorted list they have to be in positions i′ and j′, respectively, such that i′ < j′.

The functions 2ⁿ and n! grow so fast that their values become astronomically large even for rather small values of n; this is the reason why their values for n > 10² are not included in Table 2.1. For example, it would take about 4·10¹⁰ years for a computer making a trillion (10¹²) operations per second to execute 2¹⁰⁰ operations. Though this is incomparably faster than it would have taken to execute 100! operations, it is still longer than 4.5 billion (4.5·10⁹) years—the estimated age of the planet Earth. There is a tremendous difference between the orders of growth of the functions 2ⁿ and n!, yet both are often referred to as “exponential-growth functions” (or simply “exponential”) despite the fact that, strictly speaking, only the former should be referred to as such. The bottom line, which is important to remember, is this: Algorithms that require an exponential number of operations are practical for solving only problems of very small sizes.

Another way to appreciate the qualitative difference among the orders of growth of the functions in Table 2.1 is to consider how they react to, say, a twofold increase in the value of their argument n. The function log₂ n increases in value by just 1 (because log₂ 2n = log₂ 2 + log₂ n = 1 + log₂ n); the linear function increases twofold; the linearithmic function n log₂ n increases slightly more than twofold; the quadratic function n² and cubic function n³ increase fourfold and eightfold, respectively (because (2n)² = 4n² and (2n)³ = 8n³); the value of 2ⁿ gets squared (because 2²ⁿ = (2ⁿ)²); and n! increases much more than that (yes, even mathematics refuses to cooperate to give a neat answer for n!).

Worst-Case, Best-Case, and Average-Case Efficiencies

In the beginning of this section, we established that it is reasonable to measure an algorithm's efficiency as a function of a parameter indicating the size of the algorithm's input.
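The doubling behavior described above is easy to check numerically. The sketch below computes the ratio f(2n)/f(n) for a few of the functions from Table 2.1; the helper name is illustrative.

```python
import math

def growth_on_doubling(f, n):
    # Ratio f(2n)/f(n): how the function reacts to doubling its argument.
    return f(2 * n) / f(n)

# For n = 1000: the linear function doubles, n^2 quadruples, n^3 grows
# eightfold; for a small n, 2^n gets squared (the ratio is 2^n itself),
# while log2 n barely moves.
```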
But there are many algorithms for which running time depends not only on an input size but also on the specifics of a particular input. Consider, as an example, sequential search. This is a straightforward algorithm that searches for a given item (some search key K) in a list of n elements by checking successive elements of the list until either a match with the search key is found or the list is exhausted. Here is the algorithm's pseudocode, in which, for simplicity, a list is implemented as an array. It also assumes that the second condition A[i] ≠ K will not be checked if the first one, which checks that the array's index does not exceed its upper bound, fails.

ALGORITHM SequentialSearch(A[0..n − 1], K)
//Searches for a given value in a given array by sequential search
//Input: An array A[0..n − 1] and a search key K
//Output: The index of the first element in A that matches K
//        or −1 if there are no matching elements
i ← 0
while i < n and A[i] ≠ K do
    i ← i + 1
if i < n return i
else return −1

Every quadratic function an² + bn + c with a > 0 is in Θ(n²), but so are, among infinitely many others, n² + sin n and n² + log n. (Can you explain why?) Hopefully, this informal introduction has made you comfortable with the idea behind the three asymptotic notations. So now come the formal definitions.

O-notation

DEFINITION A function t(n) is said to be in O(g(n)), denoted t(n) ∈ O(g(n)), if t(n) is bounded above by some constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some nonnegative integer n₀ such that t(n) ≤ cg(n) for all n ≥ n₀.

The definition is illustrated in Figure 2.1 where, for the sake of visual clarity, n is extended to be a real number. As an example, let us formally prove one of the assertions made in the introduction: 100n + 5 ∈ O(n²). Indeed,

100n + 5 ≤ 100n + n (for all n ≥ 5) = 101n ≤ 101n².

Thus, as values of the constants c and n₀ required by the definition, we can take 101 and 5, respectively. Note that the definition gives us a lot of freedom in choosing specific values for constants c and n₀.
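The SequentialSearch pseudocode above transcribes directly into a short sketch; the function name is illustrative. Note how the `and` in the loop condition provides exactly the short-circuit behavior the text assumes: the element comparison is skipped once the index reaches the end of the array.

```python
def sequential_search(a, key):
    # Scan successive elements until a match is found or the list is exhausted.
    # Returns the index of the first match, or -1 if the key is absent.
    i = 0
    n = len(a)
    while i < n and a[i] != key:
        i += 1
    return i if i < n else -1
```

The dependence on the specific input is visible here: the key in the first position costs one comparison (best case), while an absent key costs n (worst case).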
For example, we could also reason that

100n + 5 ≤ 100n + 5n (for all n ≥ 1) = 105n

to complete the proof with c = 105 and n₀ = 1.

[FIGURE 2.1 Big-oh notation: t(n) ∈ O(g(n)).]

[FIGURE 2.2 Big-omega notation: t(n) ∈ Ω(g(n)).]

Ω-notation

DEFINITION A function t(n) is said to be in Ω(g(n)), denoted t(n) ∈ Ω(g(n)), if t(n) is bounded below by some positive constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some nonnegative integer n₀ such that t(n) ≥ cg(n) for all n ≥ n₀.

The definition is illustrated in Figure 2.2. Here is an example of the formal proof that n³ ∈ Ω(n²): n³ ≥ n² for all n ≥ 0, i.e., we can select c = 1 and n₀ = 0.

[FIGURE 2.3 Big-theta notation: t(n) ∈ Θ(g(n)).]

Θ-notation

DEFINITION A function t(n) is said to be in Θ(g(n)), denoted t(n) ∈ Θ(g(n)), if t(n) is bounded both above and below by some positive constant multiples of g(n) for all large n, i.e., if there exist some positive constants c₁ and c₂ and some nonnegative integer n₀ such that c₂g(n) ≤ t(n) ≤ c₁g(n) for all n ≥ n₀.

The definition is illustrated in Figure 2.3. For example, let us prove that ½n(n − 1) ∈ Θ(n²). First, we prove the right inequality (the upper bound):

½n(n − 1) = ½n² − ½n ≤ ½n² for all n ≥ 0.

Second, we prove the left inequality (the lower bound):

½n(n − 1) = ½n² − ½n ≥ ½n² − ½n·½n (for all n ≥ 2) = ¼n².

Hence, we can select c₂ = ¼, c₁ = ½, and n₀ = 2.

Useful Property Involving the Asymptotic Notations

Using the formal definitions of the asymptotic notations, we can prove their general properties (see Problem 7 in this section's exercises for a few simple examples). The following property, in particular, is useful in analyzing algorithms that comprise two consecutively executed parts.
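A numerical spot check of the Θ(n²) bounds just proved can build intuition (though of course it proves nothing about all n — only the algebra above does). This sketch, with an illustrative name, verifies c₂n² ≤ ½n(n − 1) ≤ c₁n² over a finite range starting at n₀:

```python
def theta_bounds_hold(n0=2, c1=0.5, c2=0.25, upto=10**4):
    # Spot-check the bounds c2*n^2 <= n(n-1)/2 <= c1*n^2 for n0 <= n < upto,
    # using the constants c1 = 1/2, c2 = 1/4, n0 = 2 selected in the proof.
    return all(c2 * n * n <= n * (n - 1) / 2 <= c1 * n * n
               for n in range(n0, upto))
```

Starting the check at n₀ = 1 instead makes it fail, which matches the proof: the lower bound was established only for n ≥ 2.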
THEOREM If t₁(n) ∈ O(g₁(n)) and t₂(n) ∈ O(g₂(n)), then

t₁(n) + t₂(n) ∈ O(max{g₁(n), g₂(n)}).

(The analogous assertions are true for the Ω and Θ notations as well.)

PROOF The proof extends to orders of growth the following simple fact about four arbitrary real numbers a₁, b₁, a₂, b₂: if a₁ ≤ b₁ and a₂ ≤ b₂, then a₁ + a₂ ≤ 2 max{b₁, b₂}. Since t₁(n) ∈ O(g₁(n)), there exist some positive constant c₁ and some nonnegative integer n₁ such that t₁(n) ≤ c₁g₁(n) for all n ≥ n₁. Similarly, since t₂(n) ∈ O(g₂(n)), t₂(n) ≤ c₂g₂(n) for all n ≥ n₂. Let us denote c₃ = max{c₁, c₂} and consider n ≥ max{n₁, n₂} so that we can use both inequalities. Adding them yields the following:

t₁(n) + t₂(n) ≤ c₁g₁(n) + c₂g₂(n) ≤ c₃g₁(n) + c₃g₂(n) = c₃[g₁(n) + g₂(n)] ≤ c₃·2 max{g₁(n), g₂(n)}.

Hence, t₁(n) + t₂(n) ∈ O(max{g₁(n), g₂(n)}), with the constants c and n₀ required by the O definition being 2c₃ = 2 max{c₁, c₂} and max{n₁, n₂}, respectively.

So what does this property imply for an algorithm that comprises two consecutively executed parts? It implies that the algorithm's overall efficiency is determined by the part with a higher order of growth, i.e., its least efficient part: t₁(n) ∈ O(g₁(n)) and t₂(n) ∈ O(g₂(n)) imply t₁(n) + t₂(n) ∈ O(max{g₁(n), g₂(n)}).

For example, we can check whether an array has equal elements by the following two-part algorithm: first, sort the array by applying some known sorting algorithm; second, scan the sorted array to check its consecutive elements for equality. If, for example, a sorting algorithm used in the first part makes no more than ½n(n − 1) comparisons (and hence is in O(n²)) while the second part makes no more than n − 1 comparisons (and hence is in O(n)), the efficiency of the entire algorithm will be in O(max{n², n}) = O(n²).
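The two-part algorithm in the example above can be sketched as follows. Python's built-in sort is used here as the "some known sorting algorithm" of the first part (it runs in O(n log n), better than the O(n²) bound assumed in the example, but the structure of the argument is the same: the sort dominates the linear scan).

```python
def has_equal_elements(a):
    # Part 1: sort the array (the dominant part).
    b = sorted(a)
    # Part 2: one linear scan over adjacent pairs; equal elements, if any,
    # must be adjacent in the sorted array.
    return any(b[k] == b[k + 1] for k in range(len(b) - 1))
```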
Using Limits for Comparing Orders of Growth

Though the formal definitions of O, Ω, and Θ are indispensable for proving their abstract properties, they are rarely used for comparing the orders of growth of two specific functions. A much more convenient method for doing so is based on computing the limit of the ratio of the two functions in question. Three principal cases may arise:

lim_{n→∞} t(n)/g(n) = 0 implies that t(n) has a smaller order of growth than g(n),
lim_{n→∞} t(n)/g(n) = c > 0 implies that t(n) has the same order of growth as g(n),
lim_{n→∞} t(n)/g(n) = ∞ implies that t(n) has a larger order of growth than g(n).3

Note that the first two cases mean that t(n) ∈ O(g(n)), the last two mean that t(n) ∈ Ω(g(n)), and the second case means that t(n) ∈ Θ(g(n)). The limit-based approach is often more convenient than the one based on the definitions because it can take advantage of the powerful calculus techniques developed for computing limits, such as L'Hôpital's rule

lim_{n→∞} t(n)/g(n) = lim_{n→∞} t′(n)/g′(n)

and Stirling's formula

n! ≈ √(2πn) (n/e)ⁿ for large values of n.

Here are three examples of using the limit-based approach to comparing orders of growth of two functions.

EXAMPLE 1 Compare the orders of growth of ½n(n − 1) and n². (This is one of the examples we used at the beginning of this section to illustrate the definitions.)

lim_{n→∞} [½n(n − 1)]/n² = ½ lim_{n→∞} (n² − n)/n² = ½ lim_{n→∞} (1 − 1/n) = ½.

Since the limit is equal to a positive constant, the functions have the same order of growth or, symbolically, ½n(n − 1) ∈ Θ(n²).

EXAMPLE 2 Compare the orders of growth of log₂ n and √n. (Unlike Example 1, the answer here is not immediately obvious.)

lim_{n→∞} (log₂ n)/√n = lim_{n→∞} (log₂ n)′/(√n)′ = lim_{n→∞} (log₂ e · 1/n)/(1/(2√n)) = 2 log₂ e · lim_{n→∞} 1/√n = 0.

Since the limit is equal to zero, log₂ n has a smaller order of growth than √n. (Since lim_{n→∞} (log₂ n)/√n = 0, we can use the so-called little-oh notation: log₂ n ∈ o(√n).
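The limits in Examples 1 and 2 can be previewed numerically before doing any calculus: evaluate the ratio t(n)/g(n) at a few large n and watch where it is heading. This is only a heuristic, not a proof, and the helper name is illustrative.

```python
import math

def ratio(t, g, n):
    # The quantity whose limit as n -> infinity we are interested in.
    return t(n) / g(n)

# Example 2: log2(n)/sqrt(n) keeps shrinking toward 0.
# Example 1: (n(n-1)/2)/n^2 settles near the positive constant 1/2.
```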
Unlike the big-Oh, the little-oh notation is rarely used in analysis of algorithms.)

3. The fourth case, in which such a limit does not exist, rarely happens in the actual practice of analyzing algorithms. Still, this possibility makes the limit-based approach to comparing orders of growth less general than the one based on the definitions of O, Ω, and Θ.

EXAMPLE 3 Compare the orders of growth of n! and 2ⁿ. (We discussed this informally in Section 2.1.) Taking advantage of Stirling's formula, we get

lim_{n→∞} n!/2ⁿ = lim_{n→∞} √(2πn) (n/e)ⁿ / 2ⁿ = lim_{n→∞} √(2πn) · nⁿ/(2ⁿeⁿ) = lim_{n→∞} √(2πn) (n/(2e))ⁿ = ∞.

Thus, though 2ⁿ grows very fast, n! grows still faster. We can write symbolically that n! ∈ Ω(2ⁿ); note, however, that while the big-Omega notation does not preclude the possibility that n! and 2ⁿ have the same order of growth, the limit computed here certainly does.

Basic Efficiency Classes

Even though the efficiency analysis framework puts together all the functions whose orders of growth differ by a constant multiple, there are still infinitely many such classes. (For example, the exponential functions aⁿ have different orders of growth for different values of base a.) Therefore, it may come as a surprise that the time efficiencies of a large number of algorithms fall into only a few classes. These classes are listed in Table 2.2 in increasing order of their orders of growth, along with their names and a few comments.

You could raise a concern that classifying algorithms by their asymptotic efficiency would be of little practical use since the values of multiplicative constants are usually left unspecified. This leaves open the possibility of an algorithm in a worse efficiency class running faster than an algorithm in a better efficiency class for inputs of realistic sizes.
For example, if the running time of one algorithm is n³ while the running time of the other is 10⁶n², the cubic algorithm will outperform the quadratic algorithm unless n exceeds 10⁶. A few such anomalies are indeed known. Fortunately, multiplicative constants usually do not differ that drastically. As a rule, you should expect an algorithm from a better asymptotic efficiency class to outperform an algorithm from a worse class even for moderately sized inputs. This observation is especially true for an algorithm with a better than exponential running time versus an exponential (or worse) algorithm.

Exercises 2.2

1. Use the most appropriate notation among O, Ω, and Θ to indicate the time efficiency class of sequential search (see Section 2.1)
a. in the worst case.
b. in the best case.
c. in the average case.

2. Use the informal definitions of O, Ω, and Θ to determine whether the following assertions are true or false.

TABLE 2.2 Basic asymptotic efficiency classes

Class   | Name         | Comments
1       | constant     | Short of best-case efficiencies, very few reasonable examples can be given, since an algorithm's running time typically goes to infinity when its input size grows infinitely large.
log n   | logarithmic  | Typically, a result of cutting a problem's size by a constant factor on each iteration of the algorithm (see Section 4.4). Note that a logarithmic algorithm cannot take into account all its input or even a fixed fraction of it: any algorithm that does so will have at least linear running time.
n       | linear       | Algorithms that scan a list of size n (e.g., sequential search) belong to this class.
n log n | linearithmic | Many divide-and-conquer algorithms (see Chapter 5), including mergesort and quicksort in the average case, fall into this category.
n²      | quadratic    | Typically, characterizes efficiency of algorithms with two embedded loops (see the next section). Elementary sorting algorithms and certain operations on n × n matrices are standard examples.
n³      | cubic        | Typically, characterizes efficiency of algorithms with three embedded loops (see the next section). Several nontrivial algorithms from linear algebra fall into this class.
2ⁿ      | exponential  | Typical for algorithms that generate all subsets of an n-element set. Often, the term “exponential” is used in a broader sense to include this and larger orders of growth as well.
n!      | factorial    | Typical for algorithms that generate all permutations of an n-element set.

a. n(n + 1)/2 ∈ O(n³)
b. n(n + 1)/2 ∈ O(n²)
c. n(n + 1)/2 ∈ Θ(n³)
d. n(n + 1)/2 ∈ Ω(n)

3. For each of the following functions, indicate the class Θ(g(n)) the function belongs to. (Use the simplest g(n) possible in your answers.) Prove your assertions.
a. (n² + 1)¹⁰
b. 10n² + 7n + 3
c. 2n lg(n + 2)² + (n + 2)² lg(n/2)
d. 2ⁿ⁺¹ + 3ⁿ⁻¹
e. ⌊log₂ n⌋

4. a. Table 2.1 contains values of several functions that often arise in the analysis of algorithms. These values certainly suggest that the functions log n, n, n log₂ n, n², n³, 2ⁿ, n! are listed in increasing order of their order of growth. Do these values prove this fact with mathematical certainty?
b. Prove that the functions are indeed listed in increasing order of their order of growth.

5. List the following functions according to their order of growth from the lowest to the highest: (n − 2)!, 5 lg(n + 100)¹⁰, 2²ⁿ, 0.001n⁴ + 3n³ + 1, ln² n, ∛n, 3ⁿ.

6. a. Prove that every polynomial of degree k, p(n) = aₖnᵏ + aₖ₋₁nᵏ⁻¹ + ... + a₀ with aₖ > 0, belongs to Θ(nᵏ).
b. Prove that exponential functions aⁿ have different orders of growth for different values of base a > 0.

7. Prove the following assertions by using the definitions of the notations involved, or disprove them by giving a specific counterexample.
a. If t(n) ∈ O(g(n)), then g(n) ∈ Ω(t(n)).
b. Θ(αg(n)) = Θ(g(n)), where α > 0.
c. Θ(g(n)) = O(g(n)) ∩ Ω(g(n)).
d.
For any two nonnegative functions t(n) and g(n) defined on the set of nonnegative integers, either t(n) ∈ O(g(n)), or t(n) ∈ Ω(g(n)), or both.

8. Prove the section's theorem for
a. Ω notation.
b. Θ notation.

9. We mentioned in this section that one can check whether all elements of an array are distinct by a two-part algorithm based on the array's presorting.
a. If the presorting is done by an algorithm with a time efficiency in Θ(n log n), what will be a time-efficiency class of the entire algorithm?
b. If the sorting algorithm used for presorting needs an extra array of size n, what will be the space-efficiency class of the entire algorithm?

10. The range of a finite nonempty set of n real numbers S is defined as the difference between the largest and smallest elements of S. For each representation of S given below, describe in English an algorithm to compute the range. Indicate the time efficiency classes of these algorithms using the most appropriate notation (O, Ω, or Θ).
a. An unsorted array
b. A sorted array
c. A sorted singly linked list
d. A binary search tree

11. Lighter or heavier? You have n > 2 identical-looking coins and a two-pan balance scale with no weights. One of the coins is a fake, but you do not know whether it is lighter or heavier than the genuine coins, which all weigh the same. Design a Θ(1) algorithm to determine whether the fake coin is lighter or heavier than the others.

12. Door in a wall You are facing a wall that stretches infinitely in both directions. There is a door in the wall, but you know neither how far away nor in which direction. You can see the door only when you are right next to it. Design an algorithm that enables you to reach the door by walking at most O(n) steps where n is the (unknown to you) number of steps between your initial position and the door.
[Par95]

2.3 Mathematical Analysis of Nonrecursive Algorithms

In this section, we systematically apply the general framework outlined in Section 2.1 to analyzing the time efficiency of nonrecursive algorithms. Let us start with a very simple example that demonstrates all the principal steps typically taken in analyzing such algorithms.

EXAMPLE 1 Consider the problem of finding the value of the largest element in a list of n numbers. For simplicity, we assume that the list is implemented as an array. The following is pseudocode of a standard algorithm for solving the problem.

ALGORITHM MaxElement(A[0..n − 1])
//Determines the value of the largest element in a given array
//Input: An array A[0..n − 1] of real numbers
//Output: The value of the largest element in A
maxval ← A[0]
for i ← 1 to n − 1 do
    if A[i] > maxval
        maxval ← A[i]
return maxval

The obvious measure of an input's size here is the number of elements in the array, i.e., n. The operations that are going to be executed most often are in the algorithm's for loop. There are two operations in the loop's body: the comparison A[i] > maxval and the assignment maxval ← A[i]. Which of these two operations should we consider basic? Since the comparison is executed on each repetition of the loop and the assignment is not, we should consider the comparison to be the algorithm's basic operation. Note that the number of comparisons will be the same for all arrays of size n; therefore, in terms of this metric, there is no need to distinguish among the worst, average, and best cases here.

Let us denote by C(n) the number of times this comparison is executed and try to find a formula expressing it as a function of size n. The algorithm makes one comparison on each execution of the loop, which is repeated for each value of the loop's variable i within the bounds 1 and n − 1, inclusive. Therefore, we get the following sum for C(n):

C(n) = Σ_{i=1}^{n−1} 1.
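The MaxElement pseudocode transcribes directly into the sketch below; the function name is illustrative. The comparison in the `if` is the basic operation, executed exactly once per loop iteration, i.e., n − 1 times in total.

```python
def max_element(a):
    # Transcription of MaxElement: the comparison a[i] > maxval is the
    # basic operation and runs exactly len(a) - 1 times, so C(n) = n - 1.
    maxval = a[0]
    for i in range(1, len(a)):
        if a[i] > maxval:
            maxval = a[i]
    return maxval
```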
This is an easy sum to compute because it is nothing other than 1 repeated n − 1 times. Thus,

C(n) = Σ_{i=1}^{n−1} 1 = n − 1 ∈ Θ(n).

Here is a general plan to follow in analyzing nonrecursive algorithms.

General Plan for Analyzing the Time Efficiency of Nonrecursive Algorithms
1. Decide on a parameter (or parameters) indicating an input's size.
2. Identify the algorithm's basic operation. (As a rule, it is located in the innermost loop.)
3. Check whether the number of times the basic operation is executed depends only on the size of an input. If it also depends on some additional property, the worst-case, average-case, and, if necessary, best-case efficiencies have to be investigated separately.
4. Set up a sum expressing the number of times the algorithm's basic operation is executed.4
5. Using standard formulas and rules of sum manipulation, either find a closed-form formula for the count or, at the very least, establish its order of growth.

Before proceeding with further examples, you may want to review Appendix A, which contains a list of summation formulas and rules that are often useful in analysis of algorithms. In particular, we use especially frequently two basic rules of sum manipulation,

Σ_{i=l}^{u} c·aᵢ = c Σ_{i=l}^{u} aᵢ,   (R1)
Σ_{i=l}^{u} (aᵢ ± bᵢ) = Σ_{i=l}^{u} aᵢ ± Σ_{i=l}^{u} bᵢ,   (R2)

and two summation formulas

Σ_{i=l}^{u} 1 = u − l + 1, where l ≤ u are some lower and upper integer limits,   (S1)
Σ_{i=0}^{n} i = Σ_{i=1}^{n} i = 1 + 2 + ... + n = n(n + 1)/2 ≈ ½n² ∈ Θ(n²).   (S2)

Note that the formula Σ_{i=1}^{n−1} 1 = n − 1, which we used in Example 1, is a special case of formula (S1) for l = 1 and u = n − 1.

4. Sometimes, an analysis of a nonrecursive algorithm requires setting up not a sum but a recurrence relation for the number of times its basic operation is executed. Using recurrence relations is much more typical for analyzing recursive algorithms (see Section 2.4).
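The two summation formulas (S1) and (S2) are easy to sanity-check by brute force; the helper names here are illustrative.

```python
def s1(l, u):
    # (S1): the sum of 1 over i = l..u, which should equal u - l + 1.
    return sum(1 for _ in range(l, u + 1))

def s2(n):
    # (S2): 1 + 2 + ... + n, which should equal n(n + 1)/2.
    return sum(range(1, n + 1))
```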
EXAMPLE 2 Consider the element uniqueness problem: check whether all the elements in a given array of n elements are distinct. This problem can be solved by the following straightforward algorithm.

ALGORITHM UniqueElements(A[0..n − 1])
//Determines whether all the elements in a given array are distinct
//Input: An array A[0..n − 1]
//Output: Returns “true” if all the elements in A are distinct
//        and “false” otherwise
for i ← 0 to n − 2 do
    for j ← i + 1 to n − 1 do
        if A[i] = A[j] return false
return true

The natural measure of the input's size here is again n, the number of elements in the array. Since the innermost loop contains a single operation (the comparison of two elements), we should consider it as the algorithm's basic operation. Note, however, that the number of element comparisons depends not only on n but also on whether there are equal elements in the array and, if there are, which array positions they occupy. We will limit our investigation to the worst case only.

By definition, the worst-case input is an array for which the number of element comparisons Cworst(n) is the largest among all arrays of size n. An inspection of the innermost loop reveals that there are two kinds of worst-case inputs—inputs for which the algorithm does not exit the loop prematurely: arrays with no equal elements and arrays in which the last two elements are the only pair of equal elements. For such inputs, one comparison is made for each repetition of the innermost loop, i.e., for each value of the loop variable j between its limits i + 1 and n − 1; this is repeated for each value of the outer loop, i.e., for each value of the loop variable i between its limits 0 and n − 2. Accordingly, we get

Cworst(n) = Σ_{i=0}^{n−2} Σ_{j=i+1}^{n−1} 1 = Σ_{i=0}^{n−2} [(n − 1) − (i + 1) + 1] = Σ_{i=0}^{n−2} (n − 1 − i)
          = Σ_{i=0}^{n−2} (n − 1) − Σ_{i=0}^{n−2} i = (n − 1) Σ_{i=0}^{n−2} 1 − (n − 2)(n − 1)/2
          = (n − 1)² − (n − 2)(n − 1)/2 = (n − 1)n/2 ≈ ½n² ∈ Θ(n²).
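A transcription of UniqueElements that also counts the element comparisons makes the worst-case formula tangible: for an array with all-distinct elements the count comes out to n(n − 1)/2, matching the derivation above. The comparison counter is an addition for illustration, not part of the book's pseudocode.

```python
def unique_elements(a):
    # Transcription of UniqueElements, instrumented with a comparison counter.
    # On a worst-case input (all distinct), comparisons == n*(n-1)//2.
    comparisons = 0
    n = len(a)
    for i in range(n - 1):
        for j in range(i + 1, n):
            comparisons += 1
            if a[i] == a[j]:
                return False, comparisons
    return True, comparisons
```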
We also could have computed the sum Σ_{i=0}^{n−2} (n − 1 − i) faster as follows:

Σ_{i=0}^{n−2} (n − 1 − i) = (n − 1) + (n − 2) + ... + 1 = (n − 1)n/2,

where the last equality is obtained by applying summation formula (S2). Note that this result was perfectly predictable: in the worst case, the algorithm needs to compare all n(n − 1)/2 distinct pairs of its n elements.

EXAMPLE 3 Given two n × n matrices A and B, find the time efficiency of the definition-based algorithm for computing their product C = AB. By definition, C is an n × n matrix whose elements are computed as the scalar (dot) products of the rows of matrix A and the columns of matrix B:

C[i, j] = A[i, 0]B[0, j] + ... + A[i, k]B[k, j] + ... + A[i, n − 1]B[n − 1, j]

for every pair of indices 0 ≤ i, j ≤ n − 1.

[Figure: element C[i, j] is the dot product of row i of A and column j of B.]

ALGORITHM MatrixMultiplication(A[0..n − 1, 0..n − 1], B[0..n − 1, 0..n − 1])
//Multiplies two square matrices of order n by the definition-based algorithm
//Input: Two n × n matrices A and B
//Output: Matrix C = AB
for i ← 0 to n − 1 do
    for j ← 0 to n − 1 do
        C[i, j] ← 0.0
        for k ← 0 to n − 1 do
            C[i, j] ← C[i, j] + A[i, k] ∗ B[k, j]
return C

We measure an input's size by matrix order n. There are two arithmetical operations in the innermost loop here, multiplication and addition, that, in principle, can compete for designation as the algorithm's basic operation. Actually, we do not have to choose between them, because on each repetition of the innermost loop each of the two is executed exactly once. So by counting one we automatically count the other. Still, following a well-established tradition, we consider multiplication as the basic operation (see Section 2.1). Let us set up a sum for the total number of multiplications M(n) executed by the algorithm. (Since this count depends only on the size of the input matrices, we do not have to investigate the worst-case, average-case, and best-case efficiencies separately.)
Obviously, there is just one multiplication executed on each repetition of the algorithm's innermost loop, which is governed by the variable k ranging from the lower bound 0 to the upper bound n − 1. Therefore, the number of multiplications made for every pair of specific values of variables i and j is Σ_{k=0}^{n−1} 1, and the total number of multiplications M(n) is expressed by the following triple sum:

M(n) = Σ_{i=0}^{n−1} Σ_{j=0}^{n−1} Σ_{k=0}^{n−1} 1.

Now, we can compute this sum by using formula (S1) and rule (R1) given above. Starting with the innermost sum Σ_{k=0}^{n−1} 1, which is equal to n (why?), we get

M(n) = Σ_{i=0}^{n−1} Σ_{j=0}^{n−1} Σ_{k=0}^{n−1} 1 = Σ_{i=0}^{n−1} Σ_{j=0}^{n−1} n = Σ_{i=0}^{n−1} n² = n³.

This example is simple enough so that we could get this result without all the summation machinations. How? The algorithm computes n² elements of the product matrix. Each of the product's elements is computed as the scalar (dot) product of an n-element row of the first matrix and an n-element column of the second matrix, which takes n multiplications. So the total number of multiplications is n · n² = n³. (It is this kind of reasoning that we expected you to employ when answering this question in Problem 2 of Exercises 2.1.)

If we now want to estimate the running time of the algorithm on a particular machine, we can do it by the product

T(n) ≈ c_m M(n) = c_m n³,

where c_m is the time of one multiplication on the machine in question. We would get a more accurate estimate if we took into account the time spent on the additions, too:

T(n) ≈ c_m M(n) + c_a A(n) = c_m n³ + c_a n³ = (c_m + c_a)n³,

where c_a is the time of one addition. Note that the estimates differ only by their multiplicative constants and not by their order of growth. You should not have the erroneous impression that the plan outlined above always succeeds in analyzing a nonrecursive algorithm.
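The triple-loop count can likewise be verified by instrumenting a direct Python rendering of MatrixMultiplication; the multiplication counter is illustrative instrumentation, not part of the algorithm:

```python
def matrix_multiply(a, b):
    """Definition-based product of two n x n matrices.

    Returns (C, mults), where mults counts the multiplications
    performed (instrumentation added for illustration).
    """
    n = len(a)
    c = [[0.0] * n for _ in range(n)]
    mults = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                c[i][j] += a[i][k] * b[k][j]
                mults += 1          # the basic operation
    return c, mults

# For matrices of order n, mults == n**3, in agreement with M(n) = n^3.
```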
An irregular change in a loop variable, a sum too complicated to analyze, and the difficulties intrinsic to average-case analysis are just some of the obstacles that can prove to be insurmountable. These caveats notwithstanding, the plan does work for many simple nonrecursive algorithms, as you will see throughout the subsequent chapters of the book.

As a last example, let us consider an algorithm in which the loop variable changes in a different manner from that of the previous examples.

EXAMPLE 4 The following algorithm finds the number of binary digits in the binary representation of a positive decimal integer.

ALGORITHM Binary(n)
//Input: A positive decimal integer n
//Output: The number of binary digits in n's binary representation
count ← 1
while n > 1 do
    count ← count + 1
    n ← ⌊n/2⌋
return count

First, notice that the most frequently executed operation here is not inside the while loop but rather the comparison n > 1 that determines whether the loop's body will be executed. Since the number of times the comparison will be executed is larger than the number of repetitions of the loop's body by exactly 1, the choice is not that important.

A more significant feature of this example is the fact that the loop variable takes on only a few values between its lower and upper limits; therefore, we have to use an alternative way of computing the number of times the loop is executed. Since the value of n is about halved on each repetition of the loop, the answer should be about log₂ n. The exact formula for the number of times the comparison n > 1 will be executed is actually ⌊log₂ n⌋ + 1, the number of bits in the binary representation of n according to formula (2.1). We could also get this answer by applying the analysis technique based on recurrence relations; we discuss this technique in the next section because it is more pertinent to the analysis of recursive algorithms.

Exercises 2.3

1. Compute the following sums.
a.
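A Python rendering of ALGORITHM Binary makes it easy to check the bit-count formula empirically (the function name is an illustrative choice):

```python
def binary_digit_count(n):
    """Number of binary digits of a positive integer n,
    mirroring ALGORITHM Binary."""
    count = 1
    while n > 1:
        count += 1
        n //= 2             # n <- floor(n / 2)
    return count

# binary_digit_count(n) equals floor(log2(n)) + 1 for every positive n;
# e.g. n = 8 (binary 1000) gives 4.
```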
1 + 3 + 5 + 7 + ... + 999
b. 2 + 4 + 8 + 16 + ... + 1024
c. Σ_{i=3}^{n+1} 1
d. Σ_{i=3}^{n+1} i
e. Σ_{i=0}^{n−1} i(i + 1)
f. Σ_{j=1}^{n} 3^{j+1}
g. Σ_{i=1}^{n} Σ_{j=1}^{n} ij
h. Σ_{i=1}^{n} 1/[i(i + 1)]

2. Find the order of growth of the following sums. Use the Θ(g(n)) notation with the simplest function g(n) possible.
a. Σ_{i=0}^{n−1} (i² + 1)²
b. Σ_{i=2}^{n−1} lg i²
c. Σ_{i=1}^{n} (i + 1)2^{i−1}
d. Σ_{i=0}^{n−1} Σ_{j=0}^{i−1} (i + j)

3. The sample variance of n measurements x₁, ..., x_n can be computed as either

Σ_{i=1}^{n} (x_i − x̄)² / (n − 1), where x̄ = (Σ_{i=1}^{n} x_i)/n,

or

[Σ_{i=1}^{n} x_i² − (Σ_{i=1}^{n} x_i)²/n] / (n − 1).

Find and compare the number of divisions, multiplications, and additions/subtractions (additions and subtractions are usually bunched together) that are required for computing the variance according to each of these formulas.

4. Consider the following algorithm.

ALGORITHM Mystery(n)
//Input: A nonnegative integer n
S ← 0
for i ← 1 to n do
    S ← S + i ∗ i
return S

a. What does this algorithm compute?
b. What is its basic operation?
c. How many times is the basic operation executed?
d. What is the efficiency class of this algorithm?
e. Suggest an improvement, or a better algorithm altogether, and indicate its efficiency class. If you cannot do it, try to prove that, in fact, it cannot be done.

5. Consider the following algorithm.

ALGORITHM Secret(A[0..n − 1])
//Input: An array A[0..n − 1] of n real numbers
minval ← A[0]; maxval ← A[0]
for i ← 1 to n − 1 do
    if A[i] < minval
        minval ← A[i]
    if A[i] > maxval
        maxval ← A[i]
return maxval − minval

Answer questions (a)–(e) of Problem 4 about this algorithm.

6. Consider the following algorithm.

ALGORITHM Enigma(A[0..n − 1, 0..n − 1])
//Input: A matrix A[0..n − 1, 0..n − 1] of real numbers
for i ← 0 to n − 2 do
    for j ← i + 1 to n − 1 do
        if A[i, j] ≠ A[j, i] return false
return true

Answer questions (a)–(e) of Problem 4 about this algorithm.

7.
Improve the implementation of the matrix multiplication algorithm (see Example 3) by reducing the number of additions made by the algorithm. What effect will this change have on the algorithm's efficiency?

8. Determine the asymptotic order of growth for the total number of times all the doors are toggled in the locker doors puzzle (Problem 12 in Exercises 1.1).

9. Prove the formula

Σ_{i=1}^{n} i = 1 + 2 + ... + n = n(n + 1)/2

either by mathematical induction or by following the insight of a 10-year-old schoolboy named Carl Friedrich Gauss (1777–1855), who grew up to become one of the greatest mathematicians of all times.

10. Mental arithmetic A 10 × 10 table is filled with repeating numbers on its diagonals as shown below. Calculate the total sum of the table's numbers in your head (after [Cra07, Question 1.33]).

[Figure: a 10 × 10 table whose diagonals hold repeating values; the upper-left corner starts 1, 2, 3 and the lower-right corner ends 17, 18, 19.]

11. Consider the following version of an important algorithm that we will study later in the book.

ALGORITHM GE(A[0..n − 1, 0..n])
//Input: An n × (n + 1) matrix A[0..n − 1, 0..n] of real numbers
for i ← 0 to n − 2 do
    for j ← i + 1 to n − 1 do
        for k ← i to n do
            A[j, k] ← A[j, k] − A[i, k] ∗ A[j, i] / A[i, i]

a. Find the time efficiency class of this algorithm.
b. What glaring inefficiency does this pseudocode contain and how can it be eliminated to speed the algorithm up?

12. von Neumann's neighborhood Consider the algorithm that starts with a single square and on each of its n iterations adds new squares all around the outside. How many one-by-one squares are there after n iterations? [Gar99] (In the parlance of cellular automata theory, the answer is the number of cells in the von Neumann neighborhood of range n.)

[Figure: the von Neumann neighborhoods for n = 0, 1, and 2.]

13.
Page numbering Find the total number of decimal digits needed for numbering pages in a book of 1000 pages. Assume that the pages are numbered consecutively starting with 1.

2.4 Mathematical Analysis of Recursive Algorithms

In this section, we will see how to apply the general framework for analysis of algorithms to recursive algorithms. We start with an example often used to introduce novices to the idea of a recursive algorithm.

EXAMPLE 1 Compute the factorial function F(n) = n! for an arbitrary nonnegative integer n. Since

n! = 1 · ... · (n − 1) · n = (n − 1)! · n for n ≥ 1

and 0! = 1 by definition, we can compute F(n) = F(n − 1) · n with the following recursive algorithm.

ALGORITHM F(n)
//Computes n! recursively
//Input: A nonnegative integer n
//Output: The value of n!
if n = 0 return 1
else return F(n − 1) ∗ n

For simplicity, we consider n itself as an indicator of this algorithm's input size (rather than the number of bits in its binary expansion). The basic operation of the algorithm is multiplication, whose number of executions we denote M(n). (Alternatively, we could count the number of times the comparison n = 0 is executed, which is the same as counting the total number of calls made by the algorithm; see Problem 2 in this section's exercises.) Since the function F(n) is computed according to the formula

F(n) = F(n − 1) · n for n > 0,

the number of multiplications M(n) needed to compute it must satisfy the equality

M(n) = M(n − 1) + 1 for n > 0.

Indeed, M(n − 1) multiplications are spent to compute F(n − 1), and one more multiplication is needed to multiply the result by n.

The last equation defines the sequence M(n) that we need to find. This equation defines M(n) not explicitly, i.e., as a function of n, but implicitly as a function of its value at another point, namely n − 1. Such equations are called recurrence relations or, for brevity, recurrences.
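Here is the recursive factorial in Python, returning the multiplication count M(n) alongside n!; the paired return value is illustrative instrumentation, not part of the algorithm:

```python
def factorial_with_count(n):
    """Recursive n! mirroring ALGORITHM F(n).

    Returns (n!, M(n)), where M(n) is the number of multiplications
    performed (illustrative instrumentation).
    """
    if n == 0:
        return 1, 0                 # no multiplications when n = 0
    prev, m = factorial_with_count(n - 1)
    return prev * n, m + 1          # one multiplication per call with n > 0

# M(n) = n, the solution of M(n) = M(n-1) + 1 with M(0) = 0.
```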
Recurrence relations play an important role not only in analysis of algorithms but also in some areas of applied mathematics. They are usually studied in detail in courses on discrete mathematics or discrete structures; a very brief tutorial on them is provided in Appendix B. Our goal now is to solve the recurrence relation M(n) = M(n − 1) + 1, i.e., to find an explicit formula for M(n) in terms of n only.

Note, however, that there is not one but infinitely many sequences that satisfy this recurrence. (Can you give examples of, say, two of them?) To determine a solution uniquely, we need an initial condition that tells us the value with which the sequence starts. We can obtain this value by inspecting the condition that makes the algorithm stop its recursive calls:

if n = 0 return 1.

This tells us two things. First, since the calls stop when n = 0, the smallest value of n for which this algorithm is executed, and hence M(n) defined, is 0. Second, by inspecting the pseudocode's exiting line, we can see that when n = 0, the algorithm performs no multiplications. Therefore, the initial condition we are after is

M(0) = 0.

Thus, we succeeded in setting up the recurrence relation and initial condition for the algorithm's number of multiplications M(n):

M(n) = M(n − 1) + 1 for n > 0,   (2.2)
M(0) = 0.

Before we embark on a discussion of how to solve this recurrence, let us pause to reiterate an important point. We are dealing here with two recursively defined functions. The first is the factorial function F(n) itself; it is defined by the recurrence

F(n) = F(n − 1) · n for every n > 0,
F(0) = 1.

The second is the number of multiplications M(n) needed to compute F(n) by the recursive algorithm whose pseudocode was given at the beginning of the section. As we just showed, M(n) is defined by recurrence (2.2). And it is recurrence (2.2) that we need to solve now.
Though it is not difficult to "guess" the solution here (what sequence starts with 0 when n = 0 and increases by 1 on each step?), it will be more useful to arrive at it in a systematic fashion. From the several techniques available for solving recurrence relations, we use what can be called the method of backward substitutions. The method's idea (and the reason for the name) is immediately clear from the way it applies to solving our particular recurrence:

M(n) = M(n − 1) + 1                          substitute M(n − 1) = M(n − 2) + 1
     = [M(n − 2) + 1] + 1 = M(n − 2) + 2      substitute M(n − 2) = M(n − 3) + 1
     = [M(n − 3) + 1] + 2 = M(n − 3) + 3.

After inspecting the first three lines, we see an emerging pattern, which makes it possible to predict not only the next line (what would it be?) but also a general formula for the pattern: M(n) = M(n − i) + i. Strictly speaking, the correctness of this formula should be proved by mathematical induction, but it is easier to get to the solution as follows and then verify its correctness.

What remains to be done is to take advantage of the initial condition given. Since it is specified for n = 0, we have to substitute i = n in the pattern's formula to get the ultimate result of our backward substitutions:

M(n) = M(n − 1) + 1 = ... = M(n − i) + i = ... = M(n − n) + n = n.

You should not be disappointed after exerting so much effort to get this "obvious" answer. The benefits of the method illustrated in this simple example will become clear very soon, when we have to solve more difficult recurrences. Also, note that the simple iterative algorithm that accumulates the product of n consecutive integers requires the same number of multiplications, and it does so without the overhead of time and space used for maintaining the recursion's stack.

The issue of time efficiency is actually not that important for the problem of computing n!, however. As we saw in Section 2.1, the function's values get so large so fast that we can realistically compute exact values of n!
only for very small n's. Again, we use this example just as a simple and convenient vehicle to introduce the standard approach to analyzing recursive algorithms.

Generalizing our experience with investigating the recursive algorithm for computing n!, we can now outline a general plan for investigating recursive algorithms.

General Plan for Analyzing the Time Efficiency of Recursive Algorithms

1. Decide on a parameter (or parameters) indicating an input's size.
2. Identify the algorithm's basic operation.
3. Check whether the number of times the basic operation is executed can vary on different inputs of the same size; if it can, the worst-case, average-case, and best-case efficiencies must be investigated separately.
4. Set up a recurrence relation, with an appropriate initial condition, for the number of times the basic operation is executed.
5. Solve the recurrence or, at least, ascertain the order of growth of its solution.

EXAMPLE 2 As our next example, we consider another educational workhorse of recursive algorithms: the Tower of Hanoi puzzle. In this puzzle, we (or mythical monks, if you do not like to move disks) have n disks of different sizes that can slide onto any of three pegs. Initially, all the disks are on the first peg in order of size, the largest on the bottom and the smallest on top. The goal is to move all the disks to the third peg, using the second one as an auxiliary, if necessary. We can move only one disk at a time, and it is forbidden to place a larger disk on top of a smaller one.

The problem has an elegant recursive solution, which is illustrated in Figure 2.4. To move n > 1 disks from peg 1 to peg 3 (with peg 2 as auxiliary), we first move recursively n − 1 disks from peg 1 to peg 2 (with peg 3 as auxiliary), then move the largest disk directly from peg 1 to peg 3, and, finally, move recursively n − 1 disks from peg 2 to peg 3 (using peg 1 as auxiliary).
Of course, if n = 1, we simply move the single disk directly from the source peg to the destination peg.

[FIGURE 2.4 Recursive solution to the Tower of Hanoi puzzle.]

Let us apply the general plan outlined above to the Tower of Hanoi problem. The number of disks n is the obvious choice for the input's size indicator, and so is moving one disk as the algorithm's basic operation. Clearly, the number of moves M(n) depends on n only, and we get the following recurrence equation for it:

M(n) = M(n − 1) + 1 + M(n − 1) for n > 1.

With the obvious initial condition M(1) = 1, we have the following recurrence relation for the number of moves M(n):

M(n) = 2M(n − 1) + 1 for n > 1,   (2.3)
M(1) = 1.

We solve this recurrence by the same method of backward substitutions:

M(n) = 2M(n − 1) + 1                                  substitute M(n − 1) = 2M(n − 2) + 1
     = 2[2M(n − 2) + 1] + 1 = 2²M(n − 2) + 2 + 1       substitute M(n − 2) = 2M(n − 3) + 1
     = 2²[2M(n − 3) + 1] + 2 + 1 = 2³M(n − 3) + 2² + 2 + 1.

The pattern of the first three sums on the left suggests that the next one will be 2⁴M(n − 4) + 2³ + 2² + 2 + 1, and generally, after i substitutions, we get

M(n) = 2^i M(n − i) + 2^{i−1} + 2^{i−2} + ... + 2 + 1 = 2^i M(n − i) + 2^i − 1.

Since the initial condition is specified for n = 1, which is achieved for i = n − 1, we get the following formula for the solution to recurrence (2.3):

M(n) = 2^{n−1} M(n − (n − 1)) + 2^{n−1} − 1 = 2^{n−1} M(1) + 2^{n−1} − 1 = 2^{n−1} + 2^{n−1} − 1 = 2^n − 1.

Thus, we have an exponential algorithm, which will run for an unimaginably long time even for moderate values of n (see Problem 5 in this section's exercises). This is not due to the fact that this particular algorithm is poor; in fact, it is not difficult to prove that this is the most efficient algorithm possible for this problem. It is the problem's intrinsic difficulty that makes it so computationally hard.
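The move count M(n) = 2^n − 1 can be confirmed by generating the moves with a Python sketch of the recursive solution; the peg labels and the move-list representation are assumptions of this sketch:

```python
def hanoi_moves(n, source=1, target=3, aux=2, moves=None):
    """Generate the list of (from_peg, to_peg) moves for the Tower of
    Hanoi, following the recursive solution described above."""
    if moves is None:
        moves = []
    if n == 1:
        moves.append((source, target))     # move the single disk directly
    else:
        hanoi_moves(n - 1, source, aux, target, moves)   # n-1 disks to aux
        moves.append((source, target))                   # largest disk
        hanoi_moves(n - 1, aux, target, source, moves)   # n-1 disks to target
    return moves

# len(hanoi_moves(n)) == 2**n - 1, matching M(n) = 2^n - 1.
```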
Still, this example makes an important general point: one should be careful with recursive algorithms because their succinctness may mask their inefficiency.

When a recursive algorithm makes more than a single call to itself, it can be useful for analysis purposes to construct a tree of its recursive calls. In this tree, nodes correspond to recursive calls, and we can label them with the value of the parameter (or, more generally, parameters) of the calls. For the Tower of Hanoi example, the tree is given in Figure 2.5. By counting the number of nodes in the tree, we can get the total number of calls made by the Tower of Hanoi algorithm:

C(n) = Σ_{l=0}^{n−1} 2^l (where l is the level in the tree in Figure 2.5) = 2^n − 1.

[FIGURE 2.5 Tree of recursive calls made by the recursive algorithm for the Tower of Hanoi puzzle: the root is labeled n, its two children n − 1, their children n − 2, and so on down to the leaves labeled 1.]

The number agrees, as it should, with the move count obtained earlier.

EXAMPLE 3 As our next example, we investigate a recursive version of the algorithm discussed at the end of Section 2.3.

ALGORITHM BinRec(n)
//Input: A positive decimal integer n
//Output: The number of binary digits in n's binary representation
if n = 1 return 1
else return BinRec(⌊n/2⌋) + 1

Let us set up a recurrence and an initial condition for the number of additions A(n) made by the algorithm. The number of additions made in computing BinRec(⌊n/2⌋) is A(⌊n/2⌋), plus one more addition is made by the algorithm to increase the returned value by 1. This leads to the recurrence

A(n) = A(⌊n/2⌋) + 1 for n > 1.   (2.4)

Since the recursive calls end when n is equal to 1 and there are no additions made then, the initial condition is

A(1) = 0.

The presence of ⌊n/2⌋ in the function's argument makes the method of backward substitutions stumble on values of n that are not powers of 2.
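A Python version of BinRec that also reports the addition count A(n) (the paired return value is illustrative instrumentation) behaves exactly as recurrence (2.4) predicts:

```python
def bin_rec(n):
    """Recursive count of binary digits of n, mirroring ALGORITHM BinRec.

    Returns (digits, additions), where additions is A(n)
    (illustrative instrumentation).
    """
    if n == 1:
        return 1, 0                     # A(1) = 0
    digits, adds = bin_rec(n // 2)      # BinRec(floor(n/2))
    return digits + 1, adds + 1         # one addition per recursive step

# A(n) == floor(log2(n)); e.g. bin_rec(8) == (4, 3).
```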
Therefore, the standard approach to solving such a recurrence is to solve it only for n = 2^k and then take advantage of the theorem called the smoothness rule (see Appendix B), which claims that under very broad assumptions the order of growth observed for n = 2^k gives a correct answer about the order of growth for all values of n. (Alternatively, after getting a solution for powers of 2, we can sometimes fine-tune this solution to get a formula valid for an arbitrary n.) So let us apply this recipe to our recurrence, which for n = 2^k takes the form

A(2^k) = A(2^{k−1}) + 1 for k > 0,
A(2⁰) = 0.

Now backward substitutions encounter no problems:

A(2^k) = A(2^{k−1}) + 1                              substitute A(2^{k−1}) = A(2^{k−2}) + 1
       = [A(2^{k−2}) + 1] + 1 = A(2^{k−2}) + 2        substitute A(2^{k−2}) = A(2^{k−3}) + 1
       = [A(2^{k−3}) + 1] + 2 = A(2^{k−3}) + 3
       ...
       = A(2^{k−i}) + i
       ...
       = A(2^{k−k}) + k.

Thus, we end up with

A(2^k) = A(1) + k = k,

or, after returning to the original variable n = 2^k and hence k = log₂ n,

A(n) = log₂ n ∈ Θ(log n).

In fact, one can prove (Problem 7 in this section's exercises) that the exact solution for an arbitrary value of n is given by just a slightly more refined formula, A(n) = ⌊log₂ n⌋.

This section provides an introduction to the analysis of recursive algorithms. These techniques will be used throughout the book and expanded further as necessary. In the next section, we discuss the Fibonacci numbers; their analysis involves more difficult recurrence relations to be solved by a method different from backward substitutions.

Exercises 2.4

1. Solve the following recurrence relations.
a. x(n) = x(n − 1) + 5 for n > 1, x(1) = 0
b. x(n) = 3x(n − 1) for n > 1, x(1) = 4
c. x(n) = x(n − 1) + n for n > 0, x(0) = 0
d. x(n) = x(n/2) + n for n > 1, x(1) = 1 (solve for n = 2^k)
e. x(n) = x(n/3) + 1 for n > 1, x(1) = 1 (solve for n = 3^k)

2. Set up and solve a recurrence relation for the number of calls made by F(n), the recursive algorithm for computing n!.

3.
Consider the following recursive algorithm for computing the sum of the first n cubes: S(n) = 1³ + 2³ + ... + n³.

ALGORITHM S(n)
//Input: A positive integer n
//Output: The sum of the first n cubes
if n = 1 return 1
else return S(n − 1) + n ∗ n ∗ n

a. Set up and solve a recurrence relation for the number of times the algorithm's basic operation is executed.
b. How does this algorithm compare with the straightforward nonrecursive algorithm for computing this sum?

4. Consider the following recursive algorithm.

ALGORITHM Q(n)
//Input: A positive integer n
if n = 1 return 1
else return Q(n − 1) + 2 ∗ n − 1

a. Set up a recurrence relation for this function's values and solve it to determine what this algorithm computes.
b. Set up a recurrence relation for the number of multiplications made by this algorithm and solve it.
c. Set up a recurrence relation for the number of additions/subtractions made by this algorithm and solve it.

5. Tower of Hanoi
a. In the original version of the Tower of Hanoi puzzle, as it was published in the 1890s by Édouard Lucas, a French mathematician, the world will end after 64 disks have been moved from a mystical Tower of Brahma. Estimate the number of years it will take if monks could move one disk per minute. (Assume that monks do not eat, sleep, or die.)
b. How many moves are made by the ith largest disk (1 ≤ i ≤ n) in this algorithm?
c. Find a nonrecursive algorithm for the Tower of Hanoi puzzle and implement it in the language of your choice.

6. Restricted Tower of Hanoi Consider the version of the Tower of Hanoi puzzle in which n disks have to be moved from peg A to peg C using peg B so that any move should either place a disk on peg B or move a disk from that peg. (Of course, the prohibition of placing a larger disk on top of a smaller one remains in place, too.) Design a recursive algorithm for this problem and find the number of moves made by it.
7. a. Prove that the exact number of additions made by the recursive algorithm BinRec(n) for an arbitrary positive decimal integer n is ⌊log₂ n⌋.
b. Set up a recurrence relation for the number of additions made by the nonrecursive version of this algorithm (see Section 2.3, Example 4) and solve it.

8. a. Design a recursive algorithm for computing 2^n for any nonnegative integer n that is based on the formula 2^n = 2^{n−1} + 2^{n−1}.
b. Set up a recurrence relation for the number of additions made by the algorithm and solve it.
c. Draw a tree of recursive calls for this algorithm and count the number of calls made by the algorithm.
d. Is it a good algorithm for solving this problem?

9. Consider the following recursive algorithm.

ALGORITHM Riddle(A[0..n − 1])
//Input: An array A[0..n − 1] of real numbers
if n = 1 return A[0]
else temp ← Riddle(A[0..n − 2])
    if temp ≤ A[n − 1] return temp
    else return A[n − 1]

a. What does this algorithm compute?
b. Set up a recurrence relation for the algorithm's basic operation count and solve it.

10. Consider the following algorithm to check whether a graph defined by its adjacency matrix is complete.

ALGORITHM GraphComplete(A[0..n − 1, 0..n − 1])
//Input: Adjacency matrix A[0..n − 1, 0..n − 1] of an undirected graph G
//Output: 1 (true) if G is complete and 0 (false) otherwise
if n = 1 return 1 //one-vertex graph is complete by definition
else
    if not GraphComplete(A[0..n − 2, 0..n − 2]) return 0
    else for j ← 0 to n − 2 do
        if A[n − 1, j] = 0 return 0
    return 1

What is the algorithm's efficiency class in the worst case?

11. The determinant of an n × n matrix

A = ⎡ a_{00}     ...  a_{0,n−1}
      a_{10}     ...  a_{1,n−1}
      ...
      a_{n−1,0}  ...
a_{n−1,n−1} ⎤,

denoted det A, can be defined as a_{00} for n = 1 and, for n > 1, by the recursive formula

det A = Σ_{j=0}^{n−1} s_j a_{0j} det A_j,

where s_j is +1 if j is even and −1 if j is odd, a_{0j} is the element in row 0 and column j, and A_j is the (n − 1) × (n − 1) matrix obtained from matrix A by deleting its row 0 and column j.

a. Set up a recurrence relation for the number of multiplications made by the algorithm implementing this recursive definition.
b. Without solving the recurrence, what can you say about the solution's order of growth as compared to n!?

12. von Neumann's neighborhood revisited Find the number of cells in the von Neumann neighborhood of range n (Problem 12 in Exercises 2.3) by setting up and solving a recurrence relation.

13. Frying hamburgers There are n hamburgers to be fried on a small grill that can hold only two hamburgers at a time. Each hamburger has to be fried on both sides; frying one side of a hamburger takes 1 minute, regardless of whether one or two hamburgers are fried at the same time. Consider the following recursive algorithm for executing this task in the minimum amount of time. If n ≤ 2, fry the hamburger or the two hamburgers together on each side. If n > 2, fry any two hamburgers together on each side and then apply the same procedure recursively to the remaining n − 2 hamburgers.
a. Set up and solve the recurrence for the amount of time this algorithm needs to fry n hamburgers.
b. Explain why this algorithm does not fry the hamburgers in the minimum amount of time for all n > 0.
c. Give a correct recursive algorithm that executes the task in the minimum amount of time.

14. Celebrity problem A celebrity among a group of n people is a person who knows nobody but is known by everybody else. The task is to identify a celebrity by only asking questions to people of the form "Do you know him/her?" Design an efficient algorithm to identify a celebrity or determine that the group has no such person.
How many questions does your algorithm need in the worst case?

2.5 Example: Computing the nth Fibonacci Number

In this section, we consider the Fibonacci numbers, a famous sequence

0, 1, 1, 2, 3, 5, 8, 13, 21, 34, ...   (2.5)

that can be defined by the simple recurrence

F(n) = F(n − 1) + F(n − 2) for n > 1   (2.6)

and two initial conditions

F(0) = 0, F(1) = 1.   (2.7)

The Fibonacci numbers were introduced by Leonardo Fibonacci in 1202 as a solution to a problem about the size of a rabbit population (Problem 2 in this section's exercises). Many more examples of Fibonacci-like numbers have since been discovered in the natural world, and they have even been used in predicting the prices of stocks and commodities. There are some interesting applications of the Fibonacci numbers in computer science as well. For example, worst-case inputs for Euclid's algorithm discussed in Section 1.1 happen to be consecutive elements of the Fibonacci sequence. In this section, we briefly consider algorithms for computing the nth element of this sequence. Among other benefits, the discussion will provide us with an opportunity to introduce another method for solving recurrence relations useful for analysis of recursive algorithms.

To start, let us get an explicit formula for F(n). If we try to apply the method of backward substitutions to solve recurrence (2.6), we will fail to get an easily discernible pattern. Instead, we can take advantage of a theorem that describes solutions to a homogeneous second-order linear recurrence with constant coefficients

a·x(n) + b·x(n − 1) + c·x(n − 2) = 0,   (2.8)

where a, b, and c are some fixed real numbers (a ≠ 0) called the coefficients of the recurrence and x(n) is the generic term of an unknown sequence to be found.
Applying this theorem to our recurrence with the initial conditions given (see Appendix B), we obtain the formula

F(n) = (1/√5)(φ^n − φ̂^n),   (2.9)

where φ = (1 + √5)/2 ≈ 1.61803 and φ̂ = −1/φ ≈ −0.61803. (The constant φ is known as the golden ratio. Since antiquity, it has been considered the most pleasing ratio of a rectangle's two sides to the human eye and might have been consciously used by ancient architects and sculptors.) It is hard to believe that formula (2.9), which includes arbitrary integer powers of irrational numbers, yields nothing else but all the elements of Fibonacci sequence (2.5), but it does!

One of the benefits of formula (2.9) is that it immediately implies that F(n) grows exponentially (remember Fibonacci's rabbits?), i.e., F(n) ∈ Θ(φ^n). This follows from the observation that φ̂ is a fraction between −1 and 0, and hence φ̂^n gets infinitely small as n goes to infinity. In fact, one can prove that the impact of the second term (1/√5)φ̂^n on the value of F(n) can be obtained by rounding off the value of the first term to the nearest integer. In other words, for every nonnegative integer n,

F(n) = (1/√5)φ^n rounded to the nearest integer.   (2.10)

In the algorithms that follow, we consider, for the sake of simplicity, such operations as additions and multiplications at unit cost. Since the Fibonacci numbers grow infinitely large (and grow very rapidly), a more detailed analysis than the one offered here is warranted. In fact, it is the size of the numbers rather than a time-efficient method for computing them that should be of primary concern here. Still, these caveats notwithstanding, the algorithms we outline and their analysis provide useful examples for a student of the design and analysis of algorithms.

To begin with, we can use recurrence (2.6) and initial conditions (2.7) for the obvious recursive algorithm for computing F(n).
ALGORITHM F(n)
//Computes the nth Fibonacci number recursively by using its definition
//Input: A nonnegative integer n
//Output: The nth Fibonacci number
if n ≤ 1 return n
else return F(n − 1) + F(n − 2)

Before embarking on its formal analysis, can you tell whether this is an efficient algorithm? Well, we need to do a formal analysis anyway. The algorithm's basic operation is clearly addition, so let A(n) be the number of additions performed by the algorithm in computing F(n). Then the numbers of additions needed for computing F(n − 1) and F(n − 2) are A(n − 1) and A(n − 2), respectively, and the algorithm needs one more addition to compute their sum. Thus, we get the following recurrence for A(n):

A(n) = A(n − 1) + A(n − 2) + 1 for n > 1,     (2.11)
A(0) = 0, A(1) = 0.

The recurrence A(n) − A(n − 1) − A(n − 2) = 1 is quite similar to recurrence F(n) − F(n − 1) − F(n − 2) = 0, but its right-hand side is not equal to zero. Such recurrences are called inhomogeneous. There are general techniques for solving inhomogeneous recurrences (see Appendix B or any textbook on discrete mathematics), but for this particular recurrence, a special trick leads to a faster solution. We can reduce our inhomogeneous recurrence to a homogeneous one by rewriting it as

[A(n) + 1] − [A(n − 1) + 1] − [A(n − 2) + 1] = 0

and substituting B(n) = A(n) + 1:

B(n) − B(n − 1) − B(n − 2) = 0,
B(0) = 1, B(1) = 1.

This homogeneous recurrence can be solved exactly in the same manner as recurrence (2.6) was solved to find an explicit formula for F(n). But it can actually be avoided by noting that B(n) is, in fact, the same recurrence as F(n) except that it starts with two 1's and thus runs one step ahead of F(n). So B(n) = F(n + 1), and

A(n) = B(n) − 1 = F(n + 1) − 1 = (1/√5)(φ^(n+1) − φ̂^(n+1)) − 1.
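The identity just derived, A(n) = F(n + 1) − 1, is easy to confirm numerically. Here is a sketch in Python (the function names are mine) that transcribes the definition-based algorithm and counts its additions by following recurrence (2.11) directly:

```python
def F(n):
    """Definition-based recursive computation of the nth Fibonacci number."""
    if n <= 1:
        return n
    return F(n - 1) + F(n - 2)

def A(n):
    """Number of additions performed by F(n), per recurrence (2.11):
    A(n) = A(n-1) + A(n-2) + 1, with A(0) = A(1) = 0."""
    if n <= 1:
        return 0
    return A(n - 1) + A(n - 2) + 1
```

For every n tried, A(n) comes out equal to F(n + 1) − 1, as the derivation predicts.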
Hence, A(n) ∈ Θ(φ^n), and if we measure the size of n by the number of bits b = ⌊log₂ n⌋ + 1 in its binary representation, the efficiency class will be even worse, namely, doubly exponential: A(b) ∈ Θ(φ^(2^b)).

The poor efficiency class of the algorithm could be anticipated by the nature of recurrence (2.11). Indeed, it contains two recursive calls with the sizes of smaller instances only slightly smaller than size n. (Have you encountered such a situation before?) We can also see the reason behind the algorithm's inefficiency by looking at a recursive tree of calls tracing the algorithm's execution. An example of such a tree for n = 5 is given in Figure 2.6. Note that the same values of the function are being evaluated here again and again, which is clearly extremely inefficient.

FIGURE 2.6 Tree of recursive calls for computing the 5th Fibonacci number by the definition-based algorithm.

We can obtain a much faster algorithm by simply computing the successive elements of the Fibonacci sequence iteratively, as is done in the following algorithm.

ALGORITHM Fib(n)
//Computes the nth Fibonacci number iteratively by using its definition
//Input: A nonnegative integer n
//Output: The nth Fibonacci number
F[0] ← 0; F[1] ← 1
for i ← 2 to n do
    F[i] ← F[i − 1] + F[i − 2]
return F[n]

This algorithm clearly makes n − 1 additions. Hence, it is linear as a function of n and "only" exponential as a function of the number of bits b in n's binary representation. Note that using an extra array for storing all the preceding elements of the Fibonacci sequence can be avoided: storing just two values is necessary to accomplish the task (see Problem 8 in this section's exercises). The third alternative for computing the nth Fibonacci number lies in using formula (2.10).
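A sketch of this third alternative in Python (the function name is mine; the rounding is exact only while the floating-point error stays below 0.5, which with 64-bit doubles holds up to roughly n = 70):

```python
from math import sqrt

def fib_by_formula(n):
    """Compute F(n) by formula (2.10): round phi^n / sqrt(5) to the
    nearest integer, where phi = (1 + sqrt(5))/2 is the golden ratio."""
    phi = (1 + sqrt(5)) / 2
    return round(phi ** n / sqrt(5))
```
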
The efficiency of the algorithm will obviously be determined by the efficiency of an exponentiation algorithm used for computing φ^n. If it is done by simply multiplying φ by itself n − 1 times, the algorithm will be in Θ(n) = Θ(2^b). There are faster algorithms for the exponentiation problem. For example, we will discuss Θ(log n) = Θ(b) algorithms for this problem in Chapters 4 and 6. Note also that special care should be exercised in implementing this approach to computing the nth Fibonacci number. Since all its intermediate results are irrational numbers, we would have to make sure that their approximations in the computer are accurate enough so that the final round-off yields a correct result.

Finally, there exists a Θ(log n) algorithm for computing the nth Fibonacci number that manipulates only integers. It is based on the equality

( F(n − 1)   F(n)     )     ( 0  1 )^n
( F(n)       F(n + 1) )  =  ( 1  1 )        for n ≥ 1

and an efficient way of computing matrix powers.

Exercises 2.5

1. Find a Web site dedicated to applications of the Fibonacci numbers and study it.

2. Fibonacci's rabbits problem  A man put a pair of rabbits in a place surrounded by a wall. How many pairs of rabbits will be there in a year if the initial pair of rabbits (male and female) are newborn and all rabbit pairs are not fertile during their first month of life but thereafter give birth to one new male/female pair at the end of every month?

3. Climbing stairs  Find the number of different ways to climb an n-stair staircase if each step is either one or two stairs. For example, a 3-stair staircase can be climbed three ways: 1-1-1, 1-2, and 2-1.

4. How many even numbers are there among the first n Fibonacci numbers, i.e., among the numbers F(0), F(1), . . . , F(n − 1)? Give a closed-form formula valid for every n > 0.

5. Check by direct substitutions that the function (1/√5)(φ^n − φ̂^n) indeed satisfies recurrence (2.6) and initial conditions (2.7).

6.
The maximum values of the Java primitive types int and long are 2^31 − 1 and 2^63 − 1, respectively. Find the smallest n for which the nth Fibonacci number is not going to fit in a memory allocated for
a. the type int.
b. the type long.

7. Consider the recursive definition-based algorithm for computing the nth Fibonacci number F(n). Let C(n) and Z(n) be the number of times F(1) and F(0) are computed, respectively. Prove that
a. C(n) = F(n).
b. Z(n) = F(n − 1).

8. Improve algorithm Fib of the text so that it requires only Θ(1) space.

9. Prove the equality

( F(n − 1)   F(n)     )     ( 0  1 )^n
( F(n)       F(n + 1) )  =  ( 1  1 )        for n ≥ 1.

10. How many modulo divisions are made by Euclid's algorithm on two consecutive Fibonacci numbers F(n) and F(n − 1) as the algorithm's input?

11. Dissecting a Fibonacci rectangle  Given a rectangle whose sides are two consecutive Fibonacci numbers, design an algorithm to dissect it into squares with no more than two squares being the same size. What is the time efficiency class of your algorithm?

12. In the language of your choice, implement two algorithms for computing the last five digits of the nth Fibonacci number that are based on (a) the recursive definition-based algorithm F(n); (b) the iterative definition-based algorithm Fib(n). Perform an experiment to find the largest value of n for which your programs run under 1 minute on your computer.

2.6 Empirical Analysis of Algorithms

In Sections 2.3 and 2.4, we saw how algorithms, both nonrecursive and recursive, can be analyzed mathematically. Though these techniques can be applied successfully to many simple algorithms, the power of mathematics, even when enhanced with more advanced techniques (see [Sed96], [Pur04], [Gra94], and [Gre07]), is far from limitless. In fact, even some seemingly simple algorithms have proved to be very difficult to analyze with mathematical precision and certainty.
As we pointed out in Section 2.1, this is especially true for the average-case analysis.

The principal alternative to the mathematical analysis of an algorithm's efficiency is its empirical analysis. This approach implies steps spelled out in the following plan.

General Plan for the Empirical Analysis of Algorithm Time Efficiency

1. Understand the experiment's purpose.
2. Decide on the efficiency metric M to be measured and the measurement unit (an operation count vs. a time unit).
3. Decide on characteristics of the input sample (its range, size, and so on).
4. Prepare a program implementing the algorithm (or algorithms) for the experimentation.
5. Generate a sample of inputs.
6. Run the algorithm (or algorithms) on the sample's inputs and record the data observed.
7. Analyze the data obtained.

Let us discuss these steps one at a time. There are several different goals one can pursue in analyzing algorithms empirically. They include checking the accuracy of a theoretical assertion about the algorithm's efficiency, comparing the efficiency of several algorithms for solving the same problem or different implementations of the same algorithm, developing a hypothesis about the algorithm's efficiency class, and ascertaining the efficiency of the program implementing the algorithm on a particular machine. Obviously, an experiment's design should depend on the question the experimenter seeks to answer.

In particular, the goal of the experiment should influence, if not dictate, how the algorithm's efficiency is to be measured. The first alternative is to insert a counter (or counters) into a program implementing the algorithm to count the number of times the algorithm's basic operation is executed. This is usually a straightforward operation; you should only be mindful of the possibility that the basic operation is executed in several places in the program and that all its executions need to be accounted for.
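As a concrete illustration of the counter approach, here is a sketch (the function name is mine) that instruments Euclid's algorithm from Section 1.1 to count its basic operation, the modulo division:

```python
def gcd_with_counter(m, n):
    """Euclid's algorithm with a counter inserted for its basic operation,
    the modulo division. Returns the pair (gcd, division count)."""
    count = 0
    while n != 0:
        m, n = n, m % n
        count += 1
    return m, count
```

Feeding it consecutive Fibonacci numbers, the worst-case inputs mentioned in Section 2.5, drives the count to its maximum for operands of that size.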
As straightforward as this task usually is, you should always test the modified program to ensure that it works correctly, in terms of both the problem it solves and the counts it yields.

The second alternative is to time the program implementing the algorithm in question. The easiest way to do this is to use a system's command, such as the time command in UNIX. Alternatively, one can measure the running time of a code fragment by asking for the system time right before the fragment's start (t_start) and just after its completion (t_finish), and then computing the difference between the two (t_finish − t_start).⁷ In C and C++, you can use the function clock for this purpose; in Java, the method currentTimeMillis() in the System class is available.

It is important to keep several facts in mind, however. First, a system's time is typically not very accurate, and you might get somewhat different results on repeated runs of the same program on the same inputs. An obvious remedy is to make several such measurements and then take their average (or the median) as the sample's observation point. Second, given the high speed of modern computers, the running time may fail to register at all and be reported as zero. The standard trick to overcome this obstacle is to run the program in an extra loop many times, measure the total running time, and then divide it by the number of the loop's repetitions. Third, on a computer running under a time-sharing system such as UNIX, the reported time may include the time spent by the CPU on other programs, which obviously defeats the purpose of the experiment. Therefore, you should take care to ask the system for the time devoted specifically to execution of your program.

7. If the system time is given in units called "ticks," the difference should be divided by a constant indicating the number of ticks per time unit.
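The extra-loop trick and the averaging advice can be sketched as follows in Python; time.perf_counter is a high-resolution clock, and the helper name is mine:

```python
import time

def average_time(fragment, repetitions=1000):
    """Time a fast code fragment by running it in an extra loop and
    dividing the total elapsed time by the number of repetitions."""
    t_start = time.perf_counter()
    for _ in range(repetitions):
        fragment()
    t_finish = time.perf_counter()
    return (t_finish - t_start) / repetitions
```

Note that perf_counter measures elapsed wall-clock time, not the time devoted specifically to your program.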
(In UNIX, this time is called the "user time," and it is automatically provided by the time command.) Thus, measuring the physical running time has several disadvantages, both principal (dependence on a particular machine being the most important of them) and technical, not shared by counting the executions of a basic operation. On the other hand, the physical running time provides very specific information about an algorithm's performance in a particular computing environment, which can be of more importance to the experimenter than, say, the algorithm's asymptotic efficiency class. In addition, measuring time spent on different segments of a program can pinpoint a bottleneck in the program's performance that can be missed by an abstract deliberation about the algorithm's basic operation. Getting such data—called profiling—is an important resource in the empirical analysis of an algorithm's running time; the data in question can usually be obtained from the system tools available in most computing environments.

Whether you decide to measure the efficiency by basic operation counting or by time clocking, you will need to decide on a sample of inputs for the experiment. Often, the goal is to use a sample representing a "typical" input; so the challenge is to understand what a "typical" input is. For some classes of algorithms—e.g., for algorithms for the traveling salesman problem that we are going to discuss later in the book—researchers have developed a set of instances they use for benchmarking. But much more often than not, an input sample has to be developed by the experimenter. Typically, you will have to make decisions about the sample size (it is sensible to start with a relatively small sample and increase it later if necessary), the range of instance sizes (typically neither trivially small nor excessively large), and a procedure for generating instances in the range chosen.
The instance sizes can either adhere to some pattern (e.g., 1000, 2000, 3000, . . . , 10,000 or 500, 1000, 2000, 4000, . . . , 128,000) or be generated randomly within the range chosen. The principal advantage of size changing according to a pattern is that its impact is easier to analyze. For example, if a sample's sizes are generated by doubling, you can compute the ratios M(2n)/M(n) of the observed metric M (the count or the time) to see whether the ratios exhibit a behavior typical of algorithms in one of the basic efficiency classes discussed in Section 2.2. The major disadvantage of nonrandom sizes is the possibility that the algorithm under investigation exhibits atypical behavior on the sample chosen. For example, if all the sizes in a sample are even and your algorithm runs much more slowly on odd-size inputs, the empirical results will be quite misleading.

Another important issue concerning sizes in an experiment's sample is whether several instances of the same size should be included. If you expect the observed metric to vary considerably on instances of the same size, it would be probably wise to include several instances for every size in the sample. (There are well-developed methods in statistics to help the experimenter make such decisions; you will find no shortage of books on this subject.) Of course, if several instances of the same size are included in the sample, the averages or medians of the observed values for each size should be computed and investigated instead of or in addition to individual sample points.

Much more often than not, an empirical analysis requires generating random numbers. Even if you decide to use a pattern for input sizes, you will typically want instances themselves generated randomly. Generating random numbers on a digital computer is known to present a difficult problem because, in principle, the problem can be solved only approximately.
This is the reason computer scientists prefer to call such numbers pseudorandom. As a practical matter, the easiest and most natural way of getting such numbers is to take advantage of a random number generator available in computer language libraries. Typically, its output will be a value of a (pseudo)random variable uniformly distributed in the interval between 0 and 1. If a different (pseudo)random variable is desired, an appropriate transformation needs to be made. For example, if x is a continuous random variable uniformly distributed on the interval 0 ≤ x < 1, the variable y = ⌊l + x(r − l)⌋ will be uniformly distributed among the integer values between integers l and r − 1 (l < r).

Alternatively, you can implement one of several known algorithms for generating (pseudo)random numbers. The most widely used and thoroughly studied of such algorithms is the linear congruential method.

ALGORITHM Random(n, m, seed, a, b)
//Generates a sequence of n pseudorandom numbers according to the linear
//congruential method
//Input: A positive integer n and positive integer parameters m, seed, a, b
//Output: A sequence r1, . . . , rn of n pseudorandom integers uniformly
//distributed among integer values between 0 and m − 1
//Note: Pseudorandom numbers between 0 and 1 can be obtained
//by treating the integers generated as digits after the decimal point
r0 ← seed
for i ← 1 to n do
    ri ← (a ∗ ri−1 + b) mod m

The simplicity of this pseudocode is misleading because the devil lies in the details of choosing the algorithm's parameters. Here is a partial list of recommendations based on the results of a sophisticated mathematical analysis (see [KnuII, pp.
184–185] for details): seed may be chosen arbitrarily and is often set to the current date and time; m should be large and may be conveniently taken as 2^w, where w is the computer's word size; a should be selected as an integer between 0.01m and 0.99m with no particular pattern in its digits but such that a mod 8 = 5; and the value of b can be chosen as 1.

The empirical data obtained as the result of an experiment need to be recorded and then presented for an analysis. Data can be presented numerically in a table or graphically in a scatterplot, i.e., by points in a Cartesian coordinate system. It is a good idea to use both these options whenever it is feasible because both methods have their unique strengths and weaknesses.

The principal advantage of tabulated data lies in the opportunity to manipulate it easily. For example, one can compute the ratios M(n)/g(n) where g(n) is a candidate to represent the efficiency class of the algorithm in question. If the algorithm is indeed in Θ(g(n)), most likely these ratios will converge to some positive constant as n gets large. (Note that careless novices sometimes assume that this constant must be 1, which is, of course, incorrect according to the definition of Θ(g(n)).) Or one can compute the ratios M(2n)/M(n) and see how the running time reacts to doubling of its input size. As we discussed in Section 2.2, such ratios should change only slightly for logarithmic algorithms and most likely converge to 2, 4, and 8 for linear, quadratic, and cubic algorithms, respectively—to name the most obvious and convenient cases.

On the other hand, the form of a scatterplot may also help in ascertaining the algorithm's probable efficiency class. For a logarithmic algorithm, the scatterplot will have a concave shape (Figure 2.7a); this fact distinguishes it from all the other basic efficiency classes.
For a linear algorithm, the points will tend to aggregate around a straight line or, more generally, to be contained between two straight lines (Figure 2.7b). Scatterplots of functions in Θ(n lg n) and Θ(n^2) will have a convex shape (Figure 2.7c), making them difficult to differentiate. A scatterplot of a cubic algorithm will also have a convex shape, but it will show a much more rapid increase in the metric's values. An exponential algorithm will most probably require a logarithmic scale for the vertical axis, in which the values of log_a M(n) rather than those of M(n) are plotted. (The commonly used logarithm base is 2 or 10.) In such a coordinate system, a scatterplot of a truly exponential algorithm should resemble a linear function because M(n) ≈ ca^n implies

log_b M(n) ≈ log_b c + n log_b a,

and vice versa.

One of the possible applications of the empirical analysis is to predict the algorithm's performance on an instance not included in the experiment sample. For example, if you observe that the ratios M(n)/g(n) are close to some constant c for the sample instances, it could be sensible to approximate M(n) by the product cg(n) for other instances, too. This approach should be used with caution, especially for values of n outside the sample range. (Mathematicians call such predictions extrapolation, as opposed to interpolation, which deals with values within the sample range.) Of course, you can try unleashing the standard techniques of statistical data analysis and prediction. Note, however, that the majority of such techniques are based on specific probabilistic assumptions that may or may not be valid for the experimental data in question.

It seems appropriate to end this section by pointing out the basic differences between mathematical and empirical analyses of algorithms.
The principal strength of the mathematical analysis is its independence of specific inputs; its principal weakness is its limited applicability, especially for investigating the average-case efficiency. The principal strength of the empirical analysis lies in its applicability to any algorithm, but its results can depend on the particular sample of instances and the computer used in the experiment.

FIGURE 2.7 Typical scatterplots. (a) Logarithmic. (b) Linear. (c) One of the convex functions.

Exercises 2.6

1. Consider the following well-known sorting algorithm, which is studied later in the book, with a counter inserted to count the number of key comparisons.

ALGORITHM SortAnalysis(A[0..n − 1])
//Input: An array A[0..n − 1] of n orderable elements
//Output: The total number of key comparisons made
count ← 0
for i ← 1 to n − 1 do
    v ← A[i]
    j ← i − 1
    while j ≥ 0 and A[j] > v do
        count ← count + 1
        A[j + 1] ← A[j]
        j ← j − 1
    A[j + 1] ← v
return count

Is the comparison counter inserted in the right place? If you believe it is, prove it; if you believe it is not, make an appropriate correction.

2. a. Run the program of Problem 1, with a properly inserted counter (or counters) for the number of key comparisons, on 20 random arrays of sizes 1000, 2000, 3000, . . . , 20,000.
b. Analyze the data obtained to form a hypothesis about the algorithm's average-case efficiency.
c. Estimate the number of key comparisons we should expect for a randomly generated array of size 25,000 sorted by the same algorithm.

3. Repeat Problem 2 by measuring the program's running time in milliseconds.

4.
Hypothesize a likely efficiency class of an algorithm based on the following empirical observations of its basic operation's count:

size   1000    2000    3000    4000    5000    6000    7000    8000     9000     10000
count  11,966  24,303  39,992  53,010  67,272  78,692  91,274  113,063  129,799  140,538

5. What scale transformation will make a logarithmic scatterplot look like a linear one?

6. How can one distinguish a scatterplot for an algorithm in Θ(lg lg n) from a scatterplot for an algorithm in Θ(lg n)?

7. a. Find empirically the largest number of divisions made by Euclid's algorithm for computing gcd(m, n) for 1 ≤ n ≤ m ≤ 100.
b. For each positive integer k, find empirically the smallest pair of integers 1 ≤ n ≤ m ≤ 100 for which Euclid's algorithm needs to make k divisions in order to find gcd(m, n).

8. The average-case efficiency of Euclid's algorithm on inputs of size n can be measured by the average number of divisions Davg(n) made by the algorithm in computing gcd(n, 1), gcd(n, 2), . . . , gcd(n, n). For example,

Davg(5) = (1/5)(1 + 2 + 3 + 2 + 1) = 1.8.

Produce a scatterplot of Davg(n) and indicate the algorithm's likely average-case efficiency class.

9. Run an experiment to ascertain the efficiency class of the sieve of Eratosthenes (see Section 1.1).

10. Run a timing experiment for the three algorithms for computing gcd(m, n) presented in Section 1.1.

2.7 Algorithm Visualization

In addition to the mathematical and empirical analyses of algorithms, there is yet a third way to study algorithms. It is called algorithm visualization and can be defined as the use of images to convey some useful information about algorithms. That information can be a visual illustration of an algorithm's operation, of its performance on different kinds of inputs, or of its execution speed versus that of other algorithms for the same problem.
To accomplish this goal, an algorithm visualization uses graphic elements—points, line segments, two- or three-dimensional bars, and so on—to represent some "interesting events" in the algorithm's operation. There are two principal variations of algorithm visualization:

Static algorithm visualization
Dynamic algorithm visualization, also called algorithm animation

Static algorithm visualization shows an algorithm's progress through a series of still images. Algorithm animation, on the other hand, shows a continuous, movie-like presentation of an algorithm's operations. Animation is an arguably more sophisticated option, which, of course, is much more difficult to implement.

Early efforts in the area of algorithm visualization go back to the 1970s. The watershed event happened in 1981 with the appearance of a 30-minute color sound film titled Sorting Out Sorting. This algorithm visualization classic was produced at the University of Toronto by Ronald Baecker with the assistance of D. Sherman [Bae81, Bae98]. It contained visualizations of nine well-known sorting algorithms (more than half of them are discussed later in the book) and provided quite a convincing demonstration of their relative speeds.

The success of Sorting Out Sorting made sorting algorithms a perennial favorite for algorithm animation. Indeed, the sorting problem lends itself quite naturally to visual presentation via vertical or horizontal bars or sticks of different heights or lengths, which need to be rearranged according to their sizes (Figure 2.8). This presentation is convenient, however, only for illustrating actions of a typical sorting algorithm on small inputs.
For larger files, Sorting Out Sorting used the ingenious idea of presenting data by a scatterplot of points on a coordinate plane, with the first coordinate representing an item's position in the file and the second one representing the item's value; with such a representation, the process of sorting looks like a transformation of a "random" scatterplot of points into the points along a frame's diagonal (Figure 2.9). In addition, most sorting algorithms work by comparing and exchanging two given items at a time—an event that can be animated relatively easily.

FIGURE 2.8 Initial and final screens of a typical visualization of a sorting algorithm using the bar representation.

FIGURE 2.9 Initial and final screens of a typical visualization of a sorting algorithm using the scatterplot representation.

Since the appearance of Sorting Out Sorting, a great number of algorithm animations have been created, especially after the appearance of Java and the World Wide Web in the 1990s. They range in scope from one particular algorithm to a group of algorithms for the same problem (e.g., sorting) or the same application area (e.g., geometric algorithms) to general-purpose animation systems. At the end of 2010, a catalog of links to existing visualizations, maintained under the NSF-supported AlgoViz Project, contained over 500 links. Unfortunately, a survey of existing visualizations found most of them to be of low quality, with the content heavily skewed toward easier topics such as sorting [Sha07].

There are two principal applications of algorithm visualization: research and education. Potential benefits for researchers are based on expectations that algorithm visualization may help uncover some unknown features of algorithms.
For example, one researcher used a visualization of the recursive Tower of Hanoi algorithm in which odd- and even-numbered disks were colored in two different colors. He noticed that two disks of the same color never came in direct contact during the algorithm's execution. This observation helped him in developing a better nonrecursive version of the classic algorithm. To give another example, Bentley and McIlroy [Ben93] mentioned using an algorithm animation system in their work on improving a library implementation of a leading sorting algorithm.

The application of algorithm visualization to education seeks to help students learning algorithms. The available evidence of its effectiveness is decisively mixed. Although some experiments did register positive learning outcomes, others failed to do so. The increasing body of evidence indicates that creating sophisticated software systems is not going to be enough. In fact, it appears that the level of student involvement with visualization might be more important than specific features of visualization software. In some experiments, low-tech visualizations prepared by students were more effective than passive exposure to sophisticated software systems.

To summarize, although some successes in both research and education have been reported in the literature, they are not as impressive as one might expect. A deeper understanding of human perception of images will be required before the true potential of algorithm visualization is fulfilled.

SUMMARY

There are two kinds of algorithm efficiency: time efficiency and space efficiency. Time efficiency indicates how fast the algorithm runs; space efficiency deals with the extra space it requires.

An algorithm's time efficiency is principally measured as a function of its input size by counting the number of times its basic operation is executed. A basic operation is the operation that contributes the most to running time.
Typically, it is the most time-consuming operation in the algorithm's innermost loop.

For some algorithms, the running time can differ considerably for inputs of the same size, leading to worst-case efficiency, average-case efficiency, and best-case efficiency.

The established framework for analyzing time efficiency is primarily grounded in the order of growth of the algorithm's running time as its input size goes to infinity.

The notations O, Ω, and Θ are used to indicate and compare the asymptotic orders of growth of functions expressing algorithm efficiencies.

The efficiencies of a large number of algorithms fall into the following few classes: constant, logarithmic, linear, linearithmic, quadratic, cubic, and exponential.

The main tool for analyzing the time efficiency of a nonrecursive algorithm is to set up a sum expressing the number of executions of its basic operation and ascertain the sum's order of growth.

The main tool for analyzing the time efficiency of a recursive algorithm is to set up a recurrence relation expressing the number of executions of its basic operation and ascertain the solution's order of growth. Succinctness of a recursive algorithm may mask its inefficiency.

The Fibonacci numbers are an important sequence of integers in which every element is equal to the sum of its two immediate predecessors. There are several algorithms for computing the Fibonacci numbers, with drastically different efficiencies.

Empirical analysis of an algorithm is performed by running a program implementing the algorithm on a sample of inputs and analyzing the data observed (the basic operation's count or physical running time). This often involves generating pseudorandom numbers. The applicability to any algorithm is the principal strength of this approach; the dependence of results on the particular computer and instance sample is its main weakness.

Algorithm visualization is the use of images to convey useful information about algorithms.
The two principal variations of algorithm visualization are static algorithm visualization and dynamic algorithm visualization (also called algorithm animation).

3 Brute Force and Exhaustive Search

Science is as far removed from brute force as this sword from a crowbar.
—Edward Lytton (1803–1873), Leila, Book II, Chapter I

Doing a thing well is often a waste of time.
—Robert Byrne, a master pool and billiards player and a writer

After introducing the framework and methods for algorithm analysis in the preceding chapter, we are ready to embark on a discussion of algorithm design strategies. Each of the next eight chapters is devoted to a particular design strategy. The subject of this chapter is brute force and its important special case, exhaustive search. Brute force can be described as follows:

Brute force is a straightforward approach to solving a problem, usually directly based on the problem statement and definitions of the concepts involved.

The "force" implied by the strategy's definition is that of a computer and not that of one's intellect. "Just do it!" would be another way to describe the prescription of the brute-force approach. And often, the brute-force strategy is indeed the one that is easiest to apply.

As an example, consider the exponentiation problem: compute a^n for a nonzero number a and a nonnegative integer n. Although this problem might seem trivial, it provides a useful vehicle for illustrating several algorithm design strategies, including the brute force. (Also note that computing a^n mod m for some large integers is a principal component of a leading encryption algorithm.) By the definition of exponentiation,

a^n = a ∗ . . . ∗ a   (n times).

This suggests simply computing a^n by multiplying 1 by a n times.
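In Python, this definition-based computation is a sketch of a few lines (the function name is mine):

```python
def power(a, n):
    """Compute a**n by the brute-force definition: multiply 1 by a, n times."""
    result = 1
    for _ in range(n):
        result *= a
    return result
```

The loop performs exactly n multiplications, directly mirroring the definition.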
We have already encountered at least two brute-force algorithms in the book: the consecutive integer checking algorithm for computing gcd(m, n) in Section 1.1 and the definition-based algorithm for matrix multiplication in Section 2.3. Many other examples are given later in this chapter. (Can you identify a few algorithms you already know as being based on the brute-force approach?)

Though rarely a source of clever or efficient algorithms, the brute-force approach should not be overlooked as an important algorithm design strategy. First, unlike some of the other strategies, brute force is applicable to a very wide variety of problems. In fact, it seems to be the only general approach for which it is more difficult to point out problems it cannot tackle. Second, for some important problems—e.g., sorting, searching, matrix multiplication, string matching—the brute-force approach yields reasonable algorithms of at least some practical value with no limitation on instance size. Third, the expense of designing a more efficient algorithm may be unjustifiable if only a few instances of a problem need to be solved and a brute-force algorithm can solve those instances with acceptable speed. Fourth, even if too inefficient in general, a brute-force algorithm can still be useful for solving small-size instances of a problem. Finally, a brute-force algorithm can serve an important theoretical or educational purpose as a yardstick with which to judge more efficient alternatives for solving a problem.

3.1 Selection Sort and Bubble Sort

In this section, we consider the application of the brute-force approach to the problem of sorting: given a list of n orderable items (e.g., numbers, characters from some alphabet, character strings), rearrange them in nondecreasing order. As we mentioned in Section 1.3, dozens of algorithms have been developed for solving this very important problem. You might have learned several of them in the past.
If you have, try to forget them for the time being and look at the problem afresh.

Now, after your mind is unburdened of previous knowledge of sorting algorithms, ask yourself a question: "What would be the most straightforward method for solving the sorting problem?" Reasonable people may disagree on the answer to this question. The two algorithms discussed here—selection sort and bubble sort—seem to be the two prime candidates.

Selection Sort

We start selection sort by scanning the entire given list to find its smallest element and exchange it with the first element, putting the smallest element in its final position in the sorted list. Then we scan the list, starting with the second element, to find the smallest among the last n − 1 elements and exchange it with the second element, putting the second smallest element in its final position. Generally, on the ith pass through the list, which we number from 0 to n − 2, the algorithm searches for the smallest item among the last n − i elements and swaps it with Ai:

A0 ≤ A1 ≤ . . . ≤ Ai−1 | Ai, . . . , Amin, . . . , An−1
(in their final positions) | (the last n − i elements)

After n − 1 passes, the list is sorted. Here is pseudocode of this algorithm, which, for simplicity, assumes that the list is implemented as an array:

ALGORITHM SelectionSort(A[0..n − 1])
//Sorts a given array by selection sort
//Input: An array A[0..n − 1] of orderable elements
//Output: Array A[0..n − 1] sorted in nondecreasing order
for i ← 0 to n − 2 do
    min ← i
    for j ← i + 1 to n − 1 do
        if A[j] < A[min] min ← j
    swap A[i] and A[min]

If you are to compute a^n mod m where a > 1 and n is a large positive integer, how would you circumvent the problem of a very large magnitude of a^n? 3. For each of the algorithms in Problems 4, 5, and 6 of Exercises 2.3, tell whether or not the algorithm is based on the brute-force approach. 4. a.
Design a brute-force algorithm for computing the value of a polynomial

p(x) = a_n x^n + a_{n−1} x^{n−1} + . . . + a_1 x + a_0

at a given point x_0 and determine its worst-case efficiency class.
b. If the algorithm you designed is in Θ(n²), design a linear algorithm for this problem.
c. Is it possible to design an algorithm with a better-than-linear efficiency for this problem?

5. A network topology specifies how computers, printers, and other devices are connected over a network. The figure below illustrates three common topologies of networks: the ring, the star, and the fully connected mesh.

(ring)   (star)   (fully connected mesh)

You are given a boolean matrix A[0..n − 1, 0..n − 1], where n > 3, which is supposed to be the adjacency matrix of a graph modeling a network with one of these topologies. Your task is to determine which of these three topologies, if any, the matrix represents. Design a brute-force algorithm for this task and indicate its time efficiency class.

6. Tetromino tilings Tetrominoes are tiles made of four 1 × 1 squares. There are five types of tetrominoes shown below:

(straight tetromino)   (square tetromino)   (L-tetromino)   (T-tetromino)   (Z-tetromino)

Is it possible to tile—i.e., cover exactly without overlaps—an 8 × 8 chessboard with
a. straight tetrominoes?
b. square tetrominoes?
c. L-tetrominoes?
d. T-tetrominoes?
e. Z-tetrominoes?

7. A stack of fake coins There are n stacks of n identical-looking coins. All of the coins in one of these stacks are counterfeit, while all the coins in the other stacks are genuine. Every genuine coin weighs 10 grams; every fake weighs 11 grams. You have an analytical scale that can determine the exact weight of any number of coins.
a. Devise a brute-force algorithm to identify the stack with the fake coins and determine its worst-case efficiency class.
b. What is the minimum number of weighings needed to identify the stack with the fake coins?

8.
Sort the list E, X, A, M, P, L, E in alphabetical order by selection sort.

9. Is selection sort stable? (The definition of a stable sorting algorithm was given in Section 1.3.)

10. Is it possible to implement selection sort for linked lists with the same Θ(n²) efficiency as the array version?

11. Sort the list E, X, A, M, P, L, E in alphabetical order by bubble sort.

12. a. Prove that if bubble sort makes no exchanges on its pass through a list, the list is sorted and the algorithm can be stopped.
b. Write pseudocode of the method that incorporates this improvement.
c. Prove that the worst-case efficiency of the improved version is quadratic.

13. Is bubble sort stable?

14. Alternating disks You have a row of 2n disks of two colors, n dark and n light. They alternate: dark, light, dark, light, and so on. You want to get all the dark disks to the right-hand end, and all the light disks to the left-hand end. The only moves you are allowed to make are those that interchange the positions of two neighboring disks. Design an algorithm for solving this puzzle and determine the number of moves it takes. [Gar99]

3.2 Sequential Search and Brute-Force String Matching

We saw in the previous section two applications of the brute-force approach to the sorting problem. Here we discuss two applications of this strategy to the problem of searching. The first deals with the canonical problem of searching for an item of a given value in a given list. The second is different in that it deals with the string-matching problem.

Sequential Search

We have already encountered a brute-force algorithm for the general searching problem: it is called sequential search (see Section 2.1). To repeat, the algorithm simply compares successive elements of a given list with a given search key until either a match is encountered (successful search) or the list is exhausted without finding a match (unsuccessful search).
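This basic version of sequential search can be sketched in Python as follows (the function name is mine):

```python
def sequential_search(A, K):
    """Compare successive elements of A with the search key K.
    Return the index of the first match, or -1 if the list is
    exhausted without finding one."""
    for i in range(len(A)):
        if A[i] == K:    # successful search
            return i
    return -1            # unsuccessful search
```

Note that each iteration makes two comparisons: one against the key and one, hidden in the loop control, against the list's end; the sentinel trick discussed next removes the second.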
A simple extra trick is often employed in implementing sequential search: if we append the search key to the end of the list, the search for the key will have to be successful, and therefore we can eliminate the end-of-list check altogether. Here is pseudocode of this enhanced version.

ALGORITHM SequentialSearch2(A[0..n], K)
//Implements sequential search with a search key as a sentinel
//Input: An array A of n elements and a search key K
//Output: The index of the first element in A[0..n − 1] whose value is
//        equal to K or −1 if no such element is found
A[n] ← K
i ← 0
while A[i] ≠ K do
    i ← i + 1
if i < n return i
else return −1

[. . .] The convex hull of any set S of n > 2 points not all on the same line is a convex polygon with the vertices at some of the points of S. (If all the points do lie on the same line, the polygon degenerates to a line segment but still with the endpoints at two points of S.)

FIGURE 3.6 The convex hull for this set of eight points is the convex polygon with vertices at p1, p5, p6, p7, and p3.

The convex-hull problem is the problem of constructing the convex hull for a given set S of n points. To solve it, we need to find the points that will serve as the vertices of the polygon in question. Mathematicians call the vertices of such a polygon "extreme points." By definition, an extreme point of a convex set is a point of this set that is not a middle point of any line segment with endpoints in the set. For example, the extreme points of a triangle are its three vertices, the extreme points of a circle are all the points of its circumference, and the extreme points of the convex hull of the set of eight points in Figure 3.6 are p1, p5, p6, p7, and p3. Extreme points have several special properties other points of a convex set do not have. One of them is exploited by the simplex method, a very important algorithm discussed in Section 10.1.
This algorithm solves linear programming problems, which are problems of finding a minimum or a maximum of a linear function of n variables subject to linear constraints (see Problem 12 in this section's exercises for an example and Sections 6.6 and 10.1 for a general discussion). Here, however, we are interested in extreme points because their identification solves the convex-hull problem. Actually, to solve this problem completely, we need to know a bit more than just which of n points of a given set are extreme points of the set's convex hull: we need to know which pairs of points need to be connected to form the boundary of the convex hull. Note that this issue can also be addressed by listing the extreme points in a clockwise or a counterclockwise order.

So how can we solve the convex-hull problem in a brute-force manner? If you do not see an immediate plan for a frontal attack, do not be dismayed: the convex-hull problem is one with no obvious algorithmic solution. Nevertheless, there is a simple but inefficient algorithm that is based on the following observation about line segments making up the boundary of a convex hull: a line segment connecting two points pi and pj of a set of n points is a part of the convex hull's boundary if and only if all the other points of the set lie on the same side of the straight line through these two points.2 (Verify this property for the set in Figure 3.6.) Repeating this test for every pair of points yields a list of line segments that make up the convex hull's boundary.

A few elementary facts from analytical geometry are needed to implement this algorithm. First, the straight line through two points (x1, y1), (x2, y2) in the coordinate plane can be defined by the equation

ax + by = c, where a = y2 − y1, b = x1 − x2, c = x1y2 − y1x2.
Second, such a line divides the plane into two half-planes: for all the points in one of them, ax + by > c, while for all the points in the other, ax + by < c. (For the points on the line itself, of course, ax + by = c.) Thus, to check whether certain points lie on the same side of the line, we can simply check whether the expression ax + by − c has the same sign for each of these points. We leave the implementation details as an exercise.

What is the time efficiency of this algorithm? It is in O(n³): for each of n(n − 1)/2 pairs of distinct points, we may need to find the sign of ax + by − c for each of the other n − 2 points. There are much more efficient algorithms for this important problem, and we discuss one of them later in the book.

Exercises 3.3

1. Assuming that sqrt takes about 10 times longer than each of the other operations in the innermost loop of BruteForceClosestPoints, which are assumed to take the same amount of time, estimate how much faster the algorithm will run after the improvement discussed in Section 3.3.

2. Can you design a more efficient algorithm than the one based on the brute-force strategy to solve the closest-pair problem for n points x1, x2, . . . , xn on the real line?

3. Let x1 [. . .] 1 points in the plane.

10. What modification needs to be made in the brute-force algorithm for the convex-hull problem to handle more than two points on the same straight line?

11. Write a program implementing the brute-force algorithm for the convex-hull problem.

12. Consider the following small instance of the linear programming problem:

maximize 3x + 5y
subject to x + y ≤ 4
           x + 3y ≤ 6
           x ≥ 0, y ≥ 0.

a. Sketch, in the Cartesian plane, the problem's feasible region, defined as the set of points satisfying all the problem's constraints.
b. Identify the region's extreme points.
c.
Solve this optimization problem by using the following theorem: A linear programming problem with a nonempty bounded feasible region always has a solution, which can be found at one of the extreme points of its feasible region.

3.4 Exhaustive Search

Many important problems require finding an element with a special property in a domain that grows exponentially (or faster) with an instance size. Typically, such problems arise in situations that involve—explicitly or implicitly—combinatorial objects such as permutations, combinations, and subsets of a given set. Many such problems are optimization problems: they ask to find an element that maximizes or minimizes some desired characteristic such as a path length or an assignment cost.

Exhaustive search is simply a brute-force approach to combinatorial problems. It suggests generating each and every element of the problem domain, selecting those of them that satisfy all the constraints, and then finding a desired element (e.g., the one that optimizes some objective function). Note that although the idea of exhaustive search is quite straightforward, its implementation typically requires an algorithm for generating certain combinatorial objects. We delay a discussion of such algorithms until the next chapter and assume here that they exist. We illustrate exhaustive search by applying it to three important problems: the traveling salesman problem, the knapsack problem, and the assignment problem.

Traveling Salesman Problem

The traveling salesman problem (TSP) has been intriguing researchers for the last 150 years by its seemingly simple formulation, important applications, and interesting connections to other combinatorial problems. In layman's terms, the problem asks to find the shortest tour through a given set of n cities that visits each city exactly once before returning to the city where it started.
The problem can be conveniently modeled by a weighted graph, with the graph's vertices representing the cities and the edge weights specifying the distances. Then the problem can be stated as the problem of finding the shortest Hamiltonian circuit of the graph. (A Hamiltonian circuit is defined as a cycle that passes through all the vertices of the graph exactly once. It is named after the Irish mathematician Sir William Rowan Hamilton (1805–1865), who became interested in such cycles as an application of his algebraic discoveries.)

It is easy to see that a Hamiltonian circuit can also be defined as a sequence of n + 1 adjacent vertices vi0, vi1, . . . , vin−1, vi0, where the first vertex of the sequence is the same as the last one and all the other n − 1 vertices are distinct. Further, we can assume, with no loss of generality, that all circuits start and end at one particular vertex (they are cycles after all, are they not?). Thus, we can get all the tours by generating all the permutations of n − 1 intermediate cities, computing the tour lengths, and finding the shortest among them. Figure 3.7 presents a small instance of the problem and its solution by this method.

An inspection of Figure 3.7 reveals three pairs of tours that differ only by their direction. Hence, we could cut the number of vertex permutations by half. We could, for example, choose any two intermediate vertices, say, b and c, and then consider only permutations in which b precedes c. (This trick implicitly defines a tour's direction.) This improvement cannot brighten the efficiency picture much, however. The total number of permutations needed is still (n − 1)!/2, which makes the exhaustive-search approach impractical for all but very small values of n. On the other hand, if you always see your glass as half-full, you can claim that cutting the work by half is nothing to sneeze at, even if you solve a small instance of the problem, especially by hand.
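The scheme just outlined, which fixes a start city and tries all permutations of the remaining ones, can be sketched in Python (the function name and the distance-matrix representation are my choices):

```python
from itertools import permutations

def tsp_exhaustive(dist):
    """Exhaustive search for the traveling salesman problem.

    dist is an n-by-n matrix of intercity distances.  City 0 is the
    fixed start/end of every tour, so only the (n - 1)! permutations
    of the remaining cities are generated.  Returns (tour, length)."""
    n = len(dist)
    best_tour, best_length = None, float("inf")
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)                  # close the cycle
        length = sum(dist[tour[i]][tour[i + 1]] for i in range(n))
        if length < best_length:
            best_tour, best_length = tour, length
    return best_tour, best_length
```

On the instance of Figure 3.7 (distances a-b = 2, a-c = 5, a-d = 7, b-c = 8, b-d = 3, c-d = 1, with cities numbered a = 0, . . . , d = 3), this returns an optimal tour of length 11, matching the figure.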
Also note that had we not limited our investigation to the circuits starting at the same vertex, the number of permutations would have been even larger, by a factor of n.

FIGURE 3.7 Solution to a small instance of the traveling salesman problem by exhaustive search. (The graph has vertices a, b, c, d with distances a-b = 2, a-c = 5, a-d = 7, b-c = 8, b-d = 3, c-d = 1.)

Tour                                Length
a ---> b ---> c ---> d ---> a    l = 2 + 8 + 1 + 7 = 18
a ---> b ---> d ---> c ---> a    l = 2 + 3 + 1 + 5 = 11   optimal
a ---> c ---> b ---> d ---> a    l = 5 + 8 + 3 + 7 = 23
a ---> c ---> d ---> b ---> a    l = 5 + 1 + 3 + 2 = 11   optimal
a ---> d ---> b ---> c ---> a    l = 7 + 3 + 8 + 5 = 23
a ---> d ---> c ---> b ---> a    l = 7 + 1 + 8 + 2 = 18

Knapsack Problem

Here is another well-known problem in algorithmics. Given n items of known weights w1, w2, . . . , wn and values v1, v2, . . . , vn and a knapsack of capacity W, find the most valuable subset of the items that fit into the knapsack. If you do not like the idea of putting yourself in the shoes of a thief who wants to steal the most valuable loot that fits into his knapsack, think about a transport plane that has to deliver the most valuable set of items to a remote location without exceeding the plane's capacity.

Figure 3.8a presents a small instance of the knapsack problem. The exhaustive-search approach to this problem leads to generating all the subsets of the set of n items given, computing the total weight of each subset in order to identify feasible subsets (i.e., the ones with the total weight not exceeding the knapsack capacity), and finding a subset of the largest value among them. As an example, the solution to the instance of Figure 3.8a is given in Figure 3.8b. Since the number of subsets of an n-element set is 2^n, the exhaustive search leads to a Θ(2^n) algorithm, no matter how efficiently individual subsets are generated.

Thus, for both the traveling salesman and knapsack problems considered above, exhaustive search leads to algorithms that are extremely inefficient on every input.
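The subset-generation approach can be sketched in Python (function and variable names are mine; subsets are represented as tuples of 0-based item indices):

```python
from itertools import combinations

def knapsack_exhaustive(weights, values, capacity):
    """Generate all 2**n subsets of item indices; among the feasible
    ones (total weight <= capacity), return (best_value, best_subset)."""
    n = len(weights)
    best_value, best_subset = 0, ()
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            weight = sum(weights[i] for i in subset)
            if weight <= capacity:                 # feasible subset
                value = sum(values[i] for i in subset)
                if value > best_value:
                    best_value, best_subset = value, subset
    return best_value, best_subset
```

On the instance of Figure 3.8a (weights 7, 3, 4, 5; values $42, $12, $40, $25; W = 10), it returns the value $65 for items {3, 4}, i.e., 0-based indices (2, 3), matching Figure 3.8b.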
In fact, these two problems are the best-known examples of so-called NP-hard problems. No polynomial-time algorithm is known for any NP-hard problem. Moreover, most computer scientists believe that such algorithms do not exist, although this very important conjecture has never been proven. More sophisticated approaches—backtracking and branch-and-bound (see Sections 12.1 and 12.2)—enable us to solve some but not all instances of these and similar problems in less than exponential time. Alternatively, we can use one of many approximation algorithms, such as those described in Section 12.3.

FIGURE 3.8 (a) Instance of the knapsack problem: knapsack capacity W = 10; item weights w1 = 7, w2 = 3, w3 = 4, w4 = 5; item values v1 = $42, v2 = $12, v3 = $40, v4 = $25. (b) Its solution by exhaustive search (the optimal selection is marked):

Subset          Total weight    Total value
∅                    0              $0
{1}                  7             $42
{2}                  3             $12
{3}                  4             $40
{4}                  5             $25
{1, 2}              10             $54
{1, 3}              11             not feasible
{1, 4}              12             not feasible
{2, 3}               7             $52
{2, 4}               8             $37
{3, 4}               9             $65   optimal
{1, 2, 3}           14             not feasible
{1, 2, 4}           15             not feasible
{1, 3, 4}           16             not feasible
{2, 3, 4}           12             not feasible
{1, 2, 3, 4}        19             not feasible

Assignment Problem

In our third example of a problem that can be solved by exhaustive search, there are n people who need to be assigned to execute n jobs, one person per job. (That is, each person is assigned to exactly one job and each job is assigned to exactly one person.) The cost that would accrue if the ith person is assigned to the jth job is a known quantity C[i, j] for each pair i, j = 1, 2, . . . , n. The problem is to find an assignment with the minimum total cost.

A small instance of this problem follows, with the table entries representing the assignment costs C[i, j]:

              Job 1    Job 2    Job 3    Job 4
Person 1        9        2        7        8
Person 2        6        4        3        7
Person 3        5        8        1        8
Person 4        7        6        9        4

It is easy to see that an instance of the assignment problem is completely specified by its cost matrix C.
In terms of this matrix, the problem is to select one element in each row of the matrix so that all selected elements are in different columns and the total sum of the selected elements is the smallest possible. Note that no obvious strategy for finding a solution works here. For example, we cannot select the smallest element in each row, because the smallest elements may happen to be in the same column. In fact, the smallest element in the entire matrix need not be a component of an optimal solution. Thus, opting for the exhaustive search may appear as an unavoidable evil.

We can describe feasible solutions to the assignment problem as n-tuples <j1, . . . , jn> in which the ith component, i = 1, . . . , n, indicates the column of the element selected in the ith row (i.e., the job number assigned to the ith person). For example, for the cost matrix above, <2, 3, 4, 1> indicates the assignment of Person 1 to Job 2, Person 2 to Job 3, Person 3 to Job 4, and Person 4 to Job 1. The requirements of the assignment problem imply that there is a one-to-one correspondence between feasible assignments and permutations of the first n integers. Therefore, the exhaustive-search approach to the assignment problem would require generating all the permutations of integers 1, 2, . . . , n, computing the total cost of each assignment by summing up the corresponding elements of the cost matrix, and finally selecting the one with the smallest sum. A few first iterations of applying this algorithm to the instance given above are shown in Figure 3.9; you are asked to complete it in the exercises.

FIGURE 3.9 First few iterations of solving a small instance of the assignment problem by exhaustive search (cost matrix C as given above):

<1, 2, 3, 4>    cost = 9 + 4 + 1 + 4 = 18
<1, 2, 4, 3>    cost = 9 + 4 + 8 + 9 = 30
<1, 3, 2, 4>    cost = 9 + 3 + 8 + 4 = 24
<1, 3, 4, 2>    cost = 9 + 3 + 8 + 6 = 26
<1, 4, 2, 3>    cost = 9 + 7 + 8 + 9 = 33
<1, 4, 3, 2>    cost = 9 + 7 + 1 + 6 = 23
etc.
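The permutation-based exhaustive search can be sketched in Python (names are mine; persons and jobs are 0-based here, unlike the 1-based tuples in the text):

```python
from itertools import permutations

def assignment_exhaustive(C):
    """Try all n! assignments.  C[i][j] is the cost of giving person i
    job j (0-based).  Returns (min_cost, jobs), where jobs[i] is the
    job assigned to person i."""
    n = len(C)
    best_cost, best_jobs = float("inf"), None
    for jobs in permutations(range(n)):
        cost = sum(C[i][jobs[i]] for i in range(n))   # total cost of this assignment
        if cost < best_cost:
            best_cost, best_jobs = cost, jobs
    return best_cost, best_jobs
```

On the cost matrix of the text's instance, this finds the minimum total cost 13, achieved by assigning Person 1 to Job 2, Person 2 to Job 1, Person 3 to Job 3, and Person 4 to Job 4 (2 + 6 + 1 + 4 = 13).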
Since the number of permutations to be considered for the general case of the assignment problem is n!, exhaustive search is impractical for all but very small instances of the problem. Fortunately, there is a much more efficient algorithm for this problem called the Hungarian method after the Hungarian mathematicians König and Egerváry, whose work underlies the method (see, e.g., [Kol95]).

This is good news: the fact that a problem domain grows exponentially or faster does not necessarily imply that there can be no efficient algorithm for solving it. In fact, we present several other examples of such problems later in the book. However, such examples are more of an exception to the rule. More often than not, there are no known polynomial-time algorithms for problems whose domain grows exponentially with instance size, provided we want to solve them exactly. And, as we mentioned above, such algorithms quite possibly do not exist.

Exercises 3.4

1. a. Assuming that each tour can be generated in constant time, what will be the efficiency class of the exhaustive-search algorithm outlined in the text for the traveling salesman problem?
b. If this algorithm is programmed on a computer that makes ten billion additions per second, estimate the maximum number of cities for which the problem can be solved in
i. 1 hour. ii. 24 hours. iii. 1 year. iv. 1 century.

2. Outline an exhaustive-search algorithm for the Hamiltonian circuit problem.

3. Outline an algorithm to determine whether a connected graph represented by its adjacency matrix has an Eulerian circuit. What is the efficiency class of your algorithm?

4. Complete the application of exhaustive search to the instance of the assignment problem started in the text.

5. Give an example of the assignment problem whose optimal solution does not include the smallest element of its cost matrix.

6.
Consider the partition problem: given n positive integers, partition them into two disjoint subsets with the same sum of their elements. (Of course, the problem does not always have a solution.) Design an exhaustive-search algorithm for this problem. Try to minimize the number of subsets the algorithm needs to generate.

7. Consider the clique problem: given a graph G and a positive integer k, determine whether the graph contains a clique of size k, i.e., a complete subgraph of k vertices. Design an exhaustive-search algorithm for this problem.

8. Explain how exhaustive search can be applied to the sorting problem and determine the efficiency class of such an algorithm.

9. Eight-queens problem Consider the classic puzzle of placing eight queens on an 8 × 8 chessboard so that no two queens are in the same row or in the same column or on the same diagonal. How many different positions are there so that
a. no two queens are on the same square?
b. no two queens are in the same row?
c. no two queens are in the same row or in the same column?
Also estimate how long it would take to find all the solutions to the problem by exhaustive search based on each of these approaches on a computer capable of checking 10 billion positions per second.

10. Magic squares A magic square of order n is an arrangement of the integers from 1 to n² in an n × n matrix, with each number occurring exactly once, so that each row, each column, and each main diagonal has the same sum.
a. Prove that if a magic square of order n exists, the sum in question must be equal to n(n² + 1)/2.
b. Design an exhaustive-search algorithm for generating all magic squares of order n.
c. Go to the Internet or your library and find a better algorithm for generating magic squares.
d.
Implement the two algorithms—the exhaustive search and the one you have found—and run an experiment to determine the largest value of n for which each of the algorithms is able to find a magic square of order n in less than 1 minute on your computer.

11. Famous alphametic A puzzle in which the digits in a correct mathematical expression, such as a sum, are replaced by letters is called a cryptarithm; if, in addition, the puzzle's words make sense, it is said to be an alphametic. The most well-known alphametic was published by the renowned British puzzlist Henry E. Dudeney (1857–1930):

  S E N D
+ M O R E
---------
M O N E Y

Two conditions are assumed: first, the correspondence between letters and decimal digits is one-to-one, i.e., each letter represents one digit only and different letters represent different digits. Second, the digit zero does not appear as the left-most digit in any of the numbers. To solve an alphametic means to find which digit each letter represents. Note that a solution's uniqueness cannot be assumed and has to be verified by the solver.
a. Write a program for solving cryptarithms by exhaustive search. Assume that a given cryptarithm is a sum of two words.
b. Solve Dudeney's puzzle the way it was expected to be solved when it was first published in 1924.

3.5 Depth-First Search and Breadth-First Search

The term "exhaustive search" can also be applied to two very important algorithms that systematically process all vertices and edges of a graph. These two traversal algorithms are depth-first search (DFS) and breadth-first search (BFS). These algorithms have proved to be very useful for many applications involving graphs in artificial intelligence and operations research. In addition, they are indispensable for efficient investigation of fundamental properties of graphs such as connectivity and cycle presence.

Depth-First Search

Depth-first search starts a graph's traversal at an arbitrary vertex by marking it as visited.
On each iteration, the algorithm proceeds to an unvisited vertex that is adjacent to the one it is currently in. (If there are several such vertices, a tie can be resolved arbitrarily. As a practical matter, which of the adjacent unvisited candidates is chosen is dictated by the data structure representing the graph. In our examples, we always break ties by the alphabetical order of the vertices.) This process continues until a dead end—a vertex with no adjacent unvisited vertices—is encountered. At a dead end, the algorithm backs up one edge to the vertex it came from and tries to continue visiting unvisited vertices from there. The algorithm eventually halts after backing up to the starting vertex, with the latter being a dead end. By then, all the vertices in the same connected component as the starting vertex have been visited. If unvisited vertices still remain, the depth-first search must be restarted at any one of them.

It is convenient to use a stack to trace the operation of depth-first search. We push a vertex onto the stack when the vertex is reached for the first time (i.e., the visit of the vertex starts), and we pop a vertex off the stack when it becomes a dead end (i.e., the visit of the vertex ends).

It is also very useful to accompany a depth-first search traversal by constructing the so-called depth-first search forest. The starting vertex of the traversal serves as the root of the first tree in such a forest.

FIGURE 3.10 Example of a DFS traversal. (a) Graph on vertices a through j. (b) Traversal's stack (the first subscript number indicates the order in which a vertex is visited, i.e., pushed onto the stack; the second one indicates the order in which it becomes a dead end, i.e., popped off the stack): a1,6 c2,5 d3,1 f4,4 b5,3 e6,2 g7,10 h8,9 i9,8 j10,7. (c) DFS forest with the tree and back edges shown with solid and dashed lines, respectively.
Whenever a new unvisited vertex is reached for the first time, it is attached as a child to the vertex from which it is being reached. Such an edge is called a tree edge because the set of all such edges forms a forest. The algorithm may also encounter an edge leading to a previously visited vertex other than its immediate predecessor (i.e., its parent in the tree). Such an edge is called a back edge because it connects a vertex to its ancestor, other than the parent, in the depth-first search forest. Figure 3.10 provides an example of a depth-first search traversal, with the traversal stack and corresponding depth-first search forest shown as well.

Here is pseudocode of the depth-first search.

ALGORITHM DFS(G)
//Implements a depth-first search traversal of a given graph
//Input: Graph G = <V, E>
//Output: Graph G with its vertices marked with consecutive integers
//        in the order they are first encountered by the DFS traversal
mark each vertex in V with 0 as a mark of being "unvisited"
count ← 0
for each vertex v in V do
    if v is marked with 0
        dfs(v)

dfs(v)
//visits recursively all the unvisited vertices connected to vertex v
//by a path and numbers them in the order they are encountered
//via global variable count
count ← count + 1; mark v with count
for each vertex w in V adjacent to v do
    if w is marked with 0
        dfs(w)

The brevity of the DFS pseudocode and the ease with which it can be performed by hand may create a wrong impression about the level of sophistication of this algorithm. To appreciate its true power and depth, you should trace the algorithm's action by looking not at a graph's diagram but at its adjacency matrix or adjacency lists. (Try it for the graph in Figure 3.10 or a smaller example.)

How efficient is depth-first search? It is not difficult to see that this algorithm is, in fact, quite efficient since it takes just the time proportional to the size of the data structure used for representing the graph in question.
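For readers who prefer running code, here is one possible Python rendering of the DFS pseudocode, with the graph stored as an adjacency-list dictionary (the representation and the function name are my choices):

```python
def dfs_order(graph):
    """Number the vertices of graph (a dict mapping each vertex to a
    list of its neighbors) in the order DFS first reaches them, as in
    the pseudocode; a dictionary entry replaces the integer marks."""
    order = {}      # vertex -> visit number (the "count" marks)
    count = 0

    def dfs(v):
        nonlocal count
        count += 1
        order[v] = count
        for w in graph[v]:        # ties broken by adjacency-list order
            if w not in order:    # w is still "marked with 0"
                dfs(w)

    for v in graph:               # restart in every connected component
        if v not in order:
            dfs(v)
    return order
```

For a two-component graph such as {'a': ['b', 'c'], 'b': ['a'], 'c': ['a'], 'd': ['e'], 'e': ['d']}, the traversal visits a, b, c, then restarts at d and finishes with e.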
Thus, for the adjacency matrix representation, the traversal time is in Θ(|V|²), and for the adjacency list representation, it is in Θ(|V| + |E|), where |V| and |E| are the number of the graph's vertices and edges, respectively.

A DFS forest, which is obtained as a by-product of a DFS traversal, deserves a few comments, too. To begin with, it is not actually a forest. Rather, we can look at it as the given graph with its edges classified by the DFS traversal into two disjoint classes: tree edges and back edges. (No other types are possible for a DFS forest of an undirected graph.) Again, tree edges are edges used by the DFS traversal to reach previously unvisited vertices. If we consider only the edges in this class, we will indeed get a forest. Back edges connect vertices to previously visited vertices other than their immediate predecessors in the traversal. They connect vertices to their ancestors in the forest other than their parents.

A DFS traversal itself and the forest-like representation of the graph it provides have proved to be extremely helpful for the development of efficient algorithms for checking many important properties of graphs.³ Note that the DFS yields two orderings of vertices: the order in which the vertices are reached for the first time (pushed onto the stack) and the order in which the vertices become dead ends (popped off the stack). These orders are qualitatively different, and various applications can take advantage of either of them.

Important elementary applications of DFS include checking connectivity and checking acyclicity of a graph. Since dfs halts after visiting all the vertices connected by a path to the starting vertex, checking a graph's connectivity can be done as follows. Start a DFS traversal at an arbitrary vertex and check, after the algorithm halts, whether all the vertices of the graph will have been visited. If they have, the graph is connected; otherwise, it is not connected. More generally, we can use DFS for identifying connected components of a graph (how?).

As for checking for the presence of a cycle in a graph, we can take advantage of the graph's representation in the form of a DFS forest. If the latter does not have back edges, the graph is clearly acyclic. If there is a back edge from some vertex u to its ancestor v (e.g., the back edge from d to a in Figure 3.10c), the graph has a cycle that comprises the path from v to u via a sequence of tree edges in the DFS forest followed by the back edge from u to v.

You will find a few other applications of DFS later in the book, although more sophisticated applications, such as finding articulation points of a graph, are not included. (A vertex of a connected graph is said to be its articulation point if its removal with all edges incident to it breaks the graph into disjoint pieces.)

3. The discovery of several such applications was an important breakthrough achieved by the two American computer scientists John Hopcroft and Robert Tarjan in the 1970s. For this and other contributions, they were given the Turing Award—the most prestigious prize in the computing field [Hop87, Tar87].

Breadth-First Search

If depth-first search is a traversal for the brave (the algorithm goes as far from "home" as it can), breadth-first search is a traversal for the cautious. It proceeds in a concentric manner by visiting first all the vertices that are adjacent to a starting vertex, then all unvisited vertices two edges apart from it, and so on, until all the vertices in the same connected component as the starting vertex are visited. If there still remain unvisited vertices, the algorithm has to be restarted at an arbitrary vertex of another connected component of the graph. It is convenient to use a queue (note the difference from depth-first search!) to trace the operation of breadth-first search.
The queue is initialized with the traversal's starting vertex, which is marked as visited. On each iteration, the algorithm identifies all unvisited vertices that are adjacent to the front vertex, marks them as visited, and adds them to the queue; after that, the front vertex is removed from the queue.

Similarly to a DFS traversal, it is useful to accompany a BFS traversal by constructing the so-called breadth-first search forest. The traversal's starting vertex serves as the root of the first tree in such a forest. Whenever a new unvisited vertex is reached for the first time, the vertex is attached as a child to the vertex it is being reached from with an edge called a tree edge. If an edge leading to a previously visited vertex other than its immediate predecessor (i.e., its parent in the tree) is encountered, the edge is noted as a cross edge. Figure 3.11 provides an example of a breadth-first search traversal, with the traversal queue and corresponding breadth-first search forest shown.

FIGURE 3.11 Example of a BFS traversal. (a) Graph. (b) Traversal queue, with the numbers indicating the order in which the vertices are visited, i.e., added to (and removed from) the queue. (c) BFS forest with the tree and cross edges shown with solid and dotted lines, respectively.

Here is pseudocode of the breadth-first search.
ALGORITHM BFS(G)
//Implements a breadth-first search traversal of a given graph
//Input: Graph G = ⟨V, E⟩
//Output: Graph G with its vertices marked with consecutive integers
//        in the order they are visited by the BFS traversal
mark each vertex in V with 0 as a mark of being "unvisited"
count ← 0
for each vertex v in V do
    if v is marked with 0
        bfs(v)

bfs(v)
//visits all the unvisited vertices connected to vertex v
//by a path and numbers them in the order they are visited
//via global variable count
count ← count + 1; mark v with count and initialize a queue with v
while the queue is not empty do
    for each vertex w in V adjacent to the front vertex do
        if w is marked with 0
            count ← count + 1; mark w with count
            add w to the queue
    remove the front vertex from the queue

FIGURE 3.12 Illustration of the BFS-based algorithm for finding a minimum-edge path. (a) Graph. (b) Part of its BFS tree that identifies the minimum-edge path from a to g.

Breadth-first search has the same efficiency as depth-first search: it is in Θ(|V|²) for the adjacency matrix representation and in Θ(|V| + |E|) for the adjacency list representation. Unlike depth-first search, it yields a single ordering of vertices because the queue is a FIFO (first-in first-out) structure, and hence the order in which vertices are added to the queue is the same order in which they are removed from it. As to the structure of a BFS forest of an undirected graph, it can also have two kinds of edges: tree edges and cross edges. Tree edges are the ones used to reach previously unvisited vertices. Cross edges connect vertices to those visited before, but, unlike back edges in a DFS tree, they connect vertices either on the same or on adjacent levels of a BFS tree. BFS can be used to check connectivity and acyclicity of a graph, essentially in the same manner as DFS can.
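The BFS pseudocode can likewise be sketched in Python. This is an illustrative sketch, not the text's code: the graph is assumed to be a dict of adjacency lists, and a collections.deque serves as the queue. The second function shows a classic BFS application, finding a path with the fewest edges between two vertices, by keeping parent links instead of visit numbers.

```python
from collections import deque

def bfs_number(graph):
    """Mark vertices 1, 2, 3, ... in the order BFS visits them."""
    order = {}
    count = 0
    for s in graph:                  # restart in another component if needed
        if s in order:
            continue
        count += 1
        order[s] = count
        queue = deque([s])
        while queue:
            v = queue[0]             # front vertex
            for w in graph[v]:
                if w not in order:   # unvisited
                    count += 1
                    order[w] = count
                    queue.append(w)
            queue.popleft()          # remove the front vertex
    return order

def min_edge_path(graph, s, t):
    """Fewest-edge path from s to t, or None if t is unreachable."""
    parent = {s: None}
    queue = deque([s])
    while queue:
        v = queue.popleft()
        if v == t:
            break
        for w in graph[v]:
            if w not in parent:
                parent[w] = v
                queue.append(w)
    if t not in parent:
        return None
    path = []
    while t is not None:             # walk parent links back to s
        path.append(t)
        t = parent[t]
    return path[::-1]
```

The concrete graphs used to exercise these functions are hypothetical, not the ones in the figures.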
It is not applicable, however, for several less straightforward applications such as finding articulation points. On the other hand, it can be helpful in some situations where DFS cannot. For example, BFS can be used for finding a path with the fewest number of edges between two given vertices. To do this, we start a BFS traversal at one of the two vertices and stop it as soon as the other vertex is reached. The simple path from the root of the BFS tree to the second vertex is the path sought. For example, path a − b − c − g in the graph in Figure 3.12 has the fewest number of edges among all the paths between vertices a and g. Although the correctness of this application appears to stem immediately from the way BFS operates, a mathematical proof of its validity is not quite elementary (see, e.g., [Cor09, Section 22.2]).

Table 3.1 summarizes the main facts about depth-first search and breadth-first search.

TABLE 3.1 Main facts about depth-first search (DFS) and breadth-first search (BFS)

                                    DFS                     BFS
Data structure                      a stack                 a queue
Number of vertex orderings          two orderings           one ordering
Edge types (undirected graphs)      tree and back edges     tree and cross edges
Applications                        connectivity,           connectivity,
                                    acyclicity,             acyclicity,
                                    articulation points     minimum-edge paths
Efficiency for adjacency matrix     Θ(|V|²)                 Θ(|V|²)
Efficiency for adjacency lists      Θ(|V| + |E|)            Θ(|V| + |E|)

Exercises 3.5

1. Consider the following graph. [figure: a seven-vertex graph with vertices a–g]
a. Write down the adjacency matrix and adjacency lists specifying this graph. (Assume that the matrix rows and columns and vertices in the adjacency lists follow in the alphabetical order of the vertex labels.)
b. Starting at vertex a and resolving ties by the vertex alphabetical order, traverse the graph by depth-first search and construct the corresponding depth-first search tree.
Give the order in which the vertices were reached for the ﬁrst time (pushed onto the traversal stack) and the order in which the vertices became dead ends (popped off the stack). 2. If we deﬁne sparse graphs as graphs for which |E|∈O(|V |), which implemen- tation of DFS will have a better time efﬁciency for such graphs, the one that uses the adjacency matrix or the one that uses the adjacency lists? 3. Let G be a graph with n vertices and m edges. a. True or false: All its DFS forests (for traversals starting at different ver- tices) will have the same number of trees? b. True or false: All its DFS forests will have the same number of tree edges and the same number of back edges? 4. Traverse the graph of Problem 1 by breadth-ﬁrst search and construct the corresponding breadth-ﬁrst search tree. Start the traversal at vertex a and resolve ties by the vertex alphabetical order. 3.5 Depth-First Search and Breadth-First Search 129 5. Prove that a cross edge in a BFS tree of an undirected graph can connect vertices only on either the same level or on two adjacent levels of a BFS tree. 6. a. Explain how one can check a graph’s acyclicity by using breadth-ﬁrst search. b. Does either of the two traversals—DFS or BFS—always ﬁnd a cycle faster than the other? If you answer yes, indicate which of them is better and explain why it is the case; if you answer no, give two examples supporting your answer. 7. Explain how one can identify connected components of a graph by using a. a depth-ﬁrst search. b. a breadth-ﬁrst search. 8. A graph is said to be bipartite if all its vertices can be partitioned into two disjoint subsets X and Y so that every edge connects a vertex in X with a vertex in Y. (One can also say that a graph is bipartite if its vertices can be colored in two colors so that every edge has its vertices colored in different colors; such graphs are also called 2-colorable.) For example, graph (i) is bipartite while graph (ii) is not. 
a. Design a DFS-based algorithm for checking whether a graph is bipartite.
b. Design a BFS-based algorithm for checking whether a graph is bipartite.

9. Write a program that, for a given graph, outputs:
a. vertices of each connected component
b. its cycle or a message that the graph is acyclic

10. One can model a maze by having a vertex for a starting point, a finishing point, dead ends, and all the points in the maze where more than one path can be taken, and then connecting the vertices according to the paths in the maze.
a. Construct such a graph for the following maze.
b. Which traversal—DFS or BFS—would you use if you found yourself in a maze, and why?

11. Three Jugs Siméon Denis Poisson (1781–1840), a famous French mathematician and physicist, is said to have become interested in mathematics after encountering some version of the following old puzzle. Given an 8-pint jug full of water and two empty jugs of 5- and 3-pint capacity, get exactly 4 pints of water in one of the jugs by completely filling up and/or emptying jugs into others. Solve this puzzle by using breadth-first search.

SUMMARY

Brute force is a straightforward approach to solving a problem, usually directly based on the problem statement and definitions of the concepts involved.

The principal strengths of the brute-force approach are wide applicability and simplicity; its principal weakness is the subpar efficiency of most brute-force algorithms.

A first application of the brute-force approach often results in an algorithm that can be improved with a modest amount of effort.

The following noted algorithms can be considered as examples of the brute-force approach:
. definition-based algorithm for matrix multiplication
. selection sort
. sequential search
. straightforward string-matching algorithm

Exhaustive search is a brute-force approach to combinatorial problems.
It suggests generating each and every combinatorial object of the problem, selecting those of them that satisfy all the constraints, and then finding a desired object.

The traveling salesman problem, the knapsack problem, and the assignment problem are typical examples of problems that can be solved, at least theoretically, by exhaustive-search algorithms.

Exhaustive search is impractical for all but very small instances of the problems it can be applied to.

Depth-first search (DFS) and breadth-first search (BFS) are two principal graph-traversal algorithms. By representing a graph in the form of a depth-first or breadth-first search forest, they help in the investigation of many important properties of the graph. Both algorithms have the same time efficiency: Θ(|V|²) for the adjacency matrix representation and Θ(|V| + |E|) for the adjacency list representation.

4 Decrease-and-Conquer

Plutarch says that Sertorius, in order to teach his soldiers that perseverance and wit are better than brute force, had two horses brought before them, and set two men to pull out their tails. One of the men was a burly Hercules, who tugged and tugged, but all to no purpose; the other was a sharp, weasel-faced tailor, who plucked one hair at a time, amidst roars of laughter, and soon left the tail quite bare.
—E. Cobham Brewer, Dictionary of Phrase and Fable, 1898

The decrease-and-conquer technique is based on exploiting the relationship between a solution to a given instance of a problem and a solution to its smaller instance. Once such a relationship is established, it can be exploited either top down or bottom up. The former leads naturally to a recursive implementation, although, as one can see from several examples in this chapter, an ultimate implementation may well be nonrecursive. The bottom-up variation is usually implemented iteratively, starting with a solution to the smallest instance of the problem; it is sometimes called the incremental approach.
There are three major variations of decrease-and-conquer:
. decrease by a constant
. decrease by a constant factor
. variable size decrease

In the decrease-by-a-constant variation, the size of an instance is reduced by the same constant on each iteration of the algorithm. Typically, this constant is equal to one (Figure 4.1), although other constant size reductions do happen occasionally.

FIGURE 4.1 Decrease-(by one)-and-conquer technique.

Consider, as an example, the exponentiation problem of computing a^n where a ≠ 0 and n is a nonnegative integer. The relationship between a solution to an instance of size n and an instance of size n − 1 is obtained by the obvious formula a^n = a^(n−1) · a. So the function f(n) = a^n can be computed either "top down" by using its recursive definition

    f(n) = f(n − 1) · a   if n > 0,
           1              if n = 0,          (4.1)

or "bottom up" by multiplying 1 by a n times. (Yes, it is the same as the brute-force algorithm, but we have come to it by a different thought process.) More interesting examples of decrease-by-one algorithms appear in Sections 4.1–4.3.

The decrease-by-a-constant-factor technique suggests reducing a problem instance by the same constant factor on each iteration of the algorithm. In most applications, this constant factor is equal to two. (Can you give an example of such an algorithm?) The decrease-by-half idea is illustrated in Figure 4.2.

For an example, let us revisit the exponentiation problem. If the instance of size n is to compute a^n, the instance of half its size is to compute a^(n/2), with the obvious relationship between the two: a^n = (a^(n/2))². But since we consider here instances with integer exponents only, the former does not work for odd n. If n is odd, we have to compute a^(n−1) by using the rule for even-valued exponents and then multiply the result by a.
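Both exponentiation strategies just described can be sketched in Python. These are illustrative sketches under the stated formulas, not code from the text: the first makes n recursive multiplications (decrease by one), while the second halves the exponent on each call and so uses only about log₂ n squarings and multiplications.

```python
def power_dec1(a, n):
    """Compute a**n by decrease-by-one: a^n = a^(n-1) * a, as in formula (4.1)."""
    return 1 if n == 0 else power_dec1(a, n - 1) * a

def power_halving(a, n):
    """Compute a**n by decrease-by-a-constant-factor (halving the exponent)."""
    if n == 0:
        return 1
    if n % 2 == 0:                       # even and positive: a^n = (a^(n/2))^2
        half = power_halving(a, n // 2)
        return half * half
    return power_halving(a, (n - 1) // 2) ** 2 * a   # odd n: (a^((n-1)/2))^2 * a
```

Both return the same values; only the number of multiplications differs.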
To summarize, we have the following formula:

FIGURE 4.2 Decrease-(by half)-and-conquer technique.

    a^n = (a^(n/2))²            if n is even and positive,
          (a^((n−1)/2))² · a    if n is odd,
          1                     if n = 0.          (4.2)

If we compute a^n recursively according to formula (4.2) and measure the algorithm's efficiency by the number of multiplications, we should expect the algorithm to be in Θ(log n) because, on each iteration, the size is reduced by about a half at the expense of one or two multiplications. A few other examples of decrease-by-a-constant-factor algorithms are given in Section 4.4 and its exercises. Such algorithms are so efficient, however, that there are few examples of this kind.

Finally, in the variable-size-decrease variety of decrease-and-conquer, the size-reduction pattern varies from one iteration of an algorithm to another. Euclid's algorithm for computing the greatest common divisor provides a good example of such a situation. Recall that this algorithm is based on the formula

    gcd(m, n) = gcd(n, m mod n).

Though the value of the second argument is always smaller on the right-hand side than on the left-hand side, it decreases neither by a constant nor by a constant factor. A few other examples of such algorithms appear in Section 4.5.

4.1 Insertion Sort

In this section, we consider an application of the decrease-by-one technique to sorting an array A[0..n − 1]. Following the technique's idea, we assume that the smaller problem of sorting the array A[0..n − 2] has already been solved to give us a sorted array of size n − 1: A[0] ≤ ... ≤ A[n − 2]. How can we take advantage of this solution to the smaller problem to get a solution to the original problem by taking into account the element A[n − 1]? Obviously, all we need is to find an appropriate position for A[n − 1] among the sorted elements and insert it there.
This is usually done by scanning the sorted subarray from right to left until the first element smaller than or equal to A[n − 1] is encountered, and then inserting A[n − 1] right after that element. The resulting algorithm is called straight insertion sort or simply insertion sort.

Though insertion sort is clearly based on a recursive idea, it is more efficient to implement this algorithm bottom up, i.e., iteratively. As shown in Figure 4.3, starting with A[1] and ending with A[n − 1], A[i] is inserted in its appropriate place among the first i elements of the array that have been already sorted (but, unlike in selection sort, are generally not in their final positions). Here is pseudocode of this algorithm.

ALGORITHM InsertionSort(A[0..n − 1])
//Sorts a given array by insertion sort
//Input: An array A[0..n − 1] of n orderable elements
//Output: Array A[0..n − 1] sorted in nondecreasing order
for i ← 1 to n − 1 do
    v ← A[i]
    j ← i − 1
    while j ≥ 0 and A[j] > v do
        A[j + 1] ← A[j]
        j ← j − 1
    A[j + 1] ← v

FIGURE 4.3 Iteration of insertion sort: A[i] is inserted in its proper position among the preceding elements previously sorted.

89 | 45  68  90  29  34  17
45  89 | 68  90  29  34  17
45  68  89 | 90  29  34  17
45  68  89  90 | 29  34  17
29  45  68  89  90 | 34  17
29  34  45  68  89  90 | 17
17  29  34  45  68  89  90

FIGURE 4.4 Example of sorting with insertion sort. A vertical bar separates the sorted part of the array from the remaining elements; the element being inserted is in bold.

The operation of the algorithm is illustrated in Figure 4.4. The basic operation of the algorithm is the key comparison A[j] > v. (Why not j ≥ 0? Because it is almost certainly faster than the former in an actual computer implementation. Moreover, it is not germane to the algorithm: a better implementation with a sentinel—see Problem 8 in this section's exercises—eliminates it altogether.)
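The pseudocode carries over to Python nearly verbatim. This sketch (not the text's code) sorts the list in place and returns it for convenience:

```python
def insertion_sort(a):
    """Straight insertion sort of a list, mirroring the pseudocode."""
    for i in range(1, len(a)):
        v = a[i]
        j = i - 1
        while j >= 0 and a[j] > v:   # the key comparison A[j] > v
            a[j + 1] = a[j]          # shift larger elements one position right
            j -= 1
        a[j + 1] = v                 # insert v in its proper place
    return a
```

Running it on the array of Figure 4.4, [89, 45, 68, 90, 29, 34, 17], reproduces the trace shown there.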
The number of key comparisons in this algorithm obviously depends on the nature of the input. In the worst case, A[j] > v is executed the largest number of times, i.e., for every j = i − 1, ..., 0. Since v = A[i], it happens if and only if A[j] > A[i] for j = i − 1, ..., 0. (Note that we are using the fact that on the ith iteration of insertion sort all the elements preceding A[i] are the first i elements in the input, albeit in the sorted order.) Thus, for the worst-case input, we get A[0] > A[1] (for i = 1), A[1] > A[2] (for i = 2), ..., A[n − 2] > A[n − 1] (for i = n − 1). In other words, the worst-case input is an array of strictly decreasing values. The number of key comparisons for such an input is

    C_worst(n) = Σ_{i=1}^{n−1} Σ_{j=0}^{i−1} 1 = Σ_{i=1}^{n−1} i = (n − 1)n / 2 ∈ Θ(n²).

Thus, in the worst case, insertion sort makes exactly the same number of comparisons as selection sort (see Section 3.1).

In the best case, the comparison A[j] > v is executed only once on every iteration of the outer loop. It happens if and only if A[i − 1] ≤ A[i] for every i = 1, ..., n − 1, i.e., if the input array is already sorted in nondecreasing order. (Though it "makes sense" that the best case of an algorithm happens when the problem is already solved, it is not always the case, as you are going to see in our discussion of quicksort in Chapter 5.) Thus, for sorted arrays, the number of key comparisons is

    C_best(n) = Σ_{i=1}^{n−1} 1 = n − 1 ∈ Θ(n).

This very good performance in the best case of sorted arrays is not very useful by itself, because we cannot expect such convenient inputs. However, almost-sorted files do arise in a variety of applications, and insertion sort preserves its excellent performance on such inputs. A rigorous analysis of the algorithm's average-case efficiency is based on investigating the number of element pairs that are out of order (see Problem 11 in this section's exercises).
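The comparison counts derived above can be checked empirically. The instrumented sketch below is a hypothetical helper, not part of the text: it counts every execution of the key comparison A[j] > v, returning n(n − 1)/2 on a strictly decreasing array of size n and n − 1 on a sorted one.

```python
def count_comparisons(a):
    """Run insertion sort on a copy of a, counting key comparisons A[j] > v."""
    a = list(a)
    count = 0
    for i in range(1, len(a)):
        v, j = a[i], i - 1
        while j >= 0:
            count += 1               # one execution of the comparison A[j] > v
            if a[j] <= v:            # comparison failed: v's place is found
                break
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = v
    return count
```

For example, the decreasing array [5, 4, 3, 2, 1] yields 10 = 5·4/2 comparisons, while [1, 2, 3, 4, 5] yields 4 = n − 1.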
It shows that on randomly ordered arrays, insertion sort makes on average half as many comparisons as on decreasing arrays, i.e.,

    C_avg(n) ≈ n²/4 ∈ Θ(n²).

This twice-as-fast average-case performance, coupled with an excellent efficiency on almost-sorted arrays, makes insertion sort stand out among its principal competitors among elementary sorting algorithms, selection sort and bubble sort. In addition, its extension named shellsort, after its inventor D. L. Shell [She59], gives us an even better algorithm for sorting moderately large files (see Problem 12 in this section's exercises).

Exercises 4.1

1. Ferrying soldiers A detachment of n soldiers must cross a wide and deep river with no bridge in sight. They notice two 12-year-old boys playing in a rowboat by the shore. The boat is so tiny, however, that it can only hold two boys or one soldier. How can the soldiers get across the river and leave the boys in joint possession of the boat? How many times need the boat pass from shore to shore?

2. Alternating glasses
a. There are 2n glasses standing next to each other in a row, the first n of them filled with a soda drink and the remaining n glasses empty. Make the glasses alternate in a filled-empty-filled-empty pattern in the minimum number of glass moves. [Gar78]
b. Solve the same problem if 2n glasses—n with a drink and n empty—are initially in a random order.

3. Marking cells Design an algorithm for the following task. For any even n, mark n cells on an infinite sheet of graph paper so that each marked cell has an odd number of marked neighbors. Two cells are considered neighbors if they are next to each other either horizontally or vertically but not diagonally. The marked cells must form a contiguous region, i.e., a region in which there is a path between any pair of marked cells that goes through a sequence of marked neighbors. [Kor05]

4. Design a decrease-by-one algorithm for generating the power set of a set of n elements.
(The power set of a set S is the set of all the subsets of S, including the empty set and S itself.)

5. Consider the following algorithm to check connectivity of a graph defined by its adjacency matrix.

ALGORITHM Connected(A[0..n − 1, 0..n − 1])
//Input: Adjacency matrix A[0..n − 1, 0..n − 1] of an undirected graph G
//Output: 1 (true) if G is connected and 0 (false) if it is not
if n = 1 return 1 //one-vertex graph is connected by definition
else
    if not Connected(A[0..n − 2, 0..n − 2]) return 0
    else
        for j ← 0 to n − 2 do
            if A[n − 1, j] return 1
        return 0

Does this algorithm work correctly for every undirected graph with n > 0 vertices? If you answer yes, indicate the algorithm's efficiency class in the worst case; if you answer no, explain why.

6. Team ordering You have the results of a completed round-robin tournament in which n teams played each other once. Each game ended either with a victory for one of the teams or with a tie. Design an algorithm that lists the teams in a sequence so that every team did not lose the game with the team listed immediately after it. What is the time efficiency class of your algorithm?

7. Apply insertion sort to sort the list E, X, A, M, P, L, E in alphabetical order.

8. a. What sentinel should be put before the first element of an array being sorted in order to avoid checking the in-bound condition j ≥ 0 on each iteration of the inner loop of insertion sort?
b. Is the sentinel version in the same efficiency class as the original version?

9. Is it possible to implement insertion sort for sorting linked lists? Will it have the same O(n²) time efficiency as the array version?

10. Compare the text's implementation of insertion sort with the following version.

ALGORITHM InsertSort2(A[0..n − 1])
for i ← 1 to n − 1 do
    j ← i − 1
    while j ≥ 0 and A[j] > A[j + 1] do
        swap(A[j], A[j + 1])
        j ← j − 1

What is the time efficiency of this algorithm? How is it compared to that of the version given in Section 4.1?

11.
Let A[0..n − 1] be an array of n sortable elements. (For simplicity, you may assume that all the elements are distinct.) A pair (A[i], A[j]) is called an inversion if i < j and A[i] > A[j].
a. What arrays of size n have the largest number of inversions and what is this number? Answer the same questions for the smallest number of inversions.
b. Show that the average-case number of key comparisons in insertion sort is given by the formula

    C_avg(n) ≈ n²/4.

12. Shellsort (more accurately Shell's sort) is an important sorting algorithm that works by applying insertion sort to each of several interleaving sublists of a given list. On each pass through the list, the sublists in question are formed by stepping through the list with an increment h_i taken from some predefined decreasing sequence of step sizes, h_1 > ... > h_i > ... > 1, which must end with 1. (The algorithm works for any such sequence, though some sequences are known to yield a better efficiency than others. For example, the sequence 1, 4, 13, 40, 121, ..., used, of course, in reverse, is known to be among the best for this purpose.)
a. Apply shellsort to the list S, H, E, L, L, S, O, R, T, I, S, U, S, E, F, U, L
b. Is shellsort a stable sorting algorithm?
c. Implement shellsort, straight insertion sort, selection sort, and bubble sort in the language of your choice and compare their performance on random arrays of sizes 10^n for n = 2, 3, 4, 5, and 6 as well as on increasing and decreasing arrays of these sizes.

4.2 Topological Sorting

In this section, we discuss an important problem for directed graphs, with a variety of applications involving prerequisite-restricted tasks. Before we pose this problem, though, let us review a few basic facts about directed graphs themselves. A directed graph, or digraph for short, is a graph with directions specified for all its edges (Figure 4.5a is an example). The adjacency matrix and adjacency lists are still two principal means of representing a digraph.
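The two representations can be contrasted with a small sketch. Note that the concrete digraph below is inferred from the description of Figure 4.5a given in the text (edges ab, ba, ac, bc, dc, de), so treat it as illustrative; the helper itself simply converts adjacency lists into an adjacency matrix.

```python
def to_matrix(adj, vertices):
    """Build the adjacency matrix of a digraph from its adjacency lists."""
    index = {v: i for i, v in enumerate(vertices)}
    matrix = [[0] * len(vertices) for _ in vertices]
    for v, successors in adj.items():
        for w in successors:
            matrix[index[v]][index[w]] = 1   # one matrix entry per directed edge
    return matrix

# Digraph inferred from Figure 4.5a; each edge appears in exactly one list.
digraph = {'a': ['b', 'c'], 'b': ['a', 'c'], 'c': [], 'd': ['c', 'e'], 'e': []}
```

Because edge ac has no companion ca, the resulting matrix is not symmetric, unlike the adjacency matrix of an undirected graph.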
There are only two notable differences between undirected and directed graphs in representing them: (1) the adjacency matrix of a directed graph does not have to be symmetric; (2) an edge in a directed graph has just one (not two) corresponding nodes in the digraph's adjacency lists.

FIGURE 4.5 (a) Digraph. (b) DFS forest of the digraph for the DFS traversal started at a.

Depth-first search and breadth-first search are principal traversal algorithms for traversing digraphs as well, but the structure of corresponding forests can be more complex than for undirected graphs. Thus, even for the simple example of Figure 4.5a, the depth-first search forest (Figure 4.5b) exhibits all four types of edges possible in a DFS forest of a directed graph: tree edges (ab, bc, de), back edges (ba) from vertices to their ancestors, forward edges (ac) from vertices to their descendants in the tree other than their children, and cross edges (dc), which are none of the aforementioned types. Note that a back edge in a DFS forest of a directed graph can connect a vertex to its parent. Whether or not it is the case, the presence of a back edge indicates that the digraph has a directed cycle. A directed cycle in a digraph is a sequence of three or more of its vertices that starts and ends with the same vertex and in which every vertex is connected to its immediate predecessor by an edge directed from the predecessor to the successor. For example, a, b, a is a directed cycle in the digraph in Figure 4.5a. Conversely, if a DFS forest of a digraph has no back edges, the digraph is a dag, an acronym for directed acyclic graph.

Edge directions lead to new questions about digraphs that are either meaningless or trivial for undirected graphs. In this section, we discuss one such question. As a motivating example, consider a set of five required courses {C1, C2, C3, C4, C5} a part-time student has to take in some degree program.
The courses can be taken in any order as long as the following course prerequisites are met: C1 and C2 have no prerequisites, C3 requires C1 and C2, C4 requires C3, and C5 requires C3 and C4. The student can take only one course per term. In which order should the student take the courses? The situation can be modeled by a digraph in which vertices represent courses and directed edges indicate prerequisite requirements (Figure 4.6). In terms of this digraph, the question is whether we can list its vertices in such an order that for every edge in the graph, the vertex where the edge starts is listed before the vertex where the edge ends. (Can you find such an ordering of this digraph's vertices?) This problem is called topological sorting. It can be posed for an arbitrary digraph, but it is easy to see that the problem cannot have a solution if a digraph has a directed cycle. Thus, for topological sorting to be possible, a digraph in question must be a dag. It turns out that being a dag is not only necessary but also sufficient for topological sorting to be possible; i.e., if a digraph has no directed cycles, the topological sorting problem for it has a solution. Moreover, there are two efficient algorithms that both verify whether a digraph is a dag and, if it is, produce an ordering of vertices that solves the topological sorting problem.

FIGURE 4.6 Digraph representing the prerequisite structure of five courses.

FIGURE 4.7 (a) Digraph for which the topological sorting problem needs to be solved. (b) DFS traversal stack with the subscript numbers indicating the popping-off order: C5, C4, C3, C1, C2. (c) Solution to the problem: the topologically sorted list C2, C1, C3, C4, C5.

The first algorithm is a simple application of depth-first search: perform a DFS traversal and note the order in which vertices become dead ends (i.e., popped off the traversal stack).
Reversing this order yields a solution to the topological sorting problem, provided, of course, no back edge has been encountered during the traversal. If a back edge has been encountered, the digraph is not a dag, and topological sorting of its vertices is impossible.

Why does the algorithm work? When a vertex v is popped off a DFS stack, no vertex u with an edge from u to v can be among the vertices popped off before v. (Otherwise, (u, v) would have been a back edge.) Hence, any such vertex u will be listed after v in the popped-off order list, and before v in the reversed list.

Figure 4.7 illustrates an application of this algorithm to the digraph in Figure 4.6. Note that in Figure 4.7c, we have drawn the edges of the digraph, and they all point from left to right as the problem's statement requires. It is a convenient way to check visually the correctness of a solution to an instance of the topological sorting problem.

The second algorithm is based on a direct implementation of the decrease-(by one)-and-conquer technique: repeatedly, identify in a remaining digraph a source, which is a vertex with no incoming edges, and delete it along with all the edges outgoing from it. (If there are several sources, break the tie arbitrarily. If there are none, stop because the problem cannot be solved—see Problem 6a in this section's exercises.) The order in which the vertices are deleted yields a solution to the topological sorting problem. The application of this algorithm to the same digraph representing the five courses is given in Figure 4.8.

FIGURE 4.8 Illustration of the source-removal algorithm for the topological sorting problem. On each iteration, a vertex with no incoming edges is deleted from the digraph. The vertices are deleted in the order C1, C2, C3, C4, C5, which is the solution obtained.
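Both algorithms can be sketched compactly in Python. In the sketch below, a digraph is assumed to be given as a dict mapping each vertex to the list of its out-neighbors; the function names `dfs_topo` and `source_removal` are ours, introduced for illustration.

```python
def dfs_topo(graph):
    """DFS-based topological sort: the reverse of the popping-off order.
    Raises ValueError if a back edge (i.e., a directed cycle) is found."""
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on the DFS stack / popped off
    color = {v: WHITE for v in graph}
    popped = []                           # vertices in popping-off (dead-end) order

    def dfs(v):
        color[v] = GRAY
        for u in graph[v]:
            if color[u] == GRAY:          # back edge: the digraph is not a dag
                raise ValueError("digraph has a directed cycle")
            if color[u] == WHITE:
                dfs(u)
        color[v] = BLACK
        popped.append(v)                  # v becomes a dead end

    for v in graph:
        if color[v] == WHITE:
            dfs(v)
    return popped[::-1]                   # reversed popping-off order

def source_removal(graph):
    """Decrease-by-one algorithm: repeatedly delete a source (in-degree 0)."""
    indegree = {v: 0 for v in graph}
    for v in graph:
        for u in graph[v]:
            indegree[u] += 1
    sources = [v for v in graph if indegree[v] == 0]
    order = []
    while sources:
        v = sources.pop()                 # ties between sources broken arbitrarily
        order.append(v)
        for u in graph[v]:                # delete v's outgoing edges
            indegree[u] -= 1
            if indegree[u] == 0:
                sources.append(u)
    if len(order) < len(graph):           # no source left: a directed cycle remains
        raise ValueError("digraph has a directed cycle")
    return order

# The five-course digraph of Figure 4.6:
courses = {"C1": ["C3"], "C2": ["C3"], "C3": ["C4", "C5"], "C4": ["C5"], "C5": []}
print(dfs_topo(courses))          # ['C2', 'C1', 'C3', 'C4', 'C5'], as in Figure 4.7c
print(source_removal(courses))    # another valid ordering; ties are broken arbitrarily
```

As the text notes, both algorithms detect non-dags as a byproduct: `dfs_topo` by meeting a back edge, `source_removal` by running out of sources before all vertices are deleted.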
Note that the solution obtained by the source-removal algorithm is different from the one obtained by the DFS-based algorithm. Both of them are correct, of course; the topological sorting problem may have several alternative solutions.

The tiny size of the example we used might create a wrong impression about the topological sorting problem. But imagine a large project—e.g., in construction, research, or software development—that involves a multitude of interrelated tasks with known prerequisites. The first thing to do in such a situation is to make sure that the set of given prerequisites is not contradictory. The convenient way of doing this is to solve the topological sorting problem for the project's digraph. Only then can one start thinking about scheduling tasks to, say, minimize the total completion time of the project. This would require, of course, other algorithms that you can find in general books on operations research or in special ones on CPM (Critical Path Method) and PERT (Program Evaluation and Review Technique) methodologies. As to applications of topological sorting in computer science, they include instruction scheduling in program compilation, cell evaluation ordering in spreadsheet formulas, and resolving symbol dependencies in linkers.

Exercises 4.2

1. Apply the DFS-based algorithm to solve the topological sorting problem for the following digraphs: [two digraphs, (a) and (b), shown in the original figure]

2. a. Prove that the topological sorting problem has a solution if and only if the digraph is a dag.
   b. For a digraph with n vertices, what is the largest number of distinct solutions the topological sorting problem can have?

3. a. What is the time efficiency of the DFS-based algorithm for topological sorting?
   b. How can one modify the DFS-based algorithm to avoid reversing the vertex ordering generated by DFS?

4.
Can one use the order in which vertices are pushed onto the DFS stack (instead of the order they are popped off it) to solve the topological sorting problem?

5. Apply the source-removal algorithm to the digraphs of Problem 1 above.

6. a. Prove that a nonempty dag must have at least one source.
   b. How would you find a source (or determine that such a vertex does not exist) in a digraph represented by its adjacency matrix? What is the time efficiency of this operation?
   c. How would you find a source (or determine that such a vertex does not exist) in a digraph represented by its adjacency lists? What is the time efficiency of this operation?

7. Can you implement the source-removal algorithm for a digraph represented by its adjacency lists so that its running time is in O(|V| + |E|)?

8. Implement the two topological sorting algorithms in the language of your choice. Run an experiment to compare their running times.

9. A digraph is called strongly connected if for any pair of two distinct vertices u and v there exists a directed path from u to v and a directed path from v to u. In general, a digraph's vertices can be partitioned into disjoint maximal subsets of vertices that are mutually accessible via directed paths; these subsets are called strongly connected components of the digraph. There are two DFS-based algorithms for identifying strongly connected components. Here is the simpler (but somewhat less efficient) one of the two:

   Step 1 Perform a DFS traversal of the digraph given and number its vertices in the order they become dead ends.
   Step 2 Reverse the directions of all the edges of the digraph.
   Step 3 Perform a DFS traversal of the new digraph by starting (and, if necessary, restarting) the traversal at the highest numbered vertex among still unvisited vertices.

   The strongly connected components are exactly the vertices of the DFS trees obtained during the last traversal.

   a.
Apply this algorithm to the following digraph to determine its strongly connected components: [a digraph on vertices a, b, c, d, e, f, g, h, shown in the original figure]
   b. What is the time efficiency class of this algorithm? Give separate answers for the adjacency matrix representation and adjacency list representation of an input digraph.
   c. How many strongly connected components does a dag have?

10. Spider's web A spider sits at the bottom (point S) of its web, and a fly sits at the top (F). How many different ways can the spider reach the fly by moving along the web's lines in the directions indicated by the arrows? [Kor05] [web diagram shown in the original figure]

4.3 Algorithms for Generating Combinatorial Objects

In this section, we keep our promise to discuss algorithms for generating combinatorial objects. The most important types of combinatorial objects are permutations, combinations, and subsets of a given set. They typically arise in problems that require a consideration of different choices. We already encountered them in Chapter 3 when we discussed exhaustive search. Combinatorial objects are studied in a branch of discrete mathematics called combinatorics. Mathematicians, of course, are primarily interested in different counting formulas; we should be grateful for such formulas because they tell us how many items need to be generated. In particular, they warn us that the number of combinatorial objects typically grows exponentially or even faster as a function of the problem size. But our primary interest here lies in algorithms for generating combinatorial objects, not just in counting them.

Generating Permutations

We start with permutations. For simplicity, we assume that the underlying set whose elements need to be permuted is simply the set of integers from 1 to n; more generally, they can be interpreted as indices of elements in an n-element set {a1, ..., an}. What would the decrease-by-one technique suggest for the problem of generating all n! permutations of {1, ..., n}?
The smaller-by-one problem is to generate all (n − 1)! permutations. Assuming that the smaller problem is solved, we can get a solution to the larger one by inserting n in each of the n possible positions among elements of every permutation of n − 1 elements. All the permutations obtained in this fashion will be distinct (why?), and their total number will be n(n − 1)! = n!. Hence, we will obtain all the permutations of {1, ..., n}.

We can insert n in the previously generated permutations either left to right or right to left. It turns out that it is beneficial to start with inserting n into 12...(n − 1) by moving right to left and then switch direction every time a new permutation of {1, ..., n − 1} needs to be processed. An example of applying this approach bottom up for n = 3 is given in Figure 4.9.

start:                            1
insert 2 into 1 right to left:    12  21
insert 3 into 12 right to left:   123 132 312
insert 3 into 21 left to right:   321 231 213

FIGURE 4.9 Generating permutations bottom up.

The advantage of this order of generating permutations stems from the fact that it satisfies the minimal-change requirement: each permutation can be obtained from its immediate predecessor by exchanging just two elements in it. (For the method being discussed, these two elements are always adjacent to each other. Check this for the permutations generated in Figure 4.9.) The minimal-change requirement is beneficial both for the algorithm's speed and for applications using the permutations. For example, in Section 3.4, we needed permutations of cities to solve the traveling salesman problem by exhaustive search. If such permutations are generated by a minimal-change algorithm, we can compute the length of a new tour from the length of its predecessor in constant rather than linear time (how?).

It is possible to get the same ordering of permutations of n elements without explicitly generating permutations for smaller values of n.
It can be done by associating a direction with each element k in a permutation. We indicate such a direction by a small arrow attached to the element in question, e.g., 3→ 2← 4→ 1←. The element k is said to be mobile in such an arrow-marked permutation if its arrow points to a smaller number adjacent to it. For example, for the permutation 3→ 2← 4→ 1←, 3 and 4 are mobile while 2 and 1 are not. Using the notion of a mobile element, we can give the following description of the Johnson-Trotter algorithm for generating permutations.

ALGORITHM JohnsonTrotter(n)
//Implements Johnson-Trotter algorithm for generating permutations
//Input: A positive integer n
//Output: A list of all permutations of {1, ..., n}
initialize the first permutation with 1← 2← ... n←
while the last permutation has a mobile element do
    find its largest mobile element k
    swap k with the adjacent element k's arrow points to
    reverse the direction of all the elements that are larger than k
    add the new permutation to the list

Here is an application of this algorithm for n = 3 (at each step, the element swapped to produce the next permutation is its largest mobile element):

1← 2← 3←   1← 3← 2←   3← 1← 2←   3→ 2← 1←   2← 3→ 1←   2← 1← 3→

This algorithm is one of the most efficient for generating permutations; it can be implemented to run in time proportional to the number of permutations, i.e., in Θ(n!). Of course, it is horribly slow for all but very small values of n; however, this is not the algorithm's "fault" but rather the fault of the problem: it simply asks to generate too many items.

One can argue that the permutation ordering generated by the Johnson-Trotter algorithm is not quite natural; for example, the natural place for permutation n(n − 1)...1 seems to be the last one on the list.
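The pseudocode above translates almost line for line into Python. In the following sketch (our illustration, not the book's code), each element's direction is stored as −1 (arrow points left) or +1 (arrow points right):

```python
def johnson_trotter(n):
    """Generate all permutations of 1..n by the Johnson-Trotter algorithm."""
    perm = list(range(1, n + 1))
    direc = [-1] * n                      # initially every arrow points left
    result = [perm[:]]

    def largest_mobile():
        """Index of the largest mobile element, or -1 if none is mobile."""
        best = -1
        for i in range(n):
            j = i + direc[i]              # position the arrow points to
            if 0 <= j < n and perm[j] < perm[i]:      # mobile element
                if best == -1 or perm[i] > perm[best]:
                    best = i
        return best

    i = largest_mobile()
    while i != -1:
        k = perm[i]
        j = i + direc[i]
        perm[i], perm[j] = perm[j], perm[i]           # swap k in its arrow's direction
        direc[i], direc[j] = direc[j], direc[i]       # the arrow travels with it
        for t in range(n):                            # reverse arrows of elements > k
            if perm[t] > k:
                direc[t] = -direc[t]
        result.append(perm[:])
        i = largest_mobile()
    return result

print(johnson_trotter(3))
# [[1, 2, 3], [1, 3, 2], [3, 1, 2], [3, 2, 1], [2, 3, 1], [2, 1, 3]]
```

The printed sequence for n = 3 matches the arrow-marked run shown above, and consecutive permutations differ by one swap of adjacent elements, as the minimal-change requirement demands.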
That would be the case if permutations were listed in increasing order—also called the lexicographic order—which is the order in which they would be listed in a dictionary if the numbers were interpreted as letters of an alphabet. For example, for n = 3: 123 132 213 231 312 321.

So how can we generate the permutation following a1a2 ... an−1an in lexicographic order? If an−1 < an, we can simply transpose these last two elements. If an−1 > an, we find the permutation's longest decreasing suffix ai+1 > ai+2 > ... > an (but ai < ai+1); in this suffix, we find the largest index j such that ai < aj, exchange ai with aj, and then reverse the new suffix to put it in increasing order.

4.4 Decrease-by-a-Constant-Factor Algorithms

Binary search works by comparing a search key K with the middle element A[m] of a sorted array:

A[0] ... A[m − 1]        A[m]        A[m + 1] ... A[n − 1]
(search here if K < A[m])           (search here if K > A[m])

As an example, let us apply binary search to searching for K = 70 in the array

index:  0   1   2   3   4   5   6   7   8   9  10  11  12
value:  3  14  27  31  39  42  55  70  74  81  85  93  98

The iterations of the algorithm are as follows: iteration 1 has l = 0, m = 6, r = 12; iteration 2 has l = 7, m = 9, r = 12; iteration 3 has l = m = 7, r = 8, and A[7] = 70 = K, so the algorithm stops. Though binary search is clearly based on a recursive idea, it can be easily implemented as a nonrecursive algorithm, too. Here is pseudocode of this nonrecursive version.

ALGORITHM BinarySearch(A[0..n − 1], K)
//Implements nonrecursive binary search
//Input: An array A[0..n − 1] sorted in ascending order and
//       a search key K
//Output: An index of the array's element that is equal to K
//        or −1 if there is no such element
l ← 0; r ← n − 1
while l ≤ r do
    m ← ⌊(l + r)/2⌋
    if K = A[m] return m
    else if K < A[m] r ← m − 1
    else l ← m + 1
return −1

The worst-case number of key comparisons Cworst(n) satisfies the recurrence

Cworst(n) = Cworst(⌊n/2⌋) + 1 for n > 1,  Cworst(1) = 1.  (4.3)

(Stop and convince yourself that n/2 must be, indeed, rounded down and that the initial condition must be written as specified.) We already encountered recurrence (4.3), with a different initial condition, in Section 2.4 (see recurrence (2.4) and its solution there for n = 2^k). For the initial condition Cworst(1) = 1, we obtain Cworst(2^k) = k + 1 = log2 n + 1.
(4.4)

Further, similarly to the case of recurrence (2.4) (Problem 7 in Exercises 2.4), the solution given by formula (4.4) for n = 2^k can be tweaked to get a solution valid for an arbitrary positive integer n:

Cworst(n) = ⌊log2 n⌋ + 1 = ⌈log2(n + 1)⌉.  (4.5)

Formula (4.5) deserves attention. First, it implies that the worst-case time efficiency of binary search is in Θ(log n). Second, it is the answer we should have fully expected: since the algorithm simply reduces the size of the remaining array by about half on each iteration, the number of such iterations needed to reduce the initial size n to the final size 1 has to be about log2 n. Third, to reiterate the point made in Section 2.1, the logarithmic function grows so slowly that its values remain small even for very large values of n. In particular, according to formula (4.5), it will take no more than ⌈log2(10^3 + 1)⌉ = 10 three-way comparisons to find an element of a given value (or establish that there is no such element) in any sorted array of one thousand elements, and it will take no more than ⌈log2(10^6 + 1)⌉ = 20 comparisons to do it for any sorted array of size one million!

What can we say about the average-case efficiency of binary search? A sophisticated analysis shows that the average number of key comparisons made by binary search is only slightly smaller than that in the worst case: Cavg(n) ≈ log2 n. (More accurate formulas for the average number of comparisons in a successful and an unsuccessful search are C_avg^yes(n) ≈ log2 n − 1 and C_avg^no(n) ≈ log2(n + 1), respectively.)

Though binary search is an optimal searching algorithm if we restrict our operations only to comparisons between keys (see Section 11.2), there are searching algorithms (see interpolation search in Section 4.5 and hashing in Section 7.3) with a better average-case time efficiency, and one of them (hashing) does not even require the array to be sorted! These algorithms do require some special calculations in addition to key comparisons, however. Finally, the idea behind binary search has several applications beyond searching (see, e.g., [Ben00]). In addition, it can be applied to solving nonlinear equations in one unknown; we discuss this continuous analogue of binary search, called the method of bisection, in Section 12.4.

Fake-Coin Problem

Of several versions of the fake-coin identification problem, we consider here the one that best illustrates the decrease-by-a-constant-factor strategy. Among n identical-looking coins, one is fake. With a balance scale, we can compare any two sets of coins. That is, by tipping to the left, to the right, or staying even, the balance scale will tell whether the sets weigh the same or which of the sets is heavier than the other but not by how much. The problem is to design an efficient algorithm for detecting the fake coin. An easier version of the problem—the one we discuss here—assumes that the fake coin is known to be, say, lighter than the genuine one.¹

The most natural idea for solving this problem is to divide n coins into two piles of ⌊n/2⌋ coins each, leaving one extra coin aside if n is odd, and put the two piles on the scale. If the piles weigh the same, the coin put aside must be fake; otherwise, we can proceed in the same manner with the lighter pile, which must be the one with the fake coin. We can easily set up a recurrence relation for the number of weighings W(n) needed by this algorithm in the worst case:

W(n) = W(⌊n/2⌋) + 1 for n > 1,  W(1) = 0.

This recurrence should look familiar to you.

¹ A much more challenging version assumes no additional information about the relative weights of the fake and genuine coins or even the presence of the fake coin among n given coins. We pursue this more difficult version in the exercises for Section 11.2.
Indeed, it is almost identical to the one for the worst-case number of comparisons in binary search. (The difference is in the initial condition.) This similarity is not really surprising, since both algorithms are based on the same technique of halving an instance size. The solution to the recurrence for the number of weighings is also very similar to the one we had for binary search: W(n) = ⌊log2 n⌋.

This stuff should look elementary by now, if not outright boring. But wait: the interesting point here is the fact that the above algorithm is not the most efficient solution. It would be more efficient to divide the coins not into two but into three piles of about n/3 coins each. (Details of a precise formulation are developed in this section's exercises. Do not miss it! If your instructor forgets, ask him or her to assign Problem 10.) After weighing two of the piles, we can reduce the instance size by a factor of three. Accordingly, we should expect the number of weighings to be about log3 n, which is smaller than log2 n.

Russian Peasant Multiplication

Now we consider a nonorthodox algorithm for multiplying two positive integers called multiplication à la russe or the Russian peasant method. Let n and m be positive integers whose product we want to compute, and let us measure the instance size by the value of n. Now, if n is even, an instance of half the size has to deal with n/2, and we have an obvious formula relating the solution to the problem's larger instance to the solution to the smaller one:

n · m = (n/2) · 2m.

If n is odd, we need only a slight adjustment of this formula:

n · m = ((n − 1)/2) · 2m + m.

Using these formulas and the trivial case of 1 · m = m to stop, we can compute product n · m either recursively or iteratively. An example of computing 50 · 65 with this algorithm is given in Figure 4.11. Note that all the extra addends shown in parentheses in Figure 4.11a are in the rows that have odd values in the first column.
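A direct recursive implementation of these two formulas (a minimal sketch; the recursion bottoms out at the trivial case n = 1) might look like:

```python
def russian_peasant(n, m):
    """Multiply positive integers using only halving, doubling, and addition."""
    if n == 1:
        return m                                         # trivial case: 1 * m = m
    if n % 2 == 0:
        return russian_peasant(n // 2, 2 * m)            # n*m = (n/2) * (2m)
    return russian_peasant((n - 1) // 2, 2 * m) + m      # n*m = ((n-1)/2) * (2m) + m

print(russian_peasant(50, 65))   # 3250
```

The extra addend `+ m` on the odd branch corresponds exactly to the parenthesized addends of Figure 4.11a: an addend appears whenever the value in the n column is odd.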
Therefore, we can find the product by simply adding all the elements in the m column that have an odd number in the n column (Figure 4.11b).

Also note that the algorithm involves just the simple operations of halving, doubling, and adding—a feature that might be attractive, for example, to those who do not want to memorize the table of multiplications.

      (a)                          (b)
  n      m                     n      m
 50     65                    50     65
 25    130                    25    130    130
 12    260  (+130)            12    260
  6    520                     6    520
  3   1040                     3   1040   1040
  1   2080  (+1040)            1   2080   2080
      2080 + (130 + 1040) = 3250           3250

FIGURE 4.11 Computing 50 · 65 by the Russian peasant method.

It is this feature of the algorithm that most probably made it attractive to Russian peasants who, according to Western visitors, used it widely in the nineteenth century and for whom the method is named. (In fact, the method was known to Egyptian mathematicians as early as 1650 B.C. [Cha98, p. 16].) It also leads to very fast hardware implementation since doubling and halving of binary numbers can be performed using shifts, which are among the most basic operations at the machine level.

Josephus Problem

Our last example is the Josephus problem, named for Flavius Josephus, a famous Jewish historian who participated in and chronicled the Jewish revolt of 66–70 C.E. against the Romans. Josephus, as a general, managed to hold the fortress of Jotapata for 47 days, but after the fall of the city he took refuge with 40 diehards in a nearby cave. There, the rebels voted to perish rather than surrender. Josephus proposed that each man in turn should dispatch his neighbor, the order to be determined by casting lots. Josephus contrived to draw the last lot, and, as one of the two surviving men in the cave, he prevailed upon his intended victim to surrender to the Romans.

So let n people numbered 1 to n stand in a circle. Starting the grim count with person number 1, we eliminate every second person until only one survivor is left. The problem is to determine the survivor's number J(n).
For example (Figure 4.12), if n is 6, people in positions 2, 4, and 6 will be eliminated on the first pass through the circle, and people in initial positions 3 and 1 will be eliminated on the second pass, leaving a sole survivor in initial position 5—thus, J(6) = 5. To give another example, if n is 7, people in positions 2, 4, 6, and 1 will be eliminated on the first pass (it is more convenient to include 1 in the first pass) and people in positions 5 and, for convenience, 3 on the second—thus, J(7) = 7.

FIGURE 4.12 Instances of the Josephus problem for (a) n = 6 and (b) n = 7. Subscript numbers indicate the pass on which the person in that position is eliminated. The solutions are J(6) = 5 and J(7) = 7, respectively.

It is convenient to consider the cases of even and odd n's separately. If n is even, i.e., n = 2k, the first pass through the circle yields an instance of exactly the same problem but half its initial size. The only difference is in position numbering; for example, a person in initial position 3 will be in position 2 for the second pass, a person in initial position 5 will be in position 3, and so on (check Figure 4.12a). It is easy to see that to get the initial position of a person, we simply need to multiply his new position by 2 and subtract 1. This relationship will hold, in particular, for the survivor, i.e.,

J(2k) = 2J(k) − 1.

Let us now consider the case of an odd n (n > 1), i.e., n = 2k + 1. The first pass eliminates people in all even positions. If we add to this the elimination of the person in position 1 right after that, we are left with an instance of size k. Here, to get the initial position that corresponds to the new position numbering, we have to multiply the new position number by 2 and add 1 (check Figure 4.12b). Thus, for odd values of n, we get

J(2k + 1) = 2J(k) + 1.
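The two-case recurrence, together with the initial condition J(1) = 1, can be implemented directly (a small sketch of ours for checking values of J(n)):

```python
def josephus(n):
    """Survivor's number J(n), computed from the recurrence
    J(2k) = 2*J(k) - 1,  J(2k+1) = 2*J(k) + 1,  J(1) = 1."""
    if n == 1:
        return 1
    if n % 2 == 0:
        return 2 * josephus(n // 2) - 1    # even case: n = 2k
    return 2 * josephus(n // 2) + 1        # odd case:  n = 2k + 1

print([josephus(n) for n in range(1, 8)])  # [1, 1, 3, 1, 3, 5, 7]
```

The printed values agree with the worked examples J(6) = 5 and J(7) = 7, and the recursion makes only about log2 n calls, since each call halves the instance size.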
Can we get a closed-form solution to the two-case recurrence subject to the initial condition J(1) = 1? The answer is yes, though getting it requires more ingenuity than just applying backward substitutions. In fact, one way to find a solution is to apply forward substitutions to get, say, the first 15 values of J(n), discern a pattern, and then prove its general validity by mathematical induction. We leave the execution of this plan to the exercises; alternatively, you can look it up in [GKP94], whose exposition of the Josephus problem we have been following. Interestingly, the most elegant form of the closed-form answer involves the binary representation of size n: J(n) can be obtained by a 1-bit cyclic shift left of n itself! For example, J(6) = J(110₂) = 101₂ = 5 and J(7) = J(111₂) = 111₂ = 7.

Exercises 4.4

1. Cutting a stick A stick n inches long needs to be cut into n 1-inch pieces. Outline an algorithm that performs this task with the minimum number of cuts if several pieces of the stick can be cut at the same time. Also give a formula for the minimum number of cuts.

2. Design a decrease-by-half algorithm for computing ⌊log2 n⌋ and determine its time efficiency.

3. a. What is the largest number of key comparisons made by binary search in searching for a key in the following array?
      3 14 27 31 39 42 55 70 74 81 85 93 98
   b. List all the keys of this array that will require the largest number of key comparisons when searched for by binary search.
   c. Find the average number of key comparisons made by binary search in a successful search in this array. Assume that each key is searched for with the same probability.
   d. Find the average number of key comparisons made by binary search in an unsuccessful search in this array. Assume that searches for keys in each of the 14 intervals formed by the array's elements are equally likely.

4.
Estimate how many times faster an average successful search will be in a sorted array of one million elements if it is done by binary search versus sequential search.

5. The time efficiency of sequential search does not depend on whether a list is implemented as an array or as a linked list. Is it also true for searching a sorted list by binary search?

6. a. Design a version of binary search that uses only two-way comparisons such as ≤ and =. Implement your algorithm in the language of your choice and carefully debug it: such programs are notorious for being prone to bugs.
   b. Analyze the time efficiency of the two-way comparison version designed in part a.

7. Picture guessing A version of the popular problem-solving task involves presenting people with an array of 42 pictures—seven rows of six pictures each—and asking them to identify the target picture by asking questions that can be answered yes or no. Further, people are then required to identify the picture with as few questions as possible. Suggest the most efficient algorithm for this problem and indicate the largest number of questions that may be necessary.

8. Consider ternary search—the following algorithm for searching in a sorted array A[0..n − 1]. If n = 1, simply compare the search key K with the single element of the array; otherwise, search recursively by comparing K with A[⌊n/3⌋], and if K is larger, compare it with A[⌊2n/3⌋] to determine in which third of the array to continue the search.
   a. What design technique is this algorithm based on?
   b. Set up a recurrence for the number of key comparisons in the worst case. You may assume that n = 3^k.
   c. Solve the recurrence for n = 3^k.
   d. Compare this algorithm's efficiency with that of binary search.

9. An array A[0..n − 2] contains n − 1 integers from 1 to n in increasing order. (Thus one integer in this range is missing.)
Design the most efficient algorithm you can to find the missing integer and indicate its time efficiency.

10. a. Write pseudocode for the divide-into-three algorithm for the fake-coin problem. Make sure that your algorithm handles properly all values of n, not only those that are multiples of 3.
    b. Set up a recurrence relation for the number of weighings in the divide-into-three algorithm for the fake-coin problem and solve it for n = 3^k.
    c. For large values of n, about how many times faster is this algorithm than the one based on dividing coins into two piles? Your answer should not depend on n.

11. a. Apply the Russian peasant algorithm to compute 26 · 47.
    b. From the standpoint of time efficiency, does it matter whether we multiply n by m or m by n by the Russian peasant algorithm?

12. a. Write pseudocode for the Russian peasant multiplication algorithm.
    b. What is the time efficiency class of Russian peasant multiplication?

13. Find J(40)—the solution to the Josephus problem for n = 40.

14. Prove that the solution to the Josephus problem is 1 for every n that is a power of 2.

15. For the Josephus problem,
    a. compute J(n) for n = 1, 2, ..., 15.
    b. discern a pattern in the solutions for the first fifteen values of n and prove its general validity.
    c. prove the validity of getting J(n) by a 1-bit cyclic shift left of the binary representation of n.

4.5 Variable-Size-Decrease Algorithms

In the third principal variety of decrease-and-conquer, the size reduction pattern varies from one iteration of the algorithm to another. Euclid's algorithm for computing the greatest common divisor (Section 1.1) provides a good example of this kind of algorithm. In this section, we encounter a few more examples of this variety.

Computing a Median and the Selection Problem

The selection problem is the problem of finding the kth smallest element in a list of n numbers. This number is called the kth order statistic.
Of course, for k = 1 or k = n, we can simply scan the list in question to find the smallest or largest element, respectively. A more interesting case of this problem is for k = ⌈n/2⌉, which asks to find an element that is not larger than one half of the list's elements and not smaller than the other half. This middle value is called the median, and it is one of the most important notions in mathematical statistics. Obviously, we can find the kth smallest element in a list by sorting the list first and then selecting the kth element in the output of a sorting algorithm. The time of such an algorithm is determined by the efficiency of the sorting algorithm used. Thus, with a fast sorting algorithm such as mergesort (discussed in the next chapter), the algorithm's efficiency is in O(n log n).

You should immediately suspect, however, that sorting the entire list is most likely overkill since the problem asks not to order the entire list but just to find its kth smallest element. Indeed, we can take advantage of the idea of partitioning a given list around some value p of, say, its first element. In general, this is a rearrangement of the list's elements so that the left part contains all the elements smaller than or equal to p, followed by the pivot p itself, followed by all the elements greater than or equal to p:

[ all are ≤ p | p | all are ≥ p ]

Of the two principal algorithmic alternatives to partition an array, here we discuss the Lomuto partitioning [Ben00, p. 117]; we introduce the better known Hoare's algorithm in the next chapter. To get the idea behind the Lomuto partitioning, it is helpful to think of an array—or, more generally, a subarray A[l..r] (0 ≤ l ≤ r ≤ n − 1)—under consideration as composed of three contiguous segments.
Listed in the order they follow pivot p, they are as follows: a segment with elements known to be smaller than p, the segment of elements known to be greater than or equal to p, and the segment of elements yet to be compared to p (see Figure 4.13a). Note that the segments can be empty; for example, it is always the case for the first two segments before the algorithm starts.

Starting with i = l + 1, the algorithm scans the subarray A[l..r] left to right, maintaining this structure until a partition is achieved. On each iteration, it compares the first element in the unknown segment (pointed to by the scanning index i in Figure 4.13a) with the pivot p. If A[i] ≥ p, i is simply incremented to expand the segment of the elements greater than or equal to p while shrinking the unprocessed segment. If A[i]

k− 1, the kth smallest element in the entire array can be found as the kth smallest element in the left part of the partitioned array. And if s