mathematical methods


Upload: g1545

Posted on 09-Sep-2015


DESCRIPTION

A PDF of collated Wikipedia articles.

TRANSCRIPT


Contents

1 Simplex algorithm
  1.1 Overview
  1.2 Standard form
  1.3 Simplex tableaux
  1.4 Pivot operations
  1.5 Algorithm
    1.5.1 Entering variable selection
    1.5.2 Leaving variable selection
    1.5.3 Example
  1.6 Finding an initial canonical tableau
    1.6.1 Example
  1.7 Advanced topics
    1.7.1 Implementation
    1.7.2 Degeneracy: Stalling and cycling
    1.7.3 Efficiency
  1.8 Other algorithms
  1.9 Linear-fractional programming
  1.10 See also
  1.11 Notes
  1.12 References
  1.13 Further reading
  1.14 External links

2 Criss-cross algorithm
  2.1 History
  2.2 Comparison with the simplex algorithm for linear optimization
  2.3 Description
  2.4 Computational complexity: Worst and average cases
  2.5 Variants
    2.5.1 Other optimization problems with linear constraints
    2.5.2 Vertex enumeration
    2.5.3 Oriented matroids
  2.6 Summary
  2.7 See also
  2.8 Notes
  2.9 References
  2.10 External links

3 Big M method
  3.1 Algorithm
  3.2 Other usage
  3.3 See also
  3.4 References and external links

4 Approximation algorithm
  4.1 Performance guarantees
  4.2 Algorithm design techniques
  4.3 Epsilon terms
  4.4 See also
  4.5 Citations
  4.6 References
  4.7 External links

5 Karmarkar's algorithm
  5.1 The algorithm
  5.2 Example
  5.3 Patent controversy – "Can mathematics be patented?"
  5.4 References

6 Interior point method
  6.1 Primal-dual interior point method for nonlinear optimization
  6.2 See also
  6.3 References
  6.4 Bibliography

7 Bland's rule
  7.1 Algorithm
  7.2 Extensions to oriented matroids
  7.3 Notes
  7.4 Further reading

8 Mathematical optimization
  8.1 Optimization problems
  8.2 Notation
    8.2.1 Minimum and maximum value of a function
    8.2.2 Optimal input arguments
  8.3 History
  8.4 Major subfields
    8.4.1 Multi-objective optimization
    8.4.2 Multi-modal optimization
  8.5 Classification of critical points and extrema
    8.5.1 Feasibility problem
    8.5.2 Existence
    8.5.3 Necessary conditions for optimality
    8.5.4 Sufficient conditions for optimality
    8.5.5 Sensitivity and continuity of optima
    8.5.6 Calculus of optimization
  8.6 Computational optimization techniques
    8.6.1 Optimization algorithms
    8.6.2 Iterative methods
    8.6.3 Global convergence
    8.6.4 Heuristics
  8.7 Applications
    8.7.1 Mechanics and engineering
    8.7.2 Economics
    8.7.3 Operations research
    8.7.4 Control engineering
    8.7.5 Petroleum engineering
    8.7.6 Molecular modeling
  8.8 Solvers
  8.9 See also
  8.10 Notes
  8.11 Further reading
    8.11.1 Comprehensive
    8.11.2 Continuous optimization
    8.11.3 Combinatorial optimization
    8.11.4 Relaxation (extension method)
  8.12 Journals
  8.13 External links

9 Linear programming
  9.1 History
  9.2 Uses
  9.3 Standard form
    9.3.1 Example
  9.4 Augmented form (slack form)
    9.4.1 Example
  9.5 Duality
    9.5.1 Example
    9.5.2 Another example
  9.6 Covering/packing dualities
    9.6.1 Examples
  9.7 Complementary slackness
  9.8 Theory
    9.8.1 Existence of optimal solutions
    9.8.2 Optimal vertices (and rays) of polyhedra
  9.9 Algorithms
    9.9.1 Basis exchange algorithms
    9.9.2 Interior point
    9.9.3 Comparison of interior-point methods versus simplex algorithms
    9.9.4 Approximate algorithms for covering/packing LPs
  9.10 Open problems and recent work
  9.11 Integer unknowns
  9.12 Integral linear programs
  9.13 Solvers and scripting (programming) languages
  9.14 See also
  9.15 Notes
  9.16 References
  9.17 Further reading
  9.18 External links

10 Operations research
  10.1 Overview
  10.2 History
    10.2.1 Historical origins
    10.2.2 Second World War
    10.2.3 After World War II
  10.3 Problems addressed
  10.4 Management science
    10.4.1 Related fields
    10.4.2 Applications
  10.5 Societies and journals
  10.6 See also
  10.7 References
  10.8 Notes
  10.9 Further reading
  10.10 External links

11 Linear-fractional programming
  11.1 Relation to linear programming
  11.2 Definition
  11.3 Transformation to a linear program
  11.4 Duality
  11.5 Properties of and algorithms for linear-fractional programs
  11.6 Notes
  11.7 References
  11.8 Further reading
  11.9 Software

12 Oriented matroid
  12.1 Background
  12.2 Axiomatizations
    12.2.1 Circuit axioms
    12.2.2 Chirotope axioms
    12.2.3 Equivalence
  12.3 Examples
    12.3.1 Directed graphs
    12.3.2 Linear algebra
    12.3.3 Convex polytope
  12.4 Results
    12.4.1 Orientability
    12.4.2 Duality
    12.4.3 Topological representation
    12.4.4 Geometry
    12.4.5 Optimization
  12.5 References
  12.6 Further reading
    12.6.1 Books
    12.6.2 Articles
    12.6.3 On the web
  12.7 External links

13 Algorithm
  13.1 Word origin
  13.2 Informal definition
  13.3 Formalization
    13.3.1 Expressing algorithms
  13.4 Implementation
  13.5 Computer algorithms
  13.6 Examples
    13.6.1 Algorithm example
    13.6.2 Euclid's algorithm
    13.6.3 Testing the Euclid algorithms
    13.6.4 Measuring and improving the Euclid algorithms
  13.7 Algorithmic analysis
    13.7.1 Formal versus empirical
    13.7.2 Execution efficiency
  13.8 Classification
    13.8.1 By implementation
    13.8.2 By design paradigm
    13.8.3 Optimization problems
    13.8.4 By field of study
    13.8.5 By complexity
    13.8.6 By evaluative type
  13.9 Continuous algorithms
  13.10 Legal issues
  13.11 Etymology
  13.12 History: Development of the notion of algorithm
    13.12.1 Origin
    13.12.2 Discrete and distinguishable symbols
    13.12.3 Manipulation of symbols as place holders for numbers: algebra
    13.12.4 Mechanical contrivances with discrete states
    13.12.5 Mathematics during the 19th century up to the mid-20th century
    13.12.6 Emil Post (1936) and Alan Turing (1936–37, 1939)
    13.12.7 J. B. Rosser (1939) and S. C. Kleene (1943)
    13.12.8 History after 1950
  13.13 See also
  13.14 Notes
  13.15 References
    13.15.1 Secondary references
  13.16 Further reading
  13.17 External links

14 Quadratic programming
  14.1 Problem formulation
  14.2 Solution methods
    14.2.1 Equality constraints
  14.3 Lagrangian duality
  14.4 Complexity
  14.5 Solvers and scripting (programming) languages
  14.6 See also
  14.7 References
    14.7.1 Notes
    14.7.2 Bibliography
  14.8 External links

15 Convex optimization
  15.1 Convex optimization problem
  15.2 Theory
  15.3 Standard form
  15.4 Examples
  15.5 Lagrange multipliers
  15.6 Methods
  15.7 Convex minimization with good complexity: Self-concordant barriers
  15.8 Quasiconvex minimization
  15.9 Convex maximization
  15.10 Extensions
  15.11 See also
  15.12 Notes
  15.13 References
  15.14 External links

16 Optimization problem
  16.1 Continuous optimization problem
  16.2 Combinatorial optimization problem
    16.2.1 NP optimization problem
  16.3 References
  16.4 See also

17 Time complexity
  17.1 Table of common time complexities
  17.2 Constant time
  17.3 Logarithmic time
  17.4 Polylogarithmic time
  17.5 Sub-linear time
  17.6 Linear time
  17.7 Linearithmic time
  17.8 Quasilinear time
  17.9 Sub-quadratic time
  17.10 Polynomial time
    17.10.1 Strongly and weakly polynomial time
    17.10.2 Complexity classes
  17.11 Superpolynomial time
  17.12 Quasi-polynomial time
    17.12.1 Relation to NP-complete problems
  17.13 Sub-exponential time
    17.13.1 First definition
    17.13.2 Second definition
  17.14 Exponential time
  17.15 Double exponential time
  17.16 See also
  17.17 References

18 Ellipsoid method
  18.1 History
  18.2 Description
  18.3 Unconstrained minimization
  18.4 Inequality-constrained minimization
    18.4.1 Application to linear programming
  18.5 Performance
  18.6 Notes
  18.7 Further reading
  18.8 External links

19 Nonlinear programming
  19.1 Applicability
  19.2 The general non-linear optimization problem (NLP)
  19.3 Possible solutions
  19.4 Methods for solving the problem
  19.5 Examples
    19.5.1 2-dimensional example
    19.5.2 3-dimensional example
  19.6 Applications
  19.7 See also
  19.8 References
  19.9 Further reading
  19.10 External links

20 Convex function
  20.1 Definition
  20.2 Properties
  20.3 Convex function calculus
  20.4 Strongly convex functions
    20.4.1 Uniformly convex functions
  20.5 Examples
  20.6 See also
  20.7 Notes
  20.8 References
  20.9 External links

21 Convex set
  21.1 In vector spaces
    21.1.1 Non-convex set
  21.2 Properties
    21.2.1 Intersections and unions
    21.2.2 Closed convex sets
    21.2.3 Convex sets and rectangles
  21.3 Convex hulls and Minkowski sums
    21.3.1 Convex hulls
    21.3.2 Minkowski addition
    21.3.3 Convex hulls of Minkowski sums
    21.3.4 Minkowski sums of convex sets
  21.4 Generalizations and extensions for convexity
    21.4.1 Star-convex sets
    21.4.2 Orthogonal convexity
    21.4.3 Non-Euclidean geometry
    21.4.4 Order topology
    21.4.5 Convexity spaces
  21.5 See also
  21.6 References
  21.7 External links

    22 Real number 11622.1 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11622.2 Denition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117

    22.2.1 Axiomatic approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11722.2.2 Construction from the rational numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . 117

    22.3 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11822.3.1 Basic properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11822.3.2 Completeness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11822.3.3 The complete ordered eld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11822.3.4 Advanced properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119

    22.4 Applications and connections to other areas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12022.4.1 Real numbers and logic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12022.4.2 In physics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12022.4.3 In computation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12022.4.4 Reals in set theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120

    22.5 Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12022.6 Generalizations and extensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12122.7 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12122.8 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12122.9 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12222.10External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122

  • x CONTENTS

    23 Combinatorial optimization 12323.1 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12323.2 Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12323.3 Specic problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12423.4 Distributed Combinatorial Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12423.5 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12423.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12523.7 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125

    24 Candidate solution 12624.1 Genetic algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12624.2 Calculus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12624.3 Linear programming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126

    25 Basis (linear algebra) 12725.1 Denition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12725.2 Expression of a basis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12825.3 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12825.4 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12925.5 Extending to a basis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12925.6 Example of alternative proofs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129

    25.6.1 From the denition of basis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12925.6.2 By the dimension theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13025.6.3 By the invertible matrix theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130

    25.7 Ordered bases and coordinates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13025.8 Related notions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130

    25.8.1 Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13025.8.2 Ane geometry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131

    25.9 Proof that every vector space has a basis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13125.10See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13225.11Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13225.12References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132

    25.12.1 General references . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13225.12.2 Historical references . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132

    25.13External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132

    26 Vector space 13326.1 Introduction and denition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133

    26.1.1 First example: arrows in the plane . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13426.1.2 Second example: ordered pairs of numbers . . . . . . . . . . . . . . . . . . . . . . . . . . 13426.1.3 Denition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13426.1.4 Alternative formulations and elementary consequences . . . . . . . . . . . . . . . . . . . . 134

  • CONTENTS xi

    26.2 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13526.3 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135

    26.3.1 Coordinate spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13526.3.2 The complex numbers and other eld extensions . . . . . . . . . . . . . . . . . . . . . . . 13526.3.3 Function spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13526.3.4 Linear equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136

    26.4 Basis and dimension . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13626.5 Linear maps and matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137

    26.5.1 Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13826.5.2 Eigenvalues and eigenvectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138

    26.6 Basic constructions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13826.6.1 Subspaces and quotient spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13926.6.2 Direct product and direct sum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13926.6.3 Tensor product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139

    26.7 Vector spaces with additional structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14026.7.1 Normed vector spaces and inner product spaces . . . . . . . . . . . . . . . . . . . . . . . 14026.7.2 Topological vector spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14126.7.3 Algebras over elds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142

    26.8 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14326.8.1 Distributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14326.8.2 Fourier analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14326.8.3 Dierential geometry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144

    26.9 Generalizations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14426.9.1 Vector bundles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14526.9.2 Modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14526.9.3 Ane and projective spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145

    26.10See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14626.11Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14626.12Footnotes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14626.13References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148

    26.13.1 Algebra . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14826.13.2 Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14826.13.3 Historical references . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14926.13.4 Further references . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149

    26.14External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15026.15Text and image sources, contributors, and licenses . . . . . . . . . . . . . . . . . . . . . . . . . . 151

    26.15.1 Text . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15126.15.2 Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15626.15.3 Content license . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159

Chapter 1

    Simplex algorithm

This article is about the linear programming algorithm. For the non-linear optimization heuristic, see Nelder–Mead method.

In mathematical optimization, Dantzig's simplex algorithm (or simplex method) is a popular algorithm for linear programming.[1][2][3][4][5] The journal Computing in Science and Engineering listed it as one of the top 10 algorithms of the twentieth century.[6]

The name of the algorithm is derived from the concept of a simplex and was suggested by T. S. Motzkin.[7] Simplices are not actually used in the method, but one interpretation of it is that it operates on simplicial cones, and these become proper simplices with an additional constraint.[8][9][10][11] The simplicial cones in question are the corners (i.e., the neighborhoods of the vertices) of a geometric object called a polytope. The shape of this polytope is defined by the constraints applied to the objective function.

1.1 Overview

Further information: Linear programming

The simplex algorithm operates on linear programs in standard form, that is, linear programming problems of the form

Minimize

    c^T x

Subject to

    Ax ≤ b,  x_i ≥ 0

with x = (x_1, …, x_n) the variables of the problem, c = (c_1, …, c_n) the coefficients of the objective function, A a p×n matrix, and b = (b_1, …, b_p) constants with b_j ≥ 0. There is a straightforward process to convert any linear program into one in standard form, so this results in no loss of generality.

In geometric terms, the feasible region

[Figure: A system of linear inequalities defines a polytope as a feasible region. The simplex algorithm begins at a starting vertex and moves along the edges of the polytope until it reaches the vertex of the optimal solution.]

[Figure: Polyhedron of simplex algorithm in 3D]

    Ax ≤ b,  x_i ≥ 0



is a (possibly unbounded) convex polytope. There is a simple characterization of the extreme points or vertices of this polytope, namely x = (x_1, …, x_n) is an extreme point if and only if the subset of column vectors A_i corresponding to the nonzero entries of x (x_i ≠ 0) are linearly independent.[12] In this context such a point is known as a basic feasible solution (BFS).

It can be shown that for a linear program in standard form, if the objective function has a minimum value on the feasible region then it has this value on (at least) one of the extreme points.[13] This in itself reduces the problem to a finite computation since there is a finite number of extreme points, but the number of extreme points is unmanageably large for all but the smallest linear programs.[14]

It can also be shown that, if an extreme point is not a minimum point of the objective function, then there is an edge containing the point so that the objective function is strictly decreasing on the edge moving away from the point.[15] If the edge is finite, then the edge connects to another extreme point where the objective function has a smaller value; otherwise the objective function is unbounded below on the edge and the linear program has no solution. The simplex algorithm applies this insight by walking along edges of the polytope to extreme points with lower and lower objective values. This continues until the minimum value is reached or an unbounded edge is visited, concluding that the problem has no solution. The algorithm always terminates because the number of vertices in the polytope is finite; moreover, since we jump between vertices always in the same direction (that of the objective function), we hope that the number of vertices visited will be small.[15]

The solution of a linear program is accomplished in two steps. In the first step, known as Phase I, a starting extreme point is found. Depending on the nature of the program this may be trivial, but in general it can be solved by applying the simplex algorithm to a modified version of the original program. The possible results of Phase I are either that a basic feasible solution is found or that the feasible region is empty. In the latter case the linear program is called infeasible. In the second step, Phase II, the simplex algorithm is applied using the basic feasible solution found in Phase I as a starting point. The possible results from Phase II are either an optimum basic feasible solution or an infinite edge on which the objective function is unbounded below.[3][16][17]

    1.2 Standard form

The transformation of a linear program to one in standard form may be accomplished as follows.[18] First, for each variable with a lower bound other than 0, a new variable is introduced representing the difference between the variable and bound. The original variable can then be eliminated by substitution. For example, given the constraint

    x_1 ≥ 5

a new variable, y_1, is introduced with

    y_1 = x_1 − 5
    x_1 = y_1 + 5

The second equation may be used to eliminate x_1 from the linear program. In this way, all lower bound constraints may be changed to non-negativity restrictions.

Second, for each remaining inequality constraint, a new variable, called a slack variable, is introduced to change the constraint to an equality constraint. This variable represents the difference between the two sides of the inequality and is assumed to be nonnegative. For example, the inequalities

    x_2 + 2x_3 ≤ 3
    x_4 + 3x_5 ≥ 2

are replaced with

    x_2 + 2x_3 + s_1 = 3
    x_4 + 3x_5 − s_2 = 2
    s_1, s_2 ≥ 0

It is much easier to perform algebraic manipulation on inequalities in this form. In inequalities where ≥ appears, such as the second one, some authors refer to the variable introduced as a surplus variable.

Third, each unrestricted variable is eliminated from the linear program. This can be done in two ways: one is by solving for the variable in one of the equations in which it appears and then eliminating the variable by substitution; the other is to replace the variable with the difference of two restricted variables. For example, if z_1 is unrestricted then write

    z_1 = z_1^+ − z_1^−
    z_1^+, z_1^− ≥ 0

The equation may be used to eliminate z_1 from the linear program. When this process is complete the feasible region will be in the form

    Ax = b,  x_i ≥ 0

It is also useful to assume that the rank of A is the number of rows. This results in no loss of generality since otherwise either the system Ax = b has redundant equations which can be dropped, or the system is inconsistent and the linear program has no solution.[19]
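The slack- and surplus-variable step above is mechanical enough to sketch in code. The helper below is a hypothetical illustration (its name and interface are not from the text): it appends one slack column per ≤ row and one negated surplus column per ≥ row, turning the inequality system into equalities.

```python
def to_equalities(rows, rhs, senses):
    """Convert inequality constraints to equalities by appending one
    slack (<=) or surplus (>=) variable per row.  `senses` holds the
    string '<=' or '>=' for each row.  Hypothetical helper; variables
    are assumed to already satisfy the non-negativity restrictions."""
    m = len(rows)
    out = []
    for i, row in enumerate(rows):
        extra = [0] * m
        extra[i] = 1 if senses[i] == '<=' else -1   # slack vs. surplus
        out.append(list(row) + extra)
    return out, list(rhs)

# the two inequalities from the text, over variables (x2, x3, x4, x5):
#   x2 + 2*x3 <= 3   and   x4 + 3*x5 >= 2
A_eq, b_eq = to_equalities([[1, 2, 0, 0], [0, 0, 1, 3]], [3, 2], ['<=', '>='])
```

Here `A_eq` gains a column for s_1 with coefficient +1 and a column for s_2 with coefficient −1, matching the equalities x_2 + 2x_3 + s_1 = 3 and x_4 + 3x_5 − s_2 = 2 above.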


1.3 Simplex tableaux

A linear program in standard form can be represented as a tableau of the form

    [ 1  −c^T  0 ]
    [ 0   A    b ]

The first row defines the objective function and the remaining rows specify the constraints. (Note, different authors use different conventions as to the exact layout.) If the columns of A can be rearranged so that it contains the identity matrix of order p (the number of rows in A) then the tableau is said to be in canonical form.[20] The variables corresponding to the columns of the identity matrix are called basic variables while the remaining variables are called nonbasic or free variables. If the nonbasic variables are assumed to be 0, then the values of the basic variables are easily obtained as entries in b, and this solution is a basic feasible solution.

Conversely, given a basic feasible solution, the columns corresponding to the nonzero variables can be expanded to a nonsingular matrix. If the corresponding tableau is multiplied by the inverse of this matrix then the result is a tableau in canonical form.[21]

    Let

    [ 1  −c_B^T  −c_D^T  0 ]
    [ 0    I       D     b ]

be a tableau in canonical form. Additional row-addition transformations can be applied to remove the coefficients c_B^T from the objective function. This process is called pricing out and results in a canonical tableau

    [ 1  0  −c̄_D^T  z_B ]
    [ 0  I    D      b  ]

where z_B is the value of the objective function at the corresponding basic feasible solution. The updated coefficients, also known as relative cost coefficients, are the rates of change of the objective function with respect to the nonbasic variables.[16]

1.4 Pivot operations

The geometrical operation of moving from a basic feasible solution to an adjacent basic feasible solution is implemented as a pivot operation. First, a nonzero pivot element is selected in a nonbasic column. The row containing this element is multiplied by its reciprocal to change this element to 1, and then multiples of the row are added to the other rows to change the other entries in the column to 0. The result is that, if the pivot element is in row r, then the column becomes the r-th column of the identity matrix. The variable for this column is now a basic variable, replacing the variable which corresponded to the r-th column of the identity matrix before the operation. In effect, the variable corresponding to the pivot column enters the set of basic variables and is called the entering variable, and the variable being replaced leaves the set of basic variables and is called the leaving variable. The tableau is still in canonical form but with the set of basic variables changed by one element.[3][16]
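A pivot is just these row operations applied in sequence. The sketch below is a minimal illustration (not from the text) using exact rational arithmetic, with the tableau layout of the previous section: objective row [1, −c^T, 0] on top, constraint rows [0, A, b] below.

```python
from fractions import Fraction

def pivot(tableau, r, c):
    """Pivot the tableau in place on entry (r, c): scale row r so the
    pivot entry becomes 1, then add multiples of row r to every other
    row so that column c becomes the r-th identity column."""
    p = tableau[r][c]
    tableau[r] = [x / p for x in tableau[r]]
    for i in range(len(tableau)):
        if i != r and tableau[i][c] != 0:
            f = tableau[i][c]
            tableau[i] = [x - f * y for x, y in zip(tableau[i], tableau[r])]

# the canonical tableau of the worked example later in this chapter
T = [[Fraction(v) for v in row] for row in
     [[1, 2, 3, 4, 0, 0, 0],
      [0, 3, 2, 1, 1, 0, 10],
      [0, 2, 5, 3, 0, 1, 15]]]
pivot(T, 2, 3)   # row 3, column 4 in the text's 1-based numbering
```

After this single pivot the objective-row entries for the variable columns are all nonpositive, so the tableau is already optimal for that example.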

1.5 Algorithm

Let a linear program be given by a canonical tableau. The simplex algorithm proceeds by performing successive pivot operations which each give an improved basic feasible solution; the choice of pivot element at each step is largely determined by the requirement that this pivot improve the solution.

    1.5.1 Entering variable selection

Since the entering variable will, in general, increase from 0 to a positive number, the value of the objective function will decrease if the derivative of the objective function with respect to this variable is negative. Equivalently, the value of the objective function is decreased if the pivot column is selected so that the corresponding entry in the objective row of the tableau is positive.

If there is more than one column so that the entry in the objective row is positive then the choice of which one to add to the set of basic variables is somewhat arbitrary and several entering variable choice rules[22] have been developed.

If all the entries in the objective row are less than or equal to 0 then no choice of entering variable can be made and the solution is in fact optimal. It is easily seen to be optimal since the objective row now corresponds to an equation of the form

    z(x) = z_B + (nonnegative terms corresponding to the nonbasic variables)

Note that by changing the entering variable choice rule so that it selects a column where the entry in the objective row is negative, the algorithm is changed so that it finds the maximum of the objective function rather than the minimum.

    1.5.2 Leaving variable selection

Once the pivot column has been selected, the choice of pivot row is largely determined by the requirement that


the resulting solution be feasible. First, only positive entries in the pivot column are considered, since this guarantees that the value of the entering variable will be nonnegative. If there are no positive entries in the pivot column then the entering variable can take any nonnegative value with the solution remaining feasible. In this case the objective function is unbounded below and there is no minimum.

Next, the pivot row must be selected so that all the other basic variables remain positive. A calculation shows that this occurs when the resulting value of the entering variable is at a minimum. In other words, if the pivot column is c, then the pivot row r is chosen so that

    b_r / a_rc

is the minimum over all r so that a_rc > 0. This is called the minimum ratio test.[22] If there is more than one row for which the minimum is achieved then a dropping variable choice rule[23] can be used to make the determination.
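Both selection rules can be sketched together. The function below is a hypothetical helper (its name and the largest-coefficient tie-break are illustrative choices, not from the text): it picks an entering column with a positive objective-row entry, then applies the minimum ratio test to pick the leaving row, returning None once the tableau is optimal.

```python
from fractions import Fraction

def choose_pivot(tableau):
    """Return (pivot_row, pivot_col), or None if no objective-row entry
    is positive (the current BFS is optimal).  Raises on an unbounded
    objective.  Column 0 is the objective column, column -1 the RHS."""
    obj = tableau[0]
    c = max(range(1, len(obj) - 1), key=lambda j: obj[j])  # largest entry
    if obj[c] <= 0:
        return None
    # minimum ratio test over rows with a positive pivot-column entry
    candidates = [(row[-1] / row[c], r)
                  for r, row in enumerate(tableau) if r > 0 and row[c] > 0]
    if not candidates:
        raise ValueError("objective is unbounded below")
    return min(candidates)[1], c

T = [[Fraction(v) for v in row] for row in
     [[1, 2, 3, 4, 0, 0, 0],
      [0, 3, 2, 1, 1, 0, 10],
      [0, 2, 5, 3, 0, 1, 15]]]
```

On this tableau (the one of the next subsection) the function returns 0-based (2, 3): row 3 and column 4 in the text's 1-based numbering, exactly the pivot chosen there.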

1.5.3 Example

See also: Revised simplex algorithm § Numerical example

    Consider the linear program

Minimize

    Z = −2x − 3y − 4z

Subject to

    3x + 2y + z ≤ 10
    2x + 5y + 3z ≤ 15
    x, y, z ≥ 0

With the addition of slack variables s and t, this is represented by the canonical tableau

    [ 1  2  3  4  0  0   0 ]
    [ 0  3  2  1  1  0  10 ]
    [ 0  2  5  3  0  1  15 ]

where columns 5 and 6 represent the basic variables s and t and the corresponding basic feasible solution is

    x = y = z = 0,  s = 10,  t = 15.

Columns 2, 3, and 4 can be selected as pivot columns; for this example column 4 is selected. The values of z resulting from the choice of rows 2 and 3 as pivot rows are 10/1 = 10 and 15/3 = 5 respectively. Of these the minimum is 5, so row 3 must be the pivot row. Performing the pivot produces

    [ 1  −2/3  −11/3  0  0  −4/3  −20 ]
    [ 0   7/3    1/3  0  1  −1/3    5 ]
    [ 0   2/3    5/3  1  0   1/3    5 ]

Now columns 4 and 5 represent the basic variables z and s and the corresponding basic feasible solution is

    x = y = t = 0,  z = 5,  s = 5.

For the next step, there are no positive entries in the objective row and in fact

    Z = −20 + (2/3)x + (11/3)y + (4/3)t

so the minimum value of Z is −20.
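The whole iteration can be replayed in a few lines. The sketch below (all names are illustrative, not from the text) combines a pivot routine with the two selection rules described earlier and runs them on the example tableau until no positive objective-row entry remains; the right-hand side of the objective row is then the minimum of Z.

```python
from fractions import Fraction

def pivot(T, r, c):
    """Pivot T in place on entry (r, c)."""
    p = T[r][c]
    T[r] = [x / p for x in T[r]]
    for i in range(len(T)):
        if i != r and T[i][c] != 0:
            f = T[i][c]
            T[i] = [x - f * y for x, y in zip(T[i], T[r])]

def simplex_min(T):
    """Pivot until every objective-row entry is <= 0; the objective
    row's right-hand side is then the minimum objective value."""
    while True:
        obj = T[0]
        c = max(range(1, len(obj) - 1), key=lambda j: obj[j])
        if obj[c] <= 0:
            return T[0][-1]
        # minimum ratio test picks the leaving row
        r = min((row[-1] / row[c], i)
                for i, row in enumerate(T) if i > 0 and row[c] > 0)[1]
        pivot(T, r, c)

# minimize Z = -2x - 3y - 4z subject to the two <= constraints above
T = [[Fraction(v) for v in row] for row in
     [[1, 2, 3, 4, 0, 0, 0],
      [0, 3, 2, 1, 1, 0, 10],
      [0, 2, 5, 3, 0, 1, 15]]]
print(simplex_min(T))   # prints -20
```

A single pivot on row 3, column 4 suffices here, reproducing the optimal tableau and the minimum Z = −20 derived above.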

1.6 Finding an initial canonical tableau

In general, a linear program will not be given in canonical form and an equivalent canonical tableau must be found before the simplex algorithm can start. This can be accomplished by the introduction of artificial variables. Columns of the identity matrix are added as column vectors for these variables. If the b value for a constraint equation is negative, the equation is negated before adding the identity matrix columns. This does not change the set of feasible solutions or the optimal solution, and it ensures that the slack variables will constitute an initial feasible solution. The new tableau is in canonical form but it is not equivalent to the original problem. So a new objective function, equal to the sum of the artificial variables, is introduced and the simplex algorithm is applied to find the minimum; the modified linear program is called the Phase I problem.[24]

The simplex algorithm applied to the Phase I problem must terminate with a minimum value for the new objective function since, being the sum of nonnegative variables, its value is bounded below by 0. If the minimum is 0 then the artificial variables can be eliminated from the resulting canonical tableau, producing a canonical tableau equivalent to the original problem. The simplex algorithm can then be applied to find the solution; this step is called Phase II. If the minimum is positive then there is no feasible solution for the Phase I problem where the artificial variables are all zero. This implies that the feasible region for the original problem is empty, and so the original problem has no solution.[3][16][25]


    1.6.1 Example

    Consider the linear program

    Minimize

    Z = −2x − 3y − 4z

    Subject to

    3x + 2y + z = 10
    2x + 5y + 3z = 15
    x, y, z ≥ 0

    This is represented by the (non-canonical) tableau

    [ 1  2  3  4   0 ]
    [ 0  3  2  1  10 ]
    [ 0  2  5  3  15 ]

Introduce artificial variables u and v, and objective function W = u + v, giving a new tableau

    [ 1  0  0  0  0  −1  −1   0 ]
    [ 0  1  2  3  4   0   0   0 ]
    [ 0  0  3  2  1   1   0  10 ]
    [ 0  0  2  5  3   0   1  15 ]

Note that the equation defining the original objective function is retained in anticipation of Phase II. After pricing out, this becomes

    [ 1  0  5  7  4  0  0  25 ]
    [ 0  1  2  3  4  0  0   0 ]
    [ 0  0  3  2  1  1  0  10 ]
    [ 0  0  2  5  3  0  1  15 ]

Select column 5 as a pivot column, so the pivot row must be row 4, and the updated tableau is

    [ 1  0   7/3    1/3  0  0  −4/3    5 ]
    [ 0  1  −2/3  −11/3  0  0  −4/3  −20 ]
    [ 0  0   7/3    1/3  0  1  −1/3    5 ]
    [ 0  0   2/3    5/3  1  0   1/3    5 ]

Now select column 3 as a pivot column, for which row 3 must be the pivot row, to get

    [ 1  0  0     0   0  −1     −1       0    ]
    [ 0  1  0  −25/7  0   2/7  −10/7  −130/7 ]
    [ 0  0  1    1/7  0   3/7   −1/7    15/7 ]
    [ 0  0  0   11/7  1  −2/7    3/7    25/7 ]

The artificial variables are now 0 and they may be dropped, giving a canonical tableau equivalent to the original problem:

    [ 1  0  −25/7  0  −130/7 ]
    [ 0  1    1/7  0    15/7 ]
    [ 0  0   11/7  1    25/7 ]

This is, fortuitously, already optimal and the optimum value for the original linear program is −130/7.
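The two Phase I pivots can be checked mechanically. The sketch below is illustrative code, not from the text; it repeats the small pivot helper so the snippet is self-contained, prices out the artificial-variable tableau by adding the constraint rows to the W row, and replays the text's pivots in exact arithmetic. Columns are ordered W, Z, x, y, z, u, v, rhs.

```python
from fractions import Fraction

def pivot(T, r, c):
    """Pivot T in place on entry (r, c)."""
    p = T[r][c]
    T[r] = [x / p for x in T[r]]
    for i in range(len(T)):
        if i != r and T[i][c] != 0:
            f = T[i][c]
            T[i] = [x - f * y for x, y in zip(T[i], T[r])]

F = Fraction
# rows: W objective, Z objective, then the two equality constraints
T = [[F(1), F(0), F(0), F(0), F(0), F(-1), F(-1), F(0)],
     [F(0), F(1), F(2), F(3), F(4), F(0),  F(0),  F(0)],
     [F(0), F(0), F(3), F(2), F(1), F(1),  F(0),  F(10)],
     [F(0), F(0), F(2), F(5), F(3), F(0),  F(1),  F(15)]]

# pricing out: add both constraint rows to the W row
T[0] = [w + a + b for w, a, b in zip(T[0], T[2], T[3])]

pivot(T, 3, 4)   # the text's first pivot: column 5, row 4 (1-based)
pivot(T, 2, 2)   # the second pivot: column 3, row 3 (1-based)
```

Afterward the W row's right-hand side is 0 (Phase I succeeds) and the Z row's right-hand side is −130/7, the optimum reported above.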

    1.7 Advanced topics

1.7.1 Implementation

Main article: Revised simplex algorithm

The tableau form used above to describe the algorithm lends itself to an immediate implementation in which the tableau is maintained as a rectangular (m + 1)-by-(m + n + 1) array. It is straightforward to avoid storing the m explicit columns of the identity matrix that will occur within the tableau by virtue of B being a subset of the columns of [A, I]. This implementation is referred to as the "standard" simplex algorithm. The storage and computation overhead are such that the standard simplex method is a prohibitively expensive approach to solving large linear programming problems.

In each simplex iteration, the only data required are the first row of the tableau, the (pivotal) column of the tableau corresponding to the entering variable, and the right-hand side. The latter can be updated using the pivotal column, and the first row of the tableau can be updated using the (pivotal) row corresponding to the leaving variable. Both the pivotal column and pivotal row may be computed directly using the solutions of linear systems of equations involving the matrix B and a matrix-vector product using A. These observations motivate the "revised simplex algorithm", for which implementations are distinguished by their invertible representation of B.[4]

In large linear-programming problems A is typically a sparse matrix and, when the resulting sparsity of B is exploited when maintaining its invertible representation, the revised simplex algorithm is much more efficient than the standard simplex method. Commercial simplex solvers are based on the revised simplex algorithm.[4][25][26][27][28]

1.7.2 Degeneracy: Stalling and cycling

If the values of all basic variables are strictly positive, then a pivot must result in an improvement in the objective value. When this is always the case, no set of basic variables occurs twice and the simplex algorithm must terminate after a finite number of steps. Basic feasible solutions where at least one of the basic variables is zero are called degenerate and may result in pivots for which there is no improvement in the objective value. In this case there is no actual change in the solution but only a change in the set of basic variables. When several such pivots occur in succession, there is no improvement; in large industrial applications, degeneracy is common and such "stalling" is notable. Worse than stalling is the possibility that the same set of basic variables occurs twice, in which case the deterministic pivoting rules of the simplex algorithm will produce an infinite loop, or "cycle". While degeneracy is the rule in practice and stalling is common, cycling is rare in practice. A discussion of an example of practical cycling occurs in Padberg.[25] Bland's rule prevents cycling and thus guarantees that the simplex algorithm always terminates.[25][29][30] Another pivoting algorithm, the criss-cross algorithm, never cycles on linear programs.[31]
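Bland's rule is easy to state in code. The sketch below is a hypothetical illustration over the tableau layout used earlier (the `basis` bookkeeping is an assumption of this sketch, not from the text): the entering variable is the lowest-index column with a positive objective-row entry, and ratio ties for the leaving variable are broken by the lowest-index basic variable, where `basis[i]` records which column is basic in constraint row i+1.

```python
from fractions import Fraction

def blands_pivot(tableau, basis):
    """Return (pivot_row, pivot_col) under Bland's anti-cycling rule,
    or None if the tableau is optimal."""
    obj = tableau[0]
    # entering: the LOWEST-index column with a positive objective entry
    c = next((j for j in range(1, len(obj) - 1) if obj[j] > 0), None)
    if c is None:
        return None
    best = None
    for i, row in enumerate(tableau[1:], start=1):
        if row[c] > 0:
            # ratio ties are broken by the smallest basic-variable index
            key = (row[-1] / row[c], basis[i - 1])
            if best is None or key < best[0]:
                best = (key, i)
    if best is None:
        raise ValueError("objective is unbounded below")
    return best[1], c

T = [[Fraction(v) for v in row] for row in
     [[1, 2, 3, 4, 0, 0, 0],
      [0, 3, 2, 1, 1, 0, 10],
      [0, 2, 5, 3, 0, 1, 15]]]
```

On the earlier example tableau, with basic columns 4 and 5 for s and t, Bland's rule enters column 1 (the x column) rather than the largest-coefficient column 3, since 1 is the lowest index with a positive objective entry; the minimum ratio test then selects row 1.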

1.7.3 Efficiency

The simplex method is remarkably efficient in practice and was a great improvement over earlier methods such as Fourier–Motzkin elimination. However, in 1972, Klee and Minty[32] gave an example showing that the worst-case complexity of the simplex method as formulated by Dantzig is exponential time. Since then, for almost every variation on the method, it has been shown that there is a family of linear programs for which it performs badly. It is an open question whether there is a variation with polynomial time, or even sub-exponential worst-case complexity.[33][34]

Analyzing and quantifying the observation that the simplex algorithm is efficient in practice, even though it has exponential worst-case complexity, has led to the development of other measures of complexity. The simplex algorithm has polynomial-time average-case complexity under various probability distributions, with the precise average-case performance of the simplex algorithm depending on the choice of a probability distribution for the random matrices.[34][35] Another approach to studying "typical phenomena" uses Baire category theory from general topology to show that (topologically) most matrices can be solved by the simplex algorithm in a polynomial number of steps. Another method to analyze the performance of the simplex algorithm studies the behavior of worst-case scenarios under small perturbation: are worst-case scenarios stable under a small change (in the sense of structural stability), or do they become tractable? Formally, this method uses random problems to which is added a Gaussian random vector ("smoothed complexity").[36]

1.8 Other algorithms

Other algorithms for solving linear-programming problems are described in the linear-programming article. Another basis-exchange pivoting algorithm is the criss-cross algorithm.[37][38] There are polynomial-time algorithms for linear programming that use interior-point methods: these include Khachiyan's ellipsoidal algorithm, Karmarkar's projective algorithm, and path-following algorithms.[17]

1.9 Linear-fractional programming

    Main article: Linear-fractional programming

Linear-fractional programming (LFP) is a generalization of linear programming (LP): whereas the objective function of a linear program is a linear function, the objective function of a linear-fractional program is a ratio of two linear functions. In other words, a linear program is a linear-fractional program in which the denominator is the constant function having the value one everywhere. A linear-fractional program can be solved by a variant of the simplex algorithm[39][40][41][42] or by the criss-cross algorithm.[43]
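The relation between LFP and LP described above can also be made concrete through the classical Charnes–Cooper transformation (a standard reduction, offered here as an alternative viewpoint to the simplex-variant and criss-cross approaches cited in the text). The sketch only assembles the data of the transformed LP and assumes d·x + β > 0 on the feasible set.

```python
import numpy as np

def charnes_cooper(c, alpha, d, beta, A, b):
    """Rewrite  max (c.x + alpha) / (d.x + beta)  s.t.  A x <= b, x >= 0
    as the LP in variables (y, t), where y = t*x and t = 1/(d.x + beta):

        max  c.y + alpha*t
        s.t. A y - b*t <= 0,   d.y + beta*t = 1,   y >= 0, t >= 0.

    Returns (c_new, A_ub, b_ub, A_eq, b_eq); recover x = y / t from the
    LP optimum.
    """
    m, _ = A.shape
    c_new = np.append(c, alpha)                  # objective over (y, t)
    A_ub = np.hstack([A, -b.reshape(-1, 1)])     # A y - b t <= 0
    b_ub = np.zeros(m)
    A_eq = np.append(d, beta).reshape(1, -1)     # d.y + beta t = 1
    b_eq = np.array([1.0])
    return c_new, A_ub, b_ub, A_eq, b_eq
```

Setting d = 0 and β = 1 makes the denominator the constant one, recovering the ordinary LP, which is exactly the special-case relationship the paragraph states.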

1.10 See also

• Criss-cross algorithm
• Fourier–Motzkin elimination
• Karmarkar's algorithm
• Nelder–Mead simplicial heuristic
• Pivoting rule of Bland, which avoids cycling

1.11 Notes

[1] Murty, Katta G. (1983). Linear programming. New York: John Wiley & Sons Inc. pp. xix+482. ISBN 0-471-09725-X. MR 720547.

[2] Richard W. Cottle, ed. The Basic George B. Dantzig. Stanford Business Books, Stanford University Press, Stanford, California, 2003. (Selected papers by George B. Dantzig)

[3] George B. Dantzig and Mukund N. Thapa. 1997. Linear programming 1: Introduction. Springer-Verlag.

[4] George B. Dantzig and Mukund N. Thapa. 2003. Linear Programming 2: Theory and Extensions. Springer-Verlag.


[5] Michael J. Todd (February 2002). The many facets of linear programming. Mathematical Programming 91 (3). (Invited survey, from the International Symposium on Mathematical Programming.)

    [6] Computing in Science and Engineering, volume 2, no. 1,2000 html version

    [7] Murty (1983, Comment 2.2)

    [8] Murty (1983, Note 3.9)

[9] Stone, Richard E.; Tovey, Craig A. (1991). The simplex and projective scaling algorithms as iteratively reweighted least squares methods. SIAM Review 33 (2): 220–237. doi:10.1137/1033049. JSTOR 2031142. MR 1124362.

[10] Stone, Richard E.; Tovey, Craig A. (1991). Erratum: The simplex and projective scaling algorithms as iteratively reweighted least squares methods. SIAM Review 33 (3): 461. doi:10.1137/1033100. JSTOR 2031443. MR 1124362.

[11] Strang, Gilbert (1 June 1987). Karmarkar's algorithm and its place in applied mathematics. The Mathematical Intelligencer (New York: Springer) 9 (2): 4–10. doi:10.1007/BF03025891. ISSN 0343-6993. MR 883185.

    [12] Murty (1983, Theorem 3.1)

    [13] Murty (1983, Theorem 3.3)

    [14] Murty (1983, p. 143, Section 3.13)

    [15] Murty (1983, p. 137, Section 3.8)

[16] Evar D. Nering and Albert W. Tucker, 1993, Linear Programs and Related Problems, Academic Press. (elementary)

[17] Robert J. Vanderbei, Linear Programming: Foundations and Extensions, 3rd ed., International Series in Operations Research & Management Science, Vol. 114, Springer Verlag, 2008. ISBN 978-0-387-74387-5.

    [18] Murty (1983, Section 2.2)

    [19] Murty (1983, p. 173)

    [20] Murty (1983, section 2.3.2)

    [21] Murty (1983, section 3.12)

    [22] Murty (1983, p. 66)

    [23] Murty (1983, p. 67)

    [24] Murty (1983, p. 60)

[25] M. Padberg, Linear Optimization and Extensions, Second Edition, Springer-Verlag, 1999.

[26] Dimitris Alevras and Manfred W. Padberg, Linear Optimization and Extensions: Problems and Extensions, Universitext, Springer-Verlag, 2001. (Problems from Padberg with solutions.)

[27] Maros, István; Mitra, Gautam (1996). Simplex algorithms. In J. E. Beasley. Advances in linear and integer programming. Oxford Science. pp. 1–46. MR 1438309.

[28] Maros, István (2003). Computational techniques of the simplex method. International Series in Operations Research & Management Science 61. Boston, MA: Kluwer Academic Publishers. pp. xx+325. ISBN 1-4020-7332-1. MR 1960274.

[29] Bland, Robert G. (May 1977). New finite pivoting rules for the simplex method. Mathematics of Operations Research 2 (2): 103–107. doi:10.1287/moor.2.2.103. JSTOR 3689647. MR 459599.

    [30] Murty (1983, p. 79)

[31] There are abstract optimization problems, called oriented matroid programs, on which Bland's rule cycles (incorrectly) while the criss-cross algorithm terminates correctly.

[32] Klee, Victor; Minty, George J. (1972). How good is the simplex algorithm?. In Shisha, Oved. Inequalities III (Proceedings of the Third Symposium on Inequalities held at the University of California, Los Angeles, Calif., September 1–9, 1969, dedicated to the memory of Theodore S. Motzkin). New York-London: Academic Press. pp. 159–175. MR 332165.

[33] Christos H. Papadimitriou and Kenneth Steiglitz, Combinatorial Optimization: Algorithms and Complexity, Corrected republication with a new preface, Dover. (computer science)

[34] Alexander Schrijver, Theory of Linear and Integer Programming. John Wiley & Sons, 1998, ISBN 0-471-98232-6 (mathematical)

[35] The simplex algorithm takes on average D steps for a cube. Borgwardt (1987): Borgwardt, Karl-Heinz (1987). The simplex method: A probabilistic analysis. Algorithms and Combinatorics (Study and Research Texts) 1. Berlin: Springer-Verlag. pp. xii+268. ISBN 3-540-17096-0. MR 868467.

[36] Spielman, Daniel; Teng, Shang-Hua (2001). Smoothed analysis of algorithms: why the simplex algorithm usually takes polynomial time. Proceedings of the Thirty-Third Annual ACM Symposium on Theory of Computing. ACM. pp. 296–305. arXiv:cs/0111050. doi:10.1145/380752.380813. ISBN 978-1-58113-349-3.

[37] Terlaky, Tamás; Zhang, Shu Zhong (1993). Pivot rules for linear programming: A survey on recent theoretical developments. Annals of Operations Research (Springer Netherlands). 46–47 (1): 203–233. doi:10.1007/BF02096264. ISSN 0254-5330. MR 1260019. CiteSeerX: 10.1.1.36.7658.

[38] Fukuda, Komei; Terlaky, Tamás (1997). Thomas M. Liebling and Dominique de Werra, ed. Criss-cross methods: A fresh view on pivot algorithms. Mathematical Programming: Series B 79 (1–3) (Amsterdam: North-Holland Publishing Co.). pp. 369–395. doi:10.1007/BF02614325. MR 1464775.

[39] Murty (1983, Chapter 3.20 (pp. 160–164) and pp. 168 and 179)


[40] Chapter five: Craven, B. D. (1988). Fractional programming. Sigma Series in Applied Mathematics 4. Berlin: Heldermann Verlag. p. 145. ISBN 3-88538-404-3. MR 949209.

[41] Kruk, Serge; Wolkowicz, Henry (1999). Pseudolinear programming. SIAM Review 41 (4): 795–805. doi:10.1137/S0036144598335259. JSTOR 2653207. MR 1723002.

[42] Mathis, Frank H.; Mathis, Lenora Jane (1995). A nonlinear programming algorithm for hospital management. SIAM Review 37 (2): 230–234. doi:10.1137/1037046. JSTOR 2132826. MR 1343214.

[43] Illés, Tibor; Szirmai, Ákos; Terlaky, Tamás (1999). The finite criss-cross method for hyperbolic programming. European Journal of Operational Research 114 (1): 198–214. doi:10.1016/S0377-2217(98)00049-6. ISSN 0377-2217. PDF preprint.

1.12 References

• Murty, Katta G. (1983). Linear programming. New York: John Wiley & Sons, Inc. pp. xix+482. ISBN 0-471-09725-X. MR 720547.

1.13 Further reading

These introductions are written for students of computer science and operations research:

• Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, Second Edition. MIT Press and McGraw-Hill, 2001. ISBN 0-262-03293-7. Section 29.3: The simplex algorithm, pp. 790–804.

• Frederick S. Hillier and Gerald J. Lieberman: Introduction to Operations Research, 8th edition. McGraw-Hill. ISBN 0-07-123828-X

• Rardin, Ronald L. (1997). Optimization in operations research. Prentice Hall. p. 919. ISBN 0-02-398415-5.

1.14 External links

• An Introduction to Linear Programming and the Simplex Algorithm by Spyros Reveliotis of the Georgia Institute of Technology.

• Greenberg, Harvey J., Klee–Minty Polytope Shows Exponential Time Complexity of Simplex Method, University of Colorado at Denver (1997). PDF download

• Simplex Method: a tutorial for the Simplex Method with examples (also two-phase and M-method).

• Example of Simplex Procedure for a Standard Linear Programming Problem by Thomas McFarland of the University of Wisconsin-Whitewater.

• PHPSimplex: online tool to solve Linear Programming Problems by Daniel Izquierdo and Juan José Ruiz of the University of Málaga (UMA, Spain)

• simplex-m: Online Simplex Solver

Chapter 2

    Criss-cross algorithm

This article is about an algorithm for mathematical optimization. For the naming of chemicals, see crisscross method.

The criss-cross algorithm visits all 8 corners of the Klee–Minty cube in the worst case. It visits 3 additional corners on average. The Klee–Minty cube is a perturbation of the cube shown here.

In mathematical optimization, the criss-cross algorithm denotes a family of algorithms for linear programming. Variants of the criss-cross algorithm also solve more general problems with linear inequality constraints and nonlinear objective functions; there are criss-cross algorithms for linear-fractional programming problems,[1][2] quadratic-programming problems, and linear complementarity problems.[3]

Like the simplex algorithm of George B. Dantzig, the criss-cross algorithm is not a polynomial-time algorithm for linear programming. Both algorithms visit all 2^D corners of a (perturbed) cube in dimension D, the Klee–Minty cube (after Victor Klee and George J. Minty), in the worst case.[4][5] However, when it is started at a random corner, the criss-cross algorithm on average visits only D additional corners.[6][7][8] Thus, for the three-dimensional cube, the algorithm visits all 8 corners in the worst case and exactly 3 additional corners on average.

2.1 History

The criss-cross algorithm was published independently by Tamás Terlaky[9] and by Zhe-Min Wang;[10] related algorithms appeared in unpublished reports by other authors.[3]

2.2 Comparison with the simplex algorithm for linear optimization

See also: Linear programming, Simplex algorithm and Bland's rule

In its second phase, the simplex algorithm crawls along the edges of the polytope until it finally reaches an optimum vertex. The criss-cross algorithm considers bases that are not associated with vertices, so that some iterates can be in the interior of the feasible region, like interior-point algorithms; the criss-cross algorithm can also have infeasible iterates outside the feasible region.

In linear programming, the criss-cross algorithm pivots between a sequence of bases but differs from the simplex algorithm of George Dantzig. The simplex algorithm first finds a (primal-) feasible basis by solving a "phase-one problem"; in phase two, the simplex algorithm pivots between a sequence of basic feasible solutions so that the objective function is non-decreasing with each pivot, terminating at an optimal solution (also finally finding a dual feasible solution).[3][11]

The criss-cross algorithm is simpler than the simplex algorithm, because the criss-cross algorithm only has one phase. Its pivoting rules are similar to the least-index pivoting rule of Bland.[12] Bland's rule uses only signs of coefficients rather than their (real-number) order when deciding eligible pivots. Bland's rule selects an entering variable by comparing values of reduced costs, using the real-number ordering of the eligible pivots.[12][13] Unlike Bland's rule, the criss-cross algorithm is purely combinatorial, selecting an entering variable and a leaving variable by considering only the signs of coefficients rather than their real-number ordering.[3][11] The criss-cross algorithm has been applied to furnish constructive proofs of basic results in real linear algebra, such as the lemma of Farkas.[14]

While most simplex variants are monotonic in the objective (strictly in the non-degenerate case), most variants of the criss-cross algorithm lack a monotone merit function, which can be a disadvantage in practice.

2.3 Description

The criss-cross algorithm works on a standard pivot tableau (or on-the-fly calculated parts of a tableau, if implemented like the revised simplex method). In a general step, if the tableau is primal or dual infeasible, it selects one of the infeasible rows / columns as the pivot row / column, using an index selection rule. An important property is that the selection is made on the union of the infeasible indices, and the standard version of the algorithm does not distinguish column and row indices (that is, the column indices basic in the rows). If a row is selected, then the algorithm uses the index selection rule to identify a position for a dual type pivot, while if a column is selected, then it uses the index selection rule to find a row position and carries out a primal type pivot.
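The description above can be sketched for linear programming with the least-index selection rule. This is a dense, illustrative toy under stated assumptions: the problem is given as min c·x subject to Ax = b, x ≥ 0, the last m columns of A form an invertible starting basis (e.g. slack columns), and tolerances and variable names are not from the source; real implementations work from a factorized tableau rather than repeated dense solves.

```python
import numpy as np

def criss_cross(A, b, c):
    """Least-index criss-cross sketch for min c.x s.t. A x = b, x >= 0."""
    m, n = A.shape
    basis = list(range(n - m, n))          # start from the (possibly infeasible) slack basis
    while True:
        B = A[:, basis]
        xB = np.linalg.solve(B, b)         # current basic values
        y = np.linalg.solve(B.T, c[basis])  # simplex multipliers
        red = c - A.T @ y                  # reduced costs (zero on the basis)
        # union of primal- and dual-infeasible indices, smallest first
        cand = [basis[i] for i in range(m) if xB[i] < -1e-9]
        cand += [j for j in range(n) if j not in basis and red[j] < -1e-9]
        if not cand:
            x = np.zeros(n)
            x[basis] = xB
            return x                       # optimal basic solution
        k = min(cand)
        T = np.linalg.solve(B, A)          # current tableau
        if k in basis:                     # dual-type pivot on the row of k
            r = basis.index(k)
            enter = [j for j in range(n) if j not in basis and T[r, j] < -1e-9]
            if not enter:
                raise ValueError("primal infeasible")
            basis[r] = min(enter)          # least-index entering column
        else:                              # primal-type pivot on column k
            rows = [i for i in range(m) if T[i, k] > 1e-9]
            if not rows:
                raise ValueError("unbounded")
            r = min(rows, key=lambda i: basis[i])  # least basic index leaves
            basis[r] = k
```

On a small problem the iterates can leave the feasible region and return, which is exactly the behavior that distinguishes the criss-cross algorithm from the two-phase simplex method described earlier.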

2.4 Computational complexity: Worst and average cases

The time complexity of an algorithm counts the number of arithmetic operations sufficient for the algorithm to solve the problem. For example, Gaussian elimination requires on the order of D^3 operations, and so it is said to have polynomial time-complexity, because its complexity is bounded by a cubic polynomial. There are examples of algorithms that do not have polynomial-time complexity. For example, a generalization of Gaussian elimination called Buchberger's algorithm has for its complexity an exponential function of the problem data (the degree of the polynomials and the number of variables of the multivariate polynomials). Because exponential functions eventually grow much faster than polynomial functions, an exponential complexity implies that an algorithm has slow performance on large problems.

The worst-case computational complexity of Khachiyan's ellipsoidal algorithm is a polynomial. The criss-cross algorithm has exponential complexity.

Several algorithms for linear programming (Khachiyan's ellipsoidal algorithm, Karmarkar's projective algorithm, and central-path algorithms) have polynomial time-complexity (in the worst case and thus on average). The ellipsoidal and projective algorithms were published before the criss-cross algorithm.

However, like the simplex algorithm of Dantzig, the criss-cross algorithm is not a polynomial-time algorithm for linear programming. Terlaky's criss-cross algorithm visits all the 2^D corners of a (perturbed) cube in dimension D, according to a paper of Roos; Roos's paper modifies the Klee–Minty construction of a cube on which the simplex algorithm takes 2^D steps.[3][4][5] Like the simplex algorithm, the criss-cross algorithm visits all 8 corners of the three-dimensional cube in the worst case.

When it is initialized at a random corner of the cube, the criss-cross algorithm visits only D additional corners, however, according to a 1994 paper by Fukuda and Namiki.[6][7] Trivially, the simplex algorithm takes on average D steps for a cube.[8][15] Like the simplex algorithm, the criss-cross algorithm visits exactly 3 additional corners of the three-dimensional cube on average.

    2.5 Variants

The criss-cross algorithm has been extended to solve more general problems than linear programming problems.


2.5.1 Other optimization problems with linear constraints

There are variants of the criss-cross algorithm for linear programming, for quadratic programming, and for the linear-complementarity problem with "sufficient matrices";[3][6][16][17][18][19] conversely, for linear complementarity problems, the criss-cross algorithm terminates finitely only if the matrix is a sufficient matrix.[18][19] A sufficient matrix is a generalization both of a positive-definite matrix and of a P-matrix, whose principal minors are each positive.[18][19][20] The criss-cross algorithm has been adapted also for linear-fractional programming.[1][2]

    2.5.2 Vertex enumeration

The criss-cross algorithm was used in an algorithm for enumerating all the vertices of a polytope, which was published by David Avis and Komei Fukuda in 1992.[21] Avis and Fukuda presented an algorithm which finds the v vertices of a polyhedron defined by a nondegenerate system of n linear inequalities in D dimensions (or, dually, the v facets of the convex hull of n points in D dimensions, where each facet contains exactly D given points) in time O(nDv) and O(nD) space.[22]

    2.5.3 Oriented matroids

The max-flow min-cut theorem states that the maximum flow through a network is exactly the capacity of its minimum cut. This theorem can be proved using the criss-cross algorithm for oriented matroids.

The criss-cross algorithm is often studied using the theory of oriented matroids (OMs), which is a combinatorial abstraction of linear-optimization theory.[17][23] Indeed, Bland's pivoting rule was based on his previous papers on oriented-matroid theory. However, Bland's rule exhibits cycling on some oriented-matroid linear-programming problems.[17] The first purely combinatorial algorithm for linear programming was devised by Michael J. Todd.[17][24] Todd's algorithm was developed not only for linear programming in the setting of oriented matroids, but also for quadratic-programming problems and linear-complementarity problems.[17][24] Todd's algorithm is complicated even to state, unfortunately, and its finite-convergence proofs are somewhat complicated.[17]

The criss-cross algorithm and its proof of finite termination can be simply stated and readily extend to the setting of oriented matroids. The algorithm can be further simplified for linear feasibility problems, that is, for linear systems with nonnegative variables; these problems can be formulated for oriented matroids.[14] The criss-cross algorithm has been adapted for problems that are more complicated than linear programming: there are oriented-matroid variants also for the quadratic-programming problem and for the linear-complementarity problem.[3][16][17]

2.6 Summary

The criss-cross algorithm is a simply stated algorithm for linear programming. It was the second fully combinatorial algorithm for linear programming. The partially combinatorial simplex algorithm of Bland cycles on some (nonrealizable) oriented matroids. The first fully combinatorial algorithm was published by Todd, and it is also like the simplex algorithm in that it preserves feasibility after the first feasible basis is generated; however, Todd's rule is complicated. The criss-cross algorithm is not a simplex-like algorithm, because it need not maintain feasibility. The criss-cross algorithm does not have polynomial time-complexity, however.

Researchers have extended the criss-cross algorithm for many optimization problems, including linear-fractional programming. The criss-cross algorithm can solve quadratic programming problems and linear complementarity problems, even in the setting of oriented matroids. Even when generalized, the criss-cross algorithm remains simply stated.

2.7 See also

• Jack Edmonds (pioneer of combinatorial optimization and oriented-matroid theorist; doctoral advisor of Komei Fukuda)

2.8 Notes

[1] Illés, Szirmai & Terlaky (1999)

[2] Stancu-Minasian, I. M. (August 2006). A sixth bibliography of fractional programming. Optimization 55 (4): 405–428. doi:10.1080/02331930600819613. MR 2258634.

    [3] Fukuda & Terlaky (1997)

    [4] Roos (1990)


[5] Klee, Victor; Minty, George J. (1972). How good is the simplex algorithm?. In Shisha, Oved. Inequalities III (Proceedings of the Third Symposium on Inequalities held at the University of California, Los Angeles, Calif., September 1–9, 1969, dedicated to the memory of Theodore S. Motzkin). New York-London: Academic Press. pp. 159–175. MR 332165.

[6] Fukuda & Terlaky (1997, p. 385)

[7] Fukuda & Namiki (1994, p. 367)

[8] The simplex algorithm takes on average D steps for a cube. Borgwardt (1987): Borgwardt, Karl-Heinz (1987). The simplex method: A probabilistic analysis. Algorithms and Combinatorics (Study and Research Texts) 1. Berlin: Springer-Verlag. pp. xii+268. ISBN 3-540-17096-0. MR 868467.

    [9] Terlaky (1985) and Terlaky (1987)

    [10] Wang (1987)

    [11] Terlaky & Zhang (1993)

[12] Bland, Robert G. (May 1977). New finite pivoting rules for the simplex method. Mathematics of Operations Research 2 (2): 103–107. doi:10.1287/moor.2.2.103. JSTOR 3689647. MR 459599.

[13] Bland's rule is also related to an earlier least-index rule, which was proposed by Katta G. Murty for the linear complementarity problem, according to Fukuda & Namiki (1994).

[14] Klafszky & Terlaky (1991)

[15] More generally, for the simplex algorithm, the expected number of steps is proportional to D for linear-programming problems that are randomly drawn from the Euclidean unit sphere, as proved by Borgwardt and by Smale.

    [16] Fukuda & Namiki (1994)

[17] Björner, Anders; Las Vergnas, Michel; Sturmfels, Bernd; White, Neil; Ziegler, Günter (1999). 10 Linear programming. Oriented Matroids. Cambridge University Press. pp. 417–479. doi:10.1017/CBO9780511586507. ISBN 978-0-521-77750-6. MR 1744046.

[18] den Hertog, D.; Roos, C.; Terlaky, T. (1 July 1993). The linear complementarity problem, sufficient matrices, and the criss-cross method (pdf). Linear Algebra and its Applications 187: 1–14. doi:10.1016/0024-3795(93)90124-7.

[19] Csizmadia, Zsolt; Illés, Tibor (2006). New criss-cross type algorithms for linear complementarity problems with sufficient matrices (pdf). Optimization Methods and Software 21 (2): 247–266. doi:10.1080/10556780500095009. MR 2195759.

[20] Cottle, R. W.; Pang, J.-S.; Venkateswaran, V. (March–April 1989). Sufficient matrices and the linear complementarity problem. Linear Algebra and its Applications. 114–115: 231–249. doi:10.1016/0024-3795(89)90463-1. MR 986877.

    [21] Avis & Fukuda (1992, p. 297)

[22] The v vertices in a simple arrangement of n hyperplanes in D dimensions can be found in O(n^2 Dv) time and O(nD) space complexity.

[23] The theory of oriented matroids was initiated by R. Tyrrell Rockafellar (Rockafellar 1969): Rockafellar, R. T. (1969). The elementary vectors of a subspace of R^N (1967). In R. C. Bose and T. A. Dowling. Combinatorial Mathematics and its Applications. The University of North Carolina Monograph Series in Probability and Statistics (4). Chapel Hill, North Carolina: University of North Carolina Press. pp. 104–127. MR 278972. PDF reprint. Rockafellar was influenced by the earlier studies of Albert W. Tucker and George J. Minty. Tucker and Minty had studied the sign patterns of the matrices arising through the pivoting operations of Dantzig's simplex algorithm.

[24] Todd, Michael J. (1985). Linear and quadratic programming in oriented matroids. Journal of Combinatorial Theory. Series B 39 (2): 105–133. doi:10.1016/0095-8956(85)90042-5. MR 811116.

2.9 References

• Avis, David; Fukuda, Komei (December 1992). A pivoting algorithm for convex hulls and vertex enumeration of arrangements and polyhedra. Discrete and Computational Geometry 8 (ACM Symposium on Computational Geometry (North Conway, NH, 1991) number 1): 295–313. doi:10.1007/BF02293050. MR 1174359.

• Csizmadia, Zsolt; Illés, Tibor (2006). New cri