EXTENSIONS OF OPTIMIZATION THEORY AND NEW

COMPUTATIONAL APPROACHES FOR

HIGHER-ORDER DYNAMIC SYSTEMS

by

Tawiwat Veeraklaew

A dissertation submitted to the Faculty of the University of Delaware in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Mechanical Engineering

Fall 1999

© 1999 Tawiwat Veeraklaew. All Rights Reserved.

EXTENSIONS OF OPTIMIZATION THEORY AND NEW

COMPUTATIONAL APPROACHES FOR

HIGHER-ORDER DYNAMIC SYSTEMS

by

Tawiwat Veeraklaew

Approved: Tsu-Wei Chou, Ph.D., Acting Chair of the Department of Mechanical Engineering

Approved: Andras Z. Szeri, Ph.D., Interim Dean of the College of Engineering

Approved: John C. Cavanaugh, Ph.D., Vice Provost for Academic Programs and Planning

I certify that I have read this dissertation and that in my opinion it meets the academic and professional standard required by the University as a dissertation for the degree of Doctor of Philosophy.

Signed: Sunil K. Agrawal, Ph.D., Professor in charge of dissertation

I certify that I have read this dissertation and that in my opinion it meets the academic and professional standard required by the University as a dissertation for the degree of Doctor of Philosophy.

Signed: Jian-Qiao Sun, Ph.D., Member of dissertation committee

I certify that I have read this dissertation and that in my opinion it meets the academic and professional standard required by the University as a dissertation for the degree of Doctor of Philosophy.

Signed: Michael D. Greenberg, Ph.D., Member of dissertation committee

I certify that I have read this dissertation and that in my opinion it meets the academic and professional standard required by the University as a dissertation for the degree of Doctor of Philosophy.

Signed: Francis J. Doyle III, Ph.D., Member of dissertation committee

ACKNOWLEDGEMENTS

The author believes that this thesis could be done only because of the following persons whom the author respects and loves; therefore, the author would like to express his gratefulness. First, to my patron, Her Royal Highness Princess Galayaniwattana; without Her Royal Highness's encouragement and support my family would not have been together during the course of this thesis, and this degree would not have been possible.

Also, I would like to thank my advisor, Professor Sunil K. Agrawal; without his guidance, support, and patience this thesis would not have been a success. I would also like to thank Professor Jian-Qiao Sun, Professor Michael Greenberg, and Professor Francis Doyle III for the help and cooperation they extended to me over the course of this thesis. I am very thankful to the faculty and staff of the Department of Mechanical Engineering, University of Delaware, for their help in making this degree possible.

Thanks are also due to my fellow students and postdoctoral fellows at the Mechanical Systems Laboratory for creating an exciting research environment. In particular, I would like to thank Armando Ferreira, Stephen T. Pledgie, Madhu Annapragada, Rahul Rao, Shourov Bhattacharya, Nadeem Faiz, Xiaochun Xu, Dr. Felix Pfister, and Dr. Maximilian Schlemmer.

Finally, I would like to thank my parents, Ravi and Ratuay Veeraklaew, and my family Passarawon, Arthima, and Taweetiya Veeraklaew for their constant encouragement and support.

To my family—Passarawon, Arthima, and Taweetiya

TABLE OF CONTENTS

LIST OF FIGURES . . . . . xii
LIST OF TABLES . . . . . xvi
ABSTRACT . . . . . xvii

Chapter

1 INTRODUCTION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1

1.1 Problem Statement . . . . . 2
1.2 Direct Approaches . . . . . 3
1.3 Indirect Approaches . . . . . 3

1.3.1 Calculus of Variations . . . . . 4
1.3.2 Pontryagin's Principle . . . . . 9
1.3.3 First Integrals . . . . . 12
1.3.4 Linear Feedback: Terminal Controllers . . . . . 13
1.3.5 Neighboring Extremal Paths (Final Time Specified) . . . . . 15
1.3.6 Determination of Neighboring Extremal Paths . . . . . 17
1.3.7 Sufficient Conditions . . . . . 19

1.4 Motivations and Objectives . . . . . 22
1.5 Thesis Overview . . . . . 22

2 THE OPTIMALITY PRINCIPLE FOR HIGHER-ORDER DYNAMIC SYSTEMS . . . . . 25

2.1 Higher-order Dynamic Systems . . . . . 28
2.2 Calculus of Variations . . . . . 30

2.2.1 Variation Statement of General Functions . . . . . . . . . . . 30

2.2.2 Dynamic Optimization . . . . . . . . . . . . . . . . . . . . . . 34

2.2.2.1 No Auxiliary Constraints . . . . . 37
2.2.2.2 Variable Final Time and Free Final States . . . . . 37

2.2.2.3 Auxiliary Equality Constraints . . . . . 38
2.2.2.4 Auxiliary Inequality Constraints . . . . . 39

2.3 Extensions of The Minimum Principle . . . . . . . . . . . . . . . . . 40

2.3.1 Hamilton-Jacobi Theory . . . . . . . . . . . . . . . . . . . . . 40

2.3.1.1 Constraint Set U . . . . . 40
2.3.1.2 Constraint Set A(t, x) . . . . . 45

2.4 First Integrals . . . . . 47
2.5 Example . . . . . 50

2.5.1 Fourth-Order Form . . . . . 52
2.5.2 First-Order Form . . . . . 52

2.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53

3 LINEAR SYSTEMS WITH QUADRATIC CRITERIA: LINEAR FEEDBACK TERMINAL CONTROLLERS . . . . . 54

3.1 Feedback Law for Higher-order Linear System: Finite-time Regulator Problem . . . . . 54

3.1.1 Matrix Riccati Equation for the Higher-order Systems . . . . . 55

3.2 Second-order Linear Systems: Finite-time Regulator Problem . . . . . 57

3.2.1 Example: A Second-order Linear System With Quadratic Criteria . . . . . 59

3.3 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63

4 NEIGHBORING OPTIMUM FEEDBACK LAW . . . . . . . . . . 64

4.1 Problem Statement . . . . . 64
4.2 Example 1: A simple second-order linear system . . . . . 67

4.3 Example 2: One-armed Manipulator with Coulomb Friction . . . . . 70
4.4 Summary . . . . . 72

5 NUMERICAL ALGORITHMS . . . . . . . . . . . . . . . . . . . . . . 74

5.1 Introduction . . . . . 74
5.2 Nonlinear Programming . . . . . 74
5.3 Direct Scheme . . . . . 75

5.3.1 Method of Solution . . . . . 76
5.3.2 Computation Issues . . . . . 78

5.4 Indirect Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79

5.4.1 Method of Solution . . . . . 80
5.4.2 A General-purpose Program . . . . . 83

5.5 Examples: Minimum Energy Problems . . . . . . . . . . . . . . . . . 84

5.5.1 Example 1: Spring-mass-damper System . . . . . . . . . . . . 85

5.5.1.1 An example of the input file to the general-purpose programs: user.m . . . . . 91

5.5.2 Example 2: Flexible Link . . . . . . . . . . . . . . . . . . . . 95

5.5.2.1 An example of the input file to the general-purpose programs: user.m . . . . . 99

5.6 Examples: Minimum-time Problem . . . . . . . . . . . . . . . . . . . 103

5.6.1 Example 3: Single Mass . . . . . 103
5.6.2 Example 4: Flexible Link . . . . . 105

5.7 Summary . . . . . 107

6 APPLICATION OF OPTIMAL CONTROL TO ROBOT SYSTEMS . . . . . 108

6.1 Dynamic Equations of Robot Systems . . . . . . . . . . . . . . . . . . 108

6.1.1 Forward Kinematics and Denavit-Hartenberg Parameters . . . . . 108
6.1.2 Jacobian . . . . . 111
6.1.3 Dynamic Equations of Motion . . . . . 112

6.2 Optimality Conditions . . . . . 112
6.3 Illustrative Examples . . . . . 115

6.3.1 Example 1: The one-armed manipulator with Coulomb Friction . . . . . 115
6.3.2 Example 2: The polar manipulator . . . . . 117
6.3.3 Example 3: The planar two-link manipulator . . . . . 119

6.3.4 Example 4: SCARA robot . . . . . . . . . . . . . . . . . . . . 122

6.4 Summary . . . . . 124

Appendix

A VARIATION OF A FUNCTIONAL . . . . . 126
B AN EQUIVALENT PROBLEM STATEMENT TO THE PROBLEM OF THE NEIGHBORING EXTREMALS . . . . . 130
C THE QUADRATIC FORM OF J∗(X(T), T) . . . . . 134
D THE SOLUTION FORM FOR λ IN THE FIRST-ORDER LINEAR QUADRATIC REGULATOR PROBLEM . . . . . 138

REFERENCES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139

E GENERAL PURPOSE PROGRAMS . . . . . . . . . . . . . . . . . 143

E.1 Direct Higher-Order Method . . . . . . . . . . . . . . . . . . . . . . . 143

E.1.1 allnec.m . . . . . 143
E.1.2 concj.m . . . . . 143
E.1.3 conoj.m . . . . . 145
E.1.4 execute.m . . . . . 147
E.1.5 funcon.m . . . . . 147
E.1.6 funcont.m . . . . . 148
E.1.7 funobj.m . . . . . 154
E.1.8 funobjt.m . . . . . 155

E.1.9 guess.m . . . . . 157
E.1.10 jac.m . . . . . 159
E.1.11 main.m . . . . . 160
E.1.12 main-guess.m . . . . . 163

E.1.13 necc.m . . . . . 167
E.1.14 Npsol.m . . . . . 170
E.1.15 param.m . . . . . 172
E.1.16 pre.m . . . . . 174
E.1.17 result1.m . . . . . 175
E.1.18 result2.m . . . . . 177
E.1.19 user.m . . . . . 179

E.2 Direct Minimum Time Using Higher-Order Method . . . . . . . . . . 182

E.2.1 allnec.m . . . . . 182
E.2.2 concj.m . . . . . 182
E.2.3 conoj.m . . . . . 184
E.2.4 execute.m . . . . . 185
E.2.5 funcon.m . . . . . 185
E.2.6 funcont.m . . . . . 186
E.2.7 funobj.m . . . . . 192
E.2.8 funobjt.m . . . . . 193
E.2.9 guess.m . . . . . 193
E.2.10 jac.m . . . . . 193

E.2.11 main.m . . . . . 194
E.2.12 main-guess.m . . . . . 197
E.2.13 necc.m . . . . . 201
E.2.14 Npsol.m . . . . . 204
E.2.15 param.m . . . . . 204
E.2.16 pre.m . . . . . 206
E.2.17 result1.m . . . . . 207
E.2.18 result2.m . . . . . 209
E.2.19 user.m . . . . . 211

E.3 Indirect Higher-Order Method . . . . . . . . . . . . . . . . . . . . . . 214

E.3.1 allnec.m . . . . . 214
E.3.2 concj.m . . . . . 214
E.3.3 conoj.m . . . . . 215
E.3.4 execute.m . . . . . 218
E.3.5 funcon.m . . . . . 218

E.3.6 funcont.m . . . . . 219
E.3.7 funobj.m . . . . . 229
E.3.8 funobjt.m . . . . . 231
E.3.9 guess.m . . . . . 233

E.3.10 main.m . . . . . 234
E.3.11 main-guess.m . . . . . 238
E.3.12 necc.m . . . . . 242
E.3.13 Npsol.m . . . . . 245
E.3.14 param.m . . . . . 245
E.3.15 pre.m . . . . . 249
E.3.16 result1.m . . . . . 249
E.3.17 result2.m . . . . . 252
E.3.18 user.m . . . . . 254

E.4 Indirect Minimum Time Using Higher-Order Method . . . . . . . . . 256

E.4.1 allnec.m . . . . . 256
E.4.2 concj.m . . . . . 257
E.4.3 conoj.m . . . . . 258
E.4.4 execute.m . . . . . 259
E.4.5 funcon.m . . . . . 259
E.4.6 funcont.m . . . . . 260
E.4.7 funobj.m . . . . . 271
E.4.8 funobjt.m . . . . . 272

E.4.9 guess.m . . . . . 272
E.4.10 main.m . . . . . 274
E.4.11 main-guess.m . . . . . 278
E.4.12 necc.m . . . . . 281
E.4.13 Npsol.m . . . . . 284
E.4.14 param.m . . . . . 285
E.4.15 pre.m . . . . . 289
E.4.16 result1.m . . . . . 289
E.4.17 result2.m . . . . . 291
E.4.18 user.m . . . . . 293

F GENERAL CODE INSTRUCTION . . . . . . . . . . . . . . . . . . . 297

LIST OF FIGURES

1.1 The allowable increment h_x(t) in x(t) for the case with variable end point and end time. . . . . 6

1.2 Neighboring optimal feedback control. . . . . . . . . . . . . . . . . . 18

2.1 Single-link robot with joint flexibility. . . . . . . . . . . . . . . . . . 29

2.2 The allowable increment h_x^(i)(t) in x^(i)(t) for the case with variable end point and end time, where i = 0, ..., p − 1. . . . . 32

3.1 The comparison of the optimal solutions from second-order and first-order closed-loop and open-loop controls. . . . . 60

3.2 The Riccati solutions of the example problem from second-order and first-order optimal feedback controls. . . . . 61

4.1 The higher-order neighboring optimal feedback control. . . . . . . . 68

4.2 Optimal solutions of the system with m = 1, m = 1.1, and neighboring optimal. . . . . 70

4.3 Optimal solutions of the system with θ1(0) = 3, θ1(0) = 3.05, and neighboring optimal. . . . . 71

5.1 Piecewise polynomial representation of the states. . . . . . . . . . . 77

5.2 Piecewise constant representation of the inputs. . . . . . . . . . . . 77

5.3 A flowchart of the development of the general-purpose programs. . . 85

5.4 A spring-mass-damper system with two masses and one input. . . . 86

5.5 The 6-interval direct response curves x1, x2, and u for Example 1. The plots for the four cases (I)-(IV) overlap within the accuracy of the drawing. . . . . 86

5.6 The 6-interval indirect response curves x1, x2, and u for Example 1. The plots for the four cases (I)-(IV) overlap within the accuracy of the drawing. . . . . 87

5.7 The analytical response curves x1, x2, and u of Example 1 using the matrix exponential. . . . . 87

5.8 The response curves x1, x2, and u of Example 1 when the first integral is implemented explicitly. . . . . 88

5.9 The first integral comparisons of Example 1. . . . . . . . . . . . . . 89

5.10 The accuracy of the solutions in comparison with the solution obtained with the first integral used explicitly as a constraint in Example 1. . . . . 90

5.11 Single-link robot with joint flexibility. . . . . . . . . . . . . . . . . . 95

5.12 The 6-interval direct response curves q1, q2, and u for Example 2. The plots for the four cases (I)-(IV) overlap within the accuracy of the drawing. . . . . 96

5.13 The 6-interval indirect response curves q1, q2, and u for Example 2. The plots for the four cases (I)-(IV) overlap within the accuracy of the drawing. . . . . 96

5.14 The response curves q1, q2, and u of Example 2 when the first integral is implemented explicitly. . . . . 97

5.15 The first integral comparison of Example 2. . . . . . . . . . . . . . . 98

5.16 The accuracy of the solutions in comparison with the solution obtained with the first integral used explicitly as a constraint in Example 2. . . . . 99

5.17 One mass system. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103

5.18 The 6-interval direct response curves x, x^(1), and u for Example 3. The plots of first-order and second-order systems overlap within the accuracy of the drawing. . . . . 104

5.19 The 6-interval indirect response curves x, x^(1), and u for Example 3. The plots of first-order and second-order systems overlap within the accuracy of the drawing. . . . . 105

5.20 The 6-interval direct response curves q1, q2, and u for Example 4. The plots of first-order and second-order systems overlap within the accuracy of the drawing. . . . . 106

5.21 The 6-interval indirect response curves q1, q2, and u for Example 4. The plots of first-order and second-order systems overlap within the accuracy of the drawing. . . . . 106

6.1 The Denavit-Hartenberg parameters. . . . . . . . . . . . . . . . . . 110

6.2 The process of the development of the dynamic equations of motion and interfacing with dynamic optimization toolboxes. . . . . 115

6.3 One-armed manipulator with friction. . . . . . . . . . . . . . . . . . 116

6.4 The 6-interval direct response curves θ1, θ1^(1), and Ma for Example 1. The plots of first-order and second-order systems overlap within the accuracy of the drawing. . . . . 117

6.5 The 6-interval indirect response curves θ1, θ1^(1), and Ma for Example 1. The plots of first-order and second-order systems overlap within the accuracy of the drawing. . . . . 118

6.6 Polar manipulator. Shaded areas indicate actuator location: rotary actuator and linear actuator. . . . . 119

6.7 The 6-interval direct response curves θ1, θ1^(1), c2, c2^(1), u1 and u2 for Example 2. The plots of first-order and second-order systems overlap within the accuracy of the drawing. . . . . 120

6.8 The 6-interval indirect response curves θ1, θ1^(1), c2, c2^(1), u1 and u2 for Example 2. The plots of first-order and second-order systems overlap within the accuracy of the drawing. . . . . 120

6.9 The planar two-link manipulator. . . . . . . . . . . . . . . . . . . . 121

6.10 The 6-interval direct response curves q1, q1^(1), q2, q2^(1), u1 and u2 for Example 3. The plots of first-order and second-order systems overlap within the accuracy of the drawing. . . . . 122

6.11 The 6-interval indirect response curves q1, q1^(1), q2, q2^(1), u1 and u2 for Example 3. The plots of first-order and second-order systems overlap within the accuracy of the drawing. . . . . 122

6.12 SCARA robot. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123

6.13 The 6-interval direct response curves q1, q2, q3, q4, u1, u2, u3, and u4 for Example 4. The plots of first-order and second-order systems overlap within the accuracy of the drawing. . . . . 125

6.14 The 6-interval indirect response curves q1, q2, q3, q4, u1, u2, u3, and u4 for Example 4. The plots of first-order and second-order systems overlap within the accuracy of the drawing. . . . . 125

LIST OF TABLES

5.1 The CPU run-time of Example 1 showing direct/indirect and first-order/higher-order comparisons. . . . . 88

5.2 The CPU run-time of Example 2 showing direct/indirect and first-order/higher-order comparisons. . . . . 97

5.3 The CPU run-time of Example 3 showing direct/indirect and first-order/second-order comparisons. . . . . 104

5.4 The CPU run-time of Example 4 showing direct/indirect and first-order/second-order comparisons. . . . . 105

6.1 The CPU run-time of Example 1 showing direct/indirect and first-order/second-order comparisons. . . . . 117

6.2 The CPU run-time of Example 2 showing direct/indirect and first-order/second-order comparisons. . . . . 118

6.3 The CPU run-time of Example 3 showing direct/indirect and first-order/second-order comparisons. . . . . 121

6.4 The CPU run-time of Example 4 showing direct/indirect and first-order/second-order comparisons. . . . . 124

ABSTRACT

In recent years, using the tools of linear and nonlinear systems theory, it has been shown that a large number of dynamic systems can be written in canonical forms. These canonical forms are alternatives to state-space forms and can be represented by higher-order differential equations. For planning and control purposes, these canonical forms provide a number of advantages over their corresponding first-order forms. In this thesis, the author addresses the question of optimization of dynamic systems described by higher-order differential equations with general nonlinear constraints. The minimum principle for higher-order systems is derived directly from their higher-order forms, and the results are confirmed by classical theory using the first-order form. In addition, a linear continuous feedback control law is obtained for higher-order systems. The concepts of neighboring extremals and the second variation, together with the sufficient conditions for a minimum or a maximum, are also derived for higher-order systems; these sufficient conditions must be satisfied along the minimizing and maximizing trajectories.

The optimality conditions for higher-order systems are derived using two approaches: (i) variational calculus applied to a higher-order augmented cost functional; and (ii) Hamilton-Jacobi theory, thereby extending Pontryagin's principle. It is shown that the two approaches lead to the same results. Further, the results of these two approaches are also verified through the application of variational calculus to their equivalent first-order forms. These conditions are then used to develop computational approaches for higher-order systems. A general-purpose program has been developed to benchmark computations between higher-order and first-order systems.
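As a minimal sketch of the contrast between the two representations (a generic illustrative example, not a system treated in this abstract), consider a single rigid link of inertia I driven by a torque u:

```latex
% Minimal sketch: the same dynamics in first-order (state-space) form
% and in second-order (higher-order canonical) form.
% State-space form, with state (x_1, x_2) = (q, \dot{q}):
\begin{align}
  \dot{x}_1 &= x_2, \\
  \dot{x}_2 &= \frac{1}{I}\, u .
\end{align}
% Equivalent second-order canonical form in the configuration
% variable q alone, with no auxiliary state variables:
\begin{equation}
  I\,\ddot{q} = u .
\end{equation}
```

In the canonical form the input appears algebraically in terms of the highest derivative, which is what makes it possible to eliminate the state equations from the optimization problem.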

Chapter 1

INTRODUCTION

Man is constantly dealing with optimal control problems. In whatever we

attempt, we try to attain a certain goal in an optimum fashion. One problem, that

must be solved immediately is the decision of defining the goal itself. This goal may

be different from one person to the next. The best way to attain a goal can be

thought of as a problem statement for optimal control and is considered in many

areas such as engineering, mathematics, finance, business, and sports. Dynamic

engineering systems are targeted in this thesis and are chosen to best illustrate the

higher-order dynamic optimization problems.

This thesis considers higher-order systems rather than first-order systems.
In optimal control theory, the system equations are conventionally written in
first-order form. The fundamental theory of the calculus of variations, developed in

the last century, provides the necessary tools for dynamic optimization. In early

1995, Agrawal and coworkers found that for linear time-invariant dynamic systems,

canonical forms could be used to explicitly embed the state equations into the cost

functional. This technique reduces an optimization problem with state equations

to an optimization problem without state equations. The results from higher-order

variational calculus were used to find the necessary conditions for optimality [23].

These higher-order necessary conditions were derived without Lagrange multipliers

[1], [44]. Since the Lagrange multipliers are artificially introduced unknown
variables in the problems, eliminating them gives a significant benefit in terms of the


computations as described in [1] and [44]. This idea has been successfully extended

to classes of time-varying systems [9], fully actuated robot systems [3], and feedback

linearizable systems [5]. The constrained optimization problem of higher-order sys-

tems, a more general class of problem, is the main objective of this thesis. Since

first-order dynamic optimization problems have been researched for more than half-

a-century, this chapter provides a brief discussion of the optimality theory so that

comparisons between first-order and higher-order systems can be made later in this

thesis.

1.1 Problem Statement

A large class of dynamic systems can be described by the first-order differ-

ential equations given by

x(1)(t) = f (x(t), u(t), t), x(t0) = x0, (1.1)

where x ∈ Rn and u ∈ Rm are the state and control vectors, respectively, the superscript (i) denotes
the ith derivative of a variable with respect to time, and f is an n-dimensional column

vector of linear or nonlinear functions containing the states and the controls. The

statement of the problem is to determine the control input u(t) and the state variable

x(t) that minimize the performance index

\[
J = \phi(x(t_f), t_f) + \int_{t_0}^{t_f} L(x(t), u(t), t)\,dt, \tag{1.2}
\]

where t0 and tf are the initial and final times, respectively. The performance
index J consists of the following terms: (i) φ(x(tf), tf) is a terminal cost based on
the final time tf and the final state x(tf); (ii) $\int_{t_0}^{t_f} L(x(t), u(t), t)\,dt$ is an integral
cost dependent on the time history of the state and control variables.

φ and L are selected to reflect an appropriate cost associated with the state and

control trajectories.
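As a minimal numerical illustration of this problem setup (a toy example of my own, not from the thesis), the second-order plant x^(2) = u can be stacked into the first-order form (1.1) with state (x, x^(1)), and the integral part of (1.2), here with L = u^2/2, evaluated along a simulated trajectory:

```python
def f(state, u):
    # First-order form (1.1) of the double integrator x^(2) = u:
    # state = (x, x_dot), so state' = (x_dot, u).
    x, xdot = state
    return (xdot, u)

def simulate_cost(u_of_t, t0=0.0, tf=1.0, n=1000):
    """Euler-integrate x^(1) = f(x, u) from x(t0) = (0, 0) and
    accumulate the integral cost with L = u^2 / 2."""
    dt = (tf - t0) / n
    state = (0.0, 0.0)
    J = 0.0
    for k in range(n):
        u = u_of_t(t0 + k * dt)
        J += 0.5 * u * u * dt              # integral term of (1.2)
        dx = f(state, u)
        state = (state[0] + dt * dx[0], state[1] + dt * dx[1])
    return state, J

final_state, J = simulate_cost(lambda t: 1.0)  # constant control u = 1
```

With the constant control u(t) = 1 the exact values are x^(1)(1) = 1, x(1) = 0.5, and an integral cost of 0.5; the Euler scheme reproduces them to discretization accuracy.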


1.2 Direct Approaches

In the direct approach, the admissible solutions for the state variables and

control inputs are chosen in such a way that a certain number of unknown coefficients

are introduced such that all constraints stated in the problem are satisfied. The di-

rect method converts the optimal control problem to a static parameter optimization

problem. Once the conversion is made, the nonlinear programming problem is solved

using many available numerical techniques that include weighted residual methods

and nonlinear programming.

The advantages and disadvantages of the direct method relate to the choice

of initial guess and the accuracy of the solution. One of the biggest advantages of the direct method is that the solution can be obtained starting from a very poor

initial guess. However, the accuracy of the approximate solution is poor compared

with the solution from the indirect approaches.
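The conversion described above can be sketched in a few lines (an assumed toy problem, not the thesis's general-purpose program): for ẋ = u with x(0) = 0 and cost J(a) = (x(1) − 1)² + ∫₀¹ ½u² dt, the entire control history is reduced to one static parameter u(t) = a, which is then found by exhaustive search.

```python
def cost(a, n=200):
    # Direct method: the control history u(t) = a is one static parameter.
    dt = 1.0 / n
    x, J = 0.0, 0.0
    for _ in range(n):
        J += 0.5 * a * a * dt   # integral cost accumulated by Euler quadrature
        x += a * dt             # state equation x' = u
    return (x - 1.0) ** 2 + J   # terminal penalty plus control effort

# Static parameter optimization step (here: a brute-force grid search).
best_a = min((i / 1000.0 for i in range(-2000, 2001)), key=cost)
```

The analytic optimum is a = 2/3 with J = 1/3. Even this crude search recovers it, illustrating why direct methods tolerate poor initial guesses while limiting accuracy to the chosen parameterization.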

1.3 Indirect Approaches

The indirect approach includes necessary first-order conditions in the numer-

ical optimization problem. This makes the algorithm, in general, more complicated.

The necessary conditions arise from techniques such as calculus of variations or

Pontryagin’s principle. These necessary conditions are in terms of differential and

algebraic equations. The advantage of the indirect method is that some properties

of the optimal trajectory are explicitly included. However, a disadvantage is the

introduction of the additional variables known as Lagrange multipliers.

In this section, the calculus of variations and Pontryagin’s principle, the two

well-known theories for the indirect approaches, are reviewed.


1.3.1 Calculus of Variations

In this section, the calculus of variations is applied to the first-order dynamic

optimization problem without any auxiliary constraints. The cost functional J  is

\[
J = \phi(x(t_f), t_f) + \int_{t_0}^{t_f} L(x(t), u(t), t)\,dt, \tag{1.3}
\]

subject to dynamic constraints of the form

x(1)(t) = f (x(t), u(t), t), (1.4)

with free end states and end times. The necessary conditions can be derived in

several steps. It is assumed here for simplicity that both state and control variables

are unbounded, noting that the performance index J  is dependent on x(t) and u(t)

in the interval [t0, tf ]. In variational calculus, the constraints of the problem are

handled by introducing additional variables called Lagrange multipliers λ to define

an augmented cost functional:

\[
\bar{J}[x, u] = \phi(x(t_f), t_f) + \int_{t_0}^{t_f} \bar{L}(x(t), x^{(1)}(t), u(t), \lambda(t), t)\,dt,
\quad \text{where } \bar{L} = L(x(t), u(t), t) + \lambda^{T}(t)\bigl(f(x(t), u(t), t) - x^{(1)}\bigr). \tag{1.5}
\]

Let φ and $\bar{L}(x, x^{(1)}, u, \lambda(t), t)$ be functions with continuous first and second partial

derivatives with respect to their arguments. The space of functions in which the

extremum is sought is D1(t0, tf ) [23]. It consists of all functions x(t) which are

continuous and have continuous first derivatives on an interval [t0, tf ]. The norm

for this function class is

\[
\|x\|_1 = \sum_{i=0}^{1} \max_{t_0 \le t \le t_f} |x^{(i)}(t)|, \tag{1.6}
\]

where x(0)(t) denotes the function x(t) itself. Therefore, two functions in D1 are

considered to be close if the functions and their first derivatives are close together.

For any ε > 0,
\[
\|x_1 - x_2\|_1 < \varepsilon \tag{1.7}
\]


implies that

\[
|x_1(t) - x_2(t)| < \varepsilon \quad \text{and} \quad |x_1^{(1)}(t) - x_2^{(1)}(t)| < \varepsilon \tag{1.8}
\]

for all t0 ≤ t ≤ tf. This indicates “closeness” for elements in this function space. To apply the necessary condition for an extremum to the problem formulated above,

the variation of the functional (1.5) must be calculated using the concept of the

variation (or differential) of a functional, as described in Appendix A. Before de-

riving the necessary conditions for the above problem, the variation statement must

be considered.

Variation Statement

The objective of this section is to determine the necessary conditions for

extrema of the functional J [x] of the form

\[
J[x] = \phi(x(t_f), t_f) + \int_{t_0}^{t_f} F(t, x(t), x^{(1)}(t))\,dt \tag{1.9}
\]

with free end states and end times. Let φ(x(tf ), tf ) and F (t, x(t), x(1)(t)) be func-

tions with continuous first and second partial derivatives with respect to their argu-

ments. The space of functions in which the extremum is sought is D1(t0, tf ). First,

we give x(t) an increment hx(t), as shown in Figure 1.1. Then, the corresponding

increment of the functional (1.9) is

\begin{align*}
\Delta J &= J[x + h_x] - J[x] \\
&= \phi\bigl(x(t_f) + dx|_{t_f},\ t_f + \delta t_f\bigr) - \phi(x(t_f), t_f) \\
&\quad + \int_{t_0+\delta t_0}^{t_f+\delta t_f} F(t, x + h_x, x^{(1)} + h_x^{(1)})\,dt - \int_{t_0}^{t_f} F(t, x, x^{(1)})\,dt \\
&= \phi\bigl(x(t_f) + dx|_{t_f},\ t_f + \delta t_f\bigr) - \phi(x(t_f), t_f) \\
&\quad + \int_{t_0}^{t_f} \bigl[F(t, x + h_x, x^{(1)} + h_x^{(1)}) - F(t, x, x^{(1)})\bigr]\,dt \\
&\quad + \int_{t_f}^{t_f+\delta t_f} F(t, x + h_x, x^{(1)} + h_x^{(1)})\,dt \\
&\quad - \int_{t_0}^{t_0+\delta t_0} F(t, x + h_x, x^{(1)} + h_x^{(1)})\,dt. \tag{1.10}
\end{align*}


Figure 1.1: The allowable increment hx(t) in x(t) for the case with variable endpoint and end time.

Using Taylor’s theorem to expand the integrand and neglecting higher order terms,

we obtain the variation of  J [x] as

\begin{align*}
\delta J &= \left[\frac{\partial \phi}{\partial x}\,dx + \frac{\partial \phi}{\partial t}\,\delta t\right]_{t_f}
+ \int_{t_0}^{t_f} \left(\frac{\partial F}{\partial x}\,h_x + \frac{\partial F}{\partial x^{(1)}}\,h_x^{(1)}\right) dt \\
&\quad + F(t, x + h_x, x^{(1)} + h_x^{(1)})\big|_{t_f}\,\delta t_f
- F(t, x + h_x, x^{(1)} + h_x^{(1)})\big|_{t_0}\,\delta t_0. \tag{1.11}
\end{align*}

According to Theorem 1 of Appendix A, a necessary condition for J [x] to have an

extremum is that

\begin{align*}
&\left[\frac{\partial \phi}{\partial x}\,dx + \frac{\partial \phi}{\partial t}\,\delta t\right]_{t_f}
+ \int_{t_0}^{t_f} \left(\frac{\partial F}{\partial x}\,h_x + \frac{\partial F}{\partial x^{(1)}}\,h_x^{(1)}\right) dt \\
&\quad + F(t, x + h_x, x^{(1)} + h_x^{(1)})\big|_{t_f}\,\delta t_f
- F(t, x + h_x, x^{(1)} + h_x^{(1)})\big|_{t_0}\,\delta t_0 = 0. \tag{1.12}
\end{align*}

Integrating by parts the $\frac{\partial F}{\partial x^{(1)}}\,h_x^{(1)}$ term within the integral in Eq.(1.12), we get

\begin{align*}
&\left[\frac{\partial \phi}{\partial x}\,dx + \frac{\partial \phi}{\partial t}\,\delta t\right]_{t_f}
+ \int_{t_0}^{t_f} \left[\frac{\partial F}{\partial x} - \left(\frac{\partial F}{\partial x^{(1)}}\right)^{(1)}\right] h_x\,dt
+ \left[\frac{\partial F}{\partial x^{(1)}}\,h_x\right]_{t_0}^{t_f} \\
&\quad + F(t, x + h_x, x^{(1)} + h_x^{(1)})\big|_{t_f}\,\delta t_f
- F(t, x + h_x, x^{(1)} + h_x^{(1)})\big|_{t_0}\,\delta t_0 = 0. \tag{1.13}
\end{align*}

Define $\frac{\partial F}{\partial x} = \left(\frac{\partial F}{\partial x_1}\ \frac{\partial F}{\partial x_2}\ \ldots\ \frac{\partial F}{\partial x_n}\right)$, $\frac{\partial \phi}{\partial x} = \left(\frac{\partial \phi}{\partial x_1}\ \frac{\partial \phi}{\partial x_2}\ \ldots\ \frac{\partial \phi}{\partial x_n}\right)$, and $h_x = (h_{x_1}\ h_{x_2}\ \ldots\ h_{x_n})^{T}$.

Referring to Figure 1.1, we have

hx(t0) = dx|t0 − x(1)|t0δt0,

hx(tf ) = dx|tf  − x(1)|tf δtf . (1.14)

Eq.(1.13) can be rewritten as
\begin{align*}
&\int_{t_0}^{t_f} \left[\frac{\partial F}{\partial x} - \left(\frac{\partial F}{\partial x^{(1)}}\right)^{(1)}\right] h_x\,dt
+ \left[\left(\frac{\partial F}{\partial x^{(1)}} + \frac{\partial \phi}{\partial x}\right) dx\right]_{t_0}^{t_f} \\
&\quad + \left[\left(F - \frac{\partial F}{\partial x^{(1)}}\,x^{(1)}\right)\delta t\right]_{t_0}^{t_f}
+ \left.\frac{\partial \phi}{\partial t}\,\delta t\right|_{t_f} = 0 \tag{1.15}
\end{align*}

where hx and δt are the variations of x and t respectively. Since these variations are

arbitrary in t0 ≤ t ≤ tf  and at the end points, δJ  vanishes if all the coefficients mul-

tiplying these variations are zero. This also follows from Lemma 1 of Appendix A.

Thus, the first order necessary conditions are

\[
\frac{\partial F}{\partial x} - \left(\frac{\partial F}{\partial x^{(1)}}\right)^{(1)} = 0, \tag{1.16}
\]
\[
\left[\frac{\partial F}{\partial x^{(1)}} + \frac{\partial \phi}{\partial x}\right]_{t_0}^{t_f} = 0, \tag{1.17}
\]
\[
\left[F - \frac{\partial F}{\partial x^{(1)}}\,x^{(1)} + \frac{\partial \phi}{\partial t}\right]_{t_0}^{t_f} = 0. \tag{1.18}
\]

On applying these results to the dynamic optimization problem of Eq.(1.5),

the necessary conditions can be shown to be

\[
\frac{\partial \bar{L}}{\partial x} - \left(\frac{\partial \bar{L}}{\partial x^{(1)}}\right)^{(1)} = 0, \tag{1.19}
\]
\[
\frac{\partial \bar{L}}{\partial u} = 0, \tag{1.20}
\]
\[
\frac{\partial \bar{L}}{\partial \lambda} = 0 = f(x, u, t) - x^{(1)}, \tag{1.21}
\]
\[
\left[\frac{\partial \bar{L}}{\partial x^{(1)}} + \frac{\partial \phi}{\partial x}\right]_{t_0}^{t_f} = 0, \tag{1.22}
\]
\[
\left[\bar{L} - \frac{\partial \bar{L}}{\partial x^{(1)}}\,x^{(1)} + \frac{\partial \phi}{\partial t}\right]_{t_0}^{t_f} = 0. \tag{1.23}
\]

For simplicity and ease of comparison of the results to be obtained from Pontryagin’s

principle, we define a scalar quantity called the Hamiltonian H  as follows:

H (x(t), u(t), λ(t), t) = L(x(t), u(t), t) + λT (t)f (x(t), u(t), t). (1.24)

Now, Eq.(1.5) can be rewritten as

\[
\bar{J} = \phi(x(t_f), t_f) + \int_{t_0}^{t_f} \bigl[H(x(t), u(t), \lambda(t), t) - \lambda^{T}(t)\,x^{(1)}(t)\bigr]\,dt. \tag{1.25}
\]

From the principles of variational calculus, the necessary conditions for optimality

become

\[
x^{(1)}(t) = \frac{\partial H}{\partial \lambda} = f(x(t), u(t), t), \tag{1.26}
\]
\[
\lambda^{(1)}(t) = -\frac{\partial H}{\partial x}, \tag{1.27}
\]
\[
0 = \frac{\partial H}{\partial u}, \tag{1.28}
\]

along with the appropriate final conditions since the boundary conditions at tf  are

not specified:

\[
\left[\left(\frac{\partial \phi}{\partial x}\right)^{T} - \lambda\right]_{t_f} = 0, \qquad
\left[H + \frac{\partial \phi}{\partial t}\right]_{t_f} = 0. \tag{1.29}
\]

Equations (1.26) and (1.27) are called state and co-state equations respectively.

Eq.(1.28) is called the control optimality condition which also has embedded in it

the switching structure in the presence of inequality constraints.

Hence, Eqs.(1.26) and (1.27) are 2n first-order differential equations, Eq.(1.28)
is a set of m algebraic equations, and Eqs.(1.29) constitute a part of the boundary
conditions. We describe the sufficient conditions for a minimum after briefly
describing Pontryagin's principle.
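These conditions reduce the problem to a two-point boundary-value problem. As a hedged sketch on toy data of my own choosing (not from the thesis) — ẋ = u, L = ½u², φ = ½(x(t_f) − 1)², t_f = 1 — Eq.(1.27) makes λ constant, Eq.(1.28) gives u = −λ, and Eq.(1.29) requires λ(t_f) = x(t_f) − 1; a bisection on λ solves the resulting shooting problem:

```python
def residual(lam):
    # Costate equation (1.27): lam' = -dH/dx = 0, so lam is constant.
    # Stationarity (1.28): u = -lam.  State equation (1.26): x' = u.
    n = 1000
    dt = 1.0 / n
    x = 0.0
    for _ in range(n):
        x += (-lam) * dt
    # Transversality (1.29): lam(tf) must equal dphi/dx = x(tf) - 1.
    return lam - (x - 1.0)

# Bisection on the shooting residual (residual is increasing in lam).
lo, hi = -10.0, 10.0
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if residual(lo) * residual(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
lam_star = 0.5 * (lo + hi)
u_star = -lam_star
```

The root is λ* = −1/2, i.e., the optimal control is the constant u* = 1/2.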

1.3.2 Pontryagin’s Principle

The problem statement in Pontryagin’s theory is still the same as that de-

scribed in Eqs.(1.3) and (1.4), with the additional condition that u(t) ∈ U  for

t0 ≤ t ≤ tf . Here,  U  is a set of valid admissible control inputs. The first step in this

approach is to obtain the Hamilton-Jacobi-Bellman equation by using the principle

of optimality.

The Principle of Optimality: If a control input u(t) is optimal for the
initial state x(t) in the interval [t, tf], the same control in the interval [t + Δt, tf] is
also optimal for the initial state x(t + Δt), where Δt ≥ 0 and x(t + Δt) is a point
on the optimal state trajectory.

For a system described by Eq. (1.4), consider the performance measure

\[
J\bigl(x(t), t, u(\tau)_{t \le \tau \le t_f}\bigr) = \phi(x(t_f), t_f) + \int_{t}^{t_f} L(x(\tau), u(\tau), \tau)\,d\tau, \tag{1.30}
\]

where x(t) is an admissible state and t ≤ tf . We note that the performance index is

dependent upon x(t) and t, and on the optimal control history in the interval [t, tf ].


For all admissible x(t) and t ≤ tf , using the controls that minimize Eq.(1.30),

let the minimum cost function be defined as

\[
J^{*}(x(t), t) = \min_{\substack{u(\tau)\\ t \le \tau \le t_f}} \left\{ \phi(x(t_f), t_f) + \int_{t}^{t_f} L(x(\tau), u(\tau), \tau)\,d\tau \right\}. \tag{1.31}
\]

On subdividing the interval and using the additive property of the integral, we have

\[
J^{*}(x(t), t) = \min_{\substack{u(\tau)\\ t \le \tau \le t_f}} \left\{ \phi(x(t_f), t_f) + \int_{t}^{t+\Delta t} L(x(\tau), u(\tau), \tau)\,d\tau + \int_{t+\Delta t}^{t_f} L(x(\tau), u(\tau), \tau)\,d\tau \right\}. \tag{1.32}
\]

The principle of optimality implies that

\[
J^{*}(x(t), t) = \min_{\substack{u(\tau)\\ t \le \tau \le t+\Delta t}} \left\{ \int_{t}^{t+\Delta t} L(x(\tau), u(\tau), \tau)\,d\tau + J^{*}(x(t+\Delta t), t+\Delta t) \right\}, \tag{1.33}
\]

where J*(x(t + Δt), t + Δt) is the minimum cost from the initial state x(t + Δt)
over the time interval [t + Δt, tf]. On expanding (1.33) in a Taylor series about
(x(t), t), we obtain

\begin{align*}
J^{*}(x(t), t) = \min_{\substack{u(\tau)\\ t \le \tau \le t+\Delta t}} \biggl\{ &\int_{t}^{t+\Delta t} L(x(\tau), u(\tau), \tau)\,d\tau + J^{*}(x(t), t) + \frac{\partial J^{*}}{\partial t}(x(t), t)\,\Delta t \\
&+ \frac{\partial J^{*}}{\partial x}(x(t), t)\,x^{(1)}\,\Delta t + \text{higher order terms} \biggr\}. \tag{1.34}
\end{align*}

For small Δt,

\begin{align*}
J^{*}(x(t), t) = \min_{u(t)} \biggl\{ &L(x(t), u(t), t)\,\Delta t + J^{*}(x(t), t) + \frac{\partial J^{*}}{\partial t}(x(t), t)\,\Delta t \\
&+ \frac{\partial J^{*}}{\partial x}(x(t), t)\,f(x(t), u(t), t)\,\Delta t + O(\Delta t) \biggr\}. \tag{1.35}
\end{align*}

On neglecting higher-order terms of Δt and separating terms dependent on u(t), we

can rewrite the above equation as

\[
\frac{\partial J^{*}}{\partial t}(x(t), t) + \min_{u(t)} \left\{ L(x(t), u(t), t) + \frac{\partial J^{*}}{\partial x}(x(t), t)\,f(x(t), u(t), t) \right\} = 0. \tag{1.36}
\]


Defining the Hamiltonian H  as

\[
H\Bigl(x(t), u(t), \frac{\partial J^{*}}{\partial x}, t\Bigr) = L(x(t), u(t), t) + \frac{\partial J^{*}}{\partial x}(x(t), t)\,f(x(t), u(t), t), \tag{1.37}
\]

and

\[
H\Bigl(x(t), u^{*}\bigl(x(t), \frac{\partial J^{*}}{\partial x}, t\bigr), \frac{\partial J^{*}}{\partial x}, t\Bigr)
= \min_{u(t)} H\Bigl(x(t), u\bigl(x(t), \frac{\partial J^{*}}{\partial x}, t\bigr), \frac{\partial J^{*}}{\partial x}, t\Bigr), \tag{1.38}
\]

the Hamilton-Jacobi-Bellman equation is obtained as

\[
\frac{\partial J^{*}}{\partial t}(x(t), t) + H\Bigl(x(t), u^{*}\bigl(x(t), \frac{\partial J^{*}}{\partial x}, t\bigr), \frac{\partial J^{*}}{\partial x}, t\Bigr) = 0, \tag{1.39}
\]

where J* satisfies the boundary condition

J ∗(x(tf ), tf ) = φ(x(tf ), tf ). (1.40)
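To make Eqs.(1.39)–(1.40) concrete, consider a scalar linear-quadratic example (data assumed for illustration, not from the thesis): ẋ = u, L = ½(x² + u²), φ = 0, t_f = 1. The candidate value function J*(x, t) = ½S(t)x² with S(t) = tanh(t_f − t) satisfies the boundary condition (1.40) and makes the HJB residual vanish, which can be checked numerically:

```python
import math

tf = 1.0

def S(t):
    # S(t) = tanh(tf - t) solves the scalar Riccati ODE S' = S^2 - 1
    # with S(tf) = 0, matching the boundary condition (1.40) for phi = 0.
    return math.tanh(tf - t)

def hjb_residual(x, t):
    # Candidate value function J*(x, t) = S(t) x^2 / 2 for
    # x' = u, L = (x^2 + u^2)/2; the minimizing control is u* = -S(t) x.
    St = S(t)
    dJdt = 0.5 * (St * St - 1.0) * x * x         # dJ*/dt via the Riccati ODE
    u = -St * x
    Hmin = 0.5 * (x * x + u * u) + (St * x) * u  # L + (dJ*/dx) f
    return dJdt + Hmin                           # Eq.(1.39) should give 0

max_res = max(abs(hjb_residual(x, t))
              for x in (-2.0, 0.5, 3.0) for t in (0.0, 0.4, 0.9))
```

The residual is zero up to round-off at every sampled (x, t), and S(t_f) = 0 enforces J*(x, t_f) = φ = 0.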

Without going into the derivation, we state that Pontryagin’s principle can be de-

rived from Eq.(1.39) [29]. One redefines the scalar Hamiltonian H  as

H (x(t), u(t), λ(t), t) = L(x(t), u(t), t) + λ(t)T f (x(t), u(t), t), (1.41)

where λ(t) ∈ Rn and is equal to $\left(\frac{\partial J^{*}}{\partial x}\right)^{T}$ in the above argument. These λ(t) are also
called Lagrange multipliers or co-state variables.

If  u∗(t) is the optimal control trajectory for the problem, and x∗(t) is the

corresponding optimal state trajectory, then there exists a nonzero vector λ(t) such

that

\[
x^{(1)*}(t) = \frac{\partial H}{\partial \lambda} = f(x^{*}, u^{*}, t), \tag{1.42}
\]
\[
\lambda^{(1)*}(t) = -\frac{\partial H}{\partial x}(x^{*}, u^{*}, \lambda^{*}, t), \tag{1.43}
\]
\[
H(x^{*}, u^{*}, \lambda^{*}, t) \le H(x^{*}, u, \lambda^{*}, t), \tag{1.44}
\]
\[
\left[\frac{\partial \phi}{\partial x} - \lambda\right]_{t_f} = 0, \tag{1.45}
\]
\[
\left[H + \frac{\partial \phi}{\partial t}\right]_{t_f} = 0, \tag{1.46}
\]

for all admissible controls and for all t ∈ [t0, tf]. Eq.(1.44) can also be written as
\[
H(x^{*}, u^{*}, \lambda^{*}, t) = \min_{u(t) \in U} H(x^{*}, u, \lambda^{*}, t). \tag{1.47}
\]

If the problem has unbounded control inputs u(t), following the derivation ([29],

[30], and [35]), Eq.(1.47) can be written as

\[
\frac{\partial H}{\partial u} = 0, \tag{1.48}
\]

which is the same as Eq.(1.28). Therefore, the necessary conditions derived from

the calculus of variations, i.e., Eqs.(1.26), (1.27), (1.28), and (1.29), are identical to

Eqs.(1.42), (1.43), (1.45), (1.46), and (1.48) derived from Pontryagin’s principle.
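A quick numerical check of the minimum condition (1.44)/(1.47) on a toy Hamiltonian (illustration only, with L = ½u², f = u, and an assumed costate value λ = −1/2):

```python
lam = -0.5   # costate value assumed for this toy instance

def H(u):
    # Hamiltonian (1.41) with L = u^2/2 and f = u.
    return 0.5 * u * u + lam * u

u_star = -lam   # from (1.48): dH/du = u + lam = 0
candidates = [i / 100.0 for i in range(-300, 301)]
violations = [u for u in candidates if H(u) < H(u_star) - 1e-12]
```

No sampled control beats u* = −λ, consistent with (1.48) for unbounded controls.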

1.3.3 First Integrals

The differential equation (1.19) admits a number of first integrals depending on the structure of the integrand.

• In Eq.(1.19), if $\left(\frac{\partial \bar{L}}{\partial x}\right)^{T} = 0$, we have
\[
\lambda^{(1)}(t) = 0. \tag{1.49}
\]
Hence, the optimal solution admits n first integrals λ(t) = K. With similar
reasoning, if the integrand does not explicitly contain an element of x, say x_i,
there is correspondingly a single first integral for each such x_i.

• If the integrand of (1.19) does not explicitly contain t,
\[
\bar{L} - \frac{\partial \bar{L}}{\partial x^{(1)}}\,x^{(1)} = K, \quad \text{which implies that } H = K. \tag{1.50}
\]


This property can be verified by differentiating the left-hand and right-hand
sides with respect to time. For open-ended time problems, where φ does not
explicitly depend on t, i.e., ∂φ/∂t = 0, the constant K = 0.
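The second first integral can be verified numerically on a small example of my own (not from the thesis): for ẋ = u with L = ½(x² + u²), eliminating u = −λ through ∂H/∂u = 0 gives the autonomous Hamiltonian system ẋ = −λ, λ̇ = −x, along which H = ½x² − ½λ² should be constant:

```python
def rhs(x, lam):
    # Hamiltonian system for L = (x^2 + u^2)/2, x' = u, with u = -lam
    # from dH/du = 0:  x' = -lam, lam' = -x.
    return -lam, -x

def rk4_step(x, lam, dt):
    # One classical fourth-order Runge-Kutta step.
    k1 = rhs(x, lam)
    k2 = rhs(x + 0.5 * dt * k1[0], lam + 0.5 * dt * k1[1])
    k3 = rhs(x + 0.5 * dt * k2[0], lam + 0.5 * dt * k2[1])
    k4 = rhs(x + dt * k3[0], lam + dt * k3[1])
    x += dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0
    lam += dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0
    return x, lam

def H(x, lam):
    u = -lam
    return 0.5 * (x * x + u * u) + lam * u   # reduces to (x^2 - lam^2)/2

x, lam = 1.0, 0.3
H0 = H(x, lam)
for _ in range(1000):
    x, lam = rk4_step(x, lam, 0.001)
drift = abs(H(x, lam) - H0)
```

The measured drift of H along the integrated trajectory is at round-off level, confirming H = K for this autonomous integrand.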

1.3.4 Linear Feedback: Terminal Controllers

It is quite well known that solving the partial differential equation (1.39) is

very difficult, especially for a nonlinear system. Therefore, obtaining the explicit

“exact” feedback control law for nonlinear systems is rarely possible. However,

in many problems, computing the neighboring optimal feedback law involving the

extremal paths becomes feasible, as described in Section 1.3.6.

Feedback control laws can be found for linear dynamic systems, where a

quadratic performance index J  is to be minimized with free end states at a given

final time tf . The statement of the problem is to determine u(t) that takes the

system from an initial state x0 at time t0 to an unspecified point at time tf . The

path during t0 and tf  must minimize the quadratic functional

J  =1

2(xT S f x)|tf  +

1

2

 tf t0

(xT Qx + uT Ru)dt, (1.51)

where Q is an (n × n) symmetric positive semi-definite matrix and R is an (m × m)

symmetric and positive definite matrix, subject to the linear state equations:

x(1) = Ax + Bu (1.52)

where A is an (n × n) matrix, x is an (n × 1) state vector, B is an (n × m)

matrix, and u is an (m × 1) control vector. The solution of the above problem can

be obtained using Eq.(1.52) along with the Euler-Lagrange equation,

\[
\lambda^{(1)} = -\frac{\partial H}{\partial x}; \qquad \lambda(t_f) = S_f\,x(t_f); \tag{1.53}
\]
and
\[
\frac{\partial H}{\partial u} = 0, \tag{1.54}
\]


where

\[
H = \frac{1}{2}\,x^{T} Q\,x + \frac{1}{2}\,u^{T} R\,u + \lambda^{T}(Ax + Bu). \tag{1.55}
\]

Solving Eqs.(1.53) and (1.54), we get a linear two-point boundary-value problem:
\[
\begin{pmatrix} x^{(1)} \\ \lambda^{(1)} \end{pmatrix}
=
\begin{pmatrix} A & -BR^{-1}B^{T} \\ -Q & -A^{T} \end{pmatrix}
\begin{pmatrix} x \\ \lambda \end{pmatrix} \tag{1.56}
\]

where

x(t0) = x0 (1.57)

and

λ(tf ) = S f x(tf ). (1.58)

From Appendix (D), the solution for λ(t) in this problem may be written as

λ(t) = S (t)x(t). (1.59)

This optimal form of the solution has been shown in [18] by using a transition

matrix. By substituting Eq.(1.59) back into Eq.(1.56) and eliminating x(1) and λ,

Eq.(1.56) can be written as

\[
\bigl(S^{(1)} + SA + A^{T}S - SBR^{-1}B^{T}S + Q\bigr)\,x = 0. \tag{1.60}
\]

Since x(t) is arbitrary, the following identity is true:

\[
S^{(1)} + SA + A^{T}S - SBR^{-1}B^{T}S + Q = 0. \tag{1.61}
\]

Eq.(1.61) is called a matrix Riccati equation and must be solved along with the

boundary condition at time tf :

S (tf ) = S f . (1.62)

Since S (t) can be obtained by integrating backward in time and u = −R−1BT λ, the

continuous feedback law for this terminal control problem can also be written as

u(t) = −R−1BT S (t)x(t). (1.63)
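For a scalar example with assumed data A = 0, B = Q = R = 1, S_f = 0, t_f = 1 (an illustration, not the thesis's benchmark code), Eq.(1.61) reduces to S^(1) = S² − 1, whose backward solution is S(t) = tanh(t_f − t). A minimal backward-integration sketch of Eqs.(1.61)–(1.63):

```python
import math

def riccati_gain(A=0.0, B=1.0, Q=1.0, R=1.0, Sf=0.0, tf=1.0, n=10000):
    """Backward-integrate the scalar Riccati equation (1.61),
    S' = -SA - AS + S B R^{-1} B S - Q, from S(tf) = Sf; return S(t0)."""
    dt = tf / n
    S = Sf
    for _ in range(n):
        dS = -S * A - A * S + S * B * (1.0 / R) * B * S - Q
        S -= dS * dt            # explicit Euler step backward in time
    return S

S0 = riccati_gain()
gain = -(1.0 / 1.0) * 1.0 * S0  # feedback (1.63): u(t0) = -R^{-1} B^T S(t0) x(t0)
```

The computed S(t0) matches tanh(1) ≈ 0.7616 to the integration tolerance, and the time-varying feedback gain then follows from (1.63).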


1.3.5 Neighboring Extremal Paths (Final Time Specified)

In this section, we suppose that the extremal trajectories of the state and

control variables x(t) and u(t) are found and satisfy all first-order necessary condi-

tions of section 1.3.1 for the fixed final time problems. Now, small perturbations

in the initial states δx(t0) produce small perturbations from this extremal path i.e.,

δx(t), δλ(t), and δu(t). These conditions are governed by linearizing Eqs. (1.26)

through (1.29) as follows:

\[
\delta x^{(1)}(t) = \frac{\partial f}{\partial x}\,\delta x + \frac{\partial f}{\partial u}\,\delta u, \tag{1.64}
\]
\[
\delta \lambda^{(1)}(t) = -\frac{\partial^{2} H}{\partial x^{2}}\,\delta x - \left(\frac{\partial f}{\partial x}\right)^{T} \delta \lambda - \frac{\partial^{2} H}{\partial u\,\partial x}\,\delta u, \tag{1.65}
\]
\[
0 = \frac{\partial^{2} H}{\partial x\,\partial u}\,\delta x + \left(\frac{\partial f}{\partial u}\right)^{T} \delta \lambda + \frac{\partial^{2} H}{\partial u^{2}}\,\delta u, \tag{1.66}
\]
\[
\delta \lambda(t_f) = \left.\frac{\partial^{2} \phi}{\partial x^{2}}\,\delta x\right|_{t=t_f}, \tag{1.67}
\]

where

\[
\frac{\partial f}{\partial x} =
\begin{pmatrix}
\frac{\partial f_1}{\partial x_1} & \frac{\partial f_1}{\partial x_2} & \cdots & \frac{\partial f_1}{\partial x_n} \\
\frac{\partial f_2}{\partial x_1} & \frac{\partial f_2}{\partial x_2} & \cdots & \frac{\partial f_2}{\partial x_n} \\
\vdots & \vdots & \ddots & \vdots \\
\frac{\partial f_n}{\partial x_1} & \frac{\partial f_n}{\partial x_2} & \cdots & \frac{\partial f_n}{\partial x_n}
\end{pmatrix}_{(n \times n)}, \qquad
\frac{\partial f}{\partial u} =
\begin{pmatrix}
\frac{\partial f_1}{\partial u_1} & \frac{\partial f_1}{\partial u_2} & \cdots & \frac{\partial f_1}{\partial u_m} \\
\vdots & \vdots & \ddots & \vdots \\
\frac{\partial f_n}{\partial u_1} & \frac{\partial f_n}{\partial u_2} & \cdots & \frac{\partial f_n}{\partial u_m}
\end{pmatrix}_{(n \times m)},
\]
\[
\frac{\partial^{2} H}{\partial x^{2}} =
\begin{pmatrix}
\frac{\partial^{2} H}{\partial x_1^{2}} & \cdots & \frac{\partial^{2} H}{\partial x_1 \partial x_n} \\
\vdots & \ddots & \vdots \\
\frac{\partial^{2} H}{\partial x_n \partial x_1} & \cdots & \frac{\partial^{2} H}{\partial x_n^{2}}
\end{pmatrix}_{(n \times n)}, \qquad
\frac{\partial^{2} H}{\partial u^{2}} =
\begin{pmatrix}
\frac{\partial^{2} H}{\partial u_1^{2}} & \cdots & \frac{\partial^{2} H}{\partial u_1 \partial u_m} \\
\vdots & \ddots & \vdots \\
\frac{\partial^{2} H}{\partial u_m \partial u_1} & \cdots & \frac{\partial^{2} H}{\partial u_m^{2}}
\end{pmatrix}_{(m \times m)},
\]
\[
\frac{\partial^{2} H}{\partial u\,\partial x} =
\begin{pmatrix}
\frac{\partial^{2} H}{\partial u_1 \partial x_1} & \cdots & \frac{\partial^{2} H}{\partial u_m \partial x_1} \\
\vdots & \ddots & \vdots \\
\frac{\partial^{2} H}{\partial u_1 \partial x_n} & \cdots & \frac{\partial^{2} H}{\partial u_m \partial x_n}
\end{pmatrix}_{(n \times m)}, \quad \text{and} \quad
\frac{\partial^{2} H}{\partial x\,\partial u} =
\begin{pmatrix}
\frac{\partial^{2} H}{\partial x_1 \partial u_1} & \cdots & \frac{\partial^{2} H}{\partial x_n \partial u_1} \\
\vdots & \ddots & \vdots \\
\frac{\partial^{2} H}{\partial x_1 \partial u_m} & \cdots & \frac{\partial^{2} H}{\partial x_n \partial u_m}
\end{pmatrix}_{(m \times n)}.
\]

These conditions can be derived alternatively by considering the expansion

of the performance index and dynamic constraints to the second order with the

assumptions that all the first-order necessary conditions vanish about an extremal

path. This can be done by expanding the augmented performance index to the

second-order and the dynamic constraints to the first-order by linearizing the original

dynamic constraints Eq.(1.26):

\[
\delta^{2} J = \frac{1}{2}\,\delta x^{T} \frac{\partial^{2} \phi}{\partial x^{2}}\,\delta x\Big|_{t_f}
+ \frac{1}{2} \int_{t_0}^{t_f} [\delta x^{T}\ \ \delta u^{T}]\,\bar{H}
\begin{pmatrix} \delta x \\ \delta u \end{pmatrix} dt \tag{1.68}
\]
where the Hessian matrix $\bar{H}$ has the form
\[
\bar{H} =
\begin{pmatrix}
\frac{\partial^{2} H}{\partial x^{2}} & \frac{\partial^{2} H}{\partial x\,\partial u} \\[4pt]
\frac{\partial^{2} H}{\partial u\,\partial x} & \frac{\partial^{2} H}{\partial u^{2}}
\end{pmatrix} \tag{1.69}
\]

subject to

\[
\delta x^{(1)}(t) = \frac{\partial f}{\partial x}\,\delta x + \frac{\partial f}{\partial u}\,\delta u, \tag{1.70}
\]
and a given
\[
\delta x(t_0). \tag{1.71}
\]

In order to derive Eqs.(1.64) through (1.67), δu(t) must be determined such that

δ2J  is minimized subject to Eqs.(1.70) and (1.71) since a neighboring extremal


path is the problem of interest. This problem is of the linear-quadratic type and

the associated two-point boundary-value problem can be found using the necessary

conditions for optimality Eqs.(1.26)-(1.29) with the Hamiltonian H  defined as

\[
H = \frac{1}{2}\,[\delta x^{T}\ \ \delta u^{T}]\,\bar{H}
\begin{pmatrix} \delta x \\ \delta u \end{pmatrix}
+ \delta \lambda^{T} \left( \frac{\partial f}{\partial x}\,\delta x + \frac{\partial f}{\partial u}\,\delta u \right).
\]

As a result, we obtain Eqs.(1.64)-(1.67).

Assuming that $\frac{\partial^{2} H}{\partial u^{2}}$ is invertible (nonsingular) for t0 ≤ t ≤ tf, Eq.(1.66) can
be solved for δu in terms of δx and δλ as:
\[
\delta u(t) = -\left(\frac{\partial^{2} H}{\partial u^{2}}\right)^{-1} \left[ \frac{\partial^{2} H}{\partial x\,\partial u}\,\delta x + \left(\frac{\partial f}{\partial u}\right)^{T} \delta \lambda \right]. \tag{1.72}
\]

By substituting Eq.(1.72) into Eqs.(1.64) and (1.65), the following two-point boundary-

value problem with δx(t0) and δλ(tf ) from Eq.(1.67) results

δx(1) = A(t)δx − B(t)δλ, (1.73)

δλ(1) = −C (t)δx − AT (t)δλ, (1.74)

where

\begin{align*}
A(t) &= \frac{\partial f}{\partial x} - \frac{\partial f}{\partial u} \left(\frac{\partial^{2} H}{\partial u^{2}}\right)^{-1} \frac{\partial^{2} H}{\partial x\,\partial u}, \\
B(t) &= \frac{\partial f}{\partial u} \left(\frac{\partial^{2} H}{\partial u^{2}}\right)^{-1} \left(\frac{\partial f}{\partial u}\right)^{T}, \\
C(t) &= \frac{\partial^{2} H}{\partial x^{2}} - \frac{\partial^{2} H}{\partial u\,\partial x} \left(\frac{\partial^{2} H}{\partial u^{2}}\right)^{-1} \frac{\partial^{2} H}{\partial x\,\partial u}. \tag{1.75}
\end{align*}
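In the linear-quadratic special case (scalar data assumed purely for illustration), with f = ax + bu and H = ½(qx² + ru²) + λ(ax + bu), the mixed partial ∂²H/∂x∂u vanishes and Eq.(1.75) collapses to A(t) = a, B(t) = b²/r, and C(t) = q:

```python
# Scalar linear-quadratic data (assumed for illustration):
# f = a*x + b*u, H = (q*x^2 + r*u^2)/2 + lam*(a*x + b*u), so
# df/dx = a, df/du = b, d2H/du2 = r, d2H/dx du = 0, d2H/dx2 = q.
a, b, q, r = -1.0, 2.0, 3.0, 0.5

A_bar = a - b * (1.0 / r) * 0.0    # (1.75): A(t) = f_x - f_u H_uu^{-1} H_xu
B_bar = b * (1.0 / r) * b          # (1.75): B(t) = f_u H_uu^{-1} f_u^T
C_bar = q - 0.0 * (1.0 / r) * 0.0  # (1.75): C(t) = H_xx - H_ux H_uu^{-1} H_xu
```

For this data B(t) = 2²/0.5 = 8, while A(t) and C(t) reduce to the plant and weight values themselves.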

1.3.6 Determination of Neighboring Extremal Paths

A neighboring optimum feedback law is obtained in this section by using the

sweep method for linear-quadratic problems [18], seeking the solution of Eq.(1.74)

in the form

δλ(t) = S (t)δx(t), (1.76)


Figure 1.2: Neighboring optimal feedback control.

where S(t) is a symmetric matrix to be determined with
\[
S(t_f) = \left.\frac{\partial^{2} \phi}{\partial x^{2}}\right|_{t=t_f}. \tag{1.77}
\]

Substituting Eq.(1.76) into Eqs.(1.64) and (1.65) and eliminating δx(1) and δλ [18],

the time varying Riccati equation must hold for t0 ≤ t ≤ tf . This is

\[
S^{(1)} = -SA - A^{T}S + SBS - C. \tag{1.78}
\]

The solution S(t) can be obtained by backward integration of Eq.(1.78) with
the boundary condition S(tf) = (∂²φ/∂x²)|t=tf. The value of S(t) is stored and is
used to compute the continuous linear feedback law from Eq.(1.72):

\[
\delta u(t) = -\left(\frac{\partial^{2} H}{\partial u^{2}}\right)^{-1} \left[ \frac{\partial^{2} H}{\partial x\,\partial u} + \left(\frac{\partial f}{\partial u}\right)^{T} S(t) \right] \delta x. \tag{1.79}
\]


This continuous linear feedback law is the neighboring optimum feedback law [18].

Figure 1.2 provides a block diagram of the algorithm of the neighboring optimum

feedback control based on Eq.(1.79).

1.3.7 Sufficient Conditions

Section 1.3.1 described the necessary conditions for an extremum. Further

conditions must be used to check if the solution is a maximum or minimum. These

conditions are called the sufficient conditions.

Consider the problem of finding the control input u(t) ∈ Rm and the state

variable x(t) ∈ Rn that minimizes the cost functional

\[
J[x, u] = \Phi(t, x)\big|_{t_f} + \int_{t_0}^{t_f} L(t, x, u)\,dt \tag{1.80}
\]

subject to

x(1) = f (t,x,u); x(t0) = x0

where x(t) ∈ Rn, and t0, tf  are both fixed.

Suppose that u(t), x(t) is an extremum solution to this problem. Let h_x(t) ∈ Rn
and h_u(t) ∈ Rm represent appropriate increments from the optimum trajectory.
Then the increment in the augmented cost functional, as described in Section 1.3.1,

is given by

∆J  = J [x + hx, u + hu, λ + hλ] − J [x,u,λ]. (1.81)

Expanding (1.81) in a Taylor series gives

∆J  = δJ  + δ2J  + higher order terms, (1.82)

where δJ  and δ2J  are the first and second variation respectively. The scalar Hamil-

tonian function is defined as H  = L + λT f . Since x(t), u(t) is an extremum, δJ  = 0,

and Eq.(1.82) can be written as

∆J  = δ2J  + higher order terms. (1.83)


For small increments h_x, h_u, and h_λ, the term δ²J is the most significant quantity

in ∆J . Thus, neglecting the higher order terms in Eq.(1.82) we see that a sufficient

condition for x(t), u(t) to be a minimum is that δ2J  be positive definite. Here, the

expression for δ2J  is

\[
\delta^{2} J = \frac{1}{2}\,h_x^{T}\,\frac{\partial^{2} \Phi}{\partial x^{2}}\,h_x \Big|_{t_f}
+ \frac{1}{2} \int_{t_0}^{t_f} [h_x^{T}\ \ h_u^{T}]\,\bar{H}
\begin{pmatrix} h_x \\ h_u \end{pmatrix} dt \tag{1.84}
\]

where

\[
\bar{H} =
\begin{pmatrix}
\frac{\partial^{2} H}{\partial x^{2}} & \frac{\partial^{2} H}{\partial u\,\partial x} \\[4pt]
\left(\frac{\partial^{2} H}{\partial u\,\partial x}\right)^{T} & \frac{\partial^{2} H}{\partial u^{2}}
\end{pmatrix}. \tag{1.85}
\]

From the expression of δ²J in Eq.(1.84), it is greater than zero for all increments
h_x and h_u if
\[
\frac{\partial^{2} \Phi}{\partial x^{2}} > 0 \quad \text{and} \quad \bar{H} > 0, \tag{1.86}
\]

i.e. the matrices are positive definite. Hence, a set of sufficient conditions for the

extremum x(t), u(t) to be a minimum is that Eq.(1.86) must be satisfied. The

positive definiteness of the Hessian matrix $\bar{H}$, in general, is hard to verify since it
must hold for all time t in the domain [t0, tf]. Also, h_x and h_u are not independent.

Therefore, one can find less restrictive conditions that still ensure δ2J  > 0.

The constraints on hx and hu can be derived using the dynamic equations:

\[
\frac{d}{dt}(x + h_x) - x^{(1)} = f(x + h_x, u + h_u, t) - f(x, u, t) \tag{1.87}
\]

Expanding the right-hand side of Eq.(1.87) in a Taylor series and neglecting the
higher-order terms yields
\[
h_x^{(1)} = \frac{\partial f}{\partial x}\,h_x + \frac{\partial f}{\partial u}\,h_u. \tag{1.88}
\]

Here the following property has been used, ddt

(δx) = δ( ddt

x).
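A quick numerical check of the linearized increment dynamics of Eq.(1.88) can be done in a few lines. The scalar dynamics f(x, u) = sin x + u² below is an assumed toy example, not one from the text:

```python
import math

# Assumed toy scalar dynamics: f(x, u) = sin(x) + u**2.
def f(x, u):
    return math.sin(x) + u**2

x, u = 0.7, 0.3          # nominal point on the trajectory
hx, hu = 1e-5, 1e-5      # small increments

# Exact increment of f versus its first-order (Taylor) approximation,
# (df/dx) hx + (df/du) hu, as used in Eq.(1.88):
exact = f(x + hx, u + hu) - f(x, u)
linear = math.cos(x) * hx + 2 * u * hu
print(abs(exact - linear))  # on the order of 1e-10: only second-order terms remain
```

The remainder shrinks quadratically as the increments shrink, which is why the neglected terms in the derivation are of higher order.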

Also, in order to further simplify δ²J, the following zero quantity is added to δ²J:

(1/2) ∫_{t_0}^{t_f} h_xᵀ S [ (∂f/∂x) h_x + (∂f/∂u) h_u − h_x^(1) ] dt = 0,  (1.89)


where S(t) ∈ R^{n×n} is a symmetric matrix to be determined.

The goal here is to express Eq.(1.89) in a quadratic form having the same structure as Eq.(1.84). To do this, we integrate the term associated with h_x^(1) by parts once and add the result to Eq.(1.84). As a result, we get

δ²J = (1/2) h_xᵀ S h_x |_{t_f} + (1/2) ∫_{t_0}^{t_f} zᵀ (∂²H/∂u²) z dt,  (1.90)

where

z = (∂²H/∂u²)^{−1} [ ∂²H/∂u∂x + (∂f/∂u)ᵀ S ] h_x + h_u.

In deriving Eq.(1.90), one condition must be satisfied: the time-varying Riccati equation. This equation, along with its boundary condition, has the following form:

S^(1) + ∂²H/∂x² + S (∂f/∂x) + (∂f/∂x)ᵀ S = [ (∂²H/∂u∂x)ᵀ + S (∂f/∂u) ] (∂²H/∂u²)^{−1} [ ∂²H/∂u∂x + (∂f/∂u)ᵀ S ],  (1.91)

S(t_f) = (∂²Φ/∂x²) |_{x(t_f)}.  (1.92)

If we consider trajectories for which h_x(t_0) = 0, then the development above can be summarized as follows: the conditions below are sufficient for x(t) and u(t) to be minimum solutions, i.e., to ensure δ²J > 0.

1. The Legendre Condition: ∂ 2H/∂u2 > 0.

2. The Jacobi Condition: The Riccati Eq. (1.91) has a finite solution in the

interval t0 ≤ t ≤ tf .

Note that the sufficient conditions derived above assume that t_0 and t_f are both fixed.
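The Jacobi condition can be checked numerically by integrating the Riccati equation backward from t_f. The sketch below uses an assumed scalar example, x^(1) = u with L = ½(x² + u²) and Φ = 0, for which ∂²H/∂u² = 1 (the Legendre condition holds) and Eq.(1.91) reduces to S^(1) = S² − 1 with S(t_f) = 0; the exact solution S(t) = tanh(t_f − t) stays finite, so the Jacobi condition is satisfied:

```python
import math

def riccati_backward(t0, tf, n_steps=20000):
    """Integrate S' = S**2 - 1 backward from S(tf) = 0 (scalar toy case of
    Eq.(1.91)); return S(t0), or None if S escapes (a conjugate point)."""
    dt = (tf - t0) / n_steps
    S = 0.0  # boundary condition (1.92) with Phi = 0
    for _ in range(n_steps):
        S -= dt * (S**2 - 1.0)  # one explicit Euler step backward in time
        if not math.isfinite(S) or abs(S) > 1e6:
            return None
    return S

S0 = riccati_backward(0.0, 2.0)
print(S0)  # close to tanh(2) ~ 0.964: finite on [0, 2], so the Jacobi condition holds
```

A returned None would signal a conjugate point in [t_0, t_f], i.e., failure of the Jacobi condition.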


1.4 Motivations and Objectives

In recent years, many research groups have shown that a large number of 

dynamic systems can be written in canonical form [1],[6],[7],[8],[9],[33],[43]. These

canonical forms are alternatives to state-space forms and can be represented by

higher-order differential equations. For planning and control purposes, these canon-

ical forms provide a number of advantages when compared to their corresponding

first-order forms. Therefore, the questions of optimization of dynamic systems de-

scribed by higher-order differential equations have been addressed by our group for

several years. In early 1995, the optimality theory for linear and nonlinear problems

without constraints was developed [44]. These theories did not use Lagrange multipliers [1],[7],[8],[9]. In reality, constrained problems such as path, boundary, and

inequality constraints are quite important and are studied in this thesis. Finally, the

general optimality theorem for higher-order systems has been derived directly from

their higher-order forms and the results are confirmed by classical theory using the

first-order form. This theorem will be discussed in Chapter 2. Further extensions of 

the optimal control theory such as feedback control law, sufficient conditions, and

neighboring extremals are presented in Chapters 3, ??, and 4.

General purpose programs must be developed in order to show the computa-

tional comparison between higher-order and first-order systems. The algorithms are

discussed in Chapter 5. Chapter 6 applies the technique to robotic systems which

are represented by second-order differential equations. The computational times, i.e., the CPU run-times, are compared in more detail in both Chapters 5 and 6.

1.5 Thesis Overview

Chapter 2: This chapter deals with the general theory of higher-order dy-

namic system optimization with n states and m control inputs commanded to move

between two end points, i.e., (i) fixed end time, fixed end point, (ii) fixed


end time, variable end point, (iii) fixed end point, variable end time, and (iv) variable end time, variable end point. A new procedure for dynamic optimization of

these problems is presented in order to handle a variety of problems such as mini-

mum energy, minimum time, and minimum fuel with constraints such as (i) equality

constraints, (ii) inequality constraints, and (iii) boundary constraints. In this new

procedure, some of the Lagrange multipliers used with the first-order optimality principle

are eliminated.

Chapter 3: The existence of an “exact” continuous feedback control law for

terminal controllers of first-order linear systems with quadratic criteria is well known.

This chapter describes the development of continuous feedback law for higher-order

systems. There is also a discussion on the advantages and disadvantages between

higher-order and first-order representations.

Chapter ??: Chapter 2 determines the indirect necessary conditions for

higher-order dynamic systems to extremize a cost functional. In order to indicate

that the extremum is a maximum or a minimum, this chapter provides the sufficient

conditions for higher-order systems. The results on both necessary and sufficient

conditions are compared. Also, an illustrative example is provided at the end of the chapter.

Chapter 4: An optimal feedback control law for nonlinear systems is rarely possible; however, a neighboring optimal feedback law can be computed. The results

from Chapter 3 are used to develop a higher-order neighboring extremal feedback

law in this chapter.

Chapter 5: In order to convert an optimal control problem to a nonlinear

programming problem, the necessary steps are discussed in this chapter. A general

purpose program to solve such problems is also developed, since a benchmark is needed to compare the first-order and higher-order computational procedures.


Chapter 6: This chapter describes open-chain manipulators, whose equations of motion are of second order. Since these systems have a second-order form, the optimality principles described in this thesis can be applied to this class of problems.

Chapter ??: This chapter summarizes the work and outlines some potential

for future research problems.


Chapter 2

THE OPTIMALITY PRINCIPLE FOR HIGHER-ORDER

DYNAMIC SYSTEMS

It is customary to express dynamic systems in state-space form, i.e., a system

of first-order differential equations. However, for a large class of mechanical systems,

the most natural representation of the system is a set of second-order differential

equations which arise from the application of Newton’s laws. Needless to say, these

second-order differential equations can be converted to a set of first-order differential

equations, but this process is accompanied by inversion of the inertia matrix which

makes the first-order form unreasonably complicated ([38], [6]). From a different

perspective, using the theory of linear and nonlinear systems, see e.g. [26], tools

of differential geometry provide alternative representations of systems in canonical

forms which allow the system to be rewritten as higher-order differential equations in

the canonical variables. Dynamic systems that have this feature include controllable

linear systems [1], feedback linearizable systems [26], chained form systems [33], and

differentially flat systems [20].

This chapter addresses the underlying theory for optimal trajectory gener-

ation for this broad class of systems which have a higher-order representation in their original coordinates or in the transformed coordinates. Currently, methods

which exploit the structure of the higher-order differential equations to efficiently

compute the optimal solution are still in their infancy. In a recent study, a direct

method was used for a class of higher-order systems to compute the optimal solution


[34]. In some recent works, Agrawal and coworkers have exploited the structure of 

the higher-order equations to compute the optimal solution using indirect meth-

ods. These studies assumed no inequality constraints and the state-space equations

were explicitly embedded into the cost functional, thereby, reducing a constrained

optimization problem to an unconstrained optimization problem. The results from

higher-order variational calculus were used to find the necessary conditions for op-

timality [23]. This approach was demonstrated for linear time-invariant systems

[1], classes of time-varying systems [9], feedback linearizable systems [3], and fully

actuated robot systems [4]. A recent study uses the explicit structure of globally

feedback linearizable systems to derive some important results applicable to the

optimal solution of Mayer’s problem in the presence of inequality constraints [37].

At this time, we are unaware of a unifying theory that would extend the minimum

principle [35] to systems in higher-order form, without converting them to first-

order form. Equivalence between the results of the minimum principle and calculus

of variations with the first-order form have been demonstrated in the literature ([15],

[13], [36], [30], [18]).

The purpose of this chapter is to extend the classical results of the minimum principle, applicable to systems described in first-order forms, to systems in higher-

order forms. For completeness, these results are derived using Hamilton-Jacobi

theory as well as calculus of variations applicable to functionals with higher-order derivatives.

Further, these results are verified using the classical results of the minimum principle

by rewriting the higher-order system in a first-order form. The results obtained by

these alternative approaches are then compared to gain a better insight into their

equivalence.

The statement of the problem is to find the optimal trajectory, i.e., function

pair (x(t), u(t)) for a dynamic system described by

x^(p)(t) = f(x, x^(1), ..., x^(p−1), u, t),  (2.1)


where x ∈ Rn and u ∈ Rm, x^(i) represents the ith derivative of x, and f is a

vector function in Rn. We assume that f  has continuous first and second partial

derivatives with respect to the arguments x, x(1),...,x( p−1), and u. The trajectory

must minimize the cost

J[x, u] = φ(x(t_f), x^(1)(t_f), ..., x^(p−1)(t_f), t_f) + ∫_{t_0}^{t_f} L(x(τ), x^(1)(τ), ..., x^(p−1)(τ), u(τ), τ) dτ  (2.2)

and satisfy a finite number of constraints

c_j(t, x, x^(1), ..., x^(p−1), u) ≤ 0,  j = 1, ..., r,  (2.3)

g_i(x(t), x^(1)(t), ..., x^(p−1)(t), u(t), t) = 0,  i = 1, ..., r̄,  (2.4)

where r and r̄ are the numbers of auxiliary inequality and equality constraints on the problem, respectively. The functions φ, L, and c_j are also assumed to have continuous first and second partial derivatives with respect to their arguments.

It is important to point out some salient features of the problem statement:

(i) For p = 1, Eq.(2.1) is the familiar first-order description of the system with n

states and m inputs; (ii) For p > 1, if Eq.(2.1) is written in the state-space form,

it will result in np first-order differential equations with m control inputs; (iii) An

optimality theory derived for a higher-order system should reduce to the classical

first-order results when p = 1.

Some special cases of this problem which are suitable for the derivation of 

the optimization theory are: (i) u is constrained to a set which is mathematically

represented as u ∈ A[t, x(t), x(1)(t),..., x( p−1)(t)]. This represents general equal-

ity and inequality constraints on the control inputs. A special case of this con-

straint is when the set A is independent of states and time; (ii) The cost functional

of Eq. (2.2) includes some important problems such as minimum time, minimum

fuel, and maximum terminal states as special cases; (iii) If  p = 1, Eq. (2.1) is the


familiar state-space description of a system. Optimization of this class has been

dealt with extensively in the literature. The problem addressed in this chapter sat-

isfies the following constraints: (i) u(t) ∈ A[t, x(t), x(1)(t),...,x( p−1)(t)]; (ii) Initial

conditions of the trajectories x(t), x(1)(t),...,x( p−1)(t) are specified while the terminal

conditions are free.

Since the purpose of this chapter is to extend the theory of dynamic optimiza-

tion to the case p > 1, in the following sections the optimization theory is derived

using two approaches: (i) variational calculus; (ii) Hamilton-Jacobi theory to arrive

at the extended form of Pontryagin’s principle that applies to higher-order systems.

It will be demonstrated that the two approaches lead to the same results. Also,

these results are consistent with those obtained by the use of the classical minimum

principle applied to the first-order form.

2.1 Higher-order Dynamic Systems

As mentioned at the beginning of this chapter, the most natural represen-

tation of mechanical systems is a set of second-order differential equations which

arise from the application of Newton’s laws. This presents an important reason

to study higher-order systems in this thesis. However, there exists a larger class

of dynamic systems, both linear and nonlinear, for which the equations of motion

can be transformed to the higher-order form. The higher-order representation for

controllable linear dynamic systems is presented in [44]. This higher-order repre-

sentation of dynamic systems falls under the umbrella of differential flatness ([20],

[33], [34]). For general nonlinear systems, the techniques to obtain such higher-order

representations are still in their fancy. We present a simple example to motivate thediscussion for which a fourth-order representation exists in addition to its second-

order representation.

The example is of a single link manipulator rotating in a vertical plane driven


Figure 2.1: Single-link robot with joint flexibility.

through a flexible drive train, as shown in Fig. 2.1 [38]. The system has two degrees-

of-freedom and the equations of motion are

I_1 q_1^(2) + m_1 g l sin q_1 + k(q_1 − q_2) = 0,  (2.5)

I_2 q_2^(2) − k(q_1 − q_2) = u,  (2.6)

where I 1 and I 2 are respectively the link and actuator moment of inertia, m1 is the

mass of the link with mass center at a distance l from the joint, k is the stiffness

of the drive train, g is the gravity constant, and u is the actuator torque. The two

second-order differential equations describing the system have a special structure.

From Eq. (2.5), q_2 can be written explicitly in terms of q_1 and its second derivative.

On substituting this expression of  q2 in Eq. (2.6), we obtain a single fourth-order

differential equation in the variable q_1 up to its fourth derivative:

q_1^(4) = α_1 u + (α_2 cos q_1 + α_3) q_1^(2) + α_4 sin q_1 (q_1^(1))² + α_5 sin q_1,  (2.7)

where the α_i are constants given as α_1 = k/(I_1 I_2), α_2 = −m_1 g l/I_1, α_3 = −k(I_1 + I_2)/(I_1 I_2), α_4 = m_1 g l/I_1, and α_5 = −m_1 g k l/(I_1 I_2). This differential equation has the structure of Eq. (2.1) with n = 1 and m = 1.
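The elimination of q_2 can be reproduced symbolically. The sketch below, a sympy computation with all symbol names assumed, derives the fourth-order equation from Eqs. (2.5)-(2.6) and checks the coefficient of u against α_1 = k/(I_1 I_2):

```python
import sympy as sp

t = sp.symbols('t')
I1, I2, m1, g, l, k, u = sp.symbols('I1 I2 m1 g l k u', positive=True)
q1 = sp.Function('q1')(t)

# Eq. (2.5) gives q2 explicitly in terms of q1 and its second derivative:
q2 = q1 + (I1*q1.diff(t, 2) + m1*g*l*sp.sin(q1))/k

# Substitute into Eq. (2.6) and solve for the fourth derivative of q1:
eq = sp.Eq(I2*q2.diff(t, 2) - k*(q1 - q2), u)
q1_4 = sp.solve(eq, q1.diff(t, 4))[0]

# In Eq. (2.7) the coefficient of u is alpha1 = k/(I1*I2):
alpha1 = sp.simplify(sp.diff(q1_4, u))
print(alpha1)
```

The remaining α coefficients can be read off the same expression by collecting terms in q_1^(2), sin q_1 (q_1^(1))², and sin q_1.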


2.2 Calculus of Variations

This section presents some fundamental results from calculus of variations

which are useful in developing the theory for optimization of higher-order dynamic

systems.

2.2.1 Variation Statement of General Functions

The objective of this section is to determine the necessary conditions for

extrema of the functional J [x] depending on higher-order derivatives

J[x] = φ(x(t_f), x^(1)(t_f), ..., x^(p−1)(t_f), t_f) + ∫_{t_0}^{t_f} F(t, x, x^(1), ..., x^(p)) dt  (2.8)

with open end states and end times. Let φ and F(t, x, x^(1), ..., x^(p)) be functions with continuous first and second partial derivatives with respect to all their arguments. The space of functions in which the extremum is sought is D_p(t_0, t_f) [23]. It consists of all functions x(t) which are continuous and have continuous derivatives up to order p on an interval [t_0, t_f]. The norm for this function class is

an interval [t0, tf ]. The norm for this function class is

x p =

 pi=0

maxt0≤t≤tf |x(i)(t)|, (2.9)

where x^(0)(t) denotes the function x(t) itself. Therefore, two functions in D_p are considered to be close if the values of the functions and their derivatives up to order p are close. For ε > 0,

‖x_1 − x_2‖_p ≤ ε  (2.10)

implies that

|x_1(t) − x_2(t)| ≤ ε,  |x_1^(1)(t) − x_2^(1)(t)| ≤ ε,  ...,  |x_1^(p)(t) − x_2^(p)(t)| ≤ ε  (2.11)

for all t_0 ≤ t ≤ t_f.
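The norm of Eq.(2.9) can be approximated from samples of a function, with the derivatives replaced by finite differences. The helper below is an assumed illustration (names made up); for x(t) = t² on [0, 1] with p = 1 the norm is max|t²| + max|2t| = 1 + 2 = 3:

```python
def dp_norm(samples, dt, p):
    """Approximate ||x||_p = sum_{i=0}^{p} max_t |x^(i)(t)| from uniform samples."""
    total = 0.0
    deriv = list(samples)
    for _ in range(p + 1):
        total += max(abs(v) for v in deriv)
        # forward-difference estimate of the next derivative
        deriv = [(b - a) / dt for a, b in zip(deriv, deriv[1:])]
    return total

dt = 1e-3
xs = [(k * dt)**2 for k in range(1001)]  # x(t) = t**2 sampled on [0, 1]
print(dp_norm(xs, dt, 1))  # approximately 3.0
```

Note that closeness in D_p is stronger than closeness of function values alone: the derivative terms in the sum can dominate the norm.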

In order to apply the necessary condition for an extremum to the problem

formulated above, the variation of the functional (2.8) must be computed. On


providing x(t) with an increment hx(t) as shown in Figure 2.2, the corresponding

increment in the functional (2.8) can be computed as below:

∆J = J[x + h_x] − J[x]

  = φ(x(t_f) + dx|_{t_f}, x^(1)(t_f) + dx^(1)|_{t_f}, ..., x^(p−1)(t_f) + dx^(p−1)|_{t_f}, t_f + δt_f)
    − φ(x(t_f), x^(1)(t_f), ..., x^(p−1)(t_f), t_f)
    + ∫_{t_0+δt_0}^{t_f+δt_f} F(t, x + h_x, ..., x^(p) + h_{x^(p)}) dt − ∫_{t_0}^{t_f} F(t, x, x^(1), ..., x^(p)) dt

  = φ(x(t_f) + dx|_{t_f}, x^(1)(t_f) + dx^(1)|_{t_f}, ..., x^(p−1)(t_f) + dx^(p−1)|_{t_f}, t_f + δt_f)
    − φ(x(t_f), x^(1)(t_f), ..., x^(p−1)(t_f), t_f)
    + ∫_{t_0}^{t_f} [ F(t, x + h_x, ..., x^(p) + h_{x^(p)}) − F(t, x, x^(1), ..., x^(p)) ] dt
    + ∫_{t_f}^{t_f+δt_f} F(t, x + h_x, ..., x^(p) + h_{x^(p)}) dt − ∫_{t_0}^{t_0+δt_0} F(t, x + h_x, ..., x^(p) + h_{x^(p)}) dt.  (2.12)

Using Taylor’s theorem to expand the integrand and neglecting higher order terms,

we obtain the variation of  J [x] as

δJ = [ (∂φ/∂x) dx + (∂φ/∂x^(1)) dx^(1) + ... + (∂φ/∂x^(p−1)) dx^(p−1) + (∂φ/∂t) δt ]_{t_f}

   + ∫_{t_0}^{t_f} [ (∂F/∂x) h_x + (∂F/∂x^(1)) h_{x^(1)} + ... + (∂F/∂x^(p)) h_{x^(p)} ] dt

   + F(t, x + h_x, x^(1) + h_{x^(1)}, ..., x^(p) + h_{x^(p)})|_{t_f} δt_f

   − F(t, x + h_x, x^(1) + h_{x^(1)}, ..., x^(p) + h_{x^(p)})|_{t_0} δt_0.  (2.13)

According to Theorem 1 of Appendix A, a necessary condition for J [x] to have an

extremum is that

[ (∂φ/∂x) dx + (∂φ/∂x^(1)) dx^(1) + ... + (∂φ/∂x^(p−1)) dx^(p−1) + (∂φ/∂t) δt ]_{t_f}

+ ∫_{t_0}^{t_f} [ (∂F/∂x) h_x + (∂F/∂x^(1)) h_{x^(1)} + ... + (∂F/∂x^(p)) h_{x^(p)} ] dt

+ F(t, x + h_x, x^(1) + h_{x^(1)}, ..., x^(p) + h_{x^(p)})|_{t_f} δt_f

− F(t, x + h_x, x^(1) + h_{x^(1)}, ..., x^(p) + h_{x^(p)})|_{t_0} δt_0 = 0.  (2.14)


Figure 2.2: The allowable increment h_{x^(i)}(t) in x^(i)(t) for the case with variable end point and end time, where i = 0, ..., p − 1.

Integrating terms such as (∂F/∂x^(i)) h_{x^(i)}, i = 1, ..., p, by parts i times within the integral in Eq.(2.14), we get

[ (∂φ/∂x) dx + (∂φ/∂x^(1)) dx^(1) + ... + (∂φ/∂x^(p−1)) dx^(p−1) + (∂φ/∂t) δt ]_{t_f}

+ ∫_{t_0}^{t_f} [ ∂F/∂x − (∂F/∂x^(1))^(1) + ... + (−1)^p (∂F/∂x^(p))^(p) ] h_x dt

+ [ ( ∂F/∂x^(1) − (∂F/∂x^(2))^(1) + ... + (−1)^(p−1) (∂F/∂x^(p))^(p−1) ) h_x ]_{t_0}^{t_f}

+ [ ( ∂F/∂x^(2) − (∂F/∂x^(3))^(1) + ... + (−1)^(p−2) (∂F/∂x^(p))^(p−2) ) h_x^(1) ]_{t_0}^{t_f}

+ ... + [ (∂F/∂x^(p)) h_x^(p−1) ]_{t_0}^{t_f}

+ F(t, x + h_x, x^(1) + h_{x^(1)}, ..., x^(p) + h_{x^(p)})|_{t_f} δt_f

− F(t, x + h_x, x^(1) + h_{x^(1)}, ..., x^(p) + h_{x^(p)})|_{t_0} δt_0 = 0.  (2.15)

Referring to Figure 2.2, we have

h_{x^(i)}(t_0) = dx^(i)|_{t_0} − x^(i+1)|_{t_0} δt_0,  and

h_{x^(i)}(t_f) = dx^(i)|_{t_f} − x^(i+1)|_{t_f} δt_f,  (2.16)

where i = 0,...,p − 1. Eq.(2.15) can be rewritten as

∫_{t_0}^{t_f} [ ∂F/∂x − (∂F/∂x^(1))^(1) + ... + (−1)^p (∂F/∂x^(p))^(p) ] h_x dt

+ [ ( ∂F/∂x^(1) − (∂F/∂x^(2))^(1) + ... + (−1)^(p−1) (∂F/∂x^(p))^(p−1) + ∂φ/∂x ) dx ]_{t_0}^{t_f}

+ [ ( ∂F/∂x^(2) − (∂F/∂x^(3))^(1) + ... + (−1)^(p−2) (∂F/∂x^(p))^(p−2) + ∂φ/∂x^(1) ) dx^(1) ]_{t_0}^{t_f}

+ ... + [ ( ∂F/∂x^(p) + ∂φ/∂x^(p−1) ) dx^(p−1) ]_{t_0}^{t_f}

+ [ ( F − ( ∂F/∂x^(1) − (∂F/∂x^(2))^(1) + ... + (−1)^(p−1) (∂F/∂x^(p))^(p−1) ) x^(1)
      − ( ∂F/∂x^(2) − (∂F/∂x^(3))^(1) + ... + (−1)^(p−2) (∂F/∂x^(p))^(p−2) ) x^(2)
      − ... − (∂F/∂x^(p)) x^(p) ) δt ]_{t_0}^{t_f} + (∂φ/∂t) δt |_{t_f} = 0,  (2.17)

where h_x and δt are the variations of x and t, respectively. Since these variations are arbitrary over t_0 ≤ t ≤ t_f and the dx^(i), i = 0, ..., p − 1, are arbitrary at the end points, δJ will vanish if all the coefficients of these variations are equal to zero. From the integral term of Eq.(2.17) and from the generalization of Lemma 1 of


Appendix A, its integrand must be zero. Thus, the first-order necessary conditions

for an extremum are:

∂F/∂x − (∂F/∂x^(1))^(1) + ... + (−1)^p (∂F/∂x^(p))^(p) = 0,  (2.18)

[ ∂F/∂x^(1) − (∂F/∂x^(2))^(1) + ... + (−1)^(p−1) (∂F/∂x^(p))^(p−1) + ∂φ/∂x ]_{t_0}^{t_f} = 0,

[ ∂F/∂x^(2) − (∂F/∂x^(3))^(1) + ... + (−1)^(p−2) (∂F/∂x^(p))^(p−2) + ∂φ/∂x^(1) ]_{t_0}^{t_f} = 0,

...

[ ∂F/∂x^(p) + ∂φ/∂x^(p−1) ]_{t_0}^{t_f} = 0,  (2.19)

[ F − ( ∂F/∂x^(1) − (∂F/∂x^(2))^(1) + ... + (−1)^(p−1) (∂F/∂x^(p))^(p−1) ) x^(1)
    − ( ∂F/∂x^(2) − (∂F/∂x^(3))^(1) + ... + (−1)^(p−2) (∂F/∂x^(p))^(p−2) ) x^(2)
    − ... − (∂F/∂x^(p)) x^(p) + ∂φ/∂t ]_{t_0}^{t_f} = 0.  (2.20)

The derivation of Eq.(2.18) is not completely rigorous, since the transition from Eq.(2.14) to Eq.(2.17) presupposes the existence of the derivatives (∂F/∂x^(1))^(1), ..., (∂F/∂x^(p))^(p). However, by a somewhat more elaborate argument [23], it can be shown that Eq.(2.14) implies Eq.(2.17) without this additional hypothesis. In fact, the arguments in question prove the existence of these derivatives. Further, any x(t) that has p continuous derivatives and satisfies the Euler-Lagrange equation (2.18) possesses 2p continuous derivatives.
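Eq.(2.18) can be verified symbolically for a concrete case. The sketch below, a sympy computation on the assumed toy functional F = (x^(2))² with p = 2, builds the left-hand side of Eq.(2.18) and confirms that it reduces to 2x^(4) = 0, whose extremals are cubic polynomials:

```python
import sympy as sp

t = sp.symbols('t')
x = sp.Function('x')(t)

# Assumed toy functional with p = 2: F depends only on the second derivative.
F = x.diff(t, 2)**2

# Left-hand side of Eq.(2.18):
#   dF/dx - d/dt (dF/dx^(1)) + d^2/dt^2 (dF/dx^(2))
E = (sp.diff(F, x)
     - sp.diff(F, x.diff(t)).diff(t)
     + sp.diff(F, x.diff(t, 2)).diff(t, 2))

print(sp.simplify(E - 2*x.diff(t, 4)))  # 0: the necessary condition is 2 x^(4) = 0
```

Since the fourth derivative of any cubic vanishes, every cubic polynomial satisfies this necessary condition, consistent with the remark that solutions possess 2p continuous derivatives.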

2.2.2 Dynamic Optimization

The time history of the control input that minimizes the functional of Eq.(2.2)

can be computed using the results from calculus of variations. For such optimization

problems, Eq.(2.1) is considered as a constraint to be satisfied by the solution.


However, for specific classes of problems, Eq.(2.1) can be used to eliminate u(t) in

terms of  x(t) and its higher derivatives. Such a methodology has been investigated

in the context of linear and nonlinear systems ([1], [4], [5], [9]). Also, several kinds of dynamic optimization problems are possible, such as (i) fixed final time and final states prescribed, (ii) fixed final time and final states lying on a constraint surface,

(iii) fixed final time with auxiliary constraints, (iv) the above cases with variable

final time, (v) minimum time, (vi) path constraints, and (vii) inequality constraints.

Before describing the necessary conditions for each problem, the general form for the

variation of J  is obtained by considering the functional (2.2) subject to the dynamic

equations (2.1) with open end states and end time. Also, the auxiliary state and

control inequality and equality constraints are (2.3) and (2.4), respectively.

Using the method of Valentine, slack variables are introduced as additional

inputs in order to convert each inequality constraint into an equality constraint. By

adding the new slack variables s j(t), Eq.(2.3) can be written as

c_j(x(t), x^(1)(t), ..., x^(p−1)(t), u(t), t) + s_j² = 0,  j = 1, ..., r.  (2.21)

Adjoining Eqs. (2.1), (2.4), and (2.21) to the functional (2.2) with the Lagrange multipliers λ ∈ R^n, ν ∈ R^r̄, and µ ∈ R^r, respectively, gives

J[x, u, λ, ν, µ] = φ + ∫_{t_0}^{t_f} [ L + λᵀ(f − x^(p)) + νᵀg + µᵀ(c + s²) ] dt,  (2.22)

where g, c, and s are r̄-, r-, and r-dimensional vectors consisting of the g_i, c_j, and s_j, respectively. For simplicity, Eq.(2.22) can be written compactly as

J[x, u, λ, ν, µ] = φ + ∫_{t_0}^{t_f} L̄ dt,

L̄ = L + λᵀ(f − x^(p)) + νᵀg + µᵀ(c + s²),  (2.23)

where the functional J depends on the continuous functions x(t), u(t), λ(t), ν(t), and µ(t). Alternatively,

J[x, u, λ, ν, µ] = φ(x(t_f), x^(1)(t_f), ..., x^(p−1)(t_f), t_f) + ∫_{t_0}^{t_f} L̄(x(t), x^(1)(t), ..., x^(p−1)(t), u(t), λ(t), ν(t), µ(t), s(t), t) dt.  (2.24)

This is the most general form of the dynamic optimization problem where the func-

tion values and the time at both ends are variable.

Applying the results derived in Section 2.2.1, the first-order necessary conditions for an extremum are as follows:

∂L̄/∂x − (∂L̄/∂x^(1))^(1) + ... + (−1)^(p−1) (∂L̄/∂x^(p−1))^(p−1) = (−1)^p λ^(p)ᵀ,  (2.25)

∂L̄/∂u = 0,  (2.26)

∂L̄/∂λ = 0 = f − x^(p),  (2.27)

∂L̄/∂ν = 0 = g,  (2.28)

∂L̄/∂µ = 0 = c + s²,  (2.29)

∂L̄/∂s = 0 = 2µᵀs,  (2.30)

[ ∂L̄/∂x^(1) − (∂L̄/∂x^(2))^(1) + ... + (−1)^(p−1) (∂L̄/∂x^(p))^(p−1) + ∂φ/∂x ]_{t_0}^{t_f} = 0,

[ ∂L̄/∂x^(2) − (∂L̄/∂x^(3))^(1) + ... + (−1)^(p−2) (∂L̄/∂x^(p))^(p−2) + ∂φ/∂x^(1) ]_{t_0}^{t_f} = 0,

...

[ ∂L̄/∂x^(p) + ∂φ/∂x^(p−1) ]_{t_0}^{t_f} = 0,  (2.31)

[ L̄ − ( ∂L̄/∂x^(1) − (∂L̄/∂x^(2))^(1) + ... + (−1)^(p−1) (∂L̄/∂x^(p))^(p−1) ) x^(1)
    − ( ∂L̄/∂x^(2) − (∂L̄/∂x^(3))^(1) + ... + (−1)^(p−2) (∂L̄/∂x^(p))^(p−2) ) x^(2)
    − ... − (∂L̄/∂x^(p)) x^(p) + ∂φ/∂t ]_{t_0}^{t_f} = 0.  (2.32)


This general result can be specialized to the specific subcases: Section 2.2.2.1 de-

scribes problems without any auxiliary constraint. Prescribed final states are ad-

dressed in Section 2.2.2.2. Auxiliary equality constraints are considered in Sec-

tion 2.2.2.3, and Section 2.2.2.4 describes problems with auxiliary inequality con-

straints.

2.2.2.1 No Auxiliary Constraints

The statement of the problem is to minimize the performance index J  of 

Eq.(2.2) subject to the dynamic equations (2.1). The end times t_0 and t_f are fixed, and the initial and final states x(t_0), ..., x^(p−1)(t_0), x(t_f), ..., x^(p−1)(t_f) are specified. L̄ becomes

L̄ = L + λᵀ(f − x^(p)).  (2.33)

Therefore, Eqs. (2.25), (2.26), and (2.27) are the first-order necessary conditions for

the extremum of this specific problem. These conditions are summarized below

∂L̄/∂x − (∂L̄/∂x^(1))^(1) + ... + (−1)^(p−1) (∂L̄/∂x^(p−1))^(p−1) = (−1)^p λ^(p)ᵀ,

∂L̄/∂u = 0,

∂L̄/∂λ = 0 = f − x^(p).

Boundary conditions are given in terms of  x,..., and x( p−1) at both initial time t0

and final time tf .
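As a concrete instance of these conditions, consider the assumed example x^(2) = u (p = 2, n = m = 1) with L = u² and fixed end states. Eq.(2.25) gives λ^(2) = 0 and Eq.(2.26) gives 2u + λ = 0, so u is linear in t and x(t) is a cubic polynomial; the sketch below (all names made up) solves the boundary conditions for the cubic coefficients:

```python
def min_energy_cubic(x0, v0, xf, vf, T):
    """Solve for x(t) = c0 + c1*t + c2*t**2 + c3*t**3 minimizing the
    integral of u**2 subject to x'' = u and fixed end states."""
    c0, c1 = x0, v0  # initial conditions x(0) = x0, x'(0) = v0
    # Remaining boundary conditions, a 2x2 linear system in (c2, c3):
    #   c2*T**2 + c3*T**3  = xf - c0 - c1*T
    #   2*c2*T + 3*c3*T**2 = vf - c1
    r1 = xf - c0 - c1*T
    r2 = vf - c1
    det = T**4  # determinant of the 2x2 system
    c2 = (3*T**2*r1 - T**3*r2) / det
    c3 = (T**2*r2 - 2*T*r1) / det
    return c0, c1, c2, c3

c = min_energy_cubic(0.0, 0.0, 1.0, 0.0, 1.0)
print(c)  # (0.0, 0.0, 3.0, -2.0): x(t) = 3t^2 - 2t^3, u(t) = 6 - 12t
```

The control u(t) = x^(2)(t) is indeed linear in t, as the necessary conditions predict.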

2.2.2.2 Variable Final Time and Free Final States

In this section, we suppose that the final states and time are not specified.

Therefore, the variations on x, x(1),..., x( p), and t can be chosen arbitrarily at the final

time. Besides Eqs. (2.25), (2.26), and (2.27), the additional necessary conditions to be satisfied are (2.31) and (2.32). These necessary conditions are

∂L̄/∂x − (∂L̄/∂x^(1))^(1) + ... + (−1)^(p−1) (∂L̄/∂x^(p−1))^(p−1) = (−1)^p λ^(p)ᵀ,

∂L̄/∂u = 0,

∂L̄/∂λ = 0 = f − x^(p),

[ ∂L̄/∂x^(1) − (∂L̄/∂x^(2))^(1) + ... + (−1)^(p−1) (∂L̄/∂x^(p))^(p−1) + ∂φ/∂x ]_{t_f} = 0,

[ ∂L̄/∂x^(2) − (∂L̄/∂x^(3))^(1) + ... + (−1)^(p−2) (∂L̄/∂x^(p))^(p−2) + ∂φ/∂x^(1) ]_{t_f} = 0,

...

[ ∂L̄/∂x^(p) + ∂φ/∂x^(p−1) ]_{t_f} = 0,

[ L̄ − ( ∂L̄/∂x^(1) − (∂L̄/∂x^(2))^(1) + ... + (−1)^(p−1) (∂L̄/∂x^(p))^(p−1) ) x^(1)
    − ( ∂L̄/∂x^(2) − (∂L̄/∂x^(3))^(1) + ... + (−1)^(p−2) (∂L̄/∂x^(p))^(p−2) ) x^(2)
    − ... − (∂L̄/∂x^(p)) x^(p) + ∂φ/∂t ]_{t_f} = 0.

These equations must be solved along with the given boundary conditions of  x and

its derivatives at time t_0.

2.2.2.3 Auxiliary Equality Constraints

Auxiliary constraints are important for realistic problems. Sometimes, penalty

function methods are used to include such constraints in the solution approach [43].

However, we do not discuss this method in this thesis. By the use of the variational

theory, the function L̄ now becomes

L̄ = L + λᵀ(f − x^(p)) + νᵀg.  (2.34)

The necessary conditions of optimality become (2.25), (2.26), (2.27), (2.28), (2.31),

and (2.32), assuming that it is a problem with variable end times and free end states.

These conditions are

∂L̄/∂x − (∂L̄/∂x^(1))^(1) + ... + (−1)^(p−1) (∂L̄/∂x^(p−1))^(p−1) = (−1)^p λ^(p)ᵀ,

∂L̄/∂u = 0,

∂L̄/∂λ = 0 = f − x^(p),

∂L̄/∂ν = 0 = g,

[ ∂L̄/∂x^(1) − (∂L̄/∂x^(2))^(1) + ... + (−1)^(p−1) (∂L̄/∂x^(p))^(p−1) + ∂φ/∂x ]_{t_f} = 0,

[ ∂L̄/∂x^(2) − (∂L̄/∂x^(3))^(1) + ... + (−1)^(p−2) (∂L̄/∂x^(p))^(p−2) + ∂φ/∂x^(1) ]_{t_f} = 0,

...

[ ∂L̄/∂x^(p) + ∂φ/∂x^(p−1) ]_{t_f} = 0,

[ L̄ − ( ∂L̄/∂x^(1) − (∂L̄/∂x^(2))^(1) + ... + (−1)^(p−1) (∂L̄/∂x^(p))^(p−1) ) x^(1)
    − ( ∂L̄/∂x^(2) − (∂L̄/∂x^(3))^(1) + ... + (−1)^(p−2) (∂L̄/∂x^(p))^(p−2) ) x^(2)
    − ... − (∂L̄/∂x^(p)) x^(p) + ∂φ/∂t ]_{t_f} = 0.

These equations must be solved along with the given boundary conditions on $x$ and its derivatives at time $t_0$.

2.2.2.4 Auxiliary Inequality Constraints

For general inequality-constrained problems, Eqs. (2.25)-(2.32) are the necessary conditions. The sufficient conditions require $\delta^{2}J > 0$. Using the proofs provided in Chapter ??, one can show that the sufficient conditions are

$$\frac{\partial^{2}H}{\partial u^{2}} \ge 0 \qquad (2.35)$$

$$\frac{\partial^{2}H}{\partial s^{2}} = \mu \ge 0 \qquad (2.36)$$

where $H = L + \lambda^{T}f + \mu^{T}(c + s^{2})$.

Eq. (2.36) states that each Lagrange multiplier $\mu_i$ must be nonnegative. However, there are two possibilities: (i) if $\mu_i > 0$, Eq. (2.30) requires that $s_i = 0$, and then $c_i = 0$; and (ii) if $\mu_i = 0$, Eq. (2.30) allows $s_i$ to be arbitrary, and then $c_i < 0$, $i = 1, \ldots, r$. Therefore, Eqs. (2.29) and (2.30) can be restated without using the slack variables:

$$c_i(t, x, x^{(1)}, \ldots, x^{(p-1)}, u) \le 0, \quad i = 1, \ldots, r, \qquad (2.37)$$

$$\mu_i > 0 \ \text{if}\ c_i = 0, \qquad \mu_i = 0 \ \text{if}\ c_i < 0, \quad i = 1, \ldots, r. \qquad (2.38)$$
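The complementary-slackness pattern in Eqs. (2.37)-(2.38) can be illustrated numerically. The sketch below uses a hypothetical scalar static problem (not the dynamic problem of this section): minimize $(u - a)^2$ subject to $u - 1 \le 0$; the function name `kkt_point` and the numbers are illustrative assumptions.

```python
# Minimal numeric illustration of the complementary-slackness pattern of
# Eqs. (2.37)-(2.38) on a scalar static problem (hypothetical example):
# minimize (u - a)^2 subject to c(u) = u - bound <= 0.

def kkt_point(a, bound=1.0):
    """Return (u_star, mu) satisfying stationarity 2(u - a) + mu = 0,
    mu >= 0, and complementary slackness mu * c = 0 with c = u - bound."""
    u = min(a, bound)      # unconstrained minimizer clamped to the feasible set
    mu = -2.0 * (u - a)    # from stationarity; zero when the constraint is inactive
    return u, mu

# Active constraint: unconstrained minimizer a = 2 violates u <= 1.
u1, mu1 = kkt_point(2.0)   # u1 = 1.0, mu1 = 2.0 > 0, c = 0
# Inactive constraint: a = 0.5 already satisfies u <= 1.
u2, mu2 = kkt_point(0.5)   # u2 = 0.5, mu2 = 0.0, c = -0.5 < 0
```

In both cases the product of the multiplier and the constraint value vanishes, which is exactly the statement of Eq. (2.38).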

2.3 Extensions of the Minimum Principle

The purpose of this section is to extend the minimum principle for optimal

control to the case p > 1. The minimum principle will be derived using Hamilton-

Jacobi theory to arrive at the extended form of Pontryagin’s principle applicable to

higher-order systems.

2.3.1 Hamilton-Jacobi Theory

In this section, the derivation of the extended form of Pontryagin’s principle is

performed in two steps. First, the constraint set of the control variable is considered

independent of time and states, i.e., u ∈ U  where U  is a set of admissible u(t). The

optimality conditions are derived for this case using the pattern of proof given in

[29]. Second, the constraint set is considered to be time and state dependent, i.e.,

$u \in A[t, x(t), x^{(1)}(t), \ldots, x^{(p-1)}(t)]$. The optimality conditions are extended to this case. To abbreviate notation, we define $\bar{x}(t) = (x(t)^{T}\ x^{(1)}(t)^{T}\ \ldots\ x^{(p-1)}(t)^{T})$.

2.3.1.1 Constraint Set U 

We embed this problem in a larger problem class by defining a return function

$$J(\bar{x}(t), t, u(\tau)) = \phi(\bar{x}(t_f), t_f) + \int_{t}^{t_f} L(\bar{x}(\tau), u(\tau), \tau)\, d\tau. \qquad (2.39)$$


Here, $\bar{x}(t)$ is an admissible start point at $t$ and $u(\tau)$ is an admissible input trajectory, i.e., $u(\tau) \in U$, defined over $t \le \tau \le t_f$. For this start point $(t, \bar{x}(t))$, we define the minimum cost

$$J^{*}(\bar{x}(t), t) = \min_{u(\tau) \in U}\left[\phi(\bar{x}(t_f), t_f) + \int_{t}^{t_f} L(\bar{x}(\tau), u(\tau), \tau)\, d\tau\right]. \qquad (2.40)$$

By subdividing the interval $(t, t_f)$ into $t \le \tau \le t + \Delta t$ and $t + \Delta t \le \tau \le t_f$, from the principle of optimality we can rewrite Eq. (2.40) as

$$J^{*}(\bar{x}(t), t) = \min_{u(\tau) \in U}\left[\int_{t}^{t+\Delta t} L\, d\tau + J^{*}(\bar{x}(t + \Delta t), t + \Delta t)\right]. \qquad (2.41)$$
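Eq. (2.41) is the continuous-time form of Bellman's principle of optimality: the cost-to-go at $t$ equals the cost over $(t, t + \Delta t)$ plus the optimal cost-to-go from $t + \Delta t$. A minimal discrete sketch of the same backward recursion, with made-up stage costs (all numbers below are illustrative assumptions, not from the text):

```python
# Discrete sketch of the principle of optimality behind Eq. (2.41):
# the optimal cost-to-go J[k][s] is computed by a backward sweep, exactly as
# the interval (t, tf) is split into (t, t+dt) and (t+dt, tf).

n_stages, states = 3, (0, 1)
step_cost = {  # step_cost[k][(s, s_next)]: made-up cost of moving s -> s_next at stage k
    0: {(0, 0): 1, (0, 1): 4, (1, 0): 2, (1, 1): 1},
    1: {(0, 0): 2, (0, 1): 1, (1, 0): 3, (1, 1): 2},
    2: {(0, 0): 1, (0, 1): 5, (1, 0): 1, (1, 1): 2},
}
terminal = {0: 0, 1: 0}  # analogue of the terminal cost phi(x(tf))

J = [dict() for _ in range(n_stages + 1)]
J[n_stages] = dict(terminal)
for k in range(n_stages - 1, -1, -1):          # backward sweep
    for s in states:
        J[k][s] = min(step_cost[k][(s, s2)] + J[k + 1][s2] for s2 in states)
```

The resulting `J[0]` is the optimal cost from each initial state, obtained without enumerating whole trajectories.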

On expanding Eq. (2.41) in a Taylor series about $(\bar{x}(t), t)$, we obtain

$$J^{*}(\bar{x}(t), t) = \min_{u(\tau) \in U}\left[\int_{t}^{t+\Delta t} L\, d\tau + J^{*}(\bar{x}(t), t) + \frac{\partial J^{*}}{\partial t}(\bar{x}(t), t)\,\Delta t + \frac{\partial J^{*}}{\partial \bar{x}}(\bar{x}(t), t)\,\bar{x}^{(1)}(t)\,\Delta t + \text{h.o.t.}\right]. \qquad (2.42)$$

Recalling $\bar{x}(t) = (x(t)^{T}\ x^{(1)}(t)^{T}\ \ldots\ x^{(p-1)}(t)^{T})$, for small $\Delta t$,

$$J^{*}(\bar{x}(t), t) = \min_{u(t) \in U}\Big\{ J^{*}(\bar{x}(t), t) + \Big[L(\bar{x}(t), u(t), t) + \frac{\partial J^{*}}{\partial t}(\bar{x}(t), t) + \frac{\partial J^{*}}{\partial x}(\bar{x}(t), t)\,x^{(1)}(t) + \ldots + \frac{\partial J^{*}}{\partial x^{(p-2)}}(\bar{x}(t), t)\,x^{(p-1)}(t) + \frac{\partial J^{*}}{\partial x^{(p-1)}}(\bar{x}(t), t)\,f(\bar{x}(t), u(t), t)\Big]\Delta t + o(\Delta t)\Big\}. \qquad (2.43)$$

On neglecting higher-order terms in $\Delta t$ and separating the terms that depend on $u(t)$, we can rewrite the above equation as

$$0 = \frac{\partial J^{*}}{\partial t}(\bar{x}(t), t) + \frac{\partial J^{*}}{\partial x}(\bar{x}(t), t)\,x^{(1)}(t) + \ldots + \frac{\partial J^{*}}{\partial x^{(p-2)}}(\bar{x}(t), t)\,x^{(p-1)}(t) + \min_{u(t) \in U}\left[L(\bar{x}(t), u(t), t) + \frac{\partial J^{*}}{\partial x^{(p-1)}}(\bar{x}(t), t)\,f(\bar{x}(t), u(t), t)\right]. \qquad (2.44)$$


One can now define a Hamiltonian

$$H\!\left(\bar{x}(t), u(t), \frac{\partial J^{*}}{\partial x^{(p-1)}}, t\right) = L(\bar{x}(t), u(t), t) + \frac{\partial J^{*}}{\partial x^{(p-1)}}(\bar{x}(t), t)\,f(\bar{x}(t), u(t), t) \qquad (2.45)$$

and

$$H\!\left(\bar{x}(t), u^{*}\!\left(\bar{x}(t), \frac{\partial J^{*}}{\partial x^{(p-1)}}, t\right), \frac{\partial J^{*}}{\partial x^{(p-1)}}, t\right) = \min_{u(t) \in U} H\!\left(\bar{x}(t), u(t), \frac{\partial J^{*}}{\partial x^{(p-1)}}, t\right). \qquad (2.46)$$

Here, the minimizing control is said to depend on $\bar{x}(t)$, $\frac{\partial J^{*}}{\partial x^{(p-1)}}(\bar{x}(t), t)$, and $t$. From Eqs. (2.44) and (2.46), the extended form of the Hamilton-Jacobi equation is

$$\frac{\partial J^{*}}{\partial t}(\bar{x}(t), t) + \frac{\partial J^{*}}{\partial x}(\bar{x}(t), t)\,x^{(1)}(t) + \ldots + \frac{\partial J^{*}}{\partial x^{(p-2)}}(\bar{x}(t), t)\,x^{(p-1)}(t) + H\!\left(\bar{x}(t), u^{*}\!\left(\bar{x}(t), \frac{\partial J^{*}}{\partial x^{(p-1)}}, t\right), \frac{\partial J^{*}}{\partial x^{(p-1)}}, t\right) = 0, \qquad (2.47)$$

where $J^{*}$ satisfies the boundary condition

$$J^{*}(\bar{x}(t_f), t_f) = \phi(\bar{x}(t_f), t_f). \qquad (2.48)$$

In summary, (i) the optimal control $u^{*}(\bar{x}(t), \frac{\partial J^{*}}{\partial x^{(p-1)}}, t)$ minimizes the Hamiltonian $H$ defined in Eq. (2.46), and (ii) the optimal return function satisfies the partial differential equation (2.47). If $p = 1$, these results simplify to the classical Hamilton-Jacobi equations as derived in Eqs. (1.39) in Section 1.3.2. Eq. (2.46) is an equivalent statement to that made by Pontryagin [35].

• Extended Pontryagin’s Principle

The extended Pontryagin's principle for the case $p > 1$ can be derived from the Hamilton-Jacobi Eq. (2.47) using the pattern suggested by Kirk [29]. If $(\bar{x}^{*}(t), t)$ is a point on the optimal trajectory, the Hamilton-Jacobi equation can also be written as

$$0 = \min_{u(t) \in U}\left[\frac{\partial J^{*}}{\partial t}(\bar{x}^{*}(t), t) + \frac{\partial J^{*}}{\partial x}(\bar{x}^{*}(t), t)\,x^{*(1)}(t) + \ldots + \frac{\partial J^{*}}{\partial x^{(p-2)}}(\bar{x}^{*}(t), t)\,x^{*(p-1)}(t) + H\!\left(\bar{x}^{*}(t), u(t), \frac{\partial J^{*}}{\partial x^{(p-1)}}, t\right)\right], \qquad (2.49)$$

since $\frac{\partial J^{*}}{\partial t}(\bar{x}^{*}(t), t)$, $\frac{\partial J^{*}}{\partial x}(\bar{x}^{*}(t), t)\,x^{*(1)}(t)$, $\ldots$, $\frac{\partial J^{*}}{\partial x^{(p-2)}}(\bar{x}^{*}(t), t)\,x^{*(p-1)}(t)$ are independent of $u(t)$. In words, for a given $(\bar{x}^{*}(t), t)$, the control $u^{*}(t)$ minimizes the right-hand side of Eq. (2.49), and the minimum is zero. Hence, if we define a function

$$v(\bar{x}(t), u^{*}(t), t) = \frac{\partial J^{*}}{\partial t}(\bar{x}(t), t) + \frac{\partial J^{*}}{\partial x}(\bar{x}(t), t)\,x^{(1)}(t) + \ldots + \frac{\partial J^{*}}{\partial x^{(p-2)}}(\bar{x}(t), t)\,x^{(p-1)}(t) + H\!\left(\bar{x}(t), u^{*}(t), \frac{\partial J^{*}}{\partial x^{(p-1)}}, t\right), \qquad (2.50)$$

then in the neighborhood of $\bar{x}^{*}(t)$, i.e., $\bar{x}(t) = \bar{x}^{*}(t) + \delta\bar{x}(t)$, this function has a local minimum at $\bar{x}^{*}(t)$. If $\bar{x}(t)$ is not constrained by any boundaries, this local minimum property can be written mathematically as $\frac{\partial v}{\partial \bar{x}}(\bar{x}^{*}(t), u^{*}(t), t) = 0$. Assuming that the mixed partial derivatives are continuous, the order of differentiation in a mixed partial can be interchanged. With this property, and recalling $\bar{x}(t) = (x(t)^{T}\ x^{(1)}(t)^{T}\ \ldots\ x^{(p-1)}(t)^{T})$, $\frac{\partial v}{\partial \bar{x}}(\bar{x}^{*}(t), u^{*}(t), t) = 0$ simplifies to the following component equations:

$$\frac{\partial^{2} J^{*}}{\partial x^{(k)}\partial t} + \frac{\partial^{2} J^{*}}{\partial x^{(k)}\partial x}\,x^{(1)} + \ldots + \frac{\partial^{2} J^{*}}{\partial x^{(k)}\partial x^{(p-2)}}\,x^{(p-1)} + \frac{\partial^{2} J^{*}}{\partial x^{(k)}\partial x^{(p-1)}}\,x^{(p)} + \frac{\partial J^{*}}{\partial x^{(k-1)}} + \frac{\partial L}{\partial x^{(k)}} + \frac{\partial f}{\partial x^{(k)}}\,\frac{\partial J^{*}}{\partial x^{(p-1)}} = 0, \quad k = 0, \ldots, p-1, \qquad (2.51)$$

evaluated at $(\bar{x}^{*}(t), u^{*}(t), t)$. Here, a term such as $\frac{\partial^{2} J^{*}}{\partial x^{(k)}\partial x^{(p-2)}}$ denotes an $(n \times n)$ matrix whose $rs$ element is $\frac{\partial^{2} J^{*}}{\partial x^{(k)}_{r}\partial x^{(p-2)}_{s}}$. Using the definition of the total time derivative of the function $\frac{\partial J^{*}}{\partial x^{(k)}}$, the above equation simplifies to

$$\frac{d}{dt}\left(\frac{\partial J^{*}}{\partial x^{(k)}}\right)^{T} + \frac{\partial J^{*}}{\partial x^{(k-1)}} + \frac{\partial L}{\partial x^{(k)}} + \frac{\partial f}{\partial x^{(k)}}\,\frac{\partial J^{*}}{\partial x^{(p-1)}} = 0, \quad k = 0, \ldots, p-1. \qquad (2.52)$$


On defining $\psi_{k}(t) = \frac{\partial J^{*}}{\partial x^{(k-1)}}(\bar{x}^{*}(t), t)$, Eq. (2.52) can be written as

$$\psi^{(1)}_{k+1}(t) + \psi_{k}(t) + \frac{\partial H}{\partial x^{(k)}}(\bar{x}^{*}(t), u^{*}(t), t) = 0, \quad k = 0, \ldots, p-1. \qquad (2.53)$$

Note that the $\psi_{k}$ are defined for the indices 1 through $p$. In summary, each $(\bar{x}^{*}(t), t)$ on the optimal path satisfies Eq. (2.53). The components of this equation are

$$\psi^{(1)}_{p}(t) + \psi_{p-1}(t) + \frac{\partial H}{\partial x^{(p-1)}}(\bar{x}^{*}(t), u^{*}(t), t) = 0,$$
$$\psi^{(1)}_{p-1}(t) + \psi_{p-2}(t) + \frac{\partial H}{\partial x^{(p-2)}}(\bar{x}^{*}(t), u^{*}(t), t) = 0,$$
$$\vdots$$
$$\psi^{(1)}_{2}(t) + \psi_{1}(t) + \frac{\partial H}{\partial x^{(1)}}(\bar{x}^{*}(t), u^{*}(t), t) = 0,$$
$$\psi^{(1)}_{1}(t) + \frac{\partial H}{\partial x}(\bar{x}^{*}(t), u^{*}(t), t) = 0. \qquad (2.54)$$

Using these equations, it is possible to eliminate $\psi_{p-1}(t)$ through $\psi_{1}(t)$. The resulting differential equation is

$$\frac{\partial H}{\partial x} - \left(\frac{\partial H}{\partial x^{(1)}}\right)^{(1)} + \ldots + (-1)^{p-1}\left(\frac{\partial H}{\partial x^{(p-1)}}\right)^{(p-1)} = (-1)^{p}\psi^{(p)}_{p}, \qquad (2.55)$$

which must hold at each point $(\bar{x}^{*}(t), t)$ of the optimal solution.

The transition from Eq. (2.54) to Eq. (2.55) presupposes the existence of $\left(\frac{\partial H}{\partial x^{(1)}}\right)^{(1)}, \left(\frac{\partial H}{\partial x^{(2)}}\right)^{(2)}, \ldots, \left(\frac{\partial H}{\partial x^{(p-1)}}\right)^{(p-1)}$. While deriving these results using the calculus of variations, this additional hypothesis is not needed; this was discussed in more detail in Section 2.2.2. The solution of this differential equation requires the boundary conditions $\psi_{p}(t_f), \ldots, \psi^{(p-1)}_{p}(t_f)$. Using Eq. (2.48) and the definition of $\psi_{k}$,

$$\psi_{k}(\bar{x}^{*}(t_f), t_f) = \frac{\partial \phi}{\partial x^{(k-1)}}(\bar{x}^{*}(t_f), t_f), \quad k = 1, \ldots, p. \qquad (2.56)$$

From Eqs. (2.54), it can be shown that

$$\psi_{p-k} = (-1)^{k}\psi^{(k)}_{p} + (-1)^{k}\left(\frac{\partial H}{\partial x^{(p-1)}}\right)^{(k-1)} + (-1)^{k-1}\left(\frac{\partial H}{\partial x^{(p-2)}}\right)^{(k-2)} + \ldots + (-1)\frac{\partial H}{\partial x^{(p-k)}}, \quad k = 0, \ldots, p-1. \qquad (2.57)$$


Given $\psi_{p-k}(t_f)$ from Eq. (2.56) and the expression for the Hamiltonian, Eq. (2.57) can be used to compute the boundary values $\psi^{(k)}_{p}(t_f)$.

In summary, for a dynamic system described by the differential Eqs. (2.1) with boundary conditions $x(0), x^{(1)}(0), \ldots, x^{(p-1)}(0)$, the optimal trajectories that minimize the cost functional (2.2) with $u(t) \in U$ satisfy the following conditions: (i) define a Hamiltonian $H(\bar{x}(t), u(t), t) = L(\bar{x}(t), u(t), t) + \psi^{T}_{p}(t)\,f(\bar{x}(t), u(t), t)$; (ii) find the minimum of the Hamiltonian, $H(\bar{x}(t), u^{*}(t), t)$, within the set $u(t) \in U$; (iii) find the $\psi_{p}(t)$ that satisfy the differential Eq. (2.55) along with the boundary conditions on $\psi^{(k)}_{p}(t_f)$ computed from Eq. (2.57). When $p = 1$, these results simplify to those of the classical minimum principle [18].

2.3.1.2 Constraint Set A(t, x)

In this section, the constraint set for $u(t)$ is both time and state dependent, i.e., $u(t) \in A(t, \bar{x})$. In the derivation of the Hamilton-Jacobi equations, the steps are the same as in the previous section, with Eq. (2.46) replaced in the following way:

$$H\!\left(\bar{x}(t), u^{*}\!\left(\bar{x}(t), \frac{\partial J^{*}}{\partial x^{(p-1)}}, t\right), \frac{\partial J^{*}}{\partial x^{(p-1)}}, t\right) = \min_{u(t) \in A(t, \bar{x})} H\!\left(\bar{x}(t), u(t), \frac{\partial J^{*}}{\partial x^{(p-1)}}, t\right). \qquad (2.58)$$

In the derivation of the extended Pontryagin's principle, the steps remain essentially the same; $\bar{x}^{*}(t)$ now minimizes $v(\bar{x}(t), u^{*}(t), t)$ subject to the constraint set $u(t) \in A(t, \bar{x})$. If the constraints are written mathematically as

$$c_{j}(t, \bar{x}, u) \le 0, \quad j = 1, \ldots, r, \qquad (2.59)$$

and

$$g_{i}(t, \bar{x}, u) = 0, \quad i = 1, \ldots, r, \qquad (2.60)$$

then $\bar{x}^{*}(t)$ minimizes $\bar{v}(\bar{x}(t), u^{*}(t), t)$, where

$$\bar{v}(\bar{x}(t), u^{*}(t), t) = v(\bar{x}(t), u^{*}(t), t) + \sum_{i=1}^{r}\nu_{i}g_{i} + \sum_{j=1}^{r}\mu_{j}\left(c_{j}(t, \bar{x}, u^{*}) + s_{j}^{2}\right), \qquad (2.61)$$


where the $s_{j}$ are introduced as slack variables. This local minimum property can be written mathematically as $\frac{\partial \bar{v}}{\partial \bar{x}}(\bar{x}^{*}(t), u^{*}(t), t) = 0$. Using a line of argument similar to that of the previous section and defining a modified Hamiltonian $\bar{H} = H + \sum_{i=1}^{r}\nu_{i}g_{i} + \sum_{j=1}^{r}\mu_{j}c_{j}(t, \bar{x}, u^{*})$, one can show that

$$\psi^{(1)}_{k+1}(t) + \psi_{k}(t) + \frac{\partial \bar{H}}{\partial x^{(k)}}(\bar{x}^{*}(t), u^{*}(t), t) = 0, \quad k = 0, \ldots, p-1, \qquad (2.62)$$

and $\mu_{j}s_{j} = \mu_{j}c_{j}(t, \bar{x}^{*}, u^{*}) = 0$, $j = 1, \ldots, r$. This second condition says that if the $j$th constraint is active, $\mu_{j} \ge 0$; otherwise $\mu_{j} = 0$. The costate Eqs. (2.55) now get modified to

$$\frac{\partial \bar{H}}{\partial x} - \left(\frac{\partial \bar{H}}{\partial x^{(1)}}\right)^{(1)} + \ldots + (-1)^{p-1}\left(\frac{\partial \bar{H}}{\partial x^{(p-1)}}\right)^{(p-1)} = (-1)^{p}\psi^{(p)}_{p}, \qquad (2.63)$$

which must hold at each point $(\bar{x}^{*}(t), t)$ of the optimal solution. The solution for $\psi_{p-k}$ gets modified to

$$\psi_{p-k} = (-1)^{k}\psi^{(k)}_{p} + (-1)^{k}\left(\frac{\partial \bar{H}}{\partial x^{(p-1)}}\right)^{(k-1)} + (-1)^{k-1}\left(\frac{\partial \bar{H}}{\partial x^{(p-2)}}\right)^{(k-2)} + \ldots + (-1)\frac{\partial \bar{H}}{\partial x^{(p-k)}}, \quad k = 0, \ldots, p-1. \qquad (2.64)$$

Since the optimal solutions must satisfy the partial differential equation (2.47), which is the extended form of the Hamilton-Jacobi equation, it must also be satisfied at the boundary. Using Eq. (2.48) and Eq. (2.64), this partial differential equation can be rewritten as

$$\left[(-1)^{p-1}\psi^{(p-1)}_{p} + (-1)^{p-1}\left(\frac{\partial \bar{H}}{\partial x^{(p-1)}}\right)^{(p-2)} + \ldots + (-1)\frac{\partial \bar{H}}{\partial x^{(1)}}\right]x^{(1)}(t_f) + \ldots + \left[(-1)\psi^{(1)}_{p} + (-1)\frac{\partial \bar{H}}{\partial x^{(p-1)}}\right]x^{(p-1)}(t_f) + \frac{\partial \phi}{\partial t}(\bar{x}(t_f), t_f) + \bar{H}\!\left(\bar{x}(t_f), u^{*}\!\left(\bar{x}(t_f), \psi^{T}_{p}(t_f), t_f\right), \psi^{T}_{p}(t_f), t_f\right) = 0. \qquad (2.65)$$

Also, by the use of Eq. (2.48), Eq. (2.64) can be applied at the boundary as

$$\left.\frac{\partial \phi}{\partial x}\right|_{t_f} = \left[(-1)^{p-1}\psi^{(p-1)}_{p} + (-1)^{p-1}\left(\frac{\partial \bar{H}}{\partial x^{(p-1)}}\right)^{(p-2)} + \ldots + (-1)\frac{\partial \bar{H}}{\partial x^{(1)}}\right]_{t_f},$$
$$\left.\frac{\partial \phi}{\partial x^{(1)}}\right|_{t_f} = \left[(-1)^{p-2}\psi^{(p-2)}_{p} + (-1)^{p-2}\left(\frac{\partial \bar{H}}{\partial x^{(p-1)}}\right)^{(p-3)} + \ldots + (-1)\frac{\partial \bar{H}}{\partial x^{(2)}}\right]_{t_f},$$
$$\vdots$$
$$\left.\frac{\partial \phi}{\partial x^{(p-1)}}\right|_{t_f} = \psi_{p}\big|_{t_f}. \qquad (2.66)$$

In summary, the optimal trajectories for a higher-order system with constraints $u(t) \in A(t, \bar{x})$ satisfy Eqs. (2.1), (2.58), (2.59), (2.60), (2.63), (2.65), and (2.66), along with the condition that $\mu_{j} \ge 0$ if the $j$th constraint is active and $\mu_{j} = 0$ otherwise.

We can now conclude that the calculus of variations and the minimum principle yield the same necessary conditions for an extremum. First, we consider the same dynamic equations of motion (2.1). Then, using the fact that $\bar{L} = \bar{H} - \lambda^{T}x^{(p)}$ and the equivalence between $\lambda^{T}$ and $\psi_{p}$, it is evident that the necessary conditions for optimality, Eqs. (2.25), (2.3), (2.28), (2.31), and (2.32), derived from the calculus of variations are the same as Eqs. (2.63), (2.59), (2.60), (2.65), and (2.66), respectively. One interpretation of Eq. (2.26) is that on the optimal trajectory, $u(t)$ pointwise minimizes $\bar{H}$. An equivalent statement is that $u(t)$ pointwise minimizes $H$ subject to the constraints (2.59) and (2.60). Hence, $u^{*}(t)$ of Eq. (2.26) also satisfies a property identical to Eq. (2.58).

However, there is one essential distinction between the necessary conditions (2.26) and (2.58): in general, Eq. (2.26) gives the switching structure in an explicit form, while Eq. (2.58) gives it in an implicit form.

2.4 First Integrals

The necessary conditions for optimization using the calculus of variations follow by setting $\delta J = 0$. This provides sets of differential equations to be satisfied by the problem, together with appropriate boundary conditions. From the necessary conditions, it is clear that one of the governing differential equations is

$$\frac{\partial L}{\partial x} - \left(\frac{\partial L}{\partial x^{(1)}}\right)^{(1)} + \ldots + (-1)^{p}\left(\frac{\partial L}{\partial x^{(p)}}\right)^{(p)} = 0. \qquad (2.67)$$

In general, this is a $2p$th-order differential equation. It is the extended Euler-Lagrange equation, and for $p = 1$ it reduces to the familiar form. The differential equation (2.67) admits a number of first integrals, depending on the structure of the integrand $L$.

• If $L$ does not explicitly contain $x$, i.e., $\frac{\partial L}{\partial x} = 0$, Eq. (2.67) becomes

$$\frac{d}{dt}\left[\frac{\partial L}{\partial x^{(1)}} - \left(\frac{\partial L}{\partial x^{(2)}}\right)^{(1)} + \ldots + (-1)^{p-1}\left(\frac{\partial L}{\partial x^{(p)}}\right)^{(p-1)}\right] = 0. \qquad (2.68)$$

Hence, the optimal solution admits $n$ first integrals $\frac{\partial L}{\partial x^{(1)}} - \left(\frac{\partial L}{\partial x^{(2)}}\right)^{(1)} + \ldots + (-1)^{p-1}\left(\frac{\partial L}{\partial x^{(p)}}\right)^{(p-1)} = K$. With similar reasoning, if $L$ does not explicitly contain an element of $x$, say $x_{i}$, there is correspondingly a single first integral for that $x_{i}$.

• If $L$ does not explicitly depend on $x$ and $x^{(1)}$, one can write Eq. (2.67) as

$$\frac{d^{2}}{dt^{2}}\left[\frac{\partial L}{\partial x^{(2)}} - \left(\frac{\partial L}{\partial x^{(3)}}\right)^{(1)} + \ldots + (-1)^{p-2}\left(\frac{\partial L}{\partial x^{(p)}}\right)^{(p-2)}\right] = 0. \qquad (2.69)$$

Hence, the optimal solution admits the integral $\frac{\partial L}{\partial x^{(2)}} - \left(\frac{\partial L}{\partial x^{(3)}}\right)^{(1)} + \ldots + (-1)^{p-2}\left(\frac{\partial L}{\partial x^{(p)}}\right)^{(p-2)} = K$. This argument can be extended to obtain other first integrals, depending on which elements of $x$ and which of their higher derivatives appear in the integrand.

• If $L$ does not explicitly contain $t$,

$$L - \left[\frac{\partial L}{\partial x^{(1)}} - \left(\frac{\partial L}{\partial x^{(2)}}\right)^{(1)} + \ldots + (-1)^{p-1}\left(\frac{\partial L}{\partial x^{(p)}}\right)^{(p-1)}\right]x^{(1)} - \left[\frac{\partial L}{\partial x^{(2)}} - \left(\frac{\partial L}{\partial x^{(3)}}\right)^{(1)} + \ldots + (-1)^{p-2}\left(\frac{\partial L}{\partial x^{(p)}}\right)^{(p-2)}\right]x^{(2)} - \ldots - \frac{\partial L}{\partial x^{(p)}}\,x^{(p)} = K. \qquad (2.70)$$


This property can be verified by differentiating the left- and right-hand sides with respect to time. For open-end-time problems where $\phi$ does not explicitly depend on $t$, i.e., $\frac{\partial \phi}{\partial t} = 0$, the constant is $K = 0$. For $p = 1$, if $L$ is independent of time, we arrive at the familiar result $L - x^{(1)T}\frac{\partial L}{\partial x^{(1)}} = K$.

For the system under consideration, $\bar{L} = \bar{H} - \lambda^{T}x^{(p)}$ and $\bar{H} = L + \lambda^{T}f + \mu^{T}c$. If $L$, $f$, and $c$ are not explicit functions of time, then according to the above discussion the solution has a first integral given by Eq. (2.70). On simplifying, it can be shown that this equation reduces to

$$\bar{H} - \left[\frac{\partial \bar{H}}{\partial x^{(1)}} - \left(\frac{\partial \bar{H}}{\partial x^{(2)}}\right)^{(1)} + \ldots + (-1)^{p-2}\left(\frac{\partial \bar{H}}{\partial x^{(p-1)}}\right)^{(p-2)} + (-1)^{p}\lambda^{(p-1)T}\right]x^{(1)} - \left[\frac{\partial \bar{H}}{\partial x^{(2)}} - \left(\frac{\partial \bar{H}}{\partial x^{(3)}}\right)^{(1)} + \ldots + (-1)^{p-3}\left(\frac{\partial \bar{H}}{\partial x^{(p-1)}}\right)^{(p-3)} + (-1)^{p-1}\lambda^{(p-2)T}\right]x^{(2)} - \ldots - \left[\frac{\partial \bar{H}}{\partial x^{(p-1)}} + \lambda^{(1)T}\right]x^{(p-1)} = K. \qquad (2.71)$$

With the equivalence between $\lambda^{T}$ and $\psi_{p}$ of Section 2.3.1, one can rewrite Eq. (2.71) using Eq. (2.64) as

$$\bar{H} + \psi_{1}x^{(1)} + \psi_{2}x^{(2)} + \ldots + \psi_{p-1}x^{(p-1)} = K. \qquad (2.72)$$

Eq. (2.72) must have an equivalent statement for systems described by first-order differential equations.

First-Order Representation

In this section, we show the equivalence of the first integral between the classical first-order representation and the higher-order representation. To distinguish the classical Hamiltonian of Chapter 1 from the extended Hamiltonian of this chapter, let us denote the classical Hamiltonian by $H_{c}$.


A simple way to express the higher-order system by first-order differential equations is through the extended vector $\bar{x}(t) = (x(t)^{T}\ x^{(1)}(t)^{T}\ \ldots\ x^{(p-1)}(t)^{T})$ introduced earlier in this chapter. This vector has dimension $(np \times 1)$. Let the individual $(n \times 1)$ subvectors of $\bar{x}(t)$ be renamed as $x_{1}(t) = x(t)$, $x_{2}(t) = x^{(1)}(t)$, $\ldots$, $x_{p}(t) = x^{(p-1)}(t)$. With this definition,

$$x_{1}^{(1)}(t) = x_{2}(t),$$
$$x_{2}^{(1)}(t) = x_{3}(t),$$
$$\vdots$$
$$x_{p-1}^{(1)}(t) = x_{p}(t),$$
$$x_{p}^{(1)}(t) = f(x_{1}(t), x_{2}(t), \ldots, x_{p}(t), u(t), t). \qquad (2.73)$$

For the system under consideration, $\bar{L} = \bar{H} + \lambda^{T}_{1}(x_{2} - x^{(1)}_{1}) + \ldots + \lambda^{T}_{p-1}(x_{p} - x^{(1)}_{p-1}) - \lambda^{T}_{p}x^{(1)}_{p}$, where $\bar{H} = L + \lambda^{T}_{p}f + \mu^{T}c$. If $L$ is not an explicit function of time, the solution has a first integral given by Eq. (2.70) simplified for $p = 1$. This condition is

$$\bar{L} - \sum_{i=1}^{p}(x_{i})^{(1)T}\frac{\partial \bar{L}}{\partial (x_{i})^{(1)}} = K. \qquad (2.74)$$

By evaluation, it can be shown to be equivalent to

$$\bar{H} + \lambda^{T}_{1}x_{2} + \ldots + \lambda^{T}_{p-1}x_{p} = K. \qquad (2.75)$$

This result is consistent with Eq. (2.72). Further, it is also equivalent to the statement $\bar{H}_{c} = K$, as shown in Eq. (1.50), consistent with the classical first-order result, where $\bar{H}_{c} = H_{c} + \sum_{j=1}^{r}\mu_{j}c_{j}(t, x_{1}, \ldots, x_{p}, u)$.

2.5 Example

From Section 2.3.1, it is clear that there is a complete equivalence between the optimization results whether they are derived using the higher-order or the first-order form. This equivalence holds for all systems, linear or nonlinear, as long as the constraints are consistent with the class proposed in this chapter. In this section, we provide an example of a nonlinear system that can be written in both the first-order and the higher-order form.

The example is a single-link manipulator rotating in a vertical plane, driven through a flexible drive train [38] and shown in Fig. 2.1. The system has two degrees of freedom, and the equations of motion are (2.5) and (2.6). Let the objective be to steer the system from a given set of initial conditions on $q_{1}$, $q_{2}$, $q^{(1)}_{1}$, and $q^{(1)}_{2}$ at $t_{0}$ to an unspecified goal point while minimizing the cost $J = \int_{t_0}^{t_f}u^{2}\,dt$. The trajectory must satisfy the constraint $-1 \le u \le 1$ during the motion. We have obtained a single fourth-order differential equation (2.7) in the variable $q_{1}$, involving $q_{1}$ up to its fourth derivative.

The fourth-order differential equation (2.7) can also be written in the first-order form

$$x^{(1)}_{1} = x_{2},$$
$$x^{(1)}_{2} = x_{3},$$
$$x^{(1)}_{3} = x_{4},$$
$$x^{(1)}_{4} = \alpha_{1}u + (\alpha_{2}\cos x_{1} + \alpha_{3})x_{3} + \alpha_{4}\sin x_{1}\, x_{2}^{2} + \alpha_{5}\sin x_{1}, \qquad (2.76)$$

where x1 = q1. This differential equation has the structure of Eq. (2.1) with n = 4

and m = 1.
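The structure of Eq. (2.76) maps directly onto a state-space right-hand side that a standard ODE integrator can consume. The sketch below simulates that first-order form; the numeric values of $\alpha_1$-$\alpha_5$ are hypothetical placeholders (the text defines them through Eqs. (2.5)-(2.7)), and the RK4 stepper is a generic choice, not the computational method of this thesis.

```python
import math

# First-order form (2.76) of the fourth-order manipulator equation, as a
# state-space right-hand side. The alpha values are hypothetical placeholders.
a1, a2, a3, a4, a5 = 1.0, 0.5, -2.0, 0.3, -9.8

def f(x, u):
    x1, x2, x3, x4 = x
    return [x2,
            x3,
            x4,
            a1*u + (a2*math.cos(x1) + a3)*x3 + a4*math.sin(x1)*x2**2 + a5*math.sin(x1)]

def rk4_step(x, u, dt):
    """One classical Runge-Kutta step for x' = f(x, u), with u held constant."""
    k1 = f(x, u)
    k2 = f([xi + 0.5*dt*ki for xi, ki in zip(x, k1)], u)
    k3 = f([xi + 0.5*dt*ki for xi, ki in zip(x, k2)], u)
    k4 = f([xi + dt*ki for xi, ki in zip(x, k3)], u)
    return [xi + dt/6.0*(p + 2*q + 2*r + s)
            for xi, p, q, r, s in zip(x, k1, k2, k3, k4)]

x = [0.1, 0.0, 0.0, 0.0]          # small initial angle q1, all derivatives zero
for _ in range(100):
    x = rk4_step(x, 0.0, 0.01)    # simulate 1 s with u = 0
```

Note that the state $x = (x_1, x_2, x_3, x_4)$ carries $q_1$ and its first three derivatives, exactly the renaming used in Eq. (2.73).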

Eqs. (2.5) and (2.6) are both second-order differential equations. On solving

for q(2)1 and q(2)2 from these, one obtains equations in the form of Eq. (2.1) with n = 2

and m = 1. The fourth possibility is that Eqs. (2.5) and (2.6) are each reduced to

two first-order equations, thereby producing a form of Eq. (2.1) with n = 4 and

m = 1.

The optimization of this problem could be addressed using any of the four alternative descriptions of the system. For brevity, we will only show the equivalence of the results for the first two forms, i.e., the fourth-order form of Eq. (2.7) and the first-order form of Eq. (2.76).

2.5.1 Fourth-Order Form

Using the fourth-order form, $\bar{H} = H = u^{2} + \lambda\left[\alpha_{1}u + (\alpha_{2}\cos q_{1} + \alpha_{3})q^{(2)}_{1} + \alpha_{4}\sin q_{1}\,(q^{(1)}_{1})^{2} + \alpha_{5}\sin q_{1}\right]$. The feasible control is defined by the set $U = [-1, 1]$. From the extended form of Pontryagin's principle, we need to select $u$ such that $H$ is minimized. Since $u^{2} + \lambda\alpha_{1}u$ is the term in $H$ that depends on $u$, and it can be written as $\left(u + \frac{\lambda\alpha_{1}}{2}\right)^{2} - \left(\frac{\lambda\alpha_{1}}{2}\right)^{2}$, it is evident that $u^{*}$, the minimizer within $U$, satisfies the following switching structure:

$$u^{*} = \begin{cases} 1, & \alpha_{1}\lambda < -2, \\ -\lambda\alpha_{1}/2, & -2 \le \alpha_{1}\lambda \le 2, \\ -1, & \alpha_{1}\lambda > 2. \end{cases} \qquad (2.77)$$
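The switching structure (2.77) is simply the unconstrained minimizer of the $u$-dependent part of $H$, clamped to $U = [-1, 1]$; a one-function sketch:

```python
# Switching structure of Eq. (2.77): minimizing u^2 + (alpha_1*lambda)*u over
# U = [-1, 1] clamps the unconstrained minimizer -alpha_1*lambda/2 to the box.

def u_star(alpha1_lambda):
    """Optimal control as a function of the product alpha_1 * lambda."""
    if alpha1_lambda < -2.0:
        return 1.0                 # unconstrained minimizer exceeds +1: saturate high
    if alpha1_lambda > 2.0:
        return -1.0                # unconstrained minimizer below -1: saturate low
    return -alpha1_lambda / 2.0    # interior arc
```

The same function, with $\alpha_1\lambda$ replaced by $\alpha_1\lambda_4$, realizes the first-order switching structure (2.80) below.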

On evaluating Eq. (2.55), the following fourth-order differential equation in $\lambda$ results:

$$\lambda^{(4)} - \lambda^{(2)}(\alpha_{3} + \alpha_{2}\cos q_{1}) - \lambda\alpha_{5}\cos q_{1} = 0. \qquad (2.78)$$

In summary, the optimal solution is characterized by the fourth-order differential Eq. (2.7), the fourth-order Lagrange multiplier Eq. (2.78), and the switching structure of Eq. (2.77).

2.5.2 First-Order Form

One can either work with the classical definition of the augmented Hamiltonian $H_{c}$ and use Eq. (1.43) to determine the Lagrange multiplier equations, or use $H$ with Eq. (2.55) to determine the Lagrange multipliers. The two expressions are $H = u^{2} + \lambda_{4}\left[\alpha_{1}u + (\alpha_{2}\cos x_{1} + \alpha_{3})x_{3} + \alpha_{4}\sin x_{1}\,x_{2}^{2} + \alpha_{5}\sin x_{1}\right]$ and $H_{c} = u^{2} + \lambda_{1}x_{2} + \lambda_{2}x_{3} + \lambda_{3}x_{4} + \lambda_{4}\left[\alpha_{1}u + (\alpha_{2}\cos x_{1} + \alpha_{3})x_{3} + \alpha_{4}\sin x_{1}\,x_{2}^{2} + \alpha_{5}\sin x_{1}\right]$. It can be shown that the Lagrange multiplier equations in both cases are

$$\lambda^{(4)}_{4} - \lambda^{(2)}_{4}(\alpha_{3} + \alpha_{2}\cos x_{1}) - \lambda_{4}\alpha_{5}\cos x_{1} = 0. \qquad (2.79)$$


With either expression of the Hamiltonian, it is clear that $u^{*}$, the minimizer within $U$, satisfies the following switching structure:

$$u^{*} = \begin{cases} 1, & \alpha_{1}\lambda_{4} < -2, \\ -\lambda_{4}\alpha_{1}/2, & -2 \le \alpha_{1}\lambda_{4} \le 2, \\ -1, & \alpha_{1}\lambda_{4} > 2. \end{cases} \qquad (2.80)$$

As expected, Eqs. (2.78) and (2.79) are identical as long as $x_{1}$ is interpreted as $q_{1}$ and $\lambda$ is interpreted as $\lambda_{4}$. Similarly, Eqs. (2.77) and (2.80) are identical.

2.6 Summary

In this chapter, dynamic optimization of systems described by higher-order differential equations was addressed. The minimum principle for higher-order systems was derived directly from their higher-order forms, and the results were confirmed by classical theory using first-order forms. The optimality conditions were derived both through Hamilton-Jacobi theory, thereby extending Pontryagin's theory, and through variational calculus. It was shown that the different approaches lead to the same results. The result applicable to higher-order systems was illustrated by an example.


Chapter 3

LINEAR SYSTEMS WITH QUADRATIC CRITERIA:

LINEAR FEEDBACK TERMINAL CONTROLLERS

In the previous chapter, the optimality principles were derived for dynamic systems described by higher-order differential equations. This chapter extends the continuous feedback law, which is well known for first-order linear systems, to general higher-order linear systems. Further, we compare the algorithms of the first-order and higher-order approaches.

3.1 Feedback Law for Higher-order Linear Systems: Finite-time Regulator Problem

For first-order linear dynamic systems with open end states and fixed final time, the feedback law was developed in Chapter 1. In this chapter, the procedure is extended to higher-order systems. In order to develop the "exact" feedback law, the same problem is posed in the higher-order form instead of the first-order form. The statement of the problem is to determine a feedback law for $u(t)$ that takes a system from an initial state $x, x^{(1)}, \ldots, x^{(p-1)}$ at the initial time $t_{0}$ to an unspecified point at the final time $t_{f}$. The feedback between $t_{0}$ and $t_{f}$ must minimize a quadratic cost functional

$$J = \frac{1}{2}\bar{x}^{T}\phi\bar{x}\Big|_{t_f} + \frac{1}{2}\int_{t_0}^{t_f}\left(\bar{x}^{T}Q\bar{x} + u^{T}Ru\right)dt, \qquad (3.1)$$


where $\bar{x} = \left(x^{T}\ x^{(1)T}\ \ldots\ x^{(p-1)T}\right)^{T}$, $\phi$ is an $(np \times np)$ constant symmetric nonnegative definite matrix, $R$ is an $(m \times m)$ symmetric positive definite matrix, and $Q$ is an $(np \times np)$ symmetric positive semi-definite matrix defined as

$$Q = \begin{pmatrix} Q_{xx} & Q_{xx^{(1)}} & \cdots & Q_{xx^{(p-1)}} \\ (Q_{xx^{(1)}})^{T} & Q_{x^{(1)}x^{(1)}} & \cdots & Q_{x^{(1)}x^{(p-1)}} \\ \vdots & \vdots & \ddots & \vdots \\ (Q_{xx^{(p-1)}})^{T} & (Q_{x^{(1)}x^{(p-1)}})^{T} & \cdots & Q_{x^{(p-1)}x^{(p-1)}} \end{pmatrix}.$$
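Such an $(np \times np)$ weight matrix can be assembled programmatically from its $(n \times n)$ blocks. A pure-Python sketch with $p = 2$, $n = 2$ and made-up numeric blocks (the helper names `block_matrix` and `transpose` are illustrative, not from the text):

```python
# Assembling the (np x np) matrix Q of Eq. (3.1) from its n x n blocks,
# here p = 2, n = 2, with made-up numbers (plain list-of-lists matrices).

def block_matrix(blocks):
    """blocks[i][j] is an n x n list-of-lists; returns the stacked matrix."""
    rows = []
    for brow in blocks:
        for k in range(len(brow[0])):                  # row index inside a block row
            rows.append([v for blk in brow for v in blk[k]])
    return rows

def transpose(m):
    return [list(col) for col in zip(*m)]

Qxx   = [[2.0, 0.0], [0.0, 2.0]]     # Q_{xx}
Qxx1  = [[0.5, 0.0], [0.0, 0.5]]     # coupling block Q_{x x^(1)}
Qx1x1 = [[1.0, 0.0], [0.0, 1.0]]     # Q_{x^(1) x^(1)}
Q = block_matrix([[Qxx, Qxx1],
                  [transpose(Qxx1), Qx1x1]])
```

Placing the transposed coupling block below the diagonal, as in the definition above, is what makes the assembled Q symmetric.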

The linear system is described by the following state equation:

$$x^{(p)}(t) = A_{p-1}(t)x^{(p-1)}(t) + \ldots + A_{1}(t)x^{(1)}(t) + A_{0}(t)x(t) + B(t)u(t), \qquad (3.2)$$

where $A_{p-1}(t), \ldots, A_{1}(t), A_{0}(t)$ are $(n \times n)$ time-varying matrices; $x(t), x^{(1)}(t), \ldots, x^{(p-1)}(t)$ are $(n \times 1)$ state vectors; $B(t)$ is an $(n \times m)$ matrix; and $u(t)$ is an $(m \times 1)$ control vector.

In order to solve this problem, the Hamilton-Jacobi equations derived in the previous chapter are used. First, the optimal performance index $J^{*}(\bar{x}(t), t)$, if it exists, must be of the form $\frac{1}{2}\bar{x}^{T}\bar{P}(t)\bar{x}$, where $\bar{P}(t)$ is a symmetric positive definite matrix defined as follows:

$$\bar{P}(t) = \begin{pmatrix} P_{xx} & P_{xx^{(1)}} & \cdots & P_{xx^{(p-1)}} \\ (P_{xx^{(1)}})^{T} & P_{x^{(1)}x^{(1)}} & \cdots & P_{x^{(1)}x^{(p-1)}} \\ \vdots & \vdots & \ddots & \vdots \\ (P_{xx^{(p-1)}})^{T} & (P_{x^{(1)}x^{(p-1)}})^{T} & \cdots & P_{x^{(p-1)}x^{(p-1)}} \end{pmatrix}_{(np \times np)}.$$

The proof is provided in Appendix C.

3.1.1 Matrix Riccati Equation for the Higher-order Systems

In this section, the extended Hamilton-Jacobi equation (2.47) is used to show that the Riccati equations can be extended to higher-order systems. These results can also be verified against the classical Riccati equations when $p = 1$.

First, the general form of the Hamilton-Jacobi equation for this specific problem is as follows:

$$\frac{\partial J^{*}}{\partial t}(\bar{x}(t), t) + \frac{\partial J^{*}}{\partial x}(\bar{x}(t), t)\,x^{(1)}(t) + \ldots + \frac{\partial J^{*}}{\partial x^{(p-2)}}(\bar{x}(t), t)\,x^{(p-1)}(t) + \min_{u(t) \in U}\left[L(\bar{x}(t), u(t), t) + \frac{\partial J^{*}}{\partial x^{(p-1)}}(\bar{x}(t), t)\,f(\bar{x}(t), u(t), t)\right] = 0, \qquad (3.3)$$

where $L(\bar{x}(t), u(t), t)$ is $\bar{x}^{T}Q\bar{x} + u^{T}Ru$, and $f(\bar{x}(t), u(t), t)$ is the right-hand-side expression of Eq. (3.2). Using the form $J^{*}(\bar{x}(t), t) = \bar{x}^{T}\bar{P}(t)\bar{x}$, Eq. (3.3) can be rewritten as

$$\bar{x}^{T}\bar{P}^{(1)}\bar{x} + 2\bar{x}^{T}\bar{P}_{0}x^{(1)}(t) + \ldots + 2\bar{x}^{T}\bar{P}_{p-2}x^{(p-1)}(t) + \min_{u(t) \in U}\left[\bar{x}^{T}Q\bar{x} + u^{T}Ru + 2\bar{x}^{T}\bar{P}_{p-1}\bar{A}\bar{x} + 2\bar{x}^{T}\bar{P}_{p-1}Bu\right] = 0, \qquad (3.4)$$

where $\bar{P}_{i}$ is the $(i+1)$th block column of the matrix $\bar{P}(t)$ and $\bar{A} = [A_{0}\ A_{1}\ \ldots\ A_{p-2}\ A_{p-1}]$. To determine the minimum of the expression within the brackets in Eq. (3.4), we complete the square:

$$\bar{x}^{T}Q\bar{x} + u^{T}Ru + 2\bar{x}^{T}\bar{P}_{p-1}\bar{A}\bar{x} + 2\bar{x}^{T}\bar{P}_{p-1}Bu = \left(u + R^{-1}B^{T}\bar{P}^{T}_{p-1}\bar{x}\right)^{T}R\left(u + R^{-1}B^{T}\bar{P}^{T}_{p-1}\bar{x}\right) + \bar{x}^{T}\left(Q - \bar{P}_{p-1}BR^{-1}B^{T}\bar{P}^{T}_{p-1} + \bar{P}_{p-1}\bar{A} + \bar{A}^{T}\bar{P}^{T}_{p-1}\right)\bar{x}. \qquad (3.5)$$

Since the matrix $R$ is positive definite, the above expression is minimized by setting

$$u = -R^{-1}B^{T}\bar{P}^{T}_{p-1}\bar{x}. \qquad (3.6)$$

This leads to the feedback law that can be implemented in the higher-order linear quadratic regulator problem. On using this expression for $u(t)$, Eq. (3.4) becomes

$$\bar{x}^{T}\bar{P}^{(1)}\bar{x} + 2\bar{x}^{T}\bar{P}_{0}x^{(1)}(t) + \ldots + 2\bar{x}^{T}\bar{P}_{p-2}x^{(p-1)}(t) = -\bar{x}^{T}\left(Q - \bar{P}_{p-1}BR^{-1}B^{T}\bar{P}^{T}_{p-1} + \bar{P}_{p-1}\bar{A} + \bar{A}^{T}\bar{P}^{T}_{p-1}\right)\bar{x}. \qquad (3.7)$$


Eq.(3.7) can be rewritten entirely in quadratic form as

$$x^T P^{(1)} x + x^T\left(\bar{P} + \bar{P}^T\right)x = -x^T\left(Q - P_{p-1} B R^{-1} B^T P^T_{p-1} + P_{p-1} A + A^T P^T_{p-1}\right)x, \qquad (3.8)$$

where

$$\bar{P} = \begin{bmatrix} 0 & P_{xx} & P_{xx^{(1)}} & \dots & P_{xx^{(p-2)}} \\ 0 & (P_{xx^{(1)}})^T & P_{x^{(1)}x^{(1)}} & \dots & P_{x^{(1)}x^{(p-2)}} \\ 0 & (P_{xx^{(2)}})^T & (P_{x^{(1)}x^{(2)}})^T & \dots & P_{x^{(2)}x^{(p-2)}} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & (P_{xx^{(p-1)}})^T & (P_{x^{(1)}x^{(p-1)}})^T & \dots & P_{x^{(p-1)}x^{(p-2)}} \end{bmatrix}. \qquad (3.9)$$

This equation holds for all $x(t)$; therefore, we have

$$-P^{(1)} - \left(\bar{P} + \bar{P}^T\right) = Q - P_{p-1} B R^{-1} B^T P^T_{p-1} + P_{p-1} A + A^T P^T_{p-1}. \qquad (3.10)$$

The boundary condition for this differential equation is $P(t_f) = \phi$. For a first-order system, when $p = 1$, $\bar{P} = 0$ and Eq.(3.10) reduces to the familiar Riccati equation shown in Eq.(1.61). In order to use the feedback control law (3.6), only the matrix $P_{p-1}(t)$ is needed. However, $P_{p-1}$ depends on the other elements of $P(t)$.

3.2 Second-order Linear Systems: Finite-time Regulator Problem

A second-order system is considered in this section to illustrate the results. It is worth pointing out that the equations of motion of mechanical systems are second-order differential equations that arise out of the application of Newton's laws. We shall briefly repeat all the steps here.

The problem is to determine a feedback law for $u(t)$ that takes the system from an initial state $x$ and $x^{(1)}$ at the initial time $t_0$ to an unspecified position at the final time $t_f$. The path between $t_0$ and $t_f$ must minimize the special quadratic functional

$$J = \frac{1}{2}\,\mathbf{x}^T \phi\, \mathbf{x}\Big|_{t_f} + \frac{1}{2}\int_{t_0}^{t_f} \left(\mathbf{x}^T Q \mathbf{x} + u^T R u\right)dt, \qquad (3.11)$$


where $\mathbf{x} = \left[x^T\ x^{(1)T}\right]^T$, $\phi$ is a $(2n \times 2n)$ constant symmetric positive semi-definite matrix, $R$ is an $(m \times m)$ symmetric positive definite matrix, and $Q$ is a $(2n \times 2n)$ symmetric positive semi-definite matrix defined as

$$Q = \begin{bmatrix} Q_{xx} & Q_{xx^{(1)}} \\ (Q_{xx^{(1)}})^T & Q_{x^{(1)}x^{(1)}} \end{bmatrix},$$

subject to a linear system described by the following differential equation:

$$x^{(2)}(t) = A_1(t)\,x^{(1)}(t) + A_0(t)\,x(t) + B(t)\,u(t), \qquad (3.12)$$

where $A_1(t)$ and $A_0(t)$ are $(n \times n)$ matrices, $B(t)$ is an $(n \times m)$ matrix function of time, and $u(t)$ is an $(m \times 1)$ control vector. It is shown that $J^*(\mathbf{x}(t),t)$ has the form

$$J^*(\mathbf{x}(t),t) = \frac{1}{2}\,\mathbf{x}^T P(t)\,\mathbf{x}, \qquad (3.13)$$

where $P(t)$ is a $(2n \times 2n)$ matrix function of time defined as

$$P(t) = \begin{bmatrix} P_{xx}(t) & P_{xx^{(1)}}(t) \\ (P_{xx^{(1)}}(t))^T & P_{x^{(1)}x^{(1)}}(t) \end{bmatrix}.$$

Using the line of proof provided in the previous section, the following conditions need to be solved in order to obtain the continuous feedback control law:

• The optimal control: The optimal control $u^*(\cdot)$ for any arbitrary initial condition has the following form:

$$u^*(t) = -R^{-1}B^T\left(P^T_{xx^{(1)}}(t)\,x + P^T_{x^{(1)}x^{(1)}}(t)\,x^{(1)}\right). \qquad (3.14)$$

• The extended Riccati equation: The following conditions need to be solved to derive $P_{xx^{(1)}}(t)$ and $P_{x^{(1)}x^{(1)}}(t)$ according to Eq.(3.10):

$$-\begin{bmatrix} P^{(1)}_{xx} & P^{(1)}_{xx^{(1)}} \\ \left(P^{(1)}_{xx^{(1)}}\right)^T & P^{(1)}_{x^{(1)}x^{(1)}} \end{bmatrix} - \begin{bmatrix} 0 & P_{xx} \\ (P_{xx})^T & P_{xx^{(1)}} + (P_{xx^{(1)}})^T \end{bmatrix} = \begin{bmatrix} Rs_1 & Rs_2 \\ Rs_2^T & Rs_3 \end{bmatrix}, \qquad (3.15)$$

$$\begin{bmatrix} P_{xx}(t_f) & P_{xx^{(1)}}(t_f) \\ (P_{xx^{(1)}}(t_f))^T & P_{x^{(1)}x^{(1)}}(t_f) \end{bmatrix} = \phi = \begin{bmatrix} \phi_{xx} & \phi_{xx^{(1)}} \\ (\phi_{xx^{(1)}})^T & \phi_{x^{(1)}x^{(1)}} \end{bmatrix}, \qquad (3.16)$$


where

$$\begin{aligned} Rs_1 &= Q_{xx} - P_{xx^{(1)}} B R^{-1} B^T P^T_{xx^{(1)}} + P_{xx^{(1)}} A_0 + A_0^T P^T_{xx^{(1)}}, \\ Rs_2 &= Q_{xx^{(1)}} - P_{xx^{(1)}} B R^{-1} B^T P^T_{x^{(1)}x^{(1)}} + P_{xx^{(1)}} A_1 + A_0^T P^T_{x^{(1)}x^{(1)}}, \\ Rs_3 &= Q_{x^{(1)}x^{(1)}} - P_{x^{(1)}x^{(1)}} B R^{-1} B^T P^T_{x^{(1)}x^{(1)}} + P_{x^{(1)}x^{(1)}} A_1 + A_1^T P^T_{x^{(1)}x^{(1)}}. \end{aligned} \qquad (3.17)$$

3.2.1 Example: A Second-order Linear System with Quadratic Criteria

Find $u(t)$ in terms of a continuous feedback control law that minimizes

$$J = \frac{1}{2}x^2\Big|_{t_f} + \frac{1}{2}\int_0^1 u^2\,dt, \qquad (3.18)$$

subject to

$$x^{(2)} = u, \qquad (3.19)$$

where $x$ and $u \in R^1$, and the given initial conditions are $x(0) = 1$ and $x^{(1)}(0) = 0$.

This problem can be solved by the techniques of Chapter 2 and by the feedback law suggested in this chapter. We will compare the solutions of the two approaches along with the classical closed-loop control from Chapter 1.

Closed-loop Control: Second-order

A feedback control law can be obtained using Eq.(3.14) with $R = 1$ and $B = 1$. $P_{xx^{(1)}}(t)$ and $P_{x^{(1)}x^{(1)}}(t)$ need to be determined using Eqs.(3.15) and (3.16).

For this problem, $Q = A_0 = A_1 = 0$. The time-varying differential equations for this problem are

$$\begin{aligned} P^{(1)}_{xx} - P^2_{xx^{(1)}} &= 0; & P_{xx}(t_f) &= 1, \\ P^{(1)}_{xx^{(1)}} + P_{xx} - P_{xx^{(1)}} P_{x^{(1)}x^{(1)}} &= 0; & P_{xx^{(1)}}(t_f) &= 0, \\ P^{(1)}_{x^{(1)}x^{(1)}} + 2P_{xx^{(1)}} - P^2_{x^{(1)}x^{(1)}} &= 0; & P_{x^{(1)}x^{(1)}}(t_f) &= 0. \end{aligned} \qquad (3.20)$$


Figure 3.1: The comparison of the optimal solutions from second-order and first-order closed-loop and open-loop controls.

Once $P_{xx}(t)$, $P_{xx^{(1)}}(t)$, and $P_{x^{(1)}x^{(1)}}(t)$ are computed by integrating backwards from $t_f$ with the boundary conditions, the feedback control law is computed using Eq.(3.14).
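This backward sweep is easy to carry out numerically. The sketch below (not code from the dissertation; it assumes the scalar example of Eqs.(3.18)-(3.19) with $R = B = 1$ and $t_f = 1$, and a hand-rolled classical RK4 step) integrates Eq.(3.20) backward and evaluates the feedback law (3.14) at $t = 0$:

```python
# Sketch (not from the dissertation): backward RK4 sweep of the extended
# Riccati equations (3.20) for the scalar example with R = B = 1, tf = 1.
def riccati_rhs(P):
    # P = (Pxx, Pxx1, Px1x1); forward-time derivatives implied by Eq.(3.20)
    Pxx, Pxx1, Px1x1 = P
    return (Pxx1 ** 2,
            Pxx1 * Px1x1 - Pxx,
            Px1x1 ** 2 - 2.0 * Pxx1)

def backward_sweep(tf=1.0, n=2000):
    """Integrate Eq.(3.20) backward from P(tf) = (1, 0, 0) down to t = 0."""
    h = -tf / n                               # negative step: backward in time
    P = (1.0, 0.0, 0.0)
    for _ in range(n):
        k1 = riccati_rhs(P)
        k2 = riccati_rhs(tuple(p + 0.5 * h * k for p, k in zip(P, k1)))
        k3 = riccati_rhs(tuple(p + 0.5 * h * k for p, k in zip(P, k2)))
        k4 = riccati_rhs(tuple(p + h * k for p, k in zip(P, k3)))
        P = tuple(p + h * (a + 2 * b + 2 * c + d) / 6.0
                  for p, a, b, c, d in zip(P, k1, k2, k3, k4))
    return P

Pxx0, Pxx1_0, Px1x1_0 = backward_sweep()
u0 = -(Pxx1_0 * 1.0 + Px1x1_0 * 0.0)          # feedback law (3.14) at t = 0
```

For this example the sweep gives $P_{xx}(0) = P_{xx^{(1)}}(0) = P_{x^{(1)}x^{(1)}}(0) = 3/4$, so with $x(0) = 1$, $x^{(1)}(0) = 0$ the feedback value is $u(0) = -3/4$, in agreement with the open-loop control (3.34) below.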

The optimal control input is shown in Figure 3.1.

Closed-loop Control: First-order

Here, we find the feedback control law for this same problem by using the first-order form, as described in Chapter 1. The first-order representation of the problem is: find $u(t)$ as a feedback control law to minimize

$$J = \frac{1}{2}x_1^2\Big|_{t_f} + \frac{1}{2}\int_0^1 u^2\,dt, \qquad (3.21)$$

subject to

$$x^{(1)}_1 = x_2, \qquad x^{(1)}_2 = u, \qquad (3.22)$$


Figure 3.2: The Riccati solutions of the example problem from second-order and first-order optimal feedback controls.

where $x_1(t)$, $x_2(t)$, and $u(t) \in R^1$, and the given initial conditions are $x_1(0) = 1$ and $x_2(0) = 0$.

In Eq.(1.63), $Q = 0$, $R = 1$ and $B^T = [0\ 1]$. Also, $S(t)$ is a $(2 \times 2)$ symmetric matrix,

$$S(t) = \begin{bmatrix} S_1(t) & S_2(t) \\ S_2(t) & S_3(t) \end{bmatrix}, \qquad (3.23)$$

which is solved using the Riccati equation (1.61), along with the boundary conditions Eq.(1.62) at $t_f$. For this example, we have

$$\begin{aligned} S^{(1)}_1 - S_2^2 &= 0; & S_1(t_f) &= 1, \\ S^{(1)}_2 + S_1 - S_2 S_3 &= 0; & S_2(t_f) &= 0, \\ S^{(1)}_3 + 2S_2 - S_3^2 &= 0; & S_3(t_f) &= 0, \end{aligned} \qquad (3.24)$$

which are integrated backward from $t_f$. The structure of Eq.(3.24) is identical to that of Eq.(3.20). Not surprisingly, the solutions of $x_1(t)$ and $x_2(t)$ are the same as $x(t)$


and $x^{(1)}(t)$, as shown in Figure 3.1.

Open-loop Solution

For the open-loop optimal solution, we define

$$H = \frac{1}{2}u^2 + \lambda u \qquad (3.25)$$

and from the first-order necessary conditions, we have

$$\lambda^{(2)} = 0 \;\Rightarrow\; \lambda(t) = c_3 + c_4 t, \qquad (3.26)$$

where $c_3$ and $c_4$ are scalar coefficients to be determined. Furthermore, from the optimality conditions, we have

$$\frac{\partial H}{\partial u} = u + \lambda = 0 \;\Rightarrow\; u(t) = -\lambda(t) = -c_3 - c_4 t. \qquad (3.27)$$

Therefore, the form of the optimal $x(t)$ and $x^{(1)}(t)$ can be obtained by solving Eq.(3.19) with $u(t)$ defined in Eq.(3.27):

$$x(t) = c_1 + c_2 t - \frac{c_3}{2}t^2 - \frac{c_4}{6}t^3, \qquad (3.28)$$
$$x^{(1)}(t) = c_2 - c_3 t - \frac{c_4}{2}t^2. \qquad (3.29)$$

From the given initial conditions at time $t_0 = 0$, $c_1 = 1$ and $c_2 = 0$. This implies that

$$x(t) = 1 - \frac{c_3}{2}t^2 - \frac{c_4}{6}t^3, \qquad (3.30)$$
$$x^{(1)}(t) = -c_3 t - \frac{c_4}{2}t^2. \qquad (3.31)$$

Now, $c_3$ and $c_4$ are found by using Eq.(2.66) at the final time $t_f$, i.e., $\lambda(t_f) = 0$ and $\lambda^{(1)}(t_f) = -x(t_f) = -1 + \frac{c_3}{2} + \frac{c_4}{6}$. These can be solved simultaneously as linear algebraic equations, resulting in $c_3 = 0.75$ and $c_4 = -0.75$. Therefore, the optimal state and control trajectories are

$$x(t) = 1 - \frac{3}{8}t^2 + \frac{1}{8}t^3, \qquad (3.32)$$
$$x^{(1)}(t) = -\frac{3}{4}t + \frac{3}{8}t^2, \qquad (3.33)$$
$$u(t) = -\frac{3}{4} + \frac{3}{4}t. \qquad (3.34)$$


These trajectories are identical to those shown in Figure 3.1, computed using the optimal closed-loop solutions.
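As a quick numerical sanity check (not part of the original text; it assumes the scalar example above with $t_f = 1$), the closed-form trajectories (3.32)-(3.34) can be verified against the dynamics, initial conditions, and costate conditions:

```python
# Sanity check (not from the dissertation): verify that the closed-form
# trajectories (3.32)-(3.34) satisfy x'' = u, the initial conditions, and the
# costate conditions lambda(tf) = 0, lambda^(1)(tf) = -x(tf) for tf = 1.
x    = lambda t: 1 - (3/8) * t**2 + (1/8) * t**3   # Eq.(3.32)
xdot = lambda t: -(3/4) * t + (3/8) * t**2          # Eq.(3.33)
u    = lambda t: -(3/4) + (3/4) * t                 # Eq.(3.34)
lam  = lambda t: -u(t)                              # Eq.(3.27): u = -lambda

def xddot(t, h=1e-4):
    # central-difference second derivative (exact for cubics, up to rounding)
    return (x(t + h) - 2 * x(t) + x(t - h)) / h**2

dyn_residual = max(abs(xddot(t) - u(t)) for t in (0.2, 0.5, 0.8))
lam_dot_tf   = (lam(1.0) - lam(1.0 - 1e-6)) / 1e-6  # derivative of lambda at tf
```

All residuals vanish (to numerical precision), confirming the trajectories satisfy the necessary conditions of Chapter 2.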

3.3 Summary

This chapter provided the results for optimal linear feedback control derived from the Hamilton-Jacobi equation. The results were illustrated with an example solved in both second-order and first-order forms. The main result of this chapter is the extension of the theory of feedback laws to systems described by higher-order differential equations.


Chapter 4

NEIGHBORING OPTIMUM FEEDBACK LAW

In this chapter, the line of argument of Chapter 1 is followed. The neighboring optimal theory prescribes a feedback control law that accounts for mismatch in the initial conditions of the problem. We assume that the extremal trajectories of the states and control $x(t), x^{(1)}(t), \dots, x^{(p-1)}(t)$ and $u(t)$ have been found that satisfy all the first-order necessary conditions described in Chapter 2. Also, the appropriate sufficient conditions for a minimum or maximum are satisfied.

4.1 Problem Statement

We consider small perturbations in the initial conditions $\delta x(t_0), \delta x^{(1)}(t_0), \dots, \delta x^{(p-1)}(t_0)$ that produce small perturbations in the extremal path, i.e., $\delta x(t), \delta x^{(1)}(t), \dots, \delta x^{(p-1)}(t)$, $\delta u(t)$, and $\delta\lambda(t), \dots, \delta\lambda^{(p-1)}(t)$.

For a dynamic system described by Eq.(2.1), cost functional (2.2), and fixed end time, the optimal necessary conditions are:

$$x^{(p)}(t) = f(x, x^{(1)}, \dots, x^{(p-1)}, u, t), \qquad (4.1)$$

$$(-1)^p \lambda^{(p)T} = \frac{\partial H}{\partial x} - \left(\frac{\partial H}{\partial x^{(1)}}\right)^{(1)} + \dots + (-1)^{p-1}\left(\frac{\partial H}{\partial x^{(p-1)}}\right)^{(p-1)} \equiv \Lambda^T, \qquad (4.2)$$

$$\frac{\partial H}{\partial u} = 0, \qquad (4.3)$$

$$\lambda^{(k-1)T}(t_f) = \frac{\partial \phi}{\partial x^{(p-k)}}(t_f), \qquad k = 1, \dots, p. \qquad (4.4)$$


On linearizing the above extremal conditions and noting that there is no perturbation in time, i.e., $\delta t = 0$, we obtain

$$\delta x^{(p)}(t) = \frac{\partial f}{\partial x}\delta x + \dots + \frac{\partial f}{\partial x^{(p-1)}}\delta x^{(p-1)} + \frac{\partial f}{\partial u}\delta u, \qquad (4.5)$$

$$\delta\lambda^{(p)} = \frac{\partial \Lambda}{\partial x}\delta x + \dots + \frac{\partial \Lambda}{\partial x^{(2p-2)}}\delta x^{(2p-2)} + \frac{\partial \Lambda}{\partial \lambda}\delta\lambda + \dots + \frac{\partial \Lambda}{\partial \lambda^{(p-1)}}\delta\lambda^{(p-1)} + \frac{\partial \Lambda}{\partial u}\delta u, \qquad (4.6)$$

$$0 = \frac{\partial^2 H}{\partial x\,\partial u}\delta x + \dots + \frac{\partial^2 H}{\partial x^{(p-1)}\,\partial u}\delta x^{(p-1)} + \frac{\partial^2 H}{\partial u^2}\delta u + \left(\frac{\partial f}{\partial u}\right)^T \delta\lambda, \qquad (4.7)$$

$$\delta\lambda^{(k-1)}(t_f) = \left[\frac{\partial^2 \phi}{\partial x^{(p-k)}\,\partial x}\delta x + \dots + \frac{\partial^2 \phi}{\partial x^{(p-k)}\,\partial x^{(p-1)}}\delta x^{(p-1)}\right]_{t=t_f}, \qquad k = 1, \dots, p. \qquad (4.8)$$

These conditions can also be derived by expanding the performance index $J$ to second order and the dynamic equations to first order. Their expressions are:

$$\delta^2 J = \frac{1}{2}\,h_x^T \Phi\, h_x\Big|_{t_f} + \frac{1}{2}\int_{t_0}^{t_f} \begin{bmatrix} h_x^T & h_u^T \end{bmatrix} \bar{H} \begin{bmatrix} h_x \\ h_u \end{bmatrix} dt, \qquad (4.9)$$

where $h_x$, $\Phi$, $H$, and $\bar{H}$ are the same as defined in Section ??. The perturbation of the state equations gives

$$\delta x^{(p)}(t) = \frac{\partial f}{\partial x}\delta x + \dots + \frac{\partial f}{\partial x^{(p-1)}}\delta x^{(p-1)} + \frac{\partial f}{\partial u}\delta u, \qquad (4.10)$$

with $\delta x(t_0), \delta x^{(1)}(t_0), \dots,$ and $\delta x^{(p-1)}(t_0)$ specified. The assumption that all first-order necessary conditions vanish about an extremal path must still hold.

Since the problem of interest is to find the neighboring optimal path, $\delta u(t)$ must be determined in such a way that $\delta^2 J$ is minimized subject to Eq.(4.10). This problem is of the linear-quadratic regulator type, which was studied in detail in


Chapter 3. The results of the associated two-point boundary-value problem are the same as Eqs.(4.5)-(4.8).

To determine the neighboring extremal paths, the following problem must be solved: find $\delta u(t)$ that takes the system from an initial state $\delta x(t_0), \delta x^{(1)}(t_0), \dots,$ and $\delta x^{(p-1)}(t_0)$ to unspecified points at the final time $t_f$. The path between $t_0$ and $t_f$ must minimize a special quadratic functional

$$\delta^2 J \equiv \bar{J} = \frac{1}{2}\,h_x^T \Phi\, h_x\Big|_{t_f} + \frac{1}{2}\int_{t_0}^{t_f} \left(h_x^T Q_{xx} h_x + u^T \frac{\partial^2 H}{\partial u^2} u + 2 h_x^T Q_{xu} u\right) dt, \qquad (4.11)$$

where $\Phi$ must be an $(np \times np)$ constant symmetric nonnegative definite matrix. $Q_{xx}$ and $Q_{xu}$ are the sub-matrices of the Hessian matrix Eq.(??), which have the following forms:

$$Q_{xx} = \begin{bmatrix} \frac{\partial^2 H}{\partial x^2} & \frac{\partial^2 H}{\partial x^{(1)}\partial x} & \dots & \frac{\partial^2 H}{\partial x^{(p-1)}\partial x} \\ \left(\frac{\partial^2 H}{\partial x^{(1)}\partial x}\right)^T & \frac{\partial^2 H}{(\partial x^{(1)})^2} & \dots & \frac{\partial^2 H}{\partial x^{(p-1)}\partial x^{(1)}} \\ \vdots & \vdots & \ddots & \vdots \\ \left(\frac{\partial^2 H}{\partial x^{(p-1)}\partial x}\right)^T & \left(\frac{\partial^2 H}{\partial x^{(p-1)}\partial x^{(1)}}\right)^T & \dots & \frac{\partial^2 H}{(\partial x^{(p-1)})^2} \end{bmatrix}, \qquad (4.12)$$

and

$$Q_{xu} = \begin{bmatrix} \frac{\partial^2 H}{\partial u\,\partial x} \\ \frac{\partial^2 H}{\partial u\,\partial x^{(1)}} \\ \vdots \\ \frac{\partial^2 H}{\partial u\,\partial x^{(p-1)}} \end{bmatrix}, \qquad (4.13)$$

subject to a linear system described by Eq.(4.10).

According to Chapter 3, the optimal performance index $\bar{J}^*(x(t),t)$ must have the form $\frac{1}{2}h_x^T P(t)\,h_x$, where $P(t)$ is a symmetric matrix of the form

$$P = \begin{bmatrix} P_{xx} & P_{xx^{(1)}} & \dots & P_{xx^{(p-1)}} \\ (P_{xx^{(1)}})^T & P_{x^{(1)}x^{(1)}} & \dots & P_{x^{(1)}x^{(p-1)}} \\ \vdots & \vdots & \ddots & \vdots \\ (P_{xx^{(p-1)}})^T & (P_{x^{(1)}x^{(p-1)}})^T & \dots & P_{x^{(p-1)}x^{(p-1)}} \end{bmatrix}.$$


In order to derive the neighboring optimum feedback law, the same procedure used to derive the extended Riccati equation in Chapter 3 is followed. As a result, we obtain the extended neighboring optimum feedback law as

$$\delta u = -\left(\frac{\partial^2 H}{\partial u^2}\right)^{-1}\left[\begin{pmatrix} \frac{\partial^2 H}{\partial x\,\partial u} & \cdots & \frac{\partial^2 H}{\partial u\,\partial x^{(p-1)}} \end{pmatrix} + \left(\frac{\partial f}{\partial u}\right)^T P^T_{p-1}\right] h_x, \qquad (4.14)$$

where $P_i$ is the $(i+1)$th column of the matrix $P(t)$. In addition, there exists a set of differential equations to obtain $P(t)$. These differential equations have the form

$$-P^{(1)} - \left(\bar{P} + \bar{P}^T\right) = Q_{xx} - \left(P_{p-1}B + Q_{xu}\right)\left(\frac{\partial^2 H}{\partial u^2}\right)^{-1}\left(B^T P^T_{p-1} + Q_{xu}^T\right) + P_{p-1}A + A^T P^T_{p-1}, \qquad (4.15)$$

where $A = \left[\frac{\partial f}{\partial x}\ \frac{\partial f}{\partial x^{(1)}}\ \dots\ \frac{\partial f}{\partial x^{(p-2)}}\ \frac{\partial f}{\partial x^{(p-1)}}\right]$ and $B = \frac{\partial f}{\partial u}$. Moreover, the boundary conditions associated with the above differential equations are

$$P(t_f) = \Phi. \qquad (4.16)$$

Before this chapter is concluded with examples, the block diagram of the neighboring optimum feedback control scheme for the higher-order system is shown in Figure 4.1.

4.2 Example 1: A Simple Second-order Linear System

Find $x(t)$ and $u(t)$ that minimize

$$J = x^2\Big|_{t_f} + \int_0^1 u^2\,dt, \qquad (4.17)$$

subject to

$$m x^{(2)} = u, \qquad (4.18)$$

where $x$ and $u \in R^1$, $m$ is a parameter of the system, and the given initial conditions are $x(0) = 0$ and $x^{(1)}(0) = 1$.


Figure 4.1: The higher-order neighboring optimal feedback control.

Optimal Solution for m = 1:

From Eq.(4.16), we can show that

$$P_{xx}(1) = 2; \qquad P_{xx^{(1)}}(1) = 0; \qquad P_{x^{(1)}x^{(1)}}(1) = 0. \qquad (4.19)$$

Define

$$H = u^2 + \lambda u \qquad (4.20)$$

and from the first-order necessary conditions, we have

$$\lambda^{(2)} = 0 \;\Rightarrow\; \lambda(t) = c_1 t + c_2, \qquad (4.21)$$

where $c_1$ and $c_2$ are scalar coefficients to be determined. Furthermore, from the optimality conditions, we have

$$\frac{\partial H}{\partial u} = 2u + \lambda = 0 \;\Rightarrow\; u(t) = -\frac{1}{2}\lambda(t) = -\frac{c_1}{2}t - \frac{c_2}{2}. \qquad (4.22)$$


Therefore, the form of the optimal state trajectories $x(t)$ and $x^{(1)}(t)$ can be obtained by solving Eq.(4.18) with $u(t)$ as defined in Eq.(4.22):

$$x(t) = c_4 + c_3 t - \frac{c_2}{4}t^2 - \frac{c_1}{12}t^3, \qquad (4.23)$$
$$x^{(1)}(t) = c_3 - \frac{c_2}{2}t - \frac{c_1}{4}t^2. \qquad (4.24)$$

From the given initial conditions at time $t_0 = 0$, it can be shown that $c_3 = 1$ and $c_4 = 0$, which implies that

$$x(t) = t - \frac{c_2}{4}t^2 - \frac{c_1}{12}t^3, \qquad (4.25)$$
$$x^{(1)}(t) = 1 - \frac{c_2}{2}t - \frac{c_1}{4}t^2. \qquad (4.26)$$

Now, $c_1$ and $c_2$ can be found by using Eq.(2.66) at the final time $t_f$, which gives $\lambda(t_f) = 0$ and $\lambda^{(1)}(t_f) = -2x(t_f) = -2 + 2\frac{c_2}{4} + 2\frac{c_1}{12}$. These can be solved simultaneously as linear algebraic equations, resulting in $c_1 = -1.5$ and $c_2 = 1.5$. Therefore, the optimal state and control trajectories are

$$x(t) = t - \frac{3}{8}t^2 + \frac{1}{8}t^3, \qquad (4.27)$$
$$x^{(1)}(t) = 1 - \frac{3}{4}t + \frac{3}{8}t^2, \qquad (4.28)$$
$$u(t) = -\frac{3}{4} + \frac{3}{4}t. \qquad (4.29)$$

A Neighboring Feedback Control Law:

Suppose that the parameter $m = 1$ is not precise due to a measurement error, and let us assume that its exact value is 1.1. As a result, the solutions derived for $m = 1$ are no longer valid optimal solutions. A neighboring feedback control law can now be used to obtain new optimal solutions for the system with $m = 1.1$ using Eq.(4.14). However, $P_{xx^{(1)}}$ and $P_{x^{(1)}x^{(1)}}$ need to be determined. From Eq.(4.15), we obtain the following time-varying differential equations:

$$\begin{aligned} P^{(1)}_{xx} - 0.5P^2_{xx^{(1)}} &= 0; & P_{xx}(t_f) &= 2, \\ P^{(1)}_{xx^{(1)}} + P_{xx} - 0.5P_{xx^{(1)}}P_{x^{(1)}x^{(1)}} &= 0; & P_{xx^{(1)}}(t_f) &= 0, \\ P^{(1)}_{x^{(1)}x^{(1)}} + 2P_{xx^{(1)}} - 0.5P^2_{x^{(1)}x^{(1)}} &= 0; & P_{x^{(1)}x^{(1)}}(t_f) &= 0. \end{aligned} \qquad (4.30)$$

Figure 4.2: Optimal solutions of the system with m = 1, m = 1.1, and neighboring optimal.

Once $P_{xx^{(1)}}$ and $P_{x^{(1)}x^{(1)}}$ are computed by integrating backwards from the final time $t_f$ with the boundary conditions and stored as functions of time, the neighboring feedback control law can be computed from Eq.(4.14). Results for the system with $m = 1$, $m = 1.1$, and the neighboring feedback law are shown in Figure 4.2.
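A minimal numerical sketch of this computation follows (not code from the dissertation; it assumes the scalar example above, for which $\partial^2 H/\partial u^2 = 2$, $Q_{xu} = 0$, and $B = \partial f/\partial u = 1$ at the nominal $m = 1$, so Eq.(4.14) reduces to $\delta u = -\tfrac{1}{2}(P_{xx^{(1)}}\delta x + P_{x^{(1)}x^{(1)}}\delta x^{(1)})$):

```python
# Sketch (not from the dissertation): backward RK4 sweep of Eq.(4.30) and the
# resulting neighboring-optimal correction, Eq.(4.14), for Example 1.
def rhs(P):
    # P = (Pxx, Pxx1, Px1x1); forward-time derivatives implied by Eq.(4.30)
    Pxx, Pxx1, Px1x1 = P
    return (0.5 * Pxx1 ** 2,
            0.5 * Pxx1 * Px1x1 - Pxx,
            0.5 * Px1x1 ** 2 - 2.0 * Pxx1)

def backward_sweep(tf=1.0, n=2000):
    """Integrate Eq.(4.30) backward from P(tf) = (2, 0, 0) down to t = 0."""
    h, P = -tf / n, (2.0, 0.0, 0.0)
    for _ in range(n):
        k1 = rhs(P)
        k2 = rhs(tuple(p + 0.5 * h * k for p, k in zip(P, k1)))
        k3 = rhs(tuple(p + 0.5 * h * k for p, k in zip(P, k2)))
        k4 = rhs(tuple(p + h * k for p, k in zip(P, k3)))
        P = tuple(p + h * (a + 2 * b + 2 * c + d) / 6.0
                  for p, a, b, c, d in zip(P, k1, k2, k3, k4))
    return P

Pxx0, Pxx1_0, Px1x1_0 = backward_sweep()

def delta_u(dx, dx1, Pxx1, Px1x1):
    """Neighboring correction of Eq.(4.14), specialized to this example."""
    return -0.5 * (Pxx1 * dx + Px1x1 * dx1)
```

The stored gains are then evaluated along the nominal trajectory, with the correction `delta_u` added to the nominal control whenever the measured state deviates from Eqs.(4.27)-(4.28).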

4.3 Example 2: One-armed Manipulator with Coulomb Friction

We consider a physical pendulum of mass $m$ and length $L$, shown in Figure 6.3, which has an actuator input $M_a$ at the base together with a resistive Coulomb friction couple proportional to the radial force $R_x$. Furthermore, we assume that the Coulomb friction couple $C_f$ has magnitude $C_f = \mu\epsilon|R_x|$, where $R_x$ is the radial component of the reaction at the support O, $\mu$ is the coefficient of


friction between the sleeve and the hub, and $\epsilon$ is the hub radius. The system has one degree of freedom and the equation of motion is

$$\frac{1}{3}mL^2\theta_1^{(2)} + \mu\epsilon m\frac{L}{2}\left(\theta_1^{(1)}\right)^2 + mg\frac{L}{2}\sin(\theta_1) + \mu\epsilon mg\cos(\theta_1) = M_a, \qquad (4.31)$$

where $L$ is the link length, $m$ is the mass of the link with the mass center at a distance $\frac{L}{2}$ from the base of the pendulum, $g$ is the gravitational constant, and $M_a$ is the actuator moment. Let the objective be to steer the system from a given set of initial conditions $\theta_1(0) = 3$ and $\theta_1^{(1)}(0) = 0$ to unspecified goal points at $t_f = 1$ while minimizing a cost $J = \theta_1^2(t_f) + \int_{t_0}^{t_f} (M_a)^2\,dt$. The parameters used in the model (in MKS units) are: $L = 1.0$, $m = 1.0$, $g = 9.8$, $\mu = 0.3$, and $\epsilon = 0.01$. The optimal trajectory of $\theta_1(t)$ is shown in Figure 4.3.

Figure 4.3: Optimal solutions of the system with θ₁(0) = 3, θ₁(0) = 3.05, and neighboring optimal.

Suppose that the initial conditions are changed to $\theta_1(0) = 3.05$ and $\theta_1^{(1)}(0) = 0$. A neighboring feedback control law can now be used to obtain new optimal solutions for the new system using Eq.(4.14). In order to apply Eq.(4.14), $P_{xx^{(1)}}(t)$ and


$P_{x^{(1)}x^{(1)}}(t)$ are needed. From Eq.(4.15) and the following expressions:

$$B = 1, \qquad A = \left[-\alpha_2\cos(\theta_1) + \alpha_1\sin(\theta_1) \quad -\alpha_3\theta_1^{(1)}\right], \qquad (4.32)$$

$$Q_{xx} = \begin{bmatrix} \alpha_1\lambda_1\cos(\theta_1) + \alpha_2\lambda_1\sin(\theta_1) & 0 \\ 0 & -\alpha_3\lambda_1 \end{bmatrix}, \qquad Q_{xu} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \qquad (4.33)$$

we can obtain the following time-varying differential equations:

$$\begin{aligned} P^{(1)}_{xx} + \alpha_1\lambda_1\cos(\theta_1) + \alpha_2\lambda_1\sin(\theta_1) - 0.5P^2_{xx^{(1)}} + 2\left(-\alpha_2\cos(\theta_1) + \alpha_1\sin(\theta_1)\right)P_{xx^{(1)}} &= 0; & P_{xx}(t_f) &= 2, \\ P^{(1)}_{xx^{(1)}} + P_{xx} - 0.5P_{xx^{(1)}}P_{x^{(1)}x^{(1)}} - \alpha_3\theta_1^{(1)}P_{xx^{(1)}} + \left(-\alpha_2\cos(\theta_1) + \alpha_1\sin(\theta_1)\right)P_{x^{(1)}x^{(1)}} &= 0; & P_{xx^{(1)}}(t_f) &= 0, \\ P^{(1)}_{x^{(1)}x^{(1)}} + 2P_{xx^{(1)}} - 0.5P^2_{x^{(1)}x^{(1)}} - \alpha_3\lambda_1 - 2\alpha_3\theta_1^{(1)}P_{x^{(1)}x^{(1)}} &= 0; & P_{x^{(1)}x^{(1)}}(t_f) &= 0, \end{aligned} \qquad (4.34)$$

where $\alpha_1 = \frac{3\mu\epsilon g}{L^2}$, $\alpha_2 = \frac{3g}{2L}$, and $\alpha_3 = \frac{3\mu\epsilon}{L}$

. Once $P_{xx^{(1)}}(t)$ and $P_{x^{(1)}x^{(1)}}(t)$ are computed by integrating backwards from the final time $t_f$ with the boundary conditions and stored as functions of time, the neighboring feedback control law can be computed from Eq.(4.14). Results for the system with $\theta_1(0) = 3$, $\theta_1(0) = 3.05$, and the neighboring feedback law are shown in Figure 4.3.

The above two examples show that both initial-condition and modeling errors can be handled quite well by a neighboring feedback control law. Thus, the solutions of new problems, when the perturbations of the initial conditions are sufficiently small, can be computed without recomputing the complete optimal control problem.

4.4 Summary

If the optimal paths are provided, the feedback gains for the neighboring optimal paths can be found, as shown in this chapter. The type of feedback control is identical to the linear feedback control described in Chapter 3. The weighting factors in the quadratic performance index $\bar{J}$ have the form of the second partial derivatives of the Hamiltonian, and the linear system equations are the linear perturbation equations around the optimal path. Note that the neighboring feedback control law is developed under the assumption that the perturbations of the initial conditions are sufficiently small; otherwise, the complete optimal control problem has to be recomputed.


Chapter 5

NUMERICAL ALGORITHMS

5.1 Introduction

In this chapter, a numerical algorithm is developed for solving dynamic optimization problems. This algorithm utilizes both the direct and indirect approaches and permits one to consider problems characterized by differential equations of any order. There are many numerical techniques currently available; however, our objective here is to develop a general numerical approach that is applicable to the type of problems mentioned above. Having examined several numerical methods, such as multiple shooting, weighted residuals, and nonlinear programming, we concluded that nonlinear programming is the most general technique for handling more complex constrained optimization problems. As such, this technique was selected as the primary approach utilized in this thesis to obtain and compare the optimal solutions.

5.2 Nonlinear Programming

The nonlinear programming problem, which is the preferred method of solution utilized in this thesis, requires the determination of the $n$-vector $y = (y_1, \dots, y_n)$ that minimizes a scalar-valued objective function

$$J(y) \qquad (5.1)$$


subject to equality and inequality constraints of the form

$$b_l \le \begin{bmatrix} y \\ C(y) \end{bmatrix} \le b_u, \qquad (5.2)$$

where $b_l$ and $b_u$ are the appropriate lower and upper bounds, and $C(y)$ is a $(k \times 1)$ vector of other constraints, both linear and nonlinear. If $C(y)$ is a set of equality constraints, $k \le n$.

A general approach for solving optimal control problems using nonlinear programming is presented in this chapter. The method of solution is based on the theory of collocation and consists of the following steps: (i) choose $N - 1$ intermediate points (nodes) between $t_0$ and $t_f$ and divide the total time into $N$ intervals; (ii) choose an admissible form for $x(t)$, $\lambda(t)$, $u(t)$, etc., in each of these intervals, i.e., form a base solution and add mode functions to refine the solution if needed; (iii) impose continuity of up to the $(p-1)$th derivative of $x(t)$ and $\lambda(t)$ at the $N - 1$ intermediate node points, but allow for discontinuity of $u(t)$; (iv) solve for the base solution using the given boundary conditions of $x(t)$ and $\lambda(t)$ at $t_0$ and $t_f$ and the continuity requirements at the node points to obtain the initial guess for the problem; (v) satisfy the differential equations by choosing collocation points within each interval, i.e., each collocation point provides $n$ equations corresponding to the system dynamic equations; (vi) pose this problem as a nonlinear programming problem whose objective is to find the best mode coefficients such that the functional $J$ is minimized.

5.3 Direct Scheme

A direct trajectory optimization method using nonlinear programming is described in this section. This method represents the state and control variables by piecewise polynomials and piecewise constants, respectively. A collocation scheme is used to convert the optimal control problem into a nonlinear programming problem. The


primary advantage of this scheme is that the extension to the general problem involving path and inequality constraints on the control and/or state variables becomes easier to handle, as the optimal solutions for the control variables $u(t)$ are permitted to be discontinuous. The basic ideas of this approach have been delineated by Johnson [28] and by Hahn and Johnson [24]. Also, a similar approach that considers the control variables as piecewise polynomials has been described by Hargraves [25].

The problem statement in the direct approach is to minimize a cost functional

$$J[x,u] = \phi\left(x(t_f), x^{(1)}(t_f), \dots, x^{(p-1)}(t_f), t_f\right) + \int_{t_0}^{t_f} L\left(x(\tau), x^{(1)}(\tau), \dots, x^{(p-1)}(\tau), u(\tau), \tau\right)d\tau, \qquad (5.3)$$

subject to the constraints

$$x^{(p)}(t) = f(x, x^{(1)}, \dots, x^{(p-1)}, u, t), \qquad (5.4)$$

$$c_j(t, x, x^{(1)}, \dots, x^{(p-1)}, u) \le 0, \qquad j = 1, \dots, r, \qquad (5.5)$$

and a set of given boundary conditions.

5.3.1 Method of Solution

The goal of this section is to describe the direct approach that is used to reduce the optimal control problem described in Eqs.(5.3)-(5.5) to a nonlinear programming problem. To this end, the state and control variables are represented by piecewise polynomials and piecewise constants, respectively. Additionally, collocation is used to satisfy the constraints of the problem, such as the equality/inequality constraints placed on the state and control variables. First, the length of each interval is defined as $S_i = t_i - t_{i-1}$, as shown in Figure 5.1.

Then, the state variables are described by polynomials in each interval segment. Likewise, constants are used to represent the control inputs, as shown in Figure 5.2.


Figure 5.1: Piecewise polynomial representation of the states.

Figure 5.2: Piecewise constant representation of the inputs.

The coefficients that parameterize the state and control solutions are computed using nonlinear programming so as to satisfy Eqs.(5.4) and (5.5) and the continuity conditions at the node points.

To provide a better understanding, let the states $x(t)$ be represented on each interval by a polynomial of degree at least $p - 1$ in order to ensure continuity across the nodes:

$$x = y_0 + y_1 t + y_2 t^2 + y_3 t^3 + \dots + y_{p-1} t^{p-1}, \qquad (5.6)$$


where $y_0, y_1, y_2, \dots, y_{p-1}$ are $n$-dimensional vectors. The controls $u(t)$ are represented as

$$u = y_u, \qquad (5.7)$$

where $y_u$ is an $m$-dimensional vector. The differential equations (5.4) and the inequality constraints (5.5) are then satisfied on a finite collocation grid between $t_0$ and $t_f$. Similar methods have been used in the direct optimization of problems posed in first-order form ([24], [25]).
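The representation (5.6)-(5.7) can be illustrated concretely on one interval. The sketch below (an assumption-laden illustration, not code from the dissertation) takes the double-integrator example $x^{(2)} = u$ of Chapter 3, builds a cubic state segment with a piecewise-constant control, and forms the dynamics residual that the collocation constraints drive to zero:

```python
# Sketch (assumed example x'' = u, one interval): evaluate a cubic state
# segment, Eq.(5.6), and a piecewise-constant control, Eq.(5.7), and form the
# dynamics residual x'' - u at collocation points inside the interval.
def make_segment(y):
    """y = (y0, y1, y2, y3): cubic coefficients on one interval."""
    x     = lambda t: y[0] + y[1]*t + y[2]*t**2 + y[3]*t**3
    xddot = lambda t: 2*y[2] + 6*y[3]*t          # second derivative of the cubic
    return x, xddot

def dynamics_residuals(y, yu, collocation_pts):
    """Residual of x'' = u at each collocation point (u piecewise constant)."""
    _, xddot = make_segment(y)
    return [xddot(t) - yu for t in collocation_pts]

# A segment that satisfies x'' = u exactly for constant u = -0.75:
yu  = -0.75
y   = (1.0, 0.0, yu / 2.0, 0.0)                  # x = 1 + (u/2) t^2
res = dynamics_residuals(y, yu, [0.25, 0.5, 0.75])
```

In the full method, an NLP solver adjusts the coefficients `y` and `yu` on every interval until these residuals (one per collocation point, per state equation) vanish while the cost (5.3) is minimized.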

With $N$ intervals and the above parameterization of $x(t)$ and $u(t)$, the total number of decision variables $y$ is $\ge (np + m)N$. The variables $x$ are subject to boundary conditions at $t_0$ and/or $t_f$ and to continuity across the interval boundaries. This results in a maximum of $np(N + 1)$ equality constraints. If $N_c$ is the number of collocation points, Eq.(5.4) results in $nN_c$ equality constraints and Eq.(5.5) in $rN_c$ inequality constraints. The nonlinear programming problem is now well defined and can be solved by standard available tools.

5.3.2 Computation Issues

We briefly compare the dimension of this nonlinear programming problem, obtained for the higher-order form, with that of an alternate one posed for the first-order form. We assume that the same number of intervals and collocation points is selected in the two alternative problems. A system described by $n$ $p$th-order differential equations has an equivalent description with $np$ first-order differential equations. Hence, the piecewise parameterization of the states must be at least first-degree polynomials. We again choose the inputs to be piecewise constants. As in the higher-order case, this results in a total number of decision variables larger than or equal to $(np + m)N$. Similarly, the boundary conditions and continuity across the nodes add up to a maximum of $np(N + 1)$ equality constraints.


If N_c is the number of collocation points, the dynamic equations in the first-order
form will yield npN_c equality constraints and rN_c inequality constraints.

From this comparison, one can make the following observation: For the same

collocation grid in time, the dynamic equations in the higher-order form yield equality 

constraints which are “p” times smaller in number compared to the first-order form.

5.4 Indirect Scheme

The use of nonlinear programming to obtain the solution of an optimal control
problem is a well-established approach, as shown in the previous section. It was

shown in Chapter 2 that the optimal trajectory for a general pth-order system with

general constraints must satisfy the following optimality conditions:

x^(p)(t) = f(x(t), x^(1)(t), ..., x^(p−1)(t), u, t),   (5.8)

∂H/∂x − (∂H/∂x^(1))^(1) + ... + (−1)^(p−1) (∂H/∂x^(p−1))^(p−1) = (−1)^(p−1) λ^(p),   (5.9)

H(x(t), x^(1)(t), ..., x^(p−1)(t), u*(t), λ(t), t)
    = min_u H(x(t), x^(1)(t), ..., x^(p−1)(t), u(t), λ(t), t),   (5.10)

and

c_j(t, x(t), x^(1)(t), ..., x^(p−1)(t), u) ≤ 0,   j = 1, ..., r,   (5.11)

for all t ∈ [t0, tf]. The modified Hamiltonian is defined as H̄ = H + Σ_{j=1}^{r} µ_j c_j(t, x, u*).
Also, the sufficient condition says that if the jth constraint is active, µ_j ≥ 0; oth-
erwise, µ_j = 0. The equality constraints g_i, described in Chapter 2, are considered
as a subset of Eq.(5.11). The forms of the switching structure can be derived either
from Eq.(5.10) or from the variational theory as

∂H/∂u_j = 0,   j = 1, ..., m.   (5.12)

In this thesis, we choose Eq.(5.12) for numerical implementation instead of Eq.(5.10).
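As a toy numerical illustration of Eq.(5.12) — using made-up coefficients, not those of any example in this chapter — for a Hamiltonian that is quadratic in a scalar control, H = r u^2 + c u + const, stationarity gives u* = −c/(2r):

```python
r, c = 1.0, 4.0                  # illustrative values only
H = lambda u: r * u**2 + c * u   # control-dependent part of the Hamiltonian
u_star = -c / (2.0 * r)          # root of dH/du = 2*r*u + c = 0
eps = 1e-4
dHdu = (H(u_star + eps) - H(u_star - eps)) / (2.0 * eps)
assert abs(dHdu) < 1e-6          # Eq.(5.12) holds at u*
```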


5.4.1 Method of Solution

This section describes the indirect approach to reduce the optimality con-

ditions stated in Eqs.(5.8)-(5.12) to a nonlinear programming problem. In these

conditions, there are n state equations (5.8), n costate equations (5.9), r inequality

constraints (5.11), and m optimality equations (5.12). They all total to (2n + m + r)

conditions. Consistently, the problem has (2n + m + r) unknown variables: n states

x(t), n costates λ(t), m controls u(t), and r Lagrange variables µ(t). The collocation

technique is used to convert these conditions to a nonlinear programming problem,

similar to the direct method. A nonlinear programming problem minimizes a scalar-
valued objective function, and we use the cost functional of Eq.(5.3) as the objective
function. In addition to the continuity of state variables at the collocation points,
additional conditions must be satisfied to ensure continuity of Lagrange variables.

The remaining discussion concerns continuity of the optimal trajectory across
the nodes. The continuity applies not only to the state variables up to the (p − 1)th
derivative but also to auxiliary conditions derived from the calculus of variations,
called the natural boundary conditions. Since the time domain is divided into N
intervals with (N − 1) intermediate node points, additional conditions need to be
satisfied at the intermediate node points. Using variational analysis, δJ_i can be written as

δJ_i = ∫_{t_{i−1}^+}^{t_i^−} h_i^T [ ∂H̄_i/∂x_i − (d/dt)(∂H̄_i/∂x_i^(1)) + ... + (−1)^p (d^p/dt^p)(∂H̄_i/∂x_i^(p)) ] dt

     + h_i^T [ ∂H̄_i/∂x_i^(1) − (d/dt)(∂H̄_i/∂x_i^(2)) + ... + (−1)^(p−1) (d^(p−1)/dt^(p−1))(∂H̄_i/∂x_i^(p)) ] |_{t_{i−1}^+}^{t_i^−}

     + h_i^(1)T [ ∂H̄_i/∂x_i^(2) − (d/dt)(∂H̄_i/∂x_i^(3)) + ... + (−1)^(p−2) (d^(p−2)/dt^(p−2))(∂H̄_i/∂x_i^(p)) ] |_{t_{i−1}^+}^{t_i^−}

     + ... + h_i^(p−1)T [ ∂H̄_i/∂x_i^(p) ] |_{t_{i−1}^+}^{t_i^−},   (5.13)

where h_i = δx_i and J = Σ_{i=1}^{N} J_i. The integrand in every interval has been set to zero,


as derived in Eq. (5.9). The continuity of states results in

x(t_i^−) = x(t_i^+),   x^(1)(t_i^−) = x^(1)(t_i^+),   ...,
x^(p−1)(t_i^−) = x^(p−1)(t_i^+),   i = 1, ..., N − 1.   (5.14)

As a result, at an interval boundary, the variations satisfy

h(t_i^−) = h(t_i^+),   h^(1)(t_i^−) = h^(1)(t_i^+),   ...,
h^(p−1)(t_i^−) = h^(p−1)(t_i^+),   i = 1, ..., N − 1.   (5.15)

On making these substitutions into Eq.(5.13), the boundary terms of δJ can be
grouped in terms of h(t_i^−), h^(1)(t_i^−), ..., h^(p−1)(t_i^−), i = 1, ..., N. At these interval
boundaries, the variations h and their derivatives up to order p − 1 can be arbitrary,
hence the coefficients multiplying these terms must go to zero. This results in np

constraints at each interval boundary in addition to the np constraints of Eqs. (5.14).

As a result, each interval boundary within t0 and tf  satisfies 2np constraints which

can be rewritten as

[ ∂H/∂x^(1) − (∂H/∂x^(2))^(1) + ... + (−1)^(p−2) (∂H/∂x^(p−1))^(p−2) + (−1)^p λ^(p−1) ]_{t_i^−}
    = [ ∂H/∂x^(1) − (∂H/∂x^(2))^(1) + ... + (−1)^(p−2) (∂H/∂x^(p−1))^(p−2) + (−1)^p λ^(p−1) ]_{t_i^+}

[ ∂H/∂x^(2) − (∂H/∂x^(3))^(1) + ... + (−1)^(p−3) (∂H/∂x^(p−1))^(p−3) + (−1)^(p−1) λ^(p−2) ]_{t_i^−}
    = [ ∂H/∂x^(2) − (∂H/∂x^(3))^(1) + ... + (−1)^(p−3) (∂H/∂x^(p−1))^(p−3) + (−1)^(p−1) λ^(p−2) ]_{t_i^+}

...

[ ∂H/∂x^(p−1) − λ^(1) ]_{t_i^−} = [ ∂H/∂x^(p−1) − λ^(1) ]_{t_i^+}

[ ∂H/∂x^(p) ]_{t_i^−} = [ ∂H/∂x^(p) ]_{t_i^+}

x(t_i^−) = x(t_i^+),   x^(1)(t_i^−) = x^(1)(t_i^+),   ...,
x^(p−1)(t_i^−) = x^(p−1)(t_i^+),   i = 1, ..., N − 1.   (5.16)

In the special case when p = 1, these conditions become the continuity of the

Lagrange multipliers [18]. Additionally, there are np boundary conditions at t0 and

tf .

Eqs.(5.4), (5.5), (5.9), (5.10), and (5.12) are then satisfied at a finite colloca-

tion grid over t0 and tf . Similar methods have been used in indirect optimization

of problems posed in first-order form [14].

With N intervals and the parameterization of x, λ, u, and µ, the total number
of decision variables y is ≥ (3np + m)N. The variables x are subjected to bound-
ary conditions at t_0 and/or t_f and continuity across the interval boundaries. This
results in a maximum of 2np(N + 1) equality constraints. If N_c is the number of
collocation points, Eqs.(5.4) and (5.9) each result in nN_c equality constraints. The
control optimality equations (5.12) yield mN_c equality constraints, and Eq.(5.5)
results in rN_c inequality constraints. The nonlinear programming problem is now well defined and

can be solved by standard available tools.
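The same bookkeeping can be sketched for the indirect scheme (again illustrative Python with our own function name):

```python
def indirect_nlp_size(n, p, m, r, N, Nc):
    """Counts mirroring the text: states, costates, controls, and Lagrange
    variables over N intervals, collocated at Nc points."""
    decision_vars = (3 * n * p + m) * N    # minimum number of unknown coefficients
    equality = 2 * n * p * (N + 1)         # boundary conditions and Eqs.(5.14)/(5.16)
    equality += n * Nc + n * Nc + m * Nc   # Eq.(5.4), Eq.(5.9), and Eq.(5.12)
    inequality = r * Nc                    # Eq.(5.5)
    return decision_vars, equality, inequality

print(indirect_nlp_size(2, 2, 1, 1, 6, 6))  # (78, 86, 6)
```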

Since the state and the costate equations can be alternatively written in

first-order forms, one can reason out that for each collocation point, the number of 

inequality constraints in the first-order and higher-order form are the same. The

substantial difference in the two alternative forms comes from the dynamic equations

and the costate equations. Similar to the direct case, one can make the following

observation: For the same collocation grid in time, the dynamic equations and the

costate equations in the higher-order form yield equality constraints which are “p”

times smaller in number compared to the first-order form.

Computational Comparisons: The collocation procedure applied to the system

equations (5.8), Lagrange multiplier equations (5.9), optimality equations (5.12),


and inequality constraints (5.11), results in 2nN  + mN  nonlinear equations and

rN inequality constraints in the nonlinear programming problem. Alternatively,
Eqs.(5.8) and (5.9) could be converted to first-order form, resulting in 2np first-order
differential equations. Clearly, if p = 1, the computations with the higher-order

form and the first-order form are identical, but as p becomes larger, the higher-order

approach provides considerably fewer equations in the nonlinear programming prob-

lem since the number of equality constraints is reduced. Therefore, we anticipate

computation efficiency using the higher-order method outlined in this chapter.

The distinction between the direct and indirect schemes is as follows: the
indirect technique has more conditions to be satisfied than the direct procedure

due to the appearance of Lagrange variables. These artificial variables make the

indirect algorithm somewhat more tedious because the initial guess for Lagrange

variables is needed to determine the optimal solutions. However, the advantage of 

the indirect scheme is the accuracy of its solution. This accuracy is a result of the first-
order optimality conditions that are incorporated in the solution. In the remainder
of this chapter, a general-purpose program that has been developed based on these
ideas is presented. The computational comparisons are discussed with illustrative examples.

5.4.2 A General-purpose Program

An outline of the computational steps of a general program is described

here. Also, the computational flowchart is shown in Figure 5.3. Given the problem
statement and the number of intervals N, the steps of the flowchart are

• Choose Direct or Indirect approach.

• Compute the optimality condition symbolically.

1. Direct scheme: Obtain Eqs.(5.3),(5.4), (5.5), and (5.14)


2. Indirect scheme: Obtain Eqs.(5.8), (5.9), (5.11), (5.12), and (5.16)

• Discretize conditions from step 2 using collocation technique.

• Parameterize all variables, i.e., x(t) and λ(t) with piecewise polynomials and

piecewise constants for u.

• Convert all symbolic conditions to the form of  C (y) ≤ 0, where y is a vector

of all unknown coefficients.

• Create an interface with a nonlinear programming software.

• Obtain the optimal solutions.
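A minimal self-contained instance of these steps — for the toy system x^(2) = u (n = m = 1, p = 2) on [0, 1] with N = 2 intervals, a piecewise-quadratic state, a piecewise-constant control, and one collocation point per interval — is sketched below in Python (the thesis' actual implementation is the MATLAB code of Appendix E; all names here are ours):

```python
import numpy as np

def residuals(y):
    """Stack the discretized conditions into one residual vector C(y).
    y = [a1, b1, c1, u1, a2, b2, c2, u2], where on interval i the state is
    x_i(s) = a_i + b_i*s + c_i*s^2 with s measured from the interval's left end."""
    a1, b1, c1, u1, a2, b2, c2, u2 = y
    res = [2.0 * c1 - u1,                      # dynamics x^(2) - u = 0 at collocation point
           2.0 * c2 - u2]
    h = 0.5                                    # interval length
    res.append(a1 + b1 * h + c1 * h**2 - a2)   # continuity of x      (Eq.(5.14))
    res.append(b1 + 2.0 * c1 * h - b2)         # continuity of x^(1)  (Eq.(5.14))
    return np.array(res)

# A consistent trajectory: x(t) = t^2, u = 2 everywhere; on the second interval
# x = 0.25 + s + s^2, so every discretized condition is satisfied exactly.
y = np.array([0.0, 0.0, 1.0, 2.0, 0.25, 1.0, 1.0, 2.0])
print(np.abs(residuals(y)).max())  # 0.0
```

A real run would hand this residual vector (together with the discretized cost and inequality constraints) to a nonlinear programming solver as the constraint function C(y).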

We have developed general-purpose programs following the outline of the compu-
tational steps described above. These codes are written using the commercial
software MATLAB, as shown in Appendix E. Instructions on how to use
these programs are also provided in Appendix F.

In this chapter, several examples will be presented to compare direct and

indirect approaches and to illustrate the relative advantage of higher-order systems.

Most of these examples are taken from the mechanical systems literature, and are
therefore second-order systems.

5.5 Examples: Minimum Energy Problems

Minimum energy problems arise in many aerospace and robotic applications.

The objective is to find the control inputs u_i*(t) satisfying the constraints

u_l ≤ u_i(t) ≤ u_u,   i = 1, ..., m,   (5.17)

where u_l and u_u are the lower and upper limits of u_i(t), while a performance index
J of the form given below is to be minimized:

J = ∫_{t_0}^{t_f} Σ_{i=1}^{m} r_i u_i^2 dt,   (5.18)


Figure 5.3: A flowchart of the development of the general-purpose programs.

where r_i, i = 1, ..., m, are nonnegative weighting factors. This type of control
problem is usually referred to as a minimum-energy problem [29].
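With piecewise-constant controls, the integral in (5.18) reduces to a finite sum, J = Σ_i r_i Σ_k u_{ik}^2 Δt_k; a small sketch (the function name is ours):

```python
def energy_cost(r, u, dt):
    """r: weights, one per input; u: control values per input per interval;
    dt: interval lengths. Exact value of (5.18) for piecewise-constant u."""
    return sum(ri * sum(uik * uik * dtk for uik, dtk in zip(ui, dt))
               for ri, ui in zip(r, u))

# one input with r1 = 1: u = 2 on [0, 1] and u = -1 on [1, 2]
print(energy_cost([1.0], [[2.0, -1.0]], [1.0, 1.0]))  # 5.0
```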

5.5.1 Example 1: Spring-mass-damper System

A point-to-point control of a spring-mass-damper system is considered, as

shown in Figure 5.4. The equations of motion of such systems have the form

M x^(2) + C x^(1) + K x = B u,   (5.19)


Figure 5.4: A spring-mass-damper system with two masses and one input.

Figure 5.5: The 6-interval direct response curves x1, x2, and u for Example 1. The

plots for the four cases (I)-(IV) overlap within the accuracy of the drawing.

where x ∈ R^n and u ∈ R^m. The matrices M, C, K, and B are:

M = [ m_1  0 ;  0  m_2 ],   C = [ c_1 + c_2  −c_2 ;  −c_2  c_2 + c_3 ],

K = [ k_1 + k_2  −k_2 ;  −k_2  k_2 + k_3 ],   B = [ 1 ;  0 ].   (5.20)

The system equation can also be written in the first-order form as

q^(1) = A q + B u,   (5.21)


Figure 5.6: The 6-interval indirect response curves x1, x2, and u for Example 1.
The plots for the four cases (I)-(IV) overlap within the accuracy of the drawing.

Figure 5.7: The analytical response curves x1, x2, and u of Example 1 using matrixexponential.

where

A = [ 0  I_n ;  −M^{−1}K  −M^{−1}C ],   B = [ 0 ;  M^{−1}B ],   q = [ x ;  x^(1) ].   (5.22)

For the study, the parameters used in the model (in MKS units) are: m_1 = m_2 = 1.0,
c_1 = c_3 = 1.0, c_2 = 2.0, k_1 = k_2 = k_3 = 3.0. For the second-order system,
n = 2, m = 1, and p = 2. The boundary conditions specified for the problem are
x(t_0) = (10 20)^T, x^(1)(t_0) = (10 20)^T, x(t_f) = (0 0)^T, and x^(1)(t_f) = (0 0)^T, where
t_0 = 0 and t_f = 2.0. It is desired to minimize the cost J = ∫_{t_0}^{t_f} u_1^2 dt. The trajectory
must satisfy the constraint −300 ≤ u ≤ 300 during motion.
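The first-order matrices of Eqs.(5.21)-(5.22) can be assembled directly from these parameters; the following check (illustrative Python, with our own variable names) confirms the block structure of A:

```python
import numpy as np

# parameters of Example 1 (MKS units)
m1 = m2 = 1.0; c1 = c3 = 1.0; c2 = 2.0; k1 = k2 = k3 = 3.0
M = np.diag([m1, m2])
C = np.array([[c1 + c2, -c2], [-c2, c2 + c3]])
K = np.array([[k1 + k2, -k2], [-k2, k2 + k3]])
B = np.array([[1.0], [0.0]])

Minv = np.linalg.inv(M)
A = np.block([[np.zeros((2, 2)), np.eye(2)],   # q^(1) = A q + B u, Eq.(5.22)
              [-Minv @ K, -Minv @ C]])
Bbar = np.vstack([np.zeros((2, 1)), Minv @ B])
print(A[2], Bbar.ravel())  # [-6.  3. -3.  2.] [0. 0. 1. 0.]
```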


    System                            CPU run-time (s)
    1. Direct First-order (I)              20.66
    2. Direct Second-order (II)             7.80
    3. Direct Fourth-order (III)            2.91
    4. Direct Fourth-order (IV)             7.03
    5. Indirect First-order (I)            99.74
    6. Indirect Second-order (II)          22.35
    7. Indirect Fourth-order (III)         15.87
    8. Indirect Fourth-order (IV)          20.83

Table 5.1: The CPU run-time of Example 1 showing direct/indirect and first-order/higher-order comparisons.

Figure 5.8: The response curves x1, x2, and u of Example 1 when the first integralis implemented explicitly.

In this specific example, there are two different forms of the higher-order
structure: (i) the second-order structure, derived from Newton's laws as shown in Eq.
(5.19); (ii) on applying a linear transformation q = T x to Eq.(5.21), where T is a
controllability matrix corresponding to the pair (A, B), the pair (A, B) transforms to canonical
forms ([1], [7]). These canonical forms are alternatives to state-space forms and can
be represented by higher-order differential equations. In the space of transformed
variables x = (x_1 x_2 x_3 x_4)^T, the dynamic equations have a fourth-order form [1]. For
the given parameters of the problem, this equation is

x_4^(4) + 6 x_4^(3) + 17 x_4^(2) + 24 x_4^(1) + 27 x_4 = u,   (5.23)

where x_4, u ∈ R^1.
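Equation (5.23) can be cross-checked numerically: with M = I here, det(sI − A) = det(s^2 M + s C + K), so the characteristic polynomial of the state matrix A should have coefficients (1, 6, 17, 24, 27). A quick check (illustrative Python):

```python
import numpy as np

# C and K from Eq.(5.20) with the given parameters; M = I
C = np.array([[3.0, -2.0], [-2.0, 3.0]])
K = np.array([[6.0, -3.0], [-3.0, 6.0]])
A = np.block([[np.zeros((2, 2)), np.eye(2)], [-K, -C]])
coeffs = np.poly(A)        # characteristic polynomial, highest power first
print(np.round(coeffs, 6)) # matches s^4 + 6 s^3 + 17 s^2 + 24 s + 27
```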


Figure 5.9: The first integral comparisons of Example 1.

Note that Eq.(5.23) has the special property that u(t) can be expressed as a

function of state variable x4(t) and its derivatives. On substituting the expression

of control input into the cost functional J  and inequality constraints, the alternate

optimization problem is to find a state trajectory x4(t) that satisfies the constraint

−300 ≤ x_4^(4) + 6 x_4^(3) + 17 x_4^(2) + 24 x_4^(1) + 27 x_4 ≤ 300   (5.24)

and minimizes the cost functional

J = ∫_{t_0}^{t_f} ( x_4^(4) + 6 x_4^(3) + 17 x_4^(2) + 24 x_4^(1) + 27 x_4 )^2 dt.   (5.25)

As a result, this alternative problem does not include the dynamic constraint (5.23).

Since the system has several alternate dynamic models, these can be briefly

summarized as follows:

• Case (I): State-space representation which is the first-order structure as shown

in Eq.(5.21).


Figure 5.10: The accuracy of the solutions in comparison with the solution ob-tained with the first integral used explicitly as a constraint in Exam-ple 1.

• Case (II): Second-order structure derived from Newton’s laws as shown in

Eq.(5.19).

• Case (III): Using a linear transformation the system is represented by fourth-

order differential equation as shown in Eq.(5.23).

• Case (IV): The result from case (III) can be used to pose an alternate opti-

mization problem, as shown in Eqs.(5.24) and (5.25).

The optimal solutions for this example, for the number of intervals N = 6,
are plotted in Figures 5.5, 5.6, 5.8, and 5.9. The optimal solutions for cases (I), (II),
(III), and (IV) using the direct approach give the same results, as shown in Figure
5.5. Similarly, Figure 5.6 shows that these four different cases give the same results
using the indirect approach.


Here, we would like to compare the accuracy of the solutions between di-

rect and indirect approaches. The unconstrained linear optimization problem has a

closed form solution that can be obtained using the matrix exponential technique.

This analytical solution is provided in Figure 5.7 since the optimal trajectory does

not violate any constraints. Also, this problem does not explicitly depend on the

time t, so there is a first integral, as described in Chapter 2. We can also use this first

integral to refine the indirect solution, as shown in Figure 5.9. From the plots, the

indirect solution with 6 intervals that explicitly utilizes the first integral is quite

close to the closed form solutions. The use of the first integral explicitly provides

the best 6-interval solution as shown in Figure 5.9. From the plots, we observe that

the indirect approach provides a more accurate solution than the direct approach.

However, Table 5.1 shows that the CPU run-time using the direct scheme is
less than that of the indirect approach. This difference can be attributed to the presence

of a larger number of variables in the indirect method due to Lagrange multipliers.

The benefit of using higher-order methods is also shown in Table 5.1. From

the table, the higher-order solution takes less CPU run-time compared to the first-
order solutions, both in the direct and indirect methods.

For the fourth-order system, for which we have two different problem formulations,
the results in Table 5.1 show that keeping a control input u(t) in the problem gives
better CPU run-time than the alternate problem where u(t) is eliminated.
This can be explained by the numerical algorithms used in this chapter. Since the
control input u(t) is represented by a piecewise constant, searching for an admissible
solution within a set of constants is easier.

5.5.1.1 An example of the input file to the general-purpose programs:

user.m

In this section, we provide an example code (user.m) that the user needs to
modify as an input to the general-purpose programs. The user.m file below
is consistent with the system described by the second-order differential equations,
i.e., case (II).

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% This matlab file is called user.m                               %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%

% 1. Number of state variables...format>>> n=?; %

n=2; %

%

% 2. Number of control variables...format>>> m=?; %

  m=1;

%

% 3. Number of the highest derivative of state variable... %

%format>>> p=?; ps is needed for each equation. %

p=2; %

ps=[2;2]; %

%

% 4. Number of the intervals***...format>>> N=?; %

% This interval means that users can set the accuracy of the %

%solutions.If N is large,it will be more accuracy of the solutions%

%but not always*. %

N=6; %

%

% 5. Initial time...format>>> t0=?; %
t0=0; %

%

% 6 Final time...format>>> tf=?. %

% where N=N in section4. %

tf=2; %

%

%7.This is where the cost function needed to be defined in the sym%

%bolic format with one must understand what is the logic below. %

% 7.1 Control inputs must set as u1, u2, ..., um. %

% 7.2 State variables...As the user knows how many state vari- %
%ables he/she has. %

% State variables must set as x10, x11, ..., x1p, x20, x21,%

% ..., x2p, ..., xn0, xn1, ..., xnp %

% where x10=x1, x11=x1_dot, and so on. %

% Format>>> L=sym('?'); %
L=sym('u1^2'); %


%

% 8. This place is where the state equations have to be defined as%

%format below: %

% Format>>> fs(1,1)=sym('?');, ..., fs(n,1)=sym('?'); %
fs(1,1)=sym('x12+3*x11-2*x21+6*x10-3*x20-u1'); %
fs(2,1)=sym('x22-2*x11+3*x21-3*x10+6*x20'); %

%

% 9. Also, at this point, users need to provide all constraints. %

% Format>>> cu(1,1)=sym('?');, ..., cu(m,1)=sym('?'); where m is %
% the number of the control input. cu must be defined for all %
% control variables.******* %
% Format>>> cx(1,1)=sym('?');, ..., cx(r,1)=sym('?'); where r is %

% the number of the constraints %

% Detail: %

% cu means constraints contain only control inputs. %
% cx means constraints contain only state or both control and sta-%

% te variables. %

% 9.1 Further more for the lower and upper bound of the inequality%

% constraints. If one has the equality constraints, the lower %

% and upper bounds must defined as the same values. %

% Format>>> cu_l(1,1)=?;, ..., cu_l(m,1)=?; %

% cu_u(1,1)=?;, ..., cu_u(m,1)=?; %

% and so on where l and u mean upper/lower bound respectively.%

%*****************************************************************%

ru=1; %
cu(1,1)=sym('u1'); %
%cu(2,1)=sym('u2'); %

%

cu_l(1,1)=-300; %

cu_u(1,1)= 300; %

%*****************************************************************%

%

r=0; % remember this is a number of the constraint of x and u. %

rr=ru+r; %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%cx(1,1)=sym('x12'); %

%cx_l(1,1)=-4; %

%cx_u(1,1)=4; %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%

% 10. Boundary Conditions %


% Format>>> x0(i)=?; and xf(i)=?; %

% where x0(1) =x10dot, x0(2) =x11dot, ..., x0(p) =x1pdot %

% x0(p+1)=x20dot, x0(p+2)=x21dot, ..., x0(2p)=x2pdot %

% ... %

% x0((n-1)p+1)=xn0dot, x0((n-1)p+2)=xn1dot, ..., x0(np)=xnpdot%

% and repeat with the same logic for xf(1),...,xf(np). %

%

x0(1,1)=sym('x10'); %
x0(2,1)=sym('x11'); %
x0(3,1)=sym('x20'); %
x0(4,1)=sym('x21'); %

%Bound of x0 %

x0_l(1,1)=10; %

x0_l(2,1)=10; %

x0_l(3,1)=20; %
x0_l(4,1)=20; %

x0_u(1,1)=10; %

x0_u(2,1)=10; %

x0_u(3,1)=20; %

x0_u(4,1)=20; %

%

xf(1,1)=sym('x10'); %
xf(2,1)=sym('x11'); %
xf(3,1)=sym('x20'); %
xf(4,1)=sym('x21'); %
%Bound of xf %

xf_l(1,1)=0; %

xf_l(2,1)=0; %

xf_l(3,1)=0; %

xf_l(4,1)=0; %

xf_u(1,1)=0; %

xf_u(2,1)=0; %

xf_u(3,1)=0; %

xf_u(4,1)=0; %

%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%

% 11. Define the minmax values of all parameters. %

%

absmax=1e13; %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%


%***********************end of user.m file*************************

5.5.2 Example 2: Flexible Link

Figure 5.11: Single-link robot with joint flexibility.

This example is of a single-link manipulator rotating in a vertical plane,
driven through a flexible drive train [38], shown in Figure 5.11. The system has

two degrees-of-freedom and the equations of motion are

I_1 q_1^(2) + m_1 g l sin q_1 + k(q_1 − q_2) = 0,   (5.26)

I_2 q_2^(2) − k(q_1 − q_2) = u,   (5.27)

where I 1 and I 2 are, respectively, the link and actuator moment of inertia, m1 is the

mass of the link with its mass center at a distance l from the joint, k is the stiffness

of the drive train, g is the gravity constant, and u is the actuator torque. Let the
objective be to steer the system from a given set of initial conditions on q_1, q_2,
q_1^(1), and q_2^(1) at t_0 to a specified goal point at t_f while minimizing a cost
J = ∫_{t_0}^{t_f} u^2 dt. The

trajectory must satisfy the constraint −15 ≤ u ≤ 15 during motion. The parameters

used in the model (in MKS units) are: I 1 = I 2 = 1.0, k = 1.0, g = 9.8, m1 = 0.01,

and l = 0.5. The second-order system, as described in Eqs. (5.26) and (5.27), can


Figure 5.12: The 6-interval direct response curves q1, q2, and u for Example 2.The plots for the four cases (I)-(IV) overlap within the accuracy of the drawing.

Figure 5.13: The 6-interval indirect response curves q1, q2, and u for Example 2.The plots for the four cases (I)-(IV) overlap within the accuracy of 

the drawing.

be rewritten in state-space form as

q_1^(1) = q_3,   (5.28)

q_2^(1) = q_4,   (5.29)

q_3^(1) = −(m_1 g l sin q_1 + k(q_1 − q_2))/I_1,   (5.30)

q_4^(1) = (k(q_1 − q_2) + u)/I_2,   (5.31)

which is considered a first-order system.

The two second-order differential equations describing the system have a


Figure 5.14: The response curves q1, q2, and u of Example 2 when the first integralis implemented explicitly.

    System                            CPU run-time (s)
    1. Direct First-order (I)              10.00
    2. Direct Second-order (II)             4.70
    3. Direct Fourth-order (III)            2.38
    4. Direct Fourth-order (IV)             8.75
    5. Indirect First-order (I)           434.8
    6. Indirect Second-order (II)         137.8
    7. Indirect Fourth-order (III)         58.62
    8. Indirect Fourth-order (IV)          68.17

Table 5.2: The CPU run-time of Example 2 showing direct/indirect and first-order/higher-order comparisons.

special structure. From Eq. (5.26), q_2 can be written explicitly in terms of q_1 and its
second derivative. On substituting this expression for q_2 into Eq. (5.27), we obtain a
single fourth-order differential equation in the variable q_1 up to its fourth derivative

q_1^(4) = α_1 u + (α_2 cos q_1 + α_3) q_1^(2) + α_4 sin q_1 (q_1^(1))^2 + α_5 sin q_1,   (5.32)

where the α_i are constants with α_1 = k/(IJ), α_2 = −m_1 g l/I, α_3 = −k(I + J)/(IJ),
α_4 = m_1 g l/I, and α_5 = −m_1 g k l/(IJ), writing I = I_1 and J = I_2.

The above differential equation has the structure of Eq. (2.1) with n = 1 and

m = 1. As a result, we can re-express the control input u(t) as a function of the state
variable q_1(t) and its derivatives. As such, the dynamic constraint (5.32) can be eliminated
by substituting the expression for the control input into the cost functional J and the inequality


Figure 5.15: The first integral comparison of Example 2.

constraints on u(t). We have an alternative optimization problem as follows: find

the state trajectory q1(t) that satisfies the constraints

−15 ≤ [ q_1^(4) − (α_2 cos q_1 + α_3) q_1^(2) − α_4 sin q_1 (q_1^(1))^2 − α_5 sin q_1 ] / α_1 ≤ 15   (5.33)

and minimizes the cost functional

J = ∫_{t_0}^{t_f} ( [ q_1^(4) − (α_2 cos q_1 + α_3) q_1^(2) − α_4 sin q_1 (q_1^(1))^2 − α_5 sin q_1 ] / α_1 )^2 dt.   (5.34)

As a result, this alternative problem does not include the dynamic constraint (5.32).
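For the given parameters, the α_i take simple numerical values; a quick arithmetic check (illustrative Python, writing I = I_1 and J = I_2 as in the text):

```python
# parameters of Example 2 (MKS units)
I1 = I2 = 1.0; k = 1.0; g = 9.8; m1 = 0.01; l = 0.5
alpha1 = k / (I1 * I2)                 # = 1.0
alpha2 = -m1 * g * l / I1              # = -0.049
alpha3 = -k * (I1 + I2) / (I1 * I2)    # = -2.0
alpha4 = m1 * g * l / I1               # = 0.049
alpha5 = -m1 * g * k * l / (I1 * I2)   # = -0.049
```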

Since the system has several alternate dynamic models, these can be briefly

summarized as follows:

• Case (I): State-space representation which is the first-order structure as shown

in Eqs.(5.28)-(5.31).

• Case (II): Second-order structure derived from Newton’s laws as shown in

Eqs.(5.26) and (5.27).


Figure 5.16: The accuracy of the solutions in comparison with the solution ob-tained with the first integral used explicitly as a constraint in Exam-ple 2.

• Case (III): Fourth-order differential equation as shown in Eq.(5.32).

• Case (IV): The result from case (III) can be used to pose an alternate opti-

mization problem, as shown in Eqs.(5.33) and (5.34).

The results for this nonlinear example have similar performance as the spring-

mass-damper system in terms of accuracy, CPU run-time, and computational saving.

The results are shown in Table 5.2 and Figs. 5.12, 5.13, 5.14, and 5.15.
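The elimination of the control in Eqs. (5.33)-(5.34) is easy to check numerically: given any smooth candidate trajectory q_1(t), the implied control is recovered algebraically from the fourth-order model and the cost is evaluated by quadrature. The sketch below is in Python rather than the Matlab used by the general-purpose programs; the α values are illustrative stand-ins loosely based on the coefficients appearing in user.m, and the quartic test trajectory is arbitrary.

```python
import math

# Illustrative model parameters (assumed values, not the thesis's alphas)
a1, a2, a3, a4, a5 = 1.0, -0.049, -2.0, 0.049, 0.049

def q1(t):    return t**2 * (1 - t)**2        # candidate trajectory on [0, 1]
def q1_d1(t): return 2*t*(1-t)**2 - 2*t**2*(1-t)
def q1_d2(t): return 2*(1-t)**2 - 8*t*(1-t) + 2*t**2
def q1_d4(t): return 24.0                     # fourth derivative of a quartic

def control(t):
    """Recover u(t) algebraically from the fourth-order model, as in Eq. (5.33)."""
    return (q1_d4(t) - (a2*math.cos(q1(t)) + a3)*q1_d2(t)
            - a4*math.sin(q1(t))*q1_d1(t)**2 - a5*math.sin(q1(t))) / a1

def cost(n=1000):
    """Trapezoidal quadrature of J = integral of u(t)^2 dt, as in Eq. (5.34)."""
    h = 1.0 / n
    s = 0.5*(control(0.0)**2 + control(1.0)**2)
    for k in range(1, n):
        s += control(k*h)**2
    return s*h
```

Because u never appears as an unknown, the alternative problem optimizes over the state trajectory alone, which is the computational saving exploited by case (IV).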

5.5.2.1 An example of the input file to the general-purpose programs:

user.m

In this section, we provide an example of the input file (user.m) that the user needs to modify for the general-purpose programs. In particular, the user.m file below


corresponds to the system described by the fourth-order differential equation, i.e., case (III).

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% This matlab file is called user.m %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%

% 1. Number of state variables...format>>> n=?; %

n=1; %

%

% 2. Number of control variables...format>>> m=?; %

  m=1;

%

% 3. Number of the highest derivative of state variable... %

%format>>> p=?; ps is needed for each equation. %

p=4; %

ps=[4]; %

%

% 4. Number of the intervals***...format>>> N=?; %

% This interval sets the accuracy of the solutions. If N is %
% large, the solutions will usually (but not always*) be more %
% accurate. %

N=6; %

%

% 5. Initial time...format>>> t0=?; %
t0=0; %

%

% 6 Final time...format>>> tf=?. %

% where N=N in section4. %

tf=1; %

%

%7.This is where the cost function needed to be defined in the sym%

%bolic format with one must understand what is the logic below. %

% 7.1 Control inputs must set as u1, u2, ..., um. %

% 7.2 State variables...The user knows how many state variables %
% he/she has. %

% State variables must set as x10, x11, ..., x1p, x20, x21,%

% ..., x2p, ..., xn0, xn1, ..., xnp %

% where x10=x1, x11=x1_dot, and so on. %

% Format>>> L=sym(’?’); %

L=sym(’u1^2’); %


%

% 8. This place is where the state equations have to be defined as%

%format below: %

% Format>>> fs(1,1)=sym(’?’);, ..., fs(n,1)=sym(’?’); %

fs(1,1)=sym('x14-(u1+(-0.049*cos(x10)-2)*x12+0.049*sin(x10)*x11^2-0.049*sin(x10))');

%

% 9. Also, at this point, users need to provide all constraints. %

% Format>>> cu(1,1)=sym(’?’);, ..., cu(m,1)=sym(’?’); where m is %

% the number of the control input. cu must be defined for all %

% control variables.******* %

% Format>>> cx(1,1)=sym(’?’);, ..., cx(r,1)=sym(’?’); where r is %

% the number of the constraints %

% Detail: %

% cu means constraints that contain only control inputs. %
% cx means constraints that contain only state, or both control %
% and state, variables. %

% 9.1 Further more for the lower and upper bound of the inequality%

% constraints. If one has the equality constraints, the lower %

% and upper bounds must defined as the same values. %

% Format>>> cu_l(1,1)=?;, ..., cu_l(m,1)=?; %

% cu_u(1,1)=?;, ..., cu_u(m,1)=?; %

% and so on where l and u mean upper/lower bound respectively.%

%*****************************************************************%

ru=1; %
cu(1,1)=sym('u1'); %

%cu(2,1)=sym(’u2’); %

%

cu_l(1,1)=-15; %

cu_u(1,1)= 15; %

%*****************************************************************%

%

r=0; % remember this is a number of the constraint of x and u. %

rr=ru+r; %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%cx(1,1)=sym(’x12’); %

%cx_l(1,1)=-4; %

%cx_u(1,1)=4; %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%

% 10. Boundary Conditions %


% Format>>> x0(i)=?; and xf(i)=?; %

% where x0(1) =x10dot, x0(2) =x11dot, ..., x0(p) =x1pdot %

% x0(p+1)=x20dot, x0(p+2)=x21dot, ..., x0(2p)=x2pdot %

% ... %

% x0((n-1)p+1)=xn0dot, x0((n-1)p+2)=xn1dot, ..., x0(np)=xnpdot%

% and repeat with the same logic for xf(1),...,xf(np). %

%

x0(1,1)=sym(’x10’); %

x0(2,1)=sym(’x11’); %

x0(3,1)=sym(’x12’); %

x0(4,1)=sym(’x13’); %

%Bound of x0 %

x0_l(1,1)=0.03; %

x0_l(2,1)=0.04; %

x0_l(3,1)=-0.0215; %
x0_l(4,1)=0.008; %

x0_u(1,1)=0.03; %

x0_u(2,1)=0.04; %

x0_u(3,1)=-0.0215; %

x0_u(4,1)=0.008; %

%

xf(1,1)=sym(’x10’); %

xf(2,1)=sym(’x11’); %

xf(3,1)=sym(’x12’); %

xf(4,1)=sym(’x13’); %%Bound of xf %

xf_l(1,1)=0.06; %

xf_l(2,1)=0.08; %

xf_l(3,1)=-0.0429; %

xf_l(4,1)=-0.0639; %

xf_u(1,1)=0.06; %

xf_u(2,1)=0.08; %

xf_u(3,1)=-0.0429; %

xf_u(4,1)=-0.0639; %

%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%

% 11. Define the minmax values of all parameters. %

%

absmax=1e6; %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%


%***********************end of user.m file*************************

5.6 Examples: Minimum-time Problem

In this section, the minimum-time problem is considered, where the objective

is to steer the system from the initial state at t0 to a specified target in minimum

time. The performance index J  is mathematically expressed as

J = ∫_{t_0}^{t_f} dt = t_f − t_0.   (5.35)

Typically, the control variables are constrained as

u_l ≤ u_i(t) ≤ u_u,   i = 1, ..., m,   t ∈ [t_0, t^*],   (5.36)

where t^* denotes the minimum-time solution. To present some important aspects

of the minimum-time problem, let us consider the following examples.

5.6.1 Example 3: Single Mass

Figure 5.17: One mass system.

Consider a system with only a mass and one input. The system has one degree-of-freedom and the equation of motion is

m x^{(2)} = u,   (5.37)

where u(t) is the actuator input; let the mass be m = 1. The above equation can be


Systems                     CPU run-time (seconds)
1. Direct first-order       226.68
2. Direct second-order      129.03
3. Indirect first-order     330.66
4. Indirect second-order     16.14

Table 5.3: The CPU run-times of Example 3, comparing the direct/indirect and first-order/second-order formulations.

Figure 5.18: The 6-interval direct response curves x, x^{(1)}, and u for Example 3. The plots of first-order and second-order systems overlap within the accuracy of the drawing.

written in state-space form as

x^{(1)} = x_1,
x_1^{(1)} = u,   (5.38)

where x(t) and x^{(1)}(t) are the position and velocity of the mass, respectively. The objective is to steer the system from a given set of initial conditions on x(0) and x^{(1)}(0) to a specified goal point at t_f while minimizing the cost J = ∫_{t_0}^{t_f} dt = t_f − t_0. The trajectory must satisfy the constraint −4 ≤ u ≤ 4 during motion. The boundary conditions are taken to be x(0) = 0, x^{(1)}(0) = 0, x(1) = 1, and x^{(1)}(1) = 0.

The results for this example again show performance similar to the previous systems in terms of accuracy and CPU run-time; these are shown in Table 5.3. The optimal solutions are shown in Figures 5.18 and 5.19.
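For this double integrator the minimum-time solution is known in closed form: a single-switch bang-bang control, giving t^* = 2√(d/u_max) = 2√(1/4) = 1 for a rest-to-rest transfer over unit distance with |u| ≤ 4, which matches the end time used in the boundary conditions above. A minimal simulation sketch (Python, illustrative only, not part of the thesis's Matlab toolboxes) confirms that this control reaches the goal state:

```python
def simulate_bang_bang(u_max=4.0, t_star=1.0, steps=2000):
    """Integrate x'' = u with u = +u_max then -u_max, switching at t*/2."""
    dt = t_star / steps
    x, v = 0.0, 0.0
    for k in range(steps):
        u = u_max if k < steps // 2 else -u_max
        v += u * dt          # semi-implicit (symplectic) Euler
        x += v * dt
    return x, v

x_f, v_f = simulate_bang_bang()
# rest-to-rest transfer over unit distance in t* = 2*sqrt(d/u_max) = 1 s
assert abs(x_f - 1.0) < 1e-6 and abs(v_f) < 1e-6
```

This closed-form solution is a useful sanity check on the collocation results that follow.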


Figure 5.19: The 6-interval indirect response curves x, x^{(1)}, and u for Example 3. The plots of first-order and second-order systems overlap within the accuracy of the drawing.

Systems                     CPU run-time (seconds)
1. Direct first-order        92.87
2. Direct second-order       59.62
3. Indirect first-order     259.67
4. Indirect second-order    133

Table 5.4: The CPU run-times of Example 4, comparing the direct/indirect and first-order/second-order formulations.

5.6.2 Example 4: Flexible Link

The single-link manipulator of Fig. 5.11 is once again considered. The objective is to steer the system from a given set of initial conditions on q_1, q_2, q_1^{(1)}, and q_2^{(1)} at t_0 to a specified goal point at t_f while minimizing the cost J = ∫_{t_0}^{t_f} dt. The

trajectory must satisfy the constraint −1 ≤ u ≤ 1 during motion. The parameters

used in the model (in MKS units) are: I 1 = I 2 = 1.0, k = 5.0, g = 9.8, m1 = 0.01,

and l = 0.5. The results for this nonlinear example show performance similar to the previous example in terms of accuracy, CPU run-time, and computational savings. These are shown in Table 5.4 and Figs. 5.20 and 5.21.

It is well known that an optimal solution of a minimum-time problem must be a bang-bang solution. However, the approximate solutions in Figs. 5.20 and 5.21 are not bang-bang. This can be explained based on


Figure 5.20: The 6-interval direct response curves q_1, q_2, and u for Example 4. The plots of first-order and second-order systems overlap within the accuracy of the drawing.

Figure 5.21: The 6-interval indirect response curves q_1, q_2, and u for Example 4. The plots of first-order and second-order systems overlap within the accuracy of the drawing.

the collocation technique used in this chapter. By assigning collocation points in the minimum-time problems, we implicitly assume that the switching points occur at those points. In reality, this need not happen, as Figs. 5.20 and 5.21 show. Nevertheless, these solutions provide the best approximation within the admissible form of solutions that we chose.


5.7 Summary

This chapter described general computational approaches for handling complex constrained optimization problems. These approaches can be applied to problems characterized by differential equations of any order. Nonlinear programming was selected as the main numerical algorithm, and optimal control problems were converted to nonlinear programming problems by both direct and indirect approaches. Both approaches used the collocation technique to discretize the time domain. As a result, the dynamic optimization problems become static problems that can be solved by nonlinear programming. Based on these ideas, general-purpose programs were developed.

Several examples were selected from mechanical systems and solved for their optimal solutions. We conclude that the indirect method is more accurate than the direct method; however, the indirect approach requires significantly more CPU run-time. The higher-order forms significantly reduce on-line computation in both the direct and indirect approaches. Therefore, real-time implementation may benefit from considering systems in their higher-order forms as opposed to working with their first-order representations.


Chapter 6

APPLICATION OF OPTIMAL CONTROL TO ROBOT

SYSTEMS

An important class of mechanical systems is robot manipulators. It is well known that open-chain robots are described by a set of second-order differential equations. The first part of this chapter deals with the basic theory of robot systems and the derivation of their dynamic equations of motion. The chapter then specializes the theory of the previous chapters to systems described by second-order differential equations.

6.1 Dynamic Equations of Robot Systems

The fundamentals of the dynamic equations of open-chain manipulators are described briefly in this section. The successive bodies in a robot chain are characterized by four Denavit-Hartenberg parameters ([19], [38], [39]). These are used in the

analysis of forward kinematics. The Jacobian is a mapping between the Cartesian

velocity of the end effector and the joint rates. Finally, Euler-Lagrange equations

are used to write the dynamic equations of motion, represented by second-order

differential equations.

6.1.1 Forward Kinematics and Denavit-Hartenberg Parameters

A robot is a set of rigid links connected in a serial chain using revolute or

prismatic joints. It is customary to assume that all joints of the robot have only


a single degree-of-freedom. A revolute joint is characterized by a rotation while a

prismatic joint by a linear displacement.

Suppose that a robot has links numbered from 0 to n, starting at the base. The joints are numbered from 1 to n, where the ith joint connects link i to link i − 1. The variable of each joint is denoted by a parameter q_i, which is either a rotation angle or a displacement. The base of the robot is assigned the frame F_0, and frames F_1 through F_n are assigned to links 1 through n, respectively.

A homogeneous matrix A_i transforms the coordinates of a point from frame F_i to F_{i−1}. A_i is a function of q_i, which is either θ_i or d_i. Mathematically, we have

A_i = [ cos θ_i    −sin θ_i cos α_i    sin θ_i sin α_i    a_i cos θ_i
        sin θ_i     cos θ_i cos α_i   −cos θ_i sin α_i    a_i sin θ_i
        0           sin α_i            cos α_i            d_i
        0           0                  0                  1 ],   (6.1)

where a_i, d_i, θ_i, and α_i are the D-H parameters of link i and joint i [38]. Figure 6.1 shows the assignment of the D-H frames, in which the link parameters a_i, d_i, θ_i, and α_i are defined as:

• a_i, called the link length, is the distance along x_i from O_i to the intersection of x_i and z_{i−1}.

• d_i, called the offset, is the distance along z_{i−1} from O_{i−1} to the intersection of x_i and z_{i−1}.

• α_i, called the twist, is the angle between z_{i−1} and z_i, measured about x_i.

• θ_i, called the joint angle, is the angle of link i relative to link i − 1, measured about z_{i−1}.

The transformation from coordinate frame F_j to frame F_i is denoted by T_i^j. Conventionally, this transformation matrix is defined as

T_i^j = A_{i+1} A_{i+2} ... A_{j−1} A_j   if i < j,

T_i^j = I   if i = j,

T_i^j = (T_j^i)^{−1}   if i > j.   (6.2)

It is known that each homogeneous transformation matrix Ai has the form

A_i = [ R_{i−1}^i   d_{i−1}^i
        0            1 ],   (6.3)

which implies that

T_i^j = A_{i+1} ... A_j = [ R_i^j   d_i^j
                            0        1 ],   (6.4)

where the matrix R_i^j expresses the orientation of frame F_j relative to frame F_i, and the vector d_i^j describes the position of the origin of frame F_j with respect to frame F_i.

Figure 6.1: The Denavit-Hartenberg parameters.
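The construction of A_i in Eq. (6.1) and the composition rule of Eq. (6.2) can be sketched as follows. This is a minimal Python illustration (not part of the thesis's Matlab codes); the two-unit-link check at the end is an assumed test case.

```python
import math

def dh_matrix(theta, d, a, alpha):
    """Homogeneous transform A_i built from the four D-H parameters, Eq. (6.1)."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [[ct, -st*ca,  st*sa, a*ct],
            [st,  ct*ca, -ct*sa, a*st],
            [0.0,    sa,     ca,    d],
            [0.0,   0.0,    0.0,  1.0]]

def matmul(A, B):
    """4x4 matrix product, used to chain transforms as in Eq. (6.2)."""
    return [[sum(A[i][k]*B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Two unit links, both joints at zero: the end effector sits at (2, 0, 0).
T = matmul(dh_matrix(0, 0, 1, 0), dh_matrix(0, 0, 1, 0))
assert abs(T[0][3] - 2.0) < 1e-12
```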


6.1.2 Jacobian

The Jacobian matrix is an important quantity in the derivation of the dynamic equations of motion. For an n-link robot with joint variables q_1, ..., q_n, the

transformation matrix from the end effector to the base frame has the form

T_0^n(q) = [ R_0^n(q)   d_0^n(q)
             0           1 ],   (6.5)

where q = (q_1, ..., q_n)^T is the vector of joint variables. Linear and angular velocities can now be derived by computing the time derivatives of R_0^n(q) and d_0^n(q). This yields the mapping between Cartesian space and joint space given by

( V_0^n
  ω_0^n ) = J_0^n ( q_1^{(1)}
                    q_2^{(1)}
                    ⋮
                    q_n^{(1)} ),   (6.6)

where

J_0^n = ( J_{0v}^n
          J_{0ω}^n ).   (6.7)

J_{0v}^n and J_{0ω}^n have the forms

J_{0v}^n = [J_{v1} | J_{v2} | ... | J_{vn}];   J_{0ω}^n = [J_{ω1} | J_{ω2} | ... | J_{ωn}],   (6.8)

with J_{vi} = z_{i−1} × (O_n − O_{i−1}) if joint i is revolute and J_{vi} = z_{i−1} if it is prismatic. Here, O_n denotes the vector from the origin of the base frame O_0 to the origin of frame n. Similarly, the ith column of J_ω is J_{ωi} = ρ_i z_{i−1}, where ρ_i equals 1 if the joint is revolute and 0 if it is prismatic.
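The cross-product formula for the Jacobian columns can be verified against finite differences of the forward kinematics. This Python sketch is illustrative only; a planar 2R arm with unit links is assumed, so both joint axes coincide with the base z-axis.

```python
import math

def fk(q, l=(1.0, 1.0)):
    """Origins O0, O1, O2 of a planar 2R arm (all joints revolute about z)."""
    x1 = l[0]*math.cos(q[0]); y1 = l[0]*math.sin(q[0])
    x2 = x1 + l[1]*math.cos(q[0]+q[1]); y2 = y1 + l[1]*math.sin(q[0]+q[1])
    return [(0.0, 0.0, 0.0), (x1, y1, 0.0), (x2, y2, 0.0)]

def cross(a, b):
    return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])

def jac_columns(q):
    """J_vi = z_{i-1} x (O_n - O_{i-1}) for the two revolute joints."""
    O = fk(q); z = (0.0, 0.0, 1.0)   # both joint axes are the base z here
    On = O[2]
    return [cross(z, tuple(On[j]-O[i][j] for j in range(3))) for i in range(2)]

# Cross-product columns agree with finite differences of forward kinematics.
q = [0.3, 0.5]; h = 1e-6
J = jac_columns(q)
for i in range(2):
    qp = list(q); qp[i] += h
    fd = [(fk(qp)[2][j] - fk(q)[2][j]) / h for j in range(3)]
    assert all(abs(J[i][j] - fd[j]) < 1e-5 for j in range(3))
```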


6.1.3 Dynamic Equations of Motion

Euler-Lagrange equations can be used to generate the general set of differential equations that describe the motion of a mechanical system. The Euler-Lagrange equations are written as

d/dt ( ∂L/∂q_i^{(1)} ) − ∂L/∂q_i = u_i,   i = 1, ..., n,   (6.9)

where L = K − V is the Lagrangian of the system, K and V are the kinetic and potential energies of the system, respectively, and u_i is a set of actuator torques or forces. The kinetic energy can be written in terms of the Jacobian matrices for the centers of mass of the links:

K = (1/2) q^{(1)T} [ Σ_{i=1}^{n} ( m_i J_{vci}^T(q) J_{vci}(q) + J_{ωi}^T(q) R_i(q) I_i R_i^T(q) J_{ωi}(q) ) ] q^{(1)},   (6.10)

where Ri is a rotation matrix of link i and I i is the inertia matrix of link i. As a

consequence, the equations of motion of an n degree-of-freedom mechanical system

are n second-order differential equations of the form

Σ_{s=1}^{n} D_{is} q_s^{(2)} + Σ_{s=1}^{n} Σ_{p=1}^{n} h_{ips} q_p^{(1)} q_s^{(1)} + ∂V/∂q_i = u_i,   i = 1, ..., n,   (6.11)

where q_1, ..., q_n are generalized coordinates, the D_{is} are inertia-matrix terms, the h_{ips} q_p^{(1)} q_s^{(1)} are Coriolis and centripetal terms, the ∂V/∂q_i are gravity terms, and the u_i are actuator inputs.

6.2 Optimality Conditions

Since the highest derivative of the dynamic variable q is two, the optimality conditions derived from both the calculus of variations and Pontryagin's principle in Chapter 2 can be applied with p = 2. To highlight these conditions, let us define a specific problem: find the optimal trajectory for a dynamic system described by

x^{(2)} = f(x, x^{(1)}, u, t),   (6.12)


where x ∈ R^n and u ∈ R^m; here x ≡ q. The trajectory must minimize the performance index

J = φ(x(t_f), x^{(1)}(t_f), t_f) + ∫_{t_0}^{t_f} L(x, x^{(1)}, u, t) dt,   (6.13)

and satisfy the constraints

c_j(x, x^{(1)}, u, t) ≤ 0,   j = 1, ..., r,   (6.14)

where r is the number of constraints. Let us now summarize the optimality results from Chapter 2 for the fixed-end-time, fixed-end-point problem, i.e., t_0 and t_f are specified and x and x^{(1)} are given at t_0 and t_f.

Variational Calculus

Adjoining Eqs. (6.12) and (6.14) to the functional in Eq. (6.13) with Lagrange multipliers yields

J = φ + ∫_{t_0}^{t_f} [ L + λ^T (f − x^{(2)}) + μ^T (c + s^2) ] dt.   (6.15)

Applying the calculus of variations and defining L̄ = L + λ^T (f − x^{(2)}) + μ^T (c + s^2), we have the necessary conditions for an extremum:

x^{(2)} = f(x, x^{(1)}, u, t),   (6.16)

λ^{(2)T} = ∂L̄/∂x − d/dt ( ∂L̄/∂x^{(1)} ),   (6.17)

∂L̄/∂u = 0,   (6.18)

c_i(x, x^{(1)}, u, t) ≤ 0,   i = 1, ..., r,   (6.19)

μ_i > 0 if c_i = 0, and μ_i = 0 if c_i < 0,   i = 1, ..., r.   (6.20)
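The multiplier conditions above can be illustrated on a scalar toy problem (not from the dissertation): minimize (u − 3)^2 subject to u − u_max ≤ 0. When the bound is active the multiplier is positive; when it is inactive the multiplier vanishes, so complementary slackness μ c(u) = 0 holds in both cases. A minimal Python sketch:

```python
def kkt_multiplier(target, upper):
    """Minimize (u - target)^2 s.t. u - upper <= 0; return (u*, mu)."""
    u = min(target, upper)           # projection gives the constrained minimizer
    mu = max(0.0, 2.0*(target - u))  # stationarity: 2(u - target) + mu = 0
    return u, mu

u, mu = kkt_multiplier(3.0, 1.0)   # constraint active: c = 0, mu > 0
assert u == 1.0 and mu == 4.0
u, mu = kkt_multiplier(3.0, 5.0)   # constraint inactive: c < 0, mu = 0
assert u == 3.0 and mu == 0.0
```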

Pontryagin’s Principle


In terms of the Hamiltonian

H = L(x, x^{(1)}, u, t) + λ^T(t) f(x, x^{(1)}, u, t) + μ^T(t) (c + s^2),   (6.21)

the necessary conditions for the optimal control u^* are

x^{*(2)} = f(x^*, x^{*(1)}, u^*, t),   (6.22)

λ^{*(2)T} = ∂H/∂x − d/dt ( ∂H/∂x^{(1)} ),   (6.23)

H(x^*, x^{*(1)}, u^*, t) ≤ H(x^*, x^{*(1)}, u, t)   for all admissible u(t),   (6.24)

c_i(x, x^{(1)}, u, t) ≤ 0,   i = 1, ..., r,   (6.25)

μ_i > 0 if c_i = 0, and μ_i = 0 if c_i < 0,   i = 1, ..., r.   (6.26)

The necessary conditions for robotic systems, derived from both the calculus of variations and Pontryagin's principle, are shown above. Moreover, sufficient conditions for dynamic systems described by second-order differential equations were already derived in Chapter ??, namely Eqs. (??) through (??).

At the beginning of this chapter, we provided the fundamental background and a common approach for obtaining the dynamic equations of motion of open-chain manipulators [19], [38], [39]. This allows us to develop Matlab symbolic code in which the equations of motion are derived symbolically once the D-H frames are assigned, as shown in Figure 6.2. We have developed such symbolic codes and made them available as shareware at:

http://mechsys6.me.udel.edu/dyn_eq/.

In the previous chapter, we developed general-purpose programs for dynamic optimal control problems. In this chapter, we have additionally developed symbolic code for deriving the equations of motion of general open-chain manipulator systems. Together, these provide a complete set of codes for robot systems, as shown in Figure 6.2.


Figure 6.2: The process of developing the dynamic equations of motion and interfacing with the dynamic optimization toolboxes.

6.3 Illustrative Examples

6.3.1 Example 1: The one-armed manipulator with Coulomb Friction

We consider a physical pendulum of mass m and length L, shown in Fig-ure 6.3, which has an actuator input as M a at the base together with a resistive

Coulomb friction couple proportional to the radial force Rx. Furthermore, we as-

sume that the Coulomb friction couple C f  has magnitude C f  = µ|Rx| where Rx

is the radial component of the reaction at the support O, µ is the coefficient of 

friction between the sleeve and the hub, and is the hub radius. The system has

one degree-of-freedom and the equation of motion is

(1/3) m L^2 θ_1^{(2)} + μℓ m (L/2) (θ_1^{(1)})^2 + m g (L/2) sin θ_1 + μℓ m g cos θ_1 = M_a,   (6.27)

where L is the link length, m is the mass of the link (with mass center at a distance L/2 from the base of the pendulum), g is the gravitational constant, and M_a is the actuator moment. Let the objective be to steer the system from a given set of initial


Figure 6.3: One-armed manipulator with friction.

conditions on θ_1 and θ_1^{(1)} at t_0 to a specified goal point at t_f while minimizing the cost J = ∫_{t_0}^{t_f} (M_a)^2 dt. The trajectory must satisfy the constraint −50 ≤ M_a ≤ 50 during motion. The parameters used in the model (in MKS units) are: L = 1.0, m = 1.0, g = 9.8, μ = 0.3, and ℓ = 0.01. The second-order system described in Eq. (6.27) can be rewritten in state-space form as

θ_1^{(1)} = θ_2,   (6.28)

θ_2^{(1)} = −(3 μℓ / (2 L)) θ_2^2 − (3 g / (2 L)) sin θ_1 − (3 μℓ g / L^2) cos θ_1 + (3 / (m L^2)) M_a.   (6.29)

The results for this nonlinear example again show the same performance as the examples in Chapter 5 in terms of CPU run-time and computational savings, as shown in Table 6.1. The results of the direct and indirect approaches are provided in Figures 6.4 and 6.5.
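A quick way to validate the state-space conversion is to substitute the acceleration from the first-order form back into the second-order model and confirm a zero residual. The Python sketch below is illustrative only: it encodes the second-order pendulum model and its state-space form, with the parameter values of this example, and checks that the two agree at arbitrary states and inputs.

```python
import math

m, L, g, mu, ell = 1.0, 1.0, 9.8, 0.3, 0.01

def residual_second_order(th, thd, thdd, Ma):
    """Left side minus right side of the second-order pendulum model."""
    return (m*L*L/3.0)*thdd + mu*ell*m*(L/2.0)*thd**2 \
           + m*g*(L/2.0)*math.sin(th) + mu*ell*m*g*math.cos(th) - Ma

def accel_state_space(th, thd, Ma):
    """Angular acceleration solved from the model, as used in the state-space form."""
    return (-mu*ell*(3.0/(2.0*L))*thd**2 - (3.0*g/(2.0*L))*math.sin(th)
            - mu*ell*(3.0*g/(L*L))*math.cos(th) + (3.0/(m*L*L))*Ma)

# The two forms agree at arbitrary states and inputs.
for th, thd, Ma in [(0.2, 1.0, 5.0), (-1.1, -0.4, 20.0), (2.5, 3.0, -7.0)]:
    assert abs(residual_second_order(th, thd, accel_state_space(th, thd, Ma), Ma)) < 1e-9
```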


Systems                     CPU run-time (seconds)
1. Direct first-order         6.16
2. Direct second-order        2.86
3. Indirect first-order      26.97
4. Indirect second-order      9.94

Table 6.1: The CPU run-times of Example 1, comparing the direct/indirect and first-order/second-order formulations.

Figure 6.4: The 6-interval direct response curves θ_1, θ_1^{(1)}, and M_a for Example 1. The plots of first-order and second-order systems overlap within the accuracy of the drawing.

6.3.2 Example 2: The polar manipulator.

Consider the two-dimensional manipulator referred to as a polar manipulator [39], shown in Figure 6.6. The manipulator components have moments of inertia I_{G1} and I_{G2} about their centers of mass, which are located at distances c_1 and c_2(t) from the base point O, respectively. Assuming that the actuator masses are negligible, the system has two degrees-of-freedom and the equations of motion are

equations of motion are

m1gc1cos(θ1) + m2gc2cos(θ1)

+(c21m1 + I g1 + I g2 + m2c22)θ(2)1 + 2m2c2c(1)2 θ(1)1 = u1,

−m2gsin(θ1) − m2c(2)2 + m2c2θ(1)21 = u2, (6.30)

where u1(t) and u2(t) are the actuator inputs associated with the variables θ1(t)


Figure 6.5: The 6-interval indirect response curves θ_1, θ_1^{(1)}, and M_a for Example 1. The plots of first-order and second-order systems overlap within the accuracy of the drawing.

Systems                     CPU run-time (seconds)
1. Direct first-order        34.72
2. Direct second-order       15.21
3. Indirect first-order     699.92
4. Indirect second-order     95.02

Table 6.2: The CPU run-times of Example 2, comparing the direct/indirect and first-order/second-order formulations.

and c_2(t), respectively. The above equations can be written in state-space form as

c_2^{(1)} = c_3,

θ_1^{(1)} = θ_2,

θ_2^{(1)} = ( −m_1 g c_1 cos θ_1 − m_2 g c_2 cos θ_1 − 2 m_2 c_2 c_3 θ_2 + u_1 ) / ( c_1^2 m_1 + I_{G1} + I_{G2} + m_2 c_2^2 ),

c_3^{(1)} = −g sin θ_1 + c_2 θ_2^2 − u_2 / m_2.   (6.31)

Let the objective be to steer the system from a given set of initial conditions on θ_1, θ_1^{(1)}, c_2, and c_2^{(1)} at t_0 to a specified goal point at t_f while minimizing the cost J = ∫_{t_0}^{t_f} (u_1^2 + u_2^2) dt. The trajectory must satisfy the constraints −50 ≤ u_1 ≤ 50 and −50 ≤ u_2 ≤ 50 during motion. The parameters used in the model (in MKS units) are: m_1 = m_2 = 1.0, c_1 = 3.0, g = 9.8, I_{G1} = 1.0, and I_{G2} = 1.0. The two-point boundary conditions are taken as θ_1(0) = 0, θ_1^{(1)}(0) = 0, c_2(0) = 0, c_2^{(1)}(0) = 0,


Figure 6.6: Polar manipulator. Shaded areas indicate actuator location: ()rotary actuator; () linear actuator.

θ_1(1) = π/4, θ_1^{(1)}(1) = 0, c_2(1) = 0, and c_2^{(1)}(1) = 0. The results for this nonlinear example again show the same performance as the previous systems in terms of CPU run-time, as illustrated in Table 6.2. The optimal solutions based on the direct and indirect approaches are shown in Figures 6.7 and 6.8.

6.3.3 Example 3: The planar two-link manipulator.

Consider a two-dimensional manipulator known as a two-link revolute robot

(R-R manipulator) [38], shown in Figure 6.9. The manipulator is considered as a


Figure 6.7: The 6-interval direct response curves θ_1, θ_1^{(1)}, c_2, c_2^{(1)}, u_1, and u_2 for Example 2. The plots of first-order and second-order systems overlap within the accuracy of the drawing.

Figure 6.8: The 6-interval indirect response curves θ_1, θ_1^{(1)}, c_2, c_2^{(1)}, u_1, and u_2 for Example 2. The plots of first-order and second-order systems overlap within the accuracy of the drawing.

point mass system with no moments of inertia. The system has two degrees-of-

freedom and the equations of motion are

(m_2 l_{c2}^2 + m_2 l_{c2} l_1 cos q_2) q_2^{(2)} + (m_2 l_1^2 + m_1 l_{c1}^2 + m_2 l_{c2}^2 + 2 m_2 l_{c2} l_1 cos q_2) q_1^{(2)}
+ m_2 g l_1 cos q_1 + m_2 g l_2 cos(q_1 + q_2) − 2 m_2 l_{c2} l_1 sin q_2 q_2^{(1)} q_1^{(1)}
− m_2 l_{c2} l_1 sin q_2 (q_2^{(1)})^2 + m_1 g l_1 cos q_1 = u_1,

(m_2 l_{c2}^2 + m_2 l_{c2} l_1 cos q_2) q_1^{(2)} + m_2 l_{c2}^2 q_2^{(2)} + m_2 l_{c2} l_1 sin q_2 (q_1^{(1)})^2
+ m_2 g l_2 cos(q_1 + q_2) = u_2.   (6.32)


Figure 6.9: The planar two-link manipulator.

Systems                     CPU run-time (seconds)
1. Direct first-order        40.32
2. Direct second-order       33.45
3. Indirect first-order     742.87
4. Indirect second-order    243.76

Table 6.3: The CPU run-times of Example 3, comparing the direct/indirect and first-order/second-order formulations.

Equation (6.32) can also be written in state-space form by defining two additional variables, q_1^{(1)} = q_3 and q_2^{(1)} = q_4. Let the objective of this optimal control problem be to steer the system from a given set of initial conditions on q_1, q_1^{(1)}, q_2, and q_2^{(1)} at t_0 to a specified goal point at t_f while minimizing the cost J = ∫_{t_0}^{t_f} (u_1^2 + u_2^2) dt. The trajectory must satisfy the constraints −100 ≤ u_1 ≤ 100 and −100 ≤ u_2 ≤ 100 during motion. The parameters used in the model (in MKS units) are: m_1 = 1.3, m_2 = 1.6, l_1 = 1.0, l_2 = 1.0, l_{c1} = 1.0, l_{c2} = 1.0, and g = 9.8. The optimal solutions are shown in Figures 6.10 and 6.11, while the two-point


Figure 6.10: The 6-interval direct response curves q1, q(1)1 ,q2, q

(1)2 , u1 and u2 for

Example 3. The plots of first-order and second-order systems overlapwithin the accuracy of the drawing.

Figure 6.11: The 6-interval indirect response curves q1, q(1)1 ,q2, q

(1)2 , u1 and u2 for

Example 3. The plots of first-order and second-order systems overlap

within the accuracy of the drawing.

boundary conditions are specified as q1(0) = π6

, q(1)1 (0) = 0, q2(0) = π7

, q(1)2 (0) = 0,

q1(1) = π5

, q(1)1 (1) = 0, q2(1) = π6

, and q(1)2 (1) = 0. The results in this nonlinear

example have the same performance as the previous systems in terms of accuracy

and CPU run-time, as shown in Table 6.3.
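The state-space conversion just described can be sketched numerically. The following Python fragment (an illustrative stand-in for the MATLAB programs of Appendix E; the function names are our own, and the mass-matrix and gravity terms are read off Eq. (6.32)) assembles the mass matrix and solves for the joint accelerations:

```python
import numpy as np

# Parameters from the text (MKS units)
m1, m2 = 1.3, 1.6
l1, lc1, lc2 = 1.0, 1.0, 1.0
g = 9.8

def forward_dynamics(q, qdot, u):
    """Solve Eq. (6.32) for the joint accelerations q^(2)."""
    q1, q2 = q
    q1d, q2d = qdot
    c2, s2 = np.cos(q2), np.sin(q2)
    # Symmetric mass matrix assembled from the coefficients of q^(2)
    A = m2*l1**2 + m1*lc1**2 + m2*lc2**2 + 2*m2*lc2*l1*c2
    B = m2*lc2**2 + m2*lc2*l1*c2
    C = m2*lc2**2
    M = np.array([[A, B], [B, C]])
    # Velocity and gravity terms collected on the left-hand side
    h1 = (-2*m2*lc2*l1*s2*q2d*q1d - m2*lc2*l1*s2*q2d**2
          + m2*g*l1*np.cos(q1) + m2*g*lc2*np.cos(q1 + q2) + m1*g*lc1*np.cos(q1))
    h2 = m2*lc2*l1*s2*q1d**2 + m2*g*lc2*np.cos(q1 + q2)
    return np.linalg.solve(M, np.asarray(u, dtype=float) - np.array([h1, h2]))

def state_derivative(x, u):
    """State-space form with the additional variables q3 = q1^(1), q4 = q2^(1)."""
    q, qdot = x[:2], x[2:]
    return np.concatenate([qdot, forward_dynamics(q, qdot, u)])
```

The accelerations are affine in $u$, which is the property the linear-quadratic machinery of the earlier chapters exploits.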

6.3.4 Example 4: SCARA robot

The SCARA robot is chosen as the last minimum energy example. This

manipulator consists of four links that are revolute, revolute, prismatic, and revolute


Figure 6.12: SCARA robot.

joints, respectively. The structure of the robot is presented in Figure 6.12. The links are assumed to be point masses and governed by the following differential equations:

(m1l2c1 + m2l2c2 + 2m2lc2l1cos(q2) + m2l21 + m3l22 + 2m3l2l1cos(q2) +

m3l21 + m4l22 + 2m4l2l1cos(q2) + m4l21 + 4)q(2)1 + (m2l2c2 + m2lc2l1cos(q2) +

m3l22 + m3l2l1cos(q2) + m4l22 + m4l2l1cos(q2) + 3)q(2)2 − q

(2)4 +

0.5(−2m2lc2l1sin(q2) − 2m3l2l1sin(q2) − 2m4l2l1sin(q2))q(1)2 q(1)1 +

(0.5(−2m2lc2l1sin(q2) − 2m3l2l1sin(q2) − 2m4l2l1sin(q2))q(1)1 +

0.5(−2m2lc2l1sin(q2) − 2m3l2l1sin(q2) − 2m4l2l1sin(q2))q(1)2 )q

(1)2 = u1,

(6.33)


$$\begin{aligned}
&(m_2 l_{c2}^2 + m_2 l_{c2} l_1 \cos q_2 + m_3 l_2^2 + m_3 l_2 l_1 \cos q_2 + m_4 l_2^2 + m_4 l_2 l_1 \cos q_2 + 3)\, q_1^{(2)} \\
&\quad + (m_2 l_{c2}^2 + m_3 l_2^2 + m_4 l_2^2 + 3)\, q_2^{(2)} - q_4^{(2)} \\
&\quad + 0.5(2 m_2 l_{c2} l_1 \sin q_2 + 2 m_3 l_2 l_1 \sin q_2 + 2 m_4 l_2 l_1 \sin q_2)\, (q_1^{(1)})^2 = u_2,
\end{aligned} \tag{6.34}$$

$$(m_3 + m_4)\, q_3^{(2)} = u_3, \tag{6.35}$$

$$- q_1^{(2)} - q_2^{(2)} + q_4^{(2)} = u_4. \tag{6.36}$$

Let the objective of this optimal control problem be defined as steering the system from a given set of initial conditions on $q_1$, $q_1^{(1)}$, $q_2$, and $q_2^{(1)}$ at $t_0$ to a specified goal point at $t_f$ while minimizing a cost $J = \int_{t_0}^{t_f} (u_1^2 + u_2^2 + u_3^2 + u_4^2)\, dt$. The trajectory must satisfy the constraints $-100 \le u_i \le 100$, where $i = 1, \ldots, 4$, during motion. The parameters used in the model (in MKS units) are: $m_1 = 1.0$, $m_2 = 1.0$, $m_3 = 1.0$, $m_4 = 1.0$, $l_1 = 1.0$, $l_2 = 1.0$, $l_{c1} = 1.0$, $l_{c2} = 1.0$, and $g = 9.8$. The optimal solutions are shown in Figures 6.13 and 6.14. The two-point boundary conditions are $q_i(0) = 0$, $q_i^{(1)}(0) = 0$, and $q_i^{(1)}(1) = 0$, where $i = 1, \ldots, 4$, while $q_1(1) = \pi/2$, $q_2(1) = \pi/4$, $q_3(1) = 0.15$, and $q_4(1) = -\pi$. The results in this nonlinear example have the same performance as the previous systems in terms of accuracy and CPU run-time, as clearly shown in Table 6.4.

Systems                         CPU run-time (seconds)
1. Direct First-order               959.38
2. Direct Second-order              323.37
3. Indirect First-order           18415
4. Indirect Second-order           4443.1

Table 6.4: The CPU run-time of Example 4 showing direct/indirect and first-order/second-order comparisons.
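For a sampled control history, the minimum-energy cost above can be evaluated by simple trapezoidal quadrature. A short Python sketch (the control samples below are illustrative placeholders, not the optimal solution):

```python
import numpy as np

def energy_cost(t, u):
    """Trapezoidal approximation of J = int_t0^tf sum_i u_i(t)^2 dt.
    t: (N,) time grid; u: (N, m) sampled controls."""
    integrand = np.sum(np.asarray(u)**2, axis=1)
    return float(np.sum(0.5*(integrand[1:] + integrand[:-1])*np.diff(t)))

t = np.linspace(0.0, 1.0, 101)
u = np.column_stack([np.sin(np.pi*t), np.cos(np.pi*t),
                     0.1*np.ones_like(t), -0.2*t])   # four controls, all within |u_i| <= 100
J = energy_cost(t, u)
```

The same quadrature applies to Example 3 with two controls instead of four.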

Figure 6.13: The 6-interval direct response curves $q_1$, $q_2$, $q_3$, $q_4$, $u_1$, $u_2$, $u_3$, and $u_4$ for Example 4. The plots of first-order and second-order systems overlap within the accuracy of the drawing.

Figure 6.14: The 6-interval indirect response curves $q_1$, $q_2$, $q_3$, $q_4$, $u_1$, $u_2$, $u_3$, and $u_4$ for Example 4. The plots of first-order and second-order systems overlap within the accuracy of the drawing.

6.4 Summary

This chapter analyzed higher-order optimal control problems for robot systems described by a set of second-order differential equations. From the four examples investigated, it can be concluded that higher-order formulations (here, second-order systems) significantly reduce the on-line computation, as seen by comparing their CPU run-times with those of the equivalent first-order representations. These benefits occur in both the direct and indirect approaches.


Appendix A

VARIATION OF A FUNCTIONAL.

In this Appendix, we present some concepts of the variation (or differential) of a functional, as described in [23].

Definitions of normed spaces.

• The space $C(a, b)$ consists of all continuous functions $x(t)$ defined on a closed interval $[a, b]$. By addition of elements of $C$ and multiplication of elements of $C$ by numbers, we mean ordinary addition of functions and multiplication of functions by numbers. The norm is defined as the maximum of the absolute value, i.e.,
$$\|x\|_0 = \max_{a \le t \le b} |x(t)|. \tag{A.1}$$

• The space $D_n(a, b)$ consists of all functions $x(t)$ defined on a closed interval $[a, b]$ which are continuous and have continuous derivatives up to order $n$ inclusive, where $n$ is a fixed integer. Addition of elements in $D_n$ and multiplication of elements in $D_n$ by numbers are defined just as above. The norm is now defined by the formula
$$\|x\|_n = \sum_{i=0}^{n} \max_{a \le t \le b} |x^{(i)}(t)|, \tag{A.2}$$
where $x^{(i)} = \left(\frac{d}{dt}\right)^i x(t)$.
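On a discrete time grid these norms can be approximated numerically. A small Python sketch (the grid and test function are illustrative assumptions; the derivative is taken by finite differences):

```python
import numpy as np

a, b = 0.0, 1.0
t = np.linspace(a, b, 10001)
x = np.sin(2*np.pi*t)              # a test function on [a, b]

# C(a, b) norm: the sup-norm of x
norm_C = np.max(np.abs(x))

# D^1(a, b) norm: sum of the sup-norms of x and its first derivative x^(1)
x1 = np.gradient(x, t)             # finite-difference approximation of x^(1)
norm_D1 = np.max(np.abs(x)) + np.max(np.abs(x1))
```

For this test function the $C$ norm is $1$ and the $D_1$ norm is approximately $1 + 2\pi$, since $\max |x^{(1)}| = 2\pi$.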

Lemma 1. If $\alpha(t)$ is continuous in $[a, b]$, and if
$$\int_a^b \alpha(t)\, h(t)\, dt = 0 \tag{A.3}$$
for every function $h(t) \in C(a, b)$ such that $h(a) = h(b) = 0$, then $\alpha(t) = 0$ for all $t$ in $[a, b]$.

Proof. We prove this lemma by contradiction. Suppose the function $\alpha(t)$ is nonzero, say positive, at some point in $[a, b]$. Then, by continuity, $\alpha(t)$ is also positive in some interval $[t_1, t_2]$ contained in $[a, b]$. If we set
$$h(t) = (t - t_1)(t_2 - t) \tag{A.4}$$
for $t$ in $[t_1, t_2]$ and $h(t) = 0$ otherwise, then $h(t)$ obviously satisfies the conditions of the lemma (in particular, $h(a) = h(b) = 0$). However,
$$\int_a^b \alpha(t)\, h(t)\, dt = \int_{t_1}^{t_2} \alpha(t)(t - t_1)(t_2 - t)\, dt > 0, \tag{A.5}$$
which is a contradiction. This contradiction proves the lemma.
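The construction in the proof is easy to check numerically. In the Python sketch below, $\alpha(t)$ is a simple nonnegative function that is positive on $[t_1, t_2]$ (an illustrative choice, not from the text), and the bump $h(t)$ of Eq. (A.4) makes the integral strictly positive:

```python
import numpy as np

a, b = 0.0, 1.0
t1, t2 = 0.3, 0.6
t = np.linspace(a, b, 20001)
inside = (t >= t1) & (t <= t2)
alpha = np.where(inside, 1.0, 0.0)               # alpha > 0 on [t1, t2]
h = np.where(inside, (t - t1)*(t2 - t), 0.0)     # the bump of Eq. (A.4); h(a) = h(b) = 0
f = alpha*h
integral = float(np.sum(0.5*(f[1:] + f[:-1])*np.diff(t)))   # trapezoidal rule
```

With $\alpha \equiv 1$ on $[t_1, t_2]$ the integral equals $(t_2 - t_1)^3/6 = 0.0045$, confirming the strict positivity used in (A.5).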

Lemma 2. If $\alpha(t)$ is continuous in $[a, b]$, and if
$$\int_a^b \alpha(t)\, h^{(1)}(t)\, dt = 0 \tag{A.6}$$
for every function $h(t) \in D_1(a, b)$ such that $h(a) = h(b) = 0$, then $\alpha(t) = c$ for all $t$ in $[a, b]$, where $c$ is a constant.

Proof. Let $c$ be the constant defined by the condition
$$\int_a^b [\alpha(t) - c]\, dt = 0 \tag{A.7}$$
and let
$$h(t) = \int_a^t [\alpha(\xi) - c]\, d\xi, \tag{A.8}$$
so that $h(t)$ automatically belongs to $D_1(a, b)$ and satisfies the conditions $h(a) = h(b) = 0$. Then, if $\int_a^b \alpha(t)\, h^{(1)}(t)\, dt = 0$, we have
$$\int_a^b [\alpha(t) - c]\, h^{(1)}(t)\, dt = \int_a^b \alpha(t)\, h^{(1)}(t)\, dt - c\,[h(b) - h(a)] = 0. \tag{A.9}$$
On the other hand,
$$\int_a^b [\alpha(t) - c]\, h^{(1)}(t)\, dt = \int_a^b [\alpha(t) - c]^2\, dt. \tag{A.10}$$
It follows that $\alpha(t) - c = 0$, i.e., $\alpha(t) = c$, for all $t \in [a, b]$.

Lemma 3. If $\alpha(t)$ and $\beta(t)$ are continuous in $[a, b]$, and if
$$\int_a^b \left[\alpha(t)\, h(t) + \beta(t)\, h^{(1)}(t)\right] dt = 0 \tag{A.11}$$
for every function $h(t) \in D_1(a, b)$ such that $h(a) = h(b) = 0$, then $\beta(t)$ is differentiable, and $\beta^{(1)}(t) = \alpha(t)$ for all $t \in [a, b]$.

Proof. Set
$$A(t) = \int_a^t \alpha(\xi)\, d\xi. \tag{A.12}$$
Integrating the quantity $-\int_a^b A(t)\, h^{(1)}(t)\, dt$ by parts, and using $h(a) = h(b) = 0$, we find that
$$-\int_a^b A(t)\, h^{(1)}(t)\, dt = \int_a^b \alpha(t)\, h(t)\, dt, \tag{A.13}$$
i.e., (A.11) can be written as
$$\int_a^b \left[-A(t) + \beta(t)\right] h^{(1)}(t)\, dt = 0. \tag{A.14}$$
But, according to Lemma 2, this implies that
$$\beta(t) - A(t) = \text{constant}. \tag{A.15}$$
Hence,
$$\beta^{(1)}(t) = \alpha(t) \tag{A.16}$$
for all $t$ in $[a, b]$, as asserted. We emphasize that the differentiability of the function $\beta(t)$ was not assumed in advance.

Theorem 1. A necessary condition for the differentiable functional $J[x]$ to have an extremum for $x = x^*$ is that its variation vanish for $x = x^*$, i.e., that
$$\delta J[h] = 0 \tag{A.17}$$
for $x = x^*$ and all admissible $h$.


Proof. Suppose $J[x]$ has a minimum for $x = x^*$. According to the definition of the variation $\delta J[h]$, we have
$$\Delta J[h] = \delta J[h] + \varepsilon \|h\|, \tag{A.18}$$
where $\varepsilon \to 0$ as $\|h\| \to 0$. Thus, for sufficiently small $\|h\|$, the sign of $\Delta J[h]$ will be the same as the sign of $\delta J[h]$.

Now, suppose that $\delta J[h_0] \neq 0$ for some admissible $h_0$. Then for any $\alpha > 0$, no matter how small, we have
$$\delta J[-\alpha h_0] = -\delta J[\alpha h_0]. \tag{A.19}$$
Hence, (A.18) can be made to have either sign for arbitrarily small $\|h\|$. But this is impossible, since by hypothesis $J[x]$ has a minimum for $x = x^*$, i.e.,
$$\Delta J[h] = J[x^* + h] - J[x^*] \ge 0 \tag{A.20}$$
for all sufficiently small $\|h\|$. This contradiction proves the theorem.
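A finite-dimensional analogue may help fix ideas (illustrative only; here $J$ is an ordinary function, not a functional): for $J(x) = x^2$ with minimizer $x^* = 0$, the increment $\Delta J = 2x^* h + h^2$ has linear part $\delta J[h] = 2x^* h$, which vanishes at $x^*$ for every $h$, exactly as the theorem requires.

```python
# Finite-dimensional analogue of Theorem 1 (illustrative only).
J = lambda x: x**2
x_star = 0.0                               # the minimizer
variation = lambda h: 2.0*x_star*h         # the part of Delta J linear in h
increments = [J(x_star + h) - J(x_star) for h in (0.1, -0.1, 1e-3, -1e-3)]
```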


Appendix B

AN EQUIVALENT PROBLEM STATEMENT TO THE

PROBLEM OF THE NEIGHBORING EXTREMALS

Since the problem of interest is the neighboring optimal path, $\delta u(t)$ must be determined in such a way that $\delta^2 J$ is minimized subject to Eq. (4.10). We would like to derive Eqs. (4.5)-(4.8) from this problem statement.

For $p = 2$, Eqs. (4.5)-(4.8) are

$$\delta x^{(2)}(t) = \frac{\partial f}{\partial x}\,\delta x + \frac{\partial f}{\partial x^{(1)}}\,\delta x^{(1)} + \frac{\partial f}{\partial u}\,\delta u \tag{B.1}$$

$$\begin{aligned}
\delta\lambda^{(2)} ={}& \frac{\partial^2 H}{\partial x^2}\,\delta x + \frac{\partial^2 H}{\partial x\,\partial x^{(1)}}\,\delta x^{(1)} + \frac{\partial^2 H}{\partial x\,\partial u}\,\delta u + \frac{\partial^2 H}{\partial x\,\partial\lambda}\,\delta\lambda \\
&- \biggl[\frac{\partial}{\partial x}\Bigl(\frac{d}{dt}\frac{\partial H}{\partial x^{(1)}}\Bigr)\delta x
 + \frac{\partial}{\partial x^{(1)}}\Bigl(\frac{d}{dt}\frac{\partial H}{\partial x^{(1)}}\Bigr)\delta x^{(1)}
 + \frac{\partial}{\partial x^{(2)}}\Bigl(\frac{d}{dt}\frac{\partial H}{\partial x^{(1)}}\Bigr)\delta x^{(2)} \\
&\quad + \frac{\partial}{\partial\lambda}\Bigl(\frac{d}{dt}\frac{\partial H}{\partial x^{(1)}}\Bigr)\delta\lambda
 + \frac{\partial}{\partial\lambda^{(1)}}\Bigl(\frac{d}{dt}\frac{\partial H}{\partial x^{(1)}}\Bigr)\delta\lambda^{(1)} \\
&\quad + \frac{\partial}{\partial u}\Bigl(\frac{d}{dt}\frac{\partial H}{\partial x^{(1)}}\Bigr)\delta u
 + \frac{\partial}{\partial u^{(1)}}\Bigl(\frac{d}{dt}\frac{\partial H}{\partial x^{(1)}}\Bigr)\delta u^{(1)}\biggr]
\end{aligned} \tag{B.2}$$

$$0 = \frac{\partial^2 H}{\partial u\,\partial x}\,\delta x + \frac{\partial^2 H}{\partial u\,\partial x^{(1)}}\,\delta x^{(1)} + \frac{\partial^2 H}{\partial u^2}\,\delta u + \frac{\partial^2 H}{\partial u\,\partial\lambda}\,\delta\lambda \tag{B.3}$$

$$\delta\lambda(t_f) = \left[\frac{\partial^2 \phi}{\partial x^{(1)}\,\partial x}\,\delta x + \frac{\partial^2 \phi}{\partial x^{(1)2}}\,\delta x^{(1)}\right]_{t=t_f}, \tag{B.4}$$

$$\delta\lambda^{(1)}(t_f) = \left[\frac{\partial^2 \phi}{\partial x^2}\,\delta x + \frac{\partial^2 \phi}{\partial x\,\partial x^{(1)}}\,\delta x^{(1)}\right]_{t=t_f} \tag{B.5}$$


The equivalent problem statement is to minimize $\delta^2 J$, which has the form

$$\begin{aligned}
J = \delta^2 J ={}& \frac{1}{2}\Bigl[\delta x^T \frac{\partial^2 \phi}{\partial x^2}\,\delta x + 2\,\delta x^T \frac{\partial^2 \phi}{\partial x\,\partial x^{(1)}}\,\delta x^{(1)} + \delta x^{(1)T} \frac{\partial^2 \phi}{\partial x^{(1)2}}\,\delta x^{(1)}\Bigr]\Bigr|_{t_f} \\
&+ \frac{1}{2}\int_{t_0}^{t_f}\Bigl[\delta x^T \frac{\partial^2 H}{\partial x^2}\,\delta x + 2\,\delta x^T \frac{\partial^2 H}{\partial x\,\partial x^{(1)}}\,\delta x^{(1)} + \delta x^{(1)T} \frac{\partial^2 H}{\partial x^{(1)2}}\,\delta x^{(1)} \\
&\qquad + 2\,\delta x^T \frac{\partial^2 H}{\partial x\,\partial u}\,\delta u + 2\,\delta x^{(1)T} \frac{\partial^2 H}{\partial x^{(1)}\,\partial u}\,\delta u + \delta u^T \frac{\partial^2 H}{\partial u^2}\,\delta u\Bigr]\,dt,
\end{aligned} \tag{B.6}$$

subject to

$$\delta x^{(2)}(t) = \frac{\partial f}{\partial x}\,\delta x + \frac{\partial f}{\partial x^{(1)}}\,\delta x^{(1)} + \frac{\partial f}{\partial u}\,\delta u. \tag{B.7}$$

From the calculus of variations, we derive the necessary conditions by augmenting Eq. (B.7) to the cost with a Lagrange multiplier, which is $\delta\lambda$ in this case. Therefore, we have

$$J = \Psi|_{t_f} + \int_{t_0}^{t_f} F\, dt, \tag{B.8}$$

where

$$\Psi = \frac{1}{2}\Bigl[\delta x^T \frac{\partial^2 \phi}{\partial x^2}\,\delta x + 2\,\delta x^T \frac{\partial^2 \phi}{\partial x\,\partial x^{(1)}}\,\delta x^{(1)} + \delta x^{(1)T} \frac{\partial^2 \phi}{\partial x^{(1)2}}\,\delta x^{(1)}\Bigr] \tag{B.9}$$

and

$$\begin{aligned}
F ={}& \frac{1}{2}\Bigl[\delta x^T \frac{\partial^2 H}{\partial x^2}\,\delta x + 2\,\delta x^T \frac{\partial^2 H}{\partial x\,\partial x^{(1)}}\,\delta x^{(1)} + \delta x^{(1)T} \frac{\partial^2 H}{\partial x^{(1)2}}\,\delta x^{(1)} \\
&\quad + 2\,\delta x^T \frac{\partial^2 H}{\partial x\,\partial u}\,\delta u + 2\,\delta x^{(1)T} \frac{\partial^2 H}{\partial x^{(1)}\,\partial u}\,\delta u + \delta u^T \frac{\partial^2 H}{\partial u^2}\,\delta u\Bigr] \\
&+ \delta\lambda^T\Bigl[\frac{\partial f}{\partial x}\,\delta x + \frac{\partial f}{\partial x^{(1)}}\,\delta x^{(1)} + \frac{\partial f}{\partial u}\,\delta u - \delta x^{(2)}\Bigr].
\end{aligned} \tag{B.10}$$

From Chapter 2, we have the following necessary conditions:

$$\frac{\partial F}{\partial\,\delta\lambda} = 0 \tag{B.11}$$

$$\frac{\partial F}{\partial\,\delta u} = 0 \tag{B.12}$$

$$\frac{\partial F}{\partial\,\delta x} - \frac{d}{dt}\frac{\partial F}{\partial\,\delta x^{(1)}} + \frac{d^2}{dt^2}\frac{\partial F}{\partial\,\delta x^{(2)}} = 0 \tag{B.13}$$

and

$$\delta\lambda(t_f) = \frac{\partial \Psi}{\partial x^{(1)}}; \qquad \delta\lambda^{(1)}(t_f) = \frac{\partial \Psi}{\partial x}. \tag{B.14}$$

It can be seen that the results from Eqs. (B.11), (B.12), and (B.14) are identical with Eqs. (B.1), (B.3), (B.4), and (B.5). However, we still need to show the equivalence between Eq. (B.13) and Eq. (B.2). In order to do this, let us expand Eq. (B.2) as follows:

$$\begin{aligned}
\delta\lambda^{(2)} ={}& \frac{\partial^2 H}{\partial x^2}\,\delta x + \frac{\partial^2 H}{\partial x\,\partial x^{(1)}}\,\delta x^{(1)} + \frac{\partial^2 H}{\partial x\,\partial u}\,\delta u + \frac{\partial^2 H}{\partial x\,\partial\lambda}\,\delta\lambda \\
&- \biggl[\frac{\partial}{\partial x}\Bigl(\frac{\partial^2 H}{\partial x^{(1)}\partial x}\,x^{(1)} + \frac{\partial^2 H}{\partial x^{(1)2}}\,x^{(2)} + \frac{\partial^2 H}{\partial x^{(1)}\partial\lambda}\,\lambda^{(1)} + \frac{\partial^2 H}{\partial x^{(1)}\partial u}\,u^{(1)}\Bigr)\delta x \\
&\quad + \frac{\partial}{\partial x^{(1)}}\Bigl(\frac{\partial^2 H}{\partial x^{(1)}\partial x}\,x^{(1)} + \frac{\partial^2 H}{\partial x^{(1)2}}\,x^{(2)} + \frac{\partial^2 H}{\partial x^{(1)}\partial\lambda}\,\lambda^{(1)} + \frac{\partial^2 H}{\partial x^{(1)}\partial u}\,u^{(1)}\Bigr)\delta x^{(1)} \\
&\quad + \frac{\partial}{\partial x^{(2)}}\Bigl(\frac{\partial^2 H}{\partial x^{(1)}\partial x}\,x^{(1)} + \frac{\partial^2 H}{\partial x^{(1)2}}\,x^{(2)} + \frac{\partial^2 H}{\partial x^{(1)}\partial\lambda}\,\lambda^{(1)} + \frac{\partial^2 H}{\partial x^{(1)}\partial u}\,u^{(1)}\Bigr)\delta x^{(2)} \\
&\quad + \frac{\partial}{\partial\lambda}\Bigl(\frac{\partial^2 H}{\partial x^{(1)}\partial x}\,x^{(1)} + \frac{\partial^2 H}{\partial x^{(1)2}}\,x^{(2)} + \frac{\partial^2 H}{\partial x^{(1)}\partial\lambda}\,\lambda^{(1)} + \frac{\partial^2 H}{\partial x^{(1)}\partial u}\,u^{(1)}\Bigr)\delta\lambda \\
&\quad + \frac{\partial}{\partial\lambda^{(1)}}\Bigl(\frac{\partial^2 H}{\partial x^{(1)}\partial x}\,x^{(1)} + \frac{\partial^2 H}{\partial x^{(1)2}}\,x^{(2)} + \frac{\partial^2 H}{\partial x^{(1)}\partial\lambda}\,\lambda^{(1)} + \frac{\partial^2 H}{\partial x^{(1)}\partial u}\,u^{(1)}\Bigr)\delta\lambda^{(1)} \\
&\quad + \frac{\partial}{\partial u}\Bigl(\frac{\partial^2 H}{\partial x^{(1)}\partial x}\,x^{(1)} + \frac{\partial^2 H}{\partial x^{(1)2}}\,x^{(2)} + \frac{\partial^2 H}{\partial x^{(1)}\partial\lambda}\,\lambda^{(1)} + \frac{\partial^2 H}{\partial x^{(1)}\partial u}\,u^{(1)}\Bigr)\delta u \\
&\quad + \frac{\partial}{\partial u^{(1)}}\Bigl(\frac{\partial^2 H}{\partial x^{(1)}\partial x}\,x^{(1)} + \frac{\partial^2 H}{\partial x^{(1)2}}\,x^{(2)} + \frac{\partial^2 H}{\partial x^{(1)}\partial\lambda}\,\lambda^{(1)} + \frac{\partial^2 H}{\partial x^{(1)}\partial u}\,u^{(1)}\Bigr)\delta u^{(1)}\biggr].
\end{aligned} \tag{B.15}$$

Noting that $H$ is not a function of $x^{(2)}$, $\lambda^{(1)}$, and $u^{(1)}$, Eq. (B.15) reduces to

$$\begin{aligned}
\delta\lambda^{(2)} ={}& \frac{\partial^2 H}{\partial x^2}\,\delta x + \frac{\partial^2 H}{\partial x\,\partial x^{(1)}}\,\delta x^{(1)} + \frac{\partial^2 H}{\partial x\,\partial u}\,\delta u + \frac{\partial^2 H}{\partial x\,\partial\lambda}\,\delta\lambda \\
&- \biggl[\frac{\partial}{\partial x}\Bigl(\frac{\partial^2 H}{\partial x^{(1)}\partial x}\,x^{(1)} + \frac{\partial^2 H}{\partial x^{(1)2}}\,x^{(2)} + \frac{\partial^2 H}{\partial x^{(1)}\partial\lambda}\,\lambda^{(1)} + \frac{\partial^2 H}{\partial x^{(1)}\partial u}\,u^{(1)}\Bigr)\delta x \\
&\quad + \frac{\partial}{\partial x^{(1)}}\Bigl(\frac{\partial^2 H}{\partial x^{(1)}\partial x}\,x^{(1)} + \frac{\partial^2 H}{\partial x^{(1)2}}\,x^{(2)} + \frac{\partial^2 H}{\partial x^{(1)}\partial\lambda}\,\lambda^{(1)} + \frac{\partial^2 H}{\partial x^{(1)}\partial u}\,u^{(1)}\Bigr)\delta x^{(1)} \\
&\quad + \frac{\partial}{\partial\lambda}\Bigl(\frac{\partial^2 H}{\partial x^{(1)}\partial x}\,x^{(1)} + \frac{\partial^2 H}{\partial x^{(1)2}}\,x^{(2)} + \frac{\partial^2 H}{\partial x^{(1)}\partial\lambda}\,\lambda^{(1)} + \frac{\partial^2 H}{\partial x^{(1)}\partial u}\,u^{(1)}\Bigr)\delta\lambda \\
&\quad + \frac{\partial}{\partial u}\Bigl(\frac{\partial^2 H}{\partial x^{(1)}\partial x}\,x^{(1)} + \frac{\partial^2 H}{\partial x^{(1)2}}\,x^{(2)} + \frac{\partial^2 H}{\partial x^{(1)}\partial\lambda}\,\lambda^{(1)} + \frac{\partial^2 H}{\partial x^{(1)}\partial u}\,u^{(1)}\Bigr)\delta u \\
&\quad + \frac{\partial^2 H}{\partial x^{(1)2}}\,\delta x^{(2)} + \frac{\partial^2 H}{\partial x^{(1)}\partial\lambda}\,\delta\lambda^{(1)} + \frac{\partial^2 H}{\partial x^{(1)}\partial u}\,\delta u^{(1)}\biggr].
\end{aligned} \tag{B.16}$$

Now, working on Eq. (B.13), we have

$$\begin{aligned}
&\frac{\partial^2 H}{\partial x^2}\,\delta x + \frac{\partial^2 H}{\partial x\,\partial x^{(1)}}\,\delta x^{(1)} + \frac{\partial^2 H}{\partial x\,\partial u}\,\delta u + \frac{\partial^2 H}{\partial x\,\partial\lambda}\,\delta\lambda \\
&- \biggl[\Bigl(\frac{\partial}{\partial x}\frac{\partial^2 H}{\partial x^{(1)}\partial x}\,x^{(1)} + \frac{\partial}{\partial x^{(1)}}\frac{\partial^2 H}{\partial x^{(1)}\partial x}\,x^{(2)} + \frac{\partial}{\partial\lambda}\frac{\partial^2 H}{\partial x^{(1)}\partial x}\,\lambda^{(1)} + \frac{\partial}{\partial u}\frac{\partial^2 H}{\partial x^{(1)}\partial x}\,u^{(1)}\Bigr)\delta x \\
&\quad + \Bigl(\frac{\partial}{\partial x}\frac{\partial^2 H}{\partial x^{(1)2}}\,x^{(1)} + \frac{\partial}{\partial x^{(1)}}\frac{\partial^2 H}{\partial x^{(1)2}}\,x^{(2)} + \frac{\partial}{\partial\lambda}\frac{\partial^2 H}{\partial x^{(1)2}}\,\lambda^{(1)} + \frac{\partial}{\partial u}\frac{\partial^2 H}{\partial x^{(1)2}}\,u^{(1)}\Bigr)\delta x^{(1)} \\
&\quad + \Bigl(\frac{\partial}{\partial x}\frac{\partial^2 H}{\partial x^{(1)}\partial\lambda}\,x^{(1)} + \frac{\partial}{\partial x^{(1)}}\frac{\partial^2 H}{\partial x^{(1)}\partial\lambda}\,x^{(2)} + \frac{\partial}{\partial\lambda}\frac{\partial^2 H}{\partial x^{(1)}\partial\lambda}\,\lambda^{(1)} + \frac{\partial}{\partial u}\frac{\partial^2 H}{\partial x^{(1)}\partial\lambda}\,u^{(1)}\Bigr)\delta\lambda \\
&\quad + \Bigl(\frac{\partial}{\partial x}\frac{\partial^2 H}{\partial x^{(1)}\partial u}\,x^{(1)} + \frac{\partial}{\partial x^{(1)}}\frac{\partial^2 H}{\partial x^{(1)}\partial u}\,x^{(2)} + \frac{\partial}{\partial\lambda}\frac{\partial^2 H}{\partial x^{(1)}\partial u}\,\lambda^{(1)} + \frac{\partial}{\partial u}\frac{\partial^2 H}{\partial x^{(1)}\partial u}\,u^{(1)}\Bigr)\delta u \\
&\quad + \frac{\partial^2 H}{\partial x^{(1)2}}\,\delta x^{(2)} + \frac{\partial^2 H}{\partial x^{(1)}\partial\lambda}\,\delta\lambda^{(1)} + \frac{\partial^2 H}{\partial x^{(1)}\partial u}\,\delta u^{(1)}\biggr] - \delta\lambda^{(2)} = 0,
\end{aligned} \tag{B.17}$$

which is identical with Eq. (B.16). This completes the proof.


Appendix C

THE QUADRATIC FORM OF $J^*(x(t), t)$.

From the general form of the optimal performance index shown in Eq. (2.40), we shall extend the results on the quadratic form of $J^*(x(t), t)$ described in [10] for $p = 1$ to the general case $p > 1$; the only difference is the form of $x(t)$.

Lemma 1. Consider a scalar quantity $d(X) = X^T A X$, which is a quadratic form for any real vector $X$ and matrix $A$. It has two properties:

• $d(\alpha X) = (\alpha X)^T A (\alpha X) = \alpha^2 X^T A X = \alpha^2 d(X)$ (principle of homogeneity),

• $d(X_1) + d(X_2) = \frac{1}{2}\left[(X_1 + X_2)^T A (X_1 + X_2) + (X_1 - X_2)^T A (X_1 - X_2)\right] = \frac{1}{2}\left[d(X_1 + X_2) + d(X_1 - X_2)\right]$ (principle of additivity).
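Both properties are easy to verify numerically for a random matrix and random vectors (a sanity check, not a proof; the matrix and vectors below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
d = lambda X: float(X @ A @ X)        # the quadratic form d(X) = X^T A X

X1 = rng.standard_normal(3)
X2 = rng.standard_normal(3)
alpha = 2.5
homogeneity = np.isclose(d(alpha*X1), alpha**2 * d(X1))
additivity = np.isclose(d(X1) + d(X2), 0.5*(d(X1 + X2) + d(X1 - X2)))
```

The additivity identity is the parallelogram law, which is what Theorem 1 below exploits for $J^*$.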

Theorem 1. In order for $J^*(x(t), t)$ to be a quadratic form, the necessary and sufficient conditions are that $J^*(x(t), t)$ is continuous in $x(t)$, which is obvious, and

• i) $J^*(\lambda x(t), t) = \lambda^2 J^*(x(t), t)$ for all real $\lambda$, and

• ii) $J^*(x_1(t), t) + J^*(x_2(t), t) = \frac{1}{2}\left[J^*((x_1 + x_2)(t), t) + J^*((x_1 - x_2)(t), t)\right]$.

Proof. $u^*_x$ is introduced as a temporary notation denoting the optimal control over the interval $[t, t_f]$ when the initial state is $x(t)$ at time $t$. Mathematically, we have
$$J^*(x(t), t) = J(x(t), u^*_x, t). \tag{C.1}$$


It follows that $J(\lambda x(t), u^*_{\lambda x}, t) = J^*(\lambda x(t), t)$. By the use of the optimal index, we have
$$J^*(\lambda x(t), t) \le J(\lambda x(t), \lambda u^*_x, t) = \lambda^2 J^*(x(t), t), \tag{C.2}$$
from which we conclude that
$$J^*(\lambda x(t), t) \le \lambda^2 J^*(x(t), t). \tag{C.3}$$
On the other hand, it follows from Eq. (C.1) that $\lambda^2 J(x(t), u^*_x, t) = \lambda^2 J^*(x(t), t)$. By the use of the optimal index again, we have
$$\lambda^2 J^*(x(t), t) \le \lambda^2 J(x(t), \lambda^{-1} u^*_{\lambda x}, t) = J^*(\lambda x(t), t), \tag{C.4}$$
from which we conclude that
$$\lambda^2 J^*(x(t), t) \le J^*(\lambda x(t), t). \tag{C.5}$$
From Eqs. (C.3) and (C.5), it follows that
$$J^*(\lambda x(t), t) = \lambda^2 J^*(x(t), t). \tag{C.6}$$

By using Eqs. (C.6) and (C.1), we have
$$J^*(x_1(t), t) + J^*(x_2(t), t) = \frac{1}{4}\left[J(2x_1(t), u^*_{2x_1}, t) + J(2x_2(t), u^*_{2x_2}, t)\right]. \tag{C.7}$$
By the use of the optimal index, we can obtain
$$\begin{aligned}
\frac{1}{4}&\left[J(2x_1(t), u^*_{2x_1}, t) + J(2x_2(t), u^*_{2x_2}, t)\right] \\
&\le \frac{1}{4}\left[J(2x_1(t), u^*_{x_1+x_2} + u^*_{x_1-x_2}, t) + J(2x_2(t), u^*_{x_1+x_2} - u^*_{x_1-x_2}, t)\right] \\
&= \frac{1}{2}\left[J((x_1 + x_2)(t), u^*_{x_1+x_2}, t) + J((x_1 - x_2)(t), u^*_{x_1-x_2}, t)\right] \\
&= \frac{1}{2}\left[J^*((x_1 + x_2)(t), t) + J^*((x_1 - x_2)(t), t)\right],
\end{aligned} \tag{C.8}$$
from which we conclude that
$$J^*(x_1(t), t) + J^*(x_2(t), t) \le \frac{1}{2}\left[J^*((x_1 + x_2)(t), t) + J^*((x_1 - x_2)(t), t)\right]. \tag{C.9}$$


However, using Eqs. (C.6) and (C.1), we also have
$$\frac{1}{2}\left[J^*((x_1 + x_2)(t), t) + J^*((x_1 - x_2)(t), t)\right] = \frac{1}{2}\left[J((x_1 + x_2)(t), u^*_{x_1+x_2}, t) + J((x_1 - x_2)(t), u^*_{x_1-x_2}, t)\right]. \tag{C.10}$$
By the use of the optimal index, we again obtain
$$\begin{aligned}
\frac{1}{2}&\left[J((x_1 + x_2)(t), u^*_{x_1+x_2}, t) + J((x_1 - x_2)(t), u^*_{x_1-x_2}, t)\right] \\
&\le \frac{1}{2}\left[J\bigl((x_1 + x_2)(t), \tfrac{1}{2}(u^*_{2x_1} + u^*_{2x_2}), t\bigr) + J\bigl((x_1 - x_2)(t), \tfrac{1}{2}(u^*_{2x_1} - u^*_{2x_2}), t\bigr)\right] \\
&= \frac{1}{4}\left[J(2x_1(t), u^*_{2x_1}, t) + J(2x_2(t), u^*_{2x_2}, t)\right] \\
&= \frac{1}{4}\left[J^*(2x_1(t), t) + J^*(2x_2(t), t)\right] \\
&= J^*(x_1(t), t) + J^*(x_2(t), t),
\end{aligned} \tag{C.11}$$
from which we conclude that
$$\frac{1}{2}\left[J^*((x_1 + x_2)(t), t) + J^*((x_1 - x_2)(t), t)\right] \le J^*(x_1(t), t) + J^*(x_2(t), t). \tag{C.12}$$
From Eqs. (C.9) and (C.12), it follows that
$$J^*(x_1(t), t) + J^*(x_2(t), t) = \frac{1}{2}\left[J^*((x_1 + x_2)(t), t) + J^*((x_1 - x_2)(t), t)\right]. \tag{C.13}$$

It can be seen that Eqs. (C.6) and (C.13) are the necessary and sufficient conditions for $J^*(x(t), t)$ to have a quadratic form. Therefore, in general, we conclude that $J^*(x(t), t)$ has the form
$$J^*(x(t), t) = x^T P(t)\, x, \tag{C.14}$$
where, without loss of generality, $P(t)$ is an $(np \times np)$ symmetric matrix function of time defined as
$$P = \begin{bmatrix}
P_{xx} & P_{xx^{(1)}} & \cdots & P_{xx^{(p-1)}} \\
(P_{xx^{(1)}})^T & P_{x^{(1)}x^{(1)}} & \cdots & P_{x^{(1)}x^{(p-1)}} \\
\vdots & \vdots & \ddots & \vdots \\
(P_{xx^{(p-1)}})^T & (P_{x^{(1)}x^{(p-1)}})^T & \cdots & P_{x^{(p-1)}x^{(p-1)}}
\end{bmatrix}.$$


Note that the proofs provided above are shown in one direction only. Proving that $J^*$ is necessarily quadratic in $x$, as shown in Eq. (C.14), for some symmetric $P$ is a technically difficult problem [10]; however, the line of proof is provided in [10].


Appendix D

THE SOLUTION FORM FOR λ IN THE FIRST-ORDER

LINEAR QUADRATIC REGULATOR PROBLEM.

The proof provided in Appendix C is also valid for first-order systems; therefore, in general, we conclude that $J^*(x(t), t)$ has the form
$$J^*(x(t), t) = x^T S(t)\, x, \tag{D.1}$$
where, without loss of generality, $S(t)$ is an $(n \times n)$ symmetric matrix function of time, and $n$ is the number of state variables in the first-order system. Since $\lambda(t)$ equals $\left(\frac{\partial J^*}{\partial x}\right)^T$, the solution form for $\lambda(t)$ in the first-order linear quadratic regulator problem is
$$\lambda(t) = S(t)\, x(t). \tag{D.2}$$


REFERENCES

[1] Agrawal, S.K., and Veeraklaew, T., 1996, "A Higher-Order Method for Dynamic Optimization of a Class of Linear Time-Invariant ," Journal of Dynamic Systems, Measurement, and Control, Transactions of the ASME, Vol. 118, No. 4, pp. 786-791.

[2] Agrawal, S.K., Li, S., and Fabien, B.C., “Optimal Trajectories of Open-Chain

Mechanical Systems: Explicit Optimality Equation with Multiple Shooting So-lution”, Mechanics of Structures and Machines, Vol. 25, No. 2, 1997, pp.163-177.

[3] Agrawal, S. K., Claewplodtook, P., and Fabien, B. C., Optimal Trajectories of Open-Chain Mechanical Systems: A New Solution Without Lagrange Multi-pliers, Journal of Dynamic Systems, Measurement, and Control, Transactionsof the ASME , Vol. 120, No. 1, 1998, pp.134-136.

[4] Agrawal, S.K., and Veeraklaew, T., 1996, "Designing Robots for Optimal Performance During Repetitive Motion," IEEE Transactions on Robotics and Automation, Vol. 14, No. 5, pp. 771-777.

[5] Agrawal, S. K. and Faiz, N., “A New Efficient Method for Optimization of aClass of Nonlinear Systems Without Lagrange Multipliers”, Journal of Opti-mization Theory and Applications, Vol. 97, No. 1, 1998, pp.11-28.

[6] Agrawal, S.K., and Fabien, B.C., 1999, Optimization of Dynamic Systems,Kluwer Academic Publishers, Boston, 1999.

[7] Agrawal, S.K., Veeraklaew, T., Fabien, B.C., 1998, “Direct and Indirect Opti-mization of Linear Time-Invariant Dynamic Systems Using Transformations,”

Optimal Control Applications and Methods, Vol. 19, pp. 393-410.

[8] Agrawal, S.K., Veeraklaew, T., 1996, “A New Procedure for Dynamic Opti-mization of a Class of Linear Systems with Boundary Constraints,” The Dy-namics, Measurement and Control Division of The Japan Society of Mechanical Engineers, Japan.


[9] Agrawal, S.K., and Xu, X., 1997, “A New Procedure for Optimization of aClass of Linear Time-Varying Systems,” Journal of Vibration and Control , Vol.3, pp. 379-396.

[10] Anderson, B.O., and Moore, J.B., 1990, Optimal Control: Linear QuadraticMethods, Prentice Hall, Englewood Cliffs, New Jersey.

[11] Bellman, R.E., and Kalaba, R.E., 1965, Dynamic Programming and Modern Control Theory, Academic Press, New York.

[12] Berkovitz, L.D., 1974, Optimal Control Theory , Springer-Verlag, New York.

[13] Berkovitz, L. D., 1961, “Variational Methods in Problems of Control and Pro-gramming,” Journal of Mathematical Analysis and Applications, Vol. 3, pp.145-169.

[14] Betts, J., Bauer, T., Hoffman, W., and Zondervan, K., 1984, “Solving theOptimal Control Problem Using a Nonlinear Programming Technique Part 1:General Formulation,” AIAA Paper 84-2037 , Aug. 1984.

[15] Bliss, G. A., 1946, Lectures on the Calculus of Variations, The University of Chicago Press, Chicago.

[16] Bolza, O., 1931, Lectures on the Calculus of Variations, G. E. Stechert andCompany.

[17] Brebbia, C.A., 1978, The Boundary Element Method for Engineers, Pentech

Press, London.

[18] Bryson Jr., A.E., and Ho, Y.C., 1975, Applied Optimal Control , HemispherePublishing Corporation, New York.

[19] Craig, J.J., 1986, Introduction to Robotics: Mechanics and Control, Addison-Wesley Publishing Company.

[20] Fliess, M., Levine, J., Martin, P., and Rouchon, P., 1995, "Flatness and Defect of Nonlinear Systems: Introductory Theory and Examples," International Journal of Control, Vol. 61, No. 6, pp. 1327-1361.

[21] Fond, S., 1979, “Dynamic Programming Approach to The Maximum Princi-ple of Distributed-parameter Systems,” Journal of Optimization Theory and Applications, Vol. 27, No. 4, pp. 583-601.

[22] Forsyth, A. R., 1960, Calculus of Variations, Dover Publications, Inc., NewYork.


[23] Gelfand, I.S., and Fomin, S.V., 1963, Calculus of Variations, Prentice-HallBook Company, Englewood Cliffs, New Jersey.

[24] Hahn, D.W., and Johnson, F.T., 1971, "Final Report for Chebyshev Trajectory Optimization Program (CHEBYTOP II)," The Boeing Co., Seattle, WA, Rept. D180-1296-1, prepared for NASA under Contract NAS2-5994, June.

[25] Hargraves, C.R., and Paris, S.W., 1987, “Direct Trajectory Optimization UsingNonlinear Programming and Collocation,” Journal of Guidance, Control, and Dynamics, Vol., 10, No. 4, pp. 338-342.

[26] Isidori, A., 1995, Nonlinear Control Systems, 3rd edition, Springer-Verlag, NewYork.

[27] Jacobson, D.H., 1977, Extensions of Linear-Quadratic Control, Optimization 

and Matrix Theory , Academic Press, New York.

[28] Johnson, F.T., 1969, “Approximate Finite-Thrust Trajectory Optimiza-tion,”AIAA Journal , Vol. 7, June, pp. 993-997.

[29] Kirk, D.E., 1970, Optimal Control Theory: An Introduction , Prentice Hall Elec-trical Engineering Series, Englewood Cliffs, New Jersey.

[30] Leitmann, G., 1962, Optimization Techniques, Academic Press Inc., London.

[31] Leitmann G., 1981, The Calculus of Variations and Optimal Control: An In-troduction , Plenum Press, New York.

[32] Meirovitch, L., 1990, Dynamics and Control of Structures, John Wiley andSons, New York, 1990.

[33] Murray, R.R., Li, Z.,and Sastry, S.S., 1993, A Mathematical Introduction toRobotic Manipulation , CRC Press.

[34] Nieuwstadt, M. J. and Murray, R. M., 1998, “Real Time Trajectory Generationfor Differentially Flat Systems,” Int’l. J. Robust and Nonlinear Control .

[35] Pontryagin, L.S., Boltyanskii, V.G., Gamkrelidze, R.V., and Mischenko, E.F., 1962, The Mathematical Theory of Optimal Processes, Interscience Publishers, Inc., New York.

[36] Sage, A., 1968, Optimum Systems Control , Prentice-Hall, Inc., EnglewoodCliffs, New Jersey.


[37] Schlemmer, M. and Agrawal, S. K., 1999, “Globally Feedback LinearizableTime-Invariant Systems: Optimal Solution Using Mayer’s Cost,” submitted forpublication Journal of Dynamic Systems, Measurement, and Control, Transac-tions of the ASME .

[38] Spong, M.W., Vidyasagar M., 1986, Robot dynamics and control , John Wileyand Sons.

[39] Stadler, W., 1995, Analytical robotics and mechatronics, New York, McGraw-Hill, Inc.

[40] Tieu, D., Cluett, W.R. and Penlidis, A., 1995, “Comparison of CollocationMethods for Solving Dynamic Optimization Problems,” Computers and Chem-ical Engineering , Vol.19, No.4, pp. 375-381.

[41] Tsang, T.H., Himmelblau, D.M., and Edgar, T.F., 1975, "Optimal Control via Collocation and Non-linear Programming," International Journal of Control, Vol. 21, No. 5, pp. 763-768.

[42] Tu, Pierre N.V., 1991, Introductory Optimization Dynamics, Springer-Verlag,New York.

[43] Veeraklaew, T., and Agrawal, S.K., 1998, “Dynamic Optimization of LinearSystems with Constraints using a Higher-Order Method with Penalty Func-tion,” IV WCCM (Fourth World Congress on Computational Mechanics),Buenos Aires, Argentina.

[44] Veeraklaew, T., 1998, “Optimization of Linear Time-invariant Dynamic Sys-tems Without Lagrange Multipliers,” Masters Thesis, Ohio University, Athens,Ohio.


Appendix E

GENERAL PURPOSE PROGRAMS

E.1 Direct Higher-Order Method

E.1.1 allnec.m

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% This file is called allnec.m; it connects all the following      %
% files together. It will be needed in funobj.m, funcon.m, and     %
% main.m in order to provide all the symbolic forms of the         %
% necessary conditions.                                            %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%user input%%%%%%%%%%%%%%%

user

%*************************************

%%%%%%%%%parameterized variables%%%%%%

param

%*************************************

E.1.2 concj.m

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 5. Finding the Jacobian matrix cJac. %

% This is a correct version, but beware of large size error. %

in1=1*nv*n*N+N*m; %#of all variables. %

dc=c;                            %
ds=size(dc);                     %

cn=ds(1); %

fid1=fopen(’r_c.m’,’w+’); %

for i=1:cn, %

nn1=num2str(i); %

nn2=dc(i); %


nn3=sym(['c' nn1]);              %

nn4=char(nn3); %

nn5=char(nn2); %

if i==1, %

fprintf(fid1,’[%s ;\n’,nn5); %

elseif i==cn, %

fprintf(fid1,’%s ]\n’,nn5); %

else %

fprintf(fid1,’%s ;\n’,nn5); %

end %

clear nn1 nn2 nn3 nn4 nn5 %

end %

st1=fclose(fid1); %

fid11=fopen(’r_cj.m’,’w+’); %

for i=1:cn, %
for j=1:in1, %

nn1=num2str(i); %

nn2=num2str(j); %

nn3=diff(dc(i),sym([’y’nn2])); %

nn4=sym([’c’nn1 ’_’nn2]); %

nn5=char(nn3); %

nn6=char(nn4); %

ij=j+(i-1)*in1; %

dj=i*in1; %

if ij==1, %
fprintf(fid11,'[%s ,\n',nn5); %

elseif ij==cn*in1, %

fprintf(fid11,’%s ]\n’,nn5); %

elseif ij==dj, %

fprintf(fid11,’%s ;\n’,nn5); %

else %

fprintf(fid11,’%s ,\n’,nn5); %

end %

clear nn1 nn2 nn3 nn4 nn5 nn6 %

end %

end %

st2=fclose(fid11); %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%***********************End of concj.m**************************
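The file above differentiates each constraint expression with respect to every decision variable and writes the resulting Jacobian entries to a file. As a rough cross-check of such generated derivatives, a numerical Jacobian can be formed by forward differences; the following is a minimal Python sketch (the function `finite_diff_jacobian` and the example constraint are illustrative, not part of the thesis code):

```python
import numpy as np

def finite_diff_jacobian(c, y, eps=1e-7):
    """Forward-difference Jacobian of a vector-valued constraint
    function c at the point y (rows: constraints, cols: variables)."""
    y = np.asarray(y, dtype=float)
    c0 = np.asarray(c(y), dtype=float)
    J = np.zeros((c0.size, y.size))
    for j in range(y.size):
        yp = y.copy()
        yp[j] += eps
        J[:, j] = (np.asarray(c(yp), dtype=float) - c0) / eps
    return J

# Example: c(y) = [y0^2, y0*y1] has exact Jacobian [[2*y0, 0], [y1, y0]].
J = finite_diff_jacobian(lambda y: [y[0]**2, y[0]*y[1]], [1.0, 2.0])
```

A comparison like this is useful when the symbolic Jacobian file is large and hard to inspect by eye.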


E.1.3 conoj.m

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 5. Finding the Jacobian matrix objgrd. %

in1=1*nv*n*N+N*m; %#of all variables. %

dc1=LL1; %

dc2=LL2; %

fid21=fopen(’r_o1.m’,’w+’); %

for i=1:N, %

nn1=num2str(i); %

nn2=dc1(i); %

nn3=sym([’c’nn1]); %

nn4=char(nn3); %

nn5=char(nn2); %

if i==1, %

fprintf(fid21,'[%s ;\n',nn5); %
elseif i==N, %

fprintf(fid21,’%s ]\n’,nn5); %

else %

fprintf(fid21,’%s ;\n’,nn5); %

end %

clear nn1 nn2 nn3 nn4 nn5 %

end %

st1=fclose(fid21); %

fid22=fopen(’r_o2.m’,’w+’); %

for i=1:N, %
nn1=num2str(i); %

nn2=dc2(i); %

nn3=sym([’c’nn1]); %

nn4=char(nn3); %

nn5=char(nn2); %

if i==1, %

fprintf(fid22,’[%s ;\n’,nn5); %

elseif i==N, %

fprintf(fid22,’%s ]\n’,nn5); %

else %

fprintf(fid22,’%s ;\n’,nn5); %

end %

clear nn1 nn2 nn3 nn4 nn5 %

end %

st2=fclose(fid22); %

fid23=fopen(’r_oj1.m’,’w+’); %


for i=1:N, %

for j=1:in1, %

nn1=num2str(i); %

nn2=num2str(j); %

nn3=diff(dc1(i),sym([’y’nn2])); %

nn4=sym([’c’nn1 ’_’nn2]); %

nn5=char(nn3); %

nn6=char(nn4); %

ij=j+(i-1)*in1; %

dj=i*in1; %

if ij==1, %

fprintf(fid23,’[%s ,\n’,nn5); %

elseif ij==N*in1, %

fprintf(fid23,’%s ]\n’,nn5); %

elseif ij==dj, %
fprintf(fid23,'%s ;\n',nn5); %

else %

fprintf(fid23,’%s ,\n’,nn5); %

end %

clear nn1 nn2 nn3 nn4 nn5 nn6 %

end %

end %

st3=fclose(fid23); %

fid24=fopen(’r_oj2.m’,’w+’); %

for i=1:N, %
for j=1:in1, %

nn1=num2str(i); %

nn2=num2str(j); %

nn3=diff(dc2(i),sym([’y’nn2])); %

nn4=sym([’c’nn1 ’_’nn2]); %

nn5=char(nn3); %

nn6=char(nn4); %

ij=j+(i-1)*in1; %

dj=i*in1; %

if ij==1, %

fprintf(fid24,’[%s ,\n’,nn5); %

elseif ij==N*in1, %

fprintf(fid24,’%s ]\n’,nn5); %

elseif ij==dj, %

fprintf(fid24,’%s ;\n’,nn5); %

else %


fprintf(fid24,’%s ,\n’,nn5); %

end %

clear nn1 nn2 nn3 nn4 nn5 nn6 %

end %

end %

st4=fclose(fid24); %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%***********************End of conoj.m**************************

E.1.4 execute.m

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

pre

main
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

E.1.5 funcon.m

function [c,cJac] = funcon(y,ncnln)

user %

para %

nv1=(1*n*p*(N+1)+2*n*N+2*r*N)/((1*n)*N); %

nv2=floor((1*n*p*(N+1)+2*n*N+2*r*N)/((1*n)*N)); %

if nv1==nv2, %
nv=nv1; %

else %

nv=nv2+1; %

end %

pm=n*p-sum(ps); %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 1. calculate ti except t0 and tf. %

nn0=num2str(N); %

eval([’t’ nn0 ’ = tf;’]); %

for i=1:N-1, %
nn1=num2str(i); %

nn2=t0+i*(tf-t0)/N; %

eval([’t’ nn1 ’ = nn2;’]); %

end %

clear nn1 nn2 %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%


in1=1*nv*n*N+N*m; %#of all variables. %

cn=1*n*p*(N+1)+2*n*N+2*r*N-pm*(N+1); %# of nonlin constraints %

for i=1:in1, %

nn1=num2str(i); %

eval([’y’nn1 ’ = y(’nn1 ’);’]); %

end %

fid1=fopen(’r_c.m’,’r’); %

s1 = fscanf(fid1,’%s’); %

c=eval(s1); %

st1=fclose(fid1); %

fid2=fopen(’r_cj.m’,’r’); %

s2 = fscanf(fid2,’%s’); %

cJac=eval(s2); %

st2=fclose(fid2); %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
return; %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%End of funcon.m%%%%%%%%%%%%%%%%%%%%%%%%
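funcon.m reads the generated constraint expressions back from `r_c.m` as text and evaluates them with `eval` after binding each entry of the decision vector to a named variable (`y1`, `y2`, ...). That generate-then-evaluate pattern can be sketched in Python as follows; the expression string and helper name are made up for illustration:

```python
# Stand-in for the contents of a generated expression file like r_c.m:
# constraints stored as text, evaluated after binding y(i) to "yi".
expr = "[y1**2 + y2, y1 - 3*y2]"

def eval_constraints(expr, y):
    # Build a scope mapping y1, y2, ... to the current variable values,
    # then evaluate the stored expression in that scope.
    scope = {f"y{i+1}": v for i, v in enumerate(y)}
    return eval(expr, {"__builtins__": {}}, scope)

c = eval_constraints(expr, [2.0, 3.0])
```

The design trade-off is the same as in the MATLAB code: regenerating the text files is slow, but evaluating them at each solver iteration is cheap.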

E.1.6 funcont.m

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

allnec %This M-file will prepare all symbolic computations. %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 1. calculate ti except t0 and tf. %

nn0=num2str(N); %

eval([’t’ nn0 ’ = tf;’]); %

for i=1:N-1, %

nn1=num2str(i); %

nn2=i*(tf-t0)/N; %

eval([’t’ nn1 ’ = nn2;’]); %

end %

t=sym(’t’); %

tf=sym(’tf’); %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% 2.Preparing F(n_states), CC(np_continuity_lagrange), %

%XX(np_continuities_state), CX(r_constraints), B0(np_initial), %

%and Bf(np_final). %

for i=1:n, %

nnn1=num2str(i); %


if fs(i)==0, %

d1=sym([zeros(N,1)]); %

else %

d1=sym([zeros(N,1)]); %

for ii=1:N, %

df(ii,1)=fs(i); %

end %

for j=1:N, %

nn3=num2str(j); %

for k1=1:n, %

for k2=1:p+1, %

nn1=num2str(k1); %

k22=k2-1; %

nn2=num2str(k22); %

d1(j)=subs(df(j),sym(['x'nn1 nn2]),sym(['x'nn1 nn2 '('nn3 ')']));%
df(j)=d1(j); %

end %

end %

for k1=1:m, %

nn1=num2str(k1); %

nn3=num2str(j); %

d1(j)=subs(d1(j),sym([’u’nn1]),sym([’u’nn1 ’(’nn3 ’)’])); %

end %

end %

df=d1; %
end %

for ii=1:N, %

d1(ii)=eval(d1(ii)); %

end %

F((i-1)*N+1:i*N,1)=d1; %

end %

%******************************************************************

ddp=0; %

for ii=1:n, %

dp=ps(ii); %

for i=1:dp, %

XX(ddp*N+(i-1)*N+1:ddp*N+(i)*N,1)=Xdp((ii-1)*N+1:ii*N,i); %

end %

ddp=dp+ddp; %

end %

%******************************************************************


if r==0, %

CX=[]; %

else %

for i=1:r, %

nn1=num2str(i); %

if cx(i)==0, %

d1=sym([zeros(N,1)]); %

else %

d1=sym([zeros(N,1)]); %

for ii=1:N, %

dc(ii,1)=cx(i); %

end %

for j=1:N, %

nn3=num2str(j); %

for k1=1:n, %
for k2=1:2*p+1, %

nn1=num2str(k1); %

k22=k2-1; %

nn2=num2str(k22); %

d1(j)=subs(dc(j),sym([’x’nn1 nn2]),sym([’x’nn1 nn2 ’(’nn3 ’)’]));%

dc(j)=d1(j); %

end %

end %

for k1=1:m, %

nn1=num2str(k1); %
nn3=num2str(j); %

d1(j)=subs(d1(j),sym([’u’nn1]),sym([’u’nn1 ’(’nn3 ’)’])); %

end %

end %

dc=d1; %

end %

for ii=1:N, %

d1(ii)=eval(d1(ii)); %

end %

CX((i-1)*N+1:i*N,1)=d1; %

end %

end %

%*****************************************************************%

pm=n*p-sum(ps); %

pp=n*p-pm; %

clear d1 %


clear d2 %

ddp=0; %

for ii=1:n, %

dp=ps(ii); %

for i=1:dp, %

k=i+ddp; %

dc0(k,1)=x0(k); %

dcf(k,1)=xf(k); %

for k1=1:n, %

for k2=1:p+1, %

nn1=num2str(k1); %

k22=k2-1; %

nn2=num2str(k22); %

nn3=num2str(1); %

nn4=num2str(N); %
d1(k,1)=subs(dc0(k,1),sym(['x'nn1 nn2]),sym(['x'nn1 nn2... %

’(’nn3 ’)’]),0); %

d2(k,1)=subs(dcf(k,1),sym([’x’nn1 nn2]),sym([’x’nn1 nn2... %

’(’nn4 ’)’]),0); %

dc0(k,1)=d1(k,1); %

dcf(k,1)=d2(k,1); %

end %

end %

end %

ddp=dp+ddp; %
end %

for j=1:pp, %

dd1(j,1)=eval(d1(j)); %

dd2(j,1)=eval(d2(j)); %

end %

B0=dd1; %

Bf=dd2; %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 3. Substitute with all applicable ti. %

%************************F to be F_C******************************%

F_C=sym([zeros(2*n*N,1)]); %

for i=1:n, %

for j=1:N, %

k=j+(i-1)*N; %

for ii=1:2, %


ij=(ii-1)+(j-1); %

nn3=num2str(ij); %

d=sym([’t’nn3]); %

F_C((i-1)*2*N+ii+(j-1)*2)=subs(F(k),sym(’t’),d); %

end %

end %

end %

%***Note for future works*** %

%d=sym([’t’num2str(1)]) %

%d=eval(sym([’t’num2str(1)])) %

%subs(F_C(1),sym(’t0’),d) %

%*************************** %

%**********************CX to be CX_C******************************%

CX_C=sym([zeros(2*r*N,1)]); %

for i=1:r, %
for j=1:N, %

k=j+(i-1)*N; %

for ii=1:2, %

ij=(ii-1)+(j-1); %

nn3=num2str(ij); %

d=sym([’t’nn3]); %

CX_C((i-1)*2*N+ii+(j-1)*2)=subs(CX(k),sym(’t’),d); %

end %

end %

end %
%***********************XX to be XX_C*****************************%

for i=1:pp, %

for j=1:N, %

k=j+(i-1)*(N); %

nn1=num2str(j); %

nn2=num2str(j-1); %

d=sym([’t’nn1]); %

XXX(k,1)=subs(XX(k),sym(’t’),d); %

dd=sym([’t’nn2]); %

XXXX(k,1)=subs(XX(k),sym(’t’),dd); %

end %

end %

for i=1:pp, %

for j=1:N-1, %

k=j+(i-1)*(N-1); %

XX_C(k,1)=XXX(k+(i-1))-XXXX(k+i); %


end %

end %

%***********************B0 to be B0_C*****************************%

pm=n*p-sum(ps); %

pp=n*p-pm; %

for i=1:pp, %

nn1=num2str(0); %

d=sym([’t’nn1]); %

B0_C(i,1)=subs(B0(i),sym(’t’),d); %

end %

%***********************Bf to be Bf_C*****************************%

pm=n*p-sum(ps); %

pp=n*p-pm; %

for i=1:pp, %

nn1=num2str(N); %d=sym([’t’nn1]); %

Bf_C(i,1)=subs(Bf(i),sym(’t’),d); %

end %

%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 4. Form the constraint matrix c. %

ca=[B0_C; %

Bf_C; %

XX_C; %
F_C; %

CX_C]; %

si1=size(ca); %

si2=si1(1); %

for i=1:si2, %

c(i,1)=ca(i); %

end %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%***This is an option to provide the gradient of c**************

% This is a correct version, but beware of large size error. %

concj %

%***************************************************************

%%%%%%%%%%%%%%%%%%%%%%%%%%End of funcont.m%%%%%%%%%%%%%%%%%%%%%%%%%


E.1.7 funobj.m

function [objf,objgrd] = funobj(y)

user %

para %

nv1=(1*n*p*(N+1)+2*n*N+2*r*N)/((1*n)*N); %

nv2=floor((1*n*p*(N+1)+2*n*N+2*r*N)/((1*n)*N)); %

if nv1==nv2, %

nv=nv1; %

else %

nv=nv2+1; %

end %

pm=n*p-sum(ps); %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 1. calculate ti except t0 and tf. %

nn0=num2str(N); %
eval(['t' nn0 ' = tf;']); %

for i=1:N-1, %

nn1=num2str(i); %

nn2=t0+i*(tf-t0)/N; %

eval([’t’ nn1 ’ = nn2;’]); %

end %

nn1=num2str(N-1); %

eval([’tf1=t’nn1 ’;’]); %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

cn=1*n*p*(N+1)+2*n*N+2*r*N-pm*(N+1); %# of nonlin constraints %
in1=1*nv*n*N+N*m; %#of all variables. %

for i=1:in1, %

nn1=num2str(i); %

eval([’y’nn1 ’ = y(’nn1 ’);’]); %

end %

fid1=fopen(’r_o1.m’,’r’); %

s1 = fscanf(fid1,’%s’); %

objf1=eval(s1); %

st1=fclose(fid1); %

fid2=fopen(’r_o2.m’,’r’); %

s2 = fscanf(fid2,’%s’); %

objf2=eval(s2); %

st2=fclose(fid2); %

h=(tf1-t0)./(N-1); %

n2=(N-1)/2; %

a1=0.0; %


for ii= 1:n2, %

a1=a1+4*objf1(2*ii)+2*objf1(2*ii+1); %

end %

objj1=h*(objf1(1)+a1-objf1(N))/3.0; %

h=(tf-t1)./(N-1); %

a1=0.0; %

for ii= 1:n2, %

a1=a1+4*objf2(2*ii)+2*objf2(2*ii+1); %

end %

objj2=h*(objf2(1)+a1-objf2(N))/3.0; %

objf=objj1+objj2 %

%*************It is better to forget about the******* %
%*************gradient since we may add************** %
%*************error accumulation to the problems.**** %
%jac %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

return; %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%End of funobj.m%%%%%%%%%%%%%%%%%%%%%%
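funobj.m evaluates each cost integral with the composite Simpson rule over the sampled integrand values. The rule itself can be written out in a few lines; this is a minimal Python sketch of the quadrature (not the thesis code, and the sample data below are illustrative):

```python
def simpson(f_vals, h):
    """Composite Simpson's rule for samples f_vals taken at uniform
    spacing h; len(f_vals) must be odd (an even number of panels)."""
    n = len(f_vals)
    assert n % 2 == 1, "Simpson's rule needs an odd number of samples"
    s = f_vals[0] + f_vals[-1]
    s += 4 * sum(f_vals[1:-1:2])   # odd-index (midpoint) samples
    s += 2 * sum(f_vals[2:-1:2])   # interior even-index samples
    return h * s / 3.0

# Simpson's rule is exact for quadratics, so integrating t^2 on [0, 1]
# should recover 1/3 up to rounding error.
vals = [(i / 10) ** 2 for i in range(11)]
area = simpson(vals, 0.1)
```

Note that the loop in funobj.m folds the endpoint term into the `2*objf1(2*ii+1)` sum and then subtracts one copy of `objf1(N)`, which yields the same weights as the standard formula above.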

E.1.8 funobjt.m

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

allnec %This M-file will prepare all symbolic computations. %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 1. calculate ti except t0 and tf. %

nn0=num2str(N); %

eval([’t’ nn0 ’ = tf;’]); %

for i=1:N-1, %

nn1=num2str(i); %

nn2=i*(tf-t0)/N; %

eval([’t’ nn1 ’ = nn2;’]); %

end %

t=sym(’t’); %

tf=sym(’tf’); %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 2.Preparing He. The Hamiltonian Equation. %

H=L; %

d1=sym([zeros(N,1)]); %

for ii=1:N, %


dhh(ii,1)=H; %

end %

for j=1:N, %

nn3=num2str(j); %

for k1=1:n, %

for k2=1:p+1, %

nn1=num2str(k1); %

k22=k2-1; %

nn2=num2str(k22); %

d1(j)=subs(dhh(j),sym([’x’nn1 nn2]),sym([’x’nn1 nn2 ’(’nn3 ’)’]));%

dhh(j)=d1(j); %

end %

end %

for k1=1:m, %

nn1=num2str(k1); %
nn3=num2str(j); %

d1(j)=subs(d1(j),sym([’u’nn1]),sym([’u’nn1 ’(’nn3 ’)’])); %

end %

end %

dhh=d1; %

for ii=1:N, %

d1(ii)=eval(d1(ii)); %

end %

He=d1; %

%******************************************************************

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 3. Substitute with all applicable ti. %

%************************He to be He_O****************************%

He_O=sym([zeros(2*n*N,1)]); %

for j=1:N, %

for ii=1:2, %

ij=(ii-1)+(j-1); %

nn3=num2str(ij); %

d=sym([’t’nn3]); %

He_O(ii+(j-1)*2)=subs(He(j),sym(’t’),d); %

end %

end %

%***Note for future works*** %

% d=sym([’t’num2str(1)]) %

% d=eval(sym([’t’num2str(1)])) %


% subs(He_O(1),sym(’t0’),d) %

%*************************** %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 4. Form the objective function objf.%

obja=He_O; %

for i=1:N, %

ii=i+(i-1); %

ij=i+(i); %

LL1(i,1)=obja(ii); %

LL2(i,1)=obja(ij); %

end %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%*** This is an option for obtaining jacobian (objgrd)***

conoj %

%********************************************************

%%%%%%%%%%%%%%%%%%%%%%%%%%%End of funobjt.m%%%%%%%%%%%%%%%%%%%%%%%%

E.1.9 guess.m

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% This M-file named guess.m is the most important file; it provides%
%the initial guesses for the code... %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 1. calculate ti except t0 and tf. %

nn0=num2str(N); %

eval([’t’ nn0 ’ = tf;’]); %

for i=1:N-1, %

nn1=num2str(i); %

nn2=t0+i*(tf-t0)/N; %

eval(['t' nn1 ' = nn2;']); %
end %

pp=sum(ps); %

Nn1=(pp*(N+1)); %

Nn2=(n*nv*N); %

L=sym([zeros(Nn1,Nn2)]); %


for i=1:N+1, %

nn1=num2str(i-1); %

for j=1:n, %

for l=1:nv, %

nn2=num2str(l-1); %

if i==N+1, %

L(j+(i-1)*pp,(j-1)*nv+l+(i-2)*nv*n)=sym([’t’nn1 ’^’nn2]);%

d=L(j+(i-1)*pp,(j-1)*nv+l+(i-2)*nv*n); %

for k=1:ps(j)-1, %

L(j+(i-1)*pp+k*n,(j-1)*nv+l+(i-2)*nv*n)=diff(d); %

d=L(j+(i-1)*pp+k*n,(j-1)*nv+l+(i-2)*nv*n); %

end %

else %

L(j+(i-1)*pp,(j-1)*nv+l+(i-1)*nv*n)=sym([’t’nn1 ’^’nn2]);%

d=L(j+(i-1)*pp,(j-1)*nv+l+(i-1)*nv*n); %
for k=1:ps(j)-1, %

L(j+(i-1)*pp+k*n,(j-1)*nv+l+(i-1)*nv*n)=diff(d); %

d=L(j+(i-1)*pp+k*n,(j-1)*nv+l+(i-1)*nv*n); %

end %

end %

end %

end %

end %

for i=1:N-1, %

nn1=num2str(i); %
for j=1:n, %

for l=1:nv, %

nn2=num2str(l-1); %

L(j+(i)*pp,(j-1)*nv+l+(i-1)*nv*n)=sym([’-t’nn1 ’^’nn2]); %

d=L(j+(i)*pp,(j-1)*nv+l+(i-1)*nv*n); %

for k=1:ps(j)-1, %

L(j+(i)*pp+k*n,(j-1)*nv+l+(i-1)*nv*n)=diff(d); %

d=L(j+(i)*pp+k*n,(j-1)*nv+l+(i-1)*nv*n); %

end %

end %

end %

end %

%************************** %

for i=1:Nn1, %

for j=1:Nn2, %

L1(i,j)=eval(L(i,j)); %


end %

end %

%

Linv=pinv(L1); %

Nnx=pp*(N-1); %

xxx=zeros(Nnx,1); %

%

R=[x0_l; %

xxx; %

xf_l]; %

%

yc=Linv*R; %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
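guess.m assembles a linear system from the boundary and continuity conditions of the piecewise polynomials and solves it with the pseudoinverse (`pinv`), which returns the minimum-norm coefficient vector when the system is underdetermined. The same idea in Python, with a made-up two-constraint example standing in for the `L1*yc = R` system:

```python
import numpy as np

# Underdetermined system A @ y = r (fewer conditions than coefficients),
# solved via the pseudoinverse as in guess.m; the rows here are a
# hypothetical value condition and derivative condition.
A = np.array([[1.0, 1.0, 1.0, 1.0],
              [0.0, 1.0, 2.0, 3.0]])
r = np.array([1.0, 0.0])

# pinv picks the solution of smallest 2-norm among all exact solutions,
# which makes it a reasonable "smooth" initial guess for the optimizer.
y = np.linalg.pinv(A) @ r
```

Because the conditions are satisfied exactly while the coefficients stay small, the resulting trajectory guess tends to be well scaled for NPSOL.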

E.1.10 jac.m

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

fid3=fopen(’r_oj1.m’,’r’); %

s3 = fscanf(fid3,’%s’); %

objgrd1=eval(s3); %

st3=fclose(fid3); %

fid4=fopen(’r_oj2.m’,’r’); %

s4 = fscanf(fid4,’%s’); %

objgrd2=eval(s4); %

st4=fclose(fid4); %

for i=1:in1, %

do1=objgrd1(:,i); %

do2=objgrd2(:,i); %

h=(tf1-t0)./(N-1); %

n2=(N-1)/2; %

a1=0.0; %

for ii= 1:n2, %

a1=a1+4*do1(2*ii)+2*do1(2*ii+1); %

end %

objg1(:,i)=h*(do1(1)+a1-do1(N))/3.0; %

h=(tf-t1)./(N-1); %
a1=0.0; %

for ii= 1:n2, %

a1=a1+4*do2(2*ii)+2*do2(2*ii+1); %

end %

objg2(:,i)=h*(do2(1)+a1-do2(N))/3.0; %


end %

objgrd=objg1+objg2; %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

E.1.11 main.m

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% This M-file is called main.m; it is the main program that users %
%need to run to obtain their optimal solutions. The optimal solu- %
%tions are obtained using the nonlinear programming package NPSOL.%
%INSTRUCTIONS ON HOW TO USE THIS CODE: %
% a) Users need to modify the M-file named user.m by closely %
% following the instructions embedded inside it. %
% b) The second step is preparing all symbolic code for %
% all constraints and objective functions by typing the %
% command "pre" in the Matlab command window and disregarding %
% all warnings that appear on the screen. %
% c) The last step is to run this code by typing the %
% command "main" in the Matlab command window. %
% %

%THIS CODE IS DEVELOPED BY: %

% Captain Tawiwat Veeraklaew %

% Department of Mechanical Engineering %

% Chulachomklao Royal Military Academy %

% Nakhon-Nayok, 26001 THAILAND %

% E-mail: [email protected] %

% [email protected] %

% [email protected] %

% or contact Dr. Sunil K. Agrawal at University of Delaware %

% E-mail: [email protected] %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

clear

close all

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 1. Obtaining all inputs from user.m, initial guesses, and %
% # of variables. %

user %

nv1=(1*n*p*(N+1)+2*n*N+2*r*N)/((1*n)*N); %

nv2=floor((1*n*p*(N+1)+2*n*N+2*r*N)/((1*n)*N)); %

if nv1==nv2, %


for j=1:N, %

ij=j+(i-1)*N; %

cul(ij,1)=cu_l(i); %

end %

end %

l_box(in3+1:in2,1) = cul; %

end %

%

l_A = []; %

%

l_nonlin(1:(2*n*p-2*pm),1) = [x0_l;xf_l]; %

l_nonlin(in4,1) = 0; %

if r>0, %

for i=1:r, %

for j=1:2*N, %
ij=j+(i-1)*2*N; %

cxl(ij,1)=cx_l(i); %

end %

end %

l_nonlin(in5+1:in4,1) = cxl;%double check! %

end %

%************************************************%

u_box = BB*(max(eye(in1))’); %

%

if m>0, %for i=1:m, %

for j=1:N, %

ij=j+(i-1)*N; %

cuu(ij,1)=cu_u(i); %

end %

end %

u_box(in3+1:in2,1) = cuu; %

end %

%

u_A = []; %

%

u_nonlin(1:(2*n*p-2*pm),1) = [x0_u;xf_u]; %

u_nonlin(in4,1) = 0; %

if r>0, %

for i=1:r, %

for j=1:2*N, %


ij=j+(i-1)*2*N; %

cxu(ij,1)=cx_u(i); %

end %

end %

u_nonlin(in5+1:in4,1) = cxu;%double check! %

end %

l=[l_box; l_A;l_nonlin]; %

u=[u_box; u_A;u_nonlin]; %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

tic

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 3. Call NPSOL. %

funobj = ’funobj’; %

funcon = ’funcon’; %

[y,f,g,c,cJac,inform,lambda,iter,istate]= npsol(A,l,u,y,funobj... %
,funcon,500,3); %

result=y; %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

toc

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 4. Plot results. %

result1 %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%***************************End of main.m**************************

E.1.12 main-guess.m

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% This M-file is called main_guess.m; it is the main program that %
%users need to run to obtain their optimal solutions. The optimal %
%solutions are obtained using the nonlinear programming package %
%NPSOL. %
%INSTRUCTIONS ON HOW TO USE THIS CODE: %
% a) Users need to modify the M-file named user.m by closely %
% following the instructions embedded inside it. %
% b) The second step is preparing all symbolic code for %
% all constraints and objective functions by typing the %
% command "pre" in the Matlab command window and disregarding %
% all warnings that appear on the screen. %
% c) The last step is to run this code by typing the %
% command "main" in the Matlab command window. %


% %

%THIS CODE IS DEVELOPED BY: %

% Captain Tawiwat Veeraklaew %

% Department of Mechanical Engineering %

% Chulachomklao Royal Military Academy %

% Nakhon-Nayok, 26001 THAILAND %

% E-mail: [email protected] %

% [email protected] %

% [email protected] %

% or contact Dr. Sunil K. Agrawal at University of Delaware %

% E-mail: [email protected] %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

clear all

close all

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 1. Obtaining all inputs from user.m, initial guesses, and %

% # of variables. %

user %

nv1=(1*n*p*(N+1)+2*n*N+2*r*N)/((1*n)*N); %

nv2=floor((1*n*p*(N+1)+2*n*N+2*r*N)/((1*n)*N)); %

if nv1==nv2, %

nv=nv1; %

else %

nv=nv2+1; %
end %

if m==0, %

cu=[]; %

cu_l=[]; %

cu_u=[]; %

else %

cu=cu; %

cu_l=cu_l; %

cu_u=cu_u; %

end %

if r==0, %

cx=[]; %

cx_l=[]; %

cx_u=[]; %

else %

cx=cx; %


cx_l=cx_l; %

cx_u=cx_u; %

end %

pm=n*p-sum(ps); %

in1=1*nv*n*N+N*m; %#of all variables including mu. %

in2=1*nv*n*N+N*m; %# of all variables without mu. %

in3=1*nv*n*N; %# of all variables without mu and u. %

in4=1*n*p*(N+1)+2*n*N+2*r*N-pm*(N+1); %# of nonlin constraints%

in5=1*n*p*(N+1)+2*n*N-pm*(N+1); %# of nonlin constraints %

guess %

y=yc; %

d1=size(yc); %

d2=d1(1); %

in5=in1-d2; %

y(d2+1:in1,1)=0.5*max(eye(in5))'; %
A =[]; %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 2. Form upper and lower bound. %

BB=absmax; %This is the abs value limit. %

l_box = -BB*(max(eye(in1))’); %

%

if m>0, %

for i=1:m, %
for j=1:N, %

ij=j+(i-1)*N; %

cul(ij,1)=cu_l(i); %

end %

end %

l_box(in3+1:in2,1) = cul; %

end %

%

l_A = []; %

%

l_nonlin(1:(2*n*p-2*pm),1) = [x0_l;xf_l]; %

l_nonlin(in4,1) = 0; %

if r>0, %

for i=1:r, %

for j=1:2*N, %

ij=j+(i-1)*2*N; %


cxl(ij,1)=cx_l(i); %

end %

end %

l_nonlin(in5+1:in4,1) = cxl;%double check! %

end %

%************************************************%

u_box = BB*(max(eye(in1))’); %

%

if m>0, %

for i=1:m, %

for j=1:N, %

ij=j+(i-1)*N; %

cuu(ij,1)=cu_u(i); %

end %

end %u_box(in3+1:in2,1) = cuu; %

end %

%

u_A = []; %

%

u_nonlin(1:(2*n*p-2*pm),1) = [x0_u;xf_u]; %

u_nonlin(in4,1) = 0; %

if r>0, %

for i=1:r, %

for j=1:2*N, %
ij=j+(i-1)*2*N; %

cxu(ij,1)=cx_u(i); %

end %

end %

u_nonlin(in5+1:in4,1) = cxu;%double check! %

end %

l=[l_box; l_A;l_nonlin]; %

u=[u_box; u_A;u_nonlin]; %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

tic

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 3. Call NPSOL. %

funobj = ’funobj’; %

funcon = ’funcon’; %

[y,f,g,c,cJac,inform,lambda,iter,istate]= npsol(A,l,u,y,funobj,...%

funcon,500,3); %


result=y; %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

toc

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 4. Plot results. %

result1 %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%************************End of main_guess.m***********************

E.1.13 necc.m

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%This file is called necc.m; it provides all the necessary condi- %
%tions before using NPSOL. %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%*******************************************

% 1. Define Extended Hamiltonian function. %

H=L; %

for i=1:n, %

nn1=num2str(i); %

nn2=[’v’ nn1 ’0’]; %

nn3=eval([’fs(’ nn1 ’)’]); %

H1 = nn2*nn3; %

H=H+H1; %

end %

for i=1:r, %

nn1=num2str(i); %

nn2=[’mu’ nn1 ’0’]; %

nn3=eval([’cx(’ nn1 ’,1)’]); %

H1=nn2*nn3; %

H=H+H1; %

end %

%*******************************************

%******************************************************

% 2. This step will provide the costate equations. %

% format>>> g(1)=(), ..., g(n)=() %

gs=sym([zeros(n,1)]); %

dd=sym(’0’); %


E.1.14 Npsol.m

%NPSOL Sequential quadratic programming code NPSOL by Gill et. al.

% [x,f,g,c,cJac,inform,lambda,iter,istate]= ...

% npsol(A,l,u,x,funobj,funcon,msg,derlvl,verlvl)

% solves the problem

% minimize f(x)

% subject to ( ) ( x ) ( )

% (l) =< ( Ax ) =< (u)

% ( ) ( c(x) ) ( )

% On successful termination, x contains the optimal solution, and

% f and g contain the final objective function value and gradient.

% c and cJac contains the constraint and the constraint Jacobian.

% Lambda contains the Lagrange multiplier vector of the final QP,

% istate denotes the status of the constraints and iter denotes

% the number of iterations. Inform contains information about the
% exit status of NPSOL. Funobj is the name of the m-file for

% evaluating the objective function and funcon is the name of the

% m-file for evaluating the constraint functions (without .m

% -extension).

%

% The following two m-files must be supplied

% (where default names are ’funobj.m’ and ’funcon.m’

% unless stated otherwise in the variables

% funobj and funcon). Default value for msg is 10.

%

% ’funobj.m’ containing function [f,gradf]=funobj(x) that

% computes the objective function f and the

% gradient gradf at the point x.

%

% ’funcon.m’ containing function [c,cJac]=funcon(x,ncnln) that

% computes the constraint value c and the

% constraint Jacobian cJac for the ncnln

% nonlinear constraints at the point x.

%

% Derlvl controls which derivatives are given. (Default=0)

% Verlvl controls the level of checking of gradients.

% (Default=0)

%

% This version dated 20-Mar-1993.

function [x,f,g,c,cJac,inform,lambda,iter,istate]= ...


npsol(A,l,u,x,funobj,funcon,msg,derlvl,verlvl)

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% M-file written by Anders Forsgren

% Division of Optimization and Systems Theory

% Department of Mathematics

% Royal Institute of Technology

% S-100 44 Stockholm

% Sweden

% [email protected]

if nargin < 9

verlvl = 0;

end

if nargin < 8

derlvl = 0;

end

if nargin < 7

  msg = 10;

end

if nargin < 5

funobj = ’funobj’;funcon = ’funcon’;

end

[mA,nA] = size(A);

[lx] = length(x);

[ll] = length(l);

[lu] = length(u);

if ll ~= lu | ...

ll < mA + lx | ...

(nA > 0 & nA ~= lx),

clear mA nA lx ll lu obj lambda istate iter inform msglvl ...

c cJac funcon derlvl f g msg funobj verlvl

disp(’ ’)

disp(’Error using npsol, incompatible dimensions of input data.’)

disp(’Npsol not executed. Please check below and correct. ’)


disp(’ ’)

whos

else

[x,f,g,c,cJac,inform,lambda,iter,istate] = ...

npmex(A,l,u,x,funobj,funcon,msg,derlvl,verlvl);

end
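The wrapper above targets the commercial NPSOL library through a MEX gateway (npmex). For readers without NPSOL, the same problem shape — the bound sandwich l =< [x; Ax; c(x)] =< u — can be sketched with SciPy's optimizer. This is an illustration only (it assumes SciPy is installed and is not the toolchain used in this appendix):

```python
import numpy as np
from scipy.optimize import minimize, LinearConstraint, NonlinearConstraint

# Illustrative problem: minimize x1^2 + x2^2 subject to the NPSOL-style
# sandwich  l =< [x; A x; c(x)] =< u, here with
#   -10 <= xi <= 10,   1 <= x1 + x2,   x1*x2 <= 10.
A = np.array([[1.0, 1.0]])
lin = LinearConstraint(A, lb=1.0, ub=np.inf)
nonlin = NonlinearConstraint(lambda x: x[0] * x[1], -np.inf, 10.0)

res = minimize(lambda x: x[0]**2 + x[1]**2,
               x0=np.array([2.0, 0.0]),
               bounds=[(-10.0, 10.0), (-10.0, 10.0)],
               constraints=[lin, nonlin],
               method='trust-constr')
print(res.x)  # optimum near [0.5, 0.5]
```

(trust-constr is an interior-point method rather than an SQP method like NPSOL, but it accepts the same mixture of simple bounds, linear constraints, and nonlinear constraints.)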

E.1.15 param.m

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%This file, called param.m, parameterizes all state, costate and %

%control variables before passing them through funcon.m,funobj.m %

%and main.m, so that the optimum can be found using NPSOL. %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 1. Set how many parameters we need %

nv1=(1*n*p*(N+1)+2*n*N+2*r*N)/((1*n)*N); %

nv2=floor((1*n*p*(N+1)+2*n*N+2*r*N)/((1*n)*N)); %

if nv1==nv2, %

nv=nv1; %

else %

nv=nv2+1; %

end %

%******Let’s calculate for state variables first.*** %

X=sym([zeros(n*N,1)]); %

d=sym([zeros(n*N,1)]); %

for i =1:n, %

for j=1:N, %

for k=1:nv, %

kk=k-1; %

nn4=num2str(kk); %

kj=(i-1)*nv*N+k+(j-1)*nv; %

nn5=num2str(kj); %

dx=sym([’y’nn5 ])*sym([’t^’nn4]); %

d((i-1)*N+j)=d((i-1)*N+j)+dx; %

end %

end %

end %

X=d; %


%

% X formats are like the following: %

% X(1:N,1)=x1 interval 1 to N. %

% X(N+1:2N,1)=x2 interval 1:N. %

% ... %

% X((i-1)N+1:iN)=xi interval 1 to N. %

% Also, the same manner applies to V (Lamda). %

%*******Now, calculate for control variables.*********************%

U=sym([zeros(m*N,1)]); %

d=sym([zeros(m*N,1)]); %

for i =1:m, %

for j=1:N, %

ku=1*n*nv*N+j+(i-1)*N; %

nn1=num2str(ku); %

du=sym([’y’nn1 ]); %

d((i-1)*N+j)=d((i-1)*N+j)+du; %

end %

end %

U=d; %

% U formats are like the following: %

% U(1:N,1)=U1 interval 1 to N. %

% U(N+1:2N,1)=U2 interval 1:N. %

% ... %

% U((i-1)N+1:iN)=Ui interval 1 to N. %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 2. Finding derivatives of X upto p_th derivative. %

dh(:,1)=X; %

for i=2:p+1, %

dd=dh(:,(i-1)); %

dh(:,i)=diff(dd,’t’); %

end %

Xdp=dh; %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 3. Set all variables to be match with all necessary conditions. %

% x10,...,x1p, x20,...,x2p, ..., xn0,...,xnp %

% u1, ..., um %

for i=1:n, %


for j=1:p+1, %

nn1=num2str(i); %

jj=j-1; %

nn2=num2str(jj); %

nn3=num2str(j); %

d1=(i-1)*N+1; %

nn4=num2str(d1); %

d2=i*N; %

nn5=num2str(d2); %

nn6=eval([’Xdp(’nn4 ’:’ nn5 ’,’nn3 ’)’]); %

%****Good for future works**** %

eval([’x’ nn1 nn2 ’ = nn6;’]); %

%***************************** %

end %

end %

for i=1:m, %

nn1=num2str(i); %

d1=(i-1)*N+1; %

d2=i*N; %

nn2=num2str(d1); %

nn3=num2str(d2); %

nn4=eval([’U(’nn2 ’:’ nn3 ’)’]); %

%****Good for future works**** %

eval([’u’ nn1 ’ = nn4;’]); %

%***************************** %

end %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%***************************end of param.m*************************
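param.m represents each state on each interval as a polynomial in t whose coefficients are the NPSOL parameters y_k, and step 2 obtains the derivative chain by repeated differentiation. A minimal Python sketch of the same idea (using sympy; the names here are illustrative, not taken from the code above):

```python
import sympy as sp

t = sp.symbols('t')
nv = 4                                  # coefficients per state per interval
y = sp.symbols('y1:5')                  # NPSOL parameters y1..y4

# One state on one interval: a polynomial in t, as built in param.m
x = sum(y[k] * t**k for k in range(nv))

# Step 2 of param.m: derivative chain up to the p-th derivative
p = 2
chain = [x]
for _ in range(p):
    chain.append(sp.diff(chain[-1], t))
```

The chain then plays the role of Xdp: chain[0] is the position polynomial, chain[1] the velocity, chain[2] the acceleration.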

E.1.16 pre.m

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%This M-file, called pre.m, provides the constraints and objec- %

%tive function for funcon.m and funobj.m in symbolic form, in %

%r_c.m and r_o.m respectively. %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%******************

clear %

funcont %


clear %

funobjt %

%******************

%**********************End of pre.m********************************

E.1.17 result1.m

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% This M-file, named result1.m, translates the solution data %

%from the nonlinear programming run and plots it in figures. %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

close all

if inform<2,

fprintf(’Congratulations! Your solution was found.\n’)

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 1. calculate ti except t0 and tf. %

nn0=num2str(N); %

eval([’t’ nn0 ’ = tf;’]); %

for i=1:N-1, %

nn1=num2str(i); %

nn2=i*(tf-t0)/N; %

eval([’t’ nn1 ’ = nn2;’]); %

end %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%**********************************************

cl=1; % The number of nodes for each interval.

%**********************************************

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 2.Find T1,..., T_N-1. %

for i=1:N, %

nn1=num2str(i-1); %

nn2=num2str(i); %

nn3=eval(sym([’t’nn1])); %

nn4=eval(sym([’t’nn2])); %

eval([’T’nn1 ’=nn3:(nn4-nn3)/cl:nn4;’]); %

end %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 3. Find x101,..., xnp_N-1 and plot them. %

param %

for i=1:in1, %

nn1=num2str(i); %

eval([’y’nn1 ’ = y(’nn1 ’);’]); %

end %

for i=1:N, % intervals %

nn1=num2str(i); %

nnn=num2str(i-1); %

for j=1:n, % state %

nn2=num2str(j); %

for k1=1:p, %derivatives upto (p)th derivative %

nn3=num2str(k1-1); %

for k2= 1:cl+1, % node in the interval %

nn4=num2str(k2); %

for k3=1:p+2, % variables in each state %

nn5=num2str(k3); %

d=eval(sym([’x’nn2 nn3 ’(’nn1 ’)’])); %

t=eval(sym([’T’nnn ’(’nn4 ’)’])); %

dd=eval(d); %

eval([’x’nn2 nn3 nnn ’(1,’nn4 ’)=dd;’]); %

end %

end %

figure(k1+(j-1)*p) %

hold on %

eval([’plot(T’nnn ’,x’nn2 nn3 nnn ’)’]) %

end %

end %

end %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 4. Find u11,..., um_N-1 and plot them. %

for i=1:N, % intervals %

nn1=num2str(i); %

nnn=num2str(i-1); %

for j=1:m, % state %

nn2=num2str(j); %

for k2= 1:cl+1, % node in the interval %

nn4=num2str(k2); %


d=eval(sym([’u’nn2 ’(’nn1 ’)’])); %

t=eval(sym([’T’nnn ’(’nn4 ’)’])); %

dd=eval(d); %

eval([’u’nn2 nnn ’(1,’nn4 ’)=dd;’]); %

end %

figure(n*p+j) %

hold on %

eval([’plot(T’nnn ’,u’nn2 nnn ’)’]) %

end %

end %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

elseif inform==6,

fprintf(’A local minimum exists! Please check the reasons.\n’)

fprintf(’If it is not true, try to change the limits by\n’)

fprintf(’typing max(y) and comparing with BB in main.m.\n’)

fprintf(’However, to see the plots now, type >>result2.\n’)

else

fprintf(’No solution! Please type "inform" to check the reason\n’)

fprintf(’and refer back to the NPSOL manual.\n’)

fprintf(’However, to see the plots now, type >>result2.\n’)

end

E.1.18 result2.m

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% This M-file, named result2.m, translates the solution data %

%from the nonlinear programming run and plots it in figures. %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

close all

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 1. calculate ti except t0 and tf. %

nn0=num2str(N); %

eval([’t’ nn0 ’ = tf;’]); %

for i=1:N-1, %

nn1=num2str(i); %

nn2=i*(tf-t0)/N; %

eval([’t’ nn1 ’ = nn2;’]); %

end %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%


%**********************************************

cl=1; % The number of nodes for each interval.

%**********************************************

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 2.Find T1,..., T_N-1. %

for i=1:N, %

nn1=num2str(i-1); %

nn2=num2str(i); %

nn3=eval(sym([’t’nn1])); %

nn4=eval(sym([’t’nn2])); %

eval([’T’nn1 ’=nn3:(nn4-nn3)/cl:nn4;’]); %

end %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 3. Find x101,..., xnp_N-1 and plot them. %

param %

for i=1:in1, %

nn1=num2str(i); %

eval([’y’nn1 ’ = y(’nn1 ’);’]); %

end %

for i=1:N, % intervals %

nn1=num2str(i); %

nnn=num2str(i-1); %

for j=1:n, % state %

nn2=num2str(j); %

for k1=1:p, %derivatives upto (p)th derivative %

nn3=num2str(k1-1); %

for k2= 1:cl+1, % node in the interval %

nn4=num2str(k2); %

for k3=1:p+2, % variables in each state %

nn5=num2str(k3); %

d=eval(sym([’x’nn2 nn3 ’(’nn1 ’)’])); %

t=eval(sym([’T’nnn ’(’nn4 ’)’])); %

dd=eval(d); %

eval([’x’nn2 nn3 nnn ’(1,’nn4 ’)=dd;’]); %

end %

end %

figure(k1+(j-1)*p) %

hold on %


eval([’plot(T’nnn ’,x’nn2 nn3 nnn ’)’]) %

end %

end %

end %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 4. Find u11,..., um_N-1 and plot them. %

for i=1:N, % intervals %

nn1=num2str(i); %

nnn=num2str(i-1); %

for j=1:m, % state %

nn2=num2str(j); %

for k2= 1:cl+1, % node in the interval %

nn4=num2str(k2); %

d=eval(sym([’u’nn2 ’(’nn1 ’)’])); %

t=eval(sym([’T’nnn ’(’nn4 ’)’])); %

dd=eval(d); %

eval([’u’nn2 nnn ’(1,’nn4 ’)=dd;’]); %

end %

figure(n*p+j) %

hold on %

eval([’plot(T’nnn ’,u’nn2 nnn ’)’]) %

end %

end %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

E.1.19 user.m

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%This matlab file is called user.m %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%

% 1. Number of state variables...format>>> n=?; %

n=1; %

%

% 2. Number of control variables...format>>> m=?; %

m=1; %

%

% 3. Number of the highest derivative of state variable... %

%format>>> p=?; %

p=2; %


ps=[2]; %

%

% 4. Number of the intervals***...format>>> N=?; %

% This setting lets users control the accuracy of the %

%solutions. If N is large, the solutions will usually be more %

%accurate, but not always*. %

N=6; %

%

% 5. Initial time...format>>> t0=?; %

t0=0; %

%

% 6. Final time...format>>> tf=?; %

% where N=N in section4. %

tf=1; %

%

% 7. This is where the cost function must be defined in the sym-%

%bolic format; one must understand the logic below. %

% 7.1 Control inputs must set as u1, u2, ..., um. %

% 7.2 State variables...As the user know that how many state vari-%

%ables he/she has. %

% State variables must set as x10, x11, ..., x1p, x20, x21,%

% ..., x2p, ..., xn0, xn1, ..., xnp %

% where x10=x1, x11=x1_dot, and so on. %

% Format>>> L=sym(’?’); %

L=sym(’u1^2’); %

%

% 8. This place is where the state equations have to be defined as%

%format below: %

% Format>>> fs(1,1)=sym(’?’);, ..., fs(n,1)=sym(’?’); %

fs(1,1)=sym(’x12-u1’); %

%

% 9. Also, at this point, users need to provide all constraints. %

% Format>>> cu(1,1)=sym(’?’);, ..., cu(m,1)=sym(’?’); where m is %

% the number of the control input. cu must be defined for all %

% control variables.******* %

% Format>>> cx(1,1)=sym(’?’);, ..., cx(r,1)=sym(’?’); where r is %

% the number of the constraints %

% Detail: %

% cu means constraints contain only control inputs. %

% cx means constraints contain only state or both control and sta-%

% te variables. %


% 9.1 Furthermore, for the lower and upper bounds of the inequa- %

% lity constraints: if one has equality constraints, the lower %

% and upper bounds must be defined as the same values. %

% Format>>> cu_l(1,1)=?;, ..., cu_l(m,1)=?; %

% cu_u(1,1)=?;, ..., cu_u(m,1)=?; %

% and so on, where l and u mean lower/upper bound respectively.%

%*****************************************************************%

cu(1,1)=sym(’u1’); %

%cu(2,1)=sym(’u2’); %

%

cu_l(1,1)=-4; %

cu_u(1,1)= 4; %

%cu_l(2,1)=-4; %

%cu_u(2,1)= 4; %

%*****************************************************************%

%

r=0; % remember this is a number of the constraint of x and u. %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%cx(1,1)=sym(’x12’); %

%cx_l(1,1)=-4; %

%cx_u(1,1)=4; %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%

% 10. Boundary Conditions %

% Format>>> x0(i)=?; and xf(i)=?; %

% where x0(1) =x10dot, x0(2) =x11dot, ..., x0(p) =x1pdot %

% x0(p+1)=x20dot, x0(p+2)=x21dot, ..., x0(2p)=x2pdot %

% ... %

% x0((n-1)p+1)=xn0dot, x0((n-1)p+2)=xn1dot, ..., x0(np)=xnpdot%

% and repeat with the same logic for xf(1),...,xf(np). %

%

x0(1,1)=sym(’x10’); %

x0(2,1)=sym(’x11’); %

%Bound of x0 %

x0_l(1,1)=0; %

x0_l(2,1)=0; %

x0_u(1,1)=0; %

x0_u(2,1)=0; %

%

xf(1,1)=sym(’x10’); %

xf(2,1)=sym(’x11’); %


%Bound of xf %

xf_l(1,1)=1; %

xf_l(2,1)=0; %

xf_u(1,1)=1; %

xf_u(2,1)=0; %

%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%

% 11. Define the min/max values of all parameters. %

%

absmax=1e13; %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%***********************end of user.m file*************************
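The sample problem defined in user.m above is the classical minimum-effort double integrator: minimize the integral of u² subject to x_ddot = u (fs = x12 - u1 = 0), moving from x(0) = xdot(0) = 0 to x(1) = 1, xdot(1) = 0. Ignoring the ±4 control bound, the well-known unconstrained optimum is the linear control u(t) = 6 − 12t, which peaks at |u| = 6 > 4, so the bounded NPSOL solution will differ near the endpoints. A quick sympy check of the unconstrained solution:

```python
import sympy as sp

t = sp.symbols('t')
u = 6 - 12*t                      # candidate unconstrained optimal control

x11 = sp.integrate(u, t)          # velocity x1_dot (integration constant 0)
x10 = sp.integrate(x11, t)        # position x1

# Boundary conditions from user.m: start at rest at 0, end at rest at 1
assert x10.subs(t, 0) == 0 and x11.subs(t, 0) == 0
assert x10.subs(t, 1) == 1 and x11.subs(t, 1) == 0

cost = sp.integrate(u**2, (t, 0, 1))   # cost L = u1^2 over [t0, tf] = [0, 1]
print(cost)  # 12
```

This gives a useful sanity reference: the NPSOL run with |u1| <= 4 must return a cost of at least 12.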

E.2 Direct Minimum Time Using Higher-Order Method

E.2.1 allnec.m

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%This file is called allnec.m; it connects all the following %

%files together. It is needed in funobj.m, funcon.m and main.m %

%in order to provide all the symbolic forms of the necessary %

%conditions. %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%user input%%%%%%%%%%%%%%%

user

%*************************************

%%%%%%%%%parameterized variables%%%%%%

param

%*************************************

E.2.2 concj.m

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 5. Finding the Jacobian matrix cJac. %

% This is a correct version, but beware of large size error. %

in1=1*nv*n*N+N*m+1; %#of all variables. %

dc=c; %

ds=size(dc); %


cn=ds(1); %

fid1=fopen(’r_c.m’,’w+’); %

for i=1:cn, %

nn1=num2str(i); %

nn2=dc(i); %

nn3=sym([’c’nn1]); %

nn4=char(nn3); %

nn5=char(nn2); %

if i==1, %

fprintf(fid1,’[%s ;\n’,nn5); %

elseif i==cn, %

fprintf(fid1,’%s ]\n’,nn5); %

else %

fprintf(fid1,’%s ;\n’,nn5); %

end %

clear nn1 nn2 nn3 nn4 nn5 %

end %

st1=fclose(fid1); %

fid11=fopen(’r_cj.m’,’w+’); %

for i=1:cn, %

for j=1:in1, %

nn1=num2str(i); %

nn2=num2str(j); %

nn3=diff(dc(i),sym([’y’nn2])); %

nn4=sym([’c’nn1 ’_’nn2]); %

nn5=char(nn3); %

nn6=char(nn4); %

ij=j+(i-1)*in1; %

dj=i*in1; %

if ij==1, %

fprintf(fid11,’[%s ,\n’,nn5); %

elseif ij==cn*in1, %

fprintf(fid11,’%s ]\n’,nn5); %

elseif ij==dj, %

fprintf(fid11,’%s ;\n’,nn5); %

else %

fprintf(fid11,’%s ,\n’,nn5); %

end %

clear nn1 nn2 nn3 nn4 nn5 nn6 %

end %

end %


st2=fclose(fid11); %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%***********************End of concj.m**************************
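concj.m builds the constraint Jacobian entry by entry, differentiating each constraint symbolically with respect to each parameter y_j and writing the results to r_cj.m. The same operation, using sympy's built-in Jacobian (the two constraints below are illustrative, not taken from the thesis problem):

```python
import sympy as sp

y1, y2 = sp.symbols('y1 y2')

# Two illustrative constraints in the NPSOL parameters
c = sp.Matrix([y1**2 + y2, y1 * y2])

# Row i of cJac is the gradient of constraint i -- exactly what the
# nested loops in concj.m write out term by term
cJac = c.jacobian([y1, y2])
print(cJac)  # Matrix([[2*y1, 1], [y2, y1]])
```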

E.2.3 conoj.m

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 5. Finding the Jacobian matrix objgrd. %

in1=1*nv*n*N+N*m+1; %#of all variables. %

dc1=H; %

fid21=fopen(’r_o1.m’,’w+’); %

for i=1:1, %

nn1=num2str(i); %

nn2=dc1(i); %

nn3=sym([’c’nn1]); %

nn4=char(nn3); %

nn5=char(nn2); %

if i==1, %

fprintf(fid21,’[%s ]\n’,nn5); %

elseif i==N, %

fprintf(fid21,’%s ]\n’,nn5); %

else %

fprintf(fid21,’%s ;\n’,nn5); %

end %

clear nn1 nn2 nn3 nn4 nn5 %

end %

st1=fclose(fid21); %

fid23=fopen(’r_oj1.m’,’w+’); %

for i=1:1, %

for j=1:in1, %

nn1=num2str(i); %

nn2=num2str(j); %

nn3=diff(dc1(i),sym([’y’nn2])); %

nn4=sym([’c’nn1 ’_’nn2]); %

nn5=char(nn3); %

nn6=char(nn4); %

ij=j+(i-1)*in1; %

dj=i*in1; %

if ij==1, %

fprintf(fid23,’[%s ,\n’,nn5); %


elseif ij==in1, %

fprintf(fid23,’%s ]\n’,nn5); %

elseif ij==dj, %

fprintf(fid23,’%s ;\n’,nn5); %

else %

fprintf(fid23,’%s ,\n’,nn5); %

end %

clear nn1 nn2 nn3 nn4 nn5 nn6 %

end %

end %

st3=fclose(fid23); %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%***********************End of conoj.m**************************

E.2.4 execute.m

Same

E.2.5 funcon.m

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

function [c,cJac] = funcon(y,ncnln)

user %

nv1=(1*n*p*(N+1)+2*n*N+2*r*N)/((1*n)*N); %

nv2=floor((1*n*p*(N+1)+2*n*N+2*r*N)/((1*n)*N)); %

if nv1==nv2, %

nv=nv1; %

else %

nv=nv2+1; %

end %

pm=n*p-sum(ps); %

cn=1*n*p*(N+1)+2*n*N+2*r*N-pm*(N+1); %# of nonlin constraints %

in1=1*nv*n*N+N*m+1; %#of all variables. %

for i=1:in1, %

nn1=num2str(i); %

eval([’y’nn1 ’ = y(’nn1 ’);’]); %

end %

nn1=num2str(in1); %

eval([’tf’ ’ = y(’nn1 ’);’]); %

for i=1:N-1, %

nn1=num2str(i); %


nn2=i*(tf-t0)/N; %

eval([’t’ nn1 ’ = nn2;’]); %

end %

fid1=fopen(’r_c.m’,’r’); %

s1 = fscanf(fid1,’%s’); %

c=eval(s1); %

st1=fclose(fid1); %

fid2=fopen(’r_cj.m’,’r’); %

s2 = fscanf(fid2,’%s’); %

cJac=eval(s2); %

st2=fclose(fid2); %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

return; %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%End of funcon.m%%%%%%%%%%%%%%%%%%%%%%

E.2.6 funcont.m

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

allnec %This M-file will prepare all symbolic computations. %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 1. calculate ti except t0 and tf. %

t=sym(’t’); %

tf=sym(’tf’); %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%2.Preparing F(n_states), %

% XX(np_continuities_state), CX(r_constraints), B0(np_initial), %

% and Bf(np_final). %

for i=1:n, %

nnn1=num2str(i); %

if fs(i)==0, %

d1=sym([zeros(N,1)]); %

else %

d1=sym([zeros(N,1)]); %

for ii=1:N, %

df(ii,1)=fs(i); %

end %

for j=1:N, %

nn3=num2str(j); %

for k1=1:n, %


for k2=1:p+1, %

nn1=num2str(k1); %

k22=k2-1; %

nn2=num2str(k22); %

d1(j)=subs(df(j),sym([’x’nn1 nn2]),sym([’x’nn1 nn2 ’(’nn3 ’)’])); %

df(j)=d1(j); %

end %

end %

for k1=1:m, %

nn1=num2str(k1); %

nn3=num2str(j); %

d1(j)=subs(d1(j),sym([’u’nn1]),sym([’u’nn1 ’(’nn3 ’)’])); %

end %

end %

df=d1; %

end %

for ii=1:N, %

d1(ii)=eval(d1(ii)); %

end %

F((i-1)*N+1:i*N,1)=d1; %

end %

%******************************************************************

ddp=0; %

for ii=1:n, %

dp=ps(ii); %for i=1:dp, %

XX(ddp*N+(i-1)*N+1:ddp*N+(i)*N,1)=Xdp((ii-1)*N+1:ii*N,i); %

end %

ddp=dp+ddp; %

end %

%******************************************************************

if r==0, %

CX=[]; %

else %

for i=1:r, %

nn1=num2str(i); %

if cx(i)==0, %

d1=sym([zeros(N,1)]); %

else %

d1=sym([zeros(N,1)]); %

for ii=1:N, %


dc(ii,1)=cx(i); %

end %

for j=1:N, %

nn3=num2str(j); %

for k1=1:n, %

for k2=1:2*p+1, %

nn1=num2str(k1); %

k22=k2-1; %

nn2=num2str(k22); %

d1(j)=subs(dc(j),sym([’x’nn1 nn2]),sym([’x’nn1 nn2 ’(’nn3 ’)’]));%

dc(j)=d1(j); %

end %

end %

for k1=1:m, %

nn1=num2str(k1); %nn3=num2str(j); %

d1(j)=subs(d1(j),sym([’u’nn1]),sym([’u’nn1 ’(’nn3 ’)’])); %

end %

end %

dc=d1; %

end %

for ii=1:N, %

d1(ii)=eval(d1(ii)); %

end %

CX((i-1)*N+1:i*N,1)=d1; %

end %

end %

%*****************************************************************%

clear d1 %

clear d2 %

ddp=0; %

for ii=1:n, %

dp=ps(ii); %

for i=1:dp, %

k=i+ddp; %

dc0(k,1)=x0(k); %

dcf(k,1)=xf(k); %

for k1=1:n, %

for k2=1:p+1, %

nn1=num2str(k1); %

k22=k2-1; %


nn2=num2str(k22); %

nn3=num2str(1); %

nn4=num2str(N); %

d1(k,1)=subs(dc0(k,1),sym([’x’nn1 nn2]),sym([’x’nn1 nn2... %

’(’nn3 ’)’]),0); %

d2(k,1)=subs(dcf(k,1),sym([’x’nn1 nn2]),sym([’x’nn1 nn2... %

’(’nn4 ’)’]),0); %

dc0(k,1)=d1(k,1); %

dcf(k,1)=d2(k,1); %

end %

end %

end %

ddp=dp+ddp; %

end %

for j=1:n*p, %

dd1(j,1)=eval(d1(j)); %

dd2(j,1)=eval(d2(j)); %

end %

B0=dd1; %

Bf=dd2; %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 3. Substitute with all applicable ti. %

%************************F to be F_C******************************%

F_C=sym([zeros(2*n*N,1)]); %

for i=1:n, %

for j=1:N, %

k=j+(i-1)*N; %

for ii=1:2, %

ij=(ii-1)+(j-1); %

nn3=num2str(ij); %

d=sym([’t’nn3]); %

F_C((i-1)*2*N+ii+(j-1)*2)=subs(F(k),sym(’t’),d); %

end %

end %

end %

%***Note for future works*** %

% d=sym([’t’num2str(1)]) %

% d=eval(sym([’t’num2str(1)])) %

% subs(F_C(1),sym(’t0’),d) %


%*************************** %

%**********************CX to be CX_C******************************%

CX_C=sym([zeros(2*r*N,1)]); %

for i=1:r, %

for j=1:N, %

k=j+(i-1)*N; %

for ii=1:2, %

ij=(ii-1)+(j-1); %

nn3=num2str(ij); %

d=sym([’t’nn3]); %

CX_C((i-1)*2*N+ii+(j-1)*2)=subs(CX(k),sym(’t’),d); %

end %

end %

end %

%***********************XX to be XX_C*****************************%

pm=n*p-sum(ps); %

pp=n*p-pm; %

for i=1:pp, %

for j=1:N, %

k=j+(i-1)*(N); %

nn1=num2str(j); %

nn2=num2str(j-1); %

d=sym([’t’nn1]); %

XXX(k,1)=subs(XX(k),sym(’t’),d); %

dd=sym([’t’nn2]); %

XXXX(k,1)=subs(XX(k),sym(’t’),dd); %

end %

end %

for i=1:pp, %

for j=1:N-1, %

k=j+(i-1)*(N-1); %

XX_C(k,1)=XXX(k+(i-1))-XXXX(k+i); %

end %

end %

%***********************B0 to be B0_C*****************************%

pm=n*p-sum(ps); %

pp=n*p-pm; %

for i=1:pp, %

nn1=num2str(0); %

d=sym([’t’nn1]); %

B0_C(i,1)=subs(B0(i),sym(’t’),d); %


end %

%***********************Bf to be Bf_C*****************************%

pm=n*p-sum(ps); %

pp=n*p-pm; %

for i=1:pp, %

nn1=num2str(N); %

d=sym([’t’nn1]); %

Bf_C(i,1)=subs(Bf(i),sym(’t’),d); %

end %

%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 4. Form the constraint matrix c. %

ca=[B0_C; %

Bf_C; %

XX_C; %

F_C; %

CX_C]; %

si1=size(ca); %

si2=si1(1); %

in1=1*nv*n*N+N*m+1; %#of all variables including mu. %

nn1=num2str(N); %

nn2=num2str(in1); %

for i=1:si2, %

cb(i,1)=subs(ca(i),sym([’t’nn1]),sym([’y’nn2]),0); %

end %

for i=1:si2, %

c(i,1)=subs(cb(i),sym([’tf’]),sym([’y’nn2]),0); %

end %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%***This is an option to provide the gradient of c**************

% This is a correct version, but beware of large size error. %

concj %

%***************************************************************

%%%%%%%%%%%%%%%%%%%%%%%End of funcont.m%%%%%%%%%%%%%%%%%%%%%%%%%%%%
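The XX_C block in funcont.m enforces continuity of the piecewise-polynomial states across interior knots: interval j's polynomial evaluated at t_j (XXX) must equal interval j+1's polynomial at the same t_j (XXXX), so their difference is constrained to zero. A minimal sketch of one such residual (the coefficient names are illustrative):

```python
import sympy as sp

t, tj = sp.symbols('t t_j')
a0, a1, b0, b1 = sp.symbols('a0 a1 b0 b1')   # illustrative coefficients

x_left = a0 + a1 * t      # state polynomial on interval j
x_right = b0 + b1 * t     # state polynomial on interval j+1

# One entry of XX_C: left-piece value minus right-piece value,
# both evaluated at the shared knot t_j
residual = x_left.subs(t, tj) - x_right.subs(t, tj)
```

Driving this residual to zero in the NLP makes the two pieces agree at the knot; funcont.m generates one such constraint per state, per continuity order, per interior knot.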


E.2.7 funobj.m

function [objf,objgrd] = funobj(y)

user %

nv1=(1*n*p*(N+1)+2*n*N+2*r*N)/((1*n)*N); %

nv2=floor((1*n*p*(N+1)+2*n*N+2*r*N)/((1*n)*N)); %

if nv1==nv2, %

nv=nv1; %

else %

nv=nv2+1; %

end %

pm=n*p-sum(ps); %

in1=1*nv*n*N+N*m+1; %#of all variables. %

cn=1*n*p*(N+1)+2*n*N+2*r*N-pm*(N+1); %# of nonlin constraints %

for i=1:in1, %

nn1=num2str(i); %

eval([’y’nn1 ’ = y(’nn1 ’);’]); %

end %

nn1=num2str(in1); %

eval([’tf’ ’ = y(’nn1 ’);’]); %

for i=1:N-1, %

nn1=num2str(i); %

nn2=i*(tf-t0)/N; %

eval([’t’ nn1 ’ = nn2;’]); %

end %

nn1=num2str(N-1); %

eval([’tf1=t’nn1 ’;’]); %

fid1=fopen(’r_o1.m’,’r’); %

s1 = fscanf(fid1,’%s’); %

objf=eval(s1); %

st1=fclose(fid1); %

fid2=fopen(’r_oj1.m’,’r’); %

s2 = fscanf(fid2,’%s’); %

objgrd=eval(s2); %

st2=fclose(fid2); %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

return; %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%End of funobj.m%%%%%%%%%%%%%%%%%%%%%%


E.2.8 funobjt.m

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

allnec %This M-file will prepare all symbolic computations. %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 1. calculate ti except t0 and tf. %

t=sym(’t’); %

tf=sym(’tf’); %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 2.Preparing He. The Hamiltonian Equation. %

in1=1*nv*n*N+N*m+1; %#of all variables. %

nn1=num2str(in1); %

H=sym([’y’nn1]); %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%*** This is an option for obtaining jacobian (objgrd)***

conoj %

%********************************************************

%%%%%%%%%%%%%%%%%%%%%%%End of funobjt.m%%%%%%%%%%%%%%%%%%%%%%%%%%%%

E.2.9 guess.m

Same

E.2.10 jac.m

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

fid3=fopen(’r_oj1.m’,’r’); %

s3 = fscanf(fid3,’%s’); %

objgrd1=eval(s3); %

st3=fclose(fid3); %

fid4=fopen(’r_oj2.m’,’r’); %

s4 = fscanf(fid4,’%s’); %

objgrd2=eval(s4); %

st4=fclose(fid4); %
for i=1:in1, %

do1=objgrd1(:,i); %

do2=objgrd2(:,i); %

h=(tf1-t0)./(N-1); %

n2=(N-1)/2; %

a1=0.0; %


for ii= 1:n2, %

a1=a1+4*do1(2*ii)+2*do1(2*ii+1); %

end %

objg1(:,i)=h*(do1(1)+a1-do1(N))/3.0; %

h=(tf-t1)./(N-1); %

a1=0.0; %

for ii= 1:n2, %

a1=a1+4*do2(2*ii)+2*do2(2*ii+1); %

end %

objg2(:,i)=h*(do2(1)+a1-do2(N))/3.0; %

end %

objgrd=objg1+objg2; %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
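jac.m accumulates `4*do(2*ii) + 2*do(2*ii+1)` terms and scales by h/3, i.e. it integrates each gradient column over the time grid with a composite Simpson quadrature. A standalone Python sketch of the standard composite rule, assuming an even number of equal subintervals (names illustrative):

```python
def simpson(f_vals, h):
    """Composite Simpson's rule for samples f_vals at spacing h;
    len(f_vals) must be odd (an even number of subintervals)."""
    n = len(f_vals) - 1
    if n % 2 != 0:
        raise ValueError("need an even number of subintervals")
    s = f_vals[0] + f_vals[-1]
    s += 4 * sum(f_vals[1:-1:2])  # odd-indexed interior samples
    s += 2 * sum(f_vals[2:-1:2])  # even-indexed interior samples
    return h * s / 3.0
```

For f(t) = t^2 on [0, 1] with four subintervals the rule is exact: `simpson([0, 0.0625, 0.25, 0.5625, 1], 0.25)` returns 1/3.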

E.2.11 main.m

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% This M-file, called main.m, is the main program that users %

%need to run for their optimal solutions. The optimal solutions %

%are obtained using the nonlinear programming package NPSOL. %

%INSTRUCTIONS ON HOW TO USE THIS CODE: %

% a) Users need to modify the M-file named user.m, closely %

% following the instructions embedded inside it. %

% b) The second step is preparing all symbolic code for %

% all constraints and objective functions by typing the %

% command "pre" in the Matlab command window; disregard all %

% warnings that appear on the screen. %

% c) The last step to run this code is typing the %

% command "main" in the Matlab command window. %

% %

%THIS CODE IS DEVELOPED BY: %

% Captain Tawiwat Veeraklaew %

% Department of Mechanical Engineering %

% Chulachomklao Royal Military Academy %

% Nakhon-Nayok, 26001 THAILAND %

% E-mail: [email protected] %
% [email protected] %

% [email protected] %

% or contact Dr. Sunil K. Agrawal at University of Delaware %

% E-mail: [email protected] %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%


clear

close all

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 1. Obtaining all inputs from user.m, initial guesses, and %

% # of variables. %

user %

nv1=(1*n*p*(N+1)+2*n*N+2*r*N)/((1*n)*N); %

nv2=floor((1*n*p*(N+1)+2*n*N+2*r*N)/((1*n)*N)); %

if nv1==nv2, %

nv=nv1; %

else %

nv=nv2+1; %

end %

if m==0, %
cu=[]; %

cu_l=[]; %

cu_u=[]; %

else %

cu=cu; %

cu_l=cu_l; %

cu_u=cu_u; %

end %

if r==0, %

cx=[]; %
cx_l=[]; %

cx_u=[]; %

else %

cx=cx; %

cx_l=cx_l; %

cx_u=cx_u; %

end %

pm=n*p-sum(ps); %

in1=1*nv*n*N+N*m+1; %#of all variables. %

in2=1*nv*n*N+N*m; %# of all variables without mu. %

in3=1*nv*n*N; %# of all variables without mu and u. %

in4=1*n*p*(N+1)+2*n*N+2*r*N-pm*(N+1); %#of nonlin constraints %

in5=1*n*p*(N+1)+2*n*N-pm*(N+1); %# of nonlin constraints %

y=1*max(eye(in1))’; %

A =[]; %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 2. Form upper and lower bound. %

BB=absmax; %This is the abs value limit. %

l_box = -BB*(max(eye(in1))’); %

%

if m>0, %

for i=1:m, %

for j=1:N, %

ij=j+(i-1)*N; %

cul(ij,1)=cu_l(i); %

end %

end %

l_box(in3+1:in2,1) = cul; %

end %
l_box(in1,1) = t_l; %

%

l_A = []; %

%

l_nonlin(1:(2*n*p-2*pm),1) = [x0_l;xf_l]; %

l_nonlin(in4,1) = 0; %

if r>0, %

for i=1:r, %

for j=1:2*N, %

ij=j+(i-1)*2*N; %
cxl(ij,1)=cx_l(i); %

end %

end %

l_nonlin(in5+1:in4,1) = cxl;%double check! %

end %

%************************************************%

u_box = BB*(max(eye(in1))’); %

%

if m>0, %

for i=1:m, %

for j=1:N, %

ij=j+(i-1)*N; %

cuu(ij,1)=cu_u(i); %

end %

end %

u_box(in3+1:in2,1) = cuu; %


end %

u_box(in1,1) = t_u; %

%

u_A = []; %

%

u_nonlin(1:(2*n*p-2*pm),1) = [x0_u;xf_u]; %

u_nonlin(in4,1) = 0; %

if r>0, %

for i=1:r, %

for j=1:2*N, %

ij=j+(i-1)*2*N; %

cxu(ij,1)=cx_u(i); %

end %

end %

u_nonlin(in5+1:in4,1) = cxu;%double check! %
end %

l=[l_box; l_A;l_nonlin]; %

u=[u_box; u_A;u_nonlin]; %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
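The nested loops that fill cul/cuu (and cxl/cxu) above simply replicate each variable's scalar bound across its N time nodes before stacking everything into l and u. A minimal Python sketch of the same assembly (function name is illustrative, not part of the toolbox):

```python
def stack_bounds(per_var_bounds, N):
    """Repeat each variable's bound N times, matching the
    cul(ij,1)=cu_l(i) assembly loops in main.m."""
    return [b for b in per_var_bounds for _ in range(N)]
```

For example, `stack_bounds([-4, -2], 3)` returns `[-4, -4, -4, -2, -2, -2]`, i.e. the first control's bound on all three nodes followed by the second control's bound.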

tic

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 3. Call NPSOL. %

funobj = ’funobj’; %

funcon = ’funcon’; %

[y,f,g,c,cJac,inform,lambda,iter,istate]= npsol(A,l,u,y,funobj,...%
funcon,500,3); %

result=y; %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

toc

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 4. Plot results. %

result1 %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%***************************End of main.m**************************

E.2.12 main-guess.m

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% This M-file, called main_guess.m, is the main program that users %

%need to run for their optimal solutions. The optimal solutions %


%are obtained using the nonlinear programming package NPSOL. %

%INSTRUCTIONS ON HOW TO USE THIS CODE: %

% a) Users need to modify the M-file named user.m, closely %

% following the instructions embedded inside it. %

% b) The second step is preparing all symbolic code for %

% all constraints and objective functions by typing the %

% command "pre" in the Matlab command window; disregard all %

% warnings that appear on the screen. %

% c) The last step to run this code is typing the %

% command "main" in the Matlab command window. %

% %

%THIS CODE IS DEVELOPED BY: %

% Captain Tawiwat Veeraklaew %

% Department of Mechanical Engineering %

% Chulachomklao Royal Military Academy %
% Nakhon-Nayok, 26001 THAILAND %

% E-mail: [email protected] %

% [email protected] %

% [email protected] %

% or contact Dr. Sunil K. Agrawal at University of Delaware %

% E-mail: [email protected] %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

clear

close all

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 1. Obtaining all inputs from user.m, initial guesses, and %

% # of variables. %

user %

nv1=(1*n*p*(N+1)+2*n*N+2*r*N)/((1*n)*N); %

nv2=floor((1*n*p*(N+1)+2*n*N+2*r*N)/((1*n)*N)); %

if nv1==nv2, %

nv=nv1; %

else %

nv=nv2+1; %

end %

if m==0, %

cu=[]; %

cu_l=[]; %

cu_u=[]; %

else %


cu=cu; %

cu_l=cu_l; %

cu_u=cu_u; %

end %

if r==0, %

cx=[]; %

cx_l=[]; %

cx_u=[]; %

else %

cx=cx; %

cx_l=cx_l; %

cx_u=cx_u; %

end %

pm=n*p-sum(ps); %

in1=1*nv*n*N+N*m+1; %#of all variables. %
in2=1*nv*n*N+N*m; %# of all variables without mu. %

in3=1*nv*n*N; %# of all variables without mu and u. %

in4=1*n*p*(N+1)+2*n*N+2*r*N-pm*(N+1); %#of nonlin constraints %

in5=1*n*p*(N+1)+2*n*N-pm*(N+1); %# of nonlin constraints %

guess %

y=yc; %

d1=size(yc); %

d2=d1(1); %

in5=in1-d2; %

y(d2+1:in1,1)=1*max(eye(in5))'; %
A =[]; %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 2. Form upper and lower bound. %

BB=absmax; %This is the abs value limit. %

l_box = -BB*(max(eye(in1))’); %

%

if m>0, %

for i=1:m, %

for j=1:N, %

ij=j+(i-1)*N; %

cul(ij,1)=cu_l(i); %

end %

end %

l_box(in3+1:in2,1) = cul; %


end %

l_box(in1,1) = 0; %

%

l_A = []; %

%

l_nonlin(1:(2*n*p-2*pm),1) = [x0_l;xf_l]; %

l_nonlin(in4,1) = 0; %

if r>0, %

for i=1:r, %

for j=1:2*N, %

ij=j+(i-1)*2*N; %

cxl(ij,1)=cx_l(i); %

end %

end %

l_nonlin(in5+1:in4,1) = cxl;%double check! %
end %

%************************************************%

u_box = BB*(max(eye(in1))’); %

%

if m>0, %

for i=1:m, %

for j=1:N, %

ij=j+(i-1)*N; %

cuu(ij,1)=cu_u(i); %

end %
end %

u_box(in3+1:in2,1) = cuu; %

end %

u_box(in1,1) = 2; %

%

u_A = []; %

%

u_nonlin(1:(2*n*p-2*pm),1) = [x0_u;xf_u]; %

u_nonlin(in4,1) = 0; %

if r>0, %

for i=1:r, %

for j=1:2*N, %

ij=j+(i-1)*2*N; %

cxu(ij,1)=cx_u(i); %

end %

end %


u_nonlin(in5+1:in4,1) = cxu;%double check! %

end %

l=[l_box; l_A;l_nonlin]; %

u=[u_box; u_A;u_nonlin]; %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

tic

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 3. Call NPSOL. %

funobj = ’funobj’; %

funcon = ’funcon’; %

[y,f,g,c,cJac,inform,lambda,iter,istate]= npsol(A,l,u,y,funobj,...%

funcon,500,3); %

result=y; %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

toc
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 4. Plot results. %

result1 %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%*******************End of main_guess.m****************************

E.2.13 necc.m

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% This file is called necc.m which will provide all the necessary %

%conditions before using NPSOL. %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%*******************************************

% 1. Define Extended Hamiltonian function. %

H=L; %

for i=1:n, %

nn1=num2str(i); %

nn2=[’v’ nn1 ’0’]; %

nn3=eval(['fs(' nn1 ')']); %
H1 = nn2*nn3; %

H=H+H1; %

end %

for i=1:r, %

nn1=num2str(i); %


nn2=[’mu’ nn1 ’0’]; %

nn3=eval([’cx(’ nn1 ’,1)’]); %

H1=nn2*nn3; %

H=H+H1; %

end %

%*******************************************

%******************************************************

% 2. This step will provide the costate equations. %

% format>>> g(1)=(), ..., g(n)=() %

gs=sym([zeros(n,1)]); %

dd=sym(’0’); %

ddd=sym(’0’); %

dd1=sym(’0’); %

for i=1:n, %
for j=1:(p+1), %

jj=j-1; %

nn1=num2str(i); %

nn2=num2str(jj); %

%

d1=diff(H,[’x’nn1 nn2]); %

if j==1, %

dd1=d1; %

end %end if %

if j>1, %
%**********Chain Rule************************ %

for k1=1:n, %

for k2=1:(p+1) %

kk2=k2-1; %

nn3=num2str(k1); %

nn4=num2str(kk2); %

kk=kk2+jj; %

nn5=num2str(kk); %

dd=diff(d1,[’x’nn3 nn4])*[’x’nn3 nn5]; %

ddd=diff(d1,[’v’nn3 nn4])*[’v’nn3 nn5]; %

dddd=diff(d1,[’mu’nn3 nn4])*[’mu’nn3 nn5];%

da=dd+ddd+dddd; %

dd1=dd1+((-1)^(j+1))*(da); %

end %

end %

%******************************************** %


end %end if %

%

end %

gs(i)=simple(dd1); %

end %

%

%******************************************************

%********************************************************

% 3. This step will provide the continuity conditions. %

% Format %

for ii=1:p, %

jp=ii+1; %

cs=sym([zeros(n,1)]); %

dd=sym('0'); %
ddd=sym('0'); %

dd1=sym(’0’); %

for i=1:n, %

for j=jp:(p+1), %

jj=j-1; %

nn1=num2str(i); %

nn2=num2str(jj); %

%

d1=diff(H,[’x’nn1 nn2]); %

if j==jp, %
dd1=d1; %

end %end if %

if j>jp, %

%**********Chain Rule************************ %

for k1=1:n, %

for k2=1:(p+1) %

kk2=k2-1; %

nn3=num2str(k1); %

nn4=num2str(kk2); %

kk=kk2+jj; %

nn5=num2str(kk); %

dd=diff(d1,[’x’nn3 nn4])*[’x’nn3 nn5]; %

ddd=diff(d1,[’v’nn3 nn4])*[’v’nn3 nn5]; %

dddd=diff(d1,[’mu’nn3 nn4])*[’mu’nn3 nn5]; %

da=dd+ddd+dddd; %

dd1=dd1+((-1)^(j+1))*(da); %


end %

end %

%******************************************** %

end %end if %

%

end %

cs(i)=simple(dd1); %

end %

cc((ii-1)*n+1:ii*n,1)=cs; %

end %

%********************************************************

%*******************end of necc.m file*******************

E.2.14 Npsol.m

Same

E.2.15 param.m

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%This file called param.m which will parameterize all state, etc. %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 1. Set how many parameters we need %

nv1=(1*n*p*(N+1)+2*n*N+2*r*N)/((1*n)*N); %
nv2=floor((1*n*p*(N+1)+2*n*N+2*r*N)/((1*n)*N)); %

if nv1==nv2, %

nv=nv1; %

else %

nv=nv2+1; %

end %

%******Let’s calculate for state variables first.*** %

X=sym([zeros(n*N,1)]); %

d=sym([zeros(n*N,1)]); %

for i =1:n, %
for j=1:N, %

for k=1:nv, %

kk=k-1; %

nn4=num2str(kk); %

kj=(i-1)*nv*N+k+(j-1)*nv; %

nn5=num2str(kj); %


dx=sym([’y’nn5 ])*sym([’t^’nn4]); %

d((i-1)*N+j)=d((i-1)*N+j)+dx; %

end %

end %

end %

X=d; %

%

% X formats are like the following: %

% X(1:N,1)=x1 interval 1 to N. %

% X(N+1:2N,1)=x2 interval 1:N. %

% ... %

% X((i-1)N+1:iN)=xi interval 1 to N. %

% Also, the same manner applies to V (Lambda). %

%*******Now, calculate for control variables.*********************%

U=sym([zeros(m*N,1)]); %
d=sym([zeros(m*N,1)]); %

for i =1:m, %

for j=1:N, %

ku=1*n*nv*N+j+(i-1)*N; %

nn1=num2str(ku); %

du=sym([’y’nn1 ]); %

d((i-1)*N+j)=d((i-1)*N+j)+du; %

end %

end %

U=d; %
% U formats are like the following: %

% U(1:N,1)=U1 interval 1 to N. %

% U(N+1:2N,1)=U2 interval 1:N. %

% ... %

% U((i-1)N+1:iN)=Ui interval 1 to N. %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 2. Finding derivatives of X upto p_th derivative. %

dh(:,1)=X; %

for i=2:p+1, %

dd=dh(:,(i-1)); %

dh(:,i)=diff(dd,’t’); %

end %

Xdp=dh; %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 3. Set all variables to be match with all necessary conditions.%

% x10,...,x1p, x20,...,x2p, ..., xn0,...,xnp %

% u1, ..., um %

for i=1:n, %

for j=1:p+1, %

nn1=num2str(i); %

jj=j-1; %

nn2=num2str(jj); %

nn3=num2str(j); %

d1=(i-1)*N+1; %

nn4=num2str(d1); %

d2=i*N; %

nn5=num2str(d2); %
nn6=eval(['Xdp('nn4 ':' nn5 ','nn3 ')']); %

%****Good for future works**** %

eval([’x’ nn1 nn2 ’ = nn6;’]); %

%***************************** %

end %

end %

for i=1:m, %

nn1=num2str(i); %

d1=(i-1)*N+1; %

d2=i*N; %
nn2=num2str(d1); %

nn3=num2str(d2); %

nn4=eval([’U(’nn2 ’:’ nn3 ’)’]); %

%****Good for future works**** %

eval([’u’ nn1 ’ = nn4;’]); %

%***************************** %

end %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%***********************end of param.m*****************************
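param.m represents each state on each interval as a power series x(t) = sum_k y_k t^k and then differentiates it symbolically up to the p-th derivative (the Xdp array). A numeric Python analogue for a single such polynomial is sketched below (the function name is illustrative, not part of the toolbox):

```python
def poly_state_derivs(coeffs, t, p):
    """Evaluate x(t) = sum_k coeffs[k]*t**k and its derivatives up to
    order p, a numeric analogue of the X / Xdp construction in param.m."""
    out = []
    c = list(coeffs)
    for _ in range(p + 1):
        out.append(sum(ck * t**k for k, ck in enumerate(c)))
        # differentiate once: multiply by the exponent, drop the constant
        c = [k * ck for k, ck in enumerate(c)][1:]
    return out
```

For x(t) = 1 + 2t + 3t^2 at t = 2 this returns [17.0, 14.0, 6.0], i.e. the value, first, and second derivatives.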

E.2.16 pre.m

Same


E.2.17 result1.m

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% This M-file, named result1.m, will translate the data %

%solution from the nonlinear programming to show in the figures.%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

close all

if inform<2,

fprintf('Congratulations! Your solution was found.\n')

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 1. calculate ti except t0 and tf. %

nn1=num2str(in1); %

eval([’tf’ ’ = y(’nn1 ’);’]); %

for i=1:N-1, %

nn1=num2str(i); %

nn2=i*(tf-t0)/N; %
eval(['t' nn1 ' = nn2;']); %

end %

nn0=num2str(N); %

eval([’t’ nn0 ’ = tf;’]); %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%**********************************************

cl=3; % The number of nodes in each interval.

%**********************************************

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 2.Find T1,..., T_N-1. %

for i=1:N, %

nn1=num2str(i-1); %

nn2=num2str(i); %

nn3=eval(sym([’t’nn1])); %

nn4=eval(sym([’t’nn2])); %

eval([’T’nn1 ’=nn3:(nn4-nn3)/cl:nn4;’]); %

end %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
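Step 2 above builds, for each interval [t_{i-1}, t_i], a grid of cl+1 equally spaced plotting nodes (the T0, ..., T_{N-1} vectors). A compact Python sketch of the same grids (illustrative names):

```python
def interval_grids(t_breaks, cl):
    """For consecutive break times t_breaks, return cl+1 equally spaced
    nodes per interval, mirroring the T0..T_{N-1} grids in result1.m."""
    grids = []
    for a, b in zip(t_breaks[:-1], t_breaks[1:]):
        h = (b - a) / cl
        grids.append([a + k * h for k in range(cl + 1)])
    return grids
```

For example, `interval_grids([0, 1, 2], 2)` returns `[[0.0, 0.5, 1.0], [1.0, 1.5, 2.0]]`: each interval contributes three nodes, with shared endpoints.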

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 3. Find x101,..., xnp_N-1 and plot them. %

param %

for i=1:in1, %

nn1=num2str(i); %


eval([’y’nn1 ’ = y(’nn1 ’);’]); %

end %

for i=1:N, % intervals %

nn1=num2str(i); %

nnn=num2str(i-1); %

for j=1:n, % state %

nn2=num2str(j); %

for k1=1:p, %derivatives upto (p)th derivative %

nn3=num2str(k1-1); %

for k2= 1:cl+1, % node in the interval %

nn4=num2str(k2); %

for k3=1:p+2, % variables in each state %

nn5=num2str(k3); %

d=eval(sym([’x’nn2 nn3 ’(’nn1 ’)’])); %

t=eval(sym(['T'nnn '('nn4 ')'])); %
dd=eval(d); %

eval([’x’nn2 nn3 nnn ’(1,’nn4 ’)=dd;’]); %

end %

end %

figure(k1+(j-1)*p) %

hold on %

eval([’plot(T’nnn ’,x’nn2 nn3 nnn ’)’]) %

end %

end %

end %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 4. Find u11,..., um_N-1 and plot them. %

for i=1:N, % intervals %

nn1=num2str(i); %

nnn=num2str(i-1); %

for j=1:m, % state %

nn2=num2str(j); %

for k2= 1:cl+1, % node in the interval %

nn4=num2str(k2); %

d=eval(sym([’u’nn2 ’(’nn1 ’)’])); %

t=eval(sym([’T’nnn ’(’nn4 ’)’])); %

dd=eval(d); %

eval([’u’nn2 nnn ’(1,’nn4 ’)=dd;’]); %

end %

figure(n*p+j) %


hold on %

eval([’plot(T’nnn ’,u’nn2 nnn ’)’]) %

end %

end %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

elseif inform==6,

fprintf('A local minimum exists! Please check the reasons.\n')

fprintf('If it is not true, try to change the limits by\n')

fprintf('typing max(y) and comparing it with BB in main.m.\n')

fprintf('However, to see the plots now, type >>result2.')

else

fprintf('No solution! Please type "inform" to check the reason\n')

fprintf('and refer back to the NPSOL manual.\n')

fprintf('However, to see the plots now, type >>result2.')
end

E.2.18 result2.m

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% This M-file, named result2.m, will translate the data %

%solution from the nonlinear programming to show in the figures.%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

close all

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 1. calculate ti except t0 and tf. %

nn1=num2str(in1); %

eval([’tf’ ’ = y(’nn1 ’);’]); %

for i=1:N-1, %

nn1=num2str(i); %

nn2=i*(tf-t0)/N; %

eval([’t’ nn1 ’ = nn2;’]); %

end %

nn0=num2str(N); %

eval([’t’ nn0 ’ = tf;’]); %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%**********************************************

cl=3; % The number of nodes in each interval.

%**********************************************


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 2.Find T1,..., T_N-1. %

for i=1:N, %

nn1=num2str(i-1); %

nn2=num2str(i); %

nn3=eval(sym([’t’nn1])); %

nn4=eval(sym([’t’nn2])); %

eval([’T’nn1 ’=nn3:(nn4-nn3)/cl:nn4;’]); %

end %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 3. Find x101,..., xnp_N-1 and plot them. %

param %

for i=1:in1, %
nn1=num2str(i); %

eval([’y’nn1 ’ = y(’nn1 ’);’]); %

end %

for i=1:N, % intervals %

nn1=num2str(i); %

nnn=num2str(i-1); %

for j=1:n, % state %

nn2=num2str(j); %

for k1=1:p, %derivatives upto (p)th derivative %

nn3=num2str(k1-1); %
for k2= 1:cl+1, % node in the interval %

nn4=num2str(k2); %

for k3=1:p+2, % variables in each state %

nn5=num2str(k3); %

d=eval(sym([’x’nn2 nn3 ’(’nn1 ’)’])); %

t=eval(sym([’T’nnn ’(’nn4 ’)’])); %

dd=eval(d); %

eval([’x’nn2 nn3 nnn ’(1,’nn4 ’)=dd;’]); %

end %

end %

figure(k1+(j-1)*p) %

hold on %

eval([’plot(T’nnn ’,x’nn2 nn3 nnn ’)’]) %

end %

end %

end %


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 4. Find u11,..., um_N-1 and plot them. %

for i=1:N, % intervals %

nn1=num2str(i); %

nnn=num2str(i-1); %

for j=1:m, % state %

nn2=num2str(j); %

for k2= 1:cl+1, % node in the interval %

nn4=num2str(k2); %

d=eval(sym([’u’nn2 ’(’nn1 ’)’])); %

t=eval(sym([’T’nnn ’(’nn4 ’)’])); %

dd=eval(d); %

eval([’u’nn2 nnn ’(1,’nn4 ’)=dd;’]); %

end %
figure(n*p+j) %

hold on %

eval([’plot(T’nnn ’,u’nn2 nnn ’)’]) %

end %

end %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

E.2.19 user.m

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%This matlab file is called user.m %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%

% 1. Number of state variables...format>>> n=?; %

n=1; %

%

% 2. Number of control variables...format>>> m=?; %

m=1; %

%

% 3. Number of the highest derivative of state variable... %

%format>>> p=?; %
p=2; %

ps=[2]; %

%

% 4. Number of the intervals***...format>>> N=?; %

N=6; %


%

% 5. Initial time...format>>> t0=?; %

t0=0; %

%

% 6 Final time...format>>> tf=?. %

% where N=N in section4. %

% tf=1; %

t_l=0; %assigned lower bound of final time %

t_u=2; %assigned upper bound of final time %

%

% 7. This place is where the state equations have to be defined as%

%format below: %

% Format>>> fs(1,1)=sym(’?’);, ..., fs(n,1)=sym(’?’); %

fs(1,1)=sym(’x12-u1’); %

%
% 8. Also, at this point, users need to provide all constraints. %

% Format>>> cu(1,1)=sym(’?’);, ..., cu(m,1)=sym(’?’); where m is %

% the number of the control input. cu must be defined for all %

% control variables.******* %

% Format>>> cx(1,1)=sym(’?’);, ..., cx(r,1)=sym(’?’); where r is %

% the number of the constraints %

% Detail: %

% cu means constraints contain only control inputs. %

% cx means constraints contain only state or both control and sta-%

% te variables. %
% 9.1 Furthermore, for the lower and upper bounds of the inequality%

% constraints. If one has equality constraints, the lower %

% and upper bounds must be defined as the same values. %

% Format>>> cu_l(1,1)=?;, ..., cu_l(m,1)=?; %

% cu_u(1,1)=?;, ..., cu_u(m,1)=?; %

% and so on, where _l and _u mean lower/upper bound respectively.%

%*****************************************************************%

cu(1,1)=sym(’u1’); %

%cu(2,1)=sym(’u2’); %

%

cu_l(1,1)=-4; %

cu_u(1,1)= 4; %

%cu_l(2,1)=-4; %

%cu_u(2,1)= 4; %

%*****************************************************************%

%


r=0; % remember, this is the number of constraints on x and u. %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%cx(1,1)=sym(’x12’); %

%cx_l(1,1)=-4; %

%cx_u(1,1)=4; %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%

% 9. Boundary Conditions %

% Format>>> x0(i)=?; and xf(i)=?; %

% where x0(1) =x10dot, x0(2) =x11dot, ..., x0(p) =x1pdot %

% x0(p+1)=x20dot, x0(p+2)=x21dot, ..., x0(2p)=x2pdot %

% ... %

% x0((n-1)p+1)=xn0dot, x0((n-1)p+2)=xn1dot, ..., x0(np)=xnpdot%

% and repeat with the same logic for xf(1),...,xf(np). %

%
x0(1,1)=sym('x10'); %

x0(2,1)=sym(’x11’); %

%Bound of x0 %

x0_l(1,1)=0; %

x0_l(2,1)=0; %

x0_u(1,1)=0; %

x0_u(2,1)=0; %

%

xf(1,1)=sym(’x10’); %

xf(2,1)=sym('x11'); %
%Bound of xf %

xf_l(1,1)=1; %

xf_l(2,1)=0; %

xf_u(1,1)=1; %

xf_u(2,1)=0; %

%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%

% 10. Define the minmax values of all parameters. %

%

absmax=1e3; %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%***********************end of user.m file*************************
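The user.m above configures a double-integrator example: one state with x'' = u (fs = x12 - u1), |u1| <= 4, rest-to-rest motion from x = 0 to x = 1, and a free final time in [0, 2]. If the objective being minimized is the final time, the classical bang-bang solution gives a closed-form check on the solver output. This is a standard optimal-control result sketched in Python, not part of the toolbox:

```python
import math

def min_time_rest_to_rest(d, u_max):
    """Minimum time to move a double integrator x'' = u a distance d,
    starting and ending at rest, with |u| <= u_max: accelerate at
    +u_max for half the distance, then decelerate at -u_max."""
    return 2.0 * math.sqrt(d / u_max)
```

With d = 1 and u_max = 4 this gives a final time of 1, inside the [t_l, t_u] = [0, 2] window specified above.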


E.3 Indirect Higher-Order Method

E.3.1 allnec.m

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%This file is called allnec.m, which connects all the %
%following files together. It will be needed in funobj.m, funcon.m%

%and main.m in order to provide all the symbolic forms of the nece%

%ssary condition. %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%user input%%%%%%%%%%%%%%%

user

%*************************************

%%%%%%%%%%necessary conditions%%%%%%%%
necc

%*************************************

%%%%%%%%%parameterized variables%%%%%%

param

%*************************************

E.3.2 concj.m

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 5. Finding the Jacobian matrix cJac. %
% This is a correct version, but beware of large-size errors. %

in1=2*nv*n*N+N*m+nv*rr*N; %#of all variables including mu. %

dc=c; %

ds=size(dc); %

cn=ds(1); %

fid1=fopen(’r_c.m’,’w+’); %

for i=1:cn, %

nn1=num2str(i); %

nn2=dc(i); %

nn3=sym([’c’nn1]); %nn4=char(nn3); %

nn5=char(nn2); %

if i==1, %

fprintf(fid1,’[%s ;\n’,nn5); %

elseif i==cn, %

fprintf(fid1,’%s ]\n’,nn5); %


else %

fprintf(fid1,’%s ;\n’,nn5); %

end %

clear nn1 nn2 nn3 nn4 nn5 %

end %

st1=fclose(fid1); %

fid11=fopen(’r_cj.m’,’w+’); %

for i=1:cn, %

for j=1:in1, %

nn1=num2str(i); %

nn2=num2str(j); %

nn3=diff(dc(i),sym([’y’nn2])); %

nn4=sym([’c’nn1 ’_’nn2]); %

nn5=char(nn3); %

nn6=char(nn4); %
ij=j+(i-1)*in1; %

dj=i*in1; %

if ij==1, %

fprintf(fid11,’[%s ,\n’,nn5); %

elseif ij==cn*in1, %

fprintf(fid11,’%s ]\n’,nn5); %

elseif ij==dj, %

fprintf(fid11,’%s ;\n’,nn5); %

else %

fprintf(fid11,'%s ,\n',nn5); %
end %

clear nn1 nn2 nn3 nn4 nn5 nn6 %

end %

end %

st2=fclose(fid11); %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%***********************End of concj.m**************************

E.3.3 conoj.m

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 5. Finding the Jacobian matrix objgrd. %

in1=2*nv*n*N+N*m+nv*rr*N; %#of all variables including mu. %

dc1=LL1; %

dc2=LL2; %


fid21=fopen(’r_o1.m’,’w+’); %

for i=1:N, %

nn1=num2str(i); %

nn2=dc1(i); %

nn3=sym([’c’nn1]); %

nn4=char(nn3); %

nn5=char(nn2); %

if i==1, %

fprintf(fid21,’[%s ;\n’,nn5); %

elseif i==N, %

fprintf(fid21,’%s ]\n’,nn5); %

else %

fprintf(fid21,’%s ;\n’,nn5); %

end %

clear nn1 nn2 nn3 nn4 nn5 %

end %

st1=fclose(fid21); %

fid22=fopen(’r_o2.m’,’w+’); %

for i=1:N, %

nn1=num2str(i); %

nn2=dc2(i); %

nn3=sym([’c’nn1]); %

nn4=char(nn3); %

nn5=char(nn2); %

if i==1, %

fprintf(fid22,’[%s ;\n’,nn5); %

elseif i==N, %

fprintf(fid22,’%s ]\n’,nn5); %

else %

fprintf(fid22,’%s ;\n’,nn5); %

end %

clear nn1 nn2 nn3 nn4 nn5 %

end %

st2=fclose(fid22); %

fid23=fopen(’r_oj1.m’,’w+’); %

for i=1:N, %

for j=1:in1, %

nn1=num2str(i); %

nn2=num2str(j); %

nn3=diff(dc1(i),sym([’y’nn2])); %

nn4=sym([’c’nn1 ’_’nn2]); %


nn5=char(nn3); %

nn6=char(nn4); %

ij=j+(i-1)*in1; %

dj=i*in1; %

if ij==1, %

fprintf(fid23,’[%s ,\n’,nn5); %

elseif ij==N*in1, %

fprintf(fid23,’%s ]\n’,nn5); %

elseif ij==dj, %

fprintf(fid23,’%s ;\n’,nn5); %

else %

fprintf(fid23,’%s ,\n’,nn5); %

end %

clear nn1 nn2 nn3 nn4 nn5 nn6 %

end %

end %

st3=fclose(fid23); %

fid24=fopen(’r_oj2.m’,’w+’); %

for i=1:N, %

for j=1:in1, %

nn1=num2str(i); %

nn2=num2str(j); %

nn3=diff(dc2(i),sym([’y’nn2])); %

nn4=sym([’c’nn1 ’_’nn2]); %

nn5=char(nn3); %

nn6=char(nn4); %

ij=j+(i-1)*in1; %

dj=i*in1; %

if ij==1, %

fprintf(fid24,’[%s ,\n’,nn5); %

elseif ij==N*in1, %

fprintf(fid24,’%s ]\n’,nn5); %

elseif ij==dj, %

fprintf(fid24,’%s ;\n’,nn5); %

else %

fprintf(fid24,’%s ,\n’,nn5); %

end %

clear nn1 nn2 nn3 nn4 nn5 nn6 %

end %

end %

st4=fclose(fid24); %


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%***********************End of conoj.m**************************
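The bracket bookkeeping in conoj.m and concj.m flattens the (row i, column j) position to ij = j + (i-1)*in1 and compares it with 1, rows*cols, and i*in1 to decide whether to print "[", "]", ";" (end of a row) or "," between entries. The same rule as a small Python helper (`matlab_matrix` is a hypothetical name used only for this illustration):

```python
# Reproduce the separator rule used by the fprintf loops when emitting a
# MATLAB matrix literal entry by entry.
def matlab_matrix(entries):
    rows, cols = len(entries), len(entries[0])
    out = []
    for i in range(1, rows + 1):
        for j in range(1, cols + 1):
            ij = j + (i - 1) * cols          # flattened 1-based index
            s = str(entries[i - 1][j - 1])
            if ij == 1:
                out.append("[" + s + " ,")   # opening bracket, first entry
            elif ij == rows * cols:
                out.append(s + " ]")         # closing bracket, last entry
            elif ij == i * cols:
                out.append(s + " ;")         # row separator
            else:
                out.append(s + " ,")         # ordinary column separator
    return "\n".join(out)

print(matlab_matrix([[1, 2], [3, 4]]))   # [1 ,  /  2 ;  /  3 ,  /  4 ]
```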

E.3.4 execute.m

Same

E.3.5 funcon.m

function [c,cJac] = funcon(y,ncnln)

user %

nv1=(2*n*p*N+4*n*N+2*r*N+2*m*N)/((2*n+r+ru)*N); %

nv2=floor((2*n*p*N+4*n*N+2*r*N+2*m*N)/((2*n+r+ru)*N)); %

if nv1==nv2, %

nv=nv1; %

else %

nv=nv2+1; %

end %

pm=n*p-sum(ps); %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 1. calculate ti except t0 and tf. %

nn0=num2str(N); %

eval([’t’ nn0 ’ = tf;’]); %

for i=1:N-1, %

nn1=num2str(i); %

nn2=i*(tf-t0)/N; %

eval([’t’ nn1 ’ = nn2;’]); %

end %

clear nn1 nn2 %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

in1=2*nv*n*N+N*m+nv*rr*N; %#of all variables including mu. %

cn=2*n*p*N+4*n*N+2*rr*N-pm*(N+1); %# of nonlin constraints %

for i=1:in1, %

nn1=num2str(i); %

eval([’y’nn1 ’ = y(’nn1 ’);’]); %

end %

fid1=fopen(’r_c.m’,’r’); %

s1 = fscanf(fid1,’%s’); %

c=eval(s1); %

st1=fclose(fid1); %

fid2=fopen(’r_cj.m’,’r’); %


s2 = fscanf(fid2,’%s’); %

cJac=eval(s2); %

st2=fclose(fid2); %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

return; %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%End of funcon.m%%%%%%%%%%%%%%%%%%%%%%
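The nv1/nv2 pair computed at the top of funcon.m (and again in funobj.m and main.m) is a ceiling of the variable-count ratio: take the floor of the quotient and add one whenever the division is not exact. The same idiom in Python (`ceil_ratio` is a hypothetical name for this sketch):

```python
# The nv1 == nv2 comparison from funcon.m as an explicit ceiling function.
import math

def ceil_ratio(num, den):
    q = num / den
    qf = math.floor(q)              # MATLAB: nv2 = floor(nv1)
    return qf if q == qf else qf + 1  # divides evenly -> nv1, else nv2 + 1

print(ceil_ratio(10, 5))   # 2
print(ceil_ratio(11, 5))   # 3, same as the integer-only form -(-11 // 5)
```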

E.3.6 funcont.m

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

allnec %This M-file will prepare all symbolic computations. %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 1. calculate ti except t0 and tf. %

nn0=num2str(N); %

eval([’t’ nn0 ’ = tf;’]); %

for i=1:N-1, %

nn1=num2str(i); %

nn2=i*(tf-t0)/N; %

eval([’t’ nn1 ’ = nn2;’]); %

end %

t=sym(’t’); %

tf=sym(’tf’); %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 2.Preparing F(n_states), G(n_costates), %

% CC(np_continuities_lagrange), %

% XX(np_continuities_state), CX(r_constraints), B0(np_initial),%

% HU(m_optimalities, and Bf(np_final). %

for i=1:n, %

nnn1=num2str(i); %

if fs(i)==0, %

d1=sym([zeros(N,1)]); %

else %

d1=sym([zeros(N,1)]); %

for ii=1:N, %

df(ii,1)=fs(i); %

end %

for j=1:N, %

nn3=num2str(j); %

for k1=1:n, %

for k2=1:p+1, %


nn1=num2str(k1); %

k22=k2-1; %

nn2=num2str(k22); %

d1(j)=subs(df(j),sym([’x’nn1 nn2]),sym([’x’nn1 nn2 ’(’nn3 ’)’])); %

d1(j)=subs(d1(j),sym([’v’nn1 nn2]),sym([’v’nn1 nn2 ’(’nn3 ’)’])); %

df(j)=d1(j); %

end %

end %

for k1=1:r, %

for k2=1:p+1, %

nn1=num2str(k1); %

k22=k2-1; %

nn2=num2str(k22); %

nn3=num2str(j); %

d1(j)=subs(d1(j),sym([’mu’nn1 nn2]),sym([’mu’nn1 nn2 ’(’nn3 ’)’])); %

end %

end %

for k1=1:m, %

nn1=num2str(k1); %

nn3=num2str(j); %

d1(j)=subs(d1(j),sym([’u’nn1]),sym([’u’nn1 ’(’ nn3’)’])); %

end %

end %

df=d1; %

end %

if gs(i)==0, %

d2=sym([zeros(N,1)]); %

else %

d2=sym([zeros(N,1)]); %

for ii=1:N, %

dg(ii,1)=gs(i); %

end %

for j=1:N, %

nn3=num2str(j); %

for k1=1:n, %

for k2=1:2*p+1, %

nn1=num2str(k1); %

k22=k2-1; %

nn2=num2str(k22); %

d2(j)=subs(dg(j),sym([’x’nn1 nn2]),sym([’x’nn1 nn2 ’(’nn3 ’)’])); %

d2(j)=subs(d2(j),sym([’v’nn1 nn2]),sym([’v’nn1 nn2 ’(’nn3 ’)’])); %


dg(j)=d2(j); %

end %

end %

for k1=1:r, %

for k2=1:2*p+1, %

nn1=num2str(k1); %

k22=k2-1; %

nn2=num2str(k22); %

nn3=num2str(j); %

d2(j)=subs(d2(j),sym([’mu’nn1 nn2]),sym([’mu’nn1 nn2 ’(’nn3 ’)’])); %

end %

end %

for k1=1:m, %

nn1=num2str(k1); %

nn3=num2str(j); %

d2(j)=subs(d2(j),sym([’u’nn1]),sym([’u’nn1 ’(’nn3 ’)’])); %

end %

end %

dg=d2; %

end %

for ii=1:N, %

d1(ii)=eval(d1(ii)); %

d2(ii)=eval(d2(ii)); %

end %

F((i-1)*N+1:i*N,1)=d1; %

G((i-1)*N+1:i*N,1)=d2; %

end %

%******************************************************************

clear d1 %

for i=1:m, %

nnn1=num2str(i); %

if Hu(i)==0, %

d1=sym([zeros(N,1)]); %

else %

d1=sym([zeros(N,1)]); %

for ii=1:N, %

dHu(ii,1)=Hu(i); %

end %

for j=1:N, %

nn3=num2str(j); %

for k1=1:n, %


for k2=1:p+1, %

nn1=num2str(k1); %

k22=k2-1; %

nn2=num2str(k22); %

d1(j)=subs(dHu(j),sym([’x’nn1 nn2]),sym([’x’nn1 nn2 ’(’nn3 ’)’]));%

d1(j)=subs(d1(j),sym([’v’nn1 nn2]),sym([’v’nn1 nn2 ’(’nn3 ’)’])); %

dHu(j)=d1(j); %

end %

end %

for k1=1:rr, %

for k2=1:p+1, %

nn1=num2str(k1); %

k22=k2-1; %

nn2=num2str(k22); %

nn3=num2str(j); %

d1(j)=subs(d1(j),sym([’mu’nn1 nn2]),sym([’mu’nn1 nn2 ’(’nn3 ’)’])); %

end %

end %

for k1=1:m, %

nn1=num2str(k1); %

nn3=num2str(j); %

d1(j)=subs(d1(j),sym([’u’nn1]),sym([’u’nn1 ’(’nn3 ’)’])); %

end %

end %

dHu=d1; %

end %

for ii=1:N, %

d1(ii)=eval(d1(ii)); %

end %

HU((i-1)*N+1:i*N,1)=d1; %

end %

%******************************************************************

for i=1:p, %

for j=1:n, %

ij=p-i; %

nnn1=num2str(ij); %

nnn2=num2str(j); %

k=j+(i-1)*n; %

if cc(k)==0, %

d1=sym([zeros(N,1)]); %

else %


d1=sym([zeros(N,1)]); %

for ii=1:N, %

dcc(ii,1)=cc(k); %

end %

for j=1:N, %

nn3=num2str(j); %

for k1=1:n, %

for k2=1:2*p+1, %

nn1=num2str(k1); %

k22=k2-1; %

nn2=num2str(k22); %

d1(j)=subs(dcc(j),sym([’x’nn1 nn2]),sym([’x’nn1 nn2 ’(’nn3 ’)’]));%

d1(j)=subs(d1(j),sym([’v’nn1 nn2]),sym([’v’nn1 nn2 ’(’nn3 ’)’])); %

dcc(j)=d1(j); %

end %

end %

for k1=1:r, %

for k2=1:2*p+1, %

nn1=num2str(k1); %

k22=k2-1; %

nn2=num2str(k22); %

nn3=num2str(j); %

d1(j)=subs(d1(j),sym([’mu’nn1 nn2]),sym([’mu’nn1 nn2 ’(’nn3 ’)’])); %

end %

end %

for k1=1:m, %

nn1=num2str(k1); %

nn3=num2str(j); %

d1(j)=subs(d1(j),sym([’u’nn1]),sym([’u’nn1 ’(’nn3 ’)’])); %

end %

end %

dcc=d1; %

end %

for ii=1:N, %

d1(ii)=eval(d1(ii)); %

end %

CC((k-1)*N+1:k*N,1)=d1; %sign* %

end %

end %

%******************************************************************

ddp=0; %


for ii=1:n, %

dp=ps(ii); %

for i=1:dp, %

XX(ddp*N+(i-1)*N+1:ddp*N+(i)*N,1)=Xdp((ii-1)*N+1:ii*N,i); %

end %

ddp=dp+ddp; %

end %

%******************************************************************

if r==0, %

CX=[]; %

else %

for i=1:r, %

nn1=num2str(i); %

if cx(i)==0, %

d1=sym([zeros(N,1)]); %

else %

d1=sym([zeros(N,1)]); %

for ii=1:N, %

dc(ii,1)=cx(i); %

end %

for j=1:N, %

nn3=num2str(j); %

for k1=1:n, %

for k2=1:2*p+1, %

nn1=num2str(k1); %

k22=k2-1; %

nn2=num2str(k22); %

d1(j)=subs(dc(j),sym([’x’nn1 nn2]),sym([’x’nn1 nn2 ’(’nn3 ’)’])); %

d1(j)=subs(d1(j),sym([’v’nn1 nn2]),sym([’v’nn1 nn2 ’(’nn3 ’)’])); %

dc(j)=d1(j); %

end %

end %

for k1=1:r, %

for k2=1:2*p+1, %

nn1=num2str(k1); %

k22=k2-1; %

nn2=num2str(k22); %

nn3=num2str(j); %

d1(j)=subs(d1(j),sym([’mu’nn1 nn2]),sym([’mu’nn1 nn2 ’(’nn3 ’)’])); %

end %

end %


for k1=1:m, %

nn1=num2str(k1); %

nn3=num2str(j); %

d1(j)=subs(d1(j),sym([’u’nn1]),sym([’u’nn1 ’(’nn3 ’)’])); %

end %

end %

dc=d1; %

end %

for ii=1:N, %

d1(ii)=eval(d1(ii)); %

end %

CX((i-1)*N+1:i*N,1)=d1; %

end %

end %

%*****************************************************************%

clear d1 %

clear d2 %

ddp=0; %

for ii=1:n, %

dp=ps(ii); %

for i=1:dp, %

k=i+ddp; %

dc0(k,1)=x0(k); %

dcf(k,1)=xf(k); %

for k1=1:n, %

for k2=1:p+1, %

nn1=num2str(k1); %

k22=k2-1; %

nn2=num2str(k22); %

nn3=num2str(1); %

nn4=num2str(N); %

d1(k,1)=subs(dc0(k,1),sym([’x’nn1 nn2]),sym([’x’nn1 nn2... %

’(’nn3 ’)’]),0); %

d2(k,1)=subs(dcf(k,1),sym([’x’nn1 nn2]),sym([’x’nn1 nn2... %

’(’nn4 ’)’]),0); %

dc0(k,1)=d1(k,1); %

dcf(k,1)=d2(k,1); %

end %

end %

end %

ddp=dp+ddp; %


end %

for j=1:n*p, %

dd1(j,1)=eval(d1(j)); %

dd2(j,1)=eval(d2(j)); %

end %

B0=dd1; %

Bf=dd2; %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 3. Substitute with all applicable ti. %

%************************F to be F_C******************************%

F_C=sym([zeros(2*n*N,1)]); %

for i=1:n, %

for j=1:N, %

k=j+(i-1)*N; %

for ii=1:2, %

ij=(ii-1)+(j-1); %

nn3=num2str(ij); %

d=sym([’t’nn3]); %

F_C((i-1)*2*N+ii+(j-1)*2)=subs(F(k),sym(’t’),d); %

end %

end %

end %

%***Note for future works*** %

% d=sym([’t’num2str(1)]) %

% d=eval(sym([’t’num2str(1)])) %

% subs(F_C(1),sym(’t0’),d) %

%*************************** %

%**********************G to be G_C********************************%

G_C=sym([zeros(2*n*N,1)]); %

for i=1:n, %

for j=1:N, %

k=j+(i-1)*N; %

for ii=1:2, %

ij=(ii-1)+(j-1); %

nn3=num2str(ij); %

%d=eval(sym([’t’nn3])); %

d=sym([’t’nn3]); %

G_C((i-1)*2*N+ii+(j-1)*2)=subs(G(k),sym(’t’),d); %

end %


end %

end %

%**********************CX to be CX_C******************************%

CX_C=sym([zeros(2*r*N,1)]); %

for i=1:r, %

for j=1:N, %

k=j+(i-1)*N; %

for ii=1:2, %

ij=(ii-1)+(j-1); %

nn3=num2str(ij); %

d=sym([’t’nn3]); %

CX_C((i-1)*2*N+ii+(j-1)*2)=subs(CX(k),sym(’t’),d); %

end %

end %

end %

%***********************CC to be CC_C*****************************%

for i=1:n*p, %

for j=1:N, %

k=j+(i-1)*(N); %

nn1=num2str(j); %

nn2=num2str(j-1); %

d=sym([’t’nn1]); %

CCC(k,1)=subs(CC(k),sym(’t’),d); %

dd=sym([’t’nn2]); %

CCCC(k,1)=subs(CC(k),sym(’t’),dd); %

end %

end %

for i=1:n*p, %

for j=1:N-1, %

k=j+(i-1)*(N-1); %

CC_C(k,1)=CCC(k+(i-1))-CCCC(k+(i)); %

end %

end %

%***********************XX to be XX_C*****************************%

pm=n*p-sum(ps); %

pp=n*p-pm; %

for i=1:pp, %

for j=1:N, %

k=j+(i-1)*(N); %

nn1=num2str(j); %

nn2=num2str(j-1); %


d=sym([’t’nn1]); %

XXX(k,1)=subs(XX(k),sym(’t’),d); %

dd=sym([’t’nn2]); %

XXXX(k,1)=subs(XX(k),sym(’t’),dd); %

end %

end %

for i=1:pp, %

for j=1:N-1, %

k=j+(i-1)*(N-1); %

XX_C(k,1)=XXX(k+(i-1))-XXXX(k+i); %

end %

end %

%***********************B0 to be B0_C*****************************%

pm=n*p-sum(ps); %

pp=n*p-pm; %

for i=1:pp, %

nn1=num2str(0); %

d=sym([’t’nn1]); %

B0_C(i,1)=subs(B0(i,1),sym(’t’),d); %

end %

%***********************Bf to be Bf_C*****************************%

pm=n*p-sum(ps); %

pp=n*p-pm; %

for i=1:pp, %

nn1=num2str(N); %

d=sym([’t’nn1]); %

Bf_C(i,1)=subs(Bf(i,1),sym(’t’),d); %

end %

%

%**********************HU to be HU_C******************************%

HU_C=sym([zeros(2*m*N,1)]); %

for i=1:m, %

for j=1:N, %

k=j+(i-1)*N; %

for ii=1:2, %

ij=(ii-1)+(j-1); %

nn3=num2str(ij); %

d=sym([’t’nn3]); %

HU_C((i-1)*2*N+ii+(j-1)*2)=subs(HU(k),sym(’t’),d); %

end %

end %


end %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 4. Form the constraint matrix c. %

ca=[B0_C; %

Bf_C; %

XX_C; %

CC_C; %

F_C; %

G_C; %

CX_C; %

HU_C]; %

si1=size(ca); %

si2=si1(1); %

for i=1:si2, %

c(i,1)=ca(i); %

end %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%***This is an option to provide the gradient of c**************

% This is a correct version, but beware of large size error. %

concj %

%***************************************************************

%%%%%%%%%%%%%%%%%%%%%%%%%%%End of funcont.m%%%%%%%%%%%%%%%%%%%%%%%%

E.3.7 funobj.m

function [objf,objgrd] = funobj(y)

user %

nv1=(2*n*p*N+4*n*N+2*r*N+2*m*N)/((2*n+r+ru)*N); %

nv2=floor((2*n*p*N+4*n*N+2*r*N+2*m*N)/((2*n+r+ru)*N)); %

if nv1==nv2, %

nv=nv1; %

else %

nv=nv2+1; %

end %

pm=n*p-sum(ps); %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 1. calculate ti except t0 and tf. %

nn0=num2str(N); %


eval([’t’ nn0 ’ = tf;’]); %

for i=1:N-1, %

nn1=num2str(i); %

nn2=i*(tf-t0)/N; %

eval([’t’ nn1 ’ = nn2;’]); %

end %

nn1=num2str(N-1); %

eval([’tf1=t’nn1 ’;’]); %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

cn=2*n*p*N+4*n*N+2*rr*N-pm*(N+1); %# of nonlin constraints %

in1=2*nv*n*N+N*m+nv*rr*N; %#of all variables including mu. %

for i=1:in1, %

nn1=num2str(i); %

eval([’y’nn1 ’ = y(’nn1 ’);’]); %

end %

fid1=fopen(’r_o1.m’,’r’); %

s1 = fscanf(fid1,’%s’); %

objf1=eval(s1); %

st1=fclose(fid1); %

fid2=fopen(’r_o2.m’,’r’); %

s2 = fscanf(fid2,’%s’); %

objf2=eval(s2); %

st2=fclose(fid2); %

h=(tf1-t0)./(N-1); %

n2=(N-1)/2; %

a1=0.0; %

for ii= 1:n2, %

a1=a1+4*objf1(2*ii)+2*objf1(2*ii+1); %

end %

objj1=h*(objf1(1)+a1-objf1(N))/3.0; %

h=(tf-t1)./(N-1); %

a1=0.0; %

for ii= 1:n2, %

a1=a1+4*objf2(2*ii)+2*objf2(2*ii+1); %

end %

objj2=h*(objf2(1)+a1-objf2(N))/3.0; %

objf=objj1+objj2 %

%*************It is better to forget about the******* %

%*************gradient since we may add************** %

%*************error cumulation to the problems.****** %

%jac %


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

return; %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%End of funobj.m%%%%%%%%%%%%%%%%%%%%%%
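The objective loops in funobj.m are composite Simpson quadrature on an odd number of samples N: the loop weights interior samples 4, 2, 4, 2, ..., which leaves the final sample with weight 2 instead of 1, and the trailing -objf1(N) term corrects it back to the standard rule. A sketch of the same quadrature in Python (`simpson_like` is a hypothetical name; the MATLAB code multiplies by the step h afterwards, so h is factored out here):

```python
# Composite Simpson weights exactly as funobj.m accumulates them.
def simpson_like(f, N):
    """f: list of N samples on a uniform grid, N odd; step h factored out."""
    a1 = 0.0
    for ii in range(1, (N - 1) // 2 + 1):        # MATLAB: for ii = 1:n2
        a1 += 4 * f[2 * ii - 1] + 2 * f[2 * ii]  # 1-based f(2*ii), f(2*ii+1)
    return (f[0] + a1 - f[N - 1]) / 3.0          # (objf(1) + a1 - objf(N))/3

# Integrate x^2 over [0, 4] with N = 5 samples and h = 1: exact answer is 64/3.
samples = [x * x for x in range(5)]
print(simpson_like(samples, 5))   # 21.333... = 64/3
```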

E.3.8 funobjt.m

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

allnec %This M-file will prepare all symbolic computations. %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 1. calculate ti except t0 and tf. %

nn0=num2str(N); %

eval([’t’ nn0 ’ = tf;’]); %

for i=1:N-1, %

nn1=num2str(i); %

nn2=i*(tf-t0)/N; %

eval([’t’ nn1 ’ = nn2;’]); %

end %

t=sym(’t’); %

tf=sym(’tf’); %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 2.Preparing He. The Hamiltonian Equation. %

H=L; %

d1=sym([zeros(N,1)]); %

for ii=1:N, %

dhh(ii,1)=H; %

end %

for j=1:N, %

nn3=num2str(j); %

for k1=1:n, %

for k2=1:p+1, %

nn1=num2str(k1); %

k22=k2-1; %

nn2=num2str(k22); %

d1(j)=subs(dhh(j),sym([’x’nn1 nn2]),sym([’x’nn1 nn2 ’(’nn3 ’)’]));%

dhh(j)=d1(j); %

end %

end %

for k1=1:m, %

nn1=num2str(k1); %

nn3=num2str(j); %


d1(j)=subs(d1(j),sym([’u’nn1]),sym([’u’nn1 ’(’nn3 ’)’])); %

end %

end %

dhh=d1; %

for ii=1:N, %

d1(ii)=eval(d1(ii)); %

end %

He=d1; %

%******************************************************************

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 3. Substitute with all applicable ti. %

%************************He to be He_O****************************%

He_O=sym([zeros(2*n*N,1)]); %

for j=1:N, %

for ii=1:2, %

ij=(ii-1)+(j-1); %

nn3=num2str(ij); %

d=sym([’t’nn3]); %

He_O(ii+(j-1)*2)=subs(He(j),sym(’t’),d); %

end %

end %

%***Note for future works*** %

% d=sym([’t’num2str(1)]) %

% d=eval(sym([’t’num2str(1)])) %

% subs(He_O(1),sym(’t0’),d) %

%*************************** %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 4. Form the objective function objf.%

obja=He_O; %

for i=1:N, %

ii=i+(i-1); %

ij=i+(i); %

LL1(i,1)=obja(ii); %

LL2(i,1)=obja(ij); %

end %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%*** This is an option for obtaining jacobian (objgrd)***


conoj %

%********************************************************

%%%%%%%%%%%%%%%%%%%%%%%%%%%%End of funobjt.m%%%%%%%%%%%%%%%%%%%%%%%
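Step 4 of funobjt.m splits the 2N-vector He_O, which interleaves the Hamiltonian sampled at the left and right end of each of the N intervals, into LL1 (odd entries, left ends) and LL2 (even entries, right ends) via ii = 2i-1 and ij = 2i. The same de-interleaving in Python slicing (the sample values are hypothetical):

```python
# De-interleave an alternating left/right sample vector, as in funobjt.m step 4.
obja = [10, 11, 20, 21, 30, 31]     # hypothetical He_O with N = 3 intervals
LL1 = obja[0::2]                    # MATLAB: LL1(i) = obja(2*i - 1)
LL2 = obja[1::2]                    # MATLAB: LL2(i) = obja(2*i)
print(LL1, LL2)                     # [10, 20, 30] [11, 21, 31]
```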

E.3.9 guess.m

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% This M-file named guess is the most important file to provide an%

%initial guesses for the code... %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 1. calculate ti except t0 and tf. %

nn0=num2str(N); %

eval([’t’ nn0 ’ = tf;’]); %

for i=1:N-1, %

nn1=num2str(i); %

nn2=i*(tf-t0)/N; %

eval([’t’ nn1 ’ = nn2;’]); %

end %

Nn1=(n*p*(N+1)); %

Nn2=(n*nv*N); %

L=sym([zeros(Nn1,Nn2)]); %

for i=1:N+1, %

nn1=num2str(i-1); %

for j=1:n, %

for l=1:nv, %

nn2=num2str(l-1); %

if i==N+1, %

L(j+(i-1)*n*p,(j-1)*nv+l+(i-2)*nv*n)=sym([’t’nn1 ’^’nn2]); %

d=L(j+(i-1)*n*p,(j-1)*nv+l+(i-2)*nv*n); %

for k=1:p-1, %

L(j+(i-1)*n*p+k*n,(j-1)*nv+l+(i-2)*nv*n)=diff(d); %

d=L(j+(i-1)*n*p+k*n,(j-1)*nv+l+(i-2)*nv*n); %

end %

else %

L(j+(i-1)*n*p,(j-1)*nv+l+(i-1)*nv*n)=sym([’t’nn1 ’^’nn2]); %

d=L(j+(i-1)*n*p,(j-1)*nv+l+(i-1)*nv*n); %

for k=1:p-1, %

L(j+(i-1)*n*p+k*n,(j-1)*nv+l+(i-1)*nv*n)=diff(d); %

d=L(j+(i-1)*n*p+k*n,(j-1)*nv+l+(i-1)*nv*n); %


end %

end %

end %

end %

end %

for i=1:N-1, %

nn1=num2str(i); %

for j=1:n, %

for l=1:nv, %

nn2=num2str(l-1); %

L(j+(i)*n*p,(j-1)*nv+l+(i-1)*nv*n)=sym([’-t’nn1 ’^’nn2]); %

d=L(j+(i)*n*p,(j-1)*nv+l+(i-1)*nv*n); %

for k=1:p-1, %

L(j+(i)*n*p+k*n,(j-1)*nv+l+(i-1)*nv*n)=diff(d); %

d=L(j+(i)*n*p+k*n,(j-1)*nv+l+(i-1)*nv*n); %

end %

end %

end %

end %

%************************** %

for i=1:Nn1, %

for j=1:Nn2, %

L1(i,j)=eval(L(i,j)); %

end %

end %

%

Linv=pinv(L1); %

Nnx=n*p*(N-1); %

xxx=zeros(Nnx,1); %

%

R=[x0_l; %

xxx; %

xf_l]; %

%

yc=Linv*R; %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
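guess.m assembles a linear system L*yc = R that relates the polynomial coefficients of each state on each interval to the boundary values (with zeros in the interior continuity rows), then takes yc = pinv(L1)*R as the least-squares initial guess. A one-interval sketch of the same idea with NumPy, using hypothetical boundary data x(0) = 0, x'(0) = 0, x(1) = 1, x'(1) = 0 for a cubic x(t) = c0 + c1*t + c2*t^2 + c3*t^3:

```python
# Least-squares polynomial initial guess via pseudoinverse, as in guess.m.
import numpy as np

t0, tf = 0.0, 1.0
L = np.array([
    [1.0, t0, t0**2,     t0**3],     # row for x(t0)
    [0.0, 1.0, 2*t0, 3*t0**2],       # row for x'(t0)
    [1.0, tf, tf**2,     tf**3],     # row for x(tf)
    [0.0, 1.0, 2*tf, 3*tf**2],       # row for x'(tf)
])
R = np.array([0.0, 0.0, 1.0, 0.0])   # boundary values [x0_l; ...; xf_l]
yc = np.linalg.pinv(L) @ R           # MATLAB: yc = pinv(L1)*R
print(yc)                            # approx. [0, 0, 3, -2]: x(t) = 3t^2 - 2t^3
```

Using pinv rather than a direct solve is what lets the same code handle the rectangular, possibly rank-deficient systems that arise when nv exceeds the number of imposed conditions.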

E.3.10 main.m

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% This M-file, called main.m, is the main program that the user %


%needs to run to obtain their optimal solutions. The optimal %

%solutions are obtained using the nonlinear programming package %

%called NPSOL. %

%INSTRUCTIONS ON HOW TO USE THIS CODE: %

% a) The user needs to modify the M-file named user.m, closely %

% following the instructions embedded inside it. %

% b) The second step is to prepare all symbolic code for the %

% constraints and objective functions by typing the command %

% "pre" on the Matlab command window and disregarding all %

% warnings that appear on the screen. %

% c) The last step is to run this code by typing the command %

% "main" on the Matlab command window. %

% %

%THIS CODE IS DEVELOPED BY: %

% Captain Tawiwat Veeraklaew %

% Department of Mechanical Engineering %

% Chulachomklao Royal Military Academy %

% Nakhon-Nayok, 26001 THAILAND %

% E-mail: [email protected] %

% [email protected] %

% [email protected] %

% or contact Dr. Sunil K. Agrawal at University of Delaware %

% E-mail: [email protected] %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

clear

close all

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 1. Obtaining all inputs from user.m, initial guesses, and %

% # of variables. %

user %

nv1=(2*n*p*N+4*n*N+2*r*N+2*m*N)/((2*n+r+ru)*N); %

nv2=floor((2*n*p*N+4*n*N+2*r*N+2*m*N)/((2*n+r+ru)*N)); %

if nv1==nv2, %

nv=nv1; %

else %

nv=nv2+1; %

end %

if m==0, %

cu=[]; %

cu_l=[]; %

cu_u=[]; %


else %

cu=cu; %

cu_l=cu_l; %

cu_u=cu_u; %

end %

if r==0, %

cx=[]; %

cx_l=[]; %

cx_u=[]; %

else %

cx=cx; %

cx_l=cx_l; %

cx_u=cx_u; %

end %

pm=n*p-sum(ps); %

in1=2*nv*n*N+N*m+nv*rr*N; %#of all variables including mu. %

in2=2*nv*n*N+N*m; %# of all variables without mu. %

in3=2*nv*n*N; %# of all variables without mu and u. %

in4=2*n*p*N+4*n*N+2*rr*N-pm*(N+1); %# of nonlin constraints %

in41=2*n*p*N+4*n*N+2*r*N-pm*(N+1); %# of nonlin c without cu. %

in5=2*n*p*N+4*n*N-pm*(N+1); %# of nonlin c without CX and cu. %

%

y=0.5*max(eye(in1))’; %

%

A =[]; %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 2. Form upper and lower bound. %

BB=absmax; %This is the abs value limit. %

l_box = -BB*(max(eye(in1))’); %

%

if m>0, %

for i=1:m, %

for j=1:N, %

ij=j+(i-1)*N; %

cul(ij,1)=cu_l(i); %

end %

end %

l_box(in3+1:in2,1) = cul; %

end %


%

l_A = []; %

%

l_nonlin(1:(2*n*p-2*pm),1) = [x0_l;xf_l]; %

l_nonlin(in4,1) = 0; %

if r>0, %

for i=1:r, %

for j=1:2*N, %

ij=j+(i-1)*2*N; %

cxl(ij,1)=cx_l(i); %

end %

end %

l_nonlin(in5+1:in41,1) = cxl;%double check! %

end %

%************************************************%

u_box = BB*(max(eye(in1))’); %

%

if m>0, %

for i=1:m, %

for j=1:N, %

ij=j+(i-1)*N; %

cuu(ij,1)=cu_u(i); %

end %

end %

u_box(in3+1:in2,1) = cuu; %

end %

%

u_A = []; %

%

u_nonlin(1:(2*n*p-2*pm),1) = [x0_u;xf_u]; %

u_nonlin(in4,1) = 0; %

if r>0, %

for i=1:r, %

for j=1:2*N, %

ij=j+(i-1)*2*N; %

cxu(ij,1)=cx_u(i); %

end %

end %

u_nonlin(in5+1:in41,1) = cxu;%double check! %

end %

l=[l_box; l_A;l_nonlin]; %


u=[u_box; u_A;u_nonlin]; %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

tic

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 3. Call NPSOL. %

funobj = ’funobj’; %

funcon = ’funcon’; %

[y,f,g,c,cJac,inform,lambda,iter,istate]= npsol(A,l,u,y,funobj,...%

funcon,500,3); %

result=y; %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

toc

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 4. Plot results. %

result1 %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%************************End of main.m*****************************
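main.m hands NPSOL a single stacked pair of bound vectors l and u covering, in order, the box bounds on the variables (±absmax), the linear constraints (empty here, since A = []), and the nonlinear constraint bounds. A small Python sketch of that stacking convention with hypothetical sizes (3 variables, no linear rows, 2 nonlinear constraints):

```python
# Stack bounds in NPSOL's order: box, linear, nonlinear, as main.m does with
# l = [l_box; l_A; l_nonlin] and u = [u_box; u_A; u_nonlin].
BB = 0.5                                   # absmax: |y_i| <= BB for every variable
l_box = [-BB] * 3
u_box = [+BB] * 3
l_A, u_A = [], []                          # A = [] in main.m: no linear constraints
l_nonlin = [0.0, 1.0]                      # e.g. an equality (0 = c1 = 0) and c2 >= 1
u_nonlin = [0.0, 2.0]

l = l_box + l_A + l_nonlin
u = u_box + u_A + u_nonlin
print(l, u)
```

Equality constraints are expressed by giving the same value in l and u, which is how main.m pins the continuity and dynamics residuals to zero.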

E.3.11 main-guess.m

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% This M-file, called main_guess.m, is the main program that the %

%user needs to run to obtain their optimal solutions. The optimal %

%solutions are obtained using the nonlinear programming package %

%called NPSOL. %

%INSTRUCTIONS ON HOW TO USE THIS CODE: %

% a) The user needs to modify the M-file named user.m, closely %

% following the instructions embedded inside it. %

% b) The second step is to prepare all symbolic code for the %

% constraints and objective functions by typing the command %

% "pre" on the Matlab command window and disregarding all %

% warnings that appear on the screen. %

% c) The last step is to run this code by typing the command %

% "main" on the Matlab command window. %

% %

%THIS CODE IS DEVELOPED BY: %

% Captain Tawiwat Veeraklaew %

% Department of Mechanical Engineering %

% Chulachomklao Royal Military Academy %

% Nakhon-Nayok, 26001 THAILAND %

% E-mail: [email protected] %


% [email protected] %

% [email protected] %

% or contact Dr. Sunil K. Agrawal at University of Delaware %

% E-mail: [email protected] %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

clear all

close all

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 1. Obtaining all inputs from user.m, initial guesses, and %

% # of variables. %

user %

nv1=(2*n*p*N+4*n*N+2*r*N)/((2*n+r)*N); %

nv2=floor((2*n*p*N+4*n*N+2*r*N)/((2*n+r)*N)); %

if nv1==nv2, %

nv=nv1; %

else %

nv=nv2+1; %

end %

if m==0, %

cu=[]; %

cu_l=[]; %

cu_u=[]; %

else %

cu=cu; %

cu_l=cu_l; %

cu_u=cu_u; %

end %

if r==0, %

cx=[]; %

cx_l=[]; %

cx_u=[]; %

else %

cx=cx; %

cx_l=cx_l; %

cx_u=cx_u; %

end %

pm=n*p-sum(ps); %

in1=2*nv*n*N+N*m+nv*rr*N; %#of all variables including mu. %

in2=2*nv*n*N+N*m; %# of all variables without mu. %

in3=2*nv*n*N; %# of all variables without mu and u. %


in4=2*n*p*N+4*n*N+2*rr*N-pm*(N+1); %# of nonlin constraints %

in41=2*n*p*N+4*n*N+2*r*N-pm*(N+1); %# of nonlin c without cu. %

in5=2*n*p*N+4*n*N-pm*(N+1); %# of nonlin c without CX and cu. %

guess %

y=yc; %

d1=size(yc); %

d2=d1(1); %

in5=in1-d2; %

y(d2+1:in1,1)=1.5*max(eye(in5))’; %

A =[]; %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 2. Form upper and lower bound. %

BB=absmax; %This is the abs value limit. %

l_box = -BB*(max(eye(in1))’); %

%

if m>0, %

for i=1:m, %

for j=1:N, %

ij=j+(i-1)*N; %

cul(ij,1)=cu_l(i); %

end %

end %

l_box(in3+1:in2,1) = cul; %

end %

%

l_A = []; %

%

l_nonlin(1:(2*n*p-2*pm),1) = [x0_l;xf_l]; %

l_nonlin(in4,1) = 0; %

if r>0, %

for i=1:r, %

for j=1:2*N, %

ij=j+(i-1)*2*N; %

cxl(ij,1)=cx_l(i); %

end %

end %

l_nonlin(in5+1:in41,1) = cxl;%double check! %

end %

%************************************************%

u_box = BB*(max(eye(in1))’); %

%

if m>0, %

for i=1:m, %

for j=1:N, %

ij=j+(i-1)*N; %

cuu(ij,1)=cu_u(i); %

end %

end %

u_box(in3+1:in2,1) = cuu; %

end %

%

u_A = []; %

%

u_nonlin(1:(2*n*p-2*pm),1) = [x0_u;xf_u]; %

u_nonlin(in4,1) = 0; %

if r>0, %

for i=1:r, %

for j=1:2*N, %

ij=j+(i-1)*2*N; %

cxu(ij,1)=cx_u(i); %

end %

end %

u_nonlin(in5+1:in41,1) = cxu;%double check! %

end %

l=[l_box; l_A;l_nonlin]; %

u=[u_box; u_A;u_nonlin]; %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

tic

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 3. Call NPSOL. %

funobj = ’funobj’; %

funcon = ’funcon’; %

[y,f,g,c,cJac,inform,lambda,iter,istate]= npsol(A,l,u,y,funobj,...%

funcon,500,3); %

result=y; %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

toc

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 4. Plot results. %

result1 %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%**********************End of main_guess.m*************************

E.3.12 necc.m

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% This file is called necc.m which will provide all the necessary %

%conditions before using NPSOL. %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%*******************************************

% 1. Define Extended Hamiltonian function. %

%

H=L; %

for i=1:n, %

nn1=num2str(i); %

nn2=[’v’ nn1 ’0’]; %

nn3=eval([’fs(’ nn1 ’)’]); %

H1 = nn2*nn3; %

H=H+H1; %

end %

for i=1:ru, %

nn1=num2str(i); %

nn2=[’mu’ nn1 ’0’]; %

nn3=eval([’cu(’ nn1 ’,1)’]); %

H1=nn2*nn3; %

H=H+H1; %

end %

for i=1:r, %

nn1=num2str(i+ru); %

nn2=[’mu’ nn1 ’0’]; %

nn3=eval([’cx(’ nn1 ’,1)’]); %

H1=nn2*nn3; %

H=H+H1; %

end %

%*******************************************

%******************************************************

% 2. This step will provide the costate equations. %

% format>>> g(1)=(), ..., g(n)=() %

gs=sym([zeros(n,1)]); %

dd=sym(’0’); %

ddd=sym(’0’); %

dd1=sym(’0’); %

for i=1:n, %

for j=1:(p), %

jj=j-1; %

nn1=num2str(i); %

nn2=num2str(jj); %

%

d1=diff(H,[’x’nn1 nn2]); %

if j==1, %

dd1=d1; %

end %end if %

if j>1, %

%**********Chain Rule************************ %

for k1=1:n, %

for k2=1:p %

kk2=k2-1; %

nn3=num2str(k1); %

nn4=num2str(kk2); %

kk=kk2+jj; %

nn5=num2str(kk); %

dd=diff(d1,[’x’nn3 nn4])*[’x’nn3 nn5]; %

ddd=diff(d1,[’v’nn3 nn4])*[’v’nn3 nn5]; %

dddd=diff(d1,[’mu’nn3 nn4])*[’mu’nn3 nn5];%

da=dd+ddd+dddd; %

dd1=dd1+((-1)^(j+1))*(da); %

end %

end %

%******************************************** %

end %end if %

%

end %

gs(i)=simple(dd1); %

end %

%

%******************************************************

%********************************************************

% 3. This step will provide the continuity conditions. %

% Format %

for ii=1:p, %

jp=ii+1; %

cs=sym([zeros(n,1)]); %

dd=sym(’0’); %

ddd=sym(’0’); %

dd1=sym(’0’); %

for i=1:n, %

for j=jp:(p), %

jj=j-1; %

nn1=num2str(i); %

nn2=num2str(jj); %

%

d1=diff(H,[’x’nn1 nn2]); %

if j==jp, %

dd1=d1; %

end %end if %

if j>jp, %

%**********Chain Rule************************ %

for k1=1:n, %

for k2=1:p %

kk2=k2-1; %

nn3=num2str(k1); %

nn4=num2str(kk2); %

kk=kk2+jj; %

nn5=num2str(kk); %

dd=diff(d1,[’x’nn3 nn4])*[’x’nn3 nn5]; %

ddd=diff(d1,[’v’nn3 nn4])*[’v’nn3 nn5]; %

dddd=diff(d1,[’mu’nn3 nn4])*[’mu’nn3 nn5]; %

da=dd+ddd+dddd; %

dd1=dd1+((-1)^(j+1))*(da); %

end %

end %

%******************************************** %

end %end if %

%

end %

cs(i)=simple(dd1); %

end %

cc((ii-1)*n+1:ii*n,1)=cs; %

end %

%********************************************************

%******************************************************

% 4. This step will provide the optimality equations. %

% format>>> Hu(1)=(), ..., Hu(m)=() %

Hu=sym([zeros(m,1)]); %

for i=1:m, %

nn1=num2str(i); %

d1=diff(H,[’u’nn1]); %

Hu(i)=simple(d1); %

end %

%

%******************************************************

%*******************end of necc.m file*******************

E.3.13 Npsol.m

Same

E.3.14 param.m

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%This file is called param.m; it parameterizes all states, etc. %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 1. Set how many parameters we need. %

nv1=(2*n*p*N+4*n*N+2*r*N+2*m*N)/((2*n+r+ru)*N); %

nv2=floor((2*n*p*N+4*n*N+2*r*N+2*m*N)/((2*n+r+ru)*N)); %

if nv1==nv2, %

nv=nv1; %

else %

nv=nv2+1; %

end %

%******Let’s calculate for state and costate variables first.%

X=sym([zeros(n*N,1)]); %

V=sym([zeros(n*N,1)]); %

d=sym([zeros(n*N,1)]); %

dd=sym([zeros(n*N,1)]); %

for i =1:n, %

for j=1:N, %

for k=1:nv, %

kk=k-1; %

nn4=num2str(kk); %

kj=(i-1)*nv*N+k+(j-1)*nv; %

kv=(i-1)*nv*N+n*nv*N+k+(j-1)*nv; %

nn5=num2str(kj); %

nn6=num2str(kv); %

dx=sym([’y’nn5 ])*sym([’t^’nn4]); %

dv=sym([’y’nn6 ])*sym([’t^’nn4]); %

d((i-1)*N+j)=d((i-1)*N+j)+dx; %

dd((i-1)*N+j)=dd((i-1)*N+j)+dv; %

end %

end %

end %

X=d; %

V=dd; %

%% X and V formats are like the following: %

% X(1:N,1)=x1 interval 1 to N. %

% X(N+1:2N,1)=x2 interval 1:N. %

% ... %

% X((i-1)N+1:iN)=xi interval 1 to N. %

% Also, the same manner applies to V (Lambda). %
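The flat indexing used above (kj for states, kv for costates) can be sketched in Python; the helper names below are hypothetical, and the indices are kept 1-based to mirror the MATLAB code:

```python
def state_coeff_index(i, j, k, n, N, nv):
    # Flat position in y of the k-th polynomial coefficient of state i
    # on interval j, mirroring kj = (i-1)*nv*N + k + (j-1)*nv in param.m.
    return (i - 1) * nv * N + k + (j - 1) * nv

def costate_coeff_index(i, j, k, n, N, nv):
    # Costate coefficients start after all n*nv*N state coefficients:
    # kv = (i-1)*nv*N + n*nv*N + k + (j-1)*nv
    return n * nv * N + state_coeff_index(i, j, k, n, N, nv)
```

So for n=1, N=6, nv=4, the first costate coefficient sits at position 25, directly after the 24 state coefficients.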

%*******Now, calculate for control variables.*********************%

U=sym([zeros(m*N,1)]); %

d=sym([zeros(m*N,1)]); %

for i =1:m, %

for j=1:N, %

ku=2*n*nv*N+j+(i-1)*N; %

nn1=num2str(ku); %

du=sym([’y’nn1 ]); %

d((i-1)*N+j)=d((i-1)*N+j)+du; %

end %

end %

U=d; %

% U formats are like the following: %

% U(1:N,1)=U1 interval 1 to N. %

% U(N+1:2N,1)=U2 interval 1:N. %

% ... %

% U((i-1)N+1:iN)=Ui interval 1 to N. %

%*******And now, calculate for mu.********************************%

rr=ru+r; %

if rr>0, %

MU=sym([zeros(rr*N,1)]); %

d=sym([zeros(rr*N,1)]); %

for i =1:rr, %

for j=1:N, %

for k=1:nv, %

kk=k-1; %

nn4=num2str(kk); %

kj=(i-1)*nv*N+2*n*nv*N+m*N+k+(j-1)*nv; %

nn5=num2str(kj); %

dx=sym([’y’nn5 ])*sym([’t^’nn4]); %

d((i-1)*N+j)=d((i-1)*N+j)+dx; %

end %

end %

end %

MU=d; %

end %

%

% MU formats are like the following: %

% MU(1:N,1)=mu1 interval 1 to N. %

% MU(N+1:2N,1)=mu2 interval 1:N. %

% ... %

% MU((i-1)N+1:iN)=mui interval 1 to N. %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 2. Finding derivatives of X and V up to the p-th derivative. %

dh(:,1)=X; %

for i=2:p+1, %

dd=dh(:,(i-1)); %

dh(:,i)=diff(dd,’t’); %

end %

Xdp=dh; %

dh(:,1)=V; %

for i=2:p+1, %

dd=dh(:,(i-1)); %

dh(:,i)=diff(dd,’t’); %

end %

Vdp=dh; %

if rr>0, %

ddh(:,1)=MU; %

for i=2:p+1, %

ddd=ddh(:,(i-1)); %

ddh(:,i)=diff(ddd,’t’); %

end %

MUdp=ddh; %

end %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 3. Set all variables to match all the necessary conditions. %

% x10,...,x1p, x20,...,x2p, ..., xn0,...,xnp %

% v10,...,v1p, v20,...,v2p, ..., vn0,...,vnp %

% u1, ..., um %

% mu10, ..., murp %

for i=1:n, %

for j=1:p+1, %

nn1=num2str(i); %

jj=j-1; %

nn2=num2str(jj); %

nn3=num2str(j); %

d1=(i-1)*N+1; %

nn4=num2str(d1); %

d2=i*N; %

nn5=num2str(d2); %

nn6=eval([’Xdp(’nn4 ’:’ nn5 ’,’nn3 ’)’]); %

nn7=eval([’Vdp(’nn4 ’:’ nn5 ’,’nn3 ’)’]); %

%****Good for future works**** %

eval([’x’ nn1 nn2 ’ = nn6;’]); %

eval([’v’ nn1 nn2 ’ = nn7;’]); %

%***************************** %

end %

end %

for i=1:m, %

nn1=num2str(i); %

d1=(i-1)*N+1; %

d2=i*N; %

nn2=num2str(d1); %

nn3=num2str(d2); %

nn4=eval([’U(’nn2 ’:’ nn3 ’)’]); %

%****Good for future works**** %

eval([’u’ nn1 ’ = nn4;’]); %

%***************************** %

end %

if rr>0, %

for i=1:rr, %

for j=1:p+1, %

nn1=num2str(i); %

jj=j-1; %

nn2=num2str(jj); %

nn3=num2str(j); %

d1=(i-1)*N+1; %

nn4=num2str(d1); %

d2=i*N; %

nn5=num2str(d2); %

nn6=eval([’MUdp(’nn4 ’:’ nn5 ’,’nn3 ’)’]); %

%****Good for future works**** %

eval([’mu’ nn1 nn2 ’ = nn6;’]); %

%***************************** %

end %

end %

end %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%***************************end of param.m*************************

E.3.15 pre.m

Same

E.3.16 result1.m

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% This M-file is named result1.m; it translates the solution %

%data from the nonlinear programming problem into figures. %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

close all

if inform<2,

fprintf(’Congratulations! Solution found.\n’)

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 1. calculate ti except t0 and tf. %

nn0=num2str(N); %

eval([’t’ nn0 ’ = tf;’]); %

for i=1:N-1, %

nn1=num2str(i); %

nn2=i*(tf-t0)/N; %

eval([’t’ nn1 ’ = nn2;’]); %

end %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
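Step 1 above builds an equally spaced time grid t1, ..., t_{N-1} between t0 and tf. A small Python sketch of that grid (the helper name knot_times is hypothetical):

```python
def knot_times(t0, tf, N):
    # Interval boundary times t1..t_{N-1}, mirroring result1.m step 1.
    # Note: the M-file computes i*(tf-t0)/N, which places the knots
    # correctly only when t0 = 0, as in the example user.m.
    return [i * (tf - t0) / N for i in range(1, N)]

print(knot_times(0, 1, 6))
```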

%**********************************************

cl=3; % The number of nodes for each interval.

%**********************************************

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 2.Find T1,..., T_N-1. %

for i=1:N, %

nn1=num2str(i-1); %

nn2=num2str(i); %

nn3=eval(sym([’t’nn1])); %

nn4=eval(sym([’t’nn2])); %

eval([’T’nn1 ’=nn3:(nn4-nn3)/cl:nn4;’]); %

end %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 3. Find x101,..., xnp_N-1 and plot them. %

param %

for i=1:in1, %

nn1=num2str(i); %

eval([’y’nn1 ’ = y(’nn1 ’);’]); %

end %

for i=1:N, % intervals %

nn1=num2str(i); %

nnn=num2str(i-1); %

for j=1:n, % state %

nn2=num2str(j); %

for k1=1:p, %derivatives upto (p)th derivative %

nn3=num2str(k1-1); %

for k2= 1:cl+1, % node in the interval %

nn4=num2str(k2); %

for k3=1:p+2, % variables in each state %

nn5=num2str(k3); %

d=eval(sym([’x’nn2 nn3 ’(’nn1 ’)’])); %

t=eval(sym([’T’nnn ’(’nn4 ’)’])); %

dd=eval(d); %

eval([’x’nn2 nn3 nnn ’(1,’nn4 ’)=dd;’]); %

end %

end %

figure(k1+(j-1)*p) %

hold on %

eval([’plot(T’nnn ’,x’nn2 nn3 nnn ’)’]) %

end %

end %

end %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 4. Find u11,..., um_N-1 and plot them. %

for i=1:N, % intervals %

nn1=num2str(i); %

nnn=num2str(i-1); %

for j=1:m, % state %

nn2=num2str(j); %

for k2= 1:cl+1, % node in the interval %

nn4=num2str(k2); %

d=eval(sym([’u’nn2 ’(’nn1 ’)’])); %

t=eval(sym([’T’nnn ’(’nn4 ’)’])); %

dd=eval(d); %

eval([’u’nn2 nnn ’(1,’nn4 ’)=dd;’]); %

end %

figure(n*p+j) %

hold on %

eval([’plot(T’nnn ’,u’nn2 nnn ’)’]) %

end %

end %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

elseif inform==6,

fprintf(’Local minimum exists! Please check the reasons.\n’)

fprintf(’If it is not true, try to change the limits by\n’)

fprintf(’typing max(y) and comparing it with BB in main.m.\n’)

fprintf(’However, to see plots now, type >>result2.\n’)

else

fprintf(’No solution! Please type "inform" to check the reason\n’)

fprintf(’and refer back to the NPSOL manual.\n’)

fprintf(’However, to see plots now, type >>result2.\n’)

end

E.3.17 result2.m

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% This M-file is named result2.m; it translates the solution %

%data from the nonlinear programming problem into figures. %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

close all

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 1. calculate ti except t0 and tf. %

nn0=num2str(N); %

eval([’t’ nn0 ’ = tf;’]); %

for i=1:N-1, %

nn1=num2str(i); %

nn2=i*(tf-t0)/N; %

eval([’t’ nn1 ’ = nn2;’]); %

end %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%**********************************************

cl=3; % The number of nodes for each interval.

%**********************************************

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 2.Find T1,..., T_N-1. %

for i=1:N, %

nn1=num2str(i-1); %

nn2=num2str(i); %

nn3=eval(sym([’t’nn1])); %

nn4=eval(sym([’t’nn2])); %

eval([’T’nn1 ’=nn3:(nn4-nn3)/cl:nn4;’]); %

end %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 3. Find x101,..., xnp_N-1 and plot them. %

param %

for i=1:in1, %

nn1=num2str(i); %

eval([’y’nn1 ’ = y(’nn1 ’);’]); %

end %

for i=1:N, % intervals %

nn1=num2str(i); %

nnn=num2str(i-1); %

for j=1:n, % state %

nn2=num2str(j); %

for k1=1:p, %derivatives upto (p)th derivative %

nn3=num2str(k1-1); %

for k2= 1:cl+1, % node in the interval %

nn4=num2str(k2); %

for k3=1:p+2, % variables in each state %

nn5=num2str(k3); %

d=eval(sym([’x’nn2 nn3 ’(’nn1 ’)’])); %

t=eval(sym([’T’nnn ’(’nn4 ’)’])); %

dd=eval(d); %

eval([’x’nn2 nn3 nnn ’(1,’nn4 ’)=dd;’]); %

end %

end %

figure(k1+(j-1)*p) %

hold on %

eval([’plot(T’nnn ’,x’nn2 nn3 nnn ’)’]) %

end %

end %

end %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 4. Find u11,..., um_N-1 and plot them. %

for i=1:N, % intervals %

nn1=num2str(i); %

nnn=num2str(i-1); %

for j=1:m, % state %

nn2=num2str(j); %

for k2= 1:cl+1, % node in the interval %

nn4=num2str(k2); %

d=eval(sym([’u’nn2 ’(’nn1 ’)’])); %

t=eval(sym([’T’nnn ’(’nn4 ’)’])); %

dd=eval(d); %

eval([’u’nn2 nnn ’(1,’nn4 ’)=dd;’]); %

end %

figure(n*p+j) %

hold on %

eval([’plot(T’nnn ’,u’nn2 nnn ’)’]) %

end %

end %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

E.3.18 user.m

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%This MATLAB file is called user.m. %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%

% 1. Number of state variables...format>>> n=?; %

n=1; %

%

% 2. Number of control variables...format>>> m=?; %

m=1; %

%

% 3. Number of the highest derivative of state variable... %

%format>>> p=?; %

p=2; %

ps=[2]; %

%

% 4. Number of the intervals***...format>>> N=?; %

N=6; %

%

% 5. Initial time...format>>> t0=?; %

t0=0; %

%

% 6 Final time...format>>> tf=?. %

% where N=N in section4. %

tf=1; %

%

%7.This is where the cost function needed to be defined. %

% 7.1 Control inputs must set as u1, u2, ..., um. %

% 7.2 State variables...The user knows how many state variables %

%he/she has. %

% State variables must set as x10, x11, ..., x1p, x20, x21,%

% ..., x2p, ..., xn0, xn1, ..., xnp %

% where x10=x1, x11=x1_dot, and so on. %

% Format>>> L=sym(’?’); %

L=sym(’u1^2’); %

%

% 8. This place is where the state equations have to be defined as%

%format below: %

% Format>>> fs(1,1)=sym(’?’);, ..., fs(n,1)=sym(’?’); %

fs(1,1)=sym(’x12-u1’); %

%

% 9. Also, at this point, users need to provide all constraints. %

% Format>>> cu(1,1)=sym(’?’);, ..., cu(m,1)=sym(’?’);where m is%

% the number of the control input. cu must be defined for all %

% control variables.******* %

% Format>>> cx(1,1)=sym(’?’);, ..., cx(r,1)=sym(’?’);where r is%

% the number of the constraints %

% Detail: %

% cu means constraints contain only control inputs. %

% cx means constraints contain only state or both control and sta-%

% te variables. %

% 9.1 Further more for the lower and upper bound of the inequality%

% constraints. If one has equality constraints, the lower %

% and upper bounds must be defined as the same values. %

% Format>>> cu_l(1,1)=?;, ..., cu_l(m,1)=?; %

% cu_u(1,1)=?;, ..., cu_u(m,1)=?; %

% and so on, where l and u mean lower/upper bound respectively.%

%*****************************************************************%

ru=1;

cu(1,1)=sym(’u1’); %

%cu(2,1)=sym(’u2’); %

%

cu_l(1,1)=-4; %

cu_u(1,1)= 4; %

%cu_l(2,1)=-4; %

%cu_u(2,1)= 4; %

%*****************************************************************%

%

r=0; % remember: this is the number of constraints on x and u. %

rr=ru+r; %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%cx(1,1)=sym(’x12’); %

%cx_l(1,1)=-4; %

%cx_u(1,1)=4; %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%

% 10. Boundary Conditions %

% Format>>> x0(i)=?; and xf(i)=?; %

% where x0(1) =x10dot, x0(2) =x11dot, ..., x0(p) =x1pdot %

% x0(p+1)=x20dot, x0(p+2)=x21dot, ..., x0(2p)=x2pdot %

% ... %

% x0((n-1)p+1)=xn0dot, x0((n-1)p+2)=xn1dot, ..., x0(np)=xnpdot%

% and repeat with the same logic for xf(1),...,xf(np). %

%

x0(1,1)=sym(’x10’); %

x0(2,1)=sym(’x11’); %

%Bound of x0 %

x0_l(1,1)=0; %

x0_l(2,1)=0; %

x0_u(1,1)=0; %

x0_u(2,1)=0; %

%

xf(1,1)=sym(’x10’); %

xf(2,1)=sym(’x11’); %

%Bound of xf %

xf_l(1,1)=1; %

xf_l(2,1)=0; %

xf_u(1,1)=1; %

xf_u(2,1)=0; %

%

%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%

% 11. Define the min/max values of all parameters. %

%

absmax=1e13; %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%***********************end of user.m file*************************

E.4 Indirect Minimum Time Using Higher-Order Method

E.4.1 allnec.m

%%%%%%%%%%%%%user input%%%%%%%%%%%%%%%

user

%*************************************

%%%%%%%%%%necessary conditions%%%%%%%%

necc

%*************************************

%%%%%%%%%parameterized variables%%%%%%

param

%*************************************

E.4.2 concj.m

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 5. Finding the Jacobian matrix cJac. %

% This is a correct version, but beware of large size error. %

in1=2*nv*n*N+N*m+nv*rr*N+1; %#of all variables including mu. %

dc=c; %

ds=size(dc); %

cn=ds(1); %

fid1=fopen(’r_c.m’,’w+’); %

for i=1:cn, %

nn1=num2str(i); %

nn2=dc(i); %

nn3=sym([’c’nn1]); %

nn4=char(nn3); %

nn5=char(nn2); %

if i==1, %

fprintf(fid1,’[%s ;\n’,nn5); %

elseif i==cn, %

fprintf(fid1,’%s ]\n’,nn5); %

else %

fprintf(fid1,’%s ;\n’,nn5); %

end %

clear nn1 nn2 nn3 nn4 nn5 %

end %

st1=fclose(fid1); %

fid11=fopen(’r_cj.m’,’w+’); %

for i=1:cn, %

for j=1:in1, %

nn1=num2str(i); %

nn2=num2str(j); %

nn3=diff(dc(i),sym([’y’nn2])); %

nn4=sym([’c’nn1 ’_’nn2]); %

nn5=char(nn3); %

nn6=char(nn4); %

ij=j+(i-1)*in1; %

dj=i*in1; %

if ij==1, %

fprintf(fid11,’[%s ,\n’,nn5); %

elseif ij==cn*in1, %

fprintf(fid11,’%s ]\n’,nn5); %

elseif ij==dj, %

fprintf(fid11,’%s ;\n’,nn5); %

else %

fprintf(fid11,’%s ,\n’,nn5); %

end %

clear nn1 nn2 nn3 nn4 nn5 nn6 %

end %

end %

st2=fclose(fid11); %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%***********************End of concj.m**************************

E.4.3 conoj.m

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 5. Finding the Jacobian matrix objgrd. %

in1=2*nv*n*N+N*m+nv*rr*N+1; %#of all variables including mu. %

dc1=H; %

fid21=fopen(’r_o1.m’,’w+’); %

for i=1:1, %

nn1=num2str(i); %

nn2=dc1(i); %

nn3=sym([’c’nn1]); %

nn4=char(nn3); %

nn5=char(nn2); %

if i==1, %

fprintf(fid21,’[%s ]\n’,nn5); %

elseif i==N, %

fprintf(fid21,’%s ]\n’,nn5); %

else %

fprintf(fid21,’%s ;\n’,nn5); %

end %

clear nn1 nn2 nn3 nn4 nn5 %

end %

st1=fclose(fid21); %

fid23=fopen(’r_oj1.m’,’w+’); %

for i=1:1, %

for j=1:in1, %

nn1=num2str(i); %

nn2=num2str(j); %

nn3=diff(dc1(i),sym([’y’nn2])); %

nn4=sym([’c’nn1 ’_’nn2]); %

nn5=char(nn3); %

nn6=char(nn4); %

ij=j+(i-1)*in1; %

dj=i*in1; %

if ij==1, %

fprintf(fid23,’[%s ,\n’,nn5); %

elseif ij==in1, %

fprintf(fid23,’%s ]\n’,nn5); %

elseif ij==dj, %

fprintf(fid23,’%s ;\n’,nn5); %

else %

fprintf(fid23,’%s ,\n’,nn5); %

end %

clear nn1 nn2 nn3 nn4 nn5 nn6 %

end %

end %

st3=fclose(fid23); %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%***********************End of conoj.m**************************

E.4.4 execute.m

Same

E.4.5 funcon.m

function [c,cJac] = funcon(y,ncnln)

user %

nv1=(2*n*p*N+4*n*N+2*r*N+2*m*N)/((2*n+r+ru)*N); %

nv2=floor((2*n*p*N+4*n*N+2*r*N+2*m*N)/((2*n+r+ru)*N)); %

if nv1==nv2, %

nv=nv1; %

else %

nv=nv2+1; %

end %

pm=n*p-sum(ps); %

in1=2*nv*n*N+N*m+nv*rr*N+1; %#of all variables including mu. %

cn=2*n*p*N+4*n*N+2*rr*N-pm*(N+1); %# of nonlin constraints %

for i=1:in1, %

nn1=num2str(i); %

eval([’y’nn1 ’ = y(’nn1 ’);’]); %

end %

nn1=num2str(in1); %

eval([’tf’ ’ = y(’nn1 ’);’]); %

for i=1:N-1, %

nn1=num2str(i); %

nn2=i*(tf-t0)/N; %

eval([’t’ nn1 ’ = nn2;’]); %

end %

clear nn1 nn2 %

fid1=fopen(’r_c.m’,’r’); %

s1 = fscanf(fid1,’%s’); %

c=eval(s1); %

st1=fclose(fid1); %

fid2=fopen(’r_cj.m’,’r’); %

s2 = fscanf(fid2,’%s’); %

cJac=eval(s2); %

st2=fclose(fid2); %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

return; %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%End of funcon.m%%%%%%%%%%%%%%%%%%%%%%

E.4.6 funcont.m

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

allnec %This M-file will prepare all symbolic computations. %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 1. calculate ti except t0 and tf. %

t=sym(’t’); %

tf=sym(’tf’); %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 2.Preparing F(n_states), G(n_costates), %

% CC(np_continuities_lagrange), %

% XX(np_continuities_state), CX(r_constraints), B0(np_initial),%

% HU(m_optimalities, and Bf(np_final). %

for i=1:n, %

nnn1=num2str(i); %

if fs(i)==0, %

d1=sym([zeros(N,1)]); %

else %

d1=sym([zeros(N,1)]); %

for ii=1:N, %

df(ii,1)=fs(i); %

end %

for j=1:N, %

nn3=num2str(j); %

for k1=1:n, %

for k2=1:p+1, %

nn1=num2str(k1); %

k22=k2-1; %

nn2=num2str(k22); %

d1(j)=subs(df(j),sym([’x’nn1 nn2]),sym([’x’nn1 nn2 ’(’nn3 ’)’]));%

d1(j)=subs(d1(j),sym([’v’nn1 nn2]),sym([’v’nn1 nn2 ’(’nn3 ’)’]));%

df(j)=d1(j); %

end %

end %

for k1=1:r, %

for k2=1:p+1, %

nn1=num2str(k1); %

k22=k2-1; %

nn2=num2str(k22); %

nn3=num2str(j); %

d1(j)=subs(d1(j),sym([’mu’nn1 nn2]),sym([’mu’nn1 nn2 ’(’nn3 ’)’]));%

end %

end %

for k1=1:m, %

nn1=num2str(k1); %

nn3=num2str(j); %

d1(j)=subs(d1(j),sym([’u’nn1]),sym([’u’nn1 ’(’nn3 ’)’])); %

end %

end %

df=d1; %

end %

if gs(i)==0, %

d2=sym([zeros(N,1)]); %

else %

d2=sym([zeros(N,1)]); %

for ii=1:N, %

dg(ii,1)=gs(i); %

end %

for j=1:N, %

nn3=num2str(j); %

for k1=1:n, %

for k2=1:2*p+1, %

nn1=num2str(k1); %

k22=k2-1; %

nn2=num2str(k22); %

d2(j)=subs(dg(j),sym([’x’nn1 nn2]),sym([’x’nn1 nn2 ’(’nn3 ’)’]));%

d2(j)=subs(d2(j),sym([’v’nn1 nn2]),sym([’v’nn1 nn2 ’(’nn3 ’)’]));%

dg(j)=d2(j); %

end %

end %

for k1=1:r, %

for k2=1:2*p+1, %

nn1=num2str(k1); %

k22=k2-1; %

nn2=num2str(k22); %

nn3=num2str(j); %

d2(j)=subs(d2(j),sym([’mu’nn1 nn2]),sym([’mu’nn1 nn2 ’(’nn3 ’)’]));%

end %

end %

for k1=1:m, %

nn1=num2str(k1); %

nn3=num2str(j); %

d2(j)=subs(d2(j),sym([’u’nn1]),sym([’u’nn1 ’(’nn3 ’)’])); %

end %

end %

dg=d2; %

end %

for ii=1:N, %

d1(ii)=eval(d1(ii)); %

d2(ii)=eval(d2(ii)); %

end %

F((i-1)*N+1:i*N,1)=d1; %

G((i-1)*N+1:i*N,1)=d2; %

end %

%******************************************************************

clear d1 %

for i=1:m, %

nnn1=num2str(i); %

if Hu(i)==0, %

d1=sym([zeros(N,1)]); %

else %

d1=sym([zeros(N,1)]); %

for ii=1:N, %

dHu(ii,1)=Hu(i); %

end %

for j=1:N, %

nn3=num2str(j); %

for k1=1:n, %

for k2=1:p+1, %

nn1=num2str(k1); %

k22=k2-1; %

nn2=num2str(k22); %

d1(j)=subs(dHu(j),sym([’x’nn1 nn2]),sym([’x’nn1 nn2 ’(’nn3 ’)’]));%

d1(j)=subs(d1(j),sym([’v’nn1 nn2]),sym([’v’nn1 nn2 ’(’nn3 ’)’])); %

dHu(j)=d1(j); %

end %

end %

for k1=1:rr, %

for k2=1:p+1, %

nn1=num2str(k1); %

k22=k2-1; %

nn2=num2str(k22); %

nn3=num2str(j); %

d1(j)=subs(d1(j),sym([’mu’nn1 nn2]),sym([’mu’nn1 nn2 ’(’nn3 ’)’]));%

end %

end %

for k1=1:m, %

nn1=num2str(k1); %

nn3=num2str(j); %

d1(j)=subs(d1(j),sym([’u’nn1]),sym([’u’nn1 ’(’nn3 ’)’])); %

end %

end %

dHu=d1; %

end %

for ii=1:N, %

d1(ii)=eval(d1(ii)); %

end %

HU((i-1)*N+1:i*N,1)=d1; %

end %

%******************************************************************

for i=1:p, %

for j=1:n, %

ij=p-i; %

nnn1=num2str(ij); %

nnn2=num2str(j); %

k=j+(i-1)*n; %

if cc(k)==0, %

d1=sym([zeros(N,1)]); %

else %
d1=sym([zeros(N,1)]); %

for ii=1:N, %

dcc(ii,1)=cc(k); %

end %

for j=1:N, %

nn3=num2str(j); %

for k1=1:n, %

for k2=1:2*p+1, %

nn1=num2str(k1); %

k22=k2-1; %
nn2=num2str(k22); %

d1(j)=subs(dcc(j),sym([’x’nn1 nn2]),sym([’x’nn1 nn2 ’(’nn3 ’)’]));%

d1(j)=subs(d1(j),sym([’v’nn1 nn2]),sym([’v’nn1 nn2 ’(’nn3 ’)’])); %

dcc(j)=d1(j); %

end %

end %

for k1=1:r, %

for k2=1:2*p+1, %

nn1=num2str(k1); %

k22=k2-1; %

nn2=num2str(k22); %

nn3=num2str(j); %

d1(j)=subs(d1(j),sym([’mu’nn1 nn2]),sym([’mu’nn1 nn2 ’(’nn3 ’)’]));%

end %

end %

for k1=1:m, %

nn1=num2str(k1); %

nn3=num2str(j); %

d1(j)=subs(d1(j),sym([’u’nn1]),sym([’u’nn1 ’(’nn3 ’)’])); %

end %

end %

dcc=d1; %

end %

for ii=1:N, %

d1(ii)=eval(d1(ii)); %

end %

CC((k-1)*N+1:k*N,1)=d1; %sign* %

end %

end %

%******************************************************************

ddp=0; %
for ii=1:n, %

dp=ps(ii); %

for i=1:dp, %

XX(ddp*N+(i-1)*N+1:ddp*N+(i)*N,1)=Xdp((ii-1)*N+1:ii*N,i); %

end %

ddp=dp+ddp; %

end %

%******************************************************************

if r==0, %

CX=[]; %
else %

for i=1:r, %

nn1=num2str(i); %

if cx(i)==0, %

d1=sym([zeros(N,1)]); %

else %

d1=sym([zeros(N,1)]); %

for ii=1:N, %

dc(ii,1)=cx(i); %

end %

for j=1:N, %

nn3=num2str(j); %

for k1=1:n, %

for k2=1:2*p+1, %

nn1=num2str(k1); %

k22=k2-1; %

nn2=num2str(k22); %

d1(j)=subs(dc(j),sym([’x’nn1 nn2]),sym([’x’nn1 nn2 ’(’nn3 ’)’]));%

d1(j)=subs(d1(j),sym([’v’nn1 nn2]),sym([’v’nn1 nn2 ’(’nn3 ’)’]));%

dc(j)=d1(j); %

end %

end %

for k1=1:r, %

for k2=1:2*p+1, %

nn1=num2str(k1); %

k22=k2-1; %

nn2=num2str(k22); %

nn3=num2str(j); %

d1(j)=subs(d1(j),sym([’mu’nn1 nn2]),sym([’mu’nn1 nn2 ’(’nn3 ’)’]));%

end %

end %
for k1=1:m, %

nn1=num2str(k1); %

nn3=num2str(j); %

d1(j)=subs(d1(j),sym([’u’nn1]),sym([’u’nn1 ’(’nn3 ’)’])); %

end %

end %

dc=d1; %

end %

for ii=1:N, %

d1(ii)=eval(d1(ii)); %
end %

CX((i-1)*N+1:i*N,1)=d1; %

end %

end %

%*****************************************************************%

clear d1 %

clear d2 %

ddp=0; %

for ii=1:n, %

dp=ps(ii); %

for i=1:dp, %

k=i+ddp; %

dc0(k,1)=x0(k); %

dcf(k,1)=xf(k); %

for k1=1:n, %

for k2=1:p+1, %

nn1=num2str(k1); %

k22=k2-1; %

nn2=num2str(k22); %

nn3=num2str(1); %

nn4=num2str(N); %

d1(k,1)=subs(dc0(k,1),sym([’x’nn1 nn2]),sym([’x’nn1 nn2... %

’(’nn3 ’)’]),0); %

d2(k,1)=subs(dcf(k,1),sym([’x’nn1 nn2]),sym([’x’nn1 nn2... %

’(’nn4 ’)’]),0); %

dc0(k,1)=d1(k,1); %

dcf(k,1)=d2(k,1); %

end %

end %

end %

ddp=dp+ddp; %
end %

for j=1:n*p, %

dd1(j,1)=eval(d1(j)); %

dd2(j,1)=eval(d2(j)); %

end %

B0=dd1; %

Bf=dd2; %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% 3. Substitute with all applicable ti. %

%************************F to be F_C******************************%

F_C=sym([zeros(2*n*N,1)]); %

for i=1:n, %

for j=1:N, %

k=j+(i-1)*N; %

for ii=1:2, %

ij=(ii-1)+(j-1); %

nn3=num2str(ij); %

d=sym([’t’nn3]); %

F_C((i-1)*2*N+ii+(j-1)*2)=subs(F(k),sym(’t’),d); %

end %

end %

end %

%***Note for future works*** %

% d=sym([’t’num2str(1)]) %

% d=eval(sym([’t’num2str(1)])) %

% subs(F_C(1),sym(’t0’),d) %

%*************************** %

%**********************G to be G_C********************************%

G_C=sym([zeros(2*n*N,1)]); %

for i=1:n, %

for j=1:N, %

k=j+(i-1)*N; %

for ii=1:2, %

ij=(ii-1)+(j-1); %

nn3=num2str(ij); %

%d=eval(sym([’t’nn3])); %

d=sym([’t’nn3]); %

G_C((i-1)*2*N+ii+(j-1)*2)=subs(G(k),sym(’t’),d); %

end %
end %

end %

%**********************CX to be CX_C******************************%

CX_C=sym([zeros(2*r*N,1)]); %

for i=1:r, %

for j=1:N, %

k=j+(i-1)*N; %

for ii=1:2, %

ij=(ii-1)+(j-1); %

nn3=num2str(ij); %
d=sym([’t’nn3]); %

CX_C((i-1)*2*N+ii+(j-1)*2)=subs(CX(k),sym(’t’),d); %

end %

end %

end %

%***********************CC to be CC_C*****************************%

for i=1:n*p, %

for j=1:N, %

k=j+(i-1)*(N); %

nn1=num2str(j); %

nn2=num2str(j-1); %

d=sym([’t’nn1]); %

CCC(k,1)=subs(CC(k),sym(’t’),d); %

dd=sym([’t’nn2]); %

CCCC(k,1)=subs(CC(k),sym(’t’),dd); %

end %

end %

for i=1:n*p, %

for j=1:N-1, %

k=j+(i-1)*(N-1); %

CC_C(k,1)=CCC(k+(i-1))-CCCC(k+(i)); %

end %

end %

%***********************XX to be XX_C*****************************%

pm=n*p-sum(ps); %

pp=n*p-pm; %

for i=1:pp, %

for j=1:N, %

k=j+(i-1)*(N); %

nn1=num2str(j); %

nn2=num2str(j-1); %
d=sym([’t’nn1]); %

XXX(k,1)=subs(XX(k),sym(’t’),d); %

dd=sym([’t’nn2]); %

XXXX(k,1)=subs(XX(k),sym(’t’),dd); %

end %

end %

for i=1:pp, %

for j=1:N-1, %

k=j+(i-1)*(N-1); %

XX_C(k,1)=XXX(k+(i-1))-XXXX(k+i); %
end %

end %

%***********************B0 to be B0_C*****************************%

pm=n*p-sum(ps); %

pp=n*p-pm; %

for i=1:pp, %

nn1=num2str(0); %

d=sym([’t’nn1]); %

B0_C(i,1)=subs(B0(i,1),sym(’t’),d); %

end %

%***********************Bf to be Bf_C*****************************%

pm=n*p-sum(ps); %

pp=n*p-pm; %

for i=1:pp, %

nn1=num2str(N); %

d=sym([’t’nn1]); %

Bf_C(i,1)=subs(Bf(i,1),sym(’t’),d); %

end %

%

%**********************HU to be HU_C******************************%

HU_C=sym([zeros(2*m*N,1)]); %

for i=1:m, %

for j=1:N, %

k=j+(i-1)*N; %

for ii=1:2, %

ij=(ii-1)+(j-1); %

nn3=num2str(ij); %

d=sym([’t’nn3]); %

HU_C((i-1)*2*N+ii+(j-1)*2)=subs(HU(k),sym(’t’),d); %

end %

end %
end %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 4. Form the constraint matrix c. %

ca=[B0_C; %

Bf_C; %

XX_C; %

CC_C; %

F_C; %
G_C; %

CX_C; %

HU_C]; %

si1=size(ca); %

si2=si1(1); %

in1=2*nv*n*N+N*m+nv*rr*N+1; %#of all variables including mu. %

nn1=num2str(N); %

nn2=num2str(in1); %

for i=1:si2, %

cb(i,1)=subs(ca(i),sym([’t’nn1]),sym([’y’nn2]),0); %

end %

for i=1:si2, %

c(i,1)=subs(cb(i),sym([’tf’]),sym([’y’nn2]),0); %

end %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%***This is an option to provide the gradient of c**************

% This is a correct version, but beware of errors at large problem sizes. %

concj %

%***************************************************************

%%%%%%%%%%%%%%%%%%%%%%%%%%%End of funcon.m%%%%%%%%%%%%%%%%%%%%%%%%
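The CC_C and XX_C blocks above evaluate each interval's polynomial at its right endpoint t_j, evaluate the next interval's polynomial at the same t_j, and difference the two; the NLP then drives these defects to zero, stitching the piecewise polynomials together. A minimal Python sketch of one such continuity defect (the polynomial coefficients below are made up for illustration):

```python
# Continuity defect at a shared node t_j: evaluate the piece on interval j
# at its right end and the piece on interval j+1 at its left end, subtract.
# Toy coefficients chosen so the defect happens to vanish already.
def piece_j(t):       # polynomial piece on interval j
    return 1.0 + 2.0 * t

def piece_j1(t):      # polynomial piece on interval j+1
    return 0.5 + 3.0 * t

t_j = 0.5             # shared node time
defect = piece_j(t_j) - piece_j1(t_j)   # a CC_C-style entry; constrained to 0
print(defect)   # -> 0.0
```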

E.4.7 funobj.m

function [objf,objgrd] = funobj(y)

user %

nv1=(2*n*p*N+4*n*N+2*r*N+2*m*N)/((2*n+r+ru)*N); %

nv2=floor((2*n*p*N+4*n*N+2*r*N+2*m*N)/((2*n+r+ru)*N)); %

if nv1==nv2, %

nv=nv1; %
else %

nv=nv2+1; %

end %

pm=n*p-sum(ps); %

cn=2*n*p*N+4*n*N+2*rr*N-pm*(N+1); %# of nonlin constraints %

in1=2*nv*n*N+N*m+nv*rr*N+1; %#of all variables including mu. %

for i=1:in1, %

nn1=num2str(i); %

eval([’y’nn1 ’ = y(’nn1 ’);’]); %

end %

nn1=num2str(in1); %

eval([’tf’ ’ = y(’nn1 ’);’]); %

for i=1:N-1, %

nn1=num2str(i); %

nn2=i*(tf-t0)/N; %

eval([’t’ nn1 ’ = nn2;’]); %

end %

nn1=num2str(N-1); %

eval([’tf1=t’nn1 ’;’]); %

fid1=fopen(’r_o1.m’,’r’); %

s1 = fscanf(fid1,’%s’); %
objf=eval(s1) %

st1=fclose(fid1); %

fid2=fopen(’r_oj1.m’,’r’); %

s2 = fscanf(fid2,’%s’); %

objgrd=eval(s2); %

st2=fclose(fid2); %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

return; %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%End of funobj.m%%%%%%%%%%%%%%%%%%%%%%
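The nv computation at the top of funobj.m (and repeated in main.m, necc.m, and param.m) takes nv1 = a/b, nv2 = floor(a/b), and sets nv = nv1 when they agree, else nv2 + 1 — which is simply a ceiling division. A Python sketch of the same idiom (the sizes passed in are placeholders, not values from the thesis examples):

```python
# funobj.m's floor-and-compare idiom computes ceil(a/b), where
# a = 2*n*p*N + 4*n*N + 2*r*N + 2*m*N and b = (2*n + r + ru)*N.
def nv_of(n, p, N, r, ru, m):
    a = 2*n*p*N + 4*n*N + 2*r*N + 2*m*N
    b = (2*n + r + ru) * N
    return -(-a // b)          # integer ceiling division

print(nv_of(2, 2, 10, 1, 0, 1))   # -> 4
```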

E.4.8 funobjt.m

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

allnec %This M-file will prepare all symbolic computations. %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 1. calculate ti except t0 and tf. %

t=sym(’t’); %

tf=sym(’tf’); %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% 2. Preparing H, the Hamiltonian equation. %

in1=2*nv*n*N+N*m+nv*rr*N+1; %#of all variables including mu. %

nn1=num2str(in1); %

H=sym([’y’nn1]); %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%*** This is an option for obtaining jacobian (objgrd)*** %

conoj %

%*****************************************************************%

%%%%%%%%%%%%%%%%%%%%%%%%%End of funobjt.m%%%%%%%%%%%%%%%%%%%%%%%%%%

E.4.9 guess.m

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% This M-file, named guess.m, is the most important file: it provides%

%the initial guesses for the code. %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 1. calculate ti except t0 and tf. %

nn0=num2str(N); %

eval([’t’ nn0 ’ = tf;’]); %
for i=1:N-1, %

nn1=num2str(i); %

nn2=i*(tf-t0)/N; %

eval([’t’ nn1 ’ = nn2;’]); %

end %

Nn1=(n*p*(N+1)); %

Nn2=(n*nv*N); %

L=sym([zeros(Nn1,Nn2)]); %

for i=1:N+1, %

nn1=num2str(i-1); %

for j=1:n, %

for l=1:nv, %

nn2=num2str(l-1); %

if i==N+1, %

L(j+(i-1)*n*p,(j-1)*nv+l+(i-2)*nv*n)=sym([’t’nn1 ’^’nn2]);%

d=L(j+(i-1)*n*p,(j-1)*nv+l+(i-2)*nv*n); %

for k=1:p-1, %

L(j+(i-1)*n*p+k*n,(j-1)*nv+l+(i-2)*nv*n)=diff(d); %

d=L(j+(i-1)*n*p+k*n,(j-1)*nv+l+(i-2)*nv*n); %

end %

else %
L(j+(i-1)*n*p,(j-1)*nv+l+(i-1)*nv*n)=sym([’t’nn1 ’^’nn2]);%

d=L(j+(i-1)*n*p,(j-1)*nv+l+(i-1)*nv*n); %

for k=1:p-1, %

L(j+(i-1)*n*p+k*n,(j-1)*nv+l+(i-1)*nv*n)=diff(d); %

d=L(j+(i-1)*n*p+k*n,(j-1)*nv+l+(i-1)*nv*n); %

end %

end %

end %

end %

end %
for i=1:N-1, %

nn1=num2str(i); %

for j=1:n, %

for l=1:nv, %

nn2=num2str(l-1); %

L(j+(i)*n*p,(j-1)*nv+l+(i-1)*nv*n)=sym([’-t’nn1 ’^’nn2]); %

d=L(j+(i)*n*p,(j-1)*nv+l+(i-1)*nv*n); %

for k=1:p-1, %

L(j+(i)*n*p+k*n,(j-1)*nv+l+(i-1)*nv*n)=diff(d); %

d=L(j+(i)*n*p+k*n,(j-1)*nv+l+(i-1)*nv*n); %

end %

end %

end %

end %

%************************** %

for i=1:Nn1, %

for j=1:Nn2, %

L1(i,j)=eval(L(i,j)); %

end %

end %

%

Linv=pinv(L1); %

Nnx=n*p*(N-1); %

xxx=zeros(Nnx,1); %

%

R=[x0_l; %

xxx; %

xf_l]; %

%

yc=Linv*R; %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
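guess.m assembles a polynomial collocation matrix L (rows of t^k basis values and their derivatives), stacks the boundary data into R, and takes yc = pinv(L1)*R, i.e., a least-squares fit of the polynomial coefficients to the boundary conditions. A minimal numpy sketch of that step with one invented state and boundary data:

```python
import numpy as np

# guess.m solves L * y = R in the least-squares sense via the pseudoinverse
# (yc = pinv(L1) * R). Toy version: fit x(t) = c0 + c1*t + c2*t**2 to
# x(0) = 0, x'(0) = 0, x(1) = 1 (boundary values invented for illustration).
L = np.array([[1.0, 0.0, 0.0],    # row for x(0):  [1, t, t^2] at t = 0
              [0.0, 1.0, 0.0],    # row for x'(0): [0, 1, 2t]  at t = 0
              [1.0, 1.0, 1.0]])   # row for x(1):  [1, t, t^2] at t = 1
R = np.array([0.0, 0.0, 1.0])
c = np.linalg.pinv(L) @ R         # coefficient guess, approximately [0, 0, 1]
print(c)
```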

E.4.10 main.m

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% This M-file, main.m, is the main program that the user %

%runs to obtain the optimal solutions. The optimal solutions %

%are obtained using the nonlinear programming package NPSOL. %

%INSTRUCTIONS FOR USING THIS CODE: %

% a) Modify the M-file named user.m, following closely %

% the instructions embedded inside it. %

% b) Next, prepare all the symbolic code for %

% the constraints and objective functions by typing %

% the command "pre" in the Matlab command window; disregard %

% any warnings that appear on the screen. %

% c) Finally, run the code by typing %

% the command "main" in the Matlab command window. %

% %

%THIS CODE IS DEVELOPED BY: %

% Captain Tawiwat Veeraklaew %

% Department of Mechanical Engineering %

% Chulachomklao Royal Military Academy %
% Nakhon-Nayok, 26001 THAILAND %

% E-mail: [email protected] %

% [email protected] %

% [email protected] %

% or contact Dr. Sunil K. Agrawal at University of Delaware %

% E-mail: [email protected] %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

clear

close all

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 1. Obtaining all inputs from user.m, initial guesses, and %

% # of variables. %

user %

nv1=(2*n*p*N+4*n*N+2*r*N+2*m*N)/((2*n+r+ru)*N); %

nv2=floor((2*n*p*N+4*n*N+2*r*N+2*m*N)/((2*n+r+ru)*N)); %

if nv1==nv2, %

nv=nv1; %

else %

nv=nv2+1; %
end %

if m==0, %

cu=[]; %

cu_l=[]; %

cu_u=[]; %

else %

cu=cu; %

cu_l=cu_l; %

cu_u=cu_u; %

end %
if r==0, %

cx=[]; %

cx_l=[]; %

cx_u=[]; %

else %

cx=cx; %

cx_l=cx_l; %

cx_u=cx_u; %

end %

pm=n*p-sum(ps); %

in1=2*nv*n*N+N*m+nv*rr*N+1; %#of all variables including mu. %

in2=2*nv*n*N+N*m; %# of all variables without mu. %

in3=2*nv*n*N; %# of all variables without mu and u. %

in4=2*n*p*N+4*n*N+2*rr*N-pm*(N+1); %# of nonlin constraints %

in41=2*n*p*N+4*n*N+2*r*N-pm*(N+1); %# of nonlin c without cu. %

in5=2*n*p*N+4*n*N-pm*(N+1); %# of nonlin c without CX and cu. %

%

y=1*max(eye(in1))’; %

%

A =[]; %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 2. Form upper and lower bound. %

BB=absmax; %This is the abs value limit. %

l_box = -BB*(max(eye(in1))’); %

%

if m>0, %

for i=1:m, %

for j=1:N, %

ij=j+(i-1)*N; %
cul(ij,1)=cu_l(i); %

end %

end %

l_box(in3+1:in2,1) = cul; %

end %

l_box(in1,1) = t_l; %

%

l_A = []; %

%

l_nonlin(1:(2*n*p-2*pm),1) = [x0_l;xf_l]; %
l_nonlin(in4,1) = 0; %

if r>0, %

for i=1:r, %

for j=1:2*N, %

ij=j+(i-1)*2*N; %

cxl(ij,1)=cx_l(i); %

end %

end %

l_nonlin(in5+1:in41,1) = cxl;%double check! %

end %

%************************************************%

u_box = BB*(max(eye(in1))’); %

%

if m>0, %

for i=1:m, %

for j=1:N, %

ij=j+(i-1)*N; %

cuu(ij,1)=cu_u(i); %

end %

end %

u_box(in3+1:in2,1) = cuu; %

end %

u_box(in1,1) = t_u; %

%

u_A = []; %

%

u_nonlin(1:(2*n*p-2*pm),1) = [x0_u;xf_u]; %

u_nonlin(in4,1) = 0; %

if r>0, %

for i=1:r, %

for j=1:2*N, %
ij=j+(i-1)*2*N; %

cxu(ij,1)=cx_u(i); %

end %

end %

u_nonlin(in5+1:in41,1) = cxu;%double check! %

end %

l=[l_box; l_A;l_nonlin]; %

u=[u_box; u_A;u_nonlin]; %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

tic
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 3. Call NPSOL. %

funobj = ’funobj’; %

funcon = ’funcon’; %

[y,f,g,c,cJac,inform,lambda,iter,istate]= npsol(A,l,u,y,funobj,...%

funcon,500,3); %

result=y; %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

toc

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 4. Plot results. %

result1 %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%************************End of main.m*****************************
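Before calling NPSOL, main.m stacks the variable box bounds, the (here empty) linear-constraint bounds, and the nonlinear-constraint bounds into single vectors l = [l_box; l_A; l_nonlin] and u = [u_box; u_A; u_nonlin]. A numpy sketch of that assembly (all sizes and the absmax-style limit are invented):

```python
import numpy as np

# One bound vector covering variables, linear constraints, and nonlinear
# constraints, in that order, as main.m builds for NPSOL. Sizes illustrative.
BB = 10.0                          # absmax-style box limit
l_box = -BB * np.ones(5)           # variable lower bounds
u_box = BB * np.ones(5)            # variable upper bounds
l_A = np.empty(0)                  # no linear constraints (l_A = [])
u_A = np.empty(0)
l_nonlin = np.zeros(3)             # equalities: lower == upper == 0
u_nonlin = np.zeros(3)

l = np.concatenate([l_box, l_A, l_nonlin])
u = np.concatenate([u_box, u_A, u_nonlin])
print(l.shape, u.shape)            # -> (8,) (8,)
```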

E.4.11 main-guess.m

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% This M-file, main_guess.m, is the main program that the user %

%runs to obtain the optimal solutions. The optimal solutions %

%are obtained using the nonlinear programming package NPSOL. %

%INSTRUCTIONS FOR USING THIS CODE: %

% a) Modify the M-file named user.m, following closely %

% the instructions embedded inside it. %

% b) Next, prepare all the symbolic code for %

% the constraints and objective functions by typing %

% the command "pre" in the Matlab command window; disregard %

% any warnings that appear on the screen. %

% c) Finally, run the code by typing %

% the command "main" in the Matlab command window. %

% %
%THIS CODE IS DEVELOPED BY: %

% Captain Tawiwat Veeraklaew %

% Department of Mechanical Engineering %

% Chulachomklao Royal Military Academy %

% Nakhon-Nayok, 26001 THAILAND %

% E-mail: [email protected] %

% [email protected] %

% [email protected] %

% or contact Dr. Sunil K. Agrawal at University of Delaware %

% E-mail: [email protected] %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

clear all

close all

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 1. Obtaining all inputs from user.m, initial guesses, and %

% # of variables. %

user %

nv1=(2*n*p*N+4*n*N+2*r*N)/((2*n+r)*N); %

nv2=floor((2*n*p*N+4*n*N+2*r*N)/((2*n+r)*N)); %

if nv1==nv2, %

nv=nv1; %

else %

nv=nv2+1; %

end %

if m==0, %

cu=[]; %

cu_l=[]; %

cu_u=[]; %

else %

cu=cu; %

cu_l=cu_l; %

cu_u=cu_u; %

end %

if r==0, %

cx=[]; %

cx_l=[]; %

cx_u=[]; %

else %

cx=cx; %

cx_l=cx_l; %
cx_u=cx_u; %

end %

pm=n*p-sum(ps); %

in1=2*nv*n*N+N*m+nv*rr*N; %#of all variables including mu. %

in2=2*nv*n*N+N*m; %# of all variables without mu. %

in3=2*nv*n*N; %# of all variables without mu and u. %

in4=2*n*p*N+4*n*N+2*rr*N-pm*(N+1); %# of nonlin constraints %

in41=2*n*p*N+4*n*N+2*r*N-pm*(N+1); %# of nonlin c without cu. %

in5=2*n*p*N+4*n*N-pm*(N+1); %# of nonlin c without CX and cu. %

guess %
y=yc; %

d1=size(yc); %

d2=d1(1); %

in5=in1-d2; %

y(d2+1:in1,1)=1.5*max(eye(in5))’; %

A =[]; %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 2. Form upper and lower bound. %

BB=absmax; %This is the abs value limit. %

l_box = -BB*(max(eye(in1))’); %

%

if m>0, %

for i=1:m, %

for j=1:N, %

ij=j+(i-1)*N; %

cul(ij,1)=cu_l(i); %

end %

end %

l_box(in3+1:in2,1) = cul; %

end %

%

l_A = []; %

%

l_nonlin(1:(2*n*p-2*pm),1) = [x0_l;xf_l]; %

l_nonlin(in4,1) = 0; %

if r>0, %

for i=1:r, %

for j=1:2*N, %

ij=j+(i-1)*2*N; %
cxl(ij,1)=cx_l(i); %

end %

end %

l_nonlin(in5+1:in41,1) = cxl;%double check! %

end %

%************************************************%

u_box = BB*(max(eye(in1))’); %

%

if m>0, %

for i=1:m, %
for j=1:N, %

ij=j+(i-1)*N; %

cuu(ij,1)=cu_u(i); %

end %

end %

u_box(in3+1:in2,1) = cuu; %

end %

%

u_A = []; %

%

u_nonlin(1:(2*n*p-2*pm),1) = [x0_u;xf_u]; %

u_nonlin(in4,1) = 0; %

if r>0, %

for i=1:r, %

for j=1:2*N, %

ij=j+(i-1)*2*N; %

cxu(ij,1)=cx_u(i); %

end %

end %

u_nonlin(in5+1:in41,1) = cxu;%double check! %

end %

l=[l_box; l_A;l_nonlin]; %

u=[u_box; u_A;u_nonlin]; %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

tic

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 3. Call NPSOL. %

funobj = ’funobj’; %

funcon = ’funcon’; %

[y,f,g,c,cJac,inform,lambda,iter,istate]= npsol(A,l,u,y,funobj,...%

funcon,500,3); %result=y; %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

toc

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 4. Plot results. %

result1 %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%**********************End of main_guess.m*************************

E.4.12 necc.m

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% This file, called necc.m, provides all the necessary %

%conditions before using NPSOL. %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%******************************************************************

% 1. Define Extended Hamiltonian function. %

nv1=(2*n*p*N+4*n*N+2*r*N+2*m*N)/((2*n+r+ru)*N); %

nv2=floor((2*n*p*N+4*n*N+2*r*N+2*m*N)/((2*n+r+ru)*N)); %
if nv1==nv2, %

nv=nv1; %

else %

nv=nv2+1; %

end %

in1=2*nv*n*N+N*m+nv*rr*N+1; %

%#of all variables including mu. %

nnn=num2str(in1); %

%

H=sym([’y’nnn]); %

for i=1:n, %

nn1=num2str(i); %

nn2=[’v’ nn1 ’0’]; %

nn3=eval([’fs(’ nn1 ’)’]); %

H1 = nn2*nn3; %

H=H+H1; %

end %

for i=1:r, %

nn1=num2str(i); %

nn2=[’mu’ nn1 ’0’]; %
nn3=eval([’cx(’ nn1 ’,1)’]); %

H1=nn2*nn3; %

H=H+H1; %

end %

%******************************************************************

%******************************************************

% 2. This step will provide the costate equations. %

% format>>> g(1)=(), ..., g(n)=() %

gs=sym([zeros(n,1)]); %
dd=sym(’0’); %

ddd=sym(’0’); %

dd1=sym(’0’); %

for i=1:n, %

for j=1:(p), %

jj=j-1; %

nn1=num2str(i); %

nn2=num2str(jj); %

%

d1=diff(H,[’x’nn1 nn2]); %

if j==1, %

dd1=d1; %

end %end if %

if j>1, %

%**********Chain Rule************************ %

for k1=1:n, %

for k2=1:p %

kk2=k2-1; %

nn3=num2str(k1); %

nn4=num2str(kk2); %

kk=kk2+jj; %

nn5=num2str(kk); %

dd=diff(d1,[’x’nn3 nn4])*[’x’nn3 nn5]; %

ddd=diff(d1,[’v’nn3 nn4])*[’v’nn3 nn5]; %

dddd=diff(d1,[’mu’nn3 nn4])*[’mu’nn3 nn5];%

da=dd+ddd+dddd; %

dd1=dd1+((-1)^(j+1))*(da); %

end %

end %

%******************************************** %

end %end if %%

end %

gs(i)=simple(dd1); %

end %

%

%******************************************************

%********************************************************

% 3. This step will provide the continuity conditions. %

% Format %
for ii=1:p, %

jp=ii+1; %

cs=sym([zeros(n,1)]); %

dd=sym(’0’); %

ddd=sym(’0’); %

dd1=sym(’0’); %

for i=1:n, %

for j=jp:(p), %

jj=j-1; %

nn1=num2str(i); %

nn2=num2str(jj); %

%

d1=diff(H,[’x’nn1 nn2]); %

if j==jp, %

dd1=d1; %

end %end if %

if j>jp, %

%**********Chain Rule************************ %

for k1=1:n, %

for k2=1:p %

kk2=k2-1; %

nn3=num2str(k1); %

nn4=num2str(kk2); %

kk=kk2+jj; %

nn5=num2str(kk); %

dd=diff(d1,[’x’nn3 nn4])*[’x’nn3 nn5]; %

ddd=diff(d1,[’v’nn3 nn4])*[’v’nn3 nn5]; %

dddd=diff(d1,[’mu’nn3 nn4])*[’mu’nn3 nn5]; %

da=dd+ddd+dddd; %

dd1=dd1+((-1)^(j+1))*(da); %

end %
end %

%******************************************** %

end %end if %

%

end %

cs(i)=simple(dd1); %

end %

cc((ii-1)*n+1:ii*n,1)=cs; %

end %

%********************************************************
%******************************************************

% 4. This step will provide the optimality equations. %

% format>>> Hu(1)=(), ..., Hu(m)=() %

Hu=sym([zeros(m,1)]); %

for i=1:m, %

nn1=num2str(i); %

d1=diff(H,[’u’nn1]); %

Hu(i)=simple(d1); %

end %

%

%******************************************************

%*******************end of necc.m file*******************
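necc.m builds the extended Hamiltonian H = (cost term) + Σ v_i0·f_i + Σ mu_j0·c_j in step 1 and differentiates it with respect to each control in step 4 to get the optimality conditions Hu = ∂H/∂u. A sympy sketch of both steps for a one-state, one-control toy system (the dynamics, path constraint, and running cost below are invented for illustration):

```python
import sympy as sp

# Toy extended Hamiltonian: state x10, costate v10, multiplier mu10,
# control u1. Dynamics f = u1 - x10 and constraint c = x10 - 1 are
# assumptions for this sketch, as is the running cost u1**2/2.
x, v, mu, u = sp.symbols('x10 v10 mu10 u1')
f = u - x
c = x - 1
H = u**2 / 2 + v * f + mu * c     # H = cost + v10*f + mu10*c, as in step 1
Hu = sp.diff(H, u)                # optimality condition of step 4
print(Hu)                         # -> u1 + v10
```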

E.4.13 Npsol.m

Same

E.4.14 param.m

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 1. Set how many parameters we need %

nv1=(2*n*p*N+4*n*N+2*r*N+2*m*N)/((2*n+r+ru)*N); %

nv2=floor((2*n*p*N+4*n*N+2*r*N+2*m*N)/((2*n+r+ru)*N)); %

if nv1==nv2, %

nv=nv1; %

else %

nv=nv2+1; %

end %

%******Let’s calculate for state and costate variables first.%

X=sym([zeros(n*N,1)]); %

V=sym([zeros(n*N,1)]); %

d=sym([zeros(n*N,1)]); %
dd=sym([zeros(n*N,1)]); %

for i =1:n, %

for j=1:N, %

for k=1:nv, %

kk=k-1; %

nn4=num2str(kk); %

kj=(i-1)*nv*N+k+(j-1)*nv; %

kv=(i-1)*nv*N+n*nv*N+k+(j-1)*nv; %

nn5=num2str(kj); %

nn6=num2str(kv); %
dx=sym([’y’nn5 ])*sym([’t^’nn4]); %

dv=sym([’y’nn6 ])*sym([’t^’nn4]); %

d((i-1)*N+j)=d((i-1)*N+j)+dx; %

dd((i-1)*N+j)=dd((i-1)*N+j)+dv; %

end %

end %

end %

X=d; %

V=dd; %

%

% X and V formats are like the following: %

% X(1:N,1)=x1 interval 1 to N. %

% X(N+1:2N,1)=x2 interval 1:N. %

% ... %

% X((i-1)N+1:iN)=xi interval 1 to N. %

% Also, the same manner applies to V (Lambda). %

%*******Now, calculate for control variables.*********************%

U=sym([zeros(m*N,1)]); %

d=sym([zeros(m*N,1)]); %

for i =1:m, %

for j=1:N, %

ku=2*n*nv*N+j+(i-1)*N; %

nn1=num2str(ku); %

du=sym([’y’nn1 ]); %

d((i-1)*N+j)=d((i-1)*N+j)+du; %

end %

end %

U=d; %

% U formats are like the following: %

% U(1:N,1)=U1 interval 1 to N. %

% U(N+1:2N,1)=U2 interval 1:N. %
% ... %

% U((i-1)N+1:iN)=Ui interval 1 to N. %

%*******And now, calculate for mu.********************************%

rr=ru+r; %

if rr>0, %

MU=sym([zeros(rr*N,1)]); %

d=sym([zeros(rr*N,1)]); %

for i =1:rr, %

for j=1:N, %

for k=1:nv, %
kk=k-1; %

nn4=num2str(kk); %

kj=(i-1)*nv*N+2*n*nv*N+m*N+k+(j-1)*nv; %

nn5=num2str(kj); %

dx=sym([’y’nn5 ])*sym([’t^’nn4]); %

d((i-1)*N+j)=d((i-1)*N+j)+dx; %

end %

end %

end %

MU=d; %

end %

%

% MU formats are like the following: %

% MU(1:N,1)=mu1 interval 1 to N. %

% MU(N+1:2N,1)=mu2 interval 1:N. %

% ... %

% MU((i-1)N+1:iN)=mui interval 1 to N. %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 2. Finding derivatives of X and V upto p_th derivative. %

dh(:,1)=X; %

for i=2:p+1, %

dd=dh(:,(i-1)); %

dh(:,i)=diff(dd,’t’); %

end %

Xdp=dh; %

dh(:,1)=V; %

for i=2:p+1, %

dd=dh(:,(i-1)); %

dh(:,i)=diff(dd,’t’); %
end %

Vdp=dh; %

if rr>0, %

ddh(:,1)=MU; %

for i=2:p+1, %

ddd=ddh(:,(i-1)); %

ddh(:,i)=diff(ddd,’t’); %

end %

MUdp=ddh; %

end %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 3. Set all variables to match all the necessary conditions. %

% x10,...,x1p, x20,...,x2p, ..., xn0,...,xnp %

% v10,...,v1p, v20,...,v2p, ..., vn0,...,vnp %

% u1, ..., um %

% mu10, ..., murp %

for i=1:n, %

for j=1:p+1, %

nn1=num2str(i); %

jj=j-1; %

nn2=num2str(jj); %

nn3=num2str(j); %

d1=(i-1)*N+1; %

nn4=num2str(d1); %

d2=i*N; %

nn5=num2str(d2); %

nn6=eval([’Xdp(’nn4 ’:’ nn5 ’,’nn3 ’)’]); %

nn7=eval([’Vdp(’nn4 ’:’ nn5 ’,’nn3 ’)’]); %

%****Good for future works**** %

eval([’x’ nn1 nn2 ’ = nn6;’]); %

eval([’v’ nn1 nn2 ’ = nn7;’]); %

%***************************** %

end %

end %

for i=1:m, %

nn1=num2str(i); %

d1=(i-1)*N+1; %

d2=i*N; %

nn2=num2str(d1); %
nn3=num2str(d2); %

nn4=eval([’U(’nn2 ’:’ nn3 ’)’]); %

%****Good for future works**** %

eval([’u’ nn1 ’ = nn4;’]); %

%***************************** %

end %

if rr>0, %

for i=1:rr, %

for j=1:p+1, %

nn1=num2str(i); %jj=j-1; %

nn2=num2str(jj); %

nn3=num2str(j); %

d1=(i-1)*N+1; %

nn4=num2str(d1); %

d2=i*N; %

nn5=num2str(d2); %

nn6=eval([’MUdp(’nn4 ’:’ nn5 ’,’nn3 ’)’]); %

%****Good for future works**** %

eval([’mu’ nn1 nn2 ’ = nn6;’]); %

%***************************** %

end %

end %

end %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%


%***************************end of param.m*************************
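Step 2 of param.m above builds a table dh whose i-th column holds the (i−1)th time derivative of X (and likewise for V and MU). The looping pattern can be sketched in Python; this is a hypothetical illustration that differentiates polynomial coefficient lists instead of MATLAB symbolic expressions:

```python
def poly_derivative(coeffs):
    # coeffs[k] is the coefficient of t**k; differentiating drops
    # the constant term and scales each remaining coefficient by k
    d = [k * c for k, c in enumerate(coeffs)]
    return d[1:] or [0]

def derivative_table(coeffs, p):
    """Rows 0..p hold the 0th..p-th derivatives, mirroring the
    dh(:,i)=diff(dh(:,i-1),'t') loop in param.m (symbolic there,
    polynomial coefficients here)."""
    table = [list(coeffs)]
    for _ in range(p):
        table.append(poly_derivative(table[-1]))
    return table

# x(t) = t**3  ->  coefficient list [0, 0, 0, 1]
Xdp = derivative_table([0, 0, 0, 1], p=2)
# rows: t**3, 3t**2, 6t
```

The MATLAB code keeps one column per derivative order; the sketch keeps one row, but the recurrence (each level differentiates the previous one) is the same.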

E.4.15 pre.m

Same

E.4.16 result1.m

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% This M-file, named result1.m, translates the solution data %

% from the nonlinear programming code into figures. %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

close all

if inform<2,

fprintf(’Congratulations! Your solutions were found.\n’)

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 1. calculate ti except t0 and tf. %

nn1=num2str(in1); %

eval([’tf’ ’ = y(’nn1 ’);’]); %

for i=1:N-1, %

nn1=num2str(i); %

nn2=i*(tf-t0)/N; %

eval([’t’ nn1 ’ = nn2;’]); %

end %

nn0=num2str(N); %

eval([’t’ nn0 ’ = tf;’]); %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%**********************************************

cl=3; % The number of nodes in each interval.

%**********************************************

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 2.Find T1,..., T_N-1. %

for i=1:N, %

nn1=num2str(i-1); %

nn2=num2str(i); %

nn3=eval(sym([’t’nn1])); %

nn4=eval(sym([’t’nn2])); %

eval([’T’nn1 ’=nn3:(nn4-nn3)/cl:nn4;’]); %

end %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 3. Find x101,..., xnp_N-1 and plot them. %

param %

for i=1:in1, %

nn1=num2str(i); %

eval([’y’nn1 ’ = y(’nn1 ’);’]); %

end %

for i=1:N, % intervals %

nn1=num2str(i); %

nnn=num2str(i-1); %

for j=1:n, % state %

nn2=num2str(j); %

for k1=1:p, %derivatives upto (p)th derivative %

nn3=num2str(k1-1); %

for k2= 1:cl+1, % node in the interval %

nn4=num2str(k2); %

for k3=1:p+2, % variables in each state %

nn5=num2str(k3); %

d=eval(sym([’x’nn2 nn3 ’(’nn1 ’)’])); %

t=eval(sym([’T’nnn ’(’nn4 ’)’])); %

dd=eval(d); %

eval([’x’nn2 nn3 nnn ’(1,’nn4 ’)=dd;’]); %

end %

end %

figure(k1+(j-1)*p) %

hold on %

eval([’plot(T’nnn ’,x’nn2 nn3 nnn ’)’]) %

end %

end %

end %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 4. Find u11,..., um_N-1 and plot them. %

for i=1:N, % intervals %

nn1=num2str(i); %

nnn=num2str(i-1); %

for j=1:m, % state %

nn2=num2str(j); %

for k2= 1:cl+1, % node in the interval %

nn4=num2str(k2); %


d=eval(sym([’u’nn2 ’(’nn1 ’)’])); %

t=eval(sym([’T’nnn ’(’nn4 ’)’])); %

dd=eval(d); %

eval([’u’nn2 nnn ’(1,’nn4 ’)=dd;’]); %

end %

figure(n*p+j) %

hold on %

eval([’plot(T’nnn ’,u’nn2 nnn ’)’]) %

end %

end %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

elseif inform==6,

fprintf(’A local minimum may exist! Please check the reasons.\n’)

fprintf(’If it is not true, try to change the limits by:\n’)

fprintf(’typing max(y) and comparing it with BB in main.m.\n’)

fprintf(’However, to see the plots now, type >>result2.\n’)

else

fprintf(’No solution! Please type "inform" to check the reason\n’)

fprintf(’and refer back to the NPSOL manual.\n’)

fprintf(’However, to see the plots now, type >>result2.\n’)

end
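Steps 1 and 2 of result1.m place N uniform breakpoints t1,...,tN between t0 and tf and then lay cl+1 plot nodes inside each interval. A minimal Python sketch of that grid construction (the function name is hypothetical):

```python
def interval_nodes(t0, tf, N, cl):
    """Uniform breakpoints t_0..t_N, then cl+1 nodes per interval,
    mirroring T_i = t_i : (t_{i+1}-t_i)/cl : t_{i+1} in result1.m."""
    breaks = [t0 + i * (tf - t0) / N for i in range(N + 1)]
    T = [[a + k * (b - a) / cl for k in range(cl + 1)]
         for a, b in zip(breaks[:-1], breaks[1:])]
    return breaks, T

breaks, T = interval_nodes(0.0, 2.0, N=4, cl=3)
# breaks == [0.0, 0.5, 1.0, 1.5, 2.0]; each T[i] holds 4 nodes
```

Note that adjacent intervals share their endpoint node, just as the MATLAB colon ranges do.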

E.4.17 result2.m

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% This M-file, named result2.m, translates the solution data %

% from the nonlinear programming code into figures. %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

close all

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 1. calculate ti except t0 and tf. %

nn1=num2str(in1); %

eval([’tf’ ’ = y(’nn1 ’);’]); %

for i=1:N-1, %

nn1=num2str(i); %

nn2=i*(tf-t0)/N; %

eval([’t’ nn1 ’ = nn2;’]); %

end %

nn0=num2str(N); %

eval([’t’ nn0 ’ = tf;’]); %


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%**********************************************

cl=3; % The number of nodes in each interval.

%**********************************************

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 2.Find T1,..., T_N-1. %

for i=1:N, %

nn1=num2str(i-1); %

nn2=num2str(i); %

nn3=eval(sym([’t’nn1])); %

nn4=eval(sym([’t’nn2])); %

eval([’T’nn1 ’=nn3:(nn4-nn3)/cl:nn4;’]); %

end %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 3. Find x101,..., xnp_N-1 and plot them. %

param %

for i=1:in1, %

nn1=num2str(i); %

eval([’y’nn1 ’ = y(’nn1 ’);’]); %

end %

for i=1:N, % intervals %

nn1=num2str(i); %

nnn=num2str(i-1); %

for j=1:n, % state %

nn2=num2str(j); %

for k1=1:p, %derivatives upto (p)th derivative %

nn3=num2str(k1-1); %

for k2= 1:cl+1, % node in the interval %

nn4=num2str(k2); %

for k3=1:p+2, % variables in each state %

nn5=num2str(k3); %

d=eval(sym([’x’nn2 nn3 ’(’nn1 ’)’])); %

t=eval(sym([’T’nnn ’(’nn4 ’)’])); %

dd=eval(d); %

eval([’x’nn2 nn3 nnn ’(1,’nn4 ’)=dd;’]); %

end %

end %


figure(k1+(j-1)*p) %

hold on %

eval([’plot(T’nnn ’,x’nn2 nn3 nnn ’)’]) %

end %

end %

end %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 4. Find u11,..., um_N-1 and plot them. %

for i=1:N, % intervals %

nn1=num2str(i); %

nnn=num2str(i-1); %

for j=1:m, % state %

nn2=num2str(j); %

for k2= 1:cl+1, % node in the interval %

nn4=num2str(k2); %

d=eval(sym([’u’nn2 ’(’nn1 ’)’])); %

t=eval(sym([’T’nnn ’(’nn4 ’)’])); %

dd=eval(d); %

eval([’u’nn2 nnn ’(1,’nn4 ’)=dd;’]); %

end %

figure(n*p+j) %

hold on %

eval([’plot(T’nnn ’,u’nn2 nnn ’)’]) %

end %

end %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
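Steps 3 and 4 above evaluate each interval's expression at that interval's nodes before plotting. The sampling pattern can be sketched in Python, with plain callables standing in for the symbolic state and control expressions (an illustrative translation, not the thesis code):

```python
def sample_piecewise(exprs, T):
    """exprs[i] is a callable x(t) valid on interval i; T[i] is that
    interval's node list. Mirrors the d=eval(...); t=T_i(k); dd=eval(d)
    loops in result1.m/result2.m."""
    return [[f(t) for t in Ti] for f, Ti in zip(exprs, T)]

# two intervals of x(t) = t**2 on [0,1] and [1,2], 3 nodes each
vals = sample_piecewise([lambda t: t ** 2, lambda t: t ** 2],
                        [[0.0, 0.5, 1.0], [1.0, 1.5, 2.0]])
# vals[0] == [0.0, 0.25, 1.0]
```

In the MATLAB code each interval has its own symbolic expression indexed by nn1, so the per-interval pairing of expression and node list is the essential structure.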

E.4.18 user.m

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%This MATLAB file is called user.m. %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%

% 1. Number of state variables...format>>> n=?; %

n=1; %

%

% 2. Number of control variables...format>>> m=?; %

m=1; %

%

% 3. Number of the highest derivative of state variable... %


%format>>> p=?; %

p=2; %

ps=[2]; %

%

% 4. Number of the intervals***...format>>> N=?; %

% This setting lets users control the accuracy of the solutions. %

% A larger N usually (but not always*) gives more accurate %

% solutions. %

N=6; %

%

% 5. Initial time...format>>> t0=?; %

t0=0; %

%

% 6 Final time...format>>> tf=?. %

% where N=N in section 4. %

t_l=0; %

t_u=2; %

%

% 7. This place is where the state equations have to be defined as%

%format below: %

% Format>>> fs(1,1)=sym(’?’);, ..., fs(n,1)=sym(’?’); %

fs(1,1)=sym(’x12-u1’); %

%

% 8. Also, at this point, users need to provide all constraints. %

% Format>>> cu(1,1)=sym(’?’);, ..., cu(m,1)=sym(’?’); where m is %

% the number of the control inputs. cu must be defined for all %

% control variables.******* %

% Format>>> cx(1,1)=sym(’?’);, ..., cx(r,1)=sym(’?’); where r is %

% the number of the constraints %

% Detail: %

% cu means constraints contain only control inputs. %

% cx means constraints contain only state or both control and sta-%

% te variables. %

% 9.1 Furthermore, for the lower and upper bounds of the inequality%

% constraints: if one has equality constraints, the lower %

% and upper bounds must be defined as the same values. %

% Format>>> cu_l(1,1)=?;, ..., cu_l(m,1)=?; %

% cu_u(1,1)=?;, ..., cu_u(m,1)=?; %

% and so on where l and u mean upper/lower bound respectively.%

%*****************************************************************%

ru=1; %


cu(1,1)=sym(’u1’); %

%cu(2,1)=sym(’u2’); %

%

cu_l(1,1)=-4; %

cu_u(1,1)= 4; %

%cu_l(2,1)=-4; %

%cu_u(2,1)= 4; %

%*****************************************************************%

%

r=0; % remember this is a number of the constraint of x and u. %

rr=ru+r; %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%cx(1,1)=sym(’x12’); %

%cx_l(1,1)=-4; %

%cx_u(1,1)=4; %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%

% 9. Boundary Conditions %

% Format>>> x0(i)=?; and xf(i)=?; %

% where x0(1) =x10dot, x0(2) =x11dot, ..., x0(p) =x1pdot %

% x0(p+1)=x20dot, x0(p+2)=x21dot, ..., x0(2p)=x2pdot %

% ... %

% x0((n-1)p+1)=xn0dot, x0((n-1)p+2)=xn1dot, ..., x0(np)=xnpdot%

% and repeat with the same logic for xf(1),...,xf(np). %

%

x0(1,1)=sym(’x10’); %

x0(2,1)=sym(’x11’); %

%Bound of x0 %

x0_l(1,1)=0; %

x0_l(2,1)=0; %

x0_u(1,1)=0; %

x0_u(2,1)=0; %

%

xf(1,1)=sym(’x10’); %

xf(2,1)=sym(’x11’); %

%Bound of xf %

xf_l(1,1)=1; %

xf_l(2,1)=0; %

xf_u(1,1)=1; %

xf_u(2,1)=0; %

%
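The boundary-condition layout documented in step 9 flattens the derivatives of each state into single vectors x0 and xf. A small Python sketch of that 1-based indexing (the helper name is hypothetical):

```python
def bc_index(i, j, p):
    """1-based position of the j-th derivative (j = 0..p-1) of state i
    in the x0/xf vectors, per the layout in user.m step 9:
    x0(1)=x10, ..., x0((n-1)p+1)=xn0, ..., x0(np)=xn(p-1)."""
    return (i - 1) * p + j + 1

# with p=2: state 1 occupies slots 1-2, state 2 occupies slots 3-4
```

So with n=2 states and p=2, the 0th derivative of state 2 sits at x0(3), matching the x0((n-1)p+1)=xn0dot line in the comments above.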


Appendix F

GENERAL CODE INSTRUCTION

In this Appendix, instructions on how to use the general codes in the

previous appendix are described. First, one has to decide which option is

suitable for the problem at hand. Next, one obtains all the M-files belonging

to that classification, as shown in Appendix E. Once all the M-files have been

copied or downloaded, only one file needs to be modified: “user.m”. The

instructions to follow are embedded inside this file. After the “user.m” file

has been modified, one must return to the MATLAB command window in order to

prepare all the symbolic code and run the nonlinear programming code. The

command “execute” must be typed in the MATLAB command window as

windows as

>>execute

This command needs to be typed only once as long as none of the symbolic

definitions in the “user.m” file are changed; otherwise it must be retyped.

This step may take up to several hours to complete, depending on the system

and the speed of the computer. At this point, the code will construct the

following M-files, which contain all the necessary symbolic code:

r_c.m

r_cj.m

r_o.m

r_o1.m

r_o2.m


r_oj1.m

r_oj2.m

The execute command will also automatically run the nonlinear programming code

named NPSOL, a commercial package, in order to obtain the optimal trajectories.

However, if the message on the screen shows “No Solution found!” or “The local

minimum might exist!”, one can follow the instructions shown on the screen, or

try to run the nonlinear programming code again with better guesses provided

inside the code, which requires the following command to be typed:

>>main_guess

If the message on the screen still shows “The local minimum may exist!”, one

must check the following to make sure that the code is providing the right

solutions. First, plot the solutions by typing the command “result2”, and check

whether all the boundary conditions are satisfied, including the continuity of

those state variables that must be continuous. If even one point fails these

checks, the solutions cannot be considered correct. The way to obtain the

optimal solutions in this event is to type the command “mainr”, which repeats

the procedure using the previous output as the initial guess for the present

step. This command may need to be run several times before the right optimal

solutions are achieved. If the message “No Solution found!” keeps appearing,

one might need to change the boundary conditions, final time, initial time, or

the lower/upper bounds of the constraints, since the problem as posed may admit

no feasible optimal trajectories.

Once the optimal trajectories have been found, the code plots all the state

and control trajectories according to the following logic. As stated in the

problem statement, the optimal control problem has n state and m control input

variables. Therefore, the first n ∗ p figures show the state variables and

their derivatives up to the ( p − 1)th derivative, where p is the order of the

highest derivative in the problem.
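Assuming the figure(k1+(j-1)*p) and figure(n*p+j) calls in result1.m and result2.m, the figure numbering just described can be sketched as:

```python
def state_figure(j, k1, p):
    """Figure number of the (k1-1)th derivative of state j,
    as in figure(k1+(j-1)*p) in result1.m/result2.m."""
    return k1 + (j - 1) * p

def control_figure(j, n, p):
    """Figure number of control u_j, as in figure(n*p+j)."""
    return n * p + j

# n=2 states, p=2: state figures occupy 1..4, controls start at 5
```

That is, each state gets p consecutive figures for its derivatives, and the m control figures follow after figure n*p.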
