Automation for Robotics

Luc Jaulin

www.iste.co.uk
A discipline in full development, propelled by the rise of autonomous mobile robotics – notably drones – automation has the objective of designing controllers capable of working within an existing dynamic system (automobile, airplane, economic system, etc.). The resulting controlled system is constructed by looping a physical system, actuated and equipped with sensors, with smart electronics. While the initial system only obeyed the laws of physics, the evolution of the looped system also obeys an IT program embedded in the control electronics.
In order to enable a better understanding of the key concepts of automation, this book develops the fundamental aspects of the field while also proposing numerous concrete exercises and their solutions. The theoretical approach that it presents fundamentally uses the state space and makes it possible to process general and complex systems in a simple way, involving several switches and sensors of different types. This approach requires the use of developed theoretical tools such as linear algebra, analysis and physics, generally taught in preparatory classes for specialist engineering courses.
Luc Jaulin is Professor in robotics at ENSTA-Bretagne in France. He conducts research at the Lab-STICC in the field of underwater robotics and sailing robots using set methods.
Automation for Robotics
Luc Jaulin
CONTROL, SYSTEMS AND INDUSTRIAL ENGINEERING SERIES
Automation for Robotics
Series Editor Hisham Abou Kandil
Automation for Robotics
Luc Jaulin
First published 2015 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc.
Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address:
ISTE Ltd                        John Wiley & Sons, Inc.
27-37 St George's Road          111 River Street
London SW19 4EU                 Hoboken, NJ 07030
UK                              USA
www.iste.co.uk www.wiley.com
© ISTE Ltd 2015 The rights of Luc Jaulin to be identified as the author of this work have been asserted by him in accordance with the Copyright, Designs and Patents Act 1988.
Library of Congress Control Number: 2014955868

British Library Cataloguing-in-Publication Data
A CIP record for this book is available from the British Library
ISBN 978-1-84821-798-0
Contents
INTRODUCTION . . . . . . . . . . . . . . . . . . . . . . . . vii
CHAPTER 1. MODELING . . . . . . . . . . . . . . . . . . . 1
1.1. Linear systems . . . . . . . . . . . . . . . . . . . . . 1
1.2. Mechanical systems . . . . . . . . . . . . . . . . . . 2
1.3. Servomotors . . . . . . . . . . . . . . . . . . . . . . . 4
1.4. Exercises . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.5. Solutions . . . . . . . . . . . . . . . . . . . . . . . . . 21
CHAPTER 2. SIMULATION . . . . . . . . . . . . . . . . . 47
2.1. Concept of vector field . . . . . . . . . . . . . . . . . 47
2.2. Graphical representation . . . . . . . . . . . . . . . 49
2.2.1. Patterns . . . . . . . . . . . . . . . . . . . . . . . 50
2.2.2. Rotation matrix . . . . . . . . . . . . . . . . . . 50
2.2.3. Homogeneous coordinates . . . . . . . . . . . . 52
2.3. Simulation . . . . . . . . . . . . . . . . . . . . . . . . 54
2.3.1. Euler’s method . . . . . . . . . . . . . . . . . . . 54
2.3.2. Runge–Kutta method . . . . . . . . . . . . . . . 55
2.3.3. Taylor’s method . . . . . . . . . . . . . . . . . . 56
2.4. Exercises . . . . . . . . . . . . . . . . . . . . . . . . . 56
2.5. Solutions . . . . . . . . . . . . . . . . . . . . . . . . . 67
vi Automation for Robotics
CHAPTER 3. LINEAR SYSTEMS . . . . . . . . . . . . . . 85
3.1. Stability . . . . . . . . . . . . . . . . . . . . . . . . . 85
3.2. Laplace transform . . . . . . . . . . . . . . . . . . . 87
3.2.1. Laplace variable . . . . . . . . . . . . . . . . . . 87
3.2.2. Transfer function . . . . . . . . . . . . . . . . . 88
3.2.3. Laplace transform . . . . . . . . . . . . . . . . . 88
3.2.4. Input–output relation . . . . . . . . . . . . . . . 90
3.3. Relationship between state and transfer representations . . . 90
3.4. Exercises . . . . . . . . . . . . . . . . . . . . . . . . . 92
3.5. Solutions . . . . . . . . . . . . . . . . . . . . . . . . . 103
CHAPTER 4. LINEAR CONTROL . . . . . . . . . . . . . . 127
4.1. Controllability and observability . . . . . . . . . . 128
4.2. State feedback control . . . . . . . . . . . . . . . . 129
4.3. Output feedback control . . . . . . . . . . . . . . . 130
4.4. Summary . . . . . . . . . . . . . . . . . . . . . . . . 133
4.5. Exercises . . . . . . . . . . . . . . . . . . . . . . . . . 134
4.6. Solutions . . . . . . . . . . . . . . . . . . . . . . . . . 150
CHAPTER 5. LINEARIZED CONTROL . . . . . . . . . . 185
5.1. Linearization . . . . . . . . . . . . . . . . . . . . . . 185
5.1.1. Linearization of a function . . . . . . . . . . . . 185
5.1.2. Linearization of a dynamic system . . . . . . . 187
5.1.3. Linearization around an operating point . . . 187
5.2. Stabilization of a nonlinear system . . . . . . . . 188
5.3. Exercises . . . . . . . . . . . . . . . . . . . . . . . . . 191
5.4. Solutions . . . . . . . . . . . . . . . . . . . . . . . . . 207
BIBLIOGRAPHY . . . . . . . . . . . . . . . . . . . . . . . . 235
INDEX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
Introduction
I.1. State representation
Biological, economic and other mechanical systems surrounding us can often be described by a differential equation such as:

ẋ(t) = f(x(t), u(t))
y(t) = g(x(t), u(t))
under the hypothesis that the time t in which the system evolves is continuous [JAU 05]. The vector u(t) is the input (or control) of the system. Its value may be chosen arbitrarily for all t. The vector y(t) is the output of the system and can be measured with a certain degree of accuracy. The vector x(t) is called the state of the system. It represents the memory of the system, in other words the information needed by the system in order to predict its own future, for a known input u(t). The first of the two equations is called the evolution equation. It is a differential equation that enables us to know where the state x(t) is headed knowing its value at the present moment t and the control u(t) that we are currently exerting. The second equation is called the observation equation. It allows us to calculate the output vector y(t), knowing the state and control at time t. Note, however, that, unlike the evolution equation, this equation is not a differential equation as it does not involve the derivatives of the signals. The two equations given above form the state representation of the system.
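The book's accompanying programs are written in MATLAB; as a rough illustration only, the pair of equations above can be simulated with a Python sketch. The system below (a point mass with viscous friction, state x = (position, velocity), input u = applied force) and all numeric values are assumptions chosen for the example, not taken from the book; the integration uses Euler's method, detailed in Chapter 2.

```python
import numpy as np

def f(x, u):
    # evolution equation: where the state is headed
    return np.array([x[1], u - 0.5 * x[1]])

def g(x, u):
    # observation equation: what the sensor delivers (here, the position)
    return x[0]

dt = 0.01
x = np.array([0.0, 0.0])      # initial state: at rest at the origin
u = 1.0                        # constant input force
for k in range(1000):          # simulate 10 s
    x = x + dt * f(x, u)       # Euler step of the evolution equation
y = g(x, u)                    # output after 10 s
```

With this friction coefficient the velocity settles near 2 and the position after 10 s is close to 16, which one can check against the closed-form solution.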
It is sometimes useful to consider a discrete time k, with k ∈ Z, where Z is the set of integers. If, for instance, the universe being considered is a computer, it is possible to consider that the time k is discrete and synchronized to the clock of the microprocessor. Discrete-time systems often respect a recurrence equation such as:

x(k + 1) = f(x(k), u(k))
y(k) = g(x(k), u(k))
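Such a recurrence is simulated by a plain loop. The following Python sketch is purely illustrative (the system, a savings account whose state is its balance with 1% interest and deposits u(k), and all numbers are assumptions, not from the book):

```python
def f(x, u):
    # evolution equation: next state from current state and input
    return 1.01 * x + u

def g(x, u):
    # observation equation: here we simply read the balance
    return x

x = 100.0                 # initial state
for k in range(10):
    y = g(x, 0.0)         # output at time k
    x = f(x, 10.0)        # state at time k + 1, with a deposit u(k) = 10
```

After ten steps the state is 100 · 1.01¹⁰ plus the compounded deposits, a little above 215.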
The first objective of this book is to understand the concept of state representation through numerous exercises. For this, we will consider, in Chapter 1, a large number of varied exercises and show how to reach a state representation. We will then show, in Chapter 2, how to simulate a given system on a computer using its state representation.

The second objective of this book is to propose methods to control the systems described by state equations. In other words, we will attempt to build automatic machines (in which humans are practically not involved, except to give orders, or setpoints), called controllers, capable of domesticating (changing the behavior in a desired direction) the systems being considered. For this, the controller will have to compute the inputs u(t) to be applied to the system from the (more or less noisy) knowledge of the outputs y(t) and from the setpoints w(t) (see Figure I.1).
From the point of view of the user, the system, referred to as a closed-loop system, with input w(t) and output y(t), will have a suitable behavior. We will say that we have controlled the system. With this objective of control, we will, in a first phase, only look at linear systems, in other words when the functions f and g are assumed to be linear. Thus, in the continuous-time case, the state equations of the system are written as:

ẋ(t) = Ax(t) + Bu(t)
y(t) = Cx(t) + Du(t)
and in the discrete-time case, they become:

x(k + 1) = Ax(k) + Bu(k)
y(k) = Cx(k) + Du(k)
Figure I.1. Closed loop concept illustrating the control of a system
The matrices A, B, C, D are called the evolution, control, observation and direct matrices. A detailed analysis of these systems will be performed in Chapter 3. We will then explain, in Chapter 4, how to stabilize these systems. Finally, we will show in Chapter 5 that around certain points, called operating points, nonlinear systems behave like linear systems. It will then be possible to stabilize them using the same methods as those developed for the linear case.
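As an illustration of these four matrices, the sketch below simulates a discrete-time linear system. The example (a double integrator, i.e. a mass driven by a force, sampled at dt = 0.1 s, observed through a position sensor) and its values are assumptions chosen for the illustration, not taken from the book:

```python
import numpy as np

dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])   # evolution matrix
B = np.array([[0.0], [dt]])             # control matrix
C = np.array([[1.0, 0.0]])              # observation matrix (position only)
D = np.array([[0.0]])                   # direct matrix (no feedthrough)

x = np.array([[0.0], [0.0]])            # state: (position, velocity)
u = np.array([[1.0]])                   # constant unit input
for k in range(100):                    # 10 s of simulation
    y = C @ x + D @ u                   # output y(k)
    x = A @ x + B @ u                   # state x(k + 1)
```

Under a constant unit input the velocity grows by dt at each step, reaching 10 after 100 steps, and the position reaches 49.5.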
Finally, this book is accompanied by numerous MATLAB programs available at:
http://www.ensta-bretagne.fr/jaulin/isteauto.html
I.2. Exercises
EXERCISE I.1.– Underwater robot
The underwater robot Saucisse of the Superior National School of Advanced Techniques (SNSAT) Bretagne [JAU 09], whose photo is given in Figure I.2, is a control system. It includes a computer, three propellers, a camera, a compass and a sonar. What do the input vector u, the output vector y, the state vector x and the setpoint w correspond to in this context? Where does the computer come into the control loop?
Figure I.2. Controlled underwater robot
EXERCISE I.2.– Sailing robot
The sailing robot Vaimos (French Research Institute for Exploitation of the Sea (FRIES) and SNSAT Bretagne) in Figure I.3 is also a control system [JAU 12a, JAU 12b]. It is capable of following paths by itself, such as the one drawn in Figure I.3. It has a rudder and a sail adjustable using a sheet. It also has an anemometer on top of the mast, a compass and a Global Positioning System (GPS). Describe what the input vector u, the output vector y, the state vector x and the setpoint w may correspond to.

Figure I.3. Sailing robot Vaimos a) and a path followed by Vaimos b). The zig-zags in the path are due to Vaimos having to tack in order to sail against the wind
I.3. Solutions
Solution to Exercise I.1 (underwater robot)
The input vector u ∈ R3 corresponds to the electric voltages given to the three propellers and the output vector y(t) includes the compass data, the sonar data and the images taken by the camera. The state vector x corresponds to the position, orientation and speeds of the robot. The setpoint w is requested by the supervisor. For instance, if we want to perform a course control, the setpoint w will be the desired speed and course for the robot. The controller is a program executed by the computer.
Solution to Exercise I.2 (sailing robot)
The input vector u ∈ R2 corresponds to the length of the sail sheet δvmax and to the angle of the rudder δg. The output vector y ∈ R4 includes the GPS data m, the ultrasound anemometer (weather vane on top of the mast) ψ and the compass θ. The setpoint w indicates here the segment ab to follow. Figure I.4 illustrates this control loop. A supervisor, not represented on the figure, takes care of sequencing the segments to follow in such a way that the robot follows the desired path (here 12 segments forming a square box followed by a return to port).
Figure I.4. Control loop of the sailing robot
1
Modeling
We will call modeling the step that consists of finding a more or less accurate state representation of the system we are looking at. In general, constant parameters appear in the state equations (such as the mass or the inertial moment of a body, the coefficient of viscous friction, the capacitance of a capacitor, etc.). In these cases, an identification step may prove to be necessary. In this book, we will assume that all the parameters are known, otherwise we invite the reader to consult Eric Walter’s book [WAL 14] for a broad range of identification methods. Of course, no systematic methodology exists that can be used to model a system. The goal of this chapter and of the following exercises is to present, using several varied examples, how to obtain a state representation.
1.1. Linear systems
In the continuous-time case, linear systems can be described by the following state equations:

ẋ(t) = Ax(t) + Bu(t)
y(t) = Cx(t) + Du(t)
Linear systems are rather rare in nature. However, they are relatively easy to manipulate using linear algebra techniques and often approximate in an acceptable manner the nonlinear systems around their operating point.
1.2. Mechanical systems
The fundamental principle of dynamics allows us to easily find the state equations of mechanical systems (such as robots). The resulting calculations are relatively complicated for complex systems and the use of computer algebra systems may prove to be useful. In order to obtain the state equations of a mechanical system composed of several subsystems S1, S2, . . . , Sm, assumed to be rigid, we follow three steps:
1) Obtaining the differential equations. For each subsystem Sk, with mass m and inertial matrix J, the following relations must be applied:

∑i fi = m a
∑i Mfi = J ω̇

where the fi are the forces acting on the subsystem Sk and Mfi represents the torque created by the force fi on Sk, with respect to its center. The vector a represents the tangential acceleration of Sk and the vector ω̇ represents the angular acceleration of Sk. After decomposing these 2m vectorial equations according to their components, we obtain 6m scalar differential equations such that some of them might be degenerate.
2) Removing the components of the internal forces. In the differential equations there are the so-called bonding forces, which are internal to the whole mechanical system, even though they are external to each subsystem composing it. They represent the action of a subsystem Sk on another subsystem Sℓ. Following the action–reaction principle, the existence of such a force, denoted by fk,ℓ, implies the existence of another force fℓ,k, representing the action of Sℓ on Sk, such that fℓ,k = −fk,ℓ. Through a formal manipulation of the differential equations and by taking into account the equations due to the action–reaction principle, it is possible to remove the internal forces. The resulting number of differential equations has to be reduced to the number n of degrees of freedom q1, . . . , qn of the system.

3) Obtaining the state equations. We then have to isolate the second derivatives q̈1, . . . , q̈n from the set of n differential equations in such a manner as to obtain a vectorial relation such as:

q̈ = f(q, q̇, u)
where u is the vector of external forces that are not derived from a potential (in other words, those which we apply to the system). The state equations are then written as:

d/dt (q, q̇)ᵀ = (q̇, f(q, q̇, u))ᵀ

A mechanical system whose dynamics can be described by the relation q̈ = f(q, q̇, u) will be referred to as holonomic. For a holonomic system, q and q̇ are thus independent. If there is a so-called non-holonomic constraint that links the two of them (of the form h(q, q̇) = 0), the system will be referred to as non-holonomic. Such systems may be found for instance in mobile robots with wheels [LAU 01]. Readers interested in more details on the modeling of mechanical systems may consult [KHA 07].
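As a small numeric illustration of step 3 (an assumed example, not one of the book's exercises), a pendulum of length L with torque input u has q̈ = −(g/L) sin q + u/(mL²), which the recipe above rewrites as the first-order state equation d/dt (q, q̇) = (q̇, f(q, q̇, u)):

```python
import numpy as np

g_, L, m = 9.81, 1.0, 1.0    # illustrative parameter values

def state_equation(x, u):
    # x = (q, qdot); returns d/dt (q, qdot) = (qdot, f(q, qdot, u))
    q, qdot = x
    qddot = -(g_ / L) * np.sin(q) + u / (m * L**2)
    return np.array([qdot, qddot])

# one Euler step from rest at q = pi/4, with no applied torque
x = np.array([np.pi / 4, 0.0])
x = x + 0.01 * state_equation(x, 0.0)
```

Starting from rest, the first step leaves the angle unchanged and gives the angular velocity a negative increment, as gravity pulls the pendulum back toward the vertical.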
1.3. Servomotors
A mechanical system is controlled by forces or torques and obeys a dynamic model that depends on many poorly known coefficients. This same mechanical system represented by a kinematic model is controlled by positions, velocities or accelerations. The kinematic model depends on well-known geometric coefficients and is a lot easier to put into equations. In practice, we move from a dynamic model to its kinematic equivalent by adding servomotors. In summary, a servomotor is a direct current motor with an electrical control circuit and a sensor (of the position, velocity or acceleration). The control circuit computes the voltage u to give to the motor in order for the value measured by the sensor to correspond to the setpoint w. In practice, the signal w is generally given in the form of a square wave called pulse-width modulation (PWM). There are three types of servomotors:
– the position servo. The sensor measures the position (or the angle) x of the motor and the control rule is expressed as u = k (w − x). If k is large, we may conclude that x ≃ w;

– the velocity servo. The sensor measures the velocity (or the angular velocity) ẋ of the motor and the control rule is expressed as u = k (w − ẋ). If k is large, we have ẋ ≃ w;

– the acceleration servo. The sensor measures the acceleration (tangential or angular) ẍ of the motor and the control rule is expressed as u = k (w − ẍ). If k is large, we have ẍ ≃ w.
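The effect of a large gain k can be checked in a quick simulation. The sketch below is purely illustrative (the first-order motor model v̇ = −v + u and every numeric value are assumptions, not the book's): a velocity servo whose proportional rule drives the measured velocity toward the setpoint w.

```python
def simulate(k, w=2.0, dt=0.001, T=5.0):
    # velocity servo around a motor with dynamics v' = -v + u
    v = 0.0
    for _ in range(int(T / dt)):
        u = k * (w - v)          # proportional control rule
        v = v + dt * (-v + u)    # Euler step of the motor dynamics
    return v

v_small = simulate(k=1.0)        # low gain: v settles far from w
v_large = simulate(k=100.0)      # high gain: v is close to w
```

The steady state is k·w/(1 + k): with k = 1 the velocity settles at half the setpoint, while with k = 100 it lands within a few percent of w, which is why a high-gain servo lets us treat the velocity itself as the input.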
1.4. Exercises
EXERCISE 1.1.– Integrator
The integrator is a linear system described by the differential equation ẏ = u. Find a state representation for this system. Give this representation in matrix form.
EXERCISE 1.2.– Second order system
Let us consider the system with input u and output y described by the second-order differential equation:

ÿ + a1 ẏ + a0 y = b u

Taking x = (y, ẏ), find a state equation for this system. Give it in matrix form.
EXERCISE 1.3.– Mass-spring system
Let us consider a system with input u and output q1 as shown in Figure 1.1 (u is the force applied to the second carriage, qi is the deviation of the ith carriage with respect to its point of equilibrium, ki is the stiffness of the ith spring and α is the coefficient of viscous friction).
Figure 1.1. a) Mass-spring system at rest, b) system in any state
Let us take the state vector:
x = (q1, q2, q̇1, q̇2)ᵀ
1) Find the state equations of the system.
2) Is this system linear?
EXERCISE 1.4.– Simple pendulum
Let us consider the pendulum in Figure 1.2. The input of this system is the torque u exerted on the pendulum around its axis. The output is y(t), the algebraic distance between the mass m and the vertical axis:
1) Determine the state equations of this system.
2) Express the mechanical energy Em as a function of the state of the system. Show that the latter remains constant when the torque u is nil.
EXERCISE 1.5.– Dynamic modeling of an inverted rod pendulum

Let us consider the so-called inverted rod pendulum system, which is composed of a pendulum of length ℓ placed in an unstable equilibrium on a carriage, as represented in Figure 1.3. The value u is the force exerted on the carriage of mass M, x indicates the position of the carriage, θ is the angle between the pendulum and the vertical axis and R is the force exerted by the carriage on the pendulum. At the extremity B of the pendulum, a point mass m is fixated. We may ignore the mass of the rod. Finally, A is the point of articulation between the rod and the carriage and Ω = θ̇ k is the rotation vector associated with the rod.

1) Write the fundamental principle of dynamics as applied to the carriage and the pendulum.

2) Show that the velocity vector at point B is expressed by the relation vB = (ẋ − ℓθ̇ cos θ) i − ℓθ̇ sin θ j. Calculate the acceleration v̇B of point B.

3) In order to model the inverted pendulum, we will take the state vector x = (x, θ, ẋ, θ̇). Justify this choice.

4) Find the state equations for the inverted rod pendulum.
Figure 1.2. Simple pendulum with state vector x = (q, q̇)
Figure 1.3. Inverted rod pendulum
EXERCISE 1.6.– Kinematic modeling of an inverted rod pendulum

In a kinematic model, the inputs are no longer forces or moments, but kinematic variables, in other words positions, velocities or accelerations. It is the role of servomotors to translate these kinematic variables into forces or moments.
Let us take the state equations of the inverted rod pendulum established in the previous exercise:

d/dt (x, θ, ẋ, θ̇)ᵀ = ( ẋ,
                       θ̇,
                       −m sin θ (ℓθ̇² − g cos θ) / (M + m sin² θ),
                       sin θ ((M + m)g − mℓθ̇² cos θ) / (ℓ (M + m sin² θ)) )ᵀ
                     + ( 0,  0,  1 / (M + m sin² θ),  cos θ / (ℓ (M + m sin² θ)) )ᵀ u
1) Instead of taking the force u on the carriage as input, let us rather take the acceleration a = ẍ. What does the state model become?

2) Show how, by using a proportional control such as u = K (a − ẍ) with large K, it is possible to move from a dynamic model to a kinematic model. In what way does this control recall the servomotor principle or the operational amplifier principle?
EXERCISE 1.7.– Segway
The segway represented on the left side of Figure 1.4 is a vehicle with two wheels and a single axle. It is stable since it is controlled. In the modeling step, we will of course assume that the engine is not controlled.

Its open-loop behavior is very close to that of the planar unicycle represented in Figure 1.4 on the right-hand side. In this figure, u represents the torque exerted between the body and the wheel.

The link between these two elements is a pivoting pin. We will denote by B the center of gravity of the body and by A that of the wheel. C is a fixed point on the disk. Let us denote by α the angle between the vector AC and the horizontal axis and by θ the angle between the body of the unicycle and the vertical axis. This system has two degrees of freedom α and θ. The state of our system is given by the vector x = (α, θ, α̇, θ̇)ᵀ.
The parameters of our system are:

– for the disk: its mass M, its radius a, its moment of inertia JM;

– for the pendulum: its mass m, its moment of inertia Jp, the distance ℓ between its center of gravity and the center of the disk.
Find the state equations of the system.
Figure 1.4. The segway has two wheels and one axle
EXERCISE 1.8.– Hamilton’s method
Hamilton’s method allows us to obtain the state equationsof a conservative mechanical system (in other words, whoseenergy is conserved) only from the expression of a singlefunction: its energy. For this, we define the Hamiltonian asthe mechanical energy of the system, in other words the sumof the potential energy and the kinetic energy. TheHamiltonian can be expressed as a function H (q,p) of thedegrees of freedom q and of the associated amount ofmovement (or kinetic moments in the case of a rotation) p.The Hamilton equations are written as:⎧⎨⎩ q = ∂H(q,p)
∂p
p = −∂H(q,p)∂q
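These two equations can be integrated numerically straight from H. The sketch below is illustrative only (it is not the book's solution to the exercise that follows): a pendulum with H(q, p) = p²/(2ml²) − mgl cos q, integrated with a symplectic Euler step (an assumption of this example, chosen because it keeps H nearly constant, as a conservative system requires).

```python
import numpy as np

m, l, grav = 1.0, 1.0, 9.81   # illustrative parameter values

def H(q, p):
    # Hamiltonian = kinetic energy + potential energy
    return p**2 / (2 * m * l**2) - m * grav * l * np.cos(q)

q, p = np.pi / 3, 0.0          # released from rest at 60 degrees
H0 = H(q, p)
dt = 1e-3
for _ in range(5000):          # 5 s of simulation
    p = p - dt * m * grav * l * np.sin(q)   # Hamilton equation for p
    q = q + dt * p / (m * l**2)             # Hamilton equation for q
drift = abs(H(q, p) - H0)      # how far the energy has wandered
```

Over several swing periods the energy drift stays small, a numerical echo of the conservation property proved in question 2 below.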
1) Let us consider the simple pendulum shown in Figure 1.6. This pendulum has a length of ℓ and is composed of a single point mass m. Calculate the Hamiltonian of the system. Deduce the state equations from this.
2) Show that if a system is described by Hamilton equations, then the Hamiltonian is constant.
Figure 1.5. Holonomic robot with omni wheels
EXERCISE 1.9.– Omnidirectional robot
Let us consider the robot with three omni wheels, as shown in Figure 1.5. An omni wheel is a wheel equipped with a multitude of small rollers over its entire periphery that allow it to slide sideways (in other words, perpendicularly to its nominal movement direction). Let us denote by vi the velocity vector of the contact point of the ith wheel. If ii is the normed direction vector indicating the nominal movement direction of the wheel, then the component of vi along ii corresponds to the rotation ωi of the wheel, whereas its complementary component (perpendicular to ii) is linked to the rotation of the peripheral rollers. If r is the radius of the wheel, then we have the relation rωi = 〈vi, ii〉 = ‖vi‖ · ‖ii‖ · cos αi, where αi is the angle between vi and ii. If cos αi = ±1, the wheel is in its nominal state, i.e. it behaves like a classical wheel. If cos αi = 0, the wheel no longer turns and it is in a state of skid.
1) Give the state equations of the system. We will use the state vector x = (x, y, θ) and the input vector ω = (ω1, ω2, ω3).

2) Propose a loop that allows us to obtain a tank model described by the following state equations:

ẋ = v cos θ
ẏ = v sin θ
θ̇ = u1
v̇ = u2
Figure 1.6. Simple pendulum
EXERCISE 1.10.– Modeling a tank
The robot tank in Figure 1.7 is composed of two parallel motorized crawlers (or wheels) whose accelerations (which form the inputs u1 and u2 of the system) are controlled by two independent motors. In the case where wheels are considered, the stability of the system is ensured by one or two idlers, not represented on the figure. The degrees of freedom of the robot are the x, y coordinates of the center of the axle and its orientation θ.

1) Why can’t we choose as state vector the vector x = (x, y, θ, ẋ, ẏ, θ̇)ᵀ?

2) Let us denote by v1 and v2 the center velocities of each of the motorized wheels. Let us choose as state vector the vector x = (x, y, θ, v1, v2)ᵀ. What might justify such a choice? Give the state equations of the system.
Figure 1.7. Robot tank viewed from above
EXERCISE 1.11.– Modeling a car
Let us consider the car as shown in Figure 1.8. The driver of the car (on the left-hand side of the figure) has two controls: the acceleration of the front wheels (assumed to be motorized) and the rotation velocity of the steering wheel. The brakes here represent a negative acceleration. We will denote by δ the angle between the front wheels and the axis of the car, by θ the angle made by the car with respect to the horizontal axis and by (x, y) the coordinates of the middle of the rear axle. The state variables of our system are composed of:

– the position coordinates, in other words all the knowledge necessary to draw the car, more specifically the x, y coordinates of the center of the rear axle, the orientation θ of the car, and the angle δ of the front wheels;

– the kinetic coordinate v representing the velocity of the center of the front axle (indeed, the sole knowledge of this value and the position coordinates allows us to calculate the velocities of all the other elements of the car).

Calculate the state equations of the system. We will assume that the two front wheels have the same velocity v (even though in reality, the inner wheel during a turn is slower than the outer one). Thus, as illustrated on the right-hand side of the figure, everything happens as if there were only two virtual wheels situated at the middle of the axles.
Figure 1.8. Car moving on a plane (view from above)
EXERCISE 1.12.– Car-trailer system
Let us consider the car studied in Exercise 1.11. Let us add a trailer to this car whose attachment point is found in the middle of the rear axle of the car, as illustrated by Figure 1.9. Find the state equations of the car-trailer system.
Figure 1.9. Car with a trailer
Figure 1.10. Sailboat to be modeled
EXERCISE 1.13.– Sailboat
Let us consider the sailboat to be modeled represented in Figure 1.10. The state vector x = (x, y, θ, δv, δg, v, ω)ᵀ, of dimension 7, is composed of the coordinates x, y of the center of gravity G of the boat (the drift is found at G), of the orientation θ, the angle δv of the sail, the angle δg of the rudder, the velocity v of the center of gravity G and of the angular velocity ω of the boat. The inputs u1 and u2 of the system are the derivatives of the angles δv and δg. The parameters (assumed to be known and constant) are: V the velocity of the wind, rg the distance of the rudder to G, rv the distance of the mast to G, αg the lift of the rudder (if the rudder is perpendicular to the vessel’s navigation, the water exerts a force of αg·v on the rudder), αv the lift of the sail (if the sail is stationary and perpendicular to the wind, the latter exerts a force of αv·V), αf the coefficient of friction of the boat on the water in the direction of navigation (the water exerts a force opposite to the direction of navigation on the boat equal to αf·v²), αθ the angular coefficient of friction (the water exerts a friction torque on the boat equal to −αθ·ω; given the form of the boat, rather streamlined in order to maintain its course, αθ will be large compared to αf), J the moment of inertia of the boat, ℓ the distance between the center of pressure of the sail and the mast, and β the coefficient of drift (when the sail is released, the boat tends to drift in the direction of the wind with a velocity equal to βV). The state vector is composed of the position coordinates, i.e. the coordinates x, y of the inertial center of the boat, the orientation θ, and the angles δv and δg of the sail and the rudder, and of the kinetic coordinates v and ω representing, respectively, the velocity of the center of gravity G and the angular velocity of the vessel. Find the state equations for our system ẋ = f(x, u), where x = (x, y, θ, δv, δg, v, ω)ᵀ and u = (u1, u2)ᵀ.
EXERCISE 1.14.– Direct current motor
A direct current motor can be described by Figure 1.11, in which u is the supply voltage of the motor, i is the current absorbed by the motor, R is the armature resistance, L is the armature inductance, e is the electromotive force, ρ is the coefficient of friction in the motor, ω is the angular velocity of the motor and Tr is the torque exerted by the motor on the load.

Let us recall the equations of an ideal direct current motor: e = KΦω and T = KΦi. In the case of an induction-independent motor, or a motor with permanent magnets, the flux Φ is constant. We are going to put ourselves in this situation.
Figure 1.11. Direct current motor
1) We take as inputs of the system Tr and u. Find the state equations.

2) We connect a ventilator to the output of the system with a characteristic of Tr = αω². Give the new state equations of the motor.
EXERCISE 1.15.– Electrical circuit
The electrical circuit of Figure 1.12 has as input the voltage u(t) and as output the voltage y(t). Find the state equations of the system. Is this a linear system?
Figure 1.12. Electrical circuit to be modeled
EXERCISE 1.16.– The three containers
1) Let us consider two containers placed as shown in Figure 1.13.

In the left container, the water flows without friction in the direction of the right container. In the left container, the flow is smooth, as opposed to the right container, where there are turbulences. It is these turbulences that absorb the kinetic energy of the water and transform it into heat. Without these turbulences, we would have a perpetual back-and-forth movement of the water between the two containers. If a is the cross-section of the canal, then the so-called Torricelli’s law states that the water flow from the right container to the left one is equal to:

QD = a · sign(zA − zB) · √(2g|zA − zB|)
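Torricelli's law as stated above is easily coded: the flow changes sign with the level difference and vanishes when the levels are equal. The following Python sketch uses arbitrary illustrative values for the cross-section a:

```python
import math

def flow(zA, zB, a=0.01, g=9.81):
    # Torricelli's law: QD = a * sign(zA - zB) * sqrt(2 g |zA - zB|)
    dz = zA - zB
    return a * math.copysign(1.0, dz) * math.sqrt(2 * g * abs(dz)) if dz else 0.0

q = flow(2.0, 1.0)    # positive: water flows toward the lower level
```

Swapping the two levels flips the sign of the flow, and equal levels give zero flow, which is what makes the level difference settle.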
Figure 1.13. Hydraulic system composed of two containers filled with water and connected with a canal
2) Let us now consider the system composed of threecontainers as represented in Figure 1.14.
Figure 1.14. System composed of three containers filled with water and connected with two canals
The water from containers 1 and 3 can flow toward container 2, but also toward the outside at atmospheric pressure. The associated flow rates are given, following Torricelli’s relation, by:

Q1ext = a · √(2g h1)
Q3ext = a · √(2g h3)

Similarly, the flow rate from a container i toward a container j is given by:

Qij = a · sign(hi − hj) · √(2g|hi − hj|)
The state variables of this system that may be considered are the heights of the containers. In order to simplify, we will assume that the surfaces of the containers are all equal to 1 m²; thus, the volume of water in a container equals its height. Find the state equations describing the dynamics of the system.
EXERCISE 1.17.– Pneumatic cylinder
Let us consider the pneumatic cylinder with return spring shown in Figure 1.15. Such a cylinder is often referred to as single-acting, since the air under pressure is only found in one of the two chambers.
The parameters of this system are the stiffness k of the spring, the surface a of the piston and the mass m at the end of the piston (the masses of all the other objects are ignored). We assume that everything happens at a constant temperature T0. We will take as state vector x = (z, ż, p), where z is the position of the cylinder, ż its velocity and p the pressure inside the chamber. The input of the system is the volumetric flow rate u of the air toward the chamber of the cylinder. In order to simplify, we will assume that there is vacuum in the spring chamber and that when z = 0 (the cylinder is at its left-hand limit) the spring is at equilibrium. Find the state equations of the pneumatic cylinder.
Figure 1.15. Single-acting pneumatic cylinder
EXERCISE 1.18.– Fibonacci sequence
We will now study the evolution of the number y(k) of rabbit couples on a farm as a function of the year k. At year 0, there is only a single couple of newborn rabbits on the farm (and thus y(0) = 1). The rabbits only become fertile a year after their birth. It follows that at year 1, there is still a single couple of rabbits, but this couple is fertile (and thus y(1) = 1). A fertile couple gives birth, each year, to another couple of rabbits. Thus, at year 2, there is a fertile couple of rabbits and a newborn couple. This evolution is described in Table 1.1, where N means newborn and A means adult.
k = 0 | k = 1 | k = 2 | k = 3 | k = 4
  N   |   A   |   A   |   A   |   A
      |       |   N   |   A   |   A
      |       |       |   N   |   A
      |       |       |       |   N
      |       |       |       |   N

Table 1.1. Evolution of the number of rabbits
Let us denote by x1(k) the number of newborn couples, by x2(k) the number of fertile couples and by y(k) the total number of couples.
1) Give the state equations that govern the system.
2) Give the recurrence relation satisfied by y(k).
EXERCISE 1.19.– Bus network
We consider a public transport system of buses with 4 lines and 4 buses. There are only two stations where travelers can change lines. This system can be represented by a Petri net (see Figure 1.16). Each token corresponds to a bus. The places p1, p2, p3, p4 represent the lines. Each place is labeled with a number that corresponds to the minimum amount of time that a token must remain in that place (this corresponds to the transit time). The transitions t1, t2 ensure synchronization. They are only crossed when each upstream place of the transition has at least one token that has waited sufficiently long. In this case, the upstream places lose a token and the downstream places gain one. This structure ensures that the connection will always be made and that the buses will leave in pairs.
1) Let us assume that at time t = 0, the transitions t1 and t2 are crossed for the first time and that we are in the configuration of Figure 1.16 (this corresponds to the initialization). Give the crossing times for each of the transitions.
Figure 1.16. Petri net of the bus system
2) Let us denote by xi(k) the time when the transition ti is crossed for the kth time. Show that the dynamics of the model can be written using state equations:
x (k + 1) = f (x (k))
where x = (x1, x2)ᵀ is the state vector. Remember that here k is not the time, but an event number.
3) Let us now attempt to reformulate elementary algebra by redefining the addition and multiplication operators (see [BAC 92]) as follows:

a ⊕ b = max(a, b)
a ⊗ b = a + b

Thus, 2 ⊕ 3 = 3, whereas 2 ⊗ 3 = 5. Show that in this new algebra (called max-plus algebra), the previous system is linear.
4) Why do you think that matrix calculus (such as we know it) could not be used easily in this new algebra?
1.5. Solutions
Solution to Exercise 1.1 (integrator)
One possible state representation for the integrator is the following:

ẋ(t) = u(t)
y(t) = x(t)

The matrices associated with this system are A = (0), B = (1), C = (1) and D = (0). They all have dimension 1 × 1.
Solution to Exercise 1.2 (second-order system)
By taking x = (y, ẏ), this differential equation can be written in the form:

ẋ1 = x2
ẋ2 = −a1·x2 − a0·x1 + b·u
y = x1

or, in a standard form using state matrices A, B, C and D:

ẋ = [0, 1; −a0, −a1]·x + (0, b)ᵀ·u
y = (1, 0)·x
Solution to Exercise 1.3 (mass-spring system)
1) The fundamental principle of dynamics applied to carriage 1 and then to carriage 2 gives:

−k1·q1 − α·q̇1 + k2·(q2 − q1) = m1·q̈1
u − α·q̇2 − k2·(q2 − q1) = m2·q̈2

In other words:

q̈1 = (1/m1)·(−(k1 + k2)·q1 + k2·q2 − α·q̇1)
q̈2 = (1/m2)·(k2·q1 − k2·q2 − α·q̇2 + u)

We thus have the following state representation, with x = (q1, q2, q̇1, q̇2):

ẋ = [0, 0, 1, 0; 0, 0, 0, 1; −(k1 + k2)/m1, k2/m1, −α/m1, 0; k2/m2, −k2/m2, 0, −α/m2]·x + (0, 0, 0, 1/m2)ᵀ·u
q1 = (1, 0, 0, 0)·x
2) Yes, this system is linear since it can be written as:

ẋ = A·x + B·u
y = C·x + D·u
Solution to Exercise 1.4 (simple pendulum)
1) Following the fundamental principle of dynamics, we have:

−ℓ·m·g·sin q + u = J·q̈

where ℓ is the length of the pendulum. For our example, J = m·ℓ², therefore:

q̈ = (u − ℓ·m·g·sin q)/(m·ℓ²)

Let us take as state vector x = (q, q̇). The state equations of the system are then written as:

d/dt (q, q̇)ᵀ = (q̇, (u − ℓ·m·g·sin q)/(m·ℓ²))ᵀ
y = ℓ·sin q

or:

ẋ1 = x2
ẋ2 = (u − ℓ·m·g·sin x1)/(m·ℓ²)
y = ℓ·sin x1
2) The mechanical energy of the pendulum is given by:

Em = (1/2)·m·ℓ²·q̇² + m·g·ℓ·(1 − cos q)

where the first term is the kinetic energy and the second the potential energy.
When the torque u is nil, we have:

dEm/dt = (1/2)·m·ℓ²·(2·q̇·q̈) + m·g·ℓ·q̇·sin q
       = m·ℓ²·q̇·(−ℓ·m·g·sin q)/(m·ℓ²) + m·g·ℓ·q̇·sin q = 0
The mechanical energy of the pendulum therefore remains constant, which is consistent with the fact that the frictionless pendulum is a conservative system.
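This conservation property can be checked numerically: integrating q̈ = −g·sin q/ℓ (the pendulum with u = 0) and monitoring Em should show no drift. The following Python sketch is only an illustration alongside the book's MATLAB, with arbitrarily chosen values m = 1, ℓ = 1 (not from the text):

```python
import math

m, l, g = 1.0, 1.0, 9.81   # arbitrary test values

def f(q, qd):
    # pendulum with zero torque: dq/dt = qd, d(qd)/dt = -g*sin(q)/l
    return qd, -g * math.sin(q) / l

def rk4_step(q, qd, dt):
    # one fourth-order Runge-Kutta step
    k1 = f(q, qd)
    k2 = f(q + dt/2*k1[0], qd + dt/2*k1[1])
    k3 = f(q + dt/2*k2[0], qd + dt/2*k2[1])
    k4 = f(q + dt*k3[0], qd + dt*k3[1])
    q += dt/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
    qd += dt/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    return q, qd

def energy(q, qd):
    # Em = (1/2) m l^2 qd^2 + m g l (1 - cos q)
    return 0.5*m*l**2*qd**2 + m*g*l*(1 - math.cos(q))

q, qd = 0.5, 0.0
E0 = energy(q, qd)
for _ in range(2000):          # 2 s of simulated time
    q, qd = rk4_step(q, qd, 0.001)
drift = abs(energy(q, qd) - E0)
```

With this step size, the energy drift remains at round-off level, as expected for a conservative system.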
Solution to Exercise 1.5 (dynamic modeling of an inverted rod pendulum)

1) The fundamental principle of dynamics applied to the carriage and the pendulum gives us:

(u − Rx)·i = M·ẍ·i                  (carriage in translation)
Rx·i + Ry·j − m·g·j = m·v̇B          (pendulum in translation)
Rx·ℓ·cos θ + Ry·ℓ·sin θ = 0·θ̈       (pendulum in rotation)

where vB is the velocity vector of point B. For the third equation, the moment of inertia of the pendulum was taken to be nil.
2) Since:

OB = (x − ℓ·sin θ)·i + ℓ·cos θ·j

we have:

vB = (ẋ − ℓ·θ̇·cos θ)·i − ℓ·θ̇·sin θ·j

Therefore, the acceleration of point B is given by:

v̇B = (ẍ − ℓ·θ̈·cos θ + ℓ·θ̇²·sin θ)·i − (ℓ·θ̈·sin θ + ℓ·θ̇²·cos θ)·j
3) It is the vector of the degrees of freedom and of their derivatives. There are no non-holonomic constraints.
4) After scalar decomposition of the dynamics equations given above, we obtain:

M·ẍ = u − Rx                                 (i)
Rx = m·(ẍ − ℓ·θ̈·cos θ + ℓ·θ̇²·sin θ)          (ii)
Ry − m·g = −m·(ℓ·θ̈·sin θ + ℓ·θ̇²·cos θ)       (iii)
Rx·cos θ + Ry·sin θ = 0                      (iv)

These four equations describe, respectively, (i) the carriage in translation, (ii) the pendulum in translation along i, (iii) the pendulum in translation along j and (iv) the pendulum in rotation. We thus verify that the number of degrees of freedom (here x and θ) added to the number of internal forces (here Rx and Ry) is equal to the number of equations. In matrix form, these equations become:

[M, 0, 1, 0; −m, m·ℓ·cos θ, 1, 0; 0, m·ℓ·sin θ, 0, 1; 0, 0, cos θ, sin θ]·(ẍ, θ̈, Rx, Ry)ᵀ = (u, m·ℓ·θ̇²·sin θ, m·g − m·ℓ·θ̇²·cos θ, 0)ᵀ

Therefore:

(ẍ, θ̈)ᵀ = [1, 0, 0, 0; 0, 1, 0, 0]·[M, 0, 1, 0; −m, m·ℓ·cos θ, 1, 0; 0, m·ℓ·sin θ, 0, 1; 0, 0, cos θ, sin θ]⁻¹·(u, m·ℓ·θ̇²·sin θ, m·g − m·ℓ·θ̇²·cos θ, 0)ᵀ

which gives:

ẍ = (−m·sin θ·(ℓ·θ̇² − g·cos θ) + u)/(M + m·sin²θ)
θ̈ = (sin θ·((M + m)·g − m·ℓ·θ̇²·cos θ) + cos θ·u)/(ℓ·(M + m·sin²θ))
The state equations are therefore written as:

d/dt (x, θ, ẋ, θ̇)ᵀ = (ẋ, θ̇, −m·sin θ·(ℓ·θ̇² − g·cos θ)/(M + m·sin²θ), sin θ·((M + m)·g − m·ℓ·θ̇²·cos θ)/(ℓ·(M + m·sin²θ)))ᵀ + (0, 0, 1/(M + m·sin²θ), cos θ/(ℓ·(M + m·sin²θ)))ᵀ·u
or equivalently:

d/dt (x1, x2, x3, x4)ᵀ = (x3, x4, −m·sin x2·(ℓ·x4² − g·cos x2)/(M + m·sin²x2), sin x2·((M + m)·g − m·ℓ·x4²·cos x2)/(ℓ·(M + m·sin²x2)))ᵀ + (0, 0, 1/(M + m·sin²x2), cos x2/(ℓ·(M + m·sin²x2)))ᵀ·u
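The closed-form expressions for ẍ and θ̈ can be cross-checked by solving the 4 × 4 linear system directly. The Python sketch below is only a verification aid; the parameter values are arbitrary, not from the book:

```python
import math

def solve(A, b):
    # naive Gaussian elimination with partial pivoting
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

M_, m, l, g = 2.0, 0.5, 1.0, 9.81   # arbitrary test values
th, dth, u = 0.3, 0.7, 1.0
s, c = math.sin(th), math.cos(th)

# the 4x4 system (i)-(iv) in the unknowns (x'', th'', Rx, Ry)
A = [[M_, 0, 1, 0],
     [-m, m*l*c, 1, 0],
     [0, m*l*s, 0, 1],
     [0, 0, c, s]]
b = [u, m*l*dth**2*s, m*g - m*l*dth**2*c, 0]
ddx, ddth, Rx, Ry = solve(A, b)

# closed-form solution from the text
den = M_ + m*s**2
ddx_cf = (-m*s*(l*dth**2 - g*c) + u) / den
ddth_cf = (s*((M_ + m)*g - m*l*dth**2*c) + c*u) / (l*den)
```

Both routes agree to machine precision, which confirms the elimination of the internal forces Rx, Ry.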
Solution to Exercise 1.6 (kinematic modeling of an inverted rod pendulum)

1) Following the dynamic model of the inverted rod pendulum, we obtain:

a = (1/(M + m·sin²θ))·(−m·sin θ·(ℓ·θ̇² − g·cos θ) + u)

After isolating u, we get:

u = m·sin θ·(ℓ·θ̇² − g·cos θ) + (M + m·sin²θ)·a

Therefore:

θ̈ = sin θ·((M + m)·g − m·ℓ·θ̇²·cos θ)/(ℓ·(M + m·sin²θ)) + cos θ·u/(ℓ·(M + m·sin²θ))
  = sin θ·((M + m)·g − m·ℓ·θ̇²·cos θ)/(ℓ·(M + m·sin²θ)) + cos θ·(m·sin θ·(ℓ·θ̇² − g·cos θ) + (M + m·sin²θ)·a)/(ℓ·(M + m·sin²θ))
  = (1/(ℓ·(M + m·sin²θ)))·((M + m)·g·sin θ − g·m·sin θ·cos²θ + (M + m·sin²θ)·cos θ·a)
  = g·sin θ/ℓ + (cos θ/ℓ)·a

Let us note that this relation could have been obtained directly by noticing that:

ℓ·θ̈ = a·cos θ + g·sin θ

where a·cos θ is the part of the acceleration of A that contributes to the rotation and g·sin θ is the acceleration of B from the viewpoint of A.
REMARK.– In order to obtain this relation in a more rigorous manner and without the use of the dynamic model, we would need to differentiate with respect to time the velocity composition formula (Varignon's formula), in other words:

vA = vB + AB ∧ ω

and write this formula in the frame of the pendulum. We obtain:

(a·cos θ, −a·sin θ, 0)ᵀ = (−g·sin θ, n, 0)ᵀ + (0, ℓ, 0)ᵀ ∧ (0, 0, ω)ᵀ

where n corresponds to the normal acceleration of the mass m. We thus obtain, in addition to the desired relation, the normal acceleration n = −a·sin θ, which will not be used.
The model therefore becomes:

d/dt (x, θ, ẋ, θ̇)ᵀ = (ẋ, θ̇, 0, g·sin θ/ℓ)ᵀ + (0, 0, 1, cos θ/ℓ)ᵀ·a
This model, referred to as kinematic, only involves positions, velocities and accelerations. It is much simpler than the dynamic model and contains fewer coefficients. On the other hand, it corresponds less closely to reality, since the real input is a force and not an acceleration.
2) In the case of the inverted rod pendulum, we can move from the dynamic model with input u to a kinematic model with input a by generating u with a high-gain proportional control of the form:

u = K·(a − ẍ)

with K very large and where a is a new input (see Figure 1.17).
Figure 1.17. The inverted rod pendulum, looped by a high gain K, behaves like a kinematic model
The acceleration ẍ can be measured by an accelerometer. If K is sufficiently large, the control u will create the desired acceleration a; in other words, we will be able to say that ẍ = a. Thus, the system can be described by the state equations of the kinematic model, which do not involve any of the inertial parameters of the system. A controller designed over the kinematic model will therefore be more robust than one designed over the dynamic model, since it will work no matter what the masses (m, M), the moments of inertia, the frictions, etc. are. This high-gain control is very close to the op-amp principle. In addition to being more robust, such an approach also yields a simpler model that is easier to obtain.
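The consistency between the dynamic and kinematic models can be checked numerically: pick a desired acceleration a, compute the force u = m·sin θ·(ℓ·θ̇² − g·cos θ) + (M + m·sin²θ)·a that produces it, and verify that the dynamic model then yields θ̈ = g·sin θ/ℓ + (cos θ/ℓ)·a. A Python sketch with arbitrary test values:

```python
import math

M, m, l, g = 2.0, 0.5, 1.0, 9.81   # arbitrary test values
th, dth, a = 0.4, -0.6, 1.5
s, c = math.sin(th), math.cos(th)
den = M + m*s**2

# force that realizes the acceleration a (inverse of the dynamic model)
u = m*s*(l*dth**2 - g*c) + den*a

# theta'' according to the dynamic model
ddth_dyn = (s*((M + m)*g - m*l*dth**2*c) + c*u) / (l*den)

# theta'' according to the kinematic model
ddth_kin = g*s/l + c*a/l
```

The two values coincide, which is the algebraic content of the substitution performed in the solution above.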
Solution to Exercise 1.7 (segway)
In order to find the state equations, we apply the fundamental principle of dynamics to each subsystem, more precisely the wheel and the body. We have:

−Rx + Fx = −M·a·α̈                    (wheel in translation)
Fx·a + u = JM·α̈                      (wheel in rotation)
Rx·i + Ry·j − m·g·j = m·v̇B           (body in translation)
Rx·ℓ·cos θ + Ry·ℓ·sin θ − u = Jp·θ̈   (body in rotation)
where vB is the velocity vector of point B. Since:

OB = (−a·α − ℓ·sin θ)·i + (ℓ·cos θ + a)·j

by differentiation we obtain:

vB = (−a·α̇ − ℓ·θ̇·cos θ)·i − ℓ·θ̇·sin θ·j

and then:

v̇B = (−a·α̈ − ℓ·θ̈·cos θ + ℓ·θ̇²·sin θ)·i − (ℓ·θ̈·sin θ + ℓ·θ̇²·cos θ)·j
Thus, after scalar decomposition, the dynamics equations become:

−Rx + Fx = −M·a·α̈
Fx·a + u = JM·α̈
Rx = m·(−a·α̈ − ℓ·θ̈·cos θ + ℓ·θ̇²·sin θ)
Ry − m·g = −m·(ℓ·θ̈·sin θ + ℓ·θ̇²·cos θ)
Rx·ℓ·cos θ + Ry·ℓ·sin θ − u = Jp·θ̈
We verify that the number of degrees of freedom (here α and θ) added to the number of components of the internal forces (here Rx, Ry and Fx) is equal to the number of equations. In matrix form, these equations become:

[M·a, 0, −1, 0, 1; JM, 0, 0, 0, −a; m·a, m·ℓ·cos θ, 1, 0, 0; 0, m·ℓ·sin θ, 0, 1, 0; 0, −Jp, ℓ·cos θ, ℓ·sin θ, 0]·(α̈, θ̈, Rx, Ry, Fx)ᵀ = (0, u, m·ℓ·θ̇²·sin θ, m·g − m·ℓ·θ̇²·cos θ, u)ᵀ

Therefore:

(α̈, θ̈)ᵀ = [1, 0, 0, 0, 0; 0, 1, 0, 0, 0]·[M·a, 0, −1, 0, 1; JM, 0, 0, 0, −a; m·a, m·ℓ·cos θ, 1, 0, 0; 0, m·ℓ·sin θ, 0, 1, 0; 0, −Jp, ℓ·cos θ, ℓ·sin θ, 0]⁻¹·(0, u, m·ℓ·θ̇²·sin θ, m·g − m·ℓ·θ̇²·cos θ, u)ᵀ
In other words:

α̈ = (μ3·(μ2·θ̇² − μg·cos θ)·sin θ + (μ2 + μ3·cos θ)·u)/(μ1·μ2 − μ3²·cos²θ)
θ̈ = ((μ1·μg − μ3²·θ̇²·cos θ)·sin θ − (μ1 + μ3·cos θ)·u)/(μ1·μ2 − μ3²·cos²θ)

with:

μ1 = JM + a²·(m + M),  μ2 = Jp + m·ℓ²,  μ3 = a·m·ℓ,  μg = g·ℓ·m
The state equations are therefore written as:

d/dt (α, θ, α̇, θ̇)ᵀ = (α̇, θ̇, (μ3·(μ2·θ̇² − μg·cos θ)·sin θ + (μ2 + μ3·cos θ)·u)/(μ1·μ2 − μ3²·cos²θ), ((μ1·μg − μ3²·θ̇²·cos θ)·sin θ − (μ1 + μ3·cos θ)·u)/(μ1·μ2 − μ3²·cos²θ))ᵀ

By taking x = (α, θ, α̇, θ̇)ᵀ, these equations become:

d/dt (x1, x2, x3, x4)ᵀ = (x3, x4, (μ3·(μ2·x4² − μg·cos x2)·sin x2 + (μ2 + μ3·cos x2)·u)/(μ1·μ2 − μ3²·cos²x2), ((μ1·μg − μ3²·x4²·cos x2)·sin x2 − (μ1 + μ3·cos x2)·u)/(μ1·μ2 − μ3²·cos²x2))ᵀ
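The closed-form expressions in μ1, μ2, μ3, μg can be cross-checked against a direct numerical solution of the 5 × 5 linear system. A Python sketch, with parameter values chosen arbitrarily (M is written Mw here to avoid clashing with the solver's local variable):

```python
import math

def solve(A, b):
    # naive Gaussian elimination with partial pivoting
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

Mw, m, a, l, g = 1.0, 3.0, 0.2, 0.8, 9.81   # arbitrary test values
JM, Jp = 0.05, 0.4
th, dth, u = 0.25, 0.5, 0.7
s, c = math.sin(th), math.cos(th)

# the 5x5 system in the unknowns (alpha'', theta'', Rx, Ry, Fx)
A = [[Mw*a, 0, -1, 0, 1],
     [JM, 0, 0, 0, -a],
     [m*a, m*l*c, 1, 0, 0],
     [0, m*l*s, 0, 1, 0],
     [0, -Jp, l*c, l*s, 0]]
b = [0, u, m*l*dth**2*s, m*g - m*l*dth**2*c, u]
dda, ddth = solve(A, b)[:2]

# closed-form solution from the text
mu1, mu2, mu3, mug = JM + a**2*(m + Mw), Jp + m*l**2, a*m*l, g*l*m
D = mu1*mu2 - mu3**2*c**2
dda_cf = (mu3*(mu2*dth**2 - mug*c)*s + (mu2 + mu3*c)*u) / D
ddth_cf = ((mu1*mug - mu3**2*dth**2*c)*s - (mu1 + mu3*c)*u) / D
```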
Solution to Exercise 1.8 (Hamilton's method)
1) The Hamiltonian is written as:

H(q, p) = (1/2)·m·(ℓ·q̇)² + m·g·ℓ·(1 − cos q)     (kinetic energy + potential energy)
        = (1/2)·p²/(m·ℓ²) + m·g·ℓ·(1 − cos q)

since the momentum of the pendulum (or rather the angular momentum in this case) is p = J·q̇ = m·ℓ²·q̇, and thus m·(ℓ·q̇)² = m·(ℓ·p/(m·ℓ²))² = p²/(m·ℓ²). The state equations of the pendulum are therefore:

q̇ = ∂H(q, p)/∂p = p/(m·ℓ²)
ṗ = −∂H(q, p)/∂q = −m·g·ℓ·sin q

where the state vector here is x = (q, p)ᵀ. Let us note that:

q̈ = ṗ/(m·ℓ²) = −m·g·ℓ·sin q/(m·ℓ²) = −g·sin q/ℓ

and we come back to the differential equation of the pendulum.
2) We have:

Ḣ = (∂H(q, p)/∂p)·ṗ + (∂H(q, p)/∂q)·q̇ = 0

since q̇ = ∂H/∂p and ṗ = −∂H/∂q. The Hamiltonian (or, equivalently, the mechanical energy) is thus constant.
Solution to Exercise 1.9 (omnidirectional robot)
1) We have r·ωi = ⟨vi, ii⟩. However, following the velocity composition formula (Varignon's formula), vi = v − a·θ̇·ii. Therefore:

r·ωi = ⟨v − a·θ̇·ii, ii⟩ = ⟨v, ii⟩ − a·θ̇

where:

v = (ẋ, ẏ),  i1 = (−sin θ, cos θ),  i2 = (−sin(θ − π/3), cos(θ − π/3)),  i3 = (−sin(θ + π/3), cos(θ + π/3))
Therefore:

(ω1, ω2, ω3)ᵀ = (1/r)·[−sin θ, cos θ, −a; −sin(θ − π/3), cos(θ − π/3), −a; −sin(θ + π/3), cos(θ + π/3), −a]·(ẋ, ẏ, θ̇)ᵀ = A(θ)·(ẋ, ẏ, θ̇)ᵀ

where A(θ) denotes the whole matrix, including the factor 1/r. The state equations are therefore:

(ẋ, ẏ, θ̇)ᵀ = A⁻¹(θ)·ω
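A quick sanity check of A(θ): for a pure rotation (ẋ = ẏ = 0, θ̇ ≠ 0), the three wheels must all turn at the same speed −a·θ̇/r. Python sketch with arbitrary values for r and a:

```python
import math

r, a = 0.05, 0.3   # wheel radius and robot radius, arbitrary test values

def A(theta):
    # A(theta), including the 1/r factor, as in the solution above
    rows = []
    for off in (0.0, -math.pi/3, math.pi/3):
        rows.append([-math.sin(theta + off)/r,
                     math.cos(theta + off)/r,
                     -a/r])
    return rows

def mat_vec(M, v):
    return [sum(M[i][j]*v[j] for j in range(3)) for i in range(3)]

omega = mat_vec(A(0.7), [0.0, 0.0, 2.0])   # pure rotation at 2 rad/s
```

The translation components cancel row by row, leaving only the common term −a·θ̇/r.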
2) For this, we need to design an input controller u = (u1, u2)ᵀ and an output controller ω = (ω1, ω2, ω3)ᵀ. The new inputs u1, u2 correspond to the desired angular velocity and acceleration. Let us choose the controller:

v̇ = u2
ω = A(θ)·(v·cos θ, v·sin θ, u1)ᵀ

The loop is represented in Figure 1.18. The looped system is then written as:

ẋ = v·cos θ
ẏ = v·sin θ
θ̇ = u1
v̇ = u2
Solution to Exercise 1.10 (modeling a tank)
1) The state vector cannot be chosen to be equal to (x, y, θ, ẋ, ẏ, θ̇)ᵀ, which would seem natural with respect to Lagrangian theory. Indeed, if this were our choice, some states would have no physical meaning. For instance, the state:

(x = 0, y = 0, θ = 0, ẋ = 1, ẏ = 1, θ̇ = 0)

has no meaning, since the tank is not allowed to skid. This phenomenon is due to the existence of wheels that create constraints between the natural state variables. Here, we necessarily have the so-called non-holonomic constraint:

ẏ = ẋ·tan θ
Figure 1.18. The robot with omni wheels, looped in this manner, behaves like a tank
Mechanical systems for which there are such equality constraints between the natural state variables (by natural state variables we mean the vector (q, q̇), where q is the vector of the degrees of freedom of the system) are said to be non-holonomic. When such a situation arises, it is useful to use these constraints in order to reduce the number of state variables, until no more constraints are left between the state variables.
2) This choice of state variables is easily understood in the sense that these variables allow us to draw the tank (x, y, θ), and the knowledge of v1, v2 allows us to calculate the variables ẋ, ẏ, θ̇. Moreover, every arbitrary choice of vector (x, y, θ, v1, v2) corresponds to a physically possible situation. The state equations of the system are:

ẋ = ((v1 + v2)/2)·cos θ
ẏ = ((v1 + v2)/2)·sin θ
θ̇ = (v2 − v1)/ℓ
v̇1 = R·u1
v̇2 = R·u2

where ℓ is the distance between the two wheels. The third relation, on θ̇, is obtained by the velocity composition rule (Varignon's formula). Indeed, we have:
v2 = v1 + C2C1 ∧ ω

where ω is the instantaneous rotation vector of the tank, and v1 and v2 are the velocity vectors of the centers of the wheels. Let us note that this is a vectorial relation that depends on the observer but is independent of the frame. Let us express it in the frame of the tank, represented in the figure. We must be careful not to confuse the observer, fixed on the ground, with the frame in which the relation is expressed. This equation is written as:

(v2, 0, 0)ᵀ = (v1, 0, 0)ᵀ + (0, ℓ, 0)ᵀ ∧ (0, 0, θ̇)ᵀ

We thus obtain v2 = v1 + ℓ·θ̇, or θ̇ = (v2 − v1)/ℓ.
Solution to Exercise 1.11 (modeling a car)
Let us consider an observer fixed with respect to the ground. Following the velocity composition rule (Varignon's formula), we have:

vA = vM + AM ∧ ω

where ω is the instantaneous rotation vector of the car. Let us express this vectorial relation in the frame of the car, which is represented in the figure:

(v·cos δ, v·sin δ, 0)ᵀ = (vM, 0, 0)ᵀ + (−L, 0, 0)ᵀ ∧ (0, 0, θ̇)ᵀ

where L is the distance between the front and rear axles. Therefore:

(v·cos δ, v·sin δ)ᵀ = (vM, 0)ᵀ + (0, L·θ̇)ᵀ

Thus:

θ̇ = v·sin δ/L

and:

ẋ = vM·cos θ = v·cos δ·cos θ
ẏ = vM·sin θ = v·cos δ·sin θ

The evolution equation of the car is therefore written as:

d/dt (x, y, θ, v, δ)ᵀ = (v·cos δ·cos θ, v·cos δ·sin θ, v·sin δ/L, u1, u2)ᵀ
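Since θ̇ = v·sin δ/L, a car driven with constant v and δ turns at a constant rate (and thus describes a circle). The heading produced by an Euler simulation of the model can be compared with the analytic value; a Python sketch with arbitrary test values:

```python
import math

L = 3.0                      # wheelbase, arbitrary test value
v, delta = 2.0, 0.3          # constant speed and steering angle
N, dt = 5000, 0.001          # 5 s of simulated time

x = y = theta = 0.0
for _ in range(N):
    # Euler integration of the car model derived above
    x += dt * v * math.cos(delta) * math.cos(theta)
    y += dt * v * math.cos(delta) * math.sin(theta)
    theta += dt * v * math.sin(delta) / L

# heading grows linearly since d(theta)/dt = v*sin(delta)/L is constant
theta_exact = v * math.sin(delta) / L * (N * dt)
```

The midpoint M of the rear axle moves at speed v·cos δ, so the corresponding turning radius is v·cos δ/θ̇ = L/tan δ.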
⎞⎟⎟⎟⎟⎟⎠Solution to Exercise 1.12 (car-trailer system)
By looking at the figure and using the state equations of the car, we have:

θ̇r = vr·sin δr/Lr

with:

vr = √(ẋ² + ẏ²) = √((v·cos δ·cos θ)² + (v·cos δ·sin θ)²) = v·cos δ
δr = θ − θr

The parameter Lr represents the distance between the attachment point and the middle of the axle of the trailer. However, only θr has to be added as a state variable to those of the car. Indeed, it is clear that xr, yr, vr, δr, ẋr, ẏr, ... can be obtained analytically from the sole knowledge of the state of the car and the angle θr. Thus, the state equations of the car-trailer system are given by:

d/dt (x, y, θ, θr, v, δ)ᵀ = (v·cos δ·cos θ, v·cos δ·sin θ, v·sin δ/L, v·cos δ·sin(θ − θr)/Lr, u1, u2)ᵀ
⎞⎟⎟⎟⎟⎟⎟⎟⎟⎠Solution to Exercise 1.13 (sailboat)
The modeling that we propose here is inspired by the article [JAU 04]. Even though it is simplistic, the obtained model remains relatively consistent with reality and is used for the simulation of robotic sailboats (such as Vaimos) in order to test the behavior of the controllers. In order to perform this modeling, we will use the fundamental principle of dynamics in translation (in order to obtain an expression of the tangential acceleration v̇) and then in rotation (in order to obtain an expression of the angular acceleration ω̇).

TANGENTIAL ACCELERATION v̇.– The wind exerts on the sail an orthogonal force with an intensity equal to:

fv = αv·(V·cos(θ + δv) − v·sin δv)

Concerning the water, it exerts on the rudder a force, orthogonal to the rudder, equal to:

fg = αg·v·sin δg

The friction force exerted by the water on the hull is assumed to be proportional to the square of the boat's velocity. The fundamental equation of dynamics, projected along the axis of the boat, gives:

m·v̇ = fv·sin δv − fg·sin δg − αf·v²

The radial acceleration can be considered as nil if we assume that the drift is perfect.

ANGULAR ACCELERATION ω̇.– Among the forces that act on the rotation of the boat, we find the forces fv and fg exerted on the sail and the rudder, but also an angular friction force, which we assume to be viscous. The fundamental equation of dynamics gives us:

J·ω̇ = dv·fv − dg·fg − αθ·ω

where:

dv = ℓ − rv·cos δv
dg = rg·cos δg

The state equations of the boat are therefore written as:

ẋ = v·cos θ                                          (i)
ẏ = v·sin θ − β·V                                    (ii)
θ̇ = ω                                               (iii)
δ̇v = u1                                             (iv)
δ̇g = u2                                             (v)
v̇ = (fv·sin δv − fg·sin δg − αf·v²)/m                (vi)
ω̇ = ((ℓ − rv·cos δv)·fv − rg·cos δg·fg − αθ·ω)/J     (vii)
fv = αv·(V·cos(θ + δv) − v·sin δv)                   (viii)
fg = αg·v·sin δg                                     (ix)
Let us note that the last two of these equations, (viii) and (ix), are not differential but algebraic. In order to be perfectly consistent with a state equation, we would need to remove these two equations as well as the two internal forces fv and fg that appear in equations (vi) and (vii).
Solution to Exercise 1.14 (direct current motor)
1) The equations governing the system are given by:

u = R·i + L·di/dt + e     (electrical part)
J·ω̇ = T − ρ·ω − Tr        (mechanical part)

However, the equations of a direct current machine are e = K·Φ·ω and T = K·Φ·i. Therefore:

u = R·i + L·di/dt + K·Φ·ω
J·ω̇ = K·Φ·i − ρ·ω − Tr

We then have the following linear state equations:

di/dt = −(R/L)·i − (K·Φ/L)·ω + u/L
dω/dt = (K·Φ/J)·i − (ρ/J)·ω − Tr/J

where the inputs are u and Tr and the state variables are i and ω. These equations can be written in matrix form:

d/dt (i, ω)ᵀ = [−R/L, −K·Φ/L; K·Φ/J, −ρ/J]·(i, ω)ᵀ + [1/L, 0; 0, −1/J]·(u, Tr)ᵀ
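These linear state equations can be simulated with an Euler scheme; for constant u and Tr, the state must converge to the equilibrium obtained by setting di/dt = dω/dt = 0. A Python sketch with arbitrarily chosen parameter values (KPhi stands for the product K·Φ):

```python
R, L, KPhi, J, rho = 1.0, 0.1, 0.5, 0.01, 0.1   # arbitrary test values
u, Tr = 10.0, 1.0

# equilibrium: solve -R i - KPhi w + u = 0 and KPhi i - rho w - Tr = 0
den = R*rho + KPhi**2
i_eq = (rho*u + KPhi*Tr) / den
w_eq = (KPhi*u - R*Tr) / den

i = w = 0.0
dt = 0.001
for _ in range(20000):   # 20 s of simulated time
    di = (-R/L)*i - (KPhi/L)*w + u/L
    dw = (KPhi/J)*i - (rho/J)*w - Tr/J
    i, w = i + dt*di, w + dt*dw
```

Since the A matrix is stable for these values, the transient dies out and the simulated state reaches the algebraic equilibrium.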
2) When the motor is running, the torque Tr is imposed by the load. The following table gives some typical mechanical characteristics, in continuous operation, of the pair (Tr, ω):

Tr = Cte      motor used for lifting
Tr = α·ω      locomotive, mixer, pump
Tr = α/ω      machine tool (lathe, milling machine)
Tr = α·ω²     ventilator, fast car
In our case, Tr = α·ω². The motor now only has a single input, the armature voltage u(t). We have:

di/dt = −(R/L)·i − (K·Φ/L)·ω + u/L
dω/dt = (K·Φ/J)·i − (ρ/J)·ω − (α/J)·ω²
This is a nonlinear system, and it now has only a single input u(t).
Solution to Exercise 1.15 (RLC circuit)
Let i1 be the electrical current through the resistor R1 (from top to bottom). Following the node and mesh rules, we have:

u(t) − v(t) − R1·i1(t) = 0              (mesh rule)
L·di/dt + R2·i(t) − R1·i1(t) = 0        (mesh rule)
i(t) + i1(t) − C·dv/dt = 0              (node rule)

Intuitively, we can understand that the memory of the system corresponds to the charge of the capacitor and the magnetic flux in the coil. Indeed, if these values are known at time t = 0, for a known input, the future of the system is determined in a unique manner. Thus, possible state variables are i(t) (proportional to the flux) and v(t) (proportional to the charge). We obtain the state equations by eliminating i1 from the previous equations and isolating di/dt and dv/dt. Of course, one equation must be removed. We obtain:

dv/dt = −(1/(C·R1))·v(t) + (1/C)·i(t) + (1/(C·R1))·u(t)
di/dt = −(1/L)·v(t) − (R2/L)·i(t) + (1/L)·u(t)
Note that the output is given by y(t) = R2·i(t). Finally, we reach the state representation of a linear system:

d/dt (v(t), i(t))ᵀ = [−1/(C·R1), 1/C; −1/L, −R2/L]·(v(t), i(t))ᵀ + (1/(C·R1), 1/L)ᵀ·u(t)
y(t) = (0, R2)·(v(t), i(t))ᵀ
Solution to Exercise 1.16 (the three containers)
1) With the aim of applying Bernoulli's relation to the left container, let us consider a flow tube, in other words a virtual tube (see figure) in which the water moves smoothly and does not cross the walls. Bernoulli's relation tells us that, at every point of this tube:

P + ρ·v²/2 + ρ·g·z = constant

where P is the pressure at the considered point, z its height and v the velocity of the water at this point. The coefficient ρ is the density of the water and g is the gravitational constant. Following Bernoulli's relation, we have:

PD + ρ·vD²/2 + ρ·g·zD = PA + ρ·vA²/2 + ρ·g·zA

in other words, the velocity vA at the surface being negligible:

PD = PA + ρ·g·(zA − zD) − ρ·vD²/2

Moreover, we may assume that C is far from the turbulence zone and that the water there is not moving. Therefore, we have the following Bernoulli relation:

PC + ρ·g·zC = PB + ρ·g·zB

in other words:

PC = PB + ρ·g·(zB − zC)

Let us note that, in the turbulence zone, the water is slowed down, but we can assume that the pressure does not change, i.e. PC = PD. Thus, we have:

PB + ρ·g·(zB − zC) = PA + ρ·g·(zA − zD) − ρ·vD²/2

As PA = PB = Patm and zC = zD, this equation becomes:

ρ·g·(zA − zB) = ρ·vD²/2

or:

vD = √(2g·(zA − zB))

In the case where the level of the right container is higher than that of the left one, a similar study gives us:

vD = −√(2g·(zB − zA))

The minus sign indicates that the flow now moves from the right container toward the left one. Thus, the general relation for the velocity of the water in the canal is:

vD = sign(zA − zB)·√(2g·|zA − zB|)

If a is the cross-section of the canal, the water flow from the right container to the left one is:

QD = a·sign(zA − zB)·√(2g·|zA − zB|)

This is the so-called Torricelli law.
REMARK.– Initially, this law was proven in a simpler context, where the water flows into a vacuum. The total energy of a fluid element of mass m is conserved if we consider it to be falling freely in the flow tube. Thus, for two points A and B, where A is at the surface (with vA = 0) and B in the tube, we have:

m·g·hA + (1/2)·m·vA² = m·g·hB + (1/2)·m·vB²

and therefore:

vB = √(2g·(hA − hB))

We can then deduce Torricelli's relation. This reasoning is only possible in the case of a perfect fluid, where, given the absence of friction, we assume that the tangential pressure forces are nil.
2) The state equations are obtained by writing that the rate of change of the volume of water in a container (here equal to ḣi, since each surface is 1 m²) is the sum of the incoming flows minus the sum of the outgoing flows, in other words:

ḣ1 = −Q1ext − Q12 + u1
ḣ2 = Q12 − Q23
ḣ3 = −Q3ext + Q23 + u2

or:

ḣ1 = −a·√(2g·h1) − a·sign(h1 − h2)·√(2g·|h1 − h2|) + u1
ḣ2 = a·sign(h1 − h2)·√(2g·|h1 − h2|) − a·sign(h2 − h3)·√(2g·|h2 − h3|)
ḣ3 = −a·√(2g·h3) + a·sign(h2 − h3)·√(2g·|h2 − h3|) + u2
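Since Q12 and Q23 each appear once with a plus sign and once with a minus sign, these equations imply d(h1 + h2 + h3)/dt = u1 + u2 − Q1ext − Q3ext: the canals only move water between containers. A Python sketch checking this balance at an arbitrary state:

```python
import math

g, a = 9.81, 0.01            # arbitrary test values
u1, u2 = 0.002, 0.001
h1, h2, h3 = 1.0, 0.4, 0.7

def Q(hi, hj):
    # Torricelli flow between two containers
    return a * math.copysign(1.0, hi - hj) * math.sqrt(2*g*abs(hi - hj))

Q1ext = a * math.sqrt(2*g*h1)
Q3ext = a * math.sqrt(2*g*h3)

dh1 = -Q1ext - Q(h1, h2) + u1
dh2 = Q(h1, h2) - Q(h2, h3)
dh3 = -Q3ext + Q(h2, h3) + u2

total = dh1 + dh2 + dh3
balance = u1 + u2 - Q1ext - Q3ext
```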
Solution to Exercise 1.17 (pneumatic cylinder)
The input of the system is the volumetric flow rate u of the air toward the cylinder chamber. We then have:

u = (V/n)·ṅ

where n is the number of moles of gas in the chamber and V is the volume of the chamber. The fundamental principle of dynamics gives us p·a − k·z = m·z̈. Therefore, the first two state equations are:

dz/dt = ż
dż/dt = (a·p − k·z)/m

The ideal gas law (p·V = n·R·T) here reads p·z·a = n·R·T. By differentiating, we obtain:

a·(ṗ·z + p·ż) = R·(ṅ·T + n·Ṫ)

By assuming an isothermal evolution (Ṫ = 0), this relation becomes:

a·(ṗ·z + p·ż) = R·ṅ·T = R·T·n·u/V = p·u

By isolating ṗ, we obtain the third state equation of our system, which is:

ṗ = (p/z)·(u/a − ż)

The state equations of the system are therefore, since x = (z, ż, p):

ẋ1 = x2
ẋ2 = (a·x3 − k·x1)/m
ẋ3 = −(x3/x1)·(x2 − u/a)

Solution to Exercise 1.18 (Fibonacci sequence)
1) The state equations are given by:

x1(k + 1) = x2(k)
x2(k + 1) = x1(k) + x2(k)
y(k) = x1(k) + x2(k)

with x1(0) = 1 and x2(0) = 0 as initial conditions. This system is called a Fibonacci system.

2) Let us now look for the recurrence relation associated with this system. For this, we need to express y(k), y(k + 1) and y(k + 2) as functions of x1(k) and x2(k). The resulting calculations are the following:

y(k) = x1(k) + x2(k)
y(k + 1) = x1(k + 1) + x2(k + 1) = x1(k) + 2·x2(k)
y(k + 2) = x1(k + 2) + x2(k + 2) = x1(k + 1) + 2·x2(k + 1) = 2·x1(k) + 3·x2(k)

In other words:

(y(k), y(k + 1), y(k + 2))ᵀ = [1, 1; 1, 2; 2, 3]·(x1(k), x2(k))ᵀ

By eliminating x1(k) and x2(k) from this system of three linear equations, we obtain a single equation:

y(k + 2) − y(k + 1) − y(k) = 0
The initial conditions are y(0) = y(1) = 1. It is generally in this form that the Fibonacci sequence is described.
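The two descriptions can be checked against each other in a few lines of Python: iterating the state equations produces a sequence y(k) that satisfies the recurrence y(k + 2) = y(k + 1) + y(k):

```python
# state equations: x1 = newborn couples, x2 = fertile couples
x1, x2 = 1, 0
y = []
for _ in range(12):
    y.append(x1 + x2)          # output y(k) = x1(k) + x2(k)
    x1, x2 = x2, x1 + x2       # state update
```

The first terms are 1, 1, 2, 3, 5, 8, ... — the Fibonacci sequence.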
Solution to Exercise 1.19 (bus network)
1) The timetable is given below.
k     | 1 | 2 | 3  | 4  | 5
x1(k) | 0 | 5 | 8  | 13 | 16
x2(k) | 0 | 3 | 8  | 11 | 16

Table 1.2. Timetable for the buses
We note a periodicity of 2 in the sequence progression.
2) The state equations are:

x1(k + 1) = max(x1(k) + 2, x2(k) + 5)
x2(k + 1) = max(x1(k) + 3, x2(k) + 3)
3) In max-plus matrix form, we have:

x(k + 1) = [2, 5; 3, 3] ⊗ x(k)
4) The problem comes from the fact that (ℝ, ⊕) is only a monoid and not a group (since the inverse of an element for ⊕ does not exist). Therefore, (ℝ, ⊕, ⊗) is not a ring (as would be needed for the usual matrix calculus) but a dioid.
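Iterating x(k + 1) = M ⊗ x(k) with M = [2, 5; 3, 3] reproduces Table 1.2. A minimal Python implementation of the max-plus matrix–vector product:

```python
def maxplus_matvec(M, x):
    # (M ⊗ x)_i = max_j (M[i][j] + x[j])
    return [max(Mij + xj for Mij, xj in zip(row, x)) for row in M]

M = [[2, 5], [3, 3]]
x = [0, 0]                       # x(1): both transitions crossed at t = 0
history = [x]
for _ in range(4):
    x = maxplus_matvec(M, x)
    history.append(x)
```

Note how linearity here means that the update is a single "matrix product" in the (⊕, ⊗) algebra, even though the underlying equations contain max operators.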
2
Simulation
In this chapter, we will show how to perform a computer simulation of a nonlinear system described by its state equations:

ẋ(t) = f(x(t), u(t))
y(t) = g(x(t), u(t))
This step is important in order to test the behavior of a system (controlled or not). Before presenting the simulation method, we will introduce the concept of vector fields. This concept will allow us to better understand the simulation method, as well as certain behaviors that can appear in nonlinear systems. We will also give several graphics concepts that are necessary for the graphical representation of our systems.
2.1. Concept of vector field
We will now present the concept of vector fields and show the manner in which they are useful to better understand the various behaviors of systems. We invite the reader to consult Khalil [KHA 02] for further details on this subject. A vector field is a continuous function f from ℝⁿ to ℝⁿ.
When n = 2, a graphical representation of the function f can be drawn. For instance, the vector field associated with the linear function:

f : ℝ² → ℝ²,  (x1, x2) ↦ (x1 + x2, x1 − x2)

is illustrated in Figure 2.1. In order to obtain this figure, we have taken a set of vectors from the initial set, following a grid. Then, for each grid vector x, we have drawn its image vector f(x), taking the vector x as its origin.
Figure 2.1. Vector field associated with a linear application
The MATLAB code that allowed us to generate such a field is given below:

axis([-2,2,-2,2]);
Mx = -2:0.5:2; My = -2:0.5:2;
[X1,X2] = meshgrid(Mx,My);
VX = X1+X2; VY = X1-X2;
quiver(Mx,My,VX,VY,'black');
This program can also be found in the file field_syslin.m. We may recognize in this figure the characteristic subspaces (dotted lines) of the linear application. We can also see that one eigenvalue is positive and the other is negative. This can be verified by analyzing the matrix of our linear application:

A = [1, 1; 1, −1]

Its eigenvalues are √2 and −√2, and the associated eigenvectors are:

v1 = (0.9239, 0.3827)ᵀ  and  v2 = (−0.3827, 0.9239)ᵀ

Let us note that the vector x represented in the figure is not an eigenvector, since x and f(x) are not collinear. However, all the vectors that belong to the characteristic subspaces (represented as dotted lines in the figure) are eigenvectors. Along the characteristic subspace associated with the negative eigenvalue, the field vectors point toward 0, whereas they point to infinity along the characteristic subspace associated with the positive eigenvalue.
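These numerical values can be verified in a few lines of Python (the book itself uses MATLAB): for A = [1, 1; 1, −1], the characteristic polynomial is λ² − trace(A)·λ + det(A) = λ² − 2, whose roots are ±√2, and A·v1 must be collinear with v1:

```python
import math

A = [[1.0, 1.0], [1.0, -1.0]]
v1 = [0.9239, 0.3827]            # eigenvector for sqrt(2), as in the text

# eigenvalues of a 2x2 matrix from its characteristic polynomial
trace = A[0][0] + A[1][1]
det = A[0][0]*A[1][1] - A[0][1]*A[1][0]
lams = [(trace + s*math.sqrt(trace**2 - 4*det))/2 for s in (1, -1)]

# A v1 should be (approximately) sqrt(2) * v1
Av1 = [A[0][0]*v1[0] + A[0][1]*v1[1],
       A[1][0]*v1[0] + A[1][1]*v1[1]]
ratio = [Av1[i]/v1[i] for i in range(2)]
```

The ratio is only approximately √2 because the printed eigenvector components are rounded to four decimals.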
For an autonomous system (in other words, one without input), the evolution is given by the equation ẋ(t) = f(x(t)). When f is a function from ℝ² to ℝ², we can obtain a graphical representation of f by drawing the vector field associated with f. The graph will then allow us to better understand the behavior of our system.
2.2. Graphical representation
In this section, we will give several concepts that are necessary for the graphical representation of systems during simulations.
2.2.1. Patterns
A pattern is a matrix with two or three rows (depending on whether the object is in the plane or in space) and n columns that represent the n vertices of a polygon meant to keep the shape of the object. It is important that the union of all the segments formed by two consecutive points of the pattern forms the edges of the polygon that we wish to represent. For instance, the pattern M of the chassis (see Figure 2.2) of the car (with the rear wheels) is given by:

M = [−1  4  5  5  4 −1 −1  0  0 −1  1  0  0 −1  1  0  0  3  3  3;
     −2 −2 −1  1  2  2 −2 −2 −3 −3 −3 −3  3  3  3  3  2  2  3 −3]
It is clear from the graph of the car in movement that the front wheels can move with respect to the chassis, as well as with respect to one another. They therefore cannot be incorporated into the pattern of the chassis. For the graph of the car, we will therefore have to use three patterns: that of the chassis, that of the front left wheel and that of the front right wheel. In MATLAB, the pattern M (here in two dimensions) can be drawn in blue using the instructions:
M=[-1 4 5 5 4 -1 -1 -1 0 0 -1 1 0 0 -1 1 0 0 3 3 3;
   -2 -2 -1 1 2 2 -2 -2 -2 -3 -3 -3 -3 3 3 3 3 2 2 3 -3];
plot(M(1,:),M(2,:),'blue');
Recall that in MATLAB, M(i,:) returns the ith row of the matrix M.
2.2.2. Rotation matrix
Let us recall that the jth column of the matrix of a linear application of Rⁿ → Rⁿ represents the image of the jth vector ej of the standard basis. Thus, the expression of a rotation matrix of angle θ in the plane R² is given by (see Figure 2.3):

R = ( cos θ  −sin θ
      sin θ   cos θ )
Figure 2.2. Car to be represented graphically
Figure 2.3. Rotation of angle θ
Concerning rotations in the space R³, it is important to specify the rotation axis. We can distinguish between 3 main rotations: the rotation of angle θ around the Ox axis, the rotation around the Oy axis and the rotation around the Oz axis. The associated matrices are, respectively, given by:

Rx = ( 1    0       0
       0   cos θ  −sin θ
       0   sin θ   cos θ ),   Ry = (  cos θ  0  sin θ
                                       0     1   0
                                      −sin θ 0  cos θ )

and

Rz = ( cos θ  −sin θ  0
       sin θ   cos θ  0
        0       0     1 )
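As a sanity check of these three matrices, here is a small Python sketch (an illustration added here; the book itself works in MATLAB, and all helper names are mine) that builds them and verifies that each is orthogonal, i.e. R·Rᵀ = I:

```python
import math

def rot_x(t):
    c, s = math.cos(t), math.sin(t)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(A):
    return [list(r) for r in zip(*A)]

# R * R^T must be the identity for any rotation matrix
theta = 0.7
for R in (rot_x(theta), rot_y(theta), rot_z(theta)):
    P = matmul(R, transpose(R))
    for i in range(3):
        for j in range(3):
            assert abs(P[i][j] - (1 if i == j else 0)) < 1e-12
```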
2.2.3. Homogeneous coordinates
Drawing two-dimensional or three-dimensional (3D) objects on a screen requires a series of affine transformations (rotations, translations and homotheties) of the form:

fi : Rⁿ → Rⁿ, x ↦ Ai x + bi

with n = 2 or 3. However, the manipulation of compositions of affine functions is not as simple as that of linear applications. The idea of the transformation into homogeneous coordinates is to transform a system of affine equations into a system of linear equations. Let us note first of all that an affine equation of type y = Ax + b can be rewritten in the form:

( y )   ( A  b ) ( x )
( 1 ) = ( 0  1 ) ( 1 )

We will, therefore, define the homogeneous transformation of a vector as follows:

x ↦ xh = ( x )
          ( 1 )
Thus, an equation such as:

y = A3 (A2 (A1 x + b1) + b2) + b3

where there is a composition of 3 affine transformations, can be rewritten as:

yh = ( A3  b3 ) ( A2  b2 ) ( A1  b1 ) xh
     ( 0   1  ) ( 0   1  ) ( 0   1  )
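To illustrate the benefit of homogeneous coordinates, the following Python sketch (an illustration added here, not book code; all names and numeric values are mine) composes three planar affine maps by a single product of 3 × 3 homogeneous matrices and checks the result against the direct nested evaluation y = A3(A2(A1 x + b1) + b2) + b3:

```python
def homogeneous(A, b):
    """Embed the affine map x -> A x + b into a 3x3 linear map."""
    return [[A[0][0], A[0][1], b[0]],
            [A[1][0], A[1][1], b[1]],
            [0.0, 0.0, 1.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def affine(A, b, x):
    return [A[0][0]*x[0] + A[0][1]*x[1] + b[0],
            A[1][0]*x[0] + A[1][1]*x[1] + b[1]]

# three arbitrary affine maps (illustrative values)
A1, b1 = [[1, 2], [0, 1]], [1, -1]
A2, b2 = [[0, -1], [1, 0]], [2, 0]
A3, b3 = [[2, 0], [0, 3]], [0, 5]

# nested evaluation
x = [3, 4]
y = affine(A3, b3, affine(A2, b2, affine(A1, b1, x)))

# single product of homogeneous matrices applied to xh = (x, 1)
H = matmul(homogeneous(A3, b3), matmul(homogeneous(A2, b2), homogeneous(A1, b1)))
xh = [[x[0]], [x[1]], [1.0]]
yh = matmul(H, xh)

assert abs(yh[0][0] - y[0]) < 1e-12 and abs(yh[1][0] - y[1]) < 1e-12
assert abs(yh[2][0] - 1.0) < 1e-12  # the last homogeneous coordinate stays 1
```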
By using Rodrigues' formula, which tells us that the rotation matrix around the vector w and of angle φ = ||w|| is given by:

Rw = exp (  0   −wz   wy
            wz   0   −wx
           −wy   wx   0  )

we can write a MATLAB function to generate a homogeneous rotation matrix of R³:

function R=Rotate(w)
A=[0 -w(3) w(2); w(3) 0 -w(1); -w(2) w(1) 0];
R=[expm(A),[0;0;0]; 0 0 0 1];
A function that generates a homogeneous translation matrix of a vector v of R³ is given below:
function T=Translate(v)
T=eye(4,4);
T(1:3,4)=v;
These two functions are given in the files Rotate.m and Translate.m.
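For readers without MATLAB, here is a Python sketch of the same two helpers (an illustration; the lowercase names rotate and translate are mine, mirroring the book's Rotate.m and Translate.m). It evaluates Rodrigues' formula in the closed form R = I + (sin φ/φ)K + ((1 − cos φ)/φ²)K², with K the skew matrix of w, instead of calling a matrix exponential:

```python
import math

def rotate(w):
    """4x4 homogeneous rotation of angle ||w|| around w (Rodrigues' formula)."""
    wx, wy, wz = w
    phi = math.sqrt(wx*wx + wy*wy + wz*wz)
    K = [[0.0, -wz, wy], [wz, 0.0, -wx], [-wy, wx, 0.0]]
    if phi < 1e-12:
        R3 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
    else:
        K2 = [[sum(K[i][k] * K[k][j] for k in range(3)) for j in range(3)]
              for i in range(3)]
        a, b = math.sin(phi) / phi, (1.0 - math.cos(phi)) / phi**2
        R3 = [[(1.0 if i == j else 0.0) + a * K[i][j] + b * K2[i][j]
               for j in range(3)] for i in range(3)]
    return [R3[0] + [0.0], R3[1] + [0.0], R3[2] + [0.0], [0.0, 0.0, 0.0, 1.0]]

def translate(v):
    """4x4 homogeneous translation by the vector v."""
    return [[1.0, 0.0, 0.0, v[0]],
            [0.0, 1.0, 0.0, v[1]],
            [0.0, 0.0, 1.0, v[2]],
            [0.0, 0.0, 0.0, 1.0]]

# a rotation of pi/2 around Oz sends e_x to e_y
R = rotate([0.0, 0.0, math.pi / 2])
ex = [1.0, 0.0, 0.0, 1.0]  # e_x in homogeneous coordinates
img = [sum(R[i][j] * ex[j] for j in range(4)) for i in range(4)]
assert abs(img[0]) < 1e-9 and abs(img[1] - 1.0) < 1e-9
```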
2.3. Simulation
In this section, we will present the integration method used to perform a computer simulation of a nonlinear system described by its state equations:

ẋ(t) = f(x(t), u(t))
y(t) = g(x(t), u(t))
This method is rather approximative, but remains simple to understand and is sufficient to describe the behavior of most robotized systems.
2.3.1. Euler’s method
Let dt be a number that is very small compared to the time constants of the system and which corresponds to the sampling period of the method (for example, dt = 0.01). The evolution equation is approximated by:

(x(t + dt) − x(t)) / dt ≈ f(x(t), u(t))

in other words:

x(t + dt) ≈ x(t) + f(x(t), u(t)) · dt

This equation can be interpreted as a first-order Taylor formula. From this, we can deduce the simulation algorithm (called Euler's method):
Algorithm EULER(in: x0)
1   x := x0; t := 0; dt := 0.01;
2   repeat
3       wait for the input u;
4       y := g(x, u);
5       return y;
6       x := x + f(x, u) · dt;
7       wait for interrupt from timer;
8       t := t + dt;
9   while true
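Stripped of its real-time synchronization, the loop above boils down to a few lines. As an illustration (a Python sketch added here, not book code), here is Euler's method applied to the toy scalar system ẋ = −x, whose exact solution is x(t) = e⁻ᵗ x(0):

```python
import math

def euler(f, x0, t_end, dt):
    """Integrate xdot = f(x) from 0 to t_end with Euler steps of size dt."""
    x, t = x0, 0.0
    while t < t_end - 1e-12:
        x = x + f(x) * dt      # x := x + f(x).dt
        t += dt
    return x

# toy system xdot = -x: the exact value at t = 1 is exp(-1)
x1 = euler(lambda x: -x, 1.0, 1.0, 0.001)
assert abs(x1 - math.exp(-1.0)) < 1e-3
```

The smaller dt is, the closer the result gets to e⁻¹, at the cost of more iterations.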
The timer creates a periodic interrupt every dt seconds. Thus, if the computer is sufficiently fast, the simulation is performed at the same speed as the physical system; we then refer to real-time simulation. In some circumstances, what we are interested in is obtaining the result of the simulation as quickly as possible (for instance, in order to predict how a system will behave in the future). In this case, it is not necessary to slow the computer down in order to synchronize it with physical time.
We call local error the quantity:

e_t = ||x(t + dt) − x̂(t + dt)||   with   x̂(t) = x(t)

where x(t + dt) is the exact solution of the differential equation ẋ = f(x, u) and x̂(t + dt) is the estimated value of the state vector given by the integration scheme being used. For Euler's method, we can show that e_t is of order 1, i.e. e_t = o(dt).
2.3.2. Runge–Kutta method
There are more efficient integration methods in which the local error is of order 2 or more. This is the case of the Runge–Kutta method of order 2, which consists of replacing the recurrence x(t + dt) := x(t) + f(x(t), u(t)) · dt with:

x(t + dt) = x(t) + dt · [ (1/4) · f(x(t), u(t)) + (3/4) · f( xE(t + (2/3)dt), u(t + (2/3)dt) ) ]

where xE(t + (2/3)dt) = x(t) + (2/3)dt · f(x(t), u(t)) can be interpreted as the integration obtained by Euler's method at time t + (2/3)dt. The quantity between the square brackets is a weighted average of an estimation of f(x(t), u(t)) and an estimation of f(x(t + (2/3)dt), u(t + (2/3)dt)). The local error e_t here is of order 2 and the integration method is, therefore, much more precise. There are Runge–Kutta methods of order higher than 2 that we will not discuss here.
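The scheme above (weights 1/4 and 3/4 with an Euler predictor at t + 2dt/3) can be checked on a toy system. This Python sketch (an illustration, not book code) confirms that, for the same step size, the second-order scheme is far more accurate than Euler's method:

```python
import math

def step_euler(f, x, t, dt):
    return x + f(x, t) * dt

def step_rk2(f, x, t, dt):
    # x(t+dt) = x + dt * [ 1/4 f(x,t) + 3/4 f(x + 2/3 dt f(x,t), t + 2/3 dt) ]
    k1 = f(x, t)
    k2 = f(x + (2.0 / 3.0) * dt * k1, t + (2.0 / 3.0) * dt)
    return x + dt * (0.25 * k1 + 0.75 * k2)

def integrate(stepper, f, x0, t_end, n):
    x, dt = x0, t_end / n
    for i in range(n):
        x = stepper(f, x, i * dt, dt)
    return x

f = lambda x, t: -x                       # toy system, exact x(1) = exp(-1)
err_euler = abs(integrate(step_euler, f, 1.0, 1.0, 100) - math.exp(-1.0))
err_rk2 = abs(integrate(step_rk2, f, 1.0, 1.0, 100) - math.exp(-1.0))
assert err_rk2 < err_euler / 100          # second order beats first order
```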
2.3.3. Taylor’s method
Euler's method (which is a first-order Taylor method) can be extended to higher orders. Let us show, without loss of generality, how to extend it to second order. We have:

x(t + dt) = x(t) + ẋ(t) · dt + ẍ(t) · dt²/2 + o(dt²)

But:

ẋ(t) = f(x(t), u(t))
ẍ(t) = (∂f/∂x)(x(t), u(t)) · ẋ(t) + (∂f/∂u)(x(t), u(t)) · u̇(t)

Therefore, the integration scheme becomes:

x(t + dt) = x(t) + dt · f(x(t), u(t)) + (dt²/2) · ( (∂f/∂x)(x(t), u(t)) · f(x(t), u(t)) + (∂f/∂u)(x(t), u(t)) · u̇(t) )
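On a system whose derivatives are easy to write by hand, the second-order scheme can be checked numerically. A Python sketch (an illustration added here) on ẋ = −x, for which ẍ = x, so one Taylor step reads x + dt·(−x) + (dt²/2)·x:

```python
import math

def taylor2_step(x, dt):
    # for xdot = -x we have xddot = x, hence:
    return x + dt * (-x) + (dt**2 / 2.0) * x

def euler_step(x, dt):
    return x + dt * (-x)

dt, n = 0.01, 100                        # integrate up to t = 1
x_taylor, x_euler = 1.0, 1.0
for _ in range(n):
    x_taylor = taylor2_step(x_taylor, dt)
    x_euler = euler_step(x_euler, dt)

exact = math.exp(-1.0)
# the second-order scheme is much closer to the exact solution
assert abs(x_taylor - exact) < abs(x_euler - exact) / 50
```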
2.4. Exercises
EXERCISE 2.1.– Vector field of the predator–prey system
The predator–prey system, also called a Lotka–Volterra system, is given by:

ẋ1(t) = (1 − x2(t)) x1(t)
ẋ2(t) = (x1(t) − 1) x2(t)
The state variables x1(t) and x2(t) represent the sizes of the prey and predator populations. For example, x1 could represent the number of prey in thousands, whereas x2 could be the number of predators in thousands. Even though the numbers of prey and predators are integers, we will assume that x1 and x2 are real-valued. The quadratic terms of this state equation represent the interactions between the two species. Let us note that the prey grow in an exponential manner when there are no predators. Similarly, the population of predators declines when there is no prey.
1) Figure 2.4 corresponds to the vector field associated with the evolution function:

f(x) = ( (1 − x2) x1
         (x1 − 1) x2 )

on the grid [0, 2] × [0, 2]. Discuss the dynamic behavior of the system using this figure.
2) Also using this figure, give the points of equilibrium. Verify them through calculation.
Figure 2.4. Vector field associated with the Lotka–Volterra system, in the plane (x1, x2)
Figure 2.5. Simple pendulum with state vector x = (x1, x2)ᵀ = (θ, θ̇)ᵀ
EXERCISE 2.2.– Vector field of a simple pendulum
Let us consider the simple pendulum described by the following state equations:

ẋ1 = x2
ẋ2 = −g sin x1
The vector field associated with the evolution function f(x) is drawn in Figure 2.6.

1) Using the graph, give the stable and unstable points of equilibrium.

2) Draw on the figure a path for the pendulum that could have been obtained by Euler's method.
EXERCISE 2.3.– Pattern of a cube
Let us consider the 3D cube [0, 1]× [0, 1]× [0, 1].
1) Give its pattern in matrix form.
2) What matrix operation must be performed in order to rotate it by an angle θ around the Ox axis?
Figure 2.6. Vector field associated with the simple pendulum in the phase plane (x1, x2) = (θ, θ̇)
EXERCISE 2.4.– Drawing a car
Here, we are looking to design a MATLAB function that we will call voiture_draw(x), which draws a car in a state x = (x, y, θ, v, δ)ᵀ, where x, y, θ correspond to the pose of the car (in other words, its position and orientation), v is the speed and δ is the angle of the front wheels.
1) Define, in homogeneous coordinates, a pattern for the chassis and a common pattern for the two front wheels.

2) Define the transformation matrices for the chassis, left front wheel and right front wheel.

3) Deduce from this a MATLAB function called voiture_draw(x) that draws the car in a given state x.
EXERCISE 2.5.– Simulation of a pendulum
Let us consider a pendulum described by the following differential equation:

θ̈ = −(g/ℓ) sin θ
where θ represents the angle of the pendulum. We will take g = 9.81 m·s⁻² and ℓ = 1 m. We initialize the pendulum at time t = 0 with θ = 1 rad and θ̇ = 0 rad·s⁻¹. Then, we let go of the pendulum. Write a small program (with a MATLAB-like syntax) that determines the angle of the pendulum at time t = 1 s. The program will have to use Euler's method.
EXERCISE 2.6.– Van der Pol system
Let us consider the system described by the following differential equation:

ÿ + (y² − 1) ẏ + y = 0
1) Let us choose as state vector x = (y, ẏ)ᵀ. Give the state equations of the system.
2) Linearize this system around the point of equilibrium. What are the poles of the system? Is the system stable around the equilibrium point?
3) The vector field associated with this system is represented in Figure 2.7 in the state space (x1, x2). We initialize the system at x0 = (0.1, 0)ᵀ. Draw on the figure the path x(t) of the system in the state space. Give the form of y(t).
4) Can a path have a loop?
5) In order to simulate this system, would it be better to employ Euler's method, the Runge–Kutta method, or will both integration schemes give equivalent behaviors?
EXERCISE 2.7.– Simulation of a car
Let us consider the car with the following state equations:

ẋ = v cos δ cos θ
ẏ = v cos δ sin θ
θ̇ = (v sin δ)/L
v̇ = u1
δ̇ = u2
Figure 2.7. Vector field associated with the Van der Pol system
The state vector is given by x = (x, y, θ, v, δ)ᵀ. Let us take the initial state of the car as x(0) = (0, 0, 0, 7, 0)ᵀ, which means that at time t = 0, the car is centered on the origin, with a nil heading angle, a speed of 7 m·s⁻¹ and the front wheels parallel to the axis of the car. We assume that the vectorial control u(t) remains constant and equal to (0, 0.2)ᵀ, which means that the car does not accelerate (since u1 = 0) and that the steering wheel turns at a constant speed of 0.2 rad·s⁻¹ (since u2 = 0.2). Give a MATLAB program that simulates the dynamic evolution of this car during 10 s with Euler's method and a sampling step of 0.01 s.
EXERCISE 2.8.– Integration by Taylor’s method
Let us consider a robot (such as a tank) described by the following state equations:

ẋ1 = x4 cos x3
ẋ2 = x4 sin x3
ẋ3 = x5
ẋ4 = u1
ẋ5 = u2
Propose a second-order integration scheme using Taylor's method. The input will be:

u = ( u1 ) = ( cos(t) )
    ( u2 )   ( sin(t) )

EXERCISE 2.9.– Radius of a tricycle wheel
Let us consider the 3D representation of a tricycle as shown in Figure 2.8. In this figure, the small black disks lie in the same horizontal plane at height r, and the small hollow disks lie in the horizontal plane at height zero.
The radius of the front wheel drawn in bold makes an angle α with the horizontal plane, as represented in the figure. Give, as a function of x, y, θ, δ, α, L, r, the expression of the transformation matrix (in homogeneous coordinates) that links the radius positioned on the Ox axis with that (in bold) of the front wheel. The expression must have a matrix product form.
EXERCISE 2.10.– Three-dimensional simulation of a tricycle in MATLAB
The driver of the tricycle of Figure 2.9 has two controls: the acceleration of the front wheel and the rotation speed of the steering wheel. The state variables of our system are composed of the position coordinates (the x, y coordinates of the center of the rear axle, the orientation θ of the tricycle and the angle δ of the front wheel) and of the speed v of the center of the front wheel. As seen in Exercise 1.11, the evolution equation of the tricycle (similar to that of the car) is written as:

ẋ = v cos δ cos θ
ẏ = v cos δ sin θ
θ̇ = (v sin δ)/L
v̇ = u1
δ̇ = u2
where L = 3 is the distance between the rear axle and the center of the front wheel. The state vector here is x = (x, y, θ, v, δ)ᵀ. The distance between each rear wheel and the axis of the tricycle is given by e = 2 m. The radius of the wheels is r = 1 m.
Figure 2.8. We are looking to draw a radius of one of the tricycle wheels
Figure 2.9. Robotized tricycle to be simulated
1) Create a pattern for the body of the tricycle. Rotate and translate it in 3D using the homogeneous coordinates and Rodrigues' formula.
2) Create a function tricycle_draw that draws this tricycle in three dimensions together with the wheels, as in Figure 2.10. This function will have as parameter the state vector x.

3) By using the speed composition rule, calculate the speeds that each wheel must have.

4) In order to rotate the wheels (numbered from 1 to 3 as in the figure) with the progress of the tricycle, we add 3 state variables α1, α2, α3 corresponding to the angles of the wheels. Give the new state equations of the system. Simulate the system in MATLAB.
Figure 2.10. Tricycle to be drawn
EXERCISE 2.11.– Manipulator arms
The manipulator robot represented in Figure 2.11 is composed of three arms in series. The first arm, of length 3, can rotate around the Oz axis. The second arm, of length 2, placed at the end of the first arm, can also rotate around the Oz axis. The third arm, of length 1, placed at the end of the second arm, can rotate around the axis formed by the second arm. This robot has 3 degrees of freedom x = (α1, α2, α3), where the αi represent the angles formed by each of the
arms. The basic pattern chosen to represent each of the arms is the unit cube. Each arm is assumed to be a parallelepiped of thickness 0.3. In order to give it the shape of the arm, the pattern must be subjected to an affinity, represented by a diagonal matrix. Then, it has to be rotated and translated in order to be correctly placed. Design a MATLAB program that simulates this system, with a 3D view inspired from the figure.
Figure 2.11. Manipulator robot composed of three arms
EXERCISE 2.12.– Simulation and control of a skating robot in MATLAB
The skating vehicle represented in Figure 2.12 is designed to move on a frozen lake and stands on five ice skates [JAU 10]. This system has two inputs: u1, the tangent of the angle β of the front skate (we have chosen the tangent as input in order to avoid singularities), and u2, the torque exerted on the articulation between the two sledges, which corresponds to the angle δ. The propulsion, therefore, only comes from the torque u2 and is similar to the mode of propulsion of a snake or an eel [BOY 06]. A control applied to u1, therefore, does not give any energy to the system.
Figure 2.12. Skating robot
1) Show that the system can be described by the following state equations:

ẋ = v cos θ
ẏ = v sin θ
θ̇ = v u1
v̇ = −(u1 + sin δ) u2 − v
δ̇ = −v (u1 + sin δ)
where v is the speed of the center of the front axle. This is of course a simplified and normalized model where, for reasons of simplicity, the coefficients (masses, viscous friction, interaxle distances, etc.) have been chosen unitary.
2) Simulate this system by using Euler's method for 2 s. We will take as initial condition x = (0, 0, 0, 3, 0.5)ᵀ, as sampling period dt = 0.05 s and as input u = (1, 0)ᵀ. Discuss the result.
3) We will now attempt to control this system using biomimicry, i.e. by attempting to reproduce the propulsion of the snake. We choose u1 to be of the form:
u1 = p1 cos (p2t) + p3
where p1 is the amplitude, p2 is the pulsation and p3 is the bias. We choose u2 so that the propulsion torque is always a driving force, in other words δ̇·u2 ≥ 0. Indeed, the term δ̇·u2 corresponds to the power brought in by the robot, which is transformed into kinetic energy. Program this control and make the right choice for the parameters that allow us to ensure an efficient propulsion. Reproduce the two behaviors illustrated in Figure 2.13 on your computer.
Figure 2.13. Simulations of the controlled skating robot
4) Add a second control loop that controls the parameters pi of your controller in order for your robot to be able to follow a desired heading θ̄.
2.5. Solutions
Solution to Exercise 2.1 (vector field of the predator–prey system)
1) The evolution of the state vector of the system is done in the direction of the arrows (since ẋ = f(x)). We can thus observe that the evolution of the system is periodic and that the state traverses an almost-circular curve (centered on (1, 1)) in the direct trigonometric direction.
2) The points (1, 1) and (0, 0) in the figure (represented by small black disks) appear to be points of equilibrium of our system, since the vector field cancels out there. Let us verify this by calculation. The points of equilibrium satisfy the equation f(x) = 0, i.e.:

(1 − x2) x1 = 0
(x1 − 1) x2 = 0

The first point of equilibrium is x = (0, 0)ᵀ, which corresponds to a situation in which neither of the two species exists. The second point of equilibrium is given by x = (1, 1)ᵀ and corresponds to a situation of ecological equilibrium.
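This verification is immediate to script. A small Python check (an illustration added here, not part of the book) evaluating the evolution function at both candidate points:

```python
def f(x1, x2):
    """Lotka-Volterra evolution function."""
    return ((1 - x2) * x1, (x1 - 1) * x2)

# both equilibrium candidates cancel the vector field
assert f(0, 0) == (0, 0)
assert f(1, 1) == (0, 0)
# a generic point does not
assert f(0.5, 0.5) != (0, 0)
```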
Solution to Exercise 2.2 (vector field of a simple pendulum)
1) The small black disks represent the points of equilibrium. The first, the third and the last points correspond to the situation where the pendulum is at the bottom, in stable equilibrium. The second and fourth points correspond to the situation where the pendulum is at the top, in an unstable equilibrium. Around a stable point of equilibrium, the state vector tends to rotate around this point, thus forming a cycle.
2) Figure 2.14 represents the path of the pendulum obtained by Euler's method in the state space for the initial condition θ = 1 and θ̇ = 0. The pendulum, which is not subject to any friction, should normally traverse a cycle. However, Euler's method gives a bit of energy to the pendulum at each step, which explains its path that tends to diverge. This comes from the fact that the field vector, even though tangent to the path of the system, tends to leave the cycle. It is clear that Euler's method cannot be used to simulate conservative systems (such as the planetary system), which do not have friction and tend to perform cycles. However, for dissipative systems (with friction), Euler's method often proves to be sufficient to correctly describe the behavior of the system. For our pendulum, Euler's method tends to give some energy to the pendulum which, after around 10 oscillations, starts to perform complete revolutions around its axis, in the indirect trigonometric direction.
Figure 2.14. Path of the pendulum in the state space
Solution to Exercise 2.3 (pattern of a cube)
1) The pattern:

M = ( 0 1 1 0 0 0 1 1 0 0 0 0 1 1 1 1
      0 0 0 0 0 1 1 1 1 1 1 0 0 1 1 0
      0 0 1 1 0 0 0 1 1 0 1 1 1 1 0 0 )

is associated with the unit cube [0, 1]³ of R³. Let us note that this pattern is composed of 16 columns instead of 13 (the cube has 12 edges, so a path along all of them visits at least 13 vertices). This comes from the fact that, in order to draw all the edges of the cube without lifting the pen, some edges have to be retraced, and we therefore need to go through more than 13 vertices of the cube.
2) We perform the following matrix operation:

M := ( 1    0       0
       0   cos θ  −sin θ
       0   sin θ   cos θ ) · M
Solution to Exercise 2.4 (drawing a car)
1) For the chassis, we can take the pattern Mchassis given by:

( −1  4  5  5  4 −1 −1  0  0 −1  1  0  0 −1  1  0  0  3  3  3
  −2 −2 −1  1  2  2 −2 −2 −3 −3 −3 −3  3  3  3  3  2  2  3 −3
   1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1 )
Here, we have taken the pattern for the chassis of the car and, in order to make it homogeneous, we have added a row of 1s. For the pattern of the left front wheel (as well as the right front wheel), we will take, in homogeneous coordinates, the following pattern:

Mwheel = ( −1  1
            0  0
            1  1 )

2) We have to subject the chassis to a rotation of angle θ and a translation of vector (x, y). For this, it is sufficient to multiply M from the left-hand side by the matrix:

Rchassis = ( cos θ  −sin θ  x
             sin θ   cos θ  y
              0       0     1 )

For the left front wheel, we give it a rotation of angle δ followed by a translation of (3, 3), then rotate it again by θ and translate it by (x, y). The resulting transformation matrix is:
Rleft_front_wheel = ( cos θ  −sin θ  x )   ( cos δ  −sin δ  3 )
                    ( sin θ   cos θ  y ) · ( sin δ   cos δ  3 )
                    (  0       0     1 )   (  0       0     1 )

A similar matrix can be obtained for the right front wheel.
3) We first define the pattern for the chassis of the car (with its rear wheels) together with a pattern for a front wheel (left or right). They are then subjected to the transformations, and the transformed patterns are finally drawn. We obtain the following MATLAB function:
function voiture_draw(x)
Mch=[-1 4 5 5 4 -1 -1 -1 0 0 -1 1 0 0 -1 1 0 0 3 3 3;
     -2 -2 -1 1 2 2 -2 -2 -2 -3 -3 -3 -3 3 3 3 3 2 2 3 -3;
     ones(1,21)]; % pattern for the chassis
Mav=[-1 1;0 0;1 1]; % pattern for a front wheel
Rch=[cos(x(3)),-sin(x(3)),x(1);sin(x(3)),cos(x(3)),x(2);0 0 1];
Mch=Rch*Mch;
Mavd=Rch*[cos(x(5)),-sin(x(5)),3;sin(x(5)),cos(x(5)),3;0 0 1]*Mav;
Mavg=Rch*[cos(x(5)),-sin(x(5)),3;sin(x(5)),cos(x(5)),-3;0 0 1]*Mav;
plot(Mch(1,:),Mch(2,:),'blue');
plot(Mavd(1,:),Mavd(2,:),'black');
plot(Mavg(1,:),Mavg(2,:),'black');
Solution to Exercise 2.5 (simulation of a pendulum)
The state equations of the pendulum are:

ẋ1 = x2
ẋ2 = −(g/ℓ) sin x1
The MATLAB program allowing us to simulate the pendulum during 1 s is given below. It computes an approximation of the vector x corresponding to the state of the pendulum at t = 1 s. The angle corresponds to x1.
L=1;g=9.81;dt=0.01; % initialization
x= [1;0]; % initial condition
for t=0:dt:1,
x=x+ [x(2); -(g/L)*sin(x(1))]*dt;
end;
This program is also given in the file pendule_main.m. It calls the evolution function pendule_f.m and the drawing function pendule_draw.m.
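For readers without MATLAB, a direct Python transcription of this simulation is sketched below (an illustration added here; the variable names are mine). Since Euler's method injects a little energy at each step, we only check that the angle stays near its initial amplitude over 1 s:

```python
import math

g, l, dt = 9.81, 1.0, 0.01     # gravity, rod length, sampling period
x = [1.0, 0.0]                 # initial state: theta = 1 rad, thetadot = 0
t = 0.0
while t < 1.0 - 1e-12:
    # Euler step on (thetadot, -(g/l) sin(theta))
    x = [x[0] + x[1] * dt,
         x[1] - (g / l) * math.sin(x[0]) * dt]
    t += dt

theta = x[0]
# over 1 s the drift of Euler's method is small, so the amplitude
# stays close to the initial 1 rad
assert abs(theta) < 1.2
```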
Solution to Exercise 2.6 (Van der Pol system)
1) The state equations are:

ẋ1 = x2
ẋ2 = −(x1² − 1) x2 − x1

2) The point of equilibrium satisfies:

0 = x2
0 = (x1² − 1) x2 + x1

i.e. x1 = x2 = 0. By linearizing around x = (0, 0)ᵀ, we obtain:

ẋ1 = x2
ẋ2 = x2 − x1

that is:

ẋ = (  0  1 ) x
    ( −1  1 )
The characteristic polynomial is s² − s + 1. The eigenvalues of the evolution matrix are therefore 1/2 ± (√3/2)i. The system is unstable.
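This conclusion can be checked numerically. A Python sketch (an illustration added here) computing the roots of the characteristic polynomial s² − s + 1 via the quadratic formula:

```python
import cmath

# characteristic polynomial of the linearized Van der Pol system: s^2 - s + 1
a, b, c = 1.0, -1.0, 1.0
disc = cmath.sqrt(b * b - 4 * a * c)
s1 = (-b + disc) / (2 * a)
s2 = (-b - disc) / (2 * a)

# eigenvalues 1/2 +- i sqrt(3)/2: positive real part, hence instability
assert abs(s1.real - 0.5) < 1e-12 and abs(s2.real - 0.5) < 1e-12
assert abs(abs(s1.imag) - 3 ** 0.5 / 2) < 1e-12
assert s1.real > 0  # the equilibrium is unstable
```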
3) The path is drawn in Figure 2.15.
We can observe a stable limit cycle. The output y(t) is given in Figure 2.16.

The Van der Pol system is an oscillator. Whatever the initial conditions are, it quickly starts oscillating with constant frequency and amplitude.
4) No, it is not possible to have a loop, since the system is deterministic: for each x, there is only one corresponding ẋ.
5) The system is structurally stable; in other words, every infinitely small perturbation of the vector field leaves its stability properties unchanged. This property is common for robots, but not for conservative mechanical systems (such as the frictionless pendulum: the smallest friction alters its stability properties). Both integration schemes will, therefore, probably give an equivalent behavior.
Figure 2.15. Path for the Van der Pol system obtainedby Euler’s method
Figure 2.16. Output y (t) obtained for the Van der Pol oscillator
Solution to Exercise 2.7 (simulation of a car)
The complete program of the simulation of the car is given below.
x=[0;0;0;7;0]; u=[0;0.2]; dt=0.01;
for t=0:dt:10,
   xpoint=[x(4)*cos(x(5))*cos(x(3)); x(4)*cos(x(5))*sin(x(3)); x(4)*sin(x(5))/3; u(1); u(2)];
   x=x+xpoint*dt;
end;
Solution to Exercise 2.8 (integration using Taylor's method)
We have:

( ẍ1 )   ( ẋ4 cos x3 − x4 ẋ3 sin x3 )   ( u1 cos x3 − x4 x5 sin x3 )
( ẍ2 )   ( ẋ4 sin x3 + x4 ẋ3 cos x3 )   ( u1 sin x3 + x4 x5 cos x3 )
( ẍ3 ) = ( ẋ5                       ) = ( u2                        )
( ẍ4 )   ( u̇1                       )   ( u̇1                        )
( ẍ5 )   ( u̇2                       )   ( u̇2                        )
Thus, the second-order Taylor integration scheme is written as:

( x1(t+dt) )   ( x1(t) )        ( x4(t) cos x3(t) )            ( u1(t) cos x3(t) − x4(t) x5(t) sin x3(t) )
( x2(t+dt) )   ( x2(t) )        ( x4(t) sin x3(t) )            ( u1(t) sin x3(t) + x4(t) x5(t) cos x3(t) )
( x3(t+dt) ) = ( x3(t) ) + dt · ( x5(t)           ) + (dt²/2) · ( u2(t)                                    )
( x4(t+dt) )   ( x4(t) )        ( u1(t)           )            ( u̇1(t)                                    )
( x5(t+dt) )   ( x5(t) )        ( u2(t)           )            ( u̇2(t)                                    )

where the second term is f(x(t), u(t)) and the third is (∂f/∂x)(x(t), u(t))·f(x(t), u(t)) + (∂f/∂u)(x(t), u(t))·u̇(t). In order to be able to apply this integration scheme, we must take:

u(t) = ( cos t )   and   u̇(t) = ( −sin t )
       ( sin t )                (  cos t )
Solution to Exercise 2.9 (radius of a tricycle wheel)
This transformation matrix is given by:

R := ( 1 0 0 x )   ( exp( 0 −θ 0 ; θ 0 0 ; 0 0 0 )  0 )   ( 1 0 0 L )   ( exp( 0 −δ 0 ; δ 0 0 ; 0 0 0 )  0 )   ( exp( 0 0 α ; 0 0 0 ; −α 0 0 )  0 )
     ( 0 1 0 y )   (                                 0 )   ( 0 1 0 0 )   (                                 0 )   (                                 0 )
     ( 0 0 1 0 ) · (                                 0 ) · ( 0 0 1 r ) · (                                 0 ) · (                                 0 )
     ( 0 0 0 1 )   (  0 0 0                          1 )   ( 0 0 0 1 )   (  0 0 0                          1 )   (  0 0 0                          1 )

where the five factors are, from left to right: the translation of the body, the rotation of the body (angle θ around Oz), the positioning of the front wheel, the rotation of δ with respect to Oz and the rotation of α with respect to Oy. In each rotation factor, exp(·) denotes the matrix exponential of the 3 × 3 skew-symmetric matrix whose rows are separated by semicolons, completed into a 4 × 4 homogeneous matrix with a zero translation.
Solution to Exercise 2.10 (three-dimensional simulation of a tricycle in MATLAB)
1) A matrix in homogeneous coordinates is composed of 4 rows and as many columns as there are points in the pattern. The last row only contains 1s. For instance, for the unit cube, we have:

M = ( 0 1 1 0 0 0 1 1 0 0 0 0 1 1 1 1
      0 0 0 0 0 1 1 1 1 1 1 0 0 1 1 0
      0 0 1 1 0 0 0 1 1 0 1 1 1 1 0 0
      1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 )

Rodrigues' formula tells us that the rotation matrix R of angle ||ω|| around ω is given by:

R = exp (  0   −ωz   ωy
           ωz   0   −ωx
          −ωy   ωx   0  )

Thus, in order to rotate the pattern M and then translate it by (tx, ty, tz), we perform the following operation:

M := ( 1 0 0 tx )   ( exp(  0   −ωz   ωy ; ωz  0  −ωx ; −ωy  ωx  0 )  0 )
     ( 0 1 0 ty )   (                                                 0 )
     ( 0 0 1 tz ) · (                                                 0 ) · M
     ( 0 0 0 1  )   (  0 0 0                                          1 )

where the first factor is the translation and the second is the homogeneous rotation.
2) The drawing function is given below.
function tricycle3D_draw(x,L,e,t)
M = [-1 2 1 0 -1 -1 2 1 0 -1 0 0 1 1 2 L 2 0 0 0; ...
     -1 -1 -0.5 -0.5 -1 1 1 0.5 0.5 1 0.5 -0.5 -0.5 0.5 1 0 -1 -1 -e e; ...
     0 0 1 1 0 0 0 1 1 0 1 1 1 1 0 0 0 0 0 0; ...
     1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1];
wheel = [];
for a=0:pi/8:2*pi, % pattern of the wheel
    radius = [cos(a); 0; sin(a); 1];
    wheel = [wheel, radius, [0;0;0;1], radius];
end
Rcam = Rotate([0.8;0;0]); % position the camera
Rchassis = Rcam*Translate([x(1);x(2);0])*Rotate([0;0;x(3)]);
M1 = Rchassis*M;
wheel1 = Rchassis*Translate([0;-2;0])*Rotate([0;x(6);0])*wheel;
wheel2 = Rchassis*Translate([0;2;0])*Rotate([0;x(7);0])*wheel;
wheel3 = Rchassis*Translate([3;0;0])*Rotate([0;0;x(5)])*Rotate([0;x(8);0])*wheel;
axis([-7,7,-5,5]); % axes
plot(M1(1,:),M1(3,:),'black'); % chassis
plot(wheel1(1,:),wheel1(3,:),'blue'); % wheel 1
plot(wheel2(1,:),wheel2(3,:),'blue'); % wheel 2
plot(wheel3(1,:),wheel3(3,:),'green'); % wheel 3
3) Following the speed composition rule, we have, for the first wheel:

v_R1 = v_A + R1A→ ∧ ω→

or, by expressing this relation in the frame of the tricycle (see the figure of the problem statement):

( r α̇1 )   ( v cos δ )   ( L )   ( 0 )
(  0   ) = ( v sin δ ) + ( e ) ∧ ( 0 )
(  0   )   (   0     )   ( 0 )   ( θ̇ )

or:

r α̇1 = v cos δ − e θ̇ = v cos δ + (e v sin δ)/L

Thus:

α̇1 = (v/r) (cos δ + (e sin δ)/L)

For the second wheel, the same reasoning applies. We obtain:

α̇2 = (v/r) (cos δ − (e sin δ)/L)

For the third wheel, we have:

α̇3 = v/r
4) We obtain:

ẋ = v cos δ cos θ
ẏ = v cos δ sin θ
θ̇ = (v sin δ)/L
v̇ = u1
δ̇ = u2
α̇1 = (v/r) (cos δ + (e sin δ)/L)
α̇2 = (v/r) (cos δ − (e sin δ)/L)
α̇3 = v/r

The simulation program is given below.
L = 3; % distance between the axles
e = 2; % distance between the wheel and the car axis
x = [-1;-1;0.8;1;0.3;0;0;0];
% x = (x,y,theta,v,delta,a1,a2,a3)
% ai is the angle of wheel i
dt = 0.01;
for t = 0:dt:5;
    tricycle3D_draw(x,L,e,t);
    u = [0.1;0.1];
    x = x+tricycle3D_f(x,u,L,e)*dt;
end;
This program is available in the file tricycle3D_main.m.
Solution to Exercise 2.11 (manipulator arms)
Table 2.1 represents the series of transformations to apply to the pattern in order to represent each arm. As indicated in this table, the second arm has to be subjected to all the transformations applied to the first arm, and the third arm to those applied to the second arm.
Arm 1               Arm 2               Arm 3
Diag(3, 0.3, 0.3)   Diag(2, 0.3, 0.3)   Diag(1, 0.3, 0.3)
Rotz(α1)            Rotz(α2)            Rotx(α3)
Rotx(0.5)           Trans(3, 0, 0)      Trans(2, 0, 0)
                    Rotz(α1)            Rotz(α2)
                    Rotx(0.5)           Trans(3, 0, 0)
                                        Rotz(α1)
                                        Rotx(0.5)
Table 2.1. Transformation for the three arms of the robot
The MATLAB program (see file bras3D.m) below simulates the movement of these arms.
M0 = [0 1 1 0 0 0 1 1 0 0 0 0 1 1 1 1; ...
0 0 0 0 0 1 1 1 1 1 1 0 0 1 1 0; ...
0 0 1 1 0 0 0 1 1 0 1 1 1 1 0 0; ...
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1] ;
M0=Translate([0;-0.5;-0.5])*M0;
Rcam=Rotate([0.5;0;0]); % camera
x = [0;0;0]; % initial state
dt = 0.005;
w = [2;-4;5]; % angular speeds
for t=0:dt:3,
M1=diag([3,0.3,0.3,1])*M0; R1=Rcam*Rotate([0;0;
x(1)]);
M2=diag([2,0.3,0.3,1])*M0; R2=Translate([3;0;
0])*Rotate([0;0;x(2)]);
M3=diag([1,0.3,0.3,1])*M0; R3=Translate([2;0;
0])*Rotate([x(3);0;0]);
M1=R1*M1; M2=R1*R2*M2; M3=R1*R2*R3*M3;
plot(M1(1,:), M1(2,:), '-blue.');
plot(M2(1,:), M2(2,:), '-green.');
plot(M3(1,:), M3(2,:), '-black.');
x = x + dt*w;
end;
Solution to Exercise 2.12 (simulation and control of a skating vehicle in MATLAB)
1) The angular speed of the sledge is given by:

θ̇ = (v1 sin β)/L1

where v1 is the speed of the front skate and L1 is the distance between the front skate and the center of the axle of the front sledge. If v corresponds to the speed of the middle of the front sledge axle, we have v = v1 cos β. These two relations give us:

θ̇ = (v tan β)/L1 = (v u1)/L1

From the point of view of the rear sledge, everything happens as if there were a virtual wheel in the middle of the front sledge axle that moves together with it. Thus, by applying the formula above, the angular speed of the rear sledge is θ̇ + δ̇ = −(v sin δ)/L2, where L2 is the distance between the centers of the axles. And therefore:

δ̇ = −(v sin δ)/L2 − θ̇ = −(v sin δ)/L2 − (v u1)/L1
Following the kinetic energy theorem, the temporal derivative of the kinetic energy is equal to the sum of the powers brought into the system, i.e.:

d/dt ( (1/2) m v² ) = u2·δ̇ − (αv)·v

where u2·δ̇ is the engine power, (αv)·v is the dissipated power and α is the coefficient of viscous friction. For reasons of simplification, we have assumed that the friction force is αv, which amounts to assuming that only the front sledge brakes. We then have:

m v v̇ = u2·δ̇ − αv² = u2·( −(v sin δ)/L2 − (v u1)/L1 ) − αv²

or:

m v̇ = u2·( −(sin δ)/L2 − u1/L1 ) − αv

The system can be described by the following state equations:

ẋ = v cos θ
ẏ = v sin θ
θ̇ = v u1
v̇ = −(u1 + sin δ) u2 − v
δ̇ = −v (u1 + sin δ)

where, to simplify, the coefficients (mass m, coefficient of viscous friction α, interaxle distances L1, L2, etc.) have been chosen unitary.
2) First of all, we create in the file snake_f.m the following evolution function:

function xdot=snake_f(x,u)
theta=x(3); v=x(4); delta=x(5);
xdot=[v*cos(theta); v*sin(theta); v*u(1); ...
      -u(2)*(u(1)+sin(delta))-v; -v*(u(1)+sin(delta))];
Then, we program Euler's method in the file snake_main.m as follows:
x= [0;0;0;3;0.5]; dt=0.05;
for t=0:dt:2,
u= [1;0];
x=x+snake_f(x,u)*dt;
snake_draw(x,u);
end;
3) We choose u2 in order for the propulsion torque to be a driving force, in other words δ̇·u2 ≥ 0. Indeed, δ̇·u2 corresponds to the power given to the robot, which is transformed into kinetic energy. If u2 is bounded by the interval [−p4, p4], we choose a bang-bang-type control for u2 of the form:

u2 = p4 sign(δ̇)

which amounts to exerting a maximum propulsion. The chosen control is therefore:

u = ( p1 cos(p2 t) + p3
      p4 sign(−v (u1 + sin δ)) )
The parameters of the control remain to be determined. The bias parameter p3 is allowed to vary. The available engine torque gives us p4. The parameter p1 is directly linked to the amplitude of the oscillation created during the movement. Finally, the parameter p2 gives the frequency of the oscillations. Simulations can help us to correctly fix the parameters p1 and p2. Figure 2.13 illustrates two simulations where the robot starts with a quasi-nil speed. In the upper simulation, the bias p3 is nil. In the lower simulation, we have p3 > 0. The main program (see file snake_main.m) is given below:
```matlab
x = [0;0;2;0.1;0];   % x, y, theta, v, delta
dt = 0.05; p1 = 0.5; p2 = 3; p3 = 0; p4 = 5; thetabar = pi/6;
for t = 0:dt:10,
  u1 = p1*cos(p2*t) + p3;
  u = [u1; p4*sign(-x(4)*(u1+sin(x(5))))];
  x = x + snake_f(x,u)*dt;
  snake_draw(x,u);
end;
```
4) We control the bias p3 by a sawtooth-type proportional control of the form:

$$p_3 = \operatorname{atan}\left(\tan\left(\frac{\bar\theta - \theta}{2}\right)\right)$$

in order to avoid jumps of 2π. In MATLAB:

```matlab
p3 = atan(tan((thetabar - x(3))/2));
```
3
Linear Systems
The study of linear systems [BOU 06] is fundamental for the proper understanding of the concepts of stability and the design of linear controllers. Let us recall that linear systems are of the form:

$$\begin{cases}\dot{x}(t) = Ax(t) + Bu(t)\\ y(t) = Cx(t) + Du(t)\end{cases}$$

for continuous-time systems and:

$$\begin{cases}x(k+1) = Ax(k) + Bu(k)\\ y(k) = Cx(k) + Du(k)\end{cases}$$

for discrete-time systems.
3.1. Stability
A linear system is stable (also called asymptotically stable in the literature) if, after a sufficiently long period of time, the state no longer depends on the initial conditions, no matter what they are. This means (see Exercises 3.1 and 3.2) that:

$$\lim_{t\to\infty} e^{At} = 0_n \ \text{if the system is continuous-time}, \qquad \lim_{k\to\infty} A^k = 0_n \ \text{if the system is discrete-time}$$
In this expression, we see the concept of the matrix exponential. The exponential of a square matrix M of dimension n can be defined through its integer series development:

$$e^M = I_n + M + \frac{1}{2!}M^2 + \frac{1}{3!}M^3 + \cdots = \sum_{i=0}^{\infty}\frac{1}{i!}M^i$$

where $I_n$ is the identity matrix of dimension n. It is clear that $e^M$ is of the same dimension as M. Here are some of the important properties concerning the exponentials of matrices. If $0_n$ is the zero $n\times n$ matrix and if M and N are two $n\times n$ matrices, then:

$$e^{0_n} = I_n, \qquad e^M e^N = e^{M+N}\ \text{(if the matrices commute)}, \qquad \frac{d}{dt}\left(e^{Mt}\right) = M\,e^{Mt}$$
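The series definition can be checked numerically. The following sketch (in Python with NumPy/SciPy rather than the MATLAB used elsewhere in this book — an assumption made only for illustration) compares a truncated version of the series with `scipy.linalg.expm`:

```python
import numpy as np
from scipy.linalg import expm

def expm_series(M, terms=30):
    """Truncated integer-series definition e^M = sum_i M^i / i!."""
    n = M.shape[0]
    result = np.eye(n)
    term = np.eye(n)
    for i in range(1, terms):
        term = term @ M / i          # builds M^i / i! incrementally
        result = result + term
    return result

M = np.array([[0.0, 1.0], [-1.0, 0.0]])   # illustrative matrix
assert np.allclose(expm_series(M), expm(M))
assert np.allclose(expm_series(np.zeros((2, 2))), np.eye(2))  # e^{0_n} = I_n
```

Thirty terms are far more than needed for a matrix of small norm; in practice one simply calls `expm`, which uses a more robust algorithm than the raw series.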
CRITERION OF STABILITY.– There is a criterion of stability that only depends on the matrix A. A continuous-time linear system is stable if and only if all the eigenvalues of its evolution matrix A have strictly negative real parts. A discrete-time linear system is stable if and only if all the eigenvalues of A are strictly inside the unit circle. This criterion is proven in Exercise 3.3.
Whether for continuous-time or discrete-time linear systems, the position of the eigenvalues of A is of paramount importance in the study of the stability of a linear system. The characteristic polynomial of a linear system is defined as the characteristic polynomial of the matrix A, given by the formula:

$$P(s) = \det(sI_n - A)$$
Its roots are the eigenvalues of A. Indeed, if s is a root of P(s), then $\det(sI_n - A) = 0$; in other words, there is a non-zero vector v such that $(sI_n - A)v = 0$. This means that $sv - Av = 0$, i.e. $Av = sv$. Therefore s is an eigenvalue of A. A corollary of the stability criterion is thus the following.
COROLLARY.– A continuous-time linear system is stable if and only if all the roots of its characteristic polynomial have strictly negative real parts. A discrete-time linear system is stable if and only if all the roots of its characteristic polynomial are strictly inside the unit circle.
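These criteria translate directly into a numerical test. A minimal sketch (Python/NumPy assumed here for illustration; the matrix is an arbitrary example):

```python
import numpy as np

def is_stable_continuous(A):
    # stable iff all eigenvalues have strictly negative real part
    return bool(np.all(np.real(np.linalg.eigvals(A)) < 0))

def is_stable_discrete(A):
    # stable iff all eigenvalues lie strictly inside the unit circle
    return bool(np.all(np.abs(np.linalg.eigvals(A)) < 1))

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # eigenvalues -1 and -2
assert is_stable_continuous(A)
assert not is_stable_discrete(A)           # |-2| > 1
```

The same matrix can thus be stable as a continuous-time evolution matrix yet unstable as a discrete-time one, since the two criteria refer to different regions of the complex plane.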
3.2. Laplace transform
The Laplace transform is a very useful tool for the control engineer in the manipulation of systems described by differential equations. This approach is particularly utilized in the context of monovariate systems (i.e. systems with a single input and output) and may be regarded as a competitor to the state-representation approach considered in this book.
3.2.1. Laplace variable
The space of differential operators in $\frac{d}{dt}$ is a ring and has favorable properties such as distributivity. For example:

$$\frac{d^4}{dt^4}\left(\frac{d^3}{dt^3} + \frac{d}{dt}\right) = \frac{d^4}{dt^4}\,\frac{d^3}{dt^3} + \frac{d^4}{dt^4}\,\frac{d}{dt} = \frac{d^7}{dt^7} + \frac{d^5}{dt^5}$$

This ring is commutative. For example:

$$\frac{d^4}{dt^4}\left(\frac{d^3}{dt^3} + \frac{d}{dt}\right) = \left(\frac{d^3}{dt^3} + \frac{d}{dt}\right)\frac{d^4}{dt^4}$$

We may associate with the operator $\frac{d}{dt}$ the symbol $s$, called the Laplace variable. Thus the operator $\frac{d^4}{dt^4}\left(\frac{d^3}{dt^3} + \frac{d}{dt}\right)$ will be represented by the polynomial $s^4\left(s^3 + s\right)$.
3.2.2. Transfer function
Let us consider a linear system with input u and output y described by a differential relation such as:

$$y(t) = H\!\left(\frac{d}{dt}\right)u(t)$$

The function H(s) is called the transfer function of the system. Let us take, for instance, the system described by the differential equation:

$$\ddot{y} + 2\dot{y} + 3y = 4\dot{u} - 5u$$

We have:

$$y(t) = \left(\frac{4\frac{d}{dt} - 5}{\frac{d^2}{dt^2} + 2\frac{d}{dt} + 3}\right)u(t)$$

Its transfer function is therefore:

$$H(s) = \frac{4s - 5}{s^2 + 2s + 3}$$
If the transfer function H(s) of a system is a rational function, its denominator P(s) is called the characteristic polynomial.
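The transfer function just obtained can be manipulated numerically. A sketch with `scipy.signal` (Python is an assumption for illustration; the book itself works in MATLAB), checking that the poles are indeed the roots of the characteristic polynomial $s^2+2s+3$:

```python
import numpy as np
from scipy import signal

# H(s) = (4s - 5) / (s^2 + 2s + 3), read directly off the ODE coefficients
H = signal.TransferFunction([4, -5], [1, 2, 3])

# the poles are the roots of s^2 + 2s + 3, i.e. -1 +/- i*sqrt(2)
assert np.allclose(sorted(p.real for p in H.poles), [-1.0, -1.0])
assert np.allclose(sorted(abs(p.imag) for p in H.poles),
                   [np.sqrt(2), np.sqrt(2)])
```

Since the poles have strictly negative real parts, this system is stable in the sense of the criterion of section 3.1.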
3.2.3. Laplace transform
We call the Laplace transform y(s) of the signal y(t) the transfer function of the system that generates y(t) from the Dirac delta function δ(t). We will denote it by y(s) = L(y(t)). We also say that y(t) is the impulse response of the system. Table 3.1 shows several systems together with their transfer function and impulse response.
In this table, E(t) is the unit step, which is equal to 1 if t ≥ 0 and 0 otherwise. Thus, the Laplace transforms of $\delta(t)$, $E(t)$, $\dot\delta(t)$, $\delta(t-\tau)$ are respectively $1$, $\frac{1}{s}$, $s$, $e^{-\tau s}$. But we can go further, as is shown in Table 3.2.
| System | Equation | Transfer function | Impulse response |
|---|---|---|---|
| Identity | $y(t) = u(t)$ | $1$ | $\delta(t)$ |
| Integrator | $\dot y(t) = u(t)$ | $\frac{1}{s}$ | $E(t)$ (step) |
| Differentiator | $y(t) = \dot u(t)$ | $s$ | $\dot\delta(t)$ |
| Delay | $y(t) = u(t-\tau)$ | $e^{-\tau s}$ | $\delta(t-\tau)$ |

Table 3.1. Transfer function and impulse response of some elementary systems
| Equation | Transfer function | Impulse response |
|---|---|---|
| $y(t) = \alpha_1 u(t-\tau_1) + \alpha_2 u(t-\tau_2)$ | $\alpha_1 e^{-\tau_1 s} + \alpha_2 e^{-\tau_2 s}$ | $\alpha_1\delta(t-\tau_1) + \alpha_2\delta(t-\tau_2)$ |
| $y(t) = \sum_{i=0}^{\infty}\alpha_i u(t-\tau_i)$ | $\sum_{i=0}^{\infty}\alpha_i e^{-\tau_i s}$ | $\sum_{i=0}^{\infty}\alpha_i\delta(t-\tau_i)$ |
| $y(t) = \int_{-\infty}^{\infty} f(\tau)u(t-\tau)\,d\tau$ | $\int_{-\infty}^{\infty} f(\tau)e^{-\tau s}\,d\tau$ | $\int_{-\infty}^{\infty} f(\tau)\delta(t-\tau)\,d\tau$ |

Table 3.2. Transfer function and impulse response of composed systems
The operation $y(t) = \int_{-\infty}^{\infty} f(\tau)u(t-\tau)\,d\tau$ is called convolution. We may notice that the impulse response of the system described by the last row of the table is:

$$y(t) = \left.\int_{-\infty}^{\infty} f(\tau)u(t-\tau)\,d\tau\right|_{u(t)=\delta(t)} = \int_{-\infty}^{\infty} f(\tau)\delta(t-\tau)\,d\tau = f(t)$$
and therefore the Laplace transform of a function f(t) is:

$$f(s) = \int_{-\infty}^{\infty} f(\tau)e^{-\tau s}\,d\tau$$
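This integral definition can be checked symbolically. A sketch with SymPy (an assumption for illustration; any computer algebra system would do), recovering two entries of Table 3.1:

```python
import sympy as sp

t, tau = sp.symbols('t tau', positive=True)
s = sp.symbols('s', positive=True)

# L{E(t)}: the integral runs over t >= 0 where the unit step equals 1
step_tf = sp.integrate(sp.exp(-s*t), (t, 0, sp.oo))
assert sp.simplify(step_tf - 1/s) == 0          # 1/s, as in Table 3.1

# L{delta(t - tau)}: the sifting property of the Dirac distribution
delay_tf = sp.integrate(sp.DiracDelta(t - tau)*sp.exp(-s*t),
                        (t, -sp.oo, sp.oo))
assert sp.simplify(delay_tf - sp.exp(-tau*s)) == 0   # e^{-tau s}
```

The second computation is exactly the last row of Table 3.2 evaluated for $f(\tau') = \delta(\tau' - \tau)$.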
Let us note that the relation:

$$f(t) = \int_{-\infty}^{\infty} f(\tau)\delta(t-\tau)\,d\tau$$

illustrates the fact that f(t) can be represented as the sum of an infinity of infinitely close Dirac distributions.
3.2.4. Input–output relation
Let us consider a system with input u, output y and transfer function H(s), as in Figure 3.1.
Figure 3.1. Laplace transform and transfer function
We have:

$$y(t) = H\!\left(\frac{d}{dt}\right)u(t) = \underbrace{H\!\left(\frac{d}{dt}\right)u\!\left(\frac{d}{dt}\right)}_{y\left(\frac{d}{dt}\right)}\,\delta(t)$$

Therefore, the Laplace transform of y(t) is:

$$y(s) = H(s)\,u(s)$$
3.3. Relationship between state and transfer representations
Let us consider the system described by its state equations:

$$\begin{cases}\dot x = Ax + Bu\\ y = Cx + Du\end{cases}$$
The Laplace transform of the state representation is given by:

$$\begin{cases}sx = Ax + Bu\\ y = Cx + Du\end{cases}$$
The first equation can be rewritten as $sx - Ax = Bu$, i.e. $sIx - Ax = Bu$, where I is the identity matrix. From this, by factoring, we get $(sI - A)x = Bu$ (we must be careful: a notation such as $sx - Ax = (s - A)x$ is not permitted, since s is a scalar whereas A is a matrix). Therefore:

$$\begin{cases}x = (sI - A)^{-1}Bu\\ y = Cx + Du\end{cases}$$

and thus:

$$y = \left(C(sI - A)^{-1}B + D\right)u$$
The matrix:

$$G(s) = C(sI - A)^{-1}B + D$$

is called the transfer matrix. It is a matrix of transfer functions (in other words, of rational functions in s) in which every denominator is a divisor of the characteristic polynomial $P_A(s)$ of A. By multiplying each side by $P_A(s)$ and replacing s by $\frac{d}{dt}$, we obtain a system of input–output differential equations. The state x no longer appears there.

Reciprocally, in the case of monovariate linear systems (in other words, with a single input and a single output), we can obtain a state representation from an expression of the transfer function, as illustrated in Exercises 3.16, 3.17 and 3.18.
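The formula $G(s) = C(sI - A)^{-1}B + D$ can be evaluated symbolically. A sketch with SymPy on an illustrative single-input, single-output system (the matrices below are assumptions chosen for the example, not taken from the book):

```python
import sympy as sp

s = sp.symbols('s')
A = sp.Matrix([[0, 1], [-2, -3]])   # companion-type evolution matrix
B = sp.Matrix([[0], [1]])
C = sp.Matrix([[1, 0]])
D = sp.Matrix([[0]])

# transfer matrix G(s) = C (sI - A)^{-1} B + D
G = sp.simplify(C*(s*sp.eye(2) - A).inv()*B + D)
assert sp.simplify(G[0, 0] - 1/(s**2 + 3*s + 2)) == 0
```

As expected, the denominator $s^2+3s+2 = \det(sI-A)$ is the characteristic polynomial of A.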
3.4. Exercises
EXERCISE 3.1.– Solution of a continuous-time linear state equation

Show that the continuous-time linear system:

$$\dot x(t) = Ax(t) + Bu(t)$$

has the solution:

$$x(t) = e^{At}x(0) + \int_0^t e^{A(t-\tau)}Bu(\tau)\,d\tau$$

The function $e^{At}x(0)$ is called the homogeneous, free or transient solution. The function $\int_0^t e^{A(t-\tau)}Bu(\tau)\,d\tau$ is called the forced solution.
EXERCISE 3.2.– Solution of a discrete-time linear state equation

Show that the discrete-time linear system:

$$x(k+1) = Ax(k) + Bu(k)$$

has the solution:

$$x(k) = A^k x(0) + \sum_{\ell=0}^{k-1} A^{k-\ell-1}Bu(\ell)$$

The function $A^k x(0)$ is the homogeneous solution, and the function $\sum_{\ell=0}^{k-1} A^{k-\ell-1}Bu(\ell)$ is the forced solution.
EXERCISE 3.3.– Criterion of stability

The proof of the stability criterion for linear systems is based on the theorem of correspondence of the eigenvalues, which is articulated as follows. If f is a polynomial (or, more generally, an integer series) and if A is an $\mathbb{R}^{n\times n}$ matrix, then the eigenvectors of A are also eigenvectors of f(A). Moreover, if the eigenvalues of A are $\{\lambda_1,\dots,\lambda_n\}$, then those of f(A) are $\{f(\lambda_1),\dots,f(\lambda_n)\}$.

1) Prove the theorem of correspondence of the eigenvalues in the case where f is a polynomial.

2) Let $\mathrm{spec}(A) = \{\lambda_1,\dots,\lambda_n\}$ be the spectrum of A, i.e. its eigenvalues. By using the theorem of correspondence of the eigenvalues, calculate $\mathrm{spec}(I+A)$, $\mathrm{spec}(A^k)$, $\mathrm{spec}(e^{At})$ and $\mathrm{spec}(f(A))$. In these expressions, I denotes the identity matrix, k is an integer and f is the characteristic polynomial of A.

3) Show that if a continuous-time linear system is stable, then all the eigenvalues of its evolution matrix A have strictly negative real parts (in fact, the condition is necessary and sufficient, but we will limit the proof to the implication).

4) Show that if a discrete-time linear system is stable, then all the eigenvalues of A are strictly within the unit circle. Again, even though we have the equivalence here, we will limit ourselves to the proof of the implication.
EXERCISE 3.4.– Laplace variable

Let us consider two systems in parallel, as illustrated in Figure 3.2.

Figure 3.2. Two linear systems interconnected in parallel

1) Without employing the Laplace transform, and by only using elementary differential calculus, give a differential equation that links u with y.

2) Using algebraic manipulations involving the $\frac{d}{dt}$ operator, obtain the same differential equation again.

3) Obtain the same result by using the Laplace variable s.
EXERCISE 3.5.– Transfer functions of elementary systems

Give the transfer function of the following systems with input u and output y:

1) a differentiator, expressed by the differential equation $y = \dot u$;

2) an integrator, which obeys the differential equation $\dot y = u$;

3) a delay of τ, expressed by the input–output relation $y(t) = u(t-\tau)$.
EXERCISE 3.6.– Transfer function of composite systems

1) Let us consider two systems of transfer functions H1(s) and H2(s) placed in series, as shown in Figure 3.3.

Figure 3.3. Two systems in series

Calculate, as a function of H1(s) and H2(s), the transfer function of the composite system.

2) Let us place the two systems of transfer functions H1(s) and H2(s) in parallel, as shown in Figure 3.4.

Give, as a function of H1(s) and H2(s), the transfer function of the composite system.

3) Let us loop the system H(s) as shown in Figure 3.5.

Figure 3.4. Two systems in parallel

Figure 3.5. Looped system

Give, as a function of H(s), the transfer function of the looped system.
EXERCISE 3.7.– Transfer matrix

Let us consider the continuous-time linear system described by its state representation:

$$\begin{cases}\dot x(t) = \begin{pmatrix}1 & 3\\ 2 & 0\end{pmatrix}x(t) + \begin{pmatrix}1\\1\end{pmatrix}u(t)\\[2mm] y(t) = \begin{pmatrix}1 & 2\\ 1 & 0\end{pmatrix}x(t) + \begin{pmatrix}2\\0\end{pmatrix}u(t)\end{cases}$$

1) Calculate its transfer matrix.

2) Give a differential relation that links the input to the outputs.
EXERCISE 3.8.– Matrix block multiplication

The goal of this exercise is to recall the concept of block manipulation of matrices. This type of manipulation is widely used in linear control. We consider the following product of two block matrices:

$$\underbrace{\begin{pmatrix} C_{11} & C_{12} & C_{13}\\ C_{21} & C_{22} & C_{23}\end{pmatrix}}_{=C} = \underbrace{\begin{pmatrix} A_{11} & A_{12} & A_{13}\\ A_{21} & A_{22} & A_{23}\end{pmatrix}}_{=A}\cdot\underbrace{\begin{pmatrix} B_{11} & B_{12} & B_{13}\\ B_{21} & B_{22} & B_{23}\\ B_{31} & B_{32} & B_{33}\end{pmatrix}}_{=B}$$

As illustrated in Figure 3.6, we have:

$$C_{12} = A_{11}\cdot B_{12} + A_{12}\cdot B_{22} + A_{13}\cdot B_{32}$$

or, more generally:

$$C_{ij} = \sum_k A_{ik}\cdot B_{kj}$$

1) Write the conditions on the number of rows and columns for each sub-matrix so that the block product is possible.

2) The matrices C and A11 are square. The matrices A22 and B33 both have 3 rows and 2 columns. Find the dimensions of all the submatrices.
Figure 3.6. Block multiplication of two matrices
EXERCISE 3.9.– Change of basis

Let us consider the continuous-time linear system described by its state equations:

$$\begin{cases}\dot x = Ax + Bu\\ y = Cx + Du\end{cases}$$

Let us take the change of basis $v = P^{-1}x$, where P is a transfer matrix (i.e. square and invertible).

1) What do the system equations become if v is the new state vector?

2) Let us consider the system described by the following state equations:

$$\begin{cases}\dot x = \begin{pmatrix}4 & -\frac{1}{2} & -\frac{1}{2}\\ 4 & 1 & -1\\ 4 & -2 & 2\end{pmatrix}x + \begin{pmatrix}1\\2\\4\end{pmatrix}u\\[2mm] y = \begin{pmatrix}2 & 1 & 1\end{pmatrix}x\end{cases}$$

We are trying to find a simpler representation, i.e. with more zeros and ones (in order, for instance, to limit the number of components necessary in the design of the circuitry). We propose to take the following change of basis:

$$v = \begin{pmatrix}1 & 1 & 1\\ 2 & 1 & 2\\ 2 & 1 & 0\end{pmatrix}^{-1}x$$

which brings us to a Jordan normal form. What does the new state representation become?

3) Calculate the transfer function of this system. What is the characteristic polynomial of the system?
EXERCISE 3.10.– Change of basis toward a companion matrix

Let us consider a single-input system described by the evolution equation:

$$\dot x = Ax + bu$$

Let us note that the control matrix, traditionally denoted by B, becomes in our single-input case a vector b. Let us take as transfer matrix (assumed invertible):

$$P = \left(b \mid Ab \mid A^2 b \mid \dots \mid A^{n-1}b\right)$$

The new state vector is therefore $v = P^{-1}x$. Show that, in this new basis, the state equations are written as:

$$\dot v = \begin{pmatrix}0 & 0 & \cdots & 0 & -a_0\\ 1 & 0 & \cdots & 0 & -a_1\\ 0 & \ddots & \ddots & \vdots & \vdots\\ 0 & \cdots & 0 & 1 & -a_{n-1}\end{pmatrix}v + \begin{pmatrix}1\\0\\\vdots\\0\end{pmatrix}u$$

where the $a_i$ are the coefficients of the characteristic polynomial of the matrix A.
EXERCISE 3.11.– Pole-zero cancellation

Let us consider the system described by the state equations:

$$\begin{cases}\dot x = -x\\ y = x + u\end{cases}$$

Calculate the differential equation that links y to u, first by the differential method, then by Laplace's method (without using pole-zero cancellation). What do you conclude?
EXERCISE 3.12.– State equations of a wiring system

Let us consider the system S described by the wiring system in Figure 3.7.
1) Give its state equations in matrix form.
2) Calculate the characteristic polynomial of the system. Is the system stable?
3) Calculate the transfer function of the system.
Figure 3.7. Second-order wiring system
EXERCISE 3.13.– Combination of systems

Let us consider the systems S1 and S2 given at the top of Figure 3.8.

1) Give the state equations, in matrix form, of the system Sa obtained by placing systems S1 and S2 in series. Give the transfer function and characteristic polynomial of Sa.

2) Do the same with the system Sb obtained by placing systems S1 and S2 in parallel.

3) Do the same with the system Sc obtained by looping S1 through S2, as represented in Figure 3.8.
Figure 3.8. Composition of systems
EXERCISE 3.14.– Calculating a transfer function

Let us consider the system represented in Figure 3.9. Calculate its transfer function.

Figure 3.9. Linear system for which the transfer function must be calculated
EXERCISE 3.15.– Transfer matrix

Let us consider the continuous-time linear system described by its state representation:

$$\begin{cases}\dot x(t) = \begin{pmatrix}1 & 3\\ 2 & 0\end{pmatrix}x(t) + \begin{pmatrix}1\\1\end{pmatrix}u(t)\\[2mm] y(t) = \begin{pmatrix}1 & 2\\ 1 & 0\end{pmatrix}x(t) + \begin{pmatrix}2\\0\end{pmatrix}u(t)\end{cases}$$
1) Calculate its transfer matrix G(s).

2) Deduce from this a differential input–output relation for this system.
EXERCISE 3.16.– Canonical form of a control

Let us consider the order 3 linear system with a single input and a single output, described by the following differential equation:

$$\dddot y + a_2\ddot y + a_1\dot y + a_0 y = b_2\ddot u + b_1\dot u + b_0 u$$

1) Calculate its transfer function G(s).

2) By noticing that this system can be obtained by placing in series the two systems of transfer functions:

$$G_1(s) = \frac{1}{s^3 + a_2 s^2 + a_1 s + a_0} \quad\text{and}\quad G_2(s) = b_2 s^2 + b_1 s + b_0$$

deduce a wiring system with only three integrators, some adders and amplifiers.
3) Give the state equations associated with this wiring.
EXERCISE 3.17.– Canonical observation form

Let us consider once more the system described by the differential equation:

$$\dddot y + a_2\ddot y + a_1\dot y + a_0 y = b_2\ddot u + b_1\dot u + b_0 u$$

1) Show that this differential equation can be written in integral form as:

$$y = \iiint\left(b_2\ddot u - a_2\ddot y + b_1\dot u - a_1\dot y + b_0 u - a_0 y\right)$$

2) Deduce from this a wiring system with only three integrators, some adders and amplifiers.
3) Give the state equations associated with this wiring.
4) Compare this with the results obtained in Exercise 3.16.
EXERCISE 3.18.– Modal form

A monovariate linear system is in modal form if it can be written as:

$$\begin{cases}\dot x = \begin{pmatrix}\lambda_1 & 0 & \cdots & 0\\ 0 & \lambda_2 & \cdots & 0\\ \vdots & & \ddots & \vdots\\ 0 & \cdots & 0 & \lambda_n\end{pmatrix}x + \begin{pmatrix}1\\1\\\vdots\\1\end{pmatrix}u\\[2mm] y = \begin{pmatrix}c_1 & c_2 & \cdots & c_n\end{pmatrix}x + d\,u\end{cases}$$

1) Draw a wiring system for this system using integrators, adders and amplifiers.

2) Calculate the transfer function associated with this system.

3) Calculate its characteristic polynomial.
EXERCISE 3.19.– Jordan normal form

The system described by the state equations:

$$\begin{cases}\dot x = \begin{pmatrix}-2 & 1 & 0 & 0 & 0\\ 0 & -2 & 1 & 0 & 0\\ 0 & 0 & -2 & 0 & 0\\ 0 & 0 & 0 & -3 & 1\\ 0 & 0 & 0 & 0 & -3\end{pmatrix}x + \begin{pmatrix}0\\0\\1\\0\\1\end{pmatrix}u\\[2mm] y = \begin{pmatrix}-2 & -1 & 3 & -4 & 7\end{pmatrix}x + 2u\end{cases}$$

is in Jordan normal form, since its evolution matrix A is a Jordan matrix. In other words, it is block diagonal and each block (here there are two) has zeros everywhere except on the diagonal, which contains equal elements, and on the superdiagonal, which only contains ones. Moreover, the control matrix only contains zeros and ones, the ones being positioned at the last row of each block.

1) Draw a wiring system for this system using integrators, adders and amplifiers.

2) Calculate its transfer function.

3) Calculate its characteristic polynomial as well as the associated eigenvalues.
3.5. Solutions
Solution to Exercise 3.1 (solution of a continuous-time linear state equation)

Let us take $z(t) = e^{-At}x(t)$. We have $x(t) = e^{At}z(t)$ and therefore, by differentiating, $\dot x(t) = Ae^{At}z(t) + e^{At}\dot z(t)$. The evolution equation $\dot x(t) = Ax(t) + Bu(t)$ is transformed into:

$$Ae^{At}z(t) + e^{At}\dot z(t) = Ae^{At}z(t) + Bu(t)$$

and, after simplification:

$$\dot z(t) = e^{-At}Bu(t)$$

After integrating, we obtain:

$$z(t) = z(0) + \int_0^t e^{-A\tau}Bu(\tau)\,d\tau$$

Therefore:

$$x(t) = e^{At}\left(z(0) + \int_0^t e^{-A\tau}Bu(\tau)\,d\tau\right) = e^{At}z(0) + \int_0^t e^{At}e^{-A\tau}Bu(\tau)\,d\tau = e^{At}x(0) + \int_0^t e^{A(t-\tau)}Bu(\tau)\,d\tau$$
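This closed-form solution can be checked numerically against a brute-force Euler integration. A sketch (Python/NumPy/SciPy assumed for illustration; the matrices and the constant input below are arbitrary examples):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # stable: eigenvalues -1 and -2
B = np.array([[0.0], [1.0]])
x0 = np.array([[1.0], [0.0]])
u = 1.0                                    # constant input
t_end, dt = 1.0, 1e-3

# brute-force Euler integration of xdot = A x + B u
x = x0.copy()
for _ in range(int(round(t_end/dt))):
    x = x + (A @ x + B*u)*dt

# closed form x(t) = e^{At} x(0) + int_0^t e^{A(t-tau)} B u(tau) dtau,
# the integral being approximated by a Riemann sum
taus = np.arange(0.0, t_end, dt)
forced = sum(expm(A*(t_end - tau)) @ B * u * dt for tau in taus)
x_closed = expm(A*t_end) @ x0 + forced

assert np.allclose(x, x_closed, atol=1e-2)
```

Both errors (the Euler step and the Riemann sum) shrink linearly with dt, so the two trajectories agree to the stated tolerance.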
Solution to Exercise 3.2 (solution of a discrete-time linear state equation)

The proof can easily be obtained by induction. First of all, if k = 0, the relation is verified. Let us establish that if it is verified for k, then it is also verified for k + 1. We have:

$$\begin{aligned} x(k+1) &= Ax(k) + Bu(k)\\ &= A\left(A^k x(0) + \sum_{\ell=0}^{k-1} A^{k-\ell-1}Bu(\ell)\right) + Bu(k)\\ &= A^{k+1}x(0) + \sum_{\ell=0}^{k-1} A^{k-\ell}Bu(\ell) + Bu(k)\\ &= A^{k+1}x(0) + \sum_{\ell=0}^{k} A^{k-\ell}Bu(\ell) \end{aligned}$$

The relation is therefore also verified for k + 1.
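The discrete formula can likewise be verified by comparing the step-by-step recursion with the closed form. A sketch (Python/NumPy assumed; matrices and input sequence are illustrative):

```python
import numpy as np

A = np.array([[0.5, 0.1], [0.0, 0.8]])
B = np.array([[1.0], [0.5]])
x0 = np.array([[1.0], [-1.0]])
u = [0.3, -0.2, 0.7, 0.1]          # arbitrary input sequence

# step-by-step recursion x(k+1) = A x(k) + B u(k)
x = x0.copy()
for k in range(4):
    x = A @ x + B*u[k]

# closed form x(k) = A^k x(0) + sum_{l=0}^{k-1} A^{k-l-1} B u(l)
k = 4
x_closed = np.linalg.matrix_power(A, k) @ x0 + sum(
    np.linalg.matrix_power(A, k - l - 1) @ B * u[l] for l in range(k))

assert np.allclose(x, x_closed)
```

Unlike the continuous-time check, no numerical approximation is involved here; the agreement is exact up to floating-point rounding.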
Solution to Exercise 3.3 (criterion of stability)

1) Let x be an eigenvector of A associated with the eigenvalue λ. We will now show that x is also an eigenvector of f(A) with eigenvalue f(λ). First of all, this property is true if $f(A) = A^j$. Indeed, since $Ax = \lambda x$, we have:

$$A^j x = A^{j-1}\underbrace{Ax}_{\lambda x} = \lambda A^{j-1}x = \lambda A^{j-2}\underbrace{Ax}_{\lambda x} = \lambda^2 A^{j-2}x = \cdots = \lambda^j A^0 x = \lambda^j x$$

We therefore have the property $f(A)\,x = f(\lambda)\,x$ when $f(A) = A^j$. Let us assume that this property is true for two polynomials $f_1$ and $f_2$ in A; we will now show that it is also true for $f_1 + f_2$ and $\alpha f_1$. Since it is true for $f_1$ and $f_2$, we have:

$$f_1(A)\,x = f_1(\lambda)\,x, \qquad f_2(A)\,x = f_2(\lambda)\,x$$

and therefore:

$$\left(f_1(A)+f_2(A)\right)x = f_1(A)\,x + f_2(A)\,x = f_1(\lambda)\,x + f_2(\lambda)\,x = \left(f_1(\lambda)+f_2(\lambda)\right)x$$

$$\left(\alpha f_1(A)\right)x = \alpha\left(f_1(A)\,x\right) = \alpha f_1(\lambda)\,x = \left(\alpha f_1(\lambda)\right)x$$

By induction, we can deduce that the property is true for all functions f(A) that can be generated from the $A^j$ by compositions of additions and multiplications by a scalar, in other words for the functions f that are polynomials.
2) If $\mathrm{spec}(A) = \{\lambda_1,\dots,\lambda_n\}$ then:

$$\begin{aligned} \mathrm{spec}(I+A) &= \{1+\lambda_1,\dots,1+\lambda_n\}\\ \mathrm{spec}(A^k) &= \{\lambda_1^k,\dots,\lambda_n^k\}\\ \mathrm{spec}(e^{At}) &= \{e^{\lambda_1 t},\dots,e^{\lambda_n t}\}\\ \mathrm{spec}(f(A)) &= \{f(\lambda_1),\dots,f(\lambda_n)\} = \{0,\dots,0\} \end{aligned}$$

Let us note that f(A) is the zero matrix, which corresponds to the Cayley-Hamilton theorem, which states that every square matrix cancels out its characteristic polynomial.
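Both the spectral-mapping property and the Cayley-Hamilton theorem can be checked numerically. A sketch (Python/NumPy assumed; the matrix is the one from Exercise 3.7, reused here only as a convenient example):

```python
import numpy as np

A = np.array([[1.0, 3.0], [2.0, 0.0]])
eig = np.linalg.eigvals(A)               # eigenvalues of A (here 3 and -2)

# spectral mapping: spec(I + A) = {1 + lambda_i}
assert np.allclose(sorted(np.linalg.eigvals(np.eye(2) + A)), sorted(1 + eig))

# Cayley-Hamilton: A cancels its characteristic polynomial s^2 - s - 6
c = np.poly(A)                           # coefficients of det(sI - A)
f_A = c[0]*A @ A + c[1]*A + c[2]*np.eye(2)
assert np.allclose(f_A, np.zeros((2, 2)))
```

`np.poly` returns the characteristic polynomial coefficients in decreasing degree, so `f_A` is exactly $A^2 - A - 6I$ here, which vanishes as the theorem predicts.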
3) Let us take $f(A) = e^{At} = \sum_{j=0}^{\infty}\frac{1}{j!}(At)^j$; the theorem of correspondence of the eigenvalues (which we will assume applicable even for polynomials of infinite degree, i.e. integer series) tells us that the eigenvalues of $e^{At}$ are of the form $e^{\lambda_j t}$, where $\lambda_j$ denotes the j-th eigenvalue of A. However, the stability of the system is expressed using the condition $\lim_{t\to\infty} e^{At} = 0_n$. We have:

$$\begin{aligned} e^{At}\xrightarrow{t\to\infty}0_n &\Rightarrow \forall j\in\{1,\dots,n\},\ e^{\lambda_j t}\xrightarrow{t\to\infty}0 \quad\text{(continuity of the exponential and of the eigenvalues)}\\ &\Leftrightarrow \forall j\in\{1,\dots,n\},\ \left|e^{(\operatorname{Re}\lambda_j + i\operatorname{Im}\lambda_j)t}\right|\xrightarrow{t\to\infty}0\\ &\Leftrightarrow \forall j\in\{1,\dots,n\},\ \underbrace{\left|e^{(\operatorname{Re}\lambda_j)t}\right|}_{=e^{(\operatorname{Re}\lambda_j)t}}\cdot\underbrace{\left|e^{i(\operatorname{Im}\lambda_j)t}\right|}_{=1}\xrightarrow{t\to\infty}0\\ &\Leftrightarrow \forall j\in\{1,\dots,n\},\ \operatorname{Re}(\lambda_j) < 0 \end{aligned}$$
4) In the discrete-time case, we need to take $f(A) = A^k$. A reasoning similar to the continuous-time case gives us:

$$\begin{aligned} A^k\xrightarrow{k\to\infty}0_n &\Rightarrow \forall j\in\{1,\dots,n\},\ \lambda_j^k\xrightarrow{k\to\infty}0 \quad\text{(continuity)}\\ &\Leftrightarrow \forall j\in\{1,\dots,n\},\ \underbrace{\left|\left(\rho_j e^{i\theta_j}\right)^k\right|}_{=\rho_j^k}\xrightarrow{k\to\infty}0 \quad\text{(polar form)}\\ &\Leftrightarrow \forall j\in\{1,\dots,n\},\ \rho_j^k\xrightarrow{k\to\infty}0\\ &\Leftrightarrow \forall j\in\{1,\dots,n\},\ \rho_j < 1 \end{aligned}$$
Solution to Exercise 3.4 (Laplace variable)

1) This system can be described by the following equations:

$$\begin{cases}\dot y_1 + y_1 = u\\ \dot y_2 + 2y_2 = -u\\ y_1 + y_2 = y\end{cases}$$

Thus we need to get rid of $y_1$ and $y_2$. By differentiating the above equations, we obtain:

$$\begin{cases}\ddot y_1 + \dot y_1 = \dot u\\ \ddot y_2 + 2\dot y_2 = -\dot u\\ \dot y_1 + \dot y_2 = \dot y\\ \ddot y_1 + \ddot y_2 = \ddot y\end{cases}$$

We have in total 7 equations with 6 surplus variables: $y_1, y_2, \dot y_1, \dot y_2, \ddot y_1, \ddot y_2$. We can eliminate them by a substitution method and thus obtain the differential equation that we were looking for:

$$\ddot y + 3\dot y + 2y = u$$
2) We have:

$$\begin{cases}\left(\frac{d}{dt}+1\right)(y_1) = u\\ \left(\frac{d}{dt}+2\right)(y_2) = -u\\ y_1 + y_2 = y\end{cases}$$

Let us dare the following calculation:

$$y_1 = \frac{1}{\frac{d}{dt}+1}\cdot u, \qquad y_2 = -\frac{1}{\frac{d}{dt}+2}\cdot u$$

Whence:

$$y = y_1 + y_2 = \frac{1}{\frac{d}{dt}+1}\cdot u - \frac{1}{\frac{d}{dt}+2}\cdot u = \left(\frac{1}{\frac{d}{dt}+1} - \frac{1}{\frac{d}{dt}+2}\right)\cdot u$$

After reducing to the same denominator:

$$y = \left(\frac{\left(\frac{d}{dt}+2\right)-\left(\frac{d}{dt}+1\right)}{\left(\frac{d}{dt}+1\right)\left(\frac{d}{dt}+2\right)}\right)\cdot u = \frac{1}{\frac{d^2}{dt^2}+3\frac{d}{dt}+2}\cdot u$$

Therefore:

$$\left(\frac{d^2}{dt^2}+3\frac{d}{dt}+2\right)\cdot y = u$$

or alternatively:

$$\ddot y + 3\dot y + 2y = u$$
This reasoning allows us to obtain the expected result with few calculations, in an elegant manner. However, this reasoning needs to be placed within a mathematical framework. This is what the Laplace transform brings us. Let us note that the above reasoning could have led to an incoherence, which we can also observe with the (careless) use of the Laplace transform. For instance, the following reasoning is clearly wrong:

$$\dot y = \dot u \;\Leftrightarrow\; \frac{d}{dt}y = \frac{d}{dt}u \;\Leftrightarrow\; y = \frac{\frac{d}{dt}}{\frac{d}{dt}}\,u \;\Leftrightarrow\; y = u$$

We clearly have no right to simplify by $\frac{d}{dt}$, since the set of differential operators generated by $\frac{d}{dt}$ is a ring and not a field. Likewise, in the framework of non-linear differential equations (such as $\ddot y + y\dot y = u$), we quickly reach absurd reasonings.
3) We have:

$$\begin{cases}(s+1)\cdot y_1 = u\\ (s+2)\cdot y_2 = -u\\ y_1 + y_2 = y\end{cases}$$

and, by eliminating $y_1$ and $y_2$:

$$y = \frac{1}{s^2+3s+2}\cdot u$$
Solution to Exercise 3.5 (transfer functions of elementary systems)

1) A differentiator is expressed by the differential equation $y = \dot u$, i.e. $y(t) = \frac{d}{dt}(u(t))$. Its transfer function is, therefore, H(s) = s.

2) An integrator is expressed by the differential equation $\dot y = u$, i.e. $\frac{d}{dt}(y(t)) = u(t)$, or alternatively:

$$y(t) = \left(\frac{1}{\frac{d}{dt}}\right)u(t)$$

Its transfer function is, therefore, $H(s) = s^{-1}$.
3) A delay of τ is expressed by the input–output relation $y(t) = u(t-\tau)$. Let us assume that the function u(t) is analytic. An integer series development of u(t) around $t_0$ gives us:

$$u(t) = \sum_{i=0}^{\infty}\frac{1}{i!}u^{(i)}(t_0)\,(t-t_0)^i$$

Let us replace t by t − τ, then $t_0$ by t; we obtain:

$$u(t-\tau) = \sum_{i=0}^{\infty}\frac{1}{i!}u^{(i)}(t)\,(-\tau)^i$$

Thus:

$$y(t) = u(t-\tau) = \sum_{i=0}^{\infty}\frac{1}{i!}u^{(i)}(t)\,(-\tau)^i = \left(\sum_{i=0}^{\infty}\frac{1}{i!}\left(-\tau\frac{d}{dt}\right)^i\right)(u(t)) = e^{-\tau\frac{d}{dt}}(u(t))$$
Thus, the differential operator $e^{-\tau\frac{d}{dt}}$ corresponds to a delay of τ of the signal u(t). The transfer function of the linear system which generates an output signal identical to the input signal, but delayed by τ, is given by $H(s) = e^{-\tau s}$.
Solution to Exercise 3.6 (transfer function of composite systems)

1) The transfer function of the composite system is $H(s) = H_2(s)H_1(s)$. Indeed, $y(s) = H_2(s)H_1(s)u(s)$.

2) We have $y(s) = y_1(s) + y_2(s) = H_1(s)u(s) + H_2(s)u(s) = \left(H_1(s)+H_2(s)\right)u(s)$. Thus, the transfer function of the composite system is $H(s) = H_1(s) + H_2(s)$.

3) Since $y(s) = H(s)e(s)$ and $e(s) = u(s) - y(s)$, we have $y(s) = H(s)\left(u(s)-y(s)\right)$. By isolating y(s), we obtain:

$$y(s) = \frac{H(s)}{1+H(s)}\,u(s)$$

The transfer function of the looped system is, therefore, $\frac{H(s)}{1+H(s)}$.
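The parallel and feedback rules can be checked symbolically. A sketch with SymPy (assumed for illustration; the transfer functions below are arbitrary examples, the parallel pair being the one from Exercise 3.4):

```python
import sympy as sp

s = sp.symbols('s')
H1 = 1/(s + 1)                 # illustrative transfer functions
H2 = -1/(s + 2)

# parallel connection: H1 + H2 (cf. Exercise 3.4)
assert sp.simplify(H1 + H2 - 1/(s**2 + 3*s + 2)) == 0

# unity negative feedback on H1: y/u = H1/(1 + H1)
H_loop = sp.simplify(H1/(1 + H1))
assert sp.simplify(H_loop - 1/(s + 2)) == 0
```

Note how the feedback loop moves the pole of $H_1$ from −1 to −2: looping a system changes its dynamics, which is the whole point of control.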
Solution to Exercise 3.7 (transfer matrix)

1) The transfer matrix G(s) is given by:

$$G(s) = C(sI-A)^{-1}B + D = \begin{pmatrix}1 & 2\\1 & 0\end{pmatrix}\begin{pmatrix}s-1 & -3\\ -2 & s\end{pmatrix}^{-1}\begin{pmatrix}1\\1\end{pmatrix} + \begin{pmatrix}2\\0\end{pmatrix} = \begin{pmatrix}\frac{2s^2+s-7}{s^2-s-6}\\[1mm] \frac{s+3}{s^2-s-6}\end{pmatrix}$$
2) The relation $y = G(s)u$ is written as:

$$\left(s^2-s-6\right)y = \begin{pmatrix}2s^2+s-7\\ s+3\end{pmatrix}u$$

Or alternatively, by substituting s by $\frac{d}{dt}$:

$$\begin{cases}\ddot y_1 - \dot y_1 - 6y_1 = 2\ddot u + \dot u - 7u\\ \ddot y_2 - \dot y_2 - 6y_2 = \dot u + 3u\end{cases}$$
Solution to Exercise 3.8 (matrix block multiplication)

1) Let us denote by $\ell_M$ and $c_M$ the number of rows and columns of the matrix M. We have:

$$\ell_{A_{ik}} = \ell_{C_{ij}}, \qquad c_{B_{kj}} = c_{C_{ij}}, \qquad c_{A_{ik}} = \ell_{B_{kj}}$$

i.e.:

$$\begin{aligned} &\ell_{A_{i1}} = \ell_{A_{i2}} = \ell_{A_{i3}} = \ell_{C_{i1}} = \ell_{C_{i2}} = \ell_{C_{i3}}, \quad i\in\{1,2\}\\ &c_{B_{1j}} = c_{B_{2j}} = c_{B_{3j}} = c_{C_{1j}} = c_{C_{2j}}, \quad j\in\{1,2,3\}\\ &c_{A_{1k}} = c_{A_{2k}} = \ell_{B_{k1}} = \ell_{B_{k2}} = \ell_{B_{k3}}, \quad k\in\{1,2,3\} \end{aligned}$$

We therefore need 2 + 3 + 3 = 8 pieces of information on the rows and columns in order to find all the dimensions.

2) A linear resolution of this system of equations gives us the dimensions of Figure 3.10.
Solution to Exercise 3.9 (change of basis)

1) By replacing x by Pv, we obtain:

$$\begin{cases}P\dot v = APv + Bu\\ y = CPv + Du\end{cases}$$

i.e.:

$$\begin{cases}\dot v = P^{-1}APv + P^{-1}Bu\\ y = CPv + Du\end{cases}$$

which is a state representation. Thus, a linear system has as many state representations as there are transfer matrices. Of course, some representations are preferable depending on the desired application.
Figure 3.10. Dimensions of the submatrices involved in the block multiplication
2) The Jordan normal form decomposition of the evolution matrix is:

$$\underbrace{\begin{pmatrix}1 & 1 & 1\\2 & 1 & 2\\2 & 1 & 0\end{pmatrix}^{-1}}_{P^{-1}}\;\underbrace{\begin{pmatrix}4 & -\frac{1}{2} & -\frac{1}{2}\\4 & 1 & -1\\4 & -2 & 2\end{pmatrix}}_{A}\;\underbrace{\begin{pmatrix}1 & 1 & 1\\2 & 1 & 2\\2 & 1 & 0\end{pmatrix}}_{P} = \underbrace{\begin{pmatrix}2 & 1 & 0\\0 & 2 & 0\\0 & 0 & 3\end{pmatrix}}_{\bar A}$$
Thus, following the basis change relation seen above, the new state representation is:

$$\begin{cases}\dot v = \begin{pmatrix}2 & 1 & 0\\0 & 2 & 0\\0 & 0 & 3\end{pmatrix}v + \begin{pmatrix}2\\0\\-1\end{pmatrix}u\\[2mm] y = \begin{pmatrix}6 & 4 & 4\end{pmatrix}v\end{cases}$$
3) The transfer function of this system is:

$$\begin{pmatrix}6 & 4 & 4\end{pmatrix}\left(\begin{pmatrix}s & 0 & 0\\0 & s & 0\\0 & 0 & s\end{pmatrix} - \begin{pmatrix}2 & 1 & 0\\0 & 2 & 0\\0 & 0 & 3\end{pmatrix}\right)^{-1}\begin{pmatrix}2\\0\\-1\end{pmatrix} = \frac{12}{s-2} - \frac{4}{s-3}$$

Its characteristic polynomial $(s-2)(s-3)$ is of degree 2, whereas the evolution matrix admits $(s-2)^2(s-3)$ as characteristic polynomial. This fact is representative of a pole-zero cancellation in transfer function calculations.
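The transfer function above can be reproduced symbolically from the Jordan-form matrices just obtained. A sketch with SymPy (assumed for illustration):

```python
import sympy as sp

s = sp.symbols('s')
A = sp.Matrix([[2, 1, 0], [0, 2, 0], [0, 0, 3]])   # Jordan form
B = sp.Matrix([[2], [0], [-1]])
C = sp.Matrix([[6, 4, 4]])

# G(s) = C (sI - A)^{-1} B; the factor (s-2) cancels between
# numerator and denominator, leaving a degree-2 denominator
G = sp.simplify((C*(s*sp.eye(3) - A).inv()*B)[0, 0])
assert sp.simplify(G - (12/(s - 2) - 4/(s - 3))) == 0
```

The simplification performed by `simplify` is precisely the pole-zero cancellation discussed in the text: the triple eigenvalue structure of A is no longer visible in G(s).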
Solution to Exercise 3.10 (change of basis toward a companion matrix)

From the equation $\dot x = Ax + bu$ and by taking $v = P^{-1}x$, we obtain $P\dot v = APv + bu$. In other words:

$$\dot v = P^{-1}APv + P^{-1}bu$$

If we take $\bar A = P^{-1}AP$ and $\bar b = P^{-1}b$, the system is written as:

$$\dot v = \bar A v + \bar b u$$

Since $P = \left(b \mid Ab \mid A^2b \mid \dots \mid A^{n-1}b\right)$, if $e_i$ denotes the vector that only contains zeros except a 1 at position i, we have $Pe_i = A^{i-1}b$. Thus:

$$e_i = P^{-1}A^{i-1}b$$

Therefore, $\bar b = P^{-1}b = e_1$. In order to obtain $\bar A$, we write:

$$\bar A = \left(\bar a_1 \mid \bar a_2 \mid \dots \mid \bar a_n\right) = P^{-1}AP = \left(P^{-1}Ab \mid P^{-1}A^2b \mid \dots \mid P^{-1}A^{n-1}b \mid P^{-1}A^n b\right) = \left(e_2 \mid e_3 \mid \dots \mid e_n \mid P^{-1}A^n b\right)$$
where the $\bar a_i$ denote the i-th columns of $\bar A$. However, the Cayley-Hamilton theorem tells us that every square matrix cancels out its characteristic polynomial:

$$P(s) = s^n + a_{n-1}s^{n-1} + \cdots + a_1 s + a_0$$

Thus $A^n + a_{n-1}A^{n-1} + \cdots + a_1 A + a_0 I = 0$, or alternatively:

$$A^n = -a_{n-1}A^{n-1} - \cdots - a_1 A - a_0 I$$

By multiplying on the left by $P^{-1}$ and on the right by b, we obtain:

$$P^{-1}A^n b = P^{-1}\left(-a_{n-1}A^{n-1}b - \cdots - a_1 Ab - a_0 b\right) = P^{-1}P\begin{pmatrix}-a_0\\ \vdots\\ -a_{n-1}\end{pmatrix} = \begin{pmatrix}-a_0\\ \vdots\\ -a_{n-1}\end{pmatrix}$$

In conclusion, after the change of basis, we obtain an evolution equation of the form:

$$\dot v = \begin{pmatrix}0 & 0 & \cdots & 0 & -a_0\\ 1 & 0 & \cdots & 0 & -a_1\\ 0 & \ddots & \ddots & \vdots & \vdots\\ 0 & \cdots & 0 & 1 & -a_{n-1}\end{pmatrix}v + \begin{pmatrix}1\\0\\\vdots\\0\end{pmatrix}u$$
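This change of basis can be checked numerically on a small example. A sketch (Python/NumPy assumed; the matrix A and vector b are illustrative choices for which P is invertible):

```python
import numpy as np

A = np.array([[1.0, 3.0], [2.0, 0.0]])
b = np.array([[1.0], [1.0]])

# change-of-basis matrix P = (b | Ab)
P = np.hstack([b, A @ b])
Abar = np.linalg.inv(P) @ A @ P
bbar = np.linalg.inv(P) @ b

# characteristic polynomial s^2 + a1 s + a0 (here s^2 - s - 6)
c = np.poly(A)
a1, a0 = c[1], c[2]

# companion form: subdiagonal of ones, last column -(a0, a1)
assert np.allclose(Abar, [[0.0, -a0], [1.0, -a1]])
assert np.allclose(bbar, [[1.0], [0.0]])        # bbar = e1
```

For this example $\bar A = \begin{pmatrix}0 & 6\\ 1 & 1\end{pmatrix}$, whose last column is indeed $(-a_0, -a_1) = (6, 1)$.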
Solution to Exercise 3.11 (pole-zero cancellation)

By the differential method, we have:

$$\dot y = \dot x + \dot u = -x + \dot u = -y + u + \dot u$$

and therefore $\dot y + y = \dot u + u$. By Laplace, we have:

$$\begin{cases}sx = -x\\ y = x + u\end{cases} \;\Rightarrow\; \begin{cases}x = \frac{0}{s+1}\\ y = x + u\end{cases} \;\Rightarrow\; y = \frac{0}{s+1} + u = \frac{0 + (s+1)u}{s+1} \;\Rightarrow\; (s+1)y = (s+1)u$$
and therefore $\dot y + y = \dot u + u$. Let us note that we have forbidden ourselves from using pole-zero cancellations. Strictly speaking, we should have taken into account the initial conditions, which is not done in practice by control engineers when calculating transfer functions. Often, we are allowed to perform pole-zero cancellations when these are stable. Thus, we could write:

$$\frac{(s+a)(s+3)}{(s+a)(s+2)} = \frac{s+3}{s+2}$$

if a > 0. In our example $\dot y + y = \dot u + u$, if we are allowed to use this pole-zero cancellation, we obtain:

$$y = \frac{s+1}{s+1}\,u = u$$

We have simplified the pole −1 (in the denominator) with the zero −1 (in the numerator). Mathematically, this is not correct, but in practice it is. Indeed, we have the following solution for the differential equation:

$$y(t) = \alpha e^{-t} + u(t)$$

in other words, once the steady state has been reached, we will have y(t) = u(t). In the case of an unstable zero, this is no longer the case, given that the dependence on the initial conditions remains.
Solution to Exercise 3.12 (state equations of a wiring system)

1) We have:

$$\begin{cases}\dot x_1 = -x_2 + u - 3x_1 + y\\ \dot x_2 = x_1 + x_2\\ y = 2x_1 + x_2\end{cases} \;\Leftrightarrow\; \begin{cases}\dot x_1 = u - x_1\\ \dot x_2 = x_1 + x_2\\ y = 2x_1 + x_2\end{cases}$$

Thus:

$$\begin{cases}\dot x = \begin{pmatrix}-1 & 0\\1 & 1\end{pmatrix}x + \begin{pmatrix}1\\0\end{pmatrix}u\\[1mm] y = \begin{pmatrix}2 & 1\end{pmatrix}x\end{cases}$$
2) We have $P(s) = (s+1)(s-1)$. The system is unstable, since the pole +1 has a positive real part.

3) In the Laplace domain, the state equations are written as:

$$\begin{cases}sx_1 = u - x_1\\ sx_2 = x_1 + x_2\\ y = 2x_1 + x_2\end{cases}$$

And thus:

$$\begin{cases}x_1 = \frac{u}{s+1}\\[1mm] x_2 = \frac{x_1}{s-1} = \frac{1}{s-1}\cdot\frac{u}{s+1}\\[1mm] y = \frac{2u}{s+1} + \frac{1}{s-1}\cdot\frac{u}{s+1} = \frac{2s-1}{(s+1)(s-1)}\,u\end{cases}$$

The transfer function is therefore:

$$H(s) = \frac{2s-1}{(s+1)(s-1)} = \frac{2s-1}{s^2-1}$$
Solution to Exercise 3.13 (combination of systems)

1) The state equations of the system $S_a$ are:

$$S_a: \begin{cases}\dot x = \begin{pmatrix}0 & 0\\1 & 2\end{pmatrix}x + \begin{pmatrix}1\\0\end{pmatrix}u\\[1mm] y = \begin{pmatrix}0 & 1\end{pmatrix}x\end{cases}$$

The transfer function $G_a(s)$ and the characteristic polynomial $P_a(s)$ are given by:

$$G_a(s) = \begin{pmatrix}0 & 1\end{pmatrix}\left(\begin{pmatrix}s & 0\\0 & s\end{pmatrix} - \begin{pmatrix}0 & 0\\1 & 2\end{pmatrix}\right)^{-1}\begin{pmatrix}1\\0\end{pmatrix} = \frac{1}{s}\cdot\frac{1}{s-2}$$

$$P_a(s) = s(s-2)$$
2) For $S_b$, we have:

$$S_b: \begin{cases}\dot x = \begin{pmatrix}0 & 0\\0 & 2\end{pmatrix}x + \begin{pmatrix}1\\1\end{pmatrix}u\\[1mm] y = \begin{pmatrix}1 & 1\end{pmatrix}x\end{cases}$$

$$G_b(s) = \begin{pmatrix}1 & 1\end{pmatrix}\left(\begin{pmatrix}s & 0\\0 & s\end{pmatrix} - \begin{pmatrix}0 & 0\\0 & 2\end{pmatrix}\right)^{-1}\begin{pmatrix}1\\1\end{pmatrix} = \frac{1}{s} + \frac{1}{s-2}$$

$$P_b(s) = s(s-2)$$
3) And finally, for the looped system $S_c$, we have:

$$S_c: \begin{cases}\dot x = \begin{pmatrix}0 & 1\\1 & 2\end{pmatrix}x + \begin{pmatrix}1\\0\end{pmatrix}u\\[1mm] y = \begin{pmatrix}1 & 0\end{pmatrix}x\end{cases}$$

$$G_c(s) = \begin{pmatrix}1 & 0\end{pmatrix}\left(\begin{pmatrix}s & 0\\0 & s\end{pmatrix} - \begin{pmatrix}0 & 1\\1 & 2\end{pmatrix}\right)^{-1}\begin{pmatrix}1\\0\end{pmatrix} = \frac{s-2}{s^2-2s-1}$$

$$P_c(s) = s^2 - 2s - 1$$
Solution to Exercise 3.14 (calculating a transfer function)

The transfer function $H(s) = \frac{y(s)}{u(s)}$ of this system is given by:

$$H = \frac{1}{s-1}\cdot\frac{\frac{s^2}{s+1}}{1+\frac{s^2}{s+1}\left(\frac{1}{s}-s^2\right)}\cdot\frac{s+1}{1+(s+1)\frac{1}{s}}$$

This expression is obtained as the product of the transfer functions of each of the three blocks in series which compose the system. We obtain:

$$H(s) = -\frac{s^3+s^4}{2s^5+s^4-6s^3-s^2+3s+1}$$
Solution to Exercise 3.15 (transfer matrix)

1) Its transfer matrix G(s) is given by:

$$G(s) = C(sI-A)^{-1}B + D = \begin{pmatrix}1 & 2\\1 & 0\end{pmatrix}\begin{pmatrix}s-1 & -3\\-2 & s\end{pmatrix}^{-1}\begin{pmatrix}1\\1\end{pmatrix} + \begin{pmatrix}2\\0\end{pmatrix} = \begin{pmatrix}\frac{2s^2+s-7}{s^2-s-6}\\[1mm] \frac{s+3}{s^2-s-6}\end{pmatrix}$$

2) The relation $y = G(s)u$ is written as:

$$\left(s^2-s-6\right)y = \begin{pmatrix}2s^2+s-7\\ s+3\end{pmatrix}u$$

Or alternatively, by substituting s by $\frac{d}{dt}$:

$$\begin{cases}\ddot y_1 - \dot y_1 - 6y_1 = 2\ddot u + \dot u - 7u\\ \ddot y_2 - \dot y_2 - 6y_2 = \dot u + 3u\end{cases}$$
Solution to Exercise 3.16 (canonical form of a control)

1) The transfer function of our system is:

$$G(s) = \frac{b_2 s^2 + b_1 s + b_0}{s^3 + a_2 s^2 + a_1 s + a_0}$$

2) If y and u represent the Laplace transforms of the signals y(t) and u(t), we have:

$$y(s) = \left(b_2 s^2 + b_1 s + b_0\right)\frac{1}{s^3 + a_2 s^2 + a_1 s + a_0}\,u(s)$$

which can be written in the form:

$$\begin{cases}x_1(s) = \frac{1}{s^3 + a_2 s^2 + a_1 s + a_0}\,u(s)\\[1mm] y(s) = \left(b_2 s^2 + b_1 s + b_0\right)x_1(s)\end{cases}$$

In other words:

$$\begin{cases}s^3 x_1 = u - a_2 s^2 x_1 - a_1 s x_1 - a_0 x_1\\ y = b_2 s^2 x_1 + b_1 s x_1 + b_0 x_1\end{cases}$$

Let us now draw the wiring associated with these two equations. The only differential operator we are allowed to use is the integrator, with transfer function $\frac{1}{s}$. First of all, we build a chain of 3 integrators in order to create $s^3 x_1$, $s^2 x_1$ and $s x_1$, as shown on top of Figure 3.11.

Then, we wire the equation $s^3 x_1 = u - a_2 s^2 x_1 - a_1 s x_1 - a_0 x_1$ (see bottom of Figure 3.11). We then wire the second equation and obtain the wiring of Figure 3.12.
Figure 3.11. In order to wire the differential equation, we first build the integrator chain, and then we wire the evolution equation
Figure 3.12. Canonical form of control for a system of order 3
3) The state variables of this wiring system are the values $x_1$, $x_2$, $x_3$ memorized by each of the integrators (the adders and amplifiers do not memorize anything). By reading the diagram, we can directly write the state equations of this system:

$$\begin{cases}\begin{pmatrix}\dot{x}_1\\ \dot{x}_2\\ \dot{x}_3\end{pmatrix}=\begin{pmatrix}0&1&0\\0&0&1\\-a_0&-a_1&-a_2\end{pmatrix}\begin{pmatrix}x_1\\x_2\\x_3\end{pmatrix}+\begin{pmatrix}0\\0\\1\end{pmatrix}u\\[4mm] y=\begin{pmatrix}b_0&b_1&b_2\end{pmatrix}\begin{pmatrix}x_1\\x_2\\x_3\end{pmatrix}\end{cases}$$

This reasoning can be applied in order to find the state representation of any monovariate linear system of any order $n$. This particular form of the state representation, which involves the coefficients of the transfer function in the matrices, is called the canonical form of control. Thus, in general, in order to obtain the canonical form of control equivalent to a given monovariate linear system, we have to calculate its transfer function in its developed form. Then, we can immediately write its canonical form of control.
Solution to Exercise 3.17 (canonical observation form)
1) Let us isolate $\dddot{y}$ (which corresponds to the term differentiated the most times). We obtain:

$$\dddot{y}=b_2\ddot{u}-a_2\ddot{y}+b_1\dot{u}-a_1\dot{y}+b_0u-a_0y$$

By integrating, we can remove the differentials:

$$y=\iiint\left(b_2\ddot{u}-a_2\ddot{y}+b_1\dot{u}-a_1\dot{y}+b_0u-a_0y\right)$$

2) We have:

$$y=\underbrace{\int\Bigl\{b_2u-a_2y+\underbrace{\int\Bigl[b_1u-a_1y+\underbrace{\int\left(b_0u-a_0y\right)}_{x_1}\Bigr]}_{x_2}\Bigr\}}_{x_3}$$

By defining $x_1$, $x_2$, $x_3$ as the value of each of these three integrals, i.e.:

$$\begin{cases}x_1=\int(b_0u-a_0y)\\ x_2=\int(b_1u-a_1y+x_1)\\ x_3=\int(b_2u-a_2y+x_2)\end{cases}$$
from this we deduce the wiring of Figure 3.13. Let us note that this diagram is strangely reminiscent of the one obtained in Exercise 3.16, supposed to represent the same system. We move from one to the other by changing the direction of the arrows, replacing the adders with soldered joints and the joints with adders.
Figure 3.13. Canonical observation form for a system of order 3
3) From this diagram, we can directly deduce the state equations of the system:

$$\begin{cases}\begin{pmatrix}\dot{x}_1\\ \dot{x}_2\\ \dot{x}_3\end{pmatrix}=\begin{pmatrix}0&0&-a_0\\1&0&-a_1\\0&1&-a_2\end{pmatrix}\begin{pmatrix}x_1\\x_2\\x_3\end{pmatrix}+\begin{pmatrix}b_0\\b_1\\b_2\end{pmatrix}u\\[4mm] y=\begin{pmatrix}0&0&1\end{pmatrix}\begin{pmatrix}x_1\\x_2\\x_3\end{pmatrix}\end{cases}$$
This particular form for the state representation is called the canonical observation form.
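The matrices of this form can likewise be assembled directly from the coefficients of the transfer function. A minimal plain-Python sketch (the helper name is ours):

```python
def observer_canonical(a, b):
    """a = [a0, ..., a_{n-1}], b = [b0, ..., b_{n-1}]  ->  (A, B, C)."""
    n = len(a)
    # sub-diagonal of ones, last column -a0 ... -a_{n-1}
    A = [[1 if i == j + 1 else 0 for j in range(n - 1)] + [-a[i]] for i in range(n)]
    B = [[bi] for bi in b]        # each b_i feeds one integrator
    C = [[0] * (n - 1) + [1]]     # the output is the last state
    return A, B, C

A, B, C = observer_canonical([5, 7, 3], [2, 0, 1])
print(A)  # -> [[0, 0, -5], [1, 0, -7], [0, 1, -3]]
```

For order 3 this reproduces exactly the matrices displayed above.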
4) Let us note that the transformation $A\to A^{\mathrm T}$, $B\to C^{\mathrm T}$, $C\to B^{\mathrm T}$ gives us the canonical form of control (refer to Exercise 3.16). Let us now try to explain why this transposition, which makes us move from the canonical form of control to the canonical observation form and vice versa, does not change the input–output behavior of the system. For this, let us consider the system $\Sigma$ whose state matrices are $(A,B,C,D)$ and the system $\Sigma'$ whose state matrices are $(A'=A^{\mathrm T},B'=C^{\mathrm T},C'=B^{\mathrm T},D'=D^{\mathrm T})$. The transfer matrix associated with $\Sigma$ is:

$$G(s)=C(sI-A)^{-1}B+D$$

The transfer matrix associated with $\Sigma'$ is:

$$G'(s)=C'(sI-A')^{-1}B'+D'=B^{\mathrm T}\bigl(sI-A^{\mathrm T}\bigr)^{-1}C^{\mathrm T}+D^{\mathrm T}=B^{\mathrm T}\bigl((sI-A)^{-1}\bigr)^{\mathrm T}C^{\mathrm T}+D^{\mathrm T}=\bigl(C(sI-A)^{-1}B\bigr)^{\mathrm T}+D^{\mathrm T}=\bigl(C(sI-A)^{-1}B+D\bigr)^{\mathrm T}=G^{\mathrm T}(s)$$
However, here $G(s)$ is a scalar and therefore $G(s)=G^{\mathrm T}(s)$. Thus, the transformation $A\to A^{\mathrm T}$, $B\to C^{\mathrm T}$, $C\to B^{\mathrm T}$, $D\to D^{\mathrm T}$ does not change the transfer function of the system if it has a single input and a single output.
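This identity can be spot-checked numerically: evaluate $G(s_0)=C(s_0I-A)^{-1}B$ at a sample point for a realization and for its transpose. A plain-Python sketch (hand-rolled Gaussian elimination; the $3\times 3$ example matrices are ours):

```python
def solve(M, v):
    """Solve M x = v by Gaussian elimination with partial pivoting."""
    n = len(M)
    M = [row[:] + [v[i]] for i, row in enumerate(M)]   # augmented matrix
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def G(A, B, C, s0):
    n = len(A)
    M = [[(s0 if i == j else 0.0) - A[i][j] for j in range(n)] for i in range(n)]
    x = solve(M, B)                 # x = (s0 I - A)^{-1} B
    return sum(C[i] * x[i] for i in range(n))

A = [[0, 1, 0], [0, 0, 1], [-5, -7, -3]]   # a controllable canonical form
B, C = [0, 0, 1], [2, 0, 1]
At = [[A[j][i] for j in range(3)] for i in range(3)]
print(abs(G(A, B, C, 2.5) - G(At, C, B, 2.5)) < 1e-12)  # -> True
```

As expected for a single-input single-output system, the realization and its transpose give the same value of the transfer function.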
Solution to Exercise 3.18 (modal form)
1) The wiring of the system is given in Figure 3.14.
Figure 3.14. Linear system in modal form
2) The transfer function of the system is given by:

$$G(s)=C(sI-A)^{-1}B+d=\begin{pmatrix}c_1&\cdots&c_n\end{pmatrix}\begin{pmatrix}s-\lambda_1&&0\\&\ddots&\\0&&s-\lambda_n\end{pmatrix}^{-1}\begin{pmatrix}1\\ \vdots\\ 1\end{pmatrix}+d$$

$$=\begin{pmatrix}c_1&\cdots&c_n\end{pmatrix}\begin{pmatrix}\frac{1}{s-\lambda_1}&&0\\&\ddots&\\0&&\frac{1}{s-\lambda_n}\end{pmatrix}\begin{pmatrix}1\\ \vdots\\ 1\end{pmatrix}+d=\frac{c_1}{s-\lambda_1}+\frac{c_2}{s-\lambda_2}+\cdots+\frac{c_n}{s-\lambda_n}+d$$
3) Its characteristic polynomial is given by $\det(sI-A)$, i.e.:

$$\det\begin{pmatrix}s-\lambda_1&&0\\&\ddots&\\0&&s-\lambda_n\end{pmatrix}=(s-\lambda_1)(s-\lambda_2)\cdots(s-\lambda_n)$$

Its roots are the $\lambda_i$, which are also the eigenvalues of $A$.
Solution to Exercise 3.19 (Jordan normal form)
1) A wiring for this system is given in Figure 3.15.
2) Its transfer function is given by:

$$G(s)=2+\frac{3}{s+2}-\frac{1}{(s+2)^2}-\frac{2}{(s+2)^3}+\frac{7}{s+3}-\frac{4}{(s+3)^2}$$

3) Its characteristic polynomial is:

$$P(s)=(s+2)^3(s+3)^2$$
Figure 3.15. System in its Jordan normal form
4
Linear Control
In this chapter, we will study the design of controllers for systems given by linear state equations of the form:

$$\begin{cases}\dot{\mathbf{x}}=A\mathbf{x}+B\mathbf{u}\\ \mathbf{y}=C\mathbf{x}\end{cases}$$

We will show in the following chapter that, around very particular points of the state space called operating points, many nonlinear systems behave very much like linear systems. The techniques developed in this chapter will then be used for the control of nonlinear systems. Let us denote by $m$, $n$, $p$ the respective dimensions of the vectors $\mathbf{u}$, $\mathbf{x}$ and $\mathbf{y}$. Recall that $A$ is called the evolution matrix, $B$ the control matrix and $C$ the observation matrix. We have assumed here, in the interest of simplification, that the direct matrix $D$ involved in the observation equation is zero. In the case where such a direct matrix exists, we can remove it with a simple loop, as shown in Exercise 4.1.
After having defined the fundamental concepts of controllability and observability, we will propose two approaches for the design of controllers. First of all, we will assume that the state $\mathbf{x}$ is accessible on demand. Even though this hypothesis is generally not verified, it will allow us to establish the principles of the pole placement method. In a second phase, we will no longer assume that the state is accessible. We will then have to develop state estimators capable of approximating the state vector, in order to be able to employ the tools developed in the first phase. The reader should consult Kailath [KAI 80] for a wider view of the methods used for the control of linear systems. A complete and pedagogical course accompanied by numerous exercises can be found in the books of Rivoire and Ferrier [RIV 89].
4.1. Controllability and observability
There are multiple equivalent definitions of the controllability and observability of linear systems. A simple definition is the following.
DEFINITION.– The linear system:

$$\begin{cases}\dot{\mathbf{x}}=A\mathbf{x}+B\mathbf{u}\\ \mathbf{y}=C\mathbf{x}\end{cases}$$

is said to be controllable if, for every pair of state vectors $(\mathbf{x}_0,\mathbf{x}_1)$, we can find a time $t_1$ and a control $\mathbf{u}(t)$, $t\in[0,t_1]$, such that the system, initialized in $\mathbf{x}_0$, reaches the state $\mathbf{x}_1$ at time $t_1$. It is observable if the knowledge of $\mathbf{y}(t)$ and of $\mathbf{u}(t)$ for all $t\in\mathbb{R}$ allows us to determine, in a unique manner, the state $\mathbf{x}(t)$.
CRITERION OF CONTROLLABILITY.– The system is controllable if and only if:

$$\operatorname{rank}\underbrace{\bigl(B\mid AB\mid A^2B\mid\cdots\mid A^{n-1}B\bigr)}_{\Gamma_{\mathrm{con}}}=n$$

where $n$ is the dimension of $\mathbf{x}$. In other words, the matrix $\Gamma_{\mathrm{con}}$, called the controllability matrix, obtained by juxtaposing the $n$ matrices $B,AB,\ldots,A^{n-1}B$ next to one another, has to be of full rank in order for the system to be controllable. This criterion is proved in Exercise 4.5.
CRITERION OF OBSERVABILITY.– The linear system is observable if and only if:

$$\operatorname{rank}\underbrace{\begin{pmatrix}C\\ CA\\ \vdots\\ CA^{n-1}\end{pmatrix}}_{\Gamma_{\mathrm{obs}}}=n$$

in other words, the matrix $\Gamma_{\mathrm{obs}}$, called the observability matrix, obtained by placing the $n$ matrices $C,CA,\ldots,CA^{n-1}$ below one another, is of full rank. The proof of this criterion is discussed in Exercise 4.6.
The above definitions as well as the criteria are also valid for discrete-time linear systems (see Exercise 4.4).
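Both rank criteria are easy to mechanize. A sketch in plain Python (exact arithmetic via the standard `fractions` module; the hand-rolled rank routine is illustrative only) that builds $\Gamma_{\mathrm{con}}$ and tests whether its rank equals $n$:

```python
from fractions import Fraction

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def rank(M):
    # Gauss-Jordan elimination over the rationals (no round-off issues)
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def is_controllable(A, B):
    n = len(A)
    blocks, P = [], B
    for _ in range(n):           # collect B, AB, ..., A^{n-1}B
        blocks.append(P)
        P = matmul(A, P)
    Gamma = [sum((blk[i] for blk in blocks), []) for i in range(n)]
    return rank(Gamma) == n

# Integrator chain x1' = x2, x2' = u: controllable from its single input.
print(is_controllable([[0, 1], [0, 0]], [[0], [1]]))  # -> True
```

By the duality discussed in Exercise 3.17, observability of $(A,C)$ can be tested as `is_controllable(transpose(A), transpose(C))`.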
4.2. State feedback control
Let us consider the system $\dot{\mathbf{x}}=A\mathbf{x}+B\mathbf{u}$, which is assumed to be controllable, and let us look for a controller of the form $\mathbf{u}=\mathbf{w}-K\mathbf{x}$, where $\mathbf{w}$ is the new input. This presupposes that $\mathbf{x}$ is accessible on demand, which is normally not the case. We will see further on how to get rid of this awkward hypothesis. The state equations of the looped system are written as:

$$\dot{\mathbf{x}}=A\mathbf{x}+B\left(\mathbf{w}-K\mathbf{x}\right)=(A-BK)\,\mathbf{x}+B\mathbf{w}$$

It is reasonable to choose the control matrix $K$ so as to impose the poles of the looped system. This problem is equivalent to imposing the characteristic polynomial of the system. Let $P_{\mathrm{con}}(s)$ be the desired polynomial, which we will of course assume to be of degree $n$. We need to solve the polynomial equation:

$$\det(sI-A+BK)=P_{\mathrm{con}}(s)$$

referred to as the pole placement equation. This equation can be translated into $n$ scalar equations. Let us indeed recall that two monic polynomials of degree $n$, $s^n+a_{n-1}s^{n-1}+\cdots+a_0$ and $s^n+b_{n-1}s^{n-1}+\cdots+b_0$, are equal if and only if their coefficients are all equal, i.e. if $a_{n-1}=b_{n-1},\ldots,a_0=b_0$. Our system of $n$ equations has $m\cdot n$ unknowns, which are the coefficients $k_{ij}$, $i\in\{1,\ldots,m\}$, $j\in\{1,\ldots,n\}$. In fact, a single solution matrix $K$ is sufficient. We may, therefore, fix $(m-1)\,n$ elements of $K$ so that we are left with only $n$ unknowns. However, the obtained system is not always linear. The ppol instruction in SCILAB or place in MATLAB allows us to solve the pole placement equation.
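For a single-input system of order 2, the pole placement equation can be solved by hand by matching the trace and determinant of $A-\mathbf{b}K$, which are both affine in $K$. A stdlib-only sketch (a hand-rolled illustration, not the algorithm behind ppol/place; the example system is a toy of ours):

```python
from fractions import Fraction as F

def place_2x2(A, b, p1, p0):
    """Single-input 2x2 pole placement: find K = (k1, k2) such that
    det(sI - A + bK) = s^2 + p1*s + p0, by matching trace and determinant."""
    (a11, a12), (a21, a22) = [[F(x) for x in row] for row in A]
    b1, b2 = F(b[0]), F(b[1])
    # trace(A - bK) = -p1 and det(A - bK) = p0 are AFFINE in (k1, k2),
    # since the quadratic term b1*b2*k1*k2 cancels in the determinant:
    #   b1 k1 + b2 k2                               = a11 + a22 + p1
    #   (a22 b1 - a12 b2) k1 + (a11 b2 - a21 b1) k2 = a11 a22 - a12 a21 - p0
    c11, c12, r1 = b1, b2, a11 + a22 + p1
    c21, c22 = a22 * b1 - a12 * b2, a11 * b2 - a21 * b1
    r2 = a11 * a22 - a12 * a21 - p0
    d = c11 * c22 - c12 * c21
    return (r1 * c22 - r2 * c12) / d, (c11 * r2 - c21 * r1) / d

# Place both poles of A - bK at -1, i.e. P_con(s) = s^2 + 2s + 1:
print(place_2x2([[0, 1], [-2, -3]], (0, 1), 2, 1))
# -> (Fraction(-1, 1), Fraction(-1, 1))
```

One can check that $A-\mathbf{b}K$ with $K=(-1,-1)$ has characteristic polynomial $s^2+2s+1$, as requested. For $m>1$ inputs the equations become nonlinear in general, which is exactly why the remark above fixes $(m-1)\,n$ entries of $K$ beforehand.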
4.3. Output feedback control
In this section, we are once again looking to stabilize the system:

$$\begin{cases}\dot{\mathbf{x}}=A\mathbf{x}+B\mathbf{u}\\ \mathbf{y}=C\mathbf{x}\end{cases}$$

but this time, the state $\mathbf{x}$ of the system is no longer assumed to be accessible on demand. The controller which will be used is represented in Figure 4.1.

Only the setpoint $\mathbf{w}$ and the output of the system $\mathbf{y}$ can be used by the controller. The unknowns of the controller are the matrices $K$, $L$ and $H$. Let us attempt to explain the structure of this controller. First of all, in order to estimate the state $\mathbf{x}$ necessary for computing the control $\mathbf{u}$, we integrate a simulator of our system into the controller. The simulator is a copy of the system and its state vector is denoted by $\hat{\mathbf{x}}$. The error $\varepsilon_{\mathbf{y}}$ between the output of the simulator $\hat{\mathbf{y}}$ and the output of the system $\mathbf{y}$ allows us to correct, by using a correction matrix $L$, the evolution of the estimated state $\hat{\mathbf{x}}$. The corrected simulator is referred to as the Luenberger observer. We can then apply a state feedback technique that will be carried out using the matrix $K$. For the calculation of $K$, we may use the pole placement method described earlier, which consists of solving $\det(sI-A+BK)=P_{\mathrm{con}}(s)$, where $P_{\mathrm{con}}(s)$ is the characteristic polynomial of degree $n$ chosen for the control dynamics. For the calculation of the correction matrix $L$, we will try to guarantee the convergence of the error $\hat{\mathbf{x}}-\mathbf{x}$ to $\mathbf{0}$. The role of the matrix $H$ (a square matrix placed right after the setpoint vector $\mathbf{w}$), called a precompensator, is to match the components of the setpoint $\mathbf{w}$ with certain state variables chosen beforehand. We will also discuss how to choose this matrix $H$ in order to be able to perform this association.
Figure 4.1. Principle of an output feedback controller
In order to calculate $L$, let us extract, from the controlled system of the figure, the subsystem with input $\mathbf{u}$ formed from the system to be controlled and its observer (see Figure 4.2).
The state equations that describe the system are:

$$\begin{cases}\dot{\mathbf{x}}=A\mathbf{x}+B\mathbf{u}\\ \frac{d}{dt}\hat{\mathbf{x}}=A\hat{\mathbf{x}}+B\mathbf{u}-L\left(C\hat{\mathbf{x}}-C\mathbf{x}\right)\end{cases}$$

where the state vector is $(\mathbf{x},\hat{\mathbf{x}})$. Let us create, only by thought, the quantity $\varepsilon_{\mathbf{x}}=\hat{\mathbf{x}}-\mathbf{x}$, as represented in Figure 4.2. By subtracting the two evolution equations, we obtain:

$$\frac{d}{dt}\left(\hat{\mathbf{x}}-\mathbf{x}\right)=A\hat{\mathbf{x}}+B\mathbf{u}-L\left(C\hat{\mathbf{x}}-C\mathbf{x}\right)-A\mathbf{x}-B\mathbf{u}=A\left(\hat{\mathbf{x}}-\mathbf{x}\right)-LC\left(\hat{\mathbf{x}}-\mathbf{x}\right)$$

Thus, $\varepsilon_{\mathbf{x}}$ obeys the differential equation:

$$\dot{\varepsilon}_{\mathbf{x}}=(A-LC)\,\varepsilon_{\mathbf{x}}$$

in which the control $\mathbf{u}$ is not involved. The estimation error $\varepsilon_{\mathbf{x}}$ on the state tends toward zero if all the eigenvalues of $A-LC$ have negative real parts. To impose the dynamics of the error (in other words, its convergence speed) means solving:

$$\det(sI-A+LC)=P_{\mathrm{obs}}(s)$$

where $P_{\mathrm{obs}}(s)$ is chosen as desired, so as to have the required poles. Since the determinant of a matrix is equal to that of its transpose, this equation is equivalent to:

$$\det\bigl(sI-A^{\mathrm T}+C^{\mathrm T}L^{\mathrm T}\bigr)=P_{\mathrm{obs}}(s)$$

We obtain a pole placement-type equation.
PRECOMPENSATOR.– The precompensator $H$ allows us to associate the setpoints (components of $\mathbf{w}$) with certain values of the state variables that we are looking at. The choice of these state variables is done by using a setpoint matrix $E$, as illustrated in Exercise 4.11.
Figure 4.2. Diagram of the output error between the observer and system
4.4. Summary
The algorithm of the table below summarizes the method that allows us to calculate an output feedback controller with precompensator.
Algorithm REGULKLH(in: $A,B,C,E,p_{\mathrm{con}},p_{\mathrm{obs}}$; out: $\mathcal{R}$)

1 $K:=\mathrm{PLACE}(A,B,p_{\mathrm{con}})$

2 $L:=\mathrm{PLACE}\bigl(A^{\mathrm T},C^{\mathrm T},p_{\mathrm{obs}}\bigr)^{\mathrm T}$

3 $H:=-\bigl(E\,(A-BK)^{-1}B\bigr)^{-1}$

4 $\mathcal{R}:=\begin{cases}\frac{d}{dt}\hat{\mathbf{x}}=(A-BK-LC)\,\hat{\mathbf{x}}+\begin{pmatrix}BH&L\end{pmatrix}\begin{pmatrix}\mathbf{w}\\ \mathbf{y}\end{pmatrix}\\[2mm] \mathbf{u}=-K\hat{\mathbf{x}}+\begin{pmatrix}H&0\end{pmatrix}\begin{pmatrix}\mathbf{w}\\ \mathbf{y}\end{pmatrix}\end{cases}$
The associated MATLAB function (see the function RegulKLH.m) is given below.

function [Ar,Br,Cr,Dr] = RegulKLH(A,B,C,E,pcom,pobs)
K = place(A,B,pcom);       % state feedback gain (control poles pcom)
L = place(A',C',pobs)';    % observer gain via the transposed problem
H = -inv(E*inv(A-B*K)*B);  % precompensator
Ar = A-B*K-L*C;            % controller evolution matrix
Br = [B*H, L];             % controller inputs: (w, y)
Cr = -K;
Dr = [H, 0*B'*C'];         % direct term (H 0), with a zero m x p block
end
Figure 4.3. Loop allowing the removal of the direct matrix D
4.5. Exercises
EXERCISE 4.1.– Removing the direct matrix
Let us consider the system:

$$\begin{cases}\dot{\mathbf{x}}=A\mathbf{x}+B\mathbf{u}\\ \mathbf{y}=C\mathbf{x}+D\mathbf{u}\end{cases}$$

and create the new output $\mathbf{z}=\mathbf{y}-D\mathbf{u}$, as represented in Figure 4.3. What do the state equations of the system become?
EXERCISE 4.2.– Non-observable states, non-controllable states

Let us consider the system:

$$\begin{cases}\dot{\mathbf{x}}(t)=\begin{pmatrix}-1&1&0&0\\0&1&0&0\\1&1&1&1\\0&1&0&1\end{pmatrix}\mathbf{x}(t)+\begin{pmatrix}1\\0\\1\\0\end{pmatrix}u(t)\\[4mm] y(t)=\begin{pmatrix}1&1&0&0\end{pmatrix}\mathbf{x}(t)+1\cdot u(t)\end{cases}$$
1) Calculate its transfer function.
2) Is this system stable?
3) Draw the associated wiring diagram and deduce from this the non-observable and non-controllable states.

4) The poles of the system (the eigenvalues of the evolution matrix) are composed of transmission poles (poles of the transfer function) and hidden modes. Give the hidden modes.
EXERCISE 4.3.– Using the controllability and observability criteria

Let us consider the system:

$$\begin{cases}\dot{\mathbf{x}}=\begin{pmatrix}1&1&0\\0&1&0\\0&0&1\end{pmatrix}\mathbf{x}+\begin{pmatrix}0&0\\1&0\\1&a\end{pmatrix}\mathbf{u}\\[4mm] \mathbf{y}=\begin{pmatrix}1&1&b\\0&1&0\end{pmatrix}\mathbf{x}\end{cases}$$
1) For which values of a is the system controllable?
2) For which values of b is the system observable?
EXERCISE 4.4.– Proof of the controllability criterion in discrete time

Prove the criterion of controllability in discrete time. This theorem states that the system $\mathbf{x}(k+1)=A\mathbf{x}(k)+B\mathbf{u}(k)$ is controllable if and only if:

$$\operatorname{rank}\underbrace{\bigl(B\mid AB\mid A^2B\mid\cdots\mid A^{n-1}B\bigr)}_{\Gamma_{\mathrm{con}}}=n$$
where n is the dimension of x.
1) Show that:

$$\mathbf{x}(n)=A^n\mathbf{x}(0)+\bigl(B\mid AB\mid\cdots\mid A^{n-1}B\bigr)\begin{pmatrix}\mathbf{u}(n-1)\\ \vdots\\ \mathbf{u}(1)\\ \mathbf{u}(0)\end{pmatrix}$$

2) Show that if the matrix $\bigl(B\mid AB\mid\cdots\mid A^{n-1}B\bigr)$ is of full rank then, for every initial vector $\mathbf{x}_0$ and every target vector $\mathbf{x}_n$, we can find a control $\mathbf{u}(0),\mathbf{u}(1),\ldots,\mathbf{u}(n-1)$ such that the system initialized in $\mathbf{x}_0$ reaches $\mathbf{x}_n$ at time $k=n$.
EXERCISE 4.5.– Proof of the controllability criterion in continuous time

The objective of this exercise is to prove the criterion of controllability in continuous time. This criterion states that the system $\dot{\mathbf{x}}=A\mathbf{x}+B\mathbf{u}$ is controllable if and only if:

$$\operatorname{rank}\underbrace{\bigl(B\mid AB\mid A^2B\mid\cdots\mid A^{n-1}B\bigr)}_{\Gamma_{\mathrm{con}}}=n$$

where $n$ is the dimension of $\mathbf{x}$.
1) We assume that:

$$\operatorname{rank}\underbrace{\bigl(B\mid AB\mid A^2B\mid\cdots\mid A^{n-1}B\bigr)}_{\Gamma_{\mathrm{con}}}<n$$

where $n=\dim\mathbf{x}$ and $\Gamma_{\mathrm{con}}$ is the controllability matrix. Let $\mathbf{z}$ be a non-zero vector such that $\mathbf{z}^{\mathrm T}\Gamma_{\mathrm{con}}=0$. By using the solution of the state equation:

$$\mathbf{x}(t)=e^{At}\mathbf{x}(0)+\int_0^t e^{A(t-\tau)}B\mathbf{u}(\tau)\,d\tau$$

show that the control $\mathbf{u}$ cannot influence the value $\mathbf{z}^{\mathrm T}\mathbf{x}$. Deduce from this that the system is not controllable.
2) Show that if $\operatorname{rank}(\Gamma_{\mathrm{con}})=n$, then for every pair $(\mathbf{x}(0),\mathbf{x}(t_1))$, $t_1>0$, there is a polynomial control $\mathbf{u}(t)$, $t\in[0,t_1]$, which leads the system from state $\mathbf{x}(0)$ to state $\mathbf{x}(t_1)$. Here, we will limit ourselves to $t_1=1$, knowing that the main principle of the proof remains valid for any $t_1>0$.
EXERCISE 4.6.– Proof of the observability criterion
Consider the continuous-time linear system:

$$\begin{cases}\dot{\mathbf{x}}=A\mathbf{x}+B\mathbf{u}\\ \mathbf{y}=C\mathbf{x}\end{cases}$$
1) Show that:

$$\begin{pmatrix}\mathbf{y}\\ \dot{\mathbf{y}}\\ \ddot{\mathbf{y}}\\ \vdots\\ \mathbf{y}^{(n-1)}\end{pmatrix}=\begin{pmatrix}C\\ CA\\ CA^2\\ \vdots\\ CA^{n-1}\end{pmatrix}\mathbf{x}+\begin{pmatrix}0&0&\cdots&0\\ CB&0&\cdots&0\\ CAB&CB&\ddots&\vdots\\ \vdots&\ddots&\ddots&0\\ CA^{n-2}B&\cdots&CAB&CB\end{pmatrix}\begin{pmatrix}\mathbf{u}\\ \dot{\mathbf{u}}\\ \ddot{\mathbf{u}}\\ \vdots\\ \mathbf{u}^{(n-2)}\end{pmatrix}$$
2) Deduce from this that if the observability matrix:

$$\Gamma_{\mathrm{obs}}=\begin{pmatrix}C\\ CA\\ \vdots\\ CA^{n-1}\end{pmatrix}$$

is of full rank, then we can express the state $\mathbf{x}$ as a linear function of the quantities $\mathbf{u},\mathbf{y},\dot{\mathbf{u}},\dot{\mathbf{y}},\ldots,\mathbf{u}^{(n-2)},\mathbf{y}^{(n-2)},\mathbf{y}^{(n-1)}$.
EXERCISE 4.7.– Kalman decomposition
A linear system can always be decomposed, after a suitable change of basis, into four subsystems $S_1$, $S_2$, $S_3$, $S_4$ where $S_1$ is controllable and observable, $S_2$ is non-controllable and observable, $S_3$ is controllable and non-observable, and $S_4$ is neither controllable nor observable. The dependencies between the subsystems can be summarized in Figure 4.4.

Figure 4.4. Principle of the Kalman decomposition; C: controllable, O: observable, C̄: non-controllable, Ō: non-observable
Let us note that, in the figure, there is no path (respecting the direction of the arrows) leading from the input $\mathbf{u}$ to a non-controllable system. Similarly, there is no path leading from a non-observable system to $\mathbf{y}$. We consider the system described by the state equation:

$$\begin{cases}\dot{\mathbf{x}}(t)=\begin{pmatrix}A_{11}&A_{12}&0&0\\0&A_{22}&0&0\\A_{31}&A_{32}&A_{33}&A_{34}\\0&A_{42}&0&A_{44}\end{pmatrix}\mathbf{x}(t)+\begin{pmatrix}B_1\\0\\B_3\\0\end{pmatrix}\mathbf{u}(t)\\[4mm] \mathbf{y}(t)=\begin{pmatrix}C_1&C_2&0&0\end{pmatrix}\mathbf{x}(t)+D\,\mathbf{u}(t)\end{cases}$$
Draw a wiring diagram of the system. Show a decomposition in subsystems $S_i$ corresponding to the Kalman decomposition.
EXERCISE 4.8.– Resolution of the pole placement equation
We will illustrate here the resolution of the pole placement equation when the system only has a single input. We consider the system:

$$\dot{\mathbf{x}}=\begin{pmatrix}1&2\\3&4\end{pmatrix}\mathbf{x}+\begin{pmatrix}1\\2\end{pmatrix}u$$

that we are looking to stabilize by a state feedback of the form $u=w-K\mathbf{x}$, with $K=(k_1\;\;k_2)$. Calculate $K$ so that the characteristic polynomial $P_{\mathrm{con}}(s)$ of the closed-loop system has both roots equal to $-1$.
EXERCISE 4.9.– Output feedback of a scalar system
Let us consider the following state equation:

$$\begin{cases}\dot{x}=3x+2u\\ y=4x\end{cases}$$
1) Propose an output feedback controller that puts all the poles in $-1$ and such that the setpoint variable corresponds to $x$ (in other words, if we fix the setpoint at $\bar{w}$, we want the state $x$ to converge toward $\bar{w}$).
2) Give the state equations of the looped system. What are the poles of the looped system?
EXERCISE 4.10.– Separation principle
Let us consider a system looped by a pole placement method, as represented in Figure 4.5.
Figure 4.5. Output feedback controller
1) Give the state equations of the controller.
2) Find the state equations of the looped system. We will take as state vector the vector $(\mathbf{x}^{\mathrm T}\;\hat{\mathbf{x}}^{\mathrm T})^{\mathrm T}$.

3) Let us take $\varepsilon_{\mathbf{x}}=\hat{\mathbf{x}}-\mathbf{x}$ and take as new state vector $(\mathbf{x}^{\mathrm T}\;\varepsilon_{\mathbf{x}}^{\mathrm T})^{\mathrm T}$. Give the associated state representation. Show that $\varepsilon_{\mathbf{x}}$ is not controllable.
4) Show that the characteristic polynomial of the loopedsystem is given by:
$$P(s)=\det\left(sI-A+BK\right)\cdot\det\left(sI-A+LC\right)$$
What can be deduced from this about the relationship between the poles of this system and the ones we have placed?
EXERCISE 4.11.– Choosing the precompensator
Let us consider once again the looped linear system of Figure 4.5. We will assume that this system is stable. The setpoint $\mathbf{w}(t)$ is a constant $\bar{\mathbf{w}}$.

1) Toward what values $\bar{\mathbf{x}}$ and $\bar{\varepsilon}_{\mathbf{x}}$ do $\mathbf{x}$ and $\varepsilon_{\mathbf{x}}$ converge once the point of equilibrium has been reached?

2) We call setpoint variables $\mathbf{x}_c$ a set of $m$ state variables, where $m=\dim(\mathbf{w})=\dim(\mathbf{u})$, for which we would like that, for constant $\mathbf{w}=\bar{\mathbf{w}}$, $\mathbf{x}_c$ converges toward $\bar{\mathbf{x}}_c=\bar{\mathbf{w}}$. Let us assume that these variables may be obtained by a linear combination of the components of $\mathbf{x}$ through a relation of the type:

$$\mathbf{x}_c=E\mathbf{x}$$

where $E$ is an $m\times n$ matrix, referred to as the setpoint matrix. Let us assume that the state is $\mathbf{x}=(x_1,x_2,x_3)^{\mathrm T}$ and that we wish the setpoint $\mathbf{w}=(w_1,w_2)^{\mathrm T}$ to be such that $w_1$ corresponds to $x_3$ and $w_2$ corresponds to $3.28\,x_1$ ($x_1$ is, for example, a position expressed in meters and we want $w_2$ to be expressed in feet, knowing that 1 meter = 3.28 feet). Calculate the setpoint matrix.

3) Calculate $H$ in a way that $E\mathbf{x}$ converges toward $\bar{\mathbf{x}}_c$, again for a constant setpoint.
EXERCISE 4.12.– Control for a pump-operating motor
We consider the direct current motor of Figure 4.6.
Figure 4.6. Direct current motor
Its state equations are of the form:

$$\begin{cases}\dfrac{di}{dt}=-\dfrac{R}{L}\,i-\dfrac{\kappa}{L}\,\omega+\dfrac{u}{L}\\[2mm] \dot{\omega}=\dfrac{\kappa}{J}\,i-\dfrac{\rho}{J}\,\omega-\dfrac{T_r}{J}\end{cases}$$

where $\kappa$, $R$, $L$, $J$ are constant parameters of the motor. The inputs are the voltage $u$ and the torque $T_r$; the state variables are $i$ and $\omega$. We use this motor to pump water from a well. In this case, the torque is $T_r=\alpha\omega$, where $\alpha$ is a constant parameter. We will neglect $\rho$ with respect to $\alpha$. There is consequently a single input for the motor+pump system: the voltage $u$.
1) Give the state equations of the motor+pump system.
2) We choose as output y = ω. Calculate the transferfunction of the motor+pump system.
3) Give the differential equation associated with thistransfer function.
4) We will take as state vector $\mathbf{x}=(y,\dot{y})^{\mathrm T}$. Give a state representation of the system in matrix form.
5) A proportional-derivative controller is a linear combination of the output $y$, its derivative $\dot{y}$ and the setpoint $w$ (be careful not to confuse the setpoint $w$ with the speed $\omega$ of the motor shaft). This controller is written in the form:

$$u=h\,w-k_1y-k_2\dot{y}$$

Give the state equations of the motor+pump system looped by such a controller.
6) Give the values of k1 and k2 that we need to choose inorder to have the poles of the looped system equal to −1.
7) Find h in such a way that y converges toward w when thesetpoint w is constant.
EXERCISE 4.13.– Proportional, integral and derivativecontrol
We consider a second-order system described by the following differential equation:

$$\ddot{y}+a_1\dot{y}+a_0y=u$$

1) Give the state equation of the system in matrix form. We will take as state vector $\mathbf{x}=(y\;\;\dot{y})^{\mathrm T}$.

2) Let $w$ be a setpoint that we will assume constant. We would like $y(t)$ to converge toward $w$. We define the error by $e(t)=w-y(t)$. We suggest controlling our system by the following proportional-integral-derivative (PID) controller:

$$u(t)=\alpha_{-1}\int_0^t e(\tau)\,d\tau+\alpha_0\,e(t)+\alpha_1\,\dot{e}(t)$$

where the $\alpha_i$ are the coefficients of the controller. This is a state feedback controller, where we assume that $\mathbf{x}(t)$ is measured. Give the state equations of this PID controller with the inputs $\mathbf{x}$, $w$ and output $u$. A state variable $z(t)=\int_0^t e(\tau)\,d\tau$ will have to be created in order to take into account the integrator of the control.
3) Draw the wiring diagram of the looped system. Thisdiagram will only be composed of integrators, adders andamplifiers. Encircle the controller on the one hand, and thesystem to be controlled on the other hand.
4) Give the state equations of the looped system in matrixform.
5) How do we choose the coefficients αi of the control (as afunction of the ai) so as to have a stable looped system in whichall the poles are equal to −1?
6) We slightly change the value of the parameters $a_0$ and $a_1$ while keeping the same controller. We assume that this modification does not destabilize our system. The new values for $a_0$, $a_1$ are denoted by $a_0'$, $a_1'$. For a given value $\bar{w}$ of $w$, toward what value $\bar{y}$ does $y$ converge? What do you conclude from this?
EXERCISE 4.14.– State feedback for an order 3 system
We consider the system described in Figure 4.7.
Figure 4.7. State feedback for a linear system in canonical control form
1) Give the state equations of this system. What is itscharacteristic polynomial?
2) We would like to control this system by a state feedbackof the form u = −Kx + hw, where w is the setpoint. CalculateK in order for all the poles to be equal to −1.
3) We would like, at equilibrium (in other words, when thesetpoint and the output no longer change), to have y = w.Deduce from this the value to take for the precompensator h.
EXERCISE 4.15.– State feedback with integral effect,monovariate case
We consider the system described by the state equations:

$$\begin{cases}\dot{\mathbf{x}}=\begin{pmatrix}1&1\\0&2\end{pmatrix}\mathbf{x}+\begin{pmatrix}0\\1\end{pmatrix}u\\[2mm] y=x_1\end{cases}$$
where u is the input, y is the output and x is the state vector.
1) Give the characteristic polynomial of the system. Is thesystem stable?
2) We loop the system by the following state feedback control:

$$u=\alpha\int_0^t\bigl(w(\tau)-x_1(\tau)\bigr)\,d\tau-K\mathbf{x},\qquad\text{with }K=(k_1\;\;k_2)$$

where $w$ is the setpoint. Give the state equations of the controller (we will denote by $z$ the state variable of the controller). What are the poles of the controller?
3) Give the state equations of the looped system.
4) Calculate K and α in order for all the poles to be equal to−1.
5) We choose a setpoint $w=\bar{w}$ constant in time. Toward what values $\bar{\mathbf{x}}$ and $\bar{z}$ do the state $\mathbf{x}$ of the system and the state $z$ of the controller tend? Toward what value $\bar{y}$ does the output $y$ tend?

6) We now replace the evolution matrix $A$ by another matrix $\tilde{A}$ close to $A$, while keeping the same controller. Toward what value $\bar{y}$ will $y$ converge?
EXERCISE 4.16.– State feedback with integral effect, generalcase
We consider the system described by the state equation $\dot{\mathbf{x}}=A\mathbf{x}+B\mathbf{u}+\mathbf{p}$, where $\mathbf{p}$ is an unknown and constant disturbance vector that represents an external disturbance. We will take $m=\dim\mathbf{u}$ and $n=\dim\mathbf{x}$. A state feedback controller with integral effect is of the form:

$$\mathbf{u}=K_i\int_0^t\left(\mathbf{w}-\mathbf{x}_c\right)dt-K\mathbf{x}$$

where $\mathbf{w}$ is the setpoint and $\mathbf{x}_c$ is a vector with the same dimension as $\mathbf{u}$ representing the setpoint state variables (in other words, those we wish to control directly by using $\mathbf{w}$). The vector $\mathbf{x}_c$ is linked to the state $\mathbf{x}$ by the relation $\mathbf{x}_c=E\mathbf{x}$, where $E$ is a known matrix.
1) Give the state equations of the looped system. Recall that the state of the looped system will be composed of the state $\mathbf{x}$ of the system to control and of the vector $\mathbf{z}$ of the values memorized in the integrators. Give the dimensions of the values $\mathbf{w}$, $\mathbf{x}_c$, $\mathbf{p}$, $\mathbf{z}$, $A$, $B$, $E$, $K_i$, $K$.
2) Show, with the MATLAB place function, how we can choose the matrices $K$ and $K_i$ so that all the poles of the looped system are equal to $-1$.

3) Show that, for a constant setpoint $\bar{\mathbf{w}}$ and disturbance $\mathbf{p}$, we necessarily have $\bar{\mathbf{x}}_c=\bar{\mathbf{w}}$ once the steady state has been reached. What is then the value of $\mathbf{p}$ (as a function of $\bar{\mathbf{x}}$ and $\bar{\mathbf{z}}$)?

4) Give the state equations of the controller.
EXERCISE 4.17.– Observer
We consider the system with input $u$ and output $y$ of Figure 4.8.
1) Give, in matrix form, a state representation for thissystem. Deduce from this a simplification for the wiringsystem.
2) Study the stability of this system.
3) Give the transfer function of this system.
4) Is the system controllable? Is it observable?
5) Find an observer that allows us to generate a state estimate $\hat{\mathbf{x}}(t)$ such that the error $\|\hat{\mathbf{x}}-\mathbf{x}\|$ converges toward zero as $e^{-t}$ (which means placing all the poles at $-1$). Give this observer in state equation form.
Figure 4.8. Second-order system for which we need to design an observer
EXERCISE 4.18.– Amplitude demodulator
A sinusoidal signal with pulsation $\omega$ can be written in the form:

$$y(t)=a\cos(\omega t+b)$$

where the parameters $a$ and $b$ are unknown. From the measurement of $y(t)$, we need to find the amplitude $a$ of the signal $y(t)$.
1) Find an order 2 state equation capable of generating the signal $y(t)$. We will take as state variables $x_1=\omega y$ and $x_2=\dot{y}$.

2) Let us assume that at time $t$, we know the state vector $\mathbf{x}(t)$. Deduce from this an expression of the amplitude $a$ of the signal $y(t)$ as a function of $\mathbf{x}(t)$.

3) We only measure $y(t)$. Propose a state observer (by a pole placement method) that generates an estimation $\hat{\mathbf{x}}(t)$ of the state $\mathbf{x}(t)$. We will place all the poles at $-1$.

4) Deduce from this the state equations of an estimator with input $y$ and output $\hat{a}$ that gives us an estimation $\hat{a}$ of the amplitude $a$ of a sinusoidal signal with pulsation $\omega$.
5) In MATLAB, generate a sampled signal:

$$y(t)=a\cos\omega t+n(t)$$

with $t\in\{0,dt,2dt,\ldots,10\}$ where $dt=0.01$ is the sampling period. The signal $n(t)$ is a Gaussian noise with standard deviation 0.1. This noise is assumed white, in other words $n(t_1)$ is independent of $n(t_2)$ for $t_1\neq t_2$. We will take an amplitude $a=2$ and a pulsation $\omega=1$. By using the state observer developed in the previous questions, create a filter which, from $y(t)$, estimates the amplitude $a$ of the signal. Draw the filtered signal $\hat{y}(t)$ as well as the estimation $\hat{a}(t)$ of the amplitude.

6) Draw in MATLAB the Bode plot of the filter that generates $\hat{y}(t)$ from $y(t)$. Discuss.
EXERCISE 4.19.– Output feedback of a non-strictly propersystem
We consider the system $S$ with input $u$ and output $y$ described by the differential equation:

$$\dot{y}-2y=\dot{u}+3u$$

Since the degree of differentiation of $u$ is not strictly smaller than that of $y$, the system is not strictly proper.
1) Write this system in a state representation form.
2) In order to make this system strictly proper (in other words, with a direct matrix $D$ equal to zero), we create a new output $z=y-\alpha u$. Give the appropriate value of $\alpha$. We will denote by $S_z$ this new system whose input is $u$ and output is $z$.

3) Find an output feedback controller for $S_z$ that sets all the poles of the looped system to $-1$.

4) Deduce from this a controller $S_r$ (in state equation form) for $S$ that sets all the poles of the looped system to $-1$. We will denote by $S_b$ the system $S$ looped by the controller $S_r$.
5) Give the state equations of the looped system Sb.
6) Calculate the transfer function of the looped system Sb.
7) Calculate the static gain of the looped system. This gain corresponds to the ratio $\frac{y}{w}$ in a steady state. Deduce from this the gain of the precompensator to be placed before the system that would allow us to have a static gain of 1.
EXERCISE 4.20.– Output feedback with integral effect
We consider the system described by the state equation:

$$\begin{cases}\dot{\mathbf{x}}=A\mathbf{x}+B\mathbf{u}+\mathbf{p}\\ \mathbf{y}=C\mathbf{x}\end{cases}$$
where $\mathbf{p}$ is a disturbance vector, representing an external disturbance that could not be taken into account in the modeling (wind, the slope of the terrain, the weight of the people in an elevator, etc.). The vector $\mathbf{p}$ is assumed to be unknown and constant. We will take $m=\dim\mathbf{u}$, $n=\dim\mathbf{x}$ and $p=\dim\mathbf{y}$. We need to control this system using the output feedback controller with integral effect represented in Figure 4.9.
In this figure, $\mathbf{w}$ is the setpoint and $\mathbf{y}_c$ is a vector with the same dimension as $\mathbf{u}$ representing the setpoint output variables (in other words, those that we wish to control directly using $\mathbf{w}$, and thus have $\mathbf{y}_c=\mathbf{w}$ when the setpoint is constant). The vector $\mathbf{y}_c$ satisfies the relation $\mathbf{y}_c=E\mathbf{y}$, where $E$ is a known matrix.
1) Give the state equations of the looped system in matrixform.
2) Let us take $\boldsymbol{\varepsilon}=\mathbf{x}-\hat{\mathbf{x}}$. Express these state equations in matrix form by taking, this time, the state vector $(\mathbf{x},\mathbf{z},\boldsymbol{\varepsilon})$.
3) By using the MATLAB place function, show how we canarbitrarily set all the poles of the looped system.
4) Show that, for a constant setpoint w and disturbance p,in a steady state, we necessarily have yc = w.
5) Give the state equations of the controller, in matrix form.
Figure 4.9. Output feedback controller with integral effect
4.6. Solutions
Solution to Exercise 4.1 (removing the direct matrix)
The new system has the state equations:

$$\begin{cases}\dot{\mathbf{x}}=A\mathbf{x}+B\mathbf{u}\\ \mathbf{z}=C\mathbf{x}\end{cases}$$

The direct matrix $D$ has, therefore, disappeared.
Solution to Exercise 4.2 (non-observable states and non-controllable states)
1) The transfer function of the system is given by:

$$\begin{pmatrix}1&1&0&0\end{pmatrix}\left(\begin{pmatrix}s&0&0&0\\0&s&0&0\\0&0&s&0\\0&0&0&s\end{pmatrix}-\begin{pmatrix}-1&1&0&0\\0&1&0&0\\1&1&1&1\\0&1&0&1\end{pmatrix}\right)^{-1}\begin{pmatrix}1\\0\\1\\0\end{pmatrix}+1=\frac{s+2}{s+1}$$
Of course, the inversion is not necessary and the following, faster calculation can be performed:

$$\begin{cases}y=x_1+x_2+u\\ sx_1=-x_1+x_2+u\\ sx_2=x_2\end{cases}\;\Leftrightarrow\;\begin{cases}y=x_1+u\\ sx_1=-x_1+u\\ x_2=0\end{cases}\;\Rightarrow\;y=\left(\frac{1}{s+1}+1\right)u$$

2) The characteristic polynomial of the evolution matrix is $P(s)=s^4-2s^3+2s-1$. Its roots are $\{-1,1,1,1\}$. The system is therefore unstable, even though its transfer function $\frac{s+2}{s+1}$ is stable.
3) The wiring diagram (not represented here) shows that $x_1$ is observable and controllable, $x_2$ is observable and non-controllable, $x_3$ is non-observable and controllable, and $x_4$ is non-observable and non-controllable.

4) The poles of the system are $1,1,1,-1$. The transmission pole is $-1$. The hidden modes are $1,1,1$.
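As a cross-check of questions 2) and 4), $1$ and $-1$ must be roots of $\det(sI-A)$. A small plain-Python sketch (cofactor-expansion determinant; illustrative only):

```python
def det(M):
    # determinant by cofactor expansion along the first row (fine for 4x4)
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

A = [[-1, 1, 0, 0], [0, 1, 0, 0], [1, 1, 1, 1], [0, 1, 0, 1]]

def p(s):  # characteristic polynomial det(sI - A) evaluated at s
    return det([[(s if i == j else 0) - A[i][j] for j in range(4)] for i in range(4)])

print(p(1), p(-1))  # -> 0 0   (so 1 and -1 are indeed eigenvalues)
```

Since the arithmetic is over the integers, the zeros are exact, consistent with $P(s)=(s+1)(s-1)^3$.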
Solution to Exercise 4.3 (using the controllability and observability criteria)
1) The controllability matrix of the system is given by:

$$\Gamma_{\mathrm{con}}=\begin{pmatrix}B\,|\,AB\,|\,A^2B\end{pmatrix}=\begin{pmatrix}0&0&1&0&2&0\\1&0&1&0&1&0\\1&a&1&a&1&a\end{pmatrix}$$

If a = 0, the rank is equal to 2, and therefore the system is non-controllable. For a ≠ 0, rank(Γcon) = 3 and therefore the system is controllable.
2) The observability matrix is:

$$\Gamma_{\mathrm{obs}}=\begin{pmatrix}C\\CA\\CA^2\end{pmatrix}=\begin{pmatrix}1&1&b\\0&1&0\\1&2&b\\0&1&0\\1&3&b\\0&1&0\end{pmatrix}$$

Its rank is equal to 2 for every value of b. The system is therefore not observable. We would have needed rank(Γobs) = 3 in order to have an observable system.
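The same numerical check applies here (again a Python/numpy sketch with a helper name of our choosing; the matrix is the one above):

```python
import numpy as np

def obs_rank(b):
    # Observability matrix (C; CA; CA^2) from the solution, for parameter b
    G = np.array([[1, 1, b],
                  [0, 1, 0],
                  [1, 2, b],
                  [0, 1, 0],
                  [1, 3, b],
                  [0, 1, 0]], dtype=float)
    return np.linalg.matrix_rank(G)

print(obs_rank(0.0), obs_rank(5.0))   # 2 2  (rank 2 for every b)
```

The third column is always b times the first one, which explains why the rank never reaches 3.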
Solution to Exercise 4.4 (proof of the controllability criterion in discrete time)

REMINDER.– Let us consider the linear system Ax = b, where x ∈ Rⁿ and the matrix A is horizontal-rectangular (with its number of columns n higher than its number of rows m). If A is of full rank (in other words, equal to m), then the set of solutions is an affine space of dimension n − m.
1) Let us note first of all that if x(n) can be chosen as desired, from any initial vector, it is clear that it is possible to move in the state space, and therefore that the system is controllable. We have:

$$\begin{aligned}x(1)&=Ax(0)+Bu(0)\\ x(2)&=Ax(1)+Bu(1)=A^2x(0)+ABu(0)+Bu(1)\\ x(3)&=Ax(2)+Bu(2)=A^3x(0)+A^2Bu(0)+ABu(1)+Bu(2)\\ &\;\;\vdots\\ x(n)&=A^nx(0)+A^{n-1}Bu(0)+A^{n-2}Bu(1)+\dots+ABu(n-2)+Bu(n-1)\end{aligned}$$
Thus:

$$x(n)=A^nx(0)+\underbrace{\begin{pmatrix}B\,|\,AB\,|\,\dots\,|\,A^{n-1}B\end{pmatrix}}_{\Gamma_{\mathrm{con}}}\begin{pmatrix}u(n-1)\\u(n-2)\\\vdots\\u(1)\\u(0)\end{pmatrix}=A^nx(0)+\Gamma_{\mathrm{con}}\,v$$

where Γcon is the controllability matrix of dimension n × mp and v is the vector of dimension mp of inputs (and future inputs) that we are looking for.
2) In order to impose x(n) arbitrarily, we need to solve the system of n linear equations:

$$\Gamma_{\mathrm{con}}\,v=x(n)-A^nx(0)$$
with mp unknowns. If Γcon is of full rank (in other words, if its column vectors form a generating family of Rⁿ), then there is always at least one solution v for this system.
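The construction above can be sketched numerically. The system below (a discrete-time double integrator) is a hypothetical example of ours, not one from the book; we solve Γcon v = x(n) − Aⁿx(0) and verify by forward simulation:

```python
import numpy as np

# Hypothetical discrete-time system (double integrator), with n = 2 states
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])

Gcon = np.hstack([B, A @ B])                 # (B | AB), full rank here
x0 = np.array([0.0, 0.0])
xn = np.array([3.0, -1.0])                   # desired x(2)

# v = (u(1), u(0)) solves  Gcon v = x(n) - A^n x(0)
v = np.linalg.solve(Gcon, xn - np.linalg.matrix_power(A, 2) @ x0)
u1, u0 = v

# Forward simulation confirms the target is reached in n = 2 steps
x1 = A @ x0 + B[:, 0] * u0
x2 = A @ x1 + B[:, 0] * u1
print(x2)    # [ 3. -1.]
```

When Γcon is rectangular with full row rank, np.linalg.lstsq would return one of the infinitely many solutions.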
Solution to Exercise 4.5 (proof of the controllability criterion in continuous time)

1) We have:

$$z^{\mathrm T}x(t)=z^{\mathrm T}e^{At}x(0)+\int_0^t z^{\mathrm T}e^{A(t-\tau)}Bu(\tau)\,d\tau$$

where z is a non-zero vector such that zᵀΓcon = 0. It is therefore sufficient to show that the quantity zᵀe^{A(t−τ)}B is always zero. We have:

$$e^{A(t-\tau)}=\sum_{i=0}^{\infty}\frac{(t-\tau)^iA^i}{i!}$$
However, the Cayley–Hamilton theorem tells us that the matrix A satisfies the characteristic equation:

$$A^n+a_{n-1}A^{n-1}+\dots+a_2A^2+a_1A+a_0I=0$$

where sⁿ + a_{n−1}sⁿ⁻¹ + ... + a₁s + a₀ is the characteristic polynomial of A. From this, we deduce that all the powers of A can be expressed as linear combinations of the Aⁱ, i ∈ {0, ..., n − 1}. Thus, e^{A(t−τ)} can be written in the form:

$$e^{A(t-\tau)}=\sum_{i=0}^{n-1}\beta_i\,(t-\tau)^iA^i$$
We therefore have:

$$z^{\mathrm T}e^{A(t-\tau)}B=z^{\mathrm T}\left(\sum_{i=0}^{n-1}\beta_i\,(t-\tau)^iA^i\right)B=\sum_{i=0}^{n-1}\beta_i\,(t-\tau)^i\left(z^{\mathrm T}A^iB\right)$$

Since the quantity zᵀAⁱB is equal to zero (by assumption), zᵀe^{A(t−τ)}B = 0. Thus, for all u we have zᵀx(t) = zᵀe^{At}x(0), and therefore the control u cannot influence the component of x along z. The system is therefore non-controllable.
2) We will now show that if rank(Γcon) = n, for every pair (x(0), x(1)), there exists a polynomial control u(t), t ∈ [0, 1], of the form:

$$u(t)=m(0)+m(1)\,t+\dots+m(n-1)\,t^{n-1}=\underbrace{\begin{pmatrix}m(0)\,|\,m(1)\,|\,\dots\,|\,m(n-1)\end{pmatrix}}_{M}\begin{pmatrix}1\\t\\\vdots\\t^{n-1}\end{pmatrix}$$
which leads the system from the state x(0) to the state x(1). We have:

$$x(1)=e^{A}x(0)+\int_0^1e^{A(1-\tau)}Bu(\tau)\,d\tau=e^{A}\left(x(0)+\int_0^1e^{-\tau A}Bu(\tau)\,d\tau\right)$$

We therefore need to find u such that:

$$\int_0^1e^{-\tau A}Bu(\tau)\,d\tau=\underbrace{e^{-A}x(1)-x(0)}_{z}$$
However, e^{−τA} can be written in the form:

$$e^{-\tau A}=\sum_{i=0}^{n-1}\gamma_i\,\tau^iA^i$$

(we need to apply the Cayley–Hamilton theorem to the matrix τA this time). Thus:

$$\int_0^1e^{-\tau A}Bu(\tau)\,d\tau=\int_0^1\sum_{i=0}^{n-1}\gamma_i\,\tau^iA^iBu(\tau)\,d\tau=\sum_{i=0}^{n-1}A^iB\,\gamma_i\underbrace{\int_0^1\tau^iu(\tau)\,d\tau}_{h(i)}$$

Since the controllability matrix is of rank n and the γᵢ are non-zero (which we will assume), the system to be solved:

$$\sum_{i=0}^{n-1}A^iB\,\gamma_i\,h(i)=z$$

always has a solution h(0), ..., h(n − 1). We therefore need to solve the following n equations:

$$\int_0^1\tau^iu(\tau)\,d\tau=h(i),\qquad i=0,\dots,n-1$$
However:

$$\int_0^1\tau^iu(\tau)\,d\tau=\int_0^1\tau^i\left(\sum_{k=0}^{n-1}m(k)\,\tau^k\right)d\tau=\sum_{k=0}^{n-1}m(k)\int_0^1\tau^{i+k}\,d\tau=\sum_{k=0}^{n-1}\frac{1}{k+i+1}\,m(k)=\begin{pmatrix}m(0)\,|\,m(1)\,|\,\dots\,|\,m(n-1)\end{pmatrix}\begin{pmatrix}\frac{1}{i+1}\\\frac{1}{i+2}\\\vdots\\\frac{1}{i+n}\end{pmatrix}$$

Thus, the n equations to be solved are written as:
$$\underbrace{\begin{pmatrix}m(0)\,|\,\dots\,|\,m(n-1)\end{pmatrix}}_{M}\cdot\underbrace{\begin{pmatrix}1&\frac{1}{2}&\dots&\frac{1}{n}\\\frac{1}{2}&\frac{1}{3}&\dots&\frac{1}{n+1}\\\vdots&\vdots&&\vdots\\\frac{1}{n}&\frac{1}{n+1}&\dots&\frac{1}{2n-1}\end{pmatrix}}_{T}=\underbrace{\begin{pmatrix}h(0)\,|\,\dots\,|\,h(n-1)\end{pmatrix}}_{H}$$

The matrix T is invertible, and therefore M = HT⁻¹, which gives us the control u.
Solution to Exercise 4.6 (proof of the observability criterion)

In this exercise, we will show that, in the case where our system is observable, the knowledge of the first n − 1 derivatives of the outputs and the first n − 2 derivatives of the inputs allows us to find the state vector.
1) Let us differentiate the observation equation n − 1 times. We obtain:

$$\begin{aligned}y&=Cx\\ \dot{y}&=CAx+CBu\\ \ddot{y}&=CA^2x+CABu+CB\dot{u}\\ &\;\;\vdots\\ y^{(n-1)}&=CA^{n-1}x+CA^{n-2}Bu+CA^{n-3}B\dot{u}+\dots+CABu^{(n-3)}+CBu^{(n-2)}\end{aligned}$$
Or, in matrix form:

$$\begin{pmatrix}y\\\dot{y}\\\ddot{y}\\\vdots\\y^{(n-1)}\end{pmatrix}=\begin{pmatrix}C\\CA\\CA^2\\\vdots\\CA^{n-1}\end{pmatrix}x+\begin{pmatrix}0&0&\dots&0\\CB&0&\dots&0\\CAB&CB&&\vdots\\\vdots&\ddots&\ddots&\\CA^{n-2}B&\dots&CAB&CB\end{pmatrix}\begin{pmatrix}u\\\dot{u}\\\ddot{u}\\\vdots\\u^{(n-2)}\end{pmatrix}$$

2) This equation can be written in the form z = Γobs x + Φv, where z is the vector of all the outputs and their derivatives, v is the vector of all the inputs and their derivatives, Γobs is the observability matrix and Φ is the remaining matrix. The system we need to solve in order to recover the state x is given by:
$$\Gamma_{\mathrm{obs}}\,x=z-\Phi v$$
This equation has at most one solution if Γobs is of full rank. The absence of a solution would mean that v and z are incompatible with the equations of our system, which contradicts our assumptions. This solution is given by:

$$x=\left(\Gamma_{\mathrm{obs}}^{\mathrm T}\Gamma_{\mathrm{obs}}\right)^{-1}\Gamma_{\mathrm{obs}}^{\mathrm T}\left(z-\Phi v\right)$$

The matrix (Γobsᵀ Γobs)⁻¹ Γobsᵀ is called the pseudoinverse of the matrix Γobs. It only exists if Γobs is of full rank.
Solution to Exercise 4.7 (Kalman decomposition)
The requested diagram is represented in Figure 4.10. We can see the four subsystems: S1 (controllable and observable), S2 (non-controllable and observable), S3 (controllable and non-observable) and S4 (neither controllable nor observable).

Figure 4.10. Details of the Kalman decomposition for linear systems
Solution to Exercise 4.8 (resolution of the pole placement equation)

We have:

$$P_{\mathrm{con}}(s)=(s+1)(s+1)=s^2+2s+1$$
The pole placement equation det(sI − A + BK) = Pcon(s) is written as:

$$\det\left(\begin{pmatrix}s&0\\0&s\end{pmatrix}-\begin{pmatrix}1&2\\3&4\end{pmatrix}+\begin{pmatrix}1\\2\end{pmatrix}\begin{pmatrix}k_1&k_2\end{pmatrix}\right)=s^2+2s+1$$

i.e.:

$$\det\begin{pmatrix}s+k_1-1&k_2-2\\2k_1-3&s+2k_2-4\end{pmatrix}=s^2+2s+1$$

or, alternatively:

$$s^2+s\,(k_1+2k_2-5)+k_2-2=s^2+2s+1$$

We obtain the following linear system:

$$\begin{pmatrix}1&2\\0&1\end{pmatrix}\begin{pmatrix}k_1\\k_2\end{pmatrix}+\begin{pmatrix}-5\\-2\end{pmatrix}=\begin{pmatrix}2\\1\end{pmatrix}$$

whence K = (k1 k2) = (1 3). We could have obtained this result directly by using the ppol instruction of SCILAB or place of MATLAB.
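The gain can be verified numerically by checking the eigenvalues of A − BK (a Python/numpy sketch; the book itself uses MATLAB/SCILAB):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[1.0], [2.0]])
K = np.array([[1.0, 3.0]])      # gain obtained above by identification

# Both closed-loop poles should sit at -1
poles = np.sort(np.linalg.eigvals(A - B @ K).real)
print(poles)   # ~ [-1. -1.]
```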
Solution to Exercise 4.9 (output feedback of a scalar system)

1) In order to find K and L, we need to solve:

$$\det(s-3+2K)=s+1,\qquad \det(s-3+4L)=s+1$$

We obtain K = 2 and L = 1. For the calculation of the precompensator, we will take E = 1 (since the setpoint variable is xc = x). Thus:

$$H=-\left(E\,(A-BK)^{-1}B\right)^{-1}=\frac{-1}{1\cdot(3-2\cdot 2)^{-1}\cdot 2}=\frac{1}{2}$$

The controller we are looking for is therefore given by:

$$R:\;\begin{cases}\frac{d}{dt}\hat{x}=-5\hat{x}+w+y\\ u=-2\hat{x}+\frac{1}{2}\,w\end{cases}$$

2) The looped system is described by the following evolution equations:

$$\begin{cases}\dot{x}=3x-4\hat{x}+w\\ \frac{d}{dt}\hat{x}=4x-5\hat{x}+w\end{cases}$$

We can then verify that the poles of this system are the ones we have placed (namely −1 and −1), which is a consequence of the separation principle (see Exercise 4.10).
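This verification takes one line numerically (a numpy sketch of ours; the evolution matrix is the one of the looped system above):

```python
import numpy as np

# Evolution matrix of the looped system (x, xhat) found above
M = np.array([[3.0, -4.0],
              [4.0, -5.0]])
print(np.poly(M))   # characteristic polynomial ~ [1, 2, 1], i.e. (s+1)^2
```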
Solution to Exercise 4.10 (separation principle)
1) The controller has x̂ as state vector and, as inputs, the setpoint w and the output y of the system to control. The output of the controller is the control u. The state representation of the controller is expressed by:

$$\begin{cases}\frac{d}{dt}\hat{x}=A\hat{x}+B\,(Hw-K\hat{x})-L\,(C\hat{x}-y)\\ u=Hw-K\hat{x}\end{cases}$$

or, in a simplified form:

$$\begin{cases}\frac{d}{dt}\hat{x}=(A-BK-LC)\,\hat{x}+BHw+Ly\\ u=-K\hat{x}+Hw\end{cases}$$

In matrix form, this equation is written as:

$$\begin{cases}\frac{d}{dt}\hat{x}=(A-BK-LC)\,\hat{x}+\begin{pmatrix}BH&L\end{pmatrix}\begin{pmatrix}w\\y\end{pmatrix}\\ u=-K\hat{x}+\begin{pmatrix}H&0\end{pmatrix}\begin{pmatrix}w\\y\end{pmatrix}\end{cases}$$
These are the equations we need to wire or program in order to control our system.
2) The state equations associated with the looped system, with input w and output y of our output feedback controlled system, are given by:

$$\begin{cases}\dot{x}=Ax+B\,(w-K\hat{x})\\ \frac{d}{dt}\hat{x}=(A-BK-LC)\,\hat{x}+LCx+Bw\\ y=Cx\end{cases}$$

In matrix form, these equations are written as:

$$\begin{cases}\frac{d}{dt}\begin{pmatrix}x\\\hat{x}\end{pmatrix}=\begin{pmatrix}A&-BK\\LC&A-BK-LC\end{pmatrix}\begin{pmatrix}x\\\hat{x}\end{pmatrix}+\begin{pmatrix}B\\B\end{pmatrix}w\\ y=\begin{pmatrix}C&0\end{pmatrix}\begin{pmatrix}x\\\hat{x}\end{pmatrix}\end{cases}$$

3) Let us take εx = x̂ − x. Since:

$$\begin{pmatrix}x\\\varepsilon_x\end{pmatrix}=\begin{pmatrix}I&0\\-I&I\end{pmatrix}\begin{pmatrix}x\\\hat{x}\end{pmatrix}$$

or, equivalently:

$$\begin{pmatrix}x\\\hat{x}\end{pmatrix}=\begin{pmatrix}I&0\\I&I\end{pmatrix}\begin{pmatrix}x\\\varepsilon_x\end{pmatrix}$$
another possible state vector for the looped system is (x, εx). After this change of basis, the state equations become:

$$\begin{cases}\frac{d}{dt}\begin{pmatrix}x\\\varepsilon_x\end{pmatrix}=\begin{pmatrix}I&0\\-I&I\end{pmatrix}\begin{pmatrix}A&-BK\\LC&A-BK-LC\end{pmatrix}\begin{pmatrix}I&0\\I&I\end{pmatrix}\begin{pmatrix}x\\\varepsilon_x\end{pmatrix}+\begin{pmatrix}I&0\\-I&I\end{pmatrix}\begin{pmatrix}B\\B\end{pmatrix}w\\ y=\begin{pmatrix}C&0\end{pmatrix}\begin{pmatrix}I&0\\I&I\end{pmatrix}\begin{pmatrix}x\\\varepsilon_x\end{pmatrix}\end{cases}$$

or, alternatively:

$$\begin{cases}\frac{d}{dt}\begin{pmatrix}x\\\varepsilon_x\end{pmatrix}=\begin{pmatrix}A-BK&-BK\\0&A-LC\end{pmatrix}\begin{pmatrix}x\\\varepsilon_x\end{pmatrix}+\begin{pmatrix}B\\0\end{pmatrix}w\\ y=\begin{pmatrix}C&0\end{pmatrix}\begin{pmatrix}x\\\varepsilon_x\end{pmatrix}\end{cases}$$

Let us note that the input w cannot act on εx, which is consistent with the fact that εx is a non-controllable subvector.
4) The characteristic polynomial of the looped system is:

$$P(s)=\det\left(s\mathbf{I}-\begin{pmatrix}A-BK&-BK\\0&A-LC\end{pmatrix}\right)=\det\begin{pmatrix}s\mathbf{I}-A+BK&BK\\0&s\mathbf{I}-A+LC\end{pmatrix}$$
$$=\det(s\mathbf{I}-A+BK)\cdot\det(s\mathbf{I}-A+LC)=P_{\mathrm{con}}(s)\cdot P_{\mathrm{obs}}(s)$$
Therefore, the poles of the looped system are composed of the poles placed for the control and the poles placed for the observation. This is the separation principle.
Solution to Exercise 4.11 (choosing the precompensator)
1) We have:

$$0=\begin{pmatrix}A-BK&-BK\\0&A-LC\end{pmatrix}\begin{pmatrix}\bar{x}\\\bar{\varepsilon}_x\end{pmatrix}+\begin{pmatrix}BH\\0\end{pmatrix}\bar{w}$$

Since A − LC is invertible (all the poles placed for the observer are strictly stable), ε̄x is necessarily zero. The previous equation becomes:

$$(A-BK)\,\bar{x}+BH\bar{w}=0$$

Since A − BK is also invertible (all the poles placed for the control are strictly stable), we have:

$$\bar{x}=-(A-BK)^{-1}BH\bar{w}$$

2) Since xc1 = x3 and xc2 = 3.28 x1, we need to take:

$$E=\begin{pmatrix}0&0&1\\3.28&0&0\end{pmatrix}$$

3) At equilibrium, we have:

$$\bar{x}_c=E\bar{x}=-E\,(A-BK)^{-1}BH\bar{w}$$

Thus:

$$\bar{x}_c=\bar{w}\;\Leftrightarrow\;-E\,(A-BK)^{-1}BH=\mathbf{I}\;\Leftrightarrow\;H=-\left(E\,(A-BK)^{-1}B\right)^{-1}$$
The insertion of a precompensator therefore allows us to assign, to each setpoint composing w, a particular state variable. The variables set in this manner can then be controlled independently of one another.
Solution to Exercise 4.12 (control for a pump-operating motor)

1) We have:

$$\begin{cases}\frac{di}{dt}=-\frac{R}{L}\,i-\frac{\kappa}{L}\,\omega+\frac{u}{L}\\ \dot{\omega}=\frac{\kappa}{J}\,i-\frac{\rho+\alpha\omega}{J}\end{cases}$$

Since ρ is negligible compared to αω, these equations can be written in the following matrix form:

$$\begin{cases}\frac{d}{dt}\begin{pmatrix}i\\\omega\end{pmatrix}=\begin{pmatrix}-\frac{R}{L}&-\frac{\kappa}{L}\\\frac{\kappa}{J}&-\frac{\alpha}{J}\end{pmatrix}\begin{pmatrix}i\\\omega\end{pmatrix}+\begin{pmatrix}\frac{1}{L}\\0\end{pmatrix}u\\ y=\begin{pmatrix}0&1\end{pmatrix}\begin{pmatrix}i\\\omega\end{pmatrix}\end{cases}$$
2) The transfer function is:

$$G(s)=C\,(s\mathbf{I}-A)^{-1}B+D=\begin{pmatrix}0&1\end{pmatrix}\left(s\begin{pmatrix}1&0\\0&1\end{pmatrix}-\begin{pmatrix}-\frac{R}{L}&-\frac{\kappa}{L}\\\frac{\kappa}{J}&-\frac{\alpha}{J}\end{pmatrix}\right)^{-1}\begin{pmatrix}\frac{1}{L}\\0\end{pmatrix}$$
$$=\frac{\kappa}{JLs^2+(L\alpha+JR)\,s+R\alpha+\kappa^2}=\frac{\frac{\kappa}{JL}}{s^2+\frac{L\alpha+JR}{JL}\,s+\frac{R\alpha+\kappa^2}{JL}}$$

3) The differential equation associated with this transfer function is:

$$\ddot{y}+\frac{L\alpha+JR}{JL}\,\dot{y}+\frac{R\alpha+\kappa^2}{JL}\,y=\frac{\kappa}{JL}\,u$$
4) By taking as state vector x = (y, ẏ)ᵀ, the state representation of the system is written as:

$$\begin{cases}\dot{x}=\begin{pmatrix}0&1\\-\frac{R\alpha+\kappa^2}{JL}&-\frac{L\alpha+JR}{JL}\end{pmatrix}x+\begin{pmatrix}0\\\frac{\kappa}{JL}\end{pmatrix}u\\ y=\begin{pmatrix}1&0\end{pmatrix}x\end{cases}$$
5) We have:

$$\dot{x}=Ax+B\,(hw-Kx)=(A-BK)\,x+Bhw$$

i.e.:

$$\begin{cases}\dot{x}=\begin{pmatrix}0&1\\-\frac{R\alpha+\kappa^2}{JL}-\frac{k_1\kappa}{JL}&-\frac{L\alpha+JR}{JL}-\frac{k_2\kappa}{JL}\end{pmatrix}x+\begin{pmatrix}0\\\frac{\kappa h}{JL}\end{pmatrix}w\\ y=\begin{pmatrix}1&0\end{pmatrix}x\end{cases}$$
6) In order to have all the poles at −1, we need to solve:

$$s^2+\left(\frac{L\alpha+JR}{JL}+k_2\,\frac{\kappa}{JL}\right)s+\left(\frac{R\alpha+\kappa^2}{JL}+k_1\,\frac{\kappa}{JL}\right)=(s+1)^2$$

or alternatively:

$$\begin{cases}\frac{R\alpha+\kappa^2}{JL}+k_1\,\frac{\kappa}{JL}=1\\ \frac{L\alpha+JR}{JL}+k_2\,\frac{\kappa}{JL}=2\end{cases}$$

and thus:

$$\begin{cases}k_1=\frac{JL-R\alpha-\kappa^2}{\kappa}\\ k_2=\frac{2JL-L\alpha-JR}{\kappa}\end{cases}$$
7) At equilibrium, ẋ = 0 and therefore:

$$\begin{cases}0=\begin{pmatrix}0&1\\-1&-2\end{pmatrix}\bar{x}+\begin{pmatrix}0\\\frac{\kappa h}{JL}\end{pmatrix}\bar{w}\\ \bar{y}=\begin{pmatrix}1&0\end{pmatrix}\bar{x}\end{cases}$$

We thus have:

$$\bar{y}=\begin{pmatrix}1&0\end{pmatrix}\begin{pmatrix}0&1\\-1&-2\end{pmatrix}^{-1}\begin{pmatrix}0\\-\frac{\kappa h}{JL}\end{pmatrix}\bar{w}$$

We will have w̄ = ȳ if:

$$\underbrace{\begin{pmatrix}1&0\end{pmatrix}\begin{pmatrix}0&1\\-1&-2\end{pmatrix}^{-1}\begin{pmatrix}0\\-\frac{\kappa h}{JL}\end{pmatrix}}_{=\frac{\kappa}{LJ}h}=1$$

in other words, if:

$$h=\frac{JL}{\kappa}$$

Finally, the control is expressed by:

$$u=\frac{JL}{\kappa}\,w-\frac{JL-R\alpha-\kappa^2}{\kappa}\,y-\frac{2JL-L\alpha-JR}{\kappa}\,\dot{y}$$
Solution to Exercise 4.13 (proportional, integral and derivative control)

1) Since x = (y, ẏ)ᵀ, the state equations are written as:

$$\begin{cases}\dot{x}_1=x_2\\ \dot{x}_2=-a_1x_2-a_0x_1+u\\ y=x_1\end{cases}$$
2) We have:

$$u=\alpha_{-1}z+\alpha_0\,(w-x_1)+\alpha_1\,\frac{d}{dt}(w-x_1)=\alpha_{-1}z+\alpha_0w-\alpha_0x_1-\alpha_1x_2$$

(the setpoint w being constant, its derivative is zero). Since z(t) = ∫₀ᵗ e(τ)dτ, we have ż = −x₁ + w. The state equations of the controller are therefore:

$$\begin{cases}\dot{z}=-x_1+w\\ u=\alpha_{-1}z+\alpha_0w-\alpha_0x_1-\alpha_1x_2\end{cases}$$
3) The wiring diagram of the looped system is given in Figure 4.11.
Figure 4.11. PID controller
4) We have:

$$\dot{x}_2=-a_1x_2-a_0x_1+\alpha_{-1}z+\alpha_0w-\alpha_0x_1-\alpha_1x_2=(-a_0-\alpha_0)\,x_1-(\alpha_1+a_1)\,x_2+\alpha_{-1}z+\alpha_0w$$

The state equations of the closed-loop system are therefore:

$$\begin{cases}\begin{pmatrix}\dot{x}_1\\\dot{x}_2\\\dot{z}\end{pmatrix}=\begin{pmatrix}0&1&0\\-a_0-\alpha_0&-\alpha_1-a_1&\alpha_{-1}\\-1&0&0\end{pmatrix}\begin{pmatrix}x_1\\x_2\\z\end{pmatrix}+\begin{pmatrix}0\\\alpha_0\\1\end{pmatrix}w\\ y=x_1\end{cases}$$
5) The characteristic polynomial is P(s) = s³ + (a₁ + α₁)s² + (a₀ + α₀)s + α₋₁. In order to have all poles at −1, we need to have P(s) = (s + 1)³ = s³ + 3s² + 3s + 1. Therefore:

$$\alpha_1=3-a_1,\qquad \alpha_0=3-a_0,\qquad \alpha_{-1}=1$$
6) At equilibrium, we have:

$$\begin{cases}\begin{pmatrix}0&1&0\\-a'_0-\alpha_0&-\alpha_1-a'_1&\alpha_{-1}\\-1&0&0\end{pmatrix}\begin{pmatrix}\bar{x}_1\\\bar{x}_2\\\bar{z}\end{pmatrix}+\begin{pmatrix}0\\\alpha_0\bar{w}\\\bar{w}\end{pmatrix}=\begin{pmatrix}0\\0\\0\end{pmatrix}\\ \bar{y}-\bar{x}_1=0\end{cases}$$

Therefore ȳ = x̄₁ = w̄. Regardless of the value of the parameters, if the system is stable, given the integral term, we will always have ȳ = w̄. Adding an integral term thus allows for a control that is robust with respect to any kind of constant disturbance.
Solution to Exercise 4.14 (output feedback of an order 3 system)

1) The state equations of the system are:

$$\begin{cases}\begin{pmatrix}\dot{x}_1\\\dot{x}_2\\\dot{x}_3\end{pmatrix}=\begin{pmatrix}0&1&0\\0&0&1\\1&-4&7\end{pmatrix}\begin{pmatrix}x_1\\x_2\\x_3\end{pmatrix}+\begin{pmatrix}0\\0\\1\end{pmatrix}u\\ y=\begin{pmatrix}-2&7&3\end{pmatrix}\begin{pmatrix}x_1\\x_2\\x_3\end{pmatrix}\end{cases}$$

The evolution matrix is a companion matrix. We can directly deduce from the last row of the evolution matrix A that the characteristic polynomial is P(s) = s³ − 7s² + 4s − 1.
2) The looped system verifies:

$$\dot{x}=Ax+B\,(-Kx+hw)=(A-BK)\,x+Bhw$$

with K = (k₁, k₂, k₃). Thus, the evolution matrix of the looped system is:

$$A-BK=\begin{pmatrix}0&1&0\\0&0&1\\1&-4&7\end{pmatrix}-\begin{pmatrix}0\\0\\1\end{pmatrix}\begin{pmatrix}k_1&k_2&k_3\end{pmatrix}=\begin{pmatrix}0&1&0\\0&0&1\\-k_1+1&-k_2-4&-k_3+7\end{pmatrix}$$

Its characteristic polynomial is:

$$P(s)=s^3+(k_3-7)\,s^2+(k_2+4)\,s+(k_1-1)$$

We would like it to be equal to:

$$P_0(s)=(s+1)^3=s^3+3s^2+3s+1$$

By identification, we obtain:

$$K=\begin{pmatrix}k_1&k_2&k_3\end{pmatrix}=\begin{pmatrix}2&-1&10\end{pmatrix}$$
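The identification can be checked numerically (numpy sketch; np.poly returns the coefficients of the characteristic polynomial of a square matrix):

```python
import numpy as np

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, -4.0, 7.0]])
B = np.array([[0.0], [0.0], [1.0]])
K = np.array([[2.0, -1.0, 10.0]])

# Characteristic polynomial of A - BK should be (s+1)^3 = s^3 + 3s^2 + 3s + 1
coeffs = np.poly(A - B @ K)
print(coeffs)
```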
3) At equilibrium, we have:

$$\begin{cases}(A-BK)\,\bar{x}+Bh\bar{w}=0\\ \bar{y}=C\bar{x}\end{cases}$$

Since x̄ = −(A − BK)⁻¹Bhw̄, we obtain:

$$\bar{y}=-C\,(A-BK)^{-1}Bh\bar{w}$$

In order to have ȳ = w̄, we need to choose:

$$h=-\left(C\,(A-BK)^{-1}B\right)^{-1}$$
i.e.:

$$h=-\left(\begin{pmatrix}-2&7&3\end{pmatrix}\begin{pmatrix}0&1&0\\0&0&1\\-1&-3&-3\end{pmatrix}^{-1}\begin{pmatrix}0\\0\\1\end{pmatrix}\right)^{-1}=-\frac{1}{2}$$

Let us note that, for the calculation of h, we do not need to invert the 3 × 3 matrix, but only to calculate the intermediate quantity:

$$v=\begin{pmatrix}0&1&0\\0&0&1\\-1&-3&-3\end{pmatrix}^{-1}\begin{pmatrix}0\\0\\1\end{pmatrix}$$

which in this case can be done mentally. Indeed, it is sufficient to solve:

$$\begin{pmatrix}0&1&0\\0&0&1\\-1&-3&-3\end{pmatrix}v=\begin{pmatrix}0\\0\\1\end{pmatrix}\;\Rightarrow\;v=\begin{pmatrix}-1\\0\\0\end{pmatrix}$$

Solution to Exercise 4.15 (state feedback with integral effect, monovariate case)
1) The characteristic polynomial of the system is:

$$P(s)=\det\left(s\begin{pmatrix}1&0\\0&1\end{pmatrix}-\begin{pmatrix}1&1\\0&2\end{pmatrix}\right)=\det\begin{pmatrix}s-1&-1\\0&s-2\end{pmatrix}=(s-1)(s-2)$$
The system is unstable, since the poles are positive.
2) The state equations of the controller are:

$$\begin{cases}\dot{z}=w-x_1\\ u=-k_1x_1-k_2x_2+\alpha z\end{cases}$$
The only pole of the controller is 0 (but in practice, this has no importance).
3) The looped system has as state equations:

$$\begin{cases}\dot{x}_1=x_1+x_2\\ \dot{x}_2=2x_2-k_1x_1-k_2x_2+\alpha z\\ \dot{z}=w-x_1\\ y=x_1\end{cases}$$

or, in matrix form:

$$\begin{pmatrix}\dot{x}_1\\\dot{x}_2\\\dot{z}\end{pmatrix}=\begin{pmatrix}1&1&0\\-k_1&2-k_2&\alpha\\-1&0&0\end{pmatrix}\begin{pmatrix}x_1\\x_2\\z\end{pmatrix}+\begin{pmatrix}0\\0\\1\end{pmatrix}w$$
4) The characteristic polynomial is:

$$P(s)=s^3+(k_2-3)\,s^2+(k_1-k_2+2)\,s+\alpha$$

We would like it to be equal to s³ + 3s² + 3s + 1, whence α = 1, k₁ = 7, k₂ = 6.
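Both the pole placement and the equilibrium computed in the next question can be verified numerically (numpy sketch of ours, with the gains just found and w̄ = 1):

```python
import numpy as np

# Closed-loop evolution matrix with k1 = 7, k2 = 6, alpha = 1
M = np.array([[1.0, 1.0, 0.0],
              [-7.0, -4.0, 1.0],
              [-1.0, 0.0, 0.0]])

# Characteristic polynomial should be (s+1)^3
print(np.poly(M))          # ~ [1, 3, 3, 1]

# Equilibrium for w = 1:  M x + (0, 0, 1)^T w = 0
x_eq = np.linalg.solve(M, -np.array([0.0, 0.0, 1.0]))
print(x_eq)                # [ 1. -1.  3.]  -> x1 = w, zero static error
```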
5) At equilibrium, ẋ₁ = ẋ₂ = ż = 0. Therefore:

$$\begin{pmatrix}1&1&0\\-7&-4&1\\-1&0&0\end{pmatrix}\begin{pmatrix}\bar{x}_1\\\bar{x}_2\\\bar{z}\end{pmatrix}+\begin{pmatrix}0\\0\\1\end{pmatrix}\bar{w}=0$$

Thus:

$$\begin{pmatrix}\bar{x}_1\\\bar{x}_2\\\bar{z}\end{pmatrix}=\begin{pmatrix}1&1&0\\-7&-4&1\\-1&0&0\end{pmatrix}^{-1}\begin{pmatrix}0\\0\\-\bar{w}\end{pmatrix}=\begin{pmatrix}\bar{w}\\-\bar{w}\\3\bar{w}\end{pmatrix}$$

Thus, ȳ = x̄₁ = w̄. Let us note that if this were not the case, in other words if w̄ − x̄₁ were non-zero, then the integrator would not be at equilibrium. This is not possible since the system is stable. The role of the integrator is precisely to ensure zero static error, even when there are constant disturbances.
6) If we slightly perturb the matrices of the system, then, since the eigenvalues of the looped system evolve in a continuous manner as functions of these matrices, the roots will remain around −1 and therefore the system will remain stable. Thus, ż (as well as ẋ₁ and ẋ₂) will converge toward 0 (otherwise, z would diverge). Since ż = w − y, y(t) will converge toward w.
Solution to Exercise 4.16 (state feedback with integral effect, general case)

1) The state equations of the looped system are written as:

$$\begin{cases}\dot{x}=(A-BK)\,x+BK_iz+p\\ \dot{z}=w-Ex\end{cases}$$

The requested dimensions are w: m, xc: m, p: n, z: m, A: n × n, B: n × m, E: m × n, Ki: m × m and K: m × n.
2) In matrix form, the equations are written as:

$$\frac{d}{dt}\begin{pmatrix}x\\z\end{pmatrix}=\begin{pmatrix}A-BK&BK_i\\-E&0\end{pmatrix}\begin{pmatrix}x\\z\end{pmatrix}+\begin{pmatrix}p\\w\end{pmatrix}=\left(\begin{pmatrix}A&0\\-E&0\end{pmatrix}-\begin{pmatrix}B\\0\end{pmatrix}\begin{pmatrix}K&-K_i\end{pmatrix}\right)\begin{pmatrix}x\\z\end{pmatrix}+\begin{pmatrix}p\\w\end{pmatrix}$$

The choice of K and Ki will be done by pole placement. We will need to write, in MATLAB:

A1=[A,zeros(n,m);-E,zeros(m,m)];
B1=[B;zeros(m,m)];
place(A1,B1,-ones(1,n+m));
3) For a constant setpoint w and disturbance p, in a steady state, we have:

$$\begin{cases}0=(A-BK)\,\bar{x}+BK_i\bar{z}+\bar{p}\\ 0=\bar{w}-E\bar{x}\end{cases}$$

In other words, w̄ = Ex̄ = x̄c and p̄ = −(A − BK)x̄ − BKiz̄. Thus, the setpoint condition is respected (x̄c = w̄) and the disturbance can be recovered from the state of the integrator.
4) The state equations of the state feedback controller with integral effect are:

$$\begin{cases}\dot{z}=-Ex+w\\ u=K_iz-Kx\end{cases}$$
Solution to Exercise 4.17 (observer)
1) We have:

$$\begin{cases}\dot{x}_1=-(u+x_2)+(x_1+(u+x_2))=x_1\\ \dot{x}_2=(x_1+(u+x_2))+(u+x_2)=x_1+2x_2+2u\\ y=x_2\end{cases}$$

or, in matrix form:

$$\begin{cases}\begin{pmatrix}\dot{x}_1\\\dot{x}_2\end{pmatrix}=\begin{pmatrix}1&0\\1&2\end{pmatrix}\begin{pmatrix}x_1\\x_2\end{pmatrix}+\begin{pmatrix}0\\2\end{pmatrix}u\\ y=x_2\end{cases}$$
The wiring diagram is given in Figure 4.12.
Figure 4.12. Simplified wiring diagram
2) The eigenvalues of the system are λ₁ = 1 and λ₂ = 2, which have positive real parts. The system is therefore unstable.
3) We have:

$$\begin{cases}sx_1=x_1\\ sx_2=x_1+2x_2+2u\\ y=x_2\end{cases}\;\Rightarrow\;\begin{cases}x_1=0\\ x_2=\frac{2}{s-2}\,u\\ y=\frac{2}{s-2}\,u\end{cases}$$

The transfer function is therefore 2/(s − 2).
4) The controllability matrix is:

$$\begin{pmatrix}B\,|\,AB\end{pmatrix}=\begin{pmatrix}0&0\\2&4\end{pmatrix}$$

Its rank is 1 < 2, and therefore the system is not controllable. The observability matrix is:

$$\begin{pmatrix}C\\CA\end{pmatrix}=\begin{pmatrix}0&1\\1&2\end{pmatrix}$$

Its rank is equal to 2, and therefore the system is observable.
5) We solve det(sI − A + LC) = (s + 1)², i.e.:

$$\det\left(\begin{pmatrix}s&0\\0&s\end{pmatrix}-\begin{pmatrix}1&0\\1&2\end{pmatrix}+\begin{pmatrix}\ell_1\\\ell_2\end{pmatrix}\begin{pmatrix}0&1\end{pmatrix}\right)=s^2+s\,(-3+\ell_2)+\ell_1-\ell_2+2=s^2+2s+1$$

We have −3 + ℓ₂ = 2 and ℓ₁ − ℓ₂ + 2 = 1. Therefore, ℓ₂ = 5 and ℓ₁ = 4. The state equations of the observer are:

$$\frac{d}{dt}\hat{x}=(A-LC)\,\hat{x}+Bu+Ly$$

i.e.:

$$\frac{d}{dt}\hat{x}=\begin{pmatrix}1&-4\\1&-3\end{pmatrix}\hat{x}+\begin{pmatrix}0&4\\2&5\end{pmatrix}\begin{pmatrix}u\\y\end{pmatrix}$$
Solution to Exercise 4.18 (amplitude demodulator)
1) Since x₁ = ωy and x₂ = ẏ, we have:

$$\begin{cases}\dot{x}_1=\omega\dot{y}=\omega x_2\\ \dot{x}_2=\ddot{y}=-\omega^2a\cos(\omega t+b)=-\omega^2y=-\omega x_1\end{cases}$$

and therefore the state equations are:

$$\begin{cases}\dot{x}=\begin{pmatrix}0&\omega\\-\omega&0\end{pmatrix}x\\ y=\begin{pmatrix}\frac{1}{\omega}&0\end{pmatrix}x\end{cases}$$

2) We have:

$$\begin{cases}x_1=\omega y=a\omega\cos(\omega t+b)\\ x_2=\dot{y}=-a\omega\sin(\omega t+b)\end{cases}$$

Therefore, a = ‖x‖/ω.
3) We take a Luenberger-type observer given by:

$$\frac{d}{dt}\hat{x}=(A-LC)\,\hat{x}+Ly$$

We solve:

$$\det(s\mathbf{I}-A+LC)=(s+1)^2=s^2+2s+1$$

However:

$$\det(s\mathbf{I}-A+LC)=\det\left(\begin{pmatrix}s&-\omega\\\omega&s\end{pmatrix}+\begin{pmatrix}\ell_1\\\ell_2\end{pmatrix}\begin{pmatrix}\frac{1}{\omega}&0\end{pmatrix}\right)=\det\begin{pmatrix}s+\frac{\ell_1}{\omega}&-\omega\\\omega+\frac{\ell_2}{\omega}&s\end{pmatrix}$$

Therefore, the equation to solve becomes:

$$s^2+\frac{\ell_1}{\omega}\,s+\omega^2+\ell_2=s^2+2s+1$$
We obtain ℓ₁ = 2ω and ℓ₂ = 1 − ω². The observer is therefore:

$$\frac{d}{dt}\hat{x}=\left(\begin{pmatrix}0&\omega\\-\omega&0\end{pmatrix}-\begin{pmatrix}2\omega\\1-\omega^2\end{pmatrix}\begin{pmatrix}\frac{1}{\omega}&0\end{pmatrix}\right)\hat{x}+\begin{pmatrix}2\omega\\1-\omega^2\end{pmatrix}y$$

i.e.:

$$\frac{d}{dt}\hat{x}=\begin{pmatrix}-2&\omega\\-\frac{1}{\omega}&0\end{pmatrix}\hat{x}+\begin{pmatrix}2\omega\\1-\omega^2\end{pmatrix}y$$
4) This filter is described by the following state equations:

$$\begin{cases}\frac{d}{dt}\hat{x}=\begin{pmatrix}-2&\omega\\-\frac{1}{\omega}&0\end{pmatrix}\hat{x}+\begin{pmatrix}2\omega\\1-\omega^2\end{pmatrix}y\\ \hat{a}=\frac{1}{\omega}\,\|\hat{x}\|\end{cases}$$
This type of filter is used to detect the presence of a signal of known frequency (for example, the sound of an underwater transponder). If a varies slightly in time, the estimator can also be used to perform amplitude demodulation.
5) The program, which can be found in demodul.m, is given below:

tmax=10; dt=0.01; w=1; t=0:dt:tmax;
y=2*cos(w*t)+0.1*randn(size(t));
xhat=[0;0]; yhat=0*t; ahat=0*t;
for k=1:length(t),
  xhat=xhat+dt*([-2,w;-1/w,0]*xhat+[2*w;1-w^2]*y(k));
  ahat(k)=(1/w)*norm(xhat);
  yhat(k)=xhat(1)/w;
end
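As a cross-check, a noise-free Python port of this loop (our own sketch, Euler integration as in the script) recovers the amplitude a = 2 of the test signal:

```python
import numpy as np

# Noise-free port of demodul.m: estimate the amplitude (here 2) of
# y(t) = 2*cos(w*t) with the observer designed above
w, dt, tmax = 1.0, 0.01, 10.0
t = np.arange(0.0, tmax, dt)
y = 2.0 * np.cos(w * t)

A_obs = np.array([[-2.0, w], [-1.0 / w, 0.0]])   # A - LC
L = np.array([2.0 * w, 1.0 - w**2])              # observer gain
xhat = np.zeros(2)
for yk in y:
    xhat = xhat + dt * (A_obs @ xhat + L * yk)   # Euler step
ahat = np.linalg.norm(xhat) / w
print(ahat)   # close to 2
```

With the noisy signal of the MATLAB script, the estimate fluctuates slightly around 2 instead of settling.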
Figure 4.13 illustrates the amplitude demodulation performed by this program. Let us note that â depends on time, since as time goes by more information is available to the filter, allowing it to refine its estimation.
Figure 4.13. Estimation of the amplitude a of a noisy sinusoidal signal of known frequency
6) In order to draw the Bode plot in MATLAB, we build the system using the ss(A,B,C,D) instruction, then draw the plot using the bode instruction.
sys = ss([-2,w;-1/w,0],[2*w;1-w^2],[1/w,0],0);
bode(sys);
We obtain Figure 4.14.
Figure 4.14. Bode plot of the filter
We may recognize the frequency response of a bandpass filter centered around the pulsation ω = 1 rad/s.
Solution to Exercise 4.19 (output feedback of a non-strictly proper system)

1) We can proceed by identification. The state equation of S is of the form:

$$\begin{cases}\dot{x}=ax+bu\\ y=cx+du\end{cases}$$

Its transfer function is:

$$\frac{s+3}{s-2}=\frac{s-2+5}{s-2}=\frac{5}{s-2}+1=c\,(s-a)^{-1}b+d=\frac{cb}{s-a}+d$$

We may take, for instance, a = 2, b = 5, c = 1, d = 1. Let us recall that the state representation is not unique.
2) If we take z = y − du, we obtain the system described by the equations:

$$\begin{cases}\dot{x}=ax+bu\\ z=cx\end{cases}$$
3) We solve:

$$\begin{cases}s-a+bk=s+1\\ s-a+\ell c=s+1\end{cases}\;\Rightarrow\;\begin{cases}k=\frac{1+a}{b}=\frac{1+2}{5}=\frac{3}{5}\\ \ell=\frac{1+a}{c}=\frac{1+2}{1}=3\end{cases}$$

For the precompensator, we have:

$$h=-\left((a-bk)^{-1}\,b\right)^{-1}=-\left((2-3)^{-1}\cdot 5\right)^{-1}=\frac{1}{5}$$
The equations of the controller are therefore:

$$\begin{cases}\frac{d}{dt}\hat{x}=(a-bk-\ell c)\,\hat{x}+bhw+\ell z=-4\hat{x}+w+3z\\ u=-k\hat{x}+hw=-\frac{3}{5}\,\hat{x}+\frac{1}{5}\,w\end{cases}$$
4) We have, for the initial system:

$$z=y-u=y+\frac{3}{5}\,\hat{x}-\frac{1}{5}\,w$$

Therefore, the controller for S is:

$$\begin{cases}\frac{d}{dt}\hat{x}=-4\hat{x}+w+3\left(y+\frac{3}{5}\,\hat{x}-\frac{1}{5}\,w\right)=-\frac{11}{5}\,\hat{x}+\frac{2}{5}\,w+3y\\ u=-\frac{3}{5}\,\hat{x}+\frac{1}{5}\,w\end{cases}$$
5) The state equations of the looped system are:

$$\begin{cases}\dot{x}=2x+5u=2x-3\hat{x}+w\\ \frac{d}{dt}\hat{x}=-4\hat{x}+w+3z=-4\hat{x}+w+3x\end{cases}$$

i.e.:

$$\frac{d}{dt}\begin{pmatrix}x\\\hat{x}\end{pmatrix}=\begin{pmatrix}2&-3\\3&-4\end{pmatrix}\begin{pmatrix}x\\\hat{x}\end{pmatrix}+\begin{pmatrix}1\\1\end{pmatrix}w$$

We verify that the eigenvalues of the evolution matrix are both equal to −1.
6) Since y = x + u = x − (3/5)x̂ + (1/5)w, the transfer function of the looped system is:

$$G(s)=\mathbf{C}\,(s\mathbf{I}-\mathbf{A})^{-1}\mathbf{B}+d=\begin{pmatrix}1&-\frac{3}{5}\end{pmatrix}\begin{pmatrix}s-2&3\\-3&s+4\end{pmatrix}^{-1}\begin{pmatrix}1\\1\end{pmatrix}+\frac{1}{5}=\frac{s+3}{5\,(s+1)}$$
Let us note that a pole-zero simplification has occurred. This simplification comes from the fact that the state εx = x̂ − x is a non-controllable state variable, and that every non-controllable or non-observable state variable leads to a pole-zero simplification during the calculation of the transfer function.
7) The static gain of the looped system is equal to 3/5. It is obtained by evaluating G(s) at s = 0. It can also be obtained from the differential equation of the looped system:

$$5\dot{y}+5y=\dot{w}+3w$$

At equilibrium, we have 5ȳ = 3w̄. In order to have a static gain equal to 1, we need to add, before the system, a precompensator with gain h = 5/3.
Solution to Exercise 4.20 (output feedback with integral effect)

NOTE.– A logical approach would be to express the system in the form:

$$\frac{d}{dt}\begin{pmatrix}x\\p\end{pmatrix}=\begin{pmatrix}A&\mathbf{I}\\0&0\end{pmatrix}\begin{pmatrix}x\\p\end{pmatrix}+\begin{pmatrix}B\\0\end{pmatrix}u$$

by including the disturbance vector as a component of an extended state vector. However, this system is not controllable (we cannot control the disturbance) and the pole placement method will not work.
1) The state (x, z, x̂) of the looped system is composed of the state x of the system to control, of the vector z of the quantities memorized in the integrators, and of the state x̂ reconstructed by the observer. Since u = −Kiz − Kx̂, we have:

$$\begin{cases}\dot{x}=Ax+B\,(-K_iz-K\hat{x})+p\\ \dot{z}=w-ECx\\ \frac{d}{dt}\hat{x}=A\hat{x}+B\,(-K_iz-K\hat{x})-L\,(C\hat{x}-Cx)\end{cases}$$
or, in matrix form:

$$\frac{d}{dt}\begin{pmatrix}x\\z\\\hat{x}\end{pmatrix}=\begin{pmatrix}A&-BK_i&-BK\\-EC&0&0\\LC&-BK_i&A-BK-LC\end{pmatrix}\begin{pmatrix}x\\z\\\hat{x}\end{pmatrix}+\begin{pmatrix}p\\w\\0\end{pmatrix}$$

2) We have:
$$\dot{\varepsilon}=\frac{d}{dt}(\hat{x}-x)=LCx-BK_iz+(A-BK-LC)\,\hat{x}-Ax+BK_iz+BK\hat{x}-p=(A-LC)\,\varepsilon-p$$

where ε = x̂ − x denotes the observation error. Thus:

$$\frac{d}{dt}\begin{pmatrix}x\\z\\\varepsilon\end{pmatrix}=\begin{pmatrix}A-BK&-BK_i&-BK\\-EC&0&0\\0&0&A-LC\end{pmatrix}\begin{pmatrix}x\\z\\\varepsilon\end{pmatrix}+\begin{pmatrix}p\\w\\-p\end{pmatrix}$$

3) The evolution matrix of the looped system is block triangular. Thus, the poles of the looped system are composed of the eigenvalues of the matrix:

$$\begin{pmatrix}A-BK&-BK_i\\-EC&0\end{pmatrix}=\underbrace{\begin{pmatrix}A&0\\-EC&0\end{pmatrix}}_{\mathcal{A}}-\underbrace{\begin{pmatrix}B\\0\end{pmatrix}}_{\mathcal{B}}\underbrace{\begin{pmatrix}K&K_i\end{pmatrix}}_{\mathcal{K}}$$

and of those of the matrix A − LC. We can thus set the poles by doing:

$$\mathcal{K}=\mathrm{place}(\mathcal{A},\mathcal{B},\mathbf{p}_{\mathrm{con}}),\qquad L=\left(\mathrm{place}(A^{\mathrm T},C^{\mathrm T},\mathbf{p}_{\mathrm{obs}})\right)^{\mathrm T}$$
where pcon and pobs are the desired poles.
4) At equilibrium, we have ż = −ECx̄ + w̄ = 0. Therefore:

$$\bar{w}=EC\bar{x}=\bar{y}_c$$

Thus, at equilibrium, the output yc is necessarily equal to the setpoint w.
5) The state equations of the controller are:

$$\begin{cases}\dot{z}=w-Ey\\ \frac{d}{dt}\hat{x}=A\hat{x}+B\,(-K_iz-K\hat{x})-L\,(C\hat{x}-y)\\ u=-K_iz-K\hat{x}\end{cases}$$

or alternatively:

$$\begin{cases}\frac{d}{dt}\begin{pmatrix}z\\\hat{x}\end{pmatrix}=\begin{pmatrix}0&0\\-BK_i&A-BK-LC\end{pmatrix}\begin{pmatrix}z\\\hat{x}\end{pmatrix}+\begin{pmatrix}\mathbf{I}&-E\\0&L\end{pmatrix}\begin{pmatrix}w\\y\end{pmatrix}\\ u=\begin{pmatrix}-K_i&-K\end{pmatrix}\begin{pmatrix}z\\\hat{x}\end{pmatrix}\end{cases}$$
5
Linearized Control
In Chapter 4, we have shown how to design controllers for linear systems. However, in practice, systems are rarely linear. Nevertheless, if their state vector remains localized in a small zone of the state space, the system may be considered linear and the techniques developed in Chapter 4 can then be used. We will first show how to linearize a nonlinear system around a given point of the state space. We will then discuss how to stabilize these nonlinear systems.
5.1. Linearization
5.1.1. Linearization of a function
Let f : Rⁿ → Rᵖ be a differentiable function. In the neighborhood of a point x̄ ∈ Rⁿ, the first-order Taylor development of f around x̄ gives us:

$$f(x)\simeq f(\bar{x})+\frac{df}{dx}(\bar{x})\,(x-\bar{x})$$
with:

$$\frac{df}{dx}(\bar{x})=\begin{pmatrix}\frac{\partial f_1}{\partial x_1}(\bar{x})&\frac{\partial f_1}{\partial x_2}(\bar{x})&\dots&\frac{\partial f_1}{\partial x_n}(\bar{x})\\\frac{\partial f_2}{\partial x_1}(\bar{x})&\frac{\partial f_2}{\partial x_2}(\bar{x})&\dots&\frac{\partial f_2}{\partial x_n}(\bar{x})\\\vdots&\vdots&&\vdots\\\frac{\partial f_p}{\partial x_1}(\bar{x})&\frac{\partial f_p}{\partial x_2}(\bar{x})&\dots&\frac{\partial f_p}{\partial x_n}(\bar{x})\end{pmatrix}$$

This matrix is called the Jacobian matrix. Very often, in order to linearize a function, we use formal calculus to compute the Jacobian matrix, then we instantiate this matrix at x̄. When we differentiate by hand, we avoid such proceedings: we save a lot of calculation if we perform the two operations (differentiation and instantiation) simultaneously (see Exercise 5.2). Finally, there is a similar method that is very easy to implement: the finite difference method (refer to Exercise 5.4). In order to apply it to the calculation of df/dx at point x̄, we approximate the jth column of the Jacobian matrix as follows:

$$\frac{\partial f}{\partial x_j}(\bar{x})\simeq\frac{f(\bar{x}+h\mathbf{e}_j)-f(\bar{x})}{h}$$
where eⱼ is the jth vector of the canonical basis of Rⁿ and h a small real number. Thus, the Jacobian matrix is approximated by:

$$\frac{df}{dx}(\bar{x})\simeq\begin{pmatrix}\frac{f(\bar{x}+h\mathbf{e}_1)-f(\bar{x})}{h}\;\Big|\;\frac{f(\bar{x}+h\mathbf{e}_2)-f(\bar{x})}{h}\;\Big|\;\dots\;\Big|\;\frac{f(\bar{x}+h\mathbf{e}_n)-f(\bar{x})}{h}\end{pmatrix}$$

In order for the approximation to be correct, we need to take a very small h. However, if h is too small, numerical problems appear. This method therefore has to be considered weak.
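The column-by-column rule above can be sketched in a few lines of numpy; the helper name jac_fd is ours, and the test function is the one of Exercise 5.1, whose Jacobian at (1, 2) is [[4, 1], [2, 4]]:

```python
import numpy as np

def jac_fd(f, x, h=1e-5):
    # Finite-difference Jacobian: one column per canonical direction e_j
    n = len(x)
    fx = f(x)
    J = np.empty((len(fx), n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        J[:, j] = (f(x + e) - fx) / h
    return J

# Function of Exercise 5.1: f(x) = (x1^2 * x2, x1^2 + x2^2)
f = lambda x: np.array([x[0]**2 * x[1], x[0]**2 + x[1]**2])
print(jac_fd(f, np.array([1.0, 2.0])))   # ~ [[4, 1], [2, 4]]
```

With h = 1e-5 the forward-difference error is of order h; taking h much smaller makes floating-point cancellation dominate, which is the fragility mentioned above.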
5.1.2. Linearization of a dynamic system
Let us consider the system described by its state equations:

$$\mathcal{S}:\;\begin{cases}\dot{x}=f(x,u)\\ y=g(x,u)\end{cases}$$

where x is of dimension n, u is of dimension m and y is of dimension p. Around the point (x̄, ū), the behavior of S is therefore approximated by the following state equations:

$$\begin{cases}\dot{x}=f(\bar{x},\bar{u})+A\,(x-\bar{x})+B\,(u-\bar{u})\\ y=g(\bar{x},\bar{u})+C\,(x-\bar{x})+D\,(u-\bar{u})\end{cases}$$

with:

$$A=\frac{\partial f}{\partial x}(\bar{x},\bar{u}),\quad B=\frac{\partial f}{\partial u}(\bar{x},\bar{u}),\quad C=\frac{\partial g}{\partial x}(\bar{x},\bar{u}),\quad D=\frac{\partial g}{\partial u}(\bar{x},\bar{u})$$

This is an affine system called the tangent system to S at point (x̄, ū).
5.1.3. Linearization around an operating point
A point (x̄, ū) is an operating point (also called polarization point) if f(x̄, ū) = 0. If ū = 0, we are in the case of a point of equilibrium. Let us note first of all that if x = x̄ and u = ū, then ẋ = 0; in other words, the system no longer evolves if we maintain the control u = ū when the system is in the state x̄. In this case, the output y has the value ȳ = g(x̄, ū). Around the operating point (x̄, ū), the system S admits the tangent system:

$$\begin{cases}\dot{x}=A\,(x-\bar{x})+B\,(u-\bar{u})\\ y=\bar{y}+C\,(x-\bar{x})+D\,(u-\bar{u})\end{cases}$$

Let us take ũ = u − ū, x̃ = x − x̄ and ỹ = y − ȳ. These vectors are called the variations of u, x and y. For small variations ũ, x̃ and ỹ, we have:

$$\begin{cases}\frac{d}{dt}\tilde{x}=A\tilde{x}+B\tilde{u}\\ \tilde{y}=C\tilde{x}+D\tilde{u}\end{cases}$$

The system thus formed is called the linearized system of S around the operating point (x̄, ū).
5.2. Stabilization of a nonlinear system
Let us consider the system described by its state equations:

$$\mathcal{S}:\;\begin{cases}\dot{x}=f(x,u)\\ y=g(x)\end{cases}$$

in which (x̄, ū) constitutes an operating point. In order for our system to behave like a linear system around (x̄, ū), let us create the variables ũ = u − ū and ỹ = y − ȳ, as represented in Figure 5.1.
Figure 5.1. Polarization of the system: the initial system, locally affine (in the neighborhood of the operating point), is transformed into a locally linear system
The system with input ũ and output ỹ thus designed is called a polarized system. The state equations of the polarized system can be approximated by:

$$\begin{cases}\dot{x}=f(x,u)\simeq A\,(x-\bar{x})+B\,(u-\bar{u})=A\,(x-\bar{x})+B\tilde{u}\\ \tilde{y}=-\bar{y}+y=-\bar{y}+g(x)\simeq-\bar{y}+\bar{y}+C\,(x-\bar{x})=C\,(x-\bar{x})\end{cases}$$

where A, B and C are obtained by the calculation of the Jacobian matrices of the functions f and g at point (x̄, ū). By now taking x̃ = x − x̄ as state vector instead of x, we obtain:

$$\begin{cases}\frac{d}{dt}\tilde{x}=A\tilde{x}+B\tilde{u}\\ \tilde{y}=C\tilde{x}\end{cases}$$

which is the linearized system of the nonlinear system. Let x̃c = Ex̃ be the sub-vector of the setpoint variables. We can build a controller RL for this linear system using the REGULKLH algorithm (A, B, C, E, pcon, pobs) described on page 133. We know that when the setpoint w̃ input into RL is constant, we have Ex̃ = w̃. However, we would like the input w of the controller that we are building to satisfy w = Ex at equilibrium. We therefore have to build w̃ from w in such a way that, at equilibrium, w = Ex. We have:

$$\tilde{w}=E\tilde{x}=E\,(x-\bar{x})=w-\bar{w}$$

where w̄ = Ex̄. The controller thus obtained is represented in Figure 5.2, in the thick frame.
Figure 5.2. Stabilization of a nonlinear system around an operating point by a linear controller

This controller stabilizes and decouples our nonlinear system around its operating point. A summary of the method to calculate a controller for a nonlinear system is given below.
Algorithm REGULNL (in: f, g, E, pcon, pobs, x̄, ū; out: R)

1 Ensure that f(x̄, ū) = 0;
2 ȳ := g(x̄); w̄ := Ex̄;
3 A := ∂f/∂x(x̄, ū); B := ∂f/∂u(x̄, ū); C := ∂g/∂x(x̄, ū);
4 K := PLACE(A, B, pcon);
5 L := (PLACE(Aᵀ, Cᵀ, pobs))ᵀ;
6 H := −(E(A − BK)⁻¹B)⁻¹;
7 R := { (d/dt)x̂ = (A − BK − LC)x̂ + BH(w − w̄) + L(y − ȳ);  u = ū − Kx̂ + H(w − w̄) }
This algorithm returns our controller, with inputs y, w and output u, in the form of its state equations.
5.3. Exercises
EXERCISE 5.1.– Jacobian matrix
Let us consider the function:

$$f\begin{pmatrix}x_1\\x_2\end{pmatrix}=\begin{pmatrix}x_1^2x_2\\x_1^2+x_2^2\end{pmatrix}$$

1) Calculate the Jacobian matrix of f at x̄.

2) Linearize this function around x̄ = (1, 2)ᵀ.
EXERCISE 5.2.– Linearization by decomposition
Here, we need to linearize a function f(x) around a point x̄. The decomposition method is used to represent the function f as a composition of functions (sin, exp, etc.) and elementary operators (+, −, ·, /). We then calculate, for each intermediary variable u, the values ∂ᵢu = ∂u/∂xᵢ, starting from the variables xᵢ in order to proceed step by step until the function f. The rules that we will be able to apply are of the form:

$$\partial_i(u+v)=\partial_iu+\partial_iv,\quad \partial_i(u\cdot v)=u\,\partial_iv+v\,\partial_iu,\quad \partial_i\!\left(\frac{u}{v}\right)=\frac{v\,\partial_iu-u\,\partial_iv}{v^2},\quad \partial_i(u^2)=2u\,\partial_iu$$

Linearize the function:

$$f(x_1,x_2)=\frac{\left(x_1^2+x_1x_2\right)^2(x_1x_2)}{x_1^2+x_2}$$

around the point x̄ = (1, 1)ᵀ using this approach.
EXERCISE 5.3.– Linearization of a function by limiteddevelopment
One method for the linearization of a function consists ofreplacing each sub-expression by a limited development (oforder 1 or more) around the point of linearization, which wewill assume here to be equal to zero. The method allows us tosave large amounts of calculation, but it has to be usedcarefully since it may lead to errors due to singularities. Inorder to illustrate this principle, we will attempt to linearizethe function:
f(x) = [ sin x1 · (3 − 2x2^2 cos x1) + 7 exp x2 · cos x1 ] / (1 + sin^2 x1)

around x̄ = (0, 0). We have:

sin x1 = x1 + ε
cos x1 = 1 + ε
x1^2 = x2^2 = ε
exp x2 = 1 + x2 + ε

where ε = o(||x||); in other words, ε is a notation which means "small before ||x||". With this notation, we have rules of the form:

(u(x) + ε) (v(x) + ε) = u(x)·v(x) + ε
(u(x) + ε) / (v(x) + ε) = u(x)/v(x) + ε
where u(x) and v(x) are two real-valued functions, continuous at zero. By using the above stated principle, linearize the function f(x) around zero.
EXERCISE 5.4.– Linearization of a system using the finite difference method in MATLAB

We have the MATLAB code of the evolution function f(x, u) of a system in the form of a script f.m. Give a MATLAB function function [A,B]=Jf(x,u,h) which gives an estimation of the matrices:
A = ∂f/∂x (x, u) and B = ∂f/∂u (x, u)
using the finite difference method. Here, h is the step of the method and corresponds to a very small quantity (for example h = 0.0001).
EXERCISE 5.5.– Linearization of the predator-prey system
Let us once again consider the Lotka–Volterra predator-prey system, which is given by:

ẋ1 = (1 − x2) x1
ẋ2 = (−1 + x1) x2
1) Linearize this system around a non-zero point of equilibrium x̄.

2) Among the two vector fields of Figure 5.3, one represents the field associated with the Lotka–Volterra system and the other corresponds to its tangent system. Determine which is which.

3) Calculate the poles of the linearized system. Discuss.
Figure 5.3. One of the two vector fields is associated with the Lotka–Volterra system. The other corresponds to its tangent system
EXERCISE 5.6.– Linearization of a simple pendulum
The state equations of a simple pendulum are given by:

( ẋ1 ; ẋ2 ) = f(x, u) = ( x2 ; (−mgℓ sin x1 + u) / (mℓ^2) )
y = g(x, u) = ℓ sin x1

Linearize this pendulum around the point (x̄ = (0, 0), ū = 0).
EXERCISE 5.7.– Mass in a liquid
Let us consider a mass in a liquid flow. Its evolution obeys the following differential equation:
ÿ + ẏ · |ẏ| = u
where y is the position of the mass and u is the force exerted on this mass.

1) Give the state equations for this system and linearize them around its point of equilibrium. We will take as state vector x = (y, ẏ)^T.
2) Study the stability of the system.
3) In the case of a positive initial velocity ẏ(0), calculate the solution of the nonlinear differential equation when u = 0. What value does y converge to?
EXERCISE 5.8.– Controllability of the segway
Let us recall that the segway is a vehicle with two wheels and a single axle. We consider first of all that the segway moves in a straight line, and therefore a two-dimensional (2D) model will suffice (see Figure 5.4).
Figure 5.4. Segway that is the subject of the controllability study
This system has two degrees of freedom: the angle of the wheel α and the pitch θ. Its state vector is therefore the vector x = (α, θ, α̇, θ̇)^T. When it is not controlled, the state equations of the segway are (refer to Exercise 1.7):

ẋ1 = x3
ẋ2 = x4
ẋ3 = [ μ3 (μ2 x4^2 − μg cos x2) sin x2 + (μ2 + μ3 cos x2) u ] / (μ1 μ2 − μ3^2 cos^2 x2)
ẋ4 = [ (μ1 μg − μ3^2 x4^2 cos x2) sin x2 − (μ1 + μ3 cos x2) u ] / (μ1 μ2 − μ3^2 cos^2 x2)
y = x1
where:
μ1 = JM + a^2 (m + M),  μ2 = Jp + mℓ^2,
μ3 = amℓ,  μg = gℓm
The parameters of our system are the mass M of the disk, its radius a, its moment of inertia JM, the mass m of the body, its moment of inertia Jp and the distance ℓ between its center of gravity and the center of the disk. We have added the observation equation y = x1 in order to assume a situation where only the angle of the wheel α is measured.
1) Calculate the operating points of the system.
2) Linearize the system around an upper point of equilibrium, for α = 0.
3) By using the controllability matrix, determine whether the system is controllable.
EXERCISE 5.9.– Control of the segway in MATLAB
Let us consider once again the segway of the previous exercise, which has two degrees of freedom: the angle of the wheel α and the pitch θ. Recall that its state vector is therefore the vector x = (α, θ, α̇, θ̇)^T. We will take m = 10 kg, M = 1 kg, ℓ = 1 m, g = 10 m·s^−2, a = 0.3 m, Jp = 10 kg·m^2 and JM = (1/2) M a^2.
1) Simulate this system in a MATLAB environment by using Euler's method, in different situations.
2) We assume that we measure the angle α using an odometer. Calculate an output feedback controller that allows us to place the segway at a given position. We will place all the poles around −2 (be careful: the pole placement algorithm of MATLAB does not allow two identical poles). Test your controller with a non-zero initial condition.
3) Validate the robustness of your control by adding white Gaussian sensor noise to the measurement of x.
4) Given the model of the tank, propose a three-dimensional (3D) model for the segway. The chosen state vector will be:

x = (θ, α̇, θ̇, x, y, ψ, α1, α2)^T

where α1 is the angle of the right wheel, α2 is the angle of the left wheel, ψ is the heading of the segway and (x, y) are the coordinates of the center of the segway's axle. The inputs are the body/wheels torque u1 and the differential between the wheels u2. Here, α̇ represents the average angular velocity of the two wheels, i.e. α̇ = (1/2)(α̇1 + α̇2).
5) Propose a control that allows us to regulate the segway in velocity and heading.
EXERCISE 5.10.– Linearization of the tank
Let us consider the tank system described by the following state equations:

ẋ = v cos θ
ẏ = v sin θ
θ̇ = ω
ω̇ = u1
v̇ = u2
1) Linearize this system around a state x̄ = (x̄, ȳ, θ̄, ω̄, v̄)^T, which is not necessarily an operating point.

2) Study the rank of the controllability matrix at x̄ as a function of the value of v̄.

3) Knowing that the columns of the controllability matrix represent the directions in which we can influence the movement, calculate the non-controllable directions.
EXERCISE 5.11.– Linearization of the hovercraft
Let us consider the hovercraft described by the following state equations:

ẋ = vx
ẏ = vy
θ̇ = ω
v̇x = u1 cos θ
v̇y = u1 sin θ
ω̇ = u2
1) Linearize around a point x̄ = (x̄, ȳ, θ̄, v̄x, v̄y, ω̄)^T, which is not necessarily an operating point.

2) Calculate the controllability matrix around x̄.

3) Give the conditions for which this matrix is not of full rank.

4) Knowing that the columns of the controllability matrix represent the directions in which we can influence the movement, calculate the non-controllable directions. Discuss.
EXERCISE 5.12.– Linearization of the single-acting cylinder
Let us consider once more the pneumatic cylinder modeled in Exercise 1.17. Its state equations are given by:

S : { ẋ1 = x2
    { ẋ2 = (a x3 − k x1) / m
    { ẋ3 = −(x3 / x1) (x2 − u/a)
1) Let us assume that x̄1 > 0. Calculate all the possible operating points (x̄, ū) of S.

2) Now differentiate S around an operating point (x̄, ū). Discuss.
EXERCISE 5.13.– Pole placement control of a nonlinear system

Let us consider the nonlinear system given by the state equations:

S : { ẋ = 2x^2 + u
    { y = 3x

which we wish to stabilize around the state x̄ = 2. At equilibrium, we would like y to be equal to its set point w. Moreover, we would like all the poles of the looped system to be equal to −1.
1) Give the state equations of the controller obtained by pole placement that satisfies these constraints.
2) What are the state equations of the looped system?
EXERCISE 5.14.– State feedback of a wiring system
Let us consider the system represented by the wiring diagram of Figure 5.5.
Figure 5.5. Wiring diagram for a nonlinear system
1) Give the state equations of the system.
2) Calculate its points of equilibrium.
3) Linearize this system around a point of equilibrium x̄ corresponding to x̄1 = π. Is this a stable point of equilibrium?

4) Propose a state feedback controller of the form u = −K (x − x̄) that stabilizes the system around x̄. We will place the poles at −1.
EXERCISE 5.15.– Controlling an inverted rod pendulum in MATLAB

Let us consider the inverted rod pendulum represented in Figure 5.6, which is composed of a pendulum placed in unstable equilibrium on a carriage.
Figure 5.6. Inverted rod pendulum that we need to control
The value u is the force exerted on the carriage of mass M = 5 kg, x indicates the position of the carriage, θ is the angle between the pendulum and the vertical axis and R⃗ is the force exerted by the carriage on the pendulum. At the tip B of the pendulum, of length ℓ = 4 m, a point mass m = 1 kg is fixed. Finally, A is the point of articulation between the rod and the carriage. As seen in Exercise 1.5, the state equations of this system are:

ẋ1 = x3
ẋ2 = x4
ẋ3 = [ m sin x2 (g cos x2 − ℓ x4^2) + u ] / (M + m sin^2 x2)
ẋ4 = [ sin x2 ((M + m) g − m ℓ x4^2 cos x2) + cos x2 · u ] / ( ℓ (M + m sin^2 x2) )
y = x1

where:

x = (x1, x2, x3, x4)^T = (x, θ, ẋ, θ̇)^T
The observation equation indicates that only the position of the carriage x1 is measured.
1) Calculate all the operating points of the system.
2) Linearize the system around the operating point x̄ = (0, 0, 0, 0) and ū = 0.
3) With MATLAB, obtain an output feedback controller that sets all the poles of the looped system to −2. We propose a control in position, which means that the set point w corresponds to x1.
4) Create a state feedback control that allows us to raise the pendulum when it leaves its bottom position. For this, we will attempt to stabilize the mechanical energy of the pendulum before switching to a linear control. We will assume that u ∈ [−umax, umax].
EXERCISE 5.16.– Controlling a car along a wall in MATLAB
Here, we will address the case of a car that drives on an unknown road. The car is equipped with (i) a rangefinder that measures the lateral distance d between the rear axle of the car and the edge of the road, (ii) a velocity sensor that measures the velocity v of the front wheels and (iii) an angle sensor that measures the angle δ of the steering wheel (for reasons of simplification, we will assume that δ also corresponds to the angle between the front wheels and the axis of the car). For our simulations, we will assume that our car is driving around a closed polygon as shown in Figure 5.7.
Figure 5.7. Car turning around a polygon
1) Let m, a and b be three points of R^2 and u⃗ a vector. Show that the half-line E(m, u⃗) with vertex m and direction vector u⃗ intersects the segment [ab] if and only if:

det(a − m, u⃗) · det(b − m, u⃗) ≤ 0
det(a − m, b − a) · det(u⃗, b − a) ≥ 0

Moreover, show that in this case, the distance from the point m to the segment [ab] along the vector u⃗ is given by:

d = det(a − m, b − a) / det(u⃗, b − a)
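These determinant tests translate directly into code. The following Python sketch (the helper names `det2` and `range_to_segment` are hypothetical, and a guard for the parallel case is added) may be useful for prototyping the rangefinder before writing circuit_g:

```python
import numpy as np

def det2(p, q):
    """Determinant of the 2x2 matrix whose columns are p and q."""
    return p[0] * q[1] - p[1] * q[0]

def range_to_segment(m, u, a, b):
    """Distance from m to the segment [a, b] along the direction u,
    or inf when the half-line E(m, u) misses the segment."""
    m, u, a, b = map(np.asarray, (m, u, a, b))
    if det2(u, b - a) == 0:   # half-line parallel to the segment
        return np.inf
    hit = (det2(a - m, u) * det2(b - m, u) <= 0 and
           det2(a - m, b - a) * det2(u, b - a) >= 0)
    return det2(a - m, b - a) / det2(u, b - a) if hit else np.inf

# Rangefinder at the origin looking along +x at a vertical wall x = 2:
print(range_to_segment((0, 0), (1, 0), (2, -1), (2, 1)))  # 2.0
```

The simulated rangefinder then returns the minimum of this distance over all edges of the polygon.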
2) We would like to make the car evolve (without using a controller) around the polygon using Euler's method. As seen in Exercise 1.11, we could take as evolution equation:

ẋ = v cos δ cos θ
ẏ = v cos δ sin θ
θ̇ = (v sin δ) / L
v̇ = u1
δ̇ = u2

with L = 3 m. Here, (x, y) represents the coordinates of the center of the car and θ its heading. The polygon representing the circuit will be represented by a matrix P containing 2 rows and p + 1 columns. The jth column corresponds to the jth vertex of the polygon. Since the polygon is closed, the first and last columns of P are identical. Give a MATLAB function called circuit_g(x,P) that corresponds to the observation function of the system and whose arguments are the state of the system x and the polygon P. This function will have to return the distance measured by the rangefinder as well as v and δ. Also, give a MATLAB function called circuit_f(x,u) that corresponds to the evolution function.
3) We would like the car to move with a constant velocity along the road. Of course, it is out of the question to involve the shape of the road in the controller, since it is unknown. Moreover, the position and orientation variables of the car are not measured [in other words, we have no compass or Global Positioning System (GPS)]. These quantities, however, are often unnecessary in order to reach the given objective, which is following the road. Indeed, are we not ourselves capable of driving a car on a road without having a local map, without knowing where we are and where North is? What we are interested in when driving a car is the relative position of the car with respect to the road. The world as seen by the controller is represented in Figure 5.8. In this world, the car moves laterally, like in a computer game, and the edge of the road remains fixed.

This world has to be such that the model used by the controller has state variables describing this relative position of the car with respect to the road, and such that the constancy of the state variables of this model corresponds to a real situation in which the car follows the road with a constant velocity. We need to imagine an ideal world for the controller in which the associated model admits an operating point that corresponds to the desired behavior of our real system. For this, we will assume that our car drives at a distance x from the edge. This model has to have the same inputs u and outputs y as the real system; more precisely, two inputs (the acceleration v̇ of the front wheels and the angular velocity δ̇ of the steering wheel) and three outputs (the distance d from the center of the rear axle to the edge of the road, the velocity v of the front wheels and the angle δ of the steering wheel). Given the fact that only the relative position of the car is of interest to us, our model must only contain four state variables: x, θ, v, δ, as represented in the figure. We must be careful though, because the meaning of the variable x has changed since the previous model. Find the state equations (in other words, the evolution equation and the observation equation) of the system imagined by the controller.
4) Let us choose as operating point:

x̄ = (5, π/2, 7, 0) and ū = (0, 0)

which corresponds to a velocity of 7 m·s^−1 and a distance of 5 m between the center of the rear axle and the edge of the road. Linearize the system around this operating point.
5) Let us take as set point variables x and v, which means that we want our set points w1 and w2 to correspond to the distance from the edge of the road and to the velocity of the car. Design, in MATLAB, a controller by pole placement. All the poles will be placed at −2.
6) Test the behavior of your controller in the case where the car is turning around the polygon. Note that the system actually being controlled (in other words, the turning car) is different from the one the controller thinks it is controlling (i.e. a car driving in a straight line).
Figure 5.8. The controller imagines a simpler world than reality, with a straight vertical wall
EXERCISE 5.17.– Nonlinear control
FIRST PART (proportional and derivative control).– Let us consider a second-order system described by the differential equation ÿ = u, where u is the input and y the output.

1) Give a state representation for this system. The chosen state vector is given by x = (y, ẏ).
2) We would like to control this system using a proportional and derivative control of the type:

u = k1 (w − y) + k2 (ẇ − ẏ) + ẅ

where w is a known function (for instance a polynomial), called set point, which can here depend on the time t. Let us note that here, we have assumed that y and ẏ were available to the controller. Give the differential equation satisfied by the error e = w − y between the set point w and the output y.

3) Calculate k1 and k2 so that the error e converges toward 0 in an exponential manner, with all the poles equal to −1.
SECOND PART (control of a tank-like vehicle around a path).– Let us consider the robotic vehicle on the left-hand side of Figure 5.9, described by the following state equations (tank model):

ẋ = v cos θ
ẏ = v sin θ
θ̇ = u1
v̇ = u2
where v is the velocity of the robot, θ its orientation and (x, y) the coordinates of its center. We assume that we are capable of measuring all the state variables of our robot with great precision.
Figure 5.9. Our robot (with eyes) follows a vehicle (here a car) that has an attachment point (small white circle) to which the robot has to attach
Linearized Control 207
1) Let us denote by x = (x, y, θ, v) the state vector of our robot and u = (u1, u2) the vector of the inputs. Calculate ẍ and ÿ as a function of x and u. Show that:

( ẍ ; ÿ ) = A(x) · u

where A(x) is a 2 × 2 matrix whose expression we will give.
2) Another mobile vehicle (on the right-hand side of Figure 5.9) tells us, in a wireless manner, the precise coordinates (xa, ya) of a virtual attachment point (in other words, one that only exists mentally), fixed with respect to this vehicle, at which we need to position ourselves. This means that we want the center of the robot (x, y) to be such that x(t) = xa(t) and y(t) = ya(t). This vehicle also sends us the first two derivatives (ẋa, ẏa, ẍa, ÿa) of the coordinates of the attachment point. Propose the expression of a control u(x, xa, ya, ẋa, ẏa, ẍa, ÿa) which ensures that the distance between the position (x, y) of our robot and that of the attachment point (xa, ya) decreases in an exponential manner. We will set the poles at −1.
ADVICE.– Perform a first loop as follows: u = A^(−1)(x) · q, where q = (q1, q2) is our new input. This first loop will allow us to simplify the system by making it linear. Then, proceed with a second, proportional and derivative loop.
5.4. Solutions
Solution to Exercise 5.1 (Jacobian matrix)
The Jacobian matrix of f at x is:

df/dx (x) = [ ∂f1/∂x1 (x)  ∂f1/∂x2 (x) ; ∂f2/∂x1 (x)  ∂f2/∂x2 (x) ] = [ 2x1x2  x1^2 ; 2x1  2x2 ]
Around the point x̄ = (1, 2), we have:

f(x1, x2) ≈ [ 2 ; 5 ] + [ 4 1 ; 2 4 ] [ x1 − 1 ; x2 − 2 ] = [ −4 + 4x1 + x2 ; −5 + 2x1 + 4x2 ]

Solution to Exercise 5.2 (linearization by decomposition)
We obtain the following table:

x1 ≐ 1           ∂1x1 = 1                         ∂2x1 = 0
x2 ≐ 1           ∂1x2 = 0                         ∂2x2 = 1
z1 = x1^2 ≐ 1    ∂1z1 = 2x1·∂1x1 ≐ 2              ∂2z1 = 0
z2 = x1x2 ≐ 1    ∂1z2 = x2 ≐ 1                    ∂2z2 = x1 ≐ 1
z3 = z1 + z2 ≐ 2  ∂1z3 = ∂1z1 + ∂1z2 ≐ 3          ∂2z3 = ∂2z1 + ∂2z2 ≐ 1
z4 = z3^2 ≐ 4    ∂1z4 = 2z3·∂1z3 ≐ 12             ∂2z4 = 2z3·∂2z3 ≐ 4
z5 = z4·z2 ≐ 4   ∂1z5 = z4·∂1z2 + z2·∂1z4 ≐ 16    ∂2z5 = z4·∂2z2 + z2·∂2z4 ≐ 8
z6 = z1 + x2 ≐ 2  ∂1z6 = ∂1z1 + ∂1x2 ≐ 2          ∂2z6 = ∂2z1 + ∂2x2 ≐ 1
f = z5/z6 ≐ 2    ∂1f = (z6·∂1z5 − z5·∂1z6)/z6^2 ≐ 6   ∂2f = (z6·∂2z5 − z5·∂2z6)/z6^2 ≐ 3

Table 5.1. All intermediate variables with their partial derivatives
In this table, the symbol ≐ means "is equal to after instantiation". The left-hand column gives the decomposition of the function f into intermediary variables, denoted here by zi. The second and third columns give the partial derivatives ∂/∂x1 and ∂/∂x2. Thus, around x̄ = (1, 1), we have the following first-order estimation:

f(x1, x2) ≈ 2 + 6 (x1 − 1) + 3 (x2 − 1) = 6x1 + 3x2 − 7
This technique is easily programmed thanks to the operator overloading allowed by many object-oriented programming languages (such as C++). It is at the foundation of automatic differentiation methods, which allow us to calculate the differential of a function expressed by computer code.
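As an illustration of this remark, a minimal forward-mode differentiator can be written by operator overloading in Python as well (the class name `Dual` is ours); applied to the function of Exercise 5.2, it reproduces the values f ≐ 2, ∂1f ≐ 6, ∂2f ≐ 3 of Table 5.1:

```python
# Minimal forward-mode automatic differentiation by operator overloading:
# each value carries (val, d1, d2) = (u, du/dx1, du/dx2).
class Dual:
    def __init__(self, val, d1=0.0, d2=0.0):
        self.val, self.d1, self.d2 = val, d1, d2

    def __add__(self, o):       # d(u + v) = du + dv
        return Dual(self.val + o.val, self.d1 + o.d1, self.d2 + o.d2)

    def __mul__(self, o):       # d(u*v) = u*dv + v*du
        return Dual(self.val * o.val,
                    self.val * o.d1 + o.val * self.d1,
                    self.val * o.d2 + o.val * self.d2)

    def __truediv__(self, o):   # d(u/v) = (v*du - u*dv)/v^2
        return Dual(self.val / o.val,
                    (o.val * self.d1 - self.val * o.d1) / o.val**2,
                    (o.val * self.d2 - self.val * o.d2) / o.val**2)

x1 = Dual(1.0, 1.0, 0.0)   # seed: dx1/dx1 = 1
x2 = Dual(1.0, 0.0, 1.0)   # seed: dx2/dx2 = 1
z3 = x1 * x1 + x1 * x2
f = z3 * z3 * (x1 * x2) / (x1 * x1 + x2)
print(f.val, f.d1, f.d2)  # 2.0 6.0 3.0
```

Each arithmetic operation applies exactly one rule of the table, so the program is a mechanical transcription of the hand computation.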
Solution to Exercise 5.3 (linearization of a function by limited development)

We have:

f(x) = [ (x1 + ε)(3 − 2ε(1 + ε)) + 7 (1 + x2 + ε)(1 + ε) ] / ( 1 + (x1 + ε)^2 )
     = [ (3x1 + ε) + 7 (1 + x2 + ε) ] / (1 + ε)
     = (3x1 + 7x2 + 7 + ε) / (1 + ε)
     = 3x1 + 7x2 + 7 + ε

Thus, around x̄ = (0, 0), we have the following first-order approximation:

f(x) ≈ 3x1 + 7x2 + 7
Solution to Exercise 5.4 (linearization of a system using the finite difference method in MATLAB)

In order to obtain A = ∂f/∂x (x̄, ū) and B = ∂f/∂u (x̄, ū) from f, x̄, ū, without performing any symbolic calculation, we can use a finite difference method in MATLAB. The corresponding MATLAB code is given below.

function [A,B]=Jf(x,u,h)
A=[];
for j=1:size(x,1),
    dx=0*x; dx(j)=h;
    A=[A,1/h*(f(x+dx,u)-f(x,u))];
end;
B=[];
for j=1:size(u,1),
    du=0*u; du(j)=h;
    B=[B,1/h*(f(x,u+du)-f(x,u))];
end;
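An equivalent sketch in Python (with the assumed signature Jf(f,x,u,h), the evolution function being passed as an argument) can be checked on the Lotka–Volterra field, whose exact Jacobian at the equilibrium (1, 1) is computed in the next solution:

```python
import numpy as np

def Jf(f, x, u, h=1e-4):
    """Finite-difference estimates of A = df/dx and B = df/du at (x, u)."""
    fx = f(x, u)
    A = np.column_stack([(f(x + h * e, u) - fx) / h for e in np.eye(len(x))])
    B = np.column_stack([(f(x, u + h * e) - fx) / h for e in np.eye(len(u))])
    return A, B

# Check on the Lotka-Volterra field at its equilibrium (1, 1),
# whose exact Jacobian is [[0, -1], [1, 0]]:
f = lambda x, u: np.array([(1 - x[1]) * x[0], (-1 + x[0]) * x[1]])
A, B = Jf(f, np.array([1.0, 1.0]), np.array([0.0]))
print(np.round(A, 6))
```

The input u plays no role here, so B comes out as a zero column.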
Solution to Exercise 5.5 (linearization of the predator-prey system)

1) The point of equilibrium is given by:

( x̄1 ; x̄2 ) = ( 1 ; 1 )
Around this point, f(x) can be approximated by its tangent system:

f(x) ≈ f(x̄) + df/dx (x̄) (x − x̄) = [ 1 − x̄2  −x̄1 ; x̄2  −1 + x̄1 ] [ x1 − x̄1 ; x2 − x̄2 ]
     = [ 0  −1 ; 1  0 ] [ x1 − 1 ; x2 − 1 ] = [ −x2 + 1 ; x1 − 1 ]

The linearized system is obtained by taking x̃1 = x1 − x̄1 and x̃2 = x2 − x̄2; it is given by:

(d/dt) x̃ = [ 0  −1 ; 1  0 ] x̃
2) The field on the left has two points of equilibrium. It corresponds therefore to the Lotka–Volterra system. The one on the right is thus the tangent system at the point (1, 1)^T.
3) The eigenvalues are obtained by calculating the roots of the characteristic polynomial. Since:

det[ s  1 ; −1  s ] = s^2 + 1

the eigenvalues are ±i. These values correspond to an oscillating system.
Solution to Exercise 5.6 (linearization of a simple pendulum)

At the point (x̄ = (0, 0), ū = 0), we have sin x1 = x1 + ε. The linearized system is therefore written as:

ẋ = [ 0  1 ; −g/ℓ  0 ] x + [ 0 ; 1/(mℓ^2) ] u
y = [ ℓ  0 ] x
Solution to Exercise 5.7 (mass in a liquid)

1) Let us take x = (y, ẏ)^T. We have the following state equations:

ẋ1 = x2
ẋ2 = −x2 · |x2| + u
2) The linearized system is written as:

ẋ = [ 0  1 ; 0  0 ] x + [ 0 ; 1 ] u

The two eigenvalues are zero, which implies the instability of the linearized system. Let us note, however, that the liquid friction does not appear in the expression of the linearized system. This means that for an input u = 0, the velocity x2 of the linearized system remains unchanged, whereas for the nonlinear system, the velocity x2 converges toward 0.
3) We will limit ourselves here to a positive initial velocity ẏ(0) = x2(0). The solution of the differential equation is easily calculated. It is given by:

y(t) = y(0) + ln(1 + ẏ(0) · t)

or, equivalently:

x1(t) = x1(0) + ln(1 + x2(0) · t)
x2(t) = x2(0) / (1 + x2(0) · t)

Indeed:

ÿ = ẋ2 = −x2^2(0) / (1 + x2(0) · t)^2 = −x2^2 = −ẏ^2

and the initial conditions are satisfied. The velocity ẏ thus converges toward 0, but the position y = x1 diverges (logarithmically) toward infinity when u = 0, for the nonlinear system just as for the linearized one.
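This closed-form solution can be cross-checked by integrating ÿ = −ẏ|ẏ| with Euler's method; the Python sketch below uses illustrative values y(0) = 0, ẏ(0) = 2:

```python
import math

# Euler integration of y'' = -y'·|y'| (u = 0) from y(0) = 0, y'(0) = 2,
# compared with the closed-form solution y(t) = y(0) + ln(1 + y'(0)·t).
y, v = 0.0, 2.0
dt, T = 1e-4, 5.0
for _ in range(int(round(T / dt))):
    y, v = y + v * dt, v - v * abs(v) * dt

print(y, math.log(1 + 2 * T))  # both close to ln(11) ~ 2.398
print(v, 2 / (1 + 2 * T))      # velocity decays like y'(0)/(1 + y'(0)·t)
```

The position grows like a logarithm (unbounded but ever more slowly), while the velocity decays hyperbolically toward 0.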
Solution to Exercise 5.8 (controllability of the segway)

1) The operating points satisfy f(x̄, ū) = 0, in other words:

x̄3 = 0
x̄4 = 0
μ3 (μ2 x̄4^2 − μg cos x̄2) sin x̄2 + (μ2 + μ3 cos x̄2) ū = 0
(μ1 μg − μ3^2 x̄4^2 cos x̄2) sin x̄2 − (μ1 + μ3 cos x̄2) ū = 0

Since x̄4 = 0, the last two rows can be written in the form:

[ −μg μ3 cos x̄2   (μ2 + μ3 cos x̄2) ; μ1 μg   −(μ1 + μ3 cos x̄2) ] [ sin x̄2 ; ū ] = [ 0 ; 0 ]

The determinant of this matrix is zero if:

μ1 μg μ3 cos x̄2 + μg μ3^2 cos^2 x̄2 − μ1 μg μ2 − μ1 μg μ3 cos x̄2 = 0

i.e.:

cos^2 x̄2 = μ1 μ2 / μ3^2

Now:

μ1 μ2 / μ3^2 = [ (JM + a^2 (m + M)) (Jp + mℓ^2) ] / (amℓ)^2 > [ a^2 (m + M) · mℓ^2 ] / (amℓ)^2 = (m + M)/m > 1

We can therefore conclude that the determinant of the matrix is never zero, and consequently that:

[ sin x̄2 ; ū ] = [ 0 ; 0 ]

The operating points are therefore of the form:

x̄ = (x1, kπ, 0, 0)^T and ū = 0, with k ∈ Z

2) Let us linearize this system around the operating point ū = 0, x̄ = 0. We obtain the linearized system described by the following state equations:

ẋ = [ 0 0 1 0 ; 0 0 0 1 ; 0  −μ3μg/(μ1μ2−μ3^2)  0 0 ; 0  μ1μg/(μ1μ2−μ3^2)  0 0 ] x + [ 0 ; 0 ; (μ2+μ3)/(μ1μ2−μ3^2) ; −(μ1+μ3)/(μ1μ2−μ3^2) ] u
3) The controllability matrix is:

Γcon = [ 0  b3  0  b4a32 ; 0  b4  0  b4a42 ; b3  0  b4a32  0 ; b4  0  b4a42  0 ]

where the aij and bi correspond to the coefficients of the Jacobian matrices A and B. The system is uncontrollable if Γcon is not invertible, in other words, if:

b4 (b3 a42 − b4 a32) = 0

Since b4 cannot be zero (because μ1 + μ3 > 0), we have:

b3 a42 = b4 a32 ⇔ (μ2+μ3)/(μ1μ2−μ3^2) · μ1μg/(μ1μ2−μ3^2) = (μ1+μ3)/(μ1μ2−μ3^2) · μ3μg/(μ1μ2−μ3^2)
⇔ (μ2 + μ3) μ1 = (μ1 + μ3) μ3
⇔ μ1 μ2 = μ3^2
⇔ (JM + a^2 (m + M)) (Jp + mℓ^2) = (amℓ)^2

which is only possible if JM = 0, Jp = 0 and M = 0 (which is not realistic). The matrix Γcon is therefore necessarily invertible, which implies that the system is controllable.
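With the numerical values used in Exercise 5.9 (m = 10, M = 1, ℓ = 1, g = 10, a = 0.3, Jp = 10), this conclusion is easy to confirm numerically; a Python sketch:

```python
import numpy as np

# Numerical check of the segway's controllability with the parameter
# values of Exercise 5.9.
m, M, l, g, a, Jp = 10.0, 1.0, 1.0, 10.0, 0.3, 10.0
JM = 0.5 * M * a**2
mu1, mu2, mu3, mug = JM + a**2 * (m + M), Jp + m * l**2, a * m * l, g * l * m
den = mu1 * mu2 - mu3**2

A = np.array([[0, 0, 1, 0],
              [0, 0, 0, 1],
              [0, -mu3 * mug / den, 0, 0],
              [0,  mu1 * mug / den, 0, 0]])
B = np.array([[0], [0], [(mu2 + mu3) / den], [-(mu1 + mu3) / den]])

# Controllability matrix [B, AB, A^2 B, A^3 B]
G = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(4)])
print(np.linalg.matrix_rank(G))  # 4: the linearized segway is controllable
```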
Solution to Exercise 5.9 (control of the segway in MATLAB)

1) The equations of the segway are:

ẋ1 = x3
ẋ2 = x4
ẋ3 = [ μ3 (μ2 x4^2 − μg cos x2) sin x2 + (μ2 + μ3 cos x2) u ] / (μ1 μ2 − μ3^2 cos^2 x2)
ẋ4 = [ (μ1 μg − μ3^2 x4^2 cos x2) sin x2 − (μ1 + μ3 cos x2) u ] / (μ1 μ2 − μ3^2 cos^2 x2)

The MATLAB program is given below (without the graphical part).

function v=segway2D_f(x,u)
m=10;M=1;l=1;g=10;a=0.3;Jp=10;JM=0.5*M*a^2;
mu1=a^2*(m+M)+JM; mu2=Jp+m*l^2; mu3=m*a*l; mug=g*m*l;
c2=cos(x(2)); s2=sin(x(2));
den=mu1*mu2-(mu3*c2)^2;
w2=x(4)^2;
v=[x(3); x(4);
   (1/den)*(mu3*(mu2*w2-mug*c2)*s2+(mu2+mu3*c2)*u);
   (1/den)*((mu1*mug-mu3^2*w2*c2)*s2-(mu1+mu3*c2)*u)];
end

The main program segway2D_main.m is given below.

x=[0;0;0;0.01]; % initial state of the segway
dt=0.01;
m=10;M=1;l=1;g=10;u=0;a=0.3;Jp=10;JM=0.5*M*a^2;
mu1=a^2*(m+M)+JM; mu2=Jp+m*l^2; mu3=m*a*l; mug=g*m*l;
for t=0:dt:5,
    u=0;
    x=x+segway2D_f(x,u)*dt;
    segway2D_draw(x);
end;
2) The linearization of the system around the operating point ū = 0, x̄ = 0 is given by (see Exercise 5.8):

ẋ = [ 0 0 1 0 ; 0 0 0 1 ; 0  −μ3μg/(μ1μ2−μ3^2)  0 0 ; 0  μ1μg/(μ1μ2−μ3^2)  0 0 ] x + [ 0 ; 0 ; (μ2+μ3)/(μ1μ2−μ3^2) ; −(μ1+μ3)/(μ1μ2−μ3^2) ] u

In order to obtain the state equations of the controller, we use the function RegulKLH.m described on page 133. The main program becomes:
x=[0;0;0;0.01]; % initial state of the segway
xr=[0;0;0;0];   % initial state of the observer
dt=0.01;
m=10;M=1;l=1;g=10;u=0;a=0.3;Jp=10;JM=0.5*M*a^2;
mu1=a^2*(m+M)+JM; mu2=Jp+m*l^2; mu3=m*a*l; mug=g*m*l;
A=[0 0 1 0;0 0 0 1;0 -mu3*mug/(mu1*mu2-mu3^2) 0 0;
   0 mu1*mug/(mu1*mu2-mu3^2) 0 0];
B=[0;0;(mu2+mu3)/(mu1*mu2-mu3^2);(-mu1-mu3)/(mu1*mu2-mu3^2)];
C=[1 0 0 0];
E=[-a 0 0 0]; % since the setpoint variable is xc=-a*x(1)
[Ar,Br,Cr,Dr]=RegulKLH(A,B,C,E,[-2,-2.1,-2.2,-2.3],[-2,-2.1,-2.2,-2.3]);
w=1; % desired position
for t=0:dt:5,
    y=C*x;
    u=Cr*xr+Dr*[w;y];
    x=x+segway2D_f(x,u)*dt;
    xr=xr+(Ar*xr+Br*[w;y])*dt;
    segway2D_draw(x);
end;
3) We evaluate the robustness of the control by adding sensor noise using the randn() command, which generates Gaussian noise. For instance, we can add a white Gaussian noise with variance 1 by replacing the line y=C*x by y=C*x+randn(size(C*x)).
4) Following the tank model, we have:

ẋ = α̇ cos ψ
ẏ = α̇ sin ψ

Thus, the state equations are:

(d/dt) ( θ ; α̇ ; θ̇ ; x ; y ; ψ ; α1 ; α2 ) =
( θ̇ ;
  [ μ3 (μ2 θ̇^2 − μg cos θ) sin θ + (μ2 + μ3 cos θ) u1 ] / (μ1 μ2 − μ3^2 cos^2 θ) ;
  [ (μ1 μg − μ3^2 θ̇^2 cos θ) sin θ − (μ1 + μ3 cos θ) u1 ] / (μ1 μ2 − μ3^2 cos^2 θ) ;
  α̇ cos ψ ;
  α̇ sin ψ ;
  u2 ;
  α̇ − u2 ;
  α̇ + u2 )

with state vector:

x = (θ, α̇, θ̇, x, y, ψ, α1, α2)^T
5) The MATLAB program that performs this control is composed of the evolution function of the vehicle, given below, and of the main program:

function dx=segway3d_f(x,u)
m=10;M=1;l=1;g=10;a=0.3;Jp=10;JM=0.5*M*a^2;
mu1=a^2*(m+M)+JM; mu2=Jp+m*l^2; mu3=m*a*l; mug=g*m*l;
theta=x(1); dalpha=x(2); dtheta=x(3); xc=x(4); yc=x(5);
psi=x(6); alpha1=x(7); alpha2=x(8);
c1=cos(x(1)); s1=sin(x(1));
den=mu1*mu2-(mu3*c1)^2;
w2=dtheta^2;
ddalpha=(1/den)*(mu3*(mu2*w2-mug*c1)*s1+(mu2+mu3*c1)*u(1));
ddtheta=(1/den)*((mu1*mug-mu3^2*w2*c1)*s1-(mu1+mu3*c1)*u(1));
dx=[dtheta; ddalpha; ddtheta;
    cos(psi)*dalpha; sin(psi)*dalpha; % from the tank model
    u(2); dalpha-u(2); dalpha+u(2)];
The main program (see file segway3D_main.m) is given below:

x=[0;0;0;0;0;0;0;0]; % x=[theta,dalpha,dtheta,x,y,psi,alpha1,alpha2]
dt=0.03; u=[0;0];
m=10;M=1;l=1;g=10;a=0.3;Jp=10;JM=0.5*M*a^2;
mu1=a^2*(m+M)+JM; mu2=Jp+m*l^2; mu3=m*a*l; mug=g*m*l;
% Construction of the controller, which pictures a 2D world
xr2=[0;0;0;0]; % initial state of the observer
A2=[0 0 1 0;0 0 0 1;0 -mu3*mug/(mu1*mu2-mu3^2) 0 0;
    0 mu1*mug/(mu1*mu2-mu3^2) 0 0];
B2=[0;0;(mu2+mu3)/(mu1*mu2-mu3^2);(-mu1-mu3)/(mu1*mu2-mu3^2)];
C2=[1 0 0 0]; E2=[1 0 0 0];
[Ar2,Br2,Cr2,Dr2]=RegulKLH(A2,B2,C2,E2,[-2,-2.1,-2.2,-2.3],[-2,-2.1,-2.2,-2.3]);
% The controller is now built
C=[0 0 0 0 0 0 0.5 0.5]; % observation matrix of the system:
% we measure the average between the angles of the two wheels
w=0; % position setpoint
for t=0:dt:15,
    w=w+0.5*dt; % we let the setpoint (the average angle of the two
                % wheels) evolve in order for the segway to advance
    y=C*x+0.05*randn(); % measurement generated by the sensor
    psi=x(6);
    psibar=1; u2=-tan(atan(0.5*(psi-psibar))); % heading control
    u=[Cr2*xr2+Dr2*[w;y];u2]; % u1 is the torque computed by the 2D controller
    theta=x(1); xc=x(4); yc=x(5); psi=x(6); alpha1=x(7); alpha2=x(8);
    segway3d_draw(xc,yc,theta,psi,alpha1,alpha2); % draw the real segway in 3D
    x=x+segway3d_f(x,u)*dt;
    xr2=xr2+(Ar2*xr2+Br2*[w;y])*dt;
end
In this program, xr2 is the state of the pitch controller of the segway. This controller pictures a 2D world.
Solution to Exercise 5.10 (linearization of the tank)

1) We have:

( ẋ ; ẏ ; θ̇ ; ω̇ ; v̇ ) = [ 0 0 −v̄ sin θ̄ 0 cos θ̄ ; 0 0 v̄ cos θ̄ 0 sin θ̄ ; 0 0 0 1 0 ; 0 0 0 0 0 ; 0 0 0 0 0 ] ( x − x̄ ; y − ȳ ; θ − θ̄ ; ω − ω̄ ; v − v̄ ) + [ 0 0 ; 0 0 ; 0 0 ; 1 0 ; 0 1 ] ( u1 ; u2 )

The states of equilibrium are obtained for v̄ = ω̄ = 0.
2) The controllability matrix is:

Ccon = [ 0 0 0 cos θ̄ −v̄ sin θ̄ 0 0 0 0 0 ; 0 0 0 sin θ̄ v̄ cos θ̄ 0 0 0 0 0 ; 0 0 1 0 0 0 0 0 0 0 ; 1 0 0 0 0 0 0 0 0 0 ; 0 1 0 0 0 0 0 0 0 0 ]

which is of rank 5 for v̄ ≠ 0. At equilibrium, v̄ = 0 and therefore:

Ccon = [ 0 0 0 cos θ̄ 0 0 0 0 0 0 ; 0 0 0 sin θ̄ 0 0 0 0 0 0 ; 0 0 1 0 0 0 0 0 0 0 ; 1 0 0 0 0 0 0 0 0 0 ; 0 1 0 0 0 0 0 0 0 0 ]

which is of rank 4. The system is therefore uncontrollable.
3) The uncontrollable directions a are those that are orthogonal to all the columns of Ccon. In order to find them, we need to solve:

Ccon^T · a = 0, with a ≠ 0

and thus, for v̄ = 0, we obtain:

a = λ · (sin θ̄, −cos θ̄, 0, 0, 0)^T with λ ≠ 0

Therefore we cannot move perpendicularly to the direction of the tank (at first order). At second order, it is possible (this is the "slot" principle used when parallel parking a car).
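This computation can be reproduced numerically for an arbitrary heading, say θ̄ = 0.7 (an illustrative value); a Python sketch:

```python
import numpy as np

# At an equilibrium of the tank (vbar = 0), check that the direction
# (sin θ, -cos θ, 0, 0, 0) is orthogonal to the controllability matrix.
theta = 0.7  # illustrative heading
A = np.zeros((5, 5))
A[0, 4], A[1, 4] = np.cos(theta), np.sin(theta)  # x' = v cos θ, y' = v sin θ
A[2, 3] = 1.0                                    # θ' = ω
B = np.zeros((5, 2))
B[3, 0], B[4, 1] = 1.0, 1.0                      # ω' = u1, v' = u2

G = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(5)])
a = np.array([np.sin(theta), -np.cos(theta), 0, 0, 0])
print(np.linalg.matrix_rank(G), np.linalg.norm(G.T @ a))  # rank 4, norm 0
```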
Solution to Exercise 5.11 (linearization of the hovercraft)

1) By linearizing, we obtain the following approximation:

ẋ ≈ f(x̄, ū) + ∂f/∂x (x̄, ū) · (x − x̄) + ∂f/∂u (x̄, ū) · (u − ū)

i.e.:

( ẋ ; ẏ ; θ̇ ; v̇x ; v̇y ; ω̇ ) ≈ ( v̄x ; v̄y ; ω̄ ; ū1 cos θ̄ ; ū1 sin θ̄ ; ū2 )
+ [ 0 0 0 1 0 0 ; 0 0 0 0 1 0 ; 0 0 0 0 0 1 ; 0 0 −ū1 sin θ̄ 0 0 0 ; 0 0 ū1 cos θ̄ 0 0 0 ; 0 0 0 0 0 0 ] ( x − x̄ ; y − ȳ ; θ − θ̄ ; vx − v̄x ; vy − v̄y ; ω − ω̄ )
+ [ 0 0 ; 0 0 ; 0 0 ; cos θ̄ 0 ; sin θ̄ 0 ; 0 1 ] ( u1 − ū1 ; u2 − ū2 )
2) The controllability matrix is:

Ccon = [ 0 0 cos θ̄ 0 0 0 0 −ū1 sin θ̄ 0 0 0 0 ; 0 0 sin θ̄ 0 0 0 0 ū1 cos θ̄ 0 0 0 0 ; 0 0 0 1 0 0 0 0 0 0 0 0 ; cos θ̄ 0 0 0 0 −ū1 sin θ̄ 0 0 0 0 0 0 ; sin θ̄ 0 0 0 0 ū1 cos θ̄ 0 0 0 0 0 0 ; 0 1 0 0 0 0 0 0 0 0 0 0 ]

3) This matrix is not of full rank if:

det[ 0 cos θ̄ 0 −ū1 sin θ̄ ; 0 sin θ̄ 0 ū1 cos θ̄ ; cos θ̄ 0 −ū1 sin θ̄ 0 ; sin θ̄ 0 ū1 cos θ̄ 0 ] = 0

in other words, if:

ū1^2 · (cos^2 θ̄ + sin^2 θ̄)^2 = 0
Thus, if ū1 = 0, the matrix is not of full rank.
4) In order to find the non-controllable directions in the case where ū1 = 0, we need to find a vector orthogonal to all the columns of the controllability matrix. We solve:

Ccon^T · a = 0, with a ≠ 0

We obtain, in the coordinates (x, y, θ, vx, vy, ω):

a = ( −α sin θ̄ ; α cos θ̄ ; 0 ; −β sin θ̄ ; β cos θ̄ ; 0 )

where α and β are constants. We therefore cannot control the lateral skid (normal to the axis of the hovercraft) if we do not accelerate.
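Here as well, the rank deficiency and the two skid directions (the α-part and the β-part) can be confirmed numerically for ū1 = 0 and an illustrative heading θ̄ = 0.4; a Python sketch:

```python
import numpy as np

# Hovercraft linearized at u1bar = 0: the ∂/∂θ terms vanish.
theta = 0.4  # illustrative heading
A = np.zeros((6, 6))
A[0, 3] = A[1, 4] = A[2, 5] = 1.0   # x' = vx, y' = vy, θ' = ω
B = np.zeros((6, 2))
B[3, 0], B[4, 0], B[5, 1] = np.cos(theta), np.sin(theta), 1.0

G = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(6)])
a = np.array([-np.sin(theta), np.cos(theta), 0, 0, 0, 0])  # position skid
b = np.array([0, 0, 0, -np.sin(theta), np.cos(theta), 0])  # velocity skid
print(np.linalg.matrix_rank(G),
      np.linalg.norm(G.T @ a), np.linalg.norm(G.T @ b))  # rank 4, 0, 0
```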
Solution to Exercise 5.12 (linearization of the single-acting cylinder)

1) The condition ẋ = 0 gives:

x2 = 0
a x3 − k x1 = 0
x3 (x2 − u/a) = 0

in other words:

x2 = 0
a x3 − k x1 = 0
u = 0

The operating points are therefore of the form:

(x̄, ū) = ( (x1, 0, (k/a) x1), 0 )

2) Let us now differentiate the system around the operating point (x̄, ū) in order to obtain an affine approximation of it. We obtain the linearized system:

ẋ = [ 0  1  0 ; −k/m  0  a/m ; 0  −k/a  0 ] (x − x̄) + [ 0 ; 0 ; k/a^2 ] u
Note that the coefficients of the matrices A and B do not depend on x̄. This is quite rare, and it means that the behavior of a linear control will not depend on the chosen operating point. Let us note that the absence of the matrices C and D in the linearized system is a consequence of the fact that the nonlinear system considered is autonomous (i.e. without output).
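As a check, the matrices A and B above can be recovered by finite differences (illustrative parameter values k = a = m = 1, operating point x̄ = (1, 0, 1), ū = 0); a Python sketch:

```python
import numpy as np

# Finite-difference check of the cylinder linearization with the
# illustrative values k = a = m = 1.
k, a, m = 1.0, 1.0, 1.0

def f(x, u):
    return np.array([x[1],
                     (a * x[2] - k * x[0]) / m,
                     -x[2] / x[0] * (x[1] - u / a)])

xbar, ubar, h = np.array([1.0, 0.0, 1.0]), 0.0, 1e-6
A = np.column_stack([(f(xbar + h * e, ubar) - f(xbar, ubar)) / h
                     for e in np.eye(3)])
B = (f(xbar, ubar + h) - f(xbar, ubar)) / h
print(np.round(A, 4))  # [[0, 1, 0], [-k/m, 0, a/m], [0, -k/a, 0]]
print(np.round(B, 4))  # [0, 0, k/a^2]
```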
Solution to Exercise 5.13 (pole placement control of anonlinear system)
1) At equilibrium, we must have $f(\bar{x}, \bar{u}) = 0$. For $\bar{x} = 2$, we need $\bar{u} = -8$. If we want $y = w$ at equilibrium, we need to take $E = 3$. If, moreover, we would like all the poles of the looped system to be equal to $-1$, we need $p_{\mathrm{con}} = p_{\mathrm{obs}} = -1$. The linearization around $(\bar{x}, \bar{u})$ gives us $A = 8$, $B = 1$ and $C = 3$. For $K$ and $L$, we need to solve the two polynomial equations:

$$\begin{cases} \det(sI - A + BK) = s + 1 \\ \det(sI - A^{\mathrm{T}} + C^{\mathrm{T}}L^{\mathrm{T}}) = s + 1 \end{cases}$$
i.e. $s - 8 + K = s + 1$ and $s - 8 + 3L = s + 1$. Therefore, $K = 9$ and $L = 3$. Moreover, $\bar{y} = 6$, $\bar{w} = 3\bar{x} = 6$ and $H = -\left(3\,(8-9)^{-1}\right)^{-1} = 1/3$. Therefore, the controller is written as:

$$\mathcal{R}: \begin{cases} \frac{d}{dt}\hat{x} = -10\,\hat{x} + (w-6)/3 + 3\,(y-6) \\ u = -8 - 9\,\hat{x} + (w-6)/3 \end{cases}$$
2) The state equations of our looped system are:

$$\begin{cases} \dot{x} = 2x^2 - 8 - 9\,\hat{x} + \frac{w-6}{3} \\ \frac{d}{dt}\hat{x} = -10\,\hat{x} + \frac{w-6}{3} + 3\,(3x - 6) \\ y = 3x \end{cases}$$
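The numbers in this scalar design can be checked in a few lines of code. The following sketch (plain Python, with the linearized values $A = 8$, $B = 1$, $C = 3$ taken from the solution) reproduces the pole placement arithmetic:

```python
# Check of the scalar design above: linearized plant A = 8, B = 1, C = 3
A, B, C = 8.0, 1.0, 3.0

# det(sI - A + BK) = s + 1  gives  K = A + 1
K = A + 1.0                                 # 9
# det(sI - A^T + C^T L^T) = s + 1  gives  L = (A + 1)/C
L = (A + 1.0) / C                           # 3
# Setpoint gain H = -(C (A - BK)^{-1} B)^{-1}
H = -1.0 / (C * (1.0 / (A - B * K)) * B)    # 1/3

# Both the control pole and the observer pole sit at -1
control_pole = A - B * K
observer_pole = A - L * C
```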
Solution to Exercise 5.14 (state feedback of a wiring system)
1) We have:

$$\begin{pmatrix} \dot{x}_1 \\ \dot{x}_2 \end{pmatrix} = \begin{pmatrix} x_2 \\ u - \sin x_1 - x_2 \end{pmatrix}$$

These equations are those of a damped pendulum with angle $\theta = x_1$ and angular velocity $\dot{\theta} = x_2$.
2) We solve $f(\bar{\mathbf{x}}, \bar{u}) = \mathbf{0}$ with $\bar{u} = 0$. We find:

$$\bar{\mathbf{x}} = \begin{pmatrix} k\pi \\ 0 \end{pmatrix} \quad \text{with } k \in \mathbb{Z}$$
3) We take $\bar{\mathbf{x}} = (\pi, 0)^{\mathrm{T}}$. Therefore:

$$\begin{pmatrix} \dot{x}_1 \\ \dot{x}_2 \end{pmatrix} \simeq f(\bar{\mathbf{x}}, \bar{u}) + \begin{pmatrix} 0 & 1 \\ -\cos\bar{x}_1 & -1 \end{pmatrix}(\mathbf{x} - \bar{\mathbf{x}}) + \begin{pmatrix} 0 \\ 1 \end{pmatrix}u = \underbrace{\begin{pmatrix} 0 & 1 \\ 1 & -1 \end{pmatrix}}_{\mathbf{A}} \cdot \underbrace{\begin{pmatrix} x_1 - \pi \\ x_2 \end{pmatrix}}_{\mathbf{x} - \bar{\mathbf{x}}} + \underbrace{\begin{pmatrix} 0 \\ 1 \end{pmatrix}}_{\mathbf{B}}u$$
The characteristic polynomial is $P(s) = s^2 + s - 1$. Since $P(0) = -1$ and $P$ has a positive curvature, it necessarily has a positive real root. The system is therefore unstable.
4) To calculate K, we solve:
$$\det(s\mathbf{I} - \mathbf{A} + \mathbf{B}\mathbf{K}) = (s+1)^2$$

i.e.:

$$\det\left(\begin{pmatrix} s & 0 \\ 0 & s \end{pmatrix} - \begin{pmatrix} 0 & 1 \\ 1 & -1 \end{pmatrix} + \begin{pmatrix} 0 \\ 1 \end{pmatrix}\begin{pmatrix} k_1 & k_2 \end{pmatrix}\right) = \det\begin{pmatrix} s & -1 \\ k_1 - 1 & s + k_2 + 1 \end{pmatrix} = s^2 + s\,(k_2 + 1) + k_1 - 1 = s^2 + 2s + 1$$
By identification, we obtain $k_1 = 2$, $k_2 = 1$. The controller is therefore:

$$u = \begin{pmatrix} -2 & -1 \end{pmatrix}(\mathbf{x} - \bar{\mathbf{x}}) = -2x_1 + 2\pi - x_2$$
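The identification can be verified numerically. A minimal sketch (numpy assumed) checking that both closed-loop poles land at $-1$:

```python
import numpy as np

# Linearization of the damped pendulum around xbar = (pi, 0), as above
A = np.array([[0.0, 1.0],
              [1.0, -1.0]])
B = np.array([[0.0],
              [1.0]])
K = np.array([[2.0, 1.0]])          # gains k1 = 2, k2 = 1

# Both eigenvalues of A - BK should be -1
poles = np.linalg.eigvals(A - B @ K)
```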
Solution to Exercise 5.15 (controlling an inverted rodpendulum in MATLAB)
1) In order to linearize this system, we first of all need to calculate an operating point. For this, we solve $f(\mathbf{x}, u) = \mathbf{0}$, in other words:

$$\begin{cases} x_3 = 0 \\ x_4 = 0 \\ m\sin x_2\,(g\cos x_2 - \ell x_4^2) + u = 0 \\ \sin x_2\,((M+m)g - m\ell x_4^2\cos x_2) + \cos x_2\cdot u = 0 \end{cases}$$

Since $x_4 = 0$, the last two lines can be written in the form:

$$\begin{pmatrix} mg\cos x_2 & 1 \\ (M+m)g & \cos x_2 \end{pmatrix}\begin{pmatrix} \sin x_2 \\ u \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$

The determinant of this matrix is zero only if:

$$mg\cos^2 x_2 - (M+m)g = 0$$
in other words:

$$\cos^2 x_2 = \frac{M+m}{m}$$

which is impossible since $\frac{M+m}{m} > 1$. From this, we deduce that $\sin x_2 = 0$ and $u = 0$. The operating points are of the form:

$$\bar{\mathbf{x}} = (x_1, k\pi, 0, 0)^{\mathrm{T}} \quad \text{and} \quad \bar{u} = 0, \quad \text{with } k \in \mathbb{Z}$$
2) Around the operating point $\bar{\mathbf{x}} = (0, 0, 0, 0)$ and $\bar{u} = 0$, let us linearize the system by applying the limited development method, writing $\varepsilon$ for first-order small quantities. We have:

$$\frac{m\sin x_2\,(g\cos x_2 - \ell x_4^2) + u}{M + m\sin^2 x_2} = \frac{m(x_2+\varepsilon)\,(g(1+\varepsilon) - \ell\varepsilon) + u}{M + m(x_2+\varepsilon)^2} = \frac{m(x_2+\varepsilon)(g+\varepsilon) + u}{M + \varepsilon} = \frac{mgx_2 + u}{M} + \varepsilon$$

$$\frac{\sin x_2\,((M+m)g - m\ell x_4^2\cos x_2) + \cos x_2\cdot u}{\ell\,(M + m\sin^2 x_2)} = \frac{(x_2+\varepsilon)\,((M+m)g - m\ell\varepsilon(1+\varepsilon)) + (1+\varepsilon)u}{\ell\,(M + m(x_2+\varepsilon)^2)} = \frac{x_2(M+m)g + u}{\ell M} + \varepsilon$$
Thus, we obtain the linearized system:

$$\begin{cases} \dot{\mathbf{x}} = \begin{pmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & \frac{mg}{M} & 0 & 0 \\ 0 & \frac{(M+m)g}{M\ell} & 0 & 0 \end{pmatrix}\mathbf{x} + \begin{pmatrix} 0 \\ 0 \\ \frac{1}{M} \\ \frac{1}{M\ell} \end{pmatrix}u \\ y = \begin{pmatrix} 1 & 0 & 0 & 0 \end{pmatrix}\mathbf{x} \end{cases}$$
Here, since $\bar{\mathbf{x}} = \mathbf{0}$, the quantities $\mathbf{x}$ and $\mathbf{x} - \bar{\mathbf{x}}$ are the same, and it is for this reason that we have kept the notation $\mathbf{x}$ in the above state equation. It is easy to verify that the linearized system of our inverted rod pendulum is observable and controllable for the nominal values of the parameters.
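This verification is a short computation. A sketch with numpy, using the nominal parameter values $m = 1$, $M = 5$, $\ell = 1$, $g = 9.81$ that appear in the simulation program of this exercise:

```python
import numpy as np

m, M, l, g = 1.0, 5.0, 1.0, 9.81    # nominal values used in this exercise

A = np.array([[0, 0, 1, 0],
              [0, 0, 0, 1],
              [0, m * g / M, 0, 0],
              [0, (M + m) * g / (M * l), 0, 0]])
B = np.array([[0.0], [0.0], [1 / M], [1 / (M * l)]])
C = np.array([[1.0, 0.0, 0.0, 0.0]])

# Kalman criteria: the controllability and observability matrices
# [B, AB, A^2B, A^3B] and [C; CA; CA^2; CA^3] must have rank 4
ctrb = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(4)])
obsv = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(4)])
```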
3) The REGULKLH algorithm (refer to functionRegulKLH.m described on page 133) allows us to obtain ourcontroller. The MATLAB instructions are the following:
A=[0 0 1 0;0 0 0 1;0 m*g/M 0 0;0 (M+m)*g/(l*M) 0 0];
B=[0;0;1/M;1/(l*M)];
C=[1 0 0 0];
E=[1 0 0 0]; % setpoint matrix
pcom=[-2 -2.1 -2.2 -2.3];
pobs=[-2 -2.1 -2.2 -2.3];
K=place(A,B,pcom);
L=place(A',C',pobs)';
H=-inv(E*inv(A-B*K)*B);
Ar=A-B*K-L*C;
Br=[B*H L];
Cr=-K;
Dr=[H,zeros(size(B'*C'))];
In this program, Ar, Br, Cr, Dr are the state matrices forthe controller. Note that due to the algorithm used byMATLAB, the place command requests distinct poles. Theentire MATLAB program can be found in pendinv_main.m.
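For readers without MATLAB, the same session can be sketched in Python; `scipy.signal.place_poles` plays the role of `place` here (and likewise wants distinct poles). This is a transcription of the instructions above, not of the RegulKLH function itself:

```python
import numpy as np
from scipy.signal import place_poles

m, M, l, g = 1.0, 5.0, 1.0, 9.81
A = np.array([[0, 0, 1, 0],
              [0, 0, 0, 1],
              [0, m * g / M, 0, 0],
              [0, (M + m) * g / (l * M), 0, 0]])
B = np.array([[0.0], [0.0], [1 / M], [1 / (l * M)]])
C = np.array([[1.0, 0.0, 0.0, 0.0]])
E = np.array([[1.0, 0.0, 0.0, 0.0]])     # setpoint matrix
pcom = [-2.0, -2.1, -2.2, -2.3]
pobs = [-2.0, -2.1, -2.2, -2.3]

K = place_poles(A, B, pcom).gain_matrix
L = place_poles(A.T, C.T, pobs).gain_matrix.T
H = -np.linalg.inv(E @ np.linalg.inv(A - B @ K) @ B)

# State matrices of the output feedback controller
Ar = A - B @ K - L @ C
Br = np.hstack([B @ H, L])
Cr = -K
Dr = np.hstack([H, np.zeros((1, 1))])
```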
4) The mechanical energy is given by:

$$E_m(\mathbf{x}) = \underbrace{\frac{1}{2}m\ell^2\dot{\theta}^2}_{\text{kinetic energy}} + \underbrace{mg\ell\,(\cos\theta - 1)}_{\text{potential energy}} = \frac{1}{2}m\ell^2 x_4^2 + mg\ell\,(\cos x_2 - 1)$$
The potential energy constant has been chosen so that the energy is zero at the unstable point of equilibrium of the pendulum. Let us take $V(\mathbf{x}) = E_m^2(\mathbf{x})$. This function is minimal when the pendulum is in its upper equilibrium. We will therefore try to minimize $V(\mathbf{x})$. We have:
$$\dot{V}(\mathbf{x}) = \frac{dV}{d\mathbf{x}}(\mathbf{x})\cdot\dot{\mathbf{x}} = 2E_m(\mathbf{x})\cdot\frac{dE_m}{d\mathbf{x}}(\mathbf{x})\cdot f(\mathbf{x}, u)$$

$$= m\left(\ell^2 x_4^2 + 2g\ell\,(\cos x_2 - 1)\right)\begin{pmatrix} 0 & -mg\ell\sin x_2 & 0 & m\ell^2 x_4 \end{pmatrix}\cdot\left(\begin{pmatrix} x_3 \\ x_4 \\ \dfrac{-m\sin x_2\,(\ell x_4^2 - g\cos x_2)}{M+m\sin^2 x_2} \\ \dfrac{\sin x_2\,((M+m)g - m\ell x_4^2\cos x_2)}{\ell\,(M+m\sin^2 x_2)} \end{pmatrix} + \begin{pmatrix} 0 \\ 0 \\ \dfrac{1}{M+m\sin^2 x_2} \\ \dfrac{\cos x_2}{\ell\,(M+m\sin^2 x_2)} \end{pmatrix}u\right) = a(\mathbf{x})\,u + b(\mathbf{x})$$
To raise the pendulum, we will need to take:

$$u = -u_{\max}\,\mathrm{sign}(a(\mathbf{x})) = -u_{\max}\,\mathrm{sign}\!\left(\left(\ell^2 x_4^2 + 2g\ell\,(\cos x_2 - 1)\right)\frac{m\ell x_4\cos x_2}{M+m\sin^2 x_2}\right) = -u_{\max}\,\mathrm{sign}\!\left(\left(\ell x_4^2 + 2g\,(\cos x_2 - 1)\right)(x_4\cos x_2)\right)$$

The corresponding program, which can be found in pendinv_monte.m, is given below.
m=1;M=5;l=1;g=9.81; dt=0.01;
x=[0;3;0.4;0]; % bottom-positioned pendulum
for t=0:dt:12,
  u=-5*sign((l*x(4)^2+2*g*(cos(x(2))-1))*x(4)*cos(x(2)));
  x=x+pendinv_f(x,u)*dt;
end
Solution to Exercise 5.16 (controlling a car along a wallin MATLAB)
1) In order to understand the following proof, it is important to recall the meaning of the sign of the determinant of two vectors $\vec{u}$ and $\vec{v}$ of $\mathbb{R}^2$: (i) $\det(\vec{u}, \vec{v}) > 0$ if $\vec{v}$ is on the left-hand side of $\vec{u}$; (ii) $\det(\vec{u}, \vec{v}) < 0$ if $\vec{v}$ is on the right-hand side of $\vec{u}$; and (iii) $\det(\vec{u}, \vec{v}) = 0$ if $\vec{u}$ and $\vec{v}$ are collinear. Thus, for instance, on the left-hand side of Figure 5.10, $\det(\mathbf{a}-\mathbf{m}, \vec{u}) > 0$ and $\det(\mathbf{b}-\mathbf{m}, \vec{u}) < 0$. Let us recall as well that the determinant is a multilinear form, in other words:

$$\det(a\mathbf{u}+b\mathbf{v},\, c\mathbf{x}+d\mathbf{y}) = ac\det(\mathbf{u},\mathbf{x}) + ad\det(\mathbf{u},\mathbf{y}) + bc\det(\mathbf{v},\mathbf{x}) + bd\det(\mathbf{v},\mathbf{y})$$

The line $\mathcal{D}(\mathbf{m}, \vec{u})$ that passes through the point $\mathbf{m}$ with direction vector $\vec{u}$ cuts the plane into two half-planes: the points that satisfy $\det(\mathbf{z}-\mathbf{m}, \vec{u}) \geq 0$ and those that satisfy $\det(\mathbf{z}-\mathbf{m}, \vec{u}) \leq 0$. It therefore cuts the segment $[\mathbf{a}\mathbf{b}]$ if $\mathbf{a}$ and $\mathbf{b}$ are in different half-planes, in other words, if $\det(\mathbf{a}-\mathbf{m}, \vec{u})\cdot\det(\mathbf{b}-\mathbf{m}, \vec{u}) \leq 0$.
Figure 5.10. The line D (m, u) cuts the segment [ab]
In the situation of Figure 5.10 on the left-hand side, the half-line $\mathcal{E}(\mathbf{m}, \vec{u})$ with vertex $\mathbf{m}$ and direction vector $\vec{u}$ cuts the segment $[\mathbf{a}\mathbf{b}]$. This is not the case in the right sub-figure. Our condition is therefore not sufficient to state that $\mathcal{E}(\mathbf{m}, \vec{u})$ will cut $[\mathbf{a}\mathbf{b}]$. Let us assume that $\det(\mathbf{a}-\mathbf{m}, \vec{u})\cdot\det(\mathbf{b}-\mathbf{m}, \vec{u}) \leq 0$, in other words, that the line $\mathcal{D}(\mathbf{m}, \vec{u})$ cuts the segment $[\mathbf{a}\mathbf{b}]$. The points of the half-line $\mathcal{E}(\mathbf{m}, \vec{u})$ all satisfy $\mathbf{z} = \mathbf{m} + \alpha\vec{u}$, $\alpha \geq 0$. As illustrated in Figure 5.11, the point $\mathbf{m} + \alpha\vec{u}$ is on the segment $[\mathbf{a}\mathbf{b}]$ when the vectors $\mathbf{m} + \alpha\vec{u} - \mathbf{a}$ and $\mathbf{b} - \mathbf{a}$ are collinear (see figure), in other words, when $\alpha$ satisfies $\det(\mathbf{m} + \alpha\vec{u} - \mathbf{a},\, \mathbf{b} - \mathbf{a}) = 0$.
Figure 5.11. The point $\mathbf{m} + \alpha\vec{u}$ is on the segment $[\mathbf{a}\mathbf{b}]$ when the vectors $\mathbf{m} + \alpha\vec{u} - \mathbf{a}$ and $\mathbf{b} - \mathbf{a}$ are collinear
Given the multilinearity of the determinant, this equation can be rewritten as:

$$\det(\mathbf{m}-\mathbf{a},\, \mathbf{b}-\mathbf{a}) + \alpha\det(\vec{u},\, \mathbf{b}-\mathbf{a}) = 0$$

By isolating $\alpha$, we obtain:

$$\alpha = \frac{\det(\mathbf{a}-\mathbf{m},\, \mathbf{b}-\mathbf{a})}{\det(\vec{u},\, \mathbf{b}-\mathbf{a})}$$

If $\alpha \geq 0$, then $\alpha$ represents the distance $d$ traveled by the ray starting from $\mathbf{m}$ in the direction $\vec{u}$ before meeting the segment. If $\alpha < 0$, the ray will never meet the segment, since the latter is on the wrong side.
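The two determinant tests translate directly into code. A small sketch (the helper names det2 and ray_segment_distance are ours):

```python
import numpy as np

def det2(u, v):
    """Determinant of two vectors of R^2."""
    return u[0] * v[1] - u[1] * v[0]

def ray_segment_distance(m, u, a, b):
    """Distance traveled by the ray starting from m in the (unit) direction u
    before meeting the segment [ab]; inf if the ray misses the segment."""
    m, u, a, b = map(np.asarray, (m, u, a, b))
    # The line D(m, u) cuts [ab] iff a and b lie in different half-planes
    if det2(a - m, u) * det2(b - m, u) > 0:
        return np.inf
    denom = det2(u, b - a)
    if denom == 0:                     # ray parallel to the segment
        return np.inf
    alpha = det2(a - m, b - a) / denom
    return alpha if alpha >= 0 else np.inf

# A wall x = 2 seen from the origin, looking along the x axis: d = 2
d = ray_segment_distance([0, 0], [1, 0], [2, -1], [2, 1])
```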
2) The evolution function is given below.

function xdot=f(x,u)
  xdot=[x(4)*cos(x(5))*cos(x(3));
        x(4)*cos(x(5))*sin(x(3));
        x(4)*sin(x(5))/3;
        u(1);
        u(2)];
end
The observation function of our system, given below, is a direct consequence of the first question. This function returns the distance d measured by the rangefinder, the velocity v of the front wheels and the angle δ of the steering wheel. The function first of all calculates the vector u that indicates the direction of the laser rangefinder and the point m from which the laser leaves. The distance d returned must be the smallest among all the distances likely to be returned by each of the segments. In the program, the argument P is the matrix containing the vertices of the polygon. Finally, let us note that this function will not be used by the controller, but only in order to simulate our real system. Indeed, our controller does not know the shape of the polygon around which the car is supposed to be turning: it believes that the car is driving along an infinite straight wall.
function y=circuit_g(x,P)
  u=[sin(x(3));-cos(x(3))]; m=[x(1);x(2)]; d=Inf;
  for j=1:size(P,2)-1
    a=P(:,j); b=P(:,j+1);
    if ((det([a-m u])*det([b-m u])<=0)&&(det([a-m b-a])/det([u b-a])>=0))
      d=min(det([a-m b-a])/det([u b-a]),d);
    end
  end
  y=[d;x(4);x(5)];
end
3) For the evolution equation of our system as seen by the controller, we will use the reasoning seen in Exercise 1.11 to model the car, except that y no longer exists. One must be careful with the $-$ sign in the equation $\dot{x} = -v\cos\delta\cos\theta$, since in our new model, x increases when the car is moving toward the left. We thus have:

$$\begin{pmatrix} \dot{x} \\ \dot{\theta} \\ \dot{v} \\ \dot{\delta} \end{pmatrix} = \begin{pmatrix} -v\cos\delta\cos\theta \\ \frac{v\sin\delta}{L} \\ u_1 \\ u_2 \end{pmatrix}$$

For the observation equation, we have:

$$\mathbf{y} = \begin{pmatrix} x/\sin\theta \\ v \\ \delta \end{pmatrix}$$

where the three components are, respectively, the distance returned by the rangefinder, the velocity of the front wheels and the angle of the steering wheel, and where $L$ is the distance between the front and rear axles.
4) The linearization around the operating point, for $L = 3$, gives us the matrices of the linearized system:

$$\mathbf{A} = \begin{pmatrix} 0 & 7 & 0 & 0 \\ 0 & 0 & 0 & \frac{7}{3} \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}, \quad \mathbf{B} = \begin{pmatrix} 0 & 0 \\ 0 & 0 \\ 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad \mathbf{C} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}, \quad \mathbf{D} = \begin{pmatrix} 0 & 0 \\ 0 & 0 \\ 0 & 0 \end{pmatrix}$$

5) The setpoint matrix is given by:

$$\mathbf{E} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix}$$
By choosing as poles $p_{\mathrm{con}} = p_{\mathrm{obs}} = (-2, -2, -2, -2)$, the REGULKLH algorithm $(\mathbf{A}, \mathbf{B}, \mathbf{C}, \mathbf{E}, p_{\mathrm{con}}, p_{\mathrm{obs}})$ generates for us the following controller:

$$\begin{cases} \frac{d}{dt}\hat{\mathbf{x}} = \begin{pmatrix} -4 & 7 & 0 & 0 \\ -0.6 & 0 & 0 & 0 \\ 0 & 0 & -4 & 0 \\ -0.5 & -5.1 & 0 & -8 \end{pmatrix}\hat{\mathbf{x}} + \begin{pmatrix} 0 & 0 & 4 & 0 & 0 \\ 0 & 0 & 0.6 & 0 & 2.3 \\ 0 & 2 & 0 & 2 & 0 \\ 0.5 & 0 & 0 & 0 & 2 \end{pmatrix}\begin{pmatrix} \mathbf{w} \\ \mathbf{y} \end{pmatrix} - \begin{pmatrix} 20 \\ 2.9 \\ 28 \\ 2.4 \end{pmatrix} \\[2mm] \mathbf{u} = \begin{pmatrix} 0 & 0 & -2 & 0 \\ -0.49 & -5.14 & 0 & -6 \end{pmatrix}\hat{\mathbf{x}} + \begin{pmatrix} 0 & 2 \\ 0.48 & 0 \end{pmatrix}\left(\mathbf{w} - \begin{pmatrix} 5 \\ 7 \end{pmatrix}\right) \end{cases}$$
6) The corresponding program is given below (refer to the file circuit_main.m).

r0=5; v0=7; dt=0.05;
P=[-10 -10 0 10 20 32 35 30 20 0 -10; -5 5 15 20 20 15 10 0 -3 -6 -5];
A=[0 7 0 0;0 0 0 7/3;0 0 0 0;0 0 0 0]; B=[0 0;0 0;1 0;0 1];
C=[1 0 0 0;0 0 1 0;0 0 0 1]; E=[1 0 0 0;0 0 1 0];
[Ar,Br,Cr,Dr]=RegulKLH(A,B,C,E,[-2 -2.1 -2.2 -2.3],[-2 -2.1 -2.2 -2.3]);
ubar=[0;0]; xbar=[r0;1.57;v0;0];
wbar=E*xbar; ybar=[xbar(1)/sin(xbar(2));xbar(3);xbar(4)];
x=[-15;0;pi/2;7;0.1]; xr=[0;0;0;0];
for t=0:dt:40,
  w=[r0;v0]; y=circuit_g(x,P); u=ubar+Cr*xr+Dr*[w-wbar;y-ybar];
  x1=x+circuit_f(x,u)*dt; xr1=xr+(Ar*xr+Br*[w-wbar;y-ybar])*dt;
  x=x1; xr=xr1;
end;
Solution to Exercise 5.17 (nonlinear control)
First part
1) The state representation is given by:

$$\dot{\mathbf{x}} = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}\mathbf{x} + \begin{pmatrix} 0 \\ 1 \end{pmatrix}u, \quad y = \begin{pmatrix} 1 & 0 \end{pmatrix}\mathbf{x}$$
2) We have $\ddot{y} = k_1(\dot{w} - \dot{y}) + k_2(w - y) + \ddot{w}$. Thus, if $e = w - y$, we obtain:

$$\ddot{e} + k_1\dot{e} + k_2 e = 0$$
3) We have:

$$s^2 + k_1 s + k_2 = (s+1)^2 = s^2 + 2s + 1$$

and, by identification, $k_1 = 2$, $k_2 = 1$.
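The identification amounts to expanding $(s+1)^2$; for instance:

```python
import numpy as np

# Coefficients of (s + 1)^2 = s^2 + k1 s + k2
coeffs = np.poly([-1.0, -1.0])     # [1., 2., 1.]
k1, k2 = coeffs[1], coeffs[2]
```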
Second part
1) We have:

$$\begin{cases} \ddot{x} = \dot{v}\cos\theta - \dot{\theta}\,v\sin\theta = u_2\cos\theta - u_1 v\sin\theta \\ \ddot{y} = \dot{v}\sin\theta + \dot{\theta}\,v\cos\theta = u_2\sin\theta + u_1 v\cos\theta \end{cases}$$

Thus:

$$\begin{pmatrix} \ddot{x} \\ \ddot{y} \end{pmatrix} = \underbrace{\begin{pmatrix} -v\sin\theta & \cos\theta \\ v\cos\theta & \sin\theta \end{pmatrix}}_{\mathbf{A}(\mathbf{x})}\begin{pmatrix} u_1 \\ u_2 \end{pmatrix}$$
2) By taking $\mathbf{u} = \mathbf{A}^{-1}(\mathbf{x})\,\mathbf{q}$, we obtain:

$$\begin{pmatrix} \ddot{x} \\ \ddot{y} \end{pmatrix} = \begin{pmatrix} q_1 \\ q_2 \end{pmatrix}$$
in other words, a system composed of two decoupled double integrators. A proportional and derivative-type control on each of these chains gives us:

$$\begin{pmatrix} q_1 \\ q_2 \end{pmatrix} = \begin{pmatrix} (x_a - x) + 2\,(\dot{x}_a - \dot{x}) + \ddot{x}_a \\ (y_a - y) + 2\,(\dot{y}_a - \dot{y}) + \ddot{y}_a \end{pmatrix}$$
However, $\dot{x} = v\cos\theta$ and $\dot{y} = v\sin\theta$. The resulting control is therefore:

$$\begin{pmatrix} u_1 \\ u_2 \end{pmatrix} = \begin{pmatrix} -v\sin\theta & \cos\theta \\ v\cos\theta & \sin\theta \end{pmatrix}^{-1}\begin{pmatrix} (x_a - x) + 2\,(\dot{x}_a - v\cos\theta) + \ddot{x}_a \\ (y_a - y) + 2\,(\dot{y}_a - v\sin\theta) + \ddot{y}_a \end{pmatrix}$$

By applying this control rule, we have the guarantee that the error will converge toward zero, like $e^{-t}$.
Bibliography
[BAC 92] BACCELLI F., COHEN G., OLSDER G.J., et al.,Synchronization and Linearity: An Algebra for Discrete EventSystems, John Wiley & Sons, New York, 1992.
[BOU 06] BOURLÈS H., Systèmes linéaires; de la modélisation à lacommande, Hermès, Paris, 2006.
[BOY 06] BOYER F., POREZ M., KHALIL W., “Macro-continuouscomputed torque algorithm for a three-dimensional eel-likerobot”, IEEE Transactions on Robotics, vol. 22, no. 4, pp. 763–775, 2006.
[JAU 04] JAULIN L., “Modélisation et commande d’un bateauà voile”, CIFA (Conférence Internationale Francophoned’Automatique), CDROM, Douz, Tunisie, 2004.
[JAU 05] JAULIN L., Représentation d’état pour la modélisation etla commande des systèmes, Hermès, Paris, 2005.
[JAU 09] JAULIN L., “Robust set membership state estimation:application to underwater robotics”, Automatica, vol. 45, no. 1,pp. 202–206, 2009.
[JAU 10] JAULIN L., Commande d’un skate-car par biomimétisme,CIFA, Nancy, France, 2010.
[JAU 12a] JAULIN L., LE BARS F., "An interval approach for stability analysis: application to sailboat robotics", IEEE Transactions on Robotics, vol. 27, no. 5, 2012.

[JAU 12b] JAULIN L., LE BARS F., "A simple controller for line following of sailboats", 5th International Robotic Sailing Conference, Springer, Cardiff, UK, pp. 107–119, 2012.
[KAI 80] KAILATH T., Linear Systems, Prentice Hall, EnglewoodCliffs, 1980.
[KHA 02] KHALIL H.K., Nonlinear Systems, Prentice Hall, 2002.
[KHA 07] KHALIL W., DOMBRE E., Robot Manipulators: Modeling,Performance Analysis and Control, ISTE, London and John Wiley& Sons, New York, 2007.
[LAU 01] LAUMOND J.P., La Robotique Mobile, Hermès, Paris,2001.
[RIV 89] RIVOIRE M., FERRIER J.L., Cours et exercicesd’automatique, Tomes 1, 2 and 3, Eyrolles, Paris, 1989.
[WAL 14] WALTER E., Numerical Methods and Optimization: AConsumer Guide, Springer, London, 2014.
Index
C
change of basis, 97, 98, 111–114, 138, 162
control, 127, 185
controllability, 127, 128, 135–137, 151–153, 155, 174, 195–198, 212, 213, 219–221
controller, 28, 32, 67, 129–131, 133, 140, 142–150, 160, 167, 171, 173, 180, 183, 189, 190, 196, 199, 200–206, 215, 217, 218, 223–226, 231, 232
D, E, G
dynamics, 2, 3, 6, 18, 21–25, 28, 29, 36, 37, 43, 61, 131, 132
Euler's method, 54–56, 58, 60, 61, 66, 68, 69, 73, 82, 196, 202
graphics, 47
H, I, J
homogeneous coordinates, 52,59, 62, 63, 70, 76
hydraulic system, 17
integration method, 54, 56
inverted rod pendulum, 6–8, 24, 26–28, 200, 224, 225
Jordan normal form, 97, 102, 112, 125
K, L
Kalman decomposition, 138,139, 158
Laplace transform, 87–91, 93,108
linear system, 4, 16, 40,85–88, 92, 93, 95, 97,100–102, 110, 112, 121,124, 128, 129, 137, 138,141, 144, 152, 159, 185,188, 189, 212
linearization, 185, 187,191–194, 197, 198, 208,209, 210, 211, 215, 218,219, 221, 222, 231
linearized control, 185
loop, 8, 11, 32, 60, 67, 73, 94,127, 134, 139, 145, 167, 207
Luenberger, 131, 176
M
mechanical systems, 3, 33, 73
mobile robots, 3
modal form, 102, 123, 124
modeling, 1, 3, 6–8, 11, 12, 24, 26, 32, 34, 36, 149
O
observability, 127–129, 135, 137, 138, 151, 152, 156, 157, 175
observer, 34, 131, 146–148,163, 173–177, 182
operating point, 2, 127, 187,188, 190, 196–199, 201,204, 212, 213, 215, 222,224, 225, 231
output feedback controller,131, 133, 140, 148–150,196, 201
P, R
path, 58, 60, 68, 69, 73, 138, 206
point of equilibrium, 5, 57, 60, 68, 72, 141, 187, 193, 194, 196, 200, 210, 226
polarization, 187, 188
pole placement, 128, 130–132, 139, 140, 147, 158, 159, 173, 181, 196, 199, 205, 222
Runge-Kutta method, 55, 56, 60
S
segway, 8, 28, 195–197, 212, 214, 215–218
servomotor, 4, 7, 8
simulation, 36, 47, 49, 51, 53–55, 59, 60, 62, 65, 67, 71, 74, 76, 78, 80, 83, 202
skating robot, 65–67
stability, 11, 73, 85–87, 92, 104, 106, 146, 195, 211
state
  feedback controller, 143, 146, 173, 200
  representation, 1, 4, 21, 22, 40, 87, 91, 95, 100, 112, 121, 123, 140, 142, 146, 148, 160, 165, 179, 205, 233
systems, 1–3, 9, 33, 47, 49, 54, 68, 73, 85–89, 91–95, 97, 99–101, 103, 109, 110, 116, 127–129, 158
T
Taylor’s method, 56, 61, 62, 74transfer
function, 88– 91, 94, 95, 97,99–103, 109, 110, 113,115–119, 121, 123–125,135, 142, 146, 149–151,164, 174, 179–181
matrix, 91, 95, 97, 98, 100,110, 118, 123
tricycle, 62–64, 75–77, 79
V, W
vector field, 47–49, 56–61, 67, 68, 73, 193, 194
wiring system, 98, 99, 101–103, 115, 119, 120, 135, 146, 151, 167, 174, 199, 223
Other titles from ISTE in Control, Systems and Industrial Engineering
2014

DAVIM J. Paulo Machinability of Advanced Materials
ESTAMPE Dominique Supply Chain Performance and Evaluation Models
FAVRE Bernard Introduction to Sustainable Transports
MICOUIN Patrice Model Based Systems Engineering: Fundamentals and Methods
MILLOT Patrick Designing Human−Machine Cooperation Systems
MILLOT Patrick Risk Management in Life-Critical Systems
NI Zhenjiang, PACORET Céline, BENOSMAN Ryad, REGNIER Stéphane Haptic Feedback Teleoperation of Optical Tweezers
OUSTALOUP Alain Diversity and Non-integer Differentiation for System Dynamics
REZG Nidhal, DELLAGI Sofien, KHATAD Abdelhakim Joint Optimization of Maintenance and Production Policies
STEFANOIU Dan, BORNE Pierre, POPESCU Dumitru, FILIP Florin Gh., EL KAMEL Abdelkader Optimization in Engineering Sciences: Metaheuristics, Stochastic Methods and Decision Support
2013

ALAZARD Daniel Reverse Engineering in Control Design
ARIOUI Hichem, NEHAOUA Lamri Driving Simulation
CHADLI Mohammed, COPPIER Hervé Command-control for Real-time Systems
DAAFOUZ Jamal, TARBOURIECH Sophie, SIGALOTTI Mario Hybrid Systems with Constraints
FEYEL Philippe Loop-shaping Robust Control
FLAUS Jean-Marie Risk Analysis: Socio-technical and Industrial Systems
FRIBOURG Laurent, SOULAT Romain Control of Switching Systems by Invariance Analysis: Application to Power Electronics
GRUNN Emmanuel, PHAM Anh Tuan Modeling of Complex Systems: Application to Aeronautical Dynamics
HABIB Maki K., DAVIM J. Paulo Interdisciplinary Mechatronics: Engineering Science and Research Development
HAMMADI Slim, KSOURI Mekki Multimodal Transport Systems
JARBOUI Bassem, SIARRY Patrick, TEGHEM Jacques Metaheuristics for Production Scheduling
KIRILLOV Oleg N., PELINOVSKY Dmitry E. Nonlinear Physical Systems
LE Vu Tuan Hieu, STOICA Cristina, ALAMO Teodoro, CAMACHO Eduardo F., DUMUR Didier Zonotopes: From Guaranteed State-estimation to Control
MACHADO Carolina, DAVIM J. Paulo Management and Engineering Innovation
MORANA Joëlle Sustainable Supply Chain Management
SANDOU Guillaume Metaheuristic Optimization for the Design of Automatic Control Laws
STOICAN Florin, OLARU Sorin Set-theoretic Fault Detection in Multisensor Systems
2012

AÏT-KADI Daoud, CHOUINARD Marc, MARCOTTE Suzanne, RIOPEL Diane Sustainable Reverse Logistics Network: Engineering and Management
BORNE Pierre, POPESCU Dumitru, FILIP Florin G., STEFANOIU Dan Optimization in Engineering Sciences: Exact Methods
CHADLI Mohammed, BORNE Pierre Multiple Models Approach in Automation: Takagi-Sugeno Fuzzy Systems
DAVIM J. Paulo Lasers in Manufacturing
DECLERCK Philippe Discrete Event Systems in Dioid Algebra and Conventional Algebra
DOUMIATI Moustapha, CHARARA Ali, VICTORINO Alessandro, LECHNER Daniel Vehicle Dynamics Estimation using Kalman Filtering: Experimental Validation
HAMMADI Slim, KSOURI Mekki Advanced Mobility and Transport Engineering
MAILLARD Pierre Competitive Quality Strategies
MATTA Nada, VANDENBOOMGAERDE Yves, ARLAT Jean Supervision and Safety of Complex Systems
POLER Raul et al. Intelligent Non-hierarchical Manufacturing Networks
YALAOUI Alice, CHEHADE Hicham, YALAOUI Farouk, AMODEO Lionel Optimization of Logistics
ZELM Martin et al. I-EASA12
2011

CANTOT Pascal, LUZEAUX Dominique Simulation and Modeling of Systems of Systems
DAVIM J. Paulo Mechatronics
DAVIM J. Paulo Wood Machining
KOLSKI Christophe Human-computer Interactions in Transport
LUZEAUX Dominique, RUAULT Jean-René, WIPPLER Jean-Luc Complex Systems and Systems of Systems Engineering
ZELM Martin et al. Enterprise Interoperability: IWEI2011 Proceedings
2010

BOTTA-GENOULAZ Valérie, CAMPAGNE Jean-Pierre, LLERENA Daniel, PELLEGRIN Claude Supply Chain Performance / Collaboration, Alignement and Coordination
BOURLÈS Henri, GODFREY K.C. Kwan Linear Systems
BOURRIÈRES Jean-Paul Proceedings of CEISIE’09
DAVIM J. Paulo Sustainable Manufacturing
GIORDANO Max, MATHIEU Luc, VILLENEUVE François Product Life-Cycle Management / Geometric Variations
LUZEAUX Dominique, RUAULT Jean-René Systems of Systems
VILLENEUVE François, MATHIEU Luc Geometric Tolerancing of Products
2009

DIAZ Michel Petri Nets / Fundamental Models, Verification and Applications
OZEL Tugrul, DAVIM J. Paulo Intelligent Machining
2008

ARTIGUES Christian, DEMASSEY Sophie, NÉRON Emmanuel Resource-Constrained Project Scheduling
BILLAUT Jean-Charles, MOUKRIM Aziz, SANLAVILLE Eric Flexibility and Robustness in Scheduling
DOCHAIN Denis Bioprocess Control
LOPEZ Pierre, ROUBELLAT François Production Scheduling
THIERRY Caroline, THOMAS André, BEL Gérard Supply Chain Simulation and Management
2007
DE LARMINAT Philippe Analysis and Control of Linear Systems
LAMNABHI Françoise et al. Taming Heterogeneity and Complexity of Embedded Control
LIMNIOS Nikolaos Fault Trees
2006

NAJIM Kaddour Control of Continuous Linear Systems