8/15/2019 Linear System Theory 2 e Sol
1/106
Solutions Manual
LINEAR SYSTEM THEORY, 2/E
Wilson J. Rugh
Department of Electrical and Computer Engineering
Johns Hopkins University
PREFACE
With some lingering ambivalence about the merits of the undertaking, but with a bit more dedication than
the first time around, I prepared this Solutions Manual for the second edition of Linear System Theory. Roughly
40% of the exercises are addressed, including all exercises in Chapter 1 and all others used in developments in the
text. This coverage complements the 60% of those in an unscientific survey who wanted a solutions manual, andperhaps does not overly upset the 40% who voted no. (The main contention between the two groups involved the
inevitable appearance of pirated student copies and the view that an available solution spoils the exercise.)
I expect that a number of my solutions could be improved, and that some could be improved using only
techniques from the text. Also the press of time and my flagging enthusiasm for text processing impeded the
crafting of economical solutions—some solutions may contain too many steps or too many words. However I
hope that the error rate in these pages is low and that the value of this manual is greater than the price paid.
Please send comments and corrections to the author at [email protected] or ECE Department, Johns Hopkins
University, Baltimore, MD 21218 USA.
CHAPTER 1
Solution 1.1
(a) For k = 2, (A + B)^2 = A^2 + AB + BA + B^2. If AB = BA, then (A + B)^2 = A^2 + 2AB + B^2. In general if AB = BA, then the k-fold product (A + B)^k can be written as a sum of terms of the form A^j B^(k−j), j = 0, . . . , k. The number of terms of the form A^j B^(k−j) is given by the binomial coefficient (k choose j). Therefore AB = BA implies

(A + B)^k = Σ_{j=0}^{k} (k choose j) A^j B^(k−j)
(b) Write

det [λI − A(t)] = λ^n + a_{n−1}(t)λ^{n−1} + . . . + a_1(t)λ + a_0(t)

where invertibility of A(t) implies a_0(t) ≠ 0. The Cayley-Hamilton theorem implies

A^n(t) + a_{n−1}(t)A^{n−1}(t) + . . . + a_0(t)I = 0

for all t. Multiplying through by A^{−1}(t) yields

A^{−1}(t) = [ −a_1(t)I − . . . − a_{n−1}(t)A^{n−2}(t) − A^{n−1}(t) ] / a_0(t)

for all t. Since a_0(t) = det [−A(t)], |a_0(t)| = |det A(t)|. Assume ε > 0 is such that |det A(t)| ≥ ε for all t. Since ||A(t)|| ≤ α we have |a_ij(t)| ≤ α, and thus there exists a γ such that |a_j(t)| ≤ γ for all t. Then, for all t,

||A^{−1}(t)|| = || a_1(t)I + . . . + A^{n−1}(t) || / |det A(t)| ≤ ( γ + γα + . . . + α^{n−1} ) / ε =: β
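The Cayley-Hamilton inverse formula above can be checked numerically; this is a sketch on an arbitrary random invertible matrix (not from the text), assuming numpy is available.

```python
import numpy as np

# Check: A^{-1} = -(1/a_0)[a_1 I + a_2 A + ... + a_{n-1} A^{n-2} + A^{n-1}],
# where det(lambda I - A) = lambda^n + a_{n-1} lambda^{n-1} + ... + a_0.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))     # generic matrix, invertible almost surely
n = A.shape[0]

coeffs = np.poly(A)                 # [1, a_{n-1}, ..., a_1, a_0], highest power first
a = coeffs[::-1]                    # a[k] is now the coefficient of lambda^k

S = np.zeros_like(A)                # accumulate a_1 I + a_2 A + ... + A^{n-1}
Apow = np.eye(n)
for k in range(1, n + 1):           # a[n] = 1 contributes the A^{n-1} term
    S += a[k] * Apow
    Apow = Apow @ A

A_inv = -S / a[0]
err = np.max(np.abs(A_inv - np.linalg.inv(A)))
```

For a well-conditioned matrix the error is at round-off level, confirming the formula term by term.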
Solution 1.2
(a) If λ is an eigenvalue of A with eigenvector p, then recursive use of Ap = λp shows that λ^k is an eigenvalue of A^k. However to show multiplicities are preserved is more difficult, and apparently requires Jordan form, or at least results on similarity to upper triangular form.
(b) If λ is an eigenvalue of invertible A, then λ is nonzero and Ap = λp implies A^{−1}p = (1/λ)p. As in (a), addressing preservation of multiplicities is more difficult.
(c) A^T has eigenvalues λ_1, . . . , λ_n since det (λI − A^T) = det (λI − A)^T = det (λI − A).
(d) A^H has eigenvalues λ̄_1, . . . , λ̄_n using (c) and the fact that the determinant (sum of products) of a conjugate is the conjugate of the determinant. That is
det (λI − A^H) = det (λ̄I − A)^H = conj[ det (λ̄I − A) ]

(e) αA has eigenvalues αλ_1, . . . , αλ_n since Ap = λp implies (αA)p = (αλ)p.
(f) Eigenvalues of A^T A are not nicely related to eigenvalues of A. Consider the example

A = [[0, α], [0, 0]] ,  A^T A = [[0, 0], [0, α^2]]

where the eigenvalues of A are both zero, and the eigenvalues of A^T A are 0, α^2. (If A is symmetric, then (a) applies.)
Solution 1.3
(a) If the eigenvalues of A are all zero, then det (λI − A) = λ^n and the Cayley-Hamilton theorem shows that A is nilpotent. On the other hand if one eigenvalue, say λ_1, is nonzero, let p be a corresponding eigenvector. Then A^k p = λ_1^k p ≠ 0 for all k ≥ 0, and A cannot be nilpotent.
(b) Suppose Q is real and symmetric, and λ is an eigenvalue of Q. Then λ̄ also is an eigenvalue. From the eigenvalue/eigenvector equation Qp = λp we get p^H Q p = λ p^H p. Also Q p̄ = λ̄ p̄, and transposing gives p^H Q p = λ̄ p^H p. Subtracting the two results gives (λ − λ̄) p^H p = 0. Since p ≠ 0, this gives λ = λ̄, that is, λ is real.
(c) If A is upper triangular, then λI − A is upper triangular. Recursive Laplace expansion of the determinant about the first column gives

det (λI − A) = (λ − a_11) . . . (λ − a_nn)

which implies the eigenvalues of A are the diagonal entries a_11, . . . , a_nn.
Solution 1.4
(a)

A = [[0, 0], [1, 0]]  implies  A^T A = [[1, 0], [0, 0]]  implies  ||A|| = 1

(b)

A = [[3, 1], [1, 3]]  implies  A^T A = [[10, 6], [6, 10]]

Then

det (λI − A^T A) = (λ − 16)(λ − 4)

which implies ||A|| = 4.
(c)

A = [[1−i, 0], [0, 1+i]]  implies  A^H A = [[(1+i)(1−i), 0], [0, (1−i)(1+i)]] = [[2, 0], [0, 2]]

This gives ||A|| = √2.
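The three spectral norms can be confirmed with numpy; the matrices below follow the readings of Solution 1.4 given here, which are reconstructions from a damaged scan.

```python
import numpy as np

# ||A|| (spectral norm) = largest singular value = sqrt(lambda_max(A^H A))
Aa = np.array([[0.0, 0.0], [1.0, 0.0]])
Ab = np.array([[3.0, 1.0], [1.0, 3.0]])
Ac = np.array([[1 - 1j, 0], [0, 1 + 1j]])

norm_a = np.linalg.norm(Aa, 2)   # expected 1
norm_b = np.linalg.norm(Ab, 2)   # expected 4
norm_c = np.linalg.norm(Ac, 2)   # expected sqrt(2)
```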
Solution 1.5 Let

A = [[1/α, α], [0, 1/α]] ,  α > 1

Then both eigenvalues are 1/α and, using an inequality on text page 7,

||A|| ≥ max_{1 ≤ i,j ≤ 2} |a_ij| = α
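A quick numeric illustration of the gap between spectral radius and spectral norm, using the triangular matrix form read from this solution (an assumed reconstruction) with α = 10:

```python
import numpy as np

alpha = 10.0
A = np.array([[1 / alpha, alpha], [0.0, 1 / alpha]])

rho = max(abs(np.linalg.eigvals(A)))   # spectral radius: both eigenvalues are 1/alpha
nrm = np.linalg.norm(A, 2)             # spectral norm: at least alpha
```

So the eigenvalues alone say nothing about the size of ||A||.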
Solution 1.6 By definition of the spectral norm, for any α ≠ 0 we can write

||A|| = max_{||x|| = 1} ||Ax|| = max_{||x|| = 1} ||Ax|| / ||x|| = max_{||x|| = 1} ||A(αx)|| / ||αx|| = max_{||x|| = 1/|α|} ||Ax|| / ||x||

Since this holds for any α ≠ 0,

||A|| = max_{x ≠ 0} ||Ax|| / ||x||

Therefore

||A|| ≥ ||Ax|| / ||x||

for any x ≠ 0, which gives

||Ax|| ≤ ||A|| ||x||
Solution 1.7 By definition of the spectral norm,

||AB|| = max_{||x|| = 1} ||(AB)x|| = max_{||x|| = 1} ||A(Bx)||
       ≤ max_{||x|| = 1} ||A|| ||Bx|| , by Exercise 1.6
       = ||A|| max_{||x|| = 1} ||Bx|| = ||A|| ||B||

If A is invertible, then A A^{−1} = I and the obvious ||I|| = 1 give

1 = ||A A^{−1}|| ≤ ||A|| ||A^{−1}||

Therefore

||A^{−1}|| ≥ 1 / ||A||
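Both inequalities from Exercises 1.6 and 1.7 are easy to spot-check on random data; a minimal sketch assuming numpy:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
x = rng.standard_normal(3)

# ||ABx|| <= ||A|| ||B|| ||x||
lhs = np.linalg.norm(A @ B @ x)
bound = np.linalg.norm(A, 2) * np.linalg.norm(B, 2) * np.linalg.norm(x)

# ||A^{-1}|| - 1/||A|| should be nonnegative
inv_gap = np.linalg.norm(np.linalg.inv(A), 2) - 1 / np.linalg.norm(A, 2)
```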
Solution 1.8 We use the following easily verified facts about partitioned vectors:

|| [x_1 ; x_2] || ≥ ||x_1|| , ||x_2|| ;  || [x_1 ; 0] || = ||x_1|| ,  || [0 ; x_2] || = ||x_2||

Write

Ax = [[A_11, A_12], [A_21, A_22]] [x_1 ; x_2] = [A_11 x_1 + A_12 x_2 ; A_21 x_1 + A_22 x_2]

Then for A_11, for example,

||A|| = max_{||x|| = 1} ||Ax|| ≥ max_{||x|| = 1} ||A_11 x_1 + A_12 x_2||
      ≥ max_{||x_1|| = 1, x_2 = 0} ||A_11 x_1|| = ||A_11||

The other partitions are handled similarly. The last part is easy from the definition of induced norm. For example if
A = [[0, A_12], [0, 0]]

then partitioning the vector x similarly we see that

max_{||x|| = 1} ||Ax|| = max_{||x_2|| = 1} ||A_12 x_2|| = ||A_12||
Solution 1.9 By the Cauchy-Schwarz inequality, and ||x^T|| = ||x||,

|x^T A x| = |(A^T x)^T x| ≤ ||A^T x|| ||x|| ≤ ||A^T|| ||x||^2 = ||A|| ||x||^2

This immediately gives

x^T A x ≥ −||A|| ||x||^2

If λ is an eigenvalue of A and x is a corresponding unity-norm eigenvector, then

|λ| = |λ| ||x|| = ||λx|| = ||Ax|| ≤ ||A|| ||x|| = ||A||
Solution 1.10 Since Q = Q^T, Q^T Q = Q^2, and the eigenvalues of Q^2 are λ_1^2, . . . , λ_n^2. Therefore

||Q|| = √λ_max(Q^2) = max_{1 ≤ i ≤ n} |λ_i|

For the other equality Cauchy-Schwarz gives

|x^T Q x| ≤ ||x|| ||Qx|| ≤ ||Q|| ||x||^2 = [ max_{1 ≤ i ≤ n} |λ_i| ] x^T x

Therefore |x^T Q x| ≤ ||Q|| for all unity-norm x. Choosing x_a as a unity-norm eigenvector of Q corresponding to the eigenvalue that yields max_{1 ≤ i ≤ n} |λ_i| gives

|x_a^T Q x_a| = [ max_{1 ≤ i ≤ n} |λ_i| ] x_a^T x_a = max_{1 ≤ i ≤ n} |λ_i|

Thus max_{||x|| = 1} |x^T Q x| = ||Q||.
Solution 1.11 Since ||Ax|| = √[(Ax)^T (Ax)] = √(x^T A^T A x),

||A|| = max_{||x|| = 1} √(x^T A^T A x) = [ max_{||x|| = 1} x^T A^T A x ]^{1/2}

The Rayleigh-Ritz inequality gives, for all unity-norm x,

x^T A^T A x ≤ λ_max(A^T A) x^T x = λ_max(A^T A)

and since A^T A ≥ 0, λ_max(A^T A) ≥ 0. Choosing x_a to be a unity-norm eigenvector corresponding to λ_max(A^T A) gives

x_a^T A^T A x_a = λ_max(A^T A)

Thus
max_{||x|| = 1} x^T A^T A x = λ_max(A^T A)

so we have ||A|| = √λ_max(A^T A).
Solution 1.12 Since A^T A > 0 we have λ_i(A^T A) > 0, i = 1, . . . , n, and (A^T A)^{−1} > 0. Then by Exercise 1.11,

||A^{−1}||^2 = λ_max[ (A^T A)^{−1} ] = 1 / λ_min(A^T A)
            = [ Π_{i=1}^{n} λ_i(A^T A) ] / [ λ_min(A^T A) · det (A^T A) ]
            ≤ [ λ_max(A^T A) ]^{n−1} / (det A)^2
            = ||A||^{2(n−1)} / (det A)^2

Therefore

||A^{−1}|| ≤ ||A||^{n−1} / |det A|
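The final bound of Solution 1.12 can be tested numerically on a random invertible matrix; a sketch assuming numpy:

```python
import numpy as np

# Check: ||A^{-1}|| <= ||A||^{n-1} / |det A|
rng = np.random.default_rng(2)
n = 5
A = rng.standard_normal((n, n))   # invertible almost surely

lhs = np.linalg.norm(np.linalg.inv(A), 2)
rhs = np.linalg.norm(A, 2) ** (n - 1) / abs(np.linalg.det(A))
```

The bound is usually far from tight, which matches its derivation by replacing every eigenvalue of A^T A except the smallest by the largest.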
Solution 1.13 Assume A ≠ 0, for the zero case is trivial. For any unity-norm x and y,

|y^T A x| ≤ ||y|| ||Ax|| ≤ ||y|| ||A|| ||x|| = ||A||

Therefore

max_{||x|| = ||y|| = 1} |y^T A x| ≤ ||A||

Now let unity-norm x_a be such that ||A x_a|| = ||A||, and let

y_a = A x_a / ||A||

Then ||y_a|| = 1 and

y_a^T A x_a = x_a^T A^T A x_a / ||A|| = ||A x_a||^2 / ||A|| = ||A||^2 / ||A|| = ||A||

Therefore

max_{||x|| = ||y|| = 1} |y^T A x| = ||A||
Solution 1.14 The coefficients of the characteristic polynomial of a matrix are continuous functions of the matrix entries, since the determinant is a continuous function of the entries (sum of products). Also the roots of a polynomial are continuous functions of the coefficients. (A proof is given in Appendix A.4 of E.D. Sontag, Mathematical Control Theory, Springer-Verlag, New York, 1990.) Since a composition of continuous functions is a continuous function, the pointwise-in-t eigenvalues of A(t) are continuous in t.
This argument gives that the (nonnegative) eigenvalues of A^T(t)A(t) are continuous in t. Then the maximum at each t is continuous in t; plot two eigenvalues and consider their pointwise maximum to see this. Finally since the square root is a continuous function of nonnegative arguments, we conclude ||A(t)|| is continuous in t.
However for continuously-differentiable A(t), ||A(t)|| need not be continuously differentiable in t. Consider the
Solution 1.17 Using the product rule to differentiate A(t)A^{−1}(t) = I yields

Ȧ(t)A^{−1}(t) + A(t) (d/dt)A^{−1}(t) = 0

which gives

(d/dt)A^{−1}(t) = −A^{−1}(t)Ȧ(t)A^{−1}(t)
Solution 1.18 Assuming differentiability of both x(t) and ||x(t)||, and using the chain rule for scalar functions,

(d/dt)||x(t)||^2 = 2||x(t)|| (d/dt)||x(t)||

Also we can write, using the product rule and the Cauchy-Schwarz inequality,

(d/dt)||x(t)||^2 = (d/dt)[ x^T(t)x(t) ] = ẋ^T(t)x(t) + x^T(t)ẋ(t) = 2x^T(t)ẋ(t) ≤ 2||x(t)|| ||ẋ(t)||

For t such that x(t) ≠ 0, comparing these expressions gives

(d/dt)||x(t)|| ≤ ||ẋ(t)||

If x(t) = 0 on a closed interval, then on that interval the result is trivial. If x(t) = 0 at an isolated point, then continuity arguments show that the result is valid. Note that for the differentiable function x(t) = t, ||x(t)|| = |t| is not differentiable at t = 0. Thus we must make the assumption that ||x(t)|| is differentiable. (While this inequality is not explicitly used in the book, the added differentiability hypothesis explains why we always differentiate ||x(t)||^2 = x^T(t)x(t) instead of ||x(t)||.)
Solution 1.19 To prove the contrapositive claim, suppose for each i, j there is a constant β_ij such that

∫_0^t |f_ij(σ)| dσ ≤ β_ij ,  t ≥ 0

Then by the inequality on page 7, noting that max_{i,j} |f_ij(t)| is a continuous function of t and taking the pointwise-in-t maximum,

∫_0^t ||F(σ)|| dσ ≤ ∫_0^t √(mn) max_{i,j} |f_ij(σ)| dσ
                 ≤ √(mn) ∫_0^t Σ_{i=1}^{m} Σ_{j=1}^{n} |f_ij(σ)| dσ
                 ≤ √(mn) Σ_{i=1}^{m} Σ_{j=1}^{n} β_ij
Solution 1.20 If λ(t), p(t) are a pointwise-in-t eigenvalue/eigenvector pair for A^{−1}(t), then

A^{−1}(t)p(t) = λ(t)p(t) ,  ||A^{−1}(t)p(t)|| = |λ(t)| ||p(t)||

Therefore, for every t,

|λ(t)| = ||A^{−1}(t)p(t)|| / ||p(t)|| ≤ ||A^{−1}(t)|| ≤ α

Since this holds for any eigenvalue/eigenvector pair,

|det A(t)| = 1 / |det A^{−1}(t)| = 1 / |λ_1(t) · · · λ_n(t)| ≥ 1/α^n > 0

for all t.
Solution 1.21 Using Exercise 1.10 and the assumptions Q(t) ≥ 0, t_b ≥ t_a,

∫_{t_a}^{t_b} ||Q(σ)|| dσ = ∫_{t_a}^{t_b} λ_max[Q(σ)] dσ ≤ ∫_{t_a}^{t_b} tr [Q(σ)] dσ = tr ∫_{t_a}^{t_b} Q(σ) dσ

Note that

∫_{t_a}^{t_b} Q(σ) dσ ≥ 0

since for every x

x^T [ ∫_{t_a}^{t_b} Q(σ) dσ ] x = ∫_{t_a}^{t_b} x^T Q(σ) x dσ ≥ 0

Thus, using a property of the trace on page 8 of Chapter 1, we have

|| ∫_{t_a}^{t_b} Q(σ) dσ || ≤ tr ∫_{t_a}^{t_b} Q(σ) dσ ≤ n || ∫_{t_a}^{t_b} Q(σ) dσ ||

Finally,

∫_{t_a}^{t_b} Q(σ) dσ ≤ εI

implies, using Rayleigh-Ritz,

|| ∫_{t_a}^{t_b} Q(σ) dσ || ≤ ε

Therefore

∫_{t_a}^{t_b} ||Q(σ)|| dσ ≤ nε
CHAPTER 2
Solution 2.3 The nominal solution for ũ(t) = sin 3t is ỹ(t) = sin t. Let x_1(t) = y(t), x_2(t) = ẏ(t) to write the state equation

ẋ(t) = [ x_2(t) ; −(4/3)x_1^3(t) − (1/3)u(t) ]

Computing the Jacobians and evaluating along the nominal gives the linearized state equation

ẋ_δ(t) = [[0, 1], [−4 sin^2 t, 0]] x_δ(t) + [0 ; −1/3] u_δ(t)
y_δ(t) = [1, 0] x_δ(t)

where

x_δ(t) = x(t) − [sin t ; cos t] ,  u_δ(t) = u(t) − sin 3t ,  y_δ(t) = y(t) − sin t ,  x_δ(0) = x(0) − [0 ; 1]
Solution 2.5 For ũ = 0 constant nominal solutions are solutions of

0 = x̃_2 − 2x̃_1 x̃_2 = x̃_2(1 − 2x̃_1)
0 = −x̃_1 + x̃_1^2 + x̃_2^2 = x̃_1(x̃_1 − 1) + x̃_2^2

Evidently there are 4 possible solutions:

x̃_a = [0 ; 0] ,  x̃_b = [1 ; 0] ,  x̃_c = [1/2 ; 1/2] ,  x̃_d = [1/2 ; −1/2]

Since

∂f/∂x = [[−2x_2, 1 − 2x_1], [−1 + 2x_1, 2x_2]] ,  ∂f/∂u = [0 ; 1]

evaluating at each of the constant nominals gives the corresponding 4 linearized state equations.
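The four constant nominals and the Jacobian can be verified numerically; the function and matrices below follow the readings of this solution (reconstructed from a damaged scan), with u entering only the second equation.

```python
import numpy as np

def f(x):
    # right-hand side at u = 0
    return np.array([x[1] - 2 * x[0] * x[1],
                     -x[0] + x[0] ** 2 + x[1] ** 2])

def jac(x):
    # state Jacobian of f
    return np.array([[-2 * x[1], 1 - 2 * x[0]],
                     [-1 + 2 * x[0], 2 * x[1]]])

nominals = [np.array(v) for v in
            ([0.0, 0.0], [1.0, 0.0], [0.5, 0.5], [0.5, -0.5])]
residuals = [np.max(np.abs(f(x))) for x in nominals]   # should all be zero
jacobians = [jac(x) for x in nominals]                 # the 4 linearized A matrices
```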
Solution 2.7 Clearly x̃ is a constant nominal if and only if

0 = A x̃ + bũ

that is, if and only if A x̃ = −bũ. There exists such an x̃ if and only if bũ ∈ Im[A], in other words
rank A = rank [A b].
Also, x̃ is a constant nominal with c x̃ = 0 if and only if

0 = A x̃ + bũ
0 = c x̃

that is, if and only if

[A ; c] x̃ = [−bũ ; 0]

As above, this holds if and only if

rank [A ; c] = rank [[A, b], [c, 0]]

Finally, x̃ is a constant nominal with c x̃ = ũ if and only if

0 = A x̃ + bũ = (A + bc) x̃

and this holds if and only if

x̃ ∈ Ker [A + bc]

(If A is invertible, we can be more explicit. For any ũ the unique constant nominal is x̃ = −A^{−1}bũ. Then ỹ = 0 for ũ ≠ 0 if and only if cA^{−1}b = 0, and ỹ = ũ if and only if cA^{−1}b = −1.)
Solution 2.8
(a) Since

[[A, B], [C, 0]]

is invertible, for any K

[[A + BK, B], [C, 0]] = [[A, B], [C, 0]] [[I, 0], [K, I]]

is invertible. Let

[[A + BK, B], [C, 0]] [[R_1, R_2], [R_3, R_4]] = [[I, 0], [0, I]]

Then the 1,2-block gives R_2 = −(A + BK)^{−1} B R_4 and the 2,2-block gives C R_2 = I, that is,

I = −C (A + BK)^{−1} B R_4

Thus [C (A + BK)^{−1} B]^{−1} exists and is given by −R_4.
(b) We need to show that there exists N such that

0 = (A + BK) x̃ + B N ũ
ũ = C x̃

The first equation gives

x̃ = −(A + BK)^{−1} B N ũ

Thus we need to choose N such that

−C (A + BK)^{−1} B N ũ = ũ

From part (a) we take N = [−C (A + BK)^{−1} B]^{−1} = R_4.
Solution 2.10 For u(t) = ũ, x̃ is a constant nominal if and only if

0 = (A + Dũ) x̃ + bũ

This holds if and only if bũ ∈ Im[A + Dũ], that is, if and only if

rank (A + Dũ) = rank [A + Dũ, bũ]

If A + Dũ is invertible, then

x̃ = −(A + Dũ)^{−1} bũ   (+)

If A is invertible, then by continuity of the determinant det (A + Dũ) ≠ 0 for all ũ such that |ũ| is sufficiently small, and (+) defines a corresponding constant nominal. The corresponding linearized state equation is

ẋ_δ(t) = (A + Dũ) x_δ(t) + [ b − D(A + Dũ)^{−1} bũ ] u_δ(t)
y_δ(t) = C x_δ(t)
Solution 2.12 For the given nominal input, nominal output, and nominal initial state, the nominal solution satisfies

x̃̇(t) = [ 1 ; x̃_1(t) − x̃_3(t) ; x̃_2(t) − 2x̃_3(t) ] ,  x̃(0) = [0 ; −3 ; −2]
1 = x̃_2(t) − 2x̃_3(t)

Integrating for x̃_1(t) and then x̃_3(t) easily gives the nominal solution x̃_1(t) = t, x̃_2(t) = 2t − 3, and x̃_3(t) = t − 2. The corresponding linearized state equation is specified by

A = [[0, 0, 0], [1, 0, −1], [0, 1, −2]] ,  B(t) = [0 ; t ; 0] ,  C = [0, 1, −2]

It is unusual that the nominal input and nominal output are constants, but the linearization is time varying.
Solution 2.14 Compute

ż(t) = ẋ(t) − q̇(t) = A x(t) + B u(t) + A^{−1} B u̇(t)
     = A x(t) − A[ −A^{−1} B u(t) ] + A^{−1} B u̇(t)
     = A z(t) + A^{−1} B u̇(t)

If at any value t_a > 0 we have x(t_a) = q(t_a), that is z(t_a) = 0, and u̇(t) = 0 for t ≥ t_a, that is u(t) = u(t_a) for t ≥ t_a, then z(t) = 0 for t ≥ t_a. Thus x(t) = q(t_a) for t ≥ t_a, and q(t) represents what could be called an 'instantaneous constant nominal.'
CHAPTER 3
Solution 3.2 Differentiating term k+1 of the Peano-Baker series with respect to τ, using the Leibniz rule, gives

(∂/∂τ) ∫_τ^t A(σ_1) ∫_τ^{σ_1} A(σ_2) · · · ∫_τ^{σ_k} A(σ_{k+1}) dσ_{k+1} · · · dσ_1

  = −A(τ) ∫_τ^τ A(σ_2) · · · dσ_2 + ∫_τ^t A(σ_1) (∂/∂τ)[ ∫_τ^{σ_1} A(σ_2) · · · ∫_τ^{σ_k} A(σ_{k+1}) dσ_{k+1} · · · dσ_2 ] dσ_1

  = ∫_τ^t A(σ_1) (∂/∂τ)[ ∫_τ^{σ_1} A(σ_2) · · · ∫_τ^{σ_k} A(σ_{k+1}) dσ_{k+1} · · · dσ_2 ] dσ_1

since the boundary term contains ∫_τ^τ = 0. Repeating this process k times pushes the derivative onto the innermost integral:

  = ∫_τ^t A(σ_1) ∫_τ^{σ_1} · · · ∫_τ^{σ_{k−1}} A(σ_k) (∂/∂τ)[ ∫_τ^{σ_k} A(σ_{k+1}) dσ_{k+1} ] dσ_k · · · dσ_1

  = ∫_τ^t A(σ_1) ∫_τ^{σ_1} · · · ∫_τ^{σ_{k−1}} A(σ_k) [ −A(τ) ] dσ_k · · · dσ_1

  = −[ ∫_τ^t A(σ_1) ∫_τ^{σ_1} A(σ_2) · · · ∫_τ^{σ_{k−1}} A(σ_k) dσ_k · · · dσ_1 ] A(τ)

Recognizing this as term k of the uniformly convergent series for −Φ(t, τ)A(τ) gives

(∂/∂τ) Φ(t, τ) = −Φ(t, τ) A(τ)

(Of course it is simpler to use the formula for the derivative of an inverse matrix given in Exercise 1.17.)
Solution 3.6 Writing the state equation as a pair of scalar equations, the first one is

ẋ_1(t) = [ −t / (1 + t^2) ] x_1(t)

and an easy computation gives

x_1(t) = x_1o / (1 + t^2)^{1/2}

The second scalar equation then becomes

ẋ_2(t) = [ −4t / (1 + t^2) ] x_2(t) + x_1o / (1 + t^2)^{1/2}

The complete solution formula gives, with some help from Mathematica,

x_2(t) = x_2o / (1 + t^2)^2 + [ ∫_0^t (1 + σ^2)^{3/2} dσ ] x_1o / (1 + t^2)^2
       = x_2o / (1 + t^2)^2 + [ √(1 + t^2)(t^3/4 + 5t/8) + (3/8) sinh^{−1}(t) ] x_1o / (1 + t^2)^2

If x_1o = 1, then as t → ∞, x_2(t) → 1/4, not zero.
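The limit x_2(t) → 1/4 is easy to confirm by direct integration; a sketch with a hand-rolled fixed-step RK4 integrator (assuming numpy, and the scalar equations as read above):

```python
import numpy as np

def f(t, x):
    x1, x2 = x
    return np.array([-t / (1 + t ** 2) * x1,
                     -4 * t / (1 + t ** 2) * x2 + x1])

x = np.array([1.0, 0.0])     # x_1o = 1, x_2o = 0
t, h, T = 0.0, 0.01, 50.0
while t < T - 1e-12:
    # classical fourth-order Runge-Kutta step
    k1 = f(t, x)
    k2 = f(t + h / 2, x + h / 2 * k1)
    k3 = f(t + h / 2, x + h / 2 * k2)
    k4 = f(t + h, x + h * k3)
    x = x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += h

x2_final = x[1]              # should be near 1/4 for large t
```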
Solution 3.7 From the hint, letting

r(t) = ∫_{t_o}^t v(σ)φ(σ) dσ

we have ṙ(t) = v(t)φ(t), and

φ(t) ≤ ψ(t) + r(t)   (*)

Multiplying (*) through by the nonnegative v(t) gives

v(t)φ(t) ≤ v(t)ψ(t) + v(t)r(t)

or

ṙ(t) − v(t)r(t) ≤ v(t)ψ(t)

Multiply both sides by the positive quantity exp[ −∫_{t_o}^t v(τ) dτ ] to obtain

(d/dt) [ r(t) exp( −∫_{t_o}^t v(τ) dτ ) ] ≤ v(t)ψ(t) exp( −∫_{t_o}^t v(τ) dτ )

Integrating both sides from t_o to t, and using r(t_o) = 0, gives

r(t) exp( −∫_{t_o}^t v(τ) dτ ) ≤ ∫_{t_o}^t v(σ)ψ(σ) exp( −∫_{t_o}^σ v(τ) dτ ) dσ

Multiplying through by the positive quantity
exp( ∫_{t_o}^t v(τ) dτ )

gives

r(t) ≤ ∫_{t_o}^t v(σ)ψ(σ) exp( ∫_σ^t v(τ) dτ ) dσ

and using (*) yields the desired inequality.
Solution 3.10 Multiply the state equation by 2z^T(t) to obtain

2z^T(t)ż(t) = (d/dt)||z(t)||^2 = Σ_{i=1}^{n} Σ_{j=1}^{n} 2 z_i(t) a_ij(t) z_j(t)
            ≤ Σ_{i=1}^{n} Σ_{j=1}^{n} 2 |a_ij(t)| |z_i(t)| |z_j(t)| ,  t ≥ t_o

At each t ≥ t_o let

a(t) = 2n^2 max_{1 ≤ i,j ≤ n} |a_ij(t)|

Note a(t) is a continuous function of t, as a quick sample sketch indicates. Then, since |z_i(t)| ≤ ||z(t)||,

(d/dt)||z(t)||^2 ≤ a(t)||z(t)||^2 ,  t ≥ t_o

Multiplying through by the positive quantity exp( −∫_{t_o}^t a(σ) dσ ) gives

(d/dt) [ exp( −∫_{t_o}^t a(σ) dσ ) ||z(t)||^2 ] ≤ 0 ,  t ≥ t_o

Integrating both sides from t_o to t and using z(t_o) = 0 gives

||z(t)||^2 ≤ 0 ,  t ≥ t_o

which implies z(t) = 0 for t ≥ t_o.
Solution 3.11 The vector function x(t) satisfies the given state equation if and only if it satisfies

x(t) = x_o + ∫_{t_o}^t A(σ)x(σ) dσ + ∫_{t_o}^t ∫_{t_o}^τ E(τ, σ)x(σ) dσ dτ + ∫_{t_o}^t B(σ)u(σ) dσ

Assuming there are two solutions, their difference z(t) satisfies

z(t) = ∫_{t_o}^t A(σ)z(σ) dσ + ∫_{t_o}^t ∫_{t_o}^τ E(τ, σ)z(σ) dσ dτ

Interchanging the order of integration in the double integral (Dirichlet's formula) gives
z(t) = ∫_{t_o}^t A(σ)z(σ) dσ + ∫_{t_o}^t [ ∫_σ^t E(τ, σ) dτ ] z(σ) dσ
     = ∫_{t_o}^t [ A(σ) + ∫_σ^t E(τ, σ) dτ ] z(σ) dσ
     =: ∫_{t_o}^t Â(t, σ) z(σ) dσ

Thus

||z(t)|| = || ∫_{t_o}^t Â(t, σ) z(σ) dσ || ≤ ∫_{t_o}^t ||Â(t, σ)|| ||z(σ)|| dσ

By continuity, given T > 0 there exists a finite constant α such that ||Â(t, σ)|| ≤ α for t_o ≤ σ ≤ t ≤ t_o + T. Thus

||z(t)|| ≤ ∫_{t_o}^t α ||z(σ)|| dσ ,  t ∈ [t_o, t_o + T]

and the Gronwall-Bellman inequality gives ||z(t)|| = 0 for t ∈ [t_o, t_o + T], implying that there can be no more than one solution.
Solution 3.13 From the Peano-Baker series,

Φ(t, τ) − [ I + ∫_τ^t A(σ_1) dσ_1 + . . . + ∫_τ^t A(σ_1) ∫_τ^{σ_1} · · · ∫_τ^{σ_{k−1}} A(σ_k) dσ_k · · · dσ_1 ]
  = Σ_{j=k+1}^{∞} ∫_τ^t A(σ_1) ∫_τ^{σ_1} · · · ∫_τ^{σ_{j−1}} A(σ_j) dσ_j · · · dσ_1

For any fixed T > 0 there is a finite constant α such that ||A(t)|| ≤ α for t ∈ [−T, T], by continuity. Therefore

|| Σ_{j=k+1}^{∞} ∫_τ^t A(σ_1) ∫_τ^{σ_1} · · · ∫_τ^{σ_{j−1}} A(σ_j) dσ_j · · · dσ_1 ||
  ≤ Σ_{j=k+1}^{∞} | ∫_τ^t ||A(σ_1)|| ∫_τ^{σ_1} · · · ∫_τ^{σ_{j−1}} ||A(σ_j)|| dσ_j · · · dσ_1 |
  ≤ Σ_{j=k+1}^{∞} α^j | ∫_τ^t · · · ∫_τ^{σ_{j−1}} 1 dσ_j · · · dσ_1 |
  ≤ Σ_{j=k+1}^{∞} α^j |t − τ|^j / j!
  ≤ Σ_{j=k+1}^{∞} (α2T)^j / j! ,  t, τ ∈ [−T, T]

We need to show that given ε > 0 there exists K such that
Σ_{j=K+1}^{∞} (α2T)^j / j! < ε   (*)

If k > α2T, then

Σ_{j=k+1}^{∞} (α2T)^j / j! ≤ [ (α2T)^{k+1} / (k+1)! ] · 1 / (1 − α2T/k) = (α2T)^{k+1} / [ (k−1)!(k+1)(k − α2T) ]

Because of the factorial in the denominator, given ε > 0 there exists a K > α2T such that (*) holds.
Solution 3.15 Writing the complete solution of the state equation at t_f, we need to satisfy

H_o x_o + H_f [ Φ(t_f, t_o) x_o + ∫_{t_o}^{t_f} Φ(t_f, σ) f(σ) dσ ] = h   (+)

Thus there exists a solution that satisfies the boundary conditions if and only if

h − H_f ∫_{t_o}^{t_f} Φ(t_f, σ) f(σ) dσ ∈ Im[ H_o + H_f Φ(t_f, t_o) ]

There exists a unique solution that satisfies the boundary conditions if H_o + H_f Φ(t_f, t_o) is invertible. To compute a solution x(t) satisfying the boundary conditions:
(1) Compute Φ(t, t_o) for t ∈ [t_o, t_f]
(2) Compute H_o + H_f Φ(t_f, t_o)
(3) Compute ∫_{t_o}^{t_f} Φ(t_f, σ) f(σ) dσ
(4) Solve (+) for x_o
(5) Set x(t) = Φ(t, t_o) x_o + ∫_{t_o}^t Φ(t, σ) f(σ) dσ ,  t ∈ [t_o, t_f]
CHAPTER 4
Solution 4.1 An easy way to compute A(t) is to use A(t) = Φ̇(t, 0)Φ(0, t). This gives

A(t) = [[−2t, −1], [1, −2t]]

This A(t) commutes with its integral, so we can write Φ(t, τ) as the matrix exponential

Φ(t, τ) = exp [ ∫_τ^t A(σ) dσ ] = exp [[ −(t^2 − τ^2), −(t − τ) ], [ t − τ, −(t^2 − τ^2) ]]
Solution 4.4 A linear state equation corresponding to the n th-order differential equation is

ẋ(t) =
[    0        1        0    . . .      0      ]
[    0        0        1    . . .      0      ]
[    .        .        .               .      ]
[    0        0        0    . . .      1      ]
[ −a_0(t)  −a_1(t)  −a_2(t) . . . −a_{n−1}(t) ]  x(t)

The corresponding adjoint state equation ż(t) = −A^T(t)z(t) is

ż(t) =
[  0   0  . . .  0   a_0(t)     ]
[ −1   0  . . .  0   a_1(t)     ]
[  0  −1  . . .  0   a_2(t)     ]
[  .   .         .   .          ]
[  0   0  . . . −1   a_{n−1}(t) ]  z(t)

To put this in the form of an n th-order differential equation, start with

ż_n(t) = −z_{n−1}(t) + a_{n−1}(t) z_n(t)
ż_{n−1}(t) = −z_{n−2}(t) + a_{n−2}(t) z_n(t)

These give

z̈_n(t) = −ż_{n−1}(t) + (d/dt)[ a_{n−1}(t) z_n(t) ]
        = z_{n−2}(t) − a_{n−2}(t) z_n(t) + (d/dt)[ a_{n−1}(t) z_n(t) ]

Next,
ż_{n−2}(t) = −z_{n−3}(t) + a_{n−3}(t) z_n(t)

gives

(d^3/dt^3) z_n(t) = ż_{n−2}(t) − (d/dt)[ a_{n−2}(t) z_n(t) ] + (d^2/dt^2)[ a_{n−1}(t) z_n(t) ]
                  = −z_{n−3}(t) + a_{n−3}(t) z_n(t) − (d/dt)[ a_{n−2}(t) z_n(t) ] + (d^2/dt^2)[ a_{n−1}(t) z_n(t) ]

Continuing gives the n th-order differential equation

(d^n/dt^n) z_n(t) = (d^{n−1}/dt^{n−1})[ a_{n−1}(t) z_n(t) ] − (d^{n−2}/dt^{n−2})[ a_{n−2}(t) z_n(t) ]
                    + . . . + (−1)^n (d/dt)[ a_1(t) z_n(t) ] + (−1)^{n+1} a_0(t) z_n(t)
Solution 4.6 For the first matrix differential equation, write the transpose of the equation as (transpose and differentiation commute)

Ẋ^T(t) = A^T(t) X^T(t) ,  X^T(t_o) = X_o^T

This has the unique solution X^T(t) = Φ_{A^T(t)}(t, t_o) X_o^T, so that

X(t) = X_o Φ^T_{A^T(t)}(t, t_o)

In the second matrix differential equation, let Φ_k(t, τ) be the transition matrix for A_k(t), k = 1, 2. Then it is easy to verify (Leibniz rule) that a solution is

X(t) = Φ_1(t, t_o) X_o Φ_2^T(t, t_o) + ∫_{t_o}^t Φ_1(t, σ) F(σ) Φ_2^T(t, σ) dσ

Or, one can generate this expression by using the obvious integrating factors on the left and right sides of the differential equation. (To show this is the unique solution, show that the difference Z(t) between any two solutions satisfies Ż(t) = A_1(t)Z(t) + Z(t)A_2^T(t), with Z(t_o) = 0. Integrate both sides and apply the Gronwall-Bellman inequality to show Z(t) is identically zero.)
Solution 4.9 Clearly A(t) commutes with its integral. Thus we compute

exp( [[0, 1], [−1, 0]] τ )

and then replace τ by ∫_0^t a(σ) dσ. From the power series for the exponential,

exp( [[0, 1], [−1, 0]] τ ) = Σ_{k=0}^{∞} (1/k!) [[0, 1], [−1, 0]]^k τ^k
  = Σ_{k=0}^{∞} (1/(2k)!) [[0, 1], [−1, 0]]^{2k} τ^{2k} + Σ_{k=0}^{∞} (1/(2k+1)!) [[0, 1], [−1, 0]]^{2k+1} τ^{2k+1}
  = Σ_{k=0}^{∞} (1/(2k)!) [[(−1)^k, 0], [0, (−1)^k]] τ^{2k} + Σ_{k=0}^{∞} (1/(2k+1)!) [[0, (−1)^k], [(−1)^{k+1}, 0]] τ^{2k+1}
  = [[cos τ, 0], [0, cos τ]] + [[0, sin τ], [−sin τ, 0]]
  = [[cos τ, sin τ], [−sin τ, cos τ]]

Replacing τ as noted above gives Φ(t, 0).
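The closed form can be checked by integrating the matrix equation Φ̇ = A(t)Φ directly for a concrete scalar function; here a(t) = cos t is an assumed illustrative choice (not from the text), so the replacement angle is g = ∫_0^t cos = sin t.

```python
import numpy as np

J = np.array([[0.0, 1.0], [-1.0, 0.0]])   # the constant direction matrix

def rhs(t, P):
    # d/dt Phi = a(t) J Phi with a(t) = cos t (assumed example)
    return np.cos(t) * (J @ P)

P = np.eye(2)
t, h, T = 0.0, 0.001, 2.0
while t < T - 1e-12:
    k1 = rhs(t, P)
    k2 = rhs(t + h / 2, P + h / 2 * k1)
    k3 = rhs(t + h / 2, P + h / 2 * k2)
    k4 = rhs(t + h, P + h * k3)
    P = P + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += h

g = np.sin(T)   # integral of a(.) over [0, T]
closed = np.array([[np.cos(g), np.sin(g)], [-np.sin(g), np.cos(g)]])
err = np.max(np.abs(P - closed))
```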
Solution 4.10 For sufficiency, suppose Φ_x(t, 0) = T(t)e^{Rt}. Then T(0) = I and T(t) is continuously differentiable. Let z(t) = T^{−1}(t)x(t) so that

Φ_z(t, 0) = T^{−1}(t)Φ_x(t, 0)T(0) = T^{−1}(t)T(t)e^{Rt} = e^{Rt}

Thus ż(t) = Rz(t).
For necessity, suppose P(t) is a variable change that gives

ż(t) = R_a z(t)

Then

Φ_z(t, 0) = e^{R_a t} = P^{−1}(t)Φ_x(t, 0)P(0)

that is,

Φ_x(t, 0) = P(t)e^{R_a t}P^{−1}(0)

Let T(t) = P(t)P^{−1}(0) and R = P(0)R_aP^{−1}(0). Then

Φ_x(t, 0) = T(t)P(0)e^{P^{−1}(0)RP(0)t}P^{−1}(0) = T(t)P(0)[ P^{−1}(0)e^{Rt}P(0) ]P^{−1}(0) = T(t)e^{Rt}
Solution 4.11 Suppose

Φ(t, 0) = e^{A_1 t} e^{A_2 t}

Then

Φ̇(t, 0) = (d/dt)[ e^{A_1 t} e^{A_2 t} ] = e^{A_1 t}(A_1 + A_2)e^{A_2 t} = e^{A_1 t}(A_1 + A_2)e^{−A_1 t} · e^{A_1 t} e^{A_2 t}

This implies A(t) = e^{A_1 t}(A_1 + A_2)e^{−A_1 t}. Therefore A(0) = A_1 + A_2 is clear, and

Ȧ(t) = A_1 e^{A_1 t}(A_1 + A_2)e^{−A_1 t} + e^{A_1 t}(A_1 + A_2)e^{−A_1 t}(−A_1) = A_1 A(t) − A(t)A_1

Conversely, assume A_1 and A_2 are such that

Ȧ(t) = A_1 A(t) − A(t)A_1 ,  A(0) = A_1 + A_2

This matrix differential equation has a unique solution (by rewriting it as a linear vector differential equation), and from the calculation above this solution is

A(t) = e^{A_1 t}(A_1 + A_2)e^{−A_1 t}

Since
(d/dt)[ e^{A_1 t} e^{A_2 t} ] = A(t) e^{A_1 t} e^{A_2 t} ,  e^{A_1 0} e^{A_2 0} = I

we have that Φ(t, 0) = e^{A_1 t} e^{A_2 t}.
Solution 4.13 Writing

(∂/∂t) Φ_A(t, τ) = A(t)Φ_A(t, τ) ,  Φ(τ, τ) = I

in partitioned form shows that

(∂/∂t) Φ_21(t, τ) = A_22(t)Φ_21(t, τ) ,  Φ_21(τ, τ) = 0

Thus Φ_21(t, τ) is identically zero. But then

(∂/∂t) Φ_ii(t, τ) = A_ii(t)Φ_ii(t, τ) ,  Φ_ii(τ, τ) = I

for i = 1, 2, and

(∂/∂t) Φ_12(t, τ) = A_11(t)Φ_12(t, τ) + A_12(t)Φ_22(t, τ) ,  Φ_12(τ, τ) = 0

Using Exercise 4.6 with F(t) = A_12(t)Φ_22(t, τ) gives

Φ_12(t, τ) = ∫_τ^t Φ_11(t, σ) A_12(σ) Φ_22(σ, τ) dσ
Solution 4.17 We need to compute a continuously-differentiable, invertible P(t) such that

[[t, 1], [1, t]] = P^{−1}(t) [[0, 1], [2 − t^2, 2t]] P(t) − P^{−1}(t)Ṗ(t)

Multiplying on the left by P(t), the result can be written as a dimension-4 linear state equation. Choosing the initial condition corresponding to P(0) = I, some clever guessing gives

P(t) = [[1, 0], [t, 1]]
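The guessed variable change is easy to verify at a sample time; the matrices below follow the readings of this solution (reconstructed from a damaged scan).

```python
import numpy as np

t = 1.7   # arbitrary sample time
P = np.array([[1.0, 0.0], [t, 1.0]])
Pdot = np.array([[0.0, 0.0], [1.0, 0.0]])           # d/dt of P(t)
Abar = np.array([[0.0, 1.0], [2 - t ** 2, 2 * t]])  # original coefficient matrix

Pinv = np.linalg.inv(P)
result = Pinv @ Abar @ P - Pinv @ Pdot
target = np.array([[t, 1.0], [1.0, t]])
err = np.max(np.abs(result - target))
```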
Solution 4.23 Using the formula for the derivative of an inverse matrix given in Exercise 1.17,

(∂/∂t) Φ_A(−τ, −t) = (∂/∂t) Φ_A^{−1}(−t, −τ) = −Φ_A^{−1}(−t, −τ) [ (∂/∂t) Φ_A(−t, −τ) ] Φ_A^{−1}(−t, −τ)
  = −Φ_A^{−1}(−t, −τ) [ −(∂/∂(−t)) Φ_A(−t, −τ) ] Φ_A^{−1}(−t, −τ)
  = −Φ_A^{−1}(−t, −τ) [ −A(−t)Φ_A(−t, −τ) ] Φ_A^{−1}(−t, −τ)
  = Φ_A^{−1}(−t, −τ) A(−t) = Φ_A(−τ, −t) A(−t)

Transposing gives
(∂/∂t) Φ_A^T(−τ, −t) = A^T(−t) Φ_A^T(−τ, −t)

Since Φ(−τ, −τ) = I, we have F(t) = A^T(−t).
Or we can use the result of Exercise 3.2 to compute:

(∂/∂t) Φ_A(−τ, −t) = −(∂/∂(−t)) Φ_A(−τ, −t) = Φ_A(−τ, −t) A(−t)

This implies

(∂/∂t) Φ_A^T(−τ, −t) = A^T(−t) Φ_A^T(−τ, −t)

Since Φ(−τ, −τ) = I, we have F(t) = A^T(−t).
Solution 4.25 We can write

Φ(t + σ, σ) = I + ∫_σ^{t+σ} A(τ) dτ + Σ_{k=2}^{∞} ∫_σ^{t+σ} A(τ_1) ∫_σ^{τ_1} A(τ_2) · · · ∫_σ^{τ_{k−1}} A(τ_k) dτ_k · · · dτ_1

and

e^{Ā_t(σ)t} = I + Ā_t(σ)t + Σ_{k=2}^{∞} (1/k!) Ā_t^k(σ) t^k

Then

R(t, σ) = Φ(t + σ, σ) − e^{Ā_t(σ)t}
        = Σ_{k=2}^{∞} [ ∫_σ^{t+σ} A(τ_1) ∫_σ^{τ_1} A(τ_2) · · · ∫_σ^{τ_{k−1}} A(τ_k) dτ_k · · · dτ_1 − (1/k!) Ā_t^k(σ) t^k ]

From ||A(t)|| ≤ α and the triangle inequality,

||R(t, σ)|| ≤ 2 Σ_{k=2}^{∞} α^k t^k / k! = α^2 t^2 Σ_{k=2}^{∞} (2/k!) α^{k−2} t^{k−2}

Using

2/k! ≤ 1/(k−2)! ,  k ≥ 2

gives

||R(t, σ)|| ≤ α^2 t^2 Σ_{k=2}^{∞} [ 1/(k−2)! ] α^{k−2} t^{k−2} = α^2 t^2 e^{αt}
CHAPTER 5
Solution 5.3 Using the series definition, which involves talent in series recognition,

A^{2k+1} = [[0, 1], [1, 0]] ,  A^{2k} = [[1, 0], [0, 1]] ,  k = 0, 1, . . .

gives

e^{At} = I + [[0, t], [t, 0]] + (1/2!)[[t^2, 0], [0, t^2]] + (1/3!)[[0, t^3], [t^3, 0]] + . . .
       = [[ (e^t + e^{−t})/2, (e^t − e^{−t})/2 ], [ (e^t − e^{−t})/2, (e^t + e^{−t})/2 ]]
       = [[cosh t, sinh t], [sinh t, cosh t]]

Using the Laplace transform method,

(sI − A)^{−1} = [[s, −1], [−1, s]]^{−1} = [[ s/(s^2 − 1), 1/(s^2 − 1) ], [ 1/(s^2 − 1), s/(s^2 − 1) ]]

which gives again

e^{At} = [[cosh t, sinh t], [sinh t, cosh t]]

Using the diagonalization method, computing eigenvectors for A and letting

P = [[1, 1], [1, −1]]

gives

P^{−1}AP = [[1, 0], [0, −1]]

Then

e^{At} = P [[e^t, 0], [0, e^{−t}]] P^{−1} = [[cosh t, sinh t], [sinh t, cosh t]]
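All three methods can be cross-checked against a truncated power series for the exponential; a minimal sketch assuming numpy:

```python
import numpy as np

A = np.array([[0.0, 1.0], [1.0, 0.0]])
t = 0.7

# e^{At} by truncated power series: I + At + (At)^2/2! + ...
expAt = np.eye(2)
term = np.eye(2)
for k in range(1, 30):
    term = term @ (A * t) / k
    expAt = expAt + term

closed = np.array([[np.cosh(t), np.sinh(t)],
                   [np.sinh(t), np.cosh(t)]])
err = np.max(np.abs(expAt - closed))
```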
Solution 5.4 Since

A(t) = [[t, 1], [1, t]]

commutes with its integral,
Φ(t, 0) = e^{∫_0^t A(σ) dσ} = exp [[t^2/2, t], [t, t^2/2]]

And since

[[t^2/2, 0], [0, t^2/2]] ,  [[0, t], [t, 0]]

commute,

Φ(t, 0) = exp( [[1, 0], [0, 1]] t^2/2 ) · exp( [[0, 1], [1, 0]] t )

Using Exercise 5.3 gives

Φ(t, 0) = [[e^{t^2/2}, 0], [0, e^{t^2/2}]] [[cosh t, sinh t], [sinh t, cosh t]]
        = [[e^{t^2/2} cosh t, e^{t^2/2} sinh t], [e^{t^2/2} sinh t, e^{t^2/2} cosh t]]
Solution 5.7 To verify that

A ∫_0^t e^{Aσ} dσ = e^{At} − I

note that the two sides agree at t = 0, and the derivatives of the two sides with respect to t are identical.
If A is invertible and all its eigenvalues have negative real parts, then lim_{t→∞} e^{At} = 0. This gives

A ∫_0^∞ e^{Aσ} dσ = −I

that is,

A^{−1} = −∫_0^∞ e^{Aσ} dσ
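The integral representation of the inverse can be confirmed numerically for a stable matrix; a sketch using a series-evaluated exponential and the trapezoid rule on a long finite interval (the tail beyond it is negligible for this example).

```python
import numpy as np

A = np.array([[-1.0, 1.0], [0.0, -2.0]])   # eigenvalues -1, -2 (stable)
h = 0.01

# e^{Ah} once, by power series; e^{Akh} then follows by repeated multiplication
Eh = np.eye(2)
term = np.eye(2)
for k in range(1, 25):
    term = term @ (A * h) / k
    Eh = Eh + term

integral = np.zeros((2, 2))
Ekh = np.eye(2)                 # e^{A*0}
for _ in range(4000):           # covers sigma in [0, 40]
    nxt = Ekh @ Eh
    integral += h * (Ekh + nxt) / 2   # trapezoid panel
    Ekh = nxt

err = np.max(np.abs(-integral - np.linalg.inv(A)))
```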
Solution 5.9 Evaluating the given expression at t = 0 gives x(0) = 0. Using the Leibniz rule to differentiate the expression gives

ẋ(t) = (d/dt) ∫_0^t e^{A(t−σ)} e^{D ∫_σ^t u(τ) dτ} b u(σ) dσ
     = b u(t) + ∫_0^t (∂/∂t)[ e^{A(t−σ)} e^{D ∫_σ^t u(τ) dτ} b u(σ) ] dσ

Using the product rule and differentiating the power series for e^{D ∫_σ^t u(τ) dτ} gives

ẋ(t) = b u(t) + ∫_0^t [ A e^{A(t−σ)} e^{D ∫_σ^t u(τ) dτ} b u(σ) + e^{A(t−σ)} D u(t) e^{D ∫_σ^t u(τ) dτ} b u(σ) ] dσ

If we assume that AD = DA, then e^{A(t−σ)} D = D e^{A(t−σ)} and
ẋ(t) = b u(t) + A ∫_0^t e^{A(t−σ)} e^{D ∫_σ^t u(τ) dτ} b u(σ) dσ + D u(t) ∫_0^t e^{A(t−σ)} e^{D ∫_σ^t u(τ) dτ} b u(σ) dσ
     = A x(t) + D x(t) u(t) + b u(t)
Solution 5.12 We will show how to define β_0(t), . . . , β_{n−1}(t) such that

Σ_{k=0}^{n−1} β̇_k(t) P_k = Σ_{k=0}^{n−1} β_k(t) A P_k ,  Σ_{k=0}^{n−1} β_k(0) P_k = I   (*)

which then gives the desired expression by Property 5.1. From the definitions,

P_1 = A P_0 − λ_1 I ,  P_2 = A P_1 − λ_2 P_1 , . . . , P_{n−1} = A P_{n−2} − λ_{n−1} P_{n−2}

Also P_n = (A − λ_n I)P_{n−1} = 0 by the Cayley-Hamilton theorem, so A P_{n−1} = λ_n P_{n−1}. Now we equate coefficients of like P_k's in (*), rewritten as

Σ_{k=0}^{n−1} β̇_k(t) P_k = Σ_{k=0}^{n−1} β_k(t) [ P_{k+1} + λ_{k+1} P_k ]

to get equations for the desired β_k(t)'s:

P_0 :  β̇_0(t) = λ_1 β_0(t)
P_1 :  β̇_1(t) = β_0(t) + λ_2 β_1(t)
  .
  .
P_{n−1} :  β̇_{n−1}(t) = β_{n−2}(t) + λ_n β_{n−1}(t)

that is,

[ β̇_0(t) ; β̇_1(t) ; . . . ; β̇_{n−1}(t) ] =
[ λ_1   0  . . .  0   0  ]
[  1   λ_2 . . .  0   0  ]
[  .    .         .   .  ]
[  0    0  . . .  1  λ_n ]  [ β_0(t) ; β_1(t) ; . . . ; β_{n−1}(t) ]

With the initial condition provided by β_0(0) = 1, β_k(0) = 0, k = 1, . . . , n−1, the analytic solution of this state equation provides a solution for (*). (The resulting expression for e^{At} is sometimes called Putzer's formula.)
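Putzer's formula is easy to exercise on a small example; a sketch for a 2x2 matrix with distinct real eigenvalues, where the β equations solve in closed form (the specific matrix is an illustration, not from the text).

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # eigenvalues -1 and -2
l1, l2 = -1.0, -2.0
t = 0.5

# beta_0' = l1 beta_0, beta_0(0) = 1
beta0 = np.exp(l1 * t)
# beta_1' = beta_0 + l2 beta_1, beta_1(0) = 0; for l1 != l2 this gives
beta1 = (np.exp(l1 * t) - np.exp(l2 * t)) / (l1 - l2)

P0 = np.eye(2)
P1 = A - l1 * P0
putzer = beta0 * P0 + beta1 * P1

# reference exponential via truncated power series
ref = np.eye(2)
term = np.eye(2)
for k in range(1, 30):
    term = term @ (A * t) / k
    ref = ref + term

err = np.max(np.abs(putzer - ref))
```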
Solution 5.17 Write, by Property 5.11,

Φ(t, t_o) = P^{-1}(t) e^{R(t-t_o)} P(t_o)

where P(t) is continuous, T-periodic, and invertible at each t. Let

S = P^{-1}(t_o) R P(t_o) ,  Q(t, t_o) = P^{-1}(t) P(t_o)

Then Q(t, t_o) is continuous and invertible at each t, and satisfies

Q(t+T, t_o) = P^{-1}(t+T) P(t_o) = P^{-1}(t) P(t_o) = Q(t, t_o)

with Q(t_o, t_o) = I. Also,
Φ(t, t_o) = P^{-1}(t) e^{P(t_o) S P^{-1}(t_o) (t-t_o)} P(t_o) = P^{-1}(t) P(t_o) e^{S(t-t_o)} P^{-1}(t_o) P(t_o)

 = Q(t, t_o) e^{S(t-t_o)}
Solution 5.19 From the Floquet decomposition and Property 4.9,

det Φ(T, 0) = det e^{RT} = e^{∫_0^T tr[A(σ)] dσ}

Because the integral in the exponent is positive, the product of the eigenvalues of Φ(T, 0) is greater than unity, which implies that at least one eigenvalue of Φ(T, 0) has magnitude greater than unity. Thus by the argument following Example 5.12 there exist unbounded solutions.
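As a numerical illustration (a sketch; the T-periodic A(t) below, with positive average trace, is an invented example, not the one from the exercise), one can integrate Φ̇ = A(t)Φ and compare det Φ(T, 0) with the trace formula:

```python
import numpy as np

# Hypothetical 2-pi-periodic example with tr A(t) = 0.1 + 0.05 sin t.
T = 2 * np.pi
def A(t):
    return np.array([[0.0, 1.0],
                     [-1.0, 0.1 + 0.05 * np.sin(t)]])

# RK4 integration of Phidot = A(t) Phi, Phi(0) = I, over one period.
steps = 20000
h = T / steps
Phi = np.eye(2)
t = 0.0
for _ in range(steps):
    k1 = A(t) @ Phi
    k2 = A(t + h/2) @ (Phi + h/2 * k1)
    k3 = A(t + h/2) @ (Phi + h/2 * k2)
    k4 = A(t + h) @ (Phi + h * k3)
    Phi = Phi + h/6 * (k1 + 2*k2 + 2*k3 + k4)
    t += h

pred = np.exp(0.1 * T)        # exp of the trace integral (the sin term averages out)
print(np.linalg.det(Phi), pred)                  # these agree
print(np.max(np.abs(np.linalg.eigvals(Phi))))    # > 1: an unstable Floquet multiplier
```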
Solution 5.20 Following the hint, define a real matrix S by

e^{S2T} = Φ²(T, 0)

and set

Q(t) = Φ(t, 0) e^{-St}

Clearly Q(t) is real and continuous, and

Q(t+2T) = Φ(t+2T, 0) e^{-S(t+2T)} = Φ(t+2T, T) Φ(T, 0) e^{-S2T} e^{-St}

 = Φ(t+T, 0) Φ(T, 0) e^{-S2T} e^{-St} = Φ(t+T, T) Φ²(T, 0) e^{-S2T} e^{-St}

 = Φ(t+T, T) e^{-St} = Φ(t, 0) e^{-St}

 = Q(t)

That is, Q(t) is 2T-periodic. (For a proof of the hint, see Chapter 8 of D.L. Lukes, Differential Equations: Classical to Controlled, Academic Press, 1982.)
Solution 5.22 The solution will be T-periodic for initial state x_o if and only if x_o satisfies (see text equation (32))

[ Φ^{-1}(t_o+T, t_o) − I ] x_o = ∫_{t_o}^{t_o+T} Φ(t_o, σ) f(σ) dσ

This linear equation has a solution for x_o if and only if

z_oᵀ ∫_{t_o}^{t_o+T} Φ(t_o, σ) f(σ) dσ = 0   (*)

for every nonzero vector z_o that satisfies

[ Φ^{-1}(t_o+T, t_o) − I ]ᵀ z_o = 0   (**)

The solution of the adjoint state equation can be written as

z(t) = [ Φ^{-1}(t, t_o) ]ᵀ z_o

Then by Lemma 5.14, (**) is precisely the condition that z(t) be T-periodic. Thus writing (*) in the form
0 = ∫_{t_o}^{t_o+T} z_oᵀ Φ(t_o, σ) f(σ) dσ = ∫_{t_o}^{t_o+T} zᵀ(σ) f(σ) dσ

completes the proof.
Solution 5.24 Note A = −Aᵀ, and from Example 5.9,

e^{At} = [ cos t  −sin t ]
         [ sin t   cos t ]

Therefore all solutions of the adjoint equation are periodic, with period of the form 2πk, where k is a positive integer. The forcing term has period T = 2π/ω, where we assume ω > 0. The rest of the analysis breaks down into three cases.
Case 1: If ω ≠ 1, 1/2, 1/3, …, then the adjoint equation has no T-periodic solution, so the condition (Exercise 5.22)

∫_0^T zᵀ(σ) f(σ) dσ = 0   (+)

holds vacuously. Thus there will exist corresponding periodic solutions.
Case 2: If ω = 1, then T = 2π and

∫_0^T zᵀ(σ) f(σ) dσ = ∫_0^T z_oᵀ e^{Aσ} f(σ) dσ = −z_{o1} ∫_0^T sin²σ dσ + z_{o2} ∫_0^T cos σ sin σ dσ = −π z_{o1}

which is nonzero whenever z_{o1} ≠ 0, so there is no periodic solution.
Case 3: If ω = 1/k, k = 2, 3, …, then since

∫_0^T cos σ sin(σ/k) dσ = ∫_0^T sin σ sin(σ/k) dσ = 0

the condition (+) holds, and there exist periodic solutions.
In summary, there exist periodic solutions for all ω > 0 except ω = 1.
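A quick simulation is consistent with this conclusion. The sketch below assumes A = [0 −1; 1 0] and forcing f(t) = (0, sin ωt)ᵀ, matching the skew-symmetric structure used above; treat these as hypothetical example data:

```python
import numpy as np

# Forced oscillator xdot = A x + (0, sin(omega t)): resonance at omega = 1.
A = np.array([[0.0, -1.0], [1.0, 0.0]])

def peak_norm(omega, T=200.0, steps=20000):
    """Max ||x(t)|| on [0, T] for xdot = A x + (0, sin(omega t)), x(0) = 0."""
    h = T / steps
    x, t, peak = np.zeros(2), 0.0, 0.0
    f = lambda t, x: A @ x + np.array([0.0, np.sin(omega * t)])
    for _ in range(steps):                      # RK4
        k1 = f(t, x); k2 = f(t + h/2, x + h/2 * k1)
        k3 = f(t + h/2, x + h/2 * k2); k4 = f(t + h, x + h * k3)
        x, t = x + h/6 * (k1 + 2*k2 + 2*k3 + k4), t + h
        peak = max(peak, np.linalg.norm(x))
    return peak

print(peak_norm(1.0))   # resonant case omega = 1: grows roughly like t/2
print(peak_norm(2.0))   # omega = 2: stays bounded
```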
CHAPTER 6
Solution 6.1 If the state equation is uniformly stable, then there exists a positive γ such that for any t_o and x_o the corresponding solution satisfies

‖x(t)‖ ≤ γ ‖x_o‖ , t ≥ t_o

Given a positive ε, take δ = ε/γ. Then, regardless of t_o, ‖x_o‖ ≤ δ implies

‖x(t)‖ ≤ γδ = ε , t ≥ t_o

Conversely, given a positive ε suppose a positive δ is such that, regardless of t_o, ‖x_o‖ ≤ δ implies ‖x(t)‖ ≤ ε, t ≥ t_o. For any t_a ≥ t_o let x_a be such that

‖x_a‖ = 1 , ‖Φ(t_a, t_o) x_a‖ = ‖Φ(t_a, t_o)‖

Then x_o = δ x_a satisfies ‖x_o‖ = δ, and the corresponding solution at t = t_a satisfies

‖x(t_a)‖ = ‖Φ(t_a, t_o) x_o‖ = δ ‖Φ(t_a, t_o)‖ ≤ ε

Therefore

‖Φ(t_a, t_o)‖ ≤ ε/δ

Such an x_a can be selected for any t_a, t_o such that t_a ≥ t_o. Therefore

‖Φ(t, t_o)‖ ≤ ε/δ

for all t and t_o with t ≥ t_o, and we can take γ = ε/δ to obtain

‖x(t)‖ = ‖Φ(t, t_o) x_o‖ ≤ ‖Φ(t, t_o)‖ ‖x_o‖ ≤ γ ‖x_o‖ , t ≥ t_o

This implies uniform stability.
Solution 6.4 Using the fact that A(t) commutes with its integral,

Φ(t, τ) = e^{∫_τ^t A(σ) dσ} = I + ∫_τ^t A(σ) dσ + (1/2!) [ ∫_τ^t A(σ) dσ ]² + ⋯

For any fixed τ, the entry φ_{11}(t, τ) of this series clearly grows without bound as t → ∞, and thus the state equation is not uniformly stable.
Solution 6.6 Using elementary properties of the norm,

‖Φ(t, τ)‖ = ‖ I + ∫_τ^t A(σ) dσ + ∫_τ^t A(σ_1) ∫_τ^{σ_1} A(σ_2) dσ_2 dσ_1 + ⋯ ‖

 ≤ 1 + ∫_τ^t ‖A(σ)‖ dσ + ∫_τ^t ‖A(σ_1)‖ ∫_τ^{σ_1} ‖A(σ_2)‖ dσ_2 dσ_1 + ⋯
Each iterated integral is bounded by the corresponding term of the exponential series, so

‖Φ(t, τ)‖ ≤ e^{∫_τ^t ‖A(σ)‖ dσ}

and the exponent is bounded by assumption, giving uniform stability. (Be careful of the case t < τ.)

Write λ = −η, where η > 0 by assumption, so that

|t e^{λt}| = t e^{-ηt} , t ≥ 0

A simple maximization argument (setting the derivative to zero) gives

t e^{-ηt} ≤ 1/(ηe) ≜ β , t ≥ 0

so that

|t e^{λt}| ≤ β , t ≥ 0

Using this bound we can write

|t e^{λt}| = t e^{-ηt} = t e^{-(η/2)t} e^{-(η/2)t} ≤ (2/(ηe)) e^{-(η/2)t} , t ≥ 0

Similarly,

|t² e^{λt}| = t² e^{-ηt} ≤ (2/(ηe)) t e^{-(η/2)t} = (2/(ηe)) t e^{-(η/4)t} e^{-(η/4)t} ≤ (2/(ηe)) (4/(ηe)) e^{-(η/4)t} , t ≥ 0

and continuing we get, for any j ≥ 0,

|t^j e^{λt}| ≤ (2^{j+(j-1)+⋯+1}/(ηe)^j) e^{-(η/2^j)t} , t ≥ 0

Therefore
∫_0^∞ t^j |e^{λt}| dt ≤ (2^{j+(j-1)+⋯+1}/(ηe)^j) ∫_0^∞ e^{-(η/2^j)t} dt

 = (2^{j+(j-1)+⋯+1}/(ηe)^j) · (2^j/η)

 = 2^{2j+(j-1)+⋯+1} / ( e^j |Re[λ]|^{j+1} )
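The final bound can be compared with the exact value ∫_0^∞ t^j e^{-ηt} dt = j!/η^{j+1} (a small check script; the grid of j and η values is arbitrary):

```python
import math

# The bound derived above is 2^{2j + (j-1) + ... + 1} / (e^j eta^{j+1}).
def exact(j, eta):
    return math.factorial(j) / eta ** (j + 1)

def bound(j, eta):
    expo = 2 * j + j * (j - 1) // 2          # exponent 2j + (j-1) + ... + 1
    return 2 ** expo / (math.e ** j * eta ** (j + 1))

for j in range(6):
    for eta in (0.5, 1.0, 3.0):
        assert exact(j, eta) <= bound(j, eta)
print("bound holds on all test cases")
```

Note the ratio bound/exact is independent of η, so checking a few values of η per j is only a formality.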
Solution 6.12 By Theorem 6.4 uniform stability is equivalent to existence of a finite constant γ such that ‖e^{At}‖ ≤ γ for all t ≥ 0. Writing

e^{At} = Σ_{k=1}^m Σ_{j=1}^{σ_k} W_{kj} (t^{j-1}/(j-1)!) e^{λ_k t}

where λ_1, …, λ_m are the distinct eigenvalues of A, suppose

Re[λ_k] ≤ 0 , k = 1, …, m , and Re[λ_k] = 0 implies σ_k = 1   (*)

Since t^{j-1} e^{λ_k t} is bounded on t ≥ 0 when Re[λ_k] < 0, condition (*) implies that ‖e^{At}‖ is bounded, so (*) is sufficient for uniform stability. On the other hand, if Re[λ_k] > 0 for some k, the proof of Theorem 6.2 shows that ‖e^{At}‖ grows without bound as t → ∞, so Re[λ_k] ≤ 0, k = 1, …, m, is a necessary condition. The gap between this necessary condition and the sufficient condition is illustrated by the two cases

A = [ 0  0 ; 0  0 ] ,  A = [ 0  1 ; 0  0 ]

Both satisfy the necessary condition, neither satisfies the sufficient condition, and the first case is uniformly stable while the second case is not (unbounded solutions exist, as shown by easy computation of the transition matrix). (It can be shown that a necessary and sufficient condition for uniform stability is that each eigenvalue of A has nonpositive real part and any eigenvalue of A with zero real part has algebraic multiplicity equal to its geometric multiplicity.)
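The two cases above can be checked directly; since both matrices square to zero, e^{At} = I + At exactly:

```python
import numpy as np

# The two matrices from Solution 6.12: both have only the eigenvalue 0,
# but only the first gives a bounded matrix exponential.
A1 = np.zeros((2, 2))
A2 = np.array([[0.0, 1.0], [0.0, 0.0]])

def expm_nilpotent(A, t):
    """Exact e^{At} for a matrix with A @ A = 0: the series ends after two terms."""
    return np.eye(2) + A * t

for t in (1.0, 10.0, 100.0):
    n1 = np.linalg.norm(expm_nilpotent(A1, t), 2)
    n2 = np.linalg.norm(expm_nilpotent(A2, t), 2)
    print(t, n1, n2)    # n1 stays at 1, n2 grows without bound
```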
Solution 6.14 Suppose γ, λ > 0 are such that

‖Φ(t, t_o)‖ ≤ γ e^{-λ(t-t_o)}

for all t, t_o such that t ≥ t_o. Then given any x_o, t_o, the corresponding solution at t ≥ t_o satisfies

‖x(t)‖ = ‖Φ(t, t_o) x_o‖ ≤ ‖Φ(t, t_o)‖ ‖x_o‖ ≤ γ e^{-λ(t-t_o)} ‖x_o‖

and the state equation is uniformly exponentially stable.

Now suppose the state equation is uniformly exponentially stable, so that there exist γ, λ > 0 such that

‖x(t)‖ ≤ γ e^{-λ(t-t_o)} ‖x_o‖ , t ≥ t_o

for any x_o and t_o. Given any t_o and t_a ≥ t_o, choose x_a such that

‖Φ(t_a, t_o) x_a‖ = ‖Φ(t_a, t_o)‖ , ‖x_a‖ = 1

Then with x_o = x_a the corresponding solution at t_a satisfies

‖x(t_a)‖ = ‖Φ(t_a, t_o) x_a‖ = ‖Φ(t_a, t_o)‖ ≤ γ e^{-λ(t_a-t_o)}

Since such an x_a can be selected for any t_o and t_a ≥ t_o, we have

‖Φ(t, τ)‖ ≤ γ e^{-λ(t-τ)}

for all t, τ such that t ≥ τ, and the proof is complete.
Solution 6.18 The variable change z(t) = P^{-1}(t) x(t) yields ż(t) = 0 if and only if

P^{-1}(t) A(t) P(t) − P^{-1}(t) Ṗ(t) = 0

for all t. This clearly is equivalent to Ṗ(t) = A(t) P(t), which is equivalent to Φ_A(t, τ) = P(t) P^{-1}(τ). Now, if P(t) is a Lyapunov transformation, that is, ‖P(t)‖ ≤ ρ < ∞ and |det P(t)| ≥ η > 0 for all t, then

‖Φ_A(t, τ)‖ ≤ ‖P(t)‖ ‖P^{-1}(τ)‖ ≤ ‖P(t)‖ ‖P(τ)‖^{n-1} / |det P(τ)| ≤ ρ^n / η ≜ γ

for all t and τ. Conversely, suppose ‖Φ_A(t, τ)‖ ≤ γ for all t and τ. Let P(t) = Φ_A(t, 0). Then ‖P(t)‖ ≤ γ and

‖P(t)‖ ≤ ‖P^{-1}(t)‖^{n-1} / |det P^{-1}(t)| = ‖P^{-1}(t)‖^{n-1} |det P(t)|

for all t. Using ‖P(t)‖ ≥ 1/‖P^{-1}(t)‖ gives

|det P(t)| ≥ 1 / ‖P^{-1}(t)‖^n

and since ‖P^{-1}(t)‖ = ‖Φ_A(0, t)‖ ≤ γ,

|det P(t)| ≥ 1/γ^n

Thus P(t) is a Lyapunov transformation, and clearly

P^{-1}(t) A(t) P(t) − P^{-1}(t) Ṗ(t) = 0

for all t.
CHAPTER 7
Solution 7.3 Let Â = FA, and take Q = F^{-1}, which is positive definite since F is positive definite. Then since F is symmetric,

Âᵀ Q + Q Â = Aᵀ F F^{-1} + F^{-1} F A = Aᵀ + A
η ≤ a(t) ≤ 1/(2η)

for all t. Then

2a(t) + 1 − η ≥ 2η + 1 − η = η + 1 > 1

(a(t)+1)/a(t) − η = 1 + 1/a(t) − η ≥ 1 + 2η − η = 1 + η > 1

and Q(t) − ηI ≥ 0, for all t, follows easily. Similarly, with ρ = (2η+1)/η we can show ρI − Q(t) ≥ 0 using

ρ − 2a(t) − 1 ≥ (2η+1)/η − 2/(2η) − 1 = 1

ρ − (a(t)+1)/a(t) = ρ − 1 − 1/a(t) ≥ (2η+1)/η − 1 − 1/η ≥ 1

Next consider

Aᵀ(t) Q(t) + Q(t) A(t) + Q̇(t) = [ 2ȧ(t) − 2a(t)   0                    ]
                                [ 0               −2a(t) − ȧ(t)/a²(t)  ]  ≤ −νI

This gives that for uniform exponential stability we also need existence of a small, positive constant ν such that

ν a²(t) − 2a³(t) ≤ ȧ(t) ≤ a(t) − ν/2

for all t. For example, a(t) = 1 satisfies these conditions.
Solution 7.11 Suppose that for every symmetric, positive-definite M there exists a unique, symmetric, positive-definite Q such that

Aᵀ Q + Q A + 2µ Q = −M   (*)

that is,

(A + µI)ᵀ Q + Q (A + µI) = −M   (**)

Then by the argument above Theorem 7.11 we conclude that all eigenvalues of A + µI have negative real parts. That is, if

0 = det[ λI − (A + µI) ] = det[ (λ − µ)I − A ]

then Re[λ] < 0, and this gives Re[λ − µ] < −µ. That is, every eigenvalue of A has real part strictly less than −µ.
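The equivalence of (*) and (**) is easy to check numerically by solving the shifted Lyapunov equation via vectorization (a sketch; the matrices A, µ, and M below are hypothetical data with Re λ_i(A) < −µ):

```python
import numpy as np

# Solve (A + mu I)^T Q + Q (A + mu I) = -M by the Kronecker/vec identity.
A = np.array([[-3.0, 1.0], [0.0, -2.0]])   # eigenvalues -3 and -2
mu = 1.0
M = np.eye(2)
n = 2

As = A + mu * np.eye(n)
# vec(As^T Q + Q As) = (I kron As^T + As^T kron I) vec(Q), column-stacked vec
K = np.kron(np.eye(n), As.T) + np.kron(As.T, np.eye(n))
Q = np.linalg.solve(K, -M.flatten(order="F")).reshape((n, n), order="F")

print(np.max(np.abs(As.T @ Q + Q @ As + M)))            # (**) holds
print(np.max(np.abs(A.T @ Q + Q @ A + 2*mu*Q + M)))     # (*) holds as well
print(np.linalg.eigvalsh((Q + Q.T) / 2))                # all positive: Q > 0
```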
xᵀ e^{Fᵀt} e^{Ft} x ≤ 2 (‖A‖ + µ − ε) xᵀ Q x , t ≥ 0

which gives

‖e^{Ft}‖ ≤ √( 2 (‖A‖ + µ − ε) ‖Q‖ ) , t ≥ 0

Thus the desired inequality follows from (*).
Solution 7.19 To show uniform exponential stability of A(t), write the 1,2-entry of A(t) as a(t), and let Q(t) = q(t)I, where

q(t) = 3 , t ≤ −1/2 ;  q^{1/2}(t) , −1/2 < t
CHAPTER 8
Solution 8.3 No. The matrix

A = [ −2   0 ]
    [ √8  −1 ]

has negative eigenvalues, but

A + Aᵀ = [ −4  √8 ]
         [ √8  −2 ]

has an eigenvalue at zero.
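This is easy to confirm numerically:

```python
import numpy as np

# The counterexample above: negative eigenvalues, but A + A^T is singular.
A = np.array([[-2.0, 0.0], [np.sqrt(8.0), -1.0]])

print(np.linalg.eigvals(A))            # -2 and -1: both negative
print(np.linalg.eigvalsh(A + A.T))     # one eigenvalue is (numerically) zero
```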
Solution 8.6 Viewing F(t)x(t) as a forcing term, for any t_o, x_o, and t ≥ t_o we can write

x(t) = Φ_{A+F}(t, t_o) x_o = Φ_A(t, t_o) x_o + ∫_{t_o}^t Φ_A(t, σ) F(σ) x(σ) dσ

which gives, for suitable constants γ, λ > 0,

‖x(t)‖ ≤ γ e^{-λ(t-t_o)} ‖x_o‖ + ∫_{t_o}^t γ e^{-λ(t-σ)} ‖F(σ)‖ ‖x(σ)‖ dσ

Thus

e^{λt} ‖x(t)‖ ≤ γ e^{λt_o} ‖x_o‖ + ∫_{t_o}^t γ ‖F(σ)‖ e^{λσ} ‖x(σ)‖ dσ

and the Gronwall-Bellman inequality (Lemma 3.2) implies

e^{λt} ‖x(t)‖ ≤ γ e^{λt_o} ‖x_o‖ e^{∫_{t_o}^t γ ‖F(σ)‖ dσ}

Therefore
‖x(t)‖ ≤ γ e^{-λ(t-t_o)} e^{∫_{t_o}^t γ ‖F(σ)‖ dσ} ‖x_o‖

 ≤ γ e^{-λ(t-t_o)} e^{∫_{t_o}^∞ γ ‖F(σ)‖ dσ} ‖x_o‖

 ≤ γ e^{-λ(t-t_o)} e^{γβ} ‖x_o‖

and we conclude the desired uniform exponential stability.
Solution 8.8 We can follow the proof of Theorem 8.7 (first and last portions) to show that the solution

Q(t) = ∫_0^∞ e^{Aᵀ(t)σ} e^{A(t)σ} dσ

of

Aᵀ(t) Q(t) + Q(t) A(t) = −I

is continuously differentiable and satisfies, for all t,

ηI ≤ Q(t) ≤ ρI

where η and ρ are positive constants. Then with

F(t) = A(t) − ½ Q^{-1}(t) Q̇(t)

an easy calculation shows

Fᵀ(t) Q(t) + Q(t) F(t) + Q̇(t) = Aᵀ(t) Q(t) + Q(t) A(t) = −I

Thus ẋ(t) = F(t) x(t) is uniformly exponentially stable by Theorem 7.4.
Solution 8.9 As in Exercise 8.8 we have, for all t,

ηI ≤ Q(t) ≤ ρI

which implies

‖Q^{-1}(t)‖ ≤ 1/η

Also, by the middle portion of the proof of Theorem 8.7,

‖Q̇(t)‖ ≤ 2 ‖Ȧ(t)‖ ‖Q(t)‖²

Therefore

‖½ Q^{-1}(t) Q̇(t)‖ ≤ βρ²/η

for all t. Write

ẋ(t) = A(t) x(t) = [ A(t) − ½ Q^{-1}(t) Q̇(t) ] x(t) + ½ Q^{-1}(t) Q̇(t) x(t)

 ≜ F(t) x(t) + ½ Q^{-1}(t) Q̇(t) x(t)
Then the complete solution formula gives

x(t) = Φ_F(t, t_o) x_o + ∫_{t_o}^t Φ_F(t, σ) ½ Q^{-1}(σ) Q̇(σ) x(σ) dσ

and the result of Exercise 8.8 implies that there exist positive constants γ, λ such that, for any t_o and t ≥ t_o,

‖x(t)‖ ≤ γ e^{-λ(t-t_o)} ‖x_o‖ + ∫_{t_o}^t γ e^{-λ(t-σ)} (βρ²/η) ‖x(σ)‖ dσ

Therefore

e^{λt} ‖x(t)‖ ≤ γ e^{λt_o} ‖x_o‖ + ∫_{t_o}^t (γβρ²/η) e^{λσ} ‖x(σ)‖ dσ

and the Gronwall-Bellman inequality (Lemma 3.2) implies

e^{λt} ‖x(t)‖ ≤ γ e^{λt_o} ‖x_o‖ e^{∫_{t_o}^t (γβρ²/η) dσ}

Thus

‖x(t)‖ ≤ γ e^{-(λ − γβρ²/η)(t-t_o)} ‖x_o‖

Now, writing the left side as ‖Φ_A(t, t_o) x_o‖ and for any t_o and t ≥ t_o choosing the appropriate unity-norm x_o gives

‖Φ_A(t, t_o)‖ ≤ γ e^{-(λ − γβρ²/η)(t-t_o)}

For β sufficiently small this gives the desired uniform exponential stability. (Note that Theorem 8.6 also can be used to conclude that uniform exponential stability of ẋ(t) = F(t)x(t) implies uniform exponential stability of

ẋ(t) = [ F(t) + ½ Q^{-1}(t) Q̇(t) ] x(t) = A(t) x(t)

for β sufficiently small.)
Solution 8.10 With F(t) = A(t) + (µ/2)I we have that ‖F(t)‖ ≤ α + µ/2, Ḟ(t) = Ȧ(t), and the eigenvalues of F(t) satisfy Re[λ_F(t)] ≤ −µ/2. The unique solution of

Fᵀ(t) Q(t) + Q(t) F(t) = −I

is

Q(t) = ∫_0^∞ e^{Fᵀ(t)σ} e^{F(t)σ} dσ

As in the proof of Theorem 8.7, there is a constant ρ such that ‖Q(t)‖ ≤ ρ for all t. Now, for any n × 1 vector z,

(d/dσ) [ zᵀ e^{Fᵀ(t)σ} e^{F(t)σ} z ] = zᵀ e^{Fᵀ(t)σ} [ Fᵀ(t) + F(t) ] e^{F(t)σ} z

 ≥ −(2α + µ) zᵀ e^{Fᵀ(t)σ} e^{F(t)σ} z

Thus for any τ ≥ 0,
−zᵀ e^{Fᵀ(t)τ} e^{F(t)τ} z = ∫_τ^∞ (d/dσ) [ zᵀ e^{Fᵀ(t)σ} e^{F(t)σ} z ] dσ

 ≥ −(2α + µ) ∫_τ^∞ zᵀ e^{Fᵀ(t)σ} e^{F(t)σ} z dσ

 ≥ −(2α + µ) ∫_0^∞ zᵀ e^{Fᵀ(t)σ} e^{F(t)σ} z dσ

 = −(2α + µ) zᵀ Q(t) z

Thus

‖e^{Fᵀ(t)τ} e^{F(t)τ}‖ ≤ (2α + µ) ‖Q(t)‖ , τ ≥ 0

and using

e^{F(t)τ} = e^{A(t)τ} e^{(µ/2)τ} , τ ≥ 0

gives

‖e^{A(t)τ}‖ ≤ √((2α + µ)ρ) e^{-(µ/2)τ} , τ ≥ 0
Solution 8.11 Write, for q(t) = −A^{-1}(u(t)) b(u(t)) (the chain rule is valid since u(t) is a scalar),

q̇(t) = A^{-1}(u(t)) (dA/du)(u(t)) u̇(t) A^{-1}(u(t)) b(u(t)) − A^{-1}(u(t)) (db/du)(u(t)) u̇(t) ≜ −B̂(t) u̇(t)

Then

ẋ(t) = A(u(t)) x(t) + b(u(t)) = A(u(t)) [ x(t) − q(t) ] + A(u(t)) q(t) + b(u(t)) = A(u(t)) [ x(t) − q(t) ]

gives

(d/dt) [ x(t) − q(t) ] = A(u(t)) [ x(t) − q(t) ] + B̂(t) u̇(t)   (*)

Since

(d/dt) A(u(t)) = (dA/du)(u(t)) u̇(t)

we can conclude from Theorem 8.7 that for δ sufficiently small, and u(t) such that |u̇(t)| ≤ δ for all t, there exist positive constants γ and η (depending on u(t)) such that

‖Φ_{A(u(t))}(t, σ)‖ ≤ γ e^{-η(t-σ)} , t ≥ σ ≥ 0

But the smoothness assumptions on A(·) and b(·) and the bounds on u(t) also give that there exists a positive constant β such that ‖B̂(t)‖ ≤ β for t ≥ 0. Thus the solution formula for (*) gives

‖x(t) − q(t)‖ ≤ γ ‖x(0) − q(0)‖ + γβδ/η , t ≥ 0

for u(t) as above, and the claimed result follows.
CHAPTER 9
Solution 9.7 Write

[ B  (A−βI)B  (A−βI)²B  ⋯ ] = [ B  AB−βB  A²B−2βAB+β²B  ⋯ ]

 = [ B  AB  A²B  ⋯ ] [ I_m  −βI_m   β²I_m  ⋯ ]
                     [ 0     I_m   −2βI_m  ⋯ ]
                     [ 0     0      I_m    ⋯ ]
                     [ ⋮     ⋮      ⋮      ⋱ ]

Since the second factor on the right is invertible (block upper triangular with identity diagonal blocks), clearly the two controllability matrices have the same rank. (The solution is even easier using rank tests from Chapter 13.)
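A numerical spot check of the rank equality (the controllable pair below and the value of β are hypothetical example data):

```python
import numpy as np

# Compare rank of [B, AB, A^2 B] with rank of [B, (A-bI)B, (A-bI)^2 B].
n, beta = 3, 2.5
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-1.0, -2.0, -3.0]])    # companion form
B = np.array([[0.0], [0.0], [1.0]])

def ctrb(A, B):
    """Controllability matrix [B, AB, ..., A^{n-1}B]."""
    n = A.shape[0]
    blocks, Ak = [], np.eye(n)
    for _ in range(n):
        blocks.append(Ak @ B)
        Ak = A @ Ak
    return np.hstack(blocks)

r1 = np.linalg.matrix_rank(ctrb(A, B))
r2 = np.linalg.matrix_rank(ctrb(A - beta * np.eye(n), B))
print(r1, r2)   # equal ranks
```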
Solution 9.8 Since A has negative-real-part eigenvalues,

Q = ∫_0^∞ e^{At} B Bᵀ e^{Aᵀt} dt

is well defined, symmetric, and

A Q + Q Aᵀ = ∫_0^∞ [ A e^{At} B Bᵀ e^{Aᵀt} + e^{At} B Bᵀ e^{Aᵀt} Aᵀ ] dt

 = ∫_0^∞ (d/dt) [ e^{At} B Bᵀ e^{Aᵀt} ] dt

 = −B Bᵀ

Also it is clear that Q is positive semidefinite. If it is not positive definite, then for some nonzero, n × 1 x,

0 = xᵀ Q x = ∫_0^∞ xᵀ e^{At} B Bᵀ e^{Aᵀt} x dt = ∫_0^∞ ‖xᵀ e^{At} B‖² dt

Thus xᵀ e^{At} B = 0 for all t ≥ 0, and it follows that
0 = (d^j/dt^j) [ xᵀ e^{At} B ] |_{t=0} = xᵀ A^j B

for j = 0, 1, 2, …. But this implies

xᵀ [ B  AB  ⋯  A^{n-1}B ] = 0

which contradicts the controllability hypothesis. Thus Q is positive definite.
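The Gramian in this argument can be computed by solving the Lyapunov equation AQ + QAᵀ = −BBᵀ directly (a sketch with a hypothetical stable, controllable pair):

```python
import numpy as np

# Solve A Q + Q A^T = -B B^T by vectorization, then confirm Q > 0.
A = np.array([[-1.0, 1.0], [0.0, -2.0]])
B = np.array([[0.0], [1.0]])
n = 2

# vec(A Q + Q A^T) = (I kron A + A kron I) vec(Q), column-stacked vec
K = np.kron(np.eye(n), A) + np.kron(A, np.eye(n))
Q = np.linalg.solve(K, -(B @ B.T).flatten(order="F")).reshape((n, n), order="F")

print(np.max(np.abs(A @ Q + Q @ A.T + B @ B.T)))   # residual near zero
print(np.linalg.eigvalsh((Q + Q.T) / 2))           # both positive: Q > 0
```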
Solution 9.9 Suppose λ is an eigenvalue of A, and p is a corresponding left eigenvector. Then p ≠ 0, and

pᵀ A = λ pᵀ

This implies both

p^H A = λ̄ p^H ,  Aᵀ p = λ p

Now suppose Q is as claimed. Then

p^H A Q p + p^H Q Aᵀ p = λ̄ p^H Q p + λ p^H Q p = −p^H B Bᵀ p

that is,

2 Re[λ] p^H Q p = −p^H B Bᵀ p   (*)

This gives Re[λ] ≤ 0 since Q is positive definite. Now suppose Re[λ] = 0. Then (*) gives p^H B = 0. Also, for j = 1, 2, …,

p^H A^j B = λ̄ p^H A^{j-1} B = ⋯ = λ̄^j p^H B = 0

Thus

p^H [ B  AB  ⋯  A^{n-1}B ] = 0

which contradicts the controllability assumption. Therefore Re[λ] < 0.
Now suppose the state equation is output controllable on [t_o, t_f], but that W_y(t_o, t_f) is not invertible. Then there exists a p × 1 vector y_a ≠ 0 such that y_aᵀ W_y(t_o, t_f) y_a = 0. Using by now familiar arguments, this gives

y_aᵀ C(t_f) Φ(t_f, t) B(t) = 0 , t ∈ [t_o, t_f]

Consider the initial state

x_o = Φ(t_o, t_f) Cᵀ(t_f) [ C(t_f) Cᵀ(t_f) ]^{-1} y_a

which is well defined and nonzero since rank C(t_f) = p. There exists an input u_a(t) such that

0 = C(t_f) Φ(t_f, t_o) x_o + ∫_{t_o}^{t_f} C(t_f) Φ(t_f, σ) B(σ) u_a(σ) dσ = y_a + ∫_{t_o}^{t_f} C(t_f) Φ(t_f, σ) B(σ) u_a(σ) dσ

Premultiplying by y_aᵀ gives

0 = y_aᵀ y_a

This contradicts y_a ≠ 0, and thus W_y(t_o, t_f) is invertible. The rank assumption on C(t_f) is needed in the necessity proof to guarantee that x_o is well defined. For m = p = 1, invertibility of W_y(t_o, t_f) is equivalent to existence of a t_a ∈ (t_o, t_f) such that

C(t_f) Φ(t_f, t_a) B(t_a) ≠ 0

That is, there exists a t_a ∈ (t_o, t_f) such that the output response at t_f to an impulse input at t_a is nonzero.
Solution 9.11 From Exercise 9.10, since rank C = p, the state equation is output controllable if and only if for some fixed t_f > 0,

W_y ≜ ∫_0^{t_f} C e^{A(t_f-t)} B Bᵀ e^{Aᵀ(t_f-t)} Cᵀ dt

is invertible. We will show this holds if and only if

rank [ CB  CAB  ⋯  CA^{n-1}B ] = p

by showing equivalence of the negations. If W_y is not invertible, there exists a nonzero p × 1 vector y_a such that y_aᵀ W_y y_a = 0. Thus

y_aᵀ C e^{A(t_f-t)} B = 0 , t ∈ [0, t_f]

Differentiating repeatedly, and evaluating at t = t_f, gives

y_aᵀ C A^j B = 0 , j = 0, 1, …

Thus

y_aᵀ [ CB  CAB  ⋯  CA^{n-1}B ] = 0

and this implies

rank [ CB  CAB  ⋯  CA^{n-1}B ] < p

Conversely, if the rank condition fails, then there exists a nonzero y_a such that y_aᵀ C A^j B = 0, j = 0, …, n−1. Then
y_aᵀ C e^{A(t_f-t)} B = y_aᵀ C Σ_{k=0}^{n-1} α_k(t_f-t) A^k B = 0 , t ∈ [0, t_f]

Therefore y_aᵀ W_y y_a = 0, which implies that W_y is not invertible. For m = p = 1, argue as in Solution 9.10 to show that a linear state equation is output controllable if and only if its impulse response (equivalently, transfer function) is not identically zero.
Solution 9.17 Beginning with

y(t) = c(t) x(t)

ẏ(t) = ċ(t) x(t) + c(t) ẋ(t) = [ ċ(t) + c(t) A(t) ] x(t) + c(t) b(t) u(t) = L_1(t) x(t) + L_0(t) b(t) u(t)

it is easy to show by induction that

y^{(k)}(t) = L_k(t) x(t) + Σ_{j=0}^{k-1} (d^{k-j-1}/dt^{k-j-1}) [ L_j(t) b(t) u(t) ] , k = 1, 2, …

Now if

L_n(t) M̄^{-1} ≜ [ α_0(t)  α_1(t)  ⋯  α_{n-1}(t) ]

then

Σ_{i=0}^{n-1} α_i(t) L_i(t) = [ α_0(t)  ⋯  α_{n-1}(t) ] [ L_0(t) ; ⋮ ; L_{n-1}(t) ] = L_n(t)

Thus we can write

y^{(n)}(t) − Σ_{i=0}^{n-1} α_i(t) y^{(i)}(t) = L_n(t) x(t) + Σ_{j=0}^{n-1} (d^{n-j-1}/dt^{n-j-1}) [ L_j(t) b(t) u(t) ]

 − Σ_{i=0}^{n-1} α_i(t) L_i(t) x(t) − Σ_{i=0}^{n-1} α_i(t) Σ_{j=0}^{i-1} (d^{i-j-1}/dt^{i-j-1}) [ L_j(t) b(t) u(t) ]

 = Σ_{j=0}^{n-1} (d^{n-j-1}/dt^{n-j-1}) [ L_j(t) b(t) u(t) ] − Σ_{i=0}^{n-1} α_i(t) Σ_{j=0}^{i-1} (d^{i-j-1}/dt^{i-j-1}) [ L_j(t) b(t) u(t) ]

This is in the desired form of an nth-order differential equation.
C(t) B(σ) = H(t) F(σ)   (*)

for all t, σ. Picking an appropriate t_o and t_f > t_o,

M_x(t_o, t_f) W_x(t_o, t_f) = [ ∫_{t_o}^{t_f} Cᵀ(t) H(t) dt ] [ ∫_{t_o}^{t_f} F(σ) Bᵀ(σ) dσ ]   (**)

where the left side is a product of invertible matrices by minimality. Therefore the two matrices on the right side are invertible. Let

P^{-1} = M_x^{-1}(t_o, t_f) ∫_{t_o}^{t_f} Cᵀ(t) H(t) dt

Then multiply both sides of (*) by Cᵀ(t) and integrate with respect to t to obtain

M_x(t_o, t_f) B(σ) = [ ∫_{t_o}^{t_f} Cᵀ(t) H(t) dt ] F(σ)

for all σ. That is,

B(σ) = P^{-1} F(σ)

for all σ. Similarly, (*) gives

C(t) W_x(t_o, t_f) = H(t) ∫_{t_o}^{t_f} F(σ) Bᵀ(σ) dσ

that is,

C(t) = H(t) [ ∫_{t_o}^{t_f} F(σ) Bᵀ(σ) dσ ] W_x^{-1}(t_o, t_f)

But (**) then gives

[ ∫_{t_o}^{t_f} F(σ) Bᵀ(σ) dσ ] W_x^{-1}(t_o, t_f) = [ ∫_{t_o}^{t_f} Cᵀ(t) H(t) dt ]^{-1} M_x(t_o, t_f) = P

so we have

C(t) = H(t) P

for all t. Noting that 0 = P^{-1} · 0 · P, we have that P is a change of variables relating the two zero-A minimal realizations. Since a change of variables always can be used to obtain a zero-A realization, this shows that any two minimal realizations of a given weighting pattern are related by a variable change.
Solution 10.11 Evaluating

X(t+σ) = X(t) X(σ)

at σ = −t gives that X(t) is invertible, and X^{-1}(t) = X(−t) for all t. Differentiating with respect to t, and with respect to σ, and using

(∂/∂t) X(t+σ) = (∂/∂σ) X(t+σ)

gives
[ (d/dt) X(t) ] X(σ) = X(t) [ (d/dσ) X(σ) ]

which implies

(d/dσ) X(σ) = X(−t) [ (d/dt) X(t) ] X(σ)

Integrate both sides with respect to t from a fixed t_o to a fixed t_f > t_o to obtain

(t_f − t_o) (d/dσ) X(σ) = [ ∫_{t_o}^{t_f} X(−t) (d/dt) X(t) dt ] X(σ)

Now let

A = (1/(t_f − t_o)) ∫_{t_o}^{t_f} X(−t) (d/dt) X(t) dt

to write

(d/dσ) X(σ) = A X(σ) , X(0) = I

This implies X(σ) = e^{Aσ}.
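A brief numerical illustration (the matrix A below is an arbitrary example): X(t) = e^{At} satisfies the functional equation, and X(−t)(d/dt)X(t) recovers A at any t:

```python
import numpy as np

# X(t) = e^{At} satisfies X(t+s) = X(t) X(s), and X(-t) X'(t) = A.
A = np.array([[0.0, 1.0], [-1.0, -0.5]])

def expm_series(M, terms=40):
    """Matrix exponential by truncated Taylor series (fine for small ||M||)."""
    out, term = np.eye(2), np.eye(2)
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

X = lambda t: expm_series(A * t)

# Semigroup property
print(np.max(np.abs(X(0.7 + 0.4) - X(0.7) @ X(0.4))))     # near zero

# Recover A: approximate X'(t) by a central difference
h = 1e-5
for t in (0.3, 1.1):
    Xdot = (X(t + h) - X(t - h)) / (2 * h)
    print(np.max(np.abs(X(-t) @ Xdot - A)))               # near zero
```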