Differential Equations and Linear Algebra - 3e - Goode - Solutions




Solutions to Section 1.1

True-False Review:

1. FALSE. A differential equation must involve some derivative of the function y = f(x), not necessarily the first derivative.

2. TRUE. The initial conditions accompanying a differential equation consist of the values of y, y′, ... at t = 0.

3. TRUE. If we define positive velocity to be oriented downward, then dv/dt = g, where g is the acceleration due to gravity.

4. TRUE. We can justify this mathematically by starting from a(t) = g, and integrating twice to get v(t) = gt + c, and then s(t) = (1/2)gt² + ct + d, which is a quadratic function of t.

5. FALSE. The restoring force is directed in the direction opposite to the displacement from the equilibrium position.

6. TRUE. According to Newton's Law of Cooling, the rate of cooling is proportional to the difference between the object's temperature and the medium's temperature. Since that difference is greater for the object at 100°F than for the object at 90°F, the object whose temperature is 100°F has a greater rate of cooling.

7. FALSE. The temperature of the object is given by T(t) = Tm + ce^(−kt), where Tm is the temperature of the medium, and c and k are constants. Since e^(−kt) ≠ 0, we see that T(t) ≠ Tm for all times t. The temperature of the object approaches the temperature of the surrounding medium, but never equals it.

8. TRUE. Since the temperature of the coffee is falling, the temperature difference between the coffee and the room is greater initially, during the first hour, than it is later, when the temperature of the coffee has already decreased.

9. FALSE. The slopes of the two curves are negative reciprocals of each other.

10. TRUE. If the original family of parallel lines has slope k ≠ 0, then the family of orthogonal trajectories consists of parallel lines with slope −1/k. If the original family of parallel lines is vertical (resp. horizontal), then the family of orthogonal trajectories consists of horizontal (resp. vertical) parallel lines.

11. FALSE. The family of orthogonal trajectories for a family of circles centered at the origin is the family of lines passing through the origin.

Problems:

1. d²y/dt² = g =⇒ dy/dt = gt + c1 =⇒ y(t) = gt²/2 + c1·t + c2. Now impose the initial conditions: y(0) = 0 =⇒ c2 = 0, and dy/dt(0) = 0 =⇒ c1 = 0. Hence, the solution to the initial-value problem is y(t) = gt²/2. The object hits the ground at the time t0 when y(t0) = 100. Hence 100 = g·t0²/2, so that t0 = √(200/g) ≈ 4.52 s, where we have taken g = 9.8 m/s².
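The arithmetic in Problem 1 can be spot-checked numerically; a minimal sketch, using the same value of g as the text:

```python
import math

g = 9.8                      # m/s^2, as in the text
t0 = math.sqrt(200 / g)      # from 100 = g*t0^2/2
print(round(t0, 2))          # 4.52
```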

2. From d²y/dt² = g, we integrate twice to obtain the general equations for the velocity and the position of the ball, respectively: dy/dt = gt + c and y(t) = (1/2)gt² + ct + d, where c, d are constants of integration. Setting y = 0 to be at the top of the boy's head (and positive direction downward), we know that y(0) = 0. Since the object hits the ground 8 seconds later, we have y(8) = 5 (since the ground lies at the position y = 5). From the values of y(0) and y(8), we find that d = 0 and 5 = 32g + 8c. Therefore, c = (5 − 32g)/8.

(a) The ball reaches its maximum height at the moment when y′(t) = 0, that is, gt + c = 0. Therefore, t = −c/g = (32g − 5)/(8g) ≈ 3.98 s, where we have taken g = 32 ft/s².

(b) To find the maximum height of the tennis ball, we compute

y(3.98) ≈ −253.51 feet.

So the ball is 253.51 feet above the top of the boy's head, which is 258.51 feet above the ground.
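A quick numerical check of Problem 2, assuming g = 32 ft/s² (the value implied by the relation 5 = 32g + 8c):

```python
g = 32.0                    # ft/s^2, implied by 5 = 32g + 8c
c = (5 - 32 * g) / 8        # initial velocity from y(8) = 5 with d = 0
t_max = -c / g              # time at which y'(t) = g*t + c = 0
y_max = 0.5 * g * t_max**2 + c * t_max
print(round(t_max, 2), round(y_max, 2))   # 3.98 -253.51
```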

3. From d²y/dt² = g, we integrate twice to obtain the general equations for the velocity and the position of the rocket, respectively: dy/dt = gt + c and y(t) = (1/2)gt² + ct + d, where c, d are constants of integration. Setting y = 0 to be at ground level, we know that y(0) = 0. Thus, d = 0.

(a) The rocket reaches maximum height at the moment when y′(t) = 0, that is, gt + c = 0. Therefore, the time at which the rocket achieves its maximum height is t = −c/g. At this time, y(t) = −90 (the negative sign accounts for the fact that the positive direction is chosen to be downward). Hence,

−90 = y(−c/g) = (1/2)g(−c/g)² + c(−c/g) = c²/(2g) − c²/g = −c²/(2g).

Solving this for c, we find that c = ±√(180g). However, since c represents the initial velocity of the rocket, and the initial velocity is negative (relative to the fact that the positive direction is downward), we choose c = −√(180g) ≈ −42.02 m/s, and thus the initial speed at which the rocket must be launched for optimal viewing is approximately 42.02 m/s.

(b) The time at which the rocket reaches its maximum height is t = −c/g ≈ −(−42.02)/9.81 ≈ 4.28 s.
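The two numbers in Problem 3 can be reproduced directly from c = −√(180g), taking g = 9.81 m/s² as in the text:

```python
import math

g = 9.81                     # m/s^2, as in the text
c = -math.sqrt(180 * g)      # initial velocity (positive direction downward)
print(round(-c, 2))          # 42.02  (launch speed, m/s)
print(round(-c / g, 2))      # 4.28   (time to maximum height, s)
```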

4. From d²y/dt² = g, we integrate twice to obtain the general equations for the velocity and the position of the rocket, respectively: dy/dt = gt + c and y(t) = (1/2)gt² + ct + d, where c, d are constants of integration. Setting y = 0 to be at the level of the platform (with positive direction downward), we know that y(0) = 0. Thus, d = 0.

(a) The rocket reaches maximum height at the moment when y′(t) = 0, that is, gt + c = 0. Therefore, the time at which the rocket achieves its maximum height is t = −c/g. At this time, y(t) = −85 (this is 85 m above the platform, or 90 m above the ground). Hence,

−85 = y(−c/g) = (1/2)g(−c/g)² + c(−c/g) = c²/(2g) − c²/g = −c²/(2g).

Solving this for c, we find that c = ±√(170g). However, since c represents the initial velocity of the rocket, and the initial velocity is negative (relative to the fact that the positive direction is downward), we choose c = −√(170g) ≈ −40.84 m/s, and thus the initial speed at which the rocket must be launched for optimal viewing is approximately 40.84 m/s.

(b) The time at which the rocket reaches its maximum height is t = −c/g ≈ −(−40.84)/9.81 ≈ 4.16 s.

5. If y(t) denotes the displacement of the object from its initial position at time t, the motion of the object can be described by the initial-value problem

d²y/dt² = g, y(0) = 0, dy/dt(0) = −2.

We first integrate this differential equation: d²y/dt² = g =⇒ dy/dt = gt + c1 =⇒ y(t) = gt²/2 + c1·t + c2. Now impose the initial conditions: y(0) = 0 =⇒ c2 = 0, and dy/dt(0) = −2 =⇒ c1 = −2. Hence the solution to the initial-value problem is y(t) = gt²/2 − 2t. We are given that y(10) = h. Consequently, h = g(10)²/2 − 2·10 = 10(5g − 2) = 470 m, where we have taken g = 9.8 m/s².

6. If y(t) denotes the displacement of the object from its initial position at time t, the motion of the object can be described by the initial-value problem

d²y/dt² = g, y(0) = 0, dy/dt(0) = v0.

We first integrate the differential equation: d²y/dt² = g =⇒ dy/dt = gt + c1 =⇒ y(t) = gt²/2 + c1·t + c2. Now impose the initial conditions: y(0) = 0 =⇒ c2 = 0, and dy/dt(0) = v0 =⇒ c1 = v0. Hence the solution to the initial-value problem is y(t) = gt²/2 + v0·t. We are given that y(t0) = h. Consequently, h = g·t0²/2 + v0·t0. Solving for v0 yields v0 = (2h − g·t0²)/(2t0).

7. y(t) = A cos(ωt − φ) =⇒ dy/dt = −Aω sin(ωt − φ) =⇒ d²y/dt² = −Aω² cos(ωt − φ). Hence,

d²y/dt² + ω²y = −Aω² cos(ωt − φ) + Aω² cos(ωt − φ) = 0.

8. y(t) = c1 cos(ωt) + c2 sin(ωt) =⇒ dy/dt = −c1ω sin(ωt) + c2ω cos(ωt) =⇒ d²y/dt² = −c1ω² cos(ωt) − c2ω² sin(ωt) = −ω²[c1 cos(ωt) + c2 sin(ωt)] = −ω²y. Consequently, d²y/dt² + ω²y = 0. To determine the amplitude of the motion we write the solution to the differential equation in the equivalent form:

y(t) = √(c1² + c2²) [ (c1/√(c1² + c2²)) cos(ωt) + (c2/√(c1² + c2²)) sin(ωt) ].

We can now define an angle φ by

cos φ = c1/√(c1² + c2²) and sin φ = c2/√(c1² + c2²).

Then the expression for the solution to the differential equation is

y(t) = √(c1² + c2²) [cos(ωt) cos φ + sin(ωt) sin φ] = √(c1² + c2²) cos(ωt − φ).

Consequently the motion corresponds to an oscillation with amplitude A = √(c1² + c2²).
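The amplitude identity in Problem 8 can be verified numerically; the coefficient and frequency values below are arbitrary illustrations, not from the text:

```python
import math

c1, c2, w = 3.0, 4.0, 2.0            # hypothetical coefficients and frequency
A = math.hypot(c1, c2)               # amplitude sqrt(c1^2 + c2^2)
phi = math.atan2(c2, c1)             # so cos(phi) = c1/A, sin(phi) = c2/A
for t in (0.0, 0.7, 1.3):
    lhs = c1 * math.cos(w * t) + c2 * math.sin(w * t)
    rhs = A * math.cos(w * t - phi)
    assert abs(lhs - rhs) < 1e-12    # c1 cos(wt) + c2 sin(wt) = A cos(wt - phi)
print(A)                             # 5.0
```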

9. We compute the first three derivatives of y(t) = ln t:

dy/dt = 1/t, d²y/dt² = −1/t², d³y/dt³ = 2/t³.

Therefore,

2(dy/dt)³ = 2/t³ = d³y/dt³,

as required.

10. We compute the first two derivatives of y(x) = x/(x + 1):

dy/dx = 1/(x + 1)² and d²y/dx² = −2/(x + 1)³.

Then

y + d²y/dx² = x/(x + 1) − 2/(x + 1)³ = (x³ + 2x² + x − 2)/(x + 1)³ = [(x + 1) + (x³ + 2x² − 3)]/(x + 1)³ = 1/(x + 1)² + (x³ + 2x² − 3)/(1 + x)³,

as required.

11. We compute the first two derivatives of y(x) = e^x sin x:

dy/dx = e^x(sin x + cos x) and d²y/dx² = 2e^x cos x.

Then

2y cot x − d²y/dx² = 2(e^x sin x) cot x − 2e^x cos x = 0,

as required.

12. (T − Tm)⁻¹ dT/dt = −k =⇒ d/dt(ln |T − Tm|) = −k. The preceding equation can be integrated directly to yield ln |T − Tm| = −kt + c1. Exponentiating both sides of this equation gives |T − Tm| = e^(−kt+c1), which can be written as

T − Tm = ce^(−kt),

where c = ±e^(c1). Rearranging yields T(t) = Tm + ce^(−kt).
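The solution form in Problem 12 can be checked by differentiating numerically; the constants below are illustrative choices, not values from the text:

```python
import math

Tm, c, k = 70.0, 30.0, 0.1               # illustrative constants
T = lambda t: Tm + c * math.exp(-k * t)  # proposed solution T(t)
h = 1e-6
for t in (0.0, 5.0, 20.0):
    dT = (T(t + h) - T(t - h)) / (2 * h)       # centered difference for dT/dt
    assert abs(dT - (-k * (T(t) - Tm))) < 1e-6 # dT/dt = -k (T - Tm)
print("dT/dt = -k (T - Tm) verified")
```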

13. After 4 p.m. In the first two hours after noon, the water temperature increased from 50°F to 55°F, an increase of five degrees. Because the temperature of the water has grown closer to the ambient air temperature, the temperature difference |T − Tm| is smaller, and thus the rate of change of the temperature of the water grows smaller, according to Newton's Law of Cooling. Thus, it will take longer for the water temperature to increase another five degrees. Therefore, the water temperature will reach 60°F more than two hours after 2 p.m., that is, after 4 p.m.


14. The object's temperature cools a total of 40°F during the 40 minutes, but according to Newton's Law of Cooling, it cools faster in the beginning (since |T − Tm| is greater at first). Thus, the object cooled half-way from 70°F to 30°F in less than half the total cooling time. Therefore, it took less than 20 minutes for the object to reach 50°F.

15. The given family of curves satisfies x² + 4y² = c1 =⇒ 2x + 8y·dy/dx = 0 =⇒ dy/dx = −x/(4y).

Orthogonal trajectories satisfy:

dy/dx = 4y/x =⇒ (1/y)·dy/dx = 4/x =⇒ d/dx(ln |y|) = 4/x =⇒ ln |y| = 4 ln |x| + c2 =⇒ y = kx⁴, where k = ±e^(c2).

Figure 0.0.1: Figure for Exercise 15
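Orthogonality of the two families in Exercise 15 can be confirmed at any common point by multiplying the two slope formulas; the test point below is arbitrary:

```python
# Slopes from Exercise 15 at a point with x, y nonzero:
#   ellipse x^2 + 4y^2 = c1:  m1 = -x / (4y)
#   trajectory y = k x^4:     m2 = 4y / x
x, y = 1.5, 0.4                     # arbitrary test point
m1 = -x / (4 * y)
m2 = 4 * y / x
assert abs(m1 * m2 + 1) < 1e-12     # product of slopes is -1
print("orthogonal")
```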

16. The given family of curves satisfies y = c/x =⇒ x·dy/dx + y = 0 =⇒ dy/dx = −y/x.

Orthogonal trajectories satisfy:

dy/dx = x/y =⇒ y·dy/dx = x =⇒ d/dx((1/2)y²) = x =⇒ (1/2)y² = (1/2)x² + c1 =⇒ y² − x² = c2, where c2 = 2c1.

17. The given family of curves satisfies y = cx² =⇒ c = y/x². Hence, dy/dx = 2cx = 2(y/x²)x = 2y/x.

Orthogonal trajectories satisfy:

dy/dx = −x/(2y) =⇒ 2y·dy/dx = −x =⇒ d/dx(y²) = −x =⇒ y² = −(1/2)x² + c1 =⇒ 2y² + x² = c2, where c2 = 2c1.

18. The given family of curves satisfies y = cx⁴ =⇒ c = y/x⁴. Hence, dy/dx = 4cx³ = 4(y/x⁴)x³ = 4y/x.

Figure 0.0.2: Figure for Exercise 16

Figure 0.0.3: Figure for Exercise 17

Orthogonal trajectories satisfy:

dy/dx = −x/(4y) =⇒ 4y·dy/dx = −x =⇒ d/dx(2y²) = −x =⇒ 2y² = −(1/2)x² + c1 =⇒ 4y² + x² = c2, where c2 = 2c1.

Figure 0.0.4: Figure for Exercise 18

19. The given family of curves satisfies y² = 2x + c =⇒ dy/dx = 1/y.

Orthogonal trajectories satisfy:

dy/dx = −y =⇒ y⁻¹·dy/dx = −1 =⇒ d/dx(ln |y|) = −1 =⇒ ln |y| = −x + c1 =⇒ y = c2·e^(−x).

Figure 0.0.5: Figure for Exercise 19

20. The given family of curves satisfies y = ce^x =⇒ dy/dx = ce^x = y.

Orthogonal trajectories satisfy:

dy/dx = −1/y =⇒ y·dy/dx = −1 =⇒ d/dx((1/2)y²) = −1 =⇒ (1/2)y² = −x + c1 =⇒ y² = −2x + c2.

Figure 0.0.6: Figure for Exercise 20

21. y = mx + c =⇒ dy/dx = m.

Orthogonal trajectories satisfy: dy/dx = −1/m =⇒ y = −(1/m)x + c1.

22. y = cx^m =⇒ dy/dx = cmx^(m−1), but c = y/x^m, so dy/dx = my/x. Orthogonal trajectories satisfy:

dy/dx = −x/(my) =⇒ y·dy/dx = −x/m =⇒ d/dx((1/2)y²) = −x/m =⇒ (1/2)y² = −x²/(2m) + c1 =⇒ y² = −x²/m + c2.

23. y² + mx² = c =⇒ 2y·dy/dx + 2mx = 0 =⇒ dy/dx = −mx/y.

Orthogonal trajectories satisfy:

dy/dx = y/(mx) =⇒ y⁻¹·dy/dx = 1/(mx) =⇒ d/dx(ln |y|) = 1/(mx) =⇒ m ln |y| = ln |x| + c1 =⇒ y^m = c2·x.

24. y² = mx + c =⇒ 2y·dy/dx = m =⇒ dy/dx = m/(2y).

Orthogonal trajectories satisfy:

dy/dx = −2y/m =⇒ y⁻¹·dy/dx = −2/m =⇒ d/dx(ln |y|) = −2/m =⇒ ln |y| = −(2/m)x + c1 =⇒ y = c2·e^(−2x/m).

25. u = x² + 2y² =⇒ 0 = 2x + 4y·dy/dx =⇒ dy/dx = −x/(2y).

Orthogonal trajectories satisfy:

dy/dx = 2y/x =⇒ y⁻¹·dy/dx = 2/x =⇒ d/dx(ln |y|) = 2/x =⇒ ln |y| = 2 ln |x| + c1 =⇒ y = c2·x².

Figure 0.0.7: Figure for Exercise 25

26. m1 = tan(a1) = tan(a2 − a) = [tan(a2) − tan(a)]/[1 + tan(a2) tan(a)] = [m2 − tan(a)]/[1 + m2 tan(a)].

Solutions to Section 1.2

True-False Review:

1. FALSE. The order of a differential equation is the order of the highest derivative appearing in the differential equation.

2. TRUE. This is condition 1 in Definition 1.2.11.

3. TRUE. This is the content of Theorem 1.2.15.

4. FALSE. There are solutions to y′′ + y = 0 that do not have the form c1 cos x + 5c2 cos x, such as y(x) = sin x. Therefore, c1 cos x + 5c2 cos x does not meet the second requirement set forth in Definition 1.2.11 for the general solution.

5. FALSE. There are solutions to y′′ + y = 0 that do not have the form c1 cos x + 5c1 sin x, such as y(x) = cos x + sin x. Therefore, c1 cos x + 5c1 sin x does not meet the second requirement set forth in Definition 1.2.11 for the general solution.

6. TRUE. Since the right-hand side of the differential equation is a function of x only, we can integrate both sides n times to obtain the formula for the solution y(x).


Problems:

1. 2, nonlinear.

2. 3, linear.

3. 2, nonlinear.

4. 2, nonlinear.

5. 4, linear.

6. 3, nonlinear.

7. We can quickly compute the first two derivatives of y(x):

y′(x) = (c1 + 2c2)e^x cos 2x + (−2c1 + c2)e^x sin 2x and y′′(x) = (−3c1 + 4c2)e^x cos 2x + (−4c1 − 3c2)e^x sin 2x.

Then we have

y′′ − 2y′ + 5y = [(−3c1 + 4c2)e^x cos 2x + (−4c1 − 3c2)e^x sin 2x] − 2[(c1 + 2c2)e^x cos 2x + (−2c1 + c2)e^x sin 2x] + 5(c1·e^x cos 2x + c2·e^x sin 2x),

which cancels to 0, as required. This solution is valid for all x ∈ R.

8. y(x) = c1·e^x + c2·e^(−2x) =⇒ y′ = c1·e^x − 2c2·e^(−2x) =⇒ y′′ = c1·e^x + 4c2·e^(−2x) =⇒ y′′ + y′ − 2y = (c1·e^x + 4c2·e^(−2x)) + (c1·e^x − 2c2·e^(−2x)) − 2(c1·e^x + c2·e^(−2x)) = 0. Thus y(x) = c1·e^x + c2·e^(−2x) is a solution of the given differential equation for all x ∈ R.

9. y(x) = 1/(x + 4) =⇒ y′ = −1/(x + 4)² = −y². Thus y(x) = 1/(x + 4) is a solution of the given differential equation for x ∈ (−∞, −4) or x ∈ (−4, ∞).

10. y(x) = c1·√x =⇒ y′ = c1/(2√x) = y/(2x). Thus y(x) = c1·√x is a solution of the given differential equation for all x ∈ {x : x > 0}.

11. y(x) = c1·e^(−x) sin(2x) =⇒ y′ = 2c1·e^(−x) cos(2x) − c1·e^(−x) sin(2x) =⇒ y′′ = −3c1·e^(−x) sin(2x) − 4c1·e^(−x) cos(2x) =⇒ y′′ + 2y′ + 5y = −3c1·e^(−x) sin(2x) − 4c1·e^(−x) cos(2x) + 2[2c1·e^(−x) cos(2x) − c1·e^(−x) sin(2x)] + 5[c1·e^(−x) sin(2x)] = 0. Thus y(x) = c1·e^(−x) sin(2x) is a solution to the given differential equation for all x ∈ R.

12. y(x) = c1 cosh(3x) + c2 sinh(3x) =⇒ y′ = 3c1 sinh(3x) + 3c2 cosh(3x) =⇒ y′′ = 9c1 cosh(3x) + 9c2 sinh(3x) =⇒ y′′ − 9y = [9c1 cosh(3x) + 9c2 sinh(3x)] − 9[c1 cosh(3x) + c2 sinh(3x)] = 0. Thus y(x) = c1 cosh(3x) + c2 sinh(3x) is a solution to the given differential equation for all x ∈ R.

13. y(x) = c1/x³ + c2/x =⇒ y′ = −3c1/x⁴ − c2/x² =⇒ y′′ = 12c1/x⁵ + 2c2/x³ =⇒ x²y′′ + 5xy′ + 3y = x²(12c1/x⁵ + 2c2/x³) + 5x(−3c1/x⁴ − c2/x²) + 3(c1/x³ + c2/x) = 0. Thus y(x) = c1/x³ + c2/x is a solution to the given differential equation for all x ∈ (−∞, 0) or x ∈ (0, ∞).

14. y(x) = c1·√x + 3x² =⇒ y′ = c1/(2√x) + 6x =⇒ y′′ = −c1/(4√x³) + 6 =⇒ 2x²y′′ − xy′ + y = 2x²(−c1/(4√x³) + 6) − x(c1/(2√x) + 6x) + (c1·√x + 3x²) = 9x². Thus y(x) = c1·√x + 3x² is a solution to the given differential equation for all x ∈ {x : x > 0}.

15. y(x) = c1x² + c2x³ − x² sin x =⇒ y′ = 2c1x + 3c2x² − x² cos x − 2x sin x =⇒ y′′ = 2c1 + 6c2x + x² sin x − 4x cos x − 2 sin x. Substituting these results into the given differential equation yields

x²y′′ − 4xy′ + 6y = x²(2c1 + 6c2x + x² sin x − 4x cos x − 2 sin x) − 4x(2c1x + 3c2x² − x² cos x − 2x sin x) + 6(c1x² + c2x³ − x² sin x)
= 2c1x² + 6c2x³ + x⁴ sin x − 4x³ cos x − 2x² sin x − 8c1x² − 12c2x³ + 4x³ cos x + 8x² sin x + 6c1x² + 6c2x³ − 6x² sin x
= x⁴ sin x.

Hence, y(x) = c1x² + c2x³ − x² sin x is a solution to the differential equation for all x ∈ R.

16. y(x) = c1·e^(ax) + c2·e^(bx) =⇒ y′ = ac1·e^(ax) + bc2·e^(bx) =⇒ y′′ = a²c1·e^(ax) + b²c2·e^(bx). Substituting these results into the differential equation yields

y′′ − (a + b)y′ + aby = a²c1·e^(ax) + b²c2·e^(bx) − (a + b)(ac1·e^(ax) + bc2·e^(bx)) + ab(c1·e^(ax) + c2·e^(bx))
= (a²c1 − a²c1 − abc1 + abc1)e^(ax) + (b²c2 − abc2 − b²c2 + abc2)e^(bx)
= 0.

Hence, y(x) = c1·e^(ax) + c2·e^(bx) is a solution to the given differential equation for all x ∈ R.

17. y(x) = e^(ax)(c1 + c2x) =⇒ y′ = e^(ax)(c2) + ae^(ax)(c1 + c2x) = e^(ax)(c2 + ac1 + ac2x) =⇒ y′′ = e^(ax)(ac2) + ae^(ax)(c2 + ac1 + ac2x) = ae^(ax)(2c2 + ac1 + ac2x). Substituting these into the differential equation yields

y′′ − 2ay′ + a²y = ae^(ax)(2c2 + ac1 + ac2x) − 2ae^(ax)(c2 + ac1 + ac2x) + a²e^(ax)(c1 + c2x)
= ae^(ax)(2c2 + ac1 + ac2x − 2c2 − 2ac1 − 2ac2x + ac1 + ac2x)
= 0.

Thus, y(x) = e^(ax)(c1 + c2x) is a solution to the given differential equation for all x ∈ R.

18. y(x) = e^(ax)(c1 cos bx + c2 sin bx), so

y′ = e^(ax)(−bc1 sin bx + bc2 cos bx) + ae^(ax)(c1 cos bx + c2 sin bx) = e^(ax)[(bc2 + ac1) cos bx + (ac2 − bc1) sin bx],

and

y′′ = e^(ax)[−b(bc2 + ac1) sin bx + b(ac2 − bc1) cos bx] + ae^(ax)[(bc2 + ac1) cos bx + (ac2 − bc1) sin bx] = e^(ax)[(a²c1 − b²c1 + 2abc2) cos bx + (a²c2 − b²c2 − 2abc1) sin bx].

Substituting these results into the differential equation yields

y′′ − 2ay′ + (a² + b²)y = e^(ax)[(a²c1 − b²c1 + 2abc2) cos bx + (a²c2 − b²c2 − 2abc1) sin bx] − 2a·e^(ax)[(bc2 + ac1) cos bx + (ac2 − bc1) sin bx] + (a² + b²)·e^(ax)(c1 cos bx + c2 sin bx)
= e^(ax)[(a²c1 − b²c1 + 2abc2 − 2abc2 − 2a²c1 + a²c1 + b²c1) cos bx + (a²c2 − b²c2 − 2abc1 + 2abc1 − 2a²c2 + a²c2 + b²c2) sin bx]
= 0.

Thus, y(x) = e^(ax)(c1 cos bx + c2 sin bx) is a solution to the given differential equation for all x ∈ R.

19. y(x) = e^(rx) =⇒ y′ = re^(rx) =⇒ y′′ = r²e^(rx). Substituting these results into the given differential equation yields e^(rx)(r² + 2r − 3) = 0, so that r must satisfy r² + 2r − 3 = 0, or (r + 3)(r − 1) = 0. Consequently r = −3 and r = 1 are the only values of r for which y(x) = e^(rx) is a solution to the given differential equation. The corresponding solutions are y(x) = e^(−3x) and y(x) = e^x.
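The root-finding step in Problem 19 is just the quadratic formula applied to the characteristic polynomial:

```python
import math

# characteristic polynomial r^2 + 2r - 3 from Problem 19
a, b, c = 1, 2, -3
disc = math.sqrt(b * b - 4 * a * c)
roots = sorted(((-b - disc) / (2 * a), (-b + disc) / (2 * a)))
print(roots)                        # [-3.0, 1.0]
```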


20. y(x) = e^(rx) =⇒ y′ = re^(rx) =⇒ y′′ = r²e^(rx). Substitution into the given differential equation yields e^(rx)(r² − 8r + 16) = 0, so that r must satisfy r² − 8r + 16 = 0, or (r − 4)² = 0. Consequently the only value of r for which y(x) = e^(rx) is a solution to the differential equation is r = 4. The corresponding solution is y(x) = e^(4x).

21. y(x) = x^r =⇒ y′ = rx^(r−1) =⇒ y′′ = r(r − 1)x^(r−2). Substitution into the given differential equation yields x^r[r(r − 1) + r − 1] = 0, so that r must satisfy r² − 1 = 0. Consequently r = −1 and r = 1 are the only values of r for which y(x) = x^r is a solution to the given differential equation. The corresponding solutions are y(x) = x⁻¹ and y(x) = x.

22. y(x) = x^r =⇒ y′ = rx^(r−1) =⇒ y′′ = r(r − 1)x^(r−2). Substitution into the given differential equation yields x^r[r(r − 1) + 5r + 4] = 0, so that r must satisfy r² + 4r + 4 = 0, or equivalently (r + 2)² = 0. Consequently r = −2 is the only value of r for which y(x) = x^r is a solution to the given differential equation. The corresponding solution is y(x) = x⁻².

23. y(x) = (1/2)x(5x² − 3) = (1/2)(5x³ − 3x) =⇒ y′ = (1/2)(15x² − 3) =⇒ y′′ = 15x. Substitution into the Legendre equation with N = 3 yields (1 − x²)y′′ − 2xy′ + 12y = (1 − x²)(15x) − x(15x² − 3) + 6x(5x² − 3) = 0. Consequently the given function is a solution to the Legendre equation with N = 3.

24. y(x) = a0 + a1x + a2x² =⇒ y′ = a1 + 2a2x =⇒ y′′ = 2a2. Substitution into the given differential equation yields (1 − x²)(2a2) − x(a1 + 2a2x) + 4(a0 + a1x + a2x²) = 0 =⇒ 3a1x + 2a2 + 4a0 = 0. For this equation to hold for all x we require 3a1 = 0 and 2a2 + 4a0 = 0. Consequently a1 = 0 and a2 = −2a0. The corresponding solution to the differential equation is y(x) = a0(1 − 2x²). Imposing the normalization condition y(1) = 1 requires that a0 = −1. Hence, the required solution to the differential equation is y(x) = 2x² − 1.
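The result of Problem 24 can be spot-checked by substituting y(x) = 2x² − 1 and its derivatives back into the equation at a few points:

```python
# Check that y(x) = 2x^2 - 1 satisfies (1 - x^2)y'' - x y' + 4y = 0
for x in (-1.0, -0.3, 0.0, 0.5, 2.0):
    y, yp, ypp = 2 * x**2 - 1, 4 * x, 4
    assert abs((1 - x**2) * ypp - x * yp + 4 * y) < 1e-12
print(2 * 1**2 - 1)                 # 1  (normalization y(1) = 1)
```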

25. x sin y − e^x = c =⇒ x cos y·dy/dx + sin y − e^x = 0 =⇒ dy/dx = (e^x − sin y)/(x cos y).

26. xy² + 2y − x = c =⇒ 2xy·dy/dx + y² + 2·dy/dx − 1 = 0 =⇒ dy/dx = (1 − y²)/(2(xy + 1)).

27. e^(xy) − x = c =⇒ e^(xy)[x·dy/dx + y] − 1 = 0 =⇒ x·e^(xy)·dy/dx + y·e^(xy) = 1 =⇒ dy/dx = (1 − y·e^(xy))/(x·e^(xy)). The initial condition y(1) = 0 gives e^0 − 1 = c =⇒ c = 0. Therefore, e^(xy) − x = 0, so that y = (ln x)/x.

28. e^(y/x) + xy² − x = c =⇒ e^(y/x)·(x·dy/dx − y)/x² + 2xy·dy/dx + y² − 1 = 0 =⇒ dy/dx = (x²(1 − y²) + y·e^(y/x))/(x(e^(y/x) + 2x²y)).

29. x²y² − sin x = c =⇒ 2x²y·dy/dx + 2xy² − cos x = 0 =⇒ dy/dx = (cos x − 2xy²)/(2x²y). Since y(π) = 1/π, we have π²(1/π)² − sin π = c =⇒ c = 1. Hence, x²y² − sin x = 1, so that y² = (1 + sin x)/x². Since y(π) = 1/π > 0, take the branch of y with y > 0, so y(x) = √(1 + sin x)/x.

30. dy/dx = sin x =⇒ y(x) = −cos x + c, for all x ∈ R.


31. dy/dx = x^(−1/2) =⇒ y(x) = 2x^(1/2) + c, for all x > 0.

32. d²y/dx² = xe^x =⇒ dy/dx = xe^x − e^x + c1 =⇒ y(x) = xe^x − 2e^x + c1x + c2, for all x ∈ R.

33. d²y/dx² = x^n, where n is an integer.

If n = −1 then dy/dx = ln |x| + c1 =⇒ y(x) = x ln |x| + c1x + c2 for all x ∈ (−∞, 0) or x ∈ (0, ∞).

If n = −2 then dy/dx = −x⁻¹ + c1 =⇒ y(x) = −ln |x| + c1x + c2 for all x ∈ (−∞, 0) or x ∈ (0, ∞).

If n ≠ −1 and n ≠ −2 then dy/dx = x^(n+1)/(n + 1) + c1 =⇒ y(x) = x^(n+2)/((n + 1)(n + 2)) + c1x + c2 for all x ∈ R (with x ≠ 0 when n ≤ −3).
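The generic formula in Problem 33 can be spot-checked with a second difference; the choice n = 3 below is a hypothetical example, not from the text:

```python
# y(x) = x^(n+2) / ((n+1)(n+2)) should satisfy y'' = x^n; try n = 3
n = 3
y = lambda x: x**(n + 2) / ((n + 1) * (n + 2))
h, x = 1e-4, 1.7
ypp = (y(x + h) - 2 * y(x) + y(x - h)) / h**2   # centered second difference
assert abs(ypp - x**n) < 1e-5
print("ok")
```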

34. dy/dx = ln x =⇒ y(x) = x ln x − x + c. Since y(1) = 2, 2 = 1·(0) − 1 + c =⇒ c = 3. Thus, y(x) = x ln x − x + 3.

35. d²y/dx² = cos x =⇒ dy/dx = sin x + c1 =⇒ y(x) = −cos x + c1x + c2.

Thus, y′(0) = 1 =⇒ c1 = 1, and y(0) = 2 =⇒ c2 = 3. Thus, y(x) = 3 + x − cos x.

36. d³y/dx³ = 6x =⇒ d²y/dx² = 3x² + c1 =⇒ dy/dx = x³ + c1x + c2 =⇒ y = (1/4)x⁴ + (1/2)c1x² + c2x + c3.

Thus, y′′(0) = 4 =⇒ c1 = 4, y′(0) = −1 =⇒ c2 = −1, and y(0) = 1 =⇒ c3 = 1. Thus, y(x) = (1/4)x⁴ + 2x² − x + 1.

37. y′′ = xe^x =⇒ y′ = xe^x − e^x + c1 =⇒ y = xe^x − 2e^x + c1x + c2. Thus, y′(0) = 4 =⇒ c1 = 5, and y(0) = 3 =⇒ c2 = 5. Thus, y(x) = xe^x − 2e^x + 5x + 5.

38. Starting with y(x) = c1·e^x + c2·e^(−x), we find that y′(x) = c1·e^x − c2·e^(−x) and y′′(x) = c1·e^x + c2·e^(−x). Thus, y′′ − y = 0, so y(x) = c1·e^x + c2·e^(−x) is a solution to the differential equation on (−∞, ∞). Next we establish that every solution to the differential equation has the form c1·e^x + c2·e^(−x). Suppose that y = f(x) is any solution to the differential equation. Then according to Theorem 1.2.15, y = f(x) is the unique solution to the initial-value problem

y′′ − y = 0, y(0) = f(0), y′(0) = f′(0).

However, consider the function

y(x) = [(f(0) + f′(0))/2]·e^x + [(f(0) − f′(0))/2]·e^(−x).

This is of the form y(x) = c1·e^x + c2·e^(−x), where c1 = (f(0) + f′(0))/2 and c2 = (f(0) − f′(0))/2, and therefore solves the differential equation y′′ − y = 0. Furthermore, evaluating this function at x = 0 yields

y(0) = f(0) and y′(0) = f′(0).

Consequently, this function solves the initial-value problem above. However, by assumption, y(x) = f(x) solves the same initial-value problem. Owing to the uniqueness of the solution to this initial-value problem, it follows that these two solutions are the same:

f(x) = c1·e^x + c2·e^(−x).

Consequently, every solution to the differential equation has the form y(x) = c1·e^x + c2·e^(−x), and therefore this is the general solution on any interval I.

39. d²y/dx² = e^(−x) =⇒ dy/dx = −e^(−x) + c1 =⇒ y(x) = e^(−x) + c1x + c2. Thus, y(0) = 1 =⇒ c2 = 0, and y(1) = 0 =⇒ c1 = −1/e. Hence, y(x) = e^(−x) − x/e.

40. d²y/dx² = −6 − 4 ln x =⇒ dy/dx = −2x − 4x ln x + c1 =⇒ y(x) = −2x² ln x + c1x + c2. Since y(1) = 0, c1 + c2 = 0, and since y(e) = 0, ec1 + c2 = 2e². Solving this system yields c1 = 2e²/(e − 1) and c2 = −2e²/(e − 1). Thus, y(x) = 2e²(x − 1)/(e − 1) − 2x² ln x.

41. y(x) = c1 cos x + c2 sin x.

(a) y(0) = 0 =⇒ 0 = c1(1) + c2(0) =⇒ c1 = 0. y(π) = 1 =⇒ 1 = c2(0), which is impossible. No solutions.

(b) y(0) = 0 =⇒ 0 = c1(1) + c2(0) =⇒ c1 = 0. y(π) = 0 =⇒ 0 = c2(0), so c2 can be anything. Infinitely many solutions.

42-47. Use some kind of technology to define each of the given functions. Then use the technology to simplify the expression given on the left-hand side of each differential equation and verify that the result corresponds to the expression on the right-hand side.

48. (a) Use some form of technology to substitute y(x) = a + bx + cx² + dx³ + ex⁴ + fx⁵, where a, b, c, d, e, f are constants, into the given Legendre equation and set the coefficients of each power of x in the resulting equation to zero. The result is:

e = 0, 20f + 18d = 0, e + 2c = 0, 3d + 14b = 0, c + 15a = 0.

Now solve for the constants to find a = c = e = 0, d = −(14/3)b, f = −(9/10)d = (21/5)b. Consequently the corresponding solution to the Legendre equation is

y(x) = bx(1 − (14/3)x² + (21/5)x⁴).

Imposing the normalization condition y(1) = 1 requires 1 = b(1 − 14/3 + 21/5) =⇒ b = 15/8. Consequently the required solution is y(x) = (1/8)x(15 − 70x² + 63x⁴).
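The polynomial obtained in Problem 48 can be substituted back into the Legendre equation with N = 5 (so N(N + 1) = 30) and checked at a sample point:

```python
# P5(x) = (1/8) x (15 - 70 x^2 + 63 x^4) from Problem 48, with its derivatives
P5   = lambda x: x * (15 - 70 * x**2 + 63 * x**4) / 8
dP5  = lambda x: (15 - 210 * x**2 + 315 * x**4) / 8
d2P5 = lambda x: (-420 * x + 1260 * x**3) / 8
x = 0.4
residual = (1 - x**2) * d2P5(x) - 2 * x * dP5(x) + 30 * P5(x)
assert abs(residual) < 1e-9         # Legendre equation with N = 5
print(P5(1))                        # 1.0  (normalization y(1) = 1)
```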

49. (a) J0(x) = Σ_{k=0}^∞ [(−1)^k/(k!)²](x/2)^(2k) = 1 − (1/4)x² + (1/64)x⁴ − ...

(b) A Maple plot of J(0, x, 4) is given in the accompanying figure. From this graph, an approximation to the first positive zero of J0(x) is 2.4. Using the Maple internal function BesselJZeros gives the approximation 2.404825558.

(c) A Maple plot of the functions J0(x) and J(0, x, 4) on the interval [0, 2] is given in the accompanying figure. We see that to the printer resolution, these graphs are indistinguishable. On a larger interval, for example [0, 3], the two graphs would begin to differ dramatically from one another.

(d) By trial and error, we find the smallest value of m to be m = 11. A plot of the functions J(0, x) and J(0, x, 11) is given in the accompanying figure.
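A truncated series like the J(0, x, m) used above can be reproduced directly from the sum in part (a); a minimal sketch:

```python
import math

def J0_partial(x, m):
    """Partial sum of J0(x): sum over k = 0..m of (-1)^k/(k!)^2 (x/2)^(2k)."""
    return sum((-1)**k / math.factorial(k)**2 * (x / 2)**(2 * k)
               for k in range(m + 1))

# the quoted first zero of J0 should nearly annihilate a well-converged sum
print(abs(J0_partial(2.404825558, 20)) < 1e-8)   # True
```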

Figure 0.0.8: Figure for Exercise 49(b) (approximation to the first positive zero of J0(x))

Figure 0.0.9: Figure for Exercise 49(c)

Solutions to Section 1.3

True-False Review:

1. TRUE. This is precisely the remark after Theorem 1.3.2.

2. FALSE. For instance, the differential equation in Example 1.3.6 has no equilibrium solutions.

3. FALSE. This differential equation has equilibrium solutions y(x) = 2 and y(x) = −2.

4. TRUE. For this differential equation, we have f(x, y) = x² + y². Therefore, any equation of the form x² + y² = k is an isocline, by definition.

5. TRUE. Equilibrium solutions are always horizontal lines. These are always parallel to each other.

6. TRUE. The isoclines have the form (x² + y²)/(2y) = k, or x² + y² = 2ky, or x² + (y − k)² = k², so the statement is valid.

Figure 0.0.10: Figure for Exercise 49(d)

7. TRUE. An equilibrium solution is a solution, and two solution curves to the differential equation dy/dx = f(x, y) do not intersect.

Problems:

1. y = cx⁻¹ =⇒ c = xy. Hence, dy/dx = −cx⁻² = −(xy)x⁻² = −y/x.

2. y = cx² =⇒ c = y/x². Hence, dy/dx = 2cx = 2(y/x²)x = 2y/x.

3. x² + y² = 2cx =⇒ (x² + y²)/(2x) = c. Hence, 2x + 2y·dy/dx = 2c = (x² + y²)/x, so that y·dy/dx = (x² + y²)/(2x) − x. Consequently, dy/dx = (y² − x²)/(2xy).

4. y² = cx =⇒ c = y²/x. Hence, 2y·dy/dx = c, so that dy/dx = c/(2y) = y/(2x).

5. 2cy = x² − c² =⇒ c² + 2cy − x² = 0 =⇒ c = (−2y ± √(4y² + 4x²))/2 = −y ± √(x² + y²).

6. y² − x² = c =⇒ 2y·dy/dx − 2x = 0 =⇒ dy/dx = x/y.

7. (x − c)² + (y − c)² = 2c² =⇒ x² − 2cx + y² − 2cy = 0 =⇒ c = (x² + y²)/(2(x + y)). Differentiating the given equation yields 2(x − c) + 2(y − c)·dy/dx = 0, so that 2[x − (x² + y²)/(2(x + y))] + 2[y − (x² + y²)/(2(x + y))]·dy/dx = 0, that is,

dy/dx = −(x² + 2xy − y²)/(y² + 2xy − x²).

8. x² + y² = c =⇒ 2x + 2y·dy/dx = 0 =⇒ dy/dx = −x/y.

Figure 0.0.11: Figure for Exercise 8

9. y = cx³ =⇒ dy/dx = 3cx² = 3(y/x³)x² = 3y/x. The initial condition y(2) = 8 =⇒ 8 = c(2)³ =⇒ c = 1. Thus the unique solution to the initial value problem is y = x³.

Figure 0.0.12: Figure for Exercise 9


10. y² = cx =⇒ 2y dy/dx = c =⇒ 2y dy/dx = y²/x =⇒ dy/dx = y/(2x) =⇒ 2x dy − y dx = 0. The initial condition y(1) = 2 =⇒ c = 4, so that the unique solution to the initial value problem is y² = 4x.


Figure 0.0.13: Figure for Exercise 10

11. (x − c)² + y² = c² =⇒ x² − 2cx + c² + y² = c², so that

x² − 2cx + y² = 0.  (0.0.1)

Differentiating with respect to x yields

2x − 2c + 2y dy/dx = 0.  (0.0.2)

But from (0.0.1), c = (x² + y²)/(2x), which, when substituted into (0.0.2), yields 2x − (x² + y²)/x + 2y dy/dx = 0, that is, dy/dx = (y² − x²)/(2xy). Imposing the initial condition y(2) = 2 =⇒ from (0.0.1) c = 2, so that the unique solution to the initial value problem is y = +√(x(4 − x)).
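As a quick numerical sanity check (not part of the original solutions), the claimed solution can be tested against the differential equation with a central difference; the sample points are chosen arbitrarily inside (0, 4):

```python
import math

# Claimed solution to Exercise 11: y = sqrt(x(4 - x)), with y(2) = 2,
# for the ODE dy/dx = (y^2 - x^2)/(2xy) derived above.
def y(x):
    return math.sqrt(x * (4 - x))

def rhs(x, yv):
    return (yv**2 - x**2) / (2 * x * yv)

h = 1e-6
max_err = max(
    abs((y(x + h) - y(x - h)) / (2 * h) - rhs(x, y(x)))
    for x in [0.5, 1.0, 2.0, 3.0, 3.5]
)
print(max_err < 1e-5)  # True: the residual vanishes to rounding error
```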

12. Let f(x, y) = x sin (x + y), which is continuous for all x, y ∈ R. ∂f/∂y = x cos (x + y), which is continuous for all x, y ∈ R. By Theorem 1.3.2, dy/dx = x sin (x + y), y(x₀) = y₀ has a unique solution on some interval I ⊆ R.

13. dy/dx = x/(x² + 1) · (y² − 9), y(0) = 3.

f(x, y) = x/(x² + 1) · (y² − 9), which is continuous for all x, y ∈ R.

∂f/∂y = 2xy/(x² + 1), which is continuous for all x, y ∈ R.


Figure 0.0.14: Figure for Exercise 11

So the IVP stated above has a unique solution on some interval containing the point (0, 3). By inspection we see that y(x) = 3 is the unique solution.

14. The IVP does not necessarily have a unique solution, since the hypotheses of the existence and uniqueness theorem are not satisfied at (0, 0). This follows since f(x, y) = xy^(1/2), so that ∂f/∂y = (1/2)xy^(−1/2), which is not continuous at (0, 0).

15. (a) f(x, y) = −2xy² =⇒ ∂f/∂y = −4xy. Both of these functions are continuous for all (x, y), and therefore the hypotheses of the existence and uniqueness theorem are satisfied for any (x₀, y₀).

(b) y(x) = 1/(x² + c) =⇒ y′ = −2x/(x² + c)² = −2xy².

(c) y(x) = 1/(x² + c).

(i) y(0) = 1 =⇒ 1 = 1/c =⇒ c = 1. Hence, y(x) = 1/(x² + 1). The solution is valid on the interval (−∞, ∞).

(ii) y(1) = 1 =⇒ 1 = 1/(1 + c) =⇒ c = 0. Hence, y(x) = 1/x². This solution is valid on the interval (0, ∞).

(iii) y(0) = −1 =⇒ −1 = 1/c =⇒ c = −1. Hence, y(x) = 1/(x² − 1). This solution is valid on the interval (−1, 1).

(d) Since, by inspection, y(x) = 0 satisfies the given IVP, it must be the unique solution to the IVP.
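The verification in part (b) can also be spot-checked numerically for several members of the family (a throwaway sketch, not from the manual); c = −1 reproduces the function in (c)(iii):

```python
import math

# Check that y(x) = 1/(x^2 + c) satisfies y' = -2xy^2 for several c,
# using a central-difference approximation to y'.
h = 1e-6
for c in [1.0, 2.0, -1.0]:
    y = lambda x, c=c: 1.0 / (x**2 + c)
    for x in [0.3, 1.5, 2.0]:       # chosen to avoid x^2 + c = 0
        deriv = (y(x + h) - y(x - h)) / (2 * h)
        assert abs(deriv + 2 * x * y(x)**2) < 1e-4
print("family verified")
```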

16. (a) Both f(x, y) = y(y − 1) and ∂f/∂y = 2y − 1 are continuous at all points (x, y). Consequently, the hypotheses of the existence and uniqueness theorem are satisfied by the given IVP for any x₀, y₀.

(b) Equilibrium solutions: y(x) = 0, y(x) = 1.



Figure 0.0.15: Figure for Exercise 15c(i)


Figure 0.0.16: Figure for Exercise 15c(ii)

(c) Differentiating the given differential equation yields d²y/dx² = (2y − 1) dy/dx = (2y − 1)y(y − 1). Hence the solution curves are concave up for 0 < y < 1/2 and for y > 1, and concave down for y < 0 and for 1/2 < y < 1.

(d) The solutions will be bounded provided 0 ≤ y₀ ≤ 1.

17. y′ = 4x. There are no equilibrium solutions. The slope of the solution curves is positive for x > 0 and is negative for x < 0. The isoclines are the lines x = k/4.

Slope of Solution Curve     Equation of Isocline
        −4                      x = −1
        −2                      x = −1/2
         0                      x = 0
         2                      x = 1/2
         4                      x = 1



Figure 0.0.17: Figure for Exercise 15c(iii)


Figure 0.0.18: Figure for Exercise 16(d)

18. y′ = 1/x. There are no equilibrium solutions. The slope of the solution curves is positive for x > 0 and increases without bound as x → 0⁺. The slope of the curves is negative for x < 0 and decreases without bound as x → 0⁻. The isoclines are the lines 1/x = k.



Figure 0.0.19: Figure for Exercise 17

Slope of Solution Curve     Equation of Isocline
        ±4                      x = ±1/4
        ±2                      x = ±1/2
        ±1/2                    x = ±2
        ±1/4                    x = ±4
        ±1/10                   x = ±10


Figure 0.0.20: Figure for Exercise 18

19. y′ = x + y. There are no equilibrium solutions. The slope of the solution curves is positive for y > −x, and negative for y < −x. The isoclines are the lines y + x = k.

Slope of Solution Curve     Equation of Isocline
        −2                      y = −x − 2
        −1                      y = −x − 1
         0                      y = −x
         1                      y = −x + 1
         2                      y = −x + 2


Since the slope of the solution curve along the isocline y = −x − 1 coincides with the slope of the isocline, it follows that y = −x − 1 is a solution to the differential equation. Differentiating the given differential equation yields y″ = 1 + y′ = 1 + x + y. Hence the solution curves are concave up for y > −x − 1, and concave down for y < −x − 1. Putting this information together leads to the slope field in the accompanying figure.
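The observation that the isocline y = −x − 1 is itself a solution can be confirmed directly (an illustrative check, not from the manual):

```python
# On the line y = -x - 1, the slope prescribed by y' = x + y is
# x + (-x - 1) = -1, which is exactly the slope of the line itself.
for x in [-2.0, 0.0, 1.5, 3.0]:
    y = -x - 1
    assert x + y == -1.0
print("y = -x - 1 solves y' = x + y")
```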


Figure 0.0.21: Figure for Exercise 19

20. y′ = x/y. There are no equilibrium solutions. The slope of the solution curves is zero when x = 0. The solutions have a vertical tangent line at all points along the x-axis (except the origin). Differentiating the differential equation yields y″ = 1/y − (x/y²)y′ = 1/y − x²/y³ = (1/y³)(y² − x²). Hence the solution curves are concave up for y > 0 with y² > x², and for y < 0 with y² < x²; they are concave down for y > 0 with y² < x², and for y < 0 with y² > x². The isoclines are the lines x/y = k.

Slope of Solution Curve     Equation of Isocline
        ±2                      y = ±x/2
        ±1                      y = ±x
        ±1/2                    y = ±2x
        ±1/4                    y = ±4x
        ±1/10                   y = ±10x

Note that y = ±x are solutions to the differential equation.

21. y′ = −4x/y. The slope is zero when x = 0 (y ≠ 0). The solutions have a vertical tangent line at all points along the x-axis (except the origin). The isoclines are the lines −4x/y = k. Some values are given in the table below.



Figure 0.0.22: Figure for Exercise 20

Slope of Solution Curve     Equation of Isocline
        ±1                      y = ∓4x
        ±2                      y = ∓2x
        ±3                      y = ∓4x/3

Differentiating the given differential equation yields y″ = −4/y + (4x/y²)y′ = −4/y − 16x²/y³ = −4(y² + 4x²)/y³. Consequently the solution curves are concave up for y < 0, and concave down for y > 0. Putting this information together leads to the slope field in the accompanying figure.

22. y′ = x²y. Equilibrium solution: y(x) = 0 =⇒ no solution curve can cross the x-axis. Slope: zero when x = 0 or y = 0; positive when y > 0 (x ≠ 0), negative when y < 0 (x ≠ 0). Differentiating the given differential equation yields d²y/dx² = 2xy + x² dy/dx = 2xy + x⁴y = xy(2 + x³). So, when y > 0, the solution curves are concave up for x ∈ (−∞, (−2)^(1/3)) and for x > 0, and are concave down for x ∈ ((−2)^(1/3), 0). When y < 0, the solution curves are concave up for x ∈ ((−2)^(1/3), 0), and concave down for x ∈ (−∞, (−2)^(1/3)) and for x > 0. The isoclines are the curves x²y = k.

Slope of Solution Curve     Equation of Isocline
        ±2                      y = ±2/x²
        ±1                      y = ±1/x²
        ±1/2                    y = ±1/(2x²)
        ±1/4                    y = ±1/(4x²)
        ±1/10                   y = ±1/(10x²)
         0                      y = 0

23. y′ = x² cos y. The slope is zero when x = 0. There are equilibrium solutions when y = (2k + 1)π/2. The slope field is best sketched using technology. The accompanying figure gives the slope field for −π/2 < y < 3π/2.

24. y′ = x² + y². The slope of the solution curves is zero at the origin, and positive at all other points. There are no equilibrium solutions. The isoclines are the circles x² + y² = k.



Figure 0.0.23: Figure for Exercise 21


Figure 0.0.24: Figure for Exercise 22

Slope of Solution Curve     Equation of Isocline
         1                      x² + y² = 1
         2                      x² + y² = 2
         3                      x² + y² = 3
         4                      x² + y² = 4



Figure 0.0.25: Figure for Exercise 23


Figure 0.0.26: Figure for Exercise 24

25. dT/dt = −(1/80)(T − 70). Equilibrium solution: T(t) = 70. The slope of the solution curves is positive for T > 70, and negative for T < 70. d²T/dt² = −(1/80) dT/dt = (1/6400)(T − 70). Hence the solution curves are concave up for T > 70, and concave down for T < 70. The isoclines are the horizontal lines −(1/80)(T − 70) = k.

Slope of Solution Curve     Equation of Isocline
       −1/4                     T = 90
       −1/5                     T = 86
         0                      T = 70
        1/5                     T = 54
        1/4                     T = 50

26. y′ = −2xy.

27. y′ = (x sin x)/(1 + y²).



Figure 0.0.27: Figure for Exercise 25


Figure 0.0.28: Figure for Exercise 26


Figure 0.0.29: Figure for Exercise 27

28. y′ = 3x − y.



Figure 0.0.30: Figure for Exercise 28

29. y′ = 2x² sin y.


Figure 0.0.31: Figure for Exercise 29

30. y′ = (2 + y²)/(3 + 0.5x²).

31. y′ = (1 − y²)/(2 + 0.5x²).

32. (a) Slope field for the differential equation y′ = x⁻¹(3 sin x − y).

(b) Slope field with solution curves included. The figure suggests that the solutions to the differential equation are unbounded as x → 0⁺.

(c) Slope field with the solution curve corresponding to the initial condition y(π/2) = 6/π. This solution curve is bounded as x → 0⁺.

(d) In the accompanying figure we have sketched several solution curves on the interval (0, 15]. The figure suggests that the solution curves approach the x-axis as x → ∞.



Figure 0.0.32: Figure for Exercise 30


Figure 0.0.33: Figure for Exercise 31


Figure 0.0.34: Figure for Exercise 32(a)

33. (a) Differentiating the given equation gives dy/dx = 2kx = 2y/x. Hence the differential equation of the



Figure 0.0.35: Figure for Exercise 32(b)


Figure 0.0.36: Figure for Exercise 32(c)


Figure 0.0.37: Figure for Exercise 32(d)

orthogonal trajectories is dy/dx = −x/(2y).



Figure 0.0.38: Figure for Exercise 33(a)

(b) The orthogonal trajectories appear to be ellipses. This can be verified by integrating the differential equation derived in (a).

34. If a > 0, then as illustrated in the following slope field (a = 0.5, b = 1), it appears that lim_(t→∞) i(t) = b/a.


Figure 0.0.39: Figure for Exercise 34 when a > 0

If a < 0, then as illustrated in the following slope field (a = −0.5, b = 1), it appears that i(t) diverges as t → ∞.

If a = 0 and b ≠ 0, then once more i(t) diverges as t → ∞. The accompanying figure shows a representative case when b > 0. Here we see that lim_(t→∞) i(t) = +∞. If b < 0, then lim_(t→∞) i(t) = −∞.

If a = b = 0, then the general solution to the differential equation is i(t) = i₀, where i₀ is a constant.

Solutions to Section 1.4



Figure 0.0.40: Figure for Exercise 34 when a < 0


Figure 0.0.41: Figure for Exercise 34 when a = 0

True-False Review:

1. TRUE. The differential equation dy/dx = f(x)g(y) can be written (1/g(y)) dy/dx = f(x), which is the proper form, according to Definition 1.4.1, for a separable differential equation.

2. TRUE. A separable differential equation is a first-order differential equation, so the general solution contains one constant. The value of that constant can be determined from an initial condition, as usual.

3. TRUE. Newton’s Law of Cooling is usually expressed as dT/dt = −k(T − Tₘ), and this can be rewritten as (1/(T − Tₘ)) dT/dt = −k, and this form shows that the equation is separable.

4. FALSE. The expression x² + y² cannot be separated in the form f(x)g(y), so the equation is not separable.

5. FALSE. The expression x sin(xy) cannot be separated in the form f(x)g(y), so the equation is not separable.


6. TRUE. We can write the given equation as e^(−y) dy/dx = e^x, which is the proper form for a separable equation.

7. TRUE. We can write the given equation as (1 + y²) dy/dx = 1/x², which is the proper form for a separable equation.

8. FALSE. The expression (x + 4y)/(4x + y) cannot be separated in the form f(x)g(y), so the equation is not separable.

9. TRUE. We can write (x³y + x²y²)/(x² + xy) = xy, so we can write the given differential equation as (1/y) dy/dx = x, which is the proper form for a separable equation.

Problems:

1. Separating the variables and integrating yields ∫dy/y = 2∫x dx =⇒ ln |y| = x² + c₁ =⇒ y(x) = ce^(x²).
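A numerical spot check of this general solution (a quick sketch, not part of the manual):

```python
import math

# Verify that y(x) = c e^{x^2} satisfies dy/dx = 2xy for a few values of c,
# using a central-difference approximation to y'.
h = 1e-6
for c in [-1.0, 0.5, 2.0]:
    y = lambda x, c=c: c * math.exp(x**2)
    for x in [-1.0, 0.0, 0.7]:
        deriv = (y(x + h) - y(x - h)) / (2 * h)
        assert abs(deriv - 2 * x * y(x)) < 1e-4
print("general solution verified")
```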

2. Separating the variables and integrating yields ∫y⁻² dy = ∫dx/(x² + 1) =⇒ y(x) = −1/(tan⁻¹x + c).

3. Separating the variables and integrating yields ∫e^y dy = ∫e^(−x) dx =⇒ e^y + e^(−x) = c =⇒ y(x) = ln (c − e^(−x)).

4. Separating the variables and integrating yields ∫dy/y = ∫(ln x)⁻¹/x dx =⇒ y(x) = c ln x.

5. Separating the variables and integrating yields ∫dx/(x − 2) = ∫dy/y =⇒ ln |x − 2| − ln |y| = c₁ =⇒ y(x) = c(x − 2).

6. Separating the variables and integrating yields ∫dy/(y − 1) = ∫2x/(x² + 3) dx =⇒ ln |y − 1| = ln |x² + 3| + c₁ =⇒ y(x) = c(x² + 3) + 1.

7. y − x dy/dx = 3 − 2x² dy/dx =⇒ x(2x − 1) dy/dx = 3 − y. Separating the variables and integrating yields

−∫dy/(y − 3) = ∫dx/(x(2x − 1)) =⇒ −ln |y − 3| = −∫dx/x + ∫2/(2x − 1) dx
=⇒ −ln |y − 3| = −ln |x| + ln |2x − 1| + c₁
=⇒ x/((y − 3)(2x − 1)) = c₂ =⇒ y(x) = (cx − 3)/(2x − 1).

8. dy/dx = cos (x − y)/(sin x sin y) − 1 =⇒ dy/dx = (cos x cos y)/(sin x sin y) =⇒ ∫(sin y/cos y) dy = ∫(cos x/sin x) dx =⇒ −ln |cos y| = ln |sin x| + c₁ =⇒ cos y = c csc x.


9. dy/dx = x(y² − 1)/(2(x − 2)(x − 1)) =⇒ ∫dy/((y + 1)(y − 1)) = (1/2)∫x dx/((x − 2)(x − 1)), y ≠ ±1. Thus,

−(1/2)∫dy/(y + 1) + (1/2)∫dy/(y − 1) = (1/2)(2∫dx/(x − 2) − ∫dx/(x − 1))
=⇒ −ln |y + 1| + ln |y − 1| = 2 ln |x − 2| − ln |x − 1| + c₁
=⇒ (y − 1)/(y + 1) = c(x − 2)²/(x − 1) =⇒ y(x) = ((x − 1) + c(x − 2)²)/((x − 1) − c(x − 2)²).

By inspection we see that y(x) = 1 and y(x) = −1 are solutions of the given differential equation. The former is included in the above solution when c = 0.

10. dy/dx = (x²y − 32)/(16 − x²) + 2 =⇒ ∫dy/(y − 2) = ∫x²/(16 − x²) dx =⇒ ln |y − 2| = −∫(1 + 16/(x² − 16)) dx =⇒ ln |y − 2| = −x − 16∫dx/(x² − 16) =⇒ ln |y − 2| = −x − 16(−(1/8)∫dx/(x + 4) + (1/8)∫dx/(x − 4)) =⇒ ln |y − 2| = −x + 2 ln |x + 4| − 2 ln |x − 4| + c₁ =⇒ y(x) = 2 + c((x + 4)/(x − 4))² e^(−x).

11. (x − a)(x − b) dy/dx − (y − c) = 0 =⇒ ∫dy/(y − c) = ∫dx/((x − a)(x − b)) =⇒ ∫dy/(y − c) = (1/(a − b))∫(1/(x − a) − 1/(x − b)) dx =⇒ ln |y − c| = ln [c₁ |(x − a)/(x − b)|^(1/(a−b))] =⇒ |(y − c)((x − b)/(x − a))^(1/(a−b))| = c₁ =⇒ y − c = c₂((x − a)/(x − b))^(1/(a−b)) =⇒ y(x) = c + c₂((x − a)/(x − b))^(1/(a−b)).

12. (x² + 1) dy/dx + y² = −1 =⇒ ∫dy/(1 + y²) = −∫dx/(1 + x²) =⇒ tan⁻¹y = −tan⁻¹x + c, but y(0) = 1, so c = π/4. Thus, tan⁻¹y = π/4 − tan⁻¹x, or y(x) = (1 − x)/(1 + x).

13. (1 − x²) dy/dx + xy = ax =⇒ ∫dy/(a − y) = −(1/2)∫(−2x/(1 − x²)) dx =⇒ −ln |a − y| = −(1/2) ln |1 − x²| + c₁ =⇒ y(x) = a + c√(1 − x²), but y(0) = 2a, so c = a, and therefore y(x) = a(1 + √(1 − x²)).

14. dy/dx = 1 − sin (x + y)/(sin y cos x) =⇒ dy/dx = −tan x cot y =⇒ −∫(sin y/cos y) dy = ∫(sin x/cos x) dx =⇒ −ln |cos x cos y| = c, but y(π/4) = π/4, so c = ln 2. Hence, −ln |cos x cos y| = ln 2 =⇒ y(x) = cos⁻¹((1/2) sec x).

15. dy/dx = y³ sin x =⇒ ∫dy/y³ = ∫sin x dx for y ≠ 0. Thus −1/(2y²) = −cos x + c. However, we cannot impose the initial condition y(0) = 0 on the last equation since it is not defined at y = 0. But, by inspection, y(x) = 0 is a solution to the given differential equation and further, y(0) = 0; thus, the unique solution to the initial value problem is y(x) = 0.

16. dy/dx = (2/3)(y − 1)^(1/2) =⇒ ∫dy/(y − 1)^(1/2) = (2/3)∫dx if y ≠ 1 =⇒ 2(y − 1)^(1/2) = (2/3)x + c, but y(1) = 1, so c = −2/3 =⇒ 2√(y − 1) = (2/3)x − 2/3 =⇒ √(y − 1) = (1/3)(x − 1). This does not contradict the Existence-Uniqueness theorem, because the hypothesis of the theorem is not satisfied when y = 1 (in particular, at the initial point (1, 1)).


17. (a) m dv/dt = mg − kv² =⇒ (m/(k[(mg/k) − v²])) dv = dt. If we let a = √(mg/k), then the preceding equation can be written as (m/k)∫dv/(a² − v²) = ∫dt, which can be integrated directly to obtain

(m/(2ak)) ln ((a + v)/(a − v)) = t + c,

that is, upon exponentiating both sides,

(a + v)/(a − v) = c₁e^(2akt/m).

Imposing the initial condition v(0) = 0 yields c₁ = 1, so that

(a + v)/(a − v) = e^(2akt/m).

Therefore,

v(t) = a(e^(2akt/m) − 1)/(e^(2akt/m) + 1),

which, since 2ak/m = 2g/a (because a² = mg/k), can be written in the equivalent form

v(t) = a tanh(gt/a).

(b) No. As t → ∞, v → a, and as t → 0⁺, v → 0.

(c) v(t) = a tanh(gt/a) =⇒ dy/dt = a tanh(gt/a) =⇒ y(t) = a∫tanh(gt/a) dt =⇒ y(t) = (a²/g) ln (cosh(gt/a)) + c₁, and if y(0) = 0, then y(t) = (a²/g) ln [cosh(gt/a)].

18. The required curve is the solution curve to the IVP dy/dx = −x/(4y), y(0) = 1/2. Separating the variables in the differential equation yields 4y dy = −x dx, which can be integrated directly to obtain 2y² = −x²/2 + c. Imposing the initial condition we obtain c = 1/2, so that the solution curve has the equation 2y² = −x²/2 + 1/2, or equivalently, x² + 4y² = 1.
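The curve through (0, 1/2) with slope −x/(4y) is the ellipse x² + 4y² = 1, whose upper branch can be checked against the slope condition numerically (a quick check, not from the manual):

```python
import math

# Upper branch of x^2 + 4y^2 = 1, i.e. y = sqrt(1 - x^2)/2, with y(0) = 1/2.
def y(x):
    return math.sqrt(1 - x**2) / 2

h = 1e-6
err = max(
    abs((y(x + h) - y(x - h)) / (2 * h) + x / (4 * y(x)))
    for x in [-0.5, 0.0, 0.5]
)
print(err < 1e-6)  # True: dy/dx = -x/(4y) on the curve
```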

19. The required curve is the solution curve to the IVP dy/dx = e^(x−y), y(3) = 1. Separating the variables in the differential equation yields e^y dy = e^x dx, which can be integrated directly to obtain e^y = e^x + c. Imposing the initial condition we obtain c = e − e³, so that the solution curve has the equation e^y = e^x + e − e³, or equivalently, y = ln (e^x + e − e³).

20. The required curve is the solution curve to the IVP dy/dx = x²y², y(−1) = 1. Separating the variables in the differential equation yields (1/y²) dy = x² dx, which can be integrated directly to obtain −1/y = (1/3)x³ + c. Imposing the initial condition we obtain c = −2/3, so that the solution curve has the equation y = −1/((1/3)x³ − 2/3), or equivalently, y = 3/(2 − x³).


21. (a) Separating the variables in the given differential equation yields (1/(1 + v²)) dv = −dt. Integrating we obtain tan⁻¹(v) = −t + c. The initial condition v(0) = v₀ implies that c = tan⁻¹(v₀), so that tan⁻¹(v) = −t + tan⁻¹(v₀). The object will come to rest if there is a time t_r at which the velocity is zero. To determine t_r, we set v = 0 in the previous equation, which yields 0 = −t_r + tan⁻¹(v₀). Consequently, t_r = tan⁻¹(v₀). The object does not remain at rest, since we see from the given differential equation that dv/dt < 0 at t = t_r, and so v is decreasing with time. Consequently v passes through zero and becomes negative for t > t_r.

(b) From the chain rule we have dv/dt = (dv/dx)(dx/dt) = v dv/dx. Substituting this result into the differential equation (1.4.17) yields v dv/dx = −(1 + v²). We now separate the variables: (v/(1 + v²)) dv = −dx. Integrating we obtain ln (1 + v²) = −2x + c. Imposing the initial conditions v(0) = v₀, x(0) = 0 implies that c = ln (1 + v₀²), so that ln (1 + v²) = −2x + ln (1 + v₀²). When the object comes to rest (v = 0), the distance traveled by the object is x = (1/2) ln (1 + v₀²).

22. (a) dv/dt = −kvⁿ =⇒ v^(−n) dv = −k dt.

n ≠ 1: =⇒ (1/(1 − n))v^(1−n) = −kt + c. Imposing the initial condition v(0) = v₀ yields c = (1/(1 − n))v₀^(1−n), so that v = [v₀^(1−n) + (n − 1)kt]^(1/(1−n)). The object comes to rest in a finite time if there is a positive value of t for which v = 0.

n = 1: =⇒ integrating v⁻¹ dv = −k dt and imposing the initial condition yields v = v₀e^(−kt), and the object does not come to rest in a finite amount of time.

(b) If n ≠ 1, 2, then dx/dt = [v₀^(1−n) + (n − 1)kt]^(1/(1−n)), where x(t) denotes the distance traveled by the object. Consequently, x(t) = −(1/(k(2 − n)))[v₀^(1−n) + (n − 1)kt]^((2−n)/(1−n)) + c. Imposing the initial condition x(0) = 0 yields c = (1/(k(2 − n)))v₀^(2−n), so that x(t) = −(1/(k(2 − n)))[v₀^(1−n) + (n − 1)kt]^((2−n)/(1−n)) + (1/(k(2 − n)))v₀^(2−n). For 1 < n < 2, we have (2 − n)/(1 − n) < 0, so that lim_(t→∞) x(t) = v₀^(2−n)/(k(2 − n)). Hence the maximum distance that the object can travel in a finite time is less than v₀^(2−n)/(k(2 − n)).

If n = 1, then we can integrate to obtain x(t) = (v₀/k)(1 − e^(−kt)), where we have imposed the initial condition x(0) = 0. Consequently, lim_(t→∞) x(t) = v₀/k. Thus in this case the maximum distance that the object can travel in a finite time is less than v₀/k.

(c) If n > 2, then x(t) = −(1/(k(2 − n)))[v₀^(1−n) + (n − 1)kt]^((2−n)/(1−n)) + (1/(k(2 − n)))v₀^(2−n) is still valid. However, in this case (2 − n)/(1 − n) > 0, and so lim_(t→∞) x(t) = +∞. Consequently, there is no limit to the distance that the object can travel.

If n = 2, then we return to v = [v₀^(1−n) + (n − 1)kt]^(1/(1−n)). In this case dx/dt = (v₀⁻¹ + kt)⁻¹, which can be integrated directly to obtain x(t) = (1/k) ln (1 + v₀kt), where we have imposed the initial condition x(0) = 0. Once more we see that lim_(t→∞) x(t) = +∞, so that there is no limit to the distance that the object can travel.


23. Solving p = p₀(ρ/ρ₀)^γ for ρ gives ρ = ρ₀(p/p₀)^(1/γ). Consequently the given differential equation can be written as dp = −gρ₀(p/p₀)^(1/γ) dy, or equivalently, p^(−1/γ) dp = −(gρ₀/p₀^(1/γ)) dy. This can be integrated directly to obtain (γ/(γ − 1))p^((γ−1)/γ) = −gρ₀y/p₀^(1/γ) + c. At the surface of the Earth (y = 0) we have p = p₀. Imposing this initial condition on the preceding solution gives c = (γ/(γ − 1))p₀^((γ−1)/γ). Substituting this value of c into the general solution to the differential equation we find, after some simplification, p^((γ−1)/γ) = p₀^((γ−1)/γ)[1 − (γ − 1)ρ₀gy/(γp₀)], so that p = p₀[1 − (γ − 1)ρ₀gy/(γp₀)]^(γ/(γ−1)).

24. dT/dt = −k(T − Tₘ) =⇒ dT/dt = −k(T − 75) =⇒ dT/(T − 75) = −k dt =⇒ ln |T − 75| = −kt + c₁ =⇒ T(t) = 75 + ce^(−kt). T(0) = 135 =⇒ c = 60, so T = 75 + 60e^(−kt). T(1) = 95 =⇒ 95 = 75 + 60e^(−k) =⇒ k = ln 3 =⇒ T(t) = 75 + 60e^(−t ln 3). Now if T(t) = 615, then 615 = 75 + 60e^(−t ln 3) =⇒ t = −2 h. Thus the object was placed in the room at 2 p.m.
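The arithmetic behind k = ln 3 and t = −2 can be reproduced in a few lines (a throwaway helper, not part of the manual):

```python
import math

# T(t) = 75 + 60 e^{-kt}; T(1) = 95 fixes k, then solve T(t) = 615 for t.
k = -math.log((95 - 75) / 60)          # = ln 3
t = -math.log((615 - 75) / 60) / k     # 60 e^{-kt} = 540  =>  e^{-kt} = 9
print(k, t)
```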

25. dT/dt = −k(T − 450) =⇒ T(t) = 450 + Ce^(−kt). T(0) = 50 =⇒ C = −400, so T(t) = 450 − 400e^(−kt), and T(20) = 150 =⇒ k = (1/20) ln (4/3); hence, T(t) = 450 − 400(3/4)^(t/20).

(i) T(40) = 450 − 400(3/4)² = 225 °F.

(ii) T(t) = 350 = 450 − 400(3/4)^(t/20) =⇒ (3/4)^(t/20) = 1/4 =⇒ t = 20 ln 4/ln(4/3) ≈ 96.4 minutes.
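Both numbers can be confirmed directly from the fitted model (a quick check, not from the manual):

```python
import math

# T(t) = 450 - 400 (3/4)^{t/20}, from T(0) = 50 and T(20) = 150.
T = lambda t: 450 - 400 * (3 / 4) ** (t / 20)
t350 = 20 * math.log(4) / math.log(4 / 3)   # time at which T = 350
print(T(40), t350)
```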

26. dT/dt = −k(T − 34) =⇒ dT/(T − 34) = −k dt =⇒ T(t) = 34 + ce^(−kt). T(0) = 38 =⇒ c = 4, so that T(t) = 34 + 4e^(−kt). T(1) = 36 =⇒ k = ln 2; hence, T(t) = 34 + 4e^(−t ln 2). Now T(t) = 98 =⇒ 34 + 4e^(−t ln 2) = 98 =⇒ 2^(−t) = 16 =⇒ t = −4 h. Thus T(−4) = 98, and Holmes was right; the time of death was 10 a.m.

27. T(t) = 75 + ce^(−kt). T(10) = 415 =⇒ 75 + ce^(−10k) = 415 =⇒ 340 = ce^(−10k), and T(20) = 347 =⇒ 75 + ce^(−20k) = 347 =⇒ 272 = ce^(−20k). Solving these two equations yields k = (1/10) ln (5/4) and c = 425; hence, T = 75 + 425(4/5)^(t/10).

(a) Furnace temperature: T(0) = 500 °F.

(b) If T(t) = 100, then 100 = 75 + 425(4/5)^(t/10) =⇒ t = 10 ln 17/ln(5/4) ≈ 126.96 minutes. Thus the temperature of the coal was 100 °F at 6:07 p.m.
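The two-reading fit can be replayed numerically (an illustrative sketch, not part of the manual):

```python
import math

# Two readings T(10) = 415, T(20) = 347 with T(t) = 75 + c e^{-kt}:
# 340 = c e^{-10k}, 272 = c e^{-20k}  =>  e^{10k} = 340/272 = 5/4.
k = math.log(340 / 272) / 10
c = 340 * math.exp(10 * k)                       # = 425
t100 = 10 * math.log(c / 25) / math.log(5 / 4)   # solve 75 + c(4/5)^{t/10} = 100
print(c + 75, t100)
```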

28. dT/dt = −k(T − 72) =⇒ dT/(T − 72) = −k dt =⇒ T(t) = 72 + ce^(−kt). Since dT/dt = −20 when T = 150, −k(150 − 72) = −20, or k = 10/39. Since T(1) = 150 =⇒ 150 = 72 + ce^(−10/39) =⇒ c = 78e^(10/39); consequently, T(t) = 72 + 78e^(10(1−t)/39).

(i) Initial temperature of the object: t = 0 =⇒ T(0) = 72 + 78e^(10/39) ≈ 173 °F.

(ii) Rate of change of the temperature after 10 minutes: T(10) = 72 + 78e^(−30/13), so after 10 minutes, dT/dt = −(10/39)(72 + 78e^(−30/13) − 72) = −20e^(−30/13) ≈ −2 °F per minute.

Solutions to Section 1.5


True-False Review:

1. TRUE. The differential equation for such a population growth is dP/dt = kP, where P(t) is the population as a function of time, and this is the Malthusian growth model described at the beginning of this section.

2. FALSE. The initial population could be greater than the carrying capacity, although in this case the population will asymptotically decrease towards the value of the carrying capacity.

3. TRUE. The differential equation governing the logistic model is (1.5.2), which is certainly separable as

(C/(P(C − P))) dP/dt = r.

Likewise, the differential equation governing the Malthusian growth model is dP/dt = kP, and this is separable as (1/P) dP/dt = k.

4. TRUE. As (1.5.3) shows, as t → ∞, the population does indeed tend to the carrying capacity C independently of the initial population P₀. As it does so, its rate of change dP/dt slows to zero (this is best seen from (1.5.2) with P ≈ C).

5. TRUE. Every five minutes, the population doubles (increases 2-fold). Over 30 minutes, this population will double a total of 6 times, for an overall 2⁶ = 64-fold increase.

6. TRUE. An 8-fold increase would take 30 years, and a 16-fold increase would take 40 years. Therefore, a 10-fold increase would take between 30 and 40 years.

7. FALSE. The growth rate is dP/dt = kP, and so as P changes, dP/dt changes. Therefore, it is not always constant.

8. TRUE. From (1.5.2), the equilibrium solutions are P(t) = 0 and P(t) = C, where C is the carrying capacity of the population.

9. FALSE. If the initial population is in the interval (C/2, C), then although it is less than the carrying capacity, its concavity does not change. To get a true statement, it should be stated instead that the initial population is less than half of the carrying capacity.

10. TRUE. Since P′(t) = kP, then P″(t) = kP′(t) = k²P > 0 for all t. Therefore, the concavity is always positive, and does not change, regardless of the initial population.

Problems:

1. dP/dt = kP =⇒ P(t) = P₀e^(kt). Since P(0) = 10, then P = 10e^(kt). Since P(3) = 20, then 2 = e^(3k) =⇒ k = (ln 2)/3. Thus P(t) = 10e^((t/3) ln 2). Therefore, P(24) = 10e^((24/3) ln 2) = 10 · 2⁸ = 2560 bacteria.

2. Using P(t) = P₀e^(kt) we obtain P(10) = 5000 =⇒ 5000 = P₀e^(10k) and P(12) = 6000 =⇒ 6000 = P₀e^(12k), which implies that e^(2k) = 6/5 =⇒ k = (1/2) ln (6/5). Hence, P(0) = 5000(5/6)⁵ ≈ 2009.4. Also, P = 2P₀ when t = (1/k) ln 2 = 2 ln 2/ln(6/5) ≈ 7.6 h.

3. From P(t) = P₀e^(kt) and P(0) = 2000 it follows that P(t) = 2000e^(kt). Since t_d = 4, k = (1/4) ln 2, so P = 2000e^((t ln 2)/4). Therefore, P(t) = 10⁶ =⇒ 10⁶ = 2000e^((t ln 2)/4) =⇒ t ≈ 35.86 h.

4. dP/dt = kP =⇒ P(t) = P₀e^(kt). Since P(0) = 10000, then P(t) = 10000e^(kt). Since P(5) = 20000, then 20000 = 10000e^(5k) =⇒ k = (1/5) ln 2. Hence P(t) = 10000e^((t ln 2)/5).

(a) P(20) = 10000e^(4 ln 2) = 160000.

(b) 1000000 = 10000e^((t ln 2)/5) =⇒ 100 = e^((t ln 2)/5) =⇒ t = 5 ln 100/ln 2 ≈ 33.22 yrs.

5. P(t) = 500C/(500 + (C − 500)e^(−rt)). In formulas (1.5.5) and (1.5.6) we have P₀ = 500, P₁ = 800, P₂ = 1000, t₁ = 5, and t₂ = 10. Hence, r = (1/5) ln [(1000)(300)/((500)(200))] = (1/5) ln 3, C = 800[(800)(1500) − 2(500)(1000)]/(800² − (500)(1000)) ≈ 1142.86, so that P(t) = (1142.86)(500)/(500 + 642.86e^(−0.2t ln 3)) ≈ 571430/(500 + 642.86e^(−0.2t ln 3)). Inserting t = 15 into the preceding formula yields P(15) ≈ 1091.
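The constants r and C, and the prediction P(15), can be recomputed from the three data points (an illustrative sketch of formulas (1.5.5)-(1.5.6), not part of the manual):

```python
import math

# Logistic fit with P0, P1, P2 = 500, 800, 1000 at times t = 0, t1, 2*t1.
P0, P1, P2, t1 = 500, 800, 1000, 5
r = (1 / t1) * math.log(P2 * (P1 - P0) / (P0 * (P2 - P1)))
C = P1 * (P1 * (P0 + P2) - 2 * P0 * P2) / (P1**2 - P0 * P2)
P = lambda t: C * P0 / (P0 + (C - P0) * math.exp(-r * t))
print(C, round(P(15)))
```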

6. P(t) = 50C/(50 + (C − 50)e^(−rt)). In formulas (1.5.5) and (1.5.6) we have P₀ = 50, P₁ = 62, P₂ = 76, t₁ = 2, and t₂ = 2t₁ = 4. Hence, r = (1/2) ln [(76)(12)/((50)(14))] ≈ 0.132, C = 62[(62)(126) − 2(50)(76)]/(62² − (50)(76)) ≈ 298.727, so that P(t) = 14936.35/(50 + 248.727e^(−0.132t)). Inserting t = 20 into the preceding formula yields P(20) ≈ 221.

7. From equation (1.5.5), r > 0 requires P₂(P₁ − P₀)/(P₀(P₂ − P₁)) > 1. Rearranging the terms in this inequality and using the fact that P₂ > P₁ yields P₁ > 2P₀P₂/(P₀ + P₂). Further, C > 0 requires that (P₁(P₀ + P₂) − 2P₀P₂)/(P₁² − P₀P₂) > 0. From P₁ > 2P₀P₂/(P₀ + P₂) we see that the numerator in the preceding inequality is positive, and therefore the denominator must also be positive. Hence, in addition to P₁ > 2P₀P₂/(P₀ + P₂), we must also have P₁² > P₀P₂.

8. Let y(t) denote the number of passengers who have the flu at time t. Then we must solve dy/dt = ky(1500 − y), y(0) = 5, y(1) = 10, where k is a positive constant. Separating the differential equation and integrating yields ∫dy/(y(1500 − y)) = k∫dt. Using a partial fraction decomposition on the left-hand side gives ∫[1/(1500y) + 1/(1500(1500 − y))] dy = kt + c, so that (1/1500) ln (y/(1500 − y)) = kt + c, which upon exponentiation yields y/(1500 − y) = c₁e^(1500kt). Imposing the initial condition y(0) = 5, we find that c₁ = 1/299. Hence, y/(1500 − y) = (1/299)e^(1500kt). The further condition y(1) = 10 requires 10/1490 = (1/299)e^(1500k). Solving for k gives k = (1/1500) ln (299/149). Therefore, y/(1500 − y) = (1/299)e^(t ln (299/149)). Solving algebraically for y we find y(t) = 1500e^(t ln (299/149))/(299 + e^(t ln (299/149))) = 1500/(1 + 299e^(−t ln (299/149))). Hence, y(14) = 1500/(1 + 299e^(−14 ln (299/149))) ≈ 1474.

9. (a) Equilibrium solutions: P(t) = 0, P(t) = T.

Slope: P > T =⇒ dP/dt > 0; 0 < P < T =⇒ dP/dt < 0.


Isoclines: rP(P − T) = k =⇒ P² − TP − k/r = 0 =⇒ P = (1/2)(T ± √((rT² + 4k)/r)). We see that the slope of the solution curves must satisfy k ≥ −rT²/4.

Concavity: d²P/dt² = r(2P − T) dP/dt = r²(2P − T)(P − T)P. Hence, the solution curves are concave up for 0 < P < T/2 and for P > T, and are concave down for T/2 < P < T.

(b) See accompanying figure.


Figure 0.0.42: Figure for Exercise 9(b)

(c) For 0 < P0 < T , the population dies out with time. For P0 > T , there is a population growth. Theterm threshold level is appropriate since T gives the minimum value of P0 above which there is a populationgrowth.

10. (a) Separating the variables in differential equation (1.5.7) gives (1/(P(P − T))) dP/dt = r, which can be written in the equivalent form [1/(T(P − T)) − 1/(TP)] dP/dt = r. Integrating yields (1/T) ln ((P − T)/P) = rt + c, so that (P − T)/P = c₁e^(rTt). The initial condition P(0) = P₀ requires (P₀ − T)/P₀ = c₁, so that (P − T)/P = ((P₀ − T)/P₀)e^(rTt). Solving algebraically for P yields P(t) = TP₀/(P₀ − (P₀ − T)e^(rTt)).

(b) If P₀ < T, then the denominator in TP₀/(P₀ − (P₀ − T)e^(rTt)) is positive, and increases without bound as t → ∞. Consequently lim_(t→∞) P(t) = 0. In this case the population dies out as t increases.

(c) If P₀ > T, then the denominator of TP₀/(P₀ − (P₀ − T)e^(rTt)) vanishes when (P₀ − T)e^(rTt) = P₀, that is, when t = (1/(rT)) ln (P₀/(P₀ − T)). This means that within a finite time the population grows without bound. We can interpret this as a mathematical model of a population explosion.

11. dP/dt = r(C − P)(P − T)P, P(0) = P₀, r > 0, 0 < T < C.

Equilibrium solutions: P(t) = 0, P(t) = T, P(t) = C. The slope of the solution curves is negative for 0 < P < T and for P > C. It is positive for T < P < C.

Concavity: d²P/dt² = r²[(C − P)(P − T) − (P − T)P + (C − P)P](C − P)(P − T)P, which simplifies to d²P/dt² = r²(−3P² + 2PT + 2CP − CT)(C − P)(P − T)P. Hence changes in concavity occur when P = (1/3)(C + T ± √(C² − CT + T²)). A representative slope field with some solution curves is shown in the accompanying figure. We see that for 0 < P₀ < T the population dies out, whereas for T < P₀ < C the population grows and asymptotes to the equilibrium solution P(t) = C. If P₀ > C, then the solution decays towards the equilibrium solution P(t) = C.


Figure 0.0.43: Figure for Exercise 11

12. $\frac{dP}{dt} = rP(\ln C - \ln P)$, $P(0) = P_0$, where $r$, $C$, and $P_0$ are positive constants.

Equilibrium solution: $P(t) = C$. The slope of the solution curves is positive for $0 < P < C$, and negative for $P > C$.

Concavity: $\frac{d^2P}{dt^2} = r\left[\ln\left(\frac{C}{P}\right) - 1\right]\frac{dP}{dt} = r^2\left[\ln\left(\frac{C}{P}\right) - 1\right]P\ln\frac{C}{P}$. Hence, the solution curves are concave up for $0 < P < \frac{C}{e}$ and $P > C$. They are concave down for $\frac{C}{e} < P < C$. A representative slope field with some solution curves is shown in the accompanying figure.


Figure 0.0.44: Figure for Exercise 12


13. Separating the variables in (1.5.8) yields $\frac{1}{P(\ln C - \ln P)}\frac{dP}{dt} = r$, which can be integrated directly to obtain $-\ln(\ln C - \ln P) = rt + c$, so that $\ln\left(\frac{C}{P}\right) = c_1 e^{-rt}$. The initial condition $P(0) = P_0$ requires that $\ln\left(\frac{C}{P_0}\right) = c_1$. Hence, $\ln\left(\frac{C}{P}\right) = e^{-rt}\ln\left(\frac{C}{P_0}\right)$, so that $P(t) = Ce^{\ln(P_0/C)e^{-rt}}$. Since $\lim_{t\to\infty} e^{-rt} = 0$, it follows that $\lim_{t\to\infty} P(t) = C$.
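A quick finite-difference check (with hypothetical values of $r$, $C$, and $P_0$, chosen only for illustration) confirms that this expression satisfies the Gompertz equation $\frac{dP}{dt} = rP(\ln C - \ln P)$:

```python
import math

r, C, P0 = 0.5, 10.0, 2.0  # hypothetical values for illustration

def P(t):
    # P(t) = C * exp( ln(P0/C) * e^{-rt} )
    return C * math.exp(math.log(P0 / C) * math.exp(-r * t))

# Compare a central finite difference of P against r*P*(ln C - ln P)
t, h = 1.3, 1e-6
lhs = (P(t + h) - P(t - h)) / (2 * h)
rhs = r * P(t) * (math.log(C) - math.log(P(t)))
print(abs(lhs - rhs))  # essentially zero
```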

14. Using the exponential decay model we have $\frac{dP}{dt} = kP$, which is easily integrated to obtain $P(t) = P_0e^{kt}$. The initial condition $P(0) = 400$ requires that $P_0 = 400$, so that $P(t) = 400e^{kt}$. We also know that $P(30) = 340$. This requires that $340 = 400e^{30k}$, so that $k = \frac{1}{30}\ln\left(\frac{17}{20}\right)$. Consequently,
$$P(t) = 400e^{\frac{t}{30}\ln\left(\frac{17}{20}\right)}. \tag{0.0.3}$$

(a) From (0.0.3), $P(60) = 400e^{2\ln(17/20)} = 289$.

(b) From (0.0.3), $P(100) = 400e^{\frac{10}{3}\ln(17/20)} \approx 233$.

(c) From (0.0.3), the half-life, $t_H$, is determined from
$$200 = 400e^{\frac{t_H}{30}\ln\left(\frac{17}{20}\right)} \implies t_H = \frac{30\ln 2}{\ln\left(\frac{20}{17}\right)} \approx 128 \text{ days}.$$
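These numbers are easy to reproduce; a minimal check of (0.0.3) and the half-life:

```python
import math

# P(t) = 400 e^{(t/30) ln(17/20)}; equation (0.0.3)
P = lambda t: 400 * math.exp((t / 30) * math.log(17 / 20))

print(round(P(60)))  # part (a): 289

t_H = 30 * math.log(2) / math.log(20 / 17)  # part (c)
print(round(t_H))    # about 128 days
```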

15. (a) More.

(b) Using the exponential decay model we have $\frac{dP}{dt} = kP$, which is easily integrated to obtain $P(t) = P_0e^{kt}$. The initial condition $P(0) = 100{,}000$ requires that $P_0 = 100{,}000$, so that $P(t) = 100{,}000e^{kt}$. We also know that $P(10) = 80{,}000$. This requires that $80{,}000 = 100{,}000e^{10k}$, so that $k = \frac{1}{10}\ln\left(\frac{4}{5}\right)$. Consequently,
$$P(t) = 100{,}000e^{\frac{t}{10}\ln\left(\frac{4}{5}\right)}. \tag{0.0.4}$$

Using (0.0.4), the half-life is determined from
$$50{,}000 = 100{,}000e^{\frac{t_H}{10}\ln\left(\frac{4}{5}\right)} \implies t_H = \frac{10\ln 2}{\ln\left(\frac{5}{4}\right)} \approx 31.06 \text{ min}.$$

(c) Using (0.0.4), there will be 15,000 fans left in the stadium at time $t_0$, where
$$15{,}000 = 100{,}000e^{\frac{t_0}{10}\ln\left(\frac{4}{5}\right)} \implies t_0 = \frac{10\ln\left(\frac{20}{3}\right)}{\ln\left(\frac{5}{4}\right)} \approx 85.02 \text{ min}.$$

16. Using the exponential decay model we have $\frac{dP}{dt} = kP$, which is easily integrated to obtain $P(t) = P_0e^{kt}$. Since the half-life is 5.2 years, we have
$$\frac{1}{2}P_0 = P_0e^{5.2k} \implies k = -\frac{\ln 2}{5.2}.$$
Therefore,
$$P(t) = P_0e^{-\frac{t\ln 2}{5.2}}.$$
Consequently, only 4% of the original amount will remain at time $t_0$, where
$$\frac{4}{100}P_0 = P_0e^{-\frac{t_0\ln 2}{5.2}} \implies t_0 = \frac{5.2\ln 25}{\ln 2} \approx 24.15 \text{ years}.$$
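The computation can be verified directly:

```python
import math

half_life = 5.2                 # years
k = math.log(2) / half_life     # decay constant in P(t) = P0 e^{-kt}

# time until 4% remains: e^{-k t0} = 0.04, i.e. t0 = ln 25 / k = 5.2 ln 25 / ln 2
t0 = math.log(25) / k
print(round(t0, 2))  # about 24.15 years
```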

17. Maple, or even a TI 92 plus, has no problem in solving these equations.

18. (a) The Malthusian model is $P(t) = 151.3e^{kt}$. Since $P(10) = 179.4$, we have $179.4 = 151.3e^{10k} \implies k = \frac{1}{10}\ln\frac{179.4}{151.3}$. Hence, $P(t) = 151.3e^{\frac{t}{10}\ln(179.4/151.3)}$.

(b) $P(t) = \frac{151.3C}{151.3 + (C - 151.3)e^{-rt}}$. Imposing the conditions $P(10) = 179.4$ and $P(20) = 203.3$ gives the pair of equations
$$179.4 = \frac{151.3C}{151.3 + (C - 151.3)e^{-10r}} \quad\text{and}\quad 203.3 = \frac{151.3C}{151.3 + (C - 151.3)e^{-20r}},$$
whose solution is $C \approx 263.95$, $r \approx 0.046$. Using these values for $C$ and $r$ gives $P(t) = \frac{39935.6}{151.3 + 112.65e^{-0.046t}}$.

(c) Malthusian model: $P(30) \approx 253$ million; $P(40) \approx 300$ million.


Figure 0.0.45: Figure for Exercise 18(c)

Logistic model: $P(30) \approx 222$ million; $P(40) \approx 236$ million. The logistic model fits the data better than the Malthusian model, but still gives a significant underestimate of the 1990 population.
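The two fitted models can be compared numerically (small rounding differences from the values quoted above are expected, since the fitted constants are themselves rounded):

```python
import math

# Malthusian model fitted through P(0) = 151.3 and P(10) = 179.4
k = math.log(179.4 / 151.3) / 10
malthus = lambda t: 151.3 * math.exp(k * t)

# Logistic model with the fitted values C ≈ 263.95, r ≈ 0.046
logistic = lambda t: 39935.6 / (151.3 + 112.65 * math.exp(-0.046 * t))

for t in (30, 40):
    print(t, round(malthus(t), 1), round(logistic(t), 1))
```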

19. $P(t) = \frac{50C}{50 + (C - 50)e^{-rt}}$. Imposing the conditions $P(5) = 100$, $P(15) = 250$ gives the pair of equations
$$100 = \frac{50C}{50 + (C - 50)e^{-5r}} \quad\text{and}\quad 250 = \frac{50C}{50 + (C - 50)e^{-15r}},$$
whose positive solutions are $C \approx 370.32$, $r \approx 0.17$. Using these values for $C$ and $r$ gives $P(t) = \frac{18516}{50 + 320.32e^{-0.17t}}$. Setting $P(t) = 0.95C$ and solving for $t$ shows that it will take approximately 28 years to reach 95% of the carrying capacity.
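A numeric check of the fitted logistic model (using the slightly more precise $r \approx 0.1726$, since the rounded $r = 0.17$ drifts a little from the data points) reproduces the fitted data and gives the time to reach 95% of the carrying capacity:

```python
import math

C, r, P0 = 370.32, 0.1726, 50.0
P = lambda t: P0 * C / (P0 + (C - P0) * math.exp(-r * t))

print(round(P(5)), round(P(15)))  # should recover the data: 100 and 250

# Solve 0.95 C = 50C/(50 + (C - 50) e^{-rt}) for t
t95 = -math.log((P0 / 0.95 - P0) / (C - P0)) / r
print(round(t95, 1))  # roughly 28 years
```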

Figure 0.0.46: Figure for Exercise 19 ($P(t)$ versus $t$, rising toward the carrying capacity)

Solutions to Section 1.6

True-False Review:

1. FALSE. Any solution to the differential equation (1.6.7) serves as an integrating factor for the differential equation. There are infinitely many solutions to (1.6.7), taking the form $I(x) = c_1e^{\int p(x)dx}$, where $c_1$ is an arbitrary constant.

2. TRUE. Any solution to the differential equation (1.6.7) serves as an integrating factor for the differential equation. There are infinitely many solutions to (1.6.7), taking the form $I(x) = c_1e^{\int p(x)dx}$, where $c_1$ is an arbitrary constant. The most natural choice is $c_1 = 1$, giving the integrating factor $I(x) = e^{\int p(x)dx}$.

3. TRUE. Multiplying $y' + p(x)y = q(x)$ by $I(x)$ yields $y'I + pIy = qI$. Assuming that $I' = pI$, the requirement on the integrating factor, we have $y'I + I'y = qI$, or by the product rule, $(I \cdot y)' = qI$, as requested.

4. FALSE. Before determining an integrating factor, the equation must be rewritten as $\frac{dy}{dx} - x^2y = \sin x$, and with $p(x) = -x^2$, we have integrating factor $I(x) = e^{\int -x^2dx}$, not $e^{\int x^2dx}$.

5. TRUE. Rewriting the differential equation as $\frac{dy}{dx} + \frac{1}{x}y = x$, we have $p(x) = \frac{1}{x}$, and so an integrating factor must have the form $I(x) = e^{\int p(x)dx} = e^{\int \frac{dx}{x}} = e^{\ln x + c} = e^cx$. Since $5x$ does indeed have this form, it is an integrating factor.

Problems: In this section the function $I(x) = e^{\int p(x)dx}$ will represent the integrating factor for a differential equation of the form $y' + p(x)y = q(x)$.

1. $y' - y = e^{2x}$. $I(x) = e^{-\int dx} = e^{-x} \implies \frac{d}{dx}(e^{-x}y) = e^{x} \implies e^{-x}y = e^{x} + c \implies y(x) = e^{x}(e^{x} + c)$.

2. $x^2y' - 4xy = x^7\sin x$, $x > 0 \implies y' - \frac{4}{x}y = x^5\sin x$. $I(x) = x^{-4} \implies \frac{d}{dx}(x^{-4}y) = x\sin x \implies x^{-4}y = \sin x - x\cos x + c \implies y(x) = x^4(\sin x - x\cos x + c)$.


3. $y' + 2xy = 2x^3$. $I(x) = e^{2\int x\,dx} = e^{x^2} \implies \frac{d}{dx}(e^{x^2}y) = 2e^{x^2}x^3 \implies e^{x^2}y = 2\int e^{x^2}x^3dx = e^{x^2}(x^2 - 1) + c \implies y(x) = x^2 - 1 + ce^{-x^2}$.
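Solutions like these are easy to spot-check numerically; the sketch below verifies the answer to Problem 3 with a central finite difference (the constant $c$ is arbitrary):

```python
import math

c = 1.7  # arbitrary constant in the general solution
y = lambda x: x**2 - 1 + c * math.exp(-x**2)

# Residual of y' + 2xy - 2x^3 at sample points, using a central difference
residuals = []
for x in (0.5, 1.0, 2.0):
    h = 1e-6
    dydx = (y(x + h) - y(x - h)) / (2 * h)
    residuals.append(abs(dydx + 2 * x * y(x) - 2 * x**3))
print(max(residuals))  # essentially zero
```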

4. $y' + \frac{2x}{1 - x^2}y = 4x$, $-1 < x < 1$. $I(x) = \frac{1}{1 - x^2} \implies \frac{d}{dx}\left(\frac{y}{1 - x^2}\right) = \frac{4x}{1 - x^2} \implies \frac{y}{1 - x^2} = -\ln(1 - x^2)^2 + c \implies y(x) = (1 - x^2)[-\ln(1 - x^2)^2 + c]$.

5. $y' + \frac{2x}{1 + x^2}y = \frac{4}{(1 + x^2)^2}$. $I(x) = e^{\int \frac{2x}{1+x^2}dx} = 1 + x^2 \implies \frac{d}{dx}[(1 + x^2)y] = \frac{4}{1 + x^2} \implies (1 + x^2)y = 4\int \frac{dx}{1 + x^2} = 4\tan^{-1}x + c \implies y(x) = \frac{1}{1 + x^2}(4\tan^{-1}x + c)$.

6. $2\cos^2 x\,\frac{dy}{dx} + y\sin 2x = 4\cos^4 x$, $0 \le x \le \frac{\pi}{2} \implies y' + \frac{\sin 2x}{2\cos^2 x}y = 2\cos^2 x$. $I(x) = \frac{1}{\cos x} \implies \frac{d}{dx}(y\sec x) = 2\cos x \implies y\sec x = 2\sin x + c \implies y(x) = \cos x(2\sin x + c) = \sin 2x + c\cos x$.

7. $y' + \frac{1}{x\ln x}y = 9x^2$. $I(x) = e^{\int \frac{dx}{x\ln x}} = \ln x \implies \frac{d}{dx}(y\ln x) = 9x^2\ln x \implies y\ln x = \int 9x^2\ln x\,dx = 3x^3\ln x - x^3 + c \implies y(x) = \frac{3x^3\ln x - x^3 + c}{\ln x}$, but $y(e) = 2e^3$ so $c = 0$; thus, $y(x) = \frac{x^3(3\ln x - 1)}{\ln x}$.

8. $y' - y\tan x = 8\sin^3 x$. $I(x) = \cos x \implies \frac{d}{dx}(y\cos x) = 8\cos x\sin^3 x \implies y\cos x = 8\int \cos x\sin^3 x\,dx + c = 2\sin^4 x + c \implies y(x) = \frac{1}{\cos x}(2\sin^4 x + c)$.

9. $t\frac{dx}{dt} + 2x = 4e^t \implies x' + \frac{2}{t}x = \frac{4e^t}{t}$. $I(t) = e^{2\int \frac{dt}{t}} = t^2 \implies \frac{d}{dt}(t^2x) = 4te^t \implies t^2x = 4\int te^t dt + c = 4e^t(t - 1) + c \implies x(t) = \frac{4e^t(t - 1) + c}{t^2}$.

10. $y' = (\sin x\sec x)y - 2\sin x \implies y' - (\tan x)y = -2\sin x$. $I(x) = \cos x \implies \frac{d}{dx}(y\cos x) = -2\sin x\cos x \implies y\cos x = -2\int \sin x\cos x\,dx + c = \frac{1}{2}\cos 2x + c \implies y(x) = \frac{1}{\cos x}\left(\frac{1}{2}\cos 2x + c\right)$.

11. $(1 - y\sin x)dx - \cos x\,dy = 0 \implies y' + (\tan x)y = \sec x$. $I(x) = e^{\int \tan x\,dx} = \sec x \implies \frac{d}{dx}(y\sec x) = \sec^2 x \implies y\sec x = \tan x + c \implies y(x) = \cos x(\tan x + c) = \sin x + c\cos x$.

12. $y' - x^{-1}y = 2x^2\ln x$. $I(x) = e^{-\int \frac{dx}{x}} = x^{-1} \implies \frac{d}{dx}(x^{-1}y) = 2x\ln x \implies x^{-1}y = 2\int x\ln x\,dx + c = \frac{1}{2}x^2(2\ln x - 1) + c$. Hence, $y(x) = \frac{1}{2}x^3(2\ln x - 1) + cx$.

13. $y' + \alpha y = e^{\beta x}$. $I(x) = e^{\alpha\int dx} = e^{\alpha x} \implies \frac{d}{dx}(e^{\alpha x}y) = e^{(\alpha+\beta)x} \implies e^{\alpha x}y = \int e^{(\alpha+\beta)x}dx + c$. If $\alpha + \beta = 0$, then $e^{\alpha x}y = x + c \implies y(x) = e^{-\alpha x}(x + c)$. If $\alpha + \beta \neq 0$, then $e^{\alpha x}y = \frac{e^{(\alpha+\beta)x}}{\alpha + \beta} + c \implies y(x) = \frac{e^{\beta x}}{\alpha + \beta} + ce^{-\alpha x}$.


14. $y' + \frac{m}{x}y = \ln x$. $I(x) = x^m \implies \frac{d}{dx}(x^my) = x^m\ln x \implies x^my = \int x^m\ln x\,dx + c$. If $m = -1$, then $x^{-1}y = \frac{(\ln x)^2}{2} + c \implies y(x) = x\left[\frac{(\ln x)^2}{2} + c\right]$. If $m \neq -1$, then $x^my = \frac{x^{m+1}}{m + 1}\ln x - \frac{x^{m+1}}{(m + 1)^2} + c \implies y(x) = \frac{x}{m + 1}\ln x - \frac{x}{(m + 1)^2} + \frac{c}{x^m}$.

15. $y' + \frac{2}{x}y = 4x$. $I(x) = e^{2\int \frac{dx}{x}} = e^{2\ln x} = x^2 \implies \frac{d}{dx}(x^2y) = 4x^3 \implies x^2y = 4\int x^3dx + c = x^4 + c$, but $y(1) = 2$ so $c = 1$; thus, $y(x) = \frac{x^4 + 1}{x^2}$.

16. $y'\sin x - y\cos x = \sin 2x \implies y' - y\cot x = 2\cos x$. $I(x) = \csc x \implies \frac{d}{dx}(y\csc x) = 2\csc x\cos x \implies y\csc x = 2\ln(\sin x) + c$, but $y(\frac{\pi}{2}) = 2$ so $c = 2$; thus, $y(x) = 2\sin x[\ln(\sin x) + 1]$.

17. $x' + \frac{2}{4 - t}x = 5$. $I(t) = e^{2\int \frac{dt}{4-t}} = e^{-2\ln(4-t)} = (4 - t)^{-2} \implies \frac{d}{dt}[(4 - t)^{-2}x] = 5(4 - t)^{-2} \implies (4 - t)^{-2}x = 5\int (4 - t)^{-2}dt + c = 5(4 - t)^{-1} + c$, but $x(0) = 4$ so $c = -1$; thus, $x(t) = (4 - t)^2[5(4 - t)^{-1} - 1]$, or $x(t) = (4 - t)(1 + t)$.

18. $(y - e^{x})dx + dy = 0 \implies y' + y = e^x$. $I(x) = e^x \implies \frac{d}{dx}(e^xy) = e^{2x} \implies e^xy = \frac{e^{2x}}{2} + c$, but $y(0) = 1$ so $c = \frac{1}{2}$; thus, $y(x) = \frac{1}{2}(e^x + e^{-x}) = \cosh x$.

19. $y' + y = f(x)$, $y(0) = 3$, where
$$f(x) = \begin{cases} 1, & \text{if } x \le 1, \\ 0, & \text{if } x > 1. \end{cases}$$
$I(x) = e^{\int dx} = e^x \implies \frac{d}{dx}(e^xy) = e^xf(x) \implies [e^ty]_0^x = \int_0^x e^tf(t)dt \implies e^xy - y(0) = \int_0^x e^tf(t)dt \implies y(x) = e^{-x}\left[3 + \int_0^x e^tf(t)dt\right]$.

If $x \le 1$, $\int_0^x e^tf(t)dt = \int_0^x e^tdt = e^x - 1 \implies y(x) = e^{-x}(2 + e^x)$.

If $x > 1$, $\int_0^x e^tf(t)dt = \int_0^1 e^tdt = e - 1 \implies y(x) = e^{-x}(2 + e)$.

20. $y' - 2y = f(x)$, $y(0) = 1$, where
$$f(x) = \begin{cases} 1 - x, & \text{if } x < 1, \\ 0, & \text{if } x \ge 1. \end{cases}$$
$I(x) = e^{-\int 2dx} = e^{-2x} \implies \frac{d}{dx}(e^{-2x}y) = e^{-2x}f(x) \implies [e^{-2t}y]_0^x = \int_0^x e^{-2t}f(t)dt \implies e^{-2x}y - 1 = \int_0^x e^{-2t}f(t)dt \implies y(x) = e^{2x}\left[1 + \int_0^x e^{-2t}f(t)dt\right]$.

If $x < 1$, $\int_0^x e^{-2t}f(t)dt = \int_0^x e^{-2t}(1 - t)dt = \frac{1}{4}e^{-2x}(2x - 1 + e^{2x}) \implies y(x) = e^{2x}\left[1 + \frac{1}{4}e^{-2x}(2x - 1 + e^{2x})\right] = \frac{1}{4}(5e^{2x} + 2x - 1)$.

If $x \ge 1$, $\int_0^x e^{-2t}f(t)dt = \int_0^1 e^{-2t}(1 - t)dt = \frac{1}{4}(1 + e^{-2}) \implies y(x) = e^{2x}\left[1 + \frac{1}{4}(1 + e^{-2})\right] = \frac{1}{4}e^{2x}(5 + e^{-2})$.
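Although $f$ switches formulas at $x = 1$, the solution itself is continuous there; the two branch formulas agree at $x = 1$:

```python
import math

# The two branches of the solution to Problem 20, evaluated at x = 1
left  = 0.25 * (5 * math.exp(2 * 1) + 2 * 1 - 1)     # (1/4)(5e^{2x} + 2x - 1)
right = 0.25 * math.exp(2 * 1) * (5 + math.exp(-2))  # (1/4)e^{2x}(5 + e^{-2})

print(abs(left - right))  # essentially zero: the solution is continuous at x = 1
```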

21. On $(-\infty, 1)$, $y' - y = 1 \implies I(x) = e^{-x} \implies y(x) = c_1e^x - 1$. Imposing the initial condition $y(0) = 0$ requires $c_1 = 1$, so that $y(x) = e^x - 1$, for $x < 1$.

On $[1, \infty)$, $y' - y = 2 - x \implies I(x) = e^{-x} \implies \frac{d}{dx}(e^{-x}y) = (2 - x)e^{-x} \implies y(x) = x - 1 + c_2e^{x}$.

Continuity at $x = 1$ requires that $\lim_{x\to 1} y(x) = y(1)$. Consequently we must choose $c_2$ to satisfy $c_2e = e - 1$, so that $c_2 = 1 - e^{-1}$. Hence, for $x \ge 1$, $y(x) = x - 1 + (1 - e^{-1})e^x$.

22. $\frac{d^2y}{dx^2} + \frac{1}{x}\frac{dy}{dx} = 9x$, $x > 0$. Let $u = \frac{dy}{dx}$, so $\frac{du}{dx} = \frac{d^2y}{dx^2}$. The equation becomes $\frac{du}{dx} + \frac{1}{x}u = 9x$, which is first-order linear. An integrating factor for this is $I(x) = x$, so $\frac{d}{dx}(xu) = 9x^2 \implies xu = 9\int x^2dx + c_1 = 3x^3 + c_1 \implies u = 3x^2 + c_1x^{-1}$. But $u = \frac{dy}{dx}$, so $\frac{dy}{dx} = 3x^2 + c_1x^{-1} \implies y = \int(3x^2 + c_1x^{-1})dx + c_2 \implies y(x) = x^3 + c_1\ln x + c_2$.

23. The differential equation for Newton's Law of Cooling is $\frac{dT}{dt} = -k(T - T_m)$. We can rewrite this equation in the form of a first-order linear differential equation: $\frac{dT}{dt} + kT = kT_m$. An integrating factor for this differential equation is $I = e^{\int k\,dt} = e^{kt}$. Thus, $\frac{d}{dt}(Te^{kt}) = kT_me^{kt}$. Integrating both sides, we get $Te^{kt} = T_me^{kt} + c$, and hence $T = T_m + ce^{-kt}$, which is the solution to Newton's Law of Cooling.

24. $\frac{dT_m}{dt} = \alpha \implies T_m = \alpha t + c_1$, so $\frac{dT}{dt} = -k(T - \alpha t - c_1) \implies \frac{dT}{dt} + kT = k(\alpha t + c_1)$. An integrating factor for this differential equation is $I = e^{k\int dt} = e^{kt}$. Thus, $\frac{d}{dt}(e^{kt}T) = ke^{kt}(\alpha t + c_1) \implies e^{kt}T = e^{kt}\left(\alpha t - \frac{\alpha}{k} + c_1\right) + c_2 \implies T = \alpha t - \frac{\alpha}{k} + c_1 + c_2e^{-kt} \implies T(t) = \alpha\left(t - \frac{1}{k}\right) + \beta + T_0e^{-kt}$, where $\beta = c_1$ and $T_0 = c_2$.

25. $\frac{dT_m}{dt} = 10 \implies T_m = 10t + c_1$, but $T_m = 65$ when $t = 0$, so $c_1 = 65$ and $T_m = 10t + 65$. $\frac{dT}{dt} = -k(T - T_m) = -k(T - 10t - 65)$, but $\frac{dT}{dt}(1) = 5$, so $k = \frac{1}{8}$. The last differential equation can be written $\frac{dT}{dt} + kT = k(10t + 65) \implies \frac{d}{dt}(e^{kt}T) = 5ke^{kt}(2t + 13) \implies e^{kt}T = 5ke^{kt}\left(\frac{2t}{k} - \frac{2}{k^2} + \frac{13}{k}\right) + c \implies T = 5\left(2t - \frac{2}{k} + 13\right) + ce^{-kt}$, but $k = \frac{1}{8}$, so $T(t) = 5(2t - 3) + ce^{-t/8}$. Since $T(1) = 35$, $c = 40e^{1/8}$. Thus, $T(t) = 10t - 15 + 40e^{\frac{1}{8}(1-t)}$.
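A quick check that the final formula meets both conditions used to determine $k$ and $c$:

```python
import math

# T(t) = 10t - 15 + 40 e^{(1-t)/8} from Problem 25
T = lambda t: 10 * t - 15 + 40 * math.exp((1 - t) / 8)

print(T(1))  # the condition T(1) = 35

h = 1e-6
dTdt_1 = (T(1 + h) - T(1 - h)) / (2 * h)  # central difference for dT/dt at t = 1
print(dTdt_1)  # the condition dT/dt(1) = 5
```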

26. (a) In this case, Newton's law of cooling is $\frac{dT}{dt} = -\frac{1}{40}(T - 80e^{-t/20})$. This linear differential equation has standard form $\frac{dT}{dt} + \frac{1}{40}T = 2e^{-t/20}$, with integrating factor $I(t) = e^{t/40}$. Consequently, the differential equation can be written in the integrable form $\frac{d}{dt}(e^{t/40}T) = 2e^{-t/40}$, so that $T(t) = -80e^{-t/20} + ce^{-t/40}$. Then $T(0) = 0 \implies c = 80$, so that $T(t) = 80(e^{-t/40} - e^{-t/20})$.

(b) We see that $\lim_{t\to\infty} T(t) = 0$. This is a reasonable result since the temperature of the surrounding medium also approaches zero as $t \to \infty$. We would expect the temperature of the object to approach the temperature of the surrounding medium at late times.

(c) $T(t) = 80(e^{-t/40} - e^{-t/20}) \implies \frac{dT}{dt} = 80\left(-\frac{1}{40}e^{-t/40} + \frac{1}{20}e^{-t/20}\right)$. So $T(t)$ has only one critical point, when $80\left(-\frac{1}{40}e^{-t/40} + \frac{1}{20}e^{-t/20}\right) = 0 \implies t = 40\ln 2$. Since $T(0) = 0$ and $\lim_{t\to\infty} T(t) = 0$, the function assumes a maximum value at $t_{\max} = 40\ln 2$. $T(t_{\max}) = 80(e^{-\ln 2} - e^{-2\ln 2}) = 20^\circ$F, and $T_m(t_{\max}) = 80e^{-2\ln 2} = 20^\circ$F.

(d) The behavior of $T(t)$ and $T_m(t)$ is given in the accompanying figure.


Figure 0.0.47: Figure for Exercise 26(d)

27. (a) The temperature varies from a minimum of $A - B$ at $t = 0$ to a maximum of $A + B$ when $t = 12$.


Figure 0.0.48: Figure for Exercise 27(a)

(b) First write the differential equation in the linear form $\frac{dT}{dt} + k_1T = k_1(A - B\cos\omega t) + T_0$. Multiplying by the integrating factor $I = e^{k_1t}$ reduces this differential equation to the integrable form
$$\frac{d}{dt}(e^{k_1t}T) = k_1e^{k_1t}(A - B\cos\omega t) + T_0e^{k_1t}.$$
Consequently,
$$e^{k_1t}T(t) = Ae^{k_1t} - Bk_1\int e^{k_1t}\cos\omega t\,dt + \frac{T_0}{k_1}e^{k_1t} + c,$$
so that
$$T(t) = A + \frac{T_0}{k_1} - \frac{Bk_1}{k_1^2 + \omega^2}(k_1\cos\omega t + \omega\sin\omega t) + ce^{-k_1t}.$$
This can be written in the equivalent form
$$T(t) = A + \frac{T_0}{k_1} - \frac{Bk_1}{\sqrt{k_1^2 + \omega^2}}\cos(\omega t - \alpha) + ce^{-k_1t}$$
for an appropriate phase constant $\alpha$.

28. (a) $\frac{dy}{dx} + p(x)y = 0 \implies \frac{dy}{y} = -p(x)dx \implies \int \frac{dy}{y} = -\int p(x)dx \implies \ln|y| = -\int p(x)dx + c \implies y_H = c_1e^{-\int p(x)dx}$.

(b) Replace $c_1$ in part (a) by $u(x)$ and let $v = e^{-\int p(x)dx}$. Then $y = uv \implies \frac{dy}{dx} = u\frac{dv}{dx} + v\frac{du}{dx}$. Substituting this last result into the original differential equation, $\frac{dy}{dx} + p(x)y = q(x)$, we obtain $u\frac{dv}{dx} + v\frac{du}{dx} + p(x)uv = q(x)$, but since $\frac{dv}{dx} = -pv$, the last equation reduces to $v\frac{du}{dx} = q(x) \implies du = v^{-1}(x)q(x)dx \implies u = \int v^{-1}(x)q(x)dx + c$. Substituting the values for $u$ and $v$ into $y = uv$, we obtain $y = e^{-\int p(x)dx}\left[\int e^{\int p(x)dx}q(x)dx + c\right]$.

29. The associated homogeneous equation is $\frac{dy}{dx} + x^{-1}y = 0$, with solution $y_H = cx^{-1}$. According to Problem 28, we determine the function $u(x)$ such that $y(x) = x^{-1}u(x)$ is a solution to the given differential equation. We have $\frac{dy}{dx} = x^{-1}\frac{du}{dx} - x^{-2}u$. Substituting into $\frac{dy}{dx} + x^{-1}y = \cos x$ yields $x^{-1}\frac{du}{dx} - \frac{1}{x^2}u + x^{-1}(x^{-1}u) = \cos x$, so that $\frac{du}{dx} = x\cos x$. Integrating, we obtain $u = x\sin x + \cos x + c$, so that $y(x) = x^{-1}(x\sin x + \cos x + c)$.
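The variation-of-parameters answer can be spot-checked numerically against the equation $y' + x^{-1}y = \cos x$:

```python
import math

c = -2.3  # arbitrary constant in the general solution
y = lambda x: (x * math.sin(x) + math.cos(x) + c) / x

# Residual of y' + y/x - cos x at sample points, using a central difference
residuals = []
for x in (0.7, 1.9, 3.1):
    h = 1e-6
    dydx = (y(x + h) - y(x - h)) / (2 * h)
    residuals.append(abs(dydx + y(x) / x - math.cos(x)))
print(max(residuals))  # essentially zero
```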

30. The associated homogeneous equation is $\frac{dy}{dx} + y = 0$, with solution $y_H = ce^{-x}$. According to Problem 28, we determine the function $u(x)$ such that $y(x) = e^{-x}u(x)$ is a solution to the given differential equation. We have $\frac{dy}{dx} = \frac{du}{dx}e^{-x} - e^{-x}u$. Substituting into $\frac{dy}{dx} + y = e^{-2x}$ yields $\frac{du}{dx}e^{-x} - e^{-x}u + e^{-x}u = e^{-2x}$, so that $\frac{du}{dx} = e^{-x}$. Integrating, we obtain $u = -e^{-x} + c$, so that $y(x) = e^{-x}(-e^{-x} + c)$.

31. The associated homogeneous equation is $\frac{dy}{dx} + (\cot x)y = 0$, with solution $y_H = c\csc x$. According to Problem 28, we determine the function $u(x)$ such that $y(x) = (\csc x)u(x)$ is a solution to the given differential equation. We have $\frac{dy}{dx} = \csc x\frac{du}{dx} - \csc x\cot x\,u$. Substituting into $\frac{dy}{dx} + (\cot x)y = 2\cos x$ yields $\csc x\frac{du}{dx} - \csc x\cot x\,u + \cot x\csc x\,u = 2\cos x$, so that $\frac{du}{dx} = 2\cos x\sin x$. Integrating, we obtain $u = \sin^2 x + c$, so that $y(x) = \csc x(\sin^2 x + c)$.

32. The associated homogeneous equation is $\frac{dy}{dx} - \frac{1}{x}y = 0$, with solution $y_H = cx$. We determine the function $u(x)$ such that $y(x) = xu(x)$ is a solution of the given differential equation. We have $\frac{dy}{dx} = x\frac{du}{dx} + u$. Substituting into $\frac{dy}{dx} - \frac{1}{x}y = x\ln x$ and simplifying yields $\frac{du}{dx} = \ln x$, so that $u = x\ln x - x + c$. Consequently, $y(x) = x(x\ln x - x + c)$.

Problems 33 - 39 are easily solved using a differential equation solver such as the dsolve package in Maple.

Solutions to Section 1.7

True-False Review:

1. TRUE. Concentration of chemical is defined as the ratio of mass to volume; that is, $c(t) = \frac{A(t)}{V(t)}$. Therefore, $A(t) = c(t)V(t)$.

2. FALSE. The rate of change of volume is "rate in" $-$ "rate out", which is $r_1 - r_2$, not $r_2 - r_1$.

3. TRUE. This is reflected in the fact that c1 is always assumed to be a constant.

4. FALSE. The concentration of chemical leaving the tank is $c_2(t) = \frac{A(t)}{V(t)}$, and since both $A(t)$ and $V(t)$ can be nonconstant, $c_2(t)$ can also be nonconstant.

5. FALSE. Kirchhoff's second law states that the sum of the voltage drops around a closed circuit is zero, not that it is independent of time.

6. TRUE. This is essentially Ohm’s law, (1.7.10).

7. TRUE. Due to the negative exponential in the formula for the transient current, $i_T(t)$, it decays to zero as $t \to \infty$. Meanwhile, the steady-state current, $i_S(t)$, oscillates with the same frequency $\omega$ as the alternating current, albeit with a phase shift.

8. TRUE. The amplitude is given in (1.7.19) as $A = \frac{E_0}{\sqrt{R^2 + \omega^2L^2}}$, and so as $\omega$ gets larger, the amplitude $A$ gets smaller.

Problems:

1. Given $V(0) = 10$, $A(0) = 20$, $c_1 = 4$, $r_1 = 2$, and $r_2 = 1$. Then $\Delta V = r_1\Delta t - r_2\Delta t \implies \frac{dV}{dt} = 1 \implies V(t) = t + 10$, since $V(0) = 10$. $\Delta A \approx c_1r_1\Delta t - c_2r_2\Delta t \implies \frac{dA}{dt} = 8 - c_2 = 8 - \frac{A}{V} = 8 - \frac{A}{t + 10} \implies \frac{dA}{dt} + \frac{1}{t + 10}A = 8 \implies (t + 10)A = 4(t + 10)^2 + c_1$. Since $A(0) = 20$, $c_1 = -200$, so $A(t) = \frac{4}{t + 10}[(t + 10)^2 - 50]$. Therefore, $A(40) = 196$ g.
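Evaluating the closed form confirms both the initial condition and the final answer:

```python
# A(t) = (4/(t+10)) ((t+10)^2 - 50) from Problem 1
A = lambda t: 4 / (t + 10) * ((t + 10) ** 2 - 50)

print(A(0))   # initial amount of chemical: 20 g
print(A(40))  # amount after 40 minutes: 196 g
```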

2. Given $V(0) = 600$, $A(0) = 1500$, $c_1 = 5$, $r_1 = 6$, and $r_2 = 3$. We need to find $\frac{A(60)}{V(60)}$. $\Delta V = r_1\Delta t - r_2\Delta t \implies \frac{dV}{dt} = 3 \implies V(t) = 3(t + 200)$, since $V(0) = 600$. $\Delta A \approx c_1r_1\Delta t - c_2r_2\Delta t \implies \frac{dA}{dt} = 30 - 3c_2 = 30 - \frac{3A}{V} = 30 - \frac{A}{t + 200} \implies (t + 200)A = 15(t + 200)^2 + c$. Since $A(0) = 1500$, $c = -300000$, and therefore $A(t) = \frac{15}{t + 200}[(t + 200)^2 - 20000]$. Thus $\frac{A(60)}{V(60)} = \frac{595}{169}$ g/L.

3. Given $V(0) = 20$, $A(0) = 0$, $c_1 = 10$, $r_1 = 4$, and $r_2 = 2$. Then $\Delta V = r_1\Delta t - r_2\Delta t \implies \frac{dV}{dt} = 2 \implies V(t) = 2(t + 10)$, since $V(0) = 20$. Thus $V(t) = 40$ when $t = 10$, so we must find $A(10)$. $\Delta A \approx c_1r_1\Delta t - c_2r_2\Delta t \implies \frac{dA}{dt} = 40 - 2c_2 = 40 - \frac{2A}{V} = 40 - \frac{A}{t + 10} \implies \frac{dA}{dt} + \frac{1}{t + 10}A = 40 \implies \frac{d}{dt}[(t + 10)A] = 40(t + 10) \implies (t + 10)A = 20(t + 10)^2 + c$. Since $A(0) = 0$, $c = -2000$, so $A(t) = \frac{20}{t + 10}[(t + 10)^2 - 100]$, and $A(10) = 300$ g.

4. Given $V(0) = 100$, $A(0) = 100$, $c_1 = 0.5$, $r_1 = 6$, and $r_2 = 4$. Then $\Delta V = r_1\Delta t - r_2\Delta t \implies \frac{dV}{dt} = 2 \implies V(t) = 2(t + 50)$, since $V(0) = 100$. Then $\frac{dA}{dt} + \frac{4A}{2(t + 50)} = 3 \implies \frac{dA}{dt} + \frac{2}{t + 50}A = 3 \implies \frac{d}{dt}[(t + 50)^2A] = 3(t + 50)^2 \implies (t + 50)^2A = (t + 50)^3 + c$, but $A(0) = 100$ so $c = 125000$, and therefore $A(t) = t + 50 + \frac{125000}{(t + 50)^2}$. The tank is full when $V(t) = 200$, that is, when $2(t + 50) = 200$, so that $t = 50$ min. Therefore the concentration just before the tank overflows is $\frac{A(50)}{V(50)} = \frac{9}{16}$ g/L.

5. Given $V(0) = 10$, $c_1 = 0$ (pure water flows in), $r_1 = 3$, $r_2 = 2$, and $\frac{A(5)}{V(5)} = 0.2$.

(a) $\Delta V = r_1\Delta t - r_2\Delta t \implies \frac{dV}{dt} = 1 \implies V(t) = t + 10$, since $V(0) = 10$. Then $\Delta A \approx c_1r_1\Delta t - c_2r_2\Delta t \implies \frac{dA}{dt} = -2c_2 = -2\frac{A}{V} = -\frac{2A}{t + 10} \implies \frac{dA}{A} = -\frac{2dt}{t + 10} \implies \ln|A| = -2\ln|t + 10| + c \implies A = k(t + 10)^{-2}$. Then $A(5) = 3$, since $V(5) = 15$ and $\frac{A(5)}{V(5)} = 0.2$. Thus, $k = 675$ and $A(t) = \frac{675}{(t + 10)^2}$. In particular, $A(0) = 6.75$ g.

(b) Find $V(t)$ when $\frac{A(t)}{V(t)} = 0.1$. From part (a), $A(t) = \frac{675}{(t + 10)^2}$ and $V(t) = t + 10 \implies \frac{A(t)}{V(t)} = \frac{675}{(t + 10)^3}$. Since $\frac{A(t)}{V(t)} = 0.1 \implies (t + 10)^3 = 6750 \implies t + 10 = 15\sqrt[3]{2}$, so $V(t) = t + 10 = 15\sqrt[3]{2}$ L.

6. Given $V(0) = 20$, $A(0) = 0$, $c_1 = 1$, $r_1 = 3$, and $r_2 = 2$.

(a) $\Delta V = r_1\Delta t - r_2\Delta t \implies \frac{dV}{dt} = 1 \implies V = t + 20$, since $V(0) = 20$. Then $\frac{dA}{dt} + \frac{2}{t + 20}A = 3 \implies \frac{d}{dt}[(t + 20)^2A] = 3(t + 20)^2 \implies A(t) = \frac{(t + 20)^3 + c}{(t + 20)^2}$, and since $A(0) = 0$, $c = -20^3$, which means that $A(t) = \frac{(t + 20)^3 - 20^3}{(t + 20)^2}$.

(b) The concentration of chemical in the tank, $c_2$, is given by $c_2 = \frac{A(t)}{V(t)} = \frac{A(t)}{t + 20}$, so from part (a), $c_2 = \frac{(t + 20)^3 - 20^3}{(t + 20)^3}$. Therefore $c_2 = \frac{1}{2}$ g/L when $\frac{1}{2} = \frac{(t + 20)^3 - 20^3}{(t + 20)^3} \implies t = 20(\sqrt[3]{2} - 1)$ minutes.

7. (a) Given $V(0) = w$, $c_1 = k$, $r_1 = r$, $r_2 = r$, and $A(0) = A_0$. Then $\Delta V = r_1\Delta t - r_2\Delta t \implies \frac{dV}{dt} = 0 \implies V(t) = V(0) = w$ for all $t$. Then $\Delta A \approx c_1r_1\Delta t - c_2r_2\Delta t \implies \frac{dA}{dt} = kr - r\frac{A}{V} = kr - \frac{r}{w}A \implies \frac{dA}{dt} + \frac{r}{w}A = kr \implies \frac{d}{dt}(e^{rt/w}A) = kre^{rt/w} \implies A(t) = kw + ce^{-rt/w}$. Since $A(0) = A_0$, $c = A_0 - kw \implies A(t) = e^{-rt/w}[kw(e^{rt/w} - 1) + A_0]$.

(b) $\lim_{t\to\infty}\frac{A(t)}{V(t)} = \lim_{t\to\infty}\frac{e^{-rt/w}}{w}[kw(e^{rt/w} - 1) + A_0] = \lim_{t\to\infty}\left[k + \left(\frac{A_0}{w} - k\right)e^{-rt/w}\right] = k$. This is reasonable since the volume remains constant, and the solution in the tank is gradually mixed with and replaced by the solution of concentration $k$ flowing in.


8. (a) For the top tank we have $\frac{dA_1}{dt} = c_1r_1 - c_2r_2 = c_1r_1 - r_2\frac{A_1(t)}{V_1(t)} = c_1r_1 - \frac{r_2}{(r_1 - r_2)t + V_1}A_1(t) \implies \frac{dA_1}{dt} + \frac{r_2}{(r_1 - r_2)t + V_1}A_1 = c_1r_1$.

For the bottom tank we have $\frac{dA_2}{dt} = c_2r_2 - c_3r_3 = \frac{r_2A_1}{(r_1 - r_2)t + V_1} - r_3\frac{A_2(t)}{V_2(t)} = \frac{r_2A_1}{(r_1 - r_2)t + V_1} - \frac{r_3}{(r_2 - r_3)t + V_2}A_2(t) \implies \frac{dA_2}{dt} + \frac{r_3}{(r_2 - r_3)t + V_2}A_2 = \frac{r_2A_1}{(r_1 - r_2)t + V_1}$.

(b) From part (a), $\frac{dA_1}{dt} + \frac{r_2}{(r_1 - r_2)t + V_1}A_1 = c_1r_1 \implies \frac{dA_1}{dt} + \frac{4}{2t + 40}A_1 = 3 \implies \frac{dA_1}{dt} + \frac{2}{t + 20}A_1 = 3 \implies \frac{d}{dt}[(t + 20)^2A_1] = 3(t + 20)^2 \implies A_1 = t + 20 + \frac{c}{(t + 20)^2}$, but $A_1(0) = 4$ so $c = -6400$. Consequently, $A_1(t) = t + 20 - \frac{6400}{(t + 20)^2}$. Then $\frac{dA_2}{dt} + \frac{3}{t + 20}A_2 = \frac{2}{t + 20}\left[t + 20 - \frac{6400}{(t + 20)^2}\right] = \frac{2[(t + 20)^3 - 6400]}{(t + 20)^3} \implies \frac{d}{dt}[(t + 20)^3A_2] = 2[(t + 20)^3 - 6400] \implies A_2(t) = \frac{t + 20}{2} - \frac{12800t}{(t + 20)^3} + \frac{k}{(t + 20)^3}$, but $A_2(0) = 20$ so $k = 80000$. Thus $A_2(t) = \frac{t + 20}{2} - \frac{12800t}{(t + 20)^3} + \frac{80000}{(t + 20)^3}$, and in particular $A_2(10) = \frac{119}{9} \approx 13.2$ g.
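The closed forms for the two tanks can be checked against the initial amounts and the value at $t = 10$:

```python
# Closed-form amounts from Problem 8(b)
A1 = lambda t: t + 20 - 6400 / (t + 20) ** 2
A2 = lambda t: (t + 20) / 2 - 12800 * t / (t + 20) ** 3 + 80000 / (t + 20) ** 3

print(A1(0), A2(0))      # initial amounts: 4 g and 20 g
print(round(A2(10), 1))  # 119/9, about 13.2 g
```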

9. Let $E(t) = 20$, $R = 4$, and $L = \frac{1}{10}$. Then $\frac{di}{dt} + \frac{R}{L}i = \frac{1}{L}E(t) \implies \frac{di}{dt} + 40i = 200 \implies \frac{d}{dt}(e^{40t}i) = 200e^{40t} \implies i(t) = 5 + ce^{-40t}$. But $i(0) = 0 \implies c = -5$. Consequently, $i(t) = 5(1 - e^{-40t})$.

10. Let $R = 5$, $C = \frac{1}{50}$, and $E(t) = 100$. Then $\frac{dq}{dt} + \frac{1}{RC}q = \frac{E}{R} \implies \frac{dq}{dt} + 10q = 20 \implies \frac{d}{dt}(qe^{10t}) = 20e^{10t} \implies q(t) = 2 + ce^{-10t}$. But $q(0) = 0 \implies c = -2$, so $q(t) = 2(1 - e^{-10t})$.

11. Let $R = 2$, $L = \frac{2}{3}$, and $E(t) = 10\sin 4t$. Then $\frac{di}{dt} + \frac{R}{L}i = \frac{1}{L}E(t) \implies \frac{di}{dt} + 3i = 15\sin 4t \implies \frac{d}{dt}(e^{3t}i) = 15e^{3t}\sin 4t \implies e^{3t}i = \frac{3e^{3t}}{5}(3\sin 4t - 4\cos 4t) + c \implies i = \frac{3}{5}(3\sin 4t - 4\cos 4t) + ce^{-3t}$, but $i(0) = 0 \implies c = \frac{12}{5}$, so $i(t) = \frac{3}{5}(3\sin 4t - 4\cos 4t + 4e^{-3t})$.

12. Let $R = 2$, $C = \frac{1}{8}$, and $E(t) = 10\cos 3t$. Then $\frac{dq}{dt} + \frac{1}{RC}q = \frac{E}{R} \implies \frac{dq}{dt} + 4q = 5\cos 3t \implies \frac{d}{dt}(e^{4t}q) = 5e^{4t}\cos 3t \implies e^{4t}q = \frac{e^{4t}}{5}(4\cos 3t + 3\sin 3t) + c \implies q(t) = \frac{1}{5}(4\cos 3t + 3\sin 3t) + ce^{-4t}$, but $q(0) = 1 \implies c = \frac{1}{5}$, so $q(t) = \frac{1}{5}(4\cos 3t + 3\sin 3t) + \frac{1}{5}e^{-4t}$ and $i(t) = \frac{dq}{dt} = \frac{1}{5}(9\cos 3t - 12\sin 3t - 4e^{-4t})$.

13. In an RC circuit for $t > 0$ the differential equation is given by $\frac{dq}{dt} + \frac{1}{RC}q = \frac{E}{R}$. If $E(t) = 0$, then $\frac{dq}{dt} + \frac{1}{RC}q = 0 \implies \frac{d}{dt}(e^{t/RC}q) = 0 \implies q = ce^{-t/RC}$, and if $q(0) = 5$, then $q(t) = 5e^{-t/RC}$. Then $\lim_{t\to\infty} q(t) = 0$. Yes, this is reasonable: as time increases with $E(t) = 0$, the charge dissipates to zero.

14. In an RC circuit the differential equation is given by $i + \frac{1}{RC}q = \frac{E_0}{R}$. Differentiating this equation with respect to $t$, we obtain $\frac{di}{dt} + \frac{1}{RC}\frac{dq}{dt} = 0$, but $\frac{dq}{dt} = i$, so $\frac{di}{dt} + \frac{1}{RC}i = 0 \implies \frac{d}{dt}(e^{t/RC}i) = 0 \implies i(t) = ce^{-t/RC}$. Since $q(0) = 0$, $i(0) = \frac{E_0}{R}$, and so $i(t) = \frac{E_0}{R}e^{-t/RC}$. Integrating and using $q(0) = 0$ gives $q(t) = E_0C(1 - e^{-t/RC})$. Then $\lim_{t\to\infty} q(t) = E_0C$, and $\lim_{t\to\infty} i(t) = 0$.

[$q(t)$ versus $t$, increasing toward the limiting charge $E_0C$]

Figure 0.0.49: Figure for Exercise 14

15. In an RL circuit, $\frac{di}{dt} + \frac{R}{L}i = \frac{E(t)}{L}$, and since $E(t) = E_0\sin\omega t$, $\frac{di}{dt} + \frac{R}{L}i = \frac{E_0}{L}\sin\omega t \implies \frac{d}{dt}(e^{Rt/L}i) = \frac{E_0}{L}e^{Rt/L}\sin\omega t \implies i(t) = \frac{E_0}{R^2 + L^2\omega^2}(R\sin\omega t - \omega L\cos\omega t) + Ae^{-Rt/L}$. We can write this as
$$i(t) = \frac{E_0}{\sqrt{R^2 + L^2\omega^2}}\left[\frac{R}{\sqrt{R^2 + L^2\omega^2}}\sin\omega t - \frac{\omega L}{\sqrt{R^2 + L^2\omega^2}}\cos\omega t\right] + Ae^{-Rt/L}.$$
Defining the phase $\phi$ by $\cos\phi = \frac{R}{\sqrt{R^2 + L^2\omega^2}}$, $\sin\phi = \frac{\omega L}{\sqrt{R^2 + L^2\omega^2}}$, we have $i(t) = \frac{E_0}{\sqrt{R^2 + L^2\omega^2}}[\cos\phi\sin\omega t - \sin\phi\cos\omega t] + Ae^{-Rt/L}$. That is, $i(t) = \frac{E_0}{\sqrt{R^2 + L^2\omega^2}}\sin(\omega t - \phi) + Ae^{-Rt/L}$.

Transient part of the solution: $i_T(t) = Ae^{-Rt/L}$.

Steady-state part of the solution: $i_S(t) = \frac{E_0}{\sqrt{R^2 + L^2\omega^2}}\sin(\omega t - \phi)$.

16. We must solve the initial-value problem $\frac{di}{dt} + ai = \frac{E_0}{L}$, $i(0) = 0$, where $a = \frac{R}{L}$ and $E_0$ denotes the constant EMF. An integrating factor for the differential equation is $I = e^{at}$, so that the differential equation can be written in the form $\frac{d}{dt}(e^{at}i) = \frac{E_0}{L}e^{at}$. Integrating yields $i(t) = \frac{E_0}{aL} + c_1e^{-at}$. The given initial condition requires $c_1 + \frac{E_0}{aL} = 0$, so that $c_1 = -\frac{E_0}{aL}$. Hence $i(t) = \frac{E_0}{aL}(1 - e^{-at}) = \frac{E_0}{R}(1 - e^{-at})$.

17. $\frac{dq}{dt} + \frac{1}{RC}q = \frac{E(t)}{R} \implies \frac{dq}{dt} + \frac{1}{RC}q = \frac{E_0}{R}e^{-at} \implies \frac{d}{dt}(e^{t/RC}q) = \frac{E_0}{R}e^{(1/RC - a)t} \implies q(t) = e^{-t/RC}\left[\frac{E_0C}{1 - aRC}e^{(1/RC - a)t} + k\right] = \frac{E_0C}{1 - aRC}e^{-at} + ke^{-t/RC}$. Imposing the initial condition $q(0) = 0$ (capacitor initially uncharged) requires $k = -\frac{E_0C}{1 - aRC}$, so that $q(t) = \frac{E_0C}{1 - aRC}(e^{-at} - e^{-t/RC})$. Thus $i(t) = \frac{dq}{dt} = \frac{E_0C}{1 - aRC}\left(\frac{1}{RC}e^{-t/RC} - ae^{-at}\right)$.

18. $\frac{d^2q}{dt^2} + \frac{1}{LC}q = 0 \implies i\frac{di}{dq} + \frac{1}{LC}q = 0$, since $i = \frac{dq}{dt}$. Then $i\,di = -\frac{1}{LC}q\,dq \implies i^2 = -\frac{1}{LC}q^2 + k$, but $q(0) = q_0$ and $i(0) = 0$, so $k = \frac{q_0^2}{LC} \implies i^2 = \frac{q_0^2 - q^2}{LC} \implies \frac{dq}{dt} = \pm\frac{\sqrt{q_0^2 - q^2}}{\sqrt{LC}} \implies \frac{dq}{\sqrt{q_0^2 - q^2}} = \pm\frac{dt}{\sqrt{LC}} \implies \sin^{-1}\left(\frac{q}{q_0}\right) = \pm\frac{t}{\sqrt{LC}} + k_1 \implies q = q_0\sin\left(\pm\frac{t}{\sqrt{LC}} + k_1\right)$, but $q(0) = q_0$, so $q_0 = q_0\sin k_1 \implies k_1 = \frac{\pi}{2} + 2n\pi$, where $n$ is an integer $\implies q = q_0\sin\left(\pm\frac{t}{\sqrt{LC}} + \frac{\pi}{2}\right) \implies q(t) = q_0\cos\left(\frac{t}{\sqrt{LC}}\right)$ and $i(t) = \frac{dq}{dt} = -\frac{q_0}{\sqrt{LC}}\sin\left(\frac{t}{\sqrt{LC}}\right)$.

19. $\frac{d^2q}{dt^2} + \frac{1}{LC}q = \frac{E_0}{L}$. Since $i = \frac{dq}{dt}$, $\frac{d^2q}{dt^2} = \frac{di}{dq}\frac{dq}{dt} = i\frac{di}{dq}$. Hence the original equation can be written as $i\frac{di}{dq} + \frac{1}{LC}q = \frac{E_0}{L} \implies i\,di + \frac{1}{LC}q\,dq = \frac{E_0}{L}dq$, or $\frac{i^2}{2} + \frac{q^2}{2LC} = \frac{E_0q}{L} + A$. Since $i(0) = 0$ and $q(0) = q_0$, $A = \frac{q_0^2}{2LC} - \frac{E_0q_0}{L}$. Solving for $i$,
$$i = \left[2A + \frac{2E_0q}{L} - \frac{q^2}{LC}\right]^{1/2} = \left[2A + \frac{(E_0C)^2}{LC} - \frac{(q - E_0C)^2}{LC}\right]^{1/2}.$$
Letting $D^2 = 2A + \frac{(E_0C)^2}{LC}$, we have $\frac{dq}{dt} = D\left[1 - \left(\frac{q - E_0C}{D\sqrt{LC}}\right)^2\right]^{1/2} \implies \sqrt{LC}\sin^{-1}\left(\frac{q - E_0C}{D\sqrt{LC}}\right) = t + B$. Since $q(0) = q_0$, $B = \sqrt{LC}\sin^{-1}\left(\frac{q_0 - E_0C}{D\sqrt{LC}}\right)$, and therefore $\frac{q - E_0C}{D\sqrt{LC}} = \sin\left(\frac{t + B}{\sqrt{LC}}\right) \implies q(t) = D\sqrt{LC}\sin\left(\frac{t + B}{\sqrt{LC}}\right) + E_0C \implies i = \frac{dq}{dt} = D\cos\left(\frac{t + B}{\sqrt{LC}}\right)$. Since $D^2 = 2A + \frac{(E_0C)^2}{LC}$ and $A = \frac{q_0^2}{2LC} - \frac{E_0q_0}{L}$, we can substitute to eliminate $A$ and obtain $D = \pm\frac{|q_0 - E_0C|}{\sqrt{LC}}$. Thus $q(t) = \pm|q_0 - E_0C|\sin\left(\frac{t + B}{\sqrt{LC}}\right) + E_0C$.

Solutions to Section 1.8

True-False Review:

1. TRUE. We have
$$f(tx, ty) = \frac{2(tx)(ty) - (tx)^2}{2(tx)(ty) + (ty)^2} = \frac{2xyt^2 - x^2t^2}{2xyt^2 + y^2t^2} = \frac{2xy - x^2}{2xy + y^2} = f(x, y),$$
so $f$ is homogeneous of degree zero.

2. FALSE. We have
$$f(tx, ty) = \frac{(ty)^2}{(tx) + (ty)^2} = \frac{y^2t^2}{xt + y^2t^2} = \frac{y^2t}{x + y^2t} \neq \frac{y^2}{x + y^2},$$
so $f$ is not homogeneous of degree zero.

3. FALSE. Setting $f(x, y) = \frac{1 + xy^2}{1 + x^2y}$, we have
$$f(tx, ty) = \frac{1 + (tx)(ty)^2}{1 + (tx)^2(ty)} = \frac{1 + xy^2t^3}{1 + x^2yt^3} \neq f(x, y),$$
so $f$ is not homogeneous of degree zero. Therefore, the differential equation is not homogeneous.

4. TRUE. Setting $f(x, y) = \frac{x^2y^2}{x^4 + y^4}$, we have
$$f(tx, ty) = \frac{(tx)^2(ty)^2}{(tx)^4 + (ty)^4} = \frac{x^2y^2t^4}{x^4t^4 + y^4t^4} = \frac{x^2y^2}{x^4 + y^4} = f(x, y).$$
Therefore, $f$ is homogeneous of degree zero, and therefore the differential equation is homogeneous.

5. TRUE. This is verified in the calculation leading to Theorem 1.8.5.

6. TRUE. This is verified in the calculation leading to (1.8.12).

7. TRUE. We can rewrite the equation as
$$y' - \sqrt{x}\,y = \sqrt{x}\,y^{1/2},$$
which is the proper form for a Bernoulli equation, with $p(x) = -\sqrt{x}$, $q(x) = \sqrt{x}$, and $n = 1/2$.

8. FALSE. The presence of an exponential $e^{xy}$ involving $y$ prohibits this equation from having the proper form for a Bernoulli equation.

9. TRUE. This is a Bernoulli equation with p(x) = x, q(x) = x2, and n = 2/3.

Unless otherwise indicated in this section, $v = \frac{y}{x}$, $\frac{dy}{dx} = v + x\frac{dv}{dx}$, and $t > 0$.

Problems:

1. f(tx, ty) =(tx)2 − (ty)2

(tx)(ty)=

x2 − y2

xy= f(x, y). Thus f is homogeneous of degree zero. f(x, y) =

x2 − y2

xy=

1− ( yx )2

yx

=1− v2

v= F (v).

2. $f(tx, ty) = (tx) - (ty) = t(x - y) = tf(x, y)$. Thus $f$ is homogeneous of degree one.

3. $f(tx, ty) = \frac{(tx)\sin\left(\frac{tx}{ty}\right) - (ty)\cos\left(\frac{ty}{tx}\right)}{ty} = \frac{x\sin\frac{x}{y} - y\cos\frac{y}{x}}{y} = f(x, y)$. Thus $f$ is homogeneous of degree zero. $f(x, y) = \frac{\sin\frac{x}{y} - \frac{y}{x}\cos\frac{y}{x}}{\frac{y}{x}} = \frac{\sin\frac{1}{v} - v\cos v}{v} = F(v)$.

4. $f(tx, ty) = \frac{\sqrt{(tx)^2 + (ty)^2}}{tx - ty} = \frac{|t|\sqrt{x^2 + y^2}}{t(x - y)} = \frac{\sqrt{x^2 + y^2}}{x - y} = f(x, y)$ for $t > 0$. Thus $f$ is homogeneous of degree zero. $F(v) = \frac{\sqrt{1 + v^2}}{1 - v}$.

5. $f(tx, ty) = \frac{ty}{tx - 1} \neq f(x, y)$. Thus $f$ is not homogeneous.


6. $f(tx, ty) = \frac{tx - 3}{ty} + \frac{5(ty) + 9}{3(ty)} = \frac{3(tx) - 9 + 5(ty) + 9}{3(ty)} = \frac{3x + 5y}{3y} = f(x, y)$. Thus $f$ is homogeneous of degree zero. $f(x, y) = \frac{3x + 5y}{3y} = \frac{3 + 5\frac{y}{x}}{3\frac{y}{x}} = \frac{3 + 5v}{3v} = F(v)$.

7. $f(tx, ty) = \frac{\sqrt{(tx)^2 + (ty)^2}}{tx} = \frac{\sqrt{x^2 + y^2}}{x} = f(x, y)$. Thus $f$ is homogeneous of degree zero. For $x < 0$, $f(x, y) = \frac{\sqrt{x^2 + y^2}}{x} = \frac{|x|\sqrt{1 + (y/x)^2}}{x} = -\sqrt{1 + \left(\frac{y}{x}\right)^2} = -\sqrt{1 + v^2} = F(v)$.

8. $f(tx, ty) = \frac{\sqrt{(tx)^2 + 4(ty)^2} - (tx) + (ty)}{(tx) + 3(ty)} = \frac{\sqrt{x^2 + 4y^2} - x + y}{x + 3y} = f(x, y)$. Thus $f$ is homogeneous of degree zero. $f(x, y) = \frac{\sqrt{x^2 + 4y^2} - x + y}{x + 3y} = \frac{\sqrt{1 + 4(y/x)^2} - 1 + \frac{y}{x}}{1 + 3\frac{y}{x}} = \frac{\sqrt{1 + 4v^2} - 1 + v}{1 + 3v} = F(v)$.
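Degree-zero homogeneity is also easy to spot-check numerically: scaling both arguments by $t$ should leave $f$ unchanged. The sketch below does this for the function of Problem 8 at an arbitrary sample point:

```python
import math

# f(x, y) from Problem 8
f = lambda x, y: (math.sqrt(x**2 + 4 * y**2) - x + y) / (x + 3 * y)

x, y = 1.3, 0.7  # arbitrary sample point
vals = [f(t * x, t * y) for t in (1.0, 2.0, 5.0, 10.0)]
print(max(vals) - min(vals))  # essentially zero
```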

9. $(3x - 2y)\frac{dy}{dx} = 3y \implies \left(3 - 2\frac{y}{x}\right)\frac{dy}{dx} = 3\frac{y}{x} \implies (3 - 2v)\left(v + x\frac{dv}{dx}\right) = 3v \implies x\frac{dv}{dx} = \frac{3v}{3 - 2v} - v = \frac{2v^2}{3 - 2v} \implies \int \frac{3 - 2v}{2v^2}dv = \int \frac{dx}{x} \implies -\frac{3}{2v} - \ln|v| = \ln|x| + c_1 \implies -\frac{3x}{2y} - \ln\left|\frac{y}{x}\right| = \ln|x| + c_1 \implies \ln y = -\frac{3x}{2y} + c_2 \implies y^2 = ce^{-3x/y}$.

10. $\frac{dy}{dx} = \frac{(x + y)^2}{2x^2} = \frac{1}{2}\left(1 + \frac{y}{x}\right)^2 \implies v + x\frac{dv}{dx} = \frac{1}{2}(1 + v)^2 \implies x\frac{dv}{dx} = \frac{1}{2}(1 + v^2) \implies \int \frac{dv}{v^2 + 1} = \int \frac{dx}{2x} \implies \tan^{-1}v = \frac{1}{2}\ln|x| + c \implies \tan^{-1}\left(\frac{y}{x}\right) = \frac{1}{2}\ln|x| + c$.

11. $\sin\left(\frac{y}{x}\right)\left(x\frac{dy}{dx} - y\right) = x\cos\left(\frac{y}{x}\right) \implies \sin\left(\frac{y}{x}\right)\left(\frac{dy}{dx} - \frac{y}{x}\right) = \cos\left(\frac{y}{x}\right) \implies \sin v\left(v + x\frac{dv}{dx} - v\right) = \cos v \implies \sin v\left(x\frac{dv}{dx}\right) = \cos v \implies \int \frac{\sin v}{\cos v}dv = \int \frac{dx}{x} \implies -\ln|\cos v| = \ln|x| + c_1 \implies \left|x\cos\left(\frac{y}{x}\right)\right| = c_2 \implies y(x) = x\cos^{-1}\left(\frac{c}{x}\right)$.

12. $\frac{dy}{dx} = \frac{\sqrt{16x^2 - y^2} + y}{x} \implies \frac{dy}{dx} = \sqrt{16 - (y/x)^2} + \frac{y}{x} \implies v + x\frac{dv}{dx} = \sqrt{16 - v^2} + v \implies \int \frac{dv}{\sqrt{16 - v^2}} = \int \frac{dx}{x} \implies \sin^{-1}\left(\frac{v}{4}\right) = \ln|x| + c \implies \sin^{-1}\left(\frac{y}{4x}\right) = \ln|x| + c$.

13. We first rewrite the given differential equation in the equivalent form $y' = \frac{\sqrt{9x^2 + y^2} + y}{x}$. Factoring out an $x^2$ from the square root yields $y' = \frac{|x|\sqrt{9 + (y/x)^2} + y}{x}$. Since we are told to solve the differential equation on the interval $x > 0$, we have $|x| = x$, so that $y' = \sqrt{9 + (y/x)^2} + \frac{y}{x}$, which we recognize as being homogeneous. We therefore let $y = xV$, so that $y' = xV' + V$. Substitution into the preceding differential equation yields $xV' + V = \sqrt{9 + V^2} + V$, that is, $xV' = \sqrt{9 + V^2}$. Separating the variables in this equation, we obtain $\frac{1}{\sqrt{9 + V^2}}dV = \frac{1}{x}dx$. Integrating, we obtain $\ln(V + \sqrt{9 + V^2}) = \ln(c_1x)$. Exponentiating both sides yields $V + \sqrt{9 + V^2} = c_1x$. Substituting $\frac{y}{x} = V$ and multiplying through by $x$ yields the general solution $y + \sqrt{9x^2 + y^2} = c_1x^2$.

14. The given differential equation can be written in the equivalent form
$$\frac{dy}{dx} = \frac{y(x^2 - y^2)}{x(x^2 + y^2)},$$
which we recognize as being first-order homogeneous. The substitution $y = xv$ yields
$$v + x\frac{dv}{dx} = \frac{v(1 - v^2)}{1 + v^2} \implies x\frac{dv}{dx} = -\frac{2v^3}{1 + v^2},$$
so that
$$\int \frac{1 + v^2}{v^3}dv = -2\int \frac{dx}{x} \implies -\frac{v^{-2}}{2} + \ln|v| = -2\ln|x| + c_1.$$
Consequently,
$$-\frac{x^2}{2y^2} + \ln|xy| = c_1.$$

15. x dy/dx + y ln x = y ln y =⇒ dy/dx = (y/x)ln(y/x). Let y = xv: v + x dv/dx = v ln v =⇒ ∫dv/(v(ln v − 1)) = ∫dx/x =⇒ ln|ln v − 1| = ln|x| + c1 =⇒ (ln(y/x) − 1)/x = c =⇒ y(x) = xe^(1+cx).

16. dy/dx = (y^2 + 2xy − 2x^2)/(x^2 − xy + y^2). Let y = xv: v + x dv/dx = (v^2 + 2v − 2)/(1 − v + v^2) =⇒ x dv/dx = (−v^3 + 2v^2 + v − 2)/(v^2 − v + 1) =⇒ ∫(v^2 − v + 1)/(v^3 − 2v^2 − v + 2) dv = −∫dx/x =⇒ ∫(v^2 − v + 1)/((v − 1)(v + 1)(v − 2)) dv = −∫dx/x =⇒ ∫[1/(v − 2) − 1/(2(v − 1)) + 1/(2(v + 1))] dv = −∫dx/x =⇒ ln|v − 2| − (1/2)ln|v − 1| + (1/2)ln|v + 1| = −ln|x| + c1 =⇒ ln|(v − 2)^2(v + 1)/(v − 1)| = −2 ln|x| + c2 =⇒ (y − 2x)^2(y + x)/(y − x) = c.

17. 2xy dy − (x^2 e^(−y²/x²) + 2y^2)dx = 0 =⇒ 2(y/x)dy/dx − (e^(−y²/x²) + 2(y/x)^2) = 0. Let y = xv: 2v(v + x dv/dx) − (e^(−v²) + 2v^2) = 0 =⇒ 2vx dv/dx = e^(−v²) =⇒ ∫e^(v²)(2v dv) = ∫dx/x =⇒ e^(v²) = ln|x| + c1 =⇒ e^(y²/x²) = ln(cx) =⇒ y^2 = x^2 ln(ln(cx)).

18. x^2 dy/dx = y^2 + 3xy + x^2 =⇒ dy/dx = (y/x)^2 + 3(y/x) + 1. Let y = xv: v + x dv/dx = v^2 + 3v + 1 =⇒ x dv/dx = (v + 1)^2 =⇒ ∫dv/(v + 1)^2 = ∫dx/x =⇒ −1/(v + 1) = ln|x| + c1 =⇒ y/x + 1 = −1/ln(cx) =⇒ y(x) = −x[1 + 1/ln(cx)].

19. dy/dx = (√(x^2 + y^2) − x)/y =⇒ dy/dx = (√(1 + (y/x)^2) − 1)/(y/x). Let y = xv: v + x dv/dx = (√(1 + v^2) − 1)/v =⇒ x dv/dx = (√(1 + v^2) − 1 − v^2)/v =⇒ ∫v dv/(√(1 + v^2) − 1 − v^2) = ∫dx/x. Substituting u = √(1 + v^2) (so that v dv = u du and the denominator becomes u(1 − u)) gives ∫du/(1 − u) = ∫dx/x =⇒ −ln|1 − u| = ln|x| + c1 =⇒ |x(1 − u)| = c2 =⇒ 1 − u = c/x =⇒ u^2 = c^2/x^2 − 2c/x + 1 =⇒ v^2 = c^2/x^2 − 2c/x =⇒ y^2 = c^2 − 2cx.

Page 58: Differential Equations and Linear Algebra - 3e - Goode -Solutions

58

20. 2x(y + 2x)dy/dx = y(4x − y) =⇒ 2(y/x + 2)dy/dx = (y/x)(4 − y/x). Let y = xv: 2(v + 2)(v + x dv/dx) = v(4 − v) =⇒ 2(v + 2)x dv/dx = −3v^2 =⇒ 2∫(v + 2)/v^2 dv = −3∫dx/x =⇒ 2 ln|v| − 4/v = −3 ln|x| + c1 =⇒ ln|xy^2| = 4x/y + c1 =⇒ xy^2 = ce^(4x/y).

21. x dy/dx = x tan(y/x) + y =⇒ v + x dv/dx = tan v + v =⇒ x dv/dx = tan v =⇒ ∫cot v dv = ∫dx/x =⇒ ln|sin v| = ln|x| + c1 =⇒ sin v = cx =⇒ v = sin⁻¹(cx) =⇒ y(x) = x sin⁻¹(cx).
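As a quick numerical sanity check (not part of the original solution), the closed form y = x sin⁻¹(cx) from Problem 21 can be plugged back into the equation x dy/dx = x tan(y/x) + y. A short Python sketch, with C an arbitrary illustrative constant:

```python
import math

C = 0.5  # arbitrary constant (illustrative choice); requires |C*x| < 1

def y(x):
    return x * math.asin(C * x)

def residual(x, h=1e-6):
    # central-difference dy/dx, then check x*y' - (x*tan(y/x) + y)
    dydx = (y(x + h) - y(x - h)) / (2 * h)
    return x * dydx - (x * math.tan(y(x) / x) + y(x))

for x in (0.3, 0.7, 1.1):
    assert abs(residual(x)) < 1e-6
```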

22. dy/dx = (x√(x^2 + y^2) + y^2)/(xy) =⇒ dy/dx = √((x/y)^2 + 1) + y/x. Let y = xv: v + x dv/dx = √((1/v)^2 + 1) + v =⇒ x dv/dx = √(1 + v^2)/v =⇒ ∫v dv/√(1 + v^2) = ∫dx/x =⇒ √(1 + v^2) = ln|x| + c =⇒ 1 + (y/x)^2 = (ln|x| + c)^2 =⇒ y^2 = x^2[(ln|x| + c)^2 − 1].

23. The given differential equation can be written as (x − 4y)dy = (4x + y)dx. Converting to polar coordinates we have x = r cos θ =⇒ dx = cos θ dr − r sin θ dθ, and y = r sin θ =⇒ dy = sin θ dr + r cos θ dθ. Substituting these results into the preceding differential equation and simplifying yields the separable equation 4r⁻¹dr = dθ, which can be integrated directly to yield 4 ln r = θ + c, so that r = c1e^(θ/4).
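The polar solution of Problem 23 can be verified numerically (an independent check, not in the original text) by parametrizing the curve r = C e^(θ/4) and confirming that (x − 4y)dy = (4x + y)dx holds along it. A minimal Python sketch, with C an arbitrary scale constant:

```python
import math

C = 1.0  # arbitrary scale constant (illustrative choice)

def point(theta):
    # the polar solution r = C e^(theta/4), converted back to Cartesian
    r = C * math.exp(theta / 4)
    return r * math.cos(theta), r * math.sin(theta)

def residual(theta, h=1e-6):
    # check (x - 4y) dy - (4x + y) dx = 0 along the curve, via d/dtheta
    x, y = point(theta)
    xp, yp = point(theta + h)
    xm, ym = point(theta - h)
    dx, dy = (xp - xm) / (2 * h), (yp - ym) / (2 * h)
    return (x - 4 * y) * dy - (4 * x + y) * dx

for t in (0.0, 1.0, 2.5):
    assert abs(residual(t)) < 1e-6
```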

24. dy/dx = 2(2y − x)/(x + y). Inverting, dx/dy = (x + y)/(2(2y − x)); for y ≠ 0, dividing the numerator and denominator by y yields dx/dy = (x/y + 1)/(2(2 − x/y)). Now let v = x/y so that dx/dy = v + y dv/dy =⇒ v + y dv/dy = (v + 1)/(2(2 − v)) =⇒ ∫2(2 − v)/(2v^2 − 3v + 1) dv = ∫dy/y =⇒ −6∫dv/(2v − 1) + 2∫dv/(v − 1) = ln|y| + c1 =⇒ ln((v − 1)^2/|2v − 1|^3) = ln(c2|y|) =⇒ (x − y)^2 = c(y − 2x)^3. Since y(0) = 2, c = 1/2. Thus, (x − y)^2 = (1/2)(y − 2x)^3.

25. dy/dx = (2x − y)/(x + 4y) =⇒ dy/dx = (2 − y/x)/(1 + 4(y/x)). Let y = xv: v + x dv/dx = (2 − v)/(1 + 4v) =⇒ x dv/dx = (2 − 2v − 4v^2)/(1 + 4v) =⇒ (1/2)∫(1 + 4v)/(2v^2 + v − 1) dv = −∫dx/x =⇒ (1/2)ln|2v^2 + v − 1| = −ln|x| + c =⇒ (1/2)ln|x^2(2v^2 + v − 1)| = c =⇒ (1/2)ln|2y^2 + yx − x^2| = c, but y(1) = 1, so c = (1/2)ln 2. Thus (1/2)ln|2y^2 + yx − x^2| = (1/2)ln 2, and since y(1) = 1 it must be the case that 2y^2 + yx − x^2 = 2.

26. dy/dx = (y − √(x^2 + y^2))/x =⇒ dy/dx = y/x − √(1 + (y/x)^2). Let y = xv: x dv/dx = −√(1 + v^2) =⇒ ∫dv/√(1 + v^2) = −∫dx/x =⇒ ln(v + √(1 + v^2)) = −ln|x| + c1 =⇒ y|x|/x + √(x^2 + y^2) = c2. Since y(3) = 4, c2 = 9. Then take |x|/x = 1 since we must have y(3) = 4; thus y + √(x^2 + y^2) = 9.

27. dy/dx − y/x = √(4 − (y/x)^2) =⇒ v + x dv/dx = v + √(4 − v^2) =⇒ ∫dv/√(4 − v^2) = ∫dx/x =⇒ sin⁻¹(v/2) = ln|x| + c =⇒ sin⁻¹(y/(2x)) = ln x + c since x > 0.


28. (a) dy/dx = (x + ay)/(ax − y). Substituting y = xv and simplifying yields x dv/dx = (1 + v^2)/(a − v). Separating the variables and integrating we obtain a tan⁻¹ v − (1/2)ln(1 + v^2) = ln x + ln c, or equivalently, a tan⁻¹(y/x) − (1/2)ln(x^2 + y^2) = ln c. Substituting x = r cos θ, y = r sin θ yields aθ − ln r = ln c. Exponentiating then gives r = ke^(aθ).
(b) The initial condition y(1) = 1 corresponds to r(π/4) = √2. Imposing this condition on the polar form of the solution obtained in (a) yields k = √2 e^(−π/8). Hence, the solution to the initial value problem is r = √2 e^((θ−π/4)/2). When a = 1/2, the differential equation is dy/dx = (2x + y)/(x − 2y). Consequently every solution curve has a vertical tangent line at points of intersection with the line y = x/2. The maximum interval of existence for the solution of the initial value problem can be obtained by determining where y = x/2 intersects the curve r = √2 e^((θ−π/4)/2). The line y = x/2 has polar equation tan θ = 1/2. The corresponding values of θ are θ = θ1 = tan⁻¹(1/2) ≈ 0.464 and θ = θ2 = θ1 + π ≈ 3.61. Consequently, the x-coordinates of the intersection points are x1 = r cos θ1 = √2 e^((θ1−π/4)/2) cos θ1 ≈ 1.08 and x2 = r cos θ2 = √2 e^((θ2−π/4)/2) cos θ2 ≈ −5.18. Hence the maximum interval of existence for the solution is approximately (−5.18, 1.08).
(c) See the accompanying figure.

Figure 0.0.50: Figure for Exercise 28(c)

29. The given family of curves satisfies x^2 + y^2 = 2cy =⇒ c = (x^2 + y^2)/(2y). Hence 2x + 2y dy/dx = 2c dy/dx =⇒ dy/dx = x/(c − y) = 2xy/(x^2 − y^2). The orthogonal trajectories satisfy dy/dx = (y^2 − x^2)/(2xy). Let y = vx so that dy/dx = v + x dv/dx. Substituting these results into the last equation yields x dv/dx = −(v^2 + 1)/(2v) =⇒ ∫2v dv/(v^2 + 1) = −∫dx/x =⇒ ln(v^2 + 1) = −ln|x| + c1 =⇒ y^2/x^2 + 1 = c2/x =⇒ x^2 + y^2 = 2kx.

Figure 0.0.51: Figure for Exercise 29

30. The given family of curves satisfies (x − c)^2 + (y − c)^2 = 2c^2 =⇒ c = (x^2 + y^2)/(2(x + y)). Hence 2(x − c) + 2(y − c)dy/dx = 0 =⇒ dy/dx = (c − x)/(y − c) = (y^2 − 2xy − x^2)/(y^2 + 2xy − x^2). The orthogonal trajectories satisfy dy/dx = (y^2 + 2xy − x^2)/(x^2 + 2xy − y^2). Let y = vx so that dy/dx = v + x dv/dx. Substituting these results into the last equation yields v + x dv/dx = (v^2 + 2v − 1)/(1 + 2v − v^2) =⇒ (1 + 2v − v^2)/(v^3 − v^2 + v − 1) dv = dx/x =⇒ (1/(v − 1) − 2v/(v^2 + 1)) dv = dx/x =⇒ x^2 + y^2 = 2k(x − y) =⇒ (x − k)^2 + (y + k)^2 = 2k^2.

Figure 0.0.52: Figure for Exercise 30

31. (a) Let r represent the radius of one of the circles with center at (a, ma) and passing through (0, 0): r = √((a − 0)^2 + (ma − 0)^2) = |a|√(1 + m^2). Thus, the circle's equation can be written as (x − a)^2 + (y − ma)^2 = (|a|√(1 + m^2))^2, or (x − a)^2 + (y − ma)^2 = a^2(1 + m^2).
(b) (x − a)^2 + (y − ma)^2 = a^2(1 + m^2) =⇒ a = (x^2 + y^2)/(2(x + my)). Differentiating the first equation with respect to x and solving, we obtain dy/dx = (a − x)/(y − ma). Substituting for a and simplifying yields dy/dx = (y^2 − x^2 − 2mxy)/(my^2 − mx^2 + 2xy). The orthogonal trajectories satisfy dy/dx = (mx^2 − my^2 − 2xy)/(y^2 − x^2 − 2mxy) =⇒ dy/dx = (m − m(y/x)^2 − 2(y/x))/((y/x)^2 − 1 − 2m(y/x)). Let y = vx so that dy/dx = v + x dv/dx. Substituting these results into the last equation yields v + x dv/dx = (m − mv^2 − 2v)/(v^2 − 1 − 2mv) =⇒ x dv/dx = (m − v)(1 + v^2)/(v^2 − 2mv − 1) =⇒ ∫(v^2 − 2mv − 1)/((m − v)(1 + v^2)) dv = ∫dx/x =⇒ ∫dv/(v − m) − ∫2v dv/(1 + v^2) = ∫dx/x =⇒ ln|v − m| − ln(1 + v^2) = ln|x| + c1 =⇒ v − m = c2x(1 + v^2) =⇒ y − mx = c2(x^2 + y^2) =⇒ x^2 + y^2 + cmx − cy = 0. Completing the square we obtain (x + cm/2)^2 + (y − c/2)^2 = (c^2/4)(m^2 + 1). Now letting b = c/2, the last equation becomes (x + bm)^2 + (y − b)^2 = b^2(m^2 + 1), which is a family of circles with centers lying on the line y = −x/m and passing through the origin.
(c) See the accompanying figure.

Figure 0.0.53: Figure for Exercise 31(c)


32. x^2 + y^2 = c =⇒ dy/dx = −x/y = m2. m1 = (m2 − tan(π/4))/(1 + m2 tan(π/4)) = (−x/y − 1)/(1 − x/y) = (x + y)/(x − y). Let y = vx so that dy/dx = v + x dv/dx. Substituting these results into the last equation yields v + x dv/dx = (1 + v)/(1 − v) =⇒ (1 − v)/(1 + v^2) dv = dx/x =⇒ ∫(1/(v^2 + 1) − v/(1 + v^2)) dv = ∫dx/x =⇒ tan⁻¹ v − (1/2)ln(1 + v^2) = ln|x| + c1 =⇒ Oblique trajectories: ln(x^2 + y^2) − 2 tan⁻¹(y/x) = c2.

33. y = cx^6 =⇒ dy/dx = 6y/x = m2. m1 = (m2 − tan(π/4))/(1 + m2 tan(π/4)) = (6y/x − 1)/(1 + 6y/x) = (6y − x)/(6y + x). Let y = vx so that dy/dx = v + x dv/dx. Substituting these results into the last equation yields v + x dv/dx = (6v − 1)/(6v + 1) =⇒ x dv/dx = (3v − 1)(1 − 2v)/(6v + 1) =⇒ ∫(9/(3v − 1) − 8/(2v − 1)) dv = ∫dx/x =⇒ 3 ln|3v − 1| − 4 ln|2v − 1| = ln|x| + c1 =⇒ Oblique trajectories: (3y − x)^3 = k(2y − x)^4.

34. x^2 + y^2 = 2cx =⇒ c = (x^2 + y^2)/(2x) and dy/dx = (y^2 − x^2)/(2xy) = m2. m1 = (m2 − tan(π/4))/(1 + m2 tan(π/4)) = ((y^2 − x^2)/(2xy) − 1)/(1 + (y^2 − x^2)/(2xy)) = (y^2 − x^2 − 2xy)/(y^2 − x^2 + 2xy). Let y = vx so that dy/dx = v + x dv/dx. Substituting these results into the last equation yields v + x dv/dx = (v^2 − 2v − 1)/(v^2 + 2v − 1) =⇒ x dv/dx = (−v^3 − v^2 − v − 1)/(v^2 + 2v − 1) =⇒ (−v^2 − 2v + 1)/(v^3 + v^2 + v + 1) dv = dx/x =⇒ ∫(1/(v + 1) − 2v/(v^2 + 1)) dv = ∫dx/x =⇒ ln|v + 1| − ln(v^2 + 1) = ln|x| + c1 =⇒ ln|y + x| = ln(y^2 + x^2) + c1 =⇒ Oblique trajectories: x^2 + y^2 = 2k(x + y) or, equivalently, (x − k)^2 + (y − k)^2 = 2k^2.

35. (a) y = cx⁻¹ =⇒ dy/dx = −cx⁻² = −y/x = m2. m1 = (m2 − tan α0)/(1 + m2 tan α0) = (−y/x − tan α0)/(1 − (y/x)tan α0). Let y = vx so that dy/dx = v + x dv/dx. Substituting these results into the last equation yields v + x dv/dx = (tan α0 + v)/(v tan α0 − 1) =⇒ (2v tan α0 − 2)/(v^2 tan α0 − 2v − tan α0) dv = −2 dx/x =⇒ ln|v^2 tan α0 − 2v − tan α0| = −2 ln|x| + c1 =⇒ (y^2 − x^2)tan α0 − 2xy = k.
(b) See the accompanying figure.

36. (a) x^2 + y^2 = c =⇒ dy/dx = −x/y = m2. With m = tan α0, m1 = (m2 − tan α0)/(1 + m2 tan α0) = (−x/y − m)/(1 − (x/y)m) = (x + my)/(mx − y). Let y = vx so that dy/dx = v + x dv/dx. Substituting these results into the last equation yields v + x dv/dx = (1 + mv)/(m − v) =⇒ x dv/dx = (1 + v^2)/(m − v) =⇒ (v − m)/(1 + v^2) dv = −dx/x =⇒ ∫(v/(1 + v^2) − m/(1 + v^2)) dv = −∫dx/x =⇒ (1/2)ln(1 + v^2) − m tan⁻¹ v = −ln|x| + c1. In polar coordinates, r = √(x^2 + y^2) and θ = tan⁻¹(y/x), so this result becomes ln r − mθ = c1 =⇒ r = ke^(mθ), where k is an arbitrary constant.
(b) See the accompanying figure.

Figure 0.0.54: Figure for Exercise 35(b)

Figure 0.0.55: Figure for Exercise 36(b)

37. dy/dx − (1/x)y = 4x^2 y⁻¹ cos x. This is a Bernoulli equation. Multiplying both sides by y results in y dy/dx − (1/x)y^2 = 4x^2 cos x. Let u = y^2, so du/dx = 2y dy/dx, or y dy/dx = (1/2)du/dx. Substituting these results into y dy/dx − (1/x)y^2 = 4x^2 cos x yields du/dx − (2/x)u = 8x^2 cos x, which has an integrating factor I(x) = x⁻² =⇒ d/dx(x⁻²u) = 8 cos x =⇒ x⁻²u = 8∫cos x dx + c =⇒ x⁻²u = 8 sin x + c =⇒ u = x^2(8 sin x + c) =⇒ y^2 = x^2(8 sin x + c).
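The Bernoulli solution of Problem 37 can be checked numerically (an independent check, not part of the original manual) by taking the positive branch y = x√(8 sin x + C) for x > 0 and substituting it back into the equation. A short Python sketch, where C is an arbitrary constant chosen large enough to keep the radicand positive:

```python
import math

C = 10.0  # chosen so that 8*sin(x) + C > 0 everywhere (illustrative)

def y(x):
    # positive branch of y^2 = x^2 (8 sin x + C), for x > 0
    return x * math.sqrt(8 * math.sin(x) + C)

def residual(x, h=1e-6):
    # check dy/dx - y/x - 4x^2 cos(x)/y = 0
    dydx = (y(x + h) - y(x - h)) / (2 * h)
    return dydx - y(x) / x - 4 * x**2 * math.cos(x) / y(x)

for x in (0.5, 1.0, 2.0):
    assert abs(residual(x)) < 1e-6
```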

38. y⁻³ dy/dx + (1/2)y⁻² tan x = 2 sin x. This is a Bernoulli equation. Let u = y⁻², so du/dx = −2y⁻³ dy/dx, or y⁻³ dy/dx = −(1/2)du/dx. Substituting these results into the last equation yields du/dx − u tan x = −4 sin x. An integrating factor for this equation is I(x) = cos x. Thus, d/dx(u cos x) = −4 cos x sin x =⇒ u cos x = 2 cos^2 x + c =⇒ u(x) = 2 cos x + c/cos x =⇒ y⁻² = 2 cos x + c/cos x.

39. dy/dx − (3/(2x))y = 6y^(1/3) x^2 ln x, or y^(−1/3) dy/dx − (3/(2x))y^(2/3) = 6x^2 ln x. Let u = y^(2/3) =⇒ du/dx = (2/3)y^(−1/3) dy/dx. Substituting these results into y^(−1/3) dy/dx − (3/(2x))y^(2/3) = 6x^2 ln x yields du/dx − (1/x)u = 4x^2 ln x. An integrating factor for this equation is I(x) = 1/x, so d/dx(x⁻¹u) = 4x ln x =⇒ x⁻¹u = 4∫x ln x dx + c =⇒ x⁻¹u = 2x^2 ln x − x^2 + c =⇒ u(x) = x(2x^2 ln x − x^2 + c) =⇒ y^(2/3) = x(2x^2 ln x − x^2 + c).

40. dy/dx + (2/x)y = 6√(1 + x^2) y^(1/2), or y^(−1/2) dy/dx + (2/x)y^(1/2) = 6√(1 + x^2). Let u = y^(1/2) =⇒ 2 du/dx = y^(−1/2) dy/dx. Substituting these results into y^(−1/2) dy/dx + (2/x)y^(1/2) = 6√(1 + x^2) yields du/dx + (1/x)u = 3√(1 + x^2). An integrating factor for this equation is I(x) = x, so d/dx(xu) = 3x√(1 + x^2) =⇒ xu = ∫3x√(1 + x^2) dx + c =⇒ xu = (1 + x^2)^(3/2) + c =⇒ u = (1/x)(1 + x^2)^(3/2) + c/x =⇒ y^(1/2) = (1/x)(1 + x^2)^(3/2) + c/x.

41. dy/dx + (2/x)y = 6y^2 x^4, or y⁻² dy/dx + (2/x)y⁻¹ = 6x^4. Let u = y⁻¹ =⇒ −du/dx = y⁻² dy/dx. Substituting these results into y⁻² dy/dx + (2/x)y⁻¹ = 6x^4 yields du/dx − (2/x)u = −6x^4. An integrating factor for this equation is I(x) = x⁻², so d/dx(x⁻²u) = −6x^2 =⇒ x⁻²u = −2x^3 + c =⇒ u = −2x^5 + cx^2 =⇒ y⁻¹ = −2x^5 + cx^2 =⇒ y(x) = 1/(x^2(c − 2x^3)).

42. 2x(dy/dx + y^3 x^2) + y = 0, or y⁻³ dy/dx + (1/(2x))y⁻² = −x^2. Let u = y⁻² =⇒ −(1/2)du/dx = y⁻³ dy/dx. Substituting these results into y⁻³ dy/dx + (1/(2x))y⁻² = −x^2 yields du/dx − (1/x)u = 2x^2. An integrating factor for this equation is I(x) = 1/x, so d/dx(x⁻¹u) = 2x =⇒ x⁻¹u = x^2 + c =⇒ u = x^3 + cx =⇒ y⁻² = x^3 + cx.

43. (x − a)(x − b)(dy/dx − y^(1/2)) = 2(b − a)y, or y^(−1/2) dy/dx − (2(b − a)/((x − a)(x − b)))y^(1/2) = 1. Let u = y^(1/2) =⇒ 2 du/dx = y^(−1/2) dy/dx. Substituting these results into the last equation yields du/dx − ((b − a)/((x − a)(x − b)))u = 1/2. An integrating factor for this equation is I(x) = (x − a)/(x − b), so d/dx(((x − a)/(x − b))u) = (x − a)/(2(x − b)) =⇒ ((x − a)/(x − b))u = (1/2)[x + (b − a)ln|x − b| + c] =⇒ y^(1/2) = ((x − b)/(2(x − a)))[x + (b − a)ln|x − b| + c] =⇒ y(x) = (1/4)((x − b)/(x − a))^2 [x + (b − a)ln|x − b| + c]^2.

44. dy/dx + (6/x)y = 3y^(2/3)(cos x)/x, or y^(−2/3) dy/dx + (6/x)y^(1/3) = 3(cos x)/x. Let u = y^(1/3) =⇒ 3 du/dx = y^(−2/3) dy/dx. Substituting these results into y^(−2/3) dy/dx + (6/x)y^(1/3) = 3(cos x)/x yields du/dx + (2/x)u = (cos x)/x. An integrating factor for this equation is I(x) = x^2, so d/dx(x^2 u) = x cos x =⇒ x^2 u = cos x + x sin x + c =⇒ y^(1/3) = (cos x + x sin x + c)/x^2.


45. dy/dx + 4xy = 4x^3 y^(1/2), or y^(−1/2) dy/dx + 4xy^(1/2) = 4x^3. Let u = y^(1/2) =⇒ 2 du/dx = y^(−1/2) dy/dx. Substituting these results into y^(−1/2) dy/dx + 4xy^(1/2) = 4x^3 yields du/dx + 2xu = 2x^3. An integrating factor for this equation is I(x) = e^(x²), so d/dx(e^(x²)u) = 2x^3 e^(x²) =⇒ e^(x²)u = e^(x²)(x^2 − 1) + c =⇒ y^(1/2) = x^2 − 1 + ce^(−x²) =⇒ y(x) = [(x^2 − 1) + ce^(−x²)]^2.

46. dy/dx − (1/(2x ln x))y = 2xy^3, or y⁻³ dy/dx − (1/(2x ln x))y⁻² = 2x. Let u = y⁻² =⇒ −(1/2)du/dx = y⁻³ dy/dx. Substituting these results into y⁻³ dy/dx − (1/(2x ln x))y⁻² = 2x yields du/dx + (1/(x ln x))u = −4x. An integrating factor for this equation is I(x) = ln x, so d/dx(u ln x) = −4x ln x =⇒ u ln x = x^2 − 2x^2 ln x + c =⇒ y^2 = ln x/(x^2(1 − 2 ln x) + c).

47. dy/dx − (1/((π − 1)x))y = (3x/(1 − π))y^π, or y^(−π) dy/dx − (1/((π − 1)x))y^(1−π) = 3x/(1 − π). Let u = y^(1−π) =⇒ (1/(1 − π))du/dx = y^(−π) dy/dx. Substituting these results into y^(−π) dy/dx − (1/((π − 1)x))y^(1−π) = 3x/(1 − π) yields du/dx + (1/x)u = 3x. An integrating factor for this equation is I(x) = x, so d/dx(xu) = 3x^2 =⇒ xu = x^3 + c =⇒ y^(1−π) = (x^3 + c)/x =⇒ y(x) = ((x^3 + c)/x)^(1/(1−π)).

48. 2 dy/dx + y cot x = 8y⁻¹ cos^3 x, or 2y dy/dx + y^2 cot x = 8 cos^2 x·cos x = 8 cos^3 x. Let u = y^2 =⇒ du/dx = 2y dy/dx. Substituting these results into 2y dy/dx + y^2 cot x = 8 cos^3 x yields du/dx + u cot x = 8 cos^3 x. An integrating factor for this equation is I(x) = sin x, so d/dx(u sin x) = 8 cos^3 x sin x =⇒ u sin x = −2 cos^4 x + c =⇒ y^2 = (−2 cos^4 x + c)/sin x.

49. (1 − √3)dy/dx + y sec x = y^(√3) sec x, or (1 − √3)y^(−√3) dy/dx + y^(1−√3) sec x = sec x. Let u = y^(1−√3) =⇒ du/dx = (1 − √3)y^(−√3) dy/dx. Substituting these results into (1 − √3)y^(−√3) dy/dx + y^(1−√3) sec x = sec x yields du/dx + u sec x = sec x. An integrating factor for this equation is I(x) = sec x + tan x, so d/dx[(sec x + tan x)u] = sec x(sec x + tan x) =⇒ (sec x + tan x)u = tan x + sec x + c =⇒ y^(1−√3) = 1 + c/(sec x + tan x) =⇒ y(x) = (1 + c/(sec x + tan x))^(1/(1−√3)).

50. dy/dx + (2x/(1 + x^2))y = xy^2, or (1/y^2)dy/dx + (2x/(1 + x^2))(1/y) = x. Let u = y⁻¹, so du/dx = −y⁻² dy/dx. Substituting these results into the last equation yields du/dx − (2x/(1 + x^2))u = −x. An integrating factor for this equation is I(x) = 1/(1 + x^2), so d/dx(u/(1 + x^2)) = −x/(1 + x^2) =⇒ u/(1 + x^2) = −∫x dx/(1 + x^2) + c =⇒ u/(1 + x^2) = −(1/2)ln(1 + x^2) + c =⇒ u = (1 + x^2)(−(1/2)ln(1 + x^2) + c) =⇒ y⁻¹ = (1 + x^2)(−(1/2)ln(1 + x^2) + c). Since y(0) = 1 =⇒ c = 1, so 1/y = (1 + x^2)(1 − (1/2)ln(1 + x^2)).

51. dy/dx + y cot x = y^3 sin^3 x, or y⁻³ dy/dx + y⁻² cot x = sin^3 x. Let u = y⁻² =⇒ −(1/2)du/dx = y⁻³ dy/dx. Substituting these results into y⁻³ dy/dx + y⁻² cot x = sin^3 x yields du/dx − 2u cot x = −2 sin^3 x. An integrating factor for this equation is I(x) = csc^2 x, so d/dx(u csc^2 x) = −2 sin x =⇒ u csc^2 x = 2 cos x + c. Since y(π/2) = 1 =⇒ c = 1. Thus y^2 = 1/(sin^2 x(2 cos x + 1)).

52. dy/dx = F(ax + by + c). Let v = ax + by + c so that dv/dx = a + b dy/dx =⇒ b dy/dx = dv/dx − a =⇒ dy/dx = (1/b)(dv/dx − a) = F(v) =⇒ dv/dx − a = bF(v) =⇒ dv/dx = bF(v) + a =⇒ dv/(bF(v) + a) = dx.

53. dy/dx = (9x − y)^2. Let v = 9x − y so that dy/dx = 9 − dv/dx =⇒ dv/dx = 9 − v^2 =⇒ ∫dv/(9 − v^2) = ∫dx =⇒ (1/3)tanh⁻¹(v/3) = x + c1, but y(0) = 0, so c1 = 0. Thus, tanh⁻¹((9x − y)/3) = 3x, or y(x) = 3(3x − tanh 3x).
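The initial value problem in Problem 53 has a fully explicit answer, so it is easy to check numerically (an independent check, not part of the original text) that y = 3(3x − tanh 3x) satisfies both the equation and y(0) = 0:

```python
import math

def y(x):
    return 3 * (3 * x - math.tanh(3 * x))

def residual(x, h=1e-6):
    # check dy/dx - (9x - y)^2 = 0
    dydx = (y(x + h) - y(x - h)) / (2 * h)
    return dydx - (9 * x - y(x))**2

assert abs(y(0.0)) < 1e-12            # initial condition y(0) = 0
for x in (-0.5, 0.2, 1.0):
    assert abs(residual(x)) < 1e-6
```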

54. dy/dx = (4x + y + 2)^2. Let v = 4x + y + 2 so that dv/dx = 4 + dy/dx = 4 + v^2 =⇒ ∫dv/(v^2 + 4) = ∫dx =⇒ (1/2)tan⁻¹(v/2) = x + c1 =⇒ tan⁻¹((4x + y + 2)/2) = 2x + c =⇒ y(x) = 2[tan(2x + c) − 2x − 1].

55. dy/dx = sin^2(3x − 3y + 1). Let v = 3x − 3y + 1 so that dy/dx = 1 − (1/3)dv/dx =⇒ 1 − (1/3)dv/dx = sin^2 v =⇒ dv/dx = 3 cos^2 v =⇒ ∫sec^2 v dv = 3∫dx =⇒ tan v = 3x + c =⇒ tan(3x − 3y + 1) = 3x + c =⇒ y(x) = (1/3)[3x − tan⁻¹(3x + c) + 1].

56. V = xy =⇒ V′ = xy′ + y =⇒ y′ = (V′ − y)/x. Substitution into the differential equation yields (V′ − y)/x = yF(V)/x =⇒ V′ = y[F(V) + 1] =⇒ V′ = V[F(V) + 1]/x, so that (1/(V[F(V) + 1]))dV/dx = 1/x.

57. Substituting F(V) = ln V − 1 into (1/(V[F(V) + 1]))dV/dx = 1/x yields (1/(V ln V))dV = (1/x)dx =⇒ ln ln V = ln cx =⇒ ln V = cx =⇒ V = e^(cx) =⇒ y(x) = (1/x)e^(cx).

58. (a) x = u − 1, y = v + 1 =⇒ dy/dx = dv/du. Substitution into the given differential equation yields dv/du = (u + 2v)/(2u − v).
(b) The differential equation obtained in (a) is first-order homogeneous. We therefore let W = v/u, and substitute into the differential equation to obtain W′u + W = (1 + 2W)/(2 − W) =⇒ W′u = (1 + W^2)/(2 − W). Separating the variables yields (2/(1 + W^2) − W/(1 + W^2))dW = (1/u)du. This can be integrated directly to obtain 2 tan⁻¹ W − (1/2)ln(1 + W^2) = ln u + ln c. Simplifying we obtain c1u^2(1 + W^2) = e^(4 tan⁻¹ W) =⇒ c1(u^2 + v^2) = e^(4 tan⁻¹(v/u)). Substituting back in for x and y yields c[(x + 1)^2 + (y − 1)^2] = e^(4 tan⁻¹[(y−1)/(x+1)]).

59. (a) y = Y(x) + v⁻¹(x) =⇒ y′ = Y′(x) − v⁻²(x)v′(x). Now substitute into the given differential equation and simplify algebraically to obtain Y′(x) + p(x)Y(x) + q(x)Y^2(x) − v⁻²(x)v′(x) + v⁻¹(x)p(x) + q(x)[2Y(x)v⁻¹(x) + v⁻²(x)] = r(x). We are told that Y(x) is a particular solution to the given differential equation, and therefore Y′(x) + p(x)Y(x) + q(x)Y^2(x) = r(x). Consequently the transformed differential equation reduces to −v⁻²(x)v′(x) + v⁻¹(x)p(x) + q(x)[2Y(x)v⁻¹(x) + v⁻²(x)] = 0, or equivalently v′ − [p(x) + 2Y(x)q(x)]v = q(x).
(b) The given differential equation can be written as y′ − x⁻¹y − y^2 = x⁻², which is a Riccati differential equation with p(x) = −x⁻¹, q(x) = −1, and r(x) = x⁻². Since y(x) = −x⁻¹ is a solution to the given differential equation, we make the substitution y(x) = −x⁻¹ + v⁻¹(x). According to the result from part (a), the given differential equation then reduces to v′ − (−x⁻¹ + 2x⁻¹)v = −1, or equivalently v′ − x⁻¹v = −1. This linear differential equation has an integrating factor I(x) = x⁻¹, so that v must satisfy d/dx(x⁻¹v) = −x⁻¹ =⇒ v(x) = x(c − ln x). Hence the solution to the original equation is y(x) = −1/x + 1/(x(c − ln x)) = (1/x)(1/(c − ln x) − 1).

60. (a) If y = ax^r, then y′ = arx^(r−1). Substituting these expressions into the given differential equation yields arx^(r−1) + 2ax^(r−1) − a^2x^(2r) = −2x⁻². For this to hold for all x > 0, the powers of x must match up on either side of the equation. Hence, r = −1. Then a is determined from the quadratic −a + 2a − a^2 = −2 ⇐⇒ a^2 − a − 2 = 0 ⇐⇒ (a − 2)(a + 1) = 0. Consequently, a = 2, −1 in order for us to have a solution to the given differential equation. Therefore, two solutions to the differential equation are y1(x) = 2x⁻¹ and y2(x) = −x⁻¹.
(b) Taking Y(x) = 2x⁻¹ and using the result from problem 59(a), we now substitute y(x) = 2x⁻¹ + v⁻¹ into the given Riccati equation. The result is (−2x⁻² − v⁻²v′) + 2x⁻¹(2x⁻¹ + v⁻¹) − (4x⁻² + 4x⁻¹v⁻¹ + v⁻²) = −2x⁻². Simplifying this equation yields the linear equation v′ + 2x⁻¹v = −1. Multiplying by the integrating factor I(x) = e^(∫2x⁻¹dx) = x^2 results in the integrable differential equation d/dx(x^2 v) = −x^2. Integrating this differential equation we obtain v(x) = x⁻²(−(1/3)x^3 + c) = (1/(3x^2))(c1 − x^3). Consequently, the general solution to the Riccati equation is y(x) = 2/x + 3x^2/(c1 − x^3).

61. (a) y = x⁻¹ + w(x) =⇒ y′ = −x⁻² + w′. Substituting into the given differential equation yields (−x⁻² + w′) + 7x⁻¹(x⁻¹ + w) − 3(x⁻² + 2x⁻¹w + w^2) = 3x⁻², which simplifies to w′ + x⁻¹w − 3w^2 = 0.
(b) The preceding equation can be written in the equivalent form w⁻²w′ + x⁻¹w⁻¹ = 3. We let u = w⁻¹, so that u′ = −w⁻²w′. Substitution into the differential equation gives, after simplification, u′ − x⁻¹u = −3. An integrating factor for this linear differential equation is I(x) = x⁻¹, so that the differential equation can be written in the integrable form d/dx(x⁻¹u) = −3x⁻¹. Integrating we obtain u(x) = x(−3 ln x + c), so that w(x) = 1/(x(c − 3 ln x)). Consequently the solution to the original Riccati equation is y(x) = (1/x)(1 + 1/(c − 3 ln x)).

62. y⁻¹ dy/dx + p(x) ln y = q(x). If we let u = ln y, then du/dx = (1/y)dy/dx and the given equation becomes du/dx + p(x)u = q(x), which is first-order linear and has a solution of the form u = e^(−∫p(x)dx)[∫e^(∫p(x)dx)q(x)dx + c]. Substituting u = ln y, we obtain y(x) = e^(I⁻¹[∫I(t)q(t)dt + c]), where I(x) = e^(∫p(t)dt) and c is an arbitrary constant.


63. y⁻¹ dy/dx − (2/x) ln y = (1 − 2 ln x)/x. Let u = ln y; using the technique of the preceding problem, du/dx − (2/x)u = (1 − 2 ln x)/x and u = e^(2∫dx/x)[∫e^(−2∫dx/x)((1 − 2 ln x)/x)dx + c1] = x^2[∫((1 − 2 ln x)/x^3)dx + c1] = ln x + cx^2, and since u = ln y, ln y = ln x + cx^2. Now y(1) = e so c = 1 =⇒ y(x) = xe^(x²).
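The initial value problem in Problem 63 has the explicit answer y = xe^(x²), which can be checked numerically (an independent check, not in the original text) against both the equation and the condition y(1) = e:

```python
import math

def y(x):
    return x * math.exp(x**2)

def residual(x, h=1e-6):
    # check (1/y) dy/dx - (2/x) ln y - (1 - 2 ln x)/x = 0
    dydx = (y(x + h) - y(x - h)) / (2 * h)
    return dydx / y(x) - (2 / x) * math.log(y(x)) - (1 - 2 * math.log(x)) / x

assert abs(y(1.0) - math.e) < 1e-12   # initial condition y(1) = e
for x in (0.5, 1.0, 1.5):
    assert abs(residual(x)) < 1e-6
```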

64. If u = f(y), then du/dx = f′(y)dy/dx and the given equation f′(y)dy/dx + p(x)f(y) = q(x) becomes du/dx + p(x)u = q(x), which has a solution of the form u(x) = e^(−∫p(x)dx)[∫e^(∫p(x)dx)q(x)dx + c]. Substituting u = f(y) and using the fact that f is invertible, we obtain y(x) = f⁻¹[I⁻¹(∫I(t)q(t)dt + c)], where I(x) = e^(∫p(t)dt) and c is an arbitrary constant.

65. sec^2 y dy/dx + (1/(2√(1 + x)))tan y = 1/(2√(1 + x)). Let u = tan y so that du/dx = sec^2 y dy/dx, and the given equation becomes du/dx + (1/(2√(1 + x)))u = 1/(2√(1 + x)), which is first-order linear. An integrating factor for this equation is I(x) = e^(√(1+x)) =⇒ d/dx(e^(√(1+x))u) = e^(√(1+x))/(2√(1 + x)) =⇒ e^(√(1+x))u = e^(√(1+x)) + c =⇒ u = 1 + ce^(−√(1+x)). But u = tan y, so tan y = 1 + ce^(−√(1+x)), or y(x) = tan⁻¹(1 + ce^(−√(1+x))).
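As a final numerical check for Problem 65 (not part of the original solution), y = tan⁻¹(1 + Ce^(−√(1+x))) can be substituted back into the equation; C below is an arbitrary illustrative constant:

```python
import math

C = 2.0  # arbitrary constant (illustrative choice)

def y(x):
    return math.atan(1 + C * math.exp(-math.sqrt(1 + x)))

def residual(x, h=1e-6):
    # check sec^2(y) y' + tan(y)/(2 sqrt(1+x)) - 1/(2 sqrt(1+x)) = 0
    dydx = (y(x + h) - y(x - h)) / (2 * h)
    s = 2 * math.sqrt(1 + x)
    return dydx / math.cos(y(x))**2 + math.tan(y(x)) / s - 1 / s

for x in (0.0, 1.0, 3.0):
    assert abs(residual(x)) < 1e-6
```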

Solutions to Section 1.9

True-False Review:

1. FALSE. The requirement, as stated in Theorem 1.9.4, is that My = Nx, not Mx = Ny, as stated.

2. FALSE. A potential function φ(x, y) is not an equation. The general solution to an exact differential equation takes the form φ(x, y) = c, where φ(x, y) is a potential function.

3. FALSE. According to Definition 1.9.2, M(x)dx + N(y)dy = 0 is only exact if there exists a functionφ(x, y) such that φx = M and φy = N for all (x, y) in a region R of the xy-plane.

4. TRUE. This is the content of part 1 of Theorem 1.9.11.

5. FALSE. If φ(x, y) is a potential function for M(x, y)dx + N(x, y)dy = 0, then so is φ(x, y) + c for anyconstant c.

6. TRUE. We have My = 2e^(2x) − cos y and Nx = 2e^(2x) − cos y, and so since My = Nx, this equation is exact.

7. TRUE. We have My = [(x^2 + y)^2(−2x) + 4xy(x^2 + y)]/(x^2 + y)^4 = Nx, and so this equation is exact.

8. FALSE. We have My = 2y and Nx = 2y^2, and since My ≠ Nx, we conclude that this equation is not exact.

9. FALSE. We have My = e^(x sin y) cos y + xe^(x sin y) cos y and Nx = cos y sin y e^(x sin y), and since My ≠ Nx, we conclude that this equation is not exact.

Problems:

1. (y + 3x^2)dx + x dy = 0. M = y + 3x^2 and N = x =⇒ My = 1 and Nx = 1 =⇒ My = Nx =⇒ the differential equation is exact.

2. [cos (xy) − xy sin (xy)]dx − x2 sin (xy)dy = 0 =⇒ M = cos (xy) − xy sin (xy) and N = −x2 sin (xy) =⇒My = −2x sin (xy) − x2y cos (xy) and Nx = −2x sin (xy) − x2y cos (xy) =⇒ My = Nx =⇒ the differentialequation is exact.

3. yexydx + (2y − xe−xy)dy = 0. M = yexy and N = 2y − xe−xy =⇒ My = yxexy + exy and Nx =xye−xy − e−xy =⇒ My 6= Nx =⇒ the differential equation is not exact.

4. 2xy dx + (x^2 + 1)dy = 0. M = 2xy and N = x^2 + 1 =⇒ My = 2x and Nx = 2x =⇒ My = Nx =⇒ the differential equation is exact, so there exists a potential function φ such that (a) ∂φ/∂x = 2xy and (b) ∂φ/∂y = x^2 + 1. From (b), φ(x, y) = (x^2 + 1)y + h(x) =⇒ ∂φ/∂x = 2xy + dh(x)/dx, so from (a), 2xy = 2xy + dh(x)/dx =⇒ dh(x)/dx = 0 =⇒ h(x) is a constant. Since we need just one potential function, let h(x) = 0. Thus, φ(x, y) = (x^2 + 1)y; hence, (x^2 + 1)y = c.

5. (y^2 + cos x)dx + (2xy + sin y)dy = 0. M = y^2 + cos x and N = 2xy + sin y =⇒ My = 2y and Nx = 2y =⇒ My = Nx =⇒ the differential equation is exact, so there exists a potential function φ such that (a) ∂φ/∂x = y^2 + cos x and (b) ∂φ/∂y = 2xy + sin y. From (a), φ(x, y) = xy^2 + sin x + h(y) =⇒ ∂φ/∂y = 2xy + dh(y)/dy, so from (b), 2xy + dh(y)/dy = 2xy + sin y =⇒ dh/dy = sin y =⇒ h(y) = −cos y, where the constant of integration has been set to zero since we just need one potential function. φ(x, y) = xy^2 + sin x − cos y =⇒ xy^2 + sin x − cos y = c.

6. Given ((xy − 1)/x)dx + ((xy + 1)/y)dy = 0, then My = Nx = 1 =⇒ the differential equation is exact, so there exists a potential function φ such that (a) ∂φ/∂x = (xy − 1)/x and (b) ∂φ/∂y = (xy + 1)/y. From (a), φ(x, y) = xy − ln|x| + h(y) =⇒ ∂φ/∂y = x + dh(y)/dy, so from (b), x + dh(y)/dy = (xy + 1)/y =⇒ dh(y)/dy = y⁻¹ =⇒ h(y) = ln|y|, where the constant of integration has been set to zero since we need just one potential function. φ(x, y) = xy + ln|y/x| =⇒ xy + ln|y/x| = c.

7. Given (4e^(2x) + 2xy − y^2)dx + (x − y)^2 dy = 0, then My = Nx = 2(x − y), so the differential equation is exact and there exists a potential function φ such that (a) ∂φ/∂x = 4e^(2x) + 2xy − y^2 and (b) ∂φ/∂y = (x − y)^2. From (b), φ(x, y) = x^2 y − xy^2 + y^3/3 + h(x) =⇒ ∂φ/∂x = 2xy − y^2 + dh(x)/dx, so from (a), 2xy − y^2 + dh(x)/dx = 4e^(2x) + 2xy − y^2 =⇒ dh(x)/dx = 4e^(2x) =⇒ h(x) = 2e^(2x), where the constant of integration has been set to zero since we need just one potential function. φ(x, y) = x^2 y − xy^2 + y^3/3 + 2e^(2x) =⇒ x^2 y − xy^2 + y^3/3 + 2e^(2x) = c1 =⇒ 6e^(2x) + 3x^2 y − 3xy^2 + y^3 = c.

8. Given (y^2 − 2x)dx + 2xy dy = 0, then My = Nx = 2y, so the differential equation is exact and there exists a potential function φ such that (a) ∂φ/∂x = y^2 − 2x and (b) ∂φ/∂y = 2xy. From (b), φ(x, y) = xy^2 + h(x) =⇒ ∂φ/∂x = y^2 + dh(x)/dx, so from (a), y^2 + dh(x)/dx = y^2 − 2x =⇒ dh(x)/dx = −2x =⇒ h(x) = −x^2, where the constant of integration has been set to zero since we just need one potential function. φ(x, y) = xy^2 − x^2 =⇒ xy^2 − x^2 = c.

9. Given (1/x − y/(x^2 + y^2))dx + (x/(x^2 + y^2))dy = 0, then My = Nx = (y^2 − x^2)/(x^2 + y^2)^2, so the differential equation is exact and there exists a potential function φ such that (a) ∂φ/∂x = 1/x − y/(x^2 + y^2) and (b) ∂φ/∂y = x/(x^2 + y^2). From (b), φ(x, y) = tan⁻¹(y/x) + h(x) =⇒ ∂φ/∂x = −y/(x^2 + y^2) + dh(x)/dx, so from (a), −y/(x^2 + y^2) + dh(x)/dx = 1/x − y/(x^2 + y^2) =⇒ dh/dx = x⁻¹ =⇒ h(x) = ln|x|, where the constant of integration is set to zero since we only need one potential function. φ(x, y) = tan⁻¹(y/x) + ln|x| =⇒ tan⁻¹(y/x) + ln|x| = c.

10. Given [1 + ln(xy)]dx + (x/y)dy = 0, then My = Nx = y⁻¹, so the differential equation is exact and there exists a potential function φ such that (a) ∂φ/∂x = 1 + ln(xy) and (b) ∂φ/∂y = x/y. From (b), φ(x, y) = x ln y + h(x) =⇒ ∂φ/∂x = ln y + dh(x)/dx, so from (a), ln y + dh(x)/dx = 1 + ln(xy) =⇒ dh/dx = 1 + ln x =⇒ h(x) = x ln x, where the constant of integration is set to zero since we only need one potential function. φ(x, y) = x ln y + x ln x =⇒ x ln y + x ln x = c =⇒ x ln(xy) = c.

11. Given [y cos(xy) − sin x]dx + x cos(xy)dy = 0, then My = Nx = −xy sin(xy) + cos(xy), so the differential equation is exact and there exists a potential function φ such that (a) ∂φ/∂x = y cos(xy) − sin x and (b) ∂φ/∂y = x cos(xy). From (b), φ(x, y) = sin(xy) + h(x) =⇒ ∂φ/∂x = y cos(xy) + dh(x)/dx, so from (a), y cos(xy) + dh(x)/dx = y cos(xy) − sin x =⇒ dh/dx = −sin x =⇒ h(x) = cos x, where the constant of integration is set to zero since we only need one potential function. φ(x, y) = sin(xy) + cos x =⇒ sin(xy) + cos x = c.

12. Given (2xy + cos y)dx + (x^2 − x sin y − 2y)dy = 0, then My = Nx = 2x − sin y, so the differential equation is exact and there is a potential function φ such that (a) ∂φ/∂x = 2xy + cos y and (b) ∂φ/∂y = x^2 − x sin y − 2y. From (a), φ(x, y) = x^2 y + x cos y + h(y) =⇒ ∂φ/∂y = x^2 − x sin y + dh(y)/dy, so from (b), x^2 − x sin y + dh(y)/dy = x^2 − x sin y − 2y =⇒ dh/dy = −2y =⇒ h(y) = −y^2, where the constant of integration has been set to zero since we only need one potential function. φ(x, y) = x^2 y + x cos y − y^2 =⇒ x^2 y + x cos y − y^2 = c.

13. Given (3x^2 ln x + x^2 − y)dx − x dy = 0, then My = Nx = −1, so the differential equation is exact and there exists a potential function φ such that (a) ∂φ/∂x = 3x^2 ln x + x^2 − y and (b) ∂φ/∂y = −x. From (b), φ(x, y) = −xy + h(x) =⇒ ∂φ/∂x = −y + dh(x)/dx, so from (a), −y + dh(x)/dx = 3x^2 ln x + x^2 − y =⇒ dh(x)/dx = 3x^2 ln x + x^2 =⇒ h(x) = x^3 ln x, where the constant of integration has been set to zero since we only need one potential function. φ(x, y) = −xy + x^3 ln x =⇒ −xy + x^3 ln x = c. Now since y(1) = 5, c = −5; thus, x^3 ln x − xy = −5, or y(x) = (x^3 ln x + 5)/x.

14. Given 2x^2 dy/dx + 4xy = 3 sin x =⇒ (4xy − 3 sin x)dx + 2x^2 dy = 0, then My = Nx = 4x, so the differential equation is exact and there exists a potential function φ such that (a) ∂φ/∂x = 4xy − 3 sin x and (b) ∂φ/∂y = 2x^2. From (b), φ(x, y) = 2x^2 y + h(x) =⇒ ∂φ/∂x = 4xy + dh(x)/dx, so from (a), 4xy + dh(x)/dx = 4xy − 3 sin x =⇒ dh(x)/dx = −3 sin x =⇒ h(x) = 3 cos x, where the constant of integration has been set to zero since we only need one potential function. φ(x, y) = 2x^2 y + 3 cos x =⇒ 2x^2 y + 3 cos x = c. Now since y(2π) = 0, c = 3; thus, 2x^2 y + 3 cos x = 3, or y(x) = (3 − 3 cos x)/(2x^2).

15. Given (ye^(xy) + cos x)dx + xe^(xy)dy = 0, then My = Nx = xye^(xy) + e^(xy), so the differential equation is exact and there exists a potential function φ such that (a) ∂φ/∂x = ye^(xy) + cos x and (b) ∂φ/∂y = xe^(xy). From (b), φ(x, y) = e^(xy) + h(x) =⇒ ∂φ/∂x = ye^(xy) + dh(x)/dx, so from (a), ye^(xy) + dh(x)/dx = ye^(xy) + cos x =⇒ dh(x)/dx = cos x =⇒ h(x) = sin x, where the constant of integration is set to zero since we only need one potential function. φ(x, y) = e^(xy) + sin x =⇒ e^(xy) + sin x = c. Now since y(π/2) = 0, c = 2; thus e^(xy) + sin x = 2, or y(x) = ln(2 − sin x)/x.

16. If φ(x, y) is a potential function for M dx + N dy = 0, then d(φ(x, y)) = 0, so d(φ(x, y) + c) = d(φ(x, y)) + d(c) = 0 + 0 = 0 =⇒ φ(x, y) + c is also a potential function.

17. M = cos(xy)[tan(xy) + xy] and N = x^2 cos(xy) =⇒ My = 2x cos(xy) − x^2 y sin(xy) = Nx =⇒ My = Nx =⇒ M dx + N dy = 0 is exact, so I(x, y) = cos(xy) is an integrating factor for [tan(xy) + xy]dx + x^2 dy = 0.

18. M = sec x[2x− (x2 + y2) tan x] and N = 2y sec x =⇒ My = −2y sec x tanx and Nx = 2y sec x tanx =⇒My 6= Nx =⇒ Mdx + Ndy = 0 is not exact so I(x) = sec x is not an integrating factor for [2x − (x2 +y2) tan x]dx + 2ydy = 0.

19. M = e^{-x/y}(x^2 y^{-1} - 2x) and N = -e^{-x/y} x^3 y^{-2} =⇒ M_y = e^{-x/y}(x^3 y^{-3} - 3x^2 y^{-2}) = N_x =⇒ M dx + N dy = 0 is exact, so I(x, y) = y^{-2}e^{-x/y} is an integrating factor for y[x^2 - 2xy]dx - x^3 dy = 0.

20. Given (xy - 1)dx + x^2 dy = 0, then M = xy - 1 and N = x^2. Thus M_y = x and N_x = 2x, so (M_y - N_x)/N = -x^{-1} = f(x) is a function of x alone, so I(x) = e^{∫f(x)dx} = x^{-1} is an integrating factor for the given equation. Multiplying the given equation by I(x) results in the exact equation (y - x^{-1})dx + x dy = 0. We find that φ(x, y) = xy - ln|x|, and hence the general solution of our differential equation is xy - ln|x| = c.
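The exactness claim here can be checked numerically. The sketch below is not part of the text; the test point (2, 3) and the finite-difference helper `partial` are illustrative choices. It multiplies M and N by the integrating factor I(x) = 1/x and compares M_y with N_x by central differences.

```python
# Sketch (not from the text): verify that I(x) = 1/x makes
# (xy - 1)dx + x^2 dy = 0 exact, i.e. that M_y = N_x afterward.
def partial(f, x, y, var, h=1e-6):
    # Central-difference partial derivative of f(x, y).
    if var == "x":
        return (f(x + h, y) - f(x - h, y)) / (2 * h)
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

I = lambda x: 1 / x
M = lambda x, y: I(x) * (x * y - 1)   # y - 1/x
N = lambda x, y: I(x) * x**2          # x

My = partial(M, 2.0, 3.0, "y")
Nx = partial(N, 2.0, 3.0, "x")
print(abs(My - Nx) < 1e-6)  # True: both partials equal 1
```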

Page 72: Differential Equations and Linear Algebra - 3e - Goode - Solutions

21. Given y dx - (2x + y^4)dy = 0, then M = y and N = -(2x + y^4). Thus M_y = 1 and N_x = -2, so (M_y - N_x)/M = 3y^{-1} = g(y) is a function of y alone, so I(y) = e^{-∫g(y)dy} = 1/y^3 is an integrating factor for the given differential equation. Multiplying the given equation by I(y) results in the exact equation y^{-2}dx - (2xy^{-3} + y)dy = 0. We find that φ(x, y) = xy^{-2} - y^2/2, and hence the general solution of our differential equation is xy^{-2} - y^2/2 = c_1 =⇒ 2x - y^4 = cy^2.

22. Given x^2 y dx + y(x^3 + e^{-3y} sin y)dy = 0, then M = x^2 y and N = y(x^3 + e^{-3y} sin y). Thus M_y = x^2 and N_x = 3x^2 y, so (M_y - N_x)/M = y^{-1} - 3 = g(y) is a function of y alone, so I(y) = e^{-∫g(y)dy} = e^{3y}/y is an integrating factor for the given equation. Multiplying the equation by I(y) results in the exact equation x^2 e^{3y}dx + e^{3y}(x^3 + e^{-3y} sin y)dy = 0. We find that φ(x, y) = (x^3 e^{3y})/3 - cos y, and hence the general solution of our differential equation is (x^3 e^{3y})/3 - cos y = c.

23. Given (y - x^2)dx + 2x dy = 0, then M = y - x^2 and N = 2x. Thus M_y = 1 and N_x = 2, so (M_y - N_x)/N = -1/(2x) = f(x) is a function of x alone, so I(x) = e^{∫f(x)dx} = 1/√x is an integrating factor for the given equation. Multiplying the given equation by I(x) results in the exact equation (x^{-1/2}y - x^{3/2})dx + 2x^{1/2}dy = 0. We find that φ(x, y) = 2x^{1/2}y - (2/5)x^{5/2}, and hence the general solution of our differential equation is 2x^{1/2}y - (2/5)x^{5/2} = c, or y(x) = (c + 2x^{5/2})/(10√x).

24. Given xy[2 ln(xy) + 1]dx + x^2 dy = 0, then M = xy[2 ln(xy) + 1] and N = x^2. Thus M_y = 3x + 2x ln(xy) and N_x = 2x, so (M_y - N_x)/M = y^{-1} = g(y) is a function of y only, so I(y) = e^{-∫g(y)dy} = 1/y is an integrating factor for the given equation. Multiplying the given equation by I(y) results in the exact equation x[2 ln(xy) + 1]dx + x^2 y^{-1}dy = 0. We find that φ(x, y) = x^2 ln y + x^2 ln x = x^2 ln(xy), and hence the general solution of our differential equation is x^2 ln(xy) = c, or y(x) = (1/x)e^{c/x^2}.

25. Given dy/dx + 2xy/(1 + x^2) = 1/(1 + x^2)^2 =⇒ (2xy + 2x^3 y - 1)dx + (1 + x^2)^2 dy = 0, then M = 2xy + 2x^3 y - 1 and N = (1 + x^2)^2. Thus M_y = 2x + 2x^3 and N_x = 4x(1 + x^2), so (M_y - N_x)/N = -2x/(1 + x^2) = f(x) is a function of x alone, so I(x) = e^{∫f(x)dx} = 1/(1 + x^2) is an integrating factor for the given equation. Multiplying the given equation by I(x) yields the exact equation [2xy - 1/(1 + x^2)]dx + (1 + x^2)dy = 0. We find that φ(x, y) = (1 + x^2)y - tan^{-1} x, and hence the general solution of our differential equation is (1 + x^2)y - tan^{-1} x = c, or y(x) = (tan^{-1} x + c)/(1 + x^2).

26. Given (3xy - 2y^{-1})dx + x(x + y^{-2})dy = 0, then M = 3xy - 2y^{-1} and N = x(x + y^{-2}). Thus M_y = 3x + 2y^{-2} and N_x = 2x + y^{-2}, so (M_y - N_x)/N = 1/x = f(x) is a function of x alone, so I(x) = e^{∫f(x)dx} = x is an integrating factor for the given equation. Multiplying the given equation by I(x) results in the exact equation (3x^2 y - 2xy^{-1})dx + x^2(x + y^{-2})dy = 0. We find that φ(x, y) = x^3 y - x^2 y^{-1}, and hence the general solution of our differential equation is x^3 y - x^2 y^{-1} = c.

27. Given (y^{-1} - x^{-1})dx + (xy^{-2} - 2y^{-1})dy = 0 =⇒ x^r y^s(y^{-1} - x^{-1})dx + x^r y^s(xy^{-2} - 2y^{-1})dy = 0 =⇒ (x^r y^{s-1} - x^{r-1}y^s)dx + (x^{r+1}y^{s-2} - 2x^r y^{s-1})dy = 0. Then M = x^r y^{s-1} - x^{r-1}y^s and N = x^{r+1}y^{s-2} - 2x^r y^{s-1}, so M_y = (s - 1)x^r y^{s-2} - s x^{r-1}y^{s-1} and N_x = (r + 1)x^r y^{s-2} - 2r x^{r-1}y^{s-1}. The equation is exact if and only if M_y = N_x =⇒ (s - 1)x^r y^{s-2} - s x^{r-1}y^{s-1} = (r + 1)x^r y^{s-2} - 2r x^{r-1}y^{s-1} =⇒ (s - 1)/y^2 - s/(xy) = (r + 1)/y^2 - 2r/(xy) =⇒ (s - r - 2)/y^2 = (s - 2r)/(xy). From the last equation we require that s - r - 2 = 0 and s - 2r = 0. Solving this system yields r = 2 and s = 4.
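As a quick sanity check on r = 2 and s = 4, the sketch below (not from the text; the test point (1.5, 2) is an arbitrary choice) multiplies M and N by x^2 y^4 and compares M_y with N_x by central differences.

```python
# Sketch (not from the text): with r = 2, s = 4, the scaled equation
# x^2 y^4 (y^-1 - x^-1)dx + x^2 y^4 (x y^-2 - 2 y^-1)dy = 0 is exact.
def My(x, y, h=1e-6):
    M = lambda x, y: x**2 * y**4 * (1 / y - 1 / x)   # = x^2 y^3 - x y^4
    return (M(x, y + h) - M(x, y - h)) / (2 * h)

def Nx(x, y, h=1e-6):
    N = lambda x, y: x**2 * y**4 * (x / y**2 - 2 / y)  # = x^3 y^2 - 2 x^2 y^3
    return (N(x + h, y) - N(x - h, y)) / (2 * h)

# Both partials equal 3x^2 y^2 - 4x y^3 analytically.
print(abs(My(1.5, 2.0) - Nx(1.5, 2.0)) < 1e-4)  # True
```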

28. Given y(5xy^2 + 4)dx + x(xy^2 - 1)dy = 0 =⇒ x^r y^s y(5xy^2 + 4)dx + x^r y^s x(xy^2 - 1)dy = 0. Then M = x^r y^{s+1}(5xy^2 + 4) and N = x^{r+1}y^s(xy^2 - 1), so M_y = 5(s + 3)x^{r+1}y^{s+2} + 4(s + 1)x^r y^s and N_x = (r + 2)x^{r+1}y^{s+2} - (r + 1)x^r y^s. The equation is exact if and only if M_y = N_x =⇒ 5(s + 3)x^{r+1}y^{s+2} + 4(s + 1)x^r y^s = (r + 2)x^{r+1}y^{s+2} - (r + 1)x^r y^s =⇒ 5(s + 3)xy^2 + 4(s + 1) = (r + 2)xy^2 - (r + 1). From the last equation we require that 5(s + 3) = r + 2 and 4(s + 1) = -(r + 1). Solving this system yields r = 3 and s = -2.

29. Given 2y(y + 2x^2)dx + x(4y + 3x^2)dy = 0 =⇒ x^r y^s 2y(y + 2x^2)dx + x^r y^s x(4y + 3x^2)dy = 0. Then M = 2x^r y^{s+2} + 4x^{r+2}y^{s+1} and N = 4x^{r+1}y^{s+1} + 3x^{r+3}y^s, so M_y = 2(s + 2)x^r y^{s+1} + 4(s + 1)x^{r+2}y^s and N_x = 4(r + 1)x^r y^{s+1} + 3(r + 3)x^{r+2}y^s. The equation is exact if and only if M_y = N_x =⇒ 2(s + 2)x^r y^{s+1} + 4(s + 1)x^{r+2}y^s = 4(r + 1)x^r y^{s+1} + 3(r + 3)x^{r+2}y^s =⇒ 2(s + 2)y + 4x^2(s + 1) = 4(r + 1)y + 3(r + 3)x^2. From this last equation we require that 2(s + 2) = 4(r + 1) and 4(s + 1) = 3(r + 3). Solving this system yields r = 1 and s = 2.

30. Suppose that (M_y - N_x)/M = g(y) is a function of y only. Then dividing equation (1.9.21) by M, it follows that I is an integrating factor for M(x, y)dx + N(x, y)dy = 0 if and only if it is a solution of (N/M)(∂I/∂x) - ∂I/∂y = I g(y) (30.1). We must show that this differential equation has a solution I = I(y). However, if I = I(y), then (30.1) reduces to dI/dy = -I g(y), which is a separable equation with solution I(y) = e^{-∫g(t)dt}.

31. (a) Note that dy/dx + py = q can be written in differential form as (py - q)dx + dy = 0 (31.1). This has M = py - q and N = 1, so that (M_y - N_x)/N = p(x). Consequently, an integrating factor for (31.1) is I(x) = e^{∫^x p(t)dt}.

(b) Multiplying (31.1) by I(x) = e^{∫^x p(t)dt} yields the exact equation e^{∫^x p(t)dt}(py - q)dx + e^{∫^x p(t)dt}dy = 0. Hence, there exists a potential function φ such that (i) ∂φ/∂x = e^{∫^x p(t)dt}(py - q) and (ii) ∂φ/∂y = e^{∫^x p(t)dt}. From (ii), φ(x, y) = y e^{∫^x p(t)dt} + h(x), so from (i) p(x)y e^{∫^x p(t)dt} + dh(x)/dx = e^{∫^x p(t)dt}(py - q) =⇒ dh(x)/dx = -q(x)e^{∫^x p(t)dt} =⇒ h(x) = -∫q(x)e^{∫^x p(t)dt}dx, where the constant of integration has been set to zero since we just need one potential function. Consequently, φ(x, y) = y e^{∫^x p(t)dt} - ∫q(x)e^{∫^x p(t)dt}dx =⇒ y(x) = I^{-1}(∫^x I q(t)dt + c), where I(x) = e^{∫^x p(t)dt}.

Solutions to Section 1.10

True-False Review:

1. TRUE. This is well-illustrated by the calculations shown in Example 1.10.1.

2. TRUE. The equation y1 = y0 + f(x0, y0)(x1 - x0) is the tangent line to the solution curve of dy/dx = f(x, y) at the point (x0, y0). Once the point (x1, y1) is determined, the procedure can be iterated over and over at the new points obtained to carry out Euler's method.

3. FALSE. It is possible, depending on the circumstances, for the errors associated with Euler’s method todecrease from one step to the next.

Problems:

1. Applying Euler's method with y′ = 4y - 1, x0 = 0, y0 = 1, and h = 0.05, we have y_{n+1} = y_n + 0.05(4y_n - 1). This generates the sequence of approximants given in the table below.

n    xn     yn
1    0.05   1.15
2    0.10   1.33
3    0.15   1.546
4    0.20   1.805
5    0.25   2.116
6    0.30   2.489
7    0.35   2.937
8    0.40   3.475
9    0.45   4.120
10   0.50   4.894

Consequently the Euler approximation to y(0.5) is y10 = 4.894. (Actual value: y(0.5) = 5.792 rounded to 3 decimal places.)
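The iteration above is short enough to reproduce in a few lines. A minimal sketch (the helper name `euler` is ours, not the text's):

```python
# Euler's method for y' = f(x, y): y_{n+1} = y_n + h f(x_n, y_n),
# applied to y' = 4y - 1, y(0) = 1, h = 0.05, as in the table above.
def euler(f, x0, y0, h, n):
    x, y = x0, y0
    for _ in range(n):
        y += h * f(x, y)
        x += h
    return y

y10 = euler(lambda x, y: 4 * y - 1, 0.0, 1.0, 0.05, 10)
print(round(y10, 3))  # 4.894, matching the last row of the table
```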

2. Applying Euler's method with y′ = -2xy/(1 + x^2), x0 = 0, y0 = 1, and h = 0.1, we have y_{n+1} = y_n - 0.2 x_n y_n/(1 + x_n^2). This generates the sequence of approximants given in the table below.

n    xn    yn
1    0.1   1
2    0.2   0.980
3    0.3   0.942
4    0.4   0.891
5    0.5   0.829
6    0.6   0.763
7    0.7   0.696
8    0.8   0.630
9    0.9   0.569
10   1.0   0.512

Consequently the Euler approximation to y(1) is y10 = 0.512. (Actual value: y(1) = 0.5.)

3. Applying Euler's method with y′ = x - y^2, x0 = 0, y0 = 2, and h = 0.05, we have y_{n+1} = y_n + 0.05(x_n - y_n^2). This generates the sequence of approximants given in the table below.

n    xn     yn
1    0.05   1.80
2    0.10   1.641
3    0.15   1.511
4    0.20   1.404
5    0.25   1.316
6    0.30   1.242
7    0.35   1.180
8    0.40   1.127
9    0.45   1.084
10   0.50   1.048

Consequently the Euler approximation to y(0.5) is y10 = 1.048. (Actual value: y(0.5) = 1.088 rounded to 3 decimal places.)

4. Applying Euler's method with y′ = -x^2 y, x0 = 0, y0 = 1, and h = 0.2, we have y_{n+1} = y_n - 0.2 x_n^2 y_n. This generates the sequence of approximants given in the table below.

n   xn    yn
1   0.2   1
2   0.4   0.992
3   0.6   0.960
4   0.8   0.891
5   1.0   0.777

Consequently the Euler approximation to y(1) is y5 = 0.777. (Actual value: y(1) = 0.717 rounded to 3 decimal places.)

5. Applying Euler's method with y′ = 2xy^2, x0 = 0, y0 = 0.5, and h = 0.1, we have y_{n+1} = y_n + 0.2 x_n y_n^2. This generates the sequence of approximants given in the table below.

n    xn    yn
1    0.1   0.5
2    0.2   0.505
3    0.3   0.515
4    0.4   0.531
5    0.5   0.554
6    0.6   0.584
7    0.7   0.625
8    0.8   0.680
9    0.9   0.754
10   1.0   0.856

Consequently the Euler approximation to y(1) is y10 = 0.856. (Actual value: y(1) = 1.)

6. Applying the modified Euler method with y′ = 4y - 1, x0 = 0, y0 = 1, and h = 0.05, we have y*_{n+1} = y_n + 0.05(4y_n - 1) and y_{n+1} = y_n + 0.025[(4y_n - 1) + (4y*_{n+1} - 1)]. This generates the sequence of approximants given in the table below.

n    xn     yn
1    0.05   1.165
2    0.10   1.3663
3    0.15   1.6119
4    0.20   1.9115
5    0.25   2.2770
6    0.30   2.7230
7    0.35   3.2670
8    0.40   3.9308
9    0.45   4.7406
10   0.50   5.7285

Consequently the modified Euler approximation to y(0.5) is y10 = 5.7285. (Actual value: y(0.5) = 5.7918 rounded to 4 decimal places.)
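The predictor-corrector step above can be sketched as follows (the helper name `modified_euler` is ours, not the text's):

```python
# Modified Euler (Heun) method: predict with an Euler step, then
# correct by averaging the slopes at both ends of the interval.
# Applied to y' = 4y - 1, y(0) = 1, h = 0.05, as in the table above.
def modified_euler(f, x0, y0, h, n):
    x, y = x0, y0
    for _ in range(n):
        y_star = y + h * f(x, y)                       # predictor
        y += 0.5 * h * (f(x, y) + f(x + h, y_star))    # corrector
        x += h
    return y

y10 = modified_euler(lambda x, y: 4 * y - 1, 0.0, 1.0, 0.05, 10)
print(round(y10, 4))  # 5.7285, matching the last row of the table
```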

7. Applying the modified Euler method with y′ = -2xy/(1 + x^2), x0 = 0, y0 = 1, and h = 0.1, we have y*_{n+1} = y_n - 0.2 x_n y_n/(1 + x_n^2) and y_{n+1} = y_n + 0.05[-2x_n y_n/(1 + x_n^2) - 2x_{n+1} y*_{n+1}/(1 + x_{n+1}^2)]. This generates the sequence of approximants given in the table below.

n    xn    yn
1    0.1   0.9900
2    0.2   0.9616
3    0.3   0.9177
4    0.4   0.8625
5    0.5   0.8007
6    0.6   0.7361
7    0.7   0.6721
8    0.8   0.6108
9    0.9   0.5536
10   1.0   0.5012

Consequently the modified Euler approximation to y(1) is y10 = 0.5012. (Actual value: y(1) = 0.5.)

8. Applying the modified Euler method with y′ = x - y^2, x0 = 0, y0 = 2, and h = 0.05, we have y*_{n+1} = y_n + 0.05(x_n - y_n^2) and y_{n+1} = y_n + 0.025[(x_n - y_n^2) + (x_{n+1} - (y*_{n+1})^2)]. This generates the sequence of approximants given in the table below.

n    xn     yn
1    0.05   1.8203
2    0.10   1.6725
3    0.15   1.5497
4    0.20   1.4468
5    0.25   1.3600
6    0.30   1.2866
7    0.35   1.2243
8    0.40   1.1715
9    0.45   1.1269
10   0.50   1.0895

Consequently the modified Euler approximation to y(0.5) is y10 = 1.0895. (Actual value: y(0.5) = 1.0878 rounded to 4 decimal places.)

9. Applying the modified Euler method with y′ = -x^2 y, x0 = 0, y0 = 1, and h = 0.2, we have y*_{n+1} = y_n - 0.2 x_n^2 y_n and y_{n+1} = y_n - 0.1[x_n^2 y_n + x_{n+1}^2 y*_{n+1}]. This generates the sequence of approximants given in the table below.

n   xn    yn
1   0.2   0.9960
2   0.4   0.9762
3   0.6   0.9266
4   0.8   0.8382
5   1.0   0.7114

Consequently the modified Euler approximation to y(1) is y5 = 0.7114. (Actual value: y(1) = 0.7165 rounded to 4 decimal places.)

10. Applying the modified Euler method with y′ = 2xy^2, x0 = 0, y0 = 0.5, and h = 0.1, we have y*_{n+1} = y_n + 0.2 x_n y_n^2 and y_{n+1} = y_n + 0.1[x_n y_n^2 + x_{n+1}(y*_{n+1})^2]. This generates the sequence of approximants given in the table below.

n    xn    yn
1    0.1   0.5025
2    0.2   0.5102
3    0.3   0.5235
4    0.4   0.5434
5    0.5   0.5713
6    0.6   0.6095
7    0.7   0.6617
8    0.8   0.7342
9    0.9   0.8379
10   1.0   0.9941

Consequently the modified Euler approximation to y(1) is y10 = 0.9941. (Actual value: y(1) = 1.)

11. We have y′ = 4y - 1, x0 = 0, y0 = 1, and h = 0.05. So k1 = 0.05(4y_n - 1), k2 = 0.05[4(y_n + k1/2) - 1], k3 = 0.05[4(y_n + k2/2) - 1], k4 = 0.05[4(y_n + k3) - 1], and y_{n+1} = y_n + (1/6)(k1 + 2k2 + 2k3 + k4). This generates the sequence of approximants given in the table below (computations rounded to five decimal places).

n    xn     yn
1    0.05   1.16605
2    0.10   1.36886
3    0.15   1.61658
4    0.20   1.91914
5    0.25   2.28868
6    0.30   2.74005
7    0.35   3.29135
8    0.40   3.96471
9    0.45   4.78714
10   0.50   5.79167

Consequently the Runge-Kutta approximation to y(0.5) is y10 = 5.79167. (Actual value: y(0.5) = 5.79179 rounded to 5 decimal places.)
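The classical fourth-order Runge-Kutta step used above, with its 1-2-2-1 weighting, can be sketched as follows (the helper name `rk4` is ours, not the text's):

```python
# Classical RK4 applied to y' = 4y - 1, y(0) = 1, h = 0.05,
# as in the table above.
def rk4(f, x0, y0, h, n):
    x, y = x0, y0
    for _ in range(n):
        k1 = h * f(x, y)
        k2 = h * f(x + h / 2, y + k1 / 2)
        k3 = h * f(x + h / 2, y + k2 / 2)
        k4 = h * f(x + h, y + k3)
        y += (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return y

y10 = rk4(lambda x, y: 4 * y - 1, 0.0, 1.0, 0.05, 10)
print(round(y10, 5))  # 5.79167, matching the last row of the table
```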

12. We have y′ = -2xy/(1 + x^2), x0 = 0, y0 = 1, and h = 0.1. So k1 = -0.2 x_n y_n/(1 + x_n^2), k2 = -0.2(x_n + 0.05)(y_n + k1/2)/[1 + (x_n + 0.05)^2], k3 = -0.2(x_n + 0.05)(y_n + k2/2)/[1 + (x_n + 0.05)^2], k4 = -0.2 x_{n+1}(y_n + k3)/[1 + x_{n+1}^2], and y_{n+1} = y_n + (1/6)(k1 + 2k2 + 2k3 + k4). This generates the sequence of approximants given in the table below (computations rounded to seven decimal places).

n    xn    yn
1    0.1   0.9900990
2    0.2   0.9615383
3    0.3   0.9174309
4    0.4   0.8620686
5    0.5   0.7999996
6    0.6   0.7352937
7    0.7   0.6711406
8    0.8   0.6097558
9    0.9   0.5524860
10   1.0   0.4999999

Consequently the Runge-Kutta approximation to y(1) is y10 = 0.4999999. (Actual value: y(1) = 0.5.)

13. We have y′ = x - y^2, x0 = 0, y0 = 2, and h = 0.05. So k1 = 0.05(x_n - y_n^2), k2 = 0.05[x_n + 0.025 - (y_n + k1/2)^2], k3 = 0.05[x_n + 0.025 - (y_n + k2/2)^2], k4 = 0.05[x_{n+1} - (y_n + k3)^2], and y_{n+1} = y_n + (1/6)(k1 + 2k2 + 2k3 + k4). This generates the sequence of approximants given in the table below (computations rounded to six decimal places).

n    xn     yn
1    0.05   1.81936
2    0.10   1.671135
3    0.15   1.548079
4    0.20   1.445025
5    0.25   1.358189
6    0.30   1.284738
7    0.35   1.222501
8    0.40   1.169789
9    0.45   1.125263
10   0.50   1.087845

Consequently the Runge-Kutta approximation to y(0.5) is y10 = 1.087845. (Actual value: y(0.5) = 1.087845 rounded to 6 decimal places.)

14. We have y′ = -x^2 y, x0 = 0, y0 = 1, and h = 0.2. So k1 = -0.2 x_n^2 y_n, k2 = -0.2(x_n + 0.1)^2 (y_n + k1/2), k3 = -0.2(x_n + 0.1)^2 (y_n + k2/2), k4 = -0.2 x_{n+1}^2 (y_n + k3), and y_{n+1} = y_n + (1/6)(k1 + 2k2 + 2k3 + k4). This generates the sequence of approximants given in the table below (computations rounded to six decimal places).

n   xn    yn
1   0.2   0.997337
2   0.4   0.978892
3   0.6   0.930530
4   0.8   0.843102
5   1.0   0.716530

Consequently the Runge-Kutta approximation to y(1) is y5 = 0.716530. (Actual value: y(1) = 0.716531 rounded to 6 decimal places.)

15. We have y′ = 2xy^2, x0 = 0, y0 = 0.5, and h = 0.1. So k1 = 0.2 x_n y_n^2, k2 = 0.2(x_n + 0.05)(y_n + k1/2)^2, k3 = 0.2(x_n + 0.05)(y_n + k2/2)^2, k4 = 0.2 x_{n+1}(y_n + k3)^2, and y_{n+1} = y_n + (1/6)(k1 + 2k2 + 2k3 + k4). This generates the sequence of approximants given in the table below (computations rounded to six decimal places).

n    xn    yn
1    0.1   0.502513
2    0.2   0.510204
3    0.3   0.523560
4    0.4   0.543478
5    0.5   0.571429
6    0.6   0.609756
7    0.7   0.662252
8    0.8   0.735295
9    0.9   0.840336
10   1.0   0.999996

Consequently the Runge-Kutta approximation to y(1) is y10 = 0.999996. (Actual value: y(1) = 1.)


16. We have y′ + (1/10)y = e^{-x/10} cos x, x0 = 0, y0 = 0, and h = 0.5. Hence k1 = 0.5[-(1/10)y_n + e^{-x_n/10} cos x_n], k2 = 0.5[-(1/10)(y_n + k1/2) + e^{-(x_n + 0.25)/10} cos(x_n + 0.25)], k3 = 0.5[-(1/10)(y_n + k2/2) + e^{-(x_n + 0.25)/10} cos(x_n + 0.25)], k4 = 0.5[-(1/10)(y_n + k3) + e^{-x_{n+1}/10} cos x_{n+1}], and y_{n+1} = y_n + (1/6)(k1 + 2k2 + 2k3 + k4). This generates the sequence of approximants plotted in the accompanying figure. We see that the solution appears to be oscillating with a diminishing amplitude. Indeed, the exact solution to the initial value problem is y(x) = e^{-x/10} sin x. The corresponding solution curve is also given in the figure.
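The agreement claimed above between the RK4 approximants and the exact solution can be checked directly. A sketch (not from the text; the tolerance 1e-2 is our illustrative choice for this step size):

```python
# RK4 for y' = -y/10 + e^(-x/10) cos x, y(0) = 0, h = 0.5, marched to
# x = 10 and compared against the exact solution y = e^(-x/10) sin x.
import math

def f(x, y):
    return -y / 10 + math.exp(-x / 10) * math.cos(x)

x, y, h = 0.0, 0.0, 0.5
for _ in range(20):  # 20 steps of h = 0.5 reach x = 10
    k1 = h * f(x, y)
    k2 = h * f(x + h / 2, y + k1 / 2)
    k3 = h * f(x + h / 2, y + k2 / 2)
    k4 = h * f(x + h, y + k3)
    y += (k1 + 2 * k2 + 2 * k3 + k4) / 6
    x += h

exact = math.exp(-x / 10) * math.sin(x)
print(abs(y - exact) < 1e-2)  # True: the curves track closely
```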

Figure 0.0.56: Figure for Exercise 16 (y(x) vs. x: the Runge-Kutta approximants together with the exact solution y(x) = e^{-x/10} sin x)

Solutions to Section 1.11

Problems:

1. d^2y/dx^2 = (2/x)dy/dx + 4x^2. Let u = dy/dx so that du/dx = d^2y/dx^2. Substituting these results into the first equation yields du/dx = (2/x)u + 4x^2 =⇒ du/dx - (2/x)u = 4x^2. An appropriate integrating factor for this equation is I(x) = e^{-2∫dx/x} = x^{-2} =⇒ d/dx(x^{-2}u) = 4 =⇒ x^{-2}u = 4∫dx =⇒ x^{-2}u = 4x + c1 =⇒ u = 4x^3 + c1 x^2 =⇒ dy/dx = 4x^3 + c1 x^2 =⇒ y(x) = x^4 + c1 x^3 + c2.

2. d^2y/dx^2 = [1/((x - 1)(x - 2))][dy/dx - 1]. Let u = dy/dx so that du/dx = d^2y/dx^2. Substituting these results into the first equation yields du/dx = (u - 1)/((x - 1)(x - 2)) =⇒ du/dx - u/((x - 1)(x - 2)) = -1/((x - 1)(x - 2)). An appropriate integrating factor for this equation is I(x) = e^{-∫dx/((x - 1)(x - 2))} = (x - 1)/(x - 2) =⇒ d/dx[((x - 1)/(x - 2))u] = 1/(x - 2)^2 =⇒ ((x - 1)/(x - 2))u = ∫(x - 2)^{-2}dx =⇒ u = -1/(x - 1) + c1 =⇒ dy/dx = -1/(x - 1) + c1 =⇒ y(x) = -ln|x - 1| + c1 x + c2.

3. d^2y/dx^2 + (2/y)(dy/dx)^2 = dy/dx. Let u = dy/dx so that du/dx = u du/dy = d^2y/dx^2. Substituting these results into the first equation yields u du/dy + (2/y)u^2 = u =⇒ u = 0 or du/dy + (2/y)u = 1. An appropriate integrating factor for the last equation is I(y) = e^{∫(2/y)dy} = y^2 =⇒ d/dy(y^2 u) = y^2 =⇒ y^2 u = ∫y^2 dy =⇒ y^2 u = y^3/3 + c1 =⇒ dy/dx = y/3 + c1/y^2 =⇒ 3y^2 dy/(y^3 + c2) = dx =⇒ ln|y^3 + c2| = x + c3 =⇒ y(x) = (c4 e^x + c5)^{1/3}.

4. d^2y/dx^2 = (dy/dx)^2 tan y. Let u = dy/dx so that du/dx = u du/dy = d^2y/dx^2. Substituting these results into the first equation yields u du/dy = u^2 tan y. If u = 0 then du/dx = 0 =⇒ y equals a constant, and this is a solution to the equation. Now suppose that u ≠ 0. Then du/dy = u tan y =⇒ ∫du/u = ∫tan y dy =⇒ u = c1 sec y =⇒ dy/dx = c1 sec y =⇒ y(x) = sin^{-1}(c1 x + c2).

5. d^2y/dx^2 + tan x (dy/dx) = (dy/dx)^2. Let u = dy/dx so that du/dx = d^2y/dx^2. Substituting these results into the first equation yields du/dx + u tan x = u^2, which is a Bernoulli equation. Letting z = u^{-1} gives dz/dx = -u^{-2} du/dx. Substituting these results into the last equation yields dz/dx - z tan x = -1. Then an integrating factor for this equation is I(x) = e^{-∫tan x dx} = cos x =⇒ d/dx(z cos x) = -cos x =⇒ z cos x = -∫cos x dx =⇒ z = (c1 - sin x)/cos x =⇒ u = cos x/(c1 - sin x) =⇒ dy/dx = cos x/(c1 - sin x) =⇒ y(x) = c2 - ln|c1 - sin x|.

6. d^2x/dt^2 = (dx/dt)^2 + 2 dx/dt. Let u = dx/dt so that du/dt = d^2x/dt^2. Substituting these results into the first equation yields du/dt = u^2 + 2u =⇒ du/dt - 2u = u^2, which is a Bernoulli equation. If u = 0 then x is a constant, which satisfies the equation. Now suppose that u ≠ 0. Let z = u^{-1} so that dz/dt = -(1/u^2)du/dt. Substituting these results into the last equation yields dz/dt + 2z = -1. An integrating factor for this equation is I(t) = e^{2t} =⇒ d/dt(e^{2t}z) = -e^{2t} =⇒ z = ce^{-2t} - 1/2 =⇒ u = 2e^{2t}/(2c - e^{2t}) =⇒ x = ∫2e^{2t}/(2c - e^{2t})dt =⇒ x(t) = c2 - ln|c1 - e^{2t}|.

7. d^2y/dx^2 - (2/x)dy/dx = 6x^4. Let u = dy/dx so that du/dx = d^2y/dx^2. Substituting these results into the first equation yields du/dx - (2/x)u = 6x^4. An appropriate integrating factor for this equation is I(x) = e^{-2∫dx/x} = x^{-2} =⇒ d/dx(x^{-2}u) = 6x^2 =⇒ x^{-2}u = 6∫x^2 dx =⇒ u = 2x^5 + cx^2 =⇒ dy/dx = 2x^5 + cx^2 =⇒ y(x) = (1/3)x^6 + c1 x^3 + c2.

8. t d^2x/dt^2 = 2(t + dx/dt). Let u = dx/dt so that du/dt = d^2x/dt^2. Substituting these results into the first equation yields du/dt - (2/t)u = 2. An integrating factor for this equation is I(t) = t^{-2} =⇒ d/dt(t^{-2}u) = 2t^{-2} =⇒ u = -2t + ct^2 =⇒ dx/dt = -2t + ct^2 =⇒ x(t) = c1 t^3 - t^2 + c2.

9. d^2y/dx^2 - α(dy/dx)^2 - β dy/dx = 0. Let u = dy/dx so that du/dx = d^2y/dx^2. Substituting these results into the first equation yields du/dx - βu = αu^2, which is a Bernoulli equation. If u = 0 then y is a constant and satisfies the equation. Now suppose that u ≠ 0. Let z = u^{-1} so that dz/dx = -u^{-2}du/dx. Substituting these results into the last equation yields dz/dx + βz = -α. Then an integrating factor for this equation is I(x) = e^{β∫dx} = e^{βx} =⇒ e^{βx}z = -α∫e^{βx}dx =⇒ z = -α/β + ce^{-βx} =⇒ u = βe^{βx}/(cβ - αe^{βx}) =⇒ dy/dx = βe^{βx}/(cβ - αe^{βx}) =⇒ y = ∫βe^{βx}/(cβ - αe^{βx})dx =⇒ y(x) = -(1/α)ln|c1 + c2 e^{βx}|.

10. d^2y/dx^2 - (2/x)dy/dx = 18x^4. Let u = dy/dx so that du/dx = d^2y/dx^2. Substituting these results into the first equation yields du/dx - (2/x)u = 18x^4, which has I(x) = x^{-2} as an integrating factor, so d/dx(x^{-2}u) = 18x^2 =⇒ u = 6x^5 + cx^2 =⇒ dy/dx = 6x^5 + cx^2 =⇒ y(x) = x^6 + c1 x^3 + c2.

11. d^2y/dx^2 = -[2x/(1 + x^2)]dy/dx. Let u = dy/dx so that du/dx = d^2y/dx^2. If u = 0 then y is a constant and satisfies the equation. Now suppose that u ≠ 0. Substituting these results into the first equation yields du/dx = -[2x/(1 + x^2)]u =⇒ ln|u| = -ln(1 + x^2) + c =⇒ u = c1/(1 + x^2) =⇒ dy/dx = c1/(1 + x^2) =⇒ y(x) = c1 tan^{-1} x + c2.

12. d^2y/dx^2 + (1/y)(dy/dx)^2 = ye^{-y}(dy/dx)^3. Let u = dy/dx so that du/dx = u du/dy = d^2y/dx^2. Substituting these results into the first equation yields u du/dy + (1/y)u^2 = ye^{-y}u^3. If u = 0 then y is a constant and satisfies the equation. Now suppose that u ≠ 0. Dividing by u gives du/dy + u/y = ye^{-y}u^2, which is a Bernoulli equation. Let v = u^{-1} so that dv/dy = -u^{-2}du/dy. Substituting these results into the last equation yields dv/dy - v/y = -ye^{-y}. Then I(y) = y^{-1} is an integrating factor for the equation, thus d/dy(y^{-1}v) = -e^{-y} =⇒ v = y(e^{-y} + c) =⇒ u = e^y/(y + cye^y) =⇒ dy/dx = e^y/(y + cye^y) =⇒ (ye^{-y} + cy)dy = dx =⇒ -e^{-y}(y + 1) + c1 y^2 = x + c2.


13. d^2y/dx^2 - tan x (dy/dx) = 1. Let u = dy/dx so that du/dx = d^2y/dx^2. Substituting these results into the first equation yields du/dx - u tan x = 1. An appropriate integrating factor for this equation is I(x) = e^{-∫tan x dx} = cos x =⇒ d/dx(u cos x) = cos x =⇒ u cos x = sin x + c =⇒ u(x) = tan x + c sec x =⇒ dy/dx = tan x + c sec x =⇒ y(x) = ln(sec x) + c1 ln(sec x + tan x) + c2.

14. y d^2y/dx^2 = 2(dy/dx)^2 + y^2. Let u = dy/dx so that du/dx = u du/dy = d^2y/dx^2. Substituting these results into the first equation yields u du/dy - (2/y)u^2 = y, a Bernoulli equation. Let z = u^2 so that u du/dy = (1/2)dz/dy. Substituting these results into the last equation yields dz/dy - (4/y)z = 2y, which has I(y) = y^{-4} as an integrating factor. Therefore d/dy(y^{-4}z) = 2y^{-3} =⇒ z = c1 y^4 - y^2 =⇒ u^2 = c1 y^4 - y^2 =⇒ u = ±√(c1 y^4 - y^2) =⇒ dy/dx = ±√(c1 y^4 - y^2) =⇒ cos^{-1}(1/(y√c1)) = ±x + c2. Using the facts that y(0) = 1 and y′(0) = 0, we find that c1 = 1 and c2 = 0; thus y(x) = sec x.

15. d^2y/dx^2 = ω^2 y, where ω > 0. Let u = dy/dx so that du/dx = u du/dy = d^2y/dx^2. Substituting these results into the first equation yields u du/dy = ω^2 y =⇒ u^2 = ω^2 y^2 + c. Using the given conditions y(0) = a and y′(0) = 0, we find that c = -a^2 ω^2, so u^2 = ω^2(y^2 - a^2). Then dy/dx = ±ω√(y^2 - a^2) =⇒ (1/ω)cosh^{-1}(y/a) = ±x + c =⇒ y(x) = a cosh[ω(c ± x)] =⇒ y′ = ±aω sinh[ω(c ± x)], and since y′(0) = 0, c = 0; hence y(x) = a cosh(ωx).

16. Let u = dy/dx so that u du/dy = d^2y/dx^2. Substituting these results into the differential equation yields u du/dy = (1/a)√(1 + u^2). Separating the variables and integrating we obtain √(1 + u^2) = y/a + c. Imposing the initial conditions y(0) = a, dy/dx(0) = 0 gives c = 0. Hence √(1 + u^2) = y/a, so that 1 + u^2 = y^2/a^2, or equivalently, u = ±√(y^2/a^2 - 1). Substituting u = dy/dx and separating the variables gives dy/√(y^2 - a^2) = ±dx/|a|, which can be integrated to obtain cosh^{-1}(y/a) = ±x/a + c1, so that y = a cosh(±x/a + c1). Imposing the initial condition y(0) = a gives c1 = 0, so that y(x) = a cosh(x/a).

17. d^2y/dx^2 + p(x)dy/dx = q(x). Let u = dy/dx so that du/dx = d^2y/dx^2. Substituting these results into the first equation gives us the equivalent equation du/dx + p(x)u = q(x), which has solution u = e^{-∫p(x)dx}[∫e^{∫p(x)dx}q(x)dx + c1], so dy/dx = e^{-∫p(x)dx}[∫e^{∫p(x)dx}q(x)dx + c1]. Thus y = ∫{e^{-∫p(x)dx}[∫e^{∫p(x)dx}q(x)dx + c1]}dx + c2 is a solution to the original equation.

18. (a) u1 = y =⇒ u2 = du1/dx = dy/dx =⇒ u3 = du2/dx = d^2y/dx^2 =⇒ du3/dx = d^3y/dx^3; thus d^3y/dx^3 = F(x, d^2y/dx^2) is equivalent to the first-order equation du3/dx = F(x, u3), together with du1/dx = u2 and du2/dx = u3.

(b) d^3y/dx^3 = (1/x)(d^2y/dx^2 - 1). Replace this equation by the equivalent first-order system du1/dx = u2, du2/dx = u3, and du3/dx = (1/x)(u3 - 1) =⇒ ∫du3/(u3 - 1) = ∫dx/x =⇒ u3 = Kx + 1 =⇒ du2/dx = Kx + 1 =⇒ u2 = (K/2)x^2 + x + c2 =⇒ du1/dx = (K/2)x^2 + x + c2 =⇒ u1 = (K/6)x^3 + (1/2)x^2 + c2 x + c3 =⇒ y(x) = u1 = c1 x^3 + (1/2)x^2 + c2 x + c3.

19. Given d^2θ/dt^2 + (g/L)sin θ = 0, θ(0) = θ0, and dθ/dt(0) = 0.

(a) d^2θ/dt^2 + (g/L)θ = 0. Let u = dθ/dt so that du/dt = d^2θ/dt^2 = u du/dθ. Substituting these results into the last equation yields u du/dθ + (g/L)θ = 0 =⇒ u^2 = -(g/L)θ^2 + c1^2, but dθ/dt(0) = 0 and θ(0) = θ0, so c1^2 = (g/L)θ0^2 =⇒ u^2 = (g/L)(θ0^2 - θ^2) =⇒ u = ±√(g/L)√(θ0^2 - θ^2) =⇒ sin^{-1}(θ/θ0) = ±√(g/L) t + c2, but θ(0) = θ0, so c2 = π/2 =⇒ sin^{-1}(θ/θ0) = π/2 ± √(g/L) t =⇒ θ = θ0 sin(π/2 ± √(g/L) t) =⇒ θ = θ0 cos(√(g/L) t). Yes, the predicted motion is reasonable.

(b) d^2θ/dt^2 + (g/L)sin θ = 0. Let u = dθ/dt so that du/dt = d^2θ/dt^2 = u du/dθ. Substituting these results into the last equation yields u du/dθ + (g/L)sin θ = 0 =⇒ u^2 = (2g/L)cos θ + c. Since θ(0) = θ0 and dθ/dt(0) = 0, c = -(2g/L)cos θ0, and so u^2 = (2g/L)cos θ - (2g/L)cos θ0 =⇒ dθ/dt = ±√(2g/L)·[cos θ - cos θ0]^{1/2}.

(c) From part (b), √(L/(2g)) dθ/[cos θ - cos θ0]^{1/2} = ±dt. When the pendulum goes from θ = θ0 to θ = 0 (which corresponds to one quarter of a period), dθ/dt is negative; hence, choose the negative sign. Thus T = -√(L/(2g)) ∫_{θ0}^{0} dθ/[cos θ - cos θ0]^{1/2} =⇒ T = √(L/(2g)) ∫_{0}^{θ0} dθ/[cos θ - cos θ0]^{1/2}.

(d) Using cos θ = 1 - 2 sin^2(θ/2),

T = √(L/(2g)) ∫_0^{θ0} dθ/[cos θ - cos θ0]^{1/2} = √(L/(2g)) ∫_0^{θ0} dθ/[2 sin^2(θ0/2) - 2 sin^2(θ/2)]^{1/2} = (1/2)√(L/g) ∫_0^{θ0} dθ/[sin^2(θ0/2) - sin^2(θ/2)]^{1/2}.

Let k = sin(θ0/2) so that

T = (1/2)√(L/g) ∫_0^{θ0} dθ/[k^2 - sin^2(θ/2)]^{1/2}.   (0.0.5)

Now let sin(θ/2) = k sin u. When θ = 0, u = 0, and when θ = θ0, u = π/2; moreover, dθ = 2k cos(u)du/cos(θ/2) =⇒ dθ = 2k√(1 - sin^2 u) du/√(1 - sin^2(θ/2)) =⇒ dθ = 2√(k^2 - (k sin u)^2) du/√(1 - k^2 sin^2 u) =⇒ dθ = 2√(k^2 - sin^2(θ/2)) du/√(1 - k^2 sin^2 u). Making this change of variables in equation (0.0.5) yields

T = √(L/g) ∫_0^{π/2} du/√(1 - k^2 sin^2 u), where k = sin(θ0/2).
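The final integral is a complete elliptic integral and is easy to evaluate numerically. The sketch below is not from the text: the function name, the trapezoidal quadrature, and the values L = 1, g = 9.81, θ0 = 0.01 are illustrative choices; it checks that for a small amplitude the formula approaches the small-angle quarter period (π/2)√(L/g) from part (a).

```python
# Sketch (not from the text): evaluate
#   T = sqrt(L/g) * ∫_0^{π/2} du / sqrt(1 - k^2 sin^2 u),  k = sin(θ0/2),
# by the composite trapezoidal rule, and compare with (π/2) sqrt(L/g).
import math

def quarter_period(theta0, L=1.0, g=9.81, n=10_000):
    k2 = math.sin(theta0 / 2) ** 2
    h = (math.pi / 2) / n
    total = 0.0
    for i in range(n + 1):
        u = i * h
        w = 0.5 if i in (0, n) else 1.0   # trapezoidal endpoint weights
        total += w / math.sqrt(1 - k2 * math.sin(u) ** 2)
    return math.sqrt(L / g) * total * h

small = quarter_period(0.01)
print(abs(small - (math.pi / 2) * math.sqrt(1 / 9.81)) < 1e-4)  # True
```

For larger amplitudes the integral exceeds the small-angle value, which is the familiar lengthening of the pendulum period.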

Solutions to Section 1.12

1.

2.

3. We first determine the slope of the given family at the point (x, y). Differentiating

y = cx^3 (0.0.6)

with respect to x yields

dy/dx = 3cx^2. (0.0.7)

From (0.0.6) we have c = y/x^3 which, when substituted into Equation (0.0.7), yields

dy/dx = 3y/x.

Consequently, the differential equation for the orthogonal trajectories is

dy/dx = -x/(3y).

Separating the variables and integrating gives

(3/2)y^2 = -(1/2)x^2 + C,

which can be written in the equivalent form

x^2 + 3y^2 = k.
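Orthogonality can be seen immediately at any shared point: the two slopes are negative reciprocals. A quick numeric sketch (not from the text; the point (3, 1) is an arbitrary choice with x, y nonzero):

```python
# At (x, y), the family y = c x^3 has slope 3y/x and the trajectory
# x^2 + 3y^2 = k has slope -x/(3y); their product is -1.
x, y = 3.0, 1.0
m_family = 3 * y / x      # slope of y = c x^3 through (x, y)
m_orth = -x / (3 * y)     # slope of x^2 + 3y^2 = k through (x, y)
print(m_family * m_orth)  # -1.0: perpendicular tangents
```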

4. We first determine the slope of the given family at the point (x, y). Differentiating

y^2 = cx^3 (0.0.8)

with respect to x yields 2y dy/dx = 3cx^2, so that

dy/dx = 3cx^2/(2y). (0.0.9)

From (0.0.8) we have c = y^2/x^3 which, when substituted into Equation (0.0.9), yields

dy/dx = 3y/(2x).

Consequently, the differential equation for the orthogonal trajectories is

dy/dx = -2x/(3y).

Separating the variables and integrating gives

(3/2)y^2 = -x^2 + C,

which can be written in the equivalent form

2x^2 + 3y^2 = k.

5. We first determine the slope of the given family at the point (x, y). Differentiating

y = ln(cx) (0.0.10)

with respect to x yields dy/dx = 1/x. Consequently, the differential equation for the orthogonal trajectories is

dy/dx = -x,

which can be integrated directly to obtain

y = -(1/2)x^2 + k.

6. We first determine the slope of the given family at the point (x, y). Differentiating

x^4 + y^4 = c (0.0.11)

with respect to x yields 4x^3 + 4y^3 dy/dx = 0, so that

dy/dx = -x^3/y^3. (0.0.12)

Consequently, the differential equation for the orthogonal trajectories is

dy/dx = y^3/x^3.

Separating the variables and integrating gives

-(1/2)y^{-2} = -(1/2)x^{-2} + C,

which can be written in the equivalent form

y^2 - x^2 = kx^2 y^2.

7. (a) We first determine the slope of the given family at the point (x, y). Differentiating

x^2 + 3y^2 = 2cy (0.0.13)

with respect to x yields 2x + 6y dy/dx = 2c dy/dx, so that

dy/dx = x/(c - 3y). (0.0.14)

From (0.0.13) we have c = (x^2 + 3y^2)/(2y) which, when substituted into Equation (0.0.14), yields

dy/dx = x/[(x^2 + 3y^2)/(2y) - 3y] = 2xy/(x^2 - 3y^2), (0.0.15)

as required.

(b) It follows from Equation (0.0.15) that the differential equation for the orthogonal trajectories is

dy/dx = (3y^2 - x^2)/(2xy).

This differential equation is first-order homogeneous. Substituting y = xV into the preceding differential equation gives

x dV/dx + V = (3V^2 - 1)/(2V),

which simplifies to

x dV/dx = (V^2 - 1)/(2V).

Separating the variables and integrating we obtain

ln(V^2 - 1) = ln x + C,

or, upon exponentiation, V^2 - 1 = kx. Inserting V = y/x into the preceding equation yields

y^2/x^2 - 1 = kx,

that is, y^2 - x^2 = kx^3.

Page 88: Differential Equations and Linear Algebra - 3e - Goode -Solutions

88

Figure 0.0.57: Figure for Exercise 8 (y(x) vs. x)

Figure 0.0.58: Figure for Exercise 9 (y(x) vs. x)

8. See accompanying figure.

9. See accompanying figure.

10. (a) If v(t) = 25, then dv/dt = 0 = (1/2)(25 - v), so v(t) = 25 is a solution.


(b) The accompanying figure suggests that lim_{t→∞} v(t) = 25.

Figure 0.0.59: Figure for Exercise 10 (v(t) vs. t, approaching v = 25)

11. (a) Separating the variables in Equation (1.12.6) yields

[mv/(mg - kv^2)] dv/dy = 1,

which can be integrated to obtain

-(m/(2k)) ln(mg - kv^2) = y + c.

Multiplying both sides of this equation by -2k/m and exponentiating gives

mg - kv^2 = c1 e^{-(2k/m)y}.

The initial condition v(0) = 0 requires that c1 = mg, which, when inserted into the preceding equation, yields

mg - kv^2 = mg e^{-(2k/m)y},

or equivalently,

v^2 = (mg/k)(1 - e^{-(2k/m)y}),

as required.

(b) See accompanying figure.


Figure 0.0.60: Figure for Exercise 11 (v^2(y) vs. y, with horizontal asymptote mg/k)

12. The given differential equation is separable. Separating the variables gives

y dy/dx = 2 ln x / x,

which can be integrated directly to obtain

(1/2)y^2 = (ln x)^2 + c,

or, equivalently, y^2 = 2(ln x)^2 + c1.

13. The given differential equation is first-order linear. We first divide by x to put the differential equation in standard form:

dy/dx - (2/x)y = 2x ln x. (0.0.16)

An integrating factor for this equation is I = e^{∫(-2/x)dx} = x^{-2}. Multiplying Equation (0.0.16) by x^{-2} reduces it to

d/dx(x^{-2}y) = 2x^{-1} ln x,

which can be integrated to obtain x^{-2}y = (ln x)^2 + c, so that y(x) = x^2[(ln x)^2 + c].


14. We first rewrite the given differential equation in the differential form

2xy dx + (x^2 + 2y)dy = 0. (0.0.17)

Then M_y = 2x = N_x, so that the differential equation is exact. Consequently, there exists a potential function φ satisfying

∂φ/∂x = 2xy,  ∂φ/∂y = x^2 + 2y.

Integrating these two equations in the usual manner yields

φ(x, y) = x^2 y + y^2.

Therefore Equation (0.0.17) can be written in the equivalent form d(x^2 y + y^2) = 0, with general solution x^2 y + y^2 = c.

15. We first rewrite the given differential equation as

dy/dx = (y^2 + 3xy + x^2)/x^2,

which is first-order homogeneous. Substituting y = xV into the preceding equation yields

x dV/dx + V = V^2 + 3V + 1,

so that

x dV/dx = V^2 + 2V + 1 = (V + 1)^2,

or, in separable form,

[1/(V + 1)^2] dV/dx = 1/x.

This equation can be integrated to obtain

-(V + 1)^{-1} = ln x + c,

so that V + 1 = 1/(c1 - ln x). Inserting V = y/x into the preceding equation yields

y/x + 1 = 1/(c1 - ln x),

so that

y(x) = x/(c1 - ln x) - x.


16. We first rewrite the given differential equation in the equivalent form

dy/dx + y tan x = y² sin x,

which is a Bernoulli equation. Dividing this equation by y² yields

y^(−2) dy/dx + y^(−1) tan x = sin x.   (0.0.18)

Now make the change of variables u = y^(−1), in which case du/dx = −y^(−2) dy/dx. Substituting these results into Equation (0.0.18) gives the linear differential equation

−du/dx + u tan x = sin x,

or, in standard form,

du/dx − u tan x = −sin x.   (0.0.19)

An integrating factor for this differential equation is I = e^(−∫tan x dx) = cos x. Multiplying Equation (0.0.19) by cos x reduces it to

d/dx(u cos x) = −sin x cos x,

which can be integrated directly to obtain

u cos x = (1/2)cos² x + c,

so that

u = (cos² x + c1)/(2 cos x).

Inserting u = y^(−1) into the preceding equation and rearranging yields

y(x) = 2 cos x/(cos² x + c1).
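As a numerical sanity check (ours, not from the text), the sketch below verifies that y(x) = 2 cos x/(cos² x + c1) satisfies the Bernoulli equation dy/dx + y tan x = y² sin x at several sample points.

```python
import math

def y(x, c1=3.0):
    # Candidate solution found above
    return 2 * math.cos(x) / (math.cos(x)**2 + c1)

def residual(x, c1=3.0, h=1e-6):
    # dy/dx + y tan x - y^2 sin x should vanish along a solution
    dydx = (y(x + h, c1) - y(x - h, c1)) / (2 * h)
    return dydx + y(x, c1) * math.tan(x) - y(x, c1)**2 * math.sin(x)

for x in (0.3, 1.0, 2.0):
    assert abs(residual(x)) < 1e-6
```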

17. The given differential equation is linear with integrating factor

I = e^(∫ 2e^(2x)/(1 + e^(2x)) dx) = e^(ln(1 + e^(2x))) = 1 + e^(2x).

Multiplying the given differential equation by 1 + e^(2x) yields

d/dx[(1 + e^(2x))y] = (e^(2x) + 1)/(e^(2x) − 1) = −1 + 2e^(2x)/(e^(2x) − 1),

which can be integrated directly to obtain

(1 + e^(2x))y = −x + ln|e^(2x) − 1| + c,

so that

y(x) = [−x + ln|e^(2x) − 1| + c]/(1 + e^(2x)).


18. We first rewrite the given differential equation in the equivalent form

dy/dx = (y + √(x² − y²))/x,

which we recognize as being first-order homogeneous. Inserting y = xV into the preceding equation yields

x dV/dx + V = V + (|x|/x)√(1 − V²),

that is,

1/√(1 − V²) dV/dx = ±1/x.

Integrating we obtain

sin^(−1) V = ±ln|x| + c,

so that

V = sin(c ± ln|x|).

Inserting V = y/x into the preceding equation yields

y(x) = x sin(c ± ln|x|).

19. We first rewrite the given differential equation in the equivalent form

(sin y + y cos x + 1)dx − (1 − x cos y − sin x)dy = 0.

Then

My = cos y + cos x = Nx,

so that the differential equation is exact. Consequently, there is a potential function φ satisfying

∂φ/∂x = sin y + y cos x + 1,   ∂φ/∂y = −(1 − x cos y − sin x).

Integrating these two equations in the usual manner yields

φ(x, y) = x − y + x sin y + y sin x,

so that the differential equation can be written as

d(x − y + x sin y + y sin x) = 0,

and therefore has general solution

x − y + x sin y + y sin x = c.

20. Writing the given differential equation as

dy/dx + (1/x)y = (25/2)y^(−1)x² ln x,

we see that it is a Bernoulli equation with n = −1. We therefore divide the equation by y^(−1) to obtain

y dy/dx + (1/x)y² = (25/2)x² ln x.


We now make the change of variables u = y², in which case du/dx = 2y dy/dx. Inserting these results into the preceding differential equation yields

(1/2)du/dx + (1/x)u = (25/2)x² ln x,

or, in standard form,

du/dx + (2/x)u = 25x² ln x.

An integrating factor for this linear differential equation is I = e^(∫(2/x)dx) = x². Multiplying the previous differential equation by x² reduces it to

d/dx(x²u) = 25x⁴ ln x,

which can be integrated directly to obtain

x²u = 25[(1/5)x⁵ ln x − (1/25)x⁵] + c,

so that

u = x³(5 ln x − 1) + cx^(−2).

Making the replacement u = y² in this equation gives

y² = x³(5 ln x − 1) + cx^(−2).

21. The given differential equation can be written in the equivalent form

dy/dx = e^(x−y)/e^(2x+y) = e^(−x)e^(−2y),

which we recognize as being separable. Separating the variables gives

e^(2y) dy/dx = e^(−x),

which can be integrated to obtain

(1/2)e^(2y) = −e^(−x) + c,

so that

y(x) = (1/2)ln(c1 − 2e^(−x)).

22. The given differential equation is linear with integrating factor I = e^(∫cot x dx) = sin x. Multiplying the given differential equation by sin x reduces it to

d/dx(y sin x) = sin x/cos x,

which can be integrated directly to obtain

y sin x = −ln(cos x) + c,

so that

y(x) = [c − ln(cos x)]/sin x.


23. Writing the given differential equation as

dy/dx + (2e^x/(1 + e^x))y = 2y^(1/2)e^(−x),

we see that it is a Bernoulli equation with n = 1/2. We therefore divide the equation by y^(1/2) to obtain

y^(−1/2) dy/dx + (2e^x/(1 + e^x))y^(1/2) = 2e^(−x).

We now make the change of variables u = y^(1/2), in which case du/dx = (1/2)y^(−1/2) dy/dx. Inserting these results into the preceding differential equation yields

2 du/dx + (2e^x/(1 + e^x))u = 2e^(−x),

or, in standard form,

du/dx + (e^x/(1 + e^x))u = e^(−x).

An integrating factor for this linear differential equation is

I = e^(∫ e^x/(1 + e^x) dx) = e^(ln(1 + e^x)) = 1 + e^x.

Multiplying the previous differential equation by 1 + e^x reduces it to

d/dx[(1 + e^x)u] = e^(−x)(1 + e^x) = e^(−x) + 1,

which can be integrated directly to obtain

(1 + e^x)u = −e^(−x) + x + c,

so that

u = (x − e^(−x) + c)/(1 + e^x).

Making the replacement u = y^(1/2) in this equation gives

y^(1/2) = (x − e^(−x) + c)/(1 + e^x).

24. We first rewrite the given differential equation in the equivalent form

dy/dx = (y/x)[ln(y/x) + 1].

The function appearing on the right of this equation is homogeneous of degree zero, and therefore the differential equation itself is first-order homogeneous. We therefore insert y = xV into the differential equation to obtain

x dV/dx + V = V(ln V + 1),

so that

x dV/dx = V ln V.


Separating the variables yields

1/(V ln V) dV/dx = 1/x,

which can be integrated to obtain

ln(ln V) = ln x + c.

Exponentiating both sides of this equation gives

ln V = c1 x,

or equivalently,

V = e^(c1 x).

Inserting V = y/x in the preceding equation yields

y = x e^(c1 x).

25. For the given differential equation we have

M(x, y) = 1 + 2xe^y,   N(x, y) = −(e^y + x),

so that

(My − Nx)/M = (1 + 2xe^y)/(1 + 2xe^y) = 1.

Consequently, an integrating factor for the given differential equation is

I = e^(−∫dy) = e^(−y).

Multiplying the given differential equation by e^(−y) yields the exact differential equation

(2x + e^(−y))dx − (1 + xe^(−y))dy = 0.   (0.0.20)

Therefore, there exists a potential function φ satisfying

∂φ/∂x = 2x + e^(−y),   ∂φ/∂y = −(1 + xe^(−y)).

Integrating these two equations in the usual manner yields

φ(x, y) = x² − y + xe^(−y).

Therefore Equation (0.0.20) can be written in the equivalent form

d(x² − y + xe^(−y)) = 0,

with general solution x² − y + xe^(−y) = c.

26. The given differential equation is first-order linear. However, it can also be written in the equivalent form

dy/dx = (1 − y)sin x,


which is separable. Separating the variables and integrating yields

−ln|1 − y| = −cos x + c,

so that

1 − y = c1 e^(cos x).

Hence,

y(x) = 1 − c1 e^(cos x).

27. For the given differential equation we have

M(x, y) = 3y² + x²,   N(x, y) = −2xy,

so that

(My − Nx)/N = −4/x.

Consequently, an integrating factor for the given differential equation is

I = e^(−∫(4/x)dx) = x^(−4).

Multiplying the given differential equation by x^(−4) yields the exact differential equation

(3y²x^(−4) + x^(−2))dx − 2yx^(−3)dy = 0.   (0.0.21)

Therefore, there exists a potential function φ satisfying

∂φ/∂x = 3y²x^(−4) + x^(−2),   ∂φ/∂y = −2yx^(−3).

Integrating these two equations in the usual manner yields

φ(x, y) = −y²x^(−3) − x^(−1).

Therefore Equation (0.0.21) can be written in the equivalent form

d(−y²x^(−3) − x^(−1)) = 0,

with general solution −y²x^(−3) − x^(−1) = c, or equivalently, x² + y² = c1 x³.

Notice that the given differential equation can be written in the equivalent form

dy/dx = (3y² + x²)/(2xy),

which is first-order homogeneous. Another equivalent way of writing the given differential equation is

dy/dx − (3/(2x))y = (x/2)y^(−1),

which is a Bernoulli equation.


28. The given differential equation can be written in the equivalent form

dy/dx − (1/(2x ln x))y = −(9/2)x²y³,

which is a Bernoulli equation with n = 3. We therefore divide the equation by y³ to obtain

y^(−3) dy/dx − (1/(2x ln x))y^(−2) = −(9/2)x².

We now make the change of variables u = y^(−2), in which case du/dx = −2y^(−3) dy/dx. Inserting these results into the preceding differential equation yields

−(1/2)du/dx − (1/(2x ln x))u = −(9/2)x²,

or, in standard form,

du/dx + (1/(x ln x))u = 9x².

An integrating factor for this linear differential equation is

I = e^(∫1/(x ln x) dx) = e^(ln(ln x)) = ln x.

Multiplying the previous differential equation by ln x reduces it to

d/dx(u ln x) = 9x² ln x,

which can be integrated to obtain

u ln x = x³(3 ln x − 1) + c,

so that

u = [x³(3 ln x − 1) + c]/ln x.

Making the replacement u = y^(−2) in this equation gives

y^(−2) = [x³(3 ln x − 1) + c]/ln x.

29. Separating the variables in the given differential equation yields

(1/y) dy/dx = (2 + x)/(1 + x) = 1 + 1/(1 + x),

which can be integrated to obtain

ln|y| = x + ln|1 + x| + c.

Exponentiating both sides of this equation gives

y(x) = c1(1 + x)e^x.


30. The given differential equation can be written in the equivalent form

dy/dx + (2/(x² − 1))y = 1,   (0.0.22)

which is first-order linear. An integrating factor is

I = e^(∫2/(x² − 1) dx) = e^(∫[1/(x − 1) − 1/(x + 1)]dx) = e^(ln(x − 1) − ln(x + 1)) = (x − 1)/(x + 1).

Multiplying (0.0.22) by (x − 1)/(x + 1) reduces it to the integrable form

d/dx[(x − 1)/(x + 1) · y] = (x − 1)/(x + 1) = 1 − 2/(x + 1).

Integrating both sides of this differential equation yields

(x − 1)/(x + 1) · y = x − 2 ln(x + 1) + c,

so that

y(x) = [(x + 1)/(x − 1)][x − 2 ln(x + 1) + c].

31. The given differential equation can be written in the equivalent form

[y sec²(xy) + 2x]dx + x sec²(xy)dy = 0.

Then

My = sec²(xy) + 2xy sec²(xy) tan(xy) = Nx,

so that the differential equation is exact. Consequently, there is a potential function φ satisfying

∂φ/∂x = y sec²(xy) + 2x,   ∂φ/∂y = x sec²(xy).

Integrating these two equations in the usual manner yields

φ(x, y) = x² + tan(xy),

so that the differential equation can be written as

d(x² + tan(xy)) = 0,

and therefore has general solution x² + tan(xy) = c, or equivalently,

y(x) = tan^(−1)(c − x²)/x.


32. CHANGE PROBLEM IN TEXT TO

dy/dx + 4xy = 4x√y.

The given differential equation is a Bernoulli equation with n = 1/2. We therefore divide the equation by y^(1/2) to obtain

y^(−1/2) dy/dx + 4xy^(1/2) = 4x.

We now make the change of variables u = y^(1/2), in which case du/dx = (1/2)y^(−1/2) dy/dx. Inserting these results into the preceding differential equation yields

2 du/dx + 4xu = 4x,

or, in standard form,

du/dx + 2xu = 2x.

An integrating factor for this linear differential equation is

I = e^(∫2x dx) = e^(x²).

Multiplying the previous differential equation by e^(x²) reduces it to

d/dx(e^(x²)u) = 2xe^(x²),

which can be integrated directly to obtain

e^(x²)u = e^(x²) + c,

so that

u = 1 + ce^(−x²).

Making the replacement u = y^(1/2) in this equation gives

y^(1/2) = 1 + ce^(−x²).

33. CHANGE PROBLEM IN TEXT TO

dy/dx = x²/(x² + y²) + y/x;

then the answer is correct. The given differential equation is first-order homogeneous. Inserting y = xV into the given equation yields

x dV/dx + V = 1/(1 + V²) + V,

that is,

(1 + V²) dV/dx = 1/x.


Integrating we obtain

V + (1/3)V³ = ln|x| + c.

Inserting V = y/x into the preceding equation yields

y/x + y³/(3x³) = ln|x| + c.

34. For the given differential equation we have

My = 1/y = Nx,

so that the differential equation is exact. Consequently, there is a potential function φ satisfying

∂φ/∂x = ln(xy) + 1,   ∂φ/∂y = x/y + 2y.

Integrating these two equations in the usual manner yields

φ(x, y) = x ln(xy) + y²,

so that the differential equation can be written as

d[x ln(xy) + y²] = 0,

and therefore has general solution x ln(xy) + y² = c.

35. The given differential equation is a Bernoulli equation with n = −1. We therefore divide the equation by y^(−1) to obtain

y dy/dx + (1/x)y² = (25 ln x)/(2x³).

We now make the change of variables u = y², in which case du/dx = 2y dy/dx. Inserting these results into the preceding differential equation yields

(1/2)du/dx + (1/x)u = (25 ln x)/(2x³),

or, in standard form,

du/dx + (2/x)u = 25x^(−3) ln x.

An integrating factor for this linear differential equation is

I = e^(∫(2/x)dx) = x².

Multiplying the previous differential equation by x² reduces it to

d/dx(x²u) = 25x^(−1) ln x,


which can be integrated directly to obtain

x²u = (25/2)(ln x)² + c,

so that

u = [25(ln x)² + c]/(2x²).

Making the replacement u = y² in this equation gives

y² = [25(ln x)² + c]/(2x²).

36. The problem as written is separable, but the integration does not work. CHANGE PROBLEM TO:

(1 + y) dy/dx = xe^(x−y).

The given differential equation can be written in the equivalent form

e^y(1 + y) dy/dx = xe^x,

which is separable. Integrating both sides of this equation gives

ye^y = e^x(x − 1) + c.

37. The given differential equation can be written in the equivalent form

dy/dx − (cos x/sin x)y = −cos x,

which is first-order linear with integrating factor

I = e^(−∫(cos x/sin x)dx) = e^(−ln(sin x)) = 1/sin x.

Multiplying the preceding differential equation by 1/sin x reduces it to

d/dx(y/sin x) = −cos x/sin x,

which can be integrated directly to obtain

y/sin x = −ln(sin x) + c,

so that

y(x) = sin x[c − ln(sin x)].

38. The given differential equation is linear, and therefore can be solved using an appropriate integrating factor. However, if we rearrange the terms in the given differential equation, then it can be written in the equivalent form

1/(1 + y) dy/dx = x²,


which is separable. Integrating both sides of the preceding differential equation yields

ln(1 + y) = (1/3)x³ + c,

so that

y(x) = c1 e^(x³/3) − 1.

Imposing the initial condition y(0) = 5 we find c1 = 6. Therefore the solution to the initial-value problem is

y(x) = 6e^(x³/3) − 1.

39. The given differential equation can be written in the equivalent form

e^(−6y) dy/dx = −e^(−4x),

which is separable. Integrating both sides of the preceding equation yields

−(1/6)e^(−6y) = (1/4)e^(−4x) + c,

so that

y(x) = −(1/6)ln(c1 − (3/2)e^(−4x)).

Imposing the initial condition y(0) = 0 requires that

0 = ln(c1 − 3/2).

Hence, c1 = 5/2, and so

y(x) = −(1/6)ln[(5 − 3e^(−4x))/2].

40. For the given differential equation we have

My = 4xy = Nx,

so that the differential equation is exact. Consequently, there is a potential function φ satisfying

∂φ/∂x = 3x² + 2xy²,   ∂φ/∂y = 2x²y.

Integrating these two equations in the usual manner yields

φ(x, y) = x²y² + x³,

so that the differential equation can be written as

d(x²y² + x³) = 0,

and therefore has general solution x²y² + x³ = c.


Imposing the initial condition y(1) = 3 yields c = 10. Therefore,

x²y² + x³ = 10,

so that

y² = (10 − x³)/x².

Note that the given differential equation can be written in the equivalent form

dy/dx + (1/x)y = −(3/2)y^(−1),

which is a Bernoulli equation with n = −1. Consequently, the Bernoulli technique could also have been used to solve the differential equation.

41. The given differential equation is linear with integrating factor

I = e^(−∫sin x dx) = e^(cos x).

Multiplying the given differential equation by e^(cos x) reduces it to the integrable form

d/dx(e^(cos x) · y) = 1,

which can be integrated directly to obtain

e^(cos x) · y = x + c.

Hence,

y(x) = e^(−cos x)(x + c).

Imposing the given initial condition y(0) = 1/e requires that c = 1. Consequently,

y(x) = e^(−cos x)(x + 1).

42. (a) For the given differential equation we have

My = my^(m−1),   Nx = −nx^(n−1)y³.

We see that the only values of m and n for which My = Nx are m = n = 0. Consequently, these are the only values of m and n for which the differential equation is exact.

(b) We rewrite the given differential equation in the equivalent form

dy/dx = (x⁵ + y^m)/(x^n y³),   (0.0.23)

from which we see that the differential equation is separable provided m = 0. In this case there are no restrictions on n.

(c) From Equation (0.0.23) we see that the only values of m and n for which the differential equation is first-order homogeneous are m = 5 and n = 2.


(d) We now rewrite the given differential equation in the equivalent form

dy/dx − x^(−n)y^(m−3) = x^(5−n)y^(−3).   (0.0.24)

Due to the y^(−3) term on the right-hand side of the preceding differential equation, it follows that there are no values of m and n for which the equation is linear.

(e) From Equation (0.0.24) we see that the differential equation is a Bernoulli equation whenever m = 4. There are no constraints on n in this case.

43. In Newton's Law of Cooling we have

Tm = 180°F,   T(0) = 80°F,   T(3) = 100°F.

We need to determine the time t0 when T(t0) = 140°F. The temperature of the sandals at time t is governed by the differential equation

dT/dt = −k(T − 180).

This separable differential equation is easily integrated to obtain

T(t) = 180 + ce^(−kt).

Since T(0) = 80 we have

80 = 180 + c ⟹ c = −100.

Hence,

T(t) = 180 − 100e^(−kt).

Imposing the condition T(3) = 100 requires

100 = 180 − 100e^(−3k).

Solving for k we find k = (1/3)ln(5/4). Inserting this value for k into the preceding expression for T(t) yields

T(t) = 180 − 100e^(−(t/3)ln(5/4)).

We need to find t0 such that

140 = 180 − 100e^(−(t0/3)ln(5/4)).

Solving for t0 we find

t0 = 3 ln(5/2)/ln(5/4) ≈ 12.32 min.

44. In Newton's Law of Cooling we have

Tm = 70°F,   T(0) = 150°F,   T(10) = 125°F.

We need to determine the time t0 when T(t0) = 100°F. The temperature of the plate at time t is governed by the differential equation

dT/dt = −k(T − 70).


This separable differential equation is easily integrated to obtain

T(t) = 70 + ce^(−kt).

Since T(0) = 150 we have

150 = 70 + c ⟹ c = 80.

Hence,

T(t) = 70 + 80e^(−kt).

Imposing the condition T(10) = 125 requires

125 = 70 + 80e^(−10k).

Solving for k we find k = (1/10)ln(16/11). Inserting this value for k into the preceding expression for T(t) yields

T(t) = 70 + 80e^(−(t/10)ln(16/11)).

We need to find t0 such that

100 = 70 + 80e^(−(t0/10)ln(16/11)).

Solving for t0 we find

t0 = 10 ln(8/3)/ln(16/11) ≈ 26.18 min.

45. Let T(t) denote the temperature of the object at time t, and let Tm denote the temperature of the surrounding medium. Then we must solve the initial-value problem

dT/dt = k(T − Tm)²,   T(0) = T0,

where k is a constant. The differential equation can be written in separated form as

1/(T − Tm)² dT/dt = k.

Integrating both sides of this differential equation yields

−1/(T − Tm) = kt + c,

so that

T(t) = Tm − 1/(kt + c).

Imposing the initial condition T(0) = T0 we find that

c = 1/(Tm − T0),

which, when substituted back into the preceding expression for T(t), yields

T(t) = Tm − 1/[kt + 1/(Tm − T0)] = Tm − (Tm − T0)/[k(Tm − T0)t + 1].

As t → ∞, T(t) approaches Tm.


46. We are given the differential equation

dT/dt = −k(T − 5 cos 2t)   (0.0.25)

together with the initial conditions

T(0) = 0,   dT/dt(0) = 5.   (0.0.26)

(a) Setting t = 0 in (0.0.25) and using (0.0.26) yields

5 = −k(0 − 5),

so that k = 1.

(b) Substituting k = 1 into the differential equation (0.0.25) and rearranging terms yields

dT/dt + T = 5 cos 2t.

An integrating factor for this linear differential equation is I = e^(∫dt) = e^t. Multiplying the preceding differential equation by e^t reduces it to

d/dt(e^t · T) = 5e^t cos 2t,

which upon integration yields

e^t · T = e^t(cos 2t + 2 sin 2t) + c,

so that

T(t) = ce^(−t) + cos 2t + 2 sin 2t.

Imposing the initial condition T(0) = 0 we find that c = −1. Hence,

T(t) = cos 2t + 2 sin 2t − e^(−t).

(c) For large values of t we have

T(t) ≈ cos 2t + 2 sin 2t,

which can be written in phase-amplitude form as

T(t) ≈ √5 cos(2t − φ),

where tan φ = 2. Consequently, for large t, the temperature is approximately oscillatory with period π and amplitude √5.

47. If we let C(t) denote the number of sandhill cranes in the Platte River valley t days after April 1, then C(t) is governed by the differential equation

dC/dt = −kC   (0.0.27)

together with the auxiliary conditions

C(0) = 500,000;   C(15) = 100,000.   (0.0.28)


Separating the variables in the differential equation (0.0.27) yields

(1/C) dC/dt = −k,

which can be integrated directly to obtain

ln C = −kt + c.

Exponentiation yields

C(t) = c0 e^(−kt).

The initial condition C(0) = 500,000 requires c0 = 500,000, so that

C(t) = 500,000e^(−kt).   (0.0.29)

Imposing the auxiliary condition C(15) = 100,000 yields

100,000 = 500,000e^(−15k).

Taking the natural logarithm of both sides of the preceding equation and simplifying, we find that k = (1/15)ln 5. Substituting this value for k into (0.0.29) gives

C(t) = 500,000e^(−(t/15)ln 5).   (0.0.30)

(a) C(30) = 500,000e^(−2 ln 5) = 500,000 · (1/25) = 20,000 sandhill cranes.

(b) C(35) = 500,000e^(−(35/15)ln 5) ≈ 11,696 sandhill cranes.

(c) We need to determine t0 such that

1000 = 500,000e^(−(t0/15)ln 5),

that is,

e^(−(t0/15)ln 5) = 1/500.

Taking the natural logarithm of both sides of this equation and simplifying yields

t0 = 15 · ln 500/ln 5 ≈ 57.9 days after April 1.
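A short numerical check of this decay model (ours, not part of the text); note that the value 20,000 quoted in part (a) corresponds to t = 30 in this model.

```python
import math

# Exponential decay model from (0.0.30): C(t) = 500000 e^{-(t/15) ln 5}
C = lambda t: 500_000 * math.exp(-(t / 15) * math.log(5))

assert abs(C(0) - 500_000) < 1e-6
assert abs(C(15) - 100_000) < 1e-6      # fitted data point
assert round(C(30)) == 20_000           # value quoted in part (a)
assert round(C(35)) == 11_696           # part (b)

t0 = 15 * math.log(500) / math.log(5)   # part (c): C(t0) = 1000
assert abs(t0 - 57.9) < 0.1
```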

48. Substituting P0 = 200,000 into equation (1.5.3) in the text (page 45) yields

P(t) = 200,000C/[200,000 + (C − 200,000)e^(−rt)].   (0.0.31)

We are given

P(3) = P(t1) = 230,000,   P(6) = P(t2) = 250,000.

Since t2 = 2t1, we can use the formulas (1.5.5), (1.5.6) on page 47 of the text to obtain r and C directly as follows:

r = (1/3)ln[25(23 − 20)/(20(25 − 23))] = (1/3)ln(15/8) ≈ 0.21.


C = 230,000[(23)(45) − (40)(25)]/[(23)² − (20)(25)] ≈ 277,586.

Substituting these values for r and C into (0.0.31) yields

P(t) = 55,517,200,000/[200,000 + 77,586e^(−0.21t)].

Therefore,

P(10) = 55,517,200,000/[200,000 + 77,586e^(−2.1)] ≈ 264,997,

and

P(20) = 55,517,200,000/[200,000 + 77,586e^(−4.2)] ≈ 275,981.

49. The differential equation for determining q(t) is

dq/dt + (5/4)q = (3/2)cos 2t,

which has integrating factor I = e^(∫(5/4)dt) = e^(5t/4). Multiplying the preceding differential equation by e^(5t/4) reduces it to the integrable form

d/dt(e^(5t/4) · q) = (3/2)e^(5t/4) cos 2t.

Integrating and simplifying we find

q(t) = (6/89)(5 cos 2t + 8 sin 2t) + ce^(−5t/4).   (0.0.32)

The initial condition q(0) = 3 requires

3 = 30/89 + c,

so that c = 237/89. Making this replacement in (0.0.32) yields

q(t) = (6/89)(5 cos 2t + 8 sin 2t) + (237/89)e^(−5t/4).

The current in the circuit is

i(t) = dq/dt = (12/89)(8 cos 2t − 5 sin 2t) − (1185/356)e^(−5t/4).

The answer in the text has an incorrect exponent.

50. The current in the circuit is governed by the differential equation

di/dt + 10i = 100/3,

which has integrating factor I = e^(∫10 dt) = e^(10t). Multiplying the preceding differential equation by e^(10t) reduces it to the integrable form

d/dt(e^(10t) · i) = (100/3)e^(10t).


Integrating and simplifying we find

i(t) = 10/3 + ce^(−10t).   (0.0.33)

The initial condition i(0) = 3 requires

3 = 10/3 + c,

so that c = −1/3. Making this replacement in (0.0.33) yields

i(t) = (1/3)(10 − e^(−10t)).

51. We are given:

r1 = 6 L/min, c1 = 3 g/L, r2 = 4 L/min, V(0) = 30 L, A(0) = 0 g,

and we need to determine the amount of salt in the tank when V(t) = 60 L. Consider a small time interval Δt. Using the preceding information we have:

ΔV = 6Δt − 4Δt = 2Δt,

and

ΔA ≈ 18Δt − 4(A/V)Δt.

Dividing both of these equations by Δt and letting Δt → 0 yields

dV/dt = 2,   (0.0.34)

dA/dt + 4A/V = 18.   (0.0.35)

Integrating (0.0.34) and imposing the initial condition V(0) = 30 yields

V(t) = 2(t + 15).   (0.0.36)

We now insert this expression for V(t) into (0.0.35) to obtain

dA/dt + (2/(t + 15))A = 18.

An integrating factor for this differential equation is I = e^(∫2/(t+15) dt) = (t + 15)². Multiplying the preceding differential equation by (t + 15)² reduces it to the integrable form

d/dt[(t + 15)²A] = 18(t + 15)².

Integrating and simplifying we find

A(t) = [6(t + 15)³ + c]/(t + 15)².

Imposing the initial condition A(0) = 0 requires

0 = [6(15)³ + c]/(15)²,


so that c = −20,250. Consequently,

A(t) = [6(t + 15)³ − 20,250]/(t + 15)².

We need to determine the time when the solution overflows. Since the tank can hold 60 L of solution, from (0.0.36) overflow will occur when

60 = 2(t + 15) ⟹ t = 15.

The amount of chemical in the tank at this time is

A(15) = [6(30)³ − 20,250]/(30)² = 157.5 g.

52. Applying Euler's method with y′ = x² + 2y², x0 = 0, y0 = −3, and h = 0.1, we have y_{n+1} = y_n + 0.1(x_n² + 2y_n²). This generates the sequence of approximants given in the table below.

n     x_n    y_n
1     0.1    −1.2
2     0.2    −0.911
3     0.3    −0.74102
4     0.4    −0.62219
5     0.5    −0.52877
6     0.6    −0.44785
7     0.7    −0.37174
8     0.8    −0.29510
9     0.9    −0.21368
10    1.0    −0.12355

Consequently, the Euler approximation to y(1) is y10 = −0.12355.
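The table can be regenerated with a few lines of code (ours, not part of the text), which reproduces the tabulated value of y10.

```python
# Euler's method for y' = f(x, y): y_{n+1} = y_n + h f(x_n, y_n)
def euler(f, x0, y0, h, n):
    x, y = x0, y0
    for _ in range(n):
        y += h * f(x, y)
        x += h
    return y

f = lambda x, y: x**2 + 2 * y**2
y10 = euler(f, 0.0, -3.0, 0.1, 10)
assert abs(y10 - (-0.12355)) < 5e-4   # matches the table above
```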

53. Applying Euler's method with y′ = 3x/y + 2, x0 = 1, y0 = 2, and h = 0.05, we have

y_{n+1} = y_n + 0.05(3x_n/y_n + 2).

This generates the sequence of approximants given in the table below.

n     x_n     y_n
1     1.05    2.17500
2     1.10    2.34741
3     1.15    2.51770
4     1.20    2.68622
5     1.25    2.85323
6     1.30    3.01894
7     1.35    3.18353
8     1.40    3.34714
9     1.45    3.50988
10    1.50    3.67185


Consequently, the Euler approximation to y(1.5) is y10 = 3.67185.

54. Applying the modified Euler method with y′ = x² + 2y², x0 = 0, y0 = −3, and h = 0.1 generates the sequence of approximants given in the table below.

n     x_n    y_n
1     0.1    −1.9555
2     0.2    −1.42906
3     0.3    −1.11499
4     0.4    −0.90466
5     0.5    −0.74976
6     0.6    −0.62555
7     0.7    −0.51778
8     0.8    −0.41723
9     0.9    −0.31719
10    1.0    −0.21196

Consequently, the modified Euler approximation to y(1) is y10 = −0.21196. Comparing this to the corresponding Euler approximation from Problem 52 we have

|yME − yE| = |0.21196 − 0.12355| = 0.08841.
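The modified Euler (Heun) scheme used here can be sketched as follows (our code, not part of the text); it reproduces the tabulated y10.

```python
# Modified Euler (Heun): predict with Euler, correct with the trapezoidal average
def modified_euler(f, x0, y0, h, n):
    x, y = x0, y0
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h, y + h * k1)      # slope at the Euler-predicted point
        y += (h / 2) * (k1 + k2)       # trapezoidal corrector
        x += h
    return y

f = lambda x, y: x**2 + 2 * y**2
y10 = modified_euler(f, 0.0, -3.0, 0.1, 10)
assert abs(y10 - (-0.21196)) < 5e-4    # matches the table above
```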

55. Applying the modified Euler method with y′ = 3x/y + 2, x0 = 1, y0 = 2, and h = 0.05 generates the sequence of approximants given in the table below.

n     x_n     y_n
1     1.05    2.17371
2     1.10    2.34510
3     1.15    2.51457
4     1.20    2.68241
5     1.25    2.84886
6     1.30    3.01411
7     1.35    3.17831
8     1.40    3.34159
9     1.45    3.50404
10    1.50    3.66576

Consequently, the modified Euler approximation to y(1.5) is y10 = 3.66576. Comparing this to the corresponding Euler approximation from Problem 53 we have

|yME − yE| = |3.66576 − 3.67185| = 0.00609.

56. Applying the Runge-Kutta method with y′ = x² + 2y², x0 = 0, y0 = −3, and h = 0.1 generates the sequence of approximants given in the table below.


n     x_n    y_n
1     0.1    −1.87392
2     0.2    −1.36127
3     0.3    −1.06476
4     0.4    −0.86734
5     0.5    −0.72143
6     0.6    −0.60353
7     0.7    −0.50028
8     0.8    −0.40303
9     0.9    −0.30541
10    1.0    −0.20195

Consequently, the Runge-Kutta approximation to y(1) is y10 = −0.20195. Comparing this to the corresponding Euler approximation from Problem 52 we have

|yRK − yE| = |0.20195 − 0.12355| = 0.07840.
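The classical fourth-order Runge-Kutta step can be sketched as follows (our code, not from the text); it reproduces the tabulated y10.

```python
# Classical RK4: weighted average of four slope samples per step
def rk4(f, x0, y0, h, n):
    x, y = x0, y0
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h / 2, y + h * k1 / 2)
        k3 = f(x + h / 2, y + h * k2 / 2)
        k4 = f(x + h, y + h * k3)
        y += (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        x += h
    return y

f = lambda x, y: x**2 + 2 * y**2
y10 = rk4(f, 0.0, -3.0, 0.1, 10)
assert abs(y10 - (-0.20195)) < 5e-4   # matches the table above
```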

57. Applying the Runge-Kutta method with y′ = 3x/y + 2, x0 = 1, y0 = 2, and h = 0.05 generates the sequence of approximants given in the table below.

n     x_n     y_n
1     1.05    2.17369
2     1.10    2.34506
3     1.15    2.51452
4     1.20    2.68235
5     1.25    2.84880
6     1.30    3.01404
7     1.35    3.17823
8     1.40    3.34151
9     1.45    3.50396
10    1.50    3.66568

Consequently, the Runge-Kutta approximation to y(1.5) is y10 = 3.66568. Comparing this to the corresponding Euler approximation from Problem 53 we have

|yRK − yE| = |3.66568 − 3.67185| = 0.00617.

The last digit of the answer in the text needs changing.

Solutions to Section 2.1

True-False Review:

1. TRUE. A diagonal matrix has no entries below the main diagonal, so it is upper triangular. Likewise, it has no entries above the main diagonal, so it is also lower triangular.

2. FALSE. An m× n matrix has m row vectors and n column vectors.

3. TRUE. Since A is symmetric, A = A^T. Thus, (A^T)^T = A = A^T, so A^T is symmetric.


4. FALSE. The trace of a matrix is the sum of the entries along the main diagonal.

5. TRUE. If A is skew-symmetric, then A^T = −A. But A and A^T contain the same entries along the main diagonal, so for A^T = −A, both A and −A must have the same main diagonal. This is only possible if all entries along the main diagonal are 0.

6. TRUE. If A is both symmetric and skew-symmetric, then A = A^T = −A, and A = −A is only possible if all entries of A are zero.

7. TRUE. Both matrix functions are defined for values of t such that t > 0.

8. FALSE. The (3, 2)-entry contains a function that is not defined for values of t with t ≤ 3. So, for example, this matrix function is not defined for t = 2.

9. TRUE. Each numerical entry of the matrix function is a constant function, which has domain R.

10. FALSE. For instance, the matrix functions A(t) = [t] and B(t) = [t²] satisfy A(0) = B(0), but A and B are not the same matrix function.

Problems:

1. a31 = 0, a24 = −1, a14 = 2, a32 = 2, a21 = 7, a34 = 4.

2. [1 5; −1 3]; 2 × 2 matrix.

3. [2 1 −1; 0 4 −2]; 2 × 3 matrix.

4. [−1; 1; 1; −5]; 4 × 1 matrix.

5. [1 −3 −2; 3 6 0; 2 7 4; −4 −1 5]; 4 × 3 matrix.

6. [0 −1 2; 1 0 3; −2 −3 0]; 3 × 3 matrix.

7. tr(A) = 1 + 3 = 4.

8. tr(A) = 1 + 2 + (−3) = 0.

9. tr(A) = 2 + 2 + (−5) = −1.

10. Column vectors: [1; 3], [−1; 5]. Row vectors: [1 −1], [3 5].

11. Column vectors: [1; −1; 2], [3; −2; 6], [−4; 5; 7]. Row vectors: [1 3 −4], [−1 −2 5], [2 6 7].


12. Column vectors: [2; 5], [10; −1], [6; 3]. Row vectors: [2 10 6], [5 −1 3].

13. A = [1 2; 3 4; 5 1]. Column vectors: [1; 3; 5], [2; 4; 1].

14. B = [2 5 0 1; −1 7 0 2; 4 −6 0 3]. Row vectors: [2 5 0 1], [−1 7 0 2], [4 −6 0 3].

15. A = [a1, a2, . . . , ap] has p columns, and each column q-vector has q rows, so the resulting matrix has dimensions q × p.

16. One example: [2 0 0; 0 3 0; 0 0 −1].

17. One example: [2 3 1 2; 0 5 6 2; 0 0 3 5; 0 0 0 1].

18. One example: [1 3 −1 2; −3 0 4 −3; 1 −4 0 1; −2 3 −1 0].

19. One example: [3 0 0; 0 2 0; 0 0 5].

20. The only possibility here is the zero matrix: [0 0 0; 0 0 0; 0 0 0].

21. One example: [1/√(3 − t)  √(t + 2)  0; 0 0 0].

22. One example: [t² − t  0; 0 0; 0 0; 0 0].

23. One example: [t² + 1  1  1  1  1].

24. One example: [1/(t² + 1); 0].

25. One example: Let A and B be 1 × 1 matrix functions given by A(t) = [t] and B(t) = [t²].

26. Let A be a symmetric upper-triangular matrix. Then all elements below the main diagonal are zeros. Consequently, since A is symmetric, all elements above the main diagonal must also be zero. Hence, the


only nonzero entries can occur along the main diagonal. That is, A is a diagonal matrix.

27. Since A is skew-symmetric, a11 = a22 = a33 = 0. Further, a12 = −a21 = −1, a13 = −a31 = −3, and a32 = −a23 = 1. Consequently, A = [0 −1 −3; 1 0 −1; 3 1 0].

Solutions to Section 2.2

True-False Review:

1. FALSE. The correct statement is (AB)C = A(BC), the associative law. A counterexample to the particular statement given in this review item can be found in Problem 7.

2. TRUE. Multiplying from left to right, we note that AB is an m × p matrix, and right multiplying AB by the p × q matrix C, we see that ABC is an m × q matrix.

3. TRUE. We have (A + B)^T = A^T + B^T = A + B, so A + B is symmetric.

4. FALSE. For example, let A = [0 1 0; −1 0 0; 0 0 0] and B = [0 0 3; 0 0 0; −3 0 0]. Then A and B are skew-symmetric, but AB = [0 0 0; 0 0 −3; 0 0 0] is not symmetric.

5. FALSE. The correct equation is (A + B)² = A² + AB + BA + B². The statement is false since AB + BA does not necessarily equal 2AB. For instance, if A = [1 0; 0 0] and B = [0 1; 0 0], then (A + B)² = [1 1; 0 0] and A² + 2AB + B² = [1 2; 0 0] ≠ (A + B)².

6. FALSE. For example, let A = [0 1; 0 0] and B = [1 0; 0 0]. Then AB = 0 even though A ≠ 0 and B ≠ 0.

7. FALSE. For example, let A = [0 0; 1 0] and let B = [0 0; 0 0]. Then A is not upper triangular, despite the fact that AB is the zero matrix, hence automatically upper triangular.

8. FALSE. For instance, the matrix A = [1 0; 0 0] is neither the zero matrix nor the identity matrix, and yet A² = A.

9. TRUE. The derivative of each entry of the matrix is zero, since in each entry we take the derivative of a constant, thus obtaining zero for each entry of the derivative of the matrix.

10. FALSE. The correct statement is given in Problem 41. The problem with the statement as given is that the second term should be (dA/dt)B, not B(dA/dt).

11. FALSE. For instance, the matrix function A = [2e^t 0; 0 3e^t] satisfies dA/dt = A, but A does not have the form [ce^t 0; 0 ce^t].


12. TRUE. This follows by exactly the same proof as given in the text for matrices of numbers (see part 3 of Theorem 2.2.21).

Problems:

1. 2A = [2 4 −2; 6 10 4],  −3B = [−6 3 −9; −3 −12 −15],

A − 2B = [1 2 −1; 3 5 2] − [4 −2 6; 2 8 10] = [−3 4 −7; 1 −3 −8],

3A + 4B = [3 6 −3; 9 15 6] + [8 −4 12; 4 16 20] = [11 2 9; 13 31 26].

2. Solving for D, we have

2A + B − 3C + 2D = A + 4C
2D = −A − B + 7C
D = (1/2)(−A − B + 7C).

When appropriate substitutions are made for A, B, and C, we obtain:

D = [−5 −2.5 2.5; 1.5 6.5 9; −2.5 2.5 −0.5].

3.

AB = [5 10 −3; 27 22 3],  BC = [9; 8; −6],  DC = [10],

DB = [6 14 −4],  CD = [2 −2 3; −2 2 −3; 4 −4 6].

CA and AD cannot be computed.

4.

AB = [2 − i, 1 + i; −i, 2 + 4i][i, 1 − 3i; 0, 4 + i]
= [(2 − i)i + (1 + i)·0, (2 − i)(1 − 3i) + (1 + i)(4 + i); (−i)i + (2 + 4i)·0, (−i)(1 − 3i) + (2 + 4i)(4 + i)]
= [1 + 2i, 2 − 2i; 1, 1 + 17i].


5.

AB = [3 + 2i, 2 − 4i; 5 + i, −1 + 3i][−1 + i, 3 + 2i; 4 − 3i, 1 + i]
= [(3 + 2i)(−1 + i) + (2 − 4i)(4 − 3i), (3 + 2i)(3 + 2i) + (2 − 4i)(1 + i); (5 + i)(−1 + i) + (−1 + 3i)(4 − 3i), (5 + i)(3 + 2i) + (−1 + 3i)(1 + i)]
= [−9 − 21i, 11 + 10i; −1 + 19i, 9 + 15i].

6.

AB = [3 − 2i, i; −i, 1][−1 + i, 2 − i, 0; 1 + 5i, 0, 3 − 2i]
= [(3 − 2i)(−1 + i) + i(1 + 5i), (3 − 2i)(2 − i) + i·0, (3 − 2i)·0 + i(3 − 2i); (−i)(−1 + i) + 1(1 + 5i), (−i)(2 − i) + 1·0, (−i)·0 + 1(3 − 2i)]
= [−6 + 6i, 4 − 7i, 2 + 3i; 2 + 6i, −1 − 2i, 3 − 2i].

7.

ABC = (AB)C = ([1 −1 2 3; −2 3 4 6][3 2; 1 5; 4 −3; −1 6])[−3 2; 1 −4] = [7 9; 7 35][−3 2; 1 −4] = [−12 −22; 14 −126].

CAB = (CA)B = ([−3 2; 1 −4][1 −1 2 3; −2 3 4 6])B = [−7 9 2 3; 9 −13 −14 −21][3 2; 1 5; 4 −3; −1 6] = [−7 43; −21 −131].

8.

(2A − 3B)C = (2[1 −2; 3 1] − 3[−1 2; 5 3])[3; −1] = [5 −10; −9 −7][3; −1] = [25; −20].

9.

Ac = [1 3; −5 4][6; −2] = 6[1; −5] + (−2)[3; 4] = [0; −38].

10.

Ac = [3 −1 4; 2 1 5; 7 −6 3][2; 3; −4] = 2[3; 2; 7] + 3[−1; 1; −6] + (−4)[4; 5; 3] = [−13; −13; −16].

11.

Ac = [−1 2; 4 7; 5 −4][5; −1] = 5[−1; 4; 5] + (−1)[2; 7; −4] = [−7; 13; 29].


12. The dimensions of B should be n × r in order that ABC is defined. The elements of the ith row of A are ai1, ai2, . . . , ain, and the elements of the jth column of BC are

Σ_{m=1}^{r} b1m cmj,  Σ_{m=1}^{r} b2m cmj,  . . . ,  Σ_{m=1}^{r} bnm cmj,

so the element in the ith row and jth column of ABC = A(BC) is

ai1 Σ_{m=1}^{r} b1m cmj + ai2 Σ_{m=1}^{r} b2m cmj + · · · + ain Σ_{m=1}^{r} bnm cmj
= Σ_{k=1}^{n} aik ( Σ_{m=1}^{r} bkm cmj ) = Σ_{m=1}^{r} ( Σ_{k=1}^{n} aik bkm ) cmj.

13. (a):

A^2 = AA = [1 −1; 2 3][1 −1; 2 3] = [−1 −4; 8 7].
A^3 = A^2 A = [−1 −4; 8 7][1 −1; 2 3] = [−9 −11; 22 13].
A^4 = A^3 A = [−9 −11; 22 13][1 −1; 2 3] = [−31 −24; 48 17].

(b):

A^2 = AA = [0 1 0; −2 0 1; 4 −1 0][0 1 0; −2 0 1; 4 −1 0] = [−2 0 1; 4 −3 0; 2 4 −1].
A^3 = A^2 A = [−2 0 1; 4 −3 0; 2 4 −1][0 1 0; −2 0 1; 4 −1 0] = [4 −3 0; 6 4 −3; −12 3 4].
A^4 = A^3 A = [4 −3 0; 6 4 −3; −12 3 4][0 1 0; −2 0 1; 4 −1 0] = [6 4 −3; −20 9 4; 10 −16 3].
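The powers computed in Problem 13(a) can be spot-checked numerically. This is an illustrative sketch (not part of the text), assuming Python with NumPy is available:

```python
import numpy as np

# Matrix of Problem 13(a) and its successive powers.
A = np.array([[1, -1], [2, 3]])
A2 = A @ A   # [[-1, -4], [8, 7]]
A3 = A2 @ A  # [[-9, -11], [22, 13]]
A4 = A3 @ A  # [[-31, -24], [48, 17]]
```

The same three lines verify part (b) if A is replaced by the 3 × 3 matrix given there.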

14. (a): (A + B)^2 = (A + B)(A + B) = A(A + B) + B(A + B) = A^2 + AB + BA + B^2.

(b): (A − B)^2 = (A + (−1)B)^2 = A^2 + A(−1)B + (−1)BA + [(−1)B]^2 by part (a) = A^2 − AB − BA + B^2.

15.

A^2 − 2A − 8I2 = [14 −2; −10 6] − 2[3 −1; −5 −1] − 8[1 0; 0 1] = [14 −2; −10 6] + [−6 2; 10 2] + [−8 0; 0 −8] = 0_2.


16.

A^2 = [1 0 0; 0 1 0; 0 0 1] − [0 −1 0; 0 0 −1; 0 0 0] = [1 1 0; 0 1 1; 0 0 1].

Substituting A = [1 x z; 0 1 y; 0 0 1] for A, we have

[1 x z; 0 1 y; 0 0 1][1 x z; 0 1 y; 0 0 1] = [1 1 0; 0 1 1; 0 0 1],

that is,

[1 2x 2z + xy; 0 1 2y; 0 0 1] = [1 1 0; 0 1 1; 0 0 1].

Since corresponding elements of equal matrices are equal, we obtain the following implications:

2y = 1 ⇒ y = 1/2,
2x = 1 ⇒ x = 1/2,
2z + xy = 0 ⇒ 2z + (1/2)(1/2) = 0 ⇒ z = −1/8.

Thus, A = [1 1/2 −1/8; 0 1 1/2; 0 0 1].
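The values x = 1/2, y = 1/2, z = −1/8 found above can be checked by squaring the resulting matrix — a quick sketch assuming NumPy:

```python
import numpy as np

# A with x = 1/2, y = 1/2, z = -1/8, and the target matrix A^2 from Problem 16.
A = np.array([[1, 0.5, -0.125],
              [0, 1,   0.5],
              [0, 0,   1]])
target = np.array([[1, 1, 0],
                   [0, 1, 1],
                   [0, 0, 1]])
```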

17. In order that A^2 = A, we require [x 1; −2 y][x 1; −2 y] = [x 1; −2 y], that is,

[x^2 − 2, x + y; −2x − 2y, −2 + y^2] = [x 1; −2 y], or equivalently, [x^2 − x − 2, x + y − 1; −2x − 2y + 2, y^2 − y − 2] = 0_2.

Since corresponding elements of equal matrices are equal, it follows that

x^2 − x − 2 = 0 ⇒ x = −1 or x = 2, and
y^2 − y − 2 = 0 ⇒ y = −1 or y = 2.

Two cases arise from x + y − 1 = 0: (a): If x = −1, then y = 2. (b): If x = 2, then y = −1. Thus,

A = [−1 1; −2 2] or A = [2 1; −2 −1].

18.

σ1σ2 = [0 1; 1 0][0 −i; i 0] = [i 0; 0 −i] = i[1 0; 0 −1] = iσ3.

σ2σ3 = [0 −i; i 0][1 0; 0 −1] = [0 i; i 0] = i[0 1; 1 0] = iσ1.

σ3σ1 = [1 0; 0 −1][0 1; 1 0] = [0 1; −1 0] = i[0 −i; i 0] = iσ2.


19. [A, B] = AB − BA = [1 −1; 2 1][3 1; 4 2] − [3 1; 4 2][1 −1; 2 1] = [−1 −1; 10 4] − [5 −2; 8 −2] = [−6 1; 2 6] ≠ 0_2.
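The commutator calculation of Problem 19 can be packaged as a small helper — a sketch assuming NumPy (the function name `comm` is our own choice, not the book's):

```python
import numpy as np

def comm(X, Y):
    """Matrix commutator [X, Y] = XY - YX."""
    return X @ Y - Y @ X

A = np.array([[1, -1], [2, 1]])
B = np.array([[3, 1], [4, 2]])
# comm(A, B) -> [[-6, 1], [2, 6]], a nonzero matrix, so A and B do not commute.
```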

20. [A1, A2] = A1A2 − A2A1 = [1 0; 0 1][0 1; 0 0] − [0 1; 0 0][1 0; 0 1] = [0 1; 0 0] − [0 1; 0 0] = 0_2, thus A1 and A2 commute.

[A1, A3] = A1A3 − A3A1 = [1 0; 0 1][0 0; 1 0] − [0 0; 1 0][1 0; 0 1] = [0 0; 1 0] − [0 0; 1 0] = 0_2, thus A1 and A3 commute.

[A2, A3] = A2A3 − A3A2 = [0 1; 0 0][0 0; 1 0] − [0 0; 1 0][0 1; 0 0] = [1 0; 0 0] − [0 0; 0 1] = [1 0; 0 −1] ≠ 0_2.

Then [A3, A2] = −[A2, A3] = [−1 0; 0 1] ≠ 0_2. Thus, A2 and A3 do not commute.

21. [A1, A2] = A1A2 − A2A1 = (1/4)[0 i; i 0][0 −1; 1 0] − (1/4)[0 −1; 1 0][0 i; i 0] = (1/4)[i 0; 0 −i] − (1/4)[−i 0; 0 i] = (1/4)[2i 0; 0 −2i] = (1/2)[i 0; 0 −i] = A3.

[A2, A3] = A2A3 − A3A2 = (1/4)[0 −1; 1 0][i 0; 0 −i] − (1/4)[i 0; 0 −i][0 −1; 1 0] = (1/4)[0 i; i 0] − (1/4)[0 −i; −i 0] = (1/4)[0 2i; 2i 0] = (1/2)[0 i; i 0] = A1.


[A3, A1] = A3A1 − A1A3 = (1/4)[i 0; 0 −i][0 i; i 0] − (1/4)[0 i; i 0][i 0; 0 −i] = (1/4)[0 −1; 1 0] − (1/4)[0 1; −1 0] = (1/4)[0 −2; 2 0] = (1/2)[0 −1; 1 0] = A2.

22.

[A, [B, C]] + [B, [C, A]] + [C, [A, B]]
= [A, BC − CB] + [B, CA − AC] + [C, AB − BA]
= A(BC − CB) − (BC − CB)A + B(CA − AC) − (CA − AC)B + C(AB − BA) − (AB − BA)C
= ABC − ACB − BCA + CBA + BCA − BAC − CAB + ACB + CAB − CBA − ABC + BAC = 0.

23. Proof that A(BC) = (AB)C: Let A = [aij] be of size m × n, B = [bjk] be of size n × p, and C = [ckl] be of size p × q. Consider the (i, j)-element of (AB)C:

[(AB)C]ij = Σ_{k=1}^{p} ( Σ_{h=1}^{n} aih bhk ) ckj = Σ_{h=1}^{n} aih ( Σ_{k=1}^{p} bhk ckj ) = [A(BC)]ij.

Proof that A(B + C) = AB + AC: We have

[A(B + C)]ij = Σ_{k=1}^{n} aik(bkj + ckj) = Σ_{k=1}^{n} (aik bkj + aik ckj) = Σ_{k=1}^{n} aik bkj + Σ_{k=1}^{n} aik ckj = [AB + AC]ij.

24. Proof that (A^T)^T = A: Let A = [aij]. Then A^T = [aji], so (A^T)^T = [aji]^T = [aij] = A, as needed. Proof that (A + C)^T = A^T + C^T: Let A = [aij] and C = [cij]. Then [(A + C)^T]ij = [A + C]ji = [A]ji + [C]ji = aji + cji = [A^T]ij + [C^T]ij = [A^T + C^T]ij. Hence, (A + C)^T = A^T + C^T.

25. We have

(I_m A)ij = Σ_{k=1}^{m} δik akj = δii aij = aij,

for 1 ≤ i ≤ m and 1 ≤ j ≤ p. Thus, I_m A_{m×p} = A_{m×p}.

26. Let A = [aij] and B = [bij] be n × n matrices. Then

tr(AB) = Σ_{k=1}^{n} ( Σ_{i=1}^{n} aki bik ) = Σ_{k=1}^{n} ( Σ_{i=1}^{n} bik aki ) = Σ_{i=1}^{n} ( Σ_{k=1}^{n} bik aki ) = tr(BA).
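The identity tr(AB) = tr(BA) proved above is easy to test on arbitrary square matrices — an illustrative check assuming NumPy:

```python
import numpy as np

# Random integer matrices; the trace identity holds for any square pair.
rng = np.random.default_rng(0)
A = rng.integers(-5, 5, size=(4, 4))
B = rng.integers(-5, 5, size=(4, 4))
# np.trace(A @ B) and np.trace(B @ A) agree, although AB != BA in general.
```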


27.

A^T = [1 2 3; −1 0 4; 1 2 −1; 4 −3 0],  B^T = [0 −1 1 2; 1 2 1 1],

AA^T = [1 −1 1 4; 2 0 2 −3; 3 4 −1 0][1 2 3; −1 0 4; 1 2 −1; 4 −3 0] = [19 −8 −2; −8 17 4; −2 4 26],

AB = [1 −1 1 4; 2 0 2 −3; 3 4 −1 0][0 1; −1 2; 1 1; 2 1] = [10 4; −4 1; −5 10],

B^T A^T = [0 −1 1 2; 1 2 1 1][1 2 3; −1 0 4; 1 2 −1; 4 −3 0] = [10 −4 −5; 4 1 10].

28. (a): We have

S = [s1, s2, s3] = [−x −y z; 0 y 2z; x −y z],

so

AS = [2 2 1; 2 5 2; 1 2 2][−x −y z; 0 y 2z; x −y z] = [−x −y 7z; 0 y 14z; x −y 7z] = [s1, s2, 7s3].

(b):

S^T AS = S^T (AS) = [−x 0 x; −y y −y; z 2z z][−x −y 7z; 0 y 14z; x −y 7z] = [2x^2 0 0; 0 3y^2 0; 0 0 42z^2],

but S^T AS = diag(1, 1, 7), so we have the following:

2x^2 = 1 ⇒ x = ±√2/2,
3y^2 = 1 ⇒ y = ±√3/3,
6z^2 = 1 ⇒ z = ±√6/6.

29.

(a): [2 0 0 0; 0 2 0 0; 0 0 2 0; 0 0 0 2].

(b): [7 0 0; 0 7 0; 0 0 7].

30. Suppose A is an n × n scalar matrix with trace k. If A = aIn, then tr(A) = na = k, so we conclude that a = k/n. So A = (k/n)In, a uniquely determined matrix.

31. We have

S^T = [(1/2)(A + A^T)]^T = (1/2)(A + A^T)^T = (1/2)(A^T + A) = S

and

T^T = [(1/2)(A − A^T)]^T = (1/2)(A^T − A) = −(1/2)(A − A^T) = −T.

Thus, S is symmetric and T is skew-symmetric.

32.

S = (1/2)([1 −5 3; 3 2 4; 7 −2 6] + [1 3 7; −5 2 −2; 3 4 6]) = (1/2)[2 −2 10; −2 4 2; 10 2 12] = [1 −1 5; −1 2 1; 5 1 6].

T = (1/2)([1 −5 3; 3 2 4; 7 −2 6] − [1 3 7; −5 2 −2; 3 4 6]) = (1/2)[0 −8 −4; 8 0 6; 4 −6 0] = [0 −4 −2; 4 0 3; 2 −3 0].

33. If A is an n × n symmetric matrix, then A^T = A, so it follows that

T = (1/2)(A − A^T) = (1/2)(A − A) = 0_n.

If A is an n × n skew-symmetric matrix, then A^T = −A and it follows that

S = (1/2)(A + A^T) = (1/2)(A + (−A)) = 0_n.

34. Let A be any n × n matrix. Then

A = (1/2)(2A) = (1/2)(A + A^T + A − A^T) = (1/2)(A + A^T) + (1/2)(A − A^T),

a sum of a symmetric and a skew-symmetric matrix, respectively, by Problem 31.
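The decomposition of Problems 31, 32, and 34 can be carried out numerically — a sketch assuming NumPy, using the matrix of Problem 32:

```python
import numpy as np

A = np.array([[1, -5, 3], [3, 2, 4], [7, -2, 6]], dtype=float)
S = (A + A.T) / 2   # symmetric part
T = (A - A.T) / 2   # skew-symmetric part
# S == S^T, T == -T^T, and S + T recovers A.
```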

35. If A = [aij] and D = diag(d1, d2, . . . , dn), then we must show that the (i, j)-entry of DA is di aij. In index notation, we have

(DA)ij = Σ_{k=1}^{n} di δik akj = di δii aij = di aij.

Hence, DA is the matrix obtained by multiplying the ith row vector of A by di, where 1 ≤ i ≤ n.

36. (a): We have (AA^T)^T = (A^T)^T A^T = AA^T, so that AA^T is symmetric. (b): We have (ABC)^T = [(AB)C]^T = C^T (AB)^T = C^T (B^T A^T) = C^T B^T A^T, as needed.

37. A′(t) = [−2e^(−2t); cos t].


38. A′(t) = [1 cos t; −sin t 4].

39. A′(t) = [e^t 2e^(2t) 2t; 2e^t 8e^(2t) 10t].

40. A′(t) = [cos t −sin t 0; sin t cos t 1; 0 3 0].

41. We show that the (i, j)-entries of both sides of the equation agree. First, recall that the (i, j)-entry of AB is Σ_{k=1}^{n} aik bkj, and therefore, the (i, j)-entry of (d/dt)(AB) is (by the product rule)

Σ_{k=1}^{n} (a′ik bkj + aik b′kj) = Σ_{k=1}^{n} a′ik bkj + Σ_{k=1}^{n} aik b′kj.

The former term is precisely the (i, j)-entry of the matrix (dA/dt)B, while the latter term is precisely the (i, j)-entry of the matrix A(dB/dt). Thus, the (i, j)-entry of (d/dt)(AB) is precisely the sum of the (i, j)-entry of (dA/dt)B and the (i, j)-entry of A(dB/dt), and the equation we are proving follows immediately.

42. We have

∫_0^{π/2} [cos t; sin t] dt = [sin t; −cos t] |_0^{π/2} = [sin(π/2); −cos(π/2)] − [sin 0; −cos 0] = [1; 0] − [0; −1] = [1; 1].

43. We have

∫_0^1 [e^t, e^(−t); 2e^t, 5e^(−t)] dt = [e^t, −e^(−t); 2e^t, −5e^(−t)] |_0^1 = [e, −1/e; 2e, −5/e] − [1, −1; 2, −5] = [e − 1, 1 − 1/e; 2e − 2, 5 − 5/e].

44. We have

∫_0^1 [e^(2t), sin 2t; t^2 − 5, t e^t; sec^2 t, 3t − sin t] dt = [(1/2)e^(2t), −(1/2)cos 2t; t^3/3 − 5t, t e^t − e^t; tan t, (3/2)t^2 + cos t] |_0^1

= [e^2/2, −(cos 2)/2; −14/3, 0; tan 1, 3/2 + cos 1] − [1/2, −1/2; 0, −1; 0, 1]

= [(e^2 − 1)/2, (1 − cos 2)/2; −14/3, 1; tan 1, 1/2 + cos 1].

45. We have

∫_0^1 [e^t, e^(2t), t^2; 2e^t, 4e^(2t), 5t^2] dt = [e^t, (1/2)e^(2t), t^3/3; 2e^t, 2e^(2t), (5/3)t^3] |_0^1 = [e, e^2/2, 1/3; 2e, 2e^2, 5/3] − [1, 1/2, 0; 2, 2, 0] = [e − 1, (e^2 − 1)/2, 1/3; 2e − 2, 2e^2 − 2, 5/3].

46. ∫ [2t; 3t^2] dt = [t^2; t^3].


47. ∫ [sin t, cos t, 0; −cos t, sin t, t; 0, 3t, 1] dt = [−cos t, sin t, 0; −sin t, −cos t, t^2/2; 0, 3t^2/2, t].

48. ∫ [e^t, e^(−t); 2e^t, 5e^(−t)] dt = [e^t, −e^(−t); 2e^t, −5e^(−t)].

49. ∫ [e^(2t), sin 2t; t^2 − 5, t e^t; sec^2 t, 3t − sin t] dt = [(1/2)e^(2t), −(1/2)cos 2t; t^3/3 − 5t, t e^t − e^t; tan t, (3/2)t^2 + cos t].

Solutions to Section 2.3

True-False Review:

1. FALSE. The last column of the augmented matrix corresponds to the constants on the right-hand side of the linear system, so if the augmented matrix has n columns, there are only n − 1 unknowns under consideration in the system.

2. FALSE. Three distinct planes can intersect in a line (e.g. Figure 2.3.1, lower right picture). For instance, the xy-plane, the xz-plane, and the plane y = z intersect in the x-axis.

3. FALSE. The right-hand side vector must have m components, not n components.

4. TRUE. If a linear system has two distinct solutions x1 and x2, then any point on the line containing x1

and x2 is also a solution, giving us infinitely many solutions, not exactly two solutions.

5. TRUE. The augmented matrix for a linear system has one additional column (containing the constants on the right-hand side of the equations) beyond the matrix of coefficients.

6. FALSE. For instance, if A = [0 1; 0 0], then solutions to Ax = 0 take the form [t; 0], while solutions to A^T x = 0 take the form [0; t]. The solution sets are not the same.

Problems:

1.
2 · 1 − 3(−1) + 4 · 2 = 13,
1 + (−1) − 2 = −2,
5 · 1 + 4(−1) + 2 = 3.

2.
2 + (−3) − 2 · 1 = −3,
3 · 2 − (−3) − 7 · 1 = 2,
2 + (−3) + 1 = 0,
2 · 2 + 2(−3) − 4 · 1 = −6.

3.
(1 − t) + (2 + 3t) + (3 − 2t) = 6,
(1 − t) − (2 + 3t) − 2(3 − 2t) = −7,
5(1 − t) + (2 + 3t) − (3 − 2t) = 4.

4.
s + (s − 2t) − (2s + 3t) + 5t = 0,
2(s − 2t) − (2s + 3t) + 7t = 0,
4s + 2(s − 2t) − 3(2s + 3t) + 13t = 0.

5. The lines 2x + 3y = 1 and 2x + 3y = 2 are parallel in the xy-plane, both with slope −2/3; thus, the system has no solution.

6. A = [1 2 −3; 2 4 −5; 7 2 −1], b = [1; 2; 3], A# = [1 2 −3 | 1; 2 4 −5 | 2; 7 2 −1 | 3].

7. A = [1 1 1 −1; 2 4 −3 7], b = [3; 2], A# = [1 1 1 −1 | 3; 2 4 −3 7 | 2].

8. A = [1 2 −1; 2 3 −2; 5 6 −5], b = [0; 0; 0], A# = [1 2 −1 | 0; 2 3 −2 | 0; 5 6 −5 | 0].

9. It is acceptable to use any variable names. We will use x1, x2, x3, x4:

x1 − x2 + 2x3 + 3x4 = 1,
x1 + x2 − 2x3 + 6x4 = −1,
3x1 + x2 + 4x3 + 2x4 = 2.

10. It is acceptable to use any variable names. We will use x1, x2, x3:

2x1 + x2 + 3x3 = 3,
4x1 − x2 + 2x3 = 1,
7x1 + 6x2 + 3x3 = −5.

11. Given Ax = 0 and Ay = 0, and an arbitrary constant c,

(a): we have
Az = A(x + y) = Ax + Ay = 0 + 0 = 0
and
Aw = A(cx) = c(Ax) = c0 = 0.

(b): No, because
A(x + y) = Ax + Ay = b + b = 2b ≠ b,
and
A(cx) = c(Ax) = cb ≠ b
in general.

12. [x1′; x2′] = [−4 3; 6 −4][x1; x2] + [4t; t^2].


13. [x1′; x2′] = [t^2 −t; −sin t 1][x1; x2].

14. [x1′; x2′] = [0 e^(2t); −sin t 0][x1; x2] + [0; 1].

15. [x1′; x2′; x3′] = [0 −sin t 1; −e^t 0 t^2; −t t^2 0][x1; x2; x3] + [t; t^3; 1].

16. We have

x′(t) = [4e^(4t); −2(4e^(4t))] = [4e^(4t); −8e^(4t)]

and

Ax + b = [2 −1; −2 3][e^(4t); −2e^(4t)] + [0; 0] = [2e^(4t) + (−1)(−2e^(4t)) + 0; −2e^(4t) + 3(−2e^(4t)) + 0] = [4e^(4t); −8e^(4t)].

17. We have

x′(t) = [4(−2e^(−2t)) + 2 cos t; 3(−2e^(−2t)) + sin t] = [−8e^(−2t) + 2 cos t; −6e^(−2t) + sin t]

and

Ax + b = [1 −4; −3 2][4e^(−2t) + 2 sin t; 3e^(−2t) − cos t] + [−2(cos t + sin t); 7 sin t + 2 cos t]
= [4e^(−2t) + 2 sin t − 4(3e^(−2t) − cos t) − 2(cos t + sin t); −3(4e^(−2t) + 2 sin t) + 2(3e^(−2t) − cos t) + 7 sin t + 2 cos t]
= [−8e^(−2t) + 2 cos t; −6e^(−2t) + sin t].

Solutions to Section 2.4

True-False Review:

1. TRUE. The precise row-echelon form obtained for a matrix depends on the particular elementary row operations (and their order). However, Theorem 2.4.15 states that there is a unique reduced row-echelon form for a matrix.

2. FALSE. Upper triangular matrices could have pivot entries that are not 1. For instance, the following matrix is upper triangular, but not in row-echelon form: [2 0; 0 0].

3. TRUE. The pivots in a row-echelon form of an n × n matrix must move down and to the right as we look from one row to the next beneath it. Thus, the pivots must occur on or to the right of the main diagonal of the matrix, and thus all entries below the main diagonal of the matrix are zero.

4. FALSE. This would not be true, for example, if A were a zero matrix with 5 rows and B a nonzero matrix with 4 rows.

5. FALSE. If A is a nonzero matrix and B = −A, then A + B = 0, so rank(A + B) = 0, but rank(A), rank(B) ≥ 1, so rank(A) + rank(B) ≥ 2.

6. FALSE. For example, if A = B = [0 1; 0 0], then AB = 0, so rank(AB) = 0, but rank(A) + rank(B) = 1 + 1 = 2.


7. TRUE. A matrix of rank zero cannot have any pivots, hence no nonzero rows. It must be the zero matrix.

8. TRUE. The matrices A and 2A have the same reduced row-echelon form, since we can move between the two matrices by multiplying the rows of one of them by 2 or 1/2, a matter of carrying out elementary row operations. If the two matrices have the same reduced row-echelon form, then they have the same rank.

9. TRUE. The matrices A and 2A have the same reduced row-echelon form, since we can move between the two matrices by multiplying the rows of one of them by 2 or 1/2, a matter of carrying out elementary row operations.

Problems:

1. Row-echelon form.

2. Neither.

3. Reduced row-echelon form.

4. Neither.

5. Reduced row-echelon form.

6. Row-echelon form.

7. Reduced row-echelon form.

8. Reduced row-echelon form.

9. [2 1; 1 −3] ∼ [1 −3; 2 1] ∼ [1 −3; 0 7] ∼ [1 −3; 0 1]. Rank(A) = 2.

1. P12  2. A12(−2)  3. M2(1/7)

10. [2 −4; −4 8] ∼ [1 −2; −4 8] ∼ [1 −2; 0 0]. Rank(A) = 1.

1. M1(1/2)  2. A12(4)

11. [2 1 4; 2 −3 4; 3 −2 6] ∼ [3 −2 6; 2 −3 4; 2 1 4] ∼ [1 1 2; 2 −3 4; 2 1 4] ∼ [1 1 2; 0 −5 0; 0 −1 0] ∼ [1 1 2; 0 −1 0; 0 −5 0] ∼ [1 1 2; 0 1 0; 0 −5 0] ∼ [1 1 2; 0 1 0; 0 0 0]. Rank(A) = 2.

1. P13  2. A21(−1)  3. A12(−2), A13(−2)  4. P23  5. M2(−1)  6. A23(5)

12. [0 1 3; 0 1 4; 0 3 5] ∼ [0 1 3; 0 0 1; 0 0 −4] ∼ [0 1 3; 0 0 1; 0 0 0]. Rank(A) = 2.

1. A12(−1), A13(−3)  2. A23(4)

13. [2 −1; 3 2; 2 5] ∼ [3 2; 2 −1; 2 5] ∼ [1 3; 2 −1; 2 5] ∼ [1 3; 0 −7; 0 −1] ∼ [1 3; 0 −1; 0 −7] ∼ [1 3; 0 1; 0 0]. Rank(A) = 2.

1. P12  2. A21(−1)  3. A12(−2), A13(−2)  4. P23  5. M2(−1), A23(7)

14. [2 −1 3; 3 1 −2; 2 −2 1] ∼ [3 1 −2; 2 −1 3; 2 −2 1] ∼ [1 2 −5; 2 −1 3; 0 −1 −2] ∼ [1 2 −5; 0 −5 13; 0 −1 −2] ∼ [1 2 −5; 0 −1 −2; 0 −5 13] ∼ [1 2 −5; 0 1 2; 0 −5 13] ∼ [1 2 −5; 0 1 2; 0 0 23] ∼ [1 2 −5; 0 1 2; 0 0 1]. Rank(A) = 3.

1. P12  2. A21(−1), A23(−1)  3. A12(−2)  4. P23  5. M2(−1)  6. A23(5)  7. M3(1/23)

15. [2 −1 3 4; 1 −2 1 3; 1 −5 0 5] ∼ [1 −2 1 3; 2 −1 3 4; 1 −5 0 5] ∼ [1 −2 1 3; 0 3 1 −2; 0 −3 −1 2] ∼ [1 −2 1 3; 0 3 1 −2; 0 0 0 0] ∼ [1 −2 1 3; 0 1 1/3 −2/3; 0 0 0 0]. Rank(A) = 2.

1. P12  2. A12(−2), A13(−1)  3. A23(1)  4. M2(1/3)

16. [2 −2 −1 3; 3 −2 3 1; 1 −1 1 0; 2 −1 2 2] ∼ [1 −1 1 0; 3 −2 3 1; 2 −2 −1 3; 2 −1 2 2] ∼ [1 −1 1 0; 0 1 0 1; 0 0 −3 3; 0 1 0 2] ∼ [1 −1 1 0; 0 1 0 1; 0 0 −3 3; 0 0 0 1] ∼ [1 −1 1 0; 0 1 0 1; 0 0 1 −1; 0 0 0 1]. Rank(A) = 4.

1. P13  2. A12(−3), A13(−2), A14(−2)  3. A24(−1)  4. M3(−1/3)

17. [4 7 4 7; 3 5 3 5; 2 −2 2 −2; 5 −2 5 −2] ∼ [1 2 1 2; 3 5 3 5; 2 −2 2 −2; 5 −2 5 −2] ∼ [1 2 1 2; 0 −1 0 −1; 0 −6 0 −6; 0 −12 0 −12] ∼ [1 2 1 2; 0 1 0 1; 0 −6 0 −6; 0 −12 0 −12] ∼ [1 2 1 2; 0 1 0 1; 0 0 0 0; 0 0 0 0]. Rank(A) = 2.

1. A21(−1)  2. A12(−3), A13(−2), A14(−5)  3. M2(−1)  4. A23(6), A24(12)

18. [2 1 3 4 2; 1 0 2 1 3; 2 3 1 5 7] ∼ [1 0 2 1 3; 2 1 3 4 2; 2 3 1 5 7] ∼ [1 0 2 1 3; 0 1 −1 2 −4; 0 3 −3 3 1] ∼ [1 0 2 1 3; 0 1 −1 2 −4; 0 0 0 −3 13] ∼ [1 0 2 1 3; 0 1 −1 2 −4; 0 0 0 1 −13/3]. Rank(A) = 3.

1. P12  2. A12(−2), A13(−2)  3. A23(−3)  4. M3(−1/3)

19. [3 2; 1 −1] ∼ [1 −1; 3 2] ∼ [1 −1; 0 5] ∼ [1 −1; 0 1] ∼ [1 0; 0 1] = I2. Rank(A) = 2.

1. P12  2. A12(−3)  3. M2(1/5)  4. A21(1)

20. [3 7 10; 2 3 −1; 1 2 1] ∼ [1 2 1; 2 3 −1; 3 7 10] ∼ [1 2 1; 0 −1 −3; 0 1 7] ∼ [1 2 1; 0 1 3; 0 1 7] ∼ [1 0 −5; 0 1 3; 0 0 4] ∼ [1 0 −5; 0 1 3; 0 0 1] ∼ [1 0 0; 0 1 0; 0 0 1] = I3. Rank(A) = 3.

1. P13  2. A12(−2), A13(−3)  3. M2(−1)  4. A21(−2), A23(−1)  5. M3(1/4)  6. A31(5), A32(−3)

21. [3 −3 6; 2 −2 4; 6 −6 12] ∼ [1 −1 2; 0 0 0; 0 0 0]. Rank(A) = 1.

1. M1(1/3), A12(−2), A13(−6)
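The ranks obtained by row reduction in Problems 19-21 can be cross-checked numerically — an illustrative sketch assuming NumPy:

```python
import numpy as np

# Matrices of Problems 19, 20, and 21 above.
r19 = np.linalg.matrix_rank(np.array([[3, 2], [1, -1]]))
r20 = np.linalg.matrix_rank(np.array([[3, 7, 10], [2, 3, -1], [1, 2, 1]]))
r21 = np.linalg.matrix_rank(np.array([[3, -3, 6], [2, -2, 4], [6, -6, 12]]))
# (r19, r20, r21) -> (2, 3, 1), matching the row-echelon computations.
```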

22. [3 5 −12; 2 3 −7; −2 −1 1] ∼ [1 2 −5; 0 −1 3; 0 3 −9] ∼ [1 2 −5; 0 1 −3; 0 3 −9] ∼ [1 0 1; 0 1 −3; 0 0 0]. Rank(A) = 2.

1. A21(−1), A12(−2), A13(2)  2. M2(−1)  3. A21(−2), A23(−3)

23. [1 −1 −1 2; 3 −2 0 7; 2 −1 2 4; 4 −2 3 8] ∼ [1 −1 −1 2; 0 1 3 1; 0 1 4 0; 0 2 7 0] ∼ [1 0 2 3; 0 1 3 1; 0 0 1 −1; 0 0 1 −2] ∼ [1 0 0 5; 0 1 0 4; 0 0 1 −1; 0 0 0 −1] ∼ [1 0 0 5; 0 1 0 4; 0 0 1 −1; 0 0 0 1] ∼ [1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 1] = I4. Rank(A) = 4.

1. A12(−3), A13(−2), A14(−4)  2. A21(1), A23(−1), A24(−2)  3. A31(−2), A32(−3), A34(−1)
4. M4(−1)  5. A41(−5), A42(−4), A43(1)

24. [1 −2 1 3; 3 −6 2 7; 4 −8 3 10] ∼ [1 −2 1 3; 0 0 −1 −2; 0 0 −1 −2] ∼ [1 −2 1 3; 0 0 1 2; 0 0 −1 −2] ∼ [1 −2 0 1; 0 0 1 2; 0 0 0 0]. Rank(A) = 2.

1. A12(−3), A13(−4)  2. M2(−1)  3. A21(−1), A23(1)

25. [0 1 2 1; 0 3 1 2; 0 2 0 1] ∼ [0 1 2 1; 0 0 −5 −1; 0 0 −4 −1] ∼ [0 1 2 1; 0 0 1 1/5; 0 0 −4 −1] ∼ [0 1 0 3/5; 0 0 1 1/5; 0 0 0 −1/5] ∼ [0 1 0 3/5; 0 0 1 1/5; 0 0 0 1] ∼ [0 1 0 0; 0 0 1 0; 0 0 0 1]. Rank(A) = 3.

1. A12(−3), A13(−2)  2. M2(−1/5)  3. A21(−2), A23(4)  4. M3(−5)  5. A31(−3/5), A32(−1/5)

Solutions to Section 2.5

True-False Review:

1. FALSE. This process is known as Gaussian elimination. Gauss-Jordan elimination is the process by which a matrix is brought to reduced row-echelon form via elementary row operations.

2. TRUE. A homogeneous linear system always has the trivial solution x = 0, hence it is consistent.

3. TRUE. The columns of the row-echelon form that contain leading 1s correspond to leading variables, while columns of the row-echelon form that do not contain leading 1s correspond to free variables.

4. TRUE. If the last column of the row-reduced augmented matrix for the system does not contain a pivot, then the system can be solved by back-substitution. On the other hand, if this column does contain a pivot, then the row of the row-reduced matrix containing that pivot corresponds to the impossible equation 0 = 1.

5. FALSE. The linear system x = 0, y = 0, z = 0 has the unique solution (0, 0, 0) even though none of the variables here is free.

6. FALSE. The columns containing the leading 1s correspond to the leading variables, not the free variables.

Problems:

For the problems of this section, A will denote the coefficient matrix of the given system, and A# will denote the augmented matrix of the given system.

1. Converting the given system of equations to an augmented matrix and using Gaussian elimination we obtain the following equivalent matrices:

[1 2 1 | 1; 3 5 1 | 3; 2 6 7 | 1] ∼ [1 2 1 | 1; 0 −1 −2 | 0; 0 2 5 | −1] ∼ [1 2 1 | 1; 0 1 2 | 0; 0 2 5 | −1] ∼ [1 2 1 | 1; 0 1 2 | 0; 0 0 1 | −1].

1. A12(−3), A13(−2)  2. M2(−1)  3. A23(−2)

The last augmented matrix results in the system:

x1 + 2x2 + x3 = 1,
x2 + 2x3 = 0,
x3 = −1.

By back substitution we obtain the solution (−2, 2, −1).
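The solution found by back substitution can be double-checked with a direct solver — an illustrative sketch assuming NumPy:

```python
import numpy as np

# Coefficient matrix and right-hand side of Problem 1.
A = np.array([[1.0, 2.0, 1.0], [3.0, 5.0, 1.0], [2.0, 6.0, 7.0]])
b = np.array([1.0, 3.0, 1.0])
x = np.linalg.solve(A, b)   # should agree with (-2, 2, -1)
```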

2. Converting the given system of equations to an augmented matrix and using Gaussian elimination, we obtain the following equivalent matrices:

[3 −1 0 | 1; 2 1 5 | 4; 7 −5 −8 | −3] ∼ [1 −2 −5 | −3; 2 1 5 | 4; 7 −5 −8 | −3] ∼ [1 −2 −5 | −3; 0 5 15 | 10; 0 9 27 | 18] ∼ [1 −2 −5 | −3; 0 1 3 | 2; 0 9 27 | 18] ∼ [1 0 1 | 1; 0 1 3 | 2; 0 0 0 | 0].

1. A21(−1)  2. A12(−2), A13(−7)  3. M2(1/5)  4. A21(2), A23(−9)

The last augmented matrix results in the system:

x1 + x3 = 1,
x2 + 3x3 = 2.

Let the free variable x3 = t, a real number. By back substitution we find that the system has the solution set {(1 − t, 2 − 3t, t) : for all real numbers t}.
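When a system has a free variable, every member of the resulting one-parameter family should satisfy the original equations. An illustrative check of Problem 2's family, assuming NumPy:

```python
import numpy as np

# Coefficient matrix and right-hand side of Problem 2.
A = np.array([[3.0, -1.0, 0.0], [2.0, 1.0, 5.0], [7.0, -5.0, -8.0]])
b = np.array([1.0, 4.0, -3.0])

# Every (1 - t, 2 - 3t, t) should satisfy Ax = b; test a few values of t.
for t in (-2.0, 0.0, 1.5):
    x = np.array([1 - t, 2 - 3 * t, t])
    assert np.allclose(A @ x, b)
```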

3. Converting the given system of equations to an augmented matrix and using Gaussian elimination we obtain the following equivalent matrices:

[3 5 −1 | 4; 1 2 1 | 3; 2 5 6 | 2] ∼ [1 2 1 | 3; 3 5 −1 | 4; 2 5 6 | 2] ∼ [1 2 1 | 3; 0 −1 −4 | −5; 0 1 4 | −4] ∼ [1 2 1 | 3; 0 1 4 | 5; 0 1 4 | −4] ∼ [1 2 1 | 3; 0 1 4 | 5; 0 0 0 | −9] ∼ [1 2 1 | 3; 0 1 4 | 5; 0 0 0 | 1].

1. P12  2. A12(−3), A13(−2)  3. M2(−1)  4. A23(−1)  5. M3(−1/9)

This system of equations is inconsistent since 2 = rank(A) < rank(A#) = 3.

4. Converting the given system of equations to an augmented matrix and using Gaussian elimination we obtain the following equivalent matrices:

[6 −3 3 | 12; 2 −1 1 | 4; −4 2 −2 | −8] ∼ [1 −1/2 1/2 | 2; 2 −1 1 | 4; −4 2 −2 | −8] ∼ [1 −1/2 1/2 | 2; 0 0 0 | 0; 0 0 0 | 0].

1. M1(1/6)  2. A12(−2), A13(4)

Since x2 and x3 are free variables, let x2 = s and x3 = t. The single equation obtained from the augmented matrix is given by x1 − (1/2)x2 + (1/2)x3 = 2. Thus, the solution set of our system is given by

{(2 + s/2 − t/2, s, t) : s, t any real numbers}.

5. Converting the given system of equations to an augmented matrix and using Gaussian elimination we obtain the following equivalent matrices:

[2 −1 3 | 14; 3 1 −2 | −1; 7 2 −3 | 3; 5 −1 −2 | 5]
∼ [3 1 −2 | −1; 2 −1 3 | 14; 7 2 −3 | 3; 5 −1 −2 | 5]
∼ [1 2 −5 | −15; 2 −1 3 | 14; 7 2 −3 | 3; 5 −1 −2 | 5]
∼ [1 2 −5 | −15; 0 −5 13 | 44; 0 −12 32 | 108; 0 −11 23 | 80]
∼ [1 2 −5 | −15; 0 −12 32 | 108; 0 −5 13 | 44; 0 −11 23 | 80]
∼ [1 2 −5 | −15; 0 −1 9 | 28; 0 −5 13 | 44; 0 −11 23 | 80]
∼ [1 2 −5 | −15; 0 1 −9 | −28; 0 −5 13 | 44; 0 −11 23 | 80]
∼ [1 2 −5 | −15; 0 1 −9 | −28; 0 0 −32 | −96; 0 0 −76 | −228]
∼ [1 2 −5 | −15; 0 1 −9 | −28; 0 0 32 | 96; 0 0 −76 | −228]
∼ [1 2 −5 | −15; 0 1 −9 | −28; 0 0 1 | 3; 0 0 0 | 0].

1. P12  2. A21(−1)  3. A12(−2), A13(−7), A14(−5)  4. P23
5. A42(−1)  6. M2(−1)  7. A23(5), A24(11)  8. M3(−1)  9. M3(1/32), A34(76)

The last augmented matrix results in the system of equations:

x1 + 2x2 − 5x3 = −15,
x2 − 9x3 = −28,
x3 = 3.

Thus, using back substitution, the solution set for our system is given by {(2, −1, 3)}.

6. Converting the given system of equations to an augmented matrix and using Gaussian elimination we obtain the following equivalent matrices:

[2 −1 −4 | 5; 3 2 −5 | 8; 5 6 −6 | 20; 1 1 −3 | −3]
∼ [1 1 −3 | −3; 3 2 −5 | 8; 5 6 −6 | 20; 2 −1 −4 | 5]
∼ [1 1 −3 | −3; 0 −1 4 | 17; 0 1 9 | 35; 0 −3 2 | 11]
∼ [1 1 −3 | −3; 0 1 −4 | −17; 0 1 9 | 35; 0 −3 2 | 11]
∼ [1 1 −3 | −3; 0 1 −4 | −17; 0 0 13 | 52; 0 0 −10 | −40]
∼ [1 1 −3 | −3; 0 1 −4 | −17; 0 0 1 | 4; 0 0 −10 | −40]
∼ [1 1 −3 | −3; 0 1 −4 | −17; 0 0 1 | 4; 0 0 0 | 0].

1. P14  2. A12(−3), A13(−5), A14(−2)  3. M2(−1)  4. A23(−1), A24(3)  5. M3(1/13)  6. A34(10)

The last augmented matrix results in the system of equations:

x1 + x2 − 3x3 = −3,
x2 − 4x3 = −17,
x3 = 4.

By back substitution, we obtain the solution set {(10, −1, 4)}.

7. Converting the given system of equations to an augmented matrix and using Gaussian elimination we obtain the following equivalent matrices:

[1 2 −1 1 | 1; 2 4 −2 2 | 2; 5 10 −5 5 | 5] ∼ [1 2 −1 1 | 1; 0 0 0 0 | 0; 0 0 0 0 | 0].

1. A12(−2), A13(−5)

The last augmented matrix results in the equation x1 + 2x2 − x3 + x4 = 1. Now x2, x3, and x4 are free variables, so we let x2 = r, x3 = s, and x4 = t. It follows that x1 = 1 − 2r + s − t. Consequently, the solution set of the system is given by {(1 − 2r + s − t, r, s, t) : r, s, t any real numbers}.

8. Converting the given system of equations to an augmented matrix and using Gaussian elimination we obtain the following equivalent matrices:

[1 2 −1 1 | 1; 2 −3 1 −1 | 2; 1 −5 2 −2 | 1; 4 1 −1 1 | 3]
∼ [1 2 −1 1 | 1; 0 −7 3 −3 | 0; 0 −7 3 −3 | 0; 0 −7 3 −3 | −1]
∼ [1 2 −1 1 | 1; 0 1 −3/7 3/7 | 0; 0 −7 3 −3 | 0; 0 −7 3 −3 | −1]
∼ [1 2 −1 1 | 1; 0 1 −3/7 3/7 | 0; 0 0 0 0 | 0; 0 0 0 0 | −1]
∼ [1 2 −1 1 | 1; 0 1 −3/7 3/7 | 0; 0 0 0 0 | −1; 0 0 0 0 | 0]
∼ [1 2 −1 1 | 1; 0 1 −3/7 3/7 | 0; 0 0 0 0 | 1; 0 0 0 0 | 0].

1. A12(−2), A13(−1), A14(−4)  2. M2(−1/7)  3. A23(7), A24(7)  4. P34  5. M3(−1)

The given system of equations is inconsistent since 2 = rank(A) < rank(A#) = 3.

9. Converting the given system of equations to an augmented matrix and using Gauss-Jordan elimination we obtain the following equivalent matrices:

[1 2 1 1 −2 | 3; 0 0 1 4 −3 | 2; 2 4 −1 −10 5 | 0] ∼ [1 2 1 1 −2 | 3; 0 0 1 4 −3 | 2; 0 0 −3 −12 9 | −6] ∼ [1 2 1 1 −2 | 3; 0 0 1 4 −3 | 2; 0 0 0 0 0 | 0].

1. A13(−2)  2. A23(3)

The last augmented matrix indicates that the first two equations of the initial system completely determine its solution. We see that x4 and x5 are free variables, so let x4 = s and x5 = t. Then x3 = 2 − 4x4 + 3x5 = 2 − 4s + 3t. Moreover, x2 is a free variable, say x2 = r, so then x1 = 3 − 2r − (2 − 4s + 3t) − s + 2t = 1 − 2r + 3s − t. Hence, the solution set for the system is

{(1 − 2r + 3s − t, r, 2 − 4s + 3t, s, t) : r, s, t any real numbers}.

10. Converting the given system of equations to an augmented matrix and using Gauss-Jordan elimination we obtain the following equivalent matrices:

[2 −1 −1 | 2; 4 3 −2 | −1; 1 4 1 | 4]
∼ [1 4 1 | 4; 4 3 −2 | −1; 2 −1 −1 | 2]
∼ [1 4 1 | 4; 0 −13 −6 | −17; 0 −9 −3 | −6]
∼ [1 4 1 | 4; 0 −9 −3 | −6; 0 −13 −6 | −17]
∼ [1 4 1 | 4; 0 12 4 | 8; 0 −13 −6 | −17]
∼ [1 4 1 | 4; 0 12 4 | 8; 0 −1 −2 | −9]
∼ [1 4 1 | 4; 0 −1 −2 | −9; 0 12 4 | 8]
∼ [1 4 1 | 4; 0 1 2 | 9; 0 12 4 | 8]
∼ [1 0 −7 | −32; 0 1 2 | 9; 0 0 −20 | −100]
∼ [1 0 −7 | −32; 0 1 2 | 9; 0 0 1 | 5]
∼ [1 0 0 | 3; 0 1 0 | −1; 0 0 1 | 5].

1. P13  2. A12(−4), A13(−2)  3. P23  4. M2(−4/3)  5. A23(1)
6. P23  7. M2(−1)  8. A21(−4), A23(−12)  9. M3(−1/20)  10. A31(7), A32(−2)

The last augmented matrix results in the solution (3, −1, 5).

11. Converting the given system of equations to an augmented matrix and using Gauss-Jordan elimination we obtain the following equivalent matrices:

[3 1 5 | 2; 1 1 −1 | 1; 2 1 2 | 3] ∼ [1 1 −1 | 1; 3 1 5 | 2; 2 1 2 | 3] ∼ [1 1 −1 | 1; 0 −2 8 | −1; 0 −1 4 | 1] ∼ [1 1 −1 | 1; 0 1 −4 | 1/2; 0 −1 4 | 1] ∼ [1 1 −1 | 1; 0 1 −4 | 1/2; 0 0 0 | 3/2].

1. P12  2. A12(−3), A13(−2)  3. M2(−1/2)  4. A23(1)

We can stop here, since we see from this last augmented matrix that the system is inconsistent. In particular, 2 = rank(A) < rank(A#) = 3.

12. Converting the given system of equations to an augmented matrix and using Gauss-Jordan elimination we obtain the following equivalent matrices:

[1 0 −2 | −3; 3 −2 −4 | −9; 1 −4 2 | −3] ∼ [1 0 −2 | −3; 0 −2 2 | 0; 0 −4 4 | 0] ∼ [1 0 −2 | −3; 0 1 −1 | 0; 0 −4 4 | 0] ∼ [1 0 −2 | −3; 0 1 −1 | 0; 0 0 0 | 0].

1. A12(−3), A13(−1)  2. M2(−1/2)  3. A23(4)

The last augmented matrix results in the following system of equations:

x1 − 2x3 = −3 and x2 − x3 = 0.

Since x3 is free, let x3 = t. Thus, from the system we obtain the solutions {(2t − 3, t, t) : t any real number}.

13. Converting the given system of equations to an augmented matrix and using Gauss-Jordan elimination we obtain the following equivalent matrices:

[2 −1 3 −1 | 3; 3 2 1 −5 | −6; 1 −2 3 1 | 6] ∼ [1 −2 3 1 | 6; 3 2 1 −5 | −6; 2 −1 3 −1 | 3] ∼ [1 −2 3 1 | 6; 0 8 −8 −8 | −24; 0 3 −3 −3 | −9] ∼ [1 −2 3 1 | 6; 0 1 −1 −1 | −3; 0 3 −3 −3 | −9] ∼ [1 0 1 −1 | 0; 0 1 −1 −1 | −3; 0 0 0 0 | 0].

1. P13  2. A12(−3), A13(−2)  3. M2(1/8)  4. A21(2), A23(−3)

The last augmented matrix results in the following system of equations:

x1 + x3 − x4 = 0 and x2 − x3 − x4 = −3.

Since x3 and x4 are free variables, we can let x3 = s and x4 = t, where s and t are real numbers. It follows that the solution set of the system is given by {(t − s, s + t − 3, s, t) : s, t any real numbers}.

14. Converting the given system of equations to an augmented matrix and using Gauss-Jordan elimination we obtain the following equivalent matrices:

[1 1 1 −1 | 4; 1 −1 −1 −1 | 2; 1 1 −1 1 | −2; 1 −1 1 1 | −8]
∼ [1 1 1 −1 | 4; 0 −2 −2 0 | −2; 0 0 −2 2 | −6; 0 −2 0 2 | −12]
∼ [1 1 1 −1 | 4; 0 1 1 0 | 1; 0 0 1 −1 | 3; 0 1 0 −1 | 6]
∼ [1 0 0 −1 | 3; 0 1 1 0 | 1; 0 0 1 −1 | 3; 0 0 −1 −1 | 5]
∼ [1 0 0 −1 | 3; 0 1 0 1 | −2; 0 0 1 −1 | 3; 0 0 0 −2 | 8]
∼ [1 0 0 −1 | 3; 0 1 0 1 | −2; 0 0 1 −1 | 3; 0 0 0 1 | −4]
∼ [1 0 0 0 | −1; 0 1 0 0 | 2; 0 0 1 0 | −1; 0 0 0 1 | −4].

1. A12(−1), A13(−1), A14(−1)  2. M2(−1/2), M3(−1/2), M4(−1/2)  3. A21(−1), A24(−1)
4. A32(−1), A34(1)  5. M4(−1/2)  6. A41(1), A42(−1), A43(1)

It follows from the last augmented matrix that the solution to the system is given by (−1, 2, −1, −4).
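The 4 × 4 system of Problem 14 can also be handed to a direct solver — an illustrative sketch assuming NumPy:

```python
import numpy as np

# Coefficient matrix and right-hand side of Problem 14.
A = np.array([[1.0,  1.0,  1.0, -1.0],
              [1.0, -1.0, -1.0, -1.0],
              [1.0,  1.0, -1.0,  1.0],
              [1.0, -1.0,  1.0,  1.0]])
b = np.array([4.0, 2.0, -2.0, -8.0])
x = np.linalg.solve(A, b)   # should agree with (-1, 2, -1, -4)
```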

15. Converting the given system of equations to an augmented matrix and using Gauss-Jordan elimination we obtain the following equivalent matrices:

[2 −1 3 1 −1 | 11; 1 −3 −2 −1 −2 | 2; 3 1 −2 −1 1 | −2; 1 2 1 2 3 | −3; 5 −3 −3 1 2 | 2]
∼ [1 −3 −2 −1 −2 | 2; 2 −1 3 1 −1 | 11; 3 1 −2 −1 1 | −2; 1 2 1 2 3 | −3; 5 −3 −3 1 2 | 2]
∼ [1 −3 −2 −1 −2 | 2; 0 5 7 3 3 | 7; 0 10 4 2 7 | −8; 0 5 3 3 5 | −5; 0 12 7 6 12 | −8]
∼ [1 −3 −2 −1 −2 | 2; 0 1 7/5 3/5 3/5 | 7/5; 0 10 4 2 7 | −8; 0 5 3 3 5 | −5; 0 12 7 6 12 | −8]
∼ [1 0 11/5 4/5 −1/5 | 31/5; 0 1 7/5 3/5 3/5 | 7/5; 0 0 −10 −4 1 | −22; 0 0 −4 0 2 | −12; 0 0 −49/5 −6/5 24/5 | −124/5]
∼ [1 0 11/5 4/5 −1/5 | 31/5; 0 1 7/5 3/5 3/5 | 7/5; 0 0 1 2/5 −1/10 | 11/5; 0 0 −4 0 2 | −12; 0 0 −49/5 −6/5 24/5 | −124/5]
∼ [1 0 0 −2/25 1/50 | 34/25; 0 1 0 1/25 37/50 | −42/25; 0 0 1 2/5 −1/10 | 11/5; 0 0 0 8/5 8/5 | −16/5; 0 0 0 68/25 191/50 | −81/25]
∼ [1 0 0 −2/25 1/50 | 34/25; 0 1 0 1/25 37/50 | −42/25; 0 0 1 2/5 −1/10 | 11/5; 0 0 0 1 1 | −2; 0 0 0 68/25 191/50 | −81/25]
∼ [1 0 0 0 1/10 | 6/5; 0 1 0 0 7/10 | −8/5; 0 0 1 0 −1/2 | 3; 0 0 0 1 1 | −2; 0 0 0 0 11/10 | 11/5]
∼ [1 0 0 0 1/10 | 6/5; 0 1 0 0 7/10 | −8/5; 0 0 1 0 −1/2 | 3; 0 0 0 1 1 | −2; 0 0 0 0 1 | 2]
∼ [1 0 0 0 0 | 1; 0 1 0 0 0 | −3; 0 0 1 0 0 | 4; 0 0 0 1 0 | −4; 0 0 0 0 1 | 2].

1. P12  2. A12(−2), A13(−3), A14(−1), A15(−5)  3. M2(1/5)  4. A21(3), A23(−10), A24(−5), A25(−12)
5. M3(−1/10)  6. A31(−11/5), A32(−7/5), A34(4), A35(49/5)  7. M4(5/8)
8. A41(2/25), A42(−1/25), A43(−2/5), A45(−68/25)  9. M5(10/11)  10. A51(−1/10), A52(−7/10), A53(1/2), A54(−1)

It follows from the last augmented matrix that the solution to the system is given by (1, −3, 4, −4, 2).

16. The equation Ax = b reads

[1 −3 1]  [x1]   [8]
[5 −4 1]  [x2] = [15]
[2 4 −3]  [x3]   [−4].

Converting the given system of equations to an augmented matrix and using Gauss-Jordan elimination we obtain the following equivalent matrices:

[1 −3 1 | 8]
[5 −4 1 | 15]
[2 4 −3 | −4]

1∼

[1 −3 1 | 8]
[0 11 −4 | −25]
[0 10 −5 | −20]

2∼

[1 −3 1 | 8]
[0 1 1 | −5]
[0 10 −5 | −20]


3∼

[1 0 4 | −7]
[0 1 1 | −5]
[0 0 −15 | 30]

4∼

[1 0 4 | −7]
[0 1 1 | −5]
[0 0 1 | −2]

5∼

[1 0 0 | 1]
[0 1 0 | −3]
[0 0 1 | −2].

1. A12(−5), A13(−2) 2. A32(−1) 3. A21(3), A23(−10) 4. M3(−1/15) 5. A31(−4), A32(−1)

Thus, from the last augmented matrix, we see that x1 = 1, x2 = −3, and x3 = −2.

17. The equation Ax = b reads

[1 0 5]   [x1]   [0]
[3 −2 11] [x2] = [2]
[2 −2 6]  [x3]   [2].

Converting the given system of equations to an augmented matrix and using Gauss-Jordan elimination we obtain the following equivalent matrices:

[1 0 5 | 0]
[3 −2 11 | 2]
[2 −2 6 | 2]

1∼

[1 0 5 | 0]
[0 −2 −4 | 2]
[0 −2 −4 | 2]

2∼

[1 0 5 | 0]
[0 1 2 | −1]
[0 −2 −4 | 2]

3∼

[1 0 5 | 0]
[0 1 2 | −1]
[0 0 0 | 0]

.

1. A12(−3), A13(−2) 2. M2(−1/2) 3. A23(2)

Hence, we have x1 + 5x3 = 0 and x2 + 2x3 = −1. Since x3 is a free variable, we can let x3 = t, where t is any real number. It follows that the solution set for the given system is given by {(−5t, −2t − 1, t) : t ∈ R}.

18. The equation Ax = b reads

[0 1 −1] [x1]   [−2]
[0 5 1]  [x2] = [8]
[0 2 1]  [x3]   [5].

Converting the given system of equations to an augmented matrix and using Gauss-Jordan elimination we obtain the following equivalent matrices:

[0 1 −1 | −2]
[0 5 1 | 8]
[0 2 1 | 5]

1∼

[0 1 −1 | −2]
[0 0 6 | 18]
[0 0 3 | 9]

2∼

[0 1 −1 | −2]
[0 0 1 | 3]
[0 0 3 | 9]

3∼

[0 1 0 | 1]
[0 0 1 | 3]
[0 0 0 | 0]

.

1. A12(−5), A13(−2) 2. M2(1/6) 3. A21(1), A23(−3)

Consequently, from the last augmented matrix it follows that the solution set for the matrix equation is given by {(t, 1, 3) : t ∈ R}.

19. The equation Ax = b reads

[1 −1 0 −1] [x1]   [2]
[2 1 3 7]   [x2] = [2]
[3 −2 1 0]  [x3]   [4]
            [x4]


Converting the given system of equations to an augmented matrix and using Gauss-Jordan elimination we obtain the following equivalent matrices:

[1 −1 0 −1 | 2]
[2 1 3 7 | 2]
[3 −2 1 0 | 4]

1∼

[1 −1 0 −1 | 2]
[0 3 3 9 | −2]
[0 1 1 3 | −2]

2∼

[1 −1 0 −1 | 2]
[0 1 1 3 | −2]
[0 3 3 9 | −2]

3∼

[1 0 1 2 | 0]
[0 1 1 3 | −2]
[0 0 0 0 | 4]

.

1. A12(−2), A13(−3) 2. P23 3. A21(1), A23(−3)

From the last row of the last augmented matrix, it is clear that the given system is inconsistent.

20. The equation Ax = b reads

[1 1 0 −1]  [x1]   [2]
[3 1 −2 3]  [x2] = [8]
[2 3 1 1]   [x3]   [3]
[−2 3 5 −2] [x4]   [−9].

Converting the given system of equations to an augmented matrix and using Gauss-Jordan elimination we obtain the following equivalent matrices:

[1 1 0 1 | 2]
[3 1 −2 3 | 8]
[2 3 1 2 | 3]
[−2 3 5 −2 | −9]

1∼

[1 1 0 1 | 2]
[0 −2 −2 0 | 2]
[0 1 1 0 | −1]
[0 5 5 0 | −5]

2∼

[1 1 0 1 | 2]
[0 1 1 0 | −1]
[0 −2 −2 0 | 2]
[0 5 5 0 | −5]

3∼

[1 0 −1 1 | 3]
[0 1 1 0 | −1]
[0 0 0 0 | 0]
[0 0 0 0 | 0]

.

1. A12(−3), A13(−2), A14(2) 2. P23 3. A21(−1), A23(2), A24(−5)

From the last augmented matrix, we obtain the system of equations x1 − x3 + x4 = 3, x2 + x3 = −1. Since both x3 and x4 are free variables, we may let x3 = r and x4 = t, where r and t are real numbers. The solution set for the system is given by {(3 + r − t, −r − 1, r, t) : r, t ∈ R}.

21. Converting the given system of equations to an augmented matrix and using Gauss-Jordan elimination we obtain the following equivalent matrices:

[1 2 −1 | 3]
[2 5 1 | 7]
[1 1 −k² | −k]

1∼

[1 2 −1 | 3]
[0 1 3 | 1]
[0 −1 1 − k² | −3 − k]

2∼

[1 2 −1 | 3]
[0 1 3 | 1]
[0 0 4 − k² | −2 − k]

.

1. A12(−2), A13(−1) 2. A23(1)

(a): If k = 2, then the last row of the last augmented matrix reveals an inconsistency; hence the system has no solutions in this case.

(b): If k = −2, then the last row of the last augmented matrix consists entirely of zeros, so we have only two pivots (in the first two columns) and a free variable x3; hence the system has infinitely many solutions.

(c): If k ≠ ±2, then the last augmented matrix above contains a pivot for each variable x1, x2, and x3, and the system can be solved for a unique solution by back-substitution.


22. Converting the given system of equations to an augmented matrix and using Gauss-Jordan eliminationwe obtain the following equivalent matrices:

2 1 −1 1 01 1 1 −1 04 2 −1 1 03 −1 1 k 0

1∼

1 1 1 −1 02 1 −1 1 04 2 −1 1 03 −1 1 k 0

2∼

1 1 1 −1 00 −1 −3 3 00 −2 −5 5 00 −4 −2 k + 3 0

3∼

1 1 1 −1 00 1 3 −3 00 −2 −5 5 00 −4 −2 k + 3 0

4∼

1 1 1 −1 00 1 3 −3 00 0 1 −1 00 0 10 k − 9 0

5∼

1 1 1 −1 00 1 3 −3 00 0 1 −1 00 0 0 k + 1 0

.

1. P12 2. A12(−2), A13(−4), A14(−3) 3. M2(−1) 4. A23(2), A24(4) 5. A34(−10)

(a): Note that the trivial solution (0, 0, 0, 0) exists under all circumstances, so there are no values of k for which there is no solution.

(b): From the last row of the last augmented matrix, we see that if k = −1, then the variable x4 corresponds to an unpivoted column, and hence it is a free variable. In this case, therefore, we have infinitely many solutions.

(c): Provided that k ≠ −1, each variable in the system corresponds to a pivoted column of the last augmented matrix above. Therefore, we can solve the system by back-substitution. The conclusion is that there is a unique solution, (0, 0, 0, 0).

23. Converting the given system of equations to an augmented matrix and using Gauss-Jordan elimination we obtain the following equivalent matrices:

[1 1 −2 | 4]
[3 5 −4 | 16]
[2 3 −a | b]

1∼

[1 1 −2 | 4]
[0 2 2 | 4]
[0 1 4 − a | b − 8]

2∼

[1 1 −2 | 4]
[0 1 1 | 2]
[0 1 4 − a | b − 8]

3∼

[1 0 −3 | 2]
[0 1 1 | 2]
[0 0 3 − a | b − 10].

1. A12(−3), A13(−2) 2. M2(1/2) 3. A21(−1), A23(−1)

(a): From the last row of the last augmented matrix above, we see that there is no solution if a = 3 and b ≠ 10.

(b): From the last row of the augmented matrix above, we see that there are infinitely many solutions if a = 3 and b = 10, because in that case there is no pivot in the column of the last augmented matrix corresponding to the third variable x3.

(c): From the last row of the augmented matrix above, we see that if a ≠ 3, then regardless of the value of b, there is a pivot corresponding to each variable x1, x2, and x3. Therefore, we can uniquely solve the corresponding system by back-substitution.

24. Converting the given system of equations to an augmented matrix and using Gauss-Jordan elimination we obtain the following equivalent matrices:

[1 −a | 3]
[2 1 | 6]
[−3 a + b | 1]

1∼

[1 −a | 3]
[0 1 + 2a | 0]
[0 b − 2a | 10]

.


From the middle row, we see that if a ≠ −1/2, then we must have x2 = 0, but this leads to an inconsistency in solving for x1 (the first equation would require x1 = 3 while the last equation would require x1 = −1/3). Now suppose that a = −1/2. Then the augmented matrix on the right reduces to

[1 1/2 | 3]
[0 b + 1 | 10].

If b = −1, then once more we have an inconsistency in the last row. However, if b ≠ −1, then the row-echelon form obtained has full rank, and there is a unique solution. Therefore, we draw the following conclusions:

(a): There is no solution to the system if a ≠ −1/2 or if a = −1/2 and b = −1.

(b): Under no circumstances are there an infinite number of solutions to the linear system.

(c): There is a unique solution if a = −1/2 and b ≠ −1.

25. The corresponding augmented matrix for this linear system can be reduced to row-echelon form via

[1 1 1 | y1]
[2 3 1 | y2]
[3 5 1 | y3]

1∼

[1 1 1 | y1]
[0 1 −1 | y2 − 2y1]
[0 2 −2 | y3 − 3y1]

2∼

[1 1 1 | y1]
[0 1 −1 | y2 − 2y1]
[0 0 0 | y1 − 2y2 + y3].

1. A12(−2), A13(−3) 2. A23(−2)

For consistency, we must have rank(A) = rank(A#), which requires (y1, y2, y3) to satisfy y1 − 2y2 + y3 = 0. If this holds, then the system has an infinite number of solutions, because the column of the augmented matrix corresponding to x3 will be unpivoted, indicating that x3 is a free variable in the solution set.

26. Converting the given system of equations to an augmented matrix and using Gaussian elimination we obtain the following row-equivalent matrices. Since a11 ≠ 0:

[a11 a12 | b1]
[a21 a22 | b2]

1∼

[1 a12/a11 | b1/a11]
[0 (a11a22 − a21a12)/a11 | (a11b2 − a21b1)/a11]

2∼

[1 a12/a11 | b1/a11]
[0 ∆/a11 | ∆2/a11].

1. M1(1/a11), A12(−a21) 2. Definition of ∆ and ∆2

(a): If ∆ ≠ 0, then rank(A) = rank(A#) = 2, so the system has a unique solution (of course, we are assuming a11 ≠ 0 here). Using the last augmented matrix above, (∆/a11)x2 = ∆2/a11, so that x2 = ∆2/∆. Using this, we can solve x1 + (a12/a11)x2 = b1/a11 for x1 to obtain x1 = ∆1/∆, where we have used the fact that ∆1 = a22b1 − a12b2.

(b): If ∆ = 0 and a11 ≠ 0, then the augmented matrix of the system is

[1 a12/a11 | b1/a11]
[0 0 | ∆2/a11],

so it follows that the system has (i) no solution if ∆2 ≠ 0, since rank(A) < rank(A#) = 2, and (ii) an infinite number of solutions if ∆2 = 0, since rank(A#) < 2.

(c): An infinite number of solutions would be represented as one line. No solution would be two parallel lines. A unique solution would be the intersection of two distinct lines at one point.
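The x1 = ∆1/∆, x2 = ∆2/∆ formulas above are Cramer's rule for a 2 × 2 system, which is easy to sketch directly (the function name and the sample system are ours):

```python
from fractions import Fraction

def cramer_2x2(a11, a12, a21, a22, b1, b2):
    """Solve a 2x2 linear system via the determinants of Problem 26:
    Delta  = a11*a22 - a12*a21
    Delta1 = a22*b1 - a12*b2,  Delta2 = a11*b2 - a21*b1."""
    delta = a11 * a22 - a12 * a21
    if delta == 0:
        # No solution or infinitely many, depending on Delta1, Delta2.
        return None
    delta1 = a22 * b1 - a12 * b2
    delta2 = a11 * b2 - a21 * b1
    return Fraction(delta1, delta), Fraction(delta2, delta)

# Sample system: x + 2y = 5, 3x + 4y = 6.
print(cramer_2x2(1, 2, 3, 4, 5, 6))
```

For the sample system, ∆ = −2, ∆1 = 8, ∆2 = −9, giving x = −4 and y = 9/2, which indeed satisfies both equations.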

27. We first use the partial pivoting algorithm to reduce the augmented matrix of the system:

[1 2 1 | 1]
[3 5 1 | 3]
[2 6 7 | 1]

1∼

[3 5 1 | 3]
[1 2 1 | 1]
[2 6 7 | 1]

2∼

[3 5 1 | 3]
[0 1/3 2/3 | 0]
[0 8/3 19/3 | −1]


3∼

[3 5 1 | 3]
[0 8/3 19/3 | −1]
[0 1/3 2/3 | 0]

4∼

[3 5 1 | 3]
[0 8/3 19/3 | −1]
[0 0 −1/8 | 1/8]

.

1. P12 2. A12(−1/3), A13(−2/3) 3. P23 4. A23(−1/8)

Using back substitution to solve the equivalent system yields the unique solution (−2, 2,−1).
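The partial-pivoting strategy used in Problems 27-30 (swap the row with the largest entry in the current column up to the pivot position, then eliminate below, then back-substitute) can be sketched in pure Python; the function name is ours, and `Fraction` keeps entries like 1/3 and 8/3 exact:

```python
from fractions import Fraction

def solve_partial_pivoting(aug):
    """Gaussian elimination with partial pivoting, then back substitution."""
    m = [[Fraction(x) for x in row] for row in aug]
    n = len(m)
    for col in range(n):
        # Partial pivoting: choose the row with the largest entry in this column.
        pivot = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[pivot] = m[pivot], m[col]
        # Eliminate entries below the pivot.
        for r in range(col + 1, n):
            factor = m[r][col] / m[col][col]
            m[r] = [a - factor * b for a, b in zip(m[r], m[col])]
    # Back substitution on the resulting upper triangular system.
    x = [Fraction(0)] * n
    for i in range(n - 1, -1, -1):
        s = sum(m[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (m[i][-1] - s) / m[i][i]
    return x

# Problem 27's system; the text's reduction gives (-2, 2, -1).
aug = [[1, 2, 1, 1], [3, 5, 1, 3], [2, 6, 7, 1]]
print(solve_partial_pivoting(aug))
```

With exact arithmetic the pivoting choice does not change the answer; in floating point it controls roundoff growth, which is its real purpose.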

28. We first use the partial pivoting algorithm to reduce the augmented matrix of the system:

[2 −1 3 | 14]
[3 1 −2 | −1]
[7 2 −3 | 3]
[5 −1 −2 | 5]

1∼

[7 2 −3 | 3]
[3 1 −2 | −1]
[2 −1 3 | 14]
[5 −1 −2 | 5]

2∼

[7 2 −3 | 3]
[0 1/7 −5/7 | −16/7]
[0 −11/7 27/7 | 92/7]
[0 −17/7 1/7 | 20/7]

3∼

[7 2 −3 | 3]
[0 −17/7 1/7 | 20/7]
[0 −11/7 27/7 | 92/7]
[0 1/7 −5/7 | −16/7]

4∼

[7 2 −3 | 3]
[0 −17/7 1/7 | 20/7]
[0 0 64/17 | 192/17]
[0 0 −12/17 | −36/17]

5∼

[7 2 −3 | 3]
[0 −17/7 1/7 | 20/7]
[0 0 64/17 | 192/17]
[0 0 0 | 0]

.

1. P13 2. A12(−3/7), A13(−2/7), A14(−5/7) 3. P24

4. A23(−11/17), A24(1/17) 5. A34(3/16)

Using back substitution to solve the equivalent system yields the unique solution (2,−1, 3).

29. We first use the partial pivoting algorithm to reduce the augmented matrix of the system:

[2 −1 −4 | 5]
[3 2 −5 | 8]
[5 6 −6 | 20]
[1 1 −3 | −3]

1∼

[5 6 −6 | 20]
[3 2 −5 | 8]
[2 −1 −4 | 5]
[1 1 −3 | −3]

2∼

[5 6 −6 | 20]
[0 −8/5 −7/5 | −4]
[0 −17/5 −8/5 | −3]
[0 −1/5 −9/5 | −7]

3∼

[5 6 −6 | 20]
[0 −17/5 −8/5 | −3]
[0 −8/5 −7/5 | −4]
[0 −1/5 −9/5 | −7]

4∼

[5 6 −6 | 20]
[0 −17/5 −8/5 | −3]
[0 0 −11/17 | −44/17]
[0 0 −29/17 | −116/17]

5∼

[5 6 −6 | 20]
[0 −17/5 −8/5 | −3]
[0 0 −29/17 | −116/17]
[0 0 −11/17 | −44/17]

6∼

[5 6 −6 | 20]
[0 −17/5 −8/5 | −3]
[0 0 −29/17 | −116/17]
[0 0 0 | 0]

.

1. P13 2. A12(−3/5), A13(−2/5), A14(−1/5) 3. P23

4. A23(−8/17), A24(−1/17) 5. P34 6. A34(−11/29)


Using back substitution to solve the equivalent system yields the unique solution (10,−1, 4).

30. We first use the partial pivoting algorithm to reduce the augmented matrix of the system:

[2 −1 −1 | 2]
[4 3 −2 | −1]
[1 4 1 | 4]

1∼

[4 3 −2 | −1]
[2 −1 −1 | 2]
[1 4 1 | 4]

2∼

[4 3 −2 | −1]
[0 −5/2 0 | 5/2]
[0 13/4 3/2 | 17/4]

3∼

[4 3 −2 | −1]
[0 13/4 3/2 | 17/4]
[0 −5/2 0 | 5/2]

4∼

[4 3 −2 | −1]
[0 13/4 3/2 | 17/4]
[0 0 15/13 | 75/13]

.

1. P12 2. A12(−1/2), A13(−1/4) 3. P23 4. A23(10/13)

Using back substitution to solve the equivalent system yields the unique solution (3,−1, 5).

31.

(a): Let

A# =
[a11 0 0 . . . 0 | b1]
[a21 a22 0 . . . 0 | b2]
[a31 a32 a33 . . . 0 | b3]
[ . . . . . . . . . . . . ]
[an1 an2 an3 . . . ann | bn]

represent the corresponding augmented matrix of the given system. Since a11x1 = b1, we can solve for x1 easily:

x1 = b1/a11 (a11 ≠ 0).

Now since a21x1 + a22x2 = b2, by using the expression for x1 we just obtained, we can solve for x2:

x2 = (a11b2 − a21b1)/(a11a22).

In a similar manner, we can solve for x3, x4, . . . , xn.

(b): We solve instantly for x1 from the first equation: x1 = 2. Substituting this into the middle equation, we obtain 2 · 2 − 3 · x2 = 1, from which it quickly follows that x2 = 1. Substituting for x1 and x2 in the bottom equation yields 3 · 2 + 1 − x3 = 8, from which it quickly follows that x3 = −1. Consequently, the solution of the given system is (2, 1, −1).
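The solve-x1-then-x2-then-x3 procedure of part (a) is forward substitution on a lower triangular system; a minimal sketch (function name ours), applied to the system of part (b):

```python
from fractions import Fraction

def forward_substitute(L, b):
    """Solve a lower triangular system Lx = b by forward substitution,
    computing x1, then x2, then x3, ... in order."""
    n = len(L)
    x = []
    for i in range(n):
        # Subtract the already-known terms, then divide by the diagonal entry.
        s = sum(Fraction(L[i][j]) * x[j] for j in range(i))
        x.append((Fraction(b[i]) - s) / Fraction(L[i][i]))
    return x

# Part (b): x1 = 2, 2x1 - 3x2 = 1, 3x1 + x2 - x3 = 8.
L = [[1, 0, 0], [2, -3, 0], [3, 1, -1]]
b = [2, 1, 8]
print(forward_substitute(L, b))  # the solution (2, 1, -1), as Fractions
```

The loop assumes every diagonal entry L[i][i] is nonzero, matching the a11 ≠ 0, a22 ≠ 0, ... hypothesis of part (a).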

32. This system of equations is not linear in x1, x2, and x3; however, the system is linear in x1³, x2², and x3, so we can first solve for x1³, x2², and x3. Converting the given system of equations to an augmented matrix and using Gauss-Jordan elimination we obtain the following equivalent matrices:

[4 2 3 | 12]
[1 −1 1 | 2]
[3 1 −1 | 2]

1∼

[1 −1 1 | 2]
[4 2 3 | 12]
[3 1 −1 | 2]

2∼

[1 −1 1 | 2]
[0 6 −1 | 4]
[0 4 −4 | −4]

3∼

[1 −1 1 | 2]
[0 4 −4 | −4]
[0 6 −1 | 4]

4∼

[1 −1 1 | 2]
[0 1 −1 | −1]
[0 6 −1 | 4]

5∼

[1 0 0 | 1]
[0 1 −1 | −1]
[0 0 5 | 10]

6∼

[1 0 0 | 1]
[0 1 −1 | −1]
[0 0 1 | 2]

7∼

[1 0 0 | 1]
[0 1 0 | 1]
[0 0 1 | 2]

.


1. P12 2. A12(−4), A13(−3) 3. P23 4. M2(1/4) 5. A21(1), A23(−6) 6. M3(1/5) 7. A32(1)

Thus, taking only real solutions, we have x1³ = 1, x2² = 1, and x3 = 2. Therefore, x1 = 1, x2 = ±1, and x3 = 2, leading to the two solutions (1, 1, 2) and (1, −1, 2) to the original system of equations. There is no contradiction of Theorem 2.5.9 here since, as mentioned above, this system is not linear in x1, x2, and x3.

33. Reduce the augmented matrix of the system:

[3 2 −1 | 0]
[2 1 1 | 0]
[5 −4 1 | 0]

1∼

[1 1 −2 | 0]
[0 −1 5 | 0]
[0 −9 11 | 0]

2∼

[1 1 −2 | 0]
[0 1 −5 | 0]
[0 −9 11 | 0]

3∼

[1 0 3 | 0]
[0 1 −5 | 0]
[0 0 −34 | 0]

4∼

[1 0 3 | 0]
[0 1 −5 | 0]
[0 0 1 | 0]

5∼

[1 0 0 | 0]
[0 1 0 | 0]
[0 0 1 | 0]

.

1. A21(−1), A12(−2), A13(−5) 2. M2(−1) 3. A21(−1), A23(9)

4. M3(−1/34) 5. A31(−3), A32(5)

Therefore, the unique solution to this system is x1 = x2 = x3 = 0: (0, 0, 0).

34. Reduce the augmented matrix of the system:

[2 1 −1 | 0]
[3 −1 2 | 0]
[1 −1 −1 | 0]
[5 2 −2 | 0]

1∼

[1 −1 −1 | 0]
[3 −1 2 | 0]
[2 1 −1 | 0]
[5 2 −2 | 0]

2∼

[1 −1 −1 | 0]
[0 2 5 | 0]
[0 3 1 | 0]
[0 7 3 | 0]

3∼

[1 −1 −1 | 0]
[0 3 1 | 0]
[0 2 5 | 0]
[0 7 3 | 0]

4∼

[1 −1 −1 | 0]
[0 1 −4 | 0]
[0 2 5 | 0]
[0 7 3 | 0]

5∼

[1 0 −5 | 0]
[0 1 −4 | 0]
[0 0 13 | 0]
[0 0 31 | 0]

6∼

[1 0 −5 | 0]
[0 1 −4 | 0]
[0 0 1 | 0]
[0 0 31 | 0]

7∼

[1 0 0 | 0]
[0 1 0 | 0]
[0 0 1 | 0]
[0 0 0 | 0]

.

1. P13 2. A12(−3), A13(−2), A14(−5) 3. P23 4. A32(−1)

5. A21(1), A23(−2), A24(−7) 6. M3(1/13) 7. A31(5), A32(4), A34(−31)

Therefore, the unique solution to this system is x1 = x2 = x3 = 0: (0, 0, 0).

35. Reduce the augmented matrix of the system:

[2 −1 −1 | 0]
[5 −1 2 | 0]
[1 1 4 | 0]

1∼

[1 1 4 | 0]
[5 −1 2 | 0]
[2 −1 −1 | 0]

2∼

[1 1 4 | 0]
[0 −6 −18 | 0]
[0 −3 −9 | 0]

3∼

[1 1 4 | 0]
[0 1 3 | 0]
[0 −3 −9 | 0]

4∼

[1 0 1 | 0]
[0 1 3 | 0]
[0 0 0 | 0]

.

1. P13 2. A12(−5), A13(−2) 3. M2(−1/6) 4. A21(−1), A23(3)


It follows that x1 + x3 = 0 and x2 + 3x3 = 0. Setting x3 = t, where t is a free variable, we get x2 = −3t and x1 = −t. Thus we have that the solution set of the system is {(−t, −3t, t) : t ∈ R}.
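A parametric solution set like this is easy to sanity-check by substituting back into the homogeneous system; a minimal sketch (the helper name is ours):

```python
def check_null_vector(A, x):
    """Verify that x satisfies every equation of the homogeneous system Ax = 0."""
    return all(sum(a * xi for a, xi in zip(row, x)) == 0 for row in A)

# Problem 35's coefficient matrix; try the claimed family (-t, -3t, t)
# for several values of the parameter t.
A = [[2, -1, -1], [5, -1, 2], [1, 1, 4]]
print(all(check_null_vector(A, (-t, -3 * t, t)) for t in (0, 1, -2, 7)))  # True
```

Checking a few parameter values does not prove the parametrization is complete, but it quickly catches sign or arithmetic slips.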

36. Reduce the augmented matrix of the system:

[1 + 2i 1 − i 1 | 0]
[i 1 + i −i | 0]
[2i 1 1 + 3i | 0]

1∼

[i 1 + i −i | 0]
[1 + 2i 1 − i 1 | 0]
[2i 1 1 + 3i | 0]

2∼

[1 1 − i −1 | 0]
[1 + 2i 1 − i 1 | 0]
[2i 1 1 + 3i | 0]

3∼

[1 1 − i −1 | 0]
[0 −2 − 2i 1 + 2i | 0]
[0 −1 − 2i 1 + 5i | 0]

4∼

[1 1 − i −1 | 0]
[0 −2 − 2i 1 + 2i | 0]
[0 1 3i | 0]

5∼

[1 1 − i −1 | 0]
[0 0 −5 + 8i | 0]
[0 1 3i | 0]

6∼

[1 1 − i −1 | 0]
[0 1 3i | 0]
[0 0 −5 + 8i | 0]

7∼

[1 1 − i −1 | 0]
[0 1 3i | 0]
[0 0 1 | 0]

8∼

[1 0 0 | 0]
[0 1 0 | 0]
[0 0 1 | 0]

.

1. P12 2. M1(−i) 3. A12(−1 − 2i), A13(−2i) 4. A23(−1) 5. A32(2 + 2i)
6. P23 7. M3(1/(−5 + 8i)) 8. A21(−1 + i), A31(1), A32(−3i)

Therefore, the unique solution to this system is x1 = x2 = x3 = 0: (0, 0, 0).

37. Reduce the augmented matrix of the system:

[3 2 1 | 0]
[6 −1 2 | 0]
[12 6 4 | 0]

1∼

[1 2/3 1/3 | 0]
[6 −1 2 | 0]
[12 6 4 | 0]

2∼

[1 2/3 1/3 | 0]
[0 −5 0 | 0]
[0 −2 0 | 0]

3∼

[1 2/3 1/3 | 0]
[0 1 0 | 0]
[0 −2 0 | 0]

4∼

[1 0 1/3 | 0]
[0 1 0 | 0]
[0 0 0 | 0]

.

1. M1(1/3) 2. A12(−6), A13(−12) 3. M2(−1/5) 4. A21(−2/3), A23(2)

From the last augmented matrix, we have x1 + (1/3)x3 = 0 and x2 = 0. Since x3 is a free variable, we can let x3 = −3t, where t is a real number; then x1 = t. It follows that the solution set for the given system is given by {(t, 0, −3t) : t ∈ R}.

38. Reduce the augmented matrix of the system:

[2 1 −8 | 0]
[3 −2 −5 | 0]
[5 −6 −3 | 0]
[3 −5 1 | 0]

1∼

[3 −2 −5 | 0]
[2 1 −8 | 0]
[5 −6 −3 | 0]
[3 −5 1 | 0]

2∼

[1 −3 3 | 0]
[2 1 −8 | 0]
[5 −6 −3 | 0]
[3 −5 1 | 0]

3∼

[1 −3 3 | 0]
[0 7 −14 | 0]
[0 9 −18 | 0]
[0 4 −8 | 0]

4∼

[1 −3 3 | 0]
[0 1 −2 | 0]
[0 9 −18 | 0]
[0 4 −8 | 0]

5∼

[1 0 −3 | 0]
[0 1 −2 | 0]
[0 0 0 | 0]
[0 0 0 | 0]

.

1. P12 2. A21(−1) 3. A12(−2), A13(−5), A14(−3) 4. M2(1/7) 5. A21(3), A23(−9), A24(−4)


From the last augmented matrix we have x1 − 3x3 = 0 and x2 − 2x3 = 0. Since x3 is a free variable, we let x3 = t, where t is a real number. It follows that x2 = 2t and x1 = 3t. Thus, the solution set for the given system is given by {(3t, 2t, t) : t ∈ R}.

39. Reduce the augmented matrix of the system:

[1 1 + i 1 − i | 0]
[i 1 i | 0]
[1 − 2i −1 + i 1 − 3i | 0]

1∼

[1 1 + i 1 − i | 0]
[0 2 − i −1 | 0]
[0 −4 + 2i 2 | 0]

2∼

[1 1 + i 1 − i | 0]
[0 2 − i −1 | 0]
[0 0 0 | 0]

3∼

[1 1 + i 1 − i | 0]
[0 1 (−2 − i)/5 | 0]
[0 0 0 | 0]

4∼

[1 0 (6 − 2i)/5 | 0]
[0 1 (−2 − i)/5 | 0]
[0 0 0 | 0]

.

1. A12(−i), A13(−1 + 2i) 2. A23(2) 3. M2(1/(2 − i)) 4. A21(−1 − i)

From the last augmented matrix we see that x3 is a free variable. We set x3 = 5s, where s ∈ C. Then x1 = 2(i − 3)s and x2 = (2 + i)s. Thus, the solution set of the system is {(2(i − 3)s, (2 + i)s, s) : s ∈ C}.

40. Reduce the augmented matrix of the system:

[1 −1 1 | 0]
[0 3 2 | 0]
[3 0 −1 | 0]
[5 1 −1 | 0]

1∼

[1 −1 1 | 0]
[0 3 2 | 0]
[0 3 −4 | 0]
[0 6 −6 | 0]

2∼

[1 −1 1 | 0]
[0 1 2/3 | 0]
[0 3 −4 | 0]
[0 6 −6 | 0]

3∼

[1 0 5/3 | 0]
[0 1 2/3 | 0]
[0 0 −6 | 0]
[0 0 −10 | 0]

4∼

[1 0 5/3 | 0]
[0 1 2/3 | 0]
[0 0 1 | 0]
[0 0 −10 | 0]

5∼

[1 0 0 | 0]
[0 1 0 | 0]
[0 0 1 | 0]
[0 0 0 | 0]

.

1. A13(−3), A14(−5) 2. M2(1/3) 3. A21(1), A23(−3), A24(−6)

4. M3(−1/6) 5. A31(−5/3), A32(−2/3), A34(10)

Therefore, the unique solution to this system is x1 = x2 = x3 = 0: (0, 0, 0).

41. Reduce the augmented matrix of the system:

[2 −4 6 | 0]
[3 −6 9 | 0]
[1 −2 3 | 0]
[5 −10 15 | 0]

1∼

[1 −2 3 | 0]
[3 −6 9 | 0]
[2 −4 6 | 0]
[5 −10 15 | 0]

2∼

[1 −2 3 | 0]
[0 0 0 | 0]
[0 0 0 | 0]
[0 0 0 | 0].

1. P13 2. A12(−3), A13(−2), A14(−5)

From the last matrix we have that x1 − 2x2 + 3x3 = 0. Since x2 and x3 are free variables, let x2 = s and let x3 = t, where s and t are real numbers. The solution set of the given system is therefore {(2s − 3t, s, t) : s, t ∈ R}.

42. Reduce the augmented matrix of the system:

[4 −2 −1 −1 | 0]
[3 1 −2 3 | 0]
[5 −1 −2 1 | 0]

1∼

[1 −3 1 −4 | 0]
[3 1 −2 3 | 0]
[5 −1 −2 1 | 0]

2∼

[1 −3 1 −4 | 0]
[0 10 −5 15 | 0]
[0 14 −7 21 | 0]


3∼

[1 −3 1 −4 | 0]
[0 2 −1 3 | 0]
[0 2 −1 3 | 0]

4∼

[1 −3 1 −4 | 0]
[0 2 −1 3 | 0]
[0 0 0 0 | 0]

5∼

[1 −3 1 −4 | 0]
[0 1 −1/2 3/2 | 0]
[0 0 0 0 | 0]

.

1. A21(−1) 2. A12(−3), A13(−5) 3. M2(1/5), M3(1/7)

4. A23(−1) 5. M2(1/2)

From the last augmented matrix above we have that x2 − (1/2)x3 + (3/2)x4 = 0 and x1 − 3x2 + x3 − 4x4 = 0. Since x3 and x4 are free variables, we can set x3 = 2s and x4 = 2t, where s and t are real numbers. Then x2 = s − 3t and x1 = s − t. It follows that the solution set of the given system is {(s − t, s − 3t, 2s, 2t) : s, t ∈ R}.

43. Reduce the augmented matrix of the system:

[2 1 −1 1 | 0]
[1 1 1 −1 | 0]
[3 −1 1 −2 | 0]
[4 2 −1 1 | 0]

1∼

[1 1 1 −1 | 0]
[2 1 −1 1 | 0]
[3 −1 1 −2 | 0]
[4 2 −1 1 | 0]

2∼

[1 1 1 −1 | 0]
[0 −1 −3 3 | 0]
[0 −4 −2 1 | 0]
[0 −2 −5 5 | 0]

3∼

[1 1 1 −1 | 0]
[0 1 3 −3 | 0]
[0 −4 −2 1 | 0]
[0 −2 −5 5 | 0]

4∼

[1 0 −2 2 | 0]
[0 1 3 −3 | 0]
[0 0 10 −11 | 0]
[0 0 −3 3 | 0]

5∼

[1 0 −2 2 | 0]
[0 1 3 −3 | 0]
[0 0 −3 3 | 0]
[0 0 10 −11 | 0]

6∼

[1 0 −2 2 | 0]
[0 1 3 −3 | 0]
[0 0 1 −1 | 0]
[0 0 10 −11 | 0]

7∼

[1 0 0 0 | 0]
[0 1 0 0 | 0]
[0 0 1 −1 | 0]
[0 0 0 −1 | 0]

8∼

[1 0 0 0 | 0]
[0 1 0 0 | 0]
[0 0 1 −1 | 0]
[0 0 0 1 | 0]

9∼

[1 0 0 0 | 0]
[0 1 0 0 | 0]
[0 0 1 0 | 0]
[0 0 0 1 | 0]

.

1. P12 2. A12(−2), A13(−3), A14(−4) 3. M2(−1) 4. A21(−1), A23(4), A24(2)

5. P34 6. M3(−1/3) 7. A31(2), A32(−3), A34(−10) 8. M4(−1) 9. A43(1)

From the last augmented matrix, it follows that the solution set to the system is given by {(0, 0, 0, 0)}.

44. The equation Ax = 0 is

[2 −1] [x1]   [0]
[3 4]  [x2] = [0].

Reduce the augmented matrix of the system:

[2 −1 | 0]
[3 4 | 0]

1∼

[1 −1/2 | 0]
[3 4 | 0]

2∼

[1 −1/2 | 0]
[0 11/2 | 0]

3∼

[1 −1/2 | 0]
[0 1 | 0]

4∼

[1 0 | 0]
[0 1 | 0].

1. M1(1/2) 2. A12(−3) 3. M2(2/11) 4. A21(1/2)

From the last augmented matrix, we see that x1 = x2 = 0. Hence, the solution set is {(0, 0)}.

45. The equation Ax = 0 is

[1 − i 2i] [x1]   [0]
[1 + i −2] [x2] = [0].

Reduce the augmented matrix of the system:

[1 − i 2i | 0]
[1 + i −2 | 0]

1∼

[1 −1 + i | 0]
[1 + i −2 | 0]

2∼

[1 −1 + i | 0]
[0 0 | 0].


1. M1((1 + i)/2) 2. A12(−1 − i)

It follows that x1 + (−1 + i)x2 = 0. Since x2 is a free variable, we can let x2 = t, where t is a complex number. The solution set to the system is then given by {(t(1 − i), t) : t ∈ C}.

46. The equation Ax = 0 is

[1 + i 1 − 2i] [x1]   [0]
[−1 + i 2 + i] [x2] = [0].

Reduce the augmented matrix of the system:

[1 + i 1 − 2i | 0]
[−1 + i 2 + i | 0]

1∼

[1 −(1 + 3i)/2 | 0]
[−1 + i 2 + i | 0]

2∼

[1 −(1 + 3i)/2 | 0]
[0 0 | 0].

1. M1((1 − i)/2) 2. A12(1 − i)

It follows that x1 − ((1 + 3i)/2)x2 = 0. Since x2 is a free variable, we can let x2 = r, where r is any complex number. Thus, the solution set to the given system is {(((1 + 3i)/2)r, r) : r ∈ C}.

47. The equation Ax = 0 is

[1 2 3]  [x1]   [0]
[2 −1 0] [x2] = [0]
[1 1 1]  [x3]   [0].

Reduce the augmented matrix of the system:

[1 2 3 | 0]
[2 −1 0 | 0]
[1 1 1 | 0]

1∼

[1 2 3 | 0]
[0 −5 −6 | 0]
[0 −1 −2 | 0]

2∼

[1 2 3 | 0]
[0 −1 −2 | 0]
[0 −5 −6 | 0]

3∼

[1 2 3 | 0]
[0 1 2 | 0]
[0 −5 −6 | 0]

4∼

[1 0 −1 | 0]
[0 1 2 | 0]
[0 0 4 | 0]

5∼

[1 0 −1 | 0]
[0 1 2 | 0]
[0 0 1 | 0]

6∼

[1 0 0 | 0]
[0 1 0 | 0]
[0 0 1 | 0].

1. A12(−2), A13(−1) 2. P23 3. M2(−1) 4. A21(−2), A23(5) 5. M3(1/4) 6. A31(1), A32(−2)

From the last augmented matrix, we see that the only solution to the given system is x1 = x2 = x3 = 0: {(0, 0, 0)}.

48. The equation Ax = 0 is

[1 1 1 −1]  [x1]   [0]
[−1 0 −1 2] [x2] = [0]
[1 3 2 2]   [x3]   [0]
            [x4]

Reduce the augmented matrix of the system:

[1 1 1 −1 | 0]
[−1 0 −1 2 | 0]
[1 3 2 2 | 0]

1∼

[1 1 1 −1 | 0]
[0 1 0 1 | 0]
[0 2 1 3 | 0]

2∼

[1 0 1 −2 | 0]
[0 1 0 1 | 0]
[0 0 1 1 | 0]

3∼

[1 0 0 −3 | 0]
[0 1 0 1 | 0]
[0 0 1 1 | 0].

1. A12(1), A13(−1) 2. A21(−1), A23(−2) 3. A31(−1)


From the last augmented matrix, we see that x4 is a free variable. We set x4 = t, where t is a real number. The last row of the reduced row-echelon form above corresponds to the equation x3 + x4 = 0. Therefore, x3 = −t. The second row corresponds to the equation x2 + x4 = 0, so we likewise find that x2 = −t. Finally, from the first equation we have x1 − 3x4 = 0, so that x1 = 3t. Consequently, the solution set of the original system is given by {(3t, −t, −t, t) : t ∈ R}.

49. The equation Ax = 0 is

[2 − 3i 1 + i i − 1]   [x1]   [0]
[3 + 2i −1 + i −1 − i] [x2] = [0]
[5 − i 2i −2]          [x3]   [0].

Reduce the augmented matrix of this system:

[2 − 3i 1 + i i − 1 | 0]
[3 + 2i −1 + i −1 − i | 0]
[5 − i 2i −2 | 0]

1∼

[1 (−1 + 5i)/13 (−5 − i)/13 | 0]
[3 + 2i −1 + i −1 − i | 0]
[5 − i 2i −2 | 0]

2∼

[1 (−1 + 5i)/13 (−5 − i)/13 | 0]
[0 0 0 | 0]
[0 0 0 | 0].

1. M1((2 + 3i)/13) 2. A12(−3 − 2i), A13(−5 + i)

From the last augmented matrix, we see that x1 + ((−1 + 5i)/13)x2 + ((−5 − i)/13)x3 = 0. Since x2 and x3 are free variables, we can let x2 = 13r and x3 = 13s, where r and s are complex numbers. It follows that the solution set of the system is {(r(1 − 5i) + s(5 + i), r, s) : r, s ∈ C}.

50. The equation Ax = 0 is

[1 3 0]   [x1]   [0]
[−2 −3 0] [x2] = [0]
[1 4 0]   [x3]   [0].

Reduce the augmented matrix of the system:

[1 3 0 | 0]
[−2 −3 0 | 0]
[1 4 0 | 0]

1∼

[1 3 0 | 0]
[0 3 0 | 0]
[0 1 0 | 0]

2∼

[1 3 0 | 0]
[0 1 0 | 0]
[0 3 0 | 0]

3∼

[1 0 0 | 0]
[0 1 0 | 0]
[0 0 0 | 0].

1. A12(2), A13(−1) 2. P23 3. A21(−3), A23(−3)

From the last augmented matrix we see that the solution set of the system is {(0, 0, t) : t ∈ R}.

51. The equation Ax = 0 is

[1 0 3]          [0]
[3 −1 7]  [x1]   [0]
[2 1 8]   [x2] = [0]
[1 1 5]   [x3]   [0]
[−1 1 −1]        [0].

Reduce the augmented matrix of the system:

[1 0 3 | 0]
[3 −1 7 | 0]
[2 1 8 | 0]
[1 1 5 | 0]
[−1 1 −1 | 0]

1∼

[1 0 3 | 0]
[0 −1 −2 | 0]
[0 1 2 | 0]
[0 1 2 | 0]
[0 1 2 | 0]

2∼

[1 0 3 | 0]
[0 1 2 | 0]
[0 1 2 | 0]
[0 1 2 | 0]
[0 1 2 | 0]

3∼

[1 0 3 | 0]
[0 1 2 | 0]
[0 0 0 | 0]
[0 0 0 | 0]
[0 0 0 | 0].


1. A12(−3), A13(−2), A14(−1), A15(1) 2. M2(−1) 3. A23(−1), A24(−1), A25(−1)

From the last augmented matrix, we obtain the equations x1 + 3x3 = 0 and x2 + 2x3 = 0. Since x3 is a free variable, we let x3 = t, where t is a real number. The solution set for the given system is then given by {(−3t, −2t, t) : t ∈ R}.

52. The equation Ax = 0 is

[1 −1 0 1] [x1]   [0]
[3 −2 0 5] [x2] = [0]
[−1 2 0 1] [x3]   [0]
           [x4]

Reduce the augmented matrix of the system:

[1 −1 0 1 | 0]
[3 −2 0 5 | 0]
[−1 2 0 1 | 0]

1∼

[1 −1 0 1 | 0]
[0 1 0 2 | 0]
[0 1 0 2 | 0]

2∼

[1 0 0 3 | 0]
[0 1 0 2 | 0]
[0 0 0 0 | 0].

1. A12(−3), A13(1) 2. A21(1), A23(−1)

From the last augmented matrix we obtain the equations x1 + 3x4 = 0 and x2 + 2x4 = 0. Because x3 and x4 are free, we let x3 = t and x4 = s, where s and t are real numbers. It follows that the solution set of the system is {(−3s, −2s, t, s) : s, t ∈ R}.

53. The equation Ax = 0 is

[1 0 −3 0]  [x1]   [0]
[3 0 −9 0]  [x2] = [0]
[−2 0 6 0]  [x3]   [0]
            [x4]

Reduce the augmented matrix of the system:

[1 0 −3 0 | 0]
[3 0 −9 0 | 0]
[−2 0 6 0 | 0]

1∼

[1 0 −3 0 | 0]
[0 0 0 0 | 0]
[0 0 0 0 | 0].

1. A12(−3), A13(2)

From the last augmented matrix we obtain x1 − 3x3 = 0. Therefore, x2, x3, and x4 are free variables, so we let x2 = r, x3 = s, and x4 = t, where r, s, t are real numbers. The solution set of the given system is therefore {(3s, r, s, t) : r, s, t ∈ R}.

54. The equation Ax = 0 is

[2 + i i 3 − 2i]     [x1]   [0]
[i 1 − i 4 + 3i]     [x2] = [0]
[3 − i 1 + i 1 + 5i] [x3]   [0].

Reduce the augmented matrix of the system:

[2 + i i 3 − 2i | 0]
[i 1 − i 4 + 3i | 0]
[3 − i 1 + i 1 + 5i | 0]

1∼

[i 1 − i 4 + 3i | 0]
[2 + i i 3 − 2i | 0]
[3 − i 1 + i 1 + 5i | 0]

2∼

[1 −1 − i 3 − 4i | 0]
[2 + i i 3 − 2i | 0]
[3 − i 1 + i 1 + 5i | 0]


3∼

[1 −1 − i 3 − 4i | 0]
[0 1 + 4i −7 + 3i | 0]
[0 5 + 3i −4 + 20i | 0]

4∼

[1 −1 − i 3 − 4i | 0]
[0 1 (5 + 31i)/17 | 0]
[0 5 + 3i −4 + 20i | 0]

5∼

[1 0 (25 − 32i)/17 | 0]
[0 1 (5 + 31i)/17 | 0]
[0 0 10i | 0]

6∼

[1 0 (25 − 32i)/17 | 0]
[0 1 (5 + 31i)/17 | 0]
[0 0 1 | 0]

7∼

[1 0 0 | 0]
[0 1 0 | 0]
[0 0 1 | 0].

1. P12 2. M1(−i) 3. A12(−2 − i), A13(−3 + i) 4. M2((1 − 4i)/17) 5. A21(1 + i), A23(−5 − 3i)
6. M3(−i/10) 7. A31(−(25 − 32i)/17), A32(−(5 + 31i)/17)

From the last augmented matrix above, we see that the only solution to this system is the trivial solution.

Solutions to Section 2.6

True-False Review:

1. FALSE. An invertible matrix is also known as a nonsingular matrix.

2. FALSE. For instance, the matrix

[1 1]
[2 2]

does not contain a row of zeros, but fails to be invertible.

3. TRUE. If A is invertible, then the unique solution to Ax = b is x = A−1b.

4. FALSE. For instance, if A = [1 0 0; 0 0 1] and

B =
[1 0]
[0 0]
[0 1],

then AB = I2, but A is not even a square matrix, hence certainly not invertible.

5. FALSE. For instance, if A = In and B = −In, then A and B are both invertible, but A + B = 0n is not invertible.

6. TRUE. We have

(AB)(B⁻¹A⁻¹) = In and (B⁻¹A⁻¹)(AB) = In,

and therefore, AB is invertible, with inverse B⁻¹A⁻¹.
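The identity in review question 6 can also be spot-checked numerically; here is a small sketch with 2 × 2 matrices of our own choosing (helper names ours), using the adjugate formula for the inverse:

```python
from fractions import Fraction

def matmul(X, Y):
    """2 x 2 matrix product with exact arithmetic."""
    return [[sum(Fraction(X[i][k]) * Fraction(Y[k][j]) for k in range(2))
             for j in range(2)] for i in range(2)]

def inv2(M):
    """Inverse of a 2 x 2 matrix via the adjugate formula (det must be nonzero)."""
    a, b = M[0]
    c, d = M[1]
    det = Fraction(a) * d - Fraction(b) * c
    return [[d / det, -b / det], [-c / det, a / det]]

# Check (AB)^(-1) = B^(-1) A^(-1) on sample invertible matrices.
A = [[2, 1], [1, 1]]
B = [[1, 3], [0, 1]]
print(inv2(matmul(A, B)) == matmul(inv2(B), inv2(A)))  # True
```

A numeric check on one pair is of course not a proof; the one-line proof is the displayed identity above.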

7. TRUE. From A² = A, we subtract to obtain A(A − I) = 0. Left multiplying both sides of this equation by A⁻¹ (since A is invertible, A⁻¹ exists), we have A − I = A⁻¹0 = 0. Therefore, A = I, the identity matrix.

8. TRUE. From AB = AC, we left-multiply both sides by A⁻¹ (since A is invertible, A⁻¹ exists) to obtain A⁻¹AB = A⁻¹AC. Since A⁻¹A = I, we obtain IB = IC, or B = C.

9. TRUE. Any 5× 5 invertible matrix must have rank 5, not rank 4 (Theorem 2.6.5).

10. TRUE. Any 6× 6 matrix of rank 6 is invertible (Theorem 2.6.5).

Problems:

1. We have

AA⁻¹ = [2 −1; 3 −1][−1 1; −3 2] = [(2)(−1) + (−1)(−3)  (2)(1) + (−1)(2); (3)(−1) + (−1)(−3)  (3)(1) + (−1)(2)] = [1 0; 0 1] = I2.

2. We have

AA⁻¹ = [4 9; 3 7][7 −9; −3 4] = [(4)(7) + (9)(−3)  (4)(−9) + (9)(4); (3)(7) + (7)(−3)  (3)(−9) + (7)(4)] = [1 0; 0 1] = I2.


3. We have

AA⁻¹ = [3 5 1; 1 2 1; 2 6 7][8 −29 3; −5 19 −2; 2 −8 1]

= [(3)(8) + (5)(−5) + (1)(2)  (3)(−29) + (5)(19) + (1)(−8)  (3)(3) + (5)(−2) + (1)(1);
   (1)(8) + (2)(−5) + (1)(2)  (1)(−29) + (2)(19) + (1)(−8)  (1)(3) + (2)(−2) + (1)(1);
   (2)(8) + (6)(−5) + (7)(2)  (2)(−29) + (6)(19) + (7)(−8)  (2)(3) + (6)(−2) + (7)(1)]

= [1 0 0; 0 1 0; 0 0 1] = I3.

4. We have

[A|I2] =
[1 2 | 1 0]
[1 3 | 0 1]

1∼

[1 2 | 1 0]
[0 1 | −1 1]

2∼

[1 0 | 3 −2]
[0 1 | −1 1]

= [I2|A⁻¹].

Therefore,

A⁻¹ = [3 −2; −1 1].

1. A12(−1) 2. A21(−2)
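The [A | I] → [I | A⁻¹] procedure used in Problems 4-16 can be sketched in a few lines of Python (function name ours; exact `Fraction` arithmetic so the entries match the hand computation):

```python
from fractions import Fraction

def invert(A):
    """Invert a square matrix by row reducing [A | I] to [I | A^(-1)].
    Raises StopIteration if A is not invertible (no pivot available)."""
    n = len(A)
    # Adjoin the identity matrix on the right.
    m = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        # P: swap a nonzero pivot into place.
        pivot = next(r for r in range(col, n) if m[r][col] != 0)
        m[col], m[pivot] = m[pivot], m[col]
        # M: normalize the pivot row.
        m[col] = [x / m[col][col] for x in m[col]]
        # A: clear the rest of the column.
        for r in range(n):
            if r != col and m[r][col] != 0:
                f = m[r][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[col])]
    return [row[n:] for row in m]

# Problem 4's matrix: the text finds A^(-1) = [3 -2; -1 1].
print(invert([[1, 2], [1, 3]]))
```

The sketch assumes A is invertible; for a singular A the pivot search fails, which corresponds to the row of zeros obtained in Problems 10 and 14.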

5. We have

[A|I2] =
[1 1 + i | 1 0]
[1 − i 1 | 0 1]

1∼

[1 1 + i | 1 0]
[0 −1 | −1 + i 1]

2∼

[1 1 + i | 1 0]
[0 1 | 1 − i −1]

3∼

[1 0 | −1 1 + i]
[0 1 | 1 − i −1]

= [I2|A⁻¹].

Thus,

A⁻¹ = [−1 1 + i; 1 − i −1].

1. A12(−1 + i) 2. M2(−1) 3. A21(−1− i)

6. We have

[A|I2] =
[1 −i | 1 0]
[i − 1 2 | 0 1]

1∼

[1 −i | 1 0]
[0 1 − i | 1 − i 1]

2∼

[1 −i | 1 0]
[0 1 | 1 (1 + i)/2]

3∼

[1 0 | 1 + i (−1 + i)/2]
[0 1 | 1 (1 + i)/2]

= [I2|A⁻¹].

Thus,

A⁻¹ = [1 + i (−1 + i)/2; 1 (1 + i)/2].

1. A12(1− i) 2. M2(1/(1− i)) 3. A21(i)


7. Note that AB = 02 for all 2× 2 matrices B. Therefore, A is not invertible.

8. We have

[A|I3] =
[1 −1 2 | 1 0 0]
[2 1 11 | 0 1 0]
[4 −3 10 | 0 0 1]

1∼

[1 −1 2 | 1 0 0]
[0 3 7 | −2 1 0]
[0 1 2 | −4 0 1]

2∼

[1 −1 2 | 1 0 0]
[0 1 2 | −4 0 1]
[0 3 7 | −2 1 0]

3∼

[1 0 4 | −3 0 1]
[0 1 2 | −4 0 1]
[0 0 1 | 10 1 −3]

4∼

[1 0 0 | −43 −4 13]
[0 1 0 | −24 −2 7]
[0 0 1 | 10 1 −3]

= [I3|A⁻¹].

Thus,

A⁻¹ =
[−43 −4 13]
[−24 −2 7]
[10 1 −3].

1. A12(−2), A13(−4) 2. P23 3. A21(1), A23(−3) 4. A31(−4), A32(−2)

9. We have

[A|I3] =
[3 5 1 | 1 0 0]
[1 2 1 | 0 1 0]
[2 6 7 | 0 0 1]

1∼

[1 2 1 | 0 1 0]
[3 5 1 | 1 0 0]
[2 6 7 | 0 0 1]

2∼

[1 2 1 | 0 1 0]
[0 −1 −2 | 1 −3 0]
[0 2 5 | 0 −2 1]

3∼

[1 2 1 | 0 1 0]
[0 1 2 | −1 3 0]
[0 2 5 | 0 −2 1]

4∼

[1 0 −3 | 2 −5 0]
[0 1 2 | −1 3 0]
[0 0 1 | 2 −8 1]

5∼

[1 0 0 | 8 −29 3]
[0 1 0 | −5 19 −2]
[0 0 1 | 2 −8 1]

= [I3|A⁻¹].

Thus,

A⁻¹ =
[8 −29 3]
[−5 19 −2]
[2 −8 1].

1. P12 2. A12(−3), A13(−2) 3. M2(−1) 4. A21(−2), A23(−2) 5. A31(3), A32(−2)

10. This matrix is not invertible, because the column of zeros guarantees that the rank of the matrix is less than three.

11. We have

[A|I3] =
[4 2 −13 | 1 0 0]
[2 1 −7 | 0 1 0]
[3 2 4 | 0 0 1]

1∼

[3 2 4 | 0 0 1]
[2 1 −7 | 0 1 0]
[4 2 −13 | 1 0 0]

2∼

[1 1 11 | 0 −1 1]
[2 1 −7 | 0 1 0]
[4 2 −13 | 1 0 0]

3∼

[1 1 11 | 0 −1 1]
[0 −1 −29 | 0 3 −2]
[0 −2 −57 | 1 4 −4]

4∼

[1 1 11 | 0 −1 1]
[0 1 29 | 0 −3 2]
[0 −2 −57 | 1 4 −4]

5∼

[1 0 −18 | 0 2 −1]
[0 1 29 | 0 −3 2]
[0 0 1 | 1 −2 0]

6∼

[1 0 0 | 18 −34 −1]
[0 1 0 | −29 55 2]
[0 0 1 | 1 −2 0]

= [I3|A⁻¹].


Thus,

A⁻¹ =
[18 −34 −1]
[−29 55 2]
[1 −2 0].

1. P13 2. A21(−1) 3. A12(−2), A13(−4) 4. M2(−1)

5. A21(−1), A23(2) 6. A31(18), A32(−29)

12. We have

[A|I3] =
[1 2 −3 | 1 0 0]
[2 6 −2 | 0 1 0]
[−1 1 4 | 0 0 1]

1∼

[1 2 −3 | 1 0 0]
[0 2 4 | −2 1 0]
[0 3 1 | 1 0 1]

2∼

[1 2 −3 | 1 0 0]
[0 1 2 | −1 1/2 0]
[0 3 1 | 1 0 1]

3∼

[1 0 −7 | 3 −1 0]
[0 1 2 | −1 1/2 0]
[0 0 −5 | 4 −3/2 1]

4∼

[1 0 −7 | 3 −1 0]
[0 1 2 | −1 1/2 0]
[0 0 1 | −4/5 3/10 −1/5]

5∼

[1 0 0 | −13/5 11/10 −7/5]
[0 1 0 | 3/5 −1/10 2/5]
[0 0 1 | −4/5 3/10 −1/5]

= [I3|A⁻¹].

Thus,

A⁻¹ =
[−13/5 11/10 −7/5]
[3/5 −1/10 2/5]
[−4/5 3/10 −1/5].

1. A12(−2), A13(1) 2. M2(1/2) 3. A21(−2), A23(−3) 4. M3(−1/5) 5. A31(7), A32(−2)

13. We have

[A|I3] =
[1 i 2 | 1 0 0]
[1 + i −1 2i | 0 1 0]
[2 2i 5 | 0 0 1]

1∼

[1 i 2 | 1 0 0]
[0 −i −2 | −1 − i 1 0]
[0 0 1 | −2 0 1]

2∼

[1 i 2 | 1 0 0]
[0 1 −2i | 1 − i i 0]
[0 0 1 | −2 0 1]

3∼

[1 0 0 | −i 1 0]
[0 1 −2i | 1 − i i 0]
[0 0 1 | −2 0 1]

4∼

[1 0 0 | −i 1 0]
[0 1 0 | 1 − 5i i 2i]
[0 0 1 | −2 0 1]

= [I3|A⁻¹].

Thus,

A⁻¹ =
[−i 1 0]
[1 − 5i i 2i]
[−2 0 1].

1. A12(−1− i), A13(−2) 2. M2(i) 3. A21(−i) 4. A32(2i)

14. We have

[A|I3] =
[2 1 3 | 1 0 0]
[1 −1 2 | 0 1 0]
[3 3 4 | 0 0 1]

1∼

[1 −1 2 | 0 1 0]
[2 1 3 | 1 0 0]
[3 3 4 | 0 0 1]

2∼

[1 −1 2 | 0 1 0]
[0 3 −1 | 1 −2 0]
[0 6 −2 | 0 −3 1]


3∼

[1 −1 2 | 0 1 0]
[0 3 −1 | 1 −2 0]
[0 0 0 | −2 1 1]

Since 2 = rank(A) < rank(A#) = 3, we know that A⁻¹ does not exist (we have obtained a row of zeros in the block matrix on the left).

1. P12 2. A12(−2), A13(−3) 3. A23(−2)

15. We have

[A|I4] =
[1 −1 2 3 | 1 0 0 0]
[2 0 3 −4 | 0 1 0 0]
[3 −1 7 8 | 0 0 1 0]
[1 0 3 5 | 0 0 0 1]

1∼

[1 −1 2 3 | 1 0 0 0]
[0 2 −1 −10 | −2 1 0 0]
[0 2 1 −1 | −3 0 1 0]
[0 1 1 2 | −1 0 0 1]

2∼

[1 −1 2 3 | 1 0 0 0]
[0 1 1 2 | −1 0 0 1]
[0 2 1 −1 | −3 0 1 0]
[0 2 −1 −10 | −2 1 0 0]

3∼

[1 0 3 5 | 0 0 0 1]
[0 1 1 2 | −1 0 0 1]
[0 0 −1 −5 | −1 0 1 −2]
[0 0 −3 −14 | 0 1 0 −2]

4∼

[1 0 3 5 | 0 0 0 1]
[0 1 1 2 | −1 0 0 1]
[0 0 1 5 | 1 0 −1 2]
[0 0 −3 −14 | 0 1 0 −2]

5∼

[1 0 0 −10 | −3 0 3 −5]
[0 1 0 −3 | −2 0 1 −1]
[0 0 1 5 | 1 0 −1 2]
[0 0 0 1 | 3 1 −3 4]

6∼

[1 0 0 0 | 27 10 −27 35]
[0 1 0 0 | 7 3 −8 11]
[0 0 1 0 | −14 −5 14 −18]
[0 0 0 1 | 3 1 −3 4]

= [I4|A⁻¹].

Thus,

A⁻¹ =
[27 10 −27 35]
[7 3 −8 11]
[−14 −5 14 −18]
[3 1 −3 4].

1. A12(−2), A13(−3), A14(−1) 2. P24 3. A21(1), A23(−2), A24(−2)
4. M3(−1) 5. A31(−3), A32(−1), A34(3) 6. A41(10), A42(3), A43(−5)

16. We have

[A|I4] = [0 −2 −1 −3 | 1 0 0 0; 2 0 2 1 | 0 1 0 0; 1 −2 0 2 | 0 0 1 0; 3 −1 −2 0 | 0 0 0 1]
1∼ [1 −2 0 2 | 0 0 1 0; 2 0 2 1 | 0 1 0 0; 0 −2 −1 −3 | 1 0 0 0; 3 −1 −2 0 | 0 0 0 1]
2∼ [1 −2 0 2 | 0 0 1 0; 0 4 2 −3 | 0 1 −2 0; 0 −2 −1 −3 | 1 0 0 0; 0 5 −2 −6 | 0 0 −3 1]
3∼ [1 −2 0 2 | 0 0 1 0; 0 1 1/2 −3/4 | 0 1/4 −1/2 0; 0 −2 −1 −3 | 1 0 0 0; 0 5 −2 −6 | 0 0 −3 1]
4∼ [1 0 1 1/2 | 0 1/2 0 0; 0 1 1/2 −3/4 | 0 1/4 −1/2 0; 0 0 0 −9/2 | 1 1/2 −1 0; 0 0 −9/2 −9/4 | 0 −5/4 −1/2 1]
5∼ [1 0 1 1/2 | 0 1/2 0 0; 0 1 1/2 −3/4 | 0 1/4 −1/2 0; 0 0 −9/2 −9/4 | 0 −5/4 −1/2 1; 0 0 0 −9/2 | 1 1/2 −1 0]
6∼ [1 0 1 1/2 | 0 1/2 0 0; 0 1 1/2 −3/4 | 0 1/4 −1/2 0; 0 0 1 1/2 | 0 5/18 1/9 −2/9; 0 0 0 −9/2 | 1 1/2 −1 0]
7∼ [1 0 0 0 | 0 2/9 −1/9 2/9; 0 1 0 −1 | 0 1/9 −5/9 1/9; 0 0 1 1/2 | 0 5/18 1/9 −2/9; 0 0 0 −9/2 | 1 1/2 −1 0]
8∼ [1 0 0 0 | 0 2/9 −1/9 2/9; 0 1 0 −1 | 0 1/9 −5/9 1/9; 0 0 1 1/2 | 0 5/18 1/9 −2/9; 0 0 0 1 | −2/9 −1/9 2/9 0]
9∼ [1 0 0 0 | 0 2/9 −1/9 2/9; 0 1 0 0 | −2/9 0 −1/3 1/9; 0 0 1 0 | 1/9 1/3 0 −2/9; 0 0 0 1 | −2/9 −1/9 2/9 0]
= [I4|A⁻¹].

Thus,

A⁻¹ = [0 2/9 −1/9 2/9; −2/9 0 −1/3 1/9; 1/9 1/3 0 −2/9; −2/9 −1/9 2/9 0].

1. P13 2. A12(−2), A14(−3) 3. M2(1/4) 4. A21(2), A23(2), A24(−5) 5. P34 6. M3(−2/9) 7. A31(−1), A32(−1/2) 8. M4(−2/9) 9. A42(1), A43(−1/2)

17. To determine the second column vector of A⁻¹ without determining the whole inverse, we solve the linear system

[2 −1 4; 5 1 2; 1 −1 3][x; y; z] = [0; 1; 0].

18. We have A = [1 3; 2 5], b = [1; 3], and the Gauss-Jordan method yields A⁻¹ = [−5 3; 2 −1]. Therefore, we have

x = A⁻¹b = [−5 3; 2 −1][1; 3] = [4; −1].

So we have x1 = 4 and x2 = −1.
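Solving Ax = b through x = A⁻¹b is a single matrix-vector product, which takes only a few lines to replicate in Python (an illustrative check, not part of the textbook solution):

```python
# A and b from Problem 18; the inverse was computed by Gauss-Jordan in the text.
A = [[1, 3], [2, 5]]
Ainv = [[-5, 3], [2, -1]]
b = [1, 3]

# Matrix-vector product x = A^{-1} b.
x = [sum(Ainv[i][k] * b[k] for k in range(2)) for i in range(2)]
assert x == [4, -1]
```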

19. We have A = [1 1 −2; 0 1 1; 2 4 −3], b = [−2; 3; 1], and the Gauss-Jordan method yields

A⁻¹ = [7 5 −3; −2 −1 1; 2 2 −1].

Therefore, we have

x = A⁻¹b = [7 5 −3; −2 −1 1; 2 2 −1][−2; 3; 1] = [−2; 2; 1].

Hence, we have x1 = −2, x2 = 2, and x3 = 1.

20. We have A = [1 −2i; 2−i 4i], b = [2; −i], and the Gauss-Jordan method yields

A⁻¹ = (1/(2+8i)) [4i 2i; −2+i 1].

Therefore, we have

x = A⁻¹b = (1/(2+8i)) [4i 2i; −2+i 1][2; −i] = (1/(2+8i)) [2+8i; −4+i].

Hence, we have x1 = 1 and x2 = (−4+i)/(2+8i).

21. We have A = [3 4 5; 2 10 1; 4 1 8], b = [1; 1; 1], and the Gauss-Jordan method yields

A⁻¹ = [−79 27 46; 12 −4 −7; 38 −13 −22].

Therefore, we have

x = A⁻¹b = [−79 27 46; 12 −4 −7; 38 −13 −22][1; 1; 1] = [−6; 1; 3].

Hence, we have x1 = −6, x2 = 1, and x3 = 3.

22. We have A = [1 1 2; 1 2 −1; 2 −1 1], b = [12; 24; −36], and the Gauss-Jordan method yields

A⁻¹ = (1/12) [−1 3 5; 3 3 −3; 5 −3 −1].

Therefore, we have

x = A⁻¹b = (1/12) [−1 3 5; 3 3 −3; 5 −3 −1][12; 24; −36] = [−10; 18; 2].

Hence, x1 = −10, x2 = 18, and x3 = 2.

23. We have

AAᵀ = [0 1; −1 0][0 −1; 1 0] = [(0)(0)+(1)(1) (0)(−1)+(1)(0); (−1)(0)+(0)(1) (−1)(−1)+(0)(0)] = [1 0; 0 1] = I2,

so Aᵀ = A⁻¹.

24. We have

AAᵀ = [√3/2 1/2; −1/2 √3/2][√3/2 −1/2; 1/2 √3/2]
= [(√3/2)(√3/2)+(1/2)(1/2) (√3/2)(−1/2)+(1/2)(√3/2); (−1/2)(√3/2)+(√3/2)(1/2) (−1/2)(−1/2)+(√3/2)(√3/2)]
= [1 0; 0 1] = I2,

so Aᵀ = A⁻¹.

25. We have

AAᵀ = [cos α sin α; −sin α cos α][cos α −sin α; sin α cos α]
= [cos²α + sin²α (cos α)(−sin α)+(sin α)(cos α); (−sin α)(cos α)+(cos α)(sin α) (−sin α)² + cos²α]
= [1 0; 0 1] = I2,

so AT = A−1.
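The identity AAᵀ = I2 for this rotation matrix holds for every angle, which can be confirmed numerically. A short Python sketch (illustrative only; the sample angle 0.7 is not from the text):

```python
import math

def rotation(alpha):
    """The matrix of Problem 25: rows [cos a, sin a] and [-sin a, cos a]."""
    c, s = math.cos(alpha), math.sin(alpha)
    return [[c, s], [-s, c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

A = rotation(0.7)
At = [[A[j][i] for j in range(2)] for i in range(2)]   # transpose
P = matmul(A, At)

# P should equal I2 up to floating-point roundoff.
assert all(abs(P[i][j] - (i == j)) < 1e-12 for i in range(2) for j in range(2))
```

Floating point makes the product only approximately the identity, so the check uses a tolerance instead of exact equality.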

26. We have

AAᵀ = (1/(1+2x²)) [1 −2x 2x²; 2x 1−2x² −2x; 2x² 2x 1] · (1/(1+2x²)) [1 2x 2x²; −2x 1−2x² 2x; 2x² −2x 1]
= (1/(1+4x²+4x⁴)) [1+4x²+4x⁴ 0 0; 0 1+4x²+4x⁴ 0; 0 0 1+4x²+4x⁴] = I3

(note that (1+2x²)² = 1+4x²+4x⁴), so Aᵀ = A⁻¹.

27. For part 2, we have

(B⁻¹A⁻¹)(AB) = B⁻¹(A⁻¹A)B = B⁻¹InB = B⁻¹B = In,

and for part 3, we have

(A⁻¹)ᵀAᵀ = (AA⁻¹)ᵀ = Inᵀ = In.

28. We prove this by induction on k, with k = 1 trivial and k = 2 proven in part 2 of Theorem 2.6.9. Assuming the statement is true for a product involving k − 1 matrices, we may proceed as follows:

(A1A2 · · · Ak)⁻¹ = ((A1A2 · · · Ak−1)Ak)⁻¹ = Ak⁻¹(A1A2 · · · Ak−1)⁻¹ = Ak⁻¹(Ak−1⁻¹ · · · A2⁻¹A1⁻¹) = Ak⁻¹Ak−1⁻¹ · · · A2⁻¹A1⁻¹.

In the second equality, we have applied part 2 of Theorem 2.6.9 to the two matrices A1A2 · · · Ak−1 and Ak, and in the third equality, we have assumed that the desired property is true for products of k − 1 matrices.

29. Since A is symmetric, we know that Aᵀ = A. We wish to show that (A⁻¹)ᵀ = A⁻¹. We have

(A⁻¹)ᵀ = (Aᵀ)⁻¹ = A⁻¹,

which shows that A⁻¹ is symmetric. The first equality follows from part 3 of Theorem 2.6.9, and the second equality results from the assumption that A is symmetric.

30. Since A is skew-symmetric, we know that Aᵀ = −A. We wish to show that (A⁻¹)ᵀ = −A⁻¹. We have

(A⁻¹)ᵀ = (Aᵀ)⁻¹ = (−A)⁻¹ = −(A⁻¹),

which shows that A⁻¹ is skew-symmetric. The first equality follows from part 3 of Theorem 2.6.9, and the second equality results from the assumption that A is skew-symmetric.

31. We have

(In − A)(In + A + A² + A³) = In(In + A + A² + A³) − A(In + A + A² + A³) = In + A + A² + A³ − A − A² − A³ − A⁴ = In − A⁴ = In,

where the last equality uses the assumption that A⁴ = 0. This calculation shows that In − A and In + A + A² + A³ are inverses of one another.

32. We have B = BIn = B(AC) = (BA)C = InC = C.

33. YES. Since BA = In, we know that A⁻¹ = B (see Theorem 2.6.11). Likewise, since CA = In, A⁻¹ = C. Since the inverse of A is unique, it must follow that B = C.

34. We can simply compute

(1/Δ)[a22 −a12; −a21 a11][a11 a12; a21 a22] = (1/Δ)[a22a11 − a12a21, a22a12 − a12a22; −a21a11 + a11a21, −a21a12 + a11a22] = (1/Δ)[a11a22 − a12a21, 0; 0, a11a22 − a12a21] = [1 0; 0 1] = I2.

Therefore,

[a11 a12; a21 a22]⁻¹ = (1/Δ)[a22 −a12; −a21 a11].
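The 2 × 2 inverse formula just verified is easy to package as a function. A Python sketch (illustrative, not part of the textbook solution), checked here against the inverse found in Problem 18:

```python
from fractions import Fraction

def inv2(a11, a12, a21, a22):
    """2x2 inverse via the adjugate formula of Problem 34 (requires Delta != 0)."""
    delta = a11 * a22 - a12 * a21
    d = Fraction(1, delta)
    return [[d * a22, -d * a12], [-d * a21, d * a11]]

# The matrix of Problem 18 has Delta = 1*5 - 3*2 = -1.
B = inv2(1, 3, 2, 5)
assert B == [[-5, 3], [2, -1]]
```

Using `Fraction(1, delta)` keeps the result exact even when Δ does not divide the entries evenly.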

35. Assume that A is an invertible matrix and that Axi = bi for i = 1, 2, . . . , p (where each bi is given). Use elementary row operations on the augmented matrix of the system to obtain the equivalence

[A|b1 b2 b3 . . . bp] ∼ [In|c1 c2 c3 . . . cp].

The solutions to the system can be read from the last matrix: xi = ci for each i = 1, 2, . . . , p.

36. We have

[1 −1 1 | 1 −1 2; 2 −1 4 | 1 2 3; 1 1 6 | −1 5 2]
1∼ [1 −1 1 | 1 −1 2; 0 1 2 | −1 4 −1; 0 2 5 | −2 6 0]
2∼ [1 0 3 | 0 3 1; 0 1 2 | −1 4 −1; 0 0 1 | 0 −2 2]
3∼ [1 0 0 | 0 9 −5; 0 1 0 | −1 8 −5; 0 0 1 | 0 −2 2].

Hence, x1 = (0, −1, 0), x2 = (9, 8, −2), x3 = (−5, −5, 2).

1. A12(−2), A13(−1) 2. A21(1), A23(−2) 3. A31(−3), A32(−2)

37.

(a): Let ei denote the ith column vector of the identity matrix Im, and consider the m linear systems of equations

Axi = ei

for i = 1, 2, . . . , m. Since rank(A) = m and each ei is a column m-vector, it follows that

rank(A#) = m = rank(A),

and so each of the systems Axi = ei above has a solution. (Note that if m < n, then there will be an infinite number of solutions.) If we let B = [x1, x2, . . . , xm], then

AB = A[x1, x2, . . . , xm] = [Ax1, Ax2, . . . , Axm] = [e1, e2, . . . , em] = Im.

(b): A right inverse for A in this case is a 3 × 2 matrix

[a d; b e; c f]

such that

[a + 3b + c, d + 3e + f; 2a + 7b + 4c, 2d + 7e + 4f] = [1 0; 0 1].

Thus, we must have

a + 3b + c = 1, d + 3e + f = 0, 2a + 7b + 4c = 0, 2d + 7e + 4f = 1.

The first and third equations comprise a linear system with augmented matrix [1 3 1 | 1; 2 7 4 | 0] for a, b, and c. The row-echelon form of this augmented matrix is [1 3 1 | 1; 0 1 2 | −2]. Setting c = t, we have b = −2 − 2t and a = 7 + 5t. Next, the second and fourth equations above comprise a linear system with augmented matrix [1 3 1 | 0; 2 7 4 | 1] for d, e, and f. The row-echelon form of this augmented matrix is [1 3 1 | 0; 0 1 2 | 1]. Setting f = s, we have e = 1 − 2s and d = −3 + 5s. Thus, right inverses of A are precisely the matrices of the form

[7 + 5t, −3 + 5s; −2 − 2t, 1 − 2s; t, s].
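Every member of this two-parameter family really is a right inverse, which can be spot-checked in Python (illustrative only; the sample values of t and s are arbitrary, not from the text):

```python
from fractions import Fraction

# The 2x3 matrix A of Problem 37(b).
A = [[1, 3, 1], [2, 7, 4]]

def right_inverse(t, s):
    """The two-parameter family of right inverses found in Problem 37(b)."""
    return [[7 + 5 * t, -3 + 5 * s], [-2 - 2 * t, 1 - 2 * s], [t, s]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B))) for j in range(len(B[0]))]
            for i in range(len(A))]

# A * B(t, s) must be I2 for every choice of parameters.
for t, s in [(0, 0), (1, -2), (Fraction(1, 3), 5)]:
    assert matmul(A, right_inverse(t, s)) == [[1, 0], [0, 1]]
```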

Solutions to Section 2.7

True-False Review:

1. TRUE. Since every elementary matrix corresponds to a (reversible) elementary row operation, the reverse elementary row operation will correspond to an elementary matrix that is the inverse of the original elementary matrix.

2. FALSE. For instance, the matrices [2 0; 0 1] and [1 0; 0 2] are both elementary matrices, but their product, [2 0; 0 2], is not.

3. FALSE. Every invertible matrix can be expressed as a product of elementary matrices. Since every elementary matrix is invertible and products of invertible matrices are invertible, any product of elementary matrices must be an invertible matrix.

4. TRUE. Performing an elementary row operation on a matrix does not alter its rank, and the matrix EA is obtained from A by performing the elementary row operation associated with the elementary matrix E. Therefore, A and EA have the same rank.

5. FALSE. If Pij is a permutation matrix, then Pij² = In, since permuting the ith and jth rows of In twice yields In. Alternatively, we can observe that Pij² = In from the fact that Pij⁻¹ = Pij.

6. FALSE. For example, consider the elementary matrices E1 = [1 0; 0 7] and E2 = [1 1; 0 1]. Then we have E1E2 = [1 1; 0 7] and E2E1 = [1 7; 0 7].

7. FALSE. For example, consider the elementary matrices E1 = [1 3 0; 0 1 0; 0 0 1] and E2 = [1 0 0; 0 1 2; 0 0 1]. Then we have E1E2 = [1 3 6; 0 1 2; 0 0 1] and E2E1 = [1 3 0; 0 1 2; 0 0 1].

8. FALSE. The only matrices we perform an LU factorization for are invertible matrices for which the reduction to upper triangular form can be accomplished without permuting rows.

9. FALSE. The matrix U need not be a unit upper triangular matrix.

10. FALSE. As can be seen in Example 2.7.8, a 4 × 4 matrix with LU factorization will have 6 multipliers, not 10 multipliers.

Problems:

1.

Permutation Matrices: P12 = [0 1 0; 1 0 0; 0 0 1], P13 = [0 0 1; 0 1 0; 1 0 0], P23 = [1 0 0; 0 0 1; 0 1 0].

Scaling Matrices: M1(k) = [k 0 0; 0 1 0; 0 0 1], M2(k) = [1 0 0; 0 k 0; 0 0 1], M3(k) = [1 0 0; 0 1 0; 0 0 k].

Row Combinations:

A12(k) = [1 0 0; k 1 0; 0 0 1], A13(k) = [1 0 0; 0 1 0; k 0 1], A23(k) = [1 0 0; 0 1 0; 0 k 1],
A21(k) = [1 k 0; 0 1 0; 0 0 1], A31(k) = [1 0 k; 0 1 0; 0 0 1], A32(k) = [1 0 0; 0 1 k; 0 0 1].

2. We have

[3 5; 1 −2] 1∼ [1 −2; 3 5] 2∼ [1 −2; 0 11] 3∼ [1 −2; 0 1].

1. P12 2. A12(−3) 3. M2(1/11)

Elementary Matrices: M2(1/11), A12(−3), P12.

3. We have

[5 8 2; 1 3 −1] 1∼ [1 3 −1; 5 8 2] 2∼ [1 3 −1; 0 −7 7] 3∼ [1 3 −1; 0 1 −1].

1. P12 2. A12(−5) 3. M2(−1/7)

Elementary Matrices: M2(−1/7), A12(−5), P12.

4. We have

[3 −1 4; 2 1 3; 1 3 2] 1∼ [1 3 2; 2 1 3; 3 −1 4] 2∼ [1 3 2; 0 −5 −1; 0 −10 −2] 3∼ [1 3 2; 0 −5 −1; 0 0 0] 4∼ [1 3 2; 0 1 1/5; 0 0 0].

1. P13 2. A12(−2), A13(−3) 3. A23(−2) 4. M2(−1/5)

Elementary Matrices: M2(−1/5), A23(−2), A13(−3), A12(−2), P13.

5. We have

[1 2 3 4; 2 3 4 5; 3 4 5 6] 1∼ [1 2 3 4; 0 −1 −2 −3; 0 −2 −4 −6] 2∼ [1 2 3 4; 0 1 2 3; 0 −2 −4 −6] 3∼ [1 2 3 4; 0 1 2 3; 0 0 0 0].

1. A12(−2), A13(−3) 2. M2(−1) 3. A23(2)

Elementary Matrices: A23(2), M2(−1), A13(−3), A12(−2).

6. We reduce A to the identity matrix:

[1 2; 1 3] 1∼ [1 2; 0 1] 2∼ [1 0; 0 1].

1. A12(−1) 2. A21(−2)

The elementary matrices corresponding to these row operations are E1 = [1 0; −1 1] and E2 = [1 −2; 0 1]. We have E2E1A = I2, so that

A = E1⁻¹E2⁻¹ = [1 0; 1 1][1 2; 0 1],

which is the desired expression since E1⁻¹ and E2⁻¹ are elementary matrices.

7. We reduce A to the identity matrix:

[−2 −3; 5 7] 1∼ [−2 −3; 1 1] 2∼ [1 1; −2 −3] 3∼ [1 1; 0 −1] 4∼ [1 0; 0 −1] 5∼ [1 0; 0 1].

1. A12(2) 2. P12 3. A12(2) 4. A21(1) 5. M2(−1)

The elementary matrices corresponding to these row operations are

E1 = [1 0; 2 1], E2 = [0 1; 1 0], E3 = [1 0; 2 1], E4 = [1 1; 0 1], E5 = [1 0; 0 −1].

We have E5E4E3E2E1A = I2, so

A = E1⁻¹E2⁻¹E3⁻¹E4⁻¹E5⁻¹ = [1 0; −2 1][0 1; 1 0][1 0; −2 1][1 −1; 0 1][1 0; 0 −1],

which is the desired expression since each Ei⁻¹ is an elementary matrix.
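Multiplying the factors in the stated order recovers A, which a few lines of Python confirm (an illustrative check, not part of the textbook solution):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B))) for j in range(len(B[0]))]
            for i in range(len(A))]

# Inverses of the elementary matrices from Problem 7, in the stated order.
factors = [
    [[1, 0], [-2, 1]],   # A12(2)^{-1} = A12(-2)
    [[0, 1], [1, 0]],    # P12^{-1}    = P12
    [[1, 0], [-2, 1]],   # A12(2)^{-1} = A12(-2)
    [[1, -1], [0, 1]],   # A21(1)^{-1} = A21(-1)
    [[1, 0], [0, -1]],   # M2(-1)^{-1} = M2(-1)
]

P = [[1, 0], [0, 1]]
for E in factors:
    P = matmul(P, E)
assert P == [[-2, -3], [5, 7]]       # the product recovers A
```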

8. We reduce A to the identity matrix:

[3 −4; −1 2] 1∼ [−1 2; 3 −4] 2∼ [1 −2; 3 −4] 3∼ [1 −2; 0 2] 4∼ [1 −2; 0 1] 5∼ [1 0; 0 1].

1. P12 2. M1(−1) 3. A12(−3) 4. M2(1/2) 5. A21(2)

The elementary matrices corresponding to these row operations are

E1 = [0 1; 1 0], E2 = [−1 0; 0 1], E3 = [1 0; −3 1], E4 = [1 0; 0 1/2], E5 = [1 2; 0 1].

We have E5E4E3E2E1A = I2, so

A = E1⁻¹E2⁻¹E3⁻¹E4⁻¹E5⁻¹ = [0 1; 1 0][−1 0; 0 1][1 0; 3 1][1 0; 0 2][1 −2; 0 1],

which is the desired expression since each Ei⁻¹ is an elementary matrix.

9. We reduce A to the identity matrix:

[4 −5; 1 4] 1∼ [1 4; 4 −5] 2∼ [1 4; 0 −21] 3∼ [1 4; 0 1] 4∼ [1 0; 0 1].

1. P12 2. A12(−4) 3. M2(−1/21) 4. A21(−4)

The elementary matrices corresponding to these row operations are

E1 = [0 1; 1 0], E2 = [1 0; −4 1], E3 = [1 0; 0 −1/21], E4 = [1 −4; 0 1].

We have E4E3E2E1A = I2, so

A = E1⁻¹E2⁻¹E3⁻¹E4⁻¹ = [0 1; 1 0][1 0; 4 1][1 0; 0 −21][1 4; 0 1],

which is the desired expression since each Ei⁻¹ is an elementary matrix.

10. We reduce A to the identity matrix:

[1 −1 0; 2 2 2; 3 1 3] 1∼ [1 −1 0; 0 4 2; 3 1 3] 2∼ [1 −1 0; 0 4 2; 0 4 3] 3∼ [1 −1 0; 0 4 2; 0 0 1] 4∼ [1 −1 0; 0 1 1/2; 0 0 1] 5∼ [1 −1 0; 0 1 0; 0 0 1] 6∼ [1 0 0; 0 1 0; 0 0 1].

1. A12(−2) 2. A13(−3) 3. A23(−1) 4. M2(1/4) 5. A32(−1/2) 6. A21(1)

The elementary matrices corresponding to these row operations are

E1 = [1 0 0; −2 1 0; 0 0 1], E2 = [1 0 0; 0 1 0; −3 0 1], E3 = [1 0 0; 0 1 0; 0 −1 1],
E4 = [1 0 0; 0 1/4 0; 0 0 1], E5 = [1 0 0; 0 1 −1/2; 0 0 1], E6 = [1 1 0; 0 1 0; 0 0 1].

We have E6E5E4E3E2E1A = I3, so

A = E1⁻¹E2⁻¹E3⁻¹E4⁻¹E5⁻¹E6⁻¹ = [1 0 0; 2 1 0; 0 0 1][1 0 0; 0 1 0; 3 0 1][1 0 0; 0 1 0; 0 1 1][1 0 0; 0 4 0; 0 0 1][1 0 0; 0 1 1/2; 0 0 1][1 −1 0; 0 1 0; 0 0 1],

which is the desired expression since each Ei⁻¹ is an elementary matrix.


11. We reduce A to the identity matrix:

[0 −4 −2; 1 −1 3; −2 2 2] 1∼ [1 −1 3; 0 −4 −2; −2 2 2] 2∼ [1 −1 3; 0 −4 −2; 0 0 8] 3∼ [1 −1 3; 0 −4 −2; 0 0 1] 4∼ [1 −1 3; 0 −4 0; 0 0 1] 5∼ [1 −1 0; 0 −4 0; 0 0 1] 6∼ [1 −1 0; 0 1 0; 0 0 1] 7∼ [1 0 0; 0 1 0; 0 0 1].

1. P12 2. A13(2) 3. M3(1/8) 4. A32(2) 5. A31(−3) 6. M2(−1/4) 7. A21(1)

The elementary matrices corresponding to these row operations are

E1 = [0 1 0; 1 0 0; 0 0 1], E2 = [1 0 0; 0 1 0; 2 0 1], E3 = [1 0 0; 0 1 0; 0 0 1/8], E4 = [1 0 0; 0 1 2; 0 0 1],
E5 = [1 0 −3; 0 1 0; 0 0 1], E6 = [1 0 0; 0 −1/4 0; 0 0 1], E7 = [1 1 0; 0 1 0; 0 0 1].

We have E7E6E5E4E3E2E1A = I3, so

A = E1⁻¹E2⁻¹E3⁻¹E4⁻¹E5⁻¹E6⁻¹E7⁻¹ = [0 1 0; 1 0 0; 0 0 1][1 0 0; 0 1 0; −2 0 1][1 0 0; 0 1 0; 0 0 8][1 0 0; 0 1 −2; 0 0 1][1 0 3; 0 1 0; 0 0 1][1 0 0; 0 −4 0; 0 0 1][1 −1 0; 0 1 0; 0 0 1],

which is the desired expression since each Ei⁻¹ is an elementary matrix.

12. We reduce A to the identity matrix:

[1 2 3; 0 8 0; 3 4 5] 1∼ [1 2 3; 0 1 0; 3 4 5] 2∼ [1 2 3; 0 1 0; 0 −2 −4] 3∼ [1 0 3; 0 1 0; 0 −2 −4] 4∼ [1 0 3; 0 1 0; 0 0 −4] 5∼ [1 0 3; 0 1 0; 0 0 1] 6∼ [1 0 0; 0 1 0; 0 0 1].

1. M2(1/8) 2. A13(−3) 3. A21(−2) 4. A23(2) 5. M3(−1/4) 6. A31(−3)

The elementary matrices corresponding to these row operations are

E1 = [1 0 0; 0 1/8 0; 0 0 1], E2 = [1 0 0; 0 1 0; −3 0 1], E3 = [1 −2 0; 0 1 0; 0 0 1],
E4 = [1 0 0; 0 1 0; 0 2 1], E5 = [1 0 0; 0 1 0; 0 0 −1/4], E6 = [1 0 −3; 0 1 0; 0 0 1].

We have E6E5E4E3E2E1A = I3, so

A = E1⁻¹E2⁻¹E3⁻¹E4⁻¹E5⁻¹E6⁻¹ = [1 0 0; 0 8 0; 0 0 1][1 0 0; 0 1 0; 3 0 1][1 2 0; 0 1 0; 0 0 1][1 0 0; 0 1 0; 0 −2 1][1 0 0; 0 1 0; 0 0 −4][1 0 3; 0 1 0; 0 0 1],

which is the desired expression since each Ei⁻¹ is an elementary matrix.

13. We reduce A to the identity matrix:

[2 −1; 1 3] 1∼ [1 3; 2 −1] 2∼ [1 3; 0 −7] 3∼ [1 3; 0 1] 4∼ [1 0; 0 1].

1. P12 2. A12(−2) 3. M2(−1/7) 4. A21(−3)

The elementary matrices corresponding to these row operations are

E1 = [0 1; 1 0], E2 = [1 0; −2 1], E3 = [1 0; 0 −1/7], E4 = [1 −3; 0 1].

Direct multiplication verifies that E4E3E2E1A = I2.

14. We have

[3 −2; −1 5] 1∼ [3 −2; 0 13/3] = U.

1. A12(1/3)

Hence, E1 = A12(1/3). Then Equation (2.7.3) reads L = E1⁻¹ = A12(−1/3) = [1 0; −1/3 1]. Verifying Equation (2.7.2):

LU = [1 0; −1/3 1][3 −2; 0 13/3] = [3 −2; −1 5] = A.

15. We have

[2 3; 5 1] 1∼ [2 3; 0 −13/2] = U =⇒ m21 = 5/2 =⇒ L = [1 0; 5/2 1].

Then

LU = [1 0; 5/2 1][2 3; 0 −13/2] = [2 3; 5 1] = A.

1. A12(−5/2)

16. We have

[3 1; 5 2] 1∼ [3 1; 0 1/3] = U =⇒ m21 = 5/3 =⇒ L = [1 0; 5/3 1].

Then

LU = [1 0; 5/3 1][3 1; 0 1/3] = [3 1; 5 2] = A.

1. A12(−5/3)

17. We have

[3 −1 2; 6 −1 1; −3 5 2] 1∼ [3 −1 2; 0 1 −3; 0 4 4] 2∼ [3 −1 2; 0 1 −3; 0 0 16] = U =⇒ m21 = 2, m31 = −1, m32 = 4.

Hence,

L = [1 0 0; 2 1 0; −1 4 1] and LU = [1 0 0; 2 1 0; −1 4 1][3 −1 2; 0 1 −3; 0 0 16] = [3 −1 2; 6 −1 1; −3 5 2] = A.

1. A12(−2), A13(1) 2. A23(−4)

18. We have

[5 2 1; −10 −2 3; 15 2 −3] 1∼ [5 2 1; 0 2 5; 0 −4 −6] 2∼ [5 2 1; 0 2 5; 0 0 4] = U =⇒ m21 = −2, m31 = 3, m32 = −2.

Hence,

L = [1 0 0; −2 1 0; 3 −2 1] and LU = [1 0 0; −2 1 0; 3 −2 1][5 2 1; 0 2 5; 0 0 4] = [5 2 1; −10 −2 3; 15 2 −3] = A.

1. A12(2), A13(−3) 2. A23(2)

19. We have

[1 −1 2 3; 2 0 3 −4; 3 −1 7 8; 1 3 4 5] 1∼ [1 −1 2 3; 0 2 −1 −10; 0 2 1 −1; 0 4 2 2] 2∼ [1 −1 2 3; 0 2 −1 −10; 0 0 2 9; 0 0 4 22] 3∼ [1 −1 2 3; 0 2 −1 −10; 0 0 2 9; 0 0 0 4] = U.

1. A12(−2), A13(−3), A14(−1) 2. A23(−1), A24(−2) 3. A34(−2)

The multipliers are m21 = 2, m31 = 3, m41 = 1, m32 = 1, m42 = 2, m43 = 2. Hence,

L = [1 0 0 0; 2 1 0 0; 3 1 1 0; 1 2 2 1] and LU = [1 0 0 0; 2 1 0 0; 3 1 1 0; 1 2 2 1][1 −1 2 3; 0 2 −1 −10; 0 0 2 9; 0 0 0 4] = [1 −1 2 3; 2 0 3 −4; 3 −1 7 8; 1 3 4 5] = A.
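The multiplier bookkeeping above is exactly Doolittle's algorithm. A Python sketch (illustrative, assuming nonzero pivots as in these problems, and not part of the textbook solution) reproduces L and U for this matrix:

```python
from fractions import Fraction

def lu(A):
    """Doolittle LU factorization without row interchanges.
    Assumes every pivot encountered is nonzero."""
    n = len(A)
    U = [[Fraction(x) for x in row] for row in A]
    L = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for j in range(n):
        for i in range(j + 1, n):
            m = U[i][j] / U[j][j]          # the multiplier m_ij
            L[i][j] = m
            U[i] = [a - m * b for a, b in zip(U[i], U[j])]
    return L, U

A = [[1, -1, 2, 3], [2, 0, 3, -4], [3, -1, 7, 8], [1, 3, 4, 5]]
L, U = lu(A)
assert L == [[1, 0, 0, 0], [2, 1, 0, 0], [3, 1, 1, 0], [1, 2, 2, 1]]
assert U == [[1, -1, 2, 3], [0, 2, -1, -10], [0, 0, 2, 9], [0, 0, 0, 4]]
```

The subdiagonal of the computed L holds precisely the multipliers m21, ..., m43 recorded in the text.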

20. We have

[2 −3 1 2; 4 −1 1 1; −8 2 2 −5; 6 1 5 2] 1∼ [2 −3 1 2; 0 5 −1 −3; 0 −10 6 3; 0 10 2 −4] 2∼ [2 −3 1 2; 0 5 −1 −3; 0 0 4 −3; 0 0 4 2] 3∼ [2 −3 1 2; 0 5 −1 −3; 0 0 4 −3; 0 0 0 5] = U.

1. A12(−2), A13(4), A14(−3) 2. A23(2), A24(−2) 3. A34(−1)

The multipliers are m21 = 2, m31 = −4, m41 = 3, m32 = −2, m42 = 2, m43 = 1. Hence,

L = [1 0 0 0; 2 1 0 0; −4 −2 1 0; 3 2 1 1] and LU = [1 0 0 0; 2 1 0 0; −4 −2 1 0; 3 2 1 1][2 −3 1 2; 0 5 −1 −3; 0 0 4 −3; 0 0 0 5] = [2 −3 1 2; 4 −1 1 1; −8 2 2 −5; 6 1 5 2] = A.

21. We have

[1 2; 2 3] 1∼ [1 2; 0 −1] = U =⇒ m21 = 2 =⇒ L = [1 0; 2 1].

1. A12(−2)

We now solve the triangular systems Ly = b and Ux = y. From Ly = b, we obtain y = [3; −7]. Then Ux = y yields x = [−11; 7].

22. We have

[1 −3 5; 3 2 2; 2 5 2] 1∼ [1 −3 5; 0 11 −13; 0 11 −8] 2∼ [1 −3 5; 0 11 −13; 0 0 5] = U =⇒ m21 = 3, m31 = 2, m32 = 1.

1. A12(−3), A13(−2) 2. A23(−1)

Hence, L = [1 0 0; 3 1 0; 2 1 1]. We now solve the triangular systems Ly = b and Ux = y. From Ly = b, we obtain y = [1; 2; −5]. Then Ux = y yields x = [3; −1; −1].

23. We have

[2 2 1; 6 3 −1; −4 2 2] 1∼ [2 2 1; 0 −3 −4; 0 6 4] 2∼ [2 2 1; 0 −3 −4; 0 0 −4] = U =⇒ m21 = 3, m31 = −2, m32 = −2.

1. A12(−3), A13(2) 2. A23(2)

Hence, L = [1 0 0; 3 1 0; −2 −2 1]. We now solve the triangular systems Ly = b and Ux = y. From Ly = b, we obtain y = [1; −3; −2]. Then Ux = y yields x = [−1/12; 1/3; 1/2].


24. We have

[4 3 0 0; 8 1 2 0; 0 5 3 6; 0 0 −5 7] 1∼ [4 3 0 0; 0 −5 2 0; 0 5 3 6; 0 0 −5 7] 2∼ [4 3 0 0; 0 −5 2 0; 0 0 5 6; 0 0 −5 7] 3∼ [4 3 0 0; 0 −5 2 0; 0 0 5 6; 0 0 0 13] = U.

1. A12(−2) 2. A23(1) 3. A34(1)

The only nonzero multipliers are m21 = 2, m32 = −1, and m43 = −1. Hence, L = [1 0 0 0; 2 1 0 0; 0 −1 1 0; 0 0 −1 1]. We now solve the triangular systems Ly = b and Ux = y. From Ly = b, we obtain y = [2; −1; −1; 4]. Then Ux = y yields x = [677/1300; −9/325; −37/65; 4/13].

25. We have

[2 −1; −8 3] 1∼ [2 −1; 0 −1] = U =⇒ m21 = −4 =⇒ L = [1 0; −4 1].

1. A12(4)

We now solve the triangular systems Lyi = bi, Uxi = yi for i = 1, 2, 3. We have

Ly1 = b1 =⇒ y1 = [3; 11]. Then Ux1 = y1 =⇒ x1 = [−4; −11];
Ly2 = b2 =⇒ y2 = [2; 15]. Then Ux2 = y2 =⇒ x2 = [−6.5; −15];
Ly3 = b3 =⇒ y3 = [5; 11]. Then Ux3 = y3 =⇒ x3 = [−3; −11].
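The back-substitution step Uxi = yi can be automated. A Python sketch (illustrative, not part of the textbook solution), fed the intermediate vectors yi obtained above:

```python
from fractions import Fraction

# L and U from Problem 25.
L = [[1, 0], [-4, 1]]
U = [[2, -1], [0, -1]]

def back_sub(U, y):
    """Solve Ux = y for an upper triangular U by back-substitution."""
    n = len(U)
    x = [Fraction(0)] * n
    for i in reversed(range(n)):
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (Fraction(y[i]) - s) / U[i][i]
    return x

# The y_i obtained from Ly_i = b_i in the text, with the expected x_i.
cases = [([3, 11], [-4, -11]),
         ([2, 15], [Fraction(-13, 2), -15]),   # -13/2 = -6.5
         ([5, 11], [-3, -11])]
for y, x_expected in cases:
    assert back_sub(U, y) == x_expected
```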

26. We have

[−1 4 2; 3 1 4; 5 −7 1] 1∼ [−1 4 2; 0 13 10; 0 13 11] 2∼ [−1 4 2; 0 13 10; 0 0 1] = U.

1. A12(3), A13(5) 2. A23(−1)

Thus, m21 = −3, m31 = −5, and m32 = 1. We now solve the triangular systems Lyi = bi, Uxi = yi for i = 1, 2, 3. We have

Ly1 = e1 =⇒ y1 = [1; 3; 2]. Then Ux1 = y1 =⇒ x1 = [−29/13; −17/13; 2];
Ly2 = e2 =⇒ y2 = [0; 1; −1]. Then Ux2 = y2 =⇒ x2 = [18/13; 11/13; −1];
Ly3 = e3 =⇒ y3 = [0; 0; 1]. Then Ux3 = y3 =⇒ x3 = [−14/13; −10/13; 1].

27. Observe that if Pi is an elementary permutation matrix, then Pi⁻¹ = Pi = Piᵀ. Therefore, we have

P⁻¹ = (P1P2 . . . Pk)⁻¹ = Pk⁻¹Pk−1⁻¹ . . . P2⁻¹P1⁻¹ = PkᵀPk−1ᵀ . . . P2ᵀP1ᵀ = (P1P2 . . . Pk)ᵀ = Pᵀ.

28.

(a): Let A be an invertible upper triangular matrix with inverse B. Therefore, we have AB = In. Write A = [aij] and B = [bij]. We will show that bij = 0 for all i > j, which shows that B is upper triangular. We have

sum_{k=1}^{n} aik bkj = δij.

Since A is upper triangular, aik = 0 whenever i > k. Therefore, we can reduce the above summation to

sum_{k=i}^{n} aik bkj = δij.

Let i = n. Then the above summation reduces to ann bnj = δnj. If j = n, we have ann bnn = 1, so ann ≠ 0. For j < n, we have ann bnj = 0, and therefore bnj = 0 for all j < n.

Next let i = n − 1. Then we have

an−1,n−1 bn−1,j + an−1,n bn,j = δn−1,j.

Setting j = n − 1 and using the fact that bn,n−1 = 0 by the above calculation, we obtain an−1,n−1 bn−1,n−1 = 1, so an−1,n−1 ≠ 0. For j < n − 1, we have an−1,n−1 bn−1,j = 0 so that bn−1,j = 0.

Next let i = n − 2. Then we have an−2,n−2 bn−2,j + an−2,n−1 bn−1,j + an−2,n bn,j = δn−2,j. Setting j = n − 2 and using the fact that bn−1,n−2 = 0 and bn,n−2 = 0, we have an−2,n−2 bn−2,n−2 = 1, so that an−2,n−2 ≠ 0. For j < n − 2, we have an−2,n−2 bn−2,j = 0 so that bn−2,j = 0.

Proceeding in this way, we eventually show that bij = 0 for all i > j.

For an invertible lower triangular matrix A with inverse B, we can either modify the preceding argument, or we can proceed more briefly as follows: Note that Aᵀ is an invertible upper triangular matrix with inverse Bᵀ. By the preceding argument, Bᵀ is upper triangular. Therefore, B is lower triangular, as required.

(b): Let A be an invertible unit upper triangular matrix with inverse B. Use the notations from (a). By (a), we know that B is upper triangular. We simply must show that bjj = 1 for all j. From ann bnn = 1 (see proof of (a)), we see that if ann = 1, then bnn = 1. Moreover, from an−1,n−1 bn−1,n−1 = 1, the fact that an−1,n−1 = 1 proves that bn−1,n−1 = 1. Likewise, the fact that an−2,n−2 bn−2,n−2 = 1 implies that if an−2,n−2 = 1, then bn−2,n−2 = 1. Continuing in this fashion, we prove that bjj = 1 for all j.

For the last part, if A is an invertible unit lower triangular matrix with inverse B, then Aᵀ is an invertible unit upper triangular matrix with inverse Bᵀ, and by the preceding argument, Bᵀ is a unit upper triangular matrix. This implies that B is a unit lower triangular matrix, as desired.


29.

(a): Since A is invertible, Corollary 2.6.12 implies that both L2 and U1 are invertible. Since L1U1 = L2U2, we can left-multiply by L2⁻¹ and right-multiply by U1⁻¹ to obtain L2⁻¹L1 = U2U1⁻¹.

(b): By Problem 28, we know that L2⁻¹ is a unit lower triangular matrix and U1⁻¹ is an upper triangular matrix. Therefore, L2⁻¹L1 is a unit lower triangular matrix and U2U1⁻¹ is an upper triangular matrix. Since these two matrices are equal, we must have L2⁻¹L1 = In and U2U1⁻¹ = In. Therefore, L1 = L2 and U1 = U2.

30. The system Ax = b can be written as QRx = b. If we can solve Qy = b for y and then solve Rx = y for x, then QRx = b as desired. Multiplying Qy = b by Qᵀ and using the fact that QᵀQ = In, we obtain y = Qᵀb. Therefore, Rx = y can be replaced by Rx = Qᵀb. Therefore, to solve Ax = b, we first determine y = Qᵀb and then solve the upper triangular system Rx = Qᵀb by back-substitution.

Solutions to Section 2.8

True-False Review:

1. FALSE. According to the given information, part (c) of the Invertible Matrix Theorem fails, while part (e) holds. This is impossible.

2. TRUE. This holds by the equivalence of parts (d) and (f) of the Invertible Matrix Theorem.

3. FALSE. Part (d) of the Invertible Matrix Theorem fails according to the given information, and therefore part (b) also fails. Hence, the equation Ax = b does not have a unique solution. But it is not valid to conclude that the equation has infinitely many solutions; it could have no solutions. For instance, if A = [1 0 0; 0 1 0; 0 0 0] and b = [0; 0; 1], there are no solutions to Ax = b, although rank(A) = 2.

4. FALSE. An easy counterexample is the matrix 0n, which fails to be invertible even though it is upper triangular. Since it fails to be invertible, it cannot be row-equivalent to In, by the Invertible Matrix Theorem.

Problems:

1. Since A is an invertible matrix, the only solution to Ax = 0 is x = 0. However, if we assume that AB = AC, then A(B − C) = 0. If xi denotes the ith column of B − C, then xi = 0 for each i. That is, B − C = 0, or B = C, as required.

2. If rank(A) = n, then the augmented matrix A# for the system Ax = 0 can be reduced to REF such that each column contains a pivot except for the right-most column of all-zeros. Solving the system by back-substitution, we find that x = 0, as claimed.

3. Since Ax = 0 has only the trivial solution, REF(A) contains a pivot in every column. Therefore, the linear system Ax = b can be solved by back-substitution for every b in Rn. Therefore, Ax = b does have a solution.

Now suppose there are two solutions y and z to the system Ax = b. That is, Ay = b and Az = b. Subtracting, we find

A(y − z) = 0,

and so by assumption, y − z = 0. That is, y = z. Therefore, there is only one solution to the linear system Ax = b.


4. If A and B are each invertible matrices, then A and B can each be expressed as a product of elementary matrices, say

A = E1E2 . . . Ek and B = E′1E′2 . . . E′l.

Then

AB = E1E2 . . . EkE′1E′2 . . . E′l,

so AB can be expressed as a product of elementary matrices. Thus, by the equivalence of (a) and (e) in the Invertible Matrix Theorem, AB is invertible.

5. We are assuming that the equations Ax = 0 and Bx = 0 each have only the trivial solution x = 0. Now consider the linear system

(AB)x = 0.

Viewing this equation as

A(Bx) = 0,

we conclude that Bx = 0. Thus, x = 0. Hence, the linear equation (AB)x = 0 has only the trivial solution.

Solutions to Section 2.9

Problems:

1. We have

(−4)A − Bᵀ = [8 −16 −8 −24; 4 4 −20 0] − [−3 2 1 0; 0 2 −3 1] = [11 −18 −9 −24; 4 2 −17 −1].

2. We have

AB = [−2 4 2 6; −1 −1 5 0][−3 0; 2 2; 1 −3; 0 1] = [16 8; 6 −17].

Moreover, tr(AB) = −1.

3. We have

(AC)(AC)ᵀ = [−2; 26][−2 26] = [4 −52; −52 676].

4. We have

(−4B)A = [12 0; −8 −8; −4 12; 0 −4][−2 4 2 6; −1 −1 5 0] = [−24 48 24 72; 24 −24 −56 −48; −4 −28 52 −24; 4 4 −20 0].

5. Using Problem 2, we find that

(AB)⁻¹ = [16 8; 6 −17]⁻¹ = −(1/320) [−17 −8; −6 16].


6. We have

CᵀC = [−5 −6 3 1][−5; −6; 3; 1] = [71],

and tr(CᵀC) = 71.

7.

(a): We have

AB = [1 2 3; 2 5 7][3 b; −4 a; a b] = [3a − 5, 2a + 4b; 7a − 14, 5a + 9b].

In order for this product to equal I2, we require

3a − 5 = 1, 2a + 4b = 0, 7a − 14 = 0, 5a + 9b = 1.

We quickly solve this for the unique solution: a = 2 and b = −1.

(b): We have

BA = [3 −1; −4 2; 2 −1][1 2 3; 2 5 7] = [1 1 2; 0 2 2; 0 −1 −1].

8. We compute the (i, j)-entry of each side of the equation. We will denote the entries of Aᵀ by aᵀij, which equals aji. On the left side, note that the (i, j)-entry of (ABᵀ)ᵀ is the same as the (j, i)-entry of ABᵀ, and

(j, i)-entry of ABᵀ = sum_{k=1}^{n} ajk bᵀki = sum_{k=1}^{n} ajk bik = sum_{k=1}^{n} bik aᵀkj,

and the latter expression is the (i, j)-entry of BAᵀ. Therefore, the (i, j)-entries of (ABᵀ)ᵀ and BAᵀ are the same, as required.

9.

(a): The (i, j)-entry of A² is sum_{k=1}^{n} aik akj.

(b): Assume that A is symmetric. That means that Aᵀ = A. We claim that A² is symmetric. To see this, note that

(A²)ᵀ = (AA)ᵀ = AᵀAᵀ = AA = A².

Thus, (A²)ᵀ = A², and so A² is symmetric.

10. We are assuming that A is skew-symmetric, so Aᵀ = −A. To show that BᵀAB is skew-symmetric, we observe that

(BᵀAB)ᵀ = BᵀAᵀ(Bᵀ)ᵀ = BᵀAᵀB = Bᵀ(−A)B = −(BᵀAB),

as required.


11. We have

A² = [3 9; −1 −3]² = [0 0; 0 0],

so A is nilpotent.

12. We have

A² = [0 0 1; 0 0 0; 0 0 0]

and

A³ = A²A = [0 0 1; 0 0 0; 0 0 0][0 1 1; 0 0 1; 0 0 0] = [0 0 0; 0 0 0; 0 0 0],

so A is nilpotent.
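Checking nilpotency only requires computing successive powers, and for an n × n nilpotent matrix the power A^n is always zero, so the search can stop there. A Python sketch (illustrative, not part of the textbook solution):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B))) for j in range(len(B[0]))]
            for i in range(len(A))]

def is_nilpotent(A):
    """Return True if some power A^k with k <= n is the zero matrix."""
    n = len(A)
    P = A
    for _ in range(n):
        if all(x == 0 for row in P for x in row):
            return True
        P = matmul(P, A)
    return False

assert is_nilpotent([[3, 9], [-1, -3]])                  # Problem 11: A^2 = 0
assert is_nilpotent([[0, 1, 1], [0, 0, 1], [0, 0, 0]])   # Problem 12: A^3 = 0
assert not is_nilpotent([[1, 0], [0, 1]])                # the identity is not nilpotent
```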

13. We have

A′(t) = [−3e^(−3t), −2 sec²t tan t; 6t², −sin t; 6/t, −5].

14. We have

∫₀¹ B(t) dt = [−7t, t³/3; 6t − t²/2, 3t⁴/4 + 2t³; t + t²/2, (2/π) sin(πt/2); eᵗ, t − t⁴/4] evaluated from 0 to 1

= [−7, 1/3; 11/2, 11/4; 3/2, 2/π; e − 1, 3/4].

15. Since A(t) is 3× 2 and B(t) is 4× 2, it is impossible to perform the indicated subtraction.

16. Since A(t) is 3× 2 and B(t) is 4× 2, it is impossible to perform the indicated subtraction.

17. From the last equation, we see that x3 = 0. Substituting this into the middle equation, we find that x2 = 0.5. Finally, putting the values of x2 and x3 into the first equation, we find x1 = −6 − 2.5 = −8.5. Thus, there is a unique solution to the linear system, and the solution set is

{(−8.5, 0.5, 0)}.

18. To solve this system, we need to reduce the corresponding augmented matrix for the linear system to row-echelon form. This gives us

[5 −1 2 | 7; −2 6 9 | 0; −7 5 −3 | −7]
1∼ [1 11 20 | 7; −2 6 9 | 0; −7 5 −3 | −7]
2∼ [1 11 20 | 7; 0 28 49 | 14; 0 82 137 | 42]
3∼ [1 11 20 | 7; 0 1 7/4 | 1/2; 0 82 137 | 42]
4∼ [1 11 20 | 7; 0 1 7/4 | 1/2; 0 0 −13/2 | 1]
5∼ [1 11 20 | 7; 0 1 7/4 | 1/2; 0 0 1 | −2/13].

From the last row, we conclude that x3 = −2/13, and using the middle row, we can solve for x2: we have x2 + (7/4)(−2/13) = 1/2, so x2 = 20/26 = 10/13. Finally, from the first row we can get x1: we have x1 + 11(10/13) + 20(−2/13) = 7, and so x1 = 21/13. So there is a unique solution:

{(21/13, 10/13, −2/13)}.


1. A21(2) 2. A12(2), A13(7) 3. M2(1/28) 4. A23(−82) 5. M3(−2/13)
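The same reduction can be carried out in exact arithmetic by machine. A Python sketch of Gaussian elimination (illustrative, specialized to square systems with a unique solution, and not part of the textbook solution) reproduces the answer to this problem:

```python
from fractions import Fraction

def solve(A, b):
    """Solve a square system Ax = b by Gauss-Jordan elimination with exact rationals."""
    n = len(A)
    M = [[Fraction(x) for x in row] + [Fraction(bi)] for row, bi in zip(A, b)]
    for col in range(n):
        # Bring a nonzero pivot into place, normalize it, and clear the column.
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [row[n] for row in M]

A = [[5, -1, 2], [-2, 6, 9], [-7, 5, -3]]
b = [7, 0, -7]
assert solve(A, b) == [Fraction(21, 13), Fraction(10, 13), Fraction(-2, 13)]
```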

19. To solve this system, we need to reduce the corresponding augmented matrix for the linear system to row-echelon form. This gives us

[1 2 −1 | 1; 1 0 1 | 5; 4 4 0 | 12]
1∼ [1 2 −1 | 1; 0 −2 2 | 4; 0 −4 4 | 8]
2∼ [1 2 −1 | 1; 0 1 −1 | −2; 0 −4 4 | 8]
3∼ [1 2 −1 | 1; 0 1 −1 | −2; 0 0 0 | 0].

From this row-echelon form, we see that z is a free variable. Set z = t. Then from the middle row of the matrix, y = t − 2, and from the top row, x + 2(t − 2) − t = 1, or x = −t + 5. So the solution set is

{(−t + 5, t − 2, t) : t ∈ R} = {(5, −2, 0) + t(−1, 1, 1) : t ∈ R}.

1. A12(−1), A13(−4) 2. M2(−1/2) 3. A23(4)

20. To solve this system, we need to reduce the corresponding augmented matrix for the linear system to row-echelon form. This gives us

[1 −2 −1 3 | 0; −2 4 5 −5 | 3; 3 −6 −6 8 | 2]
1∼ [1 −2 −1 3 | 0; 0 0 3 1 | 3; 0 0 −3 −1 | 2]
2∼ [1 −2 −1 3 | 0; 0 0 3 1 | 3; 0 0 0 0 | 5]
3∼ [1 −2 −1 3 | 0; 0 0 1 1/3 | 1; 0 0 0 0 | 1].

The bottom row of this matrix shows that this system has no solutions.

1. A12(2), A13(−3) 2. A23(1) 3. M2(1/3), M3(1/5)

21. To solve this system, we need to reduce the corresponding augmented matrix for the linear system to row-echelon form. This gives us

[3 0 −1 2 −1 | 1; 1 3 1 −3 2 | −1; 4 −2 −3 6 −1 | 5; 0 0 0 1 4 | −2]
1∼ [1 3 1 −3 2 | −1; 3 0 −1 2 −1 | 1; 4 −2 −3 6 −1 | 5; 0 0 0 1 4 | −2]
2∼ [1 3 1 −3 2 | −1; 0 −9 −4 11 −7 | 4; 0 −14 −7 18 −9 | 9; 0 0 0 1 4 | −2]
3∼ [1 3 1 −3 2 | −1; 0 −27 −12 33 −21 | 12; 0 28 14 −36 18 | −18; 0 0 0 1 4 | −2]
4∼ [1 3 1 −3 2 | −1; 0 −27 −12 33 −21 | 12; 0 1 2 −3 −3 | −6; 0 0 0 1 4 | −2]
5∼ [1 3 1 −3 2 | −1; 0 1 2 −3 −3 | −6; 0 −27 −12 33 −21 | 12; 0 0 0 1 4 | −2]
6∼ [1 3 1 −3 2 | −1; 0 1 2 −3 −3 | −6; 0 0 42 −48 −102 | −150; 0 0 0 1 4 | −2]
7∼ [1 3 1 −3 2 | −1; 0 1 2 −3 −3 | −6; 0 0 1 −8/7 −17/7 | −25/7; 0 0 0 1 4 | −2].

We see that x5 = t is the only free variable. Back-substitution yields the remaining values:

x5 = t, x4 = −4t − 2, x3 = −41/7 − (15/7)t, x2 = −2/7 − (33/7)t, x1 = −2/7 + (16/7)t.

So the solution set is

{(−2/7 + (16/7)t, −2/7 − (33/7)t, −41/7 − (15/7)t, −4t − 2, t) : t ∈ R}
= {t(16/7, −33/7, −15/7, −4, 1) + (−2/7, −2/7, −41/7, −2, 0) : t ∈ R}.

1. P12 2. A12(−3), A13(−4) 3. M2(3), M3(−2) 4. A23(1) 5. P23 6. A23(27) 7. M3(1/42)

22. To solve this system, we reduce the corresponding augmented matrix for the linear system to row-echelon form:

[1 1 1 1 −3 6; 1 1 1 2 −5 8; 2 3 1 4 −9 17; 2 2 2 3 −8 14]
∼(1) [1 1 1 1 −3 6; 0 0 0 1 −2 2; 0 1 −1 2 −3 5; 0 0 0 1 −2 2]
∼(2) [1 1 1 1 −3 6; 0 0 0 1 −2 2; 0 1 −1 2 −3 5; 0 0 0 0 0 0]
∼(3) [1 1 1 1 −3 6; 0 1 −1 2 −3 5; 0 0 0 1 −2 2; 0 0 0 0 0 0].

From this row-echelon form, we see that x5 = t and x3 = s are free variables. Solving the system by back-substitution, we find

x5 = t, x4 = 2t + 2, x3 = s, x2 = s − t + 1, x1 = 2t − 2s + 3.

So the solution set is

{(2t − 2s + 3, s − t + 1, s, 2t + 2, t) : s, t ∈ R} = {t(2, −1, 0, 2, 1) + s(−2, 1, 1, 0, 0) + (3, 1, 0, 2, 0) : s, t ∈ R}.

1. A12(−1), A13(−2), A14(−2) 2. A24(−1) 3. P23

23. To solve this system, we reduce the corresponding augmented matrix for the linear system to row-echelon form:

[1, −3, 2i, 1; −2i, 6, 2, −2] ∼(1) [1, −3, 2i, 1; 0, 6 − 6i, −2, −2 + 2i] ∼(2) [1, −3, 2i, 1; 0, 1, −(1/6)(1 + i), −1/3].

1. A12(2i) 2. M2(1/(6 − 6i))

From the last augmented matrix above, we see that x3 is a free variable. Set x3 = t, where t is a complex number. Then we can solve for x2 using the equation corresponding to the second row of the row-echelon form: x2 = −1/3 + (1/6)(1 + i)t. Finally, using the first row, we determine that x1 = (1/2)t(1 − 3i). Therefore, the solution set for this linear system of equations is

{((1/2)t(1 − 3i), −1/3 + (1/6)(1 + i)t, t) : t ∈ C}.


24. We reduce the corresponding augmented matrix as follows:

[1 −k 6; 2 3 k] ∼(1) [1 −k 6; 0 3 + 2k k − 12].

1. A12(−2)

If k ≠ −3/2, then each column of the row-reduced coefficient matrix contains a pivot, and hence the linear system has a unique solution. If, on the other hand, k = −3/2, then the system is inconsistent, because the last row of the row-echelon form has a pivot in the right-most column (its only nonzero entry is k − 12 = −27/2, in the last column). Under no circumstances does the linear system have infinitely many solutions.

25. First observe that if k = 0, then the second equation requires that x3 = 2, and then the first equation requires x2 = 2. However, x1 is a free variable in this case, so there are infinitely many solutions.

Now suppose that k ≠ 0. Then multiplying each row of the corresponding augmented matrix for the linear system by 1/k yields a row-echelon form with pivots in the first two columns only. Therefore, the third variable, x3, is free in this case. So once again, there are infinitely many solutions to the system.

We conclude that the system has infinitely many solutions for all values of k.

26. Since this linear system is homogeneous, it already has at least one solution: (0, 0, 0). Therefore, it only remains to determine the values of k for which this will be the only solution. We reduce the corresponding augmented matrix as follows:

[10 k −1 0; k 1 −1 0; 2 1 −1 0]
∼(1) [10k k^2 −k 0; 10k 10 −10 0; 1 1/2 −1/2 0]
∼(2) [1 1/2 −1/2 0; 10k 10 −10 0; 10k k^2 −k 0]
∼(3) [1 1/2 −1/2 0; 0 10 − 5k 5k − 10 0; 0 k^2 − 5k 4k 0]
∼(4) [1 1/2 −1/2 0; 0 1 −1 0; 0 k^2 − 5k 4k 0]
∼(5) [1 1/2 −1/2 0; 0 1 −1 0; 0 0 k^2 − k 0].

1. M1(k), M2(10), M3(1/2) 2. P13 3. A12(−10k), A13(−10k) 4. M2(1/(10 − 5k)) 5. A23(5k − k^2)

Note that the steps above are not valid if k = 0 or k = 2 (Step 1 is not valid with k = 0, and Step 4 is not valid if k = 2). We discuss those special cases individually in a moment. However, if k ≠ 0, 2, then the steps are valid, and we see from the last row of the last matrix that if k = 1, we have infinitely many solutions. Otherwise, if k ≠ 0, 1, 2, the matrix has full rank, and so there is a unique solution to the linear system.

If k = 2, then the last two rows of the original matrix are the same, and so the matrix of coefficients of the linear system is not invertible. Therefore, the homogeneous linear system must have infinitely many solutions.

If k = 0, we reduce the original linear system as follows:

[10 0 −1 0; 0 1 −1 0; 2 1 −1 0] ∼(1) [1 0 −1/10 0; 0 1 −1 0; 2 1 −1 0] ∼(2) [1 0 −1/10 0; 0 1 −1 0; 0 1 −4/5 0] ∼(3) [1 0 −1/10 0; 0 1 −1 0; 0 0 1/5 0].

The last matrix has full rank, so there is a unique solution in this case.

1. M1(1/10) 2. A13(−2) 3. A23(−1)

To summarize: the linear system has infinitely many solutions if and only if k = 1 or k = 2. Otherwise, the system has a unique solution.
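The case analysis in Problem 26 can be cross-checked against the coefficient determinant, which factors as (k − 1)(k − 2); the homogeneous system has nontrivial solutions exactly when it vanishes. A sketch (the determinant route is a cross-check, not the book's method here):

```python
from fractions import Fraction

def det3(m):
    # Cofactor expansion of a 3x3 determinant along the first row.
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def coeff_det(k):
    # Coefficient matrix of the homogeneous system in Problem 26.
    return det3([[10, k, -1], [k, 1, -1], [2, 1, -1]])

# det = (k - 1)(k - 2): zero (infinitely many solutions) iff k = 1 or k = 2.
assert coeff_det(1) == 0 and coeff_det(2) == 0
assert all(coeff_det(k) != 0 for k in [0, -1, 3, Fraction(1, 2)])
print("nontrivial solutions exactly for k in {1, 2}")
```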


27. To solve this system, we reduce the corresponding augmented matrix for the linear system to row-echelon form:

[1 −k k^2 0; 1 0 k 0; 0 1 −1 1] ∼(1) [1 −k k^2 0; 0 k k − k^2 0; 0 1 −1 1] ∼(2) [1 −k k^2 0; 0 1 −1 1; 0 k k − k^2 0] ∼(3) [1 −k k^2 0; 0 1 −1 1; 0 0 2k − k^2 −k].

1. A12(−1) 2. P23 3. A23(−k)

Now provided that 2k − k^2 ≠ 0, the system can be solved without free variables via back-substitution, and therefore there is a unique solution. Consider now what happens if 2k − k^2 = 0. Then either k = 0 or k = 2. If k = 0, then only the first two columns of the last augmented matrix above are pivoted, and we have a free variable corresponding to x3; therefore, there are infinitely many solutions in this case. On the other hand, if k = 2, then the last row of the last matrix above reflects an inconsistency in the linear system, and there are no solutions.

To summarize: the system has no solutions if k = 2, a unique solution if k ≠ 0 and k ≠ 2, and infinitely many solutions if k = 0.

28. No, there are no common points of intersection. A common point of intersection would be indicated by a solution to the linear system consisting of the equations of the three planes. However, the corresponding augmented matrix can be row-reduced as follows:

[1 2 1 4; 0 1 −1 1; 1 3 0 0] ∼(1) [1 2 1 4; 0 1 −1 1; 0 1 −1 −4] ∼(2) [1 2 1 4; 0 1 −1 1; 0 0 0 −5].

The last row of this matrix shows that the linear system is inconsistent, and so there are no points common to all three planes.

1. A13(−1) 2. A23(−1)

29.

(a): We have

[4 7; −2 5] ∼(1) [1 7/4; −2 5] ∼(2) [1 7/4; 0 17/2] ∼(3) [1 7/4; 0 1].

1. M1(1/4) 2. A12(2) 3. M2(2/17)

(b): rank(A) = 2, since the row-echelon form of A in (a) consists of two nonzero rows.

(c): We have

[4 7 1 0; −2 5 0 1] ∼(1) [1 7/4 1/4 0; −2 5 0 1] ∼(2) [1 7/4 1/4 0; 0 17/2 1/2 1] ∼(3) [1 7/4 1/4 0; 0 1 1/17 2/17] ∼(4) [1 0 5/34 −7/34; 0 1 1/17 2/17].

1. M1(1/4) 2. A12(2) 3. M2(2/17) 4. A21(−7/4)

Thus, A⁻¹ = [5/34 −7/34; 1/17 2/17].
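The inverse found in Problem 29(c) is easy to confirm with exact rational arithmetic; a minimal sketch:

```python
from fractions import Fraction as F

A = [[4, 7], [-2, 5]]
Ainv = [[F(5, 34), F(-7, 34)], [F(1, 17), F(2, 17)]]

def matmul(X, Y):
    # Plain triple-loop matrix product, exact with Fractions.
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

I2 = [[1, 0], [0, 1]]
assert matmul(A, Ainv) == I2 and matmul(Ainv, A) == I2
print("A * A^{-1} = I_2 confirmed")
```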

30.

(a): We have

[2 −7; −4 14] ∼(1) [2 −7; 0 0] ∼(2) [1 −7/2; 0 0].

1. A12(2) 2. M1(1/2)

(b): rank(A) = 1, since the row-echelon form of A in (a) has one nonzero row.

(c): Since rank(A) < 2, A is not invertible.

31.

(a): We have

[3 −1 6; 0 2 3; 3 −5 0] ∼(1) [1 −1/3 2; 0 2 3; 1 −5/3 0] ∼(2) [1 −1/3 2; 0 2 3; 0 −4/3 −2] ∼(3) [1 −1/3 2; 0 2 3; 0 0 0] ∼(4) [1 −1/3 2; 0 1 3/2; 0 0 0].

1. M1(1/3), M3(1/3) 2. A13(−1) 3. A23(2/3) 4. M2(1/2)

(b): rank(A) = 2, since the row-echelon form of A in (a) consists of two nonzero rows.

(c): Since rank(A) < 3, A is not invertible.

32.

(a): We have

[2 1 0 0; 1 2 0 0; 0 0 3 4; 0 0 4 3] ∼(1) [1 2 0 0; 2 1 0 0; 0 0 3 4; 0 0 4 3] ∼(2) [1 2 0 0; 0 −3 0 0; 0 0 3 4; 0 0 1 −1] ∼(3) [1 2 0 0; 0 −3 0 0; 0 0 1 −1; 0 0 3 4] ∼(4) [1 2 0 0; 0 1 0 0; 0 0 1 −1; 0 0 0 7] ∼(5) [1 2 0 0; 0 1 0 0; 0 0 1 −1; 0 0 0 1].

1. P12 2. A12(−2), A34(−1) 3. P34 4. M2(−1/3), A34(−3) 5. M4(1/7)

(b): rank(A) = 4, since the row-echelon form of A in (a) consists of four nonzero rows.

(c): We have

[2 1 0 0 1 0 0 0; 1 2 0 0 0 1 0 0; 0 0 3 4 0 0 1 0; 0 0 4 3 0 0 0 1]
∼(1) [1 2 0 0 0 1 0 0; 2 1 0 0 1 0 0 0; 0 0 3 4 0 0 1 0; 0 0 4 3 0 0 0 1]
∼(2) [1 2 0 0 0 1 0 0; 0 −3 0 0 1 −2 0 0; 0 0 3 4 0 0 1 0; 0 0 1 −1 0 0 −1 1]
∼(3) [1 2 0 0 0 1 0 0; 0 −3 0 0 1 −2 0 0; 0 0 1 −1 0 0 −1 1; 0 0 3 4 0 0 1 0]
∼(4) [1 2 0 0 0 1 0 0; 0 1 0 0 −1/3 2/3 0 0; 0 0 1 −1 0 0 −1 1; 0 0 0 7 0 0 4 −3]
∼(5) [1 0 0 0 2/3 −1/3 0 0; 0 1 0 0 −1/3 2/3 0 0; 0 0 1 −1 0 0 −1 1; 0 0 0 1 0 0 4/7 −3/7]
∼(6) [1 0 0 0 2/3 −1/3 0 0; 0 1 0 0 −1/3 2/3 0 0; 0 0 1 0 0 0 −3/7 4/7; 0 0 0 1 0 0 4/7 −3/7].

1. P12 2. A12(−2), A34(−1) 3. P34 4. A34(−3), M2(−1/3) 5. M4(1/7), A21(−2) 6. A43(1)

Thus, A⁻¹ = [2/3 −1/3 0 0; −1/3 2/3 0 0; 0 0 −3/7 4/7; 0 0 4/7 −3/7].

33.

(a): We have

[3 0 0; 0 2 −1; 1 −1 2] ∼(1) [1 0 0; 0 2 −1; 1 −1 2] ∼(2) [1 0 0; 0 2 −1; 0 −1 2] ∼(3) [1 0 0; 0 −1 2; 0 2 −1] ∼(4) [1 0 0; 0 −1 2; 0 0 3] ∼(5) [1 0 0; 0 1 −2; 0 0 1].

1. M1(1/3) 2. A13(−1) 3. P23 4. A23(2) 5. M2(−1), M3(1/3)

(b): rank(A) = 3, since the row-echelon form of A in (a) has 3 nonzero rows.

(c): We have

[3 0 0 1 0 0; 0 2 −1 0 1 0; 1 −1 2 0 0 1]
∼(1) [1 0 0 1/3 0 0; 0 2 −1 0 1 0; 1 −1 2 0 0 1]
∼(2) [1 0 0 1/3 0 0; 0 2 −1 0 1 0; 0 −1 2 −1/3 0 1]
∼(3) [1 0 0 1/3 0 0; 0 −1 2 −1/3 0 1; 0 2 −1 0 1 0]
∼(4) [1 0 0 1/3 0 0; 0 −1 2 −1/3 0 1; 0 0 3 −2/3 1 2]
∼(5) [1 0 0 1/3 0 0; 0 1 −2 1/3 0 −1; 0 0 1 −2/9 1/3 2/3]
∼(6) [1 0 0 1/3 0 0; 0 1 0 −1/9 2/3 1/3; 0 0 1 −2/9 1/3 2/3].

1. M1(1/3) 2. A13(−1) 3. P23 4. A23(2) 5. M2(−1), M3(1/3) 6. A32(2)

Hence, A⁻¹ = [1/3 0 0; −1/9 2/3 1/3; −2/9 1/3 2/3].

34.

(a): We have

[−2 −3 1; 1 4 2; 0 5 3] ∼(1) [1 4 2; −2 −3 1; 0 5 3] ∼(2) [1 4 2; 0 5 5; 0 5 3] ∼(3) [1 4 2; 0 5 5; 0 0 −2] ∼(4) [1 4 2; 0 1 1; 0 0 1].

1. P12 2. A12(2) 3. A23(−1) 4. M2(1/5), M3(−1/2)

(b): rank(A) = 3, since the row-echelon form of A in (a) consists of 3 nonzero rows.

(c): We have

[−2 −3 1 1 0 0; 1 4 2 0 1 0; 0 5 3 0 0 1]
∼(1) [1 4 2 0 1 0; −2 −3 1 1 0 0; 0 5 3 0 0 1]
∼(2) [1 4 2 0 1 0; 0 5 5 1 2 0; 0 5 3 0 0 1]
∼(3) [1 4 2 0 1 0; 0 5 5 1 2 0; 0 0 −2 −1 −2 1]
∼(4) [1 4 2 0 1 0; 0 1 1 1/5 2/5 0; 0 0 1 1/2 1 −1/2]
∼(5) [1 0 −2 −4/5 −3/5 0; 0 1 1 1/5 2/5 0; 0 0 1 1/2 1 −1/2]
∼(6) [1 0 0 1/5 7/5 −1; 0 1 0 −3/10 −3/5 1/2; 0 0 1 1/2 1 −1/2].

1. P12 2. A12(2) 3. A23(−1) 4. M2(1/5), M3(−1/2) 5. A21(−4) 6. A31(2), A32(−1)

Thus, A⁻¹ = [1/5 7/5 −1; −3/10 −3/5 1/2; 1/2 1 −1/2].

35. We use the Gauss-Jordan method to find A⁻¹:

[1 −1 3 1 0 0; 4 −3 13 0 1 0; 1 1 4 0 0 1]
∼(1) [1 −1 3 1 0 0; 0 1 1 −4 1 0; 0 2 1 −1 0 1]
∼(2) [1 −1 3 1 0 0; 0 1 1 −4 1 0; 0 0 −1 7 −2 1]
∼(3) [1 −1 3 1 0 0; 0 1 1 −4 1 0; 0 0 1 −7 2 −1]
∼(4) [1 0 4 −3 1 0; 0 1 1 −4 1 0; 0 0 1 −7 2 −1]
∼(5) [1 0 0 25 −7 4; 0 1 0 3 −1 1; 0 0 1 −7 2 −1].

1. A12(−4), A13(−1) 2. A23(−2) 3. M3(−1) 4. A21(1) 5. A31(−4), A32(−1)

Thus, A⁻¹ = [25 −7 4; 3 −1 1; −7 2 −1].

Now xi = A⁻¹ei for each i. So

x1 = A⁻¹e1 = [25; 3; −7], x2 = A⁻¹e2 = [−7; −1; 2], x3 = A⁻¹e3 = [4; 1; −1].

36. We have xi = A⁻¹bi, where A⁻¹ = −(1/39)[−2 −5; −7 2]. Therefore,

x1 = A⁻¹b1 = −(1/39)[−2 −5; −7 2][1; 2] = −(1/39)[−12; −3] = (1/39)[12; 3] = (1/13)[4; 1],

x2 = A⁻¹b2 = −(1/39)[−2 −5; −7 2][4; 3] = −(1/39)[−23; −22] = (1/39)[23; 22],

and

x3 = A⁻¹b3 = −(1/39)[−2 −5; −7 2][−2; 5] = −(1/39)[−21; 24] = (1/39)[21; −24] = (1/13)[7; −8].

37.

(a): We have

(A⁻¹B)(B⁻¹A) = A⁻¹(BB⁻¹)A = A⁻¹InA = A⁻¹A = In

and

(B⁻¹A)(A⁻¹B) = B⁻¹(AA⁻¹)B = B⁻¹InB = B⁻¹B = In.

Therefore, (B⁻¹A)⁻¹ = A⁻¹B.

(b): We have (A⁻¹B)⁻¹ = B⁻¹(A⁻¹)⁻¹ = B⁻¹A, as required.

38. We prove this by induction on k. If k = 0, then A^k = A^0 = In and S⁻¹D^0S = S⁻¹InS = S⁻¹S = In. Thus, A^k = S⁻¹D^kS when k = 0. Now assume that A^(k−1) = S⁻¹D^(k−1)S. We wish to show that A^k = S⁻¹D^kS. We have

A^k = AA^(k−1) = (S⁻¹DS)(S⁻¹D^(k−1)S) = S⁻¹D(SS⁻¹)D^(k−1)S = S⁻¹DInD^(k−1)S = S⁻¹DD^(k−1)S = S⁻¹D^kS,

as required.

39.

(a): We reduce A to the identity matrix:

[4 7; −2 5] ∼(1) [1 7/4; −2 5] ∼(2) [1 7/4; 0 17/2] ∼(3) [1 7/4; 0 1] ∼(4) [1 0; 0 1].

1. M1(1/4) 2. A12(2) 3. M2(2/17) 4. A21(−7/4)

The elementary matrices corresponding to these row operations are

E1 = [1/4 0; 0 1], E2 = [1 0; 2 1], E3 = [1 0; 0 2/17], E4 = [1 −7/4; 0 1].

We have E4E3E2E1A = I2, so that

A = E1⁻¹E2⁻¹E3⁻¹E4⁻¹ = [4 0; 0 1][1 0; −2 1][1 0; 0 17/2][1 7/4; 0 1],

which is the desired expression since each Ei⁻¹ is an elementary matrix.

(b): We can reduce A to upper triangular form by the following elementary row operation:

[4 7; −2 5] ∼(1) [4 7; 0 17/2].

1. A12(1/2)

Therefore we have the multiplier m21 = −1/2. Hence, setting

L = [1 0; −1/2 1] and U = [4 7; 0 17/2],

we have the LU factorization A = LU, which can be easily verified by direct multiplication.
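The "direct multiplication" check of the LU factorization in Problem 39(b) can be carried out exactly with rational arithmetic; a minimal sketch:

```python
from fractions import Fraction as F

A = [[4, 7], [-2, 5]]
L = [[1, 0], [F(-1, 2), 1]]   # unit lower triangular, multiplier -1/2
U = [[4, 7], [0, F(17, 2)]]   # upper triangular factor from the reduction

def matmul(X, Y):
    # Plain triple-loop matrix product, exact with Fractions.
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

assert matmul(L, U) == A
print("A = LU verified")
```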

40.

(a): We reduce A to the identity matrix:

[2 1 0 0; 1 2 0 0; 0 0 3 4; 0 0 4 3] ∼(1) [1 2 0 0; 2 1 0 0; 0 0 3 4; 0 0 4 3] ∼(2) [1 2 0 0; 0 −3 0 0; 0 0 3 4; 0 0 4 3] ∼(3) [1 2 0 0; 0 1 0 0; 0 0 3 4; 0 0 4 3] ∼(4) [1 0 0 0; 0 1 0 0; 0 0 3 4; 0 0 4 3] ∼(5) [1 0 0 0; 0 1 0 0; 0 0 1 4/3; 0 0 4 3] ∼(6) [1 0 0 0; 0 1 0 0; 0 0 1 4/3; 0 0 0 −7/3] ∼(7) [1 0 0 0; 0 1 0 0; 0 0 1 4/3; 0 0 0 1] ∼(8) [1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 1].

1. P12 2. A12(−2) 3. M2(−1/3) 4. A21(−2) 5. M3(1/3) 6. A34(−4) 7. M4(−3/7) 8. A43(−4/3)

The elementary matrices corresponding to these row operations are

E1 = [0 1 0 0; 1 0 0 0; 0 0 1 0; 0 0 0 1], E2 = [1 0 0 0; −2 1 0 0; 0 0 1 0; 0 0 0 1], E3 = [1 0 0 0; 0 −1/3 0 0; 0 0 1 0; 0 0 0 1], E4 = [1 −2 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 1], E5 = [1 0 0 0; 0 1 0 0; 0 0 1/3 0; 0 0 0 1], E6 = [1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 −4 1], E7 = [1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 −3/7], E8 = [1 0 0 0; 0 1 0 0; 0 0 1 −4/3; 0 0 0 1].

We have E8E7E6E5E4E3E2E1A = I4, so that

A = E1⁻¹E2⁻¹E3⁻¹E4⁻¹E5⁻¹E6⁻¹E7⁻¹E8⁻¹
= [0 1 0 0; 1 0 0 0; 0 0 1 0; 0 0 0 1][1 0 0 0; 2 1 0 0; 0 0 1 0; 0 0 0 1][1 0 0 0; 0 −3 0 0; 0 0 1 0; 0 0 0 1][1 2 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 1][1 0 0 0; 0 1 0 0; 0 0 3 0; 0 0 0 1][1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 4 1][1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 −7/3][1 0 0 0; 0 1 0 0; 0 0 1 4/3; 0 0 0 1],

which is the desired expression since each Ei⁻¹ is an elementary matrix.

(b): We can reduce A to upper triangular form by the following elementary row operations:

[2 1 0 0; 1 2 0 0; 0 0 3 4; 0 0 4 3] ∼(1) [2 1 0 0; 0 3/2 0 0; 0 0 3 4; 0 0 4 3] ∼(2) [2 1 0 0; 0 3/2 0 0; 0 0 3 4; 0 0 0 −7/3].

1. A12(−1/2) 2. A34(−4/3)

Therefore, the nonzero multipliers are m21 = 1/2 and m43 = 4/3. Hence, setting

L = [1 0 0 0; 1/2 1 0 0; 0 0 1 0; 0 0 4/3 1] and U = [2 1 0 0; 0 3/2 0 0; 0 0 3 4; 0 0 0 −7/3],

we have the LU factorization A = LU, which can be easily verified by direct multiplication.

41.

(a): We reduce A to the identity matrix:

[3 0 0; 0 2 −1; 1 −1 2] ∼(1) [1 −1 2; 0 2 −1; 3 0 0] ∼(2) [1 −1 2; 0 2 −1; 0 3 −6] ∼(3) [1 −1 2; 0 1 −1/2; 0 3 −6] ∼(4) [1 −1 2; 0 1 −1/2; 0 0 −9/2] ∼(5) [1 −1 2; 0 1 −1/2; 0 0 1] ∼(6) [1 0 3/2; 0 1 −1/2; 0 0 1] ∼(7) [1 0 0; 0 1 −1/2; 0 0 1] ∼(8) [1 0 0; 0 1 0; 0 0 1].

1. P13 2. A13(−3) 3. M2(1/2) 4. A23(−3) 5. M3(−2/9) 6. A21(1) 7. A31(−3/2) 8. A32(1/2)

The elementary matrices corresponding to these row operations are

E1 = [0 0 1; 0 1 0; 1 0 0], E2 = [1 0 0; 0 1 0; −3 0 1], E3 = [1 0 0; 0 1/2 0; 0 0 1], E4 = [1 0 0; 0 1 0; 0 −3 1], E5 = [1 0 0; 0 1 0; 0 0 −2/9], E6 = [1 1 0; 0 1 0; 0 0 1], E7 = [1 0 −3/2; 0 1 0; 0 0 1], E8 = [1 0 0; 0 1 1/2; 0 0 1].

We have E8E7E6E5E4E3E2E1A = I3, so that

A = E1⁻¹E2⁻¹E3⁻¹E4⁻¹E5⁻¹E6⁻¹E7⁻¹E8⁻¹
= [0 0 1; 0 1 0; 1 0 0][1 0 0; 0 1 0; 3 0 1][1 0 0; 0 2 0; 0 0 1][1 0 0; 0 1 0; 0 3 1][1 0 0; 0 1 0; 0 0 −9/2][1 −1 0; 0 1 0; 0 0 1][1 0 3/2; 0 1 0; 0 0 1][1 0 0; 0 1 −1/2; 0 0 1],

which is the desired expression since each Ei⁻¹ is an elementary matrix.

(b): We can reduce A to upper triangular form by the following elementary row operations:

[3 0 0; 0 2 −1; 1 −1 2] ∼(1) [3 0 0; 0 2 −1; 0 −1 2] ∼(2) [3 0 0; 0 2 −1; 0 0 3/2].

1. A13(−1/3) 2. A23(1/2)

Therefore, the nonzero multipliers are m31 = 1/3 and m32 = −1/2. Hence, setting

L = [1 0 0; 0 1 0; 1/3 −1/2 1] and U = [3 0 0; 0 2 −1; 0 0 3/2],

we have the LU factorization A = LU, which can be verified by direct multiplication.

42.

(a): We reduce A to the identity matrix:

[−2 −3 1; 1 4 2; 0 5 3] ∼(1) [1 4 2; −2 −3 1; 0 5 3] ∼(2) [1 4 2; 0 5 5; 0 5 3] ∼(3) [1 4 2; 0 5 5; 0 0 −2] ∼(4) [1 4 2; 0 1 1; 0 0 −2] ∼(5) [1 4 2; 0 1 1; 0 0 1] ∼(6) [1 0 −2; 0 1 1; 0 0 1] ∼(7) [1 0 0; 0 1 1; 0 0 1] ∼(8) [1 0 0; 0 1 0; 0 0 1].

1. P12 2. A12(2) 3. A23(−1) 4. M2(1/5) 5. M3(−1/2) 6. A21(−4) 7. A31(2) 8. A32(−1)

The elementary matrices corresponding to these row operations are

E1 = [0 1 0; 1 0 0; 0 0 1], E2 = [1 0 0; 2 1 0; 0 0 1], E3 = [1 0 0; 0 1 0; 0 −1 1], E4 = [1 0 0; 0 1/5 0; 0 0 1], E5 = [1 0 0; 0 1 0; 0 0 −1/2], E6 = [1 −4 0; 0 1 0; 0 0 1], E7 = [1 0 2; 0 1 0; 0 0 1], E8 = [1 0 0; 0 1 −1; 0 0 1].

We have E8E7E6E5E4E3E2E1A = I3, so that

A = E1⁻¹E2⁻¹E3⁻¹E4⁻¹E5⁻¹E6⁻¹E7⁻¹E8⁻¹
= [0 1 0; 1 0 0; 0 0 1][1 0 0; −2 1 0; 0 0 1][1 0 0; 0 1 0; 0 1 1][1 0 0; 0 5 0; 0 0 1][1 0 0; 0 1 0; 0 0 −2][1 4 0; 0 1 0; 0 0 1][1 0 −2; 0 1 0; 0 0 1][1 0 0; 0 1 1; 0 0 1],

which is the desired expression since each Ei⁻¹ is an elementary matrix.

(b): We can reduce A to upper triangular form by the following elementary row operations:

[−2 −3 1; 1 4 2; 0 5 3] ∼(1) [−2 −3 1; 0 5/2 5/2; 0 5 3] ∼(2) [−2 −3 1; 0 5/2 5/2; 0 0 −2].

1. A12(1/2) 2. A23(−2)

Therefore, the nonzero multipliers are m21 = −1/2 and m32 = 2. Hence, setting

L = [1 0 0; −1/2 1 0; 0 2 1] and U = [−2 −3 1; 0 5/2 5/2; 0 0 −2],

we have the LU factorization A = LU, which can be verified by direct multiplication.

43.

(a): Note that

(A + B)^3 = (A + B)^2(A + B) = (A^2 + AB + BA + B^2)(A + B) = A^3 + ABA + BA^2 + B^2A + A^2B + AB^2 + BAB + B^3.

(b): We have

(A − B)^3 = (A − B)^2(A − B) = (A^2 − AB − BA + B^2)(A − B) = A^3 − ABA − BA^2 + B^2A − A^2B + AB^2 + BAB − B^3.

(c): The answer is 2^k, because each term in the expansion of (A + B)^k consists of a string of k matrices, each of which is either A or B (2 possibilities for each matrix in the string). Multiplying the possibilities for each position in the string of length k, we get 2^k different strings, and hence 2^k different terms in the expansion of (A + B)^k.
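The counting argument in 43(c) can be checked by formally expanding (A + B)^k over non-commuting symbols: each term is a word of length k in the alphabet {A, B}. A sketch using strings as the words:

```python
from itertools import product

def expand_words(k):
    # All words appearing in the noncommutative expansion of (A + B)^k.
    return ["".join(w) for w in product("AB", repeat=k)]

for k in range(1, 7):
    words = expand_words(k)
    assert len(words) == 2 ** k            # 2^k terms in the expansion
    assert len(set(words)) == len(words)   # and all of them are distinct
print(expand_words(3))
```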

44. We claim that

[A 0; 0 B⁻¹]⁻¹ = [A⁻¹ 0; 0 B].

To see this, simply note that

[A 0; 0 B⁻¹][A⁻¹ 0; 0 B] = [In 0; 0 Im] = In+m

and

[A⁻¹ 0; 0 B][A 0; 0 B⁻¹] = [In 0; 0 Im] = In+m.
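The block-diagonal identity in Problem 44 is easy to confirm on a small concrete instance; a sketch with hypothetical blocks (a 2 × 2 invertible A and a 1 × 1 B, not taken from the text):

```python
from fractions import Fraction as F

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def block_diag(P, Q):
    # Assemble the block-diagonal matrix [[P, 0], [0, Q]].
    n, m = len(P), len(Q)
    top = [row + [0] * m for row in P]
    bottom = [[0] * n + row for row in Q]
    return top + bottom

A = [[2, 1], [1, 1]]
Ainv = [[1, -1], [-1, 2]]     # inverse of A
B = [[3]]
Binv = [[F(1, 3)]]            # inverse of B

M = block_diag(A, Binv)
Minv = block_diag(Ainv, B)
I3 = [[1 if i == j else 0 for j in range(3)] for i in range(3)]
assert matmul(M, Minv) == I3 and matmul(Minv, M) == I3
print("block-diagonal inverse confirmed")
```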

45. For a 2 × 4 matrix, the leading ones can occur in 6 different positions:

[1 ∗ ∗ ∗; 0 1 ∗ ∗], [1 ∗ ∗ ∗; 0 0 1 ∗], [1 ∗ ∗ ∗; 0 0 0 1], [0 1 ∗ ∗; 0 0 1 ∗], [0 1 ∗ ∗; 0 0 0 1], [0 0 1 ∗; 0 0 0 1].

For a 3 × 4 matrix, the leading ones can occur in 4 different positions:

[1 ∗ ∗ ∗; 0 1 ∗ ∗; 0 0 1 ∗], [1 ∗ ∗ ∗; 0 1 ∗ ∗; 0 0 0 1], [1 ∗ ∗ ∗; 0 0 1 ∗; 0 0 0 1], [0 1 ∗ ∗; 0 0 1 ∗; 0 0 0 1].

For a 4 × 6 matrix, the leading ones can occur in 15 different positions, one for each choice of the four pivot columns:

(1,2,3,4), (1,2,3,5), (1,2,3,6), (1,2,4,5), (1,2,4,6), (1,2,5,6), (1,3,4,5), (1,3,4,6), (1,3,5,6), (1,4,5,6), (2,3,4,5), (2,3,4,6), (2,3,5,6), (2,4,5,6), (3,4,5,6).

For an m × n matrix with m ≤ n, the answer is the binomial coefficient

C(n, m) = n!/(m!(n − m)!).

This represents n "choose" m, which is the number of ways to choose m columns from the n columns of the matrix in which to put the leading ones. This choice then determines the structure of the matrix.
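The three counts above are exactly binomial coefficients, which can be confirmed by enumerating the pivot-column choices directly:

```python
from itertools import combinations
from math import comb

def leading_one_patterns(m, n):
    # Each row-echelon leading-one pattern corresponds to a choice of
    # m pivot columns out of the n columns.
    return list(combinations(range(n), m))

assert len(leading_one_patterns(2, 4)) == comb(4, 2) == 6
assert len(leading_one_patterns(3, 4)) == comb(4, 3) == 4
assert len(leading_one_patterns(4, 6)) == comb(6, 4) == 15
print("counts match C(n, m)")
```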

46. The inverse of A^10 is B^5. To see this, we use the fact that A^2B = In = BA^2 as follows:

A^10 B^5 = A^8(A^2B)B^4 = A^8B^4 = A^6(A^2B)B^3 = A^6B^3 = A^4(A^2B)B^2 = A^4B^2 = A^2(A^2B)B = A^2B = In

and

B^5 A^10 = B^4(BA^2)A^8 = B^4A^8 = B^3(BA^2)A^6 = B^3A^6 = B^2(BA^2)A^4 = B^2A^4 = B(BA^2)A^2 = BA^2 = In.
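A concrete instance of Problem 46 (the pair A, B below is a hypothetical example chosen so that A^2 B = I, not taken from the text):

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def matpow(X, k):
    # Repeated multiplication, starting from the 2x2 identity.
    R = [[1, 0], [0, 1]]
    for _ in range(k):
        R = matmul(R, X)
    return R

A = [[1, 1], [0, 1]]     # A^2 = [[1, 2], [0, 1]]
B = [[1, -2], [0, 1]]    # inverse of A^2, so A^2 B = I
I2 = [[1, 0], [0, 1]]
assert matmul(matpow(A, 2), B) == I2
assert matmul(matpow(A, 10), matpow(B, 5)) == I2   # (A^10)^{-1} = B^5
print("A^10 B^5 = I confirmed")
```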

Solutions to Section 3.1

True-False Review:

1. TRUE. Let A = [a 0; b c]. Then det(A) = ac − 0·b = ac, which is the product of the elements on the main diagonal of A.

2. TRUE. Let A = [a b c; 0 d e; 0 0 f]. Using the schematic of Figure 3.1.1, we have det(A) = adf + be·0 + c·0·0 − 0·d·c − 0·e·a − f·0·b = adf, which is the product of the elements on the main diagonal of A.

3. FALSE. The volume of this parallelepiped is determined by the absolute value of det(A), since det(A) could very well be negative.

4. TRUE. There are 12 of each. The ones of even parity are (1,2,3,4), (1,3,4,2), (1,4,2,3), (2,1,4,3), (2,4,3,1), (2,3,1,4), (3,1,2,4), (3,2,4,1), (3,4,1,2), (4,1,3,2), (4,2,1,3), (4,3,2,1), and the others are all of odd parity.

5. FALSE. Many examples are possible here. If we take A = [1 0; 0 0] and B = [0 0; 0 1], then det(A) = det(B) = 0, but A + B = I2, and det(I2) = 1 ≠ 0.

6. FALSE. Many examples are possible here. If we take A = [1 2; 3 4], for example, then det(A) = 1·4 − 2·3 = −2 < 0, even though all elements of A are positive.

7. TRUE. In the summation that defines the determinant, each term in the sum is a product consisting of one element from each row and each column of the matrix. But that means one of the factors in each term will be zero, since it must come from the row containing all zeros. Hence, each term is zero, and the summation is zero.

8. TRUE. If the determinant of the 3 × 3 matrix [v1, v2, v3] is zero, then the volume of the parallelepiped determined by the three vectors is zero, and this means precisely that the three vectors all lie in the same plane.

Problems:

1. σ(2, 1, 3, 4) = (−1)^N(2,1,3,4) = (−1)^1 = −1, odd.

2. σ(1, 3, 2, 4) = (−1)^N(1,3,2,4) = (−1)^1 = −1, odd.

3. σ(1, 4, 3, 5, 2) = (−1)^N(1,4,3,5,2) = (−1)^4 = 1, even.

4. σ(5, 4, 3, 2, 1) = (−1)^N(5,4,3,2,1) = (−1)^10 = 1, even.

5. σ(1, 5, 2, 4, 3) = (−1)^N(1,5,2,4,3) = (−1)^4 = 1, even.

6. σ(2, 4, 6, 1, 3, 5) = (−1)^N(2,4,6,1,3,5) = (−1)^6 = 1, even.
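The inversion counts N and parities σ in Problems 1-6 can be checked mechanically; a minimal sketch:

```python
def N(p):
    # Number of inversions: pairs (i, j) with i < j but p[i] > p[j].
    return sum(1 for i in range(len(p))
                 for j in range(i + 1, len(p)) if p[i] > p[j])

def sigma(p):
    return (-1) ** N(p)

# The six permutations from Problems 1-6 and their parities.
assert (N((2, 1, 3, 4)), sigma((2, 1, 3, 4))) == (1, -1)        # odd
assert (N((1, 3, 2, 4)), sigma((1, 3, 2, 4))) == (1, -1)        # odd
assert (N((1, 4, 3, 5, 2)), sigma((1, 4, 3, 5, 2))) == (4, 1)   # even
assert (N((5, 4, 3, 2, 1)), sigma((5, 4, 3, 2, 1))) == (10, 1)  # even
assert (N((1, 5, 2, 4, 3)), sigma((1, 5, 2, 4, 3))) == (4, 1)   # even
assert (N((2, 4, 6, 1, 3, 5)), sigma((2, 4, 6, 1, 3, 5))) == (6, 1)  # even
print("all six parities confirmed")
```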

7. det(A) = |a11 a12; a21 a22| = σ(1, 2)a11a22 + σ(2, 1)a12a21 = a11a22 − a12a21.

8. det(A) = |1 −1; 2 3| = 1·3 − (−1)·2 = 5.

9. det(A) = |2 −1; 6 −3| = 2(−3) − (−1)·6 = 0.

10. det(A) = |−4 10; −1 8| = −4·8 − 10(−1) = −22.

11. det(A) = |1 −1 0; 2 3 6; 0 2 −1| = 1·3·(−1) + (−1)·6·0 + 0·2·2 − 0·3·0 − 6·2·1 − (−1)(−1)·2 = −17.

12. det(A) = |2 1 5; 4 2 3; 9 5 1| = 2·2·1 + 1·3·9 + 5·4·5 − 5·2·9 − 3·5·2 − 1·1·4 = 7.

13. det(A) = |0 0 2; 0 −4 1; −1 5 −7| = 0(−4)(−7) + 0·1·(−1) + 2·0·5 − 2(−4)(−1) − 1·5·0 − (−7)·0·0 = −8.

14. det(A) = |1 2 3 4; 0 5 6 7; 0 0 8 9; 0 0 0 10| = 400, since of the 24 terms in the expression (3.1.3) for the determinant, only the term σ(1, 2, 3, 4)a11a22a33a44 = 1·5·8·10 = 400 contains all nonzero entries.

15. det(A) = |0 0 2 0; 5 0 0 0; 0 0 0 3; 0 2 0 0| = −60, since of the 24 terms in the expression (3.1.3) for the determinant, only the term σ(3, 1, 4, 2)a13a21a34a42 contains all nonzero entries, and since σ(3, 1, 4, 2) = −1, we obtain σ(3, 1, 4, 2)a13a21a34a42 = (−1)·2·5·3·2 = −60.
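The determinants in Problems 11-15 can be cross-checked by implementing definition (3.1.3) directly as a sum of signed elementary products; a brute-force sketch:

```python
from itertools import permutations
from math import prod

def N(p):
    return sum(1 for i in range(len(p))
                 for j in range(i + 1, len(p)) if p[i] > p[j])

def det(a):
    # Definition (3.1.3): sum over all permutations of signed products.
    n = len(a)
    return sum((-1) ** N(p) * prod(a[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

assert det([[1, -1, 0], [2, 3, 6], [0, 2, -1]]) == -17   # Problem 11
assert det([[2, 1, 5], [4, 2, 3], [9, 5, 1]]) == 7        # Problem 12
assert det([[0, 0, 2], [0, -4, 1], [-1, 5, -7]]) == -8    # Problem 13
assert det([[1, 2, 3, 4], [0, 5, 6, 7],
            [0, 0, 8, 9], [0, 0, 0, 10]]) == 400          # Problem 14
assert det([[0, 0, 2, 0], [5, 0, 0, 0],
            [0, 0, 0, 3], [0, 2, 0, 0]]) == -60           # Problem 15
print("determinants match")
```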

16. |π π^2; √2 2π| = 2π^2 − √2·π^2 = (2 − √2)π^2.

17. |2 3 −1; 1 4 1; 3 1 6| = (2)(4)(6) + (3)(1)(3) + (−1)(1)(1) − (3)(4)(−1) − (1)(1)(2) − (6)(1)(3) = 48.

18. |3 2 6; 2 1 −1; −1 1 4| = (3)(1)(4) + (2)(−1)(−1) + (6)(2)(1) − (−1)(1)(6) − (1)(−1)(3) − (4)(2)(2) = 19.

19. |2 3 6; 0 1 2; 1 5 0| = (2)(1)(0) + (3)(2)(1) + (6)(0)(5) − (1)(1)(6) − (5)(2)(2) − (0)(0)(3) = −20.

20. |√π e^2 e^(−1); √67 1/30 2001; π π^2 π^3| = (√π)(1/30)(π^3) + (e^2)(2001)(π) + (e^(−1))(√67)(π^2) − (π)(1/30)(e^(−1)) − (π^2)(2001)(√π) − (π^3)(√67)(e^2) ≈ 9601.882.

21. |e^(2t) e^(3t) e^(−4t); 2e^(2t) 3e^(3t) −4e^(−4t); 4e^(2t) 9e^(3t) 16e^(−4t)| = (e^(2t))(3e^(3t))(16e^(−4t)) + (e^(3t))(−4e^(−4t))(4e^(2t)) + (e^(−4t))(2e^(2t))(9e^(3t)) − (4e^(2t))(3e^(3t))(e^(−4t)) − (9e^(3t))(−4e^(−4t))(e^(2t)) − (16e^(−4t))(2e^(2t))(e^(3t)) = 42e^t.

22. We have

y1''' − y1'' + 4y1' − 4y1 = 8 sin 2x + 4 cos 2x − 8 sin 2x − 4 cos 2x = 0,
y2''' − y2'' + 4y2' − 4y2 = −8 cos 2x + 4 sin 2x + 8 cos 2x − 4 sin 2x = 0,
y3''' − y3'' + 4y3' − 4y3 = e^x − e^x + 4e^x − 4e^x = 0.

|y1 y2 y3; y1' y2' y3'; y1'' y2'' y3''| = |cos 2x sin 2x e^x; −2 sin 2x 2 cos 2x e^x; −4 cos 2x −4 sin 2x e^x|
= 2e^x cos^2 2x − 4e^x sin 2x cos 2x + 8e^x sin^2 2x + 8e^x cos^2 2x + 4e^x sin 2x cos 2x + 2e^x sin^2 2x = 10e^x.
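The Wronskian value 10e^x in Problem 22 can be spot-checked numerically at a few points; a minimal sketch:

```python
import math

def wronskian_value(x):
    # Rows: (y1, y2, y3), (y1', y2', y3'), (y1'', y2'', y3'') for
    # y1 = cos 2x, y2 = sin 2x, y3 = e^x.
    m = [[math.cos(2*x),     math.sin(2*x),    math.exp(x)],
         [-2*math.sin(2*x),  2*math.cos(2*x),  math.exp(x)],
         [-4*math.cos(2*x), -4*math.sin(2*x),  math.exp(x)]]
    # 3x3 determinant by cofactor expansion along the first row.
    return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))

for x in [0.0, 0.7, -1.3]:
    assert abs(wronskian_value(x) - 10*math.exp(x)) < 1e-9
print("W(x) = 10 e^x confirmed numerically")
```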

23.

(a): We have

y1''' − y1'' − y1' + y1 = e^x − e^x − e^x + e^x = 0,
y2''' − y2'' − y2' + y2 = sinh x − cosh x − sinh x + cosh x = 0,
y3''' − y3'' − y3' + y3 = cosh x − sinh x − cosh x + sinh x = 0.

|y1 y2 y3; y1' y2' y3'; y1'' y2'' y3''| = |e^x cosh x sinh x; e^x sinh x cosh x; e^x cosh x sinh x|
= e^x sinh^2 x + e^x cosh^2 x + e^x sinh x cosh x − e^x sinh^2 x − e^x cosh^2 x − e^x sinh x cosh x = 0.

(b): The formulas we need are

cosh x = (e^x + e^(−x))/2 and sinh x = (e^x − e^(−x))/2.

Adding the two equations, we find that cosh x + sinh x = e^x, so that −e^x + cosh x + sinh x = 0. Therefore, we may take d1 = −1, d2 = 1, and d3 = 1.
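The dependence relation found in (b) can be confirmed numerically at a few sample points; a minimal sketch:

```python
import math

# Identity used in 23(b): cosh x + sinh x = e^x, i.e.
# -e^x + cosh x + sinh x = 0, giving d1 = -1, d2 = d3 = 1.
for x in [0.0, 0.5, -2.0, 3.25]:
    combo = -math.exp(x) + math.cosh(x) + math.sinh(x)
    assert abs(combo) < 1e-9
print("-e^x + cosh x + sinh x = 0 confirmed")
```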

24.

(a): S4 consists of all 24 permutations of {1, 2, 3, 4}:

(1,2,3,4) (1,2,4,3) (1,3,2,4) (1,3,4,2) (1,4,2,3) (1,4,3,2)
(2,1,3,4) (2,1,4,3) (2,3,1,4) (2,3,4,1) (2,4,1,3) (2,4,3,1)
(3,1,2,4) (3,1,4,2) (3,2,1,4) (3,2,4,1) (3,4,1,2) (3,4,2,1)
(4,1,2,3) (4,1,3,2) (4,2,1,3) (4,2,3,1) (4,3,1,2) (4,3,2,1)

(b):

N(1,2,3,4) = 0, σ = 1, even;  N(1,2,4,3) = 1, σ = −1, odd;
N(1,3,2,4) = 1, σ = −1, odd;  N(1,3,4,2) = 2, σ = 1, even;
N(1,4,2,3) = 2, σ = 1, even;  N(1,4,3,2) = 3, σ = −1, odd;
N(2,1,3,4) = 1, σ = −1, odd;  N(2,1,4,3) = 2, σ = 1, even;
N(2,3,1,4) = 2, σ = 1, even;  N(2,3,4,1) = 3, σ = −1, odd;
N(2,4,1,3) = 3, σ = −1, odd;  N(2,4,3,1) = 4, σ = 1, even;
N(3,1,2,4) = 2, σ = 1, even;  N(3,1,4,2) = 3, σ = −1, odd;
N(3,2,1,4) = 3, σ = −1, odd;  N(3,2,4,1) = 4, σ = 1, even;
N(3,4,1,2) = 4, σ = 1, even;  N(3,4,2,1) = 5, σ = −1, odd;
N(4,1,2,3) = 3, σ = −1, odd;  N(4,1,3,2) = 4, σ = 1, even;
N(4,2,1,3) = 4, σ = 1, even;  N(4,2,3,1) = 5, σ = −1, odd;
N(4,3,1,2) = 5, σ = −1, odd;  N(4,3,2,1) = 6, σ = 1, even.

(c):

det(A) = a11a22a33a44 − a11a22a34a43 − a11a23a32a44 + a11a23a34a42
+ a11a24a32a43 − a11a24a33a42 − a12a21a33a44 + a12a21a34a43
+ a12a23a31a44 − a12a23a34a41 − a12a24a31a43 + a12a24a33a41
+ a13a21a32a44 − a13a21a34a42 − a13a22a31a44 + a13a22a34a41
+ a13a24a31a42 − a13a24a32a41 − a14a21a32a43 + a14a21a33a42
+ a14a22a31a43 − a14a22a33a41 − a14a23a31a42 + a14a23a32a41.


25. We use the expansion from Problem 24(c). Since a13 = a22 = a33 = 0 here, every term containing one of these entries vanishes, and the remaining nonzero terms give

det(A) = −a11a23a32a44 + a11a23a34a42 + a11a24a32a43 + a12a21a34a43 + a12a23a31a44 − a12a23a34a41 − a12a24a31a43 − a14a21a32a43 − a14a23a31a42 + a14a23a32a41
= −2 − 6 + 10 − 18 − 4 + 54 + 20 − 6 + 4 + 18 = 70.

26. Writing out the 24 terms of the expansion from Problem 24(c) with the entries of this matrix and summing, everything cancels:

det(A) = 0.

This is to be expected: the first and fourth columns of A are identical.

27. Here every main-diagonal entry is zero, so in the expansion from Problem 24(c) the only nonzero terms are those avoiding a11, a22, a33, and a44:

det(A) = a12a21a34a43 − a12a23a34a41 − a12a24a31a43 − a13a21a34a42 + a13a24a31a42 − a13a24a32a41 − a14a21a32a43 − a14a23a31a42 + a14a23a32a41
= 60 − 60 − 72 − 100 + 120 − 128 − 144 − 135 + 144 = −315.

28. In evaluating det(A) with the expression (3.1.3), observe that the only nonzero terms in the summation occur when p5 = 5. Such terms include the factor a55 = 7, which is multiplied by each corresponding term from the 4 × 4 determinant calculated in Problem 27. Therefore, by factoring a55 out of the expression for the determinant, we are left with the determinant of the corresponding 4 × 4 matrix appearing in Problem 27, and the answer here is 7 · (−315) = −2205.
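The three 4 × 4 determinants above, and the bordering argument of Problem 28, can be cross-checked by brute force over all permutations. (The exact 5 × 5 matrix of Problem 28 is not reproduced in this chunk, so as a stand-in the sketch borders the Problem 27 matrix with a fifth row and column that are zero except for a55 = 7, which forces p5 = 5 just as described.)

```python
from itertools import permutations
from math import prod

def det(a):
    n = len(a)
    def N(p):
        return sum(1 for i in range(n)
                     for j in range(i + 1, n) if p[i] > p[j])
    return sum((-1) ** N(p) * prod(a[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

A25 = [[1, -1, 0, 1], [3, 0, 2, 5], [2, 1, 0, 3], [9, -1, 2, 1]]
A26 = [[1, 1, 0, 1], [3, 1, -2, 3], [2, 3, 1, 2], [-2, 3, 5, -2]]
A27 = [[0, 1, 2, 3], [2, 0, 3, 4], [3, 4, 0, 5], [4, 5, 6, 0]]
assert det(A25) == 70
assert det(A26) == 0
assert det(A27) == -315
# Hypothetical stand-in for Problem 28: border A27 with a55 = 7.
A28 = [row + [0] for row in A27] + [[0, 0, 0, 0, 7]]
assert det(A28) == 7 * det(A27) == -2205
print("Problems 25-28 verified")
```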

29.

(a): det(cA) = |ca11 ca12; ca21 ca22| = (ca11)(ca22) − (ca12)(ca21) = c^2·a11a22 − c^2·a12a21 = c^2(a11a22 − a12a21) = c^2 det(A).

(b): det(cA) = Σ σ(p1, p2, ..., pn)(ca1p1)(ca2p2)···(canpn) = c^n Σ σ(p1, p2, ..., pn)a1p1a2p2···anpn = c^n det(A), where each summation runs over all permutations σ of {1, 2, ..., n}.

30. a11a25a33a42a54. All row and column indices are distinct, so this is a term of an order-5 determinant. Further, N(1, 5, 3, 2, 4) = 4, so that σ(1, 5, 3, 2, 4) = (−1)^4 = +1.

31. a11a23a34a43a52. This is not a possible term of an order-5 determinant, since the column indices are not distinct.

32. a13a25a31a44a42. This is not a possible term of an order-5 determinant, since the row indices are not distinct.

33. a11a32a24a43a55 = a11a24a32a43a55. This is a possible term of an order-5 determinant. N(1, 4, 2, 3, 5) = 2, so σ(1, 4, 2, 3, 5) = (−1)^2 = +1.

34. a13ap4a32a2q = a13a2qa32ap4. We must choose p = 4 and q = 1 in order for the row and column indices to be distinct. N(3, 1, 2, 4) = 2, so that σ(3, 1, 2, 4) = (−1)^2 = +1.

35. a21a3qap2a43 = ap2a21a3qa43. We must choose p = 1 and q = 4. N(2, 1, 4, 3) = 2 and σ(2, 1, 4, 3) = (−1)^2 = +1.

36. a3qap4a13a42. We must choose p = 2 and q = 1. N(3, 4, 1, 2) = 4 and σ(3, 4, 1, 2) = (−1)^4 = +1.

37. apqa34a13a42. We must choose p = 2 and q = 1. N(3, 1, 4, 2) = 3 and σ(3, 1, 4, 2) = (−1)^3 = −1.

38.

(a): ε123 = 1, ε132 = −1, ε213 = −1, ε231 = 1, ε312 = 1, ε321 = −1.

(b): Consider the triple sum Σ_(i=1..3) Σ_(j=1..3) Σ_(k=1..3) ε_ijk a1i a2j a3k. The only nonzero terms arise when i, j, and k are distinct. Consequently,

Σ_(i,j,k) ε_ijk a1i a2j a3k = ε123a11a22a33 + ε132a11a23a32 + ε213a12a21a33 + ε231a12a23a31 + ε312a13a21a32 + ε321a13a22a31
= a11a22a33 + a12a23a31 + a13a21a32 − a11a23a32 − a12a21a33 − a13a22a31
= det(A).

39. From the given term, we have

N(n, n−1, n−2, ..., 1) = 1 + 2 + 3 + ··· + (n−1) = n(n−1)/2,

because the series of (n−1) terms is just an arithmetic series with first term one, common difference one, and last term (n−1). Thus, σ(n, n−1, n−2, ..., 1) = (−1)^(n(n−1)/2).


Solutions to Section 3.2

True-False Review:

1. FALSE. The determinant of the matrix will increase by a factor of 2^n. For instance, if A = I2, then det(A) = 1. However, det(2A) = det [2 0; 0 2] = 4, so the determinant in this case increases by a factor of four.

2. TRUE. In both cases, the determinant of the matrix is multiplied by a factor of c.

3. TRUE. This follows by repeated application of Property (P8): det(A^5) = det(AAAAA) = (det A)(det A)(det A)(det A)(det A) = (det A)^5.

4. TRUE. Since det(A^2) = (det A)^2 and since det(A) is a real number, det(A^2) must be nonnegative.

5. FALSE. The matrix is not invertible if and only if its determinant, x^2·y − x·y^2 = xy(x − y), is zero. For example, if x = y = 1, the matrix is not invertible since x = y; however, neither x nor y is zero in this case. We conclude that the statement is false.

6. TRUE. We have det(AB) = (det A)(det B) = (det B)(det A) = det(BA).

Problems:

From this point on, P2 will denote an application of the property of determinants that states that if every element in any row (column) of a matrix A is multiplied by a number c, then the determinant of the resulting matrix is c·det(A).

1. |1 2 3; 2 6 4; 3 −5 2| =¹ |1 2 3; 0 2 −2; 0 −11 −7| =² 2 |1 2 3; 0 1 −1; 0 −11 −7| =³ 2 |1 2 3; 0 1 −1; 0 0 −18| = 2(−18) = −36.

1. A12(−2), A13(−3) 2. P2 3. A23(11)
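Reductions like the one above can be spot-checked with a short exact-arithmetic elimination routine (a sketch of our own, not part of the text):

```python
from fractions import Fraction

def det(rows):
    """Determinant via Gaussian elimination with exact rational arithmetic."""
    m = [[Fraction(x) for x in r] for r in rows]
    n, sign = len(m), 1
    prod = Fraction(1)
    for c in range(n):
        piv = next((r for r in range(c, n) if m[r][c] != 0), None)
        if piv is None:
            return Fraction(0)           # a zero column means det = 0
        if piv != c:
            m[c], m[piv] = m[piv], m[c]  # a row swap flips the sign
            sign = -sign
        prod *= m[c][c]
        for r in range(c + 1, n):
            f = m[r][c] / m[c][c]
            for k in range(c, n):
                m[r][k] -= f * m[c][k]
    return sign * prod

print(det([[1, 2, 3], [2, 6, 4], [3, -5, 2]]))  # Problem 1 → -36
```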

2. |2 −1 4; 3 2 1; −2 1 4| =¹ |2 −1 4; 3 2 1; 0 0 8| =² 2 |1 −1/2 2; 3 2 1; 0 0 8| =³ 2 |1 −1/2 2; 0 7/2 −5; 0 0 8| = 2 · 28 = 56.

1. A13(1) 2. P2 3. A12(−3)

3. |2 1 3; −1 2 6; 4 1 12| =¹ −|−1 2 6; 2 1 3; 4 1 12| =² −|−1 2 6; 0 5 15; 0 9 36| =³ −5 · 9 |−1 2 6; 0 1 3; 0 1 4| =⁴ −45 |−1 2 6; 0 1 3; 0 0 1| = (−45)(−1) = 45.


1. P12 2. A12(2), A13(4) 3. P2 4. A23(−1)

4. |0 1 −2; −1 0 3; 2 −3 0| =¹ |1 0 −3; 0 1 −2; 2 −3 0| =² |1 0 −3; 0 1 −2; 0 −3 6| =³ |1 0 −3; 0 1 −2; 0 0 0| = 0.

1. P12, P2 2. A13(−2) 3. A23(3)

5. |3 7 1; 5 9 −6; 2 1 3| =¹ |1 6 −2; 5 9 −6; 2 1 3| =² |1 6 −2; 0 −21 4; 0 −11 7| =³ |1 6 −2; 0 1 −10; 0 −11 7| =⁴ |1 6 −2; 0 1 −10; 0 0 −103| = −103.

1. A31(−1) 2. A12(−5), A13(−2) 3. A32(−2) 4. A23(11)

6. |1 −1 2 4; 3 1 2 4; −1 1 3 2; 2 1 4 2| =¹ |1 −1 2 4; 0 4 −4 −8; 0 0 5 6; 0 3 0 −6| =² 3 · 4 |1 −1 2 4; 0 1 −1 −2; 0 0 5 6; 0 1 0 −2| =³ −12 |1 −1 2 4; 0 1 −1 −2; 0 1 0 −2; 0 0 5 6| =⁴ −12 |1 −1 2 4; 0 1 −1 −2; 0 0 1 0; 0 0 5 6| =⁵ −12 |1 −1 2 4; 0 1 −1 −2; 0 0 1 0; 0 0 0 6| = −12 · 6 = −72.

1. A12(−3), A13(1), A14(−2) 2. P2 3. P34 4. A23(−1) 5. A34(−5)

7. Note that in the first step below, we extract a factor of 13 from the second row of the matrix, and we also extract a factor of 8 from the second column of the matrix.

|2 32 1 4; 26 104 26 −13; 2 56 2 7; 1 40 1 5| =¹ 13 · 8 |2 4 1 4; 2 1 2 −1; 2 7 2 7; 1 5 1 5| =² −104 |1 5 1 5; 2 1 2 −1; 2 7 2 7; 2 4 1 4| =³ −104 |1 5 1 5; 0 −9 0 −11; 0 −3 0 −3; 0 −6 −1 −6| =⁴ −104 · 3 |1 5 1 5; 0 1 0 1; 0 −9 0 −11; 0 −6 −1 −6| =⁵ −312 |1 5 1 5; 0 1 0 1; 0 0 0 −2; 0 0 −1 0| =⁶ 312 |1 5 1 5; 0 1 0 1; 0 0 −1 0; 0 0 0 −2| = 312(−1)(−2) = 624.


1. P2 2. P14 3. A12(−2), A13(−2), A14(−2) 4. P2, P23 5. A23(9), A24(6) 6. P34

8. |0 1 −1 1; −1 0 1 1; 1 −1 0 1; −1 −1 −1 0| =¹ −|1 −1 0 1; −1 0 1 1; 0 1 −1 1; −1 −1 −1 0| =² −|1 −1 0 1; 0 −1 1 2; 0 1 −1 1; 0 −2 −1 1| =³ −|1 −1 0 1; 0 −1 1 2; 0 0 0 3; 0 0 −3 −3| =⁴ |1 −1 0 1; 0 −1 1 2; 0 0 −3 −3; 0 0 0 3| = 9.

1. P13 2. A12(1), A14(1) 3. A23(1), A24(−2) 4. P34

9. |2 1 3 5; 3 0 1 2; 4 1 4 3; 5 2 5 3| =¹ −|1 2 3 5; 0 3 1 2; 1 4 4 3; 2 5 5 3| =² −|1 2 3 5; 0 3 1 2; 0 2 1 −2; 0 1 −1 −7| =³ |1 2 3 5; 0 1 −1 −7; 0 2 1 −2; 0 3 1 2| =⁴ |1 2 3 5; 0 1 −1 −7; 0 0 3 12; 0 0 4 23| =⁵ 3 |1 2 3 5; 0 1 −1 −7; 0 0 1 4; 0 0 4 23| =⁶ 3 |1 2 3 5; 0 1 −1 −7; 0 0 1 4; 0 0 0 7| = 3 · 7 = 21.

1. CP12 2. A13(−1), A14(−2) 3. P24 4. A23(−2), A24(−3) 5. P2 6. A34(−4)

10. |2 −1 3 4; 7 1 2 3; −2 4 8 6; 6 −6 18 −24| =¹ −6 |−1 2 3 4; 1 7 2 3; 4 −2 8 6; −1 1 3 −4| =² 6 |1 −2 −3 −4; 1 7 2 3; 4 −2 8 6; −1 1 3 −4| =³ 6 |1 −2 −3 −4; 0 9 5 7; 0 6 20 22; 0 −1 0 −8| =⁴ −6 |1 −2 −3 −4; 0 −1 0 −8; 0 6 20 22; 0 9 5 7| =⁵ −6 |1 −2 −3 −4; 0 −1 0 −8; 0 0 20 −26; 0 0 5 −65| =⁶ 6 |1 −2 −3 −4; 0 −1 0 −8; 0 0 5 −65; 0 0 20 −26| =⁷ 6 |1 −2 −3 −4; 0 −1 0 −8; 0 0 5 −65; 0 0 0 234| = 6 · 1 · (−1) · 5 · 234 = −7020.


1. CP12, P2 2. M1(−1) 3. A12(−1), A13(−4), A14(1) 4. P24

5. A23(6), A24(9) 6. P34 7. A34(−4)

11. |7 −1 3 4; 14 2 4 6; 21 1 3 4; −7 4 5 8| =¹ 7 · 2 |1 −1 3 2; 2 2 4 3; 3 1 3 2; −1 4 5 4| =² 14 |1 −1 3 2; 0 4 −2 −1; 0 4 −6 −4; 0 3 8 6| =³ 14 |1 −1 3 2; 0 4 −2 −1; 0 0 −4 −3; 0 3 8 6| =⁴ 14 |1 −1 3 2; 0 1 −10 −7; 0 0 −4 −3; 0 3 8 6| =⁵ 14 |1 −1 3 2; 0 1 −10 −7; 0 0 −4 −3; 0 0 38 27| =⁶ −56 |1 −1 3 2; 0 1 −10 −7; 0 0 1 3/4; 0 0 38 27| =⁷ −56 |1 −1 3 2; 0 1 −10 −7; 0 0 1 3/4; 0 0 0 −3/2| = −56 · (−3/2) = 84.

1. P2 2. A12(−2), A13(−3), A14(1) 3. A23(−1) 4. A42(−1) 5. A24(−3) 6. P2 7. A34(−38)

12. |3 7 1 2 3; 1 1 −1 0 1; 4 8 −1 6 6; 3 7 0 9 4; 8 16 −1 8 12| =¹ −|1 1 −1 0 1; 3 7 1 2 3; 4 8 −1 6 6; 3 7 0 9 4; 8 16 −1 8 12| =² −|1 1 −1 0 1; 0 4 4 2 0; 0 4 3 6 2; 0 4 3 9 1; 0 8 7 8 4| =³ −|1 1 −1 0 1; 0 4 4 2 0; 0 0 −1 4 2; 0 0 −1 7 1; 0 0 −1 4 4| =⁴ −|1 1 −1 0 1; 0 4 4 2 0; 0 0 −1 4 2; 0 0 0 3 −1; 0 0 0 0 2| = −(1)(4)(−1)(3)(2) = 24.

1. P12 2. A12(−3), A13(−4), A14(−3), A15(−8) 3. A23(−1), A24(−1), A25(−2) 4. A34(−1), A35(−1)
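For larger determinants such as Problems 6 and 12 it can be reassuring to fall back on the permutation definition from Section 3.1; a brute-force sketch:

```python
from itertools import permutations

def det_leibniz(M):
    """Sum of sigma(p) * a1p1 * a2p2 * ... over all permutations p."""
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if p[i] > p[j]:
                    sign = -sign
        term = sign
        for i in range(n):
            term *= M[i][p[i]]
        total += term
    return total

# Problem 6 (4x4) and Problem 12 (5x5):
print(det_leibniz([[1, -1, 2, 4], [3, 1, 2, 4], [-1, 1, 3, 2], [2, 1, 4, 2]]))  # → -72
print(det_leibniz([[3, 7, 1, 2, 3], [1, 1, -1, 0, 1], [4, 8, -1, 6, 6],
                   [3, 7, 0, 9, 4], [8, 16, -1, 8, 12]]))                       # → 24
```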

13. |2 1; 3 2| = 1; invertible.

14. |−1 1; 1 −1| = 0; not invertible.

15. |2 6 −1; 3 5 1; 2 0 1| = 14; invertible.

16. |−1 2 3; 5 −2 1; 8 −2 5| = −8; invertible.


17. |1 0 2 −1; 3 −2 1 4; 2 1 6 2; 1 −3 4 0| = |1 0 2 −1; 0 −2 −5 7; 0 1 2 4; 0 −3 2 1| = |−2 −5 7; 1 2 4; −3 2 1| = 133; invertible.

18. |1 1 1 1; −1 1 −1 1; 1 1 −1 −1; −1 1 1 −1| = |1 1 1 1; 0 2 0 2; 0 0 −2 −2; 0 2 2 0| = |2 0 2; 0 −2 −2; 2 2 0| = 16; invertible.

19. |1 2 −3 5; −1 2 −3 6; 2 3 −1 4; 1 −2 3 −6| =¹ −|1 2 −3 5; 1 −2 3 −6; 2 3 −1 4; 1 −2 3 −6| =² 0; not invertible.

1. M2(−1) 2. P7
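The invertibility tests in Problems 13–19 all reduce to "det ≠ 0"; a quick cofactor-expansion sketch confirming several of the values:

```python
def det(M):
    """Cofactor expansion along the first row (fine for small matrices)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

checks = [
    ("13", [[2, 1], [3, 2]], 1),
    ("14", [[-1, 1], [1, -1]], 0),
    ("15", [[2, 6, -1], [3, 5, 1], [2, 0, 1]], 14),
    ("16", [[-1, 2, 3], [5, -2, 1], [8, -2, 5]], -8),
    ("17", [[1, 0, 2, -1], [3, -2, 1, 4], [2, 1, 6, 2], [1, -3, 4, 0]], 133),
]
for name, M, expected in checks:
    d = det(M)
    assert d == expected
    print(name, d, "invertible" if d != 0 else "not invertible")
```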

20. The system is Ax = b, where A = [1 k; k 4], x = [x1; x2], and b = [b1; b2]. According to Corollary 3.2.5, the system has a unique solution if and only if det(A) ≠ 0. But det(A) = 4 − k², so the system has a unique solution if and only if k ≠ ±2.

21. det(A) = |1 2 k; 2 −k 1; 3 6 1| = |1 2 k; 2 −k 1; 0 0 1 − 3k| = (3k − 1)(k + 4). Consequently, the system has an infinite number of solutions if and only if k = −4 or k = 1/3 (Corollary 3.2.5).
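The factored determinant in Problem 21 can be sanity-checked by plugging the critical k values back into the matrix (exact rationals keep k = 1/3 exact):

```python
from fractions import Fraction

def det3(M):
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# Problem 21: det = (3k - 1)(k + 4), which vanishes at k = 1/3 and k = -4.
for k in (Fraction(1, 3), Fraction(-4)):
    assert det3([[1, 2, k], [2, -k, 1], [3, 6, 1]]) == 0

# For any other k the determinant matches the factored form, e.g. k = 2:
k = Fraction(2)
assert det3([[1, 2, k], [2, -k, 1], [3, 6, 1]]) == (3 * k - 1) * (k + 4)
print("singular exactly at k = 1/3 and k = -4")
```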

22. The given system is

(1 − k)x1 + 2x2 + x3 = 0,
2x1 + (1 − k)x2 + x3 = 0,
x1 + x2 + (2 − k)x3 = 0.

The determinant of the coefficients as a function of k is given by

det(A) = |1−k 2 1; 2 1−k 1; 1 1 2−k| = −(1 + k)(k − 1)(k − 4).

Consequently, the system has an infinite number of solutions if and only if k = −1, k = 1, or k = 4.

23. det(A) = |1 k 0; k 1 1; 1 1 1| = 1 + k − 1 − k² = k(1 − k). Consequently, the system has a unique solution if and only if k ≠ 0, 1.

24. det(A) = |1 −1 2; 3 1 4; 0 1 3| = 1 · 1 · 3 + (−1) · 4 · 0 + 2 · 3 · 1 − 2 · 1 · 0 − 1 · 4 · 1 − (−1) · 3 · 3 = 14.

det(Aᵀ) = det(A) = 14. det(−2A) = (−2)³ det(A) = −8 · 14 = −112.


25. A = [1 −1; 2 3] and B = [1 2; −2 4]. det(A) det(B) = [3 · 1 − (−1) · 2][1 · 4 − 2 · (−2)] = 5 · 8 = 40.

det(AB) = |3 −2; −4 16| = 3 · 16 − (−2)(−4) = 40. Hence, det(AB) = det(A) det(B).
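Problem 25's check of det(AB) = det(A) det(B) is a one-screen computation:

```python
def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, -1], [2, 3]]
B = [[1, 2], [-2, 4]]
AB = matmul(A, B)
print(AB)                           # → [[3, -2], [-4, 16]]
print(det2(AB), det2(A) * det2(B))  # → 40 40
```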

26. A = [cosh x sinh x; sinh x cosh x] and B = [cosh y sinh y; sinh y cosh y].

det(AB) = det(A) det(B) = (cosh² x − sinh² x)(cosh² y − sinh² y) = 1 · 1 = 1.

27. |3 2 1; 6 4 −1; 9 6 2| =¹ 3 |1 2 1; 2 4 −1; 3 6 2| =² 3 · 2 |1 1 1; 2 2 −1; 3 3 2| =³ 0.

1. P2 2. P2 3. P7

28. |1 −3 1; 2 −1 7; 3 1 13| =¹ |1 −3 + 4 · 1 1; 2 −1 + 4 · 2 7; 3 1 + 4 · 3 13| = |1 1 1; 2 7 7; 3 13 13| =² 0.

(Step 1 adds 4 times column 1 to column 2; the resulting matrix has equal second and third columns, so P7 gives 0.)

29. |1 + 3a 1 3; 1 + 2a 1 2; 2 2 0| =¹ |1 1 3; 1 1 2; 2 2 0| + |3a 1 3; 2a 1 2; 0 2 0| =² 0 + a |3 1 3; 2 1 2; 0 2 0| =³ a · 0 = 0.

1. P5 2. P2, P7 3. P7

30. B is obtained from A by the following elementary row operations: (1) M1(4), (2) M2(3), (3) P12. Thus, det(B) = det(A) · 4 · 3 · (−1) = −12.

31. B is obtained from Aᵀ by the following elementary row operations: (1) A13(3), (2) M1(−2). Since det(Aᵀ) = det(A) = 1, and (1) leaves the determinant unchanged, we have det(B) = −2.

32. B is obtained from A by the following operations: (1) interchange the two columns, (2) M1(−1), (3) A12(−4). Now (3) leaves the determinant unchanged, and (1) and (2) each change the sign of the determinant. Therefore, det(B) = det(A) = 1.

33. B is obtained from A by the following row operations: (1) A13(5), (2) M2(−4), (3) P12, (4) P23. Thus, det(B) = det(A) · (−4) · (−1) · (−1) = (−6) · (−4) = 24.

34. B is obtained from A by the following operations: (1) M1(−3), (2) A23(−4), (3) P12. Thus, det(B) = det(A) · (−3) · (−1) = (−6) · 3 = −18.

35. B is obtained from Aᵀ by the following row operations: (1) M1(2), (2) A32(−1), (3) A13(−1). Thus, det(B) = det(Aᵀ) · 2 = (−6) · 2 = −12.

36. We have det(ABᵀ) = det(A) det(Bᵀ) = 5 · 3 = 15.

37. We have det(A²B⁵) = (det A)²(det B)⁵ = 5² · 3⁵ = 6075.

38. We have det((A⁻¹B²)³) = (det(A⁻¹B²))³ = [(det A⁻¹)(det B)²]³ = ((1/5) · 3²)³ = (9/5)³ = 729/125 = 5.832.

39. We have

det((2B)⁻¹(AB)ᵀ) = det((2B)⁻¹) det((AB)ᵀ) = det(A) det(B)/det(2B) = (5 · 3)/(2⁴ · 3) = 5/16.

40. We have det((5A)(2B)) = (5⁴ · 5)(2⁴ · 3) = 150,000.
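Problems 36–40 are pure bookkeeping with det(A) = 5, det(B) = 3, and 4 × 4 matrices (the data assumed throughout); the arithmetic checks out:

```python
from fractions import Fraction

detA, detB, n = 5, 3, 4   # assumed data for Problems 36-40

assert detA * detB == 15                                           # Problem 36
assert detA ** 2 * detB ** 5 == 6075                               # Problem 37
assert (Fraction(1, detA) * detB ** 2) ** 3 == Fraction(729, 125)  # Problem 38
assert Fraction(detA * detB, 2 ** n * detB) == Fraction(5, 16)     # Problem 39
assert (5 ** n * detA) * (2 ** n * detB) == 150000                 # Problem 40
print("all five identities verified")
```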

41.


(a): The volume of the parallelepiped is given by |det(A)|. In this case, we have

|det(A)| = |2 + 12k + 36 − 4k − 18 − 12| = |8 + 8k|.

(b): NO. The answer does not change, because the determinants of A and Aᵀ are the same, so the volume determined by the columns of A, which is the same as the volume determined by the rows of Aᵀ, is |det(Aᵀ)| = |det(A)|.

42. |1 −1 x; 2 1 x²; 4 −1 x³| = |1 −1 x; 0 3 x² − 2x; 0 3 x³ − 4x| = |1 −1 x; 0 3 x² − 2x; 0 0 x³ − x² − 2x| = |1 −1 x; 0 3 x(x − 2); 0 0 x(x − 2)(x + 1)|.

We see directly that the determinant will be zero if and only if x ∈ {0, −1, 2}.

43. |αx − βy  βx − αy; βx + αy  αx + βy| = |αx  βx − αy; βx  αx + βy| + |−βy  βx − αy; αy  αx + βy|

= |αx βx; βx αx| + |αx −αy; βx βy| + |−βy βx; αy αx| + |−βy −αy; αy βy|

= x² |α β; β α| + xy |α −α; β β| − xy |β −β; α α| + y² |α β; β α|

= (x² + y²) |α β; β α| + xy |α −α; β β| + xy |β β; α −α|

= (x² + y²) |α β; β α| + xy |α −α; β β| − xy |α −α; β β|

= (x² + y²) |α β; β α|.

44. |a1 + βb1  b1 + γc1  c1 + αa1; a2 + βb2  b2 + γc2  c2 + αa2; a3 + βb3  b3 + γc3  c3 + αa3|

= |a1  b1 + γc1  c1 + αa1; a2  b2 + γc2  c2 + αa2; a3  b3 + γc3  c3 + αa3| + |βb1  b1 + γc1  c1 + αa1; βb2  b2 + γc2  c2 + αa2; βb3  b3 + γc3  c3 + αa3|

= |a1 b1 c1; a2 b2 c2; a3 b3 c3| + αβγ |b1 c1 a1; b2 c2 a2; b3 c3 a3|

= (1 + αβγ) |b1 c1 a1; b2 c2 a2; b3 c3 a3|.

Now if the last expression is to be zero for all ai, bi, and ci, then it must be the case that 1 + αβγ = 0; hence, αβγ = −1.

45. Suppose A is a matrix with a row of zeros. We will use (P3) and (P7) to justify the fact that det(A) = 0. If A has more than one row of zeros, then by (P7), since two rows of A are the same, det(A) = 0. Assume


instead that only one row of A consists entirely of zeros. Adding a nonzero row of A to the row of zeros yields a new matrix B with two equal rows. Thus, by (P7), det(B) = 0. However, B was obtained from A by adding a multiple of one row to another row, and by (P3), det(B) = det(A). Hence, det(A) = 0, as required.

46. A is orthogonal, so AT = A−1. Using the properties of determinants, it follows that

1 = det(In) = det(AA−1) = det(A) det(A−1) = det(A) det(AT) = det(A) det(A).

Therefore, det(A) = ±1.

47.(a): From the definition of the determinant we have

det(A) = ∑ σ(p1, p2, p3, …, pn) a1p1 a2p2 a3p3 ⋯ anpn,   (47.1)

where the sum runs over all n! permutations (p1, p2, …, pn) of 1, 2, …, n. If A is lower triangular, then aij = 0 whenever i < j, and therefore the only nonzero terms in (47.1) are those with pi ≤ i for all i. Since all the pi must be distinct, the only possibility is pi = i for all i with 1 ≤ i ≤ n, and so (47.1) reduces to the single term

det(A) = σ(1, 2, 3, …, n) a11 a22 a33 ⋯ ann.

(b):

det(A) = |2 −1 3 5; 1 2 2 1; 3 0 1 4; 1 2 0 1| =¹ |−3 −11 3 0; 0 0 2 0; −1 −8 1 0; 1 2 0 1| =² |0 13 0 0; 2 16 0 0; −1 −8 1 0; 1 2 0 1| =³ −|2 16 0 0; 0 13 0 0; −1 −8 1 0; 1 2 0 1| =⁴ −|2 0 0 0; 0 13 0 0; −1 −8 1 0; 1 2 0 1| = −26.

1. A41(−5), A42(−1), A43(−4) 2. A31(−3), A32(−2) 3. P12 4. A21(−16/13)
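Problem 47(a)'s argument — only the identity permutation survives for a triangular matrix — is easy to watch happen in code; here is a sketch using the Leibniz sum, which also reproduces the value −26 from part (b):

```python
from itertools import permutations

def sign(p):
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def det_leibniz(M):
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        term = sign(p)
        for i in range(n):
            term *= M[i][p[i]]
        total += term
    return total

# Lower triangular: every non-identity permutation picks some entry above the
# diagonal (which is zero), so det is the product of the diagonal entries.
L = [[2, 0, 0], [5, -3, 0], [1, 4, 7]]
assert det_leibniz(L) == 2 * (-3) * 7

# Problem 47(b):
print(det_leibniz([[2, -1, 3, 5], [1, 2, 2, 1], [3, 0, 1, 4], [1, 2, 0, 1]]))  # → -26
```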

48. The problem stated in the text is wrong. It should state: “Use determinants to prove that if A is invertible and B and C are matrices with AB = AC, then det(B) = det(C).” To prove this, take the determinant of both sides of AB = AC to get det(AB) = det(AC). Thus, by Property P8, we have det(A) det(B) = det(A) det(C). Since A is invertible, det(A) ≠ 0. Thus, we can cancel det(A) from the last equality to obtain det(B) = det(C).

49. det(S⁻¹AS) = det(S⁻¹) det(A) det(S) = det(S⁻¹) det(S) det(A) = det(S⁻¹S) det(A) = det(In) det(A) = det(A).

50. No. If A were invertible, then det(A) ≠ 0, so that det(A³) = det(A) det(A) det(A) ≠ 0.

51. Let E be an elementary matrix. There are three different possibilities for E.

(a): E permutes two rows: Then E is obtained from In by interchanging two rows of In. Since det(In) = 1 and using Property P1, we obtain det(E) = −1.


(b): E adds a multiple of one row to another: Then E is obtained from In by adding a multiple of one row of In to another. Since det(In) = 1 and using Property P3, we obtain det(E) = +1.

(c): E scales a row by k: Then E is obtained from In by multiplying a row of In by k. Since det(In) = 1 and using Property P2, we obtain det(E) = k.

52. We have

0 = |x y 1; x1 y1 1; x2 y2 1| = xy1 + yx2 + x1y2 − x2y1 − xy2 − yx1,

which can be rewritten as

x(y1 − y2) + y(x2 − x1) = x2y1 − x1y2.

Setting a = y1 − y2, b = x2 − x1, and c = x2y1 − x1y2, we can express this equation as ax + by = c, the equation of a line. Moreover, if we substitute x1 for x and y1 for y, we obtain a valid identity. Likewise, if we substitute x2 for x and y2 for y, we obtain a valid identity. Therefore, we have a straight line that includes the points (x1, y1) and (x2, y2).

53. |1 x x²; 1 y y²; 1 z z²| =¹ |1 x x²; 0 y − x (y − x)(y + x); 0 z − x (z − x)(z + x)| =² (y − x)(z − x) |1 x x²; 0 1 y + x; 0 1 z + x| =³ (y − x)(z − x) |1 x x²; 0 1 y + x; 0 0 z − y| = (y − x)(z − x)(z − y) = (y − z)(z − x)(x − y).

1. A12(−1), A13(−1) 2. P2 3. A23(−1)
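The 3 × 3 Vandermonde factorization just derived holds identically; a randomized spot-check:

```python
import random

def det3(M):
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

random.seed(0)
for _ in range(200):
    x, y, z = (random.randint(-9, 9) for _ in range(3))
    V = [[1, x, x * x], [1, y, y * y], [1, z, z * z]]
    assert det3(V) == (y - x) * (z - x) * (z - y)
print("Vandermonde factorization verified on 200 random triples")
```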

54. Since A is an n × n skew-symmetric matrix, Aᵀ = −A; thus,

det(Aᵀ) = det(−A) = (−1)ⁿ det(A) = −det(A)

since n is given as odd. But by P4, det(Aᵀ) = det(A), so det(A) = −det(A), and therefore det(A) = 0.

55. Solving b = c1a1 + c2a2 + ⋯ + cnan for c1a1 yields c1a1 = b − c2a2 − ⋯ − cnan. Consequently, det(Bk) can be written as

det(Bk) = det([a1, a2, …, ak−1, b, ak+1, …, an])
= det([a1, a2, …, ak−1, (c1a1 + c2a2 + ⋯ + cnan), ak+1, …, an])
= c1 det([a1, …, ak−1, a1, ak+1, …, an]) + c2 det([a1, …, ak−1, a2, ak+1, …, an]) + ⋯ + ck det([a1, …, ak−1, ak, ak+1, …, an]) + ⋯ + cn det([a1, …, ak−1, an, ak+1, …, an]).

Now by P7, all except the kth determinant are zero since they have two equal columns, so we are left with det(Bk) = ck det(A).

58. Using technology we find that

|1 2 3 4 a; 2 1 2 3 4; 3 2 1 2 3; 4 3 2 1 2; a 4 3 2 1| = −192 + 88a − 8a² = −8(a − 3)(a − 8).


Consequently, the matrix is invertible provided a ≠ 3, 8.

59. Using technology we find that

|1−k 4 1; 3 2−k 1; 3 4 −1−k| = −(k − 6)(k + 2)².

Consequently, the system has an infinite number of solutions if and only if k = 6 or k = −2.

k = 6: In this case, the system is Bx = 0, where B = [−5 4 1; 3 −4 1; 3 4 −7]. This system has solution set {t(1, 1, 1) : t ∈ R}.

k = −2: In this case, the system is Cx = 0, where C = [3 4 1; 3 4 1; 3 4 1]. This system has solution set

{(r, s, −3r − 4s) : r, s ∈ R}.

60. Using technology we find that

det(A) = −20;  A⁻¹ = [−2/5 1/2 0 1/10; 1/2 −1 1/2 0; 0 1/2 −1 1/2; 1/10 0 1/2 −2/5];  x = A⁻¹b = (19/10, −5, 1/2, 12/5).

Solutions to Section 3.3

True-False Review:

1. FALSE. Because 2 + 3 = 5 is odd, the (2, 3)-cofactor is the negative of the (2, 3)-minor of the matrix.

2. TRUE. This just requires a slight modification of the proof of Theorem 3.3.16. We compute

(A · adj(A))ij = ∑_{k=1}^{n} aik adj(A)kj = ∑_{k=1}^{n} aik Cjk = δij · det(A).

Therefore, A · adj(A) = det(A) · In.

3. TRUE. The Cofactor Expansion Theorem allows for expansion along any row or any column of thematrix, and in all cases, the result is the determinant of the matrix.

4. FALSE. For example, let A = [1 2 3; 4 5 6; 7 8 9], and let c = 2. It is easy to see that the (1, 1)-entry of adj(A) is −3. But the (1, 1)-entry of adj(2A) is −12, not −6. Therefore, the equality posed in this review item does not generally hold. Many other counterexamples could be given as well.

5. FALSE. For example, let A = [1 2 3; 4 5 6; 7 8 9] and B = [9 8 7; 6 5 4; 3 2 1]. Then A + B = [10 10 10; 10 10 10; 10 10 10]. The (1, 1)-entry of adj(A + B) is therefore 0. However, the (1, 1)-entry of adj(A) is −3, and the (1, 1)-entry of adj(B) is also −3. But (−3) + (−3) ≠ 0. Many other examples abound, of course.


6. FALSE. Let A = [a b; c d] and let B = [e f; g h]. Then AB = [ae + bg  af + bh; ce + dg  cf + dh]. We compute

adj(AB) = [cf + dh  −(af + bh); −(ce + dg)  ae + bg] ≠ [d −b; −c a][h −f; −g e] = adj(A) adj(B).

7. TRUE. This can be immediately deduced by substituting In for A in Theorem 3.3.16.

Problems:

From this point on, CET(col#n) will mean that the Cofactor Expansion Theorem has been applied to column n of the determinant, and CET(row#n) will mean that the Cofactor Expansion Theorem has been applied to row n of the determinant.

1. Minors: M11 = 4, M21 = −3, M12 = 2, M22 = 1; Cofactors: C11 = 4, C21 = 3, C12 = −2, C22 = 1.

2. Minors: M11 = −9, M21 = −7, M31 = −2, M12 = 7, M22 = 1, M32 = −2, M13 = 5, M23 = 3, M33 = 2; Cofactors: C11 = −9, C21 = 7, C31 = −2, C12 = −7, C22 = 1, C32 = 2, C13 = 5, C23 = −3, C33 = 2.

3. Minors: M11 = −5, M21 = 47, M31 = 3, M12 = 0, M22 = −2, M32 = 0, M13 = 4, M23 = −38, M33 = −2; Cofactors: C11 = −5, C21 = −47, C31 = 3, C12 = 0, C22 = −2, C32 = 0, C13 = 4, C23 = 38, C33 = −2.

4. M12 = |3 1 2; 7 4 6; 5 1 2| = −4,  M31 = |3 −1 2; 4 1 2; 0 1 2| = 16,  M23 = |1 3 2; 7 1 6; 5 0 2| = 40,  M42 = |1 −1 2; 3 1 2; 7 4 6| = 12;

C12 = 4, C31 = 16, C23 = −40, C42 = 12.
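Minors and cofactors are mechanical enough to script; in the sketch below, A is the 4 × 4 matrix of Problem 4 (reconstructed from the minors quoted above — an assumption, since the text of the exercise is not reprinted here):

```python
def det(M):
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def minor(M, i, j):
    """Determinant of M with row i and column j deleted (1-based indices)."""
    return det([r[:j - 1] + r[j:] for k, r in enumerate(M, start=1) if k != i])

def cofactor(M, i, j):
    return (-1) ** (i + j) * minor(M, i, j)

# Assumed reconstruction of the Problem 4 matrix:
A = [[1, 3, -1, 2], [3, 4, 1, 2], [7, 1, 4, 6], [5, 0, 1, 2]]
print(minor(A, 1, 2), minor(A, 3, 1), minor(A, 2, 3), minor(A, 4, 2))  # → -4 16 40 12
print(cofactor(A, 1, 2), cofactor(A, 2, 3))                            # → 4 -40
```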

5. |1 −2; 1 3| = 1 · |3| + 2 · |1| = 5.

6. |−1 2 3; 1 4 −2; 3 1 4| = 3 · |1 4; 3 1| + 2 · |−1 2; 3 1| + 4 · |−1 2; 1 4| = 3(−11) + 2(−7) + 4(−6) = −71.

7. |2 1 −4; 7 1 3; 1 5 −2| = −7 · |1 −4; 5 −2| + |2 −4; 1 −2| − 3 · |2 1; 1 5| = −7 · 18 + 0 − 3 · 9 = −153.

8. |3 1 4; 7 1 2; 2 3 −5| = 3 · |1 2; 3 −5| − 7 · |1 4; 3 −5| + 2 · |1 4; 1 2| = 3(−11) − 7(−17) + 2(−2) = 82.

9. |0 2 −3; −2 0 5; 3 −5 0| = 3 · 10 + 5(−6) + 0 · 4 = 0.


10. |1 −2 3 0; 4 0 7 −2; 0 1 3 4; 1 5 −2 0| =¹ −2 · |1 −2 3; 0 1 3; 1 5 −2| − 4 · |1 −2 3; 4 0 7; 1 5 −2| = −2 · (−26) − 4 · (−5) = 72.

1. CET(col#4)

11. |1 0 −2; 3 1 −1; 7 2 5| =¹ |1 −1; 2 5| − 2 · |3 1; 7 2| = 7 − 2(−1) = 9.

1. CET(row#1)

12. |−1 2 3; 0 1 4; 2 −1 3| =¹ −1 · |1 4; −1 3| + 2 · |2 3; 1 4| = −7 + 10 = 3.

1. CET(col#1)

13. |2 −1 3; 5 2 1; 3 −3 7| =¹ 2 · |2 1; −3 7| − 5 · |−1 3; −3 7| + 3 · |−1 3; 2 1| = 2 · 17 − 5 · 2 + 3(−7) = 3.

1. CET(col#1)

14. |0 −2 1; 2 0 −3; −1 3 0| =¹ 0 · |0 −3; 3 0| − 2 · |−2 1; 3 0| − 1 · |−2 1; 0 −3| = 0 + 6 − 6 = 0.

1. CET(col#1)

15. |1 0 −1 0; 0 1 0 −1; −1 0 −1 0; 0 1 0 1| =¹ 1 · |1 0 −1; 0 −1 0; 1 0 1| − 1 · |0 −1 0; 1 0 −1; 1 0 1| = −2 − 2 = −4.

1. CET(col#1)

16. |2 −1 3 1; 1 4 −2 3; 0 2 −1 0; 1 3 −2 4| =¹ |2 5 3 1; 1 0 −2 3; 0 0 −1 0; 1 −1 −2 4| =² −|2 5 1; 1 0 3; 1 −1 4| = |5 1; −1 4| + 3 · |2 5; 1 −1| = 21 − 21 = 0.

1. CA32(2) 2. CET(row#3)


17. |3 5 2 6; 2 3 5 −5; 7 5 −3 −16; 9 −6 27 −12| =¹ |1 2 −3 11; 2 3 5 −5; 7 5 −3 −16; 9 −6 27 −12| =² |1 2 −3 11; 0 −1 11 −27; 0 −9 18 −93; 0 −24 54 −111| =³ |−1 11 −27; −9 18 −93; −24 54 −111| =⁴ |−1 11 −27; 0 −81 150; 0 −210 537| =⁵ −|−81 150; −210 537| = 11997.

1. A21(−1) 2. A12(−2), A13(−7), A14(−9) 3. CET(col#1) 4. A12(−9), A13(−24) 5. CET(col#1)

18. |2 −7 4 3; 5 5 −3 7; 6 2 6 3; 4 2 −4 5| =¹ |2 −7 4 3; 1 3 1 2; 6 2 6 3; 4 2 −4 5| =² |0 −13 2 −1; 1 3 1 2; 0 −16 0 −9; 0 −10 −8 −3| =³ −|−13 2 −1; −16 0 −9; −10 −8 −3| =⁴ |13 −2 1; −16 0 −9; −10 −8 −3| =⁵ |13 −2 1; 101 −18 0; 29 −14 0| =⁶ |101 −18; 29 −14| = −892.

1. A42(−1) 2. A21(−2), A23(−6), A24(−4) 3. CET(col#1) 4. P2 5. A12(9), A13(3) 6. CET(col#3)

19. |2 0 −1 3 0; 0 3 0 1 2; 0 1 3 0 4; 1 0 1 −1 0; 3 0 2 0 5| =¹ |0 0 −3 5 0; 0 3 0 1 2; 0 1 3 0 4; 1 0 1 −1 0; 0 0 −1 3 5| =² −|0 −3 5 0; 3 0 1 2; 1 3 0 4; 0 −1 3 5| =³ −|0 −3 5 0; 0 −9 1 −10; 1 3 0 4; 0 −1 3 5| =⁴ −|−3 5 0; −9 1 −10; −1 3 5| =⁵ |−3 5 0; −9 1 −10; 1 −3 −5| =⁶ |42 0 50; −9 1 −10; −26 0 −35| =⁷ |42 50; −26 −35| = −170.

1. A41(−2), A45(−3) 2. CET(col#1) 3. A12(−3) 4. CET(col#1) 5. P2 6. A21(−5), A23(3) 7. CET(col#2)

20. |0 x y z; −x 0 1 −1; −y −1 0 1; −z 1 −1 0| =¹ |xz 0 x + y z; −x 0 1 −1; −y − z 0 −1 1; −z 1 −1 0| =² |xz x + y z; −x 1 −1; −y − z −1 1|


=³ |xz + yz + z²  x + y + z  0; −x − y − z  0  0; −y − z  −1  1| =⁴ |xz + yz + z²  x + y + z; −x − y − z  0| = (x + y + z)².

1. A41(−x), A43(1) 2. CET(col#2) 3. A32(1), A31(−z) 4. CET(col#3)

21.

(a):

V(r1, r2, r3) = |1 1 1; r1 r2 r3; r1² r2² r3²| =¹ |1 0 0; r1 r2 − r1 r3 − r1; r1² r2² − r1² r3² − r1²| =² |r2 − r1  r3 − r1; r2² − r1²  r3² − r1²| = (r2 − r1)(r3² − r1²) − (r3 − r1)(r2² − r1²)

= (r3 − r1)(r2 − r1)[(r3 + r1) − (r2 + r1)] = (r2 − r1)(r3 − r1)(r3 − r2).

1. CA12(−1), CA13(−1) 2. CET(row#1)

(b): We use mathematical induction. The result is vacuously true when n = 1 and quickly verified when n = 2. Suppose that the result is true when n = k − 1, for k ≥ 3, and consider

V(r1, r2, …, rk) = |1 1 1 ⋯ 1; r1 r2 r3 ⋯ rk; r1² r2² r3² ⋯ rk²; ⋮ ; r1^{k−1} r2^{k−1} r3^{k−1} ⋯ rk^{k−1}|.

The determinant vanishes when rk = r1, r2, …, rk−1, so we can write

V(r1, r2, …, rk) = a(r1, r2, …, rk) ∏_{i=1}^{k−1} (rk − ri),

where a(r1, r2, …, rk) is the coefficient of rk^{k−1} in the expansion of V(r1, r2, …, rk). However, using the Cofactor Expansion Theorem along column k, we see that this coefficient is just V(r1, r2, …, rk−1), so by hypothesis,

a(r1, r2, …, rk) = V(r1, r2, …, rk−1) = ∏_{1≤i<m≤k−1} (rm − ri).

Thus,

V(r1, r2, …, rk) = ∏_{1≤i<m≤k−1} (rm − ri) · ∏_{i=1}^{k−1} (rk − ri) = ∏_{1≤i<m≤k} (rm − ri).

Hence the result is true for n = k, so, by induction, it is true for all positive integers n.
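The general product formula proved by induction in part (b) can be checked directly for a small case (determinant computed by the Leibniz sum):

```python
from itertools import permutations

def det(M):
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        s = 1
        for i in range(n):
            for j in range(i + 1, n):
                if p[i] > p[j]:
                    s = -s
        term = s
        for i in range(n):
            term *= M[i][p[i]]
        total += term
    return total

def vandermonde(rs):
    """Rows are the powers 0, 1, ..., n-1 of r1, ..., rn."""
    n = len(rs)
    return [[r ** i for r in rs] for i in range(n)]

rs = [2, -1, 3, 5]
prod = 1
for m in range(len(rs)):
    for i in range(m):
        prod *= rs[m] - rs[i]
print(det(vandermonde(rs)), prod)  # both equal the product over i < m of (rm - ri)
```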

22.(a): det(A) = 11;


(b): MC = [5 −4; −1 3];

(c): adj(A) = [5 −1; −4 3];

(d): A⁻¹ = (1/11)[5 −1; −4 3].

23.(a): det(A) = 7;

(b): MC = [1 −4; 2 −1];

(c): adj(A) = [1 2; −4 −1];

(d): A⁻¹ = (1/7)[1 2; −4 −1].

24.(a): det(A) = 0;

(b): MC = [−6 15; −2 5];

(c): adj(A) = [−6 −2; 15 5];

(d): A⁻¹ does not exist because det(A) = 0.

25.

(a): det(A) = |2 −3 0; 2 1 5; 0 −1 2| = |2 −3 0; 0 4 5; 0 −1 2| = 2 · 13 = 26;

(b): MC = [7 −4 −2; 6 4 2; −15 −10 8];

(c): adj(A) = [7 6 −15; −4 4 −10; −2 2 8];

(d): A⁻¹ = (1/det(A)) adj(A) = (1/26)[7 6 −15; −4 4 −10; −2 2 8].

26.

(a): det(A) = |−2 3 −1; 2 1 5; 0 2 3| = |−2 3 −1; 0 4 4; 0 2 3| = −2 · 4 = −8;


(b): MC = [−7 −6 4; −11 −6 4; 16 8 −8];

(c): adj(A) = [−7 −11 16; −6 −6 8; 4 4 −8];

(d): A⁻¹ = (1/det(A)) adj(A) = −(1/8)[−7 −11 16; −6 −6 8; 4 4 −8].

27.

(a): det(A) = |1 −1 2; 3 −1 4; 5 1 7| = |6 0 9; 8 0 11; 5 1 7| = 6;

(b): MC = [−11 −1 8; 9 −3 −6; −2 2 2];

(c): adj(A) = [−11 9 −2; −1 −3 2; 8 −6 2];

(d): A⁻¹ = (1/det(A)) adj(A) = (1/6)[−11 9 −2; −1 −3 2; 8 −6 2] = [−11/6 3/2 −1/3; −1/6 −1/2 1/3; 4/3 −1 1/3].
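The adjoint method used in Problems 22–31 follows one fixed recipe (cofactors, transpose, divide by the determinant); a sketch, run against Problem 27:

```python
from fractions import Fraction

def det(M):
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def adjugate(M):
    """Transpose of the matrix of cofactors."""
    n = len(M)
    C = [[(-1) ** (i + j) * det([r[:j] + r[j + 1:]
                                 for k, r in enumerate(M) if k != i])
          for j in range(n)] for i in range(n)]
    return [[C[j][i] for j in range(n)] for i in range(n)]

A = [[1, -1, 2], [3, -1, 4], [5, 1, 7]]   # Problem 27
adjA = adjugate(A)
assert det(A) == 6
assert adjA == [[-11, 9, -2], [-1, -3, 2], [8, -6, 2]]

inv = [[Fraction(x, det(A)) for x in row] for row in adjA]
print(inv[0])  # → [Fraction(-11, 6), Fraction(3, 2), Fraction(-1, 3)]
```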

28.

(a): det(A) = |0 1 2; −1 −1 3; 1 −2 1| = |0 1 2; 0 −3 4; 1 −2 1| = 10;

(b): MC = [5 4 3; −5 −2 1; 5 −2 1];

(c): adj(A) = [5 −5 5; 4 −2 −2; 3 1 1];

(d): A⁻¹ = (1/det(A)) adj(A) = (1/10)[5 −5 5; 4 −2 −2; 3 1 1].

29.

(a): det(A) = |2 −3 5; 1 2 1; 0 7 −1| = |0 −7 3; 1 2 1; 0 7 −1| = 14;

(b): MC = [−9 1 7; 32 −2 −14; −13 3 7];


(c): adj(A) = [−9 32 −13; 1 −2 3; 7 −14 7];

(d): A⁻¹ = (1/det(A)) adj(A) = (1/14)[−9 32 −13; 1 −2 3; 7 −14 7] = [−9/14 16/7 −13/14; 1/14 −1/7 3/14; 1/2 −1 1/2].

30.

(a): det(A) = |1 1 1 1; −1 1 −1 1; 1 1 −1 −1; −1 1 1 −1| = |1 1 1 1; 0 2 0 2; 0 2 −2 0; 0 2 0 −2| = |2 0 2; 2 −2 0; 2 0 −2| = 16;

(b): MC = [4 4 4 4; −4 4 −4 4; 4 4 −4 −4; −4 4 4 −4];

(c): adj(A) = [4 −4 4 −4; 4 4 4 4; 4 −4 −4 4; 4 4 −4 −4];

(d): A⁻¹ = (1/det(A)) adj(A) = (1/16)[4 −4 4 −4; 4 4 4 4; 4 −4 −4 4; 4 4 −4 −4].

31.

(a): det(A) = |1 0 3 5; −2 1 1 3; 3 9 0 2; 2 0 3 −1| = |11 0 18 5; 4 1 10 3; 7 9 6 2; 0 0 0 −1| = −|11 0 18; 4 1 10; 7 9 6| = −|11 0 18; 4 1 10; −29 0 −84| = 402;

(b): MC = [84 −46 −29 81; −162 60 99 −27; 18 38 −11 3; −30 26 130 −72];

(c): adj(A) = [84 −162 18 −30; −46 60 38 26; −29 99 −11 130; 81 −27 3 −72];


(d):

A⁻¹ = (1/det(A)) adj(A) = (1/402)[84 −162 18 −30; −46 60 38 26; −29 99 −11 130; 81 −27 3 −72] = [14/67 −27/67 3/67 −5/67; −23/201 10/67 19/201 13/201; −29/402 33/134 −11/402 65/201; 27/134 −9/134 1/134 −12/67].

32.(a): Using the Cofactor Expansion Theorem along row 1 yields

det(A) = (1 + 2x²) + 2x(2x + 4x³) + 2x²(2x² + 4x⁴) = 8x⁶ + 12x⁴ + 6x² + 1 = (1 + 2x²)³.

(b):

MC = [1 + 2x²  −(2x + 4x³)  2x² + 4x⁴; 2x + 4x³  1 − 4x⁴  −(2x + 4x³); 2x² + 4x⁴  2x + 4x³  1 + 2x²] = [1 + 2x²  −2x(1 + 2x²)  2x²(1 + 2x²); 2x(1 + 2x²)  (1 + 2x²)(1 − 2x²)  −2x(1 + 2x²); 2x²(1 + 2x²)  2x(1 + 2x²)  1 + 2x²],

so that

A⁻¹ = (1/det(A)) adj(A) = (1/(1 + 2x²)²) [1 2x 2x²; −2x 1 − 2x² 2x; 2x² −2x 1].

33. det(A) = |1 1 1; 1 2 2; 1 2 3| = |1 1 1; 0 1 1; 0 1 2| = |1 1; 1 2| = 2 − 1 = 1,

and

C23 = −|1 1; 1 2| = −(2 − 1) = −1.

Thus,

(A⁻¹)32 = (adj(A))32 / det(A) = C23 / det(A) = −1.

34. det(A) = |2 0 −1; 2 1 1; 3 −1 0| = |4 1 0; 2 1 1; 3 −1 0| = −|4 1; 3 −1| = −(−4 − 3) = 7,

and

C13 = |2 1; 3 −1| = −2 − 3 = −5.


Thus,

(A⁻¹)31 = (adj(A))31 / det(A) = C13 / det(A) = −5/7.

35.

det(A) = |1 0 1 0; 2 −1 1 3; 0 1 −1 2; −1 1 2 0| = |1 0 0 0; 2 −1 −1 3; 0 1 −1 2; −1 1 3 0| = |−1 −1 3; 1 −1 2; 1 3 0| = |−1 2 3; 1 −4 2; 1 0 0| = |2 3; −4 2| = 4 + 12 = 16,

and

C32 = −|1 1 0; 2 1 3; −1 2 0| = −(−3)|1 1; −1 2| = 3(2 + 1) = 9.

Thus,

(A⁻¹)23 = (adj(A))23 / det(A) = C32 / det(A) = 9/16.

36. MC = [2e^(2t) −2e^t; −e^(2t) 3e^t], adj(A) = [2e^(2t) −e^(2t); −2e^t 3e^t], and det(A) = 4e^(3t), so

A⁻¹ = (1/det(A)) adj(A) = (1/(4e^(3t)))[2e^(2t) −e^(2t); −2e^t 3e^t].

37. MC = [e^(−t) sin(2t)  −e^t cos(2t); −e^t cos(2t)  e^t sin(2t)], adj(A) = [e^(−t) sin(2t)  e^t cos(2t); −e^t cos(2t)  e^t sin(2t)], and

det(A) = sin²(2t) + cos²(2t) = 1,

so

A⁻¹ = (1/det(A)) adj(A) = [e^(−t) sin(2t)  e^t cos(2t); −e^t cos(2t)  e^t sin(2t)].

38. MC = [3te^(−t) −e^(−t) −te^(2t); −te^(−t) e^(−t) 0; −te^(−t) 0 te^(2t)], adj(A) = [3te^(−t) −te^(−t) −te^(−t); −e^(−t) e^(−t) 0; −te^(2t) 0 te^(2t)], and

det(A) = |e^t te^t e^(−2t); e^t 2te^t e^(−2t); e^t te^t 2e^(−2t)| = |e^t te^t e^(−2t); 0 te^t 0; 0 0 e^(−2t)| = e^t (te^t) e^(−2t) = t,

so

A⁻¹ = (1/det(A)) adj(A) = (1/t)[3te^(−t) −te^(−t) −te^(−t); −e^(−t) e^(−t) 0; −te^(2t) 0 te^(2t)].


39. MC = [−1 2 −1; 3 −6 3; −2 4 −2], adj(A) = [−1 3 −2; 2 −6 4; −1 3 −2]. Hence,

A · adj(A) = [1 2 3; 3 4 5; 4 5 6][−1 3 −2; 2 −6 4; −1 3 −2] = [0 0 0; 0 0 0; 0 0 0] = 0₃.

From Equation (3.3.4) of the text we have that, in general, A · adj(A) = det(A) · In. Since, for the given matrix, A · adj(A) = 0₃, we must have det(A) = 0.
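That A · adj(A) collapses to the zero matrix here is quick to confirm:

```python
def det(M):
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2, 3], [3, 4, 5], [4, 5, 6]]
adjA = [[-1, 3, -2], [2, -6, 4], [-1, 3, -2]]   # from the solution above

print(matmul(A, adjA))  # → [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
print(det(A))           # → 0
```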

40. det(A) = |2 −3; 1 2| = 7, det(B1) = |2 −3; 4 2| = 16, and det(B2) = |2 2; 1 4| = 6. Thus,

x1 = det(B1)/det(A) = 16/7 and x2 = det(B2)/det(A) = 6/7.

Solution: (16/7, 6/7).

41.

det(A) = |3 −2 1; 1 1 −1; 2 0 1| = |1 −2 1; 3 1 −1; 0 0 1| = 7,

det(B1) = |4 −2 1; 2 1 −1; 1 0 1| = |4 −2 −3; 2 1 −3; 1 0 0| = 9,

det(B2) = |3 4 1; 1 2 −1; 2 1 1| = |4 6 0; 1 2 −1; 3 3 0| = −6,

det(B3) = |3 −2 4; 1 1 2; 2 0 1| = |−5 −2 4; −3 1 2; 0 0 1| = −11.

Thus, x1 = det(B1)/det(A) = 9/7, x2 = det(B2)/det(A) = −6/7, and x3 = det(B3)/det(A) = −11/7.

Solution: (9/7, −6/7, −11/7).
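Cramer's rule itself is a few lines of code. The system below is reconstructed from the determinants of Problem 41 (coefficient matrix read from det(A), right-hand side (4, 2, 1) read off B1) — treat it as an assumed example:

```python
from fractions import Fraction

def det(M):
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def cramer(A, b):
    """Solve Ax = b via x_k = det(B_k)/det(A), B_k = A with column k replaced by b."""
    d = det(A)
    return [Fraction(det([row[:k] + [b[i]] + row[k + 1:]
                          for i, row in enumerate(A)]), d)
            for k in range(len(A))]

A = [[3, -2, 1], [1, 1, -1], [2, 0, 1]]   # coefficient matrix of Problem 41
b = [4, 2, 1]                              # right-hand side inferred from B1
print(cramer(A, b))  # → [Fraction(9, 7), Fraction(-6, 7), Fraction(-11, 7)]
```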

42. det(A) = |1 −3 1; 1 4 −1; 2 1 −3| = |1 −3 1; 0 7 −2; 0 7 −5| = |7 −2; 7 −5| = −35 + 14 = −21 ≠ 0.

Thus, the system has only the trivial solution.


43.

det(A) = |1 −2 3 −1; 2 0 1 0; 1 1 0 −1; 0 1 −2 1| = |−5 −2 3 −1; 0 0 1 0; 1 1 0 −1; 4 1 −2 1| = −|−5 −2 −1; 1 1 −1; 4 1 1| = −|−1 −1 0; 5 2 0; 4 1 1| = −3,

det(B1) = |1 −2 3 −1; 2 0 1 0; 0 1 0 −1; 3 1 −2 1| = |1 −3 3 −1; 2 0 1 0; 0 0 0 −1; 3 2 −2 1| = |1 −3 3; 2 0 1; 3 2 −2| = |−5 −3 3; 0 0 1; 7 2 −2| = −11,

det(B2) = |1 1 3 −1; 2 2 1 0; 1 0 0 −1; 0 3 −2 1| = |1 1 3 0; 2 2 1 2; 1 0 0 0; 0 3 −2 1| = |1 3 0; 2 1 2; 3 −2 1| = |1 3 0; −4 5 0; 3 −2 1| = 17,

det(B3) = |1 −2 1 −1; 2 0 2 0; 1 1 0 −1; 0 1 3 1| = |0 −2 1 −1; 0 0 2 0; 1 1 0 −1; −3 1 3 1| = −2 |0 −2 −1; 1 1 −1; −3 1 1| = −2 |−3 −1 0; −2 2 0; −3 1 1| = −2(−8) = 16,

det(B4) = |1 −2 3 1; 2 0 1 2; 1 1 0 0; 0 1 −2 3| = |3 −2 3 1; 2 0 1 2; 0 1 0 0; −1 1 −2 3| = −|3 3 1; 2 1 2; −1 −2 3| = −|−3 3 −5; 0 1 0; 3 2 7| = −|−3 −5; 3 7| = 6.

Therefore,

x1 = 11/3, x2 = −17/3, x3 = −16/3, and x4 = −2.

Solution: (11/3, −17/3, −16/3, −2).

44.

det(A) = |e^t  e^(−2t); e^t  −2e^(−2t)| = e^t e^(−2t) |1 1; 1 −2| = −3e^(−t),

det(B1) = |3 sin t  e^(−2t); 4 cos t  −2e^(−2t)| = e^(−2t) |3 sin t  1; 4 cos t  −2| = −2e^(−2t)(3 sin t + 2 cos t),

det(B2) = |e^t  3 sin t; e^t  4 cos t| = e^t |1  3 sin t; 1  4 cos t| = e^t(4 cos t − 3 sin t).


Thus, x1 = det(B1)/det(A) = 2(3 sin t + 2 cos t)/(3e^t) and x2 = det(B2)/det(A) = e^(2t)(3 sin t − 4 cos t)/3.

Solution: ((2/3)e^(−t)(3 sin t + 2 cos t), (1/3)e^(2t)(3 sin t − 4 cos t)).

45.

det(A) = |1 4 −2 1; 2 9 −3 −2; 1 5 0 −1; 3 14 7 −2| = |1 −1 −2 2; 2 −1 −3 0; 1 0 0 0; 3 −1 7 1| = |−1 −2 2; −1 −3 0; −1 7 1| = |1 −16 0; −1 −3 0; −1 7 1| = −19,

det(B2) = |1 2 −2 1; 2 5 −3 −2; 1 3 0 −1; 3 6 7 −2| = |1 −1 −2 2; 2 −1 −3 0; 1 0 0 0; 3 −3 7 1| = |−1 −2 2; −1 −3 0; −3 7 1| = |5 −16 0; −1 −3 0; −3 7 1| = |5 −16; −1 −3| = −31.

Therefore x2 = det(B2)/det(A) = 31/19.

46.

$$\det(A)=\begin{vmatrix}b+c&a&a\\b&a+c&b\\c&c&a+b\end{vmatrix}=\begin{vmatrix}-a+b+c&a&a\\-a+b-c&a+c&b\\0&c&a+b\end{vmatrix}=-c\begin{vmatrix}-a+b+c&a\\-a+b-c&b\end{vmatrix}+(a+b)\begin{vmatrix}-a+b+c&a\\-a+b-c&a+c\end{vmatrix}=4abc,$$

$$\det(B_1)=\begin{vmatrix}a&a&a\\b&a+c&b\\c&c&a+b\end{vmatrix}=\begin{vmatrix}a&0&a\\b&a-b+c&b\\c&0&a+b\end{vmatrix}=(a-b+c)\begin{vmatrix}a&a\\c&a+b\end{vmatrix}=a(a-b+c)(a+b-c),$$

$$\det(B_2)=\begin{vmatrix}b+c&a&a\\b&b&b\\c&c&a+b\end{vmatrix}=\begin{vmatrix}b+c&a-b-c&a\\b&0&b\\c&0&a+b\end{vmatrix}=-(a-b-c)\begin{vmatrix}b&b\\c&a+b\end{vmatrix}=-b(a-b-c)(a+b-c),$$

$$\det(B_3)=\begin{vmatrix}b+c&a&a\\b&a+c&b\\c&c&c\end{vmatrix}=\begin{vmatrix}-a+b+c&a&a\\0&a+c&b\\0&c&c\end{vmatrix}=(-a+b+c)\begin{vmatrix}a+c&b\\c&c\end{vmatrix}=-c(a-b-c)(a-b+c).$$

Case 1: If $a\neq 0$, $b\neq 0$, and $c\neq 0$, then $\det(A)\neq 0$, so it follows by Theorem 3.2.4 that the given system of equations has a unique solution, given by

$$x_1=\frac{\det(B_1)}{\det(A)}=\frac{(a-b+c)(a+b-c)}{4bc},\quad x_2=\frac{\det(B_2)}{\det(A)}=\frac{-(a-b-c)(a+b-c)}{4ac},\quad x_3=\frac{\det(B_3)}{\det(A)}=\frac{-(a-b-c)(a-b+c)}{4ab}.$$

Case 2: If $a=b=0$ and $c\neq 0$, then it is easy to see from the system that the system is inconsistent. By the symmetry of the equations, the same is true if $a=c=0$ with $b\neq 0$, or $b=c=0$ with $a\neq 0$.

Case 3: Suppose $a=0$ with $b\neq 0$ and $c\neq 0$, and consider the reduced row-echelon form of the augmented matrix:

$$\left[\begin{array}{ccc|c}b+c&0&0&0\\c&b&b&b\\b&c&c&c\end{array}\right]\sim\left[\begin{array}{ccc|c}1&0&0&0\\0&b-c&b-c&b-c\\0&c-b&c-b&c-b\end{array}\right]\sim\left[\begin{array}{ccc|c}1&0&0&0\\0&1&1&1\\0&0&0&0\end{array}\right].$$

From the last matrix we see that the system has infinitely many solutions, of the form $\{(0,\,1-r,\,r):r\in\mathbb{R}\}$. By the symmetry of the three equations, the other two cases ($b=0$ with $a\neq 0$ and $c\neq 0$, and $c=0$ with $a\neq 0$ and $b\neq 0$) have solutions of a similar form.
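The identity $\det(A)=4abc$ derived above can be spot-checked numerically; the sketch below evaluates the $3\times 3$ determinant by the rule of Sarrus on a few sample triples $(a,b,c)$.

```python
def det3(M):
    """3x3 determinant via the rule of Sarrus."""
    return (M[0][0]*M[1][1]*M[2][2] + M[0][1]*M[1][2]*M[2][0] + M[0][2]*M[1][0]*M[2][1]
            - M[2][0]*M[1][1]*M[0][2] - M[2][1]*M[1][2]*M[0][0] - M[2][2]*M[1][0]*M[0][1])

def A(a, b, c):
    # the coefficient matrix of Problem 46
    return [[b + c, a, a],
            [b, a + c, b],
            [c, c, a + b]]

for a, b, c in [(1, 2, 3), (2, -1, 5), (7, 4, -3)]:
    assert det3(A(a, b, c)) == 4 * a * b * c
print("det(A) = 4abc confirmed on samples")
```

A few integer samples do not prove the polynomial identity, but since both sides are polynomials of degree 3, agreement on enough points would; here the samples simply corroborate the algebra.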

47. Let $B$ be the matrix obtained from $A$ by adding column $i$ to column $j$ ($i\neq j$) in the matrix $A$. By the property for columns corresponding to Property P3, we have $\det(B)=\det(A)$. Cofactor expansion of $B$ along column $j$ gives

$$\det(A)=\det(B)=\sum_{k=1}^{n}(a_{kj}+a_{ki})C_{kj}=\sum_{k=1}^{n}a_{kj}C_{kj}+\sum_{k=1}^{n}a_{ki}C_{kj}.$$

That is,

$$\det(A)=\det(A)+\sum_{k=1}^{n}a_{ki}C_{kj},$$

since by the Cofactor Expansion Theorem the first summation on the right-hand side is simply $\det(A)$. It follows immediately that

$$\sum_{k=1}^{n}a_{ki}C_{kj}=0,\qquad i\neq j.$$
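This "alien cofactor" identity (expanding one column against the cofactors of another column gives zero) can be illustrated numerically; the matrix below is an arbitrary sample, not one from the text.

```python
from itertools import permutations

def det(M):
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
        term = -1 if inv % 2 else 1
        for i in range(n):
            term *= M[i][p[i]]
        total += term
    return total

def cofactor(A, k, j):
    """C_kj (0-based indices): signed minor with row k and column j deleted."""
    minor = [row[:j] + row[j + 1:] for r, row in enumerate(A) if r != k]
    return (-1) ** (k + j) * det(minor)

A = [[1, -2, 3],
     [4,  0, 1],
     [2,  5, -1]]          # arbitrary sample matrix

for i in range(3):
    for j in range(3):
        s = sum(A[k][i] * cofactor(A, k, j) for k in range(3))
        # column i against the cofactors of column j: det(A) if i == j, else 0
        assert s == (det(A) if i == j else 0)
print("alien cofactor sums vanish")
```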

51.

$$A=\begin{bmatrix}1.21&3.42&2.15\\5.41&2.32&7.15\\21.63&3.51&9.22\end{bmatrix},\quad B_1=\begin{bmatrix}3.25&3.42&2.15\\4.61&2.32&7.15\\9.93&3.51&9.22\end{bmatrix},$$

$$B_2=\begin{bmatrix}1.21&3.25&2.15\\5.41&4.61&7.15\\21.63&9.93&9.22\end{bmatrix},\quad B_3=\begin{bmatrix}1.21&3.42&3.25\\5.41&2.32&4.61\\21.63&3.51&9.93\end{bmatrix}.$$

From Cramer's rule,

$$x_1=\frac{\det(B_1)}{\det(A)}\approx 0.25,\quad x_2=\frac{\det(B_2)}{\det(A)}\approx 0.72,\quad x_3=\frac{\det(B_3)}{\det(A)}\approx 0.22.$$
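Because the entries here are decimals, the Cramer quotients are best checked by substituting back into the system; the sketch below recomputes $x$ and verifies the residual $A\mathbf{x}-\mathbf{b}$ rather than trusting the rounded values.

```python
from itertools import permutations

def det(M):
    n = len(M)
    total = 0.0
    for p in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
        term = -1.0 if inv % 2 else 1.0
        for i in range(n):
            term *= M[i][p[i]]
        total += term
    return total

A = [[1.21, 3.42, 2.15],
     [5.41, 2.32, 7.15],
     [21.63, 3.51, 9.22]]
b = [3.25, 4.61, 9.93]     # right-hand side, read off from the replaced columns of B1..B3

x = []
for k in range(3):
    Bk = [row[:] for row in A]
    for i in range(3):
        Bk[i][k] = b[i]
    x.append(det(Bk) / det(A))

# sanity check: the residual A x - b should be numerically zero
for i in range(3):
    assert abs(sum(A[i][j] * x[j] for j in range(3)) - b[i]) < 1e-9
print([round(v, 2) for v in x])
```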

52. $\det(A)=32$, $\det(B_1)=-3218$, $\det(B_2)=3207$, $\det(B_3)=2896$, $\det(B_4)=-9682$, $\det(B_5)=2414$. So

$$x_1=-\frac{1609}{16},\quad x_2=\frac{3207}{32},\quad x_3=\frac{181}{2},\quad x_4=-\frac{4841}{16},\quad x_5=\frac{1207}{16}.$$

53. We have

$$(BA)_{ji}=\sum_{k=1}^{n}b_{jk}a_{ki}=\sum_{k=1}^{n}\frac{1}{\det(A)}\,\mathrm{adj}(A)_{jk}\,a_{ki}=\frac{1}{\det(A)}\sum_{k=1}^{n}C_{kj}a_{ki}=\delta_{ij},$$

where we have used Equation (3.3.4) in the last step.
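Problem 53 amounts to the identity $\mathrm{adj}(A)\,A=\det(A)\,I$. A quick numerical illustration, using the matrix of Problem 11 ($\det(A)=116$) as sample data:

```python
from itertools import permutations

def det(M):
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
        term = -1 if inv % 2 else 1
        for i in range(n):
            term *= M[i][p[i]]
        total += term
    return total

def cofactor(A, i, j):
    minor = [row[:j] + row[j + 1:] for r, row in enumerate(A) if r != i]
    return (-1) ** (i + j) * det(minor)

def adj(A):
    """adj(A)_{ij} = C_{ji}: the transposed matrix of cofactors."""
    n = len(A)
    return [[cofactor(A, j, i) for j in range(n)] for i in range(n)]

A = [[2, 5, 7],
     [4, -3, 2],
     [6, 9, 11]]           # the matrix of Problem 11, det(A) = 116

B = adj(A)
prod = [[sum(B[i][k] * A[k][j] for k in range(3)) for j in range(3)] for i in range(3)]
assert prod == [[116 if i == j else 0 for j in range(3)] for i in range(3)]
print("adj(A) A = det(A) I")
```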

Solutions to Section 3.4

Problems:

1. $\begin{vmatrix}5&-1\\3&7\end{vmatrix}=5\cdot 7-3(-1)=38.$

2. $\begin{vmatrix}3&5&7\\-1&2&4\\6&3&-2\end{vmatrix}=-43.$

3. $\begin{vmatrix}5&1&4\\6&1&3\\14&2&7\end{vmatrix}=-3.$

4.
$$\begin{vmatrix}2.3&1.5&7.9\\4.2&3.3&5.1\\6.8&3.6&5.7\end{vmatrix}=2.3\begin{vmatrix}3.3&5.1\\3.6&5.7\end{vmatrix}-1.5\begin{vmatrix}4.2&5.1\\6.8&5.7\end{vmatrix}+7.9\begin{vmatrix}4.2&3.3\\6.8&3.6\end{vmatrix}=1.035+16.11-57.828=-40.683.$$

5.
$$\begin{vmatrix}a&b&c\\b&c&a\\c&a&b\end{vmatrix}=a\begin{vmatrix}c&a\\a&b\end{vmatrix}-b\begin{vmatrix}b&a\\c&b\end{vmatrix}+c\begin{vmatrix}b&c\\c&a\end{vmatrix}=a(bc-a^2)-b(b^2-ac)+c(ab-c^2)=3abc-a^3-b^3-c^3.$$

6.
$$\begin{vmatrix}3&5&-1&2\\2&1&5&2\\3&2&5&7\\1&-1&2&1\end{vmatrix}=\begin{vmatrix}0&8&-7&-1\\0&3&1&0\\0&5&-1&4\\1&-1&2&1\end{vmatrix}=-\begin{vmatrix}8&-7&-1\\3&1&0\\5&-1&4\end{vmatrix}=-\begin{vmatrix}29&-7&-1\\0&1&0\\8&-1&4\end{vmatrix}=-\begin{vmatrix}29&-1\\8&4\end{vmatrix}=-124.$$

7.
$$\begin{vmatrix}7&1&2&3\\2&-2&4&6\\3&-1&5&4\\18&9&27&54\end{vmatrix}=9\begin{vmatrix}7&1&2&3\\2&-2&4&6\\3&-1&5&4\\2&1&3&6\end{vmatrix}=9\begin{vmatrix}7&1&2&3\\16&0&8&12\\10&0&7&7\\-5&0&1&3\end{vmatrix}=-9\begin{vmatrix}16&8&12\\10&7&7\\-5&1&3\end{vmatrix}=-36\begin{vmatrix}4&2&3\\10&7&7\\-5&1&3\end{vmatrix}=-36\begin{vmatrix}14&0&-3\\45&0&-14\\-5&1&3\end{vmatrix}=36\begin{vmatrix}14&-3\\45&-14\end{vmatrix}=-2196.$$

8. $\det(A)=11$. $M_C=\begin{bmatrix}7&-2\\-5&3\end{bmatrix}\Longrightarrow \mathrm{adj}(A)=\begin{bmatrix}7&-5\\-2&3\end{bmatrix}$, so that

$$A^{-1}=\frac{1}{\det(A)}\,\mathrm{adj}(A)=\begin{bmatrix}7/11&-5/11\\-2/11&3/11\end{bmatrix}.$$
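The $2\times 2$ adjugate computation above can be sketched in code. The matrix $A=\begin{bmatrix}3&5\\2&7\end{bmatrix}$ is an assumption (the problem statement is not reproduced in this excerpt), but it is consistent with $\det(A)=11$ and the cofactor matrix quoted in the solution.

```python
from fractions import Fraction

# Assumed A, consistent with det(A) = 11 and M_C = [[7, -2], [-5, 3]] above.
A = [[3, 5],
     [2, 7]]
d = A[0][0] * A[1][1] - A[0][1] * A[1][0]           # det(A) = 11
adjA = [[A[1][1], -A[0][1]],
        [-A[1][0], A[0][0]]]                        # swap diagonal, negate off-diagonal
Ainv = [[Fraction(adjA[i][j], d) for j in range(2)] for i in range(2)]

# A * Ainv should be the 2x2 identity
prod = [[sum(A[i][k] * Ainv[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
assert prod == [[1, 0], [0, 1]]
print(Ainv)
```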

9. $\det(A)=\begin{vmatrix}1&2&3\\2&3&1\\3&1&2\end{vmatrix}=\begin{vmatrix}1&2&3\\0&-1&-5\\0&-5&-7\end{vmatrix}=-18.$

$M_C=\begin{bmatrix}5&-1&-7\\-1&-7&5\\-7&5&-1\end{bmatrix}\Longrightarrow \mathrm{adj}(A)=\begin{bmatrix}5&-1&-7\\-1&-7&5\\-7&5&-1\end{bmatrix}$, so that

$$A^{-1}=\frac{1}{\det(A)}\,\mathrm{adj}(A)=\begin{bmatrix}-5/18&1/18&7/18\\1/18&7/18&-5/18\\7/18&-5/18&1/18\end{bmatrix}.$$

10. $\det(A)=\begin{vmatrix}3&4&7\\2&6&1\\3&14&-1\end{vmatrix}=\begin{vmatrix}-11&-38&7\\0&0&1\\5&20&-1\end{vmatrix}=-\begin{vmatrix}-11&-38\\5&20\end{vmatrix}=30.$

$M_C=\begin{bmatrix}-20&5&10\\102&-24&-30\\-38&11&10\end{bmatrix}\Longrightarrow \mathrm{adj}(A)=\begin{bmatrix}-20&102&-38\\5&-24&11\\10&-30&10\end{bmatrix}$, so that

$$A^{-1}=\frac{1}{\det(A)}\,\mathrm{adj}(A)=\begin{bmatrix}-2/3&17/5&-19/15\\1/6&-4/5&11/30\\1/3&-1&1/3\end{bmatrix}.$$

11. $\det(A)=\begin{vmatrix}2&5&7\\4&-3&2\\6&9&11\end{vmatrix}=116.$

$M_C=\begin{bmatrix}-51&-32&54\\8&-20&12\\31&24&-26\end{bmatrix}\Longrightarrow \mathrm{adj}(A)=\begin{bmatrix}-51&8&31\\-32&-20&24\\54&12&-26\end{bmatrix}$, so that

$$A^{-1}=\frac{1}{\det(A)}\,\mathrm{adj}(A)=\begin{bmatrix}-51/116&2/29&31/116\\-8/29&-5/29&6/29\\27/58&3/29&-13/58\end{bmatrix}.$$

12. $\det(A)=\begin{vmatrix}5&-1&2&5\\3&-1&4&5\\1&-1&2&1\\5&9&-3&2\end{vmatrix}=-152.$

$M_C=\begin{bmatrix}-38&32&34&2\\0&28&44&-60\\38&-124&-222&130\\0&-24&-16&8\end{bmatrix}\Longrightarrow \mathrm{adj}(A)=\begin{bmatrix}-38&0&38&0\\32&28&-124&-24\\34&44&-222&-16\\2&-60&130&8\end{bmatrix}$, so that

$$A^{-1}=\frac{1}{\det(A)}\,\mathrm{adj}(A)=\begin{bmatrix}1/4&0&-1/4&0\\-4/19&-7/38&31/38&3/19\\-17/76&-11/38&111/76&2/19\\-1/76&15/38&-65/76&-1/19\end{bmatrix}.$$

13. $A=\begin{bmatrix}3&5\\6&2\end{bmatrix}$, $B_1=\begin{bmatrix}4&5\\9&2\end{bmatrix}$, $B_2=\begin{bmatrix}3&4\\6&9\end{bmatrix}$, so that

$$x_1=\frac{\det(B_1)}{\det(A)}=\frac{37}{24},\qquad x_2=\frac{\det(B_2)}{\det(A)}=-\frac{1}{8}.$$

14. $A=\begin{bmatrix}\cos t&\sin t\\\sin t&-\cos t\end{bmatrix}$, $B_1=\begin{bmatrix}e^{-t}&\sin t\\3e^{-t}&-\cos t\end{bmatrix}$, $B_2=\begin{bmatrix}\cos t&e^{-t}\\\sin t&3e^{-t}\end{bmatrix}$, so that

$$x_1=\frac{\det(B_1)}{\det(A)}=e^{-t}(\cos t+3\sin t),\qquad x_2=\frac{\det(B_2)}{\det(A)}=e^{-t}(\sin t-3\cos t).$$

15. $A=\begin{bmatrix}4&1&3\\2&-1&5\\2&3&1\end{bmatrix}$, $B_1=\begin{bmatrix}5&1&3\\7&-1&5\\2&3&1\end{bmatrix}$, $B_2=\begin{bmatrix}4&5&3\\2&7&5\\2&2&1\end{bmatrix}$, $B_3=\begin{bmatrix}4&1&5\\2&-1&7\\2&3&2\end{bmatrix}$, so that

$$x_1=\frac{\det(B_1)}{\det(A)}=\frac{1}{4},\qquad x_2=\frac{\det(B_2)}{\det(A)}=\frac{1}{16},\qquad x_3=\frac{\det(B_3)}{\det(A)}=\frac{21}{16}.$$

16. $A=\begin{bmatrix}5&3&6\\2&4&-7\\2&5&9\end{bmatrix}$, $B_1=\begin{bmatrix}3&3&6\\-1&4&-7\\4&5&9\end{bmatrix}$, $B_2=\begin{bmatrix}5&3&6\\2&-1&-7\\2&4&9\end{bmatrix}$, $B_3=\begin{bmatrix}5&3&3\\2&4&-1\\2&5&4\end{bmatrix}$, so that

$$x_1=\frac{\det(B_1)}{\det(A)}=\frac{30}{271},\qquad x_2=\frac{\det(B_2)}{\det(A)}=\frac{59}{271},\qquad x_3=\frac{\det(B_3)}{\det(A)}=\frac{81}{271}.$$

17. $A=\begin{bmatrix}3.1&3.5&7.1\\2.2&5.2&6.3\\1.4&8.1&0.9\end{bmatrix}$, $B_1=\begin{bmatrix}3.6&3.5&7.1\\2.5&5.2&6.3\\9.3&8.1&0.9\end{bmatrix}$, $B_2=\begin{bmatrix}3.1&3.6&7.1\\2.2&2.5&6.3\\1.4&9.3&0.9\end{bmatrix}$, $B_3=\begin{bmatrix}3.1&3.5&3.6\\2.2&5.2&2.5\\1.4&8.1&9.3\end{bmatrix}$,

so that

$$x_1=\frac{\det(B_1)}{\det(A)}\approx 3.77,\qquad x_2=\frac{\det(B_2)}{\det(A)}\approx 0.66,\qquad x_3=\frac{\det(B_3)}{\det(A)}\approx -1.46.$$

18. Since $A$ is invertible, $A^{-1}$ exists. Then

$$AA^{-1}=I_n\Longrightarrow \det(AA^{-1})=\det(I_n)=1\Longrightarrow \det(A)\det(A^{-1})=1,\ \text{so that}\ \det(A^{-1})=\frac{1}{\det(A)}.$$

19. $\det(2A)=2^3\det(A)=24.$

From the result of the preceding problem, $\det(A^{-1})=\dfrac{1}{\det(A)}=\dfrac{1}{3}.$

$\det(A^TB)=\det(A^T)\det(B)=\det(A)\det(B)=-12.$

$\det(B^5)=[\det(B)]^5=-1024.$

$\det(B^{-1}AB)=\det(B^{-1})\det(A)\det(B)=\dfrac{1}{\det(B)}\det(A)\det(B)=\det(A)=3.$
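The determinant properties used in Problem 19 ($\det(2A)=2^3\det(A)$ for a $3\times 3$ matrix, $\det(A^T)=\det(A)$) can be illustrated on a small sample matrix; the matrix below is an assumption chosen so that $\det(A)=3$, matching the value recovered above.

```python
from itertools import permutations

def det(M):
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
        term = -1 if inv % 2 else 1
        for i in range(n):
            term *= M[i][p[i]]
        total += term
    return total

A = [[2, 1, 0],
     [1, 2, 0],
     [0, 0, 1]]            # sample 3x3 matrix with det(A) = 3 (an assumption)
assert det(A) == 3

# det(2A) = 2^3 det(A) = 24 for a 3x3 matrix
assert det([[2 * v for v in row] for row in A]) == 2 ** 3 * det(A) == 24

# det(A^T) = det(A)
At = [[A[j][i] for j in range(3)] for i in range(3)]
assert det(At) == det(A)
print("determinant scaling and transpose properties confirmed")
```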

Solutions to Section 3.5

Additional Problems:

1.

(a): We have $\det(A)=(-7)(-5)-(1)(-2)=37.$

(b): We have
$$A\overset{1}{\sim}\begin{bmatrix}1&-5\\-7&-2\end{bmatrix}\overset{2}{\sim}\begin{bmatrix}1&-5\\0&-37\end{bmatrix}=B.$$
1. $P_{12}$ &nbsp; 2. $A_{12}(7)$

Now, $\det(A)=-\det(B)=-(-37)=37.$

(c): Using the Cofactor Expansion Theorem along the first row:
$$\det(A)=a_{11}C_{11}+a_{12}C_{12}=(-7)(-5)+(-2)(-1)=37.$$

2.

(a): We have $\det(A)=(6)(1)-(-2)(6)=18.$

(b): We have
$$A\overset{1}{\sim}\begin{bmatrix}1&1\\-2&1\end{bmatrix}\overset{2}{\sim}\begin{bmatrix}1&1\\0&3\end{bmatrix}=B.$$
1. $M_1(1/6)$ &nbsp; 2. $A_{12}(2)$

Now, $\det(A)=6\det(B)=6\cdot 3=18.$

(c): Using the Cofactor Expansion Theorem along the first row:
$$\det(A)=a_{11}C_{11}+a_{12}C_{12}=(6)(1)+(6)(2)=18.$$

3.

(a): Using Equation (3.1.2), we have
$$\det(A)=(-1)(2)(-3)+(4)(2)(2)+(1)(0)(2)-(-1)(2)(2)-(4)(0)(-3)-(1)(2)(2)=6+16+0-(-4)-0-4=22.$$

(b): We have
$$A\overset{1}{\sim}\begin{bmatrix}-1&4&1\\0&2&2\\0&10&-1\end{bmatrix}\overset{2}{\sim}\begin{bmatrix}-1&4&1\\0&2&2\\0&0&-11\end{bmatrix}=B.$$
1. $A_{13}(2)$ &nbsp; 2. $A_{23}(-5)$

Now, $\det(A)=\det(B)=(-1)(2)(-11)=22.$

(c): Using the Cofactor Expansion Theorem along the first column:
$$\det(A)=a_{11}C_{11}+a_{21}C_{21}+a_{31}C_{31}=(-1)(-10)+(0)(14)+(2)(6)=22.$$

4.

(a): Using Equation (3.1.2), we have
$$\det(A)=(2)(0)(3)+(3)(2)(6)+(-5)(-4)(-3)-(6)(0)(-5)-(-3)(2)(2)-(3)(-4)(3)=0+36-60-0+12+36=24.$$

(b): We have
$$A\overset{1}{\sim}\begin{bmatrix}2&3&-5\\0&6&-8\\0&-12&18\end{bmatrix}\overset{2}{\sim}\begin{bmatrix}2&3&-5\\0&6&-8\\0&0&2\end{bmatrix}=B.$$
1. $A_{12}(2),\ A_{13}(-3)$ &nbsp; 2. $A_{23}(2)$

Now, $\det(A)=\det(B)=(2)(6)(2)=24.$

(c): Using the Cofactor Expansion Theorem along the second row:
$$\det(A)=a_{21}C_{21}+a_{22}C_{22}+a_{23}C_{23}=(-4)(6)+(0)(36)+(2)(24)=-24+0+48=24.$$

5.

(a): Of the 24 terms appearing in the determinant expression (3.1.3), only terms containing the factors $a_{11}$ and $a_{44}$ can be nonzero (all other entries in the first column and fourth row of $A$ are zero). Looking at the entries in the second and third rows and columns of $A$, we see that only the product $a_{23}a_{32}$ is nonzero. Therefore, the only nonzero term in the summation (3.1.3) is $a_{11}a_{23}a_{32}a_{44}=(3)(1)(2)(-4)=-24$. The permutation associated with this term is $(1,3,2,4)$, which contains one inversion. Therefore, $\sigma(1,3,2,4)=-1$, and so the determinant is $(-24)(-1)=24$.

(b): We have
$$A\overset{1}{\sim}\begin{bmatrix}3&-1&-2&1\\0&2&1&-1\\0&0&1&4\\0&0&0&-4\end{bmatrix}=B.$$
1. $P_{23}$

Now, $\det(A)=-\det(B)=-(3)(2)(1)(-4)=24.$

(c): Cofactor expansion along the first column yields $\det(A)=3\begin{vmatrix}0&1&4\\2&1&-1\\0&0&-4\end{vmatrix}$. This latter determinant can be found by cofactor expansion along the last column: $(-4)[(0)(1)-(2)(1)]=8$. Thus, $\det(A)=3\cdot 8=24.$
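The permutation-expansion argument of part (a) can be replayed in code. The matrix $A$ below is reconstructed from the triangular form in part (b) by undoing the $P_{23}$ swap, so it is an inferred reconstruction rather than a matrix quoted verbatim in this excerpt.

```python
from itertools import permutations

def sgn(p):
    """sigma(p): +1 for an even permutation, -1 for an odd one (count inversions)."""
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

# A reconstructed by applying P23 to the triangular matrix of part (b)
A = [[3, -1, -2, 1],
     [0,  0,  1, 4],
     [0,  2,  1, -1],
     [0,  0,  0, -4]]

terms = []
for p in permutations(range(4)):
    t = sgn(p)
    for i in range(4):
        t *= A[i][p[i]]
    if t != 0:
        terms.append((p, t))

# exactly one of the 24 terms survives: sigma * a11*a23*a32*a44 = (-1)(-24) = 24
print(terms)   # [((0, 2, 1, 3), 24)]
```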

6.

(a): Of the 24 terms in the determinant expression (3.1.3), the only nonzero term is $a_{41}a_{32}a_{23}a_{14}$, which has a positive sign: $\sigma(4,3,2,1)=+1$. Therefore, $\det(A)=(-3)(1)(-5)(-2)=-30.$

(b): By permuting the first and last rows, and by permuting the middle two rows, we bring $A$ to upper triangular form. The two permutations introduce two sign changes, so the resulting upper triangular matrix has the same determinant as $A$, namely $(-3)(1)(-5)(-2)=-30.$

(c): Cofactor expansion along the first column yields $\det(A)=-(-3)\begin{vmatrix}0&0&-2\\0&-5&1\\1&-4&1\end{vmatrix}$. This latter determinant can be found by cofactor expansion along the first column: $(1)[(0)(1)-(-5)(-2)]=-10$. Hence, $\det(A)=3\cdot(-10)=-30.$

7. To obtain the given matrix from $A$, we perform two row permutations, multiply a row through by $-4$, and multiply a row through by 2. The combined effect of these operations is to multiply the determinant of $A$ by $(-1)^2\cdot(-4)\cdot(2)=-8$. Hence, the given matrix has determinant $\det(A)\cdot(-8)=4\cdot(-8)=-32.$

8. To obtain the given matrix from $A$, we add 5 times the middle row to the top row, multiply the last row by 3, multiply the middle row by $-1$, add a multiple of the last row to the middle row, and perform a row permutation. The combined effect of these operations is to multiply the determinant of $A$ by $(1)\cdot(3)\cdot(-1)\cdot(-1)=3$. Hence, the given matrix has determinant $\det(A)\cdot 3=4\cdot 3=12.$

9. To obtain the given matrix from $A^T$, we do two row permutations, multiply a row by $-1$, multiply a row by 3, and add 2 times one row to another. The combined effect of these operations is to multiply the determinant of $A$ by $(-1)(-1)(-1)(3)(1)=-3$. Hence, the given matrix has determinant $\det(A)\cdot(-3)=4\cdot(-3)=-12.$

10. To obtain the given matrix from $A$, we permute the bottom two rows, multiply a row by 2, multiply a row by $-1$, add $-1$ times one row to another, and then multiply each row of the matrix by 3. The combined effect of these operations is to multiply the determinant of $A$ by $(-1)(2)(-1)(1)(3)^3=54$. Hence, the given matrix has determinant $\det(A)\cdot 54=4\cdot 54=216.$

11. We have $\det(AB)=\det(A)\det(B)=(-2)\cdot 3=-6.$

12. We have $\det(B^2A^{-1})=(\det(B))^2\det(A^{-1})=\dfrac{3^2}{-2}=-4.5.$

13. We have $\det((A^{-1}B)^T(2B^{-1}))=\det(A^{-1}B)\cdot\det(2B^{-1})=\left(-\dfrac{1}{2}\cdot 3\right)\cdot\left(2^4\cdot\dfrac{1}{3}\right)=-8.$

14. We have $\det((-A)^3(2B^2))=\det(-A)^3\det(2B^2)=(-1)^3(\det A)^3\cdot 2^4\cdot(\det B)^2=(-1)^3(-2)^3\cdot 2^4\cdot 3^2=1152.$

15. Since $A$ and $B$ are not square matrices, $\det(A)$ and $\det(B)$ are not possible to compute. We have $\det(C)=-18$, and therefore $\det(C^T)=-18$. Now, $AB=\begin{bmatrix}8&-10\\25&28\end{bmatrix}$, and so $\det(AB)=474$. Since $BA=\begin{bmatrix}4&5&2\\1&8&-13\\18&15&24\end{bmatrix}$, we have $\det(BA)=0$. Next, $\det(B^TA^T)=\det((AB)^T)=\det(AB)=474$. Next, $\det(BAC)=\det(BA)\det(C)=0\cdot(-18)=0$. Finally, we have $\det(ACB)=\det\begin{bmatrix}38&54\\133&297\end{bmatrix}=4104.$

16. The matrix of cofactors of $B$ is $M_C=\begin{bmatrix}1&-1\\-4&5\end{bmatrix}$, and so $\mathrm{adj}(B)=\begin{bmatrix}1&-4\\-1&5\end{bmatrix}$. Since $\det(B)=1$, we have $B^{-1}=\begin{bmatrix}1&-4\\-1&5\end{bmatrix}$. Therefore,

$$(A^{-1}B^T)^{-1}=(B^T)^{-1}A=(B^{-1})^TA=\begin{bmatrix}1&-1\\-4&5\end{bmatrix}\begin{bmatrix}1&2\\3&4\end{bmatrix}=\begin{bmatrix}-2&-2\\11&12\end{bmatrix}.$$

17.
$$M_C=\begin{bmatrix}16&-1&-5\\4&5&-3\\-4&2&10\end{bmatrix};\qquad \mathrm{adj}(A)=\begin{bmatrix}16&4&-4\\-1&5&2\\-5&-3&10\end{bmatrix};\qquad \det(A)=28;$$
$$A^{-1}=\begin{bmatrix}4/7&1/7&-1/7\\-1/28&5/28&1/14\\-5/28&-3/28&5/14\end{bmatrix}.$$

18.
$$M_C=\begin{bmatrix}5&12&-11&-1\\65&-24&-23&-13\\-25&0&-5&5\\-35&0&5&-5\end{bmatrix};\qquad \mathrm{adj}(A)=\begin{bmatrix}5&65&-25&-35\\12&-24&0&0\\-11&-23&-5&5\\-1&-13&5&-5\end{bmatrix};\qquad \det(A)=-60;$$
$$A^{-1}=-\frac{1}{60}\begin{bmatrix}5&65&-25&-35\\12&-24&0&0\\-11&-23&-5&5\\-1&-13&5&-5\end{bmatrix}.$$

19.
$$M_C=\begin{bmatrix}88&-24&-40&-48\\32&12&-20&0\\16&12&-4&0\\-4&6&-2&0\end{bmatrix};\qquad \mathrm{adj}(A)=\begin{bmatrix}88&32&16&-4\\-24&12&12&6\\-40&-20&-4&-2\\-48&0&0&0\end{bmatrix};\qquad \det(A)=-48;$$
$$A^{-1}=-\frac{1}{48}\begin{bmatrix}88&32&16&-4\\-24&12&12&6\\-40&-20&-4&-2\\-48&0&0&0\end{bmatrix}.$$

20.
$$M_C=\begin{bmatrix}21&12&-12\\24&9&-12\\48&24&-27\end{bmatrix};\qquad \mathrm{adj}(A)=\begin{bmatrix}21&24&48\\12&9&24\\-12&-12&-27\end{bmatrix};\qquad \det(A)=9;$$
$$A^{-1}=\begin{bmatrix}7/3&8/3&16/3\\4/3&1&8/3\\-4/3&-4/3&-3\end{bmatrix}.$$

21.
$$M_C=\begin{bmatrix}7&-2&0\\0&2&-2\\-6&0&2\end{bmatrix};\qquad \mathrm{adj}(A)=\begin{bmatrix}7&0&-6\\-2&2&0\\0&-2&2\end{bmatrix};\qquad \det(A)=2;$$
$$A^{-1}=\begin{bmatrix}3.5&0&-3\\-1&1&0\\0&-1&1\end{bmatrix}.$$

22. There are many ways to do this. One choice is to let $B=\begin{bmatrix}4&-1&0\\5&1&4\\0&0&10/9\end{bmatrix}.$

23. FALSE. For instance, if one entry of the matrix $A=\begin{bmatrix}1&2&3\\1&2&3\\1&2&3\end{bmatrix}$ is changed, two of the rows of $A$ will still be identical, and therefore the determinant of the resulting matrix must be zero. It is not possible to force the determinant to equal $r$.

24. Note that
$$\det(A)=2+12k+36-4k-18-12=8k+8=8(k+1).$$

(a): Based on the calculation above, we see that $A$ fails to be invertible if and only if $k=-1$.

(b): The volume of the parallelepiped determined by the row vectors of $A$ is precisely $|\det(A)|=|8k+8|$. The volume of the parallelepiped determined by the column vectors of $A$ is the same as the volume of the parallelepiped determined by the row vectors of $A^T$, which is $|\det(A^T)|=|\det(A)|=|8k+8|$. Thus, the volumes are the same.

25. Note that
$$\det(A)=3(k+1)+2k+0-3-k(k+1)-0=-k^2+4k=k(4-k).$$

(a): Based on the calculation above, we see that $A$ fails to be invertible if and only if $k=0$ or $k=4$.

(b): The volume of the parallelepiped determined by the row vectors of $A$ is precisely $|\det(A)|=|-k^2+4k|$. The volume of the parallelepiped determined by the column vectors of $A$ is the same as the volume of the parallelepiped determined by the row vectors of $A^T$, which is $|\det(A^T)|=|\det(A)|=|-k^2+4k|$. Thus, the volumes are the same.

26. Note that
$$\det(A)=0+4(k-3)+2k^3-k^2-8k-0=2k^3-k^2-4k-12.$$

(a): Based on the calculation above, we see that $A$ fails to be invertible if and only if $2k^3-k^2-4k-12=0$; the only real solution of this equation is $k\approx 2.39$ (found numerically).

(b): The volume of the parallelepiped determined by the row vectors of $A$ is precisely $|\det(A)|=|2k^3-k^2-4k-12|$. The volume of the parallelepiped determined by the column vectors of $A$ is the same as the volume of the parallelepiped determined by the row vectors of $A^T$, which is $|\det(A^T)|=|\det(A)|=|2k^3-k^2-4k-12|$. Thus, the volumes are the same.

27. From the assumption that $AB=-BA$, we can take the determinant of each side: $\det(AB)=\det(-BA)$. Hence, $\det(A)\det(B)=(-1)^n\det(B)\det(A)=-\det(A)\det(B)$ (here $n$, the size of the matrices, is odd). From this it follows that $\det(A)\det(B)=0$, and therefore either $\det(A)=0$ or $\det(B)=0$. Thus, either $A$ or $B$ (or both) fails to be invertible.

28. Since $AA^T=I_n$, we can take the determinant of both sides to get $\det(AA^T)=\det(I_n)=1$. Hence $\det(A)\det(A^T)=1$, and therefore $(\det(A))^2=1$. We conclude that $\det(A)=\pm 1.$
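A numerical illustration of Problem 28: rotation and reflection matrices are orthogonal ($QQ^T=I$), and their determinants are $+1$ and $-1$ respectively. The sample matrices below are assumptions chosen for illustration.

```python
import math

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

t = 0.7
Q = [[math.cos(t), -math.sin(t)],
     [math.sin(t),  math.cos(t)]]          # a rotation matrix: Q Q^T = I

# check orthogonality numerically
QQt = [[sum(Q[i][k] * Q[j][k] for k in range(2)) for j in range(2)] for i in range(2)]
assert all(abs(QQt[i][j] - (1 if i == j else 0)) < 1e-12 for i in range(2) for j in range(2))

assert abs(det2(Q) - 1) < 1e-12            # rotations have det = +1
R = [[1, 0], [0, -1]]                      # a reflection, also orthogonal
assert det2(R) == -1                       # reflections have det = -1
print("orthogonal matrices have determinant +/- 1")
```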

29. The coefficient matrix of this linear system is $A=\begin{bmatrix}-3&1\\1&2\end{bmatrix}$. We have

$$\det(A)=\begin{vmatrix}-3&1\\1&2\end{vmatrix}=-7,\qquad \det(B_1)=\begin{vmatrix}3&1\\1&2\end{vmatrix}=5,\qquad \det(B_2)=\begin{vmatrix}-3&3\\1&1\end{vmatrix}=-6.$$

Thus,
$$x_1=\frac{\det(B_1)}{\det(A)}=-\frac{5}{7}\qquad\text{and}\qquad x_2=\frac{\det(B_2)}{\det(A)}=\frac{6}{7}.$$

Solution: $(-5/7,\,6/7)$.

30. The coefficient matrix of this linear system is $A=\begin{bmatrix}2&-1&1\\4&5&3\\4&-3&3\end{bmatrix}$. We have

$$\det(A)=\begin{vmatrix}2&-1&1\\4&5&3\\4&-3&3\end{vmatrix}=16,\qquad \det(B_1)=\begin{vmatrix}2&-1&1\\0&5&3\\2&-3&3\end{vmatrix}=32,$$
$$\det(B_2)=\begin{vmatrix}2&2&1\\4&0&3\\4&2&3\end{vmatrix}=-4,\qquad \det(B_3)=\begin{vmatrix}2&-1&2\\4&5&0\\4&-3&2\end{vmatrix}=-36.$$

Thus,
$$x_1=\frac{\det(B_1)}{\det(A)}=2,\qquad x_2=\frac{\det(B_2)}{\det(A)}=-\frac{1}{4},\qquad x_3=\frac{\det(B_3)}{\det(A)}=-\frac{9}{4}.$$

Solution: $(2,\,-1/4,\,-9/4)$.

31. The coefficient matrix of this linear system is $A=\begin{bmatrix}3&1&2\\2&-1&1\\0&5&5\end{bmatrix}$. We have

$$\det(A)=\begin{vmatrix}3&1&2\\2&-1&1\\0&5&5\end{vmatrix}=-20,\qquad \det(B_1)=\begin{vmatrix}-1&1&2\\-1&-1&1\\-5&5&5\end{vmatrix}=-10,$$
$$\det(B_2)=\begin{vmatrix}3&-1&2\\2&-1&1\\0&-5&5\end{vmatrix}=-10,\qquad \det(B_3)=\begin{vmatrix}3&1&-1\\2&-1&-1\\0&5&-5\end{vmatrix}=30.$$

Thus,
$$x_1=\frac{\det(B_1)}{\det(A)}=\frac{1}{2},\qquad x_2=\frac{\det(B_2)}{\det(A)}=\frac{1}{2},\qquad x_3=\frac{\det(B_3)}{\det(A)}=-\frac{3}{2}.$$

Solution: $(1/2,\,1/2,\,-3/2)$.

Solutions to Section 4.1

True-False Review:

1. FALSE. The vectors $(x,y)$ and $(x,y,0)$ do not belong to the same set, so they are not even comparable, let alone equal to one another.

2. TRUE. The unique additive inverse of $(x,y,z)$ is $(-x,-y,-z)$.

3. TRUE. The solution set refers to collections of the unknowns that solve the linear system. Since this system has 6 unknowns, the solution set consists of vectors belonging to $\mathbb{R}^6$.

4. TRUE. The vector $(-1)\cdot(x_1,x_2,\dots,x_n)$ is precisely $(-x_1,-x_2,\dots,-x_n)$, which is the additive inverse of $(x_1,x_2,\dots,x_n)$.

5. FALSE. There is no such name for a vector whose components are all positive.

6. FALSE. The correct result is
$$(s+t)(\mathbf{x}+\mathbf{y})=(s+t)\mathbf{x}+(s+t)\mathbf{y}=s\mathbf{x}+t\mathbf{x}+s\mathbf{y}+t\mathbf{y},$$
and in this item, only the first and last of these four terms appear.

7. TRUE. When the vector $\mathbf{x}$ is multiplied by the scalar zero, each component becomes zero: $0\mathbf{x}=\mathbf{0}$. This is the zero vector in $\mathbb{R}^n$.

8. TRUE. This is seen geometrically from addition and subtraction of geometric vectors.

9. FALSE. If $k<0$, then $k\mathbf{x}$ is a vector in the third quadrant. For instance, $(1,1)$ lies in the first quadrant, but $(-2)(1,1)=(-2,-2)$ lies in the third quadrant.

10. TRUE. Recalling that $\mathbf{i}=(1,0,0)$, $\mathbf{j}=(0,1,0)$, and $\mathbf{k}=(0,0,1)$, we have
$$5\mathbf{i}-6\mathbf{j}+\sqrt{2}\,\mathbf{k}=5(1,0,0)-6(0,1,0)+\sqrt{2}(0,0,1)=(5,-6,\sqrt{2}),$$
as stated.

11. FALSE. If the three vectors lie on the same line or in the same plane, the resulting object may determine only a one-dimensional segment or a two-dimensional area. For instance, if $\mathbf{x}=\mathbf{y}=\mathbf{z}=(1,0,0)$, then the vectors $\mathbf{x}$, $\mathbf{y}$, and $\mathbf{z}$ rest on the segment from $(0,0,0)$ to $(1,0,0)$, and do not determine a three-dimensional solid region.

12. FALSE. The components of $k\mathbf{x}$ remain even integers only if $k$ is an integer. For example, if $k=\pi$, then the components of $k\mathbf{x}$ are not integers at all, let alone even integers.

Problems:

Figure 0.0.61: Figure for Exercise 1 (the vectors $\mathbf{v}_1=(6,2)$, $\mathbf{v}_2=(-3,6)$, and $\mathbf{v}_3=(3,8)$ in the $xy$-plane).

Figure 0.0.62: Figure for Exercise 2 (the vectors $\mathbf{v}_1=(-3,-12)$, $\mathbf{v}_2=(20,-4)$, and $\mathbf{v}_3=(17,-16)$ in the $xy$-plane).

1. v1 = (6, 2), v2 = (−3, 6), v3 = (6, 2) + (−3, 6) = (3, 8).

2. v1 = (−3,−12), v2 = (20,−4), v3 = (−3,−12) + (20,−4) = (17,−16).

3. $\mathbf{v}=5(3,-1,2,5)-7(-1,2,9,-2)=(15,-5,10,25)-(-7,14,63,-14)=(22,-19,-53,39).$ Additive inverse: $-\mathbf{v}=(-1)\mathbf{v}=(-22,19,53,-39).$

4. We have
$$\mathbf{y}=\frac{2}{3}\mathbf{x}+\frac{1}{3}\mathbf{z}=\frac{2}{3}(1,2,3,4,5)+\frac{1}{3}(-1,0,-4,1,2)=\left(\frac{1}{3},\frac{4}{3},\frac{2}{3},3,4\right).$$

5. Let $\mathbf{x}=(x_1,x_2,x_3,x_4)$ and $\mathbf{y}=(y_1,y_2,y_3,y_4)$ be arbitrary vectors in $\mathbb{R}^4$. Then
$$\mathbf{x}+\mathbf{y}=(x_1+y_1,\,x_2+y_2,\,x_3+y_3,\,x_4+y_4)=(y_1+x_1,\,y_2+x_2,\,y_3+x_3,\,y_4+x_4)=\mathbf{y}+\mathbf{x}.$$

6. Let $\mathbf{x}=(x_1,x_2,x_3,x_4)$, $\mathbf{y}=(y_1,y_2,y_3,y_4)$, and $\mathbf{z}=(z_1,z_2,z_3,z_4)$ be arbitrary vectors in $\mathbb{R}^4$. Then
$$\begin{aligned}\mathbf{x}+(\mathbf{y}+\mathbf{z})&=(x_1,x_2,x_3,x_4)+(y_1+z_1,\,y_2+z_2,\,y_3+z_3,\,y_4+z_4)\\&=(x_1+(y_1+z_1),\,x_2+(y_2+z_2),\,x_3+(y_3+z_3),\,x_4+(y_4+z_4))\\&=((x_1+y_1)+z_1,\,(x_2+y_2)+z_2,\,(x_3+y_3)+z_3,\,(x_4+y_4)+z_4)\\&=(x_1+y_1,\,x_2+y_2,\,x_3+y_3,\,x_4+y_4)+(z_1,z_2,z_3,z_4)\\&=(\mathbf{x}+\mathbf{y})+\mathbf{z}.\end{aligned}$$

7. Let $\mathbf{x}=(x_1,x_2,x_3)$ and $\mathbf{y}=(y_1,y_2,y_3)$ be arbitrary vectors in $\mathbb{R}^3$, and let $r,s,t$ be arbitrary real numbers. Then:
$$1\mathbf{x}=(1x_1,1x_2,1x_3)=(x_1,x_2,x_3)=\mathbf{x},$$
$$(st)\mathbf{x}=((st)x_1,(st)x_2,(st)x_3)=(s(tx_1),s(tx_2),s(tx_3))=s(tx_1,tx_2,tx_3)=s(t\mathbf{x}),$$
$$r(\mathbf{x}+\mathbf{y})=(r(x_1+y_1),r(x_2+y_2),r(x_3+y_3))=(rx_1,rx_2,rx_3)+(ry_1,ry_2,ry_3)=r\mathbf{x}+r\mathbf{y},$$
$$(s+t)\mathbf{x}=((s+t)x_1,(s+t)x_2,(s+t)x_3)=(sx_1,sx_2,sx_3)+(tx_1,tx_2,tx_3)=s\mathbf{x}+t\mathbf{x}.$$

8. For example, if $\mathbf{x}=(2,2)$ and $\mathbf{y}=(-1,-1)$, then $\mathbf{x}+\mathbf{y}=(1,1)$ lies in the first quadrant. If $\mathbf{x}=(1,2)$ and $\mathbf{y}=(-5,1)$, then $\mathbf{x}+\mathbf{y}=(-4,3)$ lies in the second quadrant. If $\mathbf{x}=(1,1)$ and $\mathbf{y}=(-2,-2)$, then $\mathbf{x}+\mathbf{y}=(-1,-1)$ lies in the third quadrant. If $\mathbf{x}=(2,1)$ and $\mathbf{y}=(-1,-5)$, then $\mathbf{x}+\mathbf{y}=(1,-4)$ lies in the fourth quadrant.

Solutions to Section 4.2

True-False Review:

1. TRUE. This is part 1 of Theorem 4.2.6.

2. FALSE. The statement would be true if it were required that $\mathbf{v}$ be nonzero. However, if $\mathbf{v}=\mathbf{0}$, then $r\mathbf{v}=s\mathbf{v}=\mathbf{0}$ for all values of $r$ and $s$, and $r$ and $s$ need not be equal. We conclude that the statement is not true.

3. FALSE. This set is not closed under scalar multiplication. In particular, if $k$ is an irrational number such as $k=\pi$ and $v$ is an integer, then $kv$ is not an integer.

4. TRUE. We have
$$(\mathbf{x}+\mathbf{y})+((-\mathbf{x})+(-\mathbf{y}))=(\mathbf{x}+(-\mathbf{x}))+(\mathbf{y}+(-\mathbf{y}))=\mathbf{0}+\mathbf{0}=\mathbf{0},$$
where we have used the vector space axioms in these steps. Therefore, the additive inverse of $\mathbf{x}+\mathbf{y}$ is $(-\mathbf{x})+(-\mathbf{y})$.

5. TRUE. This is part 1 of Theorem 4.2.6.

6. TRUE. This is called the trivial vector space. Since $\mathbf{0}+\mathbf{0}=\mathbf{0}$ and $k\cdot\mathbf{0}=\mathbf{0}$, it is closed under addition and scalar multiplication. Both sides of each of the remaining axioms yield $\mathbf{0}$; moreover, $\mathbf{0}$ is the zero vector, and it is its own additive inverse.

7. FALSE. This set is not closed under addition, since $1+1\notin\{0,1\}$. Therefore, (A1) fails, and hence this set does not form a vector space. (It is worth noting that the set is also not closed under scalar multiplication.)

8. FALSE. This set is not closed under scalar multiplication. If $k<0$ and $x$ is a positive real number, the result $kx$ is a negative real number, and therefore no longer belongs to the set of positive real numbers.

Problems:

1. If $x=p/q$ and $y=r/s$, where $p,q,r,s$ are integers ($q\neq 0$, $s\neq 0$), then $x+y=(ps+qr)/(qs)$, which is a rational number. Consequently, the set of all rational numbers is closed under addition. The set is not closed under scalar multiplication since, if we multiply a nonzero rational number by an irrational number, the result is an irrational number.

2. Let $A=[a_{ij}]$ and $B=[b_{ij}]$ be upper triangular matrices, and let $k$ be an arbitrary real number. Then whenever $i>j$, it follows that $a_{ij}=0$ and $b_{ij}=0$. Consequently, $a_{ij}+b_{ij}=0$ whenever $i>j$, and $ka_{ij}=0$ whenever $i>j$. Therefore, $A+B$ and $kA$ are upper triangular matrices, so the set of all upper triangular matrices with real elements is closed under both addition and scalar multiplication.

3. $V=\{y:y'+9y=4x^2\}$ is not a vector space because it is not closed under vector addition. Let $u,v\in V$. Then $u'+9u=4x^2$ and $v'+9v=4x^2$. It follows that $(u+v)'+9(u+v)=(u'+9u)+(v'+9v)=4x^2+4x^2=8x^2\neq 4x^2$. Thus, $u+v\notin V$. Likewise, $V$ is not closed under scalar multiplication.

4. $V=\{y:y'+9y=0\ \text{for all}\ x\in I\}$ is closed under addition and scalar multiplication, as we now show:

A1: Addition: For $u,v\in V$, $u'+9u=0$ and $v'+9v=0$, so $(u+v)'+9(u+v)=(u'+9u)+(v'+9v)=0+0=0$. Hence $u+v\in V$, so we have closure under addition.

A2: Scalar multiplication: If $\alpha\in\mathbb{R}$ and $u\in V$, then $(\alpha u)'+9(\alpha u)=\alpha(u'+9u)=\alpha\cdot 0=0$, so we also have closure under scalar multiplication.

5. $V=\{\mathbf{x}\in\mathbb{R}^n:A\mathbf{x}=\mathbf{0},\ \text{where}\ A\ \text{is a fixed matrix}\}$ is closed under addition and scalar multiplication, as we now show. Let $\mathbf{u},\mathbf{v}\in V$ and $k\in\mathbb{R}$.
A1: Addition: $A(\mathbf{u}+\mathbf{v})=A\mathbf{u}+A\mathbf{v}=\mathbf{0}+\mathbf{0}=\mathbf{0}$, so $\mathbf{u}+\mathbf{v}\in V$.
A2: Scalar multiplication: $A(k\mathbf{u})=kA\mathbf{u}=k\mathbf{0}=\mathbf{0}$, thus $k\mathbf{u}\in V$.

6. (a): The zero vector in $M_2(\mathbb{R})$ is $0_2=\begin{bmatrix}0&0\\0&0\end{bmatrix}$. Since $\det(0_2)=0$, $0_2$ is an element of $S$.

(b): Let $A=\begin{bmatrix}1&0\\0&0\end{bmatrix}$ and $B=\begin{bmatrix}0&0\\0&1\end{bmatrix}$. Then $\det(A)=0=\det(B)$, so that $A,B\in S$. However, $A+B=\begin{bmatrix}1&0\\0&1\end{bmatrix}\Longrightarrow\det(A+B)=1$, so that $A+B\notin S$. Consequently, $S$ is not closed under addition.

(c): YES. Note that $\det(cA)=c^2\det(A)$, so if $\det(A)=0$, then $\det(cA)=0$.

7. (1) $\mathbb{N}$ is not closed under scalar multiplication, since multiplication of a positive integer by a real number does not, in general, result in a positive integer.
(2) There is no zero vector in $\mathbb{N}$.
(3) No element of $\mathbb{N}$ has an additive inverse in $\mathbb{N}$.

8. Let $V$ be $\mathbb{R}^2$, that is, $\{(x,y):x,y\in\mathbb{R}\}$. With addition and scalar multiplication as defined in the text, this set is clearly closed under both operations.
A3: $\mathbf{u}+\mathbf{v}=(u_1+v_1,\,u_2+v_2)=(v_1+u_1,\,v_2+u_2)=\mathbf{v}+\mathbf{u}$.
A4: $[\mathbf{u}+\mathbf{v}]+\mathbf{w}=([u_1+v_1]+w_1,\,[u_2+v_2]+w_2)=(u_1+[v_1+w_1],\,u_2+[v_2+w_2])=\mathbf{u}+[\mathbf{v}+\mathbf{w}]$.
A5: $\mathbf{0}=(0,0)$, since $(x,y)+(0,0)=(x+0,\,y+0)=(x,y)$.
A6: If $\mathbf{u}=(a,b)$, then $-\mathbf{u}=(-a,-b)$, since $(a,b)+(-a,-b)=(a-a,\,b-b)=(0,0)$.
Now let $\mathbf{u}=(u_1,u_2)$ and $\mathbf{v}=(v_1,v_2)$ with $u_1,u_2,v_1,v_2\in\mathbb{R}$, and let $r,s,t\in\mathbb{R}$.
A7: $1\cdot\mathbf{v}=(1\cdot v_1,\,1\cdot v_2)=(v_1,v_2)=\mathbf{v}$.
A8: $(st)\mathbf{v}=((st)v_1,\,(st)v_2)=(s(tv_1),\,s(tv_2))=s(t\mathbf{v})$.
A9: $r(\mathbf{u}+\mathbf{v})=(r(u_1+v_1),\,r(u_2+v_2))=(ru_1+rv_1,\,ru_2+rv_2)=r\mathbf{u}+r\mathbf{v}$.
A10: $(r+s)\mathbf{u}=((r+s)u_1,\,(r+s)u_2)=(ru_1+su_1,\,ru_2+su_2)=r\mathbf{u}+s\mathbf{u}$.
Thus, $\mathbb{R}^2$ is a vector space.

9. Let $A=\begin{bmatrix}a&b&c\\d&e&f\end{bmatrix}\in M_{2\times 3}(\mathbb{R})$. Then we see that the zero vector is $0_{2\times 3}=\begin{bmatrix}0&0&0\\0&0&0\end{bmatrix}$, since $A+0_{2\times 3}=A$. Further, $A$ has additive inverse $-A=\begin{bmatrix}-a&-b&-c\\-d&-e&-f\end{bmatrix}$, since $A+(-A)=0_{2\times 3}$.

10. The zero vector of $M_{m\times n}(\mathbb{R})$ is $0_{m\times n}$, the $m\times n$ zero matrix. The additive inverse of the $m\times n$ matrix $A$ with $(i,j)$-element $a_{ij}$ is the $m\times n$ matrix $-A$ with $(i,j)$-element $-a_{ij}$.

11. $V=\{p:p\ \text{is a polynomial in}\ x\ \text{of degree}\ 2\}$. $V$ is not a vector space because it is not closed under addition. For example, $x^2\in V$ and $-x^2\in V$, yet $x^2+(-x^2)=0\notin V$.

12. YES. We verify the ten axioms of a vector space.
A1: The product of two positive real numbers is a positive real number, so the set is closed under addition.
A2: Any power of a positive real number is a positive real number, so the set is closed under scalar multiplication.
A3: We have $x+y=xy=yx=y+x$ for all $x,y\in\mathbb{R}^+$, so commutativity under addition holds.
A4: We have $(x+y)+z=(xy)z=x(yz)=x+(y+z)$ for all $x,y,z\in\mathbb{R}^+$, so associativity under addition holds.
A5: We claim that the zero vector in this set is the real number 1. To see this, note that $1+x=1x=x=x1=x+1$ for all $x\in\mathbb{R}^+$.
A6: We claim that the additive inverse of the vector $x\in\mathbb{R}^+$ is $\frac{1}{x}\in\mathbb{R}^+$. To see this, note that $x+\frac{1}{x}=x\cdot\frac{1}{x}=1=\frac{1}{x}\cdot x=\frac{1}{x}+x$.
A7: Note that $1\cdot x=x^1=x$ for all $x\in\mathbb{R}^+$, so the unit property holds.
A8: For all $r,s\in\mathbb{R}$ and $x\in\mathbb{R}^+$, we have $(rs)\cdot x=x^{rs}=x^{sr}=(x^s)^r=r\cdot x^s=r\cdot(s\cdot x)$, as required for associativity of scalar multiplication.
A9: For all $r\in\mathbb{R}$ and $x,y\in\mathbb{R}^+$, we have $r\cdot(x+y)=r\cdot(xy)=(xy)^r=x^ry^r=x^r+y^r=r\cdot x+r\cdot y$, as required for distributivity of scalar multiplication over vector addition.
A10: For all $r,s\in\mathbb{R}$ and $x\in\mathbb{R}^+$, we have $(r+s)\cdot x=x^{r+s}=x^rx^s=x^r+x^s=r\cdot x+s\cdot x$, as required for distributivity of scalar multiplication over scalar addition.
The above verification of axioms A1-A10 shows that we have a vector space structure here.

13. Axioms A1 and A2 clearly hold under the given operations.
A3: $\mathbf{u}+\mathbf{v}=(u_1,u_2)+(v_1,v_2)=(u_1-v_1,\,u_2-v_2)=(-(v_1-u_1),\,-(v_2-u_2))\neq(v_1-u_1,\,v_2-u_2)=\mathbf{v}+\mathbf{u}$. Consequently, A3 does not hold.
A4: $(\mathbf{u}+\mathbf{v})+\mathbf{w}=(u_1-v_1,\,u_2-v_2)+(w_1,w_2)=((u_1-v_1)-w_1,\,(u_2-v_2)-w_2)$, while $\mathbf{u}+(\mathbf{v}+\mathbf{w})=(u_1-(v_1-w_1),\,u_2-(v_2-w_2))$. Consequently, A4 does not hold.
A5: $\mathbf{0}=(0,0)$, since $\mathbf{u}+\mathbf{0}=(u_1-0,\,u_2-0)=(u_1,u_2)=\mathbf{u}$.
A6: If $\mathbf{u}=(u_1,u_2)$, then $-\mathbf{u}=(u_1,u_2)$, since $\mathbf{u}+(-\mathbf{u})=(u_1-u_1,\,u_2-u_2)=(0,0)=\mathbf{0}$.
None of the remaining axioms holds.

14. Axiom A5: The zero vector of $\mathbb{R}^2$ under the defined operation of addition is $(1,1)$, for if $(u_1,u_2)$ is any element of $\mathbb{R}^2$, then $(u_1,u_2)+(1,1)=(u_1\cdot 1,\,u_2\cdot 1)=(u_1,u_2)$. Thus, Axiom A5 holds.
Axiom A6: Suppose that $(u_1,v_1)$ is any element of $\mathbb{R}^2$ with additive inverse $(a,b)$. From the first part of the problem, we know that $(1,1)$ is the zero element, so it must be the case that $(u_1,v_1)+(a,b)=(1,1)$, that is, $(u_1a,\,v_1b)=(1,1)$; hence $u_1a=1$ and $v_1b=1$. But this system cannot be satisfied for all $(u_1,v_1)\in\mathbb{R}^2$, for example $(0,0)$. Thus, Axiom A6 is not satisfied.

15. Let $A,B,C\in M_2(\mathbb{R})$ and $r,s,t\in\mathbb{R}$.
A3: The addition operation is not commutative, since $A+B=AB\neq BA=B+A$ in general.
A4: Addition is associative, since $(A+B)+C=(AB)C=A(BC)=A+(B+C)$.
A5: $I_2=\begin{bmatrix}1&0\\0&1\end{bmatrix}$ is the zero vector in $M_2(\mathbb{R})$, because $A+I_2=AI_2=A$ for all $A\in M_2(\mathbb{R})$.
A6: We wish to determine whether for each matrix $A\in M_2(\mathbb{R})$ we can find a matrix $B\in M_2(\mathbb{R})$ such that $A+B=I_2$ (recall from A5 that the zero vector is $I_2$), equivalently, such that $AB=I_2$. However, this equation can be satisfied only if $A$ is nonsingular, so the axiom fails.
A7: $1\cdot A=A$ is true for all $A\in M_2(\mathbb{R})$.
A8: $(st)A=s(tA)$ is true for all $A\in M_2(\mathbb{R})$ and $s,t\in\mathbb{R}$.
A9: $sA+tA=(sA)(tA)=st(AA)\neq(s+t)A$ in general, so the axiom fails.
A10: $rA+rB=(rA)(rB)=r^2AB=r^2(A+B)\neq r(A+B)$ in general, so the axiom fails.

16. $M_2(\mathbb{R})=\{A:A\ \text{is a}\ 2\times 2\ \text{real matrix}\}$. Let $A,B\in M_2(\mathbb{R})$ and $k\in\mathbb{R}$.
A1: $A\oplus B=-(A+B)\in M_2(\mathbb{R})$.
A2: $k\cdot A=-kA\in M_2(\mathbb{R})$.
A3: $A\oplus B=-(A+B)=-(B+A)=B\oplus A$. Hence, the operation $\oplus$ is commutative.
A4: $(A\oplus B)\oplus C=-((A\oplus B)+C)=-(-(A+B)+C)=A+B-C$, but $A\oplus(B\oplus C)=-(A+(B\oplus C))=-(A+(-(B+C)))=-A+B+C$. Thus the operation $\oplus$ is not associative.
A5: An element $B$ is needed such that $A\oplus B=A$ for all $A\in M_2(\mathbb{R})$, but $-(A+B)=A\Longrightarrow B=-2A$. Since this depends on $A$, there is no zero vector.
A6: Since there is no zero vector, we cannot define the additive inverse.
Let $r,s,t\in\mathbb{R}$.
A7: $1\cdot A=-1A=-A\neq A$, so the axiom fails.
A8: $(st)\cdot A=-[(st)A]$, but $s\cdot(t\cdot A)=s\cdot(-tA)=-[s(-tA)]=s(tA)=(st)A$, so $(st)\cdot A\neq s\cdot(t\cdot A)$.
A9: $r\cdot(A\oplus B)=-r(A\oplus B)=-r(-(A+B))=r(A+B)=rA+rB$, whereas $rA\oplus rB=-(rA+rB)=-rA-rB$, so this axiom fails to hold.
A10: $(s+t)\cdot A=-(s+t)A=-sA+(-tA)$, but $s\cdot A\oplus t\cdot A=(-sA)\oplus(-tA)=-[(-sA)+(-tA)]=sA+tA$, hence $(s+t)\cdot A\neq s\cdot A\oplus t\cdot A$. We conclude that only axioms (A1)-(A3) hold.

17. Let C2 = {(z1, z2) : zi ∈ C} under the usual operations of addition and scalar multiplication.

A3 and A4: Follow from the properties of addition in C.

A5: (0, 0) is the zero vector in C2, since (z1, z2) + (0, 0) = (z1 + 0, z2 + 0) = (z1, z2) for all (z1, z2) ∈ C2.

A6: The additive inverse of the vector (z1, z2) ∈ C2 is the vector (−z1, −z2), for all (z1, z2) ∈ C2.

A7-A10: Follow from the properties of addition and scalar multiplication in C.

Thus C2, together with its defined operations, is a complex vector space.

18. Let M2(C) = {A : A is a 2 × 2 matrix with complex entries} under the usual operations of matrix addition and scalar multiplication.

A3 and A4: Follow from the properties of matrix addition.

A5: The zero vector, 0, for M2(C) is the 2 × 2 zero matrix, 02.

A6: For each vector A = [aij] ∈ M2(C), the vector −A = [−aij] ∈ M2(C) satisfies A + (−A) = 02.

A7-A10: Follow from the properties of matrix algebra.

Hence M2(C), together with its defined operations, is a complex vector space.

19. Let u = (u1, u2, u3) and v = (v1, v2, v3) be vectors in C3, and let k ∈ R.

A1: u + v = (u1, u2, u3) + (v1, v2, v3) = (u1 + v1, u2 + v2, u3 + v3) ∈ C3.

A2: ku = k(u1, u2, u3) = (ku1, ku2, ku3) ∈ C3.

A3 and A4: Satisfied by the properties of addition in C3.

A5: (0, 0, 0) is the zero vector in C3, since (0, 0, 0) + (z1, z2, z3) = (0 + z1, 0 + z2, 0 + z3) = (z1, z2, z3) for all (z1, z2, z3) ∈ C3.

A6: (−z1, −z2, −z3) is the additive inverse of (z1, z2, z3), because (z1, z2, z3) + (−z1, −z2, −z3) = (0, 0, 0) for all (z1, z2, z3) ∈ C3.

Let r, s, t ∈ R.

A7: 1 · u = 1 · (u1, u2, u3) = (1u1, 1u2, 1u3) = (u1, u2, u3) = u.

A8: (st)u = (st)(u1, u2, u3) = ((st)u1, (st)u2, (st)u3) = (s(tu1), s(tu2), s(tu3)) = s(tu1, tu2, tu3) = s(t(u1, u2, u3)) = s(tu).

A9: r(u + v) = r(u1 + v1, u2 + v2, u3 + v3) = (r(u1 + v1), r(u2 + v2), r(u3 + v3)) = (ru1 + rv1, ru2 + rv2, ru3 + rv3) = (ru1, ru2, ru3) + (rv1, rv2, rv3) = r(u1, u2, u3) + r(v1, v2, v3) = ru + rv.

A10: (s + t)u = (s + t)(u1, u2, u3) = ((s + t)u1, (s + t)u2, (s + t)u3) = (su1 + tu1, su2 + tu2, su3 + tu3) = (su1, su2, su3) + (tu1, tu2, tu3) = s(u1, u2, u3) + t(u1, u2, u3) = su + tu.

Thus, C3 is a real vector space.

20. NO. If we scalar multiply a vector (x, y, z) ∈ R3 by a non-real (complex) scalar r, we obtain the vector (rx, ry, rz) ∉ R3, since rx, ry, rz are not real in general.

21. Let k be an arbitrary scalar, and let u be an arbitrary vector in V . Then, using property 2 of Theorem


4.2.6, we have k0 = k(0u) = (k0)u = 0u = 0.

22. Assume that k is a scalar and u ∈ V such that ku = 0. If k = 0, the desired conclusion is already reached. If, on the other hand, k ≠ 0, then 1/k ∈ R and

(1/k) · (ku) = (1/k) · 0 = 0,

or

((1/k) · k) u = 0.

Hence 1 · u = 0, and the unit property A7 now shows that u = 0, and again we reach the desired conclusion.

23. We verify the axioms A1-A10 for a vector space.

A1: If a0 + a1x + ··· + anx^n and b0 + b1x + ··· + bnx^n belong to Pn, then

(a0 + a1x + ··· + anx^n) + (b0 + b1x + ··· + bnx^n) = (a0 + b0) + (a1 + b1)x + ··· + (an + bn)x^n,

which again belongs to Pn. Therefore, Pn is closed under addition.

A2: If a0 + a1x + ··· + anx^n belongs to Pn and r is a scalar, then

r · (a0 + a1x + ··· + anx^n) = (ra0) + (ra1)x + ··· + (ran)x^n,

which again belongs to Pn. Therefore, Pn is closed under scalar multiplication.

A3: Let p(x) = a0 + a1x + ··· + anx^n and q(x) = b0 + b1x + ··· + bnx^n belong to Pn. Then

p(x) + q(x) = (a0 + a1x + ··· + anx^n) + (b0 + b1x + ··· + bnx^n)
            = (a0 + b0) + (a1 + b1)x + ··· + (an + bn)x^n
            = (b0 + a0) + (b1 + a1)x + ··· + (bn + an)x^n
            = (b0 + b1x + ··· + bnx^n) + (a0 + a1x + ··· + anx^n)
            = q(x) + p(x),

so Pn satisfies commutativity under addition.

A4: Let p(x) = a0 + a1x + ··· + anx^n, q(x) = b0 + b1x + ··· + bnx^n, and r(x) = c0 + c1x + ··· + cnx^n belong to Pn. Then

[p(x) + q(x)] + r(x) = [(a0 + a1x + ··· + anx^n) + (b0 + b1x + ··· + bnx^n)] + (c0 + c1x + ··· + cnx^n)
                     = [(a0 + b0) + (a1 + b1)x + ··· + (an + bn)x^n] + (c0 + c1x + ··· + cnx^n)
                     = [(a0 + b0) + c0] + [(a1 + b1) + c1]x + ··· + [(an + bn) + cn]x^n
                     = [a0 + (b0 + c0)] + [a1 + (b1 + c1)]x + ··· + [an + (bn + cn)]x^n
                     = (a0 + a1x + ··· + anx^n) + [(b0 + c0) + (b1 + c1)x + ··· + (bn + cn)x^n]
                     = (a0 + a1x + ··· + anx^n) + [(b0 + b1x + ··· + bnx^n) + (c0 + c1x + ··· + cnx^n)]
                     = p(x) + [q(x) + r(x)],

so Pn satisfies associativity under addition.

A5: The zero vector is the zero polynomial z(x) = 0 + 0·x + ··· + 0·x^n, and it is readily verified that this polynomial satisfies z(x) + p(x) = p(x) = p(x) + z(x) for all p(x) ∈ Pn.

A6: The additive inverse of p(x) = a0 + a1x + ··· + anx^n is

−p(x) = (−a0) + (−a1)x + ··· + (−an)x^n.


It is readily verified that p(x) + (−p(x)) = z(x), where z(x) is defined in A5.

A7: We have

1 · (a0 + a1x + ··· + anx^n) = a0 + a1x + ··· + anx^n,

which demonstrates the unit property in Pn.

A8: Let r, s ∈ R, and p(x) = a0 + a1x + ··· + anx^n ∈ Pn. Then

(rs) · p(x) = (rs) · (a0 + a1x + ··· + anx^n)
            = [(rs)a0] + [(rs)a1]x + ··· + [(rs)an]x^n
            = r[(sa0) + (sa1)x + ··· + (san)x^n]
            = r[s(a0 + a1x + ··· + anx^n)]
            = r · (s · p(x)),

which verifies the associativity of scalar multiplication.

A9: Let r ∈ R, let p(x) = a0 + a1x + ··· + anx^n ∈ Pn, and let q(x) = b0 + b1x + ··· + bnx^n ∈ Pn. Then

r · (p(x) + q(x)) = r · ((a0 + a1x + ··· + anx^n) + (b0 + b1x + ··· + bnx^n))
                  = r · [(a0 + b0) + (a1 + b1)x + ··· + (an + bn)x^n]
                  = [r(a0 + b0)] + [r(a1 + b1)]x + ··· + [r(an + bn)]x^n
                  = [(ra0) + (ra1)x + ··· + (ran)x^n] + [(rb0) + (rb1)x + ··· + (rbn)x^n]
                  = [r(a0 + a1x + ··· + anx^n)] + [r(b0 + b1x + ··· + bnx^n)]
                  = r · p(x) + r · q(x),

which verifies the distributivity of scalar multiplication over vector addition.

A10: Let r, s ∈ R and let p(x) = a0 + a1x + ··· + anx^n ∈ Pn. Then

(r + s) · p(x) = (r + s) · (a0 + a1x + ··· + anx^n)
               = [(r + s)a0] + [(r + s)a1]x + ··· + [(r + s)an]x^n
               = [ra0 + ra1x + ··· + ranx^n] + [sa0 + sa1x + ··· + sanx^n]
               = r(a0 + a1x + ··· + anx^n) + s(a0 + a1x + ··· + anx^n)
               = r · p(x) + s · p(x),

which verifies the distributivity of scalar multiplication over scalar addition.

The above verification of axioms A1-A10 shows that Pn is a vector space.

Solutions to Section 4.3

True-False Review:

1. FALSE. The null space of an m× n matrix A is a subspace of Rn, not Rm.

2. FALSE. It is not necessarily the case that 0 belongs to the solution set of the linear system. In fact, 0 belongs to the solution set of the linear system if and only if the system is homogeneous.

3. TRUE. If b = 0, then the line is y = mx, which is a line through the origin of R2, a one-dimensional subspace of R2. On the other hand, if b ≠ 0, then the origin does not lie on the given line; since the line does not contain the zero vector, it cannot form a subspace of R2 in this case.


4. FALSE. The spaces Rm and Rn, with m < n, are not comparable. Neither of them is a subset of the other, and therefore, neither of them can form a subspace of the other.

5. TRUE. Choosing any vector v in S, the scalar multiple 0v = 0 still belongs to S.

6. FALSE. In order for a subset of V to form a subspace, the same operations of addition and scalar multiplication must be used in the subset as are used in V .

7. FALSE. This set is not closed under addition. For instance, the point (1, 1, 0) lies in the xy-plane, the point (0, 1, 1) lies in the yz-plane, but

(1, 1, 0) + (0, 1, 1) = (1, 2, 1)

does not belong to S. Therefore, S is not a subspace of V .
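This counterexample is easy to restate as a quick numerical check; the sketch below takes S to be the union of the xy-plane and the yz-plane in R^3, as in the question.

```python
import numpy as np

def in_S(p):
    # p lies in the xy-plane (z = 0) or in the yz-plane (x = 0)
    return p[2] == 0 or p[0] == 0

u = np.array([1, 1, 0])   # in the xy-plane
v = np.array([0, 1, 1])   # in the yz-plane
w = u + v                 # (1, 2, 1), which lies in neither plane

closure_fails = in_S(u) and in_S(v) and not in_S(w)
```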

8. FALSE. For instance, if we consider V = R3, then the xy-plane forms a subspace of V , and the x-axis forms a subspace of V . Both of these subspaces contain in common all points along the x-axis. Other examples abound as well.

Problems:

1. S = {x ∈ R2 : x = (2k,−3k), k ∈ R}.

(a) S is certainly nonempty. Let x,y ∈ S. Then for some r, s ∈ R,

x = (2r,−3r) and y = (2s,−3s).

Hence,

x + y = (2r,−3r) + (2s,−3s) = (2(r + s),−3(r + s)) = (2k,−3k),

where k = r + s. Consequently, S is closed under addition. Further, if c ∈ R, then

cx = c(2r,−3r) = (2cr,−3cr) = (2t,−3t),

where t = cr. Therefore S is also closed under scalar multiplication. It follows from Theorem 4.3.2 that S is a subspace of R2.

(b) The subspace S consists of all points lying along the line in the accompanying figure.

[Figure 0.0.63: Figure for Exercise 1(b): the line y = −3x/2 in the xy-plane.]

2. S = {x ∈ R3 : x = (r − 2s, 3r + s, s), r, s ∈ R}.


(a) S is certainly nonempty. Let x,y ∈ S. Then for some r, s, u, v ∈ R,

x = (r − 2s, 3r + s, s) and y = (u− 2v, 3u + v, v).

Hence,

x + y = (r − 2s, 3r + s, s) + (u − 2v, 3u + v, v)
      = ((r + u) − 2(s + v), 3(r + u) + (s + v), s + v)
      = (a − 2b, 3a + b, b),

where a = r + u, and b = s + v. Consequently, S is closed under addition. Further, if c ∈ R, then

cx = c(r − 2s, 3r + s, s) = (cr − 2cs, 3cr + cs, cs) = (k − 2l, 3k + l, l),

where k = cr and l = cs. Therefore S is also closed under scalar multiplication. It follows from Theorem 4.3.2 that S is a subspace of R3.

(b) The coordinates of the points in S are (x, y, z) where

x = r − 2s, y = 3r + s, z = s.

Eliminating r and s between these three equations yields 3x− y + 7z = 0.
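The elimination can be double-checked symbolically; this sketch substitutes the parametric form of S into the claimed plane equation.

```python
import sympy as sp

r, s = sp.symbols('r s')
# Parametric description of S: (x, y, z) = (r - 2s, 3r + s, s)
x, y, z = r - 2*s, 3*r + s, s

# Every point of S should satisfy 3x - y + 7z = 0 identically in r and s.
residual = sp.simplify(3*x - y + 7*z)
```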

3. S = {(x, y) ∈ R2 : 3x + 2y = 0}. S ≠ ∅ since (0, 0) ∈ S.

Closure under Addition: Let (x1, x2), (y1, y2) ∈ S. Then 3x1 + 2x2 = 0 and 3y1 + 2y2 = 0, so 3(x1 + y1) +2(x2 + y2) = 0, which implies that (x1 + y1, x2 + y2) ∈ S.

Closure under Scalar Multiplication: Let a ∈ R and (x1, x2) ∈ S. Then 3x1 + 2x2 = 0 =⇒ a(3x1 + 2x2) = a · 0 =⇒ 3(ax1) + 2(ax2) = 0, which shows that (ax1, ax2) ∈ S.

Thus, S is a subspace of R2 by Theorem 4.3.2.

4. S = {(x1, 0, x3, 2) : x1, x3 ∈ R} is not a subspace of R4 because it is not closed under addition. To see this, let (a, 0, b, 2), (c, 0, d, 2) ∈ S. Then (a, 0, b, 2) + (c, 0, d, 2) = (a + c, 0, b + d, 4) ∉ S.

5. S = {(x, y, z) ∈ R3 : x + y + z = 1} is not a subspace of R3 because (0, 0, 0) ∉ S, since 0 + 0 + 0 ≠ 1.

6. S = {u ∈ Rn : Au = b, where A is a fixed m × n matrix and b ≠ 0} is not a subspace of Rn, since A0 = 0 ≠ b, so that 0 ∉ S.

7. S = {(x, y) ∈ R2 : x^2 − y^2 = 0} is not a subspace of R2, since it is not closed under addition, as we now observe. If (x1, y1), (x2, y2) ∈ S, then (x1, y1) + (x2, y2) = (x1 + x2, y1 + y2), and

(x1 + x2)^2 − (y1 + y2)^2 = x1^2 + 2x1x2 + x2^2 − (y1^2 + 2y1y2 + y2^2)
                          = (x1^2 − y1^2) + (x2^2 − y2^2) + 2(x1x2 − y1y2)
                          = 0 + 0 + 2(x1x2 − y1y2) ≠ 0, in general.

Thus, (x1, y1) + (x2, y2) ∉ S.

8. S = {A ∈ M2(R) : det(A) = 1} is not a subspace of M2(R). To see this, let k ∈ R be a scalar and let A ∈ S. Then det(kA) = k² det(A) = k² · 1 = k² ≠ 1, unless k = ±1. Note also that det(A) = 1 and det(B) = 1 do not imply that det(A + B) = 1.
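The determinant scaling used here, det(kA) = k² det(A) for a 2 × 2 matrix, is easy to spot-check numerically. The matrix below is a hypothetical member of S, not one from the text.

```python
import numpy as np

A = np.array([[2., 1.], [3., 2.]])   # det(A) = 4 - 3 = 1, so A is in S
k = 3.0

detA = np.linalg.det(A)
det_kA = np.linalg.det(k * A)        # k^2 * det(A) = 9, so kA is not in S
```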

9. S = {A = [aij ] ∈ Mn(R) : aij = 0 whenever i < j}. Note that S ≠ ∅ since 0n ∈ S. Now let A = [aij ] and B = [bij ] be lower triangular matrices in S, and let c ∈ R. Then aij = 0 and bij = 0 whenever i < j. Then aij + bij = 0


and caij = 0 whenever i < j. Hence A + B = [aij + bij ] and cA = [caij ] are also lower triangular matrices. Therefore S is closed under addition and scalar multiplication. Consequently, S is a subspace of Mn(R) by Theorem 4.3.2.

10. S = {A ∈ Mn(R) : A is invertible} is not a subspace of Mn(R) because 0n ∉ S.

11. S = {A ∈ M2(R) : A^T = A}. S ≠ ∅ since 02 ∈ S.

Closure under Addition: If A,B ∈ S, then (A + B)^T = A^T + B^T = A + B, which shows that A + B ∈ S.

Closure under Scalar Multiplication: If r ∈ R and A ∈ S, then (rA)^T = rA^T = rA, which shows that rA ∈ S.

Consequently, S is a subspace of M2(R) by Theorem 4.3.2.

12. S = {A ∈ M2(R) : A^T = −A}. S ≠ ∅ because 02 ∈ S.

Closure under Addition: If A,B ∈ S, then (A + B)^T = A^T + B^T = −A + (−B) = −(A + B), which shows that A + B ∈ S.

Closure under Scalar Multiplication: If k ∈ R and A ∈ S, then (kA)^T = kA^T = k(−A) = −(kA), which shows that kA ∈ S.

Thus, S is a subspace of M2(R) by Theorem 4.3.2.

13. S = {f ∈ V : f(a) = f(b)}, where V is the vector space of all real-valued functions defined on [a, b]. Note that S ≠ ∅ since the zero function O(x) = 0 for all x belongs to S.

Closure under Addition: If f, g ∈ S, then (f + g)(a) = f(a) + g(a) = f(b) + g(b) = (f + g)(b), which shows that f + g ∈ S.

Closure under Scalar Multiplication: If k ∈ R and f ∈ S, then (kf)(a) = kf(a) = kf(b) = (kf)(b), which shows that kf ∈ S.

Therefore S is a subspace of V by Theorem 4.3.2.

14. S = {f ∈ V : f(a) = 1}, where V is the vector space of all real-valued functions defined on [a, b]. We claim that S is not a subspace of V .

Not Closed under Addition: If f, g ∈ S, then (f + g)(a) = f(a) + g(a) = 1 + 1 = 2 ≠ 1, which shows that f + g ∉ S.

It can also be shown that S is not closed under scalar multiplication.

15. S = {f ∈ V : f(−x) = f(x) for all x ∈ R}. Note that S ≠ ∅ since the zero function O(x) = 0 for all x belongs to S. Let f, g ∈ S. Then

(f + g)(−x) = f(−x) + g(−x) = f(x) + g(x) = (f + g)(x)

and if c ∈ R, then

(cf)(−x) = cf(−x) = cf(x) = (cf)(x),

so f + g and c · f belong to S. Therefore, S is closed under addition and scalar multiplication, and hence S is a subspace of V by Theorem 4.3.2.

16. S = {p ∈ P2 : p(x) = ax^2 + b, a, b ∈ R}. Note that S ≠ ∅ since p(x) = 0 belongs to S.

Closure under Addition: Let p, q ∈ S. Then for some a1, a2, b1, b2 ∈ R,

p(x) = a1x^2 + b1 and q(x) = a2x^2 + b2.


Hence,

(p + q)(x) = p(x) + q(x) = (a1 + a2)x^2 + b1 + b2 = ax^2 + b,

where a = a1 + a2 and b = b1 + b2, so that S is closed under addition.

Closure under Scalar Multiplication: If k ∈ R, then

(kp)(x) = kp(x) = ka1x^2 + kb1 = cx^2 + d,

where c = ka1 and d = kb1, so that S is also closed under scalar multiplication.

It follows from Theorem 4.3.2 that S is a subspace of P2.

17. S = {p ∈ P2 : p(x) = ax^2 + 1, a ∈ R}. We claim that S is not closed under addition.

Not Closed under Addition: Let p, q ∈ S. Then for some a1, a2 ∈ R,

p(x) = a1x^2 + 1 and q(x) = a2x^2 + 1.

Hence,

(p + q)(x) = p(x) + q(x) = (a1 + a2)x^2 + 2 = ax^2 + 2,

where a = a1 + a2. Consequently, p + q ∉ S, and therefore S is not closed under addition. It follows that S is not a subspace of P2.

18. S = {y ∈ C2(I) : y′′ + 2y′ − y = 0}. Note that S is nonempty since the function y = 0 belongs to S.

Closure under Addition: Let y1, y2 ∈ S. Then

(y1 + y2)′′ + 2(y1 + y2)′ − (y1 + y2) = y1′′ + y2′′ + 2(y1′ + y2′) − y1 − y2
                                      = (y1′′ + 2y1′ − y1) + (y2′′ + 2y2′ − y2)
                                      = 0 + 0 = 0.

Thus, y1 + y2 ∈ S.

Closure under Scalar Multiplication: Let k ∈ R and y1 ∈ S. Then

(ky1)′′ + 2(ky1)′ − (ky1) = ky1′′ + 2ky1′ − ky1 = k(y1′′ + 2y1′ − y1) = k · 0 = 0,

which shows that ky1 ∈ S. Hence, S is a subspace of V by Theorem 4.3.2.

19. S = {y ∈ C2(I) : y′′ + 2y′ − y = 1}. S is not a subspace of V . We show that S fails to be closed under addition (one can also verify that it is not closed under scalar multiplication, but this is unnecessary once the failure of closure under addition is shown).

Not Closed under Addition: Let y1, y2 ∈ S. Then

(y1 + y2)′′ + 2(y1 + y2)′ − (y1 + y2) = y1′′ + y2′′ + 2(y1′ + y2′) − y1 − y2
                                      = (y1′′ + 2y1′ − y1) + (y2′′ + 2y2′ − y2)
                                      = 1 + 1 = 2 ≠ 1.

Thus, y1 + y2 ∉ S.

Or alternatively:

Not Closed under Scalar Multiplication: Let k ∈ R and y1 ∈ S. Then

(ky1)′′ + 2(ky1)′ − (ky1) = ky1′′ + 2ky1′ − ky1 = k(y1′′ + 2y1′ − y1) = k · 1 = k ≠ 1, unless k = 1.

Therefore, ky1 ∉ S unless k = 1.

20. A = [1 −2 1; 4 −7 −2; −1 3 4]. nullspace(A) = {x ∈ R3 : Ax = 0}. The RREF of the augmented matrix of the system Ax = 0 is

[1 0 0 | 0]
[0 1 0 | 0]
[0 0 1 | 0].

Consequently, the linear system Ax = 0 has only the trivial solution (0, 0, 0), so nullspace(A) = {(0, 0, 0)}.
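As a sketch, this null space computation can be reproduced with SymPy, which returns an empty basis list exactly when only the trivial solution exists.

```python
import sympy as sp

A = sp.Matrix([[1, -2, 1],
               [4, -7, -2],
               [-1, 3, 4]])

# An empty nullspace basis means Ax = 0 has only the trivial solution.
null_basis = A.nullspace()
```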

21. A = [1 3 −2 1; 3 10 −4 6; 2 5 −6 −1]. nullspace(A) = {x ∈ R4 : Ax = 0}. The RREF of the augmented matrix of the system Ax = 0 is

[1 0 −8 −8 | 0]
[0 1  2  3 | 0]
[0 0  0  0 | 0],

so that nullspace(A) = {(8r + 8s, −2r − 3s, r, s) : r, s ∈ R}.

22. A = [1 i −2; 3 4i −5; −1 −3i i]. nullspace(A) = {x ∈ C3 : Ax = 0}. The RREF of the augmented matrix of the system Ax = 0 is

[1 0 0 | 0]
[0 1 0 | 0]
[0 0 1 | 0].

Consequently, the linear system Ax = 0 has only the trivial solution (0, 0, 0), so nullspace(A) = {(0, 0, 0)}.

23. Since the zero function y(x) = 0 for all x ∈ I is not a solution to the differential equation, the set of all solutions does not contain the zero vector from C2(I); hence it is not a vector space at all and cannot be a subspace.

24.

(a) As an example, we can let V = R2. If we take S1 to be the set of points lying on the x-axis and S2 to be the set of points lying on the y-axis, then it is readily seen that S1 and S2 are both subspaces of V . However, the union of these subspaces is not closed under addition. For instance, the points (1, 0) and (0, 1) lie in S1 ∪ S2, but (1, 0) + (0, 1) = (1, 1) ∉ S1 ∪ S2. Therefore, the union S1 ∪ S2 does not form a subspace of V .

(b) Since S1 and S2 are both subspaces of V , they both contain the zero vector. It follows that the zero vector is an element of S1 ∩ S2, hence this subset is nonempty. Now let u and v be vectors in S1 ∩ S2, and let c be a scalar. Then u and v are both in S1 and both in S2. Since S1 and S2 are each subspaces of V , u + v and cu are vectors in both S1 and S2, hence they are in S1 ∩ S2. This implies that S1 ∩ S2 is a nonempty subset of V , which is closed under both addition and scalar multiplication. Therefore, S1 ∩ S2 is a subspace of V .

(c) Note that S1 is a subset of S1 + S2 (every vector v ∈ S1 can be written as v + 0 ∈ S1 + S2), so S1 + S2 is nonempty.

Closure under Addition: Now let u and v belong to S1 + S2. We may write u = u1 + u2 and v = v1 + v2, where u1,v1 ∈ S1 and u2,v2 ∈ S2. Then

u + v = (u1 + u2) + (v1 + v2) = (u1 + v1) + (u2 + v2).

Since S1 and S2 are closed under addition, we have that u1 + v1 ∈ S1 and u2 + v2 ∈ S2. Therefore,

u + v = (u1 + u2) + (v1 + v2) = (u1 + v1) + (u2 + v2) ∈ S1 + S2.

Hence, S1 + S2 is closed under addition.


Closure under Scalar Multiplication: Next, let u ∈ S1 + S2 and let c be a scalar. We may write u = u1 + u2, where u1 ∈ S1 and u2 ∈ S2. Thus,

c · u = c · u1 + c · u2,

and since S1 and S2 are closed under scalar multiplication, c · u1 ∈ S1 and c · u2 ∈ S2. Therefore,

c · u = c · u1 + c · u2 ∈ S1 + S2.

Hence, S1 + S2 is closed under scalar multiplication.

Solutions to Section 4.4

True-False Review:

1. TRUE. By its very definition, when a linear span of a set of vectors is formed, that span becomes closed under addition and under scalar multiplication. Therefore, it is a subspace of V .

2. FALSE. In order to say that S spans V , it must be true that all vectors in V can be expressed as a linear combination of the vectors in S, not simply “some” vector.

3. TRUE. Every vector in V can be expressed as a linear combination of the vectors in S, and therefore, it is also true that every vector in W can be expressed as a linear combination of the vectors in S. Therefore, S spans W , and S is a spanning set for W .

4. FALSE. To illustrate this, consider V = R2, and consider the spanning set S = {(1, 0), (0, 1), (1, 1)}. Then the vector v = (2, 2) can be expressed as a linear combination of the vectors in S in more than one way:

v = 2(1, 1) and v = 2(1, 0) + 2(0, 1).

Many other illustrations, using a variety of different vector spaces, are also possible.

5. TRUE. To say that a set S of vectors in V spans V is to say that every vector in V belongs to span(S). So V is a subset of span(S). But of course, every vector in span(S) belongs to the vector space V , and so span(S) is a subset of V . Therefore, span(S) = V .

6. FALSE. This is not necessarily the case. For example, the linear span of the vectors (1, 1, 1) and (2, 2, 2) is simply a line through the origin, not a plane.

7. FALSE. There are vector spaces that do not contain finite spanning sets. For instance, if V is the vector space consisting of all polynomials with coefficients in R, then since a finite spanning set could not contain polynomials of arbitrarily large degree, no finite spanning set is possible for V .

8. FALSE. To illustrate this, consider V = R2, and consider the spanning set S = {(1, 0), (0, 1), (1, 1)}. The proper subset S′ = {(1, 0), (0, 1)} is still a spanning set for V . Therefore, it is possible for a proper subset of a spanning set for V to still be a spanning set for V .

9. TRUE. The general matrix

[a b c]
[0 d e]
[0 0 f]

in this vector space can be written as aE11 + bE12 + cE13 + dE22 + eE23 + fE33, and therefore the matrices in the set {E11, E12, E13, E22, E23, E33} span the vector space.

10. FALSE. For instance, it is easily verified that {x^2, x^2 + x, x^2 + 1} is a spanning set for P2, and yet, it contains only polynomials of degree 2.


11. FALSE. For instance, consider m = 2 and n = 3. Then one spanning set for R2 is {(1, 0), (0, 1), (1, 1), (2, 2)}, which consists of four vectors. On the other hand, one spanning set for R3 is {(1, 0, 0), (0, 1, 0), (0, 0, 1)}, which consists of only three vectors.

12. TRUE. This is explained in True-False Review Question 7 above.

Problems:

1. {(1,−1), (2,−2), (2, 3)}. Since v1 = (1,−1) and v3 = (2, 3) are noncollinear, the given set of vectors does span R2. (See the comment preceding Example 4.4.3 in the text.)

2. The given set of vectors does not span R2 since it does not contain two nonzero, noncollinear vectors.

3. The three vectors in the given set are all collinear. Consequently, the set of vectors does not span R2.

4. Since

|  1  2  4 |
| −1  5 −2 |  = −23 ≠ 0,
|  1  3  1 |

the given vectors are not coplanar, and therefore span R3.

5. Since

|  1  2  4 |
| −2  3 −1 |  = −7 ≠ 0,
|  1  1  2 |

the given vectors are not coplanar, and therefore span R3. Note that we can simply ignore the zero vector (0, 0, 0).

6. Since

|  2  3  1 |
| −1 −3  1 |  = 0,
|  4  5  3 |

the vectors are coplanar, and therefore the given set does not span R3.

7. Since

| 1 3 4 |
| 2 4 5 |  = 0,
| 3 5 6 |

the vectors are coplanar, and therefore the given set does not span R3. The linear span of the vectors consists of those points (x, y, z) for which the system

c1(1, 2, 3) + c2(3, 4, 5) + c3(4, 5, 6) = (x, y, z)

is consistent. Reducing the augmented matrix

[1 3 4 | x]
[2 4 5 | y]
[3 5 6 | z]

of this system yields

[1 3 4 | x          ]
[0 2 3 | 2x − y     ]
[0 0 0 | x − 2y + z ].

This system is consistent if and only if x − 2y + z = 0. Consequently, the linear span of the given set of vectors consists of all points lying on the plane with equation x − 2y + z = 0.
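The consistency condition can also be verified by a rank test: (x, y, z) lies in the span exactly when appending it as a column to the coefficient matrix does not raise the rank. A sketch with two sample points:

```python
import sympy as sp

M = sp.Matrix([[1, 3, 4],
               [2, 4, 5],
               [3, 5, 6]])   # columns are (1,2,3), (3,4,5), (4,5,6)

def in_span(b):
    # consistent iff the rank does not grow when b is appended
    return M.rank() == M.row_join(b).rank()

on_plane = sp.Matrix([1, 1, 1])    # satisfies x - 2y + z = 0
off_plane = sp.Matrix([1, 0, 0])   # x - 2y + z = 1, not on the plane

results = (in_span(on_plane), in_span(off_plane))
```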

8. Let (x1, x2) ∈ R2 and a, b ∈ R. Then

(x1, x2) = av1 + bv2 = a(2,−1) + b(3, 2) = (2a,−a) + (3b, 2b) = (2a + 3b,−a + 2b).

It follows that 2a + 3b = x1 and −a + 2b = x2, which implies that

a = (2x1 − 3x2)/7 and b = (x1 + 2x2)/7,

so that

(x1, x2) = ((2x1 − 3x2)/7) v1 + ((x1 + 2x2)/7) v2.

Hence {v1,v2} spans R2, and in particular, (5,−7) = (31/7)v1 − (9/7)v2.
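The coefficient formulas can be sanity-checked numerically; this sketch solves the 2 × 2 system for the particular vector (5, −7).

```python
import numpy as np

V = np.column_stack([[2., -1.], [3., 2.]])   # columns v1 = (2,-1), v2 = (3,2)
target = np.array([5., -7.])

coeffs = np.linalg.solve(V, target)          # should be (31/7, -9/7)
reconstructed = V @ coeffs
```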

9. Let (x, y, z) ∈ R3 and a, b, c ∈ R. Then

(x, y, z) = av1 + bv2 + cv3 = a(−1, 3, 2) + b(1,−2, 1) + c(2, 1, 1)
          = (−a, 3a, 2a) + (b,−2b, b) + (2c, c, c)
          = (−a + b + 2c, 3a − 2b + c, 2a + b + c).

These equalities result in the system:

−a + b + 2c = x
3a − 2b + c = y
2a + b + c = z


Upon solving for a, b, and c we obtain

a = (−3x + y + 5z)/16,  b = (−x − 5y + 7z)/16,  and  c = (7x + 3y − z)/16.

Consequently, {v1,v2,v3} spans R3, and

(x, y, z) = ((−3x + y + 5z)/16) v1 + ((−x − 5y + 7z)/16) v2 + ((7x + 3y − z)/16) v3.

10. Let (x, y) ∈ R2 and a, b, c ∈ R. Then

(x, y) = av1 + bv2 + cv3 = a(1, 1) + b(−1, 2) + c(1, 4)
       = (a, a) + (−b, 2b) + (c, 4c)
       = (a − b + c, a + 2b + 4c).

These equalities result in the system:

a − b + c = x
a + 2b + 4c = y

Solving the system, we find that

a = (2x + y − 6c)/3  and  b = (y − x − 3c)/3,

where c is a free real variable. It follows that

(x, y) = ((2x + y − 6c)/3) v1 + ((y − x − 3c)/3) v2 + cv3,

which implies that the vectors v1, v2, and v3 span R2. Moreover, if c = 0, then

(x, y) = ((2x + y)/3) v1 + ((y − x)/3) v2,

so R2 = span{v1,v2} also.

11. x = (c1, c2, c2 − 2c1) = (c1, 0,−2c1) + (0, c2, c2) = c1(1, 0,−2) + c2(0, 1, 1) = c1v1 + c2v2. Thus, {(1, 0,−2), (0, 1, 1)} spans S.

12. v = (c1, c2, c2 − c1, c1 − 2c2) = (c1, 0,−c1, c1) + (0, c2, c2,−2c2) = c1(1, 0,−1, 1) + c2(0, 1, 1,−2). Thus, {(1, 0,−1, 1), (0, 1, 1,−2)} spans S.

13. x − 2y − z = 0 =⇒ x = 2y + z, so each v ∈ S has the form v = (2y + z, y, z) = (2y, y, 0) + (z, 0, z) = y(2, 1, 0) + z(1, 0, 1). Therefore S = {v ∈ R3 : v = a(2, 1, 0) + b(1, 0, 1), a, b ∈ R}, hence {(2, 1, 0), (1, 0, 1)} spans S.

14. nullspace(A) = {x ∈ R3 : Ax = 0}. The system Ax = 0 is

[1 2 3] [x1]   [0]
[1 3 4] [x2] = [0]
[2 4 6] [x3]   [0].

Performing Gauss-Jordan elimination on the augmented matrix of the system, we obtain:

[1 2 3 | 0]    [1 2 3 | 0]    [1 0 1 | 0]
[1 3 4 | 0] ~  [0 1 1 | 0] ~  [0 1 1 | 0]
[2 4 6 | 0]    [0 0 0 | 0]    [0 0 0 | 0].

From the last matrix we find that x1 = −r and x2 = −r, where x3 = r with r ∈ R. Consequently,

x = (x1, x2, x3) = (−r,−r, r) = r(−1,−1, 1).


Thus, nullspace(A) = {x ∈ R3 : x = r(−1,−1, 1), r ∈ R} = span{(−1,−1, 1)}.

15. A = [1 2 3 5; 1 3 4 2; 2 4 6 −1]. nullspace(A) = {x ∈ R4 : Ax = 0}. The RREF of the augmented matrix of this system is

[1 0 1 0 | 0]
[0 1 1 0 | 0]
[0 0 0 1 | 0].

Consequently, nullspace(A) = {r(−1,−1, 1, 0) : r ∈ R} = span{(−1,−1, 1, 0)}.

16. A ∈ S =⇒ A = [a b; b c], where a, b, c ∈ R. Thus,

A = [a 0; 0 0] + [0 b; b 0] + [0 0; 0 c] = a[1 0; 0 0] + b[0 1; 1 0] + c[0 0; 0 1] = aA1 + bA2 + cA3.

Therefore S = span{A1, A2, A3}.

17. S = {A ∈ M2(R) : A = [0 α; −α 0], α ∈ R} = {A ∈ M2(R) : A = α[0 1; −1 0]} = span{[0 1; −1 0]}.

18.

(a) S ≠ ∅ since [0 0; 0 0] ∈ S.

Closure under Addition: Let x,y ∈ S. Then

x + y = [x11 x12; 0 x22] + [y11 y12; 0 y22] = [x11 + y11, x12 + y12; 0, x22 + y22],

which implies that x + y ∈ S.

Closure under Scalar Multiplication: Let r ∈ R and x ∈ S. Then

rx = r[x11 x12; 0 x22] = [rx11 rx12; 0 rx22],

which implies that rx ∈ S.

Consequently, S is a subspace of M2(R) by Theorem 4.3.2.

(b) A = [a11 a12; 0 a22] = a11[1 0; 0 0] + a12[0 1; 0 0] + a22[0 0; 0 1].

Therefore, S = span{[1 0; 0 0], [0 1; 0 0], [0 0; 0 1]}.

19. Let v ∈ span{v1,v2} with a, b ∈ R. Then

v = av1 + bv2 = a(1,−1, 2) + b(2,−1, 3) = (a,−a, 2a) + (2b,−b, 3b) = (a + 2b,−a − b, 2a + 3b).

Thus, span{v1,v2} = {v ∈ R3 : v = (a + 2b,−a − b, 2a + 3b), a, b ∈ R}.

Geometrically, span{v1,v2} is the plane through the origin determined by the two given vectors. The plane has parametric equations x = a + 2b, y = −a − b, and z = 2a + 3b. If a and b are eliminated from these equations, the resulting Cartesian equation is x − y − z = 0.


20. Let v ∈ span{v1,v2} with a, b ∈ R. Then

v = av1 + bv2 = a(1, 2,−1) + b(−2,−4, 2) = (a, 2a,−a) + (−2b,−4b, 2b) = (a − 2b, 2a − 4b,−a + 2b)
  = (a − 2b)(1, 2,−1) = k(1, 2,−1), where k = a − 2b.

Thus, span{v1,v2} = {v ∈ R3 : v = k(1, 2,−1), k ∈ R}. Geometrically, span{v1,v2} is the line through the origin determined by the vector (1, 2,−1).

21. Let v ∈ span{v1,v2,v3} with a, b, c ∈ R. Then

v = av1 + bv2 + cv3 = a(1, 1,−1) + b(2, 1, 3) + c(−2,−2, 2) = (a, a,−a) + (2b, b, 3b) + (−2c,−2c, 2c)
  = (a + 2b − 2c, a + b − 2c,−a + 3b + 2c).

Writing v = (x, y, z) and using the last ordered triple, we obtain the system:

a + 2b − 2c = x
a + b − 2c = y
−a + 3b + 2c = z

Performing Gauss-Jordan elimination on the augmented matrix of the system, we obtain:

[ 1 2 −2 | x]    [1  2 −2 | x    ]    [1 0 −2 | 2y − x       ]
[ 1 1 −2 | y] ~  [0 −1  0 | y − x] ~  [0 1  0 | x − y        ]
[−1 3  2 | z]    [0  5  0 | x + z]    [0 0  0 | 5y − 4x + z  ].

It is clear from the last matrix that the subspace S of R3 is a plane through (0, 0, 0) with Cartesian equation 4x − 5y − z = 0. Moreover, {v1,v2} also spans the subspace S, since

v = av1 + bv2 + cv3 = a(1, 1,−1) + b(2, 1, 3) + c(−2,−2, 2) = a(1, 1,−1) + b(2, 1, 3) − 2c(1, 1,−1)
  = (a − 2c)(1, 1,−1) + b(2, 1, 3) = dv1 + bv2, where d = a − 2c ∈ R.

22. If v ∈ span{v1,v2}, then there exist a, b ∈ R such that

v = av1 + bv2 = a(1,−1, 2) + b(2, 1, 3) = (a,−a, 2a) + (2b, b, 3b) = (a + 2b,−a + b, 2a + 3b).

Hence, v = (3, 3, 4) is in span{v1,v2} provided there exist a, b ∈ R satisfying the system:

a + 2b = 3
−a + b = 3
2a + 3b = 4

Solving this system, we find that a = −1 and b = 2. Consequently, v = −v1 + 2v2, so that (3, 3, 4) ∈ span{v1,v2}.

23. If v ∈ span{v1,v2}, then there exist a, b ∈ R such that

v = av1 + bv2 = a(−1, 1, 2) + b(3, 1,−4) = (−a, a, 2a) + (3b, b,−4b) = (−a + 3b, a + b, 2a − 4b).

Hence, v = (5, 3,−6) is in span{v1,v2} provided there exist a, b ∈ R satisfying the system:

−a + 3b = 5
a + b = 3
2a − 4b = −6

Solving this system, we find that a = 1 and b = 2. Consequently, v = v1 + 2v2, so that (5, 3,−6) ∈ span{v1,v2}.

24. If v ∈ span{v1,v2}, then there exist a, b ∈ R such that

v = av1 + bv2 = (3a, a, 2a) + (−2b,−b, b) = (3a − 2b, a − b, 2a + b).

Hence, v = (1, 1,−2) is in span{v1,v2} provided there exist a, b ∈ R satisfying the system:

3a − 2b = 1
a − b = 1
2a + b = −2


Since

[3 −2 |  1]              [1 −1 |  1]
[1 −1 |  1]  reduces to  [0  1 | −2]
[2  1 | −2]              [0  0 |  2],

it follows that the system has no solution. Hence it must be the case that (1, 1,−2) ∉ span{v1,v2}.

25. If p ∈ span{p1, p2}, then there exist a, b ∈ R such that p(x) = ap1(x) + bp2(x), so p(x) = 2x^2 − x + 2 is in span{p1, p2} provided there exist a, b ∈ R such that

2x^2 − x + 2 = a(x − 4) + b(x^2 − x + 3)
             = ax − 4a + bx^2 − bx + 3b
             = bx^2 + (a − b)x + (3b − 4a).

Equating like coefficients and solving, we find that a = 1 and b = 2. Thus,

2x^2 − x + 2 = 1 · (x − 4) + 2 · (x^2 − x + 3) = p1(x) + 2p2(x),

so p ∈ span{p1, p2}.

26. Let A ∈ span{A1, A2, A3} with c1, c2, c3 ∈ R. Then

A = c1A1 + c2A2 + c3A3 = c1[1 −1; 2 0] + c2[0 1; −2 1] + c3[3 0; 1 2]
  = [c1 −c1; 2c1 0] + [0 c2; −2c2 c2] + [3c3 0; c3 2c3]
  = [c1 + 3c3, −c1 + c2; 2c1 − 2c2 + c3, c2 + 2c3].

Therefore span{A1, A2, A3} = {A ∈ M2(R) : A = [c1 + 3c3, −c1 + c2; 2c1 − 2c2 + c3, c2 + 2c3]}.

27. Let A ∈ span{A1, A2} with a, b ∈ R. Then

A = aA1 + bA2 = a[1 2; −1 3] + b[−2 1; 1 −1] = [a 2a; −a 3a] + [−2b b; b −b] = [a − 2b, 2a + b; −a + b, 3a − b].

So span{A1, A2} = {A ∈ M2(R) : A = [a − 2b, 2a + b; −a + b, 3a − b]}. Now, to determine whether B ∈ span{A1, A2}, set

[a − 2b, 2a + b; −a + b, 3a − b] = [3 1; −2 4].

This implies that a = 1 and b = −1; thus B ∈ span{A1, A2}.

28. (a) The general vector in span{f, g} is of the form h(x) = c1 cosh x + c2 sinh x, where c1, c2 ∈ R.

(b) Let h ∈ S and c1, c2 ∈ R. Then

h(x) = c1f(x) + c2g(x) = c1 cosh x + c2 sinh x
     = c1 (e^x + e^(−x))/2 + c2 (e^x − e^(−x))/2
     = ((c1 + c2)/2) e^x + ((c1 − c2)/2) e^(−x)
     = d1 e^x + d2 e^(−x),

where d1 = (c1 + c2)/2 and d2 = (c1 − c2)/2. Therefore S = span{e^x, e^(−x)}.

29. The origin in R3.

30. All points lying on the line through the origin with direction v1.

31. All points lying on the plane through the origin containing v1 and v2.

32. If v1 = v2 = 0, then the subspace is the origin in R3. If at least one of the vectors is nonzero, then thesubspace consists of all points lying on the line through the origin in the direction of the nonzero vector.

33. Suppose that S is a subset of S′. We must show that every vector in span(S) also belongs to span(S′). Every vector v that lies in span(S) can be expressed as v = c1v1 + c2v2 + ··· + ckvk, where v1,v2, . . . ,vk belong to S. However, since S is a subset of S′, the vectors v1,v2, . . . ,vk also belong to S′, and therefore v belongs to span(S′). Thus, we have shown that every vector in span(S) also lies in span(S′).

34.

Proof of =⇒: We begin by supposing that span{v1,v2,v3} = span{v1,v2}. Since v3 ∈ span{v1,v2,v3}, our supposition implies that v3 ∈ span{v1,v2}, which means that v3 can be expressed as a linear combination of the vectors v1 and v2.

Proof of ⇐=: Now suppose that v3 can be expressed as a linear combination of the vectors v1 and v2. We must show that span{v1,v2,v3} = span{v1,v2}, and we do this by showing that each of these sets is a subset of the other. Since it is clear that span{v1,v2} is a subset of span{v1,v2,v3}, we focus our attention on proving that every vector in span{v1,v2,v3} belongs to span{v1,v2}. To see this, suppose that v belongs to span{v1,v2,v3}, so that we may write v = c1v1 + c2v2 + c3v3. By assumption, v3 can be expressed as a linear combination of v1 and v2, so that we may write v3 = d1v1 + d2v2. Hence, we obtain

v = c1v1 + c2v2 + c3v3 = c1v1 + c2v2 + c3(d1v1 + d2v2) = (c1 + c3d1)v1 + (c2 + c3d2)v2 ∈ span{v1,v2},

as required.

Solutions to Section 4.5

1. FALSE. For instance, consider the vector space V = R2. Here are two different minimal spanning sets for V :

{(1, 0), (0, 1)} and {(1, 0), (1, 1)}.

Many other examples abound.

2. TRUE. We have seven column vectors, and each of them belongs to R5. Therefore, the number of vectors present exceeds the number of components in those vectors, and hence the vectors must be linearly dependent.

3. FALSE. For instance, the 7 × 5 zero matrix, 0_{7×5}, does not have linearly independent columns.

4. TRUE. Any linear dependency within the subset is also a linear dependency within the original, larger set of vectors. Therefore, if the nonempty subset were linearly dependent, then the original set would also be linearly dependent. In other words, if the original set is linearly independent, then so is the nonempty subset.

5. TRUE. This is stated in Theorem 4.5.21.

6. TRUE. If we can write v = c1v1 + c2v2 + · · ·+ ckvk, then {v,v1,v2, . . . ,vk} is a linearly dependent set.

7. TRUE. This is a rephrasing of the statement in True-False Review Question 5 above.

8. FALSE. None of the vectors (1, 0), (0, 1), and (1, 1) in R2 is proportional to another, and yet they form a linearly dependent set of vectors.

9. FALSE. The illustration given in part (c) of Example 4.5.22 gives an excellent case-in-point here.

Problems:

1. {(1,−1), (1, 1)}. These vectors are elements of R2. Since there are two vectors, and the dimension of R2 is two, Corollary 4.5.15 states that the vectors will be linearly dependent if and only if det[1 1; −1 1] = 0. Now det[1 1; −1 1] = 2 ≠ 0. Consequently, the given vectors are linearly independent.


2. {(2,−1), (3, 2), (0, 1)}. These vectors are elements of R2, but since there are three vectors, the vectors are linearly dependent by Corollary 4.5.15. Let v1 = (2,−1), v2 = (3, 2), v3 = (0, 1). We now determine a dependency relationship. The condition

c1v1 + c2v2 + c3v3 = 0

requires 2c1 + 3c2 = 0 and −c1 + 2c2 + c3 = 0.

The RREF of the augmented matrix of this system is [1 0 −3/7 | 0; 0 1 2/7 | 0]. Hence the system has solution c1 = 3r, c2 = −2r, c3 = 7r. Consequently, 3v1 − 2v2 + 7v3 = 0.
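Dependency relations like this one can be double-checked mechanically. The sketch below (SymPy assumed available; variable names are mine) recovers the relation from the null space of the matrix whose columns are v1, v2, v3:

```python
from sympy import Matrix

# Columns are v1 = (2,-1), v2 = (3,2), v3 = (0,1) from Problem 2.
A = Matrix([[2, 3, 0],
            [-1, 2, 1]])

# Every dependency c1*v1 + c2*v2 + c3*v3 = 0 lives in the null space of A.
c = A.nullspace()[0]
c = 7 * c / c[2]          # rescale so that c3 = 7, matching the text
print(list(c))            # [3, -2, 7], i.e. 3*v1 - 2*v2 + 7*v3 = 0
assert A * c == Matrix([0, 0])
```

The same recipe applies to every dependency problem in this section: put the vectors in the columns of a matrix and read relations off `nullspace()`.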

3. {(1,−1, 0), (0, 1,−1), (1, 1, 1)}. These vectors are elements of R3. Since det[1 0 1; −1 1 1; 0 −1 1] = 3 ≠ 0, by Corollary 4.5.15 the vectors are linearly independent.

4. {(1, 2, 3), (1,−1, 2), (1,−4, 1)}. These vectors are elements of R3. Since det[1 1 1; 2 −1 −4; 3 2 1] = 0, by Corollary 4.5.15 the vectors are linearly dependent. Let v1 = (1, 2, 3), v2 = (1,−1, 2), v3 = (1,−4, 1). We determine a dependency relation. The condition c1v1 + c2v2 + c3v3 = 0 requires

c1 + c2 + c3 = 0, 2c1 − c2 − 4c3 = 0, 3c1 + 2c2 + c3 = 0.

The RREF of the augmented matrix of this system is [1 0 −1 | 0; 0 1 2 | 0; 0 0 0 | 0]. Hence c1 = r, c2 = −2r, c3 = r, and so v1 − 2v2 + v3 = 0.

5. Given {(−2, 4,−6), (3,−6, 9)}. The vectors are linearly dependent because 3(−2, 4,−6) + 2(3,−6, 9) = (0, 0, 0), which gives a linear dependency relation. Alternatively, let a, b ∈ R and observe that

a(−2, 4,−6) + b(3,−6, 9) = (0, 0, 0) =⇒ (−2a, 4a,−6a) + (3b,−6b, 9b) = (0, 0, 0).

The last equality results in the system:

−2a + 3b = 0
4a − 6b = 0
−6a + 9b = 0.

The RREF of the augmented matrix of this system is [1 −3/2 | 0; 0 0 | 0; 0 0 | 0], which implies that a = (3/2)b. Thus, the given set of vectors is linearly dependent.

6. {(1,−1, 2), (2, 1, 0)}. Let a, b ∈ R. Then a(1,−1, 2) + b(2, 1, 0) = (0, 0, 0) =⇒ (a,−a, 2a) + (2b, b, 0) = (0, 0, 0) =⇒ (a + 2b,−a + b, 2a) = (0, 0, 0). The last equality results in the system:

a + 2b = 0
−a + b = 0
2a = 0.

Since the only solution of the system is a = b = 0, it follows that the vectors are linearly independent.

7. {(−1, 1, 2), (0, 2,−1), (3, 1, 2), (−1,−1, 1)}. These vectors are elements of R3. Since there are four vectors, it follows from Corollary 4.5.15 that the vectors are linearly dependent. Let v1 = (−1, 1, 2), v2 = (0, 2,−1), v3 = (3, 1, 2), v4 = (−1,−1, 1). Then

c1v1 + c2v2 + c3v3 + c4v4 = 0

requires

−c1 + 3c3 − c4 = 0, c1 + 2c2 + c3 − c4 = 0, 2c1 − c2 + 2c3 + c4 = 0.

The RREF of the augmented matrix of this system is [1 0 0 2/5 | 0; 0 1 0 −3/5 | 0; 0 0 1 −1/5 | 0], so that c1 = −2r, c2 = 3r, c3 = r, c4 = 5r. Hence, −2v1 + 3v2 + v3 + 5v4 = 0.

8. {(1,−1, 2, 3), (2,−1, 1,−1), (−1, 1, 1, 1)}. Let a, b, c ∈ R. Then a(1,−1, 2, 3) + b(2,−1, 1,−1) + c(−1, 1, 1, 1) = (0, 0, 0, 0)
=⇒ (a,−a, 2a, 3a) + (2b,−b, b,−b) + (−c, c, c, c) = (0, 0, 0, 0)
=⇒ (a + 2b − c,−a − b + c, 2a + b + c, 3a − b + c) = (0, 0, 0, 0).

The last equality results in the system:

a + 2b − c = 0
−a − b + c = 0
2a + b + c = 0
3a − b + c = 0.

The RREF of the augmented matrix of this system is [1 0 0 | 0; 0 1 0 | 0; 0 0 1 | 0; 0 0 0 | 0]. Consequently, a = b = c = 0. Thus, the given set of vectors is linearly independent.

9. {(2,−1, 0, 1), (1, 0,−1, 2), (0, 3, 1, 2), (−1, 1, 2, 1)}. These vectors are elements of R4. By cofactor expansion along row 1, we obtain:

det[2 1 0 −1; −1 0 3 1; 0 −1 1 2; 1 2 2 1]
= 2 det[0 3 1; −1 1 2; 2 2 1] − det[−1 3 1; 0 1 2; 1 2 1] + det[−1 0 3; 0 −1 1; 1 2 2] = 21 ≠ 0.

Thus, it follows from Corollary 4.5.15 that the vectors are linearly independent.
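A numerical check of the cofactor expansion is easy; this sketch (SymPy assumed) simply evaluates the 4 × 4 determinant directly:

```python
from sympy import Matrix

# Rows match the determinant displayed in Problem 9.
A = Matrix([[2, 1, 0, -1],
            [-1, 0, 3, 1],
            [0, -1, 1, 2],
            [1, 2, 2, 1]])

print(A.det())  # 21 -- nonzero, so the four vectors are linearly independent
```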

10. Since det[1 4 7; 2 5 8; 3 6 9] = 0, the given set of vectors is linearly dependent in R3. Further, since the vectors are not collinear, it follows that span{v1,v2,v3} is a plane in 3-space.

11. (a) Since det[2 1 −3; −1 3 −9; 5 −4 12] = 0, the given set of vectors is linearly dependent.

(b) By inspection, we see that v3 = −3v2. Hence v2 and v3 are collinear, and therefore span{v2,v3} is the line through the origin that has the direction of v2. Further, since v1 is not proportional to either of these vectors, it does not lie along the same line; hence v1 is not in span{v2,v3}.

12. det[1 0 1; 1 2 k; k k 6] = det[1 0 1; 0 2 k − 1; 0 k 6 − k] = det[2 k − 1; k 6 − k] = (3 − k)(k + 4). Now, the determinant is zero when k = 3 or k = −4. Consequently, by Corollary 4.5.15, the vectors are linearly dependent if and only if k = 3 or k = −4.
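The factored determinant can be verified symbolically; a sketch assuming SymPy:

```python
from sympy import Matrix, symbols, expand, solve

k = symbols('k')
A = Matrix([[1, 0, 1],
            [1, 2, k],
            [k, k, 6]])

d = A.det()
# Agrees with the hand computation (3 - k)(k + 4).
assert expand(d - (3 - k)*(k + 4)) == 0
# The columns are dependent exactly at the roots k = 3 and k = -4.
assert set(solve(d, k)) == {3, -4}
```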

13. {(1, 0, 1, k), (−1, 0, k, 1), (2, 0, 1, 3)}. These vectors are elements of R4. Let a, b, c ∈ R. Then a(1, 0, 1, k) + b(−1, 0, k, 1) + c(2, 0, 1, 3) = (0, 0, 0, 0)
=⇒ (a, 0, a, ka) + (−b, 0, kb, b) + (2c, 0, c, 3c) = (0, 0, 0, 0)
=⇒ (a − b + 2c, 0, a + kb + c, ka + b + 3c) = (0, 0, 0, 0).

The last equality results in the system:

a − b + 2c = 0
a + kb + c = 0
ka + b + 3c = 0.

Evaluating the determinant of the coefficient matrix, we obtain

det[1 −1 2; 1 k 1; k 1 3] = det[1 0 0; 1 k + 1 −1; k k + 1 3 − 2k] = 2(k + 1)(2 − k).

Consequently, the system has only the trivial solution, and hence the given set of vectors is linearly independent, if and only if k ≠ 2 and k ≠ −1.

14. {(1, 1, 0,−1), (1, k, 1, 1), (2, 1, k, 1), (−1, 1, 1, k)}. These vectors are elements of R4. Let a, b, c, d ∈ R. Then a(1, 1, 0,−1) + b(1, k, 1, 1) + c(2, 1, k, 1) + d(−1, 1, 1, k) = (0, 0, 0, 0)
=⇒ (a, a, 0,−a) + (b, kb, b, b) + (2c, c, kc, c) + (−d, d, d, kd) = (0, 0, 0, 0)
=⇒ (a + b + 2c − d, a + kb + c + d, b + kc + d,−a + b + c + kd) = (0, 0, 0, 0).

The last equality results in the system:

a + b + 2c − d = 0
a + kb + c + d = 0
b + kc + d = 0
−a + b + c + kd = 0.

By Corollary 3.2.5, this system has only the trivial solution if and only if the determinant of the coefficient matrix is nonzero. Evaluating the determinant of the coefficient matrix, we obtain:

det[1 1 2 −1; 1 k 1 1; 0 1 k 1; −1 1 1 k] = det[1 1 2 −1; 0 k − 1 −1 2; 0 1 k 1; 0 2 3 k − 1]
= det[k − 1 −1 2; 1 k 1; 2 3 k − 1] = det[0, −k^2 + k − 1, 3 − k; 1, k, 1; 0, 3 − 2k, k − 3]
= det[k^2 − k + 1, k − 3; 3 − 2k, k − 3] = (k − 3) det[k^2 − k + 1, 1; 3 − 2k, 1] = (k − 3)(k − 1)(k + 2).

For the original set of vectors to be linearly independent, we need a = b = c = d = 0. This condition holds provided that k ≠ 3, 1, −2.
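The k-dependent 4 × 4 determinant is tedious by hand, so a symbolic check is reassuring (SymPy assumed):

```python
from sympy import Matrix, symbols, expand

k = symbols('k')
A = Matrix([[1, 1, 2, -1],
            [1, k, 1, 1],
            [0, 1, k, 1],
            [-1, 1, 1, k]])

# Confirm det(A) = (k - 3)(k - 1)(k + 2), so independence fails at k = 3, 1, -2.
assert expand(A.det() - (k - 3)*(k - 1)*(k + 2)) == 0
```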

15. Let a, b, c ∈ R. Then

a[1 1; 0 1] + b[2 −1; 0 1] + c[3 6; 0 4] = [0 0; 0 0]
=⇒ [a + 2b + 3c, a − b + 6c; 0, a + b + 4c] = [0 0; 0 0].

The last equation results in the system:

a + 2b + 3c = 0
a − b + 6c = 0
a + b + 4c = 0.

The RREF of the augmented matrix of this system is [1 0 5 | 0; 0 1 −1 | 0; 0 0 0 | 0], which implies that the system has an infinite number of solutions. Consequently, the given matrices are linearly dependent.


16. Let a, b ∈ R. Then

a[2 −1; 3 4] + b[−1 2; 1 3] = [0 0; 0 0]
=⇒ [2a − b, −a + 2b; 3a + b, 4a + 3b] = [0 0; 0 0].

The last equation results in the system:

2a − b = 0
−a + 2b = 0
3a + b = 0
4a + 3b = 0.

This system has only the trivial solution a = b = 0; thus the given matrices are linearly independent in M2(R).

17. Let a, b, c ∈ R. Then

a[1 0; 1 2] + b[−1 1; 2 1] + c[2 1; 5 7] = [0 0; 0 0]
=⇒ [a − b + 2c, b + c; a + 2b + 5c, 2a + b + 7c] = [0 0; 0 0].

The last equation results in the system:

a − b + 2c = 0
b + c = 0
a + 2b + 5c = 0
2a + b + 7c = 0.

The RREF of the augmented matrix of this system is [1 0 3 | 0; 0 1 1 | 0; 0 0 0 | 0; 0 0 0 | 0], which implies that the system has an infinite number of solutions. Consequently, the given matrices are linearly dependent.
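Independence questions about 2 × 2 matrices reduce to questions about vectors in R4 by flattening. A sketch for Problem 17 (SymPy assumed; row-major flattening is my arbitrary but consistent choice):

```python
from sympy import Matrix

A1 = Matrix([[1, 0], [1, 2]])
A2 = Matrix([[-1, 1], [2, 1]])
A3 = Matrix([[2, 1], [5, 7]])

# Flatten each matrix into a column of a 4x3 matrix.
M = Matrix.hstack(*(X.reshape(4, 1) for X in (A1, A2, A3)))

assert M.rank() == 2            # rank < 3 columns: linearly dependent
assert A3 == 3*A1 + A2          # one explicit dependency relation
```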

18. Let a, b ∈ R. Then ap1(x) + bp2(x) = 0 =⇒ a(1 − x) + b(1 + x) = 0 =⇒ (a + b) + (−a + b)x = 0.

Equating like coefficients, we obtain the system a + b = 0, −a + b = 0. Since the only solution to this system is a = b = 0, it follows that the given vectors are linearly independent.

19. Let a, b ∈ R. Then ap1(x) + bp2(x) = 0 =⇒ a(2 + 3x) + b(4 + 6x) = 0 =⇒ (2a + 4b) + (3a + 6b)x = 0.

Equating like coefficients, we obtain the system 2a + 4b = 0, 3a + 6b = 0. The RREF of the augmented matrix of this system is [1 2 | 0; 0 0 | 0], which implies that the system has an infinite number of solutions. Thus, the given vectors are linearly dependent.

20. Let c1, c2 ∈ R. Then c1p1(x) + c2p2(x) = 0 =⇒ c1(a + bx) + c2(c + dx) = 0 =⇒ (ac1 + cc2) + (bc1 + dc2)x = 0.

Equating like coefficients, we obtain the system ac1 + cc2 = 0, bc1 + dc2 = 0. The determinant of the matrix of coefficients is det[a c; b d] = ad − bc. The system has only the trivial solution if and only if this determinant is nonzero; hence p1(x) and p2(x) are linearly independent if and only if ad − bc ≠ 0.
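Problem 20 packages Problems 18 and 19 into one criterion: a + bx and c + dx are independent exactly when ad − bc ≠ 0. A small sketch (the helper name is mine):

```python
def independent(p, q):
    """Each polynomial a + b*x is given as a pair (a, b); test ad - bc != 0."""
    (a, b), (c, d) = p, q
    return a*d - b*c != 0

assert independent((1, -1), (1, 1))       # Problem 18: 1 - x and 1 + x
assert not independent((2, 3), (4, 6))    # Problem 19: 2 + 3x and 4 + 6x
```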

21. Since cos 2x = cos^2 x − sin^2 x, we have f1(x) = f3(x) − f2(x), so it follows that f1, f2, and f3 are linearly dependent in C^∞(−∞,∞).

22. Let v1 = (1, 2, 3), v2 = (−3, 4, 5), v3 = (1,−4/3,−5/3). By inspection, we see that v2 = −3v3. Further, since v1 and v2 are not proportional, they are linearly independent. Consequently, {v1,v2} is a linearly independent set of vectors and span{v1,v2} = span{v1,v2,v3}.

23. Let v1 = (1, 2,−1), v2 = (3, 1, 5), v3 = (0, 0, 0), v4 = (−1, 2, 3). Since v3 = 0, it is certainly true that span{v1,v2,v4} = span{v1,v2,v3,v4}. Further, since det[v1,v2,v4] = −42 ≠ 0, {v1,v2,v4} is a linearly independent set.


24. Since we have four vectors in R3, the given set is linearly dependent. We could determine the specific linear dependency between the vectors to find a linearly independent subset, but in this case, if we just take any three of the vectors, say (1,−1, 1), (1,−3, 1), (3, 1, 2), then det[1 1 3; −1 −3 1; 1 1 2] = 2 ≠ 0, so that these vectors are linearly independent. Consequently, span{(1,−1, 1), (1,−3, 1), (3, 1, 2)} = span{(1, 1, 1), (1,−1, 1), (1,−3, 1), (3, 1, 2)}.

25. Let v1 = (1, 1,−1, 1), v2 = (2,−1, 3, 1), v3 = (1, 1, 2, 1), v4 = (2,−1, 2, 1). Since det[1 2 1 2; 1 −1 1 −1; −1 3 2 2; 1 1 1 1] = 0, the set {v1,v2,v3,v4} is linearly dependent. We now determine the dependency relationship. The RREF of the augmented matrix corresponding to the system

c1(1, 1,−1, 1) + c2(2,−1, 3, 1) + c3(1, 1, 2, 1) + c4(2,−1, 2, 1) = (0, 0, 0, 0)

is [1 0 0 1/3 | 0; 0 1 0 1 | 0; 0 0 1 −1/3 | 0; 0 0 0 0 | 0], so that c1 = −r, c2 = −3r, c3 = r, c4 = 3r, where r is a free variable. It follows that a linear dependency relationship between the given set of vectors is

−v1 − 3v2 + v3 + 3v4 = 0, so that v1 = −3v2 + v3 + 3v4.

Consequently, span{v2,v3,v4} = span{v1,v2,v3,v4}, and {v2,v3,v4} is a linearly independent set.
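The relation v1 = −3v2 + v3 + 3v4 and the independence of {v2, v3, v4} can both be confirmed directly (SymPy assumed):

```python
from sympy import Matrix

v1 = Matrix([1, 1, -1, 1]); v2 = Matrix([2, -1, 3, 1])
v3 = Matrix([1, 1, 2, 1]);  v4 = Matrix([2, -1, 2, 1])

assert Matrix.hstack(v1, v2, v3, v4).det() == 0   # the four vectors are dependent
assert v1 == -3*v2 + v3 + 3*v4                    # the relation found above
assert Matrix.hstack(v2, v3, v4).rank() == 3      # {v2, v3, v4} is independent
```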

26. Let A1 = [1 2; 3 4], A2 = [−1 2; 5 7], A3 = [3 2; 1 1]. Then

c1A1 + c2A2 + c3A3 = 0_2

requires that

c1 − c2 + 3c3 = 0, 2c1 + 2c2 + 2c3 = 0, 3c1 + 5c2 + c3 = 0, 4c1 + 7c2 + c3 = 0.

The RREF of the augmented matrix of this system is [1 0 2 | 0; 0 1 −1 | 0; 0 0 0 | 0; 0 0 0 | 0]. Consequently, the matrices are linearly dependent. Solving the system gives c1 = −2r, c2 = c3 = r. Hence, a linear dependency relationship is −2A1 + A2 + A3 = 0_2.

27. We first determine whether the given set of polynomials is linearly dependent. Let

p1(x) = 2 − 5x, p2(x) = 3 + 7x, p3(x) = 4 − x.

Then c1(2 − 5x) + c2(3 + 7x) + c3(4 − x) = 0 requires

2c1 + 3c2 + 4c3 = 0 and −5c1 + 7c2 − c3 = 0.

This system has solution (−31r,−18r, 29r), where r is a free variable. Consequently, the given set of polynomials is linearly dependent, and a linear dependency relationship is

−31p1(x) − 18p2(x) + 29p3(x) = 0, or equivalently, p3(x) = (1/29)[31p1(x) + 18p2(x)].

Hence, the linearly independent set of vectors {2 − 5x, 3 + 7x} spans the same subspace of P1 as that spanned by {2 − 5x, 3 + 7x, 4 − x}.

28. We first determine whether the given set of polynomials is linearly dependent. Let

p1(x) = 2 + x^2, p2(x) = 4 − 2x + 3x^2, p3(x) = 1 + x.

Then c1(2 + x^2) + c2(4 − 2x + 3x^2) + c3(1 + x) = 0 leads to the system

2c1 + 4c2 + c3 = 0, −2c2 + c3 = 0, c1 + 3c2 = 0.

This system has solution (−3r, r, 2r), where r is a free variable. Consequently, the given set of vectors is linearly dependent, and a specific linear relationship is

−3p1(x) + p2(x) + 2p3(x) = 0, or equivalently, p2(x) = 3p1(x) − 2p3(x).

Hence, the linearly independent set of vectors {2 + x^2, 1 + x} spans the same subspace of P2 as that spanned by the given set of vectors.

29. W[f1, f2, f3](x) = det[1, x, x^2; 0, 1, 2x; 0, 0, 2] = 2. Since W[f1, f2, f3](x) ≠ 0 on I, it follows that the functions are linearly independent on I.

30.

W[f1, f2, f3](x) = det[sin x, cos x, tan x; cos x, −sin x, sec^2 x; −sin x, −cos x, 2 tan x sec^2 x]
= det[sin x, cos x, tan x; cos x, −sin x, sec^2 x; 0, 0, tan x(2 sec^2 x + 1)] = −tan x(2 sec^2 x + 1).

W[f1, f2, f3](x) is not identically zero on I, so the functions are linearly independent by Theorem 4.5.21.

31. W[f1, f2, f3](x) = det[1, 3x, x^2 − 1; 0, 3, 2x; 0, 0, 2] = det[3, 2x; 0, 2] = 6 ≠ 0 on I. Consequently, {f1, f2, f3} is a linearly independent set on I by Theorem 4.5.21.

32. W[f1, f2, f3](x) = det[e^(2x), e^(3x), e^(−x); 2e^(2x), 3e^(3x), −e^(−x); 4e^(2x), 9e^(3x), e^(−x)] = e^(4x) det[1, 1, 1; 2, 3, −1; 4, 9, 1] = e^(4x) det[1, 0, 0; 2, 1, −3; 4, 5, −3] = 12e^(4x). Since the Wronskian is never zero, the functions are linearly independent on (−∞,∞).
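SymPy ships a `wronskian` helper, which makes checks like this one-liners; a sketch for Problem 32:

```python
from sympy import exp, symbols, simplify, wronskian

x = symbols('x')
W = wronskian([exp(2*x), exp(3*x), exp(-x)], x)

# Matches the hand computation 12*e^(4x), which never vanishes.
assert simplify(W - 12*exp(4*x)) == 0
```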


33. On [0,∞), f2 = 7f1, so that the functions are linearly dependent on this interval. Therefore W[f1, f2](x) = 0 for x ∈ [0,∞). However, on (−∞, 0), W[f1, f2](x) = det[3x^3, 7x^2; 9x^2, 14x] = −21x^4 ≠ 0. Since the Wronskian is not identically zero on (−∞,∞), the functions are linearly independent on that interval.

34. W[f1, f2, f3](x) = det[1, x, 2x − 1; 0, 1, 2; 0, 0, 0] = 0. By inspection, we see that f3 = 2f2 − f1, so that the functions are linearly dependent on (−∞,∞).

35. We show that the Wronskian (computed here by cofactor expansion along the first row) is identically zero:

det[e^x, e^(−x), cosh x; e^x, −e^(−x), sinh x; e^x, e^(−x), cosh x] = −(cosh x + sinh x) − (cosh x − sinh x) + 2 cosh x = 0.

Thus, the Wronskian is identically zero on (−∞,∞). Furthermore, {f1, f2, f3} is a linearly dependent set because

−(1/2)f1(x) − (1/2)f2(x) + f3(x) = −(1/2)e^x − (1/2)e^(−x) + cosh x = −(1/2)e^x − (1/2)e^(−x) + (e^x + e^(−x))/2 = 0 for all x ∈ I.

36. We show that the Wronskian is identically zero for f1(x) = ax^3 and f2(x) = bx^3, which covers the functions in this problem as a special case:

det[ax^3, bx^3; 3ax^2, 3bx^2] = 3abx^5 − 3abx^5 = 0.

Next, let a, b ∈ R. If x ≥ 0, then

af1(x) + bf2(x) = 0 =⇒ 2ax^3 + 5bx^3 = 0 =⇒ (2a + 5b)x^3 = 0 =⇒ 2a + 5b = 0.

If x ≤ 0, then

af1(x) + bf2(x) = 0 =⇒ 2ax^3 − 3bx^3 = 0 =⇒ (2a − 3b)x^3 = 0 =⇒ 2a − 3b = 0.

Solving the resulting system, we obtain a = b = 0; therefore, {f1, f2} is a linearly independent set of vectors on (−∞,∞).
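Problem 36 is the classic warning that a vanishing Wronskian does not force dependence. A sketch (SymPy assumed; f1 = 2x^3 and the piecewise f2 are read off from the computation above):

```python
from sympy import symbols, Piecewise, diff, piecewise_fold, solve

x = symbols('x', real=True)
f1 = 2*x**3
f2 = Piecewise((5*x**3, x >= 0), (-3*x**3, True))

# Wronskian f1*f2' - f1'*f2 vanishes at every sample point...
W = piecewise_fold(f1*diff(f2, x) - diff(f1, x)*f2)
assert all(W.subs(x, t) == 0 for t in (-2, -1, 1, 2))

# ...yet a*f1 + b*f2 = 0 at x = 1 and x = -1 forces a = b = 0.
a, b = symbols('a b')
eqs = [a*f1.subs(x, t) + b*f2.subs(x, t) for t in (1, -1)]
assert solve(eqs, [a, b]) == {a: 0, b: 0}
```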

37. (a) When x > 0, f′2(x) = 1, and when x < 0, f′2(x) = −1; thus f′2(0) does not exist, which implies that f2 ∉ C^1(−∞,∞).

(b) Let a, b ∈ R. On the interval (−∞, 0), ax + b(−x) = 0 has more than the trivial solution for a and b. Thus, {f1, f2} is a linearly dependent set of vectors on (−∞, 0). On the interval [0,∞), ax + bx = 0 =⇒ a + b = 0, which has more than the trivial solution for a and b. Therefore {f1, f2} is a linearly dependent set of vectors on [0,∞).

On the interval (−∞,∞), a and b must satisfy both ax + b(−x) = 0 and ax + bx = 0, that is, a − b = 0 and a + b = 0. Since this system has only a = b = 0 as its solution, {f1, f2} is a linearly independent set of vectors on (−∞,∞).


[Figure 0.0.64: Figure for Exercise 37 — the graphs of y = f1(x) = x and y = f2(x) = |x|, which satisfy f1(x) = f2(x) on (0,∞) and f1(x) = −f2(x) on (−∞, 0).]

38. Let a, b ∈ R. Then af1(x) + bf2(x) = 0 requires

ax + bx = 0 if x ≠ 0, and a(0) + b(1) = 0 if x = 0,

that is, (a + b)x = 0 and b = 0. Hence a = 0 and b = 0, so {f1, f2} is a linearly independent set on I.

39. Let a, b, c ∈ R and x ∈ (−∞,∞). Then af1(x) + bf2(x) + cf3(x) = 0 requires

a(x − 1) + b(2x) + c(3) = 0 for x ≥ 1 and 2a(x − 1) + b(2x) + c(3) = 0 for x < 1,

that is,

(a + 2b)x + (−a + 3c) = 0 and (2a + 2b)x + (−2a + 3c) = 0,

so that a + 2b = 0, −a + 3c = 0, 2a + 2b = 0, and −2a + 3c = 0. Since the only solution to this system of equations is a = b = c = 0, it follows that the given functions are linearly independent on (−∞,∞). The domain may be divided into three types of intervals: (1) interval subsets of (−∞, 1); (2) interval subsets of [1,∞); (3) intervals containing 1 where 1 is not an endpoint.

For intervals of type (3): these are treated as above [with domain (−∞,∞)]: the functions are linearly independent.

For intervals of type (1): a(2(x − 1)) + b(2x) + c(3) = 0 =⇒ (2a + 2b)x + (−2a + 3c) = 0 =⇒ 2a + 2b = 0 and −2a + 3c = 0. Since this system has three unknowns but only two equations, its solution is not unique; hence on intervals of type (1) the functions are linearly dependent.

For intervals of type (2): a(x − 1) + b(2x) + c(3) = 0 =⇒ a + 2b = 0 and −a + 3c = 0. As in the last case, this system has three unknowns but only two equations, so on intervals of type (2) the functions are linearly dependent.

40. (a) Let f0(x) = 1, f1(x) = x, f2(x) = x^2, f3(x) = x^3. Then

W[f0, f1, f2, f3](x) = det[1, x, x^2, x^3; 0, 1, 2x, 3x^2; 0, 0, 2, 6x; 0, 0, 0, 6] = 12 ≠ 0.

Hence, {f0, f1, f2, f3} is linearly independent on any interval.


(b) W[f0, f1, f2, . . . , fn](x) = det[1, x, x^2, . . . , x^n; 0, 1, 2x, . . . , nx^(n−1); 0, 0, 2, . . . , n(n − 1)x^(n−2); . . . ; 0, 0, 0, . . . , n!].

The matrix corresponding to this determinant is upper triangular, so the value of the determinant is the product of the diagonal entries: W[f0, f1, f2, . . . , fn](x) = 1 · 1 · 2 · 6 · 24 · · · n!, which is nonzero regardless of the value of x. Consequently, the functions are linearly independent on any interval.
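The factorial pattern on the diagonal can be checked for any particular n; a sketch with n = 5 (SymPy assumed):

```python
from sympy import symbols, wronskian, factorial, prod, simplify

x = symbols('x')
n = 5
W = wronskian([x**j for j in range(n + 1)], x)

# W equals 0!*1!*2!*...*n!, a nonzero constant, on any interval.
assert simplify(W - prod(factorial(j) for j in range(n + 1))) == 0
```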

41. (a) Let f1(x) = e^(r1 x), f2(x) = e^(r2 x), and f3(x) = e^(r3 x). Then

W[f1, f2, f3](x) = det[e^(r1 x), e^(r2 x), e^(r3 x); r1 e^(r1 x), r2 e^(r2 x), r3 e^(r3 x); r1^2 e^(r1 x), r2^2 e^(r2 x), r3^2 e^(r3 x)]
= e^(r1 x) e^(r2 x) e^(r3 x) det[1, 1, 1; r1, r2, r3; r1^2, r2^2, r3^2]
= e^((r1+r2+r3)x) (r3 − r1)(r3 − r2)(r2 − r1).

If ri ≠ rj for i ≠ j, then W[f1, f2, f3](x) is never zero, and hence the functions are linearly independent on any interval. If, on the other hand, ri = rj with i ≠ j, then fi − fj = 0, so that the functions are linearly dependent. Thus, r1, r2, r3 must all be different in order that f1, f2, f3 be linearly independent.

(b) Similarly,

W[f1, f2, . . . , fn](x) = det[e^(r1 x), e^(r2 x), . . . , e^(rn x); r1 e^(r1 x), r2 e^(r2 x), . . . , rn e^(rn x); . . . ; r1^(n−1) e^(r1 x), r2^(n−1) e^(r2 x), . . . , rn^(n−1) e^(rn x)]
= e^(r1 x) e^(r2 x) · · · e^(rn x) det[1, 1, . . . , 1; r1, r2, . . . , rn; r1^2, r2^2, . . . , rn^2; . . . ; r1^(n−1), r2^(n−1), . . . , rn^(n−1)]
= e^(r1 x) e^(r2 x) · · · e^(rn x) V(r1, r2, . . . , rn)
= e^(r1 x) e^(r2 x) · · · e^(rn x) ∏_{1≤i<m≤n} (rm − ri).

Now from the last equality, we see that if ri ≠ rj for i ≠ j, where i, j ∈ {1, 2, . . . , n}, then W[f1, f2, . . . , fn](x) is never zero, and hence the functions are linearly independent on any interval. If, on the other hand, ri = rj with i ≠ j, then fi − fj = 0, so that the functions are linearly dependent. Thus {f1, f2, . . . , fn} is a linearly independent set if and only if the ri, i ∈ {1, 2, . . . , n}, are all distinct.

(Note: V(r1, r2, . . . , rn) is the n × n Vandermonde determinant. See Section 3.3, Problem 21.)
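The Vandermonde identity used in the last step can be verified symbolically for a small n; a sketch with n = 4 (SymPy assumed):

```python
from sympy import Matrix, symbols, prod

r = symbols('r1:5')                       # r1, r2, r3, r4
n = len(r)
V = Matrix(n, n, lambda i, j: r[j]**i)    # row i holds the i-th powers

vandermonde = prod(r[m] - r[i] for m in range(n) for i in range(m))
assert (V.det() - vandermonde).expand() == 0
```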

42. Let a, b ∈ R. Assume that av + bw = 0. Then a(αv1 + v2) + b(v1 + αv2) = 0, which implies that (αa + b)v1 + (a + bα)v2 = 0. Now since it is given that v1 and v2 are linearly independent, αa + b = 0 and a + bα = 0. This system has only the trivial solution for a and b if and only if det[α, 1; 1, α] ≠ 0, that is, if and only if α^2 − 1 ≠ 0, or α ≠ ±1. Therefore, the vectors are linearly independent provided that α ≠ ±1.

43. It is given that v1 and v2 are linearly independent. Let u1 = a1v1 + b1v2, u2 = a2v1 + b2v2, and u3 = a3v1 + b3v2, where a1, a2, a3, b1, b2, b3 ∈ R. Let c1, c2, c3 ∈ R. Then

c1u1 + c2u2 + c3u3 = 0
=⇒ c1(a1v1 + b1v2) + c2(a2v1 + b2v2) + c3(a3v1 + b3v2) = 0
=⇒ (c1a1 + c2a2 + c3a3)v1 + (c1b1 + c2b2 + c3b3)v2 = 0.

Now since v1 and v2 are linearly independent, c1a1 + c2a2 + c3a3 = 0 and c1b1 + c2b2 + c3b3 = 0. There are an infinite number of solutions to this homogeneous system, since there are three unknowns but only two equations. Hence, {u1,u2,u3} is a linearly dependent set of vectors.

44. Given that {v1,v2, . . . ,vm} is a linearly independent set of vectors in a vector space V , and

uk = Σ_{i=1}^{m} a_{ik} v_i,  k ∈ {1, 2, . . . , n}. (44.1)

(a) Let ck ∈ R, k ∈ {1, 2, . . . , n}. Using system (44.1) and Σ_{k=1}^{n} ck uk = 0, we obtain:

Σ_{k=1}^{n} ck (Σ_{i=1}^{m} a_{ik} v_i) = 0 ⇐⇒ Σ_{i=1}^{m} (Σ_{k=1}^{n} a_{ik} ck) v_i = 0.

Since the vi, i ∈ {1, 2, . . . ,m}, are linearly independent,

Σ_{k=1}^{n} a_{ik} ck = 0,  1 ≤ i ≤ m. (44.2)

But this is a system of m equations in the n unknowns c1, c2, . . . , cn. Since n > m, the system has more unknowns than equations, and so has nontrivial solutions. Thus, {u1,u2, . . . ,un} is a linearly dependent set.

(b) If m = n, then the system (44.2) has only the trivial solution ⇐⇒ the coefficient matrix of the system is invertible ⇐⇒ det[aij] ≠ 0.

(c) If n < m, then the homogeneous system (44.2) has just the trivial solution if and only if rank(A) = n. Recall that for a homogeneous system, rank(A#) = rank(A).

(d) Corollary 4.5.15.

45. We assume that

c1(Av1) + c2(Av2) + · · · + cn(Avn) = 0.

Our aim is to show that c1 = c2 = · · · = cn = 0. We manipulate the left side of the above equation as follows:

c1(Av1) + c2(Av2) + · · · + cn(Avn) = A(c1v1) + A(c2v2) + · · · + A(cnvn) = A(c1v1 + c2v2 + · · · + cnvn) = 0.

Since A is invertible, we can left multiply the last equation by A^(−1) to obtain

c1v1 + c2v2 + · · · + cnvn = 0.

Since {v1,v2, . . . ,vn} is linearly independent, we can now conclude directly that c1 = c2 = · · · = cn = 0, as required.


46. Assume that c1v1 + c2v2 + c3v3 = 0. We must show that c1 = c2 = c3 = 0. Let us suppose for the moment that c3 ≠ 0. In that case, we can solve the above equation for v3:

v3 = −(c1/c3)v1 − (c2/c3)v2.

However, this contradicts the assumption that v3 does not belong to span{v1,v2}. Therefore, we conclude that c3 = 0. Our starting equation therefore reduces to c1v1 + c2v2 = 0. Now the assumption that {v1,v2} is linearly independent shows that c1 = c2 = 0. Therefore, c1 = c2 = c3 = 0, as required.

47. Assume that c1v1 + c2v2 + · · · + ckvk + ck+1vk+1 = 0. We must show that c1 = c2 = · · · = ck+1 = 0. Let us suppose for the moment that ck+1 ≠ 0. In that case, we can solve the above equation for vk+1:

vk+1 = −(c1/ck+1)v1 − (c2/ck+1)v2 − · · · − (ck/ck+1)vk.

However, this contradicts the assumption that vk+1 does not belong to span{v1,v2, . . . ,vk}. Therefore, we conclude that ck+1 = 0. Our starting equation therefore reduces to c1v1 + c2v2 + · · · + ckvk = 0. Now the assumption that {v1,v2, . . . ,vk} is linearly independent shows that c1 = c2 = · · · = ck = 0. Therefore, c1 = c2 = · · · = ck = ck+1 = 0, as required.

48. Let {v1,v2, . . . ,vk} be a set of vectors with k ≥ 2. Suppose that vk can be expressed as a linear combination of {v1,v2, . . . ,vk−1}. We claim that

span{v1,v2, . . . ,vk} = span{v1,v2, . . . ,vk−1}.

Since every vector belonging to span{v1,v2, . . . ,vk−1} evidently belongs to span{v1,v2, . . . ,vk}, we focus on showing that every vector in span{v1,v2, . . . ,vk} also belongs to span{v1,v2, . . . ,vk−1}.

Let v ∈ span{v1,v2, . . . ,vk}. We may therefore write

v = c1v1 + c2v2 + · · · + ckvk.

By assumption, we may write vk = d1v1 + d2v2 + · · · + dk−1vk−1. Therefore, we obtain

v = c1v1 + c2v2 + · · · + ckvk
  = c1v1 + c2v2 + · · · + ck−1vk−1 + ck(d1v1 + d2v2 + · · · + dk−1vk−1)
  = (c1 + ckd1)v1 + (c2 + ckd2)v2 + · · · + (ck−1 + ckdk−1)vk−1 ∈ span{v1,v2, . . . ,vk−1}.

This shows that every vector belonging to span{v1,v2, . . . ,vk} also belongs to span{v1,v2, . . . ,vk−1}, as needed.

49. We first prove part 1 of Proposition 4.5.7. Suppose that we have a set {u,v} of two vectors in a vector space V . If {u,v} is linearly dependent, then we have

cu + dv = 0,

where c and d are not both zero. Without loss of generality, suppose that c ≠ 0. Then we have

u = −(d/c)v,

so that u and v are proportional. Conversely, if u and v are proportional, then v = cu for some constant c. Thus, cu − v = 0, which shows that {u,v} is linearly dependent.


For part 2 of Proposition 4.5.7, suppose the zero vector 0 belongs to a set S of vectors in a vector space V . Then 1 · 0 = 0 is a nontrivial linear dependency among the vectors in S, and therefore S is linearly dependent.

50. Suppose that {v1,v2, . . . ,vk} spans V and let v be any vector in V . By assumption, we can write

v = c1v1 + c2v2 + · · ·+ ckvk,

for some constants c1, c2, . . . , ck. Therefore,

c1v1 + c2v2 + · · ·+ ckvk − v = 0

is a nontrivial linear dependency among the vectors in {v,v1,v2, . . . ,vk}. Thus, {v,v1,v2, . . . ,vk} is linearly dependent.

51. Let S = {p1, p2, . . . , pk} and assume without loss of generality that the polynomials are listed in decreasing order by degree:

deg(p1) > deg(p2) > · · · > deg(pk).

To show that S is linearly independent, assume that

c1p1 + c2p2 + · · · + ckpk = 0.

We wish to show that c1 = c2 = · · · = ck = 0. Each coefficient of the polynomial on the left side of the above equation must be zero, since the right-hand side is the zero polynomial. Since p1 has the highest degree, none of the terms c2p2, c3p3, . . . , ckpk can cancel the leading term of c1p1. Therefore, we conclude that c1 = 0. Thus, we now have

c2p2 + c3p3 + · · · + ckpk = 0,

and we can repeat this argument to show successively that c2 = c3 = · · · = ck = 0.

Solutions to Section 4.6

1. FALSE. It is not enough that S spans V . It must also be the case that S is linearly independent.

2. FALSE. For example, R2 is not a subspace of R3, since R2 is not even a subset of R3.

3. TRUE. Any set of two non-proportional vectors in R2 will form a basis for R2.

4. FALSE. We have dim[Pn] = n + 1 and dim[Rn] = n.

5. FALSE. For example, if V = R2, then the set S = {(1, 0), (2, 0), (3, 0)}, consisting of 3 > 2 vectors, fails to span V , a 2-dimensional vector space.

6. TRUE. We have dim[P3] = 4, and so any set of more than four vectors in P3 must be linearly dependent (a maximal linearly independent set in a 4-dimensional vector space consists of four vectors).

7. FALSE. For instance, the two vectors 1 + x and 2 + 2x in P3 are linearly dependent.

8. TRUE. Since M3(R) is 9-dimensional, any set of 10 vectors in this vector space must be linearly dependent by Theorem 4.6.4.

9. FALSE. Only linearly independent sets with fewer than n vectors can be extended to a basis for V .

10. TRUE. We can build such a subset by choosing vectors from the set as follows. Choose v1 to be any vector in the set. Now choose v2 in the set such that v2 ∉ span{v1}. Next, choose v3 in the set such that v3 ∉ span{v1,v2}. Proceed in this manner until it is no longer possible to find a vector in the set that is not spanned by the collection of previously chosen vectors. This point will occur eventually, since V is finite-dimensional. Moreover, the chosen vectors form a linearly independent set, since each vi is chosen from outside span{v1,v2, . . . ,vi−1}. Thus, the set we obtain in this way is a linearly independent set of vectors that spans V , and hence forms a basis for V .

11. FALSE. The set of all 3 × 3 upper triangular matrices forms a 6-dimensional subspace of M3(R), not a 3-dimensional subspace. One basis is given by {E11, E12, E13, E22, E23, E33}.

Problems:

1. dim[R2] = 2. There are two vectors, so if they are to form a basis for R2, they need to be linearly independent. Since det[1 −1; 1 1] = 2 ≠ 0, the vectors are linearly independent, and hence they form a basis for R2.

2. dim[R3] = 3. There are three vectors, so if they are to form a basis for R3, they need to be linearly independent. Since det[1 3 1; 2 −1 1; 1 2 −1] = 13 ≠ 0, the vectors are linearly independent, and hence they form a basis for R3.

3. dim[R3] = 3. There are three vectors, so if they are to form a basis for R3, they need to be linearly independent. Since det[1 2 3; −1 5 11; 1 −2 −5] = 0, the vectors are linearly dependent, and hence they do not form a basis for R3.

4. dim[R4] = 4. We need 4 linearly independent vectors in order to span R4. However, there are only 3vectors in this set. Thus, the vectors cannot be a basis for R4.

5. dim[R4] = 4. There are four vectors, so if they are to form a basis for R4, they need to be linearly independent:

det[1 2 −1 2; 1 1 1 −1; 0 3 1 1; 2 −1 −2 2] = det[1 2 −1 2; 0 −1 2 −3; 0 3 1 1; 0 −5 0 −2] = det[−1 2 −3; 3 1 1; −5 0 −2] = det[−7 0 −5; 3 1 1; −5 0 −2] = −11.

Since this determinant is nonzero, the given vectors are linearly independent. Consequently, they form a basis for R4.

6. dim[R4] = 4. There are four vectors, so if they are to form a basis for R4, they need to be linearly independent:

det[0 1 0 k; −1 0 1 0; 0 1 1 2; k 0 0 1] = −det[−1 1 0; 0 1 2; k 0 1] − det[0 0 k; −1 1 0; k 0 1] = −(−1 + 2k) − (−k^2) = k^2 − 2k + 1 = (k − 1)^2,

which is nonzero when k ≠ 1. Thus, the vectors will form a basis for R4 provided k ≠ 1.
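The case analysis in k can again be confirmed symbolically (SymPy assumed):

```python
from sympy import Matrix, symbols, factor

k = symbols('k')
A = Matrix([[0, 1, 0, k],
            [-1, 0, 1, 0],
            [0, 1, 1, 2],
            [k, 0, 0, 1]])

# det(A) = (k - 1)^2, so the columns form a basis for R^4 unless k = 1.
assert factor(A.det()) == (k - 1)**2
```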

7. The general vector p(x) ∈ P3 can be represented as p(x) = a0 + a1x + a2x^2 + a3x^3. Thus P3 = span{1, x, x^2, x^3}. Further, {1, x, x^2, x^3} is a linearly independent set, since

W[1, x, x^2, x^3] = det[1, x, x^2, x^3; 0, 1, 2x, 3x^2; 0, 0, 2, 6x; 0, 0, 0, 6] = 12 ≠ 0.

Consequently, S = {1, x, x^2, x^3} is a basis for P3 and dim[P3] = 4. Of course, S is not the only basis for P3.

Page 260: Differential Equations and Linear Algebra - 3e - Goode -Solutions

260

8. Many acceptable bases are possible here. One example is

S = {x³, x³ + 1, x³ + x, x³ + x²}.

All of the polynomials in this set have degree 3. We verify that S is a basis: Assume that

c1x³ + c2(x³ + 1) + c3(x³ + x) + c4(x³ + x²) = 0.

Thus,

(c1 + c2 + c3 + c4)x³ + c4x² + c3x + c2 = 0,

from which we quickly see that c1 = c2 = c3 = c4 = 0. Thus, S is linearly independent. Since P3 is 4-dimensional, we can now conclude from Corollary 4.6.13 that S is a basis for P3.

9. Ax = 0 =⇒ [1 3; −2 −6][x1; x2] = [0; 0]. The augmented matrix for this system is [1 3 0; −2 −6 0] ∼ [1 3 0; 0 0 0]; thus, x1 + 3x2 = 0, or x1 = −3x2. Let x2 = r so that x1 = −3r, where r ∈ R. Consequently, S = {x ∈ R2 : x = r(−3, 1), r ∈ R} = span{(−3, 1)}. It follows that {(−3, 1)} is a basis for S and dim[S] = 1.

10. Ax = 0 =⇒ [0 0 0; 0 0 0; 0 1 0][x1; x2; x3] = [0; 0; 0]. The RREF of the augmented matrix for this system is [0 1 0 0; 0 0 0 0; 0 0 0 0]. We see that x2 = 0, and x1 and x3 are free variables: x1 = r and x3 = s. Hence, (x1, x2, x3) = (r, 0, s) = r(1, 0, 0) + s(0, 0, 1), so that the solution set of the system is S = {x ∈ R3 : x = r(1, 0, 0) + s(0, 0, 1), r, s ∈ R}. Therefore we see that {(1, 0, 0), (0, 0, 1)} is a basis for S and dim[S] = 2.

11. Ax = 0 =⇒ [1 −1 4; 2 3 −2; 1 2 −2][x1; x2; x3] = [0; 0; 0]. The RREF of the augmented matrix for this system is [1 0 2 0; 0 1 −2 0; 0 0 0 0]. If we let x3 = r, then (x1, x2, x3) = (−2r, 2r, r) = r(−2, 2, 1), so that the solution set of the system is S = {x ∈ R3 : x = r(−2, 2, 1), r ∈ R}. Therefore we see that {(−2, 2, 1)} is a basis for S and dim[S] = 1.

12. Ax = 0 =⇒ [1 −1 2 3; 2 −1 3 4; 1 0 1 1; 3 −1 4 5][x1; x2; x3; x4] = [0; 0; 0; 0]. The RREF of the augmented matrix for this system is [1 0 1 1 0; 0 1 −1 −2 0; 0 0 0 0 0; 0 0 0 0 0]. If we let x3 = r and x4 = s, then x2 = r + 2s and x1 = −r − s. Hence, the solution set of the system is S = {x ∈ R4 : x = r(−1, 1, 1, 0) + s(−1, 2, 0, 1), r, s ∈ R} = span{(−1, 1, 1, 0), (−1, 2, 0, 1)}. Further, the vectors v1 = (−1, 1, 1, 0), v2 = (−1, 2, 0, 1) are linearly independent, since c1v1 + c2v2 = 0 =⇒ c1(−1, 1, 1, 0) + c2(−1, 2, 0, 1) = (0, 0, 0, 0) =⇒ c1 = c2 = 0. Consequently, {(−1, 1, 1, 0), (−1, 2, 0, 1)} is a basis for S and dim[S] = 2.
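The two basis vectors found in Problem 12 can be verified directly by multiplying them into A; a minimal sketch (the helper `matvec` is mine, not from the text):

```python
A = [[1, -1, 2, 3], [2, -1, 3, 4], [1, 0, 1, 1], [3, -1, 4, 5]]
v1 = [-1, 1, 1, 0]
v2 = [-1, 2, 0, 1]

def matvec(M, x):
    # Multiply a matrix by a vector, one dot product per row.
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

print(matvec(A, v1))  # [0, 0, 0, 0]
print(matvec(A, v2))  # [0, 0, 0, 0]
```

Since the arithmetic is exact integer arithmetic, both products vanish identically, confirming that v1 and v2 lie in nullspace(A).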

13. If we let y = r and z = s where r, s ∈ R, then x = 3r − s. It follows that any ordered triple in S can be written in the form

(x, y, z) = (3r − s, r, s) = (3r, r, 0) + (−s, 0, s) = r(3, 1, 0) + s(−1, 0, 1), where r, s ∈ R.

If we let v1 = (3, 1, 0) and v2 = (−1, 0, 1), then S = {v ∈ R3 : v = r(3, 1, 0) + s(−1, 0, 1), r, s ∈ R} = span{v1, v2}; moreover, v1 and v2 are linearly independent, for if a, b ∈ R and av1 + bv2 = 0, then

a(3, 1, 0) + b(−1, 0, 1) = (0, 0, 0),

which implies that (3a, a, 0) + (−b, 0, b) = (0, 0, 0), or (3a − b, a, b) = (0, 0, 0). In other words, a = b = 0. Since {v1, v2} spans S and is linearly independent, it is a basis for S. Also, dim[S] = 2.

14. S = {x ∈ R3 : x = (r, r − 2s, 3s − 5r), r, s ∈ R} = {x ∈ R3 : x = (r, r, −5r) + (0, −2s, 3s), r, s ∈ R} = {x ∈ R3 : x = r(1, 1, −5) + s(0, −2, 3), r, s ∈ R}.

Thus, S = span{(1, 1, −5), (0, −2, 3)}. The vectors v1 = (1, 1, −5) and v2 = (0, −2, 3) are linearly independent, for if a, b ∈ R and av1 + bv2 = 0, then

a(1, 1, −5) + b(0, −2, 3) = (0, 0, 0) =⇒ (a, a − 2b, 3b − 5a) = (0, 0, 0) =⇒ a = b = 0.

It follows that {v1, v2} is a basis for S and dim[S] = 2.

15. S = {A ∈ M2(R) : A = [a b; 0 c], a, b, c ∈ R}. Each vector in S can be written as

A = a[1 0; 0 0] + b[0 1; 0 0] + c[0 0; 0 1],

so that S = span{[1 0; 0 0], [0 1; 0 0], [0 0; 0 1]}. Since the vectors in this set are clearly linearly independent, it follows that a basis for S is {[1 0; 0 0], [0 1; 0 0], [0 0; 0 1]}, and therefore dim[S] = 3.

16. S = {A ∈ M2(R) : A = [a b; c −a], a, b, c ∈ R}. Each vector in S can be written as

A = a[1 0; 0 −1] + b[0 1; 0 0] + c[0 0; 1 0],

so that S = span{[1 0; 0 −1], [0 1; 0 0], [0 0; 1 0]}. Since this set of vectors is clearly linearly independent, it follows that a basis for S is {[1 0; 0 −1], [0 1; 0 0], [0 0; 1 0]}, and therefore dim[S] = 3.

17. We see directly that v3 = 2v1. Let v be an arbitrary vector in S. Then

v = c1v1 + c2v2 + c3v3 = (c1 + 2c3)v1 + c2v2 = d1v1 + d2v2,

where d1 = (c1 + 2c3) and d2 = c2. Hence S = span{v1,v2}. Further, v1 and v2 are linearly independent, for if a, b ∈ R and av1 + bv2 = 0, then

a(1, 0, 1) + b(0, 1, 1) = (0, 0, 0) =⇒ (a, 0, a) + (0, b, b) = (0, 0, 0) =⇒ (a, b, a + b) = (0, 0, 0) =⇒ a = b = 0.

Consequently, {v1,v2} is a basis for S and dim[S] = 2.


18. f3 depends on f1 and f2, since sinh x = (eˣ − e⁻ˣ)/2. Thus, f3(x) = (f1(x) − f2(x))/2.

W[f1, f2](x) = det[eˣ e⁻ˣ; eˣ −e⁻ˣ] = −2 ≠ 0 for all x ∈ R, so {f1, f2} is a linearly independent set. Thus, {f1, f2} is a basis for S and dim[S] = 2.

19. The given set of matrices is linearly dependent because it contains the zero vector. Consequently, the matrices A1 = [1 3; −1 2], A2 = [−1 4; 1 1], A3 = [5 −6; −5 1] span the same subspace of M2(R) as that spanned by the original set. We now determine whether {A1, A2, A3} is linearly independent. The vector equation

c1A1 + c2A2 + c3A3 = 02

leads to the linear system

c1 − c2 + 5c3 = 0, 3c1 + 4c2 − 6c3 = 0, −c1 + c2 − 5c3 = 0, 2c1 + c2 + c3 = 0.

This system has solution (−2r, 3r, r), where r is a free variable. Consequently, {A1, A2, A3} is linearly dependent with linear dependency relation

−2A1 + 3A2 + A3 = 0,

or equivalently,

A3 = 2A1 − 3A2.

It follows that the set of matrices {A1, A2} spans the same subspace of M2(R) as that spanned by the original set of matrices. Further, {A1, A2} is linearly independent by inspection, and therefore it is a basis for the subspace.
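The dependency relation −2A1 + 3A2 + A3 = 0 found in Problem 19 can be checked entrywise; a minimal sketch:

```python
A1 = [[1, 3], [-1, 2]]
A2 = [[-1, 4], [1, 1]]
A3 = [[5, -6], [-5, 1]]

# Entrywise check of -2*A1 + 3*A2 + A3 = 0
combo = [[-2 * a + 3 * b + c for a, b, c in zip(r1, r2, r3)]
         for r1, r2, r3 in zip(A1, A2, A3)]
print(combo)  # [[0, 0], [0, 0]]
```

Every entry vanishes, so A3 = 2A1 − 3A2 as claimed.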

20. (a) We must show that every vector (x, y) ∈ R2 can be expressed as a linear combination of v1 and v2. Mathematically, we express this as

c1(1, 1) + c2(−1, 1) = (x, y),

which implies that c1 − c2 = x and c1 + c2 = y. Adding the equations here, we obtain 2c1 = x + y, or

c1 = (x + y)/2.

Now we can solve for c2:

c2 = y − c1 = (y − x)/2.

Therefore, since we were able to solve for c1 and c2 in terms of x and y, we see that the system of equations is consistent for all x and y. Therefore, {v1, v2} spans R2.

(b) det[1 −1; 1 1] = 2 ≠ 0, so the vectors are linearly independent.

(c) We can draw this conclusion from part (a) alone by using Theorem 4.6.12 or from part (b) alone by using Theorem 4.6.10.

21. (a) We must show that every vector (x, y) ∈ R2 can be expressed as a linear combination of v1 and v2. Mathematically, we express this as

c1(2, 1) + c2(3, 1) = (x, y),

which implies that 2c1 + 3c2 = x and c1 + c2 = y. From this, we can solve for c1 and c2 in terms of x and y:

c1 = 3y − x and c2 = x − 2y.

Therefore, since we were able to solve for c1 and c2 in terms of x and y, we see that the system of equations is consistent for all x and y. Therefore, {v1, v2} spans R2.

(b) det[2 3; 1 1] = −1 ≠ 0, so the vectors are linearly independent.

(c) We can draw this conclusion from part (a) alone by using Theorem 4.6.12 or from part (b) alone by using Theorem 4.6.10.

22. dim[R3] = 3. There are 3 vectors, so if they are linearly independent, then they are a basis of R3 by Theorem 4.6.1. Since det[0 3 6; 6 0 −3; 3 3 0] = 81 ≠ 0, the given vectors are linearly independent, and so {v1, v2, v3} is a basis for R3.

23. dim[P2] = 3. There are 3 vectors, so {p1, p2, p3} may be a basis for P2, depending on α. To be a basis, the set of vectors must be linearly independent. Let a, b, c ∈ R. Then ap1(x) + bp2(x) + cp3(x) = 0
=⇒ a(1 + αx²) + b(1 + x + x²) + c(2 + x) = 0
=⇒ (a + b + 2c) + (b + c)x + (aα + b)x² = 0. Equating like coefficients in the last equality, we obtain the system

a + b + 2c = 0, b + c = 0, aα + b = 0.

Reducing the augmented matrix of this system gives

[1 1 2 0; 0 1 1 0; α 1 0 0] ∼ [1 1 2 0; 0 1 1 0; 0 1−α −2α 0] ∼ [1 1 2 0; 0 1 1 0; 0 0 −(α+1) 0].

For this system to have only the trivial solution, the last row of the matrix must not be a zero row. This means that α ≠ −1. Therefore, for the given set of vectors to be linearly independent (and thus a basis), α can be any value except −1.

24. dim[P2] = 3. There are 3 vectors, so in order to form a basis for P2, they must be linearly independent. Since we are dealing with functions, we will use the Wronskian:

W[p1, p2, p3](x) = det[1 + x, x(x − 1), 1 + 2x²; 1, 2x − 1, 4x; 0, 2, 4] = −2.

Since the Wronskian is nonzero, {p1, p2, p3} is linearly independent, and hence forms a basis for P2.

25. W[p0, p1, p2](x) = det[1, x, (3x² − 1)/2; 0, 1, 3x; 0, 0, 3] = 3 ≠ 0, so {p0, p1, p2} is a linearly independent set. Since dim[P2] = 3, it follows that {p0, p1, p2} is a basis for P2.

26. (a) Suppose that a, b, c, d ∈ R, and that

a[−1 1; 0 1] + b[1 3; −1 0] + c[1 0; 1 2] + d[0 −1; 2 3] = 02

=⇒ [−a + b + c, a + 3b − d; −b + c + 2d, a + 2c + 3d] = 02. The last matrix results in the system

−a + b + c = 0, a + 3b − d = 0, −b + c + 2d = 0, a + 2c + 3d = 0.

A row-echelon form of the augmented matrix of this system is [1 −1 −1 0 0; 0 4 1 −1 0; 0 0 5 7 0; 0 0 0 1 0], which implies that a = b = c = d = 0. Hence, the given set of vectors is linearly independent. Since dim[M2(R)] = 4, it follows that {A1, A2, A3, A4} is a basis for M2(R).

(b) We wish to express the vector [5 6; 7 8] in the form

[5 6; 7 8] = a[−1 1; 0 1] + b[1 3; −1 0] + c[1 0; 1 2] + d[0 −1; 2 3].

Matching entries on each side of this equation (upper left, upper right, lower left, and lower right), we obtain a linear system with augmented matrix

[−1 1 1 0 5; 1 3 0 −1 6; 0 −1 1 2 7; 1 0 2 3 8].

Solving this system by Gaussian elimination, we find that a = −34/3, b = 12, c = −55/3, and d = 56/3. Thus, we have

[5 6; 7 8] = −(34/3)[−1 1; 0 1] + 12[1 3; −1 0] − (55/3)[1 0; 1 2] + (56/3)[0 −1; 2 3].
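The fractional coefficients in Problem 26(b) are easy to get wrong by hand, so an exact-arithmetic check is worthwhile. A minimal sketch using Gauss-Jordan elimination over the rationals (the helper `solve` is mine, not from the text):

```python
from fractions import Fraction

def solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination with exact rationals."""
    n = len(A)
    M = [[Fraction(v) for v in row] + [Fraction(b[i])] for i, row in enumerate(A)]
    for c in range(n):
        p = next(r for r in range(c, n) if M[r][c] != 0)  # pivot row
        M[c], M[p] = M[p], M[c]
        M[c] = [v / M[c][c] for v in M[c]]                # normalize pivot
        for r in range(n):
            if r != c:
                M[r] = [v - M[r][c] * w for v, w in zip(M[r], M[c])]
    return [M[i][n] for i in range(n)]

# Coefficient matrix and right-hand side from Problem 26(b)
A = [[-1, 1, 1, 0], [1, 3, 0, -1], [0, -1, 1, 2], [1, 0, 2, 3]]
b = [5, 6, 7, 8]
print(solve(A, b))  # [Fraction(-34, 3), Fraction(12, 1), Fraction(-55, 3), Fraction(56, 3)]
```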

27.

(a) Ax = 0 =⇒ [1 1 −1 1; 2 −3 5 −6; 5 0 2 −3][x1; x2; x3; x4] = [0; 0; 0]. The augmented matrix for this linear system is

[1 1 −1 1 0; 2 −3 5 −6 0; 5 0 2 −3 0] ∼1 [1 1 −1 1 0; 0 −5 7 −8 0; 0 −5 7 −8 0] ∼2 [1 1 −1 1 0; 0 −5 7 −8 0; 0 0 0 0 0] ∼3 [1 1 −1 1 0; 0 1 −1.4 1.6 0; 0 0 0 0 0].

1. A12(−2), A13(−5)  2. A23(−1)  3. M2(−1/5)

We see that x3 = r and x4 = s are free variables, and therefore nullspace(A) is 2-dimensional. Now we can check directly that Av1 = 0 and Av2 = 0, and since v1 and v2 are linearly independent (they are non-proportional), they therefore form a basis for nullspace(A) by Corollary 4.6.13.

(b) An arbitrary vector (x, y, z, w) in nullspace(A) can be expressed as a linear combination of the basis vectors:

(x, y, z, w) = c1(−2, 7, 5, 0) + c2(3, −8, 0, 5),

where c1, c2 ∈ R.
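The direct check Av1 = 0 and Av2 = 0 mentioned in Problem 27(a) can be sketched as:

```python
A = [[1, 1, -1, 1], [2, -3, 5, -6], [5, 0, 2, -3]]
v1 = [-2, 7, 5, 0]
v2 = [3, -8, 0, 5]

for v in (v1, v2):
    # Each row of A dotted with v should vanish.
    assert all(sum(a * x for a, x in zip(row, v)) == 0 for row in A)
print("both vectors lie in nullspace(A)")
```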

28.

(a) An arbitrary matrix in S takes the form

[a, b, −a − b; c, d, −c − d; −a − c, −b − d, a + b + c + d]
= a[1 0 −1; 0 0 0; −1 0 1] + b[0 1 −1; 0 0 0; 0 −1 1] + c[0 0 0; 1 0 −1; −1 0 1] + d[0 0 0; 0 1 −1; 0 −1 1].

Therefore, we have the following basis for S:

{[1 0 −1; 0 0 0; −1 0 1], [0 1 −1; 0 0 0; 0 −1 1], [0 0 0; 1 0 −1; −1 0 1], [0 0 0; 0 1 −1; 0 −1 1]}.

From this, we see that dim[S] = 4.

(b) We see that each of the matrices

[1 0 0; 0 0 0; 0 0 0], [0 1 0; 0 0 0; 0 0 0], [0 0 1; 0 0 0; 0 0 0], [0 0 0; 1 0 0; 0 0 0], [0 0 0; 0 0 0; 1 0 0]

has a row or column that does not sum to zero (a different one for each matrix), and thus none of these matrices belongs to S, and they are linearly independent from one another. Therefore, supplementing the basis for S in part (a) with the five matrices here extends the basis in (a) to a basis for M3(R).

29.

(a) An arbitrary matrix in S takes the form

[a, b, c; d, e, a + b + c − d − e; b + c − d, a + c − e, d + e − c]
= a[1 0 0; 0 0 1; 0 1 0] + b[0 1 0; 0 0 1; 1 0 0] + c[0 0 1; 0 0 1; 1 1 −1] + d[0 0 0; 1 0 −1; −1 0 1] + e[0 0 0; 0 1 −1; 0 −1 1].

Therefore, we have the following basis for S:

{[1 0 0; 0 0 1; 0 1 0], [0 1 0; 0 0 1; 1 0 0], [0 0 1; 0 0 1; 1 1 −1], [0 0 0; 1 0 −1; −1 0 1], [0 0 0; 0 1 −1; 0 −1 1]}.

From this, we see that dim[S] = 5.

(b) We must include four additional matrices that are linearly independent and outside of S. The matrices E11, E12, E11 + E13, and E22 will suffice in this case.

30. Let A ∈ Sym2(R), so A = Aᵀ. Then A = [a b; b c] with a, b, c ∈ R. This vector can be represented as a linear combination of {[1 0; 0 0], [0 1; 1 0], [0 0; 0 1]}:

A = a[1 0; 0 0] + b[0 1; 1 0] + c[0 0; 0 1].

Since {[1 0; 0 0], [0 1; 1 0], [0 0; 0 1]} is a linearly independent set that also spans Sym2(R), it is a basis for Sym2(R). Thus, dim[Sym2(R)] = 3. Let B ∈ Skew2(R), so B = −Bᵀ. Then B = [0 b; −b 0] = b[0 1; −1 0], where b ∈ R. The set {[0 1; −1 0]} is linearly independent and spans Skew2(R), so that a basis for Skew2(R) is {[0 1; −1 0]}. Consequently, dim[Skew2(R)] = 1. Now, dim[M2(R)] = 4, and hence dim[Sym2(R)] + dim[Skew2(R)] = 3 + 1 = 4 = dim[M2(R)].

31. We know that dim[Mn(R)] = n². Let S ∈ Symn(R) and let Sij be the matrix with ones in the (i, j) and (j, i) positions and zeroes elsewhere. Then the general n × n symmetric matrix can be expressed as

S = a11S11 + a12S12 + a13S13 + · · · + a1nS1n + a22S22 + a23S23 + · · · + a2nS2n + · · · + a(n−1)(n−1)S(n−1)(n−1) + a(n−1)nS(n−1)n + annSnn.

We see that S has been resolved into a linear combination of n + (n − 1) + (n − 2) + · · · + 1 = n(n + 1)/2 linearly independent matrices, which therefore form a basis for Symn(R); hence dim[Symn(R)] = n(n + 1)/2.

Now let T ∈ Skewn(R) and let Tij be the matrix with one in the (i, j)-position, negative one in the (j, i)-position, and zeroes elsewhere, including the main diagonal. Then the general n × n skew-symmetric matrix can be expressed as

T = a12T12 + a13T13 + a14T14 + · · · + a1nT1n + a23T23 + a24T24 + · · · + a2nT2n + · · · + a(n−1)nT(n−1)n.

We see that T has been resolved into a linear combination of (n − 1) + (n − 2) + (n − 3) + · · · + 2 + 1 = (n − 1)n/2 linearly independent matrices, which therefore form a basis for Skewn(R); hence dim[Skewn(R)] = (n − 1)n/2.

Consequently, using these results, we have

dim[Symn(R)] + dim[Skewn(R)] = n(n + 1)/2 + (n − 1)n/2 = n² = dim[Mn(R)].
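The counting argument in Problem 31 amounts to counting index pairs i ≤ j (symmetric) and i < j (skew-symmetric); a minimal sketch confirming the two dimension formulas for small n (the helper names are mine):

```python
def sym_basis_count(n):
    # One basis matrix S_ij for each pair i <= j
    return sum(1 for i in range(n) for j in range(i, n))

def skew_basis_count(n):
    # One basis matrix T_ij for each pair i < j
    return sum(1 for i in range(n) for j in range(i + 1, n))

for n in range(1, 8):
    assert sym_basis_count(n) == n * (n + 1) // 2
    assert skew_basis_count(n) == n * (n - 1) // 2
    # Together the two families account for all of M_n(R)
    assert sym_basis_count(n) + skew_basis_count(n) == n * n
print("dimension formulas verified for n = 1, ..., 7")
```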

32. (a) S is a two-dimensional subspace of R3. Consequently, any two linearly independent vectors lying in this subspace determine a basis for S. By inspection we see, for example, that v1 = (4, −1, 0) and v2 = (3, 0, 1) both lie in the plane. Further, since they are not proportional, these vectors are linearly independent. Consequently, a basis for S is {(4, −1, 0), (3, 0, 1)}.

(b) To extend the basis obtained in part (a) to obtain a basis for R3, we require one more vector that does not lie in S. For example, since v3 = (1, 0, 0) does not lie on the plane, it is an appropriate vector. Consequently, a basis for R3 is {(4, −1, 0), (3, 0, 1), (1, 0, 0)}.

33. Each vector in S can be written as

[a b; b a] = a[1 0; 0 1] + b[0 1; 1 0].

Consequently, a basis for S is given by the linearly independent set {[1 0; 0 1], [0 1; 1 0]}. To extend this basis to M2(R), we can choose, for example, the two vectors [1 0; 0 0] and [0 1; 0 0]. Then the linearly independent set {[1 0; 0 1], [0 1; 1 0], [1 0; 0 0], [0 1; 0 0]} is a basis for M2(R).

34. The vectors in S can be expressed as

(2a1 + a2)x² + (a1 + a2)x + (3a1 − a2) = a1(2x² + x + 3) + a2(x² + x − 1),

and since {2x² + x + 3, x² + x − 1} is linearly independent (these polynomials are non-proportional) and certainly spans S, it forms a basis for S. To extend this basis to V = P2, we must include one additional vector (since P2 is 3-dimensional). Any polynomial that is not in S will suffice. For example, x ∉ S, since x cannot be expressed in the form (2a1 + a2)x² + (a1 + a2)x + (3a1 − a2): the equations

2a1 + a2 = 0, a1 + a2 = 1, 3a1 − a2 = 0

are inconsistent. Thus, the extension we use as a basis for V here is {2x² + x + 3, x² + x − 1, x}. Many other correct answers can also be given here.

35. Since S is a basis for Pn−1, S contains n vectors. Therefore, S ∪ {xⁿ} is a set of n + 1 vectors, which is precisely the dimension of Pn. Moreover, xⁿ does not lie in Pn−1 = span(S), and therefore S ∪ {xⁿ} is linearly independent by Problem 47 in Section 4.5. By Corollary 4.6.13, we conclude that S ∪ {xⁿ} is a basis for Pn.

36. Since S is a basis for Pn−1, S contains n vectors. Therefore, S ∪ {p} is a set of n + 1 vectors, which is precisely the dimension of Pn. Moreover, p does not lie in Pn−1 = span(S), and therefore S ∪ {p} is linearly independent by Problem 47 in Section 4.5. By Corollary 4.6.13, we conclude that S ∪ {p} is a basis for Pn.

37.

(a) Let ek denote the kth standard basis vector. Then a basis for Cn with scalars in R is given by

{e1, e2, . . . , en, ie1, ie2, . . . , ien},

and the dimension is 2n.

(b) Using the notation in part (a), a basis for Cn with scalars in C is given by

{e1, e2, . . . , en},

and the dimension is n.

Solutions to Section 4.7

True-False Review:

1. TRUE. This is the content of Theorem 4.7.1. The existence of such a linear combination comes from the fact that a basis for V must span V, and the uniqueness of such a linear combination follows from the linear independence of the vectors comprising a basis.

2. TRUE. This follows from the equation [v]B = PB←C[v]C, which is just Equation (4.7.6) with the roles of B and C reversed.

3. TRUE. The number of columns in the change-of-basis matrix PC←B is the number of vectors in B, while the number of rows of PC←B is the number of vectors in C. Since all bases for the vector space V contain the same number of vectors, this implies that PC←B contains the same number of rows and columns.


4. TRUE. If V is an n-dimensional vector space, then

PC←BPB←C = In = PB←CPC←B ,

which implies that PC←B is invertible.

5. TRUE. This follows from the linearity properties:

[v −w]B = [v + (−1)w]B = [v]B + [(−1)w]B = [v]B + (−1)[w]B = [v]B − [w]B .

6. FALSE. It depends on the order in which the vectors in the bases B and C are listed. For instance, if we consider the bases B = {(1, 0), (0, 1)} and C = {(0, 1), (1, 0)} for R2, then although B and C contain the same vectors, if we let v = (1, 0), then [v]B = [1; 0] while [v]C = [0; 1].

7. FALSE. For instance, if we consider the bases B = {(1, 0), (0, 1)} and C = {(0, 1), (1, 0)} for R2, and if we let v = (1, 0) and w = (0, 1), then v ≠ w, but [v]B = [1; 0] = [w]C.

8. TRUE. If B = {v1, v2, . . . , vn}, then the column vector [vi]B is the ith standard basis vector (1 in the ith position and zeroes elsewhere). Thus, for each i, the ith column of PB←B consists of a 1 in the ith position and zeroes elsewhere. This describes precisely the identity matrix.

Problems:

1. Write

(5, −10) = c1(2, −2) + c2(1, 4).

Then 2c1 + c2 = 5 and −2c1 + 4c2 = −10. Solving this system of equations gives c1 = 3 and c2 = −1. Thus, [v]B = [3; −1].

2. Write

(8, −2) = c1(−1, 3) + c2(3, 2).

Then −c1 + 3c2 = 8 and 3c1 + 2c2 = −2. Solving this system of equations gives c1 = −2 and c2 = 2. Thus, [v]B = [−2; 2].

3. Write

(−9, 1, −8) = c1(1, 0, 1) + c2(1, 1, −1) + c3(2, 0, 1).

Then c1 + c2 + 2c3 = −9 and c2 = 1 and c1 − c2 + c3 = −8. Solving this system of equations gives c1 = −4, c2 = 1, and c3 = −3. Thus, [v]B = [−4; 1; −3].

4. Write

(1, 7, 7) = c1(1, −6, 3) + c2(0, 5, −1) + c3(3, −1, −1).

Then c1 + 3c3 = 1 and −6c1 + 5c2 − c3 = 7 and 3c1 − c2 − c3 = 7. Using Gaussian elimination to solve this system of equations gives c1 = 4, c2 = 6, and c3 = −1. Thus, [v]B = [4; 6; −1].

5. Write

(1, 7, 7) = c1(3, −1, −1) + c2(1, −6, 3) + c3(0, 5, −1).

Then 3c1 + c2 = 1 and −c1 − 6c2 + 5c3 = 7 and −c1 + 3c2 − c3 = 7. Using Gaussian elimination to solve this system of equations gives c1 = −1, c2 = 4, and c3 = 6. Thus, [v]B = [−1; 4; 6].

6. Write

(5, 5, 5) = c1(−1, 0, 0) + c2(0, 0, −3) + c3(0, −2, 0).

Then −c1 = 5 and −2c3 = 5 and −3c2 = 5. Therefore, c1 = −5, c2 = −5/3, and c3 = −5/2. Thus, [v]B = [−5; −5/3; −5/2].

7. Write

−4x² + 2x + 6 = c1(x² + x) + c2(2 + 2x) + c3(1).

Equating the powers of x on each side, we have c1 = −4 and c1 + 2c2 = 2 and 2c2 + c3 = 6. Solving this system of equations, we find that c1 = −4, c2 = 3, and c3 = 0. Hence, [p(x)]B = [−4; 3; 0].

8. Write

15 − 18x − 30x² = c1(5 − 3x) + c2(1) + c3(1 + 2x²).

Equating the powers of x on each side, we have 5c1 + c2 + c3 = 15 and −3c1 = −18 and 2c3 = −30. Solving this system of equations, we find that c1 = 6, c2 = 0, and c3 = −15. Hence, [p(x)]B = [6; 0; −15].

9. Write

4 − x + x² − 2x³ = c1(1) + c2(1 + x) + c3(1 + x + x²) + c4(1 + x + x² + x³).

Equating the powers of x on each side, we have c1 + c2 + c3 + c4 = 4 and c2 + c3 + c4 = −1 and c3 + c4 = 1 and c4 = −2. Solving this system of equations, we find that c1 = 5, c2 = −2, c3 = 3, and c4 = −2. Hence, [p(x)]B = [5; −2; 3; −2].

10. Write

8 + x + 6x² + 9x³ = c1(x³ + x²) + c2(x³ − 1) + c3(x³ + 1) + c4(x³ + x).

Equating the powers of x on each side, we have −c2 + c3 = 8 and c4 = 1 and c1 = 6 and c1 + c2 + c3 + c4 = 9. Solving this system of equations, we find that c1 = 6, c2 = −3, c3 = 5, and c4 = 1. Hence, [p(x)]B = [6; −3; 5; 1].

11. Write

[−3 −2; −1 2] = c1[1 1; 1 1] + c2[1 1; 1 0] + c3[1 1; 0 0] + c4[1 0; 0 0].

Equating the individual entries of the matrices on each side of this equation (upper left, upper right, lower left, and lower right, respectively) gives

c1 + c2 + c3 + c4 = −3 and c1 + c2 + c3 = −2 and c1 + c2 = −1 and c1 = 2.

Solving this system of equations, we find that c1 = 2, c2 = −3, c3 = −1, and c4 = −1. Thus, [A]B = [2; −3; −1; −1].

12. Write

[−10 16; −15 −14] = c1[2 −1; 3 5] + c2[0 4; −1 1] + c3[1 1; 1 1] + c4[3 −1; 2 5].

Equating the individual entries of the matrices on each side of this equation (upper left, upper right, lower left, and lower right, respectively) gives

2c1 + c3 + 3c4 = −10, −c1 + 4c2 + c3 − c4 = 16, 3c1 − c2 + c3 + 2c4 = −15, 5c1 + c2 + c3 + 5c4 = −14.

Solving this system of equations, we find that c1 = −2, c2 = 4, c3 = −3, and c4 = −1. Thus, [A]B = [−2; 4; −3; −1].

13. Write

[5 6; 7 8] = c1[−1 1; 0 1] + c2[1 3; −1 0] + c3[1 0; 1 2] + c4[0 −1; 2 3].

Equating the individual entries of the matrices on each side of this equation (upper left, upper right, lower left, and lower right, respectively) gives

−c1 + c2 + c3 = 5 and c1 + 3c2 − c4 = 6 and −c2 + c3 + 2c4 = 7 and c1 + 2c3 + 3c4 = 8.

Solving this system of equations, we find that c1 = −34/3, c2 = 12, c3 = −55/3, and c4 = 56/3. Thus, [A]B = [−34/3; 12; −55/3; 56/3].

14. Write

(x, y, z) = c1(0, 6, 3) + c2(3, 0, 3) + c3(6, −3, 0).

Then 3c2 + 6c3 = x and 6c1 − 3c3 = y and 3c1 + 3c2 = z. The augmented matrix for this linear system is [3 3 0 z; 6 0 −3 y; 0 3 6 x]. We can reduce this to the row-echelon form [1 1 0 z/3; 0 1 2 x/3; 0 0 9 2x + y − 2z]. Solving this system by back-substitution gives

c1 = (1/9)x + (2/9)y − (1/9)z, c2 = −(1/9)x − (2/9)y + (4/9)z, c3 = (2/9)x + (1/9)y − (2/9)z.

Hence, denoting the ordered basis {v1, v2, v3} by B, we have

[v]B = [(1/9)x + (2/9)y − (1/9)z; −(1/9)x − (2/9)y + (4/9)z; (2/9)x + (1/9)y − (2/9)z].
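The coordinate formulas in Problem 14 can be spot-checked by recombining the basis vectors with the computed coefficients; a minimal sketch with exact rationals (the helper `coords` is mine, not from the text):

```python
from fractions import Fraction

def coords(x, y, z):
    """Coordinates of (x, y, z) relative to B = {(0,6,3), (3,0,3), (6,-3,0)}."""
    c1 = Fraction(x + 2 * y - z, 9)
    c2 = Fraction(-x - 2 * y + 4 * z, 9)
    c3 = Fraction(2 * x + y - 2 * z, 9)
    return c1, c2, c3

v1, v2, v3 = (0, 6, 3), (3, 0, 3), (6, -3, 0)
for (x, y, z) in [(1, 0, 0), (0, 1, 0), (5, -7, 2)]:
    c1, c2, c3 = coords(x, y, z)
    # Recombine c1*v1 + c2*v2 + c3*v3 and compare with the original vector
    recombined = tuple(c1 * a + c2 * b + c3 * c for a, b, c in zip(v1, v2, v3))
    assert recombined == (x, y, z)
print("coordinate formulas check out")
```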

15. Write

a0 + a1x + a2x² = c1(1 + x) + c2 x(x − 1) + c3(1 + 2x²).

Equating powers of x on both sides of this equation, we have

c1 + c3 = a0 and c1 − c2 = a1 and c2 + 2c3 = a2.

The augmented matrix corresponding to this system of equations is [1 0 1 a0; 1 −1 0 a1; 0 1 2 a2]. We can reduce this to the row-echelon form [1 0 1 a0; 0 1 2 a2; 0 0 1 −a0 + a1 + a2]. Thus, solving by back-substitution, we have

c1 = 2a0 − a1 − a2, c2 = 2a0 − 2a1 − a2, c3 = −a0 + a1 + a2.

Hence, relative to the ordered basis B = {p1, p2, p3}, we have

[p(x)]B = [2a0 − a1 − a2; 2a0 − 2a1 − a2; −a0 + a1 + a2].

16. Let v1 = (9, 2) and v2 = (4, −3). Setting

(9, 2) = c1(2, 1) + c2(−3, 1)

and solving, we find c1 = 3 and c2 = −1. Thus, [v1]C = [3; −1]. Next, setting

(4, −3) = c1(2, 1) + c2(−3, 1)

and solving, we find c1 = −1 and c2 = −2. Thus, [v2]C = [−1; −2]. Therefore,

PC←B = [3 −1; −1 −2].

17. Let v1 = (−5, −3) and v2 = (4, 28). Setting

(−5, −3) = c1(6, 2) + c2(1, −1)

and solving, we find c1 = −1 and c2 = 1. Thus, [v1]C = [−1; 1]. Next, setting

(4, 28) = c1(6, 2) + c2(1, −1)

and solving, we find c1 = 4 and c2 = −20. Thus, [v2]C = [4; −20]. Therefore,

PC←B = [−1 4; 1 −20].

18. Let v1 = (2, −5, 0), v2 = (3, 0, 5), and v3 = (8, −2, −9). Setting

(2, −5, 0) = c1(1, −1, 1) + c2(2, 0, 1) + c3(0, 1, 3)

and solving, we find c1 = 4, c2 = −1, and c3 = −1. Thus, [v1]C = [4; −1; −1]. Next, setting

(3, 0, 5) = c1(1, −1, 1) + c2(2, 0, 1) + c3(0, 1, 3)

and solving, we find c1 = 1, c2 = 1, and c3 = 1. Thus, [v2]C = [1; 1; 1]. Finally, setting

(8, −2, −9) = c1(1, −1, 1) + c2(2, 0, 1) + c3(0, 1, 3)

and solving, we find c1 = −2, c2 = 5, and c3 = −4. Thus, [v3]C = [−2; 5; −4]. Therefore,

PC←B = [4 1 −2; −1 1 5; −1 1 −4].
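Each column of the change-of-basis matrix in Problem 18 is obtained by solving one linear system, so the whole matrix can be built programmatically. A minimal sketch with exact rationals (the helper `solve` is mine, not from the text):

```python
from fractions import Fraction

def solve(A, b):
    """Gauss-Jordan elimination with exact rationals; returns x with A x = b."""
    n = len(A)
    M = [[Fraction(v) for v in row] + [Fraction(b[i])] for i, row in enumerate(A)]
    for c in range(n):
        p = next(r for r in range(c, n) if M[r][c] != 0)
        M[c], M[p] = M[p], M[c]
        M[c] = [v / M[c][c] for v in M[c]]
        for r in range(n):
            if r != c:
                M[r] = [v - M[r][c] * w for v, w in zip(M[r], M[c])]
    return [M[i][n] for i in range(n)]

# Columns of C_mat are the basis vectors of C = {(1,-1,1), (2,0,1), (0,1,3)};
# solving C_mat c = v_i gives [v_i]_C, i.e. the i-th column of P_{C<-B}.
C_mat = [[1, 2, 0], [-1, 0, 1], [1, 1, 3]]
B_vecs = [(2, -5, 0), (3, 0, 5), (8, -2, -9)]
cols = [[int(x) for x in solve(C_mat, list(v))] for v in B_vecs]
print(cols)  # [[4, -1, -1], [1, 1, 1], [-2, 5, -4]]
```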

19. Let v1 = (−7, 4, 4), v2 = (4, 2, −1), and v3 = (−7, 5, 0). Setting

(−7, 4, 4) = c1(1, 1, 0) + c2(0, 1, 1) + c3(3, −1, −1)

and solving, we find c1 = 0, c2 = 5/3, and c3 = −7/3. Thus, [v1]C = [0; 5/3; −7/3]. Next, setting

(4, 2, −1) = c1(1, 1, 0) + c2(0, 1, 1) + c3(3, −1, −1)

and solving, we find c1 = 3, c2 = −2/3, and c3 = 1/3. Thus, [v2]C = [3; −2/3; 1/3]. Finally, setting

(−7, 5, 0) = c1(1, 1, 0) + c2(0, 1, 1) + c3(3, −1, −1)

and solving, we find c1 = 5, c2 = −4, and c3 = −4. Hence, [v3]C = [5; −4; −4]. Therefore,

PC←B = [0 3 5; 5/3 −2/3 −4; −7/3 1/3 −4].

20. Let v1 = 7 − 4x and v2 = 5x. Setting

7 − 4x = c1(1 − 2x) + c2(2 + x)

and solving, we find c1 = 3 and c2 = 2. Thus, [v1]C = [3; 2]. Next, setting

5x = c1(1 − 2x) + c2(2 + x)

and solving, we find c1 = −2 and c2 = 1. Hence, [v2]C = [−2; 1]. Therefore,

PC←B = [3 −2; 2 1].

21. Let v1 = −4 + x − 6x², v2 = 6 + 2x², and v3 = −6 − 2x + 4x². Setting

−4 + x − 6x² = c1(1 − x + 3x²) + c2(2) + c3(3 + x²)

and solving, we find c1 = −1, c2 = 3, and c3 = −3. Thus, [v1]C = [−1; 3; −3]. Next, setting

6 + 2x² = c1(1 − x + 3x²) + c2(2) + c3(3 + x²)

and solving, we find c1 = 0, c2 = 0, and c3 = 2. Thus, [v2]C = [0; 0; 2]. Finally, setting

−6 − 2x + 4x² = c1(1 − x + 3x²) + c2(2) + c3(3 + x²)

and solving, we find c1 = 2, c2 = −1, and c3 = −2. Thus, [v3]C = [2; −1; −2]. Therefore,

PC←B = [−1 0 2; 3 0 −1; −3 2 −2].

22. Let v1 = −2 + 3x + 4x² − x³, v2 = 3x + 5x² + 2x³, v3 = −5x² − 5x³, and v4 = 4 + 4x + 4x². Setting

−2 + 3x + 4x² − x³ = c1(1 − x³) + c2(1 + x) + c3(x + x²) + c4(x² + x³)

and solving, we find c1 = 0, c2 = −2, c3 = 5, and c4 = −1. Thus, [v1]C = [0; −2; 5; −1]. Next, setting

3x + 5x² + 2x³ = c1(1 − x³) + c2(1 + x) + c3(x + x²) + c4(x² + x³)

and solving, we find c1 = 0, c2 = 0, c3 = 3, and c4 = 2. Thus, [v2]C = [0; 0; 3; 2]. Next, setting

−5x² − 5x³ = c1(1 − x³) + c2(1 + x) + c3(x + x²) + c4(x² + x³)

and solving, we find c1 = 0, c2 = 0, c3 = 0, and c4 = −5. Thus, [v3]C = [0; 0; 0; −5]. Finally, setting

4 + 4x + 4x² = c1(1 − x³) + c2(1 + x) + c3(x + x²) + c4(x² + x³)

and solving, we find c1 = 2, c2 = 2, c3 = 2, and c4 = 2. Thus, [v4]C = [2; 2; 2; 2]. Therefore,

PC←B = [0 0 0 2; −2 0 0 2; 5 3 0 2; −1 2 −5 2].

23. Let v1 = 2 + x², v2 = −1 − 6x + 8x², and v3 = −7 − 3x − 9x². Setting

2 + x² = c1(1 + x) + c2(−x + x²) + c3(1 + 2x²)

and solving, we find c1 = 3, c2 = 3, and c3 = −1. Thus, [v1]C = [3; 3; −1]. Next, setting

−1 − 6x + 8x² = c1(1 + x) + c2(−x + x²) + c3(1 + 2x²)

and solving, we find c1 = −4, c2 = 2, and c3 = 3. Thus, [v2]C = [−4; 2; 3]. Finally, setting

−7 − 3x − 9x² = c1(1 + x) + c2(−x + x²) + c3(1 + 2x²)

and solving, we find c1 = −2, c2 = 1, and c3 = −5. Thus, [v3]C = [−2; 1; −5]. Therefore,

PC←B = [3 −4 −2; 3 2 1; −1 3 −5].

24. Let v1 = [1 0; −1 −2], v2 = [0 −1; 3 0], v3 = [3 5; 0 0], and v4 = [−2 −4; 0 0]. Setting

[1 0; −1 −2] = c1[1 1; 1 1] + c2[1 1; 1 0] + c3[1 1; 0 0] + c4[1 0; 0 0]

and solving, we find c1 = −2 and c2 = c3 = c4 = 1. Thus, [v1]C = [−2; 1; 1; 1]. Next, setting

[0 −1; 3 0] = c1[1 1; 1 1] + c2[1 1; 1 0] + c3[1 1; 0 0] + c4[1 0; 0 0]

and solving, we find c1 = 0, c2 = 3, c3 = −4, and c4 = 1. Thus, [v2]C = [0; 3; −4; 1]. Next, setting

[3 5; 0 0] = c1[1 1; 1 1] + c2[1 1; 1 0] + c3[1 1; 0 0] + c4[1 0; 0 0]

and solving, we find c1 = 0, c2 = 0, c3 = 5, and c4 = −2. Thus, [v3]C = [0; 0; 5; −2]. Finally, setting

[−2 −4; 0 0] = c1[1 1; 1 1] + c2[1 1; 1 0] + c3[1 1; 0 0] + c4[1 0; 0 0]

and solving, we find c1 = 0, c2 = 0, c3 = −4, and c4 = 2. Thus, [v4]C = [0; 0; −4; 2]. Therefore, we have

PC←B = [−2 0 0 0; 1 3 0 0; 1 −4 5 −4; 1 1 −2 2].

25. Let v1 = E12, v2 = E22, v3 = E21, and v4 = E11. We see by inspection that

[v1]C = [0; 0; 0; 1], [v2]C = [1; 0; 0; 0], [v3]C = [0; 0; 1; 0], [v4]C = [0; 1; 0; 0].

Therefore,

PC←B = [0 1 0 0; 0 0 0 1; 0 0 1 0; 1 0 0 0].

26. We could simply compute the inverse of the matrix obtained in Problem 16. For instructive purposes, however, we proceed directly. Let w1 = (2, 1) and w2 = (−3, 1). Setting

(2, 1) = c1(9, 2) + c2(4, −3)

and solving, we obtain c1 = 2/7 and c2 = −1/7. Thus, [w1]B = [2/7; −1/7]. Next, setting

(−3, 1) = c1(9, 2) + c2(4, −3)

and solving, we obtain c1 = −1/7 and c2 = −3/7. Thus, [w2]B = [−1/7; −3/7]. Therefore,

PB←C = [2/7 −1/7; −1/7 −3/7].
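The matrix just found should be the inverse of the PC←B from Problem 16, i.e. their product should be the identity. A minimal sketch checking this with exact rationals (the helper `matmul` is mine, not from the text):

```python
from fractions import Fraction

P_CB = [[3, -1], [-1, -2]]                      # from Problem 16
P_BC = [[Fraction(2, 7), Fraction(-1, 7)],      # from Problem 26
        [Fraction(-1, 7), Fraction(-3, 7)]]

def matmul(A, B):
    # Plain triple-loop matrix product.
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

print(matmul(P_BC, P_CB))  # the 2x2 identity matrix
```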

27. We could simply compute the inverse of the matrix obtained in Problem 17. For instructive purposes, however, we proceed directly. Let w1 = (6, 2) and w2 = (1, −1). Setting

(6, 2) = c1(−5, −3) + c2(4, 28)

and solving, we obtain c1 = −5/4 and c2 = −1/16. Thus, [w1]B = [−5/4; −1/16]. Next, setting

(1, −1) = c1(−5, −3) + c2(4, 28)

and solving, we obtain c1 = −1/4 and c2 = −1/16. Thus, [w2]B = [−1/4; −1/16]. Therefore,

PB←C = [−5/4 −1/4; −1/16 −1/16].

28. We could simply compute the inverse of the matrix obtained in Problem 18. For instructive purposes, however, we proceed directly. Let w1 = (1, −1, 1), w2 = (2, 0, 1), and w3 = (0, 1, 3). Setting

(1, −1, 1) = c1(2, −5, 0) + c2(3, 0, 5) + c3(8, −2, −9)

and solving, we find c1 = 1/5, c2 = 1/5, and c3 = 0. Thus, [w1]B = [1/5; 1/5; 0]. Next, setting

(2, 0, 1) = c1(2, −5, 0) + c2(3, 0, 5) + c3(8, −2, −9)

and solving, we find c1 = −2/45, c2 = 2/5, and c3 = 1/9. Thus, [w2]B = [−2/45; 2/5; 1/9]. Finally, setting

(0, 1, 3) = c1(2, −5, 0) + c2(3, 0, 5) + c3(8, −2, −9)

and solving, we find c1 = −7/45, c2 = 2/5, and c3 = −1/9. Thus, [w3]B = [−7/45; 2/5; −1/9]. Therefore,

PB←C = [1/5 −2/45 −7/45; 1/5 2/5 2/5; 0 1/9 −1/9].


29. We could simply compute the inverse of the matrix obtained in Problem 20. For instructive purposes, however, we proceed directly. Let w1 = 1 − 2x and w2 = 2 + x. Setting

1 − 2x = c1(7 − 4x) + c2(5x)

and solving, we find c1 = 1/7 and c2 = −2/7. Thus, [w1]B = [1/7; −2/7]. Next, setting

2 + x = c1(7 − 4x) + c2(5x)

and solving, we find c1 = 2/7 and c2 = 3/7. Thus, [w2]B = [2/7; 3/7]. Therefore,

PB←C = [1/7 2/7; −2/7 3/7].

30. We could simply compute the inverse of the matrix obtained in Problem 16. For instructive purposes, however, we proceed directly. Let w1 = 1 − x^3, w2 = 1 + x, w3 = x + x^2, and w4 = x^2 + x^3. Setting

a0 + a1x + a2x^2 + a3x^3 = c1(−2 + 3x + 4x^2 − x^3) + c2(3x + 5x^2 + 2x^3) + c3(−5x^2 − 5x^3) + c4(4 + 4x + 4x^2)

and equating the various powers of x, we obtain the linear system Ac = b, where

A = [ −2  0   0  4
       3  3   0  4
       4  5  −5  4
      −1  2  −5  0 ]

and b = [ a0, a1, a2, a3 ]^T. Solving this system with b equal to the coefficient vector of each wi in turn, we find

[w1]B = [ 1/2, −7/6, −11/30, 1/2 ]^T,  [w2]B = [ −1/2, 5/6, 13/30, 0 ]^T,
[w3]B = [ 0, 1/3, 2/15, 0 ]^T,  [w4]B = [ 0, 0, −1/5, 0 ]^T.

Therefore,

PB←C = [  1/2    −1/2    0     0
         −7/6     5/6    1/3   0
        −11/30   13/30   2/15 −1/5
          1/2     0      0     0 ].
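Since the four linear systems in Problem 30 share the coefficient matrix A, the whole change-of-basis matrix can be produced by solving A c = b once per basis vector. The sketch below uses exact rational arithmetic; the helper `solve` is an illustrative implementation, not something from the text.

```python
from fractions import Fraction as F

# Columns of A hold the coefficients of the basis-B polynomials
# -2+3x+4x^2-x^3, 3x+5x^2+2x^3, -5x^2-5x^3, 4+4x+4x^2.
A = [[F(-2), F(0), F(0),  F(4)],
     [F(3),  F(3), F(0),  F(4)],
     [F(4),  F(5), F(-5), F(4)],
     [F(-1), F(2), F(-5), F(0)]]

def solve(A, b):
    """Gaussian elimination with back-substitution, exact over the rationals."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = next(r for r in range(i, n) if M[r][i] != 0)   # pivot row
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            M[r] = [x - f * y for x, y in zip(M[r], M[i])]
    x = [F(0)] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# Coefficient vectors of w1 = 1 - x^3, w2 = 1 + x, w3 = x + x^2, w4 = x^2 + x^3.
ws = [[F(1), F(0), F(0), F(-1)],
      [F(1), F(1), F(0), F(0)],
      [F(0), F(1), F(1), F(0)],
      [F(0), F(0), F(1), F(1)]]
P = [solve(A, w) for w in ws]   # the columns of P_{B<-C}
```

Each entry of `P` is one column [wi]B of PB←C.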

31. We could simply compute the inverse of the matrix obtained in Problem 25. For instructive purposes, however, we proceed directly. Let w1 = E22, w2 = E11, w3 = E21, and w4 = E12. We see by inspection that

[w1]B = [ 0, 1, 0, 0 ]^T,  [w2]B = [ 0, 0, 0, 1 ]^T,  [w3]B = [ 0, 0, 1, 0 ]^T,  [w4]B = [ 1, 0, 0, 0 ]^T.

Therefore,

PB←C = [ 0 0 0 1
         1 0 0 0
         0 0 1 0
         0 1 0 0 ].


32. We first compute [v]B and [v]C directly. Setting

(−5, 3) = c1(9, 2) + c2(4,−3)

and solving, we obtain c1 = −3/35 and c2 = −37/35. Thus, [v]B = [ −3/35, −37/35 ]^T. Setting

(−5, 3) = c1(2, 1) + c2(−3, 1)

and solving, we obtain c1 = 4/5 and c2 = 11/5. Thus, [v]C = [ 4/5, 11/5 ]^T. Now, according to Problem 16, PC←B = [3 −1; −1 −2], so

PC←B [v]B = [ 3(−3/35) + (−1)(−37/35), (−1)(−3/35) + (−2)(−37/35) ]^T = [ 4/5, 11/5 ]^T = [v]C,

which confirms Equation (4.7.6).
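The check PC←B [v]B = [v]C can also be carried out in exact arithmetic. A minimal sketch using the data from Problem 32:

```python
from fractions import Fraction as F

# Change-of-basis matrix P_{C<-B} and the coordinate vectors of v = (-5, 3),
# as computed in Problem 32.
P = [[F(3), F(-1)],
     [F(-1), F(-2)]]
v_B = [F(-3, 35), F(-37, 35)]
v_C = [F(4, 5), F(11, 5)]

# Equation (4.7.6): P_{C<-B} [v]_B = [v]_C
product = [sum(P[i][j] * v_B[j] for j in range(2)) for i in range(2)]
assert product == v_C
```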

33. We first compute [v]B and [v]C directly. Setting

(−1, 2, 0) = c1(−7, 4, 4) + c2(4, 2,−1) + c3(−7, 5, 0)

and solving, we obtain c1 = 3/43, c2 = 12/43, and c3 = 10/43. Thus, [v]B = [ 3/43, 12/43, 10/43 ]^T. Setting

(−1, 2, 0) = c1(1, 1, 0) + c2(0, 1, 1) + c3(3,−1,−1)

and solving, we obtain c1 = 2, c2 = −1, and c3 = −1. Thus, [v]C = [ 2, −1, −1 ]^T. Now, according to Problem 19,

PC←B = [ 0     3     5
         5/3  −2/3  −4
        −7/3   1/3  −4 ],

so

PC←B [v]B = [ 2, −1, −1 ]^T = [v]C,

which confirms Equation (4.7.6).

34. We first compute [v]B and [v]C directly. Setting

6 − 4x = c1(7 − 4x) + c2(5x)

and solving, we obtain c1 = 6/7 and c2 = −4/35. Thus, [v]B = [ 6/7, −4/35 ]^T. Next, setting

6 − 4x = c1(1 − 2x) + c2(2 + x)

and solving, we obtain c1 = 14/5 and c2 = 8/5. Thus, [v]C = [ 14/5, 8/5 ]^T. Now, according to Problem 20, PC←B = [3 −2; 2 1], so

PC←B [v]B = [ 3(6/7) − 2(−4/35), 2(6/7) + (−4/35) ]^T = [ 14/5, 8/5 ]^T = [v]C,

which confirms Equation (4.7.6).

35. We first compute [v]B and [v]C directly. Setting

5 − x + 3x^2 = c1(−4 + x − 6x^2) + c2(6 + 2x^2) + c3(−6 − 2x + 4x^2)

and solving, we obtain c1 = 1, c2 = 5/2, and c3 = 1. Thus, [v]B = [ 1, 5/2, 1 ]^T. Next, setting

5 − x + 3x^2 = c1(1 − x + 3x^2) + c2(2) + c3(3 + x^2)

and solving, we obtain c1 = 1, c2 = 2, and c3 = 0. Thus, [v]C = [ 1, 2, 0 ]^T. Now, according to Problem 21,

PC←B = [ −1  0   2
          3  0  −1
         −3  2  −2 ],

so

PC←B [v]B = [ 1, 2, 0 ]^T = [v]C,

which confirms Equation (4.7.6).

36. We first compute [v]B and [v]C directly. Setting

[−1 −1; −4 5] = c1[1 0; −1 −2] + c2[0 −1; 3 0] + c3[3 5; 0 0] + c4[−2 −4; 0 0]

and solving, we obtain c1 = −5/2, c2 = −13/6, c3 = 37/6, and c4 = 17/2. Thus, [v]B = [ −5/2, −13/6, 37/6, 17/2 ]^T. Next, setting

[−1 −1; −4 5] = c1[1 1; 1 1] + c2[1 1; 1 0] + c3[1 1; 0 0] + c4[1 0; 0 0]

and solving, we find c1 = 5, c2 = −9, c3 = 3, and c4 = 0. Thus, [v]C = [ 5, −9, 3, 0 ]^T. Now, according to Problem 24,

PC←B = [ −2  0  0  0
          1  3  0  0
          1 −4  5 −4
          1  1 −2  2 ],

and so

PC←B [v]B = [ 5, −9, 3, 0 ]^T = [v]C,

which confirms Equation (4.7.6).

37. Write x = a1v1 + a2v2 + · · ·+ anvn. We have

cx = c(a1v1 + a2v2 + · · ·+ anvn) = (ca1)v1 + (ca2)v2 + · · ·+ (can)vn.

Hence,

[cx]B = [ ca1, ca2, . . . , can ]^T = c [ a1, a2, . . . , an ]^T = c[x]B.

38. We must show that {v1,v2, . . . ,vn} is linearly independent and spans V .

Check linear independence: Assume that

c1v1 + c2v2 + · · ·+ cnvn = 0.

We wish to show that c1 = c2 = · · · = cn = 0. Now by assumption, the zero vector 0 can be uniquely written as a linear combination of the vectors in {v1,v2, . . . ,vn}. Since

0 = 0 · v1 + 0 · v2 + · · ·+ 0 · vn,

we therefore conclude that c1 = c2 = · · · = cn = 0,

as needed.

Check spanning property: Let v be an arbitrary vector in V . By assumption, it is possible to express v (uniquely) as a linear combination of the vectors in {v1,v2, . . . ,vn}; say

v = a1v1 + a2v2 + · · ·+ anvn.

Therefore, v lies in span{v1,v2, . . . ,vn}. Since v is an arbitrary member of V , we conclude that {v1,v2, . . . ,vn} spans V .

39. Let B = {v1,v2, . . . ,vn} and let C = {vσ(1),vσ(2), . . . ,vσ(n)}, where σ is a permutation of the set {1, 2, . . . , n}. We will show that PC←B contains exactly one 1 in each row and column, and zeroes elsewhere (the argument for PB←C is essentially identical, or can be deduced from the fact that PB←C is the inverse of PC←B).

Let i be in {1, 2, . . . , n}. The ith column of PC←B is [vi]C . Suppose that ki ∈ {1, 2, . . . , n} is such that σ(ki) = i. Then [vi]C is a column vector with a 1 in the kith position and zeroes elsewhere. Since the values k1, k2, . . . , kn are distinct, we see that each column of PC←B contains a single 1 (with zeroes elsewhere) in a different position from any other column. Hence, when we consider all n columns as a whole, each position in the column vector must have a 1 occurring exactly once in one of the columns. Therefore, PC←B contains exactly one 1 in each row and column, and zeroes elsewhere.
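The argument above can be sketched in code. The helper name `p_c_from_b` and the 0-indexed encoding of σ are illustrative choices, not from the text:

```python
def p_c_from_b(sigma):
    """Return P_{C<-B} for C = (v_{sigma(0)}, ..., v_{sigma(n-1)}).

    sigma is a 0-indexed permutation list: sigma[k] is the index of the
    B-vector sitting in position k of C. Column i of the result is [v_i]_C,
    which has a single 1 in the row k with sigma[k] = i.
    """
    n = len(sigma)
    P = [[0] * n for _ in range(n)]
    for k in range(n):
        P[k][sigma[k]] = 1   # [v_{sigma[k]}]_C = e_k
    return P

# Example: sigma = (2, 0, 1) reorders (v1, v2, v3) into (v3, v1, v2).
P = p_c_from_b([2, 0, 1])
# Exactly one 1 in every row and every column, as Problem 39 claims:
assert all(sum(row) == 1 for row in P)
assert all(sum(col) == 1 for col in zip(*P))
```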


Solutions to Section 4.8

True-False Review:

1. TRUE. Note that rowspace(A) is a subspace of Rn and colspace(A) is a subspace of Rm, so certainly if rowspace(A) = colspace(A), then Rn and Rm must be the same. That is, m = n.

2. FALSE. A basis for the row space of A consists of the nonzero row vectors of any row-echelon form of A.

3. FALSE. The column vectors of the original matrix A that correspond to the pivot columns of a row-echelon form of A form a basis for colspace(A).

4. TRUE. Both rowspace(A) and colspace(A) have dimension equal to rank(A), the number of nonzero rows in a row-echelon form of A. Equivalently, their dimensions are both equal to the number of pivots occurring in a row-echelon form of A.

5. TRUE. For an invertible n × n matrix, rank(A) = n. That means there are n nonzero rows in a row-echelon form of A, and so rowspace(A) is n-dimensional. Therefore, we conclude that rowspace(A) = Rn.

6. TRUE. This follows immediately from the (true) statements in True-False Review Questions 4-5 above.

Problems:

1. A row-echelon form of A is [1 −2; 0 0]. Consequently, a basis for rowspace(A) is {(1,−2)}, whereas a basis for colspace(A) is {(1,−3)}.

2. A row-echelon form of A is [1 1 −3 2; 0 1 −2 1]. Consequently, a basis for rowspace(A) is {(1, 1,−3, 2), (0, 1,−2, 1)}, whereas a basis for colspace(A) is {(1, 3), (1, 4)}.

3. A row-echelon form of A is

[ 1 2 3
  0 1 2
  0 0 0 ].

Consequently, a basis for rowspace(A) is {(1, 2, 3), (0, 1, 2)}, whereas a basis for colspace(A) is {(1, 5, 9), (2, 6, 10)}.

4. A row-echelon form of A is

[ 0 1 1/3
  0 0 0
  0 0 0 ].

Consequently, a basis for rowspace(A) is {(0, 1, 1/3)}, whereas a basis for colspace(A) is {(3,−6, 12)}.

5. A row-echelon form of A is

[ 1 2 −1 3
  0 0  0 1
  0 0  0 0
  0 0  0 0 ].

Consequently, a basis for rowspace(A) is {(1, 2,−1, 3), (0, 0, 0, 1)}, whereas a basis for colspace(A) is {(1, 3, 1, 5), (3, 5,−1, 7)}.

6. We can reduce A to

[ 1 −1  2   3
  0  2 −4   3
  0  0  6 −13 ].

(Note: This is not row-echelon form, but it is not necessary to bring the leading nonzero element in each row to 1.) Consequently, a basis for rowspace(A) is {(1,−1, 2, 3), (0, 2,−4, 3), (0, 0, 6,−13)}, whereas a basis for colspace(A) is {(1, 1, 3), (−1, 1, 1), (2,−2, 4)}.
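The row-space/column-space recipe used in Problems 1-6 is easy to mechanize: reduce, read off the nonzero rows, and take the columns of the original matrix over the pivot positions. A sketch in exact arithmetic, applied to a matrix reconstructed from Problem 3's stated answers (the original A is not reproduced in this excerpt):

```python
from fractions import Fraction as F

def rref(rows):
    """Reduced row-echelon form over the rationals; returns (rref, pivot columns)."""
    M = [[F(x) for x in row] for row in rows]
    pivots, r = [], 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue                      # no pivot in this column
        M[r], M[piv] = M[piv], M[r]
        M[r] = [x / M[r][c] for x in M[r]]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c]
                M[i] = [x - f * y for x, y in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    return M, pivots

A = [[1, 2, 3], [5, 6, 7], [9, 10, 11]]   # assumed stand-in for Problem 3
R, pivots = rref(A)

# Nonzero rows of R span rowspace(A); columns of A over the pivots span colspace(A).
col_basis = [[row[j] for row in A] for j in pivots]
print(pivots, col_basis)   # [0, 1] [[1, 5, 9], [2, 6, 10]]
```

Note that the answers in the text quote a (non-reduced) row-echelon form, so the row basis printed here spans the same row space but looks different.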

7. We determine a basis for the rowspace of the matrix

[ 1 −1  2
  5 −4  1
  7 −5 −4 ].

A row-echelon form of this matrix is

[ 1 −1  2
  0  1 −9
  0  0  0 ].

Consequently, a basis for the subspace spanned by the given vectors is {(1,−1, 2), (0, 1,−9)}.

8. We determine a basis for the rowspace of the matrix

[ 1 3  3
  1 5 −1
  2 7  4
  1 4  1 ].

A row-echelon form of this matrix is

[ 1 3  3
  0 1 −2
  0 0  0
  0 0  0 ].

Consequently, a basis for the subspace spanned by the given vectors is {(1, 3, 3), (0, 1,−2)}.

9. We determine a basis for the rowspace of the matrix

[ 1 1 −1  2
  2 1  3 −4
  1 2 −6 10 ].

A row-echelon form of this matrix is

[ 1 1 −1 2
  0 1 −5 8
  0 0  0 0 ].

Consequently, a basis for the subspace spanned by the given vectors is {(1, 1,−1, 2), (0, 1,−5, 8)}.

10. We determine a basis for the rowspace of the matrix

[ 1 4 1 3
  2 8 3 5
  1 4 0 4
  2 8 2 6 ].

A row-echelon form of this matrix is

[ 1 4 1  3
  0 0 1 −1
  0 0 0  0
  0 0 0  0 ].

Consequently, a basis for the subspace spanned by the given vectors is {(1, 4, 1, 3), (0, 0, 1,−1)}.

11. A row-echelon form of A is [1 −3; 0 0]. Consequently, a basis for rowspace(A) is {(1,−3)}, whereas a basis for colspace(A) is {(−3, 1)}. Both of these subspaces are lines in the xy-plane.

12. (a) A row-echelon form of A is

[ 1 2 4
  0 1 1
  0 0 0 ].

Consequently, a basis for rowspace(A) is {(1, 2, 4), (0, 1, 1)}, whereas a basis for colspace(A) is {(1, 5, 3), (2, 11, 7)}.

(b) We see that both of these subspaces are 2-dimensional, and therefore each corresponds geometrically to a plane. By inspection, we see that the two basis vectors for rowspace(A) satisfy the equation 2x + y − z = 0, and therefore rowspace(A) corresponds to the plane with this equation. Similarly, we see that the two basis vectors for colspace(A) satisfy the equation 2x − y + z = 0, and therefore colspace(A) corresponds to the plane with this equation.

13. If A = [1 1; 2 2], colspace(A) is spanned by (1, 2), but if we permute the two rows of A, we obtain a new matrix whose column space is spanned by (2, 1). On the other hand, if we multiply the first row by 2, then we obtain a new matrix whose column space is spanned by (2, 2). And if we add −2 times the first row to the second row, we obtain a new matrix whose column space is spanned by (1, 0). Therefore, in all cases, colspace(A) is altered by the row operations performed.


14. Many examples are possible here, but an easy 2 × 2 example is the matrix A = [1 1; −1 −1]. We have rowspace(A) = {(r, r) : r ∈ R}, while colspace(A) = {(r,−r) : r ∈ R}. Thus, rowspace(A) and colspace(A) have no nonzero vectors in common.

Solutions to Section 4.9

True-False Review:

1. FALSE. For example, consider the 7 × 3 zero matrix, 07×3. We have rank(07×3) = 0, and therefore by the Rank-Nullity Theorem, nullity(07×3) = 3. But |m − n| = |7 − 3| = 4. Many other examples can be given. In particular, provided that m > 2n, the m × n zero matrix will show the falsity of the statement.

2. FALSE. In this case, rowspace(A) is a subspace of R9, hence it cannot possibly be equal to R7. The correct conclusion here should have been that rowspace(A) is a 7-dimensional subspace of R9.

3. TRUE. By the Rank-Nullity Theorem, rank(A) = 7, and therefore, rowspace(A) is a 7-dimensional subspace of R7. Hence, rowspace(A) = R7.

4. FALSE. For instance, the matrix A = [0 1; 0 0] is an upper triangular matrix with two zeros appearing on the main diagonal. However, since rank(A) = 1, we also have nullity(A) = 1.

5. TRUE. An invertible matrix A must have nullspace(A) = {0}, but if colspace(A) were also {0}, then A would be the zero matrix, which is certainly not invertible.

6. FALSE. For instance, if we take A = [1 0; 0 0] and B = [0 1; 0 0], then nullity(A) + nullity(B) = 1 + 1 = 2, but A + B = [1 1; 0 0], and nullity(A + B) = 1.

7. FALSE. For instance, if we take A = [1 0; 0 0] and B = [0 0; 0 1], then nullity(A) · nullity(B) = 1 · 1 = 1, but AB = 02, and nullity(AB) = 2.

8. TRUE. If x belongs to the nullspace of B, then Bx = 0. Therefore, (AB)x = A(Bx) = A0 = 0, so that x also belongs to the nullspace of AB. Thus, nullspace(B) is a subspace of nullspace(AB). Hence, nullity(B) ≤ nullity(AB), as claimed.

9. TRUE. If y belongs to nullspace(A), then Ay = 0. Hence, if Axp = b, then

A(y + xp) = Ay + Axp = 0 + b = b,

which demonstrates that y + xp is also a solution to the linear system Ax = b.

Problems:

1. The matrix is already in row-echelon form. A vector (x, y, z, w) in nullspace(A) must satisfy x − 6z − w = 0. We see that y, z, and w are free variables, and

nullspace(A) = { (6z + w, y, z, w) : y, z, w ∈ R } = span{ (0, 1, 0, 0), (6, 0, 1, 0), (1, 0, 0, 1) }.

Therefore, nullity(A) = 3. Moreover, since this row-echelon form contains one nonzero row, rank(A) = 1. Since the number of columns of A is 4 = 1 + 3, the Rank-Nullity Theorem is verified.

2. We bring A to row-echelon form:

REF(A) = [ 1 −1/2
           0   0  ].

A vector (x, y) in nullspace(A) must satisfy x − (1/2)y = 0. Setting y = 2t, we get x = t. Therefore,

nullspace(A) = { (t, 2t) : t ∈ R } = span{ (1, 2) }.

Therefore, nullity(A) = 1. Since REF(A) contains one nonzero row, rank(A) = 1. Since the number of columns of A is 2 = 1 + 1, the Rank-Nullity Theorem is verified.

3. We bring A to row-echelon form:

REF(A) = [ 1 1 −1
           0 1  7
           0 0  1 ].

Since there are no unpivoted columns, there are no free variables in the associated homogeneous linear system, and so

nullspace(A) = {0}.

Therefore, nullity(A) = 0. Since REF(A) contains three nonzero rows, rank(A) = 3. Since the number of columns of A is 3 = 3 + 0, the Rank-Nullity Theorem is verified.

4. We bring A to row-echelon form:

REF(A) = [ 1 4 −1 3
           0 1  1 1
           0 0  0 0 ].

A vector (x, y, z, w) in nullspace(A) must satisfy x + 4y − z + 3w = 0 and y + z + w = 0. We see that z and w are free variables. Set z = t and w = s. Then y = −z − w = −t − s and x = z − 4y − 3w = t − 4(−t − s) − 3s = 5t + s. Therefore,

nullspace(A) = { (5t + s, −t − s, t, s) : s, t ∈ R } = span{ (5,−1, 1, 0), (1,−1, 0, 1) }.

Therefore, nullity(A) = 2. Moreover, since REF(A) contains two nonzero rows, rank(A) = 2. Since the number of columns of A is 4 = 2 + 2, the Rank-Nullity Theorem is verified.
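The rank-plus-nullity bookkeeping above is easy to check by machine. The sketch below uses a stand-in matrix with the same row-echelon form as the one quoted in Problem 4 (the original A is not reproduced in this excerpt; here the third row is the sum of the first two):

```python
from fractions import Fraction as F

A = [[1, 4, -1, 3],
     [0, 1,  1, 1],
     [1, 5,  0, 4]]   # assumed stand-in; row 3 = row 1 + row 2

def rank(rows):
    """Rank via Gaussian elimination with exact arithmetic."""
    M = [[F(x) for x in row] for row in rows]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, len(M)):
            f = M[i][c] / M[r][c]
            M[i] = [x - f * y for x, y in zip(M[i], M[r])]
        r += 1
    return r

nullity = len(A[0]) - rank(A)   # Rank-Nullity Theorem: nullity = #columns - rank
print(rank(A), nullity)         # 2 2

# The basis vectors found in Problem 4 do lie in nullspace(A):
for v in [(5, -1, 1, 0), (1, -1, 0, 1)]:
    assert all(sum(a * x for a, x in zip(row, v)) == 0 for row in A)
```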

5. Since all rows (or columns) of this matrix are proportional to the first one, rank(A) = 1. Since A has two columns, we conclude from the Rank-Nullity Theorem that

nullity(A) = 2 − rank(A) = 2 − 1 = 1.

6. The first and last rows of A are not proportional, but the middle rows are proportional to the first row. Therefore, rank(A) = 2. Since A has five columns, we conclude from the Rank-Nullity Theorem that

nullity(A) = 5 − rank(A) = 5 − 2 = 3.


7. Since the second and third columns are not proportional and the first column is all zeros, we have rank(A) = 2. Since A has three columns, we conclude from the Rank-Nullity Theorem that

nullity(A) = 3 − rank(A) = 3 − 2 = 1.

8. This matrix (already in row-echelon form) has one nonzero row, so rank(A) = 1. Since it has four columns, we conclude from the Rank-Nullity Theorem that

nullity(A) = 4 − rank(A) = 4 − 1 = 3.

9. The augmented matrix for this linear system is

[ 1 3 −1  4
  2 7  9 11
  1 5 21 10 ].

We quickly reduce this augmented matrix to row-echelon form:

[ 1 3 −1 4
  0 1 11 3
  0 0  0 0 ].

A solution (x, y, z) to the system will have a free variable corresponding to the third column: z = t. Then y + 11t = 3, so y = 3 − 11t. Finally, x + 3y − z = 4, so x = 4 + t − 3(3 − 11t) = −5 + 34t. Thus, the solution set is

{ (−5 + 34t, 3 − 11t, t) : t ∈ R } = { t(34,−11, 1) + (−5, 3, 0) : t ∈ R }.

Observe that xp = (−5, 3, 0) is a particular solution to Ax = b, and that (34,−11, 1) forms a basis for nullspace(A). Therefore, the set of solution vectors obtained does indeed take the form (4.9.3).
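The structure "particular solution plus nullspace" from (4.9.3) can be verified directly for Problem 9:

```python
# System from Problem 9: A x = b.
A = [[1, 3, -1],
     [2, 7,  9],
     [1, 5, 21]]
b = [4, 11, 10]

xp = (-5, 3, 0)       # particular solution found above
n1 = (34, -11, 1)     # basis vector for nullspace(A)

matvec = lambda M, v: [sum(a * x for a, x in zip(row, v)) for row in M]
assert matvec(A, xp) == b              # xp solves the system
assert matvec(A, n1) == [0, 0, 0]      # n1 lies in nullspace(A)

# Hence every x = xp + t*n1 solves the system; spot-check t = 7:
t = 7
x = [p + t * n for p, n in zip(xp, n1)]
assert matvec(A, x) == b
```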

10. The augmented matrix for this linear system is

[ 1 −1 2 3  6
  1 −2 5 5 13
  2 −1 1 4  5 ].

We quickly reduce this augmented matrix to row-echelon form:

[ 1 −1  2  3  6
  0  1 −3 −2 −7
  0  0  0  0  0 ].

A solution (x, y, z, w) to the system will have free variables corresponding to the third and fourth columns: z = t and w = s. Then y − 3z − 2w = −7 requires that y = −7 + 3t + 2s, and x − y + 2z + 3w = 6 requires that x = 6 + (−7 + 3t + 2s) − 2t − 3s = −1 + t − s. Thus, the solution set is

{ (−1 + t − s, −7 + 3t + 2s, t, s) : s, t ∈ R } = { t(1, 3, 1, 0) + s(−1, 2, 0, 1) + (−1,−7, 0, 0) : s, t ∈ R }.

Observe that xp = (−1,−7, 0, 0) is a particular solution to Ax = b, and that { (1, 3, 1, 0), (−1, 2, 0, 1) } is a basis for nullspace(A). Therefore, the set of solution vectors obtained does indeed take the form (4.9.3).

11. The augmented matrix for this linear system is

[ 1  1 −2 −3
  3 −1 −7  2
  1  1  1  0
  2  2 −4 −6 ].

We quickly reduce this augmented matrix to row-echelon form:

[ 1 1 −2    −3
  0 1  1/4 −11/4
  0 0  1     1 ].

There are no free variables in the solution set (since none of the first three columns is unpivoted), and we find the solution set by back-substitution: (2,−3, 1). It is easy to see that this is indeed a particular solution: xp = (2,−3, 1). Since the row-echelon form of A has three nonzero rows, rank(A) = 3. Thus, nullity(A) = 0. Hence, nullspace(A) = {0}. Thus, the only term in the expression (4.9.3) that appears in the solution is xp, and this is precisely the unique solution we obtained in the calculations above.

12. By inspection, we see that a particular solution to this (homogeneous) linear system is xp = 0. We quickly reduce the augmented matrix to row-echelon form:

[ 1 1 −1    5   0
  0 1 −1/2  7/2 0
  0 0  0    0   0 ].

A solution (x, y, z, w) to the system will have free variables corresponding to the third and fourth columns: z = t and w = s. Then y − (1/2)z + (7/2)w = 0 requires that y = (1/2)t − (7/2)s, and x + y − z + 5w = 0 requires that x = t − ((1/2)t − (7/2)s) − 5s = (1/2)t − (3/2)s. Thus, the solution set is

{ ((1/2)t − (3/2)s, (1/2)t − (7/2)s, t, s) : s, t ∈ R } = { t(1/2, 1/2, 1, 0) + s(−3/2,−7/2, 0, 1) : s, t ∈ R }.

Since the vectors (1/2, 1/2, 1, 0) and (−3/2,−7/2, 0, 1) form a basis for nullspace(A), our solutions take the proper form given in (4.9.3).


13. By the Rank-Nullity Theorem, rank(A) = 7 − nullity(A) = 7 − 4 = 3, and hence, colspace(A) is 3-dimensional. But since A has three rows, colspace(A) is a subspace of R3. Therefore, since the only 3-dimensional subspace of R3 is R3 itself, we conclude that colspace(A) = R3. Now rowspace(A) is also 3-dimensional, but it is a subspace of R5. Therefore, it is not accurate to say that rowspace(A) = R3.

14. By the Rank-Nullity Theorem, rank(A) = 4 − nullity(A) = 4 − 0 = 4, so we conclude that rowspace(A) is 4-dimensional. Since rowspace(A) is a subspace of R4 (since A contains four columns), and it is 4-dimensional, we conclude that rowspace(A) = R4. Although colspace(A) is 4-dimensional, colspace(A) is a subspace of R6, and therefore it is not accurate to say that colspace(A) = R4.

15. If rowspace(A) = nullspace(A), then we know that rank(A) = nullity(A). Therefore, rank(A) + nullity(A) must be even. But rank(A) + nullity(A) is the number of columns of A. Therefore, A contains an even number of columns.

16. We know that rank(A) + nullity(A) = 7. But since A only has five rows, rank(A) ≤ 5. Therefore, nullity(A) ≥ 2. However, since nullspace(A) is a subspace of R7, nullity(A) ≤ 7. Therefore, 2 ≤ nullity(A) ≤ 7. There are many examples of a 5 × 7 matrix A with nullity(A) = 2; one example is

[ 1 0 0 0 0 0 0
  0 1 0 0 0 0 0
  0 0 1 0 0 0 0
  0 0 0 1 0 0 0
  0 0 0 0 1 0 0 ].

The only 5 × 7 matrix with nullity(A) = 7 is 05×7, the 5 × 7 zero matrix.

17. We know that rank(A) + nullity(A) = 8. But since A only has three rows, rank(A) ≤ 3. Therefore, nullity(A) ≥ 5. However, since nullspace(A) is a subspace of R8, nullity(A) ≤ 8. Therefore, 5 ≤ nullity(A) ≤ 8. There are many examples of a 3 × 8 matrix A with nullity(A) = 5; one example is

[ 1 0 0 0 0 0 0 0
  0 1 0 0 0 0 0 0
  0 0 1 0 0 0 0 0 ].

The only 3 × 8 matrix with nullity(A) = 8 is 03×8, the 3 × 8 zero matrix.

18. If Bx = 0, then ABx = A0 = 0. This observation shows that nullspace(B) is a subspace of nullspace(AB). On the other hand, if ABx = 0, then Bx = (A−1A)Bx = A−1(AB)x = A−10 = 0, so Bx = 0. Therefore, nullspace(AB) is a subspace of nullspace(B). As a result, since nullspace(B) and nullspace(AB) are subspaces of each other, they must be equal: nullspace(AB) = nullspace(B). Therefore, nullity(AB) = nullity(B).

Solutions to Section 4.10

True-False Review:

1. TRUE. This follows from the equivalence of (a) and (m) in the Invertible Matrix Theorem.

2. FALSE. If the matrix has n linearly independent rows, then by the equivalence of (a) and (m) in the Invertible Matrix Theorem, such a matrix would be invertible. But if that were so, then by part (j) of the Invertible Matrix Theorem, such a matrix would have to have n linearly independent columns.

3. FALSE. If the matrix has n linearly independent columns, then by the equivalence of (a) and (j) in the Invertible Matrix Theorem, such a matrix would be invertible. But if that were so, then by part (m) of the Invertible Matrix Theorem, such a matrix would have to have n linearly independent rows.

4. FALSE. An n × n matrix A with det(A) = 0 is not invertible by part (g) of the Invertible Matrix Theorem. Therefore, by the equivalence of (a) and (l) in the Invertible Matrix Theorem, the columns of A do not form a basis for Rn.


5. TRUE. If rowspace(A) ≠ Rn, then by the equivalence of (a) and (n) in the Invertible Matrix Theorem, A is not invertible. Therefore, A is not row-equivalent to the identity matrix. Since B is row-equivalent to A, then B is not row-equivalent to the identity matrix, and therefore, B is not invertible. Hence, by part (k) of the Invertible Matrix Theorem, we conclude that colspace(B) ≠ Rn.

6. FALSE. If nullspace(A) = {0}, then A is invertible by the equivalence of (a) and (c) in the Invertible Matrix Theorem. Since E is an elementary matrix, it is also invertible. Therefore, EA is invertible. By part (g) of the Invertible Matrix Theorem, det(EA) ≠ 0, contrary to the statement given.

7. FALSE. The matrix [A|B] has 2n columns, but only n rows, and therefore rank([A|B]) ≤ n. Hence, by the Rank-Nullity Theorem, nullity([A|B]) ≥ n > 0.

8. TRUE. The first and third rows are proportional, and hence statement (m) in the Invertible Matrix Theorem is false. Therefore, by the equivalence with (a), the matrix is not invertible.

9. FALSE. For instance, the matrix

[ 0 1 0 0
  1 0 0 0
  0 0 0 1
  0 0 1 0 ]

is of the form given, but satisfies any (and all) of the statements of the Invertible Matrix Theorem.

10. FALSE. For instance, the matrix

[ 1 2 1
  3 6 2
  1 0 0 ]

is of the form given, but has a nonzero determinant, and so by part (g) of the Invertible Matrix Theorem, it is invertible.

Solutions to Section 4.11

1. FALSE. The converse of this statement is true, but for the given statement, many counterexamples exist. For instance, the vectors v = (1, 1) and w = (1, 0) in R2 are linearly independent, but they are not orthogonal.

2. FALSE. We have

〈kv, kw〉 = k〈v, kw〉 = k〈kw,v〉 = k2〈w,v〉 = k2〈v,w〉,

where we have used the axioms of an inner product to carry out these steps. Therefore, the result conflicts with the given statement, which must therefore be a false statement.

3. TRUE. We have

〈c1v1 + c2v2,w〉 = 〈c1v1,w〉+ 〈c2v2,w〉 = c1〈v1,w〉+ c2〈v2,w〉 = c1 · 0 + c2 · 0 = 0.

4. TRUE. We have

〈x + y, x − y〉 = 〈x,x〉 − 〈x,y〉 + 〈y,x〉 − 〈y,y〉 = 〈x,x〉 − 〈y,y〉 = ||x||^2 − ||y||^2.

This will be negative if and only if ||x||^2 < ||y||^2, and since ||x|| and ||y|| are nonnegative real numbers, ||x||^2 < ||y||^2 if and only if ||x|| < ||y||.

5. FALSE. For example, if V is the inner product space of integrable functions on (−∞,∞), then the formula

〈f, g〉 = ∫_a^b f(t)g(t) dt

is a valid inner product for any choice of real numbers a < b. See also Problem 9 in this section, in which a "non-standard" inner product on R2 is given.

6. TRUE. The angle θ between the vectors −2v and −2w satisfies

cos θ = ((−2v) · (−2w)) / (||−2v|| ||−2w||) = ((−2)^2 (v · w)) / ((−2)^2 ||v|| ||w||) = (v · w) / (||v|| ||w||),

and this is also the cosine of the angle between the vectors v and w.

7. FALSE. This definition of 〈p, q〉 will not satisfy the requirements of an inner product. For instance, if we take p = x, then 〈p, p〉 = 0, but p ≠ 0.

Problems:

1. 〈v,w〉 = 8, ||v|| = 3√3, ||w|| = √7. Hence,

cos θ = 〈v,w〉 / (||v|| ||w||) = 8/(3√21) =⇒ θ ≈ 0.95 radians.
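The angle formula cos θ = 〈v,w〉 / (||v|| ||w||) used in Problems 1-2 is a one-liner to evaluate numerically. Applied to Problem 1's data (only the inner product and the two norms are needed):

```python
import math

inner = 8                     # <v, w>
norm_v = 3 * math.sqrt(3)     # ||v||
norm_w = math.sqrt(7)         # ||w||

theta = math.acos(inner / (norm_v * norm_w))
print(round(theta, 2))   # 0.95
```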

2. 〈f, g〉 = ∫_0^π x sin x dx = π, ||f|| = √(∫_0^π sin^2 x dx) = √(π/2), ||g|| = √(∫_0^π x^2 dx) = √(π^3/3). Hence,

cos θ = π / ((π/2)^{1/2} (π^3/3)^{1/2}) = √6/π =⇒ θ ≈ 0.68 radians.

3. 〈v,w〉 = (2 + i)(−1 − i) + (3 − 2i)(1 + 3i) + (4 + i)(3 + i) = 19 + 11i. ||v|| = √〈v,v〉 = √35. ||w|| = √〈w,w〉 = √22.

4. Let A, B, C ∈ M2(R).
(1): 〈A,A〉 = a11^2 + a12^2 + a21^2 + a22^2 ≥ 0, and 〈A,A〉 = 0 ⇐⇒ a11 = a12 = a21 = a22 = 0 ⇐⇒ A = 0.
(2): 〈A,B〉 = a11b11 + a12b12 + a21b21 + a22b22 = b11a11 + b12a12 + b21a21 + b22a22 = 〈B,A〉.
(3): Let k ∈ R. 〈kA,B〉 = ka11b11 + ka12b12 + ka21b21 + ka22b22 = k(a11b11 + a12b12 + a21b21 + a22b22) = k〈A,B〉.
(4): 〈(A + B), C〉 = (a11 + b11)c11 + (a12 + b12)c12 + (a21 + b21)c21 + (a22 + b22)c22
= (a11c11 + b11c11) + (a12c12 + b12c12) + (a21c21 + b21c21) + (a22c22 + b22c22)
= (a11c11 + a12c12 + a21c21 + a22c22) + (b11c11 + b12c12 + b21c21 + b22c22)
= 〈A,C〉 + 〈B,C〉.

5. We need only demonstrate one example showing that some property of an inner product is violated by the given formula. Set A = [1 0; 0 −1]. Then according to the given formula, we have 〈A,A〉 = −2, violating the requirement that 〈u,u〉 ≥ 0 for all vectors u.

6. 〈A,B〉 = 2 · 3 + (−1)1 + 3(−1) + 5 · 2 = 12.
||A|| = √〈A,A〉 = √(2 · 2 + (−1)(−1) + 3 · 3 + 5 · 5) = √39.
||B|| = √〈B,B〉 = √(3 · 3 + 1 · 1 + (−1)(−1) + 2 · 2) = √15.

7. 〈A,B〉 = 13, ||A|| = √33, ||B|| = √7.


8. Let p1, p2, p3 ∈ P1, where p1(x) = a + bx, p2(x) = c + dx, and p3(x) = e + fx. Define

〈p1, p2〉 = ac + bd. (8.1)

The properties 1 through 4 of Definition 4.11.3 must be verified.
(1): 〈p1, p1〉 = a^2 + b^2 ≥ 0 and 〈p1, p1〉 = 0 ⇐⇒ a = b = 0 ⇐⇒ p1(x) = 0.
(2): 〈p1, p2〉 = ac + bd = ca + db = 〈p2, p1〉.
(3): Let k ∈ R. 〈kp1, p2〉 = kac + kbd = k(ac + bd) = k〈p1, p2〉.
(4): 〈p1 + p2, p3〉 = (a + c)e + (b + d)f = ae + ce + bf + df = (ae + bf) + (ce + df) = 〈p1, p3〉 + 〈p2, p3〉.
Hence, the mapping defined by (8.1) is an inner product on P1.

9. Property 1: 〈u,u〉 = 2u1u1 + u1u2 + u2u1 + 2u2u2 = 2u1^2 + 2u1u2 + 2u2^2 = (u1 + u2)^2 + u1^2 + u2^2 ≥ 0.
〈u,u〉 = 0 ⇐⇒ (u1 + u2)^2 + u1^2 + u2^2 = 0 ⇐⇒ u1 = 0 = u2 ⇐⇒ u = 0.
Property 2: 〈u,v〉 = 2u1v1 + u1v2 + u2v1 + 2u2v2 = 2v1u1 + v1u2 + v2u1 + 2v2u2 = 〈v,u〉.
Property 3: Let k ∈ R. Then
k〈u,v〉 = k(2u1v1 + u1v2 + u2v1 + 2u2v2) = 2(ku1)v1 + (ku1)v2 + (ku2)v1 + 2(ku2)v2 = 〈ku,v〉,
and likewise k〈u,v〉 = 2u1(kv1) + u1(kv2) + u2(kv1) + 2u2(kv2) = 〈u, kv〉.
Property 4: 〈(u + v),w〉 = 〈(u1 + v1, u2 + v2), (w1, w2)〉
= 2(u1 + v1)w1 + (u1 + v1)w2 + (u2 + v2)w1 + 2(u2 + v2)w2
= 2u1w1 + u1w2 + u2w1 + 2u2w2 + 2v1w1 + v1w2 + v2w1 + 2v2w2
= 〈u,w〉 + 〈v,w〉.
Therefore 〈u,v〉 = 2u1v1 + u1v2 + u2v1 + 2u2v2 defines an inner product on R2.

10. (a) Using the defined inner product: 〈v,w〉 = 〈(1, 0), (−1, 2)〉 = 2 · 1 · (−1) + 1 · 2 + 0 · (−1) + 2 · 0 · 2 = 0.
(b) Using the standard inner product: 〈v,w〉 = 〈(1, 0), (−1, 2)〉 = 1(−1) + 0 · 2 = −1 ≠ 0.

11. (a) Using the defined inner product: 〈v,w〉 = 2 · 2 · 3 + 2 · 6 + (−1)3 + 2(−1)6 = 9 ≠ 0.
(b) Using the standard inner product: 〈v,w〉 = 2 · 3 + (−1)6 = 0.

12. (a) Using the defined inner product: 〈v,w〉 = 2 · 1 · 2 + 1 · 1 + (−2) · 2 + 2(−2) · 1 = −3 ≠ 0.
(b) Using the standard inner product: 〈v,w〉 = 1 · 2 + (−2)(1) = 0.
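Problems 10-12 all contrast the non-standard inner product 〈u,v〉 = 2u1v1 + u1v2 + u2v1 + 2u2v2 with the ordinary dot product. A sketch checking both products on the data above:

```python
def inner(u, v):
    """Non-standard inner product on R^2 from Problem 9."""
    return 2*u[0]*v[0] + u[0]*v[1] + u[1]*v[0] + 2*u[1]*v[1]

def dot(u, v):
    """Standard inner product, for comparison."""
    return u[0]*v[0] + u[1]*v[1]

# Problem 10: orthogonal in the new product, not in the standard one.
assert inner((1, 0), (-1, 2)) == 0 and dot((1, 0), (-1, 2)) == -1
# Problem 11: the reverse situation.
assert inner((2, -1), (3, 6)) == 9 and dot((2, -1), (3, 6)) == 0
# Problem 12: again orthogonal only in the standard product.
assert inner((1, -2), (2, 1)) == -3 and dot((1, -2), (2, 1)) == 0
```

Orthogonality therefore depends on the choice of inner product, not just on the vectors.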


13. (a) Show symmetry: 〈v,w〉 = 〈w,v〉.
〈v,w〉 = 〈(v1, v2), (w1, w2)〉 = v1w1 − v2w2 = w1v1 − w2v2 = 〈(w1, w2), (v1, v2)〉 = 〈w,v〉.
(b) Show 〈kv,w〉 = k〈v,w〉 = 〈v, kw〉.
Note that kv = k(v1, v2) = (kv1, kv2) and kw = k(w1, w2) = (kw1, kw2).
〈kv,w〉 = 〈(kv1, kv2), (w1, w2)〉 = (kv1)w1 − (kv2)w2 = k(v1w1 − v2w2) = k〈(v1, v2), (w1, w2)〉 = k〈v,w〉.
Also, 〈v, kw〉 = 〈(v1, v2), (kw1, kw2)〉 = v1(kw1) − v2(kw2) = k(v1w1 − v2w2) = k〈v,w〉.
(c) Show 〈(u + v),w〉 = 〈u,w〉 + 〈v,w〉.
Let w = (w1, w2) and note that u + v = (u1 + v1, u2 + v2).
〈(u + v),w〉 = 〈(u1 + v1, u2 + v2), (w1, w2)〉 = (u1 + v1)w1 − (u2 + v2)w2
= u1w1 + v1w1 − u2w2 − v2w2 = u1w1 − u2w2 + v1w1 − v2w2
= 〈(u1, u2), (w1, w2)〉 + 〈(v1, v2), (w1, w2)〉 = 〈u,w〉 + 〈v,w〉.
Property 1 fails since, for example, 〈u,u〉 < 0 whenever |u2| > |u1|.

14. 〈v,v〉 = 0 =⇒ 〈(v1, v2), (v1, v2)〉 = 0 =⇒ v1^2 − v2^2 = 0 =⇒ v1^2 = v2^2 =⇒ |v1| = |v2|. Thus, in this space,
null vectors are given by {(v1, v2) ∈ R2 : |v1| = |v2|} or, equivalently, v = r(1, 1) or v = s(1,−1), where r, s ∈ R.

15. 〈v,v〉 < 0 =⇒ 〈(v1, v2), (v1, v2)〉 < 0 =⇒ v1^2 − v2^2 < 0 =⇒ v1^2 < v2^2. In this space, timelike vectors are given by {(v1, v2) ∈ R2 : v1^2 < v2^2}.

16. 〈v,v〉 > 0 =⇒ 〈(v1, v2), (v1, v2)〉 > 0 =⇒ v1^2 − v2^2 > 0 =⇒ v1^2 > v2^2. In this space, spacelike vectors are given by {(v1, v2) ∈ R2 : v1^2 > v2^2}.

17. [Figure 0.0.65 for Exercise 17: in the xy-plane, the null vectors lie along the lines y = ±x; the regions above and below these lines contain the timelike vectors, and the regions to the left and right contain the spacelike vectors.]

18. Suppose that some ki ≤ 0. If ei denotes the standard basis vector in Rn with 1 in the ith position and zeros elsewhere, then the given formula yields 〈ei, ei〉 = ki ≤ 0, violating the first axiom of an inner product. Therefore, if the given formula defines a valid inner product, then we must have that ki > 0 for all i.

Now we prove the converse. Suppose that each ki > 0. We verify the axioms of an inner product for the given proposed inner product. We have

〈v,v〉 = k1v1^2 + k2v2^2 + · · · + knvn^2 ≥ 0,

and we have equality if and only if v1 = v2 = · · · = vn = 0. For the second axiom, we have

〈w,v〉 = k1w1v1 + k2w2v2 + · · · + knwnvn = k1v1w1 + k2v2w2 + · · · + knvnwn = 〈v,w〉.

For the third axiom, we have

〈kv,w〉 = k1(kv1)w1 + k2(kv2)w2 + · · · + kn(kvn)wn = k[k1v1w1 + k2v2w2 + · · · + knvnwn] = k〈v,w〉.

Finally, for the fourth axiom, we have

〈u + v,w〉 = k1(u1 + v1)w1 + k2(u2 + v2)w2 + · · · + kn(un + vn)wn
= [k1u1w1 + k2u2w2 + · · · + knunwn] + [k1v1w1 + k2v2w2 + · · · + knvnwn]
= 〈u,w〉 + 〈v,w〉.

19. We have

〈v,0〉 = 〈v,0 + 0〉 = 〈0 + 0,v〉 = 〈0,v〉+ 〈0,v〉 = 〈v,0〉+ 〈v,0〉 = 2〈v,0〉,

which implies that 〈v,0〉 = 0.

20. (a) For all v,w ∈ V,

||v + w||^2 = 〈(v + w), (v + w)〉
= 〈v,v + w〉 + 〈w,v + w〉 (by Property 4)
= 〈v + w,v〉 + 〈v + w,w〉 (by Property 2)
= 〈v,v〉 + 〈w,v〉 + 〈v,w〉 + 〈w,w〉 (by Property 4)
= 〈v,v〉 + 〈v,w〉 + 〈w,v〉 + 〈w,w〉 (by Property 2)
= ||v||^2 + 2〈v,w〉 + ||w||^2.

(b) This follows immediately by substituting 〈v,w〉 = 0 in the formula given in part (a).

(c) (i) From part (a), it follows that ||v + w||^2 = ||v||^2 + 2〈v,w〉 + ||w||^2 and ||v − w||^2 = ||v + (−w)||^2 = ||v||^2 + 2〈v,−w〉 + ||−w||^2 = ||v||^2 − 2〈v,w〉 + ||w||^2. Thus,

||v + w||^2 − ||v − w||^2 = (||v||^2 + 2〈v,w〉 + ||w||^2) − (||v||^2 − 2〈v,w〉 + ||w||^2) = 4〈v,w〉.

(ii) ||v + w||^2 + ||v − w||^2 = (||v||^2 + 2〈v,w〉 + ||w||^2) + (||v||^2 − 2〈v,w〉 + ||w||^2) = 2(||v||^2 + ||w||^2).

21. For all v,w in a complex inner product space V,

||v + w||^2 = 〈v + w,v + w〉
= 〈v,v + w〉 + 〈w,v + w〉 (by Property 4)
= 〈v,v〉 + 〈v,w〉 + 〈w,v〉 + 〈w,w〉 (by Properties 2 and 4)
= ||v||^2 + ||w||^2 + 〈v,w〉 + conj(〈v,w〉) (since 〈w,v〉 = conj(〈v,w〉))
= ||v||^2 + 2 Re{〈v,w〉} + ||w||^2.

Solutions to Section 4.12


1. TRUE. An orthonormal basis is simply an orthogonal basis consisting of unit vectors.

2. FALSE. The converse of this statement is true, but for the given statement, many counterexamples exist. For instance, the set of vectors {(1, 1), (1, 0)} in R2 is linearly independent, but does not form an orthogonal set.

3. TRUE. We can verify easily that

∫_0^π cos t sin t dt = [sin^2 t / 2]_0^π = 0,

which means that {cos x, sin x} is an orthogonal set. Moreover, since they are non-proportional functions, they are linearly independent. Therefore, they comprise an orthogonal basis for the 2-dimensional inner product space span{cos x, sin x}.

4. FALSE. For instance, in R3 we can take the vectors x1 = (1, 0, 0), x2 = (1, 1, 0), and x3 = (0, 0, 1). Applying the Gram-Schmidt process to the ordered set {x1,x2,x3} yields the standard basis {e1, e2, e3}. However, applying the Gram-Schmidt process to the ordered set {x3,x2,x1} yields the basis {x3, x2, (1/2,−1/2, 0)} instead.

5. TRUE. This is the content of Theorem 4.12.7.

6. TRUE. The vector P(w,v) is a scalar multiple of the vector v, and since v is orthogonal to u, P(w,v) is also orthogonal to u, so that its projection onto u must be 0.

7. TRUE. We have

P(w1 + w2, v) = (〈w1 + w2,v〉/||v||^2) v = ((〈w1,v〉 + 〈w2,v〉)/||v||^2) v = (〈w1,v〉/||v||^2) v + (〈w2,v〉/||v||^2) v = P(w1,v) + P(w2,v).

Problems:

1. 〈(2,−1,1), (1,1,−1)〉 = 2 + (−1) + (−1) = 0;
〈(2,−1,1), (0,1,1)〉 = 0 + (−1) + 1 = 0;
〈(1,1,−1), (0,1,1)〉 = 0 + 1 + (−1) = 0.
Since each vector in the set is orthogonal to every other vector in the set, the vectors form an orthogonal set. To generate an orthonormal set, we divide each vector by its norm:
||(2,−1,1)|| = √(4 + 1 + 1) = √6, ||(1,1,−1)|| = √(1 + 1 + 1) = √3, and ||(0,1,1)|| = √(0 + 1 + 1) = √2.
Thus, the corresponding orthonormal set is:
{ (√6/6)(2,−1,1), (√3/3)(1,1,−1), (√2/2)(0,1,1) }.
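As a quick numerical sanity check of Problem 1 (a minimal Python sketch, not part of the text; the helper names dot and normalize are ours):

```python
from math import isclose, sqrt

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def normalize(v):
    # Divide a vector by its Euclidean norm.
    n = sqrt(dot(v, v))
    return tuple(a / n for a in v)

# The orthogonal set from Problem 1.
vectors = [(2, -1, 1), (1, 1, -1), (0, 1, 1)]

# Pairwise inner products vanish, so the set is orthogonal.
assert dot(vectors[0], vectors[1]) == 0
assert dot(vectors[0], vectors[2]) == 0
assert dot(vectors[1], vectors[2]) == 0

# Dividing by the norms yields unit vectors.
unit = [normalize(v) for v in vectors]
for u in unit:
    assert isclose(dot(u, u), 1.0)
```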

2. 〈(1,3,−1,1), (−1,1,1,−1)〉 = −1 + 3 + (−1) + (−1) = 0;
〈(1,3,−1,1), (1,0,2,1)〉 = 1 + 0 + (−2) + 1 = 0;
〈(−1,1,1,−1), (1,0,2,1)〉 = −1 + 0 + 2 + (−1) = 0.
Since all vectors in the set are orthogonal to each other, they form an orthogonal set. To generate an orthonormal set, we divide each vector by its norm:
||(1,3,−1,1)|| = √(1 + 9 + 1 + 1) = √12 = 2√3, ||(−1,1,1,−1)|| = √(1 + 1 + 1 + 1) = 2, and ||(1,0,2,1)|| = √(1 + 0 + 4 + 1) = √6.
Thus, an orthonormal set is:
{ (√3/6)(1,3,−1,1), (1/2)(−1,1,1,−1), (√6/6)(1,0,2,1) }.


3. 〈(1,2,−1,0), (1,0,1,2)〉 = 1 + 0 + (−1) + 0 = 0;
〈(1,2,−1,0), (−1,1,1,0)〉 = −1 + 2 + (−1) + 0 = 0;
〈(1,2,−1,0), (1,−1,−1,0)〉 = 1 + (−2) + 1 + 0 = 0;
〈(1,0,1,2), (−1,1,1,0)〉 = −1 + 0 + 1 + 0 = 0;
〈(1,0,1,2), (1,−1,−1,0)〉 = 1 + 0 + (−1) + 0 = 0;
〈(−1,1,1,0), (1,−1,−1,0)〉 = −1 + (−1) + (−1) + 0 = −3.
Hence, this is not an orthogonal set of vectors.

4. 〈(1,2,−1,0,3), (1,1,0,2,−1)〉 = 1 + 2 + 0 + 0 + (−3) = 0;
〈(1,2,−1,0,3), (4,2,−4,−5,−4)〉 = 4 + 4 + 4 + 0 + (−12) = 0;
〈(1,1,0,2,−1), (4,2,−4,−5,−4)〉 = 4 + 2 + 0 + (−10) + 4 = 0.
Since all vectors in the set are orthogonal to each other, they form an orthogonal set. To generate an orthonormal set, we divide each vector by its norm:
||(1,2,−1,0,3)|| = √(1 + 4 + 1 + 0 + 9) = √15, ||(1,1,0,2,−1)|| = √(1 + 1 + 0 + 4 + 1) = √7, and ||(4,2,−4,−5,−4)|| = √(16 + 4 + 16 + 25 + 16) = √77.
Thus, an orthonormal set is:
{ (√15/15)(1,2,−1,0,3), (√7/7)(1,1,0,2,−1), (√77/77)(4,2,−4,−5,−4) }.

5. We require that 〈v1,v2〉 = 〈v1,w〉 = 〈v2,w〉 = 0. Let w = (a,b,c) where a,b,c ∈ R.
〈v1,v2〉 = 〈(1,2,3), (1,1,−1)〉 = 0.
〈v1,w〉 = 〈(1,2,3), (a,b,c)〉 = 0 =⇒ a + 2b + 3c = 0.
〈v2,w〉 = 〈(1,1,−1), (a,b,c)〉 = 0 =⇒ a + b − c = 0.
Letting the free variable c = t ∈ R, the system has the solution a = 5t, b = −4t, and c = t. Consequently, {(1,2,3), (1,1,−1), (5t,−4t,t)} will form an orthogonal set whenever t ≠ 0. To determine the corresponding orthonormal set, we divide each vector by its norm:
||v1|| = √(1 + 4 + 9) = √14, ||v2|| = √(1 + 1 + 1) = √3, ||w|| = √(25t^2 + 16t^2 + t^2) = √(42t^2) = |t|√42 = t√42 if t ≥ 0.
Setting t = 1, an orthonormal set is:
{ (√14/14)(1,2,3), (√3/3)(1,1,−1), (√42/42)(5,−4,1) }.
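Since we are in R^3, the 2×3 homogeneous system in Problem 5 can also be solved with a cross product, which is orthogonal to both given vectors (a hedged Python sketch; the helpers cross and dot are ours):

```python
def cross(u, v):
    # Cross product in R^3: orthogonal to both u and v.
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

w = cross((1, 2, 3), (1, 1, -1))
# w = (-5, 4, -1), i.e. the solution family (5t, -4t, t) with t = -1.
assert w == (-5, 4, -1)
assert dot(w, (1, 2, 3)) == 0 and dot(w, (1, 1, -1)) == 0
```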

6. 〈(1−i, 3+2i), (2+3i, 1−i)〉 = (1−i)(2−3i) + (3+2i)(1+i) (conjugating the entries of the second vector) = (2 − 3i − 2i − 3) + (3 + 3i + 2i − 2) = (−1−5i) + (1+5i) = 0. The vectors are orthogonal.
||(1−i, 3+2i)|| = √((1−i)(1+i) + (3+2i)(3−2i)) = √(1 + 1 + 9 + 4) = √15.
||(2+3i, 1−i)|| = √((2+3i)(2−3i) + (1−i)(1+i)) = √(4 + 9 + 1 + 1) = √15.
Thus, the corresponding orthonormal set is:
{ (√15/15)(1−i, 3+2i), (√15/15)(2+3i, 1−i) }.

7. 〈(1−i, 1+i, i), (0, i, 1−i)〉 = (1−i)·0 + (1+i)(−i) + i(1+i) = 0.
〈(1−i, 1+i, i), (−3+3i, 2+2i, 2i)〉 = (1−i)(−3−3i) + (1+i)(2−2i) + i(−2i) = 0.
〈(0, i, 1−i), (−3+3i, 2+2i, 2i)〉 = 0 + i(2−2i) + (1−i)(−2i) = (2i + 2) + (−2i − 2) = 0.
Hence, the vectors are orthogonal. To obtain a corresponding orthonormal set, we divide each vector by its norm.
||(1−i, 1+i, i)|| = √((1−i)(1+i) + (1+i)(1−i) + i(−i)) = √(1 + 1 + 1 + 1 + 1) = √5.
||(0, i, 1−i)|| = √(0 + i(−i) + (1−i)(1+i)) = √(1 + 1 + 1) = √3.
||(−3+3i, 2+2i, 2i)|| = √((−3+3i)(−3−3i) + (2+2i)(2−2i) + 2i(−2i)) = √(9 + 9 + 4 + 4 + 4) = √30.
Consequently, an orthonormal set is:
{ (√5/5)(1−i, 1+i, i), (√3/3)(0, i, 1−i), (√30/30)(−3+3i, 2+2i, 2i) }.

8. Let z = a + bi where a, b ∈ R. We require that 〈v,w〉 = 0:
〈(1−i, 1+2i), (2+i, a+bi)〉 = 0 =⇒ (1−i)(2−i) + (1+2i)(a−bi) = 0 =⇒ 1 − 3i + a + 2b + (2a − b)i = 0.
Equating real parts and imaginary parts from the last equality results in the system:
a + 2b = −1, 2a − b = 3.
This system has the solution a = 1 and b = −1; hence z = 1 − i. Our desired orthogonal set is given by {(1−i, 1+2i), (2+i, 1−i)}.
||(1−i, 1+2i)|| = √((1−i)(1+i) + (1+2i)(1−2i)) = √(1 + 1 + 1 + 4) = √7.
||(2+i, 1−i)|| = √((2+i)(2−i) + (1−i)(1+i)) = √(4 + 1 + 1 + 1) = √7.
The corresponding orthonormal set is given by:
{ (√7/7)(1−i, 1+2i), (√7/7)(2+i, 1−i) }.
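These complex computations use 〈v,w〉 = Σ v_k · conj(w_k). A small Python sketch (the helper name cdot is ours) confirming the orthogonality claims of Problems 6 and 8:

```python
def cdot(v, w):
    # Complex inner product: sum of v_k times the conjugate of w_k.
    return sum(a * b.conjugate() for a, b in zip(v, w))

# Problem 6: the two vectors are orthogonal.
assert cdot((1 - 1j, 3 + 2j), (2 + 3j, 1 - 1j)) == 0

# Problem 8: z = 1 - i makes (2 + i, z) orthogonal to (1 - i, 1 + 2i).
assert cdot((1 - 1j, 1 + 2j), (2 + 1j, 1 - 1j)) == 0
```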

9. 〈f1,f2〉 = 〈1, sin πx〉 = ∫_{−1}^{1} sin πx dx = [−cos πx/π]_{−1}^{1} = 0.
〈f1,f3〉 = 〈1, cos πx〉 = ∫_{−1}^{1} cos πx dx = [sin πx/π]_{−1}^{1} = 0.
〈f2,f3〉 = 〈sin πx, cos πx〉 = ∫_{−1}^{1} sin πx cos πx dx = (1/2)∫_{−1}^{1} sin 2πx dx = [−cos 2πx/(4π)]_{−1}^{1} = 0.
Thus, the vectors are orthogonal.
||f1|| = √(∫_{−1}^{1} 1 dx) = √([x]_{−1}^{1}) = √2.
||f2|| = √(∫_{−1}^{1} sin^2 πx dx) = √(∫_{−1}^{1} (1 − cos 2πx)/2 dx) = √([x/2]_{−1}^{1} − [sin 2πx/(4π)]_{−1}^{1}) = 1.
||f3|| = √(∫_{−1}^{1} cos^2 πx dx) = √(∫_{−1}^{1} (1 + cos 2πx)/2 dx) = √([x/2]_{−1}^{1} + [sin 2πx/(4π)]_{−1}^{1}) = 1.
Consequently, { √2/2, sin πx, cos πx } is an orthonormal set of functions on [−1, 1].

10. 〈f1,f2〉 = 〈1, x〉 = ∫_{−1}^{1} 1·x dx = [x^2/2]_{−1}^{1} = 0.
〈f1,f3〉 = 〈1, (3x^2 − 1)/2〉 = ∫_{−1}^{1} (3x^2 − 1)/2 dx = [(x^3 − x)/2]_{−1}^{1} = 0.
〈f2,f3〉 = 〈x, (3x^2 − 1)/2〉 = ∫_{−1}^{1} x·(3x^2 − 1)/2 dx = (1/2)[3x^4/4 − x^2/2]_{−1}^{1} = 0.
Thus, the vectors are orthogonal.
||f1|| = √(∫_{−1}^{1} dx) = √([x]_{−1}^{1}) = √2.
||f2|| = √(∫_{−1}^{1} x^2 dx) = √([x^3/3]_{−1}^{1}) = √(2/3) = √6/3.
||f3|| = √(∫_{−1}^{1} ((3x^2 − 1)/2)^2 dx) = √((1/4)∫_{−1}^{1} (9x^4 − 6x^2 + 1) dx) = √((1/4)[9x^5/5 − 2x^3 + x]_{−1}^{1}) = √(2/5) = √10/5.
To obtain a set of orthonormal vectors, we divide each vector by its norm:
f1/||f1|| = (√2/2)f1, f2/||f2|| = (√6/2)f2, f3/||f3|| = (√10/2)f3.
Thus, { √2/2, (√6/2)x, (√10/4)(3x^2 − 1) } is an orthonormal set of vectors.
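These function-space inner products can be spot-checked numerically. The sketch below (our own helper inner, a simple composite Simpson rule; values are approximate, not exact) verifies that 1, x, and (3x^2 − 1)/2 from Problem 10 — the first three Legendre polynomials — are pairwise orthogonal on [−1, 1], with the squared norms computed above:

```python
def inner(f, g, a=-1.0, b=1.0, n=1000):
    # Composite Simpson's rule approximation of the L^2 inner product
    # <f, g> = integral from a to b of f(x) g(x) dx  (n must be even).
    h = (b - a) / n
    total = f(a) * g(a) + f(b) * g(b)
    for k in range(1, n):
        x = a + k * h
        total += (4 if k % 2 else 2) * f(x) * g(x)
    return total * h / 3

f1 = lambda x: 1.0
f2 = lambda x: x
f3 = lambda x: (3 * x**2 - 1) / 2

assert abs(inner(f1, f2)) < 1e-9
assert abs(inner(f1, f3)) < 1e-9
assert abs(inner(f2, f3)) < 1e-9
assert abs(inner(f1, f1) - 2.0) < 1e-9          # ||f1||^2 = 2
assert abs(inner(f2, f2) - 2.0 / 3.0) < 1e-9    # ||f2||^2 = 2/3
assert abs(inner(f3, f3) - 2.0 / 5.0) < 1e-9    # ||f3||^2 = 2/5
```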

11. 〈f1,f2〉 = ∫_{−1}^{1} sin πx sin 2πx dx = (1/2)∫_{−1}^{1} (cos πx − cos 3πx) dx = 0.
〈f1,f3〉 = ∫_{−1}^{1} sin πx sin 3πx dx = (1/2)∫_{−1}^{1} (cos 2πx − cos 4πx) dx = 0.
〈f2,f3〉 = ∫_{−1}^{1} sin 2πx sin 3πx dx = (1/2)∫_{−1}^{1} (cos πx − cos 5πx) dx = 0.
Therefore, {f1, f2, f3} is an orthogonal set.
||f1|| = √(∫_{−1}^{1} sin^2 πx dx) = √((1/2)∫_{−1}^{1} (1 − cos 2πx) dx) = √((1/2)[x − sin 2πx/(2π)]_{−1}^{1}) = 1.
||f2|| = √(∫_{−1}^{1} sin^2 2πx dx) = √((1/2)∫_{−1}^{1} (1 − cos 4πx) dx) = √((1/2)[x − sin 4πx/(4π)]_{−1}^{1}) = 1.
||f3|| = √(∫_{−1}^{1} sin^2 3πx dx) = √((1/2)∫_{−1}^{1} (1 − cos 6πx) dx) = √((1/2)[x − sin 6πx/(6π)]_{−1}^{1}) = 1.
Thus, it follows that {f1, f2, f3} is an orthonormal set of vectors on [−1, 1].

12. 〈f1,f2〉 = 〈cos πx, cos 2πx〉 = ∫_{−1}^{1} cos πx cos 2πx dx = (1/2)∫_{−1}^{1} (cos 3πx + cos πx) dx = 0.
〈f1,f3〉 = 〈cos πx, cos 3πx〉 = ∫_{−1}^{1} cos πx cos 3πx dx = (1/2)∫_{−1}^{1} (cos 4πx + cos 2πx) dx = 0.
〈f2,f3〉 = 〈cos 2πx, cos 3πx〉 = ∫_{−1}^{1} cos 2πx cos 3πx dx = (1/2)∫_{−1}^{1} (cos 5πx + cos πx) dx = 0.
Therefore, {f1, f2, f3} is an orthogonal set.
||f1|| = √(∫_{−1}^{1} cos^2 πx dx) = √((1/2)∫_{−1}^{1} (1 + cos 2πx) dx) = √((1/2)[x + sin 2πx/(2π)]_{−1}^{1}) = 1.
||f2|| = √(∫_{−1}^{1} cos^2 2πx dx) = √((1/2)∫_{−1}^{1} (1 + cos 4πx) dx) = √((1/2)[x + sin 4πx/(4π)]_{−1}^{1}) = 1.
||f3|| = √(∫_{−1}^{1} cos^2 3πx dx) = √((1/2)∫_{−1}^{1} (1 + cos 6πx) dx) = √((1/2)[x + sin 6πx/(6π)]_{−1}^{1}) = 1.
Thus, it follows that {f1, f2, f3} is an orthonormal set of vectors on [−1, 1].

13. It is easily verified that 〈A1,A2〉 = 0, 〈A1,A3〉 = 0, and 〈A2,A3〉 = 0. Thus we require a, b, c, d such that
〈A1,A4〉 = 0 =⇒ a + b − c + 2d = 0,
〈A2,A4〉 = 0 =⇒ −a + b + 2c + d = 0,
〈A3,A4〉 = 0 =⇒ a − 3b + 2d = 0.
Solving this system for a, b, c, and d, we obtain a = (3/2)c, b = −(1/2)c, d = 0. Thus,

A4 = [ (3/2)c, −(1/2)c; c, 0 ] = (c/2)[ 3, −1; 2, 0 ] = k[ 3, −1; 2, 0 ]

(rows separated by semicolons), where k is any nonzero real number.

14. Let v1 = (1,−1,−1) and v2 = (2,1,−1).
u1 = v1 = (1,−1,−1), ||u1|| = √(1 + 1 + 1) = √3.
〈v2,u1〉 = 〈(2,1,−1), (1,−1,−1)〉 = 2 − 1 + 1 = 2.
u2 = v2 − (〈v2,u1〉/||u1||^2) u1 = (2,1,−1) − (2/3)(1,−1,−1) = (1/3)(4,5,−1).
||u2|| = √((16 + 25 + 1)/9) = (1/3)√42.
Hence, an orthonormal basis is:
{ (√3/3)(1,−1,−1), (√42/42)(4,5,−1) }.

15. Let v1 = (2,1,−2) and v2 = (1,3,−1).
u1 = v1 = (2,1,−2), ||u1|| = √(4 + 1 + 4) = √9 = 3.
〈v2,u1〉 = 〈(1,3,−1), (2,1,−2)〉 = 2·1 + 1·3 + (−2)(−1) = 7.
u2 = v2 − (〈v2,u1〉/||u1||^2) u1 = (1,3,−1) − (7/9)(2,1,−2) = (5/9)(−1,4,1).
||u2|| = (5/9)√(1 + 16 + 1) = 5√2/3.
Hence, an orthonormal basis is:
{ (1/3)(2,1,−2), (√2/6)(−1,4,1) }.

16. Let v1 = (−1,1,1,1) and v2 = (1,2,1,2).
u1 = v1 = (−1,1,1,1), ||u1|| = √(1 + 1 + 1 + 1) = 2.
〈v2,u1〉 = 〈(1,2,1,2), (−1,1,1,1)〉 = 1(−1) + 2·1 + 1·1 + 2·1 = 4.
u2 = v2 − (〈v2,u1〉/||u1||^2) u1 = (1,2,1,2) − (−1,1,1,1) = (2,1,0,1).
||u2|| = √(4 + 1 + 0 + 1) = √6.
Hence, an orthonormal basis is:
{ (1/2)(−1,1,1,1), (√6/6)(2,1,0,1) }.

17. Let v1 = (1,0,−1,0), v2 = (1,1,−1,0), and v3 = (−1,1,0,1).
u1 = v1 = (1,0,−1,0), ||u1|| = √(1 + 0 + 1 + 0) = √2.
〈v2,u1〉 = 1·1 + 1·0 + (−1)(−1) + 0·0 = 2.
u2 = v2 − (〈v2,u1〉/||u1||^2) u1 = (1,1,−1,0) − (1,0,−1,0) = (0,1,0,0).
||u2|| = √(0 + 1 + 0 + 0) = 1.
〈v3,u1〉 = (−1)·1 + 1·0 + 0·(−1) + 1·0 = −1.
〈v3,u2〉 = (−1)·0 + 1·1 + 0·0 + 1·0 = 1.
u3 = v3 − (〈v3,u1〉/||u1||^2) u1 − (〈v3,u2〉/||u2||^2) u2 = (−1,1,0,1) + (1/2)(1,0,−1,0) − (0,1,0,0) = (1/2)(−1,0,−1,2).
||u3|| = (1/2)√(1 + 0 + 1 + 4) = √6/2.
Hence, an orthonormal basis is:
{ (√2/2)(1,0,−1,0), (0,1,0,0), (√6/6)(−1,0,−1,2) }.

18. Let v1 = (1,2,0,1), v2 = (2,1,1,0), and v3 = (1,0,2,1).
u1 = v1 = (1,2,0,1), ||u1|| = √(1 + 4 + 0 + 1) = √6.
〈v2,u1〉 = 2·1 + 1·2 + 1·0 + 0·1 = 4.
u2 = v2 − (〈v2,u1〉/||u1||^2) u1 = (2,1,1,0) − (2/3)(1,2,0,1) = (1/3)(4,−1,3,−2).
||u2|| = (1/3)√(16 + 1 + 9 + 4) = (1/3)√30.
〈v3,u1〉 = 1·1 + 0·2 + 2·0 + 1·1 = 2.
〈v3,u2〉 = 〈(1,0,2,1), (4/3,−1/3,1,−2/3)〉 = 4/3 + 0 + 2 − 2/3 = 8/3.
u3 = v3 − (〈v3,u1〉/||u1||^2) u1 − (〈v3,u2〉/||u2||^2) u2 = (1,0,2,1) − (1/3)(1,2,0,1) − (4/15)(4,−1,3,−2) = (2/5)(−1,−1,3,3).
||u3|| = (2/5)√(1 + 1 + 9 + 9) = 4√5/5.
Hence, an orthonormal basis is:
{ (√6/6)(1,2,0,1), (√30/30)(4,−1,3,−2), (√5/10)(−1,−1,3,3) }.

19. Let v1 = (1,1,−1,0), v2 = (−1,0,1,1), and v3 = (2,−1,2,1).
u1 = v1 = (1,1,−1,0), ||u1|| = √(1 + 1 + 1 + 0) = √3.
〈v2,u1〉 = (−1)·1 + 0·1 + 1(−1) + 1·0 = −2.
u2 = v2 − (〈v2,u1〉/||u1||^2) u1 = (−1,0,1,1) + (2/3)(1,1,−1,0) = (1/3)(−1,2,1,3).
||u2|| = (1/3)√(1 + 4 + 1 + 9) = √15/3.
〈v3,u1〉 = 2·1 + (−1)·1 + 2(−1) + 1·0 = −1.
〈v3,u2〉 = 〈(2,−1,2,1), (−1/3,2/3,1/3,1)〉 = −2/3 − 2/3 + 2/3 + 1 = 1/3.
u3 = v3 − (〈v3,u1〉/||u1||^2) u1 − (〈v3,u2〉/||u2||^2) u2 = (2,−1,2,1) + (1/3)(1,1,−1,0) − (1/15)(−1,2,1,3) = (4/5)(3,−1,2,1).
||u3|| = (4/5)√15 = 4√15/5.
Hence, an orthonormal basis is:
{ (√3/3)(1,1,−1,0), (√15/15)(−1,2,1,3), (√15/15)(3,−1,2,1) }.
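The pattern in Problems 14-19 is identical each time, so it can be automated. A minimal pure-Python Gram-Schmidt sketch (helper names ours), checked against Problem 14:

```python
from math import sqrt

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(vectors):
    # Classical Gram-Schmidt: u_k = v_k - sum over earlier u_i of
    # (<v_k, u_i> / ||u_i||^2) u_i.
    basis = []
    for v in vectors:
        u = list(v)
        for b in basis:
            coeff = dot(v, b) / dot(b, b)
            u = [ui - coeff * bi for ui, bi in zip(u, b)]
        basis.append(u)
    return basis

def normalize(v):
    n = sqrt(dot(v, v))
    return [a / n for a in v]

# Problem 14: u2 should be (1/3)(4, 5, -1).
u1, u2 = gram_schmidt([(1, -1, -1), (2, 1, -1)])
assert all(abs(a - e) < 1e-12 for a, e in zip(u2, (4/3, 5/3, -1/3)))
assert abs(dot(u1, u2)) < 1e-12
```

Normalizing each output vector with normalize then reproduces the orthonormal bases listed above.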


20. A row-echelon form of A is
[ 1, −2, 1; 0, 1, 1/7; 0, 0, 0 ]
(rows separated by semicolons). Hence, a basis for rowspace(A) is {(1,−2,1), (0,7,1)}.
The vectors in this basis are not orthogonal. We therefore use the Gram-Schmidt process to determine an orthogonal basis. Let v1 = (1,−2,1), v2 = (0,7,1). Then an orthogonal basis for rowspace(A) is {u1,u2}, where
u1 = (1,−2,1), u2 = (0,7,1) + (13/6)(1,−2,1) = (1/6)(13,16,19).

21. Let v1 = (1−i, 0, i) and v2 = (1, 1+i, 0).
u1 = v1 = (1−i, 0, i), ||u1|| = √((1−i)(1+i) + 0 + i(−i)) = √3.
〈v2,u1〉 = 〈(1, 1+i, 0), (1−i, 0, i)〉 = 1(1+i) + (1+i)·0 + 0·(−i) = 1+i.
u2 = v2 − (〈v2,u1〉/||u1||^2) u1 = (1, 1+i, 0) − ((1+i)/3)(1−i, 0, i) = (1/3)(1, 3+3i, 1−i).
||u2|| = (1/3)√(1 + (3+3i)(3−3i) + (1−i)(1+i)) = √21/3.
Hence, an orthonormal basis is:
{ (√3/3)(1−i, 0, i), (√21/21)(1, 3+3i, 1−i) }.

22. Let v1 = (1+i, i, 2−i) and v2 = (1+2i, 1−i, i).
u1 = v1 = (1+i, i, 2−i), ||u1|| = √((1+i)(1−i) + i(−i) + (2−i)(2+i)) = 2√2.
〈v2,u1〉 = (1+2i)(1−i) + (1−i)(−i) + i(2+i) = 1+2i.
u2 = v2 − (〈v2,u1〉/||u1||^2) u1 = (1+2i, 1−i, i) − ((1+2i)/8)(1+i, i, 2−i) = (1/8)(9+13i, 10−9i, −4+5i).
||u2|| = (1/8)√((9+13i)(9−13i) + (10−9i)(10+9i) + (−4+5i)(−4−5i)) = √118/4.
Hence, an orthonormal basis is:
{ (√2/4)(1+i, i, 2−i), (√118/236)(9+13i, 10−9i, −4+5i) }.

23. Let f1 = 1, f2 = x, and f3 = x^2.
g1 = f1 = 1; ||g1||^2 = ∫_0^1 dx = 1; 〈f2,g1〉 = ∫_0^1 x dx = [x^2/2]_0^1 = 1/2.
g2 = f2 − (〈f2,g1〉/||g1||^2) g1 = x − 1/2 = (1/2)(2x − 1).
||g2||^2 = ∫_0^1 (x − 1/2)^2 dx = ∫_0^1 (x^2 − x + 1/4) dx = [x^3/3 − x^2/2 + x/4]_0^1 = 1/12.
〈f3,g1〉 = ∫_0^1 x^2 dx = [x^3/3]_0^1 = 1/3.
〈f3,g2〉 = ∫_0^1 x^2 (x − 1/2) dx = [x^4/4 − x^3/6]_0^1 = 1/12.
g3 = f3 − (〈f3,g1〉/||g1||^2) g1 − (〈f3,g2〉/||g2||^2) g2 = x^2 − 1/3 − (x − 1/2) = (1/6)(6x^2 − 6x + 1).
Thus, an orthogonal basis is given by:
{ 1, (1/2)(2x − 1), (1/6)(6x^2 − 6x + 1) }.


24. Let f1 = 1, f2 = x^2, and f3 = x^4 for all x in [−1, 1].
〈f1,f2〉 = ∫_{−1}^{1} x^2 dx = 2/3, 〈f1,f3〉 = ∫_{−1}^{1} x^4 dx = 2/5, and 〈f2,f3〉 = ∫_{−1}^{1} x^6 dx = 2/7.
||f1||^2 = ∫_{−1}^{1} dx = 2, ||f2||^2 = ∫_{−1}^{1} x^4 dx = 2/5.
Let g1 = f1 = 1.
g2 = f2 − (〈f2,g1〉/||g1||^2) g1 = x^2 − (〈x^2,1〉/||1||^2)·1 = x^2 − 1/3 = (1/3)(3x^2 − 1).
g3 = f3 − (〈f3,g1〉/||g1||^2) g1 − (〈f3,g2〉/||g2||^2) g2 = x^4 − 1/5 − (6/7)(x^2 − 1/3) = x^4 − (6/7)x^2 + 3/35 = (1/35)(35x^4 − 30x^2 + 3).
Thus, an orthogonal basis is given by:
{ 1, (1/3)(3x^2 − 1), (1/35)(35x^4 − 30x^2 + 3) }.

25. Let f1 = 1, f2 = sin x, and f3 = cos x for all x in [−π/2, π/2].
〈f1,f2〉 = ∫_{−π/2}^{π/2} sin x dx = [−cos x]_{−π/2}^{π/2} = 0. Therefore, f1 and f2 are orthogonal.
〈f2,f3〉 = ∫_{−π/2}^{π/2} sin x cos x dx = (1/2)∫_{−π/2}^{π/2} sin 2x dx = [−(1/4)cos 2x]_{−π/2}^{π/2} = 0. Therefore, f2 and f3 are orthogonal.
〈f1,f3〉 = ∫_{−π/2}^{π/2} cos x dx = [sin x]_{−π/2}^{π/2} = 2.
Let g1 = f1 = 1, so that ||g1||^2 = ∫_{−π/2}^{π/2} dx = π.
g2 = f2 = sin x, and ||g2||^2 = ∫_{−π/2}^{π/2} sin^2 x dx = ∫_{−π/2}^{π/2} (1 − cos 2x)/2 dx = π/2.
g3 = f3 − (〈f3,g1〉/||g1||^2) g1 − (〈f3,g2〉/||g2||^2) g2 = cos x − (2/π)·1 − 0·sin x = (1/π)(π cos x − 2).
Rescaling the last vector by the nonzero factor π (which preserves orthogonality), an orthogonal basis for the subspace of C^0[−π/2, π/2] spanned by {1, sin x, cos x} is:
{ 1, sin x, π cos x − 2 }.

26. Given A1 = [1, −1; 2, 1] and A2 = [2, −3; 4, 1] (rows separated by semicolons). Using the Gram-Schmidt procedure:
B1 = [1, −1; 2, 1], 〈A2,B1〉 = 10 + 6 + 24 + 5 = 45, and ||B1||^2 = 5 + 2 + 12 + 5 = 24.
B2 = A2 − (〈A2,B1〉/||B1||^2) B1 = [2, −3; 4, 1] − (15/8)[1, −1; 2, 1] = [1/8, −9/8; 1/4, −7/8].
Thus, an orthogonal basis for the subspace of M2(R) spanned by A1 and A2 is:
{ [1, −1; 2, 1], [1/8, −9/8; 1/4, −7/8] }.


27. Given A1 = [0, 1; 1, 0], A2 = [0, 1; 1, 1], and A3 = [1, 1; 1, 0] (rows separated by semicolons). Using the Gram-Schmidt procedure:
B1 = [0, 1; 1, 0], 〈A2,B1〉 = 5, and ||B1||^2 = 5.
B2 = A2 − (〈A2,B1〉/||B1||^2) B1 = [0, 1; 1, 1] − [0, 1; 1, 0] = [0, 0; 0, 1].
Also, 〈A3,B1〉 = 5, 〈A3,B2〉 = 0, and ||B2||^2 = 5, so that
B3 = A3 − (〈A3,B1〉/||B1||^2) B1 − (〈A3,B2〉/||B2||^2) B2 = [1, 1; 1, 0] − [0, 1; 1, 0] − 0·[0, 0; 0, 1] = [1, 0; 0, 0].
Thus, an orthogonal basis for the subspace of M2(R) spanned by A1, A2, and A3 is:
{ [0, 1; 1, 0], [1, 0; 0, 0], [0, 0; 0, 1] },
and the subspace they span is the subspace of all symmetric matrices in M2(R).

28. Given p1(x) = 1 − 2x + 2x^2 and p2(x) = 2 − x − x^2. Using the Gram-Schmidt procedure:
q1 = 1 − 2x + 2x^2, 〈p2,q1〉 = 2·1 + (−1)(−2) + (−1)·2 = 2, ||q1||^2 = 1^2 + (−2)^2 + 2^2 = 9. So
q2 = p2 − (〈p2,q1〉/||q1||^2) q1 = 2 − x − x^2 − (2/9)(1 − 2x + 2x^2) = (1/9)(16 − 5x − 13x^2).
Thus, an orthogonal basis for the subspace spanned by p1 and p2 is {1 − 2x + 2x^2, 16 − 5x − 13x^2}.

29. Given p1(x) = 1 + x^2, p2(x) = 2 − x + x^3, and p3(x) = −x + 2x^2. Using the Gram-Schmidt procedure:
q1 = 1 + x^2, 〈p2,q1〉 = 2·1 + (−1)·0 + 0·1 + 1·0 = 2, and ||q1||^2 = 1^2 + 1^2 = 2. So
q2 = p2 − (〈p2,q1〉/||q1||^2) q1 = 2 − x + x^3 − (1 + x^2) = 1 − x − x^2 + x^3. Also,
〈p3,q1〉 = 0·1 + (−1)·0 + 2·1 + 0·0 = 2, 〈p3,q2〉 = 0·1 + (−1)(−1) + 2(−1) + 0·1 = −1, and ||q2||^2 = 1^2 + (−1)^2 + (−1)^2 + 1^2 = 4, so that
q3 = p3 − (〈p3,q1〉/||q1||^2) q1 − (〈p3,q2〉/||q2||^2) q2 = −x + 2x^2 − (1 + x^2) + (1/4)(1 − x − x^2 + x^3) = (1/4)(−3 − 5x + 3x^2 + x^3).
Thus, an orthogonal basis for the subspace spanned by p1, p2, and p3 is {1 + x^2, 1 − x − x^2 + x^3, −3 − 5x + 3x^2 + x^3}.

30. {u1, u2, v} is a linearly independent set of vectors, and 〈u1,u2〉 = 0. If we let u3 = v + λu1 + µu2, then it must be the case that 〈u3,u1〉 = 0 and 〈u3,u2〉 = 0.
〈u3,u1〉 = 0 =⇒ 〈v + λu1 + µu2, u1〉 = 0 =⇒ 〈v,u1〉 + λ〈u1,u1〉 + µ〈u2,u1〉 = 0 =⇒ 〈v,u1〉 + λ〈u1,u1〉 + µ·0 = 0 =⇒ λ = −〈v,u1〉/||u1||^2.
〈u3,u2〉 = 0 =⇒ 〈v + λu1 + µu2, u2〉 = 0 =⇒ 〈v,u2〉 + λ〈u1,u2〉 + µ〈u2,u2〉 = 0 =⇒ 〈v,u2〉 + λ·0 + µ〈u2,u2〉 = 0 =⇒ µ = −〈v,u2〉/||u2||^2.
Hence, if u3 = v − (〈v,u1〉/||u1||^2)u1 − (〈v,u2〉/||u2||^2)u2, then {u1, u2, u3} is an orthogonal basis for the subspace spanned by {u1, u2, v}.

31. It was shown in Remark 3 following Definition 4.12.1 that each vector ui is a unit vector. Moreover, for i ≠ j,
〈ui,uj〉 = 〈(1/||vi||)vi, (1/||vj||)vj〉 = (1/(||vi|| ||vj||))〈vi,vj〉 = 0,
since 〈vi,vj〉 = 0. Therefore {u1, u2, ..., uk} is an orthonormal set of vectors.


32. Set
z = x − P(x,v1) − P(x,v2) − ··· − P(x,vk).
To verify that z is orthogonal to vi, we compute the inner product of z with vi:
〈z,vi〉 = 〈x − P(x,v1) − P(x,v2) − ··· − P(x,vk), vi〉 = 〈x,vi〉 − 〈P(x,v1),vi〉 − 〈P(x,v2),vi〉 − ··· − 〈P(x,vk),vi〉.
Since P(x,vj) is a multiple of vj and the set {v1, v2, ..., vk} is orthogonal, all of the subtracted terms on the right-hand side of this expression are zero except
〈P(x,vi),vi〉 = 〈(〈x,vi〉/||vi||^2)vi, vi〉 = (〈x,vi〉/||vi||^2)〈vi,vi〉 = 〈x,vi〉.
Therefore,
〈z,vi〉 = 〈x,vi〉 − 〈x,vi〉 = 0,
which implies that z is orthogonal to vi.

33. We must show that W⊥ is closed under addition and closed under scalar multiplication.

Closure under addition: Let v1 and v2 belong to W⊥. This means that 〈v1,w〉 = 〈v2,w〉 = 0 for all w ∈ W. Therefore,
〈v1 + v2, w〉 = 〈v1,w〉 + 〈v2,w〉 = 0 + 0 = 0
for all w ∈ W. Therefore, v1 + v2 ∈ W⊥.

Closure under scalar multiplication: Let v belong to W⊥ and let c be a scalar. This means that 〈v,w〉 = 0 for all w ∈ W. Therefore,
〈cv,w〉 = c〈v,w〉 = c·0 = 0,
which shows that cv ∈ W⊥.

34. In this case, W⊥ consists of all (x,y,z) ∈ R^3 such that 〈(x,y,z), (r,r,−r)〉 = 0 for all r ∈ R. That is, rx + ry − rz = 0. In particular, we must have x + y − z = 0. Therefore, W⊥ is the plane x + y − z = 0, which can also be expressed as
W⊥ = span{(−1,0,1), (0,1,1)}.

35. In this case, W⊥ consists of all (x,y,z,w) ∈ R^4 such that 〈(x,y,z,w), (0,1,−1,3)〉 = 〈(x,y,z,w), (1,0,0,3)〉 = 0. This requires that y − z + 3w = 0 and x + 3w = 0. We can associate an augmented matrix with this system of linear equations:
[ 0, 1, −1, 3 | 0; 1, 0, 0, 3 | 0 ]
(rows separated by semicolons). Note that z and w are free variables: z = s and w = t. Then y = s − 3t and x = −3t. Thus,
W⊥ = {(−3t, s−3t, s, t) : s,t ∈ R} = {t(−3,−3,0,1) + s(0,1,1,0) : s,t ∈ R} = span{(−3,−3,0,1), (0,1,1,0)}.
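A quick Python check (the helper dot is ours) that the two spanning vectors found for W⊥ really are orthogonal to both generators of W:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

W_generators = [(0, 1, -1, 3), (1, 0, 0, 3)]
W_perp_basis = [(-3, -3, 0, 1), (0, 1, 1, 0)]

# Every basis vector of W-perp is orthogonal to every generator of W.
for v in W_perp_basis:
    for w in W_generators:
        assert dot(v, w) == 0
```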

36. In this case, W⊥ consists of all 2×2 matrices [x, y; z, w] (rows separated by semicolons) that are orthogonal to all symmetric matrices. The set of symmetric matrices is spanned by the matrices [1, 0; 0, 0], [0, 1; 1, 0], and [0, 0; 0, 1]. Thus, we must have
〈[x, y; z, w], [1, 0; 0, 0]〉 = 〈[x, y; z, w], [0, 1; 1, 0]〉 = 〈[x, y; z, w], [0, 0; 0, 1]〉 = 0.
Thus, x = 0, y + z = 0, and w = 0. Therefore
W⊥ = { [0, −z; z, 0] : z ∈ R } = span{ [0, −1; 1, 0] },
which is precisely the set of 2×2 skew-symmetric matrices.

37. Suppose v belongs to both W and W⊥. Then 〈v,v〉 = 0 by definition of W⊥, which implies that v = 0 by the first axiom of an inner product. Therefore W and W⊥ can contain no common elements aside from the zero vector.

38. Suppose that W1 is a subset of W2 and let v be a vector in (W2)⊥. This means that 〈v,w2〉 = 0 for all w2 ∈ W2. In particular, v is orthogonal to all vectors in W1 (since W1 is merely a subset of W2). Thus, v ∈ (W1)⊥. Hence, we have shown that every vector belonging to (W2)⊥ also belongs to (W1)⊥.

39. (a) Using technology we find that
∫_{−π}^{π} sin nx dx = 0, ∫_{−π}^{π} cos nx dx = 0, ∫_{−π}^{π} sin nx cos mx dx = 0.
Further, for m ≠ n,
∫_{−π}^{π} sin nx sin mx dx = 0 and ∫_{−π}^{π} cos nx cos mx dx = 0.
Consequently the given set of vectors is orthogonal on [−π, π].

Consequently the given set of vectors is orthogonal on [−π, π].

(b) Multiplying (4.12.7) by cos mx and integrating over [−π, π] yields
∫_{−π}^{π} f(x) cos mx dx = (1/2)a0 ∫_{−π}^{π} cos mx dx + ∫_{−π}^{π} Σ_{n=1}^{∞} (an cos nx + bn sin nx) cos mx dx.
Assuming that interchange of the integral and infinite summation is permissible, this can be written
∫_{−π}^{π} f(x) cos mx dx = (1/2)a0 ∫_{−π}^{π} cos mx dx + Σ_{n=1}^{∞} ∫_{−π}^{π} (an cos nx + bn sin nx) cos mx dx,
which reduces to
∫_{−π}^{π} f(x) cos mx dx = (1/2)a0 ∫_{−π}^{π} cos mx dx + am ∫_{−π}^{π} cos^2 mx dx,
where we have used the results from part (a). When m = 0, this gives
∫_{−π}^{π} f(x) dx = (1/2)a0 ∫_{−π}^{π} dx = πa0 =⇒ a0 = (1/π) ∫_{−π}^{π} f(x) dx,
whereas for m ≠ 0,
∫_{−π}^{π} f(x) cos mx dx = am ∫_{−π}^{π} cos^2 mx dx = πam =⇒ am = (1/π) ∫_{−π}^{π} f(x) cos mx dx.

(c) Multiplying (4.12.7) by sin mx, integrating over [−π, π], and interchanging the integration and summation yields
∫_{−π}^{π} f(x) sin mx dx = (1/2)a0 ∫_{−π}^{π} sin mx dx + Σ_{n=1}^{∞} ∫_{−π}^{π} (an cos nx + bn sin nx) sin mx dx.
Using the results from (a), this reduces to
∫_{−π}^{π} f(x) sin mx dx = bm ∫_{−π}^{π} sin^2 mx dx = πbm =⇒ bm = (1/π) ∫_{−π}^{π} f(x) sin mx dx.

(d) The Fourier coefficients for f are
a0 = (1/π) ∫_{−π}^{π} x dx = 0, an = (1/π) ∫_{−π}^{π} x cos nx dx = 0,
bn = (1/π) ∫_{−π}^{π} x sin nx dx = −(2/n) cos nπ = (2/n)(−1)^{n+1}.
The Fourier series for f is
Σ_{n=1}^{∞} (2/n)(−1)^{n+1} sin nx.
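The coefficient formula b_n = (2/n)(−1)^{n+1} for f(x) = x can be spot-checked numerically (our own Simpson-rule helper; values are approximate):

```python
from math import pi, sin

def simpson(f, a, b, n=2000):
    # Composite Simpson's rule (n must be even).
    h = (b - a) / n
    total = f(a) + f(b)
    for k in range(1, n):
        total += (4 if k % 2 else 2) * f(a + k * h)
    return total * h / 3

def b(n):
    # Fourier sine coefficient of f(x) = x on [-pi, pi].
    return simpson(lambda x: x * sin(n * x), -pi, pi) / pi

for n in range(1, 6):
    expected = (2 / n) * (-1) ** (n + 1)
    assert abs(b(n) - expected) < 1e-6
```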

(e) The approximations using the first term, the first three terms, the first five terms, and the first ten termsin the Fourier series for f are shown in the accompanying figures.


Figure 0.0.66: Figure for Exercise 39(e) - 3 terms included

These figures suggest that the Fourier series is converging to the function f(x) at all points in the interval(−π, π).

Solutions to Section 4.13

Problems:



Figure 0.0.67: Figure for Exercise 39(e) - 5 terms included


Figure 0.0.68: Figure for Exercise 39(e) - 10 terms included

1. Write v = (a1, a2, a3, a4, a5) ∈ R^5. Then we have

(r + s)v = (r + s)(a1, a2, a3, a4, a5)
= ((r + s)a1, (r + s)a2, (r + s)a3, (r + s)a4, (r + s)a5)
= (ra1 + sa1, ra2 + sa2, ra3 + sa3, ra4 + sa4, ra5 + sa5)
= (ra1, ra2, ra3, ra4, ra5) + (sa1, sa2, sa3, sa4, sa5)
= r(a1, a2, a3, a4, a5) + s(a1, a2, a3, a4, a5)
= rv + sv.


2. Write v = (a1, a2, a3, a4, a5) and w = (b1, b2, b3, b4, b5) in R^5. Then we have

r(v + w) = r((a1, a2, a3, a4, a5) + (b1, b2, b3, b4, b5))
= r(a1 + b1, a2 + b2, a3 + b3, a4 + b4, a5 + b5)
= (r(a1 + b1), r(a2 + b2), r(a3 + b3), r(a4 + b4), r(a5 + b5))
= (ra1 + rb1, ra2 + rb2, ra3 + rb3, ra4 + rb4, ra5 + rb5)
= (ra1, ra2, ra3, ra4, ra5) + (rb1, rb2, rb3, rb4, rb5)
= r(a1, a2, a3, a4, a5) + r(b1, b2, b3, b4, b5)
= rv + rw.

3. NO. This set of polynomials is not closed under scalar multiplication. For example, the polynomial p(x) = 2x belongs to the set, but (1/3)p(x) = (2/3)x does not belong to the set (since 2/3 is not an even integer).

4. YES. This set of polynomials forms a subspace of the vector space P5. To confirm this, we will check that this set is closed under addition and scalar multiplication:

Closure under Addition: Let p(x) = a0 + a1x + a4x^4 + a5x^5 and q(x) = b0 + b1x + b4x^4 + b5x^5 be polynomials in the set under consideration (their x^2 and x^3 terms are zero). Then
p(x) + q(x) = (a0 + b0) + (a1 + b1)x + (a4 + b4)x^4 + (a5 + b5)x^5
is again in the set (since it still has no x^2 or x^3 terms). So closure under addition holds.

Closure under Scalar Multiplication: Let p(x) = a0 + a1x + a4x^4 + a5x^5 be in the set, and let k be a scalar. Then
kp(x) = (ka0) + (ka1)x + (ka4)x^4 + (ka5)x^5,
which is again in the set (since it still has no x^2 or x^3 terms). So closure under scalar multiplication holds.

5. NO. We can see immediately that the zero vector (0, 0, 0) is not a solution to this linear system (the first equation is not satisfied by the zero vector), and therefore, we know at once that this set cannot be a vector space.

6. YES. The set of solutions to this linear system forms a subspace of R^3. To confirm this, we will check that this set is closed under addition and scalar multiplication:

Closure under Addition: Let (a1, a2, a3) and (b1, b2, b3) be solutions to the linear system. This means that
4a1 − 7a2 + 2a3 = 0, 5a1 − 2a2 + 9a3 = 0
and
4b1 − 7b2 + 2b3 = 0, 5b1 − 2b2 + 9b3 = 0.
Adding the equations on the left, we get
4(a1 + b1) − 7(a2 + b2) + 2(a3 + b3) = 0,
so the vector (a1 + b1, a2 + b2, a3 + b3) satisfies the first equation in the linear system. Likewise, adding the equations on the right, we get
5(a1 + b1) − 2(a2 + b2) + 9(a3 + b3) = 0,
so (a1 + b1, a2 + b2, a3 + b3) also satisfies the second equation in the linear system. Therefore, (a1 + b1, a2 + b2, a3 + b3) is in the solution set for the linear system, and closure under addition therefore holds.


Closure under Scalar Multiplication: Let (a1, a2, a3) be a solution to the linear system, and let k be a scalar. We have
4a1 − 7a2 + 2a3 = 0, 5a1 − 2a2 + 9a3 = 0,
and so, multiplying both equations by k, we have
k(4a1 − 7a2 + 2a3) = 0, k(5a1 − 2a2 + 9a3) = 0,
or
4(ka1) − 7(ka2) + 2(ka3) = 0, 5(ka1) − 2(ka2) + 9(ka3) = 0.
Thus, the vector (ka1, ka2, ka3) is a solution to the linear system, and closure under scalar multiplication therefore holds.

7. NO. This set is not closed under addition. For example, the vectors [1, 1; 1, 1] and [−1, 1; 1, 1] (rows separated by semicolons) both belong to the set (their entries are all nonzero), but
[1, 1; 1, 1] + [−1, 1; 1, 1] = [0, 2; 2, 2],
which does not belong to the set (some entries are zero, and some are nonzero). So closure under addition fails, and therefore, this set does not form a vector space.

8. YES. The set of 2×2 real matrices that commute with C = [1, 2; 2, 2] (rows separated by semicolons) forms a subspace of M2(R). To confirm this, we will check that this set is closed under addition and scalar multiplication:

Closure under Addition: Let A and B be 2×2 real matrices that commute with C; that is, AC = CA and BC = CB. Then
(A + B)C = AC + BC = CA + CB = C(A + B),
so A + B commutes with C, and therefore, closure under addition holds.

Closure under Scalar Multiplication: Let A be a 2×2 real matrix that commutes with C, and let k be a scalar. Then since AC = CA, we have
(kA)C = k(AC) = k(CA) = C(kA),
so kA is still in the set. Thus, the set is also closed under scalar multiplication.
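A small sketch (pure-Python 2×2 matrix helpers; the names matmul and matadd are ours) illustrating why the commuting set is closed under addition — if AC = CA and BC = CB, then (A+B)C = C(A+B):

```python
def matmul(A, B):
    # 2x2 matrix product.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matadd(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

C = [[1, 2], [2, 2]]
# Two matrices that commute with C: the identity and C itself.
A = [[1, 0], [0, 1]]
B = [[1, 2], [2, 2]]

assert matmul(A, C) == matmul(C, A)
assert matmul(B, C) == matmul(C, B)
# Their sum therefore also commutes with C.
S = matadd(A, B)
assert matmul(S, C) == matmul(C, S)
```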

9. YES. The set of functions f : [0,1] → [0,1] such that f(0) = f(1/4) = f(1/2) = f(3/4) = f(1) = 0 is a subspace of the vector space of all functions [0,1] → [0,1]. We confirm this by checking that this set is closed under addition and scalar multiplication:

Closure under Addition: Let g and h be functions such that
g(0) = g(1/4) = g(1/2) = g(3/4) = g(1) = 0
and
h(0) = h(1/4) = h(1/2) = h(3/4) = h(1) = 0.


Now
(g + h)(0) = g(0) + h(0) = 0 + 0 = 0,
(g + h)(1/4) = g(1/4) + h(1/4) = 0 + 0 = 0,
(g + h)(1/2) = g(1/2) + h(1/2) = 0 + 0 = 0,
(g + h)(3/4) = g(3/4) + h(3/4) = 0 + 0 = 0,
(g + h)(1) = g(1) + h(1) = 0 + 0 = 0.
Thus, g + h belongs to the set, and so the set is closed under addition.

Closure under Scalar Multiplication: Let g be a function with
g(0) = g(1/4) = g(1/2) = g(3/4) = g(1) = 0,
and let k be a scalar. Then
(kg)(0) = k·g(0) = k·0 = 0,
(kg)(1/4) = k·g(1/4) = k·0 = 0,
(kg)(1/2) = k·g(1/2) = k·0 = 0,
(kg)(3/4) = k·g(3/4) = k·0 = 0,
(kg)(1) = k·g(1) = k·0 = 0.
Thus, kg belongs to the set, and so the set is closed under scalar multiplication.

10. NO. This set is not closed under addition. For example, the function f defined by f(x) = x belongs to the set, and the function g defined by
g(x) = x if x ≤ 1/2, g(x) = 0 if x > 1/2
belongs to the set. But
(f + g)(x) = 2x if x ≤ 1/2, (f + g)(x) = x if x > 1/2.
Then f + g : [0,1] → [0,1], but for 0 < x ≤ 1/2, |(f + g)(x)| = 2x > x, so f + g is not in the set. Therefore, the set is not closed under addition.

11. NO. This set is not closed under addition. For example, if we let
A = [1, 0; 0, 1] and B = [0, 1; 0, 0]
(rows separated by semicolons), then A^2 = A is symmetric, and B^2 = 0_2 is symmetric, but
(A + B)^2 = [1, 1; 0, 1]^2 = [1, 2; 0, 1]
is not symmetric, so A + B is not in the set. Thus, the set in question is not closed under addition.

12. YES. Let us describe geometrically the set of points equidistant from (−1,2) and (1,−2). If (x,y) is such a point, then using the distance formula and equating distances to (−1,2) and (1,−2), we have
√((x+1)^2 + (y−2)^2) = √((x−1)^2 + (y+2)^2)
or
(x+1)^2 + (y−2)^2 = (x−1)^2 + (y+2)^2
or
x^2 + 2x + 1 + y^2 − 4y + 4 = x^2 − 2x + 1 + y^2 + 4y + 4.
Cancelling like terms and rearranging this equation, we have 4x = 8y. So the points that are equidistant from (−1,2) and (1,−2) lie on the line through the origin with equation y = (1/2)x. Any line through the origin of R^2 is a subspace of R^2. Therefore, this line forms a vector space.

13. NO. This set is not closed under addition, nor under scalar multiplication. For instance, the point (5,−3,4) is a distance 5 from (0,−3,4), so the point (5,−3,4) lies in the set. But the point (10,−6,8) = 2(5,−3,4) is a distance
√((10−0)^2 + (−6+3)^2 + (8−4)^2) = √(100 + 9 + 16) = √125 ≠ 5
from (0,−3,4), so (10,−6,8) is not in the set. So the set is not closed under scalar multiplication, and hence does not form a subspace.

14. We must check each of the vector space axioms (A1)-(A10).

Axiom (A1): Assume that (a1, a2) and (b1, b2) belong to V . Then a2, b2 > 0. Hence,

(a1, a2) + (b1, b2) = (a1 + b1, a2b2) ∈ V,

since a2b2 > 0. Thus, V is closed under addition.

Axiom (A2): Assume that (a1, a2) ∈ V , and let k be a scalar. Note that since a2 > 0, the expressionak2 > 0 for every k ∈ R. Hence, k(a1, a2) = (ka1, a

k2) ∈ V , thereby showing that V is closed under scalar

multiplication.

Axiom (A3): Let (a1, a2), (b1, b2) ∈ V . We have

(a1, a2) + (b1, b2) = (a1 + b1, a2b2)= (b1 + a1, b2a2)= (b1, b2) + (a1, a2),

as required.

Axiom (A4): Let (a1, a2), (b1, b2), (c1, c2) ∈ V . We have

((a1, a2) + (b1, b2)) + (c1, c2) = (a1 + b1, a2b2) + (c1, c2)= ((a1 + b1) + c1, (a2b2)c2)= (a1 + (b1 + c1), a2(b2c2))= (a1, a2) + (b1 + c1, b2c2)= (a1, a2) + ((b1, b2) + (c1, c2)),

Page 311: Differential Equations and Linear Algebra - 3e - Goode -Solutions

311

as required.

Axiom (A5): We claim that (0, 1) is the zero vector in V . To see this, let (b1, b2) ∈ V . Then

(0, 1) + (b1, b2) = (0 + b1, 1 · b2) = (b1, b2).

Since this holds for every (b1, b2) ∈ V , we conclude that (0, 1) is the zero vector.

Axiom (A6): We claim that the additive inverse of a vector (a1, a2) in V is the vector (−a1, a2^(−1)). (Note that a2^(−1) > 0 since a2 > 0.) To check this, we compute as follows:

(a1, a2) + (−a1, a2^(−1)) = (a1 + (−a1), a2 · a2^(−1)) = (0, 1).

Axiom (A7): We have

1 · (a1, a2) = (a1, a2^1) = (a1, a2)

for all (a1, a2) ∈ V .

Axiom (A8): Let (a1, a2) ∈ V , and let r and s be scalars. Then we have

(rs)(a1, a2) = ((rs)a1, a2^(rs)) = (r(sa1), (a2^s)^r) = r(sa1, a2^s) = r(s(a1, a2)),

as required.

Axiom (A9): Let (a1, a2) and (b1, b2) be members of V , and let r be a scalar. We have

r((a1, a2) + (b1, b2)) = r(a1 + b1, a2b2) = (r(a1 + b1), (a2b2)^r) = (ra1 + rb1, a2^r b2^r) = (ra1, a2^r) + (rb1, b2^r) = r(a1, a2) + r(b1, b2),

as required.

Axiom (A10): Let (a1, a2) ∈ V , and let r and s be scalars. We have

(r + s)(a1, a2) = ((r + s)a1, a2^(r+s)) = (ra1 + sa1, a2^r a2^s) = (ra1, a2^r) + (sa1, a2^s) = r(a1, a2) + s(a1, a2),

as required.
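The axioms verified above can also be spot-checked numerically. The sketch below encodes the unusual addition and scalar multiplication of V and tests each identity at a few sample points; the sample points and the tolerance are arbitrary choices, not part of the text.

```python
# Numerical spot-check of the axioms for V = {(a1, a2) : a2 > 0} with
# (a1, a2) + (b1, b2) = (a1 + b1, a2*b2) and k(a1, a2) = (k*a1, a2**k).

def add(u, v):
    return (u[0] + v[0], u[1] * v[1])

def smul(k, u):
    return (k * u[0], u[1] ** k)

ZERO = (0.0, 1.0)                 # zero vector claimed in Axiom (A5)

def neg(u):                       # additive inverse claimed in Axiom (A6)
    return (-u[0], 1.0 / u[1])

def close(u, v, tol=1e-9):
    return abs(u[0] - v[0]) < tol and abs(u[1] - v[1]) < tol

u, v, w = (1.0, 2.0), (-3.0, 0.5), (2.0, 4.0)
r, s = 3.0, -2.0

assert close(add(u, v), add(v, u))                                # (A3)
assert close(add(add(u, v), w), add(u, add(v, w)))                # (A4)
assert close(add(ZERO, u), u)                                     # (A5)
assert close(add(u, neg(u)), ZERO)                                # (A6)
assert close(smul(1.0, u), u)                                     # (A7)
assert close(smul(r * s, u), smul(r, smul(s, u)))                 # (A8)
assert close(smul(r, add(u, v)), add(smul(r, u), smul(r, v)))     # (A9)
assert close(smul(r + s, u), add(smul(r, u), smul(s, u)))         # (A10)
```

Passing these checks does not prove the axioms, but any failure would disprove them.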

15. We must show that W is closed under addition and closed under scalar multiplication:

Closure under Addition: Let (a, 2^a) and (b, 2^b) be elements of W . Now consider the sum of these elements:

(a, 2^a) + (b, 2^b) = (a + b, 2^a · 2^b) = (a + b, 2^(a+b)) ∈ W,

which shows that W is closed under addition.


Closure under Scalar Multiplication: Let (a, 2^a) be an element of W , and let k be a scalar. Then

k(a, 2^a) = (ka, (2^a)^k) = (ka, 2^(ka)) ∈ W,

which shows that W is closed under scalar multiplication.

Thus, W is a subspace of V .

16. Note that 3(1, 2) = (3, 2^3) = (3, 8), so the second vector is a multiple of the first one under the vector space operations of V from Problem 14. Therefore, {(1, 2), (3, 8)} is linearly dependent.

17. We show that S = {(1, 4), (2, 1)} is linearly independent and spans V .

S is linearly independent: Assume that

c1(1, 4) + c2(2, 1) = (0, 1).

This can be written

(c1, 4^c1) + (2c2, 1^c2) = (0, 1)

or

(c1 + 2c2, 4^c1) = (0, 1).

In order for 4^c1 = 1, we must have c1 = 0. And then in order for c1 + 2c2 = 0, we must have c2 = 0. Therefore, S is linearly independent.

S spans V : Consider an arbitrary vector (a1, a2) ∈ V , where a2 > 0. We must find constants c1 and c2 such that

c1(1, 4) + c2(2, 1) = (a1, a2).

Thus,

(c1, 4^c1) + (2c2, 1^c2) = (a1, a2)

or

(c1 + 2c2, 4^c1) = (a1, a2).

Hence,

c1 + 2c2 = a1 and 4^c1 = a2.

From the second equation, we conclude that

c1 = log4(a2).

Thus, from the first equation,

c2 = (1/2)(a1 − log4(a2)).

Hence, since we were able to find constants c1 and c2 in order that

c1(1, 4) + c2(2, 1) = (a1, a2),

we conclude that {(1, 4), (2, 1)} spans V .
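The spanning formulas c1 = log4(a2) and c2 = (a1 − log4(a2))/2 can be spot-checked numerically under the operations of Problem 14; the target vector (a1, a2) below is an arbitrary sample, not from the text.

```python
import math

# Check that c1(1, 4) + c2(2, 1) = (a1, a2) under
# c(x1, x2) = (c*x1, x2**c) and (x1, x2) + (y1, y2) = (x1 + y1, x2*y2).
a1, a2 = 3.0, 10.0
c1 = math.log(a2, 4)        # c1 = log_4(a2)
c2 = (a1 - c1) / 2          # c2 = (a1 - log_4(a2)) / 2

# c1(1, 4) + c2(2, 1), computed componentwise:
result = (c1 * 1 + c2 * 2, (4.0 ** c1) * (1.0 ** c2))
print(result)  # approximately (3.0, 10.0)
```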

18. This really hinges on whether the given vectors are linearly dependent or linearly independent. If we assume that

c1(2 + x2) + c2(4− 2x + 3x2) + c3(1 + x) = 0,


then

(2c1 + 4c2 + c3) + (−2c2 + c3)x + (c1 + 3c2)x² = 0.

Thus, we have

2c1 + 4c2 + c3 = 0, −2c2 + c3 = 0, c1 + 3c2 = 0.

Since the matrix of coefficients

[ 2  4 1 ]
[ 0 −2 1 ]
[ 1  3 0 ]

fails to be invertible, we conclude that there will be non-trivial solutions for c1, c2, and c3. Thus, the polynomials are linearly dependent. We can therefore remove a vector from the set without decreasing the span. We remove 1 + x, leaving us with {2 + x², 4 − 2x + 3x²}. Since these polynomials are not proportional, they are now linearly independent, and hence, they are a basis for their span. Hence,

dim[span{2 + x², 4 − 2x + 3x², 1 + x}] = 2.
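The singularity of the coefficient matrix in Problem 18 can be confirmed with a short cofactor-expansion check:

```python
# Cofactor expansion confirming the coefficient matrix of Problem 18 is
# singular, so nontrivial solutions (c1, c2, c3) exist.

def det3(m):
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

M = [[2, 4, 1],
     [0, -2, 1],
     [1, 3, 0]]
print(det3(M))  # -> 0
```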

19. NO. This set is not closed under scalar multiplication. For example, (1, 1) belongs to W , but 2 · (1, 1) =(2, 2) does not belong to W .

20. NO. This set is not closed under scalar multiplication. For example, (1, 1) belongs to W , but 2 · (1, 1) =(2, 2) does not belong to W .

21. NO. The zero vector (zero matrix) is not an orthogonal matrix. Any subspace must contain a zero vector.

22. YES. We show that W is closed under addition and closed under scalar multiplication.

Closure under Addition: Assume that f and g belong to W . Thus, f(a) = 2f(b) and g(a) = 2g(b). We must show that f + g belongs to W . We have

(f + g)(a) = f(a) + g(a) = 2f(b) + 2g(b) = 2[f(b) + g(b)] = 2(f + g)(b),

so f + g ∈ W . So W is closed under addition.

Closure under Scalar Multiplication: Assume that f belongs to W and k is a scalar. Thus, f(a) = 2f(b). Moreover,

(kf)(a) = kf(a) = k(2f(b)) = 2(kf(b)) = 2(kf)(b),

so kf ∈ W . Thus, W is closed under scalar multiplication.

23. YES. We show that W is closed under addition and closed under scalar multiplication.

Closure under Addition: Assume that f and g belong to W . Thus,

∫_a^b f(x) dx = 0 and ∫_a^b g(x) dx = 0.

Hence,

∫_a^b (f + g)(x) dx = ∫_a^b f(x) dx + ∫_a^b g(x) dx = 0 + 0 = 0,

so f + g ∈ W .


Closure under Scalar Multiplication: Assume that f belongs to W and k is a scalar. Thus,

∫_a^b f(x) dx = 0.

Therefore,

∫_a^b (kf)(x) dx = ∫_a^b kf(x) dx = k ∫_a^b f(x) dx = k · 0 = 0,

so kf ∈ W .

24. YES. We show that W is closed under addition and scalar multiplication.

Closure under Addition: Assume that

[ a1 b1 ]       [ a2 b2 ]
[ c1 d1 ]  and  [ c2 d2 ]
[ e1 f1 ]       [ e2 f2 ]

belong to W .

Thus,

a1 + b1 = c1 + f1, a1 − c1 = e1 − f1 − d1, a2 + b2 = c2 + f2, a2 − c2 = e2 − f2 − d2.

Hence,

(a1 +a2)+(b1 +b2) = (c1 +c2)+(f1 +f2) and (a1 +a2)−(c1 +c2) = (e1 +e2)−(f1 +f2)−(d1 +d2).

Thus,

[ a1 b1 ]   [ a2 b2 ]   [ a1 + a2   b1 + b2 ]
[ c1 d1 ] + [ c2 d2 ] = [ c1 + c2   d1 + d2 ] ∈ W,
[ e1 f1 ]   [ e2 f2 ]   [ e1 + e2   f1 + f2 ]

which means W is closed under addition.

Closure under Scalar Multiplication: Assume that

[ a b ]
[ c d ] ∈ W
[ e f ]

and k is a scalar. Then we have a + b = c + f and a − c = e − f − d. Thus, ka + kb = kc + kf and ka − kc = ke − kf − kd. Therefore,

    [ a b ]   [ ka kb ]
k · [ c d ] = [ kc kd ] ∈ W,
    [ e f ]   [ ke kf ]

so W is closed under scalar multiplication.

25. (a) NO, (b) YES. Since 3 vectors are required to span R3, S cannot span V . However, since the vectors are not proportional, they are linearly independent.

26. (a) YES, (b) YES. If we place the three vectors into the columns of a 3 × 3 matrix

[ 6 −3  2 ]
[ 1  1  1 ]
[ 1 −8 −1 ],

we observe that the matrix is invertible. Hence, its columns are linearly independent. Since we have 3 linearly independent vectors in the 3-dimensional space R3, we have a basis for R3.
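The invertibility claim can be confirmed by computing the determinant directly:

```python
# Determinant check confirming the matrix of Problem 26 is invertible.

def det3(m):
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

A = [[6, -3, 2],
     [1, 1, 1],
     [1, -8, -1]]
print(det3(A))  # -> 18 (nonzero, so the columns are linearly independent)
```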


27. (a) NO, (b) YES. Since we have only 3 vectors in a 4-dimensional vector space, they cannot possibly span R4. To check linear independence, we place the vectors into the columns of a matrix:

[  6 1  1 ]
[ −3 1 −8 ]
[  2 1 −1 ]
[  0 0  0 ].

The first three rows form the same invertible matrix as in the previous problem, so the reduced row-echelon form is

[ 1 0 0 ]
[ 0 1 0 ]
[ 0 0 1 ]
[ 0 0 0 ],

so there are no free variables, and hence, the column vectors form a linearly independent set.

28. (a) YES, (b) NO. Since (0, 0, 0) is a member of S, S cannot be linearly independent. However, the set {(10, −6, 5), (3, −3, 2), (6, 4, −1)} is linearly independent (these vectors can be made to form a 3 × 3 matrix that is invertible), and thus, S must span at least a 3-dimensional space. Since dim[R3] = 3, we know that S spans R3.

29. (a) YES, (b) YES. Consider the linear equation

c1(2x − x³) + c2(1 + x + x²) + 3c3 + c4x = 0.

Then

(c2 + 3c3) + (2c1 + c2 + c4)x + c2x² − c1x³ = 0.

From the latter equation, we see that c1 = c2 = 0 (looking at the x² and x³ coefficients) and thus, c3 = c4 = 0 (looking at the constant term and x coefficient). Thus, c1 = c2 = c3 = c4 = 0, and hence, S is linearly independent. Since we have four linearly independent vectors in the 4-dimensional vector space P3, we conclude that these vectors also span P3.

30. (a) NO, (b) NO. The set S only contains four vectors, although dim[P4] = 5, so it is impossible for S to span P4. Alternatively, simply note that none of the polynomials in S contain an x³ term.

To check linear independence, consider the equation

c1(x⁴ + x² + 1) + c2(x² + x + 1) + c3(x + 1) + c4(x⁴ + 2x + 3) = 0.

Rearranging this, we have

(c1 + c4)x⁴ + (c1 + c2)x² + (c2 + c3 + 2c4)x + (c1 + c2 + c3 + 3c4) = 0,

and so we look to solve the linear system with augmented matrix

[ 1 1 1 3 0 ]     [ 1  1  1  3 0 ]     [ 1 1 1 3 0 ]
[ 0 1 1 2 0 ]     [ 0  1  1  2 0 ]     [ 0 1 1 2 0 ]
[ 1 1 0 0 0 ]  ∼  [ 0  0 −1 −3 0 ]  ∼  [ 0 0 1 3 0 ],
[ 1 0 0 1 0 ]     [ 0 −1 −1 −2 0 ]     [ 0 0 0 0 0 ]

where we have added −1 times the first row to each of the third and fourth rows in the first step, and zeroed out the last row in the second step. We see that c4 is a free variable, which means that a nontrivial solution to the linear system exists, and therefore, the original vectors are linearly dependent.


31. (a) NO, (b) YES. The vector space M2×3(R) is 6-dimensional, and since only four vectors belong to S, S cannot possibly span M2×3(R). On the other hand, if we form the linear system with augmented matrix

[ −1 3 −1 −11 0 ]
[  0 2 −2  −6 0 ]
[  0 1 −3  −5 0 ]
[  0 1  3   1 0 ]
[  1 2  2  −2 0 ]
[  1 3  1  −5 0 ],

row reduction shows that each column of a row-echelon form contains a pivot, and therefore, the vectors are linearly independent.

32. (a) NO, (b) NO. Since M2(R) is only 4-dimensional and S contains 5 vectors, S cannot possibly be linearly independent. Moreover, each matrix in S is symmetric, and therefore, only symmetric matrices can be found in span(S). Thus, S fails to span M2(R).

33. Assume that {v1, v2, v3} is linearly independent, and that v4 does not lie in span{v1, v2, v3}. We will show that {v1, v2, v3, v4} is linearly independent. To do this, assume that

c1v1 + c2v2 + c3v3 + c4v4 = 0.

We must prove that c1 = c2 = c3 = c4 = 0. If c4 ≠ 0, then we rearrange the above equation to show that

v4 = −(c1/c4)v1 − (c2/c4)v2 − (c3/c4)v3,

which implies that v4 ∈ span{v1, v2, v3}, contrary to our assumption. Therefore, we know that c4 = 0. Hence the equation above reduces to

c1v1 + c2v2 + c3v3 = 0,

and the linear independence of {v1, v2, v3} now implies that c1 = c2 = c3 = 0. Therefore, c1 = c2 = c3 = c4 = 0, as required.

34. Note that v and w are column vectors in Rm. We have v · w = vT w. Since w ∈ nullspace(AT ), we have AT w = 0, and since v ∈ colspace(A), we can write v = Av1 for some v1 ∈ Rn. Therefore,

v · w = vT w = (Av1)T w = (v1T AT )w = v1T (AT w) = v1T 0 = 0,

as desired.

35.

(a): Our proof here actually shows that the set of n × n skew-symmetric matrices forms a subspace of Mn(R) for all positive integers n. We show that W is closed under addition and scalar multiplication:

Closure under Addition: Suppose that A and B are in W . This means that AT = −A and BT = −B. Then

(A + B)T = AT + BT = (−A) + (−B) = −(A + B),

so A + B is skew-symmetric. Therefore, A + B belongs to W , and W is closed under addition.

Closure under Scalar Multiplication: Suppose that A is in W and k is a scalar. We know that AT = −A. Then

(kA)T = k(AT ) = k(−A) = −(kA),


so kA is skew-symmetric. Therefore, kA belongs to W , and W is closed under scalar multiplication.

Therefore, W is a subspace.

(b): An arbitrary 3 × 3 skew-symmetric matrix takes the form

[  0  a  b ]     [  0 1 0 ]     [  0 0 1 ]     [ 0  0 0 ]
[ −a  0  c ] = a [ −1 0 0 ] + b [  0 0 0 ] + c [ 0  0 1 ].
[ −b −c  0 ]     [  0 0 0 ]     [ −1 0 0 ]     [ 0 −1 0 ]

This results in 03 if and only if a = b = c = 0, so the three matrices appearing on the right-hand side are linearly independent. Moreover, the equation above also demonstrates that W is spanned by the three matrices appearing on the right-hand side. Therefore, these matrices are a basis for W :

Basis =

{ [  0 1 0 ]   [  0 0 1 ]   [ 0  0 0 ]
  [ −1 0 0 ] , [  0 0 0 ] , [ 0  0 1 ]  }.
  [  0 0 0 ]   [ −1 0 0 ]   [ 0 −1 0 ]

Hence, dim[W ] = 3.

(c): Since dim[M3(R)] = 9, we must add an additional six (linearly independent) 3 × 3 matrices to form a basis for M3(R). Using the notation prior to Example 4.6.3, we can use the matrices E11, E22, E33, E12, E13, E23 to extend the basis in part (b) to a basis for M3(R):

Basis for M3(R) =

{ [  0 1 0 ]   [  0 0 1 ]   [ 0  0 0 ]
  [ −1 0 0 ] , [  0 0 0 ] , [ 0  0 1 ] , E11, E22, E33, E12, E13, E23  }.
  [  0 0 0 ]   [ −1 0 0 ]   [ 0 −1 0 ]

36.

(a): We show that W is closed under addition and closed under scalar multiplication. Arbitrary elements of W have the form

[    a         b         −a − b       ]
[    c         d         −c − d       ]
[ −a − c    −b − d    a + b + c + d ].

Closure under Addition: Let

X1 =
[     a1          b1         −a1 − b1       ]
[     c1          d1         −c1 − d1       ]
[ −a1 − c1    −b1 − d1    a1 + b1 + c1 + d1 ]

and

X2 =
[     a2          b2         −a2 − b2       ]
[     c2          d2         −c2 − d2       ]
[ −a2 − c2    −b2 − d2    a2 + b2 + c2 + d2 ]

be elements of W . Then

X1 + X2 =
[ a1 + a2              b1 + b2              −a1 − b1 − a2 − b2 ]
[ c1 + c2              d1 + d2              −c1 − d1 − c2 − d2 ]
[ −a1 − c1 − a2 − c2   −b1 − d1 − b2 − d2   a1 + b1 + c1 + d1 + a2 + b2 + c2 + d2 ],

which is immediately seen to have row sums and column sums of zero. Thus, X1 + X2 belongs to W , and W is closed under addition.

Closure under Scalar Multiplication: Let

X1 =
[     a1          b1         −a1 − b1       ]
[     c1          d1         −c1 − d1       ]
[ −a1 − c1    −b1 − d1    a1 + b1 + c1 + d1 ],


and let k be a scalar. Then

kX1 =
[     ka1            kb1          k(−a1 − b1)        ]
[     kc1            kd1          k(−c1 − d1)        ]
[ k(−a1 − c1)    k(−b1 − d1)    k(a1 + b1 + c1 + d1) ],

and we see by inspection that all row and column sums of kX1 are zero, so kX1 belongs to W . Therefore, W is closed under scalar multiplication.

Therefore, W is a subspace of M3(R).

(b): We may write

[    a        b        −a − b    ]     [  1 0 −1 ]     [ 0  1 −1 ]     [  0 0  0 ]     [ 0  0  0 ]
[    c        d        −c − d    ] = a [  0 0  0 ] + b [ 0  0  0 ] + c [  1 0 −1 ] + d [ 0  1 −1 ].
[ −a − c   −b − d   a + b + c + d ]     [ −1 0  1 ]     [ 0 −1  1 ]     [ −1 0  1 ]     [ 0 −1  1 ]

This results in 03 if and only if a = b = c = d = 0, so the matrices appearing on the right-hand side are linearly independent. Moreover, the equation above also demonstrates that W is spanned by the four matrices appearing on the right-hand side. Therefore, these matrices are a basis for W :

Basis =

{ [  1 0 −1 ]   [ 0  1 −1 ]   [  0 0  0 ]   [ 0  0  0 ]
  [  0 0  0 ] , [ 0  0  0 ] , [  1 0 −1 ] , [ 0  1 −1 ]  }.
  [ −1 0  1 ]   [ 0 −1  1 ]   [ −1 0  1 ]   [ 0 −1  1 ]

Hence, dim[W ] = 4.

(c): Since dim[M3(R)] = 9, we must add an additional five (linearly independent) 3 × 3 matrices to form a basis for M3(R). Using the notation prior to Example 4.6.3, we can use the matrices E11, E12, E13, E21, and E31 to extend the basis from part (b) to a basis for M3(R):

Basis for M3(R) =

{ [  1 0 −1 ]   [ 0  1 −1 ]   [  0 0  0 ]   [ 0  0  0 ]
  [  0 0  0 ] , [ 0  0  0 ] , [  1 0 −1 ] , [ 0  1 −1 ] , E11, E12, E13, E21, E31  }.
  [ −1 0  1 ]   [ 0 −1  1 ]   [ −1 0  1 ]   [ 0 −1  1 ]

37.

(a): We must verify the axioms (A1)-(A10) for a vector space:

Axiom (A1): Assume that (v1, w1) and (v2, w2) belong to V ⊕W . Since v1 +V v2 ∈ V and w1 +W w2 ∈ W , the sum

(v1, w1) + (v2, w2) = (v1 +V v2, w1 +W w2)

lies in V ⊕W .

Axiom (A2): Assume that (v, w) belongs to V ⊕W , and let k be a scalar. Since k ·V v ∈ V and k ·W w ∈ W , the scalar multiplication

k · (v, w) = (k ·V v, k ·W w)

lies in V ⊕W .

For the remainder of the axioms, we will omit the ·V and ·W notations. They are to be understood.


Axiom (A3): Assume that (v1, w1), (v2, w2) ∈ V ⊕W . Then

(v1, w1) + (v2, w2) = (v1 + v2, w1 + w2) = (v2 + v1, w2 + w1) = (v2, w2) + (v1, w1),

as required.

Axiom (A4): Assume that (v1, w1), (v2, w2), (v3, w3) ∈ V ⊕W . Then

((v1, w1) + (v2, w2)) + (v3, w3) = (v1 + v2, w1 + w2) + (v3, w3) = ((v1 + v2) + v3, (w1 + w2) + w3) = (v1 + (v2 + v3), w1 + (w2 + w3)) = (v1, w1) + (v2 + v3, w2 + w3) = (v1, w1) + ((v2, w2) + (v3, w3)),

as required.

Axiom (A5): We claim that the zero vector in V ⊕W is (0V , 0W ), where 0V is the zero vector in the vector space V and 0W is the zero vector in the vector space W . To check this, let (v, w) ∈ V ⊕W . Then

(0V , 0W ) + (v, w) = (0V + v, 0W + w) = (v, w),

which confirms that (0V , 0W ) is the zero vector for V ⊕W .

Axiom (A6): We claim that the additive inverse of the vector (v, w) ∈ V ⊕W is the vector (−v, −w), where −v is the additive inverse of v in the vector space V and −w is the additive inverse of w in the vector space W . We check this:

(v, w) + (−v,−w) = (v + (−v), w + (−w)) = (0V , 0W ),

as required.

Axiom (A7): For every vector (v, w) ∈ V ⊕W , we have

1 · (v, w) = (1 · v, 1 · w) = (v, w),

where in the last step we have used the fact that Axiom (A7) holds in each of the vector spaces V and W .

Axiom (A8): Let (v, w) be a vector in V ⊕W , and let r and s be scalars. Using the fact that Axiom (A8)holds in V and W , we have

(rs)(v, w) = ((rs)v, (rs)w) = (r(sv), r(sw)) = r(sv, sw) = r(s(v, w)).

Axiom (A9): Let (v1, w1) and (v2, w2) be vectors in V ⊕W , and let r be a scalar. Then

r((v1, w1) + (v2, w2)) = r(v1 + v2, w1 + w2) = (r(v1 + v2), r(w1 + w2)) = (rv1 + rv2, rw1 + rw2) = (rv1, rw1) + (rv2, rw2) = r(v1, w1) + r(v2, w2),

as required.


Axiom (A10): Let (v, w) ∈ V ⊕W , and let r and s be scalars. Then

(r + s)(v, w) = ((r + s)v, (r + s)w) = (rv + sv, rw + sw) = (rv, rw) + (sv, sw) = r(v, w) + s(v, w),

as required.

(b): We show that {(v, 0) : v ∈ V } is a subspace of V ⊕W , by checking closure under addition and closure under scalar multiplication:

Closure under Addition: Suppose (v1, 0) and (v2, 0) belong to {(v, 0) : v ∈ V }, where v1, v2 ∈ V . Then

(v1, 0) + (v2, 0) = (v1 + v2, 0) ∈ {(v, 0) : v ∈ V },

which shows that the set is closed under addition.

Closure under Scalar Multiplication: Suppose (v, 0) ∈ {(v, 0) : v ∈ V } and k is a scalar. Then k(v, 0) = (kv, 0) is again in the set. Thus, {(v, 0) : v ∈ V } is closed under scalar multiplication.

Therefore, {(v, 0) : v ∈ V } is a subspace of V ⊕W .

(c): Let {v1, v2, . . . , vn} be a basis for V , and let {w1, w2, . . . , wm} be a basis for W . We claim that

S = {(vi, 0) : 1 ≤ i ≤ n} ∪ {(0, wj) : 1 ≤ j ≤ m}

is a basis for V ⊕W . To show this, we will verify that S is a linearly independent set that spans V ⊕W :

Check that S is linearly independent: Assume that

c1(v1, 0) + c2(v2, 0) + · · ·+ cn(vn, 0) + d1(0, w1) + d2(0, w2) + · · ·+ dm(0, wm) = (0, 0).

We must show that

c1 = c2 = · · · = cn = d1 = d2 = · · · = dm = 0.

Adding the vectors on the left-hand side, we have

(c1v1 + c2v2 + · · ·+ cnvn, d1w1 + d2w2 + · · ·+ dmwm) = (0, 0),

so that

c1v1 + c2v2 + · · · + cnvn = 0 and d1w1 + d2w2 + · · · + dmwm = 0.

Since {v1, v2, . . . , vn} is linearly independent,

c1 = c2 = · · · = cn = 0,

and since {w1, w2, . . . , wm} is linearly independent,

d1 = d2 = · · · = dm = 0.

Thus, S is linearly independent.

Check that S spans V ⊕ W : Let (v, w) ∈ V ⊕ W . We must express (v, w) as a linear combination of the vectors in S. Since {v1, v2, . . . , vn} spans V , there exist scalars c1, c2, . . . , cn such that

v = c1v1 + c2v2 + · · ·+ cnvn,


and since {w1, w2, . . . , wm} spans W , there exist scalars d1, d2, . . . , dm such that

w = d1w1 + d2w2 + · · ·+ dmwm.

Then

(v, w) = c1(v1, 0) + c2(v2, 0) + · · · + cn(vn, 0) + d1(0, w1) + d2(0, w2) + · · · + dm(0, wm).

Therefore, (v, w) is a linear combination of vectors in S, so S spans V ⊕W .

Therefore, S is a basis for V ⊕W . Since S contains n + m vectors, dim[V ⊕W ] = n + m.

38. There are many examples here. One such example is S = {x³, x³ − x², x³ − x, x³ − 1}, a basis for P3 whose vectors all have degree 3. To see that S is a basis, note that S is linearly independent, since if

c1x³ + c2(x³ − x²) + c3(x³ − x) + c4(x³ − 1) = 0,

then

(c1 + c2 + c3 + c4)x³ − c2x² − c3x − c4 = 0,

and so c1 = c2 = c3 = c4 = 0. Since S is a linearly independent set of 4 vectors and dim[P3] = 4, S is a basis for P3.

39. Let A be an m× n matrix. By the Rank-Nullity Theorem,

dim[colspace(A)] + dim[nullspace(A)] = n.

Since, by assumption, colspace(A) = nullspace(A), both spaces have the same dimension r, so n = 2r must be even.

40. The ith row of the matrix A is bi(c1 c2 . . . cn). Therefore, each row of A is a multiple of the vector (c1 c2 . . . cn), and so rank(A) = 1. Thus, by the Rank-Nullity Theorem, nullity(A) = n − 1.

41. A row-echelon form of A is given by

[ 1 2 ]
[ 0 0 ].

Thus, a basis for the rowspace of A is given by {(1, 2)}, a basis for the columnspace of A is given by {(−3, −6)}, and a basis for the nullspace of A is given by {(−2, 1)}. All three subspaces are one-dimensional.

42. A row-echelon form of A is given by

[ 1 −6  −2    0   ]
[ 0  1  1/3  5/21 ]
[ 0  0   0    0   ].

Thus, a basis for the rowspace of A is {(1, −6, −2, 0), (0, 1, 1/3, 5/21)}, and a basis for the column space of A is given by {(−1, 3, 7), (6, 3, 21)}. For the nullspace, we observe that the equations corresponding to the row-echelon form of A can be written as

x − 6y − 2z = 0 and y + (1/3)z + (5/21)w = 0.

Set w = t and z = s. Then y = −(1/3)s − (5/21)t and x = −(10/7)t. Thus,

nullspace(A) = {(−(10/7)t, −(1/3)s − (5/21)t, s, t) : s, t ∈ R} = {t(−10/7, −5/21, 0, 1) + s(0, −1/3, 1, 0)}.

Hence, a basis for the nullspace of A is given by {(−10/7, −5/21, 0, 1), (0, −1/3, 1, 0)}.


43. A row-echelon form of A is given by

[ 1 −2.5 −5  ]
[ 0  1   1.3 ]
[ 0  0   1   ]
[ 0  0   0   ].

Thus, a basis for the rowspace of A is given by {(1, −2.5, −5), (0, 1, 1.3), (0, 0, 1)}, and a basis for the columnspace of A is given by {(−4, 0, 6, −2), (0, 10, 5, 5), (3, 13, 2, 10)}. Moreover, we have that the nullspace of A is 0-dimensional, and so the basis is empty.

44. A row-echelon form of A is given by

[ 1 0  2  2  1 ]
[ 0 1 −1 −4 −3 ]
[ 0 0  1  4  3 ]
[ 0 0  0  1  0 ].

Thus, a basis for the rowspace of A is given by

{(1, 0, 2, 2, 1), (0, 1, −1, −4, −3), (0, 0, 1, 4, 3), (0, 0, 0, 1, 0)},

and a basis for the columnspace of A is given by

{(3, 1, 1, −2), (5, 0, 1, 0), (5, 2, 1, −4), (2, 2, −2, −2)}.

For the nullspace, if the variables corresponding to the columns are (x, y, z, u, v), then the row-echelon form tells us that v = t is a free variable, u = 0, z = −3t, y = 0, and x = 5t. Thus,

nullspace(A) = {(5t, 0, −3t, 0, t) : t ∈ R} = {t(5, 0, −3, 0, 1) : t ∈ R},

and so a basis for the nullspace of A is {(5, 0, −3, 0, 1)}.

45. We will obtain bases for the rowspace, columnspace, and nullspace and orthonormalize them. A row-echelon form of A is given by

[ 1 2 6 ]
[ 0 1 2 ]
[ 0 0 0 ]
[ 0 0 0 ].

We see that a basis for the rowspace of A is given by {(1, 2, 6), (0, 1, 2)}. We apply Gram-Schmidt to this set, and thus we need to replace (0, 1, 2) by

(0, 1, 2) − (14/41)(1, 2, 6) = (−14/41, 13/41, −2/41).

So an orthogonal basis for the rowspace of A is given by

{(1, 2, 6), (−14/41, 13/41, −2/41)}.


To replace this with an orthonormal basis, we must normalize each vector. The first one has norm √41 and the second one has norm 3/√41. Hence, an orthonormal basis for the rowspace of A is

{(√41/41)(1, 2, 6), (√41/3)(−14/41, 13/41, −2/41)}.

Returning to the row-echelon form of A obtained above, we see that a basis for the columnspace of A is

{(1, 2, 0, 1), (2, 1, 1, 0)}.

We apply Gram-Schmidt to this set, and thus we need to replace (2, 1, 1, 0) by

(2, 1, 1, 0) − (4/6)(1, 2, 0, 1) = (1/3)(4, −1, 3, −2).

So an orthogonal basis for the columnspace of A is given by

{(1, 2, 0, 1), (1/3)(4, −1, 3, −2)}.

The norms of these vectors are, respectively, √6 and √30/3. Hence, we normalize the above orthogonal basis to obtain the orthonormal basis for the columnspace:

{(√6/6)(1, 2, 0, 1), (√30/30)(4, −1, 3, −2)}.

Returning once more to the row-echelon form of A obtained above, we see that, in order to find the nullspace of A, we must solve the equations x + 2y + 6z = 0 and y + 2z = 0. Setting z = t as a free variable, we find that y = −2t and x = −2t. Thus, a basis for the nullspace of A is {(−2, −2, 1)}, which can be normalized to

{(−2/3, −2/3, 1/3)}.
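The Gram-Schmidt step on the rowspace basis {(1, 2, 6), (0, 1, 2)} from Problem 45 can be reproduced exactly in rational arithmetic; the helper below is a generic sketch, not code from the text.

```python
from fractions import Fraction

# Exact Gram-Schmidt (standard dot product).

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(vectors):
    basis = []
    for x in vectors:
        v = [Fraction(c) for c in x]
        for b in basis:
            coef = dot(v, b) / dot(b, b)          # projection coefficient
            v = [vi - coef * bi for vi, bi in zip(v, b)]
        basis.append(v)
    return basis

v1, v2 = gram_schmidt([(1, 2, 6), (0, 1, 2)])
# v2 has components -14/41, 13/41, -2/41, matching the text
assert dot(v1, v2) == 0
```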

46. A row-echelon form for A is

[ 1 3  5  ]
[ 0 1 3/2 ]
[ 0 0  1  ]
[ 0 0  0  ]
[ 0 0  0  ].

Note that since rank(A) = 3, nullity(A) = 0, and so there is no basis for nullspace(A). Moreover, rowspace(A) is a 3-dimensional subspace of R3, and therefore, rowspace(A) = R3. An orthonormal basis for this is {(1, 0, 0), (0, 1, 0), (0, 0, 1)}.


Finally, consider the columnspace of A. We must apply the Gram-Schmidt process to the three columns of A. Thus, we replace the second column vector by

v2 = (3, −3, 2, 5, 5) − 4(1, −1, 0, 1, 1) = (−1, 1, 2, 1, 1).

Next, we replace the third column vector by

v3 = (5, 1, 3, 2, 8) − (7/2)(1, −1, 0, 1, 1) − (3/2)(−1, 1, 2, 1, 1) = (3, 3, 0, −3, 3).

Hence, an orthogonal basis for the columnspace of A is

{(1, −1, 0, 1, 1), (−1, 1, 2, 1, 1), (3, 3, 0, −3, 3)}.

Normalizing each vector yields the orthonormal basis

{(1/2)(1, −1, 0, 1, 1), (1/√8)(−1, 1, 2, 1, 1), (1/6)(3, 3, 0, −3, 3)}.

47. Let x1 = (5, −1, 2) and let x2 = (7, 1, 1). Using the Gram-Schmidt process, we have

v1 = x1 = (5, −1, 2)

and

v2 = x2 − (⟨x2, v1⟩/‖v1‖²)v1 = (7, 1, 1) − (36/30)(5, −1, 2) = (7, 1, 1) − (6, −6/5, 12/5) = (1, 11/5, −7/5).

Hence, an orthogonal basis is given by

{(5, −1, 2), (1, 11/5, −7/5)}.

48. We already saw in Problem 26 that S spans R3, so therefore an obvious orthogonal basis for span(S) is {(1, 0, 0), (0, 1, 0), (0, 0, 1)}. Alternatively, for practice with Gram-Schmidt, we would proceed as follows: Let x1 = (6, −3, 2), x2 = (1, 1, 1), and x3 = (1, −8, −1). Using the Gram-Schmidt process, we have

v1 = x1 = (6, −3, 2),


v2 = x2 − (⟨x2, v1⟩/‖v1‖²)v1 = (1, 1, 1) − (5/49)(6, −3, 2) = (19/49, 64/49, 39/49),

and

v3 = x3 − (⟨x3, v1⟩/‖v1‖²)v1 − (⟨x3, v2⟩/‖v2‖²)v2 = (1, −8, −1) − (4/7)(6, −3, 2) + (38/427)(19, 64, 39) = (−45/61, −36/61, 81/61).

Hence, an orthogonal basis is given by

{(6, −3, 2), (19/49, 64/49, 39/49), (−45/61, −36/61, 81/61)}.

49. We already saw in Problem 29 that S spans P3, so therefore we can apply Gram-Schmidt to the basis {1, x, x², x³} for P3, instead of the given set of polynomials. Let x1 = 1, x2 = x, x3 = x², and x4 = x³. Using the Gram-Schmidt process, we have

v1 = x1 = 1,

v2 = x2 − (⟨x2, v1⟩/‖v1‖²)v1 = x − 1/2,

v3 = x3 − (⟨x3, v1⟩/‖v1‖²)v1 − (⟨x3, v2⟩/‖v2‖²)v2 = x² − (x − 1/2) − 1/3 = x² − x + 1/6,

and

v4 = x4 − (⟨x4, v1⟩/‖v1‖²)v1 − (⟨x4, v2⟩/‖v2‖²)v2 − (⟨x4, v3⟩/‖v3‖²)v3 = x³ − 1/4 − (9/10)(x − 1/2) − (3/2)(x² − x + 1/6) = x³ − (3/2)x² + (3/5)x − 1/20.

Hence, an orthogonal basis is given by

{1, x − 1/2, x² − x + 1/6, x³ − (3/2)x² + (3/5)x − 1/20}.
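The polynomial computation can be reproduced symbolically. The sketch below assumes the inner product ⟨p, q⟩ = ∫ p(x)q(x) dx over [0, 1]; the interval is not restated in this excerpt, but that choice is consistent with the results above.

```python
from fractions import Fraction

# Symbolic Gram-Schmidt on {1, x, x^2, x^3}.
# Polynomials are coefficient lists, constant term first.

def poly_mul(p, q):
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def inner(p, q):
    # integral over [0, 1]: each coefficient c_k contributes c_k/(k+1)
    return sum(c / (k + 1) for k, c in enumerate(poly_mul(p, q)))

def gram_schmidt(polys):
    basis = []
    for x in polys:
        v = [Fraction(c) for c in x]
        for b in basis:
            coef = inner(v, b) / inner(b, b)
            padded = b + [Fraction(0)] * (len(v) - len(b))
            v = [vi - coef * bi for vi, bi in zip(v, padded)]
        basis.append(v)
    return basis

ortho = gram_schmidt([[1], [0, 1], [0, 0, 1], [0, 0, 0, 1]])
# ortho[1] = x - 1/2, ortho[2] = x^2 - x + 1/6,
# ortho[3] = x^3 - (3/2)x^2 + (3/5)x - 1/20, matching the basis above
```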

50. It is easy to see that the span of the set of vectors in Problem 32 is the set of all 2 × 2 symmetric matrices. Therefore, we can simply give the orthogonal basis

{ [ 1 0 ]   [ 0 1 ]   [ 0 0 ]
  [ 0 0 ] , [ 1 0 ] , [ 0 1 ]  }

for the set of all 2 × 2 symmetric matrices.

51. We have

⟨u, v⟩ = ⟨(2, 3), (4, −1)⟩ = 2 · 4 + 3 · (−1) = 5, ‖u‖ = √13, ‖v‖ = √17,

and so

θ = cos⁻¹(⟨u, v⟩/(‖u‖‖v‖)) = cos⁻¹(5/(√13 √17)) ≈ 1.23 radians.

52. We have

⟨u, v⟩ = ⟨(−2, −1, 2, 4), (−3, 5, 1, 1)⟩ = (−2)(−3) + (−1)(5) + 2 · 1 + 4 · 1 = 7, ‖u‖ = 5, ‖v‖ = 6,

and so

θ = cos⁻¹(⟨u, v⟩/(‖u‖‖v‖)) = cos⁻¹(7/(5 · 6)) = cos⁻¹(7/30) ≈ 1.34 radians.


53. For Problem 51, we have

⟨u, v⟩ = ⟨(2, 3), (4, −1)⟩ = 2 · 2 · 4 + 3 · (−1) = 13, ‖u‖ = √17, ‖v‖ = √33,

and so

θ = cos⁻¹(⟨u, v⟩/(‖u‖‖v‖)) = cos⁻¹(13/(√17 √33)) ≈ 0.99 radians.

For Problem 52, we have

⟨u, v⟩ = ⟨(−2, −1, 2, 4), (−3, 5, 1, 1)⟩ = 2 · (−2)(−3) + (−1)(5) + 2 · 1 + 4 · 1 = 13, ‖u‖ = √29, ‖v‖ = √45,

and so

θ = cos⁻¹(⟨u, v⟩/(‖u‖‖v‖)) = cos⁻¹(13/(√29 √45)) ≈ 1.20 radians.
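Both computations use θ = cos⁻¹(⟨u, v⟩/(‖u‖‖v‖)); the sketch below checks the vectors of Problem 51 under the standard inner product and under the weighted inner product implied by the computation in Problem 53.

```python
import math

# Angle between u and v for two choices of inner product.
u, v = (2, 3), (4, -1)

def std(a, b):                      # standard inner product (Problem 51)
    return a[0] * b[0] + a[1] * b[1]

def weighted(a, b):                 # weighted inner product from Problem 53
    return 2 * a[0] * b[0] + a[1] * b[1]

def angle(inner):
    return math.acos(inner(u, v) / math.sqrt(inner(u, u) * inner(v, v)))

print(round(angle(std), 2), round(angle(weighted), 2))  # -> 1.23 0.99
```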

54.

(a): We must verify the four axioms for an inner product given in Definition 4.11.3.

Axiom 1: We have

p · p = p(t0)p(t0) + p(t1)p(t1) + · · · + p(tn)p(tn) = p(t0)² + p(t1)² + · · · + p(tn)² ≥ 0.

Moreover,

p(t0)² + p(t1)² + · · · + p(tn)² = 0 ⇐⇒ p(t0) = p(t1) = · · · = p(tn) = 0.

But the only polynomial of degree ≤ n which has more than n roots is the zero polynomial. Thus,

p · p = 0 ⇐⇒ p = 0.

Axiom 2: We have

p · q = p(t0)q(t0) + p(t1)q(t1) + · · ·+ p(tn)q(tn) = q(t0)p(t0) + q(t1)p(t1) + · · ·+ q(tn)p(tn) = q · p

for all p, q ∈ Pn.

Axiom 3: Let k be a scalar, and let p, q ∈ Pn. Then

(kp) · q = (kp)(t0)q(t0) + (kp)(t1)q(t1) + · · · + (kp)(tn)q(tn) = kp(t0)q(t0) + kp(t1)q(t1) + · · · + kp(tn)q(tn) = k[p(t0)q(t0) + p(t1)q(t1) + · · · + p(tn)q(tn)] = k[p · q],

as required.

Axiom 4: Let p1, p2, q ∈ Pn. Then we have

(p1 + p2) · q = (p1 + p2)(t0)q(t0) + (p1 + p2)(t1)q(t1) + · · · + (p1 + p2)(tn)q(tn) = [p1(t0) + p2(t0)]q(t0) + [p1(t1) + p2(t1)]q(t1) + · · · + [p1(tn) + p2(tn)]q(tn) = [p1(t0)q(t0) + p1(t1)q(t1) + · · · + p1(tn)q(tn)] + [p2(t0)q(t0) + p2(t1)q(t1) + · · · + p2(tn)q(tn)] = (p1 · q) + (p2 · q),


as required.

(b): The projection of p2 onto span{p0, p1} is given by

(⟨p2, p0⟩/‖p0‖²)p0 + (⟨p2, p1⟩/‖p1‖²)p1 = (20/4)p0 + (0/‖p1‖²)p1 = 5.

(c): We take q = t² − 5.

55. Let x = (2, 3, 4) and let v = (6, −1, −4). We must find the length of

x − P(x, v) = (2, 3, 4) − (⟨(2, 3, 4), (6, −1, −4)⟩/‖(6, −1, −4)‖²)(6, −1, −4) = (2, 3, 4) + (7/53)(6, −1, −4) = (148/53, 152/53, 184/53).

56. Note that

⟨x − y, vi⟩ = ⟨x, vi⟩ − ⟨y, vi⟩ = 0,

by assumption. Let v be an arbitrary vector in V , and write

v = a1v1 + a2v2 + · · ·+ anvn,

for some scalars a1, a2, . . . , an. Observe that

⟨x − y, v⟩ = ⟨x − y, a1v1 + a2v2 + · · · + anvn⟩ = a1⟨x − y, v1⟩ + a2⟨x − y, v2⟩ + · · · + an⟨x − y, vn⟩ = a1 · 0 + a2 · 0 + · · · + an · 0 = 0.

Thus, x − y is orthogonal to every vector in V . In particular,

⟨x − y, x − y⟩ = 0,

and hence, x− y = 0. Therefore x = y.

57. Any of the conditions (a)-(p) appearing in the Invertible Matrix Theorem would be appropriate at thispoint in the text.

Solutions to Section 5.1

True-False Review:

1. FALSE. The conditions T (u + v) = T (u) + T (v) and T (c · v) = cT (v) must hold for all vectors u, v in V and for all scalars c, not just “for some”.

2. FALSE. The dimensions of the matrix A should be m× n, not n×m, as stated in the question.

3. FALSE. This will only necessarily hold for a linear transformation, not for more general mappings.

4. TRUE. This is precisely the definition of the matrix associated with a linear transformation, as given inthe text.

5. TRUE. Since

0 = T (0) = T (v + (−v)) = T (v) + T (−v),

we conclude that T (−v) = −T (v).

6. TRUE. Using the properties of a linear transformation, we have

T ((c + d)v) = T (cv + dv) = T (cv) + T (dv) = cT (v) + dT (v),


as stated.

Problems:

1. Let (x1, x2), (y1, y2) ∈ R2 and c ∈ R.

T ((x1, x2) + (y1, y2)) = T (x1 + y1, x2 + y2) = (x1 + y1 + 2x2 + 2y2, 2x1 + 2y1 − x2 − y2) = (x1 + 2x2, 2x1 − x2) + (y1 + 2y2, 2y1 − y2) = T (x1, x2) + T (y1, y2).

T (c(x1, x2)) = T (cx1, cx2) = (cx1 + 2cx2, 2cx1 − cx2) = c(x1 + 2x2, 2x1 − x2) = cT (x1, x2).

Thus, T is a linear transformation.

2. Let (x1, x2, x3), (y1, y2, y3) ∈ R3 and c ∈ R.

T ((x1, x2, x3) + (y1, y2, y3)) = T (x1 + y1, x2 + y2, x3 + y3) = (x1 + y1 + 3x2 + 3y2 + x3 + y3, x1 + y1 − x2 − y2) = (x1 + 3x2 + x3, x1 − x2) + (y1 + 3y2 + y3, y1 − y2) = T (x1, x2, x3) + T (y1, y2, y3).

T (c(x1, x2, x3)) = T (cx1, cx2, cx3) = (cx1 + 3cx2 + cx3, cx1 − cx2) = c(x1 + 3x2 + x3, x1 − x2) = cT (x1, x2, x3).

Thus, T is a linear transformation.

3. Let y1, y2 ∈ C2(I) and c ∈ R. Then,

T (y1 + y2) = (y1 + y2)′′ − 16(y1 + y2) = (y1′′ − 16y1) + (y2′′ − 16y2) = T (y1) + T (y2).

T (cy1) = (cy1)′′ − 16(cy1) = c(y1′′ − 16y1) = cT (y1).

Consequently, T is a linear transformation.

Consequently, T is a linear transformation.

4. Let y1, y2 ∈ C2(I) and c ∈ R. Then,

T (y1 + y2) = (y1 + y2)′′ + a1(y1 + y2)′ + a2(y1 + y2) = (y1′′ + a1y1′ + a2y1) + (y2′′ + a1y2′ + a2y2) = T (y1) + T (y2).

T (cy1) = (cy1)′′ + a1(cy1)′ + a2(cy1) = c(y1′′ + a1y1′ + a2y1) = cT (y1).

Consequently, T is a linear transformation.

5. Let f, g ∈ V and c ∈ R. Then

T (f + g) = ∫_a^b (f + g)(x) dx = ∫_a^b [f(x) + g(x)] dx = ∫_a^b f(x) dx + ∫_a^b g(x) dx = T (f) + T (g).

T (cf) = ∫_a^b [cf(x)] dx = c ∫_a^b f(x) dx = cT (f).

Therefore, T is a linear transformation.

6. Let A1, A2, B ∈ Mn(R) and c ∈ R.

T (A1 + A2) = (A1 + A2)B − B(A1 + A2) = A1B + A2B − BA1 − BA2 = (A1B − BA1) + (A2B − BA2) = T (A1) + T (A2).

T (cA1) = (cA1)B − B(cA1) = c(A1B − BA1) = cT (A1).

Consequently, T is a linear transformation.


7. Let A, B ∈ Mn(R) and c ∈ R. Then,

S(A + B) = (A + B) + (A + B)T = A + AT + B + BT = S(A) + S(B).

S(cA) = (cA) + (cA)T = c(A + AT ) = cS(A).

Consequently, S is a linear transformation.

8. Let A, B ∈ Mn(R) and c ∈ R. Then,

T (A + B) = tr(A + B) = Σ_{k=1}^n (akk + bkk) = Σ_{k=1}^n akk + Σ_{k=1}^n bkk = tr(A) + tr(B) = T (A) + T (B).

T (cA) = tr(cA) = Σ_{k=1}^n c·akk = c Σ_{k=1}^n akk = c tr(A) = cT (A).

Consequently, T is a linear transformation.

9. Let x = (x1, x2), y = (y1, y2) be in R2. Then

T (x + y) = T (x1 + y1, x2 + y2) = (x1 + x2 + y1 + y2, 2),

whereas

T(x) + T(y) = (x1 + x2, 2) + (y1 + y2, 2) = (x1 + x2 + y1 + y2, 4).

We see that T(x + y) ≠ T(x) + T(y), hence T is not a linear transformation.

10. Let A ∈ M2(R) and c ∈ R. Then

T(cA) = det(cA) = c^2 det(A) = c^2 T(A).

Since T(cA) ≠ cT(A) in general, it follows that T is not a linear transformation.
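The quadratic scaling of the determinant can be seen numerically as well; a sketch (the matrix here is an arbitrary 2 × 2 example, not one from the text, and NumPy is assumed):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])   # arbitrary 2x2 example
c = 3.0
lhs = np.linalg.det(c * A)               # det(cA)
rhs = c ** 2 * np.linalg.det(A)          # c^2 det(A)
scales_quadratically = np.isclose(lhs, rhs)
fails_homogeneity = not np.isclose(lhs, c * np.linalg.det(A))
```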

11. If T(x1, x2) = (3x1 − 2x2, x1 + 5x2), then A = [T(e1), T(e2)] =
[ 3 −2 ]
[ 1  5 ].

12. If T(x1, x2) = (x1 + 3x2, 2x1 − 7x2, x1), then A = [T(e1), T(e2)] =
[ 1  3 ]
[ 2 −7 ]
[ 1  0 ].

13. If T(x1, x2, x3) = (x1 − x2 + x3, x3 − x1), then A = [T(e1), T(e2), T(e3)] =
[  1 −1  1 ]
[ −1  0  1 ].

14. If T(x1, x2, x3) = x1 + 5x2 − 3x3, then A = [T(e1), T(e2), T(e3)] = [ 1 5 −3 ].

15. If T(x1, x2, x3) = (x3 − x1, −x1, 3x1 + 2x3, 0), then A = [T(e1), T(e2), T(e3)] =
[ −1 0 1 ]
[ −1 0 0 ]
[  3 0 2 ]
[  0 0 0 ].

16. T(x) = Ax, where A =
[  1 3 ]
[ −4 7 ],
so that

T(x1, x2) = (x1 + 3x2, −4x1 + 7x2).

17. T(x) = Ax, where A =
[ 2 −1  5 ]
[ 3  1 −2 ],
so that

T(x1, x2, x3) = (2x1 − x2 + 5x3, 3x1 + x2 − 2x3).


18. T(x) = Ax, where A =
[ 2  2 −3 ]
[ 4 −1  2 ]
[ 5  7 −8 ],
so that

T(x1, x2, x3) = (2x1 + 2x2 − 3x3, 4x1 − x2 + 2x3, 5x1 + 7x2 − 8x3).

19. T(x) = Ax, where A is the column matrix with entries −3, −2, 0, 1, so that

T(x) = (−3x, −2x, 0, x).

20. T(x) = Ax, where A = [ 1 −4 −6 0 2 ], so that

T(x1, x2, x3, x4, x5) = x1 − 4x2 − 6x3 + 2x5.

21. Let u be a fixed vector in V, v1, v2 ∈ V, and c ∈ R. Then

T(v1 + v2) = 〈u, v1 + v2〉 = 〈u, v1〉 + 〈u, v2〉 = T(v1) + T(v2).

T(cv1) = 〈u, cv1〉 = c〈u, v1〉 = cT(v1).

Thus, T is a linear transformation.

22. We must show that the linear transformation T respects addition and scalar multiplication:

T respects addition: Let v1 and v2 be vectors in V . Then we have

T(v1 + v2) = (〈u1, v1 + v2〉, 〈u2, v1 + v2〉)
= (〈v1 + v2, u1〉, 〈v1 + v2, u2〉)
= (〈v1, u1〉 + 〈v2, u1〉, 〈v1, u2〉 + 〈v2, u2〉)
= (〈v1, u1〉, 〈v1, u2〉) + (〈v2, u1〉, 〈v2, u2〉)
= (〈u1, v1〉, 〈u2, v1〉) + (〈u1, v2〉, 〈u2, v2〉)
= T(v1) + T(v2).

T respects scalar multiplication: Let v be a vector in V and let c be a scalar. Then we have

T(cv) = (〈u1, cv〉, 〈u2, cv〉)
= (〈cv, u1〉, 〈cv, u2〉)
= (c〈v, u1〉, c〈v, u2〉)
= c(〈v, u1〉, 〈v, u2〉)
= c(〈u1, v〉, 〈u2, v〉)
= cT(v).

23. (a) If D = [v1, v2] =
[ 1  1 ]
[ 1 −1 ],
then det(D) = −2 ≠ 0, so by Corollary 4.5.15 the vectors v1 = (1, 1) and v2 = (1, −1) are linearly independent. Since dim[R2] = 2, it follows from Theorem 4.6.10 that {v1, v2} is a basis for R2.


(b) Let x = (x1, x2) be an arbitrary vector in R2. Since {v1, v2} forms a basis for R2, there exist c1 and c2 such that

(x1, x2) = c1(1, 1) + c2(1, −1),

that is, such that

c1 + c2 = x1,  c1 − c2 = x2.

Solving this system yields c1 = (1/2)(x1 + x2), c2 = (1/2)(x1 − x2). Thus,

(x1, x2) = (1/2)(x1 + x2)v1 + (1/2)(x1 − x2)v2,

so that

T[(x1, x2)] = T[(1/2)(x1 + x2)v1 + (1/2)(x1 − x2)v2]
= (1/2)(x1 + x2)T(v1) + (1/2)(x1 − x2)T(v2)
= (1/2)(x1 + x2)(2, 3) + (1/2)(x1 − x2)(−1, 1)
= (x1/2 + 3x2/2, 2x1 + x2).

In particular, when (4, −2) is substituted for (x1, x2), it follows that T(4, −2) = (−1, 6).
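The coordinate computation in part (b) amounts to solving a 2 × 2 linear system; a numeric sketch (NumPy assumed, data transcribed from the solution above):

```python
import numpy as np

v1, v2 = np.array([1.0, 1.0]), np.array([1.0, -1.0])
Tv1, Tv2 = np.array([2.0, 3.0]), np.array([-1.0, 1.0])

x = np.array([4.0, -2.0])
# Solve [v1 v2] c = x for the coordinates (c1, c2) of x relative to {v1, v2}.
c = np.linalg.solve(np.column_stack([v1, v2]), x)
Tx = c[0] * Tv1 + c[1] * Tv2   # linearity: T(x) = c1 T(v1) + c2 T(v2)
```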

24. The matrix of T is the 4 × 2 matrix [T(e1), T(e2)]. Therefore, we must determine T(1, 0) and T(0, 1), which we can determine from the given information by using the linear transformation properties. A quick calculation shows that (1, 0) = −(2/3)(−1, 1) + (1/3)(1, 2), so

T(1, 0) = −(2/3)T(−1, 1) + (1/3)T(1, 2) = −(2/3)(1, 0, −2, 2) + (1/3)(−3, 1, 1, 1) = (−5/3, 1/3, 5/3, −1).

Similarly, we have (0, 1) = (1/3)(−1, 1) + (1/3)(1, 2), so

T(0, 1) = (1/3)T(−1, 1) + (1/3)T(1, 2) = (1/3)(1, 0, −2, 2) + (1/3)(−3, 1, 1, 1) = (−2/3, 1/3, −1/3, 1).

Therefore, we have the matrix of T:
[ −5/3 −2/3 ]
[  1/3  1/3 ]
[  5/3 −1/3 ]
[  −1    1  ].

25. The matrix of T is the 2 × 4 matrix [T(e1), T(e2), T(e3), T(e4)]. Therefore, we must determine T(1, 0, 0, 0), T(0, 1, 0, 0), T(0, 0, 1, 0), and T(0, 0, 0, 1), which we can determine from the given information by using the linear transformation properties. We are given that

T(1, 0, 0, 0) = (3, −2).

Next,

T(0, 1, 0, 0) = T(1, 1, 0, 0) − T(1, 0, 0, 0) = (5, 1) − (3, −2) = (2, 3),
T(0, 0, 1, 0) = T(1, 1, 1, 0) − T(1, 1, 0, 0) = (−1, 0) − (5, 1) = (−6, −1),


and

T(0, 0, 0, 1) = T(1, 1, 1, 1) − T(1, 1, 1, 0) = (2, 2) − (−1, 0) = (3, 2).

Therefore, we have the matrix of T:
[  3 2 −6 3 ]
[ −2 3 −1 2 ].

26. The matrix of T is the 3 × 3 matrix [T(e1), T(e2), T(e3)]. Therefore, we must determine T(1, 0, 0), T(0, 1, 0), and T(0, 0, 1), which we can determine from the given information by using the linear transformation properties. A quick calculation shows that (1, 0, 0) = (1, 2, 0) − 6(0, 1, 1) + 2(0, 2, 3), so

T(1, 0, 0) = T(1, 2, 0) − 6T(0, 1, 1) + 2T(0, 2, 3) = (2, −1, 1) − 6(3, −1, −1) + 2(6, −5, 4) = (−4, −5, 15).

Similarly, (0, 1, 0) = 3(0, 1, 1) − (0, 2, 3), so

T(0, 1, 0) = 3T(0, 1, 1) − T(0, 2, 3) = 3(3, −1, −1) − (6, −5, 4) = (3, 2, −7).

Finally, (0, 0, 1) = −2(0, 1, 1) + (0, 2, 3), so

T(0, 0, 1) = −2T(0, 1, 1) + T(0, 2, 3) = −2(3, −1, −1) + (6, −5, 4) = (0, −3, 6).

Therefore, we have the matrix of T:
[ −4  3  0 ]
[ −5  2 −3 ]
[ 15 −7  6 ].

27. The matrix of T is the 4 × 3 matrix [T(e1), T(e2), T(e3)]. Therefore, we must determine T(1, 0, 0), T(0, 1, 0), and T(0, 0, 1), which we can determine from the given information by using the linear transformation properties. A quick calculation shows that (1, 0, 0) = (1/4)(0, −1, 4) − (1/4)(0, 3, 3) + (1/4)(4, 4, −1), so

T(1, 0, 0) = (1/4)T(0, −1, 4) − (1/4)T(0, 3, 3) + (1/4)T(4, 4, −1) = (1/4)(2, 5, −2, 1) − (1/4)(−1, 0, 0, 5) + (1/4)(−3, 1, 2, 3) = (0, 3/2, 0, −1/4).

Similarly, (0, 1, 0) = −(1/5)(0, −1, 4) + (4/15)(0, 3, 3), so

T(0, 1, 0) = −(1/5)(2, 5, −2, 1) + (4/15)(−1, 0, 0, 5) = (−2/3, −1, 2/5, 17/15).

Finally, (0, 0, 1) = (1/5)(0, −1, 4) + (1/15)(0, 3, 3), so

T(0, 0, 1) = (1/5)T(0, −1, 4) + (1/15)T(0, 3, 3) = (1/5)(2, 5, −2, 1) + (1/15)(−1, 0, 0, 5) = (1/3, 1, −2/5, 8/15).

Therefore, we have the matrix of T:
[   0   −2/3   1/3 ]
[  3/2   −1     1  ]
[   0    2/5  −2/5 ]
[ −1/4  17/15  8/15 ].

28. T (ax2+bx+c) = aT (x2)+bT (x)+cT (1) = a(3x+2)+b(x2−1)+c(x+1) = bx2+(3a+c)x+(2a−b+c).


29. Using the linearity of T, we have T(2v1 + 3v2) = v1 + v2 and T(v1 + v2) = 3v1 − v2. That is, 2T(v1) + 3T(v2) = v1 + v2 and T(v1) + T(v2) = 3v1 − v2. Solving this system for the unknowns T(v1) and T(v2), we obtain T(v2) = 3v2 − 5v1 and T(v1) = 8v1 − 4v2.

30. Since T is a linear transformation we obtain:

T (x2)− T (1) = x2 + x− 3, (30.1)

2T (x) = 4x, (30.2)

3T (x) + 2T (1) = 2(x + 3) = 2x + 6. (30.3)

From Equation (30.2) it follows that T(x) = 2x, so upon substitution into Equation (30.3) we have 3(2x) + 2T(1) = 2(x + 3), or T(1) = −2x + 3. Substituting this last result into Equation (30.1) yields T(x^2) − (−2x + 3) = x^2 + x − 3, so T(x^2) = x^2 − x. Now if a, b and c are arbitrary real numbers, then

T(ax^2 + bx + c) = aT(x^2) + bT(x) + cT(1) = a(x^2 − x) + b(2x) + c(−2x + 3) = ax^2 + (−a + 2b − 2c)x + 3c.

31. Let v ∈ V. Since {v1, v2} is a basis for V, there exist a, b ∈ R such that v = av1 + bv2. Hence

T(v) = T(av1 + bv2) = aT(v1) + bT(v2) = a(3v1 − v2) + b(v1 + 2v2) = 3av1 − av2 + bv1 + 2bv2 = (3a + b)v1 + (2b − a)v2.

32. Let v be any vector in V. Since {v1, v2, . . . , vk} spans V, we can write v = c1v1 + c2v2 + · · · + ckvk for suitable scalars c1, c2, . . . , ck. Then

T(v) = T(c1v1 + c2v2 + · · · + ckvk) = c1T(v1) + c2T(v2) + · · · + ckT(vk) = c1S(v1) + c2S(v2) + · · · + ckS(vk) = S(c1v1 + c2v2 + · · · + ckvk) = S(v),

as required.

33. Let v be any vector in V. Since {v1, v2, . . . , vk} is a basis for V, we can write v = c1v1 + c2v2 + · · · + ckvk for suitable scalars c1, c2, . . . , ck. Then

T(v) = T(c1v1 + c2v2 + · · · + ckvk) = c1T(v1) + c2T(v2) + · · · + ckT(vk) = c1·0 + c2·0 + · · · + ck·0 = 0,

as required.

34. Let v1 and v2 be arbitrary vectors in V. Then

(T1 + T2)(v1 + v2) = T1(v1 + v2) + T2(v1 + v2) = T1(v1) + T1(v2) + T2(v1) + T2(v2) = T1(v1) + T2(v1) + T1(v2) + T2(v2) = (T1 + T2)(v1) + (T1 + T2)(v2).

Further, if k is any scalar, then

(T1 + T2)(kv) = T1(kv) + T2(kv) = kT1(v) + kT2(v) = k[T1(v) + T2(v)] = k(T1 + T2)(v).

It follows that T1 + T2 is a linear transformation. Now consider the transformation cT, where c is an arbitrary scalar:

(cT)(v1 + v2) = cT(v1 + v2) = c[T(v1) + T(v2)] = cT(v1) + cT(v2) = (cT)(v1) + (cT)(v2).


(cT )(kv1) = cT (kv1) = c[kT (v1)] = (ck)T (v1) = (kc)T (v1) = k[cT (v1)].

Thus, cT is a linear transformation.

35. (T1 + T2)(x) = T1(x) + T2(x) = Ax + Bx = (A + B)x, where
A + B =
[ 5  6 ]
[ 2 −2 ].
Hence, (T1 + T2)(x1, x2) = (5x1 + 6x2, 2x1 − 2x2).

(cT1)(x) = cT1(x) = c(Ax) = (cA)x, where
cA =
[ 3c  c ]
[ −c 2c ].
Hence, (cT1)(x1, x2) = (3cx1 + cx2, −cx1 + 2cx2).

36. (T1 + T2)(x) = T1(x) + T2(x) = Ax + Bx = (A + B)x. (cT1)(x) = cT1(x) = c(Ax) = (cA)x.

37. Problem 34 establishes that if T1 and T2 are in L(V,W) and c is any scalar, then T1 + T2 and cT1 are in L(V,W). Consequently, Axioms (A1) and (A2) are satisfied.

A3: Let v be any vector in V. Then

(T1 + T2)(v) = T1(v) + T2(v) = T2(v) + T1(v) = (T2 + T1)(v).

Hence T1 + T2 = T2 + T1, so the addition operation is commutative.

A4: Let T3 ∈ L(V,W). Then

[(T1 + T2) + T3](v) = (T1 + T2)(v) + T3(v) = [T1(v) + T2(v)] + T3(v) = T1(v) + [T2(v) + T3(v)] = T1(v) + (T2 + T3)(v) = [T1 + (T2 + T3)](v).

Hence (T1 + T2) + T3 = T1 + (T2 + T3), so the addition operation is associative.

A5: The zero vector in L(V,W) is the zero transformation O : V → W, defined by

O(v) = 0 for all v in V,

where 0 denotes the zero vector in W. To show that O is indeed the zero vector in L(V,W), let T be any transformation in L(V,W). Then

(T + O)(v) = T(v) + O(v) = T(v) + 0 = T(v) for all v ∈ V,

so that T + O = T.

A6: The additive inverse of the transformation T ∈ L(V,W) is the linear transformation −T defined by −T = (−1)T, since

[T + (−T)](v) = T(v) + (−T)(v) = T(v) + (−1)T(v) = T(v) − T(v) = 0

for all v ∈ V, so that T + (−T) = O.

A7-A10 are all straightforward verifications.

Solutions to Section 5.2

True-False Review:

1. FALSE. For example, T (x1, x2) = (0, 0) is a linear transformation that maps every line to the origin.

2. TRUE. All of the matrices Rx, Ry, Rxy, LSx, LSy, Sx, and Sy discussed in this section are elementary matrices.

Page 335: Differential Equations and Linear Algebra - 3e - Goode -Solutions

335

3. FALSE. A shear parallel to the x-axis composed with a shear parallel to the y-axis is given by the matrix product

[ 1 k ][ 1 0 ]   [ 1 + kl  k ]
[ 0 1 ][ l 1 ] = [   l     1 ],

which is not a shear.

4. TRUE. This is explained prior to Example 5.2.1.

5. FALSE. For example,

Rxy · Rx = [ 0 1 ][ 1  0 ]   [ 0 −1 ]
           [ 1 0 ][ 0 −1 ] = [ 1  0 ],

and this matrix is not in the form of a stretch.

6. FALSE. For example,

[ k 0 ][ 1 0 ]   [ k 0 ]
[ 0 1 ][ 0 l ] = [ 0 l ]

is not a stretch.
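The composition claims in review questions 3, 5, and 6 can all be checked by direct matrix multiplication; a sketch for question 3, with k and l arbitrary nonzero shear factors (NumPy assumed):

```python
import numpy as np

k, l = 2.0, 3.0
Sx = np.array([[1.0, k], [0.0, 1.0]])   # shear parallel to the x-axis
Sy = np.array([[1.0, 0.0], [l, 1.0]])   # shear parallel to the y-axis
P = Sx @ Sy
expected = np.array([[1.0 + k * l, k], [l, 1.0]])
# A shear has both diagonal entries equal to 1; here P[0, 0] = 1 + kl != 1.
is_shear = np.isclose(P[0, 0], 1.0) and np.isclose(P[1, 1], 1.0)
```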

Problems:

1. T (1, 1) = (1,−1), T (2, 1) = (1,−2), T (2, 2) = (2,−2), T (1, 2) = (2,−1).

Figure 0.0.69: Figure for Exercise 1

2. T (1, 1) = (0, 3), T (2, 1) = (1, 4), T (2, 2) = (0, 6), T (1, 2) = (−1, 5).

3. T (1, 1) = (2, 0), T (2, 1) = (3,−1), T (2, 2) = (4, 0), T (1, 2) = (3, 1).

4. T (1, 1) = (−4,−2), T (2, 1) = (−6,−4), T (2, 2) = (−8,−4), T (1, 2) = (−6,−2).

5. A = [1 2; 0 1] =⇒ T(x) = Ax corresponds to a shear parallel to the x-axis.

6. [0 2; 2 0] ∼ [2 0; 0 2] ∼ [1 0; 0 2] ∼ [1 0; 0 1], via 1. P12  2. M1(1/2)  3. M2(1/2). So,


Figure 0.0.70: Figure for Exercise 2

Figure 0.0.71: Figure for Exercise 3

T (x) = Ax = P12M1(2)M2(2)x

which corresponds to a stretch in the y-direction, followed by a stretch in the x-direction, followed by a reflection in y = x.
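The factorization found in Problem 6 can be multiplied back out as a check; a sketch (NumPy assumed):

```python
import numpy as np

P12 = np.array([[0.0, 1.0], [1.0, 0.0]])   # reflection in y = x (row swap)
M1 = np.array([[2.0, 0.0], [0.0, 1.0]])    # stretch by 2 in the x-direction
M2 = np.array([[1.0, 0.0], [0.0, 2.0]])    # stretch by 2 in the y-direction
A = P12 @ M1 @ M2                          # should reproduce [0 2; 2 0]
```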

7. A = [1 0; 3 1] =⇒ T(x) = Ax corresponds to a shear parallel to the y-axis.

8. [−1 0; 0 −1] ∼ [1 0; 0 −1] ∼ [1 0; 0 1], via 1. M1(−1)  2. M2(−1). So,


Figure 0.0.72: Figure for Exercise 4

T (x) = Ax = M1(−1)M2(−1)x

which corresponds to a reflection in the x-axis, followed by a reflection in the y-axis.

9. [1 −3; −2 8] ∼ [1 −3; 0 2] ∼ [1 −3; 0 1] ∼ [1 0; 0 1], via 1. A12(2)  2. M2(1/2)  3. A21(3). So,

T (x) = Ax = A12(−2)M2(2)A21(−3)x

which corresponds to a shear parallel to the x-axis, followed by a stretch in the y-direction, followed by a shear parallel to the y-axis.

10. [1 2; 3 4] ∼ [1 2; 0 −2] ∼ [1 0; 0 −2] ∼ [1 0; 0 1], via 1. A12(−3)  2. A21(1)  3. M2(−1/2). So,

T (x) = Ax = A12(3)A21(−1)M2(−2)x

which corresponds to a reflection in the x-axis, followed by a stretch in the y-direction, followed by a shear parallel to the x-axis, followed by a shear parallel to the y-axis.

11. [1 0; 0 −2] = [1 0; 0 2][1 0; 0 −1]. So T(x) = Ax corresponds to a reflection in the x-axis followed by a stretch in the y-direction.

12. [−1 −1; −1 0] ∼ [−1 −1; 0 1] ∼ [1 1; 0 1] ∼ [1 0; 0 1], via 1. A12(−1)  2. M1(−1)  3. A21(−1).

So,

T (x) = Ax = A12(1)M1(−1)A21(1)x

which corresponds to a shear parallel to the x-axis, followed by a reflection in the y-axis, followed by a shear parallel to the y-axis.

13.


R(θ) = [cos θ 0; 0 1][1 0; sin θ 1][1 0; 0 sec θ][1 −tan θ; 0 1]
= [cos θ 0; 0 1][1 0; sin θ 1][1 −tan θ; 0 sec θ]
= [cos θ 0; 0 1][1 −tan θ; sin θ cos θ]
= [cos θ −sin θ; sin θ cos θ],

which coincides with the matrix of the transformation of R2 corresponding to a rotation through an angle θ in the counter-clockwise direction.
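The four-factor identity in Problem 13 holds for every θ with cos θ ≠ 0; a numeric spot check at one angle (NumPy assumed):

```python
import numpy as np

theta = 0.3
c, s, t = np.cos(theta), np.sin(theta), np.tan(theta)

product = (np.array([[c, 0.0], [0.0, 1.0]])        # stretch by cos(theta) in x
           @ np.array([[1.0, 0.0], [s, 1.0]])      # shear parallel to the y-axis
           @ np.array([[1.0, 0.0], [0.0, 1 / c]])  # stretch by sec(theta) in y
           @ np.array([[1.0, -t], [0.0, 1.0]]))    # shear parallel to the x-axis
rotation = np.array([[c, -s], [s, c]])
```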

14. The matrix for a counter-clockwise rotation through an angle θ = π/2 is [0 −1; 1 0]. Now,

[0 −1; 1 0] ∼ [1 0; 0 −1] ∼ [1 0; 0 1], via 1. P12  2. M2(−1).

So,

T(x) = Ax = P12 M2(−1)x,

which corresponds to a reflection in the x-axis followed by a reflection in y = x.

Solutions to Section 5.3

True-False Review:

1. FALSE. The statement should read

dim[Ker(T )] + dim[Rng(T )] = dim[V ],

not dim[W ] on the right-hand side.

2. FALSE. As a specific illustration, we could take T : P4 → R7 defined by

T(a0 + a1x + a2x^2 + a3x^3 + a4x^4) = (a0, a1, a2, a3, a4, 0, 0),

and it is easy to see that T is a linear transformation with Ker(T) = {0}. Therefore, Ker(T) is 0-dimensional.

3. FALSE. The solution set to the homogeneous linear system Ax = 0 is Ker(T ), not Rng(T ).

4. FALSE. Rng(T) is a subspace of W, not V, since it consists of vectors of the form T(v), and these belong to W.

5. TRUE. From the given information, we see that Ker(T) is at least 2-dimensional, and therefore, since M23 is 6-dimensional, the Rank-Nullity Theorem requires that Rng(T) have dimension at most 6 − 2 = 4.

6. TRUE. Any vector of the form T(v) where v belongs to Rn can be written as Av, and this in turn can be expressed as a linear combination of the columns of A. Therefore, T(v) belongs to colspace(A).

Problems:

1. T(7, 5, −1) = [1 −1 2; 1 −2 −3](7, 5, −1)^T = (0, 0)^T =⇒ (7, 5, −1) ∈ Ker(T).

T(−21, −15, 2) = [1 −1 2; 1 −2 −3](−21, −15, 2)^T = (−2, 3)^T =⇒ (−21, −15, 2) /∈ Ker(T).


T(35, 25, −5) = [1 −1 2; 1 −2 −3](35, 25, −5)^T = (0, 0)^T =⇒ (35, 25, −5) ∈ Ker(T).

2. Ker(T) = {x ∈ R2 : T(x) = 0} = {x ∈ R2 : Ax = 0}. The augmented matrix of the system Ax = 0 is [3 6 0; 1 2 0], with reduced row-echelon form [1 2 0; 0 0 0]. It follows that

Ker(T) = {x ∈ R2 : x = (−2t, t), t ∈ R} = {x ∈ R2 : x = t(−2, 1), t ∈ R}.

Geometrically, this is a line in R2. It is the subspace of R2 spanned by the vector (−2, 1), so dim[Ker(T)] = 1.

For the given transformation, Rng(T) = colspace(A). From the preceding reduced row-echelon form of A, we see that colspace(A) is generated by the first column vector of A. Consequently,

Rng(T) = {y ∈ R2 : y = r(3, 1), r ∈ R}.

Geometrically, this is a line in R2. It is the subspace of R2 spanned by the vector (3, 1), so dim[Rng(T)] = 1. Since dim[Ker(T)] + dim[Rng(T)] = 2 = dim[R2], Theorem 5.3.8 is satisfied.

3. Ker(T) = {x ∈ R3 : T(x) = 0} = {x ∈ R3 : Ax = 0}. The augmented matrix of the system Ax = 0 is [1 −1 0 0; 0 1 2 0; 2 −1 1 0], with reduced row-echelon form [1 0 0 0; 0 1 0 0; 0 0 1 0]. Thus x1 = x2 = x3 = 0, so Ker(T) = {0}. Geometrically, this describes a point (the origin) in R3, and dim[Ker(T)] = 0.

For the given transformation, Rng(T) = colspace(A). From the preceding reduced row-echelon form of A, we see that colspace(A) is generated by the three column vectors of A. Consequently, Rng(T) = R3, dim[Rng(T)] = dim[R3] = 3, and Theorem 5.3.8 is satisfied since dim[Ker(T)] + dim[Rng(T)] = 0 + 3 = dim[R3].

4. Ker(T) = {x ∈ R3 : T(x) = 0} = {x ∈ R3 : Ax = 0}. The augmented matrix of the system Ax = 0 is [1 −2 1 0; 2 −3 −1 0; 5 −8 −1 0], with reduced row-echelon form [1 0 −5 0; 0 1 −3 0; 0 0 0 0]. Thus

Ker(T) = {x ∈ R3 : x = t(5, 3, 1), t ∈ R}.

Geometrically, this describes the line in R3 through the origin, spanned by (5, 3, 1), so dim[Ker(T)] = 1.

For the given transformation, Rng(T) = colspace(A). From the preceding reduced row-echelon form of A, we see that a basis for colspace(A) is given by the first two column vectors of A. Consequently,

Rng(T) = {y ∈ R3 : y = r(1, 2, 5) + s(−2, −3, −8), r, s ∈ R}.

Geometrically, this is a plane through the origin in R3, so dim[Rng(T)] = 2, and Theorem 5.3.8 is satisfied since dim[Ker(T)] + dim[Rng(T)] = 1 + 2 = 3 = dim[R3].
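The kernel vector and the Rank-Nullity count from Problem 4 can be verified numerically; a sketch (NumPy assumed, data transcribed from the solution):

```python
import numpy as np

A = np.array([[1.0, -2.0, 1.0],
              [2.0, -3.0, -1.0],
              [5.0, -8.0, -1.0]])
v = np.array([5.0, 3.0, 1.0])            # claimed spanning vector of Ker(T)
in_kernel = np.allclose(A @ v, 0.0)
rank = np.linalg.matrix_rank(A)          # dim[Rng(T)]
nullity = A.shape[1] - rank              # dim[Ker(T)] by Rank-Nullity
```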

5. Ker(T) = {x ∈ R3 : T(x) = 0} = {x ∈ R3 : Ax = 0}. The augmented matrix of the system Ax = 0 is [1 −1 2 0; −3 3 −6 0], with reduced row-echelon form [1 −1 2 0; 0 0 0 0]. Thus

Ker(T) = {x ∈ R3 : x = r(1, 1, 0) + s(−2, 0, 1), r, s ∈ R}.


Geometrically, this describes the plane through the origin in R3 spanned by the linearly independent set {(1, 1, 0), (−2, 0, 1)}, so dim[Ker(T)] = 2.

For the given transformation, Rng(T) = colspace(A). From the preceding reduced row-echelon form of A, we see that a basis for colspace(A) is given by the first column vector of A. Consequently,

Rng(T) = {y ∈ R2 : y = t(1, −3), t ∈ R}.

Geometrically, this is the line through the origin in R2 spanned by (1, −3), so dim[Rng(T)] = 1, and Theorem 5.3.8 is satisfied since dim[Ker(T)] + dim[Rng(T)] = 2 + 1 = 3 = dim[R3].

6. Ker(T) = {x ∈ R3 : T(x) = 0} = {x ∈ R3 : Ax = 0}. The augmented matrix of the system Ax = 0 is [1 3 2 0; 2 6 5 0], with reduced row-echelon form [1 3 0 0; 0 0 1 0]. Thus

Ker(T) = {x ∈ R3 : x = r(−3, 1, 0), r ∈ R}.

Geometrically, this describes the line through the origin in R3 spanned by (−3, 1, 0), so dim[Ker(T)] = 1.

For the given transformation, Rng(T) = colspace(A). From the preceding reduced row-echelon form of A, we see that a basis for colspace(A) is given by the first and third column vectors of A. Consequently, Rng(T) = span{(1, 2), (2, 5)} = R2, so that dim[Rng(T)] = 2. Geometrically, Rng(T) is all of R2, and Theorem 5.3.8 is satisfied since dim[Ker(T)] + dim[Rng(T)] = 1 + 2 = 3 = dim[R3].

7. The matrix of T in Problem 24 of Section 5.1 is
A =
[ −5/3 −2/3 ]
[  1/3  1/3 ]
[  5/3 −1/3 ]
[  −1    1  ].
Thus,

Ker(T) = nullspace(A) = {0}

and

Rng(T) = colspace(A) = span{(−5/3, 1/3, 5/3, −1), (−2/3, 1/3, −1/3, 1)}.

8. The matrix of T in Problem 25 of Section 5.1 is A = [3 2 −6 3; −2 3 −1 2]. Thus,

Ker(T) = nullspace(A) = span{(16/13, 15/13, 1, 0), (−5/13, −12/13, 0, 1)}

and

Rng(T) = colspace(A) = R2.
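The two nullspace basis vectors quoted above can be verified directly; a sketch (NumPy assumed):

```python
import numpy as np

A = np.array([[3.0, 2.0, -6.0, 3.0],
              [-2.0, 3.0, -1.0, 2.0]])
basis = np.array([[16/13, 15/13, 1.0, 0.0],
                  [-5/13, -12/13, 0.0, 1.0]])
in_nullspace = np.allclose(A @ basis.T, 0.0)   # both vectors satisfy Ax = 0
onto = np.linalg.matrix_rank(A) == 2           # colspace(A) = R^2
```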

9. The matrix of T in Problem 26 of Section 5.1 is A = [−4 3 0; −5 2 −3; 15 −7 6]. Thus,

Ker(T) = nullspace(A) = {0}

and

Rng(T) = colspace(A) = R3.


10. The matrix of T in Problem 27 of Section 5.1 is
A =
[   0   −2/3   1/3 ]
[  3/2   −1     1  ]
[   0    2/5  −2/5 ]
[ −1/4  17/15  8/15 ].
Thus,

Ker(T) = nullspace(A) = {0}

and

Rng(T) = colspace(A) = span{(0, 3/2, 0, −1/4), (−2/3, −1, 2/5, 17/15), (1/3, 1, −2/5, 8/15)}.

11. (a) Ker(T) = {v ∈ R3 : 〈u, v〉 = 0}. For v to be in the kernel of T, u and v must be orthogonal. Since u is any fixed vector in R3, v must lie in the plane orthogonal to u. Hence dim[Ker(T)] = 2.

(b) Rng(T ) = {y ∈ R : y = 〈u,v〉, v ∈ R3}, and dim[Rng(T )] = 1.

12. (a) Ker(S) = {A ∈ Mn(R) : A − A^T = 0} = {A ∈ Mn(R) : A = A^T}. Hence, any matrix in Ker(S) is symmetric by definition.

(b) Since any matrix in Ker(S) is symmetric, {A1, A2, A3} is a spanning set for the set of all symmetric matrices in M2(R), where

A1 = [1 0; 0 0],  A2 = [0 1; 1 0],  A3 = [0 0; 0 1].

Thus, dim[Ker(S)] = 3.

13. Ker(T) = {A ∈ Mn(R) : AB − BA = 0} = {A ∈ Mn(R) : AB = BA}. This is the set of matrices that commute with B.

14. (a) Ker(T) = {p ∈ P2 : T(p) = 0} = {ax^2 + bx + c ∈ P2 : ax^2 + (a + 2b + c)x + (3a − 2b − c) = 0 for all x}. Thus, for p(x) = ax^2 + bx + c to be in Ker(T), a, b, and c must satisfy the system:

a = 0,  a + 2b + c = 0,  3a − 2b − c = 0.

Solving this system, we obtain a = 0 and c = −2b. Consequently, all polynomials of the form 0x^2 + bx + (−2b) are in Ker(T), so

Ker(T) = {b(x − 2) : b ∈ R}.

Since Ker(T) is spanned by the nonzero vector x − 2, it follows that dim[Ker(T)] = 1.

(b) In this case,

Rng(T) = {T(ax^2 + bx + c) : a, b, c ∈ R}
= {ax^2 + (a + 2b + c)x + (3a − 2b − c) : a, b, c ∈ R}
= {a(x^2 + x + 3) + b(2x − 2) + c(x − 1) : a, b, c ∈ R}
= {a(x^2 + x + 3) + (2b + c)(x − 1) : a, b, c ∈ R}
= span{x^2 + x + 3, x − 1}.


Since the vectors in this spanning set are linearly independent on any interval, it follows that the spanningset is a basis for Rng(T ). Hence, dim[Rng(T )] = 2.

15. Ker(T) = {p ∈ P2 : T(p) = 0} = {ax^2 + bx + c ∈ P2 : (a + b) + (b − c)x = 0 for all x}. Thus a, b, and c must satisfy a + b = 0 and b − c = 0, so a = −b and b = c. Letting c = r ∈ R, we have ax^2 + bx + c = r(−x^2 + x + 1). Thus, Ker(T) = {r(−x^2 + x + 1) : r ∈ R} and dim[Ker(T)] = 1.

Rng(T) = {T(ax^2 + bx + c) : a, b, c ∈ R} = {(a + b) + (b − c)x : a, b, c ∈ R} = {c1 + c2x : c1, c2 ∈ R}. Consequently, a basis for Rng(T) is {1, x}, so that Rng(T) = P1, and dim[Rng(T)] = 2.

16. Ker(T) = {p ∈ P1 : T(p) = 0} = {ax + b ∈ P1 : (b − a) + (2b − 3a)x + bx^2 = 0 for all x}. Thus a and b must satisfy b − a = 0, 2b − 3a = 0, and b = 0, so a = b = 0. Thus, Ker(T) = {0} and dim[Ker(T)] = 0.

Rng(T) = {T(ax + b) : a, b ∈ R} = {(b − a) + (2b − 3a)x + bx^2 : a, b ∈ R} = {−a(1 + 3x) + b(1 + 2x + x^2) : a, b ∈ R} = span{1 + 3x, 1 + 2x + x^2}.

Since the vectors in this spanning set are linearly independent on any interval, the spanning set is a basis for Rng(T), and dim[Rng(T)] = 2.

17. T(v) = 0 ⇐⇒ T(av1 + bv2 + cv3) = 0
⇐⇒ aT(v1) + bT(v2) + cT(v3) = 0
⇐⇒ a(2w1 + w2) + b(w1 − w2) + c(w1 + 2w2) = 0
⇐⇒ (2a + b + c)w1 + (a − b + 2c)w2 = 0
⇐⇒ 2a + b + c = 0 and a − b + 2c = 0.

Reducing the augmented matrix of the system yields

[2 1 1 0; 1 −1 2 0] ∼ [1 −1 2 0; 2 1 1 0] ∼ [1 0 1 0; 0 1 −1 0].

Setting c = r gives b = r, a = −r. Thus, Ker(T) = {v ∈ V : v = r(−v1 + v2 + v3), r ∈ R} and dim[Ker(T)] = 1.

Rng(T) = {T(v) : v ∈ V} = {(2a + b + c)w1 + (a − b + 2c)w2 : a, b, c ∈ R} = span{w1, w2} = W. Consequently, dim[Rng(T)] = 2.

18. (a) If w ∈ Rng(T), then T(v) = w for some v ∈ V, and since {v1, v2, . . . , vn} is a basis for V, there exist c1, c2, . . . , cn ∈ R for which v = c1v1 + c2v2 + · · · + cnvn. Accordingly, w = T(v) = c1T(v1) + c2T(v2) + · · · + cnT(vn). Thus,

Rng(T) = span{T(v1), T(v2), . . . , T(vn)}.

We must show that {T(v1), T(v2), . . . , T(vn)} is also linearly independent. Suppose that

b1T(v1) + b2T(v2) + · · · + bnT(vn) = 0.

Then T(b1v1 + b2v2 + · · · + bnvn) = 0, so since Ker(T) = {0}, b1v1 + b2v2 + · · · + bnvn = 0. Since {v1, v2, . . . , vn} is a linearly independent set, the preceding equation implies that b1 = b2 = · · · = bn = 0.


Consequently, {T(v1), T(v2), . . . , T(vn)} is a linearly independent set in W. Since we have already shown that it is a spanning set for Rng(T), {T(v1), T(v2), . . . , T(vn)} is a basis for Rng(T).

(b) As an example, let T : R3 → R2 be defined by T((a, b, c)) = (a, b) for all (a, b, c) in R3. Then for the basis {e1, e2, e3} of R3, we have {T(e1), T(e2), T(e3)} = {(1, 0), (0, 1), (0, 0)}, which is clearly not a basis for R2.

Solutions to Section 5.4

True-False Review:

1. FALSE. Many one-to-one linear transformations T : P3 → M32 can be constructed. One possible example would be to define

T(a0 + a1x + a2x^2 + a3x^3) = [a0 a1; a2 a3; 0 0].

It is easy to check that with T so defined, T is a one-to-one linear transformation.

2. TRUE. We can define an isomorphism T : V → M32 via

T([a b c; 0 d e; 0 0 f]) = [a b; c d; e f].

With T so defined, it is easy to check that T is an isomorphism.

3. TRUE. Both Ker(T1) and Ker(T2T1) are subspaces of V1, and since if T1(v1) = 0, then (T2T1)v1 = T2(T1(v1)) = T2(0) = 0, we see that every vector in Ker(T1) belongs to Ker(T2T1). Therefore, Ker(T1) is a subspace of Ker(T2T1).

4. TRUE. Observe that T is not one-to-one since Ker(T) ≠ {0} (because Ker(T) is 1-dimensional). Moreover, since M22 is 4-dimensional, then by the Rank-Nullity Theorem, Rng(T) is 3-dimensional. Since P2 is 3-dimensional, we conclude that Rng(T) = P2; that is, T is onto.

5. TRUE. Since M2(R) is 4-dimensional, Rng(T ) can be at most 4-dimensional. However, P4 is 5-dimensional. Therefore, any such linear transformation T cannot be onto.

6. TRUE. If we assume that (T2T1)v = (T2T1)w, then T2(T1(v)) = T2(T1(w)). Since T2 is one-to-one,then T1(v) = T1(w). Next, since T1 is one-to-one, we conclude that v = w. Therefore, T2T1 is one-to-one.

7. FALSE. This linear transformation is not one-to-one. The reason is essentially that the derivative of any constant is zero. Therefore, Ker(T) consists of all constant functions, and so Ker(T) ≠ {0}.

8. TRUE. Since M23 is 6-dimensional and Rng(T) is only 4-dimensional, T is not onto. Moreover, since P3 is 4-dimensional, the Rank-Nullity Theorem implies that Ker(T) is 0-dimensional. Therefore, Ker(T) = {0}, and this means that T is one-to-one.

9. TRUE. Recall that dim[Rn] = n and dim[Rm] = m. In order for such an isomorphism to exist, Rn and Rm must have the same dimension; that is, m = n.

10. FALSE. For example, the vector space of all polynomials with real coefficients is an infinite-dimensionalreal vector space, and since Rn is finite-dimensional for all positive integers n, this statement is false.

11. FALSE. In order for this to be true, it would also have to be assumed that T1 is onto. For example, suppose V1 = V2 = V3 = R2. If we define T2(x, y) = (x, y) for all (x, y) in R2, then T2 is onto. However,


if we define T1(x, y) = (0, 0) for all (x, y) in R2, then (T2T1)(x, y) = T2(T1(x, y)) = T2(0, 0) = (0, 0) for all (x, y) in R2. Therefore T2T1 is not onto, since Rng(T2T1) = {(0, 0)}, even though T2 itself is onto.

12. TRUE. This is a direct application of the Rank-Nullity Theorem. Since T is assumed to be onto,Rng(T ) = R3, which is 3-dimensional. Therefore the dimension of Ker(T ) is 8− 3 = 5.

Problems:

1. T1T2(x) = T1(T2(x)) = T1(Bx) = (AB)x, so T1T2 = AB. Similarly, T2T1 = BA.

AB = [−1 2; 3 1][1 5; −2 0] = [−5 −5; 1 15].  BA = [1 5; −2 0][−1 2; 3 1] = [14 7; 2 −4].

T1T2(x) = (AB)x = [−5 −5; 1 15](x1, x2)^T = (−5(x1 + x2), x1 + 15x2) and

T2T1(x) = (BA)x = [14 7; 2 −4](x1, x2)^T = (7(2x1 + x2), 2(x1 − 2x2)).

Clearly, T1T2 ≠ T2T1.

2. T2(T1(x)) = B(A(x)) = [−1 1][1 −1; 3 2](x1, x2)^T = [−1 1](x1 − x2, 3x1 + 2x2)^T = [2x1 + 3x2] = [2 3]x.

T1T2 does not exist because T1 must have a domain of R2, yet the range of T2, which must lie in the domain of T1, is R.

3. Ker(T1) = {x ∈ R2 : Ax = 0}. Ax = 0 =⇒ [1 −1; 2 −2](x1, x2)^T = (0, 0)^T. This matrix equation results in the system x1 − x2 = 0 and 2x1 − 2x2 = 0, or equivalently, x1 = x2. Thus,

Ker(T1) = {x ∈ R2 : x = r(1, 1), r ∈ R}.

Geometrically, this is the line through the origin in R2 spanned by (1, 1).

Ker(T2) = {x ∈ R2 : Bx = 0}. Bx = 0 =⇒ [2 1; 3 −1](x1, x2)^T = (0, 0)^T. This matrix equation results in the system 2x1 + x2 = 0 and 3x1 − x2 = 0, or equivalently, x1 = x2 = 0. Thus, Ker(T2) = {0}. Geometrically, this is a point (the origin).

Ker(T1T2) = {x ∈ R2 : (AB)x = 0}. (AB)x = 0 =⇒ [1 −1; 2 −2][2 1; 3 −1](x1, x2)^T = (0, 0)^T =⇒ [−1 2; −2 4](x1, x2)^T = (0, 0)^T. This matrix equation results in the system

−x1 + 2x2 = 0 and −2x1 + 4x2 = 0,

or equivalently, x1 = 2x2. Thus, Ker(T1T2) = {x ∈ R2 : x = s(2, 1), s ∈ R}. Geometrically, this is the line through the origin in R2 spanned by the vector (2, 1).

Ker(T2T1) = {x ∈ R2 : (BA)x = 0}. (BA)x = 0 =⇒ [2 1; 3 −1][1 −1; 2 −2](x1, x2)^T = (0, 0)^T =⇒ [4 −4; 1 −1](x1, x2)^T = (0, 0)^T. This matrix equation results in the system

4x1 − 4x2 = 0 and x1 − x2 = 0,

or equivalently, x1 = x2. Thus, Ker(T2T1) = {x ∈ R2 : x = t(1, 1), t ∈ R}. Geometrically, this is the line through the origin in R2 spanned by the vector (1, 1).
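The two kernels found in Problem 3 can be double-checked by forming the products AB and BA explicitly; a sketch (NumPy assumed):

```python
import numpy as np

A = np.array([[1.0, -1.0], [2.0, -2.0]])
B = np.array([[2.0, 1.0], [3.0, -1.0]])

v_ab = np.array([2.0, 1.0])   # claimed to span Ker(T1T2) = null(AB)
v_ba = np.array([1.0, 1.0])   # claimed to span Ker(T2T1) = null(BA)

ab_ok = np.allclose(A @ B @ v_ab, 0.0)
ba_ok = np.allclose(B @ A @ v_ba, 0.0)
```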


4.

(T2T1)(A) = T2(T1(A)) = T2(A − A^T) = (A − A^T) + (A − A^T)^T = (A − A^T) + (A^T − A) = 0n.

5. (a) (T1(f))(x) = d/dx[sin(x − a)] = cos(x − a).

(T2(f))(x) = ∫_a^x sin(t − a) dt = [−cos(t − a)]_a^x = 1 − cos(x − a).

(T1T2)(f)(x) = d/dx[1 − cos(x − a)] = sin(x − a) = f(x).

(T2T1)(f)(x) = ∫_a^x cos(t − a) dt = [sin(t − a)]_a^x = sin(x − a) = f(x).

Consequently, (T1T2)(f) = (T2T1)(f) = f.

(b) (T1T2)(f) = T1(T2(f)) = T1(∫_a^x f(t) dt) = d/dx(∫_a^x f(t) dt) = f(x).

(T2T1)(g) = T2(T1(g)) = T2(g′) = ∫_a^x g′(t) dt = g(x) − g(a).

6. Let v ∈ V, so there exist a, b ∈ R such that v = av1 + bv2. Then

T2T1(v) = T2[aT1(v1) + bT1(v2)] = T2[a(v1 − v2) + b(2v1 + v2)] = T2[(a + 2b)v1 + (b − a)v2] = (a + 2b)T2(v1) + (b − a)T2(v2) = (a + 2b)(v1 + 2v2) + (b − a)(3v1 − v2) = (5b − 2a)v1 + 3(a + b)v2.

7. Let v ∈ V, so there exist a, b ∈ R such that v = av1 + bv2. Then

T2T1(v) = T2[aT1(v1) + bT1(v2)] = T2[a(3v1 + v2)] = 3aT2(v1) + aT2(v2) = 3a(−5v2) + a(−v1 + 6v2) = −av1 − 9av2.

8. Ker(T) = {x ∈ R2 : T(x) = 0} = {x ∈ R2 : Ax = 0}. The augmented matrix of the system Ax = 0 is [4 2 0; 1 3 0], with reduced row-echelon form [1 0 0; 0 1 0]. It follows that Ker(T) = {0}. Hence T is one-to-one by Theorem 5.4.7.

For the given transformation, Rng(T) = {y ∈ R2 : Ax = y is consistent}. The augmented matrix of the system Ax = y is [4 2 y1; 1 3 y2], with reduced row-echelon form [1 0 (3y1 − 2y2)/10; 0 1 (4y2 − y1)/10]. The system is therefore consistent for all (y1, y2), so that Rng(T) = R2. Consequently, T is onto. T^−1 exists since T has been shown to be both one-to-one and onto. Using the Gauss-Jordan method for computing A^−1, we have

[4 2 1 0; 1 3 0 1] ∼ [1 1/2 1/4 0; 0 5/2 −1/4 1] ∼ [1 0 3/10 −1/5; 0 1 −1/10 2/5].

Thus, A^−1 = [3/10 −1/5; −1/10 2/5], so T^−1(y) = A^−1y.

9. Ker(T) = {x ∈ R2 : T(x) = 0} = {x ∈ R2 : Ax = 0}. The augmented matrix of the system Ax = 0 is [1 2 0; −2 −4 0], with reduced row-echelon form [1 2 0; 0 0 0]. It follows that

Ker(T) = {x ∈ R2 : x = t(−2, 1), t ∈ R}.

By Theorem 5.4.7, since Ker(T) ≠ {0}, T is not one-to-one. This also implies that T^−1 does not exist. For the given transformation, Rng(T) = {y ∈ R2 : Ax = y is consistent}. The augmented matrix of the

Page 346: Differential Equations and Linear Algebra - 3e - Goode -Solutions


system Ax = y is

[  1   2 | y1 ]
[ −2  −4 | y2 ],

with reduced row-echelon form

[ 1  2 | y1        ]
[ 0  0 | 2y1 + y2  ].

The last row of this matrix implies that 2y1 + y2 = 0 is required for consistency. Therefore, it follows that

Rng(T) = {(y1, y2) ∈ R² : 2y1 + y2 = 0} = {y ∈ R² : y = s(1, −2), s ∈ R}.

T is not onto because dim[Rng(T)] = 1 ≠ 2 = dim[R²].

10. Ker(T) = {x ∈ R³ : T(x) = 0} = {x ∈ R³ : Ax = 0}. The augmented matrix of the system Ax = 0 is

[ 1  2  −1 | 0 ]
[ 2  5   1 | 0 ],

with reduced row-echelon form

[ 1  0  −7 | 0 ]
[ 0  1   3 | 0 ].

It follows that

Ker(T) = {(x1, x2, x3) ∈ R³ : x1 = 7t, x2 = −3t, x3 = t, t ∈ R} = {x ∈ R³ : x = t(7, −3, 1), t ∈ R}.

By Theorem 5.4.7, since Ker(T) ≠ {0}, T is not one-to-one. This also implies that T⁻¹ does not exist.
For the given transformation, Rng(T) = {y ∈ R² : Ax = y is consistent}. The augmented matrix of the system Ax = y is

[ 1  2  −1 | y1 ]   [ 1  0  −7 | 5y1 − 2y2 ]
[ 2  5   1 | y2 ] ~ [ 0  1   3 | y2 − 2y1  ].

We clearly have a consistent system for all y = (y1, y2) ∈ R², thus Rng(T) = R². Therefore, T is onto by Definition 5.3.3.
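The kernel computation in Problem 10 can be double-checked with SymPy (an optional aside; SymPy is my addition, not part of the text):

```python
import sympy as sp

# Problem 10: A = [[1, 2, -1], [2, 5, 1]]; Ker(T) should be span{(7, -3, 1)}.
A = sp.Matrix([[1, 2, -1], [2, 5, 1]])

rref, pivots = A.rref()
assert rref == sp.Matrix([[1, 0, -7], [0, 1, 3]])

null = A.nullspace()
assert len(null) == 1
assert null[0] == sp.Matrix([7, -3, 1])

# Rank 2 means colspace(A) = R^2, so T is onto.
assert A.rank() == 2
```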

11. Reducing A to row-echelon form, we obtain

REF(A) = [ 1  3  5 ]
         [ 0  1  2 ]
         [ 0  0  0 ]
         [ 0  0  0 ].

We quickly find that

Ker(T) = nullspace(A) = span{(1, −2, 1)}.

Moreover,

Rng(T) = colspace(A) = span{(0, 3, 5, 2), (1, 4, 4, 1)}.

Based on these calculations, we see that T is neither one-to-one nor onto.
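Since A and REF(A) have the same nullspace, Ker(T) can be read off REF(A) directly; the snippet below (an optional SymPy check, not part of the text) confirms it:

```python
import sympy as sp

# Problem 11: row operations preserve the nullspace, so nullspace(REF(A)) = Ker(T).
REF = sp.Matrix([[1, 3, 5],
                 [0, 1, 2],
                 [0, 0, 0],
                 [0, 0, 0]])

null = REF.nullspace()
assert len(null) == 1
assert null[0] == sp.Matrix([1, -2, 1])

# rank 2 < 3 columns => not one-to-one; rank 2 < 4 = dim(R^4) => not onto.
assert REF.rank() == 2
```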

12. We have

(0, 0, 1) = −(1/3)(2, 1, −3) + (2/3)(1, 0, 0) + (1/3)(0, 1, 0),

and so

T(0, 0, 1) = −(1/3)T(2, 1, −3) + (2/3)T(1, 0, 0) + (1/3)T(0, 1, 0)
           = −(1/3)(7, −1) + (2/3)(4, 5) + (1/3)(−1, 1)
           = (0, 4).

(a) From the given information and the above calculation, we find that the matrix of T is

A = [ 4  −1  0 ]
    [ 5   1  4 ].


(b) Because A has more columns than rows, REF(A) must have an unpivoted column, which implies that nullspace(A) ≠ {0}. Hence, T is not one-to-one. On the other hand, colspace(A) = R² since the first two columns, for example, are linearly independent vectors in R². Thus, T is onto.

13. Show T is a linear transformation: Let x, y ∈ V and c, λ ∈ R, where λ ≠ 0. T(x + y) = λ(x + y) = λx + λy = T(x) + T(y), and T(cx) = λ(cx) = c(λx) = cT(x). Thus, T is a linear transformation.
Show T is one-to-one: x ∈ Ker(T) ⟺ T(x) = 0 ⟺ λx = 0 ⟺ x = 0, since λ ≠ 0. Thus, Ker(T) = {0}. By Theorem 5.4.7, T is one-to-one.
Show T is onto: dim[Ker(T)] + dim[Rng(T)] = dim[V] ⟹ dim[{0}] + dim[Rng(T)] = dim[V] ⟹ 0 + dim[Rng(T)] = dim[V] ⟹ dim[Rng(T)] = dim[V] ⟹ Rng(T) = V. Thus, T is onto by Definition 5.3.3.
Find T⁻¹: T⁻¹(x) = (1/λ)x, since T(T⁻¹(x)) = T((1/λ)x) = λ((1/λ)x) = x.

14. T : P1 → P1, where T(ax + b) = (2b − a)x + (b + a).
Show T is one-to-one: Ker(T) = {p ∈ P1 : T(p) = 0} = {ax + b ∈ P1 : (2b − a)x + (b + a) = 0}. Thus, a and b must satisfy the system 2b − a = 0, b + a = 0. The only solution to this system is a = b = 0. Consequently, Ker(T) = {0}, so T is one-to-one.
Show T is onto: Since Ker(T) = {0}, dim[Ker(T)] = 0. Thus, dim[Ker(T)] + dim[Rng(T)] = dim[P1] ⟹ dim[Rng(T)] = dim[P1], and since Rng(T) is a subspace of P1, it follows that Rng(T) = P1. Consequently, T is onto by Theorem 5.4.7.
Determine T⁻¹: Since T is one-to-one and onto, T⁻¹ exists. T(ax + b) = (2b − a)x + (b + a) ⟹ T⁻¹[(2b − a)x + (b + a)] = ax + b. If we let A = 2b − a and B = a + b, then b = (A + B)/3 and a = (2B − A)/3, so that

T⁻¹[Ax + B] = (1/3)(2B − A)x + (1/3)(A + B).
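In coordinates, Problem 14's transformation and its inverse become a 2 × 2 matrix and its matrix inverse. The check below is my addition (not part of the text), using the coordinate convention that ax + b corresponds to (a, b):

```python
import numpy as np

# T(ax + b) = (2b - a)x + (b + a) sends (a, b) to (-a + 2b, a + b).
M = np.array([[-1.0, 2.0],
              [ 1.0, 1.0]])

M_inv = np.linalg.inv(M)
# T^{-1}(Ax + B) = ((2B - A)/3)x + (A + B)/3, i.e. (A, B) -> ((2B - A)/3, (A + B)/3).
expected = np.array([[-1/3, 2/3],
                     [ 1/3, 1/3]])
assert np.allclose(M_inv, expected)
```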

15. T is not one-to-one: Ker(T) = {p ∈ P2 : T(p) = 0} = {ax² + bx + c : c + (a − b)x = 0}. Thus, a, b, and c must satisfy the system c = 0 and a − b = 0. Consequently, Ker(T) = {r(x² + x) : r ∈ R} ≠ {0}. Thus, by Theorem 5.4.7, T is not one-to-one. T⁻¹ does not exist because T is not one-to-one.
T is onto: Since Ker(T) = {r(x² + x) : r ∈ R}, we see that dim[Ker(T)] = 1. Thus, dim[Ker(T)] + dim[Rng(T)] = dim[P2] ⟹ 1 + dim[Rng(T)] = 3 ⟹ dim[Rng(T)] = 2. Since Rng(T) is a subspace of P1,

2 = dim[Rng(T)] ≤ dim[P1] = 2,

and so equality holds: Rng(T) = P1. Thus, T is onto by Theorem 5.4.7.

16. {v1, v2} is a basis for V and T : V → V is a linear transformation.
Show T is one-to-one: Any v ∈ V can be expressed as v = av1 + bv2, where a, b ∈ R.
T(v) = 0 ⟺ T(av1 + bv2) = 0 ⟺ aT(v1) + bT(v2) = 0 ⟺ a(v1 + 2v2) + b(2v1 − 3v2) = 0 ⟺ (a + 2b)v1 + (2a − 3b)v2 = 0 ⟺ a + 2b = 0 and 2a − 3b = 0 ⟺ a = b = 0 ⟺ v = 0. Hence


Ker(T) = {0}. Therefore T is one-to-one.
Show T is onto: dim[Ker(T)] = dim[{0}] = 0 and dim[Ker(T)] + dim[Rng(T)] = dim[V] implies that 0 + dim[Rng(T)] = 2, or dim[Rng(T)] = 2. Since Rng(T) is a subspace of V, it follows that Rng(T) = V. Thus, T is onto by Theorem 5.4.7.
Determine T⁻¹: Since T is one-to-one and onto, T⁻¹ exists. T(av1 + bv2) = (a + 2b)v1 + (2a − 3b)v2 ⟹ T⁻¹[(a + 2b)v1 + (2a − 3b)v2] = av1 + bv2. If we let A = a + 2b and B = 2a − 3b, then a = (1/7)(3A + 2B) and b = (1/7)(2A − B). Hence,

T⁻¹(Av1 + Bv2) = (1/7)(3A + 2B)v1 + (1/7)(2A − B)v2.

17. Let v ∈ V. Then there exist a, b ∈ R such that v = av1 + bv2.

(T1T2)v = T1[aT2(v1) + bT2(v2)] = T1[(a/2)(v1 + v2) + (b/2)(v1 − v2)]
        = ((a + b)/2)T1(v1) + ((a − b)/2)T1(v2)
        = ((a + b)/2)(v1 + v2) + ((a − b)/2)(v1 − v2)
        = [(a + b)/2 + (a − b)/2]v1 + [(a + b)/2 − (a − b)/2]v2
        = av1 + bv2 = v.

(T2T1)v = T2[aT1(v1) + bT1(v2)] = T2[a(v1 + v2) + b(v1 − v2)]
        = T2[(a + b)v1 + (a − b)v2] = (a + b)T2(v1) + (a − b)T2(v2)
        = ((a + b)/2)(v1 + v2) + ((a − b)/2)(v1 − v2)
        = av1 + bv2 = v.

Since (T1T2)v = v and (T2T1)v = v for all v ∈ V, it follows that T2 is the inverse of T1; thus T2 = T1⁻¹.

18. An arbitrary vector in P1 can be written as

p(x) = a p0(x) + b p1(x),

where p0(x) = 1 and p1(x) = x denote the standard basis vectors in P1. Hence, we can define an isomorphism T : R² → P1 by

T(a, b) = a + bx.

19. Let S denote the subspace of M2(R) consisting of all upper triangular matrices. An arbitrary vector in S can be written as

[ a  b ]     [ 1  0 ]     [ 0  1 ]     [ 0  0 ]
[ 0  c ] = a [ 0  0 ] + b [ 0  0 ] + c [ 0  1 ].

Therefore, we define an isomorphism T : R³ → S by

T(a, b, c) = [ a  b ]
             [ 0  c ].

20. Let S denote the subspace of M2(R) consisting of all skew-symmetric matrices. An arbitrary vector in S can be written as

[  0  a ]     [  0  1 ]
[ −a  0 ] = a [ −1  0 ].

Therefore, we can define an isomorphism T : R → S by

T(a) = [  0  a ]
       [ −a  0 ].


21. Let S denote the subspace of M2(R) consisting of all symmetric matrices. An arbitrary vector in S can be written as

[ a  b ]     [ 1  0 ]     [ 0  1 ]     [ 0  0 ]
[ b  c ] = a [ 0  0 ] + b [ 1  0 ] + c [ 0  1 ].

Therefore, we can define an isomorphism T : R³ → S by

T(a, b, c) = [ a  b ]
             [ b  c ].

22. A typical vector in V takes the form

A = [ a  b  c  d ]
    [ 0  e  f  g ]
    [ 0  0  h  i ]
    [ 0  0  0  j ].

Therefore, we can define T : V → R¹⁰ via

T(A) = (a, b, c, d, e, f, g, h, i, j).

It is routine to verify that T is an invertible linear transformation. Therefore, we have n = 10.

23. A typical vector in V takes the form p = a0 + a2x² + a4x⁴ + a6x⁶ + a8x⁸. Therefore, we can define T : V → R⁵ via

T(p) = (a0, a2, a4, a6, a8).

It is routine to verify that T is an invertible linear transformation. Therefore, we have n = 5.

24. We have

T1⁻¹(x) = A⁻¹x = [  3  −1 ]
                 [ −2   1 ] x.

25. We have

T2⁻¹(x) = A⁻¹x = [ −1/3  −1/6 ]
                 [  1/3   2/3 ] x.

26. We have

T3⁻¹(x) = A⁻¹x = [ 11/14  −8/7   1/14 ]
                 [ −3/7    5/7   1/7  ]
                 [  3/14   1/7  −1/14 ] x.

27. We have

T4⁻¹(x) = A⁻¹x = [  8  −29   3 ]
                 [ −5   19  −2 ]
                 [  2   −8   1 ] x.
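The four inverses in Problems 24–27 can be verified against the matrices of T1–T4 (which also appear in the compositions of Problems 28–29). This NumPy check is my addition, not part of the text:

```python
import numpy as np

# (matrix of Ti, claimed inverse) pairs for Problems 24-27.
pairs = [
    (np.array([[1., 1.], [2., 3.]]),                            # T1
     np.array([[3., -1.], [-2., 1.]])),
    (np.array([[-4., -1.], [2., 2.]]),                          # T2
     np.array([[-1/3, -1/6], [1/3, 2/3]])),
    (np.array([[1., 1., 3.], [0., 1., 2.], [3., 5., -1.]]),     # T3
     np.array([[11/14, -8/7, 1/14], [-3/7, 5/7, 1/7], [3/14, 1/7, -1/14]])),
    (np.array([[3., 5., 1.], [1., 2., 1.], [2., 6., 7.]]),      # T4
     np.array([[8., -29., 3.], [-5., 19., -2.], [2., -8., 1.]])),
]
for A, A_inv in pairs:
    assert np.allclose(np.linalg.inv(A), A_inv)
```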

28. The matrix of T2T1 is

[ −4  −1 ] [ 1  1 ]   [ −6  −7 ]
[  2   2 ] [ 2  3 ] = [  6   8 ].

The matrix of T1T2 is

[ 1  1 ] [ −4  −1 ]   [ −2  1 ]
[ 2  3 ] [  2   2 ] = [ −2  4 ].

29. The matrix of T4T3 is

[ 3  5  1 ] [ 1  1   3 ]   [  6  13  18 ]
[ 1  2  1 ] [ 0  1   2 ] = [  4   8   6 ]
[ 2  6  7 ] [ 3  5  −1 ]   [ 23  43  11 ].

The matrix of T3T4 is

[ 1  1   3 ] [ 3  5  1 ]   [ 10  25  23 ]
[ 0  1   2 ] [ 1  2  1 ] = [  5  14  15 ]
[ 3  5  −1 ] [ 2  6  7 ]   [ 12  19   1 ].
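The products above also illustrate that composition order matters. This NumPy check is my addition, not part of the text:

```python
import numpy as np

# Problems 28-29: the matrix of T2T1 is A2 @ A1 (apply T1 first), and A2 @ A1 != A1 @ A2.
A1 = np.array([[1, 1], [2, 3]])
A2 = np.array([[-4, -1], [2, 2]])
assert np.array_equal(A2 @ A1, np.array([[-6, -7], [6, 8]]))
assert np.array_equal(A1 @ A2, np.array([[-2, 1], [-2, 4]]))
assert not np.array_equal(A2 @ A1, A1 @ A2)

A3 = np.array([[1, 1, 3], [0, 1, 2], [3, 5, -1]])
A4 = np.array([[3, 5, 1], [1, 2, 1], [2, 6, 7]])
assert np.array_equal(A4 @ A3, np.array([[6, 13, 18], [4, 8, 6], [23, 43, 11]]))
assert np.array_equal(A3 @ A4, np.array([[10, 25, 23], [5, 14, 15], [12, 19, 1]]))
```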


30. We have

(T2T1)(cu1) = T2(T1(cu1))
            = T2(cT1(u1))
            = c(T2(T1(u1)))
            = c(T2T1)(u1),

as needed.

31. We first prove that if T is one-to-one, then T is onto. The assumption that T is one-to-one implies that dim[Ker(T)] = 0. Hence, by Theorem 5.3.8,

dim[W ] = dim[V ] = dim[Rng(T )],

which implies that Rng(T ) = W . That is, T is onto.

Next we show that if T is onto, then T is one-to-one. We have Rng(T) = W, so that dim[Rng(T)] = dim[W] = dim[V]. Hence, by Theorem 5.3.8, we have dim[Ker(T)] = 0. Hence, Ker(T) = {0}, which implies that T is one-to-one.

32. If T : V → W is linear, we show that T⁻¹ : W → V is also linear. Let y, z ∈ W, c ∈ R, and T(u) = y and T(v) = z, where u, v ∈ V. Then T⁻¹(y) = u and T⁻¹(z) = v. Thus,

T−1(y + z) = T−1(T (u) + T (v)) = T−1(T (u + v)) = u + v = T−1(y) + T−1(z),

andT−1(cy) = T−1(cT (u)) = T−1(T (cu)) = cu = cT−1(y).

Hence, T−1 is a linear transformation.

33. Let T : V → V be a one-to-one linear transformation. Since T is one-to-one, it follows from Theorem 5.4.7 that Ker(T) = {0}, so dim[Ker(T)] = 0. By Theorem 5.3.8 and substitution, dim[Ker(T)] + dim[Rng(T)] = dim[V] ⟹ 0 + dim[Rng(T)] = dim[V] ⟹ dim[Rng(T)] = dim[V], and since Rng(T) is a subspace of V, it follows that Rng(T) = V; thus T is onto. T⁻¹ exists because T is both one-to-one and onto.

34. To show that {T (v1), T (v2), . . . , T (vk)} is linearly independent, assume that

c1T (v1) + c2T (v2) + · · ·+ ckT (vk) = 0.

We must show that c1 = c2 = · · · = ck = 0. Using the linearity properties of T , we can write

T (c1v1 + c2v2 + · · ·+ ckvk) = 0.

Now, since T is one-to-one, we can conclude that c1v1 + c2v2 + · · · + ckvk = 0, and since {v1, v2, . . . , vk} is linearly independent, we conclude that c1 = c2 = · · · = ck = 0, as desired.

35. To prove that T is onto, let w be an arbitrary vector in W. We must find a vector v in V such that T(v) = w. Since {w1, w2, . . . , wm} spans W, we can write

w = c1w1 + c2w2 + · · ·+ cmwm

for some scalars c1, c2, . . . , cm. Therefore

T (c1v1 + c2v2 + · · ·+ cmvm) = c1T (v1) + c2T (v2) + · · ·+ cmT (vm) = c1w1 + c2w2 + · · ·+ cmwm = w,


which shows that v = c1v1 + c2v2 + · · ·+ cmvm maps under T to w, as desired.

36. Since T is a linear transformation, Rng(T) is a subspace of W, but dim[W] = dim[Rng(T)] = n, so W = Rng(T), which means, by definition, that T is onto.

37. This problem is not correct.

38. Suppose that x belongs to Rng(T). This means that there exists a vector v in V such that T(v) = x. Applying T to both sides of this equation, we have T(T(v)) = T(x), or T²(v) = T(x). However, since T² = 0, we conclude that T²(v) = 0, and hence, T(x) = 0. By definition, this means that x belongs to Ker(T). Thus, since every vector in Rng(T) belongs to Ker(T), Rng(T) is a subset of Ker(T). In addition, Rng(T) is closed under addition and scalar multiplication, by Theorem 5.3.5, and therefore, Rng(T) forms a subspace of Ker(T).

39.

(a) To show that T2T1 : V1 → V3 is one-to-one, we show that Ker(T2T1) = {0}. Suppose that v1 ∈ Ker(T2T1). This means that (T2T1)(v1) = 0. Hence, T2(T1(v1)) = 0. However, since T2 is one-to-one, we conclude that T1(v1) = 0. Next, since T1 is one-to-one, we conclude that v1 = 0, which shows that the only vector in Ker(T2T1) is 0, as expected.

(b) To show that T2T1 : V1 → V3 is onto, we begin with an arbitrary vector v3 in V3. Since T2 : V2 → V3 is onto, there exists v2 in V2 such that T2(v2) = v3. Moreover, since T1 : V1 → V2 is onto, there exists v1 in V1 such that T1(v1) = v2. Therefore,

(T2T1)(v1) = T2(T1(v1)) = T2(v2) = v3,

and therefore, we have found a vector, namely v1, in V1 that is mapped under T2T1 to v3. Hence, T2T1 is onto.

(c) This follows immediately from parts (a) and (b).

Solutions to Section 5.5

True-False Review:

1. FALSE. The matrix representation is an m× n matrix, not an n×m matrix.

2. FALSE. The matrix [T]^B_C would only make sense if C were a basis for V and B were a basis for W, and this would only be true if V and W were the same vector space. Of course, in general V and W can be different, and so this statement does not hold.

3. FALSE. The correct equation is given in (5.5.2).

4. FALSE. The correct statement is ([T]^C_B)⁻¹ = [T⁻¹]^B_C.

5. TRUE. Many examples are possible. A fairly simple one is the following. Let T1 : R² → R² be given by T1(x, y) = (x, y), and let T2 : R² → R² be given by T2(x, y) = (y, x). Clearly, T1 and T2 are different linear transformations. Now if B1 = {(1, 0), (0, 1)} = C1, then

[T1]^{C1}_{B1} = [ 1  0 ]
                 [ 0  1 ].

If B2 = {(1, 0), (0, 1)} and C2 = {(0, 1), (1, 0)}, then

[T2]^{C2}_{B2} = [ 1  0 ]
                 [ 0  1 ].

Thus, although T1 ≠ T2, we found suitable bases B1, C1, B2, C2 such that [T1]^{C1}_{B1} = [T2]^{C2}_{B2}.

6. TRUE. This is the content of part (b) of Corollary 5.5.10.

Problems:


1.

(a): We must determine T(1), T(x), and T(x²), and find the components of the resulting vectors in R² relative to the basis C. We have

T(1) = (1, 2),   T(x) = (0, 1),   T(x²) = (−3, −2).

Therefore, relative to the standard basis C on R², we have

[T(1)]_C = (1, 2),   [T(x)]_C = (0, 1),   [T(x²)]_C = (−3, −2).

Therefore,

[T]^C_B = [ 1  0  −3 ]
          [ 2  1  −2 ].

(b): We must determine T(1), T(1 + x), and T(1 + x + x²), and find the components of the resulting vectors in R² relative to the basis C. We have

T(1) = (1, 2),   T(1 + x) = (1, 3),   T(1 + x + x²) = (−2, 1).

Setting

T(1) = (1, 2) = c1(1, −1) + c2(2, 1)

and solving, we find c1 = −1 and c2 = 1. Thus, [T(1)]_C = (−1, 1). Next, setting

T(1 + x) = (1, 3) = c1(1, −1) + c2(2, 1)

and solving, we find c1 = −5/3 and c2 = 4/3. Thus, [T(1 + x)]_C = (−5/3, 4/3). Finally, setting

T(1 + x + x²) = (−2, 1) = c1(1, −1) + c2(2, 1)

and solving, we find c1 = −4/3 and c2 = −1/3. Thus, [T(1 + x + x²)]_C = (−4/3, −1/3). Putting the results for [T(1)]_C, [T(1 + x)]_C, and [T(1 + x + x²)]_C into the columns of [T]^C_B, we obtain

[T]^C_B = [ −1  −5/3  −4/3 ]
          [  1   4/3  −1/3 ].

2.

(a): We must determine T(E11), T(E12), T(E21), and T(E22), and find the components of the resulting vectors in P3 relative to the basis C. We have

T(E11) = 1 − x³,   T(E12) = 3x²,   T(E21) = x³,   T(E22) = −1.

Therefore, relative to the standard basis C on P3, we have

[T(E11)]_C = (1, 0, 0, −1),   [T(E12)]_C = (0, 0, 3, 0),   [T(E21)]_C = (0, 0, 0, 1),   [T(E22)]_C = (−1, 0, 0, 0).


Putting these results into the columns of [T]^C_B, we obtain

[T]^C_B = [  1  0  0  −1 ]
          [  0  0  0   0 ]
          [  0  3  0   0 ]
          [ −1  0  1   0 ].

(b): The values of T(E21), T(E11), T(E22), and T(E12) were determined in part (a). We must express those results in terms of the ordered basis C given in (b). We have

[T(E21)]_C = (0, 0, 1, 0),   [T(E11)]_C = (0, 1, −1, 0),   [T(E22)]_C = (0, −1, 0, 0),   [T(E12)]_C = (0, 0, 0, 3).

Putting these results into the columns of [T]^C_B, we obtain

[T]^C_B = [ 0   0   0  0 ]
          [ 0   1  −1  0 ]
          [ 1  −1   0  0 ]
          [ 0   0   0  3 ].

3.

(a): We must determine T(1, 0, 0), T(0, 1, 0), and T(0, 0, 1), and find the components of the resulting vectors relative to the basis C. We have

T(1, 0, 0) = cos x,   T(0, 1, 0) = 3 sin x,   T(0, 0, 1) = −2 cos x + sin x.

Therefore, relative to the basis C, we have

[T(1, 0, 0)]_C = (1, 0),   [T(0, 1, 0)]_C = (0, 3),   [T(0, 0, 1)]_C = (−2, 1).

Putting these results into the columns of [T]^C_B, we obtain

[T]^C_B = [ 1  0  −2 ]
          [ 0  3   1 ].

(b): We must determine T(2, −1, −1), T(1, 3, 5), and T(0, 4, −1), and find the components of the resulting vectors relative to the basis C. We have

T(2, −1, −1) = 4 cos x − 4 sin x,   T(1, 3, 5) = −9 cos x + 14 sin x,   T(0, 4, −1) = 2 cos x + 11 sin x.

Setting

4 cos x − 4 sin x = c1(cos x − sin x) + c2(cos x + sin x)

and solving, we find c1 = 4 and c2 = 0. Therefore, [T(2, −1, −1)]_C = (4, 0). Next, setting

−9 cos x + 14 sin x = c1(cos x − sin x) + c2(cos x + sin x)


and solving, we find c1 = −23/2 and c2 = 5/2. Therefore, [T(1, 3, 5)]_C = (−23/2, 5/2). Finally, setting

2 cos x + 11 sin x = c1(cos x − sin x) + c2(cos x + sin x)

and solving, we find c1 = −9/2 and c2 = 13/2. Therefore, [T(0, 4, −1)]_C = (−9/2, 13/2). Putting these results into the columns of [T]^C_B, we obtain

[T]^C_B = [ 4  −23/2  −9/2 ]
          [ 0    5/2  13/2 ].

4.

(a): We must determine T(1), T(x), and T(x²), and find the components of the resulting vectors relative to the standard basis C on P3. We have

T(1) = 1 + x,   T(x) = x + x²,   T(x²) = x² + x³.

Therefore,

[T(1)]_C = (1, 1, 0, 0),   [T(x)]_C = (0, 1, 1, 0),   [T(x²)]_C = (0, 0, 1, 1).

Putting these results into the columns of [T]^C_B, we obtain

[T]^C_B = [ 1  0  0 ]
          [ 1  1  0 ]
          [ 0  1  1 ]
          [ 0  0  1 ].

(b): We must determine T(1), T(x − 1), and T((x − 1)²), and find the components of the resulting vectors relative to the given basis C on P3. We have

T(1) = 1 + x,   T(x − 1) = −1 + x²,   T((x − 1)²) = 1 − 2x − x² + x³.

Setting

1 + x = c1(1) + c2(x − 1) + c3(x − 1)² + c4(x − 1)³

and solving, we find c1 = 2, c2 = 1, c3 = 0, and c4 = 0. Therefore, [T(1)]_C = (2, 1, 0, 0). Next, setting

−1 + x² = c1(1) + c2(x − 1) + c3(x − 1)² + c4(x − 1)³

and solving, we find c1 = 0, c2 = 2, c3 = 1, and c4 = 0. Therefore, [T(x − 1)]_C = (0, 2, 1, 0). Finally, setting

1 − 2x − x² + x³ = c1(1) + c2(x − 1) + c3(x − 1)² + c4(x − 1)³


and solving, we find c1 = −1, c2 = −1, c3 = 2, and c4 = 1. Therefore, [T((x − 1)²)]_C = (−1, −1, 2, 1). Putting these results into the columns of [T]^C_B, we obtain

[T]^C_B = [ 2  0  −1 ]
          [ 1  2  −1 ]
          [ 0  1   2 ]
          [ 0  0   1 ].

5.

(a): We must determine T(1), T(x), T(x²), and T(x³), and find the components of the resulting vectors relative to the standard basis C on P2. We have

T(1) = 0,   T(x) = 1,   T(x²) = 2x,   T(x³) = 3x².

Therefore, if C is the standard basis on P2, then we have

[T(1)]_C = (0, 0, 0),   [T(x)]_C = (1, 0, 0),   [T(x²)]_C = (0, 2, 0),   [T(x³)]_C = (0, 0, 3).

Putting these results into the columns of [T]^C_B, we obtain

[T]^C_B = [ 0  1  0  0 ]
          [ 0  0  2  0 ]
          [ 0  0  0  3 ].

(b): We must determine T(x³), T(x³ + 1), T(x³ + x), and T(x³ + x²), and find the components of the resulting vectors relative to the given basis C on P2. We have

T(x³) = 3x²,   T(x³ + 1) = 3x²,   T(x³ + x) = 3x² + 1,   T(x³ + x²) = 3x² + 2x.

Setting

3x² = c1(1) + c2(1 + x) + c3(1 + x + x²)

and solving, we find c1 = 0, c2 = −3, and c3 = 3. Therefore, [T(x³)]_C = (0, −3, 3). Likewise, [T(x³ + 1)]_C = (0, −3, 3). Next, setting

3x² + 1 = c1(1) + c2(1 + x) + c3(1 + x + x²)

and solving, we find c1 = 1, c2 = −3, and c3 = 3. Therefore, [T(x³ + x)]_C = (1, −3, 3). Finally, setting

3x² + 2x = c1(1) + c2(1 + x) + c3(1 + x + x²)


and solving, we find c1 = −2, c2 = −1, and c3 = 3. Therefore, [T(x³ + x²)]_C = (−2, −1, 3). Putting these results into the columns of [T]^C_B, we obtain

[T]^C_B = [  0   0   1  −2 ]
          [ −3  −3  −3  −1 ]
          [  3   3   3   3 ].
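The coordinate systems in part (b) are triangular: the basis {1, 1 + x, 1 + x + x²}, written in standard coordinates, gives an upper triangular coefficient matrix. This NumPy check is my addition, not part of the text:

```python
import numpy as np

# Columns: the basis vectors 1, 1+x, 1+x+x^2 in {1, x, x^2} coordinates.
C = np.array([[1.0, 1.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

# Images T(x^3), T(x^3+1), T(x^3+x), T(x^3+x^2) in {1, x, x^2} coordinates, as columns.
images = np.array([[0.0, 0.0, 1.0, 0.0],
                   [0.0, 0.0, 0.0, 2.0],
                   [3.0, 3.0, 3.0, 3.0]])

T_CB = np.linalg.solve(C, images)
expected = np.array([[ 0.0,  0.0,  1.0, -2.0],
                     [-3.0, -3.0, -3.0, -1.0],
                     [ 3.0,  3.0,  3.0,  3.0]])
assert np.allclose(T_CB, expected)
```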

6.

(a): We must determine T(E11), T(E12), T(E21), and T(E22), and find the components of the resulting vectors relative to the standard basis C on R². We have

T(E11) = (1, 1),   T(E12) = (0, 0),   T(E21) = (0, 0),   T(E22) = (1, 1).

Therefore, since C is the standard basis on R², we have

[T(E11)]_C = (1, 1),   [T(E12)]_C = (0, 0),   [T(E21)]_C = (0, 0),   [T(E22)]_C = (1, 1).

Putting these results into the columns of [T]^C_B, we obtain

[T]^C_B = [ 1  0  0  1 ]
          [ 1  0  0  1 ].

(b): Let us denote the four matrices in the ordered basis for B as A1, A2, A3, and A4, respectively. We must determine T(A1), T(A2), T(A3), and T(A4), and find the components of the resulting vectors relative to the standard basis C on R². We have

T(A1) = (−4, −4),   T(A2) = (3, 3),   T(A3) = (−2, −2),   T(A4) = (0, 0).

Therefore, since C is the standard basis on R², we have

[T(A1)]_C = (−4, −4),   [T(A2)]_C = (3, 3),   [T(A3)]_C = (−2, −2),   [T(A4)]_C = (0, 0).

Putting these results into the columns of [T]^C_B, we obtain

[T]^C_B = [ −4  3  −2  0 ]
          [ −4  3  −2  0 ].

7.

(a): We must determine T(E11), T(E12), T(E21), and T(E22), and find the components of the resulting vectors relative to the standard basis C on M2(R). We have

T(E11) = E11,   T(E12) = 2E12 − E21,   T(E21) = 2E21 − E12,   T(E22) = E22.

Therefore, we have

[T(E11)]_C = (1, 0, 0, 0),   [T(E12)]_C = (0, 2, −1, 0),   [T(E21)]_C = (0, −1, 2, 0),   [T(E22)]_C = (0, 0, 0, 1).


Putting these results into the columns of [T]^C_B, we obtain

[T]^C_B = [ 1   0   0  0 ]
          [ 0   2  −1  0 ]
          [ 0  −1   2  0 ]
          [ 0   0   0  1 ].

(b): Let us denote the four matrices in the ordered basis for B as A1, A2, A3, and A4, respectively. We must determine T(A1), T(A2), T(A3), and T(A4), and find the components of the resulting vectors relative to the standard basis C on M2(R). We have

T(A1) = [ −1  −2 ],   T(A2) = [ 1  0 ],   T(A3) = [ 0  −8 ],   T(A4) = [  0  7 ]
        [ −2  −3 ]            [ 3  2 ]            [ 7  −2 ]            [ −2  0 ].

Therefore, we have

[T(A1)]_C = (−1, −2, −2, −3),   [T(A2)]_C = (1, 0, 3, 2),   [T(A3)]_C = (0, −8, 7, −2),   [T(A4)]_C = (0, 7, −2, 0).

Putting these results into the columns of [T]^C_B, we obtain

[T]^C_B = [ −1  1   0   0 ]
          [ −2  0  −8   7 ]
          [ −2  3   7  −2 ]
          [ −3  2  −2   0 ].

8.

(a): We must determine T(e^{2x}) and T(e^{−3x}), and find the components of the resulting vectors relative to the given basis C. We have

T(e^{2x}) = 2e^{2x} and T(e^{−3x}) = −3e^{−3x}.

Therefore, we have

[T]^C_B = [ 2   0 ]
          [ 0  −3 ].

(b): We must determine T(e^{2x} − 3e^{−3x}) and T(2e^{−3x}), and find the components of the resulting vectors relative to the given basis C. We have

T(e^{2x} − 3e^{−3x}) = 2e^{2x} + 9e^{−3x} and T(2e^{−3x}) = −6e^{−3x}.

Now, setting

2e^{2x} + 9e^{−3x} = c1(e^{2x} + e^{−3x}) + c2(−e^{2x})

and solving, we find c1 = 9 and c2 = 7. Therefore, [T(e^{2x} − 3e^{−3x})]_C = (9, 7). Finally, setting

−6e^{−3x} = c1(e^{2x} + e^{−3x}) + c2(−e^{2x})


and solving, we find c1 = c2 = −6. Therefore, [T(2e^{−3x})]_C = (−6, −6). Putting these results into the columns of [T]^C_B, we obtain

[T]^C_B = [ 9  −6 ]
          [ 7  −6 ].

9.

(a): Let us first compute [T]^C_B. We must determine T(1) and T(x), and find the components of the resulting vectors relative to the standard basis C = {E11, E12, E21, E22} on M2(R). We have

T(1) = [ 1   0 ]   and   T(x) = [ −1  0 ]
       [ 0  −1 ]                [ −2  1 ].

Therefore, [T(1)]_C = (1, 0, 0, −1) and [T(x)]_C = (−1, 0, −2, 1). Hence,

[T]^C_B = [  1  −1 ]
          [  0   0 ]
          [  0  −2 ]
          [ −1   1 ].

Now, [p(x)]_B = [−2 + 3x]_B = (−2, 3). Therefore, we have

[T(p(x))]_C = [T]^C_B [p(x)]_B = (−5, 0, −6, 5).

Thus,

T(p(x)) = [ −5  0 ]
          [ −6  5 ].

(b): We have

T(p(x)) = T(−2 + 3x) = [ −5  0 ]
                       [ −6  5 ].

10.

(a): Let us first compute [T]^C_B. We must determine T(1, 0, 0), T(0, 1, 0), and T(0, 0, 1), and find the components of the resulting vectors relative to the standard basis C = {1, x, x², x³} on P3. We have

T(1, 0, 0) = 2 − x − x³,   T(0, 1, 0) = −x,   T(0, 0, 1) = x + 2x³.


Therefore,

[T(1, 0, 0)]_C = (2, −1, 0, −1),   [T(0, 1, 0)]_C = (0, −1, 0, 0),   [T(0, 0, 1)]_C = (0, 1, 0, 2).

Putting these results into the columns of [T]^C_B, we obtain

[T]^C_B = [  2   0  0 ]
          [ −1  −1  1 ]
          [  0   0  0 ]
          [ −1   0  2 ].

Now, [v]_B = [(2, −1, 5)]_B = (2, −1, 5), and therefore,

[T(v)]_C = [T]^C_B [v]_B = (4, 4, 0, 8).

Therefore, T(v) = 4 + 4x + 8x³.

(b): We have T(v) = T(2, −1, 5) = 4 + 4x + 8x³.

11.

(a): Let us first compute [T]^C_B. We must determine T(E11), T(E12), T(E21), and T(E22), and find the components of the resulting vectors relative to the standard basis C = {E11, E12, E21, E22}. We have

T(E11) = [ 2  −1 ],   T(E12) = [ −1   0 ],   T(E21) = [ 0  0 ],   T(E22) = [ 1  3 ]
         [ 0  −1 ]             [  0  −1 ]             [ 0  3 ]             [ 0  0 ].

Therefore,

[T(E11)]_C = (2, −1, 0, −1),   [T(E12)]_C = (−1, 0, 0, −1),   [T(E21)]_C = (0, 0, 0, 3),   [T(E22)]_C = (1, 3, 0, 0).

Putting these results into the columns of [T]^C_B, we obtain

[T]^C_B = [  2  −1  0  1 ]
          [ −1   0  0  3 ]
          [  0   0  0  0 ]
          [ −1  −1  3  0 ].


Now, [A]_B = (−7, 2, 1, −3), and therefore,

[T(A)]_C = [T]^C_B [A]_B = (−19, −2, 0, 8).

Hence,

T(A) = [ −19  −2 ]
       [   0   8 ].

(b): We have

T(A) = [ −19  −2 ]
       [   0   8 ].

12.

(a): Let us first compute [T]^C_B. We must determine T(1), T(x), and T(x²), and find the components of the resulting vectors relative to the standard basis C = {1, x, x², x³, x⁴}. We have

T(1) = x²,   T(x) = x³,   T(x²) = x⁴.

Therefore,

[T(1)]_C = (0, 0, 1, 0, 0),   [T(x)]_C = (0, 0, 0, 1, 0),   [T(x²)]_C = (0, 0, 0, 0, 1).

Putting these results into the columns of [T]^C_B, we obtain

[T]^C_B = [ 0  0  0 ]
          [ 0  0  0 ]
          [ 1  0  0 ]
          [ 0  1  0 ]
          [ 0  0  1 ].

Now, [p(x)]_B = [−1 + 5x − 6x²]_B = (−1, 5, −6), and therefore,

[T(p(x))]_C = [T]^C_B [p(x)]_B = (0, 0, −1, 5, −6).


Hence, T(p(x)) = −x² + 5x³ − 6x⁴.

(b): We have T(p(x)) = −x² + 5x³ − 6x⁴.

13.

(a): Let us first compute [T]^C_B. We must determine T(Eij) for 1 ≤ i, j ≤ 3, and find the components of the resulting vectors relative to the standard basis C = {1}. We have

T(E11) = T(E22) = T(E33) = 1 and T(E12) = T(E13) = T(E23) = T(E21) = T(E31) = T(E32) = 0.

Therefore,

[T(E11)]_C = [T(E22)]_C = [T(E33)]_C = [1], and all other component vectors are [0].

Putting these results into the columns of [T]^C_B, we obtain

[T]^C_B = [ 1  0  0  0  1  0  0  0  1 ].

Now, [A]_B = (2, −6, 0, 1, 4, −4, 0, 0, −3), and therefore,

[T(A)]_C = [T]^C_B [A]_B = [3].

Hence, T(A) = 3.

(b): We have T (A) = 3.

14.

(a): Let us first compute [T]^C_B. We must determine T(1), T(x), T(x²), T(x³), and T(x⁴), and find the components of the resulting vectors relative to the standard basis C = {1, x, x², x³}. We have

T(1) = 0,   T(x) = 1,   T(x²) = 2x,   T(x³) = 3x²,   T(x⁴) = 4x³.


Therefore,

[T(1)]_C = (0, 0, 0, 0),   [T(x)]_C = (1, 0, 0, 0),   [T(x²)]_C = (0, 2, 0, 0),   [T(x³)]_C = (0, 0, 3, 0),   [T(x⁴)]_C = (0, 0, 0, 4).

Putting these results into the columns of [T]^C_B, we obtain

[T]^C_B = [ 0  1  0  0  0 ]
          [ 0  0  2  0  0 ]
          [ 0  0  0  3  0 ]
          [ 0  0  0  0  4 ].

Now, [p(x)]_B = [3 − 4x + 6x² + 6x³ − 2x⁴]_B = (3, −4, 6, 6, −2), and therefore,

[T(p(x))]_C = [T]^C_B [p(x)]_B = (−4, 12, 18, −8).

Therefore, T(p(x)) = T(3 − 4x + 6x² + 6x³ − 2x⁴) = −4 + 12x + 18x² − 8x³.

(b): We have T(p(x)) = p′(x) = −4 + 12x + 18x² − 8x³.

15.

(a): Let us first compute [T]^C_B. We must determine T(1), T(x), T(x²), and T(x³), and find the components of the resulting vectors relative to the standard basis C = {1}. We have

T(1) = 1,   T(x) = 2,   T(x²) = 4,   T(x³) = 8.

Therefore,

[T(1)]_C = [1],   [T(x)]_C = [2],   [T(x²)]_C = [4],   [T(x³)]_C = [8].

Putting these results into the columns of [T]^C_B, we obtain

[T]^C_B = [ 1  2  4  8 ].

Now, [p(x)]_B = [2x − 3x²]_B = (0, 2, −3, 0),


and therefore,

[T(p(x))]_C = [T]^C_B [p(x)]_B = [1·0 + 2·2 + 4·(−3) + 8·0] = [−8].

Therefore, T(p(x)) = −8.

(b): We have p(2) = 2·2 − 3·2² = −8.

16. The linear transformation T2T1 : P4 → R is given by

(T2T1)(p(x)) = p′(2).

Let A denote the standard basis on P4, let B denote the standard basis on P3, and let C denote the standard basis on R.

(a): To determine [T2T1]^C_A, we compute

(T2T1)(1) = 0,   (T2T1)(x) = 1,   (T2T1)(x²) = 4,   (T2T1)(x³) = 12,   (T2T1)(x⁴) = 32.

Therefore,

[(T2T1)(1)]_C = [0],   [(T2T1)(x)]_C = [1],   [(T2T1)(x²)]_C = [4],   [(T2T1)(x³)]_C = [12],   [(T2T1)(x⁴)]_C = [32].

Putting these results into the columns of [T2T1]^C_A, we obtain

[T2T1]^C_A = [ 0  1  4  12  32 ].

(b): We have

[T2]^C_B [T1]^B_A = [ 1  2  4  8 ] · [ 0  1  0  0  0 ]
                                     [ 0  0  2  0  0 ]
                                     [ 0  0  0  3  0 ]
                                     [ 0  0  0  0  4 ]

                  = [ 0  1  4  12  32 ] = [T2T1]^C_A.

(c): Let p(x) = 2 + 5x − x² + 3x⁴. Then the component vector of p(x) relative to the standard basis A is [p(x)]_A = (2, 5, −1, 0, 3). Thus,

[(T2T1)(p(x))]_C = [T2T1]^C_A [p(x)]_A = [ 0  1  4  12  32 ] (2, 5, −1, 0, 3) = [97].

Therefore,

(T2T1)(2 + 5x − x² + 3x⁴) = 97.


Of course, p′(2) = 5 − 2·2 + 12·2³ = 97

by direct calculation as well.
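Problem 16's key point, that the matrix of a composition is the product of the matrices, can be checked numerically. NumPy is my addition, not part of the text:

```python
import numpy as np

# [T1]: d/dx from P4 to P3, and [T2]: evaluation at x = 2 on P3, in standard bases.
T1 = np.array([[0, 1, 0, 0, 0],
               [0, 0, 2, 0, 0],
               [0, 0, 0, 3, 0],
               [0, 0, 0, 0, 4]])
T2 = np.array([[1, 2, 4, 8]])

T2T1 = T2 @ T1
assert np.array_equal(T2T1, np.array([[0, 1, 4, 12, 32]]))

# Applying it to p(x) = 2 + 5x - x^2 + 3x^4 recovers p'(2) = 97.
p = np.array([2, 5, -1, 0, 3])
assert (T2T1 @ p)[0] == 97
```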

17. The linear transformation T2T1 : P1 → R2 is given by

(T2T1)(a + bx) = (0, 0).

Let A denote the standard basis on P1, let B denote the standard basis on M2(R), and let C denote the standard basis on R².

(a): To determine [T2T1]CA, we compute

(T2T1)(1) = (0, 0) and (T2T1)(x) = (0, 0).

Therefore, we obtain [T2T1]^C_A = 0_2, the 2 × 2 zero matrix.

(b): We have

[T2]^C_B [T1]^B_A = [ 1  0  0  1 ] [  1  −1 ]
                    [ 1  0  0  1 ] [  0   0 ]  =  0_2  =  [T2T1]^C_A.
                                   [  0  −2 ]
                                   [ −1   1 ]

(c): The component vector of p(x) = −3 + 8x relative to the standard basis A is [p(x)]_A = (−3, 8). Thus,

[(T2T1)(p(x))]_C = [T2T1]^C_A [p(x)]_A = 0_2 [p(x)]_A = (0, 0).

Therefore, (T2T1)(−3 + 8x) = (0, 0).

Therefore,(T2T1)(−3 + 8x) = (0, 0).

Of course,

(T2T1)(−3 + 8x) = T2( [ −11   0 ] ) = (0, 0)
                      [ −16  11 ]

by direct calculation as well.

18. The linear transformation T2T1 : P2 → P2 is given by

(T2T1)(p(x)) = [(x + 1)p(x)]′.

Let A denote the standard basis on P2, let B denote the standard basis on P3, and let C denote the standard basis on P2.

(a): To determine [T2T1]CA, we compute as follows:

(T2T1)(1) = 1,   (T2T1)(x) = 1 + 2x,   (T2T1)(x²) = 2x + 3x².

Therefore,

[(T2T1)(1)]_C = (1, 0, 0),   [(T2T1)(x)]_C = (1, 2, 0),   [(T2T1)(x²)]_C = (0, 2, 3).


Putting these results into the columns of [T2T1]^C_A, we obtain

[T2T1]^C_A = [ 1  1  0 ]
             [ 0  2  2 ]
             [ 0  0  3 ].

(b): We have

[T2]^C_B [T1]^B_A = [ 0  1  0  0 ] [ 1  0  0 ]   [ 1  1  0 ]
                    [ 0  0  2  0 ] [ 1  1  0 ] = [ 0  2  2 ] = [T2T1]^C_A.
                    [ 0  0  0  3 ] [ 0  1  1 ]   [ 0  0  3 ]
                                   [ 0  0  1 ]

(c): The component vector of p(x) = 7 − x + 2x² relative to the standard basis A is [p(x)]_A = (7, −1, 2). Thus,

[(T2T1)(p(x))]_C = [T2T1]^C_A [p(x)]_A = (6, 2, 6).

Therefore, (T2T1)(7 − x + 2x²) = 6 + 2x + 6x².

Of course,

(T2T1)(7 − x + 2x²) = T2((x + 1)(7 − x + 2x²)) = T2(7 + 6x + x² + 2x³) = 6 + 2x + 6x²

by direct calculation as well.

(d): YES. Since the matrix [T2T1]CA computed in part (a) is invertible, T2T1 is invertible.
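A quick numerical confirmation of parts (a), (c), and (d) (NumPy is my addition, not part of the text):

```python
import numpy as np

# Problem 18: [T2T1] is upper triangular with nonzero diagonal, hence invertible.
T2T1 = np.array([[1.0, 1.0, 0.0],
                 [0.0, 2.0, 2.0],
                 [0.0, 0.0, 3.0]])
assert abs(np.linalg.det(T2T1)) > 1e-12   # det = 1 * 2 * 3 = 6

# Spot-check on p(x) = 7 - x + 2x^2: the image should be 6 + 2x + 6x^2.
p = np.array([7.0, -1.0, 2.0])
assert np.allclose(T2T1 @ p, [6.0, 2.0, 6.0])
```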

19. NO. The matrices [T]^C_B obtained in Problem 2 are not invertible (they contain rows of zeros), and therefore, the corresponding linear transformation T is not invertible.

20. YES. We can explain this answer by using a matrix representation of T. Let B = {1, x, x²} and let C = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}. Then T(1) = (1, 1, 1), T(x) = (0, 1, 2), and T(x²) = (0, 1, 4), and so

[T]^C_B = [ 1  0  0 ]
          [ 1  1  1 ]
          [ 1  2  4 ].

Since this matrix is invertible, the corresponding linear transformation T is invertible.
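The invertibility claim in Problem 20 reduces to a nonzero determinant, which is easy to confirm (NumPy is my addition, not part of the text):

```python
import numpy as np

# Problem 20: the matrix [T]_B^C built from T(1), T(x), T(x^2).
M = np.array([[1.0, 0.0, 0.0],
              [1.0, 1.0, 1.0],
              [1.0, 2.0, 4.0]])

# Nonzero determinant => the matrix, and hence T, is invertible.
assert abs(np.linalg.det(M)) > 1e-12
```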

21. Note that

w belongs to Rng(T) ⟺ w = T(v) for some v in V
                    ⟺ [w]_C = [T(v)]_C for some v in V
                    ⟺ [w]_C = [T]^C_B [v]_B for some v in V.

The right-hand side of this last expression can be expressed as a linear combination of the columns of [T]^C_B, and therefore, w belongs to Rng(T) if and only if [w]_C can be expressed as a linear combination of the columns of [T]^C_B; that is, if and only if [w]_C belongs to colspace([T]^C_B).


Solutions to Section 5.6

True-False Review:

1. FALSE. If v = 0, then Av = λv = 0, but by definition, an eigenvector must be a nonzero vector.

2. TRUE. When we compute det(A − λI) for an upper or lower triangular matrix A, the determinant is the product of the entries lying along the main diagonal of A − λI:

det(A − λI) = (a11 − λ)(a22 − λ) · · · (ann − λ).

The roots of this characteristic equation are precisely the values a11, a22, . . . , ann along the main diagonal of the matrix A.

3. TRUE. The eigenvalues of a matrix are precisely the set of roots of its characteristic equation. Therefore,two matrices A and B that have the same characteristic equation will have the same eigenvalues.

4. FALSE. Many examples of this can be found. As a simple one, consider

A = [ 0  0 ]   and   B = [ 0  1 ]
    [ 0  0 ]             [ 0  0 ].

We have det(A − λI) = (−λ)² = λ² = det(B − λI). Note that every nonzero vector in R² is an eigenvector of A corresponding to λ = 0. However, only vectors of the form (a, 0) with a ≠ 0 are eigenvectors of B. Therefore, A and B do not have precisely the same set of eigenvectors. In this case, every eigenvector of B is also an eigenvector of A, but not conversely.

5. TRUE. Geometrically, every nonzero vector v = (x, y) in R² points in a different direction after a 90° rotation than it did initially. Therefore, the vectors v and Av are not parallel, so Av = λv is impossible for a nonzero v.

6. TRUE. The characteristic polynomial of an n × n matrix A, det(A − λI), is a polynomial of degree n in the indeterminate λ. Since such a polynomial always possesses n roots (with possibly repeated or complex roots) by the Fundamental Theorem of Algebra, the statement is true.

7. FALSE. This is not true, in general, when the linear combination formed involves eigenvectors corresponding to different eigenvalues. For example, let A = [1 0; 0 2], with eigenvalues λ = 1 and λ = 2. It is easy to see that corresponding eigenvectors to these eigenvalues are, respectively, v1 = (1, 0) and v2 = (0, 1). However, note that

A(v1 + v2) = (1, 2),

which is not of the form λ(v1 + v2), and therefore v1 + v2 is not an eigenvector of A. As a more trivial illustration, note that if v is an eigenvector of A, then 0v is a linear combination of {v} that is no longer an eigenvector of A.

8. TRUE. This is basically a fact about roots of polynomials. Complex roots of real polynomials always occur in complex conjugate pairs. Therefore, if λ = a + ib (b ≠ 0) is an eigenvalue of A, then so is its conjugate λ̄ = a − ib.

9. TRUE. If λ is an eigenvalue of A, then we have Av = λv for some eigenvector v of A corresponding to λ. Then

A²v = A(Av) = A(λv) = λ(Av) = λ(λv) = λ²v,

which shows that v is also an eigenvector of A², this time corresponding to the eigenvalue λ².
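This fact is easy to confirm numerically. The following sketch (not part of the original solutions) checks it with NumPy, using the matrix and eigenvector from Problem 1 below:

```python
import numpy as np

# Illustration of statement 9, using the matrix from Problem 1 below:
# if Av = lambda*v, then A^2 v = lambda^2 v.
A = np.array([[1.0, 3.0],
              [2.0, 2.0]])
v = np.array([1.0, 1.0])                   # eigenvector of A with eigenvalue 4
assert np.allclose(A @ v, 4 * v)           # Av = 4v
assert np.allclose(A @ (A @ v), 16 * v)    # A^2 v = 4^2 v = 16v
```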


Problems:

1. Av = [1 3; 2 2](1, 1) = (4, 4) = 4(1, 1) = λv, so λ = 4.

2. Av = [1 −2 −6; −2 2 −5; 2 1 8](2, 1, −1) = (6, 3, −3) = 3(2, 1, −1) = λv, so λ = 3.

3. Since v = c1(1, 0, −3) + c2(4, −3, 0) = (c1 + 4c2, −3c2, −3c1), it follows that

Av = [1 4 1; 3 2 1; 3 4 −1](c1 + 4c2, −3c2, −3c1) = (−2c1 − 8c2, 6c2, 6c1) = −2(c1 + 4c2, −3c2, −3c1) = λv,

so λ = −2.

4. Av1 = λ1v1 =⇒ [4 1; 2 3](1, −2) = λ1(1, −2) =⇒ (2, −4) = (λ1, −2λ1) =⇒ λ1 = 2.

Av2 = λ2v2 =⇒ [4 1; 2 3](1, 1) = λ2(1, 1) =⇒ (5, 5) = (λ2, λ2) =⇒ λ2 = 5.

Thus λ1 = 2 corresponds to v1 and λ2 = 5 corresponds to v2.
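As an illustrative cross-check (not part of the original solution), `numpy.linalg.eig` recovers the same eigenpairs:

```python
import numpy as np

# Numerical cross-check of Problem 4.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
eigenvalues, eigenvectors = np.linalg.eig(A)
order = np.argsort(eigenvalues)            # sort for a deterministic comparison
eigenvalues = eigenvalues[order]
eigenvectors = eigenvectors[:, order]
assert np.allclose(eigenvalues, [2.0, 5.0])
# each column of `eigenvectors` satisfies A v = lambda v
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)
```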

5. The only vectors that are mapped into a scalar multiple of themselves under a reflection in the x-axis are those vectors that either point along the x-axis, or that point along the y-axis. Hence, the eigenvectors are of the form (a, 0) or (0, b), where a and b are arbitrary nonzero real numbers. A vector that points along the x-axis will have neither its magnitude nor its direction altered by a reflection in the x-axis. Hence, the eigenvectors of the form (a, 0) correspond to the eigenvalue λ = 1. A vector of the form (0, b) will be mapped into the vector (0, −b) = −1(0, b) under a reflection in the x-axis. Consequently, the eigenvectors of the form (0, b) correspond to the eigenvalue λ = −1.

6. Any vector lying along the line y = x is unmoved by the action of T; hence, any vector (t, t) with t ≠ 0 is an eigenvector with corresponding eigenvalue λ = 1. On the other hand, any vector lying along the line y = −x will be reflected across the line y = x, thereby experiencing a 180° change of direction. Therefore, any vector (t, −t) with t ≠ 0 is an eigenvector with corresponding eigenvalue λ = −1. Vectors that do not lie on the line y = x or the line y = −x are not eigenvectors of this linear transformation.

7. If θ ≠ 0, π, there are no vectors that are mapped into scalar multiples of themselves under the rotation, and consequently, there are no real eigenvalues and eigenvectors in this case. If θ = 0, then every vector is mapped onto itself under the rotation, therefore λ = 1, and every nonzero vector in R² is an eigenvector. If θ = π, then every vector is mapped onto its negative under the rotation, therefore λ = −1, and once again, every nonzero vector in R² is an eigenvector.

8. Any vector lying on the y-axis is unmoved by the action of T; hence, any vector (0, y, 0) with y nonzero is an eigenvector of T with corresponding eigenvalue λ = 1. On the other hand, any vector lying in the xz-plane, say (x, 0, z), is transformed under T to (0, 0, 0). Thus, any vector (x, 0, z) with x and z not both zero is an eigenvector with corresponding eigenvalue λ = 0.

9. det(A − λI) = 0 ⇐⇒ |3−λ  −1; −5  −1−λ| = 0 ⇐⇒ λ² − 2λ − 8 = 0 ⇐⇒ (λ + 2)(λ − 4) = 0 ⇐⇒ λ = −2 or λ = 4.

If λ = −2 then (A − λI)v = 0 assumes the form [5 −1; −5 1](v1, v2) = (0, 0) =⇒ 5v1 − v2 = 0 =⇒ v2 = 5v1. If we let v1 = t ∈ R, then the solution set of this system is {(t, 5t) : t ∈ R}, so the eigenvectors corresponding to λ = −2 are v = t(1, 5) where t ∈ R.

If λ = 4 then (A − λI)v = 0 assumes the form [−1 −1; −5 −5](v1, v2) = (0, 0) =⇒ −v1 − v2 = 0 =⇒ v2 = −v1. If we let v1 = r ∈ R, then the solution set of this system is {(r, −r) : r ∈ R}, so the eigenvectors corresponding to λ = 4 are v = r(1, −1) where r ∈ R.
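A quick numerical sketch (an illustration added here, not part of the original solution) confirms both the eigenvalues and the eigenvector directions found in Problem 9:

```python
import numpy as np

# Cross-check of Problem 9: eigenvalues -2 and 4, with eigenvector
# directions (1, 5) and (1, -1).
A = np.array([[3.0, -1.0],
              [-5.0, -1.0]])
vals, vecs = np.linalg.eig(A)
idx = np.argsort(vals)
vals, vecs = vals[idx], vecs[:, idx]
assert np.allclose(vals, [-2.0, 4.0])
# numpy returns unit-length eigenvectors; rescale to compare directions
assert np.allclose(vecs[:, 0] / vecs[0, 0], [1.0, 5.0])
assert np.allclose(vecs[:, 1] / vecs[0, 1], [1.0, -1.0])
```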

10. det(A − λI) = 0 ⇐⇒ |1−λ  6; 2  −3−λ| = 0 ⇐⇒ λ² + 2λ − 15 = 0 ⇐⇒ (λ − 3)(λ + 5) = 0 ⇐⇒ λ = 3 or λ = −5.

If λ = 3 then (A − λI)v = 0 assumes the form [−2 6; 2 −6](v1, v2) = (0, 0) =⇒ v1 − 3v2 = 0. If we let v2 = r ∈ R, then the solution set of this system is {(3r, r) : r ∈ R}, so the eigenvectors corresponding to λ = 3 are v = r(3, 1) where r ∈ R.

If λ = −5 then (A − λI)v = 0 assumes the form [6 6; 2 2](v1, v2) = (0, 0) =⇒ v1 + v2 = 0. If we let v2 = s ∈ R, then the solution set of this system is {(−s, s) : s ∈ R}, so the eigenvectors corresponding to λ = −5 are v = s(−1, 1) where s ∈ R.

11. det(A − λI) = 0 ⇐⇒ |7−λ  4; −1  3−λ| = 0 ⇐⇒ λ² − 10λ + 25 = 0 ⇐⇒ (λ − 5)² = 0 ⇐⇒ λ = 5 of multiplicity two.

If λ = 5 then (A − λI)v = 0 assumes the form [2 4; −1 −2](v1, v2) = (0, 0) =⇒ v1 + 2v2 = 0 =⇒ v1 = −2v2. If we let v2 = t ∈ R, then the solution set of this system is {(−2t, t) : t ∈ R}, so the eigenvectors corresponding to λ = 5 are v = t(−2, 1) where t ∈ R.

12. det(A − λI) = 0 ⇐⇒ |2−λ  0; 0  2−λ| = 0 ⇐⇒ (2 − λ)² = 0 ⇐⇒ λ = 2 of multiplicity two.

If λ = 2 then (A − λI)v = 0 assumes the form [0 0; 0 0](v1, v2) = (0, 0). Thus, if we let v1 = s and v2 = t where s, t ∈ R, then the solution set of this system is {(s, t) : s, t ∈ R}, so the eigenvectors corresponding to λ = 2 are v = s(1, 0) + t(0, 1) where s, t ∈ R.

13. det(A − λI) = 0 ⇐⇒ |3−λ  −2; 4  −1−λ| = 0 ⇐⇒ λ² − 2λ + 5 = 0 ⇐⇒ λ = 1 ± 2i.

If λ = 1 − 2i then (A − λI)v = 0 assumes the form [2+2i  −2; 4  −2+2i](v1, v2) = (0, 0) =⇒ (1 + i)v1 − v2 = 0. If we let v1 = s ∈ C, then the solution set of this system is {(s, (1 + i)s) : s ∈ C}, so the eigenvectors corresponding to λ = 1 − 2i are v = s(1, 1 + i) where s ∈ C. By Theorem 5.6.8, since the entries of A are real, λ = 1 + 2i has corresponding eigenvectors of the form v = t(1, 1 − i) where t ∈ C.
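As an added illustration (not from the original text), the conjugate-pair structure of Problem 13 can be checked numerically:

```python
import numpy as np

# Cross-check of Problem 13: a real matrix whose eigenvalues are the
# complex conjugate pair 1 +/- 2i.
A = np.array([[3.0, -2.0],
              [4.0, -1.0]])
vals = np.linalg.eigvals(A)
vals = vals[np.argsort(vals.imag)]   # order as 1 - 2i, then 1 + 2i
assert np.allclose(vals, [1 - 2j, 1 + 2j])
```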

14. det(A − λI) = 0 ⇐⇒ |2−λ  3; −3  2−λ| = 0 ⇐⇒ (2 − λ)² = −9 ⇐⇒ λ = 2 ± 3i.

If λ = 2 − 3i then (A − λI)v = 0 assumes the form [3i  3; −3  3i](v1, v2) = (0, 0) =⇒ v2 = −iv1. If we let v1 = t ∈ C, then the solution set of this system is {(t, −it) : t ∈ C}, so the eigenvectors corresponding to λ = 2 − 3i are v = t(1, −i) where t ∈ C. By Theorem 5.6.8, since the entries of A are real, λ = 2 + 3i has corresponding eigenvectors of the form v = r(1, i) where r ∈ C.


15. det(A − λI) = 0 ⇐⇒ |10−λ  −12  8; 0  2−λ  0; −8  12  −6−λ| = 0 ⇐⇒ (λ − 2)³ = 0 ⇐⇒ λ = 2 of multiplicity three.

If λ = 2 then (A − λI)v = 0 assumes the form [8 −12 8; 0 0 0; −8 12 −8](v1, v2, v3) = (0, 0, 0) =⇒ 2v1 − 3v2 + 2v3 = 0. Thus, if we let v2 = 2s and v3 = t where s, t ∈ R, then the solution set of this system is {(3s − t, 2s, t) : s, t ∈ R}, so the eigenvectors corresponding to λ = 2 are v = s(3, 2, 0) + t(−1, 0, 1) where s, t ∈ R.

16. det(A − λI) = 0 ⇐⇒ |3−λ  0  0; 0  2−λ  −1; 1  −1  2−λ| = 0 ⇐⇒ (λ − 1)(λ − 3)² = 0 ⇐⇒ λ = 1 or λ = 3 of multiplicity two.

If λ = 1 then (A − λI)v = 0 assumes the form [2 0 0; 0 1 −1; 1 −1 1](v1, v2, v3) = (0, 0, 0) =⇒ v1 = 0 and v2 − v3 = 0. Thus, if we let v3 = s where s ∈ R, then the solution set of this system is {(0, s, s) : s ∈ R}, so the eigenvectors corresponding to λ = 1 are v = s(0, 1, 1) where s ∈ R.

If λ = 3 then (A − λI)v = 0 assumes the form [0 0 0; 0 −1 −1; 1 −1 −1](v1, v2, v3) = (0, 0, 0) =⇒ v1 = 0 and v2 + v3 = 0. Thus, if we let v3 = t where t ∈ R, then the solution set of this system is {(0, −t, t) : t ∈ R}, so the eigenvectors corresponding to λ = 3 are v = t(0, −1, 1) where t ∈ R.

17. det(A − λI) = 0 ⇐⇒ |1−λ  0  0; 0  3−λ  2; 2  −2  −1−λ| = 0 ⇐⇒ (λ − 1)³ = 0 ⇐⇒ λ = 1 of multiplicity three.

If λ = 1 then (A − λI)v = 0 assumes the form [0 0 0; 0 2 2; 2 −2 −2](v1, v2, v3) = (0, 0, 0) =⇒ v1 = 0 and v2 + v3 = 0. Thus, if we let v3 = s where s ∈ R, then the solution set of this system is {(0, −s, s) : s ∈ R}, so the eigenvectors corresponding to λ = 1 are v = s(0, −1, 1) where s ∈ R.

18. det(A − λI) = 0 ⇐⇒ |6−λ  3  −4; −5  −2−λ  2; 0  0  −1−λ| = 0 ⇐⇒ (λ − 1)(λ + 1)(λ − 3) = 0 ⇐⇒ λ = −1, λ = 1, or λ = 3.

If λ = −1 then (A − λI)v = 0 assumes the form [7 3 −4; −5 −1 2; 0 0 0](v1, v2, v3) = (0, 0, 0) =⇒ v1 = v3 − v2 and 4v2 − 3v3 = 0. Thus, if we let v3 = 4r where r ∈ R, then the solution set of this system is {(r, 3r, 4r) : r ∈ R}, so the eigenvectors corresponding to λ = −1 are v = r(1, 3, 4) where r ∈ R.

If λ = 1 then (A − λI)v = 0 assumes the form [5 3 −4; −5 −3 2; 0 0 −2](v1, v2, v3) = (0, 0, 0) =⇒ 5v1 + 3v2 = 0 and v3 = 0. Thus, if we let v2 = −5s where s ∈ R, then the solution set of this system is {(3s, −5s, 0) : s ∈ R}, so the eigenvectors corresponding to λ = 1 are v = s(3, −5, 0) where s ∈ R.

If λ = 3 then (A − λI)v = 0 assumes the form [3 3 −4; −5 −5 2; 0 0 −4](v1, v2, v3) = (0, 0, 0) =⇒ v1 + v2 = 0 and v3 = 0. Thus, if we let v2 = t where t ∈ R, then the solution set of this system is {(−t, t, 0) : t ∈ R}, so the eigenvectors corresponding to λ = 3 are v = t(−1, 1, 0) where t ∈ R.

19. det(A − λI) = 0 ⇐⇒ |7−λ  −8  6; 8  −9−λ  6; 0  0  −1−λ| = 0 ⇐⇒ (λ + 1)³ = 0 ⇐⇒ λ = −1 of multiplicity three.

If λ = −1 then (A − λI)v = 0 assumes the form [8 −8 6; 8 −8 6; 0 0 0](v1, v2, v3) = (0, 0, 0) =⇒ 4v1 − 4v2 + 3v3 = 0. Thus, if we let v2 = r and v3 = 4s where r, s ∈ R, then the solution set of this system is {(r − 3s, r, 4s) : r, s ∈ R}, so the eigenvectors corresponding to λ = −1 are v = r(1, 1, 0) + s(−3, 0, 4) where r, s ∈ R.

20. det(A − λI) = 0 ⇐⇒ |−λ  1  −1; 0  2−λ  0; 2  −1  3−λ| = 0 ⇐⇒ (λ − 1)(λ − 2)² = 0 ⇐⇒ λ = 1 or λ = 2 of multiplicity two.

If λ = 1 then (A − λI)v = 0 assumes the form [−1 1 −1; 0 1 0; 2 −1 2](v1, v2, v3) = (0, 0, 0) =⇒ v2 = 0 and v1 + v3 = 0. Thus, if we let v3 = r where r ∈ R, then the solution set of this system is {(−r, 0, r) : r ∈ R}, so the eigenvectors corresponding to λ = 1 are v = r(−1, 0, 1) where r ∈ R.

If λ = 2 then (A − λI)v = 0 assumes the form [−2 1 −1; 0 0 0; 2 −1 1](v1, v2, v3) = (0, 0, 0) =⇒ 2v1 − v2 + v3 = 0. Thus, if we let v1 = s and v3 = t where s, t ∈ R, then the solution set of this system is {(s, 2s + t, t) : s, t ∈ R}, so the eigenvectors corresponding to λ = 2 are v = s(1, 2, 0) + t(0, 1, 1) where s, t ∈ R.

21. det(A − λI) = 0 ⇐⇒ |1−λ  0  0; 0  −λ  1; 0  −1  −λ| = 0 ⇐⇒ (1 − λ)(1 + λ²) = 0 ⇐⇒ λ = 1 or λ = ±i.

If λ = 1 then (A − λI)v = 0 assumes the form [0 0 0; 0 −1 1; 0 −1 −1](v1, v2, v3) = (0, 0, 0) =⇒ −v2 + v3 = 0 and −v2 − v3 = 0. The solution set of this system is {(r, 0, 0) : r ∈ C}, so the eigenvectors corresponding to λ = 1 are v = r(1, 0, 0) where r ∈ C.

If λ = −i then (A − λI)v = 0 assumes the form [1+i  0  0; 0  i  1; 0  −1  i](v1, v2, v3) = (0, 0, 0) =⇒ v1 = 0 and −v2 + iv3 = 0. The solution set of this system is {(0, si, s) : s ∈ C}, so the eigenvectors corresponding to λ = −i are v = s(0, i, 1) where s ∈ C. By Theorem 5.6.8, since the entries of A are real, λ = i has corresponding eigenvectors of the form v = t(0, −i, 1) where t ∈ C.

22. det(A − λI) = 0 ⇐⇒ |−2−λ  1  0; 1  −1−λ  −1; 1  3  −3−λ| = 0 ⇐⇒ (λ + 2)(λ² + 4λ + 5) = 0 ⇐⇒ λ = −2 or λ = −2 ± i.

If λ = −2 then (A − λI)v = 0 assumes the form [0 1 0; 1 1 −1; 1 3 −1](v1, v2, v3) = (0, 0, 0) =⇒ v2 = 0 and v1 − v3 = 0. Thus, if we let v3 = r where r ∈ C, then the solution set of this system is {(r, 0, r) : r ∈ C}, so the eigenvectors corresponding to λ = −2 are v = r(1, 0, 1) where r ∈ C.

If λ = −2 + i then (A − λI)v = 0 assumes the form [−i  1  0; 1  1−i  −1; 1  3  −1−i](v1, v2, v3) = (0, 0, 0) =⇒ v1 + ((−2 + i)/5)v3 = 0 and v2 + ((−1 − 2i)/5)v3 = 0. Thus, if we let v3 = 5s where s ∈ C, then the solution set of this system is {((2 − i)s, (1 + 2i)s, 5s) : s ∈ C}, so the eigenvectors corresponding to λ = −2 + i are v = s(2 − i, 1 + 2i, 5) where s ∈ C. By Theorem 5.6.8, since the entries of A are real, λ = −2 − i has corresponding eigenvectors of the form v = t(2 + i, 1 − 2i, 5) where t ∈ C.

23. det(A − λI) = 0 ⇐⇒ |2−λ  −1  3; 3  1−λ  0; 2  −1  3−λ| = 0 ⇐⇒ λ(λ − 2)(λ − 4) = 0 ⇐⇒ λ = 0, λ = 2, or λ = 4.

If λ = 0 then (A − λI)v = 0 assumes the form [2 −1 3; 3 1 0; 2 −1 3](v1, v2, v3) = (0, 0, 0) =⇒ v1 + 2v2 − 3v3 = 0 and −5v2 + 9v3 = 0. Thus, if we let v3 = 5r where r ∈ R, then the solution set of this system is {(−3r, 9r, 5r) : r ∈ R}, so the eigenvectors corresponding to λ = 0 are v = r(−3, 9, 5) where r ∈ R.

If λ = 2 then (A − λI)v = 0 assumes the form [0 −1 3; 3 −1 0; 2 −1 1](v1, v2, v3) = (0, 0, 0) =⇒ v1 − v3 = 0 and v2 − 3v3 = 0. Thus, if we let v3 = s where s ∈ R, then the solution set of this system is {(s, 3s, s) : s ∈ R}, so the eigenvectors corresponding to λ = 2 are v = s(1, 3, 1) where s ∈ R.

If λ = 4 then (A − λI)v = 0 assumes the form [−2 −1 3; 3 −3 0; 2 −1 −1](v1, v2, v3) = (0, 0, 0) =⇒ v1 − v3 = 0 and v2 − v3 = 0. Thus, if we let v3 = t where t ∈ R, then the solution set of this system is {(t, t, t) : t ∈ R}, so the eigenvectors corresponding to λ = 4 are v = t(1, 1, 1) where t ∈ R.

24. det(A − λI) = 0 ⇐⇒ |5−λ  0  0; 0  5−λ  0; 0  0  5−λ| = 0 ⇐⇒ (λ − 5)³ = 0 ⇐⇒ λ = 5 of multiplicity three.

If λ = 5 then (A − λI)v = 0 assumes the form [0 0 0; 0 0 0; 0 0 0](v1, v2, v3) = (0, 0, 0). Thus, if we let v1 = r, v2 = s, and v3 = t where r, s, t ∈ R, then the solution set of this system is {(r, s, t) : r, s, t ∈ R}, so the eigenvectors corresponding to λ = 5 are v = r(1, 0, 0) + s(0, 1, 0) + t(0, 0, 1) where r, s, t ∈ R. That is, every nonzero vector in R³ is an eigenvector of A corresponding to λ = 5.

25. det(A − λI) = 0 ⇐⇒ |−λ  2  2; 2  −λ  2; 2  2  −λ| = 0 ⇐⇒ (λ − 4)(λ + 2)² = 0 ⇐⇒ λ = 4 or λ = −2 of multiplicity two.

If λ = 4 then (A − λI)v = 0 assumes the form [−4 2 2; 2 −4 2; 2 2 −4](v1, v2, v3) = (0, 0, 0) =⇒ v1 − v3 = 0 and v2 − v3 = 0. Thus, if we let v3 = r where r ∈ R, then the solution set of this system is {(r, r, r) : r ∈ R}, so the eigenvectors corresponding to λ = 4 are v = r(1, 1, 1) where r ∈ R.

If λ = −2 then (A − λI)v = 0 assumes the form [2 2 2; 2 2 2; 2 2 2](v1, v2, v3) = (0, 0, 0) =⇒ v1 + v2 + v3 = 0. Thus, if we let v2 = s and v3 = t where s, t ∈ R, then the solution set of this system is {(−s − t, s, t) : s, t ∈ R}, so the eigenvectors corresponding to λ = −2 are v = s(−1, 1, 0) + t(−1, 0, 1) where s, t ∈ R.

26. det(A − λI) = 0 ⇐⇒ |1−λ  2  3  4; 4  3−λ  2  1; 4  5  6−λ  7; 7  6  5  4−λ| = 0 ⇐⇒ λ⁴ − 14λ³ − 32λ² = 0 ⇐⇒ λ²(λ − 16)(λ + 2) = 0 ⇐⇒ λ = 16, λ = −2, or λ = 0 of multiplicity two.

If λ = 16 then (A − λI)v = 0 assumes the form [−15 2 3 4; 4 −13 2 1; 4 5 −10 7; 7 6 5 −12](v1, v2, v3, v4) = (0, 0, 0, 0) =⇒ v1 − 1841v3 + 2078v4 = 0, v2 + 82v3 − 93v4 = 0, and 31v3 − 35v4 = 0. Thus, if we let v4 = 31r where r ∈ R, then the solution set of this system is {(17r, 13r, 35r, 31r) : r ∈ R}, so the eigenvectors corresponding to λ = 16 are v = r(17, 13, 35, 31) where r ∈ R.

If λ = −2 then (A − λI)v = 0 assumes the form [3 2 3 4; 4 5 2 1; 4 5 8 7; 7 6 5 6](v1, v2, v3, v4) = (0, 0, 0, 0) =⇒ v1 + v4 = 0, v2 − v4 = 0, and v3 + v4 = 0. Thus, if we let v4 = s where s ∈ R, then the solution set of this system is {(−s, s, −s, s) : s ∈ R}, so the eigenvectors corresponding to λ = −2 are v = s(−1, 1, −1, 1) where s ∈ R.

If λ = 0 then (A − λI)v = 0 assumes the form [1 2 3 4; 4 3 2 1; 4 5 6 7; 7 6 5 4](v1, v2, v3, v4) = (0, 0, 0, 0) =⇒ v1 − v3 − 2v4 = 0 and v2 + 2v3 + 3v4 = 0. Thus, if we let v3 = a and v4 = b where a, b ∈ R, then the solution set of this system is {(a + 2b, −2a − 3b, a, b) : a, b ∈ R}, so the eigenvectors corresponding to λ = 0 are v = a(1, −2, 1, 0) + b(2, −3, 0, 1) where a, b ∈ R.
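For a 4 × 4 matrix like this one, a numerical check is reassuring. The following sketch (an added illustration, not part of the original solution) verifies the eigenvalues and the eigenvector found for λ = 16:

```python
import numpy as np

# Cross-check of Problem 26: eigenvalues 16, -2, and 0 (multiplicity two).
A = np.array([[1., 2., 3., 4.],
              [4., 3., 2., 1.],
              [4., 5., 6., 7.],
              [7., 6., 5., 4.]])
vals = np.sort(np.linalg.eigvals(A).real)
assert np.allclose(vals, [-2.0, 0.0, 0.0, 16.0])
# the eigenvector found for lambda = 16 indeed satisfies Av = 16v
v = np.array([17., 13., 35., 31.])
assert np.allclose(A @ v, 16 * v)
```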

27. det(A − λI) = 0 ⇐⇒ |−λ  1  0  0; −1  −λ  0  0; 0  0  −λ  −1; 0  0  1  −λ| = 0 ⇐⇒ (λ² + 1)² = 0 ⇐⇒ λ = ±i, where each root is of multiplicity two.

If λ = −i then (A − λI)v = 0 assumes the form [i 1 0 0; −1 i 0 0; 0 0 i −1; 0 0 1 i](v1, v2, v3, v4) = (0, 0, 0, 0) =⇒ v1 − iv2 = 0 and v3 + iv4 = 0. Thus, if we let v2 = r and v4 = s where r, s ∈ C, then the solution set of this system is {(ir, r, −is, s) : r, s ∈ C}, so the eigenvectors corresponding to λ = −i are v = r(i, 1, 0, 0) + s(0, 0, −i, 1) where r, s ∈ C. By Theorem 5.6.8, since the entries of A are real, λ = i has corresponding eigenvectors v = a(−i, 1, 0, 0) + b(0, 0, i, 1) where a, b ∈ C.


28. This matrix is lower triangular, and therefore the eigenvalues appear along the main diagonal of the matrix: λ = 1 + i, 1 − 3i, 1. Note that the eigenvalues do not occur in complex conjugate pairs, but this does not contradict Theorem 5.6.8 because the matrix does not consist entirely of real entries.

29. (a) p(λ) = det(A − λI2) = |1−λ  −1; 2  4−λ| = λ² − 5λ + 6.

(b) A² − 5A + 6I2 = [1 −1; 2 4][1 −1; 2 4] − 5[1 −1; 2 4] + 6[1 0; 0 1] = [−1 −5; 10 14] + [−5 5; −10 −20] + [6 0; 0 6] = [0 0; 0 0] = 02.

(c) Using part (b) of this problem: A² − 5A + 6I2 = 02 ⇐⇒ A⁻¹(A² − 5A + 6I2) = A⁻¹ · 02 ⇐⇒ A − 5I2 + 6A⁻¹ = 02 ⇐⇒ 6A⁻¹ = 5I2 − A ⇐⇒ A⁻¹ = (1/6){5[1 0; 0 1] − [1 −1; 2 4]} ⇐⇒ A⁻¹ = (1/6)[4 1; −2 1], or A⁻¹ = [2/3  1/6; −1/3  1/6].
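Parts (b) and (c) can be confirmed numerically. The sketch below (an added illustration, not part of the original solution) checks that A satisfies its characteristic polynomial and that the rearranged identity really produces the inverse:

```python
import numpy as np

# Cross-check of Problem 29: A satisfies p(A) = A^2 - 5A + 6I = 0
# (Cayley-Hamilton), and rearranging gives A^{-1} = (5I - A)/6.
A = np.array([[1.0, -1.0],
              [2.0, 4.0]])
assert np.allclose(A @ A - 5 * A + 6 * np.eye(2), 0)     # part (b)
A_inv = (5 * np.eye(2) - A) / 6                          # part (c)
assert np.allclose(A @ A_inv, np.eye(2))
assert np.allclose(A_inv, [[2/3, 1/6], [-1/3, 1/6]])
```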

30. (a) det(A − λI2) = |1−λ  2; 2  −2−λ| = 0 ⇐⇒ λ² + λ − 6 = 0 ⇐⇒ (λ − 2)(λ + 3) = 0 ⇐⇒ λ = 2 or λ = −3.

(b) [1 2; 2 −2] ~ [1 2; 0 −6] ~ [1 0; 0 1] = B.

det(B − λI2) = 0 ⇐⇒ |1−λ  0; 0  1−λ| = 0 ⇐⇒ (λ − 1)² = 0 ⇐⇒ λ = 1 of multiplicity two. Matrices A and B do not have the same eigenvalues.

31. A(3v1 − v2) = 3Av1 − Av2 = 3(2v1) − (−3v2) = 6v1 + 3v2 = 6(1, −1) + 3(2, 1) = (6, −6) + (6, 3) = (12, −3).

32. (a) Let a, b, c ∈ R. If v = av1 + bv2 + cv3, then (5, 0, 3) = a(1, −1, 1) + b(2, 1, 3) + c(−1, −1, 2), or (5, 0, 3) = (a + 2b − c, −a + b − c, a + 3b + 2c). The last equality results in the system:

a + 2b − c = 5,
−a + b − c = 0,
a + 3b + 2c = 3.

This system has the solution a = 2, b = 1, and c = −1. Consequently, v = 2v1 + v2 − v3.

(b) Using part (a): Av = A(2v1 + v2 − v3) = 2Av1 + Av2 − Av3 = 2(2v1) + (−2v2) − (3v3) = 4v1 − 2v2 − 3v3 = 4(1, −1, 1) − 2(2, 1, 3) − 3(−1, −1, 2) = (3, −3, −8).

33.

A(c1v1 + c2v2 + c3v3) = A(c1v1) + A(c2v2) + A(c3v3) = c1(Av1) + c2(Av2) + c3(Av3) = c1(λv1) + c2(λv2) + c3(λv3) = λ(c1v1 + c2v2 + c3v3).

Thus, provided it is nonzero, c1v1 + c2v2 + c3v3 is an eigenvector of A corresponding to the eigenvalue λ.

34. Recall that the determinant of an upper (lower) triangular matrix is just the product of its main diagonal elements. Let A be an n × n upper (lower) triangular matrix. It follows that A − λIn is an upper (lower) triangular matrix with main diagonal elements aii − λ, i = 1, 2, . . . , n. Consequently,

det(A − λIn) = 0 ⇐⇒ (a11 − λ)(a22 − λ) · · · (ann − λ) = 0.

This implies that λ = a11, a22, . . . , ann.

35. Any scalar λ such that det(A − λI) = 0 is an eigenvalue of A. Therefore, if 0 is an eigenvalue of A, then det(A − 0 · I) = 0, or det(A) = 0, which implies that A is not invertible. On the other hand, if 0 is not an eigenvalue of A, then det(A − 0 · I) ≠ 0, or det(A) ≠ 0, which implies that A is invertible.

36. A is invertible, so A⁻¹ exists. Also, λ is an eigenvalue of A, so that Av = λv. (Note that λ ≠ 0, since by Problem 35 an invertible matrix cannot have 0 as an eigenvalue.) Thus,

A⁻¹(Av) = A⁻¹(λv) =⇒ (A⁻¹A)v = λA⁻¹v =⇒ Inv = λA⁻¹v =⇒ v = λA⁻¹v =⇒ (1/λ)v = A⁻¹v.

Therefore 1/λ is an eigenvalue of A⁻¹ provided that λ is an eigenvalue of A.
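A short numerical sketch (an added illustration, not part of the original solution) confirms this reciprocal relationship, reusing the matrix from Problem 4 whose eigenvalues are 2 and 5:

```python
import numpy as np

# Cross-check of Problem 36: the eigenvalues of A^{-1} are the reciprocals
# of the eigenvalues of A.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
vals_A = np.sort(np.linalg.eigvals(A))
vals_inv = np.sort(np.linalg.eigvals(np.linalg.inv(A)))
assert np.allclose(vals_A, [2.0, 5.0])
assert np.allclose(vals_inv, np.sort(1.0 / vals_A))   # i.e. [1/5, 1/2]
```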

37. By assumption, we have Av = λv and Bv = µv.

(a) Therefore, (AB)v = A(Bv) = A(µv) = µ(Av) = µ(λv) = (λµ)v, which shows that v is an eigenvector of AB with corresponding eigenvalue λµ.

(b) Also, (A + B)v = Av + Bv = λv + µv = (λ + µ)v, which shows that v is an eigenvector of A + B with corresponding eigenvalue λ + µ.

38. Recall that a matrix and its transpose have the same determinant. Thus,

det(A − λIn) = det([A − λIn]ᵀ) = det(Aᵀ − λIn).

Since A and Aᵀ have the same characteristic polynomial, it follows that both matrices also have the same eigenvalues.
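This can be spot-checked numerically on an arbitrary matrix. The sketch below (an added illustration, not part of the original solution; the random matrix is just a convenient test case) compares the sorted eigenvalue lists of A and Aᵀ:

```python
import numpy as np

# Cross-check of Problem 38: A and A^T have the same eigenvalues
# (though generally different eigenvectors).
rng = np.random.default_rng(42)
A = rng.standard_normal((4, 4))
vals = np.sort_complex(np.linalg.eigvals(A))
vals_T = np.sort_complex(np.linalg.eigvals(A.T))
assert np.allclose(vals, vals_T)
```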

39. (a) v = r + is is an eigenvector with eigenvalue λ = a + bi, b ≠ 0 =⇒ Av = λv =⇒ A(r + is) = (a + bi)(r + is) = (ar − bs) + i(as + br) =⇒ Ar = ar − bs and As = as + br.
Now if r = 0, then A0 = a0 − bs =⇒ 0 = 0 − bs =⇒ 0 = bs =⇒ s = 0, since b ≠ 0. This would mean that v = 0, so v could not be an eigenvector. Thus, it must be that r ≠ 0. Similarly, if s = 0, then r = 0, and again, this would contradict the fact that v is an eigenvector. Hence, it must be the case that r ≠ 0 and s ≠ 0.

(b) As in part (a), Ar = ar − bs and As = as + br. Let c1, c2 ∈ R. Then if

c1r + c2s = 0,    (39.1)

we have A(c1r + c2s) = 0 =⇒ c1Ar + c2As = 0 =⇒ c1(ar − bs) + c2(as + br) = 0. Hence, (c1a + c2b)r + (c2a − c1b)s = 0 =⇒ a(c1r + c2s) + b(c2r − c1s) = 0 =⇒ b(c2r − c1s) = 0, where we have used (39.1). Since b ≠ 0, we must have c2r − c1s = 0. Combining this with (39.1) yields c1 = c2 = 0. Therefore, it follows that r and s are linearly independent vectors.

40. λ1 = 2, v = r(−1, 1). λ2 = 5, v = s(1, 2).

41. λ1 = −2 (multiplicity two), v = r(1, 1, 1). λ2 = −5, v = s(20, 11, 14).

42. λ1 = 3 (multiplicity two), v = r(1, 0,−1) + s(0, 1,−1). λ2 = 6, v = t(1, 1, 1).

43. λ1 = 3 − √6, v = r(√6, −1 + √6, −5 + √6). λ2 = 3 + √6, v = s(√6, 1 + √6, 5 + √6). λ3 = −2, v = t(−1, 3, 0).

44. λ1 = 0, v = r(2, 2, 1). λ2 = 3i, v = s(−4− 3i, 5,−2 + 6i), λ3 = −3i, v = t(−4 + 3i, 5,−2− 6i).

45. λ1 = −1 (multiplicity four), v = a(−1, 0, 0, 1, 0) + b(−1, 0, 1, 0, 0) + c(−1, 0, 0, 0, 1) + d(−1, 1, 0, 0, 0).

Solutions to Section 5.7

True-False Review:

1. TRUE. This is the definition of a nondefective matrix.

2. TRUE. The eigenspace Eλ is equal to the null space of the n × n matrix A − λI, and this null space is a subspace of Rⁿ.

3. TRUE. The dimension of an eigenspace never exceeds the algebraic multiplicity of the corresponding eigenvalue.

4. TRUE. Eigenvectors corresponding to distinct eigenvalues are linearly independent. Therefore, if we choose one (nonzero) vector from each distinct eigenspace, the chosen vectors will form a linearly independent set.

5. TRUE. Since each eigenvalue of the matrix A occurs with algebraic multiplicity 1, we can simply choose one eigenvector from each eigenspace to obtain a basis of eigenvectors for A. Thus, A is nondefective.

6. FALSE. Many examples show that this statement is false, including the n × n identity matrix In for n ≥ 2. The matrix In is not defective, and yet λ = 1 occurs with algebraic multiplicity n.

7. TRUE. Eigenvectors corresponding to distinct eigenvalues are always linearly independent, as proved in the text in this section.

Problems:

1. det(A − λI) = 0 ⇐⇒ |1−λ  4; 2  3−λ| = 0 ⇐⇒ λ² − 4λ − 5 = 0 ⇐⇒ (λ − 5)(λ + 1) = 0 ⇐⇒ λ = 5 or λ = −1.

If λ1 = 5 then (A − λI)v = 0 assumes the form [−4 4; 2 −2](v1, v2) = (0, 0) =⇒ v1 − v2 = 0. The solution set of this system is {(r, r) : r ∈ R}, so the eigenspace corresponding to λ1 = 5 is E1 = {v ∈ R² : v = r(1, 1), r ∈ R}. A basis for E1 is {(1, 1)}, and dim[E1] = 1.

If λ2 = −1 then (A − λI)v = 0 assumes the form [2 4; 2 4](v1, v2) = (0, 0) =⇒ v1 + 2v2 = 0. The solution set of this system is {(−2s, s) : s ∈ R}, so the eigenspace corresponding to λ2 = −1 is E2 = {v ∈ R² : v = s(−2, 1), s ∈ R}. A basis for E2 is {(−2, 1)}, and dim[E2] = 1.

A complete set of eigenvectors for A is given by {(1, 1), (−2, 1)}, so A is nondefective.
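Since each eigenspace Eλ is the null space of A − λI, its dimension equals n − rank(A − λI). The following sketch (an added illustration, not part of the original solution) verifies the eigenspace dimensions for Problem 1:

```python
import numpy as np

# Cross-check of Problem 1: dim(E_lambda) = n - rank(A - lambda*I).
A = np.array([[1.0, 4.0],
              [2.0, 3.0]])
n = A.shape[0]
for lam in (5.0, -1.0):
    dim_E = n - np.linalg.matrix_rank(A - lam * np.eye(n))
    assert dim_E == 1
# one eigenvector per eigenvalue, two in total: A is nondefective
```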


2. det(A − λI) = 0 ⇐⇒ |3−λ  0; 0  3−λ| = 0 ⇐⇒ (3 − λ)² = 0 ⇐⇒ λ = 3 of multiplicity two.

If λ1 = 3 then (A − λI)v = 0 assumes the form [0 0; 0 0](v1, v2) = (0, 0). The solution set of this system is {(r, s) : r, s ∈ R}, so the eigenspace corresponding to λ1 = 3 is E1 = {v ∈ R² : v = r(1, 0) + s(0, 1), r, s ∈ R}. A basis for E1 is {(1, 0), (0, 1)}, and dim[E1] = 2. A is nondefective.

3. det(A − λI) = 0 ⇐⇒ |1−λ  2; −2  5−λ| = 0 ⇐⇒ (λ − 3)² = 0 ⇐⇒ λ = 3 of multiplicity two.

If λ1 = 3 then (A − λI)v = 0 assumes the form [−2 2; −2 2](v1, v2) = (0, 0) =⇒ v1 − v2 = 0. The solution set of this system is {(r, r) : r ∈ R}, so the eigenspace corresponding to λ1 = 3 is E1 = {v ∈ R² : v = r(1, 1), r ∈ R}. A basis for E1 is {(1, 1)}, and dim[E1] = 1. A is defective since it does not have a complete set of eigenvectors.

4. det(A − λI) = 0 ⇐⇒ |5−λ  5; −2  −1−λ| = 0 ⇐⇒ λ² − 4λ + 5 = 0 ⇐⇒ λ = 2 ± i.

If λ1 = 2 − i then (A − λI)v = 0 assumes the form [3+i  5; −2  −3+i](v1, v2) = (0, 0) =⇒ −2v1 + (−3 + i)v2 = 0. The solution set of this system is {((−3 + i)r, 2r) : r ∈ C}, so the eigenspace corresponding to λ1 = 2 − i is E1 = {v ∈ C² : v = r(−3 + i, 2), r ∈ C}. A basis for E1 is {(−3 + i, 2)}, and dim[E1] = 1.

If λ2 = 2 + i then from Theorem 5.6.8, the eigenvectors corresponding to λ2 = 2 + i are v = s(−3 − i, 2) where s ∈ C, so the eigenspace corresponding to λ2 = 2 + i is E2 = {v ∈ C² : v = s(−3 − i, 2), s ∈ C}. A basis for E2 is {(−3 − i, 2)}, and dim[E2] = 1. A is nondefective since it has a complete set of eigenvectors, namely {(−3 + i, 2), (−3 − i, 2)}.

5. det(A − λI) = 0 ⇐⇒ |3−λ  −4  −1; 0  −1−λ  −1; 0  −4  2−λ| = 0 ⇐⇒ (λ + 2)(λ − 3)² = 0 ⇐⇒ λ = −2 or λ = 3 of multiplicity two.

If λ1 = −2 then (A − λI)v = 0 assumes the form [5 −4 −1; 0 1 −1; 0 −4 4](v1, v2, v3) = (0, 0, 0). The solution set of this system is {(r, r, r) : r ∈ R}, so the eigenspace corresponding to λ1 = −2 is E1 = {v ∈ R³ : v = r(1, 1, 1), r ∈ R}. A basis for E1 is {(1, 1, 1)}, and dim[E1] = 1.

If λ2 = 3 then (A − λI)v = 0 assumes the form [0 −4 −1; 0 −4 −1; 0 −4 −1](v1, v2, v3) = (0, 0, 0) =⇒ 4v2 + v3 = 0. The solution set of this system is {(s, t, −4t) : s, t ∈ R}, so the eigenspace corresponding to λ2 = 3 is E2 = {v ∈ R³ : v = s(1, 0, 0) + t(0, 1, −4), s, t ∈ R}. A basis for E2 is {(1, 0, 0), (0, 1, −4)}, and dim[E2] = 2.

A complete set of eigenvectors for A is given by {(1, 1, 1), (1, 0, 0), (0, 1, −4)}, so A is nondefective.

6. det(A − λI) = 0 ⇐⇒ |4−λ  0  0; 0  2−λ  −3; 0  −2  1−λ| = 0 ⇐⇒ (λ + 1)(λ − 4)² = 0 ⇐⇒ λ = −1 or λ = 4 of multiplicity two.

If λ1 = −1 then (A − λI)v = 0 assumes the form [5 0 0; 0 3 −3; 0 −2 2](v1, v2, v3) = (0, 0, 0) =⇒ v1 = 0 and v2 − v3 = 0. The solution set of this system is {(0, r, r) : r ∈ R}, so the eigenspace corresponding to λ1 = −1 is E1 = {v ∈ R³ : v = r(0, 1, 1), r ∈ R}. A basis for E1 is {(0, 1, 1)}, and dim[E1] = 1.

If λ2 = 4 then (A − λI)v = 0 assumes the form [0 0 0; 0 −2 −3; 0 −2 −3](v1, v2, v3) = (0, 0, 0) =⇒ 2v2 + 3v3 = 0. The solution set of this system is {(s, 3t, −2t) : s, t ∈ R}, so the eigenspace corresponding to λ2 = 4 is E2 = {v ∈ R³ : v = s(1, 0, 0) + t(0, 3, −2), s, t ∈ R}. A basis for E2 is {(1, 0, 0), (0, 3, −2)}, and dim[E2] = 2.

A complete set of eigenvectors for A is given by {(1, 0, 0), (0, 3, −2), (0, 1, 1)}, so A is nondefective.

7. det(A − λI) = 0 ⇐⇒ |3−λ  1  0; −1  5−λ  0; 0  0  4−λ| = 0 ⇐⇒ (λ − 4)³ = 0 ⇐⇒ λ = 4 of multiplicity three.

If λ1 = 4 then (A − λI)v = 0 assumes the form [−1 1 0; −1 1 0; 0 0 0](v1, v2, v3) = (0, 0, 0) =⇒ v1 − v2 = 0, with v3 ∈ R free. The solution set of this system is {(r, r, s) : r, s ∈ R}, so the eigenspace corresponding to λ1 = 4 is E1 = {v ∈ R³ : v = r(1, 1, 0) + s(0, 0, 1), r, s ∈ R}. A basis for E1 is {(1, 1, 0), (0, 0, 1)}, and dim[E1] = 2. A is defective since it does not have a complete set of eigenvectors.

8. det(A − λI) = 0 ⇐⇒ |3−λ  0  0; 2  −λ  −4; 1  4  −λ| = 0 ⇐⇒ (λ − 3)(λ² + 16) = 0 ⇐⇒ λ = 3 or λ = ±4i.

If λ1 = 3 then (A − λI)v = 0 assumes the form [0 0 0; 2 −3 −4; 1 4 −3](v1, v2, v3) = (0, 0, 0) =⇒ 11v1 − 25v3 = 0 and 11v2 − 2v3 = 0. The solution set of this system is {(25r, 2r, 11r) : r ∈ C}, so the eigenspace corresponding to λ1 = 3 is E1 = {v ∈ C³ : v = r(25, 2, 11), r ∈ C}. A basis for E1 is {(25, 2, 11)}, and dim[E1] = 1.

If λ2 = −4i then (A − λI)v = 0 assumes the form [3+4i  0  0; 2  4i  −4; 1  4  4i](v1, v2, v3) = (0, 0, 0) =⇒ v1 = 0 and iv2 − v3 = 0. The solution set of this system is {(0, s, is) : s ∈ C}, so the eigenspace corresponding to λ2 = −4i is E2 = {v ∈ C³ : v = s(0, 1, i), s ∈ C}. A basis for E2 is {(0, 1, i)}, and dim[E2] = 1.

If λ3 = 4i then from Theorem 5.6.8, the eigenvectors corresponding to λ3 = 4i are v = t(0, 1, −i) where t ∈ C, so the eigenspace corresponding to λ3 = 4i is E3 = {v ∈ C³ : v = t(0, 1, −i), t ∈ C}. A basis for E3 is {(0, 1, −i)}, and dim[E3] = 1. A complete set of eigenvectors for A is given by {(25, 2, 11), (0, 1, i), (0, 1, −i)}, so A is nondefective.

9. det(A − λI) = 0 ⇐⇒ |4−λ  1  6; −4  −λ  −7; 0  0  −3−λ| = 0 ⇐⇒ (λ + 3)(λ − 2)² = 0 ⇐⇒ λ = −3 or λ = 2 of multiplicity two.

If λ1 = −3 then (A − λI)v = 0 assumes the form [7 1 6; −4 3 −7; 0 0 0](v1, v2, v3) = (0, 0, 0) =⇒ v1 + v3 = 0 and v2 − v3 = 0. The solution set of this system is {(−r, r, r) : r ∈ R}, so the eigenspace corresponding to λ1 = −3 is E1 = {v ∈ R³ : v = r(−1, 1, 1), r ∈ R}. A basis for E1 is {(−1, 1, 1)}, and dim[E1] = 1.

If λ2 = 2 then (A − λI)v = 0 assumes the form [2 1 6; −4 −2 −7; 0 0 5](v1, v2, v3) = (0, 0, 0) =⇒ 2v1 + v2 = 0 and v3 = 0. The solution set of this system is {(−s, 2s, 0) : s ∈ R}, so the eigenspace corresponding to λ2 = 2 is E2 = {v ∈ R³ : v = s(−1, 2, 0), s ∈ R}. A basis for E2 is {(−1, 2, 0)}, and dim[E2] = 1.

A is defective because it does not have a complete set of eigenvectors.

10. det(A − λI) = 0 ⇐⇒ |2−λ  0  0; 0  2−λ  0; 0  0  2−λ| = 0 ⇐⇒ (λ − 2)³ = 0 ⇐⇒ λ = 2 of multiplicity three.

If λ1 = 2 then (A − λI)v = 0 assumes the form [0 0 0; 0 0 0; 0 0 0](v1, v2, v3) = (0, 0, 0). The solution set of this system is {(r, s, t) : r, s, t ∈ R}, so the eigenspace corresponding to λ1 = 2 is E1 = {v ∈ R³ : v = r(1, 0, 0) + s(0, 1, 0) + t(0, 0, 1), r, s, t ∈ R}. A basis for E1 is {(1, 0, 0), (0, 1, 0), (0, 0, 1)}, and dim[E1] = 3. A is nondefective since it has a complete set of eigenvectors.

11. det(A − λI) = 0 ⇐⇒ |7−λ  −8  6; 8  −9−λ  6; 0  0  −1−λ| = 0 ⇐⇒ (λ + 1)³ = 0 ⇐⇒ λ = −1 of multiplicity three.

If λ1 = −1 then (A − λI)v = 0 assumes the form [8 −8 6; 8 −8 6; 0 0 0](v1, v2, v3) = (0, 0, 0) =⇒ 4v1 − 4v2 + 3v3 = 0. The solution set of this system is {(r − 3s, r, 4s) : r, s ∈ R}, so the eigenspace corresponding to λ1 = −1 is E1 = {v ∈ R³ : v = r(1, 1, 0) + s(−3, 0, 4), r, s ∈ R}. A basis for E1 is {(1, 1, 0), (−3, 0, 4)}, and dim[E1] = 2. A is defective since it does not have a complete set of eigenvectors.
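The defectiveness in Problem 11 can be detected numerically by comparing multiplicities: λ = −1 has algebraic multiplicity 3, but dim(E₋₁) = 3 − rank(A + I) = 2. A sketch (added illustration, not part of the original solution):

```python
import numpy as np

# Cross-check of Problem 11: lambda = -1 has algebraic multiplicity 3,
# but its eigenspace is only 2-dimensional, so A is defective.
A = np.array([[7.0, -8.0, 6.0],
              [8.0, -9.0, 6.0],
              [0.0, 0.0, -1.0]])
vals = np.linalg.eigvals(A)
assert np.allclose(vals, -1.0, atol=1e-5)            # all eigenvalues near -1
geo_mult = 3 - np.linalg.matrix_rank(A + np.eye(3))  # A + I has rank 1
assert geo_mult == 2                                 # 2 < 3: defective
```

The loose tolerance on the eigenvalue check reflects that repeated eigenvalues of a defective matrix are computed less accurately in floating point.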

12. det(A − λI) = 0 ⇐⇒ |2−λ  2  −1; 2  1−λ  −1; 2  3  −1−λ| = 0 ⇐⇒ (λ − 2)λ² = 0 ⇐⇒ λ = 2 or λ = 0 of multiplicity two.

If λ1 = 2 then (A − λI)v = 0 assumes the form [0 2 −1; 2 −1 −1; 2 3 −3](v1, v2, v3) = (0, 0, 0) =⇒ 4v1 − 3v3 = 0 and 2v2 − v3 = 0. The solution set of this system is {(3r, 2r, 4r) : r ∈ R}, so the eigenspace corresponding to λ1 = 2 is E1 = {v ∈ R³ : v = r(3, 2, 4), r ∈ R}. A basis for E1 is {(3, 2, 4)}, and dim[E1] = 1.

If λ2 = 0 then (A − λI)v = 0 assumes the form [2 2 −1; 2 1 −1; 2 3 −1](v1, v2, v3) = (0, 0, 0) =⇒ 2v1 − v3 = 0 and v2 = 0. The solution set of this system is {(s, 0, 2s) : s ∈ R}, so the eigenspace corresponding to λ2 = 0 is E2 = {v ∈ R³ : v = s(1, 0, 2), s ∈ R}. A basis for E2 is {(1, 0, 2)}, and dim[E2] = 1.

A is defective because it does not have a complete set of eigenvectors.


13. det(A − λI) = 0 ⇐⇒ |1−λ  −1  2; 1  −1−λ  2; 1  −1  2−λ| = 0 ⇐⇒ (λ − 2)λ² = 0 ⇐⇒ λ = 2 or λ = 0 of multiplicity two.

If λ1 = 2 then (A − λI)v = 0 assumes the form [−1 −1 2; 1 −3 2; 1 −1 0](v1, v2, v3) = (0, 0, 0) =⇒ v1 − v3 = 0 and v2 − v3 = 0. The solution set of this system is {(r, r, r) : r ∈ R}, so the eigenspace corresponding to λ1 = 2 is E1 = {v ∈ R³ : v = r(1, 1, 1), r ∈ R}. A basis for E1 is {(1, 1, 1)}, and dim[E1] = 1.

If λ2 = 0 then (A − λI)v = 0 assumes the form [1 −1 2; 1 −1 2; 1 −1 2](v1, v2, v3) = (0, 0, 0) =⇒ v1 − v2 + 2v3 = 0. The solution set of this system is {(s − 2t, s, t) : s, t ∈ R}, so the eigenspace corresponding to λ2 = 0 is E2 = {v ∈ R³ : v = s(1, 1, 0) + t(−2, 0, 1), s, t ∈ R}. A basis for E2 is {(1, 1, 0), (−2, 0, 1)}, and dim[E2] = 2.

A is nondefective because it has a complete set of eigenvectors.

14. det(A − λI) = 0 ⇐⇒ |2−λ 3 0; −1 −λ 1; −2 −1 4−λ| = 0 ⇐⇒ (λ − 2)^3 = 0 ⇐⇒ λ = 2 of multiplicity three.

If λ1 = 2 then (A − λI)v = 0 assumes the form [0 3 0; −1 −2 1; −2 −1 2](v1, v2, v3)^T = (0, 0, 0)^T =⇒ v1 − v3 = 0 and v2 = 0. The solution set of this system is {(r, 0, r) : r ∈ R}, so the eigenspace corresponding to λ1 = 2 is E1 = {v ∈ R^3 : v = r(1, 0, 1), r ∈ R}. A basis for E1 is {(1, 0, 1)}, and dim[E1] = 1. A is defective since it does not have a complete set of eigenvectors.

15. det(A − λI) = 0 ⇐⇒ |−λ −1 −1; −1 −λ −1; −1 −1 −λ| = 0 ⇐⇒ (λ + 2)(λ − 1)^2 = 0 ⇐⇒ λ = −2 or λ = 1 of multiplicity two.

If λ1 = −2 then (A − λI)v = 0 assumes the form [2 −1 −1; −1 2 −1; −1 −1 2](v1, v2, v3)^T = (0, 0, 0)^T =⇒ v1 − v3 = 0 and v2 − v3 = 0. The solution set of this system is {(r, r, r) : r ∈ R}, so the eigenspace corresponding to λ1 = −2 is E1 = {v ∈ R^3 : v = r(1, 1, 1), r ∈ R}. A basis for E1 is {(1, 1, 1)}, and dim[E1] = 1.

If λ2 = 1 then (A − λI)v = 0 assumes the form [−1 −1 −1; −1 −1 −1; −1 −1 −1](v1, v2, v3)^T = (0, 0, 0)^T =⇒ v1 + v2 + v3 = 0.

The solution set of this system is {(−s − t, s, t) : s, t ∈ R}, so the eigenspace corresponding to λ2 = 1 is E2 = {v ∈ R^3 : v = s(−1, 1, 0) + t(−1, 0, 1), s, t ∈ R}. A basis for E2 is {(−1, 1, 0), (−1, 0, 1)}, and dim[E2] = 2.

A is nondefective because it has a complete set of eigenvectors.

16. (λ − 4)(λ + 1) = 0 ⇐⇒ λ = 4 or λ = −1. Since A has two distinct eigenvalues, it has two linearly independent eigenvectors and is, therefore, nondefective.

17. (λ − 1)^2 = 0 ⇐⇒ λ = 1 of multiplicity two.


If λ1 = 1 then (A − λI)v = 0 assumes the form [5 5; −5 −5](v1, v2)^T = (0, 0)^T =⇒ v1 + v2 = 0. The solution set of this system is {(−r, r) : r ∈ R}, so the eigenspace corresponding to λ1 = 1 is E1 = {v ∈ R^2 : v = r(−1, 1), r ∈ R}. A basis for E1 is {(−1, 1)}, and dim[E1] = 1. A is defective since it does not have a complete set of eigenvectors.

18. λ^2 − 4λ + 13 = 0 ⇐⇒ λ = 2 ± 3i. Since A has two distinct eigenvalues, it has two linearly independent eigenvectors and is, therefore, nondefective.

19. (λ − 2)^2(λ + 1) = 0 ⇐⇒ λ = −1 or λ = 2 of multiplicity two. To determine whether A is nondefective, all we require is the dimension of the eigenspace corresponding to λ = 2.

If λ = 2 then (A − λI)v = 0 assumes the form [−1 −3 1; −1 −3 1; −1 −3 1](v1, v2, v3)^T = (0, 0, 0)^T =⇒ −v1 − 3v2 + v3 = 0.

The solution set of this system is {(−3s + t, s, t) : s, t ∈ R}, so the eigenspace corresponding to λ = 2 is E = {v ∈ R^3 : v = s(−3, 1, 0) + t(1, 0, 1), s, t ∈ R}. Since dim[E] = 2, A is nondefective.

20. (λ − 3)^3 = 0 ⇐⇒ λ = 3 of multiplicity three.

If λ = 3 then (A − λI)v = 0 assumes the form [−4 2 2; −4 2 2; −4 2 2](v1, v2, v3)^T = (0, 0, 0)^T =⇒ −2v1 + v2 + v3 = 0. The solution set of this system is {(r, 2r − s, s) : r, s ∈ R}, so the eigenspace corresponding to λ = 3 is E = {v ∈ R^3 : v = r(1, 2, 0) + s(0, −1, 1), r, s ∈ R}, which is two-dimensional. A is defective since it does not have a complete set of eigenvectors.

21. det(A − λI) = 0 ⇐⇒ |2−λ 1; 3 4−λ| = 0 ⇐⇒ (λ − 1)(λ − 5) = 0 ⇐⇒ λ = 1 or λ = 5.

If λ1 = 1 then (A − λI)v = 0 assumes the form [1 1; 3 3](v1, v2)^T = (0, 0)^T =⇒ v1 + v2 = 0. The eigenspace corresponding to λ1 = 1 is E1 = {v ∈ R^2 : v = r(−1, 1), r ∈ R}. A basis for E1 is {(−1, 1)}.

If λ2 = 5 then (A − λI)v = 0 assumes the form [−3 1; 3 −1](v1, v2)^T = (0, 0)^T =⇒ 3v1 − v2 = 0. The eigenspace corresponding to λ2 = 5 is E2 = {v ∈ R^2 : v = s(1, 3), s ∈ R}. A basis for E2 is {(1, 3)}.

Figure 0.0.73: Figure for Exercise 21 (sketch of the eigenspaces E1 and E2 in the (v1, v2)-plane)

22. det(A − λI) = 0 ⇐⇒ |2−λ 3; 0 2−λ| = 0 ⇐⇒ (2 − λ)^2 = 0 ⇐⇒ λ = 2 of multiplicity two.


If λ1 = 2 then (A − λI)v = 0 assumes the form [0 3; 0 0](v1, v2)^T = (0, 0)^T =⇒ v2 = 0 and v1 ∈ R. The eigenspace corresponding to λ1 = 2 is E1 = {v ∈ R^2 : v = r(1, 0), r ∈ R}. A basis for E1 is {(1, 0)}.

Figure 0.0.74: Figure for Exercise 22 (sketch of the eigenspace E1 in the (v1, v2)-plane)

23. det(A − λI) = 0 ⇐⇒ |5−λ 0; 0 5−λ| = 0 ⇐⇒ (5 − λ)^2 = 0 ⇐⇒ λ = 5 of multiplicity two.

If λ1 = 5 then (A − λI)v = 0 assumes the form [0 0; 0 0](v1, v2)^T = (0, 0)^T =⇒ v1, v2 ∈ R. The eigenspace corresponding to λ1 = 5 is E1 = {v ∈ R^2 : v = r(1, 0) + s(0, 1), r, s ∈ R}. A basis for E1 is {(1, 0), (0, 1)}.

Figure 0.0.75: Figure for Exercise 23 (E1 is the whole of R^2)

24. det(A − λI) = 0 ⇐⇒ |3−λ 1 −1; 1 3−λ −1; −1 −1 3−λ| = 0 ⇐⇒ (λ − 5)(λ − 2)^2 = 0 ⇐⇒ λ = 5 or λ = 2 of multiplicity two.

If λ1 = 5 then (A − λI)v = 0 assumes the form [−2 1 −1; 1 −2 −1; −1 −1 −2](v1, v2, v3)^T = (0, 0, 0)^T =⇒ v1 + v3 = 0 and v2 + v3 = 0. The eigenspace corresponding to λ1 = 5 is E1 = {v ∈ R^3 : v = r(1, 1, −1), r ∈ R}. A basis for E1 is {(1, 1, −1)}.

If λ2 = 2 then (A − λI)v = 0 assumes the form [1 1 −1; 1 1 −1; −1 −1 1](v1, v2, v3)^T = (0, 0, 0)^T =⇒ v1 + v2 − v3 = 0.

The eigenspace corresponding to λ2 = 2 is E2 = {v ∈ R^3 : v = s(−1, 1, 0) + t(1, 0, 1), s, t ∈ R}. A basis for E2 is {(−1, 1, 0), (1, 0, 1)}.


Figure 0.0.76: Figure for Exercise 24 (sketch of E1, spanned by (1, 1, −1), and E2, spanned by (−1, 1, 0) and (1, 0, 1))

25. det(A − λI) = 0 ⇐⇒ |−3−λ 1 0; −1 −1−λ 2; 0 0 −2−λ| = 0 ⇐⇒ (λ + 2)^3 = 0 ⇐⇒ λ = −2 of multiplicity three.

If λ1 = −2 then (A − λI)v = 0 assumes the form [−1 1 0; −1 1 2; 0 0 0](v1, v2, v3)^T = (0, 0, 0)^T =⇒ v1 − v2 = 0 and v3 = 0. The eigenspace corresponding to λ1 = −2 is E1 = {v ∈ R^3 : v = r(1, 1, 0), r ∈ R}. A basis for E1 is {(1, 1, 0)}.

Figure 0.0.77: Figure for Exercise 25 (sketch of E1, spanned by (1, 1, 0))

26. (a) If λ1 = 1 then (A − λI)v = 0 assumes the form [1 −2 3; 1 −2 3; 1 −2 3](v1, v2, v3)^T = (0, 0, 0)^T =⇒ v1 − 2v2 + 3v3 = 0. The eigenspace corresponding to λ1 = 1 is E1 = {v ∈ R^3 : v = r(2, 1, 0) + s(−3, 0, 1), r, s ∈ R}. A basis for E1 is {(2, 1, 0), (−3, 0, 1)}.

Now apply the Gram-Schmidt process with v1 = (−3, 0, 1) and v2 = (2, 1, 0). Let u1 = v1, so that ⟨v2, u1⟩ = ⟨(2, 1, 0), (−3, 0, 1)⟩ = 2(−3) + 1 · 0 + 0 · 1 = −6 and ||u1||^2 = (−3)^2 + 0^2 + 1^2 = 10. Then

u2 = v2 − (⟨v2, u1⟩/||u1||^2)u1 = (2, 1, 0) + (6/10)(−3, 0, 1) = (1/5)(1, 5, 3).

Thus, {(−3, 0, 1), (1, 5, 3)} is an orthogonal basis for E1.

(b) If λ2 = 3 then (A − λI)v = 0 assumes the form [−1 −2 3; 1 −4 3; 1 −2 1](v1, v2, v3)^T = (0, 0, 0)^T =⇒ v1 − v2 = 0 and v2 − v3 = 0. The eigenspace corresponding to λ2 = 3 is E2 = {v ∈ R^3 : v = r(1, 1, 1), r ∈ R}. A basis for E2 is {(1, 1, 1)}.

To determine the orthogonality of the vectors, consider the following inner products: ⟨(−3, 0, 1), (1, 1, 1)⟩ = −3 + 0 + 1 = −2 ≠ 0 and ⟨(1, 5, 3), (1, 1, 1)⟩ = 1 + 5 + 3 = 9 ≠ 0. Thus, the vectors in E1 are not orthogonal to the vectors in E2.

27. (a) If λ1 = 2 then (A − λI)v = 0 assumes the form [−1 −1 1; −1 −1 1; 1 1 −1](v1, v2, v3)^T = (0, 0, 0)^T =⇒ v1 + v2 − v3 = 0. The eigenspace corresponding to λ1 = 2 is E1 = {v ∈ R^3 : v = r(−1, 1, 0) + s(1, 0, 1), r, s ∈ R}. A basis for E1 is {(−1, 1, 0), (1, 0, 1)}.

Now apply the Gram-Schmidt process with v1 = (1, 0, 1) and v2 = (−1, 1, 0). Let u1 = v1, so that ⟨v2, u1⟩ = ⟨(−1, 1, 0), (1, 0, 1)⟩ = −1 · 1 + 1 · 0 + 0 · 1 = −1 and ||u1||^2 = 1^2 + 0^2 + 1^2 = 2. Then

u2 = v2 − (⟨v2, u1⟩/||u1||^2)u1 = (−1, 1, 0) + (1/2)(1, 0, 1) = (1/2)(−1, 2, 1).

Thus, {(1, 0, 1), (−1, 2, 1)} is an orthogonal basis for E1.

(b) If λ2 = −1 then (A − λI)v = 0 assumes the form [2 −1 1; −1 2 1; 1 1 2](v1, v2, v3)^T = (0, 0, 0)^T. The eigenspace corresponding to λ2 = −1 is E2 = {v ∈ R^3 : v = r(−1, −1, 1), r ∈ R}. A basis for E2 is {(−1, −1, 1)}.

To determine the orthogonality of the vectors, consider the following inner products: ⟨(1, 0, 1), (−1, −1, 1)⟩ = −1 + 0 + 1 = 0 and ⟨(−1, 2, 1), (−1, −1, 1)⟩ = 1 − 2 + 1 = 0. Thus, the vectors in E1 are orthogonal to the vectors in E2.
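The steps of Problem 27 can be sketched numerically. The matrix A itself is not displayed in this excerpt; it is reconstructed here from the system (A − 2I)v = 0 shown above, so treat that reconstruction as an assumption:

```python
import numpy as np

# A reconstructed from A - 2I = [[-1,-1,1],[-1,-1,1],[1,1,-1]] (assumption).
A = np.array([[1.0, -1.0, 1.0],
              [-1.0, 1.0, 1.0],
              [1.0, 1.0, 1.0]])

# Gram-Schmidt on the eigenspace basis for lambda = 2, as in part (a).
v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([-1.0, 1.0, 0.0])
u1 = v1
u2 = v2 - (v2 @ u1) / (u1 @ u1) * u1   # = (1/2)(-1, 2, 1)

# u1, u2 are orthogonal and both remain eigenvectors for lambda = 2.
assert abs(u1 @ u2) < 1e-12
assert np.allclose(A @ u2, 2 * u2)

# Because A is symmetric, E1 is orthogonal to E2 (part (b)).
w = np.array([-1.0, -1.0, 1.0])        # eigenvector for lambda = -1
print(u1 @ w, u2 @ w)                   # both 0
```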

28. We are given that the eigenvalues of A are λ1 = 0 (multiplicity two) and λ2 = a + b + c. There are two cases to consider: λ1 = λ2 or λ1 ≠ λ2.

If λ1 = λ2 then λ1 = 0 is of multiplicity three, and (A − λI)v = 0 assumes the form [a b c; a b c; a b c](v1, v2, v3)^T = (0, 0, 0)^T, or equivalently,

av1 + bv2 + cv3 = 0. (28.1)

The only way to have three linearly independent eigenvectors for A is if a = b = c = 0.

If λ1 ≠ λ2 then λ1 = 0 is of multiplicity two, and λ2 = a + b + c ≠ 0, so the eigenvalues are distinct. By Theorem 5.7.11, E1 must have dimension two for A to possess a complete set of eigenvectors. The system for determining the eigenvectors corresponding to λ1 = 0 is once more given by (28.1). Since we can choose two variables freely in (28.1), it follows that there are indeed two corresponding linearly independent eigenvectors. Consequently, A is nondefective in this case.

29. (a) Setting λ = 0 in (5.7.4), we have p(0) = det(A − 0I) = det(A), and setting λ = 0 in (5.7.5), we have p(0) = bn. Thus, bn = det(A).

The value of det(A − λI) is a sum of products of its elements, one taken from each row and each column. Expanding det(A − λI) yields equation (5.7.5). The expression involving λ^n in p(λ) comes from the product of the diagonal elements, ∏_{i=1}^{n} (aii − λ). All the remaining products in the determinant have degree not higher than n − 2, since, if one of the factors of the product is aij with i ≠ j, then this product can contain neither the factor aii − λ nor the factor ajj − λ. Hence,

p(λ) = ∏_{i=1}^{n} (aii − λ) + (terms of degree not higher than n − 2),

so

p(λ) = (−1)^n λ^n + (−1)^(n−1)(a11 + a22 + · · · + ann)λ^(n−1) + · · · + bn.

Equating like coefficients with (5.7.5), it follows that b1 = (−1)^(n−1)(a11 + a22 + · · · + ann).

(b) Letting λ = 0, we have from (5.7.6) that p(0) = ∏_{i=1}^{n} (λi − 0) = ∏_{i=1}^{n} λi, but from (5.7.5), p(0) = bn; therefore bn = ∏_{i=1}^{n} λi. Moreover, from (5.7.6),

p(λ) = ∏_{i=1}^{n} (λi − λ) = (−1)^n ∏_{i=1}^{n} (λ − λi) = (−1)^n [λ^n − (λ1 + λ2 + · · · + λn)λ^(n−1) + · · · + (−1)^n bn].

Equating like coefficients with (5.7.5), it follows that b1 = (−1)^(n−1)(λ1 + λ2 + · · · + λn).

(c) From (a), bn = det(A), and from (b), bn = λ1λ2 · · · λn, so det(A) is the product of the eigenvalues of A.

From (a), b1 = (−1)^(n−1)(a11 + a22 + · · · + ann), and from (b), b1 = (−1)^(n−1)(λ1 + λ2 + · · · + λn); thus a11 + a22 + · · · + ann = λ1 + λ2 + · · · + λn. That is, tr(A) is the sum of the eigenvalues of A.
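The identities in part (c) are easy to spot-check numerically; the matrix below is a hypothetical sample, not one of the matrices from this problem set:

```python
import numpy as np

# det(A) equals the product of the eigenvalues; tr(A) equals their sum.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])

lam = np.linalg.eigvals(A)
assert np.isclose(np.prod(lam).real, np.linalg.det(A))
assert np.isclose(np.sum(lam).real, np.trace(A))
```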

30.

(a) We have det(A) = 19 and tr(A) = 3, so the product of the eigenvalues of A is 19, and the sum of the eigenvalues of A is 3.

(b) We have det(A) = −69 and tr(A) = 1, so the product of the eigenvalues of A is −69, and the sum of the eigenvalues of A is 1.

(c) We have det(A) = −607 and tr(A) = 24, so the product of the eigenvalues of A is −607, and the sum of the eigenvalues of A is 24.

31. Note that Ei ≠ ∅, since 0 belongs to Ei.

Closure under Addition: Let v1, v2 ∈ Ei. Then A(v1 + v2) = Av1 + Av2 = λiv1 + λiv2 = λi(v1 + v2) =⇒ v1 + v2 ∈ Ei.

Closure under Scalar Multiplication: Let c ∈ C and v1 ∈ Ei. Then A(cv1) = c(Av1) = c(λiv1) = λi(cv1) =⇒ cv1 ∈ Ei.

Thus, by Theorem 4.3.2, Ei is a subspace of C^n.

32. The condition

c1v1 + c2v2 = 0 (32.1)

implies

A(c1v1) + A(c2v2) = 0 =⇒ c1Av1 + c2Av2 = 0 =⇒ c1(λ1v1) + c2(λ2v2) = 0. (32.2)

Substituting c2v2 = −c1v1 from (32.1) into (32.2) yields (λ1 − λ2)c1v1 = 0. Since λ2 ≠ λ1, we must have c1v1 = 0, but v1 ≠ 0, so c1 = 0. Substituting into (32.1) yields c2 = 0 also. Consequently, v1 and v2 are linearly independent.


33. Consider

c1v1 + c2v2 + c3v3 = 0. (33.1)

If c1 ≠ 0, then the preceding equation can be written as

w1 + w2 = 0,

where w1 = c1v1 and w2 = c2v2 + c3v3. But this would imply that {w1, w2} is linearly dependent, which would contradict Theorem 5.7.5, since w1 and w2 are eigenvectors corresponding to different eigenvalues. Consequently, we must have c1 = 0. But then (33.1) implies that c2 = c3 = 0, since {v1, v2} is a linearly independent set by assumption. Hence {v1, v2, v3} is linearly independent.

34. λ1 = 1 (multiplicity 3), basis: {(0, 1, 1)}.

35. λ1 = 0 (multiplicity 2), basis: {(−1, 1, 0), (−1, 0, 1)}. λ2 = 3, basis: {(1, 1, 1)}.

36. λ1 = 2, basis: {(1, −2√2, 1)}. λ2 = 0, basis: {(1, 0, −1)}. λ3 = 7, basis: {(√2, 1, √2)}.

37. λ1 = −2, basis: {(2, 1,−4)}. λ2 = 3 (multiplicity 2), basis: {(0, 2, 1), (3, 11, 0)}.

38. λ1 = 0 (multiplicity 2), basis: {(0, 1, 0, −1), (1, 0, −1, 0)}. λ2 = 6, basis: {(1, 1, 1, 1)}. λ3 = −2, basis: {(1, −1, 1, −1)}.

39. A has eigenvalues:

λ1 = (3/2)a + (1/2)√(a^2 + 8b^2), λ2 = (3/2)a − (1/2)√(a^2 + 8b^2), λ3 = 0.

Provided a ≠ ±b, these eigenvalues are distinct, and therefore the matrix is nondefective.

If a = b ≠ 0, then the eigenvalue λ = 0 has multiplicity two. A basis for the corresponding eigenspace is {(−1, 0, 1), (−1, 1, 0)}. Since this is two-dimensional, the matrix is nondefective in this case.

If a = −b ≠ 0, then the eigenvalue λ = 0 once more has multiplicity two. A basis for the corresponding eigenspace is {(0, 1, 1), (1, 1, 0)}; therefore the matrix is nondefective in this case also.

If a = b = 0, then A = 03, so that λ = 0 (multiplicity three), and the corresponding eigenspace is all of R^3. Hence A is nondefective.

40. If a = b = 0, then A = 03, which is nondefective. We now assume that at least one of a and b is nonzero. A has eigenvalues λ1 = b, λ2 = a − b, and λ3 = 3a + b.

Provided a ≠ 0, a ≠ 2b, and a ≠ −b, the eigenvalues are distinct, and therefore A is nondefective.

If a = 0, then the eigenvalue λ = b has multiplicity two. In this case, a basis for the corresponding eigenspace is {(1, 0, 1), (0, 1, 0)}, so that A is nondefective.

If a = 2b, then the eigenvalue λ = b has multiplicity two. In this case, a basis for the corresponding eigenspace is {(−2, 1, 0), (−1, 0, 1)}, so that A is nondefective.

If a = −b, then the eigenvalue λ = −2b has multiplicity two. In this case, a basis for the corresponding eigenspace is {(0, 1, 1), (1, 0, −1)}, so that A is nondefective.

Solutions to Section 5.8

True-False Review:


1. TRUE. The terms "diagonalizable" and "nondefective" are synonymous. The diagonalizability of a matrix A hinges on the ability to form an invertible matrix S whose columns are a full set of linearly independent eigenvectors of the matrix. This, in turn, requires the original matrix to be nondefective.

2. TRUE. If we assume that A is diagonalizable, then there exists an invertible matrix S and a diagonal matrix D such that S^(−1)AS = D. Since A is invertible, we can take the inverse of each side of this equation to obtain

D^(−1) = (S^(−1)AS)^(−1) = S^(−1)A^(−1)S,

and since D^(−1) is still a diagonal matrix, this equation shows that A^(−1) is diagonalizable.

3. FALSE. For instance, the matrices A = I2 and B = [1 1; 0 1] both have eigenvalue λ = 1 (with multiplicity 2). However, A and B are not similar. [Reason: If A and B were similar, then S^(−1)AS = B for some invertible matrix S, but since A = I2, this would imply that B = I2, contrary to our choice of B above.]

4. FALSE. An n × n matrix is diagonalizable if and only if it has n linearly independent eigenvectors. Besides, every matrix actually has infinitely many eigenvectors, obtained by taking scalar multiples of a single eigenvector v.

5. TRUE. Assume A is an n × n matrix such that p(λ) = det(A − λI) has no repeated roots. This implies that A has n distinct eigenvalues. Corresponding to each eigenvalue, we can select an eigenvector. Since eigenvectors corresponding to distinct eigenvalues are linearly independent, this yields n linearly independent eigenvectors for A. Therefore, A is nondefective and, hence, diagonalizable.

6. TRUE. Assuming that A is diagonalizable, there exists an invertible matrix S and a diagonal matrix D such that S^(−1)AS = D. Therefore,

D^2 = (S^(−1)AS)^2 = (S^(−1)AS)(S^(−1)AS) = S^(−1)ASS^(−1)AS = S^(−1)A^2S.

Since D^2 is still a diagonal matrix, this equation shows that A^2 is diagonalizable.

7. TRUE. Since In^(−1)AIn = A, A is similar to itself.

8. TRUE. The sum of the dimensions of the eigenspaces of such a matrix is even, and therefore not equal to n. This means we cannot obtain n linearly independent eigenvectors for A, and therefore A is defective (and not diagonalizable).

Problems:

1. det(A − λI) = 0 ⇐⇒ |−1−λ −2; −2 2−λ| = 0 ⇐⇒ λ^2 − λ − 6 = 0 ⇐⇒ (λ − 3)(λ + 2) = 0 ⇐⇒ λ = 3 or λ = −2. A is diagonalizable because it has two distinct eigenvalues.

If λ1 = 3 then (A − λI)v = 0 assumes the form [−4 −2; −2 −1](v1, v2)^T = (0, 0)^T =⇒ 2v1 + v2 = 0. The solution set of this system is {(−r, 2r) : r ∈ R}, so the eigenvectors corresponding to λ1 = 3 are v1 = r(−1, 2), where r ∈ R.

If λ2 = −2 then (A − λI)v = 0 assumes the form [1 −2; −2 4](v1, v2)^T = (0, 0)^T =⇒ v1 − 2v2 = 0. The solution set of this system is {(2s, s) : s ∈ R}, so the eigenvectors corresponding to λ2 = −2 are v2 = s(2, 1), where s ∈ R.

Thus, the matrix S = [−1 2; 2 1] satisfies S^(−1)AS = diag(3, −2).


2. det(A − λI) = 0 ⇐⇒ |−7−λ 4; −4 1−λ| = 0 ⇐⇒ λ^2 + 6λ + 9 = 0 ⇐⇒ (λ + 3)^2 = 0 ⇐⇒ λ = −3 of multiplicity two.

If λ = −3 then (A − λI)v = 0 assumes the form [−4 4; −4 4](v1, v2)^T = (0, 0)^T =⇒ v1 − v2 = 0. The solution set of this system is {(r, r) : r ∈ R}, so the eigenvectors corresponding to λ = −3 are v = r(1, 1), where r ∈ R.

A has only one linearly independent eigenvector, so by Theorem 5.8.4, A is not diagonalizable.

3. det(A − λI) = 0 ⇐⇒ |1−λ −8; 2 −7−λ| = 0 ⇐⇒ λ^2 + 6λ + 9 = 0 ⇐⇒ (λ + 3)^2 = 0 ⇐⇒ λ = −3 of multiplicity two.

If λ = −3 then (A − λI)v = 0 assumes the form [4 −8; 2 −4](v1, v2)^T = (0, 0)^T =⇒ v1 − 2v2 = 0. The solution set of this system is {(2r, r) : r ∈ R}, so the eigenvectors corresponding to λ = −3 are v = r(2, 1), where r ∈ R.

A has only one linearly independent eigenvector, so by Theorem 5.8.4, A is not diagonalizable.

4. det(A − λI) = 0 ⇐⇒ |−λ 4; −4 −λ| = 0 ⇐⇒ λ^2 + 16 = 0 ⇐⇒ λ = ±4i. A is diagonalizable because it has two distinct eigenvalues.

If λ = −4i then (A − λI)v = 0 assumes the form [4i 4; −4 4i](v1, v2)^T = (0, 0)^T =⇒ v1 − iv2 = 0. The solution set of this system is {(ir, r) : r ∈ C}, so the eigenvectors corresponding to λ = −4i are v = r(i, 1), where r ∈ C. Since the entries of A are real, it follows from Theorem 5.6.8 that v2 = (−i, 1) is an eigenvector corresponding to λ = 4i.

Thus, the matrix S = [i −i; 1 1] satisfies S^(−1)AS = diag(−4i, 4i).
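A quick NumPy sketch confirms that this complex S does diagonalize A:

```python
import numpy as np

# Matrix and eigenvector matrix from Problem 4.
A = np.array([[0.0, 4.0], [-4.0, 0.0]])
S = np.array([[1.0j, -1.0j], [1.0, 1.0]])

D = np.linalg.inv(S) @ A @ S
assert np.allclose(D, np.diag([-4.0j, 4.0j]))
```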

5. det(A − λI) = 0 ⇐⇒ |1−λ 0 0; 0 3−λ 7; 1 1 −3−λ| = 0 ⇐⇒ (1 − λ)(λ + 4)(λ − 4) = 0 ⇐⇒ λ = 1, λ = −4, or λ = 4.

If λ = 1 then (A − λI)v = 0 assumes the form [0 0 0; 0 2 7; 1 1 −4](v1, v2, v3)^T = (0, 0, 0)^T =⇒ 2v1 − 15v3 = 0 and 2v2 + 7v3 = 0. If we let v3 = 2r, where r ∈ R, then the solution set of this system is {(15r, −7r, 2r) : r ∈ R}, so the eigenvectors corresponding to λ = 1 are v1 = r(15, −7, 2), where r ∈ R.

If λ = −4 then (A − λI)v = 0 assumes the form [5 0 0; 0 7 7; 1 1 1](v1, v2, v3)^T = (0, 0, 0)^T =⇒ v1 = 0 and v2 + v3 = 0. If we let v2 = s ∈ R, then the solution set of this system is {(0, s, −s) : s ∈ R}, so the eigenvectors corresponding to λ = −4 are v2 = s(0, 1, −1), where s ∈ R.

If λ = 4 then (A − λI)v = 0 assumes the form [−3 0 0; 0 −1 7; 1 1 −7](v1, v2, v3)^T = (0, 0, 0)^T =⇒ v1 = 0 and v2 − 7v3 = 0. If we let v3 = t ∈ R, then the solution set of this system is {(0, 7t, t) : t ∈ R}, so the eigenvectors corresponding to λ = 4 are v3 = t(0, 7, 1), where t ∈ R.


Thus, the matrix S = [15 0 0; −7 1 7; 2 −1 1] satisfies S^(−1)AS = diag(1, −4, 4).

6. det(A − λI) = 0 ⇐⇒ |1−λ −2 0; 2 −3−λ 0; 2 −2 −2−λ| = 0 ⇐⇒ (λ + 1)^3 = 0 ⇐⇒ λ = −1 of multiplicity three.

If λ = −1 then (A − λI)v = 0 assumes the form [2 −2 0; 2 −2 0; 2 −2 0](v1, v2, v3)^T = (0, 0, 0)^T =⇒ v1 − v2 = 0 and v3 ∈ R. If we let v2 = r ∈ R and v3 = s ∈ R, then the solution set of this system is {(r, r, s) : r, s ∈ R}, so the eigenvectors corresponding to λ = −1 are v1 = r(1, 1, 0) and v2 = s(0, 0, 1), where r, s ∈ R.

A has only two linearly independent eigenvectors, so by Theorem 5.8.4, A is not diagonalizable.

7. det(A − λI) = 0 ⇐⇒ |−λ −2 −2; −2 −λ −2; −2 −2 −λ| = 0 ⇐⇒ (λ − 2)^2(λ + 4) = 0 ⇐⇒ λ = −4 or λ = 2 of multiplicity two.

If λ = −4 then (A − λI)v = 0 assumes the form [4 −2 −2; −2 4 −2; −2 −2 4](v1, v2, v3)^T = (0, 0, 0)^T =⇒ v1 − v3 = 0 and v2 − v3 = 0. If we let v3 = r ∈ R, then the solution set of this system is {(r, r, r) : r ∈ R}, so the eigenvectors corresponding to λ = −4 are v1 = r(1, 1, 1), where r ∈ R.

If λ = 2 then (A − λI)v = 0 assumes the form [−2 −2 −2; −2 −2 −2; −2 −2 −2](v1, v2, v3)^T = (0, 0, 0)^T =⇒ v1 + v2 + v3 = 0. If we let v2 = s ∈ R and v3 = t ∈ R, then the solution set of this system is {(−s − t, s, t) : s, t ∈ R}, so two linearly independent eigenvectors corresponding to λ = 2 are v2 = s(−1, 1, 0) and v3 = t(−1, 0, 1).

Thus, the matrix S = [1 −1 −1; 1 0 1; 1 1 0] satisfies S^(−1)AS = diag(−4, 2, 2).

8. det(A − λI) = 0 ⇐⇒ |−2−λ 1 4; −2 1−λ 4; −2 1 4−λ| = 0 ⇐⇒ λ^2(λ − 3) = 0 ⇐⇒ λ = 3 or λ = 0 of multiplicity two.

If λ = 3 then (A − λI)v = 0 assumes the form [−5 1 4; −2 −2 4; −2 1 1](v1, v2, v3)^T = (0, 0, 0)^T =⇒ v1 − v3 = 0 and v2 − v3 = 0. If we let v3 = r ∈ R, then the solution set of this system is {(r, r, r) : r ∈ R}, so the eigenvectors corresponding to λ = 3 are v1 = r(1, 1, 1), where r ∈ R.

If λ = 0 then (A − λI)v = 0 assumes the form [−2 1 4; −2 1 4; −2 1 4](v1, v2, v3)^T = (0, 0, 0)^T =⇒ −2v1 + v2 + 4v3 = 0. If we let v1 = s ∈ R and v3 = t ∈ R, then the solution set of this system is {(s, 2s − 4t, t) : s, t ∈ R}, so two linearly independent eigenvectors corresponding to λ = 0 are v2 = s(1, 2, 0) and v3 = t(0, −4, 1).

Thus, the matrix S = [1 1 0; 1 2 −4; 1 0 1] satisfies S^(−1)AS = diag(3, 0, 0).


9. det(A − λI) = 0 ⇐⇒ |2−λ 0 0; 0 1−λ 0; 2 −1 1−λ| = 0 ⇐⇒ (λ − 2)(λ − 1)^2 = 0 ⇐⇒ λ = 2 or λ = 1 of multiplicity two.

If λ = 1 then (A − λI)v = 0 assumes the form [1 0 0; 0 0 0; 2 −1 0](v1, v2, v3)^T = (0, 0, 0)^T =⇒ v1 = v2 = 0 and v3 ∈ R. If we let v3 = r ∈ R, then the solution set of this system is {(0, 0, r) : r ∈ R}, so there is only one corresponding linearly independent eigenvector. Hence, by Theorem 5.8.4, A is not diagonalizable.

10. det(A − λI) = 0 ⇐⇒ |4−λ 0 0; 3 −1−λ −1; 0 2 1−λ| = 0 ⇐⇒ (λ^2 + 1)(λ − 4) = 0 ⇐⇒ λ = 4 or λ = ±i.

If λ = 4 then (A − λI)v = 0 assumes the form [0 0 0; 3 −5 −1; 0 2 −3](v1, v2, v3)^T = (0, 0, 0)^T =⇒ 6v1 − 17v3 = 0 and 2v2 − 3v3 = 0. If we let v3 = 6r, where r ∈ C, then the solution set of this system is {(17r, 9r, 6r) : r ∈ C}, so the eigenvectors corresponding to λ = 4 are v1 = r(17, 9, 6), where r ∈ C.

If λ = i then (A − λI)v = 0 assumes the form [4−i 0 0; 3 −1−i −1; 0 2 1−i](v1, v2, v3)^T = (0, 0, 0)^T =⇒ v1 = 0 and 2v2 + (1 − i)v3 = 0. If we let v3 = −2s, where s ∈ C, then the solution set of this system is {(0, (1 − i)s, −2s) : s ∈ C}, so the eigenvectors corresponding to λ = i are v2 = s(0, 1 − i, −2), where s ∈ C. Since the entries of A are real, v3 = t(0, 1 + i, −2), where t ∈ C, are the eigenvectors corresponding to λ = −i by Theorem 5.6.8.

Thus, the matrix S = [17 0 0; 9 1−i 1+i; 6 −2 −2] satisfies S^(−1)AS = diag(4, i, −i).
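A NumPy sketch double-checks Problem 10 (note the third components −2 in the complex eigenvectors):

```python
import numpy as np

# Matrix and eigenvector matrix from Problem 10.
A = np.array([[4.0, 0.0, 0.0],
              [3.0, -1.0, -1.0],
              [0.0, 2.0, 1.0]])
S = np.array([[17.0, 0.0, 0.0],
              [9.0, 1.0 - 1.0j, 1.0 + 1.0j],
              [6.0, -2.0, -2.0]])

D = np.linalg.inv(S) @ A @ S
assert np.allclose(D, np.diag([4.0, 1.0j, -1.0j]))
```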

11. det(A − λI) = 0 ⇐⇒ |−λ 2 −1; −2 −λ −2; 1 2 −λ| = 0 ⇐⇒ λ(λ^2 + 9) = 0 ⇐⇒ λ = 0 or λ = ±3i.

If λ = 0 then (A − λI)v = 0 assumes the form [0 2 −1; −2 0 −2; 1 2 0](v1, v2, v3)^T = (0, 0, 0)^T =⇒ v1 + v3 = 0 and 2v2 − v3 = 0. If we let v3 = 2r, where r ∈ C, then the solution set of this system is {(−2r, r, 2r) : r ∈ C}, so the eigenvectors corresponding to λ = 0 are v1 = r(−2, 1, 2), where r ∈ C.

If λ = −3i then (A − λI)v = 0 assumes the form [3i 2 −1; −2 3i −2; 1 2 3i](v1, v2, v3)^T = (0, 0, 0)^T =⇒ 5v1 + (−4 + 3i)v3 = 0 and 5v2 + (2 + 6i)v3 = 0. If we let v3 = 5s, where s ∈ C, then the solution set of this system is {((4 − 3i)s, (−2 − 6i)s, 5s) : s ∈ C}, so the eigenvectors corresponding to λ = −3i are v2 = s(4 − 3i, −2 − 6i, 5), where s ∈ C. Since the entries of A are real, v3 = t(4 + 3i, −2 + 6i, 5), where t ∈ C, are the eigenvectors corresponding to λ = 3i by Theorem 5.6.8.

Thus, the matrix S = [−2 4+3i 4−3i; 1 −2+6i −2−6i; 2 5 5] satisfies S^(−1)AS = diag(0, 3i, −3i).

12. det(A − λI) = 0 ⇐⇒ |1−λ −2 0; −2 1−λ 0; 0 0 3−λ| = 0 ⇐⇒ (λ − 3)^2(λ + 1) = 0 ⇐⇒ λ = −1 or λ = 3 of


multiplicity two.

If λ = −1 then (A − λI)v = 0 assumes the form [2 −2 0; −2 2 0; 0 0 4](v1, v2, v3)^T = (0, 0, 0)^T =⇒ v1 − v2 = 0 and v3 = 0. If we let v1 = r ∈ R, then the solution set of this system is {(r, r, 0) : r ∈ R}, so the eigenvectors corresponding to λ = −1 are v1 = r(1, 1, 0), where r ∈ R.

If λ = 3 then (A − λI)v = 0 assumes the form [−2 −2 0; −2 −2 0; 0 0 0](v1, v2, v3)^T = (0, 0, 0)^T =⇒ v1 + v2 = 0 and v3 ∈ R. If we let v2 = s ∈ R and v3 = t ∈ R, then the solution set of this system is {(−s, s, t) : s, t ∈ R}, so the eigenvectors corresponding to λ = 3 are v2 = s(−1, 1, 0) and v3 = t(0, 0, 1).

Thus, the matrix S = [1 −1 0; 1 1 0; 0 0 1] satisfies S^(−1)AS = diag(−1, 3, 3).

13. λ1 = 2 (multiplicity 2), basis for eigenspace: {(−3, 1, 0), (3, 0, 1)}. λ2 = 1, basis for eigenspace: {(1, 2, 2)}.

Set S = [−3 3 1; 1 0 2; 0 1 2]. Then S^(−1)AS = diag(2, 2, 1).

14. λ1 = 0 (multiplicity 2), basis for eigenspace: {(0, 1, 0, −1), (1, 0, −1, 0)}. λ2 = 2, basis for eigenspace: {(1, 1, 1, 1)}. λ3 = 10, basis for eigenspace: {(−1, 1, −1, 1)}.

Set S = [0 1 1 −1; 1 0 1 1; 0 −1 1 −1; −1 0 1 1]. Then S^(−1)AS = diag(0, 0, 2, 10).

15. The given system can be written as x′ = Ax, where A = [1 4; 2 3]. A has eigenvalues λ1 = −1 and λ2 = 5, with corresponding linearly independent eigenvectors v1 = (−2, 1) and v2 = (1, 1). If we set S = [−2 1; 1 1], then S^(−1)AS = diag(−1, 5); therefore, under the transformation x = Sy, the given system of differential equations simplifies to

(y1′, y2′)^T = [−1 0; 0 5](y1, y2)^T.

Hence, y1′ = −y1 and y2′ = 5y2. Integrating these equations, we obtain

y1(t) = c1e^(−t), y2(t) = c2e^(5t).

Returning to the original variables, we have

x = Sy = [−2 1; 1 1](c1e^(−t), c2e^(5t))^T = (−2c1e^(−t) + c2e^(5t), c1e^(−t) + c2e^(5t))^T.

Consequently, x1(t) = −2c1e^(−t) + c2e^(5t) and x2(t) = c1e^(−t) + c2e^(5t).
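These formulas can be spot-checked numerically; the sketch below (with arbitrary sample constants c1, c2) verifies that the solution of Problem 15 satisfies x′ = Ax by differentiating the formulas directly:

```python
import numpy as np

# System matrix from Problem 15; c1, c2 are arbitrary sample constants.
A = np.array([[1.0, 4.0], [2.0, 3.0]])
c1, c2 = 2.0, -3.0

def x(t):
    return np.array([-2.0*c1*np.exp(-t) + c2*np.exp(5.0*t),
                          c1*np.exp(-t) + c2*np.exp(5.0*t)])

def xprime(t):  # derivative of the formulas above, computed by hand
    return np.array([2.0*c1*np.exp(-t) + 5.0*c2*np.exp(5.0*t),
                         -c1*np.exp(-t) + 5.0*c2*np.exp(5.0*t)])

for t in (0.0, 0.5, 1.0):
    assert np.allclose(xprime(t), A @ x(t))
```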

16. The given system can be written as x′ = Ax, where A = [6 −2; −2 6]. A has eigenvalues λ1 = 4 and λ2 = 8, with corresponding linearly independent eigenvectors v1 = (1, 1) and


v2 = (−1, 1). If we set S = [1 −1; 1 1], then S^(−1)AS = diag(4, 8); therefore, under the transformation x = Sy, the given system of differential equations simplifies to

(y1′, y2′)^T = [4 0; 0 8](y1, y2)^T.

Hence, y1′ = 4y1 and y2′ = 8y2. Integrating these equations, we obtain

y1(t) = c1e^(4t), y2(t) = c2e^(8t).

Returning to the original variables, we have

x = Sy = [1 −1; 1 1](c1e^(4t), c2e^(8t))^T = (c1e^(4t) − c2e^(8t), c1e^(4t) + c2e^(8t))^T.

Consequently, x1(t) = c1e^(4t) − c2e^(8t) and x2(t) = c1e^(4t) + c2e^(8t).

17. The given system can be written as x′ = Ax, where A = [9 6; −10 −7]. A has eigenvalues λ1 = −1 and λ2 = 3, with corresponding linearly independent eigenvectors v1 = (3, −5) and v2 = (−1, 1). If we set S = [3 −1; −5 1], then S^(−1)AS = diag(−1, 3); therefore, under the transformation x = Sy, the given system of differential equations simplifies to

(y1′, y2′)^T = [−1 0; 0 3](y1, y2)^T.

Hence, y1′ = −y1 and y2′ = 3y2. Integrating these equations, we obtain

y1(t) = c1e^(−t), y2(t) = c2e^(3t).

Returning to the original variables, we have

x = Sy = [3 −1; −5 1](c1e^(−t), c2e^(3t))^T = (3c1e^(−t) − c2e^(3t), −5c1e^(−t) + c2e^(3t))^T.

Consequently, x1(t) = 3c1e^(−t) − c2e^(3t) and x2(t) = −5c1e^(−t) + c2e^(3t).

18. The given system can be written as x′ = Ax, where A = [−12 −7; 16 10]. A has eigenvalues λ1 = 2 and λ2 = −4, with corresponding linearly independent eigenvectors v1 = (1, −2) and v2 = (7, −8). If we set S = [1 7; −2 −8], then S^(−1)AS = diag(2, −4); therefore, under the transformation x = Sy, the given system of differential equations simplifies to

(y1′, y2′)^T = [2 0; 0 −4](y1, y2)^T.

Hence, y1′ = 2y1 and y2′ = −4y2. Integrating these equations, we obtain

y1(t) = c1e^(2t), y2(t) = c2e^(−4t).


Returning to the original variables, we have

x = Sy = [1 7; −2 −8](c1e^(2t), c2e^(−4t))^T = (c1e^(2t) + 7c2e^(−4t), −2c1e^(2t) − 8c2e^(−4t))^T.

Consequently, x1(t) = c1e^(2t) + 7c2e^(−4t) and x2(t) = −2c1e^(2t) − 8c2e^(−4t).

19. The given system can be written as x′ = Ax, where A = [0 1; −1 0]. A has eigenvalues λ1 = i and λ2 = −i, with corresponding linearly independent eigenvectors v1 = (1, i) and v2 = (1, −i). If we set S = [1 1; i −i], then S^(−1)AS = diag(i, −i); therefore, under the transformation x = Sy, the given system of differential equations simplifies to

(y1′, y2′)^T = [i 0; 0 −i](y1, y2)^T.

Hence, y1′ = iy1 and y2′ = −iy2. Integrating these equations, we obtain

y1(t) = c1e^(it), y2(t) = c2e^(−it).

Returning to the original variables, we have

x = Sy = [1 1; i −i](c1e^(it), c2e^(−it))^T = (c1e^(it) + c2e^(−it), i(c1e^(it) − c2e^(−it)))^T.

Consequently, x1(t) = c1e^(it) + c2e^(−it) and x2(t) = i(c1e^(it) − c2e^(−it)). Using Euler's formula, these expressions can be written as

x1(t) = (c1 + c2) cos t + i(c1 − c2) sin t, x2(t) = i(c1 − c2) cos t − (c1 + c2) sin t,

or equivalently,

x1(t) = a cos t + b sin t, x2(t) = b cos t − a sin t,

where a = c1 + c2 and b = i(c1 − c2).

20. The given system can be written as x′ = Ax, where A = [3 −4 −1; 0 −1 −1; 0 −4 2]. A has eigenvalue λ1 = −2 with corresponding eigenvector v1 = (1, 1, 1), and eigenvalue λ2 = 3 with corresponding linearly independent eigenvectors v2 = (1, 0, 0) and v3 = (0, 1, −4). If we set S = [1 1 0; 1 0 1; 1 0 −4], then S^(−1)AS = diag(−2, 3, 3); therefore, under the transformation x = Sy, the given system of differential equations simplifies to

(y1′, y2′, y3′)^T = [−2 0 0; 0 3 0; 0 0 3](y1, y2, y3)^T.

Hence, y1′ = −2y1, y2′ = 3y2, and y3′ = 3y3. Integrating these equations, we obtain

y1(t) = c1e^(−2t), y2(t) = c2e^(3t), y3(t) = c3e^(3t).


Returning to the original variables, we have

x = Sy = [1 1 0; 1 0 1; 1 0 −4](c1e^(−2t), c2e^(3t), c3e^(3t))^T = (c1e^(−2t) + c2e^(3t), c1e^(−2t) + c3e^(3t), c1e^(−2t) − 4c3e^(3t))^T.

Consequently, x1(t) = c1e^(−2t) + c2e^(3t), x2(t) = c1e^(−2t) + c3e^(3t), and x3(t) = c1e^(−2t) − 4c3e^(3t).

21. The given system can be written as x′ = Ax, where A = [1 1 −1; 1 1 1; −1 1 1]. A has eigenvalue λ1 = −1 with corresponding eigenvector v1 = (−1, 1, −1), and eigenvalue λ2 = 2 with corresponding linearly independent eigenvectors v2 = (0, 1, 1) and v3 = (1, 0, −1). If we set S = [−1 0 1; 1 1 0; −1 1 −1], then S^(−1)AS = diag(−1, 2, 2); therefore, under the transformation x = Sy, the given system of differential equations simplifies to

(y1′, y2′, y3′)^T = [−1 0 0; 0 2 0; 0 0 2](y1, y2, y3)^T.

Hence, y1′ = −y1, y2′ = 2y2, and y3′ = 2y3. Integrating these equations, we obtain

y1(t) = c1e^(−t), y2(t) = c2e^(2t), y3(t) = c3e^(2t).

Returning to the original variables, we have

x = Sy = [−1 0 1; 1 1 0; −1 1 −1](c1e^(−t), c2e^(2t), c3e^(2t))^T = (−c1e^(−t) + c3e^(2t), c1e^(−t) + c2e^(2t), −c1e^(−t) + c2e^(2t) − c3e^(2t))^T.

Consequently, x1(t) = −c1e^(−t) + c3e^(2t), x2(t) = c1e^(−t) + c2e^(2t), and x3(t) = −c1e^(−t) + (c2 − c3)e^(2t).

22. A^2 = (SDS^(−1))(SDS^(−1)) = SD(S^(−1)S)DS^(−1) = SDInDS^(−1) = SD^2S^(−1). We now use mathematical induction to establish the general result. Suppose that for k = m > 2 we have A^m = SD^mS^(−1). Then

A^(m+1) = AA^m = (SDS^(−1))(SD^mS^(−1)) = SD(S^(−1)S)D^mS^(−1) = SD^(m+1)S^(−1).

It follows by mathematical induction that A^k = SD^kS^(−1) for k = 1, 2, . . . .

23. Let A = diag(a1, a2, . . . , an) and let B = diag(b1, b2, . . . , bn). Then from the index form of the matrix product,

(AB)ij = Σ_{k=1}^n aik bkj = aii bij = { 0 if i ≠ j; ai bi if i = j }.

Consequently, AB = diag(a1b1, a2b2, . . . , anbn). Applying this result to the matrix D = diag(λ1, λ2, . . . , λn), it follows directly that D^k = diag(λ1^k, λ2^k, . . . , λn^k).

24. The matrix A has eigenvalues λ1 = 5 and λ2 = −1, with corresponding eigenvectors v1 = (1, −3) and v2 = (2, −3). Thus, if we set S = [1 2; −3 −3], then S^{-1}AS = D, where D = diag(5, −1). Equivalently, A = SDS^{-1}. It follows from the results of the previous two problems that

A^3 = SD^3S^{-1} = [1 2; −3 −3][125 0; 0 −1][−1 −2/3; 1 1/3] = [−127 −84; 378 251],

whereas

A^5 = SD^5S^{-1} = [1 2; −3 −3][3125 0; 0 −1][−1 −2/3; 1 1/3] = [−3127 −2084; 9378 6251].
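These matrix powers can be confirmed with a few lines of numpy (a sketch; A is reconstructed as SDS^{-1} from the eigenpairs stated in the solution):

```python
# Checking A^3 and A^5 from Problem 24 with numpy.
import numpy as np

S = np.array([[1.0, 2.0], [-3.0, -3.0]])
D = np.diag([5.0, -1.0])
A = S @ D @ np.linalg.inv(S)   # reconstruct A from its diagonalization

# A^k = S D^k S^{-1}, with D^k computed entrywise on the diagonal.
A3 = S @ np.diag([125.0, -1.0]) @ np.linalg.inv(S)
A5 = S @ np.diag([3125.0, -1.0]) @ np.linalg.inv(S)

assert np.allclose(A3, np.linalg.matrix_power(A, 3))
assert np.allclose(A5, np.linalg.matrix_power(A, 5))
assert np.allclose(A3, [[-127.0, -84.0], [378.0, 251.0]])
assert np.allclose(A5, [[-3127.0, -2084.0], [9378.0, 6251.0]])
```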

25.

(a) This is self-evident from matrix multiplication. Another perspective on this is that when we multiply a matrix B on the left by a diagonal matrix D, the ith row of B gets multiplied by the ith diagonal element of D. Thus, when we form √D · √D, the ith diagonal entry √λi of the second factor gets multiplied by the ith diagonal element √λi of the first factor, so the ith diagonal entry of the product is √λi · √λi = λi. Therefore, √D √D = D, which means that √D is a square root of D.

(b) We have

(S√D S^{-1})^2 = (S√D S^{-1})(S√D S^{-1}) = S(√D I √D)S^{-1} = SDS^{-1} = A,

as required.

(c) We begin by diagonalizing A. We have

det(A − λI) = det[6−λ −2; −3 7−λ] = (6 − λ)(7 − λ) − 6 = λ^2 − 13λ + 36 = (λ − 4)(λ − 9),

so the eigenvalues of A are λ = 4 and λ = 9. An eigenvector of A corresponding to λ = 4 is (1, 1), and an eigenvector of A corresponding to λ = 9 is (2, −3). Thus, we can form

S = [1 2; 1 −3] and D = [4 0; 0 9].

We take √D = [2 0; 0 3]. A fast computation shows that S^{-1} = [3/5 2/5; 1/5 −1/5]. By part (b), one square root of A is given by

√A = S√D S^{-1} = [12/5 −2/5; −3/5 13/5].

Directly squaring this result confirms this matrix as a square root of A.
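The "directly squaring" step is quickly done by machine. The sketch below (assuming numpy) rebuilds √A = S√D S^{-1} and checks that its square is A:

```python
# Verifying the matrix square root from Problem 25(c).
import numpy as np

A = np.array([[6.0, -2.0], [-3.0, 7.0]])
S = np.array([[1.0, 2.0], [1.0, -3.0]])   # eigenvectors for lambda = 4, 9
sqrtD = np.diag([2.0, 3.0])               # sqrt of diag(4, 9)

sqrtA = S @ sqrtD @ np.linalg.inv(S)
assert np.allclose(sqrtA, [[12/5, -2/5], [-3/5, 13/5]])
assert np.allclose(sqrtA @ sqrtA, A)      # (sqrt A)^2 = A
```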

26. (a) Show: A ∼ A. The identity matrix I is invertible, and I = I^{-1}. Since IA = AI, we have A = I^{-1}AI, and it follows that A ∼ A.

(b) Show: A ∼ B ⇐⇒ B ∼ A. A ∼ B =⇒ there exists an invertible matrix S such that B = S^{-1}AS =⇒ A = SBS^{-1} = (S^{-1})^{-1}B(S^{-1}). But S^{-1} is invertible since S is invertible. Consequently, B ∼ A.

(c) Show: A ∼ B and B ∼ C =⇒ A ∼ C. A ∼ B =⇒ there exists an invertible matrix S such that B = S^{-1}AS; moreover, B ∼ C =⇒ there exists an invertible matrix P such that C = P^{-1}BP. Thus, C = P^{-1}BP = P^{-1}(S^{-1}AS)P = (P^{-1}S^{-1})A(SP) = (SP)^{-1}A(SP), where SP is invertible. Therefore A ∼ C.


27. Let A ∼ B mean that A is similar to B. Show: A ∼ B ⇐⇒ A^T ∼ B^T. A ∼ B =⇒ there exists an invertible matrix S such that B = S^{-1}AS =⇒ B^T = (S^{-1}AS)^T = S^T A^T (S^{-1})^T = S^T A^T (S^T)^{-1}. S^T is invertible because S is invertible [since det(S) = det(S^T)]. Thus, A^T ∼ B^T.

28. We are given that Av = λv and B = S^{-1}AS. Then

B(S^{-1}v) = (S^{-1}AS)(S^{-1}v) = S^{-1}A(SS^{-1})v = S^{-1}Av = S^{-1}(λv) = λ(S^{-1}v).

Hence, S^{-1}v is an eigenvector of B corresponding to the eigenvalue λ.

29. (a) S^{-1}AS = diag(λ1, λ2, . . . , λn) =⇒ det(S^{-1}AS) = λ1λ2 · · · λn =⇒ det(A) det(S^{-1}) det(S) = λ1λ2 · · · λn =⇒ det(A) = λ1λ2 · · · λn. Since all eigenvalues are nonzero, it follows that det(A) ≠ 0. Consequently, A is invertible.

(b) S^{-1}AS = diag(λ1, λ2, . . . , λn) =⇒ (S^{-1}AS)^{-1} = [diag(λ1, λ2, . . . , λn)]^{-1} =⇒ S^{-1}A^{-1}(S^{-1})^{-1} = diag(1/λ1, 1/λ2, . . . , 1/λn) =⇒ S^{-1}A^{-1}S = diag(1/λ1, 1/λ2, . . . , 1/λn).

30. (a) S^{-1}AS = diag(λ1, λ2, . . . , λn) =⇒ (S^{-1}AS)^T = [diag(λ1, λ2, . . . , λn)]^T =⇒ S^T A^T (S^{-1})^T = diag(λ1, λ2, . . . , λn) =⇒ S^T A^T (S^T)^{-1} = diag(λ1, λ2, . . . , λn). Since Q = (S^T)^{-1}, this implies that Q^{-1}A^T Q = diag(λ1, λ2, . . . , λn).

(b) Let MC = [v1, v2, v3, . . . , vn] denote the matrix of cofactors of S. We see from part (a) that A^T is nondefective, which means it possesses a complete set of eigenvectors. Also from part (a), Q^{-1}A^T Q = diag(λ1, λ2, . . . , λn), where Q = (S^T)^{-1}, so

A^T Q = Q diag(λ1, λ2, . . . , λn).   (30.1)

Since adj(S) = MC^T, we have

S^{-1} = adj(S)/det(S) = MC^T/det(S) =⇒ (S^{-1})^T = (MC^T/det(S))^T =⇒ (S^T)^{-1} = MC/det(S) =⇒ Q = MC/det(S).

Substituting this result into Equation (30.1), we obtain

A^T (MC/det(S)) = (MC/det(S)) diag(λ1, λ2, . . . , λn)
=⇒ A^T MC = MC diag(λ1, λ2, . . . , λn)
=⇒ A^T [v1, v2, v3, . . . , vn] = [v1, v2, v3, . . . , vn] diag(λ1, λ2, . . . , λn)
=⇒ [A^T v1, A^T v2, A^T v3, . . . , A^T vn] = [λ1v1, λ2v2, λ3v3, . . . , λnvn]
=⇒ A^T vi = λi vi for i ∈ {1, 2, 3, . . . , n}.

Hence, the column vectors of MC are linearly independent eigenvectors of A^T.

31. det(A − λI) = 0 ⇐⇒ |−2−λ 4; 1 1−λ| = 0 ⇐⇒ (λ + 3)(λ − 2) = 0 ⇐⇒ λ1 = −3 or λ2 = 2.

If λ1 = −3, the corresponding eigenvectors are of the form v1 = r(−4, 1), where r ∈ R. If λ2 = 2, the corresponding eigenvectors are of the form v2 = s(1, 1), where s ∈ R.

Thus, a complete set of eigenvectors is {(−4, 1), (1, 1)}, so that S = [−4 1; 1 1]. From problem 30, if MC denotes the matrix of cofactors of S, then MC = [1 −1; −1 −4]. Consequently, (1, −1) is an eigenvector corresponding to λ = −3 and (−1, −4) is an eigenvector corresponding to λ = 2 for the matrix A^T.


32. S^{-1}AS = Jλ ⇐⇒ AS = SJλ ⇐⇒ A[v1, v2] = [v1, v2][λ 1; 0 λ]
=⇒ [Av1, Av2] = [λv1, v1 + λv2] =⇒ Av1 = λv1 and Av2 = v1 + λv2
=⇒ (A − λI)v1 = 0 and (A − λI)v2 = v1.

33. det(A − λI) = 0 ⇐⇒ |2−λ 1; −1 4−λ| = 0 ⇐⇒ (λ − 3)^2 = 0 ⇐⇒ λ1 = 3 of multiplicity two.

If λ1 = 3, the corresponding eigenvectors are of the form v1 = r(1, 1), where r ∈ R. Consequently, A does not have a complete set of eigenvectors, so it is a defective matrix.

By the preceding problem, J3 = [3 1; 0 3] is similar to A. Hence, there exists S = [v1, v2] such that S^{-1}AS = J3. From the first part of the problem, we can let v1 = (1, 1). Now consider (A − λI)v2 = v1, where v2 = (a, b) for a, b ∈ R. Upon substituting, we obtain

[−1 1; −1 1][a; b] = [1; 1] =⇒ −a + b = 1.

Thus, S takes the form S = [1 b−1; 1 b], where b ∈ R; if b = 0, then S = [1 −1; 1 0].
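A quick numerical check (a sketch using numpy) confirms that this S conjugates A to the Jordan block J3:

```python
# Verifying S^{-1} A S = J3 for Problem 33.
import numpy as np

A = np.array([[2.0, 1.0], [-1.0, 4.0]])
S = np.array([[1.0, -1.0], [1.0, 0.0]])   # columns v1 = (1,1), v2 = (-1,0)

J = np.linalg.inv(S) @ A @ S
assert np.allclose(J, [[3.0, 1.0], [0.0, 3.0]])
```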

34.

S^{-1}AS = [λ 1 0; 0 λ 1; 0 0 λ] ⇐⇒ A[v1, v2, v3] = [v1, v2, v3][λ 1 0; 0 λ 1; 0 0 λ]
⇐⇒ [Av1, Av2, Av3] = [λv1, v1 + λv2, v2 + λv3]
⇐⇒ Av1 = λv1, Av2 = v1 + λv2, and Av3 = v2 + λv3
⇐⇒ (A − λI)v1 = 0, (A − λI)v2 = v1, and (A − λI)v3 = v2.

35. (a) From (5.8.15), with all sums running from 1 to n,

c1f1 + c2f2 + · · · + cnfn = 0 ⇐⇒ Σ_i ci (Σ_j sji ej) = 0 ⇐⇒ Σ_j [Σ_i sji ci] ej = 0 ⇐⇒ Σ_i sji ci = 0, j = 1, 2, . . . , n,

since {ei} is a linearly independent set. The latter equation is just the component form of the linear system Sc = 0. Since {fi} is a linearly independent set, the only solution to this system is the trivial solution c = 0, so det(S) ≠ 0. Consequently, S is invertible.

(b) From (5.8.14) and (5.8.15), we have

T(fk) = Σ_i bik Σ_j sji ej = Σ_j [Σ_i sji bik] ej, k = 1, 2, . . . , n.

Interchanging the index names i and j yields

T(fk) = Σ_i [Σ_j sij bjk] ei, k = 1, 2, . . . , n.   (∗)

(c) From (5.8.15) and (5.8.13), we have

T(fk) = Σ_j sjk T(ej) = Σ_j sjk Σ_i aij ei,

that is,

T(fk) = Σ_i [Σ_j aij sjk] ei, k = 1, 2, . . . , n.   (∗∗)

(d) Subtracting (∗) from (∗∗) yields

Σ_i [Σ_j (sij bjk − aij sjk)] ei = 0.

Thus, since {ei} is a linearly independent set,

Σ_j sij bjk = Σ_j aij sjk, i = 1, 2, . . . , n.

But this is just the index form of the matrix equation SB = AS. Multiplying both sides of the preceding equation on the left by S^{-1} yields B = S^{-1}AS.

Solutions to Section 5.9

True-False Review:

1. TRUE. In the definition of the matrix exponential function e^{At}, we see that powers of the matrix A must be computed:

e^{At} = In + At + (At)^2/2! + (At)^3/3! + · · · .

In order to do this, A must be a square matrix.

2. TRUE. We see this by plugging in t = 1 into the definition of the matrix exponential function. All termscontaining A3, A4, A5, . . . must be zero, leaving us with the result given in this statement.

3. FALSE. The inverse of the matrix exponential function eAt is the matrix exponential function e−At, andthis will exist for all square matrices A, not just invertible ones.

4. TRUE. The matrix exponential function eAt converges to a matrix the same size as A, for all t ∈ R.This is asserted, but not proven, directly beneath Definition 5.9.1.

5. FALSE. The correct statement is(SDS−1)k = SDkS−1.

The matrices S and S−1 on the right-hand side of this equation do not get raised to the power k.

6. FALSE. According to Property 1 of the Matrix Exponential Function, we have

(eAt)2 = (eAt)(eAt) = e2At.

Problems:

1. det(A − λI) = 0 ⇐⇒ |2−λ 2; 2 −1−λ| = 0 ⇐⇒ λ^2 − λ − 6 = 0 ⇐⇒ (λ + 2)(λ − 3) = 0 ⇐⇒ λ = −2 or λ = 3.

If λ = −2, then (A − λI)v = 0 assumes the form [4 2; 2 1][v1; v2] = [0; 0] =⇒ 2v1 + v2 = 0. v1 = (1, −2) is an eigenvector corresponding to λ = −2, and w1 = v1/||v1|| = (1/√5, −2/√5) is a unit eigenvector corresponding to λ = −2.

If λ = 3, then (A − λI)v = 0 assumes the form [−1 2; 2 −4][v1; v2] = [0; 0] =⇒ v1 − 2v2 = 0. v2 = (2, 1) is an eigenvector corresponding to λ = 3, and w2 = v2/||v2|| = (2/√5, 1/√5) is a unit eigenvector corresponding to λ = 3.

Thus, S = [1/√5 2/√5; −2/√5 1/√5] and S^T AS = diag(−2, 3).
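An orthogonal diagonalization like this one is easy to confirm numerically. A sketch (assuming numpy; the matrix entries come from the determinant expansion above):

```python
# Checking the orthogonal diagonalization in Problem 1.
import numpy as np

A = np.array([[2.0, 2.0], [2.0, -1.0]])
s5 = np.sqrt(5.0)
S = np.array([[1/s5, 2/s5],
              [-2/s5, 1/s5]])   # columns are the unit eigenvectors w1, w2

assert np.allclose(S.T @ S, np.eye(2))            # S is orthogonal
assert np.allclose(S.T @ A @ S, np.diag([-2.0, 3.0]))
```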

2. det(A − λI) = 0 ⇐⇒ |4−λ 6; 6 9−λ| = 0 ⇐⇒ λ^2 − 13λ = 0 ⇐⇒ λ(λ − 13) = 0 ⇐⇒ λ = 0 or λ = 13.

If λ = 0, then (A − λI)v = 0 assumes the form [4 6; 6 9][v1; v2] = [0; 0] =⇒ 2v1 + 3v2 = 0. v1 = (−3, 2) is an eigenvector corresponding to λ = 0, and w1 = v1/||v1|| = (−3/√13, 2/√13) is a unit eigenvector corresponding to λ = 0.

If λ = 13, then (A − λI)v = 0 assumes the form [−9 6; 6 −4][v1; v2] = [0; 0] =⇒ −3v1 + 2v2 = 0. v2 = (2, 3) is an eigenvector corresponding to λ = 13, and w2 = v2/||v2|| = (2/√13, 3/√13) is a unit eigenvector corresponding to λ = 13.

Thus, S = [−3/√13 2/√13; 2/√13 3/√13] and S^T AS = diag(0, 13).

3. det(A − λI) = 0 ⇐⇒ |1−λ 2; 2 1−λ| = 0 ⇐⇒ (λ − 1)^2 − 4 = 0 ⇐⇒ (λ + 1)(λ − 3) = 0 ⇐⇒ λ = −1 or λ = 3.

If λ = −1, then (A − λI)v = 0 assumes the form [2 2; 2 2][v1; v2] = [0; 0] =⇒ v1 + v2 = 0. v1 = (−1, 1) is an eigenvector corresponding to λ = −1, and w1 = v1/||v1|| = (−1/√2, 1/√2) is a unit eigenvector corresponding to λ = −1.

If λ = 3, then (A − λI)v = 0 assumes the form [−2 2; 2 −2][v1; v2] = [0; 0] =⇒ v1 − v2 = 0. v2 = (1, 1) is an eigenvector corresponding to λ = 3, and w2 = v2/||v2|| = (1/√2, 1/√2) is a unit eigenvector corresponding to λ = 3.

Thus, S = [−1/√2 1/√2; 1/√2 1/√2] and S^T AS = diag(−1, 3).

4. det(A − λI) = 0 ⇐⇒ |−λ 0 3; 0 −2−λ 0; 3 0 −λ| = 0 ⇐⇒ (λ + 2)(λ − 3)(λ + 3) = 0 ⇐⇒ λ = −3, λ = −2, or λ = 3.

If λ = −3, then (A − λI)v = 0 assumes the form [3 0 3; 0 1 0; 3 0 3][v1; v2; v3] = [0; 0; 0] =⇒ v1 + v3 = 0 and v2 = 0. v1 = (−1, 0, 1) is an eigenvector corresponding to λ = −3, and w1 = v1/||v1|| = (−1/√2, 0, 1/√2) is a unit eigenvector corresponding to λ = −3.

If λ = −2, then (A − λI)v = 0 assumes the form [2 0 3; 0 0 0; 3 0 2][v1; v2; v3] = [0; 0; 0] =⇒ v1 = v3 = 0 and v2 ∈ R. v2 = (0, 1, 0) is an eigenvector corresponding to λ = −2, and w2 = v2/||v2|| = (0, 1, 0) is a unit eigenvector corresponding to λ = −2.

If λ = 3, then (A − λI)v = 0 assumes the form [−3 0 3; 0 −5 0; 3 0 −3][v1; v2; v3] = [0; 0; 0] =⇒ v1 − v3 = 0 and v2 = 0. v3 = (1, 0, 1) is an eigenvector corresponding to λ = 3, and w3 = v3/||v3|| = (1/√2, 0, 1/√2) is a unit eigenvector corresponding to λ = 3.

Thus, S = [−1/√2 0 1/√2; 0 1 0; 1/√2 0 1/√2] and S^T AS = diag(−3, −2, 3).

5. det(A − λI) = 0 ⇐⇒ |1−λ 2 1; 2 4−λ 2; 1 2 1−λ| = 0 ⇐⇒ λ^3 − 6λ^2 = 0 ⇐⇒ λ^2(λ − 6) = 0 ⇐⇒ λ = 6 or λ = 0 of multiplicity two.

If λ = 0, then (A − λI)v = 0 assumes the form [1 2 1; 2 4 2; 1 2 1][v1; v2; v3] = [0; 0; 0] =⇒ v1 + 2v2 + v3 = 0. v1 = (−1, 0, 1) and v2 = (−2, 1, 0) are linearly independent eigenvectors corresponding to λ = 0. v1 and v2 are not orthogonal, since ⟨v1, v2⟩ = 2 ≠ 0, so we will use the Gram-Schmidt procedure. Let u1 = v1 = (−1, 0, 1), so

u2 = v2 − (⟨v2, u1⟩/||u1||^2)u1 = (−2, 1, 0) − (2/2)(−1, 0, 1) = (−1, 1, −1).

Now w1 = u1/||u1|| = (−1/√2, 0, 1/√2) and w2 = u2/||u2|| = (−1/√3, 1/√3, −1/√3) are orthonormal eigenvectors corresponding to λ = 0.

If λ = 6, then (A − λI)v = 0 assumes the form [−5 2 1; 2 −2 2; 1 2 −5][v1; v2; v3] = [0; 0; 0] =⇒ v1 − v3 = 0 and v2 − 2v3 = 0. v3 = (1, 2, 1) is an eigenvector corresponding to λ = 6, and w3 = v3/||v3|| = (1/√6, 2/√6, 1/√6) is a unit eigenvector corresponding to λ = 6.

Thus, S = [−1/√2 −1/√3 1/√6; 0 1/√3 2/√6; 1/√2 −1/√3 1/√6] and S^T AS = diag(0, 0, 6).
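The Gram-Schmidt step and the resulting orthogonal diagonalization can be checked numerically. A sketch (assuming numpy; all vectors are taken from the solution above):

```python
# Checking Gram-Schmidt and S^T A S = diag(0, 0, 6) for Problem 5.
import numpy as np

A = np.array([[1.0, 2.0, 1.0],
              [2.0, 4.0, 2.0],
              [1.0, 2.0, 1.0]])

# Gram-Schmidt inside the lambda = 0 eigenspace.
v1 = np.array([-1.0, 0.0, 1.0])
v2 = np.array([-2.0, 1.0, 0.0])
u1 = v1
u2 = v2 - (v2 @ u1) / (u1 @ u1) * u1
assert np.allclose(u2, [-1.0, 1.0, -1.0])

w1 = u1 / np.linalg.norm(u1)
w2 = u2 / np.linalg.norm(u2)
w3 = np.array([1.0, 2.0, 1.0]) / np.sqrt(6.0)   # lambda = 6 eigenvector

S = np.column_stack([w1, w2, w3])
assert np.allclose(S.T @ S, np.eye(3))           # orthonormal columns
assert np.allclose(S.T @ A @ S, np.diag([0.0, 0.0, 6.0]))
```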

6. det(A − λI) = 0 ⇐⇒ |2−λ 0 0; 0 3−λ 1; 0 1 3−λ| = 0 ⇐⇒ (λ − 2)^2(λ − 4) = 0 ⇐⇒ λ = 4 or λ = 2 of multiplicity two.

If λ = 2, then (A − λI)v = 0 assumes the form [0 0 0; 0 1 1; 0 1 1][v1; v2; v3] = [0; 0; 0] =⇒ v2 + v3 = 0 and v1 ∈ R. v1 = (1, 0, 0) and v2 = (0, −1, 1) are linearly independent eigenvectors corresponding to λ = 2, and w1 = v1/||v1|| = (1, 0, 0) and w2 = v2/||v2|| = (0, −1/√2, 1/√2) are unit eigenvectors corresponding to λ = 2. w1 and w2 are also orthogonal, because ⟨w1, w2⟩ = 0.

If λ = 4, then (A − λI)v = 0 assumes the form [−2 0 0; 0 −1 1; 0 1 −1][v1; v2; v3] = [0; 0; 0] =⇒ v1 = 0 and v2 − v3 = 0. v3 = (0, 1, 1) is an eigenvector corresponding to λ = 4, and w3 = v3/||v3|| = (0, 1/√2, 1/√2) is a unit eigenvector corresponding to λ = 4.

Thus, S = [1 0 0; 0 −1/√2 1/√2; 0 1/√2 1/√2] and S^T AS = diag(2, 2, 4).

7. det(A − λI) = 0 ⇐⇒ |−λ 1 0; 1 −λ 0; 0 0 1−λ| = 0 ⇐⇒ (λ − 1)^2(λ + 1) = 0 ⇐⇒ λ = −1 or λ = 1 of multiplicity two.

If λ = 1, then (A − λI)v = 0 assumes the form [−1 1 0; 1 −1 0; 0 0 0][v1; v2; v3] = [0; 0; 0] =⇒ v1 − v2 = 0 and v3 ∈ R. v1 = (1, 1, 0) and v2 = (0, 0, 1) are linearly independent eigenvectors corresponding to λ = 1, and w1 = v1/||v1|| = (1/√2, 1/√2, 0) and w2 = v2/||v2|| = (0, 0, 1) are unit eigenvectors corresponding to λ = 1. w1 and w2 are also orthogonal, because ⟨w1, w2⟩ = 0.

If λ = −1, then (A − λI)v = 0 assumes the form [1 1 0; 1 1 0; 0 0 2][v1; v2; v3] = [0; 0; 0] =⇒ v1 + v2 = 0 and v3 = 0. v3 = (−1, 1, 0) is an eigenvector corresponding to λ = −1, and w3 = v3/||v3|| = (−1/√2, 1/√2, 0) is a unit eigenvector corresponding to λ = −1.

Thus, S = [−1/√2 0 1/√2; 1/√2 0 1/√2; 0 1 0] and S^T AS = diag(−1, 1, 1).

8. det(A − λI) = 0 ⇐⇒ |1−λ 1 −1; 1 1−λ 1; −1 1 1−λ| = 0 ⇐⇒ (λ − 2)^2(λ + 1) = 0 ⇐⇒ λ = 2 of multiplicity two, or λ = −1.

If λ = 2, then (A − λI)v = 0 assumes the form [−1 1 −1; 1 −1 1; −1 1 −1][v1; v2; v3] = [0; 0; 0] =⇒ v1 − v2 + v3 = 0. v1 = (1, 1, 0) and v2 = (−1, 0, 1) are linearly independent eigenvectors corresponding to λ = 2. v1 and v2 are not orthogonal, since ⟨v1, v2⟩ = −1 ≠ 0, so we will use the Gram-Schmidt procedure. Let u1 = v1 = (1, 1, 0), so

u2 = v2 − (⟨v2, u1⟩/||u1||^2)u1 = (−1, 0, 1) + (1/2)(1, 1, 0) = (−1/2, 1/2, 1).

Now w1 = u1/||u1|| = (1/√2, 1/√2, 0) and w2 = u2/||u2|| = (−1/√6, 1/√6, 2/√6) are orthonormal eigenvectors corresponding to λ = 2.

If λ = −1, then (A − λI)v = 0 assumes the form [2 1 −1; 1 2 1; −1 1 2][v1; v2; v3] = [0; 0; 0] =⇒ v1 − v3 = 0 and v2 + v3 = 0. v3 = (1, −1, 1) is an eigenvector corresponding to λ = −1, and w3 = v3/||v3|| = (1/√3, −1/√3, 1/√3) is a unit eigenvector corresponding to λ = −1.

Thus, S = [1/√2 −1/√6 1/√3; 1/√2 1/√6 −1/√3; 0 2/√6 1/√3] and S^T AS = diag(2, 2, −1).

9. det(A − λI) = 0 ⇐⇒ |1−λ 0 −1; 0 1−λ 1; −1 1 −λ| = 0 ⇐⇒ (1 − λ)(λ + 1)(λ − 2) = 0 ⇐⇒ λ = −1, λ = 1, or λ = 2.

If λ = −1, then (A − λI)v = 0 assumes the form [2 0 −1; 0 2 1; −1 1 1][v1; v2; v3] = [0; 0; 0] =⇒ 2v1 − v3 = 0 and 2v2 + v3 = 0. v1 = (1, −1, 2) is an eigenvector corresponding to λ = −1, and w1 = v1/||v1|| = (1/√6, −1/√6, 2/√6) is a unit eigenvector corresponding to λ = −1.

If λ = 1, then (A − λI)v = 0 assumes the form [0 0 −1; 0 0 1; −1 1 −1][v1; v2; v3] = [0; 0; 0] =⇒ v1 − v2 = 0 and v3 = 0. v2 = (1, 1, 0) is an eigenvector corresponding to λ = 1, and w2 = v2/||v2|| = (1/√2, 1/√2, 0) is a unit eigenvector corresponding to λ = 1.

If λ = 2, then (A − λI)v = 0 assumes the form [−1 0 −1; 0 −1 1; −1 1 −2][v1; v2; v3] = [0; 0; 0] =⇒ v1 + v3 = 0 and v2 − v3 = 0. v3 = (−1, 1, 1) is an eigenvector corresponding to λ = 2, and w3 = v3/||v3|| = (−1/√3, 1/√3, 1/√3) is a unit eigenvector corresponding to λ = 2.

Thus, S = [1/√6 1/√2 −1/√3; −1/√6 1/√2 1/√3; 2/√6 0 1/√3] and S^T AS = diag(−1, 1, 2).

10. det(A − λI) = 0 ⇐⇒ |3−λ 3 4; 3 3−λ 0; 4 0 3−λ| = 0 ⇐⇒ (λ − 3)(λ + 2)(λ − 8) = 0 ⇐⇒ λ = −2, λ = 3, or λ = 8.

If λ = −2, then (A − λI)v = 0 assumes the form [5 3 4; 3 5 0; 4 0 5][v1; v2; v3] = [0; 0; 0] =⇒ 4v1 + 5v3 = 0 and 4v2 − 3v3 = 0. v1 = (−5, 3, 4) is an eigenvector corresponding to λ = −2, and w1 = v1/||v1|| = (−1/√2, 3/(5√2), 4/(5√2)) is a unit eigenvector corresponding to λ = −2.

If λ = 3, then (A − λI)v = 0 assumes the form [0 3 4; 3 0 0; 4 0 0][v1; v2; v3] = [0; 0; 0] =⇒ v1 = 0 and 3v2 + 4v3 = 0. v2 = (0, −4, 3) is an eigenvector corresponding to λ = 3, and w2 = v2/||v2|| = (0, −4/5, 3/5) is a unit eigenvector corresponding to λ = 3.

If λ = 8, then (A − λI)v = 0 assumes the form [−5 3 4; 3 −5 0; 4 0 −5][v1; v2; v3] = [0; 0; 0] =⇒ 4v1 − 5v3 = 0 and 4v2 − 3v3 = 0. v3 = (5, 3, 4) is an eigenvector corresponding to λ = 8, and w3 = v3/||v3|| = (1/√2, 3/(5√2), 4/(5√2)) is a unit eigenvector corresponding to λ = 8.

Thus, S = [−1/√2 0 1/√2; 3/(5√2) −4/5 3/(5√2); 4/(5√2) 3/5 4/(5√2)] and S^T AS = diag(−2, 3, 8).

11. det(A − λI) = 0 ⇐⇒ |−3−λ 2 2; 2 −3−λ 2; 2 2 −3−λ| = 0 ⇐⇒ (λ + 5)^2(λ − 1) = 0 ⇐⇒ λ = −5 of multiplicity two, or λ = 1.

If λ = 1, then (A − λI)v = 0 assumes the form [−4 2 2; 2 −4 2; 2 2 −4][v1; v2; v3] = [0; 0; 0] =⇒ v1 − v3 = 0 and v2 − v3 = 0. v1 = (1, 1, 1) is an eigenvector corresponding to λ = 1, and w1 = v1/||v1|| = (1/√3, 1/√3, 1/√3) is a unit eigenvector corresponding to λ = 1.

If λ = −5, then (A − λI)v = 0 assumes the form [2 2 2; 2 2 2; 2 2 2][v1; v2; v3] = [0; 0; 0] =⇒ v1 + v2 + v3 = 0. v2 = (−1, 1, 0) and v3 = (−1, 0, 1) are linearly independent eigenvectors corresponding to λ = −5. v2 and v3 are not orthogonal, since ⟨v2, v3⟩ = 1 ≠ 0, so we will use the Gram-Schmidt procedure. Let u2 = v2 = (−1, 1, 0), so

u3 = v3 − (⟨v3, u2⟩/||u2||^2)u2 = (−1, 0, 1) − (1/2)(−1, 1, 0) = (−1/2, −1/2, 1).

Now w2 = u2/||u2|| = (−1/√2, 1/√2, 0) and w3 = u3/||u3|| = (−1/√6, −1/√6, 2/√6) are orthonormal eigenvectors corresponding to λ = −5.

Thus, S = [1/√3 −1/√2 −1/√6; 1/√3 1/√2 −1/√6; 1/√3 0 2/√6] and S^T AS = diag(1, −5, −5).

12. det(A − λI) = 0 ⇐⇒ |−λ 1 1; 1 −λ 1; 1 1 −λ| = 0 ⇐⇒ (λ + 1)^2(λ − 2) = 0 ⇐⇒ λ = −1 of multiplicity two, or λ = 2.

If λ = −1, then (A − λI)v = 0 assumes the form [1 1 1; 1 1 1; 1 1 1][v1; v2; v3] = [0; 0; 0] =⇒ v1 + v2 + v3 = 0. v1 = (−1, 0, 1) and v2 = (−1, 1, 0) are linearly independent eigenvectors corresponding to λ = −1. v1 and v2 are not orthogonal, since ⟨v1, v2⟩ = 1 ≠ 0, so we will use the Gram-Schmidt procedure. Let u1 = v1 = (−1, 0, 1), so

u2 = v2 − (⟨v2, u1⟩/||u1||^2)u1 = (−1, 1, 0) − (1/2)(−1, 0, 1) = (−1/2, 1, −1/2).

Now w1 = u1/||u1|| = (−1/√2, 0, 1/√2) and w2 = u2/||u2|| = (−1/√6, 2/√6, −1/√6) are orthonormal eigenvectors corresponding to λ = −1.

If λ = 2, then (A − λI)v = 0 assumes the form [−2 1 1; 1 −2 1; 1 1 −2][v1; v2; v3] = [0; 0; 0] =⇒ v1 − v3 = 0 and v2 − v3 = 0. v3 = (1, 1, 1) is an eigenvector corresponding to λ = 2, and w3 = v3/||v3|| = (1/√3, 1/√3, 1/√3) is a unit eigenvector corresponding to λ = 2.

Thus, S = [−1/√2 −1/√6 1/√3; 0 2/√6 1/√3; 1/√2 −1/√6 1/√3] and S^T AS = diag(−1, −1, 2).


13. A has eigenvalues λ1 = 4 and λ2 = −2 with corresponding eigenvectors v1 = (1, 1) and v2 = (−1, 1). Therefore, a set of principal axes is {(1/√2)(1, 1), (1/√2)(−1, 1)}. Relative to these principal axes, the quadratic form reduces to 4y1^2 − 2y2^2.

14. A has eigenvalues λ1 = 7 and λ2 = 3 with corresponding eigenvectors v1 = (1, 1) and v2 = (1, −1). Therefore, a set of principal axes is {(1/√2)(1, 1), (1/√2)(1, −1)}. Relative to these principal axes, the quadratic form reduces to 7y1^2 + 3y2^2.

15. A has eigenvalue λ = 2 of multiplicity two with corresponding linearly independent eigenvectors v1 = (1, 0, −1) and v2 = (0, 1, 1). Using the Gram-Schmidt procedure, an orthogonal basis for this eigenspace is {u1, u2}, where

u1 = (1, 0, −1), u2 = (0, 1, 1) + (1/2)(1, 0, −1) = (1/2)(1, 2, 1).

An orthonormal basis for the eigenspace is {(1/√2)(1, 0, −1), (1/√6)(1, 2, 1)}. The remaining eigenvalue of A is λ = −1, with eigenvector v3 = (−1, 1, −1). Consequently, a set of principal axes for the given quadratic form is {(1/√2)(1, 0, −1), (1/√6)(1, 2, 1), (1/√3)(−1, 1, −1)}. Relative to these principal axes, the quadratic form reduces to 2y1^2 + 2y2^2 − y3^2.
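The reduction of the quadratic form can be spot-checked numerically: build A spectrally from the principal axes and eigenvalues above, then compare x^T A x with the reduced form at a random point (a sketch, assuming numpy):

```python
# Spot-checking the principal-axes reduction in Problem 15.
import numpy as np

# Principal axes from the solution (columns of S).
w1 = np.array([1.0, 0.0, -1.0]) / np.sqrt(2.0)
w2 = np.array([1.0, 2.0, 1.0]) / np.sqrt(6.0)
w3 = np.array([-1.0, 1.0, -1.0]) / np.sqrt(3.0)
S = np.column_stack([w1, w2, w3])

A = S @ np.diag([2.0, 2.0, -1.0]) @ S.T   # reconstruct A spectrally

rng = np.random.default_rng(0)
y = rng.standard_normal(3)
x = S @ y                                  # change of variables x = S y
lhs = x @ A @ x                            # quadratic form in x
rhs = 2*y[0]**2 + 2*y[1]**2 - y[2]**2      # reduced form in y
assert np.isclose(lhs, rhs)
```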

16. A has eigenvalue λ = 0 of multiplicity two with corresponding linearly independent eigenvectors v1 = (0, 1, 0, −1) and v2 = (1, 0, −1, 0). Notice that these are orthogonal; hence we do not need to apply the Gram-Schmidt procedure. The remaining eigenvalues of A are λ = 4 and λ = 8, with corresponding eigenvectors v3 = (−1, 1, −1, 1) and v4 = (1, 1, 1, 1), respectively. Consequently, a set of principal axes for the given quadratic form is

{(1/√2)(0, 1, 0, −1), (1/√2)(1, 0, −1, 0), (1/2)(−1, 1, −1, 1), (1/2)(1, 1, 1, 1)}.

Relative to these principal axes, the quadratic form reduces to 4y3^2 + 8y4^2.

17. A = [a b; b c], where a, b, c ∈ R. Consider

det(A − λI) = 0 ⇐⇒ |a−λ b; b c−λ| = 0 ⇐⇒ λ^2 − (a + c)λ + (ac − b^2) = 0 ⇐⇒ λ = [(a + c) ± √((a − c)^2 + 4b^2)]/2.

Now A has repeated eigenvalues ⇐⇒ (a − c)^2 + 4b^2 = 0 ⇐⇒ a = c and b = 0 (since a, b, c ∈ R) ⇐⇒ A = [a 0; 0 a] ⇐⇒ A = aI2 ⇐⇒ A is a scalar matrix.

18. (a) A is a real n × n symmetric matrix, so A possesses a complete set of eigenvectors, and hence A is similar to a diagonal matrix: there exists an invertible matrix S such that S^{-1}AS = diag(λ, λ, . . . , λ), where the eigenvalue λ occurs n times. But diag(λ, λ, . . . , λ) = λIn, so S^{-1}AS = λIn =⇒ AS = S(λIn) =⇒ AS = λS =⇒ A = λSS^{-1} =⇒ A = λIn =⇒ A is a scalar matrix.

(b) Theorem: Let A be a nondefective n × n matrix. If λ is an eigenvalue of multiplicity n, then A is a scalar matrix.
Proof: The proof is the same as that in part (a), since A has a complete set of eigenvectors.


19. Since real eigenvectors of A that correspond to distinct eigenvalues are orthogonal, it must be the casethat if y = (y1, y2) corresponds to λ2 where Ay = λ2y, then

〈x,y〉 = 0 =⇒ 〈(1, 2), (y1, y2)〉 = 0 =⇒ y1 + 2y2 = 0 =⇒ y1 = −2y2 =⇒ y = (−2y2, y2) = y2(−2, 1).

Consequently, (−2, 1) is an eigenvector corresponding to λ2.

20. (a) Let A be a real symmetric 2 × 2 matrix with two distinct eigenvalues, λ1 and λ2, where v1 = (a, b) is an eigenvector corresponding to λ1. Since real eigenvectors of A that correspond to distinct eigenvalues are orthogonal, it follows that if v2 = (c, d) corresponds to λ2, where Av2 = λ2v2, then ⟨v1, v2⟩ = 0 =⇒ ⟨(a, b), (c, d)⟩ = 0, that is, ac + bd = 0. By inspection, we may take v2 = (−b, a). An orthonormal set of eigenvectors for A is

{(1/√(a^2 + b^2))(a, b), (1/√(a^2 + b^2))(−b, a)}.

Thus, if S = (1/√(a^2 + b^2))[a −b; b a], then S^T AS = diag(λ1, λ2).

(b) S^T AS = diag(λ1, λ2) =⇒ AS = S diag(λ1, λ2), since S^T = S^{-1}. Thus,

A = S diag(λ1, λ2)S^T = (1/(a^2 + b^2))[a −b; b a][λ1 0; 0 λ2][a b; −b a]
= (1/(a^2 + b^2))[a −b; b a][λ1a λ1b; −λ2b λ2a]
= (1/(a^2 + b^2))[λ1a^2 + λ2b^2 ab(λ1 − λ2); ab(λ1 − λ2) λ1b^2 + λ2a^2].

21. A is a real symmetric 3 × 3 matrix with eigenvalues λ1 and λ2, the latter of multiplicity two.

(a) Let v1 = (1, −1, 1) be an eigenvector of A that corresponds to λ1. Since real eigenvectors of A that correspond to distinct eigenvalues are orthogonal, it must be the case that if v = (a, b, c) corresponds to λ2, where Av = λ2v, then ⟨v1, v⟩ = 0 =⇒ a − b + c = 0 =⇒ v = r(1, 1, 0) + s(−1, 0, 1), where r and s are free variables. Consequently, v2 = (1, 1, 0) and v3 = (−1, 0, 1) are linearly independent eigenvectors corresponding to λ2, so {(1, 1, 0), (−1, 0, 1)} is a basis for the eigenspace Eλ2. v2 and v3 are not orthogonal, since ⟨v2, v3⟩ = −1 ≠ 0, so we apply the Gram-Schmidt procedure to v2 and v3. Let u2 = v2 = (1, 1, 0) and

u3 = v3 − (⟨v3, u2⟩/||u2||^2)u2 = (−1, 0, 1) + (1/2)(1, 1, 0) = (−1/2, 1/2, 1).

Now, w1 = v1/||v1|| = (1/√3, −1/√3, 1/√3) is a unit eigenvector corresponding to λ1, and w2 = u2/||u2|| = (1/√2, 1/√2, 0), w3 = u3/||u3|| = (−1/√6, 1/√6, 2/√6) are orthonormal eigenvectors corresponding to λ2.

Consequently, S = [1/√3 1/√2 −1/√6; −1/√3 1/√2 1/√6; 1/√3 0 2/√6] and S^T AS = diag(λ1, λ2, λ2).


(b) Since S is an orthogonal matrix, S^T AS = diag(λ1, λ2, λ2) =⇒ AS = S diag(λ1, λ2, λ2) =⇒ A = S diag(λ1, λ2, λ2)S^T. Thus,

A = [1/√3 1/√2 −1/√6; −1/√3 1/√2 1/√6; 1/√3 0 2/√6][λ1 0 0; 0 λ2 0; 0 0 λ2][1/√3 −1/√3 1/√3; 1/√2 1/√2 0; −1/√6 1/√6 2/√6]
= [1/√3 1/√2 −1/√6; −1/√3 1/√2 1/√6; 1/√3 0 2/√6][λ1/√3 −λ1/√3 λ1/√3; λ2/√2 λ2/√2 0; −λ2/√6 λ2/√6 2λ2/√6]
= (1/3)[λ1 + 2λ2 −λ1 + λ2 λ1 − λ2; −λ1 + λ2 λ1 + 2λ2 −λ1 + λ2; λ1 − λ2 −λ1 + λ2 λ1 + 2λ2].
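The closed-form matrix just obtained can be tested numerically with sample eigenvalues (a sketch, assuming numpy; λ1 = 4 and λ2 = −1 are hypothetical values chosen only for the check):

```python
# Checking the spectral formula from Problem 21(b) with sample eigenvalues.
import numpy as np

l1, l2 = 4.0, -1.0   # hypothetical distinct eigenvalues for the test
A = (1/3) * np.array([[ l1 + 2*l2, -l1 + l2,   l1 - l2],
                      [-l1 + l2,    l1 + 2*l2, -l1 + l2],
                      [ l1 - l2,   -l1 + l2,   l1 + 2*l2]])

# (1, -1, 1) should be an eigenvector for l1 ...
v = np.array([1.0, -1.0, 1.0])
assert np.allclose(A @ v, l1 * v)

# ... and the eigenvalues of A should be l1, l2, l2.
assert np.allclose(np.sort(np.linalg.eigvalsh(A)), [l2, l2, l1])
```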

22. (a) Let v1, v2 ∈ Cn and recall that v1^T conj(v2) = [⟨v1, v2⟩]. Since A^T = −A and A is real,

[⟨Av1, v2⟩] = (Av1)^T conj(v2) = (v1^T A^T) conj(v2) = v1^T (−A) conj(v2) = −v1^T conj(Av2) = [−⟨v1, Av2⟩].

Thus,
⟨Av1, v2⟩ = −⟨v1, Av2⟩. (22.1)

(b) Let v1 be an eigenvector corresponding to the eigenvalue λ1, so

Av1 = λ1v1. (22.2)

Taking the inner product of (22.2) with v1 yields ⟨Av1, v1⟩ = λ1⟨v1, v1⟩, that is,

⟨Av1, v1⟩ = λ1||v1||^2. (22.3)

Taking the complex conjugate of (22.3) gives conj(⟨Av1, v1⟩) = conj(λ1)||v1||^2, that is,

⟨v1, Av1⟩ = conj(λ1)||v1||^2. (22.4)

Adding (22.3) and (22.4), and using (22.1) with v2 = v1, yields (λ1 + conj(λ1))||v1||^2 = 0. But ||v1||^2 ≠ 0, so λ1 + conj(λ1) = 0, or equivalently, conj(λ1) = −λ1, which means that all nonzero eigenvalues of A are pure imaginary.

23. Let A be an n × n real skew-symmetric matrix, where n is odd. Since A is real, the characteristic equation det(A − λI) = 0 has real coefficients, so its roots come in conjugate pairs. By problem 22, all nonzero roots of det(A − λI) = 0 are pure imaginary; hence, when n is odd, zero must be one of the eigenvalues of A.

24. det(A − λI) = 0 ⇐⇒ |−λ 4 −4; −4 −λ −2; 4 2 −λ| = 0 ⇐⇒ λ^3 + 36λ = 0 ⇐⇒ λ = 0 or λ = ±6i.

If λ = 0, then (A − λI)v = 0 assumes the form [0 4 −4; −4 0 −2; 4 2 0][v1; v2; v3] = [0; 0; 0] =⇒ 2v1 + v3 = 0 and v2 − v3 = 0. If we let v3 = 2r ∈ C, then the solution set of this system is {(−r, 2r, 2r) : r ∈ C}, so the eigenvectors corresponding to λ = 0 are v1 = r(−1, 2, 2), where r ∈ C.

If λ = −6i, then (A − λI)v = 0 assumes the form [6i 4 −4; −4 6i −2; 4 2 6i][v1; v2; v3] = [0; 0; 0] =⇒ 5v1 + (−2 + 6i)v3 = 0 and 5v2 + (4 + 3i)v3 = 0. If we let v3 = 5s ∈ C, then the solution set of this system is {((2 − 6i)s, (−4 − 3i)s, 5s) : s ∈ C}, so the eigenvectors corresponding to λ = −6i are v2 = s(2 − 6i, −4 − 3i, 5), where s ∈ C. By Theorem 5.6.8, since A is a matrix with real entries, the eigenvectors corresponding to λ = 6i are of the form v3 = t(2 + 6i, −4 + 3i, 5), where t ∈ C.
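The skew-symmetric structure of this matrix forces its spectrum onto the imaginary axis, which is easy to confirm numerically (a sketch, assuming numpy):

```python
# Checking the eigenstructure of the skew-symmetric matrix in Problem 24.
import numpy as np

A = np.array([[ 0.0,  4.0, -4.0],
              [-4.0,  0.0, -2.0],
              [ 4.0,  2.0,  0.0]])
assert np.allclose(A.T, -A)          # A is skew-symmetric

lam = np.linalg.eigvals(A)
assert np.allclose(lam.real, 0.0)    # all eigenvalues are pure imaginary
assert np.isclose(np.sort(np.abs(lam.imag))[-1], 6.0)   # +/- 6i occur

# (-1, 2, 2) spans the nullspace, i.e. the lambda = 0 eigenvectors.
v = np.array([-1.0, 2.0, 2.0])
assert np.allclose(A @ v, 0.0)
```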

25. det(A − λI) = 0 ⇐⇒ |−λ −1 −6; 1 −λ 5; 6 −5 −λ| = 0 ⇐⇒ −λ^3 − 62λ = 0 ⇐⇒ λ = 0 or λ = ±√62 i.

If λ = 0, then (A − λI)v = 0 assumes the form [0 −1 −6; 1 0 5; 6 −5 0][v1; v2; v3] = [0; 0; 0] =⇒ v3 = t, v2 = −6t, v1 = −5t, where t ∈ C. Thus, the solution set of the system is {(−5t, −6t, t) : t ∈ C}, so the eigenvectors corresponding to λ = 0 are v = t(−5, −6, 1), where t ∈ C. For the other eigenvalues, it is best to use technology to generate the corresponding eigenvectors.

26. A is a real n × n orthogonal matrix ⇐⇒ A^{-1} = A^T ⇐⇒ A^T A = In. Suppose that A = [v1, v2, v3, . . . , vn]. Since the ith row of A^T is equal to vi^T, matrix multiplication shows that the ijth entry of A^T A is equal to vi^T vj = ⟨vi, vj⟩. Thus, from the equality A^T A = In = [δij], an n × n matrix A = [v1, v2, v3, . . . , vn] is orthogonal if and only if ⟨vi, vj⟩ = δij, that is, if and only if the columns (rows) of A, {v1, v2, v3, . . . , vn}, form an orthonormal set of vectors.

Solutions to Section 5.11

True-False Review:

1. TRUE. See Remark 1 following Definition 5.11.1.

2. TRUE. Each Jordan block corresponds to a cycle of generalized eigenvectors, and each such cycle containsexactly one eigenvector. By construction, the eigenvectors (and generalized eigenvectors) are chosen to forma linearly independent set of vectors.

3. FALSE. For example, in a diagonalizable n× n matrix, the n linearly independent eigenvectors can bearbitrarily placed in the columns of the matrix S. Thus, an ample supply of invertible matrices S can beconstructed.

4. FALSE. For instance, if J1 = J2 = [0 1; 0 0], then J1 and J2 are in Jordan canonical form, but J1 + J2 = [0 2; 0 0] is not in Jordan canonical form.

5. TRUE. This is simply a restatement of the definition of a generalized eigenvector.

6. FALSE. The number of Jordan blocks corresponding to λ in the Jordan canonical form of A is thenumber of linearly independent eigenvectors of A corresponding to λ, which is dim[Eλ], the dimension of theeigenspace corresponding to λ, not dim[Kλ].

7. TRUE. This is the content of Theorem 5.11.8.

8. FALSE. For instance, if J1 = J2 = [1 1; 0 1], then J1 and J2 are in Jordan canonical form, but J1J2 = [1 2; 0 1] is not in Jordan canonical form.

9. TRUE. If we place the vectors in a cycle of generalized eigenvectors of A (see Equation (5.11.3)) in the columns of the matrix S formulated in this section, in the order they appear in the cycle, then the corresponding columns of the matrix S^{-1}AS will form a Jordan block.

10. TRUE. The assumption here is that all Jordan blocks have size 1 × 1, which precisely says that theJordan canonical form of A is a diagonal matrix. This means that A is diagonalizable.

11. TRUE. Suppose that S−1AS = B and that J is a Jordan canonical form of A. So there exists aninvertible matrix T such that T−1AT = J . Then B = S−1AS = S−1(TJT−1)S = (T−1S)−1J(T−1S), andhence,

(T−1S)B(T−1S)−1 = J,

which shows that J is also a Jordan canonical form of B.

12. FALSE. For instance, if J = [0 1; 0 0] and r = 2, then J is in Jordan canonical form, but rJ = [0 2; 0 0] is not in Jordan canonical form.

Problems:

1. There are 3 possible Jordan canonical forms:

[1 0 0; 0 1 0; 0 0 1], [1 1 0; 0 1 0; 0 0 1], [1 1 0; 0 1 1; 0 0 1].

2. There are 4 possible Jordan canonical forms:

[1 0 0 0; 0 1 0 0; 0 0 3 0; 0 0 0 3],
[1 1 0 0; 0 1 0 0; 0 0 3 0; 0 0 0 3],
[1 0 0 0; 0 1 0 0; 0 0 3 1; 0 0 0 3],
[1 1 0 0; 0 1 0 0; 0 0 3 1; 0 0 0 3].

3. There are 7 possible Jordan canonical forms:

[2 0 0 0 0; 0 2 0 0 0; 0 0 2 0 0; 0 0 0 2 0; 0 0 0 0 2],
[2 1 0 0 0; 0 2 0 0 0; 0 0 2 0 0; 0 0 0 2 0; 0 0 0 0 2],
[2 1 0 0 0; 0 2 0 0 0; 0 0 2 1 0; 0 0 0 2 0; 0 0 0 0 2],
[2 1 0 0 0; 0 2 1 0 0; 0 0 2 0 0; 0 0 0 2 0; 0 0 0 0 2],
[2 1 0 0 0; 0 2 1 0 0; 0 0 2 0 0; 0 0 0 2 1; 0 0 0 0 2],
[2 1 0 0 0; 0 2 1 0 0; 0 0 2 1 0; 0 0 0 2 0; 0 0 0 0 2],
[2 1 0 0 0; 0 2 1 0 0; 0 0 2 1 0; 0 0 0 2 1; 0 0 0 0 2].

4. There are 10 possible Jordan canonical forms:

[3 0 0 0 0 0; 0 3 0 0 0 0; 0 0 3 0 0 0; 0 0 0 3 0 0; 0 0 0 0 9 0; 0 0 0 0 0 9],
[3 1 0 0 0 0; 0 3 0 0 0 0; 0 0 3 0 0 0; 0 0 0 3 0 0; 0 0 0 0 9 0; 0 0 0 0 0 9],
[3 1 0 0 0 0; 0 3 0 0 0 0; 0 0 3 1 0 0; 0 0 0 3 0 0; 0 0 0 0 9 0; 0 0 0 0 0 9],
[3 1 0 0 0 0; 0 3 1 0 0 0; 0 0 3 0 0 0; 0 0 0 3 0 0; 0 0 0 0 9 0; 0 0 0 0 0 9],
[3 1 0 0 0 0; 0 3 1 0 0 0; 0 0 3 1 0 0; 0 0 0 3 0 0; 0 0 0 0 9 0; 0 0 0 0 0 9],
[3 0 0 0 0 0; 0 3 0 0 0 0; 0 0 3 0 0 0; 0 0 0 3 0 0; 0 0 0 0 9 1; 0 0 0 0 0 9],
[3 1 0 0 0 0; 0 3 0 0 0 0; 0 0 3 0 0 0; 0 0 0 3 0 0; 0 0 0 0 9 1; 0 0 0 0 0 9],
[3 1 0 0 0 0; 0 3 0 0 0 0; 0 0 3 1 0 0; 0 0 0 3 0 0; 0 0 0 0 9 1; 0 0 0 0 0 9],
[3 1 0 0 0 0; 0 3 1 0 0 0; 0 0 3 0 0 0; 0 0 0 3 0 0; 0 0 0 0 9 1; 0 0 0 0 0 9],
[3 1 0 0 0 0; 0 3 1 0 0 0; 0 0 3 1 0 0; 0 0 0 3 0 0; 0 0 0 0 9 1; 0 0 0 0 0 9].

5. Since λ = 2 occurs with multiplicity 4, it can give rise to the following possible Jordan block sizes:
(a) 4
(b) 3,1
(c) 2,2
(d) 2,1,1
(e) 1,1,1,1
Likewise, λ = 6 occurs with multiplicity 4, so it can give rise to the same five possible Jordan block sizes. Finally, λ = 8 occurs with multiplicity 3, so it can give rise to three possible Jordan block sizes:
(a) 3
(b) 2,1
(c) 1,1,1
Since the block sizes for each eigenvalue can be independently determined, we have 5 · 5 · 3 = 75 possible Jordan canonical forms.
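The block-size lists for an eigenvalue of multiplicity n are exactly the integer partitions of n, so counts like 5 · 5 · 3 = 75 can be checked with a short script (a sketch; the helper `partitions` is ours, not the text's):

```python
from functools import lru_cache

# Number of integer partitions of n with largest part at most k;
# each partition is one possible list of Jordan block sizes for an
# eigenvalue of algebraic multiplicity n.
@lru_cache(maxsize=None)
def partitions(n, k=None):
    if k is None:
        k = n
    if n == 0:
        return 1
    # Choose the largest part i, then partition the remainder with parts <= i.
    return sum(partitions(n - i, min(i, n - i)) for i in range(1, min(n, k) + 1))

# Eigenvalue multiplicities 4, 4, 3 from Problem 5:
counts = [partitions(4), partitions(4), partitions(3)]
print(counts)                              # [5, 5, 3]
print(counts[0] * counts[1] * counts[2])   # 75
```

The same function gives partitions(6) = 11, reproducing the 5 · 11 = 55 count of Problem 6.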

6. Since λ = 2 occurs with multiplicity 4, it can give rise to the following possible Jordan block sizes:
(a) 4
(b) 3,1
(c) 2,2
(d) 2,1,1
(e) 1,1,1,1
Next, λ = 5 occurs with multiplicity 6, so it can give rise to the following possible Jordan block sizes:
(a) 6
(b) 5,1
(c) 4,2
(d) 4,1,1
(e) 3,3
(f) 3,2,1
(g) 3,1,1,1
(h) 2,2,2
(i) 2,2,1,1
(j) 2,1,1,1,1
(k) 1,1,1,1,1,1
There are 5 possible Jordan block sizes corresponding to λ = 2 and 11 possible Jordan block sizes corresponding to λ = 5. Multiplying these results, we have 5 · 11 = 55 possible Jordan canonical forms.


7. Since (A − 5I)² = 0, no cycle of generalized eigenvectors corresponding to λ = 5 can have length greater than 2, and hence, only Jordan block sizes 2 or less are possible. Thus, the possible block sizes under this restriction (corresponding to λ = 5) are:
2,2,2
2,2,1,1
2,1,1,1,1
1,1,1,1,1,1
There are four such. There are still five possible block size lists corresponding to λ = 2. Multiplying these results, we have 5 · 4 = 20 possible Jordan canonical forms under this restriction.

8.

(a):
[λ1 0 0 0 0; 0 λ1 0 0 0; 0 0 λ1 0 0; 0 0 0 λ2 0; 0 0 0 0 λ2],
[λ1 1 0 0 0; 0 λ1 0 0 0; 0 0 λ1 0 0; 0 0 0 λ2 0; 0 0 0 0 λ2],
[λ1 1 0 0 0; 0 λ1 1 0 0; 0 0 λ1 0 0; 0 0 0 λ2 0; 0 0 0 0 λ2],
[λ1 0 0 0 0; 0 λ1 0 0 0; 0 0 λ1 0 0; 0 0 0 λ2 1; 0 0 0 0 λ2],
[λ1 1 0 0 0; 0 λ1 0 0 0; 0 0 λ1 0 0; 0 0 0 λ2 1; 0 0 0 0 λ2],
[λ1 1 0 0 0; 0 λ1 1 0 0; 0 0 λ1 0 0; 0 0 0 λ2 1; 0 0 0 0 λ2].

(b): The assumption that (A − λ1I)² ≠ 0 implies that there can be no Jordan blocks corresponding to λ1 of size 3 × 3 (or greater). Thus, the only possible Jordan canonical forms for this matrix now are

[λ1 0 0 0 0; 0 λ1 0 0 0; 0 0 λ1 0 0; 0 0 0 λ2 0; 0 0 0 0 λ2],
[λ1 1 0 0 0; 0 λ1 0 0 0; 0 0 λ1 0 0; 0 0 0 λ2 0; 0 0 0 0 λ2],
[λ1 0 0 0 0; 0 λ1 0 0 0; 0 0 λ1 0 0; 0 0 0 λ2 1; 0 0 0 0 λ2],
[λ1 1 0 0 0; 0 λ1 0 0 0; 0 0 λ1 0 0; 0 0 0 λ2 1; 0 0 0 0 λ2].

9. The assumption that (A − λI)³ = 0 implies no Jordan blocks of size greater than 3 × 3 are possible. The fact that (A − λI)² ≠ 0 implies that there is at least one Jordan block of size 3 × 3. Thus, the possible block size combinations for a 6 × 6 matrix with eigenvalue λ of multiplicity 6, no blocks of size greater than 3 × 3, and at least one 3 × 3 block are:
3,3
3,2,1
3,1,1,1
Thus, there are 3 possible Jordan canonical forms. (We omit the list itself; it can be produced simply from the list of block sizes above.)


10. The eigenvalues of the matrix with this characteristic polynomial are λ = 4, 4, −6. The possible Jordan canonical forms in this case are therefore:

[4 0 0; 0 4 0; 0 0 −6], [4 1 0; 0 4 0; 0 0 −6].

11. The eigenvalues of the matrix with this characteristic polynomial are λ = 4, 4, 4, −1, −1. The possible Jordan canonical forms in this case are therefore:

[4 0 0 0 0; 0 4 0 0 0; 0 0 4 0 0; 0 0 0 −1 0; 0 0 0 0 −1],
[4 1 0 0 0; 0 4 0 0 0; 0 0 4 0 0; 0 0 0 −1 0; 0 0 0 0 −1],
[4 1 0 0 0; 0 4 1 0 0; 0 0 4 0 0; 0 0 0 −1 0; 0 0 0 0 −1],
[4 0 0 0 0; 0 4 0 0 0; 0 0 4 0 0; 0 0 0 −1 1; 0 0 0 0 −1],
[4 1 0 0 0; 0 4 0 0 0; 0 0 4 0 0; 0 0 0 −1 1; 0 0 0 0 −1],
[4 1 0 0 0; 0 4 1 0 0; 0 0 4 0 0; 0 0 0 −1 1; 0 0 0 0 −1].

12. The eigenvalues of the matrix with this characteristic polynomial are λ = −2, −2, −2, 0, 0, 3, 3. The possible Jordan canonical forms in this case are therefore:

[−2 0 0 0 0 0 0; 0 −2 0 0 0 0 0; 0 0 −2 0 0 0 0; 0 0 0 0 0 0 0; 0 0 0 0 0 0 0; 0 0 0 0 0 3 0; 0 0 0 0 0 0 3],
[−2 0 0 0 0 0 0; 0 −2 0 0 0 0 0; 0 0 −2 0 0 0 0; 0 0 0 0 1 0 0; 0 0 0 0 0 0 0; 0 0 0 0 0 3 0; 0 0 0 0 0 0 3],
[−2 0 0 0 0 0 0; 0 −2 0 0 0 0 0; 0 0 −2 0 0 0 0; 0 0 0 0 0 0 0; 0 0 0 0 0 0 0; 0 0 0 0 0 3 1; 0 0 0 0 0 0 3],
[−2 0 0 0 0 0 0; 0 −2 0 0 0 0 0; 0 0 −2 0 0 0 0; 0 0 0 0 1 0 0; 0 0 0 0 0 0 0; 0 0 0 0 0 3 1; 0 0 0 0 0 0 3],
[−2 1 0 0 0 0 0; 0 −2 0 0 0 0 0; 0 0 −2 0 0 0 0; 0 0 0 0 0 0 0; 0 0 0 0 0 0 0; 0 0 0 0 0 3 0; 0 0 0 0 0 0 3],
[−2 1 0 0 0 0 0; 0 −2 0 0 0 0 0; 0 0 −2 0 0 0 0; 0 0 0 0 1 0 0; 0 0 0 0 0 0 0; 0 0 0 0 0 3 0; 0 0 0 0 0 0 3],
[−2 1 0 0 0 0 0; 0 −2 0 0 0 0 0; 0 0 −2 0 0 0 0; 0 0 0 0 0 0 0; 0 0 0 0 0 0 0; 0 0 0 0 0 3 1; 0 0 0 0 0 0 3],
[−2 1 0 0 0 0 0; 0 −2 0 0 0 0 0; 0 0 −2 0 0 0 0; 0 0 0 0 1 0 0; 0 0 0 0 0 0 0; 0 0 0 0 0 3 1; 0 0 0 0 0 0 3],
[−2 1 0 0 0 0 0; 0 −2 1 0 0 0 0; 0 0 −2 0 0 0 0; 0 0 0 0 0 0 0; 0 0 0 0 0 0 0; 0 0 0 0 0 3 0; 0 0 0 0 0 0 3],
[−2 1 0 0 0 0 0; 0 −2 1 0 0 0 0; 0 0 −2 0 0 0 0; 0 0 0 0 1 0 0; 0 0 0 0 0 0 0; 0 0 0 0 0 3 0; 0 0 0 0 0 0 3],
[−2 1 0 0 0 0 0; 0 −2 1 0 0 0 0; 0 0 −2 0 0 0 0; 0 0 0 0 0 0 0; 0 0 0 0 0 0 0; 0 0 0 0 0 3 1; 0 0 0 0 0 0 3],
[−2 1 0 0 0 0 0; 0 −2 1 0 0 0 0; 0 0 −2 0 0 0 0; 0 0 0 0 1 0 0; 0 0 0 0 0 0 0; 0 0 0 0 0 3 1; 0 0 0 0 0 0 3].

13. The eigenvalues of the matrix with this characteristic polynomial are λ = −2, −2, 6, 6, 6, 6, 6. The possible Jordan canonical forms in this case are therefore:

[−2 0 0 0 0 0 0; 0 −2 0 0 0 0 0; 0 0 6 0 0 0 0; 0 0 0 6 0 0 0; 0 0 0 0 6 0 0; 0 0 0 0 0 6 0; 0 0 0 0 0 0 6],
[−2 0 0 0 0 0 0; 0 −2 0 0 0 0 0; 0 0 6 0 0 0 0; 0 0 0 6 1 0 0; 0 0 0 0 6 0 0; 0 0 0 0 0 6 0; 0 0 0 0 0 0 6],
[−2 0 0 0 0 0 0; 0 −2 0 0 0 0 0; 0 0 6 1 0 0 0; 0 0 0 6 0 0 0; 0 0 0 0 6 1 0; 0 0 0 0 0 6 0; 0 0 0 0 0 0 6],
[−2 0 0 0 0 0 0; 0 −2 0 0 0 0 0; 0 0 6 1 0 0 0; 0 0 0 6 1 0 0; 0 0 0 0 6 0 0; 0 0 0 0 0 6 0; 0 0 0 0 0 0 6],
[−2 0 0 0 0 0 0; 0 −2 0 0 0 0 0; 0 0 6 1 0 0 0; 0 0 0 6 1 0 0; 0 0 0 0 6 0 0; 0 0 0 0 0 6 1; 0 0 0 0 0 0 6],
[−2 0 0 0 0 0 0; 0 −2 0 0 0 0 0; 0 0 6 1 0 0 0; 0 0 0 6 1 0 0; 0 0 0 0 6 1 0; 0 0 0 0 0 6 0; 0 0 0 0 0 0 6],
[−2 0 0 0 0 0 0; 0 −2 0 0 0 0 0; 0 0 6 1 0 0 0; 0 0 0 6 1 0 0; 0 0 0 0 6 1 0; 0 0 0 0 0 6 1; 0 0 0 0 0 0 6],
[−2 1 0 0 0 0 0; 0 −2 0 0 0 0 0; 0 0 6 0 0 0 0; 0 0 0 6 0 0 0; 0 0 0 0 6 0 0; 0 0 0 0 0 6 0; 0 0 0 0 0 0 6],
[−2 1 0 0 0 0 0; 0 −2 0 0 0 0 0; 0 0 6 1 0 0 0; 0 0 0 6 0 0 0; 0 0 0 0 6 0 0; 0 0 0 0 0 6 0; 0 0 0 0 0 0 6],
[−2 1 0 0 0 0 0; 0 −2 0 0 0 0 0; 0 0 6 1 0 0 0; 0 0 0 6 0 0 0; 0 0 0 0 6 1 0; 0 0 0 0 0 6 0; 0 0 0 0 0 0 6],
[−2 1 0 0 0 0 0; 0 −2 0 0 0 0 0; 0 0 6 1 0 0 0; 0 0 0 6 1 0 0; 0 0 0 0 6 0 0; 0 0 0 0 0 6 0; 0 0 0 0 0 0 6],
[−2 1 0 0 0 0 0; 0 −2 0 0 0 0 0; 0 0 6 1 0 0 0; 0 0 0 6 1 0 0; 0 0 0 0 6 0 0; 0 0 0 0 0 6 1; 0 0 0 0 0 0 6],
[−2 1 0 0 0 0 0; 0 −2 0 0 0 0 0; 0 0 6 1 0 0 0; 0 0 0 6 1 0 0; 0 0 0 0 6 1 0; 0 0 0 0 0 6 0; 0 0 0 0 0 0 6],
[−2 1 0 0 0 0 0; 0 −2 0 0 0 0 0; 0 0 6 1 0 0 0; 0 0 0 6 1 0 0; 0 0 0 0 6 1 0; 0 0 0 0 0 6 1; 0 0 0 0 0 0 6].

14. Of the Jordan canonical forms in Problem 13, we are asked here to find the ones that contain exactly five Jordan blocks, since there is a correspondence between the Jordan blocks and the linearly independent eigenvectors. There are three:

[−2 1 0 0 0 0 0; 0 −2 0 0 0 0 0; 0 0 6 1 0 0 0; 0 0 0 6 0 0 0; 0 0 0 0 6 0 0; 0 0 0 0 0 6 0; 0 0 0 0 0 0 6],
[−2 0 0 0 0 0 0; 0 −2 0 0 0 0 0; 0 0 6 1 0 0 0; 0 0 0 6 0 0 0; 0 0 0 0 6 1 0; 0 0 0 0 0 6 0; 0 0 0 0 0 0 6],
[−2 0 0 0 0 0 0; 0 −2 0 0 0 0 0; 0 0 6 1 0 0 0; 0 0 0 6 1 0 0; 0 0 0 0 6 0 0; 0 0 0 0 0 6 0; 0 0 0 0 0 0 6].

15. Many examples are possible here. Let A = [0 1; 0 0]. The only eigenvalue of A is 0. The vector v = [0; 1] is not an eigenvector, since Av = [1; 0] ≠ λv. However, A² = 0₂, so every vector is a generalized eigenvector of A corresponding to λ = 0.
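This example is easy to check numerically (a sketch using sympy, which is our choice of tool, not the text's):

```python
import sympy as sp

# The 2x2 example from Problem 15.
A = sp.Matrix([[0, 1], [0, 0]])
v = sp.Matrix([0, 1])

# v is not an eigenvector: Av = (1, 0) is not a scalar multiple of v.
Av = A * v
assert sp.Matrix.hstack(v, Av).rank() == 2   # v and Av are independent

# But A^2 = 0, so (A - 0*I)^2 v = 0: v is a generalized eigenvector for 0.
assert A**2 == sp.zeros(2, 2)
assert (A**2) * v == sp.zeros(2, 1)
```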

16. Many examples are possible here. Let A = [0 1 0; 0 0 0; 0 0 0]. The only eigenvalue of A is 0. The vector v = [0; 1; 0] is not an eigenvector, since Av = [1; 0; 0] ≠ λv. However, A² = 0₃, so every vector is a generalized eigenvector of A corresponding to λ = 0.

17. The characteristic polynomial is

det(A − λI) = det [1−λ 1; −1 3−λ] = (1 − λ)(3 − λ) + 1 = λ² − 4λ + 4 = (λ − 2)²,

with roots λ = 2, 2. We have

A − 2I = [−1 1; −1 1] ∼ [1 −1; 0 0].

Because there is only one unpivoted column in this latter matrix, we only have one eigenvector for A. Hence, A is not diagonalizable, and therefore

JCF(A) = [2 1; 0 2].

To determine the matrix S, we must find a cycle of generalized eigenvectors of length 2. Therefore, it suffices to find a vector v in R² such that (A − 2I)v ≠ 0. Many choices are possible here. We take v = [0; 1]. Then (A − 2I)v = [1; 1]. Thus, we have

S = [1 0; 1 1].
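As a sanity check, the relation S⁻¹AS = JCF(A) can be verified directly (a sympy sketch using the matrices computed above):

```python
import sympy as sp

A = sp.Matrix([[1, 1], [-1, 3]])
S = sp.Matrix([[1, 0], [1, 1]])   # columns: (A - 2I)v and v
J = sp.Matrix([[2, 1], [0, 2]])   # the Jordan canonical form found above

# S^{-1} A S reproduces the Jordan form.
assert S.inv() * A * S == J

# sympy's built-in Jordan decomposition agrees on J (its transition
# matrix P may differ from S, since the cycle is not unique).
P, J2 = A.jordan_form()
assert J2 == J and P.inv() * A * P == J
```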

18. The characteristic polynomial is

det(A − λI) = det [1−λ 1 1; 0 1−λ 1; 0 1 1−λ] = (1 − λ)[(1 − λ)² − 1] = (1 − λ)(λ² − 2λ) = λ(1 − λ)(λ − 2),

with roots λ = 0, 1, 2. Since A is a 3 × 3 matrix with three distinct eigenvalues, it is diagonalizable. Therefore, its Jordan canonical form is simply a diagonal matrix with the eigenvalues as its diagonal entries:

JCF(A) = [0 0 0; 0 1 0; 0 0 2].

To determine the invertible matrix S, we must find eigenvectors associated with each eigenvalue.

Eigenvalue λ = 0: Consider

nullspace(A) = nullspace [1 1 1; 0 1 1; 0 1 1],

and this latter matrix can be row-reduced to [1 1 1; 0 1 1; 0 0 0]. The equations corresponding to the rows of this matrix are x + y + z = 0 and y + z = 0. Setting z = t, then y = −t and x = 0. With t = 1, this gives us the eigenvector (0, −1, 1).

Eigenvalue λ = 1: Consider

nullspace(A − I) = nullspace [0 1 1; 0 0 1; 0 1 0].

By inspection, we see that z = 0 and y = 0 are required, but x = t is free. Thus, an eigenvector associated with λ = 1 may be chosen as (1, 0, 0).

Eigenvalue λ = 2: Consider

nullspace(A − 2I) = nullspace [−1 1 1; 0 −1 1; 0 1 −1],

which can be row-reduced to [1 −1 −1; 0 1 −1; 0 0 0]. Setting z = t, we have y = t and x = 2t. Thus, with t = 1 we obtain the eigenvector (2, 1, 1) associated with λ = 2.

Placing the eigenvectors obtained as the columns of S (with columns corresponding to the eigenvalues of JCF(A) above), we have

S = [0 1 2; −1 0 1; 1 0 1].
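Since the eigenvalues are distinct, diagonalization can be confirmed mechanically (a sympy sketch; S is the eigenvector matrix built above):

```python
import sympy as sp

A = sp.Matrix([[1, 1, 1], [0, 1, 1], [0, 1, 1]])
S = sp.Matrix([[0, 1, 2], [-1, 0, 1], [1, 0, 1]])

# Three distinct eigenvalues, each of algebraic multiplicity 1 ...
assert A.eigenvals() == {0: 1, 1: 1, 2: 1}

# ... so S^{-1} A S is the diagonal matrix of eigenvalues.
assert S.inv() * A * S == sp.diag(0, 1, 2)
```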

19. We can get the characteristic polynomial by using cofactor expansion along the second column as follows:

det(A − λI) = det [5−λ 0 −1; 1 4−λ −1; 1 0 3−λ] = (4 − λ)[(5 − λ)(3 − λ) + 1] = (4 − λ)(λ² − 8λ + 16) = (4 − λ)(λ − 4)²,

with roots λ = 4, 4, 4.

We have A − 4I = [1 0 −1; 1 0 −1; 1 0 −1], and so vectors (x, y, z) in the nullspace of this matrix must satisfy x − z = 0. Setting z = t and y = s, we have x = t. Hence, we obtain two linearly independent eigenvectors of A corresponding to λ = 4: [1; 0; 1] and [0; 1; 0]. Therefore, JCF(A) contains exactly two Jordan blocks. This uniquely determines JCF(A), up to a rearrangement of the Jordan blocks:

JCF(A) = [4 1 0; 0 4 0; 0 0 4].

To determine the matrix S, we must seek a generalized eigenvector. It is easy to verify that (A − 4I)² = 0₃, so every nonzero vector v is a generalized eigenvector. We must choose one such that (A − 4I)v ≠ 0 in order to form a cycle of length 2. There are many choices here, but let us choose v = [1; 0; 0]. Then (A − 4I)v = [1; 1; 1]. Notice that this is an eigenvector of A corresponding to λ = 4. To complete the matrix S, we will need a second linearly independent eigenvector. Again, there are a multitude of choices. Let us choose the eigenvector [0; 1; 0] found above. Thus,

S = [1 1 0; 1 0 1; 1 0 0].
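The chain construction here is easy to check in code (a sympy sketch; A is the Problem 19 matrix recovered from A − 4I above):

```python
import sympy as sp

A = sp.Matrix([[5, 0, -1], [1, 4, -1], [1, 0, 3]])
N = A - 4 * sp.eye(3)

# (A - 4I)^2 = 0, so every nonzero vector is a generalized eigenvector.
assert N**2 == sp.zeros(3, 3)

# v = (1,0,0) starts a cycle of length 2: {(A - 4I)v, v}.
v = sp.Matrix([1, 0, 0])
assert N * v == sp.Matrix([1, 1, 1])

# Columns of S: the cycle, then the independent eigenvector (0,1,0).
S = sp.Matrix([[1, 1, 0], [1, 0, 1], [1, 0, 0]])
assert S.inv() * A * S == sp.Matrix([[4, 1, 0], [0, 4, 0], [0, 0, 4]])
```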

20. We will do cofactor expansion along the first column of the matrix to obtain the characteristic polynomial:

det(A − λI) = det [4−λ −4 5; −1 4−λ 2; −1 2 4−λ]
= (4 − λ)(λ² − 8λ + 12) + (−4)(4 − λ) − 10 − (−8 − 5(4 − λ))
= (4 − λ)(λ² − 8λ + 12) + (2 − λ)
= (4 − λ)(λ − 2)(λ − 6) + (2 − λ)
= (λ − 2)[(4 − λ)(λ − 6) − 1]
= (λ − 2)(−λ² + 10λ − 25)
= −(λ − 2)(λ − 5)²,

with eigenvalues λ = 2, 5, 5.

Since λ = 5 is a repeated eigenvalue, we consider this eigenvalue first. We must consider the nullspace of the matrix A − 5I = [−1 −4 5; −1 −1 2; −1 2 −1], which can be row-reduced to [1 1 −2; 0 3 −3; 0 0 0], a matrix that has only one unpivoted column, and hence λ = 5 only yields one linearly independent eigenvector. Thus,

JCF(A) = [2 0 0; 0 5 1; 0 0 5].

To determine an invertible matrix S, we first proceed to find a cycle of generalized eigenvectors of length 2 corresponding to λ = 5. Therefore, we must find a vector v in R³ such that (A − 5I)v ≠ 0, but (A − 5I)²v = 0 (in order that v is a generalized eigenvector). Note that (A − 5I)² = [0 18 −18; 0 9 −9; 0 0 0], so if we set v = [1; 0; 0], then (A − 5I)v = [−1; −1; −1] and (A − 5I)²v = 0. Thus, we obtain the cycle

{[−1; −1; −1], [1; 0; 0]}.

Next, corresponding to λ = 2, we must find an eigenvector. We need to find a nonzero vector (x, y, z) in nullspace [2 −4 5; −1 2 2; −1 2 2], and this latter matrix can be row-reduced to [1 −2 −2; 0 0 1; 0 0 0]. The middle row requires that z = 0, and if we set y = t, then x = 2t. Thus, by using t = 1, we obtain the eigenvector (2, 1, 0).

Thus, we can form the matrix

S = [2 −1 1; 1 −1 0; 0 −1 0].
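Both the characteristic polynomial and the decomposition can be confirmed symbolically (a sympy sketch; A, S, and J are as computed above):

```python
import sympy as sp

A = sp.Matrix([[4, -4, 5], [-1, 4, 2], [-1, 2, 4]])
S = sp.Matrix([[2, -1, 1], [1, -1, 0], [0, -1, 0]])
J = sp.Matrix([[2, 0, 0], [0, 5, 1], [0, 0, 5]])

# Characteristic polynomial -(λ - 2)(λ - 5)^2 from Problem 20.
lam = sp.symbols('lambda')
char_poly = sp.expand((A - lam * sp.eye(3)).det())
assert char_poly == sp.expand(-(lam - 2) * (lam - 5)**2)

# The Jordan relation for the S built from the cycle and the eigenvector.
assert S.inv() * A * S == J
```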

21. We are given that λ = −5 occurs with multiplicity 2 as a root of the characteristic polynomial of A. To search for corresponding eigenvectors, we consider

nullspace(A + 5I) = nullspace [−1 1 0; −1/2 1/2 1/2; −1/2 1/2 −1/2],

and this matrix row-reduces to [1 −1 0; 0 0 1; 0 0 0]. Since there is only one unpivoted column in this row-echelon form of A + 5I, the eigenspace corresponding to λ = −5 is only one-dimensional. Thus, based on the eigenvalues λ = −5, −5, −6, we already know that

JCF(A) = [−5 1 0; 0 −5 0; 0 0 −6].

Next, we seek a cycle of generalized eigenvectors of length 2 corresponding to λ = −5. The cycle takes the form {(A + 5I)v, v}, where v is a vector such that (A + 5I)²v = 0. We readily compute that

(A + 5I)² = [1/2 −1/2 1/2; 0 0 0; 1/2 −1/2 1/2].

An obvious vector that is killed by (A + 5I)² (although other choices are also possible) is v = [0; 1; 1]. Then (A + 5I)v = [1; 1; 0]. Hence, we have a cycle of generalized eigenvectors corresponding to λ = −5:

{[1; 1; 0], [0; 1; 1]}.

Now consider the eigenspace corresponding to λ = −6. We need only find one eigenvector (x, y, z) in this eigenspace. To do so, we must compute

nullspace(A + 6I) = nullspace [0 1 0; −1/2 3/2 1/2; −1/2 1/2 1/2],

and this matrix row-reduces to [1 −3 −1; 0 1 0; 0 0 0]. We see that y = 0 and x − 3y − z = 0, which is equivalent to x − z = 0. Setting z = t, we have x = t. With t = 1, we obtain the eigenvector (1, 0, 1). Hence, we can form the matrix

S = [1 0 1; 1 1 0; 0 1 1].
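The fractional entries make this one worth checking by machine (a sympy sketch; A is reconstructed from the A + 5I displayed above):

```python
import sympy as sp
from sympy import Rational as R

# A recovered by subtracting 5I from the displayed A + 5I.
A = sp.Matrix([[-6, 1, 0],
               [R(-1, 2), R(-9, 2), R(1, 2)],
               [R(-1, 2), R(1, 2), R(-11, 2)]])
S = sp.Matrix([[1, 0, 1], [1, 1, 0], [0, 1, 1]])
J = sp.Matrix([[-5, 1, 0], [0, -5, 0], [0, 0, -6]])

assert A.eigenvals() == {-5: 2, -6: 1}
assert S.inv() * A * S == J
```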

22. Because the matrix is upper triangular, the eigenvalues of A appear along the main diagonal: λ = 2, 2, 3. Let us first consider the eigenspace corresponding to λ = 2. We consider

nullspace(A − 2I) = nullspace [0 −2 14; 0 1 −7; 0 0 0],

which row-reduces to [0 1 −7; 0 0 0; 0 0 0]. There are two unpivoted columns, so this eigenspace is two-dimensional. Therefore, the matrix A is diagonalizable:

JCF(A) = [2 0 0; 0 2 0; 0 0 3].

Next, we must determine an invertible matrix S such that S⁻¹AS is the Jordan canonical form just obtained. Using the row-echelon form of A − 2I obtained above, vectors (x, y, z) in nullspace(A − 2I) must satisfy y − 7z = 0. Setting z = t, y = 7t, and x = s, we obtain the eigenvectors (1, 0, 0) and (0, 7, 1).

Next, consider the eigenspace corresponding to λ = 3. We consider

nullspace(A − 3I) = nullspace [−1 −2 14; 0 0 −7; 0 0 −1],

which row-reduces to [1 2 −14; 0 0 1; 0 0 0]. Vectors (x, y, z) in the nullspace of this matrix must satisfy z = 0 and x + 2y − 14z = 0, or x + 2y = 0. Setting y = t, we get x = −2t. Hence, setting t = 1 gives the eigenvector (−2, 1, 0).

Thus, using the eigenvectors obtained above, we obtain the matrix

S = [1 0 −2; 0 7 1; 0 1 0].

23. We use the characteristic polynomial to determine the eigenvalues of A:

det(A − λI) = det [7−λ −2 2; 0 4−λ −1; −1 1 4−λ]
= (4 − λ)[(7 − λ)(4 − λ) + 2] + (7 − λ − 2)
= (4 − λ)(λ² − 11λ + 30) + (5 − λ)
= (4 − λ)(λ − 5)(λ − 6) + (5 − λ)
= (5 − λ)[1 + (4 − λ)(6 − λ)]
= (5 − λ)(λ² − 10λ + 25)
= −(λ − 5)³.

Hence, the eigenvalues are λ = 5, 5, 5. Let us consider the eigenspace corresponding to λ = 5. We consider

nullspace(A − 5I) = nullspace [2 −2 2; 0 −1 −1; −1 1 −1],

and this latter matrix row-reduces to [1 −1 1; 0 1 1; 0 0 0], which contains one unpivoted column. Therefore, the eigenspace corresponding to λ = 5 is one-dimensional, and the Jordan canonical form of A consists of one Jordan block:

JCF(A) = [5 1 0; 0 5 1; 0 0 5].

A corresponding invertible matrix S in this case must have columns that consist of one cycle of generalized eigenvectors, which will take the form {(A − 5I)²v, (A − 5I)v, v}, where v is a generalized eigenvector. Now, we can verify quickly that

A − 5I = [2 −2 2; 0 −1 −1; −1 1 −1], (A − 5I)² = [2 0 4; 1 0 2; −1 0 −2], (A − 5I)³ = 0₃.

The fact that (A − 5I)³ = 0₃ means that every nonzero vector v is a generalized eigenvector. Hence, we simply choose v such that (A − 5I)²v ≠ 0. There are many choices. Let us take v = [1; 0; 0]. Then (A − 5I)v = [2; 0; −1] and (A − 5I)²v = [2; 1; −1]. Thus, we have the cycle of generalized eigenvectors

{[2; 1; −1], [2; 0; −1], [1; 0; 0]}.

Hence, we have

S = [2 2 1; 1 0 0; −1 −1 0].
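The single-block structure and the length-3 cycle can be verified directly (a sympy sketch; A and v are the choices made above):

```python
import sympy as sp

A = sp.Matrix([[7, -2, 2], [0, 4, -1], [-1, 1, 4]])
N = A - 5 * sp.eye(3)

# λ = 5 has multiplicity 3, (A - 5I)^3 = 0 but (A - 5I)^2 != 0:
# a single 3x3 Jordan block.
assert A.eigenvals() == {5: 3}
assert N**3 == sp.zeros(3, 3) and N**2 != sp.zeros(3, 3)

# The cycle {N^2 v, N v, v} with v = (1,0,0) fills the columns of S.
v = sp.Matrix([1, 0, 0])
S = sp.Matrix.hstack(N**2 * v, N * v, v)
assert S.inv() * A * S == sp.Matrix([[5, 1, 0], [0, 5, 1], [0, 0, 5]])
```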

24. Because the matrix is upper triangular, the eigenvalues of A appear along the main diagonal: λ = −1, −1, −1. Let us consider the eigenspace corresponding to λ = −1. We consider

nullspace(A + I) = nullspace [0 −1 0; 0 0 −2; 0 0 0],

and it is straightforward to see that the nullspace here consists precisely of vectors that are multiples of (1, 0, 0). Because only one linearly independent eigenvector was obtained, the Jordan canonical form of this matrix consists of only one Jordan block:

JCF(A) = [−1 1 0; 0 −1 1; 0 0 −1].

A corresponding invertible matrix S in this case must have columns that consist of one cycle of generalized eigenvectors, which will take the form {(A + I)²v, (A + I)v, v}. Here, we have

A + I = [0 −1 0; 0 0 −2; 0 0 0], (A + I)² = [0 0 2; 0 0 0; 0 0 0], and (A + I)³ = 0₃.

Therefore, every nonzero vector is a generalized eigenvector. We wish to choose a vector v such that (A + I)²v ≠ 0. There are many choices, but we will choose v = [0; 0; 1]. Then (A + I)v = [0; −2; 0] and (A + I)²v = [2; 0; 0]. Hence, we form the matrix S as follows:

S = [2 0 0; 0 −2 0; 0 0 1].

25. We use the characteristic polynomial to determine the eigenvalues of A:

det(A − λI) = det [2−λ −1 0 1; 0 3−λ −1 0; 0 1 1−λ 0; 0 −1 0 3−λ]
= (2 − λ)(3 − λ)[(3 − λ)(1 − λ) + 1]
= (2 − λ)(3 − λ)(λ² − 4λ + 4) = (2 − λ)(3 − λ)(λ − 2)²,

and so the eigenvalues are λ = 2, 2, 2, 3.

First, consider the eigenspace corresponding to λ = 2. We consider

nullspace(A − 2I) = nullspace [0 −1 0 1; 0 1 −1 0; 0 1 −1 0; 0 −1 0 1],

and this latter matrix can be row-reduced to [0 1 −1 0; 0 0 1 −1; 0 0 0 0; 0 0 0 0]. There are two unpivoted columns, and therefore two linearly independent eigenvectors corresponding to λ = 2. Thus, we will obtain two Jordan blocks corresponding to λ = 2, and they necessarily have sizes 2 × 2 and 1 × 1. Thus, we are already in a position to write down the Jordan canonical form of A:

JCF(A) = [2 1 0 0; 0 2 0 0; 0 0 2 0; 0 0 0 3].

We continue in order to obtain an invertible matrix S such that S⁻¹AS is in Jordan canonical form. To this end, we seek a generalized eigenvector v such that (A − 2I)v ≠ 0 and (A − 2I)²v = 0. Note that

A − 2I = [0 −1 0 1; 0 1 −1 0; 0 1 −1 0; 0 −1 0 1] and (A − 2I)² = [0 −2 1 1; 0 0 0 0; 0 0 0 0; 0 −2 1 1].

By inspection, we see that by taking v = [0; 0; −1; 1] (there are many other valid choices, of course), then (A − 2I)v = [1; 1; 1; 1] and (A − 2I)²v = 0. We also need a second eigenvector corresponding to λ = 2 that is linearly independent from the vector (A − 2I)v just obtained. From the row-echelon form of A − 2I, we see that all eigenvectors corresponding to λ = 2 take the form (s, t, t, t), so, for example, we can take [1; 0; 0; 0].

Next, we consider the eigenspace corresponding to λ = 3. We consider

nullspace(A − 3I) = nullspace [−1 −1 0 1; 0 0 −1 0; 0 1 −2 0; 0 −1 0 0].

Now, if (x, y, z, w) is an eigenvector corresponding to λ = 3, the last three rows of the matrix imply that y = z = 0. Thus, the first row becomes −x + w = 0. Setting w = t, then x = t, so we obtain eigenvectors of the form (t, 0, 0, t). Setting t = 1 gives the eigenvector (1, 0, 0, 1).

Thus, we can now form the matrix S such that S⁻¹AS is the Jordan canonical form we obtained above:

S = [1 0 1 1; 1 0 0 0; 1 −1 0 0; 1 1 0 1].

26. From the characteristic polynomial, we have eigenvalues λ = 2, 2, 4, 4. Let us consider the associated eigenspaces.

Corresponding to λ = 2, we seek eigenvectors (x, y, z, w) by computing

nullspace(A − 2I) = nullspace [0 −4 2 2; −2 −2 1 3; −2 −2 1 3; −2 −6 3 5],

and this matrix can be row-reduced to [2 2 −1 −3; 0 2 −1 −1; 0 0 0 0; 0 0 0 0]. Setting w = 2t and z = 2s, we obtain y = s + t and x = 2t, so we obtain the eigenvectors (2, 1, 0, 2) and (0, 1, 2, 0) corresponding to λ = 2.

Next, corresponding to λ = 4, we seek eigenvectors (x, y, z, w) by computing

nullspace(A − 4I) = nullspace [−2 −4 2 2; −2 −4 1 3; −2 −2 −1 3; −2 −6 3 3].

This matrix can be reduced to [1 2 −1 −1; 0 2 −1 −1; 0 0 1 −1; 0 0 0 0]. Since there is only one unpivoted column, this eigenspace is only one-dimensional, despite λ = 4 occurring with multiplicity 2 as a root of the characteristic equation. Therefore, we must seek a generalized eigenvector v such that (A − 4I)v is an eigenvector. This in turn requires that (A − 4I)²v = 0. We find that

A − 4I = [−2 −4 2 2; −2 −4 1 3; −2 −2 −1 3; −2 −6 3 3] and (A − 4I)² = [4 8 −4 −4; 4 4 0 −4; 4 0 4 −4; 4 8 −4 −4].

Note that the vector v = [1; 0; 0; 1] satisfies (A − 4I)²v = 0 and (A − 4I)v = [0; 1; 1; 1]. Thus, we have the cycle of generalized eigenvectors corresponding to λ = 4 given by {[0; 1; 1; 1], [1; 0; 0; 1]}.

Hence, we can form the matrix

S = [2 0 0 1; 1 1 1 0; 0 2 1 0; 2 0 1 1] and JCF(A) = [2 0 0 0; 0 2 0 0; 0 0 4 1; 0 0 0 4].
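A direct check of this 4 × 4 decomposition (a sympy sketch; A is reconstructed by adding 2I to the A − 2I displayed above):

```python
import sympy as sp

A = sp.Matrix([[2, -4, 2, 2],
               [-2, 0, 1, 3],
               [-2, -2, 3, 3],
               [-2, -6, 3, 7]])
S = sp.Matrix([[2, 0, 0, 1],
               [1, 1, 1, 0],
               [0, 2, 1, 0],
               [2, 0, 1, 1]])
J = sp.Matrix([[2, 0, 0, 0],
               [0, 2, 0, 0],
               [0, 0, 4, 1],
               [0, 0, 0, 4]])

assert A.eigenvals() == {2: 2, 4: 2}
assert S.inv() * A * S == J
```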

27. Since A is upper triangular, the eigenvalues appear along the main diagonal: λ = 2, 2, 2, 2, 2. Looking at

nullspace(A − 2I) = nullspace [0 1 1 1 1; 0 0 0 0 1; 0 0 0 0 1; 0 0 0 0 1; 0 0 0 0 0],

we see that the row-echelon form of this matrix will contain two pivots, and therefore three unpivoted columns. That means that the eigenspace corresponding to λ = 2 is three-dimensional. Therefore, JCF(A) consists of three Jordan blocks. The only lists of block sizes for a 5 × 5 matrix with three blocks are (a) 3,1,1 and (b) 2,2,1. In this case, note that

(A − 2I)² = [0 0 0 0 3; 0 0 0 0 0; 0 0 0 0 0; 0 0 0 0 0; 0 0 0 0 0] ≠ 0₅,

so it is possible to find a vector v that generates a cycle of generalized eigenvectors of length 3: {(A − 2I)²v, (A − 2I)v, v}. Thus, JCF(A) contains a Jordan block of size 3 × 3. We conclude that the correct list of block sizes for this matrix is 3,1,1:

JCF(A) = [2 1 0 0 0; 0 2 1 0 0; 0 0 2 0 0; 0 0 0 2 0; 0 0 0 0 2].

28. Since A is upper triangular, the eigenvalues appear along the main diagonal: λ = 0, 0, 0, 0, 0. Looking at nullspace(A − 0I) = nullspace(A), we see that eigenvectors (x, y, z, u, v) corresponding to λ = 0 must satisfy z = u = v = 0 (since the third row gives 6u = 0, the first row gives u + 4v = 0, and the second row gives z + u + v = 0). Thus, we have only two free variables, and thus JCF(A) will consist of two Jordan blocks. The only lists of block sizes for a 5 × 5 matrix with two blocks are (a) 4,1 and (b) 3,2. In this case, it is easy to verify that A³ = 0, so the longest possible cycle of generalized eigenvectors {A²v, Av, v} has length 3. Therefore, case (b) holds: JCF(A) consists of one Jordan block of size 3 × 3 and one Jordan block of size 2 × 2:

JCF(A) = [0 1 0 0 0; 0 0 1 0 0; 0 0 0 0 0; 0 0 0 0 1; 0 0 0 0 0].

29. Since A is upper triangular, the eigenvalues appear along the main diagonal: λ = 1, 1, 1, 1, 1, 1, 1, 1. Looking at

nullspace(A − I) = nullspace [0 1 0 1 0 1 0 1; 0 0 0 0 0 0 0 0; 0 0 0 1 0 1 0 1; 0 0 0 0 0 0 0 0; 0 0 0 0 0 1 0 1; 0 0 0 0 0 0 0 0; 0 0 0 0 0 0 0 1; 0 0 0 0 0 0 0 0],

we see that if (a, b, c, d, e, f, g, h) is an eigenvector of A, then b = d = f = h = 0, and a, c, e, and g are free variables. Thus, we have four linearly independent eigenvectors of A, and hence we expect four Jordan blocks. Now, an easy calculation shows that (A − I)² = 0, and thus no Jordan blocks of size greater than 2 × 2 are permissible. Thus, it must be the case that JCF(A) consists of four Jordan blocks, each of which is a 2 × 2 matrix:

JCF(A) = [1 1 0 0 0 0 0 0; 0 1 0 0 0 0 0 0; 0 0 1 1 0 0 0 0; 0 0 0 1 0 0 0 0; 0 0 0 0 1 1 0 0; 0 0 0 0 0 1 0 0; 0 0 0 0 0 0 1 1; 0 0 0 0 0 0 0 1].


30. NOT SIMILAR. We will compute JCF(A) and JCF(B). If they are the same (up to a rearrangement of the Jordan blocks), then A and B are similar; otherwise they are not. Both matrices have eigenvalues λ = 6, 6, 6. Consider

nullspace(A − 6I) = nullspace [1 1 0; −1 −1 0; 1 0 0].

We see that eigenvectors (x, y, z) corresponding to λ = 6 must satisfy x + y = 0 and x = 0. Therefore, x = y = 0, while z is a free variable. Since we obtain only one free variable, JCF(A) consists of just one Jordan block corresponding to λ = 6:

JCF(A) = [6 1 0; 0 6 1; 0 0 6].

Next, consider

nullspace(B − 6I) = nullspace [0 −1 1; 0 0 0; 0 0 0].

In this case, eigenvectors (x, y, z) corresponding to λ = 6 must satisfy −y + z = 0. Therefore, both x and z are free variables, and hence, JCF(B) consists of two Jordan blocks corresponding to λ = 6:

JCF(B) = [6 1 0; 0 6 0; 0 0 6].

Since JCF(A) ≠ JCF(B), we conclude that A and B are not similar.
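This similarity test is easy to automate: since the number of Jordan blocks for an eigenvalue λ equals n − rank(M − λI), the ranks alone distinguish A from B (a sympy sketch; A and B are reconstructed from A − 6I and B − 6I above):

```python
import sympy as sp

A = sp.Matrix([[7, 1, 0], [-1, 5, 0], [1, 0, 6]])
B = sp.Matrix([[6, -1, 1], [0, 6, 0], [0, 0, 6]])

# Both have characteristic polynomial (λ - 6)^3 ...
assert A.eigenvals() == {6: 3} and B.eigenvals() == {6: 3}

# ... but the number of Jordan blocks is 3 - rank(M - 6I):
# one block (a single 3x3 chain) for A, two blocks for B.
assert 3 - (A - 6 * sp.eye(3)).rank() == 1
assert 3 - (B - 6 * sp.eye(3)).rank() == 2

# Different block structure, so the Jordan forms differ: not similar.
_, JA = A.jordan_form()
_, JB = B.jordan_form()
assert JA != JB
```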

31. SIMILAR. We will compute JCF(A) and JCF(B). If they are the same (up to a rearrangement of the Jordan blocks), then A and B are similar; otherwise they are not. Both matrices have eigenvalues λ = 5, 5, 5. (For A, this is easiest to compute by expanding det(A − λI) along the middle row, and for B, by expanding det(B − λI) along the second column.) In Problem 23, we computed

JCF(A) = [5 1 0; 0 5 1; 0 0 5].

Next, consider

nullspace(B − 5I) = nullspace [−2 −1 −2; 1 1 1; 1 0 1],

and this latter matrix can be row-reduced to [1 1 1; 0 1 0; 0 0 0], which has only one unpivoted column. Thus, the eigenspace of B corresponding to λ = 5 is only one-dimensional, and so

JCF(B) = [5 1 0; 0 5 1; 0 0 5].

Since A and B have the same Jordan canonical form, they are similar matrices.


32. The eigenvalues of A are λ = −1, −1, and the eigenspace corresponding to λ = −1 is only one-dimensional. Thus, we seek a generalized eigenvector v of A corresponding to λ = −1 such that {(A + I)v, v} is a cycle of generalized eigenvectors. Note that A + I = [−2 −2; 2 2] and (A + I)² = 0₂. Thus, every nonzero vector v is a generalized eigenvector of A corresponding to λ = −1. Let us choose v = [1; 0]. Then (A + I)v = [−2; 2]. Form the matrices

S = [−2 1; 2 0] and J = [−1 1; 0 −1].

Via the substitution x = Sy, the system x′ = Ax is transformed into y′ = Jy. The corresponding equations are

y1′ = −y1 + y2 and y2′ = −y2.

The solution to the second equation is

y2(t) = c1 e^(−t).

Substituting this solution into y1′ = −y1 + y2 gives

y1′ + y1 = c1 e^(−t).

This is a first-order linear equation with integrating factor I(t) = e^t. When we multiply the differential equation for y1(t) by I(t), it becomes (y1 · e^t)′ = c1. Integrating both sides yields y1 · e^t = c1 t + c2. Thus,

y1(t) = c1 t e^(−t) + c2 e^(−t).

Thus, we have

y(t) = [y1(t); y2(t)] = [c1 t e^(−t) + c2 e^(−t); c1 e^(−t)].

Finally, we solve for x(t):

x(t) = Sy(t) = [−2 1; 2 0][c1 t e^(−t) + c2 e^(−t); c1 e^(−t)]
= [−2(c1 t e^(−t) + c2 e^(−t)) + c1 e^(−t); 2(c1 t e^(−t) + c2 e^(−t))]
= c1 e^(−t) [−2t + 1; 2t] + c2 e^(−t) [−2; 2].

This is an acceptable answer, or we can write the individual equations comprising the general solution:

x1(t) = −2(c1 t e^(−t) + c2 e^(−t)) + c1 e^(−t) and x2(t) = 2(c1 t e^(−t) + c2 e^(−t)).
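The general solution can be verified symbolically by checking that x′ − Ax vanishes identically (a sympy sketch; c1, c2 are the arbitrary constants above, and A is recovered from A + I):

```python
import sympy as sp

t, c1, c2 = sp.symbols('t c1 c2')
A = sp.Matrix([[-3, -2], [2, 1]])   # A = (A + I) - I

# General solution from Problem 32.
x = (c1 * sp.exp(-t) * sp.Matrix([-2*t + 1, 2*t])
     + c2 * sp.exp(-t) * sp.Matrix([-2, 2]))

# x'(t) - A x(t) should be identically zero.
assert sp.simplify(x.diff(t) - A * x) == sp.zeros(2, 1)
```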

33. The eigenvalues of A are λ = −1, −1, 1. The eigenspace corresponding to λ = −1 is

nullspace(A + I) = nullspace [1 1 0; 0 1 1; 1 1 0],

which is only one-dimensional, spanned by the vector (1, −1, 1). Therefore, we seek a generalized eigenvector v of A corresponding to λ = −1 such that {(A + I)v, v} is a cycle of generalized eigenvectors. Note that

A + I = [1 1 0; 0 1 1; 1 1 0] and (A + I)² = [1 2 1; 1 2 1; 1 2 1].

In order that v be a generalized eigenvector of A corresponding to λ = −1, we should choose v such that (A + I)²v = 0 and (A + I)v ≠ 0. There are many valid choices; let us choose v = [1; 0; −1]. Then (A + I)v = [1; −1; 1]. Hence, we obtain the cycle of generalized eigenvectors corresponding to λ = −1:

{[1; −1; 1], [1; 0; −1]}.

Next, consider the eigenspace corresponding to λ = 1. For this, we compute

nullspace(A − I) = nullspace [−1 1 0; 0 −1 1; 1 1 −2].

This can be row-reduced to [1 −1 0; 0 1 −1; 0 0 0]. We find the eigenvector [1; 1; 1] as a basis for this eigenspace. Hence, we are ready to form the matrices S and J:

S = [1 1 1; −1 0 1; 1 −1 1] and J = [−1 1 0; 0 −1 0; 0 0 1].

Via the substitution x = Sy, the system x′ = Ax is transformed into y′ = Jy. The corresponding equations are

y1′ = −y1 + y2, y2′ = −y2, y3′ = y3.

The third equation has solution y3(t) = c3 e^t, the second equation has solution y2(t) = c2 e^(−t), and so the first equation becomes

y1′ + y1 = c2 e^(−t).

This is a first-order linear equation with integrating factor I(t) = e^t. When we multiply the differential equation for y1(t) by I(t), it becomes (y1 · e^t)′ = c2. Integrating both sides yields y1 · e^t = c2 t + c1. Thus,

y1(t) = c2 t e^(−t) + c1 e^(−t).

Thus, we have

y(t) = [y1(t); y2(t); y3(t)] = [c2 t e^(−t) + c1 e^(−t); c2 e^(−t); c3 e^t].

Finally, we solve for x(t):

x(t) = Sy(t) = [1 1 1; −1 0 1; 1 −1 1][c2 t e^(−t) + c1 e^(−t); c2 e^(−t); c3 e^t]
= [c2 t e^(−t) + c1 e^(−t) + c2 e^(−t) + c3 e^t; −c2 t e^(−t) − c1 e^(−t) + c3 e^t; c2 t e^(−t) + c1 e^(−t) − c2 e^(−t) + c3 e^t]
= c1 e^(−t) [1; −1; 1] + c2 e^(−t) [t + 1; −t; t − 1] + c3 e^t [1; 1; 1].
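As in Problem 32, the general solution can be checked by substituting it back into x′ = Ax (a sympy sketch; A is recovered from the A + I displayed above):

```python
import sympy as sp

t, c1, c2, c3 = sp.symbols('t c1 c2 c3')
A = sp.Matrix([[0, 1, 0], [0, 0, 1], [1, 1, -1]])   # A = (A + I) - I

# General solution from Problem 33.
x = (c1 * sp.exp(-t) * sp.Matrix([1, -1, 1])
     + c2 * sp.exp(-t) * sp.Matrix([t + 1, -t, t - 1])
     + c3 * sp.exp(t) * sp.Matrix([1, 1, 1]))

# x'(t) - A x(t) should be identically zero.
assert sp.simplify(x.diff(t) - A * x) == sp.zeros(3, 1)
```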

34. The eigenvalues of A are λ = −2,−2,−2. The eigenspace corresponding to λ = −2 is

nullspace(A + 2I) = nullspace

0 0 01 −1 −1

−1 1 1

,

and there are two linearly independent vectors in this nullspace, corresponding to the unpivoted columns ofthe row-echelon form of this matrix. Therefore, the Jordan canonical form of A is

J =

−2 1 00 −2 00 0 −2

.

To form an invertible matrix S such that S−1AS = J , we must find a cycle of generalized eigenvectorscorresponding to λ = −2 of length 2: {(A + 2I)v,v}. Now

A + 2I =

0 0 01 −1 −1

−1 1 1

and (A + 2I)2 = 03.

Since (A + 2I)2 = 03, every nonzero vector in R3 is a generalized eigenvector corresponding to λ = −2. Weneed only find a nonzero vector v such that (A + 2I)v 6= 0. There are many valid choices; let us choose

v =

100

. Then (A + 2I)v =

01

−1

, an eigenvector of A corresponding to λ = −2. We also need a

second linearly independent eigenvector corresponding to λ = −2. There are many choices; let us choose 110

. Therefore, we can form the matrix

S =

0 1 11 0 1

−1 0 0

.

Via the substitution x = Sy, the system x′ = Ax is transformed into y′ = Jy. The corresponding equationsare

y′1 = −2y1 + y2, y′2 = −2y2, y′3 = −2y3.

Page 428: Differential Equations and Linear Algebra - 3e - Goode -Solutions

428

The third equation has solution y3(t) = c3e^{-2t}, the second equation has solution y2(t) = c2e^{-2t}, and so the first equation becomes

y1′ + 2y1 = c2e^{-2t}.

This is a first-order linear equation with integrating factor I(t) = e^{2t}. When we multiply the differential equation for y1(t) by I(t), it becomes (y1 · e^{2t})′ = c2. Integrating both sides yields y1 · e^{2t} = c2t + c1. Thus,

y1(t) = c2te^{-2t} + c1e^{-2t}.

Thus, we have

y(t) = \begin{bmatrix} y1(t) \\ y2(t) \\ y3(t) \end{bmatrix} = \begin{bmatrix} c2te^{-2t} + c1e^{-2t} \\ c2e^{-2t} \\ c3e^{-2t} \end{bmatrix}.

Finally, we solve for x(t):

x(t) = Sy(t) = \begin{bmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ -1 & 0 & 0 \end{bmatrix} \begin{bmatrix} c2te^{-2t} + c1e^{-2t} \\ c2e^{-2t} \\ c3e^{-2t} \end{bmatrix} = \begin{bmatrix} c2e^{-2t} + c3e^{-2t} \\ c2te^{-2t} + c1e^{-2t} + c3e^{-2t} \\ -(c2te^{-2t} + c1e^{-2t}) \end{bmatrix} = c1e^{-2t} \begin{bmatrix} 0 \\ 1 \\ -1 \end{bmatrix} + c2e^{-2t} \begin{bmatrix} 1 \\ t \\ -t \end{bmatrix} + c3e^{-2t} \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}.
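The claims used in this derivation can be sanity-checked with exact rational arithmetic. The sketch below (my addition, not part of the textbook solution) rebuilds A as SJS^{-1} from the S and J above and confirms that (A + 2I)^2 = 0_3 and that (0, 1, −1)^T and (1, 1, 0)^T are eigenvectors for λ = −2. The helpers matmul and inverse are ad hoc utilities, not from any library.

```python
from fractions import Fraction as F

def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def inverse(M):
    """Invert a square matrix by Gauss-Jordan elimination over the rationals."""
    n = len(M)
    aug = [[F(M[i][j]) for j in range(n)] + [F(int(i == j)) for j in range(n)]
           for i in range(n)]
    for col in range(n):
        pivot = next(r for r in range(col, n) if aug[r][col] != 0)
        aug[col], aug[pivot] = aug[pivot], aug[col]
        p = aug[col][col]
        aug[col] = [x / p for x in aug[col]]
        for r in range(n):
            if r != col and aug[r][col] != 0:
                f = aug[r][col]
                aug[r] = [a - f * b for a, b in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

S = [[0, 1, 1], [1, 0, 1], [-1, 0, 0]]
J = [[-2, 1, 0], [0, -2, 0], [0, 0, -2]]

# Reconstruct A = S J S^{-1}; the textbook's A itself is not restated here.
A = matmul(matmul(S, J), inverse(S))

# (A + 2I)^2 should be the zero matrix, as claimed in the solution.
B = [[A[i][j] + 2 * int(i == j) for j in range(3)] for i in range(3)]
B2 = matmul(B, B)
assert all(x == 0 for row in B2 for x in row)

# (0, 1, -1) and (1, 1, 0) should be eigenvectors with eigenvalue -2.
for v in ([0, 1, -1], [1, 1, 0]):
    Av = [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]
    assert Av == [-2 * x for x in v]
```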

35. The eigenvalues of A are λ = 4, 4, 4. The eigenspace corresponding to λ = 4 is

nullspace(A − 4I) = nullspace \begin{bmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix},

and there is only one linearly independent eigenvector. Therefore, the Jordan canonical form of A is

J = \begin{bmatrix} 4 & 1 & 0 \\ 0 & 4 & 1 \\ 0 & 0 & 4 \end{bmatrix}.

Next, we need to find an invertible matrix S such that S^{-1}AS = J. To do this, we must find a cycle of generalized eigenvectors {(A − 4I)^2 v, (A − 4I)v, v} of length 3 corresponding to λ = 4. We have

A − 4I = \begin{bmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}, (A − 4I)^2 = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \end{bmatrix}, and (A − 4I)^3 = 0_3.

From (A − 4I)^3 = 0_3, we know that every nonzero vector is a generalized eigenvector corresponding to λ = 4. We choose v = (1, 0, 0)^T (any multiple of the chosen vector v would be acceptable as well). Thus,


(A − 4I)v = (0, 1, 0)^T and (A − 4I)^2 v = (0, 0, 1)^T. Thus, we have the cycle of generalized eigenvectors

{(0, 0, 1)^T, (0, 1, 0)^T, (1, 0, 0)^T}.

Thus, we can form the matrix

S = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{bmatrix}.

Via the substitution x = Sy, the system x′ = Ax is transformed into y′ = Jy. The corresponding equations are

y1′ = 4y1 + y2,  y2′ = 4y2 + y3,  y3′ = 4y3.

The third equation has solution y3(t) = c3e^{4t}, and the second equation becomes

y2′ − 4y2 = c3e^{4t}.

This is a first-order linear equation with integrating factor I(t) = e^{-4t}. When we multiply the differential equation for y2(t) by I(t), it becomes (y2 · e^{-4t})′ = c3. Integrating both sides yields y2 · e^{-4t} = c3t + c2. Thus,

y2(t) = c3te^{4t} + c2e^{4t} = e^{4t}(c3t + c2).

Therefore, the differential equation for y1(t) becomes

y1′ − 4y1 = e^{4t}(c3t + c2).

This equation is first-order linear with integrating factor I(t) = e^{-4t}. When we multiply the differential equation for y1(t) by I(t), it becomes

(y1 · e^{-4t})′ = c3t + c2.

Integrating both sides, we obtain

y1 · e^{-4t} = c3 t^2/2 + c2t + c1.

Hence,

y1(t) = e^{4t}(c3 t^2/2 + c2t + c1).

Thus, we have

y(t) = \begin{bmatrix} y1(t) \\ y2(t) \\ y3(t) \end{bmatrix} = \begin{bmatrix} e^{4t}(c3 t^2/2 + c2t + c1) \\ e^{4t}(c3t + c2) \\ c3e^{4t} \end{bmatrix}.


Finally, we solve for x(t):

x(t) = Sy(t) = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{bmatrix} \begin{bmatrix} e^{4t}(c3 t^2/2 + c2t + c1) \\ e^{4t}(c3t + c2) \\ c3e^{4t} \end{bmatrix} = \begin{bmatrix} c3e^{4t} \\ e^{4t}(c3t + c2) \\ e^{4t}(c3 t^2/2 + c2t + c1) \end{bmatrix} = c1e^{4t} \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} + c2e^{4t} \begin{bmatrix} 0 \\ 1 \\ t \end{bmatrix} + c3e^{4t} \begin{bmatrix} 1 \\ t \\ t^2/2 \end{bmatrix}.
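Here the matrix A is fully determined by the A − 4I displayed in the solution, so the structural claims can be checked directly. The sketch below (my addition) confirms the nilpotency (A − 4I)^3 = 0_3, the cycle of generalized eigenvectors, and the relation AS = SJ, which is equivalent to S^{-1}AS = J since S is invertible.

```python
def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# A - 4I is given explicitly in the solution, so A itself is determined.
A = [[4, 0, 0], [1, 4, 0], [0, 1, 4]]
N = [[0, 0, 0], [1, 0, 0], [0, 1, 0]]          # A - 4I
S = [[0, 0, 1], [0, 1, 0], [1, 0, 0]]
J = [[4, 1, 0], [0, 4, 1], [0, 0, 4]]

# (A - 4I)^3 = 0, so a cycle of the full length 3 exists.
N3 = matmul(matmul(N, N), N)
assert N3 == [[0, 0, 0], [0, 0, 0], [0, 0, 0]]

# The cycle {(A - 4I)^2 v, (A - 4I)v, v} with v = (1, 0, 0):
v = [1, 0, 0]
Nv = [sum(N[i][k] * v[k] for k in range(3)) for i in range(3)]
N2v = [sum(N[i][k] * Nv[k] for k in range(3)) for i in range(3)]
assert Nv == [0, 1, 0] and N2v == [0, 0, 1]

# AS = SJ is equivalent to S^{-1} A S = J, since S is invertible.
assert matmul(A, S) == matmul(S, J)
```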

36. The eigenvalues of A are λ = −3, −3. The eigenspace corresponding to λ = −3 is only one-dimensional, and therefore the Jordan canonical form of A contains one 2 × 2 Jordan block:

J = \begin{bmatrix} -3 & 1 \\ 0 & -3 \end{bmatrix}.

Next, we look for a cycle of generalized eigenvectors of the form {(A + 3I)v, v}, where v is a generalized eigenvector of A. Since

(A + 3I)^2 = \begin{bmatrix} 1 & -1 \\ 1 & -1 \end{bmatrix}^2 = 0_2,

every nonzero vector in R^2 is a generalized eigenvector. Let us choose v = (1, 0)^T. Then (A + 3I)v = (1, 1)^T. Thus, we form the matrix

S = \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix}.

Via the substitution x = Sy, the system x′ = Ax is transformed into y′ = Jy. The corresponding equations are

y1′ = −3y1 + y2 and y2′ = −3y2.

The second equation has solution y2(t) = c2e^{-3t}. Substituting this expression for y2(t) into the differential equation for y1(t) yields

y1′ + 3y1 = c2e^{-3t}.

An integrating factor for this first-order linear differential equation is I(t) = e^{3t}. Multiplying the differential equation for y1(t) by I(t) gives us (y1 · e^{3t})′ = c2. Integrating both sides, we obtain y1 · e^{3t} = c2t + c1. Thus,

y1(t) = c2te^{-3t} + c1e^{-3t}.

Thus,

y(t) = \begin{bmatrix} y1(t) \\ y2(t) \end{bmatrix} = \begin{bmatrix} c2te^{-3t} + c1e^{-3t} \\ c2e^{-3t} \end{bmatrix}.

Finally, we solve for x(t):

x(t) = Sy(t) = \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} c2te^{-3t} + c1e^{-3t} \\ c2e^{-3t} \end{bmatrix} = \begin{bmatrix} c2te^{-3t} + c1e^{-3t} + c2e^{-3t} \\ c2te^{-3t} + c1e^{-3t} \end{bmatrix} = c1e^{-3t} \begin{bmatrix} 1 \\ 1 \end{bmatrix} + c2e^{-3t} \begin{bmatrix} t+1 \\ t \end{bmatrix}.


Now, we must apply the initial condition:

\begin{bmatrix} 0 \\ -1 \end{bmatrix} = x(0) = c1 \begin{bmatrix} 1 \\ 1 \end{bmatrix} + c2 \begin{bmatrix} 1 \\ 0 \end{bmatrix}.

Therefore, c1 = −1 and c2 = 1. Hence, the unique solution to the given initial-value problem is

x(t) = -e^{-3t} \begin{bmatrix} 1 \\ 1 \end{bmatrix} + e^{-3t} \begin{bmatrix} t+1 \\ t \end{bmatrix}.
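Since A + 3I = [[1, −1], [1, −1]] is shown earlier in this solution, A itself is determined, and the claimed solution can be tested numerically. The sketch below (my addition) checks x(0) = (0, −1) and compares a central finite-difference estimate of x′(t) against A x(t) at a few sample times.

```python
import math

# A is reconstructed from A + 3I = [[1, -1], [1, -1]] shown in the solution.
A = [[-2, -1], [1, -4]]

def x(t):
    """The claimed solution x(t) = -e^{-3t}(1, 1) + e^{-3t}(t + 1, t)."""
    e = math.exp(-3 * t)
    return (-e * 1 + e * (t + 1), -e * 1 + e * t)

# Initial condition x(0) = (0, -1):
assert x(0) == (0.0, -1.0)

# x'(t) = A x(t), checked by a central finite difference at sample times.
h = 1e-6
for t in (0.0, 0.5, 1.0, 2.0):
    xp = [(a - b) / (2 * h) for a, b in zip(x(t + h), x(t - h))]
    Ax = [A[0][0] * x(t)[0] + A[0][1] * x(t)[1],
          A[1][0] * x(t)[0] + A[1][1] * x(t)[1]]
    assert all(abs(p - q) < 1e-4 for p, q in zip(xp, Ax))
```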

37. Let J = JCF(A) = JCF(B). Then there exist invertible matrices S and T such that

S^{-1}AS = J and T^{-1}BT = J.

Thus, S^{-1}AS = T^{-1}BT, and so

B = TS^{-1}AST^{-1} = (ST^{-1})^{-1}A(ST^{-1}),

which implies by definition that A and B are similar matrices.

38. Since the characteristic polynomial has degree 3, we know that A is a 3 × 3 matrix. Moreover, the characteristic equation has roots λ = 0, 0, 0. Hence, the Jordan canonical form J of A must be one of the three below:

\begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}, \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}, \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix}.

In all three cases, note that J^3 = 0_3. Moreover, there exists an invertible matrix S such that S^{-1}AS = J. Thus, A = SJS^{-1}, and so

A^3 = (SJS^{-1})^3 = SJ^3S^{-1} = S 0_3 S^{-1} = 0_3,

which implies that A is nilpotent.
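The key step, J^3 = 0_3 for each candidate Jordan form, is easy to confirm mechanically. A minimal sketch (my addition):

```python
def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

Z = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
candidates = [
    [[0, 0, 0], [0, 0, 0], [0, 0, 0]],   # three 1x1 blocks
    [[0, 1, 0], [0, 0, 0], [0, 0, 0]],   # one 2x2 block, one 1x1 block
    [[0, 1, 0], [0, 0, 1], [0, 0, 0]],   # one 3x3 block
]
for J in candidates:
    assert matmul(matmul(J, J), J) == Z
```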

39.

(a): Let J be an n × n Jordan block with eigenvalue λ. Then the eigenvalues of J^T are λ (with multiplicity n). The matrix J^T − λI consists of 1's on the subdiagonal (the diagonal parallel to and directly beneath the main diagonal) and zeros elsewhere. Hence, the null space of J^T − λI is one-dimensional (with a free variable corresponding to the right-most column of J^T − λI). Therefore, the Jordan canonical form of J^T consists of a single Jordan block, since there is only one linearly independent eigenvector corresponding to the eigenvalue λ. However, a single Jordan block with eigenvalue λ is precisely the matrix J. Therefore,

JCF(J^T) = J.

(b): Let JCF(A) = J. Then there exists an invertible matrix S such that S^{-1}AS = J. Transposing both sides, we obtain (S^{-1}AS)^T = J^T, or S^T A^T (S^{-1})^T = J^T, or S^T A^T (S^T)^{-1} = J^T. Hence, the matrix A^T is similar to J^T. However, by applying part (a) to each block in J^T, we find that JCF(J^T) = J. Hence, J^T is similar to J. By Problem 26 in Section 5.8, we conclude that A^T is similar to J. Hence, A^T and J have the same Jordan canonical form. However, since JCF(J) = J, we deduce that JCF(A^T) = J = JCF(A), as required.

Solutions to Section 5.12


Problems:

1. NO. Note that T(1, 1) = (2, 0, 0, 1) and T(2, 2) = (4, 0, 0, 4) ≠ 2T(1, 1). Thus, T is not a linear transformation.

2. YES. The function T can be represented by the matrix function T(x) = Ax, where

A = \begin{bmatrix} 2 & -3 & 0 \\ -1 & 0 & 0 \end{bmatrix}

and x is a vector in R^3. Every matrix transformation of the form T(x) = Ax is linear. Since the domain of T has larger dimension than the codomain of T, T cannot be one-to-one. However, since

T(0, −1/3, 0) = (1, 0) and T(−1, −2/3, 0) = (0, 1),

we see that T is onto. Thus, Rng(T) = R^2 (2-dimensional), and so a basis for Rng(T) is {(1, 0), (0, 1)}. The kernel of T consists of vectors of the form (0, 0, z), and hence, a basis for Ker(T) is {(0, 0, 1)} and Ker(T) is 1-dimensional.
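The specific preimages and the kernel direction quoted above are quick to verify with exact arithmetic. A minimal sketch (my addition):

```python
from fractions import Fraction as F

A = [[2, -3, 0], [-1, 0, 0]]

def T(x):
    """Apply T(x) = Ax to a vector in R^3."""
    return tuple(sum(A[i][k] * x[k] for k in range(3)) for i in range(2))

# The two preimages exhibited in the solution:
assert T((0, F(-1, 3), 0)) == (1, 0)
assert T((-1, F(-2, 3), 0)) == (0, 1)
# The claimed kernel direction (0, 0, z):
assert T((0, 0, 1)) == (0, 0)
```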

3. YES. The function T can be represented by the matrix function T(x) = Ax, where

A = \begin{bmatrix} 0 & 0 & -3 \\ 2 & -1 & 5 \end{bmatrix}

and x is a vector in R^3. Every matrix transformation of the form T(x) = Ax is linear. Since the domain of T has larger dimension than the codomain of T, T cannot be one-to-one. However, since

T(1, 1/3, −1/3) = (1, 0) and T(1, 1, 0) = (0, 1),

we see that T is onto. Thus, Rng(T) = R^2, and so a basis for Rng(T) is {(1, 0), (0, 1)}, and Rng(T) is 2-dimensional. The kernel of T consists of vectors of the form (t, 2t, 0), where t ∈ R, and hence, a basis for Ker(T) is {(1, 2, 0)}. We have that Ker(T) is 1-dimensional.

4. YES. The function T is a linear transformation, because if g, h ∈ C[0, 1], then

T (g + h) = ((g + h)(0), (g + h)(1)) = (g(0) + h(0), g(1) + h(1)) = (g(0), g(1)) + (h(0), h(1)) = T (g) + T (h),

and if c is a scalar,

T (cg) = ((cg)(0), (cg)(1)) = (cg(0), cg(1)) = c(g(0), g(1)) = cT (g).

Note that any function g ∈ C[0, 1] for which g(0) = g(1) = 0 (such as g(x) = x^2 − x) belongs to Ker(T), and hence, T is not one-to-one. However, given (a, b) ∈ R^2, note that g defined by

g(x) = a + (b − a)x

satisfies

T(g) = (g(0), g(1)) = (a, b),


so T is onto. Thus, Rng(T) = R^2, with basis {(1, 0), (0, 1)} (2-dimensional). Now, Ker(T) is infinite-dimensional. We cannot list a basis for this subspace of C[0, 1].

5. YES. The function T can be represented by the matrix function T(x) = Ax, where

A = \begin{bmatrix} 1/5 & 1/5 \end{bmatrix}

and x is a vector in R^2. Every such matrix transformation is linear. Since the domain of T has larger dimension than the codomain of T, T cannot be one-to-one. However, T(5, 0) = 1, so we see that T is onto. Thus, Rng(T) = R, a 1-dimensional space with basis {1}. The kernel of T consists of vectors of the form t(1, −1), and hence, a basis for Ker(T) is {(1, −1)}. We have that Ker(T) is 1-dimensional.

6. NO. For instance, note that

T \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix} = (1, 0) and T \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix} = (0, 1),

and so

T \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix} + T \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix} = (1, 0) + (0, 1) = (1, 1).

However,

T(\begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix} + \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}) = T \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix} = (2, 2).

Thus, T does not respect addition, and hence, T is not a linear transformation. Similar work could be given to show that T also fails to respect scalar multiplication.

7. YES. We can verify that T respects addition and scalar multiplication as follows:

T respects addition: Let a1 + b1x + c1x^2 and a2 + b2x + c2x^2 belong to P2. Then

T((a1 + b1x + c1x^2) + (a2 + b2x + c2x^2)) = T((a1 + a2) + (b1 + b2)x + (c1 + c2)x^2)
= \begin{bmatrix} -(a1 + a2) - (b1 + b2) & 0 \\ 3(c1 + c2) - (a1 + a2) & -2(b1 + b2) \end{bmatrix}
= \begin{bmatrix} -a1 - b1 & 0 \\ 3c1 - a1 & -2b1 \end{bmatrix} + \begin{bmatrix} -a2 - b2 & 0 \\ 3c2 - a2 & -2b2 \end{bmatrix}
= T(a1 + b1x + c1x^2) + T(a2 + b2x + c2x^2).

T respects scalar multiplication: Let a + bx + cx^2 belong to P2 and let k be a scalar. Then we have

T(k(a + bx + cx^2)) = T((ka) + (kb)x + (kc)x^2) = \begin{bmatrix} -ka - kb & 0 \\ 3(kc) - (ka) & -2(kb) \end{bmatrix} = \begin{bmatrix} k(-a - b) & 0 \\ k(3c - a) & k(-2b) \end{bmatrix} = k \begin{bmatrix} -a - b & 0 \\ 3c - a & -2b \end{bmatrix} = kT(a + bx + cx^2).


Next, observe that a + bx + cx^2 belongs to Ker(T) if and only if −a − b = 0, 3c − a = 0, and −2b = 0. These equations require that b = 0, a = 0, and c = 0. Thus, Ker(T) = {0}, which implies that Ker(T) is 0-dimensional (with basis ∅), and that T is one-to-one. However, since M2(R) is 4-dimensional and P2 is only 3-dimensional, we see immediately that T cannot be onto. By the Rank-Nullity Theorem, in fact, Rng(T) must be 3-dimensional, and a basis is given by

{T(1), T(x), T(x^2)} = { \begin{bmatrix} -1 & 0 \\ -1 & 0 \end{bmatrix}, \begin{bmatrix} -1 & 0 \\ 0 & -2 \end{bmatrix}, \begin{bmatrix} 0 & 0 \\ 3 & 0 \end{bmatrix} }.
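Both the linearity computation and the kernel claim above can be checked mechanically. The sketch below (my addition) tests additivity on sample coefficient triples and brute-forces the kernel over a small integer grid.

```python
def T(a, b, c):
    """T(a + bx + cx^2) as the 2x2 matrix given in the solution."""
    return [[-a - b, 0], [3 * c - a, -2 * b]]

# Additivity on a sample of coefficient triples:
for p in [(1, 2, 3), (-1, 0, 4)]:
    for q in [(2, -2, 1), (0, 5, -3)]:
        s = tuple(x + y for x, y in zip(p, q))
        TpTq = [[T(*p)[i][j] + T(*q)[i][j] for j in range(2)]
                for i in range(2)]
        assert T(*s) == TpTq

# Kernel search over a small grid: only the zero polynomial maps to 0.
zero = [[0, 0], [0, 0]]
kernel = [(a, b, c) for a in range(-3, 4) for b in range(-3, 4)
          for c in range(-3, 4) if T(a, b, c) == zero]
assert kernel == [(0, 0, 0)]
```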

8. YES. We can verify that T respects addition and scalar multiplication as follows:

T respects addition: Let A, B belong to M2(R). Then

T(A + B) = (A + B) + (A + B)^T = A + B + A^T + B^T = (A + A^T) + (B + B^T) = T(A) + T(B).

T respects scalar multiplication: Let A belong to M2(R), and let k be a scalar. Then

T(kA) = (kA) + (kA)^T = kA + kA^T = k(A + A^T) = kT(A).

Thus, T is a linear transformation. Note that if A is any skew-symmetric matrix, then T(A) = A + A^T = A + (−A) = 0, so Ker(T) consists precisely of the 2 × 2 skew-symmetric matrices. These matrices take the form

\begin{bmatrix} 0 & -a \\ a & 0 \end{bmatrix}

for a constant a, and thus a basis for Ker(T) is given by { \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} }, and Ker(T) is 1-dimensional. Consequently, T is not one-to-one.

Therefore, by Proposition 5.4.13, T also fails to be onto. In fact, by the Rank-Nullity Theorem, Rng(T) must be 3-dimensional. A typical element of the range of T takes the form

T(\begin{bmatrix} a & b \\ c & d \end{bmatrix}) = \begin{bmatrix} a & b \\ c & d \end{bmatrix} + \begin{bmatrix} a & c \\ b & d \end{bmatrix} = \begin{bmatrix} 2a & b+c \\ c+b & 2d \end{bmatrix}.

The characterizing feature of this matrix is that it is symmetric. So Rng(T) consists of all 2 × 2 symmetric matrices, and hence a basis for Rng(T) is

{ \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}, \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}, \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} }.
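The two structural claims, skew-symmetric matrices lie in the kernel and every output is symmetric, can be spot-checked directly. A minimal sketch (my addition):

```python
def T(A):
    """T(A) = A + A^T for a 2x2 matrix."""
    return [[A[i][j] + A[j][i] for j in range(2)] for i in range(2)]

# Skew-symmetric matrices are in the kernel:
skew = [[0, -7], [7, 0]]
assert T(skew) == [[0, 0], [0, 0]]

# Every output is symmetric, matching the description of Rng(T):
for A in ([[1, 2], [3, 4]], [[0, 5], [-1, 2]], [[9, 0], [0, -3]]):
    S = T(A)
    assert S[0][1] == S[1][0]
    assert S == [[2 * A[0][0], A[0][1] + A[1][0]],
                 [A[1][0] + A[0][1], 2 * A[1][1]]]
```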

9. YES. We can verify that T respects addition and scalar multiplication as follows:

T respects addition: Let (a1, b1, c1) and (a2, b2, c2) belong to R^3. Then

T((a1, b1, c1) + (a2, b2, c2)) = T(a1 + a2, b1 + b2, c1 + c2)
= (a1 + a2)x^2 + (2(b1 + b2) − (c1 + c2))x + ((a1 + a2) − 2(b1 + b2) + (c1 + c2))
= [a1x^2 + (2b1 − c1)x + (a1 − 2b1 + c1)] + [a2x^2 + (2b2 − c2)x + (a2 − 2b2 + c2)]
= T((a1, b1, c1)) + T((a2, b2, c2)).

T respects scalar multiplication: Let (a, b, c) belong to R^3 and let k be a scalar. Then

T(k(a, b, c)) = T(ka, kb, kc) = (ka)x^2 + (2kb − kc)x + (ka − 2kb + kc) = k(ax^2 + (2b − c)x + (a − 2b + c)) = kT((a, b, c)).


Thus, T is a linear transformation. Now, (a, b, c) belongs to Ker(T) if and only if a = 0, 2b − c = 0, and a − 2b + c = 0. These equations collectively require that a = 0 and 2b = c. Setting c = 2t, we find that b = t. Hence, (a, b, c) belongs to Ker(T) if and only if (a, b, c) has the form (0, t, 2t) = t(0, 1, 2). Hence, {(0, 1, 2)} is a basis for Ker(T), which is therefore 1-dimensional. Hence, T is not one-to-one.

By Proposition 5.4.13, T is also not onto. In fact, the Rank-Nullity Theorem implies that Rng(T) must be 2-dimensional. It is spanned by

{T(1, 0, 0), T(0, 1, 0), T(0, 0, 1)} = {x^2 + 1, 2x − 2, −x + 1},

but the last two polynomials are proportional to each other. Omitting the polynomial 2x − 2 (this is an arbitrary choice; we could have omitted −x + 1 instead), we arrive at a basis for Rng(T): {x^2 + 1, −x + 1}.

10. YES. We can verify that T respects addition and scalar multiplication as follows:

T respects addition: Let (x1, x2, x3) and (y1, y2, y3) be vectors in R^3. Then

T((x1, x2, x3) + (y1, y2, y3)) = T(x1 + y1, x2 + y2, x3 + y3)
= \begin{bmatrix} 0 & (x1 + y1) - (x2 + y2) + (x3 + y3) \\ -(x1 + y1) + (x2 + y2) - (x3 + y3) & 0 \end{bmatrix}
= \begin{bmatrix} 0 & x1 - x2 + x3 \\ -x1 + x2 - x3 & 0 \end{bmatrix} + \begin{bmatrix} 0 & y1 - y2 + y3 \\ -y1 + y2 - y3 & 0 \end{bmatrix}
= T((x1, x2, x3)) + T((y1, y2, y3)).

T respects scalar multiplication: Let (x1, x2, x3) belong to R^3 and let k be a scalar. Then

T(k(x1, x2, x3)) = T(kx1, kx2, kx3) = \begin{bmatrix} 0 & (kx1) - (kx2) + (kx3) \\ -(kx1) + (kx2) - (kx3) & 0 \end{bmatrix} = k \begin{bmatrix} 0 & x1 - x2 + x3 \\ -x1 + x2 - x3 & 0 \end{bmatrix} = kT((x1, x2, x3)).

Thus, T is a linear transformation. Now, (x1, x2, x3) belongs to Ker(T) if and only if x1 − x2 + x3 = 0 and −x1 + x2 − x3 = 0. Of course, the latter equation is equivalent to the former, so the kernel of T consists simply of ordered triples (x1, x2, x3) with x1 − x2 + x3 = 0. Setting x3 = t and x2 = s, we have x1 = s − t, so a typical element of Ker(T) takes the form (s − t, s, t), where s, t ∈ R. Extracting the free variables, we find a basis for Ker(T): {(1, 1, 0), (−1, 0, 1)}. Hence, Ker(T) is 2-dimensional.

By the Rank-Nullity Theorem, Rng(T) must be 1-dimensional. In fact, Rng(T) consists precisely of the set of 2 × 2 skew-symmetric matrices, with basis { \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} }. Since M2(R) is 4-dimensional, T fails to be onto.

11. We have T(x, y, z) = (−x + 8y, 2x − 2y − 5z).

12. We have T(x, y) = (−x + 4y, 2y, 3x − 3y, 3x − 3y, 2x − 6y).

13. We have

T(x) = (x/2) T(2) = (x/2)(−1, 5, 0, −2) = (−x/2, 5x/2, 0, −x).


14. For an arbitrary 2 × 2 matrix \begin{bmatrix} a & b \\ c & d \end{bmatrix}, if we write

\begin{bmatrix} a & b \\ c & d \end{bmatrix} = r \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} + s \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} + t \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} + u \begin{bmatrix} 1 & 1 \\ 0 & 0 \end{bmatrix},

we can solve for r, s, t, u to find

r = d, s = c, t = a − b + c − d, u = b − c.

Thus,

T(\begin{bmatrix} a & b \\ c & d \end{bmatrix}) = dT \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} + cT \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} + (a − b + c − d)T \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} + (b − c)T \begin{bmatrix} 1 & 1 \\ 0 & 0 \end{bmatrix}
= d(2, −5) + c(0, −3) + (a − b + c − d)(1, 1) + (b − c)(−6, 2)
= (a − 7b + 7c + d, a + b − 4c − 6d).
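Both the coefficient solve and the final formula can be verified on sample matrices. A minimal sketch (my addition):

```python
def combo(a, b, c, d):
    """Coefficients r, s, t, u in the basis decomposition from the solution."""
    return d, c, a - b + c - d, b - c

E = ([[1, 0], [0, 1]], [[0, 1], [1, 0]], [[1, 0], [0, 0]], [[1, 1], [0, 0]])
Tvals = ((2, -5), (0, -3), (1, 1), (-6, 2))  # given values of T on the basis

for (a, b, c, d) in [(1, 2, 3, 4), (0, -1, 5, 2), (7, 7, 0, -3)]:
    coeffs = combo(a, b, c, d)
    # The decomposition reproduces the original matrix:
    M = [[sum(k * E[m][i][j] for m, k in enumerate(coeffs)) for j in range(2)]
         for i in range(2)]
    assert M == [[a, b], [c, d]]
    # The assembled formula for T matches the closed form:
    out = tuple(sum(k * Tvals[m][p] for m, k in enumerate(coeffs))
                for p in range(2))
    assert out == (a - 7 * b + 7 * c + d, a + b - 4 * c - 6 * d)
```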

15. For an arbitrary element ax^2 + bx + c in P2, if we write

ax^2 + bx + c = r(x^2 − x − 3) + s(2x + 5) + 6t = rx^2 + (−r + 2s)x + (−3r + 5s + 6t),

we can solve for r, s, t to find r = a, s = (1/2)(a + b), and t = (1/12)a − (5/12)b + (1/6)c. Thus,

T(ax^2 + bx + c) = aT(x^2 − x − 3) + (1/2)(a + b)T(2x + 5) + ((1/12)a − (5/12)b + (1/6)c)T(6)
= a \begin{bmatrix} -2 & 1 \\ -4 & -1 \end{bmatrix} + (1/2)(a + b) \begin{bmatrix} 0 & 1 \\ 2 & -2 \end{bmatrix} + ((1/12)a − (5/12)b + (1/6)c) \begin{bmatrix} 12 & 6 \\ 6 & 18 \end{bmatrix}
= \begin{bmatrix} -a - 5b + 2c & 2a - 2b + c \\ -(5/2)a - (3/2)b + c & -(1/2)a - (17/2)b + 3c \end{bmatrix}.

16. Since dim[P5] = 6 and dim[M2(R)] = 4 = dim[Rng(T)] (since T is onto), the Rank-Nullity Theorem gives

dim[Ker(T)] = 6 − 4 = 2.

17. Since T is one-to-one, dim[Ker(T)] = 0, so the Rank-Nullity Theorem gives

dim[Rng(T)] = dim[M2×3(R)] = 6.

18. Since A is lower triangular, its eigenvalues lie along the main diagonal: λ1 = 3 and λ2 = −1. Since A is 2 × 2 with two distinct eigenvalues, A is diagonalizable. To get an invertible matrix S such that S^{-1}AS = D, we need to find an eigenvector associated with each eigenvalue:

Eigenvalue λ1 = 3: To get an eigenvector, we consider

nullspace(A − 3I) = nullspace \begin{bmatrix} 0 & 0 \\ 16 & -4 \end{bmatrix},


and we see that one possible eigenvector is (1, 4)^T.

Eigenvalue λ2 = −1: To get an eigenvector, we consider

nullspace(A + I) = nullspace \begin{bmatrix} 4 & 0 \\ 16 & 0 \end{bmatrix},

and we see that one possible eigenvector is (0, 1)^T.

Putting the above results together, we form

S = \begin{bmatrix} 1 & 0 \\ 4 & 1 \end{bmatrix} and D = \begin{bmatrix} 3 & 0 \\ 0 & -1 \end{bmatrix}.
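Since A − 3I is displayed above, A = [[3, 0], [16, −1]] is determined, and the diagonalization can be checked via AS = SD (equivalent to S^{-1}AS = D because S is invertible). A minimal sketch (my addition):

```python
def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# A is reconstructed from A - 3I = [[0, 0], [16, -4]] shown above.
A = [[3, 0], [16, -1]]
S = [[1, 0], [4, 1]]
D = [[3, 0], [0, -1]]

# AS = SD confirms that the columns of S are eigenvectors for 3 and -1.
assert matmul(A, S) == matmul(S, D)
```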

19. To compute the eigenvalues, we find the characteristic equation

det(A − λI) = det \begin{bmatrix} 13-λ & -9 \\ 25 & -17-λ \end{bmatrix} = (13 − λ)(−17 − λ) + 225 = λ^2 + 4λ + 4,

and the roots of this equation are λ = −2, −2.

Eigenvalue λ = −2: We compute

nullspace(A + 2I) = nullspace \begin{bmatrix} 15 & -9 \\ 25 & -15 \end{bmatrix},

but since there is only one linearly independent solution to the corresponding system (one free variable), the eigenvalue λ = −2 does not yield two linearly independent eigenvectors. Hence, A is not diagonalizable.

20. To compute the eigenvalues, we find the characteristic equation

det(A − λI) = det \begin{bmatrix} -4-λ & 3 & 0 \\ -6 & 5-λ & 0 \\ 3 & -3 & -1-λ \end{bmatrix} = (-1 - λ)[(-4 - λ)(5 - λ) + 18] = (-1 - λ)(λ^2 - λ - 2) = -(λ + 1)^2 (λ - 2),

so the eigenvalues are λ1 = −1 and λ2 = 2.

Eigenvalue λ1 = −1: To get eigenvectors, we consider

nullspace(A + I) = nullspace \begin{bmatrix} -3 & 3 & 0 \\ -6 & 6 & 0 \\ 3 & -3 & 0 \end{bmatrix} ∼ \begin{bmatrix} 1 & -1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}.

There are two free variables, z = t and y = s. From the first equation, x = s. Thus, two linearly independent eigenvectors can be obtained corresponding to λ1 = −1: (1, 1, 0)^T and (0, 0, 1)^T.


Eigenvalue λ2 = 2: To get an eigenvector, we consider

nullspace(A − 2I) = nullspace \begin{bmatrix} -6 & 3 & 0 \\ -6 & 3 & 0 \\ 3 & -3 & -3 \end{bmatrix} ∼ \begin{bmatrix} 1 & -1 & -1 \\ 0 & 1 & 2 \\ 0 & 0 & 0 \end{bmatrix}.

We let z = t. Then y = −2t from the middle line, and x = −t from the top line. Thus, an eigenvector corresponding to λ2 = 2 may be chosen as (−1, −2, 1)^T.

Putting the above results together, we form

S = \begin{bmatrix} -1 & 1 & 0 \\ -2 & 1 & 0 \\ 1 & 0 & 1 \end{bmatrix} and D = \begin{bmatrix} 2 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & -1 \end{bmatrix}.

21. To compute the eigenvalues, we find the characteristic equation

det(A − λI) = det \begin{bmatrix} 1-λ & 1 & 0 \\ -4 & 5-λ & 0 \\ 17 & -11 & -2-λ \end{bmatrix} = (-2 - λ)[(1 - λ)(5 - λ) + 4] = (-2 - λ)(λ^2 - 6λ + 9) = (-2 - λ)(λ - 3)^2,

so the eigenvalues are λ1 = 3 and λ2 = −2.

Eigenvalue λ1 = 3: To get eigenvectors, we consider

nullspace(A − 3I) = nullspace \begin{bmatrix} -2 & 1 & 0 \\ -4 & 2 & 0 \\ 17 & -11 & -5 \end{bmatrix} ∼ \begin{bmatrix} -2 & 1 & 0 \\ 17 & -11 & -5 \\ 0 & 0 & 0 \end{bmatrix} ∼ \begin{bmatrix} 1 & -3 & -5 \\ -2 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix} ∼ \begin{bmatrix} 1 & -3 & -5 \\ 0 & 1 & 2 \\ 0 & 0 & 0 \end{bmatrix}.

The latter matrix contains only one unpivoted column, so that only one linearly independent eigenvector can be obtained. However, λ1 = 3 occurs with multiplicity 2 as a root of the characteristic equation for the matrix. Therefore, the matrix is not diagonalizable.

22. We are given that the only eigenvalue of A is λ = 2.

Eigenvalue λ = 2: To get eigenvectors, we consider

nullspace(A − 2I) = nullspace \begin{bmatrix} -3 & -1 & 3 \\ 4 & 2 & -4 \\ -1 & 0 & 1 \end{bmatrix} ∼ \begin{bmatrix} 1 & 0 & -1 \\ -3 & -1 & 3 \\ 4 & 2 & -4 \end{bmatrix} ∼ \begin{bmatrix} 1 & 0 & -1 \\ 0 & 1 & 0 \\ 0 & 2 & 0 \end{bmatrix}.

We see that only one unpivoted column will occur in a row-echelon form of A − 2I, and thus, only one linearly independent eigenvector can be obtained. Since the eigenvalue λ = 2 occurs with multiplicity 3 as a root of the characteristic equation for the matrix, the matrix is not diagonalizable.

23. We are given that the eigenvalues of A are λ1 = 4 and λ2 = −1.


Eigenvalue λ1 = 4: We consider

nullspace(A − 4I) = nullspace \begin{bmatrix} 5 & 5 & -5 \\ 0 & -5 & 0 \\ 10 & 5 & -10 \end{bmatrix}.

The middle row tells us that nullspace vectors (x, y, z) must have y = 0. With this information, the first and last rows of the matrix tell us the same thing: x = z. Thus, an eigenvector corresponding to λ1 = 4 may be chosen as (1, 0, 1)^T.

Eigenvalue λ2 = −1: We consider

nullspace(A + I) = nullspace \begin{bmatrix} 10 & 5 & -5 \\ 0 & 0 & 0 \\ 10 & 5 & -5 \end{bmatrix} ∼ \begin{bmatrix} 10 & 5 & -5 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}.

From the first row, a vector (x, y, z) in the nullspace must satisfy 10x + 5y − 5z = 0. Setting z = t and y = s, we get x = (1/2)t − (1/2)s. Hence, the eigenvectors corresponding to λ2 = −1 take the form ((1/2)t − (1/2)s, s, t), and so a basis for this eigenspace is {(1/2, 0, 1)^T, (−1/2, 1, 0)^T}.

Putting the above results together, we form

S = \begin{bmatrix} 1 & 1/2 & -1/2 \\ 0 & 0 & 1 \\ 1 & 1 & 0 \end{bmatrix} and D = \begin{bmatrix} 4 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & -1 \end{bmatrix}.
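Since A + I is displayed above, A itself is determined, and the three eigenvectors can be checked directly with exact fractions. A minimal sketch (my addition):

```python
from fractions import Fraction as F

# A is reconstructed from A + I = [[10, 5, -5], [0, 0, 0], [10, 5, -5]].
A = [[9, 5, -5], [0, -1, 0], [10, 5, -6]]

def apply(A, v):
    return [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]

# Eigenvector for lambda = 4:
v = [1, 0, 1]
assert apply(A, v) == [4 * x for x in v]

# Basis for the eigenspace of lambda = -1:
for w in ([F(1, 2), 0, 1], [F(-1, 2), 1, 0]):
    assert apply(A, w) == [-x for x in w]
```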

28. We will compute the dimension of each of the two eigenspaces associated with the matrix A.

For λ1 = 1, we compute as follows:

nullspace(A − I) = nullspace \begin{bmatrix} 4 & 8 & 16 \\ 4 & 0 & 8 \\ -4 & -4 & -12 \end{bmatrix} ∼ \begin{bmatrix} 1 & 2 & 4 \\ 0 & 1 & 1 \\ 0 & 0 & 0 \end{bmatrix},

which has only one unpivoted column. Thus, this eigenspace is 1-dimensional.

For λ2 = −3, we compute as follows:

nullspace(A + 3I) = nullspace \begin{bmatrix} 8 & 8 & 16 \\ 4 & 4 & 8 \\ -4 & -4 & -8 \end{bmatrix} ∼ \begin{bmatrix} 1 & 1 & 2 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix},

which has two unpivoted columns. Thus, this eigenspace is 2-dimensional. Between the two eigenspaces, we have a complete set of linearly independent eigenvectors. Hence, A is diagonalizable, and we may take

J = \begin{bmatrix} 1 & 0 & 0 \\ 0 & -3 & 0 \\ 0 & 0 & -3 \end{bmatrix}.


(Of course, the eigenvalues of A may be listed in any order along the main diagonal of the Jordan canonical form, thus yielding other valid Jordan canonical forms for A.)

29. We will compute the dimension of each of the two eigenspaces associated with the matrix A.

For λ1 = −1, we compute as follows:

nullspace(A + I) = nullspace \begin{bmatrix} 3 & 1 & 1 \\ 2 & 2 & -2 \\ -1 & 0 & -1 \end{bmatrix} ∼ \begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & -2 \\ 0 & 0 & 0 \end{bmatrix},

which has one unpivoted column. Thus, this eigenspace is 1-dimensional.

For λ2 = 3, we compute as follows:

nullspace(A − 3I) = nullspace \begin{bmatrix} -1 & 1 & 1 \\ 2 & -2 & -2 \\ -1 & 0 & -5 \end{bmatrix} ∼ \begin{bmatrix} 1 & -1 & -1 \\ 0 & 1 & 6 \\ 0 & 0 & 0 \end{bmatrix},

which has one unpivoted column. Thus, this eigenspace is also 1-dimensional.

Since we have only generated two linearly independent eigenvectors from the eigenvalues of A, we know that A is not diagonalizable, and hence, the Jordan canonical form of A is not a diagonal matrix. We must have one 1 × 1 Jordan block and one 2 × 2 Jordan block. To determine which eigenvalue corresponds to the 1 × 1 block and which corresponds to the 2 × 2 block, we must determine the multiplicity of the eigenvalues as roots of the characteristic equation of A.

A short calculation shows that λ1 = −1 occurs with multiplicity 2, while λ2 = 3 occurs with multiplicity 1. Thus, the Jordan canonical form of A is

J = \begin{bmatrix} -1 & 1 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 3 \end{bmatrix}.

30. There are 3 different possible Jordan canonical forms, up to a rearrangement of the Jordan blocks:

Case 1:

J = \begin{bmatrix} -1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 2 \end{bmatrix}.

In this case, the matrix has four linearly independent eigenvectors, and because all Jordan blocks have size 1 × 1, the maximum length of a cycle of generalized eigenvectors for this matrix is 1.

Case 2:

J = \begin{bmatrix} -1 & 1 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 2 \end{bmatrix}.

In this case, the matrix has three linearly independent eigenvectors (two corresponding to λ = −1 and one corresponding to λ = 2). There is a Jordan block of size 2 × 2, and so a cycle of generalized eigenvectors can have a maximum length of 2 in this case.


Case 3:

J = \begin{bmatrix} -1 & 1 & 0 & 0 \\ 0 & -1 & 1 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 2 \end{bmatrix}.

In this case, the matrix has two linearly independent eigenvectors (one corresponding to λ = −1 and one corresponding to λ = 2). There is a Jordan block of size 3 × 3, and so a cycle of generalized eigenvectors can have a maximum length of 3 in this case.
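The case enumerations here and in Problems 31–33 below follow a general pattern: for a single eigenvalue of algebraic multiplicity m, the possible Jordan forms correspond to the integer partitions of m (the multiset of block sizes), and with several eigenvalues the counts multiply. A quick sketch confirming the four claimed counts, 3, 7, 10, and 15 (this bookkeeping is my addition, not the textbook's):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def partitions(n, largest=None):
    """Number of integer partitions of n with parts of size <= largest."""
    if largest is None:
        largest = n
    if n == 0:
        return 1
    return sum(partitions(n - k, k) for k in range(1, min(n, largest) + 1))

# One Jordan form per partition of each eigenvalue's algebraic multiplicity:
assert partitions(3) * partitions(1) == 3    # Problem 30: multiplicities 3, 1
assert partitions(5) == 7                    # Problem 31: multiplicity 5
assert partitions(4) * partitions(2) == 10   # Problem 32: multiplicities 4, 2
assert partitions(4) * partitions(3) == 15   # Problem 33: multiplicities 4, 3
```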

31. There are 7 different possible Jordan canonical forms, up to a rearrangement of the Jordan blocks:

Case 1:

J = \begin{bmatrix} 4 & 0 & 0 & 0 & 0 \\ 0 & 4 & 0 & 0 & 0 \\ 0 & 0 & 4 & 0 & 0 \\ 0 & 0 & 0 & 4 & 0 \\ 0 & 0 & 0 & 0 & 4 \end{bmatrix}.

In this case, the matrix has five linearly independent eigenvectors, and because all Jordan blocks have size 1 × 1, the maximum length of a cycle of generalized eigenvectors for this matrix is 1.

Case 2:

J = \begin{bmatrix} 4 & 1 & 0 & 0 & 0 \\ 0 & 4 & 0 & 0 & 0 \\ 0 & 0 & 4 & 0 & 0 \\ 0 & 0 & 0 & 4 & 0 \\ 0 & 0 & 0 & 0 & 4 \end{bmatrix}.

In this case, the matrix has four linearly independent eigenvectors, and because the largest Jordan block is of size 2 × 2, the maximum length of a cycle of generalized eigenvectors for this matrix is 2.

Case 3:

J = \begin{bmatrix} 4 & 1 & 0 & 0 & 0 \\ 0 & 4 & 0 & 0 & 0 \\ 0 & 0 & 4 & 1 & 0 \\ 0 & 0 & 0 & 4 & 0 \\ 0 & 0 & 0 & 0 & 4 \end{bmatrix}.

In this case, the matrix has three linearly independent eigenvectors, and because the largest Jordan block is of size 2 × 2, the maximum length of a cycle of generalized eigenvectors for this matrix is 2.

Case 4:

J = \begin{bmatrix} 4 & 1 & 0 & 0 & 0 \\ 0 & 4 & 1 & 0 & 0 \\ 0 & 0 & 4 & 0 & 0 \\ 0 & 0 & 0 & 4 & 0 \\ 0 & 0 & 0 & 0 & 4 \end{bmatrix}.

In this case, the matrix has three linearly independent eigenvectors, and because the largest Jordan block is of size 3 × 3, the maximum length of a cycle of generalized eigenvectors for this matrix is 3.


Case 5:

J = \begin{bmatrix} 4 & 1 & 0 & 0 & 0 \\ 0 & 4 & 1 & 0 & 0 \\ 0 & 0 & 4 & 0 & 0 \\ 0 & 0 & 0 & 4 & 1 \\ 0 & 0 & 0 & 0 & 4 \end{bmatrix}.

In this case, the matrix has two linearly independent eigenvectors, and because the largest Jordan block is of size 3 × 3, the maximum length of a cycle of generalized eigenvectors for this matrix is 3.

Case 6:

J = \begin{bmatrix} 4 & 1 & 0 & 0 & 0 \\ 0 & 4 & 1 & 0 & 0 \\ 0 & 0 & 4 & 1 & 0 \\ 0 & 0 & 0 & 4 & 0 \\ 0 & 0 & 0 & 0 & 4 \end{bmatrix}.

In this case, the matrix has two linearly independent eigenvectors, and because the largest Jordan block is of size 4 × 4, the maximum length of a cycle of generalized eigenvectors for this matrix is 4.

Case 7:

J = \begin{bmatrix} 4 & 1 & 0 & 0 & 0 \\ 0 & 4 & 1 & 0 & 0 \\ 0 & 0 & 4 & 1 & 0 \\ 0 & 0 & 0 & 4 & 1 \\ 0 & 0 & 0 & 0 & 4 \end{bmatrix}.

In this case, the matrix has only one linearly independent eigenvector, and because the largest Jordan block is of size 5 × 5, the maximum length of a cycle of generalized eigenvectors for this matrix is 5.

32. There are 10 different possible Jordan canonical forms, up to a rearrangement of the Jordan blocks:

Case 1:

J = \begin{bmatrix} 6 & 0 & 0 & 0 & 0 & 0 \\ 0 & 6 & 0 & 0 & 0 & 0 \\ 0 & 0 & 6 & 0 & 0 & 0 \\ 0 & 0 & 0 & 6 & 0 & 0 \\ 0 & 0 & 0 & 0 & -3 & 0 \\ 0 & 0 & 0 & 0 & 0 & -3 \end{bmatrix}.

In this case, the matrix has six linearly independent eigenvectors, and because all Jordan blocks have size 1 × 1, the maximum length of a cycle of generalized eigenvectors for this matrix is 1.

Case 2:

J = \begin{bmatrix} 6 & 1 & 0 & 0 & 0 & 0 \\ 0 & 6 & 0 & 0 & 0 & 0 \\ 0 & 0 & 6 & 0 & 0 & 0 \\ 0 & 0 & 0 & 6 & 0 & 0 \\ 0 & 0 & 0 & 0 & -3 & 0 \\ 0 & 0 & 0 & 0 & 0 & -3 \end{bmatrix}.

In this case, the matrix has five linearly independent eigenvectors (three corresponding to λ = 6 and two corresponding to λ = −3), and because the largest Jordan block is of size 2 × 2, the maximum length of a cycle of generalized eigenvectors for this matrix is 2.


Case 3:

J = \begin{bmatrix} 6 & 1 & 0 & 0 & 0 & 0 \\ 0 & 6 & 0 & 0 & 0 & 0 \\ 0 & 0 & 6 & 1 & 0 & 0 \\ 0 & 0 & 0 & 6 & 0 & 0 \\ 0 & 0 & 0 & 0 & -3 & 0 \\ 0 & 0 & 0 & 0 & 0 & -3 \end{bmatrix}.

In this case, the matrix has four linearly independent eigenvectors (two corresponding to λ = 6 and two corresponding to λ = −3), and because the largest Jordan block is of size 2 × 2, the maximum length of a cycle of generalized eigenvectors for this matrix is 2.

Case 4:

J = \begin{bmatrix} 6 & 1 & 0 & 0 & 0 & 0 \\ 0 & 6 & 1 & 0 & 0 & 0 \\ 0 & 0 & 6 & 0 & 0 & 0 \\ 0 & 0 & 0 & 6 & 0 & 0 \\ 0 & 0 & 0 & 0 & -3 & 0 \\ 0 & 0 & 0 & 0 & 0 & -3 \end{bmatrix}.

In this case, the matrix has four linearly independent eigenvectors (two corresponding to λ = 6 and two corresponding to λ = −3), and because the largest Jordan block is of size 3 × 3, the maximum length of a cycle of generalized eigenvectors for this matrix is 3.

Case 5:

J = \begin{bmatrix} 6 & 1 & 0 & 0 & 0 & 0 \\ 0 & 6 & 1 & 0 & 0 & 0 \\ 0 & 0 & 6 & 1 & 0 & 0 \\ 0 & 0 & 0 & 6 & 0 & 0 \\ 0 & 0 & 0 & 0 & -3 & 0 \\ 0 & 0 & 0 & 0 & 0 & -3 \end{bmatrix}.

In this case, the matrix has three linearly independent eigenvectors (one corresponding to λ = 6 and two corresponding to λ = −3), and because the largest Jordan block is of size 4 × 4, the maximum length of a cycle of generalized eigenvectors for this matrix is 4.

Case 6:

J = \begin{bmatrix} 6 & 0 & 0 & 0 & 0 & 0 \\ 0 & 6 & 0 & 0 & 0 & 0 \\ 0 & 0 & 6 & 0 & 0 & 0 \\ 0 & 0 & 0 & 6 & 0 & 0 \\ 0 & 0 & 0 & 0 & -3 & 1 \\ 0 & 0 & 0 & 0 & 0 & -3 \end{bmatrix}.

In this case, the matrix has five linearly independent eigenvectors (four corresponding to λ = 6 and one corresponding to λ = −3), and because the largest Jordan block is of size 2 × 2, the maximum length of a cycle of generalized eigenvectors for this matrix is 2.


Case 7:

J = \begin{bmatrix} 6 & 1 & 0 & 0 & 0 & 0 \\ 0 & 6 & 0 & 0 & 0 & 0 \\ 0 & 0 & 6 & 0 & 0 & 0 \\ 0 & 0 & 0 & 6 & 0 & 0 \\ 0 & 0 & 0 & 0 & -3 & 1 \\ 0 & 0 & 0 & 0 & 0 & -3 \end{bmatrix}.

In this case, the matrix has four linearly independent eigenvectors (three corresponding to λ = 6 and one corresponding to λ = −3), and because the largest Jordan block is of size 2 × 2, the maximum length of a cycle of generalized eigenvectors for this matrix is 2.

Case 8:

J = \begin{bmatrix} 6 & 1 & 0 & 0 & 0 & 0 \\ 0 & 6 & 0 & 0 & 0 & 0 \\ 0 & 0 & 6 & 1 & 0 & 0 \\ 0 & 0 & 0 & 6 & 0 & 0 \\ 0 & 0 & 0 & 0 & -3 & 1 \\ 0 & 0 & 0 & 0 & 0 & -3 \end{bmatrix}.

In this case, the matrix has three linearly independent eigenvectors (two corresponding to λ = 6 and one corresponding to λ = −3), and because the largest Jordan block is of size 2 × 2, the maximum length of a cycle of generalized eigenvectors for this matrix is 2.

Case 9:

J = \begin{bmatrix} 6 & 1 & 0 & 0 & 0 & 0 \\ 0 & 6 & 1 & 0 & 0 & 0 \\ 0 & 0 & 6 & 0 & 0 & 0 \\ 0 & 0 & 0 & 6 & 0 & 0 \\ 0 & 0 & 0 & 0 & -3 & 1 \\ 0 & 0 & 0 & 0 & 0 & -3 \end{bmatrix}.

In this case, the matrix has three linearly independent eigenvectors (two corresponding to λ = 6 and one corresponding to λ = −3), and because the largest Jordan block is of size 3 × 3, the maximum length of a cycle of generalized eigenvectors for this matrix is 3.

Case 10:

J = \begin{bmatrix} 6 & 1 & 0 & 0 & 0 & 0 \\ 0 & 6 & 1 & 0 & 0 & 0 \\ 0 & 0 & 6 & 1 & 0 & 0 \\ 0 & 0 & 0 & 6 & 0 & 0 \\ 0 & 0 & 0 & 0 & -3 & 1 \\ 0 & 0 & 0 & 0 & 0 & -3 \end{bmatrix}.

In this case, the matrix has two linearly independent eigenvectors (one corresponding to λ = 6 and one corresponding to λ = −3), and because the largest Jordan block is of size 4 × 4, the maximum length of a cycle of generalized eigenvectors for this matrix is 4.

33. There are 15 different possible Jordan canonical forms, up to a rearrangement of Jordan blocks:


Case 1:

J = \begin{bmatrix} 2 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 2 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 2 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & -4 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & -4 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & -4 \end{bmatrix}.

In this case, the matrix has seven linearly independent eigenvectors (four corresponding to λ = 2 and three corresponding to λ = −4), and because all Jordan blocks are size 1 × 1, the maximum length of a cycle of generalized eigenvectors for this matrix is 1.

Case 2:

J = \begin{bmatrix} 2 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 2 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 2 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & -4 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & -4 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & -4 \end{bmatrix}.

In this case, the matrix has six linearly independent eigenvectors (three corresponding to λ = 2 and three corresponding to λ = −4), and because the largest Jordan block is of size 2 × 2, the maximum length of a cycle of generalized eigenvectors for this matrix is 2.

Case 3:

J = \begin{bmatrix} 2 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 2 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 2 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & -4 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & -4 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & -4 \end{bmatrix}.

In this case, the matrix has five linearly independent eigenvectors (two corresponding to λ = 2 and three corresponding to λ = −4), and because the largest Jordan block is of size 2 × 2, the maximum length of a cycle of generalized eigenvectors for this matrix is 2.

Case 4:

J = \begin{bmatrix} 2 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 2 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 2 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 2 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & -4 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & -4 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & -4 \end{bmatrix}.

In this case, the matrix has five linearly independent eigenvectors (two corresponding to λ = 2 and three corresponding to λ = −4), and because the largest Jordan block is of size 3 × 3, the maximum length of a cycle of generalized eigenvectors for this matrix is 3.


Case 5:

J =

2 1 0 0 0 0 00 2 1 0 0 0 00 0 2 1 0 0 00 0 0 2 0 0 00 0 0 0 −4 0 00 0 0 0 0 −4 00 0 0 0 0 0 −4

.

In this case, the matrix has four linearly independent eigenvectors (one corresponding to λ = 2 and threecorresponding to λ = −4), and because the largest Jordan block is of size 4 × 4, the maximum length of acycle of generalized eigenvectors for this matrix is 4.

Case 6:

J =

2 0 0 0 0 0 00 2 0 0 0 0 00 0 2 0 0 0 00 0 0 2 0 0 00 0 0 0 −4 1 00 0 0 0 0 −4 00 0 0 0 0 0 −4

.

In this case, the matrix has six linearly independent eigenvectors (four corresponding to λ = 2 and twocorresponding to λ = −4), and because the largest Jordan block is of size 2 × 2, the maximum length of acycle of generalized eigenvectors for this matrix is 2.

Case 7:

J =
[  2  1  0  0  0  0  0 ]
[  0  2  0  0  0  0  0 ]
[  0  0  2  0  0  0  0 ]
[  0  0  0  2  0  0  0 ]
[  0  0  0  0 −4  1  0 ]
[  0  0  0  0  0 −4  0 ]
[  0  0  0  0  0  0 −4 ].

In this case, the matrix has five linearly independent eigenvectors (three corresponding to λ = 2 and two corresponding to λ = −4), and because the largest Jordan block is of size 2 × 2, the maximum length of a cycle of generalized eigenvectors for this matrix is 2.

Case 8:

J =
[  2  1  0  0  0  0  0 ]
[  0  2  0  0  0  0  0 ]
[  0  0  2  1  0  0  0 ]
[  0  0  0  2  0  0  0 ]
[  0  0  0  0 −4  1  0 ]
[  0  0  0  0  0 −4  0 ]
[  0  0  0  0  0  0 −4 ].

In this case, the matrix has four linearly independent eigenvectors (two corresponding to λ = 2 and two corresponding to λ = −4), and because the largest Jordan block is of size 2 × 2, the maximum length of a cycle of generalized eigenvectors for this matrix is 2.


Case 9:

J =
[  2  1  0  0  0  0  0 ]
[  0  2  1  0  0  0  0 ]
[  0  0  2  0  0  0  0 ]
[  0  0  0  2  0  0  0 ]
[  0  0  0  0 −4  1  0 ]
[  0  0  0  0  0 −4  0 ]
[  0  0  0  0  0  0 −4 ].

In this case, the matrix has four linearly independent eigenvectors (two corresponding to λ = 2 and two corresponding to λ = −4), and because the largest Jordan block is of size 3 × 3, the maximum length of a cycle of generalized eigenvectors for this matrix is 3.

Case 10:

J =
[  2  1  0  0  0  0  0 ]
[  0  2  1  0  0  0  0 ]
[  0  0  2  1  0  0  0 ]
[  0  0  0  2  0  0  0 ]
[  0  0  0  0 −4  1  0 ]
[  0  0  0  0  0 −4  0 ]
[  0  0  0  0  0  0 −4 ].

In this case, the matrix has three linearly independent eigenvectors (one corresponding to λ = 2 and two corresponding to λ = −4), and because the largest Jordan block is of size 4 × 4, the maximum length of a cycle of generalized eigenvectors for this matrix is 4.

Case 11:

J =
[  2  0  0  0  0  0  0 ]
[  0  2  0  0  0  0  0 ]
[  0  0  2  0  0  0  0 ]
[  0  0  0  2  0  0  0 ]
[  0  0  0  0 −4  1  0 ]
[  0  0  0  0  0 −4  1 ]
[  0  0  0  0  0  0 −4 ].

In this case, the matrix has five linearly independent eigenvectors (four corresponding to λ = 2 and one corresponding to λ = −4), and because the largest Jordan block is of size 3 × 3, the maximum length of a cycle of generalized eigenvectors for this matrix is 3.

Case 12:

J =
[  2  1  0  0  0  0  0 ]
[  0  2  0  0  0  0  0 ]
[  0  0  2  0  0  0  0 ]
[  0  0  0  2  0  0  0 ]
[  0  0  0  0 −4  1  0 ]
[  0  0  0  0  0 −4  1 ]
[  0  0  0  0  0  0 −4 ].

In this case, the matrix has four linearly independent eigenvectors (three corresponding to λ = 2 and one corresponding to λ = −4), and because the largest Jordan block is of size 3 × 3, the maximum length of a cycle of generalized eigenvectors for this matrix is 3.


Case 13:

J =
[  2  1  0  0  0  0  0 ]
[  0  2  0  0  0  0  0 ]
[  0  0  2  1  0  0  0 ]
[  0  0  0  2  0  0  0 ]
[  0  0  0  0 −4  1  0 ]
[  0  0  0  0  0 −4  1 ]
[  0  0  0  0  0  0 −4 ].

In this case, the matrix has three linearly independent eigenvectors (two corresponding to λ = 2 and one corresponding to λ = −4), and because the largest Jordan block is of size 3 × 3, the maximum length of a cycle of generalized eigenvectors for this matrix is 3.

Case 14:

J =
[  2  1  0  0  0  0  0 ]
[  0  2  1  0  0  0  0 ]
[  0  0  2  0  0  0  0 ]
[  0  0  0  2  0  0  0 ]
[  0  0  0  0 −4  1  0 ]
[  0  0  0  0  0 −4  1 ]
[  0  0  0  0  0  0 −4 ].

In this case, the matrix has three linearly independent eigenvectors (two corresponding to λ = 2 and one corresponding to λ = −4), and because the largest Jordan block is of size 3 × 3, the maximum length of a cycle of generalized eigenvectors for this matrix is 3.

Case 15:

J =
[  2  1  0  0  0  0  0 ]
[  0  2  1  0  0  0  0 ]
[  0  0  2  1  0  0  0 ]
[  0  0  0  2  0  0  0 ]
[  0  0  0  0 −4  1  0 ]
[  0  0  0  0  0 −4  1 ]
[  0  0  0  0  0  0 −4 ].

In this case, the matrix has two linearly independent eigenvectors (one corresponding to λ = 2 and one corresponding to λ = −4), and because the largest Jordan block is of size 4 × 4, the maximum length of a cycle of generalized eigenvectors for this matrix is 4.
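As an independent numerical spot-check (not part of the original solution), the eigenvector counts in these cases follow from the rank formula: the geometric multiplicity of λ is n − rank(J − λI), one eigenvector per Jordan block. The sketch below rebuilds the Case 4 matrix (a 3 × 3 and a 1 × 1 block for λ = 2, three 1 × 1 blocks for λ = −4); the helper name `num_eigvecs` is ours.

```python
import numpy as np

# Case 4: J = diag(2,2,2,2,-4,-4,-4) with superdiagonal 1s completing
# a single 3x3 Jordan block for lambda = 2.
J = np.diag([2, 2, 2, 2, -4, -4, -4]).astype(float)
J[0, 1] = 1.0
J[1, 2] = 1.0

def num_eigvecs(J, lam):
    """Geometric multiplicity of lam: n - rank(J - lam*I)."""
    n = J.shape[0]
    return n - np.linalg.matrix_rank(J - lam * np.eye(n))

print(num_eigvecs(J, 2))   # 2: one eigenvector per Jordan block for lambda = 2
print(num_eigvecs(J, -4))  # 3: three 1x1 blocks for lambda = -4
```

The same two-line rank computation reproduces the eigenvector counts quoted in every other case above.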

34. FALSE. For instance, if

A =
[ 1  1 ]
[ 0  1 ]

and

B =
[ 1  0 ]
[ 1  1 ],

then we have eigenvalues λA = λB = 1, but the matrix

A − B =
[  0  1 ]
[ −1  0 ]

is invertible, and hence, zero is not an eigenvalue of A − B. This can also be verified directly.

35. FALSE. For instance, if A = I2 and

B =
[ 1   1 ]
[ 0  −1 ],

then A^2 = B^2 = I2, but the matrices A and B are not similar. (Otherwise, there would exist an invertible matrix S such that S^{−1}AS = B. But since A = I2, this reduces to I2 = B, which is clearly not the case. Thus, no such invertible matrix S exists.)
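The counterexamples in Problems 34 and 35 can be spot-checked numerically; the script below is ours (the names `A35`, `B35` are our labels, and `B35` is one choice of a non-identity matrix whose square is I2).

```python
import numpy as np

# Problem 34: A and B both have eigenvalue 1 (twice), yet A - B is invertible.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[1.0, 0.0], [1.0, 1.0]])
print(np.linalg.eigvals(A), np.linalg.eigvals(B))   # both are [1., 1.]
print(np.linalg.det(A - B))                          # nonzero, so 0 is not an eigenvalue of A - B

# Problem 35: A35^2 = B35^2 = I2, but A35 = I2 is similar only to itself,
# so A35 and B35 are not similar.
A35 = np.eye(2)
B35 = np.array([[1.0, 1.0], [0.0, -1.0]])
print(np.allclose(A35 @ A35, np.eye(2)), np.allclose(B35 @ B35, np.eye(2)))
```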

36. To see that T1 + T2 is a linear transformation, we must verify that it respects addition and scalar multiplication:


T1 + T2 respects addition: Let v1 and v2 belong to V . Then we have

(T1 + T2)(v1 + v2) = T1(v1 + v2) + T2(v1 + v2)
                 = [T1(v1) + T1(v2)] + [T2(v1) + T2(v2)]
                 = [T1(v1) + T2(v1)] + [T1(v2) + T2(v2)]
                 = (T1 + T2)(v1) + (T1 + T2)(v2),

where we have used the linearity of T1 and T2 individually in the second step.

T1 + T2 respects scalar multiplication: Let v belong to V and let k be a scalar. Then we have

(T1 + T2)(kv) = T1(kv) + T2(kv)
            = kT1(v) + kT2(v)
            = k[T1(v) + T2(v)]
            = k(T1 + T2)(v),

as required.

There is no particular relationship between Ker(T1), Ker(T2), and Ker(T1 + T2).

37. FALSE. For instance, consider T1 : R → R defined by T1(x) = x, and consider T2 : R → R defined by T2(x) = −x. Both T1 and T2 are linear transformations, and both of them are onto. However, (T1 + T2)(x) = T1(x) + T2(x) = x + (−x) = 0, so Rng(T1 + T2) = {0}, which implies that T1 + T2 is not onto.

38. FALSE. For instance, consider T1 : R → R defined by T1(x) = x, and consider T2 : R → R defined by T2(x) = −x. Both T1 and T2 are linear transformations, and both of them are one-to-one. However, (T1 + T2)(x) = T1(x) + T2(x) = x + (−x) = 0, so Ker(T1 + T2) = R, which implies that T1 + T2 is not one-to-one.

39. Assume that

c1T(v1) + c2T(v2) + · · · + cnT(vn) = 0.

We wish to show that c1 = c2 = · · · = cn = 0.

To do this, use the linearity of T to rewrite the above equation as

T (c1v1 + c2v2 + · · ·+ cnvn) = 0.

Now, since Ker(T ) = {0}, we conclude that

c1v1 + c2v2 + · · ·+ cnvn = 0.

Since {v1,v2, . . . ,vn} is a linearly independent set, we conclude that c1 = c2 = · · · = cn = 0, as required.

40. Assume that V1 ≅ V2 and V2 ≅ V3. Then there exist isomorphisms T1 : V1 → V2 and T2 : V2 → V3. Since the composition of two linear transformations is a linear transformation (Theorem 5.4.2), we have a linear transformation T2T1 : V1 → V3. Moreover, since both T1 and T2 are one-to-one and onto, T2T1 is also one-to-one and onto (see Problem 39 in Section 5.4). Thus, T2T1 : V1 → V3 is an isomorphism. Hence, V1 ≅ V3, as required.

41. We have Aiv = λiv


for each i = 1, 2, . . . , k. Thus,

(A1A2 . . . Ak)v = (A1A2 . . . Ak−1)(Akv) = (A1A2 . . . Ak−1)(λkv) = λk(A1A2 . . . Ak−1)v
               = λk(A1A2 . . . Ak−2)(Ak−1v) = λk(A1A2 . . . Ak−2)(λk−1v) = λk−1λk(A1A2 . . . Ak−2)v
               ...
               = λ2λ3 . . . λk(A1v) = λ2λ3 . . . λk(λ1v)
               = (λ1λ2 . . . λk)v,

which shows that v is an eigenvector of A1A2 . . . Ak with corresponding eigenvalue λ1λ2 . . . λk.
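A quick numerical illustration of Problem 41 (our own, with made-up matrices sharing the common eigenvector v = (1, 1)):

```python
import numpy as np

# v is an eigenvector of each A_i with eigenvalue lambda_i; the product
# A1 A2 A3 then has v as an eigenvector with eigenvalue lambda1*lambda2*lambda3.
v = np.array([1.0, 1.0])
A1 = np.array([[2.0, 0.0], [0.0, 2.0]])   # A1 v = 2 v
A2 = np.array([[0.0, 3.0], [3.0, 0.0]])   # A2 v = 3 v
A3 = np.array([[4.0, 1.0], [1.0, 4.0]])   # A3 v = 5 v
prod = A1 @ A2 @ A3
print(prod @ v)   # 2 * 3 * 5 * v = [30., 30.]
```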

42. We first show that T is a linear transformation:

T respects addition: Let A and B belong to Mn(R). Then

T (A + B) = S−1(A + B)S = S−1AS + S−1BS = T (A) + T (B),

and so T respects addition.

T respects scalar multiplication: Let A belong to Mn(R), and let k be a scalar. Then

T (kA) = S−1(kA)S = k(S−1AS) = kT (A),

and so T respects scalar multiplication.

Next, we verify that T is both one-to-one and onto (of course, in view of Proposition 5.4.13, it is only necessary to confirm one of these two properties, but we will nonetheless verify them both):

T is one-to-one: Assume that T(A) = 0n. That is, S−1AS = 0n. Left multiplying by S and right multiplying by S−1 on both sides of this equation yields A = S0nS−1 = 0n. Hence, Ker(T) = {0n}, and so T is one-to-one.

T is onto: Let B be an arbitrary matrix in Mn(R). Then

T (SBS−1) = S−1(SBS−1)S = (S−1S)B(S−1S) = InBIn = B,

and hence, B belongs to Rng(T). Since B was an arbitrary element of Mn(R), we conclude that Rng(T) = Mn(R). That is, T is onto.

Solutions to Section 6.1

True-False Review:

1. TRUE. This is essentially the statement of Theorem 6.1.3.

2. FALSE. As stated in Theorem 6.1.5, if there is any point x0 in I such that W[y1, y2, . . . , yn](x0) = 0, then {y1, y2, . . . , yn} is linearly dependent on I.

3. FALSE. Many counterexamples are possible. Note that

(xD − Dx)(x) = xD(x) − D(x^2) = x − 2x = (−1)(x).

Therefore, xD − Dx = −1. Setting L1 = x and L2 = D, we therefore see that L1L2 ≠ L2L1 in this example.
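The operator identity xD − Dx = −1 can be confirmed symbolically (our check, with an arbitrary test function):

```python
import sympy as sp

# Apply xD - Dx to a test function f: the result should be -f.
x = sp.symbols('x')
f = sp.sin(x) + x**3                       # arbitrary test function
lhs = x * sp.diff(f, x) - sp.diff(x * f, x)
print(sp.simplify(lhs + f))                # 0, so (xD - Dx)f = -f
```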

4. TRUE. By assumption, L1(y1 + y2) = L1(y1) + L1(y2) and L1(cy) = cL1(y) for all functions y, y1, y2. Likewise, L2(y1 + y2) = L2(y1) + L2(y2) and L2(cy) = cL2(y) for all functions y, y1, y2. Therefore,

(L1 + L2)(y1 + y2) = L1(y1 + y2) + L2(y1 + y2)
                 = (L1(y1) + L1(y2)) + (L2(y1) + L2(y2))
                 = (L1(y1) + L2(y1)) + (L1(y2) + L2(y2))
                 = (L1 + L2)(y1) + (L1 + L2)(y2)


and

(L1 + L2)(cy) = L1(cy) + L2(cy) = cL1(y) + cL2(y) = c(L1(y) + L2(y)) = c(L1 + L2)(y).

Therefore, L1 + L2 is a linear differential operator.

5. TRUE. By assumption, L(y1 + y2) = L(y1) + L(y2) and L(ky) = kL(y) for all scalars k. Therefore, for all constants c, we have

(cL)(y1 + y2) = cL(y1 + y2) = c(L(y1) + L(y2)) = cL(y1) + cL(y2) = (cL)(y1) + (cL)(y2)

and

(cL)(ky) = c(L(ky)) = c(k(L(y))) = k(cL)(y).

Therefore, cL is a linear differential operator.

6. TRUE. We have L(yp + u) = L(yp) + L(u) = F + 0 = F.

7. TRUE. We have L(y1 + y2) = L(y1) + L(y2) = F1 + F2.

Solutions to Section 6.2

True-False Review:

1. FALSE. Even if the auxiliary polynomial fails to have n distinct roots, the differential equation still has n linearly independent solutions. For example, if L = D^2 + 2D + 1, then the differential equation Ly = 0 has auxiliary polynomial with (repeated) roots r = −1, −1. Yet we do have two linearly independent solutions y1(x) = e^{−x} and y2(x) = xe^{−x} to the differential equation.
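The two claimed solutions for the repeated-root example can be verified symbolically (our check), along with their linear independence via the Wronskian:

```python
import sympy as sp

# y1 = e^{-x} and y2 = x e^{-x} both solve y'' + 2y' + y = 0.
x = sp.symbols('x')
y1, y2 = sp.exp(-x), x * sp.exp(-x)
L = lambda y: sp.diff(y, x, 2) + 2 * sp.diff(y, x) + y
print(sp.simplify(L(y1)), sp.simplify(L(y2)))         # 0 0

# Wronskian y1*y2' - y2*y1' = e^{-2x}, never zero, so {y1, y2} is independent.
W = sp.simplify(y1 * sp.diff(y2, x) - y2 * sp.diff(y1, x))
print(W)
```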

2. FALSE. Theorem 6.2.1 only applies to polynomial differential operators. However, in general, many counterexamples can be given. For example, note that

(xD − Dx)(x) = xD(x) − D(x^2) = x − 2x = (−1)(x).

Therefore, xD − Dx = −1. Setting L1 = x and L2 = D, we therefore see that L1L2 ≠ L2L1 in this example.

3. TRUE. This is really just the statement that a polynomial of degree n always has n roots, with multiplicities counted.

4. TRUE. Since 0 is a root of multiplicity four, each term of the polynomial differential operator must contain a factor of D^4, so that any polynomial of degree three or less becomes zero after taking four (or more) derivatives. Therefore, for a homogeneous differential equation of this type, a polynomial of degree three or less must be a solution.

5. FALSE. Note that r = 0 is a root of the auxiliary polynomial, but only of multiplicity 1. The expression c1 + c2x in the solution reflects r = 0 as a root of multiplicity 2.

6. TRUE. The roots of the auxiliary polynomial are r = −3, −3, 5i, −5i. The portion of the solution corresponding to the repeated root r = −3 is c1e^{−3x} + c2xe^{−3x}, and the portion of the solution corresponding to the complex conjugate pair r = ±5i is c3 cos 5x + c4 sin 5x.

7. TRUE. The roots of the auxiliary polynomial are r = 2 ± i, 2 ± i. The terms corresponding to the first pair 2 ± i are c1e^{2x} cos x and c2e^{2x} sin x, and the repeated root gives two more terms: c3xe^{2x} cos x and c4xe^{2x} sin x.


8. FALSE. Many counterexamples can be given. For instance, if P(D) = D − 1, then the general solution is y(x) = ce^x. However, the differential equation (P(D))^2y = (D − 1)^2y = 0 has auxiliary equation with roots r = 1, 1 and general solution z(x) = c1e^x + c2xe^x ≠ xy(x).

Solutions to Section 6.3

True-False Review:

1. FALSE. Under the given assumptions, we have A1(D)F1(x) = 0 and A2(D)F2(x) = 0. However, this means that

(A1(D) + A2(D))(F1(x) + F2(x)) = A1(D)F1(x) + A1(D)F2(x) + A2(D)F1(x) + A2(D)F2(x)
                             = A1(D)F2(x) + A2(D)F1(x),

which is not necessarily zero. As a specific example, if A1(D) = D − 1 and A2(D) = D − 2, then A1(D) annihilates F1(x) = e^x and A2(D) annihilates F2(x) = e^{2x}. However, A1(D) + A2(D) = 2D − 3 does not annihilate e^x + e^{2x}.

2. FALSE. The annihilator of F(x) in this case is A(D) = D^{k+1}, since it takes k + 1 derivatives in order to annihilate x^k.

3. TRUE. We apply rule 1 in this section with k = 1, or we can compute directly that (D − a)^2 annihilates xe^{ax}.

4. FALSE. Some functions cannot be annihilated by a polynomial differential operator. Only those of the forms listed in 1-4 can be annihilated. For example, F(x) = ln x does not have an annihilator.

5. FALSE. For instance, if F(x) = x and A1(D) = A2(D) = D, then although A1(D)A2(D)F(x) = 0, neither A1(D) nor A2(D) annihilates F(x) = x.

6. FALSE. The annihilator of F(x) = 3 − 5x is D^2, but since r = 0 already occurs twice as a root of the auxiliary equation, the appropriate trial solution here is yp(x) = A0x^2 + A1x^3.

7. FALSE. The annihilator of F(x) = x^4 is D^5, but since r = 0 already occurs three times as a root of the auxiliary equation, the appropriate trial solution here is yp(x) = A0x^3 + A1x^4 + A2x^5 + A3x^6 + A4x^7.
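The annihilator claim for x^4 is easy to confirm symbolically (our check): five derivatives kill x^4, four do not.

```python
import sympy as sp

# D^5 annihilates x^4; D^4 does not (the fourth derivative is the constant 24).
x = sp.symbols('x')
print(sp.diff(x**4, x, 5))   # 0
print(sp.diff(x**4, x, 4))   # 24
```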

8. TRUE. The annihilator of F(x) = cos x is D^2 + 1, but since r = ±i already occurs once as a complex conjugate pair of roots of the auxiliary equation, the appropriate trial solution is not yp(x) = A0 cos x + B0 sin x; we must multiply by a factor of x to account for the fact that r = ±i is a pair of roots of the auxiliary equation.

Solutions to Section 6.4

True-False Review:

1. TRUE.

Solutions to Section 6.5

True-False Review:

1. TRUE. This is reflected in the negative sign appearing in Hooke’s Law. The spring force Fs is given by Fs = −kL0, where k > 0 and L0 is the displacement of the spring from its equilibrium position. The spring force acts in a direction opposite to that of the displacement of the mass.

2. FALSE. The circular frequency is the square root of k/m:

ω0 = √(k/m).


3. TRUE. The frequency of oscillation, denoted f in the text, and the period of oscillation, denoted T in the text, are inverses of one another: fT = 1. For instance, if a system undergoes f = 3 oscillations in one second, then each oscillation takes one-third of a second, so T = 1/3.

4. TRUE. This is mathematically seen from Equation (6.5.14), in which the exponential factor e^{−ct/2m} decays to zero for large t. Therefore, y(t) becomes smaller and smaller as t increases. This is depicted in Figure 6.5.5.

5. FALSE. In the cases of critical damping and overdamping, the system cannot oscillate. In fact, the system passes through the equilibrium position at most once in these cases.

6. TRUE. The frequency of the driving term is denoted by ω, while the circular frequency of the spring-mass system is denoted by ω0. Case 1(b) in this section describes resonance, the situation in which ω = ω0.

7. TRUE. Air resistance tends to dampen the motion of the spring-mass system, since it acts in a direction opposite to that of the motion of the spring. This is reflected in Equation (6.5.4) by the negative sign appearing in the formula for the damping force Fd.

8. FALSE. From the formula

T = 2π√(m/k)

for the period of oscillation, we see that for a larger mass, the period is larger (i.e. longer).

9. TRUE. In the section on free oscillations of a mechanical system, we see that the resulting motion of the mass is given by (6.5.11), (6.5.12), or (6.5.13), and in all of these cases, the amplitude of this system is bounded. Only in the case of forced oscillations with resonance can the amplitude increase without bound.

Solutions to Section 6.6

True-False Review:

1. TRUE. The differential equation governing the situation in which no driving electromotive force is present is (6.6.3), and its solutions are given in (6.6.4). In all cases, q(t) → 0 as t → ∞. Since the charge decays to zero, the rate of change of charge, which is the current in the circuit, eventually decreases to zero as well.

2. TRUE. For the given constants, it is true that R^2 < 4L/C, since R^2 = 16 and 4L/C = 272.

3. TRUE. The amplitude of the steady-state current is given by Equation (6.6.6), which is a maximum when ω = ω0. Substituting ω0^2 = 1/(LC), it follows that the amplitude of the steady-state current will be a maximum when

ω = ωmax = 1/√(LC).

4. TRUE. From the form of the amplitude of the steady-state current given in Equation (6.6.6), we see that the amplitude A is directly proportional to the amplitude E0 of the external driving force.

5. FALSE. The current i(t) in the circuit is the derivative of the charge q(t) on the capacitor, given in the solution to Example 6.6.1:

q(t) = A0e^{−Rt/2L} cos(µt − φ) + (E0/H) cos(ωt − η).

The derivative of this is not inversely proportional to R.

6. FALSE. The charge on the capacitor decays over time if there is no driving force.


Solutions to Section 6.7

True-False Review:

1. TRUE. This is essentially the statement of the variation-of-parameters method (Theorem 6.7.1).

2. FALSE. The solutions y1, y2, . . . , yn must form a linearly independent set of solutions to the associated homogeneous differential equation (6.7.18).

3. FALSE. The requirement on the functions u1, u2, . . . , un, according to Theorem 6.7.6, is that they satisfy Equations (6.7.22). Because these equations involve the derivatives of u1, u2, . . . , un only, constants of integration can be arbitrarily chosen in solving (6.7.22), and therefore, addition of constants to the functions u1, u2, . . . , un again yields valid solutions to (6.7.22).

Solutions to Section 6.8

True-False Review:

1. FALSE. First of all, the given equation only addresses the case of a second-order Cauchy-Euler equation. Secondly, the term containing y′ must contain a factor of x (see Equation 6.8.1):

x2y′′ + a1xy′ + a2y = 0.

2. TRUE. The indicial equation (6.8.2) in this case is

r(r − 1) − 2r − 18 = r^2 − 3r − 18 = 0,

with real and distinct roots r = 6 and r = −3. Hence, y1(x) = x^6 and y2(x) = x^{−3} are two linearly independent solutions to this Cauchy-Euler equation.
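Both claimed solutions can be substituted back into the Cauchy-Euler equation x^2 y'' − 2xy' − 18y = 0 (the equation whose indicial polynomial is quoted above); the symbolic check below is ours.

```python
import sympy as sp

# Verify that y = x^6 and y = x^(-3) solve x^2 y'' - 2x y' - 18 y = 0.
x = sp.symbols('x', positive=True)
L = lambda y: x**2 * sp.diff(y, x, 2) - 2 * x * sp.diff(y, x) - 18 * y
print(sp.simplify(L(x**6)), sp.simplify(L(x**-3)))   # 0 0
```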

3. FALSE. The indicial equation (6.8.2) in this case is

r(r − 1) + 9r + 16 = r^2 + 8r + 16 = (r + 4)^2 = 0,

and so we have only one real root, r = −4. Therefore, the only solutions to this Cauchy-Euler equation of the form y = x^r take the form y = cx^{−4}. Therefore, only one linearly independent solution of the form y = x^r to the Cauchy-Euler equation has been obtained.

4. TRUE. The indicial equation (6.8.2) in this case is

r(r − 1) + 6r + 6 = r^2 + 5r + 6 = (r + 2)(r + 3) = 0,

with roots r = −2 and r = −3. Therefore, the general solution to this Cauchy-Euler equation is

y(x) = c1x^{−2} + c2x^{−3} = c1/x^2 + c2/x^3.

Therefore, as x → +∞, y(x) → 0 for all values of the constants c1 and c2.

5. TRUE. A solution obtained by the method in this section containing the function ln x implies a repeated root to the indicial equation. In fact, such a solution takes the form y2(x) = x^{r1} ln x. Therefore, if y(x) = (ln x)/x is a solution, we conclude that r1 = r2 = −1 is the only root of the indicial equation r(r − 1) + a1r + a2 = 0. From Case 2 in this section, we see that −1 = −(a1 − 1)/2, and so a1 = 3. Moreover, the discriminant of Equation (6.8.3) must be zero:

(a1 − 1)^2 − 4a2 = 0.

That is, 2^2 − 4a2 = 0. Therefore, a2 = 1. Therefore, the Cauchy-Euler equation in question, with a1 = 3 and a2 = 1, must be as given.


6. FALSE. The indicial equation (6.8.2) in this case is

r(r − 1) − 5r − 7 = r^2 − 6r − 7 = (r − 7)(r + 1) = 0,

with roots r = 7 and r = −1. Therefore, the general solution to this Cauchy-Euler equation is

y(x) = c1x^7 + c2x^{−1},

and this is not an oscillatory function of x for any choice of constants c1 and c2.

Solutions to Section 7.1

True-False Review:

1. TRUE. If we make the substitution x → x1 and y → x2, then in terms of Definition 7.1.1, we have two equations with a11(t) = t^2, a12(t) = −t, a21(t) = sin t, a22(t) = 5, and b1(t) = b2(t) = 0.

2. TRUE. If we make the substitution x → x1 and y → x2, then in terms of Definition 7.1.1, we have two equations with a11(t) = t^4, a12(t) = −e^t, a21(t) = t^2 + 3, a22(t) = 0, and b1(t) = 4 and b2(t) = −t^2.

3. FALSE. The term txy on the right-hand side of the formula for x′ is non-linear because the unknown functions x and y are multiplied together, and this is prohibited in the form of a first-order linear system of differential equations.

4. FALSE. The term e^y x on the right-hand side of the formula for y′ is non-linear because the unknown function y appears as an exponent. This is prohibited in the form of a first-order linear system of differential equations.

5. TRUE. If we consider Equation (7.1.16) with n = 3, then the text describes how the substitution x1 = x, x2 = x′, x3 = x′′ enables us to replace (7.1.16) with the equivalent first-order linear system

x′1 = x2, x′2 = x3, x′3 = −a3(t)x1 − a2(t)x2 − a1(t)x3 + F(t).

6. FALSE. The technique for converting a first-order linear system of two differential equations to a second-order linear differential equation only applies in the case where the coefficients aij(t) of the system (7.1.1) are constants.

7. FALSE. As indicated in Definition 7.1.6, an initial-value problem consists of auxiliary conditions all of which are applied at the same time t0. In this system, the value of x is specified at t = 0 while the value of y is specified at t = 1.

8. TRUE. This has the required form in Definition 7.1.6. Note that both x and y have initial values specified at t0 = 0.

9. FALSE. There is no initial value specified for the function y(t). The value y(2) would be required in order for this to be an initial-value problem.

10. FALSE. As indicated in Definition 7.1.6, an initial-value problem consists of auxiliary conditions all of which are applied at the same time t0. In this system, the value of x is specified at t = 3 while the value of y is specified at t = −3.

Solutions to Section 7.2

True-False Review:

1. TRUE. If the unknown function x(t) is a column n-vector function, then A(t) must contain n columns in order to be able to compute A(t)x(t). And since x′(t) is also a column n-vector function, then A(t)x(t)


must have n rows, and therefore, A(t) must also have n rows. Therefore, A(t) contains the same number of rows and columns.

2. FALSE. If two columns of a matrix are interchanged, the determinant changes sign:

det([x1(t),x2(t)]) = −det([x2(t),x1(t)]).

3. FALSE. Many counterexamples are possible. For instance, if

x1 = [1, 1]^T, x2 = [1, t]^T, x3 = [1, t^2]^T,

then if we set

c1x1(t) + c2x2(t) + c3x3(t) = 0,

then we arrive at the equations

c1 + c2 + c3 = 0 and c1 + tc2 + t^2c3 = 0.

Setting t = 0, we conclude that c1 = 0. Next, setting t = −1, we have −c2 + c3 = 0 = c2 + c3, and this requires c2 = c3 = 0. Therefore c1 = c2 = c3 = 0. Hence, {x1, x2, x3} is linearly independent.

4. FALSE. Theorem 7.2.4 asserts that the Wronskian of these vector functions must be nonzero for some t0 ∈ I, not for all values in I.

5. FALSE. To the contrary, an n × n linear system x′(t) = Ax(t) always has n linearly independent solutions, regardless of any condition on the determinant of the matrix A. This is explained in the paragraph preceding Example 7.2.7 and is proved in Theorem 7.3.2 in the next section.

6. TRUE. This is actually described in the preceding section with Equation (7.1.16) and following. We can replace the fourth-order linear differential equation for x(t) by making the change of variables x1 = x, x2 = x′, x3 = x′′, and x4 = x′′′. In so doing, we obtain the equivalent first-order linear system

x′1 = x2, x′2 = x3, x′3 = x4, x′4 = −a4(t)x1 − a3(t)x2 − a2(t)x3 − a1(t)x4 + F(t).
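For constant coefficients, the change of variables above produces a companion matrix whose characteristic polynomial is exactly the auxiliary polynomial of the fourth-order equation. A numerical sketch (our own, with made-up coefficients a1 = 2, a2 = 3, a3 = 4, a4 = 5):

```python
import numpy as np

# Companion matrix of x'''' + a1 x''' + a2 x'' + a3 x' + a4 x = 0.
a1, a2, a3, a4 = 2.0, 3.0, 4.0, 5.0
A = np.array([
    [0.0,  1.0,  0.0,  0.0],
    [0.0,  0.0,  1.0,  0.0],
    [0.0,  0.0,  0.0,  1.0],
    [-a4, -a3, -a2, -a1],
])
# np.poly(A) returns the characteristic polynomial coefficients, leading 1
# first; they reproduce r^4 + a1 r^3 + a2 r^2 + a3 r + a4.
print(np.poly(A))   # approximately [1. 2. 3. 4. 5.]
```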

7. FALSE. We have

(x0(t) + b(t))′ = x′0(t) + b′(t) = A(t)x0(t) + b′(t),

and this in general is not the same as

A(t)(x0(t) + b(t)) + b(t).

Solutions to Section 7.3

True-False Review:

1. FALSE. With b(t) ≠ 0, note that x(t) = 0 is not a solution to the system x′(t) = A(t)x(t) + b(t), and lacking the zero vector, the solution set to the differential equation cannot form a vector space.

2. TRUE. Since any fundamental matrix X(t) = [x1, x2, . . . , xn] is comprised of linearly independent columns, the Invertible Matrix Theorem guarantees that the fundamental matrix is invertible under all circumstances.

3. TRUE. More than this, a fundamental solution set for x′(t) = A(t)x(t) is in fact a basis for the space of solutions to the linear system. In particular, it spans the space of all solutions to x′ = Ax.


4. TRUE. This is essentially the content of Theorem 7.3.6. The general solution to the homogeneous vector differential equation x′(t) = A(t)x(t) is xc(t) = c1x1 + c2x2 + · · · + cnxn, and therefore the general solution to x′(t) = A(t)x(t) + b(t) is of the form x(t) = xc(t) + xp(t).

Solutions to Section 7.4

True-False Review:

1. TRUE. If x(t) = eλtv, then

x′(t) = λeλtv = eλt(λv) = eλt(Av) = A(eλtv) = Ax(t).

This calculation appears prior to Theorem 7.4.1.

2. TRUE. The two real-valued solutions are derived in the work preceding Example 7.4.6. They are

x1(t) = e^{at}(cos bt r − sin bt s) and x2(t) = e^{at}(sin bt r + cos bt s),

where r and s are the real and imaginary parts of an eigenvector v = r + is corresponding to λ = a + bi.

3. FALSE. Many counterexamples can be given. For instance, if

A =
[ 0  0 ]
[ 0  0 ]

and

B =
[ 0  1 ]
[ 0  0 ],

then the matrices A and B have the same characteristic equation: λ^2 = 0. However, whereas any constant vector function

x(t) =
[ c1 ]
[ c2 ]

is a solution to x′ = Ax, we have

Bx =
[ c2 ]
[ 0 ]

and x′ = 0. Therefore, unless c2 = 0, x is not a solution to the system x′ = Bx. Hence, the systems x′ = Ax and x′ = Bx have differing solution sets.

4. TRUE. Assume that the eigenvalue/eigenvector pairs are (λ1, v1), (λ2, v2), . . . , (λn, vn). In this case, the general solution to both linear systems is the same:

x(t) = c1e^{λ1t}v1 + c2e^{λ2t}v2 + · · · + cne^{λnt}vn.

5. TRUE. The two real-valued solutions forming a basis for the set of solutions to x′ = Ax in this case are

x1(t) = e^{at}(cos bt r − sin bt s) and x2(t) = e^{at}(sin bt r + cos bt s).

The general solution is a linear combination of these, and since a > 0, e^{at} → ∞ as t → ∞. Since x1 and x2 cannot cancel out in a linear combination (since they are linearly independent), e^{at} remains as a factor in any particular solution to the vector differential equation, and as this factor tends to ∞, ‖x(t)‖ → ∞ as t → ∞.

6. ??????????????????????????????????

Solutions to Section 7.5

True-False Review:

1. FALSE. Every n × n linear system of differential equations has n linearly independent solutions (Theorem 7.3.2), regardless of whether A is defective or nondefective.

2. TRUE. This fact is part of the information contained in Theorem 7.5.4.

3. FALSE. The number of linearly independent solutions to x′ = Ax corresponding to λ is equal to the algebraic multiplicity of the eigenvalue λ as a root of the characteristic equation for A, not the dimension of the eigenspace Eλ.


4. FALSE. We have

Ax = A(e^{λt}v1) = e^{λt}(Av1) = e^{λt}(v0 + λv1) = e^{λt}v0 + λe^{λt}v1 = e^{λt}v0 + x′(t).

Therefore, Ax ≠ x′.

Solutions to Section 7.6

True-False Review:

1. FALSE. The particular solution, given by Equation (7.6.4), depends on a fundamental matrix X(t) for the nonhomogeneous linear system, and the columns of any fundamental matrix consist of linearly independent solutions to the corresponding homogeneous vector differential equation x′ = Ax.

2. FALSE. The vector function u(t) is not arbitrary; it must satisfy the equation X(t)u′(t) = b(t), according to Theorem 7.6.1.

3. TRUE. By definition, we have X = [x1,x2, . . . ,xn], so

X′ = [x′1, x′2, . . . , x′n] = [Ax1, Ax2, . . . , Axn] = AX.

4. FALSE. If we assume that xp is a solution to x′ = Ax + b, then note that

(c · xp)′ = c · x′p = c · (Axp + b),

whereas

A(c · xp) + b = c · (Axp) + b.

The right-hand sides of the above expressions are not equal. Therefore, c · xp is not a solution to x′ = Ax + b in general.

5. FALSE. A formula for the particular solution is given in Theorem 7.6.1:

xp(t) = X(t) ∫^t X^{−1}(s)b(s) ds.

Since an arbitrary integration constant can be applied to this expression, we have infinitely many different choices for the particular solution xp(t).

6. TRUE. From Xu′ = b, we use the invertibility of X to obtain u′ = X^{−1}b. Therefore, u is obtained by integrating X^{−1}b:

u(t) = ∫^t X^{−1}(s)b(s) ds.

Solutions to Section 7.7

True-False Review:

1. FALSE. In order to solve a coupled spring-mass system with two masses and two springs as a first-order linear system, a 4 × 4 linear system is required. More precisely, if the masses are m1 and m2, and the spring constants are k1 and k2, then the first-order linear system for the spring-mass system is x′ = Ax, where

A =
[ 0               1    0          0 ]
[ −(k1 + k2)/m1   0    k2/m1      0 ]
[ 0               0    0          1 ]
[ k2/m2           0    −k2/m2     0 ].


2. TRUE. Hooke’s Law states that the force due to a spring on a mass is proportional to the displacement of the mass from equilibrium, and oriented in the direction opposite to the displacement: F = −kx. The units of force are Newtons, and the units of displacement are meters. Therefore, in solving for the spring constant k, we have k = −F/x, with units of Newtons on top of the fraction and units of meters on the bottom of the fraction.

3. FALSE. The amount of chemical in a tank of solution is measured in grams in the metric system, but the concentration is measured in grams/L, since concentration is computed as the amount of substance per unit of volume.

4. FALSE. The correct relation is that rin + r12 − r21 = 0, where r12 and r21 are the rates of flow from tank 1 to tank 2, and from tank 2 to tank 1, respectively. Example 7.7.2 shows quite clearly that r12 and r21 need not be the same.

5. FALSE. Although this is a closed system, chemicals still pass between the two tanks. So, for example, if tank 1 contained 100 grams of chemical and tank 2 contained no chemical, then as time passes and fluid flows between the two tanks, some of the chemical will move to tank 2. The amount of chemical in each tank changes over time.

Solutions to Section 7.8

True-False Review:

1. TRUE. This is precisely the content of Equation (7.8.2).

2. TRUE. If we denote the fundamental matrix for the linear system x′ = Ax by

X(t) = [x1,x2, . . . ,xn],

then from Equations (7.8.4) and (7.8.7), we see that

e^{At} = X(t)X^{−1}(0) = X0(t).
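The identity e^{At} = X(t)X^{−1}(0) can be illustrated numerically. The example below is ours: for A = [[0, 1], [−1, 0]], the matrix exponential is the rotation matrix [[cos t, sin t], [−sin t, cos t]], and any fundamental matrix, even one with X(0) ≠ I, recovers it.

```python
import numpy as np

t = 0.7
# Two linearly independent solutions of x' = Ax with A = [[0,1],[-1,0]],
# stacked as the columns of a fundamental matrix X(t):
X = np.array([[np.cos(t), np.cos(t) + np.sin(t)],
              [-np.sin(t), np.cos(t) - np.sin(t)]])
X0 = np.array([[1.0, 1.0], [0.0, 1.0]])        # X evaluated at t = 0
eAt = X @ np.linalg.inv(X0)
exact = np.array([[np.cos(t), np.sin(t)],      # known closed form of e^{At}
                  [-np.sin(t), np.cos(t)]])
print(np.allclose(eAt, exact))   # True
```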

3. TRUE. For each generalized eigenvector v, e^{At}v is a solution to x′ = Ax. If A is an n × n matrix, then A has n linearly independent generalized eigenvectors, and therefore, the general solution to x′ = Ax can be formulated as linear combinations of n linearly independent solutions of the form e^{At}v. As usual, application of the initial condition yields the unique solution to the given initial-value problem.

4. TRUE. Since the transition matrix for x′ = Ax is a fundamental matrix for the system, its columns consist of linearly independent vector functions, and hence, this matrix is invertible.

5. TRUE. This follows immediately from Equations (7.8.4) and (7.8.6).

6. TRUE. If

c1e^{At}v1 + c2e^{At}v2 + · · · + cne^{At}vn = 0,

then

e^{At}[c1v1 + c2v2 + · · · + cnvn] = 0.

Since e^{At} is invertible for all matrices A, we conclude that

c1v1 + c2v2 + · · · + cnvn = 0.

Since {v1, v2, . . . , vn} is linearly independent, c1 = c2 = · · · = cn = 0, and hence, {e^{At}v1, e^{At}v2, . . . , e^{At}vn} is linearly independent.


Solutions to Section 7.9

True-False Review:

1. FALSE. The trajectories need not approach the equilibrium point as t → ∞. For instance, Figures 7.9.4 and 7.9.8 show equilibrium points for which not all solution trajectories approach the origin as t → ∞.

2. TRUE. This is well-illustrated in Figure 7.9.7. All trajectories in this case are closed curves, and the equilibrium point is a stable center.

3. TRUE. This is case (b), described below Theorem 7.9.3.

4. ?????????????

Solutions to Section 7.10

True-False Review:

1. TRUE. This is just the definition of the Jacobian matrix given in the text.

2. FALSE. Although the conclusion is valid in most cases, Table 7.10.1 shows that in the case of pure imaginary eigenvalues for the linear system, the behavior of the linear approximation can diverge from that of the given nonlinear system.

3. TRUE. From Equations (7.10.2) and (7.10.3), we find that the Jacobian of the system is

J(x, y) = [ a − by    −bx
              cy     cx − d ],

so that

J(0, 0) = [ a    0
            0   −d ].

Since a, d > 0, we have one positive and one negative eigenvalue, and therefore, the equilibrium point (0, 0) is a saddle point.
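The Jacobian computation above can be reproduced symbolically. A sketch with SymPy, assuming predator-prey right-hand sides x′ = ax − bxy, y′ = cxy − dy (chosen to match the Jacobian quoted above; the exact form of (7.10.2)-(7.10.3) should be checked against the text):

```python
import sympy as sp

x, y, a, b, c, d = sp.symbols('x y a b c d', positive=True)
# Assumed right-hand sides consistent with the Jacobian displayed above
F = sp.Matrix([a*x - b*x*y, c*x*y - d*y])

J = F.jacobian([x, y])
J0 = J.subs({x: 0, y: 0})
print(J0)  # Matrix([[a, 0], [0, -d]])

# One positive (a) and one negative (-d) eigenvalue: (0, 0) is a saddle point
assert set(J0.eigenvals()) == {a, -d}
```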

4. TRUE. The linear system corresponding to the Van der Pol Equation is given in (7.10.5), and the only equilibrium point of this system occurs at (0, 0).

5. FALSE. The equilibrium point is only an unstable spiral if µ < 2. In this equation, µ = 3, in which case the equilibrium point is an unstable node, as discussed in the text.

Solutions to Section 8.1

True-False Review:

1. FALSE. This function has a discontinuity at every integer n ≠ 0, 1, and therefore, it has infinitely many discontinuities and cannot be piecewise continuous.

2. FALSE. This function has a discontinuity at every multiple of π, and therefore, it has infinitely many discontinuities and cannot be piecewise continuous.

3. TRUE. If f has discontinuities at a1, a2, . . . , ar in I and g has discontinuities at b1, b2, . . . , bs in I, then f + g has at most r + s discontinuities in I. Therefore, we can divide I into finitely many subintervals so that f + g is continuous on each subinterval and f + g approaches a finite limit as the endpoints of each subinterval are approached from within. Hence, from Definition 8.1.4, f + g is piecewise continuous on the interval I.

4. TRUE. By definition, we can divide [a, b] into finitely many subintervals so that (1) and (2) in Definition 8.1.4 are satisfied, and likewise, we can divide [b, c] into finitely many subintervals so that (1) and (2) in


Definition 8.1.4 are satisfied. Taking all of the subintervals so obtained, we have divided [a, c] into finitely many subintervals so that (1) and (2) in Definition 8.1.4 are satisfied.

5. FALSE. The lower limit of the integral defining the Laplace transform must be 0, not 1 (see Definition 8.1.1).

6. TRUE. The formula L[e^{at}] = 1/(s − a) only applies for s > a, for otherwise the improper integral defining L[e^{at}] fails to converge. In this case a = 1, so we must restrict s so that s > 1.

7. FALSE. We have L[2 cos 3t] = 2L[cos 3t], and L[cos 3t] = s/(s^2 + 9) provided that s > 0, according to Equation (8.1.4). So this Laplace transform is defined for s > 0, not just s > 3.

8. FALSE. For instance, if we take f(t) = e^t, then L[f] = 1/(s − 1), but L[1/f] = L[e^{−t}] = 1/(s + 1). Obviously, L[1/f] ≠ 1/L[f] in this case.

9. FALSE. For instance, if we take f(t) = e^t, then L[f] = 1/(s − 1), but L[f^2] = L[e^{2t}] = 1/(s − 2) ≠ (1/(s − 1))^2.
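The transforms used in items 6-9 are easy to check with SymPy's laplace_transform. A sketch (noconds=True drops the convergence conditions, such as the restriction s > 1 discussed in item 6):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

F = sp.laplace_transform(sp.exp(t), t, s, noconds=True)      # L[e^t] = 1/(s - 1)
F2 = sp.laplace_transform(sp.exp(2*t), t, s, noconds=True)   # L[e^{2t}] = 1/(s - 2)

assert F == 1/(s - 1)
assert F2 == 1/(s - 2)
# Item 9: L[f^2] is not (L[f])^2
assert sp.simplify(F2 - F**2) != 0
```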

Solutions to Section 8.2

True-False Review:

1. TRUE. If f is of exponential order, then by Definition 8.2.1, we have |f(t)| ≤ Me^{αt} (t > 0) for some constants M and α. But then |g(t)| < f(t) ≤ |f(t)| ≤ Me^{αt} for all t > 0, and so g is also of exponential order.

2. TRUE. Suppose that |f(t)| ≤ M1 e^{α1 t} and |g(t)| ≤ M2 e^{α2 t} for t > 0, for some constants M1, M2, α1, α2. Then

|(f + g)(t)| = |f(t) + g(t)| ≤ |f(t)| + |g(t)| ≤ M1 e^{α1 t} + M2 e^{α2 t}.

If we set M = max{M1, M2} and α = max{α1, α2}, then the last expression on the right above is ≤ 2Me^{αt} for all t > 0. Therefore, f + g is of exponential order.

3. FALSE. Looking at the Comparison Test for Improper Integrals, we should conclude from the assumptions that if ∫_0^∞ H(t)dt converges, then so does ∫_0^∞ G(t)dt. As a specific counterexample, we could take G(t) = 0 and H(t) = t. Then although G(t) ≤ H(t) for all t ≥ 0 and ∫_0^∞ G(t)dt = 0 converges, ∫_0^∞ H(t)dt does not converge.

4. TRUE. Assume that L−1[F ](t) = f(t) and L−1[G](t) = g(t). Then we have

L−1[F + G](t) = (f + g)(t) = f(t) + g(t) = L−1[F ](t) + L−1[G](t).

Therefore, L−1[F + G] = L−1[F ] + L−1[G]. Moreover, we have

L−1[cF ](t) = (cf)(t) = cf(t) = cL−1[F ](t),

and therefore, L−1[cF ] = cL−1[F ]. Hence, L−1 is a linear transformation.

5. FALSE. From the formulas preceding Example 8.2.6, we see that the inverse Laplace transform of s/(s^2 + 9) is f(t) = cos 3t.

6. FALSE. From the formulas preceding Example 8.2.6, we see that the inverse Laplace transform of 1/(s + 3) is f(t) = e^{−3t}.
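Items 5 and 6 can be confirmed by transforming the claimed inverses forward. A sketch (checking L[cos 3t] = s/(s^2 + 9) and L[e^{−3t}] = 1/(s + 3) avoids the Heaviside factors SymPy attaches to inverse transforms):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# Forward transforms of the claimed inverse Laplace transforms
assert sp.laplace_transform(sp.cos(3*t), t, s, noconds=True) == s/(s**2 + 9)
assert sp.laplace_transform(sp.exp(-3*t), t, s, noconds=True) == 1/(s + 3)
print("both transform pairs check out")
```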

Solutions to Section 8.3

True-False Review:


1. FALSE. The period of f is required to be the smallest positive real number T such that f(t + T) = f(t) for all t ≥ 0. There can be only one smallest positive real number T, so the period, if it exists, is uniquely determined.

2. TRUE. The functions f and g are simply horizontally shifted from one another. Therefore, if f has period T, then f(t + T) = f(t), and so g(t + T) = f(t + T + c) = f(t + c) = g(t). Thus, g also has period T.

3. FALSE. The function f(t) = cos(2t) has period π, not π/2.

4. FALSE. For every T > 0 there are values of t for which f(t + T) = sin((t + T)^2) ≠ sin(t^2) = f(t), so f is not periodic.

5. FALSE. Even a continuous function need not be periodic. For instance, the function f(t) = sin(t^2) in the previous item is continuous, hence piecewise continuous, but not periodic.

6. FALSE. If f(t) = cos t and g(t) = sin t, then f and g are periodic with period m = n = 2π. However, (f + g)(t) = cos t + sin t is periodic with period 2π, not period 4π.

7. FALSE. If f(t) = cos t and g(t) = sin t, then f and g are periodic with period m = n = 2π. However, (fg)(t) = cos t sin t = (1/2) sin 2t is periodic with period π, not period (2π)^2.

8. FALSE. Many examples are possible. The function f given in Example 8.3.4 is one. More simply, note that L[sin t] = 1/(s^2 + 1), and F(s) = 1/(s^2 + 1) is not a periodic function.
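Several of the period claims above can be checked with SymPy's periodicity helper. A sketch (periodicity returns the fundamental period, or None for a non-periodic function):

```python
from sympy import symbols, sin, cos, pi
from sympy.calculus.util import periodicity

t = symbols('t')

assert periodicity(cos(2*t), t) == pi            # item 3: period pi, not pi/2
assert periodicity(cos(t) + sin(t), t) == 2*pi   # item 6: period 2*pi, not 4*pi
assert periodicity(sin(t**2), t) is None         # item 4: not periodic
```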

Solutions to Section 8.4

True-False Review:

1. FALSE. The Laplace transform of f may not exist unless we assume additionally that f is of exponential order.

2. TRUE. This is illustrated in the examples throughout this section. The procedure is outlined above Figure 8.4.1 in the text and indicates that the initial condition is imposed at the outset of the solution.

3. TRUE. If we leave the initial conditions as arbitrary constants, the solution technique presented in this section will result in the general solution to the differential equation.

4. FALSE. The expression for Y(s) is affected by the initial conditions, because the initial conditions arise in the formulas for the Laplace transforms of the derivatives of the unknown function y.
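The procedure in items 2-4, transform the equation, impose the initial conditions, and solve for Y(s), can be sketched on a simple first-order example of my own choosing, y′ + y = 0 with y(0) = 2 (not an example from the text):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
Y = sp.symbols('Y')
y0 = 2  # assumed initial condition y(0) = 2

# L[y'] = s*Y - y(0), so y' + y = 0 transforms to (s*Y - y0) + Y = 0
Ys = sp.solve(s*Y - y0 + Y, Y)[0]
print(Ys)  # 2/(s + 1)

# The candidate solution y(t) = 2 e^{-t} transforms back to Y(s)
assert sp.simplify(sp.laplace_transform(2*sp.exp(-t), t, s, noconds=True) - Ys) == 0
```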

Solutions to Section 8.5

True-False Review:

1. TRUE. This follows at once from the first formula in the First Shifting Theorem, with a replaced by −a.

2. FALSE. For instance, if f(t) = e^t, then f(t − 1) = e^{t−1} ≠ e^t − 1.

3. TRUE. The formula for f(t) is obtained from the formula for f(t + 2) by replacing t + 2 with t (and hence t + 3 with t + 1 and t with t − 2).

4. FALSE. If we take f(x) = x and g(x) = x − 3, for example, then

∫_0^1 f(t)dt = 1/2 and ∫_0^1 g(t)dt = −5/2,


and so

∫_0^1 f(t)dt ≠ ∫_0^1 g(t)dt − 3.

5. FALSE. The correct formula is

L[e^{−t} sin 2t] = 2/((s + 1)^2 + 4).

6. TRUE. The Laplace transform of f(t) = t^3 is F(s) = 3!/s^4 = 6/s^4, and so the First Shifting Theorem with a = 2 gives L[e^{2t} t^3] = 6/(s − 2)^4.

7. TRUE. The inverse Laplace transform of F(s) = s/(s^2 + 9) is f(t) = cos 3t, and so the First Shifting Theorem with a = −4 gives the indicated inverse Laplace transform.

8. FALSE. The correct formula is

L^{−1}[3/((s + 1)^2 + 36)] = (1/2) e^{−t} sin 6t.
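The First Shifting Theorem computations in items 5, 6, and 8 can be verified directly. A sketch:

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# Item 5: L[e^{-t} sin 2t] = 2/((s+1)^2 + 4)
F5 = sp.laplace_transform(sp.exp(-t)*sp.sin(2*t), t, s, noconds=True)
assert sp.simplify(F5 - 2/((s + 1)**2 + 4)) == 0

# Item 6: L[e^{2t} t^3] = 6/(s-2)^4
F6 = sp.laplace_transform(sp.exp(2*t)*t**3, t, s, noconds=True)
assert sp.simplify(F6 - 6/(s - 2)**4) == 0

# Item 8: L[(1/2) e^{-t} sin 6t] = 3/((s+1)^2 + 36)
F8 = sp.laplace_transform(sp.Rational(1, 2)*sp.exp(-t)*sp.sin(6*t), t, s, noconds=True)
assert sp.simplify(F8 - 3/((s + 1)**2 + 36)) == 0
```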

Solutions to Section 8.6

True-False Review:

1. FALSE. The given function is not well-defined at t = a. The unit step function is 0 for 0 ≤ t < a, not 0 ≤ t ≤ a.

2. FALSE. As Figure 8.6.2 shows, the value of ua(t)− ub(t) at t = b is 0, not 1.

3. FALSE. For values of t with a < t < b, we have ua(t) = 1 and ub(t) = 0, so the given inequality does not hold for such t.

4. FALSE. The given function is precisely the one sketched in Figure 8.6.2, which is ua(t) − ub(t), not ub(t) − ua(t).

Solutions to Section 8.7

True-False Review:

1. FALSE. According to Equation (8.7.2), the inverse Laplace transform of e^{as}F(s) is u−a(t)f(t + a). Note that this requires a < 0.

2. TRUE. This is Corollary 8.7.2.

3. TRUE. This is an immediate consequence of the Second Shifting Theorem.

4. FALSE. According to the Second Shifting Theorem, we have

L[u2(t) cos 4(t − 2)] = s e^{−2s}/(s^2 + 16),

not

L[u2(t) cos 4t] = s e^{−2s}/(s^2 + 16).


5. FALSE. We have

L[u3(t)e^t] = e^{−3s} L[e^{t+3}] = e^{−3s} e^3 L[e^t] = e^{−3s} e^3 · 1/(s − 1) = e^3/(e^{3s}(s − 1)).

6. ?????

7. FALSE. The correct formula is

L^{−1}[1/(s e^{2s})] = u2(t).

Solutions to Section 8.8

True-False Review:

1. TRUE. We have

I = ∫_{−∞}^{∞} F(t)dt,

where F(t) is the magnitude of the applied force at time t.

2. TRUE. A unit impulse force acts instantaneously on an object and delivers a unit impulse.

3. FALSE. The correct formula is L[δ(t− a)] = e−as.

4. TRUE. An excellent illustration of this is given in Example 8.8.3. A spring-mass system that receives an instantaneous blow experiences an acceleration, which is the second derivative of the position.

5. FALSE. The initial conditions are unrelated to the instantaneous blow. They are the pre-blow position and velocity of the mass, and these are not related in any way to the instantaneous blow.
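The formula L[δ(t − a)] = e^{−as} in item 3 follows directly from the defining integral, which SymPy evaluates exactly via the sifting property. A sketch with a = 2:

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# L[delta(t - 2)] = integral_0^oo e^{-st} delta(t - 2) dt = e^{-2s}
F = sp.integrate(sp.exp(-s*t)*sp.DiracDelta(t - 2), (t, 0, sp.oo))
assert sp.simplify(F - sp.exp(-2*s)) == 0
```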

Solutions to Section 8.9

True-False Review:

1. TRUE. We have (f ∗ g)(t) = ∫_0^t f(t − τ)g(τ)dτ and (g ∗ f)(t) = ∫_0^t g(t − τ)f(τ)dτ. The latter integral becomes the same as the first one via the u-substitution u = t − τ.

2. TRUE. Since both f and g are positive functions, the expression f(t − τ)g(τ) is positive for all t, τ. Thus, if we increase the interval of integration [0, t] by increasing t, the value of (f ∗ g)(t) also increases.

3. FALSE. The Convolution Theorem states that L[f ∗ g] = L[f ]L[g].

4. TRUE. This is exactly the form of the equation in (8.9.3).

5. FALSE. For instance, if f is identically zero, then f ∗ g = f ∗ h = 0 for all functions g and h.

6. TRUE. This is expressed mathematically in Equation (8.9.2).

7. TRUE. This follows from the fact that constants can be factored out of integrals, and a(f ∗ g), (af) ∗ g, and f ∗ (ag) can be expressed as integrals. In short, integration is linear.
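The Convolution Theorem of item 3 can be spot-checked on a concrete pair of my own choosing, f(t) = t and g(t) = t^2. A sketch:

```python
import sympy as sp

t, s, tau = sp.symbols('t s tau', positive=True)

f = t
g = t**2

# (f * g)(t) = integral_0^t f(t - tau) g(tau) d tau = t^4/12
conv = sp.integrate((t - tau)*tau**2, (tau, 0, t))
assert sp.simplify(conv - t**4/12) == 0

# Convolution Theorem: L[f * g] = L[f] L[g]
lhs = sp.laplace_transform(conv, t, s, noconds=True)
rhs = (sp.laplace_transform(f, t, s, noconds=True)
       * sp.laplace_transform(g, t, s, noconds=True))
assert sp.simplify(lhs - rhs) == 0
```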

Solutions to Section 9.1

True-False Review:


1. TRUE. Theorem 9.1.7 addresses this for a rational function f(x) = p(x)/q(x). The radius of convergence of the power series representation of f around the point x0 is the distance from x0 to the nearest root of q(x).

2. TRUE. We have

∑_{n=0}^∞ (an + bn)x^n = ∑_{n=0}^∞ an x^n + ∑_{n=0}^∞ bn x^n,

and if each of the latter sums converges at x1, say to f and g respectively, then

∑_{n=0}^∞ (an + bn)x^n = f + g.

3. FALSE. It could be the case, for example, that bn = −an for all n = 0, 1, 2, . . . . Then

∑_{n=0}^∞ (an + bn)x^n = 0,

but the individual summations need not converge.

4. FALSE. This is not necessarily the case, as Example 9.1.3 illustrates. In that example, the radius of convergence is R = 3, but the series diverges at the endpoints x = ±3. The endpoints must be considered separately in general.

5. TRUE. This is part of the statement in Theorem 9.1.6.

6. FALSE. The Taylor series expansion of an infinitely differentiable function f need not converge for all real values of x in the domain. It is only guaranteed to converge for x in the domain that lie within the interval of convergence.

7. TRUE. A polynomial is a rational function p(x)/q(x) where q(x) = 1. Since q(x) = 1 has no roots, Theorem 9.1.7 guarantees that the power series representation of p(x)/q(x) about any point x0 has an infinite radius of convergence.

8. ????????????

9. FALSE. According to the algebra of power series, the coefficient cn of x^n is

cn = ∑_{k=0}^{n} ak b_{n−k}.
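The coefficient formula in item 9 is the Cauchy product of two series. A small self-contained sketch (plain Python, hypothetical helper name) multiplying truncated coefficient lists:

```python
def cauchy_product(a, b):
    """Coefficients of the product series: c_n = sum_{k=0}^{n} a_k * b_{n-k}."""
    n = min(len(a), len(b))  # beyond this, truncation makes coefficients unreliable
    return [sum(a[k]*b[m - k] for k in range(m + 1)) for m in range(n)]

# 1/(1-x) = 1 + x + x^2 + ...; its square is 1/(1-x)^2 = sum (n+1) x^n
ones = [1]*6
print(cauchy_product(ones, ones))  # [1, 2, 3, 4, 5, 6]
```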

10. TRUE. This is the definition of a Maclaurin series.

Solutions to Section 9.2

True-False Review:

1. TRUE. Since a polynomial is analytic at every x0 in R, it follows from Definition 9.2.1 that every point of R is an ordinary point.

2. FALSE. According to Theorem 9.2.4 with x0 = −3 and R = 2, the radius of convergence of the power series representation of the solution to this differential equation is at least 2.


3. FALSE. The nearest singularity to x = 2 occurs in p with root x = 1. Hence, R = 1 is the radius of convergence of p(x) = 1/(x^2 − 1) about x = 2. Therefore, by Theorem 9.2.4, the radius of convergence of a power series solution to this differential equation is at least 1 (not at least 2).

4. TRUE. This is seen directly by computing y(x0) and y′(x0) for the series solution

y(x) = ∑_{n=0}^∞ an (x − x0)^n,

as explained in the statement of Theorem 9.2.4.

5. TRUE. The radii of convergence of the power series expansions of p and q are both positive, and thus, Theorem 9.2.4 implies that the general solution to the differential equation can be represented as a power series with a positive radius of convergence.

6. FALSE. The two linearly independent solutions to y′′ + (2 − 4x^2)y′ − 8xy = 0 found in Example 9.2.7 both contain common powers of x.

7. FALSE. For example, the power series y1(x) found in Example 9.2.6 is a solution to (9.2.10), but xy1(x) is not a solution.

8. FALSE. The value of a0 only determines the values a0, a2, a4, . . . , but there is no information about the terms a1, a3, a5, . . . unless a1 is also specified.

9. TRUE. The value of ak depends on ak−1 and ak−3, so once a0, a1, and a2 are specified, we can find a3, a4, a5, . . . uniquely from the recurrence relation.

10. FALSE. If the recurrence relation cannot be solved, it simply means that lengthy brute force calculations may be required to determine the solution, not that the solution does not exist.

Solutions to Section 9.3

True-False Review:

1. FALSE. Theorem 9.2.4 guarantees that R = 1 is a lower bound on the radius of convergence of the power series solutions to Legendre's equation about x = 0. The radius of convergence could be greater than 1.

2. TRUE. Relative to the inner product

⟨p, q⟩ = ∫_{−1}^{1} p(x)q(x)dx,

{P0, P1, . . . , PN} is an orthogonal set (Theorem 9.3.4), and since each Pi is of different degree, ranging from 0 to N, {P0, P1, . . . , PN} is linearly independent.
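The orthogonality in item 2 is easy to confirm for small degrees with SymPy's built-in Legendre polynomials. A sketch:

```python
import sympy as sp

x = sp.symbols('x')
P2 = sp.legendre(2, x)   # (3x^2 - 1)/2
P3 = sp.legendre(3, x)   # (5x^3 - 3x)/2

# <P2, P3> = 0, while <Pn, Pn> = 2/(2n + 1)
assert sp.integrate(P2*P3, (x, -1, 1)) == 0
assert sp.integrate(P2**2, (x, -1, 1)) == sp.Rational(2, 5)
```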

3. TRUE. If α is a positive even integer or a negative odd integer or zero, then eventually the coefficients of (9.3.3) become zero, whereas if α is a negative even integer or a positive odd integer, then eventually the coefficients of (9.3.4) become zero. Therefore, for any integer α, either (9.3.3) or (9.3.4) contains only finitely many nonzero terms.

4. TRUE. The Legendre polynomials correspond to polynomial solutions to (9.3.3) and (9.3.4), and we see directly from their form that these solutions contain terms with all odd powers of x or with all even powers of x.


Solutions to Section 9.4

True-False Review:

1. FALSE. It is required that (x − x0)^2 Q(x) be analytic at x = x0, not that (x − x0)Q(x) be analytic at x = x0.

2. FALSE. It is necessary to assume that r1 and r2 are distinct and do not differ by an integer in order to draw this conclusion.

3. TRUE. This is well-illustrated by Examples 9.4.5 and 9.4.6. The values of a0, a1, a2, . . . in the Frobenius series solution are obtained by directly substituting it into both sides of the differential equation and matching up the coefficients of x resulting on each side of the differential equation.

4. TRUE. Example 9.4.2 is a fine illustration of this.

5. TRUE. For instance, the only singular point of y′′ + (1/x^2)y′ + y = 0 is x0 = 0, and it is an irregular singular point.

Solutions to Section 9.5

True-False Review:

1. FALSE. The indicial equation (9.5.4) in this case reads r(r − 1) − r + 1 = 0, or r^2 − 2r + 1 = 0, and the roots of this equation are r = 1, 1, so the roots are not distinct.

2. TRUE. The indicial equation (9.5.4) in this case reads r(r − 1) + (1 − 2√5)r + 19/4 = 0, or r^2 − 2√5 r + 19/4 = 0. The quadratic formula quickly yields the roots r = √5 ± 1/2, which are distinct and differ by 1.

3. FALSE. The indicial equation (9.5.4) in this case reads r(r − 1) + 9r + 25 = 0, or r^2 + 8r + 25 = 0. The quadratic formula quickly yields the roots r = −4 ± 3i, which do not differ by an integer.

4. TRUE. The indicial equation (9.5.4) in this case reads r(r − 1) + 4r − 7/4 = 0, or r^2 + 3r − 7/4 = 0. The quadratic formula quickly yields the roots r = −7/2 and r = 1/2, which are distinct and differ by 4.

5. TRUE. The indicial equation (9.5.4) in this case reads r(r − 1) = 0, with roots r = 0 and r = 1. They differ by 1.
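The indicial equations in items 1-5 all have the form r(r − 1) + p0 r + q0 = 0. A sketch (hypothetical helper name) that recovers the quoted roots with SymPy:

```python
import sympy as sp

r = sp.symbols('r')

def indicial_roots(p0, q0):
    """Roots of r(r - 1) + p0*r + q0 = 0 for x^2 y'' + x p(x) y' + q(x) y = 0."""
    return sp.solve(r*(r - 1) + p0*r + q0, r)

assert indicial_roots(-1, 1) == [1]                           # item 1: repeated root r = 1
assert set(indicial_roots(9, 25)) == {-4 - 3*sp.I, -4 + 3*sp.I}  # item 3
assert set(indicial_roots(4, sp.Rational(-7, 4))) == {sp.Rational(-7, 2),
                                                      sp.Rational(1, 2)}  # item 4
```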

6. FALSE. As indicated in Theorem 9.5.1, if the roots of the indicial equation do not differ by an integer, then two linearly independent solutions to x^2 y′′ + x p(x) y′ + q(x) y = 0 can be found as in Equations (9.5.13) and (9.5.14).

Solutions to Section 9.6

True-False Review:

1. FALSE. The requirement that guarantees the existence of two linearly independent Frobenius series solutions is that 2p is not an integer, not that p is a positive noninteger.

2. TRUE. Since the roots of the indicial equation are r = ±p, it follows from the general Frobenius theory that, provided 2p is not equal to an integer, we can get two linearly independent Frobenius series solutions to Bessel's equation of order p, written as Jp and J−p in (9.6.18). The variation appearing in (9.6.20) also applies when p is not a positive integer, which holds in particular if 2p is not an integer.

3. FALSE. The gamma function is not defined for p = 0,−1,−2, . . . .


4. TRUE. From Figure 9.6.2, the values of p such that Γ(p) < 0 occur in the intervals (−1, 0), (−3, −2), (−5, −4), and so on. In these intervals, the greatest integer less than or equal to p is −1, −3, −5, . . . , which are the odd, negative integers.

5. FALSE. Although it is possible via Property 3 in Equation (9.6.27) to express Jp(x) in terms of Jp−1(x) and Jp−2(x) as

Jp(x) = 2x^{−1}(p − 1)Jp−1(x) − Jp−2(x),

the expression on the right-hand side is not a linear combination.

6. FALSE. The correct formula for J′p(x) can be computed directly from Equation (9.6.22). A valid expression for J′p(x) is also given in Property 4 (Equation (9.6.28)).

7. FALSE. From Equation (9.6.31), we see that Jp(λn x) and Jp(λm x) are orthogonal on (0, 1) relative to the weight function w(x) = x:

∫_0^1 x Jp(λm x) Jp(λn x) dx = 0.
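The orthogonality relation in item 7 can be checked numerically with mpmath, taking p = 0 and λm, λn as the first two positive zeros of J0. A sketch:

```python
import mpmath as mp

p = 0
lam1 = mp.besseljzero(p, 1)   # ~2.4048, first positive zero of J_0
lam2 = mp.besseljzero(p, 2)   # ~5.5201, second positive zero

# integral_0^1 x J_0(lam1 x) J_0(lam2 x) dx should vanish
inner = mp.quad(lambda x: x*mp.besselj(p, lam1*x)*mp.besselj(p, lam2*x), [0, 1])
assert abs(inner) < 1e-10
```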