Math 115 Lab 9 Solutions
1. Let ~v1 = [1 −1 2]^T, ~v2 = [2 4 1]^T, ~v3 = [3 −1 −2]^T. You are given that {~v1, ~v2, ~v3} is an orthogonal basis for R^3.
(a) Express ~w = [3 1 4]^T as a linear combination of these three vectors.
Solution:

~w = (~w · ~v1 / ||~v1||^2) ~v1 + (~w · ~v2 / ||~v2||^2) ~v2 + (~w · ~v3 / ||~v3||^2) ~v3
   = (10/6) ~v1 + (14/21) ~v2 + (0/14) ~v3
   = (5/3) ~v1 + (2/3) ~v2 + 0 ~v3.
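As an optional sanity check (not part of the original lab), the coefficients above can be verified in plain Python with exact fractions; the helper `dot` is an illustrative name, not a library function:

```python
from fractions import Fraction

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

v1, v2, v3 = [1, -1, 2], [2, 4, 1], [3, -1, -2]
w = [3, 1, 4]

# Coefficient of each basis vector: (w . vi) / ||vi||^2
coeffs = [Fraction(dot(w, v), dot(v, v)) for v in (v1, v2, v3)]
print(coeffs)  # [Fraction(5, 3), Fraction(2, 3), Fraction(0, 1)]

# The combination reproduces w exactly
rebuilt = [sum(c * vi[i] for c, vi in zip(coeffs, (v1, v2, v3))) for i in range(3)]
assert rebuilt == [3, 1, 4]
```

This works only because the basis is orthogonal; for a non-orthogonal basis one would have to solve a linear system instead.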
(b) Find an orthonormal basis for R^3 whose vectors have the same directions as ~v1, ~v2, ~v3.
Solution: We divide each vector by its magnitude to get

[1/√6 −1/√6 2/√6]^T, [2/√21 4/√21 1/√21]^T, [3/√14 −1/√14 −2/√14]^T.
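A quick numerical check of the normalization (an illustrative sketch, not part of the lab; `dot` is a hypothetical helper):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

vecs = [[1, -1, 2], [2, 4, 1], [3, -1, -2]]

# Divide each vector by its magnitude
unit = [[x / math.sqrt(dot(v, v)) for x in v] for v in vecs]

# The results are unit vectors and remain pairwise orthogonal
for u in unit:
    assert abs(dot(u, u) - 1) < 1e-12
for i in range(3):
    for j in range(i + 1, 3):
        assert abs(dot(unit[i], unit[j])) < 1e-12
```

Scaling a vector by a positive constant never changes its direction, so orthogonality is preserved automatically.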
(c) Determine an equation of the plane which contains ~v1 and ~v2.
Solution: The vector ~v3 is orthogonal to ~v1 and ~v2, hence it is orthogonal to the plane. So an equation of the plane is 3x − y − 2z = 0.
2. Let W be a subspace of R^n. Prove that W⊥ is also a subspace of R^n. (Recall: W⊥ = {~x ∈ R^n | ~x · ~u = 0 for all ~u ∈ W}.)
Solution: We need to check the 3 subspace rules.
(a) The zero vector satisfies ~0 · ~x = 0 for every vector ~x ∈ W, so ~0 ∈ W⊥.
(b) Let ~v, ~w ∈ W⊥. Then ~v · ~x = ~w · ~x = 0 for every ~x ∈ W. Therefore,

(~v + ~w) · ~x = ~v · ~x + ~w · ~x = 0 + 0 = 0

for every ~x ∈ W. Therefore, ~v + ~w ∈ W⊥.
(c) Let ~v ∈ W⊥ and t ∈ R. Then ~v · ~x = 0 for every ~x ∈ W. Therefore, (t~v) · ~x = t(~v · ~x) = 0 for every ~x ∈ W. So t~v ∈ W⊥.
3. Let ~v1 = [1 1 1 1 1]^T, ~v2 = [1 1 1 1 0]^T, ~v3 = [1 1 0 0 0]^T. Let W = span{~v1, ~v2, ~v3}.
(a) Use the Gram-Schmidt procedure on these three vectors to produce an orthogonal basis for W.
Solution: We pick the first vector ~w1 = ~v1 = [1 1 1 1 1]^T.
To get the second vector, we calculate the following:

~v2 − (~w1 · ~v2 / ||~w1||^2) ~w1 = [1 1 1 1 0]^T − (4/5)[1 1 1 1 1]^T = (1/5)[1 1 1 1 −4]^T.

So we pick ~w2 = [1 1 1 1 −4]^T.
To get the third vector, we calculate the following:

~v3 − (~w2 · ~v3 / ||~w2||^2) ~w2 − (~w1 · ~v3 / ||~w1||^2) ~w1
   = [1 1 0 0 0]^T − (2/20)[1 1 1 1 −4]^T − (2/5)[1 1 1 1 1]^T
   = (1/10)[5 5 −5 −5 0]^T = (1/2)[1 1 −1 −1 0]^T.

So we pick ~w3 = [1 1 −1 −1 0]^T. Then {~w1, ~w2, ~w3} is an orthogonal basis for W.
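The procedure above can be sketched as a short function (plain Python with exact fractions; the name `gram_schmidt` is illustrative, not from a library):

```python
from fractions import Fraction

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(vectors):
    """Orthogonalize vectors in order, using exact rational arithmetic."""
    basis = []
    for v in vectors:
        w = [Fraction(x) for x in v]
        for b in basis:
            c = dot(w, b) / dot(b, b)  # projection coefficient onto b
            w = [wi - c * bi for wi, bi in zip(w, b)]
        basis.append(w)
    return basis

w1, w2, w3 = gram_schmidt([[1, 1, 1, 1, 1], [1, 1, 1, 1, 0], [1, 1, 0, 0, 0]])
# w2 and w3 come out as (1/5)[1 1 1 1 -4] and (1/2)[1 1 -1 -1 0];
# the lab rescales them to integer vectors, which preserves orthogonality.
assert dot(w1, w2) == 0 and dot(w1, w3) == 0 and dot(w2, w3) == 0
```

Rescaling each output vector by any nonzero constant, as the solution does, changes neither the span nor the pairwise orthogonality.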
(b) Find an orthogonal basis for the orthogonal complement of W.
Solution: We need to find all ~x such that ~w1 · ~x = 0, ~w2 · ~x = 0, ~w3 · ~x = 0. This is a homogeneous system, which we may solve by reducing the matrix

[1 1  1  1  1]
[1 1  1  1 −4]
[1 1 −1 −1  0]

to its RREF

[1 1 0 0 0]
[0 0 1 1 0]
[0 0 0 0 1].

Taking parameters s and t as the second and fourth variables, the complete solution is [−s s −t t 0]^T = s[−1 1 0 0 0]^T + t[0 0 −1 1 0]^T. These two vectors {[−1 1 0 0 0]^T, [0 0 −1 1 0]^T} span the solution set and are linearly independent, hence they form a basis for the orthogonal complement.
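The claimed basis can be double-checked directly (an optional plain-Python sketch; `dot` is an illustrative helper): every dot product between a basis vector of W and a proposed complement vector should vanish, and the two complement vectors happen to be orthogonal to each other as well.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

W_basis = [[1, 1, 1, 1, 1], [1, 1, 1, 1, -4], [1, 1, -1, -1, 0]]
comp_basis = [[-1, 1, 0, 0, 0], [0, 0, -1, 1, 0]]

# Each candidate complement vector is orthogonal to every ~w_i,
# and the two complement vectors are orthogonal to each other.
checks = [dot(w, z) for w in W_basis for z in comp_basis]
print(checks)            # [0, 0, 0, 0, 0, 0]
print(dot(*comp_basis))  # 0
```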
(c) Let ~u = [3 1 4 1 5]^T. Find a vector ~x in W and another vector ~y in W⊥ so that ~u = ~x + ~y.
Solution: This ~x is the projection of ~u onto W, and ~y is ~u − ~x. Or, alternatively, ~y is the projection of ~u onto W⊥, and ~x = ~u − ~y. We will perform the calculation for this alternative. From part (b), an orthogonal basis for W⊥ is {~z1 = [−1 1 0 0 0]^T, ~z2 = [0 0 −1 1 0]^T}. So

~y = proj_{W⊥}(~u) = (~u · ~z1 / ||~z1||^2) ~z1 + (~u · ~z2 / ||~z2||^2) ~z2
   = (−2/2)[−1 1 0 0 0]^T + (−3/2)[0 0 −1 1 0]^T = [1 −1 3/2 −3/2 0]^T.

Therefore,

~x = ~u − ~y = [3 1 4 1 5]^T − [1 −1 3/2 −3/2 0]^T = [2 2 5/2 5/2 5]^T.
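The decomposition can be verified numerically (an optional sketch in plain Python with exact fractions; `project` and `dot` are illustrative names):

```python
from fractions import Fraction

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def project(u, basis):
    """Project u onto the span of an orthogonal basis (exact arithmetic)."""
    out = [Fraction(0)] * len(u)
    for b in basis:
        c = Fraction(dot(u, b), dot(b, b))
        out = [o + c * bi for o, bi in zip(out, b)]
    return out

u = [3, 1, 4, 1, 5]
comp_basis = [[-1, 1, 0, 0, 0], [0, 0, -1, 1, 0]]
y = project(u, comp_basis)                  # projection onto W-perp
x = [ui - yi for ui, yi in zip(u, y)]       # the component in W

assert y == [1, -1, Fraction(3, 2), Fraction(-3, 2), 0]
assert x == [2, 2, Fraction(5, 2), Fraction(5, 2), 5]
```

Since x is orthogonal to both basis vectors of W⊥, it indeed lies in W, and x + y recovers u.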
4. Let {~v1, . . . , ~vk} be an orthogonal basis for a subspace W in R^n. Let {~u1, . . . , ~um} be an orthogonal basis for W⊥. Prove that S = {~v1, . . . , ~vk, ~u1, . . . , ~um} is an orthogonal basis for R^n. Conclude from this that dim W + dim W⊥ = n.
Solution: First we need to check that the vectors in S are orthogonal. Any pair ~vi, ~vj or ~ui, ~uj is orthogonal for i ≠ j because they come from orthogonal bases. For a pair ~vi, ~uj, since ~vi ∈ W and ~uj ∈ W⊥, they must be orthogonal by the definition of W⊥. So our set S is orthogonal, hence it is also linearly independent.
To check that S spans R^n, recall that any vector ~x ∈ R^n can be expressed as a sum of two vectors ~x = ~y + ~z where ~y ∈ W and ~z ∈ W⊥. So ~y is a linear combination of ~v1, . . . , ~vk and ~z is a linear combination of ~u1, . . . , ~um. Therefore ~x is a linear combination of ~v1, . . . , ~vk, ~u1, . . . , ~um. Hence S is spanning, and so S is an orthogonal basis for R^n.
Any basis of R^n has size n, so k + m = n, or dim W + dim W⊥ = n.
5. The purpose of this exercise is to illustrate one way of using projection to find the equation of a best-fitting line through certain data points from an experiment. Consider the following overly simplistic data set where each column represents one data point (x, y).

x | 1 2 3 4 5 6 7
y | 4 4 4 5 5 6 7
Here is a graph which illustrates these data points.
We may try to find a polynomial of degree 6 for an exact fit of the 7 data points, but that might not represent the general relation between x and y. If we allow errors in the data, we might say that this set of data points looks almost linear. Our goal is to find a line of the form y = ax + b such that the overall “error” between this line and the 7 points is as small as possible.
(a) Use the 7 data points to obtain a system of 7 linear equations with a, b as the variables. Write it in the form A~x = ~c where ~x = [a b]^T.
Solution: From the 7 data points, plug them into y = ax + b to get
4 = a + b, 4 = 2a + b, 4 = 3a + b, 5 = 4a + b, 5 = 5a + b, 6 = 6a + b, 7 = 7a + b.
Written in matrix form, this is A~x = ~c, where A is the 7×2 matrix with rows [1 1], [2 1], [3 1], [4 1], [5 1], [6 1], [7 1], ~x = [a b]^T, and ~c = [4 4 4 5 5 6 7]^T.
(b) This system A~x = ~c has no solutions. For each possible value of ~x (i.e. for each possible line), we define E(~x) = A~x − ~c to be its error vector. For example, the vector ~x0 = [1 2]^T represents the line y = x + 2 as shown here. Determine E(~x0), and explain the meaning of this error vector with respect to the graph.
Solution: For ~x0 = [1 2]^T,

E(~x0) = A~x0 − ~c = [3 4 5 6 7 8 9]^T − [4 4 4 5 5 6 7]^T = [−1 0 1 1 2 2 2]^T.

On the graph, the i-th entry of E(~x0) represents the signed vertical distance between the i-th data point and the line at x = i, as illustrated here.
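The error vector, and the squared magnitude needed in part (c), can be computed in a couple of lines (an optional plain-Python sketch):

```python
xs = [1, 2, 3, 4, 5, 6, 7]
ys = [4, 4, 4, 5, 5, 6, 7]

a, b = 1, 2  # the candidate line y = x + 2
E = [a * x + b - y for x, y in zip(xs, ys)]
print(E)                      # [-1, 0, 1, 1, 2, 2, 2]
print(sum(e * e for e in E))  # 15, the squared magnitude ||E(x0)||^2
```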
(c) The goal is to find a vector ~x such that its error vector E(~x) has the smallest magnitude, i.e. ||A~x − ~c|| is minimized. Find the value of ||E(~x0)||^2 and explain the meaning of ||E(~x0)||^2 with respect to the graph.
Solution: From part (b), ||E(~x0)||^2 = (−1)^2 + 0^2 + 1^2 + 1^2 + 2^2 + 2^2 + 2^2 = 15. The value of ||E(~x0)||^2 is the sum of the squares of the vertical distances between the data points and the line.
(d) The system A~x = ~c has no solution because ~c is not in the column space of A (recall that any vector of the form A~x is in the column space of A). The goal of minimizing ||E(~x)|| or ||A~x − ~c|| becomes finding the vector ~x such that A~x is the vector in the column space of A that is closest to ~c. Explain how this A~x can be found using projections, and then find it. (Hint: You may want to perform Gram-Schmidt on the column vectors of A.)
Solution: The vector A~x we want is the one that minimizes the distance between the column space of A and ~c, so it is the projection of ~c onto the column space. To find this projection, we first need an orthogonal basis for the column space of A. We perform Gram-Schmidt on the two columns of A: we pick ~w1 = [1 1 1 1 1 1 1]^T (the second column), and find that ~w2 = [−3 −2 −1 0 1 2 3]^T is the other vector. Then {~w1, ~w2} form an orthogonal basis for the column space. We project ~c onto it:

proj_{col(A)}(~c) = (~c · ~w1 / ||~w1||^2) ~w1 + (~c · ~w2 / ||~w2||^2) ~w2 = 5~w1 + (1/2)~w2 = (1/2)[7 8 9 10 11 12 13]^T.
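The two projection coefficients and the resulting vector can be checked exactly (an optional plain-Python sketch; `dot` is an illustrative helper):

```python
from fractions import Fraction

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

c = [4, 4, 4, 5, 5, 6, 7]
w1 = [1, 1, 1, 1, 1, 1, 1]
w2 = [-3, -2, -1, 0, 1, 2, 3]

c1 = Fraction(dot(c, w1), dot(w1, w1))  # 35/7 = 5
c2 = Fraction(dot(c, w2), dot(w2, w2))  # 14/28 = 1/2
proj = [c1 * a + c2 * b for a, b in zip(w1, w2)]
assert proj == [Fraction(k, 2) for k in range(7, 14)]  # (1/2)[7 8 ... 13]
```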
(e) Using your result in part (d), find ~x and determine the line y = ax + b that best fits the 7 given data points. Draw the line on the graph here.
Solution: We solve A~x = (1/2)[7 8 9 10 11 12 13]^T and we get ~x = [1/2 3]^T. So the best-fit line is y = (1/2)x + 3. We illustrate this line here.
(Note: The error vector here is (1/2)[−1 0 1 0 1 0 −1]^T, whose magnitude squared is 1. This is the smallest magnitude possible over all lines.)
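The whole exercise can be cross-checked by solving the normal equations (A^T A)~x = A^T ~c directly, a standard alternative to the projection route (an optional plain-Python sketch using exact fractions; this also yields the squared error of the best-fit line):

```python
from fractions import Fraction

xs = [1, 2, 3, 4, 5, 6, 7]
ys = [4, 4, 4, 5, 5, 6, 7]
n = len(xs)

# Normal equations (A^T A)[a b]^T = A^T c for the model y = a*x + b,
# solved by Cramer's rule on the resulting 2x2 system.
sxx, sx = sum(x * x for x in xs), sum(xs)
sxy, sy = sum(x * y for x, y in zip(xs, ys)), sum(ys)
det = sxx * n - sx * sx
a = Fraction(n * sxy - sx * sy, det)
b = Fraction(sxx * sy - sx * sxy, det)
print(a, b)  # 1/2 3

# Squared error of the best-fit line
err2 = sum((a * x + b - y) ** 2 for x, y in zip(xs, ys))
print(err2)  # 1
```

Both routes agree: the projection in part (d) and the normal equations give the same line y = (1/2)x + 3.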