
8/12/2019 Waldron Linear Algebra

    Linear Algebra

    David Cherney, Tom Denton and Andrew Waldron


    Edited by Katrina Glaeser, Rohit Thomas and Travis ScrimshawFirst Edition. Davis California, 2013.

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.


http://creativecommons.org/licenses/by-nc-sa/3.0/deed.en_US

    Preface

This book grew out of a series of twenty-five lecture notes for a sophomore linear algebra class taught at the University of California, Davis. The audience was primarily engineering students and students of the pure sciences, some of whom may go on to major in mathematics. It was motivated by the lack of a book that taught students the basic structures of linear algebra without overdoing mathematical rigor or becoming a mindless exercise in crunching recipes at the cost of fundamental understanding. In particular we wanted a book that was suitable for all students, not just math majors, that focused on concepts and on developing the ability to think in terms of abstract structures in order to address the dizzying array of seemingly disparate applications that can all actually be addressed with linear algebra methods.

In addition we had practical concerns. We wanted to offer students an online version of the book for free, both because we felt it our academic duty to do so, but also because we could seamlessly link an online book to a myriad of other resources, in particular WeBWorK exercises and videos. We also wanted to make the LaTeX source available to other instructors so they could easily customize the material to fit their own needs. Finally, we wanted to restructure the way the course was taught, by getting the students to direct most of their effort at more difficult problems where they had to think through concepts, present well-thought-out logical arguments, and learn to turn word problems into ones where the usual array of linear algebra recipes could take over.

    How to Use the Book

At the end of each chapter there is a set of review questions. Our students found these very difficult, mostly because they did not know where to begin, rather than needing a clever trick. We designed them this way to ensure that students grappled with basic concepts. Our main aim was for students to master these problems, so that we could ask similar high-caliber problems on midterm and final examinations. This meant that we did have to direct resources to grading some of these problems. For this we used two tricks. First we asked students to hand in more problems than we could grade, and then secretly selected a subset for grading. Second, because there are more review questions than an individual student could handle, we split the class into groups of three or four and assigned the remaining problems to them


for grading. Teamwork is a skill our students will need in the workplace; also, it really enhanced their enjoyment of mathematics.

Learning math is like learning to play a violin: many technical exercises are necessary before you can really make music! Therefore, each chapter has a set of dedicated WeBWorK skills problems where students can test that they have mastered basic linear algebra skills. The beauty of WeBWorK is that students get instant feedback and problems can be randomized, which means that although students are working on the same types of problem, they cannot simply tell each other the answer. Instead, we encourage them to explain to one another how to do the WeBWorK exercises. Our experience is that this way, students can mostly figure out how to do the WeBWorK problems among themselves, freeing up discussion groups and office hours for weightier issues. Finally, we really wanted our students to carefully read the book. Therefore, each chapter has several very simple WeBWorK reading problems. These appear as links at strategic places. They are very simple problems that can be answered rapidly if a student has read the preceding text.

    The Material

We believe the entire book can be taught in twenty-five fifty-minute lectures to a sophomore audience that has been exposed to a one-year calculus course. Vector calculus is useful, but not necessary, preparation for this book, which attempts to be self-contained. Key concepts are presented multiple times throughout the book, often first in a more intuitive setting, and then again in a definition-theorem-proof style later on. We do not aim for students to become agile mathematical proof writers, but we do expect them to be able to show and explain why key results hold. We also often use the review exercises to let students discover key results for themselves, before they are presented again in detail later in the book.

Linear algebra courses run the risk of becoming a conglomeration of learn-by-rote recipes involving arrays filled with numbers. In the modern computer era, understanding these recipes, why they work, and what they are for is more important than ever. Therefore, we believe it is crucial to change the students' approach to mathematics right from the beginning of the course. Instead of them asking us "what do I do here?", we want them to ask "why would I do that?" This means that students need to start to think in terms of abstract structures. In particular, they need to rapidly become conversant in sets and functions; the first WeBWorK set will help them brush up these


    skills.

There is no best order to teach a linear algebra course. The book has been written such that instructors can reorder the chapters (using the LaTeX source) in any (reasonable) order and still have a consistent text. We hammer the notions of abstract vectors and linear transformations hard and early, while at the same time giving students the basic matrix skills necessary to perform computations. Gaussian elimination is followed directly by an exploration chapter on the simplex algorithm to open students' minds to problems beyond standard linear systems ones. Vectors in R^n and general vector spaces are presented back to back so that students are not stranded with the idea that vectors are just ordered lists of numbers. To this end, we also labor the notion of all functions from a set to the real numbers. In the same vein, linear transformations and matrices are presented hand in hand. Once students see that a linear map is specified by its action on a limited set of inputs, they can already understand what a basis is. All the while students are studying linear systems and their solution sets, so after matrices, determinants are introduced. This material can proceed rapidly since elementary matrices were already introduced with Gaussian elimination. Only then is a careful discussion of spans, linear independence and dimension given to ready students for a thorough treatment of eigenvectors and diagonalization. The dimension formula therefore appears quite late, since we prefer not to elevate rote computations of column and row spaces to a pedestal. The book ends with applications: least squares and singular values. These are a fun way to end any lecture course. It would also be quite easy to spend any extra time on systems of differential equations and simple Fourier transform problems.


    Contents

1 What is Linear Algebra? 13
  1.1 Vectors? 13
  1.2 Linear Transformations 15
  1.3 What is a Matrix? 19
  1.4 Review Problems 23

2 Systems of Linear Equations 29
  2.1 Gaussian Elimination 29
    2.1.1 Augmented Matrix Notation 29
    2.1.2 Equivalence and the Act of Solving 32
    2.1.3 Reduced Row Echelon Form 32
    2.1.4 Solutions and RREF 36
  2.2 Review Problems 38
  2.3 Elementary Row Operations 42
    2.3.1 EROs and Matrices 42
    2.3.2 Recording EROs in (M|I) 44
    2.3.3 The Three Elementary Matrices 46
    2.3.4 LU, LDU, and LDPU Factorizations 48
  2.4 Review Problems 51
  2.5 Solution Sets for Systems of Linear Equations 53
    2.5.1 The Geometry of Solution Sets: Hyperplanes 54
    2.5.2 Particular Solution + Homogeneous Solutions 55
    2.5.3 Solutions and Linearity 56
  2.6 Review Problems 58

3 The Simplex Method 61
  3.1 Pablo's Problem 61
  3.2 Graphical Solutions 63
  3.3 Dantzig's Algorithm 65
  3.4 Pablo Meets Dantzig 68
  3.5 Review Problems 70

4 Vectors in Space, n-Vectors 71
  4.1 Addition and Scalar Multiplication in R^n 72
  4.2 Hyperplanes 73
  4.3 Directions and Magnitudes 75
  4.4 Vectors, Lists and Functions 79
  4.5 Review Problems 82

5 Vector Spaces 85
  5.1 Examples of Vector Spaces 86
    5.1.1 Non-Examples 90
  5.2 Other Fields 91
  5.3 Review Problems 92

6 Linear Transformations 95
  6.1 The Consequence of Linearity 95
  6.2 Linear Functions on Hyperplanes 97
  6.3 Linear Differential Operators 98
  6.4 Bases (Take 1) 99
  6.5 Review Problems 101

7 Matrices 103
  7.1 Linear Transformations and Matrices 103
    7.1.1 Basis Notation 103
    7.1.2 From Linear Operators to Matrices 109
  7.2 Review Problems 111
  7.3 Properties of Matrices 113
    7.3.1 Associativity and Non-Commutativity 120
    7.3.2 Block Matrices 122
    7.3.3 The Algebra of Square Matrices 123
    7.3.4 Trace 125
  7.4 Review Problems 126
  7.5 Inverse Matrix 130
    7.5.1 Three Properties of the Inverse 130
    7.5.2 Finding Inverses (Redux) 131
    7.5.3 Linear Systems and Inverses 133
    7.5.4 Homogeneous Systems 134
    7.5.5 Bit Matrices 134
  7.6 Review Problems 135
  7.7 LU Redux 139
    7.7.1 Using LU Decomposition to Solve Linear Systems 140
    7.7.2 Finding an LU Decomposition 142
    7.7.3 Block LDU Decomposition 145
  7.8 Review Problems 146

8 Determinants 149
  8.1 The Determinant Formula 149
    8.1.1 Simple Examples 149
    8.1.2 Permutations 150
  8.2 Elementary Matrices and Determinants 154
    8.2.1 Row Swap 155
    8.2.2 Scalar Multiply 156
    8.2.3 Row Addition 157
    8.2.4 Determinant of Products 159
  8.3 Review Problems 162
  8.4 Properties of the Determinant 166
    8.4.1 Determinant of the Inverse 170
    8.4.2 Adjoint of a Matrix 170
    8.4.3 Application: Volume of a Parallelepiped 172
  8.5 Review Problems 173

9 Subspaces and Spanning Sets 177
  9.1 Subspaces 177
  9.2 Building Subspaces 179
  9.3 Review Problems 184

10 Linear Independence 185
  10.1 Showing Linear Dependence 186
  10.2 Showing Linear Independence 189
  10.3 From Dependent Independent 190
  10.4 Review Problems 191

11 Basis and Dimension 195
  11.1 Bases in R^n 198
  11.2 Matrix of a Linear Transformation (Redux) 200
  11.3 Review Problems 203

12 Eigenvalues and Eigenvectors 207
  12.1 Invariant Directions 208
  12.2 The Eigenvalue-Eigenvector Equation 213
  12.3 Eigenspaces 216
  12.4 Review Problems 218

13 Diagonalization 221
  13.1 Diagonalizability 221
  13.2 Change of Basis 222
  13.3 Changing to a Basis of Eigenvectors 226
  13.4 Review Problems 228

14 Orthonormal Bases and Complements 233
  14.1 Properties of the Standard Basis 233
  14.2 Orthogonal and Orthonormal Bases 235
  14.3 Relating Orthonormal Bases 236
  14.4 Gram-Schmidt & Orthogonal Complements 239
    14.4.1 The Gram-Schmidt Procedure 242
  14.5 QR Decomposition 243
  14.6 Orthogonal Complements 245
  14.7 Review Problems 249

15 Diagonalizing Symmetric Matrices 253
  15.1 Review Problems 257

16 Kernel, Range, Nullity, Rank 261
  16.1 Summary 268
  16.2 Review Problems 269

17 Least Squares and Singular Values 275
  17.1 Singular Value Decomposition 278
  17.2 Review Problems 283

A List of Symbols 285

B Fields 287

C Online Resources 289

D Sample First Midterm 291

E Sample Second Midterm 301

F Sample Final Exam 311

G Movie Scripts 339
  G.1 What is Linear Algebra? 339
  G.2 Systems of Linear Equations 340
  G.3 Vectors in Space n-Vectors 349
  G.4 Vector Spaces 351
  G.5 Linear Transformations 355
  G.6 Matrices 357
  G.7 Determinants 367
  G.8 Subspaces and Spanning Sets 375
  G.9 Linear Independence 376
  G.10 Basis and Dimension 379
  G.11 Eigenvalues and Eigenvectors 381
  G.12 Diagonalization 387
  G.13 Orthonormal Bases and Complements 393
  G.14 Diagonalizing Symmetric Matrices 400
  G.15 Kernel, Range, Nullity, Rank 402
  G.16 Least Squares and Singular Values 404

Index 404


1 What is Linear Algebra?

(B) 3-vectors: (1, 1, 0) + (0, 1, 1) = (1, 2, 1).

(C) Polynomials: If p(x) = 1 + x - 2x^2 + 3x^3 and q(x) = x + 3x^2 - 3x^3 + x^4, then their sum p(x) + q(x) is the new polynomial 1 + 2x + x^2 + x^4.

(D) Power series: If f(x) = 1 + x + x^2/2! + x^3/3! + ... and g(x) = 1 - x + x^2/2! - x^3/3! + ..., then f(x) + g(x) = 2(1 + x^2/2! + x^4/4! + ...) is also a power series.

(E) Functions: If f(x) = e^x and g(x) = e^(-x), then their sum f(x) + g(x) is the new function 2 cosh x.

Because they can be added, you should now start thinking of all the above objects as vectors! In Chapter 5 we will give the precise rules that vector addition must obey. In the above examples, however, notice that the vector addition rule stems from the rules for adding numbers.

When adding the same vector over and over, for example

    x + x ,   x + x + x ,   x + x + x + x ,   . . . ,

we will write

    2x ,   3x ,   4x ,   . . . ,

respectively. For example,

    4 (1, 1, 0) = (1, 1, 0) + (1, 1, 0) + (1, 1, 0) + (1, 1, 0) = (4, 4, 0) .

Defining 4x = x + x + x + x is fine for integer multiples, but does not help us make sense of (1/3)x. For the different types of vectors above, you can probably guess how to multiply a vector by a scalar, for example:

    (1/3) (1, 1, 0) = (1/3, 1/3, 0) .

In any given problem that you are planning to describe using vectors, you need to decide on a way to add and scalar multiply vectors. In summary:


    Vectors are things you can add and scalar multiply.
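These addition and scalar-multiplication rules are easy to try out in a few lines of Python. The sketch below (the helper names add_vec, scale_vec and add_poly are ours, not the book's) treats 3-vectors as tuples and polynomials as coefficient lists [c0, c1, c2, ...]:

```python
from fractions import Fraction

def add_vec(u, v):
    # componentwise addition of two n-vectors
    return tuple(a + b for a, b in zip(u, v))

def scale_vec(c, u):
    # scalar multiplication of an n-vector
    return tuple(c * a for a in u)

def add_poly(p, q):
    # add polynomials stored as coefficient lists [c0, c1, c2, ...]
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

# (B) 3-vectors: (1,1,0) + (0,1,1) = (1,2,1)
print(add_vec((1, 1, 0), (0, 1, 1)))              # (1, 2, 1)

# (C) polynomials: (1 + x - 2x^2 + 3x^3) + (x + 3x^2 - 3x^3 + x^4)
print(add_poly([1, 1, -2, 3], [0, 1, 3, -3, 1]))  # [1, 2, 1, 0, 1]

# exact scalar multiple (1/3)(1,1,0)
print(scale_vec(Fraction(1, 3), (1, 1, 0)))
```

Note that one addition rule (componentwise) covers both the 3-vector and the polynomial examples once a polynomial is encoded as its list of coefficients, which is exactly the "stems from the rules for adding numbers" observation above.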

    1.2 Linear Transformations

In calculus classes, the main subject of investigation was functions and their rates of change. In linear algebra, functions will again be the focus of your attention, but now functions of a very special type. In calculus, you probably encountered functions f(x), but were perhaps encouraged to think of this as a machine "f", whose input is some real number x. For each input x this machine outputs a single real number f(x).

In linear algebra, the functions we study will take vectors, of some type, as both inputs and outputs. We just saw that vectors were objects that could be added or scalar multiplied (a very general notion) so the functions we are going to study will look novel at first. So things don't get too abstract, here are five questions that can be rephrased in terms of functions of vectors:

Example 2 (Functions of Vectors in Disguise)

(A) What number x solves 10x = 3?

(B) What vector u from 3-space satisfies the cross product equation (1, 1, 0) × u = (0, 1, 1)?


(C) What polynomial p(x) satisfies ∫_{-1}^{1} p(y) dy = 0 and ∫_{-1}^{1} y p(y) dy = 1?

(D) What power series f(x) satisfies x (d/dx) f(x) - 2 f(x) = 0?

(E) What number x solves 4x^2 = 1?

For part (A), the machine needed would look like

    x ↦ 10x ,

which is just like a function f(x) from calculus that takes in a number x and spits out the number f(x) = 10x. For part (B), we need something more sophisticated, along the lines of

    (x, y, z) ↦ (z, -z, y - x) ,

whose inputs and outputs are both 3-vectors. You are probably getting the gist by now, but here is the machine needed for part (C):

    p(x) ↦ ( ∫_{-1}^{1} p(y) dy , ∫_{-1}^{1} y p(y) dy ) .

Here we input a polynomial and get a 2-vector as output!
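The part (B) machine can be spot-checked numerically: for input (x, y, z), the cross product (1, 1, 0) × (x, y, z) works out to (z, -z, y - x). A small sketch (the function names cross and machine_B are ours):

```python
def cross(a, b):
    # cross product of two 3-vectors
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def machine_B(v):
    # the part (B) machine: v -> (1, 1, 0) x v
    return cross((1, 1, 0), v)

x, y, z = 2.0, 5.0, 7.0
print(machine_B((x, y, z)))   # (z, -z, y - x) = (7.0, -7.0, 3.0)
```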

By now you may be feeling overwhelmed; surely the study of functions as general as the ones exhibited is very difficult. However, in linear algebra, we will restrict ourselves to a very important, yet much simpler, class of functions of vectors than the most general ones. Let's use the letter L for these functions and think again about vector addition and scalar multiplication. Let's suppose v and u are vectors and c is a number. Then we already know


that u + v and cu are also vectors. Since L is a function of vectors, if we input u into L, the output L(u) will also be some sort of vector. The same goes for L(v), L(u + v) and L(cu). Moreover, we can now also think about adding L(u) and L(v) to get yet another vector L(u) + L(v), or of multiplying L(u) by c to obtain the vector cL(u). Perhaps a picture of all this helps:

The blob on the left represents all the vectors that you are allowed to input into the function L, and the blob on the right denotes the corresponding outputs. Hopefully you noticed that there are two vectors apparently not shown on the blob of outputs:

    L(u) + L(v)   and   cL(u) .

You might already be able to guess the values we would like these to take. If not, here's the answer; it's the key equation of the whole class, from which everything else follows:

1. Additivity: L(u + v) = L(u) + L(v) .

2. Homogeneity: L(cu) = cL(u) .
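As an aside, the two properties can be spot-checked numerically on a handful of sample vectors. Passing such a finite check does not prove linearity, but failing it disproves it. The helper is_linear below is a sketch of ours, not the book's:

```python
def is_linear(L, samples, scalars, tol=1e-9):
    # Numerically test additivity L(u+v) = L(u)+L(v) and
    # homogeneity L(cu) = cL(u) on a few sample 2-vectors.
    def add(u, v):
        return tuple(a + b for a, b in zip(u, v))
    def scale(c, u):
        return tuple(c * a for a in u)
    for u in samples:
        for v in samples:
            if any(abs(a - b) > tol
                   for a, b in zip(L(add(u, v)), add(L(u), L(v)))):
                return False
        for c in scalars:
            if any(abs(a - b) > tol
                   for a, b in zip(L(scale(c, u)), scale(c, L(u)))):
                return False
    return True

def double(v):
    # doubling every entry: a linear map
    return (2 * v[0], 2 * v[1])

def square(v):
    # squaring every entry: NOT linear
    return (v[0] ** 2, v[1] ** 2)

samples = [(1.0, 2.0), (-3.0, 0.5), (0.0, 0.0)]
scalars = [2.0, -1.0, 0.5]

print(is_linear(double, samples, scalars))  # True
print(is_linear(square, samples, scalars))  # False
```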


Most functions of vectors do not obey this requirement; linear algebra is the study of those that do. Notice that the additivity requirement says that the function L respects vector addition: it does not matter if you first add u and v and then input their sum into L, or first input u and v into L separately and then add the outputs. The same holds for scalar multiplication: try writing out the scalar multiplication version of the italicized sentence. When a function of vectors obeys the additivity and homogeneity properties we say that it is linear (this is the "linear" of linear algebra). Together, additivity and homogeneity are called linearity. Other, equivalent, names for linear functions are linear transformations, linear operators and linear maps.

The questions in cases (A-D) of our example can all be restated as a single equation:

    Lv = w

where v is an unknown and w a known vector, and L is a linear transformation. To check that this is true, one needs to know the rules for adding vectors (both inputs and outputs) and then check linearity of L. Solving the equation Lv = w often amounts to solving systems of linear equations, the skill you will learn in Chapter 2.

A great example is the derivative operator.

Example 3 (The derivative operator is linear)
For any two functions f(x), g(x) and any number c, in calculus you probably learnt that the derivative operator satisfies

1. (d/dx)(cf) = c (d/dx) f ,

2. (d/dx)(f + g) = (d/dx) f + (d/dx) g .

If we view functions as vectors with addition given by addition of functions and scalar multiplication just multiplication of functions by a constant, then these familiar properties of derivatives are just the linearity property of linear maps.
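Treating polynomials as coefficient lists, the two derivative properties of Example 3 can be checked in a few lines of code. This is a sketch of ours (the helpers deriv, add and scale are not from the book):

```python
def deriv(p):
    # derivative of a polynomial stored as coefficients [c0, c1, c2, ...]
    return [k * c for k, c in enumerate(p)][1:] or [0]

def add(p, q):
    # add two coefficient lists, padding the shorter one with zeros
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
            for i in range(n)]

def scale(c, p):
    # multiply a polynomial by the constant c
    return [c * a for a in p]

f = [0, 0, 1]   # x^2
g = [1, 3]      # 1 + 3x
c = 5

# d/dx(f + g) = df/dx + dg/dx
print(deriv(add(f, g)) == add(deriv(f), deriv(g)))   # True
# d/dx(c f) = c df/dx
print(deriv(scale(c, f)) == scale(c, deriv(f)))      # True
```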

Before introducing matrices, notice that for linear maps L we will often write simply Lu instead of L(u). This is because the linearity property of a linear transformation L means that L(u) can be thought of as multiplying the vector u by the linear operator L. For example, the linearity of L implies that if u, v are vectors and c, d are numbers, then

    L(cu + dv) = cLu + dLv ,

which feels a lot like the regular rules of algebra for numbers. Notice though, that "uL" makes no sense here.

Remark. A sum of multiples of vectors cu + dv is called a linear combination of u and v.

    1.3 What is a Matrix?

    Matrices are linear operators of a certain kind. One way to learn about themis by studying systems of linear equations.

Example 4 A room contains x bags and y boxes of fruit:

Each bag contains 2 apples and 4 bananas and each box contains 6 apples and 8 bananas. There are 20 apples and 28 bananas in the room. Find x and y.


The values are the numbers x and y that simultaneously make both of the following equations true:

    2x + 6y = 20
    4x + 8y = 28 .

Here we have an example of a System of Linear Equations. It's a collection of equations in which variables are multiplied by constants and summed, and no variables are multiplied together: there are no powers of variables greater than one (like x^2 or b^5), no non-integer or negative powers of variables (like y^(1/2) or √a), and no places where variables are multiplied together (like ab or xy).

    Reading homework: problem 1

    Information about the fruity contents of the room can be stored two ways:

    (i) In terms of the number of apples and bananas.

    (ii) In terms of the number of bags and boxes.

Intuitively, knowing the information in one form allows you to figure out the information in the other form. Going from (ii) to (i) is easy: if you knew there were 3 bags and 2 boxes it would be easy to calculate the number of apples and bananas, and doing so would have the feel of multiplication (containers times fruit per container). In the example above we are required to go the other direction, from (i) to (ii). This feels like the opposite of multiplication, i.e., division. Matrix notation will make clear what we are dividing by.

The goal of Chapter 2 is to efficiently solve systems of linear equations. Partly, this is just a matter of finding a better notation, but one that hints at a deeper underlying mathematical structure. For that, we need rules for adding and scalar multiplying 2-vectors:

    c (x, y) := (cx, cy)   and   (x, y) + (x', y') := (x + x', y + y') .

Writing our fruity equations as an equality between 2-vectors and then using these rules we have:

    2x + 6y = 20
    4x + 8y = 28
        ⇔   (2x + 6y, 4x + 8y) = (20, 28)
        ⇔   x (2, 4) + y (6, 8) = (20, 28) .

http://webwork.math.ucdavis.edu/webwork2/LinearAlgebra/WhatIsLinearAlgebra/1/

Now we introduce an operator which takes in 2-vectors and gives out 2-vectors. We denote it by an array of numbers called a matrix:

    [ 2 6 ]
    [ 4 8 ]

(written inline as [2 6; 4 8], with semicolons separating rows). The operator [2 6; 4 8] is defined by

    [2 6; 4 8] (x, y) := x (2, 4) + y (6, 8) .

A similar definition applies to matrices with different numbers and sizes.

Example 5 (A bigger matrix)

    [ 1 0 3 4 ]
    [ 5 0 3 4 ] (x, y, z, w) := x (1, 5, 1) + y (0, 0, 6) + z (3, 3, 2) + w (4, 4, 5) .
    [ 1 6 2 5 ]

Viewed as a machine that inputs and outputs 2-vectors, our 2 × 2 matrix does the following:

    (x, y) ↦ (2x + 6y, 4x + 8y) .

Our fruity problem is now rather concise.

Example 6 (This time in purely mathematical language):

What vector (x, y) satisfies

    [2 6; 4 8] (x, y) = (20, 28) ?

This is of the same Lv = w form as our opening examples. The matrix encodes fruit per container. The equation is roughly "fruit per container times number of containers equals fruit". To solve for the number of containers we want to somehow divide by the matrix.

Another way to think about the above example is to remember the rule for multiplying a matrix times a vector. If you have forgotten this, you can actually guess a good rule by making sure the matrix equation is the same as the system of linear equations. This would require that

    [2 6; 4 8] (x, y) := (2x + 6y, 4x + 8y) .


Indeed this is an example of the general rule that you have probably seen before:

    [p q; r s] (x, y) := (px + qy, rx + sy) = x (p, r) + y (q, s) .

Notice that the second way of writing the output on the right hand side of this equation is very useful: it tells us what all possible outputs of a matrix times a vector look like; they are just sums of the columns of the matrix multiplied by scalars. The set of all possible outputs of a matrix times a vector is called the column space (it is also the image of the linear function defined by the matrix).

    Reading homework: problem 2

A matrix is an example of a Linear Transformation, because it takes one vector and turns it into another in a "linear" way. Of course, we can have much larger matrices if our system has more variables.

    Matrices in Space!

Matrices are linear operators. The statement of this for the matrix in our fruity example looks like

1.  [2 6; 4 8] (c (a, b)) = c ( [2 6; 4 8] (a, b) )

2.  [2 6; 4 8] ( (a, b) + (c, d) ) = [2 6; 4 8] (a, b) + [2 6; 4 8] (c, d)

These equalities can already be verified using only the rules we introduced so far.

Example 7 Verify that [2 6; 4 8] is a linear operator.

Homogeneity:

    [2 6; 4 8] (ca, cb) = ca (2, 4) + cb (6, 8)
                        = (2ac, 4ac) + (6bc, 8bc)
                        = (2ac + 6bc, 4ac + 8bc) ,

http://webwork.math.ucdavis.edu/webwork2/LinearAlgebra/WhatIsLinearAlgebra/2/
http://math.ucdavis.edu/~linear/videos/what_is_linear_algebra_Possibilities.mp4

which ought (and does) give the same result as

    c ( [2 6; 4 8] (a, b) ) = c ( a (2, 4) + b (6, 8) )
                            = c ( (2a, 4a) + (6b, 8b) )
                            = c (2a + 6b, 4a + 8b)
                            = (2ac + 6bc, 4ac + 8bc) .

Additivity:

    [2 6; 4 8] ( (a, b) + (c, d) ) = [2 6; 4 8] (a + c, b + d)
                                   = (a + c) (2, 4) + (b + d) (6, 8)
                                   = (2(a + c), 4(a + c)) + (6(b + d), 8(b + d))
                                   = (2a + 2c + 6b + 6d, 4a + 4c + 8b + 8d) ,

which we need to compare to

    [2 6; 4 8] (a, b) + [2 6; 4 8] (c, d) = a (2, 4) + b (6, 8) + c (2, 4) + d (6, 8)
                                          = (2a, 4a) + (6b, 8b) + (2c, 4c) + (6d, 8d)
                                          = (2a + 2c + 6b + 6d, 4a + 4c + 8b + 8d) .

We have come full circle; matrices are just examples of the kinds of linear operators that appear in algebra problems like those in section 1.2. Any equation of the form \(Mv=w\) with \(M\) a matrix and \(v\), \(w\) \(n\)-vectors is called a matrix equation. Chapter 2 is about efficiently solving systems of linear equations, or equivalently matrix equations.

    1.4 Review Problems

You probably have already noticed that understanding sets, functions and basic logical operations is a must to do well in linear algebra. Brush up on these skills by trying these background webwork problems:

Logic 1
Sets 2
Functions 3
Equivalence Relations 4
Proofs 5


Each chapter also has reading and skills WeBWorK problems:

Webwork: Reading problems 1, 2

Probably you will spend most of your time on the review questions:

1. Problems A, B, and C of example 2 can all be written as \(Lv=w\) where
\[
L\colon V\longrightarrow W
\]
(read this as "\(L\) maps the set of vectors \(V\) to the set of vectors \(W\)"). For each case write down the sets \(V\) and \(W\) where the vectors \(v\) and \(w\) come from.

2. Torque is a measure of rotational force. It is a vector whose direction is the (preferred) axis of rotation. Upon applying a force \(F\) on an object at point \(r\), the torque is the cross product \(r\times F=\tau\).

Let's find the force \(F\) (a vector) one must apply to a wrench lying along the vector \(r=\begin{pmatrix}1\\1\\0\end{pmatrix}\mathrm{ft}\), to produce a torque \(\begin{pmatrix}0\\0\\1\end{pmatrix}\mathrm{ft\,lb}\):

(a) Find a solution by writing out this equation with \(F=\begin{pmatrix}a\\b\\c\end{pmatrix}\). (Hint: Guess and check that a solution with \(a=0\) exists.)


(b) Add \(\begin{pmatrix}1\\1\\0\end{pmatrix}\) to your solution and check that the result is a solution.

(c) Give a physics explanation of why there can be two solutions, and argue that there are, in fact, infinitely many solutions.

(d) Set up a system of three linear equations with the three components of \(F\) as the variables which describes this situation. What happens if you try to solve these equations by substitution?

3. The function \(P(t)\) gives gas prices (in units of dollars per gallon) as a function of \(t\), the year, and \(g(t)\) is the gas consumption rate measured in gallons per year by an average driver as a function of their age. Assuming a lifetime is 100 years, what function gives the total amount spent on gas during the lifetime of an individual born in an arbitrary year \(t\)? Is the operator that maps \(g\) to this function linear?

4. The differential equation (DE)
\[
\frac{d}{dt}f=2f
\]
says that the rate of change of \(f\) is proportional to \(f\). It describes exponential growth because
\[
f(t)=f(0)e^{2t}
\]
satisfies the DE for any number \(f(0)\). The number 2 in the DE is called the constant of proportionality. A similar DE
\[
\frac{d}{dt}f=\frac{2}{t}f
\]
has a time-dependent constant of proportionality.

(a) Do you think that the second DE describes exponential growth?

(b) Write both DEs in the form \(Df=0\) with \(D\) a linear operator.

    5. Pablo is a nutritionist who knows that oranges always have twice asmuch sugar as apples. When considering the sugar intake of schoolchil-dren eating a barrel of fruit, he represents the barrel like so:


[Figure: a sketch plotting the barrel as the point \((s,f)\) in a plane whose axes are labeled "sugar" and "fruit".]

Find a linear operator relating Pablo's representation to the everyday representation in terms of the number of apples and number of oranges. Write your answer as a matrix.

Hint: Let \(\lambda\) represent the amount of sugar in each apple.

    Hint

6. Matrix Multiplication: Let \(M\) and \(N\) be matrices
\[
M=\begin{pmatrix}a&b\\c&d\end{pmatrix}
\quad\text{and}\quad
N=\begin{pmatrix}e&f\\g&h\end{pmatrix},
\]
and \(v\) the vector
\[
v=\begin{pmatrix}x\\y\end{pmatrix}.
\]
If we first apply \(N\) and then \(M\) to \(v\) we obtain the vector \(MNv\).

(a) Show that the composition of matrices \(MN\) is also a linear operator.

(b) Write out the components of the matrix product \(MN\) in terms of the components of \(M\) and the components of \(N\). Hint: use the general rule for multiplying a 2-vector by a \(2\times 2\) matrix.


(c) Try to answer the following common question: "Is there any sense in which these rules for matrix multiplication are unavoidable, or are they just a notation that could be replaced by some other notation?"

(d) Generalize your multiplication rule to \(3\times 3\) matrices.

7. Diagonal matrices: A matrix \(M\) can be thought of as an array of numbers \(m^i_j\), known as matrix entries, or matrix components, where \(i\) and \(j\) index row and column numbers, respectively. Let
\[
M=\begin{pmatrix}1&2\\3&4\end{pmatrix}=\big(m^i_j\big).
\]
Compute \(m^1_1\), \(m^1_2\), \(m^2_1\) and \(m^2_2\).

The matrix entries \(m^i_i\) whose row and column numbers are the same are called the diagonal of \(M\). Matrix entries \(m^i_j\) with \(i\neq j\) are called off-diagonal. How many diagonal entries does an \(n\times n\) matrix have? How many off-diagonal entries does an \(n\times n\) matrix have?

If all the off-diagonal entries of a matrix vanish, we say that the matrix is diagonal. Let
\[
D=\begin{pmatrix}\lambda&0\\0&\mu\end{pmatrix}
\quad\text{and}\quad
D'=\begin{pmatrix}\lambda'&0\\0&\mu'\end{pmatrix}.
\]
Are these matrices diagonal and why? Use the rule you found in problem 6 to compute the matrix products \(DD'\) and \(D'D\). What do you observe? Do you think the same property holds for arbitrary matrices? What about products where only one of the matrices is diagonal?

8. Find the linear operator that takes in vectors from \(n\)-space and gives out vectors from \(n\)-space in such a way that whatever you put in, you get exactly the same thing out. Show that it is unique. Can you write this operator as a matrix? Hint: To show something is unique, it is usually best to begin by pretending that it isn't, and then showing that this leads to a nonsensical conclusion. In mathspeak, proof by contradiction.

9. Consider the set \(S=\{\ast,\star,\#\}\). It contains just 3 elements, and has no ordering; \(\{\ast,\star,\#\}=\{\#,\ast,\star\}\) etc. (In fact the same is true for \(\{1,2,3\}=\{2,3,1\}\) etc., although we could make this an ordered set using \(3>2>1\).)

(i) Invent a function with domain \(\{\ast,\star,\#\}\) and codomain \(\mathbb{R}\). (Remember that the domain of a function is the set of all its allowed inputs and the codomain (or target space) is the set where the outputs can live. A function is specified by assigning exactly one codomain element to each element of the domain.)

(ii) Choose an ordering on \(\{\ast,\star,\#\}\), and then use it to write your function from part (i) as a triple of numbers.

(iii) Choose a new ordering on \(\{\ast,\star,\#\}\) and then write your function from part (i) as a triple of numbers.

(iv) Your answers for parts (ii) and (iii) are different yet represent the same function; explain!


2 Systems of Linear Equations

2.1 Gaussian Elimination

Systems of linear equations can be written as matrix equations. Now you will learn an efficient algorithm for (maximally) simplifying a system of linear equations (or a matrix equation): Gaussian elimination.

2.1.1 Augmented Matrix Notation

Efficiency demands a new notation, called an augmented matrix, which we introduce via examples:

The linear system
\[
\begin{array}{rcl}x+y&=&27\\2x-y&=&0\,,\end{array}
\]
is denoted by the augmented matrix
\[
\left(\begin{array}{rr|r}1&1&27\\2&-1&0\end{array}\right).
\]
This notation is simpler than the matrix one,
\[
\begin{pmatrix}1&1\\2&-1\end{pmatrix}\begin{pmatrix}x\\y\end{pmatrix}=\begin{pmatrix}27\\0\end{pmatrix},
\]
although all three of the above equations denote the same thing.


Entries left of the divide carry two indices; subscripts denote column number and superscripts denote row number. We emphasize that the superscripts here do not denote exponents. Make sure you can write out the system of equations and the associated matrix equation for any augmented matrix.

    Reading homework: problem 1

We now have three ways of writing the same question. Let's put them side by side as we solve the system by strategically adding and subtracting equations.

Example 8 (How matrix equations and augmented matrices change in elimination)
\[
\begin{array}{rcl}x+y&=&27\\2x-y&=&0\end{array}
\qquad
\begin{pmatrix}1&1\\2&-1\end{pmatrix}\begin{pmatrix}x\\y\end{pmatrix}=\begin{pmatrix}27\\0\end{pmatrix}
\qquad
\left(\begin{array}{rr|r}1&1&27\\2&-1&0\end{array}\right).
\]
Replace the first equation by the sum of the two equations:
\[
\begin{array}{rcl}3x+0&=&27\\2x-y&=&0\end{array}
\qquad
\begin{pmatrix}3&0\\2&-1\end{pmatrix}\begin{pmatrix}x\\y\end{pmatrix}=\begin{pmatrix}27\\0\end{pmatrix}
\qquad
\left(\begin{array}{rr|r}3&0&27\\2&-1&0\end{array}\right).
\]
Let the new first equation be the old first equation divided by 3:
\[
\begin{array}{rcl}x+0&=&9\\2x-y&=&0\end{array}
\qquad
\begin{pmatrix}1&0\\2&-1\end{pmatrix}\begin{pmatrix}x\\y\end{pmatrix}=\begin{pmatrix}9\\0\end{pmatrix}
\qquad
\left(\begin{array}{rr|r}1&0&9\\2&-1&0\end{array}\right).
\]
Replace the second equation by the second equation minus two times the first equation:
\[
\begin{array}{rcl}x+0&=&9\\0-y&=&-18\end{array}
\qquad
\begin{pmatrix}1&0\\0&-1\end{pmatrix}\begin{pmatrix}x\\y\end{pmatrix}=\begin{pmatrix}9\\-18\end{pmatrix}
\qquad
\left(\begin{array}{rr|r}1&0&9\\0&-1&-18\end{array}\right).
\]
Let the new second equation be the old second equation divided by \(-1\):
\[
\begin{array}{rcl}x+0&=&9\\0+y&=&18\end{array}
\qquad
\begin{pmatrix}1&0\\0&1\end{pmatrix}\begin{pmatrix}x\\y\end{pmatrix}=\begin{pmatrix}9\\18\end{pmatrix}
\qquad
\left(\begin{array}{rr|r}1&0&9\\0&1&18\end{array}\right).
\]
Did you see what the strategy was? To eliminate \(y\) from the first equation and then eliminate \(x\) from the second. The result was the solution to the system.

Here is the big idea: Everywhere in the instructions above we can replace the word "equation" with the word "row" and interpret them as telling us what to do with the augmented matrix instead of the system of equations. Performed systematically, the result is the Gaussian elimination algorithm.
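The four row manipulations of Example 8 can be replayed mechanically on the augmented matrix alone. Here is a Python sketch of ours (not the book's), using exact fractions so the division by 3 introduces no rounding:

```python
from fractions import Fraction

A = [[Fraction(1), Fraction(1), Fraction(27)],
     [Fraction(2), Fraction(-1), Fraction(0)]]

A[0] = [a + b for a, b in zip(A[0], A[1])]      # row 1 <- row 1 + row 2
A[0] = [a / 3 for a in A[0]]                    # row 1 <- row 1 / 3
A[1] = [b - 2 * a for a, b in zip(A[0], A[1])]  # row 2 <- row 2 - 2(row 1)
A[1] = [b / -1 for b in A[1]]                   # row 2 <- row 2 / (-1)

assert A == [[1, 0, 9], [0, 1, 18]]             # i.e. x = 9, y = 18
```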


    2.1.2 Equivalence and the Act of Solving

We introduce the symbol \(\sim\), which is called "tilde" but should be read as "is (row) equivalent to", because at each step the augmented matrix changes by an operation on its rows but its solutions do not. For example, we found above that
\[
\left(\begin{array}{rr|r}1&1&27\\2&-1&0\end{array}\right)\sim
\left(\begin{array}{rr|r}1&0&9\\2&-1&0\end{array}\right)\sim
\left(\begin{array}{rr|r}1&0&9\\0&1&18\end{array}\right).
\]
The last of these augmented matrices is our favorite!

    Equivalence Example

Setting up a string of equivalences like this is a means of solving a system of linear equations. This is the main idea of section 2.1.3. This next example hints at the main trick:

Example 9 (Using Gaussian elimination to solve a system of linear equations)
\[
\begin{array}{rcl}x+y&=&5\\x+2y&=&8\end{array}
\qquad
\left(\begin{array}{rr|r}1&1&5\\1&2&8\end{array}\right)\sim
\left(\begin{array}{rr|r}1&1&5\\0&1&3\end{array}\right)\sim
\left(\begin{array}{rr|r}1&0&2\\0&1&3\end{array}\right)
\qquad
\begin{array}{rcl}x+0&=&2\\0+y&=&3\end{array}
\]
Note that in going from the first to second augmented matrix, we used the top left 1 to make the bottom left entry zero. For this reason we call the top left entry a pivot. Similarly, to get from the second to third augmented matrix, the bottom right entry (before the divide) was used to make the top right one vanish; so the bottom right entry is also called a pivot.

This name "pivot" is used to indicate the matrix entry used to zero out the other entries in its column.

2.1.3 Reduced Row Echelon Form

For a system of two linear equations, the goal of Gaussian elimination is to convert the part of the augmented matrix left of the dividing line into the matrix
\[
I=\begin{pmatrix}1&0\\0&1\end{pmatrix},
\]
called the Identity Matrix, since this would give the simple statement of a solution \(x=a\), \(y=b\). The same goes for larger systems of equations, for which the identity matrix \(I\) has 1's along its diagonal and all off-diagonal entries vanish:
\[
I=\begin{pmatrix}1&0&\cdots&0\\0&1&&0\\\vdots&&\ddots&\vdots\\0&0&\cdots&1\end{pmatrix}
\]

    Reading homework: problem 2

For many systems, it is not possible to reach the identity in the augmented matrix via Gaussian elimination:

Example 10 (Redundant equations)
\[
\begin{array}{rcl}x+y&=&2\\2x+2y&=&4\end{array}
\qquad
\left(\begin{array}{rr|r}1&1&2\\2&2&4\end{array}\right)\sim
\left(\begin{array}{rr|r}1&1&2\\0&0&0\end{array}\right)
\qquad
\begin{array}{rcl}x+y&=&2\\0+0&=&0\end{array}
\]
This example demonstrates that if one equation is a multiple of the other, the identity matrix cannot be reached. This is because the first step in elimination will make the second row a row of zeros. Notice that solutions still exist: \(x=1\), \(y=1\) is a solution. The last augmented matrix here is in RREF.

Example 11 (Inconsistent equations)
\[
\begin{array}{rcl}x+y&=&2\\2x+2y&=&5\end{array}
\qquad
\left(\begin{array}{rr|r}1&1&2\\2&2&5\end{array}\right)\sim
\left(\begin{array}{rr|r}1&1&2\\0&0&1\end{array}\right)
\qquad
\begin{array}{rcl}x+y&=&2\\0+0&=&1\end{array}
\]
This system of equations has a solution if there exist two numbers \(x\) and \(y\) such that \(0+0=1\). That is a tricky way of saying there are no solutions. The last form of the augmented matrix here is in RREF.

Example 12 (Silly order of equations)
A robot might make this mistake:
\[
\begin{array}{rcl}0x+y&=&-2\\x+y&=&7\end{array}
\qquad
\left(\begin{array}{rr|r}0&1&-2\\1&1&7\end{array}\right),
\]
and then give up because the upper left slot cannot function as a pivot, since the 0 that lives there cannot be used to eliminate the nonzero entry below it. Of course, the right thing to do is to change the order of the equations before starting:
\[
\begin{array}{rcl}x+y&=&7\\0x+y&=&-2\end{array}
\qquad
\left(\begin{array}{rr|r}1&1&7\\0&1&-2\end{array}\right)\sim
\left(\begin{array}{rr|r}1&0&9\\0&1&-2\end{array}\right)
\qquad
\begin{array}{rcl}x+0&=&9\\0+y&=&-2\,.\end{array}
\]
The third augmented matrix above is the RREF of the first and second. That is to say, you can swap rows on your way to RREF.

For larger systems of equations, these three kinds of problems are the obstruction to obtaining the identity matrix, and hence to a simple statement of a solution in the form \(x=a\), \(y=b,\ldots\). What can we do to maximally simplify a system of equations in general? We need to perform operations that simplify our system without changing its solutions. Because exchanging the order of equations, multiplying one equation by a non-zero constant, or adding equations does not change the system's solutions, we are led to three operations:

(Row Swap) Exchange any two rows.

(Scalar Multiplication) Multiply any row by a non-zero constant.

(Row Sum) Add a multiple of one row to another row.

These are called Elementary Row Operations, or EROs for short, and are studied in detail in section 2.3. Suppose now we have a general augmented matrix for which the first entry in the first row does not vanish. Then, using just the three EROs, we could perform the following algorithm:

Make the leftmost nonzero entry in the top row 1 by multiplication.

Then use that 1 as a pivot to eliminate everything below it.

Then go to the next row and make the leftmost nonzero entry 1.

Use that 1 as a pivot to eliminate everything below and above it!

Go to the next row and make the leftmost nonzero entry 1... etc.

In the case that the first entry of the first row is zero, we may first interchange the first row with another row whose first entry is non-vanishing and then perform the above algorithm. If the entire first column vanishes, we may still apply the algorithm on the remaining columns.
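The algorithm just described translates almost line by line into code. The function below is our illustration (not from the book); it row reduces any augmented matrix using the three EROs, eliminating below and above each pivot as it goes:

```python
from fractions import Fraction

def rref(matrix):
    # Work with exact fractions so divisions introduce no rounding.
    M = [[Fraction(x) for x in row] for row in matrix]
    rows, cols = len(M), len(M[0])
    pivot_row = 0
    for col in range(cols):
        # Find a row at or below pivot_row with a nonzero entry here;
        # if the whole remaining column vanishes, move to the next column.
        r = next((r for r in range(pivot_row, rows) if M[r][col] != 0), None)
        if r is None:
            continue
        M[pivot_row], M[r] = M[r], M[pivot_row]          # row swap
        p = M[pivot_row][col]
        M[pivot_row] = [x / p for x in M[pivot_row]]     # make the pivot 1
        for other in range(rows):                        # clear the column
            if other != pivot_row and M[other][col] != 0:
                f = M[other][col]
                M[other] = [a - f * b
                            for a, b in zip(M[other], M[pivot_row])]
        pivot_row += 1
        if pivot_row == rows:
            break
    return M

assert rref([[1, 1, 27], [2, -1, 0]]) == [[1, 0, 9], [0, 1, 18]]
```

On the redundant system of Example 10 the same function returns \([[1,1,2],[0,0,0]]\), matching the RREF found by hand.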


    Beginner Elimination

This algorithm is known as Gaussian elimination. Its endpoint is an augmented matrix of the form
\[
\left(\begin{array}{cccccccc|c}
1&\ast&0&\ast&0&\cdots&0&\ast&b^1\\
0&0&1&\ast&0&\cdots&0&\ast&b^2\\
0&0&0&0&1&\cdots&0&\ast&b^3\\
\vdots&\vdots&\vdots&\vdots&\vdots&\ddots&\vdots&\vdots&\vdots\\
0&0&0&0&0&\cdots&1&\ast&b^k\\
0&0&0&0&0&\cdots&0&0&b^{k+1}\\
\vdots&\vdots&\vdots&\vdots&\vdots&&\vdots&\vdots&\vdots\\
0&0&0&0&0&\cdots&0&0&b^r
\end{array}\right)
\]
This is called Reduced Row Echelon Form (RREF). The asterisks denote the possibility of arbitrary numbers (e.g., the second 1 in the top line of example 10). The following properties define RREF:

1. In every row the leftmost non-zero entry is 1 (and is called a pivot).

2. The pivot of any given row is always to the right of the pivot of the row above it.

3. The pivot is the only non-zero entry in its column.

Here are some examples:

Example 13 (Augmented matrix in RREF)
\[
\left(\begin{array}{ccc|c}1&0&7&0\\0&1&3&0\\0&0&0&1\\0&0&0&0\end{array}\right)
\]

Example 14 (Augmented matrix NOT in RREF)
\[
\left(\begin{array}{ccc|c}1&0&3&0\\0&0&2&0\\0&1&0&1\\0&0&0&1\end{array}\right)
\]
Actually, this NON-example breaks all three of the rules!


In this case, we say that \(x\), \(y\), and \(z\) are pivot variables because they appear with a pivot coefficient in RREF. Since \(w\) never appears with a pivot coefficient, it is not a pivot variable. One way to express the solutions to this system of equations is to put all the pivot variables on one side and all the non-pivot variables on the other side. It is also nice to add the "empty" equation \(w=w\) to obtain the system
\[
\begin{array}{rcl}x&=&5-3w\\y&=&6-2w\\z&=&8-4w\\w&=&w\end{array}
\qquad\Longleftrightarrow\qquad
\begin{pmatrix}x\\y\\z\\w\end{pmatrix}=\begin{pmatrix}5\\6\\8\\0\end{pmatrix}+w\begin{pmatrix}-3\\-2\\-4\\1\end{pmatrix},
\]
which we have written as the solution to the corresponding matrix problem. There are infinitely many solutions, one for each value of \(w\). We call the collection of all solutions the solution set. A good check is to set \(w=0\) and see if the system is solved.

The last example demonstrated the standard approach for solving a system of linear equations in its entirety:

1. Write the augmented matrix.

2. Perform EROs to reach RREF.

3. Express the pivot variables in terms of the non-pivot variables.

There are always exactly enough non-pivot variables to index your solutions. In any approach, the variables which are not expressed in terms of the other variables are called free variables. The standard approach is to use the non-pivot variables as free variables.

Non-standard approach: solve for \(w\) in terms of \(z\) and substitute into the other equations. You now have an expression for each component in terms of \(z\). But why pick \(z\) instead of \(y\) or \(x\)? (or \(x+y\)?) The standard approach not only feels natural, but is canonical, meaning that everyone will get the same RREF and hence choose the same variables to be free. However, it is important to remember that so long as their set of solutions is the same, any two choices of free variables are fine. (You might think of this as the difference between using Google Maps™ or Mapquest™; although their maps may look different, the place (home) they are describing is the same!)

When you see an RREF augmented matrix with two columns that have no pivot, you know there will be two free variables.


Example 17
\[
\left(\begin{array}{cccc|c}1&0&7&0&4\\0&1&3&4&1\\0&0&0&0&0\\0&0&0&0&0\end{array}\right)
\qquad
\begin{array}{rcl}x+7z&=&4\\y+3z+4w&=&1\end{array}
\]
Expressing the pivot variables in terms of the non-pivot variables, and using two "empty" equations, gives
\[
\begin{array}{rcl}x&=&4-7z\\y&=&1-3z-4w\\z&=&z\\w&=&w\end{array}
\qquad\Longleftrightarrow\qquad
\begin{pmatrix}x\\y\\z\\w\end{pmatrix}=\begin{pmatrix}4\\1\\0\\0\end{pmatrix}+z\begin{pmatrix}-7\\-3\\1\\0\end{pmatrix}+w\begin{pmatrix}0\\-4\\0\\1\end{pmatrix}
\]
There are infinitely many solutions; one for each pair of numbers \(z\), \(w\).
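A quick numerical check of Example 17, in the spirit of "set the free variables and see if the system is solved"; this Python sketch is our own:

```python
def solution(z, w):
    # x = 4 - 7z, y = 1 - 3z - 4w, with z and w free
    base, dz, dw = [4, 1, 0, 0], [-7, -3, 1, 0], [0, -4, 0, 1]
    return [b + z * p + w * q for b, p, q in zip(base, dz, dw)]

checks = []
for z in (-2, 0, 3):
    for w in (-1, 0, 5):
        x, y, zz, ww = solution(z, w)
        checks.append(x + 7 * zz == 4 and y + 3 * zz + 4 * ww == 1)
assert all(checks)
```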

    Solution set in set notation

You can imagine having three, four, or fifty-six non-pivot columns and the same number of free variables indexing your solution set. You need to become very adept at reading off solutions of linear systems from the RREF of their augmented matrix.

    Worked examples of Gaussian elimination

    2.2 Review Problems

Webwork:
Reading problems 1, 2
Augmented matrix 6
\(2\times 2\) systems 7, 8, 9, 10, 11, 12
\(3\times 2\) systems 13, 14
\(3\times 3\) systems 15, 16, 17

1. State whether the following augmented matrices are in RREF and compute their solution sets.
\[
\left(\begin{array}{ccccc|c}1&0&0&0&3&1\\0&1&0&0&1&2\\0&0&1&0&1&3\\0&0&0&1&2&0\end{array}\right),
\]

\[
\left(\begin{array}{cccccc|c}1&1&0&1&0&1&0\\0&0&1&2&0&2&0\\0&0&0&0&1&3&0\\0&0&0&0&0&0&0\end{array}\right),
\qquad
\left(\begin{array}{ccccccc|c}1&1&0&1&0&1&0&1\\0&0&1&2&0&2&0&1\\0&0&0&0&1&3&0&1\\0&0&0&0&0&2&0&2\\0&0&0&0&0&0&1&1\end{array}\right).
\]

2. Solve the following linear system:
\[
\begin{array}{rcl}
2x_1+5x_2-8x_3+2x_4+2x_5&=&0\\
6x_1+2x_2-10x_3+6x_4+8x_5&=&6\\
3x_1+6x_2+2x_3+3x_4+5x_5&=&6\\
3x_1+x_2-5x_3+3x_4+4x_5&=&3\\
6x_1+7x_2-3x_3+6x_4+9x_5&=&9
\end{array}
\]
Be sure to set your work out carefully with equivalence signs \(\sim\) between each step, labeled by the row operations you performed.

3. Check that the following two matrices are row-equivalent:
\[
\begin{pmatrix}1&4&7&10\\2&9&6&0\end{pmatrix}
\quad\text{and}\quad
\begin{pmatrix}0&-1&8&20\\4&18&12&0\end{pmatrix}.
\]
Now remove the third column from each matrix, and show that the resulting two matrices (shown below) are row-equivalent:
\[
\begin{pmatrix}1&4&10\\2&9&0\end{pmatrix}
\quad\text{and}\quad
\begin{pmatrix}0&-1&20\\4&18&0\end{pmatrix}.
\]
Now remove the fourth column from each of the original two matrices, and show that the resulting two matrices, viewed as augmented matrices (shown below), are row-equivalent:
\[
\left(\begin{array}{cc|c}1&4&7\\2&9&6\end{array}\right)
\quad\text{and}\quad
\left(\begin{array}{cc|c}0&-1&8\\4&18&12\end{array}\right).
\]
Explain why row-equivalence is never affected by removing columns.


4. Check that the system of equations corresponding to the augmented matrix
\[
\left(\begin{array}{cc|c}1&4&10\\3&13&9\\4&17&20\end{array}\right)
\]
has no solutions. If you remove one of the rows of this matrix, does the new matrix have any solutions? In general, can row equivalence be affected by removing rows? Explain why or why not.

5. Explain why the linear system has no solutions:
\[
\left(\begin{array}{ccc|c}1&0&3&1\\0&1&2&4\\0&0&0&6\end{array}\right)
\]
For which values of \(k\) does the system below have a solution?
\[
\begin{array}{rcl}x-3y&=&6\\x+3z&=&-3\\2x+ky+(3-k)z&=&1\end{array}
\]

    Hint

6. Show that the RREF of a matrix is unique. (Hint: Consider what happens if the same augmented matrix had two different RREFs. Try to see what happens if you removed columns from these two RREF augmented matrices.)

7. Another method for solving linear systems is to use row operations to bring the augmented matrix to Row Echelon Form (REF as opposed to RREF). In REF, the pivots are not necessarily set to one, and we only require that all entries below and to the left of the pivots are zero, not necessarily the entries above a pivot. Provide a counterexample to show that row echelon form is not unique.

Once a system is in row echelon form, it can be solved by "back substitution." Write the following row echelon matrix as a system of equations, then solve the system using back-substitution.


\[
\left(\begin{array}{ccc|c}2&3&1&6\\0&1&1&2\\0&0&3&3\end{array}\right)
\]

8. Show that this pair of augmented matrices are row equivalent, assuming \(ad-bc\neq 0\):
\[
\left(\begin{array}{cc|c}a&b&e\\c&d&f\end{array}\right)\sim
\left(\begin{array}{cc|c}1&0&\frac{de-bf}{ad-bc}\\[4pt]0&1&\frac{af-ce}{ad-bc}\end{array}\right)
\]

9. Consider the augmented matrix:
\[
\left(\begin{array}{cc|c}2&1&3\\6&3&1\end{array}\right).
\]
Give a geometric reason why the associated system of equations has no solution. (Hint: plot the three vectors given by the columns of this augmented matrix in the plane.) Given a general augmented matrix
\[
\left(\begin{array}{cc|c}a&b&e\\c&d&f\end{array}\right),
\]
can you find a condition on the numbers \(a\), \(b\), \(c\) and \(d\) that corresponds to the geometric condition you found?

10. A relation \(\sim\) on a set of objects \(U\) is an equivalence relation if the following three properties are satisfied:

Reflexive: For any \(x\in U\), we have \(x\sim x\).
Symmetric: For any \(x,y\in U\), if \(x\sim y\) then \(y\sim x\).
Transitive: For any \(x,y\) and \(z\in U\), if \(x\sim y\) and \(y\sim z\) then \(x\sim z\).

Show that row equivalence of matrices is an example of an equivalence relation.

(For a discussion of equivalence relations, see Homework 0, Problem 4.)

    Hint


11. Equivalence of augmented matrices does not come from equality of their solution sets. Rather, we define two matrices to be equivalent if one can be obtained from the other by elementary row operations.

Find a pair of augmented matrices that are not row equivalent but do have the same solution set.

    2.3 Elementary Row Operations

Elementary row operations are systems of linear equations relating the old and new rows in Gaussian elimination:

Example 18 (Keeping track of EROs with equations between rows)
We refer to the new \(k\)th row as \(R'_k\) and the old \(k\)th row as \(R_k\).

\[
\left(\begin{array}{ccc|c}0&1&1&7\\2&0&0&4\\0&0&1&4\end{array}\right)
\qquad
\begin{array}{rcl}R'_1&=&0R_1+R_2+0R_3\\R'_2&=&R_1+0R_2+0R_3\\R'_3&=&0R_1+0R_2+R_3\end{array}
\]
\[
\sim\left(\begin{array}{ccc|c}2&0&0&4\\0&1&1&7\\0&0&1&4\end{array}\right)
\qquad
\begin{pmatrix}R'_1\\R'_2\\R'_3\end{pmatrix}=\begin{pmatrix}0&1&0\\1&0&0\\0&0&1\end{pmatrix}\begin{pmatrix}R_1\\R_2\\R_3\end{pmatrix}
\]
\[
\begin{array}{rcl}R'_1&=&\tfrac12 R_1+0R_2+0R_3\\R'_2&=&0R_1+R_2+0R_3\\R'_3&=&0R_1+0R_2+R_3\end{array}
\qquad
\sim\left(\begin{array}{ccc|c}1&0&0&2\\0&1&1&7\\0&0&1&4\end{array}\right)
\qquad
\begin{pmatrix}R'_1\\R'_2\\R'_3\end{pmatrix}=\begin{pmatrix}\tfrac12&0&0\\0&1&0\\0&0&1\end{pmatrix}\begin{pmatrix}R_1\\R_2\\R_3\end{pmatrix}
\]
\[
\begin{array}{rcl}R'_1&=&R_1+0R_2+0R_3\\R'_2&=&0R_1+R_2-R_3\\R'_3&=&0R_1+0R_2+R_3\end{array}
\qquad
\sim\left(\begin{array}{ccc|c}1&0&0&2\\0&1&0&3\\0&0&1&4\end{array}\right)
\qquad
\begin{pmatrix}R'_1\\R'_2\\R'_3\end{pmatrix}=\begin{pmatrix}1&0&0\\0&1&-1\\0&0&1\end{pmatrix}\begin{pmatrix}R_1\\R_2\\R_3\end{pmatrix}
\]
On the right, we have listed the relations between old and new rows in matrix notation.

    Reading homework: problem 3

    2.3.1 EROs and Matrices

The matrix describing the system of equations relating rows performs the corresponding ERO on the augmented matrix:


Example 19 (Performing EROs with Matrices)
\[
\begin{pmatrix}0&1&0\\1&0&0\\0&0&1\end{pmatrix}
\left(\begin{array}{ccc|c}0&1&1&7\\2&0&0&4\\0&0&1&4\end{array}\right)
=\left(\begin{array}{ccc|c}2&0&0&4\\0&1&1&7\\0&0&1&4\end{array}\right)
\]
\[
\begin{pmatrix}\tfrac12&0&0\\0&1&0\\0&0&1\end{pmatrix}
\left(\begin{array}{ccc|c}2&0&0&4\\0&1&1&7\\0&0&1&4\end{array}\right)
=\left(\begin{array}{ccc|c}1&0&0&2\\0&1&1&7\\0&0&1&4\end{array}\right)
\]
\[
\begin{pmatrix}1&0&0\\0&1&-1\\0&0&1\end{pmatrix}
\left(\begin{array}{ccc|c}1&0&0&2\\0&1&1&7\\0&0&1&4\end{array}\right)
=\left(\begin{array}{ccc|c}1&0&0&2\\0&1&0&3\\0&0&1&4\end{array}\right)
\]
Here we have multiplied the augmented matrix by the matrices that acted on rows, listed on the right of example 18.
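That each ERO is a left multiplication can be confirmed directly; this sketch (ours, with a plain `matmul` helper) replays the first step of Example 19, the row swap:

```python
def matmul(A, B):
    # rows of A dotted with columns of B
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

swap = [[0, 1, 0],
        [1, 0, 0],
        [0, 0, 1]]
augmented = [[0, 1, 1, 7],
             [2, 0, 0, 4],
             [0, 0, 1, 4]]

assert matmul(swap, augmented) == [[2, 0, 0, 4],
                                   [0, 1, 1, 7],
                                   [0, 0, 1, 4]]
```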

Realizing EROs as matrices allows us to give a concrete notion of "dividing by a matrix"; we can now perform manipulations on both sides of an equation in a familiar way:

Example 20 (Undoing \(A\) in \(Ax=b\) slowly, for \(A=6=3\cdot 2\))
\[
\begin{array}{rcl}
6x&=&12\\
\Rightarrow\ 3^{-1}6x&=&3^{-1}12\quad\Rightarrow\quad 2x=4\\
\Rightarrow\ 2^{-1}2x&=&2^{-1}4\quad\ \ \Rightarrow\quad 1x=2
\end{array}
\]
The matrices corresponding to EROs undo a matrix step by step.


Example 21 (Undoing \(A\) in \(Ax=b\) slowly, for \(A=M=\cdots\))
\[
\begin{pmatrix}0&1&1\\2&0&0\\0&0&1\end{pmatrix}\begin{pmatrix}x\\y\\z\end{pmatrix}=\begin{pmatrix}7\\4\\4\end{pmatrix}
\]
\[
\Rightarrow\
\begin{pmatrix}0&1&0\\1&0&0\\0&0&1\end{pmatrix}\begin{pmatrix}0&1&1\\2&0&0\\0&0&1\end{pmatrix}\begin{pmatrix}x\\y\\z\end{pmatrix}
=\begin{pmatrix}0&1&0\\1&0&0\\0&0&1\end{pmatrix}\begin{pmatrix}7\\4\\4\end{pmatrix}
\quad\Rightarrow\quad
\begin{pmatrix}2&0&0\\0&1&1\\0&0&1\end{pmatrix}\begin{pmatrix}x\\y\\z\end{pmatrix}=\begin{pmatrix}4\\7\\4\end{pmatrix}
\]
\[
\Rightarrow\
\begin{pmatrix}\tfrac12&0&0\\0&1&0\\0&0&1\end{pmatrix}\begin{pmatrix}2&0&0\\0&1&1\\0&0&1\end{pmatrix}\begin{pmatrix}x\\y\\z\end{pmatrix}
=\begin{pmatrix}\tfrac12&0&0\\0&1&0\\0&0&1\end{pmatrix}\begin{pmatrix}4\\7\\4\end{pmatrix}
\quad\Rightarrow\quad
\begin{pmatrix}1&0&0\\0&1&1\\0&0&1\end{pmatrix}\begin{pmatrix}x\\y\\z\end{pmatrix}=\begin{pmatrix}2\\7\\4\end{pmatrix}
\]
\[
\Rightarrow\
\begin{pmatrix}1&0&0\\0&1&-1\\0&0&1\end{pmatrix}\begin{pmatrix}1&0&0\\0&1&1\\0&0&1\end{pmatrix}\begin{pmatrix}x\\y\\z\end{pmatrix}
=\begin{pmatrix}1&0&0\\0&1&-1\\0&0&1\end{pmatrix}\begin{pmatrix}2\\7\\4\end{pmatrix}
\quad\Rightarrow\quad
\begin{pmatrix}1&0&0\\0&1&0\\0&0&1\end{pmatrix}\begin{pmatrix}x\\y\\z\end{pmatrix}=\begin{pmatrix}2\\3\\4\end{pmatrix}
\]
This is another way of thinking about Gaussian elimination which feels more like elementary algebra in the sense that you do something to both sides of an equation until you have a solution.

2.3.2 Recording EROs in \((M|I)\)

Just as we put together \(2^{-1}3^{-1}=6^{-1}\) to get a single thing to apply to both sides of \(6x=12\) to "undo" 6, we should put together multiple EROs to get a single thing that undoes our matrix. To do this, augment by the identity matrix (not just a single column) and then perform Gaussian elimination. There is no need to write the EROs as systems of equations or as matrices while doing this.


Example 22 (Collecting EROs that undo a matrix)
\[
\left(\begin{array}{ccc|ccc}0&1&1&1&0&0\\2&0&0&0&1&0\\0&0&1&0&0&1\end{array}\right)\sim
\left(\begin{array}{ccc|ccc}2&0&0&0&1&0\\0&1&1&1&0&0\\0&0&1&0&0&1\end{array}\right)\sim
\]
\[
\left(\begin{array}{ccc|ccc}1&0&0&0&\tfrac12&0\\0&1&1&1&0&0\\0&0&1&0&0&1\end{array}\right)\sim
\left(\begin{array}{ccc|ccc}1&0&0&0&\tfrac12&0\\0&1&0&1&0&-1\\0&0&1&0&0&1\end{array}\right).
\]
As we changed the left side from the matrix \(M\) to the identity matrix, the right side changed from the identity matrix to the matrix which undoes \(M\):

Example 23 (Checking that one matrix undoes another)
\[
\begin{pmatrix}0&\tfrac12&0\\1&0&-1\\0&0&1\end{pmatrix}
\begin{pmatrix}0&1&1\\2&0&0\\0&0&1\end{pmatrix}
=\begin{pmatrix}1&0&0\\0&1&0\\0&0&1\end{pmatrix}.
\]
If the matrices are composed in the opposite order, the result is the same.
\[
\begin{pmatrix}0&1&1\\2&0&0\\0&0&1\end{pmatrix}
\begin{pmatrix}0&\tfrac12&0\\1&0&-1\\0&0&1\end{pmatrix}
=\begin{pmatrix}1&0&0\\0&1&0\\0&0&1\end{pmatrix}.
\]
Whenever the product of two matrices \(MN=I\), we say that \(N\) is the inverse of \(M\), or \(N=M^{-1}\), and conversely \(M\) is the inverse of \(N\), or \(M=N^{-1}\).

In abstract generality, let \(M\) be some matrix and, as always, let \(I\) stand for the identity matrix. Imagine the process of performing elementary row operations to bring \(M\) to the identity matrix:
\[
(M|I)\sim(E_1M|E_1)\sim(E_2E_1M|E_2E_1)\sim\cdots\sim(I|\cdots E_2E_1)\,.
\]
The ellipses \(\cdots\) stand for additional EROs. The result is a product of matrices that form a matrix which undoes \(M\):
\[
\cdots E_2E_1M=I\,.
\]
This is only true if the RREF of \(M\) is the identity matrix. In that case, we say \(M\) is invertible.

Much use is made of the fact that invertible matrices can be undone with EROs. To begin with, since each elementary row operation has an inverse,
\[
M=E_1^{-1}E_2^{-1}\cdots,
\]


while the inverse of $M$ is

$$M^{-1}=\cdots E_2E_1\,.$$

This is symbolically verified by

$$M^{-1}M=\cdots E_2E_1\,E_1^{-1}E_2^{-1}\cdots=\cdots E_2E_2^{-1}\cdots=\cdots=I\,.$$

Thus, if $M$ is invertible, then $M$ can be expressed as the product of EROs. (The same is true for its inverse.) This has the feel of the fundamental theorem of arithmetic (integers can be expressed as the product of primes) or the fundamental theorem of algebra (polynomials can be expressed as the product of [complex] first order polynomials); EROs are the building blocks of invertible matrices.
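The checks of Example 23, and the fact that the inverse works from either side, can be spot-checked numerically; a small sketch (helper name ours):

```python
from fractions import Fraction

def mat_mul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# M and its inverse from Example 23.
M = [[0, 1, 1], [2, 0, 0], [0, 0, 1]]
M_inv = [[0, Fraction(1, 2), 0], [1, 0, -1], [0, 0, 1]]
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

# The product is the identity in both orders:
assert mat_mul(M_inv, M) == I
assert mat_mul(M, M_inv) == I
```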

2.3.3 The Three Elementary Matrices

We now work toward concrete examples and applications. It is surprisingly easy to translate between EROs and the matrices that perform them. The matrices corresponding to the three kinds of EROs are close in form to the identity matrix:

Row Swap: the identity matrix with two rows swapped.

Scalar Multiplication: the identity matrix with one diagonal entry not equal to 1.

Row Sum: the identity matrix with one off-diagonal entry not equal to 0.

Example 24 (Correspondences between EROs and their matrices)

The row swap matrix that swaps the 2nd and 4th rows is the identity matrix with the 2nd and 4th rows swapped:

$$\begin{pmatrix}1&0&0&0&0\\0&0&0&1&0\\0&0&1&0&0\\0&1&0&0&0\\0&0&0&0&1\end{pmatrix}.$$

The scalar multiplication matrix that replaces the 3rd row with 7 times the 3rd row is the identity matrix with 7 in the 3rd row instead of 1:

$$\begin{pmatrix}1&0&0&0\\0&1&0&0\\0&0&7&0\\0&0&0&1\end{pmatrix}.$$

The row sum matrix that replaces the 4th row with the 4th row plus 9 times the 2nd row is the identity matrix with a 9 in the 4th row, 2nd column:

$$\begin{pmatrix}1&0&0&0&0&0&0\\0&1&0&0&0&0&0\\0&0&1&0&0&0&0\\0&9&0&1&0&0&0\\0&0&0&0&1&0&0\\0&0&0&0&0&1&0\\0&0&0&0&0&0&1\end{pmatrix}.$$
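Each of the three elementary matrices is one small edit of the identity, which makes them easy to generate programmatically. A sketch (function names ours):

```python
def identity(n):
    return [[int(i == j) for j in range(n)] for i in range(n)]

def row_swap(n, i, j):
    """Identity matrix with rows i and j swapped."""
    E = identity(n)
    E[i], E[j] = E[j], E[i]
    return E

def row_scale(n, i, c):
    """Identity matrix with the (i, i) diagonal entry replaced by c."""
    E = identity(n)
    E[i][i] = c
    return E

def row_sum(n, i, j, c):
    """Identity matrix with c in row i, column j: adds c times row j to row i."""
    E = identity(n)
    E[i][j] = c
    return E

# The three matrices of Example 24 above (rows are 0-indexed here):
swap_2_4 = row_swap(5, 1, 3)          # swap the 2nd and 4th rows
scale_3_by_7 = row_scale(4, 2, 7)     # 7 times the 3rd row
add_9_of_2_to_4 = row_sum(7, 3, 1, 9) # 4th row plus 9 times the 2nd row
```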

We can write an explicit factorization of a matrix into EROs by keeping track of the EROs used in getting to RREF.

Example 25 (Express $M$ from Example 22 as a product of EROs)

Note that in the previous example one of each of the kinds of EROs is used, in the order just given. Elimination looked like

$$M=\begin{pmatrix}0&1&1\\2&0&0\\0&0&1\end{pmatrix}\xrightarrow{E_1}\begin{pmatrix}2&0&0\\0&1&1\\0&0&1\end{pmatrix}\xrightarrow{E_2}\begin{pmatrix}1&0&0\\0&1&1\\0&0&1\end{pmatrix}\xrightarrow{E_3}\begin{pmatrix}1&0&0\\0&1&0\\0&0&1\end{pmatrix}=I\,,$$

where the ERO matrices are

$$E_1=\begin{pmatrix}0&1&0\\1&0&0\\0&0&1\end{pmatrix},\quad E_2=\begin{pmatrix}\frac12&0&0\\0&1&0\\0&0&1\end{pmatrix},\quad E_3=\begin{pmatrix}1&0&0\\0&1&-1\\0&0&1\end{pmatrix}.$$

The inverses of the ERO matrices (corresponding to the reverse row manipulations) are

$$E_1^{-1}=\begin{pmatrix}0&1&0\\1&0&0\\0&0&1\end{pmatrix},\quad E_2^{-1}=\begin{pmatrix}2&0&0\\0&1&0\\0&0&1\end{pmatrix},\quad E_3^{-1}=\begin{pmatrix}1&0&0\\0&1&1\\0&0&1\end{pmatrix}.$$

Multiplying these gives

$$E_1^{-1}E_2^{-1}E_3^{-1}=\begin{pmatrix}0&1&0\\1&0&0\\0&0&1\end{pmatrix}\begin{pmatrix}2&0&0\\0&1&0\\0&0&1\end{pmatrix}\begin{pmatrix}1&0&0\\0&1&1\\0&0&1\end{pmatrix}=\begin{pmatrix}0&1&0\\1&0&0\\0&0&1\end{pmatrix}\begin{pmatrix}2&0&0\\0&1&1\\0&0&1\end{pmatrix}=\begin{pmatrix}0&1&1\\2&0&0\\0&0&1\end{pmatrix}=M\,.$$


2.3.4 LU, LDU, and LDPU Factorizations

The process of elimination can be stopped halfway to obtain decompositions frequently used in large computations in sciences and engineering. The first half of the elimination process is to eliminate entries below the diagonal, leaving a matrix which is called upper triangular. The elementary matrices which perform this part of the elimination are lower triangular, as are their inverses. Putting together the upper triangular and lower triangular parts, one obtains the so-called LU factorization.

Example 26 (LU factorization)

$$M=\begin{pmatrix}2&0&-3&-1\\0&1&2&-2\\-4&0&9&-2\\0&-1&1&1\end{pmatrix}\xrightarrow{E_1}\begin{pmatrix}2&0&-3&-1\\0&1&2&-2\\0&0&3&-4\\0&-1&1&1\end{pmatrix}\xrightarrow{E_2}\begin{pmatrix}2&0&-3&-1\\0&1&2&-2\\0&0&3&-4\\0&0&3&-1\end{pmatrix}\xrightarrow{E_3}\begin{pmatrix}2&0&-3&-1\\0&1&2&-2\\0&0&3&-4\\0&0&0&3\end{pmatrix}=:U\,,$$

where the ERO matrices and their inverses are

$$E_1=\begin{pmatrix}1&0&0&0\\0&1&0&0\\2&0&1&0\\0&0&0&1\end{pmatrix},\quad E_2=\begin{pmatrix}1&0&0&0\\0&1&0&0\\0&0&1&0\\0&1&0&1\end{pmatrix},\quad E_3=\begin{pmatrix}1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&-1&1\end{pmatrix},$$

$$E_1^{-1}=\begin{pmatrix}1&0&0&0\\0&1&0&0\\-2&0&1&0\\0&0&0&1\end{pmatrix},\quad E_2^{-1}=\begin{pmatrix}1&0&0&0\\0&1&0&0\\0&0&1&0\\0&-1&0&1\end{pmatrix},\quad E_3^{-1}=\begin{pmatrix}1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&1&1\end{pmatrix}.$$

Applying inverse elementary matrices to both sides of the equality $U=E_3E_2E_1M$ gives $M=E_1^{-1}E_2^{-1}E_3^{-1}U$, or

$$\begin{pmatrix}2&0&-3&-1\\0&1&2&-2\\-4&0&9&-2\\0&-1&1&1\end{pmatrix}=\begin{pmatrix}1&0&0&0\\0&1&0&0\\-2&0&1&0\\0&0&0&1\end{pmatrix}\begin{pmatrix}1&0&0&0\\0&1&0&0\\0&0&1&0\\0&-1&0&1\end{pmatrix}\begin{pmatrix}1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&1&1\end{pmatrix}\begin{pmatrix}2&0&-3&-1\\0&1&2&-2\\0&0&3&-4\\0&0&0&3\end{pmatrix}$$

$$=\begin{pmatrix}1&0&0&0\\0&1&0&0\\-2&0&1&0\\0&0&0&1\end{pmatrix}\begin{pmatrix}1&0&0&0\\0&1&0&0\\0&0&1&0\\0&-1&1&1\end{pmatrix}\begin{pmatrix}2&0&-3&-1\\0&1&2&-2\\0&0&3&-4\\0&0&0&3\end{pmatrix}=\begin{pmatrix}1&0&0&0\\0&1&0&0\\-2&0&1&0\\0&-1&1&1\end{pmatrix}\begin{pmatrix}2&0&-3&-1\\0&1&2&-2\\0&0&3&-4\\0&0&0&3\end{pmatrix}.$$

This is a lower triangular matrix times an upper triangular matrix.
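The factorization can be verified by multiplying the factors back together; a quick check in Python (helper name ours):

```python
# Verify the LU factorization from Example 26 by recomputing L * U.

def mat_mul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

L = [[1, 0, 0, 0],
     [0, 1, 0, 0],
     [-2, 0, 1, 0],
     [0, -1, 1, 1]]
U = [[2, 0, -3, -1],
     [0, 1, 2, -2],
     [0, 0, 3, -4],
     [0, 0, 0, 3]]
M = [[2, 0, -3, -1],
     [0, 1, 2, -2],
     [-4, 0, 9, -2],
     [0, -1, 1, 1]]

# Lower triangular times upper triangular recovers M:
assert mat_mul(L, U) == M
```

(Libraries such as SciPy provide `scipy.linalg.lu`, though its output includes a permutation factor and may order pivots differently from a hand computation.)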


What if we stop at a different point in elimination? We could next multiply rows so that the entries on the diagonal are 1. Note that the ERO matrices that do this are diagonal. This gives a slightly different factorization.

Example 27 (LDU factorization, building from the previous example)

$$M=\begin{pmatrix}2&0&-3&-1\\0&1&2&-2\\-4&0&9&-2\\0&-1&1&1\end{pmatrix}\xrightarrow{E_3E_2E_1}\begin{pmatrix}2&0&-3&-1\\0&1&2&-2\\0&0&3&-4\\0&0&0&3\end{pmatrix}\xrightarrow{E_4}\begin{pmatrix}1&0&-\frac32&-\frac12\\0&1&2&-2\\0&0&3&-4\\0&0&0&3\end{pmatrix}$$

$$\xrightarrow{E_5}\begin{pmatrix}1&0&-\frac32&-\frac12\\0&1&2&-2\\0&0&1&-\frac43\\0&0&0&3\end{pmatrix}\xrightarrow{E_6}\begin{pmatrix}1&0&-\frac32&-\frac12\\0&1&2&-2\\0&0&1&-\frac43\\0&0&0&1\end{pmatrix}=:U$$

The corresponding elementary matrices and their inverses are

$$E_4=\begin{pmatrix}\frac12&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{pmatrix},\quad E_5=\begin{pmatrix}1&0&0&0\\0&1&0&0\\0&0&\frac13&0\\0&0&0&1\end{pmatrix},\quad E_6=\begin{pmatrix}1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&\frac13\end{pmatrix},$$

$$E_4^{-1}=\begin{pmatrix}2&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{pmatrix},\quad E_5^{-1}=\begin{pmatrix}1&0&0&0\\0&1&0&0\\0&0&3&0\\0&0&0&1\end{pmatrix},\quad E_6^{-1}=\begin{pmatrix}1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&3\end{pmatrix}.$$

The equation $U=E_6E_5E_4E_3E_2E_1M$ can be rearranged as

$$M=(E_1^{-1}E_2^{-1}E_3^{-1})(E_4^{-1}E_5^{-1}E_6^{-1})U\,.$$

We calculated the product of the first three factors in the previous example; it was named $L$ there, and we will reuse that name here. The product of the next three factors is diagonal and we will name it $D$. The last factor we name $U$ (the name means something different in this example than in the last example). The $LDU$ factorization of our matrix is

$$\begin{pmatrix}2&0&-3&-1\\0&1&2&-2\\-4&0&9&-2\\0&-1&1&1\end{pmatrix}=\begin{pmatrix}1&0&0&0\\0&1&0&0\\-2&0&1&0\\0&-1&1&1\end{pmatrix}\begin{pmatrix}2&0&0&0\\0&1&0&0\\0&0&3&0\\0&0&0&3\end{pmatrix}\begin{pmatrix}1&0&-\frac32&-\frac12\\0&1&2&-2\\0&0&1&-\frac43\\0&0&0&1\end{pmatrix}.$$
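One way to pass from LU to LDU is to factor the diagonal out of the upper triangular factor; a sketch with exact fractions (names ours):

```python
from fractions import Fraction

def mat_mul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# The upper triangular factor from the LU factorization of Example 26.
U_lu = [[2, 0, -3, -1],
        [0, 1, 2, -2],
        [0, 0, 3, -4],
        [0, 0, 0, 3]]

n = len(U_lu)
# D collects the pivots; dividing each row by its pivot leaves 1s on the diagonal.
D = [[U_lu[i][i] if i == j else 0 for j in range(n)] for i in range(n)]
U_unit = [[Fraction(entry, U_lu[i][i]) for entry in row]
          for i, row in enumerate(U_lu)]

assert mat_mul(D, U_unit) == U_lu
assert U_unit[0] == [1, 0, Fraction(-3, 2), Fraction(-1, 2)]
```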


The $LDU$ factorization of a matrix is a factorization into blocks of EROs of various types: $L$ is the product of the inverses of EROs which eliminate below the diagonal by row addition, $D$ is the product of inverses of EROs which set the diagonal elements to 1 by row multiplication, and $U$ is the product of inverses of EROs which eliminate above the diagonal by row addition.

You may notice that one of the three kinds of row operation is missing from this story. Row exchange may be necessary to obtain RREF. Indeed, so far in this chapter we have been working under the tacit assumption that $M$ can be brought to the identity by just row multiplication and row addition. If row exchange is necessary, the resulting factorization is $LDPU$, where $P$ is the product of inverses of EROs that perform row exchange.

Example 28 (LDPU factorization, building from previous examples)

$$M=\begin{pmatrix}0&1&2&-2\\2&0&-3&-1\\-4&0&9&-2\\0&-1&1&1\end{pmatrix}\xrightarrow{E_7}\begin{pmatrix}2&0&-3&-1\\0&1&2&-2\\-4&0&9&-2\\0&-1&1&1\end{pmatrix}\xrightarrow{E_6E_5E_4E_3E_2E_1}U$$

$$E_7=\begin{pmatrix}0&1&0&0\\1&0&0&0\\0&0&1&0\\0&0&0&1\end{pmatrix}=E_7^{-1}$$

$$M=(E_1^{-1}E_2^{-1}E_3^{-1})(E_4^{-1}E_5^{-1}E_6^{-1})(E_7^{-1})U=LDPU$$

$$\begin{pmatrix}0&1&2&-2\\2&0&-3&-1\\-4&0&9&-2\\0&-1&1&1\end{pmatrix}=\begin{pmatrix}1&0&0&0\\0&1&0&0\\-2&0&1&0\\0&-1&1&1\end{pmatrix}\begin{pmatrix}2&0&0&0\\0&1&0&0\\0&0&3&0\\0&0&0&3\end{pmatrix}\begin{pmatrix}0&1&0&0\\1&0&0&0\\0&0&1&0\\0&0&0&1\end{pmatrix}\begin{pmatrix}1&0&-\frac32&-\frac12\\0&1&2&-2\\0&0&1&-\frac43\\0&0&0&1\end{pmatrix}$$

2.4 Review Problems

Webwork: Reading problems 3; Matrix notation 18; LU 19


(b) What are the columns of $M^{-1}$ in $(M|I)\sim(I|M^{-1})$?

6. How can you convince your fellow students to never make this mistake?

$$\begin{pmatrix}1&0&2&3\\0&1&2&3\\2&0&1&4\end{pmatrix}\xrightarrow{\begin{subarray}{l}R_1=R_1+R_2\\R_2=R_1-R_2\\R_3=R_1+2R_2\end{subarray}}\begin{pmatrix}1&1&4&6\\1&-1&0&0\\1&2&6&9\end{pmatrix}$$

7. Is LU factorization of a matrix unique? Justify your answer.

8. If you randomly create a matrix by picking numbers out of the blue, it will probably be difficult to perform elimination or factorization; fractions and large numbers will probably be involved. To invent simple problems it is better to start with a simple answer:

(a) Start with any augmented matrix in RREF. Perform EROs to make most of the components non-zero. Write the result on a separate piece of paper and give it to your friend. Ask that friend to find the RREF of the augmented matrix you gave them. Make sure they get the same augmented matrix you started with.

(b) Create an upper triangular matrix $U$ and a lower triangular matrix $L$ with only 1s on the diagonal. Give the result to a friend to factor into $LU$ form.

(c) Do the same with an $LDU$ factorization.

2.5 Solution Sets for Systems of Linear Equations

Algebra problems can have multiple solutions. For example $x(x-1)=0$ has two solutions: 0 and 1. By contrast, equations of the form $Ax=b$ with $A$ a linear operator (with scalars the real numbers) have the following property:

If $A$ is a linear operator and $b$ is known, then $Ax=b$ has either

1. One solution

2. No solutions

3. Infinitely many solutions


2.5.1 The Geometry of Solution Sets: Hyperplanes

Consider the following algebra problems and their solutions.

1. $6x=12$, one solution: $2$

2. $0x=12$, no solution

3. $0x=0$, one solution for each number $x$

In each case the linear operator is a $1\times1$ matrix. In the first case, the linear operator is invertible. In the other two cases it is not. In the first case, the solution set is a point on the number line; in the third case the solution set is the whole number line.

Let's examine similar situations with larger matrices.

1. $\begin{pmatrix}6&0\\0&2\end{pmatrix}\begin{pmatrix}x\\y\end{pmatrix}=\begin{pmatrix}12\\6\end{pmatrix}$, one solution: $\begin{pmatrix}2\\3\end{pmatrix}$

2. $\begin{pmatrix}1&3\\0&0\end{pmatrix}\begin{pmatrix}x\\y\end{pmatrix}=\begin{pmatrix}4\\1\end{pmatrix}$, no solutions

3. $\begin{pmatrix}1&3\\0&0\end{pmatrix}\begin{pmatrix}x\\y\end{pmatrix}=\begin{pmatrix}4\\0\end{pmatrix}$, one solution for each number $y$: $\begin{pmatrix}4-3y\\y\end{pmatrix}$

4. $\begin{pmatrix}0&0\\0&0\end{pmatrix}\begin{pmatrix}x\\y\end{pmatrix}=\begin{pmatrix}0\\0\end{pmatrix}$, one solution for each pair of numbers $x,y$: $\begin{pmatrix}x\\y\end{pmatrix}$

Again, in the first case the linear operator is invertible, while in the other cases it is not. When the operator is not invertible, the solution set can be empty, a line in the plane, or the plane itself.
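The third case above can be spot-checked: every member of the one-parameter family solves the system. A small sketch (helper name ours):

```python
def mat_vec(A, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

A = [[1, 3],
     [0, 0]]
b = [4, 0]

# One solution for each number y: (4 - 3y, y).
for y in [-2, 0, 1, 5]:
    assert mat_vec(A, [4 - 3 * y, y]) == b
```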

For a system of equations with $r$ equations and $k$ variables, one can have a number of different outcomes. For example, consider the case of $r$ equations in three variables. Each of these equations is the equation of a plane in three-dimensional space. To find solutions to the system of equations, we look for the common intersection of the planes (if an intersection exists). Here we have five different possibilities:

1. Unique Solution. The planes have a unique point of intersection.

2. No solutions. Some of the equations are contradictory, so no solutions exist.


3. Line. The planes intersect in a common line; any point on that line then gives a solution to the system of equations.

4. Plane. Perhaps you only had one equation to begin with, or else all of the equations coincide geometrically. In this case, you have a plane of solutions, with two free parameters.

Planes

5. All of $\mathbb{R}^3$. If you start with no information, then any point in $\mathbb{R}^3$ is a solution. There are three free parameters.

In general, for systems of equations with $k$ unknowns, there are $k+2$ possible outcomes, corresponding to the possible numbers (i.e., $0,1,2,\dots,k$) of free parameters in the solution set, plus the possibility of no solutions. These types of solution sets are hyperplanes, generalizations of planes that behave like planes in $\mathbb{R}^3$ in many ways.

    Reading homework: problem 4

    Pictures and Explanation

2.5.2 Particular Solution + Homogeneous Solutions

In the standard approach, variables corresponding to columns that do not contain a pivot (after going to reduced row echelon form) are free. We called them non-pivot variables. They index elements of the solution set by acting as coefficients of vectors.

Example 29 (Non-pivot columns determine terms of the solutions)

$$\begin{pmatrix}1&0&1&-1\\0&1&-1&1\\0&0&0&0\end{pmatrix}\begin{pmatrix}x_1\\x_2\\x_3\\x_4\end{pmatrix}=\begin{pmatrix}1\\1\\0\end{pmatrix}\quad\Leftrightarrow\quad\begin{array}{rcrcrcrcl}1x_1&+&0x_2&+&1x_3&-&1x_4&=&1\\0x_1&+&1x_2&-&1x_3&+&1x_4&=&1\\0x_1&+&0x_2&+&0x_3&+&0x_4&=&0\end{array}$$


Following the standard approach, express the pivot variables in terms of the non-pivot variables and add empty equations. Here $x_3$ and $x_4$ are the non-pivot variables.

$$\begin{array}{rcl}x_1&=&1-x_3+x_4\\x_2&=&1+x_3-x_4\\x_3&=&x_3\\x_4&=&x_4\end{array}\quad\Leftrightarrow\quad\begin{pmatrix}x_1\\x_2\\x_3\\x_4\end{pmatrix}=\begin{pmatrix}1\\1\\0\\0\end{pmatrix}+x_3\begin{pmatrix}-1\\1\\1\\0\end{pmatrix}+x_4\begin{pmatrix}1\\-1\\0\\1\end{pmatrix}$$

The preferred way to write a solution set is with set notation.

$$S=\left\{\begin{pmatrix}x_1\\x_2\\x_3\\x_4\end{pmatrix}=\begin{pmatrix}1\\1\\0\\0\end{pmatrix}+\mu_1\begin{pmatrix}-1\\1\\1\\0\end{pmatrix}+\mu_2\begin{pmatrix}1\\-1\\0\\1\end{pmatrix}:\mu_1,\mu_2\in\mathbb{R}\right\}$$

Notice that the first two components of the second two terms come from the non-pivot columns. Another way to write the solution set is

$$S=\{X_0+\mu_1Y_1+\mu_2Y_2:\mu_1,\mu_2\in\mathbb{R}\}\,,$$

where

$$X_0=\begin{pmatrix}1\\1\\0\\0\end{pmatrix},\quad Y_1=\begin{pmatrix}-1\\1\\1\\0\end{pmatrix},\quad Y_2=\begin{pmatrix}1\\-1\\0\\1\end{pmatrix}.$$

Here $X_0$ is called a particular solution while $Y_1$ and $Y_2$ are called homogeneous solutions.
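The particular/homogeneous split of Example 29 can be verified directly; a sketch (helper name ours):

```python
def mat_vec(A, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

M = [[1, 0, 1, -1],
     [0, 1, -1, 1],
     [0, 0, 0, 0]]
V = [1, 1, 0]
X0 = [1, 1, 0, 0]    # particular solution
Y1 = [-1, 1, 1, 0]   # homogeneous solution
Y2 = [1, -1, 0, 1]   # homogeneous solution

assert mat_vec(M, X0) == V
assert mat_vec(M, Y1) == [0, 0, 0]
assert mat_vec(M, Y2) == [0, 0, 0]

# Any combination X0 + m1*Y1 + m2*Y2 is again a solution:
m1, m2 = 3, -2
X = [x0 + m1 * y1 + m2 * y2 for x0, y1, y2 in zip(X0, Y1, Y2)]
assert mat_vec(M, X) == V
```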

2.5.3 Solutions and Linearity

Motivated by example 29, we say that the matrix equation $MX=V$ has solution set $\{X_0+\mu_1Y_1+\mu_2Y_2\mid\mu_1,\mu_2\in\mathbb{R}\}$. Recall that matrices are linear operators. Thus

$$M(X_0+\mu_1Y_1+\mu_2Y_2)=MX_0+\mu_1MY_1+\mu_2MY_2=V\,,$$

for any $\mu_1,\mu_2\in\mathbb{R}$. Choosing $\mu_1=\mu_2=0$, we obtain

$$MX_0=V\,.$$

This is why $X_0$ is an example of a particular solution.


Setting $\mu_1=1$, $\mu_2=0$, and using the particular solution $MX_0=V$, we obtain

$$MY_1=0\,.$$

Likewise, setting $\mu_1=0$, $\mu_2=1$, we obtain

$$MY_2=0\,.$$

Here $Y_1$ and $Y_2$ are examples of what are called homogeneous solutions to the system. They do not solve the original equation $MX=V$, but instead its associated homogeneous equation $MY=0$.

We have just learnt a fundamental lesson of linear algebra: the solution set to $Ax=b$, where $A$ is a linear operator, consists of a particular solution plus homogeneous solutions.

$$\{\text{Solutions}\}=\{\text{Particular solution}+\text{Homogeneous solutions}\}$$

Example 30 Consider the matrix equation of example 29. It has solution set

$$S=\left\{\begin{pmatrix}1\\1\\0\\0\end{pmatrix}+\mu_1\begin{pmatrix}-1\\1\\1\\0\end{pmatrix}+\mu_2\begin{pmatrix}1\\-1\\0\\1\end{pmatrix}\ \middle|\ \mu_1,\mu_2\in\mathbb{R}\right\}.$$

Then $MX_0=V$ says that $\begin{pmatrix}x_1&x_2&x_3&x_4\end{pmatrix}^T=\begin{pmatrix}1&1&0&0\end{pmatrix}^T$ solves the original matrix equation, which is certainly true, but this is not the only solution.

$MY_1=0$ says that $\begin{pmatrix}x_1&x_2&x_3&x_4\end{pmatrix}^T=\begin{pmatrix}-1&1&1&0\end{pmatrix}^T$ solves the homogeneous equation.

$MY_2=0$ says that $\begin{pmatrix}x_1&x_2&x_3&x_4\end{pmatrix}^T=\begin{pmatrix}1&-1&0&1\end{pmatrix}^T$ solves the homogeneous equation.

Notice how adding any multiple of a homogeneous solution to the particular solution yields another particular solution.


    Reading homework: problem 2.5

2.6 Review Problems

Webwork: Reading problems 4, 5; Solution sets 20, 21, 22; Geometry of solutions 23, 24, 25, 26

1. Write down examples of augmented matrices corresponding to each of the five types of solution sets for systems of equations with three unknowns.

2. Invent a simple linear system. Use the standard approach for solving linear systems and a non-standard approach to obtain different descriptions of the solution set which have different particular solutions.

3. Let $f(X)=MX$ where

$$M=\begin{pmatrix}1&0&1&-1\\0&1&-1&1\\0&0&0&0\end{pmatrix}\quad\text{and}\quad X=\begin{pmatrix}x_1\\x_2\\x_3\\x_4\end{pmatrix},\quad Y=\begin{pmatrix}y_1\\y_2\\y_3\\y_4\end{pmatrix}.$$

Suppose that $\alpha$ is any number. Compute the following four quantities:

$$\alpha X\,,\quad f(X)\,,\quad\alpha f(X)\quad\text{and}\quad f(\alpha X)\,.$$

Check your work by verifying that

$$\alpha f(X)=f(\alpha X)\,,\quad\text{and}\quad f(X+Y)=f(X)+f(Y)\,.$$

Now explain why your results for $f(\alpha X)$ and $f(X+Y)$ together imply

$$f(\alpha X+\beta Y)=\alpha f(X)+\beta f(Y)\,.$$

(Be sure to state which values of the scalars $\alpha$ and $\beta$ are allowed.)

4. Let

$$M=\begin{pmatrix}a^1_1&a^1_2&\cdots&a^1_k\\a^2_1&a^2_2&\cdots&a^2_k\\\vdots&\vdots&&\vdots\\a^r_1&a^r_2&\cdots&a^r_k\end{pmatrix}\quad\text{and}\quad X=\begin{pmatrix}x^1\\x^2\\\vdots\\x^k\end{pmatrix}.$$


Note: $x^2$ does not denote the square of $x$. Instead $x^1$, $x^2$, $x^3$, etc., denote different variables; the superscript is an index. Although confusing at first, this notation was invented by Albert Einstein, who noticed that quantities like $a^2_1x^1+a^2$