
UNIT 14 LINEAR EQUATIONS AND EUCLIDEAN SPACES

Structure

14.1 Introduction
     Objectives
14.2 System of Linear Equations
     14.2.1 Canonical Systems
     14.2.2 Gaussian Elimination Method
14.3 Euclidean Spaces
     14.3.1 Subspaces
     14.3.2 Linear Span
14.4 Linear Independence
     14.4.1 Basis
     14.4.2 Dimension of a Subspace
14.5 Inner Products
     14.5.1 Orthonormal Basis
14.6 Summary
14.7 Solutions/Answers

14.1 INTRODUCTION

You are familiar with systems of linear equations in two or three variables. Linear systems of equations involving more than three variables are encountered in almost all fields. In this unit we first give a general procedure to solve any system of linear equations. The method, in principle, can be used to determine all the solutions of a system of linear equations.

Next we discuss some important aspects of Euclidean spaces. These are generalisations of the very useful concepts of two or three dimensional vectors. Systems of linear equations also play an important role in the study of Euclidean spaces. The concepts introduced in this unit will be used in subsequent units to understand clearly the structure of solutions of a system of linear equations. In many situations an understanding of this structure is more important than the ability to obtain various solutions.

Objectives

The main objectives of this unit are

to introduce the Gaussian Elimination Method to find solutions of any system of linear equations; and

to introduce some important aspects of Euclidean spaces which are essential to understand the structure of solutions of a system of linear equations.

14.2 SYSTEM OF LINEAR EQUATIONS

You are familiar with an equation of the type

    ax + by = c,

in which a, b and c are fixed real numbers. Such an equation is called a linear equation in two variables x and y. In this equation a and b are referred to as the coefficients of x and y respectively. The number c is referred to as the right hand side constant.


Any values of x and y for which ax + by equals c is called a solution of the equation. To find a solution of this equation, we can fix any value of x, say x0, and then solve the resulting equation for the corresponding value of y (i.e., y = (c - a x0)/b when b ≠ 0). Thus, we can obtain a solution of the given equation corresponding to each choice of x0. We also know that all these solutions of a linear equation in 2 variables can be represented as points (x, y) on a straight line in a plane.

You are also familiar with a linear equation in 3 variables. These equations are of the type

    ax + by + cz = d,

and the solutions of this equation can be represented by points (x, y, z) on a plane in a 3-dimensional space.

You have also encountered systems of linear equations in 2 or 3 variables. In such systems, more than one linear equation is considered simultaneously. For example, consider a system of 2 linear equations in 3 variables. The solutions of this system are those values of (x, y, z) which satisfy both these equations simultaneously. Thus, we can find the solutions of each equation separately, and then the common solutions, if any, are the solutions of the system. Since two intersecting planes have a line in common, the solutions of a system of 2 linear equations in 3 unknowns can be represented by a line in a 3-dimensional space. It may be noted that a system of 2 linear equations in 3 variables does not have any solution if the two planes representing the system do not intersect.

The foregoing discussion shows that a single non-trivial linear equation in 2 or 3 unknowns always has an infinite number of solutions. However, when we consider systems of such equations, the system may fail to have any solution.

In general, a linear equation in n variables is of the type

    a1 x1 + a2 x2 + ... + an xn = b,

where n is a fixed natural number and a1, a2, ..., an and b are fixed real numbers. In this equation, n denotes the number of variables, ai is referred to as the coefficient of the variable xi, and b is referred to as the right hand side constant. Any set of n numbers (x1, x2, ..., xn) for which a1 x1 + a2 x2 + ... + an xn equals b is called a solution of the equation. To find a solution, we can fix the values of any n - 1 variables and then determine the value of the nth variable (not fixed so far) from the resulting equation.

Thus, if x1, x2, ..., x_{n-1} are fixed at the values x1^0, x2^0, ..., x_{n-1}^0, then

    xn = (1/an) (b - a1 x1^0 - a2 x2^0 - ... - a_{n-1} x_{n-1}^0), if an ≠ 0.

Thus, we get a solution of the given equation for each choice of x1^0, x2^0, ..., x_{n-1}^0. We observe that a single non-trivial linear equation in n variables, n ≥ 2, always has an infinite number of solutions and that all the solutions can indeed be obtained. However, the geometrical analogue seems to be lost for the moment.

In several situations, we are required to consider more than one linear equation simultaneously. If m linear equations in n variables have to be considered simultaneously, we say that we have a system of m linear equations in n variables. Such a system can be written as follows:

    a11 x1 + a12 x2 + ... + a1n xn = b1
    a21 x1 + a22 x2 + ... + a2n xn = b2
    ...
    am1 x1 + am2 x2 + ... + amn xn = bm


The system can be compactly written as

    a_i1 x1 + a_i2 x2 + ... + a_in xn = bi,   i = 1, 2, ..., m.

In the above system, aij is the coefficient of the variable xj in the ith equation of the system and bi is the right hand side constant of the ith equation of the system. A set of numbers (r1, r2, ..., rn) is a solution of the system if xi = ri, for i = 1, 2, ..., n, satisfies each equation of the system separately. We can identify the solutions of each individual equation of the system. How can we identify the common solutions, which are solutions of the system? Recall that the system may have no solution at all. We now proceed to describe one method which can be used to systematically identify the solutions of the system, if these exist.

14.2.1 Canonical Systems

In some linear systems of equations, coefficients in some positions are zero, so that the system can be trivially solved. Such systems are called canonical systems. In this section, we discuss two such canonical systems and also the method to solve these. In the next section, we shall show that any general linear system can be reduced to one of the canonical systems discussed in this section.

We first consider a system of three linear equations in the variables x, y and z in which the second equation does not involve x and the third equation involves only z. A system of this type can easily be solved as follows. The third equation gives z = 3. Substituting z = 3 in the first two equations, we get a pair of equations in x and y alone, and the second of these gives y = 1. Now substituting y = 1 in the first equation, we get x = 1. Hence, the only solution of the system is x = 1, y = 1, z = 3.

The form of the system which can be solved by the above method, called the back substitution process, can be described as follows:

(a) Order the variables as first variable, second variable, etc. In the example above, x is the first variable, y is the second variable and z is the third variable.

(b) The coefficient of the ith variable is not zero in the ith equation of the system and is zero in all the subsequent equations (equations i + 1 onwards) of the system.

A system of n linear equations in n variables is called a triangular system if the variables and equations can be ordered so that condition (b), given above, is satisfied. To be more precise, a system of n linear equations in n variables is triangular if the coefficients aij of the system satisfy the conditions: for i = 1 to n, aii ≠ 0 and aij = 0 for j = 1 to i - 1. A triangular system of the type considered above can always be solved by the back substitution process. The second canonical form, a trapezoidal system, has fewer equations than variables; once arbitrary values are assigned to the variables outside the triangular part, the remaining variables are again determined by back substitution.
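As an illustration of the back substitution process, a triangular system can be solved by a short routine of the following kind. This is only a sketch (the function name, the list-of-lists representation and the sample coefficients are ours, not the unit's); it assumes aii ≠ 0 for every i:

    def back_substitute(a, b):
        # Solve a triangular system a x = b, where a[i][j] = 0 for j < i
        # and a[i][i] != 0, working from the last equation upwards.
        n = len(b)
        x = [0.0] * n
        for i in range(n - 1, -1, -1):
            s = sum(a[i][j] * x[j] for j in range(i + 1, n))
            x[i] = (b[i] - s) / a[i][i]
        return x

    # A triangular system with the solution x = 1, y = 1, z = 3, as in the
    # example above (the coefficients here are illustrative):
    #     x + y + z = 5,  y + 2z = 7,  3z = 9
    print(back_substitute([[1, 1, 1], [0, 1, 2], [0, 0, 3]], [5, 7, 9]))
    # prints [1.0, 1.0, 3.0]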


14.2.2 Gaussian Elimination Method

Consider two systems of linear equations, system A and system B, in the same variables. Note that systems A and B involve the same number of variables; however, system A consists of m equations while system B may consist of a different number of equations. We say that system A is equivalent to system B if every solution of A is a solution of B and every solution of B is a solution of A. Verify that the following two systems are equivalent:

    System A                                System B

Suppose that we want to solve a given system A. Does there exist a trapezoidal (or triangular) system B which is equivalent to system A? Interestingly, the answer to this question is yes, provided the system A has a solution.

In this section, we describe the Gaussian Elimination Method, which identifies a trapezoidal system equivalent to a given linear system. Since we know how to solve a trapezoidal system, the Gaussian Elimination Method can be used to solve any linear system. We first make the following intuitive observations. The solutions of a system do not change if

(a) any one equation of the system is multiplied by a non-zero number; or
(b) any two equations of the system are interchanged; or
(c) any equation of the system is added to any other equation of the system.

The reader is invited to prove the above statement. The Gaussian Elimination Method uses the above operations, in a sequential manner, starting with the given system until an equivalent trapezoidal system is obtained. We illustrate the method through examples only.

Example 1

Let us consider the system

To obtain a trapezoidal system, we can eliminate x from the second and third equations and the variable y from the third equation. Adding the first two equations, we get an equivalent system

Multiplying the first equation by a suitable constant and then adding it to the third equation gives an equivalent system whose first equation is x + y + z + t = 20.

Adding the second equation to the third equation, we get an equivalent system


This system is a trapezoidal system whose solution is given by

    t = t0,

where t0 can be assigned any value.

Example 2

Let us consider the system

    2x + y + z + t = 15
    2z + 3t = 20
    3y + 5z + 6t = 10.

Since the coefficient of x is zero in the second and third equations, x can be taken as the first variable in the triangular part. If we decide on y as the second variable in the triangular part, then we must interchange the second and third equations. Thus we get an equivalent system

    2x + y + z + t = 15
    3y + 5z + 6t = 10
    2z + 3t = 20.

The above system is an equivalent trapezoidal system and can be easily solved.

Example 3

Let us consider the system

    3x + 5y + 2z + 3t = 10
    4z + 5t = 15
    ...

Here either x or y can be taken as the first variable of the triangular part. Let us choose x as the first variable. Then y cannot be the second variable of the triangular part, because the coefficient of y in all the subsequent equations is zero. Thus y must be in the non-triangular part only. Multiplying the second equation by 3 and adding to the third equation, we get an equivalent system

This is an equivalent trapezoidal system in which y is a variable of the non-triangular part.


Example 4

Consider the system

    x + y + z = 4
    -x + 2y + 3z = 5
    6y + 8z = 20.

Adding the first equation to the second, we get

    x + y + z = 4
    3y + 4z = 9
    6y + 8z = 20.

Multiplying the second equation by -2 and adding to the third equation, we get

    x + y + z = 4
    3y + 4z = 9
    0·y + 0·z = 2.

We now observe that the third equation cannot be satisfied for any choice of x, y and z. Hence, this system has no solution. Thus, the original system also has no solution.

Example 5

Consider the system

    x + y + z = 4
    -x + 2y + 3z = 5
    6y + 8z = 18.

Performing the same sequence of operations as in Example 4, we get an equivalent system

    x + y + z = 4
    3y + 4z = 9.

This is a trapezoidal system and the solution is given by

    x = 1 + (1/3) z0,
    y = 3 - (4/3) z0,
    z = z0,

where any value can be assigned to z0.
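The elimination steps used in Examples 1 to 5 can be sketched as a small routine that works on the augmented matrix [A | b] of a system and uses only the operations (a), (b) and (c) listed above. The sketch below is ours (function name, pivoting choice and all), not the unit's own presentation:

    def gauss_eliminate(aug):
        # Reduce an augmented matrix [A | b] to a trapezoidal (row echelon) form
        # using row interchanges and 'add a multiple of one row to another row'.
        m, cols = len(aug), len(aug[0])
        row = 0
        for col in range(cols - 1):          # the last column is the right hand side
            pivot = next((r for r in range(row, m) if aug[r][col] != 0), None)
            if pivot is None:                # this variable is absent below 'row'
                continue
            aug[row], aug[pivot] = aug[pivot], aug[row]      # interchange equations
            for r in range(row + 1, m):                      # eliminate below the pivot
                factor = aug[r][col] / aug[row][col]
                aug[r] = [aug[r][j] - factor * aug[row][j] for j in range(cols)]
            row += 1
        return aug

    # The system of Example 4: x + y + z = 4, -x + 2y + 3z = 5, 6y + 8z = 20
    for eq in gauss_eliminate([[1, 1, 1, 4], [-1, 2, 3, 5], [0, 6, 8, 20]]):
        print(eq)
    # The last row printed is [0.0, 0.0, 0.0, 2.0], i.e. 0 = 2, so there is no solution.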

E1
Solve the following canonical system of linear equations:

    2x + y - z + 3t = 4
    2y + 3z + 5t = 6
    z - 2t = 3
    4t = 12.


E2
Use the Gauss elimination method to solve the following systems of linear equations:

(a) x + 2y + z = 19
    ...

E3
Show that the following system of linear equations has no solution:

    2x - y + z = -4
    x + 2y - 3z = -8
    4x + 3y - 5z = 10.


14.3 EUCLIDEAN SPACES

A linear equation in n variables involves a string of n + 1 numbers (a1, a2, ..., an, b). A system of m linear equations in n variables can be defined by m such strings, as shown below:

    a11  a12  ...  a1n  b1
    a21  a22  ...  a2n  b2
    ...
    am1  am2  ...  amn  bm

A close look at the Gaussian elimination method shows that we are required to

(i) multiply a string by a non-zero number,
(ii) add two strings,
(iii) interchange two strings.

We also observed that the geometrical analogue, available for n = 2 or 3, is lost when n > 3. We can indeed introduce some arithmetic on strings of an arbitrary number of components and retrieve the geometrical analogue also. We introduce the relevant ideas in this section.

Let (a1, a2, ..., an) be a string, called an n-tuple, of n components, where a1, a2, ..., an are real numbers. Let us consider the collection of all such n-tuples. This collection is denoted by

    R^n = {(a1, a2, ..., an) | a1, a2, ..., an are real numbers},

i.e., R^n is the set of all n-tuples of real numbers. Two n-tuples

    A = (a1, a2, ..., an)  and  B = (b1, b2, ..., bn)

are said to be equal, denoted by A = B, if

    ai = bi for i = 1 to n.

Thus any two n-tuples are equal if their corresponding components are equal. The sum of A and B, denoted by A + B, is defined as

    A + B = (a1 + b1, a2 + b2, ..., an + bn),

i.e., the sum of any two n-tuples can be obtained by adding the corresponding components of the given n-tuples. The following properties of the sum of n-tuples, as defined above, can be proved easily:

(i) If A ∈ R^n and B ∈ R^n, then A + B ∈ R^n.

(ii) A + B = (a1 + b1, a2 + b2, ..., an + bn) and
     B + A = (b1 + a1, b2 + a2, ..., bn + an),
     so B + A = A + B.

(iii) Let C = (c1, c2, ..., cn). Then
     (A + B) + C = ((a1 + b1) + c1, (a2 + b2) + c2, ..., (an + bn) + cn)
                 = (a1 + (b1 + c1), a2 + (b2 + c2), ..., an + (bn + cn))
                 = A + (B + C).

(iv) If we denote the n-tuple (0, 0, ..., 0) by 0, then A + 0 = 0 + A = A.


(v) -A + A = A + (-A) = 0, where -A = (-a1, -a2, ..., -an).

Let A = (a1, a2, ..., an) be any n-tuple and α be any real number. Then we define the scalar multiple αA by

    αA = (αa1, αa2, ..., αan),

i.e., αA is obtained by multiplying every component of A by α. This operation is called multiplication of an n-tuple by a scalar. The following properties of scalar multiplication are also easy to observe.

(vi) If A ∈ R^n and α ∈ R, then αA ∈ R^n.

(vii) α(A + B) = α(a1 + b1, a2 + b2, ..., an + bn)
              = (αa1 + αb1, αa2 + αb2, ..., αan + αbn)
              = αA + αB.

(viii) (α + β)A = ((α + β)a1, (α + β)a2, ..., (α + β)an)
               = (αa1 + βa1, αa2 + βa2, ..., αan + βan)
               = αA + βA.

(ix) (αβ)A = ((αβ)a1, (αβ)a2, ..., (αβ)an)
           = (α(βa1), α(βa2), ..., α(βan))
           = α(βa1, βa2, ..., βan)
           = α(βA).

(x) 1·A = (1·a1, 1·a2, ..., 1·an) = (a1, a2, ..., an) = A.

The set R^n, together with the operations of addition and scalar multiplication as defined above, is called a Euclidean space. For n = 2 or 3, the operations that we have defined above coincide with the corresponding operations on vectors in 2 or 3 dimensional space. In view of this, the n-tuples (a1, a2, ..., an) are also called vectors. Although we are restricting our attention to n-tuples of real numbers, the generalisation to n-tuples of complex numbers is almost identical.
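The definitions above translate directly into code; the small illustration below (plain Python, with names of our own choosing) simply applies them to a pair of 4-tuples:

    def add(A, B):
        # componentwise sum of two n-tuples
        return tuple(a + b for a, b in zip(A, B))

    def scale(alpha, A):
        # scalar multiple alpha * A, taken componentwise
        return tuple(alpha * a for a in A)

    A, B = (1, 2, 0, -1), (3, -2, 4, 1)
    print(add(A, B))                  # (4, 0, 4, 0)
    print(scale(2, A))                # (2, 4, 0, -2)
    print(add(A, B) == add(B, A))     # True, illustrating property (ii)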

E4
Use the properties of addition and scalar multiplication to prove the following:

(a) αA = 0 ⟹ α = 0 or A = 0
(b) (-1)A = -A
(c) A - (B - C) = A - B + C.


14.3.1 Subspaces

Let us consider an arbitrary subset S of R^n. Some subsets have the following properties:

(a) A ∈ S ⟹ αA ∈ S for all α ∈ R
(b) A ∈ S and B ∈ S ⟹ A + B ∈ S

Such subsets are called subspaces.

Example 6

Let

    S1 = {(x, y, 0) | x ∈ R, y ∈ R}

be a subset of R^3, i.e., S1 consists of all points in the (x, y)-plane. Then if A = (a1, a2, 0) and B = (b1, b2, 0) are in S1, we have αA = (αa1, αa2, 0) ∈ S1 for all α ∈ R and A + B = (a1 + b1, a2 + b2, 0) ∈ S1. Hence the set of all points in the (x, y)-plane is a subspace of R^3.

Example 7

Let

    S2 = {(x, y, z) | x + 2y + 3z = 0},

i.e., S2 is the set of all points lying in the plane x + 2y + 3z = 0, which passes through the origin. If A = (a1, a2, a3) and B = (b1, b2, b3) are in S2, then we must have a1 + 2a2 + 3a3 = 0 and b1 + 2b2 + 3b3 = 0. Adding these equations, we get

    (a1 + b1) + 2(a2 + b2) + 3(a3 + b3) = 0,

i.e., A + B = (a1 + b1, a2 + b2, a3 + b3) ∈ S2. Similarly, verify that αA ∈ S2 for all α ∈ R. Hence S2, which is a plane passing through the origin, is also a subspace of R^3.

Example 8

Let

    S3 = {(x, y, z) | x + 2y + 3z = 4}.

Note that S3 is also a plane, but it does not contain the origin. Hence S3 cannot be a subspace of R^3. Give reasons.

The above examples illustrate that some subsets of R^n are subspaces of R^n whereas some other subsets are not subspaces of R^n.
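The closure conditions (a) and (b) can be checked numerically for particular vectors; the snippet below is only an illustration for the plane of Example 7, with sample vectors of our own choosing:

    def in_S2(v):
        # membership test for S2 = {(x, y, z) : x + 2y + 3z = 0}
        x, y, z = v
        return x + 2 * y + 3 * z == 0

    A, B = (1, 1, -1), (3, 0, -1)
    print(in_S2(A), in_S2(B))                                # True True
    print(in_S2(tuple(a + b for a, b in zip(A, B))))         # True: closed under addition
    print(in_S2(tuple(5 * a for a in A)))                    # True: closed under scalar multiples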

E5
Which of the following subsets of R^n are subspaces?

(a) {(x1, x2, ..., xn) | x1 = x3}
(b) {(x1, x2, ..., xn) | x2 = 1}
(c) {(x1, x2, ..., xn) | x1 + x3 = 0 and x1 = x4}
(d) {(x1, x2, ..., xn) | ...}


14.3.2 Linear Span

Let S = {A1, A2, ..., Ap} be any subset of R^n and choose any α1, α2, ..., αp in R. Then, using the operations of addition and scalar multiplication, we can construct the vector

    A = α1 A1 + α2 A2 + ... + αp Ap

in R^n. For a fixed choice of α1, α2, ..., αp, the vector A, as defined above, is called a linear combination of the vectors A1, A2, ..., Ap. Let us consider

    L(S) = {α1 A1 + α2 A2 + ... + αp Ap | α1, α2, ..., αp ∈ R},

i.e., L(S) is the set of all possible linear combinations of the vectors in S. We call L(S) the linear span of S. We now observe that L(S) is a subspace of R^n for every non-empty subset S. Let

    A = α1 A1 + α2 A2 + ... + αp Ap ∈ L(S)

and

    B = β1 A1 + β2 A2 + ... + βp Ap ∈ L(S).

Then clearly

    γA = γα1 A1 + γα2 A2 + ... + γαp Ap ∈ L(S) for all γ ∈ R

and

    A + B = (α1 + β1) A1 + (α2 + β2) A2 + ... + (αp + βp) Ap ∈ L(S).

Hence, L(S) is a subspace of R^n.

Example 9

Let

    S = {(1, 1, 0), (0, 2, 0)}.

Then L(S) consists of all vectors of the form α(1, 1, 0) + β(0, 2, 0) = (α, α + 2β, 0), and every vector (x, y, 0) is of this form (take α = x and β = (y - x)/2). Hence L({(1, 1, 0), (0, 2, 0)}) is the (x, y)-plane in R^3.
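Whether a given vector lies in L(S) can also be checked numerically by solving for the coefficients α1, ..., αp; the sketch below uses NumPy's least-squares routine and the set S of Example 9 (the function name is ours):

    import numpy as np

    def in_span(S, X, tol=1e-10):
        # True if X is a linear combination of the vectors in S
        M = np.column_stack([np.asarray(v, dtype=float) for v in S])
        coeffs, *_ = np.linalg.lstsq(M, np.asarray(X, dtype=float), rcond=None)
        return bool(np.allclose(M @ coeffs, np.asarray(X, dtype=float), atol=tol))

    S = [(1, 1, 0), (0, 2, 0)]
    print(in_span(S, (3, 7, 0)))      # True: (3, 7, 0) = 3(1, 1, 0) + 2(0, 2, 0)
    print(in_span(S, (0, 0, 1)))      # False: (0, 0, 1) is not in the (x, y)-plane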

E6
Prove the following:

(a) S1 ⊆ S2 ⟹ L(S1) ⊆ L(S2)
(b) S is a subspace ⟺ S = L(S).


E7
If P and Q are subspaces of R^n, prove that

    P + Q = {A + B | A ∈ P and B ∈ Q}

is also a subspace of R^n. If P = L({(0, 1, 1, 1), (0, 1, 0, 1)}) and Q = L({(1, 0, 0, 0), (0, 0, 1, 0)}), determine P + Q.

14.4 LINEAR INDEPENDENCE

Let S = {A1, A2, ..., Ap} be any subset of R^n. Consider the equation

    α1 A1 + α2 A2 + ... + αp Ap = 0.

Obviously, α1 = α2 = ... = αp = 0 satisfies this equation irrespective of our choice of A1, A2, ..., Ap. In case each Ai is non-zero and α1 = α2 = ... = αp = 0 is the only solution of the above equation, then the set S of non-zero vectors A1, A2, ..., Ap is said to be linearly independent. Otherwise, S is called linearly dependent. Thus the vectors A1, A2, ..., Ap are linearly dependent if at least one of the αi's is not zero and α1 A1 + α2 A2 + ... + αp Ap is the zero vector. It may be noted that any subset of R^n is either linearly independent or linearly dependent.

Example 10

Let us check whether the set S = {(1, 0, 0), (1, 1, 0), (1, 1, 1)} is linearly independent or linearly dependent. Consider the equation

    α(1, 0, 0) + β(1, 1, 0) + γ(1, 1, 1) = (0, 0, 0),

i.e., (α + β + γ, β + γ, γ) = (0, 0, 0). The equality of two vectors requires

    α + β + γ = 0
    β + γ = 0
    γ = 0.

This is a triangular system with the unique solution α = β = γ = 0. Hence S is a linearly independent subset of R^3.

Example 11

Let S = {(1, 0, -1), (1, 1, 1), (2, 1, 0)}. Then

    α(1, 0, -1) + β(1, 1, 1) + γ(2, 1, 0) = (0, 0, 0)


gives

    α + β + 2γ = 0
    β + γ = 0
    -α + β = 0.


Check that α = 1, β = 1, γ = -1 is one of the solutions of the above system. Hence, {(1, 0, -1), (1, 1, 1), (2, 1, 0)} is linearly dependent.
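Numerically, a finite set of vectors is linearly independent exactly when the rank of the matrix having the vectors as rows equals the number of vectors; the check below (using NumPy) is only a sketch of this criterion, applied to Examples 10 and 11:

    import numpy as np

    def is_independent(vectors):
        # True if the given vectors are linearly independent
        M = np.array(vectors, dtype=float)
        return np.linalg.matrix_rank(M) == len(vectors)

    print(is_independent([(1, 0, 0), (1, 1, 0), (1, 1, 1)]))     # True  (Example 10)
    print(is_independent([(1, 0, -1), (1, 1, 1), (2, 1, 0)]))    # False (Example 11)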

We now make some useful observations about linearly independent and dependent sets.

1. The zero vector cannot belong to a linearly independent set.
2. If S is a linearly independent set, then every subset of S is also linearly independent.
3. If S is a linearly dependent set, then every set containing S is also linearly dependent.

These observations follow immediately from the definition itself. Next, we prove two important results.

Theorem 1

If {A1, A2, ..., Ap} is linearly independent and {A1, A2, ..., Ap, A_{p+1}} is linearly dependent, then A_{p+1} is a linear combination of A1, A2, ..., Ap.

Proof

Since {A1, A2, ..., Ap, A_{p+1}} is linearly dependent, we can find scalars β1, β2, ..., β_{p+1} such that

    β1 A1 + β2 A2 + ... + β_{p+1} A_{p+1} = 0,

where at least one of the βi's is not zero. We observe that β_{p+1} ≠ 0, because if β_{p+1} = 0, then we must have

    β1 A1 + β2 A2 + ... + βp Ap = 0,

where at least one of the βi, 1 ≤ i ≤ p, is non-zero. This means that {A1, A2, ..., Ap} is linearly dependent, which is contrary to what is given. Hence, β_{p+1} ≠ 0. Thus, we have

    A_{p+1} = -(β1/β_{p+1}) A1 - (β2/β_{p+1}) A2 - ... - (βp/β_{p+1}) Ap,

i.e., A_{p+1} is a linear combination of A1, A2, ..., Ap.

Theorem 2

If {A1, A2, ..., Ap} is linearly independent and A ∉ L({A1, A2, ..., Ap}), then {A1, A2, ..., Ap, A} is linearly independent.

Proof

If possible, let {A1, A2, ..., Ap, A} be linearly dependent. Then, by Theorem 1, A can be expressed as a linear combination of A1, A2, ..., Ap. This is not possible because A ∉ L({A1, A2, ..., Ap}). Hence {A1, A2, ..., Ap, A} must be linearly independent.

E8
Which of the following sets are linearly independent?

(a) {(1, 1, 0), (1, 2, 0), (2, 1, 0)}
(b) {(1, 1, 1), (2, 1, 1), (1, 2, 2)}
(c) {(2, 2, 2), (3, 1, 1), (1, 3, 3)}
(d) {(1, -1, 1, -1), (-1, -1, -1, 1), (-1, 1, 1, 1)}
(e) {(1, 0, 0), (0, 3, 0), (1, 1, 1), (1, 2, 3)}


E9
If {A, B, C} is linearly independent, prove that {A + B, B + C, C + A} is also linearly independent.


14.4.1 Basis

Let S be a subspace of R^n. A subset T of S is called a basis of S if

(a) T is linearly independent, and
(b) L(T) = S.

For example, let

    S = {(x, y, 0) | x ∈ R, y ∈ R}.

Example 9 shows that S = L({(1, 1, 0), (0, 2, 0)}). Verify that {(1, 1, 0), (0, 2, 0)} is linearly independent also. Hence, {(1, 1, 0), (0, 2, 0)} is a basis of S.

We invite the reader to show that the following set is a basis of R^n:

    T = {E1, E2, ..., En}, where, for i = 1 to n, Ei = (0, 0, ..., 0, 1, 0, ..., 0)

is the vector whose ith component is 1 and all the other components are zero. The set T, given above, is called the standard basis of R^n.

We now observe that Theorem 2 can be used to generate a basis set of any subspace S of R^n. Choose any A1 ≠ 0 in S. If L({A1}) = S, then {A1} is a basis set of S. Otherwise, when L({A1}) ≠ S, we can choose A2 ∈ S such that A2 ∉ L({A1}). Then Theorem 2 shows that {A1, A2} is linearly independent. Clearly, this iterative process of enlarging a linearly independent subset T of S can be repeated till we obtain a linearly independent subset T of S satisfying L(T) = S. Note that we can also generate a basis of S which contains any given linearly independent subset of S. This is so because the iterative process can be initiated with the given linearly independent subset. Thus any linearly independent subset of R^n can be extended to a basis of R^n.

If {A1, A2, ..., Ap} is a basis of a subspace S of R^n, then it is easy to see that any X ∈ S can be expressed as a unique linear combination of the basis, that is, there exist unique scalars α1, α2, ..., αp ∈ R such that

    X = α1 A1 + α2 A2 + ... + αp Ap.

Thus (α1, α2, ..., αp) are called the coordinates of X with respect to the basis {A1, A2, ..., Ap}.
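Finding the coordinates of X with respect to a basis amounts to solving a linear system whose columns are the basis vectors. A minimal NumPy sketch, using the basis {(1, 0, 0), (1, 1, 0), (1, 1, 1)} of R^3 that appears in the exercises below and a vector X of our own choosing:

    import numpy as np

    # columns of B are the basis vectors (1,0,0), (1,1,0), (1,1,1)
    B = np.array([[1, 1, 1],
                  [0, 1, 1],
                  [0, 0, 1]], dtype=float)
    X = np.array([4, 5, 3], dtype=float)
    coords = np.linalg.solve(B, X)
    print(coords)        # [-1.  2.  3.], i.e. X = -1*(1,0,0) + 2*(1,1,0) + 3*(1,1,1)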


E10
Show that { ... } is a basis of the subspace { (x, y, z) | ... }.

14.4.2 Dimension of a Subspace

We know that every subspace of R^n has several basis sets. Is it possible that two different bases of the same subspace contain different numbers of vectors? The following theorem, whose proof is not in the course, shows that this is not possible.

Theorem 3

Any two bases of a subspace of R^n contain the same number of vectors.

The above theorem shows that with every subspace of R^n we can associate a unique number, namely, the number of vectors in any basis of the subspace. This unique number is called the dimension of the subspace. Thus, to find the dimension of a subspace, we can determine any one basis of the subspace and count the number of vectors in that basis. Since we know that the standard basis {E1, E2, ..., En} is one of the bases of R^n, we can now say that the dimension of R^n is n.

We invite the reader to prove the following obvious results:

1. If S is a p-dimensional subspace of R^n, then every set of p linearly independent vectors in S is a basis of S.

2. Every set of p + 1 or more vectors in a p-dimensional subspace of R^n is linearly dependent.

3. The dimension of any subspace of R^n cannot exceed n.

Example 12

Let us compute the dimension of the subspace

    S = {(x1, x2, x3, x4) | x1 + x2 + x3 + x4 = 0}

of R^4. Clearly the vector (1, -1, 0, 0) is in S. Also

    L({(1, -1, 0, 0)}) = {(a, -a, 0, 0) | a ∈ R}.

Since the vector (0, 1, -1, 0) is in S but not in L({(1, -1, 0, 0)}), we conclude, using Theorem 2, that {(1, -1, 0, 0), (0, 1, -1, 0)} is a linearly independent subset of S. Now

    L({(1, -1, 0, 0), (0, 1, -1, 0)}) = {α(1, -1, 0, 0) + β(0, 1, -1, 0) | α ∈ R, β ∈ R}.


Since the vector (0, 0, 1, -1) is in S but not in L({(1, -1, 0, 0), (0, 1, -1, 0)}), we conclude that {(1, -1, 0, 0), (0, 1, -1, 0), (0, 0, 1, -1)} is a linearly independent subset of S. Since the vector (1, 1, 1, 1) is not in S, the dimension of S cannot equal the dimension of R^4, i.e., 4. Hence {(1, -1, 0, 0), (0, 1, -1, 0), (0, 0, 1, -1)} is a basis of S and, therefore, the dimension of S is 3.

The examples that we have given so far point towards the fact that a study of subspaces of R^n is intimately connected with the solutions of a system of linear equations in n variables. The dimension of the subspace is concerned with the number of independent equations in the system. We shall unfold this relationship between subspaces of R^n and the solutions of systems of linear equations in n variables in the subsequent sections.
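Since the dimension of a subspace spanned by given vectors equals the rank of the matrix formed from them, the basis found in Example 12 can be checked numerically; the snippet below (NumPy) is purely illustrative:

    import numpy as np

    # rows are the basis vectors of S found in Example 12
    basis_of_S = np.array([[1, -1, 0, 0],
                           [0, 1, -1, 0],
                           [0, 0, 1, -1]], dtype=float)
    print(np.linalg.matrix_rank(basis_of_S))                              # 3 = dim S
    print(np.linalg.matrix_rank(np.vstack([basis_of_S, [1, 1, 1, 1]])))   # 4: (1,1,1,1) lies outside S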

E11
Show that {(1, 0, 0), (1, 1, 0), (1, 1, 1)} is a basis of R^3. Show that {(4, 5, 0), (1, 1, 0), (1, 1, 1)} and {(1, 0, 0), (4, 5, 0), (1, 1, 1)} are also bases of R^3. Is {(1, 0, 0), (1, 1, 0), (4, 5, 0)} a basis of R^3?

E12
Determine bases S1 and S2 of R^4 satisfying the following conditions:

(a) S1 ∩ S2 = ∅
(b) {(1, 0, 0, 0), (1, 1, 0, 0)} ⊆ S1
(c) {(1, 1, 1, 0), (1, 1, 1, 1)} ⊆ S2.

E13
If A, B and C in R^n satisfy A + B + C = 0, show that L({A, B}) = L({B, C}) = L({A, C}).


E14
Show that {(x1, x2, x3, x4) | x1 = x2 and x3 = x4} is a subspace of R^4. Determine the dimension of this subspace.


14.5 INNER PRODUCTS

For A = (a1, a2, ..., an) and B = (b1, b2, ..., bn) in R^n, the inner product of A and B is defined as

    (A, B) = a1 b1 + a2 b2 + ... + an bn,

and the norm (or length) of A is ||A|| = √(A, A). Among the properties of the inner product and the norm are the following.

(vi) ||αA + βB||^2 = (αA + βB, αA + βB)
                   = α(A, αA + βB) + β(B, αA + βB)
                   = α[α(A, A) + β(B, A)] + β[α(A, B) + β(B, B)]    [use (iv)]
                   = α^2 (A, A) + 2αβ(A, B) + β^2 (B, B)            [use (iii)]
                   = α^2 ||A||^2 + 2αβ(A, B) + β^2 ||B||^2.

(vii) Substituting β = 0 in (vi), we get ||αA||^2 = α^2 ||A||^2, i.e., ||αA|| = |α| ||A||.

(viii) Using (i) and (ii), we observe that ||A|| > 0 if A ≠ 0 and ||0|| = 0.

(ix) Substituting α = ||B||^2 and β = -(A, B) in (vi) and using (viii), we get

    (A, B)^2 ≤ ||A||^2 ||B||^2.

Hence |(A, B)| ≤ ||A|| ||B||. This inequality is called the Schwarz Inequality.

(x) Substituting α = β = 1 in (vi), we get

    ||A + B||^2 = ||A||^2 + 2(A, B) + ||B||^2
               ≤ ||A||^2 + 2||A|| ||B|| + ||B||^2    [use (ix)]
               = (||A|| + ||B||)^2.

Hence ||A + B|| ≤ ||A|| + ||B||. This inequality is called the Triangle Inequality.

The Schwarz inequality shows that

    -1 ≤ (A, B) / (||A|| ||B||) ≤ 1.

Thus, we can define the angle θ between the vectors A and B as follows:

    cos θ = (A, B) / (||A|| ||B||).

The above definition of angle coincides with the familiar notion of angle in R^2 or R^3. We have now generalised the notion of angle to R^n. We can now say that vectors A and B are perpendicular if (A, B) = 0.
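These inequalities and the angle formula are easy to check numerically; the snippet below (NumPy, with two vectors of our own choosing) verifies the Schwarz and triangle inequalities and computes the angle between a pair of perpendicular vectors:

    import numpy as np

    A = np.array([1.0, 2.0, 2.0])
    B = np.array([2.0, 1.0, -2.0])

    inner = float(A @ B)                              # (A, B) = a1 b1 + a2 b2 + a3 b3
    norm_A, norm_B = np.linalg.norm(A), np.linalg.norm(B)

    print(abs(inner) <= norm_A * norm_B)              # True: Schwarz inequality
    print(np.linalg.norm(A + B) <= norm_A + norm_B)   # True: triangle inequality
    theta = np.arccos(inner / (norm_A * norm_B))
    print(np.degrees(theta))                          # 90.0, since (A, B) = 0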

14.5.1 Orthogonal Basis

A subset S of R^n is said to be orthogonal if for all A and B in S we have

    (A, B) = 0 whenever A ≠ B.

An orthogonal set S is said to be orthonormal if, in addition, we have

    ||A|| = 1 for every A ∈ S.

Theorem 4

Every orthogonal set of non-zero vectors is linearly independent.


Proof

Let S = {A1, A2, ..., Ap} be any orthogonal subset of R^n consisting of non-zero vectors. Then

    α1 A1 + α2 A2 + ... + αp Ap = 0
    ⟹ (α1 A1 + α2 A2 + ... + αp Ap, Ai) = (0, Ai) = 0
    ⟹ αi (Ai, Ai) = 0, because (Aj, Ai) = 0 for j ≠ i
    ⟹ αi = 0, because Ai ≠ 0 implies (Ai, Ai) ≠ 0.

Thus α1 = α2 = ... = αp = 0. Hence S is linearly independent.

Theorem 5

If {A1, A2, ..., Ap} is any orthonormal set in R^n and X ∈ R^n, then the vector

    X - Σ_{i=1}^{p} (X, Ai) Ai

is orthogonal to every Aj, j = 1 to p.

Proof

Choose any Aj, j = 1 to p. Then

    (X - Σ_{i=1}^{p} (X, Ai) Ai, Aj) = (X, Aj) - Σ_{i=1}^{p} (X, Ai)(Ai, Aj)
                                     = (X, Aj) - (X, Aj), because (Ai, Aj) = 0 for i ≠ j and (Aj, Aj) = ||Aj||^2 = 1,
                                     = 0.

We now ask the important question: is it necessary that every subspace of R^n has an orthonormal basis? The following theorem shows that the answer is 'Yes'.

Theorem 6

Every non-zero subspace of R^n has an orthonormal basis.

Proof

Let S be any non-zero subspace of R^n. Choose any arbitrary basis of S, say {A1, A2, ..., Ap}. We shall now describe a procedure which gives us an orthonormal subset {B1, B2, ..., Bp} of S such that Bj is a linear combination of A1, A2, ..., Aj. Clearly Aj ≠ 0 for any j. Let

    B1 = A1 / ||A1||,

so that {B1} is an orthonormal set. Using Theorem 5,

    C2 = A2 - (A2, B1) B1

is orthogonal to B1. Let

    B2 = C2 / ||C2||.

Then {B1, B2} is an orthonormal set. Assuming that an orthonormal subset {B1, B2, ..., Br} of S is available, we shall now show how we can construct


an orthonormal subset {B1, B2, ..., B_{r+1}} of S, provided r < p. Using Theorem 5,

    C_{r+1} = A_{r+1} - Σ_{i=1}^{r} (A_{r+1}, Bi) Bi

is orthogonal to each Bi for i = 1 to r. Setting

    B_{r+1} = C_{r+1} / ||C_{r+1}||,

we get {B1, B2, ..., B_{r+1}} as an orthonormal subset of S. The set {B1, B2, ..., Bp}, constructed as above, gives an orthonormal basis of S.

The process outlined in Theorem 6, to construct an orthonormal basis of S starting with an arbitrary basis, is called the Gram-Schmidt Orthogonalisation Process.

Example 13

We illustrate the Gram-Schmidt process with the starting basis {(1, 0, 0), (1, 1, 0), (1, 1, 1)} of R^3. Here B1 = (1, 0, 0); C2 = (1, 1, 0) - ((1, 1, 0), B1) B1 = (0, 1, 0), so B2 = (0, 1, 0); and C3 = (1, 1, 1) - ((1, 1, 1), B1) B1 - ((1, 1, 1), B2) B2 = (0, 0, 1), so B3 = (0, 0, 1). Hence, the orthonormal basis constructed by using the Gram-Schmidt process is

    {B1, B2, B3} = {(1, 0, 0), (0, 1, 0), (0, 0, 1)},

which is the standard basis of R^3.

It may be noted that if we change the order of the vectors in the starting basis, then the Gram-Schmidt process gives a different orthonormal basis. Try the above example with the starting basis {(1, 1, 0), (1, 0, 0), (1, 1, 1)}.

Let {A1, A2, ..., Ap} be any basis of a subspace S of R^n. If the basis {A1, A2, ..., Ap} is known to be orthonormal and

    X = α1 A1 + α2 A2 + ... + αp Ap,

then, for any i = 1 to p, we have

    αi = (X, Ai),

because (Aj, Ai) = 0 when j ≠ i and (Ai, Ai) = 1.
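A compact sketch of the Gram-Schmidt process in code (NumPy; the function name is ours) is given below. It is applied to the reordered starting basis {(1, 1, 0), (1, 0, 0), (1, 1, 1)} suggested above, and it also checks the coordinate formula αi = (X, Bi) for the resulting orthonormal basis:

    import numpy as np

    def gram_schmidt(vectors):
        # Orthonormalise a list of linearly independent vectors.
        ortho = []
        for a in map(np.asarray, vectors):
            c = a.astype(float)
            for b in ortho:                       # subtract components along earlier B's
                c = c - (a @ b) * b
            ortho.append(c / np.linalg.norm(c))   # normalise to unit length
        return ortho

    B1, B2, B3 = gram_schmidt([(1, 1, 0), (1, 0, 0), (1, 1, 1)])
    print(np.round(B1, 4), np.round(B2, 4), np.round(B3, 4))
    # B1 = (0.7071, 0.7071, 0), B2 = (0.7071, -0.7071, 0), B3 = (0, 0, 1)

    X = np.array([2.0, 4.0, 6.0])
    coords = [float(X @ B) for B in (B1, B2, B3)]     # alpha_i = (X, B_i)
    print(np.allclose(coords[0] * B1 + coords[1] * B2 + coords[2] * B3, X))   # True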


E15
Determine an orthogonal basis of R^4 containing (1, -2, 1, 3).

E16
Use the Gram-Schmidt process on the basis {(0, 0, 1), ...} to obtain an orthonormal basis of R^3.

E17
Determine an orthonormal basis of the subspace L({(1, 0, 2, 1, 0), (1, 0, 8, ...)}).

E18
If ||A|| = ||B||, show that A + B and A - B are orthogonal.

E19
Prove the following:

(a) ||A + B||^2 + ||A - B||^2 = 2||A||^2 + 2||B||^2
(b) (A, B) = 0 ⟺ ||A + B||^2 = ||A||^2 + ||B||^2.


14.6 SUMMARY

We briefly sum up what has been done in this unit.

1. Canonical systems are linear systems of equations in which coefficients in some positions are zero, so that the system can be trivially solved by ordering the variables and using the back substitution process. A general system can be solved by the Gauss Elimination Method, in which the solutions remain unchanged under (i) multiplication of an equation by a non-zero number, (ii) interchange of any two equations, and (iii) addition of one equation to another.

2. The set R^n = {(a1, a2, ..., an) | a1, a2, ..., an ∈ R}, together with the operations of addition and scalar multiplication, is called a Euclidean space.

3. Subsets S ⊆ R^n having the properties (a) A ∈ S ⟹ αA ∈ S for all α ∈ R, and (b) A ∈ S and B ∈ S ⟹ A + B ∈ S, are called subspaces.

4. The set L(S), of all possible linear combinations of vectors in S, is called the linear span of S.

5. The vectors A1, A2, ..., Ap of R^n are linearly dependent if α1 A1 + α2 A2 + ... + αp Ap = 0 and at least one of the αi's is not zero; otherwise the vectors are linearly independent.

6. (a) A subset T of S, a subspace of R^n, is called a basis of S if (i) T is linearly independent and (ii) L(T) = S.
   (b) Any linearly independent subset of R^n can be extended to a basis of R^n by an iterative process.
   (c) The number of vectors in any basis of a subspace S of R^n is called the dimension of S.

7. (a) For A = (a1, a2, ..., an) ∈ R^n and B = (b1, b2, ..., bn) ∈ R^n, the inner product (A, B) is defined as (A, B) = a1 b1 + a2 b2 + ... + an bn.
   (b) The length or norm of A is ||A|| = √(A, A).
   (c) |(A, B)| ≤ ||A|| ||B|| (Schwarz Inequality).
   (d) ||A + B|| ≤ ||A|| + ||B|| (Triangle Inequality).
   (e) The angle θ between vectors A and B is given by cos θ = (A, B) / (||A|| ||B||).

8. (a) A subset S of R^n is said to be orthogonal if, for all A, B ∈ S with A ≠ B, (A, B) = 0. If, in addition, ||A|| = 1 for every A ∈ S, the set S is called orthonormal.
   (b) Every non-zero subspace of R^n has an orthonormal basis.
   (c) The Gram-Schmidt process, outlined in 14.5.1, can be used to construct an orthonormal basis.

14.7 SOLUTIONS/ANSWERS

E1. (a) t = 3, z = 9, y = -18, x = 11.
    (b) t = t0, z = 6 + t0, y = -8 + t0, x = 39.

E2. (a) x = -4, y = 9, z = 5.
    (b) x = 4 - 2t0, y = 0, z = 2 + t0, t = t0.

E5. Subspaces are given by (a) and (c).

E7. P + Q = L({(1, 0, 0, 0), (0, 0, 1, 0), (0, 1, 0, 1)}).

E8. The linearly independent set is given by (d). The others are linearly dependent.


E14. The dimension is 2.

E15. {(1, -2, 1, 3), (2, 1, -3, 1), (1, 1, 1, 0), (-1, 1, 0, 1)}.