
+Review of Linear Algebra

10-725 - Optimization, 1/14/10 Recitation

Sivaraman Balakrishnan

+Outline

Matrix subspaces

Linear independence and bases

Gaussian elimination

Eigenvalues and eigenvectors

Definiteness

Matlab essentials – Geoff’s LP sketcher, linprog, debugging and using documentation

Basic concepts

A vector in R^n is an ordered set of n real numbers, e.g. v = (1, 6, 3, 4) is in R^4.

A column vector is n-by-1, e.g. [1; 6; 3; 4]; a row vector is 1-by-n, e.g. [1 6 3 4].

An m-by-n matrix is an object with m rows and n columns, each entry filled with a (typically real) number:

[Slide figure: the example vector written as a column, alongside an example matrix of numbers.]
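As a minimal Matlab sketch of these objects (the matrix entries here are arbitrary, not the ones from the slide):

  v_col = [1; 6; 3; 4];   % column vector in R^4 (4-by-1)
  v_row = [1 6 3 4];      % row vector (1-by-4)
  A = [1 2 3; 4 5 6];     % a 2-by-3 matrix (entries chosen arbitrarily)
  size(A)                 % returns [2 3]: m = 2 rows, n = 3 columns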

Basic concepts - II

Vector dot product: for u = (u1, u2) and v = (v1, v2),

u · v = u1 v1 + u2 v2

Matrix product: for A = [a11 a12; a21 a22] and B = [b11 b12; b21 b22],

AB = [a11 b11 + a12 b21   a11 b12 + a12 b22
      a21 b11 + a22 b21   a21 b12 + a22 b22]
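A quick Matlab check of both operations (the particular numbers are only for illustration):

  u = [1 2];  v = [3 4];
  dot(u, v)                % 1*3 + 2*4 = 11
  A = [1 2; 3 4];  B = [5 6; 7 8];
  A * B                    % [19 22; 43 50], matching the entrywise formula above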

+Matrix subspaces

What is a matrix? Geometric notion – a matrix is an object that “transforms” a vector from its row space to its column space

Vector space – set of vectors closed under scalar multiplication and addition

Subspace – a subset of a vector space that is also closed under these operations; it always contains the zero vector ({0} is the trivial subspace)

+Row space of a matrix

Vector space spanned by rows of matrix

Span – set of all linear combinations of a set of vectors

This isn’t always R^n – e.g. the row space of [1 2; 2 4] is just the line spanned by (1, 2)

Dimension of the row space – number of linearly independent rows (rank)

We’ll discuss how to calculate the rank in a couple of slides

+Null space, column space

Null space – the orthogonal complement of the row space

Every vector in this space is a solution to the equation Ax = 0

Rank–nullity theorem: rank(A) + dim(null space of A) = n, the number of columns of A

Column space – spanned by the columns of the matrix; its dimension also equals the rank, and the analogue of rank–nullity (with the left null space as the orthogonal complement) holds for A^T
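A small Matlab sketch of these subspaces, on an arbitrary rank-deficient example matrix:

  A = [1 2 3; 2 4 6; 1 1 1];    % rank 2: the second row is twice the first
  rank(A)                       % 2
  N = null(A);                  % orthonormal basis for the null space (solutions of Ax = 0)
  C = orth(A);                  % orthonormal basis for the column space
  rank(A) + size(N, 2) == size(A, 2)   % rank-nullity: rank + nullity = number of columns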

+Linear independence

A set of vectors is linearly independent if none of them can be written as a linear combination of the others

Given a vector space, we can find a set of linearly independent vectors that spans this space

The cardinality of this set is the dimension of the vector space
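One common way to check independence in Matlab is to stack the vectors as columns and compare the rank to the number of vectors (a sketch, with made-up vectors):

  v1 = [1; 0; 1];  v2 = [0; 1; 1];  v3 = [1; 1; 2];   % v3 = v1 + v2
  V = [v1 v2 v3];
  rank(V) == size(V, 2)   % false: the three vectors are linearly dependent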

+Gaussian elimination

Finding rank and row echelon form

Applications: solving a linear system of equations (we saw this in class); finding the inverse of a matrix
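In Matlab these operations look roughly like this (the 2-by-2 system is made up for illustration):

  A = [2 1; 1 3];  b = [3; 5];
  rref([A b])        % row-reduce the augmented system [A | b]
  x = A \ b;         % solve Ax = b by elimination (preferred over forming the inverse)
  Ainv = inv(A);     % explicit inverse, conceptually found by eliminating on [A | I]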

+Basis of a vector space

What is a basis? A basis is a maximal set of linearly independent vectors and a minimal set of spanning vectors of a vector space

Orthonormal basis: two vectors are orthonormal if their dot product is 0 and each has length 1; an orthonormal basis consists of pairwise orthonormal vectors

What is special about orthonormal bases? Projection onto them is easy; they have a very useful length property (the norm of a vector equals the norm of its coefficient vector); and they are universal – given any basis, Gram–Schmidt produces an orthonormal basis with the same span
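A minimal Matlab sketch of producing an orthonormal basis with the same span, here via the economy-size QR factorization (which yields the same orthonormal basis Gram–Schmidt would, up to signs); the starting basis is arbitrary:

  B = [1 1; 1 0; 0 1];     % columns: a (non-orthonormal) basis of a 2-D subspace of R^3
  [Q, R] = qr(B, 0);       % economy-size QR; the columns of Q are orthonormal
  Q' * Q                   % identity (up to rounding)
  rank([B Q]) == rank(B)   % Q spans the same subspace as B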

+Matrices as constraints

Geoff introduced writing an LP with a constraint matrix

We know how to write any LP in standard form

Why not just solve it to find “opt”?

A special basis for square matrices

The eigenvectors of a matrix are unit vectors that satisfy Ax = λx

Example calculation on next slide

For symmetric matrices, the eigenvalues are real and the eigenvectors can be chosen to be orthonormal

This is the most useful orthonormal basis, with many interesting properties, e.g. optimal matrix approximation (PCA/SVD)

Other famous ones are the Fourier basis and wavelet basis
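A quick Matlab illustration on an arbitrary symmetric matrix:

  S = [2 1; 1 2];      % a symmetric matrix
  [V, D] = eig(S);     % columns of V: eigenvectors; diag(D): eigenvalues
  diag(D)              % real eigenvalues (1 and 3)
  V' * V               % identity: the eigenvectors are orthonormal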

Eigenvalues

(A – λI)x = 0

λ is an eigenvalue iff det(A – λI) = 0

Example:

A = [1   4     5
     0   3/4   6
     0   0     1/2]

det(A – λI) = (1 – λ)(3/4 – λ)(1/2 – λ)

so the eigenvalues are λ = 1, 3/4, 1/2.
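This is easy to check in Matlab:

  A = [1 4 5; 0 3/4 6; 0 0 1/2];
  eig(A)    % returns 1, 0.75 and 0.5 (A is triangular, so the eigenvalues sit on its diagonal)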

+Projections (vector)

Projection of a vector b onto a vector a:

c = (a^T b / a^T a) a

Example: a = (1, 0), b = (2, 2) gives c = (2, 0).

[Slide figure: the vector (2, 2, 2) projected onto the span of (1, 0, 0) and (0, 1, 0); the projection matrix [1 0 0; 0 1 0; 0 0 0] maps (2, 2, 2) to (2, 2, 0).]
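A Matlab sketch of the vector-projection formula, using the example above:

  a = [1; 0];  b = [2; 2];
  c = (a' * b) / (a' * a) * a   % projection of b onto a: (2, 0)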

+Matrix projection

Generalizing the formula from the previous slide: the projection of v onto the column space of Q is Q (Q^T Q)^{-1} Q^T v; the vector (Q^T Q)^{-1} Q^T v holds its coefficients in the basis given by the columns of Q.

Special case of a matrix Q with orthonormal columns: the projection is Q Q^T v, with coefficients Q^T v.

You’ve probably seen something very similar in least squares regression
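A minimal Matlab sketch (arbitrary Q and v), which is exactly the normal-equations computation from least squares:

  Q = [1 0; 1 1; 0 1];            % columns span a 2-D subspace of R^3
  v = [1; 2; 3];
  coeffs = (Q' * Q) \ (Q' * v);   % coefficients in the basis given by Q's columns
  proj = Q * coeffs               % the projected vector (the least-squares fit of Q*x to v)
  % if Q had orthonormal columns, Q'*Q = I and proj would simplify to Q*Q'*v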

Definiteness

Characterization based on eigenvalues: a symmetric matrix is positive definite iff all of its eigenvalues are strictly positive

Positive definite matrices are a special sub-class of invertible matrices

One way to test for positive definiteness is by showing x^T A x > 0 for all x ≠ 0

A very useful example that you’ll see a lot in this class: the covariance matrix (always positive semi-definite)
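Two standard Matlab checks, sketched on an arbitrary symmetric matrix:

  A = [2 1; 1 2];        % symmetric
  all(eig(A) > 0)        % true: every eigenvalue is strictly positive
  [R, p] = chol(A);      % Cholesky succeeds (p == 0) exactly when A is positive definite
  p == 0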

Matlab Tutorial - 1

linsolve – stability and the condition number

Geoff’s sketching code – might be very useful for HW1
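A small sketch of the kind of call covered (the system is arbitrary; cond reports how much errors in b can be amplified in the solution):

  A = [1 2; 3 4];  b = [5; 6];
  x = linsolve(A, b);    % solve Ax = b
  cond(A)                % condition number: large values mean the solve is ill-conditioned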

Matlab Tutorial - 2

linprog – also very useful for HW1

Also covered: debugging basics and using the Matlab help
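A minimal linprog sketch (Optimization Toolbox); the LP here is made up, written as min f'x subject to Ax <= b and x >= 0:

  f = [-1; -2];                     % minimize -x1 - 2*x2, i.e. maximize x1 + 2*x2
  A = [1 1; 1 3];  b = [4; 6];      % inequality constraints A*x <= b
  lb = [0; 0];                      % x >= 0
  x = linprog(f, A, b, [], [], lb)  % optimal solution for this toy LP is x = (3, 1)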

+Extra stuff

Vector and matrix norms – matrix norms (operator norm, Frobenius norm); vector norms (Lp norms)

Determinants

SVD/PCA
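For reference, the corresponding Matlab built-ins, shown on an arbitrary matrix:

  A = [3 0; 4 5];
  norm(A)               % operator (spectral) 2-norm
  norm(A, 'fro')        % Frobenius norm
  norm([3; 4], 2)       % vector L2 norm (= 5)
  det(A)                % determinant (= 15)
  [U, S, V] = svd(A);   % singular value decomposition (used for PCA / low-rank approximation)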