Quadratic Form and Functional Optimization
9th June, 2011
Junpei Tsuji

DESCRIPTION

This slideshow describes a mathematics topic, the quadratic form, in English. I made the slideshow to understand this topic more deeply. I like linear algebra very much!!

TRANSCRIPT

Page 1: Quadratic form and functional optimization

Quadratic Form and Functional Optimization

9th June, 2011 Junpei Tsuji

Page 2: Quadratic form and functional optimization

Optimization of a multivariate quadratic function

$$J(x_1, x_2) = 1.2 + (0.2,\ 0.3)\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \frac{1}{2}(x_1,\ x_2)\begin{pmatrix} 3 & 1 \\ 1 & 4 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}$$

$$= 1.2 + 0.2 x_1 + 0.3 x_2 + \frac{3}{2} x_1^2 + x_1 x_2 + 2 x_2^2$$

At the stationary point, $(x_1, x_2, J) = (-0.045, -0.064, 1.1881)$.

Page 3: Quadratic form and functional optimization

Quadratic approximation by Taylor expansion

$$f(\boldsymbol{x}) \approx \bar{f} + \bar{\boldsymbol{J}} \cdot (\boldsymbol{x} - \bar{\boldsymbol{x}}) + \frac{1}{2}(\boldsymbol{x} - \bar{\boldsymbol{x}})^T \bar{\boldsymbol{H}} (\boldsymbol{x} - \bar{\boldsymbol{x}})$$

The three terms are, respectively, a constant, a linear form, and a quadratic form.

where

• $\boldsymbol{x} := (x_1, x_2, \cdots, x_N)^T$

• $\bar{f} := f(\bar{\boldsymbol{x}})$

• $\bar{\boldsymbol{J}} := \left( \dfrac{\partial f}{\partial x_1}, \dfrac{\partial f}{\partial x_2}, \cdots, \dfrac{\partial f}{\partial x_N} \right)_{\boldsymbol{x} = \bar{\boldsymbol{x}}}$ : the Jacobian (gradient)

• $\bar{\boldsymbol{H}} := \begin{pmatrix} \dfrac{\partial^2 f}{\partial x_1 \partial x_1} & \cdots & \dfrac{\partial^2 f}{\partial x_1 \partial x_N} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial^2 f}{\partial x_N \partial x_1} & \cdots & \dfrac{\partial^2 f}{\partial x_N \partial x_N} \end{pmatrix}_{\boldsymbol{x} = \bar{\boldsymbol{x}}}$ : the Hessian (constant)

Page 4: Quadratic form and functional optimization

Completing the square

$$f(\boldsymbol{x}) = \bar{f} + \bar{\boldsymbol{J}} \cdot (\boldsymbol{x} - \bar{\boldsymbol{x}}) + \frac{1}{2}(\boldsymbol{x} - \bar{\boldsymbol{x}})^T \bar{\boldsymbol{H}} (\boldsymbol{x} - \bar{\boldsymbol{x}})$$

• Let $\bar{\boldsymbol{x}} = \boldsymbol{x}^*$ where $\boldsymbol{J}(\boldsymbol{x}^*)^T = \boldsymbol{0}$; then

$$f(\boldsymbol{x}) = f^* + \frac{1}{2}(\boldsymbol{x} - \boldsymbol{x}^*)^T \boldsymbol{H}^* (\boldsymbol{x} - \boldsymbol{x}^*)$$

(a constant plus a quadratic form)

Page 5: Quadratic form and functional optimization

Completing the square ๐‘“ ๐’™ = ๐‘ + ๐’ƒ๐‘‡๐’™ +

12๐’™๐‘‡๐‘จ๐’™

๐‘“ ๐’™ = ๐‘‘ +12๐’™ โˆ’ ๐’™0 ๐‘‡๐‘จ ๐’™ โˆ’ ๐’™0

= ๐‘‘ +12๐’™0๐‘‡๐‘จ๐’™0 โˆ’

12๐’™0๐‘‡ ๐‘จ + ๐‘จ๐‘‡ ๐’™ +

12๐’™๐‘‡๐‘จ๐’™

โ€ข ๐’ƒ๐‘‡ = โˆ’12๐’™0๐‘‡ ๐‘จ + ๐‘จ๐‘‡

๐’™0๐‘‡ = โˆ’2๐’ƒ๐‘‡ ๐‘จ + ๐‘จ๐‘‡ โˆ’1 ๐’™0 = โˆ’2 ๐‘จ + ๐‘จ๐‘‡ โˆ’1๐’ƒ

โ€ข ๐‘ = ๐‘‘ + 12๐’™0๐‘‡๐‘จ๐’™0

๐‘‘ = ๐‘ โˆ’12๐’™0

๐‘‡๐‘จ๐’™0 = ๐‘ โˆ’ 2๐’ƒ๐‘‡ ๐‘จ + ๐‘จ๐‘‡ โˆ’1๐‘จ ๐‘จ + ๐‘จ๐‘‡ โˆ’1๐’ƒ

Therefore, ๐‘“ ๐’™ = ๐‘ โˆ’ 2๐’ƒ๐‘‡ ๐‘จ + ๐‘จ๐‘‡ โˆ’1๐‘จ ๐‘จ + ๐‘จ๐‘‡ โˆ’1๐’ƒ

+12 ๐’™ + 2 ๐‘จ + ๐‘จ๐‘‡ โˆ’1๐’ƒ ๐‘‡๐‘จ ๐’™ + 2 ๐‘จ + ๐‘จ๐‘‡ โˆ’1๐’ƒ

โ€ข If ๐‘จ was symmetric matrix,

๐‘“ ๐’™ = ๐‘ โˆ’12๐’ƒ

๐‘‡๐‘จโˆ’1๐’ƒ +12 ๐’™ + ๐‘จโˆ’1๐’ƒ ๐‘‡๐‘จ ๐’™ + ๐‘จโˆ’1๐’ƒ

Page 6: Quadratic form and functional optimization

Quadratic form

$$f(\boldsymbol{x}') = \boldsymbol{x}'^T \boldsymbol{S} \boldsymbol{x}'$$

where
• $\boldsymbol{S}$ is a symmetric matrix.

Page 7: Quadratic form and functional optimization

Symmetric matrix

• A symmetric matrix $\boldsymbol{S}$ is defined as a matrix that satisfies the following formula: $\boldsymbol{S}^T = \boldsymbol{S}$

• A symmetric matrix $\boldsymbol{S}$ has real eigenvalues $\lambda_i$ and eigenvectors $\boldsymbol{u}_i$ that form an orthonormal basis, where

$$\boldsymbol{S}\boldsymbol{u}_i = \lambda_i \boldsymbol{u}_i, \qquad \lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_N$$

$$(\boldsymbol{u}_i, \boldsymbol{u}_j) = \delta_{ij} \qquad (\delta_{ij}\ \text{is Kronecker's delta})$$

Page 8: Quadratic form and functional optimization

Diagonalization of a symmetric matrix

• We define an orthogonal matrix $\boldsymbol{U}$ as follows: $\boldsymbol{U} = (\boldsymbol{u}_1, \boldsymbol{u}_2, \cdots, \boldsymbol{u}_N)$

• Then $\boldsymbol{U}$ satisfies the following formulas: $\boldsymbol{U}^T\boldsymbol{U} = \boldsymbol{I}$, therefore $\boldsymbol{U}^{-1} = \boldsymbol{U}^T$, where $\boldsymbol{I}$ is an identity matrix.

$$\boldsymbol{S}\boldsymbol{U} = \boldsymbol{S}(\boldsymbol{u}_1, \boldsymbol{u}_2, \cdots, \boldsymbol{u}_N) = (\boldsymbol{S}\boldsymbol{u}_1, \boldsymbol{S}\boldsymbol{u}_2, \cdots, \boldsymbol{S}\boldsymbol{u}_N) = (\lambda_1\boldsymbol{u}_1, \lambda_2\boldsymbol{u}_2, \cdots, \lambda_N\boldsymbol{u}_N) = \boldsymbol{U}\,\mathrm{diag}(\lambda_1, \lambda_2, \cdots, \lambda_N)$$

$$\therefore \boldsymbol{S} = \boldsymbol{U}\,\mathrm{diag}(\lambda_1, \lambda_2, \cdots, \lambda_N)\,\boldsymbol{U}^T$$

Page 9: Quadratic form and functional optimization

Transformation to the principal axes

$$f(\boldsymbol{x}') = \boldsymbol{x}'^T \boldsymbol{S} \boldsymbol{x}'$$

• Then we substitute $\boldsymbol{x}' = \boldsymbol{U}\boldsymbol{z}$, where $\boldsymbol{z} = (z_1, z_2, \cdots, z_N)^T$, so that $\boldsymbol{U}^T\boldsymbol{S}\boldsymbol{U} = \mathrm{diag}(\lambda_1, \cdots, \lambda_N)$ by the diagonalization $\boldsymbol{S} = \boldsymbol{U}\,\mathrm{diag}(\lambda_1, \cdots, \lambda_N)\,\boldsymbol{U}^T$.

$$f(\boldsymbol{U}\boldsymbol{z}) = (\boldsymbol{U}\boldsymbol{z})^T \boldsymbol{S} (\boldsymbol{U}\boldsymbol{z}) = \boldsymbol{z}^T \boldsymbol{U}^T \boldsymbol{S} \boldsymbol{U} \boldsymbol{z} = \boldsymbol{z}^T\,\mathrm{diag}(\lambda_1, \lambda_2, \cdots, \lambda_N)\,\boldsymbol{z}$$

$$\therefore f(\boldsymbol{z}) = \sum_{i=1}^{N} \lambda_i z_i^2$$

Page 10: Quadratic form and functional optimization

Contour surface

• If we set $f(\boldsymbol{z})$ equal to a constant $c$,

$$f(\boldsymbol{z}) = \sum_{i=1}^{N} \lambda_i z_i^2 = c$$

• When $N = 2$:
  – the locus of $\boldsymbol{z}$ is an ellipse if $\lambda_1\lambda_2 > 0$;
  – the locus of $\boldsymbol{z}$ is a hyperbola if $\lambda_1\lambda_2 < 0$.

Page 11: Quadratic form and functional optimization

Contour surface

[Figure: contours of $f(\boldsymbol{z}) = \sum_{i=1}^{2} \lambda_i z_i^2 = \text{const.}$ in the $(z_1, z_2)$ plane with $\lambda_1\lambda_2 > 0$; the example $f(x_1, x_2) = -x_1^2 - 2x_2^2 + 20.0$ has a maximal or minimal point at the center.]

Page 12: Quadratic form and functional optimization

Transformation to the principal axes

[Figure: contours of $f(\boldsymbol{x}') = \text{const.}$ in the $(x_1', x_2')$ plane; the transformation to the principal axes is $\boldsymbol{x}' = \boldsymbol{U}\boldsymbol{z}$, i.e. $\boldsymbol{z} = \boldsymbol{U}^T\boldsymbol{x}'$.]

Page 13: Quadratic form and functional optimization

๐‘ฅ๐’™1

๐‘ฅ๐’™2

Parallel translation

๐‘“ ๐’™ = ๐‘๐‘๐‘๐‘๐‘.

๐‘ฅ1

๐‘ฅ2

๐’™๏ฟฝ

๐’™๐’™ = ๐’™ โˆ’ ๐’™๏ฟฝ

Page 14: Quadratic form and functional optimization

Contour surface of a quadratic function

$$f(\boldsymbol{x}) = f^* + \frac{1}{2}(\boldsymbol{x} - \boldsymbol{x}^*)^T \boldsymbol{H}^* (\boldsymbol{x} - \boldsymbol{x}^*)$$

[Figure: contours of $f(\boldsymbol{x}) = \text{const.}$ in the $(x_1, x_2)$ plane, centered at the stationary point $\boldsymbol{x}^*$.]

Page 15: Quadratic form and functional optimization

Contour surface

[Figure: contours of $f(\boldsymbol{z}) = \sum_{i=1}^{2} \lambda_i z_i^2 = \text{const.}$ in the $(z_1, z_2)$ plane with $\lambda_1\lambda_2 < 0$; the example $f(x_1, x_2) = x_1^2 - x_2^2$ has a saddle point at the origin.]

Page 16: Quadratic form and functional optimization

Stationary points

[Figure: surface plot of $f(x_1, x_2) = x_1^3 + x_2^3 + 3x_1x_2 + 2$, which has a saddle point and a maximal point.]

Page 17: Quadratic form and functional optimization

Stationary points

[Figure: surface plot of $f(x_1, x_2) = \exp\!\left(-\frac{1}{3}x_1^3 + x_1 - x_2^2\right)$, which has a maximal point and a saddle point.]

Page 18: Quadratic form and functional optimization
Page 19: Quadratic form and functional optimization

Newton-Raphson method

• Newton's method is an approximate solver of $f_{\boldsymbol{x}}(\boldsymbol{x}) = \boldsymbol{0}$, where $f(\boldsymbol{x})$ is an $N$-variable function, by using a quadratic approximation.

[Figure: $f(\boldsymbol{x})$ and the quadratic approximation of $f(\boldsymbol{x})$ at $\boldsymbol{x}$; the iterate moves from $\boldsymbol{x}$ to $\boldsymbol{x} + \boldsymbol{\Delta x}$, approaching the stationary point $\boldsymbol{x}^*$ where $f_{\boldsymbol{x}}(\boldsymbol{x}^*) = \boldsymbol{0}$.]

$$f(\boldsymbol{x} + \Delta\boldsymbol{x}) \approx f(\boldsymbol{x}) + \boldsymbol{J}(\boldsymbol{x}) \cdot \Delta\boldsymbol{x} + \frac{1}{2}\Delta\boldsymbol{x}^T \boldsymbol{H}(\boldsymbol{x}) \Delta\boldsymbol{x}$$

$$\frac{\partial f(\boldsymbol{x} + \Delta\boldsymbol{x})}{\partial (\Delta\boldsymbol{x})} = \boldsymbol{J}(\boldsymbol{x})^T + \boldsymbol{H}(\boldsymbol{x})\Delta\boldsymbol{x}$$

Page 20: Quadratic form and functional optimization

Algorithm of Newtonโ€™s method

Procedure Newton($\boldsymbol{J}(\boldsymbol{x})$, $\boldsymbol{H}(\boldsymbol{x})$):
1. Initialize $\boldsymbol{x}$.
2. Calculate $\boldsymbol{J}(\boldsymbol{x})$ and $\boldsymbol{H}(\boldsymbol{x})$.
3. Solve the following simultaneous equations for $\Delta\boldsymbol{x}$: $\boldsymbol{J}(\boldsymbol{x})^T + \boldsymbol{H}(\boldsymbol{x})\Delta\boldsymbol{x} = \boldsymbol{0}$
4. Update $\boldsymbol{x}$ as follows: $\boldsymbol{x} \leftarrow \boldsymbol{x} + \Delta\boldsymbol{x}$
5. If $\|\Delta\boldsymbol{x}\| < \delta$, return $\boldsymbol{x}$; else go back to 2.

Page 21: Quadratic form and functional optimization

Linear regression

๐’™๐‘– ,๐‘ฆ๐‘–

๐’™

๐‘ฆ ๐‘ฆ = ๐‘“ ๐’™ = ๐›ฝ0 + ๏ฟฝ๐›ฝ๐‘—๐‘ฅ๐‘—

๐‘

๐‘—=1

We would like to find ๐œทโˆ— that minimizes the residual sum of square (RSS).

๐‘ samples

๐‘-th dimensional space

Page 22: Quadratic form and functional optimization

Linear regression

min๐œท

RSS ๐œท โ€ข where

RSS ๐œท = ๏ฟฝ ๐‘ฆ๐‘– โˆ’ ๐‘“ ๐’™๐‘– 2๐‘

๐‘–=1

= ๏ฟฝ ๐‘ฆ๐‘– โˆ’ ๐›ฝ0 + ๏ฟฝ๐›ฝ๐‘—๐‘ฅ๐‘–๐‘—

๐‘

๐‘—=1

2๐‘

๐‘–=1

โ€ข Given ๐‘ฟ,๐’š,๐œท as follows:

๐‘ฟ =๐‘ฅ11 โ‹ฏ ๐‘ฅ1๐‘โ‹ฎ โ‹ฑ โ‹ฎ๐‘ฅ๐‘1 โ‹ฏ ๐‘ฅ๐‘๐‘

1โ‹ฎ1

, ๐’š =๐‘ฆ1โ‹ฎ๐‘ฆ๐‘

, ๐œท =๐›ฝ1โ‹ฎ๐›ฝ๐‘

โˆด RSS ๐œท = ๐’š โˆ’ ๐‘ฟ๐œท 2

Page 23: Quadratic form and functional optimization

Linear regression

RSS ๐œท = ๐ฝ ๐œท = ๐’š โˆ’ ๐‘ฟ๐œท 2 = ๐’š โˆ’ ๐‘ฟ๐œท ๐‘‡ ๐’š โˆ’ ๐‘ฟ๐œท = ๐’š๐‘‡๐’š โˆ’ ๐œท๐‘‡๐‘ฟ๐‘‡๐’š โˆ’ ๐’š๐‘‡๐‘ฟ๐œท + ๐œท๐‘‡๐‘ฟ๐‘‡๐‘ฟ๐œท

โ€ข ๐œ•๐œ•๐œท

๐’‚๐‘‡๐œท = ๐’‚

โ€ข ๐œ•๐œ•๐œท

๐œท๐‘‡๐’‚ = ๐’‚

โ€ข ๐œ•๐œ•๐œท

๐œท๐‘‡๐‘จ๐œท = ๐‘จ

๐ฝโ€ฒ ๐œท =๐œ•๐ฝ๐œ•๐œท

= โˆ’2๐‘ฟ๐‘‡๐’š + 2๐‘ฟ๐‘‡๐‘ฟ๐œท

Page 24: Quadratic form and functional optimization

Linear regression

Given $\boldsymbol{\beta}^*$ that satisfies $J'(\boldsymbol{\beta}^*) = \boldsymbol{0}$,

$$\boldsymbol{X}^T\boldsymbol{y} = \boldsymbol{X}^T\boldsymbol{X}\boldsymbol{\beta}^*, \qquad \boldsymbol{y}^T\boldsymbol{X} = \boldsymbol{\beta}^{*T}\boldsymbol{X}^T\boldsymbol{X}$$

$$\therefore \boldsymbol{\beta}^* = (\boldsymbol{X}^T\boldsymbol{X})^{-1}\boldsymbol{X}^T\boldsymbol{y}$$

$$\therefore J(\boldsymbol{\beta}) = \boldsymbol{y}^T\boldsymbol{y} - \boldsymbol{\beta}^T\boldsymbol{X}^T\boldsymbol{X}\boldsymbol{\beta}^* - \boldsymbol{\beta}^{*T}\boldsymbol{X}^T\boldsymbol{X}\boldsymbol{\beta} + \boldsymbol{\beta}^T\boldsymbol{X}^T\boldsymbol{X}\boldsymbol{\beta}$$

$$\therefore J(\boldsymbol{\beta}) = \boldsymbol{y}^T\boldsymbol{y} - \boldsymbol{\beta}^{*T}\boldsymbol{X}^T\boldsymbol{X}\boldsymbol{\beta}^* + \boldsymbol{\beta}^{*T}\boldsymbol{X}^T\boldsymbol{X}\boldsymbol{\beta}^* - \boldsymbol{\beta}^T\boldsymbol{X}^T\boldsymbol{X}\boldsymbol{\beta}^* - \boldsymbol{\beta}^{*T}\boldsymbol{X}^T\boldsymbol{X}\boldsymbol{\beta} + \boldsymbol{\beta}^T\boldsymbol{X}^T\boldsymbol{X}\boldsymbol{\beta}$$

$$\therefore J(\boldsymbol{\beta}) = \boldsymbol{y}^T\boldsymbol{y} - \boldsymbol{\beta}^{*T}\boldsymbol{X}^T\boldsymbol{X}\boldsymbol{\beta}^* + (\boldsymbol{\beta} - \boldsymbol{\beta}^*)^T\boldsymbol{X}^T\boldsymbol{X}(\boldsymbol{\beta} - \boldsymbol{\beta}^*)$$

(completing the square)
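The normal-equations solution can be cross-checked against a library least-squares solver. A sketch on synthetic (made-up) data, with the ones column appended as on the slide:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data: N = 20 samples, p = 2 features plus an intercept column.
N, p = 20, 2
X = np.column_stack([rng.normal(size=(N, p)), np.ones(N)])
beta_true = np.array([1.5, -2.0, 0.7])        # illustrative coefficients
y = X @ beta_true + 0.1 * rng.normal(size=N)  # noisy observations

# beta* = (X^T X)^{-1} X^T y  (solve the normal equations directly)
beta_star = np.linalg.solve(X.T @ X, X.T @ y)

# Cross-check against NumPy's least-squares solver.
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta_star, beta_lstsq)   # the two solutions agree
```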

Page 25: Quadratic form and functional optimization

Linear regression

๐ฝ ๐œท = ๐’š๐‘‡๐’š โˆ’ ๐œทโˆ—๐‘‡๐‘ฟ๐‘‡๐‘ฟ๐œทโˆ— + ๐œท โˆ’ ๐œทโˆ— ๐‘‡๐‘ฟ๐‘‡๐‘ฟ ๐œท โˆ’ ๐œทโˆ— = ๐’š โˆ’ ๐‘ฟ๐œทโˆ— 2 + ๐œท โˆ’ ๐œทโˆ— ๐‘‡๐‘ฟ๐‘‡๐‘ฟ ๐œท โˆ’ ๐œทโˆ—

= ๐ฝ ๐œทโˆ— +12๐œท โˆ’ ๐œทโˆ— ๐‘‡๐‘ฏ ๐œท โˆ’ ๐œทโˆ—

quadratic form Residual sum of squares (RSS) by Linear Regression

๐ฝ ๐œท = ๐‘๐‘๐‘๐‘๐‘.

๐›ฝ1

๐›ฝ2

๐œทโˆ—

๐œทโˆ— = ๐‘ฟ๐‘‡๐‘ฟ โˆ’1๐‘ฟ๐‘‡๐’š ๐‘ฏ = 2๐‘ฟ๐‘‡๐‘ฟ

Page 26: Quadratic form and functional optimization

Hessian

โ€ข ๐‘ฏ โ‰” ๐œ•2๐ฝ๐œ•๐›ฝ๐‘–๐œ•๐›ฝ๐‘—

= 2๐‘ฟ๐‘‡๐‘ฟ

โ€ข ๐‘ฏ has the following two features: โ€“ symmetric matrix: ๐‘ฏ๐‘‡ = ๐‘ฏ โ€“ positive-definite matrix: ๐’™โˆ€ โ‰  ๐ŸŽ, ๐’™๐‘‡๐‘ฏ๐’™ > 0

Therefore, ๐œทโˆ— = ๐‘ฟ๐‘‡๐‘ฟ โˆ’1๐‘ฟ๐‘‡๐’š is the minimum of ๐ฝ ๐œท .

Page 27: Quadratic form and functional optimization

Analysis of residuals

๐’šโˆ— = ๐‘ฟ๐œทโˆ— โ€ข Then, we substitute ๐œทโˆ— = ๐‘ฟ๐‘‡๐‘ฟ โˆ’1๐‘ฟ๐‘‡๐’š in the above,

๐’šโˆ— = ๐‘ฟ๐œทโˆ— = ๐‘ฟ ๐‘ฟ๐‘‡๐‘ฟ โˆ’1๐‘ฟ๐‘‡ ๐’š

โˆด ๐’šโˆ— = โ„‹๐’š (Hat matrix) โ€ข the vector of residuals ๐’“ can be expressed by follows:

๐’“ = ๐’š โˆ’ ๐’šโˆ— = ๐’š โˆ’โ„‹๐’š = ๐‘ฐ โˆ’โ„‹ ๐’š ๐‘‰๐‘‰๐‘‰ ๐’“ = ๐‘‰๐‘‰๐‘‰ ๐‘ฐ โˆ’โ„‹ ๐’š = ๐‘ฐ โˆ’โ„‹ ๐‘‰๐‘‰๐‘‰ ๐’š ๐‘ฐ โˆ’โ„‹ ๐‘‡

Page 28: Quadratic form and functional optimization

Analysis of residuals

โ„‹ = ๐‘ฟ ๐‘ฟ๐‘‡๐‘ฟ โˆ’1๐‘ฟ๐‘‡ The hat matrix โ„‹ is a projection matrix, which satisfies the following equations: 1. Projection: โ„‹2 = โ„‹

โ„‹2 = โ„‹ โˆ™โ„‹ = ๐‘ฟ ๐‘ฟ๐‘‡๐‘ฟ โˆ’1๐‘ฟ๐‘‡ โˆ™ ๐‘ฟ ๐‘ฟ๐‘‡๐‘ฟ โˆ’1๐‘ฟ๐‘‡ = ๐‘ฟ ๐‘ฟ๐‘‡๐‘ฟ โˆ’1 ๐‘ฟ๐‘‡๐‘ฟ ๐‘ฟ๐‘‡๐‘ฟ โˆ’1๐‘ฟ๐‘‡

= ๐‘ฟ ๐‘ฟ๐‘‡๐‘ฟ โˆ’1๐‘ฟ๐‘‡ = โ„‹

2. Orthogonal: โ„‹๐‘‡ = โ„‹
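Both hat-matrix properties are easy to confirm numerically. A sketch with made-up data:

```python
import numpy as np

rng = np.random.default_rng(3)
N, p = 10, 3
X = np.column_stack([rng.normal(size=(N, p)), np.ones(N)])

# Hat matrix: X (X^T X)^{-1} X^T, formed via a linear solve.
H = X @ np.linalg.solve(X.T @ X, X.T)

print(np.allclose(H @ H, H))   # projection: H^2 = H
print(np.allclose(H.T, H))     # symmetric: H^T = H

# The residuals (I - H) y are orthogonal to the fitted values H y.
y = rng.normal(size=N)
r = (np.eye(N) - H) @ y
print(abs(r @ (H @ y)) < 1e-9)
```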

Page 29: Quadratic form and functional optimization

Analysis of residuals

๐‘ฆ1โˆ—โ‹ฎ๐‘ฆ๐‘โˆ—

=๐‘ฅ11 โ‹ฏ ๐‘ฅ1๐‘โ‹ฎ โ‹ฑ โ‹ฎ๐‘ฅ๐‘1 โ‹ฏ ๐‘ฅ๐‘๐‘

1โ‹ฎ1

๐›ฝ1โˆ—

โ‹ฎ๐›ฝ๐‘

โˆ—

๐›ฝ0โˆ—

= ๐›ฝ1โˆ—๐‘ฅ11โ‹ฎ๐‘ฅ๐‘1

+ โ‹ฏ+ ๐›ฝ๐‘โˆ—

๐‘ฅ1๐‘โ‹ฎ

๐‘ฅ๐‘๐‘+ ๐›ฝ0

โˆ—1โ‹ฎ1

linear combination in ๐‘ + 1 -th vector space

๐’™1 ๐’™๐‘ ๐’™๐‘+1 = ๐Ÿ

Page 30: Quadratic form and functional optimization

Analysis of residuals

๐’™๐‘

๐’™๐‘—

๐’šโˆ—

๐’š

๐‘-th dimensional space

๐’šโˆ— = โ„‹๐’š (Projection)

๐‘ + 1 -th dimensional super surface

Page 31: Quadratic form and functional optimization

Analysis of residuals

๐’š = ๐‘ฟ๐œท โ€ข ๐œท = ๐‘ฟโˆ’1๐’š, where ๐‘ฟโˆ’1 is M-P generalized inverse.

1. Unique solution: ๐‘ = ๐‘ 2. Many solutions: ๐‘ > ๐‘ 3. No solution: ๐‘ < ๐‘

โ€ข ๐‘ฟโˆ’1 = ๏ฟฝ๐‘ฟโˆ’1

๐‘ฟ๐’™ ๐‘ฟ๐‘ฟ๐’™ โˆ’1 ๐œท = ๐‘ฟโˆ’1๐’š is min in ๐œท๐‘ฟ๐’™๐‘ฟ โˆ’1๐‘ฟ๐’™ ๐’š โˆ’ ๐‘ฟ๐œท 2 is min