TRANSCRIPT
A Numerical Approach toward Approximate Algebraic Computation
Zhonggang Zeng
Northeastern Illinois University, USA
Oct. 18, 2006, Institute of Mathematics and its Applications
What would happen
when we try numerical computation
on algebraic problems?
A numerical analyst got a surprise 50 years ago on a deceptively simple problem.
James H. Wilkinson (1919-1986)
Britain's Pilot ACE
Start of project: 1948
Completed: 1950
Add time: 1.8 microseconds
Input/output: cards
Memory size: 352 32-digit words
Memory type: delay lines
Technology: 800 vacuum tubes
Floor space: 12 square feet
Project leader: J. H. Wilkinson
The Wilkinson polynomial
$p(x) = (x-1)(x-2)\cdots(x-20) = x^{20} - 210\,x^{19} + 20615\,x^{18} + \cdots$
Wilkinson wrote in 1984:
Speaking for myself I regard it as the most traumatic experience in my career as a numerical analyst.
$p(x) \approx (x - 0.99651)\,(x - 2.3145)^2\,(x - 5.7222)^5\,(x - 11.98)^7\,(x - 18.379)^5$
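Wilkinson's experiment is easy to reproduce in double precision today; here is a minimal numpy sketch (perturbing the $x^{19}$ coefficient by $2^{-23}$, as in his account):

import numpy as np

# Coefficients of p(x) = (x-1)(x-2)...(x-20)
c = np.poly(np.arange(1, 21))

# Wilkinson's perturbation: change the x^19 coefficient (-210) by 2^-23
c_pert = c.copy()
c_pert[1] -= 2.0 ** -23

# Several roots move far into the complex plane despite the tiny perturbation
print(np.roots(c_pert))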
Matrix rank problem
Factoring a multivariate polynomial:
A factorable polynomial becomes irreducible under approximation.
Solving polynomial systems:
Example: A distorted cyclic four system:
Translation: there are two 1-dimensional solution sets:
$z_1 = t,\quad z_2 = \pm\frac{\sqrt{6}}{3t},\quad z_3 = -t,\quad z_4 = \mp\frac{\sqrt{6}}{3t}$
Distorted Cyclic Four system in floating point form:
The 1-dimensional solution sets become isolated solutions under approximation.
Tiny perturbation in data ($< 10^{-8}$) ⇒ huge error in solution ($> 10^{5}$)
What could happen in approximate algebraic computation?
• “traumatic” error
• dramatic deformation of solution structure
• complete loss of solutions
• miserable failure of classical algorithms
• Polynomial division
• Euclidean Algorithm
• Gaussian elimination
• determinants
• …
So, why bother with approximation in algebra?
1. You may have no choice (e.g. Abel's Impossibility Theorem). Either way, all subsequent computations become approximate.
So, why bother with approximate solutions?
1. You may have no choice
2. Approximate solutions are better!
Application: Image restoration (Pillai & Liang)

True image: $p(x,y)$. Two blurred images:
$f(x,y) = p(x,y)\,\eta(x,y) + \varepsilon(x,y)$
$g(x,y) = p(x,y)\,\mu(x,y) + \delta(x,y)$

Because of the noise terms $\varepsilon$ and $\delta$, $\mathrm{GCD}(f(x,y),\,g(x,y)) = 1$, so the exact GCD recovers nothing.

Restored image: $\mathrm{AGCD}(f(x,y),\,g(x,y)) = \tilde p(x,y) \approx p(x,y)$
The approximate solution is better than the exact solution!
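In this model, blurring is 2-D convolution, i.e. multiplication of the images' z-transform polynomials, which is why the true image appears as a common factor. A minimal sketch of the forward model only (random stand-in data; sizes and the 1e-8 noise level are illustrative):

import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
p = rng.random((32, 32))                            # stand-in for the true image
h1, h2 = rng.random((3, 3)), rng.random((3, 3))     # two distinct blur kernels

# Blurring = 2-D convolution = product of the z-transform polynomials,
# so p divides both noise-free blurred images
f = convolve2d(p, h1)
f += 1e-8 * rng.standard_normal(f.shape)
g = convolve2d(p, h2)
g += 1e-8 * rng.standard_normal(g.shape)

# With the noise terms, GCD(f, g) = 1 exactly; an AGCD within a tolerance
# around 1e-8 recovers p up to a scalar factor.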
Perturbed Cyclic 4

Exact solutions by Maple: 16 isolated (codim 4) solutions.

Approximate solutions by Bertini (courtesy of Bates, Hauenstein, Sommese, Wampler), or by an experimental approximate elimination combined with approximate GCD.

Approximate solutions are better than exact ones, arguably.
So, why bother with approximate solutions?
1. You may have no choice
2. Approximate solutions are better
3. Approximate solutions (usually) cost less
Example: JCF computation

A small upper-triangular test matrix with multiple eigenvalues r, s, t. Special case r = 2, s = 3, t = 5: Maple takes 2 hours. On a similar 8x8 matrix, Maple and Mathematica run out of memory.
Pioneering work in numerical algebraic computation (an incomplete list):
• Homotopy methods for solving polynomial systems (Li, Sommese, Wampler, Verschelde, …)
• Numerical Polynomial Algebra (Stetter)
• Numerical Algebraic Geometry (Sommese, Wampler, Verschelde, …)
What is an “approximate solution”?

To solve $x^2 - 2x + 1 = 0$ with 8-digit precision:

Exact computation: $x = 1$.
Approximate solution using 8-digit precision: $x = 0.9999,\ 1.0001$, i.e.
$(x - 0.9999)(x - 1.0001) = 0$, which is $(x-1)^2 - (10^{-4})^2 = 0$.

backward error: $10^{-8} = 0.00000001$ -- the method is good
forward error: $10^{-4} = 0.0001$ -- the problem is bad
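The double root makes the error magnification easy to verify numerically; a one-line check with the $10^{-8}$ backward error planted in the constant term:

import numpy as np

# x^2 - 2x + (1 - 1e-8): backward error 1e-8 in the constant coefficient
print(np.roots([1.0, -2.0, 1.0 - 1e-8]))   # ~[1.0001, 0.9999]: forward error 1e-4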
The condition number

[Forward error] < [Condition number] x [Backward error]

(The backward error comes from the numerical method; the condition number comes from the problem.)

A large condition number <=> the problem is sensitive, or ill-conditioned.
An infinite condition number <=> the problem is ill-posed.
Wilkinson’s Turing Award contribution:
Backward error analysis
• A numerical algorithm solves a “nearby” problem
• A “good” algorithm may still get a “bad” answer, if the problem is ill-conditioned (bad)
A well-posed problem (Hadamard, 1923): the solution satisfies
• existence
• uniqueness
• continuity w.r.t. data

Ill-posed problems are common in applications:
- image restoration
- deconvolution
- IVP for stiction damped oscillator
- inverse heat conduction
- some optimal control problems
- electromagnetic inverse scattering
- air-sea heat fluxes estimation
- the Cauchy problem for the Laplace equation
- …
An ill-posed problem is infinitely sensitive to perturbation
tiny perturbation ⇒ huge error
Ill-posed problems are common in algebraic computing:
- Multiple roots
- Polynomial GCD
- Factorization of multivariate polynomials
- The Jordan Canonical Form
- Multiplicity structure/zeros of polynomial systems
- Matrix rank
If the answer is highly sensitive to perturbations, you have probably asked the wrong question.
Maxims about numerical mathematics, computers, science and life, L. N. Trefethen. SIAM News
Does that mean:
(Most) algebraic problems are wrong problems?
A numerical algorithm seeks the exact solution of a nearby problem
Ill-posed problems are infinitely sensitive to data perturbation.

Conclusion: Numerical computation is incompatible with ill-posed problems.

Solution: Formulate the right problem.
[Diagram: the problem as a map $P:\ \text{Data} \to \text{Solution}$.]
Challenge in solving ill-posed problems:
Can we recover the lost solution when the problem is inexact?
Are ill-posed problems really sensitive to perturbations?

William Kahan: this is a misconception.

Kahan's discovery in 1972: ill-posed problems are sensitive to arbitrary perturbations, but insensitive to structure-preserving perturbations.
Why are ill-posed problems infinitely sensitive?

W. Kahan's observation (1972), from a plot of the pejorative manifolds of degree-3 polynomials with multiple roots:
• Problems with a certain solution structure form a “pejorative manifold”.
• The solution structure is lost when the problem leaves the manifold due to an arbitrary perturbation.
• The problem may not be sensitive at all if it stays on the manifold, unless it is near another pejorative manifold.
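Kahan's observation can be seen on a cubic with a double root; a small sketch contrasting an arbitrary perturbation (which leaves the manifold) with a structure-preserving one (which stays on it):

import numpy as np

eps = 1e-10
p = np.poly([1.0, 1.0, 2.0])   # (x-1)^2 (x-2): on the double-root manifold

# Arbitrary perturbation of a coefficient: leaves the manifold
off = p.copy()
off[-1] += eps
print(np.roots(off))           # double root splits by about sqrt(eps) = 1e-5

# Structure-preserving perturbation: move the double root itself
on = np.poly([1.0 + eps, 1.0 + eps, 2.0])
print(np.roots(on))            # roots move only by about eps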
Geometry of ill-posed algebraic problems

Rank-$r$ matrices: $M^{m\times n}_r = \{\, A \in \mathbb{C}^{m\times n} \mid \mathrm{rank}(A) = r \,\}$, with $\mathrm{codim}\,(M^{m\times n}_r) = (m-r)(n-r)$.

Polynomial pairs: $P^{m,n}_r = \{\, (p,q) \mid \deg(\mathrm{GCD}(p,q)) = r \,\}$, with $\mathrm{codim}\,(P^{m,n}_r) = r$.

These manifolds stratify the data space:
$P^{m,n}_n \subset P^{m,n}_{n-1} \subset \cdots \subset P^{m,n}_0$ and $M^{m\times n}_0 \subset M^{m\times n}_1 \subset \cdots \subset M^{m\times n}_n$

Similar manifold stratifications exist for problems like factorization, JCF, multiple roots, …
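The codimension formula for $M^{m\times n}_r$ follows from a dimension count (a standard fact, sketched here): a rank-$r$ matrix factors as $A = XY^{T}$ with $X \in \mathbb{C}^{m\times r}$ and $Y \in \mathbb{C}^{n\times r}$, a parameterization with an invertible $r \times r$ redundancy, so

$$\dim M^{m\times n}_r = mr + nr - r^2, \qquad \operatorname{codim} M^{m\times n}_r = mn - r(m+n-r) = (m-r)(n-r).$$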
Manifolds of 4x4 matrices defined by Jordan structures (Edelman, Elmroth and Kagstrom, 1997)
e.g. {2,1} {1} is the structure of 2 eigenvalues in 3 Jordan blocks of sizes 2, 1 and 1
Illustration of pejorative manifolds

[Figure: manifolds of codimension 0, 1, 2, and 3; perturbed problems A and B and the candidate manifolds near them.]

• The “nearest” manifold may not be the answer.
• The right manifold is of highest codimension within a certain distance.
A “three-strikes” principle for formulating an “approximate solution” to an ill-posed problem:
• Backward nearness: The approximate solution is the exact solution of a nearby problem
• Maximum codimension: The approximate solution is the exact solution of a problem on the nearby pejorative manifold of the highest codimension.
• Minimum distance: The approximate solution is the exact solution of the nearest problem on the nearby pejorative manifold of the highest codimension.
Finding the approximate solution is (likely) a well-posed problem.

The approximate solution is a generalization of the exact solution.
Formulation of the approximate rank/kernel: $\forall\,A \in \mathbb{C}^{m\times n}$ and $\forall\,\theta > 0$.

The approximate rank of $A$ within $\theta$:
$\mathrm{rank}_\theta(A) = \min_{\|A - B\| \le \theta} \mathrm{rank}(B)$

• Backward nearness: the app-rank of $A$ is the exact rank of a certain matrix $B$ within $\theta$.
• Maximum codimension: that matrix $B$ is on the pejorative manifold $\Pi$ possessing the highest codimension and intersecting the $\theta$-neighborhood of $A$.
• Minimum distance: that $B$ is the nearest matrix on the pejorative manifold $\Pi$, so the approximate kernel of $A$ within $\theta$ is
$\mathrm{Ker}_\theta(A) = \mathrm{Ker}(B)$ with $\|A - B\|_2 = \min_{\mathrm{rank}(C) = \mathrm{rank}_\theta(A)} \|A - C\|_2$

Continuity of the approximate solution:
• An exact rank is the app-rank within sufficiently small $\theta$.
• App-rank is continuous (or well-posed).
Example: a matrix $A$ has rank 4 and nullity 2, with a known kernel basis. The perturbed matrix $A + \epsilon E$ has exact rank 6 and nullity 0. After reformulating the rank: for $0 \le \epsilon < \theta$ within a suitable range, $\mathrm{rank}_\theta(A + \epsilon E) = 4$ and $\mathrm{nullity}_\theta = 2$, and $\mathrm{dist}\big(\mathrm{Ker}_\theta(A + \epsilon E),\ \mathrm{Ker}(A)\big)$ remains of order $\epsilon$.
Ill-posedness is removed successfully. The app-rank/kernel can be computed by the SVD and other rank-revealing algorithms (e.g. Li-Zeng, SIMAX, 2005).
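A minimal sketch of the SVD route (by the Eckart-Young theorem, truncating the singular values not exceeding θ gives the nearest matrix of the approximate rank; the function name is illustrative):

import numpy as np

def approx_rank_kernel(A, theta):
    # rank_theta(A) = number of singular values > theta, since the nearest
    # matrix B with ||A - B||_2 <= theta truncates the trailing ones
    U, s, Vh = np.linalg.svd(A)
    r = int(np.sum(s > theta))
    kernel = Vh[r:].conj().T    # columns span Ker(B), the approximate kernel
    return r, kernel

For the example above, approx_rank_kernel(A + eps*E, theta) would return rank 4 and a kernel basis close to Ker(A).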
Formulation of the approximate GCD: $\forall\,(f,g) \in \mathbb{C}[x_1,\ldots,x_l]$, $\forall\,\varepsilon > 0$, with $\deg(f) = m$, $\deg(g) = n$.

Pejorative manifolds of polynomial pairs:
$P^{m,n}_j \equiv \{\, (p,q) \in \mathbb{C}[x_1,\ldots,x_l] \mid \deg(p) = m,\ \deg(q) = n,\ \deg(\mathrm{GCD}(p,q)) = j \,\}$

Distance to each manifold:
$\theta_j(f,g) = \inf\{\, \|(f,g) - (p,q)\| \mid (p,q) \in P^{m,n}_j \,\}$

Maximum codimension: choose $k$ with $\mathrm{codim}(P^{m,n}_k) = \max\{\, \mathrm{codim}(P^{m,n}_j) \mid \theta_j(f,g) < \varepsilon \,\}$.

Minimum distance: take $(p,q) \in P^{m,n}_k$ with $\|(f,g) - (p,q)\| = \min_{(u,v) \in P^{m,n}_k} \|(f,g) - (u,v)\| = \theta_k(f,g)$.

The AGCD within $\varepsilon$: $\mathrm{AGCD}_\varepsilon(f,g) = \mathrm{EGCD}(p,q)$.

[Illustration: $(f,g)$ near the manifolds $P^{m,n}_{k-1}$, $P^{m,n}_k$, $P^{m,n}_{k+1}$, with $(p,q)$ the nearest point on $P^{m,n}_k$.]
• Finding the AGCD is well-posed if $\theta_k(f,g)$ is sufficiently small.
• The EGCD is a special case of the AGCD for sufficiently small $\varepsilon$.
(Z. Zeng, Approximate GCD of inexact polynomials, parts I & II)
Similar formulations strike out ill-posedness in problems such as
• Approximate rank/kernel (Li, Zeng 2005; Lee, Li, Zeng 2006)
• Approximate multiple roots/factorization (Zeng 2005)
• Approximate GCD (Zeng-Dayton 2004; Gao-Kaltofen-May-Yang-Zhi 2004)
• Approximate Jordan Canonical Form (Zeng-Li 2006)
• Approximate irreducible factorization (Sommese-Wampler-Verschelde 2003; Gao et al 2003, 2004, in progress)
• Approximate dual basis and multiplicity structure (Dayton-Zeng 2005; Bates-Peterson-Sommese 2006)
• Approximate elimination ideal (in progress)
The two-staged algorithm, after formulating the approximate solution to problem P within ε:

Stage I: Find the pejorative manifold Π of the highest codimension such that $\mathrm{dist}(P, \Pi) < \varepsilon$.

Stage II: Find/solve the problem Q such that $\|P - Q\| = \min_{R \in \Pi} \|P - R\|$.

The exact solution of Q is the approximate solution of P within ε, which approximates the solution of S, the problem from which P is perturbed.
Case study: univariate approximate GCD: $\forall\,(f,g) \in \mathbb{C}[x]$, $\forall\,\varepsilon > 0$, $\deg(f) = m$, $\deg(g) = n$.

Stage I: Find the pejorative manifold:
$\mathrm{dist}\big((f,g),\ P^{m,n}_k\big) < \varepsilon \;\Rightarrow\; \sigma_{\min}\big(S_k(f,g)\big) \le \varepsilon\sqrt{m+n}$
where $S_k(f,g)$ is the matrix of the map $(w,v) \mapsto f\cdot w - g\cdot v$ with $\deg(v) \le m-k$, $\deg(w) \le n-k$
(since $f \approx u\cdot v$ and $g \approx u\cdot w$ imply $f\cdot w - g\cdot v \approx 0$).

Stage II: Solve the (overdetermined) quadratic system $F(u,v,w) = b(f,g)$:
$\varphi(u) = 1,\qquad u\cdot v = f,\qquad u\cdot w = g$
for a least squares solution $(u,v,w)$ by the Gauss-Newton iteration.

(Key theorem: the Jacobian of $F(u,v,w)$ is injective.)
Univariate AGCD algorithm:

Start with k = n. Is an AGCD of degree k possible? (max-codimension test)
• If no: set k := k - 1 and test again.
• If probably: refine with the Gauss-Newton iteration (min-distance / backward nearness).
  - If successful: output the GCD.
  - If not: set k := k - 1 and continue.
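A compact sketch of the Stage II refinement for a fixed candidate degree k, using scipy's least-squares machinery in place of a hand-rolled Gauss-Newton loop (real coefficient vectors assumed; the scaling equation φ(u) = 1 is realized here as ||u||² = 1, one possible choice; names are illustrative):

import numpy as np
from scipy.optimize import least_squares

def agcd_refine(f, g, k, u0, v0, w0):
    # f, g: coefficient vectors; u has degree k, v ~ f/u, w ~ g/u
    nu, nv = k + 1, len(f) - k

    def residual(z):
        u, v, w = z[:nu], z[nu:nu + nv], z[nu + nv:]
        return np.concatenate((
            [u @ u - 1.0],              # scaling equation on u
            np.convolve(u, v) - f,      # u*v = f (product = convolution)
            np.convolve(u, w) - g))     # u*w = g

    z = least_squares(residual, np.concatenate((u0, v0, w0))).x
    return z[:nu], z[nu:nu + nv], z[nu + nv:]

The initial guess (u0, v0, w0) would come from Stage I, e.g. from the singular vector of S_k(f, g) associated with the smallest singular value.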
Case study: multivariate approximate GCD: $\forall\,(f,g) \in \mathbb{C}[x_1,\ldots,x_l]$, $\forall\,\varepsilon > 0$, $\deg(f) = m$, $\deg(g) = n$.

Stage I: Find the max-codimension pejorative manifold by applying the univariate AGCD algorithm to each variable $x_j$:
$f \approx u\cdot v,\ g \approx u\cdot w \;\Rightarrow\; f(\ldots,x_j,\ldots) = u(\ldots,x_j,\ldots)\,v(\ldots,x_j,\ldots)$ and $g(\ldots,x_j,\ldots) = u(\ldots,x_j,\ldots)\,w(\ldots,x_j,\ldots)$

Stage II: Solve the (overdetermined) quadratic system $F(u,v,w) = b(f,g)$:
$\varphi(u) = 1,\qquad u\cdot v = f,\qquad u\cdot w = g$
for a least squares solution $(u,v,w)$ by the Gauss-Newton iteration.

(Key theorem: the Jacobian of $F(u,v,w)$ is injective.)
Case study: univariate factorization: $\forall\,f \in \mathbb{C}[x]$, $\forall\,\varepsilon > 0$, $\deg(f) = n$.

Stage I: Find the max-codimension pejorative manifold by applying the univariate AGCD algorithm to $(f, f')$:
$f(x) \approx (x - z_1)^{m_1} \cdots (x - z_k)^{m_k}$
$\Rightarrow\; f'(x) \approx (x - z_1)^{m_1 - 1} \cdots (x - z_k)^{m_k - 1}\, q(x)$
$\Rightarrow\; \mathrm{AGCD}(f, f') \approx (x - z_1)^{m_1 - 1} \cdots (x - z_k)^{m_k - 1}$

Stage II: Solve the (overdetermined) polynomial system $F(z_1,\ldots,z_k) = f$, i.e.
$(\,\cdot - z_1)^{m_1} \cdots (\,\cdot - z_k)^{m_k} = f(\,\cdot\,)$ (in the form of coefficient vectors),
for a least squares solution $(z_1,\ldots,z_k)$ by the Gauss-Newton iteration.

(Key theorem: the Jacobian is injective.)
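A sketch of Stage II under stated assumptions (monic f, real roots, multiplicities m_j fixed by Stage I; helper names are illustrative):

import numpy as np
from scipy.optimize import least_squares

def coeffs(z, mults):
    # coefficient vector of prod_j (x - z_j)^{m_j}
    c = np.array([1.0])
    for zj, mj in zip(z, mults):
        for _ in range(mj):
            c = np.convolve(c, [1.0, -zj])
    return c

def factor_refine(f, z0, mults):
    # least squares fit of the root values, with the multiplicity
    # structure (the pejorative manifold) held fixed
    return least_squares(lambda z: coeffs(z, mults) - f, z0).x

For example, with f = np.poly([1, 1, 1, 2, 2]), mults = [3, 2] and z0 = [1.02, 1.98], the iteration converges to the multiple roots 1 and 2.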
Case study: finding the nearest matrix with a Jordan structure

$J = \begin{bmatrix} \lambda & 1 & & \\ & \lambda & 1 & \\ & & \lambda & \\ & & & \lambda \end{bmatrix}$

Segre characteristic = [3,1]; Weyr characteristic = [2,1,1]; Ferrer's diagram: rows 3, 1 (columns 2, 1, 1); codim = -1 + 3 + 3(1) = 5.

$A \sim J$ means $A\,[\vec u_1, \vec u_2, \vec u_3, \vec u_4] = [\vec u_1, \vec u_2, \vec u_3, \vec u_4]\,(\lambda I + S)$, where $\lambda I + S$ is upper triangular with diagonal entries $\lambda$ and entries $s_{13}, s_{23}, s_{14}, s_{24}, s_{34}$ above the diagonal, and the columns are orthonormal: $\vec u_i \cdot \vec u_j = \delta_{ij}$.

Equations determining the manifold:
$A\,[\vec u_1, \ldots, \vec u_4] - [\vec u_1, \ldots, \vec u_4]\,(\lambda I + S) = 0$
$[\vec u_1, \ldots, \vec u_4]^T [\vec u_1, \ldots, \vec u_4] - I = 0, \qquad \vec b^T \vec u_1 = 0$

in short, $F(A;\, \lambda, \vec u_1, \ldots, \vec u_4, s_{13}, s_{14}, s_{23}, s_{24}, s_{34}) = 0$, or $F(A, \lambda, U, S) = 0$.
Case study: finding the nearest matrix with a Jordan structure (continued)

The equations determining the manifold are as above: $F(B, \lambda, U, S) = 0$ with $U = [\vec u_1, \ldots, \vec u_4]$.

For a matrix $B$ not on the manifold, we can still solve $F(B, \lambda, U, S) = 0$ for a least squares solution $(\hat\lambda, \hat U, \hat S)$:
$\|F(B, \hat\lambda, \hat U, \hat S)\|_2^2 = \min_{\lambda, U, S} \|F(B, \lambda, U, S)\|_2^2$

When $\|BU - U(\lambda I + S)\|^2$ is minimized, so is $\|B - U(\lambda I + S)U^T\|^2 = \|B - A\|^2$, so $A = U(\lambda I + S)U^T$ is the nearest matrix with the prescribed Jordan structure.

The crucial requirement: the Jacobian $J(A; \cdot, \cdot, \cdot)$ of $F(A; \cdot, \cdot, \cdot)$ is injective. (Zeng & Li, 2006)
Solving $G(z) = a$

[Figure: the pejorative manifold $u = G(z)$; the initial iterate $u_0 = G(z_0)$; the tangent plane $P_0:\ u = G(z_0) + J(z_0)(z - z_0)$; the projection of $a$ onto the tangent plane, $\tilde u_1 = G(z_0) + J(z_0)(z_1 - z_0)$; the new iterate $u_1 = G(z_1)$; and the least squares solution $u^* = G(z^*)$ nearest to $a$.]

Solve $G(z) = a$ for a nonlinear least squares solution $z = z^*$: at each step, solve
$G(z_0) + J(z_0)(z - z_0) = a$, i.e. $J(z_0)(z - z_0) = -[G(z_0) - a]$,
for a linear least squares solution $z = z_1 = z_0 - J(z_0)^{+}\,[G(z_0) - a]$.
Stage I: Find the nearby max-codim manifold.

Stage II: Find/solve the nearest problem on the manifold via solving an overdetermined system $G(z) = a$ for a least squares solution $z^*$ s.t. $\|G(z^*) - a\| = \min_z \|G(z) - a\|$, by the Gauss-Newton iteration
$z_{k+1} = z_k - J(z_k)^{+}\,[G(z_k) - a], \qquad k = 0, 1, 2, \ldots$

Key requirement: the Jacobian $J(z^*)$ of $G(z)$ at $z^*$ is injective (i.e. the pseudo-inverse exists), and
$\|z - \hat z\| \le \|J(\hat z)^{+}\|\,\|G(z) - G(\hat z)\| + \text{h.o.t.}$
where $\|J(\hat z)^{+}\|$ serves as the condition number (sensitivity measure).
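The iteration in plain numpy terms, a minimal sketch (each step solves the linear least squares problem on the tangent plane; G and J are supplied by the application, as in the case studies above):

import numpy as np

def gauss_newton(G, J, a, z0, steps=20):
    # z_{k+1} = z_k - J(z_k)^+ [G(z_k) - a]; requires J(z*) injective,
    # so that the pseudo-inverse is well defined near the solution
    z = np.asarray(z0, dtype=float)
    for _ in range(steps):
        step, *_ = np.linalg.lstsq(J(z), G(z) - a, rcond=None)
        z = z - step
    return z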
Summary:
• An (ill-posed) algebraic problem can be formulated using the three-strikes principle (backward nearness, maximum codimension, and minimum distance) to remove the ill-posedness.
• The re-formulated problem can be solved by numerical computation in two stages (finding the manifold, then solving a least squares problem).
• The combined numerical approach leads to the Matlab/Maple toolbox ApaTools for approximate polynomial algebra. The toolbox consists of
univariate/multivariate GCD
matrix rank/kernel
dual basis for a polynomial ideal
univariate factorization
irreducible factorization
elimination ideal
… …
(to be continued in the workshop next week)