TRANSCRIPT
An Introduction to Modern Symbolic-Numeric Computation
SYNASC 2013 Tutorial
Stephen M. Watt
Department of Computer Science, University of Western Ontario
London, Canada N6A 5B7
25 September 2013
Introduction and Basic Notions · Approximate GCD · Application: Image Processing · Other Problems · Conclusion
Symbolic-Numeric Computing · Early Approaches Using Floating Point · Modern History · Condition, Forward and Backward Error · Approximate Polynomials
Outline
1 Introduction and Basic Notions
2 Approximate GCD
3 Application: Image Processing
4 Other Problems
5 Conclusion
S.M. Watt Intro to SNC
Introduction and Basic NotionsApproximate GCD
Application: Image ProcessingOther Problems
Conclusion
Symbolic-Numeric ComputingEarly Approaches Using Floating PointModern HistoryCondition, Forward and Backward ErrorApproximate Polynomials
Symbolic-Numeric Computing
Symbolic computing and intermediate expression swell. Growth of exact coefficients.
Speed motivation ⇒ “Hybrid computation.”
Early days: naïve use of floating point in algebraic objects.
Later: use mathematical structure of the problem space.
Now: well-developed field.
Concentrate on symbolic-numeric algorithms for polynomials.
Usual algorithms of exact symbolic computation break down. New algorithms needed.
Early Work
Use floating point numbers in place of rational arithmetic and “let ’er rip”.
Use floating point numbers in place of rational arithmetic with fuzzy zero test.
Pretend floating point numbers form a field.
Real numbers as infinite precision, e.g. quasi-GCD.
Example: The Euclidean Algorithm
Compute the greatest common divisor.
Uses the fact that if a = qb + r then gcd(a, b) | r.
gcd := proc(a, b)
if b = 0 then
a
else
gcd(b, a rem b)
fi
end:
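The Maple procedure above translates directly into other languages. A sketch in Python (illustrative, not from the tutorial), here applied to integers:

```python
def gcd(a, b):
    """Euclidean algorithm: gcd(a, b) = gcd(b, a mod b), since a = qb + r."""
    if b == 0:
        return a
    return gcd(b, a % b)

print(gcd(462, 1071))  # 21
```

For exact integers (or rationals), the remainder eventually hits zero exactly, so the recursion always terminates.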
Example: The Euclidean Algorithm
If a and b are polynomials with floating point coefficients, then this might not terminate.
gcd := proc(a, b)
if b = 0 then
a
else
gcd(b, a rem b)
fi
end:
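To see why, here is a sketch of naive polynomial long division in floating point (hypothetical helpers `polymul` and `polyrem`, not code from the tutorial). Once coefficients pass through floating point, dividing a polynomial by one of its own factors need not give the zero polynomial as remainder, so the exact `b = 0` test may never fire:

```python
def polymul(a, b):
    """Multiply polynomials given as coefficient lists, highest degree first."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def polyrem(a, b):
    """Remainder of a divided by b: naive long division in floating point."""
    a = list(a)
    while len(a) >= len(b):
        c = a[0] / b[0]
        for i in range(1, len(b)):
            a[i] -= c * b[i]
        a.pop(0)  # the leading coefficient is eliminated by construction
    return a

# p is built as an exact product, but its float coefficients carry rounding,
# so even dividing p by its own factor can leave a tiny nonzero remainder.
f = [1.0, -0.1]
p = polymul(f, polymul([1.0, -0.2], [1.0, -0.3]))
r = polyrem(p, f)
print(r)  # typically tiny, but not necessarily the zero polynomial
```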
Example: The Euclidean Algorithm
Next attempt: stop when remainder is “essentially zero”.
gcd := proc(a, b)
if degree(b) = 0 and abs(b) < .00001 then
a
else
gcd(b, a rem b)
fi
end:
What on Earth is the meaning of the result?
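One way to see the trouble (an illustrative sketch, not from the tutorial): the fixed absolute threshold .00001 makes the outcome depend on the scaling of the inputs. Rescaling p and q rescales every remainder, so the same problem can give structurally different “gcd” results:

```python
TOL = 1e-5  # the arbitrary threshold from the procedure above

def is_essentially_zero(r):
    """Fuzzy zero test on a degree-0 remainder (i.e. a number)."""
    return abs(r) < TOL

r = 3.2e-3                             # a remainder arising in the recursion
print(is_essentially_zero(r))          # False: recursion continues
print(is_essentially_zero(r * 1e-3))   # True: rescaled inputs stop "early"
```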
Floating Point as a Pretend Field
Axiom — a computer algebra system with parametric polymorphism. Type categories follow modern algebra.
UnivPoly(R: Ring, x: Variable): Algebra(R) with {
if R has Field then Euclidean Domain;
...
} == ...
DoubleFloat: Field with {
abs: % -> %;
...
}
p : UnivPoly(DoubleFloat, ’x’) := ...
Is Floating Point the Problem?
Q: So aren’t all these problems due to the rounding error of floating point numbers?
A: No. We just have to think about what we intend. Approximation ≠ floating point.
Modern History
Influences: polynomial root finding, matrix algorithms
The beginnings: Schönhage, Stetter, Pan, Sasaki/Noda
Start of personal involvement: CGTW. Import ideas from numerical analysis: conditioning, backward error, SVD.
Conferences: SNAP 1996, SNC 2005, SNC 2007, SNC 2009, SNC 2011, SNC 2014
Journal Issues: JSC 1998, TCS 2004, TCS 2008, TCS 2011
Proceedings: SNC, several others
Books: Bailey/Borwein, Bini/Pan, Demmel, Stetter
“Condition” of a Problem
Suppose we have a problem F with exact input x and exact output F(x).
The maximum relative change in the output versus the change in the input is the “relative condition number” of the problem:
\[
\lim_{\varepsilon \to 0^{+}} \; \sup_{\|\Delta x\| < \varepsilon}
\left[ \frac{\|F(x+\Delta x) - F(x)\|}{\|F(x)\|} \middle/ \frac{\|\Delta x\|}{\|x\|} \right]
\]
The condition is about the sensitivity to changes of input. It has nothing to do with the way the result is computed or approximation errors along the way.
We may separately talk about the “stability” of an algorithm to compute F, which addresses the susceptibility of an algorithm to round-off error, etc.
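For a differentiable scalar F, the bracketed expression tends to |x F′(x)| / |F(x)|, which a finite difference can estimate. A sketch (illustrative, not from the tutorial):

```python
import math

def rel_cond(f, x, h=1e-7):
    """Finite-difference estimate of the relative condition |x f'(x) / f(x)|."""
    dfdx = (f(x + h) - f(x - h)) / (2 * h)
    return abs(x * dfdx / f(x))

print(rel_cond(math.sqrt, 4.0))               # ~0.5: sqrt halves relative error
print(rel_cond(lambda x: x - 1.0, 1.000001))  # ~1e6: subtraction near cancellation
```

The second call shows a badly conditioned problem: the difficulty lies in the problem itself, not in any particular algorithm for it.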
Forward vs Backward Error
Given input x to problem F it might not be interesting, convenient or even possible to compute F(x).
We may however wish to compute a value y + ∆y that is near the exact answer y = F(x). In this case, we call ∆y a “forward error”.
Or we may compute a value y∗ = F(x + ∆x), which is the exact answer to a nearby problem. In this case, we call ∆x a “backward error”.
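For example (a sketch, not from the tutorial): a computed square root ŷ of x is the exact square root of the nearby input ŷ², so ŷ² − x is a backward error:

```python
import math

x = 2.0
y_hat = math.sqrt(x)      # the computed answer
dx = y_hat * y_hat - x    # y_hat is the EXACT square root of x + dx
print(dx)                 # backward error: at most a few ulps of x
```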
Forward vs Backward Error
Early theoretical work examined solving problems within a given forward error bound.
Currently prefer to solve problems within a given backward error bound.
Concept makes sense if nearby problems are indistinguishable.
Can ask questions like “What is the best answer within the backward error tolerance?”
Ring Operations
Suppose p, q ∈ k[x], k a field with norm || · ||, deg p = n, deg q = m.
Normalize so leading coefficients are 1.
The ring operations +, × are well conditioned under perturbation.
Perturbing the input pair (p, q) perturbs the sum or product by a similar amount.
Straightforward algorithms require no comparisons or equality tests, so are well defined.
Forward or backward error is straightforward.
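A quick numerical check (illustrative, not from the tutorial): perturbing one factor of a product perturbs the product coefficients by a comparable amount.

```python
def polymul(a, b):
    """Multiply polynomials given as coefficient lists, highest degree first."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

p = [1.0, -2.0, 1.0]      # (x - 1)^2
q = [1.0, 3.0]            # x + 3
dp = [1e-8, -1e-8, 1e-8]  # small perturbation of p

exact = polymul(p, q)
pert = polymul([c + d for c, d in zip(p, dp)], q)
diff = max(abs(u - v) for u, v in zip(exact, pert))
print(diff)  # comparable to the input perturbation, order 1e-8
```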
The Question · Quasi-GCD · Matrix Formulation · SVD · Improvements
Outline
1 Introduction and Basic Notions
2 Approximate GCD
3 Application: Image Processing
4 Other Problems
5 Conclusion
Approximate GCD
Suppose p, q ∈ k[x], k a field with norm || · ||, deg p = n, deg q = m.
Normalize so leading coefficients are 1.
Perturbing the input pair typically destroys any gcd. Why?
The dimension of the problem space is n + m.
The dimension of the subspace with some gcd(p, q) = g is n + m − deg g, which is lower than that of the whole space if deg g > 0.
Therefore a small perturbation of a pair with a gcd will in general have no non-trivial gcd.
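One concrete way to see this (a sketch, not from the tutorial): the resultant of p and q, the determinant of their Sylvester matrix, is zero exactly when they share a factor, and a tiny perturbation makes it nonzero. Computed here with exact rational arithmetic so no rounding intrudes:

```python
from fractions import Fraction

def sylvester(p, q):
    """Sylvester matrix of p (deg n) and q (deg m), coefficients highest-first."""
    n, m = len(p) - 1, len(q) - 1
    N = n + m
    rows = []
    for i in range(m):  # m shifted copies of p
        rows.append([Fraction(0)] * i + [Fraction(c) for c in p]
                    + [Fraction(0)] * (N - n - i - 1))
    for i in range(n):  # n shifted copies of q
        rows.append([Fraction(0)] * i + [Fraction(c) for c in q]
                    + [Fraction(0)] * (N - m - i - 1))
    return rows

def det(M):
    """Exact determinant by Gaussian elimination over the rationals."""
    M = [row[:] for row in M]
    n = len(M)
    d = Fraction(1)
    for i in range(n):
        piv = next((r for r in range(i, n) if M[r][i] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != i:
            M[i], M[piv] = M[piv], M[i]
            d = -d
        d *= M[i][i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n):
                M[r][c] -= f * M[i][c]
    return d

p = [1, -3, 2]  # (x - 1)(x - 2)
q = [1, -4, 3]  # (x - 1)(x - 3): shares the factor (x - 1) with p
print(det(sylvester(p, q)))        # 0: resultant vanishes, common factor exists

p2 = [1, -3, Fraction(2) + Fraction(1, 10**9)]  # tiny rational perturbation
print(det(sylvester(p2, q)) != 0)  # True: the common factor is destroyed
```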
Possible Directions
Q1: Given a pair of polynomials (p, q) near a pair with a non-trivial gcd (p + ∆p, q + ∆q), can we define a concept that is “almost like a gcd” for exactly the pair (p, q)?
Q2: Given a pair of polynomials (p, q) near a pair (p + ∆p, q + ∆q) with a non-trivial gcd, can we find p + ∆p and q + ∆q and their gcd?
Q3: Given a pair of polynomials (p, q) and an error tolerance ε, does there exist a pair (p + ∆p, q + ∆q) with ||∆p||, ||∆q|| < ε and a non-trivial gcd? If so, find one.
Possible Directions
Q4: Given a pair of polynomials (p, q) and an error tolerance ε, does there exist a pair (p + ∆p, q + ∆q) with ||∆p||, ||∆q|| < ε and a non-trivial gcd? If so, find one with minimal max(||∆p||, ||∆q||).
Q5: Given a pair of polynomials (p, q) and an error tolerance ε, does there exist a pair (p + ∆p, q + ∆q) with ||∆p||, ||∆q|| < ε and a non-trivial gcd? If so, find a pair within this tolerance with the gcd of maximum degree.
Q1. Early Work: Quasi GCD. Forward approach.
Quasi-GCD Computations, Arnold Schönhage, J. Complexity 1985.
f, g ∈ k[x], k = R or C, plus technical conditions.
Find h such that |hf1 − f| < ε and |hg1 − g| < ε, and any exact divisor of f and of g approximately divides h.
Refers to infinitely precise divisors of f and g.
Is This What We Want?
Well defined, but is it useful in practice?
We may not know the inputs precisely.
Even if we can compute them precisely, it may cost a lot.
We might not care whether we solve a nearby problem instead.
It is interesting and useful to know the structure of the space of problems and to exploit it. GCD is “ill posed” in the sense that arbitrarily small changes in the input can change the result completely.
Pretending coefficients are known precisely leads to extra computation and fragile results.
So, “no”, this is not what we want.
Q2-Q5. Backward Error Approach. Matrices. SVD.
Work on polynomial algorithms moves between various explicit and matrix representations; see e.g. Bini and Pan, Polynomial and Matrix Computations, Birkhäuser 1994.
We cast the GCD as a matrix problem, bringing various tools to bear, in particular the singular value decomposition.
The Singular Value Decomposition for Polynomial Systems, Corless, Gianni, Trager, SMW, ISSAC 1995.
Other approaches before and after: Noda and Sasaki, Pan, Karkanias and Mitrouli, Hribernig and Stetter.
The contribution here is a direct geometric interpretation and a meaningful backward error analysis.
Polynomial-Vector Identification
Associate the vector p∗ = [p_n, p_{n−1}, . . . , p_0]^T with the polynomial p(x) = p_n x^n + p_{n−1} x^{n−1} + · · · + p_0.
Notation for the vector of powers of x: x_n = [x^n, x^{n−1}, . . . , 1]^T, so that p(x) = p∗ · x_n.
Cauchy Matrix for Multiplication
Write the Cauchy (convolution) matrix with k columns:

C_k(p) =
[ p_n                              ]
[ p_{n−1}   p_n                    ]
[    ⋮      p_{n−1}   ⋱            ]
[ p_0          ⋮       ⋱  p_n      ]
[            p_0       ⋱  p_{n−1}  ]
[                      ⋱     ⋮     ]
[                          p_0     ]

i.e. the (n + k) × k matrix whose j-th column holds the coefficients p_n, . . . , p_0 shifted down by j − 1 rows, with zeros elsewhere.

Then a(x) = p(x) · q(x) ⇔ a∗ = C_{m+1}(p) q∗, where m = deg q.
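The correspondence between polynomial multiplication and this matrix-vector product is easy to check numerically. A minimal sketch with numpy, coefficients stored highest-degree first as above (the helper name `cauchy_matrix` is ours, not from the talk):

```python
import numpy as np

def cauchy_matrix(p, k):
    """k-column convolution (Cauchy) matrix of p, coefficients highest-first.
    Column j holds p shifted down by j rows; shape is (deg p + k, k)."""
    n = len(p) - 1               # degree of p
    C = np.zeros((n + k, k))
    for j in range(k):
        C[j:j + n + 1, j] = p
    return C

p = np.array([3.0, 2.0, 1.0, -1.0])   # p(x) = 3x^3 + 2x^2 + x - 1
q = np.array([1.0, -2.0, 5.0])        # q(x) = x^2 - 2x + 5, so m = 2
a = cauchy_matrix(p, len(q)) @ q      # C_{m+1}(p) q* = coefficients of p*q
# same coefficients as direct polynomial multiplication
assert np.allclose(a, np.convolve(p, q))
```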
The Sylvester Matrix
Let the degrees of p and q be n and m respectively.
The Sylvester matrix is

S(p, q) = [ C_m(p)^T ]
          [ C_n(q)^T ]
The Sylvester Matrix
E.g. p(x) = 3x^3 + 2x^2 + x − 1, q(x) = x^2 − 2x + 5.

S(p, q) =
[ 3  2  1 −1  0 ]
[ 0  3  2  1 −1 ]
[ 1 −2  5  0  0 ]
[ 0  1 −2  5  0 ]
[ 0  0  1 −2  5 ]
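A sketch of how S(p, q) can be assembled by stacking shifted coefficient rows, checked against this example (the helper name `sylvester` is ours):

```python
import numpy as np

def sylvester(p, q):
    """Sylvester matrix of p (degree n) and q (degree m), coefficients
    highest-first: m rows of shifted p over n rows of shifted q."""
    n, m = len(p) - 1, len(q) - 1
    S = np.zeros((m + n, m + n))
    for i in range(m):
        S[i, i:i + n + 1] = p
    for i in range(n):
        S[m + i, i:i + m + 1] = q
    return S

p = np.array([3.0, 2.0, 1.0, -1.0])   # 3x^3 + 2x^2 + x - 1
q = np.array([1.0, -2.0, 5.0])        # x^2 - 2x + 5
S = sylvester(p, q)                   # reproduces the 5x5 matrix above
```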
The Sylvester Matrix
Linear combinations of rows of S correspond to polynomial combinations of p(x) and q(x).
Let a(x) = a_{m−1} x^{m−1} + a_{m−2} x^{m−2} + · · · + a_0 and b(x) = b_{n−1} x^{n−1} + b_{n−2} x^{n−2} + · · · + b_0.
Then [a∗ | b∗]^T S(p, q) x_{m+n−1} = a(x) p(x) + b(x) q(x).
The Sylvester Matrix and GCD
If we can find the linear combination of the rows of S that gives a non-zero row with the most leading zeros, then we will have found the coefficients of the GCD of p and q.
This is just the last row of the row echelon form of S.
Therefore deg gcd(p, q) = n + m − rank S(p, q).
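On an exact example the rank formula checks out. Here p and q are our own choice, with the Sylvester matrix written out by hand:

```python
import numpy as np

# p = (x-1)(x-2) = x^2 - 3x + 2 and q = (x-1)(x-3) = x^2 - 4x + 3
# share exactly one root, so gcd = x - 1 has degree 1
S = np.array([[1.0, -3.0, 2.0, 0.0],
              [0.0, 1.0, -3.0, 2.0],
              [1.0, -4.0, 3.0, 0.0],
              [0.0, 1.0, -4.0, 3.0]])
n = m = 2
deg_gcd = n + m - np.linalg.matrix_rank(S)
assert deg_gcd == 1
```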
Example
Consider p(x) = x^2 + 1.99x + 1 and q(x) = x + 1.
gcd(p, q) = 1, but the pair is close to a pair with gcd = q.
The Sylvester matrix is

S(p, q) =
[ 1.00  1.99  1.00 ]
[ 1.00  1.00  0.00 ]
[ 0.00  1.00  1.00 ]

We wish to discover that this matrix is “close” to one of rank 2 and to read the gcd from the last non-zero row of the row echelon form of that matrix.
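The near rank deficiency is visible in the singular values; a quick numerical check (only the sizes of the values matter, so none are asserted exactly):

```python
import numpy as np

S = np.array([[1.00, 1.99, 1.00],
              [1.00, 1.00, 0.00],
              [0.00, 1.00, 1.00]])
sigma = np.linalg.svd(S, compute_uv=False)
# sigma[2] is tiny (the product of all three equals |det S| = 0.01),
# so S lies within 2-norm distance sigma[2] of a rank-2 matrix
```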
Matrix Norms
To talk about “nearby” objects, we recall various norms.
The Euclidean norm of a vector v is ||v||_2 = √(v · v).
The Euclidean norm of a matrix A is ||A||_2 = max_{v ≠ 0} ||Av||_2 / ||v||_2.
We drop the subscript 2, as we restrict attention to these norms.
Singular Values
The matrix A will deform the sphere {v s.t. ||v|| = 1} to an ellipsoid.
An equivalent formulation for the Euclidean norm of a matrix is ||A|| = max_{||v||=1} ||Av||, i.e. the length of the semi-major axis of the deformed sphere.
The lengths of the full set of semi-axes of the ellipsoidal image are called the “singular values” of A, and are usually written σ_1 ≥ σ_2 ≥ · · · ≥ σ_n.
Singular Value Decomposition
We can factor A = U Σ V^T where U and V are orthogonal and Σ = diag(σ_1, σ_2, . . . , σ_n).
This is the “Singular Value Decomposition” of A.
There are several efficient standard libraries to compute this, including LAPACK.
A useful property of the SVD is that σ_k is the 2-norm distance to the nearest matrix of rank less than k (Golub and Van Loan, Wiley 1981).
This allows us to find the distance to nearby gcds of desired degrees.
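This distance property (the Eckart–Young theorem) can be illustrated by truncating the SVD. A generic demo on a random matrix, not part of the GCD algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
U, s, Vt = np.linalg.svd(A)

k = 3
# truncating to the first k-1 singular values gives the nearest
# matrix of rank < k; its 2-norm distance from A is exactly sigma_k
B = (U[:, :k - 1] * s[:k - 1]) @ Vt[:k - 1, :]
dist = np.linalg.norm(A - B, 2)
assert np.isclose(dist, s[k - 1])
```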
SVD GCD Algorithm
Input: univariate polynomials p and q of degrees n and m respectively, n ≥ m, and an error tolerance ε.
Output: a polynomial d of degree n_d which satisfies the following properties:
1 The polynomial d is the exact GCD of some pair of polynomials p + ∆p and q + ∆q, with ε = max(||∆p||, ||∆q||).
2 The degree of d satisfies deg(d) = max deg(gcd(r, s)), where the maximum is taken over all polynomials r ∈ N_ε(p) and s ∈ N_ε(q).
3 Among all triples (d∗, p + ∆p, q + ∆q) satisfying the first two properties, the associated polynomials p + ∆p and q + ∆q are the closest to p and q in the least-squares sense.
SVD GCD Algorithm
Processing:
1 Form the Sylvester matrix S of p and q.
2 Compute the SVD of S = U Σ V^T.
3 Find the maximum k such that σ_k > ε√(m + n) and σ_{k+1} ≤ ε (if all σ_j > ε√(m + n) then set d = 1, and if there is no such ‘gap’ in the singular values then report failure). The index k is the declared rank of S, and by the rank formula the degree of d will be n_d = m + n − k.
4 Compute d by one of the methods described next.
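Steps 1–3 above can be sketched as follows, using the degree formula deg = m + n − (declared rank). The function names and the example tolerance are ours:

```python
import numpy as np

def sylvester(p, q):
    """Sylvester matrix; coefficient arrays highest-degree first."""
    n, m = len(p) - 1, len(q) - 1
    S = np.zeros((m + n, m + n))
    for i in range(m):
        S[i, i:i + n + 1] = p
    for i in range(n):
        S[m + i, i:i + m + 1] = q
    return S

def approx_gcd_degree(p, q, eps):
    """Steps 1-3 of the SVD GCD algorithm: declared rank of S(p, q)
    from the singular-value gap; returns the approximate gcd degree."""
    n, m = len(p) - 1, len(q) - 1
    sigma = np.linalg.svd(sylvester(p, q), compute_uv=False)
    big = sigma > eps * np.sqrt(m + n)   # prefix, since sigma is sorted
    if big.all():
        return 0                          # d = 1: no nearby common factor
    k = int(big.sum())                    # declared rank
    if sigma[k] > eps:                    # no clean gap
        raise ValueError("no gap in singular values")
    return m + n - k

p = np.array([1.0, 1.99, 1.0])            # x^2 + 1.99x + 1
q = np.array([1.0, 1.0])                  # x + 1
# with eps = 0.1 the declared rank is 2, so the gcd degree is 1
```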
SVD GCD – Have n_d, Compute d
1 Compute d by the ordinary Euclidean algorithm, terminating when the degree of the remainder is n_d (perhaps scaled as Noda and Sasaki do it).
2 Solve the minimization problem defined below by standard optimization techniques. This has the advantage that the backward error analysis is all done at the same time, and it is numerically stable.
3 Use the modified Lazard algorithm detailed in CGTW (ISSAC 1995), Section 4, specialized to the univariate case, to find all the roots of the GCD and hence the GCD.
Optimization
The gcd found by SVD is not guaranteed to require the minimum perturbation of p and q. But once we have used the SVD to determine the degree d of the approximate gcd, we can formulate the optimization problem

min_{g, p_1, q_1} ||C_{n−d+1}(g) p∗_1 − p∗||² + ||C_{m−d+1}(g) q∗_1 − q∗||²

over candidate gcds g of degree d and candidate cofactors p_1 of p and q_1 of q.
The optimization approach was fully developed by Karmarkar and Lakshman, ISSAC 1996.
QR Factorization
If S(p, q) = QR with Q orthogonal and R upper triangular, we wish to find the last non-zero row of R.
This can improve numerical behaviour and performance, and preserve structure; e.g. Zarowski et al (IEEE Trans Signal Processing 2000), Zhi and Noda (ASCM 2000), Corless, SMW and Zhi (IEEE Trans Signal Processing 2004). The last paper extended the method to handle cases with roots near or outside the unit circle, giving a practical algorithm.
Exploiting Structure
We also note that a singular value gives the distance to the closest matrix of less than a given rank. This is not necessarily a Sylvester matrix. The problem of structured low-rank approximation of a Sylvester matrix was taken up by Kaltofen, Yang and Zhi in Symbolic-Numeric Computation, Wang and Zhi (eds), Trends in Mathematics, Birkhäuser 2007.
Formulation · Recovery from Multiple Images · Recovery from One Image
Outline
1 Introduction and Basic Notions
2 Approximate GCD
3 Application: Image Processing
4 Other Problems
5 Conclusion
Application: Image Processing
Deconvolution via Fast Approximate GCD, Zijia Li, Zhengfeng Yang, Lihong Zhi, ISSAC 2010.
Approximate gcd of bivariate polynomials corresponding to z-transforms of several blurred images.
Time for an n × n image: O(n^2 log n) vs the previous O(n^8).
Notation
Image matrix P. Blurring U. Noise N.
Polluted matrix F = P ∗ U + N.
Convolution: (P ∗ U)_{i,j} = Σ_h Σ_k P_{h,k} U_{i−h, j−k}.
Signal to noise ratio: SNR = 10 log_10(σ²_{P∗U} / σ²_N), where σ² is variance.
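A minimal sketch of these definitions in code — a naive full 2D convolution for illustration, not the fast method of the paper; the helper names and the toy data are ours:

```python
import numpy as np

def conv2_full(P, U):
    """Full 2D convolution: (P * U)_{i,j} = sum_{h,k} P[h,k] U[i-h, j-k]."""
    m1, n1 = P.shape
    m2, n2 = U.shape
    F = np.zeros((m1 + m2 - 1, n1 + n2 - 1))
    for h in range(m1):
        for k in range(n1):
            F[h:h + m2, k:k + n2] += P[h, k] * U
    return F

def snr_db(signal, noise):
    """SNR = 10 log10(var(signal) / var(noise)), in decibels."""
    return 10.0 * np.log10(signal.var() / noise.var())

P = np.arange(9.0).reshape(3, 3)             # toy "image"
U = np.array([[0.25, 0.25], [0.25, 0.25]])   # 2x2 blur kernel
rng = np.random.default_rng(1)
N = 0.01 * rng.standard_normal((4, 4))       # additive noise
F = conv2_full(P, U) + N                     # polluted image F = P * U + N
snr = snr_db(conv2_full(P, U), N)
```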
2D z-transform
The two-dimensional z-transform of an m × n matrix P is the bivariate polynomial

p(x, y) = x_{m−1}^T · P · y_{n−1}

Then F = P ∗ U + N becomes

f(x, y) = p(x, y) u(x, y) + n(x, y)
Image Recovery
Two distorted images
F1 = P ∗ U + N1
F2 = P ∗ V + N2
Applying z-transform gives
f1(x , y) = p(x , y)u(x , y) + n1(x , y)
f2(x , y) = p(x , y)v(x , y) + n2(x , y)
Reconstruct the true image from the approximate gcd of f1 and f2.
In practice deg u and deg v are low compared to deg p, so the gcd is of high degree.
Examples
Image Recovery
Single colour image. R, G, B same blurring function.
F1 = P1 ∗ U + N1
F2 = P2 ∗ U + N2
F3 = P3 ∗ U + N3
f1(x , y) = p1(x , y)u(x , y) + n1(x , y)
f2(x , y) = p2(x , y)u(x , y) + n2(x , y)
f3(x , y) = p3(x , y)u(x , y) + n3(x , y)
First find u as the gcd of f1, f2, f3, which has low degree.
Example
Example
Approximate Decomposition
Outline
1 Introduction and Basic Notions
2 Approximate GCD
3 Application: Image Processing
4 Other Problems
5 Conclusion
Many Other Problems
approximate square free decomposition
approximate functional decomposition of polynomials
approximate irreducibility testing
approximate factorization
approximate constraint solving
closest polynomial with a given property
use of other resultant matrices
use of structure
multivariate mappings
Approximate Polynomial Decomposition
Given f ∈ k[z] for k = R or C and ε > 0, do there exist g, h, ∆f ∈ k[z] such that

(f + ∆f)(z) = (g ◦ h)(z) = g(h(z))

for deg g < deg f, deg h < deg f, deg ∆f ≤ deg f and ||∆f|| < ε?
About Polynomial Decomposition
If f = g ◦ h and deg g = r, deg h = s, then deg f = rs.
If f is “maximally decomposed” as f = f_1 ◦ · · · ◦ f_n, then this decomposition is unique up to
application of linear composition factors, i.e. f_1 ◦ f_2 = f_1 ◦ (µ ◦ µ⁻¹) ◦ f_2 = (f_1 ◦ µ) ◦ (µ⁻¹ ◦ f_2), where µ(z) = az + b and µ⁻¹(z) = z/a − b/a,
and commuting Chebyshev polynomials: T_n ◦ T_m = T_m ◦ T_n.
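The commuting property is easy to confirm numerically with numpy's Chebyshev class, using the identity T_n ∘ T_m = T_m ∘ T_n = T_{nm}:

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

T3, T5 = Chebyshev.basis(3), Chebyshev.basis(5)
x = np.linspace(-1.0, 1.0, 101)
# Chebyshev polynomials commute under composition: T3 o T5 = T5 o T3 = T15
assert np.allclose(T3(T5(x)), T5(T3(x)))
assert np.allclose(T3(T5(x)), Chebyshev.basis(15)(x))
```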
About Polynomial Decomposition
WLOG, assume f, g, h monic and h(0) = 0.
In the exact case, h^r and f agree on the first s coefficients, which can be used to recover h. (Recall r = deg g, s = deg h.)
Then g can be obtained by solving a triangular system. Kozen, Landau, JSC 1989.
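A sketch of this exact recovery under the stated normalizations (monic f, g, h with h(0) = 0); helper names are ours, and the linear system for g is solved with least squares rather than an explicit triangular solve. The sequential update for h uses the fact that the coefficient of x^{rs−j} in h^r equals r times the j-th leading coefficient of h plus terms in earlier coefficients:

```python
import numpy as np

def compose(g, h):
    """g(h(x)) by Horner; coefficient arrays are highest-degree first."""
    out = np.array([g[0]], dtype=float)
    for c in g[1:]:
        out = np.convolve(out, h)
        out[-1] += c
    return out

def decompose(f, r, s):
    """Recover monic h (degree s, h(0) = 0) from the top s coefficients
    of monic f = g o h, then g by solving a linear system in powers of h."""
    h = np.zeros(s + 1)
    h[0] = 1.0                                # monic, constant term 0
    for j in range(1, s):
        hr = h
        for _ in range(r - 1):
            hr = np.convolve(hr, h)           # current h**r
        h[j] += (f[j] - hr[j]) / r            # coeff of x^(rs-j) is linear in h[j]
    # columns of M are the coefficient vectors of h^0, h^1, ..., h^r
    n = r * s
    M = np.zeros((n + 1, r + 1))
    hp = np.array([1.0])
    for i in range(r + 1):
        M[n + 1 - len(hp):, i] = hp
        hp = np.convolve(hp, h)
    gvec, *_ = np.linalg.lstsq(M, f, rcond=None)
    return gvec[::-1], h                      # both highest-degree first

# example: f = g o h with g = z^2 + 3z + 5 and h = x^3 + 2x
g0, h0 = np.array([1.0, 3.0, 5.0]), np.array([1.0, 0.0, 2.0, 0.0])
f = compose(g0, h0)
g, h = decompose(f, r=2, s=3)
assert np.allclose(g, g0) and np.allclose(h, h0)
```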
Approximate Decomposition
Corless, Giesbrecht, Jeffrey, SMW, ISSAC 1999.
Find an initial h^(0) as in Kozen, Landau, or preferably as reformulated by von zur Gathen (series in 1/z).
Given h^(k), solve a linear least-squares problem for the best possible g^(k). Stop if ||∆f|| is small enough or is no longer decreasing.
Given g^(k), approximate a solution to the nonlinear least-squares problem for the best possible h^(k+1). Newton iteration proved effective.
Further questions: the stability of the method, and whether the structure of the Jacobian can be used to make the Newton iteration more efficient.
Outline
1 Introduction and Basic Notions
2 Approximate GCD
3 Application: Image Processing
4 Other Problems
5 Conclusion
Conclusions
Do not mindlessly use computer algebra algorithms with floating point coefficients.
Do think about the mathematical nature of the problem and the final result it must obtain – not how to obtain it, but what.
New methods to compute these objects.
Use of the backward error concept.
Use of the intimate relation between polynomials and matrices.
Still a growing area.