TRANSCRIPT
Life is really Nonlinear.
Given $F:\mathbb{R}^n \to \mathbb{R}^n$, we seek to solve $F(x) = 0$.
Assuming $F$ is differentiable, $F'$ exists, with entries $F'(x)_{ij} = \partial f_i / \partial x_j$.
From Calculus, one has:

Theorem 0:
$$F(x) = F(x^*) + \int_0^1 F'\big(x^* + t(x - x^*)\big)\,(x - x^*)\,dt \quad (1)$$
provided $x, x^* \in \Omega$ and $x$ is sufficiently near $x^*$.
Definition 1: $G:\Omega \subset \mathbb{R}^N \to \mathbb{R}^M$ is Lipschitz continuous on $\Omega$
if $\|G(x) - G(y)\| \le \gamma\,\|x - y\|$ for some $\gamma > 0$.
$K$ is a contraction mapping on $\Omega$ if $K$ is Lipschitz continuous and $0 < \gamma < 1$.
Theorem 1: If $K$ is a contraction mapping with Lipschitz constant $\gamma$, then $\exists$ a unique fixed point $x^*$ such that $x^* = K(x^*)$, and the iterates $x_{k+1} = K(x_k)$ satisfy $x_k \to x^*$ q-linearly with q-factor $\gamma$ (very slow).
Proof:
(1) $\|x_{k+1} - x_k\| \le \gamma\,\|x_k - x_{k-1}\| \le \cdots \le \gamma^k\,\|x_1 - x_0\|$
(2) $\displaystyle \|x_{n+k} - x_n\| \le \sum_{i=n}^{n+k-1} \|x_{i+1} - x_i\| \le \sum_{i=n}^{n+k-1} \gamma^i\,\|x_1 - x_0\| \le \frac{\gamma^n}{1 - \gamma}\,\|x_1 - x_0\| \to 0$ as $n \to \infty$
$\Rightarrow \{x_n\}$ is a Cauchy sequence $\Rightarrow x_n \to x^*$ for some $x^*$. $\square$
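A minimal sketch of the fixed point iteration of Theorem 1 (the example $K(x) = \cos(x)$ on $[0,1]$ is ours, not from the notes; there $\gamma = \sin(1) < 1$, so $K$ is a contraction):

```python
# Fixed point iteration x_{k+1} = K(x_k) for a contraction K.
# Example (an illustration, not from the notes): K(x) = cos(x) on [0, 1].
import math

def fixed_point(K, x0, tol=1e-12, max_iter=1000):
    """Iterate x_{k+1} = K(x_k) until successive iterates stop moving."""
    x = x0
    for _ in range(max_iter):
        x_new = K(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

x_star = fixed_point(math.cos, 0.5)
print(x_star)                           # the fixed point of cos, ~0.739085
print(abs(math.cos(x_star) - x_star))   # residual, tiny
```

The linear convergence with q-factor $\gamma \approx 0.67$ is visible here: each iteration gains only a fixed fraction of a digit, which is why the notes call this "very slow".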
Definition 2:
(i) $x_n \to x^*$ q-quadratically (fast) if $\|x_{n+1} - x^*\| \le K\,\|x_n - x^*\|^2$ for some $K > 0$.
(ii) $x_n \to x^*$ q-superlinearly with q-order $\alpha > 1$ if $\|x_{n+1} - x^*\| \le K\,\|x_n - x^*\|^\alpha$ (faster than plain q-superlinear convergence).
(iii) $x_n \to x^*$ q-superlinearly if $\displaystyle \lim_{n\to\infty} \frac{\|x_{n+1} - x^*\|}{\|x_n - x^*\|} = 0$.
(iv) $x_n \to x^*$ q-linearly with q-factor $\alpha \in (0,1)$ if $\|x_{n+1} - x^*\| \le \alpha\,\|x_n - x^*\|$ (slowest).
Application: Consider $y' = f(y)$ with $y(t_0) = y_0$.
The backward Euler method is
$$\frac{y_{n+1} - y_n}{\Delta t} \approx y' \;\Rightarrow\; \frac{y_{n+1} - y_n}{\Delta t} = f(y_{n+1}) \;\Rightarrow\; y_{n+1} = y_n + \Delta t\, f(y_{n+1}).$$
To obtain $y_{n+1}$, one needs to solve
$$y = K(y) \equiv y_n + \Delta t\, f(y) \quad (2)$$
If $f$ is Lipschitz continuous with Lipschitz constant $M$, the mapping $K$ is a contraction when $\Delta t < 1/M$:
$$\|K(y) - K(y^*)\| = \Delta t\,\|f(y) - f(y^*)\| \le \Delta t \cdot M\,\|y - y^*\|.$$
By Theorem 1, (2) can be solved by the fixed point iteration when the time step $\Delta t < 1/M$.
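A small sketch of this application, assuming $f(y) = \cos(y)$ (so $M = 1$ and any $\Delta t < 1$ contracts) and assuming $\Delta t$ divides the interval evenly; the implicit equation (2) is solved at each step by the fixed point iteration:

```python
# Backward Euler, solving the implicit equation y = K(y) = y_n + dt*f(y)
# at each step by fixed point iteration. Here f(y) = cos(y), with
# Lipschitz constant M = 1, so the inner iteration contracts for dt < 1.
import math

def backward_euler(f, y0, t0, t_end, dt, fp_tol=1e-12):
    t, y = t0, y0
    while t < t_end - 1e-14:
        y_prev = y
        # fixed point iteration for y = y_prev + dt*f(y)
        for _ in range(200):
            y_new = y_prev + dt * f(y)
            if abs(y_new - y) < fp_tol:
                break
            y = y_new
        y = y_new
        t += dt
    return y

y = backward_euler(math.cos, 0.0, 0.0, 1.0, 0.01)
print(y)  # approximates y(1) for y' = cos(y), y(0) = 0
```

For this test problem the exact solution is the Gudermannian $y(t) = \arcsin(\tanh t)$, so the first-order error of backward Euler can be checked directly.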
Exercise: Using the backward Euler method, solve
(i) $y' = \varepsilon y$ with $y(0) = 1$, $\varepsilon = 1, 10, 100$;
(ii) $y' = \cos(y)$ with $y(0) = 0$.
Standard assumptions in nonlinear iterations:
1. The equation $F(x) = 0$ has a solution $x^*$.
2. $F'$ is Lipschitz continuous with Lipschitz constant $\gamma$.
3. $F'(x^*)$ is non-singular.

Newton's method and stopping criterion of nonlinear iterations:
Suppose $x^*$ is the root of $F$ and $x$ is near $x^*$ (so $x^* + t(x - x^*)$ is between $x^*$ and $x$). By Theorem 0 and $F(x^*) = 0$, we have
$$F(x) = 0 + \int_0^1 F'\big(x^* + t(x - x^*)\big)\,(x - x^*)\,dt \quad (3)$$
Since $F'$ is Lipschitz,
$$\|F'(x) - F'(x^*)\| \le \gamma\,\|x - x^*\| \;\Rightarrow\; \|F'(x)\| < \|F'(x^*)\| + \gamma\,\|x - x^*\|.$$
When $x$ is close enough to $x^*$ such that
$$\gamma\,\|x - x^*\| < \frac{1}{2}\,\|F'(x^*)\| \quad (4)$$
one has, by (3) and (4),
$$\|F(x)\| \le 2\,\|F'(x^*)\|\,\|x - x^*\| = 2\,\|F'(x^*)\|\,\|e\| \quad (5)$$
(here $e = x - x^*$).
Moreover, let $e = x - x^*$ and consider
$$F'(x^*)^{-1} F(x) = F'(x^*)^{-1} \int_0^1 F'\big(x^* + t(x - x^*)\big)\, e\, dt
= \left[\, I - \int_0^1 \Big( I - F'(x^*)^{-1} F'\big(x^* + t(x - x^*)\big) \Big)\, dt \,\right] e$$
$$\Rightarrow\; \big\| F'(x^*)^{-1} F(x) \big\| \ge \|e\| - \left\| \int_0^1 \Big( I - F'(x^*)^{-1} F'\big(x^* + t(x - x^*)\big) \Big)\, e\, dt \right\| \quad (*)$$
Since $F'(x^*)$ is nonsingular, when $x$ is close enough to $x^*$
(i.e. $x^* + t(x - x^*)$ is even closer to $x^*$), one has
$$\big\| I - F'(x^*)^{-1} F'\big(x^* + t(x - x^*)\big) \big\| < \frac{1}{2} \quad (6)$$
$$\Rightarrow\; \big\| F'(x^*)^{-1} F(x) \big\| \ge \frac{1}{2}\,\|e\| \quad (7)$$
Hence, from (5) and (7), we have
with $K = \kappa\big(F'(x^*)\big) = \|F'(x^*)\|\,\|F'(x^*)^{-1}\|$ the condition number of the Jacobian:
$$\underbrace{\frac{\|e\|}{\|e_0\|}}_{\text{relative error}} \le\; 4K\, \underbrace{\frac{\|F(x)\|}{\|F(x_0)\|}}_{\text{relative residual}} \quad (8)$$
$$\frac{\|F(x)\|}{\|F(x_0)\|} \le\; 4K\, \frac{\|e\|}{\|e_0\|} \quad (9)$$
Exercise: Show $\exists\, \delta > 0$ such that
$$\big\| F'(x)^{-1} \big\| \le 2\,\big\| F'(x^*)^{-1} \big\| \quad \text{for } x \in B(\delta) = \{\, x : \|x - x^*\| < \delta \,\} \quad (10)$$
(Hint: as in $(*)$ and (6), consider $A = F'\big(x^* + t(x - x^*)\big)$ and $B = F'(x^*)^{-1}$.
We have $\|I - AB\| < \frac{1}{2}$, so $AB$ is nonsingular, and
$$A^{-1} = B\,\big( I - (I - AB) \big)^{-1}
\;\Rightarrow\; \|A^{-1}\| \le \|B\|\,\big\| \big( I - (I - AB) \big)^{-1} \big\| \le \frac{\|B\|}{1 - \|I - AB\|} \le 2\,\|B\|.)$$
Theorem 2: Let the standard assumptions hold. Then $\exists\, \delta > 0$ such that if $x_0 \in B(\delta)$, the Newton iteration
$$x_{n+1} = x_n - F'(x_n)^{-1} F(x_n)$$
converges: $x_n \to x^*$ q-quadratically.
Proof:
Consider
$$x_{n+1} - x^* = x_n - x^* - F'(x_n)^{-1} F(x_n)$$
$$= F'(x_n)^{-1}\big( F'(x_n)\, e_n - F(x_n) \big).$$
By Theorem 0,
$$F(x_n) = F(x_n) - F(x^*) = \int_0^1 F'\big(x^* + t\, e_n\big)\, e_n\, dt,$$
so
$$F'(x_n)\, e_n - F(x_n) = \int_0^1 \Big( F'(x_n) - F'\big(x^* + t\, e_n\big) \Big)\, e_n\, dt.$$
Since $F'$ is Lipschitz, $\big\|F'(x_n) - F'(x^* + t\,e_n)\big\| \le \gamma\,(1-t)\,\|e_n\|$, and with (10),
$$\|e_{n+1}\| \le \big\|F'(x_n)^{-1}\big\| \int_0^1 \gamma\,(1-t)\,\|e_n\|^2\, dt \le K\,\|e_n\|^2,
\qquad \text{here } K = \gamma\,\big\|F'(x^*)^{-1}\big\|$$
$\Rightarrow\; x_n \to x^*$ q-quadratically. $\square$
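A hedged 1-D sketch of the Newton iteration of Theorem 2, using the Newton step $F'(x_n)^{-1} F(x_n)$ itself as the error estimate; the test function $F(x) = x^2 - 2$ is our illustration, not an example from the notes:

```python
# Newton's method in 1-D: x_{n+1} = x_n - F(x_n)/F'(x_n).
# Illustration (ours): F(x) = x**2 - 2, root x* = sqrt(2).
def newton(F, dF, x0, eps=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = F(x) / dF(x)   # F'(x_n)^{-1} F(x_n)
        x -= step
        if abs(step) < eps:   # the Newton step approximates the error e_n
            break
    return x

root = newton(lambda x: x*x - 2.0, lambda x: 2.0*x, 1.0)
print(root)  # ~1.41421356...
```

The q-quadratic convergence shows up as a roughly doubling number of correct digits per iteration: machine precision is reached in about six steps from $x_0 = 1$.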
Remark:
1. (9) $\Rightarrow$ the stopping criterion should be determined according to the condition number of the Jacobian matrix $F'(x^*)$ for the relative error to be less than a given tolerance.
2. Moreover, for Newton's method, Theorem 2 implies that
$$e_n = (x_n - x_{n+1}) + e_{n+1} = F'(x_n)^{-1} F(x_n) + e_{n+1}
\;\overset{\text{Theorem 2}}{\Rightarrow}\;
\|e_n\| = \big\| F'(x_n)^{-1} F(x_n) \big\| + O\big(\|e_n\|^2\big),$$
where the $O(\|e_n\|^2)$ term can be ignored when $F'(x^*)$ is well-conditioned.
For the absolute error to be less than a given tolerance $\varepsilon$, the Newton iteration should be stopped when $\big\| F'(x_n)^{-1} F(x_n) \big\| < \varepsilon$.
3. To check the quadratic convergence rate of the Newton method, we check that
$$\frac{\|x_{n+1} - x_n\|}{\|x_n - x_{n-1}\|^2} \to \text{a constant as } n \to \infty,$$
instead of checking that
$$\frac{\|x_{n+1} - x^*\|}{\|x_n - x^*\|^2} \to \text{a constant as } n \to \infty,$$
since $x^*$ is not known in advance.
Chord method
$$x_{n+1} = x_n - F'(x_0)^{-1} F(x_n) \quad (11)$$
If the standard assumptions hold, the chord method converges q-linearly to the root $x^*$.
Remark 1:
$F'(x_0)^{-1}$ can be replaced by an approximate inverse (preconditioner) $B^{-1}$,
where $\|I - B^{-1} A\| < 1$ (with $A = F'(x_0)$) and $B^{-1}$ is easier to compute. Then (11) becomes
$$x_{n+1} = x_n - B^{-1} F(x_n) \quad (12)$$
Remark 2: When $F'$ is difficult to compute, we can approximate $F'$ by a difference approximation:
$$F' \approx D_h F, \qquad (D_h F)(x)\, e_j = \begin{cases} \dfrac{F\big(x + h\,\|x\|\, e_j\big) - F(x)}{h\,\|x\|}, & x \ne 0, \\[2mm] \dfrac{F(h\, e_j) - F(0)}{h}, & x = 0. \end{cases}$$
In the one-dimensional case, this approach is called the secant method.
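A sketch of the forward-difference Jacobian $D_h F$ from Remark 2, with the step scaled by $\|x\|$ as in the formula above (function names and the test problem are ours):

```python
# Forward-difference Jacobian D_h F, column by column.
def diff_jacobian(F, x, h=1e-7):
    """Column j: (F(x + h*||x||*e_j) - F(x)) / (h*||x||); if x = 0, use step h."""
    n = len(x)
    norm_x = sum(v*v for v in x) ** 0.5
    step = h * norm_x if norm_x > 0 else h
    Fx = F(x)
    J = [[0.0] * n for _ in range(len(Fx))]
    for j in range(n):
        xp = list(x)
        xp[j] += step
        Fxp = F(xp)
        for i in range(len(Fx)):
            J[i][j] = (Fxp[i] - Fx[i]) / step
    return J

# Example (ours): F(x, y) = (x^2 - y, x + y), exact Jacobian [[2x, -1], [1, 1]].
J = diff_jacobian(lambda v: [v[0]**2 - v[1], v[0] + v[1]], [1.0, 2.0])
print(J)
```

The forward difference commits an $O(h\|x\|)$ truncation error, which is why the computed entries agree with the exact Jacobian only to roughly single precision for $h = 10^{-7}$.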
Algorithm of Chord method
1. $x = x_0$.
2. Compute $F(x_0)$ and factor $F'(x_0) = LU$.
3. Do while $\|F(x)\| \ge \varepsilon \cdot \|F(x_0)\|$:
   (a) Solve $LU\, s = -F(x)$
   (b) $x = x + s$
   (c) Evaluate $F(x)$
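A 1-D sketch of the chord algorithm above; in one dimension the "LU factorization of $F'(x_0)$" is just the scalar $F'(x_0)$, computed once and reused. The test function is our illustration:

```python
# Chord iteration (11): the derivative is evaluated only once, at x0.
# Illustration (ours): F(x) = x**2 - 2.
def chord(F, dF, x0, eps=1e-10, max_iter=200):
    x = x0
    dF0 = dF(x0)          # plays the role of the LU factorization of F'(x0)
    F0_norm = abs(F(x0))
    while abs(F(x)) >= eps * F0_norm and max_iter > 0:
        x = x - F(x) / dF0   # solve F'(x0) s = -F(x), then x = x + s
        max_iter -= 1
    return x

root = chord(lambda x: x*x - 2.0, lambda x: 2.0*x, 1.5)
print(root)  # converges q-linearly to sqrt(2)
```

Compared with Newton's method this trades the quadratic rate for a q-linear one, but each step is cheap since no new derivative (or factorization) is needed.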
Theorem 3. Let the standard assumptions hold.
Then $\exists\, K > 0$, $\delta > 0$ and $\delta_1 > 0$ such that if $x_c \in B(\delta)$ and $\|\Delta(x_c)\| < \delta_1$, then
$$x_+ = x_c - \big( F'(x_c) + \Delta(x_c) \big)^{-1} \big( F(x_c) + \varepsilon(x_c) \big)$$
is defined and satisfies
$$\|e_+\| \le K\,\big( \|e_c\|^2 + \|\Delta(x_c)\|\,\|e_c\| + \|\varepsilon(x_c)\| \big).$$
Proof.
Let $x_+^N = x_c - F'(x_c)^{-1} F(x_c)$ be the Newton update. Then
$$x_+ = x_+^N + \Big( F'(x_c)^{-1} - \big( F'(x_c) + \Delta \big)^{-1} \Big) F(x_c) - \big( F'(x_c) + \Delta \big)^{-1} \varepsilon(x_c),$$
and by Theorem 2 and (5),
$$\|e_+\| \le K_N\,\|e_c\|^2 + \Big\| F'(x_c)^{-1} - \big( F'(x_c) + \Delta \big)^{-1} \Big\| \cdot 2\,\|F'(x^*)\|\,\|e_c\| + \Big\| \big( F'(x_c) + \Delta \big)^{-1} \Big\|\,\|\varepsilon(x_c)\|. \quad (13)$$
If $\|\Delta\| \le \dfrac{1}{4\,\|F'(x^*)^{-1}\|}$, then by (10), $\big\| F'(x_c)^{-1} \Delta \big\| \le \dfrac{1}{2}$.
$$\Rightarrow\; F'(x_c) + \Delta = F'(x_c)\,\big( I + F'(x_c)^{-1} \Delta \big)$$
$$\Rightarrow\; \Big\| \big( F'(x_c) + \Delta \big)^{-1} \Big\| \le \Big\| \big( I + F'(x_c)^{-1} \Delta \big)^{-1} \Big\|\,\big\| F'(x_c)^{-1} \big\|
\le \frac{\|F'(x_c)^{-1}\|}{1 - \|F'(x_c)^{-1}\Delta\|} \le 2\,\big\| F'(x_c)^{-1} \big\| \le 4\,\big\| F'(x^*)^{-1} \big\| \quad (14)$$
Moreover, by the same argument, one can show
$$\Big\| F'(x_c)^{-1} - \big( F'(x_c) + \Delta \big)^{-1} \Big\| \le 8\,\big\| F'(x^*)^{-1} \big\|^2\,\|\Delta\| \quad (15)$$
Plugging (14) and (15) into (13), we have
$$\|e_+\| \le K\,\big( \|e_c\|^2 + \|\Delta(x_c)\|\,\|e_c\| + \|\varepsilon(x_c)\| \big),$$
here $K = K_N + 16\,\|F'(x^*)\|\,\|F'(x^*)^{-1}\|^2 + 4\,\|F'(x^*)^{-1}\|$. $\square$
Theorem 4. Let the standard assumptions hold. There are $K_c$ and $\delta_0 > 0$ such that if $x_0 \in B(\delta_0)$, the chord iterates converge q-linearly to $x^*$ and
$$\|e_{n+1}\| \le K_c\,\|e_0\|\,\|e_n\|.$$
Proof. Let $\delta_0$ be small enough so that Theorem 3 holds. For the chord iteration, $\varepsilon(x) = 0$ and, for $x_c = x_n$,
$$\Delta(x_c) = F'(x_0) - F'(x_c), \qquad x_0 \in B(\delta_0).$$
Since $F'$ is Lipschitz,
$$\|\Delta(x_c)\| \le \gamma\,\|x_0 - x_c\| \le \gamma\,\big( \|x_0 - x^*\| + \|x^* - x_c\| \big) = \gamma\,\big( \|e_0\| + \|e_c\| \big).$$
By Theorem 3,
$$\|e_{n+1}\| \le K\,\Big( \|e_n\|^2 + \gamma\,\big( \|e_0\| + \|e_n\| \big)\,\|e_n\| \Big)$$
$$\le K\,\|e_n\|\,\big( \|e_n\| + \gamma\,\|e_0\| + \gamma\,\|e_n\| \big) \le K\,(1 + 2\gamma)\,\|e_0\|\,\|e_n\|,$$
where we assume $\|e_n\| \le \|e_0\|$. This is true when $\delta_0$ is chosen such that
$$\|e_0\| < \delta_0 \quad \text{and} \quad K\,(1 + 2\gamma)\,\delta_0 < 1.$$
Clearly, the chord iterates converge q-linearly when $K\,(1 + 2\gamma)\,\|e_0\| < 1$;
let $K_c = K\,(1 + 2\gamma)$. The theorem is proved. $\square$
Theorem 5. Let the standard assumptions hold. Then $\exists\, K_B > 0$, $\delta > 0$ and $\delta_1 > 0$ such that if $x_0 \in B(\delta)$ and the approximate inverse $B(x)$ satisfies
$$\big\| I - B(x)\,F'(x^*) \big\| = \rho(x) < \delta_1 \quad \text{for all } x \in B(\delta),$$
then the iteration
$$x_{n+1} = x_n - B(x_n)\,F(x_n)$$
converges q-linearly to $x^*$ and
$$\|e_{n+1}\| \le K_B\,\big( \rho(x_n)\,\|e_n\| + \|e_n\|^2 \big).$$
(Such chord-type iterations with an approximate inverse are used to accelerate the iterations.)
Shamanskii method: Alternation of a Newton step with a sequence of chord steps.
$$(*)\quad \begin{cases} y_1 = x_c - F'(x_c)^{-1} F(x_c), \\ y_{j+1} = y_j - F'(x_c)^{-1} F(y_j), & 1 \le j \le m - 1, \\ x_+ = y_m, \end{cases}$$
then go for the next Newton step.
When $m = 1$, $(*) \equiv$ Newton iteration; when $m = \infty$, $(*) \equiv$ chord iteration.
Algorithm of Shamanskii
1. $x = x_0$.
2. Do while $\|F(x)\| > \tau\,\|F(x_0)\|$:
   (a) Compute $F(x)$
   (b) Factor $F'(x) = LU$
   (c) For $j = 1 \sim m$:
       (i) Solve $LU\, s = -F(x)$
       (ii) $x = x + s$
       (iii) Evaluate $F(x)$
       (iv) If $\|F(x)\| \le \tau\,\|F(x_0)\|$, break
   (d) If $\|F(x)\| \le \tau\,\|F(x_0)\|$, break
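A 1-D sketch of the Shamanskii algorithm above: one derivative evaluation is reused for $m$ chord-type steps, so $m = 1$ gives Newton and large $m$ approaches the chord method. The test function $F(x) = x^3 - 8$ is our illustration:

```python
# Shamanskii iteration: alternate one derivative evaluation with m reuses.
def shamanskii(F, dF, x0, m=2, tau=1e-12, max_outer=50):
    x = x0
    F0_norm = abs(F(x0))
    for _ in range(max_outer):
        if abs(F(x)) <= tau * F0_norm:
            break
        d = dF(x)                      # "factor F'(x) = LU" once per outer step
        for _ in range(m):             # m steps reusing the same derivative
            x = x - F(x) / d
            if abs(F(x)) <= tau * F0_norm:
                break
    return x

root = shamanskii(lambda x: x**3 - 8.0, lambda x: 3.0*x*x, 3.0, m=2)
print(root)  # ~2.0
```

Per Theorem 6 below, each outer step with $m = 2$ is locally of q-order $3$, at the cost of a single derivative evaluation per outer step.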
Theorem 6. Let $m \ge 1$ be given. $\exists\, K_s > 0$ and $\delta > 0$ such that if $x_0 \in B(\delta)$, the Shamanskii iterates converge with q-order $m + 1$ and
$$\|e_{n+1}\| \le K_s\,\|e_n\|^{m+1}.$$
Remark 3:
(1) When $F'$ is approximated by the difference approximation $D_h F$, with
$$\Delta(x_c) = F'(x_c) - (D_h F)(x_c), \qquad \|\Delta(x_c)\| < \delta_1,$$
if the standard assumptions hold and we have a good initial guess $x_0$, there is basically no difference between using $D_h F$ and the exact $F'(x_c)$ in the chord iterations. The convergence rate is at least q-linear.
(2) When $\|e_n\| < \|\varepsilon(x_n)\|$, Theorem 3 implies that no meaningful error reduction can be obtained by further iterations,
$$\because\quad \|e_{n+1}\| \le K\,\big( \|e_n\|^2 + \|\Delta(x_n)\|\,\|e_n\| + \|\varepsilon(x_n)\| \big),$$
when the error $\varepsilon$ in the evaluation of the function value $F$ admits no further reduction.
Remark 4.
The conclusions in Remark 3 can be generalized to the approximate inverse $B_n^{-1}$ when $\|B_n - F'(x_n)\| < \delta_1$. This further implies that, in the approximate Newton step
$$x_{n+1} = x_n - B_n^{-1} F(x_n),$$
the system $B_n\, \Delta x = -F(x_n)$ need not be solved exactly.
Instead of solving $B\,\Delta x = -F(x_n)$ exactly, one can solve
$$\tilde{B}\,\Delta x = -F(x_n) \quad \text{as long as } \|B - \tilde{B}\| < \delta_1.$$
Here $\tilde{B}$ can be a preconditioner of $B$, such as a stationary iteration, or a few steps of
$$\begin{cases} \text{CG iterations} \\ \text{PCG iterations} \\ \text{GMRES iterations} \end{cases} \quad \text{etc.}$$
The Newton iterative method
$$x_{n+1} = x_n - F'(x_n)^{-1} F(x_n)$$
solved by such approximations is called the inexact Newton method.
Please refer to C.T. Kelley's "Iterative Methods" for a detailed error analysis of the inexact Newton method.
Broyden's Method
Broyden's method is locally superlinearly convergent! (In between the Newton and chord methods.)
In one-dimensional space, consider the secant method
$$x_{n+1} = x_n - \left( \frac{F(x_n) - F(x_{n-1})}{x_n - x_{n-1}} \right)^{-1} F(x_n).$$
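A sketch of the 1-D secant iteration shown above, in which two starting points replace the derivative evaluation; the test function is our illustration:

```python
# Secant method: the slope (F(x1) - F(x0)) / (x1 - x0) replaces F'(x1).
# Illustration (ours): F(x) = x**2 - 2.
def secant(F, x0, x1, tol=1e-13, max_iter=50):
    for _ in range(max_iter):
        denom = F(x1) - F(x0)
        if denom == 0.0:      # converged (or degenerate slope); stop
            break
        x2 = x1 - F(x1) * (x1 - x0) / denom
        x0, x1 = x1, x2
        if abs(x1 - x0) < tol:
            break
    return x1

root = secant(lambda x: x*x - 2.0, 1.0, 2.0)
print(root)  # ~1.41421356...
```

For the secant method, the classical q-order is the golden ratio $(1 + \sqrt{5})/2 \approx 1.62$, which is consistent with the q-superlinear convergence proved in Remark 5 below.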
Remark 5: The secant iterates converge q-superlinearly.
Proof: Let $\delta$ be small enough that Theorem 3 holds, with $\varepsilon = 0$:
$$\|e_+\| \le K\,\big( \|e_c\|^2 + \|\Delta(x_c)\|\,\|e_c\| \big), \quad (16)$$
where $\Delta(x_c) = F'(x_c) - \dfrac{F(x_c) - F(x_-)}{x_c - x_-}$ is the difference between the derivative and the secant slope.
Since, by Theorem 0, one has
$$F(x_c) - F(x_-) = \int_0^1 F'\big( x_- + t\,(x_c - x_-) \big)\,(x_c - x_-)\, dt$$
$$\Rightarrow\; \Delta(x_c) = F'(x_c) - \frac{F(x_c) - F(x_-)}{x_c - x_-} = \int_0^1 \Big( F'(x_c) - F'\big( x_- + t\,(x_c - x_-) \big) \Big)\, dt$$
$$\Rightarrow\; \|\Delta(x_c)\| \le \gamma \int_0^1 \big\| (1 - t)\,(x_c - x_-) \big\|\, dt = \frac{\gamma}{2}\,\|x_c - x_-\| \le \frac{\gamma}{2}\,\big( \|e_c\| + \|e_-\| \big).$$
Plugging into (16), with $e_+ = e_{n+1}$, $e_c = e_n$, $e_- = e_{n-1}$,
$$\|e_{n+1}\| \le K\left( \|e_n\|^2 + \frac{\gamma}{2}\,\big( \|e_n\| + \|e_{n-1}\| \big)\,\|e_n\| \right).$$
Choose $\delta$ such that $K\,(1 + \gamma)\,\delta < 1$; then $\|e_{n+1}\| < \|e_n\| < \cdots < \|e_0\| < \delta$, and
$$\frac{\|e_{n+1}\|}{\|e_n\|} \le K\left( \|e_n\| + \frac{\gamma}{2}\,\big( \|e_n\| + \|e_{n-1}\| \big) \right) \le K\,(1 + \gamma)\,\|e_{n-1}\| \to 0 \quad \text{as } n \to \infty.$$
Hence the result. $\square$
The Broyden method computes the rank-one update
$$B_+ = B_c + \frac{(y - B_c\, s)\, s^T}{s^T s},
\qquad \text{here } y = F(x_+) - F(x_c), \quad s = x_+ - x_c,$$
with $B_0 = F'(x_0)$ or $(D_h F)(x_0)$, and iterates
$$x_{n+1} = x_n - B_n^{-1} F(x_n).$$
In one dimension, considering $B_c = \dfrac{F(x_c) - F(x_-)}{x_c - x_-}$, the update gives $B_+ = \dfrac{F(x_+) - F(x_c)}{x_+ - x_c}$ and the iteration becomes
$$x_{n+1} = x_n - \left( \frac{F(x_n) - F(x_{n-1})}{x_n - x_{n-1}} \right)^{-1} F(x_n).$$
Clearly, the secant method is a special case of the Broyden method.
The Broyden iterations can be written as follows:
(1) $x_+ = x_c - B_c^{-1} F(x_c)$
(2) Compute $s = x_+ - x_c$ and $w = F(x_+)$
(3) Update $B_c$ by the rank-one update $B_+ = B_c + \dfrac{w\, s^T}{s^T s}$
    (note that $y - B_c s = F(x_+) - F(x_c) + F(x_c) = F(x_+) = w$, since $B_c s = -F(x_c)$)
(4) Repeat (1) $\sim$ (3) until convergence.
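A sketch of steps (1)-(4) for $n = 2$, using the simplified rank-one update $B_+ = B_c + F(x_+)\, s^T / (s^T s)$; the test problem and the choice $B_0 = F'(x_0)$ are our illustration:

```python
# Broyden's method in 2-D, with an explicit 2x2 solve for B s = -F(x_c).
def broyden_2d(F, B0, x0, tol=1e-10, max_iter=100):
    x = x0[:]
    B = [row[:] for row in B0]
    for _ in range(max_iter):
        Fx = F(x)
        if max(abs(Fx[0]), abs(Fx[1])) < tol:
            break
        # (1) solve B s = -F(x_c) with the explicit 2x2 inverse
        det = B[0][0]*B[1][1] - B[0][1]*B[1][0]
        s = [(-Fx[0]*B[1][1] + Fx[1]*B[0][1]) / det,
             (-Fx[1]*B[0][0] + Fx[0]*B[1][0]) / det]
        x = [x[0] + s[0], x[1] + s[1]]
        # (2)-(3) rank-one update with w = F(x_+)
        w = F(x)
        ss = s[0]*s[0] + s[1]*s[1]
        for i in range(2):
            for j in range(2):
                B[i][j] += w[i]*s[j] / ss
    return x

# Test problem (ours): F(x, y) = (x^2 + y^2 - 4, x - y), root (sqrt(2), sqrt(2)).
F = lambda v: [v[0]**2 + v[1]**2 - 4.0, v[0] - v[1]]
x = broyden_2d(F, [[2.0, 2.0], [1.0, -1.0]], [1.0, 1.0])
print(x)
```

Only one function evaluation and one rank-one update are needed per step, with no Jacobian after $B_0$; this is the trade-off "in between Newton and chord" mentioned above.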
Obviously, we would like to ask the following questions:
Q1: When will the Broyden iterates converge?
Q2: How fast do the Broyden iterates converge?
Answers to Q1 and Q2:
Consider $B_n = F'(x^*) + E_n$.
The Dennis-More condition is
$$\lim_{n\to\infty} \frac{\|E_n\, s_n\|}{\|s_n\|} = 0, \qquad \text{where } s_n = x_{n+1} - x_n.$$
Theorem 7. Let the standard assumptions hold. Let $\{B_n\}$ be a sequence of nonsingular matrices with $B_n = F'(x^*) + E_n$, and let
$$x_{n+1} = x_n - B_n^{-1} F(x_n).$$
Assume $x_n \ne x^*$ for any $n$. Then $x_n \to x^*$ q-superlinearly if and only if $x_n \to x^*$ and the Dennis-More condition holds.
Proof ($\Rightarrow$): Let $e_n = x_n - x^*$, so $s_n = e_{n+1} - e_n$. Since $B_n s_n = -F(x_n)$ and $B_n = F'(x^*) + E_n$, we have
$$E_n\, s_n = -F(x_n) - F'(x^*)\, s_n = \big( F'(x^*)\, e_n - F(x_n) \big) - F'(x^*)\, e_{n+1}. \quad (*)$$
By Theorem 0,
$$F(x_n) - F'(x^*)\, e_n = \int_0^1 \Big( F'\big(x^* + t\, e_n\big) - F'(x^*) \Big)\, e_n\, dt
\;\Rightarrow\; \big\| F(x_n) - F'(x^*)\, e_n \big\| \le \frac{\gamma}{2}\,\|e_n\|^2$$
$$\Rightarrow\; \|E_n\, s_n\| \le \|F'(x^*)\|\,\|e_{n+1}\| + \frac{\gamma}{2}\,\|e_n\|^2.$$
So, if $x_n \to x^*$ q-superlinearly, given any $\varepsilon \in \big(0, \tfrac{1}{2}\big)$, we have
$$\|e_{n+1}\| \le \varepsilon\,\|e_n\| \quad \text{for } n \text{ large enough.}$$
Moreover,
$$\|s_n\| \ge \|e_n\| - \|e_{n+1}\| \ge \frac{1}{2}\,\|e_n\|,$$
so
$$\frac{\|E_n\, s_n\|}{\|s_n\|} \le \frac{\|F'(x^*)\|\,\varepsilon\,\|e_n\| + \frac{\gamma}{2}\,\|e_n\|^2}{\frac{1}{2}\,\|e_n\|}
= 2\,\varepsilon\,\|F'(x^*)\| + \gamma\,\|e_n\|.$$
Since $\varepsilon$ is arbitrary and $\|e_n\| \to 0$, by the definition of limit one has
$$\lim_{n\to\infty} \frac{\|E_n\, s_n\|}{\|s_n\|} = 0 \;\Rightarrow\; \text{the Dennis-More condition holds.}$$
($\Leftarrow$): On the other hand, if $\displaystyle \lim_{n\to\infty} \frac{\|E_n\, s_n\|}{\|s_n\|} = 0$ and $x_n \to x^*$,
we want to show $\|e_{n+1}\| \le \eta_n\,\|e_n\|$ with $\eta_n \to 0$ as $n \to \infty$.
From the Broyden iteration, one has
$$x_{n+1} - x^* = x_n - x^* - B_n^{-1} F(x_n), \qquad
F(x_n) = \int_0^1 F'\big( x^* + t\, e_n \big)\, e_n\, dt$$
$$\Rightarrow\; B_n\, e_{n+1} = B_n\, e_n - F(x_n) = E_n\, e_n - \int_0^1 \Big( F'\big(x^* + t\, e_n\big) - F'(x^*) \Big)\, e_n\, dt.$$
Writing $E_n\, e_n = E_n\, e_{n+1} - E_n\, s_n$,
$$\big( B_n - E_n \big)\, e_{n+1} = F'(x^*)\, e_{n+1} = -E_n\, s_n - \int_0^1 \Big( F'\big(x^* + t\, e_n\big) - F'(x^*) \Big)\, e_n\, dt$$
$$\Rightarrow\; \|e_{n+1}\| \le \big\| F'(x^*)^{-1} \big\| \left[ \frac{\|E_n\, s_n\|}{\|s_n\|}\,\|s_n\| + \frac{\gamma}{2}\,\|e_n\|^2 \right].$$
With $\|s_n\| \le \|e_{n+1}\| + \|e_n\|$ and $\nu_n = \dfrac{\|E_n\, s_n\|}{\|s_n\|}$,
$$\|e_{n+1}\| \le \big\| F'(x^*)^{-1} \big\| \left[ \nu_n\,\big( \|e_{n+1}\| + \|e_n\| \big) + \frac{\gamma}{2}\,\|e_n\|^2 \right].$$
Since $\lim_{n\to\infty} \nu_n = 0$ and $\|e_n\| \to 0$, solving for $\|e_{n+1}\|$ gives
$$\|e_{n+1}\| \le \underbrace{\frac{\|F'(x^*)^{-1}\|\,\big( \nu_n + \frac{\gamma}{2}\,\|e_n\| \big)}{1 - \|F'(x^*)^{-1}\|\,\nu_n}}_{=\,\eta_n\,\to\,0 \text{ as } n \to \infty}\,\|e_n\|,$$
hence $x_n \to x^*$ q-superlinearly. $\square$
Theorem 7.1 Let the standard assumptions hold. Then there are $\delta$ and $\delta_B$ such that if $x_0 \in B(\delta)$ and $\|E_0\|_2 < \delta_B$, the Broyden sequence for the data $\big( F, x_0, B_0 \big)$ exists and $x_n \to x^*$ q-superlinearly.
Observation:
$$-F(x_n) = B_n\,(x_{n+1} - x_n) \;\Rightarrow\; B_n\, s_n = -F(x_n), \qquad E_n = B_n - F'(x^*).$$
$$E_{n+1} = B_{n+1} - F'(x^*) = B_n + \frac{F(x_{n+1})\, s_n^T}{s_n^T s_n} - F'(x^*) = E_n + \frac{F(x_{n+1})\, s_n^T}{s_n^T s_n}$$
$$= E_n \left( I - \frac{s_n\, s_n^T}{s_n^T s_n} \right) + E_n\,\frac{s_n\, s_n^T}{s_n^T s_n} + \frac{F(x_{n+1})\, s_n^T}{s_n^T s_n},$$
and, since $E_n\, s_n = B_n\, s_n - F'(x^*)\, s_n = -F(x_n) - F'(x^*)\, s_n$, we have (&)
$$E_{n+1} = E_n \left( I - \frac{s_n\, s_n^T}{s_n^T s_n} \right) + \Big( F(x_{n+1}) - F(x_n) - F'(x^*)\, s_n \Big)\,\frac{s_n^T}{s_n^T s_n}.$$
By (&) and Theorem 0,
$$F(x_{n+1}) - F(x_n) - F'(x^*)\, s_n = \int_0^1 \Big( F'\big(x_n + t\, s_n\big) - F'(x^*) \Big)\, s_n\, dt \;=:\; \Delta_n\, s_n,$$
with $\Delta_n = \displaystyle \int_0^1 \Big( F'\big(x_n + t\, s_n\big) - F'(x^*) \Big)\, dt$. Hence
$$E_{n+1} = E_n\,\big( I - P_n \big) + \Delta_n\, P_n, \qquad P_n = \frac{s_n\, s_n^T}{s_n^T s_n}. \quad (**)$$
Lemma 1. Let $\theta \in (0,1)$ and $\{\theta_n\}_{n=0}^\infty \subset (\theta,\, 2 - \theta)$.
Let $\{\varepsilon_n\}_{n=0}^\infty \subset \mathbb{R}$ be such that $\varepsilon_n \ge 0$ and $\sum_{n=0}^\infty \varepsilon_n < \infty$, and
let $\{\eta_n\}_{n=0}^\infty \subset \mathbb{R}^N$ be a set of vectors such that $\|\eta_n\|_2 = 1$ or $\eta_n = 0$.
If $\{\varphi_n\} \subset \mathbb{R}^N$ is given by
$$\varphi_{n+1} = \varphi_n - \theta_n\,\big( \eta_n^T \varphi_n \big)\,\eta_n + w_n, \qquad \|w_n\|_2 \le \varepsilon_n,$$
then $\displaystyle \lim_{n\to\infty} \eta_n^T \varphi_n = 0$.
Consider $\varphi_n = E_n^T \phi$, $\eta_n = s_n / \|s_n\|_2$, and $\varepsilon_n = \|\Delta_n^T \phi\|_2$ for a fixed vector $\phi$.
Applying Lemma 1 to the transpose of $(**)$ with $\theta_n = 1$, we have
$$\lim_{n\to\infty} \eta_n^T \varphi_n = 0 \text{ for any } \phi
\;\Rightarrow\; \lim_{n\to\infty} \frac{\phi^T E_n\, s_n}{\|s_n\|} = 0 \text{ for any } \phi
\;\Rightarrow\; \lim_{n\to\infty} \frac{\|E_n\, s_n\|}{\|s_n\|} = 0,$$
the Dennis-More condition!
$\Rightarrow\; x_n \to x^*$ q-superlinearly by Theorem 7.
To prove Theorem 7.1, now the only thing we need to do is to choose $\delta$, $\delta_B$ such that the assumption $\sum_{n=0}^\infty \varepsilon_n < \infty$ in Lemma 1 holds.
$$\varepsilon_n = \|\Delta_n^T \phi\| \le \left\| \int_0^1 \Big( F'\big(x_n + t\, s_n\big) - F'(x^*) \Big)\, dt \right\| \|\phi\|
\le \frac{\gamma}{2}\,\big( \|e_{n+1}\| + \|e_n\| \big)\,\|\phi\|, \quad (***)$$
so
$$\sum_{n=0}^\infty \varepsilon_n \le \frac{\gamma}{2}\,\|\phi\| \sum_{n=0}^\infty \big( \|e_{n+1}\| + \|e_n\| \big) \le \gamma\,\|\phi\| \sum_{n=0}^\infty \|e_n\|.$$
By Theorem 3, there exists a constant $K$ such that
$$\|e_{n+1}\| \le K\,\big( \|e_n\|^2 + \|\Delta(x_n)\|\,\|e_n\| \big), \qquad \text{here } \Delta(x_n) = F'(x^*) - F'(x_n) + E_n.$$
Clearly, when choosing $\delta$, $\delta_B$ small enough such that
$$K\,\big( \|e_n\| + \|\Delta(x_n)\| \big) < \frac{1}{2},$$
we have $\|e_{n+1}\| < \frac{1}{2}\,\|e_n\|$. As a result,
$$\sum_{n=0}^\infty \|e_n\| < \sum_{n=0}^\infty \left( \frac{1}{2} \right)^n \|e_0\| = 2\,\|e_0\| < \infty.$$
The series converges by the ratio test, because $\displaystyle \lim_{n\to\infty} \frac{\|e_{n+1}\|}{\|e_n\|} \le \frac{1}{2} < 1$. Hence $\sum_n \varepsilon_n < \infty$. $\square$
Finally, let's prove Lemma 1.
Observation: $2\theta_n - \theta_n^2 = \theta_n\,(2 - \theta_n) \ge \theta\,(2 - \theta) = 2\theta - \theta^2 > 0$ for $\theta_n \in (\theta, 2-\theta)$, $\theta \in (0,1)$.
(1) Consider $\varepsilon_n = 0$:
$$\|\varphi_{n+1}\|_2^2 = \big\| \varphi_n - \theta_n\,(\eta_n^T \varphi_n)\,\eta_n \big\|_2^2
= \|\varphi_n\|_2^2 - 2\,\theta_n\,\big( \eta_n^T \varphi_n \big)^2 + \theta_n^2\,\big( \eta_n^T \varphi_n \big)^2\,\|\eta_n\|_2^2.$$
Since $\|\eta_n\|_2 = 1$ or $\eta_n = 0$,
$$\|\varphi_{n+1}\|_2^2 \le \|\varphi_n\|_2^2 - \big( 2\theta_n - \theta_n^2 \big)\,\big( \eta_n^T \varphi_n \big)^2.$$
For any $M > 0$,
$$\big( 2\theta - \theta^2 \big) \sum_{n=0}^{M} \big( \eta_n^T \varphi_n \big)^2
\le \sum_{n=0}^{M} \big( \|\varphi_n\|_2^2 - \|\varphi_{n+1}\|_2^2 \big)
= \|\varphi_0\|_2^2 - \|\varphi_{M+1}\|_2^2 \le \|\varphi_0\|_2^2 < \infty$$
$$\Rightarrow\; \lim_{n\to\infty} \eta_n^T \varphi_n = 0.$$
(2) Consider $\varepsilon_n \ne 0$. Let's use the inequality $\sqrt{a - b} \le \sqrt{a} - \dfrac{b}{2\sqrt{a}}$ (for $0 \le b \le a$):
$$\|\varphi_{n+1}\|_2 \le \big\| \varphi_n - \theta_n\,(\eta_n^T \varphi_n)\,\eta_n \big\|_2 + \varepsilon_n
\le \sqrt{ \|\varphi_n\|_2^2 - \big( 2\theta_n - \theta_n^2 \big)\,\big( \eta_n^T \varphi_n \big)^2 } + \varepsilon_n$$
$$\le \|\varphi_n\|_2 - \frac{\big( 2\theta_n - \theta_n^2 \big)\,\big( \eta_n^T \varphi_n \big)^2}{2\,\|\varphi_n\|_2} + \varepsilon_n
\;\Rightarrow\; \frac{\big( 2\theta_n - \theta_n^2 \big)\,\big( \eta_n^T \varphi_n \big)^2}{2\,\|\varphi_n\|_2} \le \|\varphi_n\|_2 - \|\varphi_{n+1}\|_2 + \varepsilon_n.$$
Since $\|\varphi_{n+1}\|_2 \le \|\varphi_n\|_2 + \varepsilon_n$ and $\sum_n \varepsilon_n < \infty$,
$\exists\, \mu$ such that $\|\varphi_n\|_2 \le \mu$ for all $n$
$$\Rightarrow\; \frac{2\theta - \theta^2}{2\mu} \sum_{n=0}^{M} \big( \eta_n^T \varphi_n \big)^2
\le \|\varphi_0\|_2 - \|\varphi_{M+1}\|_2 + \sum_{n=0}^{M} \varepsilon_n < \infty \quad \text{for any given } M$$
$$\Rightarrow\; \lim_{n\to\infty} \eta_n^T \varphi_n = 0. \qquad \square$$