
A Wide-Neighborhood Interior-Point Method for P∗(κ) Complementarity Problem

Yanli Lv and Mingwang Zhang

Abstract— In this paper we propose a new potential reduction interior-point method for a kind of nonlinear nonmonotone complementarity problem, the P∗(κ) complementarity problem, based on the wide neighborhood N−∞(β). This method is a generalization of a result of Mizuno, Todd and Ye. Although the search direction of this algorithm is the same as that of a path-following algorithm, the step size is determined as the minimum point of a potential function over the neighborhood. Therefore, the duality gap is reduced by a fixed positive constant at each step. Finally, the polynomial complexity O((2κ + 1 + max{κ, 1/4}M)nt) is attained when the problem satisfies a scaled Lipschitz condition, where t is a positive constant and M is the constant defined in the condition.

I. INTRODUCTION

THE publication of Karmarkar's paper initiated a new research area that is now referred to as Interior-Point Methods (IPMs), which not only have polynomial complexity but are also very efficient in practice. The most efficient methods are the so-called primal-dual path-following methods. Modern commercial solvers contain efficient implementations of both the Simplex method and a primal-dual IPM, so they offer the user the opportunity to choose the method that best fits the problem at hand. An interesting fact is that almost all feasible primal-dual methods aim to produce a sequence of points in a neighborhood of the central path, which is used as a guideline to the optimal set. A large body of implementation experience shows that primal-dual IPMs using wide neighborhoods perform much better than their counterparts based on small neighborhoods. Thus, many authors have considered adapting primal-dual IPMs to work in a wide neighborhood [1-5]. Such neighborhoods are always determined by some proximity measure which is used to measure the distance between a point and the central path, as follows.

δ2(x, s, µ) = ∥xs/µ − e∥,

δ∞(x, s, µ) = ∥xs/µ − e∥∞,

δ−∞(x, s, µ) = ∥xs/µ − e∥−∞,

Yanli Lv and Mingwang Zhang are with the College of Science, China Three Gorges University, YiChang, 443002, P. R. China (email: [email protected]).

This work was supported by Foundation of CTGU (NO. KJ2008B085)

where ∥x∥−∞ = ∥x−∥∞ and (x−)j = min{xj, 0}. Thus the corresponding neighborhoods become

N2(β) = {(x, s) > 0 : δ2(x, s, µ) ≤ β},

N∞(β) = {(x, s) > 0 : δ∞(x, s, µ) ≤ β},

N−∞(β) = {(x, s) > 0 : δ−∞(x, s, µ) ≤ β}.

N2(β) is considered to be a small neighborhood, while N−∞(β) is considered a wide neighborhood. An important fact is that for each β ∈ (0, 1) the following inclusion holds:

N2(β) ⊆ N∞(β) ⊆ N−∞(β).
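These measures and the nesting of the neighborhoods can be illustrated numerically. The sketch below uses arbitrary sample data; it relies on the pointwise ordering ∥v−∥∞ ≤ ∥v∥∞ ≤ ∥v∥ of the three norms, which is exactly why each neighborhood in the chain contains the previous one.

```python
import numpy as np

def proximity_measures(x, s, mu):
    """Return (delta_2, delta_inf, delta_minus_inf) for the point (x, s) at mu."""
    v = x * s / mu - np.ones_like(x)                    # the vector xs/mu - e
    d2 = np.linalg.norm(v)                              # Euclidean norm
    d_inf = np.linalg.norm(v, np.inf)                   # infinity norm
    d_minus = np.linalg.norm(np.minimum(v, 0), np.inf)  # ||v^-||_inf
    return d2, d_inf, d_minus

# Arbitrary strictly positive sample point.
x = np.array([1.0, 0.8, 1.3])
s = np.array([0.9, 1.1, 0.7])
mu = float(x @ s) / x.size                              # mu = x^T s / n

d2, d_inf, d_minus = proximity_measures(x, s, mu)
# The measures are ordered, so (x, s) in N2(beta) implies (x, s) in
# N_inf(beta), which in turn implies (x, s) in N-_inf(beta).
assert d_minus <= d_inf <= d2
```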

The nonlinear complementarity problem (NCP) forms a fairly general class of mathematical programming problems with a large number of applications. For instance, any convex programming problem can be modeled as a monotone nonlinear complementarity problem (MNCP) [6]. Furthermore, it is a link between such concepts as fixed point theory, variational inequalities, linear and nonlinear analysis, game theory and optimal control [7-9]. For a good introduction to NCP and traditional solution methods we refer the reader to the book of Cottle et al. [10].

Since the fundamental work of McLinden [11], IPMs for NCP have been extensively studied by many authors. For example, Megiddo [12] developed the central path theory for the complementarity problem. Then, Wright and Ralph proposed an infeasible algorithm with superlinear convergence for monotone nonlinear complementarity problems in [13]. Jansen, Roos, Terlaky and Yoshise provided an analysis of the polynomiality of primal-dual IPMs for NCP [14]. Zhao and Han developed a path-following algorithm for a class of nonmonotone nonlinear complementarity problems [15]. Both [14] and [15] used a scaled Lipschitz condition to analyze the polynomial complexity of their algorithms.

Based on the algorithm described by Mizuno, Todd and Ye [16] for linear programming, we introduce a new potential reduction algorithm for a kind of nonlinear nonmonotone complementarity problem, the P∗(κ) complementarity problem, based on the wide neighborhood N−∞(β). This method belongs to the class of primal-dual methods. Although the search direction of this algorithm is the same as that of a path-following algorithm, the step size is determined as the minimum point of a potential function over the neighborhood. Therefore, the duality gap is reduced by a fixed positive constant at each step of the method. Finally, the polynomial complexity O((2κ + 1 + max{κ, 1/4}M)nt) is attained when the problem satisfies a scaled Lipschitz condition, where t is a positive constant and M is the constant defined in the condition.

Third International Workshop on Advanced Computational Intelligence, August 25-27, 2010, Suzhou, Jiangsu, China. 978-1-4244-6337-4/10/$26.00 ©2010 IEEE

This paper is organized as follows. Section 2 deals with the fundamental properties of the P∗(κ) NCP. Section 3 describes a wide-neighborhood potential reduction algorithm. In Section 4 we prove that our algorithm has polynomial complexity. Finally, concluding remarks are given in Section 5.

II. PRELIMINARIES AND NOTATIONS

We consider the NCP of finding a vector pair (x, s) ∈ R^n × R^n such that

s = F(x), (x, s) ≥ 0, x^T s = 0, (1)

where F : R^n_+ → R^n_+ is continuously differentiable in an open set containing the nonnegative orthant of R^n (denoted by R^n_+). In the literature the linear complementarity problem has gained much attention [10]. In this special case, the mapping F is given by

F(x) = Ax + q

for A ∈ R^{n×n} and q ∈ R^n. In this paper we will be interested in the P∗(κ) complementarity problem. We assume the mapping F is a P∗(κ) mapping in the sense that there exists a positive constant κ such that

(x − y)^T (F(x) − F(y)) ≥ −4κ Σ_{i∈I+} (x_i − y_i)(F_i(x) − F_i(y)), (2)

where I+ = {i | 1 ≤ i ≤ n, (x_i − y_i)(F_i(x) − F_i(y)) ≥ 0} and F_i(x) = (F(x))_i. By taking κ = 0, problem (1) is monotone. Furthermore, for any x ∈ R^n_+, the Jacobian matrix of F (denoted by F′(x)) is a P∗(κ)-matrix [15]. In other words, for the constant κ such that (2) holds, the following inequality holds:

z^T F′(x)z ≥ −4κ Σ_{i∈I+} z_i (F′(x)z)_i for all z ∈ R^n, (3)

where I+ = {i | 1 ≤ i ≤ n, z_i (F′(x)z)_i ≥ 0}. To be able to show the polynomial complexity of our algorithm, we impose the following condition on the mapping F.

Scaled Lipschitz Condition (SLC). If x > 0 and ∥X^−1 ∆x∥∞ ≤ 1, then there exists a scalar M > 0 such that

∥X(F(x + ∆x) − F(x) − F′(x)∆x)∥∞ ≤ (M/2)|∆x^T F′(x)∆x|. (4)

The above condition is similar to Zhao's ([15]) but different from that used by Jansen et al. ([14]), which depends not only on the mapping F but also on the displacement used in their algorithm.
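Returning to inequality (3): for a fixed matrix, the smallest admissible κ can be probed numerically by rearranging (3) for each sampled vector. In the sketch below the test matrices and the sampling scheme are illustrative assumptions, and random sampling yields only a lower estimate of κ, not a certificate.

```python
import numpy as np

def sampled_kappa(M, trials=2000, seed=0):
    """Estimate the smallest kappa with z^T M z >= -4*kappa*sum_{i in I+} z_i (M z)_i
    over randomly sampled z. This is a sampled lower bound only, not a proof."""
    rng = np.random.default_rng(seed)
    kappa = 0.0
    for _ in range(trials):
        z = rng.standard_normal(M.shape[0])
        w = z * (M @ z)                  # componentwise products z_i (M z)_i
        pos = w[w > 0].sum()             # the sum over the index set I+
        if pos > 0 and w.sum() < 0:      # inequality (3) is binding only here
            kappa = max(kappa, -w.sum() / (4.0 * pos))
    return kappa

# A positive semidefinite matrix satisfies z^T M z >= 0, so it is P*(0).
psd = np.array([[2.0, 1.0], [1.0, 2.0]])
assert sampled_kappa(psd) == 0.0
```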

In what follows, we denote the set of all feasible points and the set of strictly feasible points of the P∗(κ) complementarity problem by F = {(x, s) : s = F(x), (x, s) ≥ 0} and F0 = {(x, s) : s = F(x), (x, s) > 0}, respectively. Without loss of generality, we assume that F0 ≠ Ø and, moreover, that we have an initial point (x0, s0) ∈ F0 ∩ N−∞(β). The parameter µ > 0 is set to the duality gap x^T s/n.

Conventions. In this paper, we use the following notations. For any vector x = (x1, x2, ..., xn), x^q denotes the n-dimensional vector whose ith component is x_i^q. x^T s is the scalar product of two vectors, ∥x∥ is the Euclidean norm of x, and ∥x∥∞ is the infinity norm. X and ∆X are the diagonal matrices formed from the vectors x and ∆x, respectively. e is the n-dimensional vector of ones and I is the n-dimensional identity matrix. The vector e/x is written as x^−1 and x^−T s^−1 = (x^−1)^T s^−1 = Σ_{i=1}^n 1/(x_i s_i).

III. THE POTENTIAL REDUCTION ALGORITHM

In this section we propose a potential reduction algorithm based on the wide neighborhood N−∞(β). The primal-dual potential function introduced in [16] is

ψ(x, s) = ρ log x^T s − Σ_{i=1}^n log x_i s_i,

where ρ > n. By the arithmetic-geometric mean inequality, we can obtain that

ψ(x, s) ≥ (ρ − n) log x^T s + n log n.
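This lower bound is easy to verify numerically. The sketch below uses arbitrary random sample points; natural logarithms are used, since the inequality is independent of the base.

```python
import numpy as np

rng = np.random.default_rng(1)
n, rho = 5, 8.0                          # any rho > n will do
for _ in range(100):
    x = rng.uniform(0.1, 2.0, n)         # arbitrary strictly positive points
    s = rng.uniform(0.1, 2.0, n)
    psi = rho * np.log(x @ s) - np.sum(np.log(x * s))
    # Lower bound obtained from the arithmetic-geometric mean inequality.
    assert psi >= (rho - n) * np.log(x @ s) + n * np.log(n) - 1e-9
```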

So if we reduce ψ to −(ρ − n)t + n log n, we shall have x^T s ≤ 2^−t as desired [16]. Then, starting from the current point, which is a strictly feasible point (x, s) ∈ F0 ∩ N−∞(β), the search direction (∆x, ∆s) can be calculated as the solution of the linear system

∆s = F′(x)∆x, (5)

S∆x + X∆s = γµe − Xs. (6)

When γ = 0, the direction generated by (5) and (6) is called the primal-dual affine scaling direction. If γ = 1, the direction is called the centering direction. When 0 < γ < 1, the direction can be viewed as a combination of the two directions above. In what follows, the constants are given by

β ∈ [13/17, 1), γ ≤ 2(1 − β), n ≥ 2. (7)
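For the linear model F(x) = Ax + q (so F′(x) = A and ∆s = A∆x), substituting (5) into (6) reduces the system to a single linear solve in ∆x. A minimal sketch; the matrix, the point and the value of γ are arbitrary illustrative choices:

```python
import numpy as np

def search_direction(A, x, s, gamma):
    """Solve (5)-(6) for F(x) = Ax + q: ds = A dx and S dx + X ds = gamma*mu*e - Xs."""
    n = x.size
    mu = float(x @ s) / n
    # Substituting ds = A dx into (6) gives (S + X A) dx = gamma*mu*e - x*s.
    dx = np.linalg.solve(np.diag(s) + np.diag(x) @ A,
                         gamma * mu * np.ones(n) - x * s)
    ds = A @ dx
    return dx, ds

A = np.array([[2.0, 1.0], [1.0, 3.0]])   # illustrative positive definite Jacobian
x = np.array([1.0, 2.0])
s = np.array([2.0, 1.0])
dx, ds = search_direction(A, x, s, gamma=0.4)

mu = float(x @ s) / x.size
assert np.allclose(ds, A @ dx)                           # equation (5)
assert np.allclose(s * dx + x * ds, 0.4 * mu - x * s)    # equation (6)
```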

Denote

x(θ) = x + θ∆x,

s(θ) = s + θ∆s + F(x(θ)) − F(x) − θF′(x)∆x,

µ+ = µ(θ) = x(θ)^T s(θ)/n.

Remark 3.1. Since (x, s) ∈ F0, we have s = F(x). This relation and equation (5) imply that s(θ) = s + θ∆s + F(x(θ)) − F(x) − θF′(x)∆x = F(x(θ)). Therefore, (x(θ), s(θ)) ∈ N−∞(β) is the necessary condition for the strict feasibility of (x(θ), s(θ)) to be retained.

By taking a step along the search direction, where the step size θ̄ is chosen so that (x(θ̄), s(θ̄)) ∈ N−∞(β) and ψ(x(θ̄), s(θ̄)) ≤ ψ(x(θ), s(θ)) for each θ with (x(θ), s(θ)) ∈ N−∞(β), we can determine the next iterate as

x+ = x(θ̄), s+ = s(θ̄), µ+ = µ(θ̄).


Then repeat the step until a certain stopping criterion is satisfied.

Now the whole algorithm can be stated as follows.

Algorithm 3.1.
Given a tolerance t > 0, parameters satisfying (7), and an initial solution (x0, s0) ∈ N−∞(β) which satisfies ψ(x0, s0) ≤ (ρ − n)t + n log n. Set k = 0.
Step 1. If (x^k)^T s^k ≤ 2^−t, stop. Otherwise, go to Step 2.
Step 2. Set (x, s) = (x^k, s^k), then compute (∆x, ∆s) from the system (5) and (6).
Step 3. Denote x(θ) = x + θ∆x and s(θ) = s + θ∆s + F(x(θ)) − F(x) − θF′(x)∆x. Define θ̄ by minimizing ψ(x(θ), s(θ)) over all θ ∈ (0, 1] such that (x(θ), s(θ)) ∈ N−∞(β).
Step 4. Let x^{k+1} = x(θ̄), s^{k+1} = s(θ̄) and µ_{k+1} = (x^{k+1})^T s^{k+1}/n. Set k := k + 1 and go to Step 1.
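The loop above can be sketched end-to-end for the linear model F(x) = Ax + q, for which h(θ) vanishes and therefore s(θ) = s + θ∆s exactly. The problem data, the value of ρ, and the coarse grid search standing in for the exact one-dimensional minimization in Step 3 are all illustrative assumptions:

```python
import numpy as np

def potential(x, s, rho):
    return rho * np.log(x @ s) - np.sum(np.log(x * s))

def in_wide_neighborhood(x, s, beta):
    """(x, s) in N-_inf(beta): x, s > 0 and x_i s_i >= (1 - beta) * mu for all i."""
    if np.any(x <= 0) or np.any(s <= 0):
        return False
    mu = float(x @ s) / x.size
    return bool(np.all(x * s >= (1.0 - beta) * mu))

def algorithm_31(A, q, x, tol=1e-6, beta=13/17, gamma=0.4, max_iter=2000):
    n = x.size
    rho = n + 10.0 * n * n                  # any rho > n; a large rho weights the gap
    s = A @ x + q
    for _ in range(max_iter):
        if x @ s <= tol:                    # Step 1: stopping criterion
            break
        mu = float(x @ s) / n               # Step 2: direction from (5)-(6)
        dx = np.linalg.solve(np.diag(s) + np.diag(x) @ A,
                             gamma * mu * np.ones(n) - x * s)
        ds = A @ dx
        # Step 3: grid search for the step minimizing the potential among
        # neighborhood-feasible candidates (stand-in for the exact minimizer).
        best, best_val = None, potential(x, s, rho)
        for theta in np.linspace(1e-3, 1.0, 400):
            xt, st = x + theta * dx, s + theta * ds
            if in_wide_neighborhood(xt, st, beta):
                val = potential(xt, st, rho)
                if val < best_val:
                    best, best_val = (xt, st), val
        if best is None:                    # no improving feasible step found
            break
        x, s = best                         # Step 4: accept the new iterate
    return x, s

A = np.array([[2.0, 0.5], [0.5, 1.0]])      # monotone (positive definite) case
q = np.array([0.1, 0.2])
x0 = np.array([1.0, 1.0])                   # (x0, s0) lies in the wide neighborhood
x, s = algorithm_31(A, q, x0)
assert in_wide_neighborhood(x, s, 13/17)
assert x @ s < 0.5 * (x0 @ (A @ x0 + q))    # duality gap strictly reduced
```

With a positive definite A the direction satisfies (∆x)^T∆s ≥ 0, and the monotone decrease of the potential forces the duality gap down, mirroring the analysis of the next section.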

IV. ITERATION COMPLEXITY ANALYSIS

In this section, we analyze the previous algorithm and compute the total number of iterations it needs. Recalling Remark 3.1, we are left to estimate a lower bound on the step length that keeps the next iterate in the neighborhood N−∞(β). To this end, we first explore some properties which will play an important role in our later analysis.

Lemma 4.1. Assume that (3) is satisfied, let (x, s) ∈ N−∞(β), and let (∆x, ∆s) be the solution of the linear system (5) and (6). Then we have
(i) ∥D∆x∥^2 + ∥D^−1∆s∥^2 + 2(∆x)^T∆s = ∥r∥^2,
(ii) −κ∥r∥^2 ≤ (∆x)^T∆s ≤ (1/4)∥r∥^2,
(iii) ∥∆X∆s∥ ≤ ((2κ + 1)/2)∥r∥^2,
where D = (X^−1 S)^{1/2} and r = (XS)^{−1/2}(γµe − Xs).

The following result can be easily proved from Lemma 4.1.

Corollary 4.2. Under the conditions of Lemma 4.1, we have
(i) ∥∆X∆s∥∞ ≤ ((2κ + 1)/2)∥r∥^2,
(ii) max{∥D∆x∥, ∥D^−1∆s∥} ≤ √(2κ + 1) ∥r∥.

To use Lemma 4.1 and Corollary 4.2 we also need to bound r. The following result is useful.

Lemma 4.3. Let (x, s) ∈ N−∞(β), where β and γ are given by (7). Then we have

∥r∥^2 ≤ nµ.

Proof. Since (x, s) ∈ N−∞(β), for all i = 1, 2, ..., n we have x_i s_i − µ ≥ −βµ, which is equivalent to x_i s_i ≥ (1 − β)µ. Therefore, together with µ = x^T s/n, we have

∥r∥^2 = Σ_{i=1}^n (γµ − x_i s_i)^2/(x_i s_i) = Σ_{i=1}^n ((γµ)^2/(x_i s_i) − 2γµ + x_i s_i) ≤ n(γµ)^2/((1 − β)µ) − 2nγµ + nµ ≤ nµ.

The first and second inequalities above follow from x_i s_i ≥ (1 − β)µ and γ ≤ 2(1 − β), respectively. The proof is complete.
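The bound of Lemma 4.3 can also be checked numerically on sampled neighborhood points. In the sketch below the sampling ranges and the value of γ are arbitrary illustrations, with γ ≤ 2(1 − β) as (7) requires:

```python
import numpy as np

rng = np.random.default_rng(2)
beta, gamma, n = 13/17, 0.4, 4            # gamma <= 2*(1 - beta) = 8/17
checked = 0
for _ in range(500):
    x = rng.uniform(0.5, 2.0, n)
    s = rng.uniform(0.5, 2.0, n)
    mu = float(x @ s) / n
    if np.any(x * s < (1.0 - beta) * mu):
        continue                          # keep only points in N-_inf(beta)
    r = (gamma * mu - x * s) / np.sqrt(x * s)
    assert r @ r <= n * mu + 1e-9         # the bound ||r||^2 <= n*mu
    checked += 1
assert checked > 0                        # some sampled points were in the neighborhood
```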

For convenience in further discussions, we compute X(θ)s(θ) and µ(θ) as follows.

Lemma 4.4. Let (x, s) ∈ N−∞(β) and let (∆x, ∆s) be the solution of the linear system (5) and (6). Then

X(θ)s(θ) = (1 − θ)Xs + θγµe + θ^2 ∆X∆s + h(θ),

µ(θ) = (1 − θ)µ + θγµ + (1/n)θ^2 (∆x)^T∆s + e^T h(θ)/n,

where h(θ) = (X + θ∆X)(F(x(θ)) − F(x) − θF′(x)∆x).

Proof. According to the definitions of x(θ) and s(θ) we get

X(θ)s(θ) = (X + θ∆X)(s + θ∆s + F(x(θ)) − F(x) − θF′(x)∆x) = (X + θ∆X)(s + θ∆s) + (X + θ∆X)(F(x(θ)) − F(x) − θF′(x)∆x) = (1 − θ)Xs + θγµe + θ^2 ∆X∆s + h(θ).

Then we have

µ(θ) = x(θ)^T s(θ)/n = (1 − θ)µ + θγµ + (1/n)θ^2 (∆x)^T∆s + e^T h(θ)/n.

This proves the lemma.

For further analysis, we need to compute a bound for h(θ).

Lemma 4.5. If θ∥X^−1∆x∥∞ ≤ 1, then we have the following inequality:

∥h(θ)∥∞ ≤ λMθ^2 ∥r∥^2,

where λ = max{κ, 1/4} and M is the constant defined in (4).

Proof. By a simple computation we can derive from Lemma 4.1, Corollary 4.2 and (4) that

∥h(θ)∥∞ = ∥X(I + θX^−1∆X)(F(x(θ)) − F(x) − θF′(x)∆x)∥∞ ≤ 2∥X(F(x(θ)) − F(x) − θF′(x)∆x)∥∞ ≤ M|(θ∆x)^T F′(x)(θ∆x)| ≤ λMθ^2 ∥r∥^2.

The first inequality above follows from θ∥X^−1∆x∥∞ ≤ 1. This proves the lemma.

In what follows we provide a lower bound on the maximal feasible step length.

Lemma 4.6. Let (x, s) ∈ N−∞(β) and let (∆x, ∆s) be computed from (5) and (6). Let θ̄ be the step length in the algorithm and let λ be defined as in Lemma 4.5. Then the step length satisfies

θ̄ ≥ θ1 ≡ min{ 1/∥X^−1∆x∥∞, βγ/((2κ + 1 + (2 − β)λM)n) }.

Proof. It is evident that for all θ ∈ (0, θ1] we have θ∥X^−1∆x∥∞ ≤ 1, which implies that Lemma 4.5 holds. Moreover, (x, s) ∈ N−∞(β) indicates that Xs − (1 − β)µe ≥ 0. Thus we can obtain the following result from the expressions for X(θ)s(θ) and µ(θ):

X(θ)s(θ) − (1 − β)µ(θ)e
= (1 − θ)(Xs − (1 − β)µe) + θγµe − (1 − β)θγµe + θ^2 ∆X∆s − (1 − β)θ^2 ((∆x)^T∆s/n)e + h(θ) − (1 − β)(e^T h(θ)/n)e
≥ θβγµe − ((2κ + 1)/2)θ^2 nµe − (1 − β)(θ^2 µ/4)e − (2 − β)θ^2 λMnµe
= θµe( βγ − θ((2(2κ + 1) + 4(2 − β)λM)n + 1 − β)/4 ) ≥ 0,

where the last inequality holds because θ ≤ θ1 ≤ 4βγ/((2(2κ + 1) + 4(2 − β)λM)n + 1 − β).

On the other hand, by the strict feasibility of (x, s) we have both x > 0 and s = F(x) > 0. Together with θ∥X^−1∆x∥∞ ≤ 1, we can deduce that x(θ) = x + θ∆x = X(e + θX^−1∆x) > 0. Furthermore, we have s(θ) = s + θ∆s + F(x(θ)) − F(x) − θF′(x)∆x = F(x(θ)) > 0, where the last inequality follows from the continuity of F : R^n_+ → R^n_+. Summing up all these results, we get that for all θ ∈ (0, θ1], (x(θ), s(θ)) ∈ N−∞(β). This finishes the proof.

It follows immediately that

Corollary 4.7. Under the assumptions of Lemma 4.6, suppose that β and γ satisfy (7). Then we have

θ̄ ≥ θ1 = βγ/((2κ + 1 + (2 − β)λM)n).

Proof. (x, s) ∈ N−∞(β) implies ∥(Xs)^−1/2∥∞ ≤ 1/√((1 − β)µ). On the other hand, the second result of Corollary 4.2 and Lemma 4.3 imply ∥D∆x∥ ≤ √((2κ + 1)nµ). Therefore we have

βγ/((2κ + 1 + (2 − β)λM)n) ∥X^−1∆x∥∞
≤ βγ/((2κ + 1 + (2 − β)λM)n) ∥(Xs)^−1/2∥∞ ∥D∆x∥
≤ βγ√(2κ + 1) / ((2κ + 1 + (2 − β)λM)√(1 − β)√n)
≤ βγ√(2κ + 1) / ((2κ + 1)√(1 − β)√n)
≤ 2β√(1 − β)/√((2κ + 1)n) < 1.

The fourth inequality follows from γ ≤ 2(1 − β). Hence 1/∥X^−1∆x∥∞ > βγ/((2κ + 1 + (2 − β)λM)n), so the minimum in Lemma 4.6 is attained by the second term. This completes the proof.

We proceed by estimating the decrease of the value of the potential function at each step.

Lemma 4.8. Let β and γ be given by (7), and let

ρ = n + (16(2κ + 1 + (2 − β)λM)/(βγ(8 − 17γ)) log(1/(1 − β))) n^2.

Then any step length θ̄ ≥ θ1, with θ1 defined in the above lemma, achieves a reduction in the potential function of at least n log(1/(1 − β)).

Proof. By the values of the parameters defined in (7) we have ρ > n. Then by the choice of θ̄ we can deduce that

ψ(x(θ̄), s(θ̄)) − ψ(x, s) ≤ ψ(x(θ1), s(θ1)) − ψ(x, s)
= ρ log (x(θ1))^T s(θ1) − log Π_{i=1}^n x_i(θ1)s_i(θ1) − ρ log x^T s + log Π_{i=1}^n x_i s_i
≤ (ρ − n) log(µ(θ1)/µ) + Σ_{i=1}^n log(µ(θ1)/(x_i(θ1)s_i(θ1)))
≤ (ρ − n) log((1 − θ1) + θ1γ + θ1^2/4 + λMnθ1^2) + n log(1/(1 − β))
≤ −(ρ − n)((8 − 17γ)/8)θ1 + n log(1/(1 − β))
= −n log(1/(1 − β)),

where the third inequality uses Lemma 4.4, Lemma 4.5 and x_i(θ1)s_i(θ1) ≥ (1 − β)µ(θ1).

The lemma is proved.

Now we are ready to prove the polynomial complexity of Algorithm 3.1.

Theorem 4.9. For any given t > 0, assume that a strictly feasible starting point (x0, s0) is available with ψ(x0, s0) ≤ (ρ − n)t + n log n. Then Algorithm 3.1 generates a point (x, s) satisfying x^T s ≤ 2^−t in at most

O((2κ + 1 + M max{κ, 1/4})nt)

iterations, where M is the constant defined in the SLC.

Proof. By using Lemma 4.8 and the assumption ψ(x0, s0) ≤ (ρ − n)t + n log n, we have

ψ(x^k, s^k) ≤ −kn log(1/(1 − β)) + (ρ − n)t + n log n.

Therefore, the stopping criterion is met once

k ≥ 32(2κ + 1 + (2 − β)λM)/(βγ(8 − 17γ)) nt.

Since β and γ are constants, the O((2κ + 1 + M max{κ, 1/4})nt) iteration complexity of the algorithm follows directly.

V. CONCLUSION

In this paper, we introduced a wide-neighborhood potential reduction algorithm for P∗(κ) complementarity problems. Since the mapping F is nonlinear, the expressions for X(θ)s(θ) and µ(θ) are much more complicated, so the analysis differs from the one in the linear programming case. Our results show that if a strictly feasible starting point is available, then our algorithm has an O((2κ + 1 + M max{κ, 1/4})nt) worst-case iteration complexity when the problem satisfies a scaled Lipschitz condition. Unfortunately, the parameter κ of the matrix F′(x) is in general not known a priori, and furthermore there is no polynomial algorithm to decide whether a matrix is a P∗(κ) matrix or not (see [17]). Moreover, the scaled Lipschitz condition is satisfied only in some special cases and seems to be restrictive for many nonlinear maps. An interesting problem for future research is how this assumption can be relaxed so as to make the algorithm applicable to a wider class of problems.


REFERENCES

[1] C. Gonzaga, "Complexity of predictor-corrector algorithms for LCP based on a large neighborhood of the central path," SIAM Journal on Optimization, vol. 10, pp. 183-194, 1999.

[2] F. A. Potra, "A superlinearly convergent predictor-corrector method for degenerate LCP in a wide neighborhood of the central path with O(nL) iteration complexity," Mathematical Programming, vol. 100, pp. 317-337, 2004.

[3] F. A. Potra and X. Liu, "Predictor-corrector methods for sufficient linear complementarity problems in a wide neighborhood of the central path," Optimization Methods and Software, vol. 20, pp. 145-168, 2005.

[4] G. M. Cho and M. K. Kim, "A new large-update interior point algorithm for P∗(κ) LCPs based on kernel functions," Applied Mathematics and Computation, vol. 182, pp. 1169-1183, 2006.

[5] F. A. Potra, "Corrector-predictor methods for monotone linear complementarity problems in a wide neighborhood of the central path," Mathematical Programming, vol. 111, pp. 243-272, 2008.

[6] J. Sun and G. Zhao, "Global and local quadratic convergence of a long-step adaptive-mode interior point method for some monotone variational inequality problems," SIAM Journal on Optimization, vol. 8, pp. 123-139, 1998.

[7] G. Isac, Complementarity Problems, Lecture Notes in Mathematics, vol. 1528, Springer-Verlag, 1992.

[8] P. T. Harker and J. S. Pang, "Finite-dimensional variational inequality and nonlinear complementarity problems: a survey of theory, algorithms and applications," Mathematical Programming, vol. 48, pp. 161-220, 1990.

[9] J. Sun and G. Zhao, "A quadratically convergent polynomial long-step algorithm for a class of nonlinear monotone complementarity problems," Technical Report, Department of Mathematics, National University of Singapore, 1995.

[10] R. W. Cottle, J. S. Pang and R. E. Stone, The Linear Complementarity Problem. Academic Press Inc., San Diego, CA, 1992.

[11] L. McLinden, "The analogue of Moreau's proximation theorem, with applications to the nonlinear complementarity problems," Pacific Journal of Mathematics, vol. 88, pp. 101-161, 1980.

[12] N. Megiddo, "Pathways to the optimal set in linear programming," Progress in Mathematical Programming: Interior Point and Related Methods, pp. 131-158, 1989.

[13] P. Tseng, "An infeasible path-following method for monotone complementarity problems," SIAM Journal on Optimization, vol. 7, pp. 386-402, 1997.

[14] B. Jansen, C. Roos, T. Terlaky and A. Yoshise, "Polynomiality of primal-dual affine scaling algorithms for nonlinear complementarity problems," Mathematical Programming, vol. 78, pp. 315-345, 1997.

[15] Y. B. Zhao and J. Y. Han, "Two interior-point methods for nonlinear P∗(τ) complementarity problems," Journal of Optimization Theory and Applications, vol. 102, pp. 659-679, 1999.

[16] S. Mizuno, M. J. Todd and Y. Ye, "On adaptive-step primal-dual interior-point algorithms for linear programming," Mathematics of Operations Research, vol. 18, pp. 964-981, 1993.

[17] T. Illes and M. Nagy, "A Mizuno-Todd-Ye type predictor-corrector algorithm for sufficient linear complementarity problems," European Journal of Operational Research, vol. 181, pp. 1097-1111, 2007.
