TRANSCRIPT
Toward a Strongly Polynomial Algorithm for the Linear Complementarity Problem with Sufficient Matrices
Komei Fukuda
ETH Zurich, Switzerland
https://www.inf.ethz.ch/personal/fukudak/
September 22, 2015
Linear Programming Problem (LP)
(Given A ∈ R^{m×d}, b ∈ R^m, c ∈ R^d)
LP: max c^T x = ∑_{j=1}^d c_j x_j
subject to Ax ≤ b (i.e. ∑_{j=1}^d a_{ij} x_j ≤ b_i, ∀ i = 1, ..., m)
x ≥ 0 (i.e. x_j ≥ 0, ∀ j = 1, ..., d).
The set of feasible solutions {x : Ax ≤ b, x ≥ 0} is a convex polyhedron.
[Figure: a convex polyhedron in (x1, x2, x3)-space, shown from two angles.]
Three Algorithms for Linear Programming
• Simplex Method
• Criss-Cross Method
• Interior-Point Method
All existing polynomial algorithms are based on interior-point methods (or the ellipsoid method).
Our ultimate goal is to find a strongly polynomial algorithm.
Convex Quadratic Programming Problem (convex QP)
(Given A ∈ R^{m×d}, G ∈ R^{d×d} positive semidefinite, b ∈ R^m, c ∈ R^d)
QP: max c^T x − (1/2) x^T G x
subject to Ax ≤ b
x ≥ 0.
The convex QP (and the LP) admits a certificate for optimality.
Theorem [QP Duality Theorem, Dorn 1960]
If the QP has an optimal solution, then its dual QD:
QD: min b^T y + (1/2) x^T G x
subject to Gx + A^T y ≥ c
y ≥ 0
has an optimal solution and the optimal values are equal. Moreover, the x-values of the optimal solutions can be taken to be the same.
Convex Quadratic Programming Problem (convex QP)
All algorithms solving the convex QP (and the LP) aim at finding this
certificate (i.e. primal and dual solutions).
Moreover, this optimality is equivalent to the Karush-Kuhn-Tucker (KKT) conditions:
Primal feasibility: Ax ≤ b, x ≥ 0
Dual feasibility: y ≥ 0, Gx + A^T y ≥ c
Complementary slackness: A_i x = b_i or y_i = 0 for all i, and x_j = 0 or (Gx + A^T y)_j = c_j for all j.
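As a sanity check, the KKT conditions and the Dorn duality above can be verified numerically on a toy one-variable instance; the data A, G, b, c below are made up for illustration:

```python
import numpy as np

# Toy convex QP (illustrative data): max 4x - x^2  s.t.  x <= 1, x >= 0
A = np.array([[1.0]])   # m = d = 1
G = np.array([[2.0]])   # positive semidefinite
b = np.array([1.0])
c = np.array([4.0])

# Optimal primal/dual pair found by hand: the constraint x <= 1 is active,
# so x* = 1, and Gx* + A^T y* = c forces y* = 2.
x = np.array([1.0])
y = np.array([2.0])

# KKT conditions
assert np.all(A @ x <= b) and np.all(x >= 0)            # primal feasibility
assert np.all(y >= 0) and np.all(G @ x + A.T @ y >= c)  # dual feasibility
assert np.all((A @ x - b) * y == 0)                     # slackness over i
assert np.all(x * (G @ x + A.T @ y - c) == 0)           # slackness over j

# Dorn duality: the optimal values coincide
primal = c @ x - 0.5 * x @ G @ x   # 4 - 1 = 3
dual = b @ y + 0.5 * x @ G @ x     # 2 + 1 = 3
assert primal == dual
```

Note how the dual objective reuses the primal x, exactly as the duality theorem permits.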
Linear Complementarity Problem with Sufficient Matrices (SU-LCP)
(Given a sufficient matrix M ∈ R^{n×n}, q ∈ R^n)
LCP: find two vectors w, z ∈ R^n satisfying
w = Mz + q,
w ≥ 0, z ≥ 0, and
w^T z = 0.
The convex QP is a special case of SU-LCP, because the KKT conditions can be written as an LCP with
M =
  [ 0    −A ]
  [ A^T   G ]
q =
  [  b ]
  [ −c ]
Because G is PSD, the matrix M is sufficient.
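This embedding is easy to write down explicitly: the A-blocks of M are skew-symmetric, so x^T M x = x^T G x ≥ 0, i.e. M is PSD as a bilinear form, and PSD matrices are sufficient. The sketch below (with made-up QP data) builds M and q and spot-checks this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative QP data: m = 2 constraints, d = 3 variables
A = rng.standard_normal((2, 3))
B = rng.standard_normal((3, 3))
G = B @ B.T                       # positive semidefinite by construction
b = rng.standard_normal(2)
c = rng.standard_normal(3)

m, d = A.shape
M = np.block([[np.zeros((m, m)), -A],
              [A.T,               G]])
q = np.concatenate([b, -c])

# x^T M x = x^T G x >= 0 for every x, since the A-blocks cancel;
# spot-check the PSD bilinear form on random vectors
for _ in range(1000):
    v = rng.standard_normal(m + d)
    assert v @ M @ v >= -1e-9
```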
Sufficient Matrices and LCP
A matrix M is called column sufficient if
[ z_i(Mz)_i ≤ 0 for all i ] ⟹ [ z_i(Mz)_i = 0 for all i ].
A matrix M is called row sufficient if M^T is column sufficient, and sufficient if it is both column and row sufficient.
• Many LP algorithms (e.g. simplex, criss-cross, interior-point) can be
generalized to solve sufficient-matrix LCPs (SU-LCPs).
• There is an interior-point algorithm that runs in time polynomial in the size of the input and one parameter κ (which is not bounded from above), due to Kojima, Megiddo, Noma and Yoshise (1991).
• No algorithm is known to run in polynomial time.
• If SU-LCP were NP-hard, this would imply NP = co-NP, which is unlikely.
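Column sufficiency can be refuted (though not certified) by exhibiting a witness z with z_i(Mz)_i ≤ 0 for all i but not all zero; for tiny examples a grid search suffices. The matrices below are made-up illustrations, not from the talk:

```python
import numpy as np
from itertools import product

def column_sufficiency_witness(M, grid=range(-2, 3)):
    """Search a small integer grid for z violating column sufficiency:
    z_i (Mz)_i <= 0 for all i, with strict inequality somewhere.
    Returns a witness z, or None if none exists on the grid."""
    n = M.shape[0]
    for z in product(grid, repeat=n):
        z = np.array(z, dtype=float)
        p = z * (M @ z)
        if np.all(p <= 0) and np.any(p < 0):
            return z
    return None

# A PSD matrix is column sufficient (the products sum to z^T M z >= 0,
# so if all are <= 0 they must all vanish): no witness turns up.
M_psd = np.array([[2.0, 1.0], [1.0, 2.0]])
assert column_sufficiency_witness(M_psd) is None

# A strictly lower-triangular matrix is not column sufficient.
M_bad = np.array([[0.0, 0.0], [1.0, 0.0]])
w = column_sufficiency_witness(M_bad)
print(w)   # prints a violating z
```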
Murty’s LCP (1978)
M is the n×n upper-triangular matrix with 1's on the diagonal and 2's above it:
M =
  [ 1 2 2 ... 2 2 ]
  [ 0 1 2 ... 2 2 ]
  [ 0 0 1 ... 2 2 ]
  [       ...     ]
  [ 0 0 0 ... 1 2 ]
  [ 0 0 0 ... 0 1 ]
q = (−1, −1, ..., −1)^T.
• The criss-cross method (and Murty's least-index method) takes 2^n − 1 pivots for Murty's LCP.
• KF-Namiki (1994) showed that the randomized criss-cross method takes exactly n pivots in expectation.
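The exponential behavior is easy to reproduce. Below is a minimal sketch (my own simplification, not code from the talk) of a principal pivoting scheme with Murty's least-index rule, where the state is the set α of indices whose z-variable is basic; run on Murty's instance it performs exactly 2^n − 1 pivots:

```python
import numpy as np

def murty_instance(n):
    """Upper-triangular M with 1's on the diagonal, 2's above; q = -1."""
    M = np.triu(2.0 * np.ones((n, n))) - np.eye(n)
    return M, -np.ones(n)

def least_index_pivots(M, q, max_iter=10**6):
    """Principal pivoting with the least-index rule.
    alpha = indices whose z-variable is basic (w_i = 0 there)."""
    n = len(q)
    alpha = set()
    for pivots in range(max_iter):
        z = np.zeros(n)
        idx = sorted(alpha)
        if idx:
            z[idx] = np.linalg.solve(M[np.ix_(idx, idx)], -q[idx])
        w = M @ z + q
        # a basic variable with a negative value marks an infeasible index
        bad = [i for i in range(n)
               if (z[i] if i in alpha else w[i]) < -1e-9]
        if not bad:
            return z, w, pivots
        alpha ^= {min(bad)}   # flip the least infeasible index
    raise RuntimeError("iteration limit reached")

for n in range(2, 6):
    M, q = murty_instance(n)
    z, w, pivots = least_index_pivots(M, q)
    assert pivots == 2**n - 1          # exponential pivot count
    assert np.isclose(z[-1], 1.0)      # the solution is z = e_n
```

The sequence of index sets visited is a reflected Gray code on {1, ..., n}, which is why all 2^n − 1 nonempty sets appear before the solution z = e_n is reached.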
Morris’s LCP (2002)
M is the n×n matrix with 1's on the diagonal, 2's on the superdiagonal, and a 2 in the bottom-left corner:
M =
  [ 1 2 0 ... 0 0 0 ]
  [ 0 1 2 ... 0 0 0 ]
  [ 0 0 1 ... 0 0 0 ]
  [        ...      ]
  [ 0 0 0 ... 1 2 0 ]
  [ 0 0 0 ... 0 1 2 ]
  [ 2 0 0 ... 0 0 1 ]
q = (−1, −1, ..., −1)^T.
• Morris (2002) proved that the randomized principal pivot method takes at least ((n−1)/2)! pivots on average for Morris's LCP. (The associated unique sink orientation is highly cyclic.)
• Foniok-KF-Gärtner-Lüthi (2009) showed that the criss-cross method with any permutation of variables takes at most O(n^2) pivots.
Morris’s LCP for n = 3
[Figure: the orientation of the 3-cube of complementary bases; each vertex carries a 0/1 basis vector and the sign pattern of its basic solution: (000 | − − −), (100 | + − +), (010 | + + −), (001 | − + +), (110 | − + −), (101 | + − −), (011 | − − +), (111 | + + +).]
K-Matrix LCP
A matrix is a K-matrix if it is a P-matrix and its off-diagonal entries are non-positive.
• This class of LCPs is well known to be very easy to solve.
• Foniok-KF-Gärtner-Lüthi (2009) showed that any principal pivot method (including the criss-cross method) takes at most 2n pivots starting from any complementary basis.
• Foniok-KF-Klaus (2010) gave a purely combinatorial proof of this fast convergence in the setting of oriented matroids.
K-Matrices
A matrix is a Z-matrix if its off-diagonal entries are non-positive.
A matrix is a K-matrix if it is both a P-matrix and a Z-matrix.
Theorem [Foniok-KF-Gärtner-Lüthi (2009)]. The simple principal pivoting method with any pivot rule solves the LCP with a K-matrix in at most 2n pivots.
Namely, every directed path in any K-USO has length ≤ 2n.
Theorem (K-Matrix Characterizations) [Fiedler-Pták (1962)]. Let M be a Z-matrix. Then the following conditions are equivalent:
(a) ∃ x ≥ 0 such that Mx > 0;
(b) ∃ x > 0 such that Mx > 0;
(c) M is non-singular and M^{−1} ≥ 0;
(d) for each x ≠ 0, ∃ k such that x_k(Mx)_k > 0;
(e) M is a P-matrix (and thus a K-matrix).
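For a concrete Z-matrix these equivalences are easy to confirm numerically; the sketch below checks conditions (b)-(e) for the made-up K-matrix M = [[2, −1], [−1, 2]]:

```python
import numpy as np
from itertools import combinations

M = np.array([[2.0, -1.0],
              [-1.0, 2.0]])   # Z-matrix: off-diagonal entries <= 0
n = M.shape[0]

# (b) (and hence (a)): x = (1, 1) > 0 gives Mx = (1, 1) > 0
x = np.ones(n)
assert np.all(x > 0) and np.all(M @ x > 0)

# (c) M is non-singular with a nonnegative inverse
Minv = np.linalg.inv(M)        # = [[2/3, 1/3], [1/3, 2/3]]
assert np.all(Minv >= -1e-12)

# (e) M is a P-matrix: all principal minors are positive
for k in range(1, n + 1):
    for S in combinations(range(n), k):
        assert np.linalg.det(M[np.ix_(S, S)]) > 0

# (d) spot-check: every nonzero x has some k with x_k (Mx)_k > 0
rng = np.random.default_rng(1)
for _ in range(1000):
    v = rng.standard_normal(n)
    assert np.any(v * (M @ v) > 0)
```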
Translating Fiedler-Pták Conditions into Vector Subspaces
Let M be any n×n matrix. Let S = {1, ..., n} and T = {n+1, ..., 2n}.
Consider the vector subspace
V := {x ∈ R^{2n} : [I −M]x = 0},
where x ∈ V means x_S = M x_T.
(a) ∃ x ≥ 0 such that Mx > 0 ⟺ ∃ x ∈ V such that x_S > 0 and x_T ≥ 0
(b) ∃ x > 0 such that Mx > 0 ⟺ ∃ x ∈ V such that x > 0
(c) M is non-singular and M^{−1} ≥ 0 ⟺ [x ∈ V and x_S ≥ 0 imply x_T ≥ 0]
(When M is non-singular, V = {x ∈ R^{2n} : [M^{−1} −I]x = 0}.)
(c) M is non-singular and M^{−1} ≥ 0 ⟺ [y ∈ V^⊥ and y_T ≥ 0 imply y_S ≤ 0]
(When M is non-singular, V^⊥ = {y ∈ R^{2n} : [I (M^{−1})^T]y = 0}.)
The Main Matrix Theorem
Theorem [Foniok-KF-Klaus (2010)]. Let M be a Z-matrix with V = {x ∈ R^{2n} : [I −M]x = 0}. Then the following conditions are equivalent:
(a) ∃ x ∈ V : x_T ≥ 0 and x_S > 0;   (a*) ∃ y ∈ V^⊥ : y_S ≤ 0 and y_T > 0;
(b) ∃ x ∈ V : x > 0;                  (b*) ∃ y ∈ V^⊥ : y_S < 0 and y_T > 0;
(c) x ∈ V and x_S ≥ 0 ⟹ x_T ≥ 0;     (c*) y ∈ V^⊥ and y_T ≥ 0 ⟹ y_S ≤ 0;
(d) there is no s.r. circuit c ∈ V    (d*) there is no s.p. cocircuit c ∈ V^⊥.
    (that is, M is a P-matrix);
Note: A circuit is a nonzero vector with a minimal support. A vector c is s.r. (sign-reversing) if c_i · c_{n+i} ≤ 0 for all i = 1, ..., n; s.p. stands for sign-preserving.
The theorem above is a corollary of the OM theorem, where "matrix" is replaced by "OM" and the subspace V by an oriented matroid 𝒱.
The Main OM Theorem
Theorem [Foniok-KF-Klaus (2010)]. Let M = (S ∪ T, 𝒱) be a Z-matroid. Then the following conditions are equivalent:
(a) ∃ X ∈ 𝒱 : X_T ≥ 0 and X_S > 0;   (a*) ∃ Y ∈ 𝒱* : Y_S ≤ 0 and Y_T > 0;
(b) ∃ X ∈ 𝒱 : X > 0;                  (b*) ∃ Y ∈ 𝒱* : Y_S < 0 and Y_T > 0;
(c) X ∈ 𝒱 and X_S ≥ 0 ⟹ X_T ≥ 0;     (c*) Y ∈ 𝒱* and Y_T ≥ 0 ⟹ Y_S ≤ 0;
(d) there is no s.r. circuit C ∈ 𝒱    (d*) there is no s.p. cocircuit D ∈ 𝒱*.
    (that is, M is a P-matroid);
The proof is short and elementary.
First we prove (a) → (b) → (c) → (d) using the OM axioms.
The dual implications (a*) → (b*) → (c*) → (d*) come for free (by symbolic translation and duality). Finally, the implication (d) → (a*) follows from duality, and (d*) → (a) comes for free.
A Consequence of The Main Theorem
The K-matroid characterization theorem can be used to give a purely combinatorial proof of the fast convergence of K-LCP pivoting:
Theorem [Foniok-KF-Gärtner-Lüthi (2009)]. The simple principal pivoting method with any pivot rule solves the LCP with a K-matrix in at most 2n pivots.
• Currently we are working on an extension of the K-matrix characterizations to the hidden K-matrices.
• Eventually, we aim to show that the randomized criss-cross pivot rule is (expected) strongly polynomial for LPs and convex QPs.
Conjecture (KF)
The Randomised Criss-Cross Method is an expected polynomial-time
algorithm for SU-LCP.
References
• J. Foniok, K. Fukuda, B. Gärtner, and H.-J. Lüthi. Pivoting in linear complementarity: two polynomial-time cases. Discrete Comput. Geom., 42:187–205, 2009. http://arxiv.org/abs/0807.1249.
• J. Foniok, K. Fukuda, and L. Klaus. Combinatorial characterizations of K-matrices. Linear Algebra and its Applications, 434:68–80, 2011. http://arxiv.org/abs/0911.2171.
• K. Fukuda and M. Namiki. On extremal behaviors of Murty's least index method. Mathematical Programming, 64:365–370, 1994.
• K. Fukuda and T. Terlaky. Linear complementarity and oriented matroids. Journal of the Operations Research Society of Japan, 35:45–61, 1992.