
Dictator Tests and Hardness of Approximating Max-Cut-Gain

Ryan O’Donnell
Carnegie Mellon

(includes joint work with Subhash Khot of Georgia Tech)

Talk outline

1. Constraint satisfaction problems and hardness of approximation

2. Dictator Tests & “Slightly Dictator” Tests

3. A new Slightly Dictator Test and a hardness-of-approximation result for the Max-Cut-Gain problem.


Constraint Satisfaction Problems

Let Φ be a class of predicates (“constraints”) on a few bits; e.g.,

• “ X ⊕ Y ⊕ Z = b ”

• “ X ≠ Y ”

The “Max-Φ” constraint satisfaction problem:

• Given m predicates/constraints from Φ over n variables,
  find an assignment satisfying as many as possible.

Examples: Max-3Lin, Max-Cut, Max-2SAT.
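As a concrete illustration (a minimal sketch, not from the talk), here is Max-Cut written as a Max-Φ instance in Python, for a small hypothetical graph, with a brute-force search for the optimum:

    # Minimal sketch (not from the talk): Max-Cut as a CSP whose constraints
    # are all of the form "X != Y", one constraint per edge of a graph.
    from itertools import product

    edges = [(0, 1), (1, 2), (2, 0), (2, 3)]    # hypothetical 4-vertex instance
    n = 4

    def frac_satisfied(assignment):
        # Fraction of the "X != Y" constraints satisfied by a 0/1 assignment.
        return sum(assignment[u] != assignment[v] for u, v in edges) / len(edges)

    # Brute force over all 2^n assignments to find the optimum.
    best = max(frac_satisfied(a) for a in product([0, 1], repeat=n))
    print(best)    # 0.75: a triangle plus a pendant edge can't be fully cut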

Approximating CSPs

“A is a (c, s)-approximation algorithm for Max-Φ”:

Given an instance where you can satisfy ≥ c fraction of constraints,

A outputs a solution satisfying ≥ s fraction of constraints.

A should run in polynomial time.

Approximating CSPs

• Gaussian Elimination is a (1, 1)-approximation algorithm for Max-3Lin

• Best known (1 − ε, s)-approximation for Max-3Lin is a trivial algorithm
  with s = ½: output all 0’s or all 1’s. (A (½, ½)-approximation.)

• Goemans and Williamson ’95 gave a very famous approximation algorithm
  for Max-Cut, which is a (1 − ε, 1 − O(√ε))-approximation and also a
  (c, s)-approximation for every s < .878c.

• G&W is a (.51, .45)-approximation for Max-Cut, worse than trivial
  (the Greedy algorithm, sketched right after this list, is a (½, ½)-approximation algorithm).

• Charikar and Wirth ’04 gave a ( ½ + ε, ½ + Ω(ε/log(1/ε)) )-approximation
  for Max-Cut. (A “Max-Cut-Gain” algorithm.)
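A minimal sketch (not from the talk) of the Greedy algorithm just mentioned: each vertex joins whichever side cuts more of the edges to its already-placed neighbours, which always cuts at least half of all edges.

    # Minimal sketch (not from the talk): the Greedy (1/2, 1/2)-approximation
    # for Max-Cut.
    def greedy_cut(n, edges):
        side = {}                                   # vertex -> 0 or 1
        for v in range(n):
            # Sides of v's neighbours that have already been placed.
            placed = [side[u] for (a, b) in edges if v in (a, b)
                      for u in ([b] if a == v else [a]) if u in side]
            # Side 0 cuts the edges to 1-neighbours, side 1 those to 0-neighbours.
            side[v] = 0 if placed.count(1) >= placed.count(0) else 1
        return side

    print(greedy_cut(4, [(0, 1), (1, 2), (2, 0), (2, 3)]))   # {0: 0, 1: 1, 2: 0, 3: 1}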

Hardness of approximation

PCP (“Probabilistically Checkable Proofs”) technology is used to prove

NP-hardness of (c, s)-approximation.

• Håstad ’97: (1 − ε, ½ + ε)-approximating Max-3Lin is NP-hard.

• Håstad ’97: (1, 7/8 + ε)-approximating Max-3SAT is NP-hard.

• KKMO ’04 + MOO ’05:

Doing any better than the Goemans-Williamson

approximation algorithm is NP-hard*.

* Assuming the “Unique Games Conjecture”.

Hardness of approximation

PCP hardness of approximation rule of thumb:

“To prove hardness of (c, s)-approximating Max-Φ, it suffices to give a

(c, s)-Slightly-Dictator-Test where the test predicate is from Φ.”

Talk outline

1. Constraint satisfaction problems and hardness of approximation

2. Dictator Tests & “Slightly Dictator” Tests

3. A new Slightly Dictator Test and a hardness-of-approximation result for the Max-Cut-Gain problem

Dictators

We will be considering m-bit boolean functions f : {0,1}^m → {0,1}.

Function f is called a “Dictator” if it is projection to one coordinate:

f (x) = xi   for some 1 ≤ i ≤ m.

(AKA “Singleton”, AKA “Long Code”.)

Dictator Testing

• In the field of “Property Testing”, unknown f given as a black box.

• Want to determine if f belongs to some class of functions C.

• Want to query f on as few strings as possible. (Constantly many.)

• Clearly, must use randomization, must admit some chance of error.

• For hardness-of-approximation, the relevant C is the class of

all m Dictator functions.

Testing Dictators

A (non-adaptive) Dictator Test:

• Picks x1, …, xq ∈ {0,1}^m in some random fashion.

• Picks a ‘predicate’ ψ on q bits.

• Queries f (x1), …, f (xq).

• Says “YES” or “NO” according to ψ( f (x1), …, f (xq) ).

Each f : {0,1}^m → {0,1} has some probability of “passing” the test.

Hope: probability is large for dictators, and small for non-dictators.

Correlation

If f and g are “highly correlated” (i.e., they agree on almost all

inputs), then their probabilities of passing the test will be essentially the same.

So if g is highly correlated with a Dictator, we can’t help but let it pass

with high probability.

(Correlation here is a number between −1 and 1: +1 means f = g, −1 means f is the negation of g.)
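A minimal sketch (not from the talk) of this notion, taking the correlation of two m-bit boolean functions to be 2·Pr[f(x) = g(x)] − 1 over a uniformly random x:

    # Minimal sketch (not from the talk): correlation of two m-bit boolean
    # functions, computed by brute force as 2*Pr[f(x) = g(x)] - 1.
    from itertools import product

    def correlation(f, g, m):
        inputs = list(product([0, 1], repeat=m))
        agree = sum(f(x) == g(x) for x in inputs)
        return 2 * agree / len(inputs) - 1          # in [-1, 1]; +1 iff f = g

    dictator_2 = lambda x: x[2]                     # the Dictator on coordinate 3
    majority = lambda x: int(sum(x) > len(x) / 2)
    print(correlation(dictator_2, majority, m=5))   # 0.375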

Basic Dictator Testing

• If f is a Dictator, passes with probability 1.

• If f has correlation < 1 − δ with every Dictator, passes with

probability at most 1 − Ω(δ).

• Number of queries q should be an absolute constant.

(Like 6 or something.)

(Remark 1: Given such a test, you can get a “standard” Dictator Test

by repeating O(1/δ) times and saying “YES” iff all tests pass.

Remark 2: ⇒ “Assignment tester” (of exponential length) [Din06].)

Examples

• Bellare-Goldreich-Sudan ’95: O(1) queries.

• Håstad ’97 probably gave a 3-query one (he at least could’ve).

• A 3-query one; if you know Fourier, proof is easy homework ex.:

• with probability ½ do the BLR test:

  • pick x, y uniformly, and set z = x ⊕ y

  • test that f (x) ⊕ f (y) ⊕ f (z) = 0

• with probability ½ do the NAE test:

  • for each i = 1…m, choose (xi, yi, zi) uniformly from {0,1}^3 ∖ { (0,0,0), (1,1,1) }

  • test that f (x), f (y), f (z) are not all equal

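A minimal sketch (not from the talk) of one round of this 3-query test against a black-box f on m bits; a true Dictator passes every round:

    # Minimal sketch (not from the talk): one round of the 3-query Dictator
    # test above, for a black-box f : {0,1}^m -> {0,1} given as a Python function.
    import random

    def dictator_test_round(f, m):
        if random.random() < 0.5:
            # BLR test: z = x XOR y (bitwise), accept iff f(x) ^ f(y) ^ f(z) == 0.
            x = [random.randint(0, 1) for _ in range(m)]
            y = [random.randint(0, 1) for _ in range(m)]
            z = [xi ^ yi for xi, yi in zip(x, y)]
            return f(x) ^ f(y) ^ f(z) == 0
        else:
            # NAE test: each (x_i, y_i, z_i) uniform in {0,1}^3 \ {000, 111};
            # accept iff f(x), f(y), f(z) are not all equal.
            non_constant = [(0,0,1), (0,1,0), (1,0,0), (0,1,1), (1,0,1), (1,1,0)]
            triples = [random.choice(non_constant) for _ in range(m)]
            x, y, z = ([t[j] for t in triples] for j in range(3))
            return len({f(x), f(y), f(z)}) > 1

    dictator_3 = lambda x: x[3]      # a Dictator passes with probability 1
    print(all(dictator_test_round(dictator_3, m=10) for _ in range(1000)))   # True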

Hardness of approximation

PCP hardness of approximation rule of thumb:

“To prove hardness of (c, s)-approximating Max-Φ, it suffices to give a

(c, s)-Slightly-Dictator-Test where the test predicate is from Φ.”

(c, s)-Slightly-Dictator-Tests

• If f is a Dictator, passes with probability ≥ c.

• If f has correlation < δ with every Dictator (and Dictator-negation),

then f passes with probability < s + δ′,

where δ′ → 0 as δ → 0.

(“If f passes with high enough prob., it’s slightly Dictatorial.”)

(For PCP purposes, you can sometimes even get away with

“Very-Slightly-Dictator-Tests”…)

Talk outline

1. Constraint satisfaction problems and hardness of approximation

2. Dictator Tests & Slightly Dictator Tests

3. A new Slightly Dictator Test and a hardness-of-approximation result for the Max-Cut-Gain problem

Max-Cut Slightly-Dictator-Tests

For Max-Cut, you need a 2-query Slightly-Dictator-Test, where the

tests are of the form “ f (x) ≠ f (y) ”.

KKMO ’04 proposed the Noise Sensitivity test:

• Pick x ∈ {0,1}^m uniformly, form y ∈ {0,1}^m by flipping each bit independently with probability ρ.

• Test f (x) ≠ f (y).

Theorem (conj’d by KKMO, proved in MOO ’05):

This is a (ρ, arccos(1 − 2ρ)/π)-Very-Slightly-Dictator-Test.
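A minimal sketch (not from the talk) of one round of the Noise Sensitivity test, plus an empirical check that a Dictator passes with probability about ρ:

    # Minimal sketch (not from the talk): one round of the KKMO Noise
    # Sensitivity test with flip probability rho, for a black-box f.
    import random

    def noise_sensitivity_round(f, m, rho):
        x = [random.randint(0, 1) for _ in range(m)]
        y = [xi ^ (random.random() < rho) for xi in x]   # flip each bit w.p. rho
        return f(x) != f(y)                              # the "cut" predicate

    dictator_0 = lambda x: x[0]
    rho, trials = 0.845, 20000
    passes = sum(noise_sensitivity_round(dictator_0, 20, rho) for _ in range(trials))
    print(passes / trials)    # close to rho: a Dictator passes w.p. exactly rho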

Corollaries

• ρ = 1 − ε: Gives (1 − ε, 1 − Θ(√ε))-hardness* for Max-Cut

• ρ ≈ .84: Gives (≈ .84, ≈ .74)-hardness* for Max-Cut (.878-gap)

• ρ = ½ + ε: Gives ( ½ + ε, ½ + (2/π)ε )-hardness* for Max-Cut

The first two are best possible, as Goemans and Williamson gave

matching algorithms.

The last doesn’t match the ( ½ + ε, ½ + Ω(ε/log(1/ε)) )-approximation algorithm

of Charikar and Wirth. Our goal: give matching hardness.

A new result

Subhash Khot and I improved the hardness result to match Charikar and

Wirth, by analyzing a new Dictator Test:

• Do the Noise Sensitivity test some fraction of the time with ρ1, and some

fraction of the time with ρ2, balanced so that Dictators pass w.p. ½ + ε.

Gives a ( ½ + ε, ½ + O(ε/log(1/ε)) )-Slightly-Dictator-Test using “≠” tests.
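(To spell out the balancing, which the slide leaves implicit: under the Noise Sensitivity test with flip probability ρ, a Dictator passes with probability exactly ρ. So if ρ1 is used a q fraction of the time and ρ2 the remaining 1 − q fraction, a Dictator passes with probability

q·ρ1 + (1 − q)·ρ2,

and q is chosen so that this equals ½ + ε.)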

Bonuses:

• It’s a Slightly-Dictator-Test (not Very-Slightly-).

• Unlike MOO ’05, after doing the usual Fourier analysis stuff, the proof is about 10 lines rather than 10 pages.

Main technical analysis

• First, rename bits to −1 and 1, rather than 0 and 1.

• Next, do the usual Fourier analysis stuff…

Let f : {−1,1}^m → {−1,1} be any function, and say it has correlation

ci with the ith Dictator function, i = 1…m.

Let L : {−1,1}^m → R be the function:

L(x1, …, xm) = c1·x1 + c2·x2 + ··· + cm·xm

This gives the linear polynomial over R that f “looks most like”.
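A minimal sketch (not from the talk) of these quantities for a small example, computing each correlation ci as E[f(x)·xi] by brute force (σ² = c1² + ··· + cm² is defined on the next slide):

    # Minimal sketch (not from the talk): the correlations c_i, the linear
    # function L, and sigma^2 = sum_i c_i^2, for a small +/-1-valued f.
    from itertools import product

    def correlations(f, m):
        inputs = list(product([-1, 1], repeat=m))
        # c_i = E[f(x) * x_i] = correlation of f with the i-th Dictator.
        return [sum(f(x) * x[i] for x in inputs) / len(inputs) for i in range(m)]

    def L(c, x):
        return sum(ci * xi for ci, xi in zip(c, x))

    maj = lambda x: 1 if sum(x) > 0 else -1     # Majority on m = 5 bits
    c = correlations(maj, 5)
    sigma_sq = sum(ci ** 2 for ci in c)
    print(c)           # [0.375, 0.375, 0.375, 0.375, 0.375]
    print(sigma_sq)    # 0.703125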

Main technical analysis

L(x1, …, xm) = c1·x1 + c2·x2 + ··· + cm·xm

σ² := c1² + c2² + ··· + cm²

(σ² roughly measures how Dictatorial f is.)

The probability that f : {−1,1}^m → {−1,1} passes the test is (essentially) determined by the distribution of L …

Main technical analysis

L(x1, …, xm) = c1·x1 + c2·x2 + ··· + cm·xm

σ² := c1² + c2² + ··· + cm²

Conclusion: If all correlations ci are small, the distribution of L looks

like a Gaussian with variance σ².

Gaussian facts

• The probability that a Gaussian random variable with variance 1

goes above t is about exp(−t2 / 2).

• By scaling, the probability that a Gaussian with variance σ²

goes above t is about exp(−t² / 2σ²).

• So the probability that a Gaussian with variance σ²

goes above 2ε is about exp(−2ε²/σ²).

• If σ² ≥ 10ε/ln(1/ε), we have Prx [ L(x) > 2ε ] ≥ 1/5.
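A quick numerical sanity check (not from the talk) of the rough tail bound above, comparing the exact Gaussian tail (via erfc) with exp(−t²/2σ²):

    # Quick numerical check (not from the talk): Pr[N(0, sigma^2) > t] versus
    # the rough upper bound exp(-t^2 / (2 sigma^2)).
    import math

    sigma = 0.5
    for t in [0.5, 1.0, 2.0]:
        exact = 0.5 * math.erfc((t / sigma) / math.sqrt(2))   # true Gaussian tail
        rough = math.exp(-t ** 2 / (2 * sigma ** 2))
        print(t, exact, rough)    # the rough bound overshoots but has the right decay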

Main technical analysis

L(x1, …, xm) = c1·x1 + c2·x2 + ··· + cm·xm

σ² := c1² + c2² + ··· + cm²

If all correlations ci are small, then:

If σ² ≥ 10ε/ln(1/ε), we have Prx [ L(x) > 2ε ] ≥ 1/5

⇒ ( ½ + ε, ½ + O(ε/log(1/ε)) )-Slightly-Dictator-Test

Open problem

• Suppose you want a

3-query (1, s)-(Very)-Slightly-Dictator-Test

• Till recently, best s was Håstad’s 3/4.

• Khot & Saket ’06 got s down to 20/27.

• Conjectured (by Zwick) best s: 5/8 (!).

• I’m pretty sure I know the test, but I can’t analyze it…

The End