TRANSCRIPT
Non-Interactive Simulation and Dimension Reduction for Polynomials
Pritish Kamath
joint work with
Badih Ghazi
Prasad Raghavendra
CCC 2018, UCSD
June 24, 2018
1 / 9
Talk outline...
• Motivation
• “Dimension Reduction for Polynomials” lemma
• Summary & Open Directions!
2 / 9
Randomness Models in Distributed Tasks
[Figure: two parties hold X and Y, jointly distributed as P(X, Y); e.g. BSS_ε on {0,1} × {0,1}, with Pr[(0,0)] = Pr[(1,1)] = (1−ε)/2 and Pr[(0,1)] = Pr[(1,0)] = ε/2.]
In Information Theory . . .
▶ Common Information [Gács-Körner ’73, Wyner ’75]
▶ Distributed Source Coding [Slepian-Wolf ’73]
▶ · · ·
In Computer Science . . .
▶ Information Theoretic Crypto: Key Agreement, Secure Computation, . . .
▶ Communication Complexity [Bavarian-Gavinsky-Ito ’14], [Canonne-Guruswami-Meka-Sudan ’15]
Abstract Goal: Understand the power of different joint distributions!
3 / 9
Non-Interactive Simulation of Joint Distributions
[Figure: Alice observes X1, X2, X3, . . . and Bob observes Y1, Y2, Y3, . . ., where the pairs (Xi, Yi) are i.i.d. from P(X, Y); with no communication, Alice outputs a and Bob outputs b, aiming for (a, b) ∼ BSS_δ with a, b ∈ {0, 1}.]
When can BSS_ε simulate BSS_δ?
Answer: YES if δ ≥ ε; NO if δ < ε.
When can DISJ simulate BSS_δ? (DISJ is uniform over {(0,0), (0,1), (1,0)}, each with probability 1/3.)
(Partial) Answer: YES if δ ≥ 3/8; OPEN for δ ∈ [1/4, 3/8); NO if δ < 1/4.
In general, for outputs a, b ∈ [k] and a target (a, b) ∼ Q:
Main Question: When can P simulate Q?
▶ Analytically? OPEN in most cases!
▶ Algorithmically decidable? Not obvious!
How can a “constant-sized” problem be HARD?
4 / 9
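The YES direction of the BSS_ε vs. BSS_δ answer above is easy to check empirically. In this minimal sketch (variable names are mine, not from the talk), Alice outputs her bit unchanged while Bob re-flips his with probability p = (δ − ε)/(1 − 2ε); a short calculation gives Pr[a ≠ b] = ε(1 − p) + (1 − ε)p = δ.

```python
import numpy as np

# Simulate BSS_delta from samples of BSS_eps (requires delta >= eps).
# Alice outputs a = X; Bob outputs b = Y XOR Bernoulli(p).
rng = np.random.default_rng(0)
eps, delta, N = 0.1, 0.2, 200_000

x = rng.integers(0, 2, N)                       # Alice's bits
y = x ^ (rng.random(N) < eps).astype(np.int64)  # Bob's bits: Pr[x != y] = eps
p = (delta - eps) / (1 - 2 * eps)               # Bob's extra flip probability
b = y ^ (rng.random(N) < p).astype(np.int64)

disagree = float(np.mean(x != b))
print(disagree)  # should be close to delta = 0.2
```

Note that both maps here act coordinate-wise on a single sample; the hard instances (e.g. DISJ simulating BSS_δ) are exactly those where no such local trick works.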
Tensor Power Problems
Non-Interactive Simulation falls under the category of “Tensor Power” problems.
In Information Theory,
▶ Zero-error Shannon capacity [Open]
▶ Zero-error Witsenhausen rate [Open]
In Computer Science,
▶ (Classical) Amortized value of 2-prover 1-round games [Open]
▶ (Quantum) Entangled value of 2-prover 1-round games [Open]
▶ (Quantum) Local State Transformation [Open]
▶ Computing SDP integrality gaps for CSPs [Raghavendra-Steurer ’09]
▶ Amortized communication complexity = Information complexity [Braverman-Rao ’11], [Braverman-Schneider ’15]
5 / 9
Decidability via “Dimension Reduction”
Main Question: Can P simulate Q? That is, are there n and functions f, g mapping to [k] with (f(X^n), g(Y^n)) ∼ Q for (X^n, Y^n) ∼ P^n?
Dimension Reduction: (f(X^{n0}), g(Y^{n0})) ≈_ε (f(X^n), g(Y^n)).
If P can simulate Q, then P can ε-approximately simulate Q with only n0 samples.
Key point: n0 = n0(ε, P, k) is explicit & does not depend on n. ⇒ DECIDABLE
Main “take-away” Theorem: Any distributed task performed with unbounded amounts of correlated randomness can also be approximately performed with an explicitly bounded number of samples! (extends to interactive settings, even with inputs!)
6 / 9
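To make the decidability claim concrete, here is a toy brute-force checker (my own illustration, not the paper's procedure): once n0 is fixed, ε-approximate simulation of Q by P^{n0} reduces to a finite search over all pairs of Boolean functions on n0 bits. With P = BSS_0.1, Q = BSS_0.2 and n0 = 2, the best achievable total-variation distance is 0.02, attained by f = x1 ⊕ x2, g = y1 ⊕ y2.

```python
import itertools

# Brute-force check: how well can deterministic f, g on n0 samples of
# P = BSS_eps approximate the target Q = BSS_delta?
eps, delta, n0 = 0.1, 0.2, 2

inputs = list(itertools.product([0, 1], repeat=n0))

def p_pair(x, y):
    """Exact probability of (x, y) under BSS_eps^{n0} (product over coordinates)."""
    p = 1.0
    for xi, yi in zip(x, y):
        p *= (eps / 2) if xi != yi else ((1 - eps) / 2)
    return p

target = {(0, 0): (1 - delta) / 2, (1, 1): (1 - delta) / 2,
          (0, 1): delta / 2, (1, 0): delta / 2}

best = 1.0  # smallest total-variation distance found so far
for ftab in itertools.product([0, 1], repeat=2 ** n0):      # truth table of f
    for gtab in itertools.product([0, 1], repeat=2 ** n0):  # truth table of g
        out = {key: 0.0 for key in target}
        for ix, x in enumerate(inputs):
            for iy, y in enumerate(inputs):
                out[(ftab[ix], gtab[iy])] += p_pair(x, y)
        tv = 0.5 * sum(abs(out[key] - target[key]) for key in target)
        best = min(best, tv)

print(best)
```

The search is doubly exponential in n0, so this is only viable because the theorem bounds n0 explicitly, independently of n; that is the whole point of the dimension reduction.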
Non-Interactive Agreement from Correlated Gaussians
Max Agreement Distillation: Alice and Bob observe (X^n, Y^n) ∼ G_ρ^{⊗n}, where G_ρ = N((0, 0), [[1, ρ], [ρ, 1]]). For outputs f(X^n), g(Y^n) ∈ [k] with uniform marginals E[f] = E[g] = (1/k, . . . , 1/k), what is sup_{n, f, g} Pr[f(X^n) = g(Y^n)]?
(For convenience, view f: X^n → R^k, where output i ∈ [k] corresponds to the standard basis vector e_i.)
▶ k = 2: Solved, due to Borell’s Theorem [Bor85] — “Halfspaces are most Noise Stable” (cf. “Majority is Stablest” [MOO’04, Mos’10]): the optimum is f(X^n) = sign(X1), g(Y^n) = sign(Y1). Generalizes to non-uniform marginals, e.g. E[f] = (0.3, 0.7) and E[g] = (0.6, 0.4), via shifted halfspaces f(X^n) = sign(X1 − α), g(Y^n) = sign(Y1 − β).
▶ k = 3: “Peace Sign Conjecture” — f(X1, X2) and g(Y1, Y2) partition the plane into three 120° sectors (cf. “Plurality is Stablest” [KKMO’04, IM’12]). Generalize to non-uniform marginals? FALSE! [HMN’16]
■ The “Peace Sign Conjecture” implies an optimal strategy exists with 2 samples.
Q1. Does an optimal strategy even exist with some finite #samples?
Q2. How many samples n0(ε) are needed to get ε-close to the optimal agreement? Can we obtain an “explicit” bound? [De-Mossel-Neeman ’17]
Gaussian Case: (X, Y) ∼ G_ρ
▶ [De-Mossel-Neeman ’17, ’18]: n0 = Ackermann(?)
▶ [This Work!]: n0 = exp(k, 1/ε, 1/(1−ρ))
General Case: (X, Y) ∼ P
▶ [Ghazi-K-Sudan ’16]: Reduces* to the Gaussian case, using the Regularity Lemma and the Invariance Principle. (Solved the k = 2 case, due to Borell’s theorem!)
▶ [De-Mossel-Neeman ’18]: n0 = Ackermann(?)
▶ [This Work!]: n0 = exp(k, 1/ε, 1/(1−ρ), log(1/α))
7 / 9
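For k = 2 with uniform marginals, the optimal halfspace strategy from Borell’s theorem has a closed-form agreement probability via Sheppard’s formula: Pr[sign(X1) = sign(Y1)] = 1/2 + arcsin(ρ)/π. A quick Monte-Carlo sketch (my own sanity check, not from the slides):

```python
import numpy as np

# Agreement of the halfspace strategy f = sign(X1), g = sign(Y1)
# when (X1, Y1) are rho-correlated standard Gaussians.
rng = np.random.default_rng(1)
rho, n = 0.5, 500_000

x = rng.standard_normal(n)
y = rho * x + np.sqrt(1 - rho ** 2) * rng.standard_normal(n)  # Corr(x, y) = rho

agree = float(np.mean(np.sign(x) == np.sign(y)))
predicted = 0.5 + np.arcsin(rho) / np.pi  # Sheppard's formula; = 2/3 for rho = 0.5
print(agree, predicted)
```

Note that this strategy uses only one sample; the open questions above are precisely about whether (and with how many samples) such finite optimal strategies exist for k ≥ 3.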
Main Technique: Dimension Reduction for Polynomials!
Analogy (Johnson-Lindenstrauss): given vectors u_i, v_j: [n] → R, sample M ∼ N(0,1)^{n0×n} and set û_i ← M u_i / √n0, v̂_j ← M v_j / √n0. Then E_M[⟨û_i, v̂_j⟩] = ⟨u_i, v_j⟩ and Var_M(⟨û_i, v̂_j⟩) < ε² once n0 = O(1/ε²), so ⟨û_i, v̂_j⟩_{R^{n0}} ≈_ε ⟨u_i, v_j⟩_{R^n}.
Dimension Reduction for functions f_i, g_j: R^n → R, where ⟨f_i, g_j⟩_{G_ρ^{⊗n}} := E_{(X,Y)∼G_ρ^{⊗n}}[f_i(X) g_j(Y)]: sample M ∼ N(0,1)^{n×n0} and set f̂_i(a) ← f_i(Ma/∥a∥₂), ĝ_j(b) ← g_j(Mb/∥b∥₂). Then E_M[⟨f̂_i, ĝ_j⟩] ≈_ε ⟨f_i, g_j⟩ and Var_M(⟨f̂_i, ĝ_j⟩) < ε².
If f and g are multilinear of degree d, this holds with n0 = d^{O(d)}/ε², giving ⟨f̂_i, ĝ_j⟩_{G_ρ^{⊗n0}} ≈_ε ⟨f_i, g_j⟩_{G_ρ^{⊗n}}.
▶ Don’t care about the seed length of M.
▶ Crucially, preserves other statistical properties! (Ma/∥a∥₂ ∼ N(0,1)^{⊗n})
Comparison with [Kane-Rao ’18] — thanks to Sankeerth Rao & Mitali Bafna!
8 / 9
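The Johnson-Lindenstrauss half of the analogy is easy to reproduce numerically. In this sketch (dimensions chosen arbitrarily for illustration), a random Gaussian M ∼ N(0,1)^{n0×n} preserves the inner product of unit vectors in expectation, with fluctuations of order 1/√n0:

```python
import numpy as np

# Johnson-Lindenstrauss step: u, v in R^n mapped to R^{n0} by a random
# Gaussian matrix, preserving <u, v> up to ~ 1/sqrt(n0) fluctuations.
rng = np.random.default_rng(2)
n, n0 = 2000, 400

u = rng.standard_normal(n)
u /= np.linalg.norm(u)
v = rng.standard_normal(n)
v /= np.linalg.norm(v)

M = rng.standard_normal((n0, n))
u_hat = M @ u / np.sqrt(n0)  # E[<u_hat, v_hat>] = <u, v>
v_hat = M @ v / np.sqrt(n0)

err = abs(float(u_hat @ v_hat) - float(u @ v))
print(err)  # typically of order 1/sqrt(n0)
```

The polynomial version in the slide replaces the linear map u ↦ Mu/√n0 by the substitution f̂(a) = f(Ma/∥a∥₂); the normalization by ∥a∥₂ is what makes Ma/∥a∥₂ exactly standard Gaussian, preserving the other statistical properties the argument needs.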
Summary & Open Questions . . .
Main “take-away” Theorem: Any distributed task performed with unbounded amounts of correlated randomness can also be approximately performed with an explicitly bounded number of samples! (extends to interactive settings, even with inputs!) Proof via dimension reduction for low-degree multilinear polynomials.
Open Questions:
▶ Lower bounds on randomness reduction? Better upper bounds?
▶ Other applications of dimension reduction for polynomials?
▶ Derandomization of the dimension reduction lemma?
▶ (NP-)hardness of deciding Non-Interactive Simulation?
▶ Other Tensor Power problems?
Thanks! Questions?
9 / 9