TRANSCRIPT
Deterministic Extractors for Small Space Sources
Jesse Kamp, Anup Rao, Salil Vadhan, David Zuckerman
Randomness Extractors
Defn: min-entropy(X) ≥ k if for every x, Pr[X = x] ≤ 2^(-k).
No “deterministic” (seedless) extractor for all X with min-entropy k:
1. Can add seed.
2. Can restrict X.
[Figure: X → Ext → Uniform]
Independent Sources
[Figure: several independent sources → Ext → Uniform]
Bit-Fixing Sources
? 1 ? ? 0 1
[Figure: bit-fixing source → Ext → Uniform]
Small Space Sources
Space s source: min-entropy k source generated by a width-2^s branching program.
[Figure: a width-2^s branching program with n+1 layers generating the bits 1 1 0 1 0 0; each edge is labeled with a transition probability and an output bit, e.g. (0.8, 1), (0.5, 1), (0.3, 0), (0.1, 0).]
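As a concrete illustration, sampling from such a source can be sketched in a few lines. The transition table below is a toy width-2 (s = 1) example reusing edge labels from the figure; it is not from the talk.

```python
import random

def sample_small_space_source(transitions, n):
    """Sample n bits from a layered branching program (a space-s source).

    transitions[layer][state] is a list of (prob, bit, next_state) edges
    whose probabilities sum to 1; "width 2^s" means each layer has at
    most 2^s states.
    """
    state, bits = 0, []
    for layer_i in range(n):
        r, acc = random.random(), 0.0
        for prob, bit, nxt in transitions[layer_i][state]:
            acc += prob
            if r < acc:
                break
        # if no edge fired due to float round-off, the last edge is used
        bits.append(bit)
        state = nxt
    return bits

# Toy width-2 example: the same edge set in every layer.
layer = {0: [(0.8, 1, 0), (0.2, 0, 1)],
         1: [(0.5, 1, 0), (0.5, 0, 1)]}
print(sample_small_space_source([layer] * 6, 6))
```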
Related Work
[Blum]: Markov chains with a constant number of states
[Koenig, Maurer]: a related model
[Trevisan, Vadhan]: sources sampled by small circuits; requires complexity-theoretic assumptions
Small space sources capture:
Bit-fixing sources → space 0 sources
General sources with min-entropy k → space k sources
c independent sources → space n/c sources
Bit Fixing Sources can be modelled by Space 0 sources
? 1 ? ? 0 1
[Figure: width-1 branching program; each '?' position has edges (0.5, 1) and (0.5, 0), and each fixed position a single edge, e.g. (1, 1), (1, 0), (1, 1).]
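A minimal sketch of this modelling, using the pattern ? 1 ? ? 0 1 from the slide: no state needs to be carried between layers, so the branching program has width 1, i.e. space 0.

```python
import random

def sample_bit_fixing(pattern):
    """Sample from a bit-fixing source given as a string like '?1??01'.

    A '?' position emits a uniform bit (edges (0.5, 0) and (0.5, 1));
    a fixed position emits its bit with probability 1.
    """
    return [random.randint(0, 1) if c == '?' else int(c) for c in pattern]

print(sample_bit_fixing('?1??01'))
```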
General Sources are Space n sources
[Figure: a width-2^n branching program with n layers; edges are labeled (Pr[X1 = 0], 0), (Pr[X1 = 1], 1), (Pr[X2 = 1 | X1 = 1], 1), (Pr[X2 = 0 | X1 = 1], 0), …]
X = X1 X2 X3 X4 X5 …
Min-entropy k sources are convex combinations of space k sources
c Independent Sources: Space n/c sources.
[Figure: an n-bit string split into c blocks; width 2^(n/c).]
Our Main Results
Min-Entropy    Space       Error               Output Bits
k = n^(1-c)    n^(1-4c)    2^(-n^c)            99% of k
k = δn         cn          2^(-n/polylog(n))   99% of k
(c = a sufficiently small constant > 0)
Outline
Our Techniques
Extractor for linear min-entropy rate
Extractor for polynomial min-entropy rate
Future Directions
We reduce to another model
Total Entropy k independent sources:
X | State_5 = V
Y | State_5 = V
The Reduction
[Figure: the source cut at layer 5, where the state is V; X is the output before the cut, Y the output after.]
These two distributions are independent! Expect the total min-entropy of X | State_5 = V and Y | State_5 = V to be about k − s.
Can get many independent sources
W X Y Z
If we condition on t states, we expect to lose ts bits of min-entropy.
Entropy Loss
Let S1, …, St denote the random variables for the state in the t layers.
Pr[X = x] ≥ Pr[X = x | S1 = s1, …, St = st] · Pr[S1 = s1, …, St = st]
Suppose X | S1 = s1, …, St = st has min-entropy < k − 2ts, i.e. some x has conditional probability > 2^(-(k − 2ts)). Since Pr[X = x] ≤ 2^(-k),
⇒ Pr[S1 = s1, …, St = st] < 2^(-2ts)
Union bound over the at most 2^(ts) state tuples: some such bad tuple occurs with probability < 2^(-ts).
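The bound can be checked exactly on a toy program by enumerating all paths. The width-2, 4-layer program below is an illustrative example (not from the talk), conditioning on the single state after layer 2 (t = 1, s = 1).

```python
from math import log2

# Toy width-2 (s = 1) program with 4 layers; edges are (prob, bit, next_state).
layer = {0: [(0.7, 1, 0), (0.3, 0, 1)],
         1: [(0.5, 1, 0), (0.5, 0, 1)]}
n = 4

def joint():
    """Exact joint distribution of (output bits, state after layer 2)."""
    dist = {}
    def walk(i, state, bits, mid, p):
        if i == n:
            dist[(bits, mid)] = dist.get((bits, mid), 0.0) + p
            return
        for prob, bit, nxt in layer[state]:
            walk(i + 1, nxt, bits + (bit,), nxt if i == 1 else mid, p * prob)
    walk(0, 0, (), None, 1.0)
    return dist

dist = joint()
px = {}
for (bits, mid), p in dist.items():
    px[bits] = px.get(bits, 0.0) + p
k = -log2(max(px.values()))  # min-entropy of the output X

# For every mid-layer state v: either X | S = v keeps min-entropy
# >= k - 2ts, or Pr[S = v] < 2^(-2ts), exactly as in the argument above.
t, s = 1, 1
for v in {mid for (_, mid) in dist}:
    pv = sum(p for (_, mid), p in dist.items() if mid == v)
    cond = max(p / pv for (_, mid), p in dist.items() if mid == v)
    assert -log2(cond) >= k - 2 * t * s or pv < 2 ** (-2 * t * s)
print(f"k = {k:.3f}")
```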
The Reduction
Every space s source with min-entropy k is close to a convex combination of sources, each consisting of t independent blocks with total min-entropy k − 2ts.
W X Y Z
Some Additive Number Theory [Bourgain, Glibichuk, Konyagin]
(∀δ > 0) (∃ integers C = C(δ), c = c(δ)): for every non-trivial additive character ψ of GF(2^p) and all C independent min-entropy δp sources X1, …, XC,
|E[ψ(X1 · X2 ⋯ XC)]| < 2^(-cp)
Vazirani's XOR lemma: if Z ∈ GF(2^n) is a random variable with |E[ψ(Z)]| < ε for every nontrivial additive character ψ, then any m bits of Z are 2^(m/2)·ε close to uniform.
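The lemma can be sanity-checked numerically on a toy distribution over {0,1}^n, where the additive characters are the parities ψ_a(z) = (-1)^(a·z). The distribution below and the choice n = 4 are illustrative only.

```python
from itertools import product

# A slightly non-uniform distribution over 4-bit strings.
n = 4
probs = {z: 1 / 2 ** n for z in product((0, 1), repeat=n)}
probs[(0, 0, 0, 0)] += 0.01
probs[(1, 1, 1, 1)] -= 0.01

def bias(a):
    """|E[(-1)^(a . Z)]| for the parity selected by mask a."""
    return abs(sum(p * (-1) ** sum(ai * zi for ai, zi in zip(a, z))
                   for z, p in probs.items()))

eps = max(bias(a) for a in product((0, 1), repeat=n) if any(a))

# Check the XOR-lemma bound for every prefix of m bits.
for m in range(1, n + 1):
    marg = {}
    for z, p in probs.items():
        marg[z[:m]] = marg.get(z[:m], 0.0) + p
    dist = 0.5 * sum(abs(marg.get(y, 0.0) - 1 / 2 ** m)
                     for y in product((0, 1), repeat=m))
    assert dist <= 2 ** (m / 2) * eps + 1e-9
print(f"max nontrivial bias eps = {eps:.4f}")
```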
|E[ψ(X1 · X2 ⋯ XC)]| < 2^(-cp)
⇒ lsb_m(X1 · X2 ⋯ XC) is 2^(m/2 − cp) close to uniform
[Figure: X1, X2, X3, X4 → lsb_m(X1 · X2 · X3 · X4)]
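A minimal sketch of this extraction step: multiply the samples in GF(2^p) and keep the m least significant bits. The parameter p = 7 and the irreducible polynomial x^7 + x + 1 are illustrative choices, not from the talk.

```python
import random

P = 7
IRRED = 0b10000011  # x^7 + x + 1, irreducible over GF(2)

def gf_mul(a, b):
    """Carryless multiplication in GF(2^p), reduced mod IRRED."""
    res = 0
    while b:
        if b & 1:
            res ^= a
        b >>= 1
        a <<= 1
        if a >> P:        # degree reached p: reduce
            a ^= IRRED
    return res

def lsb_extract(samples, m):
    """lsb_m of the GF(2^p)-product of the samples."""
    prod = 1
    for x in samples:
        prod = gf_mul(prod, x)
    return prod & ((1 << m) - 1)

random.seed(0)
xs = [random.randrange(1, 1 << P) for _ in range(4)]
print(lsb_extract(xs, 3))
```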
More than an independent sources extractor
Analysis: (X1·X2), (X3·X4), (X5·X6·X7), X8 are independent sources.
X1 X2 X3 X4 X5 X6 X7 X8
lsb_m(X1·X2·X3·X4·X5·X6·X7·X8)
Small Space Extractor for δn entropy
If the source has min-entropy δn, a δ/2 fraction of the blocks must have min-entropy rate δ/2.
Take (2/δ) · C(δ/2) blocks ⇒ C(δ/2) blocks have min-entropy rate δ/2.
lsb()
Result
Theorem: (∀δ > 0) (∃η > 0) efficient extractor for
min-entropy k ≥ δn
space ηn
output length = Ω(n)
error = 2^(-Ω(n))
Can improve to get 99% of the min-entropy out using techniques from [Gabizon,Raz,Shaltiel]
For Polynomial Entropy Rate
Black boxes:
Good condensers: [Barak, Kindler, Shaltiel, Sudakov, Wigderson], [Raz]
Good mergers: [Raz], [Dvir, Raz]
White box:
Condensing somewhere random sources: [Rao]
Somewhere Random Source
Def [TS96]: a t × r matrix of bits in which some row is uniformly random.
[Figure: t rows of length r; at least one row is uniform.]
Aligned Somewhere High-Entropy Sources
Def: Two somewhere high-entropy sources are aligned if the same row has high entropy in both sources.
Condensers [BKSSW],[Raz],[Z]
[Figure: A, B, C are n-bit blocks, viewed as elements of a prime field; the condenser outputs the rows A, B, C, AC + B. The entropy rate improves by a factor 1.1 while the row length shrinks to 2n/3.]
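The condensing step can be sketched in a few lines. The 16-bit block size and the prime q = 65521 below are illustrative assumptions, not parameters from the talk.

```python
# View the three input blocks A, B, C as elements of a prime field F_q
# and add the row A*C + B, as in the [BKSSW]/[Raz]-style condenser.
Q = 65521  # largest prime below 2^16 (illustrative block size)

def condense(a, b, c):
    """One condenser step: rows (A, B, C, AC + B) over F_q."""
    return [a % Q, b % Q, c % Q, (a * c + b) % Q]

print(condense(12345, 54321, 7))
```

Iterating this step on the rows is what multiplies the entropy rate by roughly 1.1 per round while shrinking the row length, per the slides.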
Iterating the condenser
[Figure: applying the condenser t times to blocks A, B, C of length n yields entropy rate (1.1)^t · δ and row length (2/3)^t · n.]
Mergers [Raz], [Dvir, Raz]
[Figure: a merger applied to a C-row somewhere high-entropy source; 99% of the rows of the output have entropy rate 0.9.]
Condense + Merge [Raz]
[Figure: one round alternating Condense and Merge on a C-row source; 99% of the rows of the output have entropy rate 1.1 × the input rate.]
This process maintains alignment.
After one round: rate 1.1·δ, C rows; after two rounds: rate (1.1)^2·δ, C^2 rows.
Bottom Line:
After t rounds: entropy rate (1.1)^t · δ, C^t rows, row length n/d^t.
[BGK] applied to aligned rows X1, Y1, Z1: output lsb(X1·Y1·Z1).
Extracting from SR-sources [Rao]
[Figure: SR-sources with r rows are reduced to SR-sources with sqrt(r) rows each, and the process is repeated.]
We generalize this:
Arbitrary number of sources
Recap
Condense + merge: after t rounds, entropy rate (1.1)^t · δ with C^t rows.
[BGK] on aligned rows X1, Y1, Z1: output lsb(X1·Y1·Z1).
SR-source row reduction: r rows → sqrt(r) rows, generalized to an arbitrary number of sources.
W X Y Z
Solution
Entropy: δn
2 of these have rate δ/2
4 of these have rate δ/4
Condense + merge: C^t rows of rate (1.1)^t × (the starting rate)
[BGK] on aligned rows X1, Y1, Z1: lsb(X1·Y1·Z1)
Final
Entropy: δn
2 of these have rate δ/2
If δ ≥ n^(-0.01): # rows << length of row
Result
Theorem: (assuming we can find primes) (∀γ > 0 sufficiently small) efficient extractor for
min-entropy n^(1-γ)
space n^(1-4γ)
output length n^Ω(1)
error 2^(-n^Ω(1))
Can improve to get 99% of the min-entropy out using techniques from [Gabizon,Raz,Shaltiel]
Future Directions
Smaller min-entropy k?
Non-explicit: k = O(log n). Our results: k = n^(1-Ω(1)).
Larger space?
Non-explicit: Ω(k). Our results: Ω(k) only for k = Ω(n).
Other natural models?
Questions?