Transcript of gasarch/BLOGPAPERS/RegevLWEtalk.pdf
An intro to lattices and learning with errors
A way to keep your secrets secret in a post-quantum world
Daniel Apon – Univ of Maryland
Daniel Apon – Univ of Maryland An intro to lattices and learning with errors
Some images in this talk authored by me. Many excellent lattice images in this talk authored by Oded Regev, and available in papers and surveys on his personal website: http://www.cims.nyu.edu/~regev/ (as of Sept 29, 2012)
Introduction to LWE

1. Learning with Errors
   - Let p = p(n) ≤ poly(n). Consider the noisy linear equations:

     〈a_1, s〉 ≈_χ b_1 (mod p)
     〈a_2, s〉 ≈_χ b_2 (mod p)
     ...

     for s ∈ Z_p^n, a_i ←$ Z_p^n (uniformly at random), b_i ∈ Z_p, and an error distribution χ : Z_p → R^+ on Z_p.
   - Goal: Recover s.

2. Why we care:
   - Believed hard for quantum algorithms
   - Average-case hardness = worst-case hardness
   - Many crypto applications!
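The noisy equations above can be made concrete with a toy sample generator. This is a minimal sketch: the parameters and the {-1, 0, 1} error standing in for χ are illustrative assumptions, not secure choices.

```python
import random

def lwe_samples(s, p, m):
    """Generate m noisy samples (a_i, b_i) with <a_i, s> + e_i = b_i (mod p)."""
    n = len(s)
    out = []
    for _ in range(m):
        a = [random.randrange(p) for _ in range(n)]
        e = random.choice([-1, 0, 1])   # toy stand-in for the error distribution chi
        b = (sum(ai * si for ai, si in zip(a, s)) + e) % p
        out.append((a, b))
    return out

p, n = 97, 4
s = [random.randrange(p) for _ in range(n)]
samples = lwe_samples(s, p, 8)
for a, b in samples:
    e = (b - sum(ai * si for ai, si in zip(a, s))) % p
    assert e in (0, 1, p - 1)           # each error is small modulo p
```

Without the error term e, Gaussian elimination over Z_p would recover s from n samples; the noise is what makes the problem hard.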
Talk Overview. Up next: Intro to lattices
1. Intro to lattices
1.1 What’s a lattice?
1.2 Hard lattice problems
2. Gaussians and lattices
3. From lattices to learning
4. From learning to crypto
What’s a lattice?

- A lattice is a discrete additive subgroup of R^n
- A lattice is a set of points in n-dimensional space with a periodic structure
- Given n linearly independent vectors v_1, ..., v_n ∈ R^n, the lattice they generate is the set of vectors

  L(v_1, ..., v_n) := { Σ_{i=1}^{n} α_i v_i | α_i ∈ Z }.

- The basis B = ( v_1 | v_2 | ··· | v_n ), with the v_i as columns, generates the lattice L(B).
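The definition is easy to see in two dimensions. A quick sketch, with an arbitrary pair of basis vectors, enumerating the integer combinations α_1 v_1 + α_2 v_2 in a small window:

```python
# Enumerate lattice points a1*v1 + a2*v2 for small integer coefficients.
v1, v2 = (2, 0), (1, 2)   # two linearly independent basis vectors (arbitrary example)
points = {
    (a1 * v1[0] + a2 * v2[0], a1 * v1[1] + a2 * v2[1])
    for a1 in range(-3, 4) for a2 in range(-3, 4)
}

assert (0, 0) in points    # the origin is always a lattice point
assert (3, 2) in points    # v1 + v2: closed under addition
assert (-2, 0) in points   # -v1: closed under negation
```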
More on lattice bases
The gray-shaded region is the fundamental parallelepiped, given by P(B) = { Bx | x ∈ [0, 1)^n }.
More on the fundamental parallelepiped

Useful facts:

- For bases B_1, B_2: L(B_1) = L(B_2) ⇒ vol(P(B_1)) = vol(P(B_2))
- vol(P(B)) = |det(B)|
- L(B_1) = L(B_2) iff B_1 = B_2 U for a unimodular matrix U
- A matrix U is unimodular if it is integral and det(U) = ±1.

Moral of the story: All lattices have countably infinitely many bases, and given some fixed lattice, all of its possible bases are related by “volume-preserving” transformations.
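The “volume-preserving” claim can be checked directly in a small case. A 2×2 sketch (the particular B and U below are arbitrary examples):

```python
# A unimodular change of basis preserves |det|, hence vol(P(B)).
def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def matmul2(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

B  = [[2, 1], [0, 2]]   # a basis of some 2-D lattice
U  = [[1, 1], [0, 1]]   # integral with det(U) = 1, hence unimodular
B2 = matmul2(B, U)      # another basis of the same lattice

assert det2(U) in (1, -1)
assert abs(det2(B)) == abs(det2(B2)) == 4   # the volume of P(B) is preserved
```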
The dual of a lattice

- Given a lattice L = L(B), the dual lattice L* := L(B*) is generated by the dual basis B*: the unique basis s.t. B^T B* = I.
- Equivalently, the dual of a lattice L ⊂ R^n is given by

  L* = { y ∈ R^n | 〈x, y〉 ∈ Z for all x ∈ L }.

- Fact. For any L = L(B), L* = L(B*),

  vol(P(B)) = 1 / vol(P(B*)).
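Both characterizations are mechanical to verify in two dimensions. A sketch with exact rational arithmetic (the basis is an arbitrary example; B* is computed as (B^T)^{-1}, which is exactly the condition B^T B* = I):

```python
from fractions import Fraction as F

def transpose(m):
    return [list(r) for r in zip(*m)]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def det(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def inv2(m):
    d = det(m)
    return [[ m[1][1] / d, -m[0][1] / d],
            [-m[1][0] / d,  m[0][0] / d]]

B = [[F(2), F(1)], [F(0), F(2)]]   # basis vectors are the columns
Bstar = inv2(transpose(B))          # dual basis: B^T B* = I

assert matmul(transpose(B), Bstar) == [[1, 0], [0, 1]]
# vol(P(B)) = |det B| and vol(P(B*)) = |det B*| are reciprocals:
assert abs(det(B)) * abs(det(Bstar)) == 1
```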
Lattice problems

Defn: λ_1(L) is the length of the shortest nonzero vector in L

1. GapSVP_γ
   - Input: an n-dimensional lattice L and a number d > 0
   - Output: YES if λ_1(L) ≤ d; NO if λ_1(L) > γ(n) · d
2. CVP_{L*, d}
   - Input: an n-dimensional (dual) lattice L* and a point x ∈ R^n within distance d of L*
   - Output: the closest vector in L* to x
3. Other common lattice problems:
   - Shortest Independent Vectors Problem (SIVP), Covering Radius Problem (CRP), Bounded Distance Decoding (BDD), Discrete Gaussian Sampling Problem (DGS), Generalized Independent Vectors Problem (GIVP)
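For intuition about λ_1, here is a brute-force computation on a small 2-D lattice. This is exponential-time in general (which is the point of the hardness results below); the basis is an arbitrary example:

```python
import math

# Brute-force lambda_1 by searching a window of integer coefficients.
v1, v2 = (3, 1), (1, 3)
best = min(
    math.hypot(a * v1[0] + b * v2[0], a * v1[1] + b * v2[1])
    for a in range(-5, 6) for b in range(-5, 6)
    if (a, b) != (0, 0)
)
# v1 - v2 = (2, -2) beats the basis vectors themselves (length sqrt(10)):
assert best == math.hypot(2, -2)
```

Note that the shortest vector (2, −2) is not a basis vector, which is why short vectors are hard to read off an arbitrary basis.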
Complexity of (Gap)SVP and (Gap)CVP (and SIVP)
Moral of the story: We can get O(2^n)-approximate solutions in polynomial time. Constant-factor approximations are NP-hard. The best algorithms for anything in between require Ω(2^n) time.
Talk Overview. Up next: Gaussians and lattices
1. Intro to lattices
2. Gaussians and lattices
2.1 Uniformly sampling space
2.2 D_{L,r}: The discrete Gaussian of width r on a lattice L
3. From lattices to learning
4. From learning to crypto
Uniformly sampling space

Question: How do you uniformly sample over an unbounded range?

- Eg, how do you uniformly sample x ∈ Z?

Answer: You can’t!

The “lattice answer”: Sample uniformly from Z_p; view Z as being partitioned by copies of Z_p.
Uniformly sampling space

Question: How do you uniformly sample from R^n?

Answer: You can’t!

The “lattice answer”: Sample uniformly from the fundamental parallelepiped of a lattice.
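The “lattice answer” is the n-dimensional analogue of reducing an integer mod p. A 2-D sketch (the basis is an arbitrary example): sample x uniformly in [0,1)^2 and map it through B, and conversely reduce any point of R^2 into P(B) by keeping only the fractional parts of its coefficients.

```python
import math
import random

B = [[2.0, 1.0],
     [0.0, 2.0]]   # basis vectors are the columns

def mat_vec(m, v):
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

def inv2(m):
    d = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [[ m[1][1] / d, -m[0][1] / d],
            [-m[1][0] / d,  m[0][0] / d]]

def sample_parallelepiped():
    """Uniform sample from P(B) = { Bx : x in [0,1)^2 }."""
    x = [random.random(), random.random()]
    return mat_vec(B, x)

def reduce_mod_lattice(pt):
    """Map a point of R^2 into P(B): keep only fractional coefficients."""
    coeffs = mat_vec(inv2(B), pt)
    frac = [c - math.floor(c) for c in coeffs]
    return mat_vec(B, frac)

assert reduce_mod_lattice([2.0, 0.0]) == [0.0, 0.0]   # lattice points map to the origin
coeffs = mat_vec(inv2(B), sample_parallelepiped())
assert all(0.0 <= c < 1.0 for c in coeffs)
```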
Lattices with Gaussian noise

A related question: What does a lattice look like when you “smudge” the lattice points with Gaussian-distributed noise?

Answer: R^n

- Left-to-right: PDFs of Gaussians centered at lattice points with increasing standard deviation
The discrete Gaussian: D_{L,r}

- Denote by D_{L,r} the discrete Gaussian on a lattice L of width r
- Define the smoothing parameter, η_ε(L), as the least width s.t. D_{L,r} is at most ε-far from the continuous Gaussian (over L).
- Important fact. η_{negl(n)}(L) = ω(√(log n)) ≈ Θ(√n)
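For the one-dimensional lattice Z, the discrete Gaussian D_{Z,r} assigns each integer x probability proportional to exp(−πx²/r²), and it can be sampled by simple rejection. A sketch only; real samplers use careful tail cuts and constant-time techniques:

```python
import math
import random

def discrete_gaussian_Z(r, tail=10):
    """Rejection-sample x in Z with Pr[x] proportional to exp(-pi x^2 / r^2)."""
    bound = int(math.ceil(r * tail))
    while True:
        x = random.randint(-bound, bound)
        if random.random() <= math.exp(-math.pi * x * x / (r * r)):
            return x

r = 3.0
xs = [discrete_gaussian_Z(r) for _ in range(2000)]
mean = sum(xs) / len(xs)
assert all(isinstance(x, int) for x in xs)
assert abs(mean) < 1.0   # the distribution is symmetric about 0
```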
Talk Overview. Up next: From lattices to learning
1. Intro to lattices
2. Gaussians and lattices
3. From lattices to learning
3.1 Reduction sketch: GapSVP to LWE
4. From learning to crypto
Reduction from GapSVP to LWE – Overview

Reduction sketch

1. Our goal: Prove LWE is hard
2. Reduction outline
   2.1 Why quantum?
3. Classical step: D_{L,r} + LWE oracle → CVP_{L*, αp/r} oracle
4. Quantum step: CVP_{L*, αp/r} oracle → D_{L, r√n/(αp)}
   4.1 Note: (η_ε(L) ≈) αp > 2√n → D_{L, r√n/(αp)} ≈ D_{L, <r/2}
5. Conclude: Either LWE is hard, or the complexity landscape turns into a war zone
   5.1 “War zone:” At least 4 or 5 good complexity classes had to give their lives to ensure stability – that sort of thing.
Reduction outline: GapSVP to LWE

- LLL Basis Reduction algorithm: In polytime, given an arbitrary L(B), outputs a new basis B′ of length at most 2^n times the shortest basis.

GOAL: Given an arbitrary lattice L, output a very short vector, or decide none exist.

- Let r_i denote r · (αp/√n)^i for i = 3n, 3n − 1, ..., 1 and r ≥ O(n/α). (Imagine α ≈ 1/n^1.5, so r ≈ n^1.5 · n.)
- Using LLL, generate B′, and using B′, draw n^c samples from D_{L,r_{3n}}.
- For i = 3n, ..., 1:
  - Call IterativeStep n^c times, using the n^c samples from D_{L,r_i} to produce 1 sample from D_{L,r_{i−1}} each time.
- Output a sample from D_{L,r_0} = D_{L,r}.
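The width schedule above can be sanity-checked numerically. The concrete n, p, and α below are illustrative assumptions chosen so that the ratio αp/√n exceeds 2 (not Regev’s exact parameters); the point is that the widths shrink by more than half at every step, down from the huge LLL-quality starting width to r.

```python
import math

n = 16
alpha = 1 / n**1.5
p = n**3                    # an assumed poly(n) modulus, for illustration
r = n / alpha               # r >= O(n/alpha)
ratio = alpha * p / math.sqrt(n)

# r_i = r * ratio^i, listed from r_{3n} down to r_0 = r
widths = [r * ratio**i for i in range(3 * n, -1, -1)]

assert ratio > 2
assert widths[-1] == r
assert all(nxt < w / 2 for w, nxt in zip(widths, widths[1:]))
```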
The iterative step
Two steps: (1) classical, (2) quantum
Why quantum?

- Let L be a lattice. Let d ≪ λ_1(L).
- You are given an oracle O that, on input x ∈ R^n within distance d from L, outputs the closest lattice vector to x.
- (Caveat: If x is of distance > d from L, O’s output is arbitrary.)
- How do you use O?
- One idea: Choose some lattice vector y ∈ L. Let x = y + z with ||z|| ≤ d. Give x to O.
- But then O(x) = y!
- But quantumly, knowing how to compute y given only y + z is useful – it allows us to uncompute a register containing y.
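The classical dead-end on this slide is easy to see in code. A toy stand-in for the oracle O on the lattice Z^2 (where, for d < 1/2, coordinate-wise rounding returns the closest lattice vector): querying it on y + z just hands back the y we already chose.

```python
# Toy CVP oracle for the integer lattice Z^2, valid within distance d < 1/2.
def O(x):
    return [round(c) for c in x]

y = [3, -7]                    # a lattice vector we picked ourselves
z = [0.3, -0.2]                # noise with ||z|| <= d
x = [yi + zi for yi, zi in zip(y, z)]

assert O(x) == y               # the oracle only tells us what we already knew
```

Classically this is useless; the quantum reduction exploits exactly this ability to recompute y from y + z, in superposition, to uncompute a register.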
Classical step: D_{L,r} + LWE oracle → CVP_{L*,αp/r} oracle

I Let D be a probability distribution on a lattice L. Consider its Fourier transform f : R^n → C, given by

f(x) := ∑_{y∈L} D(y) exp(2πi〈x, y〉) = E_{y←D}[exp(2πi〈x, y〉)]

I Using Hoeffding’s inequality, if y_1, ..., y_N are N = poly(n) independent samples from D, then w.h.p.

f(x) ≈ (1/N) ∑_{j=1}^{N} exp(2πi〈x, y_j〉)
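The Hoeffding estimate above is easy to check empirically. A minimal sketch for the 1-dimensional lattice L = Z with D = D_{Z,r} (illustrative parameters r, x, N; the rejection sampler is a stand-in, not the talk’s):

```python
import cmath
import math
import random

def sample_discrete_gaussian(r):
    """Rejection-sample y ~ D_{Z,r}, i.e. Pr[y] proportional to exp(-pi*(y/r)^2)."""
    bound = int(10 * r)
    while True:
        y = random.randint(-bound, bound)
        if random.random() < math.exp(-math.pi * (y / r) ** 2):
            return y

def f_exact(x, r, tail=100):
    """f(x) = sum_y D(y) exp(2*pi*i*x*y), by direct (truncated) summation."""
    weights = {y: math.exp(-math.pi * (y / r) ** 2) for y in range(-tail, tail + 1)}
    total = sum(weights.values())
    return sum(w * cmath.exp(2j * math.pi * x * y) for y, w in weights.items()) / total

random.seed(0)
r, x, N = 4.0, 0.1, 20000
samples = [sample_discrete_gaussian(r) for _ in range(N)]
f_hat = sum(cmath.exp(2j * math.pi * x * y) for y in samples) / N  # empirical average
assert abs(f_hat - f_exact(x, r)) < 0.05   # w.h.p. for N = 20000 samples
```

Each term of the average has modulus 1, so Hoeffding gives concentration of order 1/√N around f(x).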
Classical step: D_{L,r} + LWE oracle → CVP_{L*,αp/r} oracle

I Applying this idea to D_{L,r}, we get a good approximation of its Fourier transform, denoted f_{1/r}. Note f_{1/r} is L*-periodic.
I It can be shown that 1/r ≪ λ_1(L*), so we have

f_{1/r}(x) ≈ exp(−π(r · dist(L*, x))²)
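The approximation f_{1/r}(x) ≈ exp(−π(r · dist(L*, x))²) can be checked numerically in the simplest self-dual case L = L* = Z (a sketch under that assumption; the parameters are illustrative):

```python
import math

def f_one_over_r(x, r, tail=100):
    """Fourier transform of the normalized discrete Gaussian D_{Z,r} at x."""
    num = sum(math.exp(-math.pi * (y / r) ** 2) * math.cos(2 * math.pi * x * y)
              for y in range(-tail, tail + 1))   # imaginary parts cancel by symmetry
    den = sum(math.exp(-math.pi * (y / r) ** 2) for y in range(-tail, tail + 1))
    return num / den

r, x = 5.0, 0.05                       # here 1/r = 0.2 is well below lambda_1(Z) = 1
dist = abs(x - round(x))               # dist(Z, x)
approx = math.exp(-math.pi * (r * dist) ** 2)
assert abs(f_one_over_r(x, r) - approx) < 1e-6
```

The error of the approximation decays exponentially in r², which is why it is excellent as soon as 1/r is well below λ_1(L*).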
Classical step: D_{L,r} + LWE oracle → CVP_{L*,αp/r} oracle

I Attempt #1: Using samples from D_{L,r}, we repeatedly compute approximations to f_{1/r} and attempt to “walk uphill” to find the peak (a dual lattice point).
I The problem: This procedure only gives a method to solve CVP_{L*,1/r}. (Beyond that distance, the value of f_{1/r} becomes negligible.)
I Plugging this into our iterative step means we go from D_{L,r} to D_{L,r√n}, which is the wrong direction!
I Goal: We need a FATTER Fourier transform!
Classical step: D_{L,r} + LWE oracle → CVP_{L*,αp/r} oracle

I Equivalently, we need tighter samples!
I Attempt #2: Take samples from D_{L,r} and just divide every coordinate by p. This gives samples from D_{L/p,r/p}, where L/p is L scaled down by a factor of p.
I That is, the lattice L/p consists of p^n translates of L.
I Label these p^n translates by vectors from Z_p^n.
I It can be shown that r/p > η_ε(L), which implies D_{L/p,r/p} is uniform over the set of translates L + La/p, for a ∈ Z_p^n.
I For any choice of a ∈ Z_p^n, L + La/p (modulo the parallelepiped) corresponds to a choice of translate.
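Attempt #2 can be sketched concretely for L = Z^n (so the basis is the identity and La = a); the helper `sample_dg` and the parameters are illustrative:

```python
import math
import random

def sample_dg(r):
    """Rejection-sample from the discrete Gaussian D_{Z,r}."""
    bound = int(10 * r)
    while True:
        y = random.randint(-bound, bound)
        if random.random() < math.exp(-math.pi * (y / r) ** 2):
            return y

random.seed(1)
n, p, r = 4, 5, 40.0
y = [sample_dg(r) for _ in range(n)]    # y ~ D_{Z^n, r}
y_over_p = [c / p for c in y]           # y/p ~ D_{Z^n/p, r/p}
a = [c % p for c in y]                  # a in Z_p^n labels the translate L + La/p
# Subtracting the translate's offset a/p lands us back on the base lattice L = Z^n:
assert all(abs((yp - ai / p) - round(yp - ai / p)) < 1e-9
           for yp, ai in zip(y_over_p, a))
```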
Classical step: D_{L,r} + LWE oracle → CVP_{L*,αp/r} oracle

I This motivates defining a new distribution, D, with samples (a, y) obtained by:
1. y ← D_{L/p,r/p}
2. a ∈ Z_p^n s.t. y ∈ L + La/p (← Complicated to analyze..?)
I From the previous slide, we know that we can obtain D from D_{L,r}. Also, we know that D is equivalently obtained by:
1. First, a $← Z_p^n (← Ahh! Much nicer. :))
2. Then, y ← D_{L+La/p,r/p}
I The width of the discrete Gaussian samples y is tighter now!
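The pivot from the first sampling order to the second relies on the label a being (nearly) uniform on Z_p^n once r/p > η_ε(L). A statistical sketch for L = Z, n = 1, with illustrative parameters:

```python
import math
import random
from collections import Counter

def sample_dg(r):
    """Rejection-sample from the discrete Gaussian D_{Z,r}."""
    bound = int(10 * r)
    while True:
        y = random.randint(-bound, bound)
        if random.random() < math.exp(-math.pi * (y / r) ** 2):
            return y

random.seed(2)
p, r, N = 5, 50.0, 20000
# Label a = y mod p of y ~ D_{Z,r}; here r/p = 10 is far above the smoothing
# parameter of Z, so the labels should be statistically close to uniform on Z_p.
counts = Counter(sample_dg(r) % p for _ in range(N))
tv = 0.5 * sum(abs(counts[a] / N - 1 / p) for a in range(p))  # total variation dist.
assert tv < 0.02
```

This is exactly what justifies resampling D by first drawing a uniformly and then drawing y from the corresponding translate.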
Classical step: D_{L,r} + LWE oracle → CVP_{L*,αp/r} oracle

How about the Fourier transform of D? It’s wider now! But...
The problem: Each hill of f_{p/r} now has its own “phase.” Do we climb up or down?

I [Figure: two examples of the Fourier transform of D_{L+La/p,r/p}, with a = (0,0) (left) and a = (1,1) (right).]
Classical step: D_{L,r} + LWE oracle → CVP_{L*,αp/r} oracle

Key observation #1:

I For x ∈ L*, each sample (a, y) ← D gives a linear equation

〈a, τ(x)〉 = p〈x, y〉 mod p

for a $← Z_p^n. After about n equations, we can use Gaussian elimination to recover τ(x) ∈ Z_p^n.
I What if x ∉ L*?
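Key observation #1 is plain linear algebra over Z_p. A sketch with a hypothetical hidden vector t standing in for τ(x); `solve_mod_p` is an illustrative Gauss–Jordan solver (p is taken prime so every nonzero pivot is invertible):

```python
import random

def solve_mod_p(A, b, p):
    """Solve A*t = b over Z_p (p prime) by Gauss-Jordan elimination.
    Returns None if A is singular mod p."""
    n = len(A)
    M = [row[:] + [bi % p] for row, bi in zip(A, b)]   # augmented matrix
    for col in range(n):
        piv = next((r for r in range(col, n) if M[r][col] % p != 0), None)
        if piv is None:
            return None
        M[col], M[piv] = M[piv], M[col]
        inv = pow(M[col][col], -1, p)                  # modular inverse (p prime)
        M[col] = [(v * inv) % p for v in M[col]]
        for r in range(n):
            if r != col and M[r][col] % p != 0:
                f = M[r][col]
                M[r] = [(vr - f * vc) % p for vr, vc in zip(M[r], M[col])]
    return [row[n] for row in M]

random.seed(3)
p, n = 97, 6
t = [random.randrange(p) for _ in range(n)]            # the hidden tau(x)
while True:                                            # redraw until A is invertible
    A = [[random.randrange(p) for _ in range(n)] for _ in range(n)]
    b = [sum(x * y for x, y in zip(row, t)) % p for row in A]
    recovered = solve_mod_p(A, b, p)
    if recovered is not None:
        break
assert recovered == t    # n noiseless equations pin down tau(x) exactly
```

With noise in the equations Gaussian elimination breaks down, which is precisely where the LWE oracle of key observation #2 takes over.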
Classical step: D_{L,r} + LWE oracle → CVP_{L*,αp/r} oracle

Key observation #2:

I For x close to L*, each sample (a, y) ← D gives a linear equation with error

〈a, τ(x)〉 ≈ ⌊p〈x, y〉⌉ mod p

for a $← Z_p^n. After poly(n) equations, we use the LWE oracle to recover τ(x) ∈ Z_p^n. (NOTE: |error| = ||τ(x)||_2)
I This lets us compute the phase exp(2πi〈a, τ(x)〉/p), and hence recover the closest dual lattice vector to x.
I Classical step DONE.
Quantum step: CVP_{L*,αp/r} oracle → D_{L,r√n/(αp)}

Observe: the reduction CVP_{L*,αp/r} → D_{L,r√n/(αp)} is, after the substitution r ↦ r√n/(αp), the same as CVP_{L*,√n/r} → D_{L,r}.
New quantum step: CVP_{L*,√n/r} oracle → D_{L,r}

Ok, let’s give a solution for CVP_{L*,√n/r} → D_{L,r}.
GOAL: Get a quantum state corresponding to f_{1/r} (on the dual lattice) and use the quantum Fourier transform to get D_{L,r} (on the primal lattice). We will use the promised CVP oracle to do so.

1. Create a uniform superposition on L*: ∑_{x∈L*} |x〉.
2. On a separate register, create a “Gaussian state” of width 1/r: ∑_{z∈R^n} exp(−π||rz||²)|z〉.
3. The combined system state is written: ∑_{x∈L*, z∈R^n} exp(−π||rz||²)|x, z〉.
New quantum step: CVP_{L*,√n/r} oracle → D_{L,r}

Key rule: All quantum computations must be reversible.

1. Add the first register to the second (reversible) to obtain: ∑_{x∈L*, z∈R^n} exp(−π||rz||²)|x, x + z〉.
2. Since we have a CVP_{L*,√n/r} oracle, we can compute x from x + z. Therefore, we can uncompute the first register: ∑_{x∈L*, z∈R^n} exp(−π||rz||²)|x + z〉 ≈ ∑_{z∈R^n} f_{1/r}(z)|z〉.
3. Finally, apply the quantum Fourier transform to obtain ∑_{y∈L} D_{L,r}(y)|y〉, and measure it to obtain a sample from ≈ D_{L,r}.
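Both the collapse in step 2 and the effect of the quantum Fourier transform in step 3 rest on the Poisson summation formula for Gaussians; in the notation above (a sketch, with normalization constants handled informally, as on the slides):

```latex
\sum_{y \in L} e^{-\pi \lVert y/r \rVert^2}\, e^{2\pi i \langle z, y \rangle}
  \;=\; \frac{r^n}{\det(L)} \sum_{x \in L^*} e^{-\pi r^2 \lVert z - x \rVert^2}.
```

Normalizing by the value at a dual lattice point and using 1/r ≪ λ_1(L*), so that only the closest x ∈ L* contributes, recovers f_{1/r}(z) ≈ exp(−π(r · dist(L*, z))²); read in the other direction, the same identity says the Gaussian superposition over L* + z and the discrete Gaussian over L are Fourier duals.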
Talk Overview. Up next: From learning to crypto
1. Intro to lattices
2. Gaussians and lattices
3. From lattices to learning
4. From learning to crypto
4.1 Regev’s PKE scheme from LWE
Decisional Learning with Errors (DLWE)

I For positive integers n and q ≥ 2, a secret s ∈ Z_q^n, and a distribution χ on Z, define A_{s,χ} as the distribution obtained by drawing a $← Z_q^n uniformly at random and a noise term e $← χ, and outputting (a, b) = (a, 〈a, s〉 + e (mod q)) ∈ Z_q^n × Z_q.
I (DLWE_{n,q,χ}). An adversary gets oracle access to either A_{s,χ} or U(Z_q^n × Z_q) and aims to distinguish (with non-negligible advantage) which is the case.
I Theorem. Let B ≥ ω(log n) · √n. There exists an efficiently sampleable distribution χ with |χ| < B (meaning, χ is supported only on [−B, B]) s.t. if an efficient algorithm solves the average-case DLWE_{n,q,χ} problem, then there is an efficient quantum algorithm that solves GapSVP_{Õ(n·q/B)} on any n-dimensional lattice.
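The two oracles of DLWE_{n,q,χ} can be sketched directly (toy parameters, far too small for security; the bounded noise distribution `chi` here is an illustrative rounded Gaussian, not the specific χ of the theorem):

```python
import math
import random

def chi(B):
    """Toy bounded noise on Z, supported on [-B, B] (illustrative stand-in)."""
    while True:
        e = random.randint(-B, B)
        if random.random() < math.exp(-math.pi * (e / (B / 3)) ** 2):
            return e

def A_s_chi(s, q, B):
    """One sample from A_{s,chi}: (a, <a,s> + e mod q)."""
    n = len(s)
    a = [random.randrange(q) for _ in range(n)]
    b = (sum(x * y for x, y in zip(a, s)) + chi(B)) % q
    return a, b

def uniform_sample(n, q):
    """One sample from U(Z_q^n x Z_q)."""
    return [random.randrange(q) for _ in range(n)], random.randrange(q)

random.seed(4)
n, q, B = 8, 257, 4
s = [random.randrange(q) for _ in range(n)]
a, b = A_s_chi(s, q, B)
e = (b - sum(x * y for x, y in zip(a, s))) % q
assert min(e, q - e) <= B   # LWE samples sit within B of <a,s> mod q
```

Given s the two oracles are trivially distinguishable, as the assertion shows; the DLWE assumption is that without s they are not.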
Regev’s PKE scheme

1. SecretKeyGen(1^n): Sample s $← Z_q^n. Output sk = s.
2. PublicKeyGen(s): Let N := O(n log q). Sample A $← Z_q^{N×n} and e $← χ^N. Compute b = A · s + e (mod q), and define P := [b || −A] ∈ Z_q^{N×(n+1)}. Output pk = P.
3. Enc_pk(m): To encrypt a message m ∈ {0, 1} using pk = P, sample r ∈ {0, 1}^N and output the ciphertext

c = P^T · r + ⌊q/2⌋ · m (mod q) ∈ Z_q^{n+1},

where m := (m, 0, ..., 0) ∈ {0, 1}^{n+1}.
4. Dec_sk(c): To decrypt c ∈ Z_q^{n+1} using secret key sk = s, compute

m = ⌊(2/q) · (〈c, (1, s)〉 mod q)⌉ mod 2.
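The whole scheme fits in a few lines. A minimal sketch with toy parameters (illustrative only, far too small for security); `chi` is a stand-in bounded noise distribution:

```python
import math
import random

random.seed(5)
n, q, B = 10, 3329, 2
N = 2 * (n + 1) * math.ceil(math.log2(q))   # N = O(n log q); here N*B < q/4

def chi():
    return random.randint(-B, B)            # toy bounded noise on [-B, B]

def dot(u, v, m):
    return sum(x * y for x, y in zip(u, v)) % m

# SecretKeyGen / PublicKeyGen
s = [random.randrange(q) for _ in range(n)]
A = [[random.randrange(q) for _ in range(n)] for _ in range(N)]
e = [chi() for _ in range(N)]
b = [(dot(A[i], s, q) + e[i]) % q for i in range(N)]
P = [[b[i]] + [(-A[i][j]) % q for j in range(n)] for i in range(N)]  # P = [b || -A]

def enc(m):
    r = [random.randint(0, 1) for _ in range(N)]
    c = [sum(r[i] * P[i][j] for i in range(N)) % q for j in range(n + 1)]  # P^T r
    c[0] = (c[0] + (q // 2) * m) % q   # add floor(q/2)*m to the first coordinate
    return c

def dec(c):
    v = (c[0] + dot(c[1:], s, q)) % q  # <c, (1, s)> mod q
    return round(2 * v / q) % 2

for m in (0, 1):
    assert dec(enc(m)) == m
```

Decryption works because ⟨c, (1, s)⟩ = ⌊q/2⌋·m + ⟨r, e⟩ (mod q) and |⟨r, e⟩| ≤ N·B, which stays below ⌊q/2⌋/2 for these parameters.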
Regev’s PKE scheme: Correctness

Encryption noise. Let all parameters be as before. Then for some e′ with |e′| ≤ N·B, ⟨c, (1, s)⟩ = ⌊q/2⌋·m + e′ (mod q).

Proof.

⟨c, (1, s)⟩ = ⟨Pᵀ·r + ⌊q/2⌋·m̄, (1, s)⟩ (mod q)
            = ⌊q/2⌋·m + rᵀP·(1, s) (mod q)
            = ⌊q/2⌋·m + rᵀb − rᵀAs (mod q)
            = ⌊q/2⌋·m + ⟨r, e⟩ (mod q),

and |⟨r, e⟩| ≤ ‖r‖₁ · ‖e‖_∞ ≤ N · B.

Decryption noise. We’re good to go as long as the noise is at most ⌊q/2⌋/2!
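The chain of equalities above can be checked numerically: since P·(1, s) = b − A·s = e, the decryption inner product equals ⌊q/2⌋·m plus the noise ⟨r, e⟩. A self-contained sketch with tiny made-up parameters (n = 4, N = 12, q = 257, B = 3, none of which come from the talk):

```python
import random

# Tiny illustrative parameters (assumed, not from the slides).
n, N, q, B, m = 4, 12, 257, 3, 1
rnd = random.Random(0)

s = [rnd.randrange(q) for _ in range(n)]
A = [[rnd.randrange(q) for _ in range(n)] for _ in range(N)]
e = [rnd.randint(-B, B) for _ in range(N)]
b = [(sum(A[i][j] * s[j] for j in range(n)) + e[i]) % q for i in range(N)]
P = [[b[i]] + [(-a) % q for a in A[i]] for i in range(N)]       # P = [b || -A]

r = [rnd.randrange(2) for _ in range(N)]
c = [sum(P[i][j] * r[i] for i in range(N)) % q for j in range(n + 1)]
c[0] = (c[0] + (q // 2) * m) % q                                # c = P^T r + floor(q/2) * m-bar

# <c, (1, s)> mod q equals floor(q/2)*m + <r, e> mod q ...
lhs = (c[0] + sum(si * ci for si, ci in zip(s, c[1:]))) % q
noise = sum(ri * ei for ri, ei in zip(r, e))
assert lhs == ((q // 2) * m + noise) % q
# ... and the noise is bounded by ||r||_1 * ||e||_inf <= N * B.
assert abs(noise) <= sum(r) * max(abs(x) for x in e) <= N * B
```

Here N·B = 36 is below ⌊q/2⌋/2 = 64, so even the worst-case noise leaves the rounding step unambiguous.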
Regev’s PKE scheme: Security

Let n, q, χ be chosen so that DLWE_{n,q,χ} holds. Then for any m ∈ {0, 1}, the joint distribution (P, c) is computationally indistinguishable from U(Z_q^{N×(n+1)} × Z_q^{n+1}), where P and c come from Regev’s PKE scheme.
That’s all. :)