Advanced Algorithms Course Final Project

Slides by Irina Volinsky, Evgeny Shulgin, Lev Solar. Adapted from Dr. Ely Porat's course lecture notes.


Page 1

Slides by Irina Volinsky, Evgeny Shulgin, Lev Solar

Adapted from Dr. Ely Porat's course lecture notes.

Page 2

Page 3

Oblivious Transfer (OT)

• OT is a cornerstone in the Foundations of Cryptography

• OT has become the basis for realizing a broad class of interactive protocols, such as bit commitment and zero-knowledge proofs.

Page 4

Oblivious Transfer (OT) cont.

• An OT protocol is a two-party protocol in which Alice (A) transmits a bit to Bob (B), Bob receives it with probability ½, and Alice does not learn whether Bob received the bit.

• Alice sends X to Bob over some channel X → Y, where Bob may choose the channel (hidden from Alice) from a previously agreed-on set, and/or the channel may add noise to the transmission.

Page 5

Oblivious Transfer (OT): Formal Definition

• One-out-of-two String OT, denoted $\binom{2}{1}\text{-OT}^k$:

– One party, A, owns two secret k-bit strings w_0 and w_1.

– Another party, B, wants to learn w_c, c ∈ {0, 1}, for a secret bit c of his choice.

– B does not learn any information about w_{1−c}, and A cannot obtain any information about c.

Page 6

OT Variations

• A natural restriction of the previous definition is One-out-of-two Bit Oblivious Transfer, denoted $\binom{2}{1}\text{-OT}$. It concerns the case k = 1, in which w_0 and w_1 are single-bit secrets, generally called b_0 and b_1.

• A natural extension of One-out-of-two String OT ($\binom{2}{1}\text{-OT}^k$) is called All-or-Nothing Disclosure of Secrets (ANDOS), denoted $\binom{t}{1}\text{-OT}^k$.

Page 7

ANDOS OT

• A owns t secret k-bit strings w_0, w_1, …, w_{t−1}, and B wants to learn w_c for a secret integer 0 ≤ c < t of his choice.

• Transfer between the two parties is done in an all-or-nothing fashion, which means it must be impossible for B to obtain information on more than one w_i, 0 ≤ i < t, and for A to obtain information about which secret B learned.

• ANDOS has applications to zero-knowledge proofs, exchange of secrets, identification, etc.

Page 8

Example of OT

This example uses a one-way function ƒ.

• Alice has a list of secrets s_1, …, s_k.
• Alice tells Bob about ƒ.
• Bob wants to know s_i. He chooses random numbers x_1, …, x_k in the domain of ƒ and sends Alice y_1, …, y_k, where

    y_j = x_j if j ≠ i,    y_j = ƒ(x_j) if j = i.

• Alice computes z_j = ƒ⁻¹(y_j) for j = 1, …, k and sends Bob z_j ⊕ s_j for each j.
• Bob can compute z_i = ƒ⁻¹(ƒ(x_i)) = x_i and so can recover s_i, since z_i ⊕ s_i = x_i ⊕ s_i and Bob knows x_i.
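The following is a minimal Python sketch of this protocol. Since Alice must be able to invert ƒ, a toy RSA trapdoor permutation stands in for the one-way function here; the parameters, the number of secrets, and the chosen index i are made up for illustration and are far too small to be secure.

import random

# Toy trapdoor permutation standing in for f (only Alice knows the trapdoor d).
p, q = 61, 53
n = p * q
e = 17
d = pow(e, -1, (p - 1) * (q - 1))

def f(x):                       # public: Bob can evaluate f
    return pow(x, e, n)

def f_inv(y):                   # private: only Alice can invert f
    return pow(y, d, n)

secrets = [random.randrange(n) for _ in range(5)]   # Alice's s_1, ..., s_k
i = 2                                               # the index Bob wants

# Bob: choose random x_j and send y_j = x_j, except y_i = f(x_i)
xs = [random.randrange(n) for _ in secrets]
ys = [f(x) if j == i else x for j, x in enumerate(xs)]

# Alice: invert every y_j and mask each secret with the preimage
masked = [f_inv(y) ^ s for y, s in zip(ys, secrets)]

# Bob: he knows f_inv(y_i) = x_i, so he can unmask exactly s_i;
# he cannot compute f_inv(x_j) for j != i, so the other secrets stay hidden.
assert masked[i] ^ xs[i] == secrets[i]

Because y_i = ƒ(x_i) is distributed like the random x_j values, Alice cannot tell which position Bob chose.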

Page 9

Example of OT (cont.)

• Notice that Bob can cheat by sending other y_j as ƒ(x_j) rather than x_j, as he is supposed to do. This is called active cheating. Passive cheating involves analyzing protocol-compliant data outside the protocol. Cheating is what makes protocol analysis difficult from a mathematical perspective.

Page 10

Correctness

• Formally speaking, OT is a two-party protocol that satisfies the constraint of correctness.

• Let [P_0, P_1](a)(b) be a random variable that describes the output obtained by A and B when they execute the programs P_0 and P_1 together on respective inputs a and b.

• Similarly, let [P_0, P_1]*(a)(b) be the random variable that describes the total¹ information acquired during the execution of protocol [P_0, P_1] on inputs a and b.

¹ Total information includes not only messages received and issued by the parties but also the result of any local random sampling they may have performed.

Page 11

Correctness (cont.)

• Let [P_0, P_1]_P(a)(b) and [P_0, P_1]*_P(a)(b) be the marginal random variables obtained by restricting the previous definitions to only one party P (often called the view of P).

• Definition of correctness (ε stands for the empty string). Protocol [A, B] is correct for $\binom{t}{1}\text{-OT}^k$ if:

– For all w ∈ F^tk and c ∈ T:

    Prob{ [A, B](w)(c) ≠ (ε, w_c) } = 0

– For any program Ã there exists a probabilistic program S̃ such that, for all w ∈ F^tk and c ∈ T:

    ( [Ã, B]_B(w)(c) | B accepts )  ≡  ( [A, B]_B(S̃(w))(c) | B accepts )

Page 12

Correctness (cont.)

• Intuitive description:

– Condition 1 means that if the protocol is executed as described, it will accomplish the task it was designed for: B receives the word w_c and A receives nothing.

– Condition 2 means that, in the situation in which B does not abort, A cannot induce a distribution on B's output using a dishonest Ã that she could not induce simply by changing the input words and then being honest (which she can always do without being detected). This condition is called awareness. It is concerned with the future use of the outputs of the protocol.

– No correctness condition involving B̃ is necessary, since A receives no output.

Page 13

Page 14

Introduction

Why do we need approximation algorithms?

• Many problems of practical interest are NP-complete, and we need to solve them.

• If a problem is NP-complete, we are unlikely to find a polynomial-time algorithm for solving it exactly, but it may still be possible to find a near-optimal solution in polynomial time (either in the worst case or on average). In practice, near-optimality is often good enough.

• An algorithm that returns near-optimal solutions is called an approximation algorithm.

Page 15

Definition

An algorithm is a ρ-approximation for an optimization problem P if:

• The algorithm runs in polynomial time.

• The algorithm always produces a solution that is within a factor of ρ (the ratio bound) of the optimal solution.

Page 16

TSP problem statement

• Input: A complete undirected graph G = (V, E) with a nonnegative integer cost c(u, v) associated with each edge (u, v) ∈ E.

• Goal: Find a Hamiltonian cycle of G with minimal cost.

Page 17

TSP (with triangle inequality)

• The cost function c satisfies the triangle inequality if for all vertices u, v, w ∈ V:

    c(u, w) ≤ c(u, v) + c(v, w)

• Since TSP is NP-complete, approximation algorithms allow us to get, in polynomial time, a solution close to the optimal one.

• TSP with the triangle inequality has an approximation algorithm with a ratio bound of 2.

Page 18

Approximation algorithm for TSP (with triangle inequality)

Approx-TSP-Tour(G, c)
1. Select a vertex r ∈ V[G] to be a "root" vertex
2. Grow a minimum spanning tree T for G from root r using MST-Prim(G, c, r)
3. Let L be the list of vertices visited in a preorder tree walk of T
4. Return the Hamiltonian cycle H that visits the vertices in the order L
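A small Python sketch of Approx-TSP-Tour is shown below; the use of networkx for the minimum spanning tree (Prim's algorithm) and the preorder walk, and the Euclidean example instance, are illustrative choices rather than part of the original slides.

import itertools
import networkx as nx

def approx_tsp_tour(G, root):
    # Step 2: minimum spanning tree rooted (conceptually) at root, via Prim
    T = nx.minimum_spanning_tree(G, weight="weight", algorithm="prim")
    # Step 3: preorder (DFS) walk of T
    order = list(nx.dfs_preorder_nodes(T, source=root))
    # Step 4: close the cycle back to the root
    return order + [root]

# Example instance: points in the plane (Euclidean costs satisfy the triangle inequality)
points = {0: (0, 0), 1: (1, 0), 2: (1, 1), 3: (0, 1), 4: (2, 2)}
G = nx.Graph()
for u, v in itertools.combinations(points, 2):
    (x1, y1), (x2, y2) = points[u], points[v]
    G.add_edge(u, v, weight=((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5)

tour = approx_tsp_tour(G, root=0)
cost = sum(G[a][b]["weight"] for a, b in zip(tour, tour[1:]))
print(tour, round(cost, 3))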

Page 19

Approx-TSP-Tour Algorithm

• Approx-TSP-Tour is an approximation algorithm with a ratio bound of 2 for the traveling-salesman problem with the triangle inequality.

Page 20

Approx-TSP-Tour Algorithm

Let K denote an optimal tour for the given set of vertices. We have to show that c(H) ≤ 2c(K), where H is the tour returned by Approx-TSP-Tour.

Since we obtain a spanning tree by deleting any edge from a tour, if T is a minimum spanning tree for the given set of vertices, then

    c(T) ≤ c(K).    (1)

A full walk of T lists the vertices when they are first visited and also whenever they are returned to after a visit to a subtree. Let us call this walk W. Since the full walk traverses every edge of T exactly twice, we have

    c(W) = 2c(T).   (2)

Together, (1) and (2) imply that

    c(W) ≤ 2c(K).   (3)

Page 21

Approx-TSP-Tour Algorithm (cont.)

W is generally not a tour, since it visits some vertices more than once. By repeatedly applying the triangle inequality, we can remove from W all but the first visit to each vertex and the cost does not increase. This ordering is the same as that obtained by a preorder walk of the tree T.

Let H be the cycle corresponding to this preorder walk. It is a Hamiltonian cycle, since every vertex is visited exactly once, and in fact it is the cycle computed by Approx-TSP-Tour. Since H is obtained by deleting vertices from the full walk W, we have

    c(H) ≤ c(W).    (4)

Combining inequalities (3) and (4) completes the proof.
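As a quick numeric illustration of the bound, the following sketch reuses approx_tsp_tour and the graph G from the earlier sketch and compares the returned tour against a brute-force optimum; the helper tour_cost is assumed here, not taken from the slides.

from itertools import permutations

def tour_cost(G, tour):
    # Sum the edge weights along consecutive vertices of a closed tour
    return sum(G[a][b]["weight"] for a, b in zip(tour, tour[1:]))

nodes = list(G.nodes)
optimal = min(tour_cost(G, [nodes[0]] + list(p) + [nodes[0]])
              for p in permutations(nodes[1:]))
approx = tour_cost(G, approx_tsp_tour(G, root=nodes[0]))
assert approx <= 2 * optimal        # the guarantee proved above
print(round(approx, 3), round(optimal, 3))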

Page 22

Bin Packing problem statement

• Input: A set of n items. Item i has size s_i, where each s_i is a rational number with 0 < s_i ≤ 1.

• Goal: Minimize the number of bins of size 1 such that all the items can be packed into them.

Page 23

Bin Packing

• Since Bin Packing is NP-hard, approximation algorithms allow us to get, in polynomial time, a solution close to the optimal one.

• Bin Packing has an approximation algorithm with a ratio bound of 2.

Page 24

First Fit approximation algorithm for Bin Packing

Consider some ordering on empty bins.

First-Fit
1. For i = 1 to n
2.     Let j be the first bin such that item i fits in bin j
3.     Put item i in bin j
4. End
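A short Python sketch of First-Fit follows; the bins have capacity 1 as in the problem statement, and the concrete item sizes are made up for illustration.

def first_fit(sizes):
    bins = []          # bins[j] = remaining capacity of bin j (capacity 1 each)
    assignment = []    # assignment[i] = index of the bin chosen for item i
    for s in sizes:
        for j, free in enumerate(bins):
            if s <= free:              # first bin, in order, where the item fits
                bins[j] -= s
                assignment.append(j)
                break
        else:                          # no open bin fits: open a new one
            bins.append(1.0 - s)
            assignment.append(len(bins) - 1)
    return len(bins), assignment

items = [0.5, 0.7, 0.5, 0.2, 0.4, 0.2, 0.5, 0.1]   # made-up sizes in (0, 1]
num_bins, placement = first_fit(items)
print(num_bins, placement)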

Page 25

First-Fit Algorithm

First-Fit is an approximation algorithm with a ratio bound of 2 for the Bin Packing problem.

We have to show that First-Fit(I) ≤ 2·OPT(I) + 1 for all instances I.

Let SIZE(I) denote the sum of all s_i.

Then: SIZE(I) ≤ OPT(I).

Page 26

First-Fit Algorithm (cont.)

At most one bin can be at most half full in the output of First-Fit(I), because if there were two such bins, the last item added to the later bin would have fit into, and should have been added to, the first such bin. Thus,

    1/2·(First-Fit(I) − 1) ≤ SIZE(I),

which implies

    First-Fit(I) ≤ 2·SIZE(I) + 1,

i.e., First-Fit(I) ≤ 2·OPT(I) + 1. The last inequality completes the proof.
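The bound can also be sanity-checked empirically. The sketch below reuses first_fit from the earlier sketch on random instances; the instance sizes and counts are arbitrary.

import random

for _ in range(1000):
    items = [random.uniform(0.01, 1.0) for _ in range(random.randint(1, 50))]
    used, _placement = first_fit(items)
    assert used <= 2 * sum(items) + 1      # First-Fit(I) <= 2*SIZE(I) + 1
print("bound held on all random instances")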

Page 27

Remarks

• Assuming P ≠ NP, there is no polynomial-time approximation algorithm with any constant ratio bound ρ ≥ 1 for the general traveling-salesman problem (without the triangle inequality).

• Likewise, there is no polynomial-time approximation algorithm with any constant ratio bound for the clique problem.

Page 28

Definition of PTAS

• Definition: A polynomial-time approximation scheme (PTAS) for a minimization problem is a family of algorithms {A_ε : ε > 0} such that for each ε > 0, A_ε is a (1 + ε)-approximation algorithm that runs in time polynomial in the input size for fixed ε. For a maximization problem, we require that A_ε is a (1 − ε)-approximation algorithm.

• Some problems which have a PTAS are knapsack and some scheduling problems.

Page 29

Dynamic programming: Knapsack

Here we consider the knapsack problem, and show that the technique of dynamic programming is useful in designing approximation algorithms.

• Knapsack:

• Input: A set of items {1, …, n}. Item i has a value v_i and size s_i. The total "capacity" is B, with v_i, s_i, B ∈ Z⁺.

• Goal: Find a subset of items S that maximizes the value Σ_{i∈S} v_i subject to the constraint Σ_{i∈S} s_i ≤ B.

Page 30

Dynamic programming: Knapsack (cont.)

• We assume that s_i ≤ B for each item i, since if s_i > B the item can never be included in any feasible solution.

• We now show that dynamic programming can be used to solve the knapsack problem exactly.

Page 31

Definition

• Definition: Let A(i, v) denote the size of the "smallest" subset of {1, …, i} with value exactly v (A(i, v) = ∞ if no such subset exists).

• Now consider the following dynamic programming algorithm. Note that if V = max_i v_i, then nV is an upper bound on the value of any solution.

Page 32

DynProg Algorithm

For i = 1 to n
    A(i, 0) = 0
For v = 1 to nV
    A(1, v) = s_1 if v = v_1, ∞ otherwise
For i = 2 to n
    For v = 1 to nV
        If v_i ≤ v
            A(i, v) = min( A(i−1, v), s_i + A(i−1, v − v_i) )
        else
            A(i, v) = A(i−1, v)
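A Python sketch of this table-filling procedure is given below. The final answer, the largest v with A(n, v) ≤ B, is implied rather than shown on the slide, and the example instance is made up for illustration.

import math

def knapsack_dynprog(values, sizes, B):
    # A[i][v]: minimum total size of a subset of items 1..i with value exactly v
    n, V = len(values), max(values)
    INF = math.inf
    A = [[INF] * (n * V + 1) for _ in range(n + 1)]
    for i in range(n + 1):                       # A(i, 0) = 0 for all i
        A[i][0] = 0
    for v in range(1, n * V + 1):                # A(1, v) = s_1 if v = v_1, infinity otherwise
        A[1][v] = sizes[0] if v == values[0] else INF
    for i in range(2, n + 1):
        for v in range(1, n * V + 1):
            if values[i - 1] <= v:
                A[i][v] = min(A[i - 1][v], sizes[i - 1] + A[i - 1][v - values[i - 1]])
            else:
                A[i][v] = A[i - 1][v]
    # Final answer: the largest value achievable within capacity B
    return max(v for v in range(n * V + 1) if A[n][v] <= B)

print(knapsack_dynprog(values=[6, 10, 12], sizes=[1, 2, 3], B=5))   # -> 22

The running time is O(n²V), matching the discussion on the next slide.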

Page 33

Have we proven that P = NP ?

• It is known that the knapsack problem is NP-hard, but the running time of the algorithm seems to be polynomial. Have we proven that P = NP? No, since the input is usually represented in binary; that is, it takes log v_i bits to write down v_i. Since the running time is polynomial in V = max_i v_i, it is exponential in the input size of the v_i. We could think of writing the problem in unary (i.e., v_i bits to encode v_i), in which case the running time would be polynomial in the size of the input.

Page 34

Definitions

• Definition: An algorithm for a problem whose running time is polynomial in the size of the input encoded in unary is called pseudopolynomial.

• Definition: A polynomial-time approximation scheme (PTAS) is a family of algorithms {A_ε : ε > 0} for a problem such that for each ε > 0, A_ε is a (1 + ε)-approximation algorithm (for minimization problems) or a (1 − ε)-approximation algorithm (for maximization problems). If the running time is also polynomial in 1/ε, then {A_ε} is a fully polynomial-time approximation scheme (FPAS, FPTAS).

Page 35

DynProg2 Algorithm

• Set K = εV/n and v_i′ = ⌊v_i / K⌋ for each item i; run DynProg on (s_i, v_i′).
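A sketch of DynProg2 in Python, reusing knapsack_dynprog from the earlier sketch; the example instance and the choice ε = 0.5 are made up for illustration. Note that the value returned is the optimum of the scaled instance; recovering the item set S itself would require backtracking through the table A, which is not shown here.

import math

def knapsack_dynprog2(values, sizes, B, eps):
    n, V = len(values), max(values)
    K = eps * V / n                                   # scaling factor K = eps*V/n
    scaled = [math.floor(v / K) for v in values]      # v_i' = floor(v_i / K)
    return knapsack_dynprog(scaled, sizes, B)         # run DynProg on (s_i, v_i')

# The set found this way has true value at least (1 - eps) * OPT, per the proof below.
print(knapsack_dynprog2(values=[6, 10, 12], sizes=[1, 2, 3], B=5, eps=0.5))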

Page 36

Theorem: DynProg2 is an FPAS for knapsack.

Proof:

• Let S be the set of items found by DynProg2 and let O be the optimal set. We know OPT ≥ V, since one possible knapsack is simply to take the most valuable item.

• We also know, by the definition of v_i′, that

    K·v_i′ ≤ v_i ≤ K·(v_i′ + 1),

Page 37

Proof (cont.)

which implies K·v_i′ ≥ v_i − K.

• Then

    Σ_{i∈S} v_i ≥ K·Σ_{i∈S} v_i′ ≥ K·Σ_{i∈O} v_i′ ≥ Σ_{i∈O} (v_i − K) ≥ Σ_{i∈O} v_i − nK,

where the second inequality holds because DynProg finds an optimal solution for the scaled values v_i′ and O is a feasible solution.

Page 38

Proof (cont.)

• Continuing, Σ_{i∈O} v_i − nK = OPT − εV ≥ OPT − ε·OPT = (1 − ε)·OPT, since nK = εV and V ≤ OPT.

• The running time is O(n²·max_i v_i′) = O(n²·V/K) = O(n³·(1/ε)), so DynProg2 is an FPAS.

Page 39

Bibliography

• Dr. Ely Porat's course lecture notes

• Introduction to Algorithms, T. H. Cormen, C. E. Leiserson, R. L. Rivest

• Lecture Notes on Approximation Algorithms, Fall 1998, D. P. Williamson

• Oblivious Transfer and Intersecting Codes, Gilles Brassard, Claude Crépeau, Miklos Santha