
Lecture 4 of Complexity Theory October 29, 2009

Space Complexity

Slides based on S. Arora, B. Barak. Computational Complexity: A Modern Approach.

Ahto Buldas

[email protected]


Motivation

We will study the memory requirements of computational tasks.

We define space-bounded computation, which has to be performed by the TM using a restricted number of tape cells, the number being a function of the input size.

We also study nondeterministic space-bounded TMs.

The goal of introducing a new complexity class is to capture interesting computational phenomena.

One phenomenon that space-bounded computations capture is the computation of winning strategies in 2-person games, which seems inherently different from solving NP problems such as SAT.


Space-Bounded Computation

Def. 4.1 (Space-bounded computation): Let S : N → N and L ⊆ {0,1}∗. We say that L ∈ SPACE(S(n)) (resp. L ∈ NSPACE(S(n))) if there is a constant c and a TM (resp. NDTM) M deciding L such that on every input x ∈ {0,1}∗, the total number of locations that are at some point non-blank during M's execution on x is at most c · S(|x|). (Non-blank locations on the read-only input tape do not count.)


Some Remarks

Analogous to time complexity, we restrict our attention to space bounds S : N → N that are space-constructible functions, i.e. there is a TM that computes S(n) in O(S(n)) space when given 1^n as input.

If S is space-constructible, then the machine “knows” the space bound it is operating under. For example, log n, n and 2^n are space-constructible.

As the work tape is separate from the input tape, it makes sense to consider space-bounded machines that use space S(n) < n.

We will assume however that S(n) > log n, since the input tape has length n and we would like the machine to at least be able to remember the index of the input-tape cell that it is currently reading.


Relations Between SPACE and TIME

DTIME(S(n)) ⊆ SPACE(S(n)), since a TM can access only one tape cell per step.

Space can be reused: a cell on the work tape can be overwritten an arbitrary number of times. A space-S(n) machine can easily run for as much as 2^{O(S(n))} steps, for example a machine that increments an S(n)-bit counter, as sketched below.
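The counter example can be made concrete. A minimal sketch, in which k plays the role of S(n): the machine performs 2^k − 1 increments while ever touching only k work-tape cells (the list entries); all names are illustrative.

def run_counter(k):
    """Increment a k-bit counter until it overflows: 2**k - 1
    increments, yet only k tape cells (list entries) are ever used."""
    tape = [0] * k                       # the k work-tape cells
    steps = 0
    while True:
        i = 0
        while i < k and tape[i] == 1:    # ripple-carry increment
            tape[i] = 0
            i += 1
        if i == k:                       # overflow: the counter wrapped around
            return steps
        tape[i] = 1
        steps += 1

print(run_counter(4))                    # 15 == 2**4 - 1 steps in 4 cells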

Theorem 4.3: For every space-constructible S : N → N,

DTIME(S(n)) ⊆ SPACE(S(n)) ⊆ NSPACE(S(n)) ⊆ DTIME(2^{O(S(n))}).


Configuration Graph

A configuration of M consists of the contents of all non-blank entries of M's work tape, along with its state and head positions on the work and input tapes, at a particular point in its execution. For every M and input x ∈ {0,1}∗, the configuration graph of M on input x, denoted by G_{M,x}, is a directed graph whose nodes correspond to the possible configurations that M can reach from the starting configuration C_start. The graph has a directed edge from a configuration C to a configuration C′ if C′ can be reached from C in one step according to M's transition function.


Properties of the Configuration Graph

If M is deterministic then the graph has out-degree one.

If M is non-deterministic then the graph has out-degree at most two.

We can assume that M's computation on x does not repeat the same configuration twice (as otherwise it would enter an infinite loop), and hence that the graph is a directed acyclic graph (DAG).

By modifying M to erase all its work tapes before halting, we can assume that there is only a single configuration C_accept on which M halts and outputs 1. This means that M accepts the input x iff there exists a (directed) path in G_{M,x} from C_start to C_accept.


Claim 4.4: Let G_{M,x} be the configuration graph of a space-S(n) machine M on some input x of length n. Then:

• Every vertex in G_{M,x} can be described using c · S(n) bits for some constant c (depending on M's alphabet size and number of tapes), and in particular, G_{M,x} has at most 2^{c·S(n)} nodes.

• There is an O(S(n))-size CNF formula ϕ_{M,x} such that for every two strings C, C′, we have ϕ_{M,x}(C, C′) = 1 if and only if C, C′ encode two neighboring configurations in G_{M,x}.

Proof sketch: Part 1 follows from observing that a configuration is completely described by giving the contents of all work tapes, the positions of the heads, and the state that the TM is in. We can encode a configuration by first encoding the snapshot (i.e., the state and the current symbols read by all tapes) and then encoding in sequence the non-blank contents of all the work tapes, inserting a special “marker” symbol to denote the locations of the heads.

Part 2 follows using similar ideas as in the proof of the Cook-Levin theorem. There we showed that deciding whether two configurations are neighboring can be expressed as the AND of many checks, each depending on only a constant number of bits, where each such check can be expressed by a constant-sized CNF formula (by Claim 2.14).

Proof of Theorem 4.3: Clearly SPACE(S(n)) ⊆ NSPACE(S(n)), so we just need to show NSPACE(S(n)) ⊆ DTIME(2^{O(S(n))}). By enumerating over all possible configurations we can construct the graph G_{M,x} in 2^{O(S(n))} time and check whether C_start is connected to C_accept in G_{M,x} using the standard (linear in the size of the graph) breadth-first search algorithm for connectivity.
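The reachability check at the heart of this proof is ordinary breadth-first search. A minimal sketch, assuming the configuration graph has already been materialized as an adjacency-list dictionary (the names config_graph, c_start, c_accept are illustrative):

from collections import deque

def accepts(config_graph, c_start, c_accept):
    """Decide whether c_accept is reachable from c_start by BFS;
    runs in time linear in the size of the configuration graph."""
    seen = {c_start}
    queue = deque([c_start])
    while queue:
        c = queue.popleft()
        if c == c_accept:
            return True
        for nxt in config_graph.get(c, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

Since the graph has 2^{O(S(n))} nodes, building it and running this search takes 2^{O(S(n))} time, as the theorem requires.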


Regular Languages

SPACE(0) is the class of regular languages, i.e. those decidable by finite-state machines.

Theorem (Stearns, Hartmanis, Lewis, 1965∗): If s(n) = o(log log n) then SPACE(s(n)) contains only regular languages.

Proof. We will show that for any M that decides a non-regular language L_M using space s(n), there is p ∈ R such that for all j ∈ N there is n_j ∈ N with s(n_j) ≥ j and n_j ≤ p^{p^j}. Hence there are infinitely many n_j for which

s(n_j) ≥ j ≥ log_p log_p n_j = Ω(log log n_j),

and hence we derive a contradiction with s(n) = o(log log n).

∗ J. Hartmanis, P. M. Lewis II, and R. E. Stearns. Hierarchies of memory-limited computations. Proceedings of the 6th Annual IEEE Symposium on Switching Circuit Theory and Logical Design, pp. 179–190, 1965.


We define a memory configuration as any combination of state, string of non-blank symbols on the work tape, and the location of the work-tape head on these symbols. Note that the position of the input head is NOT a part of a memory configuration.

If the string of symbols has length strictly less than j, we call the memory configuration a j-configuration. As |Γ| ≥ 2, the number of j-configurations is

∑_{i=1}^{j−1} i · |Q| · |Γ|^i < j · |Q| · |Γ|^j.

Now we choose p such that for all j:

p^{p^j} > |Γ| · 2^{j·|Q|·|Γ|^j}.

Let n_j be the smallest integer such that M uses at least j work-tape cells to process some input word x = x_1 x_2 … x_{n_j} of length n_j. The non-regularity of L_M guarantees the existence of such a word. Because of x, we must have s(n_j) ≥ j, and hence, when M finally either accepts or rejects x, it is not in a j-configuration.

In processing x, the machine M may cross and recross any particular input cell x_i on its input tape many times. For each i ≤ n_j, let S_i be the set of all memory configurations achieved by M while processing x and while the input head is at the symbol x_i.

In order to bound n_j, we will first show that S_r = S_s for r < s implies that x_r ≠ x_s. Indeed, assume to the contrary that S_r = S_s and x_r = x_s for some r < s, and consider what happens if M processes the word

x′ = x_1 … x_r x_{s+1} … x_{n_j}.

The length of x′ is less than n_j, and so all memory configurations that occur in processing x′ must be j-configurations because of the minimal nature of n_j. We now show that the j-configurations at any x_i of x′ will be j-configurations from the set S_i (defined for x).

Assume to the contrary that in processing x′, the machine M is reading x_i and for the first time enters a j-configuration that is not in S_i. This could only occur when the reading head had just crossed from x_r to x_{s+1} or from x_{s+1} to x_r. Suppose the machine is crossing from x_r to x_{s+1}. By assumption, at x_r its configuration is in S_r = S_s, and hence this configuration could also have occurred at x_s in x. Furthermore, since x_r = x_s, the machine must go into one of the configurations that occur at x_s in processing x.

x′ = x_1 … x_{r−1} x_r x_{s+1} … x_{n_j}
x  = x_1 … x_{r−1} x_s x_{s+1} … x_{n_j}   (x_r = x_s)

This configuration is by definition in S_s, which proves the assertion that all j-configurations occurring in processing x′ are S_i-configurations. But this means that M does not stop while processing x′, since the only stopping configuration in the processing of x is not a j-configuration (at least j cells are needed, and thus the S_i contain no stopping configuration). Since M must stop for all inputs by definition, this is a contradiction, which proves the assertion that S_r = S_s implies x_r ≠ x_s.

Since the S_i have to be distinct for any two occurrences of the same input symbol, the input length n_j can be no greater than the number of sets of j-configurations times the number of input symbols. But this number is less than |Γ| · 2^{j·|Q|·|Γ|^j} ≤ p^{p^j}, and hence n_j ≤ p^{p^j}.


Some Space Complexity Classes

Def. 4.5:

PSPACE = ∪_{c>0} SPACE(n^c)

NPSPACE = ∪_{c>0} NSPACE(n^c)

L = SPACE(log n)

NL = NSPACE(log n)

Example 4.6: We show that 3SAT ∈ PSPACE by describing a TM that decides 3SAT in linear space (that is, O(n) space, where n is the size of the 3SAT instance). The machine just uses the linear space to cycle through all 2^k assignments in order, where k is the number of variables. Note that once an assignment has been checked it can be erased from the work tape, and the work tape then reused to check the next assignment. A similar idea of cycling through all potential certificates applies to any NP language, so in fact NP ⊆ PSPACE.
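A minimal sketch of this decider, assuming the formula is given as a list of 3-literal clauses where the integer i stands for x_i and −i for its negation (the encoding is illustrative):

def sat_in_linear_space(clauses, k):
    """Decide satisfiability by cycling through all 2**k assignments.
    Between iterations only the current assignment (a k-bit counter)
    is kept, so the work space stays linear in the instance size."""
    for a in range(2 ** k):                       # a encodes one assignment
        value = lambda i: (a >> (i - 1)) & 1      # truth value of variable i
        if all(any(value(abs(lit)) == (lit > 0) for lit in clause)
               for clause in clauses):
            return True                           # satisfying assignment found
    return False

# (x1 ∨ ¬x2 ∨ x3) ∧ (¬x1 ∨ x2 ∨ x3) is satisfiable:
print(sat_in_linear_space([[1, -2, 3], [-1, 2, 3]], 3))   # True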


Example 4.7: Check that the following languages are in L:

EVEN = {x : x has an even number of 1s},
MULT = {⟨n, m, n·m⟩ : n, m ∈ N}.
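For EVEN a decider needs just a single bit of work space (well within L): stream the input and maintain a parity flag. A minimal sketch:

def in_even(x: str) -> bool:
    """Decide EVEN with one bit of state: a single left-to-right pass
    over the (read-only) input, flipping parity on every '1'."""
    parity = 0
    for ch in x:
        if ch == '1':
            parity ^= 1
    return parity == 0

print(in_even("1011"))   # False: three 1s
print(in_even("1001"))   # True: two 1s

MULT requires more care: the decider verifies the product digit by digit using school multiplication, which can be carried out while storing only O(log n) bits of carries and indices.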

Claim: The following problem PATH is in NL:

PATH = {⟨G, s, t⟩ : G is a directed graph in which there is a path from s to t}.

Proof sketch: A nondeterministic machine can take a nondeterministic walk starting at s, always maintaining the index of the vertex it is at, and using nondeterminism to select a neighbor of this vertex to go to next. The machine accepts iff the walk ends at t in at most n steps, where n is the number of nodes. If the nondeterministic walk has run for n steps already and t has not been encountered, the machine rejects. The work tape only needs to hold O(log n) bits of information at any step, namely the number of steps that the walk has run for and the identity of the current vertex.
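A minimal sketch of one run of this walk, with random choices standing in for the nondeterministic guesses (the language is in NL because some sequence of guesses accepts; here that corresponds to acceptance with positive probability). The graph is assumed to be an adjacency-list dictionary:

import random

def guess_path(graph, s, t):
    """One nondeterministic run: keep only the current vertex and a
    step counter, i.e. O(log n) bits for an n-vertex graph."""
    v, steps = s, 0
    n = len(graph)
    while steps < n:
        if v == t:
            return True
        neighbors = graph.get(v, [])
        if not neighbors:
            return False                  # dead end: this run rejects
        v = random.choice(neighbors)      # the nondeterministic guess
        steps += 1
    return v == t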

Is PATH in L as well? This is an open problem, equivalent to whether or not L = NL. We will show that PATH is NL-complete. A recent surprising result shows that the restriction of PATH to undirected graphs is in L.


PSPACE completeness

As already indicated, we do not know whether P ≠ PSPACE, though we strongly believe that the answer is YES. Notice that P = PSPACE would imply P = NP. Since complete problems can help capture the essence of a complexity class, we now present some complete problems for PSPACE.

Def. 4.8: A language A is PSPACE-hard if for every L ∈ PSPACE, L ≤_p A. If in addition A ∈ PSPACE, then A is PSPACE-complete.

If any PSPACE-complete language is in P, then so is every other language in PSPACE. Viewed contrapositively, if PSPACE ≠ P then a PSPACE-complete language is not in P. Intuitively, a PSPACE-complete language is the most difficult problem of PSPACE. Just as NP trivially contains NP-complete problems, so does PSPACE. The following is one:

SPACE_TM = {⟨M, w, 1^n⟩ : DTM M accepts w in space n}.


True Quantified Boolean Formulae

A quantified boolean formula has the form

Q_1 x_1 Q_2 x_2 … Q_n x_n ϕ(x_1, x_2, …, x_n),

where each Q_i is one of the two quantifiers ∀ or ∃ and ϕ is an (unquantified) boolean formula. Here ∀ and ∃ quantify over the universe {0,1}.

Example 4.9: The formula ∀x∃y (x ∧ ¬y) ∨ (¬x ∧ y) is true, because “for every x ∈ {0,1} there is a y ∈ {0,1} that is different from it”. The formula ∀x∀y (x ∧ ¬y) ∨ (¬x ∧ y) is false.

Def.: TQBF is the language of all quantified boolean formulae that are true.


TQBF is PSPACE-complete

First we show that TQBF ∈ PSPACE. Let

ψ = Q_1 x_1 Q_2 x_2 … Q_n x_n ϕ(x_1, x_2, …, x_n)   (1)

be a quantified Boolean formula with n variables, where we denote the size of ϕ by m. We show a simple recursive algorithm A that can decide the truth of ψ in O(n + m) space. We will solve the slightly more general case where, in addition to variables and their negations, ϕ may also include the constants 0 and 1. If n = 0 then the formula contains only constants and can be evaluated in O(m) time and space. Let n > 0 and let ψ be as in (1). For b ∈ {0,1}, denote by ψ|_{x_1=b} the modification of ψ where the first quantifier Q_1 is dropped and all occurrences of x_1 are replaced with the constant b. Algorithm A works as follows: if Q_1 = ∃, output 1 iff at least one of the recursive calls on ψ|_{x_1=0} and ψ|_{x_1=1} returns 1; if Q_1 = ∀, output 1 iff both return 1.


Space consumption: Let s_{n,m} denote the space A uses on formulas with n variables and description size m. The crucial point is that both recursive computations, on ψ|_{x_1=0} and on ψ|_{x_1=1}, can run in the same space. Specifically, after computing ψ|_{x_1=0}, the algorithm A needs to retain only the single bit of output from that computation, and can reuse the rest of the space for the computation of ψ|_{x_1=1}. Thus, assuming that A uses O(m) space to write ψ|_{x_1=b} for its recursive calls, we get s_{n,m} = s_{n−1,m} + O(m), yielding s_{n,m} = O(n · m).

This suffices to show TQBF ∈ PSPACE. Still, we can show that the algorithm runs in O(m + n) space. Note that the algorithm always works with restrictions of the same formula ψ, so it can keep a global partial-assignment array that for each variable x_i contains either 0, 1, or q (if it is quantified and not yet assigned a value). Algorithm A uses this global space for its operation: in each call it finds the first quantified variable, sets it to 0 and makes the recursive call, then sets it to 1 and makes the recursive call, and then sets it back to q. We see that A's space usage is given by the equation s_{n,m} = s_{n−1,m} + O(1), and hence it uses O(n + m) space.
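A minimal sketch of algorithm A in this O(n + m)-space form, assuming ϕ is supplied as a Python predicate over the assignment array (the encoding is illustrative):

def tqbf(quantifiers, phi):
    """Decide Q_1 x_1 ... Q_n x_n phi(x_1,...,x_n), where `quantifiers`
    is a list of 'E' (∃) and 'A' (∀) and `phi` maps an assignment list
    to True/False. A single global assignment array is reused by all
    recursive calls, mirroring the O(n + m) space bound."""
    n = len(quantifiers)
    assignment = ['q'] * n          # 'q' marks a still-quantified variable

    def eval_from(i):
        if i == n:                  # all variables fixed: evaluate phi
            return phi(assignment)
        results = []
        for b in (0, 1):            # try x_i = 0, then reuse the space for x_i = 1
            assignment[i] = b
            results.append(eval_from(i + 1))
        assignment[i] = 'q'         # restore before returning
        return any(results) if quantifiers[i] == 'E' else all(results)

    return eval_from(0)

# ∀x ∃y (x ∧ ¬y) ∨ (¬x ∧ y) from Example 4.9:
print(tqbf(['A', 'E'], lambda a: (a[0] and not a[1]) or (not a[0] and a[1])))   # True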

PSPACE-hardness: We show that L ≤_p TQBF for every L ∈ PSPACE. Let M be a machine that decides L in S(n) space and let x ∈ {0,1}^n. We show how to construct a quantified Boolean formula ψ of size O(S(n)^2) that is true iff M accepts x. Recall (by Claim 4.4) that there is a Boolean formula ϕ_{M,x} such that for every two strings C, C′ ∈ {0,1}^m (where m = O(S(n)) is the number of bits required to encode a configuration of M), ϕ_{M,x}(C, C′) = 1 iff C and C′ are valid encodings of two adjacent configurations in the configuration graph G_{M,x}. We will use ϕ_{M,x} to come up with a polynomial-sized quantified Boolean formula ψ′ that has polynomially many Boolean variables bound by quantifiers and an additional 2m unquantified Boolean variables C_1, …, C_m, C′_1, …, C′_m (or, equivalently, two variables C, C′ over {0,1}^m) such that for every C, C′ ∈ {0,1}^m, ψ′(C, C′) is true iff there is a directed path from C to C′ in G_{M,x}. By plugging the values C_start and C_accept into ψ′ we get a quantified Boolean formula ψ that is true iff M accepts x.

We define the formula ψ′ inductively. We let ψ_i(C, C′) be true if and only if there is a path of length at most 2^i from C to C′ in G_{M,x}. Note that ψ′ = ψ_m and ψ_0 = ϕ_{M,x}. The crucial observation is that there is a path of length at most 2^i from C to C′ if and only if there is a configuration C′′ such that there are paths of length at most 2^{i−1} from C to C′′ and from C′′ to C′. Thus the following formula suggests itself:

ψ_i(C, C′) = ∃C′′ : ψ_{i−1}(C, C′′) ∧ ψ_{i−1}(C′′, C′).

However, this formula is no good: it implies that ψ_i is twice the size of ψ_{i−1}, and a simple induction shows that ψ′ = ψ_m has size about 2^m, which is too large. Instead, we use additional quantified variables to save on description size, using the following more succinct definition for ψ_i(C, C′):

∃C′′ ∀D ∀E : ((D = C ∧ E = C′′) ∨ (D = C′′ ∧ E = C′)) ⇒ ψ_{i−1}(D, E).

Here, = and ⇒ are convenient shorthands and can be replaced by appropriate combinations of the standard Boolean operations ∧ and ¬. Note that |ψ_i| ≤ |ψ_{i−1}| + O(m), and hence |ψ_m| = O(m^2). We can convert the final formula to prenex form in polynomial time.


Savitch’s theorem

The reader may notice that because the above proof uses the notion of a configuration graph and does not require this graph to have out-degree one, it actually yields a stronger statement: TQBF is not just hard for PSPACE but in fact even for NPSPACE. Since TQBF ∈ PSPACE, this implies that PSPACE = NPSPACE, which is quite surprising since our intuition is that the corresponding classes for time (P and NP) are different. In fact, using the ideas of the above proof, one can obtain the following theorem:

Theorem 4.12 (Savitch 1970): For any space-constructible S : N → N with S(n) ≥ log n, NSPACE(S(n)) ⊆ SPACE(S(n)^2). We remark that the running time of the algorithm obtained from this theorem can be as high as 2^{Ω(S(n)^2)}.


Proof: Follows the proof that TQBF is PSPACE-complete. Let L ∈ NSPACE(S(n)) be a language decided by a TM M such that for every x ∈ {0,1}^n, the graph G_{M,x} has at most N = 2^{O(S(n))} vertices. We describe a recursive procedure REACH(u, v, i) that returns 1 if there is a path from u to v of length at most 2^i, and 0 otherwise. Note that REACH(s, t, ⌈log N⌉) is 1 iff t is reachable from s. Again, the main observation is that there is a path from u to v of length at most 2^i iff there is a vertex z with paths from u to z and from z to v of lengths at most 2^{i−1}. Thus REACH(u, v, i) will enumerate over all vertices z (at a cost of O(log N) space) and output 1 if it finds some z such that REACH(u, z, i − 1) = 1 and REACH(z, v, i − 1) = 1. Once again, the crucial observation is that although the algorithm makes many recursive invocations, it can reuse the space across these invocations. Thus, if we let s_{N,i} be the space complexity of REACH(u, v, i) on an N-vertex graph, we get s_{N,i} = s_{N,i−1} + O(log N), and thus s_{N,log N} = O(log^2 N) = O(S(n)^2).
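A minimal sketch of REACH, assuming the graph is given as a dictionary mapping every vertex to its set of successors (a real log-space implementation would re-derive edges from M's transition function on demand rather than store the graph):

import math

def reach(graph, u, v, i):
    """Is there a path from u to v of length at most 2**i? Each
    recursion level stores only the current midpoint candidate z,
    mirroring the O(log^2 N) space bound (at the price of a running
    time that can reach 2^{Ω(log^2 N)})."""
    if u == v:
        return True
    if i == 0:                            # paths of length at most one
        return v in graph[u]
    return any(reach(graph, u, z, i - 1) and reach(graph, z, v, i - 1)
               for z in graph)            # enumerate midpoints, reusing space

def savitch_path(graph, s, t):
    return reach(graph, s, t, math.ceil(math.log2(max(len(graph), 2))))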


PSPACE: Optimum Strategies for Games

The central feature of NP-complete problems is that a yes answer has ashort certificate. The analogous concept for PSPACE-complete problemsis that of a winning strategy for a two-player game with perfect information.

A good example of such a game is Chess: two players alternately makemoves, and the moves are made on a board visible to both. Thus moveshave no hidden side effects; hence the term “perfect information”.

What does it mean for a player to have a “winning strategy”? The first player has a winning strategy iff there is a 1st move for player 1 such that for every possible 1st move of player 2 there is a 2nd move of player 1 such that … (and so on), such that at the end player 1 wins.


Thus deciding whether or not the first player has a winning strategy seems to require searching the tree of all possible moves. This is reminiscent of NP, for which we also seem to require exponential search.

But the crucial difference is the lack of a “short certificate”: the only certificate we can think of is the winning strategy itself, which, as noticed, requires exponentially many bits even to describe. Thus we seem to be dealing with a fundamentally different phenomenon than the one captured by NP.


The QBF Game

The interplay of existential and universal quantifiers in the description of the winning strategy motivates us to invent the following game:

Example 4.13 (The QBF game): The “board” for the QBF game is a Boolean formula ϕ whose free variables are x_1, x_2, …, x_{2n}. The two players alternately make moves, which involve picking values for x_1, x_2, … in order. Thus player 1 will pick values for the odd-numbered variables x_1, x_3, x_5, … (in that order) and player 2 will pick values for the even-numbered variables x_2, x_4, x_6, …. We say player 1 wins iff at the end ϕ becomes true. Clearly, player 1 has a winning strategy iff

∃x_1 ∀x_2 ∃x_3 ∀x_4 … ∀x_{2n} : ϕ(x_1, x_2, …, x_{2n}),

namely, iff this quantified boolean formula is true. Thus deciding whether player 1 has a winning strategy for a given board in the QBF game is PSPACE-complete.
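Deciding who wins a QBF game board is therefore just a TQBF instance; for example, reusing the tqbf evaluator sketched earlier (an illustrative helper, not part of the slides):

# Board ϕ = x1 ∨ x2: player 1 picks x1, player 2 picks x2.
# Player 1 wins, since ∃x1 ∀x2 (x1 ∨ x2) is true (pick x1 = 1).
print(tqbf(['E', 'A'], lambda a: bool(a[0] or a[1])))   # True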


NL completeness

We cannot use ≤_p, since NL ⊆ P and thus every language in NL is Karp-reducible to the trivial language {1} via the reduction “decide in polynomial time whether the input is in the NL-language, and then map it to 1 or 0 accordingly”. Such trivial languages should not be the “hardest” languages of NL.

When choosing the type of reduction to define completeness for a complexity class, we must keep in mind the complexity phenomenon we seek to understand. In this case, the complexity question is whether or not NL = L. The reduction should not be more powerful than L. For this reason we use log-space reductions. To define such reductions we must tackle the tricky issue that a reduction typically maps instances of size n to instances of size at least n, and so a log-space machine computing such a reduction does not even have the memory to write down its output!


Log-Space Reductions

The way out is to require that the reduction be able to compute any desired bit of the output in logarithmic space. The formal definition is as follows:

Definition 4.14 (log-space reduction): Let f : {0,1}∗ → {0,1}∗ be a polynomially-bounded function (i.e., there is a constant c > 0 such that |f(x)| ≤ |x|^c for every x ∈ {0,1}∗). We say that f is implicitly log-space computable if the following two languages are in L:

L_f = {⟨x, i⟩ : f(x)_i = 1} and L′_f = {⟨x, i⟩ : i ≤ |f(x)|}.

Informally, we can think of a single O(log |x|)-space machine that given input (x, i) outputs f(x)_i, provided that i ≤ |f(x)|. Language A is log-space reducible to language B, denoted by A ≤_l B, if there is an implicitly log-space computable function f : {0,1}∗ → {0,1}∗ such that x ∈ A iff f(x) ∈ B for every x ∈ {0,1}∗.


Properties of Log-Space Reductions

Lemma 4.15: (a) If A ≤_l B and B ≤_l C then A ≤_l C. (b) If A ≤_l B and B ∈ L then A ∈ L.

Proof: We prove that if f, g are implicitly log-space computable, then so is h(x) = g(f(x)). Part (b) then follows by letting g be the characteristic function of B, i.e. g(y) = 1 iff y ∈ B.

Let M_f, M_g be the log-space machines that compute the mappings (x, i) ↦ f(x)_i and (y, j) ↦ g(y)_j respectively. We construct M_h that computes the mapping (x, j) ↦ g(f(x))_j; in other words, given as input (x, j), it outputs g(f(x))_j provided that j ≤ |g(f(x))|. Machine M_h will pretend that it has an additional (fictitious) input tape on which f(x) is written, and it simulates M_g on this input.


The true input tape has (x, j) written on it. To maintain its fiction, M_h holds on its work tape the position i of the “head” on the fictitious tape that M_g is currently reading; this requires only log |f(x)| space. To make one step, M_g needs to know the “input symbol”, i.e. f(x)_i. At this point, M_h temporarily suspends its simulation of M_g (copying M_g's work tape to a safe place on its own work tape) and invokes M_f on input (x, i) to get f(x)_i. Then it resumes its simulation of M_g using this bit. The total space M_h uses is O(log |g(f(x))|) + O(log |x|) + O(log |f(x)|) = O(log |x|).
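A minimal sketch of this composition trick, with bit-oracles f_bit(x, i) and f_len(x) playing the roles of M_f, and g_bit(y, j) playing the role of M_g (all three are assumed helpers, not part of the slides):

class LazyString:
    """Read-only view of a string whose symbols are recomputed on
    demand: this is the fictitious input tape holding f(x)."""
    def __init__(self, bit_at, length):
        self.bit_at, self.length = bit_at, length
    def __getitem__(self, i):
        return self.bit_at(i)        # re-invoke M_f for f(x)_i
    def __len__(self):
        return self.length

def compose_bit(f_bit, f_len, g_bit, x, j):
    """Compute g(f(x))_j without ever writing down f(x): whenever the
    simulated M_g reads input position i, the bit is recomputed by a
    fresh run of M_f, trading time for space exactly as in the proof."""
    fx = LazyString(lambda i: f_bit(x, i), f_len(x))
    return g_bit(fx, j)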


NL-completeness

A language A is NL-complete if A ∈ NL and B ≤_l A for every B ∈ NL. Note that an NL-complete language is in L iff NL = L.

Theorem 4.16: PATH is NL-complete.

Proof: We already have PATH ∈ NL. Let L ∈ NL and let M decide it in space O(log n). We describe an implicitly log-space computable f that reduces L to PATH. For any x ∈ {0,1}^n, define f(x) as the triple ⟨G_{M,x}, C_start, C_accept⟩. There is a path from C_start to C_accept iff M accepts x. The graph is represented by an adjacency matrix that contains 1 in the ⟨C, C′⟩-th position (i.e. rows and columns are numbered between 0 and 2^{O(log n)}) iff there is an edge from C to C′ in G_{M,x}. It remains to show that this adjacency matrix can be computed by a log-space reduction. This is easy, since given ⟨C, C′⟩, a deterministic machine can examine in space O(|C| + |C′|) = O(log |x|) whether C′ is one of the (at most two) configurations that can follow C according to the transition function of M.
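Each output bit of this reduction is a single local check. A minimal sketch, assuming a helper successors(C) that lists the at-most-two configurations reachable from C in one step of M (an illustrative stand-in for M's transition function):

def adjacency_bit(successors, C, Cprime):
    """One bit of f(x): is (C, C') an edge of G_{M,x}? The check only
    inspects the two configuration encodings, so it needs
    O(|C| + |C'|) = O(log |x|) space."""
    return int(Cprime in successors(C))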
