
ISIT 2007, Nice, France, June 24 - June 29, 2007
Cooperative Multiple Access Encoding with States Available at One Transmitter
Anelia Somekh-Baruch, Princeton University, Princeton, NJ, USA, [email protected]
Shlomo Shamai (Shitz), Technion I.I.T., Haifa, Israel, [email protected]
Sergio Verdú, Princeton University, Princeton, NJ, USA, [email protected]
Abstract: We generalize the Gel'fand-Pinsker model to encompass the setup of a memoryless multiple-access channel. According to this setup, only one of the encoders knows the state of the channel (noncausally), which is also unknown to the receiver. Two independent messages are transmitted: a common message and a message transmitted by the informed encoder. We find explicit characterizations of the capacity region with both noncausal and causal state information. Further, we apply the general formula to the Gaussian case with noncausal channel state information, under an individual power constraint as well as a sum power constraint. In this case, the capacity region is achievable by a generalized writing-on-dirty-paper scheme.
I. INTRODUCTION
The capacity of state-dependent channels has become a widely investigated research area. The framework of channel states available at the transmitter dates back to Shannon [1], who characterized the capacity of a state-dependent memoryless channel whose states are i.i.d. and available causally to the transmitter. In their celebrated paper [2], Gel'fand and Pinsker (GP) established a single-letter formula for the capacity of the same channel under the conceptually different setup where the transmitter observes the channel states noncausally. The main tool used to prove achievability in this setup is the binning encoding principle [2]. Costa [3] applied GP's result to the Gaussian case with two additive Gaussian noise sources, one of which, the interference, takes the role of the channel state. Costa originated the term "writing on dirty paper," which stands for an application of GP's binning encoding scheme that adapts the transmitted signal to the channel state sequence rather than attempting to cancel it. This results in a surprising conclusion: the capacity of the channel without interference can be attained even though the interference is not known to the receiver. For extensions of this principle and other related work see [4] and references therein.
Much research has been devoted to applications of these
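Costa's conclusion can be checked numerically: with writing on dirty paper, the achievable rate ½ log(1 + P/N) does not depend on the interference power Q, whereas treating the interference as noise does. A minimal sketch (rates in nats; the helper names are ours, not from the paper):

```python
import math

def awgn_capacity(P, N):
    # Interference-free AWGN capacity, 0.5 * log(1 + P/N), in nats.
    return 0.5 * math.log(1 + P / N)

def rate_interference_as_noise(P, Q, N):
    # Rate when the interference (power Q) is simply treated as extra noise.
    return 0.5 * math.log(1 + P / (Q + N))

def rate_dirty_paper(P, Q, N):
    # Costa [3]: with noncausal knowledge of the interference at the
    # encoder, the interference-free capacity is achievable for ANY Q.
    return awgn_capacity(P, N)

P, N = 1.0, 1.0
for Q in (0.0, 3.0, 100.0):
    print(Q, rate_interference_as_noise(P, Q, N), rate_dirty_paper(P, Q, N))
```

The loop shows the dirty-paper rate staying constant while the interference-as-noise rate collapses as Q grows.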
channel models, for example, watermarking (see [5] and references therein), multi-input multi-output (MIMO) broadcast channels [6], where dirty-paper coding happens to be a central ingredient in achieving the capacity region, and cooperative networks [7].
In [4] we introduced the setup of cooperative encoding over the asymmetric (channel states known to one encoder only) GP MAC, with a single common message source fed to both encoders. We characterized the capacity of this channel both
Fig. 1. Asymmetric state-dependent MAC with a common message.
for the general finite input alphabet case and the Gaussian case. We also briefly discussed a generalization of this setup to include an additional message source that is fed to the informed user only. In this paper, we perform this generalization. The results presented in this paper have applications relating to cognitive radio (see [8] and references therein), watermarking, and other scenarios of cooperative communication, e.g., [9].
II. PROBLEM SETUP
A stationary memoryless state-dependent multiple-access channel is defined by a distribution Q_S on the set S and the channel conditional probability distribution W_{Y|S,X_1,X_2} from S × X_1 × X_2 to Y. Let X_1^n = (X_1(1), ..., X_1(n)) and X_2^n = (X_2(1), ..., X_2(n)) designate the inputs of transmitters 1 and 2 to the channel, respectively. The output of the channel is denoted by Y^n. The symbols S_i, X_1(i), X_2(i) and Y_i represent the channel state, the channel inputs produced by the two distinct encoders, and the channel output, at time index i, respectively. We assume that the channel states S^n are i.i.d., each distributed according to Q_S. As can be seen in Figure 1, the setup we consider is asymmetric in the sense that only encoder 2 is informed of the channel states, while neither encoder 1 nor the decoder knows them. Unlike the ordinary MAC with partially known state information (which was considered in [10], [11], where inner and outer bounds on the capacity region are derived), we allow a common message source fed to both encoders and an independent message that is to be transmitted by the informed encoder. When encoder 2 observes the CSI noncausally, we shall refer to this channel as a Generalized Gel'fand-Pinsker (GGP) channel; when encoder
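The asymmetry of the setup can be made concrete with a toy binary instance of this channel model. In the sketch below, the alphabets, the state distribution and the channel law Y_i = X1_i xor X2_i xor S_i xor Z_i are our illustrative choices, not the paper's; the point is the encoders' signatures: encoder 1 sees only W_c, while encoder 2 sees (W_c, W_2) and the whole state sequence.

```python
import random

def draw_states(n, q=0.5, rng=None):
    # S^n i.i.d. according to Q_S = Bernoulli(q); revealed (noncausally)
    # to encoder 2 only.
    rng = rng or random.Random(0)
    return [int(rng.random() < q) for _ in range(n)]

def encoder1(w_c, n):
    # Uninformed encoder: a mapping of the common message W_c alone.
    rng = random.Random(w_c)  # the codeword is a function of W_c only
    return [rng.randint(0, 1) for _ in range(n)]

def encoder2(w_c, w_2, states):
    # Informed encoder: a mapping of (W_c, W_2) AND the state sequence S^n.
    rng = random.Random(1000 * w_c + w_2)
    # Illustrative strategy: pre-cancel the state on top of its own codeword.
    return [rng.randint(0, 1) ^ s for s in states]

def channel(x1, x2, states, eps=0.0, rng=None):
    # Memoryless law W_{Y|S,X1,X2}: Y_i = X1_i ^ X2_i ^ S_i ^ Z_i,
    # with Z_i ~ Bernoulli(eps).
    rng = rng or random.Random(1)
    return [a ^ b ^ s ^ int(rng.random() < eps)
            for a, b, s in zip(x1, x2, states)]

n = 8
s = draw_states(n)
x1 = encoder1(3, n)
x2 = encoder2(3, 1, s)
print(channel(x1, x2, s))  # encoder 2 has absorbed S^n: y is state-free
```

With the noiseless setting eps=0, the output no longer depends on the realized state sequence, which is exactly the kind of pre-cancellation the informed encoder can attempt.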
1424414296/07/$25.00 2007 IEEE 1556
Authorized licensed use limited to: Princeton University. Downloaded on November 3, 2008 at 20:01 from IEEE Xplore. Restrictions apply.

2 observes the states causally, the channel will be referred to as an asymmetric causal state-dependent channel.
The common message, W_c, and the private message, W_2, are independent random variables uniformly distributed over the sets {1, ..., M_c} and {1, ..., M_2}, respectively, where M_c = ⌊e^{nR_c}⌋ and M_2 = ⌊e^{nR_2}⌋. An (e^{nR_c}, e^{nR_2}, n)-code for the GGP channel consists of two encoders, φ_n^{(1)}, φ_n^{(2)}, and a decoder ψ_n. The first encoder, unaware of the CSI, is a mapping

φ_n^{(1)}: {1, ..., M_c} → X_1^n.   (1)

The second encoder observes the CSI noncausally and is defined by a mapping

φ_n^{(2)}: {1, ..., M_c} × {1, ..., M_2} × S^n → X_2^n.   (2)

The decoder is a mapping

ψ_n: Y^n → {1, ..., M_c} × {1, ..., M_2}.   (3)

An (e^{nR_c}, e^{nR_2}, n)-code for the asymmetric causal state-dependent channel is defined similarly, with the exception that the second encoder is defined by a sequence of mappings

φ_i^{(2)}: {1, ..., M_c} × {1, ..., M_2} × S^i → X_2,   (4)

where i = 1, ..., n, and X_2(i) = φ_i^{(2)}(W_c, W_2, S^i).
An (ε, n, R_c, R_2)-code for the GGP channel is a code (φ_n^{(1)}, φ_n^{(2)}, ψ_n) having average probability of error not exceeding ε, i.e., Pr((W_c, W_2) ≠ ψ_n(Y^n)) ≤ ε. A rate pair (R_c, R_2) is said to be achievable if there exists a sequence of (ε_n, n, R_c, R_2)-codes with lim_{n→∞} ε_n = 0. The capacity region of the GGP channel is defined as the closure of the set of achievable rate pairs. The definitions of an (ε, n, R_c, R_2)-code, an achievable rate pair and the capacity region of the asymmetric causal state-dependent channel are similar.
Due to space limitations, the proofs of the results of this paper are omitted and can be found in [12].
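Since message-set sizes are e^{nR}, rates here are measured in nats. A small bookkeeping sketch (helper name ours) computes the message-set sizes for given rates and block length, and recovers the rate as (1/n) log M:

```python
import math

def message_set_sizes(R_c, R_2, n):
    # M_c = floor(e^{n R_c}), M_2 = floor(e^{n R_2}) as in Section II
    # (rates in nats, matching the paper's e^{nR} message-set sizes).
    return math.floor(math.exp(n * R_c)), math.floor(math.exp(n * R_2))

M_c, M_2 = message_set_sizes(0.5, 0.25, 20)
print(M_c, M_2)            # 22026, 148
print(math.log(M_c) / 20)  # close to 0.5: the common rate is recovered
```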
III. CAPACITY REGION - FINITE INPUT ALPHABET GGP CHANNEL
In this section it is assumed that the alphabets S, X_1, X_2 are finite.
Theorem 1: The capacity region of the GGP channel, C, is the closure of the set of all rate pairs (R_c, R_2) satisfying

R_2 ≤ I(U; Y|X_1) − I(U; S|X_1),
R_c + R_2 ≤ I(U, X_1; Y) − I(U, X_1; S),   (5)

for some joint measure P_{S,X_1,U,X_2,Y} having the form

P_{S,X_1,U,X_2,Y} = Q_S P_{X_1} P_{U|S,X_1} P_{X_2|S,X_1,U} W_{Y|S,X_1,X_2},   (6)

where |U| ≤ |S|·|X_1|·|X_2|.
Theorem 1 continues to hold if in (6) we replace P_{X_2|S,X_1,U} by P_{X_2|S,U}, with a slightly larger bound on |U|.
The proof of Theorem 1 is an immediate extension of the
proof of Theorem 1 in [4]. In particular, the achievability part
analyzes the error probability of a coding scheme described below (after Corollary 1).
It is noted that Theorem 1 remains intact if we allow feedback to the informed encoder, i.e., if, before producing the i-th channel input symbol, the informed encoder observes the previous channel outputs Y^{i−1}; that is, the informed encoder is actually a sequence of mappings φ^{(2)} = {φ_i^{(2)}}_{i=1}^n with

φ_i^{(2)}: {1, ..., M_c} × {1, ..., M_2} × S^n × Y^{i−1} → X_2.

We now specialize Theorem 1 to the important case where
only the common message is transmitted.
Corollary 1: The common message capacity of the finite input alphabet GGP channel is given by

C = max [I(U, X_1; Y) − I(U, X_1; S)],   (7)

where the maximum is over all joint measures P_{S,X_1,U,X_2,Y} having the form (6), with |U| ≤ |S|·|X_1|·|X_2|.
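Corollary 1 can be read off from Theorem 1: with only the common message transmitted, set R_2 = 0 in (5), so that only the sum-rate constraint binds. In outline:

```latex
% With R_2 = 0, the constraints (5) reduce to the single bound
R_c \le I(U, X_1; Y) - I(U, X_1; S),
% and maximizing over the admissible measures of the form (6) gives (7):
C = \max \bigl[ I(U, X_1; Y) - I(U, X_1; S) \bigr].
```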
We note that there exists a maximizing measure with X_2 equal to a deterministic function of (S, X_1, U).
The achievability part of Theorem 1 is based on the following random coding scheme.
Generation of codebooks: Fix a measure P_{S,X_1,U,X_2,Y} satisfying (6), and set

R_2 = I(U; Y|X_1) − I(U; S|X_1),
R_c = I(X_1; Y) − I(X_1; S) = I(X_1; Y).   (8)

Denote M_1 = e^{n[R_c−ε]}, M_2 = e^{n[R_2−ε]} and J = e^{n[I(U;S|X_1)+2ε]}. First, M_1 i.i.d. n-vectors, {x_ℓ}_{ℓ=1}^{M_1}, are drawn, each with i.i.d. components subject to P_{X_1}. The ordered collection of the drawn vectors constitutes the codebook used by the uninformed encoder.
For each codeword x_ℓ, one draws M_2 × J auxiliary n-vectors, denoted {u_{ℓ,k,j}}, k = 1, ..., M_2, j = 1, ..., J, independently and with i.i.d. components given x_ℓ, subject to the conditional measure P_{U|X_1} induced by Q_S P_{X_1} P_{U,X_2|X_1,S}. Hence, each codeword in the uninformed user's codebook is associated with a codebook of auxiliary codewords.
Encoding: To transmit ℓ, the uninformed encoder transmits the vector x_ℓ. Transmission of k is done by the informed encoder, who searches for the lowest j_0 ∈ {1, ..., J} such that u_{ℓ,k,j_0} is jointly typical with (x_ℓ, s). Denote this j_0 by j(s, ℓ, k). If no such j_0 is found, or if the observed state sequence s is nontypical, an error is declared and j(s, ℓ, k) is set to 1.
Finally, the output of the second (informed) encoder is an n-vector x_2 that is drawn i.i.d. conditionally given (s, u_{ℓ,k,j(s,ℓ,k)}, x_ℓ), using the conditional measure P_{X_2|S,X_1,U} induced by Q_S P_{X_1} P_{U,X_2|X_1,S}.
Decoding: Upon observing y, the decoder searches for a pair of indices (ℓ̂, k̂) such that x_ℓ̂ and u_{ℓ̂,k̂,j}, for some j ∈ {1, ..., J}, are jointly typical with y, and outputs them. If there is no such pair, or if it is not unique, an error is declared.
The analysis of the probability of error of the above-described scheme (see [12]) establishes the achievability of (R_c, R_2). It is easy to realize that if a rate pair (R_c, R_2)
is achievable, then so is (R_c + R_2, 0). Thus the entire trapezoid (5) is an achievable region by time-sharing arguments.
When the common message capacity is concerned, the above encoding scheme can be applied by attributing the bits assigned to W_2 in the above-described scheme to the common message W_c. The output of the decoder is the pair m̂ = (ℓ̂, k̂).
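The binning step of this scheme can be sketched in code. The version below is a toy, schematic one: Gaussian sequences, empirical correlation as a crude stand-in for joint typicality, and binning against the state only (dropping the dependence on x_ℓ for brevity); all names and thresholds are our illustrative choices.

```python
import math
import random

rng = random.Random(0)

def corr(a, b):
    # Empirical correlation, used here as a crude joint-typicality proxy.
    n = len(a)
    num = sum(x * y for x, y in zip(a, b)) / n
    den = (math.sqrt(sum(x * x for x in a) / n)
           * math.sqrt(sum(y * y for y in b) / n))
    return num / den

def make_aux_codebook(M2, J, n):
    # For one codeword x_l: M2 bins, each holding J auxiliary n-vectors
    # u_{l,k,j}, drawn i.i.d.
    return [[[rng.gauss(0.0, 1.0) for _ in range(n)] for _ in range(J)]
            for _ in range(M2)]

def bin_encode(aux, k, s, threshold=0.05):
    # Informed encoder: within bin k, pick the lowest j whose u_{l,k,j}
    # "matches" the state sequence s (here: empirical correlation above a
    # threshold). Returns j (1-based); returns 1 if no match is found,
    # mirroring the error convention j(s, l, k) = 1 in the text.
    for j, u in enumerate(aux[k], start=1):
        if corr(u, s) > threshold:
            return j
    return 1

n, M2, J = 64, 4, 32
aux = make_aux_codebook(M2, J, n)
s = [rng.gauss(0.0, 1.0) for _ in range(n)]
print(bin_encode(aux, k=2, s=s))
```

With J large enough (exponential in the binning rate, here J = 32 for illustration), a matching auxiliary codeword exists in each bin with high probability, which is the content of the error analysis in [12].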
Although the capacity region of the GGP channel is characterized in Theorem 1, we next state an outer bound for it. The reason is that this outer bound is achievable in the Gaussian case, and hence is useful in the proof of the converse part of the Gaussian coding theorem. The outer bound is a generalization of the trivial upper bound max_{P_{X|S}} I(X; Y|S) on the capacity of the ordinary single-user GP channel.
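The single-user bound just cited is the usual genie argument: revealing the state sequence to the decoder cannot decrease capacity, and once the state is known at both the encoder and the decoder, the channel is an ordinary DMC conditioned on the state. In outline:

```latex
% Genie upper bound for the single-user GP channel:
C_{\mathrm{GP}} \;\le\; \max_{P_{X|S}} I(X; Y \mid S)
  \;=\; \max_{P_{X|S}} \sum_{s \in \mathcal{S}} Q_S(s)\, I(X; Y \mid S = s),
% since a decoder that is also given S^n faces, for each state value s,
% the memoryless channel W_{Y|S=s,X}.
```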
Theorem 2: The closure of the convex hull of the set of rate pairs satisfying
For λ ≥ λ_min, the maximum is achieved with either ρ = −1, ρ = 0, or any real root of g_λ(ρ) that satisfies ρ ∈ [−1, 0], where g_λ(ρ) is a fourth-degree polynomial in ρ with leading term P_2(P_1 + Q)ρ⁴ and remaining coefficients that depend on P_1, P_2, Q, N and λ.   (18)
The proof is based on showing that for the Gaussian channel one can restrict attention to jointly Gaussian (S, X_1, X_2), and that an optimal choice for U is U = X_2 + α_opt S, where α_opt, given explicitly in (19), is a function of P_1, P_2, Q, N and the covariances σ_12 = E[X_1 X_2] and σ_2S = E[X_2 S]. Different values of σ_12 and σ_2S are chosen to achieve different points that lie in (or on the boundary of) the capacity region. The allowable values of the covariances σ_12 and σ_2S are those for which the resulting covariance matrix of (X_1, X_2, S) satisfies the nonnegative-definiteness condition, expressed in terms of the correlation coefficients ρ_12 = σ_12/√(P_1 P_2) and ρ_2S = σ_2S/√(P_2 Q) as

ρ_12² + ρ_2S² ≤ 1.   (20)

The regime in which P_1(P_1 + Q) ≤ P_2 Q will be referred to as the silent regime,
and its complement will be referred to as the active regime.
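The nonnegative-definiteness condition can be checked directly: with X_1 independent of S, the covariance matrix of (X_1, X_2, S) is [[P_1, σ_12, 0], [σ_12, P_2, σ_2S], [0, σ_2S, Q]], and its determinant equals P_1 P_2 Q (1 − ρ_12² − ρ_2S²), so positive semidefiniteness reduces to (20). A quick numeric confirmation (helper names ours):

```python
import math

def cov_det(P1, P2, Q, s12, s2S):
    # Determinant of the covariance matrix of (X1, X2, S), with E[X1 S] = 0:
    # [[P1, s12, 0], [s12, P2, s2S], [0, s2S, Q]]
    return P1 * (P2 * Q - s2S ** 2) - s12 * (s12 * Q)

def one_minus_rho_sq(P1, P2, Q, s12, s2S):
    # 1 - rho12^2 - rho2S^2 in terms of the correlation coefficients.
    r12 = s12 / math.sqrt(P1 * P2)
    r2S = s2S / math.sqrt(P2 * Q)
    return 1 - r12 ** 2 - r2S ** 2

P1, P2, Q = 1.0, 2.0, 3.0
s12, s2S = 0.5, -1.0
lhs = cov_det(P1, P2, Q, s12, s2S)
rhs = P1 * P2 * Q * one_minus_rho_sq(P1, P2, Q, s12, s2S)
print(lhs, rhs)  # equal: PSD holds iff rho12^2 + rho2S^2 <= 1
```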
The following proposition simplifies the capacity regionexpression for certain ranges of rates. It indicates a range ofrates for which the outer bound given in Theorem 5 is tight.
Proposition 1: 1) For any P_1, P_2, Q, N, the segment connecting the following points (a) and (b) in the (R_c, R_2) plane lies on the boundary of C(P_1, P_2, Q, N):

(a) (R_c, R_2) = (0, ½ log(1 + P_2/N)),
(b) (R_c, R_2) = (½ log(1 + P_1/(Q + P_2 + N)), ½ log(1 + P_2/N)).

2) If P_1, P_2, Q, N lie in the active regime, the segment connecting the following points (c) and (d) also lies on the boundary of C(P_1, P_2, Q, N):

(c) a point with R_2 > 0 whose explicit coordinates depend on Δ = Q(P_1 + Q) − P_1(P_2 + N),
(d) (R_c, R_2) = (½ log(1 + P_1/(P_2 + N)) + ½ log(1 + P_2/N), 0).
The capacity region for (P_1, P_2, Q, N) = (1, 1, 3, 1), which lies in the active regime, as well as the points (a), (b), (c), (d) and the outer bound of Corollary 5, are plotted in Figure 2. The dotted trapezoids depict achievable regions attained by choosing specific λ values in (17). Note that the segment connecting (c) and (d) meets the R_c axis at 45°.
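The corner points with closed-form expressions, (a), (b) and (d), can be evaluated for the plotted example (P_1, P_2, Q, N) = (1, 1, 3, 1). A small sketch (rates in nats; helper names ours):

```python
import math

def point_a(P1, P2, Q, N):
    # No common rate; informed user's private rate is interference-free.
    return (0.0, 0.5 * math.log(1 + P2 / N))

def point_b(P1, P2, Q, N):
    # Uninformed user decoded while X2 + S + Z is treated as noise;
    # the informed user's private rate is still interference-free.
    return (0.5 * math.log(1 + P1 / (Q + P2 + N)),
            0.5 * math.log(1 + P2 / N))

def point_d(P1, P2, Q, N):
    # Common-message-only corner: R_2 = 0.
    return (0.5 * math.log(1 + P1 / (P2 + N))
            + 0.5 * math.log(1 + P2 / N), 0.0)

for p in (point_a, point_b, point_d):
    print(p(1.0, 1.0, 3.0, 1.0))
```

For this example, point (d) has R_c = ½ log 3, since the two logarithms telescope to ½ log(1 + (P_1 + P_2)/N).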
C. The Common Message Capacity with Individual Power Constraints
This subsection is devoted to specializing the results to thecommon message capacity, C(P1, P2, Q, N).
Theorem 7:

C(P_1, P_2, Q, N) =
  max_{ρ ∈ [−1,0]} ½ log(1 + (√P_1 + √(P_2(1−ρ²)))² / ((√Q + ρ√P_2)² + N))   if P_1(P_1 + Q) ≤ P_2 Q,
  max_{ρ ∈ [−1,0]} ½ log(1 + (√P_1 + √(P_2(1−ρ²)))² / N)   otherwise,   (21)

where the maximization over ρ can be limited to either ρ = −1, ρ = 0, or a real root ρ of g_0(ρ) (see (18)) such that ρ ∈ [−1, 0].
Discussion: In the silent regime, the optimal values of σ_12 and σ_2S, as far as C(P_1, P_2, Q, N) is concerned, are such that inequality (20) is met with equality, i.e., ρ_12² + ρ_2S² = 1. This is equivalent to X_2 = (σ_12/P_1) X_1 + (σ_2S/Q) S. It implies that in the silent regime the optimal values of α (19) and U are α_opt^silent = −σ_2S/Q and U_opt^silent = X_2 + α_opt^silent S = (σ_12/P_1) X_1. It is easy to verify that the simpler selection U_opt^silent = 0 yields the same achievable rate and hence is also optimal. Consequently, in the silent regime, in order to achieve C(P_1, P_2, Q, N), the informed encoder can put all its power into decreasing the interference and enhancing the signal of the uninformed encoder. No power is devoted to the transmission of additional information, and hence we refer to this regime as silent. Since U_opt^silent = 0, no binning is needed. Given a message m to be transmitted, which corresponds to the codeword x_m = (x_m(1), x_m(2), ..., x_m(n)) of the uninformed encoder, and a state sequence s, the informed encoder simply transmits at time index i

x_2(i) = (σ_12/P_1) x_m(i) + (σ_2S/Q) s(i),

where σ_2S = ρ√(P_2 Q) and σ_12 = √(P_1 P_2 (1 − ρ²)), with ρ the maximizer in (21).
In the active regime, the maximizing (X_1, X_2, S) is Gaussian, with a value of ρ_2S^active determined by P_1, P_2, Q, N. Thus, α_opt (see (19)) is given by α_opt^active = P_2/(P_2 + N), which is equal to the optimal α in Costa's setup [3] when the uninformed user is not present. This results in a surprising phenomenon which happens only in the active regime: the highest achievable common message rate is ½ log(1 + P_1/(P_2 + N)) + ½ log(1 + P_2/N), i.e., the upper bound on C(P_1, P_2, Q, N) deduced from Theorem 5 is achievable.
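The active-regime rate telescopes: (1 + P_1/(P_2 + N))(1 + P_2/N) = (P_1 + P_2 + N)/N, so the sum of the two logarithms equals ½ log(1 + (P_1 + P_2)/N), the rate of a single transmitter with the pooled power and no interference. A quick check (helper names ours):

```python
import math

def active_rate(P1, P2, N):
    # 0.5*log(1 + P1/(P2+N)) + 0.5*log(1 + P2/N): the two-stage rate from
    # the active-regime discussion.
    return (0.5 * math.log(1 + P1 / (P2 + N))
            + 0.5 * math.log(1 + P2 / N))

def pooled_rate(P1, P2, N):
    # Interference-free rate with the pooled power P1 + P2.
    return 0.5 * math.log(1 + (P1 + P2) / N)

for P1, P2, N in ((1.0, 1.0, 1.0), (2.0, 0.5, 3.0), (10.0, 4.0, 0.7)):
    assert math.isclose(active_rate(P1, P2, N), pooled_rate(P1, P2, N))
print("identity holds")
```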
D. Sum Power Constraints
Next, we state a closed-form characterization of the capacity region under a sum power constraint. We denote by ζ the portion of the power that is used by the informed user.
Theorem 8: The capacity region of the Gaussian GGP channel under a sum power constraint, C(P, Q, N), is given by

C(P, Q, N) = ∪_{ζ ∈ [0,1]} C((1 − ζ)P, ζP, Q, N).

The following theorem gives the common message capacity under a sum power constraint, C(P, Q, N).
Theorem 9:

C(P, Q, N) =
  ½ log(1 + P/N)                  if N + P ≤ Q,
  ½ log((Q + N + P)² / (4QN))     if N ≤ Q < N + P.
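The two cases of Theorem 9 agree at the boundary Q = N + P, since ½ log((Q + N + P)²/(4QN)) evaluated at Q = N + P equals ½ log(1 + P/N); for N ≤ Q < N + P, the middle expression strictly exceeds ½ log(1 + P/N), consistent with the fact that putting all the sum power on the informed user (ζ = 1) already achieves the Costa rate. A numeric check (helper names ours):

```python
import math

def costa_rate(P, N):
    # zeta = 1: informed user alone, writing on dirty paper.
    return 0.5 * math.log(1 + P / N)

def mid_case(P, Q, N):
    # The N <= Q < N+P case of Theorem 9.
    return 0.5 * math.log((Q + N + P) ** 2 / (4 * Q * N))

P, N = 2.0, 1.0
assert math.isclose(mid_case(P, N + P, N), costa_rate(P, N))  # boundary Q = N+P
for Q in (1.0, 1.5, 2.0, 2.5):
    assert mid_case(P, Q, N) > costa_rate(P, N)               # Q in [N, N+P)
print("cases are consistent")
```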