ISIT 2007, Nice, France, June 24 - June 29, 2007

Cooperative Multiple Access Encoding with States Available at One Transmitter

Anelia Somekh-Baruch, Princeton University, Princeton, NJ, USA, [email protected]

Shlomo Shamai (Shitz), Technion I.I.T., Haifa, Israel, [email protected]

Sergio Verdú, Princeton University, Princeton, NJ, USA, [email protected]

Abstract- We generalize the Gel'fand-Pinsker model to encompass the setup of a memoryless multiple-access channel. According to this setup, only one of the encoders knows the state of the channel (non-causally), which is also unknown to the receiver. Two independent messages are transmitted: a common message and a message transmitted by the informed encoder. We find explicit characterizations of the capacity region with both non-causal and causal state information. Further, we apply the general formula to the Gaussian case with non-causal channel state information, under an individual power constraint as well as a sum power constraint. In this case, the capacity region is achievable by a generalized writing-on-dirty-paper scheme.

I. INTRODUCTION

The capacity of state-dependent channels has become a widely investigated research area. The framework of channel states available at the transmitter dates back to Shannon [1], who characterized the capacity of a state-dependent memoryless channel whose states are i.i.d. and available causally to the transmitter. In their celebrated paper [2], Gel'fand and Pinsker (GP) established a single-letter formula for the capacity of the same channel under the conceptually different setup where the transmitter observes the channel states non-causally. The main tool used to prove achievability in this setup is the binning encoding principle [2]. Costa [3] applied GP's result to the Gaussian case with two additive Gaussian noise sources, one of which, the interference, takes the role of the channel state. Costa originated the term "writing on dirty paper," which stands for an application of GP's binning encoding scheme that adapts the transmitted signal to the channel state sequence rather than attempting to cancel it. This results in a surprising conclusion: the capacity of the channel without interference can be attained even though the interference is not known to the receiver. For extensions of this principle and other related work see [4] and references therein.
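For reference, the Gel'fand-Pinsker capacity formula and Costa's auxiliary-variable choice (standard results from [2] and [3], restated here only as background) are

\[ C_{GP} = \max_{P_{U,X\mid S}} \left[ I(U;Y) - I(U;S) \right], \]

and, for the additive Gaussian channel Y = X + S + N with input power P, interference power Q and noise variance N, the choice U = X + \alpha S with \alpha = P/(P+N) gives

\[ I(U;Y) - I(U;S) = \tfrac{1}{2}\log\!\left(1 + \frac{P}{N}\right), \]

i.e., the no-interference capacity.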

Much research has been devoted to applications of these channel models, for example, watermarking (see [5] and references therein), multi-input multi-output (MIMO) broadcast channels [6], where dirty-paper coding happens to be a central ingredient in achieving the capacity region, and cooperative networks [7].

In [4] we introduced the setup of cooperative encoding over the asymmetric (channel states known to one encoder only) GP MAC, with a single common message source fed to both encoders. We characterized the capacity of this channel both for the general finite input alphabet case and the Gaussian case.

Fig. 1. Asymmetric state-dependent MAC with a common message.

We also briefly discussed a generalization of this setup to include an additional message source that is fed to the informed user only. In this paper, we perform this generalization. The results presented in this paper have applications relating to cognitive radio (see [8] and references therein), watermarking, and other scenarios of cooperative communication, e.g., [9].

II. PROBLEM SETUP

A stationary memoryless state-dependent multiple-access channel is defined by a distribution Q_S on the set S and the channel conditional probability distribution W_{Y|S,X1,X2} from S × X1 × X2 to Y. Let X^(1) = (X1(1), ..., X1(n)) and X^(2) = (X2(1), ..., X2(n)) designate the inputs of transmitters 1 and 2 to the channel, respectively. The output of the channel is denoted by Y^n. The symbols S_i, X1(i), X2(i) and Y_i represent the channel state, the channel inputs produced by the two distinct encoders, and the channel output at time index i, respectively. We assume that the channel states S^n are i.i.d., each distributed according to Q_S. As can be seen in Figure 1, the setup we consider is asymmetric in the sense that only encoder 2 is informed of the channel states, while neither encoder 1 nor the decoder knows the channel states. Unlike the ordinary MAC with partially known state information (which was considered in [10], [11], where inner and outer bounds on the capacity region are derived), we allow a common message source fed to both encoders and an independent message that is to be transmitted by the informed encoder. When encoder 2 observes the CSI non-causally, we shall refer to this channel as a Generalized Gel'fand-Pinsker (GGP) channel; when encoder 2 observes the states causally, the channel will be referred to as an asymmetric causal state-dependent channel.

The common message, Wc, and the private message, W2, are independent random variables uniformly distributed over the sets {1, ..., Mc} and {1, ..., M2}, respectively, where Mc = ⌊e^{nRc}⌋ and M2 = ⌊e^{nR2}⌋. An (e^{nRc}, e^{nR2}, n)-code for the GGP channel consists of two encoders φ_n^(1), φ_n^(2) and a decoder ψ_n: the first encoder, unaware of the CSI, is a mapping

φ_n^(1): {1, ..., Mc} → X1^n.    (1)

The second encoder observes the CSI non-causally and is defined by a mapping

φ_n^(2): {1, ..., Mc} × {1, ..., M2} × S^n → X2^n.    (2)

The decoder is a mapping

ψ_n: Y^n → {1, ..., Mc} × {1, ..., M2}.    (3)

An (e^{nRc}, e^{nR2}, n)-code for the asymmetric causal state-dependent channel is defined similarly, with the exception that the second encoder is defined by a sequence of mappings

φ_i^(2): {1, ..., Mc} × {1, ..., M2} × S^i → X2,    (4)

where i = 1, ..., n, and X2(i) = φ_i^(2)(Wc, W2, S^i). An (ε, n, Rc, R2)-code for the GGP channel is a code (φ_n^(1), φ_n^(2), ψ_n) having average probability of error not exceeding ε, i.e., Pr((Wc, W2) ≠ ψ_n(Y^n)) ≤ ε. A rate pair (Rc, R2) is said to be achievable if there exists a sequence of (ε_n, n, Rc, R2)-codes with lim_{n→∞} ε_n = 0. The capacity region of the GGP channel is defined as the closure of the set of achievable rate pairs. The definitions of an (ε, n, Rc, R2)-code, an achievable rate pair, and the capacity region of the asymmetric causal state-dependent channel are similar.

Due to space limitations, the proofs of the results of this paper are omitted and can be found in [12].
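To make the setup concrete, the following toy Python simulation runs one block of the asymmetric state-dependent MAC; the alphabets, distributions and encoder rules are arbitrary placeholders (they are not the coding scheme analyzed in Section III), but the information pattern is the one of Figure 1: encoder 2 sees the entire state sequence in advance, while encoder 1 and the decoder do not.

import numpy as np

rng = np.random.default_rng(0)

n = 8                      # block length (toy value)
S_ALPHABET = [0, 1]        # state alphabet
Q_S = [0.5, 0.5]           # i.i.d. state distribution Q_S

def encoder1(wc, n):
    # Uninformed encoder: depends only on the common message Wc.
    rng1 = np.random.default_rng(wc)
    return rng1.integers(0, 2, size=n)

def encoder2(wc, w2, s):
    # Informed encoder: sees (Wc, W2) and the entire state sequence s (non-causal CSI).
    rng2 = np.random.default_rng(hash((wc, w2)) % (2**32))
    x2 = rng2.integers(0, 2, size=len(s))
    return (x2 + s) % 2    # toy use of the known states

def channel(x1, x2, s):
    # Memoryless state-dependent channel W_{Y|S,X1,X2}: here a noisy mod-2 sum (toy choice).
    noise = rng.random(len(s)) < 0.1
    return (x1 + x2 + s + noise.astype(int)) % 2

s = rng.choice(S_ALPHABET, size=n, p=Q_S)   # i.i.d. states
wc, w2 = 3, 1                               # common and private messages
x1 = encoder1(wc, n)                        # encoder 1: no access to s
x2 = encoder2(wc, w2, s)                    # encoder 2: full (non-causal) access to s
y = channel(x1, x2, s)                      # the decoder observes only y
print(y)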

III. CAPACITY REGION - FINITE INPUT ALPHABET GGP CHANNEL

In this section it is assumed that the alphabets S, X1, X2 are finite.

Theorem 1 The capacity region of the GGP channel, C, is the closure of the set of all rate pairs (Rc, R2) satisfying

R2 ≤ I(U; Y|X1) - I(U; S|X1)
Rc + R2 ≤ I(U, X1; Y) - I(U, X1; S),    (5)

for some joint measure P_{S,X1,U,X2,Y} having the form

P_{S,X1,U,X2,Y} = Q_S P_{X1} P_{U|S,X1} P_{X2|S,X1,U} W_{Y|S,X1,X2},    (6)

where |U| ≤ |S| · |X1| · |X2|.

Theorem 1 continues to hold if in (6) we replace P_{X2|S,X1,U} by P_{X2|S,U}, with a slightly larger bound on |U|.

The proof of Theorem 1 is an immediate extension of the proof of Theorem 1 in [4]. In particular, the achievability part analyzes the error probability of a coding scheme described below (after Corollary 1).

It is noted that Theorem 1 remains intact if we allow for feedback to the informed encoder, i.e., if, before producing the i-th channel input symbol, the informed encoder observes the previous channel outputs Y^{i-1}; that is, the informed encoder is actually a sequence of mappings φ_n^(2) = {φ_n^(2,i)}_{i=1}^n with

φ_n^(2,i): {1, ..., Mc} × {1, ..., M2} × S^n × Y^{i-1} → X2.

We now specialize Theorem 1 to the important case where only the common message is transmitted.

Corollary 1 The common message capacity of the finite input alphabet GGP channel is given by

C = max [I(U, X1; Y) - I(U, X1; S)],    (7)

where the maximum is over all joint measures P_{S,X1,U,X2,Y} having the form (6) with |U| ≤ |S| · |X1| · |X2|.

We note that there exists a maximizing measure with X2 equal to a deterministic function of (S, X1, U).
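The single-letter quantities in (5) and (7) can be evaluated numerically for small alphabets. The following Python sketch builds one (arbitrarily chosen) joint measure of the form (6) over binary alphabets and computes I(U;Y|X1) - I(U;S|X1) and I(U,X1;Y) - I(U,X1;S) for it; it does not attempt the maximization over measures.

import itertools
import numpy as np

def mutual_information(pxy):
    # I(X;Y) in nats for a joint pmf given as a 2-D array.
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    mask = pxy > 0
    return float(np.sum(pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])))

def conditional_mi(pxyz):
    # I(X;Y|Z) for a joint pmf p(x,y,z) given as a 3-D array (Z on the last axis).
    total = 0.0
    for z in range(pxyz.shape[2]):
        pz = pxyz[:, :, z].sum()
        if pz > 0:
            total += pz * mutual_information(pxyz[:, :, z] / pz)
    return total

# Toy binary alphabets and arbitrary factors of the form (6).
nS = nX1 = nU = nX2 = nY = 2
rng = np.random.default_rng(1)
Q_S = np.array([0.5, 0.5])
P_X1 = np.array([0.5, 0.5])
P_U_SX1 = rng.dirichlet(np.ones(nU), size=(nS, nX1))          # P(u | s, x1)
P_X2_SX1U = rng.dirichlet(np.ones(nX2), size=(nS, nX1, nU))   # P(x2 | s, x1, u)

def W(y, s, x1, x2):
    # Toy channel W_{Y|S,X1,X2}: Y = X1 xor X2 xor S observed through a BSC(0.1).
    return 0.9 if y == (x1 + x2 + s) % 2 else 0.1

# Joint measure P_{S,X1,U,X2,Y} as in (6); axes are (s, x1, u, x2, y).
joint = np.zeros((nS, nX1, nU, nX2, nY))
for s, x1, u, x2, y in itertools.product(range(nS), range(nX1), range(nU), range(nX2), range(nY)):
    joint[s, x1, u, x2, y] = (Q_S[s] * P_X1[x1] * P_U_SX1[s, x1, u]
                              * P_X2_SX1U[s, x1, u, x2] * W(y, s, x1, x2))

p_x1u_y = joint.sum(axis=(0, 3))                      # p(x1, u, y)
p_x1u_s = joint.sum(axis=(3, 4)).transpose(1, 2, 0)   # p(x1, u, s)

R2_term = conditional_mi(p_x1u_y.transpose(1, 2, 0)) - conditional_mi(joint.sum(axis=(3, 4)).transpose(2, 0, 1))
sum_term = (mutual_information(p_x1u_y.reshape(nX1 * nU, nY))
            - mutual_information(p_x1u_s.reshape(nX1 * nU, nS)))
print("I(U;Y|X1) - I(U;S|X1) =", round(R2_term, 4), "nats")
print("I(U,X1;Y) - I(U,X1;S) =", round(sum_term, 4), "nats")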

The achievability part of Theorem 1 is based on the following random coding scheme.

Generation of codebooks: Fix a measure P_{S,X1,U,X2,Y} satisfying (6), and set

R2 = I(U; Y|X1) - I(U; S|X1)
Rc = I(X1; Y) - I(X1; S) = I(X1; Y).    (8)

Denote M1 = e^{n[Rc - εc]}, M2 = e^{n[R2 - εc]}, and J = e^{n[I(U;S|X1) + 2εc]}, for some small εc > 0. First, M1 i.i.d. n-vectors {x_ℓ}_{ℓ=1}^{M1} are drawn, each with i.i.d. components according to P_{X1}. The ordered collection of the drawn vectors constitutes the codebook used by the uninformed encoder.

For each codeword x_ℓ, one draws M2 × J auxiliary n-vectors, denoted {u_{ℓ,k,j}}, k = 1, ..., M2, j = 1, ..., J, independently and with i.i.d. components given x_ℓ, according to the conditional measure P_{U|X1} induced by Q_S P_{X1} P_{U,X2|X1,S}. Hence, each codeword in the uninformed user's codebook is associated with a codebook of auxiliary codewords.

Encoding: To transmit ℓ, the uninformed encoder transmits the vector x_ℓ. Transmission of k is done by the informed encoder, who searches for the lowest j0 ∈ {1, ..., J} such that u_{ℓ,k,j0} is jointly typical with (x_ℓ, s). Denote this j0 by j(s, ℓ, k). If no such j0 is found, or if the observed state sequence s is non-typical, an error is declared and j(s, ℓ, k) is set to 1.

Finally, the output of the second (informed) encoder is an n-vector x2 that is drawn i.i.d. conditionally given (s, u_{ℓ,k,j(s,ℓ,k)}, x_ℓ), using the conditional measure P_{X2|S,X1,U} induced by Q_S P_{X1} P_{U,X2|X1,S}.

Decoding: Upon observing y, the decoder searches for a pair of indices (i, k) such that (x_i, u_{i,k,j}) is jointly typical with y for some j, and outputs them. If there is no such pair, or it is not unique, an error is declared.
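The bin search j(s, ℓ, k) above can be mimicked numerically. The fragment below is a deliberately simplified, Gaussian-flavored stand-in (not the construction used in the proof): the auxiliary codewords are drawn as a dirty-paper-style U = X2 + αS that ignores X1, and exact joint typicality is replaced by a crude empirical-correlation test.

import numpy as np

rng = np.random.default_rng(0)
n, J = 50, 1024            # toy block length and bin size (the proof takes J ~ e^{n[I(U;S|X1)+2εc]})
alpha, P2, Q = 0.5, 1.0, 1.0

def bin_search(u_bin, s, delta=0.15):
    # Lowest j whose codeword "looks jointly typical" with s: empirical correlation
    # close to the target E[U S] = alpha*Q (a crude stand-in for joint typicality).
    for j, u in enumerate(u_bin):
        if abs(np.dot(u, s) / n - alpha * Q) < delta:
            return j
    return 0               # error event: fall back to the first index

s = rng.normal(0.0, np.sqrt(Q), n)                            # observed state sequence
u_bin = rng.normal(0.0, np.sqrt(P2 + alpha**2 * Q), (J, n))   # bin of auxiliary codewords for one (ℓ, k)
j0 = bin_search(u_bin, s)
x2 = u_bin[j0] - alpha * s                                    # informed encoder's transmitted vector
print(j0, float(np.mean(x2**2)))                              # empirical power of x2 is roughly P2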

The analysis of the probability of error of the above-described scheme (see [12]) establishes the achievability of (Rc, R2). It is easy to realize that if a rate pair (Rc, R2) is achievable, then so is (Rc + R2, 0), and thus the entire trapezoid (5) is an achievable region by time-sharing arguments.

When the common message capacity is concerned, the above encoding scheme can be applied by attributing the bits assigned to W2 in the above-described scheme to the common message Wc. The output of the decoder is then the pair m = (i, k).

Although the capacity region of the GGP channel is characterized in Theorem 1, we next state an outer bound for it. The reason is that this outer bound is achievable in the Gaussian case, and hence is useful in the proof of the converse part of the Gaussian coding theorem. The outer bound is a generalization of the trivial upper bound max_{P_{X|S}} I(X; Y|S) on the capacity of the ordinary single-user GP channel.

Theorem 2 The closure of the convex hull of the set of rate pairs satisfying

R2 ≤ I(X2; Y|S, X1)
Rc + R2 ≤ I(X1, X2; Y|S) - I(S; X1|Y)    (9)

for some measure P_{S,X1,X2,Y} = Q_S P_{X1} P_{X2|S,X1} W_{Y|S,X1,X2} is an outer bound on the capacity region of the GGP channel.

A sub-class of GGP channels that will be of special interest is the following. A memoryless parallel channel with non-causal asymmetric side information is a GGP channel with Y = (Y1, Y2) and W_{Y1,Y2|S,X1,X2} = W_{Y1|X1,S} W_{Y2|X2,S}. In words, this is a GGP channel with two outputs, Y1(1), ..., Y1(n) and Y2(1), ..., Y2(n), that are both observed by the receiver. If, in addition, one has W_{Y2|X2,S} = W_{Y2|X2}, we shall say that the parallel channel is degenerate.

In the following theorem, we find the capacity region of the degenerate parallel GGP channel, for which we establish that the CSI does not help.

Theorem 3 The capacity region of the degenerate parallel GGP channel is equal to the capacity region obtained without transmitter CSI, i.e., the union of rate pairs (Rc, R2) such that

R2 ≤ C2
Rc + R2 ≤ C1 + C2,    (10)

where C1 is the capacity of the channel W_{Y1|X1,S} obtained without transmitter CSI, and C2 is the capacity of the channel W_{Y2|X2}.

IV. THE CAUSAL ASYMMETRIC STATE-DEPENDENT CHANNEL

Theorem 4 The capacity region of the finite input alphabet causal asymmetric state-dependent channel is given by the closure of the set of rate pairs (Rc, R2) satisfying

R2 ≤ I(U; Y|X1)
Rc + R2 ≤ I(U, X1; Y),    (11)

for some joint measure P_{S,X1,U,X2,Y} having the form

P_{S,X1,U,X2,Y} = Q_S P_{X1,U} P_{X2|S,X1,U} W_{Y|S,X1,X2},    (12)

where |U| ≤ |S| · |X1| · |X2| + 1.

The expression for the capacity region in Theorem 4 can be interpreted as a special case of Theorem 1, where U is independent of S. Specializing Theorem 4 to the case where there is only a transmission of a common message, we get the following.

Corollary 2 The common message capacity of the finite input alphabet causal asymmetric state-dependent channel is given by

max I(U, X1; Y),    (13)

where the maximum is over all joint measures P_{S,X1,U,X2,Y} having the form (12) and |U| ≤ |S| · |X1| · |X2| + 1.

We note that an alternative expression for the common message capacity in the causal case is obtained by replacing I(U, X1; Y) with I(U; Y) in (13), where X1 is a deterministic function of U, at the cost of a slightly larger bound on |U|. As a side note, we mention that if the CSI is available to both encoders, the single-letter expression for the common message capacity is deduced as a direct application of Shannon's formula [1] for a channel with input alphabet X1 × X2.

V. THE GAUSSIAN GGP CHANNEL

A. Channel Model

The Gaussian GGP channel is given by

Y_i = X1(i) + X2(i) + S_i + N_i,    i = 1, ..., n.    (14)

As before, the message Wc is available to both encoders, and only the second encoder knows the realization of the interference S^n (non-causally) and the message W2 to be transmitted. The noise processes, S^n and N^n, are assumed to be zero-mean Gaussian i.i.d. with E(S_i^2) = Q and E(N_i^2) = N. The process N^n is independent of (X^(1), X^(2), S^n). Several power constraints can be considered: a) Individual power constraints: (1/n) Σ_{i=1}^n X1^2(i) ≤ P1, (1/n) Σ_{i=1}^n X2^2(i) ≤ P2. b) Sum power constraint: (1/n) Σ_{i=1}^n [X1^2(i) + X2^2(i)] ≤ P. c) Total received power constraint: (1/n) Σ_{i=1}^n E[(X1(i) + X2(i))^2] ≤ P. When c) is concerned, it is evident that all the power should be assigned to the informed encoder, and the problem degenerates to the ordinary "dirty paper" Costa setup [3], where the informed transmitter can assign bits of the transmitted information to either Wc or W2. Consequently, the capacity region in this case is a triangle whose vertices (Rc, R2) are (0, 0), (0, (1/2) log(1 + P/N)) and ((1/2) log(1 + P/N), 0).

We are interested in finding the capacity regions for the individual power constraints and the sum power constraint, denoted C(P1, P2, Q, N) and C(P, Q, N), respectively.

B. Capacity Region under Individual Power Constraints

First, we consider the Gaussian degenerate parallel channel with non-causal asymmetric CSI, which is a GGP channel whose i-th output is given by Y_i = (Y1(i), Y2(i)) with Y1(i) = X1(i) + S_i and Y2(i) = X2(i) + N_i, where X1(i), X2(i), S_i, N_i are as defined in (14).

Theorem 5 The capacity region of the Gaussian degenerate parallel channel with non-causal asymmetric CSI under individual power constraints is given by the set of rate pairs (Rc, R2) satisfying

R2 ≤ (1/2) log(1 + P2/N)    (15)
Rc + R2 ≤ (1/2) log(1 + P1/Q) + (1/2) log(1 + P2/N).    (16)

Theorem 5 obviously provides a simple outer bound on C(P1, P2, Q, N), as the decoder has more information, (Y1(i), Y2(i)), rather than Y(i) = Y1(i) + Y2(i). In the sequel, we establish the tightness of this bound for a certain range of rates. The following theorem provides an explicit expression for C(P1, P2, Q, N).
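As a quick numerical illustration (not taken from the paper), the following Python lines evaluate the bounds (15)-(16) for the parameters (P1, P2, Q, N) = (1, 1, 3, 1) used in Figure 2; rates are in nats.

import numpy as np

def theorem5_outer_bound(P1, P2, Q, N):
    # Corner rates of the outer bound: R2 is capped by (15), the sum rate Rc + R2 by (16).
    r2_max = 0.5 * np.log(1 + P2 / N)
    sum_max = 0.5 * np.log(1 + P1 / Q) + 0.5 * np.log(1 + P2 / N)
    return r2_max, sum_max

r2_max, sum_max = theorem5_outer_bound(1.0, 1.0, 3.0, 1.0)
print(f"R2 <= {r2_max:.4f} nats, Rc + R2 <= {sum_max:.4f} nats")
# With these numbers: R2 <= 0.3466 nats and Rc + R2 <= 0.4904 nats.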

Theorem 6 Let λ_min ∈ [0, 1] and R(λ) be as defined in [12] (λ_min is an explicit function of P1, P2, Q, N, and R(λ) is obtained by a maximization over a correlation parameter ρ ∈ [-1, 0]). Then C(P1, P2, Q, N) is equal to the union over λ ∈ [λ_min, 1] of the rate pairs satisfying

R2 ≤ (1/2) log(1 + λP2/N)
Rc + R2 ≤ (1/2) log(1 + λP2/N) + R(λ),    (17)

where the maximization defining R(λ) can be restricted to ρ = -1, ρ = 0, or a real root of a fourth-degree polynomial g_λ(ρ) lying in [-1, 0]; the explicit expressions for λ_min, R(λ) and g_λ(ρ) (equation (18)) are given in [12].

The proof is based on showing that for the Gaussian channel (14) one can restrict attention to jointly Gaussian (S, X1, X2), and that an optimal choice for U is U = X2 + α_opt S, where α_opt is an explicit function of P1, P2, Q, N and of the covariances σ12 = E(X1 X2) and σ2s = E(X2 S); see equation (19) in [12]. Different values of σ12 and σ2s are chosen to achieve different points that lie in (or on the boundary of) the capacity region. The allowable values of σ12 and σ2s are such that the resulting covariance matrix of (X1, X2, S) is nonnegative definite, which, in terms of the correlation coefficients ρ12 = σ12/√(P1 P2) and ρ2s = σ2s/√(P2 Q), reads

ρ12² + ρ2s² ≤ 1.    (20)

Fig. 2. C(1, 1, 3, 1) and outer bound.

For reasons that will become clear in the sequel, we introduce the following terminology.

Definition 1 The set of parameters P1, P2, Q, N such that P1(P2 + N) > P2 Q will be referred to as the silent regime, and its complement will be referred to as the active regime.

The following proposition simplifies the capacity region expression for certain ranges of rates. It indicates a range of rates for which the outer bound given in Theorem 5 is tight.

Proposition 1 1) For any P1, P2, Q, N, the segment connecting the following points (a) and (b) in the Rc-R2 plane lies on the boundary of C(P1, P2, Q, N):

(a) (Rc, R2) = (0, (1/2) log(1 + P2/N)),
(b) (Rc, R2) = ((1/2) log(1 + P1/(Q + P2 + N)), (1/2) log(1 + P2/N)).

2) If P1, P2, Q, N lie in the active regime, the segment connecting the following points (c) and (d) also lies on the boundary of C(P1, P2, Q, N):

(c) (Rc, R2) = an explicit corner point, expressed in terms of Δ = Q(P1 + Q) - P1(P2 + N), whose coordinates are given in [12],
(d) (Rc, R2) = ((1/2) log(1 + P1/Q) + (1/2) log(1 + P2/N), 0).

The capacity region for (P1, P2, Q, N) = (1, 1, 3, 1), which lies in the active regime, as well as the points (a), (b), (c), (d) and the outer bound of Theorem 5, are plotted in Figure 2. The dotted trapezoids represent achievable regions attained by choosing specific λ values in (17). Note that the segment connecting (c) and (d) meets the Rc axis at -45°.

C. The Common Message Capacity with Individual Power Constraints

This subsection is devoted to specializing the results to the common message capacity, C(P1, P2, Q, N).

Theorem 7

C(P1, P2, Q, N) =
  (1/2) log(1 + P1/Q) + (1/2) log(1 + P2/N),   if P1(P2 + N) ≤ P2 Q,
  max_{ρ ∈ [-1, 0]} (1/2) log(1 + (√P1 + √(P2(1 - ρ²)))² / ((√Q + ρ√P2)² + N)),   otherwise,
    (21)

where the maximization over ρ can be limited to ρ = -1, ρ = 0, or a real root ρ of g_0(ρ) (see (18)) such that ρ ∈ [-1, 0].
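The expression (21) is easy to evaluate numerically. The Python sketch below does so, replacing the root-of-g_0 characterization of the silent-regime maximizer by a brute-force grid search over ρ ∈ [-1, 0]; it is an illustrative check only, and the function name common_message_capacity is introduced here for convenience. Rates are in nats.

import numpy as np

def common_message_capacity(P1, P2, Q, N, grid=10001):
    # Active regime: the Theorem 5 outer bound is achieved.
    if P1 * (P2 + N) <= P2 * Q:
        return 0.5 * np.log(1 + P1 / Q) + 0.5 * np.log(1 + P2 / N)
    # Silent regime: all of the informed user's power goes into a linear combination
    # of X1 and S; maximize the resulting SNR over the correlation rho in [-1, 0].
    rho = np.linspace(-1.0, 0.0, grid)
    num = (np.sqrt(P1) + np.sqrt(P2 * (1 - rho**2)))**2
    den = (np.sqrt(Q) + rho * np.sqrt(P2))**2 + N
    return float(np.max(0.5 * np.log(1 + num / den)))

print(common_message_capacity(1.0, 1.0, 3.0, 1.0))   # active regime example from Figure 2
print(common_message_capacity(4.0, 1.0, 1.0, 1.0))   # a silent regime example: P1(P2+N) > P2*Q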

Discussion: In the silent regime, the optimal values of σ12 and σ2s, as far as C(P1, P2, Q, N) is concerned, are such that inequality (20) is met with equality, i.e., ρ12² + ρ2s² = 1. This is equivalent to X2 = (σ12/P1) X1 + (σ2s/Q) S, which implies that in the silent regime the optimal values of α (19) and U are α_opt^silent = -σ2s/Q and U_opt^silent = X2 - (σ2s/Q) S = (σ12/P1) X1. It is easy to verify that the simpler selection U_opt^silent = 0 yields the same achievable rate and hence is also optimal. Consequently, in the silent regime, in order to achieve C(P1, P2, Q, N), the informed encoder can put all its power into decreasing the interference and enhancing the signal of the uninformed encoder. No power is devoted to the transmission of additional information, and hence we refer to this regime as silent. Since U_opt^silent = 0, no binning is needed. Given a message m to be transmitted, which corresponds to the codeword x_m = (x_m(1), x_m(2), ..., x_m(n)) of the uninformed encoder, and a state sequence s, the informed encoder simply transmits at time index i

x2(i) = (σ12/P1) x_m(i) + (σ2s/Q) s_i,

where σ2s = ρ√(P2 Q) with ρ being the maximizer in (21), and σ12 = √(1 - ρ²) √(P1 P2).

In the active regime, the maximizing (X1, X2, S) is Gaussian with covariances given explicitly in [12]. The resulting α_opt (see (19)) is α_opt^active = P2/(P2 + N), which is equal to the optimal α in Costa's setup [3] when the uninformed user is not present. This results in a surprising phenomenon, which occurs only in the active regime: the highest achievable common message rate is (1/2) log(1 + P1/Q) + (1/2) log(1 + P2/N), i.e., the upper bound on C(P1, P2, Q, N) deduced from Theorem 5 is achievable.

D. Sum Power Constraints

Next, we state a closed-form characterization of the capacity region under a sum power constraint. We denote by ζ the portion of the power that is used by the informed user.

Theorem 8 The capacity region of the Gaussian GGP channel under the sum power constraint, C(P, Q, N), is given by

C(P, Q, N) = ∪_{ζ ∈ [0, 1]} C((1 - ζ)P, ζP, Q, N).
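Specializing Theorem 8 to the common message, the sum-power common message capacity equals the maximum over ζ of the individual-constraint common message capacity with the split ((1 - ζ)P, ζP). The short sketch below performs this sweep numerically; it assumes the helper common_message_capacity from the sketch following Theorem 7 is in scope, and its output can be compared against the closed form of Theorem 9 below.

import numpy as np

# Assumes common_message_capacity(P1, P2, Q, N) from the sketch after Theorem 7 is defined.
def common_message_capacity_sum_power(P, Q, N, zeta_grid=2001):
    # Theorem 8 specialized to the common message: sweep the informed user's power share zeta.
    return max(common_message_capacity((1 - z) * P, z * P, Q, N)
               for z in np.linspace(0.0, 1.0, zeta_grid))

print(common_message_capacity_sum_power(2.0, 3.0, 1.0))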

The following theorem gives the common message capacity under the sum power constraint, C(P, Q, N).

Theorem 9

C(P, Q, N) =
  (1/2) log(1 + P/N),   if N + P ≤ Q,
  (1/2) log((P + Q + N)² / (4QN)),   if Q0 < Q < N + P,
  max_{0 ≤ ζ ≤ 1} R(ζ, P, Q, N),   otherwise,

where Q0 = (1/3)[(N - P) + 2√(N² + NP + P²)] and

R(ζ, P, Q, N) = max_{ρ ∈ [-1, 0]} (1/2) log(1 + (√((1 - ζ)P) + √(ζP(1 - ρ²)))² / ((√Q + ρ√(ζP))² + N)).

The power allocation that achieves C(P, Q, N) is

ζ_opt(P, Q, N) =
  1,   if N + P ≤ Q,
  (P + Q - N) / (2P),   if Q0 < Q < N + P,
  argmax_{0 ≤ ζ ≤ 1} R(ζ, P, Q, N),   otherwise.

Theorem 9 enables us to simplify the expression for C(P, Q, N) as follows: (i) Whenever Q ≥ N + P, the capacity region is not affected by Q, since the best strategy is to let the informed user use all the power. This degenerates to a single-user Costa channel, where the transmitted information bits can be divided between W2 and Wc. (ii) Whenever Q0 < Q < N + P, the boundary of C(P, Q, N) contains the line segment between the points (Rc, R2) given by

((1/2) log((P + Q + N)² / (4QN)), 0)   and   ((1/2) log((P + Q + N) / (3Q - P - N)), (1/2) log((3Q - P - N)(P + Q + N) / (4QN))).

ACKNOWLEDGMENT

This research was supported by a Marie Curie International Fellowship within the 6th European Community Framework Programme. The work of S. Shamai and S. Verdú has also been supported by the Binational US-Israel Scientific Foundation, Grant 2004140.

REFERENCES

[1] C. E. Shannon, "Channels with side information at the transmitter," IBM J. Res. Develop., pp. 289-293, 1958.
[2] S. Gelfand and M. Pinsker, "Coding for channels with random parameters," Problems of Control and Information Theory, vol. 9, no. 1, pp. 19-31, 1980.
[3] M. Costa, "Writing on dirty paper," IEEE Transactions on Information Theory, vol. IT-29, pp. 439-441, May 1983.
[4] A. Somekh-Baruch, S. Shamai (Shitz), and S. Verdú, "Cooperative encoding with asymmetric state information at the transmitters," in Proc. Allerton Conf. Communications, Control, and Computing, Monticello, IL, September 2006.
[5] N. Merhav, "On joint coding for watermarking and encryption," IEEE Transactions on Information Theory, vol. 52, pp. 190-205, January 2006.
[6] G. Caire and S. Shamai (Shitz), "On the achievable throughput of a multi-antenna Gaussian broadcast channel," IEEE Transactions on Information Theory, vol. 49, pp. 1691-1706, July 2003.
[7] A. Høst-Madsen, "On the capacity of cooperative diversity in slow fading channels," in Proc. Allerton Conf. Communications, Control, and Computing, Monticello, IL, October 2002.
[8] N. Devroye, P. Mitran, and V. Tarokh, "Limits on communications in a cognitive radio channel," IEEE Communications Magazine, vol. 44, pp. 44-49, June 2006.
[9] A. Høst-Madsen, "Capacity bounds for cooperative diversity," IEEE Transactions on Information Theory, vol. 52, pp. 1522-1544, April 2006.
[10] S. Kotagiri and J. N. Laneman, "Achievable rates for multiple access channels with state information known at one encoder," in Proc. Allerton Conf. Communications, Control, and Computing, Monticello, IL, October 2004.
[11] S. Kotagiri and J. N. Laneman, "Multiple access channels with state information known to some encoders," preprint, 2006.
[12] A. Somekh-Baruch, S. Shamai (Shitz), and S. Verdú, "Cooperative multiple access encoding with states available at one transmitter," submitted to IEEE Transactions on Information Theory, January 2007.
