
Advanced Digital Communications
École Polytechnique Fédérale, Lausanne: Fall 2015
Gastpar, November 4, 2015

Solutions to Midterm Exam

Last name First name Student ID Nr

General Note: In my view, the most important issue is to know how to address a particular problem. Therefore, there will be partial credit for good solution outlines even if not all the mathematical manipulations are completed.

• You have 100 minutes to complete this exam.

• You are allowed two two-sided sheets of notes. Printed matter, calculators, computing and communication devices are not permitted.

• Unless explicitly stated otherwise, detailed derivations of the results are required for full credit. Results from class do not need to be derived again, but you should state them explicitly when you are using them. For example: “Since we know from class that under AWGN, the ML detector is minimum distance, we obtain ...” is considered a detailed derivation.

• Recall the geometric series: for any q with |q| < 1, we have ∑_{n=0}^{∞} q^n = 1/(1 − q).

• From the FFT-OFDM lecture notes: Recall that using ω = e^{−j2π/N}, the N-dimensional Fourier matrix is the matrix F_N whose entry in row ℓ, column n is given by (1/√N) ω^{(ℓ−1)(n−1)} (for ℓ, n ∈ {1, 2, . . . , N}), and the resulting equivalent channel coefficients can be calculated via the formula H_m = ∑_{ℓ=0}^{N−1} h_ℓ ω^{ℓm}.

*** Good Luck! ***

Problem      Points earned   out of
Problem 1                      20
Problem 2                      20
Problem 3                      20
Total                          60


Problem 1 (Zero-forcing) 20 Points

Consider the usual discrete-time ISI channel model considered in class, with

D(z) = 1 / ((1 − αz)(1 − αz⁻¹)),    (1)

where α is a real-valued constant satisfying |α| < 1, meaning that the channel is given by (in the notation of the class) U[n] = ∑_{k=−∞}^{∞} d[k] I[n − k] + V[n], where V[n] is additive Gaussian noise with power spectral density S_V(z) = (N0/2) D(z).

(a) (8 Points) If we apply the zero-forcing equalizer 1/D(z) to U[n], then we have seen in class that the resulting overall channel becomes I_ZF[n] = I[n] + Ṽ[n]. Find the variance of the noise Ṽ[n], that is, find E[Ṽ[n]²] (as a function of N0 and α).

Solution:

This was a replay of Homework 4, Problem 1, and Homework 5, Problem 1. The easiest approach is to consider the power spectral density of the equivalent noise Ṽ[n], which is simply the original noise V[n] passed through the filter 1/D(z). From Equation (2.29) of the lecture notes, we know that

S_Ṽ(z) = (1/D(z)) (1/D*(1/z*)) S_V(z) = (1/D(z)) (1/D*(1/z*)) (N0/2) D(z)    (2)
       = (N0/2) · 1/D*(1/z*) = (N0/2) ((1 − α(1/z*))(1 − αz*))*    (3)
       = (N0/2) (1 − αz⁻¹)(1 − αz)    (4)
       = (N0/2) ((1 + α²) − αz − αz⁻¹).    (5)

The noise variance we are looking for satisfies

E[Ṽ[n]²] = R_Ṽ[k = 0],    (6)

where R_Ṽ[k] is the autocorrelation function, which is simply the inverse Z-transform of the power spectral density S_Ṽ(z). But we can read this directly out of the formula:

E[Ṽ[n]²] = R_Ṽ[k = 0] = (N0/2)(1 + α²).    (7)
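As a numerical sanity check (not part of the exam solutions): a process with the PSD of Equation (5) can be realized by passing white Gaussian noise W[n] of variance N0/2 through the FIR filter (1 − αz⁻¹), i.e., Ṽ[n] = W[n] − αW[n−1]. A short stdlib-only Python simulation, with arbitrary illustration values for N0 and α, confirms the variance formula of Equation (7):

```python
import math
import random

# Sanity check: realize the equalized noise as V~[n] = W[n] - alpha*W[n-1],
# where W[n] is white Gaussian with variance N0/2. Its PSD is then
# (N0/2)(1 - alpha*z)(1 - alpha*z^-1), matching Equation (5), so its variance
# should be (N0/2)(1 + alpha^2), as in Equation (7).
random.seed(1)
N0, alpha = 2.0, 0.6                 # arbitrary example values
sigma = math.sqrt(N0 / 2)            # standard deviation of W[n]
n = 400_000
w = [random.gauss(0.0, sigma) for _ in range(n)]
v = [w[i] - alpha * w[i - 1] for i in range(1, n)]
empirical = sum(x * x for x in v) / len(v)
theory = (N0 / 2) * (1 + alpha ** 2)
# The empirical variance should agree with the theory to within a few percent.
assert abs(empirical - theory) < 0.05 * theory
```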

(b) (3 Points) Assuming that we use BPSK (that is, the information symbols I[n] are independent of each other, and are ±√E with uniform priors) and we separately slice each received symbol I_ZF[n] at zero (exactly as in class), give the formula for the resulting bit error probability in terms of the problem parameters, using the Q-function.

Solution:

This was still a replay of Homework 4, Problem 1. Just like there, you simply use Equation (2.62) from the lecture notes (from “The example you should know by heart...”):

P_e^(ZF) = Q(d / (2√σ²)),    (8)

where d = 2√E is the distance between the two message points, and σ² = (N0/2)(1 + α²) is the variance of the Gaussian noise that affects our decision, hence,

P_e^(ZF) = Q(√(2E / (N0(1 + α²)))).    (9)


(c) (6 Points) Next, we would like to understand how suboptimal the zero-forcing solution is. To this end, as in class, let us apply the whitening filter (1 − αz) to the channel output U[n]. Working out the details, we find that the resulting equivalent channel is given by

S[n] = ∑_{k=0}^{∞} α^k I[n − k] + W[n],    (10)

where W[n] is the usual (real-valued) AWGN of variance N0/2.

To find the minimum possible bit error probability is difficult for this model. Instead, we will now derive a lower bound on the bit error probability, that is, an expression that is lower than what we could ever hope to achieve. In order to obtain such a lower bound, we idealize the situation: let us suppose that we set I[n] = 0 for all n ≠ 0, and in fact only send a single bit by setting I[0] = √E if the bit is one, and I[0] = −√E if the bit is zero. Using the full received sequence S[n], for −∞ < n < ∞, find a formula for the error probability of the ML detector for recovering the single transmitted bit.

Solution:

This one was conceptually somewhat new. Clearly, we start by plugging in what we know about I[n] to obtain

S[n] = { α^n I[0] + W[n],  for n ≥ 0,
         W[n],             for n < 0.    (11)

Since the noise W[n] is AWGN, the samples S[n] for n < 0 are pure noise and thus irrelevant information. Moreover, the noiseless transmitted sequence α^n I[0] (for n ≥ 0) can only assume two different values:

If I[0] = √E :    √E (1, α, α², α³, α⁴, . . .)    (12)
If I[0] = −√E :  −√E (1, α, α², α³, α⁴, . . .).    (13)

Hence, all we are doing is to distinguish between two possible sequences in AWGN of power N0/2, which is the basic problem discussed in Chapter 3. Since we know from class that under AWGN, the ML detector is minimum distance, the error probability is simply given by

P_e = Q(d / (2√(N0/2))),    (14)

where d is the distance between the two possible message points:

d² = E(2² + (2α)² + (2α²)² + (2α³)² + (2α⁴)² + . . .) = 4E ∑_{k=0}^{∞} α^{2k} = 4E · 1/(1 − α²).    (15)

Hence, we find as our desired lower bound

P_e^(lower bound) = Q(√(2E / (N0(1 − α²)))).    (16)
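The closed form in Equation (15) is just the geometric series recalled on the first page. A quick illustrative check (the values of E and α are arbitrary) that the truncated sum of squared per-sample distances matches 4E/(1 − α²):

```python
import math

# Check Equation (15): d^2 = sum_k (2*sqrt(E)*alpha^k)^2 = 4E / (1 - alpha^2).
E, alpha = 1.0, 0.7                  # arbitrary example values with |alpha| < 1
d2_truncated = sum((2 * math.sqrt(E) * alpha ** k) ** 2 for k in range(200))
d2_closed = 4 * E / (1 - alpha ** 2)
assert math.isclose(d2_truncated, d2_closed, rel_tol=1e-9)
```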

(d) (3 Points) Compare the error probability formulas from Parts (b) and (c). Specifically, in order to attain a certain desired target error probability (call it P0), how much more transmit energy does the zero-forcing solution from Part (b) need than the lower bound from Part (c)? Discuss this as a function of the parameter α.

Solution:

Let us denote the energy used by the ZF as E_ZF. We have to equate P_e^(ZF) to the lower bound:

Q(√(2E_ZF / (N0(1 + α²)))) = Q(√(2E / (N0(1 − α²)))).    (17)


The two are equal if and only if the arguments inside the Q-function are equal, meaning that

2E_ZF / (N0(1 + α²)) = 2E / (N0(1 − α²)).    (18)

That is, we need

E_ZF = E (1 + α²)/(1 − α²).    (19)

What we can see is that when α = 0, zero-forcing is optimal (not surprisingly), but as α tends to one, the power penalty of zero-forcing tends to infinity.

Perhaps a better question would be: If I allow you to use twice the power of the lower bound (the usual 3 dB), how large an α can you tolerate? That is, we must have

(1 + α²)/(1 − α²) ≤ 2,    (20)

which, as you can verify, says that we can tolerate up to |α| ≤ 1/√3 ≈ 0.577, showing that zero-forcing is not that bad after all.
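The power penalty of Equation (19) is easy to tabulate. A small sketch (stdlib Python; the sample values are arbitrary) verifying the 3 dB crossover at α = 1/√3:

```python
import math

# Equations (19)-(20): zero-forcing needs E_ZF = E*(1+alpha^2)/(1-alpha^2),
# so the penalty factor equals 2 (i.e., 3 dB) exactly at alpha = 1/sqrt(3).
def zf_penalty(alpha: float) -> float:
    """Extra energy factor E_ZF / E required by zero-forcing."""
    return (1 + alpha ** 2) / (1 - alpha ** 2)

assert math.isclose(zf_penalty(0.0), 1.0)               # ZF is optimal at alpha = 0
assert math.isclose(zf_penalty(1 / math.sqrt(3)), 2.0)  # exactly the 3 dB point
assert zf_penalty(0.99) > 50                            # penalty blows up as alpha -> 1
```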


Problem 2 (Multi-user FFT-OFDM) 20 Points

Consider a single transmitter and two receivers, which we will refer to as users A and B, respectively. The transmitted signal is x[n], and the received signals are

yA[n] = x[n] + (1/2) x[n − 1] + (1/2) x[n − 3] + wA[n]    (21)
yB[n] = √(3/2) x[n] + √(3/2) x[n − 2] + wB[n],    (22)

where wA[n] and wB[n] are independent of each other and the usual circularly symmetric complex-valued Gaussian noises of variance N0, exactly as discussed in class. The transmitter uses FFT-OFDM.

(a) (4 Points) Determine the minimum length of cyclic prefix needed so that both users have an ISI-free equivalent channel. No derivation is necessary for full credit.

Solution:

This was of course exactly HW6, Problem 3(a), except that you had to do it twice: For user A, the minimum cyclic prefix is 3, and for user B it is 2. Hence, if you use a cyclic prefix of length 3, both users will have no ISI.

(b) (4 Points) For N = 4, determine the equivalent four channel gains from the transmitter to user B (that is, the coefficients that were denoted as H0, H1, H2, and H3 in the class lecture notes).

Solution:

This was exactly HW6, Problem 3(c). You could just use the formula (which was also reprinted on the first page of the exam):

H_m = ∑_{ℓ=0}^{N−1} h_ℓ ω^{ℓm},    (23)

where

ω = e^{−j2π/N} = −j for the case N = 4.    (24)

For the channel to user B, we have h0 = √(3/2), h1 = 0, h2 = √(3/2), and h3 = 0. That is, we find

H0 = h0 + h1 + h2 + h3 = √6    (25)
H1 = h0 + (−j)h1 + (−j)²h2 + (−j)³h3 = h0 − jh1 − h2 + jh3 = 0    (26)
H2 = h0 + (−j)²h1 + (−j)⁴h2 + (−j)⁶h3 = h0 − h1 + h2 − h3 = √6    (27)
H3 = h0 + (−j)³h1 + (−j)⁶h2 + (−j)⁹h3 = h0 + jh1 − h2 − jh3 = 0.    (28)
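These four gains can be checked mechanically with the DFT formula (a short stdlib Python sketch; the taps are the user-B values from the problem statement):

```python
import cmath
import math

# Equations (23)-(28): H_m = sum_l h_l * omega^(l*m) with omega = exp(-j*2*pi/N).
# For user B, h = (sqrt(3/2), 0, sqrt(3/2), 0), giving H = (sqrt(6), 0, sqrt(6), 0).
N = 4
omega = cmath.exp(-2j * cmath.pi / N)   # equals -j for N = 4 (up to rounding)
h = [math.sqrt(3 / 2), 0.0, math.sqrt(3 / 2), 0.0]
H = [sum(h[l] * omega ** (l * m) for l in range(N)) for m in range(N)]
assert abs(H[0] - math.sqrt(6)) < 1e-9
assert abs(H[1]) < 1e-9
assert abs(H[2] - math.sqrt(6)) < 1e-9
assert abs(H[3]) < 1e-9
```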


In order to avoid error propagation, please assume for Parts (c), (d), (e) that the answer to Part (b) is

          Channel 0   Channel 1   Channel 2   Channel 3
User A        3           1           0           1
User B       √11         j/2         √11        −j/2

(Note that this is not the correct answer to Part (b).)

(c) (4 Points) Suppose that the transmitter is only sending a single bit: If the bit is one, then the transmitter sends √E through all 4 channels (that is, in the notation of the class, we set X0 = X1 = X2 = X3 = √E), and if the bit is zero, then the transmitter sends −√E through all 4 channels (that is, in the notation of the class, we set X0 = X1 = X2 = X3 = −√E).

• For user A, determine the ML detector of the single transmitted bit, given all four channel outputs.

• Give a formula for the resulting error probability.

Solution:

Let us start by noticing that there are only two possible transmitted message points of length N = 4:

x1 = √E (3, 1, 0, 1)    (29)
x2 = −√E (3, 1, 0, 1).    (30)

The only slight difficulty was that instead of transmitting over the real-valued Gaussian vector channel of length N = 4 (like in Chapter 3), we are now using the complex-valued Gaussian vector channel.

The key observation was to see that the imaginary parts of the 4 received signals contain only noise (since there is no signal component in the imaginary parts). Moreover, since real and imaginary parts of the circularly symmetric complex-valued Gaussian noise are independent of each other, the imaginary parts of the received symbols are irrelevant information, and their real parts are a sufficient statistic.

Hence, since we know from class that under AWGN, the ML detector is minimum distance, the ML detector is given by first keeping the real parts of the received symbols only, and then by finding the closer of the two possible message points x1 or x2.

Finding the corresponding error probability is again the same old formula:

P_e = Q(d / (2√(N0/2))),    (31)

and the distance between our message points is

d² = E(6² + 2² + 2²) = 44E,    (32)

thus,

P_e = Q(√(22E/N0)).    (33)
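The simplification from Equation (31) to Equation (33) can be double-checked numerically (illustrative stdlib Python; the values of E and N0 are arbitrary):

```python
import math

def Q(x: float) -> float:
    """Gaussian tail function Q(x) = P(N(0,1) > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

# With the (assumed) user-A gains (3, 1, 0, 1): d^2 = E*(6^2 + 2^2 + 0^2 + 2^2) = 44E,
# and Q(d / (2*sqrt(N0/2))) must equal Q(sqrt(22E/N0)), as in Equations (32)-(33).
E, N0 = 1.0, 1.0                     # arbitrary example values
d = math.sqrt(E * (6 ** 2 + 2 ** 2 + 0 ** 2 + 2 ** 2))
assert math.isclose(d ** 2, 44 * E)
Pe_general = Q(d / (2 * math.sqrt(N0 / 2)))
Pe_simplified = Q(math.sqrt(22 * E / N0))
assert math.isclose(Pe_general, Pe_simplified)
```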


(d) (4 Points) Suppose that the transmitter has one bit for user A and one bit for user B. In each of the 4 channels, we will send only either √E or −√E, exactly like in Part (c). But this time, we choose a subset of the channels to send one bit to user A, and a different subset of the channels to send the other bit to user B. Note that user A is only interested in her bit, and does not attempt to recover the bit for user B, and vice versa. The goal is to ensure that both users experience the same (or almost the same) error probability in recovering their respective bit of interest, and that this error probability is as small as possible. How should the channels be divided between the users? And what are the resulting error probabilities for users A and B, respectively?

                     Channel 0   Channel 1   Channel 2   Channel 3
Which user?              A           A           B           A
(A or B, or unused)

Solution:

This was the place where you had to be a little more creative. We did not discuss this in much detail (and in fact, to the best of my knowledge, there are no general tools to do this).

But the main insight I expected everyone to get is that we should give those channels that are good for user A to user A, and those that are good for user B to user B. Clearly, we give Channel 2 to user B, since it is useless for user A. Moreover, Channels 1 and 3 should probably go to user A. The only tricky one is Channel 0, since it is good for both users.

Let us start by giving Channel 0 to user A. Clearly, since Channel 2 is anyway useless for user A, the error probability for user A will be exactly as in Part (c), namely

P_e^{user A} = Q(√(22E/N0)).    (34)

In this case, user B only has Channel 2, which is really easy to analyze: we have a distance of d = 2√(11E), and hence,

P_e^{user B} = Q(√(22E/N0)).    (35)

Both error probabilities are equal in this case.

Now, clearly, if we give anything more to user B, then user B will get better, and user A will get worse. So, this is the fairest solution that simultaneously minimizes error probabilities.


(e) (4 Points) Exactly like in Part (d), suppose that the transmitter has one bit for user A and one bit for user B. But by contrast, we now suppose that we have a total energy of 4E that we are allowed to split any way we want between the two users. More precisely, in each channel, we still do BPSK, but we select the energy for the BPSK in a clever way. Then, each channel is assigned either to user A or to user B (or it can be left unused), and for each channel, we have to determine what fraction of the total energy to use. As before, the goal is to simultaneously have the smallest possible error probability for both users in recovering their respective bits. Also give the formulas for the attained error probabilities.

                     Channel 0   Channel 1   Channel 2   Channel 3
Which user?              A         unused        B         unused
(A or B, or unused)
What energy?           2.2E                    1.8E

Solution:

This one was the most difficult question of this problem, and you had to think a little bit more freely. There are many ways to obtain the answer. One is to go back to Part (c) and redo this allowing to spread the total energy 4E arbitrarily amongst the four channels. Let us say that we place α₀²E in channel 0, α₁²E in channel 1, and so on, with α₀² + α₁² + α₂² + α₃² = 4. Then, following the analysis in Part (c), we see that we are distinguishing between the following two possible signal points:

x1 = √E (3α₀, α₁, 0, α₃)    (36)
x2 = −√E (3α₀, α₁, 0, α₃),    (37)

and thus, the distance is

d² = E((6α₀)² + (2α₁)² + (2α₃)²) = E(36α₀² + 4α₁² + 4α₃²).    (38)

As always, the larger d², the smaller an error probability we attain. But you can then directly see that you should select α₀² = 4 and α₁ = α₂ = α₃ = 0, that is, you should only use the best channel! A second observation from this derivation is that if you have two channels of equal quality, you can assign powers between them any way you want: if there are two best channels, you can use one, the other, or both, without changing the performance at all.

But from these insights, you can immediately conclude that the best is for user A to use Channel 0, and for user B to use Channel 2. Channels 1 and 3 are left unused.

Finally, for the energy: Let us denote the energy for user A by E_A and the energy for user B by E_B. Then, we have

P_e^{user A} = Q(√(18E_A/N0))    (39)
P_e^{user B} = Q(√(22E_B/N0)).    (40)

To make both error probabilities the same, we need to have 18E_A = 22E_B. Moreover, we need to satisfy E_A + E_B = 4E. So, two variables, two equations, and we easily find the allocation.
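Solving the two equations explicitly (a sketch in stdlib Python; E and N0 are arbitrary illustration values) recovers the allocation E_A = 2.2E, E_B = 1.8E from the answer table, and confirms that both users then see identical error probabilities:

```python
import math

def Q(x: float) -> float:
    """Gaussian tail function Q(x) = P(N(0,1) > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

# Equations (39)-(40): require 18*EA = 22*EB together with EA + EB = 4E.
E, N0 = 1.0, 1.0                     # arbitrary example values
EA = 4 * E * 22 / (18 + 22)          # = 2.2E
EB = 4 * E - EA                      # = 1.8E
assert math.isclose(EA, 2.2 * E) and math.isclose(EB, 1.8 * E)
assert math.isclose(Q(math.sqrt(18 * EA / N0)), Q(math.sqrt(22 * EB / N0)))
```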


Problem 3 (Hard-decision receivers) 20 Points

(a) (5 Points) Consider the scalar AWGN channel Y = x + Z, where x is uniformly selected from ±√E and Z is the usual AWGN with power N0/2. Suppose that we quantize Y as follows:

D = 1, if Y ≥ θ√E,
    ∗, if −θ√E ≤ Y < θ√E,    (41)
    0, if Y < −θ√E,

where θ is an arbitrary (non-negative) constant. For future convenience, let us define:

α = P(D = 0 | x = √E)    (42)
β = P(D = ∗ | x = √E).    (43)

Calculate the values of α and β as a function of the problem parameters, using the Q-function. Finally, argue that we also have P(D = ∗ | x = −√E) = β and P(D = 1 | x = −√E) = α.

Solution:

This entire problem was a rehash of Homework 4, Problem 1. The first part should have been easy. The solution is given by the following picture:

For α: This is the probability that the noise is very negative, so that although the transmitted signal is √E, the noise Z pushes the received signal Y = x + Z below the threshold −θ√E. Formally,

α = P(D = 0 | x = √E) = P(Z < −(1 + θ)√E).    (44)

This is of course the basic ever-returning probability formula of this class. I am sure you have your own tricks. For example, from Homework 1, Problem 2, you recall that for a Gaussian X with mean m and variance σ²,

P(X < t) = 1 − Q((t − m)/σ),

hence,

α = 1 − Q(−(1 + θ)√E / √(N0/2)).    (45)

Finally, as we have seen, Q(−x) = 1 − Q(x), hence we can rewrite

α = Q((1 + θ)√E / √(N0/2)).    (46)


For β, it is a little more complicated. By analogy to α, we can easily write:

β = P(D = ∗ | x = √E) = P(−(1 + θ)√E ≤ Z < −(1 − θ)√E).    (47)

But from the above sketch of the Gaussian bell curve, we can see that

β = P(Z < −(1 − θ)√E) − P(Z < −(1 + θ)√E).    (48)

Using the same formula from Homework 1, we can rewrite this as

β = 1 − Q(−(1 − θ)√E / √(N0/2)) − (1 − Q(−(1 + θ)√E / √(N0/2)))    (49)
  = Q((1 − θ)√E / √(N0/2)) − Q((1 + θ)√E / √(N0/2)).    (50)


(b) (10 Points) Now consider the vector AWGN channel of length N, i.e., Y = x + Z, where Z is the usual AWGN of variance N0/2. Suppose that x = √E (1, 1, . . . , 1) or x = −√E (1, 1, . . . , 1) (with uniform priors). Next, we apply the hard-decision detector from Part (a) separately to each entry in the vector Y to obtain the vector D, whose entries are thus either 0, 1, or ∗. Derive the ML detector based on D. Simplify it as much as possible (the simpler your description, the more points you will get).

Solution:

Just like in HW4, Problem 1, you can express the likelihoods conveniently by letting k(D) denote the number of ones in D, ℓ(D) denote the number of zeroes in D, and s(D) denote the number of stars in D. Then, you can write for any given sequence d

p(d | √E transmitted) = (1 − α − β)^{k(d)} α^{ℓ(d)} β^{s(d)}    (51)
p(d | −√E transmitted) = (1 − α − β)^{ℓ(d)} α^{k(d)} β^{s(d)}.    (52)

The ML can then be found easily from the likelihood ratio:

Λ12 = p(d | √E transmitted) / p(d | −√E transmitted)    (53)
    = [(1 − α − β)^{k(d)} α^{ℓ(d)} β^{s(d)}] / [(1 − α − β)^{ℓ(d)} α^{k(d)} β^{s(d)}]    (54)
    = [(1 − α − β)^{k(d)} α^{ℓ(d)}] / [(1 − α − β)^{ℓ(d)} α^{k(d)}].    (55)

In other words, a sufficient statistic for d is the number of ones and the number of zeroes in the vector; the number of ∗ symbols is irrelevant information and can be dropped.

At this point, remembering HW4, Problem 1, I am sure you guess the solution: Take d, remove all the ∗ symbols, and then take the majority: If there are more ones than zeroes, the ML says that √E was transmitted; if there are more zeroes than ones, the ML says that −√E was transmitted. (If the number of ones is equal to the number of zeroes, then we have a tie and it does not matter what we decide.)

Now, how to prove this formally? We know that the ML decides in favor of √E if the likelihood ratio is larger than 1:

[(1 − α − β)^{k(d)} α^{ℓ(d)}] / [(1 − α − β)^{ℓ(d)} α^{k(d)}] ≥ 1.    (56)

Equivalently, the ML decides in favor of √E if

((1 − α − β)/α)^{k(d)} ≥ ((1 − α − β)/α)^{ℓ(d)}.    (57)

The final step is to notice that no matter how large we select θ, we must have that 1 − α − β ≥ α, hence (1 − α − β)/α ≥ 1. This can be seen both from the figure and from the following simple analysis:

1 − β − α = 1 − Q((1 − θ)√(2E/N0)) + Q((1 + θ)√(2E/N0)) − Q((1 + θ)√(2E/N0))    (58)
          = 1 − Q((1 − θ)√(2E/N0)) = Q((θ − 1)√(2E/N0)) ≥ Q((θ + 1)√(2E/N0)) = α,    (59)

where the second equality uses Q(−x) = 1 − Q(x), and the inequality follows from the fact that (θ − 1)√(2E/N0) ≤ (θ + 1)√(2E/N0) and Q is decreasing.

But this means that the ML decides in favor of √E if

k(d) ≥ ℓ(d),    (60)

that is, if there are at least as many ones in the sequence d as there are zeroes.
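The resulting rule is a one-liner (a sketch; the representation of D as a list of '0'/'1'/'*' characters is an illustration choice, and ties are broken arbitrarily in favor of +√E):

```python
# ML rule from Equation (60): ignore erasures, majority-vote the rest.
def ml_decide(d):
    """Return +1 (decide +sqrt(E)) or -1 (decide -sqrt(E)) from entries in {'0','1','*'}."""
    ones = sum(1 for s in d if s == '1')
    zeros = sum(1 for s in d if s == '0')
    return 1 if ones >= zeros else -1

assert ml_decide(['1', '*', '1', '0']) == 1   # more ones than zeros
assert ml_decide(['0', '0', '*', '1']) == -1  # more zeros than ones
assert ml_decide(['*', '*', '*', '*']) == 1   # all erasures: a tie, decided as +1
```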


(c) (5 Points) In Homework 4 (the graded HW), we considered a hard-decision receiver for CDMA. As a quick reminder, we considered a situation with two transmitters, user A and user B, where the receiver observes Y = xA + xB + Z, where all signals are vectors of length N, xA is uniformly selected from ±√(E_b/N) (1, 1, 1, . . . , 1), and xB is uniformly selected from ±√(E_b/N) (+1, −1, +1, −1, . . .). We studied a simple hard-decision receiver that thresholded each entry of the vector Y separately at zero, resulting in the binary vector D (this is of course the special case of Part (a) where we set θ = 0). When we analyzed the ML detector based on D, we observed a very undesirable behavior: beyond some value of the transmitted energy E_b, the error probability started saturating, no matter how much energy we invested.

By contrast, let us now suppose that we use the quantizer from Part (a). That is, each entry of the vector Y is separately quantized and thus turned into 1, 0, or ∗. Does there exist a θ that avoids this saturation behavior and attains an error probability tending to zero as the transmitted energy increases?

Solution:

First, let us state clearly that in our view, this was the most difficult question of the exam. It really required that you understood Homework 4, Problem 1, very well.

The answer to the question is yes, this refined quantization indeed removes the saturation behavior. There are several arguments possible here, but perhaps the easiest is along the lines of the one given in the solutions to HW4: Consider the special case N = 4. If we quantize like in HW4, we obtained the table

Transmitted bits   Noiseless equivalent signal (× √(E_b/N))   Quantized sequence D when the noise is very weak
      11                      2   0   2   0                    1 E 1 E
      10                      0   2   0   2                    E 1 E 1
      01                      0  −2   0  −2                    E 0 E 0
      00                     −2   0  −2   0                    0 E 0 E

where E denotes a symbol that is equally likely to be zero or one (not to be confused with the energy E), and it was precisely this uncertainty that produced the saturation phenomenon.

Now, if we quantize in the more general fashion considered here, we hope to obtain the following table:

Transmitted bits   Noiseless equivalent signal (× √(E_b/N))   Quantized sequence D when the noise is very weak
      11                      2   0   2   0                    1 ∗ 1 ∗
      10                      0   2   0   2                    ∗ 1 ∗ 1
      01                      0  −2   0  −2                    ∗ 0 ∗ 0
      00                     −2   0  −2   0                    0 ∗ 0 ∗

It should be clear that from this table, we can decode correctly: each of the quantized sequences in the case when the noise is very weak is unique. So, if we can select θ so as to obtain the above table, then we have proved that the answer is yes: the saturation behavior disappears, and the error probability tends to zero as we let E_b tend to infinity.

In order to obtain this table, we need to satisfy two conditions:

1. The first condition so as to obtain the above table is that whenever the noiseless equivalent transmitted signal is 2√(E_b/N), the probability of quantizing to a 1 should tend to one; and when the noiseless equivalent transmitted signal is −2√(E_b/N), the probability of quantizing to a 0 should tend to one. We have already calculated this probability in Part (a), where we called it (1 − α − β). The only change is that we have to substitute √E = 2√(E_b/N). So, using the result from Part (a), we find

1 − α − β = 1 − Q((1 − θ)√(8E_b/(N N0))),    (61)

and so, any value of θ < 1 (where the strict inequality is important!) will ensure that this probability tends to one in the limit as E_b tends to infinity.

By contrast, when θ ≥ 1, the argument in the Q-function is non-positive, and so the Q-function expression does not vanish (for θ > 1, it even tends to one). In that case, we can never satisfy our condition (in fact, for θ > 1, in the limit, we would only get the ∗ symbol).

2. The second condition so as to obtain the above table is that whenever the noiseless equivalent transmitted signal is 0, the probability of quantizing to a ∗ should tend to one.

To find this probability, it is again best to sketch the Gaussian bell curve:

Using exactly the notation from Part (a), we obtain

P(D = ∗ | x = 0) = P(−θ√E ≤ Z < θ√E) = 1 − 2Q(√(2θ²E/N0)),    (62)

where again, we substitute √E = 2√(E_b/N) to obtain

P(D = ∗ | x = 0) = 1 − 2Q(√(8θ²E_b/(N N0))).    (63)

Clearly, this probability tends to one as E_b tends to infinity for any value of θ > 0.

In summary, if we select 0 < θ < 1, then we obtain our desired quantization table in the limit as the signal-to-noise ratio tends to infinity, and thus an error probability that tends to zero.
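The two decoding tables can be reproduced mechanically (a sketch in stdlib Python; √(E_b/N) is normalized to 1, and θ = 0.5 is an arbitrary choice inside (0, 1)):

```python
# With 0 < theta < 1, the four noiseless equivalent CDMA signals (N = 4)
# quantize to four distinct ternary patterns, so both bits are recoverable
# in the high-SNR limit. Amplitudes are in multiples of sqrt(Eb/N) = 1.
def quantize(y: float, theta: float, amp: float = 1.0) -> str:
    thr = theta * 2 * amp   # threshold theta*sqrt(E), with sqrt(E) = 2*sqrt(Eb/N)
    if y >= thr:
        return '1'
    if y < -thr:
        return '0'
    return '*'

signals = {                  # noiseless equivalent transmitted signals
    '11': [2, 0, 2, 0],
    '10': [0, 2, 0, 2],
    '01': [0, -2, 0, -2],
    '00': [-2, 0, -2, 0],
}
theta = 0.5
patterns = {bits: ''.join(quantize(y, theta) for y in sig) for bits, sig in signals.items()}
assert patterns == {'11': '1*1*', '10': '*1*1', '01': '*0*0', '00': '0*0*'}
assert len(set(patterns.values())) == 4   # all four patterns are distinct
```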
