IEEE TRANSACTIONS ON COMMUNICATIONS, VOL. 58, NO. 2, FEBRUARY 2010 489
Design of Irregular LDPC Codes with Optimized Performance-Complexity Tradeoff
Benjamin Smith, Masoud Ardakani, Senior Member, IEEE, Wei Yu, Senior Member, IEEE, and Frank R. Kschischang, Fellow, IEEE
Abstract—The optimal performance-complexity tradeoff for error-correcting codes at rates strictly below the Shannon limit is a central question in coding theory. This paper proposes a numerical approach for the minimization of decoding complexity for long-block-length irregular low-density parity-check (LDPC) codes. The proposed design methodology is applicable to any binary-input memoryless symmetric channel and any iterative message-passing decoding algorithm with a parallel-update schedule. A key feature of the proposed optimization method is a new complexity measure that incorporates both the number of operations required to carry out a single decoding iteration and the number of iterations required for convergence. This paper shows that the proposed complexity measure can be accurately estimated from a density-evolution and extrinsic-information transfer chart analysis of the code. A sufficient condition is presented for convexity of the complexity measure in the variable edge-degree distribution; when it is not satisfied, numerical experiments nevertheless suggest that the local minimum is unique. The results presented herein show that when the decoding complexity is constrained, the complexity-optimized codes significantly outperform threshold-optimized codes at long block lengths, within the ensemble of irregular codes.
Index Terms—Convex optimization, extrinsic-information transfer (EXIT) charts, decoding complexity, low-density parity-check (LDPC) codes.
I. INTRODUCTION
THE design of high-performance irregular low-density parity-check (LDPC) codes has been studied extensively in the literature (e.g., [1]–[3]). At long block lengths, codes derived from capacity-approaching degree distributions (so-called threshold-optimized codes) provide excellent performance, albeit with high decoding complexity. In particular, to achieve the full potential of a threshold-optimized code, an impractically large number of messages may need to be passed in the iterative decoder. In this paper, we present a design method to find irregular LDPC codes with an optimized
Paper approved by F. Fekri, the Editor for LDPC Codes and Applications of the IEEE Communications Society. Manuscript received April 14, 2008; revised February 12, 2009.
B. Smith, W. Yu, and F. R. Kschischang are with the Electrical and Computer Engineering Department, University of Toronto, 10 King's College Road, Toronto, Ontario M5S 3G4, Canada (e-mail: {ben, weiyu, frank}@comm.utoronto.ca).
M. Ardakani is with the Electrical and Computer Engineering Department, University of Alberta (e-mail: [email protected]).
This work was supported by the Natural Sciences and Engineering Research Council (NSERC) of Canada.
The material in this paper was presented in part at the IEEE International Symposium on Information Theory, Adelaide, Sept. 2005 and at the 43rd Annual Allerton Conference on Communication, Control, and Computing, Allerton, IL, Sept. 2005.
Digital Object Identifier 10.1109/TCOMM.2010.02.080193
performance-complexity tradeoff, thus permitting excellent performance with reduced decoding complexity.
In designing threshold-optimized codes, decoding complexity is not explicitly considered in the code design process. However, if one wishes to design a practically decodable code, it would be more natural to try to find the highest-rate code for a certain affordable level of complexity on a given channel, or the code with the minimum decoding complexity for a required rate and a given channel condition, or the code with the highest decoding threshold at a given rate and complexity. Clearly, these design objectives better reflect the requirements of practical communications systems.
A main challenge in incorporating complexity in code design is that characterizing decoding complexity as a function of code parameters may appear, at first glance, to be a difficult task. This is especially true for iterative decoding systems in which decoding complexity depends not only on the complexity of each iteration, but also on the number of iterations required for convergence. The main idea of this paper is that by using a uni-parametric representation of the decoding trajectory [4], [5], the required number of iterations in the iterative message-passing decoding of a long-block-length LDPC code under the parallel-update schedule can be accurately estimated. This allows one to define and to quantify a measure of decoding complexity, which can then be used to optimize the complexity-rate tradeoff for LDPC codes.
This paper uses a density evolution analysis of the LDPC decoding process and adopts a uniparametric representation of the decoding trajectory in terms of the evolution of message-error rate, in a format similar to an extrinsic-information transfer (EXIT) chart [4], [6], [7], except that message-error rate is tracked instead of mutual information, and that no Gaussian assumption is made. The use of probability-of-error-based EXIT charts (which originated from the work of Gallager [8]; see also [5]) offers two key advantages. First, the uniparametric representation of the decoding trajectory allows one to accurately estimate the number of iterations required to reduce the bit-error rate (BER) from that given by the channel to a desired target. Second, in terms of message error rate, one can show that the decoding trajectory of an irregular LDPC code can be expressed as a linear combination of trajectories of elementary codes parameterized by their variable degrees. These two facts allow one to define a measure that relates the decoding complexity of an irregular LDPC code to its degree distribution. The code design problem then reduces to the shaping of the decoding trajectory for an optimal complexity-
0090-6778/10$25.00 © 2010 IEEE
Authorized licensed use limited to: The University of Toronto. Downloaded on April 15,2010 at 15:09:09 UTC from IEEE Xplore. Restrictions apply.
rate tradeoff. An optimization program that determines the minimum-complexity degree distributions for a fixed code rate and a target channel can then be formulated.
This paper shows that the optimization problem formulated in this way is convex under a mild condition, which facilitates its numerical solution. Numerical code design examples suggest that complexity-optimized codes can significantly outperform irregular threshold-optimized codes at long block lengths.
The methodology proposed in this paper offers one of the few instances of an iterative system in which analytic tools are available not only for predicting the convergence trajectory, but also for the tuning of system parameters for fast convergence. The proposed code design method is applicable to any binary-input memoryless symmetric (BMS) channel and any iterative decoding algorithm with symmetric update rules. These are the conditions required for the validity of density evolution analysis [2].
A. Related Work
The performance-complexity tradeoff for error-correcting codes has always been a central issue in coding theory [9]; in the context of capacity-approaching codes, there exist several information-theoretic results. For LDPC codes and irregular repeat-accumulate (IRA) codes, Khandekar and McEliece [10] conjectured that for symmetric channels with binary inputs, the graphical complexity (i.e., the decoding complexity per iteration) per information bit scales like $\log \frac{1}{\epsilon}$, and the number of iterations scales like $\frac{1}{\epsilon}$, where $\epsilon$ is the multiplicative gap to capacity. In the special case of the binary erasure channel (BEC), Sason and Wiechman [11] proved that the number of iterations scales at least like $\frac{1}{\epsilon}$ for LDPC, systematic IRA, non-systematic IRA, and systematic accumulate-repeat-accumulate (ARA) codes. Furthermore, the graphical complexity per information bit of LDPC and systematic IRA codes has been shown to scale like $\log \frac{1}{\epsilon}$ [12], [13], while non-systematic IRA and systematic ARA codes can achieve bounded graphical complexity per information bit [14], [15]; in practice, systematic ARA codes are preferable to non-systematic IRA codes due to their lower error floors and systematic encoding.
With respect to more practically-oriented studies of the performance-complexity tradeoff, Richardson et al. [2] proposed a numerical design procedure for irregular LDPC codes involving an ad-hoc measure of the number of iterations. However, their motivation was in providing an efficient technique for finding threshold-optimized codes. A decoding complexity measure similar to ours is presented in [16]; however, its usefulness is limited to capacity-approaching codes over the BEC. Ours is more general, but reduces to their measure in the case of capacity-approaching codes over the BEC. In a related paper [17], we studied the optimization of decoding complexity for LDPC codes under Gallager's decoding Algorithm B [8] over the binary symmetric channel. The results of [17] rely on the one-dimensional nature of decoding Algorithm B; this paper addresses the more general case.
B. Outline of the Paper
The rest of this paper is organized as follows. In Section II, we briefly review the background and terminology pertaining to LDPC codes. In Section III, we present the new complexity measure for the iterative decoding system, and formulate the complexity optimization methodology. Numerically optimized degree distributions and simulation results are presented in Section IV, and conclusions are provided in Section V.
II. PRELIMINARIES
A. LDPC Codes
In this paper we consider the ensemble of irregular LDPC codes [1], [2], characterized by a variable degree distribution $\lambda(x)$ and a check degree distribution $\rho(x)$,

$$\lambda(x) = \sum_{i \ge 2} \lambda_i x^{i-1} \quad \text{and} \quad \rho(x) = \sum_{j \ge 2} \rho_j x^{j-1},$$

with $\lambda(1) = \rho(1) = 1$. The rate of an LDPC code is related to its degree distribution by

$$R = 1 - \frac{\sum_j \rho_j / j}{\sum_i \lambda_i / i}. \quad (1)$$
Due to their representation via sparse bipartite graphs, LDPC codes are amenable to iterative decoding techniques. Iterative decoding proceeds by successively passing messages between variable nodes and check nodes, where the messages represent "beliefs," typically expressed as log-likelihood ratios (LLRs), about the value of the variable node connected to a given edge. In a single decoding iteration, messages $m_{v \to c}$ are first sent (in parallel) from variable nodes to check nodes. At each check node a check-update computation is performed, generating messages $m_{c \to v}$ to be sent (in parallel) from the check nodes to the variable nodes. Finally, at each variable node, a variable-update computation is performed to generate the messages for the next iteration. After sufficiently many iterations have been performed, each variable node can produce a decision about its corresponding variable. In the rest of this paper, we assume that the check- and variable-node updates are performed according to the sum-product algorithm [18] with a parallel message-passing schedule, although our technique is applicable to any symmetric update rules. Note that even though one may reduce the decoding complexity by employing simplified (sub-optimal) updates, our method provides further complexity reduction by minimizing the total number of messages passed.
For the sum-product algorithm, the update rule for an LLR message at a variable node $v$ is

$$m_{v \to c} = m_0 + \sum_{c' \in N(v) \setminus \{c\}} m_{c' \to v}, \quad (2)$$

and the update rule at a check node $c$ is

$$m_{c \to v} = 2\tanh^{-1}\left( \prod_{y \in N(c) \setminus \{v\}} \tanh(m_{y \to c}/2) \right), \quad (3)$$

where $m_0$ is the channel message in LLR form, and $N(v)$ represents the nodes connected directly to node $v$ by an edge.
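As a concrete illustration, the per-node updates (2) and (3) can be sketched in a few lines of Python. This is a minimal sketch, not the authors' implementation; the function names and the list-based message representation are our own, and the check update assumes no incoming tanh term is exactly zero.

```python
import math

def variable_update(m0, incoming):
    """Variable-node update (2): for each edge, sum the channel LLR m0
    and all *other* incoming check-to-variable LLRs. Uses the total-sum
    trick, so the cost is linear in the node degree."""
    total = m0 + sum(incoming)
    return [total - m for m in incoming]

def check_update(incoming):
    """Check-node update (3): for each edge, take 2*atanh of the product
    of tanh(m/2) over the *other* incoming messages."""
    tanhs = [math.tanh(m / 2.0) for m in incoming]
    prod = 1.0
    for t in tanhs:
        prod *= t
    # Divide out each edge's own term (assumes no tanh term is exactly 0).
    return [2.0 * math.atanh(prod / t) for t in tanhs]
```

Note that the total-sum trick in `variable_update` is exactly the $2d_v$-operation scheme analyzed in Section III-A.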
In a pair of landmark papers [2], [19], Richardson et al. presented density evolution, a numerical tool to track the density of messages passed during iterative decoding of LDPC codes, under the parallel message-passing schedule. Given a degree distribution pair $(\lambda(x), \rho(x))$ and a target channel, one can use density evolution to determine the "threshold" of the code, where the threshold corresponds to the largest value of the noise parameter such that the bit-error probability goes to zero, under iterative decoding, as the block length tends to infinity.

In one iteration of density evolution, the algorithm combines the probability density function (pdf) of channel messages, the pdf of messages sent from variable nodes to check nodes in the previous iteration, and the degree distributions of the code, and computes the pdf of the messages that will be sent from variable nodes to check nodes in the current iteration. In other words, one iteration of density evolution computes

$$P_{v \to c}^{(\ell)} = \Phi(P_{v \to c}^{(\ell-1)}, P_{ch}, \lambda(x), \rho(x)), \quad (4)$$

where $P_{v \to c}^{(\ell)}$ represents the pdf of variable-node to check-node messages at iteration $\ell$, and $P_{ch}$ represents the pdf of channel messages.
B. EXIT Charts
EXIT charts, originally introduced by ten Brink in the context of turbo codes [4], provide a graphical description of density evolution. This is accomplished by tracking a statistic of the variable-to-check message density $P_{v \to c}^{(\ell)}$. Often the mutual information between the message value and the corresponding variable value is tracked in an EXIT chart. In this paper, as in [5], [8], we track a different parameter of the message distribution; namely, the probability that the log-likelihood ratio (LLR) message has the incorrect sign. Roughly speaking, this parameter measures the fraction of messages in "error". Although we do not track mutual information, we nevertheless continue to refer to the parameterized trajectories as "EXIT charts." The use of EXIT charts is central to the formulation of a threshold-optimization program, and later, a complexity-minimization program. However, to minimize the loss of information associated with a uni-parametric representation of message densities, we use density evolution for analysis; the EXIT charts serve only to provide a graphical representation of the convergence trajectory.

For a given degree distribution pair and channel condition, one can visualize the convergence behavior of the decoder by performing $N$ iterations of density evolution, and plotting the message error rate (i.e., the fraction of variable-to-check messages $m_{v \to c}$ with the incorrect sign) at iteration $\ell$, $\ell \in \{1, 2, \ldots, N\}$, denoted as $p^{(\ell)}$, versus the message error rate at iteration $\ell - 1$, i.e., $p^{(\ell-1)}$. Specifically, let $P_{ch}$ be the conditional pdf of the LLR at the output of the channel given that $x = 1$ is transmitted (we assume $0 \mapsto +1$ and $1 \mapsto -1$ for BPSK modulated signals), and let $\mathrm{Pe}(\cdot)$ be a function that returns the area under the error-tail of a pdf. Clearly,

$$p^{(0)} = \mathrm{Pe}(P_{ch})$$

is the bit error probability associated with uncoded communications. For $\ell \ge 1$, we perform density evolution via equation
[Fig. 1: plot of the elementary EXIT charts $f_2(p)$, $f_3(p)$, $f_5(p)$, $f_{10}(p)$, $f_{20}(p)$ and the overall chart $f(p)$ versus $p$.]

Fig. 1. Elementary EXIT charts, $\lambda(x) = 0.1x + 0.3x^2 + 0.1x^4 + 0.2x^9 + 0.3x^{19}$, $\rho(x) = x^7$.
(4), where $P_{v \to c}^{(0)}$ is initialized to the Dirac delta function at zero, and we obtain

$$p^{(\ell)} = \mathrm{Pe}(P_{v \to c}^{(\ell)}),$$

the message error rate at iteration $\ell$. Plotting $p^{(\ell)}$ versus $p^{(\ell-1)}$ gives the EXIT chart of the code for the assumed channel condition. Equivalently, we can think of the EXIT chart as a mapping

$$p^{(\ell)} = f(p^{(\ell-1)}), \quad \ell \in \{1, 2, \ldots\}. \quad (5)$$
Due to the random construction of the graph of an irregular LDPC code, one can use the total probability theorem to decompose the overall EXIT chart into a sum of elementary EXIT charts [5], i.e.,

$$f(p) = \sum_i \lambda_i f_i(p, \lambda), \quad (6)$$

where $f_i(p, \lambda)$ is the elementary EXIT chart associated with degree-$i$ variable nodes, and the check degree distribution is fixed to an appropriate value. For the calculation of the $i$-th elementary chart, we first compute

$$p_i^{(\ell)} \triangleq \mathrm{Pe}(\Phi(P_{v \to c}^{(\ell-1)}, P_{ch}, x^{i-1}, \rho(x))), \quad \ell \in \{1, 2, \ldots\},$$

which is the error rate of the messages emanating from degree-$i$ variable nodes after the $\ell$-th iteration, where $P_{v \to c}^{(\ell-1)}$ is the pdf of messages after $\ell - 1$ iterations of density evolution on the code with degree distributions $\lambda(x)$ and $\rho(x)$. The $i$-th elementary EXIT function $f_i$ is then obtained by interpolating a curve through the origin and the points $(p^{(\ell-1)}, p_i^{(\ell)})$, $\ell \ge 1$. Fig. 1 presents the elementary EXIT charts and the overall EXIT chart for a particular irregular code, using sum-product decoding over the AWGN channel.
Note that the elementary EXIT charts are in general a function of $\lambda$. This is because the densities that are passed in iteration $\ell$ are influenced by $\lambda$. However, if $\lambda$ is perturbed slightly, the elementary EXIT charts are effectively unchanged, and thus for sufficiently small changes in $\lambda$, we disregard
the dependency of elementary EXIT charts on $\lambda$.¹ With this assumption, (6) can be simplified to

$$f(p) = \sum_i \lambda_i f_i(p); \quad (7)$$

therefore, the code-design problem becomes that of synthesizing a suitable EXIT chart as a convex combination of a number of pre-computed elementary EXIT charts. Finally, due to the weak dependence of elementary EXIT charts on $\lambda$, the accuracy of the design method can be further improved by updating the elementary EXIT charts whenever $\lambda$ undergoes a sufficiently large change.
C. Threshold Optimization
Traditionally, degree distributions have been designed to achieve a desired rate, while maximizing the decoding threshold. Equivalently, given a target channel condition, one can determine the maximum-rate code such that successful decoding is possible. Indeed, for a fixed check degree, the latter is obtained as the solution to the following linear optimization program:

maximize $\quad \sum_i \lambda_i / i$
subject to $\quad \sum_i \lambda_i f_i(p) < p, \quad p \in (0, \mathrm{Pe}(P_{ch})]$
$\quad \sum_i \lambda_i = 1$
$\quad \lambda_i \ge 0 \quad (8)$

Here the central condition is the first constraint, namely that $\sum_i \lambda_i f_i(p) < p$; this condition ensures an "open" EXIT chart in which the decoder can make progress (decrease $p$) during each iteration. One clearly observes the code-design problem as "curve-shaping."
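To make the curve-shaping concrete, the following sketch solves a discretized version of (8) with SciPy. Since the paper's elementary charts come from density evolution, we substitute the binary erasure channel, where the degree-$i$ elementary function has the closed form $f_i(p) = p_0(1-(1-p)^{d_c-1})^{i-1}$; the channel parameter, check degree, and candidate degree set are our own illustrative choices.

```python
import numpy as np
from scipy.optimize import linprog

p0 = 0.45                      # BEC erasure probability (illustrative)
dc = 8                         # regular check degree, rho(x) = x^7
degrees = [2, 3, 4, 10, 20]    # candidate variable degrees (illustrative)

# Discretize the constraint p in (0, Pe(P_ch)] on a finite grid, and
# tabulate F[k, j] = f_{degrees[j]}(ps[k]) using the BEC closed form.
ps = np.linspace(1e-4, p0, 400)
F = np.array([[p0 * (1 - (1 - p) ** (dc - 1)) ** (i - 1) for i in degrees]
              for p in ps])

# maximize sum_i lambda_i / i  <=>  minimize -sum_i lambda_i / i
c = [-1.0 / i for i in degrees]
margin = 1e-6                  # turn the strict inequality f(p) < p into <=
res = linprog(c,
              A_ub=F, b_ub=ps - margin,           # sum_i l_i f_i(p) <= p - margin
              A_eq=[np.ones(len(degrees))], b_eq=[1.0],
              bounds=[(0.0, 1.0)] * len(degrees))
lam = res.x
rate = 1 - (1.0 / dc) / sum(l / i for l, i in zip(lam, degrees))
print("lambda =", np.round(lam, 4), " rate =", round(rate, 4))
```

The grid point nearest the origin effectively enforces the stability condition, which on the BEC caps $\lambda_2$ at $1/(p_0(d_c-1))$.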
III. COMPLEXITY-OPTIMIZED LDPC CODES
In this section, we formulate an optimization program that finds the minimum-complexity code as a function of code rate on a fixed channel. The key to this formulation is a measure that accurately estimates the number of computations needed to successfully decode an irregular LDPC code using the sum-product algorithm. A binary-input additive white Gaussian noise (AWGN) channel is assumed, although the proposed optimization technique applies to any binary-input memoryless symmetric channel.
A. Measure of Decoding Complexity
For a parallel message-passing decoder, the decoding complexity of LDPC codes is proportional to the product of the number of decoding iterations and the number of arithmetic operations performed per iteration. The computational effort (in computing (2) and (3)) per iteration is proportional to the
¹Intuitively, this is justified by the accuracy of methods that assume a Gaussian distribution for variable-node to check-node messages, which leads to elementary EXIT charts that are independent of $\lambda$; elementary EXIT charts computed in this manner closely approximate elementary EXIT charts computed by density evolution [5], [20].
number of edges $E$ in the graph of the code, while the number of messages passed per iteration is easily seen to be $2E$. Thus, as we argue below, the overall complexity is proportional to the total number of messages passed in the iterative decoding process.

To see that the complexity per iteration scales linearly with $E$, we may explicitly compute (for the sum-product algorithm) the number of operations performed in one iteration; the analysis of other update rules is similar. The variable-node update for the sum-product algorithm (2) can be performed as follows. At each variable node of degree $d_v$, we first compute the sum of the $d_v$ extrinsic messages and the channel message; this requires $d_v$ additions. To compute the output message for some particular edge, we can then subtract the extrinsic message received over the same edge from the computed sum. Thus, computing the $d_v$ output messages requires $2d_v$ operations. Summing over all variable nodes, $O(E)$ operations are required.

Similarly, to perform the check-update (3) at a check node of degree $d$, we can first compute $\tanh(m/2)$ for each incoming message $m$. Next, performing $d - 1$ operations, we compute the product of these terms. Finally, computing an output requires dividing this quantity by the term associated with the incoming message over the same edge, then taking $2\tanh^{-1}$ of the resulting value. Thus, computing all the check-node output messages for a check node of degree $d$ requires $d - 1$ multiplications, $d$ divisions, and $2d$ $\tanh/\tanh^{-1}$ calculations. Summing over all check nodes, $O(E)$ operations are needed in total.

The overall complexity for decoding one codeword is therefore proportional to $\ell E$, where $\ell$ is the number of decoding iterations. Since each codeword encodes $Rn$ information bits, where $R$ is the code rate and $n$ is the block length, the decoding complexity per information bit is then $O\left(\frac{\ell E}{Rn}\right)$.
Now, using the fact that $n = E \sum_i \lambda_i / i$ and using expression (1), which relates $R$ with $\lambda_i$ and $\rho_j$, we obtain the following expression for the decoding complexity $K$:

$$K = \frac{\ell E}{Rn} = \ell\,\frac{1}{R \sum_i \lambda_i / i} = \ell\,\frac{1 - R}{R \sum_j \rho_j / j}. \quad (9)$$
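In code, the measure (9) is a one-liner; the function below (our own naming) evaluates $K$ from the iteration count, the rate, and the check-degree coefficients.

```python
def decoding_complexity(num_iters, rate, rho):
    """Complexity measure K of (9): K = l*(1-R) / (R * sum_j rho_j/j),
    proportional to messages passed per information bit.
    `rho` maps each check degree j to its edge-perspective coefficient rho_j."""
    s = sum(r_j / j for j, r_j in rho.items())   # sum_j rho_j/j = 1/(avg check degree)
    return num_iters * (1.0 - rate) / (rate * s)

# Example: a rate-1/2 code with regular check degree 8, decoded in 50 iterations:
# K = 50 * 0.5 / (0.5 * 1/8) = 400.
K = decoding_complexity(50, 0.5, {8: 1.0})
```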
In the rest of this section, we formulate the complexity-rate tradeoff problem as that of minimizing $K$ for a fixed $R$. Furthermore, since the average check-node degree can be shown to be $\bar{d}_c = \left(\sum_j \rho_j / j\right)^{-1}$, minimizing the decoding complexity becomes equivalent to minimizing $\ell \bar{d}_c$, and the search for complexity-minimized codes involves an investigation of the tradeoff between average check-node degree and the number of decoding iterations.
B. Characterizing the Number of Decoding Iterations
The main idea of this paper is that the total number of decoding iterations can be accurately estimated based on the shape of the EXIT chart $f(p)$. Further, this characterization of complexity can be expressed as a differentiable function of code parameters (i.e., $\lambda_i$), which allows one to use a continuous optimization approach for finding good codes with reduced decoding complexities.
From the definition of EXIT charts, it is clear that computing the decoding trajectory for a given code is equivalent
[Fig. 2: staircase plot of $f(p)$ in the $(p_{in}, p_{out})$ plane, marking $f(p^{(0)})$, $f(p^{(1)})$, $f(p^{(2)})$ and the slopes $\alpha_0$, $\alpha_1$.]

Fig. 2. The iterative decoding process can be represented by an iterative function evaluation of the EXIT chart, i.e., $p^{(\ell)} = f(p^{(\ell-1)})$. Further, $p^{(\ell+1)}$ and $p^{(\ell)}$ are related by the local slope $\alpha_\ell = p^{(\ell+1)}/p^{(\ell)}$.
to an iterative function evaluation of $f(p)$. Given some initial message error rate $p_0 > 0$ and a target message error rate $p_t < p_0$, let $p^{(0)} = p_0$. The iterative decoding process is represented by (5), i.e., $p^{(\ell)} = f(p^{(\ell-1)})$ for $\ell \in \mathbb{N}$. Convergence to $p_t$ implies the existence of some $\ell \in \mathbb{N}$ such that $p^{(\ell)} = f(p^{(\ell-1)}) \le p_t$. The required number of iterations for convergence, $N$, is defined to be the minimum such $\ell$. Note that the number of required iterations is determined by the shape of $f(p)$ alone.

Note that a sufficient condition for convergence to $p_t$ is that $f(p) < p$, $\forall p \in [p_t, p_0]$, since this implies that the output of an iteration is strictly less than its input, and that the recursion has no fixed points in the interval of interest.

The following definition plays an important role in the characterization of the required number of iterations for convergence. Define the local slope of $f(\cdot)$ at $p^{(\ell)}$ to be

$$\alpha_\ell = \frac{f(p^{(\ell)})}{p^{(\ell)}}. \quad (10)$$

Since $p^{(1)} = f(p^{(0)})$, $p^{(2)} = f(p^{(1)})$, $\ldots$, $p^{(N)} = f(p^{(N-1)})$, where $p^{(N)} \le p_t$ and $p^{(N-1)} > p_t$, we have

$$p^{(N)} = \alpha_{N-1}\alpha_{N-2}\cdots\alpha_1\alpha_0\, p_0. \quad (11)$$

Note that the value of $\alpha_\ell$ is equal to the slope of the line passing through the origin and the point $(p^{(\ell)}, f(p^{(\ell)}))$. This is illustrated in Fig. 2.
We first consider the case of determining the required number of iterations for convergence for an EXIT function that is a straight line passing through the origin, i.e., $f(p) = \alpha p$, $\alpha < 1$. In this simplest case, (11) reduces to $p^{(N)} = \alpha^N p_0$. The number of iterations required to go from $p_0$ to $p_t$ can then be computed exactly:

$$N = \left\lceil \frac{\ln(p_t) - \ln(p_0)}{\ln(\alpha)} \right\rceil. \quad (12)$$
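For the linear case, (12) can be checked directly against brute-force iteration; a quick sketch (our own helper name):

```python
import math

def iters_linear(alpha, p0, pt):
    """Iteration count (12) for a linear EXIT chart f(p) = alpha*p, 0 < alpha < 1."""
    return math.ceil((math.log(pt) - math.log(p0)) / math.log(alpha))

# e.g. alpha = 0.5, p0 = 1, pt = 1e-6: N = ceil(ln(1e-6)/ln(0.5)) = 20
```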
The main idea for extending the above formula to nonlinear $f(p)$ is to momentarily ignore the fact that $N$ has to be an integer, and to compute the incremental increase in $N$ as a function of the incremental change in $p_t$. The key is to
TABLE I
ESTIMATES FOR THE NUMBER OF ITERATIONS, $p_t = 10^{-6}$, $p_0 = 1$

f(p)                                                     Actual   Estimated
0.4p + 0.45p^2 - 1.05p^3 + 0.2p^4 + 0.2p^5 + 0.4p^6        16       15.4
0.7p + 0.2p^2 + 0.4p^3 - 0.4p^6                            60       59.1
0.5p - 0.45p^2 + 0.5p^4 + 0.4p^6                           21       19.6
recognize from (11) that the required number of iterations is a function of local slopes only. When $f(p)$ is nonlinear, the nonuniform local slopes can be used as weighting factors in an accurate estimate of $N$.

Fix some $w \in [p_t, p_0]$. Assume that $\alpha(p) = \frac{f(p)}{p}$ is constant for $p \in [w - \Delta w, w]$, with $\Delta w > 0$ being sufficiently small. Returning to the linear case, and considering $N$ to be a non-integer quantity, the incremental change in $N$ as a function of an incremental change in $p_t$ can be obtained by taking the derivative of $N$ with respect to $p_t$:

$$\frac{dN}{dp} = \frac{1}{p \ln(\alpha(p))}. \quad (13)$$

Therefore, the required number of iterations to progress from $w$ to $w - \Delta w$ is $\Delta N(w) = \frac{-\Delta w}{w \ln(\alpha(w))}$. Thus, to estimate the total number of iterations from $p_0$ to $p_t$, we may integrate (13) and obtain

$$N \approx \int_{p_t}^{p_0} \frac{dp}{p \ln\left(\frac{p}{f(p)}\right)}. \quad (14)$$

Despite making various "continuous approximations" in its derivation, this formula provides a surprisingly accurate estimate of the required number of function evaluations, as exemplified in Table I for a set of polynomials that closely approximate the shape of typical EXIT charts. We note that while it is possible to generate functions for which the measure is arbitrarily inaccurate, for the class of EXIT-chart-like functions, such pathological cases do not arise.
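The entries of Table I can be reproduced by comparing brute-force iteration of $p \leftarrow f(p)$ against a numerical evaluation of (14). The sketch below is our own code; it substitutes $u = \ln p$ and applies a trapezoidal rule on a uniform $u$-grid, which suits the near-constant integrand at small $p$.

```python
import math

def actual_iters(f, p0, pt):
    """Count iterations of p <- f(p) until p <= pt."""
    p, n = p0, 0
    while p > pt:
        p, n = f(p), n + 1
    return n

def estimated_iters(f, p0, pt, steps=20000):
    """Evaluate (14), N ~ int_{pt}^{p0} dp / (p * ln(p/f(p))), via the
    substitution u = ln p and a trapezoidal rule on a uniform u-grid."""
    a, b = math.log(pt), math.log(p0)
    h = (b - a) / steps
    g = [1.0 / math.log(math.exp(u) / f(math.exp(u)))
         for u in (a + k * h for k in range(steps + 1))]
    return h * (sum(g) - 0.5 * (g[0] + g[-1]))

# First row of Table I:
f = lambda p: 0.4*p + 0.45*p**2 - 1.05*p**3 + 0.2*p**4 + 0.2*p**5 + 0.4*p**6
```

Running `actual_iters(f, 1.0, 1e-6)` and `estimated_iters(f, 1.0, 1e-6)` recovers the first row of the table.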
Of course, to obtain $f(p)$ one must perform density evolution, from which one could directly obtain the required number of iterations to achieve a target error rate. Thus the value of the measure lies not so much in its ability to accurately predict the number of iterations, but rather as an objective function for an optimization program that seeks to determine an $f(p) = \sum_i \lambda_i f_i(p)$ that minimizes $N$, subject to a rate constraint. This is facilitated by the fact that the above expression for $N$ is differentiable with respect to the design parameters $\lambda_i$. More importantly, under a mild condition, (14) can be shown to be a convex function of $\lambda_i$.
Theorem 1: Let $f(p) = \sum_i \lambda_i f_i(p)$, where $f(p) < p$. The function

$$\int_{p_t}^{p_0} \frac{dp}{p \ln\left(\frac{p}{f(p)}\right)}$$

is convex in $\lambda_i$ in the region $\{\lambda_i \mid f(p) \ge e^{-2} p\}$.

Proof: Convexity of the integrand is sufficient for convexity of the integral, since the sum of convex functions is convex. Further, to show convexity of the integrand as a function of $\lambda_i$, it follows directly from the definition of convexity that one need only show convexity along all lines in $\lambda$-space that intersect its domain [21]; a function restricted to a line is univariate in $t \in \mathbb{R}$, and thus the convexity of the integrand (on its domain, $\{t \in \mathbb{R} \mid \sum_i (t\gamma_i + \lambda_i) f_i(p) < p\}$) can be
verified directly by taking its second derivative. The integrand, restricted to the line, is of the form

$$g(t) = \frac{1/p}{\Phi - \ln(t\Theta + \Psi)}, \quad (15)$$

where $\Phi = \ln(p)$, $\Theta = \sum_i \gamma_i f_i(p)$, and $\Psi = \sum_i \lambda_i f_i(p)$. Using the fact that $\Phi > \ln(t\Theta + \Psi)$ on the domain of $g(t)$, a direct computation reveals that the second derivative with respect to $t$ is always positive if $\Phi - \ln(t\Theta + \Psi) \le 2$, which is equivalent to $f(p) \ge e^{-2} p$.
Note that Theorem 1 provides a sufficient condition for convexity; only when the condition is violated for all $p \in [p_t, p_0]$ does it imply that the optimization program is nonconvex.
C. Optimization Program Formulation
We are now ready to formulate the problem of minimizing the decoding complexity of a code subject to a rate constraint. The optimization variables are the variable degree distribution parameters $\lambda_i$. Implicitly, we must have $\sum_i \lambda_i = 1$ and $\lambda_i \ge 0$. It is easy to see from (1) that if the check-degree distribution $\rho_j$ is assumed to be fixed, the code rate is simply a function of $\lambda_i$. More specifically, the rate constraint becomes a linear constraint:

$$\sum_i \lambda_i / i \ge \frac{1}{1 - R_0} \sum_j \rho_j / j. \quad (16)$$

Clearly, the above constraint would be met with equality for the minimal-complexity code. In this case, the complexity measure $K$ is then directly proportional to the number of iterations $N$.
The idea is now to start with some initial $\hat{\lambda}$, compute its associated EXIT functions $f_i(p)$, and update $\lambda$ by solving the following optimization problem:

minimize $\quad \left(\dfrac{1 - R_0}{R_0 \sum_j \rho_j / j}\right) \displaystyle\int_{p_t}^{p_0} \frac{dp}{p \ln\left(\frac{p}{\sum_i \lambda_i f_i(p)}\right)}$
subject to $\quad \sum_i \lambda_i / i \ge \dfrac{1}{1 - R_0} \sum_j \rho_j / j$
$\quad \sum_i \lambda_i = 1$
$\quad \lambda_i \ge 0$
$\quad \|\lambda - \hat{\lambda}\|_\infty < \delta \quad (17)$

Here, $\rho_j$ is fixed, $R_0$ is the fixed target rate, and $\delta$ is the maximum permissible change in any component of $\lambda$, which is set to a small number to ensure that the elementary EXIT charts $f_i(p)$ remain accurate.
In practice, the above optimization problem is solved repeatedly, with $\hat{\lambda}$ updated in each step. The initial $\hat{\lambda}$ can be set to the variable degree distribution of the threshold-optimized code for some target channel condition. Such a distribution can be obtained by solving (8). Next, we set $R_0$ to the target rate (which must be lower than the rate of the threshold-optimized code), and solve the optimization problem to obtain a minimal-complexity code. However, because of the $\delta$ constraint, the achievable complexity reduction in one iteration is typically limited. To further reduce complexity, we set $\hat{\lambda}$ to the most recently computed $\lambda$, and repeat the optimization procedure using the updated elementary EXIT charts. This iterative process should be repeated until the complexity reduction from one iteration to the next falls below some threshold value.
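The inner step of this procedure can be sketched end-to-end on the BEC, where the elementary charts have the exact closed form $f_i(p) = p_0(1-(1-p)^{d_c-1})^{i-1}$ independent of $\lambda$, so the trust-region constraint and the outer re-linearization loop are unnecessary and (17) can be solved in one shot. All parameter values and the feasible starting point below are our own illustrative choices; since $\rho$ and $R_0$ are fixed, the objective's prefactor is constant and we minimize the iteration-count integral (14) directly.

```python
import numpy as np
from scipy.optimize import minimize

p0c, dc, R0, pt = 0.4, 8, 0.5, 1e-6     # channel, check degree, target rate
degrees = np.array([2, 3, 4, 10, 20])

us = np.linspace(np.log(pt), np.log(p0c), 4000)   # uniform grid in u = ln p
ps = np.exp(us)
# F[k, j] = f_{degrees[j]}(ps[k]), BEC closed form for the elementary charts.
F = p0c * (1 - (1 - ps[:, None]) ** (dc - 1)) ** (degrees[None, :] - 1)

def num_iters(lam):
    """Objective of (17) up to a constant factor: the estimate (14),
    computed by a trapezoidal rule in u = ln p."""
    ratio = np.log(ps / (F @ lam))
    ratio = np.clip(ratio, 1e-3, None)   # guard against infeasible iterates
    g = 1.0 / ratio
    h = us[1] - us[0]
    return h * (g.sum() - 0.5 * (g[0] + g[-1]))

cons = [{'type': 'eq',   'fun': lambda l: l.sum() - 1.0},
        {'type': 'ineq', 'fun': lambda l: (l / degrees).sum()
                                           - (1.0 / dc) / (1.0 - R0)},  # rate (16)
        {'type': 'ineq', 'fun': lambda l: ps - F @ l}]                  # open chart

lam0 = np.array([0.30, 0.25, 0.15, 0.15, 0.15])   # a feasible starting point
res = minimize(num_iters, lam0, method='SLSQP', constraints=cons,
               bounds=[(0.0, 1.0)] * len(degrees))
```

For channels other than the BEC, `F` would be tabulated from density evolution and the solve repeated with the $\delta$ trust region, as described above.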
D. Design Notes
While the preceding optimization program forms the core of our design procedure, several further comments are in order.
1) Convexity and Optimality: Each iteration of the designprocedure consists of finding the minimum-complexity codewhose degree distribution deviates from the initial conditionοΏ½ΜοΏ½ by at most π, by solving the optimization problem (17).This is done for a fixed channel condition and a giventarget rate π 0. Note that the constraints of (17) are linear.Thus, when the objective function of (17) is indeed convex,the optimization problem (17) belongs to a class of convexoptimization problems for which the globally optimal solutioncan be efficiently found using standard convex optimizationtechniques [21]. However, from Theorem 1, we only knowthat convexity holds when {ππ β£ π(π) β₯ πβ2π}.
In some cases, we can argue that the sufficient condition for convexity necessarily holds, and thus by Theorem 1 the problem is convex. First, observe that in all cases of interest, when the sufficient condition for convexity is violated, it is violated near the origin of the EXIT chart. The convexity condition imposes a minimum slope on f(p) near the origin, analogous to the usual stability condition [19], which constrains the slope near the origin to be less than 1. Under the assumption that an EXIT chart that violates the convexity condition necessarily violates it near the origin, one can determine the rate Rc such that an EXIT chart associated with a code of rate greater than Rc must have a slope greater than 1/(2dv) at p = 0. If the desired rate of the complexity-minimized code is greater than Rc, it is sufficient to limit the optimization to the convex region, and a globally optimal solution of (17) can be efficiently found; if the desired rate is less than Rc, then we solve the optimization problem without the {p | f(p) ≥ p/(2dv)} constraint, but we are only guaranteed to find a local optimum. As a specific illustration of the preceding, consider the special case of capacity-approaching codes, where the flatness condition [22] implies² that the slope of the EXIT chart approaches 1 (which is clearly greater than 1/(2dv)) as p tends to zero; therefore, the problem of finding capacity-approaching complexity-minimized codes is convex. This agrees with the results of [16], which proved convexity as the gap to capacity goes to zero.
Finally, even when the sufficient condition for convexity is not satisfied, we observed experimentally that, regardless of the initial variable degree distribution, the solution of the optimization program converges to a unique distribution (for a fixed target rate, channel condition, and check degree distribution); thus, in practice, it seems a globally optimal solution can always be found efficiently. Coupled with the accuracy of our complexity measure, this suggests that solving our optimization program yields a near-minimal-complexity code in an efficient manner.
²Strictly speaking, this result is proved for the BEC, but a similar observation holds for more general channels.
Authorized licensed use limited to: The University of Toronto. Downloaded on April 15,2010 at 15:09:09 UTC from IEEE Xplore. Restrictions apply.
2) Setting a Target Message Error Rate: The proposed design procedure involves tracking the evolution of the average message error rate over all variable nodes. In practice, we are not directly interested in achieving a target message error rate pt, but rather a target bit error rate pb over the information bits of the code. In the following, we relate pb to pt and give guidelines as to how to set pt in practice.
First, the message error rates at different variable nodes are different, and they depend on the variable degrees. Further, when a decoder completes its allotted number of iterations, making a decision at a degree-d variable node involves combining the channel message and all d check-to-variable messages. Note that this is equivalent to the extrinsic message error rate of a degree-(d+1) variable node, which is different from that of the outgoing message of a degree-d variable node (which involves the channel message and d−1 check-to-variable messages).
To compute the message error rate of variable nodes of different degrees, it is necessary to compute the node-perspective (rather than edge-perspective) variable degree distribution:

Λ(x) = Σ_{i=2}^{dv} Λ_i x^i, (18)

where

Λ_i = (λ_i/i) / (Σ_{j=2}^{dv} λ_j/j). (19)
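As a concrete illustration of (19), the edge-to-node conversion can be sketched in a few lines; the degree distribution below is hypothetical, chosen only to exercise the formula.

```python
# Node-perspective degree distribution from edge-perspective lambda(x),
# per Eq. (19): Lambda_i = (lambda_i / i) / sum_j (lambda_j / j).
# Illustrative sketch; the dict below is a hypothetical lambda(x).

def node_perspective(lam):
    """lam: dict mapping variable degree i -> edge fraction lambda_i."""
    total = sum(l / i for i, l in lam.items())
    return {i: (l / i) / total for i, l in lam.items()}

lam = {2: 0.5, 3: 0.3, 10: 0.2}              # hypothetical lambda(x)
Lam = node_perspective(lam)
assert abs(sum(Lam.values()) - 1.0) < 1e-12  # Lambda(1) = 1
```

Note how the conversion shifts weight toward low degrees: a degree-2 node accounts for fewer edges per node than a degree-10 node, so its node fraction exceeds its edge fraction.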
Furthermore, in a systematic realization of an LDPC code, one can assign information bits to high-degree variable nodes and parity-check bits to low-degree variable nodes. In the case of threshold-optimized codes, this is without loss in performance [23, Theorem 1]. Thus, we define a node-perspective variable degree distribution over the information bits as follows. Let k be such that Σ_{i=k}^{dv} Λ_i ≥ R0 and Σ_{i=k+1}^{dv} Λ_i < R0, where R0 is the target rate. The degree distribution Λ̃(x) over the information bits is:

Λ̃(x) = Σ_{i=k}^{dv} Λ̃_i x^i, (20)

where

Λ̃_k = 1 − (Σ_{i=k+1}^{dv} Λ_i)/R0, and Λ̃_i = Λ_i/R0 for i > k. (21)
Finally, for a given code and channel condition, the bit error rate over the information bits, pb, and the average message error rate, pt, can be related as follows:

pb = BER(pt) = Σ_{i=k}^{dv} Λ̃_i p_{i+1}(pt), (22)

where p_{i+1}(pt) is the error rate of the decision at a degree-i variable node when the average message error rate is pt. The above relation allows one to determine the appropriate pt in code design, for a given target pb. In each iteration of the optimization procedure, pt can be computed based on the target pb and the λ(x) of the current iteration. Note that equation (22) can be easily solved numerically, as BER(pt) is a strictly increasing function of pt.
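Since BER(pt) is strictly increasing, (22) can be solved for pt by simple bisection. The sketch below assumes, purely for illustration, a BEC-style model in which the decision at a degree-i variable node fails only when the channel message (erasure probability eps) and all i incoming check messages are erased; the information-bit weights Lam_info and the parameter eps are hypothetical.

```python
# Solving Eq. (22) numerically: find p_t such that BER(p_t) = target p_b.
# BEC-style illustration: p_{i+1}(p_t) = eps * p_t**i for a degree-i
# variable node (channel erasure AND all i incoming messages erased).

Lam_info = {3: 0.4, 10: 0.6}   # hypothetical tilde-Lambda_i over info bits
eps = 0.4                      # assumed BEC erasure probability

def ber(pt):
    # Eq. (22) under the BEC-style model above; strictly increasing in pt.
    return sum(w * eps * pt**i for i, w in Lam_info.items())

def solve_pt(target_pb, lo=0.0, hi=1.0, iters=100):
    # Bisection on the strictly increasing function BER(p_t).
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if ber(mid) < target_pb:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

pt = solve_pt(1e-6)
```

Any root-finding method would do; bisection is shown because monotonicity of BER(pt) guarantees convergence without derivative information.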
Fig. 3. Depth-one decoding trees: the (variable-perspective) EXIT charts in Section II correspond to the left-hand tree, whereas the right-hand tree corresponds to check-perspective EXIT charts.
3) Optimizing the Check-Degree Distribution: The proposed optimization program assumes a fixed check-degree distribution ρ(x). Thus, finding the minimal-complexity code requires an optimization over the choice of check-degree distribution.
For threshold-optimized codes, there exists convincing evidence that it suffices to consider only check-degree distributions whose degrees are concentrated on two consecutive values [20], [24]. That is to say, if the average check-node degree is chosen to be k ≤ dc < k+1, where k is an integer, then non-zero weights exist only on degree-k and degree-(k+1) check nodes, and ρ(x) satisfies

1/dc = ρ_k/k + ρ_{k+1}/(k+1). (23)
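For a given average check degree, the two concentrated weights in (23) have a closed form, obtained by combining (23) with the normalization ρ_k + ρ_{k+1} = 1. A minimal sketch (the average degree 8.5 below is arbitrary, for illustration):

```python
from math import floor

# Concentrated check-degree distribution per Eq. (23): given an average
# check degree d_c with k <= d_c < k+1, put weight only on degrees k and
# k+1 so that rho_k/k + rho_{k+1}/(k+1) = 1/d_c and rho_k + rho_{k+1} = 1.

def concentrated_rho(d_c):
    k = floor(d_c)
    rho_k = k * (k + 1) / d_c - k      # closed-form solution of (23)
    return {k: rho_k, k + 1: 1.0 - rho_k}

rho = concentrated_rho(8.5)            # hypothetical average degree
assert abs(sum(w / d for d, w in rho.items()) - 1 / 8.5) < 1e-12
assert abs(sum(rho.values()) - 1.0) < 1e-12
```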
For complexity-minimized codes, it is possible to use a procedure similar to the proposed complexity-optimization program to provide further evidence in favor of concentrated check distributions. The idea is to fix a target rate R0 and λ(x), and to show that the complexity-minimizing ρ(x) has a concentrated distribution. Note that with R0 and λ(x) fixed, the average check degree is fixed; therefore, minimizing the complexity is equivalent to minimizing the number of decoding iterations. It is possible to use an approach analogous to that outlined in Section II-B to define a check-perspective EXIT chart (i.e., one that tracks the message error rate of check-to-variable messages; see Fig. 3). One can then express the decoding complexity as a function of ρ(x) and the elementary check-perspective EXIT charts, and a suitable optimization program can be formulated to find the complexity-minimizing check-degree distribution. In all investigated cases, we found that the resulting check-degree distribution was indeed concentrated on two consecutive degrees.
Finally, in many cases it suffices to consider only regular check degrees, i.e., ρ_k = 1 for some k, with negligible performance degradation. Note that with regular check degrees, the capacity of the BEC can be achieved [22], and for the Gaussian channel, performance very close to the Shannon limit has also been reported [5].
IV. CODE DESIGN AND SIMULATION RESULTS
Given a target rate and a target decoding complexity, the best code is the one that permits successful decoding (with the target complexity) on the worst channel. In other words, it is the highest-threshold code that satisfies the design specifications. In this section, we illustrate how to
find such codes using the optimization program presented in Section III, and compare their performance, via simulation, to traditional threshold-optimized codes, which were designed using LdpcOpt [25]. A binary-input AWGN channel with noise variance σ² and a target pb = 10^−6 are assumed, and the sum-product algorithm is used at the decoder.
A. Design Example
To illustrate our optimization method, we design codes of rate one-half for different target decoding complexities, with a maximum variable node degree of 30.³
First, from the given specifications, the threshold-optimized code can be found by well-known methods [1]–[3]. Using LdpcOpt, the threshold-optimized code C_T has decoding threshold σ = 0.9713, ρ(x) = x^8, and

λ(x) = 0.21236x + 0.19853x^2 + 0.00838x^4 + 0.07469x^5 + 0.01424x^6 + 0.16652x^7 + 0.00912x^8 + 0.02002x^9 + 0.00025x^{19} + 0.29589x^{29}.

Note that finding the threshold-optimized code implicitly involves a search over ρ(x).
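As a quick sanity check on the quoted distributions, the design rate R = 1 − (Σ_j ρ_j/j)/(Σ_i λ_i/i) can be evaluated numerically; note that the coefficient of x^{i−1} in λ(x) corresponds to edge fraction λ_i at degree i, so the dictionaries below are keyed by degree.

```python
# Design rate of an irregular LDPC ensemble:
#   R = 1 - (sum_j rho_j / j) / (sum_i lambda_i / i).
# With rho(x) = x^8 (all checks of degree 9) and the lambda(x) of C_T
# above, this should come out to (approximately) one-half.

lam = {2: 0.21236, 3: 0.19853, 5: 0.00838, 6: 0.07469, 7: 0.01424,
       8: 0.16652, 9: 0.00912, 10: 0.02002, 20: 0.00025, 30: 0.29589}
rho = {9: 1.0}  # rho(x) = x^8, i.e. regular degree-9 checks

rate = 1 - sum(w / d for d, w in rho.items()) / sum(w / d for d, w in lam.items())
assert abs(sum(lam.values()) - 1.0) < 1e-6   # lambda(1) = 1
assert abs(rate - 0.5) < 1e-3                # design rate ~ one-half
```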
For ρ(x) = x^8 and any σ < 0.9713, we can solve our optimization program (17) to find the minimal-complexity code of rate one-half; that is, the optimization yields a unique complexity-minimized code for each channel condition. In essence, we obtain a curve of the optimized complexity vs. the channel condition. Now, as discussed in Section III, minimizing the decoding complexity at a fixed channel condition is equivalent to minimizing N·dc, and therefore involves a search over the average check degree. Thus, in a manner analogous to the case of ρ(x) = x^8, we compute the K vs. σ curve for various dc (from an appropriately chosen set, centered about dc = 9), and plot the curves on a single graph. The resulting graph is illustrated in Fig. 4, where check distributions of the form ρ(x) = x^{dc−1} are considered. Note that a slightly more refined result is possible if one considers check distributions concentrated on consecutive degrees.
Now, for a target decoding complexity, it is straightforward to use Fig. 4 to determine the parameters of the best code. This is accomplished by finding the largest σ at which one of the curves achieves the target complexity, and finding the complexity-minimized code for the corresponding check degree and noise variance. We present below the resulting codes for three target complexities.

For K = 150, we have dc = 6 and σ = 0.855, and the resulting code C_150 has

λ(x) = 0.02799x + 0.94752x^2 + 0.02449x^6.
For K = 400, we have dc = 8 and σ = 0.913, and the resulting code C_400 has

λ(x) = 0.10733x + 0.50459x^2 + 0.07627x^{12} + 0.31181x^{13}.
³This is sufficient for threshold-optimized codes to approach the BI-AWGN capacity to within 0.07 dB.
Fig. 4. Minimum decoding complexity K as a function of channel condition σ, for various check degrees dc, with R = 0.5 and maximum variable degree 30.
For K = 800, we have dc = 8 and σ = 0.939, and the resulting code C_800 has

λ(x) = 0.16627x + 0.40111x^2 + 0.08803x^{10} + 0.17446x^{11} + 0.17013x^{15}.
B. Simulation Results
We compare the performance of the complexity-minimized codes with that of threshold-optimized codes. For each degree distribution, we design a parity-check matrix of length n = 10^5. We ensure that the corresponding graph does not have any cycles of length 4, but the matrix is otherwise randomly constructed. We note that for short block lengths, graph-structure optimization via progressive edge growth [26] and its variants (see [27] and references therein) significantly improves performance. However, for long block lengths, these algorithms are computationally intensive, and their benefits are reduced.
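The 4-cycle constraint above amounts to requiring that no two columns of the parity-check matrix share more than one row (two shared rows between two variable nodes close a length-4 cycle in the Tanner graph). A brute-force check on a small, hypothetical matrix:

```python
from itertools import combinations

# A Tanner graph has a length-4 cycle iff two columns of the parity-check
# matrix H overlap in two or more rows. O(n^2) pairwise check; fine for a
# toy example, though too slow at n = 10^5 without indexing by row.

def has_4_cycle(H):
    rows, cols = len(H), len(H[0])
    support = [{r for r in range(rows) if H[r][c]} for c in range(cols)]
    return any(len(a & b) >= 2 for a, b in combinations(support, 2))

H = [[1, 1, 0, 0],
     [0, 1, 1, 0],
     [1, 0, 0, 1]]
assert not has_4_cycle(H)               # this toy H is 4-cycle free
assert has_4_cycle([[1, 1], [1, 1]])    # duplicate columns -> 4-cycle
```

In a practical construction one would instead reject or repair offending columns as the matrix is sampled, rather than test all pairs after the fact.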
The complexity-minimized codes are evaluated at their target decoding complexities, while the performance of the threshold-optimized code is presented for a range of decoding complexities. The bit error rates (BER), computed over the information bits of the code, are plotted as a function of the signal-to-noise ratio (SNR).
While it is traditional to discuss the coding gain (in dB) of one coding scheme relative to another, we prefer to evaluate the complexity reduction afforded by our complexity-minimized codes. Fig. 5 shows the BER curves for C_150, C_400, and C_800, as well as C_T. Each of the complexity-optimized codes is evaluated at its target decoding complexity, while the performance of the threshold-optimized code is presented for a range of decoding iterations. For N decoding iterations, the decoding complexity of the threshold-optimized code is K = 9N.
For a BER pb = 10^−6, C_T requires N to be slightly greater than 30, which translates to a decoding complexity K = 270, whereas C_150 achieves the target with K = 150; thus, the complexity-optimized code provides a 44% reduction in decoding complexity at the same SNR.
Fig. 5. Performance of C_150, C_400, and C_800, each evaluated at its target decoding complexity, and C_T for various numbers of decoding iterations. R = 0.5; block length n = 10^5. Note that for C_T with N decoding iterations, K = 9N.
Of course, as we increase the decoding complexity, the SNR required to achieve the target BER decreases. Relative to C_400, which achieves pb = 10^−6 at Eb/N0 = 0.84 dB, C_T requires N to be slightly greater than 75, which translates to a decoding complexity K = 675; in this case, the complexity-optimized code provides a 41% reduction in decoding complexity at the same SNR.
Relative to C_800, which achieves pb = 10^−6 at Eb/N0 = 0.69 dB, C_T requires N to be greater than 200, which translates to a decoding complexity K = 1800; in this case, the complexity-optimized code provides more than a 56% reduction in decoding complexity at the same SNR.
Finally, since the check-degree distribution of a complexity-minimized code is a function of its target decoding complexity, we investigated the performance of threshold-optimized codes with reduced check degrees, and found that reducing the check degree of threshold-optimized codes (from dc = 9, which maximizes the threshold) does not improve their performance-complexity tradeoff. This justifies the comparison of our codes to C_T.
C. EXIT Chart Interpretation of the Results
To further explain the observed results, Fig. 6 shows the EXIT charts of the threshold-optimized code and the K = 150 complexity-optimized code, for σ = 0.9. The behavior of the EXIT charts near the origin is plotted on a log scale in Fig. 7. For a target message error rate pt = 10^−4, C_150 requires 35 decoding iterations (K = 280), while C_T requires 63 decoding iterations (K = 567); this translates to a reduction in decoding complexity of approximately 50%.
An important observation is that the complexity-optimized code has a more closed EXIT chart in the earlier iterations and a more open EXIT chart in the later iterations. It is shown in [28] that, for a wide class of decoding algorithms, when the EXIT charts of two codes, f_A(p) and f_B(p), satisfy f_A(p) ≤ f_B(p) for all p ∈ (0, p_0], then C_B has a higher rate than
Fig. 6. EXIT charts for C_150 and C_T on the AWGN channel, σ = 0.9, in linear scale.
Fig. 7. EXIT charts for π150 and ππ on AWGN channel, π = 0.9, in logscale.
C_A (for the BEC, this also follows from [29]). In this example, since both codes have the same rate, one EXIT chart cannot dominate the other everywhere; instead, the optimization program carefully trades the area underneath the EXIT chart for complexity. Furthermore, for the complexity-minimized codes, the delayed appearance of the waterfall region is due to the fact that their EXIT charts are initially less open than the EXIT chart of the threshold-optimized code. On the other hand, the openness of the complexity-minimized EXIT charts near the origin translates to the steeper waterfalls in the above figures. These effects can be attributed to the decreased fraction of degree-two nodes in the complexity-minimized codes. It is well known that the minimum distance [30] and the size of the smallest stopping set [31] depend strongly on the fraction of degree-two nodes; therefore, it is not surprising that the complexity-minimized codes do not appear to suffer from error floors, even when the number of decoding iterations is small.
D. Discussion
The effectiveness of our design methodology stems from the accuracy of our measure of complexity, but it also rests upon the accuracy of density-evolution analysis. At short block lengths, where the performance of irregular LDPC codes is sensitive to the structure of the parity-check matrix and the underlying assumptions of density evolution are not valid, our complexity-optimized codes provide very similar performance to threshold-optimized codes. We note that it may be possible to further improve upon our results (both at short and long block lengths) by optimizing over a more general ensemble of LDPC codes. Specifically, multi-edge-type LDPC codes [32] generalize the irregular ensemble to include various edge types, thus providing greater control over graph structure; most importantly, this structure translates to short-block-length performance that more closely approaches the asymptotic predictions of density evolution. Similarly, accumulate-repeat-accumulate (ARA) codes generally provide improved short-block-length performance relative to an LDPC code of the same threshold. However, in order to apply our complexity-minimization method to either family of codes, an EXIT-chart-based design procedure would be required; to our knowledge, no such procedure has been developed for either family.
V. CONCLUDING REMARKS
This paper presents a methodology for the design of irregular LDPC codes with an optimized complexity-rate tradeoff. Our methodology is based on a new measure of complexity, which accurately models the required number of iterations for convergence of the iterative decoding process, and a novel formulation of a complexity-minimization problem. A sufficient condition for convexity of the complexity-optimization problem in the variable edge-degree distribution is presented; when it is not satisfied, numerical experiments nevertheless suggest that the local minimum is unique. The effectiveness of our design methodology stems from the accuracy of our measure of complexity, but also rests upon the accuracy of density-evolution analysis. When the block lengths of the codes are long enough that density evolution provides an accurate prediction of the decoding trajectory, our complexity-minimized codes provide a superior performance-complexity tradeoff compared to threshold-optimized codes, within the ensemble of irregular codes.
ACKNOWLEDGMENT
We acknowledge the anonymous reviewers for their detailedcomments, which improved the presentation of the paper.
REFERENCES
[1] M. G. Luby, M. Mitzenmacher, M. A. Shokrollahi, and D. A. Spielman, "Improved low-density parity-check codes using irregular graphs," IEEE Trans. Inf. Theory, vol. 47, no. 2, pp. 585-598, Feb. 2001.
[2] T. J. Richardson, M. A. Shokrollahi, and R. L. Urbanke, "Design of capacity-approaching irregular low-density parity-check codes," IEEE Trans. Inf. Theory, vol. 47, no. 2, pp. 619-637, Feb. 2001.
[3] S.-Y. Chung, G. D. Forney Jr., T. J. Richardson, and R. L. Urbanke, "On the design of low-density parity-check codes within 0.0045 dB of the Shannon limit," IEEE Commun. Lett., vol. 5, no. 2, pp. 58-60, Feb. 2001.
[4] S. ten Brink, "Convergence behavior of iteratively decoded parallel concatenated codes," IEEE Trans. Commun., vol. 49, pp. 1727-1737, Oct. 2001.
[5] M. Ardakani and F. R. Kschischang, "A more accurate one-dimensional analysis and design of LDPC codes," IEEE Trans. Commun., vol. 52, no. 12, pp. 2106-2114, Dec. 2004.
[6] S. ten Brink and G. Kramer, "Design of repeat-accumulate codes for iterative detection and decoding," IEEE Trans. Signal Process., vol. 51, pp. 2764-2772, Nov. 2003.
[7] S. ten Brink, G. Kramer, and A. Ashikhmin, "Design of low-density parity-check codes for modulation and detection," IEEE Trans. Commun., vol. 52, pp. 670-678, Apr. 2004.
[8] R. G. Gallager, Low-Density Parity-Check Codes. Cambridge, MA: MIT Press, 1963.
[9] D. J. Costello and G. D. Forney Jr., "Channel coding: the road to channel capacity," Proc. IEEE, vol. 95, no. 6, pp. 1150-1177, June 2007.
[10] A. Khandekar and R. J. McEliece, "On the complexity of reliable communication on the erasure channel," in Proc. IEEE International Symp. Inf. Theory, 2001, p. 1.
[11] I. Sason and G. Wiechman, "Bounds on the number of iterations for turbo-like ensembles over the binary erasure channel," IEEE Trans. Inf. Theory, vol. 55, no. 6, pp. 2602-2617, June 2009.
[12] I. Sason and R. Urbanke, "Parity-check density versus performance of binary linear block codes over memoryless symmetric channels," IEEE Trans. Inf. Theory, vol. 49, no. 7, pp. 1611-1635, July 2003.
[13] ——, "Complexity versus performance of capacity-achieving irregular repeat-accumulate codes on the binary erasure channel," IEEE Trans. Inf. Theory, vol. 50, no. 6, pp. 1247-1256, June 2004.
[14] H. Pfister, I. Sason, and R. Urbanke, "Capacity-achieving ensembles for the binary erasure channel with bounded complexity," IEEE Trans. Inf. Theory, vol. 51, no. 7, pp. 2352-2379, July 2005.
[15] H. D. Pfister and I. Sason, "Accumulate-repeat-accumulate codes: capacity-achieving ensembles of systematic codes for the erasure channel with bounded complexity," IEEE Trans. Inf. Theory, vol. 53, no. 6, pp. 2088-2115, June 2007.
[16] X. Ma and E.-H. Yang, "Low-density parity-check codes with fast decoding convergence speed," in Proc. IEEE International Symp. Inf. Theory, Chicago, USA, June 2004, p. 103.
[17] W. Yu, M. Ardakani, B. Smith, and F. R. Kschischang, "Complexity-optimized LDPC codes for Gallager decoding algorithm B," in Proc. IEEE International Symp. Inf. Theory, Adelaide, Australia, Sept. 2005.
[18] F. R. Kschischang, B. J. Frey, and H.-A. Loeliger, "Factor graphs and the sum-product algorithm," IEEE Trans. Inf. Theory, vol. 47, no. 2, pp. 498-519, Feb. 2001.
[19] T. J. Richardson and R. L. Urbanke, "The capacity of low-density parity-check codes under message-passing decoding," IEEE Trans. Inf. Theory, vol. 47, no. 2, pp. 599-618, Feb. 2001.
[20] S.-Y. Chung, T. J. Richardson, and R. L. Urbanke, "Analysis of sum-product decoding of low-density parity-check codes using a Gaussian approximation," IEEE Trans. Inf. Theory, vol. 47, no. 2, pp. 657-670, Feb. 2001.
[21] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge, UK: Cambridge University Press, 2004.
[22] A. Shokrollahi, "New sequences of linear time erasure codes approaching the channel capacity," in Proc. International Symp. Applied Algebra, Algebraic Algorithms and Error-Correcting Codes, no. 1719, Lecture Notes in Computer Science, 1999, pp. 65-67.
[23] T. Richardson and R. Urbanke, "Efficient encoding of low-density parity-check codes," IEEE Trans. Inf. Theory, vol. 47, no. 2, pp. 638-656, Feb. 2001.
[24] L. Bazzi, T. J. Richardson, and R. L. Urbanke, "Exact thresholds and optimal codes for the binary-symmetric channel and Gallager's decoding algorithm A," IEEE Trans. Inf. Theory, vol. 50, no. 9, pp. 2010-2021, Sept. 2004.
[25] [Online]. Available: http://lthcwww.epfl.ch/research/ldpcopt/
[26] X.-Y. Hu, E. Eleftheriou, and D. M. Arnold, "Regular and irregular progressive edge-growth Tanner graphs," IEEE Trans. Inf. Theory, vol. 51, pp. 386-398, Jan. 2005.
[27] E. Sharon and S. Litsyn, "Constructing LDPC codes by error minimization progressive edge growth," IEEE Trans. Commun., vol. 56, pp. 359-368, Mar. 2008.
[28] M. Ardakani, T. H. Chan, and F. R. Kschischang, "EXIT-chart properties of the highest-rate LDPC code with desired convergence behavior," IEEE Commun. Lett., vol. 9, no. 1, pp. 52-54, Jan. 2005.
[29] A. Ashikhmin, G. Kramer, and S. ten Brink, "Extrinsic information transfer functions: model and erasure channel properties," IEEE Trans. Inf. Theory, vol. 50, pp. 2657-2673, Nov. 2004.
[30] C. Di, T. J. Richardson, and R. L. Urbanke, "Weight distribution of low-density parity-check codes," IEEE Trans. Inf. Theory, vol. 52, no. 11, pp. 4839-4855, Nov. 2006.
[31] A. Orlitsky, K. Viswanathan, and J. Zhang, "Stopping set distribution of LDPC code ensembles," IEEE Trans. Inf. Theory, vol. 51, no. 3, pp. 929-953, Mar. 2005.
[32] T. Richardson and R. Urbanke, Modern Coding Theory. Cambridge, UK: Cambridge University Press, 2008.
Benjamin Smith received the B.A.Sc. degree from the University of Windsor, Windsor, ON, Canada, in 2004, and the M.A.Sc. degree from the University of Toronto, Toronto, ON, Canada, in 2006, both in electrical engineering. He is currently working toward the Ph.D. degree in electrical engineering at the University of Toronto. His current research interests include coding and information theory for fiber-optic communication systems. He is a recipient of the Natural Sciences and Engineering Research Council (NSERC) of Canada Postgraduate Scholarship.
Frank R. Kschischang received the B.A.Sc. degree (with honors) from the University of British Columbia, Vancouver, BC, Canada, in 1985 and the M.A.Sc. and Ph.D. degrees from the University of Toronto, Toronto, ON, Canada, in 1988 and 1991, respectively, all in electrical engineering. He is a Professor of Electrical and Computer Engineering and Canada Research Chair in Communication Algorithms at the University of Toronto, where he has been a faculty member since 1991. During 1997-98, he was a visiting scientist at MIT, Cambridge, MA, and in 2005 he was a visiting professor at ETH Zurich. His research interests are focused on the area of channel coding techniques.
He was the recipient of the Ontario Premier's Research Excellence Award. From 1997 to 2000, he served as an Associate Editor for Coding Theory for the IEEE TRANSACTIONS ON INFORMATION THEORY. He also served as technical program co-chair for the 2004 IEEE International Symposium on Information Theory (ISIT), Chicago, and as general co-chair for ISIT 2008, Toronto.
Wei Yu (S'97-M'02) received the B.A.Sc. degree in Computer Engineering and Mathematics from the University of Waterloo, Waterloo, Ontario, Canada, in 1997 and the M.S. and Ph.D. degrees in Electrical Engineering from Stanford University, Stanford, CA, in 1998 and 2002, respectively. Since 2002, he has been with the Electrical and Computer Engineering Department at the University of Toronto, Toronto, Ontario, Canada, where he is now an Associate Professor and holds a Canada Research Chair in Information Theory and Digital Communications. His main research interests include multiuser information theory, optimization, wireless communications, and broadband access networks.
Prof. Wei Yu is currently an Editor for the IEEE TRANSACTIONS ON COMMUNICATIONS. He was an Editor for the IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS from 2004 to 2007, a Guest Editor of the IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS for a special issue on "Nonlinear Optimization of Communications Systems" in 2006, and a Guest Editor of the EURASIP JOURNAL ON APPLIED SIGNAL PROCESSING for a special issue on "Advanced Signal Processing for Digital Subscriber Lines" in 2005. He is a member of the Signal Processing for Communications Technical Committee of the IEEE Signal Processing Society. He received the IEEE Signal Processing Society 2008 Best Paper Award, the McCharles Prize for Early Career Research Distinction in 2008, the Early Career Teaching Award from the Faculty of Applied Science and Engineering, University of Toronto, in 2007, and the Early Researcher Award from Ontario in 2006. He is a registered Professional Engineer in Ontario.
Masoud Ardakani received the B.Sc. degree from Isfahan University of Technology in 1994, the M.Sc. degree from Tehran University in 1997, and the Ph.D. degree from the University of Toronto in 2004, all in Electrical Engineering. He was a Postdoctoral Fellow at the University of Toronto from 2004 to 2005. He is currently an Associate Professor of Electrical and Computer Engineering and Alberta Ingenuity New Faculty at the University of Alberta, where he holds an Informatics Circle of Research Excellence (iCORE) Junior Research Chair in wireless communications. His research interests are in the general area of digital communications, codes defined on graphs, and iterative decoding techniques. He serves as an Associate Editor for the IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS and the IEEE COMMUNICATIONS LETTERS.