Digital Signal Processing 21 (2011) 491–496


Recursive inverse adaptive filtering algorithm

Mohammad Shukri Ahmad a,b,∗, Osman Kukrer a, Aykut Hocanin a

a Department of Electrical and Electronic Engineering, Eastern Mediterranean University, Gazimagusa, via Mersin 10, North Cyprus, Turkey
b Department of Electrical and Electronic Engineering, European University of Lefke, Gemikonagi, via Mersin 10, North Cyprus, Turkey

Article history: Available online 9 March 2011

Keywords: Adaptive filters; RRLS; TDVSS; SFTRLS

Abstract

In this paper, a new FIR adaptive filtering algorithm is proposed. The approach uses a variable step-size and the instantaneous value of the autocorrelation matrix in the coefficient update equation, which leads to an improved performance. A convergence analysis of the algorithm is presented. Simulation results show that the algorithm performs better than the Transform Domain LMS with Variable Step-Size (TDVSS) in stationary Additive White Gaussian Noise (AWGN) and Additive Correlated Gaussian Noise (ACGN) environments in a system identification setting. It is shown that the algorithm performs better than the RLS and very similarly to the RRLS algorithm, with a considerable reduction in computational complexity. Additionally, the performance of the proposed algorithm is shown to be superior to that of the Stabilized Fast Transversal Recursive Least Squares (SFTRLS) algorithm under the same conditions.

© 2011 Elsevier Inc. All rights reserved.

1. Introduction

Adaptive filtering is one of the well-established topics in signal processing [1,2]. The RLS algorithm [3–5] offers superior speed of convergence compared to the LMS algorithm and its variants, especially in highly correlated and nonstationary environments. The SFTRLS algorithm has a lower computational complexity than the RLS, but it requires the initialization of certain stability and convergence parameters. The RRLS algorithm is the robust version of the RLS, but it suffers from high computational complexity. In this paper, we propose a new recursive algorithm with performance comparable to that of the RRLS algorithm, but with much lower complexity.

The paper is organized as follows. In Section 2, the new recursive algorithm is derived. In Section 3, the convergence analysis of the algorithm is presented. In Section 4, simulation results that compare the performance of the proposed algorithm with that of the RRLS, TDVSS and SFTRLS algorithms in different noise environments are given. Finally, in the last section, conclusions are drawn.

2. The recursive inverse algorithm (RI)

In this section, the recursive update equations for the correlation matrices which are used in the Newton-LMS and RRLS algorithms will first be presented. The derivation of the RI algorithm will then follow.

* Corresponding author at: Department of Electrical and Electronic Engineering, Eastern Mediterranean University, Gazimagusa, via Mersin 10, North Cyprus, Turkey.
E-mail addresses: [email protected], [email protected] (M.S. Ahmad), [email protected] (O. Kukrer), [email protected] (A. Hocanin).


The Wiener–Hopf equation [1] leads to the optimum solution for the FIR filter coefficients. The coefficients are given by

w(k) = R^{-1}(k) p(k),   (1)

where k is the time parameter (k = 1, 2, ...), w(k) is the filter weight vector calculated at time k, R(k) is the estimate of the tap-input vector autocorrelation matrix, and p(k) is the estimate of the cross-correlation vector between the desired output signal and the tap-input vector. The solution of (1) is required at each iteration where the filter coefficients are updated. As an additional requirement, the autocorrelation matrix should be nonsingular at each iteration [1].

Reconsider (1), where the correlations are estimated recursively [2] as:

R(k) = β R(k − 1) + x(k) x^T(k),   (2)

p(k) = β p(k − 1) + d(k) x(k),   (3)

where β is the forgetting factor, which is usually very close to one. The estimate in (2) is necessarily singular for k < N, and therefore Eq. (1) cannot be applied. The estimate becomes nonsingular for k ≥ N. Under this condition, substituting (2) and (3) in (1) yields

w(k) = {β R(k − 1) + x(k) x^T(k)}^{-1} [β p(k − 1) + d(k) x(k)],   (4)

by using the matrix inversion lemma [6], Eq. (4) becomes

w(k) = (1/β) [R^{-1}(k − 1) − (R^{-1}(k − 1) x(k) x^T(k) R^{-1}(k − 1)) / (β + x^T(k) R^{-1}(k − 1) x(k))] × (β p(k − 1) + d(k) x(k))

= [I − (R^{-1}(k − 1) x(k) x^T(k)) / (β + x^T(k) R^{-1}(k − 1) x(k))] w(k − 1)
+ (1/β) [({β + x^T(k) R^{-1}(k − 1) x(k)} R^{-1}(k − 1) x(k)) / (β + x^T(k) R^{-1}(k − 1) x(k))] d(k)
− (1/β) [(R^{-1}(k − 1) x(k) x^T(k) R^{-1}(k − 1) x(k)) / (β + x^T(k) R^{-1}(k − 1) x(k))] d(k).   (5)

Rearranging, (5) becomes

w(k) = [I − (R^{-1}(k − 1) x(k) x^T(k)) / (β + x^T(k) R^{-1}(k − 1) x(k))] w(k − 1) + [(R^{-1}(k − 1) x(k)) / (β + x^T(k) R^{-1}(k − 1) x(k))] d(k)

= w(k − 1) + μ(k) R^{-1}(k − 1) x(k) e(k).   (6)

The update equation in (6) is the Newton-LMS algorithm [7], where the a priori filtering error is

e(k) = d(k) − x^T(k) w(k − 1)

and the variable step-size is

μ(k) = 1 / (β + x^T(k) R^{-1}(k − 1) x(k)).

Newton-LMS is equivalent to the Wiener solution with exponential-forgetting window estimation of the autocorrelation and cross-correlation. However, Newton-LMS requires the inverse of the autocorrelation matrix. This is avoided in the RI algorithm, derived below.
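To make the update concrete, the following is a minimal NumPy sketch of one Newton-LMS step per (6), propagating R^{-1} with the matrix inversion lemma. The function name, argument layout and any initialization choices (e.g. R^{-1}(0) = ε^{-1}I) are illustrative assumptions, not taken from the paper.

    import numpy as np

    def newton_lms_step(R_inv, w, x, d, beta):
        # One Newton-LMS update, Eq. (6); R_inv holds R^{-1}(k-1).
        Rx = R_inv @ x                       # R^{-1}(k-1) x(k)
        denom = beta + x @ Rx                # beta + x^T(k) R^{-1}(k-1) x(k)
        e = d - x @ w                        # a priori error e(k)
        w = w + (Rx / denom) * e             # w(k) = w(k-1) + mu(k) R^{-1}(k-1) x(k) e(k)
        R_inv = (R_inv - np.outer(Rx, Rx) / denom) / beta   # matrix inversion lemma for Eq. (2)
        return R_inv, w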

The RRLS algorithm is similar to Newton-LMS, except that the updating of the correlations is not performed directly using (2) and (3). Instead, the inverse autocorrelation matrix is updated to avoid inverting it at each step. Consider solving the Wiener equation (1) iteratively at each time step k. Specifically, the following iteration converges to the Wiener solution,

w_{n+1}(k) = [I − μ R(k)] w_n(k) + μ p(k),   n = 0, 1, 2, ...   (7)

if μ satisfies the convergence criterion [1]

μ < 2 / λmax(R(k)).   (8)

Considering the update equations for the correlations, and taking the expectation of R(k) in (2), we get:

R̄(k + 1) = β R̄(k) + R_xx,   (9)

where R_xx = E{x(k) x^T(k)} and R̄(k) = E{R(k)}. Solving (9) yields

R̄(k) = ((1 − β^k) / (1 − β)) R_xx,   (10)

and as k → ∞,

R̄(∞) = (1 / (1 − β)) R_xx.   (11)

Eq. (10) implies that the eigenvalues of the estimated autocorrelation matrix increase exponentially and in the limit become 1/(1 − β) times those of the original matrix. The implication is that, since μ must be chosen to satisfy (8) in the limit as well, we get:

μ < 2(1 − β) / λmax(R_xx),   (12)

Eq. (12) restricts μ to values much smaller than the one required in (7) if R_xx were used instead of R̄(k). Hence, it would be advantageous to make the step-size μ variable so that

μ(k) < 2 / λmax(R̄(k)) = ((1 − β) / (1 − β^k)) · (2 / λmax(R_xx)) = μmax / (1 − β^k),   (13)

or equivalently,

μ(k) = μ0 / (1 − β^k),   where μ0 < μmax.   (14)

The iteration in (7) has a high computational cost. Therefore, with the variable step-size, only one iteration at each time step may be sufficient. Finally, the weight update equation for the proposed RI algorithm becomes:

w(k) = [I − μ(k) R(k)] w(k − 1) + μ(k) p(k).   (15)

The RI algorithm has a major advantage compared to the RRLS algorithm in that it does not require updating the inverse autocorrelation matrix. Also, its computational complexity is much lower than that of the RRLS, as will be shown in Section 4. Due to the update of the inverse autocorrelation matrix, RLS-type algorithms may face numerical stability problems [2], which is not the case for the RI algorithm.
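For illustration, the following is a minimal NumPy sketch of the complete RI recursion, Eqs. (2), (3), (14) and (15). The zero initialization of R, p and w and the sample-indexing convention are our own assumptions.

    import numpy as np

    def ri_filter(x, d, N, beta=0.991, mu0=0.00146):
        # Recursive inverse (RI) adaptive filter, Eqs. (2), (3), (14) and (15).
        R = np.zeros((N, N))   # autocorrelation estimate
        p = np.zeros(N)        # cross-correlation estimate
        w = np.zeros(N)        # filter weights
        xk = np.zeros(N)       # tap-input vector
        for k in range(1, len(x) + 1):
            xk = np.r_[x[k - 1], xk[:-1]]     # shift the new sample into the taps
            R = beta * R + np.outer(xk, xk)   # Eq. (2)
            p = beta * p + d[k - 1] * xk      # Eq. (3)
            mu = mu0 / (1.0 - beta ** k)      # Eq. (14)
            w = w + mu * (p - R @ w)          # Eq. (15), rearranged
        return w

Note that no matrix inverse appears anywhere in the loop; each iteration is dominated by the rank-one update of R and the product R @ w, consistent with the O(N²) complexity reported in Section 4.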

3. Convergence analysis of the RI algorithm

By solving (2) we get:

R_k = Σ_{i=0}^{k} β^{k−i} x(i) x^T(i).   (16)

Substituting (16) in (15) gives

w_k = [I − μ_k Σ_{i=0}^{k} β^{k−i} x(i) x^T(i)] w_{k−1} + μ_k p_k,   (17)

where

x(i) x^T(i) = R_xx + δR(i),   (18)

and δR(i) is the random part of the autocorrelation matrix. Substituting (18) in (16) gives

R_k = Σ_{i=0}^{k} β^{k−i} R_xx + Σ_{i=0}^{k} β^{k−i} δR(i) = ((1 − β^{k+1}) / (1 − β)) R_xx + Σ_{i=0}^{k} β^{k−i} δR(i).   (19)

Letting

μ_k = μ0 (1 − β) / (1 − β^{k+1}),

and multiplying both sides of (19) by μ_k yields

μ_k R_k = μ0 R_xx + (μ0 (1 − β) / (1 − β^{k+1})) Σ_{i=0}^{k} β^{k−i} δR(i).   (20)

Define

δA_k = (μ0 (1 − β) / (1 − β^{k+1})) Σ_{i=0}^{k} β^{k−i} δR(i)

and A_k ≡ μ_k R_k, so that

A_k = μ0 R_xx + δA_k.   (21)

Following the same procedure for the cross-correlation vector results in

μ_k p_k = μ0 p + δp_k.

Now, substituting (21) and the above relation in (17) yields

w_k = [I − (μ0 R_xx + δA_k)] w_{k−1} + μ0 p + δp_k.   (22)

Let

w_k = w̄_k + δw_k,   (23)

where w̄_k = E{w_k} and δw_k is the stochastic part of w_k. By substituting (23) in (22) we get

w̄_k + δw_k = [I − μ0 R_xx](w̄_{k−1} + δw_{k−1}) − δA_k(w̄_{k−1} + δw_{k−1}) + μ0 p + δp_k.   (24)

Subdividing (24) into deterministic and stochastic components,

w̄_k = [I − μ0 R_xx] w̄_{k−1} + μ0 p   (25)

and

δw_k = [I − (μ0 R_xx + δA_k)] δw_{k−1} + δp_k − δA_k w̄_{k−1}   (26)

are obtained. Now, let

w̄_k = w_0 + Δw_k,   (27)

where w_0 is the optimum solution of w_k. Keeping in mind that

p = R_xx w_0,   (28)

substituting (27) and (28) in (25) gives

w_0 + Δw_k = [I − μ0 R_xx](w_0 + Δw_{k−1}) + μ0 p
= w_0 − μ0 R_xx w_0 + [I − μ0 R_xx] Δw_{k−1} + μ0 R_xx w_0.   (29)

Rearranging (29),

Δw_k = [I − μ0 R_xx] Δw_{k−1}   (30)

is obtained. Eq. (30) is a linear time-invariant equation and its solution is

Δw_k = [I − μ0 R_xx]^k Δw(0).   (31)

From (31) we notice that Δw_k → 0 as k → ∞ if |λ(I − μ0 R_xx)| < 1, which shows that the coefficients converge to their optimum solution in the mean sense.

Now, rearranging, (26) becomes

δw_k = [I − μ0 R_xx] δw_{k−1} − δA_k w_{k−1} + δp_k.   (32)

Let us consider the last two terms on the right-hand side of (32),

−δA_k w_{k−1} + δp_k = −μ_k Σ_{i=0}^{k} β^{k−i} δR(i) w_{k−1} + μ_k Σ_{i=0}^{k} β^{k−i} δp(i).   (33)

Substituting (19), (22), (23) and (27) in (33) gives

−δA_k w_{k−1} + δp_k
= −μ_k Σ_{i=0}^{k} β^{k−i} (x(i) x^T(i) − R_xx)(w_0 + Δw_{k−1} + δw_{k−1}) + μ_k Σ_{i=0}^{k} β^{k−i} (x(i) d(i) − p)
= μ_k Σ_{i=0}^{k} β^{k−i} x(i)(d(i) − x^T(i) w_0) − μ_k Σ_{i=0}^{k} β^{k−i} p + μ_k Σ_{i=0}^{k} β^{k−i} R_xx w_0 + μ_k Σ_{i=0}^{k} β^{k−i} R_xx (Δw_{k−1} + δw_{k−1}) − μ_k Σ_{i=0}^{k} β^{k−i} x(i) x^T(i)(Δw_{k−1} + δw_{k−1})
= μ_k Σ_{i=0}^{k} β^{k−i} x(i) e_0(i) − μ_k Σ_{i=0}^{k} β^{k−i} δR(i)(Δw_{k−1} + δw_{k−1}).   (34)

The second term on the right-hand side of Eq. (34) is a weighted time average and, by the ergodicity assumption, is equal to its expected value. Furthermore, this term can be neglected based on the independence assumption used in the analysis of the LMS algorithm [1]. Now, defining the error to be

e_0(i) = d(i) − x^T(i) w_0,

and using the orthogonality property between x(k) and e_0(k), the first term in Eq. (34), which is a weighted time average and is equal to the expectation for an ergodic process, becomes zero. (The necessary and sufficient condition for the cost function to attain its minimum value is for the corresponding value of the estimation error e_0(k) to be orthogonal to each input sample that enters into the estimation of the desired response at time k [1].) Therefore, (32) becomes

δw_k = [I − μ0 R_xx] δw_{k−1}.   (35)

In order to find the covariance matrix of δw_k,

E{δw_k δw_k^T} = [I − μ0 R_xx] E{δw_{k−1} δw_{k−1}^T} [I − μ0 R_xx]   (36)

is obtained. Defining Q_k = E{δw_k δw_k^T},

Q_k = [I − μ0 R_xx] Q_{k−1} [I − μ0 R_xx].   (37)

Solving (37) yields

Q_k = [I − μ0 R_xx]^k Q_0 [I − μ0 R_xx]^k.   (38)

It can be observed that Q_k → 0 as k → ∞ if |λ(I − μ0 R_xx)| < 1. Along with the previous result that Δw_k → 0 as k → ∞ under the same condition, this assures convergence in the mean-square sense.
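As a quick numerical illustration of (37)–(38), one can iterate the covariance recursion for an arbitrary symmetric positive-definite R_xx (the example values below are ours, not from the paper) and observe Q_k → 0 whenever |λ(I − μ0 R_xx)| < 1:

    import numpy as np

    rng = np.random.default_rng(0)
    N = 4
    A = rng.standard_normal((N, N))
    Rxx = A @ A.T + N * np.eye(N)              # an arbitrary SPD "autocorrelation"
    mu0 = 1.0 / np.linalg.eigvalsh(Rxx).max()  # satisfies mu0 < 2 / lambda_max
    F = np.eye(N) - mu0 * Rxx
    assert np.abs(np.linalg.eigvals(F)).max() < 1
    Q = np.eye(N)                              # arbitrary initial covariance Q_0
    for _ in range(200):
        Q = F @ Q @ F                          # Eq. (37); F is symmetric here
    print(np.linalg.norm(Q))                   # decays towards 0, per Eq. (38)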

4. Simulation results

In the simulations, the performance of the proposed RI algorithm is compared to those of the RRLS [8], TDVSS [9], RLS and SFTRLS [10, p. 342] algorithms in the system identification problem described in [9]. The filter length for all algorithms is N = 16 taps (for Sections 4.1 and 4.2 it is selected to be equal to the length of the unknown system shown in Fig. 1) [9]. The received signal was generated using [9]:

x(n) = 1.79 x(n − 1) − 1.85 x(n − 2) + 1.27 x(n − 3) − 0.41 x(n − 4) + v_0(n),   (39)

where v_0(n) is a Gaussian process with zero mean and variance σ² = 0.3849. The unknown system coefficients are shown in Fig. 1.
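For reproducibility, a sketch generating the input of (39); the number of samples and the seed are arbitrary choices of ours.

    import numpy as np

    rng = np.random.default_rng(1)
    n_samples = 2000
    v0 = rng.normal(0.0, np.sqrt(0.3849), n_samples)  # zero mean, variance 0.3849
    x = np.zeros(n_samples)
    for n in range(n_samples):
        x[n] = v0[n]                                  # AR(4) recursion of Eq. (39)
        for j, a in enumerate((1.79, -1.85, 1.27, -0.41), start=1):
            if n - j >= 0:
                x[n] += a * x[n - j]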


Fig. 1. The unknown system impulse response.

Fig. 2. The ensemble MSE for RI, RRLS, TDVSS, RLS and SFTRLS in AWGN.

4.1. Additive white Gaussian noise

In this experiment, the SNR is held constant (SNR = 37 dB). The SNR is selected high to meet the requirements of the system identification problem. The signal is assumed to be corrupted by an AWGN process. Simulations were done with the following parameters. For the RI algorithm: β = 0.991, μ0 = 0.00146, where μ0 is selected to be less than 2/λmax. As a result of Property 8 [1, p. 816], λmax can be assumed to be less than or equal to the maximum power spectral density of the input signal (λmax ≤ Smax). Therefore, the maximum eigenvalue of the autocorrelation matrix can be estimated by computing the periodogram of the input signal. For the RRLS algorithm: β = 0.991. For the TDVSS algorithm: α = 0.99, β = 0.9, ε = 0.025, μmin = 0.0047, μmax = 0.05, γ = 0.001, L = 10. For the SFTRLS algorithm [10]: λ = 0.991, κ1 = 1.5, κ2 = 2.5 and κ3 = 1. For the RLS algorithm: λ = 0.991. Fig. 2 shows that even though the RRLS converges faster at first, both the RI and the RRLS algorithms finally converge to the same mean-square error (MSE = −50 dB) at approximately 500 iterations. On the other hand, the RLS, TDVSS and SFTRLS converge to higher MSEs of −48 dB, −40 dB and −15 dB, respectively. Even though the RRLS and the proposed algorithm have similar performances in AWGN, the computational complexity of the proposed algorithm is much lower, as shown in Table 1.
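A sketch of this periodogram-based choice of μ0 follows; the FFT length, the safety factor and the function name are our own illustrative assumptions, with the scaling taken from the bound μmax = 2(1 − β)/λmax in (12)–(14).

    import numpy as np

    def mu0_from_periodogram(x, beta, safety=0.1, nfft=4096):
        # Bound lambda_max by the peak of the periodogram (lambda_max <= S_max),
        # then back off from mu_max = 2(1 - beta)/lambda_max by a safety factor.
        S = np.abs(np.fft.rfft(x, n=nfft)) ** 2 / len(x)   # periodogram of the input
        lam_max = S.max()
        return safety * 2.0 * (1.0 - beta) / lam_max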

Table 1
Computational complexity of RI, RLS, RRLS and SFTRLS.

             RI                  RLS             RRLS               SFTRLS
Mult./Div.   (5/2)N² + (7/2)N    3N² + 11N + 9   3N³ + 10N² + 2N    9N + 13
Add./Sub.    2N² + N             3N² + 7N + 4    N³ + 6N² − N       9N + 1
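For concreteness, evaluating the multiplication/division rows of Table 1 at the filter length used in the simulations (N = 16); the script is ours, the formulas are from the table.

    N = 16
    print("RI:    ", 5 * N**2 // 2 + 7 * N // 2)    # (5/2)N^2 + (7/2)N -> 696
    print("RLS:   ", 3 * N**2 + 11 * N + 9)         # -> 953
    print("RRLS:  ", 3 * N**3 + 10 * N**2 + 2 * N)  # -> 14880
    print("SFTRLS:", 9 * N + 13)                    # -> 157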

Fig. 3. The ensemble MSE for RI, RRLS, TDVSS, RLS and SFTRLS in ACGN.

4.2. Additive correlated Gaussian noise

In order to test the performance of the algorithms mentioned above, the SNR is held constant (SNR = 30 dB) and the signal x(n) is assumed to be corrupted by an ACGN process. A correlated Gaussian noise process is generated by using the first-order autoregressive model (AR(1)) v(k + 1) = ρ v(k) + v_0(k), where v_0(k) is a white Gaussian noise process and ρ is the correlation parameter (ρ = 0.7). Simulations were done with the following parameters. For the RI algorithm: β = 0.991, μ0 = 0.00146. For the RRLS algorithm: β = 0.991. For the TDVSS algorithm: α = 0.99, β = 0.9, ε = 0.025, μmin = 0.0047, μmax = 0.05, γ = 0.001, L = 10. For the SFTRLS algorithm: λ = 0.991, κ1 = 1.5, κ2 = 2.5 and κ3 = 1. For the RLS algorithm: λ = 0.991. Fig. 3 shows that both the RI and RRLS algorithms converge to the same MSE of −46 dB, whereas the RLS, TDVSS and SFTRLS converge to higher MSEs of −40 dB, −36 dB and −9.5 dB, respectively. This shows the clear advantage of the RI over the TDVSS and SFTRLS algorithms. Even though the proposed RI algorithm has a performance similar to that of the RRLS algorithm in ACGN, it offers a considerable reduction in computational complexity. It should be noted that the RI algorithm does not need inversion of the autocorrelation matrix, which guarantees its numerical stability, whereas the RLS algorithm may face numerical stability problems due to the loss of Hermitian symmetry and positive definiteness of the inverse autocorrelation matrix [11].
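A sketch of this AR(1) correlated-noise generator; the sample count, seed and unit driving-noise variance are our own choices.

    import numpy as np

    rng = np.random.default_rng(2)
    rho, n_samples = 0.7, 2000
    v0 = rng.standard_normal(n_samples)   # white Gaussian driving noise v0(k)
    v = np.zeros(n_samples)
    for k in range(n_samples - 1):
        v[k + 1] = rho * v[k] + v0[k]     # v(k+1) = rho * v(k) + v0(k)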

4.3. Effect of filter parameters on steady-state MSE

In order to study the effect of changing the filter parameters, μ0 and β have been varied while all the other parameters of the RI algorithm in Section 4.1 were kept the same. Fig. 4 shows the steady-state MSE for different values of μ0: reducing the value of μ0 slows the convergence of the RI algorithm to the steady state, but the algorithm converges to the same MSE in all cases, as expected. Secondly, β has been varied while all the other parameters of the RI algorithm in Section 4.1 were kept the same. Fig. 5 shows the steady-state MSE for different values of β. In Fig. 5 we note that increasing the value of β leads to a


Fig. 4. The ensemble MSE for the RI algorithm in AWGN with different values of μ0.

Fig. 5. The ensemble MSE for the RI algorithm in AWGN with different values of β .

better performance in terms of MSE, but only up to a certain value. Beyond that value, increasing β will not reduce the MSE further, but will result in unstable behavior of the algorithm, as shown in Fig. 5 for β = 0.9929.

To study the effect of changing the filter length, N was changed while all the other parameters in Section 4.1 were kept the same for the RI algorithm. Fig. 6 shows the steady-state MSE with respect to different tap-lengths. It is clear from Fig. 6 that after a certain value of tap-length (in this case N = 16), changing the filter length will not affect the steady-state MSE. However, increasing N will increase the number of computations needed for updating the tap-weight vector, as shown in Fig. 7.

5. Conclusion

A new FIR adaptive filtering algorithm is introduced. The approach uses a variable step-size and the instantaneous value of the autocorrelation matrix (R(k)) in the coefficient update equation. The performance of the RI, RRLS, TDVSS, RLS and SFTRLS algorithms has been compared in both AWGN and ACGN environments. Under the same conditions, the RI algorithm performs better than the RLS and very similarly to the RRLS algorithm, but has lower computational complexity (O(N²) for RI versus O(N³) for RRLS). The SFTRLS algorithm has to be initialized carefully to assure convergence [10]. Additionally, the stability parameter λ has

Fig. 6. The curve of steady-state MSE with respect to the tap-length.

Fig. 7. Computational complexity of RI.

to be chosen such that its value is very close to 1. However, this poses a limitation for the SFTRLS, since small values of λ are required in nonstationary environments. On the other hand, the RI algorithm outperforms the TDVSS and the SFTRLS algorithms in both AWGN and ACGN environments.

References

[1] S. Haykin, Adaptive Filter Theory, Prentice Hall, Upper Saddle River, NJ, 2002.
[2] G.O. Glentis, K. Berberidis, S. Theodoridis, Efficient least squares adaptive algorithms for FIR transversal filtering, IEEE Signal Process. Mag. (July 1999) 13–41.
[3] S. Qiao, Fast adaptive RLS algorithms: a generalized inverse approach and analysis, IEEE Trans. Signal Process. 39 (6) (June 1991) 1455–1459.
[4] S. Haykin, A.H. Sayed, J.R. Zeidler, P. Yee, P.C. Wei, Adaptive tracking of linear time-variant systems by extended RLS algorithms, IEEE Trans. Signal Process. 45 (5) (May 1997) 1118–1128.
[5] S. Makino, Y. Kaneda, A new RLS algorithm based on the variation characteristics of a room impulse response, in: IEEE International Conference on Acoustics, Speech, and Signal Processing, vol. 3, 19–22 April 1994, pp. 373–376.
[6] D.J. Tylavsky, G.R.L. Sohie, Generalization of the matrix inversion lemma, Proc. IEEE 74 (7) (July 1986) 1050–1052.
[7] S. Haykin, Kalman Filtering and Neural Networks, John Wiley & Sons, New York, 2001.
[8] M.M. Chansarkar, U.B. Desai, A robust recursive least squares algorithm, IEEE Trans. Signal Process. 45 (7) (July 1997) 1726–1735.
[9] R.C. Bilcu, P. Kuosmanen, K. Egiazarian, A transform domain LMS adaptive filter with variable step-size, IEEE Signal Process. Lett. 9 (2) (February 2002) 51–53.
[10] P.S.R. Diniz, Adaptive Filtering Algorithms and Practical Implementation, 3rd ed., Springer, New York, 2008.
[11] A.H. Sayed, Adaptive Filters, John Wiley & Sons, NJ, 2008.

Mohammad Shukri Ahmad received the B.Sc. and M.Sc. degrees in Electrical and Electronics Engineering from the Eastern Mediterranean University (North Cyprus, Turkey) in 2006 and 2007, respectively, and he is currently working towards the Ph.D. degree in adaptive filtering for noise reduction. From 2006 to 2010, he was a Teaching Assistant in the Electrical and Electronics Engineering department at EMU; in 2010, he joined the Department of Electrical and Electronic Engineering at the European University of Lefke as a Senior Lecturer. His research interests include signal processing, adaptive filters, image processing, control systems and communications systems.

Osman Kukrer (M'92) was born in 1956 in Larnaca, Cyprus. He received the B.S., M.S., and Ph.D. degrees from the Middle East Technical University (METU), Ankara, Turkey, in 1979, 1982, and 1987, respectively, all in electrical engineering. From 1979 to 1985, he was a Research Assistant in the Department of Electrical and Electronics Engineering, METU. From 1985 to 1986, he was with the Department of Electrical and Electronics Engineering, Brunel University, London, UK. He is currently a Professor in the Department of Electrical and Electronic Engineering, Eastern Mediterranean University, Gazimagusa, TRNC, Turkey. His research interests include adaptive signal processing, power electronics and control systems.

Aykut Hocanin received the B.S. degree in electrical and computer engineering from Rice University, Houston, TX, USA in 1992 and the M.E. degree from Texas A&M University, College Station, TX, USA in 1993. He received the Ph.D. degree in electrical and electronics engineering from Bogazici University, Istanbul, Turkey in 2000. He received the Cyprus America Scholarship Program (CASP) scholarship from AMIDEAST for his undergraduate education. He also received the Fahir Ilkel Ph.D. scholarship from Bogazici University. He was a teaching assistant at Texas A&M, Koc and Isik Universities. He joined the Electrical and Electronic Engineering Department of the Eastern Mediterranean University, Gazimagusa, North Cyprus, in 2000 as an Assistant Professor. In 2003, he became the Vice Chairman of the same department. His current research interests include error control coding, multi-user techniques for CDMA and adaptive filtering. He is a senior member of both the Institute of Electrical and Electronics Engineers (IEEE) and the Association for Computing Machinery (ACM). He has supervised 6 M.S. theses and is currently supervising 3 M.S. and 3 Ph.D. theses. In 2007, Aykut Hocanin was promoted to Associate Professor and later became the chairman of the same department.