
Maximum Likelihood Processing for Arrays with Partially Unknown Sensor Gains and Phases

Minghui Li and Yilong Lu
Intelligent Systems Center

Nanyang Technological University, 50 Nanyang Drive, Singapore 637553
E-mail: {EMHLI, EYLU}@ntu.edu.sg

Abstract - This paper addresses the problem of source direction-of-arrival (DOA) estimation using a sensor array, where some of the sensors are perfectly calibrated, while others are uncalibrated. An algorithm is proposed for estimating the source directions in addition to the estimation of unknown array parameters such as sensor gains and phases, as a way of performing array self-calibration. The cost function is an extension of the maximum likelihood (ML) criteria that were originally developed for DOA estimation with a perfectly calibrated array. A particle swarm optimization (PSO) algorithm is used to explore the high-dimensional problem space and find the global minimum of the cost function. The design of the PSO is a combination of the problem-independent kernel and some newly introduced problem-specific features such as search space mapping, particle velocity control, and particle position clipping. This architecture plus properly selected parameters make the PSO highly flexible and reusable, while being sufficiently specific and effective in the current application. Simulation results demonstrate that the proposed technique may produce more accurate estimates of the source bearings and unknown array parameters in a cheaper way as compared with other popular methods, with the root-mean-squared error (RMSE) approaching and asymptotically attaining the Cramer Rao bound (CRB) even in unfavorable conditions.

I. INTRODUCTION

Source direction-of-arrival (DOA) estimation using a partially calibrated array (PCA) with a mixture of two types of sensors, calibrated and uncalibrated, is an important but challenging problem that arises in many practical applications. For example, we may want to augment a well-constructed array by placing a number of additional sensors to enlarge the aperture. Since these elements are placed in the field, there is no opportunity to calibrate them. Another example is the situation where one or more array elements are damaged. In these cases, the response of the additional or replaced elements may be poorly known or completely unknown due to amplitude and phase mismatch of the receivers, inaccurate sensor location, imperfect sensor gain or phase characteristics, or a combination of these effects, while the other elements are well calibrated. Weiss and Friedlander [1] confirm that the direction finding performance of a calibrated array can be enhanced by the addition of completely uncalibrated elements.

Since almost all high-resolution direction-finding techniques, such as MUSIC, ESPRIT, WSF, and maximum likelihood (ML) algorithms [2], are sensitive even to small array manifold model errors and require perfect array calibration, direct application of these techniques to PCAs for DOA estimation seems unrealistic [3]. For completely uncalibrated arrays, estimation of the steering vectors and signal parameters is only possible for non-Gaussian signals by using high-order statistics of the received data [4]. However, by introducing some additional constraints into the problem, estimation based on second-order moments is possible. The previous works can be classified into three categories according to the problem formulation. Some researchers attempt to estimate the signal DOAs without jointly exploiting the knowledge of the unknown manifold parameters, using particular array structures [5]-[6].

The methods in the second category are derived for estimating the sensor gain and phase characteristics when certain source parameters are known exactly [7]-[8]. Another main approach is to jointly estimate the source DOAs and unknown array parameters under certain constraints. In [9]-[10], algorithms are derived under the assumption that the uncalibrated sensors have unknown angularly independent gains and arbitrary phases, and that the sources are uncorrelated.

In this paper, we develop an algorithm based on the maximum likelihood methodology for joint estimation of source DOAs and gains and phases of uncalibrated sensors in a PCA. The cost function is an extension of the ML criteria that were originally developed for angle estimation with a perfectly calibrated array. As in [9]-[10], the uncalibrated array elements are assumed to have unknown angularly independent gains, and arbitrary and unknown phases. However, the sources can be correlated, even coherent.

In most of the published works, the estimates of the unknown parameters are computed by optimizing a complicated nonlinear cost function, and Newton-type techniques are preferred to global search methods as the computing tools. The main reason is that conventional global optimization algorithms such as genetic algorithms (GA) and simulated annealing (SA) tend to converge slowly and require huge computation [11]. However, Newton-type methods are intrinsically local search techniques, where global convergence cannot be guaranteed and sufficiently good initialization is crucial for success. Furthermore, a Newton-type method is not always a guarantee of computational efficiency.



Instead of using a Newton-type procedure, we present a more reliable and robust global search algorithm, the particle swarm optimization (PSO) algorithm, for finding the minimum of the cost function. PSO is a recent addition to evolutionary algorithms, first introduced by Eberhart and Kennedy in 1995 [12]. PSO is a population-based stochastic optimization paradigm. As an emerging technology, PSO has attracted a lot of attention in recent years and has been successfully applied in many fields, such as phased array synthesis [13], electromagnetic optimization [14], blind source separation [15], and artificial neural network training [16]. Most of these applications demonstrated that PSO could give competitive or even better results in a faster and cheaper way compared to other heuristic methods such as GA and SA. In addition, PSO appears to be robust to its control parameters.

Due to the multimodal, nonlinear and high-dimensional nature of the parameter space, the problem seems to be a good application arena for PSO, by which the optimal performance of the ML criteria can be fully explored. The design of the optimization algorithm is a combination of the problem-independent PSO kernel and some newly introduced problem-specific features such as search space mapping, particle velocity control, and particle position clipping. This architecture plus properly selected parameters make the PSO algorithm highly flexible and reusable, while being sufficiently specific and effective in the current application. By pairing PSO with the ML criteria, the proposed PSO-ML technique achieves some desired advantages over previous methods: 1) it is less sensitive to initialization, although insertion of a good initial estimate speeds up the computation; 2) it has a better chance of attaining global convergence; 3) it may offer higher quality estimates of the unknown parameters; 4) correlated or even coherent sources can be accurately treated. Via extensive simulation studies, we demonstrate that with the proposed technique the uncalibrated sensors improve the DOA estimation performance dramatically. PSO-ML produces more accurate estimates of the unknown parameters than other popular methods, attains the Cramer Rao bound (CRB) asymptotically, and is more efficient in computation.

II. DATA MODEL AND PROBLEM FORMULATION

We consider an array of M sensors arranged in an arbitrary geometry and N narrowband far-field signal sources at unknown locations. The complex M-vector of array outputs is modeled by the standard equation

$$\mathbf{y}(t) = \mathbf{A}(\boldsymbol{\theta})\mathbf{s}(t) + \mathbf{n}(t), \quad t = 1, 2, \ldots, L \qquad (1)$$

where $\boldsymbol{\theta} = [\theta_1, \ldots, \theta_N]^T$ is the source DOA vector, and the $k$th column of the complex $M \times N$ matrix $\mathbf{A}(\boldsymbol{\theta})$ is the so-called steering vector $\mathbf{a}(\theta_k)$ for the DOA $\theta_k$. The $i$th element $a_i(\theta_k)$ models the gain and phase adjustment of the $k$th signal at the $i$th sensor. Furthermore, the complex $N$-vector $\mathbf{s}(t)$ is composed of the emitter signals, and $\mathbf{n}(t)$ models the additive noise.

The vectors of signals and noise are assumed to be stationary, temporally white, zero-mean complex Gaussian random processes with second-order moments given by

$$E\{\mathbf{s}(t)\mathbf{s}^H(s)\} = \mathbf{P}\,\delta_{t,s}, \qquad E\{\mathbf{s}(t)\mathbf{s}^T(s)\} = \mathbf{0},$$
$$E\{\mathbf{n}(t)\mathbf{n}^H(s)\} = \sigma^2\mathbf{I}\,\delta_{t,s}, \qquad E\{\mathbf{n}(t)\mathbf{n}^T(s)\} = \mathbf{0} \qquad (2)$$

where $\delta_{t,s}$ is the Kronecker delta, $(\cdot)^H$ denotes complex conjugate transpose, $(\cdot)^T$ denotes transpose, and $E\{\cdot\}$ stands for expectation. Assuming that the noise and signals are independent, the data covariance matrix is given by

$$\mathbf{R} = E\{\mathbf{y}(t)\mathbf{y}^H(t)\} = \mathbf{A}\mathbf{P}\mathbf{A}^H + \sigma^2\mathbf{I}. \qquad (3)$$

We focus on the case where some of the array sensors are uncalibrated. Without loss of generality, it is assumed that the first K sensors are calibrated, while the last J = M - K sensors are uncalibrated. We use the following model for $\mathbf{A}$:

$$\mathbf{A} = \begin{bmatrix} \mathbf{A}_1(\boldsymbol{\theta}) \\ \mathbf{A}_2 \end{bmatrix} \qquad (4)$$

where $\mathbf{A}_1(\boldsymbol{\theta})$ consists of the first K rows of $\mathbf{A}$, corresponding to the K calibrated sensors, and $\mathbf{A}_2$ consists of the last J rows of $\mathbf{A}$ and is associated with the other J uncalibrated sensors. The $(i,k)$th element $a_i(\theta_k)$ of $\mathbf{A}_2$ has the form

$$a_i(\theta_k) = g_i e^{j\psi_{ik}}, \quad k = 1, \ldots, N \qquad (5)$$

where $g_i$ is the element gain term, and $\psi_{ik}$ is the individual phase term given by

$$\psi_{ik} = 2\pi f \tau_i(\theta_k) + \phi_{ik}, \qquad (6)$$

where $f$ is the center frequency, $\tau_i(\theta_k)$ is the relative time delay between the reference point and the $i$th sensor for the $k$th signal, and $\phi_{ik}$ is the uncompensated phase error of the $k$th signal at the $i$th sensor.

We assume that the uncalibrated sensors have unknown direction-independent gains $g_i$ in (5). The assumption is suitable for most practical systems, since the sensor gains do not change much with direction and a typical change is 1 dB [10]. Since the phase values $\psi_{ik}$ are treated as free parameters, the expression in (5) accounts for a relatively broad range of imperfect array problems, including arbitrary sensor position errors, phase distortion due to, e.g., near-field effects, and random sensor phase errors. Furthermore, it is assumed that the number of calibrated sensors is larger than the number of sources (known or estimated), as in [6], [10], in order to guarantee the identifiability of the unknown parameters.


We define

$$\mathbf{g} = [g_1, \ldots, g_J]^T, \qquad (7)$$

$$\boldsymbol{\Psi} = [\psi_{ik}], \quad i = 1, \ldots, J, \quad k = 1, \ldots, N, \qquad (8)$$

for the unknown sensor gains and phases. The problem addressed herein is the joint estimation of $\boldsymbol{\theta}$, $\mathbf{g}$ and $\boldsymbol{\Psi}$ from a batch of $L$ measurements $\mathbf{y}(1), \ldots, \mathbf{y}(L)$.
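To make the data model concrete, the following Python sketch builds an array response matrix of the form (4)-(6) for a half-wavelength ULA whose last J = M - K sensors carry unknown gains and phases. It is an illustration only: the function name `steering_matrix`, the end-fire angle convention, and the choice of sensor 0 as the phase reference are assumptions, not part of the paper.

```python
import numpy as np

def steering_matrix(theta_deg, gains, psi, M, K, d=0.5):
    """Array response A = [A1(theta); A2] of (4) for an M-element ULA.

    theta_deg : N source DOAs in degrees, measured from end-fire (assumed convention)
    gains     : length-J gains g_i of the J = M - K uncalibrated sensors, see (5)
    psi       : J x N matrix of phases psi_ik of (8) for the uncalibrated sensors
    d         : inter-element spacing in wavelengths (assumed half-wavelength)
    """
    theta = np.deg2rad(np.atleast_1d(theta_deg))
    m = np.arange(M)[:, None]                       # sensor index, reference at sensor 0
    # Nominal (calibrated) response: a_i(theta_k) = exp(j 2*pi*d*i*cos(theta_k))
    A = np.exp(1j * 2 * np.pi * d * m * np.cos(theta)[None, :])
    # Last J rows follow (5): unknown gain and free phase, a_i(theta_k) = g_i exp(j psi_ik)
    A[K:, :] = np.asarray(gains)[:, None] * np.exp(1j * np.asarray(psi))
    return A
```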

III. MAXIMUM LIKELIHOOD SOURCE AND ARRAY PARAMETER ESTIMATION

As a chief systematic approach to most estimation problems, the ML method is known to be asymptotically (with a large number of snapshots) unbiased and statistically efficient. This technique requires a probabilistic setup of the problem at hand. Under the assumption of additive Gaussian noise, Gaussian distributed emitter signals, and deterministic unknown gains and phases of the uncalibrated sensors, the probability density function of the complete data set is given by

$$f\big(\mathbf{y}(1), \ldots, \mathbf{y}(L) \mid \boldsymbol{\theta}, \mathbf{g}, \boldsymbol{\Psi}, \mathbf{P}, \sigma^2\big) = \prod_{t=1}^{L} \frac{1}{\pi^M |\mathbf{R}|}\, e^{-\mathbf{y}^H(t)\mathbf{R}^{-1}\mathbf{y}(t)} \qquad (9)$$

where $|\cdot|$ denotes the determinant. By ignoring parameter-independent terms, maximization of (9) is equivalent to minimizing the following normalized negative log-likelihood function

$$l(\boldsymbol{\theta}, \mathbf{g}, \boldsymbol{\Psi}, \mathbf{P}, \sigma^2) = \log|\mathbf{R}| + \mathrm{tr}\{\mathbf{R}^{-1}\hat{\mathbf{R}}\} \qquad (10)$$

where $\mathrm{tr}\{\cdot\}$ stands for trace, $\log|\cdot|$ denotes the natural logarithm of the determinant, and $\hat{\mathbf{R}}$ is the covariance matrix of the measured data

$$\hat{\mathbf{R}} = \frac{1}{L}\sum_{t=1}^{L} \mathbf{y}(t)\mathbf{y}^H(t). \qquad (11)$$

Besides the parameters of interest, the function (10) also depends on $\mathbf{P}$ and $\sigma^2$. To reduce the search dimension, we can solve for $\mathbf{P}$ and $\sigma^2$ as functions of $\boldsymbol{\theta}$, $\mathbf{g}$ and $\boldsymbol{\Psi}$, and substitute them back into the likelihood function. Following similar derivations in [17], the ML estimates of $\sigma^2$ and $\mathbf{P}$ are given by

$$\hat{\sigma}^2(\boldsymbol{\theta}, \mathbf{g}, \boldsymbol{\Psi}) = \frac{1}{M-N}\,\mathrm{tr}\{\mathbf{P}_A^{\perp}\hat{\mathbf{R}}\}, \qquad (12)$$

$$\hat{\mathbf{P}}(\boldsymbol{\theta}, \mathbf{g}, \boldsymbol{\Psi}) = \mathbf{A}^{\dagger}(\hat{\mathbf{R}} - \hat{\sigma}^2\mathbf{I})\mathbf{A}^{\dagger H}, \qquad (13)$$

where $\mathbf{A}^{\dagger} = (\mathbf{A}^H\mathbf{A})^{-1}\mathbf{A}^H$, $\mathbf{P}_A = \mathbf{A}\mathbf{A}^{\dagger}$, and $\mathbf{P}_A^{\perp} = \mathbf{I} - \mathbf{P}_A$. For conciseness, the dependence of $\mathbf{A}$ on $\boldsymbol{\theta}$, $\mathbf{g}$ and $\boldsymbol{\Psi}$ has been suppressed. Substituting (12) and (13) into (10), we get (in large samples)

$$l(\boldsymbol{\theta}, \mathbf{g}, \boldsymbol{\Psi}) = \log\big|\mathbf{A}\hat{\mathbf{P}}\mathbf{A}^H + \hat{\sigma}^2\mathbf{I}\big|. \qquad (14)$$

The ML estimates of the DOAs and the sensor gains and phases are computed by minimizing the cost function (14) with respect to the N + J + NJ unknown real parameters. We note that (14) has a format similar to the stochastic ML DOA estimator in [18]; however, (14) depends on both source and system parameters. Although the extension is mathematically straightforward, the resulting estimator is effective. We also stress that other optimal DOA algorithms, such as WSF, can be similarly extended to estimate array parameters besides source bearings, provided a proper global optimization algorithm is employed.
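A direct transcription of the concentrated criterion (12)-(14) might look like the sketch below: given a candidate response matrix A and the sample covariance (11), it returns the cost to be minimized. The helper name `ml_cost` is illustrative, and complex arithmetic is assumed throughout.

```python
import numpy as np

def ml_cost(A, R_hat):
    """Concentrated negative log-likelihood (14) for a candidate A(theta, g, Psi).

    A     : M x N candidate array response matrix
    R_hat : M x M sample covariance of (11)
    """
    M, N = A.shape
    A_pinv = np.linalg.pinv(A)                        # A^dagger = (A^H A)^{-1} A^H
    P_A = A @ A_pinv                                  # projector onto the column space of A
    P_A_perp = np.eye(M) - P_A
    sigma2 = np.real(np.trace(P_A_perp @ R_hat)) / (M - N)        # (12)
    P = A_pinv @ (R_hat - sigma2 * np.eye(M)) @ A_pinv.conj().T   # (13)
    C = A @ P @ A.conj().T + sigma2 * np.eye(M)
    _, logdet = np.linalg.slogdet(C)                  # log|A P A^H + sigma^2 I|, (14)
    return logdet
```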

IV. PARTICLE SWARM OPTIMIZATION FOR ML ESTIMATION

A. Particle Swarm Optimization

Particle swarm optimization is a stochastic optimization algorithm which mimics animal social behaviors such as the flocking of birds and the methods by which they find roosting places or food sources. PSO starts with the initialization of a population of individuals in the search space and works on the social behavior of the particles in the swarm. Each particle is assigned a position in the problem space, which represents a candidate solution to the problem under consideration. Each of these particle positions is scored to obtain a scalar cost, named fitness, based on how well it solves the problem. These particles then fly through the problem space subject to both deterministic and stochastic update rules to new positions, which are subsequently scored. Each particle adaptively updates its velocity and position according to its own flying experience and its companions' flying experience. With this oscillation and stochastic adjustment, particles explore regions throughout the problem space and eventually settle down near a good solution.

Consider a D-dimensional problem space and a swarm consisting of P particles. The position of the $i$th particle is a D-dimensional vector $\mathbf{x}_i = [x_{i1}, x_{i2}, \ldots, x_{iD}]$. The velocity of this particle is represented as $\mathbf{v}_i = [v_{i1}, v_{i2}, \ldots, v_{iD}]$. The best previous position of the $i$th particle, which gives the best fitness value, is denoted as $\mathbf{p}_i = [p_{i1}, p_{i2}, \ldots, p_{iD}]$. The best position found by any particle in the swarm is represented by $\mathbf{p}_g = [p_{g1}, p_{g2}, \ldots, p_{gD}]$. At every iteration, the velocity and the position of each particle are updated according to the following equations:

$$\mathbf{v}_i^{k+1} = \omega\,\mathbf{v}_i^{k} + c_1\mathbf{r}_1^{k} \odot (\mathbf{p}_i^{k} - \mathbf{x}_i^{k}) + c_2\mathbf{r}_2^{k} \odot (\mathbf{p}_g^{k} - \mathbf{x}_i^{k}) \qquad (15)$$

$$\mathbf{x}_i^{k+1} = \mathbf{x}_i^{k} + \mathbf{v}_i^{k+1} \qquad (16)$$

where $\odot$ denotes the element-wise product, $i = 1, 2, \ldots, P$, $k = 1, 2, \ldots$ indicates the iteration, $\omega$ is a parameter called the inertia weight, $c_1$ and $c_2$ are positive constants referred to as the cognitive and social parameters respectively, and $\mathbf{r}_1$ and $\mathbf{r}_2$ are D-dimensional vectors consisting of independent random numbers uniformly distributed between 0 and 1.
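In vectorized form, one synchronous application of (15) and (16) to a P x D swarm could be coded as follows; the array layout and the function name `pso_step` are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_step(x, v, p_best, g_best, w, c1=2.0, c2=2.0):
    """One synchronous update of (15)-(16) for a P x D swarm."""
    P, D = x.shape
    r1 = rng.random((P, D))        # independent U(0,1) draws per particle and dimension
    r2 = rng.random((P, D))
    v_new = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)   # (15)
    x_new = x + v_new                                                  # (16)
    return x_new, v_new
```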


[Fig. 1. Flowchart illustrating the main steps of the PSO-ML estimator: set up the problem (define the problem, select the PSO parameters); initialize the swarm (random positions, random velocities); iterate until termination; the solution is the final global best position $\mathbf{p}_g$.]

B. PSO-ML Estimation and Parameter Selection

In this section, we describe the formulation of the PSO algorithm for ML estimation of the source and array parameters. The main steps are outlined in Fig. 1. In this study, the algorithm starts by initializing a population of particles in the "normalized" search space, with random positions constrained between zero and one in each dimension and random velocities. The D-dimensional position vector of the $p$th particle takes the form $\mathbf{x}_p = [\theta_1, \ldots, \theta_N, g_1, \ldots, g_J, \psi_{11}, \ldots, \psi_{JN}]$, where $0 \le \theta_i, g_k, \psi_{kl} \le 1$, $i = 1, \ldots, N$, $k = 1, \ldots, J$, and $N + J + NJ = D$. The fitness of each particle is computed based on the flowchart in Fig. 2. First, the normalized DOA, gain and phase values are mapped and scaled to the desired ranges; then, the steering vector components corresponding to the calibrated sensors and the uncalibrated sensors are determined; and finally the fitness of the particle is evaluated using the cost function (14). The introduction of the normalized search space and mapping makes parameter selection and algorithm design less problem-dependent, which enhances the reusability of the PSO method.
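The search-space mapping step can be sketched as below: a normalized position in [0,1]^D is scaled to physical DOAs, gains and phases before the cost (14) is evaluated. The particular ranges (DOAs over [0°, 180°], gains over [0.5, 1.5], phases over [-π, π]) and the function name `map_particle` are assumptions chosen for illustration; the paper does not specify them.

```python
import numpy as np

def map_particle(x, N, J, doa_range=(0.0, 180.0), gain_range=(0.5, 1.5)):
    """Map a normalized position x in [0,1]^D, D = N + J + N*J, to physical parameters.

    Returns (theta_deg, gains, psi): DOAs in degrees, dimensionless gains, and
    phases psi in radians; the ranges are illustrative assumptions.
    """
    theta = doa_range[0] + (doa_range[1] - doa_range[0]) * x[:N]
    gains = gain_range[0] + (gain_range[1] - gain_range[0]) * x[N:N + J]
    psi = -np.pi + 2.0 * np.pi * x[N + J:].reshape(J, N)
    return theta, gains, psi
```

The mapped parameters would then be passed to routines such as the `steering_matrix` and `ml_cost` sketches above to obtain the particle fitness.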

The manipulation of a particle's velocity according to (15) is regarded as the central element of the entire optimization. Three components typically contribute to the new velocity. The first part refers to the inertial effect of the movement: it is proportional to the old velocity and represents the tendency of the particle to proceed in the direction it has been traveling. The inertia weight $\omega$ is considered critical for the convergence behavior of PSO [19]. A larger $\omega$ facilitates searching new areas and global exploration, while a smaller $\omega$ tends to facilitate local exploitation in the current search area. In this study, $\omega$ is selected to decrease during the optimization process, so PSO has more global search ability at the beginning of the run and more local search ability near the end of the optimization.

Fig. 2. Flowchart depicting determination of the particle fitness.

Given a maximum value $\omega_{\max}$ and a minimum value $\omega_{\min}$, $\omega$ is updated as follows:

$$\omega^{k} = \begin{cases} \omega_{\max} - \dfrac{\omega_{\max} - \omega_{\min}}{[rK]}\,(k-1), & 1 \le k \le [rK] \\[4pt] \omega_{\min}, & [rK]+1 \le k \le K \end{cases} \qquad (17)$$

where $[rK]$ is the number of iterations with a time-decreasing inertia weight, $0 < r < 1$ is a ratio, $K$ is the maximum iteration number, and $[\cdot]$ is a rounding operator. Based on empirical practice [20] and extensive test runs, we select $\omega_{\max} = 0.9$, $\omega_{\min} = 0.4$, and $r = 0.4$ to $0.8$.
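The schedule (17) translates into a few lines of Python; the rounding convention at the boundary is an assumption.

```python
def inertia_weight(k, K, w_max=0.9, w_min=0.4, r=0.5):
    """Time-decreasing inertia weight of (17); k is the 1-based iteration index."""
    rK = round(r * K)              # number of iterations with a decreasing weight
    if k <= rK:
        return w_max - (w_max - w_min) * (k - 1) / rK
    return w_min
```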

The second and third components of the velocity update equation introduce stochastic tendencies to return towards the particle's own best historical position and the group's best historical position. These paradigms allow particles to profit both from their own discoveries and from the discoveries of the swarm as a whole, mixing local and global information uniquely for each particle on each iteration. The constants $c_1$ and $c_2$ are used to bias the particle's search towards the two best locations. These two parameters are not critical for the convergence of PSO. Following common practice in the literature [21], we set $c_1 = c_2 = 2$, although these values could be fine-tuned for the problem at hand.

Since the basic update rules provide no actual mechanism for controlling the velocity of a particle, it is necessary to define a maximum velocity to avoid the danger of swarm explosion and divergence [22]. The velocity limit can be applied to $\mathbf{v}_i$ along each dimension separately by

$$v_{id} = \begin{cases} V_{\max}, & v_{id} > V_{\max} \\ -V_{\max}, & v_{id} < -V_{\max} \end{cases} \qquad (18)$$

where $d = 1, \ldots, D$, or to the modulus of the velocity vector by the rule

$$\mathbf{v}_i = \frac{\bar{V}_{\max}}{\|\mathbf{v}_i\|}\,\mathbf{v}_i, \quad \text{if } \|\mathbf{v}_i\| > \bar{V}_{\max}. \qquad (19)$$

[Fig. 1, iteration loop: repeat for each iteration; repeat for each particle: map the particle position to a solution in the problem space, evaluate the fitness, update the personal best $\mathbf{p}_i$ and global best $\mathbf{p}_g$, update the particle velocity, limit the particle velocity, update the particle position, and clip or adjust the particle position if required; then test the termination criterion.]


TABLE I
THE SELECTED PSO PARAMETERS

Parameter   Value
c1          2.0
c2          2.0
P           30
K           300
V_MAX       0.5
ω_max       0.9
ω_min       0.4
r           0.5


[Fig. 3 plot: DOA estimation RMSE versus SNR (dB), with curves for the PCA with PSO-ML, the PCA with the Weiss' method, a calibrated 8-ULA, and a calibrated 6-ULA.]

Like the inertia weight, large values of $V_{\max}$ or $\bar{V}_{\max}$ encourage global searching, while small values encourage local searching. In this study, limitation along each dimension is applied, and $V_{\max}$ is set to half the dynamic range [20], i.e., $V_{\max} = 0.5$.
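Both limiting rules can be written compactly as below; the per-dimension form (18) is the one used in this study, and the modulus form (19) is included for completeness. Operating on a P x D velocity array is an implementation assumption.

```python
import numpy as np

def limit_velocity(v, v_max=0.5):
    """Per-dimension velocity clipping of (18); v is a P x D array of velocities."""
    return np.clip(v, -v_max, v_max)

def limit_velocity_norm(v, v_max=0.5):
    """Modulus rule of (19): rescale any velocity vector longer than v_max."""
    norm = np.linalg.norm(v, axis=1, keepdims=True)
    scale = np.where(norm > v_max, v_max / norm, 1.0)
    return v * scale
```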

The new particle position is calculated using equation (16). If any dimension of the new position vector is less than zero or greater than one, it is clipped or adjusted to stay within this range. It should be noted that, at any time during the optimization process, two DOA components in a position vector are not allowed to have the same value. For individuals with identical components, one of the elements is replaced by a valid random value until no collision exists.

Some previous works [23] demonstrate that the performance of the PSO algorithm is not significantly affected by changing the swarm size P. The typical range of P is 20 to 50, which is sufficient for most problems to achieve good results. Furthermore, PSO is not sensitive to the initial particle positions; however, insertion of a reasonable initial estimate speeds up the convergence. The optimization is terminated when the specified maximum iteration number K is reached or when the best particle position of the whole swarm remains static for a sufficiently large number of successive iterations. The final global best position $\mathbf{p}_g$ is taken as the ML estimate of the source and array parameters.
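A sketch of the position clipping and the duplicate-DOA repair described above is given below; the collision tolerance `tol` and the function name `repair_position` are assumptions, since the paper only states that identical DOA components are redrawn at random.

```python
import numpy as np

rng = np.random.default_rng(1)

def repair_position(x, N, tol=1e-3):
    """Clip a normalized position into [0,1] and resolve coinciding DOA components.

    Two DOA entries closer than `tol` (an assumed threshold) are treated as a
    collision, and one of them is replaced by a valid random value.
    """
    x = np.clip(x, 0.0, 1.0)
    doa = x[:N]
    changed = True
    while changed:
        changed = False
        for i in range(N):
            for j in range(i + 1, N):
                if abs(doa[i] - doa[j]) < tol:
                    doa[j] = rng.random()     # redraw the colliding component
                    changed = True
    x[:N] = doa
    return x
```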

V. SIMULATION RESULTS

This section provides numerical examples to illustrate the superior performance of the PSO-ML algorithm, with a comparison against the Weiss and Friedlander method [10], referred to as the Weiss' method for short. The Weiss' method is chosen because it serves the same purpose and is based on a similar data model. The root-mean-squared errors (RMSE) of the estimates of the source DOAs and of the phases of the steering vectors obtained using the two methods are evaluated and compared against the CRB. We performed 300 Monte Carlo experiments for each point of the plots.

Fig. 3. DOA estimation RMSE versus SNR.

The selected PSO parameters are summarized in Table I; they were determined empirically from adequate test runs. The PSO algorithm starts with a random initialization and is terminated when the maximum iteration number is reached or the global best particle position is not updated in 50 successive iterations. The Weiss' method is initialized with the maximum likelihood DOA estimates [11] obtained using the calibrated portion of the array. It has been observed that good initialization is crucial for the Weiss' method to achieve meaningful results, which is a common drawback of Newton-type procedures.

We consider a uniform linear array (ULA) of 8 elements with sensor separation of half a wavelength. The first 6 sensors are calibrated while the others are not. All the sensors are omnidirectional with unit gain. Two uncorrelated equal-power emitters are present at DOAs of 80° and 83° relative to the array end-fire. The number of snapshots is 100, and the SNR is varied. Fig. 3 depicts the DOA estimation RMSE obtained using PSO-ML and the Weiss' method with the PCA. For comparison, the dotted line shows the RMSE when only the 6 calibrated elements are considered and the uncalibrated elements are ignored, while the dashdot line illustrates the performance of a perfectly calibrated ULA of 8 sensors. It is clear from the figure that uncalibrated sensors may improve the DOA estimation performance. PSO-ML outperforms the Weiss' method as a whole by demonstrating a lower RMSE, especially when the SNR is low. It is interesting to note that when the SNR is lower than 10 dB, the PCA with PSO-ML performs as well as a perfectly calibrated eight-sensor ULA. It seems that the contribution of the uncalibrated sensors becomes more significant when the array is forced to operate in less favorable conditions: low SNR and closely spaced sources.
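For reference, synthetic snapshots matching this setup can be generated along the following lines; the phase-error distribution assigned to the two uncalibrated sensors and the random-number handling are assumptions for illustration, not the paper's exact simulation code.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_snapshots(snr_db, L=100, M=8, K=6, doas=(80.0, 83.0), d=0.5):
    """Generate L snapshots of (1) for the Section V scenario: an M-element
    half-wavelength ULA with the last M-K sensors uncalibrated (unit gain, with
    assumed random phase errors) and two equal-power uncorrelated Gaussian sources."""
    N = len(doas)
    theta = np.deg2rad(np.asarray(doas))
    m = np.arange(M)[:, None]
    A = np.exp(1j * 2 * np.pi * d * m * np.cos(theta)[None, :])
    phase_err = rng.uniform(-0.5, 0.5, size=(M - K, 1))   # assumed uncompensated phase errors
    A[K:, :] *= np.exp(1j * phase_err)
    sigma2 = 10.0 ** (-snr_db / 10.0)                     # noise power for unit-power sources
    S = (rng.standard_normal((N, L)) + 1j * rng.standard_normal((N, L))) / np.sqrt(2)
    noise = np.sqrt(sigma2 / 2) * (rng.standard_normal((M, L)) + 1j * rng.standard_normal((M, L)))
    Y = A @ S + noise
    R_hat = Y @ Y.conj().T / L                            # sample covariance, (11)
    return Y, R_hat
```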

Fig. 4 shows the RMSE for estimating the phases of the steering vectors obtained using PSO-ML and the Weiss' method, and compares them with the CRB. As can be seen from this figure, PSO-ML produces more accurate phase estimates than the Weiss' method, with the RMSE approaching and asymptotically attaining the CRB. It seems that the accuracy of PSO-ML for phase estimation is not sensitive to the SNR. Although the Weiss' method produces asymptotically efficient estimates, it demonstrates a strong threshold effect in the phase curve when the SNR is low.


Fig. 4. RMSE of steering vector phase estimation versus SNR. The dashdot line represents the theoretic CRB. [Plotted curves: PSO-ML, Weiss' Method, CRB; horizontal axis: SNR (dB).]

VI. CONCLUSIONS

This paper addresses the problem of source DOA estimation using a partially calibrated array. An interesting but challenging category of solutions to this problem is to estimate the source directions in addition to the estimation of unknown array parameters such as sensor gains and phases, as a way of performing array self-calibration. An algorithm based on the maximum likelihood methodology is derived, where the cost function is an extension of the ML criteria that were originally developed for angle estimation with a perfectly calibrated array. A PSO algorithm is used to explore the high-dimensional complicated problem space and find the global minimum of the cost function. The architecture design plus properly selected parameters make the PSO algorithm highly flexible and reusable for other applications, while being sufficiently specific and effective in the current problem. Simulation results demonstrate that with the proposed technique the uncalibrated sensors improve the DOA estimation performance dramatically. PSO-ML produces more accurate estimates of the unknown parameters in a cheaper way as compared with another popular method, with the RMSE approaching and asymptotically attaining the CRB; furthermore, it works well with correlated or even coherent sources.

REFERENCES

[1] A. J. Weiss and B. Friedlander, "Comparison of signal estimation using calibrated and uncalibrated arrays," IEEE Trans. Aerosp. Electron. Syst., vol. 33, pp. 241-249, Jan. 1997.
[2] L. C. Godara, "Application of antenna arrays to mobile communications, Part II: Beam-forming and direction-of-arrival considerations," Proc. IEEE, vol. 85, pp. 1195-1245, July 1997.
[3] M. Li and Y. Lu, "Dimension reduction for array processing with robust interference cancellation," IEEE Trans. Aerosp. Electron. Syst., vol. 42, pp. 103-112, Jan. 2006.
[4] N. Yuen and B. Friedlander, "Asymptotic performance analysis of blind signal copy using fourth-order cumulants," International Journal of Adaptive Control and Signal Processing, vol. 10, pp. 239-265, Mar. 1996.
[5] C. M. S. See and A. B. Gershman, "Direction-of-arrival estimation in partly calibrated subarray-based sensor arrays," IEEE Trans. Signal Processing, vol. 52, pp. 329-338, Feb. 2004.
[6] P. Stoica, M. Viberg, K. M. Wong, and Q. Wu, "Maximum-likelihood bearing estimation with partly calibrated arrays in spatially correlated noise fields," IEEE Trans. Signal Processing, vol. 44, pp. 888-899, Apr. 1996.
[7] D. R. Fuhrman, "Estimation of sensor gain and phase," IEEE Trans. Signal Processing, vol. 42, pp. 77-87, Jan. 1994.
[8] Q. Cheng, Y. Hua, and P. Stoica, "Asymptotic performance of optimal gain-and-phase estimators of sensor arrays," IEEE Trans. Signal Processing, vol. 48, pp. 3587-3590, Dec. 2000.
[9] C. Y. Tseng, D. D. Feldman, and L. J. Griffiths, "Steering vector estimation in uncalibrated arrays," IEEE Trans. Signal Processing, vol. 43, pp. 1397-1412, June 1995.
[10] A. J. Weiss and B. Friedlander, "DOA and steering vector estimation using a partially calibrated array," IEEE Trans. Aerosp. Electron. Syst., vol. 32, pp. 1047-1057, July 1996.
[11] M. Li and Y. Lu, "Accurate direction-of-arrival estimation of multiple sources using a genetic approach," Wirel. Commun. Mob. Comput., vol. 5, pp. 343-353, May 2005.
[12] R. C. Eberhart and J. Kennedy, "A new optimizer using particle swarm theory," in Proc. 6th Symp. Micro Machine and Human Science, Nagoya, Japan, pp. 39-43, 1995.
[13] M. M. Khodier and C. G. Christodoulou, "Linear array geometry synthesis with minimum sidelobe level and null control using particle swarm optimization," IEEE Trans. Antennas Propag., vol. 53, pp. 2674-2679, Aug. 2005.
[14] J. Robinson and Y. Rahmat-Samii, "Particle swarm optimization in electromagnetics," IEEE Trans. Antennas Propag., vol. 52, pp. 397-407, Feb. 2004.
[15] Y. Gao and S. Xie, "A blind source separation algorithm using particle swarm optimization," in Proc. IEEE 6th Circuits and Systems Symposium on Emerging Technologies, Shanghai, China, pp. 297-300, May 2004.
[16] L. Messerschmidt and A. P. Engelbrecht, "Learning to play games using a PSO-based competitive learning approach," IEEE Trans. Evol. Comput., vol. 8, pp. 280-288, June 2004.
[17] A. G. Jaffer, "Maximum likelihood direction finding of stochastic sources: A separable solution," in Proc. ICASSP, New York, NY, Apr. 1988, pp. 2893-2896.
[18] M. Li and Y. Lu, "Improving the performance of GA-ML DOA estimator with a resampling scheme," Signal Processing, vol. 84, pp. 1813-1822, Oct. 2004.
[19] K. E. Parsopoulos and M. N. Vrahatis, "Recent approaches to global optimization problems through particle swarm optimization," Natural Computing, vol. 1, pp. 235-306, 2002.
[20] Y. Shi and R. C. Eberhart, "A modified particle swarm optimizer," in Proc. 1998 IEEE International Conference on Evolutionary Computation (ICEC98), Anchorage, AK, pp. 69-73, May 1998.
[21] R. C. Eberhart and Y. Shi, "Particle swarm optimization: Developments, applications and resources," in Proc. 2001 Congress on Evolutionary Computation (CEC2001), Seoul, Korea, pp. 81-86, May 2001.
[22] M. Clerc and J. Kennedy, "The particle swarm: explosion, stability, and convergence in a multidimensional complex space," IEEE Trans. Evol. Comput., vol. 6, pp. 58-73, Feb. 2002.
[23] R. C. Eberhart and Y. Shi, "Comparing inertia weights and constriction factors in particle swarm optimization," in Proc. Congress on Evolutionary Computation, Piscataway, NJ, vol. 1, pp. 84-88, 2000.