Artificial neural networks for calculating the association probabilities in multi-target tracking

    I. Turkmen and K. Guney

Abstract: A simple method based on the multilayered perceptron neural network architecture for calculating the association probabilities used in target tracking is presented. The multilayered perceptron is trained with the Levenberg–Marquardt algorithm. The tracks estimated by using the proposed method for multiple targets in cluttered and non-cluttered environments are in good agreement with the original tracks. Better accuracy is obtained than when using the joint probabilistic data association filter or the cheap joint probabilistic data association filter methods.

    1 Introduction

The subject of multi-target tracking (MTT) has applications in both civilian and military areas. The aim of MTT is to partition the sensor data into sets of observations, or tracks produced by the same source. Once tracks are formed and confirmed, the number of targets can be estimated, and the positions and velocities of the targets can be computed from each track. A number of methods [1–3] have been presented and used to estimate the states of multiple targets. These methods have different levels of complexity and require vastly different computational effort. The joint probabilistic data association filter (JPDAF) [1] is a powerful and reliable algorithm for MTT. It works without any prior information about the targets and clutter. In the JPDAF algorithm, the association probabilities are computed from the joint likelihood functions corresponding to the joint hypotheses associating all the returns to different permutations of the targets and clutter points. The computational complexity of the joint probabilities increases exponentially as the number of targets increases. To reduce this computational complexity significantly, Fitzgerald [4] proposed a simplified version of the JPDAF, called the cheap JPDAF algorithm (CJPDAF). The association probabilities were calculated in [4] using an ad hoc formula. The CJPDAF method is very fast and easy to implement; however, in either a dense target or a highly cluttered environment the tracking performance of the CJPDAF decreases significantly.

In this article, a method based on artificial neural networks (ANNs) for computing the association probabilities is presented. These computed association probabilities are then used to track the multiple targets in cluttered and non-cluttered environments. ANNs are developed from neurophysiology by morphologically and computationally mimicking human brains. Although the precise details of the operation of ANNs are quite different from those of human brains, they are similar in three aspects: they consist of a very large number of processing elements (the neurons), each neuron connects to a large number of other neurons, and the functionality of networks is determined by modifying the strengths of connections during a learning phase. Ability and adaptability to learn, generalisability, a smaller information requirement, fast real-time operation, and ease of implementation have made ANNs popular in recent years [5, 6].

To calculate the association probabilities, different structures and architectures of ANNs, i.e. the standard Hopfield, the modified Hopfield, the Boltzmann machine and the mean-field Hopfield network, were proposed in [7–10], respectively. In these works [7–10], the task of finding association probabilities is viewed as a constrained optimisation problem. The constraints were obtained by careful evaluation of the properties of the JPDA rule. Some of these constraints are analogous to those of the classical travelling salesman problem. Usually, there are five constants to be decided arbitrarily [7–9]. In practice, it is very difficult to choose the five constants to ensure optimisation. On the other hand, the Boltzmann machine's [9] convergence speed is very slow, even though it can achieve an optimal solution. To cope with these problems, the mean-field Hopfield network, which is an alternative to the Hopfield network and the Boltzmann machine, was proposed by Wang et al. [10]. This mean-field Hopfield network has the advantages of both the Hopfield network and the Boltzmann machine; however, its higher performance is achieved at the expense of the complexity of equipment structure.

In this paper, the multilayered perceptron (MLP) neural network [5] is used to calculate the association probabilities accurately. MLPs are the simplest and therefore most commonly used neural network architectures. In this paper, MLPs are trained using the Levenberg–Marquardt algorithm [11–13].

    2 JPDAF and CJPDAF

The JPDAF is a moderately complex algorithm designed for MTT [1]. It calculates the probabilities of measurements being associated with the targets, and uses them to form a weighted average innovation for updating each target state.

© IEE, 2004

    IEE Proceedings online no. 20040739

    doi: 10.1049/ip-rsn:20040739

I. Turkmen is with the Civil Aviation School, Department of Aircraft Electrical and Electronics Engineering, Erciyes University, 38039, Kayseri, Turkey

K. Guney is with the Faculty of Engineering, Department of Electronics Engineering, Erciyes University, 38039, Kayseri, Turkey

    Paper received 28th January 2004

    IEE Proc.-Radar Sonar Navig., Vol. 151, No. 4, August 2004 181


The update equation of the Kalman filter is

$$\hat{x}_i(t|t) = \hat{x}_i(t|t-1) + K_i(t)\, y_i(t) \quad (1)$$

where x̂_i(t|t−1) is the predicted state vector, K_i(t) is the Kalman gain, and y_i(t) is the combined innovation given by

$$y_i(t) = \sum_{j=1}^{m_i(t)} \beta_{ij}\, y_{ij}(t) \quad (2)$$

where m_i(t) is the number of validated measurements for track i, β_ij is the probability of associating track i with measurement j, and y_ij(t) is the innovation of track i and measurement j. The measurement innovation term, y_ij(t), is given by

$$y_{ij}(t) = z_j(t) - H_i(t)\, \hat{x}_i(t|t-1) \quad (3)$$

where z_j(t) is the set of all validated measurements received at time t and H_i(t) is the measurement matrix for target i.
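As a concrete illustration, eqs. (1)–(3) translate into a few lines of NumPy. The function name and the example gain matrix below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def jpdaf_state_update(x_pred, K, z_list, H, beta):
    """Eqs. (1)-(3): form the innovations y_ij = z_j - H x_pred,
    combine them with the association probabilities beta_ij (eq. 2),
    and update the predicted state with the Kalman gain K (eq. 1)."""
    innovations = [z - H @ x_pred for z in z_list]       # eq. (3)
    y = sum(b * v for b, v in zip(beta, innovations))    # eq. (2), combined innovation
    return x_pred + K @ y                                # eq. (1)

# example: state [x, y, xdot, ydot], position-only measurements
x_pred = np.array([0.0, 0.0, 0.4, 0.5])
H = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0]])
K = np.vstack([0.5 * np.eye(2), np.zeros((2, 2))])       # illustrative gain
z_list = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
beta = [0.5, 0.5]
x_upd = jpdaf_state_update(x_pred, K, z_list, H, beta)
```

With equal association probabilities the two innovations are averaged before the gain is applied, which is exactly the weighted-average-innovation idea described above.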

A method that is as simple as possible for calculating the association probabilities needs to be obtained, but the estimated tracks obtained using these association probabilities must be in good agreement with the true tracks. In this work, a method based on the ANN for efficiently solving this problem is presented. First, the parameters related to the association probabilities are determined, then the association probabilities depending on these parameters are calculated using the neural model.

In the standard JPDAF, the association probabilities β_ij are calculated by considering every possible hypothesis concerning the association of the new measurements with the existing tracks. To reduce the complexity of the JPDAF, the CJPDAF algorithm was proposed in [4]. It avoids the formation of hypotheses and approximates the association probabilities by

$$\beta_{ij}(t) = \frac{G_{ij}(t)}{S_{ti}(t) + S_{mj}(t) - G_{ij}(t) + B} \quad (4a)$$

with

$$S_{ti}(t) = \sum_{j=1}^{m_i(t)} G_{ij}(t) \quad (4b)$$

and

$$S_{mj}(t) = \sum_{i=1}^{M} G_{ij}(t) \quad (4c)$$

where G_ij(t) is the distribution of the innovation y_ij(t), usually assumed to be Gaussian, and B is a bias term introduced to account for the non-unity probability of detection and clutter. The elements of the y_ij vector are defined by x̃_ij and ỹ_ij for a Cartesian sensor. It is clear that only two parameters, x̃_ij and ỹ_ij, are needed to describe the association probabilities.
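Equations (4a)–(4c) translate directly into a small vectorised routine. A minimal sketch, assuming the Gaussian likelihoods G[i, j] have already been evaluated (the function name is our own):

```python
import numpy as np

def cjpdaf_probabilities(G, B=0.0):
    """Cheap-JPDAF association probabilities, eq. (4a):
    beta_ij = G_ij / (S_ti + S_mj - G_ij + B), where S_ti sums G over
    measurements (eq. 4b) and S_mj sums G over tracks (eq. 4c)."""
    G = np.asarray(G, dtype=float)
    S_t = G.sum(axis=1, keepdims=True)   # shape (M, 1): per-track sums
    S_m = G.sum(axis=0, keepdims=True)   # shape (1, m): per-measurement sums
    return G / (S_t + S_m - G + B)

# two well-separated tracks: each measurement associates fully with one track
beta = cjpdaf_probabilities([[1.0, 0.0], [0.0, 1.0]])
```

Note that, unlike the full JPDAF, no enumeration of joint hypotheses is needed; the cost is one pass over the M-by-m likelihood matrix.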

    3 Artificial neural networks (ANNs)

ANNs are biologically inspired computer programs designed to simulate the way in which the human brain processes information [5]. ANNs gather their knowledge by detecting the patterns and relationships in data and learn (or are trained) through experience, not by programming. An ANN is formed from hundreds of single units, artificial neurons or processing elements connected with weights, which constitute the neural structure and are organised in layers. The power of neural computations comes from the weight connections in a network. Each neuron has weighted inputs, a summation function, a transfer function, and an output. The behaviour of a neural network is determined by the transfer functions of its neurons, by the learning rule, and by the architecture itself. The weights are the adjustable parameters and, in that sense, a neural network is a parameterised system. The weighted sum of the inputs constitutes the activation of the neuron. The activation signal is passed through a transfer function to produce the output of a neuron. The transfer function introduces nonlinearity to the network. During training, inter-unit connections are optimised until the error in predictions is minimised and the network reaches the specified level of accuracy. Once the network is trained, new unseen input information is entered to the network to calculate the test output. The ANN represents a promising modelling technique, especially for data sets having nonlinear relationships that are frequently encountered in engineering. In terms of model specification, ANNs require no knowledge of the data source but, since they often contain many weights that must be estimated, they require large training sets. In addition, ANNs can combine and incorporate both literature-based and experimental data to solve problems. ANNs have many structures and architectures [5, 6]. In this paper, the multilayered perceptron (MLP) neural network architecture [5] is used to compute the association probabilities.

3.1 Multilayered perceptrons (MLPs)

MLPs are the simplest and therefore most commonly used neural network architectures. MLPs can be trained using many different learning algorithms [5, 6]. In this paper, MLPs are trained using the Levenberg–Marquardt algorithm [11–13], because this algorithm is capable of fast learning and good convergence. This algorithm is a least-squares estimation method based on the maximum neighbourhood idea. It combines the best features of the Gauss–Newton and the steepest-descent methods, but avoids many of their limitations. As shown in Fig. 1, an MLP consists of three layers: an input layer, an output layer, and one or more hidden layers. Each layer is composed of a predefined number of neurons. The neurons (indicated in Fig. 1 by a circle) in the input layer only act as buffers for distributing the input signals x_i to neurons in the hidden layer. Each neuron j in the hidden layer sums up its input signals x_i after weighting them with the strengths of the respective connections w_ji from the input layer, and computes its output y_j as a function f of the sum, namely

$$y_j = f\Big(\sum_i w_{ji} x_i\Big) \quad (5)$$

where f can be a simple threshold function, a sigmoidal or hyperbolic tangent function. The output of neurons in the output layer is similarly computed.

Fig. 1 General form of multilayered perceptrons
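A layer-by-layer application of eq. (5) gives the whole forward pass. The sketch below assumes tanh hidden layers and a linear output layer (the configuration the paper adopts later); the names are illustrative:

```python
import numpy as np

def mlp_forward(x, hidden_weights, output_weights):
    """Repeated application of eq. (5): each hidden neuron computes
    tanh(sum_i w_ji x_i); the output layer is linear."""
    a = np.asarray(x, dtype=float)
    for W in hidden_weights:
        a = np.tanh(W @ a)          # eq. (5) with f = tanh
    return output_weights @ a       # linear output layer

# tiny 2-2-1 network with fixed weights
hidden = [np.array([[0.5, -0.5], [1.0, 1.0]])]
output = np.array([[1.0, 1.0]])
y = mlp_forward([0.0, 0.0], hidden, output)   # all activations are tanh(0) = 0
```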

Training a network consists of adjusting the network weights using the different learning algorithms. A learning algorithm gives the change Δw_ji(t) in the weight of a connection between neurons i and j at time t. For the Levenberg–Marquardt learning algorithm, the weights are updated according to the following formula

$$w_{ji}(t+1) = w_{ji}(t) + \Delta w_{ji}(t) \quad (6)$$

with

$$\Delta w_{ji} = \left[J^T(w)J(w) + \mu I\right]^{-1} J^T(w)E(w) \quad (7)$$

where J is the Jacobian matrix, μ is a constant, I is an identity matrix, and E(w) is an error function. The Jacobian matrix contains the first derivatives of the errors with respect to the weights and biases. It can be calculated by using a standard backpropagation algorithm. The value of μ is decreased after each successful step and is increased only when a tentative step would increase the sum of squares of errors. A detailed discussion of the Levenberg–Marquardt learning algorithm can be found in [11].
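One Levenberg–Marquardt step of eqs. (6)–(7) can be sketched as follows. This is our own sketch, not the authors' implementation: the residual vector e plays the role of E(w), and the correction is subtracted here because e is taken as prediction minus target (the usual convention in [11]):

```python
import numpy as np

def lm_step(w, J, e, mu):
    """Eq. (7): delta = [J^T J + mu I]^{-1} J^T e, then eq. (6):
    update w by the step (subtracted, since e holds prediction - target)."""
    JtJ = J.T @ J
    delta = np.linalg.solve(JtJ + mu * np.eye(JtJ.shape[0]), J.T @ e)
    return w - delta

# fit y = w*x to the points x = [1, 1], y = [1, 1], starting from w = 0
J = np.array([[1.0], [1.0]])   # d(prediction)/dw at each point
e = np.array([-1.0, -1.0])     # predictions - targets at w = 0
w_new = lm_step(np.array([0.0]), J, e, mu=0.0)
```

With mu = 0 the step reduces to a Gauss–Newton step, which solves this linear problem exactly in one iteration; a large mu shrinks the step towards steepest descent, which is the blending behaviour described above.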

3.2 Application of the ANN to the calculation of the association probabilities

The ANN has been adapted for the computation of the association probabilities β_ij. For the neural model, the inputs are the absolute values of the elements of the measurement innovation vector y_ij, |x̃_ij| and |ỹ_ij|, and the output is the association probability β_ij. The neural model used in calculating β_ij is shown in Fig. 2.

The accuracy of the ANN model strongly depends on the data sets used for training. If the training data sets are insufficient or do not cover all the necessary representative features of the problem, large errors can result with test data sets. If the training data sets are too large, this may cause overfitting and training may take a long time. When a suitable network configuration is found and the training data sets are selected, network training can be started. Good training of the ANN is observed when the tracks estimated using the ANN are close to the true tracks in different test scenarios.

Training an ANN using a learning algorithm to compute the association probabilities involves presenting it sequentially with different sets (|x̃_ij| and |ỹ_ij|) and corresponding desired β_ij values. Differences between the desired output β_ij and the actual output of the ANN are evaluated by the learning algorithm. Adaptation is carried out after the presentation of each set (|x̃_ij| and |ỹ_ij|) until the calculation accuracy of the network is deemed satisfactory according to some criterion (for example, when the error between the desired β_ij and the actual output for all the training set falls below a given threshold) or when the maximum allowable number of epochs is reached.

The values of the input variables |x̃_ij| and |ỹ_ij| used in this paper are between 0 and 1.2 km. The β_ij values, which depend on the absolute values of the input variables, must be between 0 and 1. As the values of the input variables approach zero, the value of β_ij approaches 1. After many trials, the desired β_ij values, which lead to excellent agreement between the true tracks and the estimated tracks, were determined. In this paper, 630 data sets most suitable for correct modelling of the association probabilities were used to train the networks.

In the MLP, the input and output layers have linear transfer functions and the hidden layers have hyperbolic tangent functions. After several trials, it was found that the most suitable network configuration was two hidden layers with twelve neurons. The input and output data tuples were scaled between 0.0 and 1.0 before training. The number of epochs was 1000 for training. The seed number was fixed at 16 755; the seed number initialises the random number generator used to set the initial weights of the networks, ensuring reproducible randomness. The value of μ in (7) was chosen as 0.2.
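Under the configuration just described (two inputs |x̃_ij| and |ỹ_ij|, two hidden layers of twelve tanh neurons, a linear output, data scaled to [0, 1], a fixed seed), the network setup might look like the following sketch. The helper names and the initial weight scale are our own assumptions:

```python
import numpy as np

rng = np.random.default_rng(16755)        # fixed seed, as in the paper

def scale01(v, lo=0.0, hi=1.2):
    """Scale a value from the [lo, hi] km input range into [0, 1]."""
    return (v - lo) / (hi - lo)

# 2 inputs -> 12 hidden -> 12 hidden -> 1 output
layer_sizes = [2, 12, 12, 1]
weights = [0.1 * rng.standard_normal((n_out, n_in))
           for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]
shapes = [W.shape for W in weights]
```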

After training, the association probabilities β_ij are computed rapidly using the neural model under different test scenarios. These computed association probabilities are used in (2) to determine the combined innovation, and then the estimated states of the targets are found using the Kalman filter equations. The approach proposed in this paper can be termed an ANN data association filter (ANNDAF).

    4 Simulation results

The aim in this section is to test the performance of the present ANNDAF model against the JPDAF and the CJPDAF algorithms. For this purpose, five different tracking scenarios are considered. The trajectories of two crossing targets in scenario 1, four crossing targets in scenario 2, six crossing targets in scenario 3, two parallel targets in scenario 4, and four parallel targets in scenario 5, are shown in Figs. 3–7, respectively. These trajectories are similar to those widely used in the literature. The initial positions and velocities of crossing targets are listed in the first two rows, in the first four rows, and in all rows of Table 1 for scenarios 1, 2 and 3, respectively. The initial positions and velocities of parallel targets are given in the first two rows and in all rows of Table 2 for scenarios 4 and 5, respectively. The targets were assumed to have constant velocities in a two-dimensional plane. For scenarios in a cluttered environment, a uniform clutter density of 0.6 km⁻² was selected, which produced on average two clutter points per validation gate. In the simulation the sampling interval was assumed to be 1 s. The covariance matrix Q_i(t) of the process noise w_i(t) is given by

$$Q_i(t) = \begin{bmatrix} (\sigma_{ix}(t))^2 & 0 \\ 0 & (\sigma_{iy}(t))^2 \end{bmatrix}$$

The associated variances were chosen as

$$(\sigma_{ix}(t))^2 = 0.005\ \text{km}^2/\text{s}^4, \qquad (\sigma_{iy}(t))^2 = 0.005\ \text{km}^2/\text{s}^4$$

It was assumed that only position measurements were available, so that

$$H(t) = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix} \quad \text{for all } t$$
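The constant-velocity model with this H matrix and a 1 s sampling interval can be reproduced directly. The transition matrix F below is the standard constant-velocity form implied by the state [x, y, ẋ, ẏ]; it is not printed in the paper, so it is an assumption of this sketch:

```python
import numpy as np

dt = 1.0                                   # sampling interval, 1 s
F = np.array([[1, 0, dt, 0],               # constant-velocity transition
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],                # position-only measurements
              [0, 1, 0, 0]], dtype=float)
Q = np.diag([0.005, 0.005])                # km^2/s^4 process-noise variances
R = np.diag([0.1, 0.1])                    # measurement noise covariance

x0 = np.array([1.55, 3.55, 0.42, 0.56])    # target 1 of Table 1
x1 = F @ x0                                # one noise-free propagation step
```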

    Fig. 2 Neural model for association probabilities computation


Fig. 3 Tracking two crossing targets in scenario 1 using CJPDAF and ANNDAF

a Non-cluttered environment
b Cluttered environment

    Fig. 4 Tracking four crossing targets in scenario 2 using CJPDAF and ANNDAF

a Non-cluttered environment
b Cluttered environment


Fig. 6 Tracking two parallel targets in scenario 4 using CJPDAF and ANNDAF

a Non-cluttered environment
b Cluttered environment

    Fig. 5 Tracking six crossing targets in scenario 3 using CJPDAF and ANNDAF

a Non-cluttered environment
b Cluttered environment


The measurement noise covariance matrix was R(t) = diag(0.1, 0.1), assuming all measurement noise to be uncorrelated. The probability of detection was selected as 0.9. The threshold of the validation gate was set to 10.

The tracking performances of the CJPDAF and the ANNDAF are compared in Figs. 3–7 for the five test scenarios in both cluttered and non-cluttered environments.

It can be seen from Figs. 3–7 that the ANNDAF tracks are closer to the true tracks than the tracks predicted by the CJPDAF for all scenarios. The results of the JPDAF are not shown in Figs. 3–7 for clarity, but the RMS tracking errors of the JPDAF algorithm are given in Table 3. Table 3 also gives comparative performances of the JPDAF, CJPDAF, and ANNDAF methods in terms of RMS tracking error. The percentage improvement obtained using ANNDAF is calculated as the ratio of the difference between the RMS errors of the ANNDAF method and the competing method (JPDAF or CJPDAF) to the RMS error of the competing method. It is clear from Table 3 that in all cases the results of the ANNDAF method are better than those of the JPDAF and the CJPDAF methods. The RMS tracking error values show that a significant improvement is obtained over the results of the JPDAF and CJPDAF methods. When ANNDAF is used, the average percentage improvement with respect to the JPDAF and the CJPDAF is 35% and 33%, respectively. Accurate computation of the association probabilities using ANNs leads to good accuracy in tracking multiple targets.
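The percentage-improvement figures in Table 3 can be checked with a one-line helper (positive when ANNDAF has the smaller error):

```python
def pct_improvement(rms_competing, rms_anndaf):
    """100 * (competing - ANNDAF) / competing, as tabulated in Table 3."""
    return 100.0 * (rms_competing - rms_anndaf) / rms_competing

# scenario 1, target 1: JPDAF 0.7981 km, CJPDAF 0.7049 km, ANNDAF 0.4516 km
vs_jpdaf = round(pct_improvement(0.7981, 0.4516))
vs_cjpdaf = round(pct_improvement(0.7049, 0.4516))
```

These reproduce the 43% and 36% entries in the first row of Table 3.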

Accurate, fast, and reliable ANN models can be developed from measured/simulated data. Once developed, these ANN models can be used in place of computationally intensive models to speed up target tracking. The ANN, using simple addition, multiplication, division, and threshold operations in the basic processing element, can be readily implemented in analogue VLSI or optical hardware, or can be implemented on special-purpose massively parallel hardware. If the ANN can be implemented in simple analogue integrated circuits, then the ANNDAF method presented in this paper could provide

Fig. 7 Tracking four parallel targets in scenario 5 using CJPDAF and ANNDAF

a Non-cluttered environment
b Cluttered environment

Table 1: Initial positions and velocities for crossing targets in scenarios 1, 2, and 3

Crossing targets   x, km   y, km   ẋ, km/s   ẏ, km/s
1                  1.55    3.55    0.42       0.56
2                  1.01    4.02    0.52       0.44
3                  0.05    8.02    0.56       0.08
4                  1.65   17.02    0.42      −0.09
5                  2.51    2.22    0.46       0.32
6                  0.01   18.52    0.44      −0.11

Table 2: Initial positions and velocities for parallel targets in scenarios 4 and 5

Parallel targets   x, km   y, km   ẋ, km/s   ẏ, km/s
1                  1.01    1.41    0.41       0.001
2                  1.02    2.82    0.39       0.002
3                  1.01    4.81    0.43       0.001
4                  1.39    6.72    0.40       0.003


a practical and efficient solution to the data association problem. A distinct advantage of neural computation is that, after proper training, a neural network completely bypasses the repeated use of complex iterative processes for new cases presented to it. Thus, neural computation is very fast after the training phase. Therefore, the ANN can be used effectively in real-time applications.

    5 Conclusions

A neural network approach has been presented for tracking multiple targets in both cluttered and non-cluttered environments. In this approach, association probabilities are computed using ANNs. These computed association probabilities are used to determine the combined innovation. The estimated states of the targets are then found using the Kalman filter equations. It was shown that the ANNDAF tracks are in good agreement with the true tracks. This good agreement supports the validity of the approach proposed in this paper. Better accuracy is obtained using the ANNDAF than with the well known JPDAF and CJPDAF algorithms.

    6 References

1 Bar-Shalom, Y., and Fortmann, T.E.: 'Tracking and data association' (Academic Press, San Diego, CA, USA, 1988)

2 Blackman, S.S.: 'Multiple target tracking with radar applications' (Artech House, Boston, USA, 1986)

3 Bar-Shalom, Y.: 'Multitarget-multisensor tracking: principles and techniques' (YBS Publishing, 1995)

4 Fitzgerald, R.J.: 'Development of practical PDA logic for multitarget

Table 3: Performance comparison of JPDAF, CJPDAF and ANNDAF methods

                                             RMS tracking errors, km        Percentage improvement
Scenario  Targets                   Target   JPDAF    CJPDAF   ANNDAF    vs JPDAF    vs CJPDAF
1         Two crossing targets,       1      0.7981   0.7049   0.4516       43           36
          non-cluttered environment   2      0.8732   0.8851   0.5546       36           37
          Two crossing targets,       1      0.8473   0.8256   0.5359       37           35
          cluttered environment       2      0.9905   0.9987   0.6917       30           31
2         Four crossing targets,      1      0.8468   0.7182   0.6851       19            5
          non-cluttered environment   2      1.0693   0.8585   0.7156       33           17
                                      3      1.4175   1.5054   0.9828       31           35
                                      4      1.6524   1.6646   0.8237       50           51
          Four crossing targets,      1      1.1323   1.2320   0.5891       48           52
          cluttered environment       2      1.1378   1.0363   0.5198       54           50
                                      3      1.4707   1.4563   1.3950        5            4
                                      4      1.9010   1.9404   1.0595       44           45
3         Six crossing targets,       1      0.8094   0.7426   0.6656       18           10
          non-cluttered environment   2      1.0401   0.8468   0.4644       55           45
                                      3      1.3653   1.3542   0.6656       51           51
                                      4      1.7306   1.6933   1.2781       26           25
                                      5      1.4138   1.2496   1.0498       26           16
                                      6      1.0726   1.1868   0.5151       52           57
          Six crossing targets,       1      1.0786   0.8762   0.5476       49           38
          cluttered environment       2      1.2461   1.3838   1.2250        2           11
                                      3      1.4631   1.5054   1.2041       18           20
                                      4      2.0205   2.2326   0.8732       57           61
                                      5      1.6201   1.5016   0.8614       47           43
                                      6      1.0401   1.1022   0.9641        7           13
4         Two parallel targets,       1      1.1129   1.0857   0.6996       37           36
          non-cluttered environment   2      0.7981   0.8851   0.4648       42           47
          Two parallel targets,       1      1.1937   1.2816   0.6734       44           47
          cluttered environment       2      0.9672   1.0176   0.5808       40           43
5         Four parallel targets,      1      1.0498   0.9000   0.5929       44           34
          non-cluttered environment   2      0.9151   0.8791   0.6786       26           23
                                      3      1.1731   1.3141   1.1022        6           16
                                      4      1.8361   1.7472   0.9548       48           45
          Four parallel targets,      1      1.2602   1.1424   0.9703       23           15
          cluttered environment       2      0.9060   0.8821   0.4601       49           48
                                      3      1.1560   1.2816   1.0117       12           21
                                      4      1.8105   1.6687   1.1492       37           31


tracking by microprocessor'. Proc. American Control Conf., Seattle, WA, USA, 1986, pp. 889–897

5 Haykin, S.: 'Neural networks: a comprehensive foundation' (Macmillan College Publishing Company, New York, USA, 1994)

6 Haykin, S.: 'Kalman filtering and neural networks' (John Wiley & Sons, 2001)

7 Sengupta, D., and Iltis, R.A.: 'Neural solution to the multitarget tracking data association problem', IEEE Trans. Aerosp. Electron. Syst., 1989, 25, (1), pp. 96–108

8 Leung, H.: 'Neural-network data association with application to multiple-target tracking', Opt. Eng., 1996, 35, (3), pp. 693–700

9 Iltis, R.A., and Ting, P.Y.: 'Computing association probabilities using parallel Boltzmann machines', IEEE Trans. Neural Netw., 1993, 4, (2), pp. 221–233

10 Wang, F., Litva, J., Lo, T., and Bosse, E.: 'Performance of neural data associator', IEE Proc., Radar Sonar Navig., 1996, 143, (2), pp. 71–78

11 Hagan, M.T., and Menhaj, M.: 'Training feedforward networks with the Marquardt algorithm', IEEE Trans. Neural Netw., 1994, 5, (6), pp. 989–993

12 Levenberg, K.: 'A method for the solution of certain nonlinear problems in least squares', Q. Appl. Math., 1944, 2, pp. 164–168

13 Marquardt, D.W.: 'An algorithm for least-squares estimation of nonlinear parameters', J. Soc. Ind. Appl. Math., 1963, 11, pp. 431–441
