
    Artificial neural networks for calculating the association probabilities in multi-target tracking

    I. Turkmen and K. Guney

    Abstract: A simple method based on the multilayered perceptron neural network architecture for calculating the association probabilities used in target tracking is presented. The multilayered perceptron is trained with the Levenberg-Marquardt algorithm. The tracks estimated by using the proposed method for multiple targets in cluttered and non-cluttered environments are in good agreement with the original tracks. Better accuracy is obtained than when using the joint probabilistic data association filter or the cheap joint probabilistic data association filter methods.

    1 Introduction

    The subject of multi-target tracking (MTT) has applications in both civilian and military areas. The aim of MTT is to partition the sensor data into sets of observations, or tracks, produced by the same source. Once tracks are formed and confirmed, the number of targets can be estimated, and the positions and velocities of the targets can be computed from each track. A number of methods [1-3] have been presented and used to estimate the states of multiple targets. These methods have different levels of complexity and require vastly different computational effort. The joint probabilistic data association filter (JPDAF) [1] is a powerful and reliable algorithm for MTT. It works without any prior information about the targets and clutter. In the JPDAF algorithm, the association probabilities are computed from the joint likelihood functions corresponding to the joint hypotheses associating all the returns to different permutations of the targets and clutter points. The computational complexity of the joint probabilities increases exponentially as the number of targets increases. To reduce this computational complexity significantly, Fitzgerald [4] proposed a simplified version of the JPDAF, called the cheap JPDAF algorithm (CJPDAF). The association probabilities were calculated in [4] using an ad hoc formula. The CJPDAF method is very fast and easy to implement; however, in either a dense-target or a highly cluttered environment the tracking performance of the CJPDAF decreases significantly.

    In this article, a method based on artificial neural networks (ANNs) for computing the association probabilities is presented. These computed association probabilities are then used to track the multiple targets in cluttered and non-cluttered environments. ANNs are developed from neurophysiology by morphologically and computationally mimicking human brains. Although the precise details of the operation of ANNs are quite different from those of human brains, they are similar in three aspects: they consist of a very large number of processing elements (the neurons), each neuron connects to a large number of other neurons, and the functionality of networks is determined by modifying the strengths of connections during a learning phase. Their ability and adaptability to learn, generalisability, smaller information requirement, fast real-time operation, and ease of implementation have made ANNs popular in recent years [5, 6].

    To calculate the association probabilities, different structures and architectures of ANNs, i.e. the standard Hopfield network, the modified Hopfield network, the Boltzmann machine, and the mean-field Hopfield network, were proposed in [7-10], respectively. In these works [7-10], the task of finding association probabilities is viewed as a constrained optimisation problem. The constraints were obtained by careful evaluation of the properties of the JPDA rule. Some of these constraints are analogous to those of the classical travelling salesman problem. Usually, there are five constants to be decided arbitrarily [7-9]. In practice, it is very difficult to choose the five constants to ensure optimisation. On the other hand, the convergence speed of the Boltzmann machine [9] is very slow, even though it can achieve an optimal solution. To cope with these problems, the mean-field Hopfield network, which is an alternative to the Hopfield network and the Boltzmann machine, was proposed by Wang et al. [10]. This mean-field Hopfield network has the advantages of both the Hopfield network and the Boltzmann machine; however, the higher performance of the mean-field Hopfield network is achieved at the expense of the complexity of the equipment structure.

    In this paper, the multilayered perceptron (MLP) neural network [5] is used to calculate the association probabilities accurately. MLPs are the simplest and therefore most commonly used neural network architectures. In this paper, MLPs are trained using the Levenberg-Marquardt algorithm [11-13].

    2 JPDAF and CJPDAF

    The JPDAF is a moderately complex algorithm designed for MTT [1]. It calculates the probabilities of measurements being associated with the targets, and uses them to form a weighted average innovation for updating each target state.

    © IEE, 2004

    IEE Proceedings online no. 20040739

    doi: 10.1049/ip-rsn:20040739

    I. Turkmen is with the Civil Aviation School, Department of Aircraft Electrical and Electronics Engineering, Erciyes University, 38039, Kayseri, Turkey

    K. Guney is with the Faculty of Engineering, Department of Electronics Engineering, Erciyes University, 38039, Kayseri, Turkey

    Paper received 28th January 2004

    IEE Proc.-Radar Sonar Navig., Vol. 151, No. 4, August 2004 181


    The update equation of the Kalman filter is

        \hat{x}_i(t|t) = \hat{x}_i(t|t-1) + K_i(t) y_i(t)    (1)

    where \hat{x}_i(t|t-1) is the predicted state vector, K_i(t) is the Kalman gain, and y_i(t) is the combined innovation given by

        y_i(t) = \sum_{j=1}^{m_i(t)} \beta_{ij} y_{ij}(t)    (2)

    where m_i(t) is the number of validated measurements for track i, \beta_{ij} is the probability of associating track i with measurement j, and y_{ij}(t) is the innovation of track i and measurement j. The measurement innovation term, y_{ij}(t), is given by

        y_{ij}(t) = z_j(t) - H_i(t) \hat{x}_i(t|t-1)    (3)

    where z_j(t) is the j-th validated measurement received at time t and H_i(t) is the measurement matrix for target i.
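The weighted update of (1)-(3) can be sketched as follows; this is an illustrative implementation assuming NumPy arrays for the state, gain, and measurements, with function and variable names chosen here rather than taken from the paper:

```python
import numpy as np

def combined_innovation_update(x_pred, K, z_list, H, beta):
    """Sketch of the state update in eqs. (1)-(3): each validated
    measurement's innovation is weighted by its association
    probability beta_ij before the Kalman correction is applied."""
    # Innovations y_ij(t) = z_j(t) - H_i(t) x_i(t|t-1), eq. (3)
    innovations = [z - H @ x_pred for z in z_list]
    # Combined innovation y_i(t) = sum_j beta_ij y_ij(t), eq. (2)
    y_comb = sum(b * y for b, y in zip(beta, innovations))
    # State update x_i(t|t) = x_i(t|t-1) + K_i(t) y_i(t), eq. (1)
    return x_pred + K @ y_comb
```

With equal association probabilities the update simply moves the state toward the average of the validated measurements, scaled by the gain.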

    A method for calculating the association probabilities that is as simple as possible needs to be obtained, but the estimated tracks obtained using these association probabilities must be in good agreement with the true tracks. In this work, a method based on the ANN for efficiently solving this problem is presented. First, the parameters related to the association probabilities are determined; then the association probabilities depending on these parameters are calculated using the neural model.

    In the standard JPDAF, the association probabilities \beta_{ij} are calculated by considering every possible hypothesis concerning the association of the new measurements with the existing tracks. To reduce the complexity of the JPDAF, the CJPDAF algorithm was proposed in [4]. It avoids the formation of hypotheses and approximates the association probabilities by

        \beta_{ij}(t) = G_{ij}(t) / (S_t^i(t) + S_m^j(t) - G_{ij}(t) + B)    (4a)

    with

        S_t^i(t) = \sum_{j=1}^{m_i(t)} G_{ij}(t)    (4b)

    and

        S_m^j(t) = \sum_{i=1}^{M} G_{ij}(t)    (4c)

    where G_{ij}(t) is the distribution of the innovation y_{ij}(t), usually assumed to be Gaussian, and B is a bias term introduced to account for the non-unity probability of detection and clutter. The elements of the y_{ij} vector are defined by \tilde{x}_{ij} and \tilde{y}_{ij} for a Cartesian sensor. It is clear that only two parameters, \tilde{x}_{ij} and \tilde{y}_{ij}, are needed to describe the association probabilities.
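Equations (4a)-(4c) reduce to row and column sums of the likelihood matrix, which makes the CJPDAF very cheap to compute. A minimal sketch, assuming the likelihoods G_ij(t) are supplied as an M x m NumPy array (the function name and the default B = 0 are choices made here, not prescribed by the paper):

```python
import numpy as np

def cjpdaf_probabilities(G, B=0.0):
    """Cheap JPDAF association probabilities, eqs. (4a)-(4c).
    G is an (M targets x m measurements) array of innovation
    likelihoods G_ij(t); B is the bias term accounting for
    non-unity detection probability and clutter."""
    S_t = G.sum(axis=1, keepdims=True)   # eq. (4b): sum over measurements j
    S_m = G.sum(axis=0, keepdims=True)   # eq. (4c): sum over targets i
    return G / (S_t + S_m - G + B)       # eq. (4a)
```

For a likelihood matrix with one dominant entry per row, the formula assigns that measurement a probability near unity for the corresponding track, as expected.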

    3 Artificial neural networks (ANNs)

    ANNs are biologically inspired computer programs designed to simulate the way in which the human brain processes information [5]. ANNs gather their knowledge by detecting the patterns and relationships in data and learn (or are trained) through experience, not by programming. An ANN is formed from hundreds of single units, artificial neurons or processing elements connected with weights, which constitute the neural structure and are organised in layers. The power of neural computations comes from the weight connections in a network. Each neuron has weighted inputs, a summation function, a transfer function, and an output. The behaviour of a neural network is determined by the transfer functions of its neurons, by the learning rule, and by the architecture itself. The weights are the adjustable parameters and, in that sense, a neural network is a parameterised system. The weighted sum of the inputs constitutes the activation of the neuron. The activation signal is passed through a transfer function to produce the output of a neuron. The transfer function introduces nonlinearity to the network. During training, inter-unit connections are optimised until the error in predictions is minimised and the network reaches the specified level of accuracy. Once the network is trained, new unseen input information is entered into the network to calculate the output for testing. The ANN represents a promising modelling technique, especially for data sets having nonlinear relationships that are frequently encountered in engineering. In terms of model specification, ANNs require no knowledge of the data source but, since they often contain many weights that must be estimated, they require large training sets. In addition, ANNs can combine and incorporate both literature-based and experimental data to solve problems. ANNs have many structures and architectures [5, 6]. In this paper, the multilayered perceptron (MLP) neural network architecture [5] is used to compute the association probabilities.
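The weighted sum, activation, and transfer function described above can be sketched for a single processing element; the sigmoid used here is an illustrative choice of transfer function, not one specified by the paper:

```python
import numpy as np

def neuron_output(x, w, b):
    """One processing element: the weighted sum of the inputs
    (the activation) is passed through a nonlinear transfer
    function to produce the neuron's output."""
    activation = np.dot(w, x) + b             # weighted sum of inputs
    return 1.0 / (1.0 + np.exp(-activation))  # sigmoid transfer function
```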

    3.1 Multilayered perceptrons (MLPs)

    MLPs are the simplest and therefore most commonly used neural network architectures. MLPs can be trained using many different learning algorithms [5, 6]. In this paper, MLPs are trained using the Levenberg-Marquardt algorithm [11-13], because this algorithm is capable of fast learning and good convergence. This algorithm is a least-squares estimation method based on the maximum neighbourhood idea. It combines the best features of the Gauss-Newton and the steepest-descent methods.
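The interpolation between Gauss-Newton and steepest descent can be illustrated by the standard Levenberg-Marquardt weight step, dw = (J^T J + mu I)^{-1} J^T e, where J is the Jacobian of the network errors with respect to the weights and e the error vector. A minimal sketch (the function name and damping symbol mu are choices made here):

```python
import numpy as np

def lm_step(J, e, mu):
    """One Levenberg-Marquardt weight update: solves
    (J^T J + mu I) dw = J^T e.  Small mu approaches the
    Gauss-Newton step; large mu approaches scaled steepest
    descent, which is the blend noted in the text."""
    JtJ = J.T @ J
    return np.linalg.solve(JtJ + mu * np.eye(JtJ.shape[0]), J.T @ e)
```

In practice mu is adapted during training: decreased after a successful step and increased when the error grows.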
