
RESEARCH Open Access

Embedded tracking algorithm based on multi-feature crowd fusion and visual object compression

Zheng Wenyi1,2* and Dong Decun1

Abstract

The poor accuracy and real-time performance of tracking moving objects in dynamic, wide-range, complex environments have become the bottleneck of target location and tracking. To improve the positioning accuracy and the quality of the tracking service, we propose an embedded tracking algorithm based on multi-feature crowd fusion and visual object compression. On the one hand, according to the features of the target, the optimal feature matching method is selected and a multi-feature crowd fusion location model is proposed. On the other hand, by reducing the dimension of the multidimensional space composed of the moving object's visual frames and compressing the visual object, the embedded tracking algorithm is established. Experimental results show that the proposed tracking algorithm achieves high precision, low energy consumption, and low delay.

Keywords: Visual object compression, Embedded tracking, Crowd fusion, Multi-feature systems

1 Introduction

Moving target tracking is one of the most active areas in the development of science and technology, and target tracking algorithms have received wide attention around the world [1]. With the continuous improvement and expansion of their performance, location and tracking algorithms have been applied successfully in industry, agriculture, health care, and the service industry [2], and they also prove their worth in urban security, defense, and hazardous situations such as space exploration [3].

A low-complexity, high-accuracy algorithm was presented in [4] to reduce the computational load of the traditional data-fusion algorithm with heterogeneous observations for location tracking. Trogh et al. [5] presented a radio frequency-based location tracking system, which improves performance by compensating for body shadowing. In [6], Liang and Krause proposed a proof-of-concept system based on a sensor fusion approach, built with considerations of lower cost and higher mobility, deployability, and portability. By combining the drift velocities of anchor nodes, the scheme of [7] estimates the drift velocity of the tracked node using the spatial correlation of the ocean current. A distributed multi-human location algorithm was studied by Yang et al. [8] for a binary pyroelectric infrared sensor tracking system.

A model-based approach was presented in [9], which predicts the geometric structure of an object using its visual hull. A task-dependent codebook compression framework was proposed to learn a compression function and adapt the codebook compression [10]. Ji et al. [11] proposed a compact bag-of-patterns descriptor with an application to low bit rate mobile landmark search. In [12], blocks lying in object regions are flagged as compression blocks, and an object tree is added to each coding tree unit to describe the object's shape.

However, guaranteeing high-precision tracking of moving objects in dynamic, wide-range, complex environments while keeping the optimization algorithm tractable remains one of the most difficult problems. Building on the above research, we propose an embedded tracking algorithm based on multi-feature crowd fusion and visual object compression for mobile object tracking.

* Correspondence: [email protected]
1 The Key Laboratory of Road and Traffic Engineering, Ministry of Education, Tongji University, Shanghai 201804, China
2 School of Computer Science and Communication Engineering, Jiangsu University, Zhenjiang 212013, Jiangsu, China


© 2016 The Author(s). Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Wenyi and Decun EURASIP Journal on Embedded Systems (2016) 2016:16 DOI 10.1186/s13639-016-0053-7


The rest of the paper is organized as follows. Section 2 describes the location model based on multi-feature crowd fusion. In Section 3, we present the embedded tracking algorithm with visual object compression. We analyze and evaluate the proposed scheme in Section 4. Finally, we conclude the paper in Section 5.

2 Location model based on multi-feature crowd fusion

Generally, target location is divided into two stages. In the first stage, the features of the moving target are extracted. Feature extraction is completed with the following steps (a minimal sketch follows the list):

(1) Capture the image frame sequence of the moving target.
(2) Extract the characteristics of the target image frames in real time.
(3) Among the frames between the current image frame and the still image frame, search for the extracted image-frame features that are most similar to the target motion characteristics.
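As a rough illustration of these three steps, the following Python sketch captures a frame sequence, extracts one feature vector per frame, and searches for the frame whose features are most similar to a reference motion feature. The feature descriptor and similarity measure used here (a grayscale histogram and cosine similarity) are stand-ins chosen for illustration; the paper does not prescribe a specific descriptor.

```python
import numpy as np

def extract_features(frame):
    """Step (2): a stand-in feature descriptor (normalized grayscale histogram)."""
    hist, _ = np.histogram(frame, bins=32, range=(0, 256))
    hist = hist.astype(float)
    return hist / (hist.sum() + 1e-12)

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def most_similar_frame(frames, target_features):
    """Step (3): search the captured sequence for the frame whose
    features best match the target motion characteristics."""
    best_idx, best_sim = -1, -np.inf
    for i, frame in enumerate(frames):          # step (1): captured frame sequence
        sim = cosine_similarity(extract_features(frame), target_features)
        if sim > best_sim:
            best_idx, best_sim = i, sim
    return best_idx, best_sim

# Usage with synthetic frames standing in for a captured sequence.
rng = np.random.default_rng(0)
frames = [rng.integers(0, 256, size=(120, 160)) for _ in range(10)]
target = extract_features(frames[3])
idx, sim = most_similar_frame(frames, target)
print(f"most similar frame: {idx}, similarity: {sim:.3f}")
```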

The second stage involves matching the characteristics of the moving target: according to the characteristics of the target, different features are chosen and the best feature matching scheme is selected.

The above localization scheme has the following defects:

(1) The extracted features are single. Such feature extraction makes it difficult to locate complex moving objects with multiple states.

(2) The features of the moving object change in complex scenes, for example through deformation (Cdef), lighting (Clgf), size (Csiz), and color (Ccol), making the single-feature matching success rate SRFM very low, as shown in formula (1) (see the sketch after this list).

$$
\begin{cases}
f(\mathrm{IF}_i = if) = \displaystyle\sum_{i=1}^{N} G_i\, h\!\Big(if,\, h_{i-1},\, \sum_{j=1}^{i} h_j\Big)\, if\big(C_{def}, C_{lgf}\big) \\[2ex]
h\!\Big(if,\, h_{i-1},\, \sum_{j=1}^{i} h_j\Big) = \big|\, if_i(C_{siz}, C_{col}) - \alpha \,\big|\; h_{i-1}\!\Big(\alpha,\, \sum_{j=1}^{i} h_j\Big) \\[2ex]
SR_{FM} = \dfrac{1}{N} \displaystyle\sum_{i=1}^{M} f(if_i, h_i)\, \mathbf{1}\!\Big[\sum_{j=1}^{N} f(if_j) \le f(if_M, h_M)\Big]
\end{cases}
\tag{1}
$$

Here, if represents an image frame and IF is the image frame sequence. G is the vector representation of the image frame matrix, and h is the function used to evaluate image frame characteristics and frame similarity. N is the length of the captured moving-target image frame sequence, and M is the number of image frames used for feature matching. From formula (1), the upper bound of the matching success rate is f(if_M, h_M)/N, and this bound is inversely proportional to the number of captured image frames. This shows that the number of captured image frames restricts single-feature matching.

(3) The accuracy and robustness of real-time target motion estimation are poor, as shown in formula (2). In order to improve accuracy and robustness, a single feature set can be extended to track a series of features, but the complexity of the resulting transition algorithm is too high, as shown in formula (3).

$$
\begin{cases}
ATR = \dfrac{1}{N}\displaystyle\sum_{i=1}^{M} \sin(\beta - \alpha_i) + \sum_{i=1}^{M} \dfrac{h_i}{\sqrt{\alpha}} \\[2ex]
RUSTR = \dfrac{\rho\, f(\mathrm{IF}_M)}{SR_{FM}}
\end{cases}
\tag{2}
$$

Here, ATR indicates the positioning accuracy and RUSTR indicates the location robustness. β is the included angle between adjacent image frames, and ρ denotes the error vector.

$$
\begin{cases}
CLETSA = \big|\,\mathrm{IF}_i - \rho \sin\beta\,\big| \le M \cdot g(h_i, \alpha) \\[1ex]
g(h_i, \alpha) \approx \displaystyle\sum_{i=1}^{M} h_i\, if\big(C_{def}, C_{lgf}, C_{siz}, C_{col}\big) \ge M \cdot if_M\big(C_{def}, C_{lgf}, C_{siz}, C_{col}\big)
\end{cases}
\tag{3}
$$

Here, CLETSA represents the complexity of the transition algorithm, and the function g(hi, α) represents the transition algorithm itself. It can be seen that the complexity grows in proportion to the number of image frames and the degree of feature matching, which means that more space, time, and computation must be spent to match image frames on more features.
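To make defect (2) concrete, the following sketch estimates a single-feature matching success rate in the spirit of formula (1): the fraction of captured frames whose single-feature similarity to the reference exceeds a threshold. The histogram feature, the threshold, and the synthetic scene changes are illustrative assumptions, not part of the paper's model.

```python
import numpy as np

def histogram_feature(frame, bins=32):
    # Single feature: a normalized grayscale histogram (illustrative choice).
    hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
    hist = hist.astype(float)
    return hist / (hist.sum() + 1e-12)

def single_feature_success_rate(frames, reference, threshold=0.9):
    """Fraction of the N captured frames whose similarity to the
    reference feature exceeds a threshold (an SRFM-style estimate)."""
    ref = histogram_feature(reference)
    sims = []
    for frame in frames:
        f = histogram_feature(frame)
        sims.append(float(f @ ref / (np.linalg.norm(f) * np.linalg.norm(ref) + 1e-12)))
    return sum(s >= threshold for s in sims) / len(frames)

# Changes in deformation, lighting, size, and color (Cdef, Clgf, Csiz, Ccol) are
# emulated here by brightening and adding noise to a base frame; the success
# rate of the single feature drops as the scene changes.
rng = np.random.default_rng(1)
base = rng.integers(0, 256, size=(120, 160)).astype(float)
frames = [np.clip(base * (1.0 + 0.1 * k) + rng.normal(0, 20, base.shape), 0, 255)
          for k in range(10)]
print("SRFM estimate:", single_feature_success_rate(frames, base))
```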

In order to solve the above problems, we propose a multi-feature crowd fusion location model. The model analyzes the dynamic motion of the target, the moving track, and the structural parameters of the image frames. The state characteristics of different targets are captured, and a multiple-feature vector is composed as in formula (4). This vector integrates the motion state with the deformation, lighting, size, and color characteristics, and it can effectively improve the low matching success rate of single-feature extraction, as shown in formula (5).

$$
\begin{cases}
MLF = \begin{bmatrix}
v_{mot}^{1} f_{11} & \cdots & v_{mot}^{1} f_{1L} \\
\vdots & \ddots & \vdots \\
v_{mot}^{K} f_{K1} & \cdots & v_{mot}^{K} f_{KL}
\end{bmatrix} \\[3ex]
v_{mot} = \dfrac{M}{N} \displaystyle\sum_{i=1}^{N-M} f(if_i, h_i)
\end{cases}
\tag{4}
$$

Here, vmot is the fitting function of the target motion trajectory, K is the number of time series features, and L is the number of spatial sequence features.



$$
SR_{FM} =
\begin{cases}
\alpha \cdot \mathrm{rank}\{MLF\} \tan\beta, & \beta < \alpha \;\|\; \beta > \rho \\
1, & \alpha \le \beta \le \rho
\end{cases}
\tag{5}
$$

Here, rank{MLF} is the rank of the multi-feature vector. From formula (5), we can see that a high matching success rate can be guaranteed as long as the multiple-feature vector is solved correctly.
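As a small illustration of formulas (4) and (5), the sketch below assembles a K x L multi-feature matrix MLF by weighting per-frame feature values with a motion-trajectory term, and then checks its rank, which formula (5) ties to the matching success rate. The way the motion weights and feature values are generated here is an assumption made only for demonstration.

```python
import numpy as np

def build_mlf(feature_matrix, motion_weights):
    """Formula (4)-style construction: MLF[k, l] = vmot_k * f[k, l],
    where feature_matrix is K x L (K time-series features, L spatial features)
    and motion_weights holds one trajectory-fitting value per row."""
    return motion_weights[:, None] * feature_matrix

# Illustrative inputs: K = 4 time-series features, L = 6 spatial features.
rng = np.random.default_rng(2)
features = rng.random((4, 6))
vmot = rng.random(4)            # stand-in for the trajectory fitting values

mlf = build_mlf(features, vmot)
# Formula (5) relates the matching success rate to rank{MLF}; a full-rank
# multi-feature matrix is the favourable case.
print("rank{MLF} =", np.linalg.matrix_rank(mlf))
```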

In order to further reduce the complexity and improve the accuracy and reliability of multi-feature matching, the model combines a multi-feature fusion mechanism based on crowd feature analysis. The combination of the multi-feature vector and the target motion curve is shown in Fig. 1. Under crowd analysis, the curve arcs and the multiple features are relatively independent of one another. Through crowd analysis, the multi-feature vector is optimized so that the features in the vector are not mutually exclusive, which improves the performance of multi-feature fusion and reduces the amount of fusion computation, as shown in formula (6).

$$
\begin{cases}
fu_{comp} = \dfrac{1}{M^2} \displaystyle\sum_{i=1}^{K} \sum_{j=1}^{L} G(i, j)\, MLF(i, j) \\[2ex]
G(i, j) = \dfrac{f(\mathrm{IF}_i)\, f(\mathrm{IF}_j)}{h\big(f, MLF_i, MLF_j\big)}
\end{cases}
\tag{6}
$$

In summary, the multi-feature crowd fusion algorithm is shown in Fig. 2.
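The fusion cost in formula (6) can be read as a double sum over the K x L entries of MLF, weighted by a pairwise frame term G(i, j). The sketch below implements that reading; the similarity standing in for h and the frame scores standing in for f(IF) are assumptions made only so the example runs.

```python
import numpy as np

def fusion_cost(mlf, frame_scores, m):
    """Formula (6)-style fusion amount: (1/M^2) * sum_{i,j} G(i, j) * MLF[i, j],
    with G(i, j) = f(IF_i) * f(IF_j) / h(MLF_i, MLF_j)."""
    k_dim, l_dim = mlf.shape
    total = 0.0
    for i in range(k_dim):
        for j in range(l_dim):
            # Stand-in for h: similarity between the i-th row and j-th column profiles.
            h_ij = 1.0 + abs(mlf[i].mean() - mlf[:, j].mean())
            g_ij = frame_scores[i] * frame_scores[j % len(frame_scores)] / h_ij
            total += g_ij * mlf[i, j]
    return total / (m ** 2)

rng = np.random.default_rng(3)
mlf = rng.random((4, 6))          # K x L multi-feature matrix from formula (4)
frame_scores = rng.random(6)      # stand-in for f(IF_i)
print("fu_comp =", fusion_cost(mlf, frame_scores, m=6))
```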

3 Embedded tracking algorithm for visual object compression

The visual frames of the moving object constitute a multidimensional space of dimension Q. A visual frame in this space is denoted as xQ.

Fig. 2 Multi-feature crowd fusion location algorithm and workflow

Fig. 1 Crowd fusion between multiple features



The internal elements of the visual frames form a visual matrix VM. The elements of the visual matrix are subject to interference from the multidimensional space, so the information is easily distorted. To solve this problem, the visual matrix VM can be compressed. The visual frame of the L-dimensional space tracking system is shown in Fig. 3. Here, VM is selected as the object of the central visual frame VF, perpendicular to the coordinate axes of the L-dimensional space; the angles to the vertical line are denoted θ1, …, θQ. MO represents the moving target. During the motion of the object, φ is the angle between the VF point and the motion direction. The VF point collects visual targets at different degrees, and these targets are used to update the elements of the visual matrix VM. The fusion results of VF and VM are mapped onto the MO plane. The compression matrix must satisfy the omni-directional low-dimensional characteristic shown in formula (7), and after the matrix is compressed, the distribution of the visual frames must satisfy formula (8).

$$
\begin{cases}
F(\theta, \varphi) = \dfrac{|VM|_H \sin\theta \cos\varphi}{2d} \\[2ex]
D_{eff} = d\, \dfrac{2\pi}{Q} \displaystyle\sum_{i=1}^{Q} \left( \dfrac{\sin\theta_i \cos\varphi}{\theta_i} \right)_{\min}^{e}
\end{cases}
\tag{7}
$$

Here, F(θ, φ) is the direction function of the visual frames in the L-dimensional space, d is the distance between the VF point and the origin of the L-dimensional space, and Deff is the effective dimension of the visual tracking space. Deff is clearly smaller than L, which effectively reduces the dimension and improves the compression efficiency of the visual frames.
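As a rough numerical reading of formula (7), the sketch below evaluates the direction function F(θ, φ) and an effective dimension Deff for a set of visual-frame angles, using a matrix norm in place of |VM|_H. The choice of norm and the sample angles are assumptions made for illustration.

```python
import numpy as np

def direction_function(vm, theta, phi, d):
    """F(theta, phi) = |VM| * sin(theta) * cos(phi) / (2 d), with the
    Frobenius norm standing in for |VM|_H (an assumption)."""
    return np.linalg.norm(vm) * np.sin(theta) * np.cos(phi) / (2.0 * d)

def effective_dimension(thetas, phi, d):
    """Effective dimension in the spirit of formula (7):
    Deff = d * (2 pi / Q) * sum_i sin(theta_i) cos(phi) / theta_i."""
    q = len(thetas)
    return d * (2.0 * np.pi / q) * sum(np.sin(t) * np.cos(phi) / t for t in thetas)

rng = np.random.default_rng(4)
vm = rng.random((8, 8))                      # visual matrix VM
thetas = np.linspace(0.2, 1.2, 8)            # angles theta_1 .. theta_Q
phi, d = 0.4, 2.0                            # motion-direction angle and distance

print("F(theta_1, phi) =", direction_function(vm, thetas[0], phi, d))
print("Deff            =", effective_dimension(thetas, phi, d))
```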

Fig. 4 Embedded tracking algorithm architecture with visual frame target compression

Fig. 3 L-dimensional space tracking system for visual frame



$$
f(VF) =
\begin{cases}
\dfrac{|VM|_H}{D_{eff}}, & \theta_1 < \varphi < \theta_{D_{eff}} \\[2ex]
\dfrac{|VM|_H}{Q}, & \varphi = \theta_1, \cdots, \theta_Q \\[2ex]
\{D_{eff}\}_{i=1,\dots,|VM|_H}^{\max}, & \varphi > \theta_{D_{eff}} \;\|\; \varphi < \theta_1
\end{cases}
\tag{8}
$$

Here, f(VF) is the visual frame distribution density function, and {Deff}_{i=1,…,|VM|_H}^max represents the largest spatial dimension in the rank of the visual matrix VM.
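A direct transcription of the piecewise density in formula (8) is shown below. The values plugged in (VM, the angle thresholds, and Deff) are illustrative; only the three-case structure follows the formula.

```python
import numpy as np

def frame_density(vm, phi, thetas, d_eff):
    """Piecewise visual-frame distribution density, following the three
    cases of formula (8). |VM|_H is taken as the Frobenius norm here."""
    vm_norm = np.linalg.norm(vm)
    q = len(thetas)
    theta_1 = thetas[0]
    theta_deff = thetas[min(int(round(d_eff)), q) - 1]
    if theta_1 < phi < theta_deff:
        return vm_norm / d_eff
    if any(np.isclose(phi, t) for t in thetas):
        return vm_norm / q
    # phi > theta_Deff or phi < theta_1: largest spatial dimension case.
    return max(d_eff, vm_norm)

rng = np.random.default_rng(5)
vm = rng.random((6, 6))
thetas = np.linspace(0.2, 1.2, 6)
print("f(VF) =", frame_density(vm, phi=0.5, thetas=thetas, d_eff=3.0))
```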

After the visual matrix VM is compressed and reconstructed from the visual frames, the tracking signal is given by formula (9).

$$
vf_{D_{eff}} = VM \sin\theta \cos\varphi \sum_{i=1}^{|VM|_H} x_i
\tag{9}
$$

By choosing {Deff}_{i=1,…,|VM|_H}^min, the visual object compression method can obtain the mapping relationship between the visual frames and the moving object from the L-dimensional or Deff-dimensional space and realize target motion prediction. The specific steps of the visual target compression algorithm are as follows:

(1) Migration of the visual matrix oriented to the core visual frame: the state of the visual frame VFC is {θC, φC, Deff_C}, captured from the current moving target. The visual frames propagate along the direction of F(θC, φC), and the new state {θU, φU, Deff_U} of the visual frame is obtained after spreading in the Deff_C-dimensional space.

(2) The moving object, the current state of the visual frame, and the diffusion of the visual frame form a compression plane PCV: a compressed point set PT is formed in this plane. Any two points PTi and PTj in the plane define a visual line; the normal vector of PTi is NVi = sin θC cos φU ‖PTj‖, and the normal vector of any point PTj in the plane is NVj = sin θC cos φU ‖PTi‖.

(3) The included angle between the normal plane of NVi and the normal plane of NVj: the relation between the plane angle and the direction arc is

$$
\sin\gamma = NV_i \cdot NV_j\, \|PT\| \arctan\!\left( \dfrac{|\theta_C - \theta_U|}{|\varphi_C - \varphi_U|} \right)
$$

The vector mapping relation between the plane angle and the direction field of the moving object is shown in formula (10).

$$
\begin{cases}
M_\gamma = \begin{bmatrix} M_{PT_i} & M_{PT_j} \\ M_{VF_C} & M_{VF_U} \end{bmatrix} \\[2ex]
M_{(i,j)} = F(\sin\theta_C, \cos\varphi_U)\, \|PT_i\|\, \|PT_j\| \displaystyle\sum_{i=1}^{|VM|_H} x_i
\end{cases}
\tag{10}
$$

The mapping matrix Mγ is divided into four submatrices. Each submatrix M(i,j) is obtained by solving the direction function and compressing the visual frame points and the signal strength.

(4) The four submatrices of the moving object's multi-feature fusion mapping matrix are obtained by visual frame analysis, and the tracking matrix MT is obtained through target compression:

$$
M_T = \begin{bmatrix}
MLF_{(i,j)}\, fu_{comp} & SR_{FM} \cdot \mathrm{rank}\{MLF\} \tan\gamma \\[1ex]
M \cdot if_M\big(C_{def}, C_{lgf}, C_{siz}, C_{col}\big) & \dfrac{1}{N}\displaystyle\sum_{i=1}^{M} \sin(\gamma - \alpha_i)
\end{bmatrix}
$$

Fig. 5 Location error (%) versus number of visual frame samples, for the single feature and for multiple features with fusion

Table 1 Delay with 1000 visual frame samples

Normal plane radian | Location delay with single feature | Location delay with multiple features and fusion
30                  | 25.7 ms                            | 1.9 ms
50                  | 89.4 ms                            | 2.0 ms
110                 | 345.2 ms                           | 1.8 ms



(5) Mγ and MT are integrated through matrix operations to predict the latest spatial state of the moving object.

To sum up, the embedded tracking algorithm based on multi-feature fusion and localization with visual target compression is shown in Fig. 4.
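The five steps above can be read as a per-frame tracking loop: propagate the visual frame state, build the compression plane and its normal vectors, derive the plane angle and mapping matrix Mγ, assemble the tracking matrix MT, and combine them to predict the next state. The skeleton below mirrors that flow; every numeric rule inside it (how the state drifts, how the matrices are filled and combined) is a simplified assumption, since the paper specifies the structure rather than concrete update equations.

```python
import numpy as np

def propagate_state(state):
    """Step (1): spread the visual frame state {theta, phi, Deff} along F(theta, phi)."""
    theta_c, phi_c, deff_c = state
    # Illustrative propagation: a small drift along the direction function.
    drift = 0.05 * np.sin(theta_c) * np.cos(phi_c)
    return (theta_c + drift, phi_c + drift, deff_c)

def normal_vectors(theta_c, phi_u, pt_i, pt_j):
    """Step (2): NVi = sin(thetaC) cos(phiU) ||PTj||, NVj symmetrically."""
    nv_i = np.sin(theta_c) * np.cos(phi_u) * np.linalg.norm(pt_j)
    nv_j = np.sin(theta_c) * np.cos(phi_u) * np.linalg.norm(pt_i)
    return nv_i, nv_j

def plane_angle(nv_i, nv_j, pt_norm, cur, new):
    """Step (3): sin(gamma) from the normal vectors and the state difference."""
    val = nv_i * nv_j * pt_norm * np.arctan(
        abs(cur[0] - new[0]) / (abs(cur[1] - new[1]) + 1e-9))
    return np.arcsin(np.clip(val, -1.0, 1.0))

def track_step(state, pt_i, pt_j, mlf):
    """Steps (1)-(5): one embedded tracking update, returning the predicted state."""
    new_state = propagate_state(state)                                    # (1)
    nv_i, nv_j = normal_vectors(state[0], new_state[1], pt_i, pt_j)       # (2)
    gamma = plane_angle(nv_i, nv_j, np.linalg.norm(pt_i), state, new_state)  # (3)
    m_gamma = np.array([[nv_i, nv_j], [state[0], new_state[0]]])          # mapping matrix (sketch)
    m_t = np.array([[mlf.mean(), np.tan(gamma)],                          # (4) tracking matrix (sketch)
                    [mlf.max(), np.sin(gamma)]])
    prediction = m_gamma @ m_t                                            # (5) integrate Mgamma and MT
    return new_state, prediction

state = (0.6, 0.3, 3.0)                 # {thetaC, phiC, Deff_C}
pt_i, pt_j = np.array([1.0, 0.5]), np.array([0.8, 1.2])
mlf = np.random.default_rng(6).random((4, 6))
new_state, pred = track_step(state, pt_i, pt_j, mlf)
print("new state:", new_state, "\nprediction:\n", pred)
```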

4 Performance analysis of the embedded tracking algorithm

We test the proposed algorithm in a library environment and a playground environment. The server of the experimental platform has an Intel Core i5 processor, 8 GB of physical memory, and 4 GB of virtual memory; the algorithm is implemented in the C language. In the library environment, digital cameras were used to obtain the ground truth of the pedestrian trajectory; the measurement range is 120 m2. During the experiment, each time a pedestrian moves, a visual frame image is captured and uploaded to the server through Wi-Fi. In the playground, the experimental scene is larger, with a maximum circumference of 700 m. An outdoor electric car combined with HD industrial cameras is used to capture the pedestrian trajectory.

To analyze the improvement in positioning accuracy of the multi-feature fusion and location algorithm for moving objects, we measured the working delay of the single-feature and multi-feature fusion location algorithms when the number of visual frames is 100, 200, and 300, with the number of random samplings set to 1000. Table 1 shows the positioning delay, including feature extraction and location analysis, for different normal-plane radians. It is found that the larger the plane radian, the greater the positioning delay; the visual object compression algorithm can effectively reduce the spatial dimension and therefore the plane radian. The single-feature location delay is several times that of multi-feature fusion localization; in the worst case, it is roughly 150 times larger. This is because the multi-feature fusion location algorithm uses multi-feature extraction and fusion of features across dimensions to estimate the moving trajectory of the target object, which reduces both the processing time and the computational complexity, so the acceleration effect on positioning is very pronounced.

Fig. 7 Reconstruction track

Fig. 6 Data space utilization (%) versus number of visual frame samples, for the single feature and for multiple features with fusion



At the same time, Fig. 5 shows the positioning accuracy of the single-feature and multi-feature fusion location algorithms for different numbers of samples. It is found that, as the number of visual frame samples increases, the error of the single-feature location algorithm grows markedly. When the sample number is twice the speed of the moving object, the position error reaches 25 %, and single-feature location can no longer capture the visual frames and track the object normally. In contrast, the multi-feature fusion algorithm always maintains high precision, which it owes to feature fusion.

Figure 6 shows the storage space used on the server by the tracking algorithm. It is found that the proposed algorithm has better space utilization and can achieve accurate target localization using a small amount of data.

In the playground, the pedestrians walk at a speed of 10 m/s. The experiment starts at 9 a.m.; the weather is cloudy and sunlight is insufficient. The embedded tracking algorithm based on multi-feature fusion and visual target compression is denoted ET-MVC, and the target object feature tracking algorithm is denoted T-OTF. The pedestrian trajectory reconstruction is shown in Fig. 7. The blue track represents the T-OTF tracking results, the black track represents the actual trajectory, and the red track represents the ET-MVC location tracking results. The blue track deviates the most from the actual black track. The main source of error is that the space dimension is too large: a high spatial dimension leads to excessive discretization of the visual frames, so the reconstruction error becomes relatively large. ET-MVC eliminates these errors significantly and effectively restrains the reconstruction error of the motion trajectory. As seen from the figure, the optimized red trajectory is markedly improved and clearly better than the blue trajectory.

Figures 8 and 9 show the moving target tracking results in the actual experimental scene. As shown in Figs. 8 and 9, the proposed algorithm is able to track the target accurately almost throughout the entire tracking process and can eliminate the effect of occlusion and interference in the scene, whereas the T-OTF algorithm quickly loses the tracking target.

5 Conclusions

Positioning accuracy, real-time performance, and robustness are important performance indexes of moving target location and tracking. In order to improve the positioning accuracy and the quality of the tracking service, we propose an embedded tracking algorithm based on multi-feature crowd fusion and visual object compression. First, the algorithm analyzes the dynamic motion of the target, the moving track, and the structural parameters of the image frames; by capturing the state characteristics of different targets, the multiple-feature vector is formed. Second, the migration of the visual matrix oriented to the core visual frame is obtained; the moving object, the current state of the visual frame, and the diffusion of the visual frame form a compression plane, and embedded tracking is implemented.

Fig. 9 Tracking results of moving object with ET-MVC algorithm

Fig. 8 Tracking results of moving object with T-OTF algorithm



The experimental results show that the multi-feature fusion can effectively reduce the positioning error and shorten the delay. Compared with the target object tracking algorithm, the embedded tracking algorithm based on visual object compression has a significant advantage in the reconstruction of the moving target.

Acknowledgements
This work is supported in part by the National Youth Fund Project No. 61300228.

Competing interests
The authors declare that they have no competing interests.

Received: 9 June 2016 Accepted: 7 September 2016

References
1. PH Tseng, KT Feng, YC Lin et al., Wireless location tracking algorithms for environments with insufficient signal sources. IEEE Transactions on Mobile Computing 8(12), 1676–1689 (2009)
2. CT Chiang, PH Tseng, KT Feng, Hybrid unified Kalman tracking algorithms for heterogeneous wireless location systems. IEEE Transactions on Vehicular Technology 61(61), 702–715 (2012)
3. YC Lai, JW Lin, YH Yeh et al., A tracking system using location prediction and dynamic threshold for minimizing SMS delivery. Journal of Communications & Networks 15(15), 54–60 (2013)
4. YS Chiou, F Tsai, A reduced-complexity data-fusion algorithm using belief propagation for location tracking in heterogeneous observations. IEEE Transactions on Cybernetics 44(6), 922–935 (2014)
5. J Trogh, D Plets, A Thielens et al., Enhanced indoor location tracking through body shadowing compensation. IEEE Sensors Journal 16(7), 2105–2114 (2016)
6. P-C Liang, P Krause, Smartphone-based real-time indoor location tracking with 1-m precision. IEEE Journal of Biomedical and Health Informatics 20(3), 756–762 (2016)
7. R Diamant, LM Wolff, L Lampe, Location tracking of ocean-current-related underwater drifting nodes using Doppler shift measurements. IEEE Journal of Oceanic Engineering 40(4), 887–902 (2015)
8. B Yang, Y Lei, B Yan, Distributed multi-human location algorithm using naive Bayes classifier for a binary pyroelectric infrared sensor tracking system. IEEE Sensors Journal 16(1), 1–1 (2015)
9. SS Hwang, WJ Kim, J Yoo et al., Visual hull-based geometric data compression of a 3-D object. IEEE Transactions on Circuits & Systems for Video Technology 25(7), 1151–1160 (2015)
10. R Ji, H Yao, W Liu et al., Task-dependent visual-codebook compression. IEEE Transactions on Image Processing 21(4), 2282–2293 (2012)
11. R Ji, LY Duan, J Chen et al., Mining compact bag-of-patterns for low bit rate mobile visual search. IEEE Transactions on Image Processing 23(7), 3099–3113 (2014)
12. T Huang, S Dong, Y Tian, Representing visual objects in HEVC coding loop. IEEE Journal on Emerging & Selected Topics in Circuits & Systems 4(1), 5–16 (2014)

